http://arxiv.org/abs/2311.15880v2
{ "authors": [ "Alexander M. Leshansky", "Itzhak Fouxon", "Boris Y. Rubinstein" ], "categories": [ "cond-mat.soft", "physics.flu-dyn" ], "primary_category": "cond-mat.soft", "published": "20231127145151", "title": "Quartz Crystal Microbalance frequency response to finite-size adsorbents in liquids" }
[email protected] — Faculty of Mathematics and Computer Science, Babeș-Bolyai University, Cluj-Napoca, 400084, Romania; Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy, Cluj-Napoca, 400110, Romania.

Applying techniques originally developed for systems lacking a variational structure, we establish conditions for the existence of solutions in systems that do possess such a structure but whose energy functional is unbounded both from above and from below. We show that, in general, our conditions differ from those of the classical mountain pass approach of Ambrosetti-Rabinowitz when dealing with systems of this type. Our theory is put into practice in the context of a coupled system of Stokes equations with reaction terms, for which we establish sufficient conditions for the existence of a solution. The systems under study are intermediate between gradient-type systems and Hamiltonian systems.

Keywords: Variational method, Stokes system, Mountain pass geometry

§ INTRODUCTION AND PRELIMINARIES

Many real-world processes can be represented by equations or systems of equations. However, solving these problems can be quite challenging. Over time, various techniques have been developed, with the critical point technique being one of the most significant. This technique is important because it reduces the task of solving an equation to showing that a specific functional has a critical point. In the recent papers <cit.>, systems of the form
E_11(u,v)=0
E_22(u,v)=0,
were considered, where E_1, E_2 are certain C^1 functionals. Such systems lack a variational structure as a whole but possess one individually on each component. In this paper we consider systems of the form
E_u(u,v)=0
E_v(u,v)=0,
where E is a C^1 functional. The literature offers many tools to establish the existence of critical points of E. However, if E is unbounded from above and below, or is otherwise not well behaved, such methods may fail. Our aim is to use the techniques developed in <cit.> to prove the existence of critical points of E, using partial functionals E_1, E_2 which need not be directly related to E. The novelty of this paper consists in obtaining conditions different from the ones typically used in the classical mountain pass approach of Ambrosetti-Rabinowitz for the existence of a solution of the system (<ref>). Our theory is applied to an abstract system in H_0^1(Ω) as well as to a system of Stokes equations. The latter arises in the study of fluid dynamics and is obtained by neglecting the nonlinear term in the Navier-Stokes equations; it is a linear, Agmon-Douglis-Nirenberg elliptic system. We refer to <cit.> for further details. In the following section, we review some results from functional analysis, matrices convergent to zero, and the Stokes system, which will be used in the subsequent material.

§.§ Ekeland variational principle

The proof of our main result (<Ref>) is essentially based on the weak form of Ekeland's variational principle (see, e.g., <cit.>). Let (X,d) be a complete metric space and let Φ: X→ℝ∪{+∞} be a lower semicontinuous functional which is bounded from below. Then, given any ε>0, there exists u_ε∈ X such that
Φ(u_ε)≤inf_X Φ+ε
and
Φ(u_ε)≤Φ(u)+ε d(u,u_ε), for all u∈ X.

§.§ Abstract linear operator

Let Ω⊂ℝ^n, n≥ 3, be a bounded open set.
Let A H_0^1(Ω)→ H^-1(Ω) be a continuous and strongly monotone operator, that is, there exists θ>0 such that ⟨ Au,u ⟩≥θ |u|_H_0^1 ^2,for all u∈ H_0^1(Ω).Here, ⟨·,·⟩ stand for the dual pairing between H^-1(Ω) and H_0^1(Ω). We observe that for every h∈ H^-1(Ω), Riesz representation theorem guarantees that there exists a unique element u_h∈ H_0^1(Ω) such thatA u_h=h,i.e., A is a bijective, where A^-1h=u_h.If h∈ L^2(Ω), we have that⟨ A u_h, v⟩=(h,v)_L^2,for all v∈ H_0^1(Ω),and thus ( u_h, v)_H_0^1=(h,v)_L^2.If we identify H^-1(Ω) with H_0^1(Ω), the operator L induces in H_0^1(Ω) the scalar product (·,·)_A and the norm |·|_A, given by( u, v)_A:=⟨A u, v ⟩and|u|_A:=√(⟨ Au, u ⟩), for all u,v ∈ H_0^1(Ω).From the strong monotony of A given in (<ref>), we immediately deduce the followingPoincaréinequality |u|_L^2≤√(θ) |u|_A,for all u∈ H_0^1(Ω).§.§ Matrices convergent to zero A square matrix M∈ℳ_n× n(ℝ+) is considered to be "convergent to zero" if its power M^k tends to the zero matrix as k→∞. Other equivalent characterizations include the requirement that the spectral radius of the matrix is less than one, or if the inverse of I-A (where I is the identity matrix) is both invertible and has nonnegative entries (see, e.g., <cit.>).The following result, concerning matrices convergent to zero, holds true: Let ( x_k,p) _k≥ 1, ( y_k,p) _k≥ 1 be two sequences of vectors in ℝ _+^n (column vectors) depending on a parameter p, such thatx_k,p≤ A x_k-1,p+y_k,pfor all k and p, where A∈𝕄_n × n(ℝ_+) is a matrix convergent to zero. If the sequence ( x_k,p) _k≥ 1 is bounded uniformly with respect to p and y_k,p→ 0_n as k→∞ uniformly with respect to p, then x_k,p→ 0_n as k→∞ uniformly with respect to p.§.§ Stationary Stokes-type equation Let Ω^'⊂ℝ^N (N≤ 3) be an open and bounded domain and letf∈ H^-1(Ω^')^N. We recall some results related to theStokes-type problem (see, e.g., <cit.>),-Δv+μv+∇ p=f in Ω^' divv=0in Ω^' v=0on Ω^'.A solution is sought in the Sobolev spaceV={v∈ H_0^1(Ω^')^N:divv=0 }.We endow V with the scalar product(v,w)_V=∫_Ω∇v·∇w+∫_Ωμv·wand the corresponding norm |v|_V=√((v,v)_V).Onehas the Poincare's inequality (see, e.g., <cit.>),|v|_(L^2)^N≤1/λ_1+μ|v|_V,for all v∈ V,where λ_1 is the first eigenvalue of the Dirichlet problem -Δv=λv in Ω^' and v=0 on ∂Ω^'.For (v,p)∈ H_0^1(Ω^')^N× L^2(Ω), the variational formulation of the system (<ref>) is:(v,w)_(H_0^1)^N+μ(v,w)_(L^2)^N- (p,divw)_L^2=⟨f,w⟩,for all w∈ H_0^1(Ω^')^N If v∈ V,the above relation becomes,(v,w)_V=⟨f,w⟩,for all w∈ V. Here, ⟨·, ·⟩ stands for the dual pairing between V^' and V. If we find a solution v∈ V to (<ref>), the pressure p∈ L^2(Ω^') is guaranteed by De Rham's Theorem (see, e.g., <cit.>).From Riesz's representation theorem, there exists a unique weak solution v_f∈ V of the problem (<ref>), that is, there is only one v_f∈ V such that( v_f,w)_V=⟨f,w⟩, for all w∈ V.Moreover, one has the inequality,|v_f|_V^2=(f,v_f)≤ |f|_V^'| v_f|_V,i.e., |v_f|_V≤ |f|_V^'.Thus, we may define the solution operator S V^'→ V, S(f)=v_f. 
Clearly, it is an isomorphism between V^' and V.§ MAIN RESULTS Let H be a Hilbert space together with the scalar product (·, ·)_H and the induced norm |·|_H.We consider the system of the typeu=N_u(u,v) -v=N_v(u,v),where N H× H →ℝ is a continuous operator.The structure of the system (<ref>) that we have considered situates it as an intermediary between gradient-type systems and Hamiltonian systems.Clearly, it admits a variational structure given by the functionalE(u,v)=12|u|_H^2-12 |v|_H^2-N(u,v).However, in general, this functional is unbounded from both above and below. To the system (<ref>), we associate the partial functionals E_1, E_2H× H →ℝ given by E_1(u,v)=12|u|_H^2-N(u,v), andE_2(u,v)=-12|v|_H^2-N(u,v).One easily sees that both E_1 and E_2 are Fréchet differentiable andmoreover,E_11(u,v):=(E_1)_u=u-N_u(u,v),E_22(u,v):=(E_2)_v=-v-N_v(u,v).We say that an point (u^∗,v^∗)∈ H × H is a partial critical point for the pair of functionals (E_1, E_2) if it satisfiesE_11(u^∗,v^∗)=0andE_22(u^∗,v^∗)=0.Obviously, any partial critical point for the pair of functionals (E_1, E_2) is a solution to the system (<ref>). The subsequent result establishes a relation between the critical points of the functional E and the partial critical points of the pair of functionals (E_1,E_2). A pair (u^∗, v^∗)∈ H × H is a critical point of E if and only if it is a partial critical point for the pair of functionals (E_1, E_2).The result is immediate if we observe that for any u, v ∈ H, the following relations hold:E_u(u,v)=u-N_u(u,v)=E_11(u,v)andE_v(u,v)=-v-N_v(u,v)=E_22(u,v). §.§ Existence of a partial critical point Now we are prepared to present our main result, which essentially involves establishing sufficient conditions to ensure the existence of at least one partial critical point for the pair of functionals (E_1, E_2).Under the previous established setting, we additionally assume:(h1) One has the growth conditions-α |v|_H^2-C≤ N(u,v)≤α|u|_H^2+C,for all u,v∈ H,where 0≤α,α<1/2 such that α+α<1/2 and C>0. (h2) There are nonegative real numbers m_ij(i,j ∈{1,2}) such that the following monotony conditions hold true:( N_u(u,v)-N_u(u,v), u-u) ≤ m_11 |u-u|^2_H+m_12 |u-u|_H |v-v|_H,( N_v(u,v)-N_v(u,v), v-v) ≥ -m_22 |v-v|^2_H-m_21 |u-u|_H |v-v|_H, for all u,v,u,v∈ H. (h3) The matrix M=(m_ij)_1≤ i,j≤ 2 is convergent to zero.Then, there exists a partial critical point (u^∗,v^∗)∈ H× H for the pair of functionals (E_1,E_2).For better comprehension, we structure our proof into several steps.Step 1:Boundedness from below and upper of the functionals E_1, E_2. Let u,v∈ H. The growth conditions(<ref>) yieldsE_1(u,v) =12|u|_H^2-N(u,v)≥(12-α)|u|^2_H-C ≥ -C,andE_2(u,v) =-12|v|_H^2-N(u,v)≤ -(12-α)|v|^2_H+C ≤ C. Step 2:Construction of an approximation sequence (u_k,v_k). We employ a method similar to the one described in <cit.>. For an v_0 arbitrarily chosen,using Ekeland's variational principle within a recursive procedure, we generate a sequence (u_k,v_k)∈ H × H such thatE_1(u_k,v_k-1) ≤inf_HE_1(· ,v_k-1)+1k, E_2(u_k,v_k) ≥sup_HE_2(u_k,· )-1k, | E_11(u_k,v_k-1)| _H≤1k,| E_22(u_k,v_k)| _H≤1k.Step 3:Boundedness of the sequence u_k.From (<ref>) and the second relation from (<ref>),we infer 12|u_k|^2_H ≤ N( u_k,v_k-1)+ inf _HE_1(·,v_k-1)+1k≤N( u_k,v_k-1)+ E_1(0,v_k-1)+1≤α|u_k|^2_H +α |v_k-1|^2_H+2C+1.Hence,|u_k|_H^2≤α/12-α|v_k-1|_H^2+C_1,for some constant C_1. 
Under similar computations, from the second relation of (<ref>) we obtain12|v_k|^2_H ≤α|u_k|^2_H +α |v_k|^2_H+2C+1,which yields|v_k|_H^2≤α/12-α|u_k|_H^2+C_2,for some constant C_2. Now, we combine inequalities (<ref>) and (<ref>) to deduce|u_k|_H^2≤μ |u_k-1|^2_H+C_3,whereμ=α α/(12-α)(12-α).From (h1), we easily see that μ<1, which guarantees that u_k is bounded.Step 4:Convergence of the sequences u_k and v_k. Let p>0.From the monotony conditions (h2), we have|u_k+p-u_k|_H^2 =( u_k+p-N_u(u_k+p,v_k+p-1) -u_k+N_u(u_k,v_k-1),u_k+p-u_k)_H +(N_u(u_k+p,v_k+p-1)- N_u(u_k,v_k-1),u_k+p-u_k)_H ≤(1/k+p +1/k) |u_k+p-u_k|_H+m_11|u_k+p-u_k|_H^2 +m_11|u_k+p-u_k|_H |v_k+p-1-v_k-1|_H.Thus,|u_k+p-u_k|_H≤2k+m_11|u_k+p-u_k|_H+m_11|v_k+p-1-v_k-1|_H.For the sequence (v_k), we similarly obtain|v_k+p-v_k|_H^2 =(v_k+p-v_k, -v_k-N_v(u_k,v_k)+v_k+p+N_v(u_k+p,v_k+p) )_H -(v_k+p-v_k, N_v(u_k+p,v_k+p)-N_v(u_k,v_k) )_H≤|v_k+p-v_k|_H(1/k+p +1/k)+m_11 |v_k+p-v_k|_H^2+m_11 |v_k+p-v_k|_H|u_k+p-u_k|_H^2.Hence, |v_k+p-v_k|_H≤2/k+m_11| v_k+p-v_k|_H+m_11|u_k+p-u_k|_H.If we write therelations (<ref>) and (<ref>) in matrix form, we infer[ |u_k+p-u_k|_H; |v_k+p-v_k|_H ]≤[ m_110; m_11 m_11 ][ |u_k+p-u_k|_H; |v_k+p-v_k|_H ]+[0 m_11;00 ][ |u_k+p-1-u_k-1|_H; |v_k+p-1-v_k-1|_H ]+[ 2/k; 0 ]. Since u_k is bounded and the matrix M converges to zero, we can conclude from <Ref> that both u_k and v_k converge. Let us denote their limits as u^∗ and v^∗.Step 5: Passing to limit. Since u_k→ u^∗ and v_k→ v^∗, the conclusion follows immediately if we pass to limit in(<ref>).The partial critical point obtained in <Ref> has the additionalproperty of being a Nash equilibrium for the functionals E_1 and -E_2 (see, e.g., <cit.> for further details on Nash equilibrium). This relationship is a result of taking the limit in (<ref>), which givesE_1(u^∗,v^∗)=inf_H E_1(·,v^∗), E_2(u^∗,v^∗)=sup_H E_2(u^∗,·). §.§ Relation with the classical mountain pass approach The well-known approach to obtain critical points for functionals that lack upper or lower bounds is to employ the Ambrosetti-Rabinowitz results, which guarantee the existence of mountain pass points (as seen in <cit.>). The typical conditions imposed on the functional E are: (I1) There exists τ>0 such thatE(u,v)≥α >E(0,0),for all |(u,v)|_H × H=τ. (I2) There exists e∈ H× H with |e|>τ such thatE(e)<inf_|(u,v)|=τ E(u,v) . (I3) The functional E has the Palais-Smale property, i.e., if e_k is a sequence such thatE(e_k)is boundedand∇ E(e_k)→ 0,then e_k admits a convergent subsequence.In the following, we will explore how these conditions align with our hypotheses (h1)-(h3).Condition (I1):Let (u,v)∈ H× H such that |(u,v)|_H × H=τ, i.e.,|u|_H+|v|_H=τ. We compute, E(u,v)=12 |u|_H^2-12|v|_H^2- N(u,v)= 12(|u|_H+|v|_H )(|u|_H-|v|_H ) -N(u,v) .Thus, for the relation E(u,v)≥α to hold, we needN(u,v)<τ2(|u|_H-|v|_H )-α,for all |(u,v)|_H × H=τ.On the other hand, E(0,0)<αimplies that-N(0,0)< α,i.e., -α< N(0,0).Hence, relation (<ref>) becomesN(u,v)<-τ2(|u|_H-|v|_H )+N(0,0),for all |(u,v)|_H × H=τ,that isN(u,v)-N(0,0)< τ2(|u|_H-|v|_H ),for all |(u,v)|_H × H=τ. In our main result such a condition is not required, which enables us to encompass a broader range of situations in which the system (<ref>) is solvable. It is clear that there might be cases where our result is not applicable, but the Ambrosetti-Rabinowitz theorem is, and vice versa. Condition (I2). 
This condition is satisfied; for instance, one can take (0, γ e), where γ is a sufficiently large real number, and e is a fixed element from H distinct from the origin of the space. Indeed,E(0,γ e)=-γ^22 |e|^2-N(0,γ e)≤ -(12-α) γ^2 |e|^2+C → -∞,as γ→∞.Condition (I3)Let e_k=( (e_1)_k, (e_2)_k) be a sequence such thatE(e_k)is uniformly bounded, and ∇ E(e_k)→ 0, i.e.,(e_1)_k-N_u(e_k)→ 0,- (e_2)_k-N_v(e_k)→ 0.Let k_0 large enough such that |(e_1)_k-N_u(e_k)|≤ 1, for all k≥ k_0. Consequently, when taking a scalar product in (<ref>) with (e_1)_k for k≥ k_0, we obtain ((e_1)_k-N_u(e_k) ,(e_1)_k )_H ≤ | (e_1)_k|_H .From the monotony conditions (h2) we deduce((e_1)_k-N_u(e_k) ,(e_1)_k )_H =((e_1)_k, (e_2)_k )_H-(N_u(e_k), (e_1)_k )_H = | (e_1)_k|_H^2-(N_u(e_k)-N_u(0), (e_1)_k )_H-(N_u(0), (e_1)_k )_H ≥(1-m_11)|(e_1)_k|^2_H-m_11|(e_1)_k|_H|(e_2)_k|_H-|N_u(0,0)|_H|(e_1)_k|_H.Hence, (1-m_11)|(e_1)_k|^2_H-m_11|(e_1)_k|_H|(e_2)_k|_H≤( |N_u(0,0)|_H+1)|(e_1)_k|_H.Following a similar reasoning, from (<ref>) we have(1-m_11)|(e_2)_k|^2_H-m_11|(e_1)_k|_H|(e_2)_k|_H≤( |N_v(0,0)|_H+1)|(e_2)_k|_H.Therefore,the above two relations (<ref>), (<ref>) yieldsβ |(e_2)_k|_H ≤ D, where D is some constant andβ=1-m_11- m_11m_11/1-m_11. Given that the matrix M is convergent to zero, we immediately deduce thatβ= 1-m_11- m_11m_11/1-m_11>0,which guarantees the boundedness of (e_2)_k. From this, is clear that (e_1)_k is also bounded. The boundedness of the sequence e_k guarantees the existence of aweakly convergent subsequence. However, establishing the strong convergence of this subsequence solely under hypotheses (h1)-(h3), remains an open question. Thus, we can formulate the following problem:Given only the assumptions (h1)-(h3), does the functional E satisfy the Palais-Smale condition? Nonetheless, under certain additional assumptions, this result is valid.Assume that the operator K:=∇ N=(N_u, N_v) is compact. Then the functional E satisfies the Palais-Smale condition.Note that ∇ E=I-K. Given the compactness of K and the boundedness of e_k, it follows that there exists asubsequence, also denoted as e_k, such that K(e_k) converges to a point e in H × H. Thus,|e_k-e|-|e-K(e_k)|≤ |e_k-K(e_k)|=|∇ E(e_k)|.Now, the conclusion is immediate since |∇ E(e_k)|→ 0 and |K(e_k)-e|→ 0.§ APPLICATIONSIn this section, we present two application for the results obtained in <Ref>.§.§ Abstract system on H_0^1(Ω)Let us consider the Dirichlet problemAu=F_u(u,v) -Av=F_y(u,v) u|_∂Ω=v|_∂Ω=0,where Ω⊂ℝ^n (n≥ 3) is a bounded open set, Fℝ^2→ℝ is a C^1 functional and the operator A is defined in <Ref>.Here, F_u and F_v stand for the partial derivatives of F with respect to the first and second component, respectively. We use (·, ·) and |·|to denote the scalar product and the corresponding normin ℝ^2.The Hilbert space H is considered to be the Sobolev space H_0^1(Ω) equipped with the scalar product(·,·)_A and the corresponding norm|·|_A. 
Clearly, the system (<ref>) admits a variational given by the energy functional E H_0^1(Ω)× H_0^1(Ω)→ℝ, E(u,v)=12|u|_A^2-12|v|^2_A-∫_Ω F(u,v).The partial functionals E_1, E_2 H_0^1(Ω)× H_0^1(Ω)→ℝ associated to the system (<ref>) are given by E_1(u,v)=12|u|_A^2-∫_Ω F(u,v) , E_2(u,v)=-12|v|_A^2-∫_Ω F(u,v).If we denotef_1(u,v)=F_u(u,v ) f_2(u,v= F_v(u,v),the identification of H^-1(Ω) with H_0^1(Ω) via A^-1,yields to the representation∇ E(u,v)=( u- A^-1 f_1(u,v), v-A^-1f_2(u,v))=(E_11(u,v), E_22(u,v)),where E_11, E_22 stand for the partial Fréchet derivatives of E_1 and E_2 with respect to the first and second component, respectively. Consequently, the operator N is given byN(u,v)=∫_Ω F(u,v)and its derivatives are theNemytskii’s operatorsN_u(u,v)=A^-1 f_1(u,v)andN_v(u,v)=A^-1 f_2(u,v).On the potential F, we assume the following conditions: (H1) There exist real numbers 0 ≤τ_1, τ_2 ≤1/4 θand C>0, such that the following conditions hold -τ_1|x|^2-C≤ F(x,y) ≤τ_2 |y|^2 +C,for all x,y∈ℝ^2. Related to the gradient of F, let us assume: (H2) There are nonegative real numbers m_ij such that,for all x,y∈ℝ^2, one hasthe monotony conditions:( f_1(x,y)-f_1(x,y),x-x) ≤m_11|x-x|^2+ m_12|x-x||y-y|and( f_2(x,y)-f_2(x,y),x-x) ≥ -m_22|y-y|^2- m_21|x-x||y-y|. Finally,the constants specified in (H2) are such that: (H3) The matrixM:=θ[ m_11 m_12; m_21 m_22 ] is convergent to zero. In the subsequent, we prove that conditions (H1)-(H3) are sufficient to ensure the existence of a partial critical point for the pair of functionals (E_1, E_2). Assume (H1)-(H3) hold true. Then, there exists a pair of points (u^∗, v^∗)∈ H_0^1(Ω) × H_0^1(Ω) such that it is a critical point for the functional E.Furthermore, it has the additional property thatE_1(u^∗, v^∗)=inf_H_0^1(Ω) E_1(·, v^∗),E_2(u^∗, v^∗)=sup_H_0^1(Ω) E_2(u^∗, ·).We verify that all conditions from <Ref> are satisfied.Check of the condition (h1).Let u,v ∈ H_0^1(Ω). Then, for some constant C_1>0, using the Poincaré inequality (<ref>), we deduceN(u,v)=∫_Ω F(u,v)≤τ_2 |u|_L^2^2+C_1≤τ_2 θ |u|_A^2+C_1,andN(u,v)=∫_Ω F(u,v) ≥-τ_1 |v|_L^2^2-C_1≥ -θτ_1 |u|_A^2-C_1.The conclusion is immediate since θ τ_1<1/4 and θ τ_2 <1/4.Check of the condition (h2). For any u,v,u, v∈ H_0^1(Ω), one has( N_u(u,v)-N_u(u, v), u-u)_A =( A^-1 f_1(u,v)-f_1(u, v), u-u)_A = ( f_1(u,v)-f_1(u, v), u-u)_L^2≤m_11|u-u|_L^2^2+m_12|u-u|_L^2 |v-v|_L^2From the Poincaré inequality (<ref>), we further obtain( N_u(u,v)-N_u(u, v), u-u)_A≤θ m_11|u-u|_A^2+θ m_12|u-u|_A |v-v|_A.Similar estimates are obtained for N_v,( N_v(u,v)-N_v(u, v), u-u)_A = ( f_2(u,v)-f_2(u, v), u-u)_L^2≥ -m_22|v-v|_L^2^2-m_21|u-u|_L^2|v-v|_L^2≥ -θ m_22|v-v|_A^2-θ m_21|u-u|_A |v-v|_A.Consequently, condition (h2) is satisfied with m_ij=θ m_ij, (i,j={1,2}).Check of the condition (h3). This condition is immediate from (H3). Thus, all hypothesis of <Ref> are satisfied and consequently, there exists a partial critical point (u^∗, v^∗) for the pair of functionals (E_1, E_2) such thatE_1(u^∗, v^∗)=inf_H_0^1(Ω) E_1(·, v^∗),E_2(u^∗, v^∗)=sup_H_0^1(Ω) E_2(u^∗, ·).Moreover, from <Ref>, the pair (u^∗, v^∗) is a critical point for the functional E. §.§ Stokes-type coupled system We consider the Stokes-type coupled system -Δu_1+μu_1+∇ p_1= F_u_1(u_1,u_2)in Ω^'-Δu_2+μu_2+∇ p_2=-F_u_2(u_1,u_2)in Ω^' divu_i=0 in Ω^' u_i=0 (i=1,2) on ∂Ω^',where μ>0 and Fℝ^2N→ℝis a C^1 functional. Here, F_u_1,F_u_2 represent for the partial derivatives of F with respect to the first and second component, respectively. 
Our problem (<ref>) is equivalent to the fixed point equation
u_1=S^-1 F_u_1
u_2=S^-1 F_u_2,
where (u_1,u_2)∈ V× V. Now, we can apply <Ref>, where H=V and N(u_1,u_2)=∫_Ω^' F(u_1,u_2). The verification of conditions (h1)-(h3) follows a process similar to the previous application. This is done under the assumption that F satisfies (H1)-(H3), where by (·,·) and |·| we understand the usual scalar product and norm in ℝ^N, while θ is replaced by 1/(λ_1+μ). Therefore, <Ref> ensures the existence of a pair (u_1^∗, u_2^∗)∈ V× V, which, according to De Rham's Lemma, further guarantees the existence of pressures (p_1,p_2)∈ L^2(Ω^')× L^2(Ω^') such that ((u_1^∗,p_1),(u_2^∗,p_2))∈ (V× L^2(Ω^'))^2 solves the system (<ref>).

§ ACKNOWLEDGMENTS

This work was supported by the project "Nonlinear Studies of Stratified Oceanic and Atmospheric Flows" funded by the European Union – NextGenerationEU and the Romanian Government, under the National Recovery and Resilience Plan for Romania, contract no. 760040/23.05.2023, cod PNRR-C9-I8-CF 185/22.11.2022, through the Romanian Ministry of Research, Innovation and Digitalization, within Component 9, Investment I8.
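Hypothesis (h3)/(H3) — that the matrix M=(m_ij) is convergent to zero — can be checked numerically for any concrete set of constants, since it is equivalent to the spectral radius of M being strictly smaller than one. Below is a minimal sketch in Python with NumPy; the values of m_ij are purely illustrative assumptions, not constants taken from the text.

```python
import numpy as np

def is_convergent_to_zero(M: np.ndarray) -> bool:
    """Check whether a nonnegative square matrix M satisfies M^k -> 0,
    i.e. whether its spectral radius is strictly smaller than 1."""
    spectral_radius = max(abs(np.linalg.eigvals(M)))
    return spectral_radius < 1.0

# Illustrative (hypothetical) constants m_ij, not taken from the paper.
M = np.array([[0.3, 0.2],
              [0.1, 0.4]])

print(is_convergent_to_zero(M))  # True: spectral radius = 0.5 < 1

# Equivalent characterization: I - M is invertible with a nonnegative inverse,
# since (I - M)^{-1} = sum_k M^k when M is convergent to zero.
print(np.all(np.linalg.inv(np.eye(2) - M) >= 0))  # True
```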
http://arxiv.org/abs/2311.15552v1
{ "authors": [ "Andrei Stan" ], "categories": [ "math.AP" ], "primary_category": "math.AP", "published": "20231127054039", "title": "Role of partial functionals in the study of variational systems" }
http://arxiv.org/abs/2311.15941v1
{ "authors": [ "Sicong Leng", "Yang Zhou", "Mohammed Haroon Dupty", "Wee Sun Lee", "Sam Conrad Joyce", "Wei Lu" ], "categories": [ "cs.CL", "cs.CV" ], "primary_category": "cs.CL", "published": "20231127154929", "title": "Tell2Design: A Dataset for Language-Guided Floor Plan Generation" }
Luca Cacciapuoti [email protected] European Southern Observatory, Karl-Schwarzschild-Straße 2, 85748 Garching bei Munchen, Germany Fakultat fúr Physik, Ludwig-Maximilians-Universität München, Scheinerstraße 1, 81679 München, Germany INAF, Osservatorio Astrofisico di Arcetri, Largo E. Fermi 5, 50125, Firenze, Italy Dipartimento di Fisica e Astronomia "Augusto Righi" Viale Berti Pichat 6/2, Bologna INAF, Osservatorio Astrofisico di Arcetri, Largo E. Fermi 5, 50125, Firenze, Italy INAF, Osservatorio Astrofisico di Arcetri, Largo E. Fermi 5, 50125, Firenze, Italy INAF, Osservatorio Astrofisico di Arcetri, Largo E. Fermi 5, 50125, Firenze, Italy Univ. Grenoble Alpes, CNRS, IPAG, 38000 Grenoble, France Université Paris-Saclay, Université Paris Cité, CEA, CNRS, AIM, 91191, Gif-sur-Yvette, France European Southern Observatory, Karl-Schwarzschild-Straße 2, 85748 Garching bei Munchen, Germany INAF, Osservatorio Astrofisico di Arcetri, Largo E. Fermi 5, 50125, Firenze, Italy Université Paris-Saclay, Université Paris Cité, CEA, CNRS, AIM, 91191, Gif-sur-Yvette, France Université Paris-Saclay, Université Paris Cité, CEA, CNRS, AIM, 91191, Gif-sur-Yvette, France Universität Heidelberg, Zentrum für Astronomie, Institut für Theoretische Astrophysik, Albert-Ueberle-Straße 2, 69120 Heidelberg, Germany Universität Heidelberg, Interdisziplinäres Zentrum für Wissenschaftliches Rechnen, INF 205, D-69120 Heidelberg, Germany INAF-Istituto di Astrofisica e Planetologia Spaziali, Via del Fosso del Cavaliere 100, I-00133, Rome, ItalyLow dust opacity spectral indices (β < 1) measured in the inner envelopes of class 0/I young stellar objects (age ∼ 10^4-5 yr) have been interpreted as the presence of (sub-)millimetre dust grains in these environments. The density conditions and the lifetimes of collapsing envelopes have proven unfavorable for the growth of solids up to millimetre sizes. As an alternative, magneto-hydrodynamical simulations suggest that protostellar jets and outflows might lift grains from circumstellar discs and diffuse them in the envelope. We reframe available data for the CALYPSO sample of Class 0/I sources and show tentative evidence for an anti-correlation between the value of β_1-3mm measured in the inner envelope and the mass loss rate of their jets and outflows, supporting a connection between the two. We discuss the implications that dust transport from the disc to the inner envelope might have for several aspects of planet formation. Finally, we urge for more accurate measurements of both correlated quantities and extension of this work to larger samples, necessary to further test the transport scenario.§ INTRODUCTION The formation of terrestrial planets and of the rocky cores of giant planets is thought to happen in a core-accretion scenario, a process spanning ten orders of magnitude in size, where interstellar medium, sub-micron dust grains grow into km-sized objects. While dust growth has been long thought to take place exclusively in isolated, evolved protoplanetary discs revolving around class II young stellar objects (YSOs), recent observations indicate that dust growth up to millimeter sizes might start in collapsing protostellar envelopes, thus much earlier and further away from host stars than previously thought. Observationally, the slope α of the spectral energy distribution (SED) across (sub-)millimetric wavelengths is a means to interpret interstellar dust properties and its size. 
Specifically, if (i) the dust opacity scales as a power law (κ∝ν^β), (ii) the emission is optically thin, and (iii) the Rayleigh-Jeans (RJ) approximation holds, then β = α - 2 (, ). In turn, β depends on dust properties, and strongly on the maximum grain size of the dust population. For the interstellar medium, typically β∼ 1.7 <cit.>. In Class II objects, β < 1 suggests the presence of millimetre dust grains (e.g., , , ). Several authors measured low β values in the inner envelopes (a few 10^2 au) of Class 0/I sources (, , ). Although β also depends on grain composition and porosity, some of the observed values are too low (β<0.5) to be explained without considering 100 μm - 1 mm grains (e.g., , ). However, simulations have so far predicted that dust coagulation would be ineffective at the low densities (n∼ 10^5-7 cm^-3) and short timescales (a few 10^5 yr) that characterize these environments (, , ). It will be crucial for future simulations to test the effects of generally disregarded processes, such as the dust back-reaction on the turbulence through gas-dust friction and the dust-magnetic-field interaction <cit.>, to check whether growth remains a viable scenario. Alternative or concomitant processes must be considered that could contribute to explaining the observed low β. For example, <cit.> first presented a simple analytical model to argue that millimetre dust from the disc could be entrained by protostellar outflows and transported to the envelope. <cit.> also presented an analytical model for the entrainment of dust grains along magnetohydrodynamical (MHD) disc winds, and concluded that grains of ∼10 μm can be lifted by MHD winds and transported outwards in the discs of T Tauri and Herbig Ae/Be objects. However, their model assumes typical evolved mass outflow rates of ∼10^-8 M_⊙/yr. Since the maximum grain size lifted into the envelope depends linearly on this quantity, it can be much larger in young Class 0/I objects, for which the mass loss rates are orders of magnitude higher. These findings might have found recent confirmation thanks to exquisite JWST observations of the Tau 042021 edge-on disc, for which <cit.> reported an X-shaped feature in dust-scattered light whose spatial location is consistent with ALMA CO line emission tracing an outflow. Their observations suggest the entrainment of ≳ 10 μm grains even beyond 300 au. Finally, the findings of these models are supported by <cit.> and <cit.>, who arrived at consistent conclusions via three-dimensional MHD simulations. In particular, <cit.> proposed the expression ash-fall[Continuing <cit.> nomenclature, we thus propose outflows as "chimney flues" in the title of this work.], referring to dust grains decoupling from the entraining outflow and subsequently falling back onto the disc. Thus, as outflows represent in principle a means to transport submillimetre grains to envelopes, we here explore the unique CALYPSO sample <cit.> to test whether a correlation holds between the observed power of jets and outflows in Class 0 protostars and the dust opacity index in their envelopes.

§ THE SAMPLE

The sources that make up our sample are part of the Continuum And Lines in Young ProtoStellar Objects IRAM-PdBI Large Program (CALYPSO[https://irfu.cea.fr/Projets/Calypso/Welcome.html]; ). CALYPSO is a survey of 16 Class 0 sources, located in different star forming regions (d≤450 pc), observed in three spectral setups (centered at ∼ 94, 219 and 231 GHz). The observations were carried out with the IRAM Plateau de Bure Interferometer (PdBI).
See <cit.> for further details. Out of the sixteen sources, nine can be fully characterized for our purposes. Only for these, in fact, could a reliable measurement of β and of the jet/outflow mass loss rates (Ṁ_J, Ṁ_OF) be performed (Table <ref>). Among the sources considered in this study, seven are in binary or multiple systems. We report considerations on their multiplicity in Appendix <ref>. While this is a small sample, we note that CALYPSO is unique in its uniformity and is the only survey for which the SiO (5-4) transition is systematically targeted to detect the high-velocity jets of a sample of protostars (see Section <ref>). This allows us to perform the jet/outflow analysis explained in Sect. <ref> and provides continuum datasets deep enough for <cit.> to measure dust optical properties. Finally, we note that the CALYPSO data for the sources we consider here have been self-calibrated as explained in Section 2 of <cit.>, and the self-calibrated data were later used in <cit.> and <cit.>, i.e. the works that measured the dust opacity spectral index β and the jet mass loss rates Ṁ_J that we also consider in this work.

§ THE DUST OPACITY SPECTRAL INDEX

As stated in Section <ref>, the dust opacity spectral index β can be derived from the radio spectrum of a source and depends on the properties of interstellar dust, such as the maximum grain size of the distribution. <cit.> used CALYPSO 1.3 and 3.2 millimetre continuum observations to constrain β and infer the maximum dust grain sizes in the protostellar envelopes of the sources we consider in this work, up to ∼2000 au radial distances from the central protostars. While we report the details of their measurements in Appendix <ref> for completeness, we briefly summarise them here for the reader's convenience. They measured β including a temperature correction that accounts for departures from the Rayleigh-Jeans approximation due to low envelope temperatures:

β = log_10[(F_ν_2/B_ν_2(T)) / (F_ν_1/B_ν_1(T))] / (log_10ν_2 - log_10ν_1)

where ν_2 = 231 GHz and ν_1 = 94 GHz are the representative frequencies of the PdBI observations, and F_ν is the flux at each frequency. The term B_ν_i(T) is the value of the Planck function, evaluated at frequency ν_i, at a temperature T that depends on the radial distance from the central protostar[In brief, <cit.> assumed this temperature profile based on the radiative transfer post-processing of dusty envelopes from <cit.>.] (T ∝ r^-0.4). For each envelope, they measured β across scales and reported best-fit linear models to extrapolate β at any other scale. We report the final estimated envelope-only β values at 500 au in Table 1. We note that β also depends on the extent of porosity and on the composition of both the grain bulk and the ice mantles (e.g., ). However, the lowest values observed by <cit.> are only reconcilable with laboratory experiments in which the sizes of the dust grains are ≳ 100 μm, regardless of the ice mantle properties (, ). The effect of porosity would also not affect the interpretation of low β as due to large grain sizes (e.g., ).

§ JETS AND OUTFLOWS MASS LOSS RATES

In this section, we report previous measurements of the jet energetics for the CALYPSO sample and present new ones for their low-velocity outflow counterparts. We note that we are interested in the instantaneous mass loss rates of these components rather than their total ejected mass, hence we do not complement the CALYPSO observations with single-dish data.
This is the case because we aim to investigate a link between the presently observed low β values of <cit.> and the continuous flow of material along jets and outflows. The mass loss rates are constant along the jets/outflows extension since mass needs to be conserved, and we measure the latter (outflow) as <cit.> measured the former (jet), i.e., at the first peak of the respective tracer emission, to minimize the contribution of possible gas entrained along the jet. The positions of these peaks are in Tab. 4 of Podio et al. (2021). For the reader's convenience, we note that the maximum recoverable scale of the observations is reported to be about 80 <cit.>. Based on the SiO (54) transition at the innermost knots of the blue- and red-shifted lobes, <cit.> defined the high-velocity (HV) ranges, where the emission probes the jet, for all sources associated with SiO (see their Fig. C.1 and Table 4). In this work, we define the outflow as the emission on the complementary low-velocity (LV) ranges.In Fig. <ref>, we show the spatial distribution of CO (21) towards L1448-C, obtained integrating on the LV and HV ranges: HV CO traces the collimated jet, which is believed to originate from the inner disc region, while LV CO probes the wide-angle outflow, which is likely to arise from a more extended disc region. The HV ranges for all the CALYPSO sources are listed in Table 4 of <cit.>.At this point, <cit.> estimated the Ṁ_̇ ̇J̇, in the blue (B) and red (R) lobes. Here, we apply the same methodology to infer the LV outflows Ṁ_̇ ̇ȮḞ of the sources in the CALYPSO sample, for the first time. The beam-averaged CO column densities in the jet and outflow, N_CO, are derived from the integrated line intensities on HV and LV, respectively. We assume local thermodynamic equilibrium (LTE) at a fixed excitation temperature, T_K = 100 K for HV jets <cit.>, and T_K = 20 K for LV outflows <cit.>, and that the emission is optically thin.The jet and outflow mass-loss rates are computed as (, ):Ṁ = 1/√(C)· m_H_2· (N_CO/X_CO) · b_t · V_tanwhere1/√(C) accounts forcompression inthe shocks (C=3), m_H_2 is the mass of molecular hydrogen,X_CO = 10^-4 the assumed CO abundance with respect to H_2, b_t the beam size perpendicular to the jet, and V_tan the tangential jet/outflow velocity.The latter isobtained by correcting for inclination the jet/outflow velocities, assumed to be 100 km s^-1 for the HV jet, and 10 km s^-1 for the LV outflow. The inclination is derived from the ratio between the assumed jet velocity and its radial component from the HV spectra (see Table 4 of ).For the HV jets, <cit.> identified the sources for which Ṁ_ J is a lower limit by comparing CO and SiO spectra. The estimated rates carry a factor 3-10 of uncertainty due to the calibration of the parameters of Eq. <ref>. The LV outflow emission is likely optically thick, therefore, the estimated Ṁ_ OF must be considered as lower limits. We can estimate the uncertainty introduced by optical depth using ^13CO emission in the assumption that it is optically thin.For two sources only (IRAS 4A1 and IRAS 4B1), ^13CO is detected along the jet (see the maps in <cit.>). For these two sources we can reliably estimate the ^12CO/^13CO ratio, hence opacity. We find τ^(R)_IRAS4A1 = 6, τ^(B)_IRAS4A1 = 18, τ^(R)_IRAS4B1 = 15, τ^(B)_IRAS4B1 = 7. These values imply Ṁ_OF higher by a factor at least ∼6-18. Since we cannot repeat this analysis for all sources, we here stress that the derived Ṁ_OF (in Table <ref> and Fig. 
<ref>) are lower limits and we consider the jets to be a more robust proxy of the effective mass loss rates of each protostar. Observations of optically thin tracers of the low-velocity outflows will be key to further test the correlation we propose.§ A TENTATIVE ANTI-CORRELATIONModern theoretical efforts have shown how growing dust grains in protostellar envelopes is problematic due to the lifetimes and densities of these environments (, , , ). If millimeter dust, implied by recent measurements of low dust opacity spectral indices in envelopes (, ), cannot grow at envelope scales, alternative processes might explain their presence therein. We here show a tentative anti-correlation between β with Ṁ_J and Ṁ_OF, potentially supporting a scenario in which protostars launching powerful outflows can lift millimeter grains into their envelopes. Fig. <ref> show the β indices found by <cit.> as a function of Ṁ_J and Ṁ_OF summed over the blue and red lobes (see Section <ref>). The values are reported in Table <ref>. We do not include SerpM-S68N because SiO (5-4) emission in <cit.> is only at low velocities, likely due to the system inclination, thus impeding the identification of the LV and HV.The resulting Pearson correlation coefficients are: ∙ ρ_J = -0.73 ± 0.27 (β_500au, Ṁ_J^R+B)∙ ρ_OF = -0.68 ± 0.28 (β_500au, Ṁ_OF^R+B) We evaluate the statistical significance of such a correlation by means of a two-tailed Student's t-test, where the null hypothesis is that ρ = 0 (against ρ≠ 0).We reject the null hypothesis at p < 0.04 level in the jet case, and at the p < 0.06 level for the outflows.These tentative correlations might support a dust transport scenario from young discs to their embedding envelopes.Alternative explanations to the observed tentative correlation are possible in case these two share correlations with other quantities. <cit.> found a correlation between the envelope mass of Class 0/I YSOs and the CO momentum flux of their outflows. Since <cit.> observed a correlation between β and envelope mass of CALYPSO sources, then the tentative correlation we show in Fig. <ref> might be the combined result of these underlying relationships. However, it remains unclear whether the fundamental causal correlation is the one between the dust opacity spectral index and envelope mass or the mass loss reates, as presented here. Moreover, the β-Ṁ_OF correlation in Fig.<ref> might be caused by an underlying Ṁ_J-Ṁ_OF correlation. Such a correlation cannot be quantified here, given that the estimated Ṁ_OF are lower limits. To rule out possible contamination of the correlation from any dependence of the measured β and mass loss rates on the inclination of the source (disc/jet), we reject possible underlying correlations in Appendix <ref>. § DISCUSSIONWe here further discuss our findings, and the conditions that need to be met in order for the proposed dust transport to happen. §.§ When and where do transported grains grow? If outflows are lifting millimetre (or larger) dust grains into the envelopes of Class 0 objects, these must have first grown in the disc.<cit.> studied dust coagulation in the first 1 Myr of disc evolution at representative 1, 10 and 100 au scales and found that millimetre dust grains dominate the dust size distribution already after few 10^3 yr in the inner 1 au of the disc. <cit.> considered several dust size distributions and simulated their early evolution during protostellar collapse under the effects of turbulent, brownian and radial motions. 
They found that millimetre grains are formed at ≤ 0.1 au scales in few years after the first Larson core formation start. Laboratory experiments have been performed to constrain the stickiness of dust grains in the disc inner regions. When heated at 1000 K, dust grains become super-dry and their stickiness can increase up to a factor 100, thus providing the conditions to grow even larger agglomerates (, ).These temperatures are typically reached in the inner ∼ 0.1 au of low-mass protostellar discs. At these distances, both jets and outflows could lift grains. Indeed, the typical foot-point of jet is much closer to the star than the outflows'. For example, <cit.> measured a 0.05-0.3 au foot-point radius for the high-velocity SiO jet in the Class 0 HH212 source. Low-velocity outflows, instead, likely extend to a wider disc region out to radii of even 20-40 au (, , ), and thus could entrain grains from a larger reservoir. §.§ Can outflows lift millimetre grains?<cit.> and <cit.> presented an analytical treatment in which they explored the conditions for the uplifting of dust grains along outflows. <cit.> presented the critical mass of the protostar for which, if M_* < M_cr, grains of a given size could be entrained against gravity (see their Eq. 7). Another analytical model, by <cit.>, reported an equation to compute the maximum grain size a_max that a given wind can uplift against the gravity of a star of mass M_*. We report the latter for the reader's convenience:a_max≈ 0.35 μ m (M_*/M_⊙)^-1(Ṁ/10^-8 M_⊙/yr) (T_gas/200 K)^0.5 (r/au)^-0.25( (z/r)/0.06)^-1(log(r_+/r_-)/10^3)^-1,where Ṁ_⊙/yr is the mass loss rate of the outflow, T_gas is the gas temperature, r is the launching footpoint, z/r the disc flaring ratio, r_+/r_- the ratio between disc's outer and inner edge. See <cit.> for the details. We notice that the three sources of our sample with the largest outflows mass loss rates (≳ 2 · 10^-7 M_⊙/yr) have β < 0.8. If we consider this value in Eq. <ref>, and we fix M_* = 1M_⊙, T_gas = 20K at the outflow's base, r = 1 au, z/r = 0.1, r_+ = 50 au (typical Class 0 disc radius, e.g., ) and r_- = 0.1 au, we obtain a_max≳ 150 μm. Since outflows mass loss rates are lower limits due to optical depth effects, a_max could be higher by even an order of magnitude. We refrain from evaluating Eq. <ref> source by source since it was derived for class II objects rather than class 0/I, and because most parameters suffer from large uncertainties for young sources. Thus, at face value, assuming standard parameters, grains larger than 100 μm could be lifted for the sources with highest mass loss rates (and lowest betas).Similar findings for the maximum sizes of dust grains entrainable by outflows were reported by <cit.> and <cit.>. They both performed magneto-hydrodynamical simulations. <cit.> ran their setup including large grains to account for growth that might have happened at earlier times, while <cit.> models dust coagulation. They both found that large grains in the inner region of disc (a few 100 μm to 1 cm) can be entrained. These grains then decouple from the gas and are ejected from the outflow into the envelope, before falling back into the disc like ash fall, as coined by <cit.>.§.§ Do grains survive the transport?Given their lower velocities and temperatures, as well as a wider entraining base, outflows seem to be the preferred mechanism to lift dust grains from protoplanetary discs to the inner envelopes of young protostars (, , ). The tentative β - Ṁ_OF correlation we present in Fig. 
<ref> might support this scenario. The observed β - Ṁ_J correlation might either mean that jets are contributing to the mechanism or that they share an underlying correlation with the outflows. We thus here discuss if lifted dust grains would survive the transport along jets. Given the much lower speeds and temperatures of outflows, their survival to transport along the latter is a consequence.The destruction of silicon-bearing dust grains in shocks has been identified as the mechanism that enriches SiO in the interstellar medium and makes of this molecule a key jet tracer <cit.>. However, shock models predict that only a small fraction (< 10%) of grains is destroyed in the mild shocks along jets, with typical velocities of 20-50 km s^-1 and pre-shock gas densities of 10^4-10^6 cm^-3 <cit.>. In the wide grid of models explored by <cit.>, where the shock velocities range 20 km s^-1 < v_s < 50 km s^-1 and the pre-shock gas densities are in the interval 10^4 cm^-3 < n_H < 10^6 cm^-3, no more than 5% of Si is released in the gas phase by sputtering. Taking into account shattering and vaporisation of the grains in grain-grain collisions may enhance the fraction of grains destroyed to ∼ 8% <cit.>. These shock models reproduce the typical SiO abundances estimated in protostellar shocks which span from a few 10^-8 and a few 10^-7 <cit.>. Recent high angular resolution observations, e.g. in CALYPSO, indicate that SiO may reach abundances >5 × 10^-6 in jets, which requires either dust-free jets or the fraction of grains sputtered in shocks being larger than 10% (for [Si/H]_⊙∼ 3.5 × 10^-5, ).Finally, <cit.> studied whether (sub-)millimetre dust seeds would survive grain-grain collisions in the envelope, after reaching the transport limit velocity (v ∼ 0.5 km/s), given by the gravity-drag equilibrium along the outflow. Making use of the shattering model of <cit.>, they concluded that millimetre-sized dust grains could survive in the envelope environment: only a fraction as small as 10% might be destroyed.Thus, it seems reasonable that a large percentage of dust grains could survive the transport along outflows and even jets, being only partially eroded by collisions with both other grains and gas molecules in the latter. However, we note that there is a strong necessity for dust laboratory and modeling studies to assess the effects of high temperatures in the inner disc if submillimeter dust were lifted from inner outflows footpoints. In particular, it will be crucial to test whether the high temperatures would sublimate grain's mantles materials causing them to further shrink in size.§.§ Potential implicationsThe possibility that protostellar outflows lift large millimetre grains from the disc into the envelopes of young stellar objects can have several implications for the evolution of dust throughout the system. The outwards transport of dust can extend the timescales of grain growth in discs, limited by the meter barrier problem; it can affect the physical properties of grains as they are transported upwards away from the optically thick disc; and it can contribute to explain mixing of the mineralogy of outer discs, like the one found in meteorites in the Solar System.First, the orbital dynamics of dust grains orbiting in a disc depends on their stokes number, defined by their composition, density and size. When particles grow in size, they experience a larger and larger headwind that slows them down and cause an inward orbit shift, known as radial drift. 
In a typical disc orbiting a 1 M_⊙ star, radial drift velocities of solids at 1 au reach a maximum for meter-sized boulders, thus causing intermediate solids to rapidly fall towards the central star in timescales much faster than the ones estimated for planet cores formation <cit.>. At larger radii, this peak velocity is reached for even smaller pebbles.If outflows were uplifting grains in a continuous recycle of dust to the outer disc, this would setback grown millimetre grains in its outskirts and contribute to stretch the available time-span to form larger agglomerates, as already proposed by <cit.>. Moreover, if young protoplanetary discs harbor ring substructures that act like dust traps (as is the case for, e.g., GY91 fromor IRS63 from ), then outwards transported grains will be halted on their drift back towards the inner disc at one of these substructures, potential birthplaces of planetesimals via streaming instabilities <cit.>.Secondly, transported dust grains would undergo physical and chemical reprocessing once in the envelope. While they are partially shielded from the radiation of the star in the dense disc, they are going to be lifted in the much thinner envelope and the different energy and intensity of stellar radiation impinging onto them could change their structural and compositional properties. Furthermore, the grains would be transported from the warm inner disc to the colder envelopes where molecular freeze-out could form ice mantles.Lastly, the uplifting and outward transport of inner disc grains represents a potential explanation for the discovery of cristalline grains in the outskirts of protoplanetary discs, where the temperatures are too low to explain spectral observations of silicate lines (e.g., , ). Along the same line, <cit.> and <cit.> observed anomalous abundances of ^46Ti, ^50Ti and ^54Cr isotopes in outer Solar System chondrules (mm-sized meteorite inclusions). Since Calcium-Aluminum Inclusions (CAIs), which formed in the inner Solar System, consistently show high abundances in both isotopes, they proposed a mixing of solar nebula material in the early stages of formation. In the same direction are the recent findings of <cit.>, who show that carbonaceous chondrites display correlation in different isotopes abundances which can be explained by mixing of refractory inclusions, chondrules, and chondrite-like matrix. They thus highlight the need for a mechanism to transport these constituents from the inner disc to its outskirts and trap them in rings where the meteorites would form. If dynamical barriers to outwards viscous transport were present, such as the core of a Jupiter-like planet, protostellar outflows might have played this transport role: the grains extracted by outflows from inner disc regions will later fall back onto the disc out to larger radii.§ CONCLUSIONS Recently, extremely low dust opacity indices have been observed at few hundred au scales in the envelopes of Class 0 sources, and have been interpreted as the presence of millimetric dust grains <cit.>. Since theoretical models seem to discard the possibility of growing millimetre grains at the densities typical of protostellar envelopes (e.g. , ), we propose here a possible observational test to an alternative explanation, the transport of dust from the disc into envelopes via protostellar outflows. The mechanism has been studied analytically by <cit.> and <cit.> and is supported by numerical simulations of <cit.> and <cit.>. 
We show a tentative anti-correlation between protostellar envelopes β and their mass loss rates driven by jets and outflows. Such a correlation can be interpreted as supporting a scenario in which protostellar outflows transport large disc grains into the envelopes of young sources.If protostellar outflows are indeed lifting millimetre grains in the envelopes of young sources, implications are important for the meter-size barrier problem, the reprocessing of dust during its life cycle, and for material mixing throughout planetary systems, as already suggested for the Solar System (see Sect. <ref>). While further measurements of both dust opacity index and mass loss rates will be key in either confirming or disproving such a correlation, we here stress how we explored this possibility with a unique sample in this regard, for which uniform observations, reduction and analyses were carried out. ALMA and JWST synergies will be key to better constrain both dust properties and jets/outflows energetics in a larger sample of young sources. § ACKNOWLEDGMENTSThis work was partly supported by the Italian Ministero dell’Istruzione, Università e Ricerca through the grant Progetti Premiali 2012-iALMA (CUP C52I13000140001), by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Ref no. 325594231 FOR 2634/2 TE 1024/2-1, by the DFG Cluster of Excellence Origins (www.origins-cluster.de). This project has received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 823823 (DUSTBUSTERS) and from the European Research Council (ERC) via the ERC Synergy Grant "ECOGAL" (grant 855130) and from the ERC Starting Grant "MagneticYSOs" (grant 679937). CC and LP acknowledge the EC H2020 research and innovation programme for the project "Astro-Chemical Origins” (ACO, No 811312) and the PRIN-MUR 2020 MUR BEYOND-2p (Astrochemistry beyond the second period elements, Prot. 2020AFB3FX). RSK also acknowledges support from the Heidelberg Cluster of Excellence (EXC 2181 - 390900948) "STRUCTURES", funded by the German Excellence Strategy, and from the German Ministry for Economic Affairs and Climate Action in project “MAINN” (funding ID 50OO2206). LC thank Chris Ormel and Sebastian Krijt for insightful discussions on this topic. We thank the referees for their helpful comments, which helped us improved the content and presentation of this work.§ BINARY PROTOSTARS When stars form from the collapse of gas clouds, fragmentation of dense cores often leads to binary or multiple systems. It is estimated that the fraction of stars with at least one companion in the Galaxy is between ∼20% for M-type sources up to ∼90% for O-type ones <cit.>. The protostars of the CALYPSO sample are no exception. The PdBI observations beam allowed <cit.> to separate systems in the maps with separations larger than ∼60 au in Taurus, ∼90 au in Perseus and 132 au in Serpens South. For Serpens Main, the systems (SerpM-SMM4, SerpM-S68N) are probed down to distances smaller than 160 au.These spatial resolutions are based on distance measurements from <cit.> for Taurus (140 pc), <cit.> for Perseus (290 pc), <cit.> for Serpens South (441 pc) and <cit.> for Serpens Main (436 pc). Moreover, on the large-scale end, they were sensitive to companions up to ∼1500-2800 au, depending on the region. They finally classified IRAM04191, L1521F, L1527, L1157, GF9-2, SerpS-MM22 as single sources. 
On the contrary, L1448-2A, L1448-N, L1448-C, IRAS4A, IRAS4B, SerpS-MM18, SVS13B, IRAS2A were classified as having a companion. For each protostar considered in this work, we report the distance to their companion(s), if any, in Table <ref>. We note that the tightest binary systems have not been considered here since a measurement of β, Ṁ_J, or Ṁ_OF was impractical in the studies of <cit.>, <cit.>, or our own, respectively. While most sources of this study are well resolved binaries, their separation is usually smaller than the extent of their envelopes, thus they share a common envelope. The only exceptions are SVS13B and IRAS4B1, for which the companion(s) have much wider separations. For all the sources considered in this work, and that enter the tentative correlation described in Section <ref>, the source of the jets and outflows was well resolved (e.g., see Fig. <ref>) and the measurement of β could be performed after model subtraction of the secondaries. Furthermore, we note that the low β of <cit.> are measured in the inner envelope of each protostar and are thus far from possible contamination by the much larger common envelope (see Section <ref>).

Source         Companion(s) name: distance
L1527          -
L1157          -
SVS13B         SVS13A: 3500 au, SVS13C: 4500 au
IRAS2A1        IRAS2-A2: 143 au
SerpS-MM18a    SerpS-MM18b: 2600 au^(a)
SerpM-S68Na    SerpM-S68Nb: 5000 au, SerpM-S68Nc: 8300 au
IRAS4-B1       IRAS4-B2: 3500 au
IRAS4-A1       IRAS4-A2: 420 au
L1448-C        L1448-C(S): 2000 au

Stellar companions associated with the protostars considered in this work. The separations are reported in <cit.>. (a): Note that the physical separation of SerpS-MM18 reported therein should instead be 4420 au, given the most up-to-date distance measurements of the Serpens South region.

§ DETAILS ON β MEASUREMENTS OF GALAMETZ ET AL. (2019)

<cit.> measured the dust opacity spectral index in a sample of Class 0/I protostellar envelopes. First, for β to be a trustworthy proxy of the maximum grain size of a dust distribution, the emission over which the radio spectrum is sampled needs to be optically thin. Hence, they estimated the envelopes' optical depths and found τ well below 0.1 at a few hundred au scales for every source (see their Fig. 2). To make sure the measured β would be representative of the envelope alone, <cit.> subtracted both the emission of binary companions (see Section <ref>) and that of circumstellar discs. The companions were subtracted by fitting and removing from the visibilities a Gaussian model centered on the secondary sources (further details in <cit.>). Secondly, the contribution of the circumstellar disc orbiting the target protostar was evaluated in uv space as the mean of the amplitudes beyond 200 kλ and subtracted from the shorter-baseline visibilities. They test and comment on the robustness of such a correction in their Section 4, where they assess that considering the mean of the amplitudes in slightly different ranges at the long-baseline end of the visibilities would not affect their results. Moreover, they subtracted the contribution not due to thermal dust emission by extrapolating literature centimetre data for each source, as shown in their Table A.1.

§ JETS HIGH VELOCITY RANGES

In Table <ref>, we report the velocity ranges in which we identify high- and low-velocity SiO line emission.
These ranges were then used to derive mass loss rates from the CO line emission.

Source         Lobe   HV range (km/s)   V_sys (km/s)
L1527          B      -21.8/-11.8       +5.7
               R      +17/+27
L1157          B      -60/-20           +2.6
               R      +30/+70
SVS13B         B      -37/+8.5          +8.5
               R      +8.5/+58
IRAS2A1        B      -32/-9            +7.5
               R      -
SerpS-MM18a    B      -17/-2            +8.1
               R      +21/+32
SerpM-S68N     B      -7/+5             +9.2
               R      +12/+21
IRAS4B1        B      -30/-5            +6.8
               R      +16/+50
IRAS4A1        B      -30/-10           +6.3
               R      +30/+70
L1448-C        B      -70/-22           +5.1
               R      +25/+85

Table of the identified high-velocity ranges from the SiO jet emission for each source (from Tab. 4 in <cit.>). Based on these ranges, the high- and low-velocity emission of the CO is defined in order to derive the mass loss rates. Only the blue lobe is detected for IRAS2A1. For L1527, no SiO is detected; therefore the LV range is defined based on the CO emission, while the HV range is assumed to extend +/-10 km/s with respect to the largest velocity detected in the LV.

§ INCLINATION DEPENDENCIES

As a sanity check for our correlations, we tested whether any underlying correlation exists between the reported (β, Ṁ_J) or measured quantities (Ṁ_OF) and the inclination of the sources of our sample. To derive the mass loss rates of the outflows and jets, in fact, this work and <cit.> assumed velocities of 10 and 100 km/s, respectively (see Section <ref>). Such an approach ensures a uniform method to derive the rates, rather than making assumptions on the inclinations of the jets and outflows, since current estimates are either unavailable or very uncertain. Thus, in this section, we check for potential correlations between the inclination and the quantities involved in our proposed correlation. The inclinations we use have been collected from a number of works. Where more than one estimate was available based on different methods, we reported an average value. If no uncertainty was reported in the literature, for example because the estimate is only of a qualitative nature (e.g., for IRAS2A1 reported by <cit.>), we plot no error bar. Inclinations for IRAS4A and IRAS4B were reported by <cit.> and <cit.>. The inclination for the jet of SVS13B was reported to be ∼71^∘ by <cit.>, while <cit.> measured the one of L1157 at ∼73^∘. <cit.> constrained the inclination of L1448-C to be ∼34^∘ and ∼46^∘ for the blue- and red-shifted lobes, respectively (in this case, we report the mean value and the scatter between the two). The inclination of L1527 is well constrained to be almost perpendicular to the sky plane (e.g., 85^∘ in ). Finally, <cit.> and <cit.> independently and qualitatively assessed that the jet of SerpM-SMM18a lies in the plane of the sky, so we set i = 90^∘. Figure <ref> summarises the relationship between the inclinations and Ṁ_J, Ṁ_OF, or β for the CALYPSO sample. While a hint of a correlation is seen for the (β, inclination) pair, only a combination of underlying correlations of both β and the mass loss rates with inclination would justify the correlation between β and the mass loss rates. We thus conclude that possible inclination biases are not driving the correlation in Fig. <ref>.
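The significance quoted for the tentative anti-correlation can be reproduced with a short calculation: for a Pearson coefficient r measured on n points, the two-tailed p-value follows from the t-statistic t = r√(n-2)/√(1-r²). The sketch below, in Python with SciPy, uses only ρ_J = -0.73 from the main text; taking n = 8 (the nine sources of Table 1 minus SerpM-S68N, which is excluded from the correlation) is our assumption.

```python
import numpy as np
from scipy import stats

def pearson_p_value(r: float, n: int) -> float:
    """Two-tailed p-value for the null hypothesis rho = 0,
    using the t-statistic t = r * sqrt(n - 2) / sqrt(1 - r^2)."""
    t = abs(r) * np.sqrt(n - 2) / np.sqrt(1.0 - r**2)
    return 2.0 * stats.t.sf(t, df=n - 2)

# rho_J = -0.73 is quoted in the text; n = 8 is our assumption
# (nine sources minus SerpM-S68N, excluded from the correlation).
print(pearson_p_value(-0.73, 8))  # ~0.04, consistent with p < 0.04

# Given the measured (beta, Mdot) pairs from Table 1, scipy.stats.pearsonr
# would return both the coefficient and this two-tailed p-value directly:
# r, p = stats.pearsonr(mdot_jet, beta_500au)
```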
http://arxiv.org/abs/2311.16315v1
{ "authors": [ "L. Cacciapuoti", "L. Testi", "L. Podio", "C. Codella", "A. J. Maury", "M. De Simone", "P. Hennebelle", "U. Lebreuilly", "R. S. Klessen", "S. Molinari" ], "categories": [ "astro-ph.EP", "astro-ph.GA", "astro-ph.SR" ], "primary_category": "astro-ph.EP", "published": "20231127210450", "title": "Protostellar chimney flues: are jets and outflows lifting submillimetre dust grains from discs into envelopes?" }
A Knowledge Graph Approach for Exploratory Search in Research Institutions
Tim Schopf, Nektarios Machner and Florian Matthes
Department of Computer Science, Technical University of Munich, Germany
{tim.schopf, nektarios.machner, matthes}@tum.de

In the current demand for automation in the agro-food industry, accurately detecting and localizing relevant objects in 3D is essential for successful robotic operations. However, this is a challenge due to the presence of occlusions. Multi-view perception approaches allow robots to overcome occlusions, but a tracking component is needed to associate the objects detected by the robot over multiple viewpoints. Most multi-object tracking (MOT) algorithms are designed for high frame rate sequences and struggle with the occlusions generated by robots' motions and 3D environments. In this paper, we introduce MOT-DETR, a novel approach to detect and track objects in 3D over time using a combination of convolutional networks and transformers. Our method processes 2D and 3D data, and employs a transformer architecture to perform data fusion. We show that MOT-DETR outperforms state-of-the-art multi-object tracking methods. Furthermore, we prove that MOT-DETR can leverage 3D data to deal with long-term occlusions and large frame-to-frame distances better than state-of-the-art methods. Finally, we show how our method is resilient to camera pose noise that can affect the accuracy of point clouds. The implementation of MOT-DETR can be found here: https://github.com/drapado/mot-detr

§ INTRODUCTION
The agro-food industry is facing increasing challenges due to a growing and more affluent population, alongside a decreasing labor force. Automation, particularly robotics, is considered a key solution but encounters issues in these environments, primarily in terms of robotic perception and interaction, because of factors like high occlusion and variation <cit.>. An accurate and efficient representation of the robot's operating environment, including objects' locations and properties, is essential for successful robot operation in these environments <cit.>. Traditional robotic systems using single-view data, like a single image, fail to capture all relevant details due to the high occlusion levels of agro-food environments. This leads to errors in detecting and locating objects <cit.>. Representations that incorporate multiple views have the potential to improve detection and localization even in highly occluded conditions. Here, active perception methods can play an important role in selecting the most optimal viewpoints <cit.>. However, developing representations from multi-view perception requires associating upcoming detections with their corresponding object representations and previous measurements <cit.>. This task is often referred to as multi-object tracking (MOT). Performing MOT in complex agro-food environments remains a challenge due to high levels of occlusion and sensor noise, yet the accuracy of the MOT algorithm directly impacts the quality of the representation <cit.>. In this paper we propose a method to build representations that can be used by robots through a novel approach to perform object detection and MOT in 3D.
Our method enables the use of multi-view perception to deal with occlusions by detecting and associating the relevant objects in the present viewpoint with the already detected objects from previous viewpoints. This allows robotic systems to build an accurate representation in occluded environments. Fig. <ref> shows an example of the resulting tracking and 3D representation.

§ RELATED WORK AND CONTRIBUTIONS
In MOT, different strategies are used. Two-stage methods, like Simple Online and Real-time Tracking (SORT) <cit.> and DeepSORT <cit.>, have been popular in recent years. Two-stage methods typically use a deep-learning object detector and an association algorithm that associates upcoming detections with previous ones. SORT passes the 2D position of the detected objects to an association method based on a Kalman filter and a Hungarian algorithm. DeepSORT extended SORT by adding a feature extraction network to generate re-identification (re-ID) features that help the association algorithm. Single-stage methods combine detection and re-ID in one step, making them more efficient. However, the association is still performed separately. A recent example of a single-stage method is FairMOT <cit.>. In recurrent methods, the detection of objects and their association over multiple frames are performed through the same network. Recent examples of recurrent methods are Trackformer <cit.> and MOTR <cit.>. In both methods, the tracking is performed end-to-end by a single-stage network that leverages the transformer-based object detection architecture of DETR <cit.> and tracks objects over consecutive frames. The transformer architecture of DETR allows it to reason about the objects and the relations between themselves and the image context, which can be very useful for MOT applications. Recurrent methods based on transformers eliminate the need for complex handcrafted association components, providing a more streamlined approach to MOT. However, their recurrent-like approach complicates the training process compared to single-stage algorithms <cit.>. In agro-food environments, MOT has been employed for tasks like crop monitoring and fruit counting <cit.>. However, occlusions can reduce a tracking algorithm's performance <cit.>. This indicates the need for more powerful tracking algorithms. Even though there exist MOT algorithms with more novel and potentially better tracking performance, as indicated earlier, most agro-food tracking approaches are still based on SORT or DeepSORT as they require less data and are less complex to train. The above-mentioned MOT algorithms, like DeepSORT <cit.>, FairMOT <cit.> and MOTR <cit.>, were designed to work in video sequences with high frame rates, where the difference between frames is small. This tends to simplify tracking. However, in robotic applications, differences between frames are not always small due to the 3D nature of robots' motions and environments. This results in sequences with more prevalent and complex occlusions, and drastic perspective changes. Using 3D data for MOT can solve this issue by providing additional information to distinguish objects more effectively <cit.>. Nevertheless, to overcome some of the challenges of 3D environments and motions, algorithms that can better utilize 3D data for MOT are needed. In this paper we present a novel method, MOT-DETR, to perform MOT in complex agro-food environments.
Our contributions are as follows:
* A novel deep learning method, MOT-DETR, which is an adaptation of the transformer-based architecture developed by DETR <cit.> with an extra re-ID output to perform MOT in a single-stage manner. MOT-DETR makes use of the capabilities of the transformer architecture to perform MOT without the complexity of recurrent-based methods.
* A method of using 3D information by leveraging the self- and cross-attention capabilities of the transformer architecture to merge information from color images and point clouds to improve the MOT performance of MOT-DETR.
* A comparison of our proposed MOT-DETR against a state-of-the-art single-shot detection and tracking method, FairMOT <cit.>, on sequences with different frame-to-frame distances.
* A novel method to generate random synthetic tomato plant 3D models that can be used to render viewpoints and generate training and evaluation sequences.
* An evaluation of the performance of MOT-DETR under different levels of noise on the camera pose estimation.

§ PROPOSED APPROACH
In this work, we present MOT-DETR (Multi-Object Tracking and DEtection with TRansformers), a MOT algorithm that uses the single-shot detection and tracking approach of FairMOT <cit.> but with the detection architecture of DETR <cit.>. We further enhanced MOT-DETR to simultaneously process 2D images and 3D point clouds. For every viewpoint that the robot collects, MOT-DETR predicts the following outputs per detected object: a 2D bounding box, the object class, and re-ID features. The re-ID features are then used in a data association process that associates the objects detected at the current viewpoint with previously detected objects, also referred to as tracklets. The data association is done by using the re-ID features of newly detected and previous objects to build a cost matrix using the cosine distance. This cost matrix is then passed to a Hungarian algorithm that generates the associations for tracking. The architecture of MOT-DETR can be seen in Fig. <ref>.

§.§ Data pre-processing
For each viewpoint the robot collects, MOT-DETR takes as input a color image and its corresponding structured 3D point cloud. The point cloud is transformed using the known transformation between the camera and the robot's world coordinate system. In this way, all point clouds collected over time have the same coordinate system. Next, each point cloud is converted into a normalized image using user-defined limits for each Cartesian axis. This step also removes points in space that are outside of the target area of the robot.

§.§ MOT-DETR architecture
Both the color image and the point cloud are processed through two independent convolutional neural networks (CNNs). The output of the CNNs is then flattened, and a fully connected layer is used to reduce the feature dimension to C channels. After this, both flattened maps are concatenated, increasing the feature dimension to 2C. Following the architecture of DETR <cit.>, the concatenated feature map is passed to a transformer encoder, which applies attention mechanisms to focus on different parts of the image. The output of the transformer encoder is then passed to the transformer decoder. As in DETR, the transformer decoder decodes a set of N object queries using self- and encoder-decoder attention mechanisms. These object queries are independent learnable vectors that are used to query the transformer decoder for the presence of objects in the image.
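Before turning to the prediction heads, the following PyTorch sketch illustrates the two-branch fusion and the query-based transformer decoding described above. It is a simplified illustration rather than the released implementation: the ResNet-18 backbones, the channel width C = 256, the number of object queries, and the collapse of each CNN output into a single token are assumptions made here for brevity (the actual model may keep the spatial positions of the feature maps as separate tokens), and a recent torchvision release is assumed.

import torch
import torch.nn as nn
import torchvision

class FusionDETRSketch(nn.Module):
    def __init__(self, num_queries=50, d_model=256):
        super().__init__()
        # Independent CNN backbones for the color image and the normalized point-cloud image.
        self.rgb_backbone = torchvision.models.resnet18(weights=None)
        self.rgb_backbone.fc = nn.Identity()
        self.xyz_backbone = torchvision.models.resnet18(weights=None)
        self.xyz_backbone.fc = nn.Identity()
        # Fully connected layers reducing each flattened CNN output to C channels.
        self.rgb_proj = nn.Linear(512, d_model)
        self.xyz_proj = nn.Linear(512, d_model)
        # Transformer encoder-decoder operating on the concatenated (2C) features.
        self.transformer = nn.Transformer(d_model=2 * d_model, batch_first=True)
        # Learnable object queries decoded into output embeddings.
        self.queries = nn.Embedding(num_queries, 2 * d_model)

    def forward(self, rgb, xyz):
        # rgb and xyz are (B, 3, H, W); xyz is the point cloud rendered as a normalized image.
        f_rgb = self.rgb_proj(self.rgb_backbone(rgb))            # (B, C)
        f_xyz = self.xyz_proj(self.xyz_backbone(xyz))            # (B, C)
        fused = torch.cat([f_rgb, f_xyz], dim=-1).unsqueeze(1)   # (B, 1, 2C) memory tokens
        q = self.queries.weight.unsqueeze(0).expand(rgb.size(0), -1, -1)
        return self.transformer(fused, q)                        # (B, num_queries, 2C) embeddings

The prediction heads that are applied to these output embeddings are described next.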
The N object queries are transformed into N output embeddings, which are then passed through a set of prediction heads. There are three heads: one for class prediction, one for bounding box prediction and one for re-ID features. The class prediction head outputs the probabilities of each class for each object query. However, not all N embeddings represent a true object; most of them correspond to background. To differentiate them, the class "background" is used together with the different object classes present in the dataset. The classification head then predicts, for each output embedding, whether it is an object of any of the target classes or not an object at all. The bounding box prediction head outputs the coordinates of the 2D bounding box for each object query, and the tracking head predicts the unique ID of each object at training time and a set of re-ID features at inference time. Apart from the default version of MOT-DETR, which takes 2D images and 3D point clouds as input (3D version), we also implemented a version that takes only 2D images as input (2D version). This version has only one CNN, whose resulting feature map is passed to the transformer network. We doubled the output size of the fully connected layer after the CNN to keep the size of the transformer network the same between the 3D and the 2D versions. Consequently, the output size of this layer is 2C for the 2D version instead of C as in the 3D version.

§.§ Training
MOT-DETR performs three tasks, defined by the three prediction heads: object classification, bounding box prediction, and object tracking. Each task has its own loss function that contributes to the training process. A crucial step in training an object detection network is associating each ground truth object with one of the network predictions. We use the same approach as <cit.>, namely Hungarian matching. The cost matrix for the Hungarian algorithm is computed using the object detection loss. As in DETR <cit.>, the object detection loss corresponds to the sum of the cross entropy (CE) loss, the L1 loss and the Generalized Intersection over Union (GIoU) loss between predicted objects and ground truth objects. This approach ensures that each ground truth object is matched with one prediction, and this matching minimizes the total object detection loss. Once predictions have been associated with ground truth objects, the total loss can be calculated as

L_total = 1/2 (e^{-w_1} L_det + e^{-w_2} L_id + w_1 + w_2)

where L_det corresponds to the object detection loss as defined by DETR <cit.>, L_id is the re-ID loss as defined in FairMOT <cit.>, and w_1 and w_2 are learnable parameters that balance the two tasks of object detection and re-identification.

§.§ Inference and tracking
At inference time, we discard the tracking ID predicted by the network and use the re-ID features. These features are fed into a data association algorithm which uses the Hungarian algorithm to associate objects over multiple viewpoints. At every viewpoint, the cost matrix between the existing tracklet features and the newly detected object features is calculated using the cosine distance. Then the Hungarian algorithm is used to select the best assignments between existing objects and new detections. Detections that were not associated with any existing object initiate a new object tracklet. Tracklets that have not been associated with any detection remain in the set of tracklets. The 3D location of tomatoes is tracked and updated with Kalman filters. When a tomato is detected, its position updates the filter. This position is derived by filtering the point cloud within the tomato's bounding box and averaging the points.
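As a reference for the association step described above, the sketch below shows one straightforward way to implement it with SciPy: a cosine-distance cost matrix between tracklet and detection re-ID features is solved with the Hungarian algorithm. The gating threshold max_cost and the function name are illustrative assumptions, not part of the method description.

from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def associate(track_feats, det_feats, max_cost=0.4):
    """Match existing tracklets to new detections.

    track_feats: (T, D) array of re-ID features of existing tracklets.
    det_feats:   (N, D) array of re-ID features of current detections.
    Returns (matches, unmatched_tracks, unmatched_dets).
    """
    if len(track_feats) == 0 or len(det_feats) == 0:
        return [], list(range(len(track_feats))), list(range(len(det_feats)))
    cost = cdist(track_feats, det_feats, metric="cosine")   # (T, N) cost matrix
    rows, cols = linear_sum_assignment(cost)                # optimal assignment (Hungarian)
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]
    matched_r = {r for r, _ in matches}
    matched_c = {c for _, c in matches}
    unmatched_tracks = [r for r in range(cost.shape[0]) if r not in matched_r]
    unmatched_dets = [c for c in range(cost.shape[1]) if c not in matched_c]
    return matches, unmatched_tracks, unmatched_dets

Unmatched detections would then start new tracklets, while unmatched tracklets are kept, as described above.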
§ DATA
Deep neural networks require large amounts of data to train. Applied fields, such as agro-food, have worked around this problem by using pre-trained models. However, these practices are less common in 3D applications, where widely standardized methods like CNNs and large general datasets are less available. To solve this problem, we developed a method to generate random synthetic tomato plant 3D models. The 3D plant models were generated using an L-system-based formalism in the modeling platform GroIMP, v1.6 <cit.>. The formalism allows specifying individual organs of the plant, like fruit or leaf, their geometry, and the connections between them. The L-system was parameterized by reading in actual morphological data of a complete plant. Morphological data was obtained from measurements of young greenhouse-grown tomato plants. The architecture of these plants is relatively simple, made of a repetitive pattern of three leaves and one truss with fruits. To generate multiple plants, randomness was added to individual organ traits: leaf angle, leaf length and width, internode length, number of tomatoes per truss, and size of the tomatoes in each truss. In addition, a substantial set of real images was collected. An example of a synthetic plant model is shown in Fig. <ref>-left. From the 3D models, we rendered color images and point clouds from random viewpoints using Open3D <cit.>. For each viewpoint, the position of the camera was sampled randomly inside a cylinder whose center was approximately the plant stem. The camera would aim at a random point approximately around the stem of the plant. This process results in viewpoints with five degrees-of-freedom (DoF). An example of a rendered RGB image from a viewpoint can be seen in Fig. <ref>-right. In total, we generated 50 different plant models and rendered 1,000 viewpoints for each of them, resulting in 50,000 viewpoints. Furthermore, a dataset using five real plants from a tomato greenhouse was collected using the system shown in Fig. <ref>. Per plant, viewpoints were collected using a two-DoF planar motion sequence in front of the plant at distances of 40 cm, 60 cm, or both. In total, 5,400 viewpoints from real plants were collected. Fig. <ref> shows the diagram of the data collection path for real plants on the left, and an example of an image viewpoint on the right. To train and evaluate MOT-DETR, we divided both the synthetic and real datasets into train, validation and test splits as shown in Table <ref>. To prevent the plants used in the experiments from being seen by the network at training time, the train and validation splits came from the same pool of plants, while the test splits were generated from different plants. We trained MOT-DETR with the combined real and synthetic train sets. At training time, data augmentation was used for both the images and the point clouds. Color images were augmented using brightness, contrast, saturation and hue changes. Point clouds were augmented by adding six-DoF noise to the camera pose transformation. A Gaussian distribution with a mean of zero and a standard deviation of 0.005 was used to augment the camera pose. Furthermore, both color images and point clouds were augmented by performing random cropping.

§ EXPERIMENTAL EVALUATION
In this section we present the evaluation of the performance of MOT-DETR in different scenarios. We evaluate the detection performance and inference speed of MOT-DETR.
We also compare our default MOT-DETR with 2D and 3D inputs (3D) against a version with only 2D input (2D) and a state-of-the-art MOT algorithm, FairMOT <cit.>, on different types of sequences with real and synthetic data. Furthermore, since all point clouds are transformed into the robot coordinate system using the camera pose of each viewpoint, we will show the effect of noise in the camera pose transformation on the performance of MOT-DETR.

§.§ Detection performance and inference speed
We studied the detection performance of the 2D and 3D versions of MOT-DETR. Table <ref> shows the number of parameters, inference speed, and the mean average precision (mAP) of our two variants on both real and synthetic test sets. MOT-DETR-3D contains approximately 20 M more parameters than MOT-DETR-2D. Consequently, the inference time of MOT-DETR-3D is slower than that of its 2D version. Nevertheless, even MOT-DETR-3D is able to run at 43.11 frames per second on an Nvidia RTX 4090 GPU. This is sufficient speed for most robotic applications. The detection performance, evaluated as mean average precision (mAP), of MOT-DETR-3D is three points higher on the real data, but two points lower on the synthetic data, compared to MOT-DETR-2D. This might be due to the greater uniformity of tomatoes in the synthetic data, facilitating their distinction from the background in the 2D data compared to the real dataset (see Fig. <ref>-right and Fig. <ref>-right). Consequently, the additional insights offered by the 3D data do not significantly enhance the model's performance there. In contrast, for the real data, leveraging 3D information can potentially augment the network's ability to detect tomatoes. Despite multi-tasking algorithms often exhibiting marginally inferior performance compared to single-task ones <cit.>, our detection and re-identification network matches the performance of prevalent detection-only networks for tomatoes <cit.>.

§.§ Tracking performance
The problem of object tracking in robotics is not always similar to object tracking in videos. Changes of perspective and long-term occlusions are more common in robotic applications. To study the performance of our algorithm on multiple types of sequences, we defined three different experiments:
* Real-Sort. For our real test set, we have two sequences of 600 frames at two distances from the plant. For each sequence, we selected 100 random viewpoints. Then we sorted the viewpoints with the objective of minimizing frame-to-frame distance. This generates a 100-frame sequence similar to a video sequence with a low frame rate.
* Real-Random. Similarly to the previous experiment, 100 viewpoints were selected out of the pool of viewpoints per plant. However, they are not sorted in this case. This generates a sequence where jumps between frames are larger, and objects are occluded for several frames.
* Synthetic-Random. Our synthetic dataset was recorded with more degrees of freedom than our real data. Consequently, we designed an experiment to evaluate the performance of our algorithm when 100 random and unordered frames are selected out of the pool of viewpoints of our synthetic test plants.
We evaluated the tracking accuracy using Higher Order Tracking Accuracy (HOTA), with its sub-metrics Localization Accuracy (LocA), Detection Accuracy (DetA) and Association Accuracy (AssA), and with Multi-Object Tracking Accuracy (MOTA) and its sub-metric ID Switches (IDSW). Each experiment was repeated five times by selecting a different set of 100 random viewpoints.
A t-test was then used to assess the significance of the differences between the models. The first five frames from a sequence in each experiment are illustrated in Fig. <ref>. We compared three models: MOT-DETR-2D, MOT-DETR-3D, and the state-of-the-art tracking algorithm FairMOT <cit.>. The key differences between FairMOT and our proposed MOT-DETR are that FairMOT's architecture is based on CenterNet, a standard CNN, and that it uses only 2D data. For this experiment, we used a pre-trained FairMOT network and fine-tuned it using the real training set. Therefore, we only evaluate FairMOT using real data. In the Real-Sort experiment, Table <ref> reveals MOT-DETR-2D to be nearly on par with FairMOT. However, MOT-DETR-3D clearly outperforms any model using only 2D data as input. This is expected, as the use of 3D data provides relevant information for tracking objects that look similar, like tomatoes. FairMOT <cit.> only outperforms MOT-DETR in LocA, potentially due to its prior training on a vast object detection dataset, unlike MOT-DETR. For the Real-Random experiment, the performance difference between MOT-DETR-3D and both of the 2D algorithms becomes larger. This is due to the performance of MOT-DETR-2D and FairMOT decreasing more than that of MOT-DETR-3D when the distance between viewpoints is larger, showing that MOT-DETR-3D can successfully use 3D data to improve tracking accuracy in challenging sequences. Furthermore, it can be seen that in random sequences MOT-DETR-2D significantly outperforms FairMOT. This is expected, as FairMOT is designed to work on high frame rate sequences with overlap between frames, which is not always the case in the Real-Random sequences. In sequences with more DoF, like the Synthetic-Random experiment, MOT-DETR-3D's performance resembles the Real-Random one. Compared to Real-Random, the HOTA score improves by only 0.25 points, and the MOTA score decreases by 0.89 points. This suggests that the 3D variant of MOT-DETR optimally leverages 3D data, maintaining consistent performance across varying sequence conditions. In this situation with more DoF, the performance decrease might be larger if real data were to be used, as the difference between consecutive viewpoints is larger than in Real-Random. However, the use of synthetic data might counter the performance drop from using more DoF, as synthetic data might be easier to process.

§.§ Effect of camera pose noise
Our proposed algorithm takes as input a point cloud that is transformed to the robot world frame using the camera pose at every viewpoint. The data used in our experiments has low camera pose noise, since the synthetic data provides perfect camera poses and the robot arm provides highly accurate camera poses. This is not always the case in robotic applications. Robots using odometry or SLAM algorithms are common, and these approaches yield higher pose noise. Therefore, to study the effect of camera pose noise on our algorithm, we added 6-DoF artificial Gaussian noise to the camera pose at each viewpoint. Table <ref> shows the performance of our algorithm with three different noise levels. The results are displayed as a delta over the same sequences shown in Table <ref> for Real-Random and Synthetic-Random. It can be seen that the performance of MOT-DETR does not present a statistically significant decrease in the Real-Random sequences with the applied noise levels. The only statistically significant decrease can be found when T_noise is set to 0.05 in the Synthetic-Random sequences. This suggests that MOT-DETR is resilient to some amount of camera pose inaccuracy. This resilience can be explained by the fact that similar random noise was applied as data augmentation at training time. Furthermore, the real dataset inherently contained camera pose noise due to the nature of the real-world system, while the camera pose error on the synthetic dataset was zero. This can explain why, at larger noise levels, the performance decreases significantly on the synthetic data.
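For reference, the sketch below shows one way to apply zero-mean 6-DoF Gaussian noise to a camera pose before the point cloud is transformed into the world frame, of the kind used both as training-time augmentation and in this experiment. The exact parametrisation (independent per-axis rotation-vector and translation noise, the standard deviations shown, and pre-multiplication of the pose) is an assumption of this illustration rather than a description of our implementation.

import numpy as np
from scipy.spatial.transform import Rotation

def perturb_pose(T_world_cam, sigma_t=0.01, sigma_r=0.01):
    """Return a 4x4 camera pose perturbed with zero-mean Gaussian noise on all six DoF."""
    noise = np.eye(4)
    # Small random rotation built from a Gaussian rotation vector (radians).
    noise[:3, :3] = Rotation.from_rotvec(np.random.normal(0.0, sigma_r, size=3)).as_matrix()
    # Gaussian translation offset (same units as the pose, e.g. metres).
    noise[:3, 3] = np.random.normal(0.0, sigma_t, size=3)
    return noise @ T_world_cam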
§ CONCLUSIONS
In this work, we have introduced MOT-DETR, an algorithm to perform 3D multi-object tracking in robotic multi-view perception applications. We showed that MOT-DETR outperforms state-of-the-art MOT algorithms and can be used by robots to build a representation in challenging real-world environments like a tomato greenhouse. Furthermore, we showed that MOT-DETR can successfully use 3D information to improve 3D MOT in complex sequences with large distances between frames and long-term occlusions. Additionally, we showed that our algorithm is resilient to noise in the camera pose.
http://arxiv.org/abs/2311.15674v1
{ "authors": [ "David Rapado-Rincon", "Henk Nap", "Katarina Smolenova", "Eldert J. van Henten", "Gert Kootstra" ], "categories": [ "cs.RO" ], "primary_category": "cs.RO", "published": "20231127100301", "title": "MOT-DETR: 3D Single Shot Detection and Tracking with Transformers to build 3D representations for Agro-Food Robots" }
Automated discovery of trade-off between utility, privacy and fairness in machine learning models
Bogdan Ficiu^1, Neil D. Lawrence^2 (0000-0001-9258-1030), Andrei Paleyes^2,3 (0000-0002-3703-8163)
^1 Google (work done while at the University of Cambridge)
^2 Department of Computer Science and Technology, University of Cambridge, UK
^3 Correspondence to: [email protected]

Machine learning models are deployed as a central component in decision making and policy operations with direct impact on individuals' lives. In order to act ethically and comply with government regulations, these models need to make fair decisions and protect the users' privacy. However, such requirements can come with a decrease in the models' performance compared to their potentially biased, privacy-leaking counterparts. Thus the trade-off between fairness, privacy and performance of ML models emerges, and practitioners need a way of quantifying this trade-off to enable deployment decisions. In this work we interpret this trade-off as a multi-objective optimization problem, and propose PFairDP, a pipeline that uses Bayesian optimization for discovery of Pareto-optimal points between fairness, privacy and utility of ML models. We show how PFairDP can be used to replicate known results that were achieved through a manual constraint-setting process. We further demonstrate the effectiveness of PFairDP with experiments on multiple models and datasets.

§ INTRODUCTION
During the past two decades, machine learning (ML) models have been integrated into a wide range of industries and applications. Crucially, these include high-stakes sectors such as healthcare <cit.>, education <cit.>, hiring <cit.>, pretrial detention <cit.>, financial lending <cit.>, and social services <cit.>. While significant efforts are being made towards developing more accurate ML models, practical scenarios impose additional requirements beyond performance. Protecting the users' privacy and ensuring non-discrimination against demographic subgroups are two critical prerequisites which need to be addressed prior to deploying a system in real-life settings. Consequently, ML practitioners need to ensure that these systems do not put the users' data at risk and that they do not discriminate based on protected attributes before deployment. Addressing these considerations is not only a matter of moral obligation but also a legal requirement, privacy and fairness being mandated by government laws and regulations such as the General Data Protection Regulation (GDPR) or the Equal Credit Opportunity Act (ECOA). Differential privacy <cit.> has emerged as the de facto standard notion in privacy-preserving data analysis, enabling a rigorous and practical formalization of data privacy for applications that process sensitive information. Enforcing differential privacy to bound the risk of data disclosure involves random perturbations being applied during computations so as to limit the influence of any individual sample on the outcome of queries. Inevitably, this noise injection will also induce a loss in the overall utility of the system, and determining the optimal balance between privacy and precision remains a challenging problem <cit.>.
Similarly, available techniques for improving fairness in ML models rely on preprocessing the datasets used for training, modifying the learning procedure, or postprocessing the final results <cit.>. While these approaches are indeed successful in reducing bias, they may also affect the utility of the models. Finally, many works emphasise that privacy and fairness are not independent objectives <cit.>. For example, training a model with privacy guarantees can lead to disparate performance across different groups in the population <cit.>. Quantification of the impact of differential privacy on fairness in ML is an area of active research <cit.>. It is therefore necessary to consider the joint impact of utility, fairness and privacy on each other <cit.>. To address these challenges we introduce PFairDP, a pipeline for automatically quantifying the trade-off between differential privacy, fairness and performance of ML models. Compared to previous work in the field <cit.>, it is not limited to training models at predefined levels of privacy, fairness or utility. Instead, our approach to this task extends DPareto, proposed by Avent et al. <cit.>, which interprets the trade-off between differential privacy and utility as a choice of Pareto-optimal points in two dimensions. We incorporate fairness as an additional objective and implement a multi-objective Bayesian optimization procedure to efficiently estimate the 3D fairness-privacy-utility Pareto fronts in an automated and model-agnostic manner, relying only on empirical measurements of the model's utility, bias, and privacy. The modular structure of PFairDP supports most existing pre- and postprocessing fairness enforcement techniques. The ultimate goal of PFairDP is to efficiently determine the configurations which best balance the three-way fairness-privacy-utility trade-off. The proposed method allows decision makers to choose whether, with similar levels of utility, they want to prioritise privacy over fairness or use a configuration that balances the two. Similarly, the 3D Pareto frontiers can be used to approximate how much privacy and fairness a model can guarantee while maintaining acceptable levels of utility. The experiments performed throughout this paper illustrate the capabilities of our method and its flexibility in relation to models and datasets.

§ MOTIVATION
Currently, the literature investigating this three-way trade-off is limited to enforcing manually chosen levels of privacy and fairness, then evaluating the cost incurred in utility. However, due to the conflicting nature of the three objectives, we argue that this problem would be best addressed as a matter of reaching Pareto efficiency. At a Pareto-optimal solution, improvement in one of the objectives necessarily deteriorates at least one of the others. Existing work relies on predefined fairness and privacy constraints and so is prevented from exploring the Pareto front. By exploring a wider range of configurations, our framework provides better insight into the limitations of models and enables decision makers to prioritise certain requirements, evaluate the effects of their choices, and take informed actions before deployment. As a practical example of the application of our work, consider a scenario in which a census bureau is tasked with releasing an ML model trained on the data collected during the most recent census.
The task could be small-area estimation <cit.>, administrative lists management <cit.>, matching of records across databases <cit.>, classification of business entities <cit.>, or funds allocation <cit.>. Users of the model could be another government-run department or a private contractor. As the model might be used for highly sensitive decisions, the census bureau has to take measures to make sure that citizens' data used for training is protected and that no group is discriminated against, while achieving the highest possible performance. Data scientists employed by the bureau can re-train the model as many times as necessary before releasing it to ensure this trade-off is resolved satisfactorily. Methods currently existing in the literature propose to hand-pick limits for fairness and privacy, and perform training for the highest utility possible under these constraints. This approach faces the difficult question of picking the right limits <cit.>, while also potentially missing better model configurations. Instead, our proposed method provides the entire Pareto front between privacy, fairness and utility. This allows model developers to make informed decisions on what levels of fairness, privacy and utility can be achieved, while also facilitating a discussion with the model's users on the desired model behaviour.

§ RELATED WORK
A growing awareness of potential bias and privacy leaks of ML algorithms and datasets is reflected by an increasing body of literature dedicated to exploring their effects on ML models. Feldman and Peake <cit.> propose an end-to-end bias mitigation framework and demonstrate its effectiveness on a case study of bias mitigation in a deep learning setting. Morsbach et al. <cit.> investigate the relationship between neural network architectures and model accuracy under differential privacy constraints. Sharma et al. <cit.> utilise the distance of data observations to the decision boundary to create a training process for neural networks that are more fair and adversarially robust, while maintaining a similar level of accuracy. On a more theoretical side, Geng et al. <cit.> provide tight lower and upper bounds on the privacy-utility trade-off. While the two-way privacy-utility and fairness-utility relationships are studied extensively, the literature on the joint relationship between all three metrics is scarce. Pannekoek and Spigler <cit.> and Xu et al. <cit.> propose a constraint-based approach, where the focus is on evaluating models under predefined fairness and DP constraints. Such a constraint-based approach can leave optimal configurations unexplored and does not provide actionable information regarding the objectives' trade-offs. We show how the results of both of these papers can be replicated in an automated way using our pipeline, highlighting the added value of automation over constraint-guided exploration. Further, Chester et al. <cit.> study the same three-way relationship in the context of medical data, while Zhang et al. <cit.> focus on the same relationship in a federated learning context. Both papers empirically explore the trade-off, and allude to optimization techniques for answering the question "How to choose a good balance?", thus paving the way for our method.

§ PFAIRDP – FAIRNESS- AND DP-AUGMENTED PIPELINE
The core component of our work is PFairDP, a Parametrized Fair and Differentially Private training pipeline for ML models.
In order to provide a modular implementation that can be employed as a model-agnostic training pipeline, the architecture of PFairDP is divided into three independent modules: (1) a fairness module, (2) a DP module and (3) a training module. We implemented PFairDP using the PyTorch framework <cit.>, Opacus <cit.>, AIF360 <cit.>, and BoTorch <cit.>[The code is openly available at <https://github.com/apaleyes/dp-fairness-multi-objective-bayesian-optimisation>]. At a high level, the fairness module (section <ref>) implements pre- and post-processing algorithms aimed at reducing bias in the datasets and the models' predictions. The DP module (section <ref>) augments the model, the dataset and the optimizer with DP-related capabilities required for enforcing privacy (e.g. per-sample gradient computation and noise injection), and finally the training module (section <ref>) performs the training routine of the model. Given the model's final (optionally postprocessed) predictions, the pipeline returns the associated fairness measure, privacy budget, and utility, which are then used by the Bayesian optimization loop for the Pareto frontier discovery. As previously emphasised, PFairDP does not enforce predefined levels of privacy, fairness, or utility when training the models; instead, each module introduces a set of parameters affecting at least one of the objectives. The remainder of this section describes each module in greater detail.

§.§ Objective 1 - Fairness
In the context of ML decision-making, fairness has been defined as the absence of any prejudice or favoritism toward an individual or group based on their inherent or acquired characteristics <cit.>. While this formulation provides some intuition about the expected behaviour of an unbiased model, formalising this goal into a widely applicable and generally accepted representation remains a challenge due to the complex and multi-faceted nature of fairness, with more than 21 mathematical definitions presented in the relevant literature <cit.>. Specific fairness definitions can be separated into two main categories: individual fairness, enforcing that similar individuals receive similar predictions, and group fairness, which ensures that different groups (e.g. males and females) receive the same predictions with close to equal probabilities. The focus of this work is on the latter, as we are interested in removing bias with respect to certain protected attributes (e.g. gender, religion), and while there are a multitude of fairness metrics for this category as well, we will be using two common fairness definitions <cit.> for the remainder of this paper: statistical parity difference (SPD, <cit.>) and disparate impact (DI, <cit.>). If needed, PFairDP can be used with other definitions. In the ideal case, the outcomes of an unbiased model which enforces group fairness would be independent of membership in a sensitive group or, in other words, the optimal values for the fairness objective are SPD ∼ 0 and DI ∼ 1. In practice, the fairness of a classifier will be assessed with respect to the following bounds on the two metrics introduced above:
* 0 ≤ SPD ≤ 0.1 – which we will refer to as the lenient SPD threshold <cit.>.
* 0 ≤ SPD ≤ 0.05 – which we will refer to as the strict SPD threshold <cit.>.
* 0.8 ≤ DI ≤ 1.25 – which we will refer to as the standard DI threshold <cit.>.
PFairDP supports pre- and postprocessing techniques for enforcing fairness. Our modular implementation allows the use of any such technique, and we showcase two of them in this paper.
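To make the two metrics concrete, the short snippet below computes them for binary predictions and a binary protected attribute. It only illustrates the definitions above (PFairDP builds on AIF360, which provides its own implementations of these metrics); the use of the absolute value for SPD and the variable names are choices made here for the example.

import numpy as np

def statistical_parity_difference(y_pred, protected):
    # y_pred: binary predictions; protected: 1 for the privileged group, 0 otherwise.
    p_unpriv = y_pred[protected == 0].mean()   # P(Y_hat = 1 | unprivileged)
    p_priv = y_pred[protected == 1].mean()     # P(Y_hat = 1 | privileged)
    return abs(p_unpriv - p_priv)              # SPD, ideally ~0

def disparate_impact(y_pred, protected):
    p_unpriv = y_pred[protected == 0].mean()
    p_priv = y_pred[protected == 1].mean()
    return p_unpriv / p_priv                   # DI, ideally ~1

The two bias mitigation techniques showcased in this paper are described next.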
Disparate Impact Remover (DIR, <cit.>) is a preprocessing algorithm which modifies the values of protected attributes in order to remove distinguishing factors and improve group fairness with respect to the disparate impact metric. The procedure is parametrized with respect to one argument, the repair level ∈ [0, 1], which controls how much the distributions of the privileged and unprivileged groups should overlap, a value of 1 indicating complete overlap between the two groups. Reject Option Classification (ROC, <cit.>) is a postprocessing algorithm which increases fairness by assigning favorable labels to instances in the unprivileged group and unfavorable labels to instances in the privileged group in a confidence band around the decision boundary with the highest uncertainty. The ROC technique alters the outputs of a binary classifier based on its predicted posterior probabilities so that, instead of solely relying on the standard decision rule (P(Y = 1 | X) - P(Y = 0 | X)), instances that lie close to the decision boundary are labelled based on their group membership. The algorithm is designed to minimize statistical parity difference.

§.§ Objective 2 - Privacy
Differential Privacy (DP) is a mathematical framework for quantifying privacy in statistical analysis, which allows performing complex computations over large datasets while bounding the disclosure of information about individual data points <cit.>. To this end, DP does not define privacy as binary (has the data been exposed or not) but in terms of a "privacy budget", which limits the influence of any individual sample on the output of an algorithm. This notion is formally quantified by a pair of parameters (ϵ, δ). There is no agreed-upon threshold below which algorithms are considered private <cit.>. In practice <cit.>, δ is chosen beforehand as a fixed small value and the privacy budget is characterised by the ϵ parameter, with smaller values indicating stronger privacy guarantees. The remainder of this work will follow this convention, with privacy reported with respect to ϵ, and δ regarded as fixed: δ ≪ 1/n, where n is the number of records in the dataset used[This is a valid assumption as there is a proven connection between ϵ and δ <cit.>]. One standard mechanism for achieving (ϵ, δ)-differential privacy is through the introduction of DP optimizers in the training procedure <cit.>. The DP-aware optimization used in PFairDP adds noise to the parameter gradients in every iteration, thus preventing training samples from being memorized and altering the outputs of the model. The amount of noise added during computations is controlled by two parameters: a noise multiplier, representing the amount of noise added to the average of the gradients in a batch, and a clipping norm, representing the maximum l_2 norm of per-sample gradients, which bounds the impact of a single sample on the model.

§.§ Objective 3 - Utility
Finally, the training module of PFairDP implements a standard training procedure for ML models. In the case of neural networks, which are the focus of the experimentation section of our paper, this introduces three additional parameters: the batch size, the number of epochs, and the optimizer's learning rate. These hyperparameters affect not only the final utility of the model, but also the privacy budget (e.g. a larger batch size increases the privacy budget) and the fairness level (e.g. training for longer on a debiased dataset will increase the fairness level while decreasing the privacy budget).
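To illustrate how the parameters of the DP and training modules interact in practice, the following sketch shows a DP-SGD setup with Opacus of the kind the DP module wraps around the model, optimizer and data loader. The toy model, synthetic data and parameter values are placeholders chosen for the example, and the snippet assumes the Opacus 1.x API; the exact integration inside PFairDP may differ.

import torch
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Placeholder model and data standing in for the (optionally debiased) training set.
X, y = torch.randn(1000, 20), torch.randint(0, 2, (1000, 1)).float()
data_loader = DataLoader(TensorDataset(X, y), batch_size=64)
model = torch.nn.Sequential(torch.nn.Linear(20, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

privacy_engine = PrivacyEngine()
model, optimizer, data_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=data_loader,
    noise_multiplier=1.0,  # noise added to the averaged, clipped per-sample gradients
    max_grad_norm=1.0,     # clipping norm bounding each sample's gradient contribution
)

criterion = torch.nn.BCEWithLogitsLoss()
for epoch in range(5):                     # the number of epochs also affects the spent budget
    for xb, yb in data_loader:
        optimizer.zero_grad()
        criterion(model(xb), yb).backward()
        optimizer.step()

epsilon = privacy_engine.get_epsilon(delta=1e-5)  # privacy budget ϵ reported back by the pipeline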
In this work we focus on binary classification problems so that fairness can be quantified with respect to whether (un)favorable outcomes are assigned to individuals in the privileged and unprivileged groups. If necessary, it is possible to formulate a generalization beyond this use case.

§.§ Determining optimal configurations
PFairDP can be seen as a black-box function which, given a set of hyperparameters, returns the associated values for the three objectives. Proceeding with this mental model, the task of determining configurations which best balance the three-way fairness-privacy-utility trade-off becomes a multi-objective optimization problem. Multi-objective Bayesian optimization (MOBO) provides a computationally effective way to address this task. Bayesian Optimization (BO) <cit.> is a class of sample-efficient optimization techniques which have shown success in the optimization of black-box objective functions with high evaluation costs, establishing a new state-of-the-art in ML hyperparameter tuning <cit.> as well as various other domains <cit.>. At a high level, given an expensive black-box function f that needs to be minimized, BO learns a probabilistic surrogate model of the function based on a limited set of evaluations, and relies on an acquisition function to determine the next point to be evaluated based on the learned probabilistic model. The main advantage of the BO framework is that while the true function f might be expensive to evaluate, the surrogate-based acquisition function is not and can therefore be used to determine a set of potential candidates that minimize the objective. Because the main focus of this paper is on simultaneously optimizing for multiple objectives (the utility of the model, the privacy budget, and the fairness level), we introduce below the formalism of multi-objective BO.
Pareto Fronts. For multi-objective optimization problems there is usually no single solution that can simultaneously optimize all objectives; rather, the goal is to identify the set of Pareto optimal solutions such that improving any one of the objectives comes at the cost of deteriorating another. Formally (assuming a minimization problem), we say that a solution f(x) Pareto dominates another solution f(x') if for all k ∈ {1, 2, ..., m} it holds that f^k(x) ≤ f^k(x') and there exists at least one k which satisfies f^k(x) < f^k(x'), where m is the number of objectives. The Pareto frontier of optimal trade-offs is defined as the set of non-dominated solutions, and the goal of a multi-objective optimization algorithm is to determine an approximate Pareto frontier.
Multi-objective Bayesian Optimization. Without loss of generality, we consider a minimization problem of m ≥ 1 objective functions f_1: 𝐗→ℝ, ..., f_m: 𝐗→ℝ, where 𝐗⊂ℝ^n is a bounded set. If we assume 𝐃 = (x_i, f(x_i))_i = 1^k to be a dataset of known evaluations of the objectives[In the context of our experiment 𝐃 would represent the associated privacy budget, fairness level, and utility for a set of hyperparameter configurations, determined from previous evaluations of the pipeline.], the MOBO procedure can be defined as a four-step process repeated for a predefined number of iterations:
* Fit a surrogate model of the objectives using the observed data 𝐃. In our work we use Gaussian processes (GP, <cit.>).
* Determine the posterior distribution P(f | 𝐃) over the true function values f using the surrogate model.
* Collect the next evaluation point x_k+1 at the estimated global maximum of the acquisition function, based on the observed values 𝐃 and the posterior. We use the expected hypervolume improvement (EHVI) <cit.> as the acquisition function.
* Update the set of observations 𝐃 with the new sample.
EHVI Acquisition Function. The acquisition function is a key component of the MOBO procedure that determines the new evaluation point while balancing exploration and exploitation. PFairDP uses the expected hypervolume improvement (EHVI) acquisition function, which we now explain. Given a set 𝐏⊂ℝ^m, the hypervolume of 𝐏 is defined as the hypervolume of its dominated region bounded by a fixed user-defined reference point. The reference, or anti-ideal, point is a point in the objective space dominated by all of the Pareto-optimal solutions and is usually chosen by the practitioner based on domain knowledge. For our use case, this could be selected to indicate the worst possible values for the utility of the model, the fairness level and the privacy budget. Formally, this can be expressed as ℋ(𝐏) = μ({𝐲∈ℝ^m | 𝐲≺𝐫 and ∃ p ∈𝐏: p ≺𝐲}), where μ denotes the standard Lebesgue measure on ℝ^m. A larger hypervolume implies that the points in 𝐏 are closer to the true (unknown) Pareto front. We now define the hypervolume improvement of a vector 𝐲∈ℝ^m with respect to a set 𝐏 as ℋ_I(𝐲, 𝐏) = ℋ(𝐏∪{𝐲}) - ℋ(𝐏), which is positive only if the point 𝐲 lies in the set of points non-dominated by 𝐏. In the context of MOBO, 𝐏 would be the set of samples evaluated so far and 𝐲 the random output of the GPs used for modelling the objectives. Finally, the expected hypervolume improvement is defined as the expected value of ℋ_I(𝐲, 𝐏) over the distribution P(𝐲), Eℋ_I(𝐲) = E[ℋ_I(𝐲, 𝐏)]. In other words, EHVI computes the expected gain in hypervolume of observing one candidate point. In the MOBO procedure we therefore choose the point which maximizes the EHVI as the new sample to evaluate. The optimization loop is allowed to execute within a predefined budget of evaluations. When the loop terminates, the Pareto front is approximated based on the set of non-dominated solutions determined throughout the sampling process. The quality of the Pareto front is evaluated with respect to the hypervolume indicator, which calculates the volume encapsulated between the reference point and the Pareto-optimal points.
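As an indication of how such a loop can be realised with the BoTorch library used in our implementation, the sketch below shows a single MOBO iteration for a maximisation formulation (the pipeline outputs are assumed to have been transformed so that larger values are better for all three objectives). The class and function names follow the BoTorch multi-objective tutorial and may differ between releases; the numbers of restarts and raw samples are arbitrary example values.

import torch
from botorch.models import SingleTaskGP, ModelListGP
from botorch.fit import fit_gpytorch_mll
from gpytorch.mlls import SumMarginalLogLikelihood
from botorch.acquisition.multi_objective import ExpectedHypervolumeImprovement
from botorch.utils.multi_objective.box_decompositions.non_dominated import FastNondominatedPartitioning
from botorch.optim import optimize_acqf

def mobo_step(train_x, train_y, bounds, ref_point):
    # train_x: (k, n) evaluated hyperparameter configurations (double precision tensors),
    # train_y: (k, m) corresponding objective values, ref_point: (m,) anti-ideal point.
    models = [SingleTaskGP(train_x, train_y[:, i:i + 1]) for i in range(train_y.shape[-1])]
    model = ModelListGP(*models)                                   # steps 1-2: fit GP surrogates
    fit_gpytorch_mll(SumMarginalLogLikelihood(model.likelihood, model))
    partitioning = FastNondominatedPartitioning(ref_point=ref_point, Y=train_y)
    acqf = ExpectedHypervolumeImprovement(model=model, ref_point=ref_point.tolist(),
                                          partitioning=partitioning)
    candidate, _ = optimize_acqf(acqf, bounds=bounds, q=1,         # step 3: maximise EHVI
                                 num_restarts=10, raw_samples=128)
    return candidate  # step 4: evaluate this configuration with PFairDP and append it to the data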
§ AUTOMATION OF EXISTING APPROACHES
The modularity of PFairDP, together with the high-level abstractions introduced in each module, enables a wide range of bias mitigation algorithms and DP configurations to be integrated into the pipeline. In this section we show how our framework can be used to reproduce experiments from the literature in a more automated manner. We show that with PFairDP these constraint-based approaches become special cases of our method whose results can be reproduced with only minor modifications of our pipeline.

§.§ Trade-offs in Utility, Fairness and DP in Neural Networks
The work of Pannekoek and Spigler <cit.> in enabling an ethical and legal use of ML algorithms most closely resembles the aims of our project. More specifically, while the authors also evaluate the privacy-utility-fairness trade-off in ML, their proposed framework focuses on evaluating the models under predefined fairness and DP constraints. The authors evaluate four models: a Simple (S-NN), a Fair (F-NN), a Differentially Private (DP-NN), and a Differentially Private and Fair Neural Network (DPF-NN), a detailed description of which can be found in Appendix <ref>. They use the Adult dataset (described in Appendix <ref>) and regard `sex' as the sensitive attribute (treated as binary). In terms of data preprocessing, they apply the following transformations: list-wise deletions, one-hot encoding of categorical variables, normalization of continuous variables, and train-dev-test splits in proportions of 53.4%, 13.3%, and 33.3% of the total. To enable a fair comparison, we also replicate these preprocessing steps in our experiment. The authors train each of the four models on the Adult dataset and evaluate utility in terms of the mean accuracy of the models and fairness with respect to the average risk difference after ten independent runs. The privacy level is regarded as a fixed constraint (ϵ = 0.1 for δ = 0.00001). As shown in table <ref>, replicating these experiments with PFairDP is a matter of only enabling certain modules in the pipeline and configuring the core module with the same settings. The appropriate level of noise to be added in the DP module is identified beforehand based on the required privacy level ϵ, the sampling rate, the number of epochs and the fixed δ. In the postprocessing module the ROC algorithm is initialised with the default parameters for consistency with the original work. The performance of the four original models and their PFairDP counterparts is displayed in table <ref>. As expected, models implemented with PFairDP exhibit most of the patterns observed by the authors: (1) non-fair models rank best in terms of accuracy but have the highest bias; (2) the fair, non-private model reduces bias below what is considered to be the standard lenient threshold <cit.> but also performs worse in terms of utility; (3) the DP and fair model is the most effective in mitigating bias, reducing the risk difference below the standard strict threshold <cit.>. One discrepancy noted during our evaluations arose in the comparison of the differentially private and fair DPF-NN model and the fair-only F-NN model. While Pannekoek and Spigler note that DPF-NN outperforms F-NN with respect to both fairness and utility, our evaluations suggest that the latter is in fact negatively affected. This result is consistent with existing literature <cit.>, which notes that imposing DP constraints on neural networks leads to decreases in the utility of the models.

§.§ Achieving DP and Fairness in Logistic Regression
Xu et al. <cit.> propose a privacy-preserving and fair logistic regression (LR) model which incorporates a penalising term in the objective in order to ensure fairness and a functional mechanism <cit.> which also perturbs the objective function to enforce DP. While the framework proposed by Xu et al. is indeed successful in achieving both fairness and DP for a logistic regression model, we illustrate below how PFairDP can be used to achieve comparable results without relying on model-specific augmentation procedures. The authors propose a total of four modified versions of logistic regression (LR): PrivLR, FairLR, PFLR, and PFLR* (the latter provides stronger fairness guarantees compared to PFLR). Their detailed description can be found in Appendix <ref>. Risk difference is used as the fairness metric, and evaluation is done with the Adult dataset. PFairDP configurations for this experiment are summarised in table <ref>.
All of the models implemented in PFairDP are trained for a fixed duration of 100 epochs using the Adam optimizer with minibatches of size 20 and a fixed learning rate of 1e-3. One important distinction between our work and that of Xu et al. is that their definition of DP assumes δ = 0 (known in the literature as ϵ-differential privacy <cit.>), which is incompatible with our Opacus implementation of the privacy module. As such, in all of our evaluations δ is set to be a sufficiently small and fixed constant (δ = 0.00001, as used in <cit.> on the same task). Table <ref> displays the performance of the three original models and their PFairDP counterparts with respect to the utility and fairness objectives for various privacy constraints. Note that results for FairLR are not displayed because Xu et al. <cit.> only include evaluations of PrivLR, PFLR, and PFLR* when varying the privacy budget. First considering the models which only enforce DP and no debiasing procedures (PrivLR and PFairDP with only the DP module enabled), we note that the degradation in utility, relative to the original non-private and non-fair logistic regression model which achieves a mean accuracy of 84%, is significantly less pronounced for PFairDP. However, this improvement does not come at the cost of fairness, with risk difference generally being lower in PFairDP and below the lenient SPD threshold. Together with the observation that the fairness of the model decreases as the privacy level is decreased, this suggests that the noise injected during the training procedure of the neural network acts as a fairness regularising term, perturbing the final predictions to include more positive predictions for individuals in the unprivileged group and reducing the bias of the model. A similar pattern can be observed when comparing PFLR and PFairDP with DP and fairness preprocessing enabled, both models achieving similar performance in terms of risk difference but the latter showing higher accuracy. Additionally, significant decreases can be noted in terms of variance for both objectives (by a factor of 10 on average), suggesting that our training pipeline is much more stable than the objective perturbation approach employed by the authors. As expected, increasing the value of ϵ (i.e. reducing the privacy level by decreasing the amount of noise) improves the utility of both PFLR and PFairDP-PFLR. Finally, we note that while combining the DIR fairness pre-processing and ROC post-processing procedures leads to a decrease in accuracy compared to PFairDP-PFLR, the performance of our model closely matches that of PFLR*. When the privacy constraint is relaxed, PFairDP only exhibits minor degradation with respect to the fairness objective, whereas the risk difference of the logistic regression model increases by a factor of 4 when the privacy budget varies from 0.1 to 10. These results suggest that PFairDP can enable an automated way of generating fair and differentially private models without relying on model-specific augmentations and constraint-setting.

§ AUTOMATIC DISCOVERY OF FAIRNESS - PRIVACY - UTILITY PARETO FRONTS
This section demonstrates the effectiveness of PFairDP for determining Pareto fronts on two binary classification tasks. Here PFairDP is evaluated in its intended execution mode, with all the parameters in the fairness, DP, and training modules varying within predefined ranges.

§.§ Experimental Setup
Datasets.
Experiments are performed on two standard benchmark datasets in the fair and private machine learning literature: Adult and MEPS. See Appendix <ref> for their general descriptions and motivation for using bias mitigation and privacy preserving techniques on these datasets. Models. The model trained on the Adult dataset follows the same architecture used by Pannekoek and Spigler <cit.> (previously described in section <ref>). For MEPS we use a similar configuration suggested by Sharma et al. <cit.>, in the form of a two layer dense neural network with 30 hidden units in each layer and ReLU activations, followed by a sigmoid on the output layer. PFairDP Module Configurations and Optimization Domains. Both training datasets encode some level of favoritism towards the privileged group above the acceptable threshold (see Appendix <ref> for details). To reduce the level of disparate impact potentially exhibited by models trained on these datasets, the fairness module is enabled with DIR preprocessing for varying values of the repair level hyperparameter. In both experiments the DP module is enabled using its standard parameters: the noise multiplier, controlling the amount of noise added to the average of the gradients in a batch, and the clipping norm, which bounds the maximum l_2 norm of per-sample gradients. In order to also explore the effect of different optimization algorithms used in the training procedure, experiments are performed using privatized versions of the Adam optimizer <cit.> for Adult and stochastic gradient descent (SGD) <cit.> for MEPS. The same three hyperparameters are varied in both experiments, namely the number of training epochs, the optimizer's learning rate and the batch size. Additionally, we have applied transformations to the PFairDP outputs for convenience of GP modelling. These transformations, as well as value ranges for all input parameters, can be found in Appendix <ref>. Anti-ideal point selection. Previously introduced when describing the implementation of the MOBO procedure in section <ref>, an anti-ideal point is required for computing the hypervolume improvements of the Pareto frontiers determined during the optimization loops and for evaluating the quality of the frontiers. The anti-ideal point is dominated by all the Pareto-optimal solutions and defines bounds on the worst possible values for each of the three objectives. In our experiments we use (acc, fair, DP) = (0, 1, 1) as the anti-ideal point[Note that the transformations described above are also applied to the reference point.], encoding our interest in Pareto frontiers which capture a practical privacy range (ϵ≤ 1) across all possible utility and fairness values (since these can never exceed 1 or be below 0). §.§ Experimental Results This section evaluates the performance of the MOBO procedure in approximating Pareto frontiers on the two binary classification tasks (Adult and MEPS). Additionally, two commonly used hyperparameter tuning strategies, grid and random search, are used as baselines for comparison. For each task we execute: * 250 BO iterations with 16 random initial configurations. * 256 rounds of grid search, using 4 uniform samples per parameter, with fixed batch size and the learning rate. * 300 rounds of random search. Sparse 3D Pareto fronts produced by MOBO procedure are hard to visualise in a 2D image, therefore we refrain from showing these plots here. 
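As a reference for how the frontiers reported in this section are extracted from the evaluated configurations, the sketch below shows a plain non-dominated filter in numpy; it assumes that all three objectives have already been transformed so that larger values are better (see Appendix <ref>), and the array names are ours rather than part of the implementation.

import numpy as np

def pareto_mask(Y):
    # Y: one row per evaluated configuration, columns = (accuracy, fairness, privacy),
    # all transformed so that larger is better
    n = Y.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # a point is dominated if another point is at least as good everywhere
        # and strictly better in at least one objective
        dominated = np.all(Y >= Y[i], axis=1) & np.any(Y > Y[i], axis=1)
        if dominated.any():
            keep[i] = False
    return keep

Y = np.random.rand(300, 3)     # stand-in for the transformed outputs of 300 evaluations
front = Y[pareto_mask(Y)]
print(front.shape)

The hypervolume of the resulting front with respect to the (transformed) anti-ideal point can then be computed with standard multi-objective tools, for instance the hypervolume utilities shipped with BoTorch.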
Our implementation produces interactive 3D visualisations of the Pareto front that can be inspected and used for decision making. For further discussion please refer to Appendix <ref>. However, decision makers can make more informed decisions when balancing the three-way trade-off even without a complete plot of the frontier. For example, as shown in table <ref>, similar accuracies can be obtained with different effects on privacy and fairness; policy makers can decide which one they want to prioritise. In order to compare the performance of the three sampling procedures in approximating the true Pareto frontiers, we explore their improvements with respect to the hypervolume indicator as new configurations are evaluated. As illustrated in figure <ref>, the MOBO procedure generates Pareto frontiers with higher hypervolumes in both experiments, compared to random and grid search sampling strategies[Additionally, we observe that random search consistently outperforms grid search, an effect well known in the literature <cit.>.]. These improvements in performance can be observed even when only about 100 configurations have been sampled and hold in spite of the fact that the other two methods are allowed to explore for more iterations, which emphasises the increased sample efficiency of the MOBO procedure. It is also important to discuss the computational efficiency of PFairDP. While the MOBO procedure is able to find better Pareto frontiers, its optimization loop is sequential in nature. In contrast, the grid and random search implementations can be easily parallelized to increase the number of samples evaluated within the same time frame. Nevertheless, when hardware limitations are a constraint, our MOBO implementation enables an efficient procedure for determining optimal operating points for a given model. Furthermore, PFairDP can be improved by leveraging existing batch MOBO methods <cit.>. § CONCLUSIONS The premise put forward by this paper is that, due to the conflicting nature of fairness, privacy and utility, a shift in perspective is required in order to optimally balance the three-way trade-off. The currently prevalent manual constraint-setting approach leads to the discovery of only a subset of the optimal solutions. As an alternative, we introduced PFairDP, an automatic, efficient and holistic approach for training differentially private and fair ML models that reaches empirical Pareto-optimality with respect to the three objectives using multi-objective Bayesian optimization. We showed that PFairDP can be used to automate previous research in privacy- and fairness-aware ML. We also evaluated our method on classification tasks with multiple bias mitigation methods, models, datasets and optimizers in order to showcase the modularity and versatility of our implementation, as well as its ability to provide actionable information to policy makers tasked with balancing the privacy-fairness-utility trade-offs of an ML system before deployment. APPENDICES § MODELS USED IN “TRADE-OFFS IN UTILITY, FAIRNESS AND DP IN NEURAL NETWORKS” This section describes the models used by Pannekoek and Spigler <cit.>. The authors evaluate four models: a Simple (S-NN), a Fair (F-NN), a Differentially Private (DP-NN), and a Differentially Private and Fair Neural Network (DPF-NN). The Simple Neural Network (S-NN) consists of three fully connected layers with six neurons in the first and second layer and one neuron in the final layer.
The ReLU activation is used for the first two layers and sigmoid for the final one. Training is performed for a fixed duration of 20 epochs using the Adam optimizer and binary cross-entropy as the loss function.The architecture of the Fair Neural Network (F-NN) is identical to that of the S-NN, the only difference being that the ROC postprocessing algorithm is used to reduce bias in the final predictions of the model. Similarly, the architecture of the Differentially Private Neural Network (DP-NN) is identical to that of the S-NN, except that a differentially private variant of the original optimizer is used for training in order to add noise to the gradients and ensure DP. Finally, the Differentially Private and Fair Neural Network (DPF-NN) integrates both the postprocessing step and the DP optimizer.§ MODELS USED IN “ACHIEVING DP AND FAIRNESS IN LOGISTIC REGRESSION” This section describes the models used by Xu et al. <cit.>. The authors propose a total of four modified versions of logistic regression (LR): PrivLR, FairLR, PFLR, and PFLR*.The core architecture is PFLR, a modified version of LR which enforces DP by injecting Laplacian noise into the polynomial coefficients of the original objective function and fairness by including an additional penalty which aims to minimize the overall discrimination of the model (quantified with respect to the risk difference metric).PrivLR and FairLR are single objective versions of PFLR: the former enforces DP using the functional mechanism whereas the latter only includes the fairness penalty term. Finally, PFLR* is introduced as an enhanced version of the core model which, instead of enforcing the fairness objective through a separate penalty term, incorporates it into the DP functional mechanism, leading to a reduction in the overall noise levels while preserving the privacy budget and the fairness of the model. § DATASETS Here we give general description of the datasets used in our work.The Adult dataset <cit.> has been particularly relevant in research on privacy and fairness due to the presence of personally identifiable information (PII) and sensitive attributes which can potentially be used for identifying individuals or induce bias in models trained on it. The dataset contains 45,222 records and 14 demographic features, with the task of predicting whether the income of a person is below or above 50,000 USD.The Medical Expenditure Panel Survey (MEPS, <cit.>) is a dataset pertaining to the healthcare domain, produced by the US Department of Health and Human Services and assumed to be representative of people's healthcare expenditures in the US. The MEPS dataset includes multiple sensitive attributes, along with other non-protected attributes such as health services used, costs and frequency of services, the classification task being that of predicting whether a person would have high utilization of medical services (defined as requiring at least 10 trips for some sort of medical care).Table <ref> summarises the most relevant aspects for each of the two datasets as well as the level of disparate impact and statistical parity difference, indicating the extent to which bias is inherently embedded in the training data. For both Adult and MEPS the DI level is below the standard acceptable threshold of 0.8 and the SPD is above the lenient threshold of 0.1, which motivates bias usage of mitigation techniques. 
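For reference, both measures can be computed directly from the binary outcome labels and the binary sensitive attribute; the short numpy sketch below follows one common convention (ratio and absolute difference of favourable-outcome rates between the unprivileged and privileged groups) and uses illustrative variable names and toy data rather than PFairDP internals.

import numpy as np

def disparate_impact(y, s):
    # y: 1 for the favourable outcome; s: 1 for the privileged group, 0 otherwise
    return y[s == 0].mean() / y[s == 1].mean()

def statistical_parity_difference(y, s):
    # often reported in absolute value
    return abs(y[s == 0].mean() - y[s == 1].mean())

# toy example: the privileged group receives the favourable outcome more often
y = np.array([1, 1, 1, 0, 1, 0, 0, 0, 1, 0])
s = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
print(disparate_impact(y, s), statistical_parity_difference(y, s))   # 0.25, 0.6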
Privacy is a requirement in this context as well because both datasets include sensitive information for individuals (financial and health-related data). Therefore, the two binary classification problems described above are relevant for evaluating the performance of our proposed framework in optimally enforcing fairness and differential privacy. § PFAIRDP INPUT AND OUTPUT DOMAINS This section gives some additional details on the input and output domains used in the experimentation section of the paper. Table <ref> displays the optimization domains used in our MOBO experiments. Some of the random sampling distributions proposed by Avent et al. <cit.> are also employed in our random sampling procedure, as these have been shown to have a positive effect on performance over naive uniform sampling and improve the quality of the Pareto frontiers. The goal of our experiments is to determine three-dimensional Pareto frontiers for the three objectives of interest, namely privacy level (evaluated with respect to the privacy budget ϵ), utility (here the accuracy of the models), and fairness (quantified as the statistical parity difference exhibited by the model). However, the output domains of these three metrics may not be well modeled by GPs, which model outputs on the entire real line. To address this, we follow the approach suggested by Avent et al. <cit.> and transform the outputs as illustrated in table <ref>. Furthermore, we choose these transformations so that the optimization task can be treated as a maximization problem across all dimensions (previously ϵ and SPD needed to be minimized, whereas accuracy had to be maximized). Importantly, should users want to replace the fairness metric used by PFairDP, they should provide a similar transformation for it[For example, one valid transformation that can be applied for replacing disparate impact as the fairness optimization objective is log(x/2) + log(1 - x/2).]. § RENDERING OF THE DISCOVERED PARETO FRONT This section provides further discussion on the rendering of the Pareto fronts discovered by PFairDP. These fronts are 3-dimensional, sparse and hard to query for arbitrary points. This makes their visualization a challenging task. Even the simpler problem of visualizing a complete 3D Pareto frontier is an area of active research, with suggested methods including the use of radial coordinate systems <cit.> and virtual reality technology <cit.>. While we consider detailed research of such visualization methods to be out of the scope of this work, in this section we briefly discuss a few simple plotting options, all of which can be achieved with PFairDP. Option 1 is the scatter plot of Pareto-optimal and optionally dominated points. This plot is very challenging to interpret statically. However, with modern plotting tools like Matplotlib and Plotly, users can make these plots interactive, with the ability to inspect any particular region of the entire front closer. An example of such a static scatter plot can be seen in figure <ref>. Option 2 is the 3D surface plot, which is an approximate triangulation on the grid of discovered points. Similarly to option 1, this plot is hard to interpret statically, but is more practical in its interactive form. An example of this plot is given in figure <ref>. If the pairwise interaction of the objectives is of interest, practitioners might find 2D cross-sections of the 3D front useful.
While using this technique it is important to bear in mind that Pareto-optimal 3D points might not be optimal when projected to a 2D surface in the objective space. Figure <ref> gives examples of such cross-sections.
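To make option 1 concrete, a minimal, self-contained Matplotlib sketch of the static scatter plot is given below; the arrays are synthetic stand-ins for the transformed objective values of dominated and Pareto-optimal configurations, and the axis labels are only indicative.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
dominated = rng.random((200, 3))              # synthetic dominated configurations
front = 0.7 + 0.3 * rng.random((30, 3))       # synthetic Pareto-optimal configurations

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.scatter(*dominated.T, c='lightgray', s=10, label='dominated')
ax.scatter(*front.T, c='tab:red', s=30, label='Pareto-optimal')
ax.set_xlabel('utility (accuracy)')
ax.set_ylabel('fairness (transformed SPD)')
ax.set_zlabel('privacy (transformed epsilon)')
ax.legend()
plt.show()

An interactive equivalent can be obtained by replacing the Matplotlib axes with a Plotly Scatter3d trace.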
http://arxiv.org/abs/2311.15691v1
{ "authors": [ "Bogdan Ficiu", "Neil D. Lawrence", "Andrei Paleyes" ], "categories": [ "cs.LG", "cs.CR", "cs.CY" ], "primary_category": "cs.LG", "published": "20231127102844", "title": "Automated discovery of trade-off between utility, privacy and fairness in machine learning models" }
This paper addresses the challenge of point-supervised temporal action detection, in which only one frame per action instance is annotated in the training set. Self-training aims to provide supplementary supervision for the training process by generating pseudo-labels (action proposals) from a base model. However, most current methods generate action proposals by applying manually designed thresholds to action classification probabilities and treating adjacent snippets as independent entities. As a result, these methods struggle to generate complete action proposals, exhibit sensitivity to fluctuations in action classification scores, and generate redundant and overlapping action proposals. This paper proposes a novel framework termed ADM-Loc, which stands for Actionness Distribution Modeling for point-supervised action Localization. ADM-Loc generates action proposals by fitting a composite distribution, comprising both Gaussian and uniform distributions, to the action classification signals. This fitting process is tailored to each action class present in the video and is applied separately for each action instance, ensuring the distinctiveness of their distributions. ADM-Loc significantly enhances the alignment between the generated action proposals and ground-truth action instances and offers high-quality pseudo-labels for self-training. Moreover, to model action boundary snippets, it enforces consistency in action classification scores during training by employing Gaussian kernels, supervised with the proposed loss functions. ADM-Loc outperforms the state-of-the-art point-supervised methods on THUMOS’14 and ActivityNet-v1.2 datasets. § INTRODUCTION Automated video analysis has broad applications in computer vision research, benefiting diverse fields like self-driving cars, public safety monitoring, and sports analysis <cit.>. A principal challenge in this field is Temporal Action Localization (TAL) in untrimmed video streams, with the objective being to accurately pinpoint the start and end times of actions and to categorize them accordingly <cit.>. Recent advancements in fully-supervised TAL methods have shown promising improvements <cit.>. However, they depend on the detailed annotation of start and end timestamps, along with action labels for every action in training videos, which is both labor-intensive and expensive. To diminish the dependence on extensive labeling throughout the training stage, there has been a growing interest in the advancement of methodologies that operate under limited supervision <cit.>. Specifically, point-supervised TAL requires the annotation of only a single frame within the temporal window of each action instance in the input video <cit.>. Point-level supervision significantly lowers the annotation costs in comparison to full supervision, while providing essential information about the approximate locations and the total number of action instances. In temporal action detection, pseudo-labels are primarily defined as estimated action boundaries (proposals) along with their corresponding action labels. A recent trend aimed at bridging the gap between point-supervised and fully-supervised TAL relies on self-training, wherein pseudo-labels are generated by a base point-supervised model. These pseudo-labels act as substitute action annotations, enabling the training of models under limited supervision.
Current techniques generate pseudo-labels by creating proposals based on thresholds applied to the predicted action classification probabilities. However, these methods have several shortcomings. Firstly, they are highly sensitive to the choice of threshold values; varying thresholds can lead to significant shifts in the alignment of proposals with ground-truth instances. Secondly, they often yield an excess of redundant and overlapping proposals, which are unsuitable as pseudo-labels. Ideally, there should be a one-to-one correspondence between pseudo-labels and action instances. Lastly, these methods struggle to generate complete action proposals and are sensitive to inconsistencies in action classification scores.We introduce an innovative approach to generate pseudo-labels by modeling the distribution of action classification probabilities as a combination of Gaussian and uniform distributions. This methodology is based on the observation that certain action instances exhibit homogenous classification probabilities across snippets, resembling a uniform distribution. In contrast, for other actions, snippets near the action boundaries, which often include ambiguous or transitional movements, show lower classification probabilities, resembling a Gaussian distribution. This combination effectively captures the full spectrum of action instances. Our base point-supervised model predicts background snippets and action classification probabilities for each action class in the video. For each annotated action point, preliminary action boundaries are determined by identifying the nearest background timestamps before and after the annotated point. Then, a mixed distribution model is fitted to the action classification probabilities within these boundaries, minimizing the mean squared error (MSE) loss using Brent's method <cit.>. Consequently, high-quality pseudo-labels are generated that overcome prior challenges: 1) eliminating reliance on arbitrary thresholding, 2) ensuring the creation of a single proposal for each action instance, and 3) maintaining robustness against fluctuations in action classification probabilities. Additionally, we propose learning action boundary snippets during the training of the main model by modeling the distribution of action scores. Although snippets near the action boundaries often have lower classification scores compared to more central action snippets, differentiating these boundary snippets from the background is essential. During training, we compare the predicted classification probabilities with the Gaussian kernels to reinforce the consistency of action scores across the entire range of actions, including boundaries. This process, supervised with our proposed loss functions, enhances the model's accuracy in estimating action durations and in generating complete proposals. Our contributions are summarized as follows:* We propose a novel strategy for pseudo-label generation in self-training, where the predicted action classification probabilities are modeled as a composite of Gaussian and uniform distributions. The effectiveness of the strategy is evidenced by the high-quality pseudo-labels it generates.* We propose a framework of learning action boundary snippets during the training of the main model to generate complete action proposals for testing. This process involves comparing the predicted action classification probabilities with a Gaussian kernel predicted by our model. 
Our designed loss functions supervise the learning of Gaussian parameters and the predicted probability signals.* Our ADM-Loc framework outperforms the state-of-the-art point-supervised methods on THUMOS'14 and ActivityNet-v1.2 datasets.§ RELATED WORKFully-supervised TAL. Fully-supervised methods can be grouped into anchor-based and anchor-free. Anchor-based methods generate pre-defined action proposals distributed across temporal locations <cit.>. They extract fixed-size features from the proposals to evaluate their quality. Anchor-free methods generate proposals with flexible duration by predicting actionness and action offset for each snippet. <cit.>. Temporal feature pyramid is introduced to model actions of varying duration <cit.>. Modeling temporal dependencies in videos has been addressed by recurrent neural networks <cit.>, graph convolutions <cit.>, and transformers <cit.>. Unlike these methods that require detailed frame-level annotations, our framework relies solely on point-level annotations. We employ a multi-scale transformer architecture to model the temporal dependencies of video snippets and to handle actions of varying durations. Weakly-supervised TAL. The methods often require only the video-level labels of actions for training, while the temporal boundaries of actions are not needed. Majority of the weakly-supervised methods rely on the Multi-Instance Learning (MIL) to learn actionness and classification scores to detect discriminative action regions and eliminate background snippets <cit.>. To generate complete action proposals, some methods have proposed adversarial complementary learning approaches to discover different parts of actions by increasing the weight of less discriminative parts of the video <cit.>. Another category of methods rely on self-training scheme to generate pseudo-labels on the train set from an initial base model. The pseudo-labels provide additional supervision for the main model to improve the training <cit.>. These methods often fail to generate high-quality pseudo-labels. In contrast, our model, employing slightly more annotations, produces pseudo-labels that are significantly better aligned with the ground-truth action instances.Point-supervised TAL. Point-level supervision significantly reduces the cost of annotation by labeling a single point for each action instance. SF-Net <cit.> proposed to expanded each annotated single frame to its nearby frames to mine pseudo action frames and utilized the unannotated frames to mine pseudo background frames. PTAL <cit.> performed boundary regression based on keyframe prediction. Back-TAL <cit.> introduced background-click supervision by annotating a random frame from a series of consecutive background frames. Lee et al. <cit.> developed an action-background contrast method to capture action completeness. We propose a novel approach for generating high-quality pseudo-labels using a base point-supervised model. These pseudo-labels then guide our main model in learning action continuity and in generating complete action proposals during testing. § OUR PROPOSED METHODFig. <ref> provides an overview of our framework. Our framework adopts a self-training strategy that incorporates a base model and a main model, each employing a multi-scale transformer as their backbone architecture. The base model's objective is to predict action probability signals and background points, Fig. <ref>(a). 
The predicted probability signals are employed to generate high-quality pseudo-labels, providing additional supervision for the training of the main model (ADM-Loc), Fig. <ref>(b). ADM-Loc learns action boundary snippets by comparing the predicted probabilities with a predicted Gaussian kernel supervised by our proposed loss functions, ℒ^σ_MSE and ℒ^G_MSE, Fig. <ref>(c).§.§ Point-Supervised FormulationGiven an input video, a single annotated point with the action category is provided for each action instance, denoted by {t_i, y_i}_i=1^N_act. The i-th action instance is annotated at the t_i-th snippet with its action label y_i, and N_act is the total number of action instances in the input video. The label y_i is a binary vector with y_i[c] = 1 if the i-th action instance belongs to class c and otherwise 0 for C action classes.§.§ Backbone ArchitectureA multi-scale temporal transformer is employed as the backbone architecture. Given an input video, snippet-level visual features are extracted with a pre-trained visual encoder (I3D <cit.>) and concatenated to generate a video feature sequence X ∈ℝ^T × D, where T is the number of snippets and D is the feature dimensionality. Each snippet feature is embedded using a shallow temporal convolutional network resulting in feature sequence Z^0 ∈ℝ^T × D. This feature sequence is the input to the transformer network to model the temporal dependencies using local self-attention <cit.>. To represent actions with different duration, a temporal feature pyramid is constructed by down-sampling transformer blocks using a strided depthwise 1D convolution. The feature pyramid is denoted by Z ={Z^1,Z^2,⋯,Z^L} where Z^l ∈ℝ^T_l × D is the output of level l. Also, T_l = T/θ^l, and θ is the down-sampling ratio. Feature pyramid captures multi-scale temporal information, enabling the model to capture both short-term and long-term temporal dependencies, leading to a more comprehensive representation of action dynamics. A shallow 1D convolutional network is attached to each pyramid level with its parameters shared across all levels. A sigmoid function is attached to each output dimension to predict the probability of actions and background. The output of the l-th level of the feature pyramid is a probability sequence, denoted by P_l ∈ℝ^ T_l×C+1, where T_l is the temporal dimension on the l-th level. Additionally, P_l[t,C+1] is the probability of background at time t on level l. The complement of the background probability is the class-agnostic score. The class-specific and class-agnostic scores are fused to derive the final probability sequence P̂_̂l̂∈ℝ^T_l × C+1.P̂_̂l̂[t,c] = P_l[t,c] (1-P_l[t,C+1]). §.§ Point-supervised Base ModelAugmented annotations. We augment the point-level annotations for improved training by defining a vicinity around each annotated point with a hyper-parameter radius r_a. Specifically, for the i-th action instance containing the annotated point t_i and its corresponding label y_i, the label y_i is assigned to all snippets within radius r_a. The augmented annotation set is denoted by Φ.Φ = {([t_i-r_a, t_i+r_a], y_i)}_i=1^N_act. The augmented annotation set on level l is defined as the following where θ represents the down-sampling ratio. The notation is simplified on the second line. N_l is the number of labeled points on level l after augmentation. Φ^l ={([(t_i/θ^l)-r_a, (t_i/θ^l)+r_a], y_i)}_i=1^N_act ={(t_j, y_j)}_j=1^N_l.Video-level action prediction. 
The video-level score for class c is defined as the average of action probabilities for class c over the top-k temporal positions on each level l of the pyramid, denoted by P_l[c]. The Multiple Instance Learning (MIL) loss <cit.> is utilized to supervise the predictions. The video-level label is denoted by y. ℒ_MIL = - 1/L∑_l=1^L∑_c=1^C y[c] log(P_l[c]) + (1-y[c]) log (1-P_l[c]).Snippet-level action prediction. The snippet-level focal loss is employed to optimize the probability signal P̂_̂l̂ for each level l of the pyramid. γ is the focusing parameter (set to 2) and N^⋆_act is the number of positive instances. 0.95!ℒ_Act = -1/N^⋆_act ∑_l=1^L∑_j=1^N_l∑_c=1^C y_j[c] log(P̂_l[t_j,c])(1-P̂_l[t_j,c])^γ- (1-y_j[c]) log (1-P̂_l[t_j,c]) P̂_l[t_j,c]^γ.Background prediction. To distinguish actions from the background, we select the temporal positions not belonging to any of the augmented annotated points and possessing a background probability exceeding a certain threshold on each level l of the pyramid. The background points on level l are denoted by {b_j}_j=1^M_l with p_l(b_j) as the probability of background at time b_j. The background loss is employed to optimize the probability signals P̂_̂l̂ for all levels. M_bg is the total number of background points.ℒ_BG = - 1/M_bg∑_l=1^L∑_j=1^M_l[∑_c=1^C (P̂_l[b_j,c])^γlog (1-P̂_l[b_j,c])+ (1-p_l(b_j))^γlog p_l(b_j).Joint training. The total loss for the base model is a weighted combination of the three aforementioned losses where λ_⋆ terms are determined through empirical analysis. L_Total = λ_MILℒ_MIL + λ_Actℒ_Act + λ_BGℒ_BG. §.§ Actionness Distribution Modeling (ADM) §.§.§ Pseudo-label Generation with ADMOur proposed pseudo-labels generation method on the training set models the distribution of action classification probabilities predicted by the base model. This distribution is represented as a combination of Gaussian and uniform distributions. The rationale behind this modeling is that certain action instances exhibit uniform classification probabilities across snippets, resembling a uniform distribution. Conversely, actions with ambiguous boundaries or transitional movements tend to have lower classification probabilities near the boundaries, indicative of a Gaussian distribution. This combination of distributions captures the full spectrum of action instances. After training the base model, action classification probabilities are extracted from the final level of the multi-scale transformer, denoted by P̂_̂L̂∈ℝ^T_L × C+1. This choice is made because the larger receptive field at the last feature pyramid level exhibits fewer fluctuations in action probabilities across neighboring snippets, making it more suitable for our modeling purposes. A Gaussian filter is also applied to smooth the signal and reduce the impact of minor inconsistencies in action classification probabilities. The resolution of the last-level probability signal is upgraded to match that of the first level, resulting in signal P̃_L ∈ℝ^T × (C+1). The background points are predicted from the first level of the pyramid because the lower resolution of the first level excels at detecting fine-grained information. The annotated action points and the predicted background points are denoted by { (t_i, y_i) }_i=1^N_act, and { b_j }_j=1^N_bkg, respectively. For each annotated action point (t_i, y_i), we determine preliminary action boundaries by identifying the nearest background points immediately preceding and succeeding the annotated point, denoted by β_i = [b^s_i, b^e_i]. 
If point t_i belongs to action class c (i.e., y_i[c] = 1), the objective is to estimate the boundaries of the i-th action instance using signal P̃_L[t, c] within the interval β_i. Within interval β_i and within distance δ d_i from the annotated point t_i, we locate the snippet t^⋆_i with the peak probability of class c. Here, d_i is the duration of β_i and δ is a hyper-parameter. t^⋆_i = *argmax_t(P̃_L[t,c])for t ∈ (β_i ∩ [t_i-δ d_i,t_i+δ d_i]). The intuition behind selecting the peak point t^⋆_i is that this point is the most representative snippet of class c in the vicinity of point t_i. The point t^⋆_i is treated as the mean of the uniform and the Gaussian distributions. For the i-th action instance, the signal P̃_L[t, c] is set to zero outside the interval β_i. We fit a Gaussian distribution centered at t^⋆_i to P̃_L[t, c] for each action instance. Gaussian distribution is defined as follows where t, μ, and σ represent the temporal axis, mean, and standard deviation. G(t, μ, σ) = 1/σ√(2π) e^-1/2(t - μ/σ)^2 The Gaussian distribution can be uniquely defined for the i-th action instance as G(t, t^⋆_i, σ_i) by estimating the standard deviation σ_i. An upper bound u_b and a lower bound l_b are estimated for σ_i with respect to boundaries of β_i.u_b = max(t^⋆_i - b^s_i, b^e_i - t^⋆_i),l_b = 10^-6.Thus, the objective is to find the optimal σ_i within range [l_b, u_b] to fit Gaussian distribution G(t, t^⋆_i, σ_i) to probability signal P̃_L[t, c]. We address this optimization problem by minimizing the following MSE loss using Brent's method with the bounded variant <cit.>. L^G-fit_MSE= ∑_t ∈β_i( α· G(t, t^⋆_i, σ_i) - P̃_L[t, c] )^2. α is a scale factor equal to P̃_L[t^⋆_i,c] / G(t^⋆_i, t^⋆_i, σ_i). Brent's method <cit.> is a root-finding algorithm that iteratively adjusts the sigma σ_i within specified bounds l_b and u_b to find an optimal standard deviation for the Gaussian component. The same process is applied to find an ideal width ω_i for the uniform component. L^U-fit_MSE= ∑_t ∈β_i(U(t^⋆_i, ω_i)- P̃_L[t,c] )^2. The linear combination of parameters σ_i and ω_i defines the final interval duration Δ_i for the i-th action where Δ_i = γ_1 σ_i + γ_2 ω_i. The duration Δ_i defines the estimated interval I_i=[t^⋆_i-Δ_i, t^⋆_i+Δ_i]. For each video, the pseudo-labels set includes the annotated point t_i, the predicted sigma σ_i, the estimated interval I_i, and the label y_i, as below: Ψ = {(t_i, σ_i, I_i, y_i)}_i=1^N_act where I_i=[t^⋆_i-Δ_i, t^⋆_i+Δ_i]. §.§.§ The Main Model: ADM-LocThe backbone of the main model is a multi-scale transformer (described in <ref>). The model is supervised with the pseudo-labels set Ψ = {(t_i, σ_i, I_i, y_i)}_i=1^N_act generated by actionness distribution modeling in eq. <ref>. The main model is trained with the losses in eq. <ref> as well as two additional losses introduced in this section.Learning boundary snippets. The ℒ_Act loss (eq. <ref>) supervises the learning of probability signal P̂_̂l̂ for the i-th action instance only within interval I_i which is merely an estimation of the the action boundaries. It is probable that the interval I_i fails to encompass snippets near the action boundaries, which are often ambiguous and include transitional movements. Nevertheless, the model needs to classify these boundary snippets as part of the action to generate complete action proposals during testing. 
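For concreteness, the bounded fit of eq. <ref> used in the pseudo-label generation described above amounts to a one-dimensional bounded minimization; the sketch below applies SciPy's bounded Brent minimizer to a synthetic probability trace, and the variable names and data are ours, not those of the released implementation.

import numpy as np
from scipy.optimize import minimize_scalar

t = np.arange(200, dtype=float)                        # snippet indices inside the interval beta_i
t_star = 90                                            # peak-probability snippet t*_i
p = 0.9 * np.exp(-0.5 * ((t - t_star) / 25.0) ** 2)    # stand-in for the smoothed class probability

def gaussian(t, mu, sigma):
    return np.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def fit_loss(sigma):
    g = gaussian(t, t_star, sigma)
    alpha = p[t_star] / g[t_star]                      # scale factor matching the peak value
    return np.sum((alpha * g - p) ** 2)                # MSE-type loss of eq. <ref>

l_b, u_b = 1e-6, max(t_star - t[0], t[-1] - t_star)    # bounds on sigma from the interval
res = minimize_scalar(fit_loss, bounds=(l_b, u_b), method='bounded')   # Brent's method, bounded variant
sigma_i = res.x
print(sigma_i)   # close to the width (25) of the synthetic trace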
Although the action probabilities at these boundary snippets might be lower compared to the more representative action snippets, it remains essential for the model to differentiate these boundary snippets from the background. We impose this by comparing the probability signal P̂_̂l̂ with a Gaussian kernel, reinforcing the consistency of action classification probabilities for the entire duration of action. The probability signal predicted by the first level of the feature pyramid, denoted as P̂_̂1̂∈ℝ^T_1 × C+1, exhibits the highest variability in action probability predictions due to its small receptive field. As a result, this signal particularly benefits from being compared against a Gaussian kernel to stabilize these fluctuations.Standard deviation prediction. The extracted feature sequence from the first level of the pyramid is denoted by Z^1 ∈ℝ^T_1 × D. For the i-th action instance, K features are sampled from Z^1 within pseudo-label interval I_i and fed to a regression head to predict the standard deviation σ_i. The regression head consists of temporal convolutions, layer normalization, and the sigmoid function to predict the value of σ_i between [0,1]. We use the values of {σ_i}_i=1^N_act from the pseudo-labels to determine parameter K and re-scale the predicted σ_i. This prediction is supervised using an MSE loss, which measures the discrepancy between the predicted and pseudo-label standard deviations.ℒ^σ_MSE = 1/N_act∑_i=1^N_act (σ_i-σ_i)^2.Gaussian imposition. The set S_c denotes the set of action classes that occur in a given video. A Gaussian kernel is defined to represent the i-th action instance formulated as follows where t_i is the annotated point and σ_i is the predicted standard deviation.G_i(t, t_i, σ_i) = e^-1/2(t - t_i/σ_i)^2. For each action class c ∈ S_c, we mix the Gaussian kernels of all action instances belonging to class c, as follows.G^c(t) = max{ G_i(t, t_i, σ_i) |i ∈ [1, N_act],y_i[c]=1 }. The alignment between the probability signal P̂_1[t,c] and the Gaussian kernel G^c(t) is supervised using the following MSE loss.ℒ^G_MSE= 1/T_1 |S_c| ∑_c ∈ S_c∑_t=1^T_1( G^c(t) - P̂_1[t,c] )^2.Pseudo-label sampling. We incorporate a pseudo-label sampling strategy during the training process for ℒ_Act loss by selecting the snippets around the annotated points within a radius hyper-parameter r_s and inside the boundaries of pseudo-labels. The motivation for this sampling is to reduce the likelihood of training the model on false positives. During the pseudo-label generation, the background frames that are erroneously classified as actions constitute the false positives. These are more likely to occur at the boundaries of the pseudo-labels.Joint training. The total loss for the main model is a weighted combination of the following losses where λ_⋆ are determined through empirical analysis. ℒ_Total = λ_MILℒ_MIL + λ_Actℒ_Act + λ_BGℒ_BG+ λ_Gℒ^G_MSE + λ_σℒ^σ_MSE .Inference. The action categories are identified using the video-level scores. The action proposals are predicted from all pyramid levels by applying thresholds to the snippet-level action scores P̂_̂l̂ for each level l for the predicted classes and merging consecutive candidate segments. Each proposal is assigned a confidence score based on its outer-inner-contrast score <cit.>. Finally, the non-maximum suppression (NMS) is used to eliminate overlapping proposals.§ EXPERIMENTS§.§ Experimental Setting Datasets. THUMOS14 <cit.> comprises untrimmed videos across 20 unique categories. 
In line with prior work <cit.>, we use the 200 videos in the validation set for training and the 213 videos in the testing set for evaluation. The average number of action instances per video is 15.5. ActivityNet-v1.2 is a large-scale dataset containing 9,682 videos that includes 100 complex everyday activities. The average number of action instances per video is 1.5. Consistent with previous work, our model is trained using the training set and evaluated using the validation set <cit.>. Evaluation metric. The Mean Average Precision (mAP) under different Intersection over Union (IoU) thresholds is utilized as the evaluation metric, wherein the Average Precision (AP) is computed for each action class. On ActivityNet-v1.2 <cit.>, IoU thresholds range from 0.5 to 0.95 in increments of 0.05. As for THUMOS14 <cit.>, they range from 0.1 to 0.7 in increments of 0.1. Implementation details. For feature extraction, we use two-stream I3D <cit.> on both datasets. We fed 16 consecutive frames as the input to the visual encoder, using a sliding window with stride 4 on THUMOS14 and stride 16 on ActivityNet-v1.2.Our multi-scale transformer model is trained with Adam <cit.> and linear warm-up <cit.> with the learning rate of 10^-4. Model EMA <cit.> is implemented to further stabilize the training. The number of epochs and warm-up epochs are set to 100 and 10 on THUMOS14, and 50 and 5 on ActivityNet-v1.2. The batch sizes are set to 3 on THUMOS14, and 64 on ActivityNet-v1.2. The input length is set to 2,304 for THUMOS14 and to 192 for ActivityNet-v1.2, using padding, random sampling and linear interpolation. To employ local self-attention, the window lengths are set to 19 and 7 on THUMOS14 and ActivityNet-v1.2, respectively. The number of pyramid levels is set to L=4 and the down sampling ratio θ is set to 2. The annotation augmentation radius r_a and the pseudo-label sampling radius r_s are set to 2. The parameters r_a and r_s are defined on the feature grid, representing the distance in terms of the number of features. At inference, the full sequence is fed into the model without sampling. [The source code will be released upon acceptance of the paper.]§.§ Comparison with State-of-the-art Methods Table <ref> shows a detailed comparison with the leading methods on THUMOS'14 and ActivityNet-v1.2 datasets.Results on THUMOS’14: Our model significantly outperforms other point-supervised methods, achieving an average mAP improvement of 7.4%. Notably, this enhancement is nearly 10% at the most stringent IoU threshold of 0.7. Moreover, our model shows a significant gain of 10.6% average mAP increase over weakly-supervised methods, despite using only slightly more annotations. The mAP at the 0.7 IoU threshold is almost double that of its weakly-supervised counterparts.Results on ActivityNet-v1.2. Our model outperforms the state-of-the-art weakly and point-supervised methods in terms of mAP, consistently across all the IoU thresholds. §.§ Ablation StudiesQuality of pseudo-labels. In Table <ref>, α represents the ratio of the number of generated proposals to the ground-truth instances in the training set (validation set of THUMOS'14). The first row shows the quality of the generated action proposals on the training set using the base model (section <ref>). As shown in the table, the average mAP of these proposals is only 63.8%, and the number of predicted proposals is 12 times the number of ground-truth instances (α=12). 
This indicates that a large number of proposals are redundant and overlapping, making them unsuitable to be used as pseudo-labels. Ideally, there should be a one-to-one correspondence between the pseudo-labels and action instances. The second row demonstrates the quality of pseudo-labels generated using our proposed Actionness Distribution Modeling (ADM), as detailed in section <ref>. Noticeably, the mAP at the highest IoU of 0.7 is almost doubled compared to the base proposals. Furthermore, ADM generates exactly one proposal (pseudo-label) for each annotated point (α=1). Impact of pseudo-labels. Table <ref> shows the impact of supervision in the base model. The first row shows supervision with only points without augmentation (r_a=0), achieving the lowest results. We also compare the performance of the base model when supervised with the augmented points (r_a=2) versus the sampled pseudo-labels (r_s=2). Note that the radius (for both r_s and r_a) on level l of the pyramid is r_∗·θ^l = 2^l+1, for θ=2 and r_∗=2. Therefore, the radius can be as large as 32 for level l=4. The model trained with augmented points selects all snippets within the radius as positive samples, even if the action duration is much shorter than the radius. In contrast, the pseudo-labels effectively limit the positive samples to estimated action boundaries. This results in a performance gain of 5.7% average mAP. Impact of the backbone network. Table <ref> demonstrates the impact of the backbone multi-scale transformer architecture in ADM-Loc (the main model). Parameter l denotes the number of feature pyramid levels. The pseudo-label sampling radius r_s is set to 2. As shown in the table, the highest average mAP is achieved when l=4, which is comparable to the results for l=5.Impact of pseudo-label sampling. Table <ref> demonstrates the impact of the pseudo-label sampling strategy with different sampling radius r_s on ADM-Loc (the main model). r_s= ∞ indicates no sampling. As indicated in the table, using a sampling radius of r_s=2 results in a 3.2% improvement in average mAP compared to the scenario with no sampling (r_s= ∞). This is because pseudo-label sampling decreases the chance of training the model on false positives with the ℒ_Act loss (see eq. <ref>). During the pseudo-label generation, the background frames that are erroneously classified as actions constitute the false positives. These are more likely to occur at the boundaries of the pseudo-labels. Impact of the proposed losses. Table <ref> demonstrates the impact of the proposed losses ℒ^σ_MSE (eq. <ref>) and ℒ^G_MSE (eq. <ref>) in ADM-Loc. All experiments in this table are also supervised with ℒ_MIL, ℒ_Act, ℒ_BG (eq. <ref>, <ref>, <ref>) losses. The pseudo-label sampling radius r_s is set to 2. As indicated in the table, the implementation of both losses has led to a performance gain of 1.3% in average mAP and 2.3% in mAP at tIoU=0.7.§.§ Qualitative Results Fig. <ref> presents the qualitative results of our model in different stages: (1) the base model supervised with point-level annotations (section <ref>), (2) the base model supervised with pseudo-labels generated by ADM (section <ref>), and (3) our full ADM-Loc framework (section <ref>). This figure demonstrates that ADM-Loc partially addresses misalignments between actual instances and proposals in the base model, such as the incomplete localization of the action `Baseball Pitch' (part a), and the over-complete localization of the action `Shotput' (part b). 
Furthermore, in some cases, the base model supervised with pseudo-labels generates over-complete proposals (such as the last action instance of `CliffDiving' in part c), which are adjusted in ADM-Loc by modeling action boundary snippets.§ CONCLUSION We propose ADM-Loc, a novel point-supervised framework that employs a self-training scheme to generate high-quality pseudo-labels, providing additional supervision during training. Our approach for pseudo-label generation models the action classification probabilities for each action instance in the video. It avoids reliance on arbitrary thresholding, estimates a single action proposal per action instance, and demonstrates robustness to inconsistencies in action classification probabilities. Furthermore, we propose modeling action boundary snippets by enforcing consistency in action classification scores during training, guided by our designed loss functions. ADM-Loc surpasses state-of-the-art point-supervised methods on both THUMOS’14 and ActivityNet-v1.2 datasets.§ ACKNOWLEDGMENTThis material is based upon work supported by the National Science Foundation under award number 2041307.§ APPENDIX§.§ Temporal Action Detection Error Analysis To assess the effectiveness and limitations of our ADM-Loc framework, we employ DETAD <cit.> for analyzing false negatives (Figure <ref>) and false positives (Figure <ref>).§.§.§ False Negative Analysis Figure <ref> illustrates the false negative (FN) profiling across various coverages, lengths, and number of instances. Part (b) of Figure <ref> displays the FN profiling specific to ADM-Loc. The figure reveals that higher false negative rates are associated with action instances characterized by: (1) extremely short or long durations relative to the video length (Coverage (XS) or Coverage (XL)), (2) actions of very short or very long lengths (Length (XS) or Length (XL)), and (3) videos containing a large number of action instances (#Instances (L)). Furthermore, Figure <ref> demonstrates that ADM-Loc (part b) reduces the false negative (FN) rate compared to the base model (part c), except in two cases: Coverage (L) and Length (XL). This is because the base model samples all snippets within the sampling radius for point augmentations, whereas ADM-Loc only samples snippets that fall within the pseudo-label boundaries. To examine the limitations of ADM-Loc relative to fully-supervised methods, FN profiling of ActionFormer<cit.> is provided in Figure <ref> (part a). The most significant FN differences between ActionFormer (part a) and ADM-Loc (part b) are the following cases: Length (XS), #Instances (L). This demonstrates that the annotation of action boundaries is crucial for detecting very short action instances and for accurate detection in videos containing numerous instances. §.§.§ False Positive AnalysisFigure <ref> presents a detailed categorization of false positive errors and summarizes their distribution. G represents the number of ground truth segments in the THUMOS-14 dataset. This figure indicates that, in comparison with Actionformer (part a), the majority of false positive errors in ADM-Loc (part b) stem from background errors. This occurs because ADM-Loc, as a point-supervised method, lacks access to precise action boundaries. Consequently, background snippets close to action boundaries, sharing characteristics with action instances, may be erroneously detected as actions, resulting in false positives. 
We also analyze the false positive profiling of ADM-Loc (part b) against the base model (part c) focusing on the top-1G scoring predictions. This comparison reveals that ADM-Loc identifies more true positive instances and exhibits fewer localization and confusion errors. This confirms the effectiveness of ADM-Loc in predicting more precise action boundaries. §.§ Distribution of Annotated Points In the point-supervision setting, only a single frame per action instance is annotated in the training set. SF-Net <cit.> proposed to simulate point annotations by sampling a single frame for each action instance. The Uniform distribution method randomly selects a frame within the action boundaries of each action, while the Gaussian distribution method does so with respect to a given mean and standard deviation. Typically, the Gaussian distribution is more likely to sample frames closer to the central timestamps of actions, thereby increasing the chances of choosing a more discriminative snippet. In contrast, the Uniform distribution can sample frames from any part of the action, without this central bias. Table <ref> demonstrates that ADM-Loc attains state-of-the-art results with both Uniform and Gaussian point-level distributions on THUMOS'14, indicating its robustness. However, it is observed that ADM-Loc's performance is lower with the Uniform distribution as compared to the Gaussian distribution. We conjecture this may be attributed to the Uniform distribution's tendency to select less discriminative snippets for point annotation, which can occur anywhere within the action's extent, such as at the boundaries. §.§ More Qualitative Results Qualitative results depicted in Figure <ref> illustrate various types of errors, including over-completeness, incompleteness, and misalignment, generated by the base model. These issues have been addressed in ADM-Loc.
http://arxiv.org/abs/2311.15916v1
{ "authors": [ "Elahe Vahdani", "Yingli Tian" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231127152454", "title": "ADM-Loc: Actionness Distribution Modeling for Point-supervised Temporal Action Localization" }
F. D'Alessio*, P.E. Lapenna, F. Creta (Mechanical and Aerospace Engineering, Sapienza University of Rome, Via Eudossiana 18, 00186 Rome, Italy). The addition of hydrogen in ammonia/air mixtures can lead to the onset of intrinsic flame instabilities at conditions of technical relevance. The length and time scales of intrinsic instabilities can be estimated by means of linear stability analysis of planar premixed flames by evaluating the dispersion relation. In this work, we perform such linear stability analysis for hydrogen-enriched ammonia/air flames (50%H2-50%NH3 by volume) using direct numerical simulation with a detailed chemical kinetic mechanism. The impact of pressure and the inclusion of the Soret effect in the governing equations is assessed by comparing the resulting dispersion relation at atmospheric pressure and 10 atm. Our data indicate that both pressure and the Soret effects promote the onset of intrinsic instabilities. Comparisons with available numerical literature data as well as theoretical models are also discussed. Keywords: Ammonia, Hydrogen, intrinsic flame instabilities, DNS, Thermal-diffusive instability. § INTRODUCTION Global trends indicate that combustion will remain a key technology for energy conversion throughout the remainder of the twenty-first century <cit.>. To achieve a sustainable combustion process, research is focusing on carbon-free fuels such as Ammonia (NH3) and Hydrogen (H2), which do not emit any carbon oxides (CO, CO2) when burned with air <cit.>. However, their implementation in industrial devices remains economically and technically challenging. In this context, hydrogen offers attractive properties as an energy carrier, but its combustion in power plants raises significant safety concerns <cit.>. Its high reactivity and peculiar thermochemical and transport properties <cit.>, compared to hydrocarbon fuels, lead to significant design barriers in terms of flame stability, unpredictable ignition of leaks, transition to detonation and ultimately nitrogen oxide (NOx) emissions. In addition, hydrogen production, storage, and transportation remain significantly uneconomical for large-scale industrial use <cit.>. On the other hand, ammonia is emerging as an efficient hydrogen carrier. From an economic standpoint, the production, transportation, and storage of ammonia are significantly simpler and more cost-effective than that of hydrogen <cit.>. Although direct combustion of ammonia has also been considered as an alternative to hydrogen, its low reactivity and high emissions of NOx represent significant challenges to its direct use. A possible approach to overcome these issues is to blend H2 and NH3 to leverage the properties of both fuels <cit.>. The addition of hydrogen in a fuel blend can promote the onset of Intrinsic Flame Instabilities (IFIs) at conditions and scales of technical interest. IFIs affect the characteristics of the flame, the amount of heat released, and the overall flame morphology and propagation. Unlike thermoacoustic instabilities, IFIs are endogenous to the flame and not linked to the geometry of the combustor and the ensuing pressure waves or oscillations <cit.>. The two main mechanisms at play in the onset of IFIs are the hydrodynamic or Darrieus-Landau instability (DL) and the thermodiffusive instability (TD).
The DL instability is caused by the local flow field induced by the density gradient across a premixed flame which is therefore active at all flame perturbation wavenumbers irrespective of the mixture employed <cit.>. On the other hand, the TD mechanism actively destabilizes the flame when the effective Lewis number of the mixture is lower than a critical value Le_0, which is typically close to unity <cit.>. Given a slightly perturbed (curved) flame front, this introduces a disparity between transverse heat and reactant fluxes, which in turn gives rise to enhanced and reduced reaction zones that amplify the perturbation <cit.>. In the presence of molecular and atomic hydrogen (H2, H), the disparity between molecular and thermal diffusivity is large enough to cause significant local modification of the flame speed, which results, in the non-linear regime, in a cellular structure <cit.>. Early and extensive asymptotic studies have been carried out on the stability of planar flames such as the work of <cit.>,<cit.> and <cit.>. While extremely useful for a qualitative characterization, the ensuing theoretical models for the description of the flame stability are generally not suitable for a quantitative description of realistic mixtures, in particular when hydrogen is present and the TD instability mechanism is active. Therefore, to characterize the stability of the hydrogen-enriched ammonia flames of interest for this work, Direct Numerical Simulations (DNS) are utilized, featuring an accurate description of the thermochemical and transport properties of the mixture.Numerous numerical studies based on DNS have investigated the onset and impact of IFIs <cit.> while more recently, the linear stability analysis of perturbed planar premixed flames has been investigated through DNS for various mixtures and conditions. The effect of pressure on hydrodynamic instabilities has been investigated by <cit.> and <cit.> for lean methane-air flames. On the other hand, for TD unstable mixtures, such as lean hydrogen-air flames, the numerical dispersion relation has been evaluated and compared to theoretical results by <cit.>. Berger et al. <cit.> conducted a comprehensive investigation to determine the numerical dispersion relation of lean hydrogen-air flames spanning several parameters, including pressure, initial temperature, and equivalence ratio. They found that IFIs are promoted using leaner mixtures and increasing the pressure, while keeping the fresh gas temperature low. However, significantly fewer DNS studies are currently available for hydrogen-enriched ammonia flames. Preferential diffusion effects were observed by <cit.> in a partially cracked mixture of ammonia, hydrogen, and nitrogen in air. Similarly, <cit.> observed an abrupt increase in fuel consumption and flame surface area in lean premixed turbulent ammonia-air flame under partially cracked conditions. <cit.> and <cit.> explored the impact of IFIs and flame topology on NOx formation in a turbulent premixed NH3/H2/air flame under varying equivalence ratios and pressures, finding a correlation between topology changes caused by hydrogen preferential diffusion and NOx generation. A parametric analysis of the stability limits of a premixed flame of ammonia, hydrogen, and air has been recently conducted by <cit.> showing how the growth rate of perturbations is modified by the equivalence ratio, composition, and pressure. 
However, they employed a simplified transport model without the inclusion of thermophoresis (Soret effect) which, in flames featuring large amounts of hydrogen can have a significant role <cit.>.In this framework, we perform linear stability analysis of planar hydrogen-enriched ammonia/air flames (50%H2-50%NH3 by volume) using DNS with detailed chemical kinetic mechanism and transport models. The objective is to evaluate the impact of pressure and the role of the Soret effect on the dispersion relations and the resulting flame stability limits. The latter is of fundamental importance to estimate the IFI's length and time scales as they have a significant impact on the prediction of flame probation. Such values are indeed needed by newly developed combustion models that are accounting for IFIs at the subgrid level <cit.>. § THEORETICAL AN NUMERICAL FRAMEWORK §.§ Governing equations In the following, we assume low-Mach number conditions, which are standard for deflagrating fronts, as well as detailed chemical kinetics. All mixture properties are calculated using mixture-averaged formulations based on pure species properties <cit.>. Diffusion coefficients are computed with the Hirschfelder-Curtiss approximation and a velocity correction 𝐕_c is introduced to enforce mass conservation both in the N_s species and temperature equations. To decrease the computational cost, the Soret effect is accounted for only for lighter species (H2, H) following the formulation and parameters introduced by Schlup and Blanquart <cit.>. Under these assumptions, the diffusion velocity reads: 𝐕_i=-D_i ∇ Y_i/Y_i-D_i∇ W/W-D_i^T/ρ Y_i∇ T/T +𝐕_cwhere the correction velocity is𝐕_c=∑_k^N_s D_k ∇ Y_k + ∇ W/W∑_k^N_s D_k Y_k + 1/ρ∇ T/T∑_k^N_s D_k^T and the system of governing equations reads as follows: Continuity equation in terms of thermal divergence:∇·𝐮 = - 1/ρD ρ/Dt=∑_k^NsW/W_kD Y_k/Dt+1/TD T/DtMomentum equation:ρD𝐮/Dt = -∇ p_1 + ∇·( μ[∇𝐮 + (∇𝐮)^T - 2/3(∇·𝐮)]𝐈)Species mass fraction equation: ρD Y_i/Dt = ∇· ( ρ D_i ∇ Y_i)- ∇·(ρ Y_i ∑_k^N_s D_k ∇ Y_k) + - ∇· ( F_i ∇ W) - ∇· ( H_i ∇ T) + ω̇_iTemperature equation:ρ C_p D T/Dt = ∇· ( λ∇ T) - ( ∑_k^N_s P_k ∇ Y_k ) ·∇ T + - Q ∇ W ·∇ T - Q^s ∇ T ·∇ T - ∑_i^N_s h_i ω̇_i where ρ is the mixture density, T the temperature, p_0 the thermodynamic background pressure, and where the ideal gas equation of state (EoS) reads p_0 = ρ RT, R being the gas constant. In addition, p_1 is the hydrodynamic pressure, W the mean molecular weight of the mixture, Y_i the species' mass fractions, C_p the heat capacity at constant pressure of the mixture, λ is the mixture thermal conductivity, D_i is the diffusion coefficient and D_i^T the Soret thermal diffusion coefficients of the i-th species, h_i its enthalpy and ω̇_i is the i-th species reaction rate. The reaction rate term is established through Arrhenius kinetics and all thermochemical and transport coefficients (except for the Soret coefficients) are established through the CHEMKIN-II package <cit.>, tabulated and stored as a function of temperature for each species, while the mixing rules <cit.> are evaluated at runtime. The coefficients in r.h.s. of the species equation are defined asF_i =ρ Y_i/W(∑_k^NsD_kY_k-D_i )H_i = 1/T(Y_i ∑_k D_k^T - D_i^T)and the coefficients in r.h.s. of the temperature equation are defined asP_k =ρ D_k ( C_p -C_p,k)Q =∑_i^Ns C_p,iF_iQ^s =∑_i^Ns C_p,iH_i This set of governing equations and models are implemented in the low-Mach number, massively parallel, spectral element <cit.> flow solver Nek5000 <cit.>. 
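To make the transport formulation above concrete, the following Python sketch evaluates the mixture-averaged diffusion velocities V_i and the correction velocity V_c defined above, including the Soret contribution, on synthetic 1D profiles. It is a schematic illustration only, not the Nek5000 implementation; the array names, species count, and all numerical values are placeholders.

import numpy as np

def diffusion_velocities(Y, W, T, rho, D, DT, dx):
    """Mixture-averaged diffusion velocities with Soret term and
    correction velocity; Y, D, DT have shape (Ns, Nx), W, T, rho shape (Nx,)."""
    dYdx = np.gradient(Y, dx, axis=1)
    dWdx = np.gradient(W, dx)
    dTdx = np.gradient(T, dx)
    Ysafe = np.where(Y > 1e-30, Y, 1e-30)
    # Correction velocity V_c enforcing sum_k Y_k V_k = 0
    Vc = (np.sum(D * dYdx, axis=0)
          + dWdx / W * np.sum(D * Y, axis=0)
          + dTdx / (rho * T) * np.sum(DT, axis=0))
    # Species diffusion velocities V_i
    V = (-D * dYdx / Ysafe
         - D * dWdx / W
         - DT * dTdx / (rho * Ysafe * T)
         + Vc)
    return V, Vc

# Synthetic profiles (placeholders) to exercise the function.
Ns, Nx, dx = 3, 64, 1.0e-4
x = np.arange(Nx) * dx
prof = 0.5 * (1.0 + np.tanh((x - x.mean()) / 1.0e-3))
Y = np.stack([0.2 - 0.1 * prof, 0.3 + 0.05 * prof, np.zeros(Nx)])
Y[2] = 1.0 - Y[0] - Y[1]
T = 500.0 + 1500.0 * prof
W = np.full(Nx, 0.027)                       # mean molecular weight [kg/mol]
rho = 101325.0 * W / (8.314 * T)
D = np.full((Ns, Nx), 2.0e-5)                # mixture-averaged D_i [m^2/s]
DT = np.zeros((Ns, Nx)); DT[0] = 1.0e-7      # Soret coefficient of one light species
V, Vc = diffusion_velocities(Y, W, T, rho, D, DT, dx)
print(np.abs((Y * V).sum(axis=0)).max())     # ~0: sum_k Y_k V_k vanishes by construction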
This numerical framework has been developed starting from a previous version featuring one-step chemistry <cit.>. The code employs a high-order splitting for reacting flows <cit.> and the chemical source terms are implicitly integrated using the stiff ODE solver CVODE <cit.>. The high-order characteristics of the framework are well-suited to efficiently perform combustion DNS, capturing both small-scale flame features and fast chemical time scales with minimum numerical dissipation and dispersion over extended integration times.§.§ Code validationThe code is preliminarily validated by comparing a planar flame simulation with a reference solution obtained with Cantera <cit.>. For this test, a lean methane-air mixture is used as reported in Tab. <ref>. The computational grid has a length of 50 ℓ_T in the flame propagation direction, where ℓ_T = (T_b -T_u)/∇ T_max is the thermal flame thickness, with a zero gradient condition at the outlet, while a 2 ℓ_T width in the spanwise direction with periodic boundary conditions. A spatial resolution of 36 μ m, equivalent to ∼20 points in the flame thickness is used. A stationary flame front is maintained within the domain by providing a fixed velocity inlet with a velocity matching that of the laminar flame speed S_L^0. A total time of 50 τ_F was simulated to ensure independence from the initial conditions, where τ_F= ℓ_T/S_L^0 is the characteristic laminar flame time. <ref> shows the comparison of species profiles obtained from the DNS code with the reference results from Cantera using, for both cases, a skeletal mechanism tailored for lean methane/air flames <cit.>. A good agreement is observed throughout the flame with the DNS code able to reproduce both the temperature and main species profiles as well as the radicals' sharp variation in the reacting region.We conducted a grid sensitivity analysis to evaluate the impact of spatial resolution on the code capability to resolve the flame structure and capture the correct flame front propagation velocity. Three grids were used with varying spatial resolutions, namely: 73 μ m for the Coarse grid, 48 μ m for the Medium grid, and 37 μ m for the Fine grid corresponding to ∼10,∼15 and ∼20 points in the flame thickness, respectively.  <ref> displays a comparison between the profiles of two radicals (HCO,HO2) in physical space and progress variable C space as defined in Tab. <ref>. The reported profiles are obtained using the three different grid resolutions and the Cantera reference solution. The results indicate that the lowest spatial resolution (Coarse grid, resolution δ_x=73 μ m) cannot accurately reproduce the sharp radical profiles present within the flame front, resulting in significant errors in the flame structure reproduction. Concurrently, it is of fundamental importance to accurately reproduce the flame front propagation speed (S_L^0 for planar flames) with minimal error to effectively simulate a premixed flame. The flame speed in the DNS is calculated resorting to a consistent definition of consumption speed <cit.>:S_c = ±1/ρ_u Y_k,uL_y∫ω̇_̇k̇ dxdyWhere ρ_u represents the density of the fresh mixture, Y_k,u refers to the unburnt mass fraction of a species that is completely created/depleted through the flame front. Similarly, L_y represents the spanwise length of the domain whereas ω̇_̇k̇ is the net production/depletion rate of the k-species. The species employed for the evaluation of S_c is the deficient reactant, which is in this case the fuel (methane). 
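As a rough illustration of how the reference solution and the consumption-speed diagnostic above can be evaluated, the sketch below relies on Cantera (version 2.6 or later). GRI-3.0 is used as a stand-in for the skeletal lean methane/air mechanism, and the equivalence ratio, fresh-gas temperature and domain width are illustrative placeholders rather than the exact values of Tab. <ref>; the 2D spanwise integral of the DNS definition is collapsed here onto the 1D solution.

import numpy as np
import cantera as ct

# Reference 1D freely propagating flame (placeholder conditions).
gas = ct.Solution("gri30.yaml")          # stand-in for the skeletal mechanism
gas.TP = 300.0, ct.one_atm
gas.set_equivalence_ratio(0.8, "CH4", {"O2": 1.0, "N2": 3.76})
flame = ct.FreeFlame(gas, width=0.03)
flame.set_refine_criteria(ratio=3, slope=0.06, curve=0.12)
flame.solve(loglevel=0, auto=True)

S_L0 = flame.velocity[0]                 # laminar flame speed
ell_T = (flame.T[-1] - flame.T[0]) / np.max(np.gradient(flame.T, flame.grid))
print(f"S_L0 = {S_L0:.4f} m/s, thermal thickness = {ell_T * 1e3:.3f} mm")

# Consumption speed based on the deficient reactant (here the fuel).
k = gas.species_index("CH4")
rho_u = flame.density[0]
Y_fuel_u = flame.Y[k, 0]
omega_fuel = flame.net_production_rates[k, :] * gas.molecular_weights[k]  # kg/m^3/s
S_c = -np.trapz(omega_fuel, flame.grid) / (rho_u * Y_fuel_u)
print(f"S_c = {S_c:.4f} m/s, ErrS_L = {abs(S_c - S_L0) / S_L0 * 100:.2f} %")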
Figure <ref> reports the deviation, defined as ErrS_L% = (|S_c-S_L^0|)/S_L^0 · 100, of S_c obtained by the DNS at the three resolution levels from the reference laminar value S_L^0. Overall, the error remains limited for all the resolutions Δ_x employed, with the medium grid size representing a good compromise between computational time and accuracy. For this medium resolution, an assessment of the impact of the time step Δ t selection is also performed and reported in Fig. <ref>, where Δ t is varied from 10^-5 s to 10^-7 s. No substantial variation in the flame speed is observed; therefore, computational errors remain essentially unaffected by changes in the timestep.§ THERMOCHEMICAL CONDITIONS AND NUMERICAL ASSESSMENTS FOR AMMONIA FLAMES The target mixture for the present work is 50%NH3-50%H2-Air by volume, with an initial temperature of T_u = 500 K and an equivalence ratio of Φ = 0.5. The chemical kinetic mechanism selected for the ammonia-hydrogen chemistry is the one developed by <cit.>, featuring 25 species and 110 reactions. This chemical kinetic mechanism has been proven to be accurate in replicating the laminar burning velocity of hydrogen-enriched ammonia flames, as well as in achieving good results against other experimental data such as ignition delay time (IDT), jet-stirred reactor (JSR), and flow reactor (FR) measurements <cit.>. The selected mixture is investigated at atmospheric and elevated pressure and the main flame parameters are reported in Tab. <ref>. To assess the use of this chemical kinetic mechanism for ammonia flames, an analysis similar to the one presented in the previous section is carried out for the AH10 flame.  <ref> shows a comparison between the DNS results and the reference Cantera profiles of three representative species (H, H2O2, NO2) in the physical space and in the space of the progress variable defined, similarly to <cit.>, as follows: C = 1-(Y_H2O-Y_H2O^b)/(Y_H2O^u-Y_H2O^b). The profiles shown are obtained using a coarse grid (∼8 points in ℓ_T), a medium grid (∼12 points in ℓ_T), and a fine grid (∼16 points in ℓ_T), while the Cantera solution is labeled as the reference solution. Overall, a good agreement is observable starting from the medium grid in both physical and progress variable space, resulting in a flame speed deviation below ∼1.2%. Such resolution is therefore identified as the reference requirement for the simulation of the selected mixture of Tab. <ref>. The accurate description of transport properties is of fundamental relevance for the combustion of hydrogenated fuels, where the high diffusivity of H and H2 atoms, together with preferential diffusion effects, makes predictions of flame propagation characteristics highly sensitive to changes in the diffusion rates of these two species <cit.>. The importance of including thermal diffusion, the Soret effect, in reacting flow simulations is well established in the literature for hydrogen flames. Various studies by <cit.>, <cit.>, and Ern and Giovangigli <cit.> indicate that Soret diffusion may have a minor impact for a freely propagating flat flame. However, it becomes significant for curved or stretched flames, such as the cellular structures emerging in the non-linear regime of propagation of intrinsically unstable flames. In such cases, the thermal diffusion of H2 in the preheat region may affect the amount of fuel that reaches the reaction layer.
In the context of ammonia flames, the role of the Soret effect is still to be completely assessed. For this reason, we performed simulations neglecting and including the thermal diffusion effects in the transport model. From a computational perspective, calculating the thermal diffusion terms presents a challenge. Both the iterative multicomponent method and mixture-averaged thermal diffusion model have a computational cost of 𝒪(n^2) for calculating thermal diffusion coefficients within the chosen chemical model, where n is the number of species. To minimize the computational cost associated with calculating Soret diffusivity, we utilized a model developed by <cit.>. This model applies to both molecular and atomic hydrogen diffusivities and scales with 𝒪(n), thereby significantly reducing the local computational effort while maintaining accuracy of thermal diffusion coefficients. As a test on the implemented model, the Soret fluxes for H and H2 obtained from DNS were compared with those obtained by the multicomponent model incorporated into Cantera as shown in Fig. <ref> for AH1 and AH10 flames.§ RESULTSThe linear stability of a planar flame is fully characterized by the dispersion relation ω(k) which represents the growth rate ω of each small harmonic perturbation of wavenumber k. At low wavenumbers, an unstable behavior, ω > 0, is expected due to the prominence of the hydrodynamic DL mechanism which is active at all k, while at higher wavenumber (smaller scales) a stabilizing or destabilizing TD effect is expected, depending on the effective Lewis number Le_eff being respectively larger or smaller than a critical value Le_0. If the TD effects are destabilizing, they will eventually be stabilized at even higher k (yet smaller scales) by transverse heat conduction and reactant diffusion which dampen the perturbation irrespective of the Lewis number <cit.>. The dispersion relation allows for the identification of representative lengthscales related to intrinsic flame instability, such as the cut-off wavelength λ_c which is the lengthscale where ω=0 (i.e. at which hydrodynamic and diffusive effects balance) and the lengthscale of maximum growth rate λ_ω_max. The former is important to determine if a flame in a particular domain will experience the onset of IFI corrugations <cit.> while the latter is related to the most probable cellular-wrinkle size that will emerge in the non-linear regime of propagation <cit.>. Such lengthscales are intrinsic properties of the flame as they depend only on the thermochemical conditions, namely the pressure p, the fresh gas temperature T_u, the equivalence ratio ϕ, and fuel type and composition. The dispersion relation can be evaluated by both resorting to theoretical models and by performing DNS. In this section, we first discuss the qualitative outcomes of a theoretical model. Then we perform DNS to quantitatively investigate the role of pressure and Soret effects on the stability limits of the target hydrogen-enriched ammonia/air flame. The calculation of dispersion relations through DNS allows no assumptions to be made about the thermochemical properties of the mixture. §.§ Theoretical stability limitsThe theoretical results of Matalon et al. <cit.> are employed in this work as also done in other recent DNS works on pure hydrogen/air flames <cit.>. A detailed discussion on the various forms of the model dispersion relation is given in <cit.>. 
The model dispersion relation reads <cit.>: ω̃ = ω_DLκ̃ + ω_2 κ̃^2, where ω̃ =ωτ_f, while κ̃ =κℓ_T, and the Darrieus-Landau coefficient ω_DL takes the form: ω_DL = 1/(σ+1) [√(σ^3 +σ^2 -σ) -σ], where σ= ρ_u/ρ_b is the thermal expansion ratio. Note that the model is limited to 𝒪(k^2) terms and does not incorporate higher-order 𝒪(k^4) stabilizing terms <cit.>. All the other thermochemical parameters of the flame, namely the Prandtl number Pr, the Zel'dovich number Ze, and Le_eff, are included in the diffusive coefficient ω_2. This coefficient is negative when TD effects are stabilizing and vice versa, and reads: ω_2 = B_1 + Ze(Le_eff -1)B_2 + Pr B_3, where the parameters B_1, B_2, B_3 are functions of σ and can incorporate a dependence of the transport coefficients on temperature <cit.>. The flame parameters needed to estimate ω_2 are calculated as follows, starting from the Zel'dovich number: Ze = E/R( T_u-T_b )/T_u^2, where R is the universal gas constant, T_b is the adiabatic flame temperature and E is the global activation energy estimated, following <cit.>, as E = R 2d(ρ_u S_L^0)/d(1/T_u) by means of a set of 1D unstretched premixed flame calculations. The mixture's effective Lewis number is evaluated following <cit.>: Le_eff=(Le_O+A Le_F)/(1+A), with A defined for lean mixtures as A=1+Ze(ϕ^-1 -1), while Le_O is the oxidizer Lewis number and Le_F is the fuel mixture Lewis number evaluated using the volume-based formulation reported in <cit.>. In this framework, for a particular σ and Ze a critical Lewis number can be evaluated: Le_0 =1-(B_1+Pr B_3)/(β B_2), as it is strictly related to the sign of ω_2: positive values of ω_2 are obtained when Le_eff<Le_0. In this case, however, the model dispersion relation will diverge, lacking higher-order stabilizing terms. Using the flame parameter definitions reported above, it is now of interest to evaluate the ω_2 coefficient for hydrogen-enriched ammonia flames and use it as an approximate criterion to establish which mixtures are expected to be TD-unstable.  <ref> shows ω_2 as a function of ϕ and hydrogen content in ammonia mixtures, at the two target pressures selected for this work. As expected, higher and positive values of ω_2 can be observed for leaner mixtures and higher hydrogen contents. The conditions of the AH1 and AH10 flames are also reported, and it can be observed that the AH1 flame is expected to be TD-stable (ω_2<0) while the AH10 flame is expected to be TD-unstable (ω_2>0). It is worth mentioning that this criterion is only as accurate as the simplifying assumptions of the hydrodynamic model used to derive Eq. <ref>. This further motivates the linear stability analysis performed using DNS for the two target flames, as described in the next section. For completeness, all the flame parameters for the AH1 and AH10 flames are summarized in Tab. <ref>, where it can be noted that Le_eff>Le_0 for AH1 and Le_eff<Le_0 for AH10.
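A minimal Python sketch of how this quadratic model can be evaluated is given below. The thermochemical inputs (σ, Ze, Le_eff, Pr) and the B_i coefficients are placeholders to be replaced by the values of Tab. <ref> and by the cited theory; the 𝒪(k^2) truncation is retained, so the cut-off and most-amplified wavenumbers are only meaningful when ω_2 is negative.

import numpy as np

def omega_DL(sigma):
    # Darrieus-Landau coefficient as a function of the expansion ratio sigma
    return (np.sqrt(sigma**3 + sigma**2 - sigma) - sigma) / (sigma + 1.0)

def omega_2(Ze, Le_eff, Pr, B1, B2, B3):
    # Diffusive coefficient; TD effects are destabilizing when this is positive
    return B1 + Ze * (Le_eff - 1.0) * B2 + Pr * B3

def dispersion(k_tilde, sigma, Ze, Le_eff, Pr, B1, B2, B3):
    # Non-dimensional growth rate, truncated at O(k^2)
    return (omega_DL(sigma) * k_tilde
            + omega_2(Ze, Le_eff, Pr, B1, B2, B3) * k_tilde**2)

# Placeholder flame parameters (to be replaced by the values of Tab. <ref>)
sigma, Ze, Le_eff, Pr = 5.0, 10.0, 0.9, 0.7
B1, B2, B3 = 1.0, 0.5, 1.0
w2 = omega_2(Ze, Le_eff, Pr, B1, B2, B3)
print("omega_2 =", w2, "(TD destabilizing if positive)")
k = np.linspace(0.0, 2.0, 201)
w = dispersion(k, sigma, Ze, Le_eff, Pr, B1, B2, B3)
if w2 < 0.0:
    k_c = -omega_DL(sigma) / w2          # non-dimensional cut-off wavenumber
    k_max = 0.5 * k_c                     # wavenumber of maximum growth
    print("cut-off wavelength / l_T =", 2.0 * np.pi / k_c,
          "most-amplified wavelength / l_T =", 2.0 * np.pi / k_max)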
§.§ Numerical dispersion relations To calculate the growth rates, multiple DNS simulations are conducted with a planar flame that is selectively perturbed with a single-wavelength harmonic perturbation. The flame is initialized within the planar 2D configuration with a 1D unstretched laminar flame and the perturbation is imposed on the flame position. The amplitude A_0 of the initial perturbation is chosen as 8% of the thermal flame thickness ℓ_T to remain within the assumption of small perturbations. The length of the domain in the propagation direction remains constant throughout the linear stability analysis at L_x=50ℓ_T for all DNSs to ensure that the zero gradient outlet condition is well posed behind the flames without any influence on the flame development. On the other hand, the domain dimension L_y in the lateral spanwise direction is changed for each DNS to accommodate perturbations at different wavelengths λ = L_y with periodic boundary conditions. To ensure a proper resolution of the small perturbation imposed, a uniform grid is employed featuring ∼50 grid points resolving the thermal flame thickness. The amplitude of the perturbation is subsequently tracked over time A(t) and the growth rate is calculated, as done in previous studies <cit.>, as: ω= dlogA(t)/dt. The time evolution of the amplitude of the perturbations is shown in Fig. <ref> for the AH1 flame. The simulations featuring long-wavelength perturbations have exponentially increasing amplitudes A(t), while smaller-wavelength perturbations have exponentially decreasing amplitudes. The growth rate is then deduced from the slope of the amplitude evolution in time, and the numerical dispersion relation is obtained.  <ref> compares the numerical dispersion relations to the model dispersion relations for the AH1 and AH10 flames. As expected, a significant discrepancy can be observed due to the simplifying assumptions of the theoretical model. Moreover, the lack of a higher-order stabilizing term, which is expected to become dominant for k>1, causes the theoretical dispersion relation to diverge for the AH10 flame, which is expected to be TD-unstable. In this latter case, DNS simulations are the only viable tool to construct a complete dispersion relation.  <ref> compares the numerical dispersion relations for the AH1 and AH10 flames as a function of both a dimensional and a non-dimensional wavenumber. The symbols represent the values of ω obtained from each simulation, while the fitted curves are obtained using shape-preserving spline interpolants. The linear DL term of Eq. <ref> is also reported for reference. The higher-pressure flame AH10 features a wider range of unstable wavelengths as well as higher non-dimensional growth rates. Interestingly, both cases exhibit growth rates that exceed those associated with the hydrodynamic instability mechanism ω_DL, indicating a positive contribution from the TD mechanism. While this was to be expected for the AH10 flame, the theoretical model and the ensuing definition of the critical Lewis number suggested the opposite for AH1. Overall, the increased pressure reduces the critical wavelength λ_c (higher wavenumber κ_c), while the growth rate ω associated with the unstable wavelengths increases as pressure rises, consistent with other results obtained for hydrogen-enriched ammonia flames <cit.> and other mixtures <cit.>. Figure <ref> illustrates the impact of temperature on the dispersion relations at the two pressures investigated. We compared the dispersion relations obtained for the AH1 and AH10 flames using a fresh mixture temperature of T_u=500 K with the dispersion relations obtained for flames at the same composition, equivalence ratio, and pressure, but a different T_u = 300 K, calculated by <cit.>.
Consistent with findings by <cit.> for pure hydrogen flames, elevating the temperature of the unburnt mixture leads to an increased critical wavelength, resulting in a stabilizing effect at both the pressures investigated.Finally, the impact of thermophoresis (Soret effect) on flame stability is evaluated by including it in the governing equation and calculating two additional dispersion relations at the conditions of AH1 and AH10 flames. The comparison between the dispersion relations evaluated with and without the Soret effect is shown in Fig. <ref> using non-dimensional units. The Soret effect is shown to affect the dispersion relations, the ensuing cut-off wavelengths, and the maximum growth rates. For the atmospheric pressure case AH1 flames, including the Soret effect causes λ_c to decrease from 2.79 mm to 2.63 mm , while the maximum growth rate is also increased from 302 s^-1 to 339 s^-1, i.e. a 5.7% decrease in the λ_cand a 9% increase of ω_max. Similarly for the AH10 flame, including the Soret effect in the transport models resulted in a decrease of λ_c from 1.19 mm to 1.11 mm, while ω_max increased from 153 s^-1 to 167 s^-1, i.e. a 6.7% decrease for λ_c and a 9.2% increase for ω_max. Overall, the dispersion relations indicate that the effect of the extra diffusion induced by thermophoresis affects the flame stability, leading to a flame that is more prone to feature the onset of IFI. Finally, in Tab. <ref> we reported the key results of our investigation on the stability limits of AH1 and AH10 flames. § CONCLUSIONIn this work, we presented linear stability analyses for hydrogen-enriched ammonia/air flames (50%H2-50%NH3 by volume) using direct numerical simulations with a detailed chemical kinetic mechanism and transport. An in-house modified version of the high-order, spectral element, solver nek5000 has been validated to perform such DNS. It has been demonstrated that our numerical framework is capable of accurately reproducing the flame structure of laminar premixed flames using typical DNS-type grid sizes and with a small error in the flame speed compared to Cantera. The thermophoresis effect has also been included and validated using a computationally efficient model from the literature. The stability limits of an NH3/H2/air mixture at both atmospheric pressure and 10 atm were investigated qualitatively through existing theoretical models in the literature, and subsequently through DNS. The numerical dispersion relations, when compared to the model dispersion relations reveal that the latter have limited quantitative predictive capabilities. On the other hand, the existing theoretical framework can have an important qualitative role in establishing potential TD instability regions in the space of parameters such as equivalence ratio and hydrogen content.The impact of pressure and the Soret effect have been assessed by comparing the numerical dispersion relations. The ensuing data indicates that both pressure and the Soret effect promote the onset of intrinsic instabilities. The pressure is shown to have a similar impact as observed with other mixtures in the literature such as lean methane/air and hydrogen/air premixed flames. It is found that the Soret effect influences the stability limits of flames by reducing the critical wavelength λ_c and increasing the maximum growth rate ω_Max. 
However, neglecting the supplementary diffusion effects originating from the Soret effect leads to inaccuracies on the order ∼5% on the stability limits of the investigated flames.§ DECLARATION OF COMPETING INTERESTSThe authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.§ ACKNOWLEDGEMENTThis work has been supported by Baker-Hughes and Lazio Region. P.E.L acknowledges the support of Sapienza University for the early-stage researchers' funding for the project “A pragmatic support to the hydrogen economy: data-driven modeling of high-pressure combustion for propulsion and power” and for the small-size research project "Combustion under extreme thermodynamic conditions for green propulsion and power". The Italian supercomputing center CINECA is acknowledged for the award, under the ISCRA initiative, of high-performance computing resources and support for the project IsB26-Hydrogen. This work has been also supported by ICSC (Centro Nazionale di Ricerca in High-Performance Computing, Big Data and Quantum Computing) funded by the European Union – NextGenerationEU. elsarticle-num-names
http://arxiv.org/abs/2311.16309v1
{ "authors": [ "F. D'Alessio", "P. E. Lapenna", "F. Creta" ], "categories": [ "physics.flu-dyn" ], "primary_category": "physics.flu-dyn", "published": "20231127205353", "title": "Intrinsic instability of lean hydrogen/ammonia premixed flames: Influence of Soret effect and pressure" }
Technical University of Munich, TUM School of Natural Sciences, Physics Department, 85748 Garching, Germany Munich Center for Quantum Science and Technology (MCQST), Schellingstr. 4, 80799 München, GermanyThe toric code (TC) model subjected to a magnetic field can be mapped to the ℤ_2 lattice gauge Higgs (ℤ_2 GH) model. Although this isometric mapping preserves the bulk energy spectrum, here, we show that it has a non-trivial effect on the entanglement structure. We derive a quantum channel that allows us to obtain the reduced density matrix of the ℤ_2 GH model from the one of the TC model. We then contrast the ground state entanglement spectra (ES) of the two models. Analyzing the role of the electric-magnetic duality, we show that while the ES of the TC model is enriched by the duality, the ES of the ℤ_2 GH model is in fact not. This thus represents an example where the bulk-boundary correspondence fails. Moreover, the quantum channel allows us to investigate the entanglement distillation of the ℤ_2 GH model from the TC model.Entanglement of Gauge Theories: from the Toric Code to the ℤ_2 Lattice Gauge Higgs Model Wen-Tao Xu, Michael Knap, Frank Pollmann Received xxxx; accepted xxxx ======================================================================================== Introduction. Entanglement plays an essential role in understanding and characterizing quantum many-body systems <cit.>.Notably, intrinsic topological order <cit.> is characterized by its long-range entanglement and the resulting topological entanglement entropy <cit.>.The entanglement spectrum (ES) <cit.>, i.e., the (negative logarithmic) spectrum of the reduced density matrix, has been proposed to provide additional information about the structure of entanglement in topological phases of matter.One of the most celebrated models with an intrinsic topologically ordered ground state is the toric code (TC) model <cit.>.The TC model in presence of a magnetic field can be mapped by an isometry to another fundamental model, the ℤ_2 lattice gauge Higgs (ℤ_2 GH) model <cit.>—a simple lattice gauge theory realizing Higgs and confinement transitions.Therefore, the TC model and the ℤ_2 GH model have an identical bulk energy spectrum.However, in this work, we show that the isometric mapping between the two modelschanges the entanglement structure in a non-trivial way, giving rise to several questions:First, while Ref. <cit.> showed that the electric-magnetic duality symmetry can enrich the ES of a ℤ_2 topological state, it is not clear how the duality symmetry affects the ES of the ℤ_2 GH model.Second, Ref. <cit.> points out an interesting boundary transition of the ℤ_2 GH model and it is not clear whether the ES reflects the boundary physics according to a bulk-boundary correspondence.Third, a main difference between the TC model and the ℤ_2 GH model is that the Hilbert space of the latter has a gauge constraint.Several works study the entanglement entropy by taking the gauge constraint into consideration <cit.>, and some conjectures have been proposed which deserve further investigation <cit.>. 
In this work, we derive a quantum channel that allows us to directly obtain the reduced density matrices of the ℤ_2 GH model from the reduced density matrix of the TC model and to explain the differences and similarities of their ES.Using the quantum channel, we can then understand how symmetry applies to the reduced density matrix of the ℤ_2 GH model and explain their consequences on the ES.By combining the quantum channel and tensor network methods, we derive an efficient approach to extract the ES and entanglement entropy of the ℤ_2 GH model from the solution of the TC model.Definition of the TC and ℤ_2 GH model. The TC model is defined on a square lattice with qubits on the edges as shown in Fig. <ref>a.We label the edges, vertices, and plaquettes of the lattice as e, v, and p, respectively.The Hamiltonian of the TC model in a magnetic field is given byH_TC=-∑_vA_v-∑_pB_p-h_x∑_eX_e-h_z∑_e Z_e,where A_v=∏_e∈ vX_e and B_p=∏_e∈ p Z_e are vertex and plaquette operators and X_e and Z_e are Pauli matrices.In the following,we will use a parameterization in polar coordinates (h,θ)=(√(h_x^2+h_z^2),arctan(h_x/h_z)), where h (θ) is the strength (direction) of the magnetic field.The phase diagram of the model <cit.>, exhibiting a toric code phase with the ℤ_2 topological order and a trivial phase,is shown in Fig. <ref>b.The phase diagram is symmetric about θ=π/4 due to the electric-magnetic duality symmetry, U_TCH_TC(h_x,h_z)U^†_TC=H_TC(h_z,h_x), where U_TC is the duality transformation exchanging the primal lattice and the dual lattice as well as X and Z.It is established that the TC model can be exactly transformed to the ℤ_2 GH model  <cit.>.The Hilbert spaceof the ℤ_2 GH model consists of ℤ_2 gauge (matter) field on edges (vertices) of the lattice.The mapping between the two models is given by the isometryV=∏_⟨ e,v⟩CX_v,e∏_v|+⟩_v,where CX_v,e=X_e^(1-Z_v)/2 is the controlled-X gate acting on a controlling qubit v and a nearest-neighbor controlled qubit e.Applying V on H_ yields the Hamiltonian of the ℤ_2 GH model: H_GH=VH_TCV^†, which can be explicitly expressed asH_GH=-∑_vX_v-∑_pB_p-h_x∑_eX_e-h_z∑_e Z_v(e)Z_eZ_v^'(e), where v(e) and v^'(e) are two vertices which are closest to the edge e (see Fig. <ref>a) and h_z (h_x) can be regarded as the strength of the gauge-matter coupling (gauge fluctuations).Since V^†V=1 and VV^†=∏_v [(1+X_vA_v)/2], the Hilbert space of the ℤ_2 GH model has to satisfy the gauge constraint X_vA_v=1,∀ v, representing Gauss law. The ground states |Ψ_GH⟩ of the ℤ_2 GH model can thus be obtained from the ground states |Ψ_TC⟩ of the TC model as |Ψ_GH⟩=V|Ψ_TC⟩.For h_z=0, the model reduces to a pure ℤ_2 gauge model in which the gauge and matter fields are not entangled, i.e.,|Ψ_GH⟩=|Ψ_TC⟩⊗∏_v|+⟩_v.Since the isometryV preserves the bulk energy spectrum, the phase diagrams of the two models are the same; Fig. <ref>b.The deconfined phase of the ℤ_2 GH model corresponds to the toric code phase.At sufficiently large fields, a phase transition to the Higgs (confining) phase occurs in which ℤ_2 charges condense (are confined).The Higgs and the confined phase are adiabatically connected and thus they are the same phase <cit.>.Moreover, as the phase diagram is symmetric about θ = π/4, there is a modified duality U_GH=VU_TCV^† of the ℤ_2 GH model: U_GHH_GH(h_x,h_z)U_GH^†=H_GH(h_z,h_x). Quantum channel. 
Starting from a quantum state |Ψ_⟩ defined in the Hilbert space of the TC model, the reduced density matrix ρ_=_L|Ψ_⟩⟨Ψ_| is obtained by tracing the degrees of freedom in the left part L of the system on an infinitely long cylinder with a circumference N, see Fig <ref>a.The corresponding quantum state in the Hilbert space of the ℤ_2 GH model is |Ψ_GH⟩=V|Ψ_TC⟩, from which the reduced density matrix ρ_GH=_L|Ψ_GH⟩⟨Ψ_GH| is obtained. As shown in Fig. <ref>c, we can derive the transformation 𝒩[·] relating the reduced density matrices ρ_ andρ_ from the isometric transformation V in Eq. (<ref>) such thatρ_GH=𝒩[ρ_TC]=∑_z K_zρ_TC K^†_z,where |z⟩=∏_v∈∂ L|z_v⟩_v and z=0 or 1, and the Kraus operatorsK_z=1/2^N/2∏_⟨ e∈ R,v∈ R∪∂ L⟩CX_v,e|z⟩⊗∏_v∈ R|+⟩_v. The map 𝒩[·] satisfies the trace-preserving condition (i.e., ∑_zK_z^†K_z=1) and it maps the identity operator to a projector (i.e., ∑_zK_zK_z^†=∏_v∈ R[(1+X_vA_v)/2]).Thus 𝒩[·] is a quantum channel; see supplemental material <cit.>.In the subspace satisfying the gauge constraint, the projector is equivalent to an identity operator, so 𝒩[·] satisfies the unital condition (map 1 to 1).Since the quantum channel does not necessarily preserve the spectrum of a density matrix, the ES of the ℤ_2 GH model is generically different from that of the TC model.There are some additional properties of the quantum channel 𝒩[·] <cit.>, which are useful for contrasting the entanglement of the TC model and the ℤ_2 GH model and for studying the entanglement structures of the ℤ_2 GH model:(i) The quantum channel has a gauge symmetry [𝒩[·], X_e]=0, ∀ e ∈∂ R, and satisfies the gauge constraint X_vA_v𝒩[·]=𝒩[·]X_vA_v=𝒩[·], ∀ v ∈ R.(ii) If ∃ e∈∂ R, such that {O,X_e}=0, then 𝒩[O]=0.(iii) For operators O satisfying the gauge symmetry [O,X_e]=0,∀ e∈∂ R and the gauge constraint OX_vA_v=X_vA_vO=O,∀ v∈ R, there exists an map 𝒩^-1[·]=∑_zK^†_z· K_z such that 𝒩[𝒩^-1[O]]=O.Entanglement Hamiltonian and ES of the TC model.In the toric code phase, the ground states has a fourfold topological degeneracy for the infinite cylinder geometry represented by so-called minimally entangled states (MES) <cit.>.For simplicity, we only consider the trivial MES corresponding to the vacuum sector [For non-contractible entanglement cuts the spectra depend on the chosen MES and the trivial MES is equivalent to a contractible entanglement cut.].When the magnetic field strength h is small, the ground state can be approximated by perturbation theory |Ψ_⟩=[1+h_x/4∑_e X_e+h_z/4∑_e Z_e+O(h^2)]|TC⟩, where |TC⟩ is the trivial MES at h=0.From |Ψ_TC⟩, the entanglement Hamiltonian is obtained as H_E,TC=-log(ρ_TC).Since we are interested in the low-energy degrees of freedom, we use an isometry 𝒱_ to transform the H_E,TC into the effective Hamiltonian H̃_E,TC=𝒱_H_E,TC𝒱^†_+O(h^2) <cit.> by discarding the high-energy part.In fact, only the terms ∑_ e∈∂ RX_e and ∑_e∈∂ LZ_e in |Ψ_TC⟩ contribute to first order O(h) of H̃_E,TC, where ∂ L (∂ R) is a set of edges at the boundary of region L (R) and these terms become ∑_iX_iX_i+1and ∑_iZ_i in H̃_E,TC.Taking the projector P_+=(1±∏_iZ_i)/2 (defined in the Hilbert subspace of H̃_E,) to the trivial MES into consideration, we derive the effective entanglement Hamiltonian <cit.>,H̃_E,=P_+[log2^N-1-∑_i=1^N(h_z/2Z_i+h_x/2X_iX_i+1)+O(h^2)]. 
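As an illustration of this effective entanglement Hamiltonian, the following exact-diagonalization sketch (independent of the tensor-network machinery used below) builds the perturbative Ising chain for a small circumference and restricts it to the even-parity sector selected by P_+ for the trivial MES; the circumference and field values are placeholders.

import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def site(op, i, N):
    """Operator op acting on site i of an N-site chain."""
    return reduce(np.kron, [op if j == i else I2 for j in range(N)])

def H_E_TC(N, hx, hz):
    """Effective entanglement Hamiltonian of the trivial MES, periodic chain."""
    H = (N - 1) * np.log(2.0) * np.eye(2**N)
    for i in range(N):
        H -= 0.5 * hz * site(Z, i, N)
        H -= 0.5 * hx * site(X, i, N) @ site(X, (i + 1) % N, N)
    return H

N, hx, hz = 8, 0.02, 0.02                  # placeholders (cf. h = 0.02 above)
H = H_E_TC(N, hx, hz)
# Even sector of prod_i Z_i, which H preserves (X_i X_{i+1} flips two spins).
parity = np.diag(reduce(np.kron, [Z] * N))
even = np.where(np.isclose(parity, 1.0))[0]
levels = np.sort(np.linalg.eigvalsh(H[np.ix_(even, even)]))
print(levels[:6])                          # low-lying entanglement energies
print(np.exp(-levels).sum())               # ~1 up to O(h^2): trace normalization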
By contrast, in the limit h→∞, the ground state of the TC model becomes a product state, whose effective entanglement Hamiltonian is a 1×1-dimensional matrix: H̃_E,TC=0. Next we use tensor-network methods to calculate the ES, i.e., the energy spectrum {ϵ_i} of H_E. We first approximate the ground state of the TC model by variationally optimizing an infinite 2D tensor network ansatz, the infinite projected entangled pair states (iPEPS) <cit.>. There are two important parameters that systematically control the error of the approximation: the bond dimension D of the iPEPS itself and the bond dimension χ of the boundary infinite matrix product operator (iMPO) used to contract the iPEPS <cit.>. We then calculate the ES on an infinitely long cylinder with a finite circumference from the iMPO <cit.>. The ES of the trivial MES in the topologically ordered phase at a small magnetic field h=0.02 is shown in Fig. <ref>a. From perturbation theory, the entanglement Hamiltonian has a transition at θ=π/4 described by the Ising conformal field theory (CFT), which we find to be consistent with our numerical results; see inset of Fig. <ref>a. This confirms a previous study of another self-dual ℤ_2 topological state <cit.>. We also performed simulations for larger fields and found that h_x=h_z=0.16 is still described by the same Ising CFT <cit.>. By contrast, the ES deep in the trivial phase along h=1.3 does not indicate a transition at θ=π/4; Fig. <ref>b. The ES of the TC model is always symmetric about θ=π/4 due to the duality transformation U_TC|Ψ_TC(h_x,h_z)⟩=|Ψ_TC(h_z,h_x)⟩, which induces a duality transformation U_E,TC on the reduced density matrix. Entanglement Hamiltonian and ES of the ℤ_2 GH model. By applying the quantum channel 𝒩[·] to the reduced density matrix of the TC model, we derive the effective entanglement Hamiltonian of the ℤ_2 GH model H̃_E,GH, whose dominant part is a classical Ising chain, see supplement <cit.>, H̃_E,GH=[(N-1)log2-h_x/2∑_i=1^N X_iX_i+1+O(h^2)]P_+. Comparing H̃_E,GH and H̃_E,TC, the transverse field term ∑_iZ_i is absent in the former. This is because Z_i in H̃_E,TC corresponds to the term Tr_L(Z_e∈∂ L|TC⟩⟨TC|) in ρ_TC, which anti-commutes with X_e'∈∂ R, where e and e' are in the same plaquette. By mapping ρ_TC to ρ_GH using the quantum channel 𝒩[·], we find that 𝒩[Tr_L(Z_e∈∂ L|TC⟩⟨TC|)]=0, according to property (ii) of the quantum channel 𝒩[·], and thus the term ∝∑_iZ_i disappears. Moreover, for h→∞ and θ≠π/2, we derive the effective entanglement Hamiltonian of the ℤ_2 GH model <cit.>, H̃_E,GH=-∑_i=1^N arctanh(sinθ) X_i - Nlog[cos(θ)/2]. In contrast to the TC model, where H̃_E,TC=0 for all θ, we find for the ℤ_2 GH model that H̃_E,GH=0 only for θ=π/2. An efficient way to calculate the ES of the ℤ_2 GH model is to apply the quantum channel 𝒩[·] and extract the ES directly from the boundary MPO of the TC model <cit.>; Figs. <ref>c and <ref>d. Comparing to Figs. <ref>a and b, we observe that the ES of the ℤ_2 GH model are the same as those of the TC model only when θ=π/2, where the matter field and gauge field are no longer entangled. Moreover, the ES of the ℤ_2 GH model are no longer symmetric about π/4. This raises the question of why the modified duality transformation U_GH of the ℤ_2 GH model fails to enforce the ES to be symmetric about π/4. Considering the duality transformation U_E,TC that is applied to ρ_TC, the duality transformation applied to ρ_GH=𝒩[ρ_TC] is 𝒰_E,GH[·]=𝒩[U_E,TC𝒩^-1[·]U_E,TC^†].
As ρ_GH satisfies property (iii) of the quantum channel 𝒩[·], there exists 𝒩^-1[·].For 𝒰_E,[·] to still apply in the usual way, there should exist a unitary matrix U_E,, such that 𝒰_E,[·]=U_E,· U^†_E,.If such a U_E, does not exist, 𝒰_E, applied to ρ_ will no longer be a unitary transformation and consequently the spectra of ρ_(h_x,h_z) and ρ_(h_z,h_x) will generically be different. So it is possible to have a non-symmetric ES even when the model has the duality symmetry. Each level of the ES for the ℤ_2 GH model has an extensive 2^N-1-fold degeneracy in the deconfined phase along h_z axis; Fig. <ref>c. By contrast, in the Higgs phase along h_z axis, the degeneracy is enhanced to 2^N in the thermodynamic limit (N→∞) due to charge condensation (for finite N this degeneracy is weakly lifted to two branches and each has a degeneracy 2^N-1); Fig. <ref>d.The extensive degeneracy on the h_z axis arises from the interplay between the (open) Wilson loop and the gauge symmetry of ρ_,see proof in <cit.>.An interesting question is whether the bulk boundary correspondence holds, i.e., whether the findings made for the ES also apply to the energy spectrum of a system with open boundary conditions.In Ref. <cit.> a two-fold boundary degeneracy was found in the energy spectrum under specifically chosen boundary conditions, as well as a boundary phase transition separating the Higgs and confined phases. By contrast, both the entanglement Hamiltonian Eq. (<ref>) and the ES in Fig. <ref>d do not exhibit such a phase transition. Since for quantum states of gauge theories, the entanglement Hamiltonian and the open boundary system can potentially have different symmetries, i.e., the open boundary systems do not necessarily have the local gauge symmetry {X_e|e∈∂ R}, they can exhibit different low-energy physics. Thus this is an example where the bulk-boundary correspondence fails. Distillable entanglement entropy of the ℤ_2 GH model. The entanglement entropy S in the toric code phase satisfies S=α N-γ, where α is a non-universal constant and γ=log 2 is the universal topological entanglement entropy (TEE) characterizing the ℤ_2 topological order <cit.>.Since the isometry V connecting the TC model and the ℤ_2 GH model is a constant depth quantum circuit, which cannot change the ℤ_2 topological order <cit.>, the deconfined phase and the toric code phase have the same TEE. Entanglement of gauge theories exhibits a richer structure because of the gauge constraints <cit.>.Specifically, the reduced density matrix for the ℤ_2 GH model is block diagonal <cit.> due to the gauge symmetry [ρ_,X_e]=0, ∀ e∈ R,ρ_GH=⊕_xp_xρ_,x,ρ_,x=ρ_GH P_x/p_x, p_x=(P_xρ_GH),where P_x=2^-N∏_e∈∂ R(1+x_e X_e) and x_e=±1.Moreover, ρ_,x is the reduced density matrix of a pure state P_x|Ψ_GH⟩/√(p_x), obtained by fixing gauge field in ∂ R by the measurement of X operators; Fig. <ref>a. The probability of the measurement outcome |x⟩ is p_x.In fact, the dominant classical parts (diagonal in the X basis) of the entanglement Hamiltonians in Eqs. (<ref>) and (<ref>), as well as the ES in Figs. <ref>c and  <ref>d, reflect the probability distributions {p_x}. Hence the von Neumann entanglement entropy S(ρ_GH)=-(ρ_GHlogρ_GH) can be separated into two parts <cit.>: S(ρ_GH)=-∑_xp_xlog p_x+∑_xp_xS(ρ_,x). The first part is the Shannon entropy of the probability distribution {p_x}. 
The second part S_D(ρ_GH)=∑_xp_xS(ρ_,x) is the distillable entanglement entropy [S_D(ρ_) is also called accessible entanglement entropy and S(ρ_x) is the symmetry resolved entanglement entropy.],characterizing the entanglement that can be detected by gauge invariant local operator operations <cit.>.Such a decomposition of the entanglement can be applied to any quantum state with symmetry, e.g., symmetry protected topological states <cit.>.We now study how the distillable entanglement entropy S_D depends on the subsystem size.In Ref. <cit.> it is conjectured that S_D(ρ_GH)=0 when the ℤ_2 GH model reduces to a pure gauge theory (h_z=0) and S_D(ρ_GH)=α' N-log 2,whereα'is a nonuniversal constant, in the deconfined phase with finite gauge-matter coupling (h_z≠ 0).We use tensor-network methods to check these conjectures.Since it is easier to compute the n-Rényi entanglement entropy S_n(ρ)=(1-n)^-1logρ^n rather than the von Neumann entropy in tensor network methods, and Ref. <cit.> provides the Rényi generalization of the distillable entanglement entropy, we consider the distillable Rényi entanglement entropy with n=1/2: S_D, 1/2(ρ_GH)=log∑_xp_x(√(ρ_,x))^2 <cit.>.Using the quantum channel, we find ρ_,x=P_x𝒩[ρ_]P_x=Ṽ^†_xρ_,xṼ^†_x, where Ṽ_x is another isometric matrix and ρ_,x=P_xρ_P_x/p_x (Notice that [P_x,ρ_]≠0 in general).Thus S(ρ_,x)=S(ρ_,x) and S_D,1/2 of the ℤ_2 GH model can be calculated from TC iPEPS efficiently for arbitrary circumference N <cit.>.We compute the total and distillable Rényi entanglement entropy densities of the ℤ_2 GH along h_x axis; Fig. <ref>a and b.In contrast to the conjecture in Ref. <cit.>, our numerical results indicate that the distillable entanglement entropy can be non-zero for the pure gauge theory with finite h_x.In addition to the area law, the distillable Rényi entanglement entropy in the deconfined phase with gauge-matter coupling (h_z≠0) has a correction γ_D=log 2; Fig. <ref>c.This correction vanishes γ_D=0 for the pure gauge theory (h_z=0); Fig. <ref>d.For the pure gauge theory, X measurements at ∂ R shown in Fig. <ref>a destroy the underlying long-range entanglement completelyalong the entanglement cut, because P_x commutes with the gauge fluctuations in Eq. (<ref>), and only short-range entanglement can be retained. However, when h_z≠0, the gauge-matter coupling in Eq. (<ref>), which does not commute with P_x, prevents the X measurements at ∂ R from destroying the long-range entanglement along the entanglement cut, giving rise to the distillable TEE γ_D=log2 (see <cit.> for additional details). Discussion and outlook. The TC model and ℤ_2 GH model are related by an isometric transformation. We show that the isometry acting on a subsystem acts as a quantum channel. As a consequence, we find that although the reduced density matrix of the TC is entailed with the electric-magnetic duality symmetry, this is not the case for the ℤ_2 GH model. Our results demonstrate that combining quantum channels with tensor networks is useful for extracting entanglement properties of systems related by an isometric transformation. Similar considerations as discussed here, also hold for the deformed wavefunctions of the TC model and ℤ_2 GH model <cit.>.Our approach can be used to study the entanglement of any two wavefunctions transformed by a constant depth circuit, for example, to extract the entanglement of a non-trivial symmetry-protected (enriched) topological state from the trivial one <cit.>. 
Our results can also be generalized to other Abelian lattice gauge theories of finite groups withmatter fields. Moreover, it would be interesting to investigate the entanglement of non-Abelian gauge theories and topological phases with self-duality <cit.>, as well as gauge theories with continuous groups <cit.>.Acknowledgements. We especially thank Ari Turner for many useful discussions on this work and prior collaborations on entanglement transitions. We also thank R.-Z. Huang and Ruben Verresen for helpful comments.We acknowledge support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy–EXC–2111–390814868, TRR 360 – 492547816 and DFG grants No. KN1254/1-2, KN1254/2-1, the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 851161 and No. 771537), as well as the Munich Quantum Valley, which is supported by the Bavarian state government with funds from the Hightech Agenda Bayern Plus.Data availability – Data, data analysis, and simulation codes are available upon reasonable request on Zenodo <cit.>.§ QUANTUM CHANNEL AND ITS PROPERTIESIn this section, we show how to derive the Kraus operators, prove 𝒩[·] is a quantum channel, and show some useful properties of the quantum channel. At first, we derive the Kraus operators, which transform the density operator ρ_ to ρ_: ρ_GH =_L( |Ψ_GH⟩⟨Ψ_GH|)=_L( V|Ψ_TC⟩⟨Ψ_TC|V^†)=_L[(∏_⟨ e,v⟩CX_v,e∏_v|+⟩_v)|Ψ_TC⟩⟨Ψ_TC|(∏_v⟨+|_v∏_⟨ v,e⟩CX_v,e)]=_L[(∏_⟨ v∈(∂ L∪ R), e∈ R⟩CX_v,e∏_v∈∂ L∪ R|+⟩_v)|Ψ_TC⟩⟨Ψ_TC|(∏_v∈∂ L∪ R⟨+|_v∏_⟨ v∈(∂ L∪ R), e∈ R⟩CX_v,e)]=1/2^N∑_{z_v=± 1|v∈∂ L}(∏_⟨ v∈∂ L,e∈∂ R⟩ X_e^1-z_v/2∏_⟨ v∈ R,e∈ R⟩CX_v,e∏_v∈ R|+⟩_v)ρ_TC(∏_v∈ R⟨+|_v∏_⟨ v∈ R,e∈ (R)⟩CX_v,e∏_⟨ v∈∂ L,e∈∂ R⟩ X_e^1-z_v/2)=∑_zK_zρ_ K_z^†=𝒩[ρ_TC].Therefore, we have the Kraus operators as shown in Eq. (<ref>). Notice that the mapping 𝒩[·] defined by the Kraus operators is completely positive. In order to prove that it is a quantum channel, we need to prove that trace-preserving condition: ∑_zK_z^†K_z=1. Because∑_zK_z^†K_z=1/2^N∏_v⟨+|_v∑_{z_v=± 1|v∈∂ L}(∏_⟨ v∈∂ L,e∈∂ R⟩ X_e^1-z_v/2∏_⟨ v∈ R,e∈ R⟩CX_v,e∏_⟨ v∈ R,e∈ (R)⟩CX_v,e∏_⟨ v∈∂ L,e∈∂ R⟩ X_e^1-z_v/2)|+⟩_v=1,the mapping 𝒩[·] is indeed trace-preserving and is a quantum channel, as we expected since V is an isometric matrix.It is interesting to check if the quantum channel satisfies the unital condition:∑_zK_zK^†_z=1. Because∑_z K_zK^†_z=1/2^N∑_{z_v=± 1|v∈∂ L}(∏_⟨ v∈∂ L,e∈∂ R⟩ X_e^1-z_v/2∏_⟨ v∈ R,e∈ R⟩CX_v,e∏_v∈ R1+X_v/2∏_⟨ v∈ R,e∈ (R)⟩CX_v,e∏_⟨ v∈∂ L,e∈∂ R⟩ X_e^1-z_v/2) =∏_v∈ R1+X_vA_v/2,where we use the relation(∏_⟨ v,e⟩CX_v,e)1+X_v/2(∏_⟨ v,e⟩CX_v,e)=1+X_vA_v/2,the quantum channel is not unital in general because it maps the identity matrix to a projector. However, we can restrict to the subspace of the total Hilbert space satisfying the gauge constraint, where the projector (1+X_vA_v)/2 becomes an identity operator, such that the quantum channel 𝒩[·] is unital in this subspace.Moreover, we can also prove that the quantum channel 𝒩[·] has a right inverse under some extra conditions, which means that ∃𝒩^-1[·] such that 𝒩[𝒩^-1[ρ_GH]]=ρ_GH for certain ρ_GH. 
Consider ∑_z',zK_z^'K_z^†ρ_GHK_zK_z^'^†, becauseK_z^'K^†_z=1/2^N(∏_⟨ v∈∂ L,e∈∂ R⟩ X_e^1-z^'_v/2∏_⟨ v∈ R,e∈ R⟩CX_v,e∏_v∈ R1+X_v/2∏_⟨ v∈ R,e∈ (R)⟩CX_v,e∏_⟨ v∈∂ L,e∈∂ R⟩ X_e^1-z_v/2)=1/2^N∏_v∈ R1+X_vA_v/2∏_⟨ v∈∂ L,e∈∂ R⟩ X_e^1-z^'_vz_v/2,we have∑_z',zK_z^'K_z^†ρ_GHK_zK_z^'^† =1/4^N∏_v∈ R1+X_vA_v/2∑_z,z'∏_⟨ v∈∂ L,e∈∂ R⟩ X_e^1-z^'_vz_v/2ρ_GH∏_⟨ v∈∂ L,e∈∂ R⟩ X_e^1-z^'_vz_v/2∏_v∈ R1+X_vA_v/2=ρ_GH.where the last identity is valid ifX_eρ_GHX_e=ρ_GH, ∀ e ∈∂ R,ρ_GHX_vA_v=X_vA_vρ_GH=ρ_GH, ∀ v∈ R.Hence, the quantum channel 𝒩[·] has a right inverse 𝒩^-1[·]=∑_zK_z· K_z^† if the conditions in Eq. (<ref>) are satisfied. And we know the reduced density matrix ρ_GH of any gauge invariant state |Ψ_⟩ (X_vA_v|Ψ_⟩=|Ψ_⟩,∀ v) satisfies the conditions in Eq. (<ref>), and 𝒩^-1[·] is a quantum channel if we only consider the reduced density matrices of gauge invariant quantum states.Let us also consider the relevant symmetry and the null space of the channel operator. It is not difficult to check that the quantum channel satisfies:𝒩[·]=X_e𝒩[·]X_e=𝒩[X_e· X_e],∀ e∈∂ R.Consider an operator O living in the input Hilbert space of the quantum channel, if ∃ e∈∂ R, such that {X_e,O}=0, then 𝒩[O]=𝒩[X_eOX_e]=-𝒩[O], and 𝒩[O]=0. It means that any operator that anti-commutes with the gauge symmetry is in the null space of the quantum channel.Next, let's talk about how to calculate the distillable entanglement entropy of a state in the Hilbert space of ℤ_2 GH model from a corresponding state in the Hilbert space of the TC model. According to the definition of the distillable entanglement entropy, we consider a state P_x|Ψ_⟩=|x⟩⟨x||Ψ_⟩ obtained by fixing physical degrees of freedom in ∂ R to |x⟩=∏_e∈∂ R|x_e⟩:_LP_x|Ψ_⟩⟨Ψ_|=P_xρ_=P_xρ_P_x=P_x𝒩(ρ_)P_x=1/2^N[|x⟩∑_{z_v=± 1|v∈∂ L}(∏_⟨ v∈∂ L,e∈∂ R⟩ x_e^1-z_v/2⟨x|∏_⟨ v∈ R,e'∈ R⟩CX_v,e'∏_v∈ R|+⟩_v)ρ_TC(∏_v∈ R⟨+|_v∏_⟨ v∈ R,e'∈ (R)⟩CX_v,e'|x⟩∏_⟨ v∈∂ L,e∈∂ R⟩ x_e^1-z_v/2)⟨x|]= [|x⟩(∏_⟨ v∈ R,e∈∂ R⟩x_e^1-Z_v/2∏_⟨ v∈ R,e'∈ (R-∂ R)⟩CX_v,e'∏_v∈ R|+⟩_v)⟨x|ρ_TC|x⟩(∏_v∈ R⟨+|_v∏_⟨ v∈ R,e'∈ (R-∂ R)⟩CX_v,e'∏_⟨ v∈ R,e∈∂ R⟩x_e^1-Z_v/2)⟨x|]=Ṽ_xP_xρ_TCP_xṼ^†_x=Ṽ_x_L(P_x|Ψ_⟩⟨Ψ_|P_x)Ṽ^†_x,where Ṽ_x=∏_⟨ v∈ R,e∈∂ R⟩x_e^1-Z_v/2∏_⟨ v∈ R,e'∈ (R-∂ R)⟩CX_v,e'∏_v∈ R|+⟩_v satisfying Ṽ^†_xṼ_x=1 is an isometric operator which applies on physical degrees of freedom in (R-∂ R). Eq. (<ref>) tells usρ_,x=Ṽ_xρ_,xṼ^†_x, p_x=⟨Ψ_|P_x|Ψ_⟩=⟨Ψ_|P_x|Ψ_⟩,whereρ_,x=_L(P_x|Ψ_⟩⟨Ψ_|P_x)/p_x,ρ_,x=_L(P_x|Ψ_⟩⟨Ψ_|P_x)/p_x.So we have S(ρ_,x)=S(ρ_,x) according to Eq. (<ref>), and we can calculate the distillable entanglement entropy of a state living in the Hilbert space of the ℤ_2 GH model by fixing degrees of freedom of a corresponding TC state in ∂ R: P_x|Ψ_⟩.Finally, let us prove ρ_ has an extensive 2^N-1 fold degeneracy observed in Fig. <ref> (c) and (d) at θ=0, where |Ψ_⟩ has the symmetry of Wilson loops: W^Z_C|Ψ_⟩=|Ψ_⟩, ∀ C, and W^Z_C=∏_e∈ CZ_e is a Wilson loop operator and C is a closed loop along the lattice. The reduced density matrix has the symmetry of open Wilson loops: [W^Z_C/2,ρ_]=0, ∀ C/2, where C/2 is an open loop whose two ends e,e'∈∂ R. BecauseW^Z_C/2P_xρ_ W^Z_C/2=P_x'W^Z_C/2ρ_ W^Z_C/2=P_x'ρ_,where |x'⟩=Z_e∈∂ RZ_e'∈∂ R|x⟩. From Eqs. (<ref>) and  (<ref>), it can be found that p_x=p_x' and ρ_x=W_C/2^Zρ_x'W_C/2^Z. Therefore ρ_x and ρ_x' have the same spectrum. Since [W_C/2^Z.∏_e∈∂ R X_e]=0, |x⟩ and |x'⟩ have the same parity. Therefore the spectra of the blocks with the same parity are identical, and the ES of ρ_ has 2^N-1-fold degenerate. 
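Before moving on, the toy sketch below illustrates the decomposition used above: given a block-diagonal reduced density matrix specified by probabilities p_x and sector density matrices ρ_x, it evaluates the Shannon part, the distillable von Neumann part, their sum (the total entanglement entropy), and the Rényi-1/2 variant S_D,1/2 = log Σ_x p_x (Tr√ρ_x)^2. The random blocks and sector dimensions are placeholders standing in for the sector density matrices actually obtained from the (i)PEPS.

import numpy as np

def von_neumann(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-14]
    return float(-(w * np.log(w)).sum())

def random_block(dim, rng):
    A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

rng = np.random.default_rng(0)
dims = [4, 4, 2, 2]                       # placeholder sector dimensions
p = rng.random(len(dims)); p /= p.sum()   # probabilities p_x of the X-outcomes
blocks = [random_block(d, rng) for d in dims]

S_shannon = float(-(p * np.log(p)).sum())
S_D = float(sum(px * von_neumann(rho) for px, rho in zip(p, blocks)))
S_total = S_shannon + S_D                 # equals S(rho) for block-diagonal rho
S_D_half = float(np.log(sum(
    px * np.sum(np.sqrt(np.clip(np.linalg.eigvalsh(rho), 0.0, None)))**2
    for px, rho in zip(p, blocks))))
print(S_shannon, S_D, S_total, S_D_half)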
Notice that the gauge symmetry {X_e|e∈∂ R} is from the local gauge constraint crossing the entanglement cut and tracing the L part, see Fig. <ref>a, the system with an open boundary does not necessary have this symmetry and the extensive degeneracy in low energy. Moreover, if we choose |Ψ_⟩ as an eigenstate of ∏_e∈∂ RX_e, then ρ_ only contains blocks with even or odd parity, and we can say that ρ_ satisfies the so-called entanglement equaipartition <cit.>.§ EFFECTIVE ENTANGLEMENT HAMILTONIANS OF THE TC MODEL FROM PERTURBATION THEORY In this section, we derive the entanglement Hamiltonians of all MES of the toric code by combining the tensor networks and perturbation theory. Before deriving the entanglement Hamiltonian, we define some conventions of the tensors:< g r a p h i c s > =δ_i_1,i_2δ_i_2,i_3δ_i_3,i_4⋯δ_i_n-1,i_n,< g r a p h i c s > =δ_(i_1+i_2+⋯ i_n)mod2,0,< g r a p h i c s > =δ_i_1,i_2δ_i_3,i_4.The entries of the black dot tensor are 1 if all legs are equal. The entries of the black open circle tensor are 1 if the total parity of all legs is even; otherwise, the entries are 0.If two straight lines cross, it represents a tensor product of two identity matrices. With these definitions, we can define the single-line PEPS for four MES of the TC model at h=0 <cit.>, as shown in Figs. <ref>a and  <ref>b.The reduced density ρ matrix of an injective PEPS |Ψ⟩ contains a large null space, an isometric transformation 𝒱 can be found to transform ρ to the low entanglement energy subspace (spanned by eigenvectors of ρ with non-zero eigenvalues). The transformed reduced density matrix can be expressed in terms of left and right fixed points σ_L and σ_R of the PEPS transfer matrix <cit.>:𝒱ρ𝒱^†=𝒱(_L|Ψ⟩⟨Ψ|)𝒱^†∝√(σ_R^T)σ_L√(σ_R^T),where the isometry 𝒱satisfies 𝒱𝒱^†=1 and 𝒱^†𝒱 is a projector onto the low entanglement energy subspace, and [𝒱^†𝒱,ρ]=0.Following Ref. <cit.>, we know that the transfer matrix of the toric code PEPS at h=0 can be expressed as 𝕋=(1_2^⊗ N⊗1_2^⊗ N+Z^⊗ N⊗ Z^⊗ N). Because the PEPS of the toric code model at h=0 is ℤ_2 injective, the left and right fixed points of the transfer matrix are equal and two-fold degenerate: σ_L=σ_R=1_2^⊗ N or Z^⊗ N. Because the ground states of the TC model are four-fold degenerate, there are four reduced density matrices ρ_^α,(0),α=1,e,m or f at h=0, which can be constructed from the superposition of the degenerate transfer matrix fixed points <cit.>:𝒱^1_ρ^1,(0)_𝒱^1†_=𝒱^e_ρ^e,(0)_𝒱^e†_=1/2^N(1_2^⊗ N+Z^⊗ N)=1/2^N-1P_+, 𝒱^m_ρ^m,(0)_𝒱^m†_=𝒱^f_ρ^f,(0)_𝒱^f†_=1/2^N(1_2^⊗ N-Z^⊗ N)=1/2^N-1P_-, where 𝒱_^α defined in Figs. <ref>c and  <ref>d are nothing but half of the PEPS. These operators satisfy 𝒱^α_𝒱^α†_=P_±=(1^⊗ N± Z^⊗ N)/2, which is equivalent to an identity matrix in the even or odd virtual subspace. And 𝒱^α†_𝒱^α_ is a projector from the 2D physical space to the 1D even or odd virtual subspace. So, these operators can still be viewed as isometries when restricted to low entanglement energy subspace with even or odd parity. Next, Let us consider the first-order perturbed wavefunction:|Ψ^α_⟩=[1+h_x/4∑_e X_e+h_z/4∑_e Z_e+O(h^2)]|TC_α⟩,where |TC_α⟩ is a ground state at h_x=h_z=0 in terms of MES, see Figs. <ref>a and b. The reduced density matrix can be written asρ^α_=_L|Ψ_^α⟩⟨Ψ_^α|=ρ^α,(0)_+ρ^α,(1)_+O(h^2),whereρ^α,(0)_=_L|TC_α⟩⟨TC_α|,ρ^α,(1)_=h_x/4_L∑_e∈ E(X_e|TC_α⟩⟨TC_α|+h.c.)+h_z/4_L∑_e ∈ E(Z_e|TC_α⟩⟨TC_α|+h.c.).Here, we also need an isometry to project onto the low-entanglement energy subspace. 
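As a concrete counterpart to these graphical conventions, the sketch below builds the "black dot" (delta) and "open circle" (ℤ_2-parity) tensors as plain numpy arrays; the rank (four) and the leg ordering are chosen here purely for illustration.

import numpy as np
import itertools

def delta_tensor(rank, dim=2):
    """'Black dot': entry 1 iff all legs carry the same index."""
    T = np.zeros((dim,) * rank)
    for i in range(dim):
        T[(i,) * rank] = 1.0
    return T

def parity_tensor(rank):
    """'Open circle': entry 1 iff the total Z2 parity of all legs is even."""
    T = np.zeros((2,) * rank)
    for idx in itertools.product(range(2), repeat=rank):
        if sum(idx) % 2 == 0:
            T[idx] = 1.0
    return T

d4 = delta_tensor(4)       # rank-4 dot entering the single-line PEPS
p4 = parity_tensor(4)
print(d4.sum(), p4.sum())  # 2 and 8 nonzero entries, respectively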
The previous isometry 𝒱^α†_ cannot do this exactly because 𝒱^α,†_𝒱^α_ and ρ^α_ are only approximately commute. However, from the Schrieffer–Wolff transformation <cit.>, we know that the error of using 𝒱^α†_ as an approximate isometry is O(h^2), so we have:𝒱^α_ρ^α_𝒱^α†_= 𝒱^α_[_L|TC_α⟩⟨TC_α|+h_x/4_L∑_e∈∂ R(X_e|TC_α⟩⟨TC_α|+h.c.)+h_z/4∑_e∈∂ L(_LZ_e|TC_α⟩⟨TC_α|+h.c.)]𝒱^α†_+O(h^2),Notice that in the last two terms, we only sum over edges in ∂ L or ∂ R because other terms do not contribute to the zeroth and the first orders. For example,𝒱^α__L( X_e|TC_α⟩⟨TC_α|)𝒱^α†_=0,∀ e∈ (E-∂ R), because 𝒱^α_ B_p=𝒱^α_, B_p|TC_α⟩=|TC_α⟩ and {B_p,X_e}=0, e∈ p, where E is a set of all edges. For the same reason, 𝒱^α__L(Z_e|TC_α⟩⟨TC_α|)𝒱^†_α=0,∀ e∈ (E-∂ L).Let us consider the non-zero contributions. For a Z_e∈∂ L, it is convenient to transform it to the R part using the relation B_p|_α⟩=|_α⟩:< g r a p h i c s > .Then we can use the isometries shown on Figs. <ref>c and d to transform above three Z operators as well as a X_e,e∈∂ R to the effective low entanglement energy space (virtual level of the single line tensor networks):< g r a p h i c s > , < g r a p h i c s > .Then, the terms of the effective entanglement Hamiltonians can be derived from the above relations:𝒱^1__L( X_e|TC_1⟩⟨TC_1|)𝒱^1†_ =X_iX_i+1𝒱^1__L( |TC_1⟩⟨TC_1|)𝒱^1†_=X_iX_i+1P_+/2^N-1, 𝒱^m__L( X_e|TC_m⟩⟨TC_m|)𝒱^m†_ =X_iX_i+1𝒱^m__L(|TC_m⟩⟨TC_m|)𝒱^m†_=X_iX_i+1P_-/2^N-1, 𝒱^e__L(X_e|TC_e⟩⟨TC_e|)𝒱^e†_ =(-1)^δ_N,iX_iX_i+1𝒱^e__L(|TC_e⟩⟨TC_1|)𝒱^e†_=(-1)^δ_N,iX_iX_i+1P_+/2^N-1, 𝒱^f__L( X_e|TC_f⟩⟨TC_f|)𝒱^f†_ =(-1)^δ_N,iX_iX_i+1𝒱^f__L( |TC_f⟩⟨TC_f|)𝒱^f†_=(-1)^δ_N,iX_iX_i+1P_-/2^N-1, 𝒱^α__L( Z_e|TC_α⟩⟨TC_α|)𝒱^α†_ =Z_i 𝒱^α__L( |TC_α⟩⟨TC_α|)𝒱^α†_=Z_iP_±/2^N-1.Notice that the horizontal virtual Z string shown in Fig. <ref>d introduces a minus sign when transforming a X_e,e∈∂ R to the effective low entanglement space. Therefore, the reduced density matrices in the low entanglement energy space can be expressed as𝒱_^1ρ^1_𝒱_^1† = P_+/2^N-1(1 +h_x/2∑_i=1^NX_iX_i+1 +h_z/2∑_i=1^NZ_i +O(h^2)),𝒱_^mρ^m_𝒱_^m† = P_-/2^N-1(1 +h_x/2∑_i=1^NX_iX_i+1 +h_z/2∑_i=1^NZ_i +O(h^2)), 𝒱_^eρ^e_𝒱_^e† =P_+/2^N-1(1 +h_x/2∑_i=1^N(-1)^δ_N,iX_iX_i+1 +h_z/2∑_iZ_i +O(h^2)), 𝒱_^fρ^f_𝒱_^f† = P_-/2^N-1(1 +h_x/2∑_i=1^N(-1)^δ_N,iX_iX_i+1 +h_z/2∑_i=1^NZ_i +O(h^2)).The effective entanglement Hamiltonian in each topological sector is H̃^α_E=-log(𝒱_^αρ^α_𝒱_^α†). In the even or odd virtual subspace, the projectors P_± becomes an identity matrix, and we can use the relation log(1+x)=x+O(x^2) to derive the entanglement Hamiltonians:H̃_E,TC^1= [C-∑_i=1^N(h_z/2Z_i+h_x/2X_iX_i+1)+O(h^2)]P_+,H̃_E,TC^e=[C-∑_i=1^N(h_z/2Z_i+(-1)^δ_i,Nh_x/2X_iX_i+1)+O(h^2)]P_+, H̃_E,TC^m= [C-∑_i=1^N(h_z/2Z_i+h_x/2X_iX_i+1)+O(h^2)]P_-,H̃_E,TC^f=[C-∑_i=1^N(h_z/2Z_i+(-1)^δ_i,Nh_x/2X_iX_i+1)+O(h^2)]P_-,where C=(N-1)log2. The method we use for deriving the entanglement Hamiltonian is similar to that shown in Ref. <cit.>. Moreover, we notice that the entanglement cut in Ref. <cit.> has a π/4 angle with the lattice of the TC model, and the resulting entanglement Hamiltonian is still approximately the Ising model. The difference is that the coefficients of the Ising model in Ref. <cit.> are proportional to h^2_x and h_z^2, not h_x and h_z. 
Furthermore, if we add the perturbations ∑_(e,e')∈ pX_e X_e^' and ∑_(e,e')∈ vZ_e Z_e^' to the Hamiltonian of the TC model, where (e,e')∈ p [(e,e')∈ v] means a pair of edges in the same plaquette [vertex], they will results in the terms ∑_iX_iX_i+2 and ∑_iZ_iZ_i+1 in the low entanglement subspace, and the entanglement Hamiltonian will be described by the anisotropic next-nearest-neighboring Ising model.§ EFFECTIVE ENTANGLEMENT HAMILTONIANS OF THE ℤ_2 GH MODEL FROM THE QUANTUM CHANNEL In this section, we derive the entanglement Hamiltonians of the ℤ_2 GH model via applying the quantum channel 𝒩[·] onto the reduced density matrix ρ^α_ of the TC model. At first, consider the zeroth order, we have 𝒩[ρ^α,(0)_]=ρ^α,(0)_=ρ_^α,(0)⊗∏_v∈ R|+⟩_v⟨+|_v. Then, Let us consider the h_x term shown in Eq. (<ref>) in the first order:𝒩[∑_e∈∂ RX_eρ^α,(0)_]=∑_e∈∂ RX_e𝒩[ρ^α,(0)_]=∑_e∈∂ RX_eρ^α,(0)_⊗∏_v∈ R1+X_v/2.We have used that 𝒩[X_e·]=X_e𝒩[·],∀ e∈ R. Next, let us consider a term _L Z_e∈∂ L|_α⟩⟨_α| from the first order in Eq. (<ref>) and a plaquette p crossing the entanglement cut, it can be found that{X_e∈ (∂ R∩ p),_L Z_e(∈∂ L∩ p)|_α⟩⟨_α|} = {X_e∈ (∂ R∩ p), (∏_e∈[p-(∂ L ∩ p)]Z_e)_L|_α⟩⟨_α|}=0,where we use the relation [X_e∈∂ R,|_α⟩⟨_α|]=0. Using the property (ii) of the quantum channel 𝒩[·] shown in Eq. (<ref>), we have 𝒩[_L Z_e∈∂ L|_α⟩⟨_α|]=0. For the same reason, 𝒩[Z_e∈∂ Rρ^α,(0)_]=0.For e∈(R-∂ R), we have𝒩[Z_eρ^α,(0)_]=Z_v(e)Z_eZ_v'(e)𝒩[ρ^α,(0)_]=Z_v(e)Z_eZ_v'(e)ρ^α,(0)_⊗∏_v∈ R1+X_v/2.So the reduced density matrix of ℤ_2 GH model can be expressed asρ_^α=[ρ^α,(0)_+h_x/4∑_e∈ R(X_eρ^α,(0)_+h.c.)+h_z/4∑_e∈(R-∂ R)(Z_v(e)Z_eZ_v'(e)ρ^α,(0)_+h.c.)]⊗∏_v∈ R1+X_v/2 +O(h^2).Similar to the toric code case, we can transform ρ^α_ to the low entanglement energy subspace using the isometries 𝒱_^α=𝒱_^α⊗∏_v∈ R⟨+|_v where 𝒱_^α is defined in Figs. <ref>c or  <ref>d:𝒱_^1ρ^1_𝒱_^1† = P_+/2^N-1(1 +h_x/2∑_i=1^NX_iX_i+1 +O(h^2)),𝒱_^mρ^m_𝒱_^m† = P_-/2^N-1(1 +h_x/2∑_i=1^NX_iX_i+1 +O(h^2)), 𝒱_^eρ^e_𝒱_^e† =P_+/2^N-1(1 +h_x/2∑_i=1^N(-1)^δ_N,iX_iX_i+1 +O(h^2)),𝒱_^fρ^f_𝒱_^f†= P_-/2^N-1(1 +h_x/2∑_i=1^N(-1)^δ_N,iX_iX_i+1 +O(h^2)).We use Eq. (<ref>) to obtain the above expressions. Taking minus logarithmic, we obtain the effective entanglement Hamiltonians:H̃_E,^1= [(N-1)log(2)-∑_i=1^Nh_x/2X_iX_i+1+O(h^2)]P_+,H̃_E,^e=[(N-1)log(2)-∑_i=1^N(-1)^δ_i,Nh_x/2X_iX_i+1+O(h^2)]P_+, H̃_E,^m= [(N-1)log(2)-∑_i=1^Nh_x/2X_iX_i+1+O(h^2)]P_-,H̃_E,^f=[(N-1)log(2)-∑_i=1^N(-1)^δ_i,Nh_x/2X_iX_i+1+O(h^2)]P_-.Compared to Eq. (<ref>), the h_z term disappears because the quantum channel does not allow it. One may notice that the dominant parts of these entanglement Hamiltonians are classical. Actually, they are related to the probability distribution. Considering the trivial topological sector, because 𝒱^1_X_e∈∂ R𝒱^1†_=X_iX_i+1, it implies that the eigenstate of the H̃_E,^1 can be labeled by x, and the ES can be denoted as {ϵ^1_x=(N-1)log 2-h_x∑_e∈∂ Rx_e/2+O(h^2)}. Compared with the probability distribution:p^1_x=(P_x|Ψ^1_GH⟩⟨Ψ^1_GH|)=1/2^N-1(1+∑_ex_e h_x/2)+O(h^2),we have p^1_x=e^-ϵ^1_x+O(h^2).We can also derive the entanglement Hamiltonian of the ℤ_2 GH model at h→∞ using the quantum channel 𝒩[·]. The corresponding ground state of the TC model is |Ψ_⟩=∏_e |θ⟩_e, where |θ⟩=cos(θ/2)|↑⟩+sin(θ/2)|↓⟩. The reduced density matrix is ρ_=∏_e∈ R|θ⟩_e⟨θ|_e. 
So ρ_=𝒩[ρ_], and we can find the isometry 𝒱_ transforming ρ_ to the low entanglement energy subspace:𝒱_=(∏_e∈∂ R1_2)⊗(∏_e∈ R-∂ R⟨θ|_e∏_v∈ R⟨+|_v∏_⟨ v∈ R, e∈ (R-∂ R)⟩CX_v,e).Then the reduced density matrix of the ℤ_2 GH model in the low entanglement energy space is:𝒱_ρ_𝒱_^†=𝒱_𝒩(ρ_)𝒱_^†=1/2^N∑_{z_v=± 1|v∈∂ L}(∏_⟨ v∈∂ L,e∈∂ R⟩ X_e^1-z_v/2∏_e∈∂ R|θ⟩_e⟨θ|_e∏_⟨ v∈∂ L,e∈∂ R⟩ X_e^1-z_v/2).It commutes with X_e,e∈∂ R, so it is diagonal in X basis. Using the relation ⟨x|θ⟩=[cos(θ/2)+xsin(θ/2)]/√(2), where |x⟩ is an eigenstate of X, we have𝒱_ρ_𝒱_^†=∑_x|x⟩⟨x|𝒱_𝒩(ρ_)𝒱_^†|x⟩⟨x|=1/2^N∑_x∏_i=1^N(1+x_isinθ)|x⟩⟨x|=1/2^N∏_i=1^N[1+sin(θ)X_i].Finally, the entanglement Hamiltonian isH̃_E,= -log[𝒱_ρ_𝒱_^†]=-∑_i=1^N log[1+sin(θ)X_i]+Nlog(2)=-∑_i=1^Nlog{cosθexp [arctanh(sinθ)X_i]}+Nlog(2)= -∑_i=1^N[arctanh(sinθ) X_i+log(cosθ/2)].When θ=0, the lowest entanglement energy is Nlog(2), and it is 2^N-fold degenerate. When θ=π/2, we have lim_θ→π/2[arctanh(sinθ)+log[(cosθ)/2]]=0, so the lowest etanglement energy is 0 and it is not degenerate.§ IPEPS ANSATZ FOR THE TC MODEL AND CALCULATING REDUCED DENSITY MATRICESIn this section, we show the technical details of calculating the ground states and reduced density matrices of the TC model using tensor network states. We approximate a ground state using the iPEPS ansatz proposed in Ref. <cit.>. The iPEPS has a 2×2 unit cell, and it is parameterized by two rank-5 tensors with the virtual bond dimension D and the physical dimension d=2, as shown in Figs. <ref>a and  <ref>b.We impose the square lattice symmetry onto the tensors such that the A and B tensors are related by a π/2 rotation: A_ijkl^p=B_lijk^p, and they have the reflection symmetry A^p_ijkl=A^p_jilk=A^p_lkji, B^p_ijkl=B^p_jilk=B^p_lkji. Moreover, we also use real tensors instead of complex tensors for simplicity. In the toric code phase, we impose the virtual ℤ_2 symmetry onto the tensors:< g r a p h i c s >where Z_D=diag(1,1) for D=2 and diag(1,-1,-1) for D=3. In the trivial phase, we do not impose the virtual ℤ_2 symmetry.Following Ref.<cit.>, we contract (the squared norm of) the iPEPS using the CTMRG (corner transfer matrix renormalization group) algorithm, where the environment of iPEPS is approximated by corner and edge tensors with a bond dimension χ. The energy expectation value can be calculated from the environment. Since A and B tensors are related by a π/2 rotation, the iPEPS is actually parameterized by the A tensor, we also use the CTMRG algorithm to calculate the energy gradient with respect to the tensor A. We also impose all symmetries of the A tensor to the energy gradient. Given the energy expectation value and its gradient, we use the BFGS (Broyden–Fletcher–Goldfarb–Shanno) algorithm to minimize the energy expectation values. When the optimization is converged, we have an iPEPS approximating the ground state of the TC model.After the optimization, we can calculate the reduced density matrix. It is convenient to calculate the reduced density matrix (in low entanglement subspace) from the fixed points of the iPEPS transfer matrix. As shown in Fig. <ref>d, the transfer matrix is chosen to be consistent with that of the entanglement cut. The transfer matrix 𝕋 can be decomposed as a product of 𝕋_A and 𝕋_B. Considering the shape of the transfer matrix, we use the iTEBD (infinite time evolution block decimation) algorithm to approximate the left fixed point σ_L and the right fixed point σ_R in terms of iMPS, see Figs. <ref>e and <ref>f. 
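The step from the boundary fixed points to the spectrum is plain linear algebra, so we record it as a short sketch. Here σ_L and σ_R are passed as dense matrices standing in for the iMPS/iMPO fixed points produced by iTEBD (the random positive matrices in the example are purely illustrative), the cut density matrix is formed as √(σ_R^T) σ_L √(σ_R^T) following the relation quoted earlier, and the virtual parity projector P_±=(1±Z_D^{⊗N})/2 can be inserted to resolve the topological sectors.

import numpy as np
from functools import reduce

def psd_sqrt(M):
    # square root of a (numerically) positive semidefinite matrix
    w, V = np.linalg.eigh((M + M.conj().T) / 2)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

def entanglement_energies(sigma_L, sigma_R, ZD=None, parity=+1):
    # spectrum of rho ~ sqrt(sigma_R^T) sigma_L sqrt(sigma_R^T), optionally
    # restricted to the even/odd virtual parity sector with P = (1 +/- ZD^{x N})/2
    s = psd_sqrt(sigma_R.T)
    rho = s @ sigma_L @ s
    if ZD is not None:
        D = ZD.shape[0]
        N = round(np.log(rho.shape[0]) / np.log(D))
        P = 0.5 * (np.eye(rho.shape[0]) + parity * reduce(np.kron, [ZD] * N))
        rho = P @ rho @ P
    lam = np.linalg.eigvalsh(rho / np.trace(rho))
    return np.sort(-np.log(lam[lam > 1e-12]))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    D, N = 2, 6                                      # illustrative sizes only
    W = rng.standard_normal((D ** N, D ** N))
    sigma_L = W @ W.T                                # random PSD stand-in
    sigma_R = sigma_L                                # reflection relation between fixed points
    ZD = np.diag([1.0, -1.0])
    print(entanglement_energies(sigma_L, sigma_R, ZD, parity=+1)[:5])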
The bond dimension χ we use in the iTEBD algorithm is the same as that in the CTMRG algorithm. Since A and B tensors have reflection symmetry, σ_L and σ_R have the relation shown in Fig. <ref>g.We can calculate the spectrum of ρ_ in the following way:ρ_≃√(σ_R^T)σ_L√(σ_R^T)/(√(σ_R^T)σ_L√(σ_R^T))≃σ_Lσ_R^T/(σ_Lσ_R^T),where the ≃ means that two matrices are related by an isometry or a similarity transformation such that they have the same spectrum. Notice that the boundary iMPS can be reshaped as an iMPO (infinite matrix product operator) with physical bond dimension D and virtual bond dimension χ. In order to calculate the ES, we have to consider σ_L and σ_R with a finite circumference N. So, we can construct a finite MPO using the unit cell tensors of the iMPO. This approximation is valid if the correlation length of the iPEPSξ≪ N.In the toric code phase, there are four degenerate ground states in terms of MES. They can be obtained from the virtual ℤ_2 symmetry of iPEPS. The iMPO fixed points σ_L and σ_R inherit the virtual ℤ_2 symmetry of the iPEPS, hence we have [σ_L,Z_D^⊗ N]=0 and [σ_R,Z_D^⊗ N]=0.Define the projector P_±=(1± Z_D^⊗ N)/2, the ES of each topological sector can be obtained from the following operators:σ_1=⋮⋮ < g r a p h i c s > , σ_m=⋮⋮ < g r a p h i c s > ,σ_e=⋮⋮ < g r a p h i c s > ,σ_f= ⋮⋮ < g r a p h i c s > .where the definition of the red dot tensor is< g r a p h i c s > =1_D, < g r a p h i c s > =Z_D;< g r a p h i c s > ,< g r a p h i c s > .The last two equations show that V_R,Z can be obtained from an eigenequation, and we should use the canonical gauge of the iMPS. V_L,Z can be obtained similarly. Taking the normalization into consideration, the eigenvalues of σ_α/(σ_α) give rise to the ES.Notice that the ES depends on the bond dimensions (D,χ) and the circumference N. So, we need to analyze the finite-bond dimension effect and finite circumference effect. In Fig. <ref>, we show the ES with fixed MPO circumference 2N=12 at h_x=h_z=0.16 and different bond dimensions. In Fig. <ref>, we fix the bond dimensions to (D,χ)=(2,40) and (D,χ)=(3,40) and change N. With increasing N, the spectra do not converge to the CFT predictions. This is reasonable because we use a finite bond dimension variational iPEPS, the duality symmetry is only approximately ( because e and m sectors are not perfectly the same) and we are slightly away from the criticality. Overall, these spectra are still close to the Ising CFT prediction, and it implies that the ES for a considerably large field is still described by the Ising CFT. § TENSOR NETWORK METHOD FOR CALCULATING THE SUBBLOCK ENTANGLEMENT SPECTRUM OF THE ℤ_2 GH MODELIn this section, we show how to calculate the entanglement spectrum of the subblock reduced density matrix ρ_,x of the ℤ_2 GH model from the transfer matrix fixed points of the TC model. According to Eq. (<ref>), ρ_,x and ρ_,x have the same spectrum, so we just consider ρ_,x, which is the reduced density matrix of P_x|Ψ_⟩. Because _LP_x|Ψ_⟩⟨Ψ_|P_x=(_L⟨x||Ψ_⟩⟨Ψ_||x⟩)⊗|x⟩⟨x|, we can ignore |x⟩⟨x| which doesn't affect the entanglement space. Notice that ⟨x||Ψ_⟩ can be decomposed into three parts:⟨x||Ψ_⟩=Ψ_LT^x_AΨ_L^†= < g r a p h i c s > ,where Ψ_L is a matrix whose row index is the collection of all physical indices at ∂ L and column index is the collection of all virtual indices at the entanglement cut. Reshaping a column of A tensors whose physical indices are fixed to eigenstates of X operator to a matrix gives rise to T^x_A. From the relation shown in Fig. 
<ref>e, we have _L⟨x||Ψ_⟩⟨Ψ_||x⟩=Ψ^*_LT^x†_Aσ_L T^x_AΨ_L^T, because Ψ_L^†Ψ_L=(𝕋_A𝕋_B)^∞=σ_L. Moreover, according to the method deriving effective reduced density matrix of PEPS <cit.>, we can construct an isometric operator 𝒱=(σ_L^-1/2)^*Ψ_L^T applying on physical edge degrees of freedom in (R-∂ R) such that𝒱Ψ^*_LT^x†_Aσ_L T^x_AΨ_L^T𝒱^†=(σ_L^-1/2)^*Ψ_L^TΨ_L^*T^x†_Aσ_L T^x_AΨ_L^TΨ_L^*(σ_L^-1/2)^T=√(σ_L^T)T_A^x†σ_LT^x_A√(σ^T_L)∼ T_A^x†σ_LT^x_Aσ^T_L=σ_x,where we use the relation σ_L=σ_L^†. So the spectrum of ρ_,x and the spectrum of the operator σ_x are equal up to a normalization factor, and the subblock entanglement spectrum of the ℤ_2 gauge Higgs model can be extracted from the PEPS of the toric code model. When considering the topological sectors of the entanglement spectrum of the ℤ_2 GH model, we just add the defects and projects, similar to the toric code case shown in Eq. (<ref>). In the next section, we use the operator σ_x to extract the total and distillable Rényi entanglement entropies of the ℤ_2 GH model. § TENSOR NETWORK METHOD FOR CALCULATING THE DISTILLABLE RÉNYI ENTANGLEMENT ENTROPYThis section shows how to calculate the distillable Rényi entanglement entropy using tensor networks. It is not straightforward to find the Rényi generalization of the distillable von Neumann entanglement entropy. Fortunately, Ref. <cit.> provides the definition of the distillable Rényi entanglement entropy:S_n(ρ)=1/1-1/nlog∑_xp_x,n^1/n+S_D,n,S_D,n=n/1-nlog∑_xp_xexp[1-n/nS_n(ρ_x)],where p_x,n=⟨x|ρ^n|x⟩/ρ^n. For n=1/2, we have a simplified expression:S_D,1/2=log∑_xp_xexp[S_1/2(ρ_R,x)]=log∑_xp_xexp[2log(√(σ_x))/√(σ_x)]=log∑_xσ_x/∑_x'σ_x'(√(σ_x))^2/σ_x =log∑_x(√(σ_x))^2/∑_xσ_x,where σ_x is defined in Eq. (<ref>). Next, as a warm-up, we show the tensor network method we use to calculate the total Rényi entanglement entropy <cit.>. We consider the total Rényi entanglement entropy of 1 sector, and the Rényi entanglement entropy of other sectors can be calculated in the same way. The 1/2-Rényi entanglement entropy is given by:S_1/2=2logσ^1/2_1-logσ_1=2log∑_x(P_+T_A^x†σ_LT^x_Aσ^T_L)^1/2-log∑_x(P_+T_A^x†σ_LT^x_Aσ^T_L).Because we impose the square lattice symmetry to real iPEPS tensors, we have σ_L=σ_L^T and T_A^x=(T_A^x)^†. So, the above relation can be simplified asS_1/2=2log∑_x(P_+σ_LT^x_A)-log∑_x(P_+σ_LT^x_Aσ_L T^x_A).The terms in the above equation can be expressed as tensor networks:∑_x(P_+σ_L T_A^x)=1/2⋮⋮ < g r a p h i c s > =1/2𝒯_half^N,∑_x(P_+σ_L T_A^xσ_L T_A^x)=1/2⋮⋮ < g r a p h i c s > =1/2𝒯^N,where 𝒯_half (𝒯) is a transfer matrix shown in the dotted blue line box in the first (second) equation. So the 1/2-Rényi entanglement entropy can be expressed asS_1/2=2log𝒯_half^N-log𝒯^N-log2.When the circumference is infinite, we have lim_N→∞𝒯^N=λ∑_i=1^d|R_i⟩⟨L_i|, where λ, L_i(R_i) and d are the dominant eigenvalue, the i-th left (right) dominant eigenvectors and the degeneracy of dominant eigenvalue, respectively. Notice that the dominant eigenvectors satisfy the bi-orthonormal condition ⟨L_i|R_j⟩=δ_i,j. Analogy we have lim_N→∞𝒯_half^N=λ_half∑_i=1^d_half|R_half,i⟩⟨L_half,i|. 
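Before turning to the large-N asymptotics below, the finite-size expressions just displayed can be evaluated directly once the blocks σ_x are known. The following sketch does this from a list of positive matrices; the random 4×4 blocks and the number of outcomes in the example are arbitrary stand-ins for the transfer-matrix contractions shown above.

import numpy as np

def renyi_half_entropies(sigma_blocks):
    # total and distillable 1/2-Renyi entropies from the positive blocks sigma_x
    tr = [float(np.trace(s).real) for s in sigma_blocks]
    tr_sqrt = [float(np.sum(np.sqrt(np.clip(np.linalg.eigvalsh(s), 0, None))))
               for s in sigma_blocks]
    Z = sum(tr)                                      # sum_x tr sigma_x
    S_half = 2 * np.log(sum(tr_sqrt)) - np.log(Z)
    S_D_half = np.log(sum(t * t for t in tr_sqrt) / Z)
    return S_half, S_D_half

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    blocks = []
    for _ in range(8):                               # 8 outcomes x on a tiny cut
        A = rng.standard_normal((4, 4))
        blocks.append(A @ A.T)                       # random PSD stand-ins
    S, SD = renyi_half_entropies(blocks)
    print(S, SD, S - SD)                             # S - S_D is the classical part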
So if N is very large, we haveS_1/2≈ N log (λ_half^2/λ)+log (d^2_half/d)-log 2.Usually, in the topological phase, d_half=d=1, so we know the TEE is log 2.Following the same logic, let us consider the distillable Rényi entanglement entropy in the trivial topological sector:S_D,1/2=log∑_x( P_+σ_LT_A^x)^2-log∑_x(P_+σ_LT_A^xσ_LT_A^x)The first term is the following tensor network:∑_x( P_+σ_LT_A^x)^2=1/4⋮⋮ < g r a p h i c s > =1/4𝒯_1^N,where 𝒯_1 is the transfer matrix in the y direction shown in the blue box. So, the distillable entanglement can be expressed asS_D,1.2=log𝒯_1^N-log𝒯^N-log 2.When the circumference is infinite, we have lim_N→∞𝒯_1^N=λ_1∑_i=1^d_1|R_1,i⟩⟨L_1,i|. So if N is very large, we haveS_D,1/2≈ N log (λ_1/λ)+log (d_1/d)-log 2,and we can extract the topological correction log (d_1/d)-log 2 to the area law.In the trivial phase of the TC model, the iMPO fixed point has the symmetry Z_D^⊗ Nσ_L=σ_LZ_D^⊗ N=σ_L <cit.>. When reshaping σ_L in terms of iMPS, there exists a W_Z according to the fundamental theorem of MPS <cit.>, such that< g r a p h i c s > .Recall the definition of the red tensor in Eq. (<ref>), the transfer matrix 𝒯 can be expressed as a direct sum of two matrices (corresponding to the up and down bonds of the red tensors taking 0 and 1):< g r a p h i c s > = < g r a p h i c s > ⊕ < g r a p h i c s > ,which are related by a similarity transformation W_Z and have the same spectrum. Therefore, each eigenvalue of 𝒯 is at least 2-fold degenerate, and d=2. For the same reason, it can be derived that the transfer matrix 𝒯_1 can be expressed as a direct sum of four matrices< g r a p h i c s > =< g r a p h i c s > ⊕ < g r a p h i c s > ⊕ < g r a p h i c s > ⊕ < g r a p h i c s >=< g r a p h i c s > ⊕ < g r a p h i c s > ⊕ < g r a p h i c s > ⊕ < g r a p h i c s > ,which have the same spectrum because they can be transformed into each other by similarity transformations. So each level of 𝒯_1 is at least four-fold degenerate, and d_1=4. Substituting d and d_1 into Eq. (<ref>), it can be derived that distillable Rényi TEE is 0 in the trivial phase.Let us consider the pure gauge theory without matter field (h_z=0) in the deconfined phase, which corresponds to the toric code phase. Different from the trivial phase, the iMPO fixed point has the symmetry Z^⊗ Nσ_L=σ_LZ^⊗ N≠σ_L <cit.>, by applying the fundamental theorem of MPS, it can be found that:< g r a p h i c s > = < g r a p h i c s > .Moreover, in the pure ℤ_2 gauge theory, we have the ℤ_2 1-form symmetry W^X_C̃=∏_e∈C̃X_e on the physical level, which is equivalent to a closed loop of Z_D operators in the virtual level of PEPS. Based on this observation, it is reasonable to assume that the PEPS tensor satisfies the following relation:< g r a p h i c s > .From Eqs. (<ref>) and (<ref>), it can be found that the terms in Eq. (<ref>) are related by the similarity transformations< g r a p h i c s > = < g r a p h i c s > , < g r a p h i c s > = < g r a p h i c s > .So each level of 𝒯_1 is two-fold degenerate, and d_1=2. Because in the deconfined phase Z^⊗ Nσ_L≠σ_L, the dominant eigenvalue of 𝒯 is not necessarily degenerate, and d=1. From Eq. (<ref>), we have distillable Rényi TEE is zero for pure ℤ_2 gauge theory in the deconfined phase. In addition, if we turn on gauge-matter coupling h_z, the model does not have explicit 1-form symmetry, and the Eq. (<ref>) is not valid anymore. So the spectrum of 𝒯_1 is not necessarily degenerate and d_1=1. Substituting to Eq. 
(<ref>), we know that the distillable Rényi TEE is log 2 for the deconfined ℤ_2 gauge theory coupled to the matter field.We calculate the first and the second dominant eigenvalues of 𝒯_1 and 𝒯 along h=0.15, as shown in Fig. <ref>a. It can be found that when θ≠π/2 (h_z≠ 0), both the dominant eigenvalues of 𝒯 and 𝒯_1 are not degenerate, so d_1=d=1, and the distillable Rényi TEE is log 2, according to Eq. (<ref>). When θ=π/2, the dominant eigenvalues of 𝒯_1 becomes 2-fold degenerate, so d_1=2 and d=1, and distillable Rényi TEE is 0, according to Eq. (<ref>). Notice that dominant eigenvalues of 𝒯 and 𝒯_1 are very close, from Eq. (<ref>), it implies that the density of the distillable Rényi entanglement entropy is very small. The numerical results are consistent with the theoretical analysis.§ ANALYSIS THE GROUND STATE DISTILLABLE ENTANGLEMENT ENTROPY OF THE ℤ_2 GH MODELIn this section, we use tensor networks to provide the physical pictures explaining the reason that the distillable entanglement entropy can be non-zero for the pure ℤ_2 gauge theory and also explaining the origin of the log 2 topological correction to the distillable entanglement entropy in the deconfined phase with non-zero gauge-matter coupling. According to Eq. (<ref>), the distillable entanglement entropy of a ground state of the ℤ_2 GH model can be obtained from the corresponding ground state of the TC model via fixing physical degrees of freedom in ∂ R. So, in order to analyze the distillable entanglement entropy of the ℤ_2 GH model, we start from the so-called double-line PEPS representation of a TC ground state at h_x=h_z=0  <cit.>:|⟩= < g r a p h i c s > ,where the tensors are defined by Eq. (<ref>) and a ℤ_2 string operator of X at the virtual level can pull through freely, which is a necessary condition for an iPEPS having the ℤ_2 topological order <cit.>.First, we consider the distillable entanglement entropy for the pure gauge theory, and Ref. <cit.> conjectures that it is always zero. However, in our calculation, we find that it can be non-zero, so we need to analyze it more carefully and understand why it can be non-zero. Since it is hard to analytically analyze the variationally optimized iPEPS, we can consider the wavefunction from the second-order perturbation theory:|Ψ_(h_x,0)⟩=(1+h_x/4∑_e X_e+h_x^2/16∑_(e,e')∉ pX_e X_e'+h_x^2/8∑_(e,e')∈ pX_e X_e'+O(h_x^3))|TC⟩,where (e,e')∈ p [(e,e')∉ p] means two edges e and e' that do [not] belong to the same plaquette. However, the perturbed wavefunction cannot be directly transformed into an iPEPS, and it is not convenient. Using the relation e^x=1+x+x^2/2+O(x^3), we can exponentiate the terms in the perturbed wavefunction such that Eq. (<ref>) can be expressed as <cit.>:|Ψ_(h_x,0)⟩=[∏_e exp(h_x/4 X_e) ∏_⟨ e e'⟩∈ pexp(h_x^2/16 X_e X_e')+O(h_x^3)] |TC⟩.Ignoring the terms in O(h_x^3), |Ψ_(h_x,0)⟩ can be exactly expressed as an iPEPS.Let us first ignore O(h_x^2) terms, i.e., considering |Ψ̃_(h_x,0)⟩=∏_eexp(h_xX_e/4)|⟩. When fixing the physical degrees of freedom in ∂ R, it can be found that the iPEPS is factorized into disconnected parts:P_x|Ψ̃_(h_x,0)⟩=P_x∏_eexp(h_x/4X_e)|⟩=< g r a p h i c s > = < g r a p h i c s > . So we have S(ρ_,x)=S(ρ_,x)=0, ∀x, where ρ_,x=_L[ P_x|Ψ̃_⟩⟨Ψ̃_|P_x]/p_x and p_x=⟨Ψ̃_|P_x|Ψ̃_⟩, and the distillable entanglement entropy S_D=∑_xp_xS(ρ_,x) is zero. Now let us consider terms with a coefficient the h_x^2/16 in Eq. 
(<ref>), which are two-site non-unitary gates applying within the same plaquette.When fixing the physical degrees of freedom in ∂ R, it can be found that the iPEPS is not factorized: P_x∏_⟨ e,e'⟩∈ pexp(h_x^2/16X_eX_e')|⟩= < g r a p h i c s > =(∏_e∈∂ R x_e)× < g r a p h i c s > ,where there are other two-site gates within the same plaquette that we do not show for simplicity. Since the two-site gates shown in Eq. (<ref>)connects left and right parts of the factorized iPEPS in the bottom layer, the entanglement entropy of a sector S(ρ_,x)=S(ρ_,x) as well as the distillable entanglement S_D=∑_xp_xS(ρ_,x) can be non-zero. We can roughly estimate the 1/2-Rényi distillable entanglement by simply decomposing the two-site gate:E_D,1/2=lim_N→∞S_D,1/2/N≈ 2log(√(cosh^2(h_x^2/16)/cosh(h_x^2/8))+√(sinh^2(h_x^2/16)/cosh(h_x^2/8)))where the hyperbolic functions are from exp(h_x^2 X_1X_2/16)=cosh(h_x^2/16)+sinh(h_x^2/16)X_1X_2. We also numerically calculate the distillable entanglement entropy of the perturbed iPEPS |Ψ_(h_x,0)⟩=V|Ψ_(h_x,0)⟩using a boundary iMPS with a bond dimension χ=20, where V is given in Eq. (<ref>) and |Ψ_(h_x,0)⟩ is shown in Eq. (<ref>) [O(h^3) terms are ignored].The result is shown in Fig. <ref>b, where we also compare with the results of the variationally optimized iPEPS and the rough estimation from in Eq. (<ref>). It can be found that the dominant part of the distillable entanglement of the perturbed iPEPS in Eq. (<ref>) is indeed from these two-site non-unitary gates, and for small h_x, the results of the variationally optimized iPEPS and the perturbed iPEPS are comparable. Because the non-zero distillable entanglement comes from the upper layer non-unitary gates, and the virtual ℤ_2 string operator in the bottom layer disappears at the entanglement cut when pulling through from the left part to the right part, as shown in Eq. (<ref>), S(ρ_,x) satisfies the area law without a topological correction: S(ρ_,x)=c_xN, ∀x, where c_x is a non-universal coefficient. Hence the distillable entanglement entropy S_D also satisfies area law without a topological correction: S_D=(∑_xp_xc_x)N. We conclude that the X measures at ∂ R totally destroy the long-range entanglement along the entanglement cut. However, a portion of short-range entanglement can still be retained by the two-site non-unitary gates in the top layer.Next, let us explain the origin of the log 2 correction to the ground state distillable entanglement entropy of the ℤ_2 GH model when the gauge-matter coupling h_z is non-zero. It is enough to consider the case h_x=0 and h_z≠0, and we still analyze the ground state distillable entanglement entropy of the ℤ_2 GH model from the corresponding ground state of the TC model. Similar to Eq. (<ref>), we consider a first-order exponentiated perturbed iPEPS of the TC model: |Ψ̃_(0,h_z)⟩=∏_eexp(h_z Z_e/4)|⟩, which is an approximate ground state of the Hamiltonian H(0,h_z) when h_z is small. To derive the distillable entanglement entropy of |Ψ̃_(0,h_z)⟩=V|Ψ̃_(0,h_z)⟩, we fix the physical degrees of freedom of |Ψ_(0,h_z)⟩ at ∂ R:P_x∏_eexp(h_z/4 Z_e)|⟩=< g r a p h i c s > = < g r a p h i c s > = < g r a p h i c s > ,where M_x_e=1/√(2)([ exp(h_z/4) x_eexp(-h_z/4); x_eexp(-h_z/4) exp(h_z/4) ]). When h_z=0, M_x_e=√(2)|x_e⟩⟨x_e|, so the iPEPS in Eq. (<ref>) factorizes into two parts, and S(ρ_,x)=S(ρ_,x)=0. However, when h_z≠ 0, the iPEPS in Eq. 
(<ref>) doesn't factorize, so the virtual ℤ_2 symmetry string operator does not disappear at the entanglement cut and can pull through from left to right, which are necessary conditions for the iPEPS in Eq. (<ref>) possessing the long-range entanglement at the entanglement cut. From the calculation based the perturbation theory in Ref. <cit.>, S(ρ_,x)=S(ρ_,x)=c_xN-log2 when h_z≠0. So the distillable entanglement entropy of |Ψ̃_(0,h_z)⟩ is S_D=∑_xp_xS(ρ_,x)=∑_xp_x(c_xN-log2)=∑_x(p_xc_x)N-log2, where there is a log(2) topological correction. Therefore, we find that because the gauge-matter coupling terms in Eq. (<ref>) do not commute with P_x, which prevents the X measurements at ∂ R from destroying the long-range entanglement along the entanglement cut. Like the log2 correction to the usual entanglement entropy, the log2 topological correction to the distillable entanglement entropy also originates from the underlying ℤ_2 topological order.
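The mechanism described in this section can already be seen on the single-edge matrices. The snippet below builds M_x as defined above and inspects its singular values: at h_z=0 it has rank one, so fixing the boundary outcomes factorizes the bottom layer across the cut, while any h_z≠0 makes it full rank, which is what lets the virtual ℤ_2 string, and with it the log 2 correction, survive the X measurements. It also evaluates the rough per-site estimate of the 1/2-Rényi distillable entanglement quoted earlier for the pure-gauge case; the numerical values of h_z and h_x are illustrative only.

import numpy as np

def M(x, hz):
    # single-edge boundary matrix after fixing the outcome x on the cut
    return np.array([[np.exp(hz / 4), x * np.exp(-hz / 4)],
                     [x * np.exp(-hz / 4), np.exp(hz / 4)]]) / np.sqrt(2)

def renyi_half_density(hx):
    # rough per-site distillable 1/2-Renyi entanglement of |Psi(hx, 0)>,
    # coming only from the plaquette gates exp(hx^2 X X / 16)
    a, b = np.cosh(hx ** 2 / 16), np.sinh(hx ** 2 / 16)
    p0, p1 = a ** 2 / np.cosh(hx ** 2 / 8), b ** 2 / np.cosh(hx ** 2 / 8)
    return 2 * np.log(np.sqrt(p0) + np.sqrt(p1))

if __name__ == "__main__":
    for hz in (0.0, 0.3):                            # illustrative values only
        sv = np.linalg.svd(M(+1, hz), compute_uv=False)
        print(f"hz = {hz}: singular values {np.round(sv, 4)}")
    # hz = 0: rank one, the bottom layer factorizes across the cut;
    # hz != 0: full rank, the virtual Z2 string survives the X measurements.
    print("per-site E_D estimate at hx = 0.5:", renyi_half_density(0.5))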
http://arxiv.org/abs/2311.16235v1
{ "authors": [ "Wen-Tao Xu", "Michael Knap", "Frank Pollmann" ], "categories": [ "cond-mat.str-el", "quant-ph" ], "primary_category": "cond-mat.str-el", "published": "20231127190002", "title": "Entanglement of Gauge Theories: from the Toric Code to the $\\mathbb{Z}_2$ Lattice Gauge Higgs Model" }
Multi-representation associated to a numbered subbasis ]Multi-representation associated to the numbering of a subbasis and formal inclusion relationsThe author is funded by an Alexander von Humboldt Research Fellowship.We show how the use of a formal inclusion relation associated to a topological (sub)basis, as introduced by Dieter Spreen to study Type 1 computable topological spaces, is also beneficial to the study of represented spaces. We show that different definitions of the multi-representation of a topological space associated to a numbering of a (sub)basis, as considered for instance by Grubba, Weihrauch and Schrder, can be seen as special cases of a more general definition which uses a formal inclusion relation. We show that the use of an appropriate formal inclusion relation guarantees that the representation associated to a computable metric space seen as a topological space always coincides with the Cauchy representation. We also show how the use of a formal inclusion relation guarantees that when defining multi-representations on a set and on one of its subsets, the obtained multi-representations will be compatible, i.e. inclusion will be a computable map. The proposed definitions are also more robust under change of equivalent bases.[ Emmanuel RauzyJanuary 14, 2024 ====================§ INTRODUCTION In order to study computability in areas of mathematics where mathematician freely define very abstract objects, one has first to answer the question: how can a machine manipulate an abstract object? Turing, in his seminal paper <cit.>, gave a first approach to this problem, and he was able to define computable functions of a computable real variable. Kleene's general solution is the idea of realizability: to represent abstract objects by concrete descriptions, thanks to the use of semantic functions which give meaning to a priori inert symbols. A function between abstract objects is then called computable if it can be realized by a computable function on concrete objects. In the Type Two theory of Effectivity (TTE), the considered set of concrete objects is the Baire space ℕ^ℕ, and the notion of computability is given by Turing machines that work with infinite tapes. The semantic functions that map elements of the Baire space to abstract objects are called representations, they were introduced by Kreitz and Weihrauch in <cit.>. This notion was extended by Schrder to multi-representations in his dissertation <cit.>.One of the defining features of TTE is that the study of computable functions is always related to the study of continuity, because, on the Baire space, a function is continuous if and only if it is computable with respect to some oracle. Thanks to Schrder's generalization of Weihrauch's notion of admissible representation <cit.>, this phenomenon can be extended to other topological spaces. One can then investigate in parallel computability and continuity, and reductions between problems, or translations between representations, exist both in terms of computable functions and in terms of continuous functions. A celebrated theorem of Schrder <cit.> characterizes those topological spaces that admit admissible multi-representations as quotient of countably based spaces (qcb-spaces). On such a space, there is a unique admissible multi-representation, up to continuous translation. However, when studying computability, representations are considered up to computable translation. 
One could of course ask to distinguish amongst admissible representations those that are appropriate to study computability -and possibly call them “computably admissible”. However, because a qcb-spaces can have continuously many auto-homeomorphisms, there is no hope of distinguishing a single equivalence class of representation as the correct one to study computability. Indeed, if ρ:⊆ℕ^ℕ→ X is an admissible representation of a topological space X, which we know to be appropriate for studying computability, then for any auto-homeomorphism Θ:X→ X of X, Θ∘ρ will be another representation of X which will have exactly the same properties as ρ. The representation Θ∘ρ is computably equivalent to ρ exactly when Θ is (ρ,ρ)-computable, and thus there can only be countably many of these representations that are computably equivalent. In practice, there is often a single correct choice of a representation on a set, but this comes from the fact that we consider sets that have more intrinsic structure than just a topology. For example, any permutation of ℕ is an auto-homeomorphism of ℕ for the discrete topology, and thus there is no hope of fixing the correct representation of ℕ using only its topology. However, if we ask that addition should be computable, or that the order relation should be decidable, etc, we may end up distinguishing a single representation as the appropriate one. Such questions are linked to the study of computable model theory. As another example, consider ℕ together with a subset A which is not recursively enumerable. In this case, the appropriate representations of ℕ and A that will allow us to study computability cannot be obtained by considering simply A as a topological space. Indeed, A is homeomorphic to ℕ, but an appropriate representation of A in this context makes of it a non-recursively enumerable set. The additional structure on A that is the embedding A↪ℕ imposes restrictions on the correct representation that one wants to study. In this obvious example, one should simply consider a representation of ℕ and its restriction to A. But less trivial instances of this problem do arise.A possible way to equip a topological space X with an additional structure that will permit to distinguish a single class of representations as “computably admissible”, and that has been used in computable analysis since early work of Weihrauch <cit.>, is to consider a numbered basis (𝔅,β) associated to X, i.e. a countable basis 𝔅 for the topology of X together with a partial surjection β:⊆ℕ→𝔅. Note that the crucial point here is that by fixing a numbering of a basis we are already choosing the desired notion of computability. Fixing an abstract basis, as a set and not as a numbered set, would not be sufficient for this purpose. This is a very natural way to proceed because, in practice, when working with explicit topological spaces, there is often an obvious numbering of a basis that stands out as the correct one to study computability. In fact, one can easily remark that in many settings where authors have used numbered bases, one might as well have considered only numbered subbases and obtained the same results. Additionally, because it is important for us to be able to study computability on non-T_0 spaces (for instance to include at least all finite topological spaces in our field of study), we will allow the use of multi-representations. This gives the setting of the present study: a topological space equipped with a numbered subbasis, on which we want to define a multi-representation. 
However, and this is where this article actually begins, even once a numbered subbasis has been fixed on a space, there still are several, all seemingly natural, but sometimes non-equivalent, ways to define a multi-representation associated to this numbered subbasis. Our main purpose is to show that the use of a formal inclusion relation is necessary and sufficient to obtain a robust definition of the multi-representation of a topological space associated to a numbered subbasis.Throughout, we fix a set X, and denote by (𝔅,β) a numbered subbasis for X. This is simply a countable subset of 𝒫(X) equipped with a partial surjection β:⊆ℕ→𝔅. There are two main approaches that authors have used to define a multi-representation ρ:⊆ℕ^ℕ⇉ X associated to the numbered subbasis (𝔅,β).In each case the ρ-name of a point x of X is a sequence (u_n)_n∈ℕ of β-names of basic sets which form a neighborhood basis of x. But with this idea, there are two possible approaches: * The sequence (u_n)_n∈ℕ is asked to contain β-names for sufficiently many basic sets so as to define a neighborhood basis of x. This representation is particularly important since it is the one that was used by Schrder to prove the characterization theorem of topological spaces that admit admissible multi-representations. See <cit.>.* Or the sequence (u_n)_n∈ℕ is asked to contain all the β-names for basic sets that contain x. This was first used in <cit.>, but for a notion of computable topological space that was later abandoned. This is also the definition of Weihrauch and Grubba <cit.>. This is now the common approach: see <cit.>. In Schrder's dissertation <cit.>, both approaches are used, depending on whether the focus is solely continuity (first approach, see Section 3.1.2 in <cit.>, which deals with limit spaces and limit bases), or also computability (second approach, see Section 4.3.6 in <cit.>). We thus define two multi-representations associated to the numbered subbasis (ℬ,β), denoted ρ_β^min:⊆ℕ^ℕ⇉ X and ρ_β^max:⊆ℕ^ℕ⇉ X, defined by:ρ_β^min(f)∋ x∀ n∈Im(f), n∈dom(β) & x∈β(n), ∀ B∈𝔅, x∈ B∃ n∈ℕ, β(f(n))⊆ B; ρ_β^max(f)∋ xIm(f)={n∈dom(β), x∈β(n)}. Both approaches highlighted above are particular cases of a more general definition based on a formal inclusion relation, corresponding respectively to the coarsest and finest formal inclusion relations. We will in fact see that neither of these formal inclusion relation is appropriate to every situation. In particular, in metric spaces, using the formal inclusion of metric spaces is much more natural and gives a representation equivalent to the Cauchy representation even without supposing that the space has a dense and computable sequence. The use of a formal inclusion relation in relation with numbered bases was initiated by Dieter Spreen in <cit.>, in terms of strong inclusion relations. Here we will focus only on those strong inclusion relations that are reflexive. We call them formal inclusion relations instead of “reflexive strong inclusion relations”. This is also a reference to formal inclusions as used in the theory of domain representation <cit.>. By removing the reflexivity condition in the following definition, one obtains exactly the definition of a strong inclusion relation.Let 𝔅 be a subset of P(X), and β:⊆ℕ→𝔅 a numbering of 𝔅. Let ⊆ be a binary relation on dom(β). We say that ⊆ is a formal inclusion relation for (𝔅,β) if the following hold: * The relation ⊆ is reflexive and transitive (i.e. 
⊆ is a preorder); * ∀ b_1,b_2∈dom(β), b_1⊆b_2β(b_1)⊆β(b_2).The general definition of the multi-representation associated to a numbered subbasis based on a formal inclusions is the following: * A sequence (u_n)_n∈ℕ of β-names constitutes a ρ-name of x if it contains sufficiently many basic sets, but sufficiently many with respect to the formal inclusion. More precisely, when the basis (ℬ,β) is equipped with a formal inclusion relation ⊆, we define a multi-representation ρ_β^⊆:⊆ℕ^ℕ⇉ X by: ρ_β^⊆(f)∋ x∀ b_1∈Im(f), b_1∈dom(β) & x∈β(b_1), ∀ b_1∈dom(β), x∈β(b_1)∃ b_2∈Im(f), b_2⊆b_1. If X is a set equipped with a numbered basis (ℬ,β) that admits a formal inclusion relation ⊆, and if x is a point of X, a subset A⊆dom(β) is called a formal neighborhood basis for x if and only if ∀ b∈ A, x∈β(b); ∀ b_1∈dom(β), x∈β(b_1)∃ b_2∈ A, b_2⊆b_1.Thus the ρ_β^⊆-name of a point x is a list of β-names that forms a formal neighborhood basis of x.The definition of ρ_β^⊆ generalizes the two definitions above: * When we take the formal inclusion to be the actual inclusion relation, i.e. b_1⊆b_2β(b_1)⊆β(b_2), ρ_β^⊆ is exactly ρ_β^min. Note that this is the coarsest formal inclusion relation. * When we take the formal inclusion to be equality, i.e. b_1⊆b_2 b_1=b_2, ρ_β^⊆ is exactly ρ_β^max. Note that equality is the finest formal inclusion relation. A possible way to understand the use of a formal inclusion relation in our context is via an informal notion of “information”. The fact that b_1⊆b_2 can be understood as: the statement “x belongs to the set defined defined by b_1” contains more information than the statement “x belongs to the set defined defined by b_2”. The amount of information contained in the statement “x belongs to the set defined defined by b_1” does not depend solely on the abstract set defined by b_1, it also depends on how this set is described. It is an intensional notion. For instance, suppose we are set in a metric space and that y is a point of this metric space. Knowing that x belongs to the ball B(y,1) provides more information on x than knowing that it belongs to B(y,2). It might so happen that in fact B(y,1)=B(y,2), but to know this for sure, one should rely on some extra information about the metric space that we are looking at. And thus in terms of information, the statement that explicitly gives a smaller radius should be considered more precise. The idea of “formal neighborhood basis of x” is then understood as follows: it is a sequence of names of basic sets that all contain x, and which contains as much information about the location of x as the whole basis is able to provide.This article is organized around four main topics, which correspond respectively to Sections <ref>, <ref>, <ref> and <ref>. Firstly, we show that in any situation, all representations ρ_β^min, ρ_β^max and ρ_β^⊆ are admissible for the topology generated by the basis 𝔅. They can thus be used interchangeably when focusing on continuity. Then, we discuss what happens when considering the multi-representations associated to a set X and to a subset A of X, and investigate whether we can guarantee that the embedding A↪ X will be computable.We then focus on metric spaces, and of the problem of defining, thanks to a numbering of open balls, a representation that is equivalent to the Cauchy representation. Finally, we consider different notions of equivalence of bases, and describe a notion of representation-equivalent bases which guarantees that two bases yield equivalent representations. 
The rest of this introduction provides more details on the content of this article, and quotes our main results.§.§ Admissibility theorem In Section <ref>, we show that all multi-representations ρ_β^min, ρ_β^max and ρ_β^⊆ are equivalent modulo a powerful enough oracle, and thus equivalent modulo continuous translations. Thanks to results of Schrder, we prove: For any numbered set (ℬ,β) of subsets of a set X, and any formal inclusion relation ⊆ for (ℬ,β), the three multi-representations ρ_β^min, ρ_β^max and ρ_β^⊆ are admissible with respect to the topology of X generated by ℬ (as a subbasis).Results of the next sections show that these multi-representations need not be computably equivalent.§.§ Compatibility of the multi-representations of a set and of a subsetLet X be a set equipped with a numbered subbasis (𝔅,β,⊆) that admits a formal inclusion relation. If A is a subset of X, then we can naturally equip A with a numbered subbasis (𝔄,α), defined by 𝔄={A∩ B, B∈𝔅}; dom(α)=dom(β); ∀ n∈dom(β), α(n)=A∩β(n).The formal inclusion relation ⊆ is also a formal inclusion relation for (𝔄,α). We thus have three multi-representations ρ_β^min, ρ_β^max and ρ_β^⊆ on X and three multi-representations ρ_α^min, ρ_α^max and ρ_α^⊆ on A. The main result of Section <ref> is then: The embedding §.§ Cauchy representation and metric spacesIn Section <ref>, we focus on computable metric spaces. Denote by c_ℝ the Cauchy numbering of ℝ_c, the set of computable reals. The following is the common definition for computable metric spaces.A computable metric space is a quadruple (X,A,ν,d), where (X,d) is a metric space, A is a countable and dense subset of X, ν:ℕ→ A is a total and onto numbering of A, and such that the distance function d:A× A→ℝ_c is (ν×ν,c_ℝ)-computable.The following older definition is in fact more general. A non-necessarily effectively separable computable metric space is a quadruple (X,A,ν,d), where (X,d) is a metric space, A is a countable and dense subset of X, ν:⊆ℕ→ A is a partial and onto numbering of A, and such that the distance function d:A× A→ℝ_c is (ν×ν,c_ℝ)-computable.A computable metric space (X,A,ν,d) is naturally associated to its Cauchy representation, denoted ρ_Cau: the name of a point x encodes, via ν, a sequence that converges at exponential speed towards this point. There is also a numbering of β open balls: the name k of a ball is an encoded pair: k=⟨ n,m⟩, where n is a ν-name of its center and m is a c_ℝ-name of its radius. And finally, there is a natural formal inclusion relation, given by ⟨ n_1,m_1⟩⊆⟨ n_2,m_2⟩ d(ν(n_1),ν(n_2))+c_ℚ(m_1)≤ c_ℚ(m_2). We compare the three representations ρ_β^min, ρ_β^max and ρ_β^⊆ to the Cauchy representation ρ_Cau on a metric space. In general, ρ_Cau≤ρ_β^min, but it is possible that ρ_β^min≰ρ_Cau on a computable metric space. The example we provide where ρ_β^min≰ρ_Cau is a space which is discrete in some places and non-discrete in others. It was already remarked by Weihrauch in <cit.> that the existence of isolated point could be a problem for the representation ρ_β^min. And:In general, ρ_β^max≤ρ_Cau, but it is possible that ρ_Cau≰ρ_β^max on a non-computably separable computable metric space. Finally:On any (even non-computably separable) computable metric space, denoting by ⊆ the formal inclusion of metric spaces, one has ρ_Cau≡ρ_β^⊆.§.§ Notion of equivalence of bases The last section of this article, Section <ref>, is dedicated to the description of a notion of equivalence of bases appropriate to the study of representations of the form ρ_β^⊆. 
We first show that some usually considered notions of equivalence of bases, like Lacombe equivalence (defining the same Lacombe sets), are not relevant here. Let (𝔅_1,β_1,⊆_1) and (𝔅_2,β_2,⊆_2) be two numbered bases of a set X equipped with formal inclusion relations. The bases (𝔅_1,β_1,⊆_1) and (𝔅_2,β_2,⊆_2) are called representation-equivalent if the representations ρ_β_1^⊆_1 and ρ_β_2^⊆_2 are equivalent. We show that representation-equivalence is equivalent to the existence of a computable function f_12 which, given as input a tuple (b_1,...,b_n)∈dom(β_1), produces a tuple (d_1,...,d_k)∈dom(β_2) such that β_1(b_1)∩...∩β_1(b_n)⊆β_2(d_1)∩...∩β_2(d_k), and which, when applied along a sequence that defines a formal neighborhood basis of a point x for (𝔅_1,β_1,⊆_1), produces a formal neighborhood basis of x for (𝔅_2,β_2,⊆_2). A function f_21 that satisfies the same conditions, but with the roles of (𝔅_1,β_1,⊆_1) and (𝔅_2,β_2,⊆_2) reversed should also exist. We also describe a notion of uniform representation-equivalence which seems easier to grasp but which is only sufficient for two bases to give equivalent representations. Acknowledgements. I would like to thank Vasco Brattka for helpful discussions, and Andrej Bauer for valuable references. § PRELIMINARIES §.§ Multi-representations: translation and admissibilityA multi-representation <cit.> of a set X is a partial multi-function: ρ:⊆ℕ^ℕ⇉ X such that every point of X is the image of some point in ρ.Note that, extentionally, a partial multi-function: ρ:⊆ℕ^ℕ⇉ X is nothing but a total function to the power-set of X: ρ:ℕ^ℕ→𝒫(X). The domain of ρ in this case is {f∈ℕ^ℕ, ρ(f)∅}. But in terms of the way one uses a multi-representation, it should not be seen as a function to 𝒫(X). For instance, the preimage of a subset A⊆ X by ρ is not defined as {f∈ℕ^ℕ, ρ(f)=A}, but as ρ^-1(A)={f∈ℕ^ℕ, ρ(f)∩ A≠∅}.For f∈ℕ^ℕ, if x∈ρ(f), then f is a ρ-name of x. A partial multi-function H:⊆ X⇉ Y between multi-represented sets (X,ρ_1) and (Y,ρ_2) is called (ρ_1,ρ_2)-computable if there exists a computable function[Or “computable functional” in the sense of Kleene <cit.> or Grzegorczyk <cit.>, see <cit.>.] F:⊆ℕ^ℕ→ℕ^ℕ defined at least on all f∈dom(ρ_1) for which ρ_1(f)⊆dom(H) such that ∀ x∈dom(h), ∀ f∈dom(ρ_1), x∈ρ_1(f) H(x)∩ρ_2(F(f))≠∅.This definition has a very simple interpretation: the multi-function H is (ρ_1,ρ_2)-computable if there is a computable map which, when given a name for a point x, produces a name for one of the images of x by H. When H:⊆ X⇉ Y is a multi-function between represented spaces (X,ρ_1) and (Y,ρ_2), any partial function F:⊆ℕ^ℕ→ℕ^ℕ which satisfies the condition written in the above definition is called a realizer for H. A multi-function is computable if and only if it has a computable realizer. If ρ_1 and ρ_2 are multi-representations of a set X, we say that ρ_1 translates to ρ_2, denoted ρ_1≤ρ_2, if the identity id_X of X is (ρ_1,ρ_2)-computable. The multi-representations ρ_1 and ρ_2 are called equivalent if each one translates to the other, this is denoted ρ_1≡ρ_2.The fact that ρ_1 translates to ρ_2 can be interpreted as meaning that the ρ_1-name of a point in X provides more information on this point than a ρ_2-name for this point. We also have a notion of continuous reduction: If ρ_1 and ρ_2 are multi-representations of a set X, we say that ρ_1 continuously translates to ρ_2 if the identity id_X of X admits a continuous realizer. This is denoted by ρ_1≤_tρ_2, where the subscript t stands for topological. 
The multi-representations ρ_1 and ρ_2 are continuously equivalent when each one continuously translates to the other. This is denoted ρ_1≡_tρ_2.The final topology on X induced by a multi-representation ρ is the topology consisting of all subsets of X whose preimage by ρ is open in the usual topology of Baire space (more precisely: open on the topology on dom(ρ) induced by the topology of Baire space). In this paper, as we focus on first countable topological spaces, the topology of each space we consider is determined by converging sequences, and thus by the limit space induced by the topology. The following definition is appropriate only to sequential topological spaces. The general definition implies first defining admissibility for representations of limit spaces, a multi-representation of a topological space is then called admissible if and only if it is an admissible representation of the limit space induced by the topology. See <cit.>. Let (X,𝒯) be a sequential space, i.e. a topological space whose topology is determined by converging sequences. A multi-representation ρ:⊆ℕ^ℕ⇉ X of (X,𝒯) is called admissible <cit.> if the final topology of ρ is 𝒯, and if ρ it is the maximal multi-representation with this property in terms of continuous translations ≤_t: for any multi-representation ϕ:⊆ℕ^ℕ⇉ X of X such that the final topology of ϕ is 𝒯, there is a partial continuous function H:⊆ℕ^ℕ→ℕ^ℕ with ρ∘ H=ϕ.§.§ Computable topological spacesWe now introduce here the notion of “computable topological space” that was introduced by Weihrauch and Grubba in <cit.>. It is a special case of a definition of Bauer: <cit.>. We want to note here that the term “computable topological space” is not an appropriate name for this notion, since it is a notion of computable basis, and not the definition of a computable topological space. Furthermore, it is only one amongst several possible notions of computable basis, and not the most general one one can think of. For instance it does not apply to all non-computably separable computable metric spaces. Denote by W_i=dom(φ_i) the usual numbering of r.e. subsets of ℕ.A “computable topological space” is a triple (X,ℬ,β), where X is a set, ℬ is a topological basis on X that makes of it a T_0 space, and β:ℕ→ℬ is a total surjective numbering of ℬ, for which there exists a computable function f:ℕ^2→ℕ of such that for any i, j in ℕ: β(i)∩β(j)=k∈ W_f(i,j)⋃β(k).Note that the requirement that β be total can be read as imposing that (X,ℬ,β) be computably second countable. This is a requirement one might want to do away with. A possible better name for the notion above would be that of a Lacombe basis. In particular, if we do not ask β to be total, and if we do not suppose that X will be T_0, then the conditions imposed on the basis are exactly the necessary and sufficient conditions in order for the Lacombe sets[Lacombe sets are computable union of basic open sets, this name goes back to Lachlan <cit.> and Moschovakis <cit.>.] to form a computable topology: so that finite intersection and computable unions be computable. Associated to a “computable topological space” is a representation of open sets: ρ_(𝔅,β)(f)=k∈Im(f)⋃β(k). The condition of Definition <ref> are sufficient in order for finite intersections and countable unions to be computable for ρ_(𝔅,β), and, again, removing the condition that X be T_0 and that β be total, we obtain necessary and sufficient conditions. 
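To see what these conditions buy at the level of names, here is a small Python sketch of the induced operations on open sets, where the name of an open set is an (infinite) generator of β-codes and basic_cover(i,j) is assumed to be an enumerator of W_{f(i,j)} supplied together with the basis; the totality of the input generators and the existence of this enumerator are the only assumptions, and the padding by a code for the empty set that is needed to name ∅ is glossed over.

from itertools import count

def union(U, V):
    # name of the union of two open sets: interleave their enumerations
    u, v = iter(U), iter(V)
    while True:
        yield next(u)
        yield next(v)

def intersection(U, V, basic_cover):
    # name of the intersection; basic_cover(i, j) enumerates W_f(i,j), i.e.
    # codes k with beta(i) meet beta(j) equal to the union of the beta(k)
    u, v = iter(U), iter(V)
    us, vs, active = [], [], []
    for _ in count():
        us.append(next(u)); vs.append(next(v))
        active += [basic_cover(us[-1], b) for b in vs]          # new pairs (last, j)
        active += [basic_cover(a, vs[-1]) for a in us[:-1]]     # new pairs (i, last)
        for g in list(active):                                  # dovetail one step each
            k = next(g, None)
            if k is None:
                active.remove(g)
            else:
                yield k

On the rational-interval basis of ℝ, for instance, basic_cover(i,j) may simply output the single interval with the larger left endpoint and the smaller right endpoint, when that interval is nonempty.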
The representation ρ_β^max has been up to now often associated to the above definition of a computable basis. One of the purposes of this article is to show that the notion of “computable topological space” described above, while very relevant to the study of the associated representation ρ_(𝔅,β) of open sets, is not relevant to the study of the representations ρ_β^min, ρ_β^max and ρ_β^⊆ associated to a numbered basis. In particular, none of the conditions of Definition <ref> are useful in showing that ρ_β^min, ρ_β^max and ρ_β^⊆ are admissible representations (or admissible multi-representations, if we allow non T_0-spaces).§.§ Computable metric spacesHere we recall the definition of the Cauchy numbering, of the numbering of open balls in a metric space, and of the formal inclusion relation for metric spaces.The following definition of non-necessarily effectively separable computable metric space already appeared in the introduction: A non-necessarily effectively separable computable metric space is a quadruple (X,A,ν,d), where (X,d) is a metric space, A is a countable and dense subset of X, ν:⊆ℕ→ A is a partial and onto numbering of A, and such that the distance function d:A× A→ℝ_c is (ν×ν,c_ℝ)-computable.Denote by c_ℚ the usual numbering of ℚ, which is total. Associated to a (non-necessarily effectively separable) computable metric space (X,A,ν,d) is a numbering β of open balls centered at points of A: dom(β)={⟨ n,m⟩∈ℕ, n∈dom(ν), c_ℚ(m)>0}, ∀⟨ n,m⟩∈dom(β), β(⟨ n,m⟩)=B(ν(n),c_ℚ(m)).Note that when ν is total, β has a recursive domain, we can then suppose that it is total. The natural formal inclusion ⊆ of β is given by ⟨ n_1,m_1⟩⊆⟨ n_2,m_2⟩ d(ν(n_1),ν(n_2))+c_ℚ(m_1)≤ c_ℚ(m_2). Metric spaces come equipped with the Cauchy representation ρ_Cau:dom(ρ_Cau)={p∈ℕ^ℕ, ∀ i>j, d(ν(p(i)),ν(p(j)))<2^j, ∃ x∈ X, x=lim_i→+∞ν(p(i))}, ∀ p∈dom(ρ_Cau), ρ_Cau(p)=lim_i→+∞ν(p(i)).§ ADMISSIBILITY THEOREMIn this section, we prove that all multi-representations ρ_β^min, ρ_β^max and ρ_β^⊆ are admissible. For any numbered set (ℬ,β) of subsets of a set X, and any formal inclusion relation ⊆ for (ℬ,β), the three multi-representations ρ_β^min, ρ_β^max and ρ_β^⊆ are equivalent modulo continuous translations. Note that we always have ρ_β^max≤ρ_β^⊆ for any formal inclusion on dom(β), since the identity on Baire space provides the desired translation. We thus prove the converse inequality, which requires an oracle. With a powerful enough oracle, the inclusion can be decided on dom(β) (i.e. the relation R defined by nRmβ(n)⊆β(m)). Also, a powerful enough oracle can enumerate dom(β). With such an oracle, we can, given the ρ_β^⊆-name of a point x, enumerate in parallel all names of balls that contain a ball that contain x, this will precisely give a ρ_β^max-name of x. For any numbered set (ℬ,β) of subsets of a set X, and any formal inclusion relation ⊆ for (ℬ,β), the three multi-representations ρ_β^min, ρ_β^max and ρ_β^⊆ are admissible with respect to the topology of X generated by ℬ (as a subbasis). This is an immediate corollary of the previous result, together with the theorem of Schrder which states that ρ_β^min is always admissible. 
See in particular in <cit.>: Proposition 3.1.6, for the case of limit spaces, and Lemma 3.1.10 for the transfer of this result to topological spaces.§ COMPATIBILITY OF THE MULTI-REPRESENTATIONS OF A SET AND OF A SUBSETIf (X,ρ) is a multi-represented set, and if A is a subset of X, we naturally define a multi-representation ρ_| A of A, the restriction of ρ to A, by the following: dom(ρ_| A)={f∈dom(ρ), ρ(f)∩ A∅}; ∀ f∈dom(ρ_| A), ρ_| A(f)=ρ(f)∩ A.If A is a subset of X and ρ is a multi-representation of X, then the embedding A↪ X is always (ρ_| A,ρ)-computable, and the identity on Baire space is a realizer. Immediate. We now consider a numbered subbasis (𝔅,β) for X equipped with a formal inclusion relation ⊆. We thus have three multi-representations ρ_β^min, ρ_β^max and ρ_β^⊆ of X. We can now consider a numbered subbasis (𝔄,α) for A, defined by restriction of (𝔅,β):dom(α)=dom(β); ∀ n∈dom(β), α(n)=A∩β(n).In fact, (𝔄,α) is naturally equipped with the same formal inclusion relation as β. Indeed, the condition n⊆mα(n)⊆α(m) is always valid, since ∀ n,m∈dom(β), n⊆mβ(n)⊆β(m) A∩β(n)⊆ A∩β(m). We then have three multi-representations of A: ρ_α^min, ρ_α^max and ρ_α^⊆. Both ρ_α^max and ρ_α^⊆ are well behaved with respect to the inclusion A↪ X, but ρ_α^min is not. This is what we show now. We have:Left to the reader.The embeddingBy Proposition <ref>.The embedding The proof of this proposition uses a construction that appears in Section <ref> and is postponed to this section. It is not always the case that ρ_α^min≡(ρ_β^min)_| A. This follows by Proposition <ref>.§ METRIC SPACES AND THE CAUCHY REPRESENTATION§.§ The “sufficiently many basic sets” approach and metric spaces Let (X,A,ν,d) be a non-necessarily computably separable computable metric space , and denote by β the numbering of open balls with rational radii associated to (X,A,ν,d). We have ρ_Cau≤ρ_β^min. The ρ_Cau name of a point x is a sequence (of names) of points (u_n)_n∈ℕ that converges towards x at exponential speed. A sequence of names for the balls (B(u_n,2^-n))_n∈ℕ is then a ρ_β^min-name of x. However: The representation ρ_β^min does not have to be equivalent to the Cauchy representation of X, even if X is computably separable.The idea is the following. We build a computable metric space which is discrete in some places and not discrete in others. For a point x which is isolated, if n is a name of a ball B(x,r) which contains only x, i.e. B(x,r)={x}, then (n,n,n,n,...) is a ρ_β^min-name for x. However, to construct a Cauchy name for x starting from this ρ_β^min-name, one has to be able to computably understand that x is isolated, and that the ball B(x,r) determines x uniquely. We choose a space where this is not possible.Note that this example is precisely based upon a case where the formal inclusion relation is different from the actual inclusion relation. Indeed, with the notations above, we have: * For any m such that x∈β(m), then β(n)⊆β(m). * It is not true that for any m such that x∈β(m), then n⊆m. Indeed, this fails if m is a name for B(x,r/2) (i.e. a name where the radius explicitly given is r/2).We take a certain subset of ℝ. Denote by K the halting set: K={n∈ℕ, φ_n(n)↓}. Consider the union A=⋃_n∈ K[n-1/2,n+1/2]∪⋃_n∉ K{n}.Thus A is discrete in some places and not discrete in others. A admits a dense and computable sequence: the set of natural numbers, together with the set of rationals in [n-1/2,n+1/2], for each n in K, which is a r.e. set because K is. 
The usual distance of ℝ remains computable when restricted to the set of rationals in A. Thus A is a computable metric space. Denote by β the numbering of open intervals of A induced by that of ℝ: if γ is a numbering of open intervals of ℝ that have rational endpoints, put β(n)=γ(n)∩ A. In this setting, for n∉ K, denote by m a γ-name of the basic set ]n-1/4,n+1/4[. Then the constant sequence (m,m,m,m...) constitutes a valid ρ_β^min-name of n. Suppose there is a Type-2 machine T that on input the ρ_β^min-name of a point of A transforms it in a ρ_Cau-name of this point. Then we can enumerate numbers n for which the ρ_Cau-name produced by T when given as input a constant sequence as above gives a precision better than 1/4. This gives precisely an enumeration of K^c. This is a contradiction. We finally use the construction above to prove Proposition <ref>. In the construction that appears above in the proof of Proposition <ref>, we build a metric space A equipped with a numbered basis with numbering β and where ρ_β^min≰ρ_Cau. The constructed set is a subset of ℝ, and it is easy to see that in this case the Cauchy numbering ρ_Cau on A is the restriction of the Cauchy numbering on ℝ, which is itself equivalent to ρ_γ^min, where γ denotes the numbering of open intervals of ℝ with rational endpoints. And thus we have: ρ_β^min≰ρ_Cau, ρ_Cau≡(ρ_γ^min)_| A, and thus ρ_β^min≰(ρ_γ^min)_| A. Finally this directly implies that the embedding A↪ℝ is not (ρ_β^min,ρ_γ^min)-computable. §.§ The representation associated to “all names of basic sets” and non-computably separable metric spaces Let (X,A,ν,d) denote again a non-necessarily computably separable computable metric space, and denote by β the numbering of open balls with rational radii associated to (X,A,ν,d). We have ρ_β^max≤ρ_Cau. This is very simple: a ρ_β^max-name contains names of balls of arbitrarily small radius. Given a ρ_β^max-name of a point x, a blind search in this name for a 2^-n good-approximation of x will always terminate. The following is well known: it shows that the definition of computable topological space as introduced in <cit.> (see Definition <ref>) is coherent with the definition of computable metric space as used since <cit.>. It is for instance recalled in <cit.>. As soon as (X,A,ν,d) has a dense and computable sequence, for the numbering β of open balls given by rational radii and centers in the dense sequence, one also has ρ_Cau≤ρ_β^max and ρ_Cau≡ρ_β^max. Note however, as we will discuss in Section <ref>, that the above proposition holds only for certain choices of a numbering of open balls.Finally, if (X,A,ν,d) does not have a dense and computable sequence, it does not have to have a computably enumerable basis of open balls. In this case, we have:If β is the natural numbering of open balls with rational radii in a non-computably separable computable metric space, it is possible that ρ_Cau≰ρ_β^max. And it is possible that ρ_β^max defines no computable point. We give a very simple example. Consider the set K^c, the complement of the halting set, which is not r.e.. The numbering of K^c is the numbering induced by the identity on ℕ. Take the usual distance of ℕ, d(i,j)=| i-j|, and the associated numbering β of balls with rational radii and centers in K^c. In this setting, no point of K^c has admits a computable ρ_β^max-name. Indeed, a computable enumeration of all balls centered at points in K^c that contain a given point x gives in particular an enumeration of the set of all their centers, which is exactly K^c. 
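Both counterexamples of this section exploit basic sets that are smaller than their names suggest, and the resulting gap between actual and formal inclusion can be displayed on purely rational data, where the formal test is decidable. The following sketch uses exact rational arithmetic; the two spaces appearing in it, the interval [0,1] with its rational points and a two-component set with an isolated point echoing the construction above, serve only as illustrations.

from fractions import Fraction as Q

def formally_included(b1, b2, dist):
    # formal inclusion of named balls b = (center, radius): d(c1, c2) + r1 <= r2
    (c1, r1), (c2, r2) = b1, b2
    return dist(c1, c2) + r1 <= r2

d = lambda x, y: abs(x - y)

# formal inclusion implies actual inclusion (triangle inequality):
assert formally_included((Q(0), Q(1)), (Q(1, 2), Q(2)), d)           # B(0,1) inside B(1/2,2)

# in the bounded space X = [0,1], B(1/10, 2) = B(9/10, 2) = X, so the actual
# inclusion holds, but the formal test fails: 8/10 + 2 > 2
assert not formally_included((Q(1, 10), Q(2)), (Q(9, 10), Q(2)), d)

# at an isolated point, e.g. 2 in A' = [-1/2, 1/2] together with {2}:
# B(2, 1/4) = {2} is actually contained in B(2, 1/8) = {2}, yet a name of
# radius 1/4 is not formally below a name of radius 1/8
assert not formally_included((Q(2), Q(1, 4)), (Q(2), Q(1, 8)), d)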
§.§ The formal inclusion approach in metric spaces Let (X,A,ν,d) be a non-necessarily computably separable computable metric space. Let β be the numbering of balls with rational radii induced by ν. Denote by ⊆ the formal inclusion of β induced by the metric, which comes from the relation on balls parametrized by pair (point-radius) defined by (x,r_1)⊆(y,r_2) d(x,y)+r_1≤ r_2.We have already defined the Cauchy representation on X and the representation ρ_β^⊆ induced by the numbering of the basis. The equivalence ρ_Cau≡ρ_β^⊆ holds.Denote by t_n a c_ℚ-name of 2^-n. The map n↦ t_n can be supposed computable. If p is a ρ_Cau-name for a point x, then q defined by q(n)=⟨ p(n),t_n⟩defines a ρ_β^⊆-name of x. This name is given as a computable function of p. Conversely, suppose that q is a ρ_β^⊆-name of x. Denote by fst and snd the two halves of the inverse of the pairing function which defines the numbering β of balls in (X,A,ν,d). Then the following p gives a ρ_Cau-name for point x:p(n)=fst(q(μ i, snd(q(i))<2^-n))).In words: p(n) is defined as the center of the first ball of radius less than 2^-n found in the name q of x. The fact that this application of the μ-operator produces a total function comes exactly from the hypothesis that ρ_β^⊆-names give arbitrarily precise information with respect to the formal inclusion: in the ρ_β^⊆-name of x appear balls given by arbitrarily small radii.§ EQUIVALENCE OF BASES§.§ Representation ρ_β^max and equivalence of basesWe first note that the numbering ρ_β^max is badly behaved with respect to equivalence of bases: bases that “should be” equivalent can give non-equivalent representations. We first show this by an example that uses a non-recursively enumerable basis, and then modify it so that it uses only recursively enumerable bases. The example used here is that of open balls of ℝ given either by rational radii/center or by computable reals for their radii and center, and a totalized version of this last basis, which fills every gap with the empty set. In the following section, we present several notions of equivalence of bases, the first two bases considered here are equivalent according to all these definitions, and the “totalized basis” is also representation-equivalent and Nogina equivalent to the other two, but it fails to be Lacombe equivalent to them. Denote by c_ℚ the usual numbering of ℚ, which is total. Denote by c_ℝ the Cauchy numbering of ℝ. Denote by 𝔅_ℚ the set of open intervals of ℝ with rational endpoints. Define β_ℚ:⊆ℕ→𝔅_ℚ by dom(β_ℚ)={⟨ n,m⟩, c_ℚ(m)>0}; β_ℚ(⟨ n,m⟩)=B(c_ℚ(n),c_ℚ(m)).The domain of β_ℚ is easily seen to be recursive, and we can thus in fact suppose that β_ℚ is defined on all of ℕ. Denote by 𝔅_ℝ the set of open intervals of ℝ with computable reals as endpoints. Define β_ℝ:⊆ℕ→𝔅_ℝ by dom(β_ℝ)={⟨ n,m⟩, c_ℝ(m)>0}; β_ℝ(⟨ n,m⟩)=B(c_ℝ(n),c_ℝ(m)). Finally, we use a totalization of β_ℝ, denoted β̂_ℝ, by adding the empty set to 𝔅_ℝ and changing the numbering as follows: dom(β̂_ℝ)=ℕ; ∀ n∈dom(β_ℝ), β̂_ℝ(n)=β_ℝ(n), ∀ n∉dom(β_ℝ), β̂_ℝ(n)=∅. We then have:The representations ρ_β_ℝ^max and ρ_β_ℚ^max are not equivalent. This follows from the following strong fact: there is no ρ_β_ℝ^max-computable point. This follows immediately from the fact that there does not exist a computable enumeration of all computable reals.We have ρ_β_ℝ^max≡ρ_β̂_ℝ^max, and thus ρ_β̂_ℝ^max and ρ_β_ℚ^max are not equivalent, even though β̂_ℝ is a total numbering. The identity of Baire space is a realizer for both directions. 
However, it is very easy to check that for the natural formal inclusion ⊆ on ℝ, that comes from the metric of ℝ, we have: The representations ρ_β_ℝ^⊆, ρ_β̂_ℝ^⊆ and ρ_β_ℚ^⊆ are equivalent.§.§ Representation-equivalent subbases The notion of topological space of Definition <ref> naturally comes with a notion of equivalence of bases:Two numbered bases (𝔅_1,β_1) and (𝔅_2,β_2) are called Lacombe equivalent if there is a program that takes as input the β_1-name of a basic open set B_1 and outputs the name of a β_2-computable sequence (B_n)_n≥2 of basic open sets such that B_1=⋃_n≥2B_n,and a program that does the converse operation, with the roles of (𝔅_1,β_1) and (𝔅_2,β_2) reversed.Recall that associated to a Lacombe basis (𝔅_1,β_1) is a representation of open sets: the name of an open set O is a sequence (b_i)_i∈ℕ∈dom(β_1) such that O=⋃_i≥0β_1(b_i). Definition <ref> gives the correct notion of equivalence of basis with respect to this representation by the following easy proposition:The numbered bases (𝔅_1,β_1) and (𝔅_2,β_2) define equivalent representations of open sets if and only if (𝔅_1,β_1) and (𝔅_2,β_2) are Lacombe equivalent. In the context of numbered sets, other notions of equivalence of bases can be appropriate, we quote one to illustrate the variety of possible definitions of equivalence of bases. The following notion is appropriate to bases described by Nogina <cit.>, we do not detail their definition here.Suppose that (X,ν) is a countable set equipped with a numbering. Two numbered bases (𝔅_1,β_1) and (𝔅_2,β_2) are called Nogina equivalent if there is a program that takes as input the β_1-name of a basic open set B_1 and the ν-name of a point x in B_1 and outputs the β_2-name of a basic open sets B_2 such that x∈ B_2⊆ B_1, and a program that does the converse operation, with the roles of (𝔅_1,β_1) and (𝔅_2,β_2) reversed.In this case, one can also show that two numbered bases define equivalent numberings of open sets exactly when they are equivalent <cit.> (here, the numbering of open sets in question is the one appropriate to Nogina bases). However, neither of these notions of equivalence of bases is appropriate to the study of the multi-representations ρ_β_1^max, ρ_β_1^min and ρ_β_1^⊆ associated to a numbered subbasis (B_1,β_1,⊆). In particular, Lacombe equivalent bases can yield non-equivalent representations of points.We now describe the notion of equivalence of bases appropriate to the study of the representations ρ_β_1^⊆. When A is a set, denote by A^* the set of (empty or) finite sequences of elements of A. If ⊆_1 is a formal inclusion relation on dom(β_1), we extend ⊆_1 to dom(β_1)^* by (b_1,...,b_n)⊆_1(b'_1,...,b'_m)∀ i≤ m,∃ j≤ n, b_j⊆_1b'_i.Consider two numbered bases (𝔅_1,β_1,⊆_1) and (𝔅_2,β_2,⊆_2) of a set X equipped with formal inclusion relations. We say that (𝔅_1,β_1,⊆_1) is representation finer than (𝔅_2,β_2,⊆_2) if there exists a computable function f:⊆dom(β_2)^*→dom(β_1)^* defined at least on all sequences (b_1,...,b_n)∈dom(β_2)^* such that β_2(b_1)∩...∩β_2(b_n)≠∅, and such that:* For all sequence (b_1,...,b_n)∈dom(β_2)^*, if f((b_1,...,b_n))=(d_1,...,d_m), then β_2(b_1)∩...∩β_2(b_n)⊆β_1(d_1)∩...∩β_1(d_m)[In case f((b_1,...,b_n)) is just the empty sequence, we use the convention that an empty intersection gives X. ]. * For all x in X and d in dom(β_1) with x∈β_1(d), for any sequence (b_i)_i∈ℕ∈dom(β_2) that defines a formal basis of neighborhood of x, there exists k∈ℕ such that for all n≥ k,f((b_1,...,b_n))⊆d. 
We say that (𝔅_1,β_1,⊆_1) and (𝔅_2,β_2,⊆_2) are representation-equivalent if each one is representation finer than the other.The second condition written above says that as the sequence (b_1,b_2,...) closes in on a point, the sequence of images by f should also produce a formal neighborhood basis of this point. We also introduce a more restrictive notion of equivalence of bases, which implies equivalence of the associated representations, and which is more natural to work with:Consider two numbered bases (𝔅_1,β_1,⊆_1) and (𝔅_2,β_2,⊆_2) of a set X equipped with formal inclusion relations. We say that (𝔅_1,β_1,⊆_1) is uniformly representation finer than (𝔅_2,β_2,⊆_2) if there exists a computablefunction f:⊆dom(β_2)^*→dom(β_1)^* defined at least on all sequences (b_1,...,b_n)∈dom(β_2)^* such that β_2(b_1)∩...∩β_2(b_n)≠∅, and such that:* For all sequence (b_1,...,b_n)∈dom(β_2)^*, if f((b_1,...,b_n))=(d_1,...,d_m), then β_2(b_1)∩...∩β_2(b_n)⊆β_1(d_1)∩...∩β_1(d_m). * For all x in X and d in dom(β_1) with x∈β_1(d), there exists (b_1,...,b_n) in dom(β_2), with x∈β_2(b_1)∩...∩β_2(b_n), and such that∀(b'_1,...,b'_k)∈dom(β_2)^*, (b'_1,...,b'_k)⊆_2(b_1,...,b_n) f((b'_1,...,b'_k))⊆_1d. We say that (𝔅_1,β_1,⊆_1) and (𝔅_2,β_2,⊆_2) are uniformly representation-equivalent if each one is uniformly representation finer than the other.One checks that the uniform version is more restrictive that the general notion of representation-equivalence. Let (𝔅_1,β_1,⊆_1) and (𝔅_2,β_2,⊆_2) be two numbered bases of a set X. If (𝔅_1,β_1,⊆_1) is uniformly representation finer than (𝔅_2,β_2,⊆_2), then it is also representation finer than (𝔅_2,β_2,⊆_2). If (𝔅_1,β_1,⊆_1) and (𝔅_2,β_2,⊆_2) are uniformly representation-equivalent, then they are also representation-equivalent.It is easy to see that (𝔅_1,β_1,⊆_1) being representation finer than (𝔅_2,β_2,⊆_2) does imply that the topology generated by 𝔅_1 as subbasis is finer than the topology generated by the subbasis 𝔅_2 (in terms of classical mathematics).The following lemma shows why we have the correct notion of equivalence of bases.Let (𝔅_1,β_1,⊆_1) and (𝔅_2,β_2,⊆_2) be two numbered bases of a set X. Then ρ_β_2^⊆_2≤ρ_β_1^⊆_1(,,) is effectively finer than (,,); ρ_β_2^⊆_2≡ρ_β_1^⊆_1(,,) and (,,) are representation-equivalent. The second equivalence is a direct consequence of the first one, we thus focus on the first one. Suppose first that (𝔅_1,β_1,⊆_1) is effectively finer than (𝔅_2,β_2,⊆_2), and thus that we have a function f as in Definition <ref>. Given the ρ_β_2^⊆_2-name of a point x, we show how to compute a ρ_β_1^⊆_1-name of it. Simply apply the function f along all initial segments of the ρ_β_2^⊆_2-name of x, and output the concatenation of all the results. The fact that f produces oversets implies that x does belong to all produced basic open sets. The second condition on f guarantees that we indeed construct a formal neighborhood basis of x. Suppose now that ρ_β_2^⊆_2≤ρ_β_1^⊆_1. By a classical characterization of Type 2 computable functions in terms of isotone[A function f:A^*→ B^* is called isotone if it is increasing for the prefix relation: if u is a prefix of v, then f(u) is a prefix of f(v). ] functions <cit.>, this implies that there is a computable isotone function f:⊆dom(β_2)^*→dom(β_1)^* that testifies for the relation ρ_β_2^⊆_2≤ρ_β_1^⊆_1. This function f is defined at least on all sequences (b_1,...,b_n)∈dom(β_2)^* such that β_2(b_1)∩...∩β_2(b_n)≠∅, because any such sequence is the beginning of the ρ_β_2^⊆-name of some point. 
Note first that for any sequence (b_1,...,b_n)∈dom(β_2)^*, if f((b_1,...,b_n))=(d_1,...,d_m), then β_2(b_1)∩...∩β_2(b_n)⊆β_1(d_1)∩...∩β_1(d_m). Indeed suppose that this is not the case. It means that there is a point x∈β_2(b_1)∩...∩β_2(b_n)∖(β_1(d_1)∩...∩β_1(d_m)). The sequence (b_1,...,b_n) could be completed to a ρ_β_2^⊆-name of x, however f cannot map a sequence that starts with (b_1,...,b_n) to a ρ_β_1^⊆-name of x. This is a contradiction, and thus the desired inclusion holds. Finally, f applied along a formal neighborhood basis (with respect to (𝔅_2,β_2,⊆_2)) of a point x will always produce a formal neighborhood basis of x (with respect to (𝔅_1,β_1,⊆_1)). This guarantees that the last condition of Definition <ref> is satisfied. The reason why we introduce the uniform version of representation-equivalence is twofold:* It has a nice interpretation in metric spaces, and it is in general easier to understand, see the example below.* It is unclear whether two bases can be representation-equivalent while not being uniformly representation-equivalent. We thus ask: Can two subbases (𝔅_1,β_1,⊆_1) and (𝔅_2,β_2,⊆_2) of a set X be representation-equivalent while not being uniformly representation-equivalent?Suppose we are set in a separable metric space (X,d). There are many possible numberings of open balls: for any numbering ν of a dense subset A⊆ X, and any numbering c:⊆ℕ→ T⊆ℝ of a set of positive real numbers that has 0 as an accumulation point, the numbering β(⟨ n,m⟩)=B(ν(n),c(m))is a numbering of a basis for the topology of X. For two such numberings β_1 and β_2, the condition of uniform representation-equivalence with respect to the formal inclusion of metric spaces says that there is an algorithm that, given a finite intersection B_1∩...∩ B_n of balls given by β_1-names, covers it by a finite intersection B'_1∩...∩ B'_m⊇ B_1∩...∩ B_n of balls given by β_2-names, and, additionally, that when the minimal radius appearing in the first intersection B_1∩...∩ B_n goes to 0, then the minimal radius appearing in B'_1∩...∩ B'_m should go to 0 as well. The condition of representation-equivalence only asks of this algorithm that along each sequence (b_i)_i∈ℕ of β_1-names which defines a sequence of balls with arbitrarily small radii, the sequence of β_2-names produced should also encode small radii, but the dependence does not have to be uniform in the radii anymore.alpha
http://arxiv.org/abs/2311.15861v1
{ "authors": [ "Emmanuel Rauzy" ], "categories": [ "math.LO", "03D78, 03C57" ], "primary_category": "math.LO", "published": "20231127142700", "title": "Multi-representation associated to the numbering of a subbasis and formal inclusion relations" }
headings24SubNumber*** PKU-I2IQA: An Image-to-Image Quality Assessment Database for AI Generated Images J. Yuan, X. Cao, C. Li, F. Yang, J. Lin, X. CaoSchool of Software & Microelectronics, Peking University, Beijing, China PKU-I2IQA: An Image-to-Image Quality Assessment Database for AI Generated Images Jiquan Yuan, Xinyan Cao, Changjin Li, Fanyi Yang,Jinlong Lin, Xixin CaoCorresponding author. Email: [email protected] Nov 2023 =========================================================================================================================== As image generation technology advances, AI-based image generation has been applied in various fields and Artificial Intelligence Generated Content (AIGC) has garnered widespread attention. However, the development of AI-based image generative models also brings new problems and challenges. A significant challenge is that AI-generated images (AIGI) may exhibit unique distortions compared to natural images, and not all generated images meet the requirements of the real world. Therefore, it is of great significance to evaluate AIGIs more comprehensively. Although previous work has established several human perception-based AIGC image quality assessment (AIGCIQA) databases for text-generated images, the AI image generation technology includes scenarios like text-to-image and image-to-image, and assessing only the images generated by text-to-image models is insufficient. To address this issue, we establish a human perception-based image-to-image AIGCIQA database, named PKU-I2IQA. We conduct a well-organized subjective experiment to collect quality labels for AIGIs and then conduct a comprehensive analysis of the PKU-I2IQA database. Furthermore, we have proposed two benchmark models: NR-AIGCIQA based on the no-reference image quality assessment method and FR-AIGCIQA based on the full-reference image quality assessment method. Finally, leveraging this database, we conduct benchmark experiments and compare the performance of the proposed benchmark models. The PKU-I2IQA database and benchmarks will be released to facilitate future research on <https://github.com/jiquan123/I2IQA>. § INTRODUCTIONIn recent years, Artificial Intelligence Generated Content (AIGC) has garnered widespread attention beyond computer science, and society has become interested in various content-generation products developed by major technology companies. Image generation technology<cit.>, in particular, has experienced rapid development and has had a profound impact. With the development of image generation technology, AI-based image generation techniques have been applied across various fields. Many excellent image-generative models have emerged, such as Midjourney<cit.>, Stable Diffusion<cit.>, Glide<cit.>, Lafite<cit.>, DALLE<cit.>, Unidiffuser<cit.>, Controlnet<cit.>, etc. However, the advancement of AI image-generative models has also brought about new problems and challenges. A significant challenge is that AI-generated images (AIGI) may exhibit unique distortions compared to natural images. Not all generated images meet the requirements of the real world, often necessitating processing, adjustment, refinement, or filtering before practical application. 
In contrast to common image content<cit.> (such as natural scene images, screen content images, graphic images, etc.), which typically encounter common distortions like noise, blur, compression, etc., AIGIs may suffer from distinctive degradation such as unrealistic structures, irregular textures and shapes, and AI artifacts<cit.>, etc. Additionally, AIGIs may not correspond to the semantics indicated by text prompts<cit.>. As AIGIs continue to be produced, evaluating the quality of these images has become a significant challenge. Previously, AIGC image quality assessment (AIGCIQA) relies on automatic measures like Inception Score (IS)<cit.>, Fréchet Inception Distance (FID)<cit.>, and CLIP Score<cit.>, etc. However, research<cit.> points out that current evaluation metrics may fall short of expressing human perception. Particularly in terms of FID and Clip Score, they may no longer effectively evaluate the state-of-the-art generative models. Unfortunately, research in the field of AIGCIQA remains in its nascent stages. Notable strides have been made, as evidenced by the establishment of dedicated AIGCIQA databases, such as AGIQA-1K<cit.>, AGIQA-3K<cit.>, and AIGCIQA2023<cit.>. These databases represent significant progress in the realm of AIGCIQA. However, they predominantly focus on images produced via text-to-image models, thereby overlooking the diversity inherent in AI image generation technologies, which include both text-to-image and image-to-image generative methods. This oversight highlights a critical gap in the current research landscape, underscoring the need for dedicated databases catering to image-to-image scenarios, as well as more comprehensive databases that encompass a broader range of AI-generated image scenarios. The establishment of such databases is imperative to enable a more holistic assessment for AIGC image quality. Another issue pertains to the human perception-based approach utilized in the existing text-to-image AIGCIQA databases. The absence of reference images in these databases potentially introduces a bias in the human perception scores obtained from subjective experiments. Conversely, the establishment of image-to-image AIGCIQA databases, which utilize prompt images as references, could significantly mitigate this bias. This approach promises a more accurate and reliable collection of human annotations, paving the way for more balanced and objective evaluations in the field of AIGCIQA.To address the above issues, we first establish a human perception-based image-to-image database for AIGCIQA, named PKU-I2IQA. To the best of our knowledge, this is the first human perception-based image-to-image AIGCIQA database. Specifically, we select 200 categories from the well-known large-scale image database ImageNet<cit.> in the field of computer vision. Subsequently, we collect corresponding images from the high-resolution image website Pixabay<cit.> based on the selected categories to serve as image prompts for image-to-image generative models. These prompts include images of various scenes, such as animals, plants, furniture, and natural landscapes, etc. We employ two popular image-to-image generative models Midjourney<cit.> and Stable Diffusion V1.5<cit.> as the AIGI models to generate images. For each image prompt, we generate four images randomly for each generative model. Therefore, the constructed PKU-I2IQA database comprises a total of 1600 images (4 images × 2 models × 200 image prompts) corresponding to 200 image prompts. 
We conduct a well-organized subjective experiment to collect quality labels for AIGIs and then conduct a comprehensive analysis of the PKU-I2IQA database. Table 1 compares the PKU-I2IQA database with existing AIGCIQA databases. Different from previous works<cit.>, as the database is constructed using images generated by text-to-image models, there is no involvement of reference images when training and testing with deep learning models, which is corresponding to the no-reference image quality assessment method (NR-IQA) in image quality assessment. In contrast, the images in the PKU-I2IQA database are generated by image-to-image generative models using both image prompts and text prompts. Therefore, during training and testing, we can utilize image prompts as reference images which allows for a more accurate evaluation. Depending on whether image prompts are provided as reference images during training and testing, we propose two benchmark models for AIGC image quality assessment: NR-AIGCIQA based on the no-reference image quality assessment (NR-IQA) method and FR-AIGCIQA based on the full-reference image quality assessment (FR-IQA) method. Finally, leveraging this database, we conduct benchmark experiments and compare the performance of the proposed benchmark models. The main contributions of this paper can be summarized as follows:∙We establish the first human perception-based image-to-image database for AIGCIQA, named PKU-I2IQA.∙We propose two benchmark models for AIGCIQA: NR-AIGCIQA based on the NR-IQA method and FR-AIGCIQA based on the FR-IQA method. ∙We conduct benchmark experiments and compare the performance of the proposed benchmark models on the PKU-I2IQA database. § RELATED WORK§.§.§ Image Quality Assessment. In the past few years, researchers have proposed numerous Image Quality Assessment (IQA) methods. IQA methods can be categorized into FR-IQA methods<cit.> and NR-IQA methods <cit.>, depending on whether a reference image is used during the prediction process. Full-reference methods often achieve higher prediction accuracy compared to no-reference methods, as the inclusion of a reference image allows the computer to extract more effective features during the prediction process. Many classical image quality assessment models initially employ methods based on manually extracted features<cit.>. However, with the rapid development of convolutional neural networks, methods based on deep learning for feature extraction<cit.> have led to significant performance improvements. As a branch of image quality assessment, AIGC image quality assessment still requires further research. Previously, AIGCIQA relies on automatic measures like Inception Score (IS)<cit.>, Fréchet Inception Distance (FID)<cit.>, and CLIP Score<cit.>, etc. Recently, Mayu Otan et al.<cit.> from the Japanese internet giant Cyber Agent conduct a detailed investigation and experiments on evaluation metrics for AIGCIQA. They find that current evaluation metrics are limited to express human perception, especially in terms of FID<cit.> and Clip Score<cit.>, and are unable to evaluate the state-of-the-art generative models. Zhang et al.<cit.> establish the first human perception-based image-to-image database for AIGCIQA, named AGIQA-1K. It consists of 1,080 AIGIs generated by 2 diffusion models<cit.>. Through well-organized subjective experiments, human subjective perception evaluations of AIGIs are introduced to collect quality labels for AIGIs. 
Benchmark experiments are then conducted to evaluate the performance of the current IQA models<cit.>. Li et al. <cit.> consider six representative generative models and build the most comprehensive AIGI subjective quality database AGIQA-3K. This is the first database that covers AIGIs from GAN/auto regression/diffusion-based model altogether. Wang et al.<cit.> establish a large-scale AIGCIQA database, named AIGCIQA2023. They utilize 100 prompts and generate over 2000 images based on six state-of-the-art text-to-image generative models<cit.>. A well-organized subjective experiment is conducted on these images to evaluate human preferences for each image from the perspectives of quality, authenticity, and text-image correspondence. Finally, they perform benchmark experiments on this large-scale database to evaluate the performance of several state-of-the-art IQA models<cit.>. While these efforts have advanced the development of AIGCIQA, there are still issues to address, such as how to cover AIGC image generation in various scenarios as comprehensively as possible and how to introduce reference images into the AIGCIQA methods to enhance model performance.§.§.§ Visual Backbone. Visual Backbone Networks are fundamental and crucial components in computer vision, employed for feature extraction and representation in image processing tasks. These network models typically consist of multiple layers and modules designed to extract and represent features from input images, supporting various computer vision tasks such as object detection, image classification, semantic segmentation, etc. In the last decade, deep learning has seen remarkable progress, especially after the introduction of ImageNet<cit.> by Fei-Fei Li and her colleagues at Stanford University. This has significantly advanced deep learning's role in various computer vision tasks. We've seen the development of multiple visual backbone models, such as CNN-based ones like VGG<cit.>, GoogleNet<cit.>, ResNet<cit.>, and transformer-based ones like ViT<cit.>, Swin Transformer<cit.>, etc. In this paper, we employ several backbone network models pre-trained on the ImageNet<cit.> as feature extraction networks. These networks are utilized to extract features from input images, and we evaluate the performance of different backbone network models.§ DATABASE CONSTRUCTION AND ANALYSIS §.§ AIGI Collection To ensure the diversity of the generated content, we select 200 categories from the famous large-scale image database ImageNet<cit.> in the field of computer vision. Subsequently, we collect corresponding images from the high-resolution image website Pixabay<cit.> based on the selected categories to serve as image prompts for image-to-image generative models. It is explicitly stated that we use the royalty-free images from this website. These prompts include images of various scenes such as animals, plants, furniture, and natural landscapes, etc. Due to the varied resolutions of the collected prompt images from Pixabay, we standardize their resolution to 512×512, while preserving information about the image categories and scenes. This standardization involved resizing and cropping the images.We employ two popular image generative models Midjourney<cit.> and Stable Diffusion V1.5<cit.> as our AIGI generative models. We first use Clip<cit.> to perform reverse deduction to obtain text prompts from image prompts. 
Subsequently, based on the image prompts, text prompts, and the specified parameters, we obtain the generated images with a resolution of 512×512. For each image prompt, we generate four images randomly for each generative model. Consequently, our constructed PKU-I2IQA database comprises a total of 1600 images (4 images × 2 models × 200 image prompts), corresponding to 200 image prompts. Various scenes and styles of images sampled from the PKU-I2IQA database are shown in Fig.1. §.§ Subjective ExperimentTo evaluate the image quality of the PKU-I2IQA database and obtain Mean Opinion Scores (MOSs), subjective experiments are conducted following the guidance of ITU-R BT.500-14<cit.>. Following previous work<cit.>, evaluators are asked to express their preferences for the displayed AIGIs from three aspects: quality, authenticity, and text-image correspondence. Quality score is assessed based on clarity, color, brightness, and contrast of AI-generated images, along with sharpness of contours, detail richness, and overall aesthetic appeal.Authenticity score focuses on whether the AI-generated images looks real andwhether evaluators could distinguish that the images are generated by AIGI generative models or not. Text-image correspondence scores refers to the matching degree between the generated images and the text prompts.We employ a Python Tkinter-based graphical interface to display AIGIs in their native 512×512 resolution on the computer screen in a random sequence, as illustrated in Fig.2. Using this interface, evaluators rate AIGIs on a 0 to 5 scale with 0.01 increments. Unlike prior studies<cit.>, we integrates image prompts as reference images into the graphical interface. This enables evaluators to conduct more accurate evaluation by directly comparing these images with the AIGIs under review. Twenty graduate students participate in our experiment, which is divided into eight stages to keep each evaluation session around an hour. In each stage, evaluators need to evaluate 200 AIGIs. §.§ Data ProcessingAfter the subjective experiments, we collect ratings from all evaluators who participate in this experiment. Following the guidelines of ITU-R BT.500-14<cit.>, we calculate the mean and standard deviation of the subjective ratings for the same image within the same test group using the following formula:μ_j=1/N∑_i=1^N r_ijS_j=√(∑_i=1^N (μ_j-r_ij)^2/N-1) The notation r_ij represents the score of the i_th observer for the j_th generated image, where N denotes the total number of evaluators. When presenting the test results, all average scores should be accompanied by a relevant confidence interval, which derives from the standard deviation and the sample size. As recommended by ITU-R BT.500-14<cit.>, we employ a 95% confidence interval ( μ_j + ϵ_j, μ_j - ϵ_j ), where ϵ_j is computed using the following formula:ϵ_j=1.96·√(S)/NScores outside the confidence interval will be considered out-of-bounds, and we will discard these scores. The mean opinion score(MOS) for the j_th AIGI is calculated by the following formula:MOS_j = 1/M∑_i=1^M r_ij^'Here, M represents the number of non-discarded scores, and r_ij^' denote the rescaled non-discarded scores. 
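For concreteness, the processing step above can be written out in a few lines of NumPy. The sketch below is only an illustration and not the processing code used for the database; it takes the 95% confidence half-width in the usual ITU-R BT.500 form 1.96·S_j/√(N) (the expression 1.96·√(S)/N printed above appears to be a typographical variant of this), and it omits the rescaling of the retained scores r_ij^'.

import numpy as np

def mos_per_image(scores):
    # scores: array of shape (number of raters N, number of images); one column per AIGI.
    N = scores.shape[0]
    mu = scores.mean(axis=0)                    # mu_j
    S = scores.std(axis=0, ddof=1)              # S_j, the sample standard deviation
    eps = 1.96 * S / np.sqrt(N)                 # 95% half-width in the usual ITU-R BT.500 form
    keep = np.abs(scores - mu) <= eps           # ratings inside [mu_j - eps_j, mu_j + eps_j]
    kept = np.where(keep, scores, np.nan)
    # Images whose ratings are all rejected would need a fallback; not handled here.
    return np.nanmean(kept, axis=0)             # MOS_j over the M retained scores

# Toy example: 6 raters, 2 images; the second image has one outlying rating (4.5).
demo = np.array([[3.9, 4.1, 4.0, 4.2, 3.8, 4.0],
                 [2.0, 2.2, 2.1, 1.9, 2.0, 4.5]]).T
print(mos_per_image(demo))                      # approximately [4.00, 2.04]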
The final score for AIGIs is calculated by the following formula:Final_score= MOSquality + MOS_authenticity + MOS_correspondence §.§ Database AnalysisTo further demonstrate the evaluation of AI-generated images from the perspectives of quality, authenticity, and text-image correspondence, we present examples of high-quality AIGIs, low-quality AIGIs, high-authenticity AIGIs, low-authenticity AIGIs, high-text-image correspondence AIGIs, and low-text-image correspondence AIGIs as shown in Fig.3. Each evaluation perspective has its unique value. Fig.4 displays histograms of Mean Opinion Scores for quality, authenticity, text-image correspondence, and the final score, respectively. We can find that all the score distributions tend to be Gaussian distributions. § APPROACHIn this section, we present two AIGCIQA benchmark models for PKU-I2IQA database, encompassing NR-IQA method and FR-IQA method. Fig.5 and Fig.6 illustrate the pipelines for NR-AIGCIQA and FR-AIGCIQA methods, respectively. §.§ Problem FormulationFor a given AIGI I_g with score label s , our proposed NR-AIGCIQA method first utilizes a visual backbone to extract features from the generated image. Subsequently, a regression network composed of two fully connected layers is employed to regress the predicted score. This method can be represented as:ŝ = R_θ(F_w(I_g)) Here, R_θ and F_w denote the regression network with parameters θ and the feature extraction network with parameters w,respectively. For a given AIGI I_g with score label s and an image prompt I_p , our proposed FR-AIGCIQA method first employs a shared-weights backbone network to extract features from I_g and I_p, separately. These features are then fused using concatenation, and finally, a regression network composed of two fully connected layers is applied to regress the predicted score. This method can be represented as:ŝ = R_θ(Concat(F_w(I_g), F_w(I_p))) Here, R_θ and F_w denote the regression network with parameters θ and the feature extraction network with parameters w, respectively. §.§ Benchmark ModelDue to the images in the PKU-I2IQA database being generated by image prompts and text prompts and each generated image corresponds to a specific image prompt, FR-IQA methods can be employed in this scenario. Additionally, we tested the NR-IQA methods on the PKU-I2IQA database which does not utilize prompt images as reference images during training and testing.Our proposed benchmark models based on the NR-IQA method and FR-IQA method consist of two components: a feature extraction network and a score regression network. We will provide detailed descriptions of these two components below. §.§.§ Feature Extraction Network.Initially, classical image quality assessment models relies on handcrafted feature-based methods. However, the advent of convolutional neural networks has led to the predominance of deep learning-based feature extraction, which surpasses traditional methods in performance. Deep learning approaches, unlike their handcrafted counterparts that rely on empirical rules, are data-driven and excel in extracting abstract and high-level semantic features from images. In our proposed NR-AIGCIQA method and FR-AIGCIQA method, we employ several backbone network models (VGG16<cit.>, VGG19<cit.>, ResNet18<cit.>, ResNet50<cit.>, and InceptionV4<cit.>) pre-trained on the ImageNet<cit.> for feature extraction from input images. 
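A minimal PyTorch sketch of the two baselines formulated above is given below; it is illustrative only, and the torchvision backbone call, the feature dimension and the variable names are stand-ins rather than the released implementation. With full_reference=True the prompt-image and generated-image features are extracted with shared weights and concatenated before regression, matching ŝ = R_θ(Concat(F_w(I_g), F_w(I_p))); the head follows the two fully connected layers of dimensions D×D/2 and D/2×1 described in the next paragraph, with a ReLU in between as an added assumption not specified in the text.

import torch
import torch.nn as nn
import torchvision.models as models

class AIGCIQABaseline(nn.Module):
    # Shared backbone F_w plus a two-layer regression head R_theta; NR-AIGCIQA uses
    # only the generated image, FR-AIGCIQA concatenates the prompt-image features.
    def __init__(self, full_reference=False, feat_dim=512):
        super().__init__()
        backbone = models.resnet18(weights="IMAGENET1K_V1")  # downloads ImageNet weights; use weights=None offline
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop the classification layer
        self.full_reference = full_reference
        d = feat_dim * (2 if full_reference else 1)
        self.regressor = nn.Sequential(nn.Linear(d, d // 2), nn.ReLU(), nn.Linear(d // 2, 1))

    def forward(self, generated, prompt=None):
        f = self.features(generated).flatten(1)                        # F_w(I_g)
        if self.full_reference:
            f = torch.cat([f, self.features(prompt).flatten(1)], 1)    # Concat(F_w(I_g), F_w(I_p))
        return self.regressor(f).squeeze(1)                            # predicted score s_hat

# One optimisation step with the settings reported in the implementation details later
# in the paper (batch size 8, Adam, lr 1e-4, weight decay 1e-5); the tensors below are
# random stand-ins for 224x224 crops and their subjective scores.
model = AIGCIQABaseline(full_reference=True)
opt = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-5)
gen, ref, mos = torch.rand(8, 3, 224, 224), torch.rand(8, 3, 224, 224), torch.rand(8)
opt.zero_grad()
loss = nn.functional.mse_loss(model(gen, ref), mos)
loss.backward()
opt.step()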
§.§.§ Score Regression Network.For the image features extracted by the backbone network with a feature dimension of (B, D), we employ a score regression network composed of two fully connected layers with dimensions D ×D/2andD/2× 1 to regress the predictd score ŝ. §.§.§ Loss Function.We optimize the parameters of the feature extraction network and the score regression network by minimizing the mean squared error between the predicted score ŝ and the true score s:L_MSE(θ, w | I) = ||ŝ - s||^2Here, the parameters θ and w correspond to the parameters of the regression network and the feature extraction network, respectively.§ EXPERIMENT §.§ Implementation DetailsOur experiments were conducted on the NVIDIA A40, using PyTorch 1.11.0 and CUDA 11.3 for both training and testing.In the PKU-I2IQA database, scores are annotated across four dimensions: quality, authenticity, text-image correspondence, and a final score. To accurately evaluate model performance, we train individual models for each scoring category. For feature extraction from input images, we select several backbone network models pre-trained on the ImageNet<cit.>, including VGG16<cit.>, VGG19<cit.>, ResNet18<cit.>, ResNet50<cit.>, and InceptionV4<cit.>. Due to the inconsistency in input dimensions of the backbone networks such as InceptionV4 with the image sizes in our dataset, specific preprocessing is required. For InceptionV4, we adjust image sizes to 320×320, followed by random cropping to 299×299 and a 50% chance of horizontal flipping. For the other networks, images are resized to 256×256, then randomly cropped to 224×224 with the same probability of horizontal flipping. During training, the batch size B is set to 8. We utilize the Adam optimizer<cit.> with a learning rate of 1 × 10^-4 and weight decay of 1 × 10^-5. The training loss employed is mean squared error (MSE) loss. In the testing phase, the batch size B is set to 20.To evaluate the AIGI generative models in the PKU-I2IQA database, we split the data into training and test sets at a 3:1 ratio for each category produced by each generative model. We then report the performance of our two proposed methods alongside various pre-trained backbone networks.We compare the performance of the following methods on the PKU-I2IQA database:∙F^∗+R (Baseline): Corresponds to the NR-AIGCIQA method. ∗ indicates that our model is trained and tested exclusively with AIGIs, without the use of any reference images.∙F+R: Corresponds to the FR-AIGCIQA method. This method employ a combination of prompt images and AIGIS as inputs during both the training and testing phases of the model. §.§ Evaluation CriteriaFollowing prior research <cit.>, we utilize the Spearman rank correlation coefficient (SRCC) and Pearson linear correlation coefficient (PLCC) as evaluation metrics to evaluate the performance of our model.The SRCC is defined as follows:SRCC = 1 - 6 ∑_i=1^N d_i^2/N(N^2 - 1) Here, N represents the number of test images, and d_i denotes the difference in ranking between the true quality scores and the predicted quality scores for the i_th test image.The PLCC is defined as follows:PLCC = ∑_i=1^N(si - μ_s_i)(ŝ_i - μ̂_s_i)/√(∑_i=1^N(s_i - μ_s_i)^2 ∑_i=1^N(ŝ_i - μ̂_s_i)^2) Here, s_i and ŝ_i represent the true and predicted quality scores, respectively, for the i_th image. μ_s_i and μ̂_s_i are their respective means, and N is the number of test images. Both SRCC and PLCC are metrics used to evaluate the relationship between two sets of variables. 
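For completeness, both criteria can be computed in a few lines of NumPy; the sketch below mirrors the two formulas above, assumes there are no tied scores in the rank computation, and is not the evaluation code released with the database.

import numpy as np

def srcc_plcc(pred, true):
    pred, true = np.asarray(pred, dtype=float), np.asarray(true, dtype=float)
    n = len(pred)
    # SRCC via the rank-difference formula above (no ties assumed).
    d = pred.argsort().argsort() - true.argsort().argsort()
    srcc = 1.0 - 6.0 * np.sum(d.astype(float) ** 2) / (n * (n ** 2 - 1))
    # PLCC is the Pearson correlation of the raw scores.
    plcc = np.corrcoef(pred, true)[0, 1]
    return srcc, plcc

print(srcc_plcc([0.2, 0.5, 0.9, 0.4], [1.8, 2.6, 4.1, 2.2]))   # (1.0, ~0.99): rankings agree perfectly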
They range between -1 and 1, where a positive value indicates a positive correlation and a negative value indicates a negative correlation, and a larger value means a better performance.§.§ Results The performance results of the proposed methods on the PKU-I2IQA database are exhibited in Table 2.Based on the results reported in the Table 2, we can draw several conclusions:∙The benchmark model of the FR-AIGCIQA method outperforms the benchmark model of NR-AIGCIQA method.∙Among the backbone networks we utilize, ResNet18<cit.> performs the best in terms of quality and correspondence on the PKU-I2IQA database. ResNet50<cit.> exhibits the best on Final_score, while InceptionV4<cit.> demonstrates the best performance on authenticity.∙Overall, ResNet18<cit.> exhibits the best performance, followed by Inceptionv4<cit.> and ResNet50<cit.>. § CONCLUSIONIn this paper, we first introduce an image-to-image database named PKU-I2IQA for AIGCIQA based on human perception. We select 200 categories from the well-known large-scale image database ImageNet in the field of computer vision and collecte corresponding images for each selected category as image prompts for generating images using different generative models. For each image prompt, we generate four images randomly for each model. Therefore, the PKU-I2IQA database comprises a total of 1600 images corresponding to 200 image prompts. We conduct a well-organized subjective experiment to collect quality labels for AIGIs and then conduct a comprehensive analysis of the PKU-I2IQA database. Furthermore, we propose two benchmark models, namely NR-AIGCIQA and FR-AIGCIQA. Finally, we conduct benchmark experiments and compare the performance of the proposed benchmark models alongside various pre-trained backbone networks. The results indicate the following: first, despite the proposed benchmark models exhibiting certain performance, there is still considerable room for improvement in designing AIGCIQA models; second, the benchmark model of the FR-AIGCIQA method outperforms the benchmark model of the NR-AIGCIQA method. Therefore, in future research, we will focus on how to introduce reference images in scenarios like text-to-image generation without image prompts to enhance the model's performance. Additionally, we conduct cross-model evaluation experiments. Specifically, we train our models on images generated by one AIGI model and test it on images generated by another. The results indicate that the proposed benchmark model exhibits weak generalization when evaluate different AIGI models. We do not include this part in the paper, and in the future, we aim to further research and design AIGCIQA models with stronger generalization capabilities. plain
http://arxiv.org/abs/2311.15556v2
{ "authors": [ "Jiquan Yuan", "Xinyan Cao", "Changjin Li", "Fanyi Yang", "Jinlong Lin", "Xixin Cao" ], "categories": [ "cs.CV", "eess.IV" ], "primary_category": "cs.CV", "published": "20231127055303", "title": "PKU-I2IQA: An Image-to-Image Quality Assessment Database for AI Generated Images" }
Exploring primordial curvature perturbation on small scales with the lensing effect of fast radio bursts Zong-Hong Zhu January 14, 2024 ======================================================================================================== Wound management poses a significant challenge, particularly for bedridden patients and the elderly. Accurate diagnostic and healing monitoring can significantly benefit from modern image analysis, providing accurate and precise measurements of wounds.Despite several existing techniques, the shortage of expansive and diverse training datasets remains a significant obstacle to constructing machine learning-based frameworks.This paper introduces , an open-source dataset of high-fidelity simulated wounds with 2D and 3D annotations.We propose baseline methods and a benchmarking framework for automated 3D morphometry analysis and 2D/3D wound segmentation. Wound documentation, 3D reconstruction, 2D/3D wound segmentation. § INTRODUCTIONChronic wounds, a widespread issue affecting individuals of all ages, represent a silent epidemic. It was estimated in 2019 that the prevalence of chronic wounds of mixed etiologies was 2.21 per 1000 population<cit.>. Wound management is a major issue for bedridden patients in hospitals and elderly residents in aged care facilities.Wound management is challenging, and there is no standardized patient-centric care model. Wound documentation is crucial and should encompass a range of details such as location, size, surrounding skin condition, presence of undermining and tunneling, exudate, odor, or pain levels. Automated wound analysis by a computer system would allow accurate and precise diagnosis and assessment of the wound type, and enable quantitative assessment during healing, which could span months. Automated wound characterization offers a key advantage by allowing remote monitoring, eliminating the necessity for frequent and expensive physical examinations by medical specialists. Wound assessment based on photography/videos is challenging because of substantial variations in appearance and quality caused by different camera quality, lighting, and camera pose. Data-driven vision-based technologies have been shown to improve wound assessment by enabling objective quantitative evidence for decision support <cit.>. Researchers have reported deep learning methods for 2D wound detection and classification <cit.>, wound segmentation <cit.> or 2D wound image healing classification <cit.>.However, 2D wound measurement techniques do not report wound depth, potentially overlooking a crucial aspect of the wound healing process. Additional challenges include identifying wound margins, variations in the wound's appearance due to changes in patient position, and the natural curvature of body parts such as the heel, toe, and lower leg. Advanced 3D imaging technology, coupled with automated analysis methods, enables standardized and comprehensive image acquisition <cit.>. It could provide natural representation and measurements, especially for attributes that may be challenging to identify in 2D images <cit.>.Automated wound analysis in 3D could assess the topology and textural features of wounds  <cit.>, offering valuable clinical information. A major bottleneck for training modern machine learning systems is obtaining high-quality training datasets and their associated ground truth (annotated by medical experts). 
Datasets that include 3D sensing are scarce, and collecting video of actual wounds is problematic: it has the potential to interfere with care, may include sensitive views, and can only be performed with limited camera and light setups.An alternative to collecting actual data is synthesizing images and their corresponding annotations, a strategy used in various domains, sometimes called digital twin <cit.>. Relevant to this paper, Dai et al. <cit.> generated textured burn wounds from a 3D human avatar as a synthetic annotated dataset. Sinha et al. <cit.> used similar methods to create 2D images from 3D textured meshes with diverse skin tones and background scenes.In contrast to existing methods, our proposed solution produces 2D synthetic data and precise 3D wound models, facilitating the evaluation of state-of-the-art 3D reconstruction methodologies (Fig. <ref>). This contribution is two-fold: Firstly, we introduce a 3D Wound synthetic dataset , available for research purposes, with 2D and 3D ground truth. Secondly, we present baseline methods and evaluation protocols for i) 3D wound reconstruction, ii) 2D wound bed segmentation, and iii) 3D wound bed mapping, showcasing the merits of 3D wound analysis over 2D approaches. § SYN3DWOUND DATASET The synthetic views in   are generated using Blender, an open-source 3D computer graphic software, capable of producing realistic stills and videos by controlling the camera path. The user has the flexibility to manipulate wound characteristics, its location on the body, human body shape, and texture. The key steps are outlined in Fig. <ref>.The inputs consist of a 3D human body avatar, a 2D wound image, and a predefined 3D wound shape and location. Users can manually carve a wound onto the 3D human body avatar surface, specifying its depth and location. The visual appearance of the wound, along with its segmentation mask, is integrated into the avatar's texture files. The outputs include a 3D human body avatar featuring an attached wound, a collection of rendered images depicting various camera and environmental configurations, and all the necessary parameters for replicating the output. Beyond achieving pixel-perfect segmentation masks and comprehensive data generation,   also provides precise 3D models of the wound, essential for assessing the effectiveness of 3D methodologies.We employed The Rendered people dataset <cit.> and the 3D Body Text dataset <cit.>, which offer high-definition textured meshes of the human body in high resolution. For the 3D rendering engine, Cycles [https://www.cycles-renderer.org/] was chosen for its enhanced light physics modelling and more lifelike rendering compared to routinely used real-time game graphics engines <cit.>. After generating the 3D scene, users can create a camera path, allowing variations in the number of images for 3D reconstruction, as well as the ground truth for camera intrinsics and trajectory. For a particular wound, users can explore different observation angles,camera resolutions, and lens characteristics, as depicted in the first row of  Fig. <ref>.To simulate imperfections present in real-world image acquisition, users can intentionally introduce either overexposure or apply motion/Gaussian blurring to the rendered images.Lighting aspects, such as the strength and the 3D placement of the light source, can also be adjusted at this stage, influencing the appearance of shadows in the rendered image. 
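To give a flavour of the scripting involved, a minimal Blender Python fragment is sketched below. It is illustrative only: object names, the light placement, the sample count and the output path are placeholders, and the released generation pipeline may be organised quite differently. It shows the kind of Cycles, camera and lighting settings discussed above; camera-path generation, wound carving and texture baking are not shown.

import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'           # physically based path tracing, as discussed above
scene.cycles.samples = 800               # samples per pixel (placeholder value)
scene.render.resolution_x = 1920         # placeholder resolution; each camera setup can differ
scene.render.resolution_y = 1080

# A camera pose along the user-defined path and a point light whose strength and
# placement control the shadows mentioned above.
cam = bpy.data.objects['Camera']         # assumes the default camera object is present
cam.location = (0.0, -2.0, 1.5)
light_data = bpy.data.lights.new(name='key_light', type='POINT')
light_data.energy = 1000.0
light_obj = bpy.data.objects.new('key_light', light_data)
scene.collection.objects.link(light_obj)
light_obj.location = (1.0, -1.0, 2.0)

scene.render.filepath = '/tmp/view_0001.png'   # placeholder output path
bpy.ops.render.render(write_still=True)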
Ideally, wound characterization would include wound type, body location, size, variations in lighting conditions, and skin colour difference. Unfortunately, the availability of labelled data for 3D wound analysis has been limited.Existing datasets such as WoundSeg <cit.>, DFUC2022 <cit.>, FUSeg Challenge <cit.>, AZH wound care <cit.>, andMedetec <cit.> primarily consist of 2D annotated images.WoundDB <cit.> provides stereo images with the potential for depth estimation investigations. However, these images are not sequential, which limits their utility for 3D wound reconstruction. In contrast,   provides perfect information, albeit simulated. Table <ref> compares   with these existing datasets. § EXPERIMENTS AND RESULTSIn this section, we detail the evaluation protocol to perform 2D and 3D wound assessment of two 3D models, each representing a different ethnicity and depicted in Fig. <ref>. Upon the acceptance of our paper, we will release a more extensive dataset, along with the code required to compute the evaluation metrics.§.§ Baseline systems and evaluation metrics3D wound reconstruction: A 3D reconstruction algorithm estimates the 3D geometry of an object from a collection of 2D images. The prevailing methods in the literature rely on standard projective geometry techniques such as structure-from-motion and multiview stereopsis  <cit.>. However, new deep learning approaches for 3D scene rendering (e.g. Neural Radiance Fields (NeRF) <cit.>), are becoming very competitive. In this paper, we conduct a comparative analysis of two prominent open-source tools for 3D reconstruction: COLMAP <cit.> and Meshroom <cit.>. We also assess the performance of NeusFacto, a NeRF model tailored for surface extraction from the open-source SDFStudio toolbox <cit.>. We compared the 3D reconstructed meshes with the ground-truth synthetic mesh, after alignment using three steps: i) align the camera positions of the ground-truth data with those estimated by the frameworks (by solving a Procrustes problem <cit.>); ii) crop both meshes using the ground-truth 3D mask for wound bed segmentation, followed by fine alignment using the Iterative Closest Point (ICP) algorithm (applied only to the cropped meshes); ii) apply the transformations to the original meshes, followed by cropping the wound area again to report performance on the wound area only. In Table <ref>, we report the Average Symmetric Distance (ASD), Hausdorff Distance (HD90), and Normal Consistency (NC) metrics.The proposed pipeline facilitates benchmarking of 3D reconstruction methods and investigation into the influence of image features in the performance of the reconstruction method.Fig. <ref> shows the overall performance on the shoulder wound. COLMAP outperforms its competitor with increased image resolution. In every scenario, high-resolution images allow more fine-grained 3D reconstruction (see Fig. <ref>). 2D wound segmentation:We trained a deep learning segmentation model SegFormer <cit.> on a dataset provided by DFUC2022 <cit.> and tested it on a set of images from . From a predicted mask (A) and a ground truth mask (B), we compared the IoU score (Intersection over Union):| A ∩ B |/| A ∪ B |, and the Dice score: 2 | A ∩ B |/| A | + | B |.3D wound bed segmentation:We introduce a 3D wound segmentation technique that assigns 2-dimensional labels to different regions of the reconstructed 3D models. 
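Before turning to the 3D projection step described next, note that the two overlap scores just defined can be computed directly from binary masks. The following sketch is illustrative only and is not the benchmark code released with the dataset.

import numpy as np

def iou_and_dice(pred, gt):
    # pred, gt: boolean arrays of the same shape (predicted and ground-truth masks A and B).
    pred, gt = np.asarray(pred, dtype=bool), np.asarray(gt, dtype=bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = inter / union if union > 0 else 1.0             # |A n B| / |A u B|
    denom = pred.sum() + gt.sum()
    dice = 2.0 * inter / denom if denom > 0 else 1.0      # 2 |A n B| / (|A| + |B|)
    return float(iou), float(dice)

# Toy example on small 1D masks:
print(iou_and_dice([1, 1, 1, 0, 0], [0, 1, 1, 1, 0]))     # (0.5, 0.666...)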
We used a Meshroom-based texturing algorithm <cit.> to project a set of 2D wound segmentation masks onto 3D mesh vertices labeled as background and wound bed.Following the established standard  <cit.>, we report the Balanced Average Hausdorff distance (BAHD) <cit.>, defined as BAHD( G, S ) = 1/2 |G| (ℋ(G,S) + ℋ(S,G) ), where ℋ is the directed average Hausdorff distance and |G| is the number of points in the ground truth wound segmentation. We also report recall R=(T_p)/(T_p+F_n) and precision P=T_p/(T_p + F_p), with T_p the number of vertices from the 3D ground truth segmentation that are also in the 3D estimated segmentation, F_p the number of vertices in the predicted segmentation that are missing from the ground truth segmentation, and F_n is the number of the ground truth segmentation vertices missing from the predicted segmentation.§.§ Results and discussionInfluence of the quality of the images: While a recent study explores the use of synthetic images for dermatological assessments <cit.> with relatively small 512× 512 images, we propose adopting Cycles, a powerful rendering engine that outperforms Open3D's physic-based renderer or Unity3D [https://docs.unity3d.com/ScriptReference/Renderer.html].Notably, our rendering method, though not real-time, produces superior results taking an average of 12.86 (± 0.73) seconds to generate a 4k synthetic image.[With path tracing integrator using 800 samples to render each pixel, leveraging parallel computation of tiles on a cluster of 10 x RTX 2080Ti.] Balancing Gender and Racial Diversity: In response to the emerging concern of the under-representation of minority groups in the training datasets of recent medical AI solutions, our released dataset is specifically designed to cover greater diversity of cases. This initiative aims to promote fairer wound analysis by providing a more inclusive and representative dataset.3D wound reconstruction:Quantitative results for 3D wound reconstruction are reported in Table <ref>. In our experiment, COLMAP demonstrates superior surface accuracy, while the performance of the Neural rendering-based method is nearly comparable.2D wound segmentation: Table <ref>, presents the performance of SegFormer <cit.> trained on DFUC2022 <cit.>, tested on the synthetic images produced by 's model.The model, having been trained on real 2D wound data, exhibits promising performance when applied to our synthetic data, validating the quality of the  dataset. However, the limitations of 2D wound segmentations arise from the constrained perspective during capture, potentially impacting accuracy and comprehensiveness as they fail to fully represent the complexity of 3D structures (e.g., as shown in the second row of Fig. <ref>, only the middle panel of leg/shoulder represents a complete view of a wound without presenting details such as depth). Therefore, it is advisable to adopt methods that leverage rich 3D information through 3D segmentation. One way to achieve this is through projecting 2D masks onto 3D mesh vertices based on the results of the initial 2D segmentation. 3D wound segmentation: Table <ref> compares 3D wound segmentation results with ground truth using previously described metrics. Notably, for the second sample, incorporating a higher number of 2D segmentation maps enhances the performance of the resulting 3D segmentation.Fig. 
<ref> shows the reconstructed 3D wound segmentation of the shoulder wound, generated from 120 renderings, with color-coded true positives (light blue), false positives (blue) and false negatives (yellow). The 3D projection of 2D segmentations provides a more precise understanding of the geometric failure modes of 2D segmentation models. § CONCLUSION In this paper, we contribute a unique 3D wound dataset to encourage collaboration between computer vision and medical imaging communities, intending to advance 3D wound reconstruction and documentation. We perform a study on widely used 3D reconstruction and segmentation pipelines, generating a set of baseline results pivotal for a better understanding of 3D wound analysis to address limitations in traditional 2D wound documentation. § COMPLIANCE WITH ETHICAL STANDARDS This study was performed in line with the principles of the Declaration of Helsinki. The experimental procedures involving human subjects described in this paper were approved by the CSIRO Health and Medical Human Research Ethics Committee (CHMHREC). The CHMHREC is an NHMRC Registered Human Research Ethics Committee (EC00187). CSIRO Ethics ID 2022_025_LR
http://arxiv.org/abs/2311.15836v1
{ "authors": [ "Léo Lebrat", "Rodrigo Santa Cruz", "Remi Chierchia", "Yulia Arzhaeva", "Mohammad Ali Armin", "Joshua Goldsmith", "Jeremy Oorloff", "Prithvi Reddy", "Chuong Nguyen", "Lars Petersson", "Michelle Barakat-Johnson", "Georgina Luscombe", "Clinton Fookes", "Olivier Salvado", "David Ahmedt-Aristizabal" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231127135953", "title": "Syn3DWound: A Synthetic Dataset for 3D Wound Bed Analysis" }
The role of magnetic fields in disc galaxies: spiral arm instability Raghav Arora 1, Christoph, Federrath 2,Robi Banerjee 1 ,Bastian Körgten 1 ,Received xxxx; accepted xxxx ====================================================================================================================================== This paper studies singularities of mean curvature flows with integral mean curvature bounds H ∈ L^∞ L^p_loc for some p ∈ ( n, ∞]. For such flows, any tangent flow is given by the flow of a stationary cone 𝐂. When p = ∞ andis a regular cone, we prove that the tangent flow is unique. These results hold for general integral Brakke flows of arbitrary codimension in an open subset U ⊂ℝ^N with H ∈ L^∞ L^p_loc. For smooth, codimension one mean curvature flows with H ∈ L^∞ L^∞_loc, we also show that, at points where a tangent flow is given by an area-minimizing Simons cone, there is an accompanying limit flow given by a smooth Hardt-Simon minimal surface.§ INTRODUCTION A time-dependent family of embeddings F : M^n × [0, T) →^N is said to evolve by mean curvature flow if ∂_t F =Hwhere H = H⃗_M_t denotes the mean curvature vector of the embedded submanifold M_t = F( M ×{ t } ) ⊂^N. Submanifolds evolving by mean curvature flow often develop singularities in finite time T < ∞. Huisken <cit.> showed that the second fundamental form A of a compact hypersurface M_t evolving by mean curvature flow always blows up at a finite-time singularity T < ∞, that is, lim sup_t ↗ Tsup_x ∈ M_t |A | = ∞. Given Huisken's result <cit.>, it is natural to ask if the trace of the second fundamental form, namely the mean curvature H = A, must also blow up at finite-time singularities of the mean curvature flow. <cit.> answered this question in the negative by showing the mean curvature flow solutions constructed by Velázquez <cit.> develop finite-time singularities even though H remains uniformly bounded sup_t ∈ [0, T)sup_x ∈ M_t| H| < ∞. In dimension n=7, <cit.> further showed how to extend these mean curvature flow solutions M_t^7 ⊂^8 to weak mean curvature flows defined for later times t ∈ [0, T + ϵ)in such a way that H remains uniformly bounded and the flow has an isolated singularity at (0 , T) ∈^8× [0, T+ϵ).Given that there exist smooth mean curvature flows which develop singularities with bounded mean curvature <cit.>, the focus of this current article is to instead study the singularities of any mean curvature flow M^n_t ⊂^N with uniform mean curvature bounds. In this general setting, uniform mean curvature bounds along the flow allow us to incorporate the well-developed theory of varifolds with bounds on their first variation. In particular, we leverage that theory to obtain the following result which holds more generally for weak mean curvature flows of arbitrary codimension with integral mean curvature bounds: Let 2 ≤ n < N and let U ⊂^N be open. Let (μ_t)_t ∈ (a,b) be an integral n-dimensional Brakke flow in U ⊂^N with locally uniformly bounded areas and generalized mean curvature H ∈ L^∞ L^p_loc(U × (a,b)) for some p ∈ (n, ∞]. Except for a countable set of times t ∈ (a,b), (μ_t)_t ∈ (a,b) equals the Brakke flow of a family of integer rectifiable n-varifolds (V_t)_t ∈ (a,b] that extends to the final time-slice t = b. For any (x_0, t_0) ∈ U × (a, b], any tangent flow of (μ_t)_t ∈ (a,b) at (x_0, t_0) is given by the static flow of a stationary cone . The stationary conesthat arise as tangent flows to (μ_t)_t ∈ (a,b) at (x_0, t_0) are exactly the tangent cones to V_t_0 at x_0. 
Note that the stationary cones 𝐂 in Theorem <ref> are generally integer rectifiable n-varifolds that are dilation invariant and have zero first variation (see Lemma <ref> and Theorem <ref> for more precise statements).

Given a Brakke flow (μ_t), tangent flows at (x_0,t_0) are, simply speaking, subsequential limits of parabolic rescalings of (μ_t) based at (x_0, t_0). Analogously, tangent cones of a varifold V_t_0 at x_0 are subsequential limits of spatial rescalings of V_t_0 based at x_0. Thanks to suitable monotonicity formulas and compactness theorems, tangent flows and tangent cones always exist. However, the uniqueness of tangent cones and tangent flows, that is, their independence of the chosen subsequence, is generally an open problem. This uniqueness question is fundamental for singularity analysis and regularity.

Theorem <ref> in particular shows that, for flows with mean curvature H ∈ L^∞ L^p_loc( U × (a,b)), uniqueness of the tangent flow of (μ_t) at (x_0, t_0) is equivalent to uniqueness of the tangent cone of V_t_0 at x_0. Because of this correspondence, we are able to prove the following uniqueness result:

Let 2 ≤ n < N and let U ⊂ℝ^N be open. Let (μ_t)_t ∈ (a,b) be an integral n-dimensional Brakke flow in U ⊂ℝ^N with locally uniformly bounded areas and generalized mean curvature H ∈ L^∞ L^∞_loc(U × (a,b)). If a tangent flow to (μ_t)_t ∈ (a,b) at (x_0, t_0) ∈ U × (a,b] is given by the static flow of a regular cone 𝐂 (with multiplicity one), then this is the unique tangent flow to (μ_t) at (x_0, t_0).

It is worth mentioning some related uniqueness results. Huisken's monotonicity formula ensures tangent flows are always self-similarly shrinking, Σ_t = √(-t)Σ_-1 for t < 0. There are various uniqueness results depending on Σ = Σ_-1. When Σ is compact, <cit.> proved that if √(-t)Σ is a tangent flow of a mean curvature flow (M_t) at (x_0,t_0), then it is the unique tangent flow at (x_0, t_0). <cit.> proved uniqueness of the tangent flow √(-t)Σ when Σ = ℝ^n-k×𝕊^k is a generalized cylinder. <cit.> proved uniqueness of the tangent flow when Σ is smooth and asymptotically conical. <cit.> obtained additional generalizations of the uniqueness results in <cit.>, respectively.

Importantly, the uniqueness results in <cit.> require Σ to be smooth and do not apply when Σ = 𝐂 is a minimal cone, for example. The first uniqueness result for tangent flows given by non-smooth Σ came in <cit.>, which showed that, for 2-dimensional Lagrangian mean curvature flows L^2_t ⊂ℂ^2, tangent flows given by a transverse pair of planes Σ = P_1 ∪ P_2 ⊂ℂ^2 are unique. The uniqueness result Theorem <ref> here applies to non-smooth minimal cones Σ = 𝐂 which arise as tangent flows to general integral Brakke flows of any codimension in an open subset U ⊂ℝ^N, albeit under the assumption of a uniform mean curvature bound H ∈ L^∞ L^∞_loc.

Under additional hypotheses, we can also describe an accompanying limit flow that arises as a more general blow-up limit around a singularity.

Let M = (M_t^n)_t ∈ (a,b) be a smooth, properly embedded mean curvature flow in an open subset U ⊂ℝ^n+1 with mean curvature H ∈ L^∞ L^∞_loc( U × (a,b) ).
If the tangent flow of M at (x, b) ∈ U ×{ b } is given by the static flow of a generalized Simons cone 𝐂^n ⊂ℝ^n+1 (with multiplicity one) and 𝐂 is area minimizing, then there exists a sequence (x_i, t_i ) ∈ U × (a,b) with lim_i →∞ (x_i , t_i) = (x, b) and a sequence λ_i ↘ 0 such that the sequence of rescaled mean curvature flows M_i = D_λ_i^-1 ( M - (x_i , t_i) ) converges to the static flow of a smooth Hardt-Simon minimal surface.

We refer the reader to Section <ref> for the definitions relevant to Theorem <ref>. For now, we simply note that the mean curvature flow solutions constructed in <cit.> provide examples of mean curvature flows satisfying Theorem <ref>. Theorem <ref> states that general mean curvature flows with H ∈ L^∞ L^∞_loc in some sense mimic the dynamics of Velázquez's mean curvature flow solutions <cit.> near Simons cone singularities.

The paper is organized as follows: In Section <ref>, we establish notation, specify definitions, and obtain some general results used throughout the paper. In particular, Theorem <ref> is proven here. Section <ref> proves the uniqueness of tangent flows given by regular stationary cones, Theorem <ref>. In Section <ref>, we obtain refined dynamics of mean curvature flows near regular stationary cones and prove Theorem <ref>. Finally, Appendix <ref> reviews some well-known results about integral varifolds with generalized mean curvature H ∈ L^p_loc that are cited throughout the paper.

Acknowledgements. I would like to thank Professor Felix Schulze for many helpful conversations, particularly regarding the results in <cit.>. The author is supported by a Leverhulme Trust Early Career Fellowship (ECF-2023-182).

§ PRELIMINARIES

§.§ Brakke Flows with H Bounds

Let 2 ≤ n < N, U ⊂ℝ^N be open, and p ∈ (1, ∞]. Let V be an integer rectifiable n-varifold in U and let μ_V be the associated measure on U. We say that V has generalized mean curvature H ∈ L^p_loc(U) if there exists a Borel function H : U →ℝ^N with H ∈ L^p_loc( U, dμ_V) such that the first variation δ V satisfies δ V ( X ) = - ∫ H · X d μ_V ∀ X ∈ C^1_c ( U, ℝ^N ) .

Observe that, by the definition given in Definition <ref>, if V has generalized mean curvature H ∈ L^p_loc(U), then V has no generalized boundary in U. This convention differs somewhat from the existing literature, but we adopt it nonetheless to simplify the statements in the remainder of the paper.

Let 2 ≤ n < N, U ⊂ℝ^N be open, and p ∈ (1, ∞]. Let (μ_t)_t ∈ (a,b) be an n-dimensional integral Brakke flow in U. Recall that, for a.e. t ∈ (a,b), there exists an integer rectifiable n-varifold V_t such that μ_V_t = μ_t. We say the Brakke flow (μ_t)_t ∈ (a,b) has generalized mean curvature H ∈ L^∞ L^p_loc(U × (a,b) ) if for a.e. t ∈ (a,b) there exists an integer rectifiable n-varifold V_t with generalized mean curvature H_t ∈ L^p_loc(U) such that μ_V_t = μ_t and ‖ H ‖_L^∞ L^p ( K × (a,b) ) ≑ ess sup_t ∈ (a, b) ‖ H_t ‖_L^p( K, dμ_t) < ∞ ( ∀ K ⋐ U). Throughout, “K ⋐ U” means K ⊆ U and K is compact.

Let 2 ≤ n < N, U ⊂ℝ^N be an open subset, and -∞ < a < b < ∞. Throughout, we assume (μ_t)_t ∈ (a, b) is an n-dimensional integral Brakke flow in U such that (μ_t) has locally uniformly bounded areas, that is, sup_t ∈ (a, b)μ_t( K) < ∞ ∀ K ⋐ U. For simplicity, we will abbreviate this assumption as “(μ_t)_t ∈ (a, b) is an integral n-Brakke flow in U ⊂ℝ^N with locally uniformly bounded areas” in the remainder of the paper.
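As a concrete example of these definitions, included only for illustration: let V be the multiplicity one varifold associated to the round sphere ∂ B_r(0) ⊂ℝ^n+1. Integration by parts in the first variation formula gives
H(x) = - (n/r^2) x , so |H| ≡ n/r and H ∈ L^p( dμ_V ) for every p ∈ (1, ∞].
The associated shrinking spheres M_t = ∂ B_√(r^2 - 2nt)(0) then satisfy H ∈ L^∞ L^∞_loc(ℝ^n+1× (0, T)) for every T < r^2/(2n), but not up to the extinction time T = r^2/(2n), where |H| = n/√(r^2 - 2nt) blows up.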
We will also often assume that, for some p ∈ (1, ∞],(μ_t)_t ∈ (a,b) has generalized mean curvatureH ∈ L^∞ L^p_loc( U × ( a, b) ) ,but this assumption will be indicated in each statement. The assumption that the flow has locally uniformly bounded areas is quite mild. For example, it holds for Brakke flows (μ_t)_t ∈ [a,b) starting from initial data μ_a with locally bounded areas, i.e. μ_a(K) < ∞ for all K ⋐ U. Indeed, this can be seen by using Brakke's inequality with suitably defined spherically shrinking test functions. On the other hand, the assumption that the flow has generalized mean curvature H ∈ L^∞ L^p_loc( U × (a,b)) is much more restrictive. Nonetheless, <cit.> shows that even smooth mean curvature flows (M_t^n ⊂^n+1)_t ∈ (a,b) with H ∈ L^∞ L^∞ ( ^n+1× (a,b)) can develop singularities at the final time t = b. Combined with the work of <cit.>, there are non-smooth Brakke flows with H ∈ L^∞ L^∞ (^N × (a,b)) with mild singularities and small singular sets, informally speaking. The next lemma shows that Brakke flows with H ∈ L^∞ L^p_loc can be changed at countably many times to get a Brakke flow which is a varifold with H ∈ L^p_loc at every time.Moreover, the flow naturally extends to the final time-slice. Let (μ_t)_t ∈ (a,b) be an integral n-Brakke flow in U ⊂^N with locally uniformly bounded areas and generalized mean curvature H ∈ L^∞ L^p_loc (U × (a,b) ) for some p ∈ (1, ∞]. Then for all t ∈ (a, b], there exists a unique integer rectifiable n-varifold V_t with generalized mean curvature in H_V_t∈ L^p_loc(U) such that: * μ_t ≤lim_t' ↗ tμ_t = μ_V_t for all t ∈ (a, b), * μ_t = μ_V_t for all but countably many t ∈ (a, b), * for all t ∈ (a, b] and all K ⋐ U μ_V_t (K) ≤sup_τ∈ (a, b)μ_τ (K) andH_V_t_L^p (K ) ≤ H _L^∞ L^p ( K × (a, b) ) * for all t ∈ (a, b] lim_t' ↗ t V_t'(f) = V_t(f) ∀ f ∈ C^0_c ( G(n, U) ) and * (μ_V_t )_t ∈ (a, b] is an n-dimensional integral Brakke flow in U. Let t ∈ (a, b]. Since (μ_t) has H ∈ L^∞ L^p_loc(U × (a, b)), there exists a sequence of times t_j ↗ t (with t_j < t) and integer rectifiable n-varifolds Ṽ_j with generalized mean curvature H_Ṽ_j in L^p_loc(U) such that μ_t_j = μ_Ṽ_j and H_Ṽ_j_L^p(K, dμ_t_j ) ≤ H _L^∞ L^p(K × (a, b))≑ C_K < ∞∀ K ⋐ U. By compactness Lemma <ref>, there exists a subsequence (still denoted Ṽ_j) and an integer rectifiable n-varifold V_t with locally bounded areas and generalized mean curvature H_V_t∈ L^p_loc(U) such that Ṽ_j ⇀ V_t as varifolds and μ_V_t(K) ≤sup_τ∈ (a,b)μ_τ(K) and H_V_t_L^∞(K, dμ_V_t )≤ C_K ∀ K ⋐ U. This defines the varifold V_t for any t ∈ (a, b] and proves it satisfies (3). By <cit.>, for any f ∈ C^0_c ( U, _≥ 0 ) and any t ∈ (a, b) μ_t ( f ) ≤lim_s ↗ tμ_s (f) = lim_j →∞μ_Ṽ_j(f) = μ_V_t(f). This proves (1). For (2), simply note that <cit.> implies the first inequality in (<ref>) is an equality for all but countably many times t ∈ (a, b). Note additionally that the equality lim_s ↗ tμ_s = μ_V_t in (1) implies the varifold V_t is unique. To prove (4), let t ∈ (a, b] and f ∈ C^∞_c(U, _≥ 0). Take a sequence t_j ↗ t. By <cit.>, there exists C_f such that μ_t(f) - C_f t is decreasing in t. It follows that, for any j, μ_V_t (f) = lim_s ↗ t( μ_s (f) - C_f s ) + C_f t (1)≤μ_t_j (f) - C_f t_j + C_f t ≤μ_V_t_j (f) + C_f ( t - t_j ) (1). Taking j →∞ gives μ_V_t (f) ≤lim inf_j μ_V_t_j(f). For the reverse inequality, let ϵ > 0. Recall μ_V_t (f) = lim_s ↗ tμ_t(f) and similarly for the the V_t_j. Thus, there exists s = s(ϵ) < t such that | s - t | < ϵ and | μ_V_t(f) - μ_s (f) | < ϵ. There exists J such that s < t_j < t for all j > J. 
It follows that, for j > J, μ_V_t_j(f) - μ_V_t(f) ≤μ_V_t_j(f) - μ_s(f) + ϵ= lim_σ↗ t_j ( μ_σ (f) - C_f σ ) + C_f t_j - μ_s(f) + ϵ≤μ_s(f) - C_f s + C_f t_j - μ_s (f) + ϵ= C_f ( t_j - s ) + ϵ. Taking j →∞ then ϵ→ 0 gives lim sup_j μ_V_t_j(f) ≤μ_V_t(f). Since the sequence t_j ↗ t and f ∈ C^∞_c(U, _≥ 0 ) were arbitrary, it follows that lim_t' ↗ tμ_V_t'(f) = μ_V_t (f) ∀ f ∈ C^∞_c ( U, _≥ 0 ). Convergence as varifolds then follows from Lemma <ref> and completes the proof of (4). To prove (5), it suffices to check Brakke's inequality. Let a < t_0 < t_1 ≤ b and f ∈ C^1_c ( U × [t_0, t_1] ) with f ≥ 0. Then μ_V_t_1 (f) - μ_V_t_0 (f) ≤lim_s ↗ t_1μ_s (f) - μ_t_0 (f) (1) ≤lim_s ↗ t_1∫_t_0^s ∫∂_t f + ∇ f · H - |H|^2 f dμ_t dt = ∫_t_0^t_1∫∂_t f + ∇ f · H - |H|^2 f dμ_V_t dt (2).§.§ Huisken's Monotonicity Formula and Gaussian DensityLet Φ_x_0, t_0 (x, t ) ≑1/( 4 π ( t_0 - t) )^n/2 e^ -| x - x_0|^2/4 ( t_0 - t)( t< t_0)denote the backwards heat kernel based at (x_0, t_0). Let ϕ_x_0, t_0; r (x, t) ≑( 1 -| x - x_0|^2 - 2n ( t_0 - t) / r )^3_+ ( t ≤ t_0 )denote the spherically shrinking localization function based at (x_0, t_0) with scale r > 0.Huisken's monotonicity formula and its localized analogue for Brakke flows states∫Φ_x_0, t_0( ·, t) ϕ_x_0, t_0; r ( · , t) d μ_t - ∫Φ_x_0, t_0( ·, s) ϕ_x_0, t_0; r ( ·, s) d μ_s≤ - ∫_s^t ∫| H +( x - x_0 )^⊥/2 ( t_0 - τ ) |^2 Φ_x_0, t_0ϕ_x_0, t_0; r d μ_τ dτ≤ 0for all s < t < t_0 and r > 0 such that (μ_t) is a Brakke flow on B_r + 2n (t_0 - s)(x_0) × [s, t]. Moreover, for Brakke flows ( μ_t)_t ∈ (a,b) defined in U ⊂^N and points x_0 ∈ B_2 r_0 (x_0) ⊂ U, the Gaussian densityΘ_μ ( x_0, t_0 ) ≑lim_t ↗ t_0∫Φ_x_0, t_0 ( ·, t) ϕ_x_0, t_0 ; r ( · , t) d μ_tis well-defined for t_0 ∈ (a, b] and independent of r ∈ (0 , r_0).For an n-dimensional Brakke flow (μ_t) and λ > 0, define the parabolically dilated Brakke flow D_x_0, t_0; λμbased at (x_0, t_0) to be [ D_x_0, t_0; λμ]_t ( A) ≑λ^n μ_t_0 + t λ^-2 ( λ^-1 A+ x_0).Often, we omit the basepoint x_0, t_0 and simply write D_λμ when the basepoint is clear from context. §.§ Equivalent Notions of Density Recall the density of varifold V at x_0 ∈^N is given by θ_V(x_0) ≑lim_ρ↘ 0μ_V( B_ρ(x_0))/ω_n ρ^nwhen the limit exists and where ω_n denotes the volume of the unit n-ball. In the remainder of the article, we use η_x_0, λ to denote the spatial translation and dilation η_x_0, λ : ^N →^N, η_x_0, λ (x) ≑λ ( x - x_0 ).η_x_0, λ naturally induces a map of integer rectifiable n-varifolds which we denote by (η_x_0, λ)_♯. Specifically, if V = (Γ, θ), then (η_x_0, λ)_♯ V = ( η_x_0, λ (Γ) , θ∘η_x_0,λ^-1 ). When the basepoint x_0 is clear from context or x_0 =0, we shall often simply write η_λ for simplicity.When H ∈ L^∞ L^p_loc with p > n, the monotonicity formula (<ref>) for varifolds gives a characterization of subsequential blow-ups of time-slices. Let (μ_t)_t ∈ (a,b) be an integral n-Brakke flow in U ⊂^N with locally uniformly bounded areas (Assumption <ref>) and generalized mean curvature H∈ L^∞ L^p_loc(U × (a,b)) for some p ∈ (n, ∞]. Let (V_t)_t ∈ (a, b] be the family of varifolds as in Lemma <ref>. For any (x_0, t_0) ∈ U × (a, b] and any sequence λ_i ↗ +∞, there exists a subsequence (still denoted λ_i) and an integer rectifiable n-varifoldin ^N such that ( η_ x_0, λ_i)_♯ V_t_0⇀as varifolds in ^N. Moreover,is a stationary, dilation invariant varifold in ^N with μ_ ( B_r ( 0 ) ) /ω_n r^n= θ_V_t_0 (x_0) = lim_ρ↘ 0μ_V_t_0 ( B_ρ ( x_0 ) ) /ω_n ρ^n ∀ 0 < r < ∞. Anythat arises as such a limit is called a tangent cone of V_t_0 at x_0. 
By Lemma <ref>, V_t_0 has generalized mean curvature H = H_V_t_0∈ L^p_loc( U). It follows that the dilations V_i ≑ (η_x_0, λ_i)_♯ V_t_0 have generalized mean curvature H_i ∈ L^p_loc( λ_i (U - x_0) ) with H_i _L^p(K , d μ_V_i ) ≤λ_i^n/p - 1H_V_t_0_L^p ( B_R(x_0), d μ_V_t_0 ) for all K ⊂^N compact and i ≫ 1 sufficiently large so that 1/λ_i K + x_0 ⊂B_R(x_0) ⋐ U. In particular, lim_i →∞ H_i _L^p(K , d μ_V_i )= 0. Since p > n, there is the monotonicity formula (<ref>) for integral n-varifolds with H ∈ L^p_loc (Proposition <ref>, see also <cit.>). In particular, the density θ_V_t_0 (x_0) ≑lim_ρ↘ 0μ_V_t_0 (B_ρ(x_0) )/ω_n ρ^n is well-defined and the blow-up sequence V_i has uniform local area bounds. It now follows from compactness (Lemma <ref>) that there is a subsequence (still denoted V_i) and an integer rectifiable n-varifoldin ^N such that V_i ⇀. By (<ref>) and Lemma <ref>, H__L^p(K, dμ_ )≤lim sup_iH_V_i_L^p (K, d μ_V_i )= 0, which impliesis stationary. Moreover, the convergence V_i ⇀ implies μ_ ( B_r(0 ) )/ω_n r^n= θ_V_t_0 (x_0) ∀ 0 < r < ∞. The monotonicity formula (<ref>) for stationary varifolds finally impliesis dilation invariant, that is ( η_0, λ )_♯ = ∀ 0 < λ <∞ (see e.g. <cit.>).We will show in Lemma <ref> below that Huisken's monotonicity formula (<ref>) similarly allows us to extract tangent flows from space-time blow-ups. First, we show local uniform area ratio bounds follow from the local uniform area bounds. Let (μ_t)_t ∈ (a,b) be an integral n-Brakke flow in U ⊂^N with locally uniformly bounded areas (Assumption <ref>) and generalized mean curvature H ∈ L^∞ L^p_loc(U ×(a,b)) for some p ∈ (n, ∞]. For any K ⋐ U, there exists C such that sup_t ∈ ( a , b) sup_B_r(x) ⊂ K μ_t ( B_r(x)) /r^n≤ C. The proof will proceed by combining the monotonicity formula (<ref>) with the local uniform area bounds. Given K ⋐ U, there exists ϵ > 0 such that K' ≑B_ϵ (K) = ⋃_x ∈ KB_ϵ (x) ⋐ U. Let R_0 ≑sup{ r > 0 : B_r(x) ⊂ Kfor somex }, R_1 ≑max{ R_0, 1 }. For any B_r(x) ⊂ K, there exists R ∈ [max{ϵ, r } , R_1] such that B_r (x) ⊂ B_R(x) ⊂ K'. Let (V_t)_t ∈ (a, b] be the family of varifolds as in Lemma <ref> and let t ∈ (a,b). The monotonicity formula (<ref>) implies μ_V_t ( B_r(x) ) / r^n≤μ_V_t ( B_r(x) ) / r^n e^ H_V_t/ 1 - n/ pr+ e^ H_V_t/ 1 - n/ pr ≤μ_V_t ( B_R(x) ) / R^n e^ H_V_t/ 1 - n/ pR+ e^ H_V_t/ 1 - n/ pR where H_V_t =H_V_t_L^p ( B_R (x) , dμ_V_t ). It then follows that μ_t (B_r (x) ) / r^n ≤μ_V_t (B_r (x) ) / r^n≤μ_V_t ( B_R(x) ) / R^n e^ H_V_t_L^p ( K' )/ 1 - n/ pR+ e^ H _L^p(K') / 1 - n/ pR ≤μ_V_t ( K' ) /ϵ^n e^ H_V_t_L^p ( K' )/ 1 - n/ pR_1+ e^ H _L^p(K') / 1 - n/ pR_1 ≤sup_τ∈ (a,b)μ_τ ( K' ) /ϵ^n e^ H _L^∞ L^p ( K' × ( a, b))/ 1 - n/ pR_1+ e^ H _L^∞ L^p(K' × ( a, b) ) / 1 - n/ pR_1< ∞. Taking the supremum over t ∈ (a, b) and B_r(x) ⊂ K completes the proof. Let (μ_t)_t ∈ (a, b) be an integral n-Brakke flow in U ⊂^N with locally uniformly bounded areas and generalized mean curvature H ∈ L^∞ L^p_loc(U × (a,b)) for some p ∈ (n, ∞]. For any (x_0, t_0) ∈ U × (a,b] and any sequence λ_i ↗ +∞, there exists a subsequence (still denoted λ_i) and an integer rectifiable n-varifoldin ^N such that for any t < 0 ( D_x_0, t_0; λ_iμ)_t ⇀μ_as i →∞ (weakly as measures). Moreover,is a stationary, dilation invariant varifold in ^N with Θ_μ ( x_0, t_0 ) = 1/( 4 π)^n/2∫ e^- |x|^2/4 d μ_ . Anythat arises as such a limit is called a tangent flow of (μ_t)_t ∈ (a,b) at (x_0, t_0). Fix (x_0, t_0) ∈ U × ( a, b]. By Lemma <ref>, (μ_t) has local uniform area ratio bounds in a neighborhood of (x_0, t_0). 
It follows from Huisken's monotonicity formula (<ref>) and the compactness of Brakke flows with local uniform area bounds that there exists a Brakke flow (μ^∞_t)_t < 0 on ^N and a subsequence (still denoted λ_i) such that ( D_λ_iμ)_t = ( D_x_0, t_0; λ_iμ)_t ⇀μ^∞_t ∀ t < 0 (see <cit.> or <cit.>). Moreover, μ^∞ is a shrinker for t < 0 in the sense that μ_t ( A) = (-t)^n/2μ_-1 ( A / √( -t) ) ∀ t < 0 and, for any t < 0, μ^∞_t is an integer rectifiable n-varifold with H +x^⊥/2(-t) = 0 for μ^∞_t-a.e.x ∈^N. Additionally, Θ_μ( x_0, t_0) = 1/ ( 4 π (-t) )^n/2∫ e^- |x|^2/4 (-t) d μ^∞_t ∀ t < 0. Because (μ_t) is a Brakke flow with H ∈ L^∞ L^p_loc, there exists t < 0 such that, for all i, ( D_λ_iμ )_t is represented by an integer rectifiable n-varifold V_i with H_V_i∈ L^p_loc ( λ_i U ) and, for any K ⋐^N, H_V_i_L^p( K , d μ_V_i )≤λ_i^n/p - 1 H _L^∞ L^p( B_r_0 (x_0) × (a,b) ) for all i ≫ 1 sufficiently large such that λ_i^-1 K + x_0 ⊂ B_r_0 ( x_0) and B_r_0 (x_0) ⋐ U. Since p > n, lim_i →∞ H_V_i_L^p ( K , d μ_V_i ) = 0 ∀ K ⋐^N. Since μ_V_i⇀μ^∞_t, the compactness of varifolds with mean curvature bounds Lemma <ref> and Lemma <ref> imply that μ^∞_t = μ_ for some integer rectifiable n-varifoldin ^N which is stationary (H_ = 0). It then follows from (<ref>) that x^⊥ = 0 μ_-a.e. and thusis dilation invariant (see e.g. <cit.>). Finally, (<ref>) implies μ^∞_t = μ_ for all t < 0. This completes the proof. A priori, the tangent cones from Lemma <ref> could be entirely unrelated to the tangent flows from Lemma <ref>. Indeed, the two limiting objects arise from entirely different blow-up sequences. The next theorem, however, proves that the tangent cones from Lemma <ref> exactly correspond to tangent flows from Lemma <ref>. Let (μ_t)_t ∈ (a, b) be an integral n-Brakke flow in U ⊂^N with locally uniformly bounded areas and generalized mean curvature H∈ L^∞ L^p_loc(U × (a,b)) for some p ∈ (n, ∞]. Let (V_t)_t ∈ (a, b] be the family of varifolds as in Lemma <ref>. For any (x_0, t_0) ∈ U × (a, b], θ_V_t_0 (x_0) = Θ_μ(x_0, t_0). For any sequence λ_i ↗ +∞, there exists a subsequence (still denoted λ_i) such that there are limiting integral n-varifolds , as in Lemmas <ref>, <ref> respectively, and in fact =. By translation, we can assume without loss of generality that (x_0, t_0) = ( 0 , 0) ∈ U × (a,b]. For any sequence λ_i ↗ +∞, denote the rescalings V^i_0 ≑( η_λ_i)_♯ V_0, μ^i_t ≑ (D_λ_iμ)_t. Lemmas <ref> and <ref> imply there exists a subsequence (still denoted by index i) such that V^i_0 ⇀, andμ^i_t ⇀μ_ (∀ t < 0), where , are stationary, dilation invariant, integral n-varifolds. Additionally, θ_V_0 (0) = μ_ ( B_r(0))/ω_n r^n ( ∀ 0 < r < ∞) and Θ_μ ( 0, 0) = 1/ ( 4 π (-t) )^n/2∫ e^- |x|^2/4(-t) d μ_ ( ∀ t < 0). We first claim that μ_≤μ_. Let f ∈ C^1_c( ^N) with f ≥ 0. Say f ⊂ B_R. It follows that μ_ (f) = lim_i →∞μ_V^i_0 (f) = lim_i →∞lim_t' ↗ 0μ^i_t' (f)(Lemma <ref>) ≤ lim_i →∞lim_t' ↗ 0( μ^i_-1 (f) + ∫_-1^t'∫H_i ·∇ f - |H_i|^2 f dμ^i_t d t ) ≤ lim_i →∞μ^i_-1 (f) + lim_i →∞∫_-1^0∫|H_i||∇ f|dμ^i_t d t ( f ≥ 0) = μ_ (f) + lim_i →∞∫_-1^0∫|H_i||∇ f|dμ^i_t d t. 
For any 0 < δ≪ 1, the integral term can be estimated as follows: lim_i →∞∫_-1^0∫|H_i||∇ f|dμ^i_t d t ≤ lim_i →∞f _C^1∫_-1^0 ∫_B_R |H_i| dμ^i_t dt = lim_i →∞f _C^1λ_i^n-1∫_-1^0 ∫_B_Rλ_i^-1 |H| dμ_tλ_i^-2 dt = lim_i →∞f _C^1λ_i^n+1∫_-λ_i^-2^0 ∫_B_Rλ_i^-1 |H| dμ_τ d τ ( τ = t λ_i^-2 ) ≤ lim_i →∞f _C^1λ_i^n+1∫_-λ_i^-2^0 ( ∫_B_Rλ_i^-1 |H|^p dμ_τ)^1/pμ_τ( B_R λ_i^-1 )^p-1/p d τ ≤f _C^1 H _L^∞ L^p ( B_δ× (a, b) ) lim_i →∞λ_i^n-1( sup_-λ_i^-2≤τ≤ 0 μ_τ ( B_R λ_i^-1 ) )^p-1/p =f _C^1 H _L^∞ L^p ( B_δ× (a, b) )R^ n(p-1)/plim_i →∞λ_i^-1 + n/p( sup_-λ_i^-2≤τ≤ 0 μ_τ ( B_R λ_i^-1 ) / R^n λ_i^-n)^p-1/p. By Lemma <ref>, the μ_τ ( B_R λ_i^-1 ) / R^n λ_i^-n term is uniformly bounded above by say C_0 < ∞. Since p >n, μ_(f) ≤μ_ (f) + C_0f _C^1 H _L^∞ L^p ( B_δ× (a, b) )R^ n(p-1)/p C_0 lim_i →∞λ_i^-1 + n/p = μ_(f). Thus, μ_(f) ≤μ_ (f) for all f ∈ C^1_c (^N) with f ≥ 0. It then follows from a limiting argument that μ_≤μ_. Note that, sinceis dilation invariant, μ_ ( B_r( 0 ) )/ω_n r^n = 1/ ( 4 π (-t) )^n/2∫ e^-|x|^2/ 4 (-t) d μ_∀ 0 < -t, r < ∞ and similarly for . μ_≤μ_ implies μ_ ( B_r( 0 ) )/ω_n r^n ≤μ_ ( B_r( 0 ) )/ω_n r^n ∀ 0 < r < ∞. We claim that the reverse inequality μ_ ( B_r( 0 ) )/ω_n r^n ≥μ_ ( B_r( 0 ) )/ω_n r^n also holds (for some or equivalently all 0 < r < ∞). First, observe there exists a time τ < 0 such that μ_τλ_i^-2 is an integral n-varifold with generalized mean curvature H_τλ_i^-2∈ L^p_loc (U) for all i and H_τλ_i^-2_L^p(K)≤ H _L^∞ L^p(K × (a,b)) < ∞ (∀ K ⋐ U) , where as usual H denotes the mean curvature of the flow (μ_t). Fix R> 0 such that B_R = B_R(0 ) ⋐ U. Let F_i ( ρ ) ≑ e^ H_τλ_i^-2_L^p( B_R )1/ 1 - n/pρ^1 - n/pandF(ρ) ≑ e^ H _L^∞ L^p(B_R ×( a,b))1/ 1 - n/pρ^1 - n/p as in the monotonicity formula (<ref>). Note 1 ≤ F_i (ρ ) ≤F( ρ ) ∀ρ > 0. It follows from the monotonicity formula (<ref>) that, for any 0 < ρ < R and any r > 0, r^-nμ_ ( B_r ) = lim_i →∞ r^-nμ^i_τ( B_r) = lim_i →∞μ_τλ_i^-2( B_r λ_i^-1 ) / r^n λ_i^-n= lim_i →∞1/ F_i( r λ_i^-1 ) ( F_i( r λ_i^-1 ) μ_τλ_i^-2( B_r λ_i^-1 ) / r^n λ_i^-n + F_i( r λ_i^-1 ) - F_i ( r λ_i^-1 ) )≤lim sup_i →∞1/ F_i( r λ_i^-1 ) [ F_i ( ρ ) μ_τλ_i^-2 ( B_ρ ) /ρ^n + F_i( ρ ) - F_i ( r λ_i^-1 ) ] (<ref>)≤lim sup_i →∞[ F(ρ ) μ_τλ_i^-2 ( B_ρ ) /ρ^n + F( ρ ) - 1 ] =F( ρ ) μ_V_0 ( B_ρ ) /ρ^n +F( ρ ) - 1 . Thus, μ_ ( B_r ) / r^n≤F( ρ ) μ_V_0 ( B_ρ ) /ρ^n + F( ρ ) - 1. Taking ρ↘ 0 reveals μ_ ( B_r ) / r^n≤lim_ρ↘ 0F( ρ ) μ_V_0 ( B_ρ ) /ρ^n + F( ρ ) - 1 = lim_ρ↘ 0μ_V_0 ( B_ρ ) /ρ^n = ω_n θ_V_0 ( 0) = μ_ ( B_r ) /r^n. In summary, we have shown that μ_≤μ_andμ_( B_r) /ω_n r^n≥μ_( B_r) /ω_n r^n∀ 0 < r < ∞. In particular, μ_( B_r )= μ_ (B_r) for all 0 < r < ∞. It follows that =. Indeed, if not, then there exists a bounded subset A ⊂^N such that μ_(A) < μ_ (A). Taking R > 0 large enough so that B_R ⊃ A would then imply μ_ (B_R) = μ_ ( B_R ∖ A ) + μ_ (A) < μ_ ( B_R ∖ A ) + μ_(A) = μ_ (B_R) = μ_ (B_R) a contradiction. Thus, = and in particular θ_V_0 ( 0) = Θ_μ ( 0, 0). This completes the proof. As an immediate consequence of Theorem <ref>, we deduce that the uniqueness of tangent cones of time-slices is equivalent to the uniqueness of tangent flows. Let (μ_t)_t ∈ (a, b) be an integral n-Brakke flow in U ⊂^N with locally uniformly bounded areas and generalized mean curvature H∈ L^∞ L^p_loc(U × (a,b)) for some p ∈ (n, ∞]. Let (V_t)_t ∈ (a, b] be the family of varifolds as in Lemma <ref>. Let (x_0, t_0) ∈ U × ( a, b]. Then { :is a tangent cone of V_t_0 at x_0} = { : μ_ is a tangent flow of (μ_t) at (x_0, t_0)}. 
In particular, the tangent cone V_t_0 at x_0 is unique if and only if the tangent flow of (μ_t) at (x_0, t_0) is unique. In other words, ( η_x_0, λ_i)_♯ V_t_0⇀ for every sequence λ_i ↗ +∞ if and only if ( D_x_0, t_0; λ_iμ )_t ⇀μ_ ( ∀ t < 0) for every sequence λ_i ↗ +∞. Let (μ_t)_t ∈ (a,b) be an integral Brakke flow with mean curvature bounds as in Theorem <ref>, and consider the associated flow of varifolds (V_t)_t ∈ (a,b] as in Lemma <ref>. By combining Theorem <ref> with White's local regularity theorem <cit.> (and its generalization to integral n-Brakke flows <cit.>), it follows that if t_0 ∈ (a,b] and V_t_0 also has unit density (i.e. θ_V_t_0(x) = 1 for μ_V_t_0-a.e. x ∈ U), then the singular set of the flow M = ( V_t)_t ∈ (a,t_0] in the t=t_0 time-slice has n-dimensional Hausdorff measure 0, that is H^n(sing_t_0M ) = 0. This recovers Brakke's main regularity theorem <cit.> in the special case of flows with mean curvature bounds (see also <cit.> and <cit.>). We refer the interested reader to <cit.> for precise definitions. § UNIQUENESS OF TANGENT FLOWS GIVEN BY REGULAR CONES Define the link L() of a dilation invariant set ⊂^N to be L() ≑∩^N-1. is said to be a regular cone if L() is a smooth, (properly) embedded submanifold of ^N-1. If ⊂^N is a regular cone, then ∖{0} is a smooth submanifold of ^N and we write ^n when ∖{0} has dimension n. Note that a regular cone ^n ⊂^N naturally gives a dilation invariant integral n-varifold (of multiplicity one) with associated measure H^n on ^N.The main result of this section is the following uniqueness result which restates Theorem <ref>: Let (μ_t)_t ∈ (a,b) be an integral n-Brakke flow in U ⊂^N with locally uniformly bounded areas and generalized mean curvature H ∈ L^∞ L^∞_loc(U × (a,b)). If (μ_t) has a tangent flow μ_ at (x_0,t_0) ∈ U × (a, b] given by a regular cone ^n (with multiplicity one), then μ_ is the unique tangent flow of (μ_t) at (x_0, t_0). The proof essentially follows from <cit.> applied to the time slice V_t_0. However, <cit.> requires a regularity assumption <cit.> which we must verify for V_t_0. Informally, this regularity assumption states that if V_t_0 is a small C^1, α-graph over the regular conein an annulus, then the mean curvature H_V_t_0 of V_t_0 has interior C^2-bounds.Since V_t_0 comes from a Brakke flow, we can prove V_t_0 satisfies this regularity assumption <cit.> as follows: * show that C^1, α-graphicality propagates outward in space and backward in time (Lemma <ref>), and * apply interior estimates to improve the C^1, α bounds to C^∞ estimates for V_t and apply interior estimates to the evolution equation for H = H_V_t to obtain H _C^2≲ H _C^0 (Lemma <ref>).The remainder of this section rigorously carries out this argument to prove Theorem <ref>. In what follows we use A_r, R( x_0) to denote the open annulus A_r, R(x_0) = B_R( x_0) ∖ B_r (x_0) and A_r, R = A_r, R (0). For ^n ⊂^N a regular cone, we slightly abuse notation and write u : ∩ A_r, R→ T^⊥ to mean a function u : ∩ A_r, R→^N such that u(x) ∈ T^⊥_x for all x ∈∩ A_r,R. For u : ∩ A_r,R→ T^⊥, denote G_r, R (u) ≑{ x + u(x) /√( 1 +|u(x)|^2 / |x|^2) : x ∈∩ A_r, R}⊂^N.Note that ifis a regular cone and u : ∩ A_r, R→ T^⊥ is a C^2-function with u(x)/|x| sufficiently small in C^1, then G_r, R(u) is a properly embedded C^2-submanifold. Fora regular cone, let ·_C^k_* and ·_C^k, α_* (0 < α≤ 1) denote the standard C^k and C^k, α norms respectively. 
For example, if u : ∩Ω→^N then u _C^k, α_*( ∩Ω)= ∑_j = 0^k sup_x ∈∩Ω | ∇^j_ u |(x) + sup_xy ∈∩Ω | ∇^k_ u (x) - ∇^k_ u(y) |/|x - y|^α. For 0 < r < R < ∞ and u : ∩ A_r, R→^N, define scale-invariant C^k and C^k, α norms by u _C^k ( ∩ A_r, R )≑∑_j = 0^k sup_x ∈∩ A_r, R |x|^j - 1sup_y ∈ B_|x|/2(x) ∩∩ A_r, R| ∇^j_ u |(y) = ∑_j = 0^k sup_x ∈∩ A_r, R |x|^j - 1∇^j_ u _C^0 ( B_|x|/2(x) ∩∩ A_r, R) u _C^k, α( ∩ A_r, R )≑u _C^k( ∩ A_r, R ) + sup_x ∈∩ A_r, R |x|^k -1 + αsup_yz ∈ B_|x|/2 (x) ∩∩ A_r, R | ∇^k_ u (y) - ∇^k_ u(z) | / | y - z |^α =u _C^k( ∩ A_r, R ) + sup_x ∈∩ A_r, R |x|^k -1 + α [ ∇^k u ]_C^α_* ( B_|x|/2 (x) ∩∩ A_r, R) where [·]_C^α_* denotes the standard C^α semi-norm. For functions u : ∩ A_r, R× (a, b) →^N that also depend on time t ∈ (a,b), denote backward parabolic neighborhoods as P_r (x, t) ≑ B_r(x) × (t - r^2, t) and define scale-invariant C^k and C^k, α norms by u _C^k ( ∩ A_r, R× (a,b) )≑ ∑_2i + j ≤ ksup_(x,t) ∈∩ A_r,R× (a,b) |x|^2i + j - 1∂_t^i ∇_^j u _C^0_* ( P_|x|/2 (x,t) ∩ (∩ A_r,R× (a,b) ) ) + ∑_0 < k - 2i - j/2 < 1sup_(x,t) ∈∩ A_r,R× (a,b) |x|^k-1+α [∂_t^i ∇_^j u ]_t, C^k-2i-j/2_* ( P_|x|/2 (x,t) ∩ (∩ A_r,R× (a,b) ) ) , u _C^k, α ( ∩ A_r, R× (a,b) )≑ ∑_2i + j ≤ ksup_(x,t) ∈∩ A_r,R× (a,b) |x|^2i + j - 1∂_t^i ∇_^j u _C^0_* ( P_|x|/2 (x,t) ∩ (∩ A_r,R× (a,b) )) + ∑_2i + j = ksup_(x,t) ∈∩ A_r,R× (a,b) |x|^k - 1 + α [ ∂_t^i ∇_^j u ]_x,C^α_*( P_|x|/2 (x,t) ∩ (∩ A_r,R× (a,b) ) )+ ∑_0 < k + α - 2i - j/2 < 1sup_(x,t) ∈∩ A_r,R× (a,b) |x|^k-1+α [ ∂_t^i ∇_^j u ]_t,C^k + α - 2i - j/2_*( P_|x|/2 (x,t) ∩ (∩ A_r,R× (a,b) ) ) . Here, [·]_x, C^α_* and [·]_t, C^α_* denote the standard C^α semi-norms in the variables x and t respectively. Namely, [u]_x, C^α_*(× (a,b) ∩Ω) ≑sup_ (x,t)(x', t) ∈× (a,b) ∩Ω| u(x,t) - u(x',t)|/|x - x'|^α,and [u]_t, C^α_*(× (a,b) ∩Ω) ≑sup_ (x,t)(x, t') ∈× (a,b) ∩Ω| u(x,t) - u(x,t')|/|t - t'|^α.Since we use the C^1, α norm most often, we note explicitly that u _C^1, α( ∩ A_r,R× (a,b)) = sup_x,t |x|^-1sup_(y,s) ∈ P_|x|/2 (x,t) |u(y,s) | + sup_x,tsup_(y,s) ∈ P_|x|/2 (x,t) |∇_ u(y,s) | + sup_x,t |x|^αsup_(y,s) (y',s) ∈ P_|x|/2 (x,t)|∇_ u(y,s) - ∇_ u(y',s)|/|y - y'|^α+ sup_x,t |x|^αsup_(y,s) (y,s') ∈ P_|x|/2 (x,t)| u(y,s) -u(y,s')|/|s - s'|^1+α/2 where also the suprema above are restricted to points in ∩ A_r,R× (a,b). While the C^k, α norms defined above are somewhat non-standard, they have been chosen so that they satisfy the following properties, the proofs which have been left as exercises to the reader. * (Parabolic Scaling Invariance) If u : ∩ A_r,R× (a,b) →^N, λ > 0, and ũ : ∩ A_λ r, λ R× ( λ^2 a, λ^2 b) →^N is given by ũ (x,t) = λ u ( x/ λ, t / λ^2 ) then ũ_C^k, α ( ∩ A_λ r, λ R× (λ^2 a, λ^2 b) )=u _C^k , α ( ∩ A_r, R× ( a, b) ). * (Time Translation Invariance) If u : ∩ A_r,R× (a,b) →^N, t_0 ∈, and ũ : ∩ A_ r,R× (a + t_0, b + t_0) →^N is given by ũ (x,t) =u ( x, t - t_0 ) then ũ_C^k, α ( ∩ A_ r,R× ( a+t_0,b+t_0) )=u _C^k , α ( ∩ A_r, R× ( a, b) ). * (Equivalent to Standard Hölder Norms) There exists C = C(r,R, k, α) such that C^-1 u _C^k, α_* ( ∩ A_r, R× ( a, b) )≤ u _C^k, α ( ∩ A_r, R× ( a, b) )≤ Cu _C^k, α_* ( ∩ A_r, R× ( a, b) ) . Similar properties hold for the space-time C^k norms and the spatial C^k and C^k, α norms. The choice of using radius |x|/2 balls in the above definitions was somewhat arbitrary. Indeed, it can be shown through a covering argument that, for any L > 1, replacing “|x|/2" with “|x|/L" in the above definitions gives an equivalent norm, that is, the norms differ by a factor of C = C(k, α , L). 
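For the reader's convenience, we sketch the short computation behind the parabolic scaling invariance property above; the remaining properties are checked in the same way. Suppose ũ (x,t) = λ u ( x/λ, t/λ^2 ) and write y = x/λ, s = t/λ^2. Then ∂_t^i ∇_𝐂^j ũ (x,t) = λ^1-2i-j (∂_t^i ∇_𝐂^j u)(y,s), the backward parabolic neighborhood P_|x|/2(x,t) corresponds to P_|y|/2(y,s) under (x,t) ↦ (y,s), and the Hölder seminorms pick up additional factors of λ^-α (in space) and λ^-(k+α-2i-j) (in time). In each summand of the definition these factors are cancelled exactly by the weights, e.g. |x|^2i+j-1λ^1-2i-j = |y|^2i+j-1 and |x|^k-1+αλ^1-k-α = |y|^k-1+α, so every summand, and hence the norm, is unchanged.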
Let ( μ_t)_t ∈ (a,b) be an integral n-Brakke flow in U ⊂^N with locally uniformly bounded areas and generalized mean curvature H ∈ L^∞ L^p_loc(U × (a,b)) for some p ∈ (n, ∞]. Let (V_t)_t ∈ (a, b] be theassociated integral n-Brakke flow from Lemma <ref>. Let (x_0, t_0) ∈ U × ( a, b] and let ^n ⊂^N be a minimal regular cone. For any ϵ > 0, there exists r_0, δ > 0 such that the following holds for all 0 < ρ < r_0: if (V_t_0 -x_0) ∩ A_ρ/2, ρ = G_ρ/2, ρ(u) for some u : ∩ A_ρ/2, ρ→ T^⊥ C with u _C^1, α ( ∩ A_ρ/2, ρ ) ≤δ, then (V_t - x_0) ∩ A_ρ/4, 2ρ = G_ρ/4, 2ρ ( ũ ( · , t) ) ( ∀ t ∈ [t_0 - 4ρ^2, t_0] ) for some extension ũ : (∩ A_ρ/4, 2ρ) × [t_0 - 4 ρ^2, t_0] → T^⊥ of u with ũ_C^1, α ( ∩ A_ρ/4, 2ρ× [t_0 - 4 ρ^2, t_0 ] )≤ϵ . By translation, assume without loss of generality that (x_0, t_0) = (0 , 0). Suppose the lemma were false for the sake of contradiction. Then we can take a sequence r_i = δ_i ↘ 0 and obtain ρ_i ∈ ( 0 , r_i ) where the implication fails. That is, V_0 ∩ A_ρ_i/2, ρ_i = G_ρ_i/2, ρ_i (u_i) is a C^1, α-graph over ∩ A_ρ_i/2, ρ_i withu_i _C^1, α(∩ A_ρ_i/2, ρ_i )≤δ_i, but (V_t) is not a C^1, α-graph over ∩ A_ρ_i/4, 2 ρ_i× [t_0 - 4 ρ_i^2, t_0] with C^1, α-norm bounded by ϵ in this region. Parabolically dilate V_t by λ_i ≑1/ρ_i→ +∞ to obtain V^i_t ≑ (η_λ_i)_♯ V_t λ_i^-2 and set μ^i_t ≑μ_V^i_t. After passing to a subsequence, Theorem <ref> applied to the Brakke flow (μ_V_t) implies there exists a stationary, dilation invariant varifold ' such that V^i_0 ⇀' and μ^i_t⇀μ_' ( ∀ t < 0) as i →∞. Since ' is dilation invariant and V_0^i ∩ A_1/2, 1 = G_1/2, 1( λ_i u_i ( ·/ λ_i ) )with λ_i u_i ( ·/ λ_i) _C^1, α ( ∩ A_1/2, 1 ) =u_i _C^1, α ( ∩ A_ρ_i/2, ρ_i )≤δ_i → 0, it follows that in fact ' =. Now the stationary flow ( μ^∞_t = μ_C )_t ≤ 0 given byhas Gaussian density Θ_μ^∞ (x, t) = 1 ∀ (x, t) ∈A_1/6, 4× [ -6, 0] sincehas smooth link. The upper semi-continuity of Gaussian density then implies that for any σ > 0 Θ_μ^i(x,t) < 1 + σ∀ (x, t) ∈A_1/6, 4× [ -6, 0] for all i ≫ 1 sufficiently large. By White's local regularity theorem <cit.> (and its generalization to integral Brakke flows <cit.>), it follows that, for i ≫ 1, V^i_t is a smooth mean curvature flow in A_1/6,4× [-6,0] with second fundamental form bounded by a dimensional constant C = C(N) < ∞ in A_1/5, 3× [-5, 0]. Interior regularity for mean curvature flow then implies that the convergence V^i_t is smooth on A_1/4, 2× [ -4, 0]. It follows that, for all i ≫ 1, V^i_t ∩ A_1/4, 2 = G_1/4,2 ( w̃_i ( ·, t) ) ( ∀ t ∈ [-4,0]) for some w̃_i : ∩ A_1/4, 2× [-4,0] → T^⊥ C extending λ_i u_i ( · / λ_i ) with w̃_i _C^3, α ( C ∩ A_1/4, 2× [-4, 0] )0. Since w̃_i ( ·, 0) _C^1, α ( ∩ A_1/2, 1 ) = λ_i ũ_i ( · / λ_i ) _C^1, α ( ∩ A_1/2, 1 )≤δ_i → 0, and derivatives of w̃_i converge to 0 on A_1/4, 2× [-4, 0], we have that in fact w̃_i _C^1, α ( ∩ A_1/4, 2× [ -4, 0] )≤ϵ for all i ≫ 1. Undoing dilations gives that for i ≫ 1 V_t ∩ A_ρ_i/4, 2ρ_i = G_ρ_i/4, 2ρ_i ( ũ_i(·, t) ) (∀t ∈ [-4ρ_i^2, 0]) for ũ_i(x,t) = 1/λ_iw̃_i ( x λ_i, t λ_i^2 ) : ∩ A_ρ_i/4, 2ρ_i→ T^⊥ extending u_i and ũ_i _C^1, α ( ∩ A_ρ_i/4, 2ρ_i× [- 4ρ_i^2, 0] ) ≤ϵ, which contradicts the choice of the r_i, δ_i, ρ_i. Let (μ_t)_t ∈ (a,b) be an integral n-Brakke flow in U ⊂^N with locally uniformly bounded areas and generalized mean curvature H ∈ L^∞ L^∞_loc ( U × (a, b)). Let (V_t)_t ∈ (a, b] be the associated family of varifolds as in Lemma <ref>. Let (x_0, t_0 ) ∈ U × (a, b] and let 𝐂 be a regular cone. 
There exists β, C, r_0 > 0 such that for all 0 < ρ < r_0 the following holds: if (V_t_0 - x_0) ∩ A_ρ/2,ρ = G_ρ/2, ρ ( u ) for some u : 𝐂∩ A_ρ/2, ρ→ T^⊥𝐂 with u _C^1, α (𝐂∩ A_ρ/2, ρ )≤β, then for any 0 < σ < ρ | ∇_V_t_0 H_V_t_0 | ≤ CH _L^∞ L^∞(A_ρ/2, ρ (x_0) × (t_0 - ρ^2 , t_0))/σ ≤ CH _L^∞ L^∞(A_r_0/2, r_0 (x_0) × (t_0 - r_0^2 , t_0))/σ <∞and | ∇^2_V_t_0 H_V_t_0 | ≤ CH _L^∞ L^∞(A_ρ/2, ρ(x_0) × (t_0 - ρ^2,t_0)) /σ^2 ≤ CH _L^∞ L^∞(A_r_0/2, r_0(x_0) × (t_0 - r_0^2,t_0)) /σ^2 < ∞ on V_t_0∩ A_ρ/2 + σ, ρ - σ(x_0). Throughout, we assume 0 < r_0 ≪ 1 is small enough so that B_2 r_0 (x_0 ) ⋐ U and (t_0 - 4 r_0^2, t_0 ) ⊂ (a,b). By translation, assume without loss of generality that (x_0, t_0 )= ( 0, 0). Assume 0 < ρ < r_0 and V_0∩ A_ρ/2,ρ = G_ρ/2, ρ ( u ) for some u : 𝐂∩ A_ρ/2, ρ→ T^⊥𝐂 with u _C^1, α (𝐂∩ A_ρ/2, ρ )≤β. By Lemma <ref>, for any ϵ > 0, we can assume β, r_0 ≪ 1 are sufficiently small (depending on ϵ) so that V_t∩ A_ρ/4,2ρ = G_ρ/4, 2ρ ( ũ( ·, t) ) ∀ t ∈ [-4ρ^2, 0] for some extension ũ : 𝐂∩ A_ρ/4, 2ρ×[-4 ρ^2, 0] → T^⊥𝐂 of u with ũ_C^1, α (𝐂∩ A_ρ/4, 2ρ× [-4ρ^2, 0] )≤ϵ. Consider the parabolically rescaled flow W_t ≑ ( η_1/ρ )_♯ V_t ρ^2 and note W_t ∩ A_1/4, 2 = G_1/4, 2 ( w̃ ( ·, t ) ) ∀ t ∈ [-4, 0] where w̃(x, t) ≑1/ρũ ( x ρ, t ρ^2 ), w̃_C^1, α ( A_1/4, 2× [ -4, 0] ) = ũ_C^1, α ( A_ρ/4, 2 ρ× [ -4 ρ^2, 0] )≤ϵ. If ϵ = ϵ ( N, 𝐂) is sufficiently small (depending only on N and 𝐂), interior estimates (see e.g. <cit.> or <cit.>) imply that W_t is a smooth mean curvature flow on A_1/3, 3/2 × [-3, 0] with derivative bounds on the second fundamental form A = A_W_t of the form sup_(x,t) ∈ A_1/3, 3/2× [ -3,0]| ∇^k_W_t A_W_t | ≤ C_k = C_k ( N, 𝐂) ( ∀ k ∈). In this region A_1/3, 3/2× [-3, 0] where W_t is a smooth mean curvature flow, the mean curvature H = H_W_t satisfies an evolution equation of the form ∂_t H = Δ H + ∇ A * H + A * ∇ H + A * A * H (see <cit.>). The bounds (<ref>) imply (<ref>) is a linear parabolic PDE system for H_W_t in the domain A_1/3,3/2× [ -3, 0] with uniform C^k-bounds on the coefficients that depend only on N, 𝐂, and k. Interior estimates for parabolic systems (see e.g. <cit.>) therefore imply that for some C = C(N, 𝐂) sup_A_1/2 + σ, 1 - σ | ∇^2_W_0 H_W_0 | ≤C/σ^2sup_A_1/2, 1× [-1, 0] |H_W_t | ( ∀ 0 < σ < 1 ) . In terms of V_t, (<ref>) becomes sup_A_ρ/2 + σ, ρ - σ | ∇^2_V_0 H_V_0 | = 1/ρ^3sup_A_1/2 + σ/ρ, 1 - σ/ρ | ∇^2_W_0 H_W_0 | ≤C/(σ/ρ)^2 ρ^3 sup_A_1/2, 1× [-1, 0] |H_W_t | (<ref>) =C/σ^2 ρsup_A_1/2, 1× [-1, 0] |H_W_t | =C/σ^2sup_A_ρ/2, ρ× [-ρ^2, 0] |H_V_t |≤C/σ^2 H _L^∞ L^∞ (A_ρ/2, ρ× (-ρ^2, 0))(Lemma <ref>) for all 0 < σ < ρ. Note that in the last line H denotes the mean curvature of the Brakke flow (μ_t)_t ∈ (a,b). An analogous argument applies to estimate | ∇_V_0 H_V_0 |. We can now prove Theorem <ref> by adapting the argument <cit.> used for tangent cones of stationary varifolds. Throughout, we use (V_t)_t ∈ (a,b] to denote the associated family of varifolds given by Lemma <ref>. By translation, assume without loss of generality that (x_0, t_0) = (0, 0). By Corollary <ref>, it suffices to show 𝐂 is the unique tangent cone of V_0 at 0. By Theorem <ref>, there exists some sequence λ_k ↗ +∞ such that V_k ≑ ( η_λ_k )_♯ V_0 ⇀𝐂. and θ_V_k ( 0 ) = μ_𝐂 (B_1 ) /ω_n for all k. Note that the mean curvature H_V_k of V_k is bounded by H_V_k _L^∞ ( B_2 )= 1/λ_kH_V_0_L^∞ ( B_2/ λ_k ) ≤1/λ_k H_V_t_L^∞ L^∞ ( B_δ× (-δ, 0])0 so long as B_δ× (-δ, 0) ⊂ U × (a,b) and k is large enough to ensure 2/λ_k < δ. 
Since 𝐂 is smooth away from 0, Allard's regularity theorem <cit.> (see also <cit.>) implies that V_k ∩ A_1/2,1 is smooth for all k ≫ 1 sufficiently large, and the convergence V_k ∩ A_1/2,1𝐂 is smooth. In particular, for k ≫ 1, V_k ∩ A_1/2, 1 = G_1/2,1 ( u_k ) for some u_k : A_1/2, 1→ T^⊥𝐂 with u_k _C^1, α ( A_1/2,1)0. Lemma <ref> implies V_k satisfies property <cit.> and therefore <cit.> applies. That is, for k ≫ 1, V_k ∩ B_1 ∖{0} = G_0, 1 ( ũ_k ) for some extension ũ_k ∈ C^2 ( 𝐂∩ B_1 ∖{0} ) of u_k that satisfies lim_ρ↘ 0ũ_k ( ρω) /ρ = ζ_k ( ω )( ω∈ L( 𝐂 ) ) where ζ_k ∈ C^2 ( L ( 𝐂 ) ) and where the convergence is in the C^2 ( L ( 𝐂 ) ) norm. Since the V_k are all dilations of V_0, it follows that ũ_k(x) = λ_k/λ_lũ_l ( λ_l/λ_k x ) for all k,l ≫ 1, and thus ζ_k ≡ 0 for all k ≫ 1. Undoing the dilations reveals that, for a fixed k suitably large, V_0 ∩ B_1/ λ_k∖{0} = G_0, 1/λ_k( 1/λ_kũ_k ( λ_k x ) ) and 1/λ_kũ_k ( λ_k ρω) /ρ 0. The uniqueness of the tangent cone now follows.As a corollary, we note that Theorem <ref> implies the flow may be written as a graph over the cone in certain space-time regions near the singularity. Let (μ_t)_t ∈ (a,b) be an integral n-Brakke flow in U ⊂^N with locally uniformly bounded areas and generalized mean curvature H ∈ L^∞ L^∞_loc(U × (a,b)). Let (V_t)_t ∈ (a,b] be the associated integral n-Brakke flow from Lemma <ref>. Assume (μ_t) has a tangent flow μ_ at (x_0,t_0) ∈ U × (a, b] given by a regular cone ^n. For any ϵ, C > 0, there exists r > 0 such that (V_t - x_0) ∩ A_√(t_0 - t)/C , r= G_√(t_0 - t)/C , r(u( · , t)) ∀ t ∈ [ t_0 - r^2 , t_0 ] for some u : Ω≑{ (x, t ) ∈× [ t_0 - r^2, t_0 ] | x ∈ A_√(t_0 - t)/C , r }→ T^⊥ with u _C^1, α (Ω)≤ϵ. For any δ > 0, the proof of Theorem <ref>, namely (<ref>), implies there exists r > 0 such that (V_t_0 - x_0) ∩ B_r∖{0} can be written as a graph overwith C^1, α-norm bounded by δ. Let ϵ, C > 0 be given. If δ = δ( ϵ , C ) ≪ 1 is sufficiently small and r is possibly made smaller, then it follows from Lemma <ref> that, for any 0 < ρ < r, V_t - x_0 is a graph overon the region A_ρ/2 , ρ× [ t_0 - C^2 ρ^2, t_0] with C^1,α-norm bounded by ϵ > 0. The statement now follows by taking a union over ρ∈ (0, r). § PINCHING HARDT-SIMON MINIMAL SURFACES Throughout this section, we restrict to the case where (M_t^n)_t ∈ [-T, 0) is a smooth mean curvature flow of properly embedded hypersurfaces in an open subset U ⊂^n+1. M = ⋃_t ∈ [-T, 0) M_t ×{ t }⊂^n+1× denotes its space-time track. Fix a regular cone ^n_0 ⊂^n+1 and let C = { A ·_0 : A ∈ O(n+1) } denote all rotations of the cone. We generally useto denote a rotation of the cone _0, that is ∈C.The goal of this section is to prove Theorem <ref>,which says that if M has bounded H and develops a singularity with tangent flow given by an area-minimizing quadratic cone, then there is a type II blow-up limit given by a Hardt-Simon minimal surface (see Subsection <ref> for the relevant definitions).§.§ Flows Near a Regular Cone We begin with general results for mean curvature flows locally close to a regular cone ∈C. The approach here was inspired by <cit.>. We say M is ϵ-close to ∈Cif M is a C^1, α-graph on ∩ A_ϵ, ϵ^-1× [ - ϵ^-2, - ϵ^2 ] with C^1, α norm at most ϵ. In other words, M_t∩ A_ϵ, ϵ^-1 = G_ϵ, ϵ^-1 ( u( ·, t) ) ∀ t ∈ [-ϵ^-2 , - ϵ^2 ] for some u : ∩ A_ϵ, ϵ^-1× [ - ϵ^-2, - ϵ^2 ] → T^⊥ with u _C^1, α(∩ A_ϵ, ϵ^-1× [ - ϵ^-2, - ϵ^2 ]) ≤ϵ. We say M is ϵ-close to ∈C at X = (x, t) if M- X is ϵ-close to . 
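To illustrate the definition with the simplest example: the static flow of a cone 𝐂' ∈C, with space-time track 𝐂' ×ℝ, is ϵ-close to 𝐂' at (0,0) for every ϵ∈ (0,1), with graph function u ≡ 0, and since parabolic dilations based at (0,0) leave this flow unchanged, the closeness persists at every scale λ > 0. For a smooth flow, by contrast, closeness to a cone cannot persist down to arbitrarily small scales (this is the content of Corollary <ref> below, which gives λ_*(X) > 0), and the interval [ λ_*(X), λ^*(X) ] defined next records the range of scales on which it does hold.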
Throughout the remainder of this section, ϵ is always assumed to be less than some small constant ϵ_0 = ϵ_0 ( n, _0) that depends only on the regular cone _0 and implicitly its dimension n. Suppose M is ϵ-close to ∈C at X for some ϵ≤ϵ_0. Define 1 ∈ [ λ_* ( X) , λ^*(X) ] ⊂ [ 0, ∞ ] to be the largest interval such that, for all λ∈ [ λ_*(X), λ^*(X) ], D_λ^-1 ( M - X ) is ϵ-close to some ' = '_λ∈C at ( 0, 0). Occasionally, we may write λ_*(X; M, ϵ) or λ^*(X; M, ϵ) to emphasize the dependence on M and ϵ. Observe that λ_*(X) and λ^*(X) are continuous in the basepoint X.We note the following consequence of pseudolocality for mean curvature flow in our setting. Define constants 0 < c__0≑1/2sup_x ∈ L ( _0 ) | A__0|(x) < C__0≑ 2 sup_x ∈ L ( _0 ) | A__0|(x) < ∞. There exists C = C(n, _0) ≫ 1 and ϵ_0 = ϵ_0(n, _0) ≪ 1 such that if M is ϵ-close to ∈C at X= (x,t) for some ϵ≤ϵ_0, then c__0/ρ≤sup_|y-x| = ρ |A_M_t|(y) ≤C__0/ρ∀(ϵ + C) λ_* (X) ≤ρ≤(ϵ^-1 - C )λ^* (X) . Let λ∈ [ λ_* (X) , λ^*(X) ] and consider M' ≑D_λ^-1 ( M - X ). By definition M' can be written as a C^1,α-graph over some ' ∈C on A_ϵ, ϵ^-1× [- ϵ^-2, - ϵ^2 ], and the C^1, α-norm is bounded by ϵ≤ϵ_0. Let δ = δ(n, _0) ≪ 1 denote some small constant to be determined which depends only on n and _0. If ϵ_0 ≪ 1 is sufficiently small (depending also on δ) and C ≫ 1 is sufficiently large (depending also on δ), then pseudolocality for mean curvature flow <cit.> implies that M' can be written as a Lipschitz graph over ' on A_ϵ + C, ϵ^-1 - C× [ - ϵ^-2, 0]. Namely, if M'_t denotes the time t time-slice of M', then M'_t ∩ A_ϵ + C, ϵ^-1 - C = G_ϵ+C, ϵ^-1 - C (u ( · , t) ) ∀ t ∈ [-ϵ^-2,0] for some u : ' ∩ A_ϵ + C, ϵ^-1 - C× [-ϵ^-2, 0] → T^⊥' with u(·, t) _C^0,1( ' ∩ A_ϵ+C, ϵ^-1 - C )≤ C' ( sup_x ∈' ∩ A_ϵ + C, ϵ^-1 - C|u(x,t)|/|x| + sup_xy ∈' ∩ A_ϵ + C, ϵ^-1 - C | u(x, t) - u(y,t)|/|x-y|)≤ C' δ ∀ t ∈ [-ϵ^-2, 0],(where C' is a universal constant). Interior estimates for mean curvature flow <cit.> then imply u satisfies C^2-estimates on ' ∩ A_ϵ + C + 1, ϵ^-1 - C - 1× [ - ϵ^-2 + 1, 0] of the form u _C^2 ( ' ∩ A_ϵ + C + 1, ϵ^-1 - C - 1× [ - ϵ^-2 + 1, 0] )≤ C”δ for some constant C” = C” ( n, _0 ). If δ≪ 1 is sufficiently small depending on n and _0, then this C^2-closeness of M_0' to the cone ' implies curvature estimates of the form c_/ρ≤sup_|y| = ρ| A_M'_0|(y) ≤C_/ρ∀ϵ+ C + 1≤ρ≤ϵ^-1 - C -1. The estimate (<ref>) now follows from undoing the dilation D_λ^-1 and translation and letting λ vary in [ λ_*(X), λ^*(X)]. Note that, since δ = δ(n, _0), the dependence of ϵ_0 and C on δ can instead be regarded as a dependence on n and _0. There exists ϵ_0 = ϵ_0(n, _0) such that the following holds for all ϵ≤ϵ_0: if (M_t^n)_t ∈ [-T, 0) is smooth in U ⊂^N and its space-time track M is ϵ-close to ∈C at X = (x,t) ∈ U × (-∞, 0), then λ_*(X) > 0. If not, (<ref>) implies |A_M_t|(x) = lim_ρ↘ 0sup_| y - x|= ρ |A_M_t|(y) = +∞, which contradicts the smoothness of the flow at (x,t). Next, we show that λ_*( x , t) satisfies a strict inequality in an annulus. If ϵ_0 = ϵ_0(n, _0) ≪ 1 is sufficiently small (depending on n, _0), then there exists a constant C = C(n, _0) such that λ_*( y, t) > λ_* ( x, t ) ∀ y ∈ A_λ_*(x,t) [ ϵ + C ] , λ^*(x, t) [ ϵ^-1 - C] (x). Fix C = C(n, _0) ≥ 1 as in Lemma <ref>. Assume ϵ_0 = ϵ_0 ( n, _0) ≪ 1 is small enough so that Lemma <ref> holds. Let ϵ≤ϵ_0. To simplify notation, denote r_0≑ ( ϵ + C) λ_*(x, t), R_0≑ ( ϵ^-1 - C) λ^*(x, t), r_1≑ ( ϵ + 100 C) λ_*(x, t), R_1≑ ( ϵ^-1 - 100C) λ^*(x, t). Let y ∈ A_r_1, R_1(x). 
Suppose for the sake of contradiction that λ_*(y, t) ≤λ_*(x, t). Then ( ϵ + C) λ_*(y, t) ≤ r_0 ≤ ( ϵ^-1 - C) λ^* (y, t) if ϵ_0(n, _0) ≪ 1. Lemma <ref> therefore gives curvature estimates of the form c__0/r_0≤sup_|y'-y| = r_0 |A_M_t|(y'). Observe also that the sphere ∂ B( y, r_0) is contained in the annulus A_r_0, R_0(x) based at x. Indeed, |y' -y| = r_0 implies |y' - x| ≥ | y - x| - |y' - y| > r_1 - r_0 ≥ ( 1 + 98 C )λ_*(x, t) > ( ϵ + C) λ_*(x, t) = r_0, and an analogous argument applies to show |y' - y|= r_0 implies |y' - x| < R_0. Therefore, Lemma <ref> based at x also applies and gives sup_|y' - y| = r_0 |A_M_t| (y') ≤sup_|y' - y| = r_0 C__0/ | y' - x|≤sup_|y' - y| = r_0 C__0/| y - x| - | y' - y| <C__0/ r_1 - r_0<C__0/4 r_0=c__0/r_0 , which contradicts estimate (<ref>). This completes the proof after relabelling 100 C to C. §.§ Finding Hardt-Simon Minimal SurfacesAssume additionally throughout this subsection that M has H ∈ L^∞ L^∞_loc(U × [-T, 0) ) and that 0∈ U. Take a sequence λ_i ↘ 0 and suppose M_i ≑D_λ_i^-1M⇀M__0where M__0 = _0 × (-∞, 0) is the flow of the stationary cone_0. Since M_i ⇀M__0, M_i is ϵ-close to _0 at (0, -1) for i ≫ 1 and lim_i →∞λ_*( 0, -1; M_i )= 0. Let R_0 = C(n, _0) be the constant from Lemma <ref> and obtain x_i ∈B_2 R_0 ( 0 ) such that λ_* (x_i, -1 ; M_i ) = min_x ∈B_2 R_0 ( 0 )λ_*(x, -1; M_i ).Denote X_i = (x_i, -1) and observe lim_i →∞λ_*(X_i; M_i)= lim_i →∞λ_*(0 , -1; M_i) = 0. Moreover, Lemma <ref> implies that the minimizer x_i must lie in B_λ_*(0, -1; M_i) [ ϵ + C ] ( 0), which implies lim_i →∞ x_i = 0.Define M'_i≑D_1/λ_* ( X_i; M_i )( M_i - X_i ).By construction, for all i, M_i' is ϵ-close to some '_i∈C at (0 , 0). In what follows, we deviate from the notation of subsection <ref> and write Huisken's monotonic quantity as Θ( (x,t), M, r )= ∫Φ_x,t( ·, t-r^2) d μ_t - r^2, Θ( (x,t), M )= lim_r ↘ 0∫Φ_x,t( ·, t-r^2) d μ_t - r^2 where M = ( μ_t)_t ∈ (a,b) is a Brakke flow in ^n+1 and r>0. Along some subsequence i →∞, M_i' ⇀M̂≑ ( μ̂_t )_t ∈ where M̂ is an eternal integral n-Brakke flow in ^n+1 such that *(entropy bound) for all r > 0 and (x,t) ∈^n+1× Θ((x,t), M̂, r ) ≤Θ ( (0, 0), M__0 ), *for all t ∈, μ̂_t = μ_V̂_t for some stationary integral n-varifold V̂_t, *H _L^∞ L^∞ ( ^n+1× )= 0, and *for all λ∈ [1, ∞), D_λ^-1M̂ is ϵ-close to some 𝐂̂_λ∈C. For any r > 0 and (x,t) ∈^n+1×, lim sup_i →∞Θ( (x, t), M'_i, r ) =lim sup_i →∞Θ( (x,t), D_λ_*(X_i)^-1 ( M_i - X_i ), r ) = lim sup_i →∞Θ( X_i + (x λ_*(X_i), t λ_*(X_i)^2), M_i , r λ_*(X_i))≤Θ ( (0, -1), M__0 , 0) by the limiting behavior of X_i, M_i, λ_*(X_i) and the upper semi-continuity of Θ. Huisken's monotonicity formula therefore implies that the M'_i satisfy local uniform area bounds. Compactness of Brakke flows with local uniform area bounds then allows us to extract a subsequential limit M̂ of the M'_i. Observe that, since lim_i →∞λ_*(X_i) = 0, the limiting integral n-Brakke flow M̂ is defined for all t ∈. Additionally, (<ref>) shows that for any r > 0 and (x,t) ∈^n+1× Θ( (x,t), M̂, r) = lim_i →∞Θ( (x,t), M_i', r) ≤Θ( (0, -1) , M__0, 0) = Θ ( (0, 0) , M__0, 0 ). The fact that M has mean curvature H ∈ L^∞ L^∞_loc( U × [-T, 0) ) implies that the mean curvature H_i of the rescalings M'_i has lim_i →∞ H_i _L^∞ L^∞ ( K × [a,b] ) = 0 for any K × [a,b] ⋐^n+1×. Thus, the limiting Brakke flow M̂ has H _L^∞ L^∞( ^n+1× ) =0. Let t_0 ∈ and consider the t_0-timeslice M_i'(t_0) of M_i'. Lemma <ref> applies to give that some subsequence M_i'(t_0) converges in the weak varifold sense to a stationary integral n-varifold V̂_t_0. 
In particular, the underlying measures converge μ_M_i'(t_0)⇀μ_V̂_t_0. On the other hand, the Brakke flow convergence M_i' ⇀M̂ implies μ_M'_i(t_0)⇀μ̂_t_0 and so μ̂_t_0 = μ_V̂_t_0. Let λ∈ [1, ∞). Observe that lim_i→∞λ_*( X_i; M_i) = 0 and λ^* ( X_*; M_i ) ≥ 1 for all i ≫ 1. Thus, λλ_* ( X_i; M_i ) ∈[ λ_*( X_i ; M_i) ,λ^* ( X_i ; M_i ) ] for all i ≫ 1. It follows that D_λ^-1M_i' = D_1/λλ_* (X_i ; M_i)( M_i - X_i ) is ϵ-close to some _λ, i' ∈C for all i ≫ 1. After passing to a subsequence and using that D_λ^-1M_i' ⇀D_λ^-1M̂, it follows that there exists a limiting cone _λ∈C such that D_λ^-1M̂ is ϵ-close to _λ. Since λ∈ [1, ∞) was arbitrary, this completes the proof. For p, q ∈ with p+q = n-1, define the n-dimensional quadratic minimal cone or generalized Simons cone ^p,q to be the hypersurface ^p,q≑{ (x,y) ∈^p+1×^q+1 : q|x|^2 = p |y|^2 }⊂^n+1. ^p,q is minimal for any p,q. Moreover, ^3,3, ^2,4 (equivalently ^4,2), and ^p,q for any p+q >6 are all area minimizing. In fact, these are the only quadratic minimal cones which are area minimizing (see e.g. <cit.> and references therein). Let ^p,q be a quadratic minimal cone which is area minimizing. Denote the two connected components of ^n+1∖^p,q by E_±. <cit.> showed that there exist smooth, minimal surfaces S_±⊂ E_± which are both asymptotic to ^p,q at infinity and whose dilations λ S_± (λ > 0) foliate E_±, respectively. By dilating, we can assume without loss of generality that both S_± are normalized to have (S_±, 0 ) = 1. For any λ∈, define S_λ≑{ λ S_+,if λ > 0, ^p,q,if λ = 0, -λ S_-,if λ < 0.. We refer to this family of minimal hypersurfaces as the Hardt-Simon foliation. Let _0 = ^p,q⊂^n+1 (p+q = n-1) be a generalized Simons cone, and let C = { A ·_0 : A ∈ O(n+1) } denote all rotations of the cone. Let 0 < ϵ_0 = ϵ_0(n, _0) ≪ 1 be sufficiently small so that Lemmas <ref>–<ref> hold, and let 0 < ϵ≤ϵ_0. If _0 = ^p,q is area-minimizing, then there exist λ_00, A_0 ∈ O(n+1), and a_0 ∈^n+1 such that the eternal Brakke flow M̂ obtained in Lemma <ref> is the static flow of the smooth Hardt-Simon minimal surface M =A_0 · S_λ_0 + a_0 for all t ∈. Additionally, the scale λ_00 and center a_0 ∈^n+1 are such that, for all 0 < ϵ' < ϵ and all ∈C, M̂ is not ϵ'-close to . It follows from the entropy bound Lemma <ref> (<ref>), compactness of Brakke flows with uniform local area bounds, and Huisken's monotonicity formula that there exists a limiting shrinker D_λ_i^-1M̂⇀M̂_-∞ along some sequence λ_i →∞. Since M̂ has H ≡ 0 (Lemma <ref> (<ref>)), the same argument as in the proof of Lemma <ref> shows that, for t < 0, M̂_-∞ must be the flow of a stationary cone(which is dilation invariant with respect to 0∈^n+1). Because D_λ_i^-1M̂ is ϵ-close to some _λ_i∈C for all i (Lemma <ref> (<ref>)), we can pass to a further subsequence (still denoted by i) so that the _λ_i converge to _-∞∈C and M̂_-∞ = M_ is ϵ-close to _-∞∈C. By the dilation-invariance of the cone , it follows thatcan be written globally as a C^1, α-graph over _-∞ with C^1,α-norm less than or equal to ϵ. Consider the Hardt-Simon foliation (<ref>) rotated so that S_0 = _-∞. If ϵ≪ 1 is sufficiently small depending on n, _0, then <cit.> implies that there exists a ∈^n+1, q ∈ SO(n+1), λ' ∈ such thatis a C^1, β-graph over a + q ( S_λ' ) and the graphing function u satisfies an improved decay estimate sup_x ∈ B_r(a) ∩ (a + q( S_λ' ))|u(x)| ≲ r^1 + β∀ r ≤ 1/2. Becauseis dilation-invariant (with respect to 0), this estimate is only possible if a = 0, λ' = 0, and u ≡ 0. In other words, = q ( S_0) = q ( _-∞ ) ∈C. 
Thus, for t < 0, M̂_-∞ = M_ and ∈C. Because D_λ_i^-1M̂⇀M̂_-∞ converges as Brakke flows, we can find t_0 < 0 and pass to a further subsequence so that the time slices V̂_i ≑λ_i^-1V̂_t_0 λ_i^2⇀ converge as integral n-varifolds. For i ≫ 1, <cit.> applies and gives that, for all i ≫ 1, there exist a_i∈^n+1, q_i ∈ SO(n+1), λ'_i ∈ such that V̂_i ∩ B_1/2 is a C^1,β-graph over a_i +q_i(S_λ'_i). Moreover, after renormalizing the Hardt-Simon foliation (<ref>) so that S_0 =, lim_i →∞( |a_i| + |q_i - Id | + |λ_i'| ) = 0. λ_i'0 for all i ≫ 1. Consider some index i with λ_i' = 0. Then the monotonicity formula (<ref>) for stationary varifolds implies that θ_V̂_i - a_i ( 0) = lim_r ↘ 0μ_V̂_i - a_i ( B_r) /ω_n r^n≥θ_q_i ( S_0)( 0) = θ__0 ( 0 ) = Θ( M__0, ( 0, 0) ). Theorem <ref> then gives that θ__0 ( 0 ) ≤θ_V̂_i - a_i ( 0) = θ_V̂_i ( a_i ) ≤Θ_D_λ_i^-1M̂ ( a_i , t_0) ( Theorem <ref> ) ≤Θ_D_λ_i^-1M̂ (( a_i , t_0), r) ( Huisken's monotonicity formula (<ref>),r >0 ) = Θ_M̂ ( ( λ_i a_i, λ_i^2 t_0) , λ_i r ) ≤θ__0 ( 0) ( Lemma <ref>(<ref>)). Thus, we have equality throughout and Θ_M̂ ( ( λ_i a_i, λ_i^2 t_0) , r) = θ__0 ( 0) ∀ r > 0. It follows from Huisken's monotonicity formula (<ref>) that M̂ - ( λ_i a_i, λ_i^2 t_0 ) is a shrinker (for t<0) and must therefore be equal to the limiting shrinker M̂_-∞, that is M̂-( λ_i a_i, λ_i^2 t_0 ) = M̂_-∞ = M_fort < 0. Write M̂ = ( μ̂_t )_t ∈, and note μ̂_t = μ_+λ_i a_i for all t < λ_i^2 t_0 by (<ref>). Moreover, Brakke's inequality and the fact that M̂ has H ≡ 0 (Lemma <ref> (<ref>)) implies μ̂_t_1≥μ̂_t_2 for all t_1 ≤ t_2. In particular, μ̂_t_2⊂μ̂_t_1 for t_1 ≤ t_2. Let t ∈ [λ_i^2 t_0 , -ϵ^2] and recall μ̂_t is represented by a stationary integral n-varifold V̂_t with H = H_V̂_t = 0 (Lemma <ref> (<ref>)). Then V̂_t = μ̂_t ⊂μ̂_λ_i^2 t_0 =+ λ_i a_i. On the other hand, Solomon-White's strong maximum principle <cit.> applied to the smooth manifold ( + λ_i a_i ) ∖{λ_i a_i }⊂^n+1∖{λ_i a_i } implies that either ( + λ_i a_i) ∖{λ_i a_i }⊂V̂_t or ( + λ_i a_i ) ∩V̂_t ∖{λ_i a_i } = ∅. In particular, either + λ_i a_i = V̂_t orV̂_t ⊂{λ_i a_i } since V̂_t ⊂+ λ_i a_i and V̂_t is a closed set. However, the second case is impossible since M̂ is ϵ-close to some _λ=1∈C by Lemma <ref> (<ref>). Thus, V̂_t = + λ_i a_i for all t ∈ [λ_i^2 t_0, -ϵ^2 ]. By the entropy bounds Lemma <ref> (<ref>) and the constancy theorem <cit.>, it follows that in fact V̂_t =+ λ_i a_i for all t ∈ [λ_i^2 t_0, -ϵ^2 ]. Thus, in combination with (<ref>), we have shown M̂ - ( λ_i a_i , 0) = M_fort ∈ [λ_i^2 t_0, -ϵ^2] for any index i such that λ_i' = 0. Suppose for the sake of contradiction that there exist arbitrarily large indices i with λ_i' = 0. Then there exists i_0 such that λ_i_0' = 0 and λ_i_0^2 t_0 ≤ - ϵ^-2-1. Hence, equality (<ref>) holds for t ∈ [-ϵ^-2-1, - ϵ^2] and D_λ_*( X_j; M_j)^-1 ( M_j - X_j - ( λ_*( X_j) λ_i_0 a_i_0 , 0 ) ) = M_j' - ( λ_i_0 a_i_0 , 0)⇀ M̂ - ( λ_i_0 a_i_0 , 0) (asj →∞) = M_ ( fort ∈ [-ϵ^-2-1, -ϵ^2] ). In particular, White's local regularity theorem <cit.> implies M_j' - ( λ_i_0 a_i_0 , 0) converges to M_ in C^∞_loc( ^n+1∖{0}× [- ϵ^-2-1, -ϵ^2]) as j →∞. Using smoothness of the flows M_j', it then follows that, for some large enough j, D_λ^-1 ( M_j' - ( λ_i_0 a_i_0 , 0) ) is ϵ-close toat (0, 0) for all λ in a neighborhood of 1. Equivalently, D_λ^-1 ( M_j - X_j - ( λ_*(X_j) λ_i_0 a_i_0, 0)) is ϵ-close to ∈C at (0, 0) for all λ in a neighborhood of λ_*(X_j). This however contradicts the definitions of X_j and λ_*, and this contradiction completes the proof of the claim. 
With Claim <ref> in hand, V̂_i ∩ B_1/2 is a C^1,β-graph over a smooth minimal surface a_i + q_i (S_λ'_i) (λ_i'0) for all i ≫ 1. Interior regularity for stationary varifolds then implies that V̂_i ∩ B_1/4 is smooth for all i ≫ 1, and thus so is μ̂_t_0 λ_i^2∩ B_λ_i /4. Let t ≤ - ϵ^2 and R > ϵ^2. Let i ≫ 1 be sufficiently large such that μ̂_t_0 λ_i^2∩ B_λ_i/4 is smooth, t_0 λ_i^2 < t, and λ_i/4 > R. Since μ̂_t ≤μ̂_t_0 λ_i^2, it follows that V̂_t ∩ B_R = μ̂_t ∩ B_R ⊂μ̂_t_0 λ_i^2∩ B_R. Because μ̂_t_0 λ_i^2∩ B_R is smooth, the strong maximum principle <cit.> applies and implies that either V̂_t ∩ B_R = μ̂_t_0 λ_i^2∩ B_R orV̂_t ∩ B_R = ∅. However, the second case is impossible, since it would imply (together with μ̂_-ϵ^2≤μ̂_t) that μ̂_-ϵ^2∩ B_R = ∅, which contradicts that M̂ is ϵ-close to some _λ = 1∈C (Lemma <ref> (<ref>)). Hence, V̂_t ∩ B_R = μ̂_t_0 λ_i^2∩ B_R. By letting i,R, and t vary, it follows that there exists a smooth manifold M such that M = μ̂_t = V̂_t for allt ≤ -ϵ^2. Additionally, M is minimal (H_M ≡ 0) and λ_i^-1 M ⇀∈C ( as varifolds) since D_λ_i^-1M̂⇀M_ as Brakke flows. The proof of <cit.> holds in this setting and implies that M is necessarily a smooth Hardt-Simon minimal surface, that is M = q(S_λ) + a for some q ∈ SO(n+1), λ 0 and a ∈^n+1. It then follows from the constancy theorem <cit.> and the entropy bounds for M̂ (Lemma <ref> (<ref>)) that V̂_t = M = q( S_λ) + a for all t ≤ - ϵ^2. In summary, M̂ is the stationary flow of a smooth Hardt-Simon minimal surface M = q( S_λ) + a for all t ≤ -ϵ^2. Using the strong maximum principle <cit.> and the fact that μ̂_t_1≥μ̂_t_2 for t_1 ≤ t_2, it can be shown that there exists T ∈ [-ϵ^2, +∞] such that the flow M̂ = ( μ_V̂_t )_t ∈ has V̂_t = { M fort < T, ∅fort > T. . Since M̂ is a limit of smooth flows M_i', M̂ is unit-regular <cit.> and therefore T= +∞. Thus, M̂ = ( μ_M )_t ∈ is the stationary flow of the smooth Hardt-Simon minimal surface M = q(S_λ) + a. In particular, White's local regularity theorem <cit.> implies M'_i converges to M̂ in C^∞_loc( ^n+1× ) as i →∞. Finally, suppose for the sake of contradiction that there exists 0 < ϵ' < ϵ and ∈C such that M̂ is ϵ'-close to . Since M_i' converges to M̂ in C^∞_loc( ^n+1×), one can take ϵ”∈ ( ϵ', ϵ) and deduce that M_i' is ϵ”-close tofor i ≫ 1 sufficiently large. Since M_i' is smooth and ϵ” < ϵ, D_λ^-1M_i' = D_λ^-1λ_*(X_i; M_i)^-1 ( M_i - X_i) must then be ϵ-close tofor all λ in a neighborhood of 1. This, however, contradicts the definition of λ_*(X_i; M_i). Theorem <ref> and the results of this section complete the proof of Theorem <ref>. § VARIFOLDS WITH GENERALIZED MEAN CURVATURE H ∈ L^P_LOC In this appendix, we collect some standard results for varifolds with generalized mean curvature H ∈ L^p_loc that are cited throughout the article. For example, the compactness statement Lemma <ref> and monotonicity formula Proposition <ref> are given here. Let 2 ≤ n < N, U ⊂^N be open, and p ∈ (1, ∞]. The collection of integer rectifiable n-varifolds in U with locally uniformly bounded area and locally uniformly bounded generalized mean curvature H ∈ L^p_loc(U) is weakly compact. In other words, if V_i is a sequence of integral n-varifolds in U such that * the V_i have locally uniformly bounded area, i.e. sup_i μ_V_i ( K ) < ∞ (∀ K ⋐ U), and * the V_i have generalized mean curvature H_i ∈ L^p_loc(U) with uniform L^p_loc(U) bounds, i.e. sup_i H_i _L^p(K, d μ_V_i )< ∞ (∀ K ⋐ U), then there exists a subsequence V_i_j and an integral n-varifold V_∞ in U such that V_i_j⇀ V_∞ weakly as varifolds. 
Moreover, V_∞ has μ_V_∞ (K ) ≤lim sup_i μ_V_i (K) (∀ K ⋐ U ) and generalized mean curvature H_∞∈ L^p_loc(U) with H_∞_L^p ( K , d μ_V_∞ ) ≤lim sup_i H_i _L^p ( K , d μ_V_i )(∀ K ⋐ U ). Allard's compactness theorem <cit.> gives compactness under the weaker assumption where (2) is replaced by the bound sup_i | δ V_i (X) | ≤ C_KX _C^0(K)∀ X ∈ C^1_c ( K , ^N ), K ⋐ U. Therefore, there exists a subsequential limit V_i_j⇀ V_∞ such that V_∞ is an integral n-varifold with locally finite area andthe weaker property that | δ V_∞ ( X) | ≤ C_KX _C^0∀ X ∈ C^1_c ( K, ^N ), K ⋐ U. However, convergence as varifolds V_i_j⇀ V_∞ implies that for any K ⊂ U compact and X ∈ C^1_c(K, ^N) | δ V_∞ ( X) | = lim_j →∞ | δ V_i_j ( X) | = lim_j →∞| ∫ H_i_j· X d μ_V_i_j| ≤ { (lim sup_i →∞ H_i _L^p_loc ( K, d μ_V_i ) )( ∫ |X|^p/p-1 d μ_V_∞)^p-1/pifp ∈ (1, ∞),(lim sup_i →∞ H_i _L^∞_loc ( K, d μ_V_i ) )∫ |X| d μ_V_∞ifp = ∞. . The Radon-Nikodym theorem then implies that δ V_∞ = - H_∞∈ L^p_loc ( U, dμ_V_∞ ). (<ref>) then also implies H_∞_L^p(K, d μ_V_∞ )≤lim sup_i H_i _L^p ( K , d μ_V_i ) . Let 2 ≤ n < N, U ⊂^N be open, and p ∈ (1, ∞]. Let (V_i)_i ∈ℕ∪{∞} be a collection of integer rectifiable n-varifolds with generalized mean curvature H_i ∈ L^p_loc(U). Assume sup_iμ_V_i (K) + sup_iH_i _L^p(K, dμ_V_i )< ∞ (∀ K ⋐ U). If ∫ f d μ_V_i∫ f d μ_V_∞∀ f ∈ C^∞_c ( ^N)with f ≥ 0, then V_i ⇀ V_∞ as varifolds as i →∞. We first show d μ_V_i⇀ dμ_V_∞. Let f ∈ C^0_c ( U ). There exists δ > 0 and a compact set K ⊂ U such that f ⊂ B_δ ( f) ⊂ K ⊂ U where B_δ (f) denotes the radius δ neighborhood of f. Denote C_K ≑sup_i μ_V_i ( K ) < ∞. Let ϵ > 0. Split f = f_+ - f_- into positive and negative parts. Convolving f_+, f_- with a suitable mollifiers, we can find f̃_+, f̃_- ∈ C^∞_c ( U ) such that f̃_±≥ 0, f̃_±⊂ K,and f̃_±- f_±_C^0 < 1/6C_Kϵ . Then | ∫ f dμ_V_i - ∫ f dμ_V_∞| ≤ | ∫ f_+ dμ_V_i - ∫ f_+ dμ_V_∞| + | ∫ f_- dμ_V_i - ∫ f_- dμ_V_∞| ≤ | ∫ f_+ - f̃_+d μ_V_i| + | ∫f̃_+ dμ_V_i - ∫f̃_+ dμ_V_∞| + | ∫ f_+ - f̃_+ d μ_V_∞| + | ∫ f_- - f̃_-d μ_V_i| + | ∫f̃_- dμ_V_i - ∫f̃_- dμ_V_∞| + | ∫ f_- - f̃_- d μ_V_∞| ≤4f - f̃_C^0 C_K + | ∫f̃_+ dμ_V_i - ∫f̃_+ dμ_V_∞| + | ∫f̃_- dμ_V_i - ∫f̃_- dμ_V_∞| < 2/3ϵ + | ∫f̃_+ dμ_V_i - ∫f̃_+ dμ_V_∞| + | ∫f̃_- dμ_V_i - ∫f̃_- dμ_V_∞| < ϵ ( ∀ i ≫ 1) where the last inequality follows by assumption (<ref>). This completes the proof that d μ_V_i⇀ d μ_V_∞. Next, we prove the varifold convergence V_i ⇀ V_∞. Suppose for the sake of contradiction that V_i ⇀̸V_∞. Then there exists f ∈ C^0_c ( G(n, U ) ), ϵ > 0, and a subsequence V_i_j such that | ∫ f dV_i_j - ∫ f d V_∞ | > ϵ∀ j. Since the V_i_j have locally uniformly bounded areas and locally uniformly bounded generalized mean curvatures H_i_j∈ L^p_loc(U), Lemma <ref> implies there exists an integer rectifiable n-varifold V_∞' and a subsequence still denoted V_i_j such that V_i_j⇀ V_∞'. In particular, dμ_V_∞ = lim_j →∞ dμ_V_i_j = dμ_V_∞'. Because V_∞, V_∞' are integer rectifiable n-varifolds with dμ_V_∞ = d μ_V_∞', V_∞ = V_∞'. We then have a contradiction that V_i_j⇀̸V_∞ = V_∞' and V_i_j⇀ V_∞' = V_∞. This contradiction proves that in fact V_i ⇀ V_∞. Let 2 ≤ n < N, U ⊂^N open, and p ∈ ( n, ∞]. Let V be an integer rectifiable n-varifold in U with generalized mean curvature H ∈ L^p_loc(U). Let x_0 ∈ U and B_R(x_0)⊂ U. 
Then for any 0 < σ < ρ < R (e^ H / 1 - n/pρ^1 - n/pμ_V ( B_ρ( x_0) ) /ρ^n + e^ H / 1 - n/pρ^1 - n/p - 1 ) - (e^ H / 1 - n/pσ^1 - n/pμ_V ( B_σ( x_0) ) /σ^n + e^ H / 1 - n/pσ^1 - n/p - 1 ) ≥∫_B_ρ (x_0) ∖ B_σ (x_0)| ( x - x_0)^⊥ |^2 / |x-x_0|^n+2 d μ_V ≥ 0 where H=H _L^p( B_R(x_0) , dμ_V ). In particular, e^ H / 1 - n/pρ^1 - n/pμ_V ( B_ρ( x_0) ) /ρ^n + e^ H / 1 - n/pρ^1 - n/p - 1 is non-decreasing in ρ and θ_V (x_0 ) ≑lim_ρ↘ 0 μ_V ( B_ρ ( x_0) ) /ρ^n exists. The following proof follows <cit.>. We provide the proof for p ∈ (n, ∞) and let the reader make the necessary adjustments to the proof in the case of p = ∞. For any 0 < ρ < R/ ( 1+ ϵ ), d/d ρ ( ρ^-n I (ρ ) ) = ρ^-nd/dρJ(ρ) - ρ^-n∫ρ^-1 ( x - x_0) · H ϕ_ϵ ( | x - x_0| / ρ ) d μ_V(x) where ϕ_ϵ : [0, ∞) →_≥ 0 is a smooth, non-increasing function with ϕ_ϵ (s) ≡ 1 for s ∈ [0, 1] and ϕ_ϵ⊂ [0, 1 + ϵ], x_0 ∈ Uand B_R (x_0)⊂ U, I(ρ) ≑∫ϕ_ϵ ( | x - x_0| / ρ ) d μ_V(x) ≥ 0,and J(ρ) ≑∫ | ( x- x_0)^⊥ |^2 / | x - x_0 |^2ϕ_ϵ ( | x - x_0|/ ρ ) d μ_V(x) ≥ 0 . The integral on the right-hand side of (<ref>) can be estimated by | ρ^-n∫ x - x_0/ρ· H ϕ_ϵ(| x - x_0 |/ρ) dμ_V (x) | ≤( 1 + ϵ ) ρ^-n H _L^p (B_(1 + ϵ ) ρ (x_0 ) , dμ_V ) ( I (ρ) )^1 - 1/p ≤ ( 1 + ϵ ) ρ^-n/p H _L^p (B_R(x_0 ) , dμ_V ) ( ρ^-n I (ρ) )^1 - 1/p( 0 < ρ < R/1 + ϵ) ≤( 1 + ϵ ) ρ^-n/p H _L^p (B_R(x_0 ) , dμ_V ) (1 +ρ^-n I (ρ) ) where in the last step we used a ≥ 0a^1 - 1/p≤ 1 + a. Inserting this estimate into (<ref>) yields d/d ρ ( 1 + ρ^-n I (ρ ) ) ≥ρ^-nd/dρJ(ρ) - ( 1 + ϵ ) ρ^-n/p H _L^p (B_R(x_0 ) , dμ_V ) (1 +ρ^-n I (ρ) ) ∀ 0 < ρ < R / ( 1 + ϵ ) . To simplify the notation, we write H=H _L^p( B_R(x_0), d μ_V ) for the remainder of the proof. Multiplying (<ref>) by the integrating factor F_ϵ(ρ) ≑ e^∫_0^ρ ( 1 + ϵ)H ρ̃^- n/pd ρ̃ = e^ ( 1 + ϵ )H / 1 - n/pρ^1 - n/p≥ 1 and using the fact that d/d ρ J ≥ 0 yields d/ d ρ( F_ϵ( ρ ) ρ^-n I(ρ) + F_ϵ( ρ ) - 1 ) ≥ F_ϵ( ρ ) ρ^-nd/ d ρ J ≥ρ^-nd/ d ρ J. Integrating from σ to ρ then gives (F_ϵ ( ρ )I_ϵ ( ρ ) /ρ^n + F_ϵ ( ρ ) - 1 ) - (F_ϵ ( σ )I_ϵ ( σ ) /σ^n + F_ϵ ( σ ) - 1 ) ≥∫_σ^ρρ̃^-n d/ d ρ̃ J d ρ̃. Using that d/d ρ(ϕ_ϵ ( | x - x_0 |/ ρ ) ) is supported on the region ρ≤ | x - x_0 | ≤ ( 1 + ϵ ) ρ, it follows that ∫_σ^ρρ̃^-n d/ d ρ̃ J d ρ̃ = ∫_σ^ρ∫| ( x - x_0)^⊥|^2/ | x - x_0|^2 ρ̃^-nd/ d ρ̃( ϕ_ϵ ( |x -x_0| / ρ ) ) dμ_V d ρ̃≥∫_σ^ρ∫| ( x - x_0)^⊥|^2/ | x - x_0|^n+2d/ d ρ̃( ϕ_ϵ ( |x -x_0| / ρ ) ) dμ_V d ρ̃= ∫| ( x - x_0)^⊥|^2/ | x - x_0|^n+2( ϕ_ϵ ( |x -x_0| / ρ ) - ϕ_ϵ ( |x -x_0| / σ )) dμ_V ≥∫_B_ρ(x_0) ∖ B_(1 + ϵ) σ ( x_0) | ( x - x_0)^⊥|^2/ | x - x_0|^n+2dμ_V. Letting ϵ↘ 0 finally yields (e^ H / 1 - n/pρ^1 - n/pμ_V ( B_ρ( x_0) ) /ρ^n + e^ H / 1 - n/pρ^1 - n/p - 1 ) - (e^ H / 1 - n/pσ^1 - n/pμ_V ( B_σ( x_0) ) /σ^n + e^ H / 1 - n/pσ^1 - n/p - 1 ) ≥∫_B_ρ (x_0) ∖ B_σ (x_0)| ( x - x_0)^⊥ |^2 / |x-x_0|^n+2 d μ_V ≥ 0 . In particular, we have monotonicity of ρ↦ e^ H / 1 - n/pρ^1 - n/pμ_V ( B_ρ ( x_0) ) /ρ^n + e^ H / 1 - n/pρ^1 - n/p - 1. Hence, its limit as ρ↘ 0 exists and thus θ_V(x_0) is well-defined. alpha
http://arxiv.org/abs/2311.16262v1
{ "authors": [ "Maxwell Stolarski" ], "categories": [ "math.DG", "math.AP" ], "primary_category": "math.DG", "published": "20231127190825", "title": "On the Structure of Singularities of Weak Mean Curvature Flows with Mean Curvature Bounds" }
Yiming Jiao^1,2 ([email protected]; ORCID 0009-0004-4832-0895), Ying D. Liu^1,2 ([email protected]; ORCID 0000-0002-3483-5909), Hao Ran^1,2 ([email protected]; ORCID 0000-0002-8234-6480), and Wenshuai Cheng^1,2 ([email protected]; ORCID 0009-0005-3941-1514)
^1State Key Laboratory of Space Weather, National Space Science Center, Chinese Academy of Sciences, Beijing, China
^2University of Chinese Academy of Sciences, Beijing, China
Corresponding author: Ying D. Liu ([email protected])
We identify more than ten steady sub-Alfvénic solar wind intervals from the measurements of the Parker Solar Probe (PSP) from encounter 8 to encounter 14. An analysis of these sub-Alfvénic intervals reveals similar properties and similar origins. In situ measurements show that these intervals feature a decreased radial Alfvén Mach number resulting from a reduced density and a relatively low velocity, and that switchbacks are suppressed in these intervals. Magnetic source tracing indicates that these sub-Alfvénic streams generally originate from the boundaries inside coronal holes, or narrow/small regions of open magnetic fields. Such properties and origins suggest that these streams are mostly low Mach-number boundary layers (LMBLs), which is a special component of the pristine solar wind proposed by Liu et al. We find that the LMBL wind, the fast wind from deep inside coronal holes, and the slow streamer wind constitute three typical components of the young solar wind near the Sun. In these sub-Alfvénic intervals, the Alfvén radius varies between 15 and 25 solar radii, in contrast with a typical 12 radii for the Alfvén radius of the super-Alfvénic wind. These results give a self-consistent picture interpreting the PSP measurements in the vicinity of the Sun.
§ INTRODUCTION A sub-Alfvénic region is a magnetically dominated region around the Sun where the solar wind speed is slower than the local Alfvén speed. This region is inside the range of the solar corona and is where the coronal heating and solar wind acceleration occur <cit.>. However, observations of the sub-Alfvénic wind had been limited until the Parker Solar Probe (PSP) mission, which aims to explore the solar wind source region <cit.>, sampled the sub-Alfvénic wind for the first time during its 8th solar encounter <cit.>. Studies have been performed on this first sampling as well as a few subsequent intervals of the sub-Alfvénic wind to determine the properties of the coronal plasma <cit.>. Now with a broader data range available, extensive sub-Alfvénic intervals can be identified from the PSP measurements from encounter 8 to the more recent encounter 14. These observations provide the opportunity to analyze the overall properties of the sub-Alfvénic wind in comparison with the super-Alfvénic wind. As reported by <cit.>, the first sustained sampling of the sub-Alfvénic wind lasted for 5 hours at a distance of about 20 solar radii from the Sun. <cit.> suggest that the first sub-Alfvénic interval is associated with a pseudo-streamer, but the density is usually low.
Further studies showed that the sub-Alfvénic wind detected by PSP is characterized by weaker magnetic field reversals <cit.> in comparison with the super-Alfvénic solar wind <cit.>. <cit.> interpreted the nature of the sub-Alfvénic intervals as a special type of wind originating from the peripheral areas inside coronal holes, termed as a low Mach-number boundary layer (LMBL). An LMBL is characterized by an enhanced Alfvén radius, which explains the detection of the sub-Alfvénic wind at a relatively large distance. <cit.> also suggest that switchbacks are naturally suppressed in an LMBL by the low Alfvén Mach number due to their nature as the Alfvénic turbulence. While this theory is consistent with the observations, it is necessary to examine whether the newly observed sub-Alfvénic solar wind fits this framework.The components of the young solar wind beyond LMBLs are also of great interest. According to previous studies of the well-evolved solar wind, the solar wind at large distances (e.g., at 1 AU) is divided into fast and slow wind respectively. The fast wind is relatively homogeneous and is long believed to originate from inside coronal holes <cit.>. The origins and properties of the slow solar wind, however, are much more diverse.Suggested sources of slow wind include active regions <cit.>, helmet streamers and pseudo-streamers <cit.>, and coronal hole boundaries <cit.>. The exact nature of the slow solar wind is still under debate due to various complexities. For example, some of the slow wind also shows characteristics that are typical of fast wind. While Alfvénic fluctuations are usually associated with fast solar wind, some slow streams can be highly Alfvénic despite their low velocities <cit.>. An LMBL also falls into this category, i.e, the Alfvénic slow wind <cit.>. The components of the nascent solar wind near the Sun have not been determined yet. PSP provides in situ measurements at distances where the solar wind has not yet undergone significant evolution or stream-stream interactions. With these measurements, the nascent solar wind can be classified more clearly by their properties and their origins can be inferred.Another important parameter to be determined is the Alfvén radius r_A, which in the classical theory of <cit.> is the heliocentric distance where the solar wind become from sub-Alfvénic to super-Alfvénic. Earlier studies before the PSP mission have estimated r_A to vary from a few to tens of solar radii <cit.>. Recent works using in situ measurements from PSP have constrained r_A to 10 to 20 solar radii <cit.>. As PSP dives below the Alfvén critical point more times and for longer durations, we have more measurements to constrain the Alfvén radius. However, since the plasma properties may change significantly on either side of the Alfvén critical point, previous methods may lead to unrealistically large values for the Alfvén radius for the measurements of the sub-Alfvénic wind. A new method is needed to give reliable r_A values, especially for the measurements below the Alfvén critical point.With the extensive observations from both sides of the Alfvén transition that cover more of the solar longitudes, it is possible to present a more complete and accurate picture of the r_A distribution. In this paper, we identify steady sub-Alfvénic intervals from PSP measurements during its 8th to 14th solar encounters.We analyze the properties of these intervals, trace their solar origins, and categorize them as the same type of solar wind that meets the criteria of an LMBL. 
As representatives of the LMBL wind, these sub-Alfvénic streams manifest a different structure in terms of the velocity and density than the commonly known fast wind and the slow streamer wind. Our result suggests three typical components of the young solar wind where the different streams have not yet mixed. We also obtain a complete picture of the r_A distribution in the ecliptic plane.The contrast of r_A between the current sub-Alfvénic wind and the wind that still remains super-Alfvénic shows the enhancement of r_A in LMBLs. § ANALYSIS AND RESULTSAs an example of the measurements, Figure <ref> shows the data from encounter 12. The magnetic field data are obtained from the measurements of the PSP/FIELDS fluxgate magnetometer instrument <cit.>.The solar wind velocity is measured by the PSP/Solar Probe ANalyzer-Ions (SPAN-I) instruments <cit.>.The electron density is from quasi-thermal noise (QTN) spectroscopy <cit.> and is used as a proxy for the plasma density throughout the study.The electron density n_e, and magnetic field B are normalized to values at 1 AU by a 1/r^2 scaling (with r being the heliocentric distance) to eliminate the effect of distance variations.All parameters are set to a cadence of 1 minute. Three sub-Alfvénic intervals are identified at encounter 12 as shown by the shaded areas (see the time periods in Table <ref>). We note a lack of QTN data in the second interval.We supplement the data with the proton density from SPAN-I <cit.> after applying a low-pass filter to minimize measurement fluctuations. The reliability of n_p is verified by comparing it with n_e from QTN (Figure <ref>(b)). A heliospheric current sheet (HCS) crossing was observed between the second and the third intervals. The sub-Alfvénic intervals mark where the radial Alfvén Mach number M_A keeps lower than 1 for a few hours persistently. Here M_A is the ratio of the radial solar wind speed to the local Alfvén speed V_R/V_A, and V_A is computed as V_A=B/√(μρ), where μ is the vacuum magnetic permeability and ρ is the plasma density.The Alfvén Mach number compares the kinetic and magnetic energy densities and indicates the entering of the magnetically dominated corona when lower than 1. PSP observed a transition from a relatively fast, tenuous wind before the sub-Alfvénic intervals to a slow, dense wind after these intervals. This situation is similar to the cases shown in Figure 1 and Figure 2 of <cit.> and suggests that the sub-Alfvénic streams were sampled when the PSP's magnetic footpoint was inside a transition layer. In these streams, an increased Alfvén speed is caused mainly by a low plasma density since the magnetic field strength shows no significant change. The solar wind speed V_R also stays low or moderate inside the intervals. These together lead to the decrease in M_A. Such properties of these sub-Alfvénic intervals satisfy the criteria of an LMBL proposed by <cit.>. Since the LMBL streams have a low Alfvén Mach number compared to other types of wind, their M_A would be the first to decrease below 1 and such wind would be recognized as sub-Alfvénic. Therefore, the first observed sub-Alfvénic wind is likely to be an LMBL stream. To further validate this, the solar source of these sub-Alfvénic streams will be traced to determine their origins. The enhanced magnetic control in these sub-Alfvénic streams also leads to an extended solar corona, or an increased Alfvén radius r_A (Figure <ref>(a)). 
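For orientation, the interval-selection step described above reduces to a short computation. The following is a minimal sketch of it (our own illustration, not the exact pipeline of this work), assuming 1-minute-cadence NumPy arrays of the field magnitude B (in nT), the electron density n_e (in cm^-3) and the radial speed V_R (in km/s) that have already been resampled onto a common time grid; as in the text, the mass density is approximated by m_p n_e.
```python
import numpy as np

MU0 = 4.0e-7 * np.pi         # vacuum permeability [H m^-1]
M_P = 1.6726e-27             # proton mass [kg]

def alfven_mach_number(B_nT, n_e_cm3, V_R_kms):
    """Radial Alfven Mach number M_A = V_R / V_A, with V_A = B / sqrt(mu0 * rho)."""
    rho = M_P * n_e_cm3 * 1.0e6                    # mass density [kg m^-3]
    V_A = (B_nT * 1.0e-9) / np.sqrt(MU0 * rho)     # Alfven speed [m s^-1]
    return (V_R_kms * 1.0e3) / V_A

def sub_alfvenic_intervals(M_A, cadence_min=1.0, min_hours=3.0):
    """Index ranges where M_A stays below 1 for at least `min_hours`."""
    below = np.asarray(M_A) < 1.0
    edges = np.flatnonzero(np.diff(below.astype(int)))
    bounds = np.concatenate(([0], edges + 1, [below.size]))
    intervals = []
    for i0, i1 in zip(bounds[:-1], bounds[1:]):
        if below[i0] and (i1 - i0) * cadence_min >= 60.0 * min_hours:
            intervals.append((i0, i1))
    return intervals
```
A persistence threshold of a few hours, mirroring the selection criterion adopted below for the interval list, removes short excursions of M_A below unity caused by fluctuations or measurement noise.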
The Alfvén radius r_A is computed following <cit.> when M_A is close to or exceeds 1.This method assumes that the solar wind velocity does not change much between the observational site and the Alfvén critical point.Although the approximation is valid in the super-Alfvénic wind (seefor details), the solar wind can accelerate a lot well below the Alfvén surface before it reaches the critical point.Such a situation may result in a significant error in the above method of r_A calculation when M_A is much less than 1. In this case, we calculate r_A from the expressionL_p+L_m=Ω r_A^2 that relates the solar wind angular momentum to r_A <cit.> when M_A<0.8. The threshold of 0.8 is chosen based on our experience although arbitrary. Because there are complications associated with the measurements of the transverse velocity (seefor detailed discussions), we drop the particle term L_p and use the field term only, i.e., Ω r_A^2≃ -(rB_RB_T/μρ V_R)=L_m, where B_R and B_T are the radial and azimuthal components of the magnetic field and Ω is the solar rotation rate. Therefore, this can be considered as a lower limit in general, but note that in fast wind the particle term can be negative <cit.>. Thus, the overall expression for r_A is r_A≃r/M_A,M_A≥0.8(-rB_RB_T/μρ V_R Ω)^1/2,M_A<0.8This combination of the calculation methods is supposed to give a more reliable r_A for both above and under the Alfvén surface.As shown in Figure <ref>(a), r_A increased to over 20 radii during the sub-Alfvénic intervals, which indicates an extended corona and facilitates PSP's crossings of the Alfvén surface.In previous studies, suppressed switchbacks were found in the sub-Alfvénic solar wind <cit.> and in LMBLs <cit.>. A similar phenomenon is also observed in the sub-Alfvénic intervals during encounter 12. As shown in Figure <ref>(d), the magnetic field is almost radial with switchbacks indicated by the spikes in B_R, and the occasional changes of signs denote deflections over 90^∘.However, these large-angle deflections can hardly be seen in the sub-Alfvénic intervals. To further address the question, we use the parameters θ and δ V_R/V_A computed using the methods of <cit.>. The deflection angle θ is the magnetic field's deviation angle from the radial or anti-radial direction depending on the field polarity, and its sign indicates the deflection direction (Figure <ref>(f)).Inside the sub-Alfvénic intervals, θ shows a smaller absolute value than the neighboring super-Alfvénic wind. We obtain δ V_R by subtracting a low-pass filtered radial velocity V_Rf (Figure <ref>(c)) from V_R. The value of δ V_R/V_A increases with the magnetic field deflection angle by the relation δ V_R/V_A=1-cosθ in Alfvénic fluctuations <cit.>.It is also reduced in the sub-Alfvénic intervals. The association of the suppressed switchbacks to a decreased M_A in these sub-Alfvénic LMBLs, as well as in super-Alfvénic LMBLs (see Figure 1 offor example), is supportive of the switchbacks' nature as Alfvénic turbulence.Figure <ref> shows the magnetic field source tracing results for the sub-Alfvénic intervals at encounter 12. The magnetic field lines from the spacecraft are ballistically mapped following the Parker spiral field to the source surface at which the magnetic field is set to be radial. 
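This ballistic step amounts to shifting the spacecraft's Carrington longitude along an ideal Parker spiral, assuming a constant radial speed between the spacecraft and the source surface. A schematic implementation (again our own sketch; the source-surface height is a tunable input, with 2.5 R_S as a common default) reads:
```python
import numpy as np

R_SUN = 6.957e8                                   # solar radius [m]
OMEGA_SUN = 2.0 * np.pi / (25.38 * 86400.0)       # sidereal solar rotation rate [rad s^-1]

def ballistic_footpoint_longitude(lon_sc_deg, r_sc_Rs, V_sw_kms, r_ss_Rs=2.5):
    """Carrington longitude of the Parker-spiral footpoint at the source surface.

    lon_sc_deg : spacecraft Carrington longitude [deg]
    r_sc_Rs    : spacecraft heliocentric distance [solar radii]
    V_sw_kms   : radial solar wind speed, taken constant along the spiral [km s^-1]
    r_ss_Rs    : source-surface radius [solar radii]
    """
    travel_time = (r_sc_Rs - r_ss_Rs) * R_SUN / (V_sw_kms * 1.0e3)    # [s]
    dlon_deg = np.degrees(OMEGA_SUN * travel_time)                    # longitude shift [deg]
    return (lon_sc_deg + dlon_deg) % 360.0
```
Because the longitude shift scales as (r_sc - r_ss)/V_sw, the slow LMBL-type wind maps to footpoints displaced noticeably farther in Carrington longitude than fast wind observed at the same spacecraft position; a sidereal Carrington rotation period of 25.38 days is assumed above.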
Then a potential field source surface (PFSS) model is used to trace the coronal magnetic fields from the source surface to the photospheric sources <cit.>.The construction is based on the Air Force Data Assimilative Photospheric Flux Transport (ADAPT) magnetograms provided by the Global Oscillation Network Group (GONG). The ADAPT-GONG synoptic map is updated every two hours. The open field areas given by the PFSS model are compared with EUV imaging observations of coronal holes. We use Solar Dynamics Observatory (SDO)/Atmospheric Imaging Assembly (AIA) 193 Å synoptic maps of the corresponding Carrington rotations for such a comparison.The time of the ADAPT-GONG magnetogram and the height of the source surface are adjusted to best fit PSP and SDO/AIA observations. The uncertainties of this magnetic mapping were discussed <cit.>, and the final position error on the photosphere is usually a few degrees according to our experience <cit.>.As can be seen from Figure <ref>, the magnetic mapping for the first two sub-Alfvénic intervals shows connections first to the edge of a low-latitude coronal hole and then to the edge of the equatorial extension of a northern polar coronal hole. After a crossing of the HCS, the third interval is connected to a small low-latitude coronal hole with a negative polarity. The crossing of the HCS is consistent with the B_R polarity reversal between the second and third sub-Alfvénic intervals in Figure <ref>. The coronal hole boundaries where the magnetic flux tube expands rapidly have been one of the suggested sources of the slow solar wind <cit.>. The expansion rate of the magnetic flux tube from the photosphere to the source surface is evaluated by the expansion factor, which is inversely correlated to the wind speed <cit.>. The expansion factor tends to be large in areas just within the coronal hole boundaries, and this would result in a slow solar wind compared to areas deeper inside a coronal hole. <cit.> reasoned that the low wind speed and the tenuous nature of the coronal hole-originated wind together lead to the decreased radial Alfvén Mach number of the LMBL wind coming from coronal hole boundaries. In addition, narrow strip-shaped coronal holes, such as the one the second interval is connected to, and very small coronal holes, such as the one the third interval is connected to, may also be associated with a large expansion factor of the open field lines. Such coronal holes could also be the source regions of LMBLs. The mapping results of the origins of the sub-Alfvénic intervals at encounter 12 satisfy what is expected for an LMBL wind. We repeat the same procedure of identification and examination of the sub-Alfvénic intervals as well as source tracing for encounters 8 to 14.Table <ref> lists all the sub-Alfvénic intervals at these encounters. We select only robust sub-Alfvénic intervals that last over 3 hours.Intervals that are too short may be affected by uncertain factors such as temporal solar wind fluctuations and PSP measurement errors. The longest duration is about 22 hours, which indicates that PSP was steadily below the Alfvén surface.Due to the absence of QTN n_e measurements for encounter 11, we replace it with the proton density from SPAN-I. However, considering the inaccuracy that the replacement may cause, we list the cases of encounter 11 here only for display purpose and remove them in the hereafter statistical analysis. The sub-Alfvénic intervals in Table <ref> show similarities in terms of their sources and properties. 
Source tracing indicates that these intervals are generally connected to the peripheries inside coronal holes, or narrow/small regions of open field lines. Combined with in situ observations, this result confirms their common nature as LMBLs. In general, the wind speed of the sub-Alfvénic intervals is low (from 100 to 300 km/s), which can be explained by the fast expansion of magnetic flux tubes from their sources. The density is also generally small, which is consistent with their origin from within coronal holes. The magnetic field deflection angle is significantly reduced (≲ 15^∘), so instead of being literally “switchbacks" small deflections are observed in the sub-Alfvénic intervals. The velocity enhancement in units of the Alfvén speed is also reduced accordingly. The Alfvén radius is calculated from Equation (<ref>) and shows an average value of around 20 R_S, which explains the first detection of the sub-Alfvénic wind even at a distance of about 20 R_S from the Sun <cit.>.Note that the Alfvén radius of interval No.11 is abnormally large, so it cannot be considered as a typical case. More general results are given by other intervals.The last two rows of Table <ref> contrast the assembled sub-Alfvénic wind with the super-Alfvénic wind.The analyzed sub-Alfvénic wind data set is a combination of the intervals listed in Table <ref> except the intervals from encounter 11 (No.5 and No.6). The super-Alfvénic wind data is from full observations from encounters 8 to 14 with encounter 11 excluded, and we also exclude transient structures (such as coronal mass ejections).Such a data selection should represent a wide range of super-Alfvénic wind properties. The contrast shows the universal characteristics, such as the low density, low speed, suppressed switchbacks, and enhanced Alfvén radius, for the sub-Alfvénic wind compared with the super-Alfvénic wind. This distinction also differentiates LMBL streams from the ordinary solar wind.Using the data selected above, we show in Figure <ref> the solar wind distribution as a function of the normalized plasma density and radial velocity for the sub- and super-Alfvénic solar wind. The correspondence between the solar wind speed and density can be used as a signature to examine the components of the nascent solar wind in the vicinity of the Sun, where the different solar wind streams have not yet interacted considerably. The solar wind is generally divided according to velocity into fast and slow wind.The fast wind is believed to come from inside coronal holes featuring open magnetic field lines and reduced density <cit.>. Such a type of solar wind can be seen in Figure <ref> as a gathering area of the super-Alfvénic wind that is clearly faster than the rest, with velocities concentrating around 450 km/s (the upper PDF peak in the right panel), and with corresponding low densities concentrating around 6 cm^-3 (the left peak in the top panel). The slow wind, however, shows more diversity in properties and its origin is still under debate. <cit.> suggests that the slow solar wind has two kinds of origins, one from the rapidly diverging open magnetic field rooted at coronal hole boundaries and the other from the closed magnetic loops of helmet streamers. This difference in origins is suggested to cause the observed variance in multiple parameters of the slow solar wind <cit.>. In general, the plasma escaping the closed field lines of streamers tends to be denser, whereas it is more tenuous when from boundaries within coronal holes. 
This difference is also seen in Figure <ref>.The clustered data points of the super-Alfvénic wind corresponding to the right peak in the top panel and the lower peak in the right panel may indicate the dense, slow plasma ejected from streamers. In contrast, for the sub-Alfvénic wind, the normalized density is substantially decreased while the wind speed also remains low (corresponding to the single, relatively broad peak in the top and right panels). This is again consistent with the previously discussed LMBL flows with rapidly diverging open magnetic fields from either coronal hole boundaries or small/narrow regions of open fields. Based on the analysis above, we conclude that the usual super-Alfvénic wind fits the dichotomy of the fast wind from the coronal hole interiors and the slow wind from streamers (two peaks in the black curves). Some of the slow, dense wind may also come from active regions, although it could be few in the present measurements. The LMBL wind is a third component of the solar wind, a tenuous yet slow wind from coronal hole edges.These three components constitute the young solar wind.Figure <ref> shows the Alfvén radius distribution as a function of the Carrington longitude of PSP, and the distance from the origin indicates the value r_A at that time. As the PSP orbit is confined to a few degrees in latitude near the ecliptic, it can be regarded as the configuration of the Alfvén surface in the ecliptic plane. PSP has circled the Sun from encounters 8 to 14 to give a relatively complete picture. The height of the Alfvénic radius is shown as varying from 10 to 30 solar radii around the Sun, with the parts associated with the sub-Alfvénic wind generally protruding to larger distances.This variation of r_A is consistent with the “rugged surface" picture <cit.>. PSP's crossings of the protruding parts of the Alfvén surface always correspond to its entry into the LMBL streams. As a result, PSP sampled solar wind of similar properties and similar origins, as exhibited by the present sub-Alfvénic intervals. Note that for some longitudes we may see both sub- and super-Alfvénic wind at the same longitudes.These are observations at different encounters.Figure <ref> shows the probability density function (PDF) of r_A using data from the 6 encounters. The PDF corresponding to the super-Alfvénic wind peaks at about 12 solar radii from the center of the Sun. This value is consistent with the estimate of <cit.>. As for the sub-Alfvénic or LMBL wind, the values of r_A estimated from measurements under the Alfvén critical point are more variant, and the PDF peaks at 15 to 25 solar radii. These results are also in line with the r_A distribution reported by <cit.>, which is extrapolated from PSP measurements by assuming the radial trends of the solar wind parameters. The r_A of the super-Alfvénic wind larger than 15 solar radii corresponds to the protruding parts of the Alfvén surface in Figure <ref> that were not crossed by PSP, and the wind remained super-Alfvénic at that moment. These parts are also likely to be associated with LMBL winds as protrusions of the Alfvén surface are usually simultaneous with drops in M_A, which is an important signature of LMBLs. § CONCLUSIONSPSP has observed extensive periods of the sub-Alfvénic solar wind since encounter 8. 
We have identified all the steady sub-Alfvénic intervals from the measurements at encounter 8 to encounter 14 and analyzed their origins and properties.Together with the super-Alfvénic wind, we have examined key issues including the nature of the sub-Alfvénic wind, the components of the nascent solar wind, and the distribution of the Alfvén radius. This is the first comprehensive study that includes all the main sub-Alfvénic intervals observed by PSP so far and their comparison with the super-Alfvénic wind. The main conclusions are summarized as follows.1.The observed sub-Alfvénic streams show similarities in their properties and origins, and mostly fall into the category of the special solar wind termed as LMBLs by <cit.>. These sub-Alfvénic streams are characterized by a low or moderate speed and a low density, which result in a decrease in the Alfvén Mach number, an enhancement of the Alfvén radius, and suppression of switchbacks in such winds. Also, these streams generally originate from the boundaries inside coronal holes or narrow/small regions of open magnetic fields according to the source tracing. Such properties and origins are consistent with those of an LMBL flow proposed by <cit.>.LMBLs tend to be the first wind to become sub-Alfvénic as PSP approaches the Sun due to their low Alfvén Mach number. Until now, observations of the sub-Alfvénic wind are mostly limited to LMBL streams. As PSP descends to lower perihelia, the spacecraft would sample more frequently the sub-Alfvénic wind, in particular the LMBL wind. However, note that not necessarily all the sub-Alfvénic wind is an LMBL flow as PSP moves even closer to the Sun. For example, flows from deeper inside a coronal hole may also have a possibility that their Alfvén surface is crossed given the typical Alfvén radius of about 12 R_S in comparison with PSP's final orbit.2.The young solar wind is shown to be composed of three typical components, i.e., streamer flows, wind from coronal hole interiors, and wind from coronal hole boundaries (i.e., LMBL flows).These three components are revealed by the distribution of the young solar wind with respect to the radial velocity and normalized density.The streamer wind is dense and slow as the plasma is trapped by the closed magnetic fields. In contrast, the wind from coronal hole interiors with open magnetic field lines is tenuous and fast.However, the wind speed becomes slower as the magnetic field lines expand faster near the coronal hole boundaries, which gives rise to the LMBL wind (i.e., tenuous and relatively slow). The streamer wind and LMBL wind together constitute the slow solar wind. Solar wind originating from active regions could also be dense and slow, although it is not a major part of present measurements and is not discussed in detail in this paper.3.PSP measurements have provided a complete picture for the Alfvén radius distribution. We obtain a distribution of the Alfvén radius around the Sun for both sub-Alfvénic and super-Alfvénic wind in the ecliptic plane. The Alfvén radius of the current super-Alfvénic wind concentrates around 12 solar radii from the center of the Sun. In contrast, the Alfvén radius of the sub-Alfvénic LMBL intervals is enhanced to 15 to 25 solar radii due to their reduced Alfvén Mach numbers. These LMBLs are associated with protruding parts of the Alfvénic transition, which facilitates the PSP crossings. 
As a result, PSP has observed steady sub-Alfvénic intervals with similar origins and similar properties of the LMBL wind.
The research was supported by NSFC under grant 42274201, by the Strategic Priority Research Program of the Chinese Academy of Sciences, Grant No. XDB 0560202, by the National Key R&D Program of China No. 2021YFA0718600, and by the Specialized Research Fund for State Key Laboratories of China. We thank Dr. Huidong Hu for his valuable suggestions. We acknowledge the NASA Parker Solar Probe mission and the SWEAP and FIELDS teams for the use of data. The PFSS extrapolation is performed using the pfsspy Python package <cit.>. The data used for PFSS modeling are courtesy of GONG and SDO/AIA.
Table: Sub-Alfvénic Intervals from PSP Measurements at Encounters 8-14.
No. | Enc. | Start (UT) | Duration (hr) | r (R_S) | M_A | V_R (km s^-1) | n_e· r^2 (cm^-3) | |θ| (^∘) | δ V_R/V_A | r_A (R_S) | LMBL
1 | 8 | 2021-04-28 09:33 | 5.2 | 19.1 | 0.81±0.07 | 318±47 | 2.7±1.2 | 15±8 | 0.04±0.03 | 22.9±1.9 | Y
2 | 9 | 2021-08-09 21:26 | 3.0 | 16.1 | 0.57±0.09 | 159±21 | 6.8±1.5 | 12±5 | 0.03±0.03 | 20.8±2.4 | Y
3 | 10 | 2021-11-21 21:17 | 3.7 | 15.6 | 0.42±0.10 | 110±16 | 7.6±1.8 | 8±4 | 0.07±0.09 | 20.5±2.3 | Y
4 | 10 | 2021-11-22 02:38 | 8.0 | 18.1 | 0.58±0.19 | 131±17 | 8.0±2.6 | 14±8 | 0.03±0.03 | 25.1±3.9 | Y
5 | 11^* | 2022-02-25 12:46 | 3.3 | 13.3 | 0.76±0.12^* | 319±60 | 4.8±1.4^* | 12±9 | 0.03±0.03^* | 15.3±0.8^* | Y
6 | 11^* | 2022-02-25 18:35 | 4.9 | 13.7 | 0.68±0.16^* | 315±42 | 3.8±1.0^* | 13±8 | 0.04±0.04^* | 17.4±6.1^* | Y
7 | 12 | 2022-05-31 23:46 | 11.2 | 16.5 | 0.76±0.09 | 338±41 | 4.4±1.4 | 15±10 | 0.05±0.06 | 18.8±2.1 | Y
8 | 12 | 2022-06-01 16:38 | 16.6 | 13.6 | 0.52±0.17 | 318±73 | 5.2±2.9 | 12±8 | 0.04±0.08 | 18.6±6.4 | Y
9 | 12 | 2022-06-02 22:14 | 7.0 | 19.8 | 0.85±0.08 | 198±26 | 10.0±2.4 | 11±9 | 0.03±0.08 | 21.3±3.5 | Y
10 | 13 | 2022-09-06 08:40 | 8.7 | 13.9 | 0.46±0.14 | 256±41 | 4.1±1.2 | 10±7 | 0.04±0.08 | 22.0±9.3 | Y
11^a | 13 | 2022-09-06 17:40 | 19.3 | 17.8 | 0.35±0.26 | 168±57 | 2.4±1.9 | 10±8 | 0.03±0.03 | 52.0±27.0 | Y
12 | 14 | 2022-12-10 20:02 | 22.0 | 14.2 | 0.58±0.14 | 284±89 | 7.6±4.9 | 14±9 | 0.05±0.07 | 22.8±3.8 | Y
sub-Alfvénic | | | | | | 246±95 | 5.8±3.8 | 12±9 | 0.04±0.06 | 21.0±5.7^a |
super-Alfvénic | | | | | | 333±111 | 13.1±6.6 | 33±23 | 0.16±0.29 | 13.9±4.1 |
Columns (1-5) correspond to the number, encounter number, start time, duration, and average PSP distance of the intervals, respectively. Columns (6-11) give the mean value and standard deviation of the Alfvén Mach number, radial speed, normalized density, absolute deflection angle, radial speed enhancement in units of the Alfvén speed, and Alfvén radius, respectively. Column (12) indicates whether the interval is an LMBL wind or not (Y/N). The * superscript marks where n_e is replaced by the proton density from SPAN-I as the QTN density is not available. ^a Here r_A of interval No. 11 is discarded for it is an outlier and causes a large deviation in statistical characteristics.
http://arxiv.org/abs/2311.15622v3
{ "authors": [ "Yiming Jiao", "Ying D. Liu", "Hao Ran", "Wenshuai Cheng" ], "categories": [ "astro-ph.SR", "physics.space-ph" ], "primary_category": "astro-ph.SR", "published": "20231127083737", "title": "Properties of Steady Sub-Alfvénic Solar Wind in Comparison with Super-Alfvénic Wind from Measurements of Parker Solar Probe" }
Graded Jet Geometry
Jan Vysoký^1
^1Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University in Prague, Břehová 7, 115 19 Prague 1, Czech Republic, [email protected]
January 14, 2024
Jet manifolds and vector bundles allow one to employ tools of differential geometry to study differential equations, for example those arising as equations of motion in physics. They are necessary for a geometrical formulation of Lagrangian mechanics and the calculus of variations. It is thus only natural to require their generalization in the geometry of -graded manifolds and vector bundles. Our aim is to construct the k-th order jet bundle ^k_ of an arbitrary -graded vector bundle over an arbitrary -graded manifold . We do so by directly constructing its sheaf of sections, which allows one to quickly prove all its usual properties. It turns out that it is convenient to start with the construction of the graded vector bundle of k-th order (linear) differential operators ^k_ on . In the process, we discuss (principal) symbol maps and a subclass of differential operators whose symbols correspond to completely symmetric k-vector fields, thus finding a graded version of the Atiyah Lie algebroid. Necessary rudiments of geometry of -graded vector bundles over -graded manifolds are recalled.
Keywords: Jet manifolds, graded manifolds, graded vector bundles, jet vector bundles, linear differential operators.
§ INTRODUCTION Introduced by Ehresmann <cit.>, jet geometry forms one of the pillars of modern differential geometry and mathematical physics. It has proved to be an invaluable tool for understanding the geometry of differential equations and their solutions, and consequently of the equations of classical and quantum physics. In particular, it is vital for a geometrical description of Lagrangian mechanics and the calculus of variations. It is of course not possible to provide a full list of references, hence let us at least point out the standard monographs <cit.> for the geometry of jet bundles. For applications in theoretical physics, see e.g. <cit.> and <cit.>. Linear differential operators on sections of vector bundles can be naturally described in terms of jet bundles <cit.>. There is their fully algebraic description, tracing back to none other than Grothendieck <cit.>. This viewpoint is fully utilized in a more conceptual algebraic approach to jet bundles, using their modules of sections <cit.>, which should be considered as a main source of inspiration for the methods used in this paper. The need for a geometrical description of supersymmetric field theories and their variational calculus led naturally to the notion of jet supermanifolds and jet supervector bundles. See <cit.> and a different approach in <cit.>. Note that in the above list of references, the term "graded" always refers to the _2-grading. In recent years, there has been a reinvigoration of interest in -graded manifolds with local coordinates of arbitrary degrees <cit.>, following on the non-negatively graded (or just -graded) manifolds <cit.> and <cit.>. It is thus natural to consider a generalization of jet geometry to the (general) -graded setting. There is a notion of graded jet bundles for -manifolds in the literature <cit.>, based on the explicit utilization of the Batchelor isomorphism for supermanifolds. In this paper, we aim to provide a construction of jet bundles for general -graded manifolds, following the definitions and formalism of our previous work <cit.>.
Let us henceforth use the term "graded" exclusively for -graded (manifolds, vector bundles). However, it shall be noted that all of the constructions in this paper apply immediately to the _2-graded case (Berezin-Leites supergeometry) and perhaps to some more exotic gradings as well, e.g. _2^n. Let us sketch the main philosophy of the construction. Graded vector bundles over a graded manifold can be fully described in terms of sheaves of graded ^∞_-modules, where ^∞_ is the structure sheaf of . Linear differential operators on graded vector bundles can then be readily defined using the graded version of the iterative algebraic definition of Grothendieck, see also <cit.>. We prove that they form locally freely and finitely generated sheaves of graded ^∞_-modules, hence new examples of graded vector bundles. This gave us the confidence that one can construct sheaves of sections of graded jet bundles directly, starting from a sheaf of sections of any given graded vector bundle. The procedure uses a graded analogue of jet modules in commutative algebra, see <cit.>, described also in <cit.>. However, the resulting presheaf of graded ^∞_-modules is way too big (it has non-trivial sections "vanishing at all points") and it does not form a sheaf. We address those technical details by introducing the notion of geometric presheaves of ^∞_-modules, inspired by <cit.>. One can then shape the problematic presheaf into a locally freely and finitely generated sheaf of graded ^∞_-modules, hence a sheaf of sections of a graded vector bundle, without losing any of the functorial properties. In particular, it retains its natural relation to linear differential operators. Since every graded vector bundle (as a sheaf) can be used to construct the unique (up to a diffeomorphism) total space graded manifold, we obtain another tool for the construction of new graded manifolds. Let us also note that recently, there appeared a general construction of jet bundles in the context of noncommutative geometry <cit.>, defining k-th order jet modules starting from any given A-module, where A is any unital associative algebra. The paper is organized as follows: In Section <ref>, we start by recalling the necessary notions from graded geometry, namely the ones of a graded manifold and a graded vector bundle. We generalize the notion of a local operator on a graded module of sections of a graded vector bundle and show that those naturally form a sheaf of graded modules. We argue that endomorphisms of graded modules of sections (of any degree) form an example of local operators. In Section <ref>, we define linear differential operators (of a given order) on graded vector bundles. We prove that they are local and it turns out that they can be viewed as sections of a sheaf of graded submodules of the sheaf of local operators. The fact that it is in fact a sheaf of sections of a graded vector bundle is proved in Section <ref>. We do so explicitly, by finding its local frame using a local frame of the graded vector bundle in question, together with a graded local chart of the underlying manifold. This also relates the algebraic definition of differential operators to their more self-explanatory explicit local form. In Section <ref>, we first recall the notion of completely symmetric (differential) forms on a graded manifold, and the "internal hom" in the category of graded vector bundles over a given graded manifold. We do so in order to define a graded analogue of a (principal) symbol map.
We prove that it fits into a canonical short exact sequence of graded vector bundles. The symbol map can be then used in Section <ref> to define a certain subclass of differential operators (their respective symbol map is determined by a single completely symmetric k-vector field). It turns out that this subset is in a certain sense closed under a graded commutator and the corresponding algebra of symbol maps is governed by a graded version of a Schouten-Nijenhuis bracket. In particular, for first-order differential operators, we obtain a graded Lie algebra on the graded module of sections of a so called Atiyah graded vector bundle, making it into a non-trivial example of a graded Lie algebroid. In Section <ref>, we discuss a subclass of graded modules (over graded algebras of functions on graded manifolds) which contain no sections that in some sense “vanish at all points”. More generally, we define a subclass of “geometric” presheaves of graded ^∞_-modules, motivated by the similar notion introduced in <cit.>. The name is justified by proving that sheaves of graded vector bundles are always geometric. We prove that there is a canonical “geometrization” functor into the subcategory of geometrical presheaves of graded ^∞_-modules. Section <ref> is central to this paper. The sheaf of (k-th order) jets is constructed from a sheaf of sections of a given graded vector bundle. We prove that this sheaf is locally freely and finitely generated of a constant graded rank, hence a sheaf of sections of a graded vector bundle. We declare it to be the k-th order jet bundle of a graded vector bundle. By choosing a trivial “line bundle” over a given graded manifold, we obtain its k-th order jet manifold. Finally, Section <ref> provides a justification of the previous section. In other words, we prove that the graded vector bundle we have constructed has all the usual properties of jet bundles in ordinary differential geometry. We show that there is a canonical jet prolongation map which becomes an isomorphism with the original graded vector bundle map for k = 0. We argue that k-th order differential operators on a graded vector bundlecan be equivalently described as graded vector bundle maps from the k-th jet bundle to . It is shown that jet bundles give raise to a canonical inverse system of sheaves, allowing one to define the sheaf ^∞_. We argue that fibers of graded jet bundles coincide with graded jet spaces defined at each point of the underlying smooth manifold in a usual way. We conclude by proving that in the ordinary (i.e. trivially graded) case, we obtain the usual definition of jet bundles. § ACKNOWLEDGMENTSFirst and foremost, I would like to express gratitude to my family for their patience and support. I would also like to thank Branislav Jurčo and Rudolf Šmolka for helpful discussions. The author is grateful for a financial support from MŠMT under grant no. RVO 14000.§ GRADED VECTOR BUNDLES, LOCAL OPERATORS In this section, we will briefly recall elementary definitions required in this paper. The notion of local operators on graded vector bundles is introduced and we prove that they actually form a sheaf of graded modules. Throughout this paper,will be some fixed but otherwise general -graded manifold. Its structure sheaf of smooth functions will be denoted as ^∞_ and the underlying smooth manifold as M. A sequence (n_j)_j ∈ of non-negative integers is called a graded dimension of , ifis locally modeled on the graded domain Û^(n_j)_j. 
In plain English, the number of its local coordinates of degree j is precisely n_j. We assume that its total dimension n := ∑_j ∈ n_j is finite. Note that then (M) = n_0. For a detailed definition and related discussions see 3.3 of <cit.>. Since M is a topological space, we can write (M) for its set of open subsets. For any m ∈ M, _m(M) denotes the set of open subsets of M containing m.For each U ∈(M), elements of ^∞_(U) all called (smooth) functions onover U. To each f ∈^∞_(U), there is an associated smooth function f: U →, called the body of f. For each m ∈ U, one defines f(m) := f(m). Note that whenever |f| ≠ 0, one has f = 0 and thus f(m) = 0 for all m ∈ U.By a graded vector bundleover , we mean a locally freely and finitely generated sheaf Γ_ of graded ^∞_-modules on M of a constant graded rank. Γ_ is called the sheaf of sections of .A sheaf of graded ^∞_-modules Γ_ assigns to each U ∈(M) a graded ^∞_(U)-module Γ_(U) and the action of ^∞_(U) has to be compatible with restrictions. By “locally freely and finitely generated” we mean the existence of a local trivialization { (U_α,ϕ_α) }_α∈ I. More precisely, { U_α}_α∈ I is an open cover of M and each ϕ_α is a ^∞_|_U_α-linear sheaf isomorphism of Γ_|_U_α and a freely and finitely generated graded sheaf of ^∞_|_U_α-modules ^∞_|_U_α[K].To every U ∈(U_α), it assigns a graded ^∞_(U)-module (with the obvious action)(^∞_|_U_α[K])(U) := ^∞_(U) ⊗_ K,where K ∈ is some fixed (for all α∈ I) finite-dimensional graded vector space called the typical fiber of . Its graded dimension (K) =: (r_j)_j ∈ is called the graded rank ofand denoted as (). The number r := ∑_j ∈ r_j is called the total rank ofand denoted as (). Equivalently, for each m ∈ M, there exists an open subset U ∈_m(M) and a collection {Φ_λ}_λ=1^r of elements of Γ_(U), such that * |Φ_λ| = |ϑ_λ|, where {ϑ_λ}_λ=1^r is some fixed total basis of the typical fiber K. * For every V ∈(U), every ψ∈Γ_(V) can be written as ψ = ψ^λ·Φ_λ|_V for unique functions ψ^λ∈^∞_(V) such that |ψ^λ| = |ψ| - |Φ_λ|. Every such collection {Φ_λ}_λ=1^r is called a local frame forover U. Note that we usually denote the action of f ∈^∞_(U) on ψ∈Γ_(U) simply as f ·ψ. There exists a more geometrical approach to graded vector bundles. In fact, one can always construct a total space graded manifold(unique up to a graded diffeomorphism) and a surjective submersion π: →. The sheaf of sections Γ_ can be then fully recovered from ^∞_. See 5.5 in <cit.> for details.The most vital property of graded vector bundles, necessary in everything what follows in this paper, is the ability to extend (in some sense) local sections to global sections. It is proved by using smooth bump functions, similarly to the ordinary case. Unless explicitly stated,will always denote some fixed but arbitrary graded vector bundle overof a graded rank (r_j)_j ∈.[The Extension Lemma] Suppose ψ∈Γ_(U) for some U ∈(M). Then for any V ∈(U) such that V⊆ U, there exists a section ψ' ∈Γ_(M), such that ψ'|_V = ψ|_V.This property will allow us to restrict certain linear maps of sections of graded vector bundles from larger open subsets to smaller open subsets. We only have to limit ourselves to those which depend “locally” on sections. We thus arrive to the following important notion.Let U ∈(M). A graded -linear map F: Γ_(U) →Γ_(U) of any given degree |F| is called a local operator onover U, if for any V ∈(U) and ψ∈Γ_(V) satisfying ψ|_V = 0, one has F(ψ)|_V = 0. Local operators onover U form a graded vector space which we denote as _(U). 
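Before moving on, let us record a simple example to keep in mind (a side remark of ours, not needed in what follows). Take the trivial case Γ_ = ^∞_, so that sections are just functions, and let X: ^∞_(U) →^∞_(U) be any graded -linear map of degree |X| satisfying the graded Leibniz rule X(f · g) = X(f) · g + (-1)^|X||f| f · X(g), that is a graded vector field defined over U. Then X is a local operator: if f ∈^∞_(U) satisfies f|_V = 0, choose for each m ∈ V a smooth bump function η∈^∞_(U) with (η) ⊆ V and η = 1 on some V_(m)∈_m(V), so that f = (1 - η) · f. Since |1 - η| = 0, the Leibniz rule gives X(f)|_V_(m) = X(1-η)|_V_(m)· f|_V_(m) + (1-η)|_V_(m)· X(f)|_V_(m) = 0, and since the sets V_(m) cover V, we conclude that X(f)|_V = 0. Such an X is local without being ^∞_(U)-linear (unless X = 0), in contrast with the ^∞_(U)-linear maps discussed below.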
As promised, local operators can be restricted in a unique way. In fact, these restrictions allow us to obtain a sheaf of graded ^∞_-modules of all local operators on .Let U ∈(M) and F ∈_(U). Then for any V ∈(U), there is the unique local operator F|_V∈_(V) onover V, such that F(ψ)|_V = F|_V(ψ|_V),for all ψ∈Γ_(U). There is a canonical graded ^∞_(U)-module structure on _(U). With respect to aforementioned restrictions, the assignment U ↦_(U) makes _ into a sheaf of graded ^∞_-modules, called the sheaf of local operators on .Let ψ∈Γ_(V). For each m ∈ V, find V_(m)∈_m(V) satisfying V_(m)⊆ Vand use the extension lemma to find ψ'_(m)∈Γ_(U) satisfying ψ|_V_(m) = ψ'_(m)|_V_(m). Since { V_(m)}_m ∈ V is an open cover of V and Γ_ is a sheaf, the formulaF|_V(ψ)|_V_(m) := F(ψ'_(m))|_V_(m),imposed for every m ∈ V defines a unique section F|_V(ψ) of Γ_(V), if the elements of Γ_(V_(m)) on the right-hand side agree on the overlaps. This follows from the fact that F is -linear and local. Similarly, one can prove that the definition of F|_V depends neither on the open cover { V_(m)}_m ∈ V or extensions ψ'_(m) used. F|_V is then easily shown to be a graded -linear map satisfying (<ref>). Let us argue that F|_V is local. Suppose that there is W ∈(V) and ψ∈Γ_(V) satisfying ψ|_W = 0. In the construction of the open cover { V_(m)}_m ∈ V, we can always assume that for m ∈ W, one has V_(m)⊆ W. But then ψ'_(m)|_V_(m) = 0 and thus F(ψ'_(m))|_V_(m) = 0 for every m ∈ W, since F is local. Since { V_(m)}_m ∈ W is an open cover of W, we find F|_V(ψ)|_W = 0. Hence F|_V is local. To see that F|_V is a unique local operator satisfying (<ref>), suppose that G ∈_(V) is another such operator. Let ψ∈Γ_(V) be arbitrary and { V_(m)}_m ∈ V and ψ'_(m)∈Γ_(U) be constructed as above. Since G is local and satisfies (<ref>), one can for each m ∈ V writeG(ψ)|_V_(m) = G(ψ_(m)|_V)|_V_(m) = F(ψ_(m))|_V_(m)≡ F|_V(ψ)|_V_(m).As { V_(m)}_m ∈ V covers V and ψ was arbitrary, this proves that G = F|_V. Let U ∈(M). For each f ∈^∞_(U), F ∈_(U) and ψ∈Γ_(U), the formula(f · F)(ψ) := f · F(ψ)clearly makes _(U) into a graded ^∞_(U)-module. The unique property (<ref>) of restrictions can be now utilized to prove that for any open subsets W ⊆ V ⊆ U, every F ∈_(U) and any f ∈^∞_(U), one has (F|_V)|_W = F|_W,F|_U = F,(f · F)|_V = f|_V· F|_V.This proves that _ is a presheaf of graded ^∞_-modules. Finally, suppose that U ∈(M) and let { U_α}_α∈ I be some open cover of U. Let { F_α}_α∈ I be a collection of local operators onover U_α agreeing on the overlaps. For each ψ∈Γ_(U) and every α∈ I, defineF(ψ)|_U_α := F_α(ψ|_U_α).By using the assumption and (<ref>), the elements of Γ_(U_α) agree on the overlaps, whence they glue to a unique element Γ_(U). It is not difficult to see that this procedure defines an element of _(U). Its defining formula shows that F|_U_α = F_α, for all α∈ I, and such F is unique. This proves that _ is a sheaf of graded ^∞_-modules. Every ^∞_(U)-linear map F: Γ_(U) →Γ_(U) of any given degree |F| is local. Indeed, suppose there is V ∈(U) and ψ∈Γ_(U) satisfying ψ|_V = 0. For any m ∈ V, find V_(m)∈_m(V) such that V_(m)⊆ V, and a smooth bump function η∈^∞_(U) satisfying (η) ⊆ V and η|_V_(m) = 1. It follows that ψ = (1 - η) ·ψ. But thenF(ψ)|_V_(m) = (1 - η)|_V_(m)· F(ψ)|_V_(m) = 0,and since { V_(m)}_m ∈ V is an open cover of V, this proves that F(ψ)|_V = 0. Whence F ∈_(U). There is one important feature of local operators which follows from the property (<ref>). When we talk about sheaves, we usually talk about sheaf morphisms between them. 
It turns out that for graded vector bundles, it suffices to define a mapping of global sections.Let F ∈_(M). Then there exists a unique sheaf morphism F: Γ_→Γ_, such that F = F_M. If F is ^∞_(M)-linear, F is a ^∞_-linear sheaf morphism. We will usually abuse the notation and omit the bar over F.Recall that a sheaf morphism is just a natural transformation of the two sheaf functors (possibly with a degree shift). F is unique, since for every U ∈(M) and every ψ∈Γ_(M), the naturality forces F_U(ψ|_U) = F(ψ)|_U, that is F_U := F|_U by Proposition <ref>. Since _ is a presheaf, F_U is natural in U. We thus obtain a sheaf morphism F = {F_U}_U ∈(M). It is not difficult to see from the proof of Proposition <ref> that if F is ^∞_(M)-linear, then F|_U is ^∞_(U)-linear for every U ∈(M), hence F is a ^∞_-linear sheaf morphism. § SHEAVES OF DIFFERENTIAL OPERATORS In this section, we will produce a new class of local operators generalizing Example <ref>. We mimic standard definitions and prove that even in the graded case, we get expected properties. Again,is still assumed to be a general graded vector bundle over a general graded manifold . For every f ∈^∞_(U), one has a ^∞_(U)-linear operator λ_f: Γ_(U) →Γ_(U), defined asλ_f(ψ) := f ·ψ,for all ψ∈Γ_(U). Note that |λ_f| = |f| and λ_f|_V = λ_f|_V for any V ∈(U).Let k ∈_0 be a non-negative integer. A graded -linear map D: Γ_(U) →Γ_(U) is called a k-th order differential operator onover U, if* it is ^∞_(U)-linear (when k = 0); * for every f ∈^∞_(U), the graded commutator [D,λ_f] := D ∘λ_f - (-1)^|D||f|λ_f∘ D is a (k-1)-th order differential operator onover U (when k > 0).We denote the graded vector space of k-th order differential operators onover U as _^k(U). Let us introduce the following notation. For every graded -linear map D, every j ∈ and every j-tuple (f_1, …, f_j) of functions from ^∞_(U), we define a pair of graded -linear mapsD_(f_1,…,f_j)^(j) := [ ⋯ [[D,λ_f_1], λ_f_2], … ],λ_f_j], D^(j)_(f_1,…,f_j) := [λ_f_1,[…, [λ_f_j-1, [λ_f_j, D]] ⋯ ].They differ only by a sign but it turns out that it is convenient to keep both definitions. More explicitly, one finds the relationD^(j)_(f_1,…,f_j) = (-1)^j + |D|(|f_1| + … + |f_j|)D^(j)_(f_1,…,f_j).It is easy to see that D ∈^k_(U), iff D^(k+1)_(f_1,…,f_k+1) = 0 for all f_1, …, f_k+1∈^∞_(U). It turns out that differential operators are local. In fact, they form sections of a sheaf of graded ^∞_-submodules of the sheaf of local operators. For each k ∈_0 and U ∈(M), one has ^k_(U) ⊆_(U). In fact, one can view ^k_ as a sheaf of graded ^∞_-submodules of the sheaf _. ^k_ is called the sheaf of k-th order differential operators on .Let us proceed by induction on k ∈_0. For k = 0, every F ∈^0_(U) is ^∞_(U)-linear, hence ^0_(U) ⊆_(U) as proved in Example <ref>. Hence assume that k > 0 and that the statement holds for all lower order differential operators. Let D ∈^k_(U) and suppose there is V ∈(U) and ψ∈Γ_(V) satisfying ψ|_V = 0. For each m ∈ V, find V_(m)∈(V) and a smooth bump function η as in Example <ref>, so one can write ψ = (1 - η) ·ψ. But then D(ψ)|_V_(m) = [D, λ_1 - η](ψ)|_V_(m) + (1 - η)|_V_(m)· D(ψ)|_V_(m) = 0,where we have used the induction hypothesis and the definition of η. Since { V_(m)}_m ∈ V covers V, we see that D(ψ)|_V = 0 and conclude that D ∈_(U). To show that ^k_ forms a sheaf of graded ^∞_-submodules, we must first prove _^k(U) is a graded ^∞_(U)-submodule of _(U). 
For any f,g ∈^∞_(U) and D ∈_^k(U), one has [f · D, λ_g] = [λ_f∘ D, λ_g] = λ_f∘ [D, λ_g] + (-1)^|D||g| [λ_f,λ_g] ∘ D = λ_f∘ [D,λ_g] = f · D^(1)_(g),where we have used the graded Jacobi identity for the graded commutator and the fact that [λ_f,λ_g] = 0 for all f,g ∈^∞_(U). By iterating this equation, one finds the formula(f · D)^(k+1)_(f_1,…,f_k+1) = f · D^(k+1)_(f_1,…,f_k+1),for all f_1, …, f_k+1∈^∞_(U). Using Remark <ref> twice and the assumption on D, we see that f · D ∈^k_(U). Hence ^k_(U) is a graded ^∞_(U)-submodule of _(U). To prove that it forms a sheaf, we will proceed once more by induction on k ∈_0. Let U ∈(M) and let { U_α}_α∈ I be an open cover of U. Suppose D ∈_(U) satisfies D|_U_α∈^0_(U_α). For any f ∈^∞_(U) and ψ∈Γ_(U), one can use (<ref>) and the assumption to write D(f ·ψ)|_U_α = D|_U_α( f|_U_α·ψ|_U_α) = (-1)^|D||f| f|_U_α· D|_U_α(ψ|_U_α) = (-1)^|D||f| (f · D(ψ))|_U_α.Since Γ_ is a sheaf, this proves that D(f ·ψ) = (-1)^|D||f| f · D(ψ), that is D ∈^0_(U). This proves that ^0_ is a sheaf of graded ^∞_-submodules of _. Next, assume that k > 0 and that the differential operators form a sheaf of graded ^∞_-submodules for all orders lower than k. Suppose that D ∈_(U) satisfies D|_U_α∈^k_(U_α) for some U ∈(M) and an open cover { U_α}_α∈ I of U. Let f ∈^∞_(U) be an arbitrary function. Then[D,λ_f]|_U_α = [D|_U_α, λ_f|_U_α] ∈^k-1_(U_α).But then [D,λ_f] ∈^k-1_(U) by the induction hypothesis. Since f was arbitrary, this proves that D ∈^k_(U). Hence ^k_ is a sheaf of graded ^∞_-submodules of _. To conclude this section, we have to derive some important properties of the maps (<ref>).For every graded -linear map D: Γ_(U) →Γ_(U) and every j ∈, one has D^(j)_(f_1, …, f_i,f_i+1, …, f_j) = (-1)^|f_i||f_i+1|D^(j)_(f_1,…,f_i+1,f_i,…,f_j),for every j-tuple (f_1, …, f_j) of functions in ^∞_(U) and i ∈{1, …, j-1}. Moreover, for any f,g ∈^∞_(U) and any (j-1)-tuple (f_2,…,f_j) of functions in ^∞_(U), one hasD^(j)_(f · g, f_2, …, f_j) = f ·D^(j)_(g,f_2,…,f_j) + (-1)^|f||g| g ·D^(j)_(f,f_2,…,f_j) - D^(j+1)_(f,g,f_2,…,f_j). Similar equations can be derived for the maps D^(j)_(f_1,…,f_j).The property (<ref>) can be easily derived using definitions, the graded Jacobi identity for the graded commutator, and the fact that [λ_f,λ_g] = 0 for all f,g ∈^∞_(U). The property (<ref>) follows from definitions, the graded Leibniz rule for the graded commutator, and the fact that λ_f · g = λ_f∘λ_g. We leave the detailed proof as an exercise. § GRADED VECTOR BUNDLES OF DIFFERENTIAL OPERATORS In this section, we will prove that ^k_ is locally freely and finitely generated, hence a sheaf of sections of a graded vector bundle. To do so, we must derive local expressions for differential operators. Let us first establish some notation. Suppose that (U,φ) is a graded local chart forinducing a collection of local coordinate functions {^A}_A=1^n⊆^∞_(U). It is not really important how they are ordered in what follows. We will need some important subsets of the set of all n-indices of non-negative integers. Define^n := {≡ (i_1, …, i_n) ∈ (_0)^n| i_A∈{0,1} if |^A| is odd}.This ensures that for each ∈^n, the monomial ^ := (^1)^i_1… (^n)^i_n does not vanish.For each j ∈_0, we sometimes need to make sure that ^ is the product of j coordinate functions:^n(j) := {∈^n| w() := ∑_A=1^n i_A = j }.Note that the set ^n(j) is finite. We write = (0, …, 0) for the n-index of all zeros. Let us also use the shorthand notation(^_(k)) := (^1, …, ^1_i_1×, …, ^n, …, ^n_i_n×). Next, we need the operator of the “partial derivative acting from the right”.
For each A ∈{1, …, n}, define a degree -|^A| graded -linear operator on ^∞_(U) by the formula∂^_A(f) := (-1)^|^A|(1 + |f|)∂ f/∂^A,for all f ∈^∞_(U), where ∂ f/∂^A denotes the usual action of the coordinate vector field ∂/∂^A∈_(U) on f. These operators are designed to satisfy the graded Leibniz rule in the form∂^_A(f · g) = f ·∂^_A(g) + (-1)^|^A||g|∂^_A(f) · g,for any f,g ∈^∞_(U). Observe that ∂^_A(^B) = δ^B_A. More generally, we will write ∂^_A_1… A_j := ∂^_A_1∘⋯∘∂^_A_j for any A_1,…,A_j∈{1, …, n}. Finally, suppose that {Φ_λ}_λ=1^r is a local frame forover U and let D: Γ_(U) →Γ_(U) be an -linear operator of degree |D|. For each j ∈ and each j-tuple (f_1,…,f_j) of functions in ^∞_(U), we can for each μ,λ∈{1, …, r} define the functions [^(j)_(f_1,…,f_j)]^μ_λ using the operators defined by (<ref>), and the formulaD^(j)_(f_1,…,f_j)( Φ_λ) =: [^(j)_(f_1,…,f_j)]^μ_λ·Φ_μ.Note that | [^(j)_(f_1,…,f_j)]^μ_λ| = |D| + |f_1| + … + |f_j| + |ϑ_λ| - |ϑ_μ|, where {ϑ_λ}_λ=1^r is some fixed basis of the typical fiber K of , such that |Φ_λ| = |ϑ_λ| for each λ∈{1,…,r}. We will now argue that for D ∈^k_(U), its action on a general section can be fully expressed in terms of these functions. The crucial observation is the following consequence of the graded Leibniz rule (<ref>).Let D ∈^k_(U). Then for each j ∈{0, …, k-1}, one has the formula[^(k-j)_(f,g_2,…,g_k-j)]^μ_λ = ∑_q=1^j+1 (-1)^q+11/q! (∂^_A_1… A_q f) · [^(q+k-j-1)_(^A_1, …, ^A_q, g_2, …, g_k-j)]^μ_λ,for all f,g_2,…,g_k-j∈^∞_(U) and μ,λ∈{1, …, r}.Let us establish some notation first. Since (g_2,…,g_k-j) does not play any role in the formula (<ref>), we will replace it with a symbol ⋆ for the remainder of the proof. We thus aim to prove that for any j ∈{0,…,k-1}, f ∈^∞_(U), μ,λ∈{1,…,r} and any ⋆, one has[^(k-j)_(f,⋆)]^μ_λ = [^(k-j)_(f,⋆)]^μ_λ,where we use [^(k-j)_(f,⋆)]^μ_λ to denote the right-hand side of (<ref>). Next, observe that (<ref>) immediately implies the graded Leibniz rule in the form [^(k-j)_(f · g, ⋆)]^μ_λ = f · [^(k-j)_(g,⋆)]^μ_λ + (-1)^|f||g| g · [^(k-j)_(f,⋆)]^μ_λ - [^(k-j+1)_(f,g,⋆)]^μ_λ.Let us now prove (<ref>) by induction in j. For j = 0, the last term in (<ref>) vanishes as D ∈^k_(U). This also immediately implies that [^(k)_(1, ⋆)]^μ_λ = 0.Let a ∈ U be arbitrary and write ^A_a := ^A - ^A(a) for every A ∈{1,…,n}. Note that ^A_a = ^A whenever |^A| ≠ 0. We claim that (<ref>) holds for all monomials in the coordinates {^A_a}_A=1^n of any order q ∈_0 and for arbitrary a ∈ U. This can be proved by a simple induction in q. We will provide more details in the j > 0 case and thus skip the detailed discussion here. Next, recall that for every a ∈ U, there is a graded ideal ^a_(U) ⊆^∞_(U) of functions vanishing at a ∈ U. It is generated by the graded set {^A_a}_A=1^n. Let f ∈^∞_(U). One can show thatf = 0 ⇔ f ∈⋂_q ∈⋂_a ∈ U (^a_(U))^q,see Proposition 3.5 in <cit.>. It follows from (<ref>) that if f ∈ (^a_(U))^q+1, then [^(k)_(f,⋆)]^μ_λ∈ (^a_(U))^q,and the same observation is valid also for [^(k)_(f,⋆)]^μ_λ. Let us finish the proof of (<ref>) for j = 0 and any f ∈^∞_(U). Choose any q ∈ and a ∈ U. One can write f = T^q_a(f) + R^q_a(f), where T^q_a(f) is a Taylor polynomial of f of order q and R^q_a(f) ∈ (^a_(U))^q+1, see Lemma 3.4 in <cit.>. 
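(As a quick illustration of this decomposition, stated only in the simplest ungraded situation and with a single even coordinate written as x purely for illustration: for an ordinary smooth function f one has T^q_a(f) = ∑_j=0^q 1/j! f^(j)(a) (x - x(a))^j, and the remainder R^q_a(f) = f - T^q_a(f) indeed lies in (^a_(U))^q+1, since by Hadamard's lemma it can be written as (x - x(a))^q+1 · h for a suitable smooth function h.)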
Since T^q_a(f) is a polynomial of order q in {^A_a}_A=1^n and (<ref>) is -linear in f, we conclude that [^(k)_(f,⋆)]^μ_λ - [^(k)_(f,⋆)]^μ_λ = [^(k)_(R^q_a(f),⋆)]^μ_λ - [^(k)_(R^q_a(f),⋆)]^μ_λ∈ (^a_(U))^q.Since q ∈ and a ∈ U were arbitrary, the left-hand side must vanish due to (<ref>), and we conclude that (<ref>) holds for j = 0. Next, let us assume that j > 0 and the formula (<ref>) holds for all j' ∈{0,…,j-1}. The graded Leibniz rule (<ref>) together with the induction hypothesis imply [^(k-j)_(1, ⋆)]^μ_λ = 0.Let us prove (<ref>) for f = ^B_1_a⋯^B_q_a, where a ∈ U, q ∈_0 and B_1,…,B_q∈{1, …, n} are arbitrary. Let us proceed by induction in q. For q = 0, this reduces to (<ref>). It now takes a bit of combinatorics together with definitions and (<ref>) to verify that [^(k-j)_(^B_1_a⋯^B_q_a,⋆)]^μ_λ = ( ^B_1⋯^B_q-1) · [^(k-j)_(^B_q_a,⋆)]^μ_λ + (-1)^(|^B_1| + … + |^B_q-1|)|^B_q|^B_q_a· [^(k-j)_(_a^B_1⋯_a^B_q-1, ⋆)]^μ_λ - [^(k-j+1)_(^B_1_a⋯^B_q-1_a, ^B_q, ⋆)]^μ_λ.In the first two terms, one can replacebyby the induction hypothesis in q. In the last term, one can replacebyby the induction hypothesis in j. But then one can then use (<ref>) to obtain[^(k-j)_(^B_1_a⋯^B_q_a,⋆)]^μ_λ = [^(k-j)_(^B_1_a⋯^B_q_a,⋆)]^μ_λ.This finishes the induction step in q and thus the proof of (<ref>) for monomials in {^A_a}_A=1^n for any order q ∈_0 and a ∈ A. The rest goes similarly to the j = 0 case. The graded Leibniz rule (<ref>) together with the induction hypothesis imply that for any q ∈, a ∈ U and f ∈ (^a_(U))^q + 1 + j, one has [^(k-j)_(f,⋆)]^μ_λ∈ (^a_(U))^q,and the same is true for [^(k-j)_(f,⋆)]^μ_λ. For any f ∈^∞_(U), one can then choose any q ∈, a ∈ U and write f = T^q+j_a(f) + R^q+j_a(f). The same trick as for j = 0 case is then utilized to prove that (<ref>) holds for any f ∈^∞_(U). This finishes the induction step and the proof is finished. As a simple consequence, one obtains a local expression for every differential operator.Let D ∈^k_(U). Write D(Φ_λ) =: ^μ_λ·Φ_μ. Then the action of D on a general section ψ = ψ^λ·Φ_λ∈Γ_(U) can be written in terms of functions (<ref>) as D(ψ) = { (-1)^|D||ψ^λ|ψ^λ·^μ_λ + ∑_q=1^k (-1)^q + |D||ψ^λ|1/q! (∂^_A_1… A_qψ^λ) · [^(q)_(^A_1, …, ^A_q)]^μ_λ}·Φ_μ. Starting from the left-hand side, one can writeD(ψ) = (D ∘λ_ψ^λ)(Φ_λ) = (-1)^|D||ψ^λ| (λ_ψ^λ∘ D - D^(1)_(ψ^λ))(Φ_λ) = { (-1)^|D||ψ^λ|ψ^λ·^μ_λ - [^(1)_(ψ^λ)]^μ_λ}·Φ_μ.The rest is just the formula (<ref>) for j = k-1 and f = ψ^λ.In the following, we will need the following technical lemma.Let ∈^n be arbitrary. Let ∂^_ be an operator on functions defined as ∂^_ := (∂_n)^i_n∘…∘ (∂_1)^i_1,where ∂_A(f) := ∂ f/∂^A for each A ∈{1,…,n}. The superscriptjust indicates that the partial derivatives act in the order opposite to the n-index . Then it has the following properties: * For every ∈^n such that w() ≤ w(), one has ∂^_( ^) = ! ·δ_^,where ! := i_1! ⋯ i_n!. In particular, one has ∂^_(^) = 0 whenever w() < w(). * For every f,g ∈^∞_(U), there holds the Leibniz rule ∂_^(f · g) = ∑_≤ (-1)^σ(,) + |f||^ - |(∂_ f) · (∂_ - g),where := i_1k_1⋯i_nk_n andσ(,) := ∑_A=2^n (i_A-k_A)|^A|{ k_1|^1| + ⋯ + k_A-1 |^A-1| }. This is an easy verification. The interpretation of σ(,) is the following. For each A ∈{2,…,n}, the contribution to the sign is the Koszul sign obtained by commuting (∂_A)^i_A - k_A through the operators (∂_A-1)^k_A-1∘⋯∘ (∂_1)^k_1. We leave the rest to the reader. Corollary <ref> is the backbone of the main theorem of this section. 
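As a quick sanity check of this local expression, and purely as an illustration, consider the simplest ungraded situation: the trivial line bundle over an open subset of the real line with one even coordinate (written x below), the frame Φ_1 = 1, and the first order operator D(ψ) = a ψ' + b ψ with a, b smooth. Then the zeroth order coefficient is D(Φ_1) = D(1) = b, while the commutator yields [D, λ_x](1) = D(x) - x · D(1) = a. Up to the sign conventions fixed above, these are precisely the coefficient functions entering the formula, which then reduces to the familiar expression D(ψ) = b ψ + a ψ'.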
Note that in the following, we use the functions [^(j)_(f_1,…,f_j)]^μ_λ defined in the same way as in (<ref>), but using the differential operators D^(j)_(f_1,…,f_j) instead. Let k ∈_0. Let D ∈^k_(U) for U ∈(M) where we have a graded local chart (U,φ) forand a local frame {Φ_λ}_λ=1^r forover U. Then D can be uniquely decomposed as D = ∑_q = 0^k∑_∈^n(q)1/! [^]^μ_λ·_^λ_μ,where the functions [^]^μ_λ∈^∞_(U) are obtained from D via the formulas[^]^μ_λ := ^μ_λ,[^]^μ_λ := [^(w())_(^_(w()))]^μ_λ._^λ_μ∈^k_(U) are differential operators of degree |ϑ_μ| - |ϑ_λ| - |^| defined by _^λ_μ(ψ) = (-1)^(|ϑ_μ| - |ϑ_λ|)(|ψ^λ|-|^|) (∂^_ψ^λ) ·Φ_μ, for all sections ψ = ψ^λ·Φ_λ∈Γ_(U).Consequently, ^k_ forms a sheaf of sections of a graded vector bundle ^k_ of k-th order differential operators. If ( ℓ_j )_j ∈ := (^k_), then ℓ_j = #{ (,μ,ν) ∈∪_q=0^k^n(q) ×{1, …, r}^2| |ϑ_μ| - |ϑ_λ| - |^| = j }.The collection {_^λ_μ}, for all ∈^n(q) with q ∈{0,…,k} and μ,λ∈{1,…,r}, forms a local frame for ^k_ over U.In a nutshell, the proof of (<ref>) consists only of a slight reshuffling of the formula (<ref>). One employs (<ref>) and the fact that for every f ∈^∞_(U) and A_1,…,A_q∈{1, …, n}, one has∂^_A_1… A_q(f) = (-1)^(|^A_1| + … + |^A_q|)(|f|+1)∂_A_q… A_1(f),where ∂_A_q… A_1 := ∂_A_q∘…∘∂_A_1. This allows one to rewrite the equation (<ref>) as D(ψ) = ^μ_λ·{ (-1)^(|ϑ_μ| - |ϑ_λ|)|ψ^λ|ψ^λ·Φ_μ} + ∑_q=1^k1/q! [^(q)_(^A_1, …, ^A_q)]^μ_λ·{ (-1)^(|ϑ_μ| - |ϑ_λ|)(|ψ^λ| - |^A_1| …-|^A_q|)∂_A_q… A_1(ψ^λ) ·Φ_μ}.Next, observe that for each q ∈ and any expression E_A_1… A_q completely symmetric in indices A_1,…,A_q∈{1, …, n}, one can write∑_A_1,…,A_q1/q! E_A_1… A_q = ∑_∈ (_0)^nw() = q 1/! E_1 … 1_i_1×, …, n … n_i_n×. In our case, we use the fact that E_A_1… A_q := [^(q)_(^A_1, …, ^A_q)]^μ_λ·{ (-1)^(|ϑ_μ| - |ϑ_λ|)(|ψ^λ| - |^A_1| …-|^A_q|)∂_A_q… A_1(ψ^λ) ·Φ_μ}is completely symmetric in (A_1,…,A_q) thanks to (<ref>) and the fact that the graded commutator of the coordinate vector fields vanishes. Finally, observe that in this particular case, one only has to consider ∈^n(q) in the right-hand side sum of (<ref>). It follows that (<ref>) can be written as D(ψ) = ∑_q=0^k∑_∈^n(q)1/! [^]^μ_λ·{ (-1)^(|ϑ_μ| - |ϑ_λ|)(|ψ^λ| - |^|)∂^_(ψ^λ) ·Φ_μ},which is precisely the formula (<ref>). We have to verify two facts. Each _^λ_μ defined by (<ref>) is to be a k-th order differential operator onover U, and the decomposition (<ref>) has to be unique. To prove the first claim, let us show that _^λ_μ is w()-th order (and thus k-th order) differential operator for any ∈^n with w() ≤ k. We proceed by induction in w(). For w() = 0, the only possibility is =. But it is easy to check that _^λ_μ(ψ) = (-1)^(|ϑ_μ| - |ϑ_λ|)|ψ^λ|ψ^λ·Φ_μis indeed ^∞_(U)-linear of degree |ϑ_μ| - |ϑ_λ|. Hence suppose that w() > 0 and _^λ_μ are differential operators of the w()-th order whenever w() < w(). For any f ∈^∞_(U), one finds[_^λ_μ, λ_f](ψ) = (-1)^(|ϑ_μ| - |ϑ_λ|)(|f| + |ψ^λ| - |^|){∂_^(f ·ψ^λ) - (-1)^|^||f| f ·∂_^(ψ^λ) }·Φ_μ= ∑_ < ≤ (-1)^σ'(,,|f|,λ,μ)∂^_(f) ·_-^λ_μ(ψ),where σ'(,,|f|,λ,μ) is some integer which does not depend on ψ and we have used (<ref>). This shows that [_^λ_μ, λ_f] can be written as a ^∞_(U)-linear combination of the operators _-^λ_μ for < ≤. By the induction hypothesis, one has _-^λ_μ∈^w( - )_(U) ⊆^w()-1_(U).Since f ∈^∞_(U) was arbitrary, this proves that _^λ_μ∈^w()_(U) and the induction step is finished. We conclude that _^λ_μ are indeed k-th order differential operators onover U. It remains to prove the uniqueness of the decomposition (<ref>). 
Let us define a graded -linear mapping ^^κ_ρ: (Γ_(U)) →^∞_(U) as ^^κ_ρ(D) := [^]^κ_ρ,for each ∈^n, κ,ρ∈{1, …, r}, and every graded -linear map D: Γ_(U) →Γ_(U). Note that^^κ_ρ(f · D) = f ·^^κ_ρ(D),for any f ∈^∞_(U) and D ∈(Γ_(U)). This follows from (<ref>). Moreover, we claim that for every ∈^n and every μ,λ∈{1,…,r}, one has ^^κ_ρ( _^λ_μ) = ! ·δ^_δ^κ_μδ^λ_ρ.It is easy to observe that the left-hand vanishes whenever w() < w(). This is because each its term is proportional to the expression ∂_^( ^B_1⋯^B_a), where a ≤ w() < w(). But every such term vanishes. It also vanishes whenever w() > w() since we have proved that _^λ_μ is a w()-th order differential operator. We thus only have to consider the case w() = w(). Using the same reasoning, we only have to deal with the term of the iterated graded commutator, where _^λ_μ acts on the section ^·Φ_ρ. We have _^λ_μ( ^·Φ_ρ) = (-1)^(|ϑ_μ|-|ϑ_λ|)(|^| - |^|)∂_^(δ^λ_ρ·^) ·Φ_μ = {! ·δ^_δ^κ_μδ^λ_ρ}·Φ_κ,where we have used (<ref>). It follows that the function in the curly brackets must be ^^κ_ρ(_^λ_μ). This proves (<ref>). Finally, suppose that a given D ∈^k_(U) can be decomposed as D = ∑_q=0^k∑_∈^n(q)1/! f^^μ_λ·_^λ_μ,for some functions f^^μ_λ∈^∞_(U), where |f^^μ_λ| = |D| + |^| + |ϑ_λ| - |ϑ_μ|. Choose any ∈^n with w() ≤ k and κ,ρ∈{1, …, r} and apply ^J^κ_ρ on both sides of this expression. It follows immediately from (<ref>) and (<ref>) that [^]^κ_ρ≡ K^^κ_ρ(D) = f^^κ_ρ.This proves the uniqueness of the decomposition (<ref>). The remaining statements of the proposition now follow immediately. Recall that we work in the category ^∞_ of graded vector bundles over a fixed base manifold . Its morphisms are ^∞_-linear sheaf morphisms of the respective sheaves of sections. It turns out that the assignment of the graded vector bundle of k-th order differential operators respects the categorical structure.For each k ∈_0, the assignment ↦^k_ defines a functor ^k: (^∞_)^→^∞_.Let F: →' be a graded vector bundle map, that is a ^∞_-linear sheaf morphism F: Γ_→Γ_ of any given degree |F|. We must produce a graded vector bundle map^kF: ^k_'→^k_.For each U ∈(M), we thus have to find a degree |F| graded ^∞_(U)-linear map (^kF)_U: ^k_'(U) →^k_(U).For every D' ∈^k_'(U) and any ψ∈Γ_(U), define [(^kF)_U(D')](ψ) := (-1^|F||D'| D'( F_U(ψ)).We must argue that this defines a k-th order differential operator onover U of degree |D'| + |F|. It is clearly -linear of degree |D'| + |F| and for any f ∈^∞_(U), one finds the relation[(^kF)_U(D')]^(1)_(f) = (^kF)_U( D'^(1)_(f)).By iterating this procedure, one can argue that (^kF)_U(D') ∈^k_(U). It is easy to see that (^kF)_U is ^∞_(U)-linear of degree |F|. For any V ∈(U) and ψ∈Γ_(U), one finds[(^kF)_V(D'|_V)](ψ|_V) = (-1)^|F||D'| D'|_V(F_V(ψ|_V)) = (-1)^|F||D'| D'|_V( F_U(ψ)|_V) = (-1)^|F||D'| D'(F_U(ψ))|_V = [(^kF)_U(D')](ψ)|_VBut it now follows from Proposition <ref> that necessarily(^kF)_V(D'|_V) = (^kF)_U(D')|_V.This proves the naturality in U and we conclude that ^kF: ^k_'→^k_ is indeed a ^∞_-linear sheaf morphism of degree |F|. The facts that ^k_ = _^k_, ^k(G ∘ F) = ^kF ∘^kGare obvious. This concludes the proof. § SYMBOL OF A DIFFERENTIAL OPERATOR Recall that to any k ∈_0, a completely symmetric k-form on a graded manifoldover U ∈(M) is a graded k-linear map from _(U) to ^∞_(U) satisfying the conditionsω(f · X_1, …, X_k) = (-1)^|f||ω| f ·ω(X_1,…,X_k),ω(X_1, …, X_i,X_i+1, …, X_k) =(-1)^|X_i||X_i+1|ω(X_1, …, X_i+1,X_i, …, X_k),for all f ∈^∞_(U) and X_1,…,X_k∈_(U). 
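(A familiar ungraded instance, mentioned only for orientation: for k = 2 on an ordinary manifold these two conditions single out symmetric (0,2)-tensor fields, such as a pseudo-Riemannian metric g = g_AB dx^A ⊙ dx^B written in local coordinates; the graded case merely keeps track of the Koszul signs appearing above.)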
Such maps form a graded vector space Ω̃^k_(U) and they are local in the sense completely analogous to Section <ref>. Consequently, they can be naturally restricted to smaller open subsets. This provides one with a sheaf Ω̃^k_ of graded ^∞_-modules called the sheaf of completely symmetric k-forms on . There is a canonical (degree zero) bilinear, graded symmetric, and associative product ⊙: Ω̃^k_(U) ×Ω̃^ℓ_(U) →Ω̃^k+ℓ_(U).If (U,φ) is a graded local chart forwith local coordinate functions {^A}_A=1^n, one can show that Ω̃^k_(U) is freely and finitely generated by the collection {^}_∈^n(k), where^ := ^1⊙⋯⊙^1_i_1×⊙⋯⊙^n⊙⋯⊙^n_i_n×,and ^A∈Ω̃^1_(U) ≡Ω^1_(U) are the usual coordinate 1-forms. It follows that Ω̃^k_ is a sheaf of sections of a graded vector bundle which we shall denote simply as S^k(T^∗). Before we proceed, note that to a pair of graded vector bundlesand ' over a graded manifold , one can assign a graded vector bundle (,'). Its space of local sections over U ∈(M) is defined to be the graded space of ^∞_(U)-linear maps from Γ_(U) to Γ_'(U), that isΓ_(,')(U) := ^^∞_(U)( Γ_(U), Γ_'(U)).Analogously to Section <ref>, these form a sheaf of graded ^∞_-modules. Suppose {Φ_λ}_λ = 1^r is a local frame forover U and {Ψ_κ}_κ=1^r' is a local frame for ' over U. Then Γ_(,')(U) is easily shown to be freely and finitely generated by the collection {^λ_κ} of ^∞_(U)-linear maps, each one defined for λ∈{1,…, r} and κ∈{1, …, r'} by^λ_κ(ψ) = (-1)^(|Ψ_κ| - |Φ_λ|)|ψ^λ|ψ^λ·Ψ_κ,for all ψ = ψ^λ·Φ_λ∈Γ_(U).Observe that (,) = ^0_ and compare this to (<ref>). Letand ' be graded vector bundles. Then graded vector bundle maps over _ can be identified with global sections of (,').Recall that graded vector bundle maps fromto ' over _ are just ^∞_-linear sheaf morphisms from Γ_ to Γ_'. One usually considers just the degree zero case (that is they preserve the degree), but they can in principle be of any degree. A global section F of (,') is a graded ^∞_(M)-linear map F: Γ_(M) →Γ_'(M). Similarly to Proposition <ref>, there is a unique ^∞_-linear sheaf morphism F: Γ_→Γ_', such that F = F_M. This is the corresponding graded vector bundle map (of degree |F|). One usually does not distinguish between F and F and writes simply F: →' for both equivalent notions.We can now proceed to the main construction of this section. For each k ∈_0, there is a canonical surjective graded vector bundle map σ: ^k_→(S^k(T^∗), ^0_) called the symbol map. For k > 0, its kernel can be identified with ^k-1_ and we thus have a short exact sequence
0r ^k-1_[hookrightarrow]r ^k_rσ (S^k(T^∗), ^0_) r0
in the category of graded vector bundles over . For k = 0, σ is a graded vector bundle isomorphism, so the statement is still valid if we declare ^-1_ := 0.For each U ∈(M) and D ∈^k_(U), the graded ^∞_(U)-linear map σ_U(D): Ω̃^k_(U) →^0_(U) of degree |D| is called the symbol of the differential operator D.Let D ∈^k_(U) for some U ∈(M). For any k-tuple of functions f_1,…,f_k∈^∞_(U), we want the symbol map to satisfy the equation[σ(D)]( f_1⊙⋯⊙f_k) = (-1)^|f_1| + ⋯ + |f_k| D^(k)_(f_1,…,f_k)∈^0_(U),where we omit the explicit writing of the subscript U throughout the proof. Now, suppose that (U,φ) is a graded local chart, hence inducing a set {^A}_A=1^n of local coordinate functions. For each ∈^n(k), the formula (<ref>) requires one to prescribe [σ(D)]( ^) := (-1)^|^| D^(k)_(^_(k)),see also (<ref>). Note that for k = 0, we declare ^ := 1 and D^(0)_(^_(0)) := D. The map σ(D) is to be ^∞_(U)-linear of degree |D|.
If we write a general ω∈Ω̃^k_(U) as ω = ∑_∈^n(k)1/!ω_·^, we thus have to define[σ(D)](ω) := ∑_∈^n(k) (-1)^|D|(|ω|-|^|) + |^|1/!ω_· D^(k)_(^_(k)).It is obvious that such σ(D) is ^∞_(U)-linear (in ω) of degree |D|. Moreover, thanks to (<ref>), this formula defines a degree zero ^∞_(U)-linear mapσ: ^k_(U) →^^∞_(U)( Ω̃^k_(U), ^0_(U)).Finally, if (U',φ') is a different graded local chart, one can verify that the definitions agree on U ∩ U'. This requires one to show that (<ref>) is in fact independent of used local coordinates. We leave this for an interested reader. Note that one has to use the formula (<ref>) together with (<ref>) to find the transformation rules for the functions ^^μ_λ(D). Since the formulas agree on the overlaps, they glue toσ: Γ_^k_(M) →Γ_(S^k(T^∗), ^0_)(M),which is of degree zero and ^∞_(M). Hence by Proposition <ref>, this defines a degree zero graded vector bundle map σ: ^k_→(S^k(T^∗), ^0_). Note that one can verify that such σ satisfies (<ref>) and for any graded local chart (U,φ), the restriction of σ to U is given by the formula (<ref>). Recall that σ is surjective, if it is surjective as a sheaf morphism. This is equivalent to the surjectivity of the corresponding map of global sections. In fact, it suffices to show that each point m ∈ M has a neighborhood U ∈_m(M), such that the restriction of σ to U is surjective. Let (U,φ) be a graded local chart forand {Φ_λ}_λ=1^r be a local frame forover U, that is σ is given by (<ref>). Let F ∈^^∞_(U)( Ω̃^k_(U), ^0_(U)). For each ∈^n(k), one can thus writeF(^) = ^^μ_λ·_^λ_μ,where ^^μ_λ∈^∞_(U) have degree |^^μ_λ| = |F| + |^| + |ϑ_λ| - |ϑ_μ|. Define D ∈^k_(U) by D := ∑_∈^n(k)1/!(-1)^|^|^μ_λ·_^λ_μ.Note that |D| = |F| and observe that D^(k)_(^_(k)) = K^^κ_ρ(D) ·^ρ_κ for each ∈^n(k). It is then easy to utilize (<ref>) and (<ref>) to see that [σ(D)](^) = F(^). Hence the restriction of σ to U is surjective. Next, let k > 0. We have to prove that for every U ∈(M), one has (σ) = ^k-1_(U). Since we are comparing two subsheaves of ^k_, one has to show that for each m ∈ M, there is U ∈_m(M), such that this is true. But if (U,φ) is a graded local chart, this statement is immediately verified thanks to the definition of ^k-1_(U) and the formulas (<ref>, <ref>). Finally, for k = 0, note that Ω̃^0_≡^∞_ and for each D ∈^0_(M) and f ∈^∞_(M) one has [σ(D)](f) = (-1)^|D||f| f · D, which is obviously an isomorphism. Let us rephrase the above result perhaps in a more standard way. Without going into details, there are canonical isomorphisms (,') ≅^∗⊗' and S^k(T^∗) ≅ S^k(T)^∗, where the sheaf of sections of S^k(T) is the sheaf of completely symmetric k-vector fields on . One can thus view the symbol map as a graded vector bundle map σ: ^k_→ S^k(T) ⊗^∗⊗. Since tensor products of graded vector bundles are a bit cumbersome to work with, we stick to the maps of sections.§ ATIYAH LIE ALGEBROID OF A GRADED VECTOR BUNDLE In this section, we aim to construct a certain subbundle ^k_⊆^k_ for every k ∈_0. It will come equipped with a canonical degree zero graded vector bundle map ℓ_(k): ^k_→ S^k(T). It turns out that this subclass of differential operators has a nice algebraic structure which is preserved by these maps. For k = 1, this will give us a non-trivial example of a graded Lie algebroid, see e.g. 6 of <cit.>. Let us start by some observations.Let D ∈^k_(U) and D' ∈^m_(U) for some k,m ∈_0 and U ∈(M). Then D ∘ D' ∈^k+m_(U).Let us prove this statement by induction in k + m ∈_0. 
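(A classical model to keep in mind, recorded only as an illustration: for two vector fields X and Y on an ordinary manifold, acting on the trivial line bundle, a direct computation gives [[X ∘ Y, λ_f], λ_g] = λ_u with u := X(f)Y(g) + X(g)Y(f), which is ^∞-linear; one further commutator with any λ_h therefore vanishes, so X ∘ Y is a second order operator, although it is not itself a derivation.)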
For k + m = 0, one has k = m = 0, and the claim follows from the fact that the composition of two graded ^∞_(U)-linear maps is graded ^∞_(U)-linear. Hence assume that k + m > 0 and the claim is true for all pairs k',m' ∈_0 with k' + m' < k + m. For any f ∈^∞_(U), one has[D ∘ D', λ_f] = D ∘ [D',λ_f] + (-1)^|D'||f| [D,λ_f] ∘ D'.where we have used the Leibniz rule for [·,·]. But both terms are (k+m-1)-th order differential operators by the induction hypothesis. Since f was arbitrary, one obtains D ∘ D' ∈^k+m_(U).Recall that S^k(T) is a graded vector bundle, such that it sheaf of sections is a sheaf ^k_ of completely symmetric k-vector fields. If (U,φ) is a graded local chart inducing a set of local coordinate functions {^A}_A=1^n, then the graded ^∞_(U)-module ^k_(U) is freely and finitely generated by a collection {∂_^}_∈^n(k), where for each ∈^n(k), one defines∂^_ := ∂/∂^n⊙⋯⊙∂/∂^n_i_n×⊙⋯⊙∂/∂^1⊙⋯⊙∂/∂^1_i_1×,where ∂/∂^A∈^1_(U) ≡^1_(U) are the coordinate vector fields. There is a canonical identification of S^k(T^∗) with the dual graded vector bundle to S^k(T). If (U,φ) is a graded local chart, the action of the local frame {^}_∈^n(k) for S^k(T^∗) over U on the above generators satisfies^( ∂_^) = ! ·δ^_,for all ,∈^n(k). We can use this to prove the following statement.For each k ∈_0, there is a canonical fiber-wise injective graded vector bundle map I_(k): S^k(T) →(S^k(T^∗), ^0_).For every X ∈^k_(M) and every ω∈Ω̃^k_(M), we declare[I_(k)(X)](ω) := (-1)^|X||ω|λ_ω(X)∈^0_(M).It is straightforward to verify that I_(k)(X) defines a graded ^∞_(M)-linear map of degree |X|, that is a global section of (S^k(T^∗), ^0_). Moreover, I_(k)(X) is graded ^∞_(M)-linear of degree zero in X, hence it defines a graded vector bundle map I_(k) by Proposition <ref>. One only has to prove that I_(k) is fiber-wise injective. Note that this does not follow automatically from the injectivity of I_(k). Let (U,φ) be a graded local chart forand {Φ_λ}_λ=1^r a local frame forover U. Then we have the induced local frames:{∂^_}_∈^n(k) for S^k(T), the k-forms {^}_∈^n(k) for S^k(T^∗) and {_^λ_μ}_μ,λ=1^r for ^0_. Finally, there is the induced “standard basis” local frame {_^λ_μ} for (S^k(T^∗), ^0_) defined for each ∈^n(k) and μ,λ∈{1,…,r} to satisfy_^λ_μ(ω) := 1/! (-1)^(|ϑ_μ|-|ϑ_λ| - |^|)(|ω| - |^|)ω_·_^λ_μ, for all ω = ∑_∈^n(k)1/!ω_·^. This is an explicit example of the maps (<ref>). One finds[I_(k)|_U(∂_^)](ω) = (-1)^|^||ω|ω(∂_^) ·_Γ_(U) = (-1)^|^||ω| w_·_Γ_(U)= (-1)^|^||ω|ω_δ^μ_λ·_^λ_μ = (-1)^|^|!δ^μ_λ·_^λ_μ(ω).For every ∈^n(k), we have thus obtained the formulaI_(k)|_U( ∂^_) = ∑_λ=1^r (-1)^|^|! ·_^λ_λFor every m ∈ U, the induced graded linear map on the fibers thus takes the form [I_(k)]_m( ∂_^|_m) = ∑_λ=1^r (-1)^|^|! ·_^λ_λ|_m.It is not difficult to see that it is injective (that is injective in each degree). Recall that if {Φ_λ}_λ=1^r is a local frame forover U, then {Φ_λ|_m}_λ=1^r forms a total basis of the fiber _m for any m ∈ U.Let k ∈_0 be arbitrary. Then there is a graded vector bundle ^k_ over , together with a pair of graded vector bundle maps J_(k): ^k_→^k_ and ℓ_(k): ^k_→ S^k(T), fitting into the universal pullback diagram^k_rℓ_(k)dJ_(k)S^k(T) dI_(k) ^k_rσ (S^k(T^∗), ^0_).ℓ_(k) is (fiber-wise) surjective and J_(k) is fiber-wise injective. Consequently ^k_ can be viewed as a subbundle of ^k_, the corresponding sheaf of graded ^∞_-submodules ^k_ being^k_(U) = { D ∈^k_(U) |σ(D) = I_(k)(X)for someX ∈^k_(U) },for each U ∈(M). 
Moreover, the new graded vector bundle fits into the short exact sequence of graded vector bundles over :0 r ^k-1_[hookrightarrow]r ^k_rℓ_(k)S^k(T) r0.The construction of the universal pullback diagram is the same as for ordinary vector bundles. One constructs a graded vector bundle map K: ^k_⊕ S^k(T) →(S^k(T^∗), ^0_),by declaring K(D,X) := σ(D) - I_(k)(X) for all (D,X) ∈^k_(M) ⊕^k_(M). By Proposition <ref>, this extends to the morphism of the corresponding sheaves of sections, hence one can define ^k_ := (K).Since K is surjective thanks to the surjectivity of σ, this defines a sheaf of sections of a subbundle ^k_ of ^k_⊕ S^k(T). The maps J_(k) and ℓ_(k) are then simply the restrictions of the corresponding projections. It is straightforward to prove that (<ref>) now forms a universal pullback diagram, ℓ_(k) inherits the surjectivity from σ and J_(k) its fiber-wise injectivity from I_(k). The description (<ref>) then follows from the construction. The exactness of (<ref>) follows immediately from the exactness of (<ref>) and the fact that (σ) ⊆^k_ and it coincides with (ℓ_(k)).Observe that it follows from the definitions and (<ref>) that for D ∈^k_(U), the k-vector field ℓ_(k)(D) is uniquely determined by the formulaD^(k)_(f_1,…,f_k) = (-1)^|f_1| + … + |f_k| [ℓ_(k)(D)]( f_1, …, f_k) ·_Γ_(U).In particular, for k = 0, one has D = [ℓ_(0)](D) ·_Γ_(U). This shows that D ∈^0_(U), iff D = λ_f for some f ∈^∞_(U). Note that ℓ_(0)(λ_f) = f.We can now prove the main theorem of this section, relating graded commutators of (a subclass of) differential operators to a canonical algebraic structure on completely symmetric multivector fields induced by the Schouten-Nijenhuis bracket. We will provide all necessary details in course of the proof, but we refer the interested reader to Proposition A.12 in <cit.>.Let k,m ∈_0, U ∈(M), D ∈^k_(U) and D' ∈^m_(U) be arbitrary. Then [D,D'] ∈^k+m-1_(U) and there holds the formula ℓ_(k+m-1)([D,D']) = [ℓ_(k)(D), ℓ_(m)(D')]_S,where [·,·]_S is the Schouten-Nijenhuis bracket of completely symmetric multivector fields on .Recall that for each k,m ∈_0 and U ∈(M), [·,·]_S is a degree zero -bilinear bracket[·,·]_S: ^k_(U) ×^m_(U) →^k+m-1_(U),having the following properties: * For every f ∈^∞_(M) and X ∈^k_(U), one has [f,X]_S = (-1)^|f|+1 j_f(X), where j is the interior product. For k=m=1, it coincides with the graded commutator of vector fields.* It is graded skew-symmetric, that is [X,Y]_S = -(-1)^|X||Y|[Y,X]_S.* It satisfies the graded Leibniz rule with respect to the symmetric product ⊙: [X,Y ⊙ Z]_S = [X,Y]_S⊙ Z + (-1)^|X||Y| Y ⊙ [X,Z]_S. * It satisfies the graded Jacobi identity:[X,[Y,Z]_S]_S = [[X,Y]_S,Z]_S + (-1)^|X||Y| [Y, [X,Z]_S]_S. [·,·]_S is called the Schouten-Nijenhuis bracket and it is uniquely determined by the properties (i)-(iv). Note that they are significantly simpler then for the analogous bracket for completely skew-symmetric multivector fields.Now, for any f ∈^∞_(U), one has λ_f∈^0_(U) by Remark <ref>. For any D ∈^k_(U), we thus expect D^(1)_(f)≡ [D,λ_f] ∈^k-1_(U). For any g_1,…,g_k-1∈^∞_(U), one has [D^(1)_(f)]^(k-1)_(g_1,…,g_k-1) =D^(k)_(f,g_1,…,g_k-1)=(-1)^|f|+|g_1|+…+|g_k-1| [ℓ_(k)(D)]( f, g_1, …, g_k-1) ·_Γ_(U)= (-1)^|f|(1 + |D|) + |g_1| + … + |g_k-1| [j_f( ℓ_(k)(D))]( g_1, …, g_k-1) ·_Γ_(U),where we have used (<ref>) for D and the definition of the interior product. But using (<ref>) again for D^(1)_(f), this at once proves that D^(1)_(f)∈^k-1_(U) and ℓ_(k-1)( D^(1)_(f)) = (-1)^|f|(1+|D|) j_f( ℓ_(k)(D)) = [ℓ_(k)(D), f]_S. 
Note that this is in accordance with (<ref>) since f = ℓ_(0)(λ_f) by Remark <ref>. Let us now prove the claim by induction in k + m ∈_0. For k + m = 0, one has D = λ_f andD' = λ_g by Remark <ref>. The formula (<ref>) thus follows from [λ_f, λ_g] = 0. Next, let k + m = 1. Since the equation (<ref>) is graded skew-symmetric in (D,D'), we may assume that k = 1 and m = 0. But we have already proved this case above. Finally, suppose k + m > 1 and assume that the formula holds for all differential operators whose sum of degrees is strictly less then k + m. Let f_1,…,f_k+m-1∈^∞_(U). Then [D,D']^(k+m-1)_(f_1,…,f_k+m-1) = [[D,D'], λ_f_1]^(k+m-2)_(f_2,…,f_k+m-1)= [D, D'^(1)_(f_1) ]_(f_2, …, f_k+m-1)^(k+m-2) +(-1)^|D'||f_1| [D^(1)_(f_1), D']^(k+m-2)_(f_2,…,f_k+m-1),where we have used the graded Jacobi identity. Using the induction hypothesis together with the formulas (<ref>, <ref>) and the graded Jacobi identity (<ref>) for [·,·]_S now leads to [D,D' ]^(k+m-1)_(f_1,…,f_k+m-1) = = (-1)^|f_2| + … + |f_k+m-1| [[ℓ_(k)(D),ℓ_(m)(D')]_S,f]_S(f_2, …, f_k+m-1) ·_Γ_(U)= (-1)^|f_1| + … + |f_k+m-1| [ℓ_(k)(D), ℓ_(m)(D')]_S( f_1, …, f_k+m-1) ·_Γ_(U).But this and the (<ref>) prove at once that [D,D'] ∈^k+m-1_(U) and the formula (<ref>). This finishes the induction step and thus concludes the proof. The elements of ^1_(U) are called the derivative endomorphisms ofover U. We also call ^1_ the Atiyah bundle of a graded vector bundleand denote it as _. The triple (_, ℓ_(1), [·,·]) forms an example of a transitive graded Lie algebroid of degree zero, called the Atiyah Lie algebroid of . We have a short exact sequence0 r ^0_[hookrightarrow]r _rℓ_(1)Tr0.See 6 of <cit.> for the definition of a graded Lie algebroid of degree zero. Being transitive means that its anchor map ℓ_(1): _→ T is surjective. This will follow from the exactness of the sequence (<ref>). ^1_(M) forms a degree zero graded Lie algebra thanks to the above theorem and (<ref>). One only has to verify the graded Leibniz rule to finish the proof. For all D,D' ∈^1_(M) and f ∈^∞_(M), one has[D, f · D'] =[D, λ_f∘ D'] = [D,λ_f] ∘ D' + (-1)^|D||f|λ_f∘ [D,D'] = { (-1)^|f| [ℓ_(1)(D)](f) ∘_Γ_(U)}∘ D' + (-1)^|D||f| f · [D,D'] = ℓ_1(D)(f) · D' + (-1)^|D||f| f · [D,D'].But this is precisely the graded Leibniz rule for (_, ℓ_(1), [·,·]). The sequence (<ref>) is just (<ref>). As in ordinary differential geometry, splittings of (<ref>) are of some importance. A connection onis a splitting : T→_ of the sequence (<ref>). For each X ∈_(M), the first order differential operator _X := (X) ∈^1_(M) is called a covariant derivative with with respect to X.By definition, the covariant derivative operators adhere to the expected rules_f · X(ψ) = f ·_Xψ, _X(f ·ψ) = X(f) ·ψ + (-1)^|X||f| f ·_Xψ,for all X ∈_(M), ψ∈Γ_(M) and f ∈^∞_(M). Since the splittings of short exact sequences always exist in the category of graded vector bundles over , we immediately obtain the following statement:There always exists some connection on . To conclude this section, note that one can define the curvature operator R_ ofin order to describe the failure of (T) to form an involutive subbundle of _, that is set [R_(X,Y)](ψ) = [_X, _Y](ψ) - _[X,Y](ψ),for all X,Y ∈_(M) and ψ∈Γ_(M). By exactness of (<ref>), R_(X,Y) is in ^0_(M), that is ^∞_(M)-linear of degree |X| + |Y|.§ GEOMETRIC PRESHEAVES In this section, we will identify and examine an important class of presheaves of graded ^∞_-modules. Those will be of vital importance in the following section. 
Recall that for each a ∈ M, we have a sheaf of ideals ^a_ of functions vanishing at a. For any U ∈(M), it is defined as ^a_(U) = {[ { f ∈^∞_(U) | f(a) = 0 }whena ∈ U; ^∞_(U)whena ∉ U ]. These (sheaves of) ideals play an important role to characterize vanishing functions: f ∈^∞_(U) is zero, iff f ∈ (^a_(U))^q for all q ∈ and a ∈ U. We have already used this, see (<ref>). Now, for any given graded ^∞_(U)-module (P,), let us write P^[q,a] := (^a_(U))^q P for the graded ^∞_(U)-submodule generated by the graded subset { fp | f ∈ (^a_(U))^q, p ∈ P }, and let P^∙ := ⋂_q ∈⋂_a ∈ U P^[q,a].We would like to have the following criterion: p = 0, iff p ∈ P^[q,a] = 0 for all q ∈ and a ∈ U. Equivalently, P^∙ = 0. However, for a general graded ^∞_(U)-module P, this is not true. This leads us to the following definition.We say that a graded ^∞_(U)-module P is geometric, if P^∙ = 0. More generally, letbe a presheaf of graded ^∞_-modules. We say thatis geometric, if (U) is a geometric graded ^∞_(U)-module for every U ∈(M). Letbe a sheaf of graded ^∞_-modules. Thenis geometric, iff there is an open cover { U_α}_α∈ I of M, such that |_U_α is geometric for every α∈ I.The only if direction is trivial. To prove the if direction, suppose that there is an open cover { U_α}_α∈ I, such that |_U_α is a geometric. Let U ∈(M) and ψ∈(U)^∙. It is obvious that ψ|_U ∩ U_α∈(U ∩ U_α)^∙ for every α∈ I, hence ψ|_U ∩ U_α = 0 for every α∈ I, by assumption. Sinceis a sheaf, this implies ψ = 0. Henceis geometric.The notion of geometric sheaves is justified by the following observation. For every graded vector bundle , its sheaf of sections Γ_ is geometric.Thanks to Lemma <ref>, it suffices to prove that sheaf of sections of a trivial graded vector bundle is geometric. We can thus assume that Γ_ = ^∞_[K] for some finite-dimensional graded vector space K, see (<ref>). For each U ∈(M), every ψ∈Γ_(U) can be uniquely decomposed as ψ∈ψ^λ⊗ϑ_λ, if we fix some total basis {ϑ_λ}_λ=1^r of K. Let q ∈ and a ∈ U. It follows that ψ∈Γ_(U)^[q,a], iff ψ^λ∈ (^a_(U))^q for each λ∈{1,…,r}. Consequently, one obtainsΓ_(U)^∙ = ^∞_(U)^∙⊗_ K = 0,where we view ^∞_ as a sheaf of graded ^∞_-modules and use (<ref>). Hence Γ_ is geometric. It turns out that there is a canonical procedure making graded presheaves of ^∞_-modules into geometric ones. This procedure is universal in the sense explained below.To any presheafof graded ^∞_-modules, there is a geometric presheaf of graded ^∞_-modules (), together with a ^∞_-linear presheaf morphism : →(). It has the following universal property: To any geometric presheaf of graded ^∞_-modulesand any ^∞_-linear presheaf morphism φ: →, there exists a unique presheaf morphism φ̂: () →, such that φ̂∘ = φ. () is called the geometrization of . It is unique up to a ^∞_-linear presheaf isomorphism. Ifis geometric, then : →() is an isomorphism.The idea is to simply kill the unwanted submodule, hoping that this makes sense. Hence for any U ∈(M), set()(U) := (U) / (U)^∙.Obviously, we let _U: (U) →()(U) to be the canonical quotient map. For any ψ∈(U), we shall write [ψ]^∙ := _U(ψ) and call it the geometric class of ψ. There is a unique graded ^∞_(U)-module structure on ()(U) making _U into a ^∞_(U)-linear map. Let U ∈(M) and suppose [ψ]^∙∈()(U)^∙. For every q ∈ and a ∈ U, we thus have [ψ]^∙ = f^μ [ψ_μ]^∙≡ [f^μψ_μ]^∙ for some collection of f^μ∈ (^a_(U))^q and ψ_μ∈(U). But this proves that ψ - f^μψ_μ∈(U)^∙. In particular, this implies that ψ∈(U)^[q,a]. Since q ∈ and a ∈ U were arbitrary, we have ψ∈(U)^∙ and thus [ψ]^∙ = 0. 
This proves that ()(U) is a geometric graded ^∞_(U)-module. Since the restriction morphisms obviously map (U)^∙ to (V)^∙ for any V ⊆ U, there is a canonical way to make () into a presheaf of graded ^∞_(U)-modules and := {_U}_U ∈(M) into a ^∞_-linear presheaf morphism. Hence () is a geometric presheaf of graded ^∞_-modulesSuppose φ: → is a ^∞_-linear presheaf morphism into a geometric presheaf. For each U ∈(M), φ̂_U must be given by φ̂_U[ψ]^∙ := φ_U(ψ) for every ψ∈(U). Since _U is surjective, one only has to verify that φ̂_U is well-defined. But this follows from the obvious property φ_U( (U)^∙) ⊆(U)^∙ = 0. Clearly φ̂ := {φ̂_U}_U ∈(M) defines a ^∞_-linear presheaf morphism from () to . The rest of the claims follows easily.The assignment ↦() can be viewed as a faithful functor from the category ^^∞_ of presheaves of graded ^∞_-modules into its full subcategory ^^∞__ of geometric presheaves of graded ^∞_-modules.Note that even ifis a sheaf, its geometrization is in general not a sheaf. Conversely, a sheafification of a geometric presheaf is not necessarily geometric. There are some general statements about geometric presheaves, some of which will become useful in the next section. Let , be any presheaves of graded ^∞_-modules. Then we have the following observations: * Any restriction of a geometric presheaf is geometric.* If φ: → is an injective ^∞_-linear presheaf morphism andis geometric, then so is .* Ifis geometric, the presheaf ^^∞_( , ) is geometric. * The dual presheaf ^∗ := ^^∞_(,^∞_) is always a geometric sheaf. The fact (i) is obvious. Let us start by proving (ii). For any U ∈(M), one obviously has φ_U( (U)^∙) ⊆(U)^∙. Ifis geometric, we have φ_U((U)^∙) = 0 and since φ_U is injective, this proves (U)^∙ = 0. Henceis geometric. To prove (iii), recall that ^^∞_(,) is a presheaf assigning to each U ∈(M) the graded vector space of ^∞_|_U-linear presheaf morphisms from |_U to |_U, with the obvious graded ^∞_(U)-module structure. The presheaf restrictions are restrictions of presheaf morphisms. Ifis a sheaf, then it is also a sheaf. Hence suppose that φ∈^^∞_(,)(U)^∙. For any q ∈ and a ∈ U, one can thus write it as some finite combination φ = f^μφ_μ, where f^μ∈ (^a_(U))^q and φ_μ: |_U→|_U are ^∞_|_U-linear presheaf morphisms. For any V ∈(U) and ψ∈(V), one thus has φ_V(ψ) = f^μ|_V (φ_μ)_V(ψ) ∈(V)^[q,a],since f^μ|_V∈ (^a_(V))^q. In particular, we can choose any q ∈ and a ∈ V, which proves that φ_V(ψ) ∈(V)^∙. But sinceis geometric, ψ∈(V) and V ∈(U) were arbitrary, we get φ = 0. The claim (iv) follows from the fact that ^∗≡^^∞_(, ^∞_) and ^∞_ is a geometric sheaf due to (<ref>). As already noted, ^∗ is automatically a sheaf. § GRADED JET BUNDLES: CONSTRUCTION To any graded vector bundleand k ∈_0, we intend to assign a k-th order graded jet bundle ^k_. In the classical setting, this is usually done by constructing the k-th order jet space over each m ∈ M and then obtaining the total space of the k-th jet bundle as a disjoint union of jet spaces over all points. This is a problematic approach in the graded setting. Instead, we will directly construct its sheaf of sections ^k_ := Γ_^k_. This turns out to be a bit complicated and requires one to employ the geometrization procedure described in the preceding section. We will do the construction in three steps. We will first construct a presheaf of graded ^∞_-modules denoted as ^k_. Next, we will geometrize it to obtain a geometric presheaf ^k_. 
In the final step, we will sheafify it to obtain a sheaf of graded ^∞_-modules ^k_, which will turn out to be locally freely and finitely generated of a constant graded rank. U ∈(M) be arbitrary. Let us first consider a graded vector space(U) := ^∞_(U) ⊗_Γ_(U).There are two graded ^∞_(U)-module actions on (U):f(g ⊗ψ) := (f · g) ⊗ψ, f(g ⊗ψ) := (-1)^|f||g| g ⊗ (f ·ψ),for all g ⊗ψ∈(U). In fact, with restricting morphisms defined in an obvious way,forms a presheaf of graded ^∞_(U)-modules. These two actions do not coincide. For each f ∈^∞_(U), one can thus define a graded linear map δ_f: (U) →(U) of degree |f|, given byδ_f(g ⊗ψ) := (L^_f - L^_f)(g ⊗ψ) ≡ (f · g) ⊗ψ - (-1)^|f||g| g ⊗ (f ·ψ),for all g ⊗ψ∈(U), where L^_f and L^_f denote the left translations by f. It is straightforward to verify that δ_f is also graded C^∞_(U)-linear with respect to bothand . Now, let k ∈_0 and let _(k)(U) ⊆(U) be the graded subspace _(k)(U) := {δ_f_1…δ_f_k+1(g ⊗ψ) | f_1,…,f_k+1,g ∈^∞_(U),ψ∈Γ_(U) }Thanks to the ^∞_(U)-linearity of δ_f, it forms a graded ^∞_(U)-submodule of (U). In fact, one can view _(k) as a presheaf of graded ^∞_-submodules of the presheaf . This is true for both actionsand . Note that clearly _(k+1)(U) ⊆_(k)(U). Finally, let ^k_(U) := (U) / _(k)(U).Let ♮^(k)_U: (U) →^k_(U) be the natural quotient map. There are two graded ^∞_(U)-module structures on ^k_(U) induced byand , which we will denote by the same symbol as on (U). There are induced restrictions making ^k_ into a presheaf of graded ^∞_-modules (with respect to two different actions) and ♮^(k) := {♮^(k)_U}_U ∈(M) into a ^∞_-linear presheaf morphism. For every f ∈^∞_(U) and ψ∈Γ_(U), we will write f ⊗_(k)ψ := ♮^(k)_U(f ⊗ψ).For each f ∈^∞_(U), one has δ_f(_(k)(U)) ⊆_(k)(U). Consequently, there is an induced graded linear map δ_f: ^k_(U) →^k_(U), ^∞_(U)-linear with respect to bothandactions, and again denoted by the same symbol. Explicitly, one findsδ_f(g ⊗_(k)ψ) = (f · g) ⊗_(k)ψ - (-1)^|f||g| g ⊗_(k) (f ·ψ),for all generators g ⊗_(k)ψ∈^k_(U). For every j ∈ and all f_1,…,f_j∈^∞_(U), we define δ^(j)_(f_1,…,f_j) := δ_f_1∘…∘δ_f_j.The relations forced by the quotient can be then written as δ_(f_1,…,f_k+1)^(k+1)(g ⊗_(k)ψ) = 0,for all f_1,…,f_k+1,g ∈^∞_(U) and ψ∈Γ_(U). It is not a coincidence that this resembles the equivalent definition of ^k_(U) discussed in Remark <ref>. This finishes the construction of the presheaf ^k_. As already announced, we now define ^k_ := ( ^k_).Before we proceed, recall that to any presheafof graded ^∞_-modules, there exists its sheafification (), a sheaf of graded ^∞_-modules together with a ^∞_-linear presheaf morphism : →(), having the following universal property: To any sheaf of graded ^∞_-modulesand any ^∞_-linear presheaf morphism φ: →, there exists a unique presheaf morphism φ̂: () →, such that φ̂∘ = φ. () is called the sheafification ofand ifwas already a sheaf, : →() is a sheaf isomorphism. In fact, this defines a faithful functor : ^^∞_→^^∞_ into the category of sheaves of graded ^∞_-modules. Let ^k_ := ( ^k_). Let k ∈_0. For every U ∈(M), the elements of ^k_(U) are called k-th order jets ofover U. ^k_ is called the sheaf of k-th order jets of .We will now spend the rest of this section proving that ^k_ is locally freely and finitely generated of a constant graded rank, that is a sheaf of sections of a graded vector bundle which we will denote as ^k_. To do so, suppose that (U,φ) is a graded local chart onand we have a local frame {Φ_λ}_λ=1^r forover U. 
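(Before carrying out this local analysis, it may help to record, purely as an illustration, what the imposed relation reads in the lowest nontrivial case k = 1, taking g = 1 for simplicity: expanding the definitions, δ_f δ_f'(1 ⊗_(1) ψ) = 0 becomes (f · f') ⊗_(1) ψ - f ⊗_(1) (f' · ψ) - (-1)^|f||f'| f' ⊗_(1) (f · ψ) + 1 ⊗_(1) (f · f' · ψ) = 0 for all f, f' ∈^∞_(U) and ψ∈Γ_(U), which in the ungraded setting is essentially the classical relation used to cut out first order jets.)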
Now, for every j ∈, f_1,…,f_j∈^∞_(U) and λ∈{1,…,r}, define^(j)_(f_1,…,f_j),λ := [ δ^(j)_(f_1,…,f_j)(1 ⊗_(k)Φ_λ) ]^∙∈^k_(U).First, we obtain an analogue of Lemma <ref>:For every j ∈, one has ^(j)_(f_1,…,f_i,f_i+1, … ,f_j),λ = (-1)^|f_i||f_i+1|^(j)_(f_1, …, f_i+1, f_i, …, f_j),λ,for every j-tuple (f_1, …, f_j) of functions in ^∞_(U) and i ∈{1, …, j-1}. Moreover, for any f,g ∈^∞_(U) and any (j-1)-tuple (f_2,…,f_j) of functions in ^∞_(U), one has^(j)_(f · g, f_2,…,f_j),λ = f ^(j)_(g,f_2,…,f_j), λ + (-1)^|f||g| g ^(j)_(f,f_2,…,f_j),λ - ^(j+1)_(f,g,f_2,…,f_j), λ.The identity (<ref>) follows immediately from the fact that [δ_f,δ_g] = 0 for all f,g ∈^∞_(U). The property (<ref>) can be obtained from the fact that for all f,g ∈^∞_(U), one hasδ_f · g = L^_f∘δ_g + (-1)^|f||g| L^_g∘δ_f.This can be verified easily by evaluating both sides on the generators h ⊗_(k)ψ∈^k_(U). Then use δ_g = L^_g - L^_g, definitions, and (<ref>), to arrive to (<ref>). The above lemma is now vital for the proof of the following analogue of Proposition <ref>. For each j ∈{0, …, k-1}, one has the formula^(k-j)_(f,g_2,…,g_k-j), λ = ∑_q=1^j+1 (-1)^q+11/q!∂^_A_1… A_q(f) ^(q+k-j-1)_(^A_1, …, ^A_q,g_2,…,g_k-j),λfor all f,g_2,…,g_k-j∈^∞_(U) and λ∈{1, …, r}.The proof is a line-to-line copy of the proof of Proposition <ref> and we will not repeat it here. However, there is one notable difference we have to very carefully point out. In the proof of (<ref>), we have used the fact that both its sides were in ^∞_(U). In particular, we have used the criterion (<ref>) in the very last step of the proof. This time, both sides are elements of ^k_(U). At the certain moment, one is able to show that the difference of two sides of (<ref>) is in the graded ^∞_(U)-submodule ^k_(U)^∙. But since ^k_(U) is geometric, it has to vanish. This is exactly the reason why we geometrize the presheaf ^k_, otherwise we were not able to deduce the formula (<ref>).We are almost ready to prove the main theorem of this section. However, we still need one more statement. It will also play an important role in the following section. Let k ∈_0 and U ∈(M) be arbitrary. Let D ∈^k_(U). Define^k_0[D][f ⊗_(k)ψ]^∙ := (-1)^|D||f| f · D(ψ),for each ψ∈Γ_(U) and f ∈^∞_(U). Then ^k_0[D] is a well-defined degree |D| map ^k_0[D]: ^k_(U) →Γ_(U),^∞_(U)-linear with respect to the actionon ^k_(U) and the (given) one on Γ_(U). It suffices to prove that ^k_0[D](f ⊗_(k)ψ) := (-1)^|D||f| f · D(ψ) is a well-defined graded ^∞_(U)-linear map (with respect to ) from ^k_(U) to Γ_(U) of degree |D|. Since Γ_(U) is geometric, ^k_0[D] is then obtained by the universal property of geometrization. To see that ^k_0[D] is well-defined, observe that ^k_0[D] ∘δ_f = -^k_0[D^(1)_(f)], for any f ∈^∞_(U). This is easily verified on generators. Whence^k_0[D] ∘δ^(j)_(f_1,…,f_j) = (-1)^j^k_0[D^(j)_(f_1,…,f_j)]for any j ∈ and f_1,…,f_j∈^∞_(U). In particular, ^k_0[D] ∘δ^(k+1)_(f_1,…,f_k+1) = 0, since D ∈^k_(U). This shows that ^k_0[D] is well-defined. The fact that it is ^∞_(U)-linear of degree |D| is easily checked by a direct verification. Let ψ = ψ^λ·Φ_λ∈Γ_(U) be arbitrary. 
Then one has [1 ⊗_(k)ψ]^∙ = ∑_q=0^k∑_∈^n(q) (-1)^q1/!∂^_(ψ^λ) ^_λ,where ^_λ := [1 ⊗_(k)Φ_λ]^∙ and for every q ∈ and ∈^n(q), one sets ^_λ := ^(q)_(^_(q)), λ.Finally, ∂_^ := (∂^_1)^i_1∘…∘ (∂^_n)^i_n, see also (<ref>).It follows from definitions and the formula (<ref>) that [1 ⊗_(k)ψ]^∙ = ψ^λ [1 ⊗_(k)Φ_λ]^∙ - [δ_ψ^λ(1 ⊗_(k)Φ_λ)]^∙= ψ^λ^0_λ + ∑_q=1^k (-1)^q1/q!∂^_A_1… A_q (ψ^λ) ^(q)_(^A_1, …, ^A_q), λ= ∑_q=0^k∑_∈^n(q) (-1)^q1/!∂^_(ψ^λ) ^_λ,where we have used (<ref>) and the argument similar to (<ref>) to rewrite the sums in the last step. This theorem is essential for the main statement of this section (and also of this paper). The collection {^_λ} freely and finitely generates ^k_|_U.Consequently, ^k_|_U is a sheaf and ^k_ is locally freely and finitely generated sheaf of graded ^∞_-modules of a constant graded rank.The formula (<ref>) shows that a finite collection {^_λ}, wheregoes over ∪_q = 0^k^n(q) and λ∈{1,…,r}, generates ^k_|_U. One only has to prove that it generates it freely. Let V ∈(U) and suppose that 0 = ∑_q=0^k∑_∈^n(q) (-1)^q1/! f_^λ^_λ|_Vis a zero element of ^k_(V) of some degree ℓ∈ and some functions f_^λ∈^∞_(V). Note that|f_^λ| = ℓ - |^| - |ϑ_λ|,for each ∈∪_q = 0^k^n(q) and each λ∈{1,…,r}. We must argue that all of those functions are zero. We will do so by utilizing the results of Proposition <ref>. Indeed, let D ∈^k_(V) be arbitrary. Let us apply the operator ^k_0[D] on both sides of the equation (<ref>). Since ^k_0[D] is ^∞_(V)-linear of degree |D|, one finds0 = ∑_q=0^k∑_∈^n(q) (-1)^q + |D|(ℓ - |^| - |ϑ_λ|)1/! f_^λ·^k_0[D]( ^_λ|_V).Now, it follows from the formula (<ref>) that for each ∈^n(q) and λ∈{1, …, r}, one has ^k_0[D]( ^_λ|_V) = (-1)^q [^]^μ_λ·Φ_μ|_V.See Theorem <ref> and above for the notation. We thus obtain the expression 0 = ∑_q=0^k∑_∈^n(q) (-1)^|D|(ℓ - |^| - |ϑ_λ|)1/! f_^λ· [^]^μ_λ·Φ_μ|_V,for every D ∈^k_(V). Now, fix ∈∪_q=0^k^n(q) and κ,ρ∈{1,…,r }. Choose D = _^κ_ρ|_V and use the fact that then [^]^μ_λ = ! δ_^δ^μ_ρδ^κ_λ. See (<ref>, <ref>) for definitions and details. This gives0 = (-1)^(|ϑ_ρ| - |ϑ_κ| - |^|)(ℓ - |^| - |ϑ_κ|) f_^κ·Φ_ρ|_V.Note that there is no longer any summation on the right-hand side. Since the collection {Φ_λ|_V}_λ=1^r freely generates Γ_(V), this implies f_^κ = 0. Sinceand κ were arbitrary, this proves the claim. Now, since we have just proved that ^k_|_U is a finitely and freely generated presheaf, it is in fact a sheaf. There is thus a canonical sheaf isomorphism ^k_|_U≅^k_|_U. We will henceforth use this identification. This proves that ^k_ is locally freely and finitely generated. Finally, it is of a constant graded rank, since if (q_j)_j ∈ := ( ^k_|_U), one has q_j = #{ (,λ) ∈∪_q = 0^k^n(q) ×{1, …, r }| |^| + |ϑ_λ| = j },which is a number independent of the used graded local chart (U,φ) and a local frame {Φ_λ}_λ=1^r. ^k_ is a sheaf of sections a graded vector bundle ^k_ called the k-th order jet bundle of a graded vector bundle . Similarly to the graded vector bundle of k-th order differential operators, one can view the assignment of the k-th order jet bundle as a functor in the category of graded vector bundles over a given graded manifold . For each k ∈_0, the assignment ↦^k_ defines a functor ^k: ^∞_→^∞_.Let F: →' be a graded vector bundle map, that is a ^∞_-linear sheaf morphism F: Γ_→Γ_' of any given degree |F|. We must produce a ^∞_-linear sheaf morphism ^kF: ^k_→^k_'.We do so by first producing a ^∞_-linear presheaf morphism ^k_0F: ^k_→^k_. 
Let (^k_0F)_U(g ⊗_(k)ψ) := (-1)^|F||g| g ⊗_(k) F_U(ψ), for all U ∈(M), g ∈^∞_(U) and ψ∈Γ_(U). For each f ∈^∞_(U), one has(^k_0F)_U∘δ_f = (-1)^|F||f|δ_f∘ (^k_0F)_U.This is easy to verify. This ensures that (^k_0F)_U is well-defined. It is easily seen to be ^∞_(U)-linear of degree |F| and natural in V. Moreover, one finds^k_0_ = _^k_, ^k_0(G ∘ F) = ^k_0(G) ∘^k_0(F).This makes ↦^k_ into a functor from ^∞_ to the category of presheaves of ^∞_-modules. Since ^k_ = ((^k_), whereandare functors, let ^kF := ((^k_0F)).As already noted in Remark <ref>, one can construct a unique (up to a diffeomorphism) total space graded manifold ^k_, together with a surjective submersion ϖ_k: ^k_→. In particular, one can use the trivial “line bundle” = × to obtain the graded manifold^k_[] := ^k_×,called the k-th order jet manifold of . Recall that Γ_× := ^∞_. § GRADED JET BUNDLES: PROPERTIES So far it is not clear why ^k_ should be called a sheaf of k-th order jets. We will now attempt to justify this construction by proving its properties which are obvious generalizations of their counterparts in ordinary geometry. Let us start by making the following list of expected results: * For each k ∈_0, there is a canonical “jet prolongation” sheaf morphism ^k: Γ_→^k_.* ^0 is a ^∞_-linear sheaf isomorphism. Consequently, there is a canonical isomorphism ≅^0_. * For each k ∈_0, there is a canonical isomorphism ^k_≅( ^k_, ). * For any ℓ≤ k, there is a surjective graded vector bundle map π^k,ℓ: ^k_→^ℓ_. * The fiber of ^k_ at any a ∈ M can be interpreted as a graded set of equivalence classes of germs of sections whose components have the same Taylor polynomials of the order k at a. * For ordinary vector bundles over ordinary manifolds, this gives ordinary jet bundles.We will now discuss these items one by one. * Jet prolongations. For each k ∈_0, let us first construct a presheaf morphism ^k: Γ_→^k_ as follows. For each U ∈(M) and ψ∈Γ_(U), set^k_U(ψ) := [1 ⊗_(k)ψ]^∙This map is ^∞_(U)-linear with respect to theaction on ^k_(U), but not with respect to the other action . It is natural in U, whence it defines a presheaf morphism ^k: Γ_→^k_. The sheaf morphism ^k: Γ_→^k_ is then obtained as ^k := ∘^k, where : ^k_→^k_ is a canonical presheaf morphism coming from the sheafification procedure. For each ψ∈Γ_(U), ^k_U(ψ) is called the k-th order jet prolongation of the section ψ. Note that the image of ^k “locally generates” ^k_. Indeed, for every point m ∈ M, there is U ∈_m(M) where ^k_|_U≅^k_|_U. For any V ∈(U), every element of ^k_(V) can be then written as a combination f^j^k_V(ψ_j) for some finite collection {ψ_j}_j⊆Γ_(V) and functions f^j∈^∞_(V).* Degree zero jets. First, observe that already ^0_ is a geometric sheaf isomorphic to Γ_. To see this, note that for each U ∈(M), one has _(0)(U) = { (f · g) ⊗ψ - (-1)^|f||g| f ⊗ (g ·ψ) | f,g ∈^∞_(U),ψ∈Γ_(U) }.It follows that the quotient is precisely the tensor product of two graded ^∞_(U)-modules:^0_(U) ≡(U) / _(0)(U) ≡^∞_(U) ⊗_^∞_(U)Γ_(U).Since ^∞_(U) is the unit with respect to the monoidal product ⊗_^∞_(U), there must be a canonical isomorphism of the graded ^∞_(U)-modules Γ_(U) and ^0_(U). It is given by'^0_U(ψ) := 1 ⊗_(0)ψ.This is easily seen to define a ^∞_-linear presheaf morphism '^0: Γ_→^0_. Note thatandcoincide. In particular, ^0_ is in fact a geometric sheaf, see Proposition <ref> and Proposition <ref>. 
Hence ^0≡∘'^0 is a ^∞_-linear sheaf isomorphism of Γ_ and ^0_ and ^0≡∘^0 is a ^∞_-linear sheaf isomorphism of Γ_ and ^0_.^0: Γ_→^0_ is a ^∞_-linear sheaf isomorphism, which can be viewed as a canonical graded vector bundle isomorphism ^0: →^0_. * Factorization of differential operators.There is a crucial consequence of Proposition <ref>.Let k ∈_0 and U ∈(M) be arbitrary. Let D ∈^k_(U). Then there is a unique ^∞_(U)-linear map ^k[D]: ^k_(U) →Γ_(U) satisfying D = ^k[D] ∘^k_U.Let D ∈^k_(U). Note that in Proposition <ref>, we have in fact constructed a unique ^∞_(U)-linear map ^k_0[D]: ^k_(U) →Γ_(U) satisfying the relation ^k_0[D] ∘^k_U = D.One can define a ^∞_|_U-linear presheaf morphism ^k_0[D]: ^k_|_U→Γ_|_U by declaring^k_0[D]_V := ^k_0[D|_V],for each V ∈(M). There thus exists a unique ^∞_|_U-linear sheaf morphism ^k[D]: ^k|_U→Γ_|_Uhaving the property ^k[D]∘|_U = ^k_0[D]. We declare ^k[D] := ^k[D]_U. But then ^k[D] ∘^k_U = ^k[D]_U∘ (_U∘^k_U) = ^k_0[D]_U∘^k_U = ^k_0[D] ∘^k_U = D. Let F: ^k_(U) →Γ_(U) be another ^∞_(U)-linear map having the required property. By a straightforward generalization of Proposition <ref>, there exists a unique sheaf morphism F: ^k_|_U→Γ_|_U satisfying F = F_U. We claim that for any V ∈(U), one has F_V∘^k_V = D|_V. First, observe that the left-hand side is in _(V). This is because it is -linear and whenever ψ|_W = 0, one finds(F_V∘^k_V)(ψ)|_W = (F_W∘^k_W)(ψ|_W) = 0.Moreover, for every ψ∈Γ_(U), one has (F_V∘^k_V)(ψ|_V) = (F_U∘^k_U)(ψ)|_V = (F ∘^k_U)(ψ)|_V = D(ψ)|_V.The uniqueness of a local operator satisfying (<ref>) now implies F_V∘^k_V = D|_V. Whence(F_V∘_V) ∘^k_V = D|_V,which in turn implies F_V∘_V = ^k_0[D|_V] by the uniqueness claim of Proposition <ref>. Since V ∈(U) is arbitrary, this means that F∘|_U = ^k_0[D] and thus F =^k[D]. This obviously implies F = ^k[D] and the uniqueness claim follows. One may utilize this statement to prove that this correspondence is one-to-one and well-behaved with respect to restrictions and graded module structures.For each k ∈_0, there exists a canonical graded vector bundle isomorphism Ψ^: ^k_→(^k_,).A graded vector bundle isomorphism is a sheaf isomorphism of the corresponding sheaves of sections. For each U ∈(M), one thus has to construct a ^∞_(U)-linear map Ψ^_U: ^k_(U) →^^∞_(U)( ^k_(U), Γ_(U)),and prove that it is natural in U. For each D ∈^k_(U), we use Proposition <ref> and defineΨ^_U(D) := ^k[D].First of all, one must argue that it is ^∞_(U)-linear in D. But this follows immediately from the uniqueness claim of Proposition <ref>. Next, let V ∈(U). We claim that ^k[D]|_V∘^k_V = D|_V. It is easy to check that the left-hand side is in _(V), and for any ψ∈Γ_(U), one has (^k[D]|_V∘^k_V)(ψ|_V) = (^k[D] ∘^k_U)(ψ)|_V = D(ψ)|_V.Hence, by Proposition <ref>, we have ^k[D]|_V∘^k_V = D|_V. But it follows from the uniqueness claim of Proposition <ref> that necessarily ^k[D|_V] = ^k[D]|_V. This can be translated as Ψ^_V(D|_V) = Ψ^_U(D)|_V,for all D ∈^k_(U). This proves that Ψ^ is a ^∞_-linear sheaf morphism. It remains to prove that is is an isomorphism. For each U ∈(M), we will construct the inverse map (Ψ^_U)^-1. If F: ^k_(U) →Γ_(U) is ^∞_(U)-linear of degree |F|, we define(Ψ^_U)^-1(F) := F ∘^k_U.We must argue that the right-hand side is an element of ^k_(U). To do so, for any ^∞_(U)-linear mapF̂: ^k_(U) →Γ_(U) of degree |F|, let K[F̂] := F̂∘^k_U: Γ_(U) →Γ_(U). 
For any f ∈^∞_(U), it is straightforward to prove the relationK[F̂]^(1)_(f) = -K[F̂∘δ_f],This can be easily iterated to prove that for every f_1,…,f_k+1∈^∞_(U), one has K[F̂]^(k+1)_(f_1,…,f_k+1) = (-1)^k+1 K( F̂∘δ^(k+1)_(f_1,…,f_k+1)) = 0.We mildly abuse the notation and use the symbol δ_f also for the induced map on ^k_(U). This proves that K[F̂] ∈^k_(U). Consequently, observe that (Ψ^_U)^-1(F) = (F ∘_U) ∘^k_U = K[F ∘_U] ∈^k_(U). The fact that (Ψ^_U)^-1 is a two-sided inverse to Ψ_U^ is obvious. * The inverse system over _0. Suppose ℓ≤ k be two non-negative integers. We aim to construct a surjective ^∞_-linear sheaf morphism π^k,ℓ: ^k_→^ℓ_.For each U ∈(M), let us define a ^∞_(U)-linear map π̂^k,ℓ_U: ^k_(U) →^ℓ_(U) asπ̂^k,ℓ_U[ g ⊗_(k)ψ]^∙ := [g ⊗_(ℓ)ψ]^∙,for each g ∈^∞_(U) and ψ∈Γ_(U). It is easy to check that it is well-defined, since _(k)(U) ⊆_(ℓ)(U). It is ^∞_(U)-linear with respect to bothand , and it is natural in U. It defines a ^∞_-linear presheaf morphism fitting into the commutative diagramld[swap]♮^(k)rd♮^(ℓ)^k_d^ℓ_d ^k_[dashed]rrπ̂^k,ℓ^ℓ_. Since all vertical arrows are surjective presheaf morphisms, so is π̂^k,ℓ. Finally, one can construct π^k,ℓ: ^k_→^ℓ_ as a unique ^∞_-linear morphism fitting into the commutative diagram^k_drπ̂^k,ℓ ^ℓ_d ^k_[dashed]rπ^k,ℓ ^ℓ_One only has to argue that π^k,ℓ is surjective. Since for sheaves of graded ^∞_-modules, cokernels happen to be sheaves, one can argue that π^k,ℓ is surjective, iff the induced map (π^k,ℓ)_m of stalks is surjective for every m ∈ M. Stalks of ^k_ and its sheafification ^k_ are canonically identified via the isomorphism of the stalks induced by . The claim thus follows from the surjectivity of the morphism (π̂^k,ℓ)_m for every m ∈ M, ensured by the surjectivity of π̂^k,ℓ. By definition, π^k,ℓ can be interpreted as a surjective degree zero graded vector bundle map π^k,ℓ: ^k_→^ℓ_ over the identity. We can examine additional properties of these maps. For each ℓ≤ k, let π^k,ℓ: ^k_→^ℓ_ be constructed as above.* For each ℓ≤ k, π^k,ℓ fits into the equationπ^k,ℓ∘^k = ^ℓ. * ( {^k_}_k, {π^k,ℓ}_ℓ≤ k ) forms an inverse system over _0 in the category of sheaves of graded ^∞_-modules. More explicitly, the “transition maps” π^k,ℓ satisfy the the conditionsπ^k,k = _^k_, π^ℓ,q∘π^k,ℓ = π^k,q for anyq ≤ℓ≤ k. * There is a sheaf ^∞_ of graded ^∞_-modules, defined as an inverse limit of the above inverse system:^∞_ := ^k_,together with a collection of surjective degree zero ^∞_-linear sheaf morphisms π^∞,k: ^∞_→^k_, such that for every ℓ≤ k, one hasπ^k,ℓ∘π^∞,k = π^∞,ℓ. * For each k ∈, one has a short exact sequence of graded vector bundles overand vector bundle maps over _0 r ( S^k(T), ) rJ_(k) ^k_rπ^k,k-1 [2em] ^k-1_r0.If one declares ^-1_ = 0, the statement is true also for k = 0. To prove (i), note that for any ℓ≤ k, one has π̂^k,ℓ∘^k = ^ℓ, where ^k: Γ_→^k_ are defined by (<ref>). Since ^k is uniquely determined by the equation ^k = ∘^k and π^k,ℓ are uniquely determined by (<ref>), the equation (<ref>) follows. The equations in (<ref>) follow immediately from the same ones satisfied by the presheaf morphisms π̂^k,ℓ, together with the uniqueness of the map completing the diagram (<ref>). This proves (ii). Next, it is easy to see that all products and equalizers exist in the category of sheaves of graded ^∞_-modules. This implies that all limits exist in this category. Consequently, the inverse limit (<ref>) exists. The collection {π^∞,k}_k ∈_0 is then nothing but its universal limiting cone. We have to argue that π^∞,k are surjective. 
To do so, observe that one can describe ^∞_ explicitly. Indeed, for each U ∈(M) and i ∈, one has (^∞_(U))_i = { (σ_ℓ)_ℓ∈_0∈∏_k ∈ (^ℓ_(U))_i|π^q,s_U(σ_q) = σ_s for alls ≤ q }.Moreover, one has π^∞,k_U[(σ_ℓ)_ℓ∈] := σ_k for each k ∈_0. Since ^∞_ is a sheaf of graded ^∞_-modules, it suffices to argue that for each m ∈ M, there exists U ∈_m(M), such that π^∞,k_U: ^∞_(U) →^k_(U) is surjective. As noted in the last paragraph of (1), for each m ∈ M, there exists U ∈_m(M), such that ^k_(U) is generated by the image of ^k_U: Γ_(U) →^k_(U). Let σ_k∈^k_(U) be an arbitrary given section. We can thus find a finite collection of sections {ψ_j}_j in Γ_(U), such that σ_k = f^j^k_U(ψ_j)for some functions f^j∈^∞_(U) of degree |f^j| = |σ_k| - |ψ_j|. For every ℓ∈_0, let us define σ_ℓ := f^j^ℓ_U(ψ_j) ∈^ℓ_(U).Now, suppose that s ≤ q is a pair of arbitrary integers. Since π^q,s_U is ^∞_(U)-linear, we can utilize the equation (<ref>) to obtain π^q,s_U(σ_q) = π^q,s_U( f^j^q_U(ψ_j)) = f^j (π^q,s_U∘^q_U)(ψ_j) = f^j^s_U(ψ_j) = σ_s.This proves that (σ_ℓ)_ℓ∈_0∈^∞_(U). By construction, one has π^∞,k_U[(σ_ℓ)_ℓ∈] = σ_k. Since σ_k was arbitrary, this proves that π^∞,k_U is surjective. This concludes the proof of (iii). To prove (iv), we have to construct a fiber-wise injective graded vector bundle map J_(k): (S^k(T), ) →^k_,for every k ∈. Suppose that (U,φ) is a graded local chart forand let {Φ_λ}_λ=1^r be a local frame forover U. We will construct a degree zero ^∞_(U)-linear map (J_(k))_U: ^^∞_(U)( _(U), Γ_(U)) →^k_(U).Recall that _(U) is freely and finitely generated by a collection {∂^_}_∈^n(k), see (<ref>). Suppose ϕ: _(U) →Γ_(U) is a given ^∞_(U)-linear map of degree |ϕ|. We declare(J_(k))_U(ϕ) := ∑_∈^n(k)1/! (-1)^|ϑ_λ|(1+|ϕ|) + |^|Φ^λ( ϕ(∂_^)) ^_λ,where ^_λ∈^k_(U) are the generators defined by (<ref>) and (<ref>). See also Theorem <ref>. It is easy to see that (J_(k))_U(ϕ) is ^∞_(U)-linear in ϕ. If (U',φ') is a different graded local chart forand {Φ'_λ}_λ=1^r is a different local frame forover U', one has to verify that the definitions (<ref>) agree on U ∩ U'. To do so, one has to derive how ∂_^ and ^_λ transform on U ∩ U' under the change of coordinates (and local frames). To derive the latter, one has to employ (<ref>) together with (<ref>). The calculation is straightforward, albeit a bit tedious, so we omit it here. Now, the local frame for (S^k(T), ) over U can be chosen to be the “standard basis” {^_λ}, see (<ref>), acting on each X = 1/! X^·∂^_∈^k_(U) as ^_λ(X) = 1/! (-1)^(|ϑ_λ| - |^|)(|X| + |^|) X^·Φ_λ.In other words, one has ^_λ(∂_^) = δ_^·Φ_λ, for every ,∈^n(k) and λ∈{1, …, r }. Plugging this into (<ref>), one finds the expression(J_(k))_U(^_λ) = 1/! (-1)^|^|(1 + |ϑ_λ|)^_λ,for every ∈^n(k) and λ∈{1, …, r}. This proves that J_(k) is fiber-wise injective and its image is precisely the kernel of π^k,k-1. The exactness of (<ref>) follows. Finally, for k = 0, we have X̃^0_≅^∞_ and a canonical vector bundle isomorphism ≅(S^0(T),), sending each ψ∈Γ_(U) to a ^∞_(U)-linear map ϕ(f) := (-1)^|ψ||f| f ·ψ. Observe that the only element of ^n(0) is , we declare ∂_^ := 1, and ^_λ = ^0_U(Φ_λ). One has (J_(0))_U( ^_λ) = ^0_U(Φ_λ),if we use (<ref>) also for the k = 0 case. This means that J_(0) is the unique vector bundle isomorphism fitting into the commutative diagramld[swap]≅rd^0(S^0(T),) [dashed]rrJ_(0)^0_Hence (<ref>) is exact also for k = 0, if we declare ^-1_ = 0. This finishes the proof.For a non-trivial , ^∞_ is not a sheaf of sections of a graded vector bundle, except whenhas only odd coordinates. 
Indeed, suppose ^∞_ = Γ_^∞_ for a graded vector bundle ^∞_. Then for each k ∈_0, we have a surjective vector bundle map π^∞,k: ^∞_→^k_, which forces the inequality ( ^∞_) ≥( ^k_). On the other hand, one has( ^k_) = ( ^k-1_) + ( (S^k(T),)),which follows from the exactness of (<ref>). This shows that (^k_) grows strictly in k, iff ( S^k(T)) is non-zero for all k. Ifhas some coordinates of even degree (including 0), this is the case. The above inequality then proves that ^∞_ cannot have a finite total rank, which cannot happen (in our setting).The only exception ishaving all coordinates odd. In particular, one has M = {∗}. Let n := ∑_j ∈ n_j, where (n_j)_j ∈ := (). Since n_j = 0 whenever j is even, it is easy to see that S^k(T) = 0 for all k > n. It follows from the exactness of (<ref>) that π^k,n: ^k_→^n_ are ^∞_-linear sheaf isomorphisms for all k > n. But this ensures that π^∞,n: ^∞_→^n_ is a ^∞_-linear sheaf isomorphism. We see that in this case ^∞_ is a sheaf of sections of a graded vector bundle isomorphic to ^n_.* Fibers are jet spaces. Let a ∈ M be a fixed point of the underlying manifold M. Consider the Jacobson radical _,a⊆^∞_,a of the stalk of the sheaf ^∞_ at a, that is the unique maximal graded ideal, having the explicit form_,a = { [f]_a∈^∞_,a| f(a) = 0 }.Let us denote the canonical action of ^∞_,a on Γ_,a as , that is let [f]_a [ψ]_a := [f ·ψ]_a, for all [f]_a∈^∞_,a and [ψ]_a∈Γ_,a. The reason for this will become clear later. There is another action , given for each [f]_a∈^∞_,a and [ψ]_a∈Γ_,a by[f]_a [ψ]_a := f(a) · [ψ]_a.For each k ∈_0, consider the graded ^∞_,a-submodule Γ_,a^[k] of the stalk Γ_,a defined as Γ^[k]_,a := (_,a)^k+1Γ_,a,that is the graded ^∞_,a-submodule (with respect to ) generated by elements of the form [f]_a [ψ]_a, where [f]_a∈ (_,a)^k+1 and [ψ]_a∈Γ_,a. Note that Γ^[k]_,a forms a graded submodule also with respect to the action .For each k ∈_0, define the k-th order jet space ^k_,a ofat a as the quotient ^∞_,a-module^k_,a := Γ_,a / Γ^[k]_,a,and denote the respective quotient map as ^k_a: Γ_,a→^k_,a. We say that the [ψ]_a, [ψ']_a∈Γ_,a have the same k-th order jet at a, if ^k_a[ψ]_a = ^k_a[ψ']_a.Note that ^k_,a inherits both ^∞_,a actions, which we shall again denote asand .Let us now argue that this definition can be restated in a more standard way. Let (U,φ) be a graded local chart forover U ∈_a(M), and let {Φ_λ}_λ =1^r be a local frame forover U. Let k ∈_0.Suppose [ψ]_a∈Γ_,a is represented by ψ∈Γ_(U) and write ψ = ψ^λ·Φ_λ for unique functions ψ^λ∈^∞_(U). Then [ψ]_a∈Γ^[k]_,a, iff (∂_^ψ^λ)(a) = 0,for all λ∈{1,…,r} and ∈∪_q=0^k^n(q). See Section <ref> for the notation.First, suppose that [ψ]_a∈Γ^[k]_,a. It follows that [ψ^λ]_a∈ (_,a)^k+1 for every λ∈{1,…,r}. This ideal is generated by the monomials in [^A_a]_a of order k+1, where ^A_a := ^A - ^A(a). Consequently, there is V ∈_m(U), such that ψ^λ|_V is a finite linear combination of some terms of the form ^B_1_a⋯^B_k+1_a· hfor some h ∈^∞_(V). For any ∈∪_q=0^k^n(q), we have w() < k+1 and thus clearly (∂^_ψ^λ)(a) = 0. Conversely, suppose that ψ representing [ψ]_a satisfies (<ref>) for all λ∈{1,…,r} and ∈∪_q=0^k^n(q). This implies that for each λ∈{1,…,r} , the k-th order Taylor polynomial of ψ^λ at a vanishes, T_a^k(ψ^λ) = 0. Consequently, one has ψ^λ = R_a^k(ψ^λ) ∈ (^a_(U))^k+1,see Lemma 3.4 in <cit.>. This immediately implies [ψ]_a = [ψ^λ]_a· [Φ_λ]_a∈Γ^[k]_,a.The collection {^k_a[ ^_a·Φ_λ]_a}, wheregoes over ∪_q=0^k^n(q) and λ goes over {1, …, r}, forms a total basis for the graded vector space ^k_,a. 
In particular, one has ( ^k_,a) = ( ^k_).The first statement follows easily from definitions and the previous proposition. One has to utilize a modification of (<ref>), namely the fact that(∂_^(^_a))(a) = ! ·δ_^,for all ,∈^n. Next, note that | ^k_a[ ^_a·Φ_λ]_a| = |^| + |ϑ_λ|. This is the same degree as |^_λ|, see (<ref>). The second statement then immediately follows from Theorem <ref>.Let us now argue that ^k_,a has a structure not dissimilar from the one constructed in Section <ref>. First, to each [f]_a∈^∞_,a, one can assign a ^∞_,a-linear (with respect to bothand ) map δ'_[f]_a: Γ_,a→Γ_,a of degree |f| given byδ'_[f]_a[ψ]_a := [f]_a [ψ]_a - [f]_a [ψ]_a≡ [(f(a) - f) ·ψ]_a,where we assume that the respective germs are represented by some f ∈^∞_(U) and ψ∈Γ_(U) for some U ∈_a(M). We immediately see the following statement:One can writeΓ_,a^[k] = {δ'_[f_1]_a…δ'_[f_k+1]_a[ψ]_a| [f_1]_a, …, [f_k+1]_a∈^∞_,a, [ψ]_a∈Γ_,a}.Compare this formula to (<ref>).For every a ∈ M, there is a canonical ^∞_,a-linear isomorphism of the fiber (^k_)_a of the graded vector bundle ^k_ and the k-th order jet space ^k_,a ofat a. Recall that the fiber _a of a graded vector bundleat a ∈ M is defined as _a := ⊗_^∞_,aΓ_,a,where the action of ^∞_,a onis given by [f]_aλ := f(a) ·λ. It is easy to see that there is a canonical isomorphism _a≅Γ_,a / (_,a·Γ_,a).For any U ∈_a(M) and ψ∈Γ_(U), the value of ψ at a is the element ψ|_a := 1 ⊗ [ψ]_a∈_a. Equivalently, this is precisely the equivalence class of [ψ]_a in the quotient (<ref>). To define a ^∞_,a-linear isomorphism Ψ_a: (^k_)_a→^k_,a, we thus start by definingΨ_a^0: ^k_,a→^k_,a.Fix any U ∈_a(M). Let [h ⊗_(k)ψ]^∙_a be the germ of the section [h ⊗_(k)ψ]^∙∈^k_(U), where h ∈^∞_(U) and ψ∈Γ_(U). We declareΨ_a^0[h ⊗_(k)ψ]^∙_a := h ^k_a[ψ]_a≡ h(a) ·^k_a[ψ]_a.Observe that Ψ^0_a will be ^∞_,a-linear with respect to the actionson ^k_,a andon ^k_,a, respectively. One has to check that this is a well-defined map. The definition clearly does not depend on the choice of the section [h ⊗_(k)ψ]^∙ representing the element [h ⊗_(k)ψ]^∙_a. It also does not depend on the representative h ⊗_(k)ψ of the geometric class [h ⊗_(k)ψ]^∙, as every element of ^k_(U)^∙ is also in ^a_(U) ^k_(U) and the rest is taken care of by the ^∞_,a-linearity of Ψ^0_a. Moreover, for every f ∈^∞_(U), one has Ψ^0_a[ δ_f(h ⊗_(k)ψ)]^∙_a = δ'_[f]_a( Ψ^0_a[h ⊗_(k)ψ]^∙_a).This ensures that the image of a germ of a geometric class of an element of _(k)(U) vanishes. Hence Ψ^0_a is a well-defined ^∞_,a-linear map from ^k_,a to ^k_,a. Moreover, observe that Ψ^0_a( _,a^k_,a) = 0.Finally, since ^k_ is just a sheafification of ^k_, there is a canonical isomorphism ^k_,a≅^k_,a of the respective stalks, and Ψ^0_a induces a degree zero ^∞_,a-linear map Ψ_a: (^k_)_a≡^k_,a / (_,a^k_,a) →^k_,aThe inverse to Ψ_a is for each [ψ]_a∈Γ_,a defined by Ψ_a^-1( ^k_a[ψ]_a) := { (^k)_a[ψ]_a}|_a,where (^k)_a: Γ_,a→^k_,a is the -linear mapping induced by a sheaf morphism ^k: Γ_→^k_. One has to check that it is well-defined. Suppose [ψ]_a∈Γ^(k)_,a. But this means that there is U ∈_a(M), such that ψ∈Γ_(U) and it is a finite linear combination of sections of the formf_1⋯ f_k+1·ψ',where f_1,…,f_k+1∈^a_(U). It suffices to check that the value of the k-th jet prolongation ^k_U( f_1⋯ f_k+1·ψ') at a vanishes. 
One can certainly choose U so that ^k_(U) ≅^k_(U) and ^k_U(f_1⋯ f_k+1·ψ') = [1 ⊗_(k) (f_1⋯ f_k+1·ψ')]^∙.By expanding the relation δ^(k+1)_(f_1,…,f_k+1) [1 ⊗_(k)ψ]^∙ = 0, it is easy to see that [1 ⊗_(k) (f_1⋯ f_k+1·ψ')]^∙_a∈_,a^k_,a,which proves the claim. Hence Ψ_a^-1 is well-defined. It is straightforward to prove that it is a two-sided inverse to Ψ_a. This finishes the proof.* Ordinary limit. We have to check that for an ordinary vector bundle E over an ordinary manifold M, ^k_E constructed in this paper is isomorphic to the usual notion of k-th order jet bundle. See e.g. the standard reference <cit.>. Let us denote the “standard” version as ^k_. Recall that its total space is defined to be the disjoint union of its jet spaces:^k_E := _a ∈ M^k_E,a.The projection ϖ: ^k_E→ M is defined in an obvious way. Choose some local chart (U,φ) and a local frame {Φ_λ}_λ=1^r for E. The induced local trivialization chart ϕ: ϖ^-1(U) → U ×^ℓ, where ℓ := (^k_E) = r ·n+kk is defined for each a ∈ U byϕ(^k_a[ψ]_a) := (a, (( ∂^_ψ^λ)(a))_λ,),where [ψ]_a is represented by some ψ = ψ^λ·Φ_λ∈Γ_E(U), and the indices in the sequence go over all λ∈{1,…,r} and all ∈∪_q=0^k^n(q). By Proposition <ref>, this is well-defined.There is a canonical vector bundle isomorphism Ψ: ^k_E→^k_E over _M for each k ∈_0.Since both ^k_E and ^k_E are ordinary vector bundles, Ψ is uniquely determined by its restriction to each fiber. For each a ∈ M, we thus have to define a linear isomorphism Ψ_a: (^k_E)_a→ (^k_E)_a≡^k_E,a.But we constructed such a map for each a ∈ M in Proposition <ref>. One only has to prove that Ψ is smooth. Let (U,φ) be a local chart and {Φ_λ}_λ=1^r a local frame for E over U. As noted in Theorem (<ref>), the collection {^_λ}, where λ∈{1,…,r} and ∈∪_q=0^k^n(q), forms a local frame for ^k_ over U. It is easy to see from (<ref>, <ref>) that one hasΨ_a( ^_λ |_a) = ^k_a[ ^_a·Φ_λ]_a,for each a ∈ U, λ∈{1,…,r} and ∈⋃_q=0^k^n(q). Composing this with the above local trivialization chart ϕ then shows that Ψ is smooth. To conclude this paper, let us discuss a bit the uniqueness of ^k_. First, up to a graded vector bundle isomorphisms, it is determined uniquely by the collection of short exact sequences (<ref>). Since short exact sequences split in the category ^∞_ of graded vector bundles over , we get the isomorphism ^k_≅^k-1_⊕(S^k(T),) for each k ∈_0. We can thus iteratively construct ^k_, starting with ^0_≅(S^0(T),) ≅. Second, one can easily generalize Section <ref> and Section <ref> to construct a graded vector bundle ^k_,' of k-th order differential operators fromto '. It is then straightforward to modify the proof of Proposition <ref> to obtain a canonical graded vector bundle isomorphismΨ^,': ^k_,'→(^k_,').natural in bothand '. Since the “internal” Yoneda functor ↦(,·) is in a sense fully faithful, the existence of the isomorphism (<ref>) determines the “representing” object ^k_ up to a graded vector bundle isomorphism.
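As a simple consistency check of the iterative construction noted above, consider the ordinary (non-graded) setting of a rank r vector bundle E over an n-dimensional manifold M; the computation below is only an illustration of that classical case and is not part of the graded construction itself. Iterating the splitting J^k E ≅ J^{k-1} E ⊕ Hom(S^k(TM), E) down to k = 0 gives J^k E ≅ ⊕_{q=0}^{k} Hom(S^q(TM), E). Since Hom(S^q(TM), E) has rank r · \binom{n+q-1}{q}, the identity ∑_{q=0}^{k} \binom{n+q-1}{q} = \binom{n+k}{k} recovers precisely the rank ℓ = r · \binom{n+k}{k} used in the local trivialization charts of the ordinary limit. In local coordinates this is the familiar statement that the k-th order jet of a section ψ = ψ^λ · Φ_λ at a point a is determined by the partial derivatives of its components over all multi-indices of weight at most k, of which there are exactly r · \binom{n+k}{k}.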
http://arxiv.org/abs/2311.15754v1
{ "authors": [ "Jan Vysoky" ], "categories": [ "math.DG", "math-ph", "math.MP" ], "primary_category": "math.DG", "published": "20231127121748", "title": "Graded Jet Geometry" }
Video Anomaly Detection via Spatio-Temporal Pseudo-Anomaly Generation: A Unified Approach

Ayush K. Rai, Tarun Krishna, Feiyan Hu, Alexandru Drimbarean, Kevin McGuinness, Alan F. Smeaton, Noel E. O'Connor

January 14, 2024
=============================================================================

*Equal contribution. †Equal supervision.

Video Anomaly Detection (VAD) is an open-set recognition task, which is usually formulated as a one-class classification (OCC) problem, where the training data comprises videos with normal instances while the test data contains both normal and anomalous instances. Recent works have investigated the creation of pseudo-anomalies (PAs) using only the normal data, making strong assumptions about real-world anomalies (regarding the abnormality of objects and the speed of motion) in order to inject prior information about anomalies into an autoencoder (AE) based reconstruction model during training. This work proposes a novel method for generating generic spatio-temporal PAs by inpainting a masked-out region of an image using a pre-trained Latent Diffusion Model and by further perturbing the optical flow using mixup to emulate spatio-temporal distortions in the data. In addition, we present a simple unified framework to detect real-world anomalies under the OCC setting by learning three types of anomaly indicators, namely reconstruction quality, temporal irregularity and semantic inconsistency. Extensive experiments on four VAD benchmark datasets, namely Ped2, Avenue, ShanghaiTech and UBnormal, demonstrate that our method performs on par with other existing state-of-the-art PA generation and reconstruction based methods under the OCC setting. Our analysis also examines the transferability and generalisation of PAs across these datasets, offering valuable insights by identifying real-world anomalies through PAs.

§ INTRODUCTION

Video Anomaly Detection <cit.> refers to the task of discovering the unexpected occurrence of events that deviate from known normal patterns. The rarity of anomalies in the real world and the unbounded nature (open-set recognition <cit.>) of their diversity and complexity lead to unbalanced training datasets for VAD, making it an extremely challenging task. Therefore VAD is commonly addressed as an OCC problem where only normal data is available to train a model <cit.>.

Reconstruction based approaches exploiting an AE are usually adopted to tackle the OCC task <cit.>. The intuition behind this is that, during training, the AE learns to encode normal instances in its feature space, with the assumption that during the test phase a high reconstruction error corresponds to an anomaly and a low reconstruction error indicates normal behaviour. Contrary to this, <cit.> observed that, when trained in this setting, the AE also learns to reconstruct anomalies with high accuracy, resulting in a low reconstruction error in the testing phase. Hence, the capability of the AE to distinguish normal and anomalous instances is greatly diminished (Figure 1a in <cit.>).

<cit.> introduced a memory-based AE that restricts the reconstruction capability of the AE by recording prototypical normal patterns in the latent space during training, thereby shrinking its capability to reconstruct anomalous data. However, such methods are highly sensitive to the memory size. A small memory may hinder the reconstruction of normal data, since memorising normal patterns can be interpreted as severely limiting the reconstruction boundary of the AE, resulting in failure to reconstruct even the normal events during the testing phase (Figure 1b in <cit.>).
Astrid <cit.> proposed the generation of two types of PAs (patch based and skip-frame based) to synthetically simulate pseudo-anomalous datafrom normal data and further introduced a novel training objective for the AE to force the reconstruction of only normal data even if the input samples are anomalous. Patch based PAs are generated by inserting a patch of a specific size and orientation from an intruder dataset (e.g. CIFAR-100) using theSmoothMixS <cit.> data augmentation methodwhile in order to create skip-frame based PAs, a sequence of frames is sampled with irregular strides to create anomalous movements in the sequence. The intuition behind this training procedure is based on limiting the reconstruction boundary of the AE near the boundaries of the normal data resulting in more distinctive features between normal and anomalous data (Figure 1c in <cit.>). A notable limitation of the approach proposed in Astrid <cit.> is its heavy reliance on a predefined set of assumptions and inductive biases. These assumptions encompass various aspects, including the specific intruding dataset selected for patch insertion, the patch's size and orientation, and the idea that altering the movement speed by skipping frames could introduce temporal irregularities into the normal data.With such assumptions, there is no guarantee that the test anomalies which comprise of an unbounded set of possible anomalous scenarios would comply with pseudo-anomalous samples. This creates a need for more generic solutions for creating PAs from the normal data. Since VAD is an open-set recognition problem and anomalies present an inexhaustible set of possibilities, every pseudo-anomaly synthesiser carries strong or weak inductive biases and thus it is inherently challengingto emulate real anomalies through PAs. Furthermore, there are other challenges, such as the fact that certain normal behaviours are rare but possible and therefore not well represented in the normal data. This presents an interesting research question: “Is it possible to synthetically generate generic PAs by introducing spatio-temporal distortions into normal data in order to detect real-world anomalies effectively?, and importantly, can such PAs transfer across multiple VAD datasets?” Our work is motivated by <cit.> and extends it by addressing its drawbacksand proposing a more generic pseudo-anomaly generator. We focus on generating PAs by injecting two different types of anomaly indicators, the first being distortion added through image inpaintingperformed by a pre-trained latent diffusion model (LDM) <cit.>, the second being the addition of temporal irregularity through perturbation of the optical flow <cit.> using mixup <cit.>. In addition, our method also measures the semantic inconsistency between normal samples and PAs using semantically rich ViFi-CLIP <cit.> features. This unifies estimation of reconstruction quality, temporal irregularity and semantic inconsistency under one framework. We conduct an extensive study on understanding the generalisation and transferability of such PAs over real-world anomalies. 
Overall, our main contributions are: * We propose a novel and generic spatio-temporal pseudo-anomaly generator for VAD encompassing inpainting of a masked out region inframes using an LDM and applying mixup augmentation to distort the optical flow.* We introduce a unified VAD framework that measures and aggregates three different indicators of anomalous behaviour namely reconstruction quality, temporal irregularity and semantic inconsistency in an OCC setting. * Extensive experiments on Ped2, Avenue, ShanghaiTech and UBnormal show that our method achieves on par performance with other existing SOTA methods (Table <ref>, <ref>) indicating that our method is a generic video anomaly detector and our spatio-temporal pseudo anomaly generation process is transferable across multiple datasets.§ RELATED WORK §.§ Restricting Reconstruction Capacity of an AEA standard approach to address VAD is to adopt an OCC strategy by training an AE model to reconstruct the input data <cit.>. During training, only normal inputs are used for learning the AE with the assumption that reconstruction of anomalies during testing would yield a higher reconstruction error. However,in practice it has been shown that the AE can also reconstructanomalous data<cit.>. <cit.> mitigated this issue by augmenting the AE with memory-based techniques in the latent space to restrict the reconstruction capability of an AE. However the performance of such methods are directly impacted by the choice of the memory size, which may over-constrain the reconstruction power of the AE resulting in poor reconstruction of even the normal events during testing.To alleviate this issue, <cit.> utilised data-heuristic based PAs built on strong assumptions to limit the reconstruction capacity of the AE. Patch-based PAs were generated by inserting a patch from an intruding dataset (CIFAR-100) into the normal data by using techniques such as SmoothMixS <cit.>. For modeling motion-specific anomalous events, PAs were generated by skipping frames with different strides to induce temporal irregularity. The training configuration was set up to minimise the reconstruction loss of the AE with respect to the normal data only. PAs can be interpreted as a type of data-augmentation<cit.>, where instead of creating more data of the same distribution, pseudo-anomalous data is created that belongs to a near-distribution i.e. between the normal and anomaly distributions. <cit.> adopted adversarial training to generate augmented inputs, which were also effective as an adversarial example for the model. Our methodfalls into the category of restricting the reconstruction capability of an AE, wherewe follow the training setup introduced in <cit.>, however we propose simulation of generic spatio-temporal PAs without making bold assumptions about dataset specific anomalies.§.§ Generative ModelingGenerative models have been used to generate out of distribution (OOD) data for various applications in semi-supervised learning (Bad GAN <cit.>, Margin GAN <cit.>), anomaly detection (Fence GAN <cit.>), OOD detection (BDSG <cit.>), medical anomaly detection <cit.> and novelty detection <cit.>. However,such methods mostly work with low dimensional data and are not suitablefor generating OOD data for VAD. OGNet <cit.> and G2D <cit.> exploit a GAN-based generator and discriminator for VAD. 
During the first phase of training, a pre-trained state of the generator is used to createPAs or irregular samples while in the second phase, binary classification is performed using a discriminator to distinguish between normal and PAs samples.We design our model from the perspective of generating generic spatio-temporal PAs where a generative model (pre-trained LDM) is availed to generate spatial PAs while the mixup method is exploited to create temporal PAs from optical flow. §.§ Other VAD MethodsNon-Reconstruction Based Methods: Several non-reconstruction based methods have also been proposed which derive their anomaly scores from various different indicators of anomaly in addition to reconstruction loss. The work presented in <cit.> utilised a future frame prediction task for VAD and estimated optical flow and gradient loss as supplementary cues for anomalous behaviour. <cit.> performed object detection as a pre-processing step under the assumption that anomalous events are always object-centric. Several other works added optical flow components <cit.> to detect anomalous motion patterns and a binary classifier <cit.> to estimate anomaly scores.In our work, we also use a segmentation mask (object detection) and optical flow to generate corresponding spatial and temporal PAs during the training phase. However during inference we do not carry out any object detection and perform anomaly detection solely based on reconstruction of images and on optical flow.Non-OCC methods: There are many other formulations of VAD. <cit.> introduced a self-supervised method where different pretext tasks such as arrow of time, middle-box prediction, irregular motion discrimination and knowledge distillation were jointly optimised for VAD. <cit.> adopted a self-supervised single pre-text task of solving decoupled temporal and spatial jigsaw puzzles corresponding to modeling normal appearance and motion patterns. Several works have also addressed the VAD problem as a weakly supervised problem through multiple instance learning <cit.>. Unsupervised VAD methods involve the cooperation of two networks through an iterative process for pseudo-label generation <cit.>. Zero-shot VAD was introduced in <cit.> where a model was trained on the source domain to detect anomalies in a target domain without any domain adaptation. USTN-DSC <cit.> a proposed video event restoration framework based on keyframes for VAD while EVAL <cit.> presented a technique for video anomaly localisation allowing for human interpretable explanations.§ METHOD §.§ PreliminariesLatent Diffusion Models (LDMs): Diffusion Probabilistic Models (DMs) <cit.> are a class of probabilistic generative models that are designed for learning a data distribution p_data(𝐱). DMs iteratively denoise a normally distributed variable by learning the reverse process of a fixed Markov Chain of length T through a denoising score matching objective <cit.> given by:𝔼_𝐱∼ p_data, τ∼ p_τ, ϵ∼𝒩(0,𝐈) [|| 𝐲 - 𝐟_θ (𝐱_τ;𝐜,τ) ||^2_2], where 𝐱∼ p_data, the diffused input can be constructed by 𝐱_τ = α_τ𝐱 + σ_τϵ, ϵ∼𝒩(0,𝐈) and is fed into a denoiser model 𝐟_θ, (σ_τ, α_τ) denotes the noise schedule parameterised by diffusion-time τ, p_τ is a uniform distribution over τ, c denotes conditioning information and the target vector 𝐲 is either the random noise ϵ or 𝐯 = α_τϵ - σ_τ𝐱. 
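The denoising score matching objective above is compact enough to state in code. The snippet below is a minimal PyTorch-style sketch only; the denoiser network, the cosine noise schedule and the conditioning argument are illustrative assumptions, not the actual components of the pre-trained LDM employed later in this work.

```python
# Minimal PyTorch-style sketch of the denoising score-matching objective.
# `denoiser`, `alpha_sigma` and `cond` are placeholder assumptions.
import torch


def alpha_sigma(tau):
    # Simple cosine schedule; any schedule whose log signal-to-noise ratio
    # decreases monotonically in tau would serve the same purpose.
    return torch.cos(0.5 * torch.pi * tau), torch.sin(0.5 * torch.pi * tau)


def dsm_loss(denoiser, x, cond=None, v_prediction=True):
    """Denoising score-matching loss for a batch of clean inputs x."""
    b = x.shape[0]
    tau = torch.rand(b, device=x.device)               # tau ~ U(0, 1)
    eps = torch.randn_like(x)                          # eps ~ N(0, I)
    a, s = alpha_sigma(tau)
    shape = (b,) + (1,) * (x.dim() - 1)                # broadcast over pixels
    a, s = a.view(shape), s.view(shape)
    x_tau = a * x + s * eps                            # diffused input x_tau
    target = a * eps - s * x if v_prediction else eps  # target y = v or y = eps
    return torch.mean((denoiser(x_tau, cond, tau) - target) ** 2)
```

The only design choice exposed here is the target: predicting the noise or the mixed quantity v, which are the two options mentioned above.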
The forward diffusion process corresponds to gradual addition of the gaussian noise to 𝐱 such that the logarithmic signal-to-noise ratio λ_τ = log(α_τ^2/ σ_τ^2) monotonically decreases.LDMs<cit.> were proposed to make standard DMs efficient by training a VQGAN <cit.> based model to project input images i.e. 𝐱∼ p_data into a spatially lower dimensional latent space of reduced complexity and then reconstructing the actual input with high accuracy. In particular, a regularised AE <cit.> is used to reconstruct the input 𝐱 such that the reconstruction is given by : 𝐱̂ = 𝐟_𝐝𝐞∘𝐟_𝐞𝐧(𝐱) [∘: denotes function composition]𝐱, where 𝐟_𝐞𝐧 and 𝐟_𝐝𝐞 denotes encoder and decoder respectively. Furthermore an adversarial objective is added using a patch-based discriminator <cit.> to ensure photorealistic reconstruction. DM is then trained in the latent space by replacing 𝐱 with its latent representation 𝐳 = 𝐟_𝐞𝐧(𝐱) in eq. (<ref>). This leads to reduction in # of learnable parameters and memory.§.§ Generating Spatial-PAsReal world anomalies are highly context specific without having a ubiquitous definition. Ramachandra et al. <cit.> loosely define them as, the “occurrence of unusual appearance and motion attributes or the occurrence of usual appearance and motion attributes at an unusual locations or times”. Examples of such cases include: an abandoned object in a crowded area or suspicious behaviour of an individual. We address this notion of occurrence of unusual appearance attributes through generation of spatial PAs. Since LDMsachieve state-of-the-art performance for the image inpainting task,this can be exploited as a spatial PAs generator. In particular, we hypothesise that an off-the-shelf pre-trained LDM model <cit.> without any finetuning on VAD datasets can inpaint the image with enough spatial distortion that can serve as spatially pseudo-anomalous samples for training a VAD model. We follow the mask generation strategy proposed in LAMA <cit.> to generate both randomly shaped and object segmentation masks 𝐦. We concatenate image 𝐱, masked image 𝐱⊙𝐦 [ ⊙: denotes point-wise multiplication] and mask 𝐦 over the channel dimension and give this 7 channel input to UNet <cit.>.We denote the normal data samples as 𝐱 unless otherwise explicitly stated. The spatial PAs 𝒫_s(𝐱) is given by:𝒫_s(𝐱)= ℱ_s( stack(𝐱 , 𝐱⊙𝐦, 𝐦); θ),where ℱ_s is the inpainting model that uses latent diffusion with pre-trained model parameters θ. Some examples of the spatial PAs are shown in Figure <ref>.§.§ Generating Temporal-PAsWe address the notion of unusual motion occurrences (such as person falling to ground) through the generation of temporal PAs. Various video diffusion models <cit.> have been proposed, which can be exploited to induce temporal irregularity in the video. However due to limited computational resources, we introduce a simple but effective strategy for the generation of temporal PAs by applyinga vicinal risk minimisation technique mixup <cit.> to the optical flow of the normal videos.More specifically, given a normal video 𝐯, its frame 𝐱_𝐭, andits corresponding segmentation mask 𝐦_𝐭 and another consecutive frame 𝐱_(𝐭+1), we compute the optical flow ϕ(𝐱_𝐭,𝐱_(𝐭+1)) using the TVL1 alogrithm <cit.>. For simplification, we use ϕ as an alias to represent ϕ(𝐱_𝐭,𝐱_(𝐭+1)). Let us consider a rectangular patch𝐩^' in ϕ corresponding to the mask 𝐦_𝐭 in the frame 𝐱_𝐭 with dimensions μ_h and μ_w. 
In order to perturb the optical flow ϕ, we take another rectangular patch 𝐩_𝐫^' at a random location in ϕ with the same dimensions as 𝐩^' and apply mixup to yield 𝐩̂, which is a convex combination of 𝐩^' and 𝐩_𝐫^'given by : 𝐩̂ = λ𝐩^' + (1 - λ) 𝐩_𝐫^', where λ is sampled from a beta distribution with α=0.4 as in <cit.>. We denote the temporal PAs as 𝒫_t(𝐱) given by: 𝒫_t(𝐱) = ℱ_t (ϕ (𝐱_𝐭, 𝐱_(𝐭+1))),where ℱ_t is the temporal PAs generator. Some examples oftemporal PAs are depicted in Figure <ref>. It is important to note that our PAs generation method does not explicitly require segmentation masks, it can also generate PAs using random masks. Since segmentation masks carry semantic meaning, using them enables generation of more semantically informative PAs as further validated by our experiments.§.§ Reconstruction ModelThe training mechanism follows a similar strategy as in <cit.>, where regardless of the input (ℐ) i.e normal (𝐱/ϕ) or PAs (𝒫_s(𝐱)/𝒫_t(𝐱)) the network is forced to reconstruct only the normal input using a 3D-CNN (Convolutional Neural Network) based AE model adapted from the convolution-deconvolution network proposed by <cit.> (Table <ref> in supplementary material (supp.)).We train two different AEs with the aim of limiting their reconstruction capacity by exposing them to spatial and temporal PAs. We represent the spatial (temporal) AE by 𝒜^s (𝒜^t) with 𝒜_e^s (𝒜_e^t)and 𝒜_de^s (𝒜_de^t) denoting its encoder and decoder respectively. The reconstruction output of 𝒜^s is given by : 𝐱̂ = 𝒜_de^s∘𝒜_e^s (𝐱) while the reconstruction output of 𝒜^t is computed by : ϕ̂ = 𝒜_de^t∘𝒜_e^t (ϕ). In order to train 𝒜^s and 𝒜^t, PAs (𝒫_s(𝐱) or 𝒫_t(𝐱)) are given as respective inputs with a probability p_s (or p_t) while the normal data is provided as input with probability of (1-p_s) (or (1-p_t)). p_s (or p_t) is a hyperparameter to control the ratio of PAs to normal samples. Overall, the loss for 𝒜^s and 𝒜^t is calculated as: ℒ_𝒜^(s) =1/Π || 𝐱̂ - 𝐱 ||^2_2 if ℐ = 𝐱|| 𝒫̂_s(𝐱) - 𝐱 ||^2_2 if ℐ = 𝒫_s(𝐱), ℒ_𝒜^(t) = 1/Π || ϕ̂ - ϕ ||^2_2 if ℐ = ϕ|| 𝒫̂_t(𝐱) - ϕ ||^2_2 if ℐ = 𝒫_t(𝐱),where 1/Π is normalisation factor,Π = 𝒯× C × H × W and ||.||_2 denotes the ℒ_2 norm. where 𝒯, C, HandW arethe number offrames, number of channels, height, and width of the frames in the input sequence (ℐ), respectively. §.§ Estimating Semantic InconsistencyWhile measuring the spatial reconstruction quality and temporal irregularity between normal and anomalous data is essential for real-world VAD, it is alsocrucial to learn and estimate the semantic inconsistency (degree of misalignment of semantic visual patterns and cues) between normal and anomalous samples (e.g. abnormal object in the crowded scene). In practice, to emulate this idea in our approach, we extract frame-level semantically rich features from the ViFi-CLIP <cit.> model (pre-trained on Kinetics-400 <cit.>) and perform binary classification between normal data samples 𝐱 and spatial pseudo-anomalies 𝒫_s(𝐱) using a discriminator 𝒟,(Table <ref> in supp.), which can be viewed as an auxiliary component to AEs. Intuitively, it is highly likely that latent space representation of PAs will be semantically inconsistent to the normal scenarios. § EXPERIMENTAL SETUP §.§ Implementation Details a). Training Spatial (𝒜^s) and Temporal (𝒜^t) AE's: We closely follow the training procedure described in <cit.> to train 𝒜^s and 𝒜^t. 
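Schematically, the two pseudo-anomaly generators and the reconstruct-normal-only training pairs described above can be summarised in a short numpy sketch; the inpainting callable, the H x W x 3 layout of the frame and mask, and the bounding-box form of the masked region are simplifying assumptions rather than the exact implementation.

```python
# Schematic sketch of the spatial and temporal pseudo-anomaly generators and of
# the training pair used with the losses above. `inpaint_ldm` stands for the
# frozen pre-trained inpainting LDM and is an assumption, not its real API.
import numpy as np

rng = np.random.default_rng(0)


def spatial_pa(frame, mask, inpaint_ldm):
    """Spatial PA: inpaint the masked-out region of a frame (H x W x 3)."""
    masked = frame * mask                                     # x point-wise m
    x7 = np.concatenate([frame, masked, mask[..., :1]], -1)   # 7-channel input
    return inpaint_ldm(x7)                                    # P_s(x)


def temporal_pa(flow, box, alpha=0.4):
    """Temporal PA: mixup of two equally sized patches of the optical flow."""
    y, x, h, w = box                                          # patch from the mask
    H, W = flow.shape[:2]
    yr, xr = rng.integers(0, H - h), rng.integers(0, W - w)   # random second patch
    lam = rng.beta(alpha, alpha)                              # lambda ~ Beta(0.4, 0.4)
    out = flow.copy()
    out[y:y + h, x:x + w] = (lam * flow[y:y + h, x:x + w]
                             + (1.0 - lam) * flow[yr:yr + h, xr:xr + w])
    return out                                                # P_t(x)


def training_pair(normal, pseudo, p):
    """Feed a PA with probability p, but always regress onto the normal input."""
    model_input = pseudo if rng.random() < p else normal
    return model_input, normal                                # (input, target)
```

The essential design choice sits in training_pair: the regression target is always the normal sample, so minimising the losses above discourages the AE from reconstructing the injected distortions.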
The architecture of 𝒜^s and 𝒜^t is adapted from <cit.>, however instead of relying on single channel image as input we use all 3 channels.𝒜^s and 𝒜^t were trained on respective datasetsfrom scratch with the objective defined in eq. <ref> and eq. <ref> respectively on 2 NVIDIA GeForce 2080 Ti GPUs with effective batch size (ℬ) of 24 distributed across the GPUs (12 each). The input to𝒜^s and 𝒜^t is ofsize (ℬ×𝒯× 3 × 256 × 256), where 𝒯=16.The spatial and temporal PAs were sampled by probabilityp_s=0.4 and p_t=0.5 respectively. 𝒜^s is trained with Adam optimiser for 25 epochs with a learning rate of 10^-4. During training, the reconstruction loss is calculated across all 16 frames of the sequence. The training of the 𝒜^t follows a similar procedure, however the input to the model is the optical flow representing normal events i.eϕ.b). Training the Discriminator (𝒟): During the training phase, the input to 𝒟 has a batch size of 16 andfeature dimension of 512. The model was trained using a SGD optimiser with a learning rate of 0.02, momentum of 0.9 and weight decay of 10^-3 for 20 epochs.The groundtruth for normal and PAs samples are given labels 0 and 1 respectively. See section <ref> (supp.) for ViFi-CLIP <cit.> feature extraction and additional details. The completepipeline is depicted in Figure <ref>.§.§ Inference During inference (Figure <ref> in supp.), our goal is to measure all three types of anomaly indicators of all frames of the test video in the given dataset i.e reconstruction quality, temporal irregularity and semantic inconsistency. Therefore, our anomaly score should holistically combine these aspects to gain deeper insights into real-world anomalies in videos. In order to measure the reconstruction quality, we follow the recent works of <cit.>, which utilise normalised Peak Signal to Noise Ratio P_t (PSNR) between the test input frame at time t and its reconstruction from 𝒜^s to calculate the anomaly score ω^(t)_1. The input to 𝒜^s during inference is given in a sliding window fashion and hasdimensions 1 × 16 × 3 × 256 × 256, where batch size is 1 and 16 represents number of frames. At test time, only the 9^th frame of a sequence is considered for anomaly score calculation as in <cit.>. For measuringtemporal irregularity, a similar strategy is followed as for frames but instead of measuring the PSNR, the normalised ℒ_2 loss (denoted by ω^(t)_2) is computed between the input test ϕ at time t and its reconstruction from 𝒜^t. For measuringsemantic inconsistency, the sequence of input frames is fed into 𝒟 in a sliding window fashion with a window size of 16. We compute the output probability of a frame at time t to be anomalous from its ViFi-CLIP feature representation anddenote it by ω^(t)_3. The aggregate of anomaly score for allthree components is given by the following weighted average:ω^(t)_agg = η_1 ω^(t)_1 + η_2 ω^(t)_2 + η_3 ω^(t)_3,w/ 𝒟 η_1 ω^(t)_1 + η_2 ω^(t)_2,w/o 𝒟; (η_3=0) where η_1, η_2, η_3 are tuned for every dataset. (Refer to section <ref> in supp. material for further details) §.§ ResultsWe performed extensive and exhaustive quantitative and qualitative assessments on four datasets namelyPed2 <cit.>, Avenue <cit.>, ShanghaiTech <cit.> and UBnormal <cit.>.Baselines: We compare our results with memory based AE <cit.> and other reconstruction based method trained with pseudo-anomalous samples created using other simulation techniques <cit.>. The network trained without any PAs is represented as the standard baseline. 
The model design of the AE is fixed across all the experimental settings. Object-level information is only considered for perturbing the normal data during training while at inference we evaluate results strictly based on reconstruction and classification outputs. Hence our method is not directly comparable to object-centric methods.1. Quantitative Assessment: In Table <ref>, we reportmicro AUC comparisons of overall scores of our model andexisting state-of-the-art (SOTA) methods on test sets of Ped2, Avenue and Shanghai datasets. We follow the same practice as in <cit.> of dividing the SOTA methods into 5 categories. Our method is closest to reconstruction based methods though we also avail the discriminator 𝒟 as the auxiliary component to learn the distance between normal data distribution and PAs distribution. For clarity, we provide results with and without 𝒟 for all the datasets.Compared tomemory-based networks, our unified framework trained on synthetically generated spatio-temporal PAs outperforms MemAE <cit.> and MNAD-Reconstruction <cit.> on Avenue and Shanghai while on Ped2 surpasses MNAD-Reconstruction and achieves comparable performance as MemAE. We also compare our results with other PAs generator methods such as STEAL Net <cit.> and LNTRA <cit.>. We observe that on the Avenue dataset our model outperforms LNTRA (patch, skip-frame based) though marginally lags behind STEAL-Net whereas STEAL-Net and LNTRA achieve better performance than our model on Ped2 and Shanghai dataset. However such methods generate PAs under bold assumptions and inductive biases which may cause them to fail in particular cases. We report such cases in the Ablation study (Figure <ref>). We also show in Table <ref> that the transfer performance of our model is on par with other PAs generation methods (see section <ref>). We do employ optical flow like other methods (e.g  Frame-Pred <cit.>) and observe that our results outperform Frame-Pred on the Avenue, achieve comparable performance on ShanghaiTech and are marginally less on Ped2.In Table <ref>, we show a comparison between baseline, LNTRA and our approach on the validation set of the UBnormal dataset using only the normal videos in the training split. This is done to ensure consistency in evaluation under the OCC setting (refer to section <ref> in supp for data-split details). The training and evaluation for baseline and LNTRA (patch, skip-frame) based methods on UBnormal was performed using scripts provided by the authors of LNTRA[https://github.com/aseuteurideu/LearningNotToReconstructAnomalieshttps://github.com/aseuteurideu/LearningNotToReconstructAnomalies]. We observe that our method outperforms baseline and LNTRA achieving micro AUC score of 57.98% andimplying that our PAs are generic and applicable for more diverse anomalous scenarios. Both in Table <ref>, <ref> we notice that the effect of adding 𝒟 is minimal, which validates the intuition that VAD cannot be directly addressed as a classification problem.Table <ref>, <ref> shows that no single reconstruction-based method excels on all datasets. This is because anomalies are context-dependent. Different methods have inductive biases that work for specific datasets but not others. Our work provides a generic solution towards generating PAs without making bold assumptions about dataset's anomalies. 2. Qualitative Assessment:We conduct qualitative analysis of the anomaly score over time for sample videos in Avenue, Shanghai (Figure <ref>) and Ped2, UBnormal (Figure <ref> in supp). 
We also compare our model's anomaly score over time with those obtained from LNTRA skip-frame and patch-based methods. It can be concluded thaton the Avenue and Ped2 datasets, our method detects anomalies fairly well and performance is equivalent with LNTRA models. Though there exist certain failure cases in the Shanghai and UBnormal datasets which occur due to anomalies occurring due to abnormal interaction between two objects i.e. fighting between two individuals in Shanghai and accident with a bike in UBnormal. Even though our PAs generator is generic it fails to emulate such complex real-world anomalies.§.§ Ablation Studies 1: How transferable are PAs? We also examine how well PAs transfer across various VAD datasets. We use our pre-trained model on UBnormal dataset, which contains a wide range of anomalies and backgrounds, making it suitable for transferability. We tested the model on rest of the datasets without fine-tuning. Our results in Table <ref> show that our model outperforms the patch-based method on all other datasets while achieves competitive performance compared to the skip-frame based method. This provides an interesting insight that our PAs are generic and transferable.2: How to interpret PAs? In Figure <ref>, we compare error heatmaps generated using a model trained with patch and skip-frame based PAs and with our spatial-PAs on all the respective datasets. Since skip-frame and patch based PAs carry strong assumptions, they tend to have problems detecting complicated real-world anomalies in ShanghaiTech such as a baby carriage (anomalous object) whereas our model trained with spatial-PAs yields high error for such cases. Furthermore, our PAs also give strong results on the synthetic dataset UBnormal, where patch and skip-frame based PAs fail to detect complex violent scenes as temporal irregularity induced through skip-frames is not generic. However, even our spatial-PAs, which are not explicitly trained to detect temporal anomalies are able to determine such real-world anomalies. On Avenue and Ped2 datasets, our model gives comparable error to patch based PAs for an anomalous activity however we observe that skip-frame based PAs overly estimates the reconstruction error for the same. Intuitively this indicates that even though skip-frame performs reasonably well on benchmark datasets but it is susceptible to amplification of the error. An explanation for this phenomena could be due to underlying strong assumption of skipping frames based on a specific stride value to model temporal irregularity. These observations validate that our PAs are generalised and enable understanding of which real-world anomalies can be detected using which type of PAs.3: Random vs Segmentation masks: Table <ref> shows the effect of using random and segmentation masks for generating spatial PAs. We observe that using a segmentation mask gives better AUC score on Ped2 and Avenue dataset, which is intuitively justified as segmentation masks contain more semantic information. Despite this, our method is flexible in terms of type of mask chosen. § CONCLUSIONS AND DISCUSSION In this paper we presented a novel and generic spatio-temporal PAs generator vital for VAD tasks without incorporating strong inductive biases. We achieve this by adding perturbation in the frames of normal videos by inpainting a masked out region in the frames using a pre-trained LDM and by distorting optical flow by applying mixup-like augmentation (Figure <ref>). 
We also introduced a unified VAD framework that learns three types of anomaly indicators i.e. reconstruction quality, temporal irregularity and semantic inconsistency in an OCC setting (Figure <ref>). Through extensive evaluation, we show that our framework achieves on par performance with other SOTA reconstruction methods and PA generators with predefined assumptions across multiple datasets (Table <ref>, <ref>) indicating the effectiveness, generalisation and transferability of our PAs.There are limitations with this work. First, our model was not trained in an end-to-end fashion due to limited computational resources available. It will be interesting to make this setting adaptive in nature by learning a policy network to select which anomaly indicator among poor reconstruction quality, temporal irregularity and semantic inconsistency contributes more towards detection ofreal-world anomalies. Second, the notion of generating latent space PAs for VAD through LDMs or manifold mixup remains to be investigated. Third, in this work micro AUC scores are used for evaluation though the method needs to be further validated on other metrics such as region, tracking based detection criteria. These limitations will be addressed in our future work.§ ACKNOWLEDGEMENTThis work has emanated from research supported by Science Foundation Ireland (SFI) under Grant Number SFI/12/RC/2289_P2, co-funded by the European Regional Development Fund and Xperi FotoNation. § DATASETS Ped2 <cit.> dataset comprises of 16 training and 12 test videos and all videos have the same scene in the background. The videos with normal events consist of pedestrians only, whereas the videos with anomalous events include bikes, skateboards and carts apart from pedestrians.Avenue <cit.> dataset comprises of 16 training and 21 test videos with every video having the same background scene. Normal events involve people routinely walking around while the abnormal instances include abnormal objects such as bikes and abnormal human actions such as unusual walking directions, running around or throwing things. ShanghaiTech <cit.> dataset includes 330 training and 107 test videos recorded at 13 different background locations with complex lightning conditions and camera angles, making it the one of the largest one-class anomaly detection datasets. The test split captures a total of 130 anomalous events including running, riding a bicycle and fighting. UBnormal <cit.> is a synthetic dataset with multi-scene backgrounds and a diverse set of anomalies. The dataset consists of training, validation and test split with both normal and abnormal events. The normal events include walking, talking on the phone, walking while texting, standing, sitting, yelling and talking with others. It is to be noted that abnormal events in each of the train, validation and test split are different to each other. The train split includes abnormal events like falling, dancing, walking injured, running injured, crawling, and stumbling walk. The validation split comprises fighting, sleeping, dancing, stealing, and rotating 360 degrees. All the evaluations are conducted on the validation set.UBnormal data-split under OCC Setting.In order to use this dataset in the one class classification (OCC) setting, we train our model using only the normal 186 videos in the training split and the pseudo-anomalies (PAs) generated using them (i.e. totally ignoring the abnormal samples provided in the train set). 
We tested our model on all the videos in the validation split, comprising of 64 videos with both normal and abnormal events.Such a setting was chosen to keep consistency in evaluation as with other datasets under the OCC setting. The frame-level groundtruth annotation for validation set of UBnormal <cit.> was created using the script[https://github.com/lilygeorgescu/UBnormal/tree/main/scriptshttps://github.com/lilygeorgescu/UBnormal/tree/main/scripts] provided by the authors.§ ADDITIONAL DETAILS AND INSIGHTS1: Pseudo-Anomaly Construction. We take an off-the-shelf Latent Diffusion Model <cit.> (LDM[https://github.com/CompVis/latent-diffusion/tree/mainhttps://github.com/CompVis/latent-diffusion/tree/main])pre-trained on the Places dataset <cit.>. We do not perform any finetuning of the LDM on any video anomaly dataset and therefore it is “under-trained" on video data and hence capable of spatially distorting them. For inpainting the masked out regions of the images, 50 steps of inference were carried out. It is to be noted that due to lack of computational resources we did not experiment with other values of timesteps. A very low number of timesteps may produce mostly noisy inpainting output while a very high valuemight result in inpainted images very close to the input image. The strategy for generation of random and segmentation masks was adopted from the code[https://github.com/advimman/lama/tree/main/saicinpainting/evaluation/maskshttps://github.com/advimman/lama/tree/main/saicinpainting] provided by the authors of LAMA <cit.>. If segmentation mask was detected for a frame, a random mask was selected instead. Figure <ref> depicts more examples of generated PAs. 2: Extracting ViFi-CLIP Features.For the training split of the benchmark datasets and their corresponding spatial pseudo-anomalies, we extract frame level features using the ViFi-CLIP <cit.> model. The input to the ViFi-CLIP model hassize : ℬ^'×𝒯^'× 3 × 224 × 224, where ℬ^' (batch size) was set to 1 and 𝒯^' (# of frames) was set to 16. Allframes were passed into ViFi-CLIP in a sliding window fashion with a stride of 16 therefore we obtain a 512-dimensional feature for every frame. ViFi-CLIP uses the backbone of ViT-B/16 <cit.> and is pre-trained on Kinetics-400 <cit.>.It is to be noted that the ViFi-CLIP model performs temporal pooling of the CLIP <cit.> features, however we do not perform temporal pooling and use the frame level representations as during inference we evaluate our pipeline using frame level micro AUC scores. For the frames of the videos in test split (Ped2, Avenue, ShanghaiTech) and validation split (UBnormal), we follow the same procedure for feature extraction. 3: Effect of changing the probability of sampling PAs.We conduct an experimental study by varying the probability of sampling spatial and temporal PAs (p_s, p_t) on Ped2 during training between 0.1 to 0.5 and measuring micro AUC scores during inference. Figure <ref> shows that the model achieves best performance when p_s = 0.4 and p_t = 0.5.4: Inference Time.The average inference time calculated over three runs for a single frame on a single Nvidia RTX-2080-Ti GPU is 123.35ms.§ EVALUATION CRITERIA To measure the reconstruction quality, we follow the recent works of <cit.>, which utilised normalized Peak Signal to Noise Ratio (PSNR) P_t between an input frame and its reconstruction to calculate the anomaly score. This is illustrated in the following equation. 
P_t = 10 log_10( M^2_𝐱̂_𝐭 / ( (1/R) || 𝐱̂_𝐭 - 𝐱_𝐭 ||^2_2 ) ),
ω^(t)_1 = 1 - ( P_t - min_t (P_t) ) / ( max_t (P_t) - min_t (P_t) ),
where 𝐱_𝐭 is the input frame at time t, 𝐱̂_𝐭 represents the reconstruction of 𝐱_𝐭, R denotes the total number of pixels in 𝐱̂_𝐭 and M_𝐱̂_𝐭 is the maximum possible pixel value of 𝐱̂_𝐭. The anomaly score ω^(t)_1 is an indicator of the reconstruction quality of the input frame. For measuring the temporal irregularity, we compute the normalised ℒ_2 loss between the input optical flow at time t and its reconstruction, given by the equation:
ω^(t)_2 = (1/R^') || ϕ̂(𝐱_𝐭,𝐱_(𝐭+1)) - ϕ(𝐱_𝐭,𝐱_(𝐭+1)) ||^2_2,
where ϕ(𝐱_𝐭,𝐱_(𝐭+1)) is the input optical flow frame calculated using the consecutive frames 𝐱_𝐭 and 𝐱_(𝐭+1), ϕ̂(𝐱_𝐭,𝐱_(𝐭+1)) represents the reconstruction of ϕ(𝐱_𝐭,𝐱_(𝐭+1)), and R^' denotes the total number of pixels in ϕ̂(𝐱_𝐭,𝐱_(𝐭+1)). To measure the semantic inconsistency, the input frame sequence is fed into 𝒟 in a sliding window fashion with a window size of 16. The output probability (ω^(t)_3) of a frame at time t being anomalous is computed from its ViFi-CLIP feature representation. A higher value of ω^(t)_1, ω^(t)_2 or ω^(t)_3 represents a higher reconstruction error for the frame or the optical flow, or a higher anomaly probability, at time t in the test videos during inference. In other words, they are indicators of poor reconstruction quality, temporal irregularity and semantic inconsistency, and their aggregation can aid in determining real-world anomalies. The aggregate anomaly score is given by:
ω^(t)_agg = η_1 ω^(t)_1 + η_2 ω^(t)_2 + η_3 ω^(t)_3, w/ 𝒟
ω^(t)_agg = η_1 ω^(t)_1 + η_2 ω^(t)_2, w/o 𝒟 (η_3=0),
where η_1, η_2, η_3 are the weights assigned to ω^(t)_1, ω^(t)_2 and ω^(t)_3 respectively. The values of η_1, η_2 and η_3 lie in the interval [0,1] and their sum is equal to 1. We manually tune the values of η_1, η_2, η_3 for every dataset. The values of (η_1, η_2, η_3) for all the datasets are: Ped2 (0.65, 0.25, 0.1), Avenue (0.45, 0.5, 0.05), Shanghai (0.85, 0.13, 0.02) and UBnormal (0.4, 0.5, 0.1). In all cases, any of the three components can be excluded during evaluation by setting the corresponding weight to zero. Note: we also experimented with learnt weights for the three anomaly indicators, but this gave a marginal decrease in performance compared to manually tuning the weights.

Evaluation Metric. For evaluation, we follow the standard metric of frame-level area under the ROC curve (micro-AUC) as in <cit.>. We obtain the ROC curve by varying the anomaly score threshold to plot the False Positive Rate against the True Positive Rate over the whole test set of a given dataset. Higher AUC values indicate better performance and more accurate detection of anomalies.
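For reference, the frame-level scoring and the micro-AUC evaluation just described can be summarised in a short numpy/scikit-learn sketch; the reconstructions, flow reconstructions and discriminator probabilities are assumed to be pre-computed, and the default weights shown correspond only to the Ped2 setting listed above.

```python
# Minimal sketch of the frame-level scoring and micro-AUC evaluation.
# Reconstructions and discriminator outputs are assumed to be pre-computed.
import numpy as np
from sklearn.metrics import roc_auc_score


def psnr(x_hat, x, max_val=1.0):
    mse = np.mean((x_hat - x) ** 2)          # (1/R) * ||x_hat - x||^2
    return 10.0 * np.log10(max_val ** 2 / (mse + 1e-12))


def omega1(frames, recons):
    """Reconstruction-quality score: 1 - PSNR, min-max normalised per video."""
    p = np.array([psnr(r, f) for f, r in zip(frames, recons)])
    p = (p - p.min()) / (p.max() - p.min() + 1e-12)
    return 1.0 - p


def omega2(flows, flow_recons):
    """Temporal-irregularity score: per-frame normalised L2 error of the flow."""
    return np.array([np.mean((fr - f) ** 2) for f, fr in zip(flows, flow_recons)])


def aggregate(w1, w2, w3=None, etas=(0.65, 0.25, 0.10)):
    """Weighted aggregation; omit w3 (and eta_3) when the discriminator is unused."""
    if w3 is None:
        return etas[0] * w1 + etas[1] * w2
    return etas[0] * w1 + etas[1] * w2 + etas[2] * w3


def micro_auc(scores, labels):
    """Frame-level micro-AUC over the whole test set (labels: 1 = anomalous)."""
    return roc_auc_score(labels, scores)
```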
http://arxiv.org/abs/2311.16514v1
{ "authors": [ "Ayush K. Rai", "Tarun Krishna", "Feiyan Hu", "Alexandru Drimbarean", "Kevin McGuinness", "Alan F. Smeaton", "Noel E. O'Connor" ], "categories": [ "cs.CV", "cs.AI", "cs.LG" ], "primary_category": "cs.CV", "published": "20231127131406", "title": "Video Anomaly Detection via Spatio-Temporal Pseudo-Anomaly Generation : A Unified Approach" }
^1 Department of Physics, Hebei University, Baoding, 071002, China ^2 Hebei Key Laboratory of High-precision Computation and Application of Quantum Field Theory, Baoding, 071002, China ^3 Hebei Research Center of the Basic Discipline for Computational Physics, Baoding, 071002, China ^4 School of Physics, Harbin Institute of Technology, Harbin 150001, China
Based on the method of solving the complete Salpeter equation, we study the semileptonic decays of a heavy 0^- pseudoscalar to 1P, 2P, or 3P heavy 2^+ tensors, B_q → (c̅ q)(nP) ℓ^+ ν_ℓ (q=u,d,s,c; n=1,2,3). The obtained branching ratio ℬ(B → D_2^⋆(2460)ℓ^+ν_ℓ) agrees with the experimental data. We predict ℬ(B_s^0→ D_s2^⋆-(1P) ℓ^+ν_ℓ)=3.76× 10^-3 and ℬ(B_c^+ →χ_c2(1P)ℓ^+ν_ℓ)=1.82× 10^-3. The branching ratios of decays to 2P and 3P final states are found to be very small. The ratios ℛ(D̅_2^⋆ 0)=0.045, ℛ(D_s2^⋆)=0.048 and ℛ(χ_c2)=0.059 are also obtained. This study focuses on the contribution of relativistic corrections. The wave function of the pseudoscalar includes a non-relativistic S-wave and a relativistic P-wave, while the wave function of a tensor contains a non-relativistic P-wave and relativistic D and F waves. We find that the individual contributions of relativistic partial waves are significant in the decay B → D_2^⋆(2460)ℓ^+ν_ℓ, but the overall contribution of the relativistic effect is 24.4%, which is small due to cancellation. Similarly, for the decay B_s^0→ D_s2^⋆-(1P) ℓ^+ν_ℓ, the contribution of the relativistic effect is 28.8%. For B_c^+ →χ_c2(1P)ℓ^+ν_ℓ, the individual contributions of relativistic partial waves and the overall relativistic correction are both small, the latter being 22.1%.
Relativistic study on the semileptonic decays of B_q mesons to orbital excited heavy Tensors
Wen-Yuan Ke^1,2,3[[email protected]], Su-Yan Pei^1,2,3, Tianhong Wang^4, Guo-Li Wang^1,2,3[[email protected], corresponding author]
January 14, 2024
================================================================================================================================================
§ INTRODUCTION
In the past few years, the semileptonic decays of bottom mesons induced by b→ c have attracted a lot of research interest both in theory <cit.> and in experiment <cit.>, since such decays are important for the studies of the Cabibbo-Kobayashi-Maskawa (CKM) matrix element V_cb <cit.>, CP violation <cit.>, and probing new physics <cit.>, etc. So far, many processes have been extensively studied. However, compared to the widely studied and well understood semileptonic decays of bottom mesons to ground-state charmed mesons, such as B to D and D^⋆, our knowledge of decays to orbitally excited final states is still insufficient. For example, there is the long-lived `1/2 vs 3/2' puzzle <cit.> in B semileptonic decays to orbitally excited states. Among the orbitally excited states, the 2^+ tensor meson is the most complex one, since there are significant differences between theoretical results on B → D_2^⋆(2460)ℓ^+ν_ℓ and few of them are in good agreement with experimental data; see Table <ref> in this article for details. The relativistic correction of an excited state is greater than that of its ground state <cit.>, so one possible reason for the inconsistency between theory and experiment is that the relativistic correction was not well considered.
Therefore, in this article, we will give a relativistic study of the semileptonic decays, B_q → (c̅ q)(nP) ℓ^+ ν_ℓ (q=u,d,s,c;n=1,2,3), where bottom meson B_q is a 0^- pseudoscalar, and charmed meson (c̅ q) is a 2^+ tensor. Where the processes with highly excited 2P and 3P final states are also included, as we know almost nothing about them.In this paper, we will solve the instantaneous Bethe-Salpeter (BS) equation <cit.>, which is also called Salpeter equation <cit.>, to obtain the relativistic wave functions for pseudoscalar and tensor mesons. Compared with the non-relativistic Schrodinger equation, the BS equation is a relativistic dynamic equation for bound states. But it is very complicated, we have to make approximation before solving it. Salpeter equation is its instantaneous version, and instantaneous approximation is suitable for heavy mesons. We have solved the complete Salpeter equation without further approximations <cit.>. Since Salpeter equation itself does not provide the form of wave function, we give the general expression of the relativistic wave function for a meson according to its J^P quantum number, where the unknown radial wave functions are the solution of Salpeter equation. For the Salpeter equations satisfied by mesons with different J^P, they need to be solved separately, see Ref.<cit.> for example.Through the study of ψ(3770), it is known that this particle is not a pure wave state, but a S-D mixing state <cit.>. In our method, a meson wave function contains different partial waves, each of which contains the same J^P. So it is found that <cit.>, in a complete relativistic method similar conclusion applies to all particles, that is, all particles are not composed of pure waves, but contain other partial waves in addition to the main wave. Where the main wave provides the non-relativistic contribution, while other waves give relativistic corrections. Taking the 0^- pseudoscalar B_c wave function as an example, S-wave is the non-relativistic main wave, P-wave is the relativistic correction term <cit.>.Although we can calculate the ratio of different partial waves in the wave function, which reflects the relativistic effect of this meson <cit.>, it does not represent the size of the relativistic effect in the transition it participates in. Since in a transition process involving interaction, it is necessary to calculate the overlapping integral of the initial and final state wave functions. In this case, the correction of relativity becomes complex and requires careful study. The main contribution may not necessarily come from the non-relativistic main wave, but may come from the relativistic partial wave. This phenomenon motivates us to study the role of various partial waves in different decays. Previously, we have studied the contribution of various partial waves in strong interaction <cit.> and radiative electromagnetic transitions <cit.>. In this article, we will study their performance in weak interaction.In Sect.II of the article, taking the semileptonic decay B^+ →D̅_2^⋆(2460)^0 ℓ^+ ν_ℓ as an example, we show our method how to calculate the transition matrix element. In Sect.III, the used wave functions including different partial waves of initial 0^- pseuduscalar and final 2^+ tensor mesons are given. 
In Sect. IV, we first show the ratios of different partial waves in the wave functions of the 0^- and 2^+ mesons, then our results for the semileptonic bottom-hadron decays, and finally the contributions of the different partial waves together with a discussion.
§ SEMILEPTONIC DECAY WIDTH FORMULA
For the B^+ →D̅_2^⋆(2460)^0 ℓ^+ ν_ℓ process shown in Fig. 1, the transition amplitude is written as T=G_F/√(2) V_cb μ̅_ν_ℓγ^μ(1-γ^5)ν_ℓ ⟨D_2^⋆(2460)^0(P_f) |J_μ |B^+(P)⟩, where G_F is the Fermi constant, J_μ≡ V_μ -A_μ is the charged current responsible for the decays, V_cb=40.5 × 10^-3 (PDG <cit.>) is the Cabibbo-Kobayashi-Maskawa (CKM) matrix element, and P and P_f are the momenta of the initial B^+ and the final D̅_2^⋆(2460)^0, respectively. After summing over the spins of the initial and final mesons, the square of the above amplitude is written as ∑ |T|^2=G_F^2/2 |V_cb|^2 ℓ^μν h_μν, where ℓ^μν≡∑μ̅_ν_ℓγ^μ(1-γ_5)ν_ℓν̅_ℓ(1+γ_5)γ^νμ_ν_ℓ is the leptonic tensor, and h_μν≡∑⟨ B^+(P) |J_ν^+ | D_2^⋆(2460)^0 (P_f) ⟩⟨ D_2^⋆(2460)^0 (P_f) |J_μ |B^+(P)⟩ is the hadron tensor.
By using Mandelstam's formalism, the hadronic transition matrix element can be written as the overlapping integral over the Bethe-Salpeter wave functions of the initial and final mesons. Since we do not solve the Bethe-Salpeter equation, but the Salpeter equation, the transition matrix element is further simplified by the instantaneous approximation; then, for the process B^+ →D̅_2^⋆(2460)^0 ℓ^+ ν_ℓ, the hadronic matrix element can be written as <cit.> ⟨D_2^⋆(2460)^0 (P_f) |J_μ |B^+(P)⟩ =∫dq⃗/(2π)^3 Tr[ P/M φ^++_P(q⃗) γ_μ (1-γ_5) φ̅^++_P_f(q⃗_f) ] =t_1 ϵ_μ P+t_2 ϵ_P P P_μ +t_3 ϵ_P P P_f μ+i t_4 ϵ^ρ P ε_ρ P P_f μ, where ϵ_μν is the polarization tensor of the final tensor meson, t_1, t_2, t_3 and t_4 are the form factors, and φ^++_P and φ̅^++_P_f=γ_0(φ^++_P_f)^†γ_0 are the positive energy Salpeter wave functions of the initial and final mesons, respectively. We have used the abbreviations, for example ϵ^ρσP_σ ε_ραβμP^αP_f^β=ϵ^ρ Pε_ρ P P_f μ. Based on the covariance analysis of the Lorentz indices, the general form of h_μν can be expressed as h_μν=-α g_μν+β_++(P+P_f)_μ(P+P_f)_ν+β_+-(P+P_f)_μ(P-P_f)_ν+β_-+(P-P_f)_μ(P+P_f)_ν+β_--(P-P_f)_μ(P-P_f)_ν+ i γε_μνρσ(P+P_f)^ρ(P-P_f)^σ, where the coefficients α, β_±± and γ are functions of the form factors t_i (i=1,2,3,4). Thus, the differential decay rate of this process can be written as
d^2Γ/dx dy= |V_ij|^2 G_F^2 M^5/(32 π^3) {α(y-m_ℓ^2/M^2)/M^2+2 β_++[2x(1-M_f^2/M^2+y)-4x^2-y+m_ℓ^2/(4 M^2)(8x+(4 M_f^2-m_ℓ^2)/M^2-3y)]+(β_+-+β_-+) m_ℓ^2/M^2(2-4x+y-(2 M_f^2-m_ℓ^2)/M^2)+ β_-- m_ℓ^2/M^2(y-m_ℓ^2/M^2)-γ[y(1-M_f^2/M^2-4x+y)+m_ℓ^2/M^2(1-M_f^2/M^2+y)]},
where x≡ E_ℓ/M, y≡ (P-P_f)^2/M^2, M and M_f are the masses of B^+ and D̅_2^⋆(2460)^0, respectively, and m_ℓ and E_ℓ are the mass and energy of the final charged lepton ℓ, respectively.
§ WAVE FUNCTIONS AND THEIR PARTIAL WAVES
The wave function will be given in the center-of-mass system of the corresponding meson. q_⊥ is the relative momentum between the quark and the antiquark, defined as q_⊥=q-(P·q/M^2)P=(0,q⃗), where P and M are the momentum and mass of the meson, respectively.
§.§ 0^- mesonThe relativistic Salpeter wave function for a 0^- state has the general form <cit.>φ _0^-(q_)=[f_2(q_)+P/Mf_1(q_)+q_/Mf_3(q_)+Pq_/M^2f_4(q_)]γ ^5,withf_3(q_)=f_2 M (ω _2-ω _1)/m_2 ω _1+m_1 ω _2,   f_4(q_)=-f_1 M (ω _1+ω _2)/m_2 ω _1+m_1 ω _2.The normalization formula for this wave function is <cit.>∫dq⃗/(2π)^38 M ω_1 ω_2 f_1 f_2/ω_1 m_2 + ω_2 m_1=1,where ω_1=√(m_1^2+q⃗^2), ω_2=√(m_2^2+q⃗^2), m_1 and m_2are the constituent quark masses of quark 1 and antiquark 2, respectively.We have pointed out that the wave function of the 0^-state not only contains S-wave, the terms with f_1 and f_2, but also P-wave components, namely f_3 and f_4 terms <cit.>. If only the S-wave is retained and the P-wave component is deleted, the normalization formula becomes <cit.>∫dq⃗/(2π)^32 M f_1 f_2 (ω_1 m_2 + ω_2 m_1)/ω_1 ω_2.Based on Eq.(<ref>) and Eq.(<ref>), which are (S+P)^2 and S^2, the ratio between S partial wave and P wave can becalculated <cit.>.For the 0^- meson, itspositive energy wave function can be expressed as <cit.>φ _0^-^++(q_)=[A_1(q_)+P/MA_2(q_)+q_/MA_3(q_)+Pq_/M^2A_4(q_)]γ ^5,where A_1 and A_2 terms are S waves, and A_3 and A_4 terms are P waves, their detail expressions are shown in Appendix. §.§ 2^+ mesonThe relativistic wave function for a 2^+ state has the general form <cit.>φ _2^+(q_)=ϵ _μνq_^μq_^v[ζ_1(q_)+P/Mζ_2(q_)+q_/Mζ_3(q_)+Pq_/M^2ζ_4(q_)]+Mϵ _μνγ ^μq_^v[ζ_5(q_)+P/Mζ_6(q_)+q_/Mζ_7(q_)+Pq_/M^2ζ_8(q_)],where ϵ _μν is the polarization tensor of the meson, and we have the following relationsζ _1(q_)=q_^2ζ _3 (ω _1+ω _2)+2 ζ _5 M^2 ω _2/M (m_2 ω _1+m_1 ω _2),  ζ _7(q_)=M (ω _1-ω _2)/m_2 ω _1+m_1 ω _2ζ _5 , ζ _2(q_)=q_^2ζ _4 (ω _1-ω _2)+2 ζ _6 M^2 ω _2/M (m_2 ω _1+m_1 ω _2),   ζ _8(q_)=M (ω _1+ω _2)/m_2 ω _1+m_1 ω _2ζ _6.The normalization condition of this wave function is <cit.>∫dq⃗/(2π)^38M ω_1 ω_2 q⃗^2/15(ω _1m_2+ω_2 m_1) [-ζ_5ζ_6+2q⃗^2/M^2(-ζ_4 ζ_5+ζ_3 ζ_6+ζ_3ζ_4q⃗^2 /M^2 ) ]=1. In this relativistic expression, the 2^+ state, for example, D̅_2^⋆(2460)^0, is not a pure P-wave, it contains both D and F partial waves <cit.>. In Eq.(<ref>), the terms including ζ_5 and ζ_6 are P waves, ζ_3 and ζ_4 terms are F-P mixing waves, and others are D waves. So we can conclude that the wave function of the tensor D̅^⋆_2(2460)^0 contains P, D, and F partial waves.If only the pure P wave is considered, the wave function of 2^+ meson becomesφ _2^+^P(q_)=ϵ _μνq_^μγ^v(M ζ_5+ P ζ_6)+2/5ϵ _μνq_^μγ^vq_^2 (ζ_3/M- P/M^2ζ_4 ),with the normalization condition∫dq⃗/(2π)^32 q⃗^2 (2 ζ_3q⃗^2-5 ζ_5 M^2) (2 ζ_4q⃗^2+5 ζ_6 M^2)(ω _1 m_2 +ω _2 m_1 )/75 M^3 ω _1 ω _2.While for a pure F wave, the wave function isφ _2^+^F(q_)=ϵ _μνq_^μq_^v (q_/Mζ_3+ P q_/M^2ζ_4)-2/5ϵ _μνq_^μγ^v q_^2 (ζ_3/M- P/M^2ζ_4 ),and the normalization formula is∫dq⃗/(2π)^34 ζ_3 ζ_4 q⃗ ^6(ω _1 m_2 +ω _2 m_1 )/25 M^3 ω _1 ω _2.Using Eq.(<ref>), Eq.(<ref>) and Eq.(<ref>), we can calculate the ratios between different partial waves.The positive energy wave function of a 2^+ meson is expressed asφ _2^+^++(q_)=ϵ _μνq_^μq_^v[B_1(q_)+P/MB_2(q_)+q_/MB_3(q_)+Pq_/M^2B_4(q_)]+Mϵ _μνγ ^μq_^v[B_5(q_)+P/MB_6(q_)+q_/MB_7(q_)+Pq_/M^2B_8(q_)],where B_is are functions of four independent radial wave functions ζ _3,ζ _4,ζ _5 and ζ _6, we show their detail expressions in the Appendix. 
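As a sanity check, the normalisation conditions quoted in this section can be evaluated numerically once the radial wave functions are known. The sketch below integrates the 0^- condition in spherical coordinates with SciPy; the Gaussian ansatz for f_1 and f_2 is purely a placeholder (the real radial functions come from solving the Salpeter equation), and the quark masses are the constituent values quoted later in the text.

```python
import numpy as np
from scipy.integrate import quad

M, m1, m2 = 5.279, 4.96, 0.385        # GeV: B-meson mass and b, d constituent masses (from the text)

def f1(q): return np.exp(-q**2)       # placeholder radial wave functions, NOT Salpeter solutions
def f2(q): return np.exp(-q**2)

def integrand(q):
    w1, w2 = np.sqrt(m1**2 + q**2), np.sqrt(m2**2 + q**2)
    kernel = 8.0 * M * w1 * w2 * f1(q) * f2(q) / (w1 * m2 + w2 * m1)
    return 4.0 * np.pi * q**2 * kernel / (2.0 * np.pi) ** 3   # d^3q -> 4 pi q^2 dq for radial functions

norm, _ = quad(integrand, 0.0, 50.0)
print(norm)   # rescaling f1 and f2 by 1/sqrt(norm) enforces the condition = 1
```

The same pattern applies to the 2^+ conditions, with the corresponding combination of ζ_3, ζ_4, ζ_5 and ζ_6 in the integrand.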
The independent four radial wave functions are obtained by solving the Salpeter equation for 2^+ state, interested reader is referred to the paper <cit.> for details.§ RESULTS AND DISCUSSION In our method, the complete Salpeter equations are solved for pseudoscalar <cit.> and tensor <cit.>, where the Cornell potential is chosen as the interaction kernel: V(r)=λ r+V_0-γ_0⊗γ^04/3α_s(r)/r, where λ is the string constant,α_s(r) is the running coupling constant, and V_0 is a free constant. So there are some model dependent parameters, for example, the used constituent quark masses are m_u=0.38 GeV, m_d=0.385 GeV, m_s=0.55 GeV, m_c=1.62 GeV, and m_b=4.96 GeV. Other model dependent parameters can be found, for example, in Ref.<cit.>. The mass spectra of the tensor 2^+ (2^++) states are shown in Table <ref>, and the the masses of B, B_s and B_c are same as the experimental values and will not be given here. §.§ The wave functions and ratios of different partial waves The numerical values of the radial wave functions for 0^- pseudoscalars B and B_s are shown in Figure <ref> (For B_c meson, see Ref.<cit.>). Where we can see the S-wave components, the f_1 and f_2 terms in Eq.(<ref>), are dominant, and the P-wave ones, f_3 and f_4 terms, are small. So, B^+, B^0, B_s^0 and B_c^+ are all S wave dominant states. To see this clearly, we calculate the ratio between S and P waves which are based on the normalization formulas Eq.(<ref>) and Eq.(<ref>), and the results are shown in Table <ref>. In a non-relativistic limit, only the S wave survives, and f_1=f_2 for a 0^- meson. In our relativistic method, first, radial wave function f_1 is not exactly equal to f_2, second, the 0^- wave function also includes the P wave components, f_3 and f_4 terms, which contribute to the relativistic correction. The ratios in Table <ref>, show us that, the relativistic correction in B^+ (or B^0) is large, and is a little larger than those of B_s^0, much larger than B_c^+.For the 2^+ tensors, in our relativistic method, their wave functions contain 8 terms, of which 4 are independent. For simplicity, we only draw 4 independent radial wave functions in Figure <ref> and Figure <ref> for 2^+ mesons D̅_2^⋆ 0(nP) and D_s2^⋆ -(nP) (n=1,2,3), respectively. The relations between the rest 4 radial wave functions and these 4 independent ones can be found in the formulas just below Eq.(<ref>). Among the eight terms of the wave function for a tensor, ζ _5 and ζ _6 terms are P waves, ζ _3 and ζ _4 terms are mixture of P and F waves, and others, ζ _1, ζ _2, ζ _7 and ζ _8 are D waves. Figures <ref> and <ref> roughly show us that the the 2^+ wave function is dominated by P wave, which is consistent with the description of a non-relativistic method, where only P wave exists with ζ _5=-ζ _6. In order to provide details on the proportion of different waves, using the normalization formulas, Eq.(<ref>), Eq.(<ref>) and Eq.(<ref>), we calculate the their ratios, and the results are shown in Table <ref>. Where we can see that in the wave function, the proportion of P wave is dominant, the proportion of D wave is also sizable, and the one of F wave is very small. In the non-relativistic limit, only P wave exists, our results confirm that the P wave is dominant, so these states are marked as 1P, 2P and 3P states in Figures <ref>, <ref> and in Tables <ref>, <ref>, respectively. Compared with the non-relativistic P wave, the D and F waves in the 2^+ wave function provide the relativistic correction. 
From Table <ref>, it can be seen that for the 1P, 2P, and 3P states, the proportion of F wave is very small and can be ignored when precise calculation is not required. Based on the proportions of D wave in Table <ref>, we conclude that, the relativistic correction in D_2^⋆ is large, and is a little larger than those of D_s2^⋆, much larger than in χ_c2. It also shows that the relativistic correction of the highly excited state is larger than that of the lowly excited one, and the latter is larger than that of the ground state. §.§ The branching ratios of the semileptonic decaysWith the numerical values of wave functions and the formula of transition matrix element, Eq.(<ref>), the calculation of the semileptonic decay is straightforward. We show our results of branching ratios and other theoretical predictions in Table <ref>. Where it can be seen that there are few theoretical results in literature regarding the case of highly excited tensor particles as the final state. Almost all existing results focus on studying the 1P final state process, while there are significant differences in predictions from different models, especially for B^+ →D̅_2^⋆ 0(1P)ℓν_ℓ, whose branching ratios vary from 1.01 to 38.0.At present, only the production of ground state D_2^⋆(1P) in the semileptonic decay of B meson and its cascade strong decay have been detected in experiments. The decay chains are B → D_2^⋆(2460) ℓν_ℓ, D_2^⋆(2460) → D π. The averaged experimental results are <cit.>ℬ(B^+→D̅_2^⋆ 0ℓ^+ν_ℓ) ℬ(D̅_2^⋆ 0→ D^-π^+) =(1.53 ± 0.16) × 10^-3,ℬ(B^+→D̅_2^⋆ 0ℓ^+ν_ℓ) ℬ(D̅_2^⋆ 0→ D^⋆-π^+) =(1.01 ± 0.24) × 10^-3,ℬ(B^0→ D_2^⋆-ℓ^+ν_ℓ) ℬ(D_2^⋆-→D̅^0π^-)= (1.21 ± 0.33) × 10^-3,ℬ(B^0→ D_2^⋆-ℓ^+ν_ℓ) ℬ(D_2^⋆-→D̅^⋆ 0π^-) = (0.68 ± 0.12) × 10^-3.The mass of D_2^⋆ is above the thresholds of D π and D^⋆π, so D_2^⋆ has the OZI-allowed strong decay channels D_2^⋆→ D π and D_2^⋆→ D^⋆π, which are the dominant decay processes of D_2^⋆. Ref. <cit.> predicted ℬ(D̅_2^⋆ 0→ D^-π^+)=44.5% and ℬ(D̅_2^⋆ 0→ D^⋆-π^+)=21.0%. Using these, our theoretical predictions areℬ(B^+→D̅_2^⋆ 0ℓ^+ν_ℓ) ℬ(D̅_2^⋆ 0→ D^-π^+) =1.33 × 10^-3,ℬ(B^+→D̅_2^⋆ 0ℓ^+ν_ℓ) ℬ(D̅_2^⋆ 0→ D^⋆-π^+) =0.628 × 10^-3,ℬ(B^0→ D_2^⋆-ℓ^+ν_ℓ) ℬ(D_2^⋆-→D̅^0π^-)= 1.23 × 10^-3,ℬ(B^0→ D_2^⋆-ℓ^+ν_ℓ) ℬ(D_2^⋆-→D̅^⋆ 0π^-) = 0.582 × 10^-3.The first two are slightly smaller than the experimental values, while the last two are in good agreement with the experimental data.Similarly, using ℬ(D_s2^⋆ -→D̅^0 K^-)=48.7% and ℬ(D_s2^⋆ -→ D^-K̅^0)=44.1% from Ref.<cit.>, we obtainℬ(B_s^0→ D_s2^⋆-ℓ^+ν_ℓ) ℬ(D_s2^⋆-→D̅^0 K^-)= 1.83 × 10^-3,ℬ(B_s^0→ D_s2^⋆-ℓ^+ν_ℓ) ℬ(D_s2^⋆-→D^-K̅^0) = 1.66 × 10^-3.Compared to the ground 1P final state case, Our result show that the branching ratio of the process with a highly excited final state (2P or 3P) is very small. The small branching ratio may be caused by the node structures(see, Figures <ref> and <ref>) in the wave functions of the excited 2P and 3P mesons, as the contributions of the wave functions on both sides of the node cancel to each other, resulting in a very small branching ratio.In Table <ref>, the ratios ℛ(D̅_2^⋆ 0), ℛ(D_2^⋆ -), ℛ(D_s2^⋆) and ℛ(χ_c2) are also listed, where for example,ℛ(D̅_2^⋆ 0)=ℬ(B^+→D̅_2^⋆ 0(1P)τν_τ)/ℬ(B^+→D̅_2^⋆ 0(1P) ℓν_ℓ).The ratio ℛ may cancel some model dependent factors, which can be seen from the results of Ref.<cit.>, <cit.> and ours, the branching ratios are much different, but the ℛ(D̅_2^⋆ 0) values are around 0.04, very close to each other. 
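The cascade products and the ratio ℛ quoted above amount to simple arithmetic; the snippet below reproduces them from the numbers given in the text, so the comparison with the measured values is explicit (the strong branching fractions 44.5% and 21.0% are the ones adopted from the cited reference, and the tau-mode estimate is only what the quoted ℛ value implies).

```python
B_semilep = 2.99e-3                      # B(B+ -> D2*(2460)0(1P) l+ nu) from this work
B_Dpi, B_Dstarpi = 0.445, 0.210          # B(D2*0 -> D- pi+), B(D2*0 -> D*- pi+) used above

print(B_semilep * B_Dpi)                 # ~1.33e-3, vs. (1.53 +/- 0.16)e-3 measured
print(B_semilep * B_Dstarpi)             # ~0.63e-3, vs. (1.01 +/- 0.24)e-3 measured

# R(D2*0) = B(B+ -> D2*0 tau nu) / B(B+ -> D2*0 l nu); with R = 0.045 from the abstract,
# the implied tau-mode branching ratio is:
print(0.045 * B_semilep)                 # ~1.3e-4
```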
We have similar conclusions for ℛ(D_s2^⋆) and ℛ(χ_c2).We have pointed out that the relativistic corrections of excited states are greater than those of ground states <cit.>. And there are still significant differences of semileptonic decays between theoretical results, especially for the B decays. The differences may be caused by the relativistic corrections. So for these processes containing excited states, we need more careful study, especially focusing on relativistic corrections, so in the following, we will study the detail contributions of different partial waves.§.§ Contributions of different partial wavesWe provide the proportions of different partial waves in the wave function, which allows us to roughly estimate the magnitude of the relativistic correction. However, this does not represent the true relativistic correction of particles in interaction, as different partial waves behave differently in interactions. Where we need is the overlapping integration between wave functions, not the individual wave functions themselves. Therefore, taking some transition processes as examples, we provide the detailed contributions of partial waves. §.§.§ B^+ →D̅_2^⋆ 0(1P)ℓ^+ν_ℓTable <ref> shows that the wave function of B^+ is S wave (A_1 and A_2 terms) dominant but mixed with P wave (A_3 and A_4 terms), their ratio is S : P = 1 : 0.339. D̅_2^⋆ 0 is a P wave dominant mixed with D and F waves, P : D :F = 1:0.393:0.0729. To see the detail of transition B^+ →D̅_2^⋆ 0(1P), we will study carefully the overlapping integral of (S+P)×(P'+D'+F'), where to distinguish between the initial and final states, we use 'prime' to represent the final state. We show some of the detailed contributions of different partial waves to the branching ratio of B^+ →D̅_2^⋆ 0(1P)ℓ^+ν_ℓ in Table <ref>. Where 'whole' means the complete wave function, while the 'S wave' in column or'P' wave' inrow means the corresponding result is obtained only using the S or P' wave and ignoring others, etc. From Table <ref>, we can see that, the dominant S partial wave in B^+ state and P' wave in D̅_2^⋆ 0(1P) provide the dominant contribution. The P wave in B^+ and D' wave in D̅_2^⋆ 0(1P) give the main relativistic corrections, while the F'partial wave in D̅_2^⋆ 0(1P) has tiny contribution, which can be ignored safely.In non-relativistic limit, only S× P' exists, contributes 35.6 ×10^-4 to the branch ratio. Our complete and relativistic branch ratio is 29.9 ×10^-4, so the relativistic effect can be calculated asℬ_rel-ℬ_non-rel/ℬ_rel=24.4 %,which is significant, but not as large as we expected. Because if we only consider the wave function without interaction, the relativistic effect of B meson is about 44%, while D^⋆_2 is 53%, both are much larger than 24.4 %. There are two possible reasons for this. First, there is a cancellation between relativistic corrections, for example, P× P' is 4.45×10^-4, P× D' is 10.1×10^-4, while their sum contribution P× (P'+D') is 1.48×10^-4. Second, from Table <ref>, we can see that the main relativistic correction is from the interaction P× D', not from S× D' or P× P'.§.§.§ B^+ →D̅_2^⋆ 0(2P)ℓ^+ν_ℓTable <ref> shows the details of decay B^+ →D̅_2^⋆ 0(2P)ℓ^+ν_ℓ. Compared with the case of D̅_2^⋆ 0(1P) final state, the contributions of all the partial waves are much smaller. The main reason is that there are the nodes in all the partial wave functions of 2P state, and the contributions of the wave functions before and after the nodes cancel to each other, resulting in a very small branching ratio. 
In addition, the mass of D̅_2^⋆ 0(2P) is heavier than that of D̅_2^⋆ 0(1P), and the phase space of decay B^+ →D̅_2^⋆ 0(2P)ℓ^+ν_ℓ is smaller that of B^+ →D̅_2^⋆ 0(1P)ℓ^+ν_ℓ. It can be seen from Table <ref>, the greatest contribution does not come from the non-relativistic one S× P', nor from relativistic corrections S× D' and P× P', but from relativistic correction P× D'. The results show that the node structure has a more severe inhibitory effect on S× P' than on P× D', leading to the latter providing the maximum contribution and a large relativistic effect in this process. §.§.§ B_s^0 → D_s2^⋆ -(1P)ℓ^+ν_ℓ Table <ref> shows that, similar to the process of B^+ →D̅_2^⋆ 0(1P)ℓ^+ν_ℓ, the overlap of S× P' provides the dominant contribution to B_s^0 → D_s2^⋆ -(1P)ℓ^+ν_ℓ, which is non-relativistic. All other contributions are relativistic corrections, with P× D' being the largest, followed by S× D' and P× P', while the contribution of F' wave can be ignored safely. The relativistic effect isℬ_rel-ℬ_non-rel/ℬ_rel=28.8 %,which is also not as large as we expected, but is a little larger than those of B^+ →D̅_2^⋆ 0(1P). But we cannot simply conclude that the relativistic effect of the former is greater than that of the latter, because when we look at the details of relativistic corrections, compared with the non-relativistic contribution, the contributions of P× D', S× D', and P× P' in process B_s^0 → D_s2^⋆ -(1P)ℓ^+ν_ℓ are much smaller than those in B^+ →D̅_2^⋆ 0(1P)ℓ^+ν_ℓ, respectively. However, when summing them up, due to cancellation, the overall result of the latter is smaller.§.§.§ B_s^0 → D_s2^⋆ -(2P)ℓ^+ν_ℓ The situation is similar to the case of B^+ →D̅_2^⋆ 0(2P)ℓ^+ν_ℓ, the relativistic effectis very large. And the main contribution of branching ratio comes from the relativistic corrections, especially the P× D', rather than the non-relativistic contribution.§.§.§ B_c^+ →χ_c2(1P)ℓ^+ν_ℓUsing the values in Table <ref>, we obtain the relativistic effectℬ_rel-ℬ_non-rel/ℬ_rel=22.1 %for B_c^+ →χ_c2(1P)ℓ^+ν_ℓ. This value seems not much different from that of B^+ →D̅_2^⋆ 0(1P)ℓ^+ν_ℓ or B_s^0 → D_s2^⋆ -(1P)ℓ^+ν_ℓ. But from Tables <ref>, <ref> and <ref>, we can see that, although the complete branching ratios and non-relativistic results do not differ significantly, each relativistic correction in B_c^+ →χ_c2(1P)ℓ^+ν_ℓ is much smaller than that in B^+ →D̅_2^⋆ 0(1P)ℓ^+ν_ℓ or in B_s^0 → D_s2^⋆ -(1P)ℓ^+ν_ℓ, respectively. We also note that, the largest relativistic correction come from S× D', not P× D'.§.§.§ B_c^+ →χ_c2(2P)ℓ^+ν_ℓFrom table <ref>, we can see that, unlike the cases of B^+ →D̅_2^⋆ 0(2P)ℓ^+ν_ℓ and B_s^0 → D_s2^⋆ -(2P)ℓ^+ν_ℓ, the non-relativistic contribution, S× P' in B_c^+ →χ_c2(2P)ℓ^+ν_ℓ still contributes the most, much larger than the relativistic corrections, indicating that the node structure has different effects on process B_c^+ →χ_c2(2P)ℓ^+ν_ℓ and B^+ →D̅_2^⋆ 0(2P)ℓ^+ν_ℓ ( or B_s^0 → D_s2^⋆ -(2P)ℓ^+ν_ℓ).§.§ CONCLUSIONWe present a relativistic study on the semileptonic decays of heavy pseudoscalars B^+, B^0, B_s^0, and B_c^+to 1P, 2P, and 3P 2^+ tensors caused by the transition of b̅→c̅ using the Bethe-Salpeter method. We obtain the branching ratio 2.99× 10^-3 for B^+ →D̅_2^⋆ 0(1P)ℓ^+ν_ℓ and 2.77× 10^-3 for B^0 →D̅_2^⋆ -(1P)ℓ^+ν_ℓ, which are in good agreement with the experimental data. For the undetected channels, our results are ℬ(B_s^0→ D_s2^⋆-(1P) ℓ^+ν_ℓ)=3.76× 10^-3 and ℬ(B_c^+ →χ_c2(1P)ℓ^+ν_ℓ)=1.82× 10^-3. 
For the decays to final states 2P and 3P processes, all branching ratios are very small and cannot be detected in current experiments.In this paper, we focus on studying the different partial waves in the relativistic wave functions, and their contributions in semileptonic decays.(1), in the wave function for a 0^- pseudoscalar, B^+, B^0, B_s^0, or B_c^+, S-wave is dominant and provides the non-relativistic contribution; P-wave is sizable and gives the relativistic correction. While for a 2^+ tensor, D̅_2^⋆ 0, D_2^⋆ -, D_s2^⋆ -, or χ_c2, P-wave is dominant, combined with sizable D-wave, and tiny F-wave, where P-wave gives the non-relativistic contribution, others contribute to the relativistic corrections.(2), we note that, considering only the wave functions, the relativistic corrections for B, B_s, D̅_2^⋆, and D_s2^⋆ mesons are large, while the relativistic corrections for χ_c2 and B_c are small. However, when calculating the transition process, the overlapping integration of wave functions plays a major role. Thus we obtain similar relativistic effects, for example, 24.4% for B^+ →D̅_2^⋆ 0(1P)ℓ^+ν_ℓ, 28.8% for B_s^0→ D_s2^⋆-(1P) ℓ^+ν_ℓ and 22.1% for B_c^+ →χ_c2(1P)ℓ^+ν_ℓ.(3), when we look at the details, there are significant differences. For example, in the transition of B→ D_2^⋆(1P), the individual contributions of relativistic partial waves are significant, while in the overall result, they are in a dissipative relationship, resulting in a small overall relativistic effect. While in B_c^+ →χ_c2(1P), the individual contributions of relativistic waves are small, directly leading to a small overall relativistic effect.(4), when the process contains a radial excited state, the node structure in the wave function of the radial excited state plays an overwhelming role in the result, resulting in a very small branching ratio. AcknowledgmentsThis work was supported in part by the National Natural Science Foundation of China (NSFC) under the Grants Nos. 12075073, 12375085 and 11865001, the Natural Science Foundation of Hebei province under the Grant No. A2021201009. § APPENDIX:BETHE-SALPETER WAVE FUNCTIONIn the positive energy wave function of 0^- state, we have the following relationsA_1=M/2(f_1 (ω _1+ω _2)/m_1+m_2+f_2),  A_3=-A_1 M (ω _1-ω _2)/m_2 ω_1+m_1 ω_2, A_2=M/2(f_2 (m_1+m_2)/ω _1+ω _2+f_1),  A_4=-A_1 M (m_1+m_2)/m_2 ω_1+m_1 ω_2. For 2^+ states, we haveB_1 =1/2 M(m_1ω_2+m_2ω_1)[(ω_1+ω_2) q_⊥^2ζ_3+(m_1+m_2) q_⊥^2ζ_4+2 M^2ω_2ζ_5-2 M^2 m_2ζ_6], B_2 =1/2 M(m_1ω_2+m_2ω_1)[(m_1-m_2) q_⊥^2ζ_3+(ω_1-ω_2) q_⊥^2ζ_4+2 M^2ω_2ζ_6-2 M^2 m_2ζ_5] , B_3 =1/2[ζ_3+m_1+m_2/ω_1+ω_2ζ_4-2 M^2/m_1ω_2+m_2ω_1ζ_6],  B_5 =1/2[ζ_5-ω_1+ω_2/m_1+m_2ζ_6], B_4 =1/2[ω_1+ω_2/m_1+m_2ζ_3+ζ_4-2 M^2/m_1ω_2+m_2ω_1ζ_5] ,   B_6 =1/2[-m_1+m_2/ω_1+ω_2ζ_5+ζ_6] , B_7 =M/2ω_1-ω_2/m_1ω_2+m_2ω_1[ζ_5-ω_1+ω_2/m_1+m_2ζ_6],   B_8 =M/2m_1+m_2/m_1ω_2+m_2ω_1[-ζ_5+ω_1+ω_2/m_1+m_2ζ_6]. 50 lv2W. Wang, Y.-L. Shen, C.-D. Lv, M. A. Paracha, C. Wang, Phys. Rev. D 79, 054012 (2009).lv3R.-H. Li, C.-D. Lv, H. Zou, Phys. Rev. D 78, 014018 (2008). Kang:2018jzg X.-W. Kang, T. Luo, Y. Zhang, L.-Y. Dai, and C. Wang,Eur. Phys. J. C 78, no.11, 909 (2018). Dingfelder:2016twb J. Dingfelder and T. Mannel,Rev. Mod. Phys. 88, no.3, 035008 (2016).Aliev:2006gk T. M. Aliev, K. Azizi, and A. Ozpineci,Eur. Phys. J. C 51, 593 (2007).BaBar:2008dar B. Aubert et al. (BaBar Collaboration),Phys. Rev. Lett. 101, 261802 (2008). Belle:2021idw R. Van Tonder et al. (Belle Collaboration),Phys. Rev. D 104, no.11, 112011 (2021).BaBar:2011sxq J. P. Lees et al. 
(BaBar Collaboration),Phys. Rev. D 85, 011101 (2012).D0:2007ukf V. M. Abazov et al. (D0 Collaboration),Phys. Rev. Lett. 102, 051801 (2009). LHCb:2020ayi R. Aaij et al. (LHCb Collaboration), J. High Energy Phys. 07 (2020) 123.LHCb:2021tdf R. Aaij et al. (LHCb Collaboration),J. High Energy Phys. 01 (2022) 065.Belle-II:2023jgq F. Abudinén et al. (Belle-II Collaboration),Phys. Rev. D 107, no.7, 072002 (2023). bibi2 D. Bigi, P. Gambino, S. Schacht, J. High Energy Phys. 11 (2017) 061.BaBar:2001 B. Aubert et al. (BaBar Collaboration),Phys. Rev. Lett. 87, 091801 (2001).Belle:2001 K. Abe et al. (Belle Collaboration), Phys. Rev. Lett. 87, 091802 (2001). FajferS. Fajfer, J.F. Kamenik, I. Nisandzic, Phys. Rev. D 85, 094025 (2012).lv1Z.-R. Huang, Y. Li, C.-D. Lv, M. A. Paracha, C. Wang, Phys. Rev. D 98, 095018 (2018).scoraD. Scora, N. Isgur, Phys. Rev. D 52 (1995) 2783.puzzleV. Morenas, A. Le Yaouanc, L. Oliver, O. Pene, J. C. Raynal, Phys. Rev. D 56 (1997) 5668.colangeloP. Colangelo, F. De Fazio, N. Paver, Phys. Rev. D 58 (1998) 116005.bigiI. I. Bigi, B. Blossier, A. Le Yaouanc, L. Oliver, O. Pene, J. C. Raynal, A. Oyanguren, P. Roudeau, Eur. Phys. J. C 52 (2007) 975.me05G.-L. Wang, Q. Li, T. Wang,T.-F. Feng, X.-G. Wu, C.-H. Chang, Eur. Phys. J. C 82, 1027 (2022).wangvG.-L. Wang, T.-F. Feng, X.-G. Wu, Phys. Rev. D 101, 116011 (2020). BS equationE. E. Salpeter and H. A. Bethe, Phys. Rev. 84, 1232 (1951).SalpeterE. E. Salpeter, Phys. Rev. 87, 328 (1952).wang0-C. S. Kim and G.-L. Wang, Phys. Lett. B 584, 285 (2004).wang2+G.-L. Wang, Phys. Lett. B 674, 172 (2009).changwang1C.-H. Chang and G.-L. Wang, Sci. China. Phys. Mech. Astron. 53, 2005 (2010).eichtenE. Eichten, K. Gottfried, T. Kinoshita, K. D. Lane, T. M. Yan, Phys. Rev. D 17, 3090 (1978); 21, 313(E) (1980).part waveG.-L. Wang, T. Wang, Q. Li, and C.-H. Chang, J. High Energy Phys. 05 (2022) 006. Liu:2022kvs T.-T. Liu, S.-Y. Pei, W. Li, M. Han, and G.-L. Wang,Eur. Phys. J. C 82, no.8, 737 (2022). liweiW. Li, S.-Y. Pei, T. Wang, Y.-L. Wang, T.-F. Feng, and G.-L. Wang, Phys. Rev. D 107, no.11, 113002 (2023).peisyS.-Y. Pei, W. Li, T.-T. Liu, M. Han, G.-L. Wang, T.-H. Wang, Phys. Rev. D 108, no.3, 033003 (2023).PDGR. L. Workman et al. (Particle Data Group), Prog. Theor. Exp. Phys. 2022, 083C01 (2022).changwangC.-H. Chang, J.-K. Chen, and G.-L. Wang, Commun. Theor. Phys. 46, 467 (2006).MorenasV. Morenas, A. Le Yaouanc. L. Oliver, O. Pene, J.-C. Raynal, Phys. Rev. D 56, 5668 (1997).AlievT. M. Aliev, H. Dag, A. Kokulu, and A. Ozpineci, Phys. Rev. D 100 (2019) no.9, 094005.sunduK. Azizi, H. Sundu, and S. Sahin, Phys. Rev. D 88, 036004 (2013).cllL.-L. Chen, Y.-W. Ren, L.-T. Wang, and Q. Chang, Eur. Phys. J. C 82 no.5, (2022) 451.odaM. Oda, K. Nishimura, M. Ishida, S. Ishida, Prog. Theor. Phys. 103, 1213 (2000). D.ScoraD. Scora, N. Isgur, Phys. Rev. D 52, 2783 (1995).Ebert2D. Ebert, R. N. Faustov, V. O. Galkin, Phys. Rev. D. 61, 014016 (2000). DongH.-R. Dong, A. Le Yaouanc. L. Oliver, J.-C. Raynal, Phys. Rev. D 90, 114014 (2014).Barakat:2022lmr T. Barakat,Nucl. Phys. B 983, 115915 (2022). Azizi:2014nta K. Azizi, H. Sundu, and S. Sahin,Eur. Phys. J. C 75, no.5, 197 (2015). FaustovR. N. Faustov, V. O. Galkin, Phys. Rev. D. 87, 034033 (2013).SegoviaJ. Segovia, C. Albertus, D.R. Entem, F. Fernandez, E. Hernandez, M.A. Perez-Garcia, Phys. Rev. D. 84, 094029 (2011).EbertD. Ebert, R. N. Faustov, and V. O. Galkin, Phys. Rev. D. 82, 034019 (2010).IvanovM. A. Ivanov, J. G. Korner, and P. Santorelli, Phys. Rev. D 73, 054024 (2006).Hernandez E. Hernandez, J. Nieves and J. 
M. Verde-Velasco, Phys. Rev. D 74, 074008 (2006).changwang2C.-H. Chang, Y.-Q. Chen, G.-L. Wang, H.-S. Zong, Phys. Rev. D 65, 014017 (2002).sicheng zhangS.-C. Zhang, T. Wang, Y. Jiang, Q. Li, and G.-L. Wang, Int. J. Mod.Phys. A 32, 1750022 (2017).
http://arxiv.org/abs/2311.15748v1
{ "authors": [ "Wen-Yuan Ke", "Su-Yan Pei", "Tianhong Wang", "Guo-Li Wang" ], "categories": [ "hep-ph", "hep-ex" ], "primary_category": "hep-ph", "published": "20231127120628", "title": "Relativistic study on the semileptonic decays of $B_q$ mesons to orbital excited heavy Tensors" }
Analysis of the subsolar-mass black hole candidate SSM200308 from the second part of the third observing run of Advanced LIGO-Virgo Ester Ruiz Morales January 14, 2024 ====================================================================================================================================
While the performance of many text classification tasks has recently been improved due to Pre-trained Language Models (PLMs), in this paper we show that they still suffer from a performance gap when the underlying distribution of topics changes.
For instance, no study has yet assessed the performance degradation when a classifier is trained on political topics but tested on texts about sports or medicine.In light of the aforementioned challenges, our study offers the following novel contributions[The tools and the experimental setups are available at <https://github.com/dminus1/genre>]: * While our study primarily focuses on genre classification, the methodology we use to assess and mitigate domain transfer gaps can be broadly applied, making it suitable for other non-topical classifications such as authorship or sentiment identification;* We have created a large corpus with “natural genre annotation” covering a range of topics with some biases;* We empirically quantify the domain transfer gap on our corpus, demonstrating drops in F1 classification performance by 20-30 absolute percentage points; * We propose a data augmentation approach which involves training text generators that can produce synthetic documents in any of the genres present in the genre training corpus and on any topic, out of those identified by neural topic-modeling algorithm <cit.> trained on an unrelated topically diverse large corpus. * We verify that augmenting the training dataset with synthetics texts generated by our approach facilitates domain transfer by improving F1 classification metric by 2-6 absolute percentage points in average and on some topics as much as from 57.6 to 73.0.This improvement surpasses a general data augmentation baseline that generates synthetic documents but does not apply any domain transfer mechanisms that we propose here.* Through ablation studies, we verify that all the components of our augmentation approach are crucial. Also, by varying hyper-parameters, we can identify the optimal augmentation setup and avoid performance degradation.* Through a qualitative exploratory study with ChatGPT we were able to confirm that even a much larger language model can still suffer from a domain transfer gap. § RELATED STUDIES AND BASELINESThere have been studies that looked at impact of out-of-domain training data on PLM-based classifiers.In particular, hendrycks20pretrained noticed that while in general PLMs are more robust than previous models, they still suffer from spurious clues. However, they tested the transfer gap only on a few hand-picked datasets with similar tasks but different data distributions (e.g. sentiment analysis trained on book reviewsapplied to movie reviews), while here we are presenting an original methodology based on a neural topic model to investigate domain transfer between a wide variety of topics. 
Also, none of the prior works looked at domain transfer for genre/style classification tasks which we do here.Within the broader context of domain transfer, genre classification holds a unique position.Automatic genre classification has been recognised as an important task since the 1990s <cit.>.The effect of topical biases has been estimated empirically by considering the reduction in performance of genre classifiers across topics in the New York Times corpus <cit.>.Several studies have also demonstrated the success of PLMs with respect to the genre classification tasks <cit.>.However, there have been no studies of topical biases for these models.The split between topics and styles has been studied for a related task, including disentangled representation <cit.> and other methods of topic-style decomposition <cit.>.However, our study focuses on the numerical estimates of the topic transfer gap on large samples diverse in topics and in genres.A related research area concerns the use of causal models for interpreting the biases of neural predictions, for example, with respect to gender <cit.>.There have been studies to investigate biases in neural models by adding counter-factuals <cit.>. It has been noted that well-established data augmentation (DA) methods in domains such as computer vision and speech recognition <cit.>, relying on simple transformations of existing samples, cannot be easily applied to natural text since they can lead to syntactic and semantic distortions. For a survey ofDA approaches for various natural language processing tasks we refer a reader to feng2021survey.The survey mentions several studies showing that DA is generally much less beneficial when applied to out-of-domain data (as studied here), likely because “the distribution of augmented data can substantially differ from the original data." Whileonly a few of the surveyed works involved PLMs, the survey points out that PLMs have made many previously useful DA techniques obsolete since fine-tuned PLM-based classifiers already achieve high performance, as they have been pre-trained on large and diverse corpora. For those reasons, we decided not to contrast our approach with any of the classical pre-PLM domain transfer techniques, such as blitzer07 or daume10frustratingly. While up to our knowledge, none of the prior works has specifically looked into the domain transfer gap for genre (or style) classification,it is still worth to note several closely related works, some of them included in feng2021survey survey that involved PLMs not only as classifiers but also as generators for augmented data. This includes kumar2020data who looked at sentiment/intent/questionclassification, lee2021neural who targeted under-represented categories, edwards2021guiding who focused on selecting the seeds examples to train augmentation generation in the context of few-shot classification, and yang2020generative focused on low-resource in commonsense reasoning.Since the augmentation approach tried in those works is based onstraightforward training (fine-tuning)a PLM-based text generator using the existing data (without exercising any topical control), we include the results from this general approach in “aug baseline" column in addition to the baseline that does not attempt any augmentation (“off-topic" column in <ref>). 
Since the above mentioned works also have demonstrated that classical “back-translation" augmentation approach is substantially inferior to the PLM-based text generation, we decided not to include the former in our experiments. jin2022deep provides an overview of recent research in a closely related task of text style transfer (TST). Unlike TST, we are interested in keeping the topic, but not specifically concerned with preserving the content as long as the generated documents aid in domain transfer.The challenges maintaining coherent style and topic within longer texts (that exceed the current transformers' input limits of 500-4000 tokens) have been proposed to address by progressive generation <cit.>. In this study, we are not as much concerned with quality of output texts, but rather with their help in domain adaptation.§ METHODOLOGY Our study builds upon prior investigations into domain biases in text classification <cit.>, which largely depended on a limited set of hand-selected datasets with analogous tasks but varying distributions. We present a comprehensive methodology to assess and mitigate the domain transfer gap. The main idea is to simulate the situation when a classifier is trained on documents that lack a topic, e.g. medicine, and then to test it on the documents where such topic is well represented. This performance is contrastedwith the situation when the classifier is initially trained on the documents where this topic is represented well. While our empirical results focus on genre classification, our methodology is directly applicable to other classification tasks such as gender, authorship, or sentiment classification. We train two classes of models:* a topic model produced from a diverse corpus, even though it might be biased with respect to its genres, and * genre-classification models based on a PLM (such as Bert) which is fine-tuned on a genre-diverse corpus, even though each individual genre might be biased with respect to its topics.Figure <ref> illustrates the overall workflow for our experiments. §.§ Estimation of Topic ModelsFor our experiments, we needed as diverse topic model as possible so that we can assess the performance gaps when transferring between the topics. The topic model in this study was produced by a neural model <cit.> which can achieve better interpretability in comparison to traditional Latent Dirichlet Allocation (LDA) models <cit.>.More specifically, the Embedding Topic Model (ETM) differs from LDA by estimating the distribution of words over topics as:w_dn∼softmax(ρ^⊤α_z_dn)where ρ are word embeddings and α_z_dn are topic embeddings, dn refers to iteration over documents and topics, see() for the full description of ETM. 
For estimating the topic model, we used a topically-diverse corpus of ukWac <cit.> created by wide crawling of web pages from the .uk top level domain name (the total size of ukWac is 2 billion words, 2.3 million Web pages).As suggested by dieng20topic, the number of topics of a topic model can be selected by maximising the product of topic coherence (the average pointwise mutual information of the top words for a topic) by its diversity (the rate of unique words in the top k words of all topics).In this way we arrived at choosing 25 topics for the ukWac corpus, see <ref>, Topic Coherence of this model is 0.195, Topic Diversity is 0.781.In the absence of a gold test set for an unsupervised method, all of the topics are interpretable (the topic labels in <ref>) have been assigned by inspecting the keywords and a sample of documents).Topic 8 applies to short documents with residual fragments from HTML boilerplate cleaning in ukWac, so that the date and time indicators remain the only identifiable keywords for such documents.§.§ Genre Corpus We also needed a corpus with good coverage of several genres. Up to our knowledge, there is no large corpus for that purpose, so we combined several data sources into a corpus of “natural genre annotation” so that each source is relatively homogeneous with respect to its genres.The list of our genres follows other studies which detect text types which are common on the Web <cit.>.They have been matched to commonly used datasets, such as a portion of the Giga News corpus to represent News reporting and the Hyperpartisan corpus to represent news articles expressing opinions.The composition of the natural genre corpus is listed in Table <ref>.The corpus of natural genres is large, but it is biased with respect to its topics.For example, the Amazon reviews dataset contains a large number of book and music reviews, and a small number of reviews of office products and musical instruments.However, these are not the topics inferred by the topic model, as this division into topics exists only with the reviews dataset, while other sources of natural annotation haveno office products or musical instruments. What is more they are likely to have a very different structure of annotation labels even when there is some intersection between their topics.For example, the category labels assigned to the pages in Wikipedia are different from both the Amazon review labels and for the inferred ukWac topics, while bothas listed in <ref>.Having the topics for all sources as inferred by our topic model and the documents annotated with their genres gives two views on the same document,for example, a document which starts with*There's little need to review this CD after Daniel Hamlow's thoughtful and informative critique above, but I loved the CD so much I had to weigh in.In case you aren't familiar with his citations, he mentions the big three Brazilian music classics: Astrud Gilberto's "Jazz Masters 9" from Verve, "Jazz Samba" … can be described as a Review from its provenance from the Amazon reviews dataset and as primarily belonging to Topic 1 (Entertainment, <ref>) from its ETM inference. §.§ Transfer AssessmentThis subsection describes the methodology that we have developed to test the effect of a topic change. While this methodology is applicable to any non-topical classification, here, we describe how we use it with document genres. 
Our main goal here is being able to create training, validation and test sets on particular topics to experiment with a genre classification task, specifically knowledge transfer between the topics. We used the following procedure for estimating topical biases. For each topic as estimated by the topic model (e.g., “Entertainment"), we create a dataset, that we label as off-topic. For this, we take N documents of each class (document genre in our case).For example, for N = 100we take 100 argumentative texts, 100 instructions, 100 news reports, etc. such that the selected documents have the lowest scores with respect to that topic, e.g. documents not about entertainment. Through our experiments, we compare the classification results trained onthe off-topic datasets with those trained on on-topic datasets. The latter are constructed in exactly the same way except by selecting the documents with thehighest scores on the topic, e.g. those most relevant to entertainment. For each topic, we also created an on-topic test set making sure it does not overlap with the training sets. Validation sets were off-topic since within a domain transfer setting there isn't any on-topic training data available.Specifically, in the experiments below, we used 300 documents of each genre in a test set, 300 documents of each genre in a validation set, and varied the sizes of the training sets as stated in our section <ref>.This way we assess the “domain transfer": a scenario when a model trained on off-topic data needs to be applied to an on-topic dataset. Structuring our datasets that way has several advantages: 1) both on-topic and off-topic sets have same number of documents in each class (genre) and the same total size, which allows us to determine the transfer gap under the same conditions, and 2) the datasets are automatically balanced with respect to each class (genre), even while our original corpus is not, thus the comparison metrics are more reliable and interpretable. To build the genre classifiers, we fine-tune the ROBERTA-large <cit.> and BERT-large <cit.> models from the Hugging-Face library[<https://huggingface.co/>] with the the common in the prior research learning rate of 10^-5 for 6 epochs, using its Adam optimizer <cit.>. Following the standard validation procedure, we report the F1 score computed on the respective test set for the number of epochs that showed the best score on the validation (development) set. As a compromise between the reliability of our results and the processing time, after preliminary investigation we settled on working with the window of 1000 characters randomly positioned within a document.Random positioning mitigates the impact of document structure, e.g. an introductory question positioned at the start of the StackExchange dataset. Our experiments with human raters show that the windows obtained this way still provide sufficient information to determine the topic and genre.In order to mitigate the superficial differences between the sources, when training and applying our classifiers, we remove all the numbers and punctuation. We do not apply this filtering when training our text generators to preserve readability. We apply it to the generated texts instead. §.§ Data augmentation§.§.§ Our Keyword Extraction AlgorithmOur domain adaptation approach involves generating synthetic documents on a given topic. Thus, the generator is trained to receive a sequence of keywords and to generate a document in the desired category (genre in this study). 
We experimented with several variations of a heuristic algorithm to select the keywords and settled on the following approach after manually inspecting the quality of the generations and their topical relatedness.We are not much concerned how truthfully the keywords represent the content of the document, but rather how well they represent the topic to enable topic-focused generation. Thus, when deciding which words to extract as keywords, we promote those that are strong representatives of the document topic, which is quantitatively assessed by our topic model. It assigns each word (in the corpus) a score with respect to each topic between 0 and 1. The higher the score the stronger the word is related to the topic. Since some documents mix several topics, attimes with numerically similar proportions, we accordingly weight the individual word scores with the overall topic scores in the document. Finally, we also want to adjust for repeated occurrences of the same word. Thus, our word scoring formula (within a document) simply iterates through all the topics and through all the word occurrences in the document while adding up the word scores with respect to the corresponding topic:score(w,D) = ∑ _i ∈D _w∑ _t L(D,t) · L(w, t) where i goes over all the occurrences of the word w in the document D, t goes over all topics (25 in the study here), L(D,t) is the score of the document with respect to topic t and L(w, t) is the score of the word w with respect to topic t.We preserve only 10 top-scoring words in each document, so all the other words are discarded and the original sequence of the remaining words becomes the keyword sequence for the generator. Table <ref> in Appendix shows an example of extracted keywords along with how they are used to generate new synthetic documents, as detailed in the following subsection. §.§.§ Our Topical Augmentation Control Our suggested method of improving domain transfer proceeds by augmenting the off-topic training set with automatically generated on-topic documents. Thus, in a practical scenario, the test topics (keywords) don't have to be known in advance but can be extracted from previously unseen test documents from the target domain. The only tool required for this is an existing topic model, which can be built similarly to as we did here on any general corpus of a modest size, e.g. two billion words of ukWac, <cit.>, which is not resource-consuming.To achieve this we fine-tune a pre-trained language model into a separate generator for each of our genres (listed in <ref>).Our earlier experimenting with using a single model for all genres and a special token to specify the desired genre resulted in weaker results.For this fine-tuning,we use exactly the same N·6 documents as are in our off-topic training set, thus operating in a practical scenario when on-topic documents are not available.Each generator is fine-tuned to take a sequence of keywords extracted according to the algorithm detailed above as input and to generate a document in the genre corresponding to this generator and of the topic defined by the keywords. During fine-tuning, the generators learn to associate the input keywords with the content of the output document,which becomes an important mechanism of topic control and facilitating the domain transfer. 
We specifically used T5 as our generating model <cit.>. It is a unified text-to-text transformer, trained on the Colossal Clean Crawled Corpus (C4) to predict the next word based on the preceding words in an auto-regressive way. We used the small version since we did not observe any advantage in using the Base or Large T5 model in our early experiments, so we kept the less computationally intensive model. Its input format requires a prefix to indicate which downstream task is being fine-tuned, so we used the word “generate.” We trained each model for 16 epochs using the Simple Transformers library[https://simpletransformers.ai/] with the default learning rate of 0.001 and its Adam optimizer. For generation, we also use the following T5 hyper-parameters: number of beams = 1, top k = 50, top p = 0.95. The selected hyper-parameters were chosen after preliminary experimentation by inspecting the quality of the produced generations in terms of both topical and genre fit. Table <ref> in the Appendix illustrates our domain adaptation approach with examples of extracted keywords and synthetic documents generated from those keywords in different genres. One of our overall hyper-parameters is how many documents to generate. Our preliminary experimentation suggested that 1:1 was a near optimal ratio: the same number of original and synthetic documents. We include several other combinations in our empirical results below. § EXPERIMENTS The most time-consuming part of our experiments was fine-tuning the generators (T5) and the classifiers, at the cost of roughly 6000 hours on an NVIDIA GeForce RTX 2080. §.§ Comparison Results We assess the effect of domain mismatch and our approach to improving domain transfer by augmenting the training sets with synthetic on-topic documents. The difference between the accuracy obtained before and after generation demonstrates the efficiency of the augmentation model. Table <ref> shows the comparison results for 3 different sizes of training data: 1000, 100 or 30 documents per genre. As we can see, the topic mismatch effect is extremely significant: the average absolute F1 drop from on-topic to off-topic training set is around 20% for N = 1000 and 30% for smaller Ns. The average on-topic F1 score for the largest size is 86.4%, while in our tests the human raters achieved 93% on a sample of 100 documents of each genre. The average off-topic performance for that size drops to 66.8%. All three configurations (“aug adapt" columns) demonstrate 2-6 percentage point increases in F1 over non-augmented off-topic training sets (“off-topic" columns). At the same time, the straightforward “augmentation by generating" approach from prior works (“aug baseline" columns) does not show any noticeable improvement, even though prior work found it somewhat effective in several tasks not involving domain transfer. We hypothesise that this is because the general approach does not provide a mechanism to facilitate domain transfer, while our approach does. All the differences between our approach and the baselines are statistically significant at the level of alpha 0.01 according to a pairwise t-test.
This confirms empirically with high confidence that our augmentation procedure is beneficial for genre classification. While in this current study we prioritized reporting metrics averaged across all 25 topics rather than at the individual topic level, we can still observe that the magnitude of the transfer gap and the augmentation effects are normally consistent across all the configurations and models used, see <ref> in the Appendix. Still, there are some exceptions due to a large number of random factors involved, including the choice of off-topic documents, the quality of synthetic documents in terms of both genre and topic, the optimality of hyperparameters, and others. Qualitative analysis demonstrates that little recovery is possible in the case of a very strong correlation between the topics and genres, for example, scientific texts (Topic 7) mostly occur in the genre category of Academic texts; similarly, texts related to law (Topic 18) mostly occur in News reporting. The quality of generation in these topics for other genres remains low. §.§ Ablation Studies This subsection reports several ablation experiments that we conducted to additionally verify the effects reported above and to gain insight into the phenomena studied. In order to verify that the genre labels in our synthetic texts were important, we randomly shuffled their labels. This way, the augmented data acted only as noise. Not surprisingly, the average scores dropped to the baseline levels, which verified that using the proper model for each genre to generate the synthetic augmenting texts is important, and that the improvements reported above were not due simply to the change in the statistical properties of the training and validation sets or due to the addition of noise. We also looked at several ways of mixing the original and augmented data. Table <ref> presents the scores averaged across topics for the various sizes used. It can be observed that while some small improvements can be achieved by generating more documents, those gains are not statistically significant. On the other hand, very small numbers of added documents indeed result in statistically detectable drops. Using only synthetic documents results in drops to levels only slightly above or even below the baselines. We also observed that using keywords from randomly selected off-topic documents is significantly worse than using those from the on-topic documents, which confirms that using a domain adaptation mechanism such as the one suggested here is crucial. The details are in the last rows for each N in Table <ref> in the Appendix. We have also looked at the optimal choice of the number of keywords. While the details are presented in Figure <ref> in the Appendix, it is worth noting here that the optimal number is indeed around 10-20 keywords. Also, the augmentation effect drops to 0 on both ends: too few keywords means no topical control is performed, while 100+ keywords result in practically all the non-stop words being treated as keywords. This means the model does not really learn how to generate a document on a topic specified by a set of keywords but rather learns how to restore deleted stop-words from the given text. §.§ Qualitative Exploratory Study with ChatGPT As a further qualitative investigation into the problem, we have also confirmed that a much larger language model still suffers a domain transfer gap when tasked with genre classification. We have randomly sampled 72 triples consisting of a pair of non-identical genres and a topic.
Then, we compared binary classification accuracy by entering specially crafted prompts into ChatGPT[Accessed throughout March-April 2023], which is built on top of the GPT-3.5 model with approximately 355 billion parameters. An example prompt is presented in <ref> in the Appendix. Each includes 5 randomly selected document examples of each genre (5-shot). The choice of those numbers was dictated by the combination of input size limitations, our early experience, and prior studies on text classification with ChatGPT. For assessing a domain transfer gap, we followed the same methodology as described in section <ref>: we compared the binary classification performance when off-topic documents were used as prompt examples with when on-topic documents were used. We have indeed verified that the domain gap exists even in a language model of that size: the average accuracy with on-topic examples was 83% while the average accuracy when using off-topic examples was 42%. We also estimated human accuracy in this setup as 88%. When experimenting with our prompts, we discovered that it was crucial to use a chain-of-thought (CoT) approach (e.g., <cit.>): after presenting examples of both classes, we asked the model to “list at least three criteria by which Class 1 and Class 2 texts are different from each other." Examples of the criteria generated by the model can be found in Table <ref> in the Appendix. We have qualitatively (informally) observed that: 1) ChatGPT was able to use both on-topic and off-topic examples to produce criteria that looked potentially useful for genre classification, e.g. “Class 1 texts appear to be informational or factual, whereas Class 2 texts appear to be more conversational or personal in nature." or “Class 1 texts are typically more objective and neutral in tone, while Class 2 texts tend to be more subjective and expressive." 2) Both on-topic and off-topic examples occasionally resulted in criteria that are topic-reliant, e.g. “Class 1 texts provided are about musicians and their careers" or “Class 2 uses words like position, certified gold, and innovation." 3) The presence of topically-reliant criteria was stronger with off-topic examples. Next, within our prompt, we separately asked the model to apply each of the three criteria to the given test document, followed by a request to combine the criteria to make a classification decision. Examples can be found in Table <ref> in the Appendix. By inspecting the model's responses, we have observed that using off-topic examples caused the following types of chain-of-thought “confusion" to happen more often than using on-topic examples: 1) applying criteria different from those originally stated; 2) applying a criterion incorrectly; 3) erroneously “swapping" classes when combining them. This suggests that while ChatGPT has strong “emerging" capabilities for recognizing genres (see another confirmation in kuzman2023chatgpt), they are weaker when the examples are off-topic and so are more likely to “break" the chains of thought. § CONCLUSIONS We have demonstrated a severe degradation in a PLM-based document classifier when trained on one topic, such as politics, and tested on another, like healthcare. Rather than following the prior empirical studies on the impact of domain transfer that involved only a few hand-picked datasets with similar tasks but somewhat different data distributions, we have developed a methodology based on a neural topic model to assess the domain transfer gap between a wide variety of topics.
While our empirical results focus on genre classification, our methodology is applicable to other classification tasks such as gender, authorship, or sentiment classification. We have also shown that the topic transfer gap can be mitigated by means of proper topic control while generating additional training documents (augmentation). As a result of our approach, a model to predict a non-topical category (genres in our case) can be trained on the documents in one topic (e.g. politics) and applied to another (e.g. healthcare) even when there are no healthcare-related documents in the training corpus. We have also created a large corpus with natural genre annotation and a very general/diverse topic model. Both can be used in follow-up studies. Still, our study has certain limitations. The degree of improvement from augmentation is not uniform. For some topics we obtain much better results than for others, while occasionally the performance on the augmented set is even lower than on the original off-topic training set. This is likely to be related to the high degree of correlation between the topics and genres, for example, the lack of texts on the topic of law in genres other than news reporting in our corpus, thus leading to less successful attempts to generate discussions, academic articles or advice texts on this topic. We need to find better ways to improve off-topic generation when it makes no positive impact on the accuracy of classification of on-topic test texts, possibly by using very large language models. Nevertheless, through a qualitative exploratory study with ChatGPT we were able to confirm that even such larger language models still suffer from the domain transfer gap. Even while our approach does not solve this very challenging domain transfer problem completely, it suggests a direction in which a small but productive step can be made. Larger pre-trained language models, such as GPT-4, can be tried in the future for both generation and classification. Also, larger training sets can be explored, as well as “few-shot" settings. A number of approaches improving the quality of generated text, e.g. those based on Generative Adversarial Networks <cit.> or meta learning <cit.>, can be explored, as well as various methods to control the quality and topical fit of the generated texts. § LIMITATIONS We have already discussed several limitations of our study in the preceding section. Since our primary focus was on reporting metrics averaged across all 25 topics, this approach prevented us from discerning clear patterns or relationships between the properties of individual topics, domain gaps, and the effects of augmentation. More research is needed to investigate topic-level conditions for successful transfer. We intend to address this in future work. Given the computationally demanding nature of our experiments, we have limited our study to short text samples rather than full documents. Our corpora consisted exclusively of English documents, which might limit the empirical findings to languages with limited morphological complexity. While we utilized Latent Dirichlet Allocation, other topic models might also be suitable for assessing domain transfer, and alternative augmentation methods might be worth exploring. For better generalization, a corpus with a larger set of genres can be assembled and explored.
Additionally, other tasks such as authorship or sentiment classification could be explored in this context. § APPENDIX Having both topics inferred by a topic model from a diverse corpus, in addition to natural genres, creates a table of topical biases across the genre sources; see Table <ref>.
http://arxiv.org/abs/2311.16083v1
{ "authors": [ "Dmitri Roussinov", "Serge Sharoff" ], "categories": [ "cs.CL" ], "primary_category": "cs.CL", "published": "20231127185331", "title": "BERT Goes Off-Topic: Investigating the Domain Transfer Challenge using Genre Classification" }
These authors contributed equally to the work.These authors contributed equally to the work.These authors contributed equally to the work. Department of Physics and Materials Science, University of Luxembourg, L-1511 Luxembourg Strong local disorder in interacting quantum spin chains can turn delocalized eigenmodes into localized eigenstates, giving rise to many-body localized (MBL) phases. This is accompanied by distinct spectral statistics: chaotic for the delocalized phase and integrable for the localized phase. In isolated systems, localization and chaos are defined through a web of relations among eigenvalues, eigenvectors, and real-time dynamics. These may change as the system is made open. We ask whether random dissipation (without random disorder) can induce chaotic or localized behavior in an otherwise integrable system. The dissipation is described using non-Hermitian Hamiltonians, which can effectively be obtained from Markovian dynamics conditioned on null measurement.Through the use of the singular value decomposition and the introduction of new diagnostic tools complementing the singular-value statistics, namely, the singular form factor, the inverse participation ratio, and entanglement entropy for singular vectors, we provide a positive answer. Our method is illustrated in an XXZ Hamiltonian with random local dissipation. Diagnosing non-Hermitian Many-Body Localization and Quantum ChaosviaSingular Value Decomposition Aurélia Chenu January 14, 2024 ==================================================================================================== Since the early days of quantum mechanics, understanding the dynamics of many-body quantum systems continues to be a hard challenge. One of the chief questions on the behavior of generic, interacting quantum systems concerns the presence of quantum chaos <cit.>. The situation is also complicated by the fact that localization <cit.> can take place in interacting quantum systems <cit.>; this led to postulating the existence of a robust, non-ergodic phase of matter known as many-body localization (MBL). The competition between localized and chaotic quantum dynamics has been studied extensively in spin Hamiltonians <cit.>, with relevant implications for applications, including quantum annealing <cit.>, as well as fundamental questions, such as the lack of thermalization <cit.>. As a result, localization has become central for understanding complex quantum dynamics, with connections to quantum simulation experiments <cit.>, topological phases of matter <cit.>, Floquet time crystals <cit.>, and many others.The peculiarities in the dynamics of many-body quantum systems are not limited to the ideal situation where the system is isolated from the environment and the dynamics is unitary. In the last years,the conventional understanding of renormalization group approaches—by which the coupling to a thermal bath would render quantum fluctuations irrelevant <cit.>—has been shown to be incomplete. Indeed, evidence is being accumulated that open quantum systems may host unusual phases that would exist neither in a quantum unitary setting nor at equilibrium <cit.>. One of the many intriguing features of open quantum system dynamics is the phenomenon of dissipative localization <cit.> and dissipative quantum chaos <cit.>. For the unitary counterpart, both localization and chaos are defined through a web of relations among eigenvalues, eigenvectors, and real-time dynamics <cit.>. 
As a quantum system is made open and the constraints set by unitarity lifted, it is natural to expect that such relations may change in nature. While non-Hermitian (NH) localization is fairly well understood in the single-particle case, which admits exact solutions <cit.> and a clear renormalization-group treatment in 1D <cit.>, its many-body version has been the object of several numerical studies, suggesting the presence of a stable, localized phase <cit.>. These studies mostly relied on the eigendecomposition of large NH matrices. Because of non-Hermiticity, however, the spectral indicators of Hermitian localization had to be generalized to the NH setting, causing some ambiguity given the complex nature of eigenvalues <cit.> and the non-orthonormality of right and left eigenvectors. In particular, one of the most widely used indicators of localization is the fraction of non-real eigenvalues <cit.>, but this seems sensitive to NH localization only in the presence of imbalanced hoppings <cit.>. Recent works put forward the idea that using the singular value decomposition, one can circumvent certain problems set by biorthogonal quantum mechanics <cit.>, since the left and right singular vectors are always orthonormal, and by complex eigenvalues, as the singular values are always real. This approach was benchmarked against standard random matrix ensembles <cit.> and has not yet been used to study the localization transition of many-body open systems in finite dimensions. In this work, we fill the gap by studying non-Hermitian many-body localization via the singular value decomposition. Our objective is twofold. First, we show that the singular value decomposition clearly distinguishes between the chaotic and localized regimes in NH models, providing cleaner and more robust numerical indicators than those obtained from the eigendecomposition. Second, we show that random local dissipation in an otherwise clean, integrable XXZ Hamiltonian induces a crossover to NH quantum chaos for small dissipation strength, followed by NH localization for large dissipation. These crossovers are similar to the ones caused by a purely Hermitian disorder, so our results provide one more point of contact between Hermitian and NH MBL. Model.—The tools we introduce to diagnose NH quantum chaos and MBL are illustrated in a model made of an integrable, interacting Hermitian term and a disordered non-Hermitian contribution describing random site-dependent losses, namely, Ĥ = Ĥ_XXZ - i Γ̂/2, with Ĥ_XXZ = J ∑_i=1^N ( Ŝ_i^x Ŝ_i+1^x + Ŝ_i^y Ŝ_i+1^y + ΔŜ_i^z Ŝ_i+1^z), Γ̂ = ∑_i=1^N γ_i ( Ŝ^z_i + 1/2). Above, Ŝ_i^x,y,z = 1_2^⊗ (i-1)⊗1/2σ̂^x,y,z⊗1_2^⊗ (N-i) are spin-1/2 operators acting on site i (σ̂^x,y,z are the Pauli matrices). The coefficient J is set to unity to fix the energy scale, and we further take Δ = 1. The rates γ_i are independently sampled from a uniform distribution over the interval [0,γ]. We assume periodic boundary conditions; this does not influence the system's behavior since the hoppings are symmetric (contrary to the Hatano-Nelson model <cit.>, which we also analyze in detail in App. <ref>). As the magnetization is conserved, we choose to work in the zero magnetization sector, of dimension D = \binom{N}{N/2}. The XXZ Hamiltonian (<ref>) is an integrable many-body system. It has been extensively studied when complemented with random, local magnetic fields ∑_i h_i Ŝ^z_i, where the h_i's are random variables, e.g., uniformly distributed over [-h,h].
As such, it has been used to probe a transition between chaos and integrability, which occurs as a function of the disorder strength <cit.>. The XXZ chain with weak disorder exhibits chaotic spectral properties, described by the Gaussian orthogonal ensemble (GOE), and delocalized eigenstates, while in presence of a strong disorder, it shows integrable spectral properties and localized eigenstates, at least for the system sizes accessible by numerics. By contrast, the existence of a finite-disorder MBL phase with local integrals of motion <cit.> in the thermodynamic limit is still debated <cit.>.Note that the NH Hamiltonian Ĥ we consider here can be obtained from the full Lindblad master equation with coherent dynamics driven by Ĥ_XXZ, and dissipative dynamics dictated by the quantum jump operators √(γ_i)Ŝ^-_i, whereŜ_i^± = Ŝ_i^x ± iŜ_i^y. The Lindblad equation can be regarded as the unconditional evolution of the system, that is, averaged over a large number of trajectories <cit.>. Focusing on the no-jump trajectories only (null-measurement condition), the open system dynamics is described through the effective NH Hamiltonian Ĥ (see App. <ref>). In turn, this physical origin of Ĥ leads to non-negative jump rates γ_i ≥ 0 <cit.>.We perform our investigation using the Hamiltonian in Eq. (<ref>) as a toy model. We stress that our methods are not model dependent and that similar results are found in another commonly considered NH model with asymmetric hoppings, i.e., the interacting Hatano-Nelson model <cit.>, as detailed in App. <ref>.Eigen– vs singular–value decomposition.—Standard Hermitian Hamiltonians have real eigenvalues and orthonormal eigenstates, whose physical meanings are the possible energy measurement outcomes and the corresponding quantum states of the system, respectively. For this reason, the eigendecomposition (ED) of a Hermitian Hamiltonian has a fundamental role in quantum mechanics. Importantly, the ED is realized with a single unitary operator, whose columns (or rows) are the eigenstates.In turn, NH Hamiltonians, even when physically motivated, generally have a complex spectrum and non-orthogonal eigenvectors. For a diagonalizable NH Hamiltonian Ĥ, the ED can be generalized with the tools of biorthogonal quantum mechanics <cit.>. This approach, well-developed and widespread in the NH literature, has been intensely used to (attempt to) recover many known results from the Hermitian setting <cit.>. Making use of the eigenvectors of Ĥ^†,orthogonal to those of Ĥ, it is possible to resolve the identity and therefore diagonalize the Hamiltonian using two different non-unitary (yet invertible) operators. Notwithstanding, a complex spectrum and the use of both right and left eigenvectors (those of Ĥ and Ĥ^†, respectively) pose a challenge to the extension of certain techniques and quantities that are well defined in the Hermitian case, as the latter rely on a real spectrum and orthonormal states. As an example, we mention here the definition of the complex spectral gap ratios for dissipative quantum chaoticsystems <cit.>, and the various definitions of topological invariants in terms of right and/or left eigenvectors that have been proposed <cit.>.In view of the above, there has been a recent growing attention in the NH literature towards the use of the singular value decomposition (SVD) <cit.>, which can be regarded as a generalized version of the ED. We emphasize that for (non-)Hermitian Hamiltonians the ED and the SVD are (not) related, with details in App. <ref>. 
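As a concrete numerical illustration of this contrast (our sketch, not part of the original analysis), one can draw a generic non-Hermitian matrix and verify that its right eigenvectors fail to be orthonormal while left/right pairs are biorthogonal, whereas the singular vectors returned by the SVD are orthonormal by construction.

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
D = 50
H = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))   # generic NH matrix

# Right (R) and left (L) eigenvectors matched to the same eigenvalues
w, L, R = eig(H, left=True, right=True)
print(np.allclose(R.conj().T @ R, np.eye(D)))      # False: right eigenvectors not orthonormal
overlap = L.conj().T @ R                           # biorthogonality: off-diagonals vanish
print(np.max(np.abs(overlap - np.diag(np.diag(overlap)))))   # ~1e-13

# Singular vectors, in contrast, form orthonormal bases
U, s, Vh = np.linalg.svd(H)
print(np.allclose(U.conj().T @ U, np.eye(D)),
      np.allclose(Vh @ Vh.conj().T, np.eye(D)))    # True True
```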
From a purely technical standpoint, the SVD, compared to a biorthogonal ED, has the advantage of providing real (and positive) singular values and two sets of (independently) orthonormal singular vectors. Notice that, though the biorthogonal left and right eigenvectors of a NH Hamiltonian can be made biorthonormal, it is not possible to normalize both simultaneously. By contrast, the left and right singular vectors are automatically normalized, and therefore they both correspond to physical states. This represents a strong motivation to use the SVD for NH Hamiltonians to generalize and study well-established Hermitian phenomena. Indeed, the SVD has been shown to be instrumental in describing the bulk-boundary correspondence in NH models <cit.>, and more recently, to study the statistics of NH random matrices and diagnose dissipative quantum chaos <cit.>.There is yet another motivation to use the SVD for NH Hamiltonians. For Hermitian Hamiltonians, a constant diagonal shift does not affect the physics; the ED captures this fact,as the eigenvalues are rigidly shifted and the eigenstates unaffected. However, the SVD issensitive to a diagonal shift, making its physical meaning rather questionable inHermitian systems. By contrast, any diagonal shift of a NH Hamiltonian does change the physics, as exemplified in a vectorized Lindbladian (a NH matrix) that admits a steady state (zero eigenvalue). A rigid spectral shift along the real axis would either introduce amplified modes or increase the decay rates, while a shift along the imaginary axis could remove the steady state. Since such a shift is captured by the SVD, it seems a more suited decomposition to study NH Hamiltonians.For these reasons, we use the SVD to study the model (<ref>). Its Hermitian version [γ_i→ i h_i] exhibits MBL and chaotic behavior for strong and weak disorder, respectively. Here, we investigate whether random non-Hermiticity, which physically corresponds to random losses, cf. Eq. (<ref>), can induce a chaotic to integrable crossover. We do so by using the singular form factor, a new tool we define below, and the singular-value statistics <cit.>. Furthermore, we study the localization transition of the singular vectors.Our results further motivate the use of the SVD as a sensitive tool to generalize Hermitian phenomena, such as MBL, to the NH setting. Dissipative quantum chaos: the singular form factor.—One of the defining features of quantum chaos is the level spacing distribution, which is well known in random Hermitian Hamiltonians taken from Gaussian ensembles (GOE, GUE and GSE) and Hamiltonians whose eigenvalues are not correlated (Poisson ensemble) <cit.>. Indeed, the spectrum of a chaotic Hamiltonian is conjectured to have a level spacing distribution that follows random matrix behavior <cit.>, while the spectrum of an integrable Hamiltonian is uncorrelated, and its level spacing is expected to follow an exponential (but usually referred to as Poisson) distribution <cit.>. However, to be able to make such statements for a specific system,the spectrum must be unfolded before computing the level spacing distribution to remove the global energy dependence of the eigenvalue density <cit.>. Two alternative measures to extract information about the onset of chaos, while circumventing the unfolding procedure, are the spectral form factor (SFF) <cit.> and the spectral ratio statistics <cit.>. 
For a standard Hermitian Hamiltonian Ĥ, we recall that the spectral form factor is defined as SFF(t) = | ∑_n e^-iE_nt/D |^2, where {E_n} are the eigenenergies of Ĥ, and D is the Hilbert space dimension. One may also express the SFF as SFF(t) = |⟨ψ| e^-iĤ t|ψ⟩|^2, i.e., as the return probability of the infinite-temperature coherent Gibbs state (CGS), |ψ⟩=∑_n |E_n⟩/√(D), where |E_n⟩ are the eigenstates of Ĥ. This form is particularly handy and has been used to generalize the SFF to dissipative and non-Hermitian dynamics <cit.>. The SFF is a time-dependent quantity with distinct features at different time scales. In both integrable and chaotic systems, the ensemble-averaged SFF decays at early times and saturates to a plateau of value 1/D at very late times [Here we are assuming that the energies have no special relations among themselves, as is generically expected in many-body, interacting quantum systems.]. The main difference between the chaotic and integrable behavior of the SFF is in the region between the decay and the plateau: generically, integrable systems go directly from decay to plateau, while chaotic systems exhibit a linear growth between these two regimes. This additional “ramp” stems from correlations between eigenvalues; it is visible whether or not the spectrum is unfolded <cit.>. More generally, a region where the SFF features values below the value of the plateau is often referred to as the correlation hole and its existence is an indication of correlations between eigenvalues.We generalize this return probability to a NH Hamiltonian Ĥ by introducing the singular form factor (σFF): σFF(t) = | 1/D∑_n e^-iσ_nt|^2 = |⟨ψ_R| e^-it√(Ĥ^†Ĥ)|ψ_R⟩|^2 .This extends the SFF to the NH case via the SVD: σ_n are the singular values of Ĥ and |ψ_R⟩=∑_n |v_n⟩/√(D) is the right infinite temperature CGS, built from its right singular vectors|v_n⟩ (see App. <ref>). Note that Eq. (<ref>) can also be written in terms of the left singular vectors |u_n⟩, replacing |ψ_R⟩ with |ψ_L⟩=∑_n |u_n⟩/√(D) and Ĥ with Ĥ^†. We argue that the σFF is a good indicator of quantum chaos in a NH setting, being able to detect the presence of correlations among singular values.Figure <ref>ashows the σFF for various disorder strengths in the considered model, Eq. (<ref>). The σFF exhibits a correlation hole before the plateau for small disorder. This correlation hole closes as the disorder strength gets larger, indicating the loss of correlations between the singular values.In parallel to the form factor, the distribution of the spectral ratios r_n=min(s_n+1,s_n)/max(s_n+1,s_n), where s_n=E_n+1-E_n are the level spacings between ordered eigenvalues, is also used as a spectral probe for chaos vs integrability <cit.>. Because it involves ratios of level spacings, the density of states cancels out, removing the need for unfolding to compare systems with different global densities. Distributions of r_n are known for the GOE, GUE, GSE and Poisson ensemble <cit.>. In our case, the relevant ensembles are: the GOE for low disorder, and the Poissonian ensemble for high disorder. The probability density distributions, p(r_n), and their average values, r, can be found in Ref. <cit.>.The statistics of the singular values can also be studied via the spectral ratios defined above, replacing E_n with σ_n in the definition of the level spacing s_n <cit.>. This idea has recently been used to classify the singular-value statistics of NH random matrices <cit.>, and we extend it to study the chaotic to integrable crossover. 
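For concreteness, the following sketch (ours; the construction of the Hamiltonian matrix and the disorder averaging are omitted) shows how the two singular-value diagnostics just introduced can be evaluated for a single non-Hermitian Hamiltonian matrix H.

```python
import numpy as np

def singular_diagnostics(H, times):
    """Singular form factor and mean singular-value spacing ratio for one
    disorder realization of a non-Hermitian Hamiltonian matrix H."""
    sigma = np.linalg.svd(H, compute_uv=False)     # singular values of H
    D = sigma.size

    # sigma-FF(t) = |(1/D) sum_n exp(-i sigma_n t)|^2
    sff = np.abs(np.exp(-1j * np.outer(times, sigma)).mean(axis=1)) ** 2

    # spacing-ratio statistic on the ordered singular values
    s = np.diff(np.sort(sigma))                    # level spacings
    r = np.minimum(s[1:], s[:-1]) / np.maximum(s[1:], s[:-1])
    return sff, r.mean()   # r.mean() ~ 0.53 for GOE-like, ~ 0.39 for Poisson-like statistics
```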
In this respect,Figs. <ref>b–c clearly display a crossover from GOE to Poisson statistics as the dissipation strength γ is ramped up. In particular, Fig. <ref>b suggests the presence of a finite-size crossover around γ_c/J ≳ 9. In App. <ref> we further show that, in the NH case, the results for the generalizations of the ratio statistics to complex eigenvalues <cit.> are rather vague compared with the results for the singular values. Our results thus support the use of singular values statistics as a diagnostic of dissipative chaos and further point to the occurrence of a localized regime, as detailed below.Dissipative localization of singular vectors.—The singular-value indicators presented in Fig. <ref> support the presence of a localized regime in the XXZ model with random dissipation, Eq. (<ref>). The localization induced by disorder, however, is better understood from a real-space perspective, as the name itself suggests. It is thus interesting to see whether the singular vectors of NH models display the same signatures of localization as their Hermitian counterparts (or even as NH eigenvectors). For a single particle, the eigenstates of a Hermitian and localized Hamiltonian have a well-understood real-space structure. Each eigenstate |E_n⟩ is concentrated around its localization center 𝐱_n, and its decaying profile is characterized by a localization length ξ, namely ⟨𝐱|E_n⟩∼ e^-|𝐱-𝐱_n|/ξ. A similar situation takes place in single-particle, NH, localized Hamiltonians, where the disorder-induced localization competes with the localization yielded by the non-Hermitian skin effect <cit.>.In the many-body case, the situation is more complicated. Even in the Hermitian setup, there seems to be no simple localization in Hilbert space <cit.>. Rather, the eigenstates of MBL Hamiltonians are believed to be also eigenstates of local integrals of motion (LIOMS) <cit.> and to obey the area law of entanglement <cit.>.Previous works have studied some aspects of NH, localized, many-body eigenvectors, e.g.,identifying a crossover from volume to area law for the entanglement entropy <cit.>. Here, we perform a similar analysis, though using the singular vectors of NH models, which, we recall, form an orthonormal basis for the Hilbert space,contrary to the eigenvectors. Thus, we show that the SVD allows discerning between localized and ergodic regimes even at the level of singular vectors; this extends the use of the SVD beyond diagnosing dissipative chaos <cit.>. For simplicity, we use two commonly considered indicators: the inverse participation ratio (IPR) and the entanglement entropy across a bipartition. Our analysis is based on the right singular vectors, the left ones yielding similar results.The IPR ofsingular vectorsis defined as the ensemble average of ∑_k=1^D |⟨e_k|v_n⟩|^4 /D, where {|e_k⟩} is the computational basis and |v_n⟩ are the (right) singular vectors. It is expected that IPR≈1/D for delocalized states (as obtained for |v_n⟩ uniformly spread over the computational basis), while IPR≈ 1 if |v_n⟩ is localized on a single Fock state. Figure <ref>a presents the logarithm of the IPR of singular vectors, scaled with system size: our data shows the presence of a finite-size crossoverfrom delocalized to localized singular vectors around a value γ_c/J ≈ 9, consistent with the picture extracted from the average gap ratio (r-parameter) statistics.We further present our results for the entanglement entropy across a bipartition in Fig. <ref>b. 
Recall that, for a generic state |ψ⟩, the entanglement entropy is defined as S_E = - Tr{ρ_A logρ_A}, where ρ_A = Tr_B(|ψ⟩⟨ψ|) and A ∪ B form a bipartition of the chain in two intervals. As for the IPR, the entanglement entropy supports the presence of a localized regime: it crosses over from a volume law at small dissipation to an area law at large dissipation, again consistently indicating a critical value γ_c/J≈ 9 for the system sizes considered. Remarkably, the same analysis of the IPR and entanglement entropy with the eigenvectors does not display such a clear crossover, thus strongly motivating the use of the SVD (see App. <ref>). While a weak, inhomogeneous dissipation breaks the integrability of the XXZ chain, making it (dissipatively) chaotic, more disordered losses localize it again and restore integrability. Conclusions.—We have investigated the role of a disordered dissipative term on an otherwise clean, integrable, interacting quantum system. Adding such a term makes the system evolve under an effective non-Hermitian (NH) Hamiltonian, physically representing the average evolution of quantum trajectories conditioned on no quantum jumps. The eigendecomposition of NH Hamiltonians yields complex eigenvalues and non-orthogonal (left and right) eigenvectors. We argued in favor of using the singular value decomposition (SVD) and showed that, indeed, the singular values can be used to detect a crossover from chaotic to integrable spectral features, and the singular vectors to probe a crossover from delocalization to localization. The crossover takes place at the dissipation strength γ_c/J ≈ 9, which is comparable to the Hermitian critical disorder strength h_c/J ≈ 4 seen in finite-size numerics, once one realizes that γ_i ∈ [0,γ] is half the width of the interval h_i ∈ [-h,h]. As an additional tool to study the singular value statistics, we introduced the singular form factor (σFF). We showed that the σFF features a correlation hole when the dissipative disorder is weak, and that the correlation hole closes for large dissipative disorder. Despite providing an effective description of the time evolution of open systems, NH Hamiltonians do not capture all the features of open system dynamics, e.g., the entanglement behavior. In this setting, random dissipation-induced localization points to a quantum dynamics that is highly sensitive to the effect of inhomogeneities. This contrasts with a homogeneous dissipation which, in our case, does not induce localization. A crucial point, in the Hermitian setting, is how the chaotic/localized crossover scales with system size. For the dissipative case, this is not as relevant because the NH evolution describes an exponentially small (in system size and time) fraction of trajectories. In turn, our findings are meaningful precisely for small system sizes: only in this case can the chaotic/localized behaviors actually be observed in experiments. Our results are thus relevant for noisy intermediate-scale quantum (NISQ) devices <cit.>, since we show that the Hamiltonian properties are significantly altered by disordered dissipation. Acknowledgments.—We thank Masahito Ueda for useful discussions, and Oskar A. Prośniak and Pablo Martínez-Azcona for careful reading of the manuscript.
This research was funded in part by the Luxembourg National Research Fund (FNR, Attract grant 15382998), the John Templeton Foundation (Grant 62171), and the QuantERA II Programme that has received funding from the European Union’s Horizon 2020 research and innovation programme (Grant 16434093). The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation. The numerical simulations presented in this work were in part carried out using the HPC facilities of the University of Luxembourg. § EFFECTIVE NON-HERMITIAN HAMILTONIANS Several theoretical frameworks have been developed to describe the dynamics of open quantum systems: quantum trajectories <cit.>, Zwanzig-Mori projection operators <cit.> and collision models <cit.>, to name a few. Here, we make the common assumptions of having an infinitely large Markovian bath, with a fast relaxation timescale compared to that of the system, to which it is weakly coupled. Formally, this is reflected in the possibility of writing down a Gorini–Kossakowski–Sudarshan–Lindblad (GKSL) master equation for the reduced density matrix ρ_t of the system: ρ̇_t = - i[Ĥ,ρ_t] + ∑_j 𝒟[√(γ_j)L̂_j]ρ_t, with the dissipator 𝒟[𝒪̂]ρ = 𝒪̂ρ𝒪̂^† - (1/2){𝒪̂^†𝒪̂, ρ}. Above, the Hermitian Hamiltonian Ĥ describes the dynamics of the system alone, while the interaction with the bath is encoded in the jump operators 𝒪̂ = √(γ_j)L̂_j, γ_j>0. The introduction of an effective non-Hermitian Hamiltonian, Ĥ_eff = Ĥ - i/2∑_j γ_j L̂_j^†L̂_j, allows re-writing the GKSL equation as ρ̇_t = - i (Ĥ_effρ_t - ρ_t Ĥ_eff^†) + ∑_j γ_j L̂_jρ_t L̂_j^†. This equation separates the density matrix evolving under Ĥ_eff from the jump terms. The case we describe in the main text, Eq. (<ref>), is a NH Hamiltonian and corresponds to a dynamics with no jumps; the dynamics is then governed by the effective Hamiltonian only. Two important remarks are in order. First, the use of the GKSL equation automatically implies that an average over the bath influence is taken at the level of the density matrix. This fact implies that quantities that are non-linear in the density matrix, like entanglement, are not well captured within the GKSL framework <cit.>. Second, no-jump trajectories constitute an exponentially small fraction (both in time and in the system size) of all the quantum trajectories since, at each time and for each spin, one needs to impose that no jump occurs: these events are typically independent and their probabilities multiply. Despite these two caveats, we nevertheless study a many-body system under strong local dissipation, and moreover focus on the MBL transition, which is usually linked to entanglement properties. This might seem a contradiction, but it is only an apparent one. Indeed, the use of non-Hermitian Hamiltonians has already been shown to capture well entanglement transitions in many-body quantum systems <cit.>. Additionally, averaging the density matrix before computing the entanglement is reminiscent of the annealed average usually considered in disordered systems <cit.>, which provides an approximation that, in certain cases, is also accurate. § GENERAL PROPERTIES OF THE EIGEN- AND SINGULAR VALUE DECOMPOSITION Here, we briefly review some known properties of the eigen- and singular-value decompositions of Hermitian and non-Hermitian Hamiltonians.
We interchangeably refer to the matrix representation of our Hamiltonian in some basis simply as the “Hamiltonian”, as we focus on a finite-dimensional Hilbert space of dimension D. To fix the ideas, one can take the Hamiltonian Ĥ to describe the XXZ spin chain, Eq. (<ref>), in the zero magnetization sector, corresponding to D = \binom{N}{N/2}. §.§ Remarks on the eigendecomposition Consider a Hermitian Hamiltonian H (we drop the hat as we refer to its matrix representation). It can be diagonalized as H = WΛ W^† = ∑_n E_n |w_n⟩⟨w_n| through a single unitary transformation, W = (|w_1⟩,…,|w_d⟩), with Λ = diag(E_1,…,E_d) the diagonal matrix formed by the real eigenvalues. The eigenstates of H, {|w_n⟩}, are orthogonal and can be made orthonormal; so, they resolve the identity, 1_d = ∑_n |w_n⟩⟨w_n|. The eigenvalues and eigenstates of a Hermitian operator H have a physical meaning in quantum mechanics, being respectively the outcomes of the measurements of H and the corresponding states the system collapses to after the measurement. In turn, a non-Hermitian Hamiltonian H ≠ H^† is not guaranteed to be diagonalizable. The lack of diagonalizability comes from a degeneracy of eigenvalues and eigenvectors, at what are known as exceptional points <cit.>. Away from these degeneracies, we can assume H to be diagonalizable, although its eigenvectors are in general not orthogonal and its eigenvalues can be complex. A way out of this, which has been intensely used in the non-Hermitian physics literature <cit.>, is to use both the right and left eigenvectors of H, satisfying H|R_n⟩ = E_n|R_n⟩ and ⟨L_n|H = E_n⟨L_n|, respectively. These are not orthonormal sets themselves, as ⟨R_n|R_m⟩≠δ_nm (same with L), but are biorthogonal, i.e., ⟨L_n|R_m⟩ = c δ_nm. The constant c can be set to one to make the right and left eigenvectors biorthonormal. This way, the identity can be resolved as 1_d = ∑_n |R_n⟩⟨L_n|. Note that, if the right and left eigenvectors are biorthonormal, one can still normalize the right ones but not the left ones, and vice versa. Assuming biorthonormality, one can make the decomposition H = RΛ L^† = ∑_n E_n |R_n⟩⟨L_n|, where R = (|R_1⟩,…,|R_d⟩) and L = (|L_1⟩,…,|L_d⟩) are the non-unitary transformations that make H diagonal, and where the eigenvalues E_n are now generally complex. §.§ Brief review of the singular value decomposition Any Hamiltonian H, be it Hermitian or non-Hermitian, can be decomposed through the singular value decomposition (SVD) into H = UΣ V^† = ∑_n σ_n |u_n⟩⟨v_n|, where U = (|u_1⟩,…,|u_d⟩) and V = (|v_1⟩,…,|v_d⟩) are unitaries whose columns are the left and right singular vectors, respectively, and Σ = diag(σ_1,…,σ_d) is the positive semi-definite matrix of singular values. As U and V are unitaries, their columns are orthonormal and resolve the identity without any need to resort to biorthogonality. The SVD makes explicit how a general matrix H acts on an orthonormal basis V, to get a rotated orthonormal basis U stretched or compressed by the real and non-negative numbers on the diagonal of Σ. From an operational point of view, one can obtain the SVD of a matrix by using an eigendecomposition procedure. The basis V is the orthonormal eigenbasis of the Hermitian matrix H^† H with eigenvalues {σ_n^2}, while the orthonormal basis U is the eigenbasis of the Hermitian matrix HH^†, sharing the same eigenvalues.
By standard manipulations in linear algebra, one can see that the SVD of a matrix H can also be obtained from the eigendecomposition of the cyclic matrix C = [ 0 H; H^† 0 ]. C is a 2D × 2D matrix with eigenvalues {±σ_n}, associated with the eigenvectors (± u_n, v_n)^T / √(2). It is important to stress that the two procedures described above to obtain the SVD of a matrix from eigendecomposition routines are equivalent in principle, but different in practice. The use of either H^† H or H H^† does not increase the matrix dimension, but yields the singular values squared. This results in a loss of precision, which is particularly severe for the small singular values—which are the ones that we used in the main text. On the other hand, the cyclic matrix C provides directly the singular values σ, albeit with a doubled computational cost (typically, one gets both σ_n and -σ_n), and with the caveat that the small σ_n's lie at the center of the spectrum of C—and procedures such as shift-invert or polynomial filtering must be used. § COMPLEX SPECTRAL GAP RATIOS In the case of an effective open system Hamiltonian such as Eq. (<ref>), the spectrum is complex and there is no clear notion of ordering the eigenvalues. One can nevertheless find the nearest neighbor E_n^nn of each eigenvalue E_n on the complex plane, defined as the eigenvalue E_n^nn which satisfies |E_n^nn-E_n| = min_m≠ n |E_n-E_m|. As explained in the main text, to avoid the need to unfold the spectrum, we would like to consider a ratio of energy-dependent quantities. Another natural such quantity would be the next-nearest neighbor E_n^nnn for each eigenvalue, which would amount to ordering the set {|E_m-E_n|}_m≠ n and selecting the second entry to find E_n^nnn. The spectral ratio to be studied is then the complex ratio <cit.> z_n ≡ (E_n^nn - E_n)/(E_n^nnn - E_n). Note that this definition is similar but not the same as the ratio of closest-neighbor and 2nd closest-neighbor defined in Ref. <cit.> for real eigenvalues. In that work the ratio of the absolute values of the differences between levels (i.e., distances) was considered, while in the definition (<ref>) it is not the ratio of distances which is computed, but rather the ratio of differences. More specifically, the ratio for real eigenvalues defined in Ref. <cit.> takes values between 0 and 1, while the definition (<ref>), restricted to real eigenvalues, takes values between -1 and 1. As in the Hermitian case, it is often useful to extract a single number from the ratio statistics as a probe of the type of spectrum. In the complex case z_n = r_n e^i θ_n with 0 ≤ r_n ≤ 1 because |E_n^nn-E_n| ≤ |E_n^nnn-E_n|. One such single-number probe would be the averages r = ⟨ r_n ⟩ and cosθ = ⟨cosθ_n ⟩. Table <ref> shows the values expected from uncorrelated complex eigenvalues (Poisson ensemble) and random matrix ensembles, as reported in Ref. <cit.>. The Ginibre Unitary Ensemble (GinUE) consists of complex matrices with independent, identically distributed complex entries taken from a Gaussian distribution, with β=2 in the joint eigenvalue distribution. The Toric Unitary Ensemble (TUE) is a generalization of the Circular Unitary Ensemble (where eigenvalues lie on a circle) to the complex plane compactified into a torus (such that all eigenvalues lie on the torus). The next Appendix discusses this single-number probe in our model.
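A minimal numerical sketch of these complex spectral ratios (ours, for illustration only) is given below; it takes an array of complex eigenvalues and returns z_n, from which the single-number probes follow.

```python
import numpy as np

def complex_gap_ratios(E):
    """z_n = (E_n^nn - E_n) / (E_n^nnn - E_n) for an array of complex eigenvalues."""
    E = np.asarray(E, dtype=complex)
    dist = np.abs(E[:, None] - E[None, :])
    np.fill_diagonal(dist, np.inf)                 # exclude the level itself
    nn = np.argmin(dist, axis=1)                   # nearest neighbour
    dist2 = dist.copy()
    dist2[np.arange(E.size), nn] = np.inf
    nnn = np.argmin(dist2, axis=1)                 # next-nearest neighbour
    return (E[nn] - E) / (E[nnn] - E)

# Single-number probes, to be compared with the tabulated ensemble values:
# z = complex_gap_ratios(eigenvalues)
# r_mean, cos_mean = np.mean(np.abs(z)), np.mean(np.cos(np.angle(z)))
```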
§ FURTHER RESULTS FOR THE XXZ MODEL WITH RANDOM LOSSES In the main text, we studied the singular-value, real, spectral ratio statistics for the model (<ref>) and found that they show a clear crossover from GOE to Poisson, see Fig. <ref>. Let us now present the eigenvalue statistics using the complex spectral gap ratios as described in Appendix <ref>. Figure <ref> displays the gap ratio statistics for the complex eigenvalues of the XXZ model with dissipative disorder, Eq. (<ref>), as a function of the disorder strength γ. As we show in Fig. <ref>, the results for r and cosθ do not provide clear-cut values which we could clearly attribute to a crossover from chaos to integrability. Also, the behavior of these single-number statistics does not change smoothly with the system size N, in contrast with the behavior of the single-number statistics for the singular values.We show our study on the localization indicators for the (right) eigenvectors of the NH XXZ model (<ref>), see Fig. <ref>. For the entanglement entropy, we use the definition S_E = - ρ_A logρ_A with ρ_A = _B (R_nR_n), i.e., using the right eigenvectors only <cit.>. In principle, both right and left eigenvectors could be used to construct the reduced density matrix <cit.>, even if the physical difference between the various cases is not yet well understood. Even with a rescaling with the system size, a crossover between a delocalized/localized regime for the eigenvectors does not occur (contrary to what we see for singular vectors). This supports even more the use of the SVD as a tool to generalize standard Hermitian phenomena, as already motivated in totally different settings <cit.>. Finally, in the main text, we showed the localization crossover captured by the scaled IPR and entanglement entropy. Here, we further show the same results, without the system-size rescaling in Fig. <ref>a–b, which masks the crossover, but clearly show the transition between volume law to area law, Fig. <ref>c.§ RESULTS FOR THE INTERACTING HATANO-NELSON MODEL Here we perform the same analysis described in the main text for the interacting Hatano-Nelson modelĤ = J∑_i [ 1/2( e^g Ŝ_i^+ Ŝ_i+1^- + e^-gŜ_i^- Ŝ_i+1^+ ) + ΔŜ_i^z Ŝ_i+1^z ] + ∑_i h_i Ŝ_i^z .This model was studied in Ref. <cit.>, where it was shown it hosts a NH MBL phase. Here, we study its localization properties through the lens of the SVD. We fix, as for the model in the main text, J=1 and Δ = 1, and further set g=0.1. Then, we extract the disordered fields h_i ∈ [-h,h] according to the uniform distribution. In order to avoid the NH skin effect, which may be mistaken for localization, we use periodic boundary conditions.In contrast with the Hamiltonian considered in the main text, Eq. (<ref>), where non-Hermiticity and disorder coincide, the non-Hermiticity in the interacting Hatano-Nelson model, Eq. (<ref>), comes from the unbalanced hoppings while the randomness comes from a standard (Hermitian) local field. Therefore, in this model, localization and chaos are not induced by non-Hermiticiy, but one can quantify their resilience to non-Hermiticity.The singular form factor and the singular value statistics for the interacting Hatano-Nelson model, Eq. (<ref>), as function of disorder strength h, are shown in Fig. <ref>. The localization properties of the singular vectors, as quantified by the IPR and the entanglement entropy, are plotted as a function of h in Figs. <ref>–<ref>.
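For completeness, a sketch of how the interacting Hatano-Nelson Hamiltonian defined above can be assembled numerically is given below. This is our illustration: it builds the full 2^N-dimensional matrix with Kronecker products rather than restricting to a magnetization sector, with one disorder realization and the parameter values quoted above.

```python
import numpy as np

def hatano_nelson(N, g=0.1, J=1.0, Delta=1.0, h=5.0, seed=0):
    """Interacting Hatano-Nelson chain (periodic boundary conditions),
    one disorder realization, in the full 2**N spin-1/2 basis."""
    rng = np.random.default_rng(seed)
    sp = np.array([[0, 1], [0, 0]], dtype=complex)          # S^+
    sm = sp.conj().T                                         # S^-
    sz = np.diag([0.5, -0.5]).astype(complex)
    eye = np.eye(2, dtype=complex)

    def site_op(op, i):
        out = np.array([[1.0 + 0j]])
        for k in range(N):                                   # tensor product over sites
            out = np.kron(out, op if k == i else eye)
        return out

    H = np.zeros((2**N, 2**N), dtype=complex)
    fields = rng.uniform(-h, h, size=N)
    for i in range(N):
        j = (i + 1) % N
        H += J * (0.5 * (np.exp(g) * site_op(sp, i) @ site_op(sm, j)
                         + np.exp(-g) * site_op(sm, i) @ site_op(sp, j))
                  + Delta * site_op(sz, i) @ site_op(sz, j))
        H += fields[i] * site_op(sz, i)
    return H

# e.g. feed hatano_nelson(8) into SVD-based diagnostics such as the
# singular-value spacing-ratio sketch given earlier
```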
http://arxiv.org/abs/2311.16229v1
{ "authors": [ "Federico Roccati", "Federico Balducci", "Ruth Shir", "Aurélia Chenu" ], "categories": [ "quant-ph", "cond-mat.dis-nn", "cond-mat.stat-mech", "hep-th" ], "primary_category": "quant-ph", "published": "20231127190001", "title": "Diagnosing non-Hermitian Many-Body Localization and Quantum Chaos via Singular Value Decomposition" }
Center for Quantum Science and Technology, International Institute of Information Technology Hyderabad, Gachibowli, Hyderabad-500032, Telangana, India. Center for Security, Theory and Algorithmic Research, International Institute of Information Technology Hyderabad, Gachibowli, Hyderabad-500032, Telangana, India. Department of Mathematics, Birla Institute of Technology and Science Pilani, Hyderabad Campus, Telangana-500078, India. Counter-intuitive to classical notions, quantum conditional entropy can be negative, playing a pivotal role in information-processing tasks. This article delves deeply into quantum channels, emphasizing negative conditional entropy breaking channels (NCEB) and introducing negative conditional entropy annihilating channels (NCEA). We characterize these channels from both topological and information-theoretic perspectives, examining their properties when combined serially and NCEB in parallel. Our exploration extends to complementary channels associated with NCEB, leading to the introduction of information-leaking channels. Utilizing the parameters of the standard depolarizing channel, we provide tangible examples and further characterization. We demonstrate the relationship of NCEB and NCEA with newly introduced channels like coherent information breaking (CIB) and mutual information breaking (MIB), along with standard channels like zero capacity channels. Preservation of quantum resources is an integral constituent of quantum information theory. Recognizing this, we lay down prescriptions to detect channels that do not break the negativity of conditional entropy, ensuring the conservation of this quantum resource. On quantum channels that destroy negative conditional entropy Nirman Ganguly Received: date / Accepted: date ============================================================= § INTRODUCTION Quantum resource theory <cit.> is a standard framework to study quantum information theory, where we categorize quantum states based on their usefulness in information processing tasks. Such useful states are termed resources, and the rest as free states. For example, quantum entanglement <cit.>, possessed by states that cannot be expressed as a convex combination of product states, is one of the most significant resources in quantum information theory. Entanglement is a key ingredient in tasks like teleportation <cit.>, dense coding <cit.>, remote state preparation <cit.>, key generation <cit.>, secret sharing <cit.>, routing quantum information <cit.> and setting up quantum networks between various quantum processors <cit.>. However, the mere presence of entanglement is not the sole requirement for all quantum tasks. Tasks like state merging <cit.>, dense coding <cit.>, quantum memory <cit.>, and one-way distillation processes <cit.> require states with negative conditional entropy. From a resource theory perspective, states with non-negative conditional von Neumann entropy (CVENN) are free states. CVENN was shown to be convex and compact in <cit.>, thereby facilitating the detection of states with negative conditional entropy. Additionally, states that retain non-negative conditional entropy even under global unitary action were characterized in <cit.>.
Such states are termed absolute non-negative conditional von Neumann entropy states (ACVENN). Given a quantum state with some potential as a resource, studying its evolution in an environment is equally important. Environmental interactions are ubiquitous in information processing tasks and are modeled as quantum channels. A quantum channel is a trace-preserving completely positive linear map 𝒩: ρ→∑_i N_i ρ N_i^† with ∑_i N_i^† N_i=I, where the N_i's are the Kraus operators associated with any such map. The noise in such an environment can be severe enough that the state loses its value as a resource. In the resource theory of entanglement, quantum channels that turn any state into a separable state are called entanglement-breaking channels (EB). A map 𝒩 is entanglement breaking if (id ⊗𝒩)(ρ) is always separable. It was shown that an entanglement breaking channel can be written in the Holevo form <cit.>, 𝒩(ρ) = ∑_k=1^m R_k Tr(F_k ρ), where the R_k's are density matrices, and the F_k's are positive operator-valued measurement (POVM) operators. Here each F_k≥ 0 and ∑_k F_k=I. The choice of R_k's and F_k's is not unique, and their different values give rise to various types of entanglement-breaking channels. In a similar spirit, we also have entanglement annihilating channels. These are channels that destroy the entanglement within a subsystem of a composite system <cit.>. As noted before, the negativity of quantum conditional entropy can also be considered a resource akin to entanglement. Therefore, quantum channels that break such negativity (i.e., convert any state to a state with non-negative conditional entropy) warrant attention. We call such channels negative conditional entropy breaking (NCEB) channels. In a recent article <cit.>, the authors have introduced information theoretic resource-breaking channels pertaining to negative conditional entropy and fully entangled fraction. The authors have characterized such channels and introduced methods to detect channels that are not negative conditional entropy breaking <cit.>. In this article, we take the characterization of NCEB further. Besides the usual topological characterization of NCEB channels, we formulate an information-theoretic characterization of such channels. In particular, we study the complementary channel of NCEB channels to discover their information-leaking property. We also exemplify these channels in terms of the range of the parameters of depolarizing channels. We show that the set of NCEB channels is equivalent to channels with zero coherent information. In addition, we show that the set of NCEB channels acts as a superset of entanglement breaking channels (EB), zero capacity channels, and the newly introduced mutual information breaking channels (MIB). Furthermore, we introduce the notion of negative conditional entropy annihilating channels (NCEA), which differ from NCEB channels, and provide their characterization. We also connect negative conditional entropy annihilating channels (NCEA) to conditional von Neumann entropy non-decreasing channels (NCVE). In a related perspective, when implementing a quantum information processing protocol, one would like to know which channels to select that do not result in a resource loss. It is here that the detection of non-NCEB and non-NCEA channels becomes imperative. We lay down prescriptions for the detection of non-NCEB channels. In section <ref>, we describe a few preliminary concepts that will be required to describe the central idea of our work.
Section <ref> describes NCEB channels with examples and properties. We also introduce NCEA channels similarly by citing examples and highlighting the properties. In section <ref>, we connect NCEB and NCEA with other known channels and the newly introduced channels like CIB and MIB. In section <ref>, we provide a topological characterization of NCEB and NCEA, thus paving the way for detecting non-NCEB channels. Finally, we conclude in section <ref>.§ PRELIMINARIES In this section, we introduce essential concepts and notations that underpin the subsequent discussions. We start with a quantum system A characterized by its Hilbert space ℋ_A and density operator ρ∈ D(ℋ_A). By extension, a bipartite quantum system AB is associated with Hilbert space ℋ_AB = ℋ_A ⊗ℋ_B and is described by a density operator ρ_AB. An isomorphic quantum system to A is denoted as Ã, with its Hilbert space given by ℋ_Ã. We use B(ℋ) to denote the set of bounded linear operators on a Hilbert space ℋ, and B_+(ℋ) for the subset of positive semi-definite linear operators. The set D(ℋ) represents the set of unit-trace, positive semi-definite linear operators, commonly known as density operators. Lastly, I denotes the identity operator, and id represents the identity channel. §.§ Information measures on quantum states The entropy of a quantum system, described by its density operator ρ, is defined by the von Neumann entropy of ρS(ρ) = - Tr{ρlogρ}, where the base of the logarithm is 2. Quantum conditional entropy measures the uncertainty associated with one part of a bipartite system, say A, when given access to the other part, B. Given the density operator ρ_AB of a bipartite system, the conditional entropy of the state is expressed asS(A|B)_ρ = S(AB)_ρ - S(B)_ρ with S(B)_ρ as the entropy of the marginal density operator ρ_B = Tr_A{ρ_AB}. The expression is derived from the chain rule for von Neumann entropy - S(AB)_ρ = S(A|B)_ρ + S(B)_ρ. Conditional entropy is uniformly continuous with respect to the trace norm <cit.>.The negativity of the conditional entropy is a significant signature of quantum systems, as classically, this is impossible. Negative values of conditional entropy indicate the presence of entanglement in the system; hence, conditional entropy itself is an important feature in several information processing protocols. One, however, should note that the converse of this statement is not true; there are entangled states with non-negative conditional entropy. Previous investigations <cit.> established that the class constituted by states with non-negative conditional entropy (CVENN) is convex and compact. Even the states that preserve the non-negativity under non-local unitary action (ACVENN) also form a convex and compact set <cit.>. These results facilitate the detection of states non-ACVENN states via an appropriate witness operator. A related work <cit.> identifies a class of quantum operations that preserves monotonicity with respect to conditional entropy (i.e., conditional entropy cannot decrease under the action of such channels) and characterizes such channels.Quantum mutual information measures the total classical and quantum correlations present in the state. For a given bipartite system described by its density operator ρ_AB, the mutual information present between the subsystems A, B is defined asI(A;B) = S(A)_ρ + S(B)_ρ - S(AB)_ρ= S(A)_ρ - S(A|B)_ρ = S(B)_ρ - S(B|A)_ρ.Quantum mutual information is always non-negative and vanishes if and only if the subsystems are uncorrelated. 
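To make these quantities concrete, the following small numerical sketch (an illustrative NumPy computation added here for exposition; a two-qubit example and base-2 logarithms are assumed) evaluates S(ρ), the conditional entropy S(A|B) and the mutual information I(A;B) for a Bell state mixed with white noise; the conditional entropy is negative for the pure Bell state and becomes non-negative once enough noise is added.

import numpy as np

def entropy(rho):
    # Von Neumann entropy S(rho) in bits, computed from the eigenvalues of rho.
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]                      # drop numerical zeros
    return float(-(w * np.log2(w)).sum())

def ptrace(rho, dA, dB, keep):
    # Partial trace of a (dA*dB)-dimensional bipartite density operator.
    r = rho.reshape(dA, dB, dA, dB)
    return np.einsum('ijkj->ik', r) if keep == 'A' else np.einsum('ijil->jl', r)

# Bell state (|00> + |11>)/sqrt(2) mixed with white noise.
phi = np.zeros(4); phi[[0, 3]] = 1 / np.sqrt(2)
bell = np.outer(phi, phi)
for lam in (1.0, 0.5):
    rho = lam * bell + (1 - lam) * np.eye(4) / 4
    S_AB, S_A, S_B = entropy(rho), entropy(ptrace(rho, 2, 2, 'A')), entropy(ptrace(rho, 2, 2, 'B'))
    print(f"lam={lam}:  S(A|B) = {S_AB - S_B:+.3f}   I(A;B) = {S_A + S_B - S_AB:.3f}")

For lam = 1 this prints S(A|B) = -1.000 and I(A;B) = 2.000, while lam = 0.5 already yields a positive conditional entropy.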
Quantum mutual information is symmetric, that is, I(A; B) = I(B; A).Quantum mutual information is extended to multiparty scenarios to provide measures for quantum correlation <cit.>. Additionally, this measure forms the base for certain types of capacity measures established for quantum channels, namely - Holevo capacity and entanglement-assisted classical capacity <cit.>.§.§ Coherent Information of Quantum Channels Quantum channels serve as mathematical models that describe the evolution of quantum systems. They can also be viewed as a model of the communication medium through which one party, A(Alice), transmits information in the form of a quantum system to the other party, B(Bob). Analogous to classical channels, we can quantify the information-carrying capacity of a quantum channel under different settings via capacity measures. One such measure, namely the coherent information of a quantum channel, captures the achievable rate of quantum information transmission through a quantum channel in a reliable manner. Let 𝒩: B(ℋ_A') → B(ℋ_B) ( 𝒩_A' → B) describe a quantum channel for some finite-dimensional spaces ℋ_A', ℋ_B. Then, the coherent information of 𝒩_A' → B is Q(𝒩_A' → B) = max_ϕ_AA' I(A ⟩ B)_σ,where σ_AB = 𝕀_A ⊗𝒩_A'→ B(ϕ_AA')for pure state ϕ_AA' and I(A ⟩ B)_σ = S(B)_σ - S(AB)_σ. Coherent information can also be expressed in the form,Q(𝒩_A' → B) = max_ρ_A' I_C(ρ_A', 𝒩_A'→ B) = max_ϕ_AA'[S(B)_ψ - S(E)_ψ],where I_C(ρ_A', 𝒩_A'→ B) = S(𝒩_A'→ B(ρ_A') - S(𝒩^C_A'→ B(ρ_A')))and |ψ_ABE⟩ = U^𝒩_A'→ BE |ϕ_AA'⟩ and U^𝒩_A'→ BE the isometric extension of the channel <cit.>. Finally, the capacity of a quantum channel captures the information-carrying capacity of the channel in an asymptotic setting and is formally defined as𝒬(𝒩) = lim_n →∞1/n Q(𝒩^⊗ n).§ NEGATIVE CONDITIONAL ENTROPY ANNIHILATING AND BREAKING CHANNELSWe discuss negative conditional entropy breaking channels, introduced in <cit.> and introduce negative conditional entropy annihilating channels (NCEA) here. In later sections, we characterize NCEB from an information theoretic perspective: its equivalence with channels of zero coherent information. Furthermore, we discuss the results in <cit.> for completeness and also from pedagogic considerations. However, we also consider NCEB from several complementary perspectives from those in the previous work <cit.>. While entanglement breaking or negative conditional entropy breaking channels affect the resourceful properties across subsystems of a composite system, we can consider channels that affect such properties on the system they act on but do not affect relevant properties across partitions. Entanglement annihilating channels <cit.> is one such example in quantum information theory. Similarly, we consider channels annihilating negative conditional entropy (NCEA), note its definition, and discuss related results in the upcoming sections.§.§ Definitions Given a d ⊗ d bipartite system AB, let 𝒮_CVENN(ℋ_AB) denote the set of quantum states having non-negative conditional entropy across the A-B partition. Similarly, let 𝒮_CVENN(ℋ_B) be the set of quantum states having non-negative conditional entropy within the subsystem B. We express this formally as𝒮_CVENN(ℋ_AB) = {ρ∈ D(ℋ_AB) | S(A|B)_ρ≥ 0 }, 𝒮_CVENN(ℋ_B) = {ρ∈ D(ℋ_B) | S(B_1 | B_2)_ρ≥ 0},where B_1, B_2 is a fixed partition of the B subsystem.As depicted in figure <ref>, an NCEB channel acts on subsystem Bof a bipartite system AB and destroys the negative conditional entropy present between A and B subsystems. 
In other words, the set of negative conditional entropy-breaking channels acting on d ⊗ d-dimensional system is described as <cit.>,NCEB^(d)={𝒩_B→B̃| ∀ρ_AB∈ D(ℋ_AB), S(A|B̃)_σ≥ 0,where σ_AB̃ = id_A ⊗𝒩_B→B̃(ρ_AB) }.Equivalently, we can say that a channel 𝒩_B→B̃ is inNCEB^(d) if(id_A ⊗𝒩_B→B̃)(D(ℋ_AB)) ⊂𝒮_CVENN(ℋ_AB̃). We define NCEA as the set of channels acting on subsystem B that annihilates negative conditional entropy within the subsystem B (refer figure <ref>). For a d-dimensional system, the set of negative conditional annihilating channels is expressed asNCEA^(d)={𝒩_B→B̃| ∀ρ_B∈ D(ℋ_B), S(B̃_1|B̃_2)_ϵ≥ 0, where ϵ_B̃ =𝒩_B→B̃(ρ_B) }.It is understood thatB̃_1, B̃_2 is a fixed partition of the B̃ subsystem. From the definition above, it follows that a channel 𝒩_B→B̃ is NCEA if 𝒩_B→B̃(D(ℋ_B)) ⊂𝒮_CVENN(ℋ_B̃). §.§ Properties We explore the properties of the NCEB and NCEA channels introduced above. We investigate on how these channels act in series and parallel concatenation. Additionally, we consider the action of the complimentary channel of NCEB channels and reveal the effect on the environment or adversary coupled with the original system.I. NCEB/NCEA channels in series: Given two quantum channels 𝒩_1, 𝒩_2 we denote the serial combination of these channels with 𝒩_1 ∘𝒩_2 and express the action on input state ρ as 𝒩_1 ∘𝒩_2(ρ) = 𝒩_1(𝒩_2(ρ))).We show that for a serial combination of two channels 𝒩_1 and 𝒩_2 both taken fromNCEB^(d) or NCEA^(d), the resultant channel will always belong to the same set. Let𝒩_1, 𝒩_2 ∈ NCEB^(d) , then𝒩_1 ∘𝒩_2 ∈ NCEB^(d) Let 𝒩_1 and 𝒩_2 be quantum channels belonging NCEB^(d). Let the serial combination of these channels be expressed as 𝒩_1 ∘𝒩_2. The action of this combination on input state ρ is given by 𝒩_1(𝒩_2(ρ)). Clearly, (id_A ⊗𝒩_1)(D(ℋ_AB)) ⊂𝒮_CVENN(ℋ_AB), (id_A ⊗𝒩_2)(D(ℋ_AB)) ⊂𝒮_CVENN(ℋ_AB). It follows that, (id_A ⊗𝒩_1 ∘𝒩_2)(D(ℋ_AB)) = (id_A ⊗𝒩_1) ∘ (id_A ⊗𝒩_2)(D(ℋ_AB)) and since the range of (id_A ⊗𝒩_2) is within 𝒮_CVENN(ℋ_AB), the range of (id_A ⊗𝒩_2 ∘𝒩_1) must also be within 𝒮_CVENN(ℋ_AB) (follows from the definition of NCEB). Thus, 𝒩_2 ∘𝒩_1 ∈ NCEB^(d)Let𝒩_1, 𝒩_2 ∈ NCEA^(d) , then𝒩_1 ∘𝒩_2 ∈ NCEA^(d) Let 𝒩_1 and 𝒩_2 be quantum channels belonging NCEA^(d) and consider the serial combination 𝒩_1 ∘𝒩_2The action of the this combination on input state ρ is expressed as 𝒩_1(𝒩_2(ρ)). Clearly, 𝒩_1(D(ℋ_B) ⊂𝒮_CVENN(ℋ_B) and 𝒩_2(D(ℋ_B) ⊂𝒮_CVENN(ℋ_B) which follows from the definition of NCEA channels. It follows that, (𝒩_1 ∘𝒩_2)(D(ℋ_B)) ⊂𝒮_CVENN(ℋ_B) and thus, 𝒩_1 ∘𝒩_2 ∈ NCEA^(d). II. NCEB channels in parallel:We consider the combination of twoNCEB channels in parallel and ask whether the resulting channel still breaks negative conditional entropy. We show that a parallel combination of two NCEB channels may not be an NCEB channel. In a subsequent section (<ref>), we prove that the set NCEB^(d) is equivalent to zero coherent information channels acting on a d-dimensional system. We utilize this result in our arguments.The quantum capacity of a channel is formalized as𝒬(𝒩)=1/nlim_n=1^∞𝒬^(d)_1(𝒩 ^⊗ n),where 𝒩 ^⊗ n represents the parallel use of n copies of the channels. Zero capacity channels are aclass of quantum channels with zero quantum capacity, implying that they cannot transmit any quantum information. One example of zero capacity channels is symmetric quantum channels <cit.>. Though symmetric channels display a correlation between input and output, they are useless in sending quantum information, as a non-zero transfer would violate the no-cloning theorem. 
Another example is entanglement binding channels <cit.>(also known as Horodecki channels), which can only produce weakly entangled states satisfying the PPT criterion. Now for channels 𝒩_1 ∈ N_H, 𝒩_2 ∈ A_S, where N_H and A_S are the set of Horodecki channels and symmetric channels, we have 𝒬(𝒩_1)= 0 =𝒬(𝒩_2). It then follows that Q(𝒩_1) = 0 = Q(𝒩_2) as 𝒬(𝒩) ≥ Q(𝒩) ≥ 0 for any quantum channel 𝒩. In <cit.>, it was established that the Q(𝒩_1 ⊗𝒩_2) > 0. Hence, a parallel combination of NCEB channels may not be NCEB in the larger Hilbert space.III. Complimentary of NCEB' channel: We consider an isometric extension of a negative conditional entropy breaking channel 𝒩_B →B̃ (refer figure <ref>). Let U^𝒩_B →B̃E the isometric extension of 𝒩_B →B̃. We can treat E as either environment or Eve. The complementary channel 𝒩^C_B → E is a quantum channel from B to E given by𝒩^C_B → E(ρ)=Tr_B̃{U^𝒩_B →B̃E(ρ)},for any input quantum state ρ∈ D(ℋ_B). We establish interesting aspects of the complimentary channel below. For any channel 𝒩_B →B̃∈ NCEB^(d) acting on an state ρ_AB, the conditional entropy between systems A and E is always non-positive i.e S(A|E)_ψ≤ 0 where |ψ⟩_AB̃E is a purification of the output system.We know that that, the action of an NCEB channel 𝒩_B →B̃ on the stateρ_AB can be written as(id_A ⊗𝒩_B →B̃)(ρ_AB) = σ_AB̃.Thus, we have S(A|B̃)(σ) ≥ 0. Consider a purification |ψ⟩_AB̃E for σ_AB̃. It follows thatS(A|B̃)_σ = S(AB̃)_σ -S(B̃)_σ= S(AB̃)_ψ - S(B̃)_ψ= S(E)_ψ- S(AE)_ψ = - S(A|E)_ψ.If S(A|B̃)_σ≥ 0, which holds true under the action of an NCEB channel, then S(A|E)_ψ≤ 0.Now, if we consider E as an adversary Eve, we can prove that Eve will be able to leak out information about the input system AB under the action of these channel. In other words the mutual information I(A:E) about the system A and the Eve will always be greater than the output mutual information I(A;B̃).I(A;B̃) ≤ I(A;E) We know that under the action of a channel 𝒩_B →B̃∈ NCEB^(d), the output conditional entropy to be as S(A|B̃)σ≥ 0. For such channels, the complementary channel 𝒩^C_B → E produces conditional entropyS(A|E)_ψ≤ 0. If we consider the differenceI(A; B̃)_ψ - I(A; E)_ψ= S(A|E)_ψ -S(A|B̃)_ψ≤ 0 I(A; B̃)_ψ≤ I(A; E)_ψ,where inequality follows from the fact that S(A|E)_ψ≤ 0 while S(A|B̃)_ψ≥ 0.The above results reveal a fascinating aspect of NCEB channels. The complementary channel of NCEBcan be interpreted not only as information leaking channel but also as hacking channel from Eve's point of view. §.§ Examples This section covers examples of NCEB and NCEA channels on bipartite systems. We observe that 2 ⊗ 2 quantum systems possess the smallest dimension for whichNCEB channels are found. To this extent, we consider a qubit depolarizing channel and study the range for which it turns NCEB. Similarly, we consider the class of global depolarizing and transpose depolarizing quantum channels in the context of negative conditional entropy annihilation. For both classes of channels, we identify sufficiency conditions for which they become NCEA.I: Example of NCEB channel :Depolarizing channels are a well-known type of quantum channel in the study of quantum information. The action of this channel on a d-dimensional system, described by its density operator ρ, is expressed as𝒩_p^d(ρ) = (1 - p) ρ + p I/d Tr{ρ},where p is the mixing parameter and 0 ≤ p ≤ 1. We obtain a qubit depolarizing channel when d= 2.It is well known that the negative conditional entropy of the state indicates the presence of entanglement. 
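As a quick numerical illustration of the channel just defined (a sketch that reuses the entropy and ptrace helpers from the earlier snippet; the scan resolution is an arbitrary choice), one can apply id ⊗ 𝒩_p^2 to a maximally entangled two-qubit state and inspect how the sign of the output conditional entropy changes with the mixing parameter p:

import numpy as np

def depolarize_B(rho_ab, p, dA=2, dB=2):
    # (id_A ⊗ N_p)(rho_AB) = (1 - p) rho_AB + p * rho_A ⊗ I/dB
    rho_a = ptrace(rho_ab, dA, dB, keep='A')
    return (1 - p) * rho_ab + p * np.kron(rho_a, np.eye(dB) / dB)

phi = np.zeros(4); phi[[0, 3]] = 1 / np.sqrt(2)
bell = np.outer(phi, phi)
for p in np.linspace(0.0, 1.0, 11):
    sigma = depolarize_B(bell, p)
    cond = entropy(sigma) - entropy(ptrace(sigma, 2, 2, keep='B'))
    print(f"p = {p:.1f}   S(A|B) = {cond:+.4f}")

The printed values interpolate from S(A|B) = -1 at p = 0 to S(A|B) = +1 at p = 1, and a finer scan of the zero crossing indicates the smallest mixing strength at which this input no longer exhibits negative conditional entropy.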
However, not all entangled states possess negative conditional entropy, with two-qubit Werner states as an example. Additionally, for 2 ⊗ 2 and 2 ⊗ 3 dimensional systems, we can determine whether a given state is separable via the positive partial transpose (PPT) test <cit.>. Therefore, the range of p for which id_2 ⊗𝒩_p^2 produces separable states provides a direct example for NCEB channels since separable states have non-negative conditional entropy. This range for the qubit depolarizing channel described is 2/3 < p ≤ 1 and corresponds to the region where the channel is entanglement-breaking. Moreover, we aim to identify the range of p where the channel destroys negative conditional entropy but does not necessarily destroy the entanglement between two qubits. Here, two propositions come to our aid. First, the convex geometry of the set of D(ℋ_AB) allows the decomposition of any ρ∈ D(ℋ_AB) as a convex combination of pure states. That is,ρ_AB = ∑_i p_i Π_i ,where Π_i ∈ D(ℋ_AB) and represents pure states of the d ⊗ d system. Second, pure entangled states are the only set of pure states having negative conditional entropy in 2 ⊗ 2 systems. Therefore, to determine if a channel is NCEB, we can calculate the conditional entropy for 2 ⊗ 2 pure entangled states.a. Non-maximally entangled state: We examine 2 ⊗ 2 pure entangled states of form|ψ⟩ = cosα|00⟩ + sinα|11⟩, where α∈ [0, π]. Now, we consider the action of a qubit depolarizing channel on one part of the subsystem, id ⊗𝒩^2_p. The resultant state is of formρ_AB = [ β_1 0 0 β_2; 0p/2cos^2 α 0 0; 0 0p/2sin^2 α 0; β_2 0 0 (1 - p/2) - β_1 ].with β_1 = (1 - p/2) cos^2 α and β_2 = (1 - p)/2sin 2α. The conditional entropy S(A|B)_ρ is then given byS(A|B)_ρ = -λ_1 logλ_1 -λ_2 logλ_2 -λ_3 logλ_3 - λ_4 logλ_4 + (λ_5)log(λ_5) + (1 - λ_5)log(1 - λ_5),whereλ_1 = p/2cos^2 α,λ_2 = p/2sin^2α λ_3 = 1/2 - σ_1 - p/4,λ_4 = 1/2 + σ_1 - p/4 and λ_5 = cos^2α - p/2cos 2αwithσ_1 = 1/8√(5 p^2 - 12 p + 8 + 4 p cos 4 α - 3p^2 cos 4 α).Our numerical simulations, illustrated in figure <ref>, indicate that maximally entangled states maximize the conditional entropy for a given p, suggesting that checking the output on maximally entangled states is sufficient to determine if the channel is NCEB. This observation is also supported by <cit.>.b. Bell-diagonal state: We are interested in the action of qubit depolarizing channels on general 2 ⊗ 2 quantum states. We consider the set of all Bell diagonal states, which is given byρ_AB = ∑_m, n = 0^1 p_mn |γ_mn⟩⟨γ_mn|, where |γ_mn⟩ = 1/√(2)(|0, n⟩ + (-1)^m|1, 1⊕ n⟩) and ∑_m, n =0^1 p_mn = 1, p_mn≥ 0. An equivalent characterization of Bell diagonal states in terms of Bloch vectors and correlation matrix is given byρ_AB = 1/4 [I_4 + ∑_i=1^3 (x^b_i σ_i ⊗ I_2 y^b_i I_2 ⊗σ_i) + ∑_i, j = 1^3 t^b_ijσ_i ⊗σ_j],where x^b = (0, 0, 0), y^b=(0, 0, 0) and T^b = [t^b_ij] with t^b_ij = 0 for ij and t^b_ij = c_i, -1 ≤ c_i ≤ 1 otherwise. The probabilities p_mn are related to the correlation matrix via the following relation:p_mn = 1/4(1 + (-1)^m c_1 -(-1)^m+nc_2 + (-1)^n c_3.It is evident from figure <ref> that for p > 0.2, the channel breaks the negative conditional entropy of the input states. For our simulations, we fix p = 1/2, which is not in the entanglement-breaking range but within the negative conditional entropy breaking range for qubit depolarizing channels. Figure <ref> depicts the same. II. 
Example of NCEA channel: Similar to entanglement annihilating channels, we consider quantum operations that annihilate the negative conditional entropy in a system. For this, we evaluate conditional entropy across a fixed system partition of the output system. We analyze two quantum channels and determine the conditions under which they qualify as negative conditional entropy annihilating (NCEA) channels. Annihilation is made possible by the application of a (a) global channel on the subsystem , (b) local channels acting on the subsystem. We exhibit the action through the use of global channels.a. Global depolarizing channel:A global depolarizing channel acting on a d^2-dimensional (or alternately, a d⊗ d system) with density operator ρ, is described by the following expressionℰ_gd(ρ) = p ρ + (1-p)I_d^2/d^2. Thus, the action on a maximally entangled state |ϕ^+⟩ = 1/√(d)∑_i = 1^d |ii⟩ is given byχ = ℰ_gd(|ϕ^+⟩⟨ϕ^+|) = p |ϕ^+⟩⟨ϕ^+|+ (1-p)I_d^2/d^2, where the marginal density operators are χ_B_1 = χ_B_2 = I_d/d. In the rest of this section, we establish a sufficiency condition for the depolarizing channel to be NCEA based on its action on the maximally entangled state. Firstly, we observe that the action of the depolarizing channel on the maximally entangled state produces an isotropic state, with d^2 eigenvalues - λ_1 = 1 + (d^2-1)p/d^2 and λ_2 = … = λ_d^2 = 1-p/d^2. with Von Neumann entropy:S(χ) = -1 + (d^2-1)p/d^2log(1 + (d^2-1)p/d^2)-(d^2 - 1)1-p/d^2log(1-p/d^2) = S(p,d).Thus, conditional entropy of the output system B across a fixed partition B_1 - B_2 asS(B_1|B_2)_χ = S(p, d) - log d.Let Ω represent maximally entangled states in a d ⊗ d system except |ϕ^+⟩ and χ' be its output state under the action of the global depolarizing channel. Then S(B_1|B_2)_χ = S(B_1|B_2)_χ'.The equality in conditional entropy follows from the fact that χ' has the same spectra as χ. Also, both χ and χ' have the same marginal density operator - I_d/d. Now, let Π be a pure state in the Hilbert space of the system, and χ_π be the corresponding output state under the depolarizing channel. It follows thatS(χ) = S(χ_π)The equality stems from the eigenvalues of χ and χ_π are the same. However, we notice that the marginals density operators of χ and χ_π are not equal with S(χ)_B_2≥ S(χ_π)_B_2. This gives rise to the inequalityS(B_1|B_2)_χ≤ S(B_1|B_2)_χ_πCombining both propositions, we arrive at our main statementIf a global depolarizing channel annihilates the conditional entropy of a maximally entangled state, it does so for all pure input states. In other words, if S(B_1|B_2)_χ = S(χ) - S(χ)_B_2≥ 0, then S(B_1|B_2)_χ_π = S(χ_π) - S(χ_π)_B_2≥ 0Thus, if ℰ_gd annihilates the conditional entropy of a maximally entangled state, then it does so for all states, pure or mixed. Hence, verifying the action of the global depolarizing channel on a pure maximally entangled state suffices to comment on whether it is NCEA or not. b. Transpose depolarizing channel:A transpose depolarizing channel whose action on a d^2 × d^2 complex matrix μ is defined asΦ(μ) = t μ^T+ (1-t) I_d^2/d^2 Tr{μ},where t ∈ [-1/d-1, 1/d+1] and μ^T represents the full transpose of the complex matrix. For adensity matrix ρ, ρ^T is a valid state, and hence the transpose depolarizing channel is a valid completely positive, trace-preserving (CPTP) map. Since ρ^T is a valid state, the action of the channel is described asΦ(ρ) = t ρ' + (1 -t) I_d^2/d^2with ρ' = ρ^T. We find that for pure state input Π, we have Π^T is also pure. 
Hence, the action of the transpose depolarizing channel is the same as the depolarizing channel discussed above. Using this equivalence, we find that the sufficiency condition in the above-mentioned lemma applies to transpose depolarizing channels.§ RELATIONS OF NCEB AND NCEA WITH OTHER CHANNELSIn this section, we connectNCEB and NCEAwith other channels, both analytically and with the help of examples. These include channels like zero coherent information channels (also termed Coherent Information Breaking Channels(CIB)), Entanglement Breaking Channels (EB), and Zero quantum capacity channels. We have introduced a new channel called Mutual Information Breaking channels (MIB) and showed that it is a subset of NCEB channel, We also show how the set of NCEA channels relate to A-unital channels, introduced in <cit.>.§.§ Equivalence of NCEB and CIB Here, we cover a characterization of negative conditional entropy breaking channels (NCEB) in terms of their coherent information. Zero coherent information channels or coherent information breaking channels (CIB) are those channels 𝒩_B→B̃ for which the coherent information is zero, i.e. Q(𝒩_B→B̃ ) = 0. We denote the set of such channels mapping states between d-dimensional systems withQ^(d)_0 = {𝒩_B→B̃ | Q(𝒩_B→B̃ ) = 0}.Our key claim in this regard is NCEB^(d) = 𝒬^(d)_0 and discuss the details of this equivalence below.If a channel breaks negative conditional entropy on pure states, it will break negative conditional entropy for all states. We know that 𝒟(ℋ_AB) forms a convex and compact set with pure states being located at the boundary and represent any pure states in D(ℋ_AB) as Π_i.So,any ρ_AB∈ D(ℋ_AB) can be expressed as ρ_AB = ∑_i p_i Π_i,where ∑_i p_i =1. For a given channel 𝒩_B→B̃, it follows that (id_A ⊗𝒩_B→B̃)(ρ_AB) = ∑_i p_i (id_A ⊗𝒩_B→B̃)(Π_i).Invoking concavity of conditional entropy, we have S(A|B̃)_σ = S(A|B̃)_∑_i p_i σ_i ≥∑_i p_i S(A|B̃)_σ_i,where σ_i = (id_A ⊗𝒩_B→B̃)(Π_i) and σ_AB̃ = (id_A ⊗𝒩_B→B̃)(ρ_AB) = ∑_i p_i (id_A ⊗𝒩_B→B̃)(Π_i) = ∑_i p_i σ_i Thus, it suffices to characterize negative conditional entropy breaking channels on pure states. Therefore, any channel that breaks negative conditional entropy for pure states must also do so on mixed states.The coherent information of a channel 𝒩_B→B̃ depends on the conditional entropy of the output bipartite state σ_AB̃, given a pure state input.Consider any pure state Π_i ∈ D(ℋ_AB) and let σ_AB̃ = (id_A ⊗𝒩_B→B̃)(Π_i).Coherent information of the output state σ_AB̃ is given by I(A⟩B̃)_σ = S(B̃)_σ - S(AB̃)_σ = -S(A|B̃)_σ and overall, the coherent information of the channel isQ(𝒩_B→B̃) = max_Π_i I(A ⟩B̃)_σ.Thus, for pure states, we find that the conditional entropy of the output states is related to coherent information of the channel.The set of all negative conditional entropy breaking channels is the same as zero coherent information channels, i.e., NCEB^(d) = 𝒬^(d)_0.First, we show that NCEB^(d)⊆𝒬^(d)_0 and then𝒬^(d)_0⊆ NCEB^(d). Forward implication: NCEB^(d)⊆𝒬^(d)_0:Based on prior arguments, it suffices to comment on the action of the channel on pure states to characterize NCEB^(d) on all d ⊗ d bipartite states. Consider a channel 𝒩_B→B̃∈ NCEB^(d). By definition, S(A|B̃)_σ≥ 0, σ_AB̃ = (id_A ⊗𝒩_B→B̃)(ρ_AB) for any input state ρ_AB. Let χ_AB̃ = (id_A ⊗𝒩_B→B̃)(Π_i) for some pure state Π_i ∈ D(ℋ_AB) and I(A ⟩B̃)_χ be the coherent information of the output state.We know from definition that I(A⟩B̃)_χ = -S(A|B̃)_χ. 
It follows that I(A⟩B̃)_χ≤ 0 as S(A|B̃)_χ≥ 0.Now, we have Q(𝒩_B→B̃) = max_Π_i I(A⟩B̃)_χ. Since the outputs of all pure state inputs have I(A ⟩B̃)_χ≤ 0, it follows that Q(𝒩_B→B̃) must be 0 as coherent information of the channel is always non-negative. Hence 𝒩_B→B̃∈ Q^(d)_0. Since this holds true for an arbitrary channel in NCEB^(d), it must hold true for all channels in the set. Therefore,NCEB^(d)⊆𝒬^(d)_0 . Reverse implication: 𝒬^(d)_0 ⊆ NCEB^(d):We employ a proof by contradiction. Assume that 𝒬^(d)_0 ⊈NCEB^(d) and let 𝒩_B→B̃∈ Q^(d)_0 but not in NCEB^(d) This implies there exists a state ρ_AB∈𝒟(ℋ_AB) such that S(A|B̃)_σ < 0, σ_AB̃ = (id_A ⊗𝒩_B→B̃)(ρ_AB). We know that ρ_AB = ∑_i p_i Π_i for pure states Π_i ∈ D(ℋ_AB) with ∑_i p_i =1. Using this and the concavity of conditional entropy, we have that S(A|B̃)_σ = S(A|B̃)_∑_i p_i σ_i ≤ S(A|B̃)_σ_i < 0,with σ_i = (id_A ⊗𝒩_B→B̃)(Π_i) and σ_AB̃ = (id_A ⊗𝒩_B→B̃)(ρ_AB) = (id_A ⊗𝒩_B→B̃)(∑_i p_i Π_i) = ∑_i p_i σ_i.This implies that at least one pure state input exists for which the output state, under the channel,has negative conditional entropy. Recall that Q(𝒩_B→B̃) = max_Π_i I(A ⟩B̃)_σ_i and that I(A⟩B̃)_σ_i = -S(A|B̃)_σ_i. Thus, if the output state of the channel for any pure state input has negative conditional entropy, it would imply that the coherent information of the channel is strictly positive (non-zero). This contradicts the fact that 𝒩_B→B̃∈ Q^(d)_0 . Therefore our assumption is wrong and 𝒬^(d)_0 ⊆ NCEB^(d).Thus we have, NCEB^(d) = 𝒬^(d)_0 .§.§ Relation between NCEBand MIB In this subsection, we define mutual information-breaking channels (MIB). The channels 𝒩_B→B̃ acting on one part of a d ⊗ d system, where the mutual information of the output state becomes zero i.e., I(A; B̃)_σ = 0,where σ_AB̃ =(id_A ⊗𝒩_B→B̃)(ρ_AB) are termed as mutual information breaking channel (MIB). Expanding on the concept of mutual information-breaking channels (MIB), consider their application in 2 ⊗ 2 quantum systems. Channels like the total depolarizing channel and the total amplitude damping channel, when applied to a part of a bipartite state, effectively create a product state. This transformation is important because it reduces the mutual information of the system to zero. Let the set containing these channels, mapping states between d-dimensional systems and having zero mutual information on the output states be defined as,I^(d)_0 = {𝒩_B→B̃ | I(A; B̃)_σ = 0withσ_AB̃ = (id_A ⊗𝒩_B→B̃)(ρ_AB), ∀ρ_AB∈ D(ℋ_AB)}.We prove that any mutual information-breaking channel is a conditional entropy breaking channel.I^(d)_0 ⊂ NCEB^(d)Let 𝒩_B→B̃∈ I^(d)_0 be arbitrary channel. The action of the channel on a d ⊗ d state ρ_AB results in a state σ_AB̃. By definition, the mutual information of the resultant state will be,I(A: B̃)_σ=S(A)_σ +S(B̃)_σ -S(A B̃)_σ = 0S(AB̃)_σ =S(A)_σ +S(B̃)_σ.The conditional entropy of the output state ρ_AB̃ in such a scenario will beS(A| B̃)_σ = S(AB̃)_σ - S(B̃)_σ = S(A)_σ≥ 0 .This follows from the fact that the entropy of the quantum state is non-negative. Since this holds true for an arbitrary input state ρ_AB, we conclude that𝒩_B→B̃∈ NCEB^(d). Hence I^(d)_0 ⊂ NCEB^(d)Interestingly the converse is not true i.e NCEB^(d)⊂ I^(d)_0. §.§ Relation between NCEBand EB Entanglement breaking channels are a crucial class of quantum channels in the field of quantum information theory, as discussed in <cit.>. An entanglement breaking channel 𝒩_B→B̃ is defined as follows:(id_A ⊗𝒩_B→B̃)(Γ_AB) = ∑_i p_i ρ^i_A ⊗ρ^i_B,for all Γ_AB∈𝒟(ℋ_AB). 
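Before making this connection precise, a small sanity check with a concrete measure-and-prepare map can be instructive (the computational-basis POVM elements F_k and re-prepared states R_k below are an assumed example of the Holevo form above, and the entropy and ptrace helpers from the earlier snippets are reused): applied to one half of a Bell state, the map produces a classically correlated output whose conditional entropy is non-negative.

import numpy as np

# Measure-and-prepare channel in Holevo form: F_k = |k><k| (POVM), R_k = |k><k| (re-prepared states).
F = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
R = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]

def holevo_on_B(rho_ab, dA=2, dB=2):
    # id_A ⊗ N with N(sigma) = sum_k Tr(F_k sigma) R_k.
    out = np.zeros_like(rho_ab)
    for Fk, Rk in zip(F, R):
        weight_A = ptrace(np.kron(np.eye(dA), Fk) @ rho_ab, dA, dB, keep='A')  # Tr_B[(I ⊗ F_k) rho_AB]
        out += np.kron(weight_A, Rk)
    return out

phi = np.zeros(4); phi[[0, 3]] = 1 / np.sqrt(2)
bell = np.outer(phi, phi)
sigma = holevo_on_B(bell)
print(entropy(sigma) - entropy(ptrace(sigma, 2, 2, keep='B')))   # S(A|B) of the output; here exactly 0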
In the following theorem, we establish a connection between the set of entanglement-breaking channels and negative conditional entropy breaking (NCEB) channels. Given an entanglement breaking channel 𝒩_B→B̃ acting on a d-dimensional system, 𝒩_B→B̃ is also a NCEB channel.Since 𝒩_B→B̃ is an entanglement breaking channel, we can express it as:(id_A ⊗𝒩_B→B̃)(ρ_AB) = ∑_i p_i σ^i_A ⊗σ^i_B ,for some input state ρ_AB. Now, for any separable state, the conditional entropy is non-negative. Thus, by the definition of NCEB channels, 𝒩_B→B̃ is also an NCEB channel.The above result shows that the set of entanglement-breaking (EB) channels is a strict subset of NCEB channels. The subset relation arises because the range space of any EB channel is a strict subset of the set of states possessing non-negative conditional entropy. This was also shown in <cit.>. §.§ Relation between NCEB and Zero Capacity Channels Zero capacity channels are an interesting class of channels in quantum communication and quantum information theory. Such channels do not possess the ability to transmit quantum information. While a complete characterization of zero capacity is still open, two well-known classes of quantum channels, namely positive partial transposition (PPT) channels<cit.>, and anti-degradable channels (ADG) channels <cit.> are found to have zero capacity. PPT channels are those quantum channels whose Choi operator is a PPT state. Anti-degradable channels are those channels whose output can be simulated from the complementary channel's output. Thus, whatever information was lost to the environment is sufficient to recreate what was sent through the channel, making the actual transmission almost irrelevant for information retrieval. The statement below establishes the connection between NCEB channels and zero capacity quantum channelA quantum channel 𝒩_B →B̃) with zero quantum capacity is also an NCEB channel, i.e, 𝒩_B →B̃∈ NCEB^(d) Consider a quantum channel 𝒩_B →B̃ acting on a d-dimension system with quantum capacity 𝒬(𝒩_B →B̃) = 0. We know that coherent information of a quantum channel is a lower bound for its quantum capacity, i.e., 𝒬(𝒩) ≥Q(𝒩). Additionally, coherent information of a channel is non-negative. From these two statements, it follows that𝒬(𝒩_B→B̃) = 0Q(𝒩_B →B̃) = 0.Thus, 𝒩_B→B̃∈ Q^(d)_0 and hence, 𝒩_B→B̃∈ NCEB^(d)§.§ Relation between NCEA and NCVE channels Here, we explore the connection between negative conditional entropy annihilating channels(NCEA) and the class of conditional von Neumann entropy non-decreasing, known as NCVE(A|B → C|D) and A-unital channels (denoted by UNI(A|B)), introduced in <cit.>. A channel 𝒩_AB → CD is in NCVE(A|B → C|D) if for all ρ_AB∈ D(ℋ_AB), we have S(C|D)_σ≥ S(A|B)_ρ with σ_CD = 𝒩_AB → CD(ρ_AB). On the other hand, a channel 𝒩_AB → AB is A-unital if for every ρ_B∈ D(ℋ_B), 𝒩_AB → AB(I_A/d_A⊗ρ_B) = I_A/d_A⊗σ_B. The equivalence of NCVE(A|B)(NCVE(A|B → A|B)) and UNI(A|B) was established in <cit.>. Our analysis focuses on channels 𝒩_B→B̃ that are NCEA, that is 𝒩_B→B̃∈ NCEA^(d) where B, B̃ are d-dimensional systems. In this context, we define NCVE(B_1|B_2) and UNI(B_1|B_2) where B_1, B_2 represent a fixed bipartition of B with B̃_1, B̃_2 as the corresponding isomorphic partitions in B̃. We first discuss an example of a channel common to these classes and proceed to demonstrate a nuanced relationship between NCEA^(d) and NCVE(B_1|B_2).In section <ref>, we examined the global depolarizing channel in the context of NCEA channels. 
We consider the action of such a map on a d ⊗ d system B. We observe that for the channel in equation <ref>,ℰ_gd(I_B_1/d⊗ρ_B_2) = p(I_B_1/d⊗ρ_B_2) + (1-p) I_B_1/d⊗I_B_2/d= I_B_1/d⊗ (p ρ_B_2 + (1-p) I_B_2/d) = I_B_1/d⊗σ_B_2.Thus, the global depolarizing channel is A-unital and, by its equivalence, in NCVE(B_1|B_2). Furthermore, we are interested in identifying the range of p for which the channels are NCEA. Combining lemma <ref> and the action of the channel on a maximally entangled state (described in equation <ref>), we determine that the range for p aligns with the conditions for which the output state maintains non-negative conditional entropy.Naturally, one could ask whether NCEA^(d^2) and NCVE(B_1|B_2) are equivalent or posses a intricate relationship. Despite the aforementioned example, we demonstrate that neither set is a subset of the other below: * NCVE(B_1|B_2) does not necessarily imply NCEA^(d^2). While NCVE(B_1|B_2) ensures S(B_1|B_2)_σ≥ S(B_1|B_2)_ρ, it does not guarantee S(B_1|B_2)_σ≥ 0. For instance, the identity channel id_B, included in NCVE(B_1|B_2), does not qualify as an NCEA channel.* Conversely, NCEA^(d) is not a subset of NCVE(B_1|B_2) since a non-negative conditional entropy for output states does not imply a greater conditional entropy compared to the input state. § CHARACTERIZATION AND DETECTION OF NCEA AND NCEBIn this section, we discuss the topological characterization of the NCEB and NCEA channels. We prove the set of negative conditional entropy breaking channels NCEB^(d) to be convex and compact. This confirms the existence of a witness to detect the action of the channels that do not belong to this class. This empowers us to identify channels that are useful and not negative conditional entropy breaking. We also show that the set NCEA^(d) is convex. §.§ NCEB^(d) is convex and compact NCEB^(d) is convex. Let AB be a d ⊗ d bipartite system and𝒩^1_B→B̃, 𝒩^2_B→B̃∈ NCEB^(d). From the definition of NCEB^(d), we haveS(A|B̃)_σ^1≥ 0 S(A|B̃)_σ^2≥ 0,where σ^1_AB̃ = id_A⊗𝒩^1_B→B̃(ρ_AB) and σ^2_AB̃ = id_A⊗𝒩^2_B→B̃(ρ_AB) and ρ_AB∈ D(ℋ_AB). Consider 𝒩_B→B̃ = λ𝒩^1_B→B̃ + (1 -λ) 𝒩^2_B→B̃ for some λ∈ [0,1].Given any input state ρ_AB∈ D(ℋ_AB) , we have σ_AB̃ = id_A⊗𝒩_B→B̃(ρ_AB) = λ (id_A⊗𝒩^1_B→B̃)(ρ_AB) + (1 - λ)(id_A⊗𝒩^2_B→B̃)(ρ_AB)=λσ^1_AB̃ + (1-λ)σ^2_AB̃,with σ^1_AB̃ = id_A⊗𝒩^1_B→B̃(ρ_AB) and σ^2_AB̃ = id_A⊗𝒩^2_B→B̃(ρ_AB), where the equivalences follow from the linearity of quantum channels and distributive property of addition over tensor product. From the definition of NCEB^(d), we have S(A|B̃)_σ^1≥ 0 S(A|B̃)_σ^2≥ 0.It is clear that σ^1_AB̃, σ^2_AB̃∈ CVENN <cit.>, the set of states with non-negative conditional entropy. Using convexity of CVENN, it follows that σ_AB̃∈ CVENN. Since this holds true for an arbitrary inputρ_AB, we conclude that 𝒩_B→B̃∈ NCEB^(d), thereby proving convexity of the set. NCEB ^(d) is compact This was established in <cit.>, however we provide an alternative proof here. Let Φ_NCEB^(d) be the set of Choi states of quantum channels inNCEB^(d). From definition of NCEB^(d), we have S(A|B̃)_ρ_AB̃≥ 0, ∀ρ_AB̃∈Φ_NCEB^(d). We claim that Φ_NCEB^(d) has a closed range under conditional entropy function, i.e., S(A|B̃)_Φ_NCEB^(d) = [0, log d]. The minimum value of 0 is achieved for local projective measurements on the B subsystem of a maximally entangled state. Whereas conditional entropy of log d is obtained for a completely depolarizing channel acting on B subsystem of I/d⊗ρ_B. 
It must be noted that both these channels are entanglement breaking and hence are conditional entropy breaking. Since the output range forms a closed interval and conditional entropy is a uniformly continuous function, it implies that Φ_NCEB^(d) is closed. Additionally, since channel-state isomorphism is continuous <cit.> and NCEB^(d) is the inverse image of the closed set Φ_NCEB^(d), it follows that NCEB^(d) is closed.It is known that the completely bounded trace norm of quantum channels is equal to 1 <cit.>. Hence, channels in NCEB^(d) are also bounded maps. As NCEB^(d) is closed and bounded, it must be compact. Existence of Witness Operators : Since we have established that the set NCEB^(d) is closed and compact, by Hahn Banach <cit.> theorem, there will exist a hyperplane given which detects non-NCEB channels. We introduce the notion of a number-witness to detect non-NCEB channels. Consider a scalar-valued functional W(Ψ) = max (0, inf_Φ∈ NCEB^(d)Ψ - Φ_◊)where ._◊ represents the diamond norm on quantum channels. From the definition, we observe that if Ψ∈ NCEB^(d), then W(Ψ) = 0 and if W(Ψ) > 0 it implies Ψ∉NCEB^(d). Due to compactness of NCEB^(d), we will have a map Φ' ∈ NCEB^(d) that is nearest to Ψ. Thus, W(Ψ) constitutes as a witness for non-NCEB channels.A similar witness can also be constructed to detect non-NCEA channels because of the characterization given in the next subsection.§.§ Set of NCEA^(d) is convex NCEA^(d) is convex. Consider two quantum channels 𝒩^1_B →B̃, 𝒩^2_B →B̃from NCEA^(d). For any given input density operator ρ_B ∈ D(ℋ_B), we haveS(B̃_1|B̃_2)_σ^1_B̃≥ 0 S(B̃_1|B̃_2)_σ^2_B̃≥ 0,with σ^1_B̃ = 𝒩^1_B→B̃(ρ_B) and σ^2_B̃ = 𝒩^2_B→B̃(ρ_B). Let 𝒩_B→B̃ = λ𝒩^1_B →B̃ + (1 -λ) 𝒩^2_B →B̃ for some 0 ≤λ≤1. We now consider the action of 𝒩_B→B̃ on ρ_B. Thus, we getϵ_B̃ = 𝒩_B→B̃(ρ_B) = λ𝒩^1_B→B̃(ρ_B) + (1 - λ) 𝒩^2_B→B̃(ρ_B)=λϵ^1_B̃ + (1-λ)ϵ^2_B̃,where ϵ^1_B̃ = 𝒩^1_B→B̃(ρ_B) and ϵ^2_B̃ = 𝒩^2_B→B̃(ρ_B). We can express conditional entropy across the B̃_1 - B̃_2 partition asS(B̃_1|B̃_2)_ϵ = S(B̃_̃1̃|B̃_2)_λϵ^1 + (1-λ) ϵ^2 ≥λ S(B̃_1|B̃_2)_ϵ^1 + (1-λ)S(B̃_1|B̃_2)_ϵ^2 ≥ 0.The first inequality follows from the concavity conditional entropy and the second from the definition of NCEA^(d) channels. Therefore, the set of NCEA^(d) must be convex.Finally, NCEA^(d) is compact, and the proof can be done using a technique analogous to the one used in <cit.>.§ CONCLUSIONS In conclusion, we can say that in this article, we have dealt with channels that destroy the negative conditional entropy of a quantum state. Since negative conditional entropy is a resource, these channels can be broadly classified as resource-breaking channels. In particular, we have extended the characterization of negative conditional entropy breaking channel (NCEB) in further depth and detail. In addition to, we have introduced a class of channels called the negative conditional entropy annihilating channel (NCEA). We have examined the properties of these channels when they are combined serially and in NCEBs in parallel . We investigate complementary channels associated with NCEB, which leads us to the information-leaking channels. In this work, we take depolarizing channels to give examples and further characterize of these channels. 
We have connected these channels (NCEB and NCEA) with standard channels like zero capacity channels, entanglement breaking channels (EB), conditional von Neumann entropy non-decreasing channels (NCVE), and with newly introduced channels like coherent information breaking channels (CIB) and mutual information breaking channels (MIB). We gave a further topological characterization of NCEB channels by showing that the set containing them is convex and compact. This empowers us to detect channels that will not break the negativity of conditional entropy, ensuring the conservation of quantum resources. Acknowledgements: NG acknowledges support from the project grant received from DST-SERB (India) under the MATRICS scheme, vide file number MTR/2022/000101. PV thanks Ms. Mahathi Vempati for useful discussions while working through this problem.
http://arxiv.org/abs/2311.15705v2
{ "authors": [ "PV Srinidhi", "Indranil Chakrabarty", "Samyadeb Bhattacharya", "Nirman Ganguly" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20231127104815", "title": "On quantum channels that destroy negative conditional entropy" }
http://arxiv.org/abs/2311.16066v1
{ "authors": [ "Gustavo P. de Brito", "Astrid Eichhorn", "Shouryya Ray" ], "categories": [ "hep-th", "gr-qc", "hep-ph" ], "primary_category": "hep-th", "published": "20231127183746", "title": "Light fermions in color: why the quark mass is not the Planck mass" }
[email protected] Zürich & Disney Research|Studios Zürich [email protected] Research|Studios Zürich [email protected] Zürich Zürich [email protected] Zürich & Disney Research|Studios Zürich Switzerland [500]Computing methodologies Physical simulation [500]Computing methodologies Neural networks [500]Computing methodologies Shape representations [500]Computing methodologies Point-based models
[Teaser figure] Wrinkle Generation in Cloth-Object Interaction. (Left) A coarse-resolution mesh grid (resolution 128 × 128, totaling 49152 free variables) employs the original mesh connectivity for loss computation. Traditional meshes struggle to produce detailed wrinkles at low resolutions, and even generate unnatural artifacts due to discretization in some places. (Middle Left) A multi-resolution grid neural network with fewer free variables (47369) captures cloth details using the original mesh connectivity. This model shows small improvements over direct vertex optimization but still finds it challenging to capture detailed wrinkles correctly and naturally. (Middle Right) The same variables (47369) in the multi-resolution grid model, with losses computed using our novel method (Section <ref>) but with uniform sampling of local structures (Subsection <ref>). This continuous domain approach significantly enhances wrinkle patterns in a more natural way. (Right) The same variables (47369) in the model, with losses computed using our method and adaptive sampling of local structures (Subsection <ref>), yield the most natural and refined wrinkles and demonstrate superior results compared to uniform sampling when trained for the same number of epochs.
The accurate representation of fine-detailed cloth wrinkles poses significant challenges in computer graphics. The inherently non-uniform structure of cloth wrinkles mandates the employment of intricate discretization strategies, which are frequently characterized by high computational demands and complex methodologies. Addressing this, the research introduced in this paper elucidates a novel anisotropic cloth regression technique that capitalizes on the potential of implicit neural representations of surfaces. Our first core contribution is an innovative mesh-free sampling approach, crafted to reduce the reliance on traditional mesh structures, thereby offering greater flexibility and accuracy in capturing fine cloth details. Our second contribution is a novel adversarial training scheme, which is designed meticulously to strike a harmonious balance between the sampling and simulation objectives. The adversarial approach ensures that the wrinkles are represented with high fidelity, while also maintaining computational efficiency.
Our results showcase through various cloth-object interaction scenarios that our method, given the same memory constraints, consistently surpasses traditional discrete representations, particularly when modelling highly-detailed localized wrinkles. Spatially Adaptive Cloth Regression with Implicit Neural Representations Markus Gross January 14, 2024 ========================================================================§ INTRODUCTIONIn recent years, learning-based methods have become increasingly popular for simulating cloth. These methods use neural networks to predict the deformations on virtual garments. A common approach for training these neural networks is supervised learning <cit.>, which requires large amounts of physics-based simulated or animated cloth data as ground truth. The training process minimizes the vertex offsets between the predicted and ground truth meshes. Although inference with these trained networks is nearly real-time, the generalizability of supervised learning methods can be limited and generating sufficient training data can be difficult or time-consuming.To overcome these limitations, unsupervised learning methods have been developed. Bertiche et al. <cit.> introduced a novel unsupervised learning method that formulates the loss function as the garment's potential energy. This method jointly trains the neural network weights and evaluates the equations of motion for quasi-static scenarios, allowing the regression of garment vertex positions by directly minimizing the potential energy without the need for training data. Santesteban et al. <cit.> further improved this approach by adding temporal information and kinetic energy to the loss function for dynamic garments, and a hyperelastic material model to characterize in-plane elasticity.However, these unsupervised techniques demand an explicit representation of the entire garment mesh, leading to extensive networks with slow convergence rates, and low fidelity in representing fine cloth details, e.g., wrinkles. In response, we propose an implicit representation of garments that uses a multi-resolution grid structure. This representation boasts several advantages: reduced memory usage, and most importantly a continuous domain with inherent adaptivity. This adaptivity permits the network weights to capture intricate details at any spatial location without changing the network architecture. Leveraging this strength, we introduce a novel mesh-free sampling technique that reduces reliance on traditional mesh structures. This offers enhanced flexibility and precision in capturing fine cloth details. Employing this sampling approach, we formulated an adversarial loss function, finely-tuned to strike a balance between sampling and simulation objectives, thus aiding in training.We demonstrate that, under the same memory constraints, our method consistently outperforms traditional discrete representations. This is especially evident in the enhanced simulation results for detailed cloth wrinkles, particularly for small, localized ones. Contributions. In summary, the major technical contributions of this paper include * A specifically designed multi-resolution grid encoding model for neural implicit surface representation to enable efficient garment simulation. * A suitable sampling method specifically designed for adaptive garment simulation. * A new formulation for the losses computed on neural implicit surfaces based on a newly proposed sampling local structure. 
* A novel adversarial loss formulation for adaptive garment simulation and its proof of effectiveness.§ RELATED WORKCloth Simulation. The simulation of cloth is a long-standing and widely researched topic in computer animation. Since the debut of the seminal Baraff–Witkin model <cit.>, several improvements were proposed to better virtually represent fabrics over the years. These include mixed implicit-explicit solvers <cit.>; improving stability <cit.>; finite-elements formulations with co-rotational <cit.>, hyperelastic <cit.>, linear orthotrophic <cit.> and Baraff–Witkin <cit.> energy strains; adaptive remeshing for cloth <cit.>, paper <cit.> and thin-sheets <cit.>; efficient modelling of yarn-level fabrics <cit.>; anisotropic elastoplasticity coupled with frictional contacts <cit.>, Eulerian-on-Lagrangian contact resolution <cit.>, and sub-millimeter wrinkle synthesis <cit.>. For an analysis of different strain formulations along with production implementation practicalities, we refer to Kim and Eberle <cit.>. Wrinkle Simulation. There has been a significant focus on proficiently enhancing coarse base animations with intricate wrinkle details. Beginning with Grinspun et al. <cit.>, who introduced adaptive refinement for wrinkles and folds, the field has progressed with Bergou et al. <cit.> utilizing constrained Lagrangian mechanics to mirror low-resolution dynamics. Rohmer et al. <cit.> provided dynamic wrinkles integration through strain tensor analysis. Müller and Chentanez <cit.> harnessed position-based dynamics for intricate wrinkles, while Chen et al. <cit.> emphasized on the interplay of cloth and body, capturing fine wrinkles. Zuenko and Harders <cit.>, Rémillard and Kry <cit.>, and Casafranca and Otaduy <cit.> delved into unique methods to replicate human skin wrinkling. Furthermore, tension field theory (TFT) and data-driven approaches, highlighted by works from Chen et al. <cit.> and Wang et al. <cit.>, have enriched the field with detailed and realistic wrinkle simulations. Collision detection. A crucial step from numerically simulating cloth is the collision detection and response phase. Such process is often the bottleneck of the entire simulation, specially if implemented naively. Since we aim to mimic steps of a physically-based solver during the training phase, it is important to understand how collision detection can be robustly and efficiently implemented on GPUs. Bridson et al. <cit.> adopted the GPU-friendly signed distance functions (SDFs). SDFs were also regressed implicitly by a neural network relative to a given a character pose <cit.>; such an approach can be useful for animated characters, since the majority of the collisions are due to cloth-body interactions. Similarly, Santesteban et al. <cit.> proposes a self-supervised collision loss that augments decoded network predictions by automatically sampling the latent space connected to a collision loss. Other works also focus on efficiently dealing with cloth self-collisions on the GPU; repulsion-based methods <cit.> model spring forces using minimal edge distances to avoid interpenetration. Tang et al. <cit.> implemented an efficient collision-detection algorithm tailored for GPUs that combines spatio–temporal coherence, bounding volume hierarchies, discrete (DCD) and continuous collision detection (CCD). Lastly, Lan et al. 
<cit.> employs a medial axis transform to model volumetric objects, combining spatial hashing and a collision culling algorithm that exploits mathematical properties of the medial axis transform. Data-driven methods. Many works have used data-driven methods without relying on Machine Learning, some of which include: example-based wrinkle synthesis <cit.>, cloth upsampling for real-time applications <cit.>, efficient mesh representations for clothed humans <cit.> and soft tissue animation <cit.>. Accurately estimating physical parameters for simulating cloth is an important task in order to faithfully recreate them in virtual environments. Data-driven estimation of cloth parameters include models represented by linear <cit.>, Kirchhoff–Love <cit.> and St. Venant–Kirchhoff <cit.> strain energies. Machine Learning in Computer Animation.Several works <cit.> were proposed to reduce computations when regressing physically-based deformations. Tan et al. tailored the computational graph for simulating cloth in both width and depth: a graph-based convolutional neural network encodes the input into a low dimensional space, while a recurrent neural network (RNN) learns a fully differentiable physics loss in a reduced number of iterations. Similarly, but substituting the RNN by a limited set of message passing iterations, deformables <cit.>, continuous materials <cit.>, and soft tissues <cit.> were successfully regressed by graph neural networks. The aforementioned approaches, however, only loosely approximate the equations of motion; hence, Fulton et al. <cit.> proposed a subspace solver that directly integrates the the latent space of a non-linear autoencoder to more aggressively reduce the width of the computational graph. Follow up work <cit.> identified missing non-linear inertial terms when integrating the latent space of autoencoders. However these terms require third-order (Hessians) network derivatives, which were approximated with a complex-step finite difference method. Other works include modelling cloth–body interactions through point features represented by varying levels of detail <cit.>, graph convolutions tailored to cloth regression and upsampling <cit.>, mapping deformations to a two dimensional spaces to exploit efficient CNN architectures <cit.>, high-frequency wrinkle synthesis <cit.>, decoupling low and high-frequency mesh deformations with mixture models <cit.>. § METHODWe propose a novel representation of the garment surface using implicit neural representations; the details of the surface are captured using neural network parameters. Building on this implicit neural representation, we introduce a new formulation to compute the simulation losses based on a sampling local structure. We propose a minimax adversarial objective function. During training, we alternate between sampling and simulation objectives to strike a balance between speed and accuracy. Structure. In Subsection <ref> we detail our approach to utilizing neural networks for representing the implicit surface, which includes our specially designed multi-resolution grid encoding neural network model. In Subsection <ref>, we delve into the sampling method and explain the rationale behind our choice. Subsection <ref> introduces a novel loss computation method for the neural implicit surface, based on sampling local structures. Subsection <ref> presents our innovative minimax adversarial loss formulation, complete with algorithm details. 
§.§ Representation of Surfaces The traditional representations like the mass-spring system or the finite element method necessitate the discretization of the garment surface. Capturing intricate details, such as cloth wrinkles, with these discretized surfaces is often challenging unless extremely high resolutions are used, which in turn increases computational costs. As an alternative, we employ an implicit neural representation for the cloth. This method provides a continuous domain with inherent adaptivity. Our study emphasizes quasi-static scenarios, as our main objective is to represent cloth behavior accurately and stably in situations with minimal dynamic changes.To parameterize the shape of a cloth, we use the UV coordinates. This is formally represented by the function 𝒮:𝒮: ℝ^2→ℝ^3,𝐩_UV ↦𝐩_3D,where 𝐩_UV represent the UV coordinates, and 𝐩_3D represent the deformed 3D position.Each 3D position on the deformed cloth shape can be decomposed into two components, the undeformed position 𝐩_0, and the 3D deformation Δ𝐩_3D on top of the undeformed position, i.e., 𝐩_3D = 𝐩_0 + Δ𝐩_3D. Given that the undeformed position 𝐩_0 is known, our primary objective becomes learning the deformation Δ𝐩_3D. This deformation is captured by the function 𝒟:𝒟: ℝ^2→ℝ^3,𝐩_UV ↦Δ𝐩_3D,where Δ𝐩_3D is the difference in the 3D position due to deformation.Following the reasoning above, in our implicit neural representation, instead of using a neural network to represent the map 𝒮, we opt to using a neural network to represent the map 𝒟. The strategy of incremental learning — as exemplified by ResNets <cit.> in learning residuals — offers distinct advantages, particularly when applied to the task of modeling 3D shapes. When a network is focused on capturing the nuanced differences from a base structure, it inherently grapples with simpler and often smaller magnitudes of change compared to recreating an intricate shape in its entirety. This eases the learning process, making the optimization landscape less fraught with local minima that could trap the model in sub-optimal solutions. Furthermore, this incremental approach can act as an implicit form of regularization. Instead of the expansive freedom to generate any conceivable shape, which could inadvertently lead to overfitting, the model is gently tethered to a foundational shape, adapting and molding it through subtle deformations.For training the network, we set the physically based energies as the losses, and utilize back-propagation to optimize the network parameters, this can let us directly obtain the 3D deformation of the garments without explicitly computing the forces in the physical system for solving the equation of motion. Multi-resolution Grid Encoding Model. In computer graphics, the concept of the UV domain refers to a two-dimensional coordinate system that is integral to texture mapping on 3D surfaces. Each vertex of a 3D model is linked with a 2D coordinate (u, v) that determines its correspondence on the texture. This UV mapping effectively transforms a 3D surface into a two-dimensional representation. Because of this, the UV parameterized domain inherently possesses spatial properties. Points that are adjacent or near each other in UV space often have a similar proximity on the actual 3D model.This spatial characteristic of the UV domain is not just theoretical; it provides actionable insights. By understanding how the UV space spatially correlates with the 3D model, this knowledge can be integrated into the encoding process. 
Such integration of prior knowledge can significantly enhance the efficiency and accuracy of the encoding, tailoring it more closely to the nuances of the 3D model it represents.

At the heart of this enhanced encoding is the concept of multi-resolution grid encoding. Think of this as viewing a picture with varying levels of zoom. At a lower resolution, or a more zoomed-out view, you see broader features, capturing the overall essence. Conversely, a high-resolution or zoomed-in perspective reveals the minute intricacies. This method is pivotal for systems where spatial relationships exist in a hierarchical manner. Garment simulation provides an apt illustration: while the broad shape of a shirt or a dress is an overarching spatial feature, the fine stitches, textures, or minute wrinkles are the granular details. The multi-resolution approach ensures that both kinds of detail are captured and represented with fidelity.

In our model, which is specifically designed for garment simulation, we employ bilinear interpolation as a means of embedding unstructured texture coordinates into a structured grid, producing a more standardized representation. This procedure encodes local topological information into the neural network. In detail, each UV point p_UV = (x, y) is passed into the GE (Grid Encoding) to obtain the bilinearly interpolated grid features on each layer of the multi-resolution grid. Such a multi-resolution grid is constructed of L layers, where L is a user-defined constant. Suppose the densest layer is of resolution N_max; then the remaining layers are of resolution
⌊ N_max / 2^1 ⌋, ⌊ N_max / 2^2 ⌋, ⋯, ⌊ N_max / 2^L ⌋.
Note that here we assume
L ≤⌊log_2 N_max⌋.
In detail, on layer l, the interpolated feature vector ℱ^l(x, y) can be computed as:
α = 1/(x_2 - x_1)(y_2 - y_1),
v_x = [ x_2 - x, x - x_1 ],
M = [ ℱ^l(x_1, y_1), ℱ^l(x_1, y_2); ℱ^l(x_2, y_1), ℱ^l(x_2, y_2) ],
v_y = [ y_2 - y; y - y_1 ],
ℱ^l(x, y)= α·v_x·M·v_y
where x_1 = ⌊ x ⌋, x_2 = x_1 + 1, y_1 = ⌊ y ⌋, and y_2 = y_1 + 1. These grid features from each layer are then concatenated together to form the input to the MLP,
𝐆𝐄 (x, y) = ℱ (x, y) = ℱ^1 (x, y) ⊕ℱ^2 (x, y) ⊕⋯⊕ℱ^L (x, y),
where L is the total number of layers. Then, we pass this ℱ(x, y) through the MLP, and the output of the MLP represents the 3D deformation:
Δ𝐩_3D = 𝐌𝐋𝐏(𝐆𝐄 (x, y) ).
We illustrate the pipeline containing the multi-resolution grid encoding model in Figure <ref>. In our implementation, the number of layers in the multi-resolution grid, the number of features on each grid cell, the resolution of the grid cells, and the number of layers and sizes of the following MLP are all user-definable. We will provide a detailed description of the architecture used in our experiments in Section <ref>.
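To make the encoding concrete, the sketch below gives a minimal NumPy implementation of the multi-resolution grid encoding and the bilinear interpolation described above. Layer counts, feature sizes, and all variable names are illustrative assumptions of ours rather than the actual implementation; in the real pipeline the grids are trainable parameters of the learning framework, so that gradients from the simulation losses reach them through the interpolation weights, and the concatenated features feed the MLP that predicts Δ𝐩_3D.

```python
import numpy as np

class MultiResGridEncoding:
    """Illustrative multi-resolution grid encoding over the unit UV square.

    Each of the L layers stores a (res x res x F) grid of features; a query
    point (x, y) in [0, 1]^2 is bilinearly interpolated in every layer and
    the per-layer features are concatenated.
    """

    def __init__(self, n_max=64, n_layers=3, n_features=2, rng=None):
        rng = np.random.default_rng(0) if rng is None else rng
        self.layers = []
        for l in range(n_layers):
            res = max(n_max // (2 ** l), 2)   # coarser grid for each deeper layer
            self.layers.append(rng.normal(scale=1e-2, size=(res, res, n_features)))

    def _bilinear(self, grid, x, y):
        res = grid.shape[0]
        # map UV coordinates in [0, 1] to continuous grid coordinates
        gx, gy = x * (res - 1), y * (res - 1)
        x1, y1 = int(np.floor(gx)), int(np.floor(gy))
        x2, y2 = min(x1 + 1, res - 1), min(y1 + 1, res - 1)
        tx, ty = gx - x1, gy - y1
        f11, f12 = grid[x1, y1], grid[x1, y2]
        f21, f22 = grid[x2, y1], grid[x2, y2]
        # standard bilinear weights, equivalent to the v_x * M * v_y form above
        return ((1 - tx) * (1 - ty) * f11 + (1 - tx) * ty * f12
                + tx * (1 - ty) * f21 + tx * ty * f22)

    def __call__(self, x, y):
        # concatenate the interpolated features of all layers: GE(x, y)
        return np.concatenate([self._bilinear(g, x, y) for g in self.layers])

enc = MultiResGridEncoding()
features = enc(0.37, 0.81)   # input to the small MLP that predicts the deformation
```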
§.§ Sampling Method
To leverage the advantages of the continuous domain and the adaptive benefits of implicit neural representations, we investigate sampling methods specifically designed for the parameterized UV space. This ensures a denser concentration of sampling points in regions with intricate details. In this specific case, we assume that the UV parametrization for the garment has minimal distortion: since garments are often designed using developable surfaces, they can easily be cut into pieces and laid flat on a plane. In every optimization step, we select points based on a probability distribution; regions with more intricate details have higher probability values.

However, the genuine continuous probability density function (PDF) underlying this strategy is unknown and can be challenging to characterize. Still, there are strategies to address this. One of them is to create a discrete approximation of the elusive PDF and select sampling points based on this approximation.

Probability Computation. To better grasp and represent a continuous, unseen PDF, we approximate its values at select discrete points. This snapshot forms a discrete model of the actual continuous PDF, allowing for clearer visualization and simplified sampling. Assuming the true PDF is continuous and smooth, this discrete version is often of high fidelity: in a smooth function, closely situated points have similar values, so the values we determine at these discrete locations are likely reliable indicators of the continuous function's behavior in their immediate vicinity.

We divide the 2D UV space into a grid of moderate density. For each grid cell indexed as (i, j), we calculate a weighted sum of the losses at the center point within that cell. We assume that the probability value p_ij of the grid at this specific epoch is represented by this weighted sum p̂_ij. Starting from a uniform discrete PDF in the first epoch, we update the sampling PDF in each subsequent epoch to align it more closely with the estimated PDF for that specific epoch using linear interpolation:
p'_ij = γ p_ij + (1 - γ) p̂_ij,
where p'_ij is the probability value in the next epoch, and γ is a user-defined constant. Next, we apply an appropriate scaling to produce a discrete PDF whose values sum up to 1. Details on computing the losses are provided in Subsection <ref>.

Inverse Transform Sampling. When dealing with a 2D discrete probability density function (PDF), inverse transform sampling becomes a crucial tool for sampling points. Imagine a 2D discrete space where each point is defined by coordinates (i, j); every point is assigned a specific probability, which we represent as p_ij.

The first stage in the inverse transform sampling process is the calculation of the marginal PDF for each row. This is achieved by taking the sum of probabilities along each row. If you imagine an array or matrix, it is akin to summing up all values in a specific row. We can express the marginal PDF of a given row i as p_i, represented mathematically by the formula:
p_i = ∑_j=1^N p_ij,fori ∈ [M],
where M stands for the total number of rows and N for the total number of columns.

Once the marginal PDF is determined, the next phase is deducing the marginal cumulative density function (CDF) for each row. This involves cumulatively summing the probabilities of rows up to a given point. For any given row i, the marginal CDF is denoted as P_i and is calculated as:
P_i = ∑_k=1^i p_k,fori ∈ [M].

With the marginal CDF in place, the next move is to generate a random number u, sourced from a uniform distribution in the range [0, 1]. This number plays a pivotal role, as it guides us in identifying the sampled row index. Essentially, we are looking for the smallest row index i for which the marginal CDF P_i equals or surpasses u, mathematically put as:
i = inf{k : P_k ≥ u}.

Having pinpointed the row, we then dive deeper into it and compute its conditional CDF. This requires summing up the conditional probabilities along that specific row. For the chosen row i and any column j, the column-wise CDF is represented as Q_ij and is computed via:
Q_ij = ∑_l=1^jp_il/p_i,forj ∈ [N].

The last steps of the process are quite similar to the earlier ones, but on a columnar basis.
A random number v is drawn from a uniform distribution within the range [0, 1], directing us to the specific column index to be sampled within the previously chosen row. We determine j by pinpointing the smallest column index such that Q_ij equals or surpasses v, represented as:
j = inf{l : Q_il≥ v}.

By the end of this process, we obtain a randomly sampled cell (i, j). We then randomly sample a point within the grid cell corresponding to this pair of indices. This point aligns with the original two-dimensional distribution mapped out by p_ij. An essential thing to remember is that for the entire process to be accurate and valid, the probabilities p_ij must be normalized, ensuring their sum equals 1.

Lloyd's Relaxation. Direct sampling according to the PDF may result in points that are overly concentrated in specific regions. To address this, we apply Lloyd's Relaxation to the points acquired through inverse transform sampling. Lloyd's Relaxation is a critical process in ensuring a balanced and uniform distribution of points within a defined space, especially when direct sampling in line with the PDF might lead to an undesired concentration of points in certain regions. This method is primarily employed to refine the positions of points acquired through inverse transform sampling.

The principle behind this technique is the optimization of point positions to improve their distribution in relation to the Voronoi diagram. Imagine we have an initial set of points, which we denote as 𝒫 = {𝐩_1, 𝐩_2, …, 𝐩_n}. Each point 𝐩_i has coordinates (x_i, y_i).

To better understand how Lloyd's Relaxation functions, we walk through the steps in a 2D setting. The process commences by constructing the Voronoi diagram using the present positions of the points in the set 𝒫. This is a spatial division of the plane where each division (or region) contains the locations that are closest to a specific point in the set 𝒫.

Upon the construction of the Voronoi diagram, the next step involves calculating the centroid for each point 𝐩_i within the set 𝒫. The centroid, 𝐜_i, represents the average coordinates of all points lying inside the Voronoi region corresponding to 𝐩_i. Mathematically, the centroid can be expressed as:
𝐜_i = 1/m_i∑_𝐪_j ∈ R_i𝐪_j,
where R_i symbolizes the Voronoi region related to 𝐩_i, and m_i denotes the count of points within that specific region.

Following the centroid calculations, each point 𝐩_i has its position updated to match the coordinates of its respective centroid, 𝐜_i.

This entire sequence of steps is repeated either for a pre-defined number of iterations or until certain convergence criteria are achieved. The benefit of Lloyd's Relaxation is that, as these steps are performed iteratively, the points in the set 𝒫 progressively shift toward a configuration that is more evenly spaced, thereby optimizing the Voronoi diagram. This results in a more uniform distribution of points, avoiding the problem of concentration in specific regions.
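The sampling machinery of this subsection (discrete PDF update followed by row- and column-wise inverse transform sampling) can be condensed into a short sketch. The NumPy code below is a simplified illustration under our own naming; it assumes strictly positive, normalized cell probabilities and omits Lloyd's relaxation, which would be applied afterwards to the resulting point set.

```python
import numpy as np

def update_pdf(p, p_hat, gamma=0.5):
    """Blend the current sampling PDF with the newly estimated one, then renormalize."""
    p_new = gamma * p + (1.0 - gamma) * p_hat
    return p_new / p_new.sum()

def sample_cell(p, rng):
    """Draw one grid cell (i, j) from a normalized 2D discrete PDF p:
    first the row from the marginal CDF, then the column from the
    conditional CDF of that row (inverse transform sampling)."""
    row_marginal = p.sum(axis=1)                  # p_i
    row_cdf = np.cumsum(row_marginal)             # P_i
    i = int(np.searchsorted(row_cdf, rng.random()))
    col_cdf = np.cumsum(p[i] / row_marginal[i])   # Q_ij
    j = int(np.searchsorted(col_cdf, rng.random()))
    return i, j

def sample_uv(p, rng):
    """Return a UV point drawn uniformly inside the sampled grid cell."""
    m, n = p.shape
    i, j = sample_cell(p, rng)
    return (i + rng.random()) / m, (j + rng.random()) / n

rng = np.random.default_rng(0)
pdf = np.full((32, 32), 1.0 / (32 * 32))          # uniform PDF in the first epoch
points = [sample_uv(pdf, rng) for _ in range(256)]
```

In the full pipeline, the sampled points would then be spaced out with Lloyd's relaxation before the local structures are built around them.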
§.§ Simulation Losses
To harness the distinct advantages of the implicit neural representation and exploit its adaptivity, we redefine the simulation energies so that they are tailored to this implicit neural representation. We achieve this by constructing sampling local structures atop our neural implicit surface. Using these sampling local structures, we can compute the losses for the corresponding sampling point, based on the relative positions of the vertices within the local structure.

For clarity, we use the term 3D sampling points to refer to the 3D points corresponding to the sampling points in the UV space. The original sampling points in the UV space are referred to as 2D sampling points. For each 2D sampling point, we construct four equilateral triangles around it in the UV space, as shown in Figure <ref>. The triangle ABC has a degree of freedom θ, which denotes the in-plane rotation. This θ is a randomly generated number within the range [0, 2π / 3] in each epoch.

It is important to note that all 2D sampling points within the sampling local structures are initially established in the UV space, and are then mapped from 2D to 3D. Remember, the 3D position 𝐩_3D corresponding to a 2D UV point 𝐩_UV = (x, y) can easily be computed using the deformation network:
𝐩_3D = 𝐩_0 + 𝐌𝐋𝐏(𝐆𝐄 (x, y) ),
where 𝐩_0 signifies the 3D position corresponding to the UV point in its undeformed state. We further analyze the local surface properties based on this 3D sampling local structure.

The local sampling structure presents several noteworthy benefits.

Firstly, in physics-based simulations that utilize a traditional mesh representation, the outcome of the simulation can be heavily influenced by the quality of the triangulation. However, preparing the input model as a mesh with good triangulation quality often requires meticulous attention and intricate mesh processing algorithms. Each of our local sampling structures is locally Delaunay in 2D space by construction. Moreover, garments are often designed using developable surfaces, which can be readily segmented and flattened on a plane. As a result, when mapping the local sampling structure to 3D space, only minimal distortion occurs. Based on these premises, the 3D local sampling structure usually retains a high-quality local triangulation.

Secondly, the technique of sampling local structures with random orientations offers a nuanced way to comprehend garment material behavior. Instead of relying solely on traditional mesh-based representations, this approach focuses on the minuscule, localized structures within the material. In doing so, it is not limited to a single orientation or direction. By randomly sampling these structures, the method accounts for strain and bend losses from various angles. This is invaluable for understanding garments, as it sheds light on how the material reacts when worn, especially during movement. Many such materials are anisotropic, exhibiting properties that vary depending on the direction: for instance, some fabrics may stretch more in one direction than another. This contrasts with isotropic materials, which display consistent properties irrespective of direction. Given these differences, the sampling method is particularly suitable for simulating anisotropic garments. Instead of assuming uniformity, it samples various orientations of local structures, capturing the unique attributes of anisotropic materials.

In the remainder of this subsection, we demonstrate how we define the losses using this sampling of local structures.

§.§.§ Strain Loss
The computation of the strain loss consists of three parts: precomputation, rest length computation, and the loss computation itself.

Precomputation. Let us consider a garment mesh, denoted as ℳ, along with its corresponding UV parametrization ϕ: ℝ^3 → [0, 1]^2. To represent the 3D positions of the mesh vertices, we employ a square image in the range of [0, 1]^2.
Specifically, we encode the scaled 3D vertex positions as RGB values and assign them to the corresponding pixels of the image. Alternatively, an RGBA image can be used, where the additional alpha channel can represent a mask. Users have the flexibility to specify the resolution of the image, with a default value of 1024.

Let 𝐩_1, 𝐩_2, and 𝐩_3 denote the 3D positions of three vertices within the mesh ℳ. It is possible to determine the 3D position of any 2D point in the UV space, provided that it lies within the convex hull defined by ϕ(𝐩_1), ϕ(𝐩_2), and ϕ(𝐩_3). This interpolation is achieved using the Barycentric interpolation method. It is noteworthy that this step only needs to be computed once for each garment mesh in its rest pose, and parallel computation techniques can be employed to minimize the computational time required.

Barycentric Interpolation. Consider a 2D triangle defined by vertices 𝐩_1, 𝐩_2, and 𝐩_3. Any point 𝐩 within this triangle can be represented as a unique linear combination of these vertices:
𝐩 = λ_1𝐩_1 + λ_2𝐩_2 + λ_3𝐩_3,
where λ_1, λ_2, and λ_3 are the Barycentric coordinates of 𝐩, and λ_1 + λ_2 + λ_3 = 1.

These coordinates not only give the weights of the vertices for interpolating 𝐩, but also remain invariant under affine and barycentric transformations. This invariance yields consistent interpolations under transformations, providing a unique advantage over other interpolation methods.

Using Barycentric coordinates, we can express Barycentric interpolation in the form of:
F(𝐩) = λ_1F(𝐩_1) + λ_2F(𝐩_2) + λ_3F(𝐩_3),
where F represents the function that we wish to interpolate (such as color, texture, or other attributes), and F(𝐩_i) denotes the attribute value at vertex 𝐩_i. Note that in our case, F is the inverse of the UV parametrization function ϕ, under the assumption that ϕ is bijective and thus invertible.

Rest Length Computation. We use the term valid to refer to 2D points that lie within the union of the triangles of the 2D UV triangulation. By utilizing Barycentric interpolation on the pre-computed 3D positions in the rest pose, it becomes possible to compute the 3D position of any valid point within the 2D UV space. For instance, considering two such 2D points denoted as 𝐀 and 𝐁, as depicted in Figure <ref>, an approximation of the rest length of the hypothetical edge connecting the two 3D points represented by 𝐀 and 𝐁 can be determined using the Euclidean norm, expressed as ‖ϕ^-1(𝐀) - ϕ^-1(𝐁)‖_2. It must be possible to compute the 3D edge length between the 3D points represented by any two valid points, since the vertices of the sampling triangles can be any valid 2D points.

Loss Computation. The strain loss is the potential elastic energy of the system, formulated based on Hooke's law in a mass–spring system, and ensures that the cloth is not excessively stretched or compressed. For a given sampling local structure, let E(𝐩, Θ) be the immediate edge set of the sampling point 𝐩 when the surface is in the state captured by the parameters Θ, with its edges marked in blue in Figure <ref>. For an edge 𝐞∈ E(𝐩, Θ), we write ‖𝐞‖_2 for its rest length (computed as described above) and ‖𝐞'‖_2 for its length in the current deformed state. The strain loss at the 3D sampling point 𝐩 can then be computed in two ways. The first is formulated as:
ℒ_Strain (𝐩, Θ) = ∑_𝐞∈ E(𝐩, Θ) (‖𝐞'‖_2 - ‖𝐞‖_2)^2,
which weights edges with larger lengths more heavily. In the second formulation, the strain energy is computed according to the ratio of the length change to the original edge length:
ℒ_Strain(𝐩, Θ) = ∑_𝐞∈ E(𝐩, Θ)(‖𝐞'‖_2 - ‖𝐞‖_2/‖𝐞‖_2)^2,
which weights edges of different lengths equally.
We opt for the second formulation in our implementation.

The total strain loss over all the 3D sampling points is then computed as:
ℒ_Strain (𝒫, Θ) = ∑_𝐩∈𝒫∑_𝐞∈ E(𝐩, Θ)(‖𝐞'‖_2 - ‖𝐞‖_2/‖𝐞‖_2)^2,
where 𝒫 represents the set of all 3D sampling points.

§.§.§ Bend Loss
The bending loss penalizes differences between neighbouring face normals, effectively enforcing locally smooth surfaces. Given a sampling local structure constructed around the sampling point 𝐩 on a neural implicit surface captured by the parameters Θ, let the set of face pairs be ℱ𝒫(𝐩, Θ). For each face pair {f_1, f_2}∈ℱ𝒫 (𝐩, Θ), we denote the corresponding normalized face normals by {𝐧_1, 𝐧_2}. There are three such face pairs in each sampling local structure, as shown in Figure <ref>. Let k_b be the bending constant, A̅ the area sum of the two incident faces, and 𝐞_0 the edge connecting the two faces. Then the bending loss at the 3D sampling point 𝐩 can be formulated as
ℒ_Bend(𝐩, Θ) = ∑_{f_1, f_2}∈ℱ𝒫(𝐩, Θ)1/2· k_b ·√(3)‖𝐞_0‖_2^2/2 A̅·‖𝐧_1 - 𝐧_2‖_2^2.

In our case, we want to ignore the scale difference between sampling local structures, to ensure that each sampling point is weighted in the same way. The bend loss can therefore instead be formulated as:
ℒ_Bend(𝐩, Θ) = ∑_{f_1, f_2}∈ℱ𝒫(𝐩, Θ)‖𝐧_1 - 𝐧_2‖_2^2,
where all the constants are absorbed into the weight assigned to the bend loss in the weighted sum.

The total bend loss over all the 3D sampling points can thus be calculated as
ℒ_Bend (𝒫, Θ)= ∑_𝐩∈𝒫∑_{f_1, f_2}∈ℱ𝒫(𝐩, Θ)‖𝐧_1 - 𝐧_2‖_2^2,
where 𝒫 represents the set of all 3D sampling points.

§.§.§ Gravity Loss
We also incorporate a term that aims to generate more realistic garment predictions by modeling the effect of gravity. Based on classical mechanics, the gravitational potential energy of a 3D sampling point 𝐩 in a surface state captured by the parameters Θ can be calculated as
ℒ_Gravity(𝐩, Θ) = m(𝐩) · g · h(𝐩, Θ),
where m(𝐩) is the mass of the 3D sampling point 𝐩, g is the gravitational acceleration, and h(𝐩, Θ) is the height of 𝐩 measured along a user-specified axis; by default, the axis of gravity is set to be the z axis in our implementation.

The total gravity loss is computed by summing over all the 3D sampling points:
ℒ_Gravity(𝒫, Θ) = ∑_𝐩∈𝒫m(𝐩) · g · h(𝐩, Θ),
where 𝒫 represents the set of all 3D sampling points.

§.§.§ Collision Loss
The model needs to handle collisions with other objects. To do so, we design the following loss:
ℒ_Collision (𝐩, Θ) = ∑_(i, j) ∈𝒜(𝐩, Θ)min(𝐝_j, i·𝐧_j-ϵ, 0)^2,
where 𝒜(𝐩, Θ) represents the set of correspondences (i, j) between the 3D vertices of the local structure sampled around the point 𝐩 on a surface parameterized by Θ and the vertices of the colliding object. These correspondences are found using nearest neighbors. 𝐝_j, i is the vector that goes from the j-th vertex of the colliding object to the i-th vertex of the outfit, 𝐧_j represents the normal vector at the j-th vertex of the colliding object, and ϵ is a small positive threshold used to enhance robustness.

The total collision loss is computed by summing over all the 3D sampling points in 𝒫:
ℒ_Collision (𝒫, Θ) = ∑_𝐩∈𝒫∑_(i, j) ∈𝒜(𝐩, Θ)min(𝐝_j, i·𝐧_j-ϵ, 0)^2.
This loss term is vital for ensuring that the predictions of the garment are valid, as its gradients will encourage the vertices of the outfit to move away from the colliding object.
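Before turning to the adaptive framework, the four loss terms can be condensed into a short sketch. The NumPy code below evaluates them for a single sampling local structure, given explicit rest and deformed edge vectors, face-vertex triples, 3D points, and body points with normals; this data layout and the helper names are our own simplifications, not the actual implementation. In the full system these quantities are produced by querying the deformation network at the 2D vertices of the local structure, so that the gradients of the losses flow back to the network parameters.

```python
import numpy as np

def strain_loss(edges_rest, edges_deformed):
    """Relative-stretch strain loss over the inner edges of one local structure."""
    loss = 0.0
    for e_rest, e_def in zip(edges_rest, edges_deformed):
        l0 = np.linalg.norm(e_rest)
        loss += ((np.linalg.norm(e_def) - l0) / l0) ** 2
    return loss

def face_normal(a, b, c):
    n = np.cross(b - a, c - a)
    return n / (np.linalg.norm(n) + 1e-12)

def bend_loss(face_pairs):
    """Sum of squared normal differences over the adjacent face pairs."""
    return sum(np.sum((face_normal(*f1) - face_normal(*f2)) ** 2)
               for f1, f2 in face_pairs)

def gravity_loss(points_3d, mass=1.0, g=9.81, up_axis=2):
    """Gravitational potential of the structure's points, height along up_axis."""
    return sum(mass * g * p[up_axis] for p in points_3d)

def collision_loss(cloth_pts, body_pts, body_normals, eps=1e-3):
    """Penalize cloth points falling below the tangent plane of the nearest body point."""
    loss = 0.0
    for p in cloth_pts:
        j = int(np.argmin(np.linalg.norm(body_pts - p, axis=1)))  # nearest neighbour
        d = p - body_pts[j]
        loss += min(float(d @ body_normals[j]) - eps, 0.0) ** 2
    return loss
```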
§.§ Adaptivity
§.§.§ Minimax Adversarial Loss Formulation
Drawing from the methods discussed previously, we can now outline our approach to constructing the adaptive sampling framework.

During each epoch, we prioritize sampling points in areas with finer details and then adjust the network's weights to update the neural implicit surface. Ideally, if we could sample an infinite number of points in each epoch, we would compute the losses at each of these points and subsequently update the network weights using them all.

Let us denote a sampling point by 𝐩 = (x, y) and the parameters of the implicit neural surface by Θ. Suppose the sampling space is [0, 1]^2, and let ℱ= {Strain, Bend, Gravity, Collision} represent the set of loss names. We can formulate the ideal optimization problem as:
min_Θ∫_0^1 ∫_0^1 ∑_f ∈ℱ𝒲_f ℒ_f ((x, y), Θ) dx dy,
where 𝒲_f represents the corresponding loss weight for the loss named f ∈ℱ in the weighted sum.

However, due to memory and time constraints, infinite sampling is not feasible. We are limited to a finite number of sample points within the domain for each iteration.

The question then arises: where should these points be sampled? A straightforward strategy is to sample uniformly within the sampling space. However, this approach may not be efficient enough, as uniform sampling does not prioritize regions that require more attention. Based on this consideration, we propose a strategy that samples more densely in regions with finer details. These correspond to areas where the losses are higher, and the strategy is more efficient than simple uniform sampling whenever small regions, such as cloth wrinkles, need more attention than others. Based on this strategy, and assuming the set of sampling points is 𝒫 = {𝐩_1, 𝐩_2, ⋯, 𝐩_N}, N ∈ℕ, with ℱ= {Strain, Bend, Gravity, Collision} the set of loss names, we formulate our heuristic optimization problem as:
min_Θmax_𝒫∑_𝐩∈𝒫∑_f ∈ℱ𝒲_f ℒ_f(𝐩, Θ),
where 𝒫 is subject to constraints preventing the points from being too close together. Mathematically, we define the constraint as follows:
∀𝐩_i, 𝐩_j ∈𝒫, i ≠ j, ‖𝐩_i - 𝐩_j‖_2 ≥δ,
where δ is a pre-defined threshold. However, since the maximization is over a black-box function, we opt for an approximation method to compute the maximization and the constraint part of the system, as detailed in Subsection <ref>.

§.§.§ Details of the Algorithm
We provide the pseudocode of the algorithm in Algorithm <ref>. The algorithm uses an adversarial framework that capitalizes on a dual-player system. The goal is to optimize the representation of the physical properties of the garment, such as bending, strain, gravity, and collision, while concurrently refining the spatial distribution of the simulation points.

Initialization and Model Setup. Initially, the algorithm focuses on setting the groundwork. A set, denoted as 𝒫, is initialized as an empty set and will later serve to store the simulation's sampling points. In parallel, the model parameters, symbolized by Θ, are initialized with random values. These can be thought of as the underlying weights of an MLP or a comparable model, like the parameters in our multi-resolution grid encoding model. These model parameters encode the neural implicit surface, which represents the shape of the garment. The process is further streamlined by defining N, the total number of desired sampling points in the simulation.

Adversarial Player 1: Optimal Point Sampling. In the adversarial training, the first player is responsible for sampling points so as to make the sum of the losses at these sampling points as large as possible.
The parameter, μ, typically defaults to 0.5 but remains user-adjustable within the range [0, 1]. It divides the total sampling points, N, into two distinct categories: * Adaptive Points: A segment of the total points, calculated as N_a, are adaptively sampled. This number is essentially the floor value of the product of μ and N.* Uniform Points: The remainder, denoted as N_u, is uniformly sampled. They provide a consistent distribution to make the sampling points have a good coverage of the whole sampling domain. To achieve the adaptive sampling, a discrete PDF is constructed and updated using the method mentioned in Section  <ref>. The N_a adaptive points are then sampled using this discrete PDF, and the N_u uniform points are sampled according to a uniform PDF within the domain. Subsequently, these adaptively and uniformly sampled points are combined into the main set, 𝒫.To ensure the sampling points do not cluster too closely together, we slightly space out the points but maintain a higher concentration in regions where the losses are large. To achieve this, Lloyd's relaxation is applied. It refines the distribution of the sampling points, ensuring points are spread as uniformly as possible while preserving the original density variations. Adversarial Player 2: Physical Property Calculation And Optimization. As the second adversary enters the game, the focus pivots to the physical essence of the garment. For each point in the optimized set, 𝒫^*, a sampling local structure is generated, as detailed in <ref>, with a randomly generated rotation angle, θ, from the range [0, 2π/3].The algorithm then evaluates a suite of loss functions, tailored to measure various physical properties at each point on the neural implicit surface, where the shape of the surface is captured by the current model parameters, Θ. This includes determining the garment's bending, strain, gravity, and collision losses. Summing these individual losses across all points, we employ back-propagation in conjunction with gradient descent to refine the model parameters, Θ, in order to minimize the loss. This process utilizes a learning rate, α, to update the neural implicit surface.After iterating between the two adversarial players for the specified epochs, the algorithm concludes, presenting the finely-tuned model parameters, Θ. This setup enables continuous querying of the 3D surface positions across the UV domain.In summary, this novel adversarial framework delivers a more accurate and realistic garment simulation, optimizing spatial representation while capturing the intricate nuances of fabric behavior across diverse situations. § EVALUATION All our tests were conducted on a desktop running Ubuntu 20.04.5 LTS, equipped with an Intel(R) Xeon® E5-1680 v3 @ 3.20GHz processor and a GeForce RTX 2080 Ti graphics card. We developed the framework using Python with Tensorflow 2.0. §.§ Network and EncodingGiven that our training process involves randomly generated sampling points and local structures, ensuring a fair comparison for network and encoding can be challenging if evaluated in an unsupervised scheme. So to achieve our goal of highlighting the efficiency of our proposed multi-resolution grid encoding model, we evaluate various neural network models using a supervised approach. Our ground truth is a manually constructed 3D model that resembles a 3D sine wave. 
This model incorporates both low- and high-frequency details, making it an ideal candidate for assessing the performance of different neural network models.

To ensure a fair comparison, the parameters and sizes of the network models have been adjusted so that all models operate under the same memory constraints. In detail, the first model is a baseline MLP architecture. It consists of four fully connected layers, each with its weight matrix and bias vector. The input layer has dimensions 2 × 152, where 2 is the dimension of the UV space, followed by two hidden layers with dimensions 152 × 152 each, and a final output layer with dimensions 152 × 3. This model has a total of 47427 parameters.

The second model incorporates positional encoding into its architecture. Like the first model, it consists of four fully connected layers with their respective weight matrices and bias vectors. The input layer has dimensions 18 × 148, where 18 equals the dimension of the UV space plus the hidden dimension of the positional encoding, followed by two hidden layers with dimensions 148 × 148 each, and a final output layer with dimensions 148 × 3. The total number of parameters in this model is 47363.

The third model is our multi-resolution grid encoding model, which includes two grid layers with shapes 101 × 101 × 3 and 51 × 51 × 3. These grid layers are followed by four fully connected layers with various weight matrices and bias vectors. The first fully connected layer has dimensions 6 × 64, where 6 is the number of concatenated grid features, followed by two hidden layers with dimensions 64 × 64 each, and a final output layer with dimensions 64 × 3, for a total of 47369 parameters.

Speed Comparison. A comparative analysis of the running times across the various neural network models is provided in Table <ref>. In terms of the number of epochs required for convergence, the baseline MLP model was trained for 500000 epochs before completion, while the multi-resolution grid encoding model was trained for only 400 epochs. In terms of wall-clock running time, the multi-resolution grid encoding model is approximately 346.21 times faster than the baseline MLP model: the baseline MLP model took 2 hours, 41 minutes, and 34 seconds to reach its final epoch, while the multi-resolution grid encoding model took only 28 seconds.

Quality Comparison. For a direct visual comparison of the fully trained outputs, please refer to Figure <ref>. It is worth noting that both the baseline MLP model and the positional encoding model exhibit minor artifacts in their outputs despite being trained for a much longer time; the artifacts become apparent when zooming in. These artifacts manifest as difficulties in maintaining sharp and high-frequency features. In contrast, the multi-resolution grid encoding model produces results that closely align with the ground truth, showcasing a high level of fidelity in its representation.

§.§ Representation
We compare the implicit neural representation using the multi-resolution grid encoding model with traditional mesh representations, employing the same unsupervised losses computed based on the original mesh connectivity.

When employing the traditional mesh representation, the simulation simplifies to a vertex optimization problem, with the free variables set to the 3D positions of the vertices (for a total of 49152 free variables in our settings). These variables are optimized to minimize the weighted sum of the losses calculated using the mesh connectivity.
On the other hand, when utilizing the implicit neural representation of the surface with our multi-resolution grid encoding model, the free variables are the parameters of the network model, totaling 47369 free variables, which is fewer than in vertex optimization with the traditional mesh representation. In this scenario, since the input to the network model can be any 2D UV point, we can compute the 3D real-world position of the corresponding UV point from the network's output. Thus, we can query the 3D deformed positions of the original input mesh vertices (given in UV space) and compute the losses using the original mesh connectivity. The network parameters are then optimized to minimize the weighted sum of these losses.

Quality Comparison. For a direct visual comparison, please refer to the first two columns in Figure <ref>, Figure <ref>, and Figure <ref>. These examples were intentionally created to highlight the capability of a single source of cloth-object interaction to generate predictable localized wrinkles. Additionally, you can examine the first two columns of Figure <ref> and Figure <ref> for more complicated examples and an overall effect. In all these examples, we observe that when the traditional mesh representation is used, the expressiveness of the local wrinkles is restricted by the discretization. Moreover, when the mesh resolution is low, the local wrinkles are either ignored or turn into artifacts.

In contrast, when utilizing the implicit neural representation within our multi-resolution grid encoding model, we observe fewer artifacts. Nevertheless, it is crucial to emphasize that this comparison only explores the representation aspect, with the losses still computed on the original mesh connectivity. The extent of improvement is therefore not yet as pronounced at this stage, because the main benefits of the implicit neural representation, namely a continuous domain and adaptivity that allow local losses to be computed without the limitations imposed by discretization, are not exploited here. We showcase this capability in the next subsection.

§.§ Simulation Losses
To better showcase the superiority of our novel loss computation method on top of the neural implicit surface, we compare the simulation results achieved with different unsupervised loss computation methods while using the same network and encoding architecture.

Quality Comparison. For a direct visual comparison, please refer to the second and third columns in Figure <ref>, Figure <ref>, and Figure <ref> to closely examine local and detailed wrinkles. Furthermore, you can explore the second and third columns of Figure <ref> for more complex examples and an overall view.

In all of these examples, we have observed that when the simulation losses are determined based on the original mesh connectivity, the neural network parameters tend to capture localized wrinkles less effectively. This is primarily due to the limited utilization of the continuous domain; we consistently query the same UV points and train the network on these discrete points, causing the network parameters to be updated solely based on results computed at these specific locations within the continuous domain.

However, when the simulation losses are computed using our approach of sampling local structures, the continuous domain can be thoroughly explored. In each epoch, we query random sampling points within the continuous domain and optimize the network parameters based on the losses computed at these points.
This results in significant improvements, particularly noticeable in the case of localized wrinkles generated by a single source of cloth-object interaction. The wrinkles are much better captured when the losses are computed based on our novel local structure sampling method. §.§ Adaptivity Speed Comparison. We kept all other settings the same while changing only the sampling method. We compared the uniform sampling method to the adaptive sampling method, and it turned out that the adaptive sampling method resulted in faster convergence. We compare the number of epochs required for convergence and present the results in Table  <ref>. Quality Comparison. We keep the number of epochs constant, as summarized in Table  <ref>, and visually compare the simulation results of different models. For a direct visual comparison, please refer to the last two columns in Figure  <ref>, Figure  <ref>, and Figure  <ref> to closely examine the detailed wrinkles.When trained with the same number of epochs, the simulation results were significantly improved when using adaptive sampling. This is because the system placed greater emphasis on regions requiring more attention during adaptive sampling, leading to deeper and clearer wrinkles in the results.§ LIMITATIONS AND CONCLUSIONIn this paper, we delved into the potential of leveraging implicit neural representations to simulate intricate cloth details, such as wrinkles. Through various cloth-object interaction examples, our technique demonstrates superiority over conventional discrete representations under the same memory constraints. This is most evident in the enhanced simulation of detailed cloth wrinkles, especially the fine and localized ones. However, our work does come with its challenges. We have categorized these into five aspects, summarized as follows: UV Mapping Limitation. Our current model is restricted to a straightforward case where the UV space is a square domain, [0, 1]^2. When extending this to complex garments, the UV map might encompass irregular boundaries, seams, and void regions. One approach to address this is segmenting the UV map into panels and using a mask within each panel to highlight void areas. Deformations are then learned only for UV positions outside these void spaces. For managing seams and boundaries, constraints could be introduced to ensure smooth transitions on either side of the seams. While we currently adjust the sampling local structures to fit the square domain, future research could delve into improved methods, possibly exploring boundary-specific sampling structures or mirrored seam padding. Theoretical Guarantee. While our adaptive method has demonstrated promising experimental outcomes, a rigorous proof might be necessary to provide a solid theoretical foundation. This involves proving that such adaptive sampling would closely approximate the ideal optimization scenario, aiming to minimize the integrated losses across the entire domain over an infinite number of sampling points. Sampling Methods. We have examined and compared three different sampling techniques: discrete PDF approximation, as well as other probabilistic techniques such as simulated annealing and Bayesian optimization with Gaussian processes. Among these three, discrete PDF approximation performs the best in our specific settings. However, there are numerous other methods that could be applicable. For example, the Winner-takes-it-all method involves multiple random samplings and selecting the one yielding the maximum sum of function values. 
While this method might appear time-intensive, it might still be practical when the runtime is amortized over training.

Encoding Models. Our neural network currently employs a multi-resolution grid encoding, which considerably accelerates the process compared to the baseline MLP. Numerous encoding models exist in other related research fields, such as multi-resolution hash encoding. Incorporating hash encoding alongside our grid encoding is a potential avenue for enhancement, though its efficacy remains contingent on the specific problem.

Loss Balance. The weights for the losses in our system are adjusted manually for every model. A more sophisticated method to determine the loss balance based on material attributes could make tuning more straightforward. However, given the geometrical nature of the collision loss and the unpredictability of the sampling process, devising a systematic method for determining loss weights might prove challenging.

In conclusion, our methodology exhibits promising results in simulating cloth details. However, there is room for continued research and enhancement. We look forward to seeing subsequent studies refine and expand upon our approach. Particularly when considering the simulation of characters in tight-fitting clothing with wrinkles arising from garment-body collisions, the potential is vast. This opens doors for innovative applications in sectors like fashion design, virtual try-ons, and animation.
http://arxiv.org/abs/2311.16344v1
{ "authors": [ "Lei Shu", "Vinicius Azevedo", "Barbara Solenthaler", "Markus Gross" ], "categories": [ "cs.CV", "cs.GR", "68T07", "I.3.0" ], "primary_category": "cs.CV", "published": "20231127222053", "title": "Spatially Adaptive Cloth Regression with Implicit Neural Representations" }
GloNets: Globally Connected Neural Networks
Antonio Di Cecco^1 [0000-0002-9070-4663], Carlo Metta^2 [0000-0002-9325-8232], Marco Fantozzi^3 [0000-0002-0708-5495], Francesco Morandin^3 [0000-0002-2022-2300], Maurizio Parton^1 [0000-0003-4905-3544]
^1 University of Chieti-Pescara, Italy; ^2 ISTI-CNR, Pisa, Italy; ^3 University of Parma, Italy
January 14, 2024

Deep learning architectures suffer from depth-related performance degradation, limiting the effective depth of neural networks. Approaches like ResNet are able to mitigate this, but they do not completely eliminate the problem. We introduce Globally Connected Neural Networks (GloNet), a novel architecture overcoming depth-related issues, designed to be superimposed on any model, enhancing its depth without increasing complexity or reducing performance. With GloNet, the network's head uniformly receives information from all parts of the network, regardless of their level of abstraction. This enables GloNet to self-regulate information flow during training, reducing the influence of less effective deeper layers, and allowing for stable training irrespective of network depth. This paper details GloNet's design, its theoretical basis, and a comparison with existing similar architectures. Experiments show GloNet's self-regulation ability and resilience to depth-related learning challenges, like performance degradation. Our findings suggest GloNet as a strong alternative to traditional architectures like ResNets.

§ INTRODUCTION
Deep learning's success in AI is largely due to its hierarchical representation of data, with initial layers learning simple features and deeper ones learning more complex, nonlinear transformations of these features <cit.>. Increasing depth should enhance learning, but sometimes it leads to performance issues <cit.>. Techniques like normalized initialization <cit.> and normalization layers <cit.> enable up to 30-layer deep networks, but performance degradation persists at greater depths without skip connections. This issue, detailed in the original ResNet paper <cit.>, stems from the fact that learning identity maps is not easy for a deeply nonlinear layer. The ResNet idea is to focus on learning nonlinear "residual" information, with a backbone carrying the identity map. This brilliant solution has been key in training extremely deep networks that, when weight-sharing and batch normalization are used, can scale up to thousands of layers.

Deeper neural networks should not experience performance degradation. Theoretically, a deeper network could match the performance of an n-layer network by similarly learning features 𝒢_1, …, 𝒢_n in its initial layers, then minimizing the impact of additional layers. With this ability to self-regulate, such a network could effectively be "infinitely deep". However, even with ResNet architectures, performance degradation persists beyond a certain depth, see Figure <ref> or <cit.>. This issue can be due to various factors, see for instance <cit.>, and may partly arise from the inability of modern architectures to self-regulate their depth. Our paper introduces a novel technique to enable self-regulation in neural network architectures, overcoming these depth-related performance challenges.

Novel Contributions.
The main contribution of this paper is introducing and testing GloNet, an explainable-by-design layer that can be superimposed on any neural network architecture, see Section <ref>. GloNet's key feature is its capacity to self-regulate information flow during training. It achieves this by reducing the influence of the deepest layers to a negligible level, thereby making the training more stable, preventing issues like vanishing gradients, and making the network trainable irrespective of its depth, see Section <ref>.

These self-regulation capabilities of GloNet lead to several significant benefits:
* Faster training: GloNet trains in half the ResNet time while achieving comparable performance. Beyond the depth threshold where ResNet begins to degrade, GloNet trains in less than half the time and outperforms ResNet.
* ResNet alternative: The inability of ResNet-based architectures to self-regulate depth makes GloNet a preferable option, particularly for applications requiring very deep architectures.
* No NAS needed: GloNet networks inherently find their effective depth, eliminating the need for computationally expensive Network Architecture Search methods to determine optimal network depth.
* More controllable efficiency/performance trade-off: Layers can be selectively discarded to boost efficiency, allowing a controlled trade-off between efficiency and performance, optimizing the network for specific requirements.

§ NOTATION AND MODEL DEFINITION
A feedforward neural network is described iteratively by a sequence of L blocks:
x⃗_l+1 = 𝒢_l(x⃗_l), l=0,…,L-1,
where x⃗_0 denotes the input vector, and x⃗_l+1 is the output from the l-th block. In this context, a "block" is a modular network unit, representing a broader concept than a traditional "layer". Each block function 𝒢_l typically merges a non-linearity, such as ReLU, with an affine transformation, and may embody more complex structures, like the residual blocks in ResNet.

At the end of the sequence (<ref>), a classification or regression head ℋ is applied to x⃗_L. For instance, a convolutional architecture could use a head with average pooling and a fully connected classifier. The fundamental principle in deep learning is that 𝒢_0,…,𝒢_l-1 hierarchically extract meaningful features from the input x⃗_0, which can then be leveraged by computing the output of the network:
ŷ = ℋ(x⃗_L).
When the blocks in (<ref>) are simple layers like an affine map followed by a non-linearity (this description comprises, for instance, fully connected and convolutional neural networks), all features extracted at different depths are exposed to the head by a single feature vector x⃗_L that has gone through several non-linearities. This fact leads to several well-known drawbacks, like vanishing gradients or difficulty in learning when the task requires more direct access to low-level features. When using ReLU and shared biases, some low-level information could actually be destroyed. Several excellent solutions have been proposed to these drawbacks, like for instance residual networks <cit.>, DenseNets <cit.>, and preactivated units with non-shared biases <cit.>.

We propose an alternative solution: a modification to (<ref>), consisting of a simple layer between the feature-extraction sequence and the head, computing the sum of every feature vector.
The architecture is designed to receive information uniformly from all parts of the network, regardless of their level of abstraction:
x⃗_l+1 = 𝒢_l(x⃗_l), l=0,…,L-1
x⃗_L+1 = ∑_l=1^L x⃗_l = ∑_l=0^L-1𝒢_l(x⃗_l)
ŷ = ℋ(x⃗_L+1)
When feature vectors have different dimensions, adaptation to a common dimension is required before the sum, as happens in ResNet. If x⃗_l∈ℝ^n_l, one can use embeddings into ℝ^max{n_l} to maximally preserve information, and embeddings or projections into ℝ^n_L to keep the same parameters for the head. We refer to the additional layer in (<ref>) as a GloNet layer, because all the features x⃗_l that without GloNet would be preserved only "locally", up to the next x⃗_l+1 = 𝒢_l(x⃗_l), now appear as summands in the "global" feature vector x⃗_L+1 = ∑_l=1^L x⃗_l.

GloNet provides skip connections solely to the head, and intermediate blocks are not required to learn a residual map, see (<ref>) and Figure <ref>. This ensures direct and simultaneous backpropagation pathways from each block, enabling uniform information distribution across the network to the head. Due to SGD-like training's preference for shorter paths <cit.>, GloNet is expected to accumulate information mostly in the initial blocks rather than the latter ones, reducing the influence of the deepest layers to a negligible level. Consequently, GloNet self-regulates its depth during training, rendering it akin to an "infinitely deep" architecture. Empirical evidence supporting this claim is presented in Section <ref>.

Given the linearity of the GloNet layer, the global feature vector x⃗_L+1 provides the contribution that each layer makes to the neural network's prediction. In a feedforward neural network, GloNet enables the analysis of an ensemble of networks represented by the outputs of each network block. This ensemble, whose members comprise the blocks (𝒢_l)_l=0,…,k for increasing k, functions as a collection of individual neural networks, with GloNet integrating their outputs through a linear (currently unweighted) combination. This architecture allows each sub-network to specialize in learning features at varying levels of granularity, from low-level in early blocks to more complex, large-scale features in later blocks. The linear nature of the GloNet layer facilitates the attribution of importance scores to these features, effectively creating an 'explainable-by-design' tool <cit.>.
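As a concrete illustration of equations (<ref>), the following Keras-style sketch builds a fully connected GloNet of the kind used in the experiments below. It is our own reconstruction from the textual description (block structure, widths, and initializers are assumptions), not the authors' released code; note the absence of batch normalization and of an identity backbone.

```python
import tensorflow as tf

def build_glonet(input_dim, n_blocks=10, width=16, output_dim=1):
    """Fully connected GloNet: initial linear map to the block width, a stack of
    pre-activation blocks (ReLU followed by an affine map), a GloNet layer that
    sums every block output, and a linear head."""
    inputs = tf.keras.Input(shape=(input_dim,))
    x = tf.keras.layers.Dense(width)(inputs)                 # map x_0 to the block width
    block_outputs = []
    for _ in range(n_blocks):
        h = tf.keras.layers.Activation("relu")(x)
        x = tf.keras.layers.Dense(width, kernel_initializer="he_normal",
                                  bias_initializer="zeros")(h)
        block_outputs.append(x)                              # x_1, ..., x_L
    glonet = tf.keras.layers.Add()(block_outputs)            # x_{L+1} = sum_l x_l
    outputs = tf.keras.layers.Dense(output_dim)(glonet)      # head H
    return tf.keras.Model(inputs, outputs)

# e.g. a regression setting with 14 input features and one output
model = build_glonet(input_dim=14, n_blocks=10)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01), loss="mse")
```

The same pattern extends to other architectures: for the Vision Transformer variant discussed later, the encoder outputs would play the role of the block outputs being summed.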
§ RELATED WORK
In a ResNet with an activation-free backbone, also known as ResNetv2 <cit.>, blocks 𝒢_l are defined as 𝕀 + ℱ_l, where 𝕀 is the skip connection and ℱ_l is the "residual block", computing two repetitions of conv∘ReLU∘BN, where BN denotes batch normalization. Unrolling the ResNetv2 equation x⃗_l+1 = x⃗_l + ℱ_l(x⃗_l) from any block output x⃗_l (see <cit.>) gives:
x⃗_L = x⃗_l + ∑_i=l^L-1ℱ_i(x⃗_i)
This equation shows that ResNet, much like GloNet, passes the output x⃗_l of each block directly to the head. However, a key distinction lies in how the head accesses these outputs: ResNet requires distinct pathways for simultaneous access to different outputs, whereas GloNet's head achieves simultaneous access to each output via the GloNet layer. This capability of simultaneous access may contribute to GloNet's additional features compared to ResNetv2, as explored in Section <ref>.

Moreover, GloNet is faster than ResNet (not requiring batch normalization), and can be seen as an ensemble computing the sum of models of increasing complexity, giving an explainable-by-design model (differently from ResNet).

Unlike DenseNet <cit.>, which aggregates each block with all subsequent blocks through concatenation, GloNet connects each block only to the final layer, with summation for aggregation. This approach avoids the parameter explosion given by concatenation in DenseNet, and maintains the original complexity of the model.

Finally, GloNet can be viewed as a network with early exits at every block, adapted and aggregated before the head. See <cit.> for the early exit idea.

§ IMPLEMENTING GLONET
Implementing GloNet within a certain architecture may not always be as straightforward as described in Section <ref>. In this section we describe how GloNet can be implemented in common scenarios.

GloNet and Skip Connections. If the original architecture includes skip connections (such as ResNet or DenseNet), these should be removed and replaced with the GloNet connection. Otherwise, with GloNet placed on top of skip connections, training would not converge, as we would be adding the identity to the output multiple times. Note that GloNet provides skip connections only to the GloNet layer, and does not ask the blocks to learn a residual map.

GloNet and Batch Normalization. Batch normalization plays a major role in enhancing and stabilizing neural network training by normalizing the inputs of each block. Its positive impact is widely acknowledged, though the specific mechanisms of its benefits are still debated <cit.>. Despite these advantages, batch normalization poses challenges, particularly in its interaction with GloNet. GloNet is designed to dynamically regulate the outputs of different blocks, based on their contribution to the task. It makes the outputs of deeper blocks negligible, a strategy that conflicts with the objectives of batch normalization, which strives to maintain a consistent mean and variance for block inputs. Consequently, batch normalization, and similarly layer normalization, should be removed prior to the GloNet layer's aggregation. GloNet introduces an alternative form of regularization, which, as we demonstrate in Section <ref>, is capable of achieving comparable performance without the need for batch normalization.

GloNet into Residual Networks. Architectures using residual blocks feature both skip connections and normalization. Once skip connections and normalizations are removed from a ResNetv2 block computing 𝕀 + affine map∘ReLU∘BN∘affine map∘ReLU∘BN, one is left with two simpler blocks affine map∘ReLU, and each of those blocks can potentially be aggregated into the GloNet layer. In this case, one ResNetv2 block corresponds to two simpler blocks. This is what we do in this paper, and for this reason when GloNet has n blocks, its equivalent ResNetv2 architecture has n/2 blocks.

GloNet into Vision Transformers. When using more complex architectures like transformers, several different choices can be made, each one potentially affecting the final performance of the GloNet-enhanced model. In this initial exploration of GloNet, we propose a straightforward integration with a Vision Transformer (ViT) <cit.> adapted to CIFAR-10. The image is segmented into 4×4 patches, a class token is encoded and then concatenated with the patch encodings and a positional embedding.
This series is then fed into a cascade of n encoders with 4 attention heads each, whose outputs are accumulated into a GloNet layer and passed to a classification head. In our experiment, we compared n=4, 5 and 6.

§ EXPERIMENTS
In this section we provide experiments supporting the core claims of our paper, as stated in the Introduction. In particular, we focus on showing that GloNet trains much faster than ResNet, that GloNet's performance is on par with ResNet's, that GloNet can self-regulate its depth, that GloNet does not need batch normalization, and that GloNet is virtually immune to depth-related problems. All the experiments can be reproduced using the source code provided at <cit.>.

SGEMM Fully Connected Regression. We experimented with a regression task from the UCI repository <cit.>, focused on predicting the execution time of matrix multiplication on an SGEMM GPU kernel. See <cit.> for details on this task and the SGEMM dataset. Since GloNet has skip connections, to obtain a fair comparison we used a ResNetv2-like baseline. For comparison, we also used a vanilla baseline, identical to the ResNetv2 baseline but without skip connections. Moreover, since GloNet does not use batch normalization (in fact, GloNet's self-regulation capabilities can be hampered by normalization, see Section <ref>), we also experimented with a vanilla and a ResNetv2 baseline with the BN layer removed. GloNet and the corresponding baselines (denoted by vanilla, ResNetv2, vanilla-no-BN, and ResNetv2-no-BN in figures) have a similar number of parameters, the only difference being given by the trainable BN parameters. All blocks have 16 units. All models start with a linear layer mapping the 14-dimensional input to ℝ^16, and end with the head, a linear layer with 1 unit. GloNet models have an additional GloNet layer before the head, with no additional parameters.

The number of blocks ranges over 10, 24, 50, 100, and 200 (halved for the ResNets, because every ResNet block comprises twice the layers of the corresponding non-ResNet block).

For training, we used the MSE loss, L^2-regularization with a coefficient of 10^-5 (we also tried 10^-4 without improvements), the Adam optimizer with a batch size of 1024, learning rate set to 0.01, the He normal initializer for weights, and the zero initializer for biases.

We trained all models for 200 epochs, the point at which the baselines plateaued, potentially favoring them over GloNet. The first thing to notice is that with GloNet training takes almost half or less than half the time of ResNet, see Table <ref>. This is because GloNet, differently from ResNet, does not need batch normalization.

After 200 training epochs, we compared the best test errors and learning curves across different block configurations, see Figure <ref> for learning curves and Table <ref> for best test errors. At 200 blocks, GloNet surpassed ResNet in both best test error and learning curve shape. See the caption of Figure <ref> for details on the results of this experiment.

The training shapes in Figure <ref> suggested a unique aspect of GloNet not present in the ResNetv2 baseline. GloNet's training was unaffected by the increasing depth, as shown by the shape of the learning curve, which remained consistent whether the network had 10, 24, 50, 100, or 200 blocks. On the contrary, ResNet's learning curve became flatter as depth increased.

To further explore this feature, GloNet was tested with even deeper models (600 and 1000 blocks), and compared against the corresponding ResNetv2 baseline.
Even at these substantial depths, GloNet's learning curve maintained its shape, as shown in Figure <ref>. Moreover, GloNet's performance remained stable across these varying depths, maintaining a best test error of around 0.02 regardless of the number of blocks (10, 24, 50, 100, 200, 600, or 1000), see Table <ref>.In contrast, ResNet's showed a clear decline as the network depth increased. While its best test error remained around 0.02 up to 200 blocks, this error increased to 0.04 and almost 0.05 at 600 and 1000 blocks, respectively, see again Table <ref>. As happened with 100 and 200 blocks, the ResNet learning curve was flatter, diverging from GloNet's more consistent curve shape as depth increased.GloNet accumulates information in the first few blocks and uses only the required capacity for a specific task and architecture, leading to minimal output from subsequent blocks, see Remark <ref>. This is not observed in baseline models with or without batch normalization, see Figure <ref>, and likely contributes to GloNet's stable performance as network depth increases, in contrast to the degradation observed in the baseline models under similar depth conditions.MNIST Fully Connected Classification. To confirm that GloNet automatic choice of optimal depth and GloNet training resilience to depth werenot associated to the particular SGEMM regression task, we performed a series of experiments with identical architecture on a completely different task: image classification with MNIST. Although using fully connected architectures for image classification is generally not the best approach, with this task we have been able to significantly increase the number of input features, which theoretically could pose a greater challenge to models that are not very deep.The only difference from the architecture used in SGEMM is the head, which in this case is a fully connected layer followed by a SoftMax layer on 10 classes. We tested architectures with 6, 10, 24, 50, 60, 80, 100, and 200 blocks for GloNet and a convolutional baseline, halved for the corresponding ResNet baseline.Figure <ref> shows that GloNet automatically chooses the optimal number of blocks. However, notice that in this case, differently from Figure <ref>c, ResNetv2 outputs show a decreasing shape. This is probably due to the trainable parameters of the batch normalization, that in this case are able to force a small mean and variance on the last blocks. This indicates that also with batch normalization the network struggles to self-regulate its depth. Figure <ref> confirms GloNet resilience to an increasing depth, and shows a severe performance degradation of ResNetv2 when depth goes above 50 blocks. CIFAR10 Convolutional Classification. We further experimented with a ResNet20 on CIFAR10. ResNet20 is a ResNetv2 with 3 stages of 3 residual blocks each, described in <cit.>. We compared a ResNet20 architecture with its GloNet version, obtained by removing the backbone and adding a GloNet layer before the classification head, as detailed in Section <ref>. We trained for 200 epochs. Learning curves are completely overlapping, with best test errors 91.12% and 91.08% for ResNet and GloNet respectively. This experiment shows that also with convolutional architectures, GloNet performs on par with the traditional ResNet architecture, despite taking half the time for training.GloNet for Vision Transformer Classification. A Visual Transformer (ViT) is a transformer applied to sequences of feature vectors extracted from image patches <cit.>. 
The outputs of the n encoders can be accumulated into a GloNet layer before going to the classification head; see Section <ref> for details. In this experiment we compare ViT, with and without GloNet, on CIFAR-10, with 4, 5 and 6 encoders. Training plateaus at around 500 epochs, and final accuracies align with those reported in the literature. With 4 and 6 encoders, accuracies overlap for ViT and GloNet-ViT. With 5 encoders, GloNet-ViT appears to improve over ViT, see Figure <ref>. ViT best accuracies are 0.707, 0.709, and 0.725, and GloNet-ViT best accuracies are 0.709, 0.727, and 0.729, for 4, 5, and 6 encoders, respectively. This is a proof-of-concept experiment showcasing the robustness and versatility of GloNet for complex architectures like transformers.

Controllable Efficiency/Performance Trade-Off. In an experiment to demonstrate how GloNet can be used to choose an optimal efficiency/performance trade-off, we trained a 50-block GloNet fully connected architecture on MNIST for 200 epochs. After training, we progressively removed the last block, adjusted the GloNet layer accordingly to sum fewer blocks, and evaluated the shallower model without retraining. As shown in Figure <ref>, removing up to 42 blocks did not significantly impact accuracy, illustrating GloNet's ability to balance efficiency and performance.

§ CONCLUSIONS

We introduce GloNet, a method designed to augment existing architectures without adding complexity or reducing performance. It effectively renders the architecture resilient to depth-related learning issues. As an alternative to ResNet, GloNet offers advantages without any disadvantage: it achieves similar training outcomes in nearly half the time at depths where ResNet remains stable, and maintains consistent performance at greater depths where ResNet falters.
http://arxiv.org/abs/2311.15947v1
{ "authors": [ "Antonio Di Cecco", "Carlo Metta", "Marco Fantozzi", "Francesco Morandin", "Maurizio Parton" ], "categories": [ "cs.LG", "cs.NE" ], "primary_category": "cs.LG", "published": "20231127155420", "title": "GloNets: Globally Connected Neural Networks" }
Large vision-language models (LVLMs) suffer heavily from hallucination, occasionally generating responses that plainly contradict the image content. The key problem lies in their weak ability to comprehend detailed content in a multi-modal context, which can be mainly attributed to two factors: the training data and the loss function. The vision instruction dataset primarily focuses on global descriptions, and the auto-regressive loss function favors text modeling over image understanding. In this paper, we bring more detailed vision annotations and more discriminative vision models to facilitate the training of LVLMs, so that they can generate more precise responses without encountering hallucination. On one hand, we generate image-text pairs with detailed relationship annotations from the panoptic scene graph dataset (PSG). These conversations pay more attention to detailed facts in the image, encouraging the model to answer questions based on multi-modal contexts. On the other hand, we integrate SAM and a mask prediction loss as auxiliary supervision, forcing the LVLMs to identify context-related objects, so that they can generate more accurate responses, mitigating hallucination. Moreover, to provide a deeper evaluation of hallucination in LVLMs, we propose a new benchmark, RAH-Bench. It divides vision hallucination into three different types that contradict the image with wrong categories, attributes or relations, and introduces the False Positive Rate as a detailed sub-metric for each type. On this benchmark, our approach demonstrates a +8.4% enhancement compared to the original LLaVA and achieves widespread performance improvements across other models.

§ INTRODUCTION

Recently, large language models (LLMs) <cit.> have achieved significant successes in the field of Natural Language Processing (NLP). Owing to the generative pretraining process <cit.> on large amounts of text corpora, these models have strong capacities on different language tasks. They are able to comprehend complex text inputs and provide flexible responses when interacting with humans. The success in text inspires researchers to focus on understanding inputs of other modalities, such as images. As a result, Large Vision-Language Models (LVLMs) <cit.> have appeared. LVLMs use a pretrained visual encoder <cit.> to extract image features, and align them with an LLM via multimodal pretraining and instruction tuning. This training process makes it possible for LVLMs to conduct a complex conversation based on image content. Even though LVLMs inherit strong feature representation and generation capabilities from up-to-date LLMs <cit.> and image models <cit.>, they still generate unsatisfactory responses sometimes. One main problem is hallucination. In the context of LLMs, hallucination refers to a phenomenon in which the generated outputs are detached from, or even violate, the provided inputs. These responses are generated by merely following the learned patterns from the training corpus, neglecting factual and accurate information in the provided text <cit.>. When the additional input image is taken into consideration, LVLMs might still generate outputs contrary to the image content.
Since the image itself provides sufficient content to conduct a wide range of visual tasks <cit.>, the main problem is that the model lacks a comprehensive understanding of the multi-modal context. As shown in Figure <ref>, LVLMs have the capacity to describe an image or respond correctly to text questions, but generate hallucinations when answering directly according to the image. We refer the inconsistency between the response and the given image as vision hallucination.Hopefully, there appears some works trying to solve this problem. POPE <cit.> provides a new benchmark to evaluate object hallucination. LRV-Instruction <cit.> introduces a more robust instruction dataset. Woodpecker <cit.> performs inference one more time, so that it can correct hallucination based on more-detailed text prompts.These methods mainly address hallucination from the aspect of text. On the contrary, we suggest improving the model's ability to comprehend the spatial structure and detailed relationships in the multi-modal context is also important.However, we may wonder: with such abundant information in an image, why cannot a LVLM understand properly and generate the right response? There are two causes that LVLMs miss the crucial features: lack of fine-grained alignment visual annotations and insufficient supervision to explicitly learn visual structures. On one hand, existing visual instruction datasets <cit.> generated by LLMs tend to focus on global descriptions, and contain mostly positive descriptions. These training data can hardly cover all potential statements about the input images, especially those misleading questions. On the other hand, LVLMs are supervised with the next-token prediction loss, which is inherited from NLP to model the dependency among word tokens. It is unable to model the visual relationships and understand spatial regions in the image. Thus, it is hard to guarantee that LVLMs can answer a specific question according to the input image.Therefore, in this paper, we construct a fine-grained vision instruction dataset based on Panoptic Scene Graph (PSG) <cit.>, called Relation-Associated Instruction (RAI-30k), which focuses on answering questions about detailed relations among instances. Other than standard dialogs, each instruction data in RAI-30k is also associated with one relation annotation in PSG, including mask annotations for related instances. With these additional annotations, we further supervise LVLMs with mask prediction loss by a state-of-the-art expert vision model, guiding LVLMs to focus on highly-related image content. More specifically, this is achieved by integrating SAM into the training of LVLMs. SAM receives the outputs from LVLMs, generates masks for instances associated with the instruction data. With the additional supervision from the mask prediction loss, LVLMs are encouraged to extract features that can better represents these crucial instances, thus generating more accurate responses and mitigating vision hallucination.In addition, the proposed method only operates the training pipeline. The LVLMs still follow their original manner in inference, but with more precise outputs generated.Moreover, in order to conduct a deeper evaluation on vision hallucination, we propose a new hallucination benchmark, called Relataion-Associated Hallucination Benchmark (RAH-Bench). It contains 3,000 interrogative sentences with corresponding images, and asks the model to judge if these detailed descriptions are consistent with image contents. 
To provide a detailed analysis on what mistakes LVLMs are more likely to make, all negative queries are divided into three types based on how they contradict the image: category hallucination, attribute hallucination and relation hallucination. For each type, we design a detailed sub-metric to reveal how vulnerable the model is to this specific hallucination.Our contributions are summarized as below: * We construct a fine-grained vision instruction dataset, RAI-30k. It contains multi-modal conversations focusing on specific vision relations in an image, enabling LVLMs to learn detailed vision features and spatial regions. * We propose using the visual supervision to guide the model to focus on corresponding objects, thus generating more accurate responses and eliminating hallucination.* We introduce a new hallucination benchmark, named RAH-Bench, which categorizes hallucinations into three distinct types and designs sub-metrics to enable more detailed analysis and assessment.§ RELATED WORKS §.§ Large Vision-Language ModelsInspired by the success of Large Language Models in NLP <cit.>, researchers are now developing a wide range of Large Vision Language Models that have strong ability to perform a wide range of tasks with both image and text inputs. These models usually consists of a strong vision encoder <cit.> and a trained LLM <cit.>, and bridge them with a simple linear layer <cit.> or q-former <cit.>. In order to conduct various vision-language tasks, LVLMs need to be pretrained on large-scale image-text pairs, and then finetuned on vision instruction datasets. Currently, most vision instruction datasets are constructed based on annotations in vision datasets <cit.>. They utilize fixed templates <cit.> or GPT4 <cit.> to generate diverse data. However, most of these datasets are construct in a straight-forward manner, resulting in positive conversations and global descriptions. On the contrary, we construct our dataset with explicit focuses on the images, leading to fine-grained conversations and a high quality dataset. §.§ Combine LVLMs and Vision ModelsIn order to accomplish more various tasks and provide detailed outputs, many recent works attempt to combine LVLMs with existing strong vision models. The most straight-forward solution is to let LVLMs generate commands to activate specific models <cit.>. These methods consider LVLMs and vision models as separate components, each with its inherent capabilities, and combine them with detailed rules. Recently, LISA <cit.> and ContextDET <cit.> propose to tune LVLMs and vision models together, so that the whole system has the capacity to flexibly predict detailed vision outputs based on the contexts. However, they mainly introduce new functions by appending additional models, while paying little attention on the abilities of LVLMs themselves. On the contrary, our method mainly concentrates on how to use existing vision models and annotations to enhance LVLMs, so that they can generate accurate text as responses, mitigating vision hallucination. §.§ Hallucination in LVLMsLVLMs often prone to hallucination. Current works takes two kinds of approaches to mitigate it. One of them is to enrich the context with additional text inputs <cit.>, akin to the use of external knowledge in LLMs <cit.>. The other strategy focus on constructing high-quality instruction datasets with negative samples <cit.> or iterative refinements <cit.>. 
In addition to these approaches, we suggest that other than vision annotations, existing models and loss functions for visual tasks could also facilitate the training of LVLMs.As to hallucination evaluation, current works usually provide overall metrics. POPE <cit.> performs binary classification tasks for easy assessment, while the questions in it share a consistent structural pattern. LRV-Instruction <cit.> scores the responses with GPT-4. The question set is diverse, however, utilizing LLMs leads to high cost and unstable results. In this paper, we propose a benchmark that is both diverse and easy to evaluate. Moreover, we offer some more detailed sub-metrics as well, to uncover specific vulnerabilities in LVLMs.§ METHOD To enhance the multi-modal context comprehension of LVLMs, we first construct a vision instruction dataset emphasizing detailed image analysis, and then utilize advanced vision models to facilitate LVLM training. It aims to improve LVLMs' ability to identify and interpret the key local regions in the image, thus generating more accurate responses. The following sections will elaborate on the development of RAI-30k dataset and the implementation of additional visual supervision separately.§.§ Relation-Associated Instruction Dataset Conventional methods for constructing visual instruction datasets typically involve seeding all available captions and object annotations directly into GPT-4 <cit.>. These approaches generally yield highly-consistent image-text pairs which are lack of distinctive focus and misleading context. In contrast, daily discussions often center around specific elements, facts, or events, supplemented by related details. Therefore, we adopts a more nuanced strategy to generate conversations from seed topics. In this section, we take a vision relation annotation in panoptic scene graph dataset (PSG) <cit.> as the a seed topic. The relation annotation comprises a subject, object, and their relation, providing detailed information to construct a single-round conversation that focuses exclusively on it.As shown in Figure<ref>, we employ GPT-4 <cit.> to generate conversations around seed relation annotations, using a pool of three prompt variations to generate different questions, including yes/no interrogatives and open-ended questions starting with what, how, where, etc. For each data sample, we randomly take one prompt template, fill the seed annotation in, and ask GPT-4 to generate a question-answer pair. This approach ensures a wide range of conversational scenarios, mirroring the complexity and variability of real-world interactions with images.To further enrich the conversations with more details and ensure that they conform to the input image, we include detailed annotations from vision datasets <cit.> in the prompts: * Captions of the entire image.* Descriptions of specific regions, especially those overlapping with the subject or object in the seed annotation.* A list of objects within the image, including their bounding boxes. The resultant dataset, Relation-Associated Instruction-30k (RAI-30k), encompasses 29,712 data samples, each containing an image, a question-answer pair and the corresponding seed relationship annotation. The seed relationship includes the binary mask annotations of the subject and the object. Several generated examples are demonstrated in Figure <ref>. 
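The generation loop just described can be summarized by the following sketch; the template texts, field names, and the ask_gpt4 client are placeholders for whichever prompt wording and GPT-4 interface are actually used.

import json
import random

# Two of the prompt styles: yes/no questions and open-ended questions.
PROMPT_TEMPLATES = [
    ("Given the relation '{subject} {relation} {object}' and the context below, "
     "write a yes/no question about this relation and its correct answer.\n{context}"),
    ("Given the relation '{subject} {relation} {object}' and the context below, "
     "write an open-ended question (what/how/where) about it and its answer.\n{context}"),
]

def build_context(seed):
    """Auxiliary annotations: image captions, overlapping region descriptions,
    and the list of objects with bounding boxes."""
    return "\n".join([
        "Image captions: " + "; ".join(seed["captions"]),
        "Region descriptions: " + "; ".join(seed["region_descriptions"]),
        "Objects with boxes: " + json.dumps(seed["objects"]),
    ])

def generate_sample(seed, ask_gpt4):
    prompt = random.choice(PROMPT_TEMPLATES).format(
        subject=seed["subject"], relation=seed["relation"],
        object=seed["object"], context=build_context(seed),
    )
    question, answer = ask_gpt4(prompt)
    return {
        "image_id": seed["image_id"],
        "conversation": [{"question": question, "answer": answer}],
        # The seed relation (with subject/object masks) is kept for the
        # auxiliary mask supervision described in the next subsection.
        "seed_relation": {"subject_mask": seed["subject_mask"],
                          "object_mask": seed["object_mask"]},
    }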
To enrich our vision instruction data, we also append the multi-round conversations of the same images from LLaVA-Instruct 80k <cit.> to augment each data sample.Overall, RAI-30k provides a diverse and fine-grained vision instruction dataset to train LVLMs for accurate, contextually relevant responses, effectively mitigating vision hallucination.§.§ Supervision with Visual AnnotationsIn order to generate precise responses, human would first identify the key elements in context before answering. LVLMs are supposed to perform better in this way similarly. In RAI-30k, we have collected the critical relationship annotation as a supplement for each question, with detailed binary masks for the subject and the object. These annotations can be utilized to explicitly guide LVLMs to attend to specific details in the image. Therefore, we integrate expert vision models and auxiliary mask prediction loss, including binary cross entropy and dice loss <cit.>, in the training stage of LVLMs, mitigating vision hallucination. We exemplify our approach by incorporating SAM <cit.> in vision instruction tuning. SAM is a versatile segmentation model, designed to interpret various types of prompts and generate binary masks. Take the training of LLaVA <cit.> as an example. We initialize LLaVA and SAM from their individually pretrained weights, and then attach SAM to facilitate instruction tuning of LLaVA. SAM takes the input prompts from LLaVA's output features, and computes mask prediction losses to guide LLaVA to attend to instances that are crucial to generate accurate responses.More specifically, other than text responses, the LVLM generates two additional feature vectors f_sub and f_obj in its output sequence, referring to the two instances. These features are conditioned on the input image x_img and question x_q. After processed through a linear layer g(·), these features are sent into SAM as the prompts to predict masks, and then be supervised with the ground truth masks in the seed relationship annotation. The whole process is shown in Figure <ref>.During training, we freeze most of the trained weight in LVLM and update LoRA <cit.> and the decoder in SAM for parameter-efficient tuning.f_sub, f_obj = LLM(x_img, x_q) Mask_sub = SAM(x_img, g(f_sub)) Mask_obj = SAM(x_img, g(f_obj)) By appending two special tokens [SUB] and [OBJ] to extract corresponding features before the prediction,the model is encouraged to identify the crucial instances based on only input context during training. Moreover, we modify the attention masks and position ids, so that these special tokens are processed alongside conventional tokens in a single forward pass. This modification does not have any influence on response generating in inference.After the training phase, SAM is discarded to ensure that the inference process remains unchanged. The LVLM generates text responses according to the input image and text tokens by conventional next token prediction. §.§ Relation-Associated Hallucination Benchmark In order to provide a detailed evaluation on vision hallucination, we introduce the Relation-Associated Hallucination Benchmark (RAH-Bench).RAH-Bench contains 3,000 yes-or-no questions with their corresponding images. The images are from COCO validation set, while the questions are generated by GPT-4. 
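For concreteness, one training step of the supervision scheme described in the previous subsection can be sketched as follows; all component interfaces (the LVLM with LoRA, the projection g, SAM's mask decoder, and the batch fields) are placeholders, and the sketch only illustrates how the [SUB]/[OBJ] features are turned into SAM prompts and supervised with binary cross entropy and dice losses.

import torch
import torch.nn.functional as F

def dice_loss(logits, target, eps=1.0):
    prob = logits.sigmoid().flatten(1)
    target = target.flatten(1)
    inter = (prob * target).sum(-1)
    return 1.0 - (2.0 * inter + eps) / (prob.sum(-1) + target.sum(-1) + eps)

def training_step(lvlm, g, sam_decoder, batch, w_mask=1.0):
    # Standard auto-regressive loss on the conversation tokens.
    out = lvlm(images=batch["image"], input_ids=batch["input_ids"],
               labels=batch["labels"], output_hidden_states=True)
    lm_loss = out.loss

    # Output features at the positions of the [SUB] and [OBJ] tokens.
    h = out.hidden_states[-1]
    idx = torch.arange(h.size(0))
    f_sub, f_obj = h[idx, batch["sub_pos"]], h[idx, batch["obj_pos"]]

    # Projected features act as prompts for SAM's mask decoder; the predicted
    # masks are supervised with the ground-truth masks of the seed relation.
    mask_loss = 0.0
    for f, gt in ((f_sub, batch["subject_mask"]), (f_obj, batch["object_mask"])):
        pred = sam_decoder(batch["image_embedding"], prompt_embedding=g(f))
        mask_loss = mask_loss + F.binary_cross_entropy_with_logits(pred, gt) \
                              + dice_loss(pred, gt).mean()

    return lm_loss + w_mask * mask_loss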
When evaluating, we simple parse the responses into binary classification results like in <cit.>, so that quantitative precision, recall and F1 score can be calculated.Since LVLMs may generate responses that violate the image contents from different perspectives, we further divide the negative questions into three subsets to better reveal which questions are more likely to cause vision hallucination. Each subset contains 500 questions with misleading statements in different aspects, including: * Categorical Hallucination: LVLMs identify nonexistent object categories or incorrect background categories in the given image.* Attribute Hallucination: The object categories identified by LVLMs are accurate, while the descriptions of these objects' attributes (such as color, shape, material, content, etc.) are wrong.* Relation Hallucination: All objects and their attributes are described correctly, but the relationships among them (such as human-object interactions or relative positions) do not align with the actual image content. Figure <ref> presents some examples of different types. To reveal the susceptibility of LVLMs to different types of misleading queries, we introduce False Positive Rates (FP) for each subset, which represents the probability that this hallucination type would occur.FP = Num. of False Positives/Num. of Samples. Compared to previous benchmark in vision hallucination <cit.>, RAH-Bench has the following advantages. Firstly, RAH-Bench feeds GPT-4 with more visual annotations, leading to complex and diverse questions. Secondly, RAH-Bench blends truths and misleads in a single question, posing a greater challenge for distinction. Finally, the specific hallucination types and the additional sub-metric can provide a deeper evaluation on how LVLMs are vulnerable to different vision hallucination. § EXPERIMENTS §.§ Main ResultsIn this paper, we provide a data construction pipeline and a training algorithm to mitigate vision hallucination. These methods are model-independent, thus can be applied to most up-to-date LVLMs <cit.>. We conduct experiments with several different models to validate our efficacy. These LVLMs are different in base LLMs <cit.>, model sizes, and detailed structures of the adapter between the vision encoder and the LLM.As shown in Table <ref>, most existing LVLMs have high recall and relatively low precision, indicating the existence of vision hallucination. With FP metrics in the subsets of RAH-Bench, we can see that LLaVA and mPLUG-owl provide positive responses to more than half of the misleading questions. Moreover, 13B models do not necessarily perform better than 7B models. With our method equipped, the F1 score of LLaVA-13B is improved by +8.4% on RAH-Bench, and by +10.1%, +7.4%, and +3.9% with different evaluation settings on POPE <cit.>, indicating a geneal improvement across different metrics. As to mPLUG-owl, the F1 score is increased by +2.3%. This smaller improvement may be due to the fact that we tune all these models with the same training settings and hyper-parameters as in LLaVA, which may not be optimal for other LVLMs. Nevertheless, these experiments validate that our method is versatile to mitigate vision hallucination in different LVLMs.InstructBLIP <cit.> achieves significantly better performance than other models. This is mainly due to its comprehensive instruction dataset constructed from 20+ vision datasets with various annotations. This dataset already contains some detailed information. 
Even so, when testing on RAH-Bench, we find that InstructBLIP is more vulnerable to relation hallucination, resulting in a FP significantly higher than other types. Queries within this subset invariably combine accurate instance descriptions with incorrect relational details between them, therefore requiring reasoning at a higher semantic level. Therefore, we may conclude that InstructBLIP is better at identifying facts than complex reasoning. This can be enhanced with our method. Since InstructBLIP only supports single-round conversation, we only tune them on the generated question-answer pairs. As shown in Table <ref>, our tuning preserves the performance on the random and popular split within POPE, and gains a significant improve for the harder adversarial split (+0.8%/+4.6% for 7B/13B) and our RAH-Bench (+0.5%/+4.0%), which means our tuning improves the reasoning ability in InstructBILP and enables it to discriminate those easy-to-confuse statements.§.§ Ablation on Data ConstructionIn Section <ref>, we have introduced the data construction pipeline for RAI-30k. The whole pipeline can be simply summarized into two steps: generating relation-associated question-answer pairs, and append existing multi-round conversations in LLaVA-Instruct-80k.First we ablate on how to generate question-answer pairs. Providing more abundant annotations is important, since the seed relationships in PSG <cit.> only contain simple category names like people drive car. Data generated with these only would be less relevant to detailed image contents. As shown in Table <ref> L3, tuning with these uninformative data leads to degradation in model performance. Meanwhile, captions from COCO <cit.> and region descriptions from Visual Genome <cit.> are more detailed and informative. With them as additional context, GPT4 can pose questions that target specific fact in the given image. When testing with RAH-Bench, it leads to +6.9% improvement on F1 score, and False Positive Rates generally decreases for all hallucination types.In order to generate more diverse questions, we design three different prompts for GPT-4, corresponding to different question types: questions that should be answered with yes / no, and wh-questions formulated to elicit detailed information. Though the evaluation metric only requires binary classification results, by comparing L2 and L4 in Table <ref>, manually prompting GPT-4 to generate more diverse questions results in slightly higher F1 scores on all splits in POPE. We suggest that designing more question types (e.g. providing more prompts) will further improve LVLMs' performance, as the training data becomes more diverse. Finally, we investigate the impact of incorporating multi-round data from LLaVA-Instruct-80k <cit.> into our generated dataset. Our question-answer pairs predominantly concentrate on specific relationships, contrasting with the broader range of instances and common knowledge presented in LLaVA. As shown in L2 and L5 in Table <ref>, augmenting RAI-30k with more diverse conversational leads to a large improvement in performance on RAH-Bench (+6.2%), a benchmark with diverse questions, while a modest improvement on the more uniform POPE benchmark. §.§ Ablation on Vision SupervisionIn this section, we provide additional experiments to dive deeper into the function of SAM and the auxiliary mask prediction loss. 
As shown in Table <ref>, adding these vision components in training leads to 1.5% higher F1 score on RAH-Bench than tuning with RAI-30k with a conventional manner, indicating that the expert vision model helps alleviate vision hallucination. Continue training with more epochs would gain further improvements. However, excessive training epochs on this dataset could result in overfitting to the binary classification task, consequently impairing the general ability of LVLMs. We evaluate this ability with LLaVA-Bench (In-the-Wild) as in Table <ref>, which is a benchmark to assess models' robustness to different prompts. Training for one epoch does not affect the general capacities of LLaVA much, therefore we adopt this duration as the default training configuration.Moreover, by comparing the detailed metrics, we further find that after tuning, our model obtains a significant enhancement in its ability to generate detailed description, alongside a modest improvement in complex reasoning. This may be attributed to our fine-grained vision instruction dataset and mask prediction supervision, which provides additional evidences that the model trained with our method can better perceive detailed image contents and perform multi-modal deduction. §.§ Visualization To better demonstrate how our method helps LVLMs find crucial instances, we provide the following analysis.Since SAM is attached and LVLMs are forced to predict the right masks in the seed relation annotation, we may draw the masks out to see if the LVLMs do recognize the crucial items. As shown in Figure <ref>, LVLMs are able to attend to the key items associated with the question. If the question provides correct descriptions for the subject and object (e.g. right questions or relation hallucination), the model can identify them. If the question provides misleading questions about them (e.g. object hallucination and attribute hallucination), the model will attend to the content that can point out the contradiction, or just predict low scores on the whole output mask. The mask supervision loss encourages the LVLMs to generate more precise responses, and also makes their behavior more explainable. Note that we only visualize these images to visualize the model behavior. In practical inference, LVLMs just conduct conventional next-token generation. Overall, these illustrations demonstrate that our model can associate the provided text with image contents, and infer final responses according to them.§ CONCLUSIONIn this paper, we construct a fine-grained vision instruction dataset RAI-30k that focuses on specific details in images, and propose a methodology to enhance LVLMs with supervision from delicated losses in expert vision models. These could help mitigate vision hallucination in different LVLMs. Moreover, in order to provide a more detailed evaluation on vision hallucination, we develop RAH-Bench. This benchmark includes various categorized subsets specifically designed to assess the severity of different types of hallucination in LVLMs. We hope our work provides a solid foundation for further research in LVLM training and vision hallucination.While we have validated the efficacy of our method, it is important to acknowledge certain limitations in our research. Firstly, the dataset we constructed is not that large. Expanding it from more seed annotations could potentially enhance performance. 
Secondly, though we design a general method to facilitate vision instruction tuning with expert vision models, our training approach is somewhat coupled with the data format in RAI-30k, such as subject and object in relation annotations. In the future, we will evolve our method to accommodate a broader range of vision models and more versatile annotation formats.
http://arxiv.org/abs/2311.16479v1
{ "authors": [ "Zhiyang Chen", "Yousong Zhu", "Yufei Zhan", "Zhaowen Li", "Chaoyang Zhao", "Jinqiao Wang", "Ming Tang" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231127093002", "title": "Mitigating Hallucination in Visual Language Models with Visual Supervision" }
[email protected]@polimi.it Corresponding author: F. [email protected] [1]Centro de Modelamiento Matemático, Universidad de Chile, Av. Beauchef 851, Santiago, Chile [2]MOX – Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, Milano, 20133, Italy Modeling the behavior of biological tissues and organs often necessitates the knowledge of their shape in the absence of external loads. However, when their geometry is acquired in-vivo through imaging techniques, bodies are typically subject to mechanical deformation due to the presence of external forces, and the load-free configuration needs to be reconstructed. This paper addresses this crucial and frequently overlooked topic, known as the inverse elasticity problem (IEP), by delving into both theoretical and numerical aspects, with a particular focus on cardiac mechanics. In this work, we extend Shield's seminal work to determine the structure of the IEP with arbitrary material inhomogeneities and in the presence of both body and active forces. These aspects are fundamental in computational cardiology, and we show that they may break the variational structure of the inverse problem. In addition, we show that the inverse problem might be ill-posed, even in the presence of constant Neumann boundary conditions and a polyconvex strain energy functional. We then present the results of extensive numerical tests to validate our theoretical framework, and to characterize the computational challenges associated with a direct numerical approximation of the IEP. Specifically, we show that this framework outperforms existing approaches both in terms of robustness and optimality, such as Sellier's iterative procedure, even when the latter is improved with acceleration techniques. A notable discovery is that multigrid preconditioners are, in contrast to standard elasticity, not efficient, and domain decomposition methods provide a much more reliable alternative. Finally, we successfully address the IEP for a full-heart geometry, demonstrating that the IEP formulation can compute the stress-free configuration in real-life scenarios where Sellier's algorithm proves inadequate. Reconstructing relaxed configurations in elastic bodies: Mathematical formulation and numerical methods for cardiac modeling D. Riccobelli^2 January 14, 2024 ============================================================================================================================ § INTRODUCTION The objects we observe are rarely free from external mechanical stresses. For example, all bodies around us are subject to gravity. While such a force usually induces small displacements in stiffer materials, it can lead to large deformations in soft matter <cit.>. Furthermore, in biomedical applications, the shape of organs and tissues observed through medical imaging techniques are affected by the presence of mechanical forces that can significantly deform them. An important example in this respect is the heart: the presence of the ribcage and surrounding organs, as well as the blood pressure in the chambers, produce large deformations and it is not possible to observe its relaxed shape in-vivo. 
In fact, directly observing the configuration of an elastic body in the absence of external forces is far from being a trivial task.In nonlinear elasticity, the task of reconstructing the relaxed configuration of a body subject to mechanical loads, hereafter referred to as the inverse elasticity problem (IEP), is a long-standing and largely overlooked problem, which is briefly cited in the Truesdell and Noll book as the free shape problem <cit.>. The problem has received little attention from the continuum mechanics community: it has been originally addressed by Shield <cit.> for homogeneous bodies in the absence of body forces, and has been extended by Merodio and Ogden <cit.> to take into account body forces. Up to our knowledge, Shield's theory has never been extended to the inhomogeneous case, despite it being fundamental in several application areas, including computational cardiology, since the fiber direction changes within the myocardium. Shield's theory has been exploited as a tool to identify analytical solutions in non-linear elasticity <cit.>, but the structure of the IEP as a boundary value problem remains largely unexplored.The IEP has received some more attention in the scientific computing community <cit.>, where it is known as inverse design problem <cit.> or prestress problem <cit.>. Its role in the specific case of cardiac modeling, and in biomechanics in general, is pivotal, as a reliable identification of the relaxed configuration is fundamental to correctly describe the stress distribution in soft tissues <cit.>. A possible solution approach, based on a fixed-point algorithm, was proposed by Sellier <cit.> and allows for solving the inverse problem by leveraging only a solver for the direct problem. This approach is particularly attractive, as it allows to re-use existing software. However, when applied to real-life problems such as four-chamber cardiac geometries, it often presents convergence issues. To mitigate them, Sellier's method has been improved through adaptive continuation methods <cit.> and acceleration techniques <cit.>. We highlight that the Sellier's method is not only relevant for cardiac simulations, and it has indeed also been used for modeling the eyes <cit.>, aorta <cit.>, and brain <cit.>.In this work, we study the IEP, with a special focus on the context of cardiac modeling. Our scope is twofold: on one hand, we study the mathematical structure of the IEP, extending Shield's theory to the case of inhomogeneous bodies subject to active forces.On the other hand, we thoroughly characterize this problem numerically for increasing levels of complexity and compare a direct numerical approximation of the latter with the Sellier method in terms of robustness with respect to external loads and its optimality. The paper is organized as follows. In Section <ref>, we review some basic facts of non-linear elasticity and we derive the IEP, together with some remarks on the mathematical structure and some elementary examples.In Section <ref>, we derive the weak formulation for both the direct and the inverse elasticity problems. In Section <ref>, we show with simple examples the mechanisms through which the IEP problem can give rise to self-intersections. In Section <ref> we describe all the algorithms that we consider for this study, which are (a) the Sellier method, (b) the Aitken accelerated Sellier method, (c) the Anderson accelerated Sellier method, and (d) the direct numerical approximation of the IEP. 
In Section <ref> we provide several numerical tests with the scope of (a) validating our theoretical claims, (b) characterizing the computational burden of IEP, and (c) testing the methods in realistic cardiac contexts. We conclude our work in Section <ref>.§ PROBLEM DESCRIPTIONIn this section we describe both the direct elasticity problem (DEP) and the inverse elasticity problem (IEP). For the latter, we show how it can be re-cast in terms of the Eshelby tensor, which will provide a way to guarantee the existence of solutions of the IEP under some special conditions. §.§ The direct problem of non-linear elasticityWe assume that a body occupies a given region Ω_0 of the three dimensional Euclidean space 𝔼^3. Let Ω⊂𝔼^3 be the current configuration of the body, which is given by a deformation field χ⃗ such that Ω=χ⃗(Ω_0). Specifically, the current position of the generic material point X⃗∈Ω_0 is denoted by x⃗, i.e. x⃗ = χ⃗(X⃗). The displacement field is thus defined asu⃗(X⃗)χ⃗(X⃗)-X⃗. We denote byandthe gradient operators with respect to X⃗ and x⃗, respectively. Similarly, we denote byandthe corresponding divergence operators. We introduce the deformation gradient Fχ⃗= I + u⃗, together with the local volume change described by J F. Let P be the Piola-Kirchhoff stress tensor, then under the assumption of quasi-static deformations, the balance of the linear momentum readsP + B⃗=0⃗, in Ω_0,where B⃗ is the density of body forces in the reference configuration. Such a balance equation can be also cast in current configuration by means of the Cauchy stress tensorT = J^-1 P F^T.More specifically, we getT + b⃗ = 0⃗,where b⃗ is the density of body force in the current configuration. The material and the spatial densities of force B⃗ and b⃗ are related by B⃗=Jb⃗. For illustrative purposes, in this section, we assume that the boundary ∂Ω_0 is composed of two distinct subsets, Γ^D_0 and Γ^N_0, such that on Γ^D_0 we prescribe the displacement field u⃗_D, and on we assume thatPN⃗ = t⃗_0on Γ^N_0,where the traction load t⃗_0 is a (known) vector field over Γ^N_0. The Eulerian counterpart of (<ref>) isTn⃗ = t⃗on Γ^N,where t⃗ = J^-1 F^T n⃗t⃗_0 and the normal vectors are related by n⃗ =F^-TN⃗^-1 F^-TN⃗.In the context of hyperelasticity, we postulate the existence of a strain energy density Ψ=Ψ(X⃗,F). Thus, by means of the Clausius-Duhem inequality, we obtainP =P(X⃗,F) = ∂Ψ/∂ F, P_ij = ∂Ψ/∂ F_ij.In the next section, we discuss the inverse counterpart of the problem described in this section, the so called inverse elasticity problem (IEP). §.§ The inverse elasticity problem In solid mechanics, the DEP consists in reconstructing the current configuration given the reference configuration by solving (<ref>), complemented by appropriate boundary conditions and by the constitutive assumptions on the material response. In what follows we are interested in the IEP instead: the reconstruction of the relaxed configuration Ω_0 given the current configuration Ω and the external loads. Consider the inverse deformation = χ⃗^-1, so that X⃗ =(x⃗) and the inverse displacement is defined similarly to u⃗ as (x⃗)= (x⃗) - x⃗. 
The fields u⃗ andare related through the deformation fields asu⃗ = -∘χ⃗,=-u⃗∘.In what follows, for the sake of conciseness, we will omit the composition with χ⃗ and , when this will be clear from the context, and we will simply write, for instance, u⃗ = -.Unless specified differently, we restrict our attention to the situation in which the reference configuration coincides with the relaxed configuration, namely if F is the identity I, we haveP(X⃗,I) =0.We highlight that this might be restrictive, especially for living tissues, for which a relaxed configuration might not exist. Indeed, a variety of processes, such as growth <cit.>, active phenomena <cit.>, and plastic deformations <cit.>, might produce local distortions that are geometrically incompatible. This leads to the generation of a stress state in the body even in the absence of external loads. The correct identification of the relaxed state of elastic bodies subject to these phenomena require specific treatments which go beyond the scope of the present article. Nonetheless, a remarkable case in which the theory described in this Section can be directly applied is the active stress approach, a method usually exploited to model contractility in muscle tissue. Such aspects are treated in Section <ref>. We denotethe inverse deformation gradient by == I +. The two deformation gradient tensors are related by =F^-1.From (<ref>), the IEP can be cast as findingsuch that{ T(x⃗, ^-1) + b⃗=0⃗in Ω,T(x⃗, ^-1)n⃗ =t⃗on Γ^N,= - u⃗_Don Γ^D. .It is well known that the direct problem has a variational structure, where the displacement field must minimize the functionalℱ[u⃗]=∫_Ω_0Ψ(X⃗,F) -∫_Γ^N_0t⃗_0·u⃗ Ṣ-∫_Ω_0B⃗·u⃗ .We shall assume that Ψ is polyconvex, namely there exists a convex function g:^+(ℝ^3)×^+(ℝ^3)×ℝ^+→ℝ∪{+∞} such thatΨ( F) = g( F,F, J).This condition,plus some growth conditions <cit.>, guarantee that the DEP (<ref>)-(<ref>) admits a solution represented by a minimum of the functional (<ref>).In what follows, we will show that the situation is more complex for the inverse problem (<ref>).§.§.§ Shield's transformation and convexity properties Under the assumption of material homogeneity, the variational structure of the problem follows from Shield's transformation <cit.>. Such a transformation leads to an equivalent formulation of (<ref>), where the Eshelby stress tensortakes the place of the Cauchy stress . As noticed by Chadwick <cit.>, such a correspondence suggests a duality between the Eshelby and the Cauchy stress tensors. In this section, we expand Shield's initial findings to encompass inhomogeneous materials, employing a methodology akin to that elucidated in <cit.>. A simple change of variable shows that∫_Ω_0Ψ(X⃗, F)= ∫_Ω J^-1Ψ(χ⃗^-1(x⃗), F) .Thus, we introduce the dual energy density =(x⃗, ), defined as(x⃗, )=JΨ((x⃗), ^-1).Equation (<ref>) is the Shield transformation of Ψ. We then introduce the spatial Eshelby stress Σ, defined asΣ∂/∂=J^-T(Ψ I- P^-T) = F^-T( ΨI-T).Since/x̣_i (x⃗, (x⃗)) = ∂/∂ x_i +∑_h,k=1^3∂/∂F_hk∂F_hk/∂ x_i=∂/∂ x_i +∑_h,k=1^3Σ_hk∂F_hk/∂ x_i,and∂/∂ x_j(∑_h=1^3F_hiΣ_hj) = ∑_h=1^3(∂F_hi/∂ x_jΣ_hj + F_hi∂Σ_hj/∂ x_j)==∑_h=1^3(∂/∂F_hj∂F_hi/∂ x_j + F_hi∂Σ_hj/∂ x_j)==∑_h=1^3(∂/∂F_hj∂F_hj/∂ x_i + F_hi∂Σ_hj/∂ x_j),from (<ref>) we can rewrite the momentum equation in terms of the Eshelby stress tensor:= (I) -(^T) = = -(^T) ==∂/∂x⃗+: - :-^T==∂/∂x⃗-^T,where ∂/∂x⃗ is the partial derivative of (x⃗, (x⃗)) with respect to its first argument. 
Thus, the problem (<ref>) is equivalent to the following one expressed in terms of the Eshelby stress tensor{+ β⃗= 0⃗in Ω,n⃗=σ⃗on Γ^N,= - u⃗_Don Γ^D, .where, from (<ref>) and (<ref>), we haveβ⃗= -^-T(b⃗+∂/∂x⃗),σ⃗= ^-T(n⃗ - t⃗). We remark that such a formulation holds for any constitutive assumptions and, up to our knowledge, it has not been reported elsewhere. Previous derivations all assume that the body is homogeneous, i.e.does not depend on x⃗. This is relevant for anisotropic materials, and more so in cardiac mechanics since the direction of the fibers usually depends on x⃗, rendering the term β⃗ in (<ref>) not zero even in the absence of body forces.We have summarized the main quantities involved in the direct and the inverse formulations in Table <ref>.In the particular case of homogeneous materials (∂/∂x⃗=0), problem (<ref>) can be written as a minimization problem assuming that β⃗ and σ⃗ are measurable functions depending only on space:β⃗: Ω→ℝ^3,σ⃗: Ω→ℝ^3.Then, under these assumptions, it can be seen that (<ref>) is equivalent to finding the stationary points of the functionalℱ[u⃗]=∫_Ω() -∫_Γ^Nσ⃗(x⃗)·u⃗(x⃗) Ṣ-∫_Ωβ⃗(x⃗)·u⃗(x⃗) .Remarkably, Shield's transformation (<ref>) preserves polyconvexity or rank-1 convexity if Ψ is polyconvex or rank-1 convex as well, see Proposition 17.6.2 in <cit.>. Ball's theorem on the existence of energy minimizers <cit.> can then be used to prove the existence of minimizers (<ref>).However, in practical applications, this is almost never the case. Indeed, the existence theorem can be applied if β⃗ and σ⃗ are functions of x⃗ only, as assumed in (<ref>), but, in most applications, β⃗ and σ⃗depend onas well, and (<ref>) is not anymore equivalent to finding the stationary points of (<ref>). This aspect can create some issues as shown in the following two examples. §.§.§ Elastic disk subject to an external pressure: non-existence of radially symmetric solutionsConsider now the circular domain Ω=B(O, r_o)⊂𝔼^2 representing the current configuration, where B(O, r_o) is the disk of center O and radius r_o. Let (R, Θ) and (r, θ) be the reference and the spatial polar coordinates of a generic point about the origin O. We assume that the sphere is subject to a pressure p_ext, so thatTe⃗_r = - p_exte⃗_r,where (e⃗_R, e⃗_Θ) and (e⃗_r, e⃗_θ) are the polar basis in the reference and in the current configuration, respectively. We assume that the material behaves as a compressible Neo-Hookean material, given by a strain energy density defined asΨ ( F) = μ/2( ( F^T F) - 2 log J - 2)+λ/2(log J)^2,where λ and μ are the (linear) Lame's parameters. Under such assumptions, the Cauchy stress tensor readsT = 1/J(μ FF^T + (λlog J - μ) I).We assume polar symmetry, so that r=r(R) and θ=Θ. The deformation gradient is given byF=r'e⃗_r⊗e⃗_R+r/Re⃗_θ⊗e⃗_Θ.Due to the symmetries of the deformation field, the balance of linear momentum (<ref>) reduces to the following ordinary differential equationṬ_rr/ṛ + T_rr-T_θθ/r=0.We observe that r=r_o R/R_o satisfies (<ref>). Here, R_o∈ℝ is the reference radius of the disk that can be found by enforcing the boundary condition (<ref>), obtainingR_o ^2/r_o^2λlog(r_o^2/R_o ^2)+(1 - R_o ^2/r_o^2) μ= - p_ext,whose solution can be expressed asR_o^2=(μ +p_ext)(λW_0(e^μ /λ (μ +p_ext)/λ))^-1r_o^2,where W_0 is the principal branch of the Lambert function (w=W_0(z) is the solution of w e^w=z, with z being a complex number), see Fig. <ref>. 
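As a sanity check of the expression above, the following sketch (with illustrative parameter values) evaluates R_o through the Lambert function and verifies that it satisfies the boundary condition (<ref>):

import numpy as np
from scipy.special import lambertw

mu, lam, r_o = 1.0, 2.0, 1.0  # illustrative Lame parameters and current radius

def R_o(p_ext):
    w = lambertw(np.exp(mu / lam) * (mu + p_ext) / lam).real
    return r_o * np.sqrt((mu + p_ext) / (lam * w))

def bc_residual(p_ext):
    s = (R_o(p_ext) / r_o) ** 2  # s = R_o^2 / r_o^2
    return s * lam * np.log(1.0 / s) + (1.0 - s) * mu + p_ext

for p in (-0.9, -0.5, 0.0, 1.0, 5.0):
    print(f"p_ext = {p:+.1f}:  R_o = {R_o(p):.4f},  residual = {bc_residual(p):.1e}")
# As p_ext decreases toward -mu, R_o tends to zero: the reference
# configuration collapses to a point for a finite external pressure.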
In the special case λ=0, the solution of (<ref>) is given byR_o^2 = μ+p_ext/μ.We observe that depending on the value of the applied pressure, the inverse problem may not have solutions with radial symmetry. In particular, R_o=0 is a solution for the inverse problem forp_ext=-μ.and for all the values of λ. This is a limit case where the reference configuration shrinks to a single point for a finite value of the external pressure. Thus, the IEP might be ill-posed even if we apply constant Neumann boundary conditions and if we choose a polyconvex strain energy density.§.§ Injectivity of the inverse deformation Usually, the deformation field is supposed to be bijective to avoid self-intersections of the body. However, requiring thatbe bijective might be too strict for the free body problem. Indeed, if we take two S-shaped pieces, we can imagine to glue them together, as shown in Fig. <ref>, and the relaxed state of the body requires a self-intersection. This situation is not uncommon and, as shown in the following paragraphs, applies to the heart as well.Thus, the relaxed configuration of the body can not in general be achievable in reality due to a global geometric incompatibility, i.e.is not injective. The analysis proposed in this section is still valid since χ⃗ is locally invertible due to the inverse function theorem (J=χ⃗>0).Some issues may arise if we want to describe a further deformation of Ω to Ω with Ω=χ⃗(Ω). In such a case, the relaxed configuration Ω_0 cannot be used as a reference configuration due to the self-intersection of the body. A possible strategy to solve such a problem is to take Ω as a reference configuration. In such a case, similarly to the multiplicative decomposition of the deformation gradient exploited to model plasticity <cit.>, growth and remodelling <cit.>, we introduceF_e = F^-1,where F=χ⃗ and F_e is the elastic distortion from the relaxed state to the configuration Ω. Then, the strain energy density per unit volume of Ω is given byψ(F) = ()Ψ(F_e).This underlines the possibility to treat local and global geometric incompatibilities in a unique way. The local incompatibilities manifest themselves as a non-compatible , i.e. there does not exist a functionsuch that = <cit.>, while a global geometric incompatibility is a non-injectivity of . The inclusion of local geometric incompatibilities of the relaxed state in the IEP is left as a possible future study. §.§ Active stress When modeling biological tissues, and the myocardium in particular, it is important to take into account the active forces that are involved during muscle contraction. One of these approaches is the so called active stress. Usually it is assumed that there exists a reference configuration Ω_0 that is stress free in the abscence of active forces <cit.>. Let Ψ_pas(X⃗, F), be the strain energy density of the passive material, and we introduce the passive first Piola-Kirchhoff stress tensors, defined asP_pas(X⃗,F) ∂Ψ_pas/∂ F.We require P_pas to satisfyP_pas(X⃗,I)= 0∀X⃗∈Ω_0.Let P_act(X⃗, F; T_a) be a tensor-valued function representing the active stress generated by the muslce fiber contractility. Here, T_a is a parameter describing the tension generated by the muscle, which is zero in the passive case. Hence, we assume that P_act(X⃗,F; 0)=0. 
The active stress approach envisages writing first Piola–Kirchhoff stress tensor asP = P_pas+P_act.In conclusion, when considering active materials, the IEP shall be regarded as that of finding the configuration assumed by the body in the absence of both passive and active stress, that is when P_pas = P_act =0.§.§.§ Active stress in cardiac mechanics problems As it is standard in the cardiac modeling literature, we consider an orthonormal triplet (, , ) of fibers, sheets, sheet-normal directions <cit.>. The fiber architecture plays a role in determining both the passive and the active response of the tissue. A common choice for the active stress tensor isP_act = S_f(F; T_a) F⊗/F + S_n(F; T_a) F⊗/F,where S_f and S_n are scalar functions. We remark that, if S_f and S_n are integrable with respect to F and F, respectively, and ψ_f and ψ_n are their primitives, we have <cit.>^,[As shown for skeletal muscles where a single family of fiber is present <cit.>, such an approach is equivalent to model the tissue as a mixture of passive and active elements, for details see <cit.>.]P_act = ∂ψ_f/∂F + ∂ψ_s/∂F = ψ_f'(F; T_a)/F F⊗ + ψ_n'(F; T_a)/F F⊗,where ' denotes the differentiation with respect to the first argument. Thus, the total (passive and active) stress tensor can be associated with the strain energy density:Ψ(X⃗,F; T_a) = Ψ_pas( F) + Ψ_f(F; T_a) + Ψ_n(F; T_a).In light of this observation, the procedure exposed in Section <ref> can be applied to the energy ψ defined in (<ref>).§ NUMERICAL APPROXIMATION In this section, we provide the details to solve numerically equation (<ref>). For this, we provide (i) the weak formulation, (ii) a detailed description of the IEP formulation for cardiac modeling, and (iii) a simple implementation of the IEP to show that an existing DEP solver can be turned into an IEP solver with little modifications.§.§ Weak formulationThe weak formulation of the DEP (<ref>) can then be stated as finding u⃗∈ V_0^u⃗_D such that∫_Ω_0 P(X⃗,F):v⃗+ ∫_Γ_0^Nt⃗_0·v⃗ Ṣ = ∫_Ω_0B⃗·v⃗ , ∀v⃗∈ V_0,where we have defined the trial and test function spaces:V_0^u⃗_D = {v⃗∈ H^1(Ω_0; ℝ^3)s.t. v⃗ = u⃗_D on Γ^D_0}, V_0 = {v⃗∈ H^1(Ω_0; ℝ^3)s.t. v⃗ = 0⃗ on Γ^D_0},and the weak formulation of the IEP can be stated as finding ∈ V^u⃗_D such that∫_Ω T(x⃗, ^-1):v⃗+ ∫_Γ^Nt⃗·v⃗ ṣ = ∫_Ωb⃗·v⃗ , ∀v⃗∈ V,whereV^u⃗_D = {v⃗∈ H^1(Ω; ℝ^3)s.t. v⃗ = -u⃗_D on Γ^D}, V = {v⃗∈ H^1(Ω; ℝ^3)s.t. v⃗ = 0⃗ on Γ^D},and we recall thatT(x⃗, ^-1) = J P((x⃗), ^-1)^-T.Similarly, an equivalent weak formulation of (<ref>) using the strong formulation (<ref>) reads∫_ΩΣ(x⃗, ):v⃗ + ∫_Γ^Nσ⃗·v⃗ṣ = ∫_Ωβ⃗·v⃗, ∀v⃗∈ V,where the Eshelby stress tensor Σ is defined in (<ref>). §.§ Cardiac inverse modelIn this section, we derive the IEP in the setting of cardiac modeling. Specifically, we account for the presence of an active stress and of cardiac fibers, and we consider boundary conditions often used to account for the interactions of the heart with the blood and with the surrounding organs.§.§.§ The direct problemGeometry Let Ω_0 be the stress-free configuration of the passive myocardium, and Ω the deformed configuration. We include in the domain also segments of the main vessels connecting the heart to the circulatory system (aorta, pulmonary artery and main veins). Typically, geometries available from medical imaging are acquired at diastasis, namely one of the last phases of diastole, right before the atrial kick (the beginning of atrial systole). 
This phase of the heartbeat is the one in which the heart is most stationary, thus facilitating the medical imaging acquisition process. Furthermore, being inertial forces negligible, a quasi-static assumption is well motivated at this stage. At diastasis, the blood pressures in the four chambers are relatively small, compared to the rest of the heartbeat, and active forces are also small. These features facilitate the IEP resolution. Constitutive assumptionsWe use the active stress approach described in Section <ref> to model myocardium contractility. Specifically, we adopt the active stress tensor as in (<ref>)-(<ref>), where we choose <cit.>ψ_f( F ) = T_aF ,ψ_s( F ) =T_aF ,wheredenotes the active tension, acting mainly in the direction of fibers . The fibers are not perfectly aligned due to fiber dispersion. This is modeled through the introduction of the constant parameter 0<<1 in (<ref>). The Piola-Kirchhoff stress tensor thus reads <cit.>P = ∂Ψ_pas(X⃗, F)/∂ F + [F ⊗/ F+F ⊗/ F ],We remark that, while solving the IEP for cardiac models, the active stress term is often neglected. However, it is important to notice that in any moment of the heartbeat (even at diastasis), a non-negligible amount of active tension is present, known as diastolic tension <cit.>. Hence, it is crucial to account for the active stress during the stress-free configuration recovery procedure.As a constitutive choice for the passive contribution to the strain energy density (see (<ref>)-(<ref>)), we use a function Ψ_pas(X⃗,F), where the explicit dependence on X⃗ is necessary to account for the anisotropic behaviour induced by the presence of muscle fibers. We use different expressions for Ψ_pas, which are explicitly specified in what follows. Boundary conditions We split the boundary ∂Ω_0 of the domain in different subsets, and apply boundary conditions depending on the interacting tissues within each subset. The internal boundaries of the myocardium are in contact with blood, which exerts a pressure on the myocardium. We consider = 6 cavities (namely the four cardiac chambers, the aorta and the pulmonary artery), in which the blood pressure can be reasonably considered constant. For each cavity i (with i = 1,…,), we denote its boundary in the reference configuration by i⊂∂Ω_0. We model the action of the blood on the cavity surfaces as a constant hydrostatic pressure p_i:PN⃗ = -p_i JF^-TN⃗on i.The epicardium, that is the external surface of the heart, is instead in contact with the pericardium, a tough fibroelastic sac containing the heart and the roots of the great vessels. We model the interaction of the heart with the pericardium by applying (anisotropic) linear springs on the pericardial surface  <cit.>:PN⃗= - (N⃗⊗N⃗) u⃗- ( I - N⃗⊗N⃗) u⃗on ,where the positive coefficientsandaccount for the elastic response of the pericardium and the surrounding organs in the normal and tangent direction, respectively.Finally, we apply homogeneous Dirichlet boundary conditions on the artificial boundaries originating where arteries and veins are truncated, which we denote by . Weak formulation In conclusion, the weak formulation of the DEP consists in finding u⃗∈ V_0 = {v⃗∈ H^1(Ω_0; ℝ^3)s.t. v⃗ = 0⃗ on } such that∫_Ω_0 P( F):v⃗ = -∑_i = 1^∫_i p_i JF^-TN⃗·v⃗ ṣ- ∫_u⃗·v⃗ ṣ- ∫_ ( - ) ( N⃗·u⃗) ( N⃗·v⃗) ṣfor all v⃗∈ V_0. §.§.§ The inverse problem By proceeding as above, we derive the following IEP formulation for the cardiac model: we look for ∈ V = {v⃗∈ H^1(Ω; ℝ^3)s.t. 
v⃗ = 0⃗ on } such that∫_ΩJ P(^-1)^-T :v⃗ =-∑_i = 1^∫_i p_i n⃗·v⃗ ṣ+ ∫_J^-Tn⃗·v⃗ ṣ+ ∫_ ( - ) J/^-Tn⃗( ^-Tn⃗·) ( ^-Tn⃗·v⃗) ṣfor all v⃗∈ V. §.§ Remarks on implementationIn this section, we show that it is very simple to modify a solver for problem (<ref>) to obtain a solver for problem (<ref>), at least when relying on an automatic differentiation engine. To show this, we will provide an example using the Unified Form Language (UFL) <cit.>, but the concepts are still valid for other equivalent systems. We start by looking at how a simple formulation of nonlinear elasticity could look like in Listing <ref>, which can be found among the demos at the documentation of FEniCS <cit.>.[language=python, frame=single, caption=UFL formulation of (<ref>)., label=fig:ufl forward, float] V = VectorFunctionSpace(mesh, 'CG', 1) u = Function(V) v = TestFunction(V) F = variable(grad(u) + Identity(3))# Compute original one to diff J = det(F) Cbar = J**(-2/3) * F.T * F E, nu = 1.0e4, 0.3 mu = Constant(E/(2*(1 + nu))) lmbda = Constant(E*nu/((1 + nu)*(1 - 2*nu))) psi = (mu / 2) * (tr(Cbar) - 3) + 0.5 * lmbda * (J-1) * ln(J) P = diff(psi, F) residual = inner(P, grad(v)) * dx - dot(Constant((0,0,-1)), v)* dx bcs = DirichletBC(V, Constant((0,0,0)), "on_boundary")solve(residual==0, u, bcs=bcs)To convert this formulation, we need to (i) push forward the objects in the integrals and (ii) recast the kinematic quantities in terms of the inverse displacement. For this we have to observe that the Piola-Kirchhoff tensor is still the derivative of Ψ with respect to F. This yields the formulation shown in Listing <ref>, where it can be seen that the difference between both codes is limited.[language=python, frame=single, caption=UFL formulation of (<ref>)., label=fig:ufl backward, float] V = VectorFunctionSpace(mesh, 'CG', 1) u_hat = Function(V) v = TestFunction(V) F_hat = Identity(3) + grad(u_hat)# Inverse tensor for inverse problem J_hat = det(F_hat) F = variable(inv(F_hat))# Compute original one to differentiate J = det(F) Cbar = J**(-2/3) * F.T * F E, nu = 1.0e4, 0.3 mu = Constant(E/(2*(1 + nu))) lmbda = Constant(E*nu/((1 + nu)*(1 - 2*nu))) psi = (mu / 2) * (tr(Cbar) - 3) + 0.5 * lmbda * (J-1) * ln(J) P = diff(psi, F) residual = (J_hat * inner(P, grad(v) * inv(F_hat)) * dx- J_hat * dot(Constant((0,0,-1)), v)* dx) bcs = DirichletBC(V, Constant((0,0,0)), "on_boundary")solve(residual==0, u_hat, bcs=bcs) § SELF-INTERSECTION OF THE STRESS-FREE STATE In this section, we discuss several aspects regarding the existence of a stress-free configuration. For this, we consider two simple geometries that represent a transverse cut of an idealized left ventricle as displayed in Figure <ref>. We refer to them as (a) the semi-circle and (b) the eclipse. As discussed in Section <ref>, a global geometric incompatibility can result into self-intersecting relaxed states. We show two mechanisms under which this phenomenon can be seen, namely inner self-intersections and outer self-intersections. We show this in the presented geometries by considering the inner surface as an endocardium where a given pressure is known, and on the epicardium we consider the elastic response that arises from the interaction with the pericardium.We first load the semi-circle geometry with an endocardial pressure of 400, and we solve under these conditions the inverse displacement problem (<ref>). 
The solution is displayed in Figure <ref>, where the thicker part of the geometry virtually does not deform, and indeed all deformation is obtained from the thinner part of the geometry. This results in an endocardial interpenetration. We proceed analogously with the eclipse, where we depict the solution in Figure <ref>. In contrast to the semi-circle case, here we see that there is a self-intersection through the epicardium. These two examples of self-intersection represent global geometric incompatibilities as detailed in Section <ref>, but they are still the solutions obtained through the inverse displacement problem (<ref>). This means that, unless a contact formulation is used, there is no guarantee that the stress-free configuration will avoid self-penetrations. Furthermore, it is not trivial to formulate a contact inverse displacement problem that is compatible with the forward problem.§ ALGORITHMS FOR SOLVING THE INVERSE PROBLEMThere are essentially two approaches for solving (<ref>). The first one is to solve the weak formulation associated with the inverse problem (<ref>), e.g. by the Newton-Raphson method, and the second one is to leverage only (<ref>), known as the Sellier method. The target problem is computationally challenging in both formulations, so we use a simple homotopy strategy to increase the loading terms with a fixed step size, i.e. by ramping the loads. We will denote this operation with a pseudo-time parameter, such that a load f⃗ becomes f⃗(t)=t f⃗, with t in [0,1] being the ramp parameter. The desired solution is obtained when t=1. We show how to solve the inverse displacement problem with this strategy in Algorithm <ref>, where the solution of problem (<ref>) is done with a Newton algorithm. Unless stated otherwise, all nonlinear algorithms consider as initial guess the solution at the previous step.The most widely used method to compute the solution of (<ref>) is known as the Sellier method <cit.>. Given a relaxation parameter α>0 and an initial displacement u⃗^(0), and denoting by Ω(X⃗^k) the configuration obtained in the X⃗^k coordinates, the algorithm is displayed in Algorithm <ref>. This method is a fixed-point iteration, which is in general prone to instabilities and lack of convergence. This has been alleviated by including an acceleration technique known as Aitken acceleration <cit.>, and further improved by an Armijo line search strategy <cit.>. We will refer to the latter as the Aitken-Armijo strategy. The resulting method enjoys improved robustness, which makes it more reliable for data-intensive applications. Still, it has been observed that Anderson acceleration performs better than Aitken acceleration in most practical applications (see for example <cit.>). This can be explained mainly by two things: on the one hand, Anderson acceleration can be regarded as a nonlinear variant of the GMRES algorithm, so it has better mathematical foundations. On the other hand, it uses an arbitrary number of previous iterations, whereas Aitken uses only one previous solution. We propose a single algorithm that can be used to choose between the Aitken-Armijo strategy and Anderson acceleration in Algorithm <ref>, where we have observed that combining both Aitken and Anderson never yields a better solver (not reported). One possible explanation for this is that Anderson is not capable of accelerating arbitrary fixed point iterations. 
Indeed, it has been shown that it can accelerate linearly converging sequences, that it may worsen the performance of quadratically convergent sequences, and that anything in between is still an open problem <cit.>.Convergence of the Sellier method (all three variants previously shown) is established when the deformed geometry is sufficiently close to the original one, or when the increments are sufficiently small. The latter can lead to stagnation, which we have observed to happen sometimes with Aitken acceleration. For this reason, we have set a minimum relaxation of 0.5 that truncates smaller values. § NUMERICAL TESTS We perform the numerical tests in four different geometries: * A 2D square domain.* A 3D rectangular geometry, commonly referred to as a slab in the computational cardiology community, subject to surface and volume loads to validate our solvers.* A simplified left ventricle (LV) geometry subject to an endocardial pressure and an active stress force.* A realistic full-heart geometry with given physiological values of atrial and ventricular pressures. The scope of this section is to clarify the following points: (i) to understand whether it is advantageous to solve the IEP using the Cauchy formulation or the Eshelby one, (ii) to compare the performance of the IEP by using its direct solution or a Sellier approach in terms of its robustness (behavior with varying parameters) and optimality (sensitivity to problem size), and (iii) to characterize the computational effort of the IEP with respect to the DEP, i.e. which problem is most computationally challenging, and how to measure this aspect. For these aims, the numerical tests we propose are the following.* A numerical convergence test for the Cauchy and Eshelby formulations for varying degrees of approximation. This test will help us conclude which of the two formulations should be used in practice.* A robustness test where we vary the load of the slab and the endocardial pressure/active stress of the idealized LV. This test measures the sensitivity of the solvers with respect to external loads.* An optimality test in which, for fixed loads, we increase the degrees of freedom of each problem. This test measures the sensitivity of the solvers with respect to the problem size.* A preconditioning test, where we study the performance of both algebraic multigrid (AMG) and domain decomposition (DD) methods for the IEP formulation.* A formulation comparison test, in which we study whether the backward or forward problems are more computationally demanding.* A real-case scenario where we can test our conclusions in a full-heart model. In what follows, we will use the term inverse displacement method to denote a direct numerical approximation of the IEP, based either on a finite element approximation of the Cauchy version (<ref>) or the Eshelby one (<ref>). The inverse displacement method is thus a way, alternative to the Sellier method, to solve the IEP, and should not be confused with the latter.To avoid ambiguity, we will consider the nonlinear iterations to be the number of iterations required for each method to converge. For the inverse displacement method, this will be the number of Newton iterations. Instead, for the Sellier method, this will refer to the fixed point iterations required for convergence. Given that each fixed point iteration of this method requires the solution of a nonlinear elasticity problem, we will refer to such iterations as the inner nonlinear iterations. 
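For concreteness, the basic Sellier update described in the previous section can be sketched in a few lines of Python. The sketch below only illustrates the structure of the iteration and is not the implementation used for the tests: the helper solve_forward_problem (the inner nonlinear elasticity solve) and the tolerances are placeholders, while the optional Aitken relaxation is truncated at the minimum value of 0.5 mentioned above.
[language=python, frame=single, caption=Illustrative sketch of the Sellier fixed-point iteration with optional Aitken relaxation.]
import numpy as np

def sellier(X_target, solve_forward_problem, alpha=1.0, tol=1e-6, max_it=100,
            use_aitken=False, min_relaxation=0.5):
    # X_target: coordinates of the known (deformed) configuration.
    # solve_forward_problem(X): placeholder for the inner nonlinear elasticity
    # solve (a DEP) on the candidate reference coordinates X, returning u.
    X = X_target.copy()          # initial guess: reference equals deformed configuration
    res_old = None
    for _ in range(max_it):
        u = solve_forward_problem(X)     # inner Newton solve
        res = (X + u) - X_target         # mismatch with the target geometry
        if np.linalg.norm(res) < tol:
            return X
        if use_aitken and res_old is not None:
            d = (res - res_old).ravel()
            alpha = -alpha * np.dot(res_old.ravel(), d) / np.dot(d, d)
            alpha = max(abs(alpha), min_relaxation)   # truncate small relaxation values
        X = X - alpha * res              # fixed-point update of the reference coordinates
        res_old = res
    raise RuntimeError("Sellier iteration did not converge")
Anderson acceleration replaces the scalar relaxation above by a least-squares combination of several previous iterates, which is the variant selected in Algorithm <ref> when requested.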
Whenever more than one ramp step is used, we will report the average number of iterations. All tests on the slab and on the idealized LV have been implemented with the FEniCS library <cit.> and visualized with Paraview <cit.>. The preconditioning tests have been performed with the Firedrake library <cit.>. In addition, unless stated otherwise, all linear systems are solved using the MUMPS library <cit.>, which uses a direct method. This avoids the additional complexity of considering the challenges associated with the linear system resolution whenever quantifying the computational burden of the IEP. The real-case scenario was performed with the high-performance library  (see[<https://lifex.gitlab.io/>] and <cit.>), built upon the finite element core  (see[<https://www.dealii.org>] and <cit.>). §.§ Numerical convergence test In this section, we propose a simple convergence analysis of the discretized counterparts of the weak formulations (<ref>)-(<ref>). We consider the 2D square domain Ω=[0, L]×[0, L], and assume that the body is homogeneous and composed of a material with Neo-Hookean strain energy (<ref>). We construct the fields b⃗ and t⃗ such that u⃗(x⃗) = A sin(2 π x_1)e⃗_2 is a solution of the inverse problem. In (<ref>), x⃗ = x_1 e⃗_1 + x_2 e⃗_2 and (e⃗_1, e⃗_2) is the canonical basis in ℝ^2.We can now compute the corresponding Cauchy stress tensor through (<ref>) and, by applying (<ref>), we can recover the corresponding fields b⃗ and t⃗. Similarly, from (<ref>) we can recover the expressions of β⃗ and σ⃗ such that (<ref>) is a solution of (<ref>).We use this analytical solution to perform the convergence analysis of the discretized problem. We exploit a Galerkin approximation and the finite element method. We construct a triangular, structured mesh Ω_h of the domain Ω, with h being the diagonal of the elements. We use P^1, P^2 and P^3 elements to discretize the field u⃗ and we denote by u⃗_h the discrete counterpart. The nonlinear problem is solved by means of a Newton method. In Figure <ref>, we show a logarithmic plot of the error norm ‖u⃗-u⃗_h‖_H^1(Ω, ℝ^2) for A = 0.1 L. We observe that the error is O(h^n) for the element P^n as h→ 0. The errors measured using the weak forms (<ref>) and (<ref>) are very close, even though the formulation using the Eshelby stress (<ref>) requires many more iterations. Indeed, for the formulation with the Cauchy stress tensor, we can solve the problem with a single Newton algorithm which requires an average of 5.2 inner iterations. Conversely, with the Eshelby stress the direct application of the Newton method may fail and we need to use a ramp where we iteratively increase the value of A, see Table <ref>.Therefore, in the remaining part of this work, we will focus our attention on the Cauchy stress weak form (<ref>).§.§ Slab testsThe slab consists of a prism cut going from the endocardium to the epicardium, given by Ω = (0, 1e-2)× (0, 3e-3)×(0, 3e-3). On it, we consider the exponential constitutive law of Usyk <cit.>, detailed in Section <ref>, with homogeneous Dirichlet conditions on {x=0} and null traction conditions elsewhere. We display the solution of the inverse displacement problem in Figure <ref>, which we computed for various volumetric and surface loads, given by b⃗ and t⃗, respectively, in (<ref>). 
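To fix ideas, the slab test just described requires only minor changes to Listing <ref>: the mesh, the Dirichlet condition on {x=0} and the volumetric load. The sketch below is illustrative rather than the exact script used for the experiments; in particular, the Neo-Hookean energy of Listing <ref> is kept only for self-containedness, whereas the slab experiments use the Usyk law of Section <ref>.
[language=python, frame=single, caption=Illustrative UFL setup of the slab inverse displacement problem.]
from dolfin import *

# Slab mesh with 24 x 8 x 8 subdivisions, as in the robustness test below.
mesh = BoxMesh(Point(0, 0, 0), Point(1e-2, 3e-3, 3e-3), 24, 8, 8)
V = VectorFunctionSpace(mesh, 'CG', 1)
u_hat = Function(V)
v = TestFunction(V)

F_hat = Identity(3) + grad(u_hat)      # gradient of the inverse deformation
J_hat = det(F_hat)
F = variable(inv(F_hat))               # deformation gradient of the direct map

# Neo-Hookean energy kept from Listing <ref> for self-containedness only;
# the slab experiments use the Usyk law instead.
E_mod, nu = 1.0e4, 0.3
mu = Constant(E_mod/(2*(1 + nu)))
lmbda = Constant(E_mod*nu/((1 + nu)*(1 - 2*nu)))
J = det(F)
Cbar = J**(-2/3) * F.T * F
psi = (mu / 2) * (tr(Cbar) - 3) + 0.5 * lmbda * (J - 1) * ln(J)
P = diff(psi, F)

b = Constant((0, 0, -10.0))            # volumetric load b = -10 e_3
residual = J_hat * inner(P, grad(v) * inv(F_hat)) * dx - J_hat * dot(b, v) * dx
bcs = DirichletBC(V, Constant((0, 0, 0)), 'near(x[0], 0.0)')
solve(residual == 0, u_hat, bcs=bcs)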
We note that both solutions were computed using 10 ramp steps for the loads, and the maximum load used for each display was such that loads twice as large would yield a divergent iterative procedure using the IEP formulation.§.§.§ RobustnessWe study the robustness with respect to volumetric loads. To measure the performance, we look at the number of nonlinear iterations required for convergence. All tests were performed with a Newton method using absolute and relative tolerances of 10^-14 and 10^-6 respectively for the inverse problem. The Sellier methods use equal absolute and relative tolerances of 10^-6. The tangent systems were inverted with MUMPS, a parallel direct solver. The geometry was discretized with 24 subdivisions in the x direction, and 8 subdivisions in the y and z directions, resulting in roughly 6 000 degrees of freedom.We show the results of this test in Table <ref>. We first note that the inverse displacement method is much more robust than the Sellier methods in general, being able to yield a solution for load values roughly 10 times larger than those of the Sellier methods, in only one ramp step, and 4 times larger if Sellier uses 100 ramp steps. Among the Sellier methods, we note that they all converge in the same scenarios, meaning that acceleration does not make a difference in this test. Still, it can be appreciated how the Armijo strategy yields a more robust method, which can be greatly improved by using instead Anderson acceleration. Indeed, the latter can sometimes yield convergence in roughly half the number of nonlinear iterations. Still, the superiority of this strategy is less obvious when looking at the inner nonlinear iterations, which increase as the accelerated methods perform larger steps. Naturally, this nested solver problem is not present in the inverse displacement method.§.§.§ OptimalityIn this section, we study the sensitivity of the slab problem as the number of degrees of freedom increases. For this, we consider two volumetric loads given by b⃗=-be⃗_3 for b in {10,20}, and we divide the x, y, and z axes into 3k, k, and k elements respectively, for k in {2,4,8,16,24,32,40}, solved in one ramp step. We show the results in Table <ref>, where we highlight the following results: (i) as is typical of Newton methods, the IEP formulation behaves optimally, with its number of nonlinear iterations remaining constant as the number of degrees of freedom increases <cit.>. (ii) For the smaller load test, the pure Sellier method and its Armijo variant are vastly more reliable than the Anderson accelerated variant. Still, for larger loads they all behave very erratically, and there is no obvious better option.§.§.§ Performance comparisonIn this section we compare the CPU times (also referred to as walltime) of the methods under consideration. For this, we present them for the first scenario considered in the optimality test, i.e. for the load b⃗=-10e⃗_3, and report them in Table <ref>. We note that the inverse displacement method provides a clear improvement over the Sellier method, representing a speed-up of roughly 87%. We highlight that, whenever the Anderson method converges, it is faster than both Sellier variants present in the literature. 
Additionally, we confirm the overall superiority of the Armijo line search strategy for this case, as it is both more robust and faster than plain Sellier.§.§.§ Computational effort of IEP and DEPFor measuring which formulation is the most challenging at the numerical level, we use as an indicator the number of nonlinear iterations incurred by the nonlinear solver, if it converges. We do so in three scenarios: (i) the IEP formulation (<ref>), (ii) the DEP formulation from the stress-free configuration (<ref>), and (iii) the DEP from the current configuration (<ref>). The distinction between the last two is important because they represent two conceptually different scenarios. In scenario (ii), we compare the inverse and forward problems in a physically consistent setting. In problem (iii), the scope is purely methodological, as we compare the computational effort of the inverse problem with respect to what is done by the Sellier method. As a matter of fact, the fixed-point iterations of the Sellier method envisage a sequence of DEPs, moving from (iii) to (ii). This should provide further evidence for the lack of convergence of the Sellier method, justified additionally by the requirement of solving a challenging nonlinear problem at each iteration. We fix the load to be b⃗ = -10e⃗_3. We show the results in Table <ref>. Albeit unintuitive, we note that the easiest problem is the inverse one, which has a consistently lower number of nonlinear iterations than the other two problems. Interestingly, the forward problem from the stress-free configuration in this problem is slightly harder than the one posed on the current geometry. We remark that, in this test case, we only focus on the nonlinear solver, and we disregard the challenges associated with the inner linear systems, as we are employing a direct linear solver. This aspect will be addressed later in Sections <ref> and <ref>. §.§ Simplified cardiac modelIn this section, we study as in Section <ref> the robustness, optimality, performance, and computational effort of the inverse displacement formulation against standard and accelerated Sellier schemes on an idealized LV geometry. We consider the same physical model as in the slab test, with the only difference of having the physiologically motivated boundary condition of (<ref>) on the epicardium.We focus on two types of loads, which are the main ones present in cardiac simulations: (i) a pressure acting uniformly on the endocardium and (ii) the active stress, which we depict respectively in Figures <ref> and <ref> respectively. §.§.§ RobustnessIn this section we study the robustness of the methods with respect to an endocardial pressure going from 0.1 to 10 for 1, 10, and 100 ramp steps. Then, we do the same computation for an active stress magnitude given going from 1 up to 40.The computed results are shown in Tables <ref> and <ref> for the endocardial pressure and the active stress, respectively. First, we note that again the inverse displacement method is the most robust in all scenarios under consideration. 
There is no significant advantage in augmenting the standard Sellier method with the Armijo strategy, but instead Anderson acceleration provides a consistently more robust solver in both the number of nonlinear iterations and the scenarios in which it converges.We also highlight that the inverse displacement formulation and the Anderson accelerated Sellier method yield the same robustness when using 100 ramp steps.§.§.§ OptimalityIn this section we study the performance of all methods under consideration as the number of degrees of freedom increases. We consider two scenarios: one with a fixed endocardial pressure of 0.2 and another one with a fixed active stress peak of 5, with the results in Table <ref>. We note that in all considered scenarios, the inverse displacement method yields a more robust performance. Still, we highlight that Anderson acceleration performs roughly the same average inner nonlinear iterations as pure Sellier, with reduced nonlinear iterations. In the active stress case, the Armijo strategy is instead both more costly and less robust. Still, there is no significant difference among the methods tested. §.§.§ Performance comparisonIn this section, we compare the CPU times and report them in Table <ref>. In terms of execution time, we note that pure Sellier is the worst, and inverse displacement yields the best performance, yielding roughly a 60% reduction in time with respect to pure Sellier. Between the two methods reside the Armijo and Anderson accelerated Sellier methods, which yield roughly a 7% and a 10% walltime reduction with respect to a pure Sellier method in the endocardial test. In the active stress case, Sellier Armijo is more expensive, whereas Anderson yields again a 10% time save.§.§.§ Computational effort of the problemIn this section, we aim to study whether the inverse or forward problems are more computationally demanding as is Section <ref> by considering the same three scenarios. In this case, as in the previous sections, we consider separately the effect of an endocardial pressure and that of an active stress force, shown in Table <ref>. We note that, as in the slab tests, the inverse displacement method requires the lowest number of iterations in almost all scenarios, except for some instances of active stress. Interestingly, in contrast to the slab case, this test shows that the forward problem from the computed stress-free configuration is easier than the one posed on the deformed configuration for all values considered. This is consistent with experience in cardiac modeling, and shows two things: on one hand, solving the problem from a stress-free configuration is easier, as it is more physically accurate for the given geometry. On the other hand, it shows why it is more difficult to get the Sellier method to converge. This suggest that the Sellier method could be made more robust by adding further ramping strategies to the inner nonlinear problem, which would result in even larger computational costs.§.§ PreconditioningThe main strategy so far to compute preconditioners for nonlinear elasticity has been to devise optimal preconditioners for the linearized formulation, and then use such techniques for the nonlinear scenario. This usually yields satisfactory results in the nonlinear regime, but in this section we show that this approach is not equally valid for the inverse displacement problem. 
For this, consider the slab problem shown in Section <ref> with a volumetric load given by b⃗=-32e⃗_3, which we have observed to be sufficiently large to challenge the numerical solvers. In contrast to all previous numerical tests, where we have used a direct solver (MUMPS) for all linear systems, we display the average number of GMRES iterations using the well-established Algebraic Multigrid implementation from HYPRE <cit.> for an increasing number of degrees of freedom. This solver is an excellent choice for nonlinear elasticity, and its efficiency has been thoroughly studied for cardiac elasticity as well <cit.>.We compare its performance with a very simple one-level Additive Schwarz preconditioner with minimal overlap and an incomplete LU (ILU) factorization as a local solver, and show the performance with 16 subdomains (16 MPI processes) in Table <ref>. We see that, surprisingly, AMG is particularly unsuitable for this problem, and an AS/ILU preconditioner provides a better alternative. Moreover, we test the GDSW preconditioner <cit.> available in PETSc <cit.> under the same conditions. We provide the results in Table <ref>, where a much improved performance is obtained in terms of linear iterations, albeit not optimal. We report the PETSc options to use these preconditioners in <ref>. We note that none of the tested preconditioners are optimal, and that obtaining an optimal preconditioner, at least in practice, for the inverse displacement formulation is beyond the scope of this work.§.§ Realistic four-chamber heartWe now turn to the case of a realistic full-heart model, introduced in Section <ref>. We consider the Zygote Solid 3D Heart Model <cit.>, an anatomically accurate CAD model of the whole human heart, obtained from high-resolution CT scans and representing an average healthy male subject, displayed in Figure <ref>a. We consider a computational mesh with an average cell diameter of 1.18 and accounting for 2.75e6 tetrahedra (see Figure <ref>b), generated relying on the algorithms proposed in <cit.>, implemented in the open source software  <cit.>. We generate the fiber architecture by relying on the Laplace-Dirichlet rule-based method for whole heart geometries proposed in <cit.> and further refined in <cit.>. In the myocardium, we consider the exponential constitutive law of Usyk <cit.>, with a volumetric term enforcing quasi-incompressibility: Ψ( F ) = C/2( e^Q - 1 ) + B/2( J - 1 ) log(J), where C is the material stiffness, B is the bulk modulus, and Q = b_ff E_ff^2 + b_ss E_ss^2 + b_nn E_nn^2 + b_fs( E_fs^2 + E_sf^2 ) + b_fn( E_fn^2 + E_nf^2 ) + b_sn( E_sn^2 + E_ns^2 ), E_ab = E a·b, for a, b ∈{ f, s, n }, where E = 1/2( C - I ) is the Green-Lagrange strain tensor, and C = F^T F is the right Cauchy-Green deformation tensor. See the aforementioned reference for the parameter values. In the vessels, instead, we use the Neo-Hookean model: Ψ( F ) = μ/2( J^-2/3 tr( F^T F ) - 3 ) + κ/4[ ( J - 1 )^2 + log^2(J) ]. We employ the parameter values reported in <cit.>. We consider a factor = 0.4 to account for the effect of microscale fiber dispersion on the active stress. Concerning the epicardium boundary conditions, we set = 0 and = 2e5.To define the IEP, we consider the pressures and diastolic tension reported in Table <ref>. 
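Before turning to the results, we remark that the anisotropic law (<ref>)-(<ref>) translates almost literally into the UFL syntax used in Section <ref>. The following sketch is only meant to show this correspondence: the parameter values and the fiber fields f0, s0, n0 are placeholders, not those of the cited references.
[language=python, frame=single, caption=Illustrative UFL sketch of the quasi-incompressible exponential (Usyk-type) energy.]
from dolfin import *

def psi_usyk(F, f0, s0, n0, C_stiff=2.0e3, B_bulk=5.0e4,
             b_ff=8.0, b_ss=6.0, b_nn=3.0, b_fs=12.0, b_fn=3.0, b_sn=3.0):
    # All parameter values here are placeholders; see the cited references
    # for the actual material constants.
    J = det(F)
    C = F.T * F                        # right Cauchy-Green tensor
    E = 0.5 * (C - Identity(3))        # Green-Lagrange strain tensor
    def E_(a, b):                      # strain components in the fiber system
        return dot(E * a, b)
    Q = (b_ff * E_(f0, f0)**2 + b_ss * E_(s0, s0)**2 + b_nn * E_(n0, n0)**2
         + 2 * b_fs * E_(f0, s0)**2
         + 2 * b_fn * E_(f0, n0)**2
         + 2 * b_sn * E_(s0, n0)**2)
    return C_stiff / 2 * (exp(Q) - 1) + B_bulk / 2 * (J - 1) * ln(J)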
We consider two load cases, namely 50% and 100% of the values reported in Table <ref>.In this test case, we consider both the IEP (that we address with the inverse displacement method and with the Sellier method), and the DEP after having computed the stress-free configuration. Both problems are considerably challenging, due to the highly-nonlinear constitutive law and to the nontrivial geometric features of the considered domain. As a matter of fact, none of the solution methods considered is able to reach convergence with a single load step. Hence, we consider a load-ramp approach by increasing simultaneously both the cavity pressures and the active tension. In such a challenging problem, as the ramp approaches the target value, smaller and smaller steps are typically required to avoid convergence failures, especially when the Sellier method is considered <cit.>. Hence, to avoid the need of manually tuning the ramp step, we implement the adaptive ramp algorithm of <cit.>, by which the ramp step size is automatically decreased (by a factor 0.7) in case of failure, while it is increased (by a factor 1.2, with a maximum of 0.2 relative step length) in case of success.The considered problem is challenging also because of the ill-conditioning of the linear systems arising from each Newton iteration. Hence, in order to mitigate the computational burden, we consider, besides the standard Newton algorithm, an Inexact Newton algorithm that employs a loose tolerance for the linear solver in the first nonlinear solver iteration, and progressively reduces it during the iterations. This strategy has been shown to reduce CPU times in cardiac simulations <cit.> without sacrificing robustness or optimality. Moreover, we employ the GMRES method by setting a large maximum number of iterations (namely 10^4). We consider both an algebraic multigrid (AMG) preconditioner, and an additive Schwarz method with an ILU approximate solve as inner solver, based on the parallel partitioning (AS/ILU), as shown in Section <ref>. For linear algebra operations we rely on the Trilinos library <cit.>. Simulations are run on a parallel computing cluster on 92 cores (Lenovo SR950 192-Core Intel Xeon Platinum 8160, 2100 MHz and 1.7TB RAM) at MOX, Department of Mathematics, Politecnico di Milano.We report in Table <ref> the results in terms of convergence success, wall time, and number of iterations. First, we notice that the Inexact Newton approach brings in all the considered cases a significant advantage (between 2x and 8x speedup).

                                           |              50% load                    |              100% load
Problem   Nonlinear solv.   Linear solv.   | Wall time   F.P. steps  Newt.  Time/step | Wall time   F.P. steps  Newt.  Time/step
ID        Newton            AMG            | > 24h       -           -      -         | > 24h       -           -      -
ID        Inexact Newton    AMG            | > 24h       -           -      -         | > 24h       -           -      -
ID        Newton            AS/ILU         | 36m 20s     -           21     103.8s    | 64m 20s     -           43     89.8s
ID        Inexact Newton    AS/ILU         | 12m 01s     -           25     28.8s     | 19m 30s     -           38     30.8s
Sellier   Newton            AMG            | 331m 41s    82          295    67.5s     | > 24h       -           -      -
Sellier   Inexact Newton    AMG            | 41m 00s     42          160    15.4s     | > 24h       -           -      -
Sellier   Newton            AS/ILU         | 568m 20s    108         364    93.7s     | > 24h       -           -      -
Sellier   Inexact Newton    AS/ILU         | 77m 32s     42          166    28.1s     | > 24h       -           -      -
ID-forw   Newton            AMG            | 20m 31s     -           27     45.6s     | 32m 02s     -           40     48.0s
ID-forw   Inexact Newton    AMG            | 7m 10s      -           25     17.2s     | 13m 51s     -           41     20.3s
ID-forw   Newton            AS/ILU         | 42m 50s     -           27     95.2s     | 40m 50s     -           40     61.3s
ID-forw   Inexact Newton    AS/ILU         | 14m 06s     -           26     32.5s     | 20m 04s     -           42     28.6s

Results of the realistic 4 chamber cardiac model of Section <ref>. 
We report: the wall time; the number of fixed point steps (only for the Sellier method); the total number of Newton steps (summed over the ramp steps and, for the Sellier method, over the fixed point iterations); The wall time per each Newton step. Instead, the two considered preconditioners behave very differently depending on the differential problem being solved (namely (<ref>) or (<ref>)). As shown in Section <ref>, AMG shows to be ineffective for the IEP, since the maximum number of GMRES iterations is very often reached, despite the fact that the adaptive algorithm leads to smaller and smaller steps in the ramps.Instead, the AS/ILU method provides a more robust preconditioner for this problem. In contrast, when we consider the DEP or the Sellier method for the IEP (which, at each step, solves a DEP), the choice of preconditioner does not determine the ability to reach convergence or not, but it impacts the wall time. Comparing the results obtained with AMG and AS/ILU, we see that the number of Newton steps is virtually identical, but the wall time is roughly half using AMG. The only exception is in the case of the Sellier method with the traditional Newton algorithm, for which using AS/ILU the nonlinear solver performs about 50% more iterations, meaning that GMRES fails more often than with AMG. In any case, for solving the DEP (<ref>), AMG proves preferable to AS/ILU.Finally, we compare the inverse displacement method with the Sellier method in solving the IEP. We observe that the Sellier method is unable to reach convergence in this real-life test case when 100% of the load is considered, regardless the nonlinear and linear solvers employed. When we consider a 50% reduction of the load, both methods converge, but the inverse displacement method is remarkably more efficient (12 minutes against 41 minutes in the best case, that is with AS/ILU and AMG, respectively). The inverse displacement requires the resolution of more demanding linear systems (28.8s against 15.4s), but this is compensated by a significantly smaller number of Newton steps (25 against 160). We conclude this section by showing the results obtained in the real-life full heart model. In Figure <ref> we show the magnitude of the displacement from the stress-free configuration and the deformed one. In Figure <ref> we report several views of the deformed and the stress-free configuration. As expected, the cardiac chambers are deflated, because of the pressures acting on the endocardium. In addition, the chambers that are deformed the most are those with a thinner wall, since they are more prone to being stretched by pressure, and thus the stress-free configuration is more distant from the deformed one. Atria are deflated to a remarkable degree, an aspect that makes calculating the stress-free configuration particularly challenging in this test case. Such deflation induces a rotation in the right atrium auricle, so that a self-penetration of the domain occurs, both of the atrium into the ventricle and of the opposite walls of the atrium. The self-penetration of the relaxed configuration is the manifestation of a global geometric incompatibility, as discussed in Section <ref>. 
§ CONCLUSIONS In this paper, we have delved into the complex task of reconstructing the stress-free configuration of an elastic body, terming this challenge the inverse elasticity problem.In Section <ref>, we have demonstrated that obtaining the inverse deformation map involves solving a mixed boundary value problem that shares structural similarities with the classical problem of hyperelasticity. Expanding upon Shield's pioneering findings <cit.>, we have extended our analysis to encompass the impact of material inhomogeneities, body and active forces.Although our investigation has revealed that the existence of solutions can be ensured under stringent assumptions, we have uncovered that, even for a simple scenario involving a two-dimensional disk composed of Neo-Hookean material and subjected to external pressure, the problem can yield one, multiple, or even zero solutions depending on the applied pressure.Furthermore, we have conducted an analysis of potential global geometric incompatibilities, leading to a non-injective inverse deformation. While injectivity of the deformation is pivotal in the direct problem to avoid self-intersections, we have shown that this characteristic is not mandatory for the inverse deformation, and characterized numerically two different mechanisms in which this phenomenon can arise. Nevertheless, the resulting self-intersecting relaxed state of the body could pose issues, rendering the domain unsuitable as a reference configuration. To counteract this challenge, we have proposed a novel approach outlined in Section <ref>, based on a multiplicative decomposition of the deformation gradient tensor.We have then thoroughly studied the inverse displacement method in terms of its numerical behavior, both independently and in comparison to alternative fixed-point (Sellier) algorithms. Our numerical evidence suggests that (i) the inverse displacement method outperforms the Sellier methods in terms of convergence speed and robustness,(ii) the Sellier algorithm can be slightly enhanced with Anderson acceleration, but the advantage is negligible when compared against using the inverse displacement method,(iii) the inverse displacement problem can be equivalently formulated in terms of the Cauchy and Eshelby stress tensors, but using the Cauchy formulation requires a smaller computational effort,(iv) in terms of nonlinear solvers, the inverse displacement problem behaves similarly to the standard elasticity problem, and(v) preconditioning the inverse displacement problem is significantly more challenging, and we have shown that domain decomposition preconditioners are significantly more effective than AMG. We have challenged both the inverse displacement method and the Sellier method in a real-life full heart test case, characterized by detailed anatomical features and by a computational mesh having 2.75e6 tetrahedra. The most striking result in this test case is the higher robustness and better performance of the inverse displacement method compared to the Sellier method, the latter being the only one being used at present – to our knowledge – in the cardiac modeling community. As a matter of fact, by relying on the inverse displacement method we were able to recover the stress-free configuration for a realistic load. To the best of our knowledge, this is the first result of this kind in the scientific literature, for a geometry of comparable complexity. 
Remarkably, the computation took only 19m 30s on 92 cores, that is only 40% more than solving the direct elasticity problem on the same mesh for the same load.§ PETSC OPTIONS FOR PRECONDITIONERSIn Listings <ref>-<ref>, we report the PETSc options used to test the preconditioners AMG, ILU, and GDSW in Section <ref>.[language=python, float, frame=single, caption=PETSc commands to use AMG., label=lis:AMG] "snes_type": "newtonls", "snes_atol": 1e-12, "snes_rtol": 1e-6, "snes_stol": 0.0, "snes_linesearch_type": "basic", "ksp_type": "gmres", "ksp_atol": 0.0, "ksp_rtol": 1e-6, "ksp_max_it": 5000, "ksp_norm_type": "unpreconditioned", "ksp_gmres_restart": 1000, "pc_type": "hypre" [language=python, frame=single, float,caption=PETSc commands to use AS., label = lis:ILU]"snes_type": "newtonls", "snes_atol": 1e-12, "snes_rtol": 1e-6, "snes_stol": 0.0, "snes_linesearch_type": "basic", "ksp_type": "gmres", "ksp_atol": 0.0, "ksp_rtol": 1e-6, "ksp_max_it": 5000, "ksp_norm_type": "unpreconditioned", "ksp_gmres_restart": 1000, "pc_type": "asm", "sub_ksp_type": "preonly", "sub_pc_type": "ilu" [language=python, frame=single, float, label = lis:GDSW, caption=PETSc commands to use GDSW.]"snes_type": "newtonls", "snes_atol": 1e-12, "snes_rtol": 1e-6, "snes_stol": 0.0, "snes_linesearch_type": "basic", "ksp_type": "gmres", "ksp_atol": 0.0, "ksp_rtol": 1e-6, "ksp_max_it": 5000, "ksp_norm_type": "unpreconditioned", "ksp_gmres_restart": 1000, "pc_type": "mg", "pc_mg_galerkin": None, "pc_mg_levels": 2, "pc_mg_adapt_interp_coarse_space": "gdsw", "mg_levels_pc_type": "asm" § ACKNOWLEDGMENTDR gratefully acknowledges funding by the European Union – NextGenerationEU under the National Recovery and Resilience Plan (NRRP), Mission 4 Component 2 Investment 1.1 - Call PRIN 2022 No. 104 of February 2, 2022 of Italian Ministry of University and Research; Project 202249PF73 (subject area: PE - Physical Sciences and Engineering) “Mathematical models for viscoelastic biological matter”. NB has been supported by CMM BASAL proyect FB2100005 and by ANID POSTDOCTORAL 3230325. FR and DR are members of the INdAM research group GNCS (FR) and GNFM (DR). FR and DR acknowledge the support by the MUR, Italian Ministry of University and Research (Italy), grant “Dipartimento di Eccellenza 2023-2027”.10africa2022lifex P. C. Africa. lifex: A flexible, high performance library for the numerical solution of complex finite element problems. SoftwareX, 20:101252, 2022.alnaes2015fenics M. Alnæs, J. Blechta, J. Hake, A. Johansson, B. Kehlet, A. Logg, C. Richardson, J. Ring, M. Rognes, and G. Wells. The FEniCS project version 1.5. Archive of Numerical Software, 3(100), 2015.Aln_s_2014 M. S. Alnæs, A. Logg, K. B. Ølgaard, M. E. Rognes, and G. N. Wells. Unified form language: A domain-specific language for weak formulations of partial differential equations. ACM Transactions on Mathematical Software, 40(2):1–37, Feb. 2014.Ambrosi_2011 D. Ambrosi and S. Pezzuto. Active stress vs. active strain in mechanobiology: Constitutive issues. Journal of Elasticity, 107(2):199–212, July 2011.amestoy2000mumps P. R. Amestoy, I. S. Duff, J.-Y. L’Excellent, and J. Koster. MUMPS: a general purpose distributed memory sparse solver. In International Workshop on Applied Parallel Computing, pages 121–130. Springer, 2000.Antiga2008 L. Antiga, M. Piccinelli, L. Botti, B. Ene-Iordache, A. Remuzzi, and D. A. Steinman. An image-based modeling framework for patient-specific computational hemodynamics. Medical & Biological Engineering & Computing, 46(11), nov 2008.dealII91 D. 
Arndt, W. Bangerth, T. Clevenger, D. Davydov, M. Fehling, D. Garcia-Sanchez, G. Harper, T. Heister, L. Heltai, M. Kronbichler, R. Kynch, M. Maier, J.-P. Pelteret, B. Turcksin, and D. Wells. TheLibrary, Version 9.1. Journal of Numerical Mathematics, 2019.petsc-user-ref S. Balay, S. Abhyankar, M. Adams, J. Brown, P. Brune, K. Buschelman, L. Dalcin, A. Dener, V. Eijkhout, W. Gropp, D. Karpeyev, D. Kaushik, M. Knepley, D. May, L. Curfman McInnes, R. Mills, T. Munson, K. Rupp, P. Sanan, B. Smith, S. Zampini, H. Zhang, and H. Zhang. PETSc users manual. Technical Report ANL-95/11 - Revision 3.13, Argonne National Laboratory, 2021.ball1976convexity J. Ball. Convexity conditions and existence theorems in nonlinear elasticity. Archive for Rational Mechanics and Analysis, 63(4):337–403, 1976.barnafi2022parallel N. A. Barnafi, L. F. Pavarino, and S. Scacchi. Parallel inexact newton–krylov and quasi-newton solvers for nonlinear elasticity. Computer Methods in Applied Mechanics and Engineering, 400:115557, 2022.barnafi2022comparative N. A. Barnafi, L. F. Pavarino, and S. Scacchi. A comparative study of scalable multilevel preconditioners for cardiac mechanics. Journal of Computational Physics, 492:112421, 2023.bayer2012novel J. D. Bayer, R. C. Blake, G. Plank, and N. A. Trayanova. A novel rule-based algorithm for assigning myocardial fiber orientation to computational heart models. Annals of Biomedical Engineering, 40:2243–2254, 2012.bucelli2022partitioned M. Bucelli, L. Dede, A. Quarteroni, and C. Vergara. Partitioned and monolithic algorithms for the numerical solution of cardiac fluid-structure interaction. Communications in Computational Physics, 32(5):1217–1256, jun 2022.carroll2005compressible M. M. Carroll. Compressible isotropic strain energies that support universal irrotational finite deformations. The Quarterly Journal of Mechanics and Applied Mathematics, 58(4):601–614, 2005.Carroll_2005 M. M. Carroll and F. J. Rooney. Implications of shield's inverse deformation theorem for compressible finite elasticity. Zeitschrift für angewandte Mathematik und Physik, 56(6):1048–1060, nov 2005.Chadwick_1975 P. Chadwick. Applications of an energy-momentum tensor in non-linear elastostatics. Journal of Elasticity, 5(3-4):249–258, nov 1975.deng2023fast G. Deng and F. Galetto. Fast iterative reverse filters using fixed-point acceleration. Signal, Image and Video Processing, pages 1–9, 2023.dicarlo2002growth A. DiCarlo and S. Quiligotti. Growth and balance. Mechanics Research Communications, 29(6):449–456, 2002.dohrmann2008family C. R. Dohrmann, A. Klawonn, and O. B. Widlund. A family of energy minimizing coarse spaces for overlapping schwarz preconditioners. In Domain Decomposition Methods in Science and Engineering XVII, pages 247–254. Springer, 2008.epstein2015mathematical M. Epstein. Mathematical characterization and identification of remodeling, growth, aging and morphogenesis. Journal of the Mechanics and Physics of Solids, 84:72–84, 2015.evans2020proof C. Evans, S. Pollock, L. G. Rebholz, and M. Xiao. A proof that anderson acceleration improves the convergence rate in linearly converging fixed-point methods (but not in those converging quadratically). SIAM Journal on Numerical Analysis, 58(1):788–810, 2020.falgout2002hypre R. Falgout and U. Yang. hypre: A library of high performance preconditioners. In International Conference on Computational Science, pages 632–641. Springer, 2002.fedele2023comprehensive M. Fedele, R. Piersanti, F. Regazzoni, M. Salvador, P. C. Africa, M. Bucelli, A. Zingaro, A. 
Quarteroni, et al. A comprehensive and biophysically detailed computational model of the whole human heart electromechanics. Computer Methods in Applied Mechanics and Engineering, 410:115983, 2023.Fedele2021 M. Fedele and A. Quarteroni. Polygonal surface processing and mesh generation tools for the numerical simulation of the cardiac function. International Journal for Numerical Methods in Biomedical Engineering, 37(4):e3435, 2021.gee2009prestressing M. W. Gee, C. H. Reeps, H. H. Eckstein, and W. A. Wall. Prestressing in finite deformation abdominal aortic aneurysm simulation. Journal of Biomechanics, 42(11):1732–1739, 2009.Giantesio_2018 G. Giantesio, A. Musesti, and D. Riccobelli. A comparison between active strain and active stress in transversely isotropic hyperelastic materials. Journal of Elasticity, 137(1):63–82, dec 2018.Goriely_2005 A. Goriely and M. Ben Amar. Differential growth and instability in elastic shells. Physical Review Letters, 94(19), May 2005.govindjee1996computational S. Govindjee and P. A. Mihalic. Computational methods for inverse finite elastostatics. Computer Methods in Applied Mechanics and Engineering, 136(1-2):47–57, 1996.henderson2007view A. Henderson. Paraview guide: A parallel visualization application. Kitware, Inc., Clifton Park, NY, 2007.horgan2004invariance C. O. Horgan and J. G. Murphy. Invariance of the equilibrium equations of finite elasticity for compressible materials. Journal of Elasticity, 77:187–200, 2004.horgan2005plane C. O. Horgan and J. G. Murphy. Plane strain bending of cylindrical sectors of admissible compressible hyperelastic materials. Journal of Elasticity, 81:129–151, 2005.katz2010 A. M. Katz. Physiology of the Heart. Lippincott Williams & Wilkins, 2010.kelley1991mesh C. Kelley and E. W. Sachs. Mesh independence of newton-like methods for infinite dimensional problems. The Journal of Integral Equations and Applications, pages 549–573, 1991.kondaurov1987finite V. I. Kondaurov and L. V. Nikitin. Finite strains of viscoelastic muscle tissue. Journal of Applied Mathematics and Mechanics, 51(3):346–353, 1987.kroner1959allgemeine E. Kröner. Allgemeine kontinuumstheorie der versetzungen und eigenspannungen. Archive for Rational Mechanics and Analysis, 4(1):273–334, 1959.Lee1969 E. H. Lee. Elastic-plastic deformation at finite strains. Journal of Applied Mechanics, 36(1):1–6, Mar 1969.marx2022robust L. Marx, J. A. Niestrawska, M. A. F. Gsell, F. Caforio, G. Plank, and C. M. Augustin. Robust and efficient fixed-point algorithm for the inverse elastostatic problem to identify myocardial passive material parameters and the unloaded reference configuration. Journal of Computational Physics, 463:111266, 2022.mazier2022inverse A. Mazier, A. Bilger, A. E. Forte, I. Peterlik, J. S. Hale, and S. P. Bordas. Inverse deformation analysis: an experimental and numerical assessment using the FEniCS Project. Engineering with Computers, 38(5):4099–4113, 2022.Merodio_2006 J. Merodio and R. W. Ogden. On the equivalence of strong ellipticity in the material and spatial settings of finite elasticity. Zeitschrift für Angewandte Mathematik und Physik, 57(6):1096–1101, aug 2006.montanino2020recovery A. Montanino and A. Pandolfi. On the recovery of the stress-free configuration of the human cornea. Journal for Modeling in Ophthalmology, 4:11–33, 2020.mora2019shape S. Mora, E. Andò, J.-M. Fromental, T. Phou, and Y. Pomeau. The shape of hanging elastic cylinders. Soft Matter, 15(27):5464–5473, 2019.Mora_2014 S. Mora, T. Phou, J.-M. Fromental, and Y. Pomeau. 
Gravity driven instability in elastic solid layers. Physical Review Letters, 113(17), oct 2014.Morin_2015 F. Morin, H. Courtecuisse, M. Chabanas, and Y. Payan. Rest shape computation for highly deformable model of brain. Computer Methods in Biomechanics and Biomedical Engineering, 18(sup1):2006–2007, sep 2015.murphy2003inverse J. G. Murphy. Inverse radial deformations and cavitation in finite compressible elasticity. Mathematics and Mechanics of Solids, 8(6):639–650, 2003.pfaller2019importance M. R. Pfaller, J. M. Hörmann, M. Weigl, A. Nagler, R. Chabiniok, C. Bertoglio, and W. A. Wall. The importance of the pericardium for cardiac biomechanics: from physiology to computational modeling. Biomechanics and Modeling in Mechanobiology, 18:503–529, 2019.piersanti2021modeling R. Piersanti, P. C. Africa, M. Fedele, C. Vergara, L. Dedè, A. F. Corno, and A. Quarteroni. Modeling cardiac muscle fibers in ventricular and atrial electrophysiology simulations. Computer Methods in Applied Mechanics and Engineering, 373:113468, 2021.rathgeber2016firedrake F. Rathgeber, D. Ham, L. Mitchell, M. Lange, F. Luporini, A. McRae, G.-T. Bercea, G. Markall, and P. Kelly. Firedrake: automating the finite element method by composing abstractions. ACM Transactions on Mathematical Software (TOMS), 43(3):1–27, 2016.rausch2017augmented M. K. Rausch, M. Genet, and J. D. Humphrey. An augmented iterative method for identifying a stress-free reference configuration in image-based biomechanical modeling. Journal of Biomechanics, 58:227–231, 2017.regazzoni2019reviewXB F. Regazzoni, L. Dedè, and A. Quarteroni. Active force generation in cardiac muscle cells: mathematical modeling and numerical simulation of the actin-myosin interaction. Vietnam Journal of Mathematics, 49:87–118, 2021.regazzoni2021oscillation F. Regazzoni and A. Quarteroni. An oscillation-free fully partitioned scheme for the numerical modeling of cardiac active mechanics. Computer Methods in Applied Mechanics and Engineering, 373:113506, 2021.regazzoni2022emcirculation F. Regazzoni, M. Salvador, P. C. Africa, M. Fedele, L. Dedè, and A. Quarteroni. A cardiac electromechanical model coupled with a lumped-parameter model for closed-loop blood circulation. Journal of Computational Physics, 457:111083, 2022.riccobelli2019existence D. Riccobelli, A. Agosti, and P. Ciarletta. On the existence of elastic minimizers for initially stressed materials. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 377(2144):20180074, 2019.Riccobelli_2019 D. Riccobelli and D. Ambrosi. Activation of a muscle as a mapping of stress–strain curves. Extreme Mechanics Letters, 28:37–42, Apr. 2019.Riccobelli_2017 D. Riccobelli and P. Ciarletta. Rayleigh–Taylor instability in soft elastic layers. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 375(2093):20160421, apr 2017.rodriguez1994stress E. K. Rodriguez, A. Hoger, and A. D. McCulloch. Stress-dependent finite growth in soft elastic tissues. Journal of Biomechanics, 27(4):455–467, 1994.sellier2011iterative M. Sellier. An iterative method for the inverse elasto-static problem. Journal of Fluids and Structures, 27(8):1461–1470, 2011.Shield_1967 R. T. Shield. Inverse deformation results in finite elasticity. Zeitschrift für Angewandte Mathematik und Physik ZAMP, 18(4):490–500, jul 1967._ilhav__1997 M. Šilhavý. The Mechanics and Thermodynamics of Continuous Media. Springer Berlin Heidelberg, 1997.taber2000modeling L. A. Taber and R. Perucchio. 
Modeling heart development. Journal of Elasticity, 61(1):165–198, 2000.trilinos-website T. Trilinos Project Team. The Trilinos Project Website, 2020 (acccessed May 22, 2020).truesdell2013non C. Truesdell and W. Noll. The Non-Linear Field Theories of Mechanics. Springer Science & Business Media, 2013.Usyk2002 T. P. Usyk, I. J. LeGrice, and A. D. McCulloch. Computational model of three-dimensional cardiac electromechanics. Computing and Visualization in Science, 4(4):249–257, 2002.Zygote2014 Zygote. Zygote solid 3D male anatomy collection generation II develompent report. Technical report, 2014.
http://arxiv.org/abs/2312.11477v1
{ "authors": [ "N. A. Barnafi", "F. Regazzoni", "D. Riccobelli" ], "categories": [ "physics.bio-ph", "cond-mat.soft", "cs.NA", "math.NA", "65N21, 65N30, 74B20, 74G75, 74L15" ], "primary_category": "physics.bio-ph", "published": "20231127204918", "title": "Reconstructing relaxed configurations in elastic bodies: Mathematical formulation and numerical methods for cardiac modeling" }
[ [ January 14, 2024 ==================== We describe two algorithms for multiplying n × n matrices using time and energy Õ(n^2) under basic models of classical physics.The first algorithm is for multiplying integer-valued matrices, and the second, quite different algorithm, is for Boolean matrix multiplication.We hope this work inspires a deeper consideration of physically plausible/realizable models of computing that might allow for algorithms which improve upon the runtimes and energy usages suggested by the parallel RAM model in which each operation requires one unit of time and one unit of energy. § INTRODUCTION Suppose you were presented with a black-box that could multiply any n× n matrices in quadratic time. Would you be surprised?Not necessarily—the box might simply be able to leverage an amount of parallelism that scales with n.Specifically, you could trivially parallelize the multiplication across n machines, and run each machine for O(n^2) time, resulting in O(n^3) energy usage but only O(n^2) time.But what if both the runtime and energy usage of the black-box scaled quadratically?Such a black-box would be surprising if it operated within a computational model where each arithmetic operation requires one unit of energy.But are there physically realizable models that do not have this property? And if so, what is the algorithmic landscape for such models, and what physical gadgets or properties do they leverage?Should we expect to be able to obtain significant polynomial improvements simultaneously for runtime and energy usage for fundamental algorithmic primitives like matrix multiplication?There are several motivations for considering these questions.First, energy is one of the most important computational resources, along with time, and space.Despite this, there is embarrassingly little theoretical work on low-energy computing, and few theoretical models of computation that explicitly consider energy.Of course, on the practical side there is a frenzy of effort to design highly parallel and energy-efficient hardware and algorithms—and aproliferation of analog computing components due to their low energy-usage. Still, a more principled effort tounderstand how different physical systems and assumptions could be algorithmically leveraged for low-energy computation might serve to guide the development of alternative hardware and architectures.From a more conceptual angle, these questions ask whether the conventional wisdom regarding the time and energy complexity of problems is inherent, or simply due to our RAM-centric view of computing, modeled on computers in the von Neumann architecture.In light of the extended Church-Turing thesis, we do not expect natural or physics-driven computational processes to obtain super-polynomial improvements in terms of time and energy—quantum computing aside.In terms of the structure of problems within P, however, we do know that different computational models give rise to different polynomial runtimes.Despite this, there seems to be little investigation of realistic and physically plausible models of computation that result in significant (polynomial) savings in resources over the standard RAM or parallel-RAM models:What is a “fine-grained” analog of the extended Church-Turing thesis that takes into account both runtime and energy?Do plausible non-quantum models of computing admit polynomial savings in terms of time and energy over the RAM or parallel-RAM models where each operation takes a unit of energy? 
If so, how large can these polynomial factors be, and what are the fundamental lower-bounds for natural problems?§ RELATED WORKThe earliest analog computers were mechanical in nature and were later replaced with electronic analog computers. A good example of an early analog computer was the differential analyser <cit.> which was used to solve differential equations. Later there were theoretical models developed for studying the power of analog computation that uses a set of elementary operations such as constants, adders, multipliers and integrators <cit.>.The focus in these works is on computability, as opposed to runtime or energy usage.Early theoretical work in the study of energy efficient computation was done in the context of reversible computing, initiated by Landauer and Bennet <cit.>. Landauer's principle <cit.> states that erasing a single bit of information requires k_B T log 2 energy, where k_B is Boltzmann’s constant and T is the temperature of the surroundings. The motivation for reversible computing is the stipulation that, from a thermodynamic perspective, such erasures are the only aspect of computation that inherently requires energy, and hence if a computation is reversible, there is no theoretical lower bound to the energy required.More recent work in this vein by Demaine et al. <cit.> studies this in a more algorithmic context and revists many common algorithmic primitives (including sorting, graph algorithms and data structures) with the goal of implementing them entirely, or mostly, with reversible operations.There has also been a line of theoretical work on a different notion of energy complexity (e.g. <cit.>).In that work, the energy complexity of a circuit is defined as the maximum over all inputs, of the number of gates that output 1 (as opposed to a 0).This definition corresponds to the energy expended in a natural implementation of such a circuit. The key questions are how the energy complexity can be related to traditional parameters of circuits, such as width or depth. Our algorithms leverage only classical physics.Of course, quantum algorithms such as Shor's algorithm <cit.> may yield super-polynomial improvements over classical algorithms, both in terms of runtime and energy.There is also a significant line of work investigating the extent to which restricted models of quantum computation—such as “linear optics” <cit.>—can yield super-polynomial speedups.There are also several interesting quantum algorithms, such as Grover's search <cit.> and recent work on quantum “spatial search” <cit.>, which yield only quadratic speedups over their classical analogs.Given this interest in polynomial speedups, it is certainly worth understanding whether certain types of non-quantum physical systems can give similar sorts of surprising speedups.We also note that the challenges to realizing quantum computing in a practical sense appear orthogonal to the challenges of realizing the sort of “physical” algorithms we present here. 
On the practical side, energy is one of the most important metrics of computational efficiency.On mobile devices (phones, watches etc), battery life is a paramount concern.For training deep neural networks and large-scale scientific computing, energy costs are often significant in comparison to the hardware costs and the salaries of the people involved.This has sparked a large industry of custom hardware, and renewed interest in analog computing.Particularly in settings that allow for low-precision, analog circuits seem to offer significant energy savings for certain problems (see, e.g. the very brief survey <cit.>).For specific computational primitives, in particular, matrix vector multiplication, there have been a series of empirical papers exploring analog implementations via memristor crossbar circuits <cit.>.Additionally, there is a promising wave of work on optical/“photonic” circuits (e.g. <cit.>), which seem to offer both increased speed and lower energy for tasks such as forward passes on a deep neural network. The emphasis in these works is on the empirical behaviour, not asymptotic or theoretical properties. § POTENTIAL ADVANTAGES OF PHYSICAL ALGORITHMSOur algorithms will leverage concrete physical systems that evolve under the laws of classical physics.Before describing these algorithms, we outline three properties of physical systems that could plausibly be employed to yield time and/or energy improvements over the RAM model: * Free Parallelism: The physical world allows for some level of parallelism “for free”, as multiple physical systems can evolve in parallel. The initialization/setup of these systems may need to be done serially, but their evolution according to the laws of physics occurs in parallel. * Can Tradeoff Time and Energy:Under Newtonian mechanics, suppose it takes one unit of energy to move a unit-mass object one unit distance, with the object beginning and ending at rest.In a frictionless setting, to move the same object one unit distance in t units of time, the total energy is 1/t^2, since the object needs to be accelerated to velocity 1/t, and kinetic energy scales with the square of the velocity. This ability to tradeoff between time and energy is exploited in the Boolean Matrix Multiplication algorithm of Section <ref>.It is worth noting that a similar scaling is observed with over/under-clocking CPUs (though there is only a narrow range of flexibility in clock-speed of current CPUs), though this scaling is due both to increasing the voltage and increased fan speed required to dissipate the heat. * Sublinear Time/Energy Aggregation: Physical systems allow for many means for adding or computing the OR of n numbers using a sublinear amount of time and/or energy.1) Diffusion: If the n quantities to be aggregated are presented as n heat sources, arranged on a √(n)×√(n) two dimensional grid of thermally conducting material (thermally insulated from the outside world), then with no additional energy and time O(n log (1/ϵ)), the heat equation will drive the conducting plate to a uniform temperature to within ±ϵ. If the n quantities to be aggregated were presented as n heat sources, arranged within a n^1/3× n^1/3× n^1/3 cube, then the time for diffusion is sublinear:O(n^2/3log (1/ϵ)). 
2) Newtonian mechanics: given n bits, let the ith bit be represented as the presence or absence of a unit mass block at location i along a length n friction-less track. The OR of these bits can be computed by sliding a unit-mass block along the track with some initial velocity, and measuring whether that block is the first block to reach location n+1. If the initial block has velocity v (and hence energy O(v^2)), then if the OR is 0 that block will reach the end at time n/v. Provided v < √(n), this provides a smooth tradeoff between sublinear time and sublinear energy.
§ PHYSICAL ASSUMPTIONS
In this section, we briefly discuss the assumptions underlying the correctness and runtimes/energy usages of our algorithms. As with any such assumptions, they become unrealistic at some problem scale. This is similar to the sense in which the RAM model becomes unrealistic at the problem scales for which the time to communicate a bit of information across the memory footprint is non-negligible.
§.§ Precision and Measurement Accuracy
Our algorithms leverage the assumption that physical quantities (e.g. mass, length) of value b can be measured to accuracy ±ϵ, using a time and energy cost of log(max(1,b)) + log(1/ϵ). Additionally, the time and energy cost of fabricating a component with desired mass or length b ±ϵ is O(b + log(1/ϵ)). These assumptions are reasonable in the parameter regime in which classical physics applies, where one can perform a binary-search type approach using a set of reference mass/lengths of value 1, 1/2, 1/4, 1/8,…. These assumptions necessarily break down near atomic scales where a polynomial relationship between desired accuracy and required energy is more appropriate.
§.§ Divisibility of Material
Both of the matrix multiplication constructions presented below involve some property of the system scaling inversely with the size of the instance. For the integer matrix multiplication algorithm of Section <ref>, we assume that some material can be divided into quantities of size 1/n. In the Boolean matrix multiplication construction of Section <ref>, we assume that the velocity of some components of the system can be 1/n. This inverse scaling breaks down at atomic scales, which is the main limit on the size of the instances for which such systems could be practically realized. Though, as discussed at the end of Section <ref>, in an optical implementation of our integer multiplication algorithm, we would expect the roughly quadratic time and energy scaling to hold up until impressively large problem instances.[We note that, at least for multiplying square matrices, allowing properties to scale with o(1/n) does not seem to help. For other problems, such as k-sum, allowing material to be divisible into quantities of size 1/n^k can likely be leveraged. That said, such an assumption quickly becomes unrealistic—even for modest values of k this assumption becomes practically unreasonable for quite modest values of n.] The specific assumption we require for our integer multiplication algorithm is that with time and energy O(n log n), one can construct a “device” with the property that if one “pours in” one unit of “material” (e.g.
water, sand, light) at one end, after time O(n), 1/n ± o(1/n^2) material will exit each of n equally-spaced “endpoints”. Additionally, the amount of energy required by this system to perform such a division is either negligible, or at most n. A plausible construction of such a gadget would be a binary tree of “tubing” through which material can flow under the force of gravity, with n leaves and “splitters” at each of the internal nodes/junctions that divide the material flow (nearly) equally along the two downstream paths. The construction of Section <ref> is described in terms of such a gadget.
§.§ Classical Mechanics
Both algorithms assume that objects operate under Newtonian mechanics: it requires a unit of energy to raise a unit mass to a height of 1 unit, and a unit mass can be moved a unit distance in a unit time, beginning and ending at rest, requiring a unit of energy. We also assume the force of gravity acts in the usual sense. For example, a mass at rest at the top of a length n frictionless track that is at an incline of 1/n, will take time O(n) to reach the bottom. None of our algorithms require perfectly elastic collisions, though the algorithm of Section <ref> assumes that kinetic energy can be transferred from one object to another, losing a constant fraction (bounded below 1) of the energy. This algorithm additionally leverages that kinetic energy scales quadratically with velocity: accelerating a unit mass object to velocity v requires O(v^2) energy.
§ INTEGER MATRIX MULTIPLICATION
In this section, we consider multiplying matrices of integers. Given an n × n matrix A, we will construct an O(n^2 log n) sized physical system, taking time and energy O(n^2 log n), such that given a vector b, the matrix-vector product Ab=c can be computed in time and energy O(n log n). After n such products, the computation corresponds to having multiplied two n × n matrices in time and energy O(n^2 log n). Without loss of generality, we will assume that A ∈{0,1}^n× n and b ∈{0,1}^n, as the multiplication of matrices with r-bit entries can trivially be reduced to r^2 multiplications of {0,1} matrices.
The physical system will be constructed as a simple network of “tubing” and “channels”, through which a divisible “material” (e.g. sand, water, light) flows under the influence of gravity without friction. We will have an array of n “channels”, with the ith channel corresponding to the ith index of the output, c_i. One end of each channel will be held at one unit elevation, and the other will be held at elevation 0. The total amount of “material” that collects at the end of the ith channel will be measured, to accuracy ≪ 1/n^2, which will be the value of c_i after rounding to the nearest multiple of 1/n. Between each of these channels, we will also have “garbage” channels, whose material is never measured.
For each j ∈{1,…,n}, we will construct a binary tree of tubing, with n “leaves”, and height log n, such that when a unit of “material” is input at the root, after time O(n log n), 1/n ± 1/poly(n) material has come out at each “leaf”. This can be accomplished via “splitters” at each of the O(n) internal nodes/junctions in the tree, each of which splits the material equally between the two downstream paths, up to ≪ 1/n^2 accuracy. We assume that each of the O(n^2) splitters (O(n) splitters for each of the n binary trees) is an inert device that has been constructed/calibrated in time and energy O(log n). We discuss the practical feasibility of such splitters more below.
The jth binary tree will be positioned j units along the array of channels, such that the tubing at the ith leaf flows into channel i if A_i,j=1. If A_i,j=0, then the tubing at leaf i of binary tree j is directed towards a “garbage” channel. The total size of this construction is O(n^2 log n), corresponding to 2n channels of length n (n corresponding to the outputs, and n interspersed “garbage” channels), and n binary trees each of size O(n log n).
Given this system representing matrix A, to multiply vector b ∈{0,1}^n, for each j ∈{1,…,n}, we input 1 unit (up to error ≪ 1/n^2) of material into the jth binary tree of tubing if, and only if, b_j = 1, and measure the amount of material that collects at each of the channels after time O(n log n); the amount of material that exits the ith channel, rounded to the nearest multiple of 1/n, will be c_i/n. The correctness of the implementation is clear by construction: the amount of material entering the ith channel from the jth binary tree of tubing is A_i,j b_j/n, and hence up to the scaling factor of n, the amount of material collected at the bottom of the ith channel is ∑_j=1^n A_i,j b_j = c_i.
The total energy required to perform this matrix-vector multiplication is O(n log n), corresponding to 1) lifting the ≤ n amount of material the O(log n) distance to reach the top of the binary trees of tubing, 2) measuring each of the ≤ n unit quantities of material to accuracy ≤ 1/n^2 to input into each of the binary trees, 3) measuring each of the n outputs c_1,…,c_n to accuracy ≤ 1/n^2. The total runtime is also O(n log n), consisting of 1) raising the ≤ n units of material to height O(log n), 2) the time to sequentially measure out each of the ≤ n units of material, 3) the O(n) time for the material to flow through the length n tubing path and length ≤ n channel, each of which has an incline of at least 1/n, and 4) sequentially measuring the material emitted at each of the n channels to accuracy ≪ 1/n^2.
Finally, this O(n^2 log n) sized construction representing matrix A can be constructed in time/energy O(n^2 log n). This holds assuming 1) that each of the O(n^2) flow splitters—O(n) per binary tree—can be calibrated to accuracy <1/n^2 in time/energy O(log n) per splitter, 2) that the O(n^2 log n) length of tubing can be fabricated in time/energy O(n^2 log n), 3) that the O(n^2) connections between the tubing and the splitters can each be connected in time O(log n), and 4) that the 2n flow channels of length n can, in total, be fabricated in time O(n^2). This near-quadratic time/energy cost will be amortized across the n matrix vector multiplications, to yield an overall time/energy for multiplying two n × n matrices that is O(n^2 log n).
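To make the correctness argument above concrete, the following NumPy sketch (our illustration; the function names and the splitter-error model are assumptions rather than part of the construction) simulates the idealized system: each binary tree delivers roughly 1/n of its input to every leaf, the channels accumulate material according to the corresponding column of A, and the final measurement rounds each channel's total to the nearest multiple of 1/n.

import numpy as np

def split_through_tree(amount, n, leaf_error=1e-9, rng=None):
    # Idealized binary tree of splitters: one unit poured in at the root exits
    # as roughly amount/n at each of the n leaves, up to a tiny calibration error.
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.uniform(-leaf_error, leaf_error, size=n)
    return np.full(n, amount / n) + noise - noise.mean()

def flow_matvec(A, b, leaf_error=1e-9, rng=None):
    # Simulate the tubing-and-channel system for a 0/1 matrix A and 0/1 vector b.
    n = A.shape[0]
    channels = np.zeros(n)                      # material collected at the n output channels
    for j in range(n):
        if b[j] == 1:                           # pour one unit into the jth binary tree
            leaves = split_through_tree(1.0, n, leaf_error, rng)
            channels += A[:, j] * leaves        # leaf i feeds channel i iff A[i, j] = 1
    return np.rint(channels * n).astype(int)    # round to the nearest multiple of 1/n

rng = np.random.default_rng(0)
n = 64
A = rng.integers(0, 2, size=(n, n))
b = rng.integers(0, 2, size=n)
assert np.array_equal(flow_matvec(A, b, rng=rng), A @ b)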
Practical Feasibility: The most natural mapping of this matrix-multiplication scheme into a practically feasible construction that would have runtime and energy usage scaling nearly quadratically up to large values of n, would likely leverage light, rather than a material like water, or sand. The accurate construction of the binary trees of tubing seems practically feasible given the high quality of optical beam splitters currently available. For this application, the fact that beam splitters typically absorb (as opposed to transmit or reflect) a small constant fraction of light does not matter. It is crucial to the construction that the beam splitters transmit and reflect nearly equal amounts of light, up to error ≪ 1/n^2—or at least that the amount of light reaching each of the n leaves of each binary tree is equal, to this accuracy. This property seems achievable via various O(log n) length sequences of measuring and modifying a given splitter. Across all O(n^2) splitters, the total time/energy cost would be O(n^2 log n). As with any construction based on classical physics, this scheme is doomed to fail once 1/n becomes on the same scale as a single photon. Still, this would seem to offer impressively fast and energy-efficient matrix multiplication at large scales.
§ BOOLEAN MATRIX MULTIPLICATION
In this section, we consider Boolean matrix multiplication—matrix multiplication of binary matrices where the elementwise product is replaced by AND, and the summation is replaced by OR: Given two n× n binary matrices, A, B, let the n× n binary matrix C be defined with entry C_i,j = ⋁_k=1^n (A_i,k ∧ B_k,j). Currently, the fastest known algorithms for Boolean matrix multiplication are no better than for integer matrix multiplication. Our main reason for describing this rather different sort of algorithm is to impress the point that it is not all that difficult to come up with physical algorithms that seem to achieve surprising runtimes and energy usages. The algorithm of the previous section certainly seems more amenable to practical implementations than what will be described in this section. As in Section <ref>, we will construct an O(n^2 log n) sized physical system that represents matrix A. This construction will take O(n^2 log n) time and energy. Given this system, we will then be able to evaluate Ab for any vector, b, in near linear time and energy.
To motivate our algorithm, we begin with a naive approach to designing an efficient RAM algorithm for this problem:
* Represent each column of A via a linked list storing the indices of the entries that are 1. Let L_i denote the list corresponding to the ith column.
* For j=1,…,n we compute the (boolean) product between matrix A and the jth column of B:
  * initialize C_1,j,…,C_n,j to zero.
  * For k=1,…,n: if B_k,j=1, step through L_k and for each entry i (corresponding to A_i,k=1) do the following:
    * Set C_i,j=1.
    * Remove value i from all lists L_k' for k'>k.
* Reset the lists L_1,…,L_n so that L_k represents the nonzero indices of the kth column of A. (i.e., undo the “removals” of Step 2.)
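For concreteness, here is a direct (and deliberately unoptimized) Python rendering of this naive procedure; it is our own illustration rather than code from the paper, and it uses Python sets in place of linked lists, rebuilding them for each column of B instead of performing the final reset step.

def naive_boolean_matmul(A, B):
    # Boolean product C[i][j] = OR over k of (A[i][k] AND B[k][j]),
    # for 0/1 matrices given as lists of lists.
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for j in range(n):                          # process the jth column of B
        # L[k] holds the (remaining) rows i with A[i][k] = 1
        L = [{i for i in range(n) if A[i][k]} for k in range(n)]
        for k in range(n):
            if B[k][j] == 1:
                for i in list(L[k]):
                    C[i][j] = 1                 # Step 1: record the output entry
                    for kp in range(k + 1, n):  # Step 2: remove i from all later lists
                        L[kp].discard(i)
    return C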
The above algorithm is trivially correct. Furthermore, when processing each column of B, steps 1 and 2 are only ever executed once per nonzero entry of column C_*,j. Hence each of the n steps of the FOR loop would take time O(n), yielding a total runtime for the matrix multiplication of O(n^2), if the following held: 1) Step 2 could be accomplished in constant time (as opposed to near linear time that would be yielded by doing a binary search within each of the lists L_k'), and 2) the final step of the algorithm that resets all lists after each matrix-vector product, could be accomplished in O(n) time per reset, as opposed to the O(n^2) time it would take to naively rebuild all the lists.
We now describe a physical implementation of this algorithm that can be implemented in Õ(n^2) time and energy. The crux of the construction is that we will perform Step 2 using O(log n) energy in such a way that removing value i from the k'th list will take O(k'-k) time, and will only be completed as we begin to process the k'th entry of B_*,j. Phrased differently, in Step 2, we need to remove i from all subsequent lists k'>k. However, in our implementation we will have O(k'-k) time before i must be removed from list L_k', and hence we will be able to clear it very slowly over ≈ k'-k timesteps. Although we have not yet described how this will be implemented, based on the kinetic energy scaling with the square of velocity it should now be plausible that the energy required would be only O(1/(k'-k)^2). Summing this energy over all k'>k is at most ∑_i≥ 1 1/i^2 = π^2/6, as opposed to the linear energy that would be required in a RAM implementation. We note that even if the energy required to clear a single entry in time t scaled as 1/t instead of the optimistic 1/t^2 scaling suggested by kinetic energy, one could still plausibly implement this high-level strategy with an energy cost of at most ∑_i=1^n 1/i ≈ log n to clear a value from all lists, affecting the total energy cost by at most logarithmic factors.
§.§ Physical implementation
We will make an n× n physical system representing matrix A on an n× n friction-less grid. Each of the n^2 cells of the grid will correspond to the analogous entries of matrix A: we represent A_i,j=0 versus A_i,j=1 via a unit mass block being on the left side of the cell versus on the right side of the cell.
We will furthermore assume that each cell is set up in such a way that given c ≤ 1 units of energy, it transitions from the “1” state to the “0” state in time O(1/c). This could be physically realized by imparting Θ(c) kinetic energy to the unit mass block, corresponding to velocity Θ(√(c)), which would allow the block to traverse the unit length in time O(1/√(c))≤ O(1/c), and then coming to rest via a perfectly inelastic collision or any other way of losing its kinetic energy and reaching a configuration from where it can go back to state “1” when necessary.
To multiply by the jth column of B, B_*,j, for each k for which B_k,j = 1, we will have a unit-mass “agent” which will move at unit velocity along the right side of the kth column corresponding to A. This is analogous to traversing a linked list representing the location of the ones in the kth column of A, in the sense that the agent will only expend energy when it collides with a unit mass—namely when it arrives at an entry A_i,k=1—otherwise it continues its frictionless motion unimpeded. Upon colliding with a unit mass at the ith location while traversing the kth column the agent will expend O(log n) energy to accomplish the following steps, corresponding to Steps 1 and 2 of the naive RAM approach:
* Set the corresponding entry of the answer C_i,j=1. (This could be accomplished via Newtonian mechanics by having a special frictionless track along each row of the construction, with the track of the ith row leading to the ith answer register. An agent will send a unit mass block at unit velocity along this track, and the answer register will update from 0 to 1 upon receiving such a unit of energy.)
* Clear the remainder of the ith row, that is, for each k'>k, set the entry corresponding to A_i,k' to zero. To accomplish this, the agent will use O(log n) energy (which can be stored at the cell itself), transferring ≈ 1/(k'-k) energy to the cell corresponding to A_i,k' in time O(k'-k), for all k'>k. We discuss how this can be implemented below. If A_i,k'=1, the corresponding cell will use the ≈ 1/(k'-k) energy to set the entry to 0 in time <k'-k; hence the entry will be in the 0 position by the time the agent corresponding to column k' visits the ith row. Note that this energy 1/(k'-k) is quadratically more than would be sufficient to zero the entry, as energy 1/(k'-k)^2 would be sufficient to move the unit mass block a unit distance in time k'-k.
* Finally, the agent will use constant energy to adjust its velocity (to compensate for any slowdown required to initiate the previous two steps) so that it enters row i+1 at velocity 1, one timestep after it entered row i.[This step is important, as we must maintain the invariant that the agent in column i'>i reaches row k at least i'-i timesteps after the ith agent reached that row, to allow for Step 2 to be completed.]
Transferring Energy to Clear a Row.
There are a number of ways to implement Step 2. One approach would be to have log n frictionless tracks associated with each row. To clear the remainder of a row, the agent will send a unit mass block at constant speed along each track. The ℓth such block will travel distance O(2^ℓ) and then partition its kinetic energy (roughly) uniformly among the 2^ℓ cells of the ith row in columns k+2^ℓ,…, k+2^(ℓ+1). There are a number of constructions to accomplish this partitioning (all resembling Rube Goldberg machines at some level). Rather than describing one, we instead sketch a more plausible practical instantiation that leverages light.
Suppose we have log n optical channels running along each row, with the ℓth channel having opacity 1/2^ℓ—namely each cell in the ℓth channel absorbs a 1/2^ℓ fraction of the light that enters, and allows the remaining 1-1/2^ℓ fraction to pass through. Suppose the agent at the kth column sends one unit of energy along each of the log n optical channels associated to the given row, and consider the energy absorbed by the cell at column k'=k+d. Defining ℓ = ⌈log d ⌉, the energy absorbed by this cell due to just this ℓth channel will be at least (1/2^ℓ)(1-1/2^ℓ)^(d-1) ≥ (1/(2d))(1-1/2^ℓ)^(2^ℓ) ≥ 1/(8d), since (1-1/c)^c is monotonically increasing in c, and ℓ≥ 1.
Resetting Before Next Matrix-Vector Product.
One final step of the algorithm will be to reset each of the O(n^2) cells that were “cleared” in Step 2, and also for each of the ≤ n cells that triggered a collision, refreshing the O(log n) energy stored at that cell. Both of these can be accomplished in O(n log n) time, using O(n log n) energy. For the resetting, each cleared entry can reset in time O(n) (in parallel), and hence the energy required per cell could be as low as 1/n^2 to accelerate the unit mass to velocity 1/n. There are various implementations, including one in which there is a weak, restorative force for each cell representing an entry A_i,k=1. (For example, the cell could be at a slight incline allowing a gravitational restorative force favoring the 1 position). Such a force would be sufficient to restore the cells to their original values at a timescale of n log n, but would not have a significant effect at the timescales of each matrix-vector product.
§ ABSTRACTING PHYSICAL MODELS OF COMPUTING
To simplify the design of physical algorithms, and facilitate a rigorous study of lower bounds, it would be useful to formalize an abstraction of the key computational primitives. And ideally, this abstraction would allow the algorithm designer to work at a level removed from the minutiae of exactly how and where each data element is stored and accessed. Such an effort may be premature without a more complete catalog of the sorts of gadgets that can be fruitfully leveraged by physical algorithms. Still, we introduce, and briefly discuss one such model.
Abstracting Clock-speed/Energy Tradeoffs: We define the following computational model parameterized by a real number α∈ [0,2]. The model allows for arbitrary parallelism, with processes able to create new processes, subject to the following:
* For a problem instance of size n, each process, P, is defined via an O(log n) size program which may include calls to create additional processes. Process P has its own rate r_P ≥ 1 which corresponds to the amount of time each basic operation or memory read/write takes process P. Rate r_P = 1 corresponds to each operation taking unit time and unit energy. A rate of r_P = c corresponds to time c per operation and energy usage 1/c^α.
* Each process requires one unit of energy to initialize.
* No two processes can access (read or write) the same memory location at the same time. For example, if a process is writing a memory location at rate r=100, then that memory location cannot be accessed by other processes during the 100 timesteps in which it is being written to.
Setting α = 2 corresponds to the time/energy tradeoff in frictionless classical mechanical systems, due to kinetic energy scaling quadratically with velocity. α = 1 is a more modest assumption (and is presumably easier to instantiate in hardware over a larger range of problem sizes), though still yields interesting time/energy tradeoffs. The following examples illustrate time/energy tradeoffs for this model. In both cases, the algorithms are trivial—essentially naively parallelizing the task over a number of processes, all with identical rates, where the rate and level of parallelism are jointly optimized. The only component that requires some care is in ensuring that no two processes are reading from the same memory location at the same time.
[Copying a List] Given a list of n numbers to be copied, suppose we have n^q processes, each running at rate n^s. Each process will need to copy n^(1-q) numbers, which will take time n^(1-q+s). The total energy will be the product of the number of processors and energy per processor: n^q(1+n^(1-q)/(n^s)^α), where α∈ [0,2] is the parameter governing the tradeoff between slowdown and energy usage. For α = 1, this yields that for any s∈ [0,1] one can achieve time O(n^(2s)) and energy O(n^(1-s))—for example with s=1/3, both time and energy are O(n^(2/3)). For α = 2, the analogous calculations give a tradeoff of time O(n^(3s)) and energy O(n^(1-2s)). With s=1/5 both the time and energy are O(n^(3/5)).
[Matrix Multiplication] Consider multiplying two n × n matrices, A,B. First suppose we use n^2 processes, each with rate O(n), where process P_i,j is responsible for computing the i,jth entry of the product, ∑_k A_i,k B_k,j. Since each entry of A (and B) will be read by n processes, we need to ensure that no pair of processes tries to access the same entry at the same time. This is not difficult, and does not require any additional overhead: consider dividing time into length n blocks, [0,n],[n,2n],…. During the tth block of time, let process P_i,j read entry A_i,(i+j+t) mod n and B_(i+j+t) mod n, j. To see that no two processes are trying to access the same entry at the same time, note that the only potential collisions with process P_i,j involve processes P_i',j or P_i,j'. In the case of P_i',j, a collision at time block t would involve B_(i+j+t) mod n, j and B_(i'+j+t) mod n, j but these are distinct, as i≠ i'. Given this lack of collisions, the runtime would be O(n^2), and the energy usage would be O(n^2(1+n/n^α))=O(n^2) as long as α≥ 1. Note that in the case of α = 2, consistent with Newtonian mechanics, the energy overhead for initializing each process dominates the energy used in the actual computation, suggesting that subquadratic time and energy are simultaneously achievable in the α = 2 case by using a subquadratic amount of parallelism. Indeed, time and energy O(n^(9/5)) can be achieved in the α=2 case by using n^(9/5) processes, each computing n^(1/5) of the entries of the product AB, with each process running at rate n^(3/5).
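The arithmetic behind these two examples is simple to tabulate. The short Python sketch below (our own illustration, not part of the model's definition) computes the time and energy exponents, in base n, for a workload of n^total_work unit operations shared evenly among n^q processes running at rate n^s, and reproduces the figures quoted above.

from fractions import Fraction as F

def exponents(total_work, alpha, q, s):
    # time   ~ n^(total_work - q + s): n^(total_work - q) operations per process, each taking n^s
    # energy ~ n^q (process initialization) + n^(total_work - alpha*s) (operations at rate n^s)
    time_exp = total_work - q + s
    energy_exp = max(q, total_work - alpha * s)
    return time_exp, energy_exp

print(exponents(1, 1, F(2, 3), F(1, 3)))   # copying a list, alpha = 1: time and energy n^(2/3)
print(exponents(1, 2, F(3, 5), F(1, 5)))   # copying a list, alpha = 2: time and energy n^(3/5)
print(exponents(3, 1, 2, 1))               # matrix multiplication, n^2 processes at rate n: n^2, n^2
print(exponents(3, 2, F(9, 5), F(3, 5)))   # matrix multiplication, alpha = 2: time and energy n^(9/5)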
This model suffers from some of the same drawbacks as the RAM model. By abstracting away the details of where each bit of data is stored, for large-scale problems, the model cannot hope to realistically model the additional time/energy that must be expended by a process that needs to perform operations on bits of memory stored in “distant” locations. Still, in the same way that the RAM model accurately models computations that fit on a single laptop, there is hope that future hardware could be developed that reflects the properties of the above model, at least at modest problem scales for some α>0. A more conceptual shortcoming of the above model is that it does not seem to be complete in any sense. There are properties of physical systems that can be computationally leveraged beyond the ability to reduce the energy use by slowing down a process. In particular this model lacks the ability to aggregate values as in the algorithm for integer matrix multiplication of Section <ref>, or the ability to average values via diffusion. Electromagnetic and optical phenomena are also completely absent. Still, even within this simple and incomplete model, there might be some surprising and elegant algorithms; and there is some hope that such algorithms might be relevant to current computing settings where there is a suite of available hardware with varying speeds and energy (or monetary) costs. Lower bounds within this restricted model might also be of interest.
§ CONCLUDING THOUGHTS
We hope this work inspires a broader consideration of the potential landscape of time and energy requirements for problems within P, from both theoretical and practical perspectives. Here, we focused on matrix multiplication, leveraging Newtonian mechanics. There are, of course, many other computational problems worth considering, and many other physical systems and forces that could be exploited for energy efficiency and parallelism, including optical phenomena, biological processes, and gravity. As Moore's Law wanes and alternate computing architectures are empirically investigated more fully, it may be worth developing a more complete theory of the energy or runtime gains that might be accessible via different sorts of physical systems and accompanying assumptions. Natural candidate problems include k-Sum and all-pairs shortest paths.
Acknowledgments
This research was supported by a Simons Foundation Investigator Award. The author would also like to thank Vijaykrishna Gurunathan for discussions on this topic. Vijaykrishna wished to not be included as an author.
http://arxiv.org/abs/2311.16342v2
{ "authors": [ "Gregory Valiant" ], "categories": [ "cs.CC", "cs.DS" ], "primary_category": "cs.CC", "published": "20231127220221", "title": "Matrix Multiplication in Quadratic Time and Energy? Towards a Fine-Grained Energy-Centric Church-Turing Thesis" }
The seamless integration of visual and auditory information is a fundamental aspect of human cognition. Although age-related functional changes in Audio-Visual Integration (AVI) have been extensively explored in the past, thorough studies across various age groups remain insufficient. Previous studies have provided valuable insights into age-related AVI using EEG-based sensor data. However, these studies have been limited in their ability to capture spatial information related to brain source activation and their connectivity. To address these gaps, our study conducted a comprehensive audio-visual integration task with a specific focus on assessing the aging effects in various age groups, particularly middle-aged individuals. We presented visual, auditory, and audio-visual stimuli and recorded EEG data from Young (18-25 years), Transition (26-33 years), and Middle (34-42 years) age cohort healthy participants. We aimed to understand how aging affects brain activation and functional connectivity among hubs during audio-visual tasks. Our findings revealed delayed brain activation in middle-aged individuals, especially for bimodal stimuli. The superior temporal cortex and superior frontal gyrus showed significant changes in neuronal activation with aging. Lower frequency bands (theta and alpha) showed substantial changes with increasing age during AVI. Our findings also revealed that the AVI- associated brain regions can be clustered into five different brain networks using the k-means algorithm. Additionally, we observed increased functional connectivity in middle age, particularly in the frontal, temporal, and occipital regions. These results highlight the compensatory neural mechanisms involved in aging during cognitive tasks.Audio-Visual Integration, Electroencephalography, Cognitive Ageing, Brain Source Localization, Functional Connectivity.Insights into Age-Related Functional Brain Changes during Audiovisual Integration Tasks: A Comprehensive EEG Source-Based Analysis Prerna Singh1, Ayush Tripathi2, Lalan Kumar1,2,3 and Tapan Kumar Gandhi1,2 1 Bharti School of Telecommunication Technology and Management, Indian Institute of Technology Delhi, Delhi 110016, India2Department of Electrical Engineering , Indian Institute of Technology Delhi, Delhi 110016, India3Yardi School of Artificial Intelligence, Indian Institute of Technology Delhi, Delhi 110016, IndiaJanuary 14, 2024 =======================================================================================================================================================================================================================================================================================================================================================================================================================§ INTRODUCTION In our daily lives, we encounter a multitude of stimuli from different sensory modalities, such as auditory, vision, smell, and touch.Our remarkable brain has the capacity to efficiently process and integrate pertinent information from these different sources, which is known as multisensory integration. This allows us to perceive and understand the external world effectively amidst the dynamic and complex information surrounding us. When communicating with others, we integrate visual and auditory information to understand speech content. This integration process is known as audio-visual integration <cit.>. 
Audio-visual stimuli elicit faster and more accurate responses compared to unimodal stimuli, highlighting the effectiveness of audiovisual integration <cit.>.With advancing age, there is a noticeable decline in both sensory systems and cognitive functions <cit.>. The process of audiovisual integration serves as a bridge, connecting sensory and cognitive processing, thus mitigating the impact of age-related declines in both domains <cit.>. Age-related declines in cognitive function contribute to increased auditory threshold and decreased visual acuity in older adults. Interestingly, studies on audio-visual integration have shown that older adults exhibit an enhanced audio-visual integration effect compared to younger adults in tasks involving auditory/visual discrimination <cit.>, like sound-induced flash illusion tasks <cit.>and speech perception task <cit.>. Emerging evidence from these studies suggests that audiovisual integration (AVI) may serve as a compensatory mechanism to counteract functional decline associated with aging. However, contrary findings have also been extensively documented in studies employing tasks such as the auditory/visual discrimination tasks <cit.>, and the sentence discrimination task <cit.>.Additionally, the temporal aspect of audiovisual integration (AVI) plays a vital role in determining the occurrence of integration. Past researchers have found that the window for binding is extended for complex stimuli compared to simpler audiovisual stimuli <cit.>. A compelling finding from a previous study <cit.>revealed that a powerful multi-sensory integration effect is observed when the temporal gap between auditory and visual stimuli is less than 100 milliseconds. Hence, variations in experimental materials have been suggested as a primary factor contributing to the contrasting results observed in these studies. Furthermore, the location of the stimulus, whether presented peripherally <cit.> or centrally <cit.>, has varied across studies, leading to differing outcomes. Given the significant age-related decline in peripheral perceptual processing, the specific presentation location of the stimuli has also contributed to the conflicting findings.Subsequent ERP studies on AVI elucidate a compelling insight that older adults displayed a heightened neural response to audiovisual stimuli, particularly in the medial prefrontal and inferior parietal regions <cit.>. These findings strongly supported the notion that the amplified audiovisual integration observed in older adults serves as a compensatory mechanism, counteracting deficiencies in unimodal sensory processing <cit.>.Past Studies have found that neural oscillatory responses in various frequency bands, such as theta, alpha, beta, and gamma, play a role in sensory processing <cit.>. Theta and alpha bands, particularly in fronto-centro-parietal sites, are involved in cognitive control, short-term memory, sensory information maintenance, and suppression of distractions <cit.>. Studies on aging based on EEG have revealed a decline in alpha power, indicating age-related differences in audiovisual integration within the low-frequency bands (theta and alpha).While several models successfully explain behavioral indices of audiovisual integration, the precise neural mechanisms underlying efficient integration in the brain remain unclear. In a meta-analytic study, common brain activity patterns were identified across diverse audiovisual studies <cit.>. 
They discovered that unisensory signals are processed independently in sensory cortices, while integration occurs in later association areas like the superior temporal cortex. However, accumulating evidence suggests that integration might occur at sensory-perceptual and sub-cortical levels prior to the involvement of higher association cortices <cit.>. Some researchers have used PET scans to identify that the right insula is most strongly involved in audiovisual synchrony-asynchrony detection <cit.>. So, it is clear that audiovisual integration involves various brain regions, including occipital, parietal, temporal, and frontal areas <cit.>. Studies using fMRI and EEG have observed integration effects even in sensory-specific regions like the primary visual cortex <cit.>. Anatomical connections between occipital and superior temporal regions highlight their crucial roles in audiovisual integration <cit.>. Functional connectivity, characterized by temporal correlations or synchronization of physiological signals, provides insights into the coordination among widely distributed brain regions <cit.>.Recent studies emphasize the role of functional connectivity in cognitive functioning <cit.>. In multi-sensory processing, these connections are crucial for integrating information between sub-cortical structures and cortical areas <cit.>. Studies have explored how functional networks influence audiovisual integration. However, the age-related differences in functional connectivity during audiovisual integration remain unknown. Past graph theoretical analysis of EEG and MEG data has demonstrated that aging leads to alterations in functional connectivity and network efficiencies <cit.>. Older adults exhibit increased functional connectivity and higher brain network efficiencies during synchronous audiovisual integration in the beta band <cit.>. However, the impact of aging on functional connectivity during audiovisual integration tasks with temporally asynchronous stimuli requires further investigation. Prior research has predominantly focused on examining functional connectivity in older age groups in the sensor domain, leaving a significant gap in understanding the changes in functional connectivity during audiovisual integration tasks in the middle-aged population. Exploring these changes in middle-aged individuals in the source domain is crucial as it can give spatial information that can aid in the early detection of disorders like mild cognitive impairment (MCI) and contribute to the early identification of Alzheimer's disease (AD), which typically manifests in later stages of life. Detecting MCI in middle age is particularly important as the patho-physiological process begins years before the onset of dementia. However, investigations targeting middle-aged groups are limited, and there is a need to shift the focus towards MCI identification in this population to identify early bio-markers of cognitive decline <cit.>. Additionally, the application of source modeling is crucial for a more precise interpretation of variations between the young and middle-aged groups. Sensor-level data may lack spatial precision, but source modeling provides valuable information about the timing and precise brain regions involved <cit.>. This approach aids in resolving any uncertainties inherent in sensor-level analysis. Motivated by this, the present study investigates brain source activation and functional connectivity in different age cohorts during audiovisual integration tasks. 
Participants are divided into three age groups: young (18-25 years), transition from young to middle age (26-33 years), and middle age (34-42 years). They perform auditory and visual discrimination tasks using unimodal and bimodal stimuli. EEG signals are collected from different brain regions to analyze the source-based functional connectivity network. The study examines changes in brain sources related to age and task. Features extracted from these networks are combined, and Machine Learning models are used to classify participants into distinct age groups. The rest of the paper is structured into the following sections. Section II describes the materials and methods. Section III presents the results. Section IV discusses the findings, and Section V concludes the paper.
§ MATERIALS AND METHODS
§.§ Participants
In this study, a total of fifteen healthy subjects participated, and each subject completed five different sessions as part of the study. These participants were divided into three age groups. The first group consisted of young subjects aged 18-25 years (mean age ± SD: 21 ± 1.41 years, n=5). The second group included individuals in the transition from young to middle age, aged 26-33 years (mean age ± SD: 27.4 ± 2.57 years, n=5). The third group consisted of middle-aged subjects aged 34-42 years (mean age ± SD: 37 ± 2.8 years, n=5). The study included participants who were students/residents of IIT Delhi and met specific criteria. They had normal or corrected-to-normal vision. None of the participants had color blindness or hearing threshold issues. Importantly, all participants were unaware of the purpose of the experiment, ensuring unbiased and objective responses. All participants in the study had normal cognitive functioning, as indicated by their Mini-Mental State Examination (MMSE) scores <cit.>, which were above 24. They also had no known history of cognitive impairment. Prior to participating, all individuals provided written informed consent, and the study protocol was approved by the Institute Ethics Committee of IIT Delhi (Reference #2021/P052).
§.§ Stimuli and Task
During the experiments, two types of auditory and visual stimuli were utilized: target and non-target stimuli. The non-target visual stimulus used in the experiment was a black-and-white checkered box. On the other hand, the target visual stimulus featured a black-and-white checkered box with two cross markings inside it. The dimensions of the target visual stimulus were 52 mm × 52 mm, with a visual angle of 5°.
The non-target auditory stimulus was a musical sound, while the target auditory stimulus was white noise. The visual stimuli (V) were presented on a black screen of a 15.6-inch laptop positioned 60 cm in front of the participants' eyes. They were shown randomly on the screen for 150 ms in either the lower left or lower right quadrant. The auditory stimuli (A) were delivered through earphones at a sound pressure level of 60 dB to either the left or right ear for a duration of 150 ms. Figure 1 offers the schematic of the experimental design. A visual representation of the experimental setup can be seen in Figure <ref>.The experiment included three types of stimuli: unimodal audio (A), unimodal visual (V), and audio-visual stimuli (AV). In the audio-visual condition, the stimuli were presented in three different ways based on the stimulus onset Asynchrony (SOA). The three ways for audio-visual stimulus presentation include simultaneous audio-visual (AV) where the audio and visual stimuli were presented simultaneously, Visual lag auditory by 50 ms (V50A) where the visual stimulus appeared 50 ms after the auditory stimulus, and Auditory lag visual by 50 ms (A50V) where the auditory stimulus appeared 50 ms after the visual stimulus. Each trial of every stimulus lasted between 150 to 250 ms, with the specific duration determined based on prior behavioral investigations <cit.>. Participants were instructed to perform a discrimination task involving visual (V), auditory (A), and audio-visual (AV) stimuli, both in synchrony and asynchrony. The description of the stimuli can be seen in Figure <ref>.Each participant underwent five sessions, each starting with a fixation time of 3000 ms. Following that, the screen displayed 20 stimuli for each of the five types (A, V, AV, A50V, V50A). In each condition, either on the left or right side of the screen, 80% of the stimuli were non-targets and 20% were targets. The inter-stimulus interval between consecutive stimuli ranged from 1300 to 1800 ms. Participants were instructed to identify whether the targets appeared on the left or right side of the screen. They were asked to do this as quickly and accurately as possible by pressing the left arrow key for left-side targets and the right arrow key for right-side targets. The experiment timeline is illustrated in Figure 1c. §.§ Data CollectionBoth EEG data and behavioral data were acquired simultaneously in a dimly lit room. Stimulus presentation and collection of behavioral responses were done using PsychoPy-2022.1.3 <cit.>. EEG signals were recorded via a cap (EasyCap from Brain Products) equipped with 32 scalp electrodes, following the International 10–20 System. Impedance was maintained below 20 kΩ. The raw signals were digitized at a sampling rate of 500 Hz using LiveAmp amplifiers (BrainProducts, Munich, Germany). All data were digitally stored for subsequent offline analysis. §.§ Data Analysis§.§.§ EEG Data Pre-ProcessingThe EEG data was pre-processed using MATLAB R2021a (MathWorks, Inc., Natick, MA, United States) with the open-source EEGLAB toolbox (Swartz Center for Computational Neuroscience, La Jolla, CA, United States) <cit.>. The pre-processing part focused on non-target stimuli to eliminate motor response and decision-making effects. Initially, the EEG data from IO channel were excluded in order to reduce the noise related to eye movements. Later, the continuous EEG data were bandpass filtered between 0.5 and 40 Hz. Further, the data were re-referenced to average reference. 
After that, Independent Component Analysis (ICA) <cit.> was used to remove artifacts including eye artifacts, frequency interference, muscle artifacts, head movement, and electrocardiographic activity <cit.>. The data was further divided into epochs with time points ranging from 500 ms before stimulus onset to 900 ms after stimulus start. At last, Baseline correction was applied. This produced 80 epochs (non-target stimuli only) with 700 time points for each stimulus type per participant. These epochs were further used for data analysis. §.§.§ Source Domain AnalysisWe have used Brainstorm for source domain analysis <cit.>. It provides comprehensive source estimation tools for in-depth analysis at both individual and group levels. EEGLAB primarily focuses on sensor-level analysis and statistical modeling, rather than bio-physiological sources.Therefore, we integrated EEGLAB's preprocessing (including ICA artifact reduction) and sensor-level analysis with Brainstorm's source modeling using the preprocessed data. Then cortical source activations were estimated later <cit.>. Brainstorm employs a distributed dipoles model for fitting. In our experiment, we utilized the Standardized Low-Resolution brain Electromagnetic Tomography approach (SLORETA) by Pasqual-Marqui (2002) to analyze the data <cit.>. sLORETA normalizes default current density maps using data covariance, calculated from a combination of noise and brain signal covariance. The sLORETA method employs minimum-norm imaging to estimate scalp-recorded electrical activity locations.Following EEGLAB-based pre-processing (including artifact attenuation, filtering, and epoching), EEG data was imported into Brainstorm. Source estimation was confined to the cortex volume and projected onto the Montreal Neurological Institute (MNI) ICBM152 brain template <cit.> using a multi-linear registration technique within Brainstorm. The ICBM152 anatomical template was used to create the forward model <cit.>. Single-trial pre-stimulus baseline intervals from -500 ms to 0 ms were employed to compute single subject noise covariance matrices and derive individual noise standard deviations at each location as described by <cit.>. The boundary element method (BEM), implemented in OpenMEEG <cit.>, served as a head model using Brainstorm's default settings. Source estimation involved selecting the option of constrained dipole orientations, where a dipole was modeled for each vertex, oriented perpendicular to the cortical surface <cit.>.Prior to source estimation, EEG data were re-referenced to the common average, a standard pre-processing step in source analysis software. Re-referencing to the common average is done to meet the assumption of zero current flow for unbiased source strength estimates <cit.>. Single-trial EEG data is averaged per participant, and source estimation is conducted on the subject average. The cortical surface is divided into regions of interest (ROIs) using the Mindboggle structural atlas <cit.>. Source estimation was conducted for every participant in the Y, T, and M age groups. Results were averaged within each age group to assess source-level activation differences. The study also examined source activation across various frequency bands.§.§.§ Extracting Source Activity Time SeriesBrainstorm offers predefined scouts (atlas-based) or manual region-of-interest (ROI) definitions. The Mindboggle structural atlas <cit.> in Brainstorm was used to define ROIs since individual anatomies weren't available. 
Significant ROIs were selected for each stimulus based on scout activation time series data <cit.> and prior studies <cit.>. Group activation averages were calculated and regions with the highest peaks in scout time series were chosen as significant audio-visual integration (AVI) scouts. These scouts were combined based on source activity to create an AVI scout for comparing age groups' AVI effects. The current approach involved manual scout definition through visual examination <cit.>. Significant scouts, namely the caudal middle frontal, superior frontal, insula, superior temporal, transverse temporal, middle temporal, fusiform, parsopercularis, and superior parietal, were merged to create AVI scouts for analysis.
§.§.§ Time-Frequency Decomposition
Brainstorm employed complex Morlet wavelets for Time-Frequency decomposition of brain signals. Some EEG signal aspects are challenging to assess in the time domain due to amplitude differences. Oscillations at specific frequencies carry important information, but their amplitude can be lower, making them hard to observe. Averaging in the time domain might cancel such oscillations if they lack strict phase alignment across trials. Time-frequency averaging extracts oscillation power regardless of phase shifts. Complex Morlet wavelets are widely used in EEG analysis for time-frequency decomposition. They are sinusoidal with a Gaussian kernel, capturing local oscillatory components. Following sLORETA-based significant scout estimation for different stimuli within each age group, Time-Frequency decomposition was performed on the significant AVI scout signals. Morlet wavelets with a mean scout function were applied to each age group's brain signals, and spectral flattening was executed. Subsequently, z-score normalization was applied to compare the results.
§.§ Experimental Details
§.§.§ Adjacency Matrix Formulation
Using the pre-processed EEG signals, epochs were extracted for each stimulus case. All eighty epochs of every stimulus were utilized to estimate cortical sources using sLORETA <cit.>. This involves creating a head model using OpenMEEG <cit.>, which is then employed to derive 62 Mindboggle <cit.> scout time series mean values for a time window of -500 ms to 900 ms. This results in 62 × 700 dimensional matrices, which are transposed and normalized. Each epoch is represented by a 700 × 62 dimensional matrix (V), with 700 representing time points and 62 representing scout numbers. These matrices form the basis for constructing adjacency matrices. The construction of a 62 × 62 dimensional adjacency matrix, denoted as Â, relies on Pearson correlation between signals from different pairs of scouts. If μ_V(:,i) represents the mean of the i^th scout time series, the elements of the adjacency matrix are computed as:
Â(i,j) = ∑_l=1^700 (V(l,i)-μ_V(:,i))(V(l,j)-μ_V(:,j)) / √(∑_l=1^700 (V(l,i)-μ_V(:,i))^2 ∑_l=1^700 (V(l,j)-μ_V(:,j))^2)
Following this, a binary adjacency matrix (A) is formed by utilizing the elements of Â in the following manner:
A(i,j) = 1, if i≠ j and Â(i,j) ≥ ρ_th; and A(i,j) = 0, otherwise.
The obtained adjacency matrix will be used to construct a functional connectivity graph for a specific epoch. The threshold parameter (ρ_th) determines the correlation strength to establish an edge between two nodes. A lower ρ_th value results in numerous edges, whereas a higher value leads to edges being formed only between nodes with a substantial signal correlation.
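As an illustration of this step, the following NumPy sketch (our own; the variable names and the example threshold are assumptions) computes the Pearson-correlation matrix Â from a 700 × 62 epoch matrix V and binarizes it into the adjacency matrix A.

import numpy as np

def epoch_adjacency(V, rho_th=0.5):
    # V: 700 x 62 array (time points x scouts) for one epoch.
    A_hat = np.corrcoef(V.T)                # 62 x 62 Pearson correlations between scout pairs
    A = (A_hat >= rho_th).astype(int)       # edge iff correlation reaches the threshold
    np.fill_diagonal(A, 0)                  # no self-loops
    return A_hat, A

V = np.random.randn(700, 62)                # stand-in for one epoch of scout time series
A_hat, A = epoch_adjacency(V, rho_th=0.5)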
An optimal ρ_th value will be used to construct a brain connectivity graph. §.§.§ Feature Extraction Five distinct node-level features are derived from the calculated adjacency matrix to facilitate scout-based age-group classification. This amounts to 5 × 62 = 310 features for the connectivity graph of a specific epoch. The following features are extracted using the NetworkX package <cit.>:* Degree Centrality: Degree centrality for a node u quantifies the number of edges connected to it. Mathematically, it is computed as d_u = ∑_l=1^N A(u,l)/(N-1), where N represents the total number of nodes in the graph. * Betweenness Centrality: It is used to identify how much impact a node holds in regulating information flow within a graph. This is valuable for pinpointing nodes that connect different sections of a graph. The algorithm calculates the shortest routes between all node pairs. For a specific node u, it is calculated as: b_u = ∑_a,b∈ U σ(a,b|u)/σ(a,b). In this equation, U represents the set of nodes, σ(a,b) indicates the count of shortest paths connecting nodes a and b, and σ(a,b|u) represents those shortest paths that pass through node u. It is worth noting that σ(a,b) = 1 when a = b, and σ(a,b|u) = 0 when u ∈{a,b}. * Eigenvector Centrality: It is a measure of a node's significance in a graph based on the importance of its connected neighbors. Nodes connected to more important nodes receive higher eigenvector centrality scores. This metric is calculated as the u^th element of the vector e derived from the equation Ae = λ e. Because all entries in matrix A are non-negative, there is a distinct positive solution e for the largest eigenvalue λ. * Closeness Centrality: Closeness centrality identifies nodes that can efficiently disseminate information across a graph. This metric quantifies a node's average closeness (inverse of distance) to all other nodes. Nodes with high closeness scores are those with the shortest paths to other nodes. Closeness centrality for a node u is the reciprocal of the average shortest distance to all other reachable nodes. It is calculated as: c_u = (n-1)/∑_l=1^n-1 d(u,l), where d(u,l) represents the shortest-path distance between nodes u and l, and n represents the number of nodes that can be reached from node u.* Clustering Coefficient: The clustering coefficient of a node u is a measure that reflects the proportion of triangles that involve the node. This can be expressed mathematically as: κ_u = 2T(u)/(deg(u)(deg(u)-1)). In this equation, T(u) represents the number of triangles passing through node u, and deg(u) indicates the number of edges connected to that node. §.§.§ Age group classification in the source domain Using the described feature extraction approach, each epoch yields 310 features from 62 scouts. The dataset includes 80 epochs per subject, totaling 1200 epochs per stimulus type, equally distributed among Y, T, and M age groups. A 10-fold cross-validation technique is employed to assess feature performance in age-group classification, reporting mean accuracy. Initially, the impact of varying the correlation threshold parameter (ρ_th) on classification, utilizing the Random Forest (RF) classifier, is explored for different stimuli. Subsequently, the performance of alternative classifiers, namely Linear Support Vector Machines (Linear SVM), Logistic Regression (LR), and k-Nearest Neighbors (kNN), is examined for varied stimuli. The classifiers use default parameters from the scikit-learn library <cit.>.
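A minimal sketch of this feature-extraction and evaluation pipeline is given below. It reuses the epoch_adjacency helper sketched earlier, and the variables epoch_matrices and y are placeholders for the real per-epoch scout matrices and age-group labels, so the snippet only illustrates the workflow rather than reproducing the exact settings of the study.

import numpy as np
import networkx as nx
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def graph_features(A):
    """310-dimensional feature vector: five node-level measures per scout."""
    G = nx.from_numpy_array(A)
    measures = [
        nx.degree_centrality(G),
        nx.betweenness_centrality(G),
        nx.eigenvector_centrality_numpy(G),
        nx.closeness_centrality(G),
        nx.clustering(G),
    ]
    return np.concatenate([[m[n] for n in G.nodes] for m in measures])

# X: one row of 310 features per epoch; y: corresponding age-group labels (Y/T/M)
X = np.array([graph_features(epoch_adjacency(V)) for V in epoch_matrices])
scores = cross_val_score(RandomForestClassifier(), X, y, cv=10)
print("Mean 10-fold accuracy:", scores.mean())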
§.§ Scout Connectivity Analysis The connectivity analyses involve extracting time series from source data (brain voxels or scouts) within a time window of -500 ms to 900 ms. These time series values were averaged over all the epochs in each stimulus case and are further used to compute an N × N correlation matrix per subject. The correlation matrix is computed on these averaged time series. Then, the average connectivity matrices generated based on Pearson correlation for each age group (Y, T, and M) are compared. A connectivity graph is plotted using a chord diagram, where edge colors indicate Pearson correlation strength. Source-level connectivity was chosen due to the susceptibility of sensor data to field spread and volume conduction across the scalp. Connectivity measures at the sensor level might misleadingly suggest brain connections. In contrast, connectivity measures between source time series are more anatomically interpretable and can be compared across participants. For frequency-specific brain connectivity analysis, the Phase Locking Value (PLV) <cit.> metric is used. PLV <cit.> leverages the relative instantaneous phase between two time series to quantify connectivity. In source domain connectivity analysis, we study the functional connections between different brain lobes, which are categorized into 62 scouts, as illustrated in Figure 2. §.§ Scout Clustering In order to perform scout clustering for each stimulus condition, mean scout time series values for all 62 scouts (Mindboggle atlas) were extracted within a time window of -500 ms to 900 ms. This generated a 62 × 700 dimensional matrix, which was then averaged for each subject. The resulting average matrix was used to calculate the average Pearson correlation matrix for every age group. Subsequently, it was transformed into a Fisher z-transformed r-matrix using Fisher's equation <cit.>. The resulting matrix is fully connected and defines a graph of weighted relationships. Each participant's final data matrix was a 62 × 62 z-matrix with zero diagonal values. This matrix was subsequently used to cluster the 62 regions of interest (ROIs) into the optimal number of clusters. To achieve this, the K-means elbow algorithm <cit.> was applied to the average z-matrix of each age group, yielding an optimal configuration of five clusters. Using these clusters, nodes were rearranged to construct an adjacency matrix for each age group. § RESULTS §.§ Impact of Ageing on Brain Activation in Audio-Visual Integration In the young age group, source activation is more pronounced in the early processing stages, particularly in frontal, temporal, and parietal brain regions. This early activation tends to diminish over time, especially in the theta frequency band. Activation is observed to start around 280-300 ms in response to stimuli and gradually decreases thereafter. This pattern is noticeable in both the unimodal and bimodal stimulus cases. A comparison between unimodal and bimodal stimuli is illustrated in Figures 3a and 3b, respectively. In the middle age group, by contrast, source activation begins slightly later, around 300-320 ms, and persists for a longer duration, particularly evident in bimodal cases such as simultaneous audio-visual (AV) and auditory lead visual by 50 ms (A50V) stimuli.
Moreover, in bimodal cases, brain activation levels are generally higher in middle age compared to unimodal stimuli, suggesting an enhanced effect of audio-visual integration (AVI) with age. Also, there is an increased involvement of prefrontal, temporal, and parietal areas during AVI tasks in middle-aged participants. This indicates that middle-aged individuals exhibit higher brain activation in response to combined audio and visual stimuli, particularly in regions associated with integration, attention, and cognitive processing. §.§ Role of Lower Frequency Bands in Age-Related Brain Source Activation. The study's findings reveal that in bimodal activation, the amplitude is higher in middle age compared to unimodal cases, as depicted in Figures 4a and 4b. These results specifically pertain to lower frequency bands, such as theta and alpha, as demonstrated in Figure 7. Higher frequency bands do not exhibit significant age-related changes. We performed a paired t-test to compare the unimodal and bimodal activation amplitude in the theta band for the Y, T, and M groups. The analysis revealed a statistically significant difference (t = -3.780, df = 2, p = 0.031 < 0.05, one-tailed), indicating that the mean of the bimodal (AV) group is significantly greater than that of the unimodal (V) group. This suggests that when both audio and visual stimuli are present, source activation levels increase with age. Notably, the theta band demonstrates higher source activation amplitude values than the alpha band. The transition phase (25-33 years) shows the highest activation levels compared to the young and middle age groups in lower and higher frequency bands. In the theta band, there is notably higher early activation, primarily in frontal, temporal, and parietal sites, which decreases with time. In the alpha band, activation predominantly occurs in frontal and temporal sites, lasting longer in middle age. §.§ Divergence in Brain Activation Sites in Response to Unimodal and Bimodal Stimuli. Significant AVI-related scouts were identified by examining the disparities in brain source activation due to aging. Scouts that exhibited noteworthy differences in activation levels across age groups were selected for further analysis. This selection process focused on pinpointing the scouts where the activation patterns underwent significant changes between the younger and middle-aged participants. In the case of unimodal stimuli, the difference between young and middle-aged brain source activation levels, and in the associated brain scouts, is larger than for bimodal stimuli, as shown in Figure 5. With the audio-visual integration effect, the differences in activation and deactivation levels decrease with aging, reducing by 500 pAm. After analyzing the peaks of the Mindboggle scouts, we observed that the significant scouts involved in the AVI effect are: caudal anterior cingulate, caudal middle frontal, fusiform, insula, lateral orbitofrontal, middle temporal R, parsopercularis, parstriangularis, superior frontal, superior parietal L, superior temporal, and transverse temporal. These scouts were merged together and labeled as AVI-associated scouts. The area under their peaks was then calculated for the AVI-associated scouts under the various stimuli. These peaks indicate the absolute mean difference in the average activation amplitude of young and middle-aged participants for the defined regions. It was observed that the area under the curve was smallest for the A50V stimulus.
This indicates that the difference between Y and M is smallest for the A50V stimulus, as is clear from Figures 5 and 6. §.§ Significant Differences in the Time-Frequency Decomposition Maps with Aging. The time-frequency decomposition of the EEG data was performed using Morlet wavelets. It is clear from Figures 7a and 7b that there is a significant difference between young and middle-aged AVI-associated scout power maps in lower frequency bands. Scout mean power varies in both the unimodal and bimodal cases. The theta band shows the largest differences as age increases, followed by the alpha band. This indicates that the theta and alpha bands are the most relevant frequency bands for studying aging during audio-visual tasks. Higher frequency bands above 15 Hz, in contrast, typically carry very low power. Among bimodal stimuli, asynchronous stimuli show comparatively lower power difference values in lower frequency bands. It should be noted that the power scale differs between the young and middle-aged groups, which contributes to the observed differences in power. §.§ Addition of Visual Stimuli to Auditory Stimuli Enhances Brain Functional Connectivity. Functional brain connectivity increases during middle age, and these changes vary depending on the stimulus type. Most of the enhanced connections occur between brain regions in the prefrontal, frontal, temporal, limbic, and occipital areas. The introduction of visual stimuli significantly boosts brain connectivity. In particular, bimodal audio-visual stimuli show more extensive connectivity compared to unimodal auditory stimuli. Synchronous audio-visual stimuli exhibit more connections than asynchronous ones. Notably, the central and parietal lobes have fewer connections compared to the frontal lobes. Among the different stimulus types, visual stimuli (V) lead to the highest number of connections, followed by AV, V50A, A50V, and A. Therefore, incorporating visual stimuli into auditory stimuli enhances functional connectivity among various brain regions. Figure 8 shows increased brain functional connectivity in middle age during various stimuli at an optimal correlation threshold value ρ_th = 0.85. This connectivity is plotted between different brain lobes subdivided into 62 Mindboggle scouts as depicted in Figure 2. §.§ Theta Band Exhibits the Highest Functional Connectivity During Audio-Visual Integration Tasks. In middle-aged individuals engaged in audio-visual integration tasks, the theta band displays the highest functional connectivity. Higher frequency bands exhibit lower levels of brain source functional connectivity compared to lower frequency bands. These connections vary depending on the type of stimulus, as depicted in Figure 9a. Among all stimuli, synchronous AV stimuli feature the highest number of edges in the theta band, and this number increases with age, as illustrated in the figure. The regions with the highest connectivity include the occipital, temporal, frontal, and pre-frontal areas. In the alpha band, visual (V), audio-visual (AV), and auditory lag visual by 50 ms (V50A) stimuli exhibit notable increases in connections with age, as indicated in Figure 9b. §.§ Scout Data-Based Age Group Classification. The impact of the correlation threshold (ρ_th) on classification accuracies for different stimulus types is illustrated in Figure 9 using a random forest classifier. Initially, the accuracy increases before decreasing. The optimal ρ_th value is found to be 0.85 and is used consistently thereafter for the brain functional connectivity study.
The Random Forest classifier achieves the highest classification accuracy of 90.75% for auditory (A) stimuli, followed by 90.5% for V stimuli. This suggests that unimodal stimuli exhibit more pronounced differences compared to bimodal stimuli, resulting in lower classification accuracy for the latter. Among the bimodal stimuli, the V50A stimulus achieves the highest classification accuracy of 89%, followed by AV (88.75%) and A50V (87.75%). After Random Forest, Linear SVM performs well, followed by kNN and Logistic Regression. Different classifiers were employed for age-group classification, and Table 1 provides an overview of the maximum accuracy achieved for each type of stimulus in the source domain. It is to be noted that the classification accuracies in the source domain for each stimulus are higher than the accuracies in the sensor domain <cit.>. §.§ Clustering of Mindboggle Scouts within the Framework of the AVI Task. The analysis results unveil five distinct clusters of brain networks derived from 62 brain scouts through k-means elbow clustering for each age group: young (Y), transition (T), and middle (M), as shown in Figure 11. These clusters, denoted as Cluster 1, Cluster 2, Cluster 3, Cluster 4, and Cluster 5, correspond to different brain regions. The clusters obtained are shown in Figure 12. A noteworthy observation is that brain networks or clusters exhibit heightened functional connectivity in middle-aged individuals compared to their younger counterparts. For instance, in the context of the AV stimulus, cluster 4 and cluster 1 demonstrate increased connectivity in middle age relative to young age, as depicted in Figure 13. Given that these clusters predominantly encompass frontal and temporal regions, this suggests that functional connectivity between the frontal and temporal lobes strengthens with age during tasks involving audio-visual integration. Consequently, brain network functional connectivity appears to undergo enhancement in middle age during audio-visual integration tasks. Moreover, with respect to unimodal stimuli, brain functional connectivity reaches its peak in middle age during visual tasks. Concerning bimodal stimuli, the AV stimulus exhibits the highest functional connectivity, followed by the V50A stimulus, with age progression. These findings provide insights into the dynamic alterations in brain connectivity across different stages of adulthood, offering valuable information regarding age-related distinctions in cognitive processing during audio-visual tasks. § DISCUSSION Neuroanatomical changes contribute to cognitive decline during aging. The specific neuronal mechanisms involved in age-related audio-visual integration remain unclear. The present study has addressed this knowledge gap by investigating the effects of audio-visual integration on middle-aged individuals at the source level of brain activity. The research aimed to highlight the brain regions involved in this process and identify how their connectivity patterns evolve with age. The study also investigated the prominent EEG frequency bands linked to audio-visual tasks and their age-related dynamics. Additionally, it explored the dynamic brain networks formed during audio-visual integration and how their connectivity changes as individuals age. The study conducted an audio-visual integration (AVI) task to systematically examine middle-aged individuals' responses to audio and visual stimuli, including target and non-target stimuli.
The findings revealed that middle-aged participants displayed increased brain activation when subjected to combined audio and visual stimuli. This heightened activation was particularly notable in brain regions associated with integration, attention, and cognitive processing. Important regions of interest included the caudal anterior cingulate, caudal middle frontal, fusiform, insula, lateral orbitofrontal, middle temporal (right), parsopercularis, parstriangularis, superior frontal, superior parietal (left), superior temporal, and transverse temporal areas. These regions were identified using the Brainstorm toolbox. These results align with prior research, including a meta-analytic study by Gao et al. (2022) <cit.>. The outcomes of the present study underscore the pivotal role of the frontal, pre-frontal, temporal, and occipital lobes in the process of AVI during middle age, consistent with previous investigations <cit.>. Our study discovered that middle-aged adults showed a weaker and delayed audio-visual integration (AVI) effect compared to younger adults in all conditions. This finding supports previous studies by Wu et al. (2012) and Ren et al. (2016), which observed similar trends <cit.>. However, it is important to note that earlier research has produced conflicting results, as seen in studies such as Laurienti et al. (2006) <cit.> and Diederich et al. (2008) <cit.>. Notably, some of the studies that observed a stronger AVI effect in older adults typically employed centrally presented stimuli. In contrast, our study used peripheral stimuli with a 5-degree visual angle. Peripheral vision tends to deteriorate with age <cit.>, and this factor might play a role in the variations in our results. Furthermore, with regard to the audio-visual integration (AVI) effect, our results highlight a decrease in the gap between brain activation and deactivation patterns in young and middle-aged adults. Notably, when considering unimodal stimuli, the difference in brain source activation levels between young and middle-aged individuals is more pronounced than with bimodal stimuli. This implies that the incorporation of visual stimuli alongside audio stimuli may partially alleviate the impact of aging on the brain. These findings strongly support the concept that audio-visual integration, as individuals age, functions as a compensatory mechanism, offsetting deficiencies in unimodal sensory processing <cit.>. Our research highlights different patterns of activation in two frequency ranges: theta and alpha. In the theta range, we observe increased early activation, primarily in the frontal, temporal, and parietal areas, which decreases with age. In contrast, the alpha range shows continuous activation in the frontal and temporal regions, especially during middle age. These findings align with prior EEG-based studies on aging, which have suggested diminished alpha power, indicating age-related variations in audio-visual integration, particularly in lower frequency bands <cit.>. Importantly, these patterns hold significance as both theta and alpha bands, particularly in frontal areas, are associated with cognitive control, short-term memory, and sensory information retention <cit.>. Our findings reveal higher brain functional connectivity in middle-aged and young adults under AV and V50A conditions compared to A and A50V conditions.
This heightened connectivity is due to the synchronization of auditory and visual neural signals, which arrive at the brain with less temporal separation during AV and V50A conditions, resulting in a stronger integration effect. This aligns with previous research <cit.>, which reported that visual stimuli have an onset latency of approximately 50 ms, while auditory stimuli have an onset latency of less than half that, around 9–15 ms from stimulus presentation. This closer temporal proximity likely amplifies the impact on the brain, leading to stronger connections between brain regions. Moreover, we observed a significant increase in theta band functional connectivity in middle-aged adults during audio-visual stimuli. This increased connectivity predominantly occurs in pre-frontal, frontal, temporal, limbic, and occipital regions. Theta activity is associated with broader brain integration mechanisms and central executive functions during audio-visual integration <cit.>. Also, there is an increase in alpha band functional connectivity in middle age, particularly during AV and V50A conditions. The alpha activity reflects active attentional suppression mechanisms and executive functions <cit.>, both of which are vital for multisensory responses <cit.>. Diederich et al. (2008) demonstrated that older adults exhibit greater neural enhancement during multisensory integration. This heightened cognitive demand with age likely contributes to the observed functional connectivity increases in the theta and alpha bands <cit.>. Our study has also revealed that brain networks formed during the audio-visual integration (AVI) effect can be categorized into five distinct networks. These networks exhibit variations in functional connectivity patterns with age. In summary, our findings align well with prior research, indicating that audio-visual integration experiences delays with age and displays differences between various age groups, such as young and old individuals <cit.>. Our results underscore the prominent involvement of the frontal and temporal cortex, particularly the superior temporal cortex, in AVI as individuals age. Furthermore, our study supports the notion of a compensatory neural mechanism with increasing age, characterized by heightened brain functional connections. Nonetheless, our study has several limitations. Firstly, we utilized 32 scalp electrodes to construct brain networks in the source domain, which is a relatively small number. Future research could employ EEG devices with 64 or 128 channels to enhance the accuracy of our findings. Additionally, our study had a limited number of subjects in each age group, and expanding the sample size in future studies would be beneficial. We used a relatively small number of features for classification, which could be increased to improve accuracy. Expanding the age groups to include individuals in their late sixties would contribute to a more comprehensive and comparative study. This additional age group could provide valuable insights into how audio-visual integration processes evolve in late adulthood, further enhancing our understanding of age-related cognitive changes in the source domain. Lastly, future research might consider conducting a longitudinal study on AVI in middle age to identify potential biomarkers that could aid in the early detection of brain abnormalities.
Differences in brain activation patterns, particularly in the superior frontal gyrus and superior temporal cortex, were observed between the young and middle-aged groups. In middle age, there was a delay in brain source activation, especially in response to bimodal stimuli. The study also highlights the important role of lower frequency bands in both age groups during audio-visual integration. Furthermore, it confirms an increase in functional connectivity within the brain during audio-visual integration, with these connections primarily concentrated in the frontal, temporal, limbic, and occipital areas. These findings shed light on the compensatory neural mechanisms that come into play as individuals age, especially during specific cognitive tasks. § DISCLOSURE The authors have declared no conflict of interest related to this study. § ACKNOWLEDGEMENT This research work is supported by the Neurocomputing Laboratory and Multichannel Signal Processing Laboratory (MSP Lab) at the Indian Institute of Technology Delhi (IIT Delhi), India. Data were collected at the Multichannel Signal Processing Laboratory (MSP Lab), IIT Delhi. The authors would like to thank all the participants for their contributions.
http://arxiv.org/abs/2311.15752v1
{ "authors": [ "Prerna Singh", "Ayush Tripathi", "Lalan Kumar", "Tapan Kumar Gandhi" ], "categories": [ "eess.SP" ], "primary_category": "eess.SP", "published": "20231127121616", "title": "Insights into Age-Related Functional Brain Changes during Audiovisual Integration Tasks: A Comprehensive EEG Source-Based Analysis" }
In this note, we study motivic pro-spaces of the form X/(X-x), where x∈ X is the closed point of a local essentially smooth scheme X over a scheme B. We obtain results on the classification up to motivic equivalences in the sense of the pointed Morel-Voevodsky 𝔸^1-homotopy motivic category 𝐇^∙(B), and study some classes of morphisms in between of X/(X-x)∧ S^l, l∈ℤ, in 𝐇^∙(B), and Voevodsky's motives category 𝐃𝐌(B). § INTRODUCTION We study one natural analogue of topological spheres in the motivic homotopy theory over a base scheme B. We consider the Morel-Voevodsky motivic homotopy category 𝐇(B) defined in <cit.> for finite-dimensional noetherian separated schemes B, and extended to arbitrary ones in <cit.>, and the Voevodsky motives category 𝐃𝐌(B), whose definition and properties over base schemes B are studied by Cisinski and Déglise in <cit.>, while when B= k for a field k, <cit.> provides a more precise description.
This is heavily used in the study of motivic homotopy theory over perfect fields,see<cit.>.Equivalence (<ref>) can fail,when k(x)/k is not separable, or the base scheme B has positive Krull dimension. In order of the classificationwe obtain the following result.Let B be a scheme, X, and X^' be smooth B-schemes, and x∈ X, x^'∈ X^' be points. (1) Suppose that the dimensions of X and X^' at x and x^' are equal, andthe residue fields at x and x^' are isomorphic, i.e. _B^x X = _B^x^' X^' = d∈ℤ,x≅ x^'∈_B,then there is an isomorphism X_x/(X_x-x)≃ X^'_x^'/(X^'_x^'-x^')of pro-objects in the pointed motivic homotopy category 𝐇^∙(B).(2) Suppose there is an isomorphism (<ref>)then_B^x X = _B^x^' X^' = d∈ℤ,p(x)=p^'(x^')=z∈ Bfor some d∈, where p X→ B, p^' X^'→ B are the structure morphisms.Moreover,if the residue fields K=𝒪_x(x) and K^'=𝒪_x^'(x^') are finite over the residue field k=𝒪_z(z) , thensdeg_kK = sdeg_kK^' where sdeg is the separable degree. Furthermore,ifK and K^' are simple extensions of k, thenx≃ x^'∈_B. §.§ DifferentialsImportant class of morphisms in between of suspensions of (<ref>)are the ones of the formX_x/(X_x-x)∧ S^1→ X_y/(X_y-y),where X∈_B,and x,y∈ X are points such thatx belongs to the closure of y in X, and X_x =X_y+1. The morphism (<ref>) is inducedby the immersions of schemes (X_x-y_x)↪ (X_x-x)↪ X_xin view of the isomorphism (X_x-x)/(X_x-y_x)≃ X_y/(X_y-y).The morphisms (<ref>)describe how the pro-objects (<ref>)are attached togetherforming the motivic space X, and define the differentials inthe coniveau spectral sequenceE_x(X_x/(X_x-x))⇒ E(X)that comes from the codimension filtration on X for a motivic S^1-spectrum E over B <cit.>.Let X,X^'∈_B, x,y∈ X, x^',y^'∈ X be pointssuch that x∈y,X_x =X_y+1,and similarly for x^',y^'. Suppose there is the isomorphism ofone-dimensional local schemes(y)_x≃(y^')_x^',then there is a commutative diagram of pro-objects in 𝐇^∙(B)X_x/(X_x-x)∧ S^1[r][d]^≃X_y/(X_y-y)[d]^≃X^'_x^'/(X^'_x^'-x^')∧ S^1[r]X^'_y^'/(X^'_y^'-y^')where the upper horizontal arrow is (<ref>), the lower horizontal arrow is defined similarly,and the vertical arrows are isomorphisms.§.§ Motives of the same weightFor a closed point x∈ X of X∈_B,the pro-object (<ref>) is an object, becauseX_x/(X_x-x)≃ X/(X-x)∈𝐇^∙(B).We denote by ^·_B and ^·_B the category of pairs (X,x) as above, and the subcategory, where ^x_B X=d; we say that d is the weight of X/(X-x). For a given d∈ℤ_≥ 0, we describe the morphisms Hom_(B)(X_x/(X_x-x), X^'_x^'/(X^'_x^'-x^')[l]),(X,x),(X^',x^')∈^d,·_Bin the category of Voevodsky's motives (B) under the assumption that the residue fields of x and x^' are simple over the ones of their images z and z^' in B.Let B be a scheme.For each d∈ℤ_≥ 0, there is the equivalence ofthe subcategories in (B) and ℤ×(_B) [ (X/(X-x)[l] |(X,x)∈^d,·,1_B, l∈ℤ )_(B)≃ℤ×_B(^1_B),; X/(X-x)[l]↦(l,x) ]where ℤ denotes the discrete category, ^1_B isthe subcategory _B spanned by the spectra of fields K, such that K is a simple extension of the residue field at some closed point z of the scheme B, and ^d,·,1_B is the subcategory of ^d,·_B, where x∈^1_B.§.§ Overview and ingredients. Theorem A(1) and Theorem B are proven in <Ref>. The proof of both is completely elementary and is provided by the construction of two Nisnevich squares obtained in <Ref>. The proof of the first and the second claims of Theorem A(2) presented in <Ref> is quite short and elementary as well. 
The claim (<ref>) of Theorem A(2) is deduced from Theorem C,while Theorem C has much less elementary proof, than the ones mentioned above. It is based on the Voevodsky's theory on the categories (k) <cit.>with the generality of an arbitrary field k provided by <cit.>. In addition, it uses some part of Gabbers Presentation Lemma, see <Ref>,provided by <cit.>, <cit.>, <cit.>.The claim (<ref>) of Theorem A(2) and Theorem C are proven in sequence of steps in<Ref>. In <Ref>, we generalise the result on wight one motivic cohomolgies from <cit.>. In <Ref>, we apply it to describe the hom-groups in (k) for a field k in between of the objectsC/V, where C is a smooth curve over k, and V is an open subscheme, in other words we prove Theorem C for the category ^1,·_k. In <Ref>, we make a reduction from the category ^d,·,1 to the category^1,·,1 that equals ^1,·. In <Ref>, we deduce Theorem C, and finalise the proof of Theorem A(2). §.§ Notation and conventions* Set^∙ is the category of pointed sets, 𝐇^𝐭𝐨𝐩 is the pointed homotopy category of simplicial sets or topological spaces, and 𝐇^∙(B) is the pointed motivic homotopy category over B. (-)_+ denotes the functor X↦ X⨿ * from unpointed categories to the pointed ones. *D^d and ∂ D^d stands for the d-dimensional topological disc and its boundary. *Given an injection Y→ X of sets, spaces, or presheaves,X/Y denotes the cofiber in the respective category.Speaking about the category of topological spaces, we denote the homotopy cofibre by X//Y.*Given a smooth B-scheme X,we denote by the same symbol its motive in𝐇(B), 𝐇^∙(B), (B). * Given a scheme V∈_B,denote by V/pt_B the reduced motive of V in (B), i.e. V/pt_B=Cone(V→pt_B)[-1]. * Given a morphism of schemes f S→ B, we denote by f^*the inverse image functor. * Denote by L_ the endofunctor on the derived category of abelian presheaves with transfersgiven by the Nisnevich local replacement, and write L_ for the Zariski one. * We use notationh^l_mot(F)=Hom_𝐃𝐌(k)(-,F[l]) for the presheaf on _k. * Given a vector space T over a field K, denote by T^∨ the dual vector space. * We writeZ↪̸X for a closed immersion of schemes. * Given a scheme X, we denote by X_red the maximal reduced closed subscheme. *-, and a point x∈ X, we denote by ^x_B X the relative dimension of X over B at x. *-, we denote by X_x the local scheme 𝒪_X,x. *-, we denote by x the closure of x in X. * -,there is the homomorphismI_X(x)→ T^∨_X,x; f↦ df, where T^∨_X,x=I_X(x)/I^2_X(x).* Given a set of points S in X, I_X(S) is the ideal of functions in 𝒪_X(X) that vanish on S. * ⟨ e_1,… ,e_n⟩⊂ N is the submodule generated by the elements e_1,… e_n∈ N of a module N over a ring R.* (f_1,…, f_n)⊂ R is the ideal generated by the elements f_1,… f_n∈ R of a commutative ring R. * Z(f_1,…,f_n) is the vanishing locus ofa set of regular functions f_1,… f_n∈𝒪_X(X) on a scheme X. * sdeg_KL denotes the separable degree of a fields extension L/K; sdeg_zx= sdeg_KL, for x= L, and z= K.* (C) or C is the set of the isomorphism classes of the objects ofa small category C. * The subcategory ofa category C spanned by the objects of a family F is denoted byF_C, or (E∈ F)_C, or (E| E∈ F )_C.* _B denotes the category of B-schemes. _B denotes the category of smooth B-schemes. * Denote by (C) the subcategory of (_B) spanned by the objects of a subcategory C of _B.* _B is the category of irreducible reduced zero-dimensional finite B-schemes, and ^d_kis the subcategory of such onesthat residue fields are generated by d elements over the residue fields of B. 
* Denote by (S)^⨿ or S^⨿ the minimal cocomplete subcategory ofa cocpmplete category C that contains a subcategory S.* ^·_B is the category of pairs (X,x), where X∈_B, x∈ X is a closed point. ^d,·_B is the subcategory, where ^x_B X=d. ^∙,1_B is the subcategory of such ones that x∈^1_B.§ PROOF OF THE EQUIVALENCE§.§ Linear algebra lemmasGivensurjective morphisms of vector spaces over a field KL_0p_0↞ Lp_1↠ L_1such that L=n+c, L_0 =L_1 = n for some n,c∈ℤ_≥ 0, there is a set of elements d_1,…, d_n∈ Lthat images along both of morphisms in (<ref>) are linearly independent.Equivalently,for any p_0 and p_1 as in (<ref>), there is a vector subspace F⊂ L such that F=(p_0(F))=(p_1(F))=n.Let us defineU:=(p_1)∩(p_2), and let e_1,…,e_s,r_1,…,r_c-s, q_1,…,q_c-s, t_1,…,t_n-c+sbe a basis of L such that e_i∈ U, r_i∈(p_1), q_i∈(p_2), for any i=1,… s, or i=1,… c-s.Denote byF:=⟨ d_1,...,d_n⟩ the subspace generated by elementsd_i=r_i+q_i,i = 1, …, c-s,t_i, i=c-s+1,…,n-c+s.By the construction the elements d_1,…,d_n are linearly independent in L, so (F)=n. Suppose p_1(F)≠ n,then some nontrivial linear combination of p_1(d_1),…,p_1(d_n) equals zero.This means that some nontrivial linear combination of d_1,…,d_nis contained in ker(p_1),and consequently,some nontrivial linear combination of q_1,…,q_c-s, t_1,…,t_n-c+sis contained in ker(p_1).Sothe elements (<ref>) are not linearly independent,that contradicts to that (<ref>) forms a basis of L.Thus p_1(F)= n.It follows similarly that p_2(F)= n.Let X be an affine scheme over field B, Z be a closed subscheme,x∈ Z be a closed point of Z, and K denote the residue field at x.Letp_0:X→ X_0,p_1:X→ X_1be morphisms of affine schemes that are smooth at x, andinduce isomorphismsZ≃ Z_0=p_0(Z),Z≃ Z_1=p_1(Z)onto the images of Z along p_0, and p_1. Denote x_0=p_0(x)∈ X_0, x_1=p_1(x)∈ X_1. Then there is a set of regular functionsf_1,…, f_n on X such that(0) f_1|_Z= … f_n|_Z = 0 and (1) the images of differentials of f_1,…,f_n at x on X along both of the morphisms of K-vector spacesT^*_x,X×_X_0x_0← T^*_x,X→ T^*_x,X×_X_1x_1are linearly independent over K. Consider the ideals I_X(x) and I_X(Z), and the K-vector space N^*_x,Z/X=I_X(Z)/(I_X(Z)∩ I^2_X(x))that we call by the conormal vector space of the subscheme Z in X at x.Since Z∩ (X×_X_0x_0)= x, the homomorphismI_X(Z)↠ I_X×_X_0x_0(x)is surjective. Composing with the surjectionI_X×_X_0x_0(x)→ T^⋆_x,X×_X_0x_0we get the surjective morphismN^*_x,Z/X↠ T^*_x,X×_X_0x_0.Similarly we prove the surjectiveiy of the morphism N^*_x,Z/X↠ T^*_x,X×_X_1x_1. Applying <Ref> to the morphisms (<ref>) and (<ref>)we get elements d_1,…,d_n∈ N^*_x,Z/X that images along the above morphisms are linearly independent.Since T^*_x,X = I_X(x)/I^2_X(x) the closed immersion Z→ X indices the embedding N^*_x,Z/X↪ T^*_x,Xthat image consists of differentials of regular functors on X that vanish on Z. Then the surjectionI_X(x)↠ T^*_x,Xrestricts to the surjectionI_X(X)↠ N^*_x,Z/X.So there are functions f_1,…,f_n∈ I_X(Z)that differentials at x equal d_1,… d_n, i.e.d_x,X f_1=d_1, …, d_x,X f_n=d_n. Now the claim (1) holds by (<ref>), and the claim (2) holds by (<ref>), and because the images of d_1,…,d_n∈ N^*_x,Z/X along the morphisms (<ref>) and (<ref>) are linearly independent by the above. 
Let K_0→ L→ C_0 and K_1→ L→ C_1 be short exact sequences of locally free finite rank modules over a ring R.Consider the commutative diagramK_0[dd]_e_01@^(->[rd]^k_0 K_1@_(->[ld]_k_1[dd]^e_10L@->>[rd]_c_0@->>[ld]^c_1 C_1C_0The following conditions are equivalent: (0) e_01 is injective;(1) the intersection of the images of K_0 and K_1 inside L is zero, i.e. K_0×_LK_1≃{0};(2) e_10 is injective.Due to the diagram (<ref>) is commutative, the injectivity of e_01 is equivalent to the injectivity of e_1∘ k_0 that is equivalent to the equality (c_1)∩(k_0)=0 that is equivalent to the equality (k_1)∩(k_2)=0, because the sequence K_1→ L→ C_1 is exact in L. Finally, (k_1)∩(k_2)=0 is equivalent to K_0×_LK_1≃{0}. So the equivalence of the first two points above is proven. The equivalence with the third one follows similarly.§.§ Nisnevich squares, Nisnevich and motivic equivalences Let B be a scheme, X and X^' be essentially smooth local B-schemes, Z and Z^' be closed subschemes, and x∈ X, x^'∈ X^' be the closed points. Given an isomorphism of schemesc Z≅ Z^', there is a pair of Nisnevich squaresX^''-Z^''@^(->[r][ld][rd]X^''[ld][rd] X-Z@^(->[r] X X^'-Z^'@^(->[r] X^'for some X^'', and Z^''. Consequently, there are Nisnevich local and motivic equivalences X/(X-Z)≃ X^'/(X^'-Z^')of pro-objects in the category of pointed presheaves over B. The second and the third claims follow from the first one. Consider the projectionsX X×_B X^' X^',closed immersionsX×_B x^' X×_B X^' x×_B X^',and the sequence of closed immersions x^''→ Z^''→ Z ×_B Z^'→ X ×_B X^',where Z^'' is the graph of the isomorphism Z ≃ Z^', and x^'' is the graph of the induced isomorphism x≃ x^'.Applying <Ref> toX×_B X^', Z^'', and x^''we get regular functions f_1 ,…, f_n ∈𝒪_X×_B X^'(X×_B X^') that vanish on Z^'', and such that T =i^*(T) = i^'^*(T) = n,whereT^∨ = ⟨ df_1,…,df_n ⟩⊂T^∨_x^'',X ×_B X^'',the elements df_1,…,df_n ∈ T^∨_x^'',X ×_B X^' are the differentials of (<ref>),and the homomorphisms of the cotangent spacesT^∨_x^'',X×_B x^'T^∨_x^'',X ×_B X^'T^∨_x^'',x×_B X^'are induced by (<ref>). Define Γ=Z(f_1,…,f_n)and denote by γΓ↪ X×_B X^'the closed immersion.Then Z^'' is a closed subscheme of Γ, and x^'' is a closed point in Γ. We are going to prove that the morphisms XΓ X^'induced by the projections (<ref>)are étale over x^''. Consider the commutative diagramT^∨_x,X@_(->[rd]^p^*[dd]_e^* T^∨@^(->[ld]_q[dd]^i^*∘ qT^∨_x^'',X×_B X^'@->>[rd]^i^*@->>[dl]_γ^*T^∨_x^'',ΓT^∨_x^'',x× X^'whereq is the canonical injection, the surjection γ^* is induced by the closed immersion γ, the surjection i^* is induced by the closed immersion i,the injection p^* is induced by the projection p.The composite i^*∘ p^* equals to the homomorphism induced by the composite morphism of schemesx×_B X^'→ x→ X;hencei^*∘ p^* = 0. Then since (T^∨_x,X)= n = (T^∨_x^'',x× X^'), it follows that (p^*)=(i^*). On the other hand, (q)= T = (γ^*),because of the construction of γ and the definition of q. Thus both the diagonals in (<ref>) are short exact sequences. By (<ref>) i^*∘ q is injective, and applying <Ref> to (<ref>)we conclude that e^* is injective.Then since _KT^∨_x,X = _KT^∨_x^'',Γ, it follows that e^* is an isomorphism.Similarly e^'^* T^∨_x^',X^'→ T^∨_x^'',Γ is an isomorphism. Thus e and e^' are étale over x^''.Then Γ×_X Z≅ Z^''⨿ R, Γ×_X^' Z^'≅ Z^''⨿ R^',for some closed subschemes R and R^' of Γ. Define X^'' = Γ - (R∪ R^'). Thus we get the Nisnevich squares (<ref>), and theclaim follows.Let X,X^'∈_B, x,y∈ X, x^',y^'∈ X be pointssuch that x∈y,X_x =X_y+1,and similarly for x^',y^'. 
Suppose there is the isomorphism of schemes(y)_x≃ (y^')_x^',then there is a commutative diagram of pro-objects in 𝐇^∙(B)X_x/(X_x-x)∧ S^1[r][d]^≃X_y/(X_y-y)[d]^≃X^'_x^'/(X^'_x^'-x^')∧ S^1[r]X^'_y^'/(X^'_y^'-y^')where the upper horizontal arrow is (<ref>), the lower horizontal arrow is defined similarly,and the vertical arrows are isomorphisms. The schemes (<ref>) are always local of Krull dimension one, and not necessarily regular. By <Ref> we may assume that there is a Nisnevich squaree (X^'_x^',X^'_x^'-y^')→ (X_x,X_x-y).In this situation, the Nisnevich square e induces the diagram (<ref>). § MOTIVIC COHOMOLOGIES WITH RESPECT TO A ONE-DIMENSIONAL GENERALISED SPHEREIn this section, we generalise the results on weight one motivic homologies from <cit.>.Let U be essentially smooth scheme over k such that Pic_0(U)≅ 0. Let V = ^1_k-z for some closed point z∈^1_k. ThenH^l(Cor( U ×_k Δ^∙_k , V )) ≅ℤ⊕ k[U×_k z]^×, l=0, 0, l>0.The groups Cor(X,V) for X∈_k by the definition are generated by irreducible closed subschemesZ⊂ X×_k Vfinite surjective over X.Since U∈_k and by the assumption on _0(U) we have _0(Δ^l_k×_k U×^1)=_0(U)=0. Henceany Z as in (<ref>) with X=Δ^l_k×_k Uequals to the vanishing locus Z(f) of some functionf∈ k[Δ^n_k× U×_k^1_k]≅ k[Δ^n_k× U][t], f=t^n+c_n-1t^n-1⋯+c_0, n∈ℤ_≥ 1such that for some invertible function u∈ k[U×_k z]^×, there is the isomorphismf|_Δ^n_k× U×_k z=u,where u at the right side denotes the inverse image along the morphism Δ^l_k×_k U→ U. The set of such functions f as in (<ref>), and (<ref>), for a given n≥deg_k z, and u∈ k[U×_k z]^×, is parameterised by the set of Δ^l_k-points of the affine space ^n-deg_k z_Δ^l_k. Sincefor a large enough n,the affine space ^n-deg_k z_k is ^1-contractible, it follows the quasi-equivalence of complexesCor(Δ^∙_k×_k U,V)≃ℤ⊕ k[U×_k z]^×,where the right side denotes the complex of abelian groups concentrated in the degree 0. For a closed point z∈^1_k, consider the motives [ T = ^1_k/(^1_k-z) ∈𝐃𝐌(k),;T_k(z) = ^1_k(z)/(^1_k(z)-0) ∈ 𝐃𝐌(k(z)). ]Let r_k→_k(z) be the base change morphism induced by the fields extension k(z)/k.For anyl∈ℤ, there is the isomorphism of presheaves on _kh^l(T)≃ r_*h^l(T_k(z)).Moreover, if the field extension k(z)/k is purely inseparable, thenh^l(T)≃ r_*h^l(r^*(T)).Sincethe functor r preservessemi-local henselian essentially smooth schemes, and such schemes over k and z have the trivial Picard group _0, in view of <Ref> there are the quasi-isomorphisms of complexes of presheaves on _k L_Cor_k(-×_kΔ^∙_k,^1_k-z) (1)≃L_ r_*Cor_k(z)(-×_kΔ^∙_k(z),^1_k(z)-0) (2)≃r_* L_Cor_k(z)(-×_kΔ^∙_k(z),^1_k(z)-0).The isomorphism (<ref>) follows becauseh^l(^1_k-z)≃ H^l(L_Cor_k(-×_kΔ^∙_k, ^1_k-z)), see (<ref>), and the isomorphismr^*(^1_k/(^1_k-z))≃^1_k(z)/(^1_k(z)-0),that holds, because (z×_k z)_red≃ z. We use the following results provided by Gabber's Presentation Lemma from<cit.>, <cit.>, <cit.>.Let C∈_k, _k C=1. For any closed point z in C, there isan étale morphism f C→^1_k that maps z isomorphically on f(z). The claim is provided by <cit.>, <cit.>, <cit.>.For any C∈_k, _k C=1, and a sense open subscheme V=C-D, C/V≅⊕_z∈ D^1_k/(^1_k-v_z)≅⊕_z∈ D (^1_k-v_z)/pt_k[1],for some family of closed points v_z∈^1_k, where z runes over points of D. For any C and V=C-D as above,C/V≅⊕_z∈ D C/(C-z). 
For each z, by <Ref> there are isomorphismsC/(C-z)≃^1_k/(^1_k-f(z))≃ (^1_k-f(z))/pt_k[1].So the claim follows.For any C∈_k such that dim_k C=1, and any closed reduced zero-dimensional subscheme D , there is the isomorphism of presheaves h^l(C/(C-D))≃0,l≠ 1,2, 𝒪(-× D)^×,l=1 Pic_0(-× D),l=2.Because of <Ref>the equivalence (<ref>)follows from (<ref>)by <cit.>.§ SUBCATEGORY OF CIRCLES IN VOEVODSKY'S MOTIVES OVER A FIELDRecall the subcategory ^1_k⊂Sch_k spanned by spectra of simple field extensions of k. The coproduct closure (^1_k)^⨿ in Sch_k is spanned by schemes of the form ∐_i=1^lSpec(k(α_i))for a simple finite field extensions k(α_i)/k. So (^1_k)^⨿ is the category of irreducible zero-dimensional finite k-schemes that residue fields are simple extensions of k. Consider the category Circ_k of pairs (C,V), where C∈_k, dim_k C=1, and V is a dense open subscheme,and the subcategory in (k)Circ_(k) = (C/V|(C,V)∈Circ_k)_(k)spanned by the motives of the motivic spaces C/V for all (C,V)∈Circ_k.There is the equivalence of categories[ Circ_(k)≃ ℤ×Cor((^1_k)^⨿);; C/V[l]↦(l,(C∖ V)_red). ]In particular, if z_0∈^1_k, z_1∈^1_k, k(z_0)≇k(z_1), then^1_k/(^1_k-z_0)≄^1_k/(^1_k-z_1). In view of <Ref> the claim reduces to <Ref> below.LetV_0 = ^1_k-z_0, V_1 = ^1_k-z_1. Then Hom(V_0/pt_k,V_1/pt_k)≅Cor_k(z_0,z_1), andHom(V_0/pt_k,V_1/pt_k[l])=0 for l≠ 0. Let U stands for V_0 or pt_k. Then _0(U× z_1)≅ 0, and since V_1/pt_k≃^1_k/V_1[-1], by <Ref>Hom(U,V_1/pt_k) ≃ k[U×_k z_1]^×,and Hom(U,V_1/pt_k[l])≅ 0 for l≠ 0. Then Hom(V_0/pt_k,V_1/pt_k)≅k[V_0×_k z_1]^×/k(z_1)^×≃ k[V_0×_k z_1]^×/k[^1_k×_k z_1]^×,andHom(V_0/pt_k,V_1/pt_k[l])=0 for l≠ 0. There is the commutative diagramk[V_0×_k z_1]^×/k[^1_k×_k z_1]^×@^(->[d] [r]^<<<<<<≃Z_0(z_0×_k z_1)@^(->[d]k(^1_k×_k z_1)^×/k[^1_k×_k z_1]^×@->>[d] [r]^<<<<<<≃Z_0(^1_k×_k z_1)@->>[d]k(V_0×_k z_1)^×/k[V_0×_k z_1]^×[r]^<<<<<<≃Z_0(V_0×_k z_1) ,where Z_0(-) denote the group of zero-dimensional cycles in a scheme. Here the horizontal isomorphisms are induced by the mapping k(^1_k×_k z_1)^×→ Z_0(^1_k×_k z_1);v↦divv.The upper vertical arrows are injective, and the square is pullback. The right side bottom vertical surjection is induced by the open immersion U_0×_k z_1→^1_k×_k z_1,and the left side bottom vertical arrow is surjective, becausek(^1_k×_k z_1)^×≅ k(V_0×_k z_1)^×. So the isomorphism (<ref>) follows, because Z_0(z_0×_k z_1)≅Cor_k(z_0,z_1).The isomorphism (<ref>) defines the fully faithful functor[ (^1_k/(^1_k-z)| z∈^1_k)_(k) → (_k),; ^1_k/(^1_k-z) ↦ z ]where the left side is the subcategory of (k) spanned by the objects of the form ^1_k/(^1_k-z).We claim, firstly, that the groupHom(V_0/pt_k,V_1/pt_k)is generated by the classes of correspondences defined by the divisors of regular functions f∈ k[^1_k×_k^1_k]=k[^1_k][t]such thatf=t^n+c_n-1t^n-1⋯+c_0, andf is invertible on k[V_0×_k z_1]^×. 
Indeed, this follows from <Ref> becausein view of the embeddingk[V_0×_k z_1]^×→ k(^1_k×_k z_1) any function v∈ k[V_0×_k z_1]^×is a fraction v=r_0/r_∞, for some functions r_0,r_∞∈ k[^1_k×_k z_1] invertibleon V_0×_k z_1.To prove the claim on (<ref>)we need to show thatthe morphisms in (<ref>) agree with the composition rule in _k.By the above to check that the isomorphism (<ref>) agrees with the compositionit is enough to consider a pair of morphisms c_01∈Hom(V_0,V_1), and c_12∈Hom(V_1,V_2) given by the divisors of the functionsf_01∈ k[^1_k×_k ^1_k]=k[t_0,t_1],f_12∈ k[^1_k×_k ^1_k]=k[t_1,t_2].Then the claim follows, becausethe composite c_12∘ c_01 is defined by Z_01×_V_1 Z_12.Denote [ T = ^1_k/(^1_k-0) ∈ (k,ℤ/pℤ),; T = ^1_k/(^1_k-z) ∈ (k,ℤ/pℤ). ],where z∈^1_k is a closed point.Let k(α) be a simple purely inseparable extension of a field k. Let z∈^1_k be a point such that k(z)≃ k(α). Then the map[ Hom(T, T) ⊗Hom(T,T) →Hom(T,T) = ℤ/pℤ;; c_0 ⊗ c_z ↦c_z∘ c_0 ] given by the composition in the category (k,ℤ/pℤ) is trivial.Letc_0Hom(^1_k/(^1_k-0), ^1_k/(^1_k-z)),c_zHom(^1_k/(^1_k-z),^1_k/(^1_k-0)),be morphisms in 𝐃𝐌(k,ℤ/pℤ),where p=chark. In view of<Ref> since deg_k k(z)=0∈ℤ/pℤ, it followsthe equality c_z∘ c_0=0∈Hom(^1_k/(^1_k-0), ^1_k/(^1_k-0))=ℤ/pℤ. § FIELD GENERATORS AND SCHEME DIMENSION Let p X→ B be a smooth morphism of schemes, and x∈ X. Let k = 𝒪_p(x)(p(x)), and K = 𝒪_x(x).Suppose that K/k is finite, and K≅ k(ϕ_1,…,ϕ_n).Then there a Zariski neighbourhood U of x in X and a smooth closed subscheme X^' in Usuch that x∈ X^', ^x_B X^' = n. Since x has an affine Zariski neighbourhood, without loss of generality we can assume that X is affine. Since K ≅ k(ϕ_1,…,ϕ_n), x≅ v for some closed point v∈^n_k≃ k[t_1,… t_n], t_i↦ϕ_i.Since the extension 𝒪_x(x)/𝒪_p(x)(p(x)) is finite, the point x in X×_B p(x) is closed, and there is the surjection 𝒪_X(X)→𝒪_x(x). Let f_1∈𝒪_X(X) be such that f_1↦ϕ_1and the induced homomorphism of K-vector spacesT_X×_B p(x),x→ T_^1_k,f_1(x)is non-trivial.Further, by induction there are functions f_1,…,f_n∈𝒪_X(X) such that f_i↦ϕ_i,and the morphism f=(f_1,…,f_n) X→^d_Binduces the surjection of K-vector spaces T_X×_B p(x),x→ T_^n_k,v,v=f(x). So f is smooth over x. Thenthere are functions f_n+1,…, f_^x_B X∈ I_X(x) such thatthe morphism ff=(f_1,…,f_n,f_n+1,…, f_^x_B X) X→^^x_B X_Bis étale. PutX^' := Z(f_n+1,…,f_^x_B X).Then the claim follows. Let X∈_B, x∈ X. For any smooth closed subscheme X^' in X such that x∈ X^', there is an isomorphism in 𝐇^∙(B)X/(X-x)≃ X^'/(X^'-x)∧ T^^x_B X-^x_B X^'.The claim follows because the normal bundle N_X^'/X is locally trivial. § CLASSIFICATION §.§ General typeLet z∈ B be a point, i z→ B denote the respective morphisms of schemes, and e S→ z be a morphism of schemes.Let X∈_B, and x∈ X be such thatp(x)≠ z, where p X→ B denotes the canonical morphism. Then there is the isomorphisme^*i^*(X_x/(X_x-x))≃ *of pro-objects in 𝐇^∙(K^). To prove the first isomorphismwe note that since z≠ p(x), thenx×_B z=∅, andconsequently,either X_x×_B z=∅, or X_x×_B z = (X_x-x)×_B (B-z). Let z∈ B be a point,K=𝒪_z(z), and consider the purely inseparable closure K^ and the algebraic closure K^alg of the field K. Let i z→ B,e z^→ z, e_alg z^alg→ z be the respective morphisms of schemes,where z^ =K^, z^alg =K^alg.Then for any X∈_B, and x∈ X such that p(x)=z where p X→ B denotes the canonical morphism,there are isomorphisms of pro-objects in 𝐇^∙(K^)e^*i^*(X/(X-x))≃T^∧ d∧ (x_)_+,where x_ = (x×_B z^)_red, ande_alg^*i^*(X/(X-x))≃T^∧ d∧ (∐_(sdeg_zx)z^alg)_+. 
To prove the first isomorphismwe note that since z≠ p(x), thenx×_B z=∅, and consequently X×_B z = (X-x)×_B (B-z).We proceed with the second claim. Without loss of generality we may assume that X is affine. Denote X_ = X×_B z^.Then x_ is a point of a scheme X_∈_K^, and since X_ is of finite type over k^,the field K^(x_) has finite transcendence degree over K^. Since K^ is a perfect field, the scheme x_∈_K^is essentially smooth, and consequently there is a retraction of the local scheme r (X_)_x^→ x_, and regular functions f_1,…,f_c∈𝒪_X_(X_) that vanishes at x_ and such that the differentials df_1,…,df_c are linearly independent. Then the morphism of K^-schemes (X_)_x^→^c_x^ defined by f_1,…,f_c, and r is étale and induces the isomorphism x^→ 0×_K^ x^. So the isomorphism in (<ref>) follows.The isomorphism (<ref>)follows because (z^alg×_z x)_red≅∐_(sdeg_zx)z^alg.Let B be a scheme, p_0 X_0→ B, p_1 X_1→ B be smooth morphisms schemes,andx_0∈ X_0, x_1∈ X_1 be points. Suppose that there is an isomorphismX_0/(X_0-x_0)≃ X_1/(X_1-x_1) of objects in the category of pro-objects in (B), then_B^x_0 X_0 = _B^x_1 X_1,p_0(x_0)=p_1(x_1)=z∈ B.Moreover, for closed points x_0 and x_1 of X_0 and X_1,there is the equality sdeg_z x_0 = sdeg_z x_1 where sdeg separable degree of the extension of the residue fields.The first claim follow by (<ref>)applied twice with the assignments: (0) z=p(x_0), x=x_1, X=X_1, and (1) z=p(x_1), x=x_0, X=X_0.The second claim follows similarly by (<ref>)§.§ One-dimensional type Recall that _B is the category ofirreducible reduced finite zero-dimensional schemes over B, that is opposite tothe category of finite field extensions of the residue fields of closed points in B. ^1_B⊂_Bis the subcategory spanned bysimple finite filed extensions of the residue fields of closed points of B. Consider the category ℤ×(_B), where ℤ denotes the discrete category, and the subcategory ℤ×(^1_B). Let B be a scheme.For each d∈ℤ_≥ 0, there is the equivalence ofthe subcategories of (B) andℤ×(_B) [ (X/(X-x)[l] |(X,x)∈^d,·,1_B, l∈ℤ )_(B)≃ℤ×_B(^1_B),; X/(X-x)[l]↦(l,x/B) , ]wherex/B∈_B stands for the object given by the morphism x→ B. By <Ref> the claim holds when B= k for a field k. The claim for any base scheme Bfollows from the result over residue fields of closed points in Bbecause of<Ref>, and the reflective adjunction i^*⊣ i_*i_*(z) ⇆(B) i^*for each closed point z∈ B <cit.>. There is a map[ℤ_≥ d×(^d_B) →(^d,·_B);(N, K) ↦ (^N-d×^d_B,(0,v)) ]where v∈^d_B is a closed point with the residue field K. The claim follows becausefor any field K that defines an object in ^d_B,there is a closed point v∈^1_B such that 𝒪_v(v)≅ K.(1) The map (<ref>) induces the surjection[ ℤ_≥ d×(^d_B)↠ (X/(X-x)|(X,x)∈^d,·_B )_𝐇^∙(B);(N,K)↦((^N-d×^1_B/( ^N-d×^d_B-(0,v) ) . ](2) There is the equivalence of categories(X/(X-x)|(X,x)∈^d,·_B)_𝐇^∙(B)≃(T^∧ L∧^d_B/(^d_B-(0,v))|L∈ℤ_≥ 0, v∈^d_B)_𝐇^∙(B). It follows by <Ref> thatfor any (X,x)∈^d,·_Bthere is a motivic equivalence X/(X-x)≃ X^'/(X^'-x^')∧ T^∧^x_B X-d,for someX^'∈_B, and x^'∈ X^' such that _B^x^' X^' =d, and x≃ x^'.Moreover,by <Ref>X^'/(X^'-x^')≃^d_B/(^d_B-v),for some v∈^d_B.Hence both claims of the proposition follow.Consider the functor[^·_B →ℤ×_B; (X,x) ↦ ( X,x/B). ]The functor (<ref>) inducesthe isomorphisms of sets[(X/(X-x)|(X,x)∈^·,1_B )_(B)≃;(X/(X-x)|(X,x)∈^·,1_B )_(B)≃; (X/(X-x)|(X,x)∈^·,1_B )_𝐇^∙(B)≃;ℤ_≥ 1×(^1_B). ]<Ref>(1) provides the surjective maps form the bottom to the top. 
The composite morphism is an isomorphism because ofthe isomorphisms(X/(X-x)|(X,x)∈^d,·_B)_(B)≃ (T^∧ d-1∧^1_B/(^1_B-(0,v))|v∈^1_B)_(B) provided by <Ref>(2), andthe first claim of <Ref>, and the isomorphism(^1_B/(^1_B-(0,v))|v∈^1_B)_(B)≃(^1_B)provided by <Ref>.Then the claim follows.§.§ SummarySummarising <Ref> and <Ref> we get. Let B be a scheme, X,X^'∈_B be smooth B-schemes, and x∈ X, x^'∈ X^' be points.Suppose there is an isomorphism X_x/(X_x-x)≃ X^'_x^'/(X^'_x^'-x^')of pro-objects in the category (B), then_B^x X = _B^x^' X^' = d∈ℤ,p(x)=p^'(x^')=z∈ B,for some d∈, where p X→ B, p^' X^'→ B are the structure morphisms.Moreover,if the residue fields K=𝒪_x(x) and K^'=𝒪_x^'(x^') at x_0 and x_1are finite over the residue field k=𝒪_z(z) at z, thensdeg_kK = sdeg_kK^', where sdeg is the separable degree. Furthermore,ifK and K^' are simple over k, thenx≃ x^'∈_B. § LEMMAS ON VOEVODSKY'S MOTIVES Recall that the combination of the results of<cit.> provide the isomrpihsmsHom_𝐃𝐌(k)(U,Y(l)[n])≅ℍ^-n+l_(Cor(U×_kΔ^∙_k,Y⊗^∧ l)) .Let U∈_k be such that for any additive ^1-invariant abelian presheaf with transfers F the morphism F(U)→ F(U^(0)) is injective, where U^(0) is the union of generic points of U.ThenHom_𝐃𝐌(k)(U,Y(l)[n])≅H^-n+l(Cor(U⊗_kΔ^∙_k,Y⊗^∧ l)), l≥ 0,H^-n+l(Cor(U⊗_kΔ^∙_k⊗^∧ -l,Y)), l< 0, , where ⊗_k denotes the product in Cor(k).Let U and U^(0) be as in <Ref>. Then for any ^1-invariant presheaf with transfers F, there are isomorphisms H^0_(U,F_)≅ F(U),H^l_(U,F_)≅ 0, l>0.Denote ^-1_(U,F)=(F(U)→ F_(U)), ^0_(U,F)=(F(U)→ F_(U)) ^l_(U,F)=H^l_(U,F_) for l≠ -1,0.The presheaves^l_(U,F) are ^1-invariant presheaves with transfers by<cit.>.<cit.>. Then the homomorphism^l_(U,F)→^l_(U^(0),F)is injective by the assumption.Since the Nisnevich topology on the scheme U^(0) is trivial,^l_(U^(0),F)≅ 0,∀ l∈ℤ,and consequently ^l_(U,F)≅ 0 for all l∈ℤ. The combination of(<ref>) and the isomorphismℍ^-n+l_(Cor(U×_kΔ^∙_k,Y⊗^∧ l)) ≅ H^-n+l(Cor(U×_kΔ^∙_k,Y⊗^∧ l)),provided by<Ref>, implies the claim for l≥ 0. The claim for l<0 follows by inductionfrom the claim for l=0by the use of <cit.>.
http://arxiv.org/abs/2311.16264v1
{ "authors": [ "A. E. Druzhinin", "A. A. Urzabaev" ], "categories": [ "math.AG" ], "primary_category": "math.AG", "published": "20231127191317", "title": "On the motivic classification of codimentional filtration quotients" }
Pinning of liquid droplets on solid substrates is ubiquitous and plays an essential role in many areas, such as microfluidics and biology. Although pinning can often reduce the efficiency of various applications, a deeper understanding of this phenomenon can actually offer possibilities for technological exploitation. Here, by means of molecular dynamics simulation, we identify the conditions that lead to droplet pinning or depinning and discuss in detail the effects of key parameters, such as the height of the physical pinning barrier and the wettability of the substrates. Moreover, we describe the mechanism of the barrier crossing by the droplet upon depinning, identify the driving force of this process, and elucidate the dynamics of the droplet. Not only does our work provide a detailed description of the pinning and depinning processes, but it also explicitly highlights how both processes can be exploited in nanotechnology applications to control droplet motion. Hence, we anticipate that our study will have significant implications for the design of substrates in micro- and nano-scale systems and will assist with assessing pinning effects in various applications.

§ INTRODUCTION
The control of droplets on solid substrates is crucial for many applications in various areas, such as microfluidics, microfabrication, coatings, and biology. To this end, the accurate steering of droplets' motion can be realised by proper substrate design. In materials science, for example, a design based on micro-pillar structures has been shown to lead to superhydrophobic substrates <cit.> for, among others, self-cleaning <cit.> and anti-icing <cit.>. As a result of this specific design, pinning effects naturally arise that may affect a droplet's motion by introducing a sticky or slippery behaviour <cit.>, which also depends on substrate wettability <cit.>. By means of lubrication theory, Joanny and Robbins have investigated the dynamics of a contact line on a heterogeneous plate, which is advanced at constant force or velocity <cit.>. They have unveiled the scaling of the force and the velocity and found that alternating patches of constant wettability produce a linear relation. Espín and Kumar have presented a lubrication-theory-based model to describe contact-line pinning on substrates with heterogeneities; their work discussed the effect of roughness through a continuum model that has been shown to agree with experiments <cit.>. Alava and Dubé have analysed the statistical properties of the spreading contact line (droplet radius and contact angle) on heterogeneous surfaces <cit.>. Moreover, Marmur has described equilibrium wetting on rough surfaces, determining the transition between homogeneous and heterogeneous wetting regimes on the basis of the Wenzel and Cassie–Baxter equations <cit.>. Experimentally, Ramos and Tanguy have studied the pinning–depinning phenomenon of a contact line on a solid surface decorated by a random array of nanometric structures and found a linear relation between the hysteresis caused by defects and their areal density <cit.>. In this context, the relation between the dynamic contact angle and contact line speed has recently been considered by numerical simulation <cit.>. In another example, substrates characterised by a gradient of a physical or a chemical property in a particular direction along the substrate can steer the motion of liquid droplets without the requirement of an external energy source <cit.>.
A well-known example is durotaxis, where a droplet can autonomously move along a substrate due to the presence of a stiffness gradient <cit.>, whichcrucially depends on the wettability of the substrate <cit.>. In any of the above systems, pinning of contact line can be advantageous or impede droplet motion or its manipulation, leading to a greater or lower efficiency of relevant processes <cit.>.There are still outstanding issues that remain regarding the possibility of exploiting the effects of droplet pinning and substrate wettability in controlling droplet's motion. This is especiallytrue regarding microlevel origins of pinning and its mechanism, which can be advantageous for various nanotechnology applications.This paper aims at filling the above gap by taking advantage of high-fidelity in silico experiments at nanoscale. We employ molecular dynamics (MD) simulation based on a coarse-grained model and the system setup of Figure <ref>. Apart from aiming at acquiring an in-depth understanding of droplet pinning on solid substrates with different wettability, we also argue that the pinning has the potential of controlling nanodroplets, for example, selective droplet separation. For this reason, we have studied a range of different pinning scenarios, which include various combinations of substrate wettabilities and pinning barriers for droplets of different sizes. Thus, we anticipate that our results will inspire the design of substrates for steering droplets in micro- and nano-scale systems and will assist with assessing pinning effects in a range of different nanotechnological applications. § MATERIALS AND METHODS We have used MD simulations of a coarse-grained model <cit.> where interactions between different components of the system, i.e. the drop and the substrate beads, are described by means of the Lennard-Jones (LJ) potential, namely, U_ LJ(r) = 4ε_ ij[(σ_ij/r)^12 - (σ_ ij/r)^6],where r is the distance between any pair of beads in the system, and i and j indicate the type of beads: `d' for droplet beads, `r' for the beads that belong to the red substrate, and `o' for the beads of the orange substrates (Figure <ref>). In our model, σ_ ij = σ for all combinations of types i and j, with σ being the unit of length. As usual, the LJ potential is cut and shifted at a cutoff distance r_c=2.5σ for any interaction involvingthe droplet beads, while r_c=2^1/6σ (purely repulsive potential) for any interactions between the substrate beads. The strength of the interactions is defined by the parameter ε_ ij of the LJ potential. In our case, the parameters, ε_ rd and ε_ od vary between 0.3ε and 0.7ε, where ε isthe energy unit and k_B (Boltzmann's constant) is considered as unity <cit.>. The interactions ε_ rd and ε_ od are used to tune the wettability of the droplet on the red and the orange substrates (Figure <ref>).We have considered droplets of different size, which consist of N=112, 1008, or 5040chains of ten coarse-grained beads each. The finite extensible nonlinear elastic (FENE)potential <cit.> was used to tether together consecutive beads in these polymer chains, which is mathematically expressed as follows: U_ FENE(r) = -0.5 K_ FENE R_ 0^2 ln[ 1 - (r/R_ 0)^2],where r is the distance between two consecutive beads along the polymer backbone,R_ 0=1.5σ expresses the maximum extension of the bond, and K_ FENE = 30 ε/σ^2 is an elastic constant. 
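To make the interaction model concrete, the two potentials above can be written out in a few lines of NumPy. This is only an illustrative sketch using the parameter values quoted in the text (σ_ij = σ, r_c = 2.5σ or 2^1/6σ, K_FENE = 30 ε/σ², R_0 = 1.5σ); it is not the LAMMPS implementation actually used for the simulations.

```python
import numpy as np

def lj_cut_shifted(r, eps, sigma=1.0, r_cut=2.5):
    """Cut-and-shifted Lennard-Jones pair potential defined above.

    The potential is truncated at r_cut and shifted so that U(r_cut) = 0.
    r_cut = 2.5*sigma is used for any pair involving droplet beads, while
    r_cut = 2**(1/6)*sigma gives the purely repulsive substrate-substrate case.
    """
    def u(x):
        sr6 = (sigma / x) ** 6
        return 4.0 * eps * (sr6 * sr6 - sr6)

    r = np.asarray(r, dtype=float)
    return np.where(r < r_cut, u(r) - u(r_cut), 0.0)

def fene_bond(r, k_fene=30.0, r_0=1.5):
    """FENE bond potential tethering consecutive beads of a 10-bead chain."""
    r = np.asarray(r, dtype=float)
    if np.any(r >= r_0):
        raise ValueError("bond length must stay below the maximum extension R_0")
    return -0.5 * k_fene * r_0 ** 2 * np.log(1.0 - (r / r_0) ** 2)

# droplet bead attracted to a substrate bead with eps_rd = 0.5 epsilon
print(lj_cut_shifted([1.0, 1.5, 2.0, 3.0], eps=0.5))
# purely repulsive substrate-substrate interaction
print(lj_cut_shifted([1.0, 1.1], eps=1.0, r_cut=2.0 ** (1.0 / 6.0)))
# FENE bond energies for typical bond lengths
print(fene_bond([0.85, 0.97, 1.2]))
```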
For the chosen chain length, there aren't any evaporation effects and the vapour pressure is therefore sufficientlylow <cit.>.To evolve our system in time, we used MD simulation by choosing the Langevin thermostat <cit.> as implemented in the LAMMPS package <cit.>. The time unit inour simulations is τ =√(mσ^2/ε), where m is the mass unit. Thetime-step for the integration of the equations of motion for the droplet particles isΔ t =0.005τ. Thus, the temperature T fluctuates around a predefined value T=ε/k_B, where k_B is the Boltzmann constant, and the energy ε is measured in units of k_B T. Periodic boundary conditions are applied in all directions and we guarantee that mirror images of the droplet do not interact with each other in any direction.A typical initial configuration for our systems is illustrated in Figure <ref>. Typical trajectories for our systems start from such initial configurations. We have run simulations up to 10^8 MD time steps for cases that remained pinned to ensure that unpinning will not happenat a very late time of the simulation. For droplets that cross the pinning boundary, the lengthof the trajectories was up to the point that the droplet reached the final equilibrium state on top of the orange substrate. Our results are based on the analysis of these trajectories. § RESULTS AND DISCUSSION Before delving into the details of the system, it should be mentioned that pinning can be the result of chemical inhomogeneity, surface roughness (or a physical step), or a combination of both. In this work, we will consider the combined effect of physical barrier and wettability to allow for acomprehensive understanding. Pinning is defined as inability of the contact line to move; such inability is rooted in the thermodynamic energy barrier due to chemical and/or physical heterogeneity expressed on a surface.In this study, such barrier to movement of the contact line is through the physical barrier thatprevents the droplet from moving on top of the orange substrate; the wettability of the physical heterogeneity is also varied. Due to the attractive nature of the LJ interaction, the dropletsin thisstudy are pinned at the boundary between the red and orange substrates, as such thepinning inherently takes place without imposing a pinning requirement. The system studied here consists of a droplet on a substrate that is parallel to the x-y plane, as shown in Figure <ref>. The wettability of the substrate by the droplet is determined by the Lennard-Jones (LJ) interaction-parameter, ε_ rd, where `r' indicates the red colour of the substrate and `d' the droplet (Figure <ref>).A larger value of ε_ rd allows for a higher wettability of the substrate, whereas a smaller value corresponds to a lower wettability. From our previous study <cit.>, the choice, 0.3ε≤ε_ rd≤ 0.7ε, maintains the spherical-capshape of the droplet on a substrate monolayer and avoids evaporation effects and large distortions of the droplet contact line. In this case, the contact angle of the droplet is uniquely defined by thestrength of the LJ interaction (e.g. ε_ rd) and linearly depends on it. <cit.> In particular, LJ energy parameters in the range 0.3 - 0.7ε would yield contact angles in the range 60^∘–120^∘.<cit.> In addition, two orange substratesperpendicular to the x-y plane and two orange substrates parallel to the x-y plane are part of the same system as illustrated in Figure <ref>. 
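The linear dependence of the contact angle on the substrate–droplet interaction strength quoted above can be encoded as a simple linear map. The endpoints (angles spanning 60°–120° for ε between 0.3ε and 0.7ε) are taken from the text; the orientation of the mapping, with a larger ε (stronger attraction) giving better wetting and hence a smaller contact angle, is the usual physical expectation and is assumed here.

```python
def contact_angle_deg(eps_sd):
    """Contact angle (degrees) as a linear function of the substrate-droplet
    LJ strength eps_sd (in units of epsilon), over the range studied.

    Assumed endpoints: theta(0.3) = 120 deg and theta(0.7) = 60 deg.
    """
    if not 0.3 <= eps_sd <= 0.7:
        raise ValueError("outside the range 0.3-0.7 epsilon studied in the text")
    return 120.0 - (eps_sd - 0.3) * (120.0 - 60.0) / (0.7 - 0.3)

for eps in (0.3, 0.4, 0.5, 0.6, 0.7):
    print(f"eps = {eps:.1f} epsilon  ->  theta ~ {contact_angle_deg(eps):.0f} deg")
```

Returning to the description of the setup: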
Both orange substrates have the same wettability, which is expressed by the interaction strength of the LJ potential, ε_ od, where `o' stands for the orange colour of the substrates. The orange substrates, which are parallel to the x-y plane, and the red substrate are separated by a distance, H, in the z direction, which corresponds to the height of the physical barrier that the droplet needs to overcome in order to move from the red substrate to the orange substrate. The pinning barrier, namely the height, H, can vary by changing the position of the red substrate in the z direction.The choice of lengths, L and W (Figure <ref>), does not affect our results. L is chosen such that the droplet sticks to the pinning barrier after a short time, since the interaction of the droplet with the red and the orange substrates is always attractive. W is large enough to guarantee that mirror images of thedroplet do not interact in the y direction due to the presence of the periodic boundary conditions. Hence, depending on the choice of the parameters, H, ε_ rd, and ε_ od, as well as the droplet size (total number of beads, N), the droplet may be able to overcome(cross) the pinning barrier and potentially reach a new equilibrium state on top of one of the orange substrates. In the following, we discuss the effects of these parameters on droplet pinning and describe the mechanism of droplet motion over the barrier upon droplet depinning.Figure <ref> presents the results on the maximum height of the pinning barrier, H_ max, that the droplet is able to overcome. In particular, the dependence of H_ max on the parameters ε_ rd and ε_ od for droplets of different sizes is laid out. We observe that the droplet will remain pinned, when the red substrate has a greater wettability than the orange substrates, independently of the droplet size. In other words, ε_ od must always be larger than ε_ rd to allow for droplet depinning. Hence, the thermal fluctuations of the droplet alone are not sufficient to enable depinning, even for values of H as low as H=σ, and even for our largest droplets (N=50400 beads). However, droplets can generally overcome ever larger barriers as their size increases when ε_ od>ε_ rdand for the range of values considered in this study. In particular, H_ max can be as high as 21σ in the case of a droplet consisting of N=50400 beads (Figure <ref>c,ε_ rd=0.3ε and ε_ od=0.7ε). In contrast, a droplet of N=1120 beads would only overcome a barrier of 7σ at best (Figure <ref>a,ε_ rd=0.3ε and ε_ od=0.7ε). Moreover, the value of H_ max crucially depends on thewettability difference between the red and the orange substrates in each case, as expressed through the LJ parameters ε_ rd and ε_ od. In particular, the larger the difference in wettability, the larger the H_ max the droplet is able to overcome. In other words, as the difference in wettability between the red and orange substrates becomes smaller, H_ max decreases. In addition, choosing the highest possible wettability for the orange substrates always yields the largest H_ max, which suggests that maximising ε_ od favours droplet depinning. For example, the combination (ε_ rd=0.5ε, ε_ od=0.7ε) results in a larger value of H_ max in comparison with the combination (ε_ rd=0.3ε, ε_ od=0.5ε) in the case of all droplet sizes, despite the absolute difference between the parameters ε_ rd and ε_ od being the same. Eventually, the affinity of thedroplet to the orange substrates drives the crossing of the barrier, as will be discussed further below. 
In summary, the largest H_ max is achieved for (ε_ rd=0.3ε, ε_ od=0.7ε) and the smaller H_ max for the combination (ε_ rd=0.3ε, ε_ od=0.4ε). In view of these observations, we present in the following results of pinning and depinning (see also movies in Supplementary Information) by keepingε_ rd=0.3ε constant, and varying ε_ od, as well as results where we keep ε_ od=0.7ε constant, and vary ε_ rd.Figure <ref> illustrates results that indicate whether droplets of different sizes (small, N=1120 beads; medium, N=10080 beads; large, N=50400 beads) can overcome a certain barrier of height H. It suggests that the cases with ε_ rd=ε_ od will always lead to pinned droplets irrespective of the droplet size. This is merely due to the physical pinning barrier, which, albeit small (e.g. values as low as H=σ), is enough to hinder the beads attached to the substrate at the contact line to climb onto the orange substrate. Moreover, as discussed in the context of Figure <ref>, ε_ od should always be larger than ε_ rd for depinning to take place. In addition, the results of Figure <ref> indicate clearer that a larger difference between ε_ od and ε_ rd allows for the translocation of the droplet at higher values of H. A choice of ε_ od as high as possible is desirable in order to favour depinning (also, for intermediate values of ε_ rd and ε_ od), as suggested by Figure <ref>. Considering the case (ε_ rd=0.3ε, ε_ od=0.7ε), which enables barrier crossing for the highest values of H and clearly highlights the different areas in the graphs of Figure <ref>, we can see that low pinning barriers, H (e.g. H<8σ), will be overcome by all droplets, independently of their size. However, when H>7σ, the medium and large droplets will only be able to cross the pinning barrier, H, while the small droplets will remain pinned. As H further increases, the medium-size droplets will remain pinned when H>13, whereas the large droplets (N=50400 beads) will still be able to cross the pinning barrier. Finally, for H>21, the large droplets will also remain pinned being unable to overcome the pinning barrier. Hence, our results suggest that we can separate droplets of different sizes or control their motion in different directions by properly choosing the wettability of the red and the orange substrates (maximising ε_ od is desirable) and the height, H, of the pinning barrier. This approach could take place in multiple steps, where the small droplets will remain pinned at small H. Then, the medium-size droplets will remain pinned at higher H values and, finally, the larger values will remain pinned at higher H. Of course, different pinning barriers can be applied in different directions, in this way implementing a binary code, where certain droplets can either cross or not the pinning barrier. Our work clearly shows that the different behaviours are distinct and can be achieved by the different choice of parameters.In Figure <ref>, we provide details on the translocation mechanism of the droplet upon depinning as the droplet moves from the red substrate towards the top of the parallel orange substrate. For our discussion, we have selected four specific systems (see the caption of Figure <ref>), but our conclusions are valid for all the successful depinning cases of Figures <ref> and <ref>. 
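Before turning to the mechanism, the size-selective thresholds quoted above for the pair (ε_rd=0.3ε, ε_od=0.7ε) can be collected into a small helper. The sharp cutoffs below are an idealisation of the reported simulation results and apply only to this particular wettability combination; for ε_od ≤ ε_rd no depinning was observed at all.

```python
# Maximum barrier height (in units of sigma) that each droplet size was able to
# cross for eps_rd = 0.3 epsilon and eps_od = 0.7 epsilon (values quoted above).
H_MAX_03_07 = {1120: 7.0, 10080: 13.0, 50400: 21.0}

def crosses_barrier(n_beads, barrier_height):
    """Idealised pinned/depinned decision for the (0.3, 0.7) wettability pair."""
    return barrier_height <= H_MAX_03_07[n_beads]

# A two-step separation: H = 10 sigma pins only the small droplets,
# while H = 15 sigma additionally pins the medium-sized ones.
for n in (1120, 10080, 50400):
    print(n, "crosses H=10:", crosses_barrier(n, 10.0),
             "crosses H=15:", crosses_barrier(n, 15.0))
```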
The observed phenomena are dominated by the interfacial interactions, therefore the analysis of the different interfacial energy components, as well as the total energy of the system should be investigated.In fact, the energy of the system provides the information for its most favourable state (towards equilibrium) for a particular set of parameters (e.g. H, ε_ rd, ε_ od, and N), since the temperature remains constant throughout the simulation, while no changes in entropy are expected for the droplet and the substrate during the simulation. In particular, we show the pair potential interaction energy, E,for the selected systems, the interfacial energy between the droplet and the red substrate, E_ rd, as well as the interfacial energy between the droplet and the orange substrates, E_ od (Figure <ref>a).In fact, the latter interfacial contributions play the most important role in this translocation process. Indeed, these interfacial energies show significant deviations during the crossing of the pinning barrier (Figures <ref>b and c), which also arises from the wettability difference between the substrate. In particular, the ability of the droplet to establish more interactions (contacts between beads) with the orange substrates will eventually determine whether the droplet will be able to fully cross a pinning barrier of height H.A closer look at the interfacial energies, E_ rd and E_ od, provides more details on the mechanism of the barrier-crossing process (Figures <ref>b and <ref>c). During the crossing of the pinning barrier by the droplet, we observe that the energy E_ rd gradually increases (its absolute value decreases, which means less contacts between the droplet beads and the beads of the red substrate). In contrast, E_ od gradually decreases (faster decrease than the increase in E_ rd, also, due to the fact that ε_ rd < ε_ od), which manifests as an increasing number of contacts between the droplet and the orange substrates. At a specific time, for example, the one marked by the letter `d' in Figure <ref>b for system A and in the snapshot of Figure <ref>d, the two interfacial energies will be equal. In fact, E_ rd and E_ od will be equal for all systems at a certain time while crossing the pinning barrier. However, this happens very early in the depinning process when the difference between the parameters ε_ rd and ε_ od is large, as, for example, in the case of ε_ rd=0.3ε and ε_ od=0.7ε (systems B, C, D). We underline that ε_ od should always be larger than ε_ rd in order for the droplet to be able to cross the pinning barrier, as seen, for example, from our results in Figure <ref>. On the contrary, when the wettability difference between the substrates is small (for example in the case of system A, ε_ rd=0.5ε and ε_ od=0.7ε), E_ rd = E_ od at later times and when the droplet has considerably moved over the pinning barrier. In particular, when the parameters ε_ rd and ε_ oddiffer only by 0.1ε, then E_ rd=E_ od takes place when the droplet's centre of mass is half way along the pinning barrier. Hence, the ability to choose the height of the pinning barrier, H and the wettability of the red and orange substrates provides further possibilities for controlling the position of the droplet around the pinning barrier, in the cases that the droplet would remain pinned.We now turn our attention to the dynamics of the droplet motion during the depinning process. 
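As a brief aside before discussing the dynamics: the interfacial energies E_rd and E_od tracked above are simply sums of droplet–substrate pair energies, which can be evaluated by brute force from bead coordinates as sketched below. The coordinates in the example are toy data, not simulation output, and the function is an illustration rather than the analysis code used for the figures.

```python
import numpy as np

def interfacial_energy(droplet_xyz, substrate_xyz, eps, sigma=1.0, r_cut=2.5):
    """Sum of cut-and-shifted LJ pair energies between droplet and substrate beads.

    This is the kind of quantity plotted as E_rd (eps = eps_rd) or E_od
    (eps = eps_od): a brute-force O(N*M) double sum, fine for illustration.
    """
    d = np.linalg.norm(droplet_xyz[:, None, :] - substrate_xyz[None, :, :], axis=-1)
    sr6 = (sigma / d) ** 6
    pair = 4.0 * eps * (sr6 * sr6 - sr6)
    shift = 4.0 * eps * ((sigma / r_cut) ** 12 - (sigma / r_cut) ** 6)
    return float(np.sum(np.where(d < r_cut, pair - shift, 0.0)))

# Toy coordinates only: a blob of beads resting just above a "red" wall at z = 0,
# with an "orange" wall farther away at z = 6.
rng = np.random.default_rng(1)
grid = np.stack(np.meshgrid(np.arange(10.0), np.arange(10.0)), axis=-1).reshape(-1, 2)
red = np.column_stack([grid, np.zeros(len(grid))])
orange = np.column_stack([grid, 6.0 * np.ones(len(grid))])
drop = rng.uniform([2.0, 2.0, 1.0], [7.0, 7.0, 4.0], size=(300, 3))
print("E_rd:", interfacial_energy(drop, red, eps=0.3))
print("E_od:", interfacial_energy(drop, orange, eps=0.7))
```

With these definitions in hand, we return to the dynamics of the barrier crossing.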
At the initial stages of the barrier crossing, the instantaneous velocity of the centre of mass of the droplet in the x direction, , increases, as the droplet seeks to establish more favourable contacts with the orange substrates (Figures <ref>b and c). However, as the droplet moves further along the pinning barrier, the competition between the red and the orange substrates to establish contacts with the droplet becomes higher since the droplet needs to climb up the pinning barrier in order to create new contacts with the top orange substrate. At this stage of the barrier crossing, the droplet moves back and forth and slowly drifts over the pinning barrier. After this stage and as the droplet moves further over the pinning barrier and because of the higher attraction of the droplet to the orange substrates (ε_ od is always larger than ε_ rd), E_ rd will become zero at some point in time and the droplet will lose its contact with the red substrate. For example, see point `e' in Figure <ref>c and the corresponding snapshot in Figure <ref>e for system D, which illustrates this effect. At this stage of the translocation process, the droplet is not anymore dragged by the red substrate and is `free' to establish further contacts with the top orange substrate. The absence of the attraction between the droplet and the red substrate leads to the increase of the instantaneous velocity, , of the droplet, which, also, translates into the loss of some contacts with the orange substrate, as the droplet tries to obtain again its spherical-cap shape. This results in an increase of the energy, E_ od, which is marked in Figure <ref>c with the letter `f'. A snapshot that corresponds to this situation is presented in Figure <ref>f. A similar behaviour has been discussed in the context of substrates with heterogeneity, where hysteresis builds up when the strength of the defect is above a certain threshold, which depends on the contributions of the elastic energy of the droplet and the barrier energy <cit.>, which is strictly valid when gravitational effects are negligible <cit.>. After this point, the droplet has managed to overcome the pinning barrier and climb on top of the orange substrate. However, the droplet has not yet completely reached its equilibrium shape. For example, the snapshot in Figure <ref>f clearly manifests this situation, since the advancing and receding contact angles of the droplet considerably differ. The droplet and generally the system as a whole will reach its final equilibrium state when it will establish a larger number of contacts with the parallel to the x-y-plane, orange substrate. Then, as alsoindicates, the droplet will move back and forth on the top substrate and will not return back to establish contacts with the perpendicular orange substrate or the red substrate. The number of interfacial contacts between the droplet and the substrate must always be maximised in order the system to minimise its energy, which occurs only when the droplet eventually `sits' on the top substrate. Hence, the snapshot of Figure <ref>g (highlighted with the letter `g' in Figure <ref>c) is a typical equilibrium state of any system that can successfully overcome the pinning barrier. This conclusion is very important, for example, in a droplet separation process since it guarantees that the droplets that cross the pinning barrier will not return back to the red substrate. 
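The instantaneous centre-of-mass velocity used in the discussion above is obtained directly from the stored trajectory as a finite difference of the centre-of-mass position between frames. A minimal version follows; the trajectory here is synthetic and stands in for real simulation output.

```python
import numpy as np

def com_velocity_x(frames, dt_between_frames):
    """x-component of the centre-of-mass velocity from consecutive stored frames.

    `frames` has shape (n_frames, n_beads, 3); since all beads carry equal mass,
    the centre of mass is a plain average over beads.
    """
    com_x = frames[:, :, 0].mean(axis=1)
    return np.diff(com_x) / dt_between_frames

# Synthetic trajectory: a droplet of 1120 beads drifting in +x with thermal jitter.
rng = np.random.default_rng(2)
traj = rng.normal(scale=0.1, size=(50, 1120, 3))
traj[:, :, 0] += np.linspace(0.0, 5.0, 50)[:, None]
print(com_velocity_x(traj, dt_between_frames=1.0)[:5])
```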
The description of the depinning mechanism, which we have provided here, is the same for all systems that cross the pinning barrier. However, for small H, the peak `f' of Figure <ref> becomes less pronounced, as can be already hinted by comparing the results for the systems of Figure <ref>. The same is true when the size of the droplet becomes smaller. Finally, we have mentioned that maximising the wettability of the orange substrates (large value of the parameter ε_ od) is desirable in order to overcome ever higher pinning barriers (cf. Figures <ref> and <ref>). We have concluded that the minimisation of the interfacial energy, E_ od, is the driving force that enables the droplet to cross the pinning barrier.Figure <ref> presents results for the time required by the droplet to cross the pinning barrier. For the sake of our discussion, we show results of systems with very efficient barrier crossings, that is the difference in the wettability between the red and the orange substrates is maximised. Hence, ε_ rd=0.3ε and ε_ od=0.7ε. We contrast this behaviour with the systems that exhibit the least efficient barrier crossings, i.e., systems that can reach small H_ max having a small difference in wettabilities, such as the choice ε_ rd=0.3ε and ε_ od=0.4ε. We have also considered different droplet sizes, as indicated in Figure <ref>a. Overall, all cases show that the time to cross the pinning barrier increases with the height H. While this dependence is monotonic, different behaviour regimes can be observed. In particular, in the case of N=50400, ε_ rd=0.3ε and ε_ od=0.7ε (Figure <ref>a), we can clearly discern three regimes. At the first regime (1, Figure <ref>a) for small values of H, namely σ< H < 5, the effect of the pinning barrier in the translocation process is very small, due to the large size of the droplet. In this case, the increase of the barrier height, H, does not significantly affect the time that the droplets need to cross the pinning barrier. However, as the barrier, H, further increases, its effect on the time is more tangible, reflecting longer times that the droplet needs to cross the pinning barrier. This is the second regime (2, Figure <ref>a) characterised by an exponential growth in time. As we will see by comparison with the different cases of Figure <ref>, this exponent depends on both the size of the droplet and the particular choice of the parameters ε_ rd and ε_ od. Hence, it is not possible to find a universal exponent for the crossing time, but we can observe that this exponent becomes smaller as the size of the droplet increases. The regime (1) may be a limiting case of this exponent when the effect the pinning barrier becomes negligible on the time for the droplet to cross the barrier. In the third regime (3, Figure <ref>a), the droplet takes even more time to cross the pinning barrier and as H increases this time practically becomes infinite. This behaviour reflects the great difficulty of the droplet to further establish energetically favourable contacts with the top orange substrate. The above picture for the largest droplet (N=50400 beads) also seems to apply in the case of smaller droplets (i.e. N=1120 and N=50400), of course when the crossing is possible, but with the exception that the behaviour of regime (1) is absent. This simply means that values as low as H=σ already have an important influence on the translocation process in the case of the small droplets. 
This impact becomes even higher when the wettability difference between the red and the orange substrates is small (for example, ε_ rd=0.3ε and ε_ od=0.4ε as shown in Figure <ref>a). In this case, some of the droplets are already exhibiting the behaviour of regime (3), and the times to cross the pinning barrier increase by almost an order of magnitude for certain H. Hence, we conclude that larger droplets offer better control in the time-scale of the process, when this is relevant for the application design. Our analysis is of course relevant in the absence ofgravitational effects, that is length scales smaller than the capillary length, which is indeed the case in our in silico experiments.Finally, we discuss how the time scale of the barrier crossing is affected by changes in the wettability between the substrates. Based on the results of Figures <ref> and <ref>, we consider the cases shown in Figure <ref>b, for which we can observe barrier crossing for a wide range of parameters ε_ rd and ε_ od for fixed H. From the results of Figure <ref>b, we can conclude: Firstly, choosing higher ε_ od values leads to faster barrier crossings for the same difference between substrates wettability as expressed through the parameters ε_ rd and ε_ od. Secondly, higher H values appear to affect proportionally the time of crossing the pinning barrier across the range of parameters ε_ rd and ε_ od. Our conclusions seem to apply throughout the systems of this study. However, a more comprehensive discussion would still require larger droplets than the ones considered here, which goes beyond our current computational capabilities and the scope of this work.§ CONCLUSIONIn this study, we have investigated the pinning of liquid droplets on solid substrates. We have discussed the necessary conditions for pinning and the mechanism of crossing the pinning barrier upon depinning. We found that even the smallest barrier, namely H=σ, is able to keep the droplet pinned when the wettability of the physical barrier is equal or smaller than the wettability of the substrate where the droplet `sits' before crossing the barrier. This is true for all droplet sizes considered in our study. Moreover, the crossing of a higher pinning barrier (H_ max) by the droplet is favoured by a larger wettability of the substrates that form the barrier (orange substrates). In such cases, the crossing of the barrier will also be quicker. The time scale of the crossing depends on the size of the droplet, N, and the wettability of the substrates as expressed through ε_ rd and ε_ od. In addition, we found that larger droplets can cross higher pinning barriers. We have analysed in detail the mechanism of the barrier crossing and have identified the driving force of this process, which is the minimisation of the system's energy, with the main contribution coming from the decrease of the interfacial energy, E_ od, between the orange substrate and the droplet. To this end, we have presented a detailed discussion of the pinning–depinning mechanism and the barrier crossing by the droplet, and we have analysed the dynamics of this process based on the instantaneous velocity of the centre of mass of the droplet and the time-scale of the crossing. For dynamics of movement, we have identified three different time-scale regimes and discussed its implications for applications exploitation. 
Furthermore, we have also described how pinning and depinning processes can be exploited in nanotechnology applications by controlling the droplet motion through a proper choice of the pinning barrier and the substrate wettabilities of the red and orange substrates for a given droplet size. Our study provides ways of separating and steering droplets on solid substrates. In this way, we anticipate that our work could have direct implications in various nanotechology applications, especially in the areas of microfluidics, microfabrication, coatings, and biology. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 778104. This research was supported in part by PLGrid Infrastructure.Movies of barrier crossing by the droplet:M1*: H=3.0σ, ε_ rd=0.3ε, ε_ od=0.5ε, H=1120 beads; M2*: H=10.0σ, ε_ rd=0.3ε, ε_ od=0.7ε, H=10080 beads; M3*: H=15.0σ, ε_ rd=0.3ε, ε_ od=0.6ε, H=50400 beads; A movie of a pinned droplet (no barrier crossing) for the sake of comparison: M4*: H=11.0σ, ε_ rd=0.3ε, ε_ od=0.4ε, H=50400 beads;
http://arxiv.org/abs/2311.15882v1
{ "authors": [ "Panagiotis E. Theodorakis", "Alidad Amirfazli", "Bin Hu", "Zhizhao Che" ], "categories": [ "physics.flu-dyn", "cond-mat.soft" ], "primary_category": "physics.flu-dyn", "published": "20231127145229", "title": "Droplet control based on pinning and substrate wettability" }
§ INTRODUCTION
Cosmic rays (CRs) are high-energy particles originating from outer space, with some possessing energies up to 10^21 eV. Unraveling the nature of ultra-high-energy CRs is at the forefront of astrophysical research. A vital component in demystifying the properties of CRs is determination of the mass and energy spectra of the extensive air showers that they produce. Such assessments depend on precise measurements and simulations of the air shower profiles, modeled through hadronic Monte Carlo (MC) simulations that are often tuned using data from the Large Hadron Collider (LHC) <cit.>. Nonetheless, discrepancies between different model predictions persist, even at LHC energies, leading to substantial systematic uncertainties in our understanding of the composition of CRs <cit.>. To improve the modeling of hadronic interactions, plans are underway to conduct a short run of proton–oxygen (pO) collisions during LHC Run 3 <cit.>.

For the upcoming proton–oxygen collision runs, the primary focus of the standard research program will predominantly be on non-diffractive interactions <cit.>; however, interactions involving color-neutral objects can also be explored by tagging forward protons in pO→ pX interactions or forward neutrons in pO→ nY interactions, providing a unique opportunity to study those components with better precision. Schematic diagrams of processes of interest are illustrated in Figure <ref>.

The color-neutral interactions are weakly constrained at the LHC, resulting in substantial discrepancies between the experimental data and the predictions of MC simulations (e.g., in proton–lead collisions <cit.>), suggesting there are missing interactions not included in these event generators, which can be further probed using forward neutron and proton detectors.

§ THE FORWARD NEUTRON AND PROTON DETECTORS AT THE LHC
The inclusion of forward neutron and proton detectors at the LHC has significantly broadened the research capabilities of the ATLAS and CMS experiments within the heavy ion and proton–proton physics programs. These detectors can assist in studying the properties of color-neutral interactions during forthcoming proton–oxygen collisions at the LHC. The arrangement of the detector devices along the LHC beamline, on both sides of the interaction point (IP), is schematically illustrated in Figure <ref>.

§.§ The Forward Proton Detectors (FPDs)
During LHC Run 2, forward proton detectors (FPDs) were successfully operated by the ATLAS and CMS collaborations. FPDs are specialized near-beam detectors located approximately 200 meters from the IP. These detectors were operational throughout the standard high-luminosity runs at the LHC, where they were primarily dedicated to studying central exclusive production processes in proton–proton collisions. The ATLAS forward proton (AFP) detector <cit.> and the CMS-TOTEM Precision Proton Spectrometer (CT-PPS) <cit.> have been seamlessly integrated during standard LHC runs and have delivered a broad range of physics results <cit.>.

The kinematic acceptance for forward protons is contingent upon the configuration of the LHC magnetic field.
Protons in diffractive interactions lose a fraction of their momentum (denoted by ξ=Δ p/p) and are consequently deflected from their initial beam trajectory. During standard LHC runs, ξ typically ranges from 1.5% to 15%. This acceptance range is also expected to apply to the upcoming proton–oxygen run, providing the capability to measure the diffractive component of the total pO cross-section by tagging the intact protons. §.§ Zero Degree Calorimeter (ZDC) The zero degree calorimeter (ZDC) is a specialized detector positioned at a zero-degree angle relative to the beamline, contributing substantially to the heavy ion physics program at the LHC <cit.>. Its primary function is to detect forward neutral particles produced in AA and pA collisions at the LHC, primarily spectators from ion disintegration. The ZDC detectors for both ATLAS and CMS experiments are hosted in a dedicated slot inside the neutral beam absorbers (TAN) at a distance of 140 meters from the IP, which shields the LHC machine components against neutral particles emerging from the IP.The ZDC plays a crucial role in detecting forward neutrons and photons with pseudorapidities greater than 8.5 in pp, pA, and AA collisions. It consists of an electromagnetic section, approximately 30 radiation lengths long, and three hadronic modules, each about 1.15 interaction lengths long <cit.>. § NEW CONSTRAINS ON MC HADRONIC MODELS In the forthcoming proton–oxygen run, colorless interactions (including elastic, diffractive, and pion exchange processes), which account for approximately 20% of the total cross-section, can be tagged by operating the ZDC and FPDs downstream of the proton beam. The typical spectra of forward protons and neutrons originating from color-neutral interactions in proton–oxygen collisions at √(S_NN)=9.9 TeV using different Monte Carlo event generators are depicted in Figure <ref> and Figure <ref>, respectively. Elastic and diffractive interactions within proton–oxygen collisions are characterized by large gaps in the rapidity distribution of the final-state particles, defined as Δη_F. In contrast to non-diffractive inelastic events, where the probability of finding a continuous rapidity region Δη_F free of particles is exponentially suppressed, color-neutral interactions such as those involving pomeron or pion exchange are distinctive for their clear rapidity gaps, and discriminating between these topologies is only achievable through proton and neutron tagging. Figure <ref> illustrates the contribution from diffractive processes, where a proton is measured by an FPD, or a neutron is measured by the ZDC, as a function of the Δη_F spectra. § CONCLUSIONS As preparations for proton–oxygen collision runs at the LHC continue, forward neutron and proton detectors are expected to become increasingly vital, allowing the scope of the existing physics research program to be augmented with oxygen beams. These detectors will broaden the range of the observable phase space, which is pivotal in refining our understanding of color-neutral interactions. They are poised to provide precise constraints on diffractive and elastic interactions in proton–ion collisions, including the first measurement of the elastic component of proton–oxygen interactions. If successfully executed, this research could serve as a stepping stone for future measurements in heavy ion runs at the LHC, leveraging both the forward proton detector (FPD) and the zero degree calorimeter (ZDC) to explore new frontiers in particle physics.99mc1 T. Pierog, I. 
Karpenko, J. M. Katzy, E. Yatsenko and K. Werner, EPOS LHC: Test of collective hadronization with data measured at the CERN Large Hadron Collider. https://doi.org/10.1103/PhysRevC.92.034906 Phys. Rev. C 92 (2015) 3, 034906.mc2 F. Riehn, R. Engel, A. Fedynitch, T. K. Gaisser and T. Stanev, Hadronic interaction model Sibyll 2.3d and extensive air showers. https://doi:10.1103/PhysRevD.102.063002 Phys. Rev. D 102 (2020) 6, 063002.mc3 S. Ostapchenko, Monte Carlo treatment of hadronic interactions in enhanced Pomeron scheme: I QGSJET-II model. https://doi:10.1103/PhysRevD.83.014018 Phys. Rev. D 83 (2011), 014018.mc4 C. Bierlich, G. Gustafson, L. Lönnblad and H. Shah, The Angantyr model for Heavy-Ion Collisions in PYTHIA8. https://doi:10.1007/JHEP10(2018)134 JHEP 10 ((2018), 134. Supanitsky:2022zcw A. D. Supanitsky, Determination of the Cosmic-Ray Chemical Composition: Open Issues and Prospects. https://doi:10.3390/galaxies10030075 Galaxies 10 (2022) 3, 75. Bruce:2021hjk R. Bruce, R. Alemany-Fernández, H. Bartosik, M. Jebramcik, J. Jowett and M. Schaumann, Studies for an LHC Pilot Run with Oxygen Beams https://doi:10.18429/JACoW-IPAC2021-MOPAB005 Proc. IPAC'21, Campinas, SP, Brazil, May 2021, pp. 53–56.Brewer:2021kiv Brewer, Jasmine and Mazeliauskas, Aleksas and van der Schee, Wilke, Opportunities of OO and pO collisions at the LHC. http://cds.cern.ch/record/2753575CERN-TH-2021-028CMS:2023lfr CMS Collaboration. First measurement of the forward rapidity gap distribution in pPb collisions at √(s_NN) = 8.16 TeV. https://doi:10.1103/PhysRevD.108.092004 Phys. Rev. D 108, (2023) 9, 092004.Sopczak:2023xyd A. Sopczak, Overview of ATLAS Forward Proton (AFP) detectors in Run-2 and outlook for Run-3 analyses. https://doi.org/10.48550/arXiv.2307.09780 arXiv:2307.09780 [hep-ex].Royon:2023ihe C. Royon, Recent results from the CMS Proton Precision Spectrometer. https://doi.org/10.48550/arXiv.2307.05321 arXiv:2307.05321 [hep-ph].Suranyi:2021ssd O. Surányi, A. Al-Bataineh, J. Bowen, S. Cooper, M. Csanád, V. Hagopian, D. Ingram, C. Ferraioli, T. Grassi and R. Kellogg, et al. Performance of the CMS Zero Degree Calorimeters in pPb collisions at the LHC. https://doi:10.1088/1748-0221/16/05/P05008 JINST 16, (2021) 5, P05008.ATLAS:2017fur ATLAS Collaboration. Evidence for light-by-light scattering in heavy-ion collisions with the ATLAS detector at the LHC. https://doi:10.1038/nphys4208 Nature Phys. 13, (2017) 9, 852-858.Pitt:2023qmd M. Pitt, Reducing model uncertainties using proton-oxygen collisions with proton/neutron tagging at the LHC, https://doi:10.22323/1.444.0426 PoS ICRC2023, 426 (2023).Adamczyk:2015cjy L. Adamczyk et al., Technical Design Report for the ATLAS Forward Proton Detector. CERN-LHCC-2015-009, ATLAS-TDR-024. <https://cds.cern.ch/record/2017378>CMS:2014sdw CMS and TOTEM Collaborations, CMS-TOTEM Precision Proton Spectrometer. CERN-LHCC-2014-021 ; TOTEM-TDR-003 ; CMS-TDR-013 <https://cds.cern.ch/record/1753795>
http://arxiv.org/abs/2311.15867v1
{ "authors": [ "Michael Pitt" ], "categories": [ "hep-ex", "astro-ph.HE" ], "primary_category": "hep-ex", "published": "20231127143741", "title": "Constraining models of hadronic showers using proton-Oxygen collisions at the LHC involving proton/neutron tagging" }
Since American Sign Language (ASL) has no standard written form, Deaf signers frequently share videos in order to communicate in their native language. However, since both hands and face convey critical linguistic information in signed languages, sign language videos cannot preserve signer privacy. While signers have expressed interest, for a variety of applications, in sign language video anonymization that would effectively preserve linguistic content, attempts to develop such technology have had limited success, given the complexity of hand movements and facial expressions. Existing approaches rely predominantly on precise pose estimations of the signer in video footage and often require sign language video datasets for training. These requirements prevent them from processing videos 'in the wild,' in part because of the limited diversity present in current sign language video datasets. To address these limitations, our research introduces DiffSLVA, a novel methodology that utilizes pre-trained large-scale diffusion models for zero-shot text-guided sign language video anonymization. We incorporate ControlNet, which leverages low-level image features such as HED (Holistically-Nested Edge Detection) edges, to circumvent the need for pose estimation. Additionally, we develop a specialized module dedicated to capturing facial expressions, which are critical for conveying essential linguistic information in signed languages. We then combine the above methods to achieve anonymization that better preserves the essential linguistic content of the original signer. This innovative methodology makes possible, for the first time, sign language video anonymization that could be used for real-world applications, which would offer significant benefits to the Deaf and Hard-of-Hearing communities. We demonstrate the effectiveness of our approach with a series of signer anonymization experiments.

§ INTRODUCTION
American Sign Language (ASL), the predominant form of communication used by the Deaf Community in the United States and parts of Canada, is a full-fledged natural language. It employs manual signs in parallel with non-manual elements, including facial expressions and movements of the head and upper body, to convey linguistic information. The non-manual elements are crucial for conveying many types of lexical and adverbial information, as well as for marking syntactic structures (e.g., negation, topics, question status, and clause types <cit.>). Consequently, in video communications, e.g., on the Web, involving sensitive subjects such as medical, legal, or controversial matters, obscuring the face for purposes of anonymity would result in significant loss of essential linguistic information.

Despite the fact that a number of writing systems have been developed for ASL <cit.>, the language has no standard written form. While ASL signers could choose to use written English in order to preserve privacy, that is frequently not their preference, as signers generally have greater ease and fluency in their native language, ASL, than in English. A considerable number of Deaf signers have shown interest in a mechanism that would maintain the integrity of linguistic content in ASL videos while disguising the identity of the signer, as discussed in several recent studies <cit.>. There are many potential applications of such a tool.
For example, this could enable anonymous peer review for academic submissions in ASL. This could also ensure impartiality in various multimodal ASL-based applications, e.g., enabling production of neutral definitions for ASL dictionaries, not tied to the identity of the signer producing them. It could also enable maintenance of neutrality in interpretation scenarios. Additionally, such a tool could increase signers' willingness to contribute to video-based AI datasets <cit.>, which hold significant research value.For these reasons, various approaches for preservation of privacy in ASL videos have been explored <cit.>. However, the majority of these approaches suffer from limitations with respect to preservation of linguistic meaning, and they generally achieve only a limited degree of anonymity.They also require accurate pose estimation, and some require substantial human labor.Furthermore, the effectiveness of many existing anonymization tools is limited to experimental settings, displaying sub-optimal performance with out-of-domain videos. These limitations significantly reduce the potential for practical applications of such technologies.To overcome the limitations of existing anonymization tools, we introduce DiffSLVA, a novel anonymization approach leveraging large-scale pre-trained diffusion models, notably Stable Diffusion <cit.>. DiffSLVA is designed to tackle text-guided sign language anonymization. Through a text prompt, it generates a new video in which the original linguistic meaning is retained, but the identity of the signer is altered. See Figure <ref> for a demonstration of the method. Unlike traditional methods that require skeleton extraction, our approach utilizes the Stable Diffusion model enhanced with ControlNet <cit.> to process language videos with Holistically-Nested Edge (HED) <cit.>, which can much more easily and robustly process videos in the wild. To adapt the image-based Stable Diffusion for video, we follow <cit.> but modify its architecture. We replace the self-attention layer in U-Net with a cross-frame attention layer and implement an optical-flow guided latent fusion for consistent frame generation. Additionally, to capture fine-grained facial expressions, we have developed a specialized facial generation module utilizing a state-of-the-art image animation model <cit.>. The outcomes are integrated via a face segmentation technique <cit.>. Ourresults show substantial promise for anonymization applications in the wild, which would be invaluable for the Deaf and Hard-of-Hearing communities. Our work makes several key contributions to the field of sign language video anonymization: * We propose zero-shot text-guided sign language anonymization: We are the first to address the challenge of zero-shot sign language video anonymization. Our method does not require sign language video data for training. The anonymized videos are based on computer-generated humans, transforming the original signer's appearance to that of a computer-generated individual.* We have developed a specialized module dedicated to improving facial expression transformation. Our ablation studies show that this significantly enhances the preservation of linguistic meaning.* Our approach relies solely on low-level image features, such as edges, enhancing the potential for practical applications, which is a significant achievement.* Our anonymization can accommodate a diverse range of target humans. 
The anonymized signers can have any ethnic identity, gender, clothing, or facial style, a feature many ASL signers want; this simply requires changing the text input.§ RELATED WORK§.§ Video Editing with Diffusion Models Diffusion models <cit.> have demonstrated exceptional performance in the field of generative AI. Once such models are trained on large-scale datasets (e.g., LAION <cit.>), text-guided latent diffusion models <cit.> (e.g., Stable Diffusion) are capable of producing diverse and high-quality images from a single text prompt. Additionally, ControlNet <cit.> presents a novel enhancement. It fine-tunes an additional input pathway for pre-trained latent diffusion models, enabling them to process various modalities, including edges, poses, and depth maps. This innovation significantly augments the spatial control capabilities of text-guided models.Image-based diffusion models can also be used for video generation or editing. There have been efforts to modify image-based diffusion models for consistent generation or editing across frames. Tune-A-Video <cit.> inflates a pre-trained image diffusion model, modified with pseudo 3D convolution and cross-frame attention and then fine-tuned on a given video sequence. During the inference stage, with the DDIM inversion noises <cit.> as the starting point, the fine-tuned model is able to generate videos with similar motions but varied appearance. Edit-A-Video <cit.>, Video-P2P <cit.>, and vid2vid-zero <cit.> utilize Null-Text Inversion <cit.> for improved reconstruction of video frames, which provides better editing results. Fine-tuning or optimization based on one or more input video sequences is required by these methods. Moreover, the detailed motion in the video cannot be captured properly without having a negative impact on the editing abilities. Therefore, they are not suitable for the sign language video anonymization task.Other methods utilize the cross-frame attention mechanism or latent fusion to achieve the video editing or generation ability of image-based diffusion models. Text2Video-Zero <cit.> modifies the latent codes and attention layer. FateZero <cit.> blends the attention features based on the editing masks detected by Prompt-to-Prompt <cit.>. Pix2Video <cit.> aligns the latent features between frames for better consistency. Rerender-A-Video <cit.> utilizes a cross-frame attention mechanism and cross-frame latent fusion to improve the consistency of style, texture, and details. It can also be used with ControlNet for spatial guidance. However, these methods cannot accurately translate facial expressions from the original videos. Therefore, they lose a significant amount of the linguistic meaning from the original video. Our approach is based on Rerender-A-Video <cit.> method without the post video processing, to best capture manual signs. To overcome the loss of linguistically important non-manual information, we designed a specialized facial expression translation module <cit.>, which we combine with the rest of the anonymized body using a face parser model <cit.>. §.§ Sign Language Video Anonymization In the realm of privacy preservation in ASL video communication, various strategies have been investigated <cit.>. Early approaches used graphical filters, such as a tiger-shaped filter <cit.>, to disguise the face during signing. However, these filters often lead to a loss of critical facial expressions, thereby hindering comprehension. 
Alternatives like blocking parts of the face <cit.> also result in significant information loss. Approaches involving re-enacting signed messages with actors <cit.> or using virtual humans for anonymous sign language messaging <cit.> are labor-intensive, challenging, and time-consuming.Some approaches to avatar generation for sign language, such as <cit.>, have used cartoon-like characters to replace signers. Cartoonized Anonymization <cit.> proposes the use of pose estimation models <cit.> to automatically enable the avatars to sign. Yet, these methods often lead to unrealistic results <cit.>.Deep-learning approaches, such as the AnonySign project <cit.> or Neural SignReenactor <cit.>, leverage GAN-based methods for photo-realistic sign language anonymization using skeleton keypoints for accurate image generation. The results are encouraging. However, they require accurate skeleton keypoints and face landmarks. In sign language videos, the rapid movements of the hands can lead to blurring in the video frames. Occlusions of the face by the hands also occur frequently. The performance of existing human pose estimation models is often inadequate when applied to sign language videos, which leads to errors in the anonymized video. Recent work <cit.> applies the facial expression transfer method of <cit.> for sign language anonymization. This method involves replacing the signer's face in the video with another individual's face, while transferring the facial expressions to the new face. As a result, this approach successfully preserves the linguistic meanings conveyed by facial expressions and alters the identity of the signer in the video. However, in <cit.> the extent of the anonymization is not complete, since only the face is replaced, while the arms, torso, and hands remain the same as in the original video.Another method <cit.> uses an unsupervised image animation method <cit.> with a high-resolution decoder and loss designed for the face and hands to transform the identity of a signer to that of another signer from the training videos. The results are promising. However, this method canwork well only in the training data domain and is hard to adapt to sign language videos in the wild. To address the above limitations, we propose DiffSLVA, a method that is based on the modification of large-scale diffuson models and ControlNet for consistent high-fidelity video generation, which can be used to achieve effective sign language video anonymization in the wild. Our approach is a text-guided sign language video anonymization, as shown in Figure <ref>. We use large-scale diffusion models, which do not rely on the use of sign language video data for training and can perform zero-shot sign language video anonymization in the wild. With the help of ControlNet, we use low-level features instead of accurate skeleton data as signal for generation guidance so that the results are not adversely affected by inaccurate skeleton estimations. To further improve the facial expression translation, we designed a specialized model for facial expression enhancement and combine it with the model that anonymizes the rest of the body using a face parser model. Our method can anonymize sign language videos based on a single text prompt. The anonymized video is based only on a wide range of computer-generated humans. Our successful anonymization results in the wild show great promise for use by the Deaf community. 
§ METHODOLOGY In this section, we introduce our method for zero-shot text-guided sign language video anonymization. The process is structured as follows: Given a sign language video with N frames {I_i}_i=0^N, we employ a pre-trained latent diffusion model augmented with ControlNet to execute the anonymization. A text prompt c_p serves as the guidance for the desired anonymization identity or style. Our goal is to generate an altered sign language video sequence, represented by {I'_i}_i=0^N, which conceals the identity of the original signer while preserving the linguistic content. In section <ref>, we introduce the text-guided latent diffusion models and the ControlNet, which serve as the foundation for text-guided image generation. Section <ref> details the methods for adapting the text-to-image method for consistent video editing. To ensure the preservation of linguistic meaning through accurate facial expression translation, we introduce a specialized facial enhancement module in Section <ref>. Figure <ref> shows an overview of our method.§.§ Latent Diffusion Models Latent diffusion models are diffusion models operating in the latent space for faster image generation. One major feature of the approach is that it uses an autoencoder, U-Net, and a text encoder. One difference with respect to the standard forward and denoising process is that the input image I is first input to an encoder ε to obtain its latent features x_0 = ε (I). The following diffusion forward process adds noise to the latent featuresq(x_t|x_t-1) = 𝒩(x_t;√(α_t)x_t-1,(1-α_t)𝐈),where t=1,...,T is the time step indicating the level of noises added;q(x_t|x_t-1) is the conditional probability of x_t given x_t-1; and α_t are hyperparameters that adjust the noise level across the time step t. Leveraging the property of Gaussian noise, we can also sample x_t at any time step by the following equation:q(x_t|x_0) = 𝒩(x_t;√(α̅_t)x_0,(1-α̅_t)𝐈),where α̅_t = ∏_i=1^t α_i.In the diffusion backward process, a U-Net ϵ_θ is trained to estimate the above added noise to recover x_0 from x_T. For the conditional diffusion model, ϵ_θ takes the conditional information c_p as input to guide the generation process. After ϵ_θ has been trained, the x_t-1 can be sampled by strategies such as DDIM sampling <cit.>:x_t-1 = √(α̅_t-1)x̂_0+√(1-α̅_t-1)ϵ_θ(x_t,t,c_p),where ϵ_θ(x_t,t,c_p) is the predicted noise at time step t. For the DDIM sampler, we can have an estimation of the final clear output x̂_0 at each time step t. x̂_0 can also be represented as the following equation:x̂_0 = (x_t-√(1-α̅_t)ϵ_θ(x_t,t,c_p))/√(α̅_t),During inference, for a Gaussion noise x_T, we can sample a clear latent x_0 with the DDIM Sampler and decode it to the generated image I'=D(x_0)Our methodology also incorporates ControlNet, which is inspired by the Hyper Network concept. ControlNet introduces an additional signal to the text-guided latent diffusion models. This structure makes it possible for the text-guided diffusion model to take diverse inputs like edges, human poses, and segmentation maps for more spatial constraints. Consequently, with the incorporation of an additional input c_n, the predicted noise at each time step t is represented as ϵ_θ(x_t, t, c_p, c_n). This approach enhances the alignment of the final outputs with the spatial features specified by the input condition c_n.§.§ Consistent Video GenerationAlthough Stable Diffusion models exhibit outstanding performance in image generation, their direct application to videos is challenging. 
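Before describing the adaptation to video, the deterministic DDIM update given above can be sketched in a few lines. The noise schedule and the zero-noise "network" below are placeholders used only to make the notation concrete; in the actual pipeline the noise prediction is produced by the ControlNet-conditioned U-Net ε_θ(x_t, t, c_p, c_n) operating on Stable Diffusion latents.

```python
import numpy as np

def ddim_step(x_t, eps_pred, alpha_bar_t, alpha_bar_prev):
    """One deterministic DDIM update.

    eps_pred stands for eps_theta(x_t, t, c_p, c_n), i.e. the noise predicted
    by the (ControlNet-conditioned) U-Net at this timestep; x0_hat is the
    clean-latent estimate used later for the flow-guided fusion.
    """
    x0_hat = (x_t - np.sqrt(1.0 - alpha_bar_t) * eps_pred) / np.sqrt(alpha_bar_t)
    x_prev = np.sqrt(alpha_bar_prev) * x0_hat + np.sqrt(1.0 - alpha_bar_prev) * eps_pred
    return x_prev, x0_hat

# Toy schedule and a dummy "network" that always predicts zero noise.
alphas = np.linspace(0.999, 0.98, 50)
alpha_bar = np.cumprod(alphas)
x = np.random.default_rng(0).normal(size=(4, 4))   # a stand-in latent
for t in range(49, 0, -1):
    eps_hat = np.zeros_like(x)                     # placeholder for eps_theta
    x, x0_hat = ddim_step(x, eps_hat, alpha_bar[t], alpha_bar[t - 1])
```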
Directly applying Stable Diffusion to videos gives rise to significant frame inconsistency issues. To address this, we adapt text-to-image diffusion models for video editing tasks, drawing upon the framework established by <cit.>. Our approach begins by encoding and sampling the original frames I_i, i = 1, …, N, of the sign language video into noisy latents x^i_t, i = 1, …, N, serving as starting points for the generation of anonymized video frames, following the method described in <cit.>. An anchor frame I_a is selected from the sequence I_i, i = 1, …, N. The corresponding latent feature x^a_t, along with the Holistically-Nested Edge map, is processed through ControlNet to create the transformed anchor frame I'_a, which constrains the global consistency in general. Empirically, we find that selecting the anchor frame from the middle of the video, where both hands of the signer are visible, yields optimal results. For each frame I_i, the previously generated frame I'_i-1 and the anchor frame I'_a provide cross-frame attention control during the generation of I'_i, as detailed in Section <ref>. A two-stage optical flow guided latent fusion, described in Section <ref>, is applied during the generation process. Finally, a specialized facial expression enhancement module, outlined in Section <ref>, is used to refine the results. §.§.§ Cross-Frame Attention Consistency In the Stable Diffusion model, there are two kinds of attention mechanisms used in the U-Net. The cross-attention retrieves the information from the text embedding. The self-attention helps define the layout and style of the generated images. In order to achieve consistent generation across frames in the sign language video sequence, the self-attention layers are replaced with cross-frame attention layers. The self-attention layer of the U-Net used in Stable Diffusion is represented as follows: Q=W^Q v_i, K=W^K v_i, V=W^V v_i, where v_i is the latent feature input to the self-attention layer when generating I'_i. W^Q, W^K, and W^V are the weights that project v_i to the query, key, and value in the attention mechanism, respectively. The attention map SA is calculated as follows: SA(Q,K,V) = Softmax(QK^T/√(d))V. In order to obtain consistent generation across frames, we replace K and V with K_a,i-1 and V_a,i-1, which are the combination of keys and values when generating the selected anchor frame I_a and the previous frame I_i-1. The cross-frame attention layer is represented as follows: K_a,i-1 =W^K [v_a;v_i-1], Q=W^Q v_i, V_a,i-1 =W^V [v_a;v_i-1], where v_a, v_i-1 are the latent features obtained when generating frames I'_a and I'_i-1. The cross-frame attention map CA is calculated as follows: CA(Q,K_a,i-1,V_a,i-1) = Softmax(QK_a,i-1^T/√(d))V_a,i-1. The cross-frame attention mechanism is designed to foster consistency in image generation across frames by directing the current generation process to reference patches in both the generated anchor frame and the previous frame. §.§.§ Optical Flow Guided Cross-Frame Latent Fusion Following <cit.>, we utilize two-stage latent fusion guided by optical flow: OFG stage 1 and OFG stage 2. OFG stage 1: In the early stage of the diffusion backward process, the optical flow w^i_a and occlusion mask M^i_a are estimated from I_a to I_i to warp and fuse the estimated latents of I'_a and I'_i. This latent warp and fusion is performed when the denoising step t is large, to prevent distortion of the results.
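The cross-frame attention replacement described above can be sketched as follows; this is an illustrative single-head version operating on flattened spatial tokens, and the projection matrices and tensor shapes are our assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def cross_frame_attention(v_i, v_a, v_prev, W_q, W_k, W_v):
    """Queries come from the current frame; keys/values from the concatenated
    anchor-frame and previous-frame features, as in the CA equation above."""
    q = v_i @ W_q                                   # (N, d)
    kv_src = torch.cat([v_a, v_prev], dim=0)        # (2N, d_in): [v_a ; v_{i-1}]
    k = kv_src @ W_k                                # (2N, d)
    v = kv_src @ W_v                                # (2N, d)
    attn = F.softmax(q @ k.transpose(-1, -2) / k.shape[-1] ** 0.5, dim=-1)
    return attn @ v                                 # (N, d)
```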
At time step t, the predicted x̂_0 is updated by the following equation: x̂^i_0 = M^i_a x̂^i_0 + (1-M^i_a)w^i_a(x̂^a_0), where x̂^i_0 and x̂^a_0 are the predicted clean outputs for I'_i and I'_a at denoising time step t, calculated by equation <ref>. OFG stage 2: At the second stage, the generated anchor frame I'_a and the previously generated frame I'_i-1 are used to further enhance consistency during the late stages of the diffusion backward process. The optical flow and occlusion mask are also estimated. We obtain a reference image I̅'_i by warping and fusing with the previously generated images: I̅'_i = M^i_a(M^i_i-1Î'_i + (1-M^i_i-1)w^i_i-1(I'_i-1)) + (1-M^i_a)w^i_a I'_a. After obtaining this reference estimated image I̅'_i, we can update the sampling process for generating I'_i using the following equation: x^i_t-1 = M_i x^i_t-1 + (1-M_i) x̅^i_t-1, where M_i=M^i_a ∩ M^i_i-1, and x̅^i_t-1 is the x_t-1 sampled from the reference image I̅'_i. We use the same strategy as the fidelity-oriented image encoding from <cit.> for encoding I̅'_i to avoid information loss when repeatedly encoding and decoding latents. To maintain coherent color throughout the whole process, we also apply AdaIN <cit.> to x̂^i_0 with x̂^a_0 at time step t during the late stage of the diffusion backward process. This is used to mitigate the color drift problem with diffusion models. §.§ Facial Expression Enhancement Facial expressions convey important linguistic meaning in signed languages. However, current methods cannot transfer meaningful facial expressions; see the ablation study discussed in Section <ref>. ControlNet and Stable Diffusion usually fail to produce faces with the same expressions as the original signer. To address this issue, we propose an additional module to enhance the face generation based on an image-animation model. See Figure <ref> for an overview of this module. When generating the first frame I'_1, we crop the face from the result and use it as the source face F_s for the image animation module from <cit.>. The facial images in the original videos are also cropped and aligned to form the driving face set [F^i_d], i=1...N. A motion estimation module, which is pre-trained on VoxCeleb <cit.>, estimates the dense motion W_i and multi-resolution occlusion maps M_i between the source face F_s and the driving face set [F^i_d], i=1...N. The obtained optical flow and occlusion maps are input to a U-Net to generate new face images that match the identity of the source face F_s while having the same facial expression as F^i_d. The input image F_s is processed through the encoder, and the optical flow W_i is applied to warp the feature map at each level. This adjusted feature map is then combined with the occlusion mask M^f_i that matches its resolution. Subsequently, it is merged into the decoder through a skip connection. After this, the feature map is input to the next upsampling layer. Finally, the enhanced face image F^i_E is produced at the last layer. A face parser model <cit.> is applied on F^i_E to segment the face area and obtain a mask M^f_i. Then, the mask and the enhanced face image are aligned with the face location in I'_i. Finally, I'_i is updated by the following equation: I'_i = M^f_i F^i_E + (1-M^f_i) I'_i. § EXPERIMENTS AND RESULTS §.§ Data Set We implemented our method on video datasets distributed through the American Sign Language Linguistic Research Project (ASLLRP): https://dai.cs.rutgers.edu/dai/s/daihttps://dai.cs.rutgers.edu/dai/s/dai <cit.>.
To assess the effectiveness of our anonymization technique, we selected signers of diverse genders and ages. Each test sample was limited to a maximum of 180 video frames. Example results are presented in Figure <ref>. §.§ Models Our experiments utilized Stable Diffusion models version 1.5 and other customized models. ControlNet version 1.0 was employed, producing optimal results with HED as a conditional input. Optical flow estimation was performed using the model from <cit.>. §.§ Qualitative Evaluation Overall, our method generates clear hand shapes with high fidelity to the original signer's hand shapes and movement of the hands and arms. Most of the generated facial expressions are good, and we are currently carrying out further refinements to fully preserve the subtleties of expressions that are critical to expression of linguistic information. The effectiveness of our combined method for transmission of linguistic content, complete disguise of identity, and production of natural-looking signing remains to be confirmed through user studies, which we plan to carry out in the near future. However, the initial results are quite encouraging. As shown in Figure <ref>, our method, guided by text prompts, can anonymize original videos to computer-generated signers with different genders and identities: with different text prompts, we can produce various anonymized versions of the sign language videos, from the CG (Computer Graphics) style to ink wash painting. Some video examples can be viewed at https://github.com/Jeffery9707/DiffSLVAhttps://github.com/Jeffery9707/DiffSLVA. These results underscore the practical potential of our approach. To our knowledge, this is the first instance of zero-shot sign language anonymization in real-world scenarios. Methods like Cartoonized Anonymization (CA) <cit.> cannot generate photorealistic results and rely on skeleton estimation for accurate anonymization. Methods that can generate photorealistic results, such as AnonySign <cit.>, SLA <cit.> and Neural Sign Reenactor (NSR) <cit.>, require training on sign language video datasets or accurate skeleton estimation. These methods are not accurate enough to be used in the wild. §.§ Ablation Study Our ablation study focused on the facial expression enhancement module. Results are illustrated in Figure <ref>. Using a separate module significantly improves the preservation of linguistic meaning; the example shown in this figure includes topic and wh-question marking. A video example is also available for viewing at https://github.com/Jeffery9707/DiffSLVAhttps://github.com/Jeffery9707/DiffSLVA. There is a notable challenge with the Stable Diffusion model, primarily in its ability to generate varied facial expressions accurately for the sign language video anonymization task. Instead of producing diverse expressions, the model tends to replicate a uniform expression across different frames. This leads to a substantial loss in linguistic meaning in the generated results. This limitation highlights the importance of the facial enhancement module in sign language video anonymization. § CONCLUSION AND DISCUSSION In this paper, we introduce DiffSLVA, a novel approach employing large-scale pre-trained diffusion models for text-guided zero-shot sign language video anonymization in the wild. Our approach has the potential to be applied to various use cases. It could enable anonymous peer review for ASL-based academic submissions, thereby ensuring unbiased academic review.
Additionally, it could bring neutrality to various multimodal ASL tools, for example, to enable the creation of anonymized definitions in ASL dictionaries. Furthermore, our approach could enhance neutrality in interpreting scenarios in digital communications, such as messaging, enabling maintenance of confidentiality in ASL communications. Furthermore, the implementation of DiffSLVA is likely to increase participation in video-based AI databases, enriching AI research with diverse ASL data. Our method does currently have some limitations. It may encounter challenges, such as cases where the face is occluded by one or both hands or where there is blurring due to rapid movements in sign language videos. We aim to address these issues in our future work. We are also working on further refinements to improve the facial transformation module. However, overall, DiffSLVA shows substantial promise for anonymization applications in the wild, which could offer invaluable tools for the Deaf and Hard-of-Hearing communities. § ACKNOWLEDGMENTS We are grateful to the many, many people who have helped with the collection, linguistic annotation, and sharing of the ASL data upon which we have relied for this research. In particular, we are indebted to the many ASL signers who have contributed to our database; to Gregory Dimitriadis at the Rutgers Laboratory for Computer Science Research, the principal developer of SignStream®, our software for linguistic annotation of video data (https://www.bu.edu/asllrp/SignStream/3/https://www.bu.edu/asllrp/SignStream/3/); to the many who have helped with linguistic annotations (especially Carey Ballard and Indya Oliver); and to Augustine Opoku, for development and maintenance of our Web-based database system for providing access to the linguistically annotated video data (https://dai.cs.rutgers.edu/dai/s/daihttps://dai.cs.rutgers.edu/dai/s/dai). We would also like to extend our sincere gratitude to Ligong Han for invaluable discussions about this project. This work was supported in part by grants #2235405, #2212302, #2212301, and #2212303 from the National Science Foundation, although any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
http://arxiv.org/abs/2311.16060v1
{ "authors": [ "Zhaoyang Xia", "Carol Neidle", "Dimitris N. Metaxas" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231127182619", "title": "DiffSLVA: Harnessing Diffusion Models for Sign Language Video Anonymization" }
Lung cancer is responsible for 21% of cancer deaths in the UK, and five-year survival rates are heavily influenced by the stage at which the cancer was identified. Recent studies have demonstrated the capability of AI methods for accurate and early diagnosis of lung cancer from routine scans. However, this evidence has not translated into clinical practice, with one barrier being a lack of interpretable models. This study investigates the application of Variational Autoencoders (VAEs), a type of generative AI model, to lung cancer lesions. Proposed models were trained on lesions extracted from 3D CT scans in the LIDC-IDRI public dataset. Latent vector representations of 2D slices produced by the VAEs were explored through clustering to justify their quality and used in an MLP classifier model for lung cancer diagnosis; the best model achieved state-of-the-art metrics of AUC 0.98 and 93.1% accuracy. Cluster analysis shows the VAE latent space separates the dataset of malignant and benign lesions based on meaningful feature components including tumour size, shape, patient and malignancy class. We also include a comparative analysis of the standard Gaussian VAE (GVAE) and the more recent Dirichlet VAE (DirVAE), which replaces the prior with a Dirichlet distribution to encourage a more explainable latent space with disentangled feature representation. Finally, we demonstrate the potential for latent space traversals corresponding to clinically meaningful feature changes. Our code is available at <https://github.com/benkeel/VAE_lung_lesion_BMVC>. § INTRODUCTION Lung cancer is the third most common cancer in the UK, accounting for 13% of cases <cit.> and the biggest cause of cancer death at 21% <cit.>. Early diagnosis of lung cancer is important for prognosis, with five-year survival rates for diagnosis in stages 1–3 at 32.6% compared to 2.9% at stage 4 <cit.>. Radiologists diagnose lung cancer from medical images including Computed Tomography (CT) scans by visually inspecting lesions in a time-consuming and subjective process <cit.>. A lesion is an area of tissue which has been damaged and is either a malignant tumour or a benign area of inflammation, abscess or ulcer <cit.>. CT scans are non-invasive and provide high detail images for medical diagnosis and treatment planning. The main contribution of this research is to: * Build state-of-the-art prediction models for lung cancer lesions using VAEs. * Investigate the effectiveness of Dirichlet VAEs for lung lesions; to the best of our knowledge, this is the first application in the cancer imaging domain.
Several research papers have investigated the application of AI methods to lung cancer, utilising their ability for complex pattern recognition <cit.>. The Variational Autoencoder (VAE) is an encoder-decoder architecture that maps input data to an n-dimensional latent space <cit.>. Smoothness constraints on the latent space, typically enforced using a Gaussian distribution, promote clustering between similar images. Assuming this space captures sufficient information, these latent vectors can be used for classification purposes. Exploration of the space via latent arithmetic and clustering can lead to new insights about a dataset <cit.>. This paper also explores the use of a Dirichlet distribution in place of the Gaussian. The K-dimensional Dirichlet distribution is a multivariate generalisation of the beta distribution with K strictly positive parameters, {α_i ∈ℝ^+}^i=K_i=1. These α parameters influence the sparsity and density of the probability simplex; the impact of different values is shown in Figure <ref>. The sum of the α values is known as the concentration parameter, which controls the dispersion. When all α equal 1 it is a uniform distribution (Figure <ref> (b)), and a lower/higher sum causes sparsity/density (Figure <ref> (a), (c), (d)). A relatively high α_i will encourage more probability to be concentrated in the corresponding area of the simplex (Figure <ref> (e), (f)). Choosing target α values in the DirVAE influences the distribution of the VAE latent space. In summary, VAE models will be trained on 2D slices of CT scans cropped to lung lesions. The latent vector representations are used in Multilayered Perceptron (MLP) classification models for the task of lung cancer diagnosis. The latent vectors will be evaluated to justify their quality as feature vectors by showing that tumours with similar characteristics are grouped together in the latent space and to demonstrate the ability to predictably change features. This enhances the explainability of the method as it is more intuitive and interpretable for a non-technical audience. Additionally, comparisons between the Gaussian and Dirichlet latent space will show that the DirVAE has better disentanglement of features. To inform this research, we conducted a review of the published literature on AI for lung lesion diagnosis, applications of VAEs in the cancer domain and applications of DirVAEs. § RELATED WORKS <cit.> conducted a systematic review of Artificial Intelligence (AI) for lung lesion diagnosis from medical images in the years 2017-2021 and found an accuracy range of 88% to 99.2% and an AUC range of 0.7 to 0.967. Over half of the studies use 2D Convolutional Neural Network (CNN) architectures for feature extraction and a separate classifier, with transfer learning (TL) commonly applied. For instance, <cit.> used TL with ResNet 50 <cit.> and a shallow CNN, achieving 97.6% accuracy. Additionally, some studies have fused clinically known features with CNN derived features; for instance, <cit.> obtained an AUC of 0.967 and an accuracy of 89.5%. <cit.> did not include any papers applying VAEs to lung cancer detection; however, there are some existing studies in this domain <cit.>. In the most similar study with the best diagnostic performance using VAEs, <cit.> applied a VAE to lesions extracted from the LIDC-IDRI dataset and used retraining of the encoder with a Multi-Layered Perceptron (MLP) classifier, achieving an AUC of 0.936.
Additionally, several papers have applied VAEs to lung cancer for other tasks including segmentation, survival analysis and tumour growth prediction <cit.>. This paper builds upon the work of <cit.> by improving both the diagnostic performance and the interpretability of the method. Regarding the application of generative models to the cancer domain, several papers have explored the value of VAEs for latent space exploration <cit.>. For instance, <cit.> used VAEs to learn latent representations of the DNA to classify lung cancer subtypes. <cit.> used an approach based on autoencoders and GANs for generating synthetic abdominal CT scans and demonstrated adding and removing liver lesions. Several previous studies have proposed VAEs which replace the prior distribution with a Dirichlet. However, to our knowledge, our work is the first to apply this idea within a cancer setting. The DirVAE was originally proposed by <cit.> and was subsequently utilised in similar studies on topic modelling by <cit.> and <cit.>. Later studies applied the model to image classification and demonstrated that DirVAE latent vectors were very capable in clustering images from the same category and separating them from others <cit.>. <cit.> proposed an approach which combined graph neural networks and the DirVAE for abstract graph clustering. In the medical domain, <cit.> used the approach to disentangle DNA sequences into different cell types. Most recently, <cit.> used the DirVAE for chest X-ray classification. Using the Dirichlet distribution in a VAE requires a reparameterisation trick which can produce a differentiable sample from the theoretical distribution. Various techniques have been used before which include the Laplace approximation <cit.>, approximation of the inverse CDF <cit.>, rejection sampling variational inference <cit.> and implicit reparameterisation gradients <cit.>. Instead, sampling from the Dirichlet distribution is done using the pathwise gradient method introduced in <cit.> and subsequently implemented in PyTorch. § METHODS §.§ Dataset and Pre-Processing The LIDC-IDRI public dataset contains 1,010 CT scans, consisting of 20,801 2D image slices which range from 0.6 to 5.0 mm thick with expert annotations <cit.>. The dataset was then limited to 875 patients with a lesion present, totalling 13,916 slices. <cit.> reported that the LIDC-IDRI contains 2,669 lesions larger than 3 mm. The lesions are categorised as malignant, ambiguous or benign in 5,249, 5,393 and 3,274 slices respectively, corresponding to 394, 580 and 454 patients. Note that some patients exhibit all three types. These labels were assigned based on a score of 1-5 agreed by four experienced thoracic radiologists: lesions with a score of 1 or 2 are benign, 3 is ambiguous, and 4 or 5 are malignant. All slices have segmentation masks that indicate where the lesion is located. Lesions measuring less than 3 mm in diameter and additionally any with fewer than 8 pixels were removed as they correspond to much smaller lesions which are not clinically relevant <cit.>. Image slices are 512x512 pixels covering the cross-section of the body; from this, a region of interest (ROI) of size 64x64 containing the segmentation masks was selected. Subsequently, 24 slices were excluded as they did not fit in the ROI and a further 64 slices as the bounding box went over the edge of the image, leaving a total of 13,852 in the final dataset.
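Returning briefly to the reparameterisation point above: the Dirichlet distribution shipped with PyTorch already provides pathwise (reparameterised) sampling, so a DirVAE latent sample and its KLD against a target Dirichlet are directly differentiable. The tensor sizes, the softplus link and the target value below are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
from torch.distributions import Dirichlet, kl_divergence

raw = torch.randn(8, 16, requires_grad=True)           # stand-in for the encoder output
alpha = torch.nn.functional.softplus(raw) + 1e-3        # strictly positive concentrations
target = torch.full_like(alpha, 0.7)                     # target alpha_i in the paper's [0.5, 0.99] range

z = Dirichlet(alpha).rsample()                           # pathwise sample: gradients flow back to alpha
loss = z.sum() + kl_divergence(Dirichlet(alpha), Dirichlet(target)).mean()
loss.backward()                                          # works because rsample and the KL are differentiable
```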
Pixels in the scan are dimensionless Hounsfield units (HUs) in the range [-3000,3000] ∈ℝ. HUs measure the intensity of an X-ray beam, which is altered based on the density of a structure. In this context, HU values below -1000 correspond to air, above 400 are bone, and in between are tissues. Since this work is concerned with lesions which are based in the tissues, upper and lower limits are set for the HU and values are scaled to the range (0,1) as in <cit.>. This scaling will help to homogenise structures of bone and air to reduce variation. §.§ Model Description and Training §.§.§ Initial VAE Training The VAE architecture proposed in this paper is visualised in Figure <ref>. The architecture is loosely adapted from <cit.> with additional hyperparameter tuning and different activation functions. The encoder component uses blocks of 2D Convolutional (Conv) layers with a Gaussian Error Linear Unit (GELU) activation function <cit.> and 2D Batch Normalisation <cit.>. For the Gaussian VAE, the output of the encoder is used in two separate 2D Conv layers for the mean (μ) and log variance ( log(σ^2) ), whereas in the DirVAE a single linear layer is used for the alpha (α) parameters. These layers form a latent space of lesion feature representations for the respective models. The decoder takes a parameterised version of the latent vectors, sampled from an n-dimensional Gaussian or Dirichlet distribution. The decoder is a symmetric architecture which applies upsampling to the feature maps to reconstruct the images: firstly, with a 2D Convolutional Transpose layer and secondly, using a combination of bilinear interpolation with 2D Conv layers. This second approach is less computationally expensive and helps avoid artifacts <cit.>. The decoder produces a tensor of the same shape as the input containing the reconstructed images, which are then evaluated against the original images in the loss function. The loss function is a weighted combination of three terms: the L1 Loss, the Kullback-Leibler Divergence (KLD) <cit.> and the Structural Similarity Index Measure (SSIM) <cit.> or the Multi-Scale SSIM (MS-SSIM) <cit.>, for each image i, as follows: (1/(batch_size · base)) ∑_i=1^n [ λ·ψ· L1 Loss_i + (1-λ) ·γ· SSIM_i + a ·β_norm· KLD_i ]. The scale factor (batch_size·base)^-1 is applied so that the values are consistent across different hyperparameters; `base' is a scalar parameter controlling the number of feature maps in the VAE model. The first two components, L1 Loss and either SSIM or MS-SSIM, measure image reconstruction quality, and the KLD is the standard measure of latent space smoothness <cit.>. The reconstruction metrics are balanced using the hyperparameter constant λ∈[0,1]. Two other hyperparameters are used to weight these components, ψ∈{1,2,3} and γ∈{0, 1, batch_size}, which is used to either exclude or include the mean or the sum of the SSIM. Finally, the KLD is scaled by the hyperparameter β_norm = β·latent_size/image_size; as discussed in <cit.>, this formulation with β>1 leads to better disentanglement of the latent space. Here, β values are in the range [1,50]. An annealing function a was also included which linearly decreases the KLD by a maximum of 1 across the training epochs. The loss function was altered based on the above hyperparameters to find a combination which balanced the competing objectives of image quality and latent space smoothness.
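A rough sketch of the weighted loss above for the Gaussian case is given below, assuming an external SSIM implementation such as pytorch_msssim; using 1 - SSIM as the reconstruction penalty (so the term decreases as similarity increases) and the closed-form Gaussian KLD are our assumptions and not necessarily the exact training formulation.

```python
import torch
import torch.nn.functional as F
from pytorch_msssim import ssim   # assumed SSIM implementation

def vae_loss(x, x_hat, mu, logvar, lam, psi, gamma, beta_norm, anneal, base):
    """Weighted L1 + SSIM + KLD loss, scaled by 1/(batch_size * base) as in the equation above."""
    batch_size = x.shape[0]
    l1 = F.l1_loss(x_hat, x, reduction="sum")
    ssim_term = 1.0 - ssim(x_hat, x, data_range=1.0)           # assumption: penalise dissimilarity
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    total = lam * psi * l1 + (1 - lam) * gamma * ssim_term + anneal * beta_norm * kld
    return total / (batch_size * base)
```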
In total, the VAE models have 12 tunable hyperparameters which were explored using a random search strategy, including the upper and lower bound for the HU, the number of feature maps in VAE layers (base), the size of the latent vector, the 4 parameters in the loss function in equation <ref>, whether to use the SSIM or MS-SSIM, whether or not annealing was applied to the KLD, the learning rate and batch size.[Code for this paper, including hyperparameters used during the random search, is available from the GitHub page: <https://github.com/benkeel/VAE_lung_lesion_BMVC> ] The DirVAE had an additional hyperparameter for the target alpha parameters, which the KLD compares against and moves towards. The values are in the range α_i ∈ [0.5, 0.99], ranging from a sparse and disentangled distribution to almost a uniform distribution at higher values (c.f. Figure <ref> (a) and (b)). The dataset of 875 patients was randomly split 70/30 into train and test sets with approximately 613 and 262 patients. The VAE reconstructions were evaluated qualitatively, and quantitatively with the average SSIM, Mean Squared Error (MSE) and Mean Absolute Error (MAE), which are conventionally used in the literature. §.§.§ Fine-Tuning and Classification After initial training, the loss function (<ref>) is updated to add a new term `BCE_i', which is the binary cross entropy loss <cit.> of the MLP malignancy classifier <cit.> shown in the model architecture (Figure <ref>). The aim is to enable the VAE to be simultaneously useful for reconstruction and classification. We employ a greedy optimisation strategy similar to Expectation-Maximisation (EM) optimisation, as described in the following pseudocode. * Train the VAE model using loss function (<ref>) and extract the latent vectors. * Using these latent vectors, find optimal hyperparameters for the MLP classifier using BCE loss. * Repeat steps 1 and 2 until convergence, adding the BCE loss of the current optimal MLP to the loss function. The MLP hidden layers include GELU activation, dropout and batch normalisation, with a sigmoid activation on the output layer to return probabilities, and a parameter τ controls the threshold beyond which an example is predicted as positive. The key hyperparameters which were tuned using a random search strategy include τ with a value in the range [0.4,0.6], the learning rate, batch size, number of nodes in each layer, whether there are 4 or 5 layers and a dropout probability. The 13,852 slices were split into 5 sets with train, validation and test sets in ratio 3:1:1 for 5-fold cross-validation; evaluation metrics are reported as the mean of these runs with standard deviations given for AUC and accuracy. Classification performance will be evaluated using the AUC primarily, though we also report the accuracy, precision, recall, specificity, and F1-score. The VAE and MLP models were built in Python 3.9 using PyTorch 1.12 and trained using the Adam optimiser <cit.>. §.§.§ Clustering and Latent Space Exploration Two clustering methods, K-Means <cit.> and CLASSIX <cit.>, were used to partition the latent vectors into distinct groups. An optimal range of values for the parameter k, the number of clusters in K-Means, was investigated with an elbow graph of the sum of squared distances within each cluster to find a good balance in the number of clusters and their density. The density parameter in CLASSIX is chosen using a grid search to maximise separation by malignancy class.
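The elbow analysis mentioned above can be reproduced with a few lines of scikit-learn; the range of k values and the file name are placeholders rather than the values used in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

latents = np.load("gvae_latents.npy")          # hypothetical array of latent vectors, shape (n_slices, latent_dim)
inertias = {}
for k in range(10, 200, 10):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(latents)
    inertias[k] = km.inertia_                   # within-cluster sum of squared distances
# Plot k against inertia and pick the "elbow" where the curve begins to flatten.
```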
K-Means is non-deterministic and so results are averaged over 50 runs. Directions in the latent space corresponding to feature changes were found by collecting two groups of latent vectors, with and without a desired feature, and taking the average direction vector between the groups. Latent traversal figures were produced by applying multiples of the direction vector to a new image and plotting the decoded images. § RESULTS §.§ VAE Lung Lesion Reconstructions Here a random sample of 16 images and the reconstructions by the GVAE are qualitatively reviewed in Figure <ref>. Firstly, observe that the overall macrostructure is captured well and so are most of the microstructures; however, some heterogeneity is lost. The most obvious missing information is that some of the lung parenchyma, which could be alveoli, are not fully captured in the reconstructed versions. Clinical collaborators specialising in oncology, AQ and DJ, confirmed the reconstructions captured the important clinical features considered in diagnosis. Based on a hyperparameter search of around 120 GVAE and 40 DirVAE candidates, overall the DirVAE had a poorer image reconstruction. The best GVAE achieved SSIM of 0.89, MSE of 0.0032 and MAE of 0.027, whereas the best DirVAE achieved SSIM 0.65, MSE of 0.017 and MAE of 0.055. §.§ Classification Performance Results are generated from a mean of 5-fold cross-validation of MLP classifiers and are summarised in Table <ref>. Separate results are given for 1: malignant vs non-malignant and 2: malignant vs benign with ambiguous excluded. This method achieves state-of-the-art results exceeding the maximum AUC of 0.967 from <cit.> (c.f. Section <ref>). For a direct comparison with similar methodology, <cit.> achieved AUC 0.936 after retraining the encoder. For a comparison to clinical radiologist performance, <cit.> conducted a study based on 60 CT scans evaluated by 4 expert radiologists and compared to pathologically confirmed cases. The radiologists had a mean AUC of 0.846, recall of 0.749 and specificity of 0.81. Results provided give performance metrics after initial training and after Expectation-Maximisation optimisation with the classifier loss (`X_EM'). Clearly, the fine-tuning improves the performance of the classifiers, while the VAE performance metrics for image reconstruction and the KLD do not significantly change and in most cases improve. The best individual model performance outside of cross-validation is a malignant vs benign classifier using GVAE latent vectors which achieved AUC 0.99 and 95.9% accuracy. Overall the EM-optimised VAEs had virtually identical performance: the GVAE had the highest AUC of 0.98 and the DirVAE had the highest accuracy of 93.9%. §.§ Clustering and Latent Space Exploration In Figure <ref> visual similarities can be observed; for instance, in (a) there is a large circular mass in the centre, whereas in (b) more bone is concentrated in the top left corner. Clustering statistics for the GVAE (G) and DirVAE (D) models with 131 clusters are given in Table <ref>; these show that the latent space is capable of separating the lesions based on clinically relevant features such as tumour size and malignancy class, and furthermore attempts to group multiple images of the same patient together. It is worth noting that this clustering is post EM optimisation, which increased the separation by malignancy class. This indicates that the VAE was encouraged to encode features related to class in the latent space.
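The direction-finding and traversal procedure described at the start of this subsection amounts to simple latent arithmetic; in the sketch below, `decoder` stands for the trained VAE decoder, and the step multiples are arbitrary illustrative values.

```python
import numpy as np

def feature_direction(latents_with, latents_without):
    """Average direction separating two groups of latent vectors (e.g. large vs small lesions)."""
    return latents_with.mean(axis=0) - latents_without.mean(axis=0)

def traverse(decoder, z_start, direction, steps=(0.0, 0.5, 1.0, 1.5, 2.0)):
    """Decode a new lesion at several multiples of the direction vector."""
    return [decoder(z_start + s * direction) for s in steps]
```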
However, the clusters already had a high separation before using the classifier loss, which indicates the latent space naturally encodes these meaningful attributes. Finally, to demonstrate the capabilities of VAE models in this domain, in Figure <ref> there are two examples of latent space traversals (c.f. Section <ref>). These directions were applied to a new lesion not used in finding the direction, and the transformation appears to generalise well, maintaining the surrounding bone structure and generating realistic images at each step. Animations of latent traversals, showing smooth transitions with more samples, were also generated by this analysis. Traversals are constructed by sampling from the latent space, either by using a start and end image and interpolating, or by choosing a start point and moving in the direction of the desired feature as in Figure <ref>. Note that all images other than the start point are synthetic. Further examples are provided on the GitHub page. § DISCUSSION The most significant contribution of this work is the novel use of DirVAEs in the cancer imaging domain. This work has also shown that a VAE and MLP combination can achieve state-of-the-art classification performance for lung lesion diagnosis with AUC 0.98, which compares favourably to radiologist performance of 0.846 and is on par with the best AI-based approaches. Overall the results suggest that both approaches produce good classification models; the key difference is that the DirVAE demonstrates greater disentanglement and separation by clinically meaningful characteristics, whilst the GVAE produces better reconstructions. In practice, the best model will likely depend upon the context, dataset and specific task. This approach for encoding the images with a VAE lends robustness and an element of explainability, as we can observe that lesions with similar characteristics have representations that are close together in the latent space, as demonstrated by the clustering results. This aspect of the work may be valuable for generating pseudo-labels in tasks without a ground truth. Although this paper demonstrates accurate classification models, it is important to discuss some of the limitations of the proposed method. Firstly, the labels are generated by expert radiologists rather than the gold standard of pathological confirmation. Secondly, the data uses a non-standardised slice thickness; while some may argue it is better to standardise, this approach may be more generalisable to the real world. One further limitation of the 2D approach is that slices from the same patient are not independent, in both structure and the likelihood of malignancy. While extending this analysis to 3D may produce a more robust model, data samples would reduce from 13,852 to 875 and model complexity would increase. Some of the lung parenchyma were not fully captured by the latent vectors, as demonstrated in Figure <ref>. However, the lung naturally has more connective tissue septa than other parts of the body and these hold little relevance to malignancy diagnosis, meaning that failure to capture the parenchyma could actually increase the signal-to-noise ratio. Further experimentation is needed to determine whether they are important for the overall classification. § CONCLUSION AND FUTURE WORK Overall, (1) VAEs with Gaussian and Dirichlet priors were trained to produce a latent space which was capable of capturing macro details to a very high standard and micro details to a satisfactory standard.
(2) Clustering algorithms were implemented, with results showing that latent vectors were clustered by patient and lesion type and that the Dirichlet prior was better at separating the data in this way. (3) MLP classifiers for malignant or benign lesions were trained using latent vectors from the VAEs; the best model achieved state-of-the-art performance with an AUC of 0.98 and 93.1% accuracy. Future work could include aggregating 2D slice-level predictions into higher-level predictions, such as at the 3D lesion or patient level. This would mitigate the limitations associated with a 2D approach, including slice thickness and independence of samples. Further improvements to the VAE methodology could include segmenting bone and fat to remove this impact from the latent space. Additionally, the latent space exploration could be extended to see how different features affect classifications; for instance, using the tumour growth direction or other feature changes such as adding/removing parenchyma to see the impact on the probability of malignancy. Methods for latent direction discovery could be applied by selecting the best traversals based on metrics such as the largest change in prediction score. Finally, DirVAE latent traversals along single dimensions could be implemented to demonstrate its disentanglement and to add value for model interpretation.
http://arxiv.org/abs/2311.15719v1
{ "authors": [ "Benjamin Keel", "Aaron Quyn", "David Jayne", "Samuel D. Relton" ], "categories": [ "cs.CV", "cs.AI", "cs.LG" ], "primary_category": "cs.CV", "published": "20231127111233", "title": "Variational Autoencoders for Feature Exploration and Malignancy Prediction of Lung Lesions" }
Service de Physique Théorique, Université Libre de Bruxelles (ULB), Boulevard du Triomphe, CP225, B-1050 Brussels, Belgium; Département de Physique, Université de Montréal (UdeM), Succ. Centre-Ville, Montréal, Québec, H3C 3J7, Canada; Instituto de Física Teórica UAM/CSIC, Universidad Autónoma de Madrid, Cantoblanco 28049 Madrid, Spain; Instituto de Física Teórica UAM/CSIC, Universidad Autónoma de Madrid, Cantoblanco 28049 Madrid, Spain; Service de Physique Théorique, Université Libre de Bruxelles (ULB), Boulevard du Triomphe, CP225, B-1050 Brussels, Belgium; Instituto de Física Teórica UAM/CSIC, Universidad Autónoma de Madrid, Cantoblanco 28049 Madrid, Spain; Departamento de Física, ETSIDI, Universidad Politécnica de Madrid, 28012 Madrid, Spain; Instituto de Física Teórica UAM/CSIC, Universidad Autónoma de Madrid, Cantoblanco 28049 Madrid, Spain. A follow-up of a subsolar black hole candidate identified in the second part of the third observing run of the LIGO-Virgo-KAGRA collaboration is carried out. With a search signal-to-noise ratio of 8.90 and a false-alarm rate of 1 per 5 years, close to the usual thresholds for claiming a gravitational-wave event, we cannot exclude a noise origin. A complete Bayesian parameter estimation of this candidate, denoted SSM200308, reveals that if the signal originates from a compact binary coalescence, the component masses are m_1= M_⊙ and m_2 = M_⊙ (90% credible intervals) with at least one component being firmly subsolar, below the minimum mass of a neutron star. This discards the hypothesis that the signal comes from a standard binary neutron star. The signal coherence test between the two LIGO detectors brings support to a compact object coalescence origin. Analysis of the subsolar-mass black hole candidate SSM200308 from the second part of the third observing run of Advanced LIGO-Virgo Ester Ruiz Morales January 14, 2024 ==================================================================================================================================== § INTRODUCTION Since the very first detection of a gravitational wave event by LIGO in September 2015 <cit.>, the LIGO-Virgo-KAGRA (LVK) collaboration has reported nearly a hundred gravitational-wave (GW) events from the coalescence of compact binary systems <cit.>. These broad-band GW detectors are able to detect a wide range of compact binary coalescence (CBC) masses and are even sensitive to the merging of hypothetical subsolar mass (SSM) compact objects m< 1 M_⊙. As stellar evolution models predict that neither black holes (BH) nor neutron stars can be significantly lighter than one solar mass, the detection of SSM compact objects would clearly indicate a new formation mechanism alternative to the classical scenario. The discovery of an SSM merger would therefore have revolutionary implications for astrophysics, cosmology and fundamental physics. Several GW searches for CBCs having at least a component mass of less than 1 M_⊙ have been carried out using the Advanced LIGO-Virgo data <cit.> with no firm detection. However, in the latest LIGO-Virgo observing run, O3b <cit.>, three candidates of SSM binary black hole events were reported <cit.>. One candidate found in O2 data <cit.> was also analysed in <cit.>. Those triggers are not classified as confirmed SSM GW events but rather as candidate events due to their false alarm rate (FAR) being too large to confidently claim the existence of such revolutionary objects.
However, these candidates are very promising and, as the sensitivity of the detectors improves and observation time is accumulated <cit.>, the perspectives for the future detection of an SSM compact object are hopeful. In this work, we further investigate one of these SSM triggers, the candidate event observed on March 8th 2020 -referred to here as SSM200308- reported in Table <ref>. With a FAR of 1 per 5 years, SSM200308 is the most significant candidate of the search, found by <cit.> in coincidence in both LIGO Hanford and LIGO Livingston detectors. Even though SSM200308 did not generate a trigger in Virgo with a signal-to-noise ratio (SNR) above the single detector threshold, Virgo was taking data at that time, which we will include in the parameter estimation (PE). We perform a follow-up of this candidate and analyze the data in detail, performing a careful PE of the signal. As a by-product, the PE allows us to infer the probability that the source of SSM200308 has SSM components, if one assumes that the signal comes from a binary black hole merger event. The goal of this work is not to claim the detection of SSM black holes by the LVK collaboration. The possibility that the candidate is not of astrophysical origin but induced by environmental or instrumental noise cannot be excluded. Given the expected increase in sensitivity of future observing runs, this work aims to show that a proper PE on such long-duration and low-mass signals can be performed, by using Reduced Order Quadrature (ROQ) methods <cit.>, in preparation for O4, O5, and subsequent SSM BH searches. This paper is organized as follows. In Section 2, we describe the method used to perform the PE. In Section 3, we present the inferred properties of the source. In Section 4, we present the tests carried out to assess the significance of the candidate and investigate the potential nature of the source of SSM200308 before concluding in Section 5. § METHOD The candidate was found in a dedicated search for GWs from compact binaries with at least one component below one solar mass performed on the Advanced LIGO-Virgo O3b run <cit.>. The pipeline reports detector frame masses of 0.78 M_⊙ and 0.23 M_⊙, with a FAR of 0.20 yr^-1 and a combined network SNR of 8.90. Given the time of O3b coincident data suitable for observation T_obs = 125.5 days, the search would produce a higher-ranked candidate in 1-exp(-T_obs·FAR)=6.5% of searches on data containing only noise, assuming a Poisson distribution for the background. In the following, we analyze SSM200308 assuming that it comes from the coalescence of two compact objects. The properties of the source are inferred by performing a Bayesian PE on the data from LIGO Livingston, LIGO Hanford, and Virgo. The strains are directly obtained from the O3b open-access data <cit.><cit.>. Looking at the data quality, we found only two very minor glitches, one in Livingston and one in Virgo at 226.3 s and 252.0 s before coalescence respectively. Even though these were not expected to significantly bias the PE, we removed them using <cit.>. The median power spectral density (PSD) for each detector was computed from a posterior distribution of PSDs as estimated by . We choose the waveform model <cit.> with spin parameters measured at a reference frequency of f_ref = 100 Hz to fit our candidate GW signal. The priors are purposely chosen to be uninformative and broad on the 15 parameters, to minimize bias in PE.
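As a quick sanity check on the numbers quoted in the search description above, the chirp mass implied by the reported detector-frame masses and the probability of a higher-ranked noise-only candidate can be computed directly; this is purely illustrative arithmetic and not part of the analysis pipeline.

```python
import numpy as np

m1, m2 = 0.78, 0.23                               # detector-frame masses reported by the search (M_sun)
mchirp = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2      # ~0.356 M_sun
t_obs = 125.5 / 365.25                            # O3b coincident observing time, in years
far = 0.20                                        # false-alarm rate in yr^-1
p_noise = 1.0 - np.exp(-t_obs * far)              # ~6.6%, matching the ~6.5% quoted above
print(round(mchirp, 3), round(100 * p_noise, 1))
```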
More precisely, we take uniform priors in component masses and spins, comoving volume, sky location and time of coalescence. The chirp mass M_c was intentionally constrained in a narrow range of M_c ∈ [0.351,0.355] M_⊙ around the expected M_c from the search. Indeed, we expect the chirp mass to be the best-constrained parameter as it is the dominant quantity dictating the frequency and phase evolution of the GW signal <cit.>. Given the very small relative error that can be expected for M_c, it would be otherwise difficult for the nested sampler to find the extremely narrow peak in the chirp mass posterior. The final posterior distribution of the chirp mass is narrower than this prior zoom, which means that this choice does not bias the shape of the posterior distribution.The PE is performed with a template starting at 37 Hz. Assuming the chirp mass provided by the search, the signal is expected to last ∼ 245 s. Therefore, we analyze 256 s of data with the help of ROQ methods <cit.>, which greatly speeds up the Likelihood evaluation time. We generate the ROQ basis in a parameter space compatible with the priors using the algorithm introduced in Ref. <cit.>. To make sure that this method does not bias our PE, we also perform the analysis without the ROQ method on a smaller analysis duration of 120 s (with a template starting at 50 Hz). The two methods give similar results for the parameters of interest, i.e. the source masses, with a larger uncertainty for the first 120 s PE, as expected, given that we approximately lose 11% of SNR by starting the analysis at 50 Hz instead of 37 Hz with ROQ. A lower value of the low-frequency cut-off could have been considered, however, the SNR expected to be gained is negligible. For example, if we had used a low frequency cut-off of 30Hz, we could have gained at most 2.7% SNR, but the signal would be ∼430s, which presents some challenges to build the PSD and guarantee the quality of the data. To sample the posterior distribution we use the <cit.> Nested Sampling routine [The configuration files for the PE and PSDs along with PE results -corner plots and posteriors- are all available on https://github.com/MarinePrunier/Analysis-Of-Subsolar-Mass-Black-Hole-Candidates-In-Advanced-LIGO-Virgo-Data.gitgithub.]. § PROPERTIES OF THE SOURCE OF SSM200308 Table <ref> summarizes the values found for several significant parameters of the source of SSM200308, assuming a compact binary merger origin. It has individual source-frame masses m_1= M_⊙ and m_2 = M_⊙ as shown in Fig. <ref>, for each parameter, we report the median value and the 90 % credible interval. The marginalized posterior distribution for the first mass favors a mass lower than 1 M_⊙ at 92% and for the second mass, the whole posterior distribution lies below 1 M_⊙. These component masses were computed using the chirp mass M_c and the mass ratio q (Fig.<ref>). The detector frame chirp mass is tightly constrained to be  M_⊙, allowing us to constrain with relatively high accuracy the source properties of SSM200308 in spite of its low SNR. It is interesting to note in Table <ref> that the effective inspiral spin parameter χ_eff is relatively well measured and, with a value of χ_eff =, it is found to be significantly larger than 0.Therefore there is a high probability that at least one component has a non-zero spin. The luminosity distance d_L and inclination angle θ_JN posterior distributions are shown together in Figure <ref>. 
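Since the component masses are derived from the chirp mass and mass ratio posteriors, the conversion is the simple algebraic map sketched below; the convention q = m_2/m_1 ≤ 1 and the illustrative input values are our assumptions.

```python
def component_masses(mchirp, q):
    """Invert M_c = (m1*m2)**0.6 / (m1+m2)**0.2 with the convention q = m2/m1 <= 1."""
    m1 = mchirp * (1 + q) ** 0.2 / q ** 0.6
    return m1, q * m1

# Illustrative values close to those reported by the search pipeline:
print(component_masses(0.353, 0.30))   # roughly (0.77, 0.23) in detector-frame solar masses
```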
θ_JN corresponds to the angle between the system’s total angular momentum and the line of sight from the source to the observer. As expected from a CBC event, in the absence of observation of higher order modes, the two parameters are strongly correlated. The corner plot shows a clear bimodal distribution for θ_JN likely due to the fact that one cannot distinguish whether the system is being observed face-on (θ_JN∼ 0) or face-away (θ_JN∼π), nevertheless the system being edge-on (θ_JN∼π/2) seems disfavoured. In Figure <ref> we show the posterior sky localization of the event, which is relatively well localized, thanks to the three detectors being used in the PE analysis. The posteriors of the source of SSM200308 have converged to well-defined distributions that differ from their prior distribution (uniform in detector frame component masses, cosθ_JN and power-law for d_L). Nevertheless, it is known that GW signals can be mimicked by Gaussian noise <cit.> or non-Gaussian transients, especially given the relatively low SNR and high FAR. In the following section, we discuss the statistical significance of the candidate. § DISCUSSION§.§ Statistical test to assess the significance of the candidate The LVK collaboration's search for SSM black hole binaries <cit.> shows that SSM200308 is a promising SSM candidate. With its low false-alarm rate (FAR) of 0.20 yr^-1 and its global signal-to-noise ratio of 8.90, the candidate is not far from, but still less significant than some confirmed events of O3b having similar SNR and FAR; e.g. GW200216_220804 with FAR of 0.35 yr^-1 and SNR 9.40 or GW191230_180458 with FAR 0.13 yr^-1 and SNR 10.30 <cit.>. Furthermore, one has to take into account that since SSM is a more speculative source of GWs, the significance required to claim a detection will be higher. One can define a Bayes factor that characterizes the model evidence, quoted B_S, N=Z_s/Z_n, which is the evidence for a coherent signal hypothesis divided by that for Gaussian noise. For the SSM200308 candidate, the natural logarithm of the Bayes factor given with our parameter estimation is ln(B_S, N) =. In Refs. <cit.> a test has been developed which can discriminate between a signal coherent model and an incoherent signal model. The Bayesian coherence ratio (BCR) computes the odds between i) the hypothesis that a coherent CBC signal is present in the data and the hypothesis that instead, ii) the data presents gravitational-wave–like glitches occurring independently in each detector and mimicking a CBC signal. Using the coherent Bayes factor and the individual Bayes factors in each interferometer, listed in table <ref>, we compute the BCR for SSM200308 and find a value of lnℬ_coh,inc = . According to Ref. <cit.> a positive value of ln(BCR) > 1 indicates a preference for the coherent signal hypothesis over the instrumental-artifact one. Although we have not calculated the ℬ_coh,incfor a series of background triggers, the BCR of SSM200308 greatly favors the coherent signal model. A comprehensive analysis of coherence factors on background triggers is essential to better assess the significance of SSM candidates. A detailed study will be presented in future work.§.§ Nature of the source signalGiven the results of the PE discussed so far, we can try to asses the possibilities for the nature of the source of SSM200308.The neutron star nature of SSM200308 compact objects seems disfavored. 
Indeed, it is known from observations and simulations  <cit.> that it should be difficult to form neutron stars with masses below ∼ 0.9 M_⊙[However the possible observation of a neutron star mass as low as 0.77^+0.20_-0.17 M_⊙ was claimed in <cit.>.]. This is consistent with the detection of the first neutron star binary event GW170817 <cit.> with source masses around 1.4 M_⊙, and with the measured neutron star masses from binary pulsars <cit.>. Moreover, no known stellar evolution scenario can produce a black hole with a subsolar mass <cit.>. Given the secondary component mass is firmly subsolar (having all the posterior distribution below 0.4M_⊙), if the signal does indeed come from a genuine astrophysical event, we are in the presence of compact objects from a new formation mechanism, alternative to the classic BH formation scenario.One can use the sensitivity volume values ⟨ VT ⟩ obtained in the O3 search of SSM objects <cit.> to get an estimate on the rates for events similar to SSM200308, assuming that one event has been observed in the lowest chirp mass bin of Figure 1 of Ref. <cit.>. We obtain merger rates between 400 and 20000 Gpc^-3 yr^-1 at 90% C.L. for thesearch in this bin, and similar numbers for other pipelines.These rates are comparable or even higher than the inferred rates for vanilla neutron star mergers, but for objects that are expected to be outliers of the main neutron star population. Another origin should therefore be seriously considered.The nature of the source signal remains an open question but there exist several theories about how such compact objects of subsolar mass might form. The subsolar origin of SSM200308's source could be explained by exotic compact objects such as boson stars <cit.>, exotic black holes <cit.> or by theoretical compact objects of primordial origin: Primordial Black Holes (PBHs) <cit.>. PBHs may have formed in the early Universe, shortly after inflation ended, from the direct collapse of highly overdense regions. Many studies show that the thermal history of the Universe can enhance the formation of PBHs during the Quantum Chromodynamic (QCD) phase transition (t ∼ 10 μs in the early universe) <cit.> generating a distribution of PBH masses sharply peaked around one solar mass, leading to an enhanced rate of binary merger events in the sub-solar and solar mass range <cit.>. The inferred characteristics of SSM200308, if truly coming from a GW event, would be consistent with the coalescence of two PBHs. § CONCLUSIONIn this work, we have performed, using ROQ methods, an in-depth analysis of one of the most significant candidates reported in the O3b search for SSM black hole binaries <cit.> with SNR = 8.90 and FAR = 0.20 yr^-1. Even if the candidate does not show enough significance to claim the firm detection of a gravitational wave event, it is of great interest to study and characterize the candidate. We also demonstrate that the ROQ method can be efficiently used to reduce the computational cost of the PE for such long signals. The inferred masses show that SSM200308, if coming from a GW event, is consistent with a binary of two SSM black holes; m_1=M_⊙ and m_2 =M_⊙ (90% credible intervals). Given the very low masses of the candidate's components, their neutron star nature seems disfavoured <cit.>. The question of the nature of the source of SSM200308 therefore remains open. The unusual characteristics of SSM200308 could be explained by the two components being black holes of primordial origin. 
If SSM200308 is a real signal, we can expect that improved detector sensitivities and longer observing time would within a few years allow for the firm detection of an SSM black hole. § ACKNOWLEDGEMENTS We would also like to thank Bhooshan Gadre and Viola Sordini for their work reviewing this paper within the LIGO and Virgo Collaborations respectively. The authors acknowledge the use of the publicly available codes: <cit.> and <cit.>. They acknowledge support from the research project PID2021-123012NB-C43 and the Spanish Research Agency (Agencia Estatal de Investigación) through the Grant IFT Centro de Excelencia Severo Ochoa No CEX2020-001007-S, funded by MCIN/AEI/10.13039/501100011033. GM acknowledges support from the Ministerio de Universidades through Grant No. FPU20/02857 and JFNS acknowledges support from MCIN through Grant No. PRE2020-092571. S.C. acknowledges support from the Belgian Francqui Foundation through a Francqui Start-up Grant, as well as the Belgian Fund for Research through MIS and IISN grants. The authors are grateful for computational resources provided by the LIGO Laboratory and supported by the National Science Foundation Grants PHY-0757058 and PHY-0823459. This research has made use of data or software obtained from the Gravitational Wave Open Science Center <cit.> (gw-openscience.org), a service of LIGO Laboratory, the LIGO Scientific Collaboration, the Virgo Collaboration, and KAGRA. LIGO Laboratory and Advanced LIGO are funded by the United States National Science Foundation (NSF) as well as the Science and Technology Facilities Council (STFC) of the United Kingdom, the Max-Planck-Society (MPS), and the State of Niedersachsen/Germany for support of the construction of Advanced LIGO and construction and operation of the GEO600 detector. Additional support for Advanced LIGO was provided by the Australian Research Council. Virgo is funded, through the European Gravitational Observatory (EGO), by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale di Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by institutions from Belgium, Germany, Greece, Hungary, Ireland, Japan, Monaco, Poland, Portugal, Spain. The construction and operation of KAGRA are funded by the Ministry of Education, Culture, Sports, Science and Technology (MEXT), the Japan Society for the Promotion of Science (JSPS), the National Research Foundation (NRF) and the Ministry of Science and ICT (MSIT) in Korea, and Academia Sinica (AS) and the Ministry of Science and Technology (MoST) in Taiwan.
http://arxiv.org/abs/2311.16085v1
{ "authors": [ "Marine Prunier", "Gonzalo Morrás", "José Francisco Nuño Siles", "Sebastien Clesse", "Juan García-Bellido", "Ester Ruiz Morales" ], "categories": [ "gr-qc", "astro-ph.CO" ], "primary_category": "gr-qc", "published": "20231127185418", "title": "Analysis of the subsolar-mass black hole candidate SSM200308 from the second part of the third observing run of Advanced LIGO-Virgo" }
VehicleGAN: Pair-flexible Pose Guided Image Synthesis for Vehicle Re-identificationBaolu Li^†, Ping Liu^†, Lan Fu,Jinlong Li,Jianwu Fang,Zhigang Xu^*,Hongkai Yu^* Baolu Li and Zhigang Xu are with Chang'an University, Xi’an 710064, China. Ping Liu is with the Center for Frontier AI Research (CFAR), Agency for Science, Technology, and Research (A*STAR), Singapore 138634. Lan Fu is with University of South Carolina, Columbia 29201, SC, USA. Jianwu Fang is with Xi'an Jiaotong University, Xi'an 710049, China. Baolu Li, Jinlong Li, and Hongkai Yu are with Cleveland State University, Cleveland, OH 44115, USA. † indicates co-first authors. * Co-corresponding authors: Zhigang Xu ([email protected]), Hongkai Yu ([email protected]).December 15, 2023 ============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== Vehicle Re-identification (Re-ID) has been broadly studied in the last decade; however, the different camera view angle leading to confused discrimination in the feature subspace for the vehicles of various poses, is still challenging for the Vehicle Re-ID models in the real world. To promote the Vehicle Re-ID models, this paper proposes to synthesize a large number of vehicle images in the target pose, whose idea is to project the vehicles of diverse poses into the unified target pose so as to enhance feature discrimination. Considering that the paired data of the same vehicles in different traffic surveillance cameras might be not available in the real world, we propose the first Pair-flexible Pose Guided Image Synthesis method for Vehicle Re-ID, named as VehicleGAN in this paper, which works for both supervised and unsupervised settings without the knowledge of geometric 3D models. Because of the feature distribution difference between real and synthetic data, simply training a traditional metric learning based Re-ID model with data-level fusion (, data augmentation) is not satisfactory, therefore we propose a new Joint Metric Learning (JML) via effective feature-level fusion from both real and synthetic data. Intensive experimental results on the public VeRi-776 and VehicleID datasets prove the accuracy and effectiveness of our proposed VehicleGAN and JML.Vehicle Re-identification, Joint Metric Learning, Pose Guided Image Synthesis§ INTRODUCTIONVehicle Re-identification (Re-ID) is an important task in intelligent transportation systems <cit.>, as it allows for the retrieval of the same vehicle from multiple non-overlapping surveillance cameras. 
With the availability of vehicle surveillance datasets <cit.>, many vehicle Re-ID models have been proposed <cit.>, which have made significant progress in the past decade and gained wide interest among theresearch communities of human-machine systems, cybernetics, and transportation.However, the large viewpoint divergence of vehicle images caused by different cameras views in real world makes significant challenges for these vehicle Re-ID models <cit.>. As shown in Fig. <ref>, the same vehicles of diverse poses are ambiguous in an embedded feature subspace, leading to identification difficulties, while the feature discrimination could be enhanced if the vehicle images could beprojected to the same target pose. Inspired by our discovery of Fig. <ref>, this paper proposes to project the vehicles of diverse poses into the unified target pose so as to enhance feature discrimination. To tackle the pose-varied vehicle images effectively, the controllable various-view synthesis of vehicle images has been investigated recently <cit.>, which aims to synthesize the images of a vehicle at a target pose given an input vehicle image and a specific target pose. Existing methods of pose guided vehicle image synthesis can use two kinds of methods: 3D-based and 2D-based approaches. Those 3D-based approaches <cit.> utilize geometric 3D model to synthesize image, which might be not available or prone to errors in the real traffic surveillance scenarios due to the lack of the camera parameters and diverse vehicle poses. The 2D-based methods <cit.> use paired 2Dimages of the same vehicle in different cameras to supervise neuralnetworks to learn the transformation of the vehicle to the target pose. Although the 2D methods achieved progress in pose guided image synthesis under the supervised learning manner, they suffered from the manual annotation cost of the same vehicles in different cameras. Therefore, no matter the existing 3D or 2D methods have significant drawbacks for the pose guided vehicle image synthesis in the real world.Differently, this paper proposes the first Pair-flexible Pose Guided Image Synthesis method for Vehicle Re-ID, named as VehicleGAN, which works for both supervised and unsupervised settings without the knowledge of geometric 3D models. 1) We design a novel Generative Adversarial Network (GAN) based end-to-end framework for pose guided vehicle image synthesis, which takes the 2D vehicle image in the original pose and the target 2D pose as inputs and then directly output the new synthesized 2D vehicle image in the target pose. Using the 2D target pose as condition to control the Generative Artificial Intelligence (AI), the proposed method gets rid of using geometric 3D model. 2) The proposed VehicleGAN works for both supervised (paired images of same vehicle) and unsupervised (unpaired images of same vehicle) settings, so it is called Pair-flexible in this paper. For the pose guided image synthesize in the current vehicle Re-ID research community, the supervised (paired) setting is easy for training the Generative AI model,however the unsupervised (unpaired) setting is challenging. 3) To solve the challengingunsupervised problem, we proposed a novel method AutoReconstruction to transfer the vehicle image in original pose to the target pose and then transfer it back to reconstruct itself as self-supervision. In this way, the paired images of the same vehicles in different cameras are not required to train the Generative AI model. 
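To make the AutoReconstruction idea above concrete ahead of the full formulation in the Proposed Approach section, the following is a minimal PyTorch-style sketch. The generator, the pose estimator, and the tensor shapes here are illustrative placeholders and assumptions of this sketch, not the actual VehicleGAN implementation: the point is simply that the generator is applied twice (original pose to target pose, then back), so the only pixel-level supervision needed is the original image itself.

```python
import torch
import torch.nn as nn

# Illustrative sketch of the AutoReconstruction self-supervision loop.
# `G` (pose-conditioned generator) and `pose_net` (keypoint-heatmap estimator)
# are placeholders; the real architecture is described later in the paper.
def autoreconstruction_step(G: nn.Module, pose_net: nn.Module,
                            I_o: torch.Tensor, I_t: torch.Tensor):
    P_t = pose_net(I_t)      # target pose heatmaps (e.g., 20 channels)
    P_o = pose_net(I_o)      # original pose heatmaps
    I_ot = G(I_o, P_t)       # original content rendered in the target pose
    I_oo = G(I_ot, P_o)      # transferred back to the original pose
    # Self-supervised reconstruction term: no paired ground truth for I_ot is needed.
    loss_rec = (I_oo - I_o).abs().mean()
    return I_ot, I_oo, loss_rec
```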
After obtaining the synthesized vehicle images in different poses, simply training a traditional metric learning based Re-ID model with direct data-level fusion of real and synthetic images (i.e., data augmentation) is not satisfactory. Please see Sec. <ref> in our experiments for the degraded results of data-level fusion. This is because of the feature distribution difference between real and synthetic data. To solve this real-and-synthetic feature difference problem, we propose a novel Joint Metric Learning (JML) via effective feature-level fusion of both real and synthetic data. We conduct intensive experiments on the public VeRi-776 <cit.> and VehicleID <cit.> datasets, whose results demonstrate the accuracy and effectiveness of our proposed VehicleGAN and JML for the vehicle Re-ID problem. The main contributions of this paper are summarized as follows. * This paper proposes a novel method to project vehicles of diverse poses into a unified target pose to enhance vehicle Re-ID accuracy. * This paper proposes the first Pair-flexible Pose Guided Image Synthesis method for Vehicle Re-ID, called VehicleGAN, which works for both supervised and unsupervised settings without the knowledge of geometric 3D models. * This paper proposes a new Joint Metric Learning (JML) via effective feature-level fusion of both real and synthetic data to overcome the shortcomings of the real-and-synthetic feature distribution difference. § RELATED WORKS §.§ Vehicle Re-ID Benefiting from a series of public datasets and benchmarks <cit.>, vehicle Re-ID has made significant progress over the past decade. The common goal of previous vehicle Re-ID works is to enhance the feature discrimination of vehicles across different cameras. On the one hand, some previous works aim to learn supplemental features of local vehicle regions to reinforce the global features. Wang <cit.> utilizes 4 directions based on 20 vehicle keypoints to represent local features beyond the main backbone, taking local features as a supplement to global features. He <cit.> develops a vehicle-component detection network to integrate part constraints with global feature extraction. Meng <cit.> introduces a parsing network to divide a vehicle into 4 views, allowing view-aware feature alignment for global features. On the other hand, some previous works focus on designing more powerful neural network structures to enhance feature discrimination. Chen <cit.> implements a two-branch network including a height-channel branch and a width-channel branch to improve feature extraction ability. Lian <cit.> uses a multi-branch enhanced discriminative network to extract subtle distinguishing features. Zhao <cit.> brings a graph-based relation module to the main network backbone, combined with a cross-level complementary branch, to enhance the expression ability. Different from all the existing feature enhancement works for vehicle Re-ID, this paper proposes to project vehicles of diverse poses into a unified pose so as to enhance feature discrimination for vehicle Re-ID. §.§ Pose Guided Vehicle Image Synthesis Pose guided vehicle image synthesis allows a novel view of a vehicle to be synthesized based on a given pose. Previous works can be mainly divided into 3D-based and 2D-based approaches. The 3D-based methods rely on a 3D model of the vehicle or camera parameters to achieve perspective conversion. Furukawa <cit.> builds a 3D model from multiple images of the same object and uses it to synthesize novel views.
Garg <cit.> proposes to use the depth map of an image to assist novel view synthesis, specifically transforming each reconstructed 3D point in the depth map. Horn <cit.> uses pixel-to-pixel correspondences between the source and target images to fulfill view synthesis tasks. Zhou <cit.> proposes the use of appearance streams to map pixels in the source view to the target view. Park <cit.> adds an image completion network on this basis to enhance the mapping effect. 3D-based methods are limited by the difficulty of obtaining detailed 3D models or accurate camera parameters in real scenes. Benefiting from Generative AI, the 2D-based methods learn view synthesis from paired 2D images in various poses. Zhou and Lv <cit.> use vehicle viewpoint features, intrinsic features, and poses to generate a new view of the vehicle image. They introduce a feature-level perspective transformation in the process of vehicle view synthesis to preserve more vehicle details. These 2D-based methods can extract poses from pictures more easily, which is more advantageous in the real world. However, since they require ground truth to supervise the learning, the paired images of the same vehicle need to be manually identified. Differently, our proposed method is less constrained in the real world and can be unsupervised, without the need for identity annotations on vehicle images. §.§ Training with Synthetic Data Training with synthetic data is used as a complementary strategy when real data has some deficiencies. To tackle the issue caused by the diversity of views in vehicle Re-ID, many previous methods have utilized synthetic data. Zhou <cit.> extracts single-view features for the input image, and then generates multiple-view features based on a GAN, aiming to convert the features into a global multi-view feature representation. Lou <cit.> proposes to generate hard negative samples and cross-view samples as a supplement to the training data. Those methods take generated feature-level or image-level samples to enhance the discrimination ability of the original model. The differences between those methods and our method are: 1) VehicleGAN brings the ability to synthesize any-view vehicle images based on given poses, which is controllable and versatile. 2) Because of the distribution difference between real data and synthetic data, simply training a traditional metric learning based Re-ID model with data-level fusion (i.e., data augmentation) is not satisfactory (see Sec. <ref>), so we propose Joint Metric Learning (JML) to solve this challenge by feature-level fusion. § PROPOSED APPROACH In this paper, we propose a framework for synthetic image guided vehicle Re-ID via joint metric learning. In this section, we first illustrate the whole framework, which consists of VehicleGAN and joint metric learning, in Sec. <ref>. Then, we describe the VehicleGAN and the joint metric learning in Sec. <ref> and Sec. <ref>, respectively, in detail. §.§ Overview Fig. <ref> shows the whole framework of the proposed VehicleGAN guided vehicle Re-ID via Joint Metric Learning. The framework includes two stages: VehicleGAN and Joint Metric Learning. The former is an encoder-decoder based GAN for pose guided vehicle image synthesis, and the latter consists of two branches: a Re-ID model for real images (M_R) and another Re-ID model for synthetic images (M_S). The VehicleGAN aims to generate an image with a target pose given an original vehicle image as input. The two Re-ID models (i.e., M_R and M_S) do not share weights due to their feature difference.
With the synthetic vehicle images in the unified target pose generated by VehicleGAN, the M_S learns to identify pose-invariant features, while the M_R learns to recognize features from real images. The M_R and M_S are trained by Joint Metric Learning. The training of the whole pipeline of VehicleGAN can be implemented in an unsupervised way as well as a supervised way, which is named as Pair-flexible in this paper. §.§ VehicleGANOur VehicleGAN aims to generate synthetic images of a same vehicle under a specific target pose, which is a new controllable generative AI method.As shown in Fig. <ref>, the VehicleGAN includes one generator and two discriminators. Given an original image I_o and an image I_t with the target pose, we first extract the target pose P_t via a pose estimator ψ, , P_t=ψ(I_t). Then, a synthetic image I_o^t will be generated via the generator G, , I_o^t=G(I_o, P_t)=G(I_o, ψ(I_t)). I_o^t denotes an image which has the content of image I_o as well as the target pose of image I_t. A discriminator D_t is to distinguish the synthetic fake I_o^t from the real I_t. By our proposed AutoReconstruction, we reuse the generator to transfer I_o^t back to an image I_o^o with the content and pose of the original image I_o. The other discriminator D_o is to distinguish the synthetic fake I_o^o from the real I_o. Because of the proposed AutoReconstruction, we can supervisely as well as unsupervisedly train the whole VehicleGAN pipeline for optimization. Next, we will describe the implementation of each part in detail.§.§.§ Pose EstimationHere, we use 20 keypoints <cit.> annotated on the VeRi-776 <cit.> dataset to represent the vehicle pose. These keypoints are some discriminative positions on the vehicle, , wheels, lights, and license plates. Specifically, we adopt the Deconvolution Head Network <cit.> as ψ to estimate the vehicle pose by outputting a response map for each of the 20 keypoints. The response maps have Gaussian-like responses around the locations of keypoints. Given a target image I_t, the output pose response map is P_t=ψ(I_t) with 20 channels. §.§.§ AutoReconstruction asSelf-SupervisionGiven an original vehicle image I_o, and a target vehicle image I_t, the generator aims to synthesize a fake vehicle image with I_o content in the I_t pose. The input of the generator is the concatenation of the original image and the target pose. The generator adopts an encoder-decoder based network, similar to PN-GAN <cit.>. The ResNet blocks between the encoder and the decoder are used to transfer the identity-related invariant information and change the variable information of the pose. Reversely, the synthesized I_o^t will go through the generator to reconstruct the original image using the pose of I_o as guidance, which generates the reconstructed image I_o^o. The Original-to-Target and Target-to-Original bidirectional image transfer is named as Autoreconstruction in this paper. Because of the specially designedAutoreconstruction, the original I_o andreconstructed I_o^o can be forced to be identical as self-supervision. §.§.§ Pair-flexible SettingsThe optimization of our VehicleGAN can be performed in eithersupervised or unsupervised way. When the original image I_o and the targetimage I_t are from the same vehicle, the corresponding ground truth image of the generated image I_o^t will be I_t, which can provide full supervision information for training. An example of paired I_o and I_t of same vehicleis shown in Fig. <ref>(a) for supervised learning. 
Meanwhile, the original image I_o and the target image I_t can be from different vehicles (unpaired), as shown in Fig. <ref>(b) for unsupervised learning. This advanced pair-flexible setting is excellent for real-world usages.§.§.§ Supervised Learning with Paired DataTo optimize the whole pipeline, we adopt four loss functions: adversarial loss, pose loss, identity-preserving loss and reconstruction loss. Please note that the loss functions using the paired data of same identity are denoted as 𝔏 (supervised), while other loss functions are denoted as ℒ (unsupervised) in thispaper. Adversarial Loss: The adversarial losses ℒ_adv_1 and ℒ_adv_2 aim to make the synthetic images more similar to the real images. In specific, we want to align the generated images I_o^t with I_t and I_o^o with I_o via ℒ_adv_1 and ℒ_adv_2, respectively. We adopt two discriminators to perform the distribution alignment to distinguish whether an image is real or fake. The optimizing object function isℒ_adv_1 =𝔼_I_t∼ p_data(I_t)[logD_t(I_t)] + 𝔼_I_o^t∼ p_data(I_o^t)[log(1-D_t(G(I_o,P_t)))],ℒ_adv_2 =𝔼_I_o∼ p_data(I_o)[logD_o(I_o)] + 𝔼_I_o^o∼ p_data(I_o^o)[log(1-D_o(G(I_o^t,P_o)))]. Pose Loss: ℒ_pose is to align the poses of the synthetic images (I_o^t, I_o^o) with the guided poses during the Autoreconstruction. The pose loss is defined as ℒ_pose(ψ, I_o^t, P_t, I_o^o, P_o) = ψ(I_o^t)-P_t_2 + ψ(I_o^o)-P_o_2. Identity-preservingLoss: During the image transfer process of the VehicleGAN for the target pose, the vehicle identity information should be preserved, , keeping the identity of the synthetic image consistent with that of the original image, , I_o^t and I_o. After pose synthesis of vehicles, the semantic content, style, texture, and color of the synthetic image should be kept consistent with those of the original image. Therefore, we introduce style loss,perceptual loss, content loss to optimize the network to preserve the identity. We introduce Gram matrix <cit.>, which generally represents the style of an image, to construct the style loss ℒ_style.Let ϕ_j(I) ∈ H_j× W_j× C_j be the feature map at j-th layer of VGG network for the input image I, then the Gram matrix is defined as a C_j× C_j matrix whose elements are given by 𝒢_j(I)_c,c^' =1/C_jH_jW_j∑_h=1^H_j∑_w=1^W_j(ϕ_j(I)_h,w,c·ϕ_j(I)_h,w,c^'). Then, the style loss is formulated as the mean squared error between the Gram matrices of I_o^t and I_t as𝔏_style=∑_j𝒢_j(I_o^t)-𝒢_j(I_t)_2, where we use the feature maps of [relu1_1, relu2_1, relu3_1, relu4_1] layers to calculate the style loss. Following <cit.>, we define the perceptual loss as𝔏_per=ϕ_j(I_o^t)-ϕ_j(I_t)_2,where we use the feature map from the relu4_1 layer of VGG network to compute the perceptual loss. Also, the reconstructed image is expected to keep the same content as the source image, then, the content loss is defined asℒ_c = ∑_j^ϕ_j(I_o^o)-ϕ_j(I_o)_2,where we use the feature maps from [relu1_1, relu2_1, relu3_1, relu4_1] layers of VGG network. Therefore, the identity-preserving loss is formulated to the weighted sum of the above three losses as 𝔏_idp=β _1𝔏_style+β _2𝔏_per+β _3ℒ_c. 
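For readers who prefer code, the snippet below is a compact PyTorch-style sketch of the Gram-matrix style loss and the perceptual loss defined above. The choice of VGG layers and the use of a mean-squared discrepancy are indicated schematically and are assumptions of this sketch rather than a verbatim copy of the authors' implementation.

```python
import torch

def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    # feat: (B, C, H, W) feature map phi_j(I) from one VGG layer
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    # Normalization by C*H*W mirrors the Gram-matrix definition in the text.
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def style_loss(feats_fake, feats_real):
    # feats_*: lists of VGG feature maps (e.g., relu1_1 ... relu4_1)
    return sum(torch.mean((gram_matrix(a) - gram_matrix(b)) ** 2)
               for a, b in zip(feats_fake, feats_real))

def perceptual_loss(feat_fake, feat_real):
    # single deep layer (e.g., relu4_1)
    return torch.mean((feat_fake - feat_real) ** 2)
```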
Reconstruction Loss: A reconstruction loss is also employed to measure the pixel-wise difference between the generated images and their ground truth, which is defined as 𝔏_rec=‖I_o^o-I_o‖_1+δ‖I_o^t-I_t‖_1. In summary, we define the total supervised loss Loss_sp as a weighted sum of all the defined losses: Loss_sp=λ _1ℒ_adv_1+λ _2ℒ_adv_2+λ _3ℒ_pose+λ _4𝔏_idp+λ _5𝔏_rec. §.§.§ Unsupervised Learning with Unpaired Data The input original image I_o and the target image I_t might be from different vehicle identities, as shown in Fig. <ref>(b). The generated image I_o^t does not have ground truth for supervision. In this case, the VehicleGAN can only be optimized in an unsupervised way. Since the style loss 𝔏_style, perceptual loss 𝔏_per, and reconstruction loss 𝔏_rec require the ground truth of I_o^t for computation, we need to reformulate these three losses to achieve unsupervised learning. In addition, because of the lack of supervision, we propose a trust-region learning method to reduce the degradation effects of the background region of different vehicles in image transfer. Trust-region Learning: We propose a trust-region learning method to focus only on the trust regions (i.e., the shape of the vehicle) in the unsupervised setting. We follow <cit.> to utilize 20 keypoints of a vehicle to represent the vehicle pose. We use the positions of these keypoints to calculate the convex hull surrounding the vehicle as a mask. Let M ∈ℝ^1× H× W represent a binary mask formed by the pose P, where H and W represent the height and width of the pose feature maps. M is inferred from the feature maps of P through average pooling to represent the shape of the vehicle. The values inside the convex hull/shape of the vehicle are all set to 1 (trust regions) and to 0 outside of the convex hull. Losses Reformulation: Due to the lack of paired data of the same vehicles, we propose a trust-region style loss ℒ_style, a trust-region perceptual loss ℒ_per, and a new reconstruction loss ℒ_rec to replace 𝔏_style, 𝔏_per, and 𝔏_rec for optimizing VehicleGAN in an unsupervised way. Given the Gram matrix, we define a trust-region Gram matrix to calculate the style loss, which is defined as 𝒢_j(I,M)_c,c^'=1/C_jH_jW_j∑_h=1^H_j∑_w=1^W_j(ϕ_j(I)_h,w,c· M)·(ϕ_j(I)_h,w,c^'· M). The proposed trust-region style loss is formulated as ℒ_style=∑_j‖𝒢_j(I_o^t, M_t)-𝒢_j(I_o, M_o)‖_2, where M_t and M_o represent the trust-region masks corresponding to the poses of images I_o^t and I_o, respectively. Similarly, the trust-region perceptual loss is formulated as ℒ_per=‖ϕ_j(I_o^t)· M_t-ϕ_j(I_o)· M_o‖_2. Then, we replace the supervised loss 𝔏_idp with ℒ_idp=β _1ℒ_style+β _2ℒ_per+β _3ℒ_c in an unsupervised manner. The unsupervised reconstruction loss is re-defined as self-supervision only via ℒ_rec=‖I_o^o-I_o‖_1. The total unsupervised loss Loss_usp is reformulated as Loss_usp =λ _1ℒ_adv_1+λ _2ℒ_adv_2+λ _3ℒ_pose+λ _4ℒ_idp+λ _5ℒ_rec. §.§ Joint Metric Learning Given a pre-trained VehicleGAN obtained in Sec. <ref>, we first synthesize a unified target-pose image for each original vehicle image. Then, the original real images are fed into the Re-ID model M_R, and the synthetic images with the unified pose go through the Re-ID model M_S. M_R and M_S are optimized within a Joint Metric Learning (JML) framework. Next, we describe the implementation of each part in detail. §.§.§ Unified Target Pose Following the classification of <cit.>, we classify vehicles into nine categories, i.e., sedan, SUV, van, hatchback, MPV, pickup, bus, truck, and estate.
We manually choose one target-pose image for each of the nine categories as the unified target-pose image. Then, each original image can be translated into a synthetic image with the unified target pose by the proposed VehicleGAN, as shown in Fig. <ref>. §.§.§ Re-ID Model Following <cit.>, we adopt ResNet50 <cit.> as the backbone for the M_R and M_S models. However, we modify the stride of the last convolutional layer of the network to 1 to obtain larger feature maps with rich information. For the whole pipeline, the input vehicle image goes through the model to obtain a 2048-dimensional feature map of size 16×16. Then, the feature map goes through a global average pooling layer to output a 2048-dimensional feature vector f. Thus, the original image and the synthetic image with the unified pose are fed into M_R and M_S to obtain feature vectors f_r and f_s, respectively. Then, f_r and f_s are concatenated into a 4096-dimensional feature vector f_c as the final combined feature. §.§.§ Loss Functions To optimize the whole pipeline, we adopt two kinds of loss functions: triplet loss and cross-entropy loss. We first optimize the M_R model when only the original image is fed into M_R. We define the triplet loss L_t_r on real images as L_t_r=max(0,‖f_r-f_r^p‖-‖f_r-f_r^n‖+α ), where α = 0.3 denotes the triplet distance margin, and f_r^p and f_r^n represent the features of positive and negative samples in each mini-batch of input images. The positive samples have the same identity as f_r in a mini-batch, while the negative samples have identities different from f_r. The identification loss (cross-entropy) on real images is defined as L_id_r=-ylog(Softmax(FC(f_r))), where FC denotes a fully connected layer, Softmax outputs the normalized probability of the vehicle ID classification result, and y denotes the vehicle identity label of the input image. The M_R learned only with a weighted sum of L_t_r and L_id_r is the baseline in our experiments for comparison. Moreover, we consider optimizing M_R when the original image and the synthetic image with the unified pose are fed into M_R and M_S, respectively. The output feature vector is f_c. The triplet loss L_t_c on the combined real-and-synthetic features is formulated as L_t_c=max(0,‖f_c-f_c^p‖-‖f_c-f_c^n‖+α), and the identification loss (cross-entropy) on the combined real-and-synthetic features is formulated as L_id_c=-ylog(Softmax(FC(f_c))). In this way, the total loss function of JML is the sum of the four losses as follows: Loss_JML=L_t_r+L_id_r+L_t_c+L_id_c. § EXPERIMENTS §.§ Datasets and Evaluations We perform pose guided vehicle image synthesis and vehicle Re-ID on two public benchmark datasets, VeRi-776 <cit.> and VehicleID <cit.>, which are both real traffic surveillance data, for performance evaluation. §.§.§ VehicleGAN Experiment Settings For the pose guided vehicle image synthesis task, paired images from the same vehicle are inputs to the VehicleGAN for supervised learning, i.e., one serves as the input original image, and the other as the target pose image, which is also the ground truth for the synthesized image. For unsupervised learning, unpaired images from the same vehicle type are fed into the VehicleGAN, which is optimized through the unsupervised losses. During the inference stage, paired images from the same vehicle are fed into VehicleGAN for view synthesis and performance evaluation, i.e., the target pose image is the ground truth for the synthetic image after view synthesis.
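As a concrete reference for the joint metric learning objective defined above (before turning to the datasets), the following is a minimal PyTorch-style sketch: real and pose-normalized synthetic images are embedded by the two non-weight-sharing backbones, and the triplet and identification (cross-entropy) losses are applied both to the real features and to the concatenated real-and-synthetic features. The margin value follows the text, while the batch-hard triplet mining and the classifier heads are assumptions of this sketch.

```python
import torch
import torch.nn.functional as F

def jml_losses(M_R, M_S, classifier_r, classifier_c,
               real_imgs, synth_imgs, labels, margin=0.3):
    f_r = M_R(real_imgs)                 # 2048-d features from real images
    f_s = M_S(synth_imgs)                # 2048-d features from unified-pose images
    f_c = torch.cat([f_r, f_s], dim=1)   # 4096-d combined features

    def triplet(feats):
        # Placeholder batch-hard mining; any standard implementation would do.
        d = torch.cdist(feats, feats)
        same = labels.unsqueeze(0) == labels.unsqueeze(1)
        d_pos = (d * same.float()).max(dim=1).values          # hardest positive
        d_neg = (d + 1e6 * same.float()).min(dim=1).values    # hardest negative
        return F.relu(d_pos - d_neg + margin).mean()

    return (triplet(f_r) + F.cross_entropy(classifier_r(f_r), labels)
            + triplet(f_c) + F.cross_entropy(classifier_c(f_c), labels))
```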
§.§.§ Datasets VeRi-776 <cit.> includes more than 50,000 images of 776 vehicles. All are collected by 20 cameras in unconstrained real traffic scenes, containing many categories with diverse poses. Following <cit.>, VeRi-776 is split into a training subset (37,778 images of 576 vehicles) and a testing subset. The testing subset includes a probe subset of 1,678 images of 200 vehicles and a gallery subset of 11,579 images of the same 200 vehicles. The task of pose guided vehicle image synthesis follows the division of the original training subset and testing subset. The training subset of supervised learning has 1,048,576 pairs of vehicle images with the same identity, while the input images of unsupervised learning are randomly sampled from images of the same vehicle type. The testing subsets for both learning manners are the same, including 12,000 pairs of vehicle images with the same identity. VehicleID <cit.> has 211,763 images with 26,267 vehicles. A half of the vehicles with the same identities serve for training while the other half are used for testing evaluation. There are three test subsets with different sizes, , Test800, Test1600, and Test2400, for evaluation. Specifically, Test800 includes 800 gallery images and 6,532 probe images of 800 vehicles. Test1600 includes 1,600 gallery images and 11,395 probe images of 1,600 vehicles. There are 2,400 gallery images and 17,638 probe images of 2,400 vehicles in Test2400. For each subset, we randomly select one image for each identity as the gallery set and the rest of all images are taken as query set, and then perform evaluation. We repeat this process for 10 times, and average the evaluation performance as the final result for comparison. For the pose guided vehicle image synthesis by VehicleGAN, the training set includes 912,273 pairs of images, and the testing set consists of 12,000 pairs of images. The rest of the settings about unsupervised learning for VehicleGAN remain the same as VeRi-776. §.§.§ Evaluation MetricsThe evaluation metrics of pose guided vehicle image synthesis quality include Structural Similarity (SSIM)<cit.> and Frechet Inception Distance (FID) <cit.>. The evaluation metrics of vehicle Re-ID accuracyinclude mean Average Precision (mAP) and Cumulative Matching Characteristic (CMC) at Rank-1 and Rank-5.§.§ Implementation Details §.§.§ VehicleGANThe resolution of input image in the VehicleGAN is 256× 256. For supervised learning, we set the loss weight parameters λ _1, λ _2, λ _3, λ _4, and λ _5 to 1, 0.2, 10,000, 1, 2, respectively. β _1, β _2, and β _3 are 1,000, 0.5, 0.05, respectively. δ is set to 4. For unsupervised learning, λ _1, λ _2, λ _3, λ _4, λ _5, β _1, β _2, and β _3 are 5, 1, 20,000, 1, 0.5, 500, 0.01, 0.1, respectively. We adopt Adam as the optimizer, and set the batch size to 12. We trained 200K iterations for supervised learning, and 300Kiterations for unsupervised learning. §.§.§ Re-ID modelWe utilize ResNet50 <cit.> as the backbone network for M_R and M_S, which is per-trained on ImageNet <cit.>. The input image is resized to 224×224 before fed into the model. We set the batch size to 64, which includes 16 vehicle IDs, and 4 vehicle images for each vehicle ID. We perform data augmentation with random horizontal flipping, random cropping, and random erasing <cit.> during training. 
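A torchvision-style sketch of the augmentation pipeline just listed is given below; the padding amount, flip/erasing probabilities, and the ordering of operations are assumptions of this illustration rather than the authors' exact settings.

```python
from torchvision import transforms

# Illustrative Re-ID training augmentation (parameter values are assumed, not quoted).
train_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.Pad(10),
    transforms.RandomCrop((224, 224)),
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.5),   # operates on the tensor, hence placed after ToTensor
])
```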
We trained the M_R for 80 epochs when only the original image is fed into M_R, and 100 epochs when the original image and synthetic image are fed into M_R and M_S, respectively.§.§ Comparison for Pose Guided Vehicle Image Synthesis For supervised learning, we compare the proposed method with SOTA (state of the art) methods CGAN <cit.>, PG2 <cit.>, DSC <cit.>, and PAGM <cit.>. For unsupervised learning, we compare the proposed method with Perspective Transformation (PerTransf) <cit.>. We calculate the SSIM and FID metrics between the synthetic image and the target pose image, , ground truth, for performance evaluation. The results are shown in Table <ref>. §.§.§ Results on VeRi-776 <cit.>We can observe that for the supervised learning methods, the proposed VehicleGAN achieved the best performance with highest SSIM 0.554 and lowest FID 233.0 compared to other methods. Higher SSIM demonstrates that the proposed method can successfully convert the original image to the synthetic image with the target pose. Lower FID means that the proposed method can generate more realistic vehicle image. Note that the proposed method outperforms PAGM, the SOTA pose guided vehicle image synthesis method in supervised learning, with 0.062 increasing in SSIM and 12.3 decreasing in FID.Meanwhile, our unsupervised VehicleGAN^* obtains SSIM score of 0.437 and FID score of 285.0, which is 0.117 lower in terms of SSIM and 52 higher in terms of FID than those of VehicleGAN obtained by supervised learning. However, the performance is comparative to previous supervised learning-based works, and even better than CGAN, PG2, and DSC in terms of FID indicators. §.§.§ Results on VehicleID <cit.>Table <ref> shows that our proposed method also obtains the best performance in terms of SSIM and FID in supervised learning compared to other methods on VehicleID dataset. The VehicleGAN achieves the best SSIM 0.551 and FID 193.6, outperforming CGAN, PG2, DSC, and PAGM by 0.104, 0.125, 0.126, and 0.107 at SSIM score, respectively. For the unsupervised learning, the FID score obtained by VehcicleGAN^* is better than that of the four comparison supervised learning methods, and the SSIM is comparative to that of supervised learning methods. Meanwhile, the performance on VehicleID dataset is better than that on VeRi-776, because the vehicle view variation on VehicleID is relatively small than that of VeRi-776. §.§ Visualization Results of VehicleGANGiven a well-trained VehicleGAN, we aim to reduce the view variations for the Vehicle Re-ID tasks, , make the feature extraction of Re-ID task focus on view-invariant features. In specific, we transfer vehicle images with diverse views into synthetic images with a unified target pose via the proposed VehicleGAN. The quality of synthetic images with a unified target pose decides the feature representation ability of vehicle Re-ID task, , whether the vehicle Re-ID can extract view-invariant features. We show the visualization results of the pre-trained VehicleGAN when it transfers vehicles with diverse views into a target pose view on datasets VeRi-776 <cit.> and VehicleID <cit.>. The visualization results of VehicleGAN in supervised learning and unsupervised learning are shown in Fig. <ref> and Fig. <ref>, respectively. Benefiting from the AutoReconstruction and well-designed losses, the synthetic images reserve most of the color, style, and texture with the original images and keep the same pose with the target images in both learning manners. 
With the help of the ground truth, the VehicleGAN trained with supervised learning generates images with more realistic details in Fig. <ref>, e.g., wheels, lights, and license plates. While the VehicleGAN trained with unsupervised learning does not perform as well as in the supervised case, most of the identity information is still preserved in the synthetic images with the target pose in Fig. <ref>, which can effectively reduce the view variations among the synthetic images and benefits the following Re-ID task. §.§ Comparison for Vehicle Re-ID The M_R model, optimized when only the original images are fed into the model, is the Baseline-ResNet50 method. When the original images and synthetic images are inputs for M_R and M_S, respectively, i.e., the M_R and M_S models are optimized together by involving the pretrained VehicleGAN and Joint Metric Learning, the method is denoted as VehicleGAN+JML or VehicleGAN^*+JML. The results of vehicle Re-ID on the VeRi-776 and VehicleID datasets are shown in Table <ref> and Table <ref>, respectively. §.§.§ Results on VeRi-776 <cit.> Table <ref> shows that VehicleGAN+JML achieves the best mAP of 0.742, compared to the baseline method Baseline-ResNet50 with 0.703 mAP and VehicleGAN^*+JML with 0.736 mAP. Meanwhile, VehicleGAN+JML achieves better or comparable Rank-1 and Rank-5 compared to those of the baseline method Baseline-ResNet50 and VehicleGAN^*+JML. This demonstrates the effectiveness of pose guided vehicle image synthesis for the vehicle Re-ID task; even with unsupervised learning-based image synthesis via VehicleGAN^*, the Re-ID performance shows promising improvement. We also compare the proposed method with other SOTA vehicle Re-ID methods in Table <ref>. By optimizing M_R and M_S with the original images and synthetic images via Joint Metric Learning, the proposed method achieves the best mAP and comparable Rank-1 and Rank-5 with respect to other SOTA vehicle Re-ID methods. §.§.§ Results on VehicleID <cit.> We report the Re-ID results on the three subsets, i.e., Test800, Test1600, and Test2400, of the VehicleID dataset in terms of Rank-1 and Rank-5 in Table <ref>. It shows that, compared to the baseline Baseline-ResNet50, the proposed method VehicleGAN+JML achieves better performance, with an improvement of 0.031 Rank-1 and 0.011 Rank-5 on Test800, 0.018 Rank-1 and 0.014 Rank-5 on Test1600, and 0.02 Rank-1 and 0.018 Rank-5 on Test2400. The proposed VehicleGAN^*+JML obtained comparable performance to VehicleGAN+JML on the three subsets, indicating that JML works robustly with either the supervised or unsupervised version of the proposed VehicleGAN. We also compare the proposed method to other Re-ID methods on the VehicleID dataset in Table <ref>. The proposed method VehicleGAN+JML achieves the best Rank-1 and Rank-5 performance compared to other SOTA vehicle Re-ID methods. §.§ Ablation Studies §.§.§ Effectiveness of Loss Functions in VehicleGAN The effectiveness of the loss functions in VehicleGAN is evaluated in Table <ref>. As described in Sec. <ref> of this work, the loss functions of the training process contain six terms: ℒ_adv, ℒ_pose, ℒ_idp, ℒ_rec, 𝔏_idp and 𝔏_rec. Here, we let ℒ_adv = ℒ_adv_1 + ℒ_adv_2 be the total adversarial loss. Table <ref> shows that the best pose guided vehicle image synthesis performance is achieved when incorporating all the loss functions, for both supervised and unsupervised learning strategies. For the supervised learning strategy, ℒ_adv+ℒ_pose achieves the worst results with 0.249 SSIM and 603.1 FID.
The performance becomes better after involving 𝔏_idp and 𝔏_rec, due to incorporating the ground truth as guidance. For the unsupervised learning strategy, the performance becomes better when gradually incorporating each loss function. Even though there is no ground truth as guidance, ℒ_idp and ℒ_rec can still perform identity constraints in the process of pose guided vehicle image synthesis optimization, leading to better performance.§.§.§ Effectiveness of Feature-level FusionThe effectiveness of data-level (, data augmentation) and feature-level (, our JML) fusions is evaluated in Table <ref>. As mentioned in the introduction of this paper,there is a deviation between the distribution of real data and synthetic data. The data-level fusion even leads to some performance degradation as shown in Table <ref>. The reason for this phenomenon is that the inconsistent distribution between real data and synthetic data causes interference in the process of metric learning. Differently, our JML implements feature-level fusion for more effective metric learningto avoid this interference and achieve theperformance improvement. § CONCLUSIONSThis paper proposes a novel VehicleGAN for pose guided vehicle image synthesis, followed by a new Joint Metric Learning framework to benefit vehicle Re-ID. The VehicleGAN utilizes a proposed AutoReconstruction as self-supervision for pose guided image synthesis. In this way, the proposed VehicleGAN is pair-flexible, working for either supervised (paired) or unsupervised (unpaired) setting. VehicleGAN is used to generate pose guided synthetic images with a unified target pose, which helps the feature-level fusion based Joint Metric Learning framework to learn vehicle perspective-invariant features, reducing the Re-ID recognition difficulties introduced by diverse view angles (poses) of the same vehicles. Extensive experiments on two public datasets show that: 1) the proposed VehicleGAN can synthesize pose guided target image with high quality, 2) the proposed Joint Metric Learning framework obtains outstanding Re-ID accuracy with the assistance of VehicleGAN. unsrtnat
http://arxiv.org/abs/2311.16278v1
{ "authors": [ "Baolu Li", "Ping Liu", "Lan Fu", "Jinlong Li", "Jianwu Fang", "Zhigang Xu", "Hongkai Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231127193404", "title": "VehicleGAN: Pair-flexible Pose Guided Image Synthesis for Vehicle Re-identification" }
[email protected] [email protected] [email protected] (A1), the Indian gravitational wave detector, is expected to join the International Gravitational-Wave Observatory Network (IGWN) and begin operations in the early 2030s. We study the impact of this additional detector on the accuracy of determining the direction of incoming transient signals from coalescing binary neutron star sources with moderately high signal-to-noise ratios. It is conceivable that A1's sensitivity, effective bandwidth, and duty cycle will improve incrementally through multiple detector commissioning rounds to achieve the desired `LIGO-A+' design sensitivity. For this purpose, we examine A1 under two distinct noise power spectral densities. One mirrors the conditions during the fourth science run (O4) of the LIGO Hanford and Livingston detectors, simulating an early commissioning stage, while the other represents the A+ design sensitivity. We consider various duty cycles of A1 at the sensitivities mentioned above for a comprehensive analysis. We show that even at the O4 sensitivity with a modest 20% duty cycle, A1's addition to the IGWN leads to a 15% reduction in median sky-localization errors (ΔΩ_90%) to 5.6 sq. deg. At its design sensitivity and 80% duty cycle, this error shrinks further to 2.4 sq. deg, with 84% sources localized within a nominal error box of 10 sq. deg! This remarkable level of accuracy in pinpointing sources will have a positive impact on GW astronomy and cosmology. Even in the worst-case scenario, where signals are sub-threshold in A1, we demonstrate its critical role in reducing the localization uncertainties of the BNS source. Our results are obtained from a large Bayesian parameter estimation study using simulated signals injected in a heterogeneous network of detectors using the recently developed meshfree approximation aided rapid Bayesian inference pipeline. We consider a seismic cut-off frequency of 10 Hz for all the detectors.We also present hypothetical improvements in sky localization for a few GWTC-like events injected in real data after including a hypothetical A1 detector to the sub-network in which such events were originally detected. We also demonstrate that A1's inclusion could resolve the degeneracy between the luminosity distance and inclination angle parameters, even in scenarios where A1 does not directly contribute to improving the network signal-to-noise ratio for the event.How I wonder where you are: pinpointing coalescing binary neutron star sourceswith the IGWN, including LIGO-Aundha Anand S. Sengupta0000-0002-3212-0475 ==================================================================================================================== § INTRODUCTIONThe detection and prompt localization of GW170817 event <cit.> can be regarded asa monumental discovery that led to follow-up observations over the entire electromagnetic (EM) spectrum. This discovery resulted in the first extensive multi-messenger astronomical observing campaign <cit.> undertaken to follow up post-merger emissions from compact binary coalescence. The concurrent observations of such events via electromagnetic, neutrino, and gravitational wave detectors facilitate complementary measurements of phenomena that would otherwise remain inaccessible when observed independently. For example, the standard siren measurement of Hubble-Lemaître constant H_0 from the gravitational wave data is independent of the cosmic distance-ladder method as opposed to that in astronomy with EM radiation. 
This provides a complementary measurement of H_0, which can be used to elucidate the Hubble-Lemaître tension associated with the disparity between the measurements of Hubble constant <cit.> in the early and late universe.Early observations of post-merger emissions of binary neutron star (BNS) sourcescan be used to constrain the physical models behind the internal mechanisms of these emission processes. For instance, there are different models proposed explaining the origin of the early blue emission of the kilonova associated with GW170817. Even though the model proposing radioactive decay of heavy elements in low-opacity ejecta being a theoretically motivated candidate <cit.> fits the rise time curve well enough, there are also other models (for example: by cooling of shock-heated ejecta) which fit the decline of the emission equally well. In the case of GW1701817, the associated kilonova was detected ∼ 11 hours after the merger, limiting the information about the rise time of the kilonova, particularly in the ultraviolet band <cit.>. Capturing the rise time of the emission light curves by early detection may provide a useful measure to differentiate between the kilonova origin models. The gamma-ray burst GRB 170817A was detected independently ∼ 1.7 seconds after the trigger time of GW170817, with studies later confirming its association with the BNS merger <cit.>. This association confirmed BNS mergers as the progenitors of at least certain short GRBs <cit.>. The simultaneous detection of GWs and GRB may provide remarkable insights into the central engine of short gamma-ray bursts (SGRBs). The time delay between the GW and GRB events (∼ 1.7 secs as in the case of GW170817) offers valuable information into the underlying physics and intrinsic processes within the core, including the formation of a remnant object and the subsequent jet. It is expected that the EM waves and GW must have identical propagation speeds. The time delay between the GW and GRB events can be used to put constraints on the deviation of the speed of gravity waves from the speed of light, hence allowing for the tests of fundamental laws of physics <cit.>. Radio emissions enable tracing the fast-moving ejecta from BNS coalescence, providing insights into explosion energetics, ejecta geometry, and the merger environment <cit.>. Meanwhile, X-ray observations are crucial for determining the energy outflow geometry and system orientation relative to the observer's line of sight <cit.>. The post-merger transients possess observational time scales ranging from a few seconds to weeks, owing to their potential to harness radiation across the entire EM spectrum. Detecting EM counterparts to ∼ 50 BNS mergers may enable determining H_0 with 2% fractional uncertainty <cit.>, sufficient to verify the presence of any systematic errors in the local measurements of H_0. Even in the absence of potential EM counterparts, the accurate localization of compact binary coalescence (CBC) sources allows for dark siren measurements of H_0 <cit.>. For instance, observations from more than ∼ 50 BNS dark sirens may be required to obtain H_0 measurements with 6% fractional error <cit.>. Thus, pinpointing these mergers is key in providing promising probes of fundamental physics, astrophysics, and cosmology.However, in order to accurately locate the EM counterparts and supplement regular follow-ups, more accurate and rapid 3D-localization of the sources using GW observations is of utmost importance. 
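As a back-of-the-envelope illustration of the constraint mentioned above, the fractional difference between the speed of gravity and the speed of light implied by a GW-GRB arrival-time offset Δt over a propagation distance D is of order cΔt/D. The numbers below are rough GW170817-like values used only for illustration; they are not the published bounds, which account for the unknown intrinsic emission delay.

```python
# Order-of-magnitude estimate of |v_GW - c| / c from a GW-GRB time delay.
# delta_t and D are illustrative GW170817-like values, not the published analysis.
c = 2.998e8                 # speed of light, m/s
Mpc = 3.086e22              # metres per megaparsec
delta_t = 1.7               # s, observed GW-GRB arrival-time offset
D = 40.0 * Mpc              # m, approximate source distance

frac = c * delta_t / D
print(f"|v_GW - c| / c  <~  {frac:.1e}")   # of order a few times 1e-16
```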
With imminent improvements in the detector sensitivities for future observing runs, there is an inevitable need for rapid Parameter Estimation (PE) methods. This arises from the fact that an increased bandwidth of detectors towards the lower cut-off frequencies would result in a momentous increase in the computational cost of Bayesian PE, hence affecting the prompt localization of the GW source. Currently, LIGO-Virgo-KAGRA Collaboration (LVK) <cit.> uses a Bayesian, non-MCMC-based rapid sky localization tool, known as  <cit.>, to locate (posterior distributions over sky location parameters, α, and δ) CBC sources within tens of seconds following the detection of the corresponding GW signal. However, as shown by Finstad et al. <cit.>, a full Bayesian analysis including both intrinsic and extrinsic parameters can significantly increase the accuracy of sky-localization (by ∼14deg^2 in their analysis) of the CBC sources. This is of primary significance in reducing the telescope survey area & time for locating the EM counterparts, making rapid Bayesian PE-based methods a preferable as well an evident necessity. Nevertheless, sincecan construct skymaps in the order of a few tens of seconds, it would be an interesting case to test if theskymap samples can be used as priors for sky location estimates in rapid Bayesian PE methods. This scheme might enable a more accurate localization measurement of the source. Future improvements in detector noise sensitivities shall lead to an increase in the detector “sensemon range”[“sensemon range" is defined as the radius of a sphere of volume in which a GW detector could detect a source at a fixed SNR threshold, averaged over all sky locations and orientations <cit.>. For lower redshifts (z ≲ 1), the sensemon range is approximately equal to the horizon distance divided by 2.264. <cit.>] <cit.>.Hence, higher BNS detections are expected from distances further away than current ranges <cit.>. It is projected that ∼ 180 BNS events could be detected in future O5 run <cit.>. Hence, it is judicious to start the EM follow-up right from the merger epoch, which would require early warning alerts <cit.> for the EM telescopes in future runs. Given the finite observational resources of EM facilities allotted for the GW localized regions, it is important to prioritize the follow-up based on the chirp-mass estimates of GW events, as suggested by Margalit et al. <cit.>. This can be facilitated by a rapid Bayesian PE analysis, which, in addition to the sky localization, more importantly, provides a significantly accurate estimation of chirp mass for these compact binary systems. However, there are studies <cit.> which show that the chirp mass can be estimated accurately (with uncertainty no larger than ∼ 10^-3M_⊙)in low-latency searches. Although the mass ratio and effective aligned spin estimates may be severely biased in this case. With the addition of new detectors in the ground-based detector network, it becomes important to observe improvements expected in the sky localization and source parameter estimations of these compact binary sources.LIGO-Aundha <cit.> (hereafter, `A1') is set to join the network of ground-based GW detectors in the early years of the next decade. Based on our experience with currently operational interferometric GW detectors, we expect A1 to go through multiple commissioning rounds, resulting in incremental improvement to its noise sensitivity, effective bandwidth, and duty cycle. 
We expect that it will eventually attain the targetA+ design sensitivity <cit.> (sometimes referred to as the `O5' sensitivity in this paper) and acquire higher up-times, resulting in duty cycles that are commensurate with the stable operations of other detectors in the IGWN. To model this progression, we consider A1 to have two distinct noise sensitivities: one at conditions that mirror the fourth science run (O4) of the LIGO Hanford and Livingston detectors, simulating an early commissioning stage, and the other at the A+ design sensitivity of the detector. Additionally, we consider various duty cycles of the A1 detector at the sensitivities mentioned above for a comprehensive analysis. By the time A1 begins its maiden science run, the current detectors namely LIGO-Hanford (H1) and LIGO-Livingston (L1) <cit.>, are likely to reach their A+ design sensitivities or beyond; meanwhile Virgo (V1) <cit.> and KAGRA (K1) <cit.> are also expected to reach their target design sensitivities. A1 would add significantly longer baselines to the existing GW network and also contribute to increasing the network SNR <cit.>, leading to better sky localizations of GW sources along with improvements in the estimations of source parameters. A number of studies done previously have addressed the localization capabilities of GW networks which include analytical studies <cit.> as well as simulations using ,or other localization algorithms  <cit.>.In this article, we aim to provide an illustration of the contribution A1 shall make to the current network of terrestrial GW detectors with a focus on improvements in sky localization capabilities for BNS mergers. We focus on the scenarios where the localization uncertainties are of the orders that enable a potential EM follow-up. This is possible when the source is localized by three or more detectors.We perform a full Bayesian Parameter Estimation using a rapid PE method developed by Pathak et al. <cit.>. The method enables us to perform rapid Bayesian PE for BNS systems from a lower seismic cut-off frequency f_low = 10, hence allowing for the bandwidths (especially at lower frequencies) that would be typical of the future (O5 & beyond) observing runs. Analyzing CBCs from f_low = 10 increases the number of cycles in the frequency band and leads to an improvement in signal-to-noise ratio (SNR) accumulation and information content of the event. The paper is organised as follows: We describe the simulation study in Section <ref>. The section describes the detector network, duty cycles, simulated injection sources, and the Bayesian Inference method adopted here. Section <ref> outlines the priors and meshfree parameter configurations employed in the analysis. We summarise our localization results in Section <ref> and Section <ref>. In Section <ref>, we present a case of sky localization areas achieved for events from Gravitational Wave Transient Catalog (GWTC) in the presence of A1 taking real noise into account. We also explore the effect of an additional detector in resolving the degeneracy between luminosity distance and inclination angle parameters. Finally, we conclude in Section <ref> and describe possible improvements and discussion regarding this study in Section <ref>.§ SIMULATIONS The study of the localization capabilities of a GW detector network can be broadly characterized by the intrinsic source parameters (e.g. component masses) <cit.> of binary systems, detector sensitivities, and duty cycles of individual detectors <cit.>. 
We hereby discuss these aspects in relation to our work. §.§ Detector Networks The ground-based GW detector network comprises detectors with different sensitivities. The heterogeneous nature of the noise Power Spectral Density (PSD) curves of the detectors plays an important role in deciding the localization abilities of the network. Hence, we assume the ground-based GW detectors to be at different noise sensitivities for our analysis. The first two LIGO detectors, L1 and H1, are configured to A+ PSDs <cit.>. The detector V1 is taken to be at its projected design sensitivity PSD <cit.>. Meanwhile, the K1 detector is assumed to be at O5 design sensitivity <cit.>. To study the improvements in the localization capabilities of the network with the addition of A1, we analyze the A1 detector at two different sensitivities. We first consider the case where A1 is in its initial operating phase at O4 sensitivity. In the second case, we take A1 to be at the A+ configuration, marking the target sensitivity it shall achieve after undergoing staged commissioning over time. All the above-mentioned PSDs can be found in <cit.>. The aforementioned noise sensitivity curves are shown in Fig. <ref>. These sensitivity configurations, in conjunction with the duty cycles taken into account, provide a more comprehensive approach in presenting a science case for the addition of A1 to the current GW detector network. §.§ Duty Cycles The duty cycle of a detector/network is defined as the fraction of time for which it successfully collects data of scientific significance during an observing run <cit.>. The duty cycle of a detector depends on the specific phase the detector is in relative to its intended target operating configuration. Additionally, environmental effects also play a role in affecting the detector duty cycle during an observing run. No detector can practically acquire science-quality data at all times. This translates to detectors working at different duty cycles depending on the commissioning of the detectors. To understand the impact of duty cycles on the localization of BNS sources by a network, we assume the following three cases, as suggested by Pankow et al. <cit.>, representing different stages of a detector's operation: 1) 20% duty cycle: representative of the early stages of commissioning and engineering runs, resulting in reduced operational time. 2) 50% duty cycle: representative of unresolved technical issues with the instrumental setup as well as challenges like suboptimal environmental conditions. 3) 80% duty cycle: representative of a detector operating near its target operating point. Consider a GW network with N detectors. Over the course of an observing run, there can be k detectors (N_min≤ k≤ N) participating in data collection, depending on their duty cycles. Here, N_min represents the minimum number of detectors assumed to be participating in the observation of a GW event. These k participating detectors can comprise different subnetworks of distinct detectors. For instance, a GW network with N=5 detectors may have only k=4 detectors in operation. Subject to which detector is out of operation, there can be $\binom{N}{k}=5$ different subnetworks of distinct detectors. Depending on the duty cycles of individual detectors, the effective duty cycle of a subnetwork can be evaluated.
More formally, out of a total of N detectors in the network, we assume a set m composed of all the detectors participating in data collection and a set n comprising detectors that are out of operation (possibly due to maintenance) during this observation period. We represent the probability (p) of being in operation as defined by a given duty cycle (i.e., p=0.5 for a 50% duty cycle). The probability representing the effective duty cycle of a subnetwork (p_eff) is given as p_eff = ∏_m_i p_m_i ∏_n_j (1 - p_n_j), where p_m_i and p_n_j are the duty cycles of the ith detector in set m and the jth detector in set n, respectively. For instance, consider a network of N=4 detectors, namely L1, H1, V1, and K1. The probability of being in operation for individual detectors is given by p_L1, p_H1, p_V1, and p_K1, representing their respective duty cycles. There may be a scenario where any one of these four detectors may get out of operation due to maintenance or environmental causes. Depending on which detector is not in observation mode, there can be (4 choose 3) = 4 subnetworks, comprising k=3 detectors participating in data acquisition. To evaluate the probability of being in operation p_eff for one of the subnetworks consisting of, say, the H1, V1, and K1 detectors (this is the case when L1 is not in operation), we have p_eff|_H1V1K1 = p_H1· p_V1· p_K1· (1 - p_L1), where H1, V1, K1 ∈ m and L1 ∈ n, respectively. The effective duty cycle for the N-detector network is evaluated as the probability obtained by adding the probabilities of being in operation for all subnetworks over k ≤ N participating detectors. In this study, we consider the L1, H1, V1, and K1 detectors operating near their target operating point at 80% duty cycle each, fixed during the analysis. As A1 is expected to join this network by the early 2030s, we aim to show how the addition of A1 improves the localization capabilities of the GW network. We vary duty cycles for the A1 detector, simulating the cases for various phases of its configuration relative to its target operating point. Single detectors are nearly omnidirectional due to the structure of the antenna pattern functions, leading to poor source localization. For a two-detector network, solving for the direction in the sky corresponding to a fixed time-delay between the coalescence times recorded at the detectors leads to a `ring'-like pattern on the sky, and as such, events localized with two detectors are generally not useful enough for EM follow-ups. Hence, we do not include the cases with k ≤ 2 in our analysis. For the purpose of this study, we assume 3 ≤ k ≤ N (i.e. N_min = 3) and evaluate the network duty cycles accordingly. The impact of A1 as an addition to the second-generation GW network consisting of L1, H1, V1, and K1 detectors is presented by the implementation of varying duty cycles (20%, 50%, 80%) in conjunction with different detector sensitivities (O4 and A+ design) for the A1 detector. §.§ Injected BNS sources The remarkably high signal-to-noise ratio (SNR) due to the fortunate proximity of the GW170817 event enabled its effective localization and multi-messenger efficacies.
Since the number of such `golden events' is expected to be low even in future observing runs <cit.>, it becomes increasingly important to study a network's ability to localize such events.Hence, we aim to focus on studying the impact of the addition of the LIGO-Aundha detector in localizing moderately high SNR events, which may lead to potential multi-messenger observations.Taking this into consideration, we choose to generate 500 BNS events having an optimal network SNR in the range of 20 to 25 in the GW network comprising L1, H1, V1, and K1 detectors. We use these events for the purpose of our investigation. We would like to highlight that this is not a population study but focuses on possible improvements to the localization capabilities of a GW network with the addition of the LIGO-Aundha detector in accurately locating such `golden events'.An event is considered to be detected if the individual detector optimal SNR is greater than a threshold value of 6 (ρ_th> 6) in at least two detectors. From all the generated BNS sources, an event must follow the detection criteria to be considered detected by a subnetwork/network. The effectiveness of a network with an additional A1 detector is studied against the four-detector network with L1, H1, V1, and K1detectors.The intrinsic parameters, like component masses, spins, etc., affect the localization of CBC sources. The effective bandwidth, as defined in <cit.>, measures the frequency content of the signal. Effective bandwidth is one of the important factors affecting the localization of CBC sources <cit.>. The signals from BNS sources mostly span through the entire bandwidth of the ground-based detectors owing to their relatively small component mass values in comparison to binary black holes or neutron star-black hole sources. The mass ranges for the BNS are also narrow, leading to small variations in effective bandwidths. In fact, it has also been shown by Pankow et al. <cit.> that the sky localization uncertainties for BNS systems are effectively independent of the population model of their component masses and spins. Hence, in order to simplify our simulations, we work with a particular choice of component source masses and spins. The tidal parameters are also expected to have a negligible effect on the source localization of these systems <cit.> and, therefore, are not included as source parameters.The source-frame intrinsic parameters are chosen to have the maximum a posteriori (MAP) values of the posterior samples obtained from the LIGO PE analysis of GW170817 <cit.> using  <cit.> Python package. The source frame component masses are m^src_1 = 1.387M_⊙, m^src_2 = 1.326 M_⊙, while the dimensionless aligned spin component parameters are χ_1z≈ 1.29 × 10^-4,χ_2z≈ 3.54 × 10^-5. We choose the inclination angle ι=π/6 .We set the polarization angle arbitrarily to ψ=0 . The sources are distributed uniformly in sky directions. We distribute the sources in luminosity distances corresponding to the redshifts following a uniform in comoving volume distribution up to a redshift of ∼ 0.14, which is greater than the detection range of a detector at(O5) for a BNS with component masses m^src_1 and m^src_2. In addition to the sources that are to be detected with certainty, this limit also allows for events that are barely near the threshold in some detectors. We simulate uncorrelated Gaussian noise in each detector characterized by their associated PSDs respectively. 
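The colouring of simulated Gaussian noise by a chosen sensitivity curve can be sketched in a few lines with PyCBC; in the snippet below the analytical aLIGO zero-detuning high-power PSD, the segment length, and the sampling rate are illustrative stand-ins rather than the exact detector-specific configurations used in this work:

    import pycbc.noise
    import pycbc.psd

    # Stationary Gaussian noise coloured by a PSD (illustrative settings only).
    f_low = 10.0             # lower cut-off frequency in Hz, as adopted in this study
    duration = 128           # seconds of simulated noise (assumed value)
    sample_rate = 4096       # Hz (assumed value)
    delta_f = 1.0 / duration
    flen = int(sample_rate / (2 * delta_f)) + 1

    psd = pycbc.psd.aLIGOZeroDetHighPower(flen, delta_f, f_low)
    noise = pycbc.noise.noise_from_psd(duration * sample_rate,
                                       1.0 / sample_rate, psd, seed=127)
    print(noise.duration, noise.sample_rate)

An independent realization (different seed) per detector gives the uncorrelated noise streams assumed here.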
The injection sources are generated with the  <cit.> waveform model, and the source parameters are recovered using the  <cit.> waveform model for the Bayesian PE analysis. Since BNS mergers are mostly inspiral-dominated in the LIGO-Virgo-KAGRA (LVK) detector band, the use of thewaveform model sufficiently extracts the required information from strain data. §.§ Bayesian inferenceIn order to estimate the parameters of GW sources, we use a Bayesian framework, where for a given waveform model h and data d from the detectors, the posterior distribution of the source parameters Λ⃗ can be estimated via Bayes' theorem:p(Λ⃗|d) = ℒ(d|Λ⃗)p(Λ⃗)/p(d)where ℒ(d|Λ⃗) is the likelihood function, p(Λ⃗) is the prior over the source parameters ≡{λ⃗, θ⃗, t_c}, and p(d) is called evidence, which describes the probability of data given the model. Here λ⃗ represents the intrinsic parameters, whereas θ⃗ denotes the extrinsic parameters. In principle, p(Λ⃗|d) can be estimated by placing a grid over the parameter space Λ⃗, which for a typical compact binary coalescence (CBC) source described by a ∼ 15 dimensional parameter space, would become practically intractable. In the case of a BNS system, it increases to 17 dimensional space due to the addition of two tidal deformability parameters. Instead, stochastic sampling methods such as Markov chain Monte Carlo (MCMC) <cit.> and Nested Sampling <cit.> are employed to generate representative samples from the posterior distribution p(Λ⃗|d). However, this process still requires evaluating the likelihood function, which involves a computationally expensive step of generating model (template) waveforms at the proposed points by the sampler and calculating the overlap between these waveforms and the data. This computational cost is notably significant, especially for low-mass systems such as binary neutron star (BNS) events with lower cutoff frequency decreased to 10 Hz. The situation is exacerbated by the enhanced sensitivity of detectors, resulting in a large number of in-band waveform cycles. Furthermore, incorporating additional physical effects can further escalate the computational burden of waveform generation. These factors have significant implications for the feasibility of promptly following up on EM counterparts of corresponding BNS systems.As mentioned earlier, a high number of BNS detections are expected in the O5 runs; it would be prudent to prioritize the EM follow-ups given the limited observational resources. This underscores the importance of the development of rapid PE methods, which can efficiently estimate both intrinsic and extrinsic parameters. Various rapid PE methods have been proposed in the recent past. They broadly come under two categories: (i) “likelihood-based” approaches such as Reduced order models <cit.>, Heterodyning (or Relative Binning) <cit.>, and other techniques such as RIFT <cit.>, simple-pe <cit.>, multibanding <cit.> (ii) “likelihood-free” approaches which aim to directly learn the posteriors employing Machine-learning techniques such as deep learning, normalizing flows, and variational inference as well  <cit.>. In this work, we use a likelihood-based rapid PE method developed by Pathak et al. <cit.>, which combines dimensionality reduction techniques and meshfree approximations to swiftly calculate the likelihood at the proposed query points by the sampler. 
This algorithm is interfaced with dynesty <cit.>, a Python implementation of the Nested sampling algorithm to quickly estimate the posteriors distribution over the source parameters. In the forthcoming sections, we will first define the likelihood function and subsequently provide a concise overview of how the meshfree method expeditiously computes the likelihood at the sampler's proposed query points.§.§.§ Likelihood function Given a stream of data d^(i) from the i^th detector and a template h̃^(i)(Λ⃗), under an assumption of uncorrelated noise across the detectors, the coherent network log-likelihood is given by lnℒ(Λ⃗) =∑_i=1^N_d⟨d^(i)|h̃^(i)(Λ⃗)⟩ - 1/2∑_i=1^N_d [ h̃^(i)(Λ⃗)^2 + d^(i)^2]where h̃^(i)(Λ⃗) represents the frequency domain Fourier Transform (FT) of the signal h^(i)(Λ⃗) and N_d is the number of detectors. Here, the inner product is defined as⟨ x | y⟩ = 4 Re∫_0^∞df x̃(f)^*ỹ(f)/S_h(f) In this paper, we focus on the non-precessing GW signal model, which can be decomposed into factors dependent on only intrinsic and extrinsic parameters as follows: h̃^(i)(Λ⃗) ≡h̃(Λ⃗, t^(i)) = 𝒜^(i)h̃_+(λ⃗, t^(i)), = 𝒜^(i) h̃_+(λ⃗)e^-j 2π f t_ce^-j 2π f Δ t^(i) where 𝒜^(i), the complex magnitude of the signal depends only on the extrinsic parameters θ⃗∈Λ⃗ through the antenna pattern functions, luminosity distance d_L, and the inclination angle ι, and can be expressed as the following: 𝒜^(i) =1/d_L[ 1+cos^2 ι/2F^(i)_+(α, δ, ψ) . . - j cosι F^(i)_×(α, δ, ψ) ], Δ t^(i) corresponds to the time-delay introduced due to the relative positioning of the i^th detector in relation to the Earth's center <cit.>, the F^(i)_+(α, δ, ψ) and F^(i)_×(α, δ, ψ) are respectively the `plus' and `cross' antenna pattern functions of the i^th detector, which are functions of right-ascension α, declination δ, and polarization angle ψ. The antenna pattern functions describe the angular response of the detector to incoming GW signals <cit.>. In our analysis, we opt for the log-likelihood function marginalized over the coalescence phase <cit.>. With h̃^(i)(Λ⃗) given by Eq. (<ref>), the expression of the marginalized phase likelihood is given by.lnℒ(Λ⃗|d^(i))|_ϕ_c= ln I_0[|∑_i=1^N_d𝒜^(i)^* ⟨d^(i)|h̃_+ (λ⃗, t^(i)) ⟩|] - 1/2∑_i=1^N_d[ |𝒜^(i)|^2 σ^2(λ⃗)^(i) + d^(i)^2 ];where I_0(·) is the modified Bessel function of the first kind and z⃗^(i)(λ⃗^n) ≡⟨d^(i)|h̃_+(λ⃗, t^(i)) ⟩ is the complex overlap integral, while σ^2(λ⃗)^(i)≡⟨h̃_+(λ⃗, t^(i)) |h̃_+(λ⃗, t^(i)) ⟩ is the squared norm of the template h̃_+ (λ⃗). σ^2(λ⃗)^(i) depends on the noise power spectral density (PSD) of the i^th detector. The squared norm of the data vector, d^(i)^2, remains constant throughout the PE analysis and, hence, does not affect the overall `shape' of the likelihood. Consequently, it can be excluded in the subsequent analysis. Note that the marginalized phase likelihood will not be an appropriate choice for systems with high precession and systems containing significant power in subdominant modes <cit.>.§.§.§ Meshfree likelihood interpolation The meshfree likelihood interpolation, as outlined in <cit.>, comprises two stages: (i) Start-up stage, where we generate radial basis functions (RBF) interpolants of the relevant quantities and (ii) Online-stage, where the likelihood is calculated by evaluating the interpolants at the query points proposed by the sampler. Let's briefly discuss both stages.* Start-up stage: First, we generate N RBF interpolation nodes in the intrinsic parameter space (ℳ, q, χ_1z, and χ_2z in this context). 
The center λ⃗^cent around which these interpolation nodes are positioned is determined by optimizing the network-matched filter SNR, starting from the best-matched template or trigger λ⃗^trig and t_trig identified by the upstream search pipelines <cit.>. For simulated systems, the injection parameter is taken as the central point for node placement. We employ a combination of Gaussian and uniform nodes, where the Gaussian nodes are sampled from a multivariate Gaussian distribution (MVN) with a mean of λ⃗^cent and a covariance matrix calculated using the inverse of the Fisher matrix evaluated at λ⃗^cent. A hybrid node placement approach ensures that nodes are positioned near the peak of the posterior, where higher accuracy in likelihood reconstruction is necessary. Once the nodes λ⃗^n are generated, we efficiently compute the time-series z⃗^(i)(λ⃗^n) ≡ z^(i)(λ⃗^n, t_c) using the Fast Fourier Transform (FFT) circular correlations, with t_c being uniformly spaced discrete-time shifts within a specified range (± 150 ms[This range should be larger than the maximum light travel time between two detectors.]) around a reference coalescence time t_trig. During this calculation, we set Δ t^(i) = 0 for overlap time series, handling extra time offsets introduced due to sampling in the sky location parameters during the online stage. Similarly, we compute the template norm square σ^2(λ⃗^n)^(i) at the RBF nodes λ⃗^n. We then stack the time series (row-wise) and perform Singular Value Decomposition (SVD) of the resulting matrix, producing a set of basis vectors spanning the space of z⃗^(i)(λ⃗^n): z⃗^(i)(λ⃗^n) = ∑_μ = 1^NC^n (i)_μ u⃗^(i)_μ where the SVD coefficients C^n (i)_μ, smooth functions of λ⃗^n within the sufficiently narrow boundaries encompassing the posterior support, can be interpolated over the λ⃗ using a linear combination of RBFs and monomials <cit.>:C^q (i)_μ = ∑_n=1^Na^(i)_n ϕ(λ⃗^q - λ⃗^n_2) + ∑_j = 1^Mb^(i)_jp_j(λ⃗^q) where ϕ is the RBF kernel centered at λ⃗^n∈ℛ^d, and {p_j} denotes the monomials that span the space of polynomials with a predetermined degree ν in d-dimensions. Since the coefficients are only known at N RBF nodes λ⃗^n, we impose M additional conditions of the form ∑_j=1^M a^(i)_j p_j(λ⃗^q) = 0 to uniquely solve for the coefficients a_n and b_j in the Eq. (<ref>). Furthermore, it turns out that only “top-few” basis vectors are sufficient to reconstruct z⃗^(i)(λ⃗^q) at minimal reconstruction error. Consequently, we generate only top-ℓ meshfree interpolants of C_μ^q(i) where μ = 1,....,ℓ, where ℓ can be chosen based on the singular value profile. Similarly, we express σ^2(λ⃗^q) in terms of RBFs and monomials, treating them as smoothly varying functions over the interpolation domain. Finally, we have uniquely constructed the ℓ + 1 RBF interpolants, which are to be used in the online stage.* Online stage: In the online stage, we rapidly compute interpolated values of C^q (i)_μ and σ^2(λ⃗^q)^(i) at any query point λ⃗^q within the interpolation domain. Subsequently, we determine the corresponding z⃗^(i)(λ⃗^q) using Eq.(<ref>). Rather than generating the entire time series, we focus on creating z⃗^(i)(λ⃗^q) with around 𝒪(10) time samples centered around the query time t^q (i), which contain the additional time-offset Δ t^(i). We fit these samples with a cubic spline, from which we calculate z⃗^(i)(λ⃗^q) at the query time t^q (i). Similarly, we compute the interpolated value of σ^2(λ⃗^q)^(i). 
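To make the two-stage procedure concrete, the sketch below strings together an online-stage evaluation for a single detector: interpolate the SVD coefficients and the template norm at a query point, rebuild the overlap, and assemble the phase-marginalised log-likelihood term. SciPy's general-purpose RBFInterpolator stands in for the custom Gaussian-RBF-plus-monomial solve described above, and every array is a random placeholder rather than a quantity derived from data:

    import numpy as np
    from scipy.interpolate import RBFInterpolator
    from scipy.special import i0e

    # Placeholder node set and node-wise quantities. In the real pipeline the
    # SVD coefficients are complex (e.g. real and imaginary parts interpolated
    # separately) and the basis vectors are spline-evaluated at the query time.
    rng = np.random.default_rng(0)
    nodes = rng.normal(size=(800, 4))        # (Mc, q, chi1z, chi2z) at the RBF nodes
    coeffs = rng.normal(size=(800, 20))      # top-ell SVD coefficients at the nodes
    sigma_sq = 1.0 + 0.1 * rng.random(800)   # template norms sigma^2 at the nodes

    # Start-up stage: build the interpolants once.
    interp_C = RBFInterpolator(nodes, coeffs, kernel="gaussian", epsilon=10, degree=2)
    interp_s = RBFInterpolator(nodes, sigma_sq[:, None], kernel="gaussian",
                               epsilon=10, degree=2)

    # Online stage: evaluate at a sampler-proposed query point.
    lam_q = np.zeros((1, 4))                 # hypothetical query point
    C_q = interp_C(lam_q)[0]                 # interpolated coefficients C_mu
    sig_q = interp_s(lam_q)[0, 0]            # interpolated sigma^2
    u_at_tq = rng.normal(size=20) + 1j * rng.normal(size=20)  # u_mu(t_q), placeholder
    z_q = np.dot(C_q, u_at_tq)               # reconstructed overlap <d|h_+>
    A = 2.0e-2 * np.exp(0.4j)                # placeholder extrinsic amplitude factor
    x = abs(np.conj(A) * z_q)
    log_like = np.log(i0e(x)) + x - 0.5 * abs(A) ** 2 * sig_q  # stable ln I_0 form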
Finally, we integrate these interpolated values with the factors related to extrinsic parameters, as outlined in Eq.(<ref>), to compute the interpolated likelihood lnℒ_RBF. § ANALYSIS OF SIMULATED EVENTSAs discussed previously in Section <ref>, we create injections with fixed source-frame masses and dimensionless aligned spin component parameters. However, the detector-frame parameters (masses) for these events vary according to their associated redshifts. We define the intrinsic detector-frame parameters by λ⃗= (ℳ_det, q, χ_1z, χ_2z). Similarly, the injected intrinsic parameters in the detector frame are denoted as λ⃗^cent. To perform Bayesian PE for each event, we first generate N_nodes = 800 RBF nodes as described in Section <ref>. We sample 20% of the total RBF nodes (N_Gauss = 160) from a multivariate Gaussian distribution 𝒩(λ⃗^cent, Σ), where λ⃗^cent is the mean and Σ is the covariance matrix obtained from inverse of the Fisher matrix Γ around the center λ⃗^cent using the  <cit.> python package. The remaining 80% of the total RBF nodes (N_Unif = 640) are sampled uniformly from the ranges provided in Table <ref>. We choose ϕ = exp(-(ϵ r)^2) as the Gaussian RBF kernel in our analysis, with ϵ being the shape parameter. For the purpose of this analysis, we use ϵ=10, monomial terms with degree ν=7 and l=20 top basis vectors for reconstructing the time-series in Eq. (<ref>). After the successful generation of interpolants, the likelihood function can be evaluated using lnℒ_RBF by sampling the ten-dimensional parameter space λ⃗using thesampler. The sampler configuration is outlined as follows:=500,=100,= “rwalk”, and=0.1. These parameters play a critical role in determining both the accuracy and the time required for the nested sampling algorithm to converge. In this context, the parameterrepresents the number of live points. Opting for a larger value ofleads to a more finely sampled posterior distribution (and consequently, the evidence), but it comes at the cost of requiring more iterations to achieve convergence. The parameterspecifies the minimum number of points necessary before proposing a new live point,indicates the chosen approach for generating samples, andrepresents the proportion of the remaining prior volume's contribution to the total evidence. In this analysis,=0.1 serves as a stopping criterion for terminating the sampling process. For a more comprehensive understanding of dynesty's nested sampling algorithm and its practical implementation, one can refer to the following references  <cit.>.The prior distribution for λ⃗, along with the associated parameter space boundaries, are presented in Table <ref>. The prior distributions for the extrinsic parameters (α, δ, V_com, ι, ψ, t_c) and their respective parameter space boundaries are also presented in Table <ref>. To evaluate the Bayesian posteriors of source parameters, we sample over the entire ten-dimensional parameter space involving four intrinsic and six extrinsic source parameters. This ensures accounting for the correlations between parameters. However, the focus of this study lies in discussing the sky localization uncertainties obtained from the posteriors over α and δ parameters.In accordance with the previous discussion in Section <ref>, we perform PE for the simulated events with different subnetworks of a GW network to take into account the effect of duty cycles. 
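Since this bookkeeping has to be repeated for every network configuration, the admissible subnetworks and their operating probabilities are conveniently enumerated programmatically. The short sketch below uses the 80% duty cycles assumed for L1, H1, V1, and K1 and an illustrative 50% value for A1 (the helper name is hypothetical):

    from itertools import combinations

    duty = {"L1": 0.8, "H1": 0.8, "V1": 0.8, "K1": 0.8, "A1": 0.5}
    dets, n_min = list(duty), 3

    def p_eff(subnet, network=duty):
        # Probability that exactly the detectors in `subnet` are taking data.
        p = 1.0
        for det, d in network.items():
            p *= d if det in subnet else (1.0 - d)
        return p

    subnets = [s for k in range(n_min, len(dets) + 1)
               for s in combinations(dets, k)]
    print(len(subnets), "subnetworks with k >= 3")
    print("P(>= 3 detectors observing) =", round(sum(p_eff(s) for s in subnets), 3))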
For instance, in the case of a network with L1, H1, V1, K1, and A1, there can be 10 different subnetworks consisting of three distinct detectors (k=3), and 5 different subnetworks of four distinct detectors (k=4) taking observations depending on the duty cycles. In addition to these,there is a subnetwork consisting of all the five detectors for k=5 case.Bayesian PE analyses are performed for the events detected in each of these subnetworks. The total number of subnetworks for all 3≤ k ≤ 5 is 16 for the five detector networks comprising of L1, H1, V1, K1, and A1 detectors. The exercise is repeated for two cases:(i) Keeping A1 atO4 noise sensitivity in the GW network. Here, the A1 sensitivity is close to(O5) sensitivity (Refer Fig. <ref>). (ii) Setting A1at(O5) in the GW network. In this case, the A1 detector would be at the same sensitivity as the other two LIGO detectors. We represent the network with N=5 detectors as the L1H1V1K1A1 network and, similarly, the network with N=4 detectors as the L1H1V1K1 network. Using the  <cit.> utility, we compute the 90% credible sky localization areas ΔΩ_90% (in sq. deg) from the posterior samples over right ascension (α) and declination (δ) obtained from Bayesian PE.§ SKY LOCALIZATION RESULTSIn order to take into account the effect of duty cycles in the sky localization of our simulated BNS events, we first evaluate the probabilities associated with the effective duty cycles of each subnetwork of a detector network using Eq. (<ref>). Each subnetwork is assigned a fixed number of events depending on their observation probabilities. Taking this into account and integrating all such cases across 3≤ k≤ N with area samples of corresponding events gives the localization ΔΩ_90% distribution related to the network duty cycle for a given GW network. The results of our simulations are presented in Fig. <ref>. The Cumulative Distribution Function (CDF) plots in Fig. <ref> are constructed from ΔΩ_90%sky area samples obtained by inverse sampling from the localization distributions for each subnetwork.We find that with the L1H1V1K1 network, the median 90% localization area ΔΩ_90% is 6.6 sq. deg., meanwhile 59% of the BNS sources are localized within less than 10 sq. deg. area in the sky.With the addition of an A1 detector to this network, we find significant improvements in the localization capabilities of the terrestrial detector network. It is also evident from Fig. <ref> that duty cycles and detector noise sensitivities play a vital role in the effective localization of sources. We shall discuss these in detail as follows:§.§ A1 at -O4 sensitivityAs a part of the five-detector network, the A1 detector is initially set toO4 noise sensitivity. The median ΔΩ_90% area in the decreasing order are found to be 5.6, 4.3, and 3.5in sq. deg. when A1 is set to 20%, 50%, and 80% duty cycle respectively. We find that 64%, 71%, and 77% of the events are localized with less than 10 sq. deg in sky area, given that A1 is at 20%, 50%, and 80% duty cycles respectively. Our results suggest that even when A1 is at 20% duty cycle, which can be interpreted as the early commissioning phase of the detector, the five-detector network reduces the median ΔΩ_90% localization uncertainty to 5.6 sq. deg. in comparison to 6.6 sq. deg. obtained by the four detectors L1H1V1K1 network. 
This reduction in the sky localization area plays a crucial role in the `tiled mode' search for EM counterparts undertaken by the EM facilities such as the GROWTH India Telescope <cit.> with a field of view of 0.38 sq. deg. in area, to tile the GW localization regions.As the A1 detector is upgraded to be at 80% duty cycle, the median localization area ΔΩ_90% remarkably reduces by approximately a factor of two in comparison to that achieved by the four detector L1H1V1K1 network. We reiterate that the L1, H1, V1, and K1 detectors are taken to be operating at 80% duty cycles.§.§ A1 at(O5) sensitivityBy upgrading the A1 configuration to(O5), the improvement in the localization capabilities of the five-detector network relative to the four-detector network as well as the five-detector network with A1 at O4 sensitivity is considerable. The median ΔΩ_90% localization uncertainties in the decreasing order are found to be 4.9, 3.4, and 2.4 sq. deg. in area, when A1 is set to 20%, 50% and 80% duty cycle respectively. We find that 66%, 75%, and 84% of the events are localized with less than 10 sq. deg in sky area, given that A1 is at 20%, 50% and 80% duty cycles respectively, where A1 is set atsensitivity. We observe that with the A1 detector operating at 50% duty cycle, the median localization area ΔΩ_90% reduces by a factor of two with respect to the values obtained by the L1H1V1K1 network. As A1 reaches its target operating point with 80% duty cycle, we find the median ΔΩ_90% to reduce by a factor of three against the median localization achieved with the four-detector network. As mentioned previously, for A1 operating at O4 sensitivity and 80% duty cycle, the median ΔΩ_90% is 3.5 sq. deg., whereas this reduces to 2.4 sq. deg. when A1 is set to 80% duty cycle and at O5 sensitivity. The A1 detector, with an upgraded O5 sensitivity, leads to a two-fold impact on the network capabilities. On the one hand, it leads to an increase in the number of detections (events satisfying the SNR threshold ρ_th≥ 6) in the subnetworks, including A1. It also results in an increase in network SNR, which is one of the important factors contributing to the effective localization of sources. By adding A1 to the GW network, there is an increase in the observation probability of three or more detectors by 7% (for A1 operating at the early 50% duty cycle) in comparison to the four detector networks. We summarize a few important remarks about the localization ΔΩ_90% for the different network configurations in Table <ref>. Note that the CDF plots in Fig. <ref> may indicate slightly lower median ΔΩ_90% localization values than those obtained in related studies <cit.>. One of the reasons being that the events are analyzed from f_low = 10, which increases the effective bandwidths, as well as due to the exclusion of cases with two detectors subnetworks or single detectors participation in source localization. The focus of this study is to explore the localization capabilities of the GW network with A1 with possibilities leading to potential EM follow-ups as well as providing a better ground for astrophysical and cosmological investigations. § LOCALIZATION OF SIMULATED BNS EVENTS SUBTHRESHOLD IN A1During the observation of GW170817, the event was detected in L1 and H1 detectors but was below the detection threshold in the V1 detector. Yet, the presence of V1 contributed to localizing the source to a few tens of sq. deg. 
Out of the 500 simulated BNS events described in our previous discussion, a total of 191 events detected in the five-detector network were found to be subthreshold (ρ_A1 < ρ_th = 6) in A1 when A1 is at O4 sensitivity. We compare the sky-localizations for these events obtained from the four-detector L1H1V1K1 network to those achieved by the L1H1V1K1A1 network. Since these events are subthreshold in A1, the contribution of A1 in improving the network SNR is negligible. Yet, the presence of an A1 detector leads to an improvement in reducing the localization uncertainties of these events. This is shown in Fig. <ref>. We find that even in the case where these events are subthreshold in A1 (at O4 sensitivity), the percentage of events localized to less than 10 sq. deg. in the sky increases from 72% to 89% in comparison to the L1H1V1K1 network. Although the CDFs (for ΔΩ_90%) used for the estimation of these improvements are not very smooth due to the smaller number of such events (191 in this case), they nevertheless summarize the overall nature of the improvement well enough. As the noise sensitivity configuration of A1 is upgraded to A+ (O5), there is an increase in the number of detections in the A1 detector. In this case, the number of events that are subthreshold in the A1 detector reduces from 191 to just 44 out of all the 500 simulated BNS sources. Due to the improved sensitivity, further improvements in the localization of such events are achieved in comparison to the localizations obtained with the four-detector L1H1V1K1 network and the L1H1V1K1A1 network with A1 at O4 sensitivity. Note that in the context of this section, we do not consider the duty cycles for these networks. Therefore, we make a direct comparison between the localization results achieved for such events with the L1H1V1K1 network and the L1H1V1K1A1 network. For instance, the event marked in Fig. <ref> is localized to 44 sq. deg. with the L1H1V1K1 network. The same event, when detected by the L1H1V1K1A1 network with A1 at O4 sensitivity, is recorded at an optimal SNR value ρ_A1 = 3.1 in the A1 detector (ρ_A1 < ρ_th) and is localized to ∼ 6 sq. deg. Meanwhile, when A1 is set to A+ (O5), this event is localized to ∼ 3.5 sq. deg. area in the sky. The baselines added to the network with the addition of the A1 detector and its antenna patterns are some of the factors leading to better localization of such events. An improved noise PSD results in an increase in effective bandwidth and hence leads to the reduction in localization uncertainties. § EXPERIMENTS WITH GWTC-LIKE EVENTS IN REAL NOISE In the preceding section, we showed that even if a detector does not detect an event, it nevertheless adds a valuable contribution to the network in localizing the source. In this section, we provide an illustration of how the incorporation of an additional detector could have facilitated the source localization of events from GWTC for compact binary mergers. The two BNS events, GW170817 and GW190425, and an NSBH event, GW200115, are chosen as examples from GWTC for this purpose. In our analysis, we consider A1 as a supplementary detector. We simulate the aforementioned events and inject them into real detector noise to account for a realistic scenario. The noise strains from the L1, H1, and V1 detectors were acquired using the <cit.> Python package, which allows the extraction of noise strain timeseries from the datasets publicly available on GWOSC <cit.>.
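The specific package used for this step is the one cited above; purely as an illustration, open strain data of this kind can also be pulled from GWOSC with GWpy, e.g.:

    from gwpy.timeseries import TimeSeries

    # Hypothetical off-source GPS interval, chosen away from any trigger time.
    start, end = 1187000000, 1187000360
    strain = {ifo: TimeSeries.fetch_open_data(ifo, start, end)
              for ifo in ("L1", "H1", "V1")}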
The noise strain for A1 is taken to be that of the detector, which recorded the least SNR during the observation of these events. In fact, among all the detectors observing these events, the lowest individual SNR was recorded in the Virgo detector. The noise strain data for all the detectors is chosen hundreds of seconds away from the trigger times of the GW events in consideration. Even though the A1 noise strain is taken from V1 data, the noise strain data for both detectors belong to different stretches of data. The PE analysis for these events is performed from a lower seismic frequency of 20 Hz. §.§ Noise Strain and PSDs A noise strain data of a fixed segment length (360 seconds for GW170817 and GW190425; 64 seconds for GW200115) is used in our analysis, which is cleaned by a high-pass filter of 4th order and setting the frequency cut-off at 18 Hz. We analyze the event from a lower seismic cutoff frequency of 20 Hz, which is illustrative of the observing runs associated with their detections. The estimation of noise PSD uses 2 seconds overlapping segments of the strain data with the implementation of the median-mean PSD estimation method from PyCBC <cit.>. The noise PSD for A1 is constructed from the strain data of the V1 detector. §.§ Choice of injection parameters:§.§.§ GW170817-like event The intrinsic detector-frame parameters (m_1^det, m_2^det, χ_1z, χ_2z) and extrinsic parameters like inclination (ι) take values chosen by evaluating the MAP values from the posterior samples of detector-frame parameters in the  <cit.> for GW170817. We assume the BNS system with spins aligned in the direction of orbital angular momentum. The sky location coordinates of NGC 4993-the potential host galaxy of the GW170817 event, are taken as the injection values for (α, δ) sky position parameters <cit.>. The luminosity distance takes the value d_L = 40.4 Mpc <cit.> for our simulated BNS system. The polarization angle is taken to be zero (ψ=0) since the associated posterior samples in are found to be degenerate. As mentioned previously, the simulated signal is injected in uncorrelated real noise in the detectors. The simulated signal is generated using thewaveform model. The source parameters are recovered using thewaveform model.§.§.§ GW190425-like event The intrinsic (detector-frame) and extrinsic parameter values are chosen by evaluating the MAP values of the posterior samples for parameters obtained fromLIGO analysis file of GW190425 event <cit.>. The polarization angle is chosen to be ψ=0 as the posterior samples for ψfrom the LIGO analysis follow a uniform distribution. We generate the simulated signal using thewaveform model. The source parameters are recovered using thewaveform model for the PE analysis.§.§.§ GW200115-like event For simulating the NSBH event, we choose the intrinsic (detector-frame) and extrinsic parameter values by evaluating the MAP values of the posterior samples for parameters fromfile of GW200115 LIGO analysis <cit.>. We take the polarization angle ψ=0. We generate the simulated signal using thewaveform model. For recovering the source parameters during the PE analysis, we again use thewaveform model, as it also accounts for the post-inspiral regime, which occurs within the LIGO-Virgo band for the NSBH system. §.§ Analysis and ConfigurationsThe prior distributions and prior boundaries for (ℳ_𝒹ℯ𝓉, q, χ_1z, χ_2z, V_com) parameters, chosen for the three simulated events are presented in Table <ref>. 
The priors for the parameters (t_c, α, δ, ι, ψ)are same as that shown in Table <ref> and hence are not seperately mentioned here. The Bayesian PE analysis follows a similar methodology of generating interpolants for the likelihood function, as discussed previously. The analysis involves the generation of RBF nodes.We specify the total number of RBF nodes (N_nodes) by mentioning the number of nodes sampled from a multivariate Gaussian 𝒩(λ⃗^cent, Σ), represented by N_Gauss; meanwhile the number of nodes uniformly sampled around λ⃗^cent are represented as N_Unif for each event. Thesampler configurations are also mentioned in Table <ref> for the three simulated events. The network comprising the L1, H1, and V1 detectors detected the GW170817 event. As discussed earlier, we simulate a GW170817-like signal in the non-Gaussian real noise and find the source localization uncertainty in the presence of an A1 detector added to the L1H1V1 network. The matched-filter SNR in L1, H1, V1 and A1 are 22.6, 18.6, 5.4 and 6.3 respectively for the given noise realization. Even though in this case, the addition of A1 to the L1H1V1 network does not lead to any considerable improvements in the network matched-filter SNR for the event, yet a significant reduction in 90% credible localization area is observed. We find the localization uncertainty ΔΩ_90% to be 15 sq. deg for the L1H1V1 network, whereas the localization area ΔΩ_90% reduces to 6 sq. deg. with L1H1V1A1 network. Hence, the localization uncertainty is reduced by a factor of more than two in this case. The localization probability contours representing ΔΩ_90% obtained from the two different networks for GW170817-like event is presented in Fig. <ref>. For a GW190425-like event, we compare the sky localization with the then-observing network of the L1V1 network to the L1V1A1 network. The sky localization uncertainty (ΔΩ_90%) reduces from 9350 sq. deg. with the L1V1 network to a sky region of area 212 sq. deg with the L1V1A1 network. The matched filter SNR in L1, V1, and A1 are 10.1, 5.1, and 5.3, respectively, for this case. It is evident that the event in V1 and A1 is at subthreshold SNR for the given noise realization. Yet, there is a contribution in reducing the sky localization areas. For the case of the GW200115-like event, the source localization with the L1H1V1 network, which was the observing network during the real event, is compared to that with the L1H1V1A1 network. The source localization error (ΔΩ_90%) is reduced from 662 sq. deg. obtained with L1H1V1 to 87 sq. deg. achieved with L1H1V1A1. Here, we observe that the majority of the SNR is accumulated by the initial two LIGO detectors. Meanwhile, V1 and A1 contribute negligibly to improving the network SNR. This is because both V1 and A1 are at similar noise sensitivities for the aforementioned events.Note that these results vary with different realizations of the detector noise. Nevertheless, the antenna patterns and baselines added to a network by incorporating an additional detector (here, A1) may lead to an enhancement in the localization abilities of the network, even if the signal is subthreshold in one of the detectors. §.§ Degeneracy between luminosity distance and inclination angleThe GW190425-like event, when observed by the L1V1 network, shows a degeneracy between luminosity distance and inclination angle parameters, which was also observed for the real event. 
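The origin of this degeneracy can be read off the expression for the complex amplitude 𝒜 given earlier: for a single detector, a closer but more inclined binary can produce essentially the same observed amplitude as a more distant, more face-on one. A small numerical illustration (with placeholder antenna-pattern values) follows:

    import numpy as np

    def amp(d_L, iota, Fp=0.5, Fc=0.3):
        # |A| for one detector; F+ and Fx are placeholder antenna-pattern values.
        return np.hypot(0.5 * (1 + np.cos(iota) ** 2) * Fp,
                        np.cos(iota) * Fc) / d_L

    a_ref = amp(160.0, np.radians(20.0))      # 160 Mpc, 20 deg inclination
    iota2 = np.radians(55.0)
    d2 = amp(1.0, iota2) / a_ref              # distance matching a_ref at 55 deg
    print(f"{d2:.0f} Mpc at 55 deg gives the same |A| as 160 Mpc at 20 deg")

A single amplitude measurement therefore cannot distinguish the two configurations.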
As the number of detectors in the network increases from L1V1 to L1V1A1, we observe a resolution of the distance-inclination angle degeneracy. For further investigation, we present the case for GW190425-like events with 28 different real non-Gaussian noise realizations. The noise strains and PSD are obtained as mentioned at the beginning of Section <ref>, where different noise strains correspond to different segments of detector strain data. The events for which the injected chirp-mass (ℳ_c) is within the 90% credible interval of the posterior samples are chosen. The results are summarized in Fig. <ref>. The degeneracy between the luminosity distance (d_L) and inclination angle (ι) parameters is resolved with an additional detector (here A1), even when the contribution of A1 in increasing the network SNR is not appreciable relative to the two-detector network (L1V1). Note that, here, GW190425-like events are generated with a waveform model that does not include higher-order modes. Also, both the compact objects (in this case: BNS) are of approximately equal masses, i.e., q≈ 1. Hence, we can safely assume that the higher-order modes do not play a role in the resolution of degeneracy between the parameters. We obtain similar results on relaxing the condition over ℳ and performing a similar analysis for 60 different noise realizations. An investigation addressing the luminosity distance and inclination angle degeneracy for BNS systems has also been done in <cit.>. It is not clear that a better measurement of both the polarizations (h_+ & h_×) in a larger network leads to a more precise measurement of the inclination, especially for face-on systems (ι< 45 deg.). In our study, we show the result as an empirical observation for a GW190425-like event. An extensive study constraining the inclination angle with a network of GW detectors has been performed by Usman et al. <cit.>. The improvement in the measurements of luminosity distance has direct implications in cosmology, as mentioned in Section <ref>. The accurate measurements of inclination angle may lead to improvements in the constraints on the models for gamma-ray bursts and X-ray emissions from BNS mergers <cit.>. Similar improvements in the measurements of luminosity distance and inclination angles for binary black hole mergers by a three-detector network relative to a two-detector network have been obtained in <cit.>. § CONCLUSION The addition of A1 to the GW network is observed to improve the overall localization capabilities of the global detector network, even when A1 is in its early commissioning stages. To estimate the source parameters, we performed a full Bayesian PE from a lower cut-off frequency f_low = 10 Hz, which is representative of the future LVK Collaboration analysis of GW sources. We find that the addition of the A1 detector (at O4 sensitivity) to the GW network leads to a reduction of the median ΔΩ_90% area to 5.6, 4.3, and 3.5 sq. deg. for cases where A1 is operating at 20%, 50%, and 80% duty cycles, respectively, in comparison to the median ΔΩ_90% area of 6.6 sq. deg. obtained with the four-detector L1H1V1K1 network for BNS sources with potential for multi-messenger follow-ups. Our results suggest that an expanded GW detector network, with an early-phase A1 operating at a 20% duty cycle and at a weaker sensitivity (O4) compared to the other LIGO detectors (A+ design sensitivity), is capable of localizing 64% of these BNS sources under 10 sq. deg., in comparison to 56% by the four-detector network.
With the imminent improvement in the duty cycle and noise PSD of the A1 detector, an apparent enhancement in the localization capabilities of the GW network is observed (Refer Table <ref>). With the addition of an A1 detector to the GW network, the observation probability for the sub-networks of k≥3 detectors increases, leading to a decrease in localization uncertainties in the sky area. This allows for an optimized “tiled mode” search for post-merger emissions by telescopes such as the GROWTH India facility with a field of view of the order of ∼0.4 sq. deg. in sky area. We show that improvements in duty cycles and noise sensitivity for A1 detector play a crucial role in enhancing the localization capabilities of the GW network. Hence, in order to get the maximal payoff from the addition of the A1 detector, efforts should be made towards maximizing the operational duty cycle and improving the noise sensitivity as soon as the detector becomes operational. Furthermore, we show that even for BNS sources that are sub-threshold in A1, the sky-localization uncertainties with the five detector L1H1V1K1A1 network are reduced in comparison to that obtained from the four detector L1H1V1K1 network. Thus, even in a situation where A1 does not detect the BNS event independently, it plays a crucial role in pinpointing the sources that enable a fast and efficient electromagnetic follow-up by ground and space-based telescopes.Taking the examples of two BNS events and one NSBH event from GWTC, we show the possible source localization improvements with A1 as an additional detector in the network with real noise. For this exercise, the real noisy strain data from Virgo is used as surrogate noise in A1 detector - to simulate a scenario where the Indian detector is observing the event but has not achieved its design sensitivity. We reaffirm the role of an additional detector (A1 in our case) in resolving the degeneracy between luminosity distance and inclination angle parameters relative to a two-detector network for a GW190425-like BNS source. This is shown by reconstructing the source parameters for GW190425-like BNS events in real, non-Gaussian noise, with the L1V1 and L1V1A1 detector networks, respectively, where data samples from V1 are used as surrogates for A1.§ DISCUSSIONIn order to maximize the incentives from the GW detection of BNS sources, the EM follow-up of these events is of utmost importance. A1, joining the network of terrestrial GW detectors in the early 2030s, will enhance the localization capabilities of the network. We studied the impact of the addition of A1 in the detector network in the localization of BNS sources with moderately high signal-to-noise ratios. The observation of an event with three or more detectors working in conjunction is fundamental for achieving localization uncertainties small enough so as to allocate telescope time for subsequent electromagnetic follow-ups. Our results presented in Fig. <ref> from Section <ref> can be considered optimistic, owing to the assumption of a BNS event being observed by more than two detectors at any given time.Including sub-networks of two detectors will lead to the broadening of the distribution of localization uncertainties, causing a slight shift to the right in the Cumulative Distribution Functions (CDFs) shown in Fig. <ref>.However, this is beyond the scope of this work, and a more realistic study taking the case of two detector subnetworks into account can be performed in the future. 
Along with this, considering the case where one or more detectors turn out to be at duty cycles that are lower than expected, as is the case for Virgo and KAGRA during the O4 run, can provide a more realistic account depicting the localization capabilities of a GW network. For instance, in the context of our study, the median ΔΩ_90% area of ∼ 13.5 sq. deg. is obtained with the four detector L1H1V1K1 network, where L1 and H1 are at 80% duty cycle and both V1 and K1 detectors are operating at a lower 20% duty cycle. With the addition of A1 detector to this network, where A1 is set toO4 sensitivity and operates at 20% duty cycle (same as that of V1 and K1), there is a significant reduction in median ΔΩ_90% area to ∼ 8 sq. deg.A case study of the localization capabilities considering only the three LIGO detectors (L1, H1, and A1) is presented in  <cit.>, where all three are considered to be at A+ sensitivity. For this study, we generate the BNS events in Section <ref> using thewaveform model and reconstruct the source parameters using themodel template waveforms. Using a waveform model that includes tidal deformability parameters, higher-order modes, and other physical effects captured by additional intrinsic parameters in the analysis can make the study more comprehensive. We aim toincorporate the tidal parameters and higher-order modes within the meshfree framework in the future. This extension will enable us to achieve a more comprehensive and rigorous analysis. We have also fixed the values of ι and ψ as shown in Section <ref>. This may also have an effect on the localization results. For a more general treatment, the events under consideration should be generated such that all the parameters should be allowed to vary in parameter space. This shall allow for a more exhaustive assessment of the localization capabilities of different GW networks. We simulate uncorrelated Gaussian noise in the detectors for our analysis. In this context, it has been shown by Berry et al. <cit.> that no appreciable impact is observed in the localization results for the case of simulated signal injected in real detector noise. Another aspect that might affect the sky localization area evaluated from the Bayesian posterior samples is the narrow prior boundaries taken over the intrinsic parameters. For consistency, we evaluate the sky localization areas from the posterior samples with wide boundaries over intrinsic parameters using another rapid PE method (relative binning in this case) and compared the results with the meshfree framework adopted here. We find that the difference between the localization areas obtained from these two approaches is not significant enough to affect the localization results, at least for a network involving three or more detectors. The results showcased in this work serves as a demonstration of what can be accomplished by adding A1 as a new detector to the gravitational wave (GW) network. The primary emphasis is on evaluating the GW network's ability to pinpoint the source of gravitational waves, particularly in the context of potential electromagnetic follow-up observations. We would like to thank Varun Bhalerao, Gaurav Waratkar, Aditya Vijaykumar, Sanjit Mitra, and Abhishek Sharma for useful suggestions and comments. S. S. is supported by IIT Gandhinagar. L. P. is supported by the Research Scholarship Program of Tata Consultancy Services (TCS). A. S. 
gratefully acknowledges the generous grant provided by the Department of Science and Technology, India, through the DST-ICPS cluster project funding. We thank the HPC support staff at IIT Gandhinagar for their help and cooperation. The authors are grateful for the computational resources provided by the LIGO Laboratory and supported by the National Science Foundation Grants No. PHY-0757058 and No. PHY-0823459. This material is based upon work supported by NSF's LIGO Laboratory, which is a major facility fully funded by the National Science Foundation. This research has made use of data or software obtained from the Gravitational Wave Open Science Center (gwosc.org), a service of the LIGO Scientific Collaboration, the Virgo Collaboration, and KAGRA. This material is based upon work supported by NSF's LIGO Laboratory, which is a major facility fully funded by the National Science Foundation, as well as the Science and Technology Facilities Council (STFC) of the United Kingdom, the Max-Planck-Society (MPS), and the State of Niedersachsen/Germany for support of the construction of Advanced LIGO and construction and operation of the GEO600 detector. Additional support for Advanced LIGO was provided by the Australian Research Council. Virgo is funded through the European Gravitational Observatory (EGO), the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale di Fisica Nucleare (INFN), and the Dutch Nikhef, with contributions by institutions from Belgium, Germany, Greece, Hungary, Ireland, Japan, Monaco, Poland, Portugal, Spain. KAGRA is supported by the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan Society for the Promotion of Science (JSPS) in Japan; National Research Foundation (NRF) and Ministry of Science and ICT (MSIT) in Korea; Academia Sinica (AS) and National Science and Technology Council (NSTC) in Taiwan.
http://arxiv.org/abs/2311.15695v1
{ "authors": [ "Sachin R. Shukla", "Lalit Pathak", "Anand S. Sengupta" ], "categories": [ "gr-qc", "astro-ph.HE", "astro-ph.IM" ], "primary_category": "gr-qc", "published": "20231127103206", "title": "How I wonder where you are: pinpointing coalescing binary neutron star sources with the IGWN, including LIGO-Aundha" }
Small and Dim Target Detection in IR Imagery: A Review

Nikhil Kumar^1,2 ([email protected]) and Pravendra Singh^1 ([email protected]; ORCID 0000-0003-1001-2219; corresponding author)

^1 Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, Roorkee 247667, Uttarakhand, India
^2 Instruments Research and Development Establishment, Defence Research and Development Organization, Dehradun 248008, Uttarakhand, India

January 14, 2024

Abstract: While there has been significant progress in object detection using conventional image processing and machine learning algorithms, exploring small and dim target detection in the IR domain is a relatively new area of study. The majority of small and dim target detection methods are derived from conventional object detection algorithms, albeit with some alterations. The task of detecting small and dim targets in IR imagery is complex. This is because these targets often lack distinct features, the background is cluttered with unclear details, and the IR signatures of the scene can change over time due to fluctuations in thermodynamics. The primary objective of this review is to highlight the progress made in this field. This is the first review in the field of small and dim target detection in infrared imagery, encompassing various methodologies ranging from conventional image processing to cutting-edge deep learning-based approaches. The authors have also introduced a taxonomy of such approaches. There are two main types of approaches: methodologies using several frames for detection, and single-frame-based detection techniques. Single frame-based detection techniques encompass a diverse range of methods, spanning from traditional image processing-based approaches to more advanced deep learning methodologies. Our findings indicate that deep learning approaches perform better than traditional image processing-based approaches. In addition, a comprehensive compilation of various available datasets has also been provided. Furthermore, this review identifies the gaps and limitations in existing techniques, paving the way for future research and development in this area.

Keywords: Infrared imaging; Point target; Small and dim target detection; Deep learning

§ INTRODUCTION Infrared (IR) imagery has emerged as a pivotal technology in various fields, including surveillance, reconnaissance, and target detection. The ability to capture thermal radiation emitted by objects enables the acquisition of valuable information, especially in scenarios where traditional optical sensors may fall short <cit.>. One of the critical challenges within the realm of IR imagery is the detection of small and dim targets <cit.>, a task that demands advanced techniques and sophisticated algorithms. The identification of small and dim targets in IR imagery holds paramount significance in applications such as military surveillance, search and rescue operations, and environmental monitoring <cit.>. These targets, often characterized by their low thermal contrast against the background, pose a formidable challenge for traditional image processing methods.
The complexity arises from factors such as noise, clutter, and varying environmental conditions, all of which can obscure the detection of these subtle thermal signatures.As technology advances, the demand for robust and efficient small and dim target detection algorithms becomes increasingly pressing. Addressing this challenge requires a multidisciplinary approach, combining expertise in signal processing, machine learning, deep learning, and computer vision. Researchers are driven to develop innovative methodologies <cit.> that enhance the sensitivity and accuracy of IR imagery systems, enabling the reliable detection of elusive targets that may otherwise escape notice.The main objective of this review is to offer an extensive evaluation of the advancements in the realm of detecting small and dim targets within IR domain. This assessment covers both conventional techniques rooted in image processing and more sophisticated methods founded on deep learning. By scrutinizing various approaches, our aim is to shed light on the advancements made in detecting diminutive and faint objects, while also gaining a comprehensive understanding of the strengths and limitations inherent in these methods. Additionally, this paper delivers a thorough examination of currently available datasets designed for detecting small and dim targets in IR images. §.§ IR imagingMost objects with temperatures above absolute zero emit a notable amount of IR radiation, as outlined by <cit.>. Such sources are all around us. In the creation of IR images, a substantial portion of the signals consist of radiated emissions. One notable advantage of these emissions is their persistent presence and their ability to withstand deterioration caused by adverse weather conditions. These characteristics make the IR spectrum a practical wavelength for imaging, especially in applications related to defense. The origins of IR radiation can be traced back to a scientific experiment conducted by Frederick William Herschel over two centuries ago. Herschel used prisms and basic temperature sensors to study the distribution of wavelengths across the electromagnetic spectrum. It is now widely accepted that objects emit radiation across a wide spectrum of wavelengths, following established principles of physics. Within the electromagnetic spectrum, the infrared (IR) region spans wavelengths from 700 nanometers to 1 millimeter, with the lower end coinciding with the red edge of the visible spectrum. Empirical evidence shows that a significant portion of the IR spectrum is unsuitable for conventional applications due to the absorption of IR radiation by atmospheric water or carbon dioxide <cit.>. This IR range is conventionally divided into five spectral sub-bands. The near-infrared (NIR) spectrum covers wavelengths from 0.7 µm to 0.9 µm, while the shortwave IR (SWIR) ranges from 0.9 µm to 2.5 µm. The mid-wave infrared (MWIR) spectrum includes wavelengths between 3 µm and 5 µm, and the long-wave infrared (LWIR) spectrum comprises wavelengths from 8 µm to 12 µm. Lastly, the far-infrared (FIR) spectrum extends to wavelengths of up to 1000 micrometers. §.§ Significance of Small and Dim targetsAs defined by the International Society for Optics and Photonics (SPIE) and elaborated in Zhang's work <cit.>, a small target is one that occupies less than 0.12 percent of the total pixels in an image. 
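A quick arithmetic check of this criterion for a typical frame size:

    # SPIE criterion applied to a 256 x 256 frame:
    total_pixels = 256 * 256        # 65,536 pixels
    print(0.0012 * total_pixels)    # ~78.6, i.e. targets below roughly 80 pixels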
In the context of 256x256 images, targets smaller than 80 pixels are categorized as small.In military operations, many aerial targets exhibit distinct signatures within the medium-wave infrared (MWIR) band, making it a critical band of operation. Consequently, various electro-optical systems like IR Search and Track (IRST) and Missile Approach Warning Systems (MAWS), as referenced in the works of <cit.> and <cit.>, are designed to operate within this segment of the electromagnetic spectrum. In addition to this, the detection, identification, and tracking of small and dim objects play a crucial role in the field of infrared guidance and unmanned aerial vehicles (UAVs) <cit.>. In numerous situations, these targets need to be detected at considerable distances, which can extend to several hundred kilometers. As a result, an infrared sensor <cit.> will only be able to see distant targets with small angular sizes and few pixel-based target signatures. In airborne scenarios, military targets, despite emitting strong infrared radiation, often appear faint in the image plane due to significant transmission losses, including absorption and scattering <cit.>, particularly over long distances. Identifying and tracking such small targets holds great importance in defense applications. Infrared imagery frequently suffers from notable noise and background clutter, causing targets to become obscured, reducing contrast and diminishing the signal-to-clutter ratio (SCR). Consequently, aerial targets of military significance are often observed as small and faint targets. §.§ Why Specialised Algorithms for Detecting Small and Dim Targets in IR Imagery Small IR targets often have small dimensions, low intensity, amorphous structures, and lack texture, making them vulnerable to blending into complex background clutter. The direct application of popular generic deep learning-based object detection algorithms like the RCNN series <cit.>, YOLO series <cit.>, and SSD <cit.> for detecting small and dim point targets is not suitable because pooling layers in these networks may result in the loss of such targets in deeper layers. Researchers have focused on developing deep networks customized for detecting small IR targets by leveraging domain-specific knowledge. In contrast to techniques for small target detection in RGB images, which primarily address the issue of small target size <cit.> and employ strategies like context information learning <cit.>, data augmentation <cit.> and multi-scale learning <cit.> to enhance detection robustness and generalization, applying these techniques directly to the detection of small and dim targets in IR imagery, as indicated by in <cit.>, leads to a significant drop in performance. Convolutional neural networks (CNNs) typically include max-pooling layers, which have the potential to suppress or eliminate small and dim IR targets, as observed in Liangkui's work <cit.>. Hence, there's a need for specialized neural network architectures to effectively address these specific challenges.§.§ ChallengesLong-range surveillance systems employed in defense applications frequently rely on MWIR imaging systems, as discussed in Singh's book <cit.>. In such scenarios, the signatures of objects typically occupy only a small number of pixels, resulting in limited spatial features. 
The infrared signatures are susceptible to temporal fluctuations due to the dynamic nature of scene thermodynamics and changes in the visual aspect angle of the observed object in relation to the imaging system's position. Infrared imagery often suffers from high levels of noise and background clutter, which can obscure targets, reduce contrast, and result in a diminished Signal-to-Clutter Ratio (SCR). IR small-dim targets exhibit a higher resemblance to the background, with a low SCR, compared to targets in RGB images, making it more challenging to distinguish small-dim targets from their surroundings.

§.§ Motivation and Contribution

A significant proportion of objects observed at long range, in particular aerial targets, appear as small targets in the image plane. This characteristic underscores the importance of studying the detection of such targets, particularly within defense applications. Although recent research efforts have concentrated on this problem, the range of techniques explored within the IR domain is broad. These methods encompass a range from conventional image processing approaches to cutting-edge deep learning methods. Recently, only a limited number of reviews in this field have been published. Zhao et al.'s study <cit.> specifically focuses on single-frame infrared small-target detection approaches. Rawat et al. <cit.> exclusively concentrate on traditional image processing approaches, while Kou et al. <cit.> solely address machine learning-based approaches. The present work represents the first comprehensive survey in this field, examining various technologies for detecting small and dim targets in infrared imagery. Our survey covers a spectrum of approaches, from conventional image processing to cutting-edge deep learning methods. Additionally, our work incorporates up-to-date approaches in this area.

The classification illustrated in Figure <ref> outlines the various approaches employed for the detection of small and dim targets in IR imagery. Small and dim target detection algorithms can be primarily categorized into two groups based on their implementation: Multiple frame InfraRed Small Target (MIRST) and Single frame InfraRed Small Target (SIRST). Because they operate on individual frames, SIRST techniques are generally regarded as the more practical alternative in terms of computational complexity. SIRST methods are further divided into conventional image processing-based and deep learning-based approaches. Our survey paper also offers further classification of SIRST and MIRST methodologies. A comprehensive compilation of most of the datasets pertaining to this area has also been presented.

The remainder of this review is organized as follows. Sections <ref> and <ref> provide a detailed taxonomy of these approaches. Section <ref> offers a comprehensive overview of the datasets available for the specific purpose of detecting small and faint targets in IR imagery. Performance metrics relevant to the problem settings are discussed in Section <ref>, while a detailed discussion of the performance of these techniques, including potential future directions, is presented in Section <ref>. The conclusion is presented in Section <ref>.

§ MULTIPLE FRAME INFRARED SMALL TARGET (MIRST) DETECTION

In these approaches, multiple frames are simultaneously employed for the purpose of detecting targets. The majority of MIRST detection algorithms described in the literature utilize conventional image processing techniques. The following are the representative methods falling under this category.

§.§ Matched Filter Based Approaches

Reed et al.
<cit.> introduced the utilization of three-dimensional matched filtering as a robust processing technique for detecting weak,targets in motion within a noisy background. This method involves the manipulation of complete sequences of optical frames that encompass mobile targets. This process necessitates precise matching with the target's signature and velocity vector, allowing it to simultaneously detect all matched targets. However, a primary challenge associated with 3-D matched filtering is the need to match the filter to a specific velocity profile. This means that the filter must be customized for a predefined target moving at a particular velocity and direction. To address this limitation to some extent, a filter bank can be implemented, tailored to encompass the targeted speed and direction uncertainties. Porat and Friedland <cit.> tackled this issue by employing a bank of 3-dimensional Directional Derivative Filters (3DDFs). They examined each possible target direction separately and devised a rule to determine the presence or absence of targets in each direction. Their findings indicate that the signal-to-noise ratio (SNR) increases linearly as the integration time extends. This increase surpasses what is typically achieved through the application of 2-dimensional matched filtering to a single frame. The authors presented their results graphically and the accuracy of the plotted SNR can be readily verified through direct computation, using fundamental calculations of signal and noise power. This aligns with the theoretical expectations.Li et al. <cit.> introduced a 3D Double Directional Filter (3DDDF) based approach for detecting and tracking small moving targets within complex and cluttered backgrounds in sequences of IR images. This algorithm employs a double-directional filtering technique in three dimensions, enhancing the target's energy accumulation beyond that of the 3DDF method. Before applying the filter, they employ a pre-whitening technique known as a Three-Dimensional Spatial-Temporal Adaptive Prediction Filter (TDSTAPF) to mitigate the effects of cluttered backgrounds. Comprehensive experiments presented by Li et al. <cit.> have demonstrated that their algorithms are capable of detecting weak dim point targets amidst complex backgrounds cluttered with clouds in real IR image sequences. The authors conducted a performance assessment of the 3DDDF technique in practical settings using infrared image sequences. They also compared the performance of the 3DDDF algorithm with the 3DDF algorithm, using actual IR image data. The experimental results suggest that the 3DDDF algorithms outperform the 3DDF algorithms.One of the primary challenges associated with methodologies that rely on information from multiple frames is their limited effectiveness when applied with a moving camera. In such cases, an additional pre-processing step, such as image registration, becomes necessary, rendering these approaches more computationally expensive. §.§ Regularity Flow-based ApproachesIn this category of approaches, spatio-temporal regularity is considered a prominent feature and its pattern is analyzed in the temporal domain. The following is a representative algorithm within this category.Nikhil et al. <cit.> introduced a Hough transform <cit.> based approach for detecting small and dim targets. This method is based on the formation of a video data cuboid. Instead of detecting targets in the X-Y plane, it explores the trajectories of targets in X-T slices. 
One of the primary assumptions of this method is a stationary camera. Due to the greater coverage of pixels in the X-T plane by the trajectories of small targets compared to the number of target pixels in the X-Y plane, there is an inherent increase in the signal-to-noise ratio (SNR). The authors compiled a dataset that includes multiple mobile vehicles captured from a distance of approximately 5 kilometers using a MWIR imager. As a result of the significant distance, the majority of captured objects often appear as small targets. Their findings demonstrate exceptional performance, particularly in effectively managing occlusion and rejecting clutter. It's worth noting that all video sequences analyzed in their study exclusively capture sequences of ground-based targets and do not include any aerial targets.§ SINGLE-FRAME INFRARED SMALL TARGET (SIRST) DETECTIONWithin this specific domain, researchers have employed a range of innovative methodologies. These methods can be further categorized into two subcategories: conventional image processing-based approaches and deep learning-based approaches. §.§ Conventional Image Processing Based ApproachesAlgorithms falling under this category tend to have low computational complexity, lack generalizability, and struggle to suppress non-uniform backgrounds effectively. They may also struggle with complex backgrounds, resulting in low detection rates and inadequate stability. These approaches can be further classified into three main categories: Low-rank Representation-based approaches, Human Visual System (HVS) based approaches, and Filtering-based approaches.§.§.§ Low-rank Representation Based ApproachesThese approaches leverage the mathematical properties of matrices and employ an alternative low-dimensional representation for the purpose of detecting small and dim targets in IR imagery.The authors <cit.> introduced a technique known as the IR Patch-Image model (IPI), which utilizes a patch-based image model to enhance the accuracy of small target detection. This method involves breaking down the input IR image into numerous overlapping patches and creating a model that captures the relationship between these patches and the overall image. By employing a non-local self-similarity constraint, the model can capture the inherent similarity between patches, which aids in the detection process. In the IPI model, the IR image can be mathematically expressed as shown in Equation <ref>:f_D(x, y) = f_T(x, y) + f_B(x, y) + f_N(x, y)Here, the variables f_D, f_T, f_B, f_N, and (x, y) represent the original IR image, the target image, the background image, the random noise image, and the pixel location, respectively. In IPI model, corresponding patch-images D, B, T and N can be constructed and expressed as shown in Equation <ref>:D = B + T + NThe patch image D is generated using the original IR image f_D, which is derived from an image sequence. The Accelerated Proximal Gradient algorithm is utilized to simultaneously estimate the low-rank background patch-image B and the sparse target patch-image T within the patch-image D. Subsequently, the process of reconstructing the background image f_B and the target image f_T is carried out by using the patch images B and T, respectively. An algorithm employing a straightforward segmentation technique is applied to dynamically partition the target image f_T, responding to the presence of minor errors characterized by low magnitudes. 
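To make the patch-image decomposition idea more concrete, the sketch below builds a patch-image D from sliding patches and splits it into a low-rank background part B and a sparse target part T. The original IPI work uses an Accelerated Proximal Gradient solver; the code here uses a generic augmented-Lagrangian RPCA scheme purely for illustration, and the patch size, stride, and iteration count are arbitrary assumptions rather than the settings of the cited work:

```python
import numpy as np

def patch_image(img, patch=50, stride=10):
    """Stack vectorized sliding patches as columns of the patch-image D."""
    img = np.asarray(img, dtype=float)
    cols = []
    for i in range(0, img.shape[0] - patch + 1, stride):
        for j in range(0, img.shape[1] - patch + 1, stride):
            cols.append(img[i:i + patch, j:j + patch].ravel())
    return np.stack(cols, axis=1)

def rpca_decompose(D, lam=None, mu=None, iters=100):
    """Split D into a low-rank part B (background) and a sparse part T (targets)
    using a basic inexact augmented-Lagrangian iteration."""
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 0.25 * m * n / np.abs(D).sum()
    B = np.zeros_like(D); T = np.zeros_like(D); Y = np.zeros_like(D)
    soft = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(D - T + Y / mu, full_matrices=False)
        B = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt   # singular-value thresholding
        T = soft(D - B + Y / mu, lam / mu)             # sparse target component
        Y = Y + mu * (D - B - T)                       # dual variable update
    return B, T
```

Reconstructing the background and target images from B and T, followed by a simple threshold on the target image, then yields the detection result described in the text.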
Finally, through post-processing techniques, the segmentation results are optimized to achieve the final detection outcome. The test image synthesis involves using actual IR background images along with various targets. The background images are selected from multiple real image sequences with varying levels of clutter. The different targets are created by resizing four real targets using the bi-cubic interpolation method. The dataset is divided into ten groups, each with different target sizes ranging from 5 to 37 pixels and varying numbers of targets. It has been observed that the probability of detection can vary from 0.5 to 0.9, with an average value of 0.82.

The authors <cit.> introduced a technique known as the Reweighted IR Patch-Tensor model (RIPT) to effectively leverage both local and non-local priors simultaneously. Initially, they used the IR Patch-Tensor (IPT) model to accurately represent the image while preserving its spatial correlations. By incorporating the sparse prior of the target and the non-local self-correlation prior of the background, the authors formulated the task of separating the target from the background as a robust low-rank tensor recovery problem. To integrate the local structure prior into the IPT model, the authors created a weight for each element based on the structure tensor. This weight is designed to reduce the influence of remaining edges and maintain the desired target dimensionality. To enhance computational efficiency, a re-weighted scheme was implemented to increase the sparsity of the target patch tensor. Due to the unique nature of IR small target detection, an additional stopping criterion was implemented to prevent excessive computational processing. The authors computed the local structure feature map of an IR image and generated the original patch-tensor and the local structure weight patch-tensor using the IR image and the local structure map. They decomposed the patch-tensor into two separate tensors: the background patch-tensor and the target patch-tensor. The background image and target image were reconstructed using these tensors, and the target was segmented using a methodology similar to the one described in the IPI model <cit.>. The performance of this algorithm was evaluated using the NUAA-SIRST <cit.> and IRSTD <cit.> datasets. Table <ref> illustrates the performance comparison of Low-Rank Representation-based algorithms on the NUDT-SIRST and NUAA-SIRST datasets. Methods employing low-rank representation have the capability to adapt to low SCR in IR images. However, they still face the significant challenge of generating a high number of false alarms when applied to images that contain small and irregularly shaped targets within complex backgrounds.

§.§.§ Human Visual System (HVS) Based Approach

HVS-based algorithms leverage principles of human visual perception, encompassing low-level image processing and higher-level cognitive processes, to extract relevant information and differentiate point targets from the surrounding background. By incorporating human-like processing capabilities, these algorithms aim to address challenges such as complex backgrounds, limited signal strength relative to noise, and variations in the appearance of the target.

A representative algorithm in this category is the Local Contrast Method (LCM) <cit.>. Researchers have concluded that contrast is a fundamental attribute encoded within the streams of the visual system <cit.>.
This holds true throughout the entire target detection process, as mentioned in previous studies. It has been observed that small targets exhibit a distinct pattern of discontinuity compared to their surrounding regions. The target is concentrated within a relatively small area, characterized as a homogeneous compact region <cit.>. Additionally, the target's background aligns with its neighboring regions consistently. Therefore, it is hypothesized that a local area exhibiting contrast greater than a designated threshold at a specific scale could potentially indicate the presence of the target.

In the algorithm by <cit.>, the image is divided into a three-by-three grid of cells. The cell at the center of the grid is labeled as 0, while the remaining cells are labeled as 1, 2, 3, 4, 5, 6, 7, and 8. The values of L_n, m_i and C_n are determined using Equation <ref>, Equation <ref> and Equation <ref>, where I_0^j denotes the gray value of the j-th pixel of the central cell, N_0 is the number of pixels in the central cell, I_j^i denotes the gray value of the j-th pixel of the i-th neighboring cell, and N_u is the number of pixels in each neighboring cell. The local contrast assigned to the central cell is denoted as C_n, and a computed threshold Th is applied to the resulting contrast map for segmentation.

L_n = max_{j=1,2,...,N_0} I_0^j
m_i = (1/N_u) ∑_{j=1}^{N_u} I_j^i
C_n = min_i (L_n^2 / m_i)

The performance of the Human Visual System (HVS)-based technique with a customized dataset is shown in Table <ref>. This table presents two scenarios that have been examined, each characterized by Gaussian white noise with standard deviations of 0.00001 and 0.00005, respectively. It has been observed that HVS-based methods of this kind are not effective at suppressing the unwanted elements present in the background. Local contrast-based methodologies demonstrate greater suitability for high-contrast targets as opposed to dim ones.

§.§.§ Filtering Based Approaches

In the IR target detection area, these algorithms were some of the earliest solutions designed to address the challenge of identifying small targets in IR images. They operate by analyzing variations in grayscale features and the visual saliency of elements within the image to locate faint and small targets. The process begins with an estimation of the IR background, followed by the implementation of a technique to suppress this background. Subsequently, a decision plane is established to isolate small and faint objects of interest. This procedure can be likened to the operation of a high-pass filter.

The researchers <cit.> introduced a method referred to as the Max-Mean Filter technique. This method entails the sliding of a window across the image of the scene. Four mean values, denoted as Z_1, Z_2, Z_3, Z_4, are computed for the elements within the window in the horizontal and vertical directions, as well as in the two diagonal directions. This process is depicted in the accompanying Figure <ref> and Equation <ref>. The central value of the window is then replaced with the highest value among these four computed values.

Z_1 = mean{A_31, A_32, A_33, A_34, A_35}
Z_2 = mean{A_13, A_23, A_33, A_43, A_53}
Z_3 = mean{A_15, A_24, A_33, A_42, A_51}
Z_4 = mean{A_11, A_22, A_33, A_44, A_55}

and max{Z_1, Z_2, Z_3, Z_4} replaces A_33. The choice of window size is a crucial parameter that has a substantial impact on the method's effectiveness. It has been observed that, for a fixed threshold, increasing the window size tends to raise both the Detection Rate (DR) and the False Alarm Rate (FAR). Hence, the window size must be selected so that satisfactory values are attained for both DR and FAR; achieving the best performance requires finding a suitable balance between these two aspects.
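As a concrete illustration of this filtering pipeline, the sketch below implements a Max-Mean style background estimate and a simple detection step. The window size, the residual-based detection plane, and the mean-plus-k·σ threshold are illustrative assumptions following the general filtering pipeline described earlier, not settings taken from the cited work:

```python
import numpy as np

def max_mean_filter(img, win=5):
    """Max-Mean background estimate: at every pixel, take the largest of the four
    directional means (center row, center column, two diagonals) in a win x win window."""
    img = np.asarray(img, dtype=float)
    r = win // 2
    padded = np.pad(img, r, mode='reflect')
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = padded[i:i + win, j:j + win]
            z = (w[r, :].mean(),                   # horizontal (Z_1)
                 w[:, r].mean(),                   # vertical (Z_2)
                 np.diag(np.fliplr(w)).mean(),     # anti-diagonal (Z_3)
                 np.diag(w).mean())                # main diagonal (Z_4)
            out[i, j] = max(z)
    return out

def detect(img, win=5, k=4.0):
    """Residual between the scene and the filtered background, then a global threshold."""
    residual = np.asarray(img, dtype=float) - max_mean_filter(img, win)
    return residual > residual.mean() + k * residual.std()
```

In practice the threshold k trades DR against FAR in exactly the sense discussed above, so both the window size and k must be tuned together.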
The Max-Median algorithm, introduced by the authors <cit.>, is akin to the Max-Mean method. It involves the movement of a window across an IR image. In this process, the variables Z_1, Z_2, Z_3, Z_4 are employed to denote the directions where median values need to be computed for the window elements. These directions encompass both horizontal and vertical orientations, along with two diagonal directions. The central value of the window is subsequently substituted with the maximum value among these four calculated values, as illustrated in Figure <ref>.Z_1=median{A_31,A_32,A_33,A_34,A_35} Z_2=median{A_13,A_23,A_33,A_43,A_53} Z_3=median{A_15,A_24,A_33,A_42,A_51} Z_4=median{A_11,A_22,A_33,A_44,A_55}Here A_33 is used to replace by max{Z_1,Z_2,Z_3,Z_4}.The Top Hat Morphology (THM), as described by Gonzalez in his work <cit.>, is a morphological technique that makes use of the top-hat transform to extract small elements and intricate features from provided images. The top-hat transform has two distinct types: The first type is the white top-hat transform, which entails calculating the discrepancy between the input image and its opening operation using a specific structuring element. The second type is the black top-hat transform, defined as the disparity between the closing operation of the input image and the input image itself. The white top-hat transform is commonly applied in the context of point and small target detection. If we denote a grayscale image as f and a structuring element as b, then the white top-hat transform of an image f can be expressed as follows: T_w (f)=f-f∘ bwhere ∘ denotes the opening operation. The opening operation f∘ b consists of an erosion operation followed by a dilation operation.f∘ b=(f ⊖ b )⊕ bThe white top-hat transform generates an image that emphasizes elements in the input image smaller than the specified structuring element. Essentially, it accentuates regions where the structuring element doesn't align and appears brighter than its surroundings. Subsequent to employing the white top-hat transform, it's common to apply a global threshold to divide the detection plane into pixels representing targets and those depicting the background. This thresholding procedure aids in the differentiation of pertinent targets from their adjacent background, simplifying the process of detection and analysis.The authors <cit.> introduced a technique known as Modified Top-Hat Morphology (MTHM) to enhance the detection of small targets. Small targets typically exhibit a concentrated and bright region against an IR-cluttered background. The surrounding areas in the image usually have a significant contrast in gray intensity compared to the central region. However, the classical top-hat transformation, which uses identical structuring elements, may not effectively distinguish the target region from the surrounding region. This is due to the limited discriminative power of the structuring elements used. To address this limitation and optimize the use of gray intensity differences between the target region and its surroundings, the authors propose the use of two structuring elements. This approach can help mitigate the impact of noise and enhance the detection of small targets. Let B_oi denote the combination of two structuring elements, namely B_o, B_i, as illustrated in Figure <ref>. Here, B_i corresponds to the inner structuring element denoted by the region EFGH, while B_o is denoted by the region ABCD. 
Additionally, B_b is a structuring element that represents region MNOP and is positioned between ABCD and EFGH. The symbol ▵B denotes the annular structure, which is seen as the darkened region in Figure <ref>. The modified white top-hat transformation can be mathematically expressed as the difference between the image and the opening of the image with respect to the structuring element B_oi. The detection of small targets is improved with this modified technique, which effectively leverages the contrast in gray intensity between the target region and the surrounding cluttered background.

T̂_w(f) = f - f ⊙ b̂

where

f ⊙ b̂ = (f ⊖ ▵B) ⊕ B_b

The MTHM method is commonly accompanied by the use of a global threshold to distinguish between target and background pixels in order to achieve segmentation of the detection plane. This step is of paramount importance, as it plays a critical role in distinguishing the target pixels that have been enhanced through MTHM from the background pixels. By establishing an optimal threshold value, it becomes feasible to distinguish the regions that potentially contain small targets from the complex background, which greatly facilitates the effective detection and analysis of these targets.

Contour Morphology (CM) <cit.> is a distinct iteration of Modified Top-Hat Morphology, characterized by a series of image operations. The procedure associated with Contour Morphology commences with the application of a dilation operation, followed by an erosion operation, to the input image f. This sequence of operations is mathematically represented as shown in Equation <ref>, and it plays a vital role in amplifying the detection of particular features or objects within the image. Contour Morphology serves as an image processing technique designed to emphasize specific attributes or traits in the image, rendering them more conspicuous and contributing to the identification of particular elements, such as small targets.

f ⊚ b = (f ⊕ b_1) ⊖ b_2

In the realm of Contour Morphology (CM), specific mathematical symbols and operations are employed: ⊕ represents the dilation operation, ⊖ stands for the erosion operation, and ⊚ signifies the contour morphology operation. CM utilizes two distinct structuring elements: b_1 takes the form of a one-pixel-wide ring structure, illustrated in Figure <ref>, while b_2 is a square-shaped element. The background estimation, denoted as f ⊚ b, is calculated using the contour morphology operation. This estimated background is then subtracted from the original scene image f to generate the detection plane DP, as expressed in Equation <ref>. This process effectively enhances the detection of specific features or elements in the image, such as small targets, by emphasizing their contours and boundaries.

DP = f - f ⊚ b

Following the application of the contour morphology operation and the subsequent generation of the detection plane (DP), a universal threshold is utilized to divide this DP into two groups of pixels: those representing potential target elements and those representing the background. This thresholding procedure aids in the differentiation of regions likely to encompass small targets from the neighboring background, simplifying the identification and analysis of the specific elements of interest in the image.
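As a rough sketch of the morphological pipelines described above (the classical white top-hat and a contour-morphology style background estimate), the code below uses flat structuring elements; the structuring-element sizes, the ring radius, and the mean-plus-k·σ threshold are illustrative assumptions rather than the settings of the cited works:

```python
import numpy as np
from scipy import ndimage

def white_top_hat(img, size=5):
    """Classical white top-hat T_w(f) = f - (f o b), with a flat size x size element."""
    img = np.asarray(img, dtype=float)
    opened = ndimage.grey_dilation(ndimage.grey_erosion(img, size=(size, size)),
                                   size=(size, size))
    return img - opened

def contour_morphology_dp(img, ring_radius=4, square_size=3):
    """CM-style detection plane DP = f - ((f dilated by ring b1) eroded by square b2)."""
    img = np.asarray(img, dtype=float)
    yy, xx = np.mgrid[-ring_radius:ring_radius + 1, -ring_radius:ring_radius + 1]
    ring = np.abs(np.sqrt(xx ** 2 + yy ** 2) - ring_radius) < 0.5  # one-pixel-wide ring b1
    dilated = ndimage.grey_dilation(img, footprint=ring)
    background = ndimage.grey_erosion(dilated, size=(square_size, square_size))  # square b2
    return img - background

def segment(dp, k=4.0):
    """Simple global threshold on the detection plane."""
    return dp > dp.mean() + k * dp.std()
```

The choice between the two background estimates, and the ring and square sizes, mirrors the structuring-element trade-offs discussed in the text.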
The Method of Directional Derivatives (MODD) <cit.> is another approach in the category of image processing-based techniques for the detection of small targets. MODD operates on the assumption that a specific target within an image exhibits relatively isotropic behavior compared to the surrounding clutter and displays a significant magnitude of directional derivative in multiple directions. This technique is designed to enhance the visibility of the target while rejecting the influence of clutter within a scene image. The process involves a series of steps to achieve this goal. The directional derivatives of the scene are computed up to the first and second order in four distinct directions. The derivatives at α = 0^∘, 45^∘, -45^∘, 90^∘ may be expressed using Equations <ref> and <ref>:

d_α^1 = (K_2 - (17/5) K_7 - 2 K_9) sin α - (K_3 - (17/5) K_10 - 2 K_8) cos α
d_α^2 = 2 K_4 sin^2 α + 2 K_5 sin α cos α + 2 K_6 cos^2 α

The matrices K_2 to K_10 are derived by convolving the scene image with the 5 × 5 kernels W_2 to W_10 given below (rows separated by semicolons):

W_2 = (1/50) [ 2 2 2 2 2 ; 1 1 1 1 1 ; 0 0 0 0 0 ; -1 -1 -1 -1 -1 ; -2 -2 -2 -2 -2 ]
W_3 = (1/50) [ 2 1 0 -1 -2 ; 2 1 0 -1 -2 ; 2 1 0 -1 -2 ; 2 1 0 -1 -2 ; 2 1 0 -1 -2 ]
W_4 = (1/70) [ 2 2 2 2 2 ; -1 -1 -1 -1 -1 ; -2 -2 -2 -2 -2 ; -1 -1 -1 -1 -1 ; 2 2 2 2 2 ]
W_5 = (1/100) [ 4 2 0 -2 -4 ; 2 1 0 -1 -2 ; 0 0 0 0 0 ; -2 -1 0 1 2 ; -4 -2 0 2 4 ]
W_6 = (1/70) [ 2 -1 -2 -1 2 ; 2 -1 -2 -1 2 ; 2 -1 -2 -1 2 ; 2 -1 -2 -1 2 ; 2 -1 -2 -1 2 ]
W_7 = (1/60) [ 1 1 1 1 1 ; -2 -2 -2 -2 -2 ; 0 0 0 0 0 ; 2 2 2 2 2 ; -1 -1 -1 -1 -1 ]
W_8 = (1/140) [ 4 2 0 -2 -4 ; -2 -1 0 1 2 ; -4 -2 0 2 4 ; -2 -1 0 1 2 ; 4 2 0 -2 -4 ]
W_9 = (1/140) [ 4 -2 -4 -2 4 ; 2 -1 -2 -1 2 ; 0 0 0 0 0 ; -2 1 2 1 -2 ; -4 2 4 2 -4 ]
W_10 = (1/60) [ 1 -2 0 2 -1 ; 1 -2 0 2 -1 ; 1 -2 0 2 -1 ; 1 -2 0 2 -1 ; 1 -2 0 2 -1 ]

After this enhancement step, the directional derivatives of the scene are convolved with E-filters. Each filter E^1_α, E^2_α is generated by taking the first- and second-order directional derivatives of a Gaussian function in the four directions α = 0^∘, 45^∘, -45^∘, 90^∘. The resulting enhanced images, represented as f^1_α and f^2_α, are obtained using Equation <ref>:

f^1_α = d^1_α ∗ E^1_α and f^2_α = d^2_α ∗ E^2_α

where the E-filters are given as (rows separated by semicolons)

E_0^1 = [ -0.1884 0 0.1884 ; -0.1991 0 0.1991 ; -0.1884 0 0.1884 ]
E_0^2 = [ -0.0381 -0.0762 -0.0381 ; -0.0381 -0.0762 -0.0381 ; -0.0381 -0.0762 -0.0381 ]
E_45^1 = [ -0.2664 -0.1408 0 ; -0.1991 0 0.1991 ; -0.1884 0 0.1884 ]
E_45^2 = [ -0.0306 -0.0571 -0.0456 ; -0.0571 -0.0762 -0.0571 ; -0.0456 -0.0571 -0.0306 ]
E_-45^1 = [ 0 0.1408 0.2664 ; -0.1408 0 0.1408 ; -0.2664 -0.1448 0 ]
E_-45^2 = [ -0.0456 -0.0571 -0.0306 ; -0.0571 -0.0762 -0.0571 ; -0.0306 -0.0571 -0.0456 ]
E_90^1 = [ -0.1884 -0.1991 -0.1884 ; 0 0 0 ; 0.1884 0.1991 0.1884 ]
E_90^2 = [ -0.0381 -0.0381 -0.0381 ; -0.0762 -0.0762 -0.0762 ; -0.0381 -0.0381 -0.0381 ]

The objective of this phase is to enhance the detectability of targets that possess attributes consistent with the derivatives of a Gaussian function. Before the Detection Plane (DP) is constructed, it should be noted that the enhanced images f^1_α and f^2_α obtained in the preceding stage may exhibit negative values; all negative values are therefore set to zero, after which the detection plane DP is formed as described in Equation <ref>:

DP = f_0^1 ⋆ f_0^2 ⋆ f_45^1 ⋆ f_45^2 ⋆ f_-45^1 ⋆ f_-45^2 ⋆ f_90^1 ⋆ f_90^2

A further family of filtering approaches operates in the frequency domain. These algorithms exploit the differences in frequency content between the intended target, the surrounding background, and extraneous noise. Typically, this frequency discrepancy is more pronounced and can be identified in the transform domain through the use of high-pass filters to remove the background and clutter noise.
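To make the frequency-domain idea concrete, the sketch below applies a Butterworth-style high-pass filter (one of the filters discussed in the next paragraph) to suppress the low-frequency background and then thresholds the residual. The cutoff radius, filter order, and threshold are illustrative assumptions, not values taken from any of the cited works:

```python
import numpy as np

def butterworth_highpass_detect(img, cutoff=10.0, order=2, k=4.0):
    """Suppress low-frequency background with a Butterworth high-pass filter,
    then threshold the residual to obtain candidate target pixels."""
    img = np.asarray(img, dtype=float)
    rows, cols = img.shape
    u = np.fft.fftfreq(rows)[:, None] * rows        # vertical frequency index
    v = np.fft.fftfreq(cols)[None, :] * cols        # horizontal frequency index
    d = np.sqrt(u ** 2 + v ** 2)                    # distance from the DC component
    h = 1.0 / (1.0 + (cutoff / np.maximum(d, 1e-6)) ** (2 * order))  # high-pass response
    filtered = np.real(np.fft.ifft2(np.fft.fft2(img) * h))
    return filtered > filtered.mean() + k * filtered.std()
```

Because the filter response rolls off smoothly, this variant avoids the ringing artifacts of an ideal high-pass filter, which is the motivation given below for preferring the Butterworth form.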
While frequency domain-based filtering detection algorithms demand more computational resources compared to spatial domain-based methods, advances in computer hardware have made frequency domain-based filtering algorithms increasingly practical in engineering applications. The primary high-pass filters used include the ideal high-pass filter, Gaussian high-pass filter, and Butterworth filter. Nevertheless, the former two filters exhibit some degree of the ringing phenomenon, leading to incomplete filtering. In contrast, the Butterworth filter effectively addresses these issues. On this basis, researchers have explored alternative filtering techniques, such as the wavelet transform. This method is employed to separate high-frequency target data from low-frequency background data and enhance the image's signal-to-noise ratio through specific processing methods to achieve target detection.In the early stages of developing traditional methods for detecting small and dim targets in IR imagery, researchers often encountered challenges related to the availability of suitable datasets. Consequently, many of these methods relied on self-constructed datasets. However, these self-generated datasets typically had limitations in terms of target diversity and variability, which made it challenging to establish robust benchmarks for evaluating algorithm performance. To tackle this issue, some researchers <cit.> have taken the initiative to create their own datasets for the purpose of performance evaluation. By generating datasets that cover a broader range of scenarios and target variations, their goal is to provide a more comprehensive assessment of algorithm performance and better reflect real-world conditions. This contributes to enhancing the credibility and reliability of evaluations of traditional small and dim target detection methods.Nikhil et al. <cit.> conducted an experiment in which they recorded a video sequence of the sky with clouds as the background using a panning IR camera. In this video, a small target with dimensions of 3 x 3 pixels and following a predefined trajectory was inserted into the image frames. The detected targets were then compared to the ground truth, and the counts of true positives and false alarms were recorded for each video frame. To assess the performance of the filtering-based methods mentioned earlier, they calculated the DR and FAR, which were averaged over 1000 frames.As shown inFigure <ref> and Table <ref>, a comparison of various filtering-based methods for small and dim target detection in IR imagery reveals different trade-offs in terms of DR and FAR. The MTHM method tends to have the lowest FAR among the compared methods, indicating that it produces fewer false alarms. However, it comes at the cost of a lower DR, suggesting that it may miss some true targets. THM demonstrates superior performance in terms of DR compared to other methods, meaning it is more effective at detecting true targets. However, it also results in a higher FAR, which implies it may produce more false alarms. CM and Max-Median methods exhibit similar performance metrics, with both DR and FAR falling in between the extremes observed with MTHM and THM. They offer a balance between detection and FARs. MODD appears to have a lower DR and FAR compared to other methods. It is a more conservative approach that may miss some targets but is less sensitive to changes in the threshold value. The choice of which method to use depends on the specific application's requirements and priorities. 
For example, in scenarios where minimizing false alarms is critical, MTHM might be preferred. On the other hand, if maximizing target detection is the primary goal, THM might be more suitable. Researchers and practitioners must weigh the trade-offs and select the method that best aligns with their operational needs. It is important to note that the dataset utilized in these works is tailored to include exclusively aerial scenarios acquired by a sensor installed on an aerial platform. It is worth highlighting that such scenarios typically exhibit significantly lower levels of clutter compared to ground scenarios captured from an aerial perspective. Filtering-based methods are limited to reducing uniform background clutters and are unable to effectively reduce complex background noises. This limitation leads to high rates of false alarms and unstable performance.In defense applications, it is necessary to handle numerous real-world scenarios that involve a significant amount of background clutter. The utilization of these methods may necessitate the implementation of supplementary algorithms for managing clutter and result in an escalation of computational expenses.Nevertheless, the aforementioned techniques relying on image processing, filtering, or manually designed features exhibit limited efficacy when confronted with complex scenarios such as targets exhibiting diverse shapes and sizes and backgrounds containing excessive clutter and noise. In contrast, deep neural networks have the ability to autonomously learn complex features from extensive datasets that encompass intricate scenes, thanks to their end-to-end learning approach. Traditional image processing methods often require hyper-parameter tuning, which can be a challenging issue. In defense applications, for instance, if one intends to apply such algorithms with automated systems like IRST or MAWS, the lack of human intervention makes it nearly impossible to adjust hyper-parameters according to different scenarios.Assuming the selection of the MTHM algorithm for an automated defense system, there exist three adjustable hyper-parameters: the diameter of the inner ring, the diameter of the outer ring, and the threshold for segmentation of the detection plane. Achieving robust performance across all scenarios poses a challenging task when tuning these three hyper-parameters simultaneously. §.§ Deep Learning Based Approaches The use of deep learning algorithms to detect small targets in IR images has proven to be significantly better than traditional methods. Traditional methods for target detection often rely on manual or rule-based approaches, which can be time-consuming, subjective, and less effective in complex scenarios. However, machine learning algorithms, particularly deep learning models, have revolutionized the domain of point target detection in IR imagery. One of the key advantages of machine learning algorithms is their ability to learn and adapt to large volumes of data. In the context of IR imagery, these algorithms can be trained on diverse datasets consisting of labeled images that contain small targets. By exposing the algorithms to a wide range of target characteristics, such as size, shape, orientation, and IR signatures, they can effectively learn the discriminative features that distinguish targets from background clutter.Deep learning models, such as CNNs, have shown exceptional performance in detecting small targets in IR images. 
CNNs are specifically designed to automatically extract hierarchical features from images, enabling them to capture complex patterns and structures. They can learn to recognize distinctive IR signatures and subtle spatial arrangements associated with small targets, even in challenging conditions like low contrast or high background noise. Notwithstanding the considerable achievements of CNNs in object detection and segmentation <cit.>, there has been limited exploration of deep learning methodologies in the domain of IR small target detection. Deep learning models require large amounts of data for training.These datasets should ideally have high-quality annotations, which would enable researchers to develop, evaluate and compare new approaches for this task. IR small targets frequently encounter the challenge of being immersed in complex backgrounds characterized by low signal-to-clutter ratios. In the context of networks, the task of identifying dim targets while minimizing false alarms requires a combination of a comprehensive understanding of the entire IR image at a higher level and a detailed prediction map with fine resolution. However, this poses a challenge for deep networks as they tend to prioritize learning semantic representations by gradually reducing the size of features <cit.>.Deep learning-based SIRST detection methods can be broadly classified into two categories. The initial category within this classification encompasses methodologies that heavily depend on complete supervision. In recent years, there has been a significant increase in research interest in methods that rely on fully supervised learning. However, these methods are associated with high annotation costs due to the requirement of a large number of per-pixel annotations. The second category of this classification is based on the concept of Weak Supervision.Under such approaches, point-level supervision is employed in the context of IR small target detection. Compared to earlier methods, these approaches include a lower annotation cost for per-pixel annotation.§.§.§ Fully Supervised Learning Based ApproachesApproaches under this category can be further classified into two subclasses: Detection type approaches and Segmentation type approaches. Detection-type approaches employ a generic framework for the purpose of detecting small and dim targets. In contrast, segmentation methods focus on the binary classification of pixels, distinguishing between foreground and background.Detection Based ApproachesWithin this particular group of techniques, generic deep learning frameworks for detecting different types of targets are modified and adapted specifically for the purpose of detecting small and dim point targets. Liu et al. <cit.> offered a deep learning-based end-to-end solution for small target detection that might be viewed as a classifier approach. In comparison to traditional image processing-based methods, experimental results show that this method is robust and insensitive to background and altering targets. The network is constructed with an input dimension of 21×21 pixels. Small input neural networks are utilized as a mobile filter window for the purpose of detecting small targets located at any position within an image. In the training phase, the only pre-processing step involves subtraction of the mean value, which is calculated on the training set, from every pixel. The image undergoes a process of layering. Each layer is fully connected to the following layer. 
The initial layers consist of 128 channels each, while the final layer executes binary classification. The final layer is the soft-max transformation. All hidden layers are equipped with rectification non-linearity. Details of different trained models labeled as A, B, C, D, and E can be seen in Table <ref>. Furthermore, a number of cutting-edge networks <cit.> are specifically engineered for generic image datasets. Utilizing them directly for IR small target detection can result in severe failure due to the significant disparity in the data distribution. It necessitates the reconfiguration of the network across various dimensions.Numerous studies highlight the importance of aligning the receptive fields of CNNs with the scale range of objects being analyzed <cit.>. Without modifying the down-sampling method, it becomes increasingly difficult to maintain the ability to detect small IR targets as the network becomes deeper.Current attention modules commonly aggregate global or long-range contexts <cit.>. The underlying assumption is that the objects being referred to are of significant size and have a wide distribution. However, this is not applicable to IR small targets, as a global attention module would diminish their distinctive characteristics. This prompts the inquiry of which type of attention module is appropriate for detecting IR small targets.Recent studies have incorporated cross-layer features in a unidirectional top-down approach <cit.>, with the goal of choosing appropriate low-level features based on high-level semantics. However, due to the potential for small targets to be obscured by background noise in deeper layers, relying solely on top-down modulation may not be effective and could potentially have negative consequences.In an effort to address the challenge of losing critical features of small IR targets during the network sampling process, Mou et al. <cit.> introduced YOLO-FR, a model designed for the detection of small targets in IR images. YOLO-FR is based on the YOLOv5 <cit.> architecture and integrates feature reassembly sampling techniques, allowing for resizing the feature map while preserving existing feature information. To prevent feature loss during down-sampling, a Spatial Temporal Down-sampling (STD) Block is employed, which retains spatial information within the channel dimension. Additionally, the CARAFE <cit.> operator is utilized to expand the feature map size without distorting the feature mapping mean. To facilitate the down-sampling process, an STD block is specifically designed to reduce image resolution, effectively transferring additional spatial domain information to the depth dimension to enhance the extraction of small target features without introducing parameter inflation. This block is responsible for all down-sampling operations within the backbone network. As shown in Figure <ref>, the up-sampling process in the feature fusion network employs the CARAFE operator, which is a region-content-based up-sampling technique involving two key steps: the prediction of up-sampling kernels and their application to the original map positions. Evaluation metrics and visualization results indicate a substantial improvement in the model's ability to detect small targets following the incorporation of the CARAFE operator. To enhance the detection of small targets using shallow detailed features, the feature fusion network has been extended for features extracted from the backbone network after down-sampling for fusion. 
Additionally, the authors incorporated a small target detection head with a reduced receptive field. The authors conducted experiments to determine the optimal combination of target detection heads. The dataset utilized in their research was the publicly accessible infrared dim-small aeroplane target dataset provided by Liu et al. <cit.>. The dataset consisted of 22 sets of data, comprising a total of 16,177 infrared images. Each image had dimensions of 256 × 256 pixels. Segmentation Based ApproachesThese approaches utilize a deep learning framework to conduct binary classification of scenes, distinguishing between foreground and background. Typically, small and dim targets are categorized as foreground entities, while the remaining elements are designated as background components. Dai et al. <cit.> introduced the Asymmetric Contextual Modulation Network (ACMnet). ACMnet presents a network architecture that places a strong emphasis on customizing the down-sampling process, attention mechanisms, and feature fusion techniques. The primary goal is to effectively preserve the features related to small IR targets. It is essential that the network's receptive fields of predictors are adapted to match the scale range of the objects to ensure the preservation of IR small target features as the network delves deeper. Failure to customize the down-sampling approach could result in the loss of vital information. ACMnet effectively handles this challenge by carefully adjusting the down-sampling process to ensure the capture and retention of pertinent features of IR small targets. Traditional attention modules are typically designed to aggregate global or long-range contextual information, assuming that objects are large and distributed globally. However, this assumption does not hold for IR small targets, where a global attention module can potentially weaken their features. ACMnet introduces an attention module that specifically aims to enhance the visibility of IR small targets. By employing a more localized attention mechanism, ACMnet enhances its detection capabilities for these specific targets.Recent methodologies incorporate cross-layer feature fusion in a top-down manner, selecting low-level features based on high-level semantic information. However, the presence of background interference in deep layers can overshadow small targets, rendering the pure top-down modulation ineffective or even detrimental. ACMnet overcomes this limitation by redesigning the feature fusion approach, incorporating mechanisms to account for the potential interference of small targets by background noise. This approach offers an innovative solution to address the challenges arising from the size discrepancy between IR small targets and objects in general datasets. The solution involves integrating the ACM mechanism as a plug-in module into various host networks, enabling the bidirectional transfer of abstract concepts and specific implementation details across different feature levels. As shown in Figure <ref>, this approach enhances the efficiency of small target detection in IR imagery by incorporating a top-down pathway that incorporates high-level semantic feedback and a bottom-up pathway that encodes finer visual details into deeper layers. This is made possible by utilizing Global Channel Attention Modulation (GCAM) for top-down modulation and Pixel-wise Channel Attention Modulation (PCAM) for bottom-up modulation. PCAM is specifically designed to enhance and maintain the visibility of IR small targets. 
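To make the idea of such bidirectional cross-layer modulation more concrete, the following simplified sketch combines a global, top-down channel attention computed from high-level features with a point-wise, bottom-up channel attention computed from low-level features. This is not the authors' exact GCAM/PCAM design; the channel width and reduction ratio are arbitrary choices for illustration:

```python
import torch
import torch.nn as nn

class BidirectionalModulationSketch(nn.Module):
    """Simplified sketch of asymmetric cross-layer fusion: global channel attention
    from high-level features modulates low-level features, while point-wise channel
    attention from low-level features modulates high-level features."""
    def __init__(self, channels=64, reduction=4):
        super().__init__()
        self.topdown = nn.Sequential(                 # global (image-level) channel attention
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.bottomup = nn.Sequential(                # point-wise (per-pixel) channel attention
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, low, high):
        # 'low': fine-resolution features; 'high': semantic features already
        # upsampled to the same spatial size and channel count as 'low'.
        return low * self.topdown(high) + high * self.bottomup(low)
```

The point-wise branch is what preserves per-pixel detail for tiny targets, while the global branch injects semantic context, mirroring the GCAM/PCAM division of labor described above.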
The ACM module replaces the existing cross-layer feature fusion operations, resulting in improved network performance with only a minimal increase in the number of parameters. To emphasize the subtle details of IR small targets in deep layers, the authors have introduced a point-wise channel attention modulation module, aggregating the channel feature context for each spatial position individually. Unlike top-down modulation, this modulation pathway propagates context information in a bottom-up manner to enrich high-level features with spatial details from low-level feature maps. Attentional Local Contrast Network (ALCNet) <cit.>, as shown in Figure <ref>,introduced an innovative approach to address the challenge of detecting IR small targets in a single image by integrating the feature learning capabilities of deep networks and the physical mechanisms of model-driven methods into an end-to-end network. ALCNet is specifically designed for the detection of single-frame IR small targets, and it brings two substantial enhancements. Firstly, the authors introduced an acceleration strategy that utilizes feature map cyclic shifts, modularizing a local contrast measure method developed by Wei et al. <cit.>. The modularization is achieved through the creation of a depth-wise parameter-less nonlinear feature refinement layer, which holds a clear physical interpretation and addresses the limited receptive field imposed by convolutional kernels. This refinement layer encodes longer-range contextual interactions. Additionally, the network's downsampling approach has been adjusted to enhance and retain the characteristics of small targets. They introduced a Bottom-up Attentional Modulation module, which encodes finer details from low-level features into the higher-level features of deeper layers. The feature maps obtained through cross-layer fusion are utilized for segmentation purposes. A specific model-inspired module is employed to encode the input image into local contrast measures. This approach effectively combines both labeled data and domain knowledge to leverage the full capacity of the network. Consequently, the network's ability to autonomously learn discriminative features overcomes the limitations of inaccurate modeling and sensitivity to hyperparameters often encountered in model-driven methods. Furthermore, it tackles the difficulty of having minimal intrinsic features in data-driven methods by incorporating the domain knowledge of the local contrast prior into deep neural networks. ALCNet <cit.> utilizes a local contrast foundation, measurements of local contrast at multiple scales, and bottom-up attentional adjustment to boost its detection performance. The network is trained using a modified ResNet-20 backbone that serves as a feature extractor, capturing high-level semantic features from the input image. To address the class imbalance between small targets and the background, the Soft-IoU loss function is employed to optimize the network's segmentation task. These strategies significantly contribute to the improved performance of ALCNet in small target detection. By highlighting and preserving crucial small target features, the network improves its ability to differentiate targets from the background. The authors also provided comprehensive ablation studies to evaluate the effectiveness and efficiency of the network architecture. 
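Since the Soft-IoU loss plays a central role in handling the extreme foreground-background imbalance mentioned above, a minimal sketch of a commonly used variant is given below; the exact formulation in the cited work may differ slightly:

```python
import torch

def soft_iou_loss(logits, target, eps=1e-6):
    """Soft-IoU loss for binary small-target segmentation: one minus the soft
    intersection over the soft union, computed on predicted probabilities.
    Expects logits and target of shape (N, 1, H, W), target in {0, 1}."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    union = (prob + target - prob * target).sum(dim=(1, 2, 3))
    return (1.0 - (inter + eps) / (union + eps)).mean()
```

Because the loss is normalized by the union, a target occupying only a handful of pixels contributes as strongly as a large one, which is why it copes better with class imbalance than plain cross-entropy.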
In contrast to methods that rely solely on either data <cit.> or models <cit.>, this approach maximizes the integration of both labeled data and domain knowledge. As a result, it effectively addresses the issues of inaccurate modeling and hyper-parameter sensitivity inherent in model-driven methods by enabling the network to autonomously learn discriminative features. Additionally, it mitigates the challenge of minimal intrinsic features in data-driven approaches by incorporating domain knowledge of the local contrast prior to deep networks. This approach underscores the potential of convolutional networks that integrate local contrast prior, a feature traditionally addressed in non-learning methods. ALCNet demonstrates promising outcomes in IR small target detection by breaking traditional constraints and fusing local contrast feature maps across layers. By uniting deep networks with domain knowledge, the network achieves enhanced accuracy and efficiency in detecting small IR targets, highlighting the importance of incorporating domain knowledge into deep learning architectures to tackle the challenges of small target detection in IR imagery. The objective of SIRST detection is to differentiate small targets from complex backgrounds in IR images. While CNN-based methods have demonstrated promise in generic object detection, their direct application to IR small targets is hindered because the pooling layers within these networks can lead to the loss of target information in deeper layers. To tackle this challenge, the Dense Nested Attention Network (DNA-Net), as presented in the work by Li et al. <cit.>, is introduced. DNA-Net draws inspiration from the success of nested structures in medical image segmentation <cit.> and hybrid attention in generic object detection <cit.>. Several key innovations in DNA-Net contribute to its enhanced performance in the detection of IR small targets. Primarily, the Dense Nested Interactive Module (DNIM) in DNA-Net facilitates a gradual interaction between high-level and low-level features. This interaction allows information about small targets to propagate through the network's layers without any loss, enabling the preservation of crucial target details even in deep layers. By ensuring the presence of small targets throughout the network, DNA-Net effectively addresses the issue of target loss, a common problem in traditional CNN-based methods. In addition, the cascaded Channel and Spatial Attention Module (CSAM) further enhances features by adapting them to the specific characteristics of targets and cluttered backgrounds. CSAM employs attention mechanisms that selectively emphasize pertinent information while suppressing irrelevant noise, enhancing the network's discriminative power. This adaptive feature enhancement ensures that DNA-Net focuses on the most salient aspects of the targets, improving their visibility and aiding in accurate detection. The combination of DNIM and CSAM in DNA-Net results in a synergistic effect, with the network benefiting from both progressive feature fusion and adaptive enhancement. This allows the network to effectively incorporate and exploit contextual information, leading to superior performance in detecting IR small targets.DNA-Net<cit.> leverages the advantages of dense nested structures and attention mechanisms, enabling it to capture intricate details and distinguish small targets from cluttered backgrounds, even in challenging scenarios with varying target sizes, shapes, and clutter conditions. 
The architecture of DNA-Net comprises three main modules: feature extraction, feature pyramid fusion, and eight-connected neighborhood clustering. These modules collaborate to generate the final detection results for SIRST images. In the feature extraction module, DNA-Net employs DNIM and CSAM. Input SIRST images undergo pre-processing and are passed through the DNIM backbone to extract multi-layer features. DNIM incorporates skip connections with intermediate convolution nodes to facilitate iterative feature fusion at different layers. To bridge the semantic gap that may arise during feature fusion, CSAM is employed to adaptively enhance these multi-level features, ensuring improved feature fusion and representation. The feature pyramid fusion module focuses on the combination of enhanced multi-layer features from DNIM. These features are initially upscaled to the same size, ensuring consistency across scales. Subsequently, feature maps from shallow layers, rich in spatial information, and deep layers, rich in high-level semantic information, are concatenated to produce robust and comprehensive feature maps. The eight-connected neighborhood clustering module receives the feature maps from the previous stage and determines the spatial location of the target's centroid through calculations. The centroid serves as a reference point for later stages of comparison and evaluation. DNA-Net architecture follows a sequential process of feature extraction, feature pyramid fusion, and eight-connected neighborhood clustering to detect IR small targets effectively. By harnessing the capabilities of DNIM and CSAM, the network achieves progressive feature fusion, adaptive enhancement and robust representation of small targets, ultimately improving the accuracy and reliability of detection results. Figure <ref> illustrates the presence of small targets within the layers of two network topologies: the U-shape network and the Dense Nested U-shape (DNA-Net) network. The authors in their work <cit.> have introduced a feature enhancement module, which operates as a feed-forward network. Its purpose is to obtain more distinct features associated with small-sized targets. Given the limited sizes and faint appearance of these targets, the failure to capture distinctive information from them significantly increases the chances of missing detections. Furthermore, the network may suffer from the potential loss of features related to small-sized targets. To mitigate this, an up-sampling structure akin to U-Net <cit.> has been integrated to enhance the retrieval of information concerning these compact targets. The authors have proposed an approach for the detection of small-sized targets. In this system, the self-attention mechanism borrowed from the transformer model is employed to gather interaction information among all embedded tokens. This capability allows the network to discern the differentiation between small-sized targets and the background within a broader context. This research marks one of the first endeavors to explore the use of transformers for the detection of small IR targets. The introduced feature enhancement module has significantly contributed to obtaining a more comprehensive understanding of discriminative features associated with targets. 
This innovative method consists of three main components; a feature embedding module, which is intended to extract a succinct feature representation of an image; a compound encoder, employed to gather information about the interactions between all embedded elements and to derive more distinct features for smaller targets; and a specialized decoder algorithm designed for producing confidence maps.The model proposed by <cit.>, as depicted in Figure <ref>, is structured around a conditional Generative Adversarial Network (GAN) architecture that includes two generator networks and one discriminator network. Each generator is specialized for a distinct sub-task, while the discriminator's role is to differentiate between the three segmentation outcomes produced by the two generators and the ground truth. Furthermore, to enhance their performance in handling sub-tasks, the two generators incorporate context aggregation networks with varying receptive field sizes. This enables them to encompass both local and global perspectives of objects during the segmentation process. The model is composed of generator and discriminator components, similar to the conditional Generative Adversarial Network (cGAN).In contrast to the traditional cGAN, the model proposed by <cit.> features two generators, namely G_1 and G_2, and a single discriminator denoted as D. Each generator function takes an input image, denoted as I, and generates an output image representing the segmentation result. The primary objective of the generators is to minimize metrics like MD or FA during the segmentation process. To facilitate adversarial learning, the discriminator is specifically designed to differentiate between three segmentation outcomes: S_1, S_2, and S_0. Here, S_0 corresponds to the accurate segmentation of the ground truth, where "1" represents objects and "0" represents the background. While it's possible to train the two generator networks separately and then combine their segmentation results, this fusion-after-training approach limits the exchange of information during their training process, resulting in sub-optimal segmentation outcomes. To address this issue, they employ a cGAN framework to jointly train both generators. More precisely, it utilizes the discriminator D as an intermediary to establish a connection between G_1 and G_2, thereby facilitating information exchange between the two generators. This information exchange has the potential to enhance the effectiveness of G_1, initially designed to minimize MD, in reducing FA and similarly enhance the effectiveness of G_2, initially designed to minimize FA, in reducing MD. Additionally, both generators receive robust supervision signals from D due to the adversarial mechanism, which compels them to converge towards the ground truth to deceive D. Through this process, the two generators ultimately generate segmentation results that are consistent and closely resemble the ground truth. Once the entire model has been trained, either generator can be employed to process a test image and generate a segmentation result. This is possible because both generators have undergone training to reach convergence via the adversarial learning process. Peng et al. <cit.> introduced the Dynamic Background Reconstruction (DBR) approach for detecting small and dim targets, which comprises three modules: the Detection Head (DH), the Background Reconstruction module (BR), and the Dynamic Shift Window (DSW). 
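The interplay between the two generators and the shared discriminator can be made concrete with a hedged PyTorch sketch. The three-way discriminator labelling and the simple surrogate terms for missed detections and false alarms are illustrative assumptions, not the exact losses used in the cited work.

import torch
import torch.nn.functional as F

def adversarial_losses(d, s0, s1, s2):
    # d: discriminator mapping a segmentation map to 3-way logits
    #    (class 0 = ground truth, 1 = output of G1, 2 = output of G2).
    # s0: ground-truth mask; s1, s2: masks predicted by the two generators.
    real = torch.zeros(s0.size(0), dtype=torch.long)
    fake1 = torch.ones(s1.size(0), dtype=torch.long)
    fake2 = torch.full((s2.size(0),), 2, dtype=torch.long)

    # Discriminator: tell the three segmentation results apart.
    loss_d = (F.cross_entropy(d(s0), real)
              + F.cross_entropy(d(s1.detach()), fake1)
              + F.cross_entropy(d(s2.detach()), fake2))

    # Generators: fool D into labelling their outputs as ground truth, while
    # G1 additionally penalises missed detections and G2 false alarms
    # (simple surrogate terms here, not the paper's exact MD/FA objectives).
    miss = ((1.0 - s1) * s0).mean()
    false_alarm = (s2 * (1.0 - s0)).mean()
    loss_g1 = F.cross_entropy(d(s1), real) + miss
    loss_g2 = F.cross_entropy(d(s2), real) + false_alarm
    return loss_d, loss_g1, loss_g2

disc = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(32 * 32, 3))
s0 = (torch.rand(4, 1, 32, 32) > 0.95).float()
s1, s2 = torch.rand(4, 1, 32, 32), torch.rand(4, 1, 32, 32)
print([t.item() for t in adversarial_losses(disc, s0, s1, s2)])

With this picture of the adversarial objective in mind, we return to the Dynamic Background Reconstruction (DBR) approach introduced above.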
Initially, the DSW algorithm calculates an offset value based on the target object's ability to shift toward the patch's center in raw IR images. The BR algorithm then dynamically adjusts window positions based on this offset, utilizing the Naive Background Reconstruction (NBR) to restore clean backgrounds. To enhance detection performance, the approach combines IR images with targets, background-only images, and their differences before inputting them into the DH algorithm. The recall rate surpasses the precision rate due to the detector's tendency to classify reconstruction errors as positive events. The utilization of Vision Transformers in image processing, as described by Dosovitskiy et al. <cit.>, involves the partitioning of a given input image into patches of size 16x16. This approach provides inherent advantages in effectively dealing with mask tokens. MAE <cit.> utilizes random patch removal to restore pixels when a high masking ratio is present.DBR <cit.> effectively addresses the problem of a transformer model incorrectly dividing a target into adjacent patches, which can hinder background reconstruction. The authors introduced the DSW algorithm to calculate offsets for dynamic image shifting before input embedding. The DBR algorithm shows resilience against reconstruction errors. The authors proposed an approach incorporating the DH technique and the WDLoss method to mitigate the impact of reconstruction errors on detection performance by addressing specific aspects of the network architecture and loss function. When an IR image is initially fed into the DBR system, it undergoes processing with the DSW algorithm to determine offset values (Δ x, Δ y). These offsets indicate the horizontal and vertical shifts necessary to reposition the target object to the center of a patch, avoiding division into multiple patches. Following the input embedding phase, the background reconstruction process involves subjecting the IR image to two masking processes that complement each other. This approach is commonly referred to as grid masking. The theoretical goal is to subtract the generated background from the original image. However, due to discrepancies between the generated background and the actual background, reconstruction errors occur. In order to minimize the influence of these errors on the performance of detection, the DH algorithm is employed to integrate the original image, the generated background, and their discrepancies.Precisely identifying shape details in the detection of IR small targets is a challenging task due to factors such as low SNR and poor contrast. These challenges often lead to targets getting obscured within noisy and cluttered backgrounds. To address this issue, the authors <cit.> presented an approach known as the IR Shape Network (ISNet)in their work, as illustrated in Figure <ref>. ISNet comprises two key components: the Taylor Finite Difference (TFD)-inspired edge block and the Two-Orientation Attention Aggregation (TOAA) block.The TFD-inspired edge block draws inspiration from the TFD algorithm, employing mathematical techniques to enhance edge information at various levels. This enhancement boosts the contrast between the target and the background, facilitating the extraction of shape information. The TFD-inspired edge block significantly contributes to improved target edge detection by combining information from different levels, thus enabling the network to effectively capture fine target edges. 
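Although the exact Taylor-finite-difference formulation of ISNet's edge block is given in the original paper, the underlying idea, approximating spatial derivatives by finite differences and aggregating the resulting edge maps over several levels, can be sketched as follows; the function names and the first-order central-difference stencil are assumptions made for illustration.

import torch
import torch.nn.functional as F

def finite_difference_edges(feat: torch.Tensor) -> torch.Tensor:
    # feat: (B, C, H, W). First-order central differences approximate the
    # spatial derivatives (the leading terms of a Taylor expansion); the
    # gradient magnitude highlights target edges.
    kx = torch.tensor([[[[-0.5, 0.0, 0.5]]]], dtype=feat.dtype)       # d/dx
    ky = kx.transpose(-1, -2)                                         # d/dy
    c = feat.size(1)
    gx = F.conv2d(feat, kx.expand(c, 1, 1, 3).contiguous(), padding=(0, 1), groups=c)
    gy = F.conv2d(feat, ky.expand(c, 1, 3, 1).contiguous(), padding=(1, 0), groups=c)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-12)

def multilevel_edge_prior(features) -> torch.Tensor:
    # Fuse channel-averaged edge maps from several decoder levels into one prior.
    target_size = features[0].shape[-2:]
    edges = [F.interpolate(finite_difference_edges(f).mean(dim=1, keepdim=True),
                           size=target_size, mode="bilinear", align_corners=False)
             for f in features]
    return torch.stack(edges).mean(dim=0)

feats = [torch.randn(1, 8, 64, 64), torch.randn(1, 16, 32, 32)]
print(multilevel_edge_prior(feats).shape)    # torch.Size([1, 1, 64, 64])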
To address the issue of low-level features capturing intricate target details not found in high-level features, the authors introduced the TOAA block. This block incorporates two attention modules operating in parallel, generating attention maps in both row and column directions. These attention maps are utilized to adapt and enhance high-level features. Ultimately, the attentive features are aggregated and combined to produce the block's output. For training, the network employs two loss functions: the Dice loss<cit.> and the Edge loss. The Dice loss quantifies the similarity between the predicted mask and the ground truth by comparing their intersection. Conversely, the Edge loss utilizes binary cross-entropy to assess the dissimilarity between the predicted mask and the ground truth concerning edge prediction. These two loss functions are weighted using a hyper-parameter referred to as lambda, and the final training objective combines the Edge loss and the Dice loss for mask prediction.The performance of CNN-based IR small target detection is constrained by the small target size, complex backgrounds leading to clutter, and the inability of traditional CNNs to capture long-range dependencies. To overcome these challenges, the authors <cit.> introduced the Multi-Patch Attention Network (MPANet) in their work, which incorporates an axial-attention encoder and a Multi-Scale Patch Branch (MSPB) structure. The encoder architecture integrates axial attention mechanisms to improve the representation of small targets and reduce the impact of background noise. In order to mitigate the heavy computational cost associated with multi-head attention and facilitate the stacking of self-attention layers over wider regions, a factorization technique is employed to transform a two-dimensional self-attention into two separate one-dimensional self-attentions.The axial attention mechanism, as proposed by Wang et al. <cit.>, incorporates relative positioning encodings for queries, keys and values, so augmenting the effectiveness of the model. Current methods suffer from significant loss of semantic information due to multiple pooling processes, which hinders their applicability to small IR target detection, primarily because of the low resolution of IR images. Additionally, the patch-wise training approach restricts the network's ability to capture mutual information or long-range dependencies between patches. To address these issues, an MSPB structure is used to perform aggregation operations hierarchically, beginning from the bottom and progressing upward. The global branch employs a position-sensitive attention block to capture semantic and contour features of both the foreground, background and target location information. In the encoder, a lightweight U-shape architecture and axial attention block replace conventional convolutions. The structure of MSPB enables the effective extraction of feature representations at several semantic scales, including both coarse and fine-grained features. This is achieved by a bottom-up fusion process that integrates low-scale patch semantics with global picture semantic information. The experimental results on the publicly available SIRST dataset indicate that the MPANet demonstrates excellent performance in fine segmentation and target localization. Zhang et al. <cit.> proposed a methodology for detecting small and dim IR targets using a fusion-based approach that incorporates a transformer attention module and an adaptive asymmetric fusion module. 
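The factorisation of two-dimensional self-attention into two one-dimensional attentions, as used in MPANet's encoder, can be sketched in PyTorch as below. Positional encodings (which the position-sensitive variant adds to queries, keys, and values) are omitted, and the class name and head count are illustrative.

import torch
import torch.nn as nn

class AxialAttention2d(nn.Module):
    # Factorised 2-D self-attention: attend along rows, then along columns.
    # Replacing full H*W attention with two 1-D attentions reduces the cost
    # from O((HW)^2) to roughly O(HW*(H + W)), which makes stacking such
    # layers on larger feature maps affordable.
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Attend along the width for every row.
        rows = x.permute(0, 2, 3, 1).reshape(b * h, w, c)
        rows, _ = self.row_attn(rows, rows, rows)
        x = rows.reshape(b, h, w, c)
        # Attend along the height for every column.
        cols = x.permute(0, 2, 1, 3).reshape(b * w, h, c)
        cols, _ = self.col_attn(cols, cols, cols)
        return cols.reshape(b, w, h, c).permute(0, 3, 2, 1)

x = torch.randn(2, 32, 16, 16)
print(AxialAttention2d(32)(x).shape)   # torch.Size([2, 32, 16, 16])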
The overall structure of the Global Attention Network (GANet) is illustrated in Figure <ref>, showcasing multiscale feature fusion. This network comprises distinct stages, namely feature extraction, transformer attention, adaptive asymmetric fusion, and up-sampling. Feature extraction is utilized to capture multiscale features from the input image, employing ResNet34 <cit.> pre-trained on ImageNet as the baseline, which includes Res1, Res2, and Res3 downsampling layers. To address the challenge of small infrared targets consisting of only a few pixels, the authors limit the increase in downsampling layers to prevent the loss of target information in deeper layers, thereby preserving detection accuracy for small targets. The feature maps obtained from various layers undergo processing in the transformer attention module (Trans1, Trans2, and Trans3). This module leverages global features to establish long-range dependencies for targets. An Adaptive Asymmetric Fusion (AAF) module integrates low-level and high-level features into the up-sampling stages, ensuring precise detection of small targets. The end-to-end network produces a binary map indicating predicted target locations, with dimensions identical to the input image.Table<ref>showcases the evaluation of various deep learning methods across multiple datasets.Given the diverse range of performance measures, datasets and hyperparameters under consideration, it is impractical to directly evaluate the performance of these aspects on a common platform. Nevertheless, it is usually evident that most of these techniques demonstrate significant efficacy in circumstances including both cluttered and homogeneous backgrounds. The majority of the techniques under consideration are founded on the re-engineering of downsampling methods in order to prevent the loss of small targets in deeper layers. A small subset of individuals have incorporated attention modules into their network architectures. The majority of the approaches exhibit exceptional performances. In addition to target detection, ISNet also offers contour information to a significant degree.§.§.§ Weakly Supervised Learning Based Approaches Training CNNs for the detection of IR small targets using fully supervised learning has gained significant attention recently. However, these methods are resource-intensive because they require a substantial amount of manual effort to annotate each pixel.As presented in <cit.>, authors have introduced an innovative solution to address this challenge by using point-level supervision. During the training process, CNNs are guided by point labels. Interestingly, CNNs initially learn to segment a cluster of pixels near the targets. Gradually, they refine their predictions to closely match the ground truth point labels. The approach described in <cit.> is influenced by the notion of mapping degeneration and is implemented through a framework called Label Evolution with Single Point Supervision (LESPS). One of the key motivations behind their research arises from an intriguing observation made while training SIRST detection networks. When single-point labels are employed for guidance, CNNs tend to initially segment a group of pixels near the targets with low confidence. 
However, over time, the CNNs progressively enhance their performance and exhibit higher confidence in predicting accurate ground truth point labels.The process of mapping degeneration is shaped by the unique imaging attributes of IR systems, as shown in studies <cit.>, as well as by the local contrast properties of small IR targets. Additionally, the inherent learning progression of CNNs, as discussed in <cit.>, plays a role in causing this degeneration. The first two factors lead to the enlargement of segmented areas beyond the defined point labels, while the third factor contributes to this degeneration process. They introduced the LESPS framework for weakly supervised SIRST detection. LESPS leverages intermediate predictions generated by the neural network during training to iteratively update the current labels. These updated labels serve as supervision until the next label update. By employing iterative label updates and network training techniques, the neural network can approximate these updated pseudo mask labels, enabling end-to-end pixel-level SIRST detection. § POPULAR DATASETSThe lack of large-scale datasets presents a significant obstacle to the adoption of deep learning methods for detecting small and dim targets in the IR domain. Although there has been some progress in recent years in dataset generation (see Table <ref> and Figures <ref>, <ref>, and <ref>) related to this area, the subsequent section provides an overview of the datasets currently available in this area. §.§ SIRST Dataset The SIRSTdataset plays a pivotal role in the realm of small and dim target detection within IR imagery. It functions as a benchmark dataset, uniquely tailored to the evaluation and advancement of algorithms crafted specifically for the purpose of detection, tracking, and classification of small targets within IR environments.The SIRST dataset offers a diverse array of IR image sequences procured from a variety of platforms, encompassing both aerial and ground-based sensors. These sequences span diverse scenarios, including urban environments, rural landscapes, and maritime settings. The dataset covers an extensive spectrum of target dimensions, signal-to-noise ratios, and background intricacies, effectively representing authentic and intricate conditions for the task of target detection. The SIRST framework includes the following several prominent datasets that have garnered considerable acclaim. §.§.§ NUAA-SIRST DatasetThis dataset <cit.> includes real IR images featuring a wide array of backgrounds, including scenes like clouds, urban settings, and bodies of water. It comprises a total of 427 images, which have been meticulously annotated. Within this dataset, one can find a variety of targets, ranging from small and dim point targets to extended targets. §.§.§ NUST-SIRST DatasetThis dataset <cit.> holds a prominent position in the IR imaging domain and classifies targets into two main categories: point and spot types. It consists of an extensive collection of 10,000 images, encompassing diverse backgrounds, such as clouds, cityscapes, rivers, and roadways. The dataset has been annotated manually with a broad classification. It's worth noting that this dataset is artificially generated using synthetic techniques. §.§.§ CQU-SIRST DatasetThis dataset <cit.> includes a collection of 1676 synthetic images. It incorporates diverse background scenarios, encompassing settings like clouds, urban environments, and maritime scenes. The dataset also includes corresponding ground truth data. 
The main focus of this dataset is on point targets.
§.§.§ NUDT-SIRST Dataset
The dataset <cit.> comprises a total of 1,327 synthetically generated images. It covers a wide range of background scenarios, including clouds, urban environments, and maritime scenes, and ground truth data is provided with the dataset. It contains a mixture of point and extended targets.
§.§ IRSTD Dataset
The IRSTD-1K <cit.> dataset is a recent collection of 1,000 real-world IR images acquired with an IR camera. These images have dimensions of 512×512 pixels. To ensure precise annotations, the targets within these images have been meticulously labeled at the pixel level. The dataset includes a variety of small targets, such as drones, creatures, vessels, and vehicles, captured at different locations and from considerable imaging distances. It spans a diverse array of environments, including bodies of water, natural landscapes, urban settings, and varying atmospheric conditions. The backgrounds exhibit notable clutter and noise, which further increases the difficulty of detecting and recognizing targets in IR imagery. Consequently, IRSTD-1K is well suited for comprehensive assessments and benchmarking in the domain of IR small target detection (IRSTD), enabling researchers to evaluate and compare the performance of their algorithms and models for the accurate detection and classification of small targets in IR images.
§.§ Customized Dataset
In addition to the aforementioned datasets, numerous researchers <cit.> have created their own datasets <cit.> specifically designed for evaluating their algorithms. Naraniya et al. <cit.> presented a methodology for creating a specialized dataset by combining artificially generated backgrounds with synthetic targets. In this methodology, the motion of the target is modeled in the NED coordinate system, and Gaussian-blurred point targets are superimposed onto the backgrounds.
§ PERFORMANCE EVALUATION
Commonly used metrics for evaluating the performance of small and dim target detection methods in IR imagery are discussed below.
§.§ Performance Measure at Pixel level
Intersection over Union (IoU) is a metric commonly used in computer vision to measure the overlap between a predicted region and a ground truth region. It is calculated as the ratio of the area of their intersection to the area of their union:
IoU = A_i / A_u
where A_i represents the area of the intersection region and A_u represents the area of the union region. The IoU value ranges from 0 to 1, where 0 indicates no overlap between the predicted and ground truth regions, and 1 indicates a perfect match. Normalized Intersection over Union (nIoU) is a sample-averaged, pixel-level version of IoU: for each image, the ratio of true-positive pixels to the union of ground truth and predicted positive pixels is computed, and these per-image scores are averaged over all samples:
nIoU = (1/N) ∑_i=1^N ( TP[i] / (T[i] + P[i] - TP[i]) )
In the above equation, N represents the total number of samples, TP[i] is the count of true-positive pixels in sample i, and T[i] and P[i] are the counts of ground truth and predicted positive pixels, respectively.
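For reference, both pixel-level measures can be computed with a few lines of NumPy; the helper names below are ours, not part of any benchmark toolkit.

import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    # Pixel-level IoU between binary prediction and ground-truth masks.
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / union if union > 0 else 1.0

def niou(preds, gts) -> float:
    # Normalised IoU: average of per-sample TP / (T + P - TP).
    scores = []
    for pred, gt in zip(preds, gts):
        tp = np.logical_and(pred, gt).sum()
        t, p = gt.sum(), pred.sum()
        denom = t + p - tp
        scores.append(tp / denom if denom > 0 else 1.0)
    return float(np.mean(scores))

gt = np.zeros((8, 8), dtype=bool); gt[2:4, 2:4] = True
pred = np.zeros_like(gt); pred[2:4, 3:5] = True
print(iou(pred, gt), niou([pred], [gt]))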
§.§ Performance Measure at Object level
At the object level, a detection is considered correct when two conditions are met simultaneously: the predicted output has some pixel overlap with the ground truth, and the distance in pixels between the centroid of the detection and the centroid of the ground truth target is below a specified threshold. The Probability of Detection (P_d) quantifies the likelihood of correctly identifying targets and is the ratio of correctly predicted targets (N_pred) to the total number of targets (N_all):
P_d = N_pred / N_all
The False-Alarm Rate (F_a) is the ratio of the number of falsely predicted target pixels (N_false) to the total number of pixels in the image (note that here N_all denotes the total pixel count rather than the target count):
F_a = N_false / N_all
Other measures employed for assessing model performance include precision, recall, and mean average precision (mAP). Precision and recall are computed from the confusion matrix, as depicted in Table <ref>. Precision (Equation <ref>) is the ratio of true positives to the sum of true positives and false positives, i.e., the fraction of the model's positive predictions that are correct; in object detection, it is the ratio of correctly detected objects to the total number of objects predicted by the model. Recall (Equation <ref>) is the ratio of true positives to the sum of true positives and false negatives, i.e., the fraction of actual positive instances that the model recovers; in object detection, it is the ratio of correctly detected objects to the total number of ground truth objects. There is often a trade-off between precision and recall: increasing one may decrease the other. It is therefore common to use a combined metric such as the F1 score (Equation <ref>), which considers both. Average Precision (AP) is obtained by computing the area under the precision-recall curve of a model's predictions, summarizing the precision-recall trade-off over different confidence thresholds. Mean Average Precision (mAP) is the mean of the Average Precisions across multiple classes: each class has its own precision-recall curve and AP, and mAP aggregates them into a single measure (Equation <ref>).
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F_1 = 2 · Precision · Recall / (Precision + Recall)
mAP = (1/N) ∑_i=1^N AP_i
where N is the number of classes and AP_i is the Average Precision for class i. In object detection, a higher mAP indicates better overall performance, meaning that the model both identifies objects correctly (precision) and misses few objects (recall) across multiple classes.
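A hedged sketch of the object-level and classification measures follows. Matching predictions to ground truth by centroid distance is one common convention; the threshold value and the approximation of false alarms by unmatched predicted centroids (rather than by falsely predicted pixels, as in the definition of F_a above) are illustrative assumptions.

import numpy as np

def pd_fa(pred_centroids, gt_centroids, image_pixels: int, dist_thresh: float = 3.0):
    # Object-level probability of detection and a false-alarm rate proxy.
    # A ground-truth target counts as detected if some predicted centroid lies
    # within dist_thresh pixels of it.
    detected, matched = 0, set()
    for g in gt_centroids:
        for i, p in enumerate(pred_centroids):
            if i not in matched and np.linalg.norm(np.array(p) - np.array(g)) <= dist_thresh:
                detected += 1
                matched.add(i)
                break
    p_d = detected / len(gt_centroids) if gt_centroids else 1.0
    f_a = (len(pred_centroids) - len(matched)) / image_pixels
    return p_d, f_a

def precision_recall_f1(tp: int, fp: int, fn: int):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

print(pd_fa([(10, 10), (40, 2)], [(11, 9)], image_pixels=256 * 256))
print(precision_recall_f1(tp=8, fp=2, fn=4))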
§ DISCUSSION AND POTENTIAL FUTURE DIRECTIONS
Detecting small and dim targets with MIRST methods requires multiple frames, making these methods less suitable for real-time implementation. Approaches based on low-rank representation have demonstrated adaptability to IR images with a low signal-to-clutter ratio; however, these algorithms still struggle to accurately detect small and variably shaped targets in complex backgrounds, resulting in a high False Alarm Rate (FAR). HVS-based approaches assess the discrepancy between targets and backgrounds through discontinuity measurement, but in cluttered scenarios with a low signal-to-noise ratio (SNR) they perform inadequately. Traditional image processing relies on manually designed features, and the significant variations in real scenes (such as target size, shape, signal-to-clutter ratio (SCR), and clutter backdrop) make it difficult for handcrafted features and fixed hyperparameters to cope with these differences; achieving a generalized implementation without scenario-specific hyperparameter tuning is therefore highly challenging. Moreover, designing handcrafted features and optimizing hyperparameters require specialized expertise and substantial engineering effort. Most conventional image processing-based algorithms perform well on images with uniform backgrounds and high-contrast targets, but their performance degrades in the presence of significantly diverse, cluttered backgrounds. Conventional small and dim target detection methods are prone to both missed detections and false detections when the signal-to-clutter ratio (SCR) is low, and they tend to produce false alarms in regions of high local contrast.
In contrast to conventional image processing approaches, deep learning-based methodologies can learn the characteristics of small and dim IR targets in a data-driven manner. While deep learning-based methods have recently demonstrated state-of-the-art performance, the majority of them merely fine-tune networks that were originally built for generic, extended objects. Applying these networks to small target detection in IR can result in the loss of small targets in deeper layers because of the significant size difference between IR small targets and generic objects. Deep learning-based approaches typically outperform traditional methods, but they may still struggle to predict target shapes accurately. Their robust training requires a large-scale dataset owing to their data-driven nature. Segmentation-type approaches additionally require pixel-level annotations, which entail a substantial amount of engineering work, whereas detection-type and weakly supervised approaches require less engineering effort for dataset generation.
As technology continues to advance, several potential future directions can be explored to enhance the accuracy and efficiency of detecting small and dim targets in IR imagery.
One promising avenue for improvement lies in the development of advanced machine learning algorithms, particularly deep learning models, tailored specifically for small target detection. Convolutional Neural Networks (CNNs) have shown success in various computer vision tasks, and their application to small target detection in IR imagery can be further refined. Researchers may explore novel network architectures, optimization techniques, and training strategies to boost the model's ability to discern subtle features indicative of small and dim targets. In addition to traditional supervised learning approaches, there is potential for leveraging unsupervised and semi-supervised learning techniques. Anomaly detection methods, which can identify deviations from normal patterns in the absence of labeled training data, could be explored to enhance small target detection in situations where annotated datasets are limited. Incorporating transfer learning from other domains or modalities may also prove beneficial in training models with limited IR imagery datasets.The integration of multispectral and hyperspectral data is another promising direction <cit.>. Combining information from different wavelengths beyond the infrared spectrum can provide a more comprehensive view of the scene, aiding in the discrimination of small and dim targets from background noise. Fusion techniques that intelligently merge data from multiple sensors or platforms can be explored to exploit the complementary strengths of different spectral bands. Advancements in sensor technology, including higher spatial and temporal resolutions, could significantly impact small target detection. The development of sensors with improved sensitivity and dynamic range, as well as the incorporation of emerging technologies such as quantum sensors, may enhance the overall quality of IR imagery, making it easier to identify small and dim targets with greater accuracy. Furthermore, the exploration of real-time processing capabilities is essential for applications where timely detection is critical. Implementing parallel processing techniques, leveraging Graphics Processing Units (GPUs) or dedicated hardware accelerators, can expedite the analysis of large volumes of IR imagery in real time, ensuring swift and accurate detection of small and dim targets.§ CONCLUSION This review encompasses a wide variety of approaches developed and refined over the years for small and dim target detection in IR imagery. The authors aggregated findings and conducted a comparative analysis of the majority of approaches in tabular form. It can be inferred that most deep learning-based methods for detecting small and dim targets exhibit notable performance, demonstrating a discernible improvement compared to conventional image processing-based approaches, especially in scenarios with cluttered backgrounds. Traditional image processing methods, specifically MTHM and CM, exhibit commendable performance and have a minimal computational footprint. However, they lack the capability to manage cluttered backgrounds effectively. In contrast, deep learning-based techniques show exceptional performance in both cluttered and uniform backgrounds.§ ACKNOWLEDGEMENT Throughout the duration of this undertaking, the authors wish to extend their heartfelt gratitude to Ms.Neeta Kandpal, Scientist 'G', IRDE, Dehradun, and Dr. Ajay Kumar, Outstanding Scientist and Director, IRDE, Dehradun, for their invaluable support and encouragement. 
The realization of this endeavor would not have been possible without their guidance.
Nikhil Kumar received his M.Tech. degree from the Department of Electrical Engineering, IIT Kanpur. He is currently pursuing a Ph.D. in the Department of Computer Science, IIT Roorkee. As a scientist at DRDO, he has extensive experience in developing electro-optical systems for the Indian Armed Forces. His areas of research are IR signal processing, image processing, computer vision, deep learning, and artificial intelligence.
Pravendra Singh received his Ph.D. degree from IIT Kanpur. He is currently an Assistant Professor in the CSE department at IIT Roorkee, India. His research interests include deep learning, machine learning, computer vision, and artificial intelligence. He has published papers at internationally reputable conferences and in journals, including IEEE TPAMI, IEEE TIV, IJCV, CVPR, ECCV, NeurIPS, AAAI, IJCAI, Pattern Recognition, Neural Networks, Knowledge-Based Systems, Neurocomputing, and others.
http://arxiv.org/abs/2311.16346v1
{ "authors": [ "Nikhil Kumar", "Pravendra Singh" ], "categories": [ "cs.CV", "cs.LG" ], "primary_category": "cs.CV", "published": "20231127222546", "title": "Small and Dim Target Detection in IR Imagery: A Review" }
The stability of smooth solitary waves for the b-family of Camassa-Holm equationsJi Li School of Mathematics and Statistics, Huazhong University of Science and Technology, Wuhan 430074, China ([email protected]). Changjian Liu School of Mathematics (Zhuhai), Sun Yat-sen University, Zhuhai 519082, China ([email protected]). Teng Long(Corresponding author) School of Mathematics and Statistics, Huazhong University of Science and Technology, Wuhan 430074, China ([email protected]). Jichen Yang College of Mathematical Sciences, Harbin Engineering University, Harbin 150001, China ([email protected]).January 14, 2024 ==================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================plainWhen a speaker verification (SV) system operates far from the sound source, significant challenges arise due to the interference of noise and reverberation. Studies have shown that incorporating phonetic information into speaker embedding can improve the performance of text-independent SV. Inspired by this observation, we propose a joint-training speech recognition and speaker recognition (JTSS) framework to exploit phonetic content for far-field SV. The framework encourages speaker embeddings to preserve phonetic information by matching the frame-based feature maps of a speaker embedding network with wav2vec's vectors. The intuition is that phonetic information can preserve low-level acoustic dynamics with speaker information and thus partly compensate for the degradation due to noise and reverberation. Results show that the proposed framework outperforms the standard speaker embedding on the VOiCES Challenge 2019 evaluation set and the VoxCeleb1 test set. This indicates that leveraging phonetic information under far-field conditions is effective for learning robust speaker representations.Far-field speaker verification, multi-task learning, phonetic content, wav2vec § INTRODUCTION Speaker verification (SV) plays an important role in various fields, such as biometric authentication, e-banking, and access control. Traditional SV models rely on statistical models like Gaussian Mixture Models (GMMs) <cit.> and i-vectors <cit.> to achieve good performance. With the advance in deep learning, deep neural networks, such as TDNNs <cit.>, ResNets <cit.>, and ECAPA-TDNNs <cit.>, have been prevailing for speaker embedding. Notably, the ECAPA-TDNN has achieved state-of-the-art performance on various datasets, demonstrating its superiority in speaker verification tasks.The SV systems mentioned earlier are usually trained on “clean" utterances and perform well on near-field speech signals. Under far-field conditions, however, due to uncontrollable noise and reverberation, a severe mismatch occurs between the near-field and far-field domains, and these systems suffer greatly <cit.>. 
Developing an SV system that can address the adverse conditions in the far field is essential.Researchers attempted to address the far-field challenge by modifying the system architecture, exploring adversarial learning techniques, and leveraging advanced data augmentation strategies. For instance, <cit.> introduced the channel-interdependence enhanced Res2Net (CE-Res2Net) to aggregate speaker information from multi-scale frame-level representations and achieved performance gains on VOiCES Challenge 2019 data. The authors in <cit.> used a domain separation network to disentangle and suppress the domain-specific information related to far-field noise and reverberation. In <cit.>, a population-based searching strategy was proposed to optimize the augmentation parameters and greatly boosted far-field SV performance.On the other hand, studies have shown that text-independent SV systems can be enhanced by incorporating phonetic information into embedding learning. In <cit.>, the authors adopted a multi-task learning strategy by combining a phonetic classifier with a speaker classifier for speaker embedding and obtained superior performance. The authors of <cit.> investigated the usefulness of phonetic information at the segment level and the frame level. They concluded that although phonetic content at the segment (embedding) level is detrimental to SV performance, using phonetic information at the frame level is beneficial. One possible explanation for the performance improvement in <cit.> is that shared spectral dynamics exist at the lower (frame-level) layers, which are useful for speech and speaker recognition. Enriching content information at the frame-level layers also strengthens the information essential for speaker discrimination. In this paper, we exploit phonetic information in far-field SV. Inspired by the above observations, we propose a framework that can jointly train a model to perform speech recognition and speaker verification tasks. The framework comprises a speech recognition component for phonetic information extraction and a speaker identification component for enforcing the segment-level layers to produce speaker discriminative vectors. We refer to the framework as joint-training speech and speaker recognition (JTSS). Unlike <cit.>, phonetic labels are not required in JTSS.Instead, we use a pre-trained wav2vec 2.0 model to extract phonetic content in an unsupervised way. This strategy greatly saves the effort to transcribe the speech files in speaker recognition corpora. The rationale behind JTSS is that although noise and reverberation can blur speaker information in speech signals, the phonetic information extracted from wav2vec 2.0 assists in preserving the underlying acoustic dynamics shared by the speaker identity. Therefore, the degradation due to the far-field conditions can be compensated to a certain extent. Our main contributions are as follows: * We proposed a phonetic-aware JTSS framework, which improves the robustness of far-field SV by exploiting phonetic information.* We incorporated a pre-trained wav2vec 2.0 model in the speech recognition part, eliminating the need for manually transcribing speaker verification datasets.The rest of the paper is organized as follows. Section 2 introduces the wav2vec 2.0 and details the JTSS framework. Section 3 presents the experimental settings, and Section 4 shows the results and analyses. 
We draw a conclusion in Section 5.
§ METHODOLOGY
This section introduces the JTSS framework and its two components: the speech recognition part and the speaker classification part.
§.§ Speech Recognition Model
In Fig. <ref>, we utilize a wav2vec 2.0 <cit.> network fine-tuned with the CTC loss <cit.> as the speech model. Wav2vec 2.0 is a self-supervised learning framework that leverages a large amount of unlabeled data to learn speech representations. It takes in a waveform and produces context representations through a stack of CNN layers and transformer layers. Through contrastive learning, the model is able to extract compact and meaningful speech representations that can be used for downstream speech tasks. Recently, the pre-trained wav2vec 2.0 model has gained popularity as a front-end feature extractor in various speech applications.
§.§ JTSS Framework
As shown in Fig. <ref>, the speech recognition component and the speaker classification component share the frame-level layers (green block). The representations output from an intermediate frame-level layer are fed into the speech recognition part. We denote these representations as 𝒳 = {x_t ∈ℝ^D; t = 1,…,T}, where x_t is a D-dimensional vector at the t-th frame. For the speaker classification part, the feature maps produced by the last frame-level layer are processed by a pooling layer and a fully connected (FC) layer to derive an utterance-level embedding e. The AAMSoftmax <cit.> loss is employed as the loss function (L_speaker in Fig. <ref>).
For the speech recognition part, the waveform is fed into the speech model and we obtain a sequence of T frames 𝒱 = {v_t ∈ℝ^D; t = 1,…,T}, where D is the dimension of the speech vectors. A max-pooling layer is applied to 𝒳 to ensure that the resulting 𝒵 = {z_t ∈ℝ^D; t = 1,…,T} has the same length as 𝒱. We compute the speech loss from the frame-wise cosine similarity between 𝒵 and 𝒱:
L_speech = 1 - (1/T) ∑_t=1^T cos(z_t, v_t).
Then, we average L_speech across the utterances in a mini-batch. By making 𝒵 close to 𝒱, we enable the frame-level layers of the speaker encoder to preserve useful phonetic information. Because phonetic information contains speaker-dependent acoustic dynamics, maintaining phonetic information at the frame level also helps preserve speaker information in the embedding network. As will be demonstrated in Section 4.1, this preservation helps compensate for the performance degradation caused by far-field environments. The total loss is defined as follows:
L_total = L_speaker + λ L_speech,
where L_speaker is the AAMSoftmax loss defined in <cit.> and λ is a hyperparameter that controls the contribution of phonetic information. During training, we freeze the parameters of the speech model.
§ EXPERIMENTAL SETUP
§.§ Datasets and Data Preparation
The training data comprise the VoxCeleb1 development set and the VoxCeleb2 development set <cit.> <cit.>, which consist of a total of 7,205 speakers. Voice activity detection (VAD) was not used. We followed the data augmentation strategy in Kaldi's recipes <cit.>. We added noise, music, and babble to the training data using MUSAN <cit.> and created reverberated speech data based on RIR <cit.>. For evaluation, we used the VOiCES Challenge 2019 evaluation (VOiCES19-eval) dataset <cit.>. The VoxCeleb1 test set Original (Vox-O), which comprises 40 speakers, was also used for evaluation.
§.§ Network Training
We used the standard x-vector <cit.> and ECAPA-TDNN <cit.> as our backbones. The channel size of the ECAPA-TDNN is 512.
The dimension of speaker embeddings is 192 for ECAPA-TDNN and 512 for x-vector, respectively. For the speech model, we used the wav2vec 2.0 model fine-tuned on the LibriSpeech dataset <cit.>.[<https://huggingface.co/facebook/wav2vec2-base-960h>] The output of the wav2vec 2.0 was obtained from the projection layer of the fine-tuned model. The frame-level representation from the lowest-level TDNN of the x-vector network and the ECAPA-TDNN were used as the input to the speech recognition part.For ECAPA-TDNN, we extracted 80-dimensional filter-bank (Fbank) features from 16 kHz audio signals using a 25ms window with a 10ms frameshift. For the x-vector network, we extracted 40-dimensional Fbank features. Each training segment in the mini-batch has a duration of 2 seconds. The batch size was set to 100 for ECAPA-TDNN and 50 for x-vector, respectively. We used an Adam optimizer with an initial learning rate of 0.001 and employed a step learning rate scheduler. The total number of epochs is 80. For the AAMSoftmax loss function, the margin is 0.2 and the scale is 30. §.§ Performance EvaluationWe used a cosine backend in all experiments. When performing evaluation on the Vox-O test set, we followed the setting in <cit.> to apply the AS-norm <cit.> on the scores. The performance metrics include equal error rate (EER) and minimum detection cost function (minDCF) with P_target = 0.01. § RESULTS AND ANALYSES We report the performance of JTSS in this section. The comparison with conventional speaker embeddings is detailed.§.§ Main ResultsTable <ref> presents the results of various systems on the VOiCES19-eval dataset. We observe that our baselines achieve superior or comparable performance to existing systems. From Table <ref>, it is evident that JTSS outperforms the baselines for both x-vector and ECAPA-TDNN. Specifically, for ECAPA-TDNN, our proposed method reduces the EER by12.9% and minDCF by 14.4%. For x-vector, our method achieves a reduction of 14.1% and 23.8% on EER and minDCF, respectively. This observation demonstrates the effectiveness of JTSS in leveraging phonetic information for far-field SV.To verify that JTSS can partially compensate for the degradation due to adverse conditions in the far field, we investigated the performance of JTSS on the clean and noisy Vox-O datasets. The “clean" set refers to the standard Vox-O test data, and the noisy Vox-O set was created by randomly adding noise and reverberation to the standard (clean) Vox-O data, following the data augmentation strategy in Section 3.2.Table <ref> shows the results of JTSS and the baseline models. From Table <ref>, we observe that JTSS outperforms the baselines on both clean and noisy Vox-O sets. On the clean Vox-O, JTSS achieves a slight improvement over the baseline. This confirms the conclusion in <cit.> that using phonetic content can benefit text-independent SV. On the noisy Vox-O set, we see substantial performance degradation compared with the clean counterpart. Nevertheless, JTSS obtains remarkably greater performance gains over the baseline systems. This observation verifies our motivation that incorporating phonetic information into the speaker embedding system can improve SV performance, particularly in far-field environments with noise and reverberation.§.§ Ablation StudyTable <ref> shows the impact of feeding different frame-level representations to the speech recognition part on SV performance. In Table <ref>, Layer 0 and Layer 4 correspond to the initial and final TDNN layers of the ECAPA-TDNN, respectively. 
The remaining three layers correspond to the three SE-Res2Blocks, respectively. Table <ref> shows that the performance improvement of JTSS becomes more prominent when we feed features from lower layers into the speech recognition part. Specifically, when we input features from the initial TDNN layer (Layer 0) into the speech recognition part, we obtained the best result with an EER of 5.13%. However, performance gradually drops when we preserve phonetic information at the upper layers (with higher-level representations). This result is reasonable because the lower-level feature maps contain more entangled speaker and content information, whereas the representations at upper layers are more speaker-specific. Therefore, it is preferable to exploit phonetic information at lower layers, which is also the reason for using the bottom layer for phonetic information extraction in Section 4.1.
We also investigated the effect of λ in Eq. <ref> on JTSS. The results are shown in Table <ref>. We observe that the best performance is achieved when λ = 0.1, with an EER of 5.28%. As λ increases, the performance of JTSS gradually deteriorates. When λ was set to 0.4, the EER of the JTSS system was higher than that of Baseline 2 in Table <ref>. These observations suggest that excessive phonetic information can cause the speaker embedding network to focus on content details, neglecting speaker information and leading to performance degradation.
§ CONCLUSIONS
In this paper, we propose a joint training framework (JTSS) for speech recognition and speaker verification tasks to improve far-field SV performance. By using a pre-trained speech recognition model, we incorporate phonetic information into conventional speaker encoders. We also eliminate the reliance on transcriptions for the speech recognition task. Experimental results demonstrated that leveraging phonetic information can improve the performance of far-field speaker verification.
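To make the joint objective of the methodology section concrete, the following is a minimal PyTorch sketch. The adaptive max-pooling used to align the sequence lengths and the plain cross-entropy standing in for the AAM-Softmax speaker loss are simplifying assumptions rather than the authors' implementation.

import torch
import torch.nn.functional as F

def jtss_loss(frame_feats, wav2vec_feats, speaker_logits, speaker_labels, lam=0.1):
    # frame_feats   : (B, T_x, D) intermediate frame-level maps of the speaker encoder.
    # wav2vec_feats : (B, T, D)   frozen wav2vec 2.0 outputs for the same waveforms.
    # The frame features are max-pooled along time to the wav2vec length, and the
    # speech loss is one minus their average frame-wise cosine similarity.
    T = wav2vec_feats.size(1)
    z = F.adaptive_max_pool1d(frame_feats.transpose(1, 2), T).transpose(1, 2)
    cos = F.cosine_similarity(z, wav2vec_feats, dim=-1)      # (B, T)
    loss_speech = (1.0 - cos.mean(dim=1)).mean()
    # Plain cross-entropy stands in for the AAM-Softmax speaker loss here.
    loss_speaker = F.cross_entropy(speaker_logits, speaker_labels)
    return loss_speaker + lam * loss_speech

B, T_x, T, D, n_spk = 4, 200, 99, 512, 10
loss = jtss_loss(torch.randn(B, T_x, D), torch.randn(B, T, D),
                 torch.randn(B, n_spk), torch.randint(0, n_spk, (B,)))
print(loss.item())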
http://arxiv.org/abs/2311.15627v1
{ "authors": [ "Zezhong Jin", "Youzhi Tu", "Man-Wai Mak" ], "categories": [ "cs.SD", "cs.AI", "eess.AS" ], "primary_category": "cs.SD", "published": "20231127084535", "title": "Phonetic-aware speaker embedding for far-field speaker verification" }
GloNets: Globally Connected Neural Networks Antonio Di Cecco10000-0002-9070-4663 Carlo Metta20000-0002-9325-8232 Marco Fantozzi30000-0002-0708-5495 Francesco Morandin30000-0002-2022-2300 Maurizio Parton10000-0003-4905-3544 January 14, 2024 ====================================================================================================================================================================================== Most graph neural networks (GNNs) are prone to the phenomenon of over-squashing in which node features become insensitive to information from distant nodes in the graph. Recent works have shown that the topology of the graph has the greatest impact on over-squashing, suggesting graph rewiring approaches as a suitable solution. In this work, we explore whether over-squashing can be mitigated through the embedding space of the GNN. In particular, we consider the generalization of Hyperbolic GNNs (HGNNs) to Riemannian manifolds of variable curvature in which the geometry of the embedding space is faithful to the graph's topology. We derive bounds on the sensitivity of the node features in these Riemannian GNNs as the number of layers increases, which yield promising theoretical and empirical results for alleviating over-squashing in graphs with negative curvature. § INTRODUCTIONGraph Neural Networks (GNNs) have emerged as a powerful tool for modeling relational systems and learning on graph-structured data <cit.>. Most GNN architectures rely on the message-passing paradigm in which information is propagated along the edges of the graph, resulting in a class of Message Passing Neural Networks (MPNNs). However, due to an exponentially growing computational tree,the compression of a quickly increasing amount of information into a fixed-size vector leads to informational over-squashing <cit.>. This phenomenon poses a significant challenge on long-range tasks with a large problem radiussince it obstructs the diffusion of information from distant nodes. The over-squashing problem has been analyzed through various lenses such as graph curvature <cit.>, information theory <cit.>, and effective resistance <cit.>, each suggesting a corresponding approach to mitigate the issue by rewiring the graph. Along with several other works, this line of reasoning has resulted in a “zoo” of proposed graph rewiring techniques for over-squashing <cit.>. Recent work has unified the spatial and spectral techniques under a common framework and justified their efficacy by demonstrating that graph topology plays the biggest role in alleviating over-squashing as opposed to MPNN properties such as width or depth <cit.> .One potential drawback of many spatial graph rewiring techniques is the distortion of structural information that may be relevant to the learning task. Instead of altering the graph topology, we thus consider augmentations to the MPNN architecture that would make it topology-aware. Specifically, we explore the effects of changing the embedding space of the GNN. The hypothesis behind our approach is that by embedding the negatively curved sections of the graph in hyperbolic space, there would be less information lost at each layer due to the increased representational capacity. 
However,hyperbolic space is a poor inductive bias for graphs with significant positive curvature, where spherical space would be more suitable.Therefore, we consider a GNN that embeds graphs in Riemannian manifolds of variable curvature.We study the over-squashing phenomenon in one such model by generalizing the Hyperbolic GNN (HGNN) architecture <cit.> to Riemannian GNNs (RGNNs). Assuming that there exists a Riemannian manifold where the geometry matches that of the input graph, RGNNs are in principle able to embed the graph in this manifold. While the RGNN architecture is not immediately computationally tractable in its most general form, it provides a means to derive a best-case theoretical result on over-squashing. We derive a bound on the Jacobian of the node features in a RGNN and show that it relies on the global curvature properties of the embedding space. Based on this bound, we heuristically and empirically demonstrate that our model addresses cases where the graph's curvature is predominantly negative everywhere (e.g. tree-like graphs). We also identify pathological cases where our model may fail on manifolds with both positive and negative curvature. Finally, we propose concrete next steps to complete our theoretical analysis that would justify step (2) in the argument above and motivate the development of tractable methods that approximate general Riemannian GNNs.§ RIEMANNIAN GNNS For a primer on the Riemannian geometry notions used throughout the following sections, we refer the reader to Appendix <ref>. We define GNNs that embed node representations in a Riemannian space that is faithful to the input graph's topology. Crucially, we assume that we are given an “optimal” Riemannian manifold (ℳ, g) and that the GNN has access to the distance, exponential map, and logarithmic map functions as differentiable operations. While finding an optimal Riemannian manifold of variable curvature is challenging in practice, there exist methods for its approximation <cit.>. For the purposes of our analysis, we assume this approximation of (ℳ, g) is exact. To generalize the Euclidean GNNs to Riemannian manifolds, <cit.> build upon Hyperbolic Neural Networks (HNNs) <cit.>. Since there is no well-defined notion of vector space structure in Riemannian space, the main idea is to leverage the exponential and logarithmic maps to perform node feature transformation and neighborhood aggregation functions as Euclidean operations in the tangent space 𝒯_𝐩ℳ of some chosen point 𝐩∈ℳ. In particular, the node update rule is given by𝐱_i^(ℓ+1)=σ(exp _𝐩(∑_j ∈𝒩(i)𝐀̃_ij𝐖^(ℓ)log _𝐩(𝐱_j^(ℓ))))where 𝐀̃ = 𝐃^-1/2(𝐀 + 𝐈)𝐃^-1/2 is the normalized adjacency matrix with self-loops, 𝒩(i) is the set of in-neighbors of node i, 𝐖^(ℓ) is the matrix of trainable parameters at layer ℓ, and σ is a chosen activation function. Note that in the case of the Euclidean manifold, operating in the tangent space of the origin by setting 𝐩 = 𝐨 recovers a vanilla GNN. Since hyperbolic manifolds fall under the class of manifolds that have a pole 𝐨 (i.e., exp_𝐨: 𝒯_𝐨ℳ→ℳ is a diffeomorphism <cit.>), <cit.> choose 𝐩 = 𝐨 across all nodes and layers for HGNNs. However, general Riemannian manifolds do not have a pole, so we let 𝐩=p(i, ℓ )∈ℳ for an arbitrary function p that depends on the current node and/or the layer ℓ. We leave the selection of an optimal function p as future work. 
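For concreteness, the update rule above reduces to the HGNN special case when the manifold is the Poincaré ball and 𝐩 is fixed at the origin across all nodes and layers. The PyTorch sketch below implements only that tractable special case; the row-normalised adjacency stands in for the symmetric normalisation in the text, and the general variable-curvature model is not reproduced.

import torch

def exp0(v: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    # Exponential map at the origin of the Poincare ball (curvature -1).
    norm = v.norm(dim=-1, keepdim=True).clamp_min(eps)
    return torch.tanh(norm) * v / norm

def log0(x: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    # Logarithmic map at the origin of the Poincare ball (curvature -1).
    norm = x.norm(dim=-1, keepdim=True).clamp_min(eps)
    return torch.atanh(norm.clamp_max(1 - eps)) * x / norm

def rgnn_layer(X, A_norm, W, sigma=torch.relu):
    # One update with p fixed at the origin: map node features to the tangent
    # space, aggregate and transform there, then map back to the manifold.
    tangent = log0(X)
    aggregated = A_norm @ tangent @ W
    return sigma(exp0(aggregated))

N, d = 5, 8
X = exp0(0.1 * torch.randn(N, d))               # node features on the ball
A = torch.eye(N) + torch.rand(N, N).round()     # adjacency with self-loops
A_norm = A / A.sum(dim=1, keepdim=True)         # simple row normalisation
print(rgnn_layer(X, A_norm, torch.randn(d, d)).shape)   # torch.Size([5, 8])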
We also ensure that the exponential and logarithmic maps are differentiable by restricting ∑_j ∈𝒩(i)𝐀̃_ij𝐖^(ℓ)log _𝐩(𝐱_j^(ℓ))_2 to fall within the injectivity radius of 𝐩.§ SENSITIVITY ANALYSISFollowing the methodology in <cit.>, we assess the over-squashing effect in RGNNs by deriving a bound on the norm of the Jacobian of node features after ℓ layers.Since this involves bounding the differentials of the exponential and logarithmic maps, we first derive the following lemma. Consider a RGNN as in equation (<ref>) with Riemannian manifold (ℳ, g) with bounded sectional curvature k ≤κ_𝐩(𝐮, 𝐯) ≤ K for all 𝐩∈ℳ and 𝐮,𝐯∈𝒯_𝐩ℳ. Let Df denote the differential of a map f. Then for exp_𝐩 and log_𝐩 in (<ref>) and i ∈ V we have D exp_𝐩_2D log_𝐩_2 ≤sinh(√(-k)r_i, exp)/√(-k)r_i, expk < K ≤ 0 sinh(√(-k)r_i, exp)sin(√(K)r_i, log)/√(-kK)r_i, expr_j, logk < 0 < K sin(√(K)r_i, log)/√(K)r_i, log0 ≤ k < K1 k = K = 0=: β_i(k,K)where r_i, exp = sup_ℓ∑_z ∈𝒩(i)𝐀̃_iz𝐖^(ℓ)log _𝐩(𝐱_z^(ℓ)) _2 denotes the maximum radius around 𝐩 for the exponential mapand r_i, log = sup_z, ℓ𝐱_z^(ℓ)_g is the maximum radius for the logarithmic map. The proof for the above lemma relies on a well-known sectional curvature comparison result in differential geometry and can be found in Appendix <ref>. We use this lemma to derive a bound on the sensitivity of node features. Under the same assumptions as in Lemma <ref>, if c_σ is the Lipschitz constant of the nonlinearity σ and w ≥𝐖^(l)_2 is an upper bound on the spectral norm of all weight matrices,then for i,j ∈ V∂𝐱_i^(ℓ)/∂𝐱_j^(0)_2 ≤ c_σ^ℓ w^ℓβ_i(k,K)^ℓ(𝐀̃^ℓ)_ijwhere β_i(k,K) is a bound on the sensitivity of the exponential and logarithmic maps as defined in Lemma <ref>.The proof uses induction over the number of layers ℓ and is provided in Appendix <ref>. Note that this bound has the same form as in <cit.> for classical GNNs, and in fact is equivalent for Euclidean space (i.e., k=K=0). To show that the RGNN is able to compensate for the information bottlenecks arising from taking powers of the adjacency matrix, it remains to demonstrate that the growth (decay) of β_i(k,K)^ℓ is able to mitigate the decay (growth) of (𝐀̃^ℓ)_ij as ℓ increases. In Appendix <ref>, we demonstrate that this property holds for the pathological example of negative curvature mentioned in <cit.>. While a formal analysis of the variable curvature case is left as future work, we provide a heuristic argument based on the magnitude of k and K. Assume that |r_i, exp| and|r_i, log| do not grow very small or large as ℓ increases. If k <0 and |k| << |K|, β_i(k,K) is dominated by the term sinh(√(-k)r_i, exp)/√(-k)r_i, exp which increases as k grows more negative. Therefore, β_i(k,K)^ℓ grows large as ℓ increases and thus helps to alleviate over-squashing. On the other hand, if K > 0 and |K| >> |k|, β_i(k,K) is dominated by the term sin(√(k)r_i, log)/√(k)r_i, log which decreases (albeit non-monotonically) as k grows more positive. Then β_i(k,K)^ℓ grows small as ℓ increases and instead hinders the flow of information from j to i. This behavior is not problematic since graphs with positive curvature (corresponding to cycles) would have already exchanged overlapping information in the earlier layers. However, an issue may arise in the case when k < 0 < K and |k| << |K| for which β_i(k,K)^ℓ grows small despite the existence of very negatively curved sections of the graph. This argument highlights a limitation of the result in Theorem <ref> in that the bound only depends on global sectional curvature bounds k and K. 
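The case analysis in the lemma can be evaluated numerically. The small Python helper below tabulates β_i(k,K) for a few curvature values, with the radii held fixed and assumed to stay within the injectivity bounds; it is a convenience for building intuition, not part of the proof.

import math

def beta(k: float, K: float, r_exp: float, r_log: float) -> float:
    # Evaluate the curvature-dependent factor beta_i(k, K) from the lemma.
    sinh_term = math.sinh(math.sqrt(-k) * r_exp) / (math.sqrt(-k) * r_exp) if k < 0 else 1.0
    sin_term = math.sin(math.sqrt(K) * r_log) / (math.sqrt(K) * r_log) if K > 0 else 1.0
    return sinh_term * sin_term

# The factor grows as the minimum curvature k becomes more negative ...
print([round(beta(k, 0.0, 1.0, 1.0), 3) for k in (-0.1, -1.0, -4.0)])
# ... and shrinks as the maximum positive curvature K increases.
print([round(beta(0.0, K, 1.0, 1.0), 3) for K in (0.5, 2.0, 8.0)])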
Therefore, β_i(k,K) does not target the sensitivity of specific node pairs induced by (𝐀̃^ℓ)_ij. Note that if we let 𝐩=p(i, ℓ, 𝐱_i^(ℓ))∈ℳ be a function of the current node feature, the neighboring feature aggregation would intuitively depend on the local curvature at 𝐱_i^(ℓ)∈ℳ. However, this would significantly increase the complexity of the Riemannian GNN model and hence the Jacobian sensitivity derivation.

§ EMPIRICAL RESULTS

Given that the special case of Hyperbolic GNNs is well-defined and computationally tractable, we compare the empirical sensitivity of node features in Hyperbolic Graph Convolutional Networks (HGCNs) <cit.> to Euclidean GCNs. We use the link prediction benchmark datasets (as well as the model hyperparameters) provided in <cit.>: citation networks (Cora <cit.> and PubMed <cit.>), disease propagation trees (Disease), and flight networks (Airport). The Gromov δ-hyperbolicity value of each dataset is reported in Figure 1, where lower δ is more hyperbolic. Since over-squashing is more severe for deeper GNNs, we evaluate GCNs and HGCNs (specifically the Poincaré model) of depth 6. We then consider 100 randomly sampled pairs of nodes that are distance 6 apart and take the average of the norm of their Jacobians, 1/100∑_(i, j)∂𝐱_i^(6)/∂𝐱_j^(0)_2. As shown in Figure 1, for three of the four datasets, both the average and maximum sensitivity in the sample are greater in HGCNs than in GCNs at each epoch. For PubMed, while the average sensitivities are roughly equal, the maximum is still always greater for HGCNs, which is consistent with our upper bound in Theorem 1. The results hold even for Cora, which has a higher hyperbolicity value. This suggests that hyperbolic embeddings may be sufficient for alleviating over-squashing even in non-hyperbolic graphs, as the distortion of positively curved regions could be compensated for by the increased sensitivity between node pairs in those regions. We limit our empirical analysis to the special case of hyperbolic manifolds since the implementation of Riemannian GNNs as defined in (<ref>) is not immediately feasible. First of all, it is not obvious how the reference point 𝐩 should be defined at any given node. Moreover, our analysis assumes that we are given an optimal manifold in which the GNN should embed the graph. As described in Appendix <ref>, it is not trivial to obtain the exact manifold for heterogeneous embedding spaces. However, there exist several methods for approximating these manifolds <cit.>, many of which have desirable properties such as well-defined origin points for 𝐩. We leave an empirical study of over-squashing in RGNNs built on these approximations as future work.

§ DISCUSSION

We derive a bound on the Jacobian of node features in a Riemannian GNN. The bound contains a global curvature-dependent term β_i(k,K) that grows exponentially with the number of layers ℓ when the embedding space has a minimum sectional curvature which is very negative and decays exponentially when the space has very positive maximum curvature. Since information bottlenecks have been linked to negative curvature on graphs, the exponential growth when k<0 is a promising result for mitigating over-squashing. Despite the heuristic argument provided in section 3 and promising empirical results for Hyperbolic GNNs in section 4, we do not formally prove that β_i(k,K) compensates for the exponential decay of (𝐀̃^ℓ)_ij as ℓ increases without hindering overall model performance.
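As a rough numerical illustration of this interplay (not a substitute for a formal analysis), the case analysis of Lemma <ref> can be transcribed directly and multiplied against the kind of adjacency decay used in the binary-tree example of Appendix <ref>; the radii r_exp = r_log = 1 below are arbitrary assumptions.

```python
import numpy as np

def beta(k, K, r_exp=1.0, r_log=1.0):
    # Direct transcription of the case analysis in Lemma 1 (bound on ||D exp||*||D log||)
    if k < K <= 0:
        return np.sinh(np.sqrt(-k) * r_exp) / (np.sqrt(-k) * r_exp)
    if k < 0 < K:
        return (np.sinh(np.sqrt(-k) * r_exp) * np.sin(np.sqrt(K) * r_log)
                / (np.sqrt(-k * K) * r_exp * r_log))
    if 0 <= k < K:
        return np.sin(np.sqrt(K) * r_log) / (np.sqrt(K) * r_log)
    return 1.0  # k == K == 0

# beta^l against the binary-tree adjacency decay (A~^l)_ij = 2^-1 3^-l from the appendix
for k, K in [(-1.0, -0.5), (-1.0, 1.0), (0.0, 1.0)]:
    for l in (2, 4, 8):
        print(f"k={k:+.1f}, K={K:+.1f}, l={l}: "
              f"beta^l * (A^l)_ij = {beta(k, K) ** l * 0.5 * 3.0 ** (-l):.3e}")
```

With these particular radii the negative-curvature factor only partially offsets the 3^-l decay; a more negative k or a larger exponential-map radius strengthens the effect, while a large positive K suppresses it, in line with the heuristic above.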
One potential approach to deriving the relationship between the two terms could involve connecting the β_i(k,K) term to edge-based Ricci curvature and utilizing the results in <cit.>. Using the intuition that the Ricci curvature can be considered as an “average” over sectional curvatures, it may be possible to define a notion of sectional curvature on a graph (e.g. the one proposed by <cit.>) such that the Balanced Forman curvature in <cit.> is an average of curvatures assigned to triangles of nodes. This connection may allow one to quantify how (𝐀̃^ℓ)_ij is affected by both local and global sectional curvature. Additionally, due to the Riemannian GNN's dependence on global curvature properties, the model may end up in a pathological scenario when the decay in sensitivity from maximum positive curvature outweighs the growth from the minimum negative curvature. This may call for the introduction of local curvature information into the architecture such that the neighbor aggregation at node i explicitly depends on the curvature near i. It may also be possible to localize the sensitivity bounds by constraining the manifold to have locally bounded sectional curvature everywhere. Finally, while the Riemannian GNN is useful for the theoretical over-squashing analysis, implementing the proposed architecture comes with several challenges. It would be exciting to see the development of models that can more closely approximate Riemannian GNNs while maintaining tractability. For instance, it may be possible to apply the deep Riemannian manifold learning in <cit.> such that the optimal manifold (ℳ, g) is parameterized as a neural network itself. We hope that the insights gained from our theoretical results will inspire future work in the development of practical architectures that leverage these findings.

§ RIEMANNIAN GEOMETRY

We first introduce some preliminary notation and concepts in Riemannian geometry. We refer the reader to <cit.> for a more detailed discussion of these concepts. A Riemannian manifold (ℳ, g) is a smooth manifold equipped with a Riemannian metric g_𝐱: 𝒯_𝐱ℳ×𝒯_𝐱ℳ→ℝ where 𝒯_𝐱ℳ is the tangent space at the point 𝐱∈ℳ. The Riemannian metric is a local inner product that varies smoothly with 𝐱 and allows us to define the geometric properties of a space such as length, angle, and area. For instance, g induces a norm 𝐯_g = √(g_𝐱(𝐯, 𝐯)) for any 𝐯 ∈𝒯_𝐱ℳ.

§.§ Geodesics

The Riemannian metric also gives rise to a notion of distance. For a curve γ: [0, T] →ℳ, the length of γ is given by L(γ) = ∫_0^T γ'(t) _g dt. Thus, for two points 𝐱, 𝐲∈ℳ, the distance is defined as d_g(𝐱, 𝐲) = inf L(γ) where γ is any curve such that γ(0) = 𝐱 and γ(T) = 𝐲. A geodesic is a curve that minimizes this length.

§.§ Exponential and Logarithmic Map

For each point 𝐱∈ℳ and velocity vector 𝐯∈𝒯_𝐱ℳ, there exists a unique geodesic γ: [0,1] →ℳ where γ(0)=𝐱 and γ'(0)=𝐯. The exponential map exp_𝐱: 𝒯_𝐱ℳ→ℳ is defined as exp_𝐱(𝐯) = γ(1). Its local inverse is called the logarithmic map, log_𝐱(𝐯). Note that the distance between two points 𝐱, 𝐲∈ℳ can be represented as d_g(𝐱, 𝐲) = log_𝐱(𝐲)_g. Manifolds where the exponential map is defined on the whole tangent space 𝒯_𝐱ℳ are called geodesically complete. However, geodesic completeness does not guarantee that the exponential map is a global diffeomorphism (i.e. a differentiable bijective map with a differentiable inverse).
The radius of the largest ball about the origin in 𝒯_𝐱ℳ that can be mapped diffeomorphically via the exponential map is called the injectivity radius of ℳ at 𝐱.§.§ Curvature For each point 𝐱∈ℳ and pair of linearly independent tangent vectors 𝐮, 𝐯∈𝒯_𝐱ℳ, the sectional curvature κ_𝐱(𝐮, 𝐯) at 𝐱 is defined as the Gaussian curvature of thetwo-dimensional surface obtained by exponentiating a plane spanned by 𝐮 and 𝐯 at 𝐱. The Gaussian curvature of a surface is given by the product of the principal curvatures. Riemannian manifolds of constant sectional curvature κ are called space forms, the most common examples being spherical space (κ > 0), Euclidean space (κ = 0), and hyperbolic space (κ < 0). Another form of curvature on a Riemannian manifold is Ricci curvature, which is a symmetric bilinear form determining the geodesic dispersion at nearby points. The Ricci curvature of a tangent vector 𝐯 at 𝐩 is the average of the sectional curvature over all tangent planes containing 𝐯. Several works have also introduced discrete notions of sectional and Ricci curvature on graphs. <cit.> introduced a discrete notion of sectional curvature for learning product manifolds of mixed curvatures for graph embeddings. <cit.> and <cit.> proposed edge-based curvature that could recover certain properties of the Ricci curvature on manifolds. <cit.> used a novel formulation of Ricci curvature to show that over-squashing in GNNs is related to the existence of edges with high negative curvature. §.§ Riemannian Manifolds for Graph EmbeddingsThere has been a surge in the development of algorithms that represent graphs as sets of node embeddings in hyperbolic and spherical space due to their favorable geometric inductive biases <cit.>. These space forms are well defined and offer closed-form expressions for geometric operations such as the exponential and logarithmic map, making them suitable for optimization in these spaces. However, space forms individually may not capture all of the geometric properties of a given graph. On the other hand, heterogeneous manifolds of variable curvature lack computational tractability. Several works have instead embedded graphs in manifolds of mixed curvature by taking Cartesian products of homogenous model spaces <cit.>, adding heterogeneous dimensions to homogenous spaces <cit.>, or limiting the embedding space to certain classes of manifolds <cit.>. An exciting direction for learnable Riemannian manifolds has been proposed by <cit.>, where the metric is parametrized by a deep neural network.§ EXAMPLE: SENSITIVITY FOR A BINARY TREE IN HYPERBOLIC SPACESuppose that nodes i and j are distance ℓ + 1 apart and that the receptive field of node i is a binary tree in a RGNN given a manifold with constant negative sectional curvature k<0 (i.e. a Hyperbolic GNN). Then (𝐀̃^ℓ)_ij = 2^-13^-ℓ and, by Theorem <ref>, β_i(k,k)^ℓ = (sinh(√(-k)r_i, exp)/√(-k)r_i, exp)^ℓTherefore, β_i(k,k)^ℓ > (𝐀̃^ℓ)_ij when(sinh(√(-k)r_i, exp)/√(-k)r_i, exp)^ℓ > 1/3^ℓ > 1/2·3^ℓ sinh(√(-k)r_i, exp)/√(-k)r_i, exp > 1/3.This example suggests that over-squashing is indeed less severe in HGNNs on graphs exhibiting negative curvature. § PROOF OF LEMMA <REF>We first note a comparison lemma from chapter 6.2 in <cit.> that yields bounds on the differential of the exponential and logarithmic maps.Assume that (ℳ, g) satisfies k ≤ K_𝐱(𝐮, 𝐯) ≤ K for all 𝐱∈ℳ and 𝐮,𝐯∈𝒯_𝐱ℳ. Let Df denote the differential of a map f. 
Then for the exponential and logarithmic map at 𝐱 and for a radius r around 𝐱 we have D exp_𝐱_2 ≤max{1, sn_k(r)/r}, D log_𝐱_2 ≤min{1, sn_K(r)/r}where sn_κ(·) is the generalized sine function given sectional curvature κsn_κ(r):= sin (√(κ) r)/√(κ)if κ>0 r if κ=0sinh (√(-κ) r)/√(-κ)if κ<0.We use the above lemma to derive a bound on the product of norms of the exponential and logarithmic maps in equation (<ref>) as stated in Lemma <ref>. Let r_j, exp = sup_ℓ∑_z ∈𝒩(j)𝐀̃_jz𝐖^(ℓ)log _𝐩(𝐱_z^(ℓ)) _2 denote the maximum radius around 𝐱 for the exponential map and r_j, log = sup_ℓ𝐱_z^(ℓ)_g denote the maximum radius for the logarithmic map given equation (<ref>). Applying Lemma <ref>, there are three possible cases for the bounds k and K:Case 1: k < K ≤ 0. We then haveD exp_𝐩_2D log_𝐩_2≤max{1, sinh(√(-k)r_j, exp)/√(-k)r_j, exp}·max_z ∈𝒩(j)min{1, sinh(√(-K)r_j, log)/√(-K)r_j, log}.Since sinh(x)/x > 1 for all x ≠ 0, we obtain the boundD exp_𝐩_2D log_𝐩_2≤sinh(√(-k)r_j, exp)/√(-k)r_j, exp.Case 2: k < 0 < K. We then haveD exp_𝐩_2D log_𝐩_2≤sinh(√(-k)r_j, exp)/√(-k)r_j, exp·max_z ∈𝒩(j)min{1, sin(√(K)r_j, log)/√(K)r_j, log}.Since sin(x)/x < 1 for all x ≠ 0, we obtain the boundD exp_𝐩_2D log_𝐩_2 ≤sinh(√(-k)r_j, exp)/√(-k)r_j, exp·max_z ∈𝒩(j)sin(√(K)r_j, log)/√(K)r_j, log.Case 3: 0 ≤ k < K. We then haveD exp_𝐩_2D log_𝐩_2 ≤max{1, sin(√(k)r_j, exp)/√(k)r_j, exp}·max_z ∈𝒩(j)min{1, sin(√(K)r_j, log)/√(K)r_j, log}= max_z ∈𝒩(j)sin(√(K)r_j, log)/√(K)r_j, log. Case 4: 0 = k = K. Then we have D exp_𝐩_2D log_𝐩_2 ≤max{1, r_j, exp/r_j, exp}·max_z ∈𝒩(j)min{1, r_j, log/r_j, log} = 1.Combining all of the cases above, we obtain the boundD exp_𝐩_2D log_𝐩 ≤sinh(√(-k)r_j, exp)/√(-k)r_j, expk < K ≤ 0 sinh(√(-k)r_j, exp)/√(-k)r_j, exp·max_z ∈𝒩(j)sin(√(K)r_j, log)/√(K)r_j, logk < 0 < K max_z ∈𝒩(j)sin(√(K)r_j, log)/√(K)r_j, log0 ≤ k < K1 k = K = 0 = β_j(k,K).§ PROOF OF THEOREM <REF> We prove the bound by induction on the number of layers ℓ. For the base case of ℓ=1, we have∂𝐱_i^(1)/∂𝐱_j^(0)_2= ∂/∂𝐱_j^(0)[σ(exp _𝐩(∑_z ∈𝒩(i)𝐀̃_iz𝐖^(0)log _𝐩(𝐱_z^(0)))) ]_2 ≤ c_σD exp_𝐩_2𝐖^(0)_2 D log_𝐩_2 ∑_z ∈𝒩(i)𝐀̃_iz∂𝐱_z^(0)/∂𝐱_j^(0)_2≤ c_σ w D exp_𝐩_2D log_𝐩_2𝐀̃_ij∂𝐱_j^(0)/∂𝐱_j^(0)_2= c_σ w 𝐀̃_ijD exp_𝐩_2D log_𝐩_2.If we let β_i(k, K) be the bound on D exp_𝐩_2D log_𝐩 defined in Lemma <ref>,the norm of the Jacobian in the base case (i.e. ℓ=1) is bounded by∂𝐱_i^(1)/∂𝐱_j^(0)_2 ≤ c_σ w β_i(k, K) 𝐀̃_ij.We now assume the bound to be satisfied for ℓ layers and use induction to show that it holds for ℓ+1.∂𝐱_i^(ℓ+1)/∂𝐱_j^(0)_2= ∂/∂𝐱_j^(0)[σ(exp_𝐩(∑_z ∈𝒩(i)𝐀̃_iz𝐖^(ℓ)log _𝐩(𝐱_z^(ℓ)))) ]_2 ≤ c_σ wD exp_𝐩_2 D log_𝐩_2 ∑_z ∈𝒩(i)𝐀̃_iz∂𝐱_z^(ℓ)/∂𝐱_j^(0)_2≤ c_σ wβ_i(k,K) ∑_z ∈𝒩(i)𝐀̃_iz[c_σ^ℓ w^ℓβ_i(k, K)^ℓ(𝐀̃^ℓ)_zj]= c_σ^ℓ+1 w^ℓ+1β_i(k,K)^ℓ+1∑_z ∈𝒩(i)𝐀̃_iz(𝐀̃^ℓ)_zj= c_σ^ℓ+1 w^ℓ+1β_i(k,K)^ℓ+1(𝐀̃^ℓ+1)_ij.
http://arxiv.org/abs/2311.15945v1
{ "authors": [ "Julia Balla" ], "categories": [ "cs.LG", "stat.ML" ], "primary_category": "cs.LG", "published": "20231127155107", "title": "Over-Squashing in Riemannian Graph Neural Networks" }
Characterizing Video Question Answering with Sparsified Inputs
Shiyuan Huang^1, Robinson Piramuthu^2, Vicente Ordonez^2,3, Shih-Fu Chang^1, Gunnar A. Sigurdsson^2
^1 Columbia University, ^2 Amazon Alexa AI, ^3 Rice University
January 14, 2024

In Video Question Answering, videos are often processed as a full-length sequence of frames to ensure minimal loss of information. Recent works have demonstrated evidence that sparse video inputs are sufficient to maintain high performance. However, they usually discuss the case of single-frame selection. In our work, we extend the setting to multiple numbers of inputs and to other modalities. We characterize the task with different input sparsity and provide a tool for doing that. Specifically, we use a Gumbel-based learnable selection module to adaptively select the best inputs for the final task. In this way, we experiment over public VideoQA benchmarks and provide analysis on how sparsified inputs affect the performance. From our experiments, we have observed only 5.2%-5.8% loss of performance with only 10% of video lengths, which corresponds to 2-4 frames selected from each video. Meanwhile, we also observed the complementary behaviour between visual and textual inputs, even under highly sparsified settings, suggesting the potential of improving data efficiency for video-and-language tasks.

§ CONCLUSION

In this work, we characterize video question answering from the perspective of sparsified inputs. We propose to use a learnable selection module to adaptively select the best and most representative inputs. This allows us to get multi-length input on different types of modalities. In our experiments, we analyze the current Video Question Answering benchmarks, where we observe fair performance with a small input budget. We also observe complementary performance under a multi-modal setting. We believe our work meaningfully shows the potential of improving data efficiency under various video representation types.
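As a rough illustration of what a Gumbel-based learnable selection module can look like (the paper's body sections are not included in this extract, so the temperature, scoring head, and number of selections below are assumptions, not the authors' implementation), the frame-selection step might be sketched as follows.

```python
import numpy as np

def gumbel_softmax(logits, tau=0.5, rng=np.random.default_rng(0)):
    # Relaxed one-hot sample over candidates; the straight-through hard version is omitted
    g = -np.log(-np.log(rng.uniform(size=logits.shape) + 1e-20) + 1e-20)
    y = (logits + g) / tau
    y = np.exp(y - y.max(axis=-1, keepdims=True))
    return y / y.sum(axis=-1, keepdims=True)

# score T candidate frames and softly pick K of them as the sparsified visual input
T, K, d = 32, 4, 16
rng = np.random.default_rng(1)
frames = rng.standard_normal((T, d))          # per-frame features
scores = rng.standard_normal((K, T))          # assumed output of a small scoring head
weights = gumbel_softmax(scores, tau=0.5)     # (K, T) soft selection matrix
selected = weights @ frames                   # (K, d) selected, sparsified inputs
```

At low temperature the rows of weights approach one-hot vectors, so the downstream model effectively sees only K of the T frames while the selection remains differentiable end to end.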
http://arxiv.org/abs/2311.16311v1
{ "authors": [ "Shiyuan Huang", "Robinson Piramuthu", "Vicente Ordonez", "Shih-Fu Chang", "Gunnar A. Sigurdsson" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231127210020", "title": "Characterizing Video Question Answering with Sparsified Inputs" }
Attacking at non-harmonic frequencies in screaming-channel attacks
Jeremy Guillaume, Maxime Pelcat, Amor Nafkha, Rubén Salvador
CentraleSupélec, IETR UMR CNRS 6164, France; Univ Rennes, INSA Rennes, CNRS, IETR - UMR 6164, F-35000 Rennes, France; CentraleSupélec, Inria, Univ Rennes, CNRS, IRISA, France
January 14, 2024

Screaming-channel attacks enable em sca at larger distances due to higher em leakage energies than traditional SCAs, relaxing the requirement of close access to the victim. This attack can be mounted on devices integrating rf modules on the same die as digital circuits, where the rf can unintentionally capture, modulate, amplify, and transmit the leakage along with legitimate signals. Leakage results from digital switching activity, so the hypothesis of previous works was that this leakage would appear at multiples of the digital clock frequency, i.e., harmonics. This work demonstrates that compromising signals appear not only at the harmonics and that leakage at non-harmonics can be exploited for successful attacks. Indeed, the transformations undergone by the leaked signal are complex due to propagation effects through the substrate and power and ground planes, so the leakage also appears at other frequencies. We first propose two methodologies to locate frequencies that contain leakage and demonstrate that it appears at non-harmonic frequencies. Then, our experimental results show that screaming-channel attacks at non-harmonic frequencies can be as successful as at harmonics when retrieving a 16-byte AES key. As the rf spectrum is polluted by interfering signals, we run experiments and show successful attacks in a more realistic, noisy environment where harmonic frequencies are contaminated by multi-path fading and interference. These attacks at non-harmonic frequencies increase the attack surface by providing attackers with an increased number of potential frequencies where attacks can succeed.

§ INTRODUCTION

sca <cit.> allow retrieving confidential information from computing devices by exploiting the correlation of internal data with the leakage produced while computing over these data. The term side channel is therefore used to refer to physical leakage signals carrying confidential information. Side channels are general to CMOS computing devices and can take many forms, from runtime variations of system power consumption <cit.> to em emanations <cit.>. Screaming channels are a specific form of em side channel that occurs on mixed-signal devices, where an rf module is co-located on the same die as digital modules. In this context, the leakage of the digital part reaches the rf module, which can transmit it over a distance of several meters. This phenomenon allows attackers to mount side-channel attacks at distances from the victim. The seminal work of Camurati et al. <cit.> demonstrated how screaming-channel attacks can succeed at distances of up to 15 meters. Leakage is generated by the switching activity of the transistors from the digital part of the victim system, which operates at a clock frequency F_clk. When observed on a spectrum analyzer, the leakage power spectral density is shaped as peaks at the harmonics of F_clk (i.e.
n× F_clk where n ∈ℤ).What makes screaming-channel attacks different from other sca is that the harmonics, after being modulated by the rf module, are visible around the carrier frequency F_RF of the legitimate rf signal (Section <ref>).A limitation of this attack is that the harmonics of F_clk can be modulated at the same frequency as some interfering signals, such as WiFi signals. Since these interfering signals are transmitted voluntarily, they are stronger than the leakage signal, which, as a result, can be easily polluted and hence quickly become non-exploitable.To overcome this limitation and further study the risk posed by screaming channels, this paper studies the attack's feasibility when capturing signals at frequencies other than the harmonics of the digital processing clock. Specifically, we seek to answer the following question: is exploitable leakage also present at frequencies other than the harmonics?If this question is answered positively, attackers can have an extensive choice of potential frequencies to select from and find one that is not polluted by environmental noise during the attack.Such a property can also be an enabler to effectively extend the framework of multi-channel attacks <cit.>, which attack by combining different side-channel sources, and combine different frequencies in the context of modulated leakage signals.To summarize, we investigate the presence of leakage over the spectrum at non-harmonic frequencies and demonstrate that this leakage can be used to build successful attacks. We propose the following contributions: * Two methodologies to search for exploitable leakage over the spectrum. The first is based on a fixed vs. fixed t-test <cit.>, and the second is an original contribution based on vt <cit.>.* With these methodologies, we demonstrate that leakage in screaming-channel attacks is not only present at harmonic frequencies, as explored in previous works <cit.>, but it is also spread over a large share of the near-carrier spectrum. * We compare bothmethods and demonstrate a significant reduction in the exploration time when looking for exploitable frequencies with the second methodology based on pattern detection.* We evaluate the effectiveness of attacks at non-harmonic frequencies in a noiseless environment and show how this effectiveness can sometimes be higher at non-harmonics. * We demonstrate successful attacks in more realistic scenarios. We apply the proposed methodologies and the insights learned when attacking at non-harmonics and build attacks in a context where most harmonics are polluted by other standard signals typically found in the spectrum.The rest of this paper is organized as follows:section <ref> introduces related and previous works on screaming-channel attacks.The attack scenario of our work and the setup are described in section <ref>.The two methods we propose to search for leakage over the spectrum are presented in section <ref>. Afterward, in section <ref>, we demonstrate the attack feasibility by exploiting the leakages found at non-harmonic frequencies. Section <ref> demonstrates the attack in a more challenging, and therefore more realistic, scenario.Lastly, section <ref> concludes the paper. § RELATED WORKS§.§ Side-channel attacksBy capturing the leakage generated by computing devices, attackers can mount sca to jeopardize confidentiality and recover internal secret data.The most common methods used to build sca aredpa<cit.>,cpa<cit.>, mia<cit.>,ta<cit.>and more recently dl <cit.>. 
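For readers unfamiliar with these techniques, the core of a cpa-style attack on one key byte can be sketched in a few lines. This is a generic, textbook-style illustration (Hamming-weight leakage model, S-box supplied by the caller), not the profiled variant used later in this paper.

```python
import numpy as np

HW = np.array([bin(x).count("1") for x in range(256)])   # Hamming-weight leakage model

def cpa_byte(traces, plaintexts, sbox):
    # traces: (n_traces, n_samples) array; plaintexts: integer array of the targeted byte;
    # correlate HW(sbox[p ^ k_guess]) against every trace sample, for all 256 guesses
    scores = np.zeros(256)
    t = traces - traces.mean(axis=0)
    for guess in range(256):
        model = HW[sbox[plaintexts ^ guess]].astype(float)
        m = model - model.mean()
        corr = (m @ t) / (np.linalg.norm(m) * np.linalg.norm(t, axis=0) + 1e-12)
        scores[guess] = np.max(np.abs(corr))
    return int(np.argmax(scores)), scores
```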
The steep peaks of the signals resulting from the switching activity of transistors in digital devices produce leakage signals whose variations correlate with that switching activity and, therefore, with the data over which the processor computes. That leakage signal is then most often found in the em emanations <cit.> or the power consumption variations <cit.> of the victim device.The leakage signal of interest is generated by the digital part of the system from the switching activity of the transistors occurring when data is being computed or moving through the chip.In a synchronous device, transistors switch at a pace much influenced by a unique digital clock signal. The leakage resulting from successions of transitions and non-transitions is originally modulated by this clock, which acts as an oscillator <cit.>. Since the clock is a square wave signal, its Fourier transform corresponds to peaks located at the harmonics H(f) of its main oscillating frequency F_clk, as formalized in Equation (<ref>), with A_n being the respective amplitudes of each harmonic. H(f) = Σ_n=-∞^∞ A_nδ(f - nF_clk) Theoretically, if the clock signal was a perfect square with a duty cycle of 50%, its frequency spectrum would have peaks only at the odd harmonics. However, in practice, the duty cycle is never perfect, and peaks also appear at even harmonics <cit.> as illustrated in Fig. <ref>.This is why the frequency spectrum of conventional side-channels SC, being the digital noise N(f) modulated by the clock signal, has a peak of energy at each harmonic of the clock frequency as formalized in Equation (<ref>).SC(f) =N(f) ∗ H(f) =Σ_n=-∞^∞ N(f) × A_nδ(f - nF_clk) A limitation of traditional sca to capture clean leakage signals is that attackers must be in very close proximity to the victim device for a successful attack, usually only a few millimeters away. There are, however, some specific scenarios that allow an attacker to take distance from the device <cit.>. One scenario for gaining distance from the device, which is the focus of this paper, is the so-called screaming-channel attack <cit.>. In this attack, the leakage is transmitted by an rf module that sits beside the digital part of a mixed-signal chip on the same die, allowing the attacker to capture it at a distance of several meters.§.§ Screaming-channel attacks Mixed-signal devices are heterogeneous platforms with digital and analog modules integrated into the same die. One of these analog modules can be the rf chain needed to build a soc with radio communications capabilities. This tight integration has the advantage of reducing the power consumption or the transmission delay between the digital and rf modules, as well as the cost of the final device.However, the very nature of this type of device has already been proven to be a hardware vulnerability <cit.>. Fig. <ref> illustrates a mixed-signal device. Compared to regular side-channels in digital devices, the leakage resulting from the switching activity in mixed-signal systems can travelthrough the substrate by the so-called substrate coupling effect <cit.>. 
This way, leakage signals can reach the radio transceivers of the rf part, which is very sensitive to noise and hence prone to capture these slight variations carrying information that correlates with secret data.The leakage from the digital part, which is the one that would be used to build a traditional sca is, as expressed in Equation (<ref>), modulated at the frequency of the legitimate rf signal F_RF, amplified and then transmitted by the rf module through the antenna. As a result, this amplification can bring, i.e., scream, the leakage signal at distances of several meters. In Camurati et al. use case <cit.>, the transmitted Bluetooth signal is centered at 2.4GHz, and the device clock frequency is 64MHz.The leakage harmonics are therefore at 2.4GHz + multiples of 64GHz, i.e., 2.464GHz, 2.528GHz, 2.592GHz, etc. ScreamC(f) =SC(f) ∗ F_RF =Σ_n=-∞^∞ N(f) × A_nδ(f - nF_clk - F_RF)In their seminal work, Camurati et al. <cit.> demonstrated that an attack using this modulated leakage is possible. Authors also conducted further works to understand better the properties of the leakage in this context <cit.>. In this original use case, the device transmits a Bluetooth signal from a commodity low-power soc while the digital part (an Arm Cortex-M4 microcontroller) executes AES encryptions. The authors reported performing the screaming-channel attack at up to 15 meters and observed some leakage at up to 60 meters.The limitation of this scenario is that the leakage carries low energy compared to other legitimate signals potentially present at the harmonic frequencies.As a result, attacks can be very difficult to conduct at those frequencies when they are polluted. This sets a high uncertainty on the feasibility of the attackin a noisy environment where all harmonics would be polluted. The following sections demonstrate that attackers do not have to limit themselves to attack at a very limited set of harmonic frequencies. On the contrary, a wide spectrum is at their disposal to compromise the system, which increases the threat that screaming-channel attacks represent. To the best of our knowledge, this is the first work to demonstrate that the attack is possible at non-harmonics. All previous works on screaming-channel attacks <cit.> used one harmonic (the second harmonic at 2.528 GHz) to perform the attack.§ THE ATTACK SCENARIO AND SETUP This paper considers a scenario where the victim is a mixed-signal device that executes a cp while transmitting an rf signal in parallel. Since the leakage is emitted by the rf module, the attacker can capture it with a sdr device. The attacker's goal is to recover the secret key used by the cp from this remotely captured leakage signal. Fig. <ref> shows the attack setup used for all experiments. The victim device is an nRF52832 from Nordic instrument[<https://www.nordicsemi.com/Software-and-tools/Development-Kits/nRF52-DK.>]. It contains an Arm Cortex M4 processor and an rf module.The attacker device is a USRP N210[<https://www.ettus.com/all-products/un210-kit/>] using an SBX daughter board that can measure a signal between 400 MHz and 4.4 GHz and has a bandwidth of 40 MHz.To collect the leakage at distance, a parabolic gridantenna with a gain of 26 dBi is used. 
The computer used for all our experiments has a 4-core Intel Xeon(R) CPU E3-1226 V3 @ 3.30 GHz, and 8 GB RAM.The legitimate rf signal is a Bluetooth signal transmitted at 2.4GHz without frequency hopping[<Frequency-hopping is the repeated switching of the carrier frequency during radio transmission to reduce interference and avoid interception. In the case of Bluetooth transmissions, switching occurs among 81 channels, from 2.4GHz to 2.48GHz with 1MHz wide bands.>]. The attacked encryption algorithm is a software implementation of AES-128, whose encryption on the considered microcontroller takes 870 μ s. In the following, we describe the steps used in the experiments to collect traces.One trace corresponds to the collected leakage signal produced by one cp execution. §.§ Leakage collectionFig. <ref> shows the steps undergone by the leakage signal. First, the USRP demodulates the rf signal at a given frequency, Ftested, that potentially carries leakage.Then, the USRP samples the baseband signal at 5 MHz.The choice of this sampling frequency is based on previous works on screaming-channel attacks <cit.>,which we also use as it has so far provided sufficient resolution for successful attacks.Although a study of the impact of sampling frequency on screaming-channel attack success could be an interesting subject, it is out of the scope of this paper. §.§ Trace segmentationTo segment the obtained raw traces[A raw trace corresponds to the collected signal, sampled and quantized by the sdr.], i.e., to separate the segments corresponding to each individual AES encryption, pattern recognition is used. It consists in identifying the locations within the raw trace matching with the shape of the leakage produced by cp.The steps applied for pattern recognition, chosen empirically for their good performance on the problem at hand, are the following:* Low-pass filter the raw trace with a cutoff frequency at the sampling frequency divided by 4: 5MHz / 4 = 1.125MHz.* Compute the sliding correlation between the pattern and the filtered trace.* The peaks obtained during the sliding correlation are expected to correspond to the locations of the AES segments. Segment the raw trace by cutting it at these locations.* To reduce noise, low-pass filter the obtained segments with a cutoff frequency of 550KHz.To extract an initial pattern, we use vt <cit.> as pattern extraction technique, illustrated on Fig. <ref>. This study shows that by knowing the precise time duration L_cp between 2 cp, it is possible to segment a raw trace containing leakage produced by a series of cp executions.Averaging the obtained segments returns a representative pattern of the leakage produced by that device each time it executes a cp. The study also describes a procedure to find this precise length L_cp.§.§ Time diversityAs in previous works on screaming-channel attacks, time diversity is used to reduce noise from the leakage collection. It consists in running N encryptions with exactly the same data (plaintext and key) and averaging their leakage. Since the N encryptions are computing the same data, the leakage they produce would be very similar, and hence averaging tends to cancel the random noise contributions.While also typical in regular side-channel attacks, this is especially important in the case of screaming-channel attacks,as the leakage captured by the attacker contains additional noise due to the transmission channel. In the remaining of this paper, this reduced-noise segment is called a trace. 
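A compressed sketch of this collection pipeline (filtering, sliding-correlation segmentation, and time-diversity averaging) is given below. The peak picking is deliberately naive and the filter orders are assumptions, since the text only specifies the cutoff frequencies.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def averaged_trace(raw, pattern, seg_len, n_segments, fs=5e6):
    # 1) low-pass the raw capture (cutoff fs/4), 2) slide the known cp pattern over it,
    # 3) cut one segment per correlation peak, 4) low-pass the segments at 550 kHz,
    # 5) average the segments (time diversity) to obtain one reduced-noise trace
    b, a = butter(4, (fs / 4) / (fs / 2))
    filt = filtfilt(b, a, raw)
    corr = np.correlate(filt - filt.mean(), pattern - pattern.mean(), mode="valid")
    starts = np.sort(np.argsort(corr)[-n_segments:])      # naive peak picking
    segs = np.stack([raw[s:s + seg_len] for s in starts if s + seg_len <= len(raw)])
    b2, a2 = butter(4, 550e3 / (fs / 2))
    segs = filtfilt(b2, a2, segs, axis=1)
    return segs.mean(axis=0)
```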
The number N is set to 10 in the experiments where the leakage is collected through a cable (Sections <ref> and <ref>). It is increased to 50 for the experiments where it is collected at a distance with an antenna (Section <ref>). These values are significantly different from the 500 traces used in<cit.> when attacking at a distance. While this brings additional difficulty in running a successful attack, it allowed us to run the experiments in a more reasonable time of 6 days instead of 5 weeks. For each experiment, the number of traces collected will be noted as Nb_Traces× N_Time_Diversity. § SEARCHING FOR LEAKAGE AT NON-HARMONIC FREQUENCIESFig. <ref> shows a part of the frequency spectrum at the output of the victim board while transmitting a Bluetooth signal at 2.4 GHz. Next to the legitimate signal peak, other peaks with lower energy appear. These correspond to the first 4 harmonics of the leakage from the digital part. They are present at frequencies equal to 2.4 GHz plus multiples of 64 MHz, which is the digital clock frequency. Therefore, one would intuitively assume that the leakage from the digital part transmitted by the victim board is stronger at these harmonics and hence that it would be more challenging to perform the attack at other frequencies.For this reason, previous works on screaming-channel attacks <cit.> use harmonic frequencies for the attack. The second harmonic is typically the one used, as it is often less polluted by interfering signals, while the first harmonic is located within a frequency band used by signals like Bluetooth or WiFi. Nevertheless, it can be seen on the spectrum that some variations in energy are also present between these harmonics, suggesting that leakage could potentially be distributed continuously along the spectrum. The first question we want to answer is: is the leakage from the digital part only present at the harmonic frequencies, or does it also appear at other frequencies? To answer this question, we propose two methodologies to investigate at which frequencies the leakage exists in the spectrum. The first consists in running a t-test at each tested frequency. This test is commonly used by the side-channel community to determine whether the internal data computed by the cp has an impact on the leakage. Then we propose a second method that reduces the implementation complexity while giving similar results. It is an adaptation of the method used in <cit.> to extract the cp leakage pattern; we refer to this second method as pattern detection. It consists in analyzing whether or not a signal collected at a given frequency is good enough to extract a cp pattern; if yes, this means that leakage is present.Fig. <ref> illustrates the difference between the two methods, described in the remainder of this section.Compared with the first methodology, the second method removes the most time-consuming phase, which is the collection of (500) synchronized traces needed for the t-test.§.§ Leakage localization using t-test method In this first investigation, a fixed vs. fixed t-test is performed <cit.> at each tested frequency.This test indicates whether or not some leakage samples depend on the internal data computed by the cp.Therefore, if the t-test is conclusive at a given frequency, i.e., the score is over 4.5[This score means that there is information leakage with confidence >0.99999 <cit.>], this means leakage is present there, as it is a necessary condition for the t-test to detect a dependency. We chose to perform a fixed vs. 
fixed t-test as Durvaux et al.<cit.> demonstrated that it needs fewer traces to detect data dependencies compared to the classical fixed vs. random tvla test <cit.>.To perform this test, two sets of cp are executed with unique plaintext and key per set. All cp in one set are computed with the values of their respective set. The leakage generated during the cp executions is collected.Using leakage samples from a common time point within cp, i.e., leakage samples generated by the same operations, we compute a t-test as indicated in the following Eq. (<ref>): t-test = û_1 - û_2/√(σ_1^2/N_1 + σ_2^2/N_2) where û_i represents the average of the samples belonging to the i_th set, σ_i^2 is their variance, and N_i the number of samples in this set. Thus,for the t-test to work, it is necessary to synchronize the collection of the leakage with the execution of cp perfectlyto know which samples correspond to which time point from one leakage collection to another. The leakage collection method in section <ref> is used for this. For pattern recognition, a different pattern must be extracted for each frequency, as the shape of the cp leakage differs among frequencies. The vt <cit.> technique is used to automate pattern extraction.In the experimentwe collect 500 traces at each frequency using the wired setup from Fig <ref>. This methodology is used over the frequency range 1.4GHz to 3.4GHz (2.4 +/- 1 GHz), with a resolution of 1 MHz. These parameters are application-dependent and can be adjusted according to each particular use case in order to test a wider frequency band or to change the resolution. §.§.§ Experimental resultsFig. <ref> shows the results of the t-test. At each frequency, the maximum absolute value of the t-test is kept as a score. Frequencies corresponding to the harmonics are highlighted in blue. If the leakage had only been present at these frequencies, the score would have been higher than this threshold only at these locations, but it can be observed that the score is also above 4.5 at other frequencies, suggesting that leakage is also present at non-harmonics. In fact, even if the highest peaks are at the first 2 harmonics, the score is very high around the first 3 harmonics and is above the threshold over a band of more than 500 MHz of width (almost until the 6th harmonic). This is true on both the right and left sides of the spectrum. The advantage of employing this t-test method to localize leakage in the spectrum is that it gives a result that we know how to interpret, as the t-test is a well-known tool in the side-channel community. The downside of this method is that the collection phase takes 27 hours. A way to reduce this time would be to collect fewer traces at each frequency.As introduced, these results were obtained using 500 traces at each frequency sub-band (250 traces per set). If we repeat the experiment for 300 traces, the time is reduced to 15 hours. However, 16.89% of the frequencies previously identified as carrying leakage with the experiment using 500 traces are not detected anymore. Then if this method is used with an insufficient number of traces, there is a risk of not detecting frequencies that actually carry leakage. For example, in our experiments, frequencies between the 5th and 6th harmonics, which already have a low score with 500 traces per frequency, are not detected anymore by the t-test when it is performed with only 300 traces. 
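The per-frequency score described above amounts to a Welch t-statistic computed sample by sample and maximized over time; a minimal sketch follows.

```python
import numpy as np

def fixed_vs_fixed_score(traces_a, traces_b):
    # Welch t-statistic per time sample between the two fixed-input trace sets;
    # the score kept for a tested frequency is max |t| over all samples
    m1, m2 = traces_a.mean(axis=0), traces_b.mean(axis=0)
    v1, v2 = traces_a.var(axis=0, ddof=1), traces_b.var(axis=0, ddof=1)
    t = (m1 - m2) / np.sqrt(v1 / len(traces_a) + v2 / len(traces_b))
    return np.max(np.abs(t))

# leakage is considered present at the tested frequency when the score exceeds 4.5
```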
It is, in fact, very difficult to determine how many traces are enough because the exploitable frequencies are unknown, and hence we cannot know if some are missing. We limit the number of traces to 500 to maintain the experiment time tractable. However, there is no guarantee that some frequencies, that could in fact contain exploitable leakage, remain undetected.Therefore, in the next section, we propose a methodology that is more adapted to our requirements: detecting the presence of leakage in as many frequencies as fast as possible.The objective of this original method is to reduce the test complexity and, thus, its processing time.We apply this methodology to the same experiment and demonstrate equivalent results. §.§ Leakage localization using pattern detection methodThis method consists of testing the similarity of segments returned by vt (c.f. section <ref>).If leakage is present in the raw trace collected at the tested frequency F, then these segments should all correspond to the leakage of one cp execution and have the same shape, so the similarity test should be conclusive. We evaluate segment similarity with an adaptation of vt <cit.> as illustrated in Fig. <ref>.We refer to this test as pattern detection.The algorithm of the pattern detection method is formalized in Algorithm <ref>.First, a raw trace is collected at the tested frequency f (line 6). This trace is segmented using vt (line 7), it is cut in Nsegs=50 segments, each segment with a size L_cp equal to the sampling frequency×the time duration between 2 cp. The similarity test between the resulting segments is applied (lines 8 to 10).This test can be repeated N_tests times (loop from lines 4 to 11) and the results averaged (line 12). In our experiments, N_tests is initially set to 10. This method is applied over the same frequency range as in the first method.§.§.§ Experimental results and discussionsThe black curve in Fig. <ref> corresponds to the results L_presence(f) in Algorithm <ref>. They are expressed as a correlation, which is representative of the similarity between segments. The higher the correlation, the more likely the cp pattern is present. To make sure that, as expected, the correlation is high only at certain frequencies due to the presence of cp leakage, we repeat the same experiment, but now the victim does not perform any cp. The red curve shows the results of this second test. It can be seen that when cp are not executed, there are still peaks around the carrier frequency (2.4 GHz) due to digital activity. However, the correlations are much weaker because when segmenting the raw traces the segments obtained are not similar as they did not correspond to cp leakage. To compare both methods, we show their results side by side in Fig. <ref>. Similarly to the threshold of 4.5 used in the t-test method, we need to set another condition for this second method to consider the detection as positive. The comparison of both methods for a threshold of minimum correlation larger or equal to 0.75 is shown in Fig. <ref>. We chose this threshold as it corresponds to the highest score of the test in the absence of leakage (red curve). Therefore, any score below this threshold cannot be taken as an indication of the presence of leakage. For both methods, the frequencies where their respective condition is met are highlighted in green.For this selected threshold of 0.75, we found after analyzing the results for each frequency that both methods yield the same results for 93.95% of the tested frequencies. 
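A sketch of the pattern-detection loop is given below. The exact similarity metric is not spelled out in this excerpt, so correlation of each segment against the mean segment is used here as an assumed stand-in, and collect(f) stands for one raw capture tuned at frequency f.

```python
import numpy as np

def leakage_presence(collect, f, seg_len, n_segments=50, n_tests=10):
    # Cut a raw capture into n_segments consecutive windows of length seg_len (the
    # cp period in samples), score how similar they are, then repeat and average
    scores = []
    for _ in range(n_tests):
        raw = collect(f)
        segs = np.stack([raw[k * seg_len:(k + 1) * seg_len] for k in range(n_segments)])
        mean_seg = segs.mean(axis=0)
        sims = [np.corrcoef(s, mean_seg)[0, 1] for s in segs]
        scores.append(np.mean(sims))
    return float(np.mean(scores))   # compared against the ~0.75 threshold above
```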
For this second method, it took 50 minutes instead of the 27 hours in method 1 of Section <ref>. It is possible to further reduce this time by reducing the number of similarity tests performed at each frequency. As introduced, N_tests was initially set to 10 tests per frequency.We repeated the experiment reducing N_tests to only 1. In this case, leakage localization took only 15 minutes without significantly altering the results, and 94.78% of the detected frequencies with N_tests=10 were still detected with N_tests=1. § ATTACKINGAT NON-HARMONICS In Section <ref>, we demonstrated that the leakage is also present at non-harmonics frequencies. This brings another question: how efficient are attacks at these non-harmonic frequencies? To answer the question, the attack is performed on a part of the spectrum where the leakage localization methods gave the best results. To keep the experimentation phase tractable, experimentsonly cover the right-hand side of the spectrum (i.e., positive) with respect to the legitimate signal. Indeed, the objective of the experiment is not to evaluate all possible frequencies on this particular type of device but to determine if an attack is possible at non-harmonic frequencies and to compare their performance to attacks at harmonic frequencies. The experiment covers a range of frequencies from 2.45 GHz to 2.6 GHz, and attacks are centered at 150 different frequencies, with 1MHz steps. One may note that the attack would probably also be possible at other frequencies, including the one on the left part of the spectrum. However, as indicated, we focus only on the right half side to keep the experimentation phase in a reasonable time.§.§ The attack and scoreIn our experiments, we run profiled correlation attacks<cit.>, where the attacker has access to a similar device as the victim. This enables the attacker to build a profile for this type of device and learn the leakage behavior.During the attack phase, for each key byte, an assumption(i.e., hypothesis)is made on their value. The 256 possible hypotheses are tested, each getting a probability to be the correct one based on the correlation between the estimated leakage using the profile on one side, and the real leakage produced by the victimon the other.The hypothesis giving the highest probability is assumed to be the correct one. In many cases, some bytes are incorrectly guessed, but it is still possible to brute-force the correct key.A brute-force approach <cit.> consists in testing the ranked keys from the most probable, according to the probabilities computed during the attack, until finding the right one.The number of keys tested by the brute-force algorithm before reaching the correct one is the Key Rank and is representative of the complexity to recover the key.The lower the key rank is, the better the attack performs, as the brute-force attack needs less time to reach the correct key. When the key rank is lower than 2^32, it takes about 5 minutes on the experimental computer to brute-force the key. When lower than 2^35, the brute-force takes about 1 hour. In the remainder of this paper, this key rank is kept as the criteria to evaluate the efficiency of an attack as in previous works <cit.>. §.§ Experiment and results In this first experiment, the victim and attacker are still connected by a cable (Fig. <ref>). Two sets of 15000×10 traces are collected at each frequency, one to build the profile and the other to test the attack. The collection phase takes 4 days. 
For each attack, we compute the key rank and show the results in Fig. <ref>. The experiment confirms that the attack is not only succeeding at the harmonics as expected but also at many other frequencies. Then, this finding increases the number of potential frequencies to use to succeed in the attack. The key rank is lower than 2^32 at the 3 harmonics as they are not polluted. Among the 147 non-harmonics, 105 have a key rank lower than 2^32 and 12 lower than 2^35.As the experiment is fully automated, it is important to notice that a very high key rank at a given frequency does not ensure the attack is necessarily more difficult there.But it can be tried to make it work better by putting more effort into it, which is not our concern here.§ ATTACKING IN CHALLENGING CONDITIONS After demonstrating attacks are also possible at frequencies other than the harmonics, we investigate how useful this finding is in a noisy environment when attacking at a distance.The questions targeted by this second experiment are the following:In a noisy environment where harmonics are polluted, can non-harmonic frequencies keep the attack feasible? Can the attack be better at non-harmonic frequencies than at harmonic frequencies In the experiments presented in this section, the leakage is collected using the antenna and the setup shown in Fig. <ref>. Compared to the first experiment we increase time diversity from 10 to 50, but collect the same number of traces at each frequency (15000×50).The patterns used for the collection phase and profiles used for the attack are the ones that were built in Section <ref>. The key rank is kept as the attack score. §.§ Attacking at a distance in a noisy environmentA first test is performed with the antenna at 2 meters. In these conditions, the collection phase takes six days.Fig. <ref> shows the results.As expected, we can observe how the noisy environment reduces the number of exploitable frequencies. This is particularly visible around the first harmonic at 2.464 GHz, where WiFi and Bluetooth signals are present. Among the 150 frequencies, the rank is lower than 2^32 only at 2 harmonics, as the first one is polluted. Among the 147 non-harmonics, 78 have a rank lower than 2^32 and 12 lower than 2^35.§.§ Attacking with fewer tracesA common goal of side-channel attacks is to succeed with as few traces as possible.We re-computed the attack with the traces collected at 2 meters but reduced the number of traces per attack to 750×50 (from 15000×50). We were then able to set up (15000 / 750 = 20 attacks at each frequency). The experiment provides the results shown in Fig. <ref> After sorting the 150 frequencies according to the average of their scores (average of the 20 log2(key rank)), the harmonics ranked at the 3rd, 7th and 121st place. 
This proves that some non-harmonics can even get better scores than harmonic frequencies.§.§ Attacking at a further distance As screaming-channel attacks try to enable attacks where attackers are as far as possible from the victim, we run a new round of attack experiments with the antenna put at a distance of 7 meters.This time 50[50 is the minimal number usually considered by the side-channel community for statistically meaningful results] attacks are performed under the exact same conditions at each frequency.To keep the experiments feasible in a reasonable time, we reduced the number of tested frequencies.The frequencies selected for this attack at 7 meters are chosen among the ones where the attack performed best in the 2-meters scenario using 750 traces.The results of the attacks at 7 meters are shown in Fig. <ref>.In this case there is only one harmonic that still gets a key rank lower than 2^35, while most of the selected non-harmonics still work. Among the 9 which were selected, 5 have a lower rank than 2^32 and 2 than 2^35. Fig. <ref> shows the results of the same attacks, but it focuses on the evolution of the key rank according to the number of traces used.Again, we keep the average of the 50 log2 (key rank) as the attack score for each frequency. When using the same harmonic as in previous works <cit.> (the second one at 2.528 GHz), the key rank decreases but very slowly. Then in our case, to get a rank lower than 2^35 using this frequency, it is necessary to collect up to 30286×50 traces.These results show how allowing to search for leakage at frequencies other than the harmonics considerably reduces the number of traces needed to get the same results. The best non-harmonic frequency, at 2.484 GHz, needs only 65×50 traces to get this result, which is even better than the best harmonic at 2.592 GHz, where the number of traces required is 166×50. § DISCUSSION AND CONCLUSION This work defied the assumption that screaming-channel attacks perform best (or only) at harmonics of the digital processing clock of the victim, frequencies where the leakage was so far supposed to be present with the highest amplitude. To investigate this, we proposed two methods to locate and evaluate leakage over a band of the frequency spectrum. The first method, the most intuitive and direct, builds from the literature on side-channel attacks and tries to find exploitable leakage through a t-test at each tested frequency. We used a fixed vs. fixed test due to its better performance with fewer traces.The second method is an original contribution of this work that tries to reduce the implementation complexity and the processing time while keeping the same quality of results as the first method, which uses a standard methodology accepted by the side-channel community. Exploiting these two methods, we demonstrate that the leakage is also present at a large amount of non-harmonic frequencies. The presence of leakage at non-harmonics is consistent with previous studies <cit.>, demonstrating that when the digital part creates noise at a given frequency, and as this noise travels through the CMOS substrate, the latter acts as a filter that spreads the noise over a wider frequency band. As a consequence, the noise can be found on the rf side at frequencies other than the harmonics. We considered only one type of device, the same used by previous works on screaming-channel attacks, that is still available off-the-shelf at the time of our study. 
The present study does not prove that leakage will always be present at non-harmonics on any other device. However, it highlights the fact that leakage presence has to be checked there, too, as it is possible to find it at these frequencies, even if stronger peaks at harmonics give the intuitive idea that leakage would appear mainly there. This is exactly what we have proved in this work.This study also demonstrates how this phenomenon can make attacks feasible in cases where all exploitable harmonics are polluted by interfering signals, as the case in more realistic, real-life scenarios.The studied phenomenon can also reduce the number of traces needed for the attack.Compared with the performance of the attack at the best harmonic, using the best non-harmonic enables to reduce by 60% the number of traces needed to get a key rank under 2^35. In future works, it could be interesting to detect the best frequencies first and then focus the efforts only on them. For example, by building better profiles: in our work, in order to build profiles at a large number (150) of frequencies, these profiles were built with only 150K traces (15000×10), which is relatively small compared with previous works (1 to 5M traces per profile). One can also extend the range of attacks to a distance where no harmonic gives a reasonable key rank (for example, superior to 2^39) with a given maximum number of traces and observe how many meters a non-harmonic attack is capable of gaining.splncs04
http://arxiv.org/abs/2311.15832v1
{ "authors": [ "Jeremy Guillaume", "Maxime Pelcat", "Amor Nafkha", "Ruben Salvador" ], "categories": [ "cs.CR" ], "primary_category": "cs.CR", "published": "20231127135621", "title": "Attacking at non-harmonic frequencies in screaming-channel attacks" }
Beijing National Laboratory for Condensed Matter Physics and Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China; School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100190, China
Beijing National Laboratory for Condensed Matter Physics and Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China; School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100190, China
Beijing National Laboratory for Condensed Matter Physics and Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China; Kavli Institute of Theoretical Sciences, University of Chinese Academy of Sciences, Beijing 100190, China; New Cornerstone Science Laboratory, Beijing 100190, China

Cooper pairs formed by two electrons with different effective mass are common in multiband superconductors, pair density wave states and other superconducting systems with multiple degrees of freedom. In this work, we show that there are paramagnetic contributions to the superfluid stiffness in superconductors with different-mass Cooper pairs. This paramagnetic response is due to the relative motion between the two electrons with different mass. We investigate the paramagnetic contributions based on the linear response theory in two-band superconductors with interband pairings and in pair density wave states, respectively. Our results offer a new perspective on the electromagnetic superfluid stiffness in unconventional superconductors beyond BCS theory.

Paramagnetic contribution in superconductors with different-mass Cooper pairs
Jiangping Hu
January 14, 2024

Superconductors (SCs) are defined by zero resistance and the Meissner effect with perfect diamagnetism <cit.>. Microscopically, the central ingredients for superconductors are the Cooper pairs and their phase coherence, in which electrons bind together two by two and condense to form a coherent quantum state <cit.>. Especially, the phase coherence plays an essential role in the electromagnetic response of SCs. Any phase disturbance induced by external magnetic fields is disfavored by the Cooper pairs' phase coherence, leading to the diamagnetic Meissner effect. The diamagnetic rigidity in SCs is normally characterized by the superfluid stiffness ρ_s, which is defined through the London equation coefficient as 𝐣=-4e^2/ħ^2ρ_s𝐀=1/λ^2𝐀 <cit.>. As schematically illustrated in Fig.<ref>(a), the Cooper pairs' diamagnetic supercurrent expels the magnetic field from the interior of the superconductor with a characteristic penetration depth λ. Understanding superfluid stiffness is not only one key question in BCS theory <cit.>, but also a window towards understanding high-temperature superconductors <cit.>. On the other hand, Cooper pairs are formed by two electrons. Since the effective mass of electrons can vary dramatically in a solid-state system, there are superconducting pairing systems with Cooper pairs formed by two electrons with different effective mass, namely the different-mass Cooper pair systems. The different-mass Cooper pair systems are common in superconductors. For example, in multiband systems, interband pairings between two different electron bands generically provide different-mass Cooper pairs <cit.>; a pair density wave state (PDW) with vector Q connecting the k and -k+Q sectors can easily host different-mass Cooper pairs as well <cit.>.
Therefore, it is natural to ask whether new properties can be associated with the different-mass pairing. In this work, we show that different-mass Cooper pairs, surprisingly, carry a new paramagnetic response. This paramagnetic contribution is important for understanding the superfluid stiffness in unconventional superconducting states. To investigate the electromagnetic response of SCs with different-mass Cooper pairs, we start from how a system of two different-mass electrons couples to the electromagnetic gauge field in a Cooper problem <cit.>. The kinetic part of the Hamiltonian of this Cooper problem can be written as H_c = [-∇_𝐫_1-e𝐀(𝐫_1)]^2/2m_1 + [-∇_𝐫_2-e𝐀(𝐫_2)]^2/2m_2. Using the center-of-mass (COM) coordinate 𝐑=(𝐫_1+𝐫_2)/2, the relative coordinate 𝐫 = 𝐫_1-𝐫_2 and the corresponding masses M_±^-1= m_1^-1± m_2^-1, the Hamiltonian H_c transforms into H_c = 1/2M_+[-∇_𝐑/2-e𝐀(𝐑)]^2 + (-∇_𝐫)^2/2M_- + 1/2M_-{-∇_𝐑/2-e𝐀(𝐑),-∇_𝐫}. From Eq. <ref>, we can see that the first term describes the energy of the COM coupled to the gauge field 𝐀, which contains both the paramagnetic and the diamagnetic contribution. Note that we approximate the vector potential as 𝐀(𝐑±𝐫/2) ≈ 𝐀(𝐑), owing to the slowly varying field. The second term (-∇_𝐫)^2/2M_- describes the kinetic energy of the relative motion, which does not couple to the gauge field because the relative motion is a charge-less process. The last term 1/2M_-{-∇_𝐑/2-e𝐀(𝐑),-∇_𝐫} describes the coupling between the COM motion and the relative motion, which plays an important role in linking the relative motion to the gauge field, as discussed later. Taking the derivative of H_c with respect to 𝐀, the response current 𝐉 is obtained with two contributions from the COM and relative motions as 𝐉 = -∂ H_c /∂𝐀 = J_R+J_r, as schematically plotted in Fig.<ref>(b,c). Starting from the equal-mass limit with M_-^-1=0, the last two terms in H_c vanish. Hence, only the current J_R from the COM motion remains, as shown in Fig.<ref>(b). On the other hand, for the Cooper pair with different masses plotted in Fig.<ref>(c), both J_R in the upper panel and J_r in the lower panel exist. In particular, J_r emerges from the coupling between the COM motion and the relative motion in the last term of H_c. In other words, if two different-mass electrons form a Cooper pair in an SC, the COM part behaves like a Cooper pair composed of two equal-mass electrons. The electromagnetic response of the COM shows perfect diamagnetism, where the paramagnetic response is strictly zero. However, the relative motion contributes a paramagnetic response, resulting in a reduction of the superfluid stiffness owing to the vanishing diamagnetic term related to M_- in (-∇_𝐫)^2/2M_-. To illustrate the above ideas, we first study the superfluid stiffness in multiband SCs with interband pairing. The multiband effect in superconductivity has been a long-standing topic since the discovery of BCS theory <cit.>. Multiband signatures have been widely observed in superconductors, including the elemental metals Nb, Ta, V and Pb <cit.>, MgB_2 <cit.>, doped SrTiO_3 <cit.>, and especially the iron pnictides and chalcogenides <cit.>. Here, we consider a simplified two-band model with interband pairing: H_𝐤=∑_α, σξ_α(𝐤) c_𝐤, α, σ^† c_𝐤, α, σ+Δ∑_α≠β(c_𝐤, α, ↑^† c_-𝐤, β, ↓^†+ H.c. ), where α,β=1,2 label two separated bands. These two bands host the dispersions ξ_α(k)=k^2/2m_α-μ with mass m_α and chemical potential μ, as the red and blue parabolic bands illustrated in Fig.<ref>(a). Δ is the mean-field interband pairing order parameter. Normally, this interband pairing should not be the leading pairing instability.
However, we take a phenomenology approach to this problem and assume the interband pairing is dominated in this toy model.Below we define γ_α=1/2m_α for convenience.Under the basis Ψ_k^†={c_1k↑^†,c_2k↑^†,c_1,-k↓,c_2,-k↓}^T, the BdG Hamiltonian can be written asH_BdG=([γ_1 k^2-μ00Δ;0γ_2 k^2-μΔ0;0Δ -γ_1 k^2+μ0;Δ00 -γ_2 k^2+μ ]). where k^2=k_x^2+k_y^2. The Hamiltonian is composed of two decoupled blocks, which are particle-hole related. Thus, we can focus on the H_1 block for convenience as H_1=([γ_1 k^2-μΔ;Δ -γ_2 k^2+μ ]).The eigenvalue of H_1 are E_k^±=η_k±√(ϵ_k^2+Δ^2), where η_k=γ_1-γ_2/2k^2 and ϵ_k=γ_1+γ_2/2k^2-μ. Note that the system is fully gapped only when Δ>μγ_1-γ_2/2√(γ_1γ_2), namely E_k^+>0 and E_k^-<0. Fig.<ref>(b) shows the band dispersion of H_1 block in the fully-gapped region. The color of the band represents the weight of the two band in Fig.<ref>(a). It's obvious that the two band is not particle-hole symmetric because of the η_k contribution in this block.This η_kis just the contribution from the relative motion. The particle-hole symmetry is recovered for all bands in H_BdG.The next step is to find the superfluid stiffness. Within linear response theory, the response current to the vector potential of electromagnetic field can be obtained via J_μ(,Ω)=-∑_νK_μν(,Ω)A_ν(,Ω). Here K_μν is the electromagnetic response tensor with two part of contributions, paramagnetic response from current-current correlation function Π_μν and diamagnetic response from charge density ⟨n̂/m⟩ as K_μν=-e^2(Π_μν+⟨n̂/m⟩).According to London equation,the superfluid stiffness can be defined via ρ_s=ħ^2/4e^2 K_μμ(ω=0,q0). Notice that the paramagnetic current obtained from Eq.<ref> can be decomposed into two components as discussed above: the COM motion and the relative motion,J^P(q) = J^P_R+J^P_r= γ_1+γ_2/2 (2k+q) σ_0 + γ_1-γ_2/2(2k+q) σ_3Then the current-current correlation function can be calculated by Π_μν(q,Ω_n) = 1/Vβ∑_k,ω_n[J^P_μ𝒢 J^P_ν𝒢] as the Feynman diagram in Fig.<ref>(c), where 𝒢 is the Green's function for H_1 (see Supplemental Material (SM) for more details).At zero temperature, considering the complete model given in Eq.<ref>, the paramagnetic contribution of the superfluid response of this system with interband pairing isΠ_μν(Ω=0,q→ 0) = -2(γ_1-γ_2)^2 ϵ_F N_γ^+(0)/γ_1+γ_2δ_μνwhere N_γ^+ is the DOS of the 2D free electron gas with energy dispersion ϵ_k, which is proportional to 1/γ_1+γ_2. The factor 2 comes from the equivalence when exchanging γ_1 and γ_2. This nonzero paramagnetic contribution is the key finding in this work. We can also calculate the diamagnetic contribution ⟨n̂/m⟩=2(γ_1+γ_2)n_c, where n_c is the number of carriers in either band. (More details of the calculation can be found in SM.)The paramagnetic contrition Π_μν(Ω=0,q→ 0) vanishes if γ_1=γ_2, which corresponds to the Cooper pairs composed of the equal mass electrons. This is consistent with the conventional BCS theory, where the Π_μν contribution is strictly zero owing to the SC gap. Hence, the SC shows perfect diamagnetism from ⟨n̂/m⟩ due to the vanishing Π_μν. On the other hand, if γ_1 ≠γ_2, the paramagnetic response Π_μν remains finite at zero temperature. Although the whole SC system remains diamagnetism, the existence of relative motion current J^P_r(q) reduces the superfluid stiffness with finite Π_μν.Actually the above results can be understood from the optical perspective. In the optical response, the optical conductivity σ(ω) follow a sum rule ∫_0^∞σ(ω)=⟨π e^2/2mn̂⟩ <cit.>. 
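As an illustrative numerical cross-check of the block structure above, H_1 can be diagonalised directly. The short NumPy sketch below uses placeholder parameter values (the chosen γ_1, γ_2, μ, Δ are not the values used in the figures) and verifies both the eigenvalue expression E_k^±=η_k±√(ϵ_k^2+Δ^2) and the full-gap criterion Δ>μ(γ_1-γ_2)/(2√(γ_1γ_2)).

import numpy as np

# placeholder parameters: gamma_a = 1/(2 m_a), chemical potential mu, interband gap Delta
g1, g2, mu, Delta = 1.0, 0.4, 1.0, 0.5

def H1(k2):
    """2x2 block of H_BdG in the (c_{1,k,up}, c^dag_{2,-k,dn}) sector; k2 = |k|^2."""
    return np.array([[g1 * k2 - mu, Delta],
                     [Delta, -(g2 * k2 - mu)]])

def E_pm(k2):
    """Analytic quasiparticle energies (E^-, E^+) of the block."""
    eta = 0.5 * (g1 - g2) * k2
    eps = 0.5 * (g1 + g2) * k2 - mu
    root = np.sqrt(eps**2 + Delta**2)
    return eta - root, eta + root

# the dispersion only depends on |k|^2, so scanning k2 is enough
for k2 in np.linspace(0.0, 9.0, 10):
    assert np.allclose(np.sort(np.linalg.eigvalsh(H1(k2))), np.sort(E_pm(k2)))

# full-gap criterion: Delta must exceed mu (g1 - g2) / (2 sqrt(g1 g2))
Delta_c = mu * (g1 - g2) / (2.0 * np.sqrt(g1 * g2))
E = np.array([E_pm(k2) for k2 in np.linspace(0.0, 25.0, 5000)])
fully_gapped = (E[:, 1].min() > 0.0) and (E[:, 0].max() < 0.0)
print(f"Delta = {Delta}, Delta_c = {Delta_c:.3f}, fully gapped: {fully_gapped}")

Lowering Δ below Δ_c in this sketch produces a window in k where E^->0, i.e. a Bogoliubov Fermi surface, consistent with the gap condition quoted above.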
For clean SCs in BCS theory, all the optical weight transfers into the ω=0 with ρ_sδ(ω) with σ(ω≠0)=0. This σ(ω≠0)=0 is due to the forbidden optical selection rule from particle-hole symmetry, as proved in Ref. <cit.> and illustrated in Fig.<ref>(b) inset. However, this situation changes in the multiband system <cit.>. The σ(ω≠0) becomes finite, as calculated in Fig.<ref>(d). To satisfy optical sum rule, the superfluid stiffness in δ(ω) must have finite Π_μν contribution. To further demonstrate above analytical results, we can numerically calculate the paramagnetic response using lattice models. Here we consider two band model on square lattice with the energy dispersion ξ_1,2=-2t_1,2(cosk_x+cosk_y)-μ, where t_1,2 is the nearest-neighbor hopping parameter for band 1,2 respectively. We set t_1=1 as an energy scale. The ratio of the effective mass, defined by m^*=(∂^2 ξ_k/∂ k^2)^-1 of two energy bands is m^*_1/m^*_2=t_2/t_1. The paramagnetic responses Π_xx(Ω=0,q→ 0) as a function of interband pairing order parameter Δ_1 for different value of t_2 are plotted in in Fig.<ref>(a). Notice that, as we fix intraband pairing to zero, the system is gapless with Bogoliubov Fermi surface contributed by the lower quasiparticle band when Δ_1 is small. Here we only focus on the interband transition process as shown by the arrow in Fig.<ref>(b), although the Bogoliubov Fermi surface also contributes to the paramagnetic response through the intraband process. From Fig.<ref>(a), we can find that the paramagnetic response is always zero for any Δ_1 at t_1=t_2, which corresponds to m_1=m_2. For t_1 ≠ t_2, the interband process starts to generate finite response.For small value of interband pairing, the Π_xx response strengthens as Δ_1 increases.This is because the only the quasiparticles under the Fermi surface contribute to the paramagnetic response, whose numberincreases as Δ_1 increases. After Δ_1 exceeds a critical value Δ_c, the Bogoliubov Fermi surface disappears, and all quasiparticles in the lower band participate in interband processes, leading to a saturated response.We can also find that the saturated response is positively correlated with t_1-t_2 and negatively correlated with t_1+t_2. These are consistent with the our analytical calculation results based on the effective continuum model.In more realistic multiband SC cases such as iron based SCs, the intraband pairing is always the leading instability, which always dominates in comparison to the interband pairing <cit.>. The influence of the intraband pairing to Π_μν becomes important. We plot the paramagnetic response for different intraband pairing order parameter Δ_0 in Fig.<ref>(b) with fixed hopping parameter t_1=1 and t_2=0.6. The system is fully gapped for all parameters in the calculation. So the response completely results from interband processes. The result in Fig.<ref>(b) suggests that the interband pairing strengthens the paramagnetic response, while the intraband pairing suppresses it as Δ_0 increasing from 0.2 to 0.5.Thus, although the paramagnetic response is small in the intraband pairing Δ_0 dominated multiband system, it does exist as long as the interband pairing is finite.Pair density wave is another important example for different-mass Copper pairs <cit.>. Recently, the PDW has been widely explored in CsV_3Sb_5, cuprates and other superconducting system <cit.>. PDW is a special SC state composed of the Copper pairs with momentum k and -k± Q. 
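For the lattice model discussed above, a convenient way to locate the critical interband pairing Δ_c is to note that, for a block of the same form as H_1, i.e. [[ξ_1(k), Δ_1],[Δ_1, -ξ_2(k)]], a quasiparticle band crosses zero wherever -ξ_1(k)ξ_2(k)>Δ_1^2, so Δ_c=√(max_k[-ξ_1ξ_2]). The sketch below assumes this block form; t_1=1 and t_2=0.6 follow the values used above, while the chemical potential is an illustrative placeholder (its value is not quoted here).

import numpy as np

t1, t2 = 1.0, 0.6     # hopping parameters used above
mu = -1.0             # illustrative value, not taken from the original calculation

k = np.linspace(-np.pi, np.pi, 401)
KX, KY = np.meshgrid(k, k, indexing="ij")
xi1 = -2.0 * t1 * (np.cos(KX) + np.cos(KY)) - mu
xi2 = -2.0 * t2 * (np.cos(KX) + np.cos(KY)) - mu

# a Bogoliubov Fermi surface exists wherever -xi1*xi2 > Delta_1^2
Delta_c = np.sqrt(max(np.max(-xi1 * xi2), 0.0))
print(f"critical interband pairing Delta_c ~ {Delta_c:.3f} t1 for this mu")

for D1 in (0.1, 0.2, 0.3, 0.5):
    print(f"Delta_1 = {D1}: Bogoliubov Fermi surface present: {bool(np.any(-xi1 * xi2 > D1**2))}")

For Δ_1 below this value the lower quasiparticle band still crosses zero somewhere in the zone, which is the gapless regime referred to above; note that with t_1=t_2 the two dispersions coincide and -ξ_1ξ_2 is never positive on the Fermi surface, so the condition simplifies accordingly.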
The effective mass of band electrons are naturally different for electrons in PDW Cooper pairs due to the finite momentum Q. To simplify our discussion, we calculate the paramagnetic response in a PDW with Q=(π,π) on square lattice.As the pure PDW state may host a Bogoliubov Fermi surface, we also add an onsite pairing term to achieve a gap system. The Hamiltonian H_PDW under the basis Ψ_k^†={c_k↑^†,c_k+Q↑^†,c_-k↓,c_-k+Q↓}^T can be written asH_PDW=([ ξ_k 0Δ_on Δ_PDW; 0 ξ_k+Q Δ_PDWΔ_on;Δ_on Δ_PDW -ξ_-k 0; Δ_PDWΔ_on 0 -ξ_-k+Q ]).where ξ_k=-2t(cosk_x+cosk_y)-4t'cosk_xcosk_y-μ. In the calculation we set t=1, t'=-0.3 and μ=-1.Fig.<ref>(c) shows the Fermi surface of ξ_k and the reduced Brillouin zone (BZ). Δ_on and Δ_PDW represents the onsite pairing order parameter and the PDW order parameter respectively.The paramagnetic responses as a function of Δ_PDW for different Δ_on are shown in Fig.<ref>(d). The result is similar to the case in Fig.<ref>(b). As Δ_PDW dominating different-mass pairing increases, the Π_μν keep increasing as the interband cases. On the other hand, the onsite pairing term suppresses the paramagnetic response because it leads to the pairing of electrons with opposite momenta, i.e., electrons with the equal mass.Above results can be further extended to charge 4e superconductors <cit.>, where the superfluid density reduced to 3/4 of that in a two-electron BCS superconductors.Using the wavefunction method <cit.>, we can calculate the Green's function of charge 4e SC asG^R(kσ,ω)_αα = u_k^2/ω+iη-(E_k-ξ_k)+v_k^2/ω+iη+(E_k+ξ_k),where ξ_k=k^2/2m-μ and E_k=√(4ξ_k^2+Δ_4e^2). The single particle excitation spectrum is shown in Fig.<ref>(a). This spectrum is highly particle-hole asymmetric, since adding one electron and adding one hole are related to different excitation states <cit.>. Through the optical conductance shown in Fig.<ref>(b), we find that finite paramagnetic response corresponding to the direct transition between the two excitations at high frequency leads to a reduction in the superfluid density. Although it stems from the multi-body pairing rather than different-mass Cooper pairs, the reduction of the superfluid stiffness due to finite paramagnetic contribution is a common property in unconventional superconductors.In summary, we carry out a systematic study of electromagnetic property in superconductors with different-mass Cooper pairs. We find that the different-mass Cooper pairing can result in new paramagnetic contributions in superfluid stiffness. This paramagnetic responsedirectly links to the relative motion between two different mass electrons inside each Cooper pair. Using the two-band model with interband pairing, the paramagnetic responses are calculated based on the linear-response theory from both continuum model and lattice model. This paramagnetic response is finite only at m_1 ≠ m_2. Furthermore, this reduced superfluid stiffness can be understood from the optical sum rule. The weight from finite σ(ω≠0) is equal to this paramagnetic response.Besides the interband pairing in multiband systems, this paramagnetic response also exists in PDW system and other different-mass Cooper pair SCs. Finally, we want to point out this paramagnetic contribution is common both in different-mass Cooper pair systems andmulti-body pairing superconducting systems, as we have demonstrated in the charge 4e and correlated BCS superconductors <cit.>. 
These results offer a new perspective on the superfluid stiffness in superconductors beyond BCS theory.§.§ AcknowledgementThis work is supported by the Ministry of Science and Technology(Grant No. 2022YFA1403901), the National Natural Science Foundation of China (Grant No. NSFC-11888101, No. NSFC-12174428), the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB28000000, XDB33000000), the New Cornerstone Investigator Program, and the Chinese Academy of Sciences through the Project for Young Scientists in Basic Research (2022YSBR-048).Supplemental Material§ SUPERFLUID RESPONSE AND OPTICAL CONDUCTANCEThe eigenenergy of the Hamiltonian H_1 (Eq.<ref>) in the main text isE_k^±=η_k±ℰ_k=η_k±√(ϵ_k^2+Δ^2)with corresponding eigenvectors (u_k,v_k)^T and (v_k,-u_k)^T respectively in whichu_k = √(1/2(1+ϵ_k/ℰ_k))v_k = √(1/2(1-ϵ_k/ℰ_k))The Green's function of the system described by H_1 can be expressed as𝒢=(ω_n-H_1)^-1≡([ G(k,ω_n) F(k,ω_n); F^†(k,ω_n)G̅(k,ω_n) ])where G(k,ω_n) = u_k^2/ω_n-E_++v_k^2/ω_n-E_-,G̅(k,ω_n) = v_k^2/ω_n-E_++u_k^2/ω_n-E_-,F(k,ω_n) = u_k v_k/ω_n-E_+-u_k v_k/ω_n-E_-.The paramagnetic current is defined asĴ^P_μ(q)=∑_α,kγ_α(2k+q)_μ c_k+q,α^† c_kαwhich can be decomposed into two components as discussed in the main text: the COM motion and the relative motion,J^P(k,q) = J^P_R+J^P_r= γ_1+γ_2/2 (2k+q) σ_0 + γ_1-γ_2/2(2k+q) σ_3 Then the paramagnetic part of the linear response, i.e. current-current correlation function, can be calculated by Π_μν(q,Ω_n) = 1/Vβ∑_k,ω_n[J^P_μ(q)𝒢(k,ω_n)J^P_ν(q)𝒢(k+q,ω_n+Ω_n)] =(γ_1+γ_2)^2 χ_00 + 2(γ_1^2-γ_2^2) χ_30 + (γ_1-γ_2)^2 χ_33whereχ_ij(q,Ω_n)=1/4Vβ∑_k,ω_n (2k+q)_μ(2k+q)_ν[σ_i 𝒢(k,ω_n)σ_j 𝒢(k+q,ω_n+Ω_n)].Considering the case T=0, the results of these correlation functions areχ_00(q,Ω_n) = 1/4V∑_k(2k+q)_μ(2k+q)_ν[-u^2^2+v^2^2-2uv/Ω_n+E^+-^-+^2v^2+u^2^2-2uv/Ω_n+E^–^+]. χ_33(q,Ω_n) = 1/4V∑_k(2k+q)_μ(2k+q)_ν[-u^2^2+v^2^2+2uv/Ω_n+E^+-^-+^2v^2+u^2^2+2uv/Ω_n+E^–^+]. χ_30(q,Ω_n) = 1/4V∑_k(2k+q)_μ(2k+q)_ν[-u^2^2-v^2^2/Ω_n+E^+-^-+^2v^2-u^2^2/Ω_n+E^–^+]. where u=u_k, v=v_k, E^±=E_k^±, =u_k+q, =v_k+q and ^±=E^±_k+q.In the long wavelength limit q → 0, we have → u, → v and ^±→ E^± and obviously χ_00=χ_30=0 and thenΠ_μν(Ω_n=0,q→ 0)=-4/V∑_kk_μ k_ν(γ_1-γ_2)^2u^2v^2/ℰ_k=-(γ_1-γ_2)^2/V∑_kk_μ^2 Δ^2/ℰ_k^3δ_μν=-(γ_1-γ_2)^2 /2(γ_1+γ_2)∫ dϵ N_γ^+(ϵ)ϵ_F Δ^2/(ϵ^2+Δ^2)^3/2δ_μν=-(γ_1-γ_2)^2 ϵ_F N_γ^+(0)/γ_1+γ_2δ_μνwhere N_γ_1+γ_2 is the DOS of the 2D free electron gas with energy dispersion (γ_1+γ_2)k^2, which is proportional to 1/γ_1+γ_2. If γ_1=γ_2, the paramagnetic contrition vanishes which is just the single band case. This result suggests that the nonzero paramagnetic contribution stems from interband pairing.Then we shall consider the diamagnetic contribution. 
The diamagnetic current is defined asJ^D_μ=∑_kαe^2 n̂_α, μν/m_αA_νThe matrix form isJ^D(q)=([2γ_1 0; 0 -2γ_2 ]) = J^D_R+J^D_r = (γ_1+γ_2)σ_3+(γ_1-γ_2)σ_0Thus the diamagnetic contribution of superfluid stiffness can be obtained by calculating<J^D> = 1/Vβ∑_k,ω_n[J^D 𝒢(k,ω_n)] A= ∑_k(γ_1+γ_2)[1-ϵ_k/ℰ_k[n_F(E_k^-)-n_F(E_k^+)]] - ∑_k (γ_1-γ_2)[1-[n_F(E_k^-)-n_F(E_k^+)]] AT=0 ∑_k(γ_1+γ_2)[1-ϵ_k/ℰ_k] A=(γ_1+γ_2) n_c Awhere n_c is the number of carriers in either band.Below we shall calculate the optical conductance and check the optical sum ruleΠ_μν(Ω,q→ 0)=-π/V∑_kk_μ^2(γ_1-γ_2)^2Δ^2/ℰ_k^2[δ(Ω+2ℰ_k)-δ(Ω-2ℰ_k)] δ_μν=-π(γ_1-γ_2)^2 ϵ_F N_γ^+(0)/2(γ_1+γ_2)∫ dϵΔ^2/ϵ^2+Δ^2[δ(Ω+2√(ϵ^2+Δ^2))-δ(Ω-2√(ϵ^2+Δ^2))]δ_μνUsing the property of δ-function δ[g(ϵ)]=∑_iδ(ϵ-ϵ_i)/g'(ϵ_i), we have∫ dϵΔ^2/ϵ^2+Δ^2δ(Ω-2√(ϵ^2+Δ^2))= ∫ dϵΔ^2/ϵ^2+Δ^2[δ(ϵ-1/2√(Ω^2-4Δ^2))/2√(Ω^2-4Δ^2)/Ω + δ(ϵ+1/2√(Ω^2-4Δ^2))/2√(Ω^2-4Δ^2)/Ω] = 4Δ^2/Ω√(Ω^2-4Δ^2), (Ω>0)Since Π_μν(Ω,q→ 0) is odd function of Ω, the integration of it is∫_-∞^∞ dΩΠ_μν(Ω,q→ 0)/Ω =2∫_0^∞ dΩΠ_μν(Ω,q→ 0)/Ω= π(γ_1-γ_2)^2 ϵ_F N_γ^+(0)/γ_1+γ_2δ_μν∫_2Δ^∞ dΩ4Δ^2/Ω^2√(Ω^2-4Δ^2)= π(γ_1-γ_2)^2 ϵ_F N_γ^+(0)/γ_1+γ_2δ_μν= πΠ_μν(Ω=0,q→ 0)Thus the optical sum rule is satisfied.The real part of optical conductance isσ_μμ(q→ 0,Ω>0) = Π_μν(Ω,q→ 0)/Ω = 4π(γ_1-γ_2)^2 Δ^2ϵ_F N_γ^+(0)/2(γ_1+γ_2)Ω^2√(Ω^2-4Δ^2)which is showed in Fig.<ref>(d).§ TWO DIFFERENT-MASS BANDS WITH THE SAME FERMI SURFACEIn our main text, the two different-mass bands have a degeneracy at the band bottom, resulting in separated Fermi surfaces. Consequently, the interband pairing order parameter needs to exceed a critical value for the system to be fully gapped. Actually, the bottom of the two bands may not be degenerate, allowing their Fermi surfaces to coincide. That means the two bands hosts the dispersion with different Fermi energy ξ_α(k)=k^2/2m_α-μ_α. By tuning μ_1/μ_2=γ_1/γ_2, we can get this case as shown in Fig.<ref>(a). Due to the perfectly nesting Fermi surface of the two bands, infinitesimal interband pairing order parameter can make the system fully gapped as shown in Fig.<ref>(b).We find that in this case, the eigenvalue of H_1 are E_k^±=η_k±√(ϵ_k^2+Δ^2), where η_k= γ_1-γ_2/2k^2-μ_1-μ_2/2 and ϵ_k=γ_1+γ_2/2k^2-μ_1+μ_2/2. It is a little different from the case in the main text, but the result of superfuild response is the same.§ QUASIPARTICLE BAND OF PDWWe plot the quasiparticle band of the Q=(π,π) PDW in Fig.<ref>(b) along the high symmetric line shown in Fig.<ref>(a). The optical transition process is similar to that of the interband pairing SC with different mass Cooper pairs. § SUPERFLUID RESPONSE AND OPTICAL CONDUCTANCE OF THE CHARGE 4E SCFollowing the paper <cit.>, the current operator matrix under the basis (c_1 k↑, c_2 k↑, c_1-k↓^†, c_2-k↓^†)^T can be written as J^P(k,q)=2k+q/2mτ_0σ_0where τ and σ are Pauli matrix in particle-hole and orbital space respectively. The single particle Green's function has the form G̅^R(k,ω_n) = ([ ω+η+3ξ_/(ω+η+ξ_)^2-E_^2; ω+η+3ξ_/(ω+η+ξ_)^2-E_^2; ω+η-3ξ_/(ω+η-ξ_)^2-E_^2; ω+η-3ξ_/(ω+η-ξ_)^2-E_^2 ]).Then the current-current correlation function can be calculated via Eq.<ref> and the result at zero temperature areΠ_μμ(Ω_n=0,q→ 0) = ϵ_F N(ϵ_F)/6m Π_μμ(Ω,q→ 0)= πϵ_F N(ϵ_F)/3mΔ^2/Ω√(Ω^2-4Δ^2)Thus, the superfluid density is obtained asρ_s = n/m - Π_μμ(Ω_n=0,q→ 0) = 3ρ/4where ρ=2ϵ_F N(ϵ_F)/3m, and the real part of the optical conductance is σ_μμ(q→ 0,Ω>0) = Π_μν(Ω,q→ 0)/Ω = πϵ_F N(ϵ_F)/3mΔ^2/Ω^2√(Ω^2-4Δ^2)It's straightforward to check that the optical sum rule is satisfied.
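The Ω-integral used in the sum-rule check above, ∫_2Δ^∞ dΩ 4Δ^2/(Ω^2√(Ω^2-4Δ^2)) = 1, can also be confirmed numerically in a couple of lines. The sketch below (illustrative Δ; the result is independent of its value) evaluates the integral both directly and after the substitution Ω = 2Δcosh u, which removes the integrable endpoint singularity.

import numpy as np
from scipy.integrate import quad

Delta = 0.7  # illustrative value; the integral does not depend on Delta

direct, _ = quad(lambda W: 4.0 * Delta**2 / (W**2 * np.sqrt(W**2 - 4.0 * Delta**2)),
                 2.0 * Delta, np.inf)
# same integral after W = 2 Delta cosh(u): the integrand becomes sech^2(u)
substituted, _ = quad(lambda u: 1.0 / np.cosh(u)**2, 0.0, np.inf)

# both values are ~ 1.0, consistent with the statement
# integral dOmega Pi(Omega, q->0)/Omega = pi * Pi(Omega=0, q->0) derived above
print(direct, substituted)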
http://arxiv.org/abs/2311.16033v2
{ "authors": [ "Pengfei Li", "Kun Jiang", "Jiangping Hu" ], "categories": [ "cond-mat.supr-con" ], "primary_category": "cond-mat.supr-con", "published": "20231127175145", "title": "Paramagnetic contribution in superconductors with different-mass Cooper pairs" }
Hamburger Sternwarte, Universität Hamburg, Gojenbergsweg 112, 21029 Hamburg, Germany [email protected] School of Astronomy and Astrophysics, The Australian National University, Canberra, ACT 2611, Australia Magnetised spiral arm instability Arora et al. Regularly-spaced, star-forming regions along the spiral arms of nearby galaxies provide insight into the early stages and initial conditions of star formation. The regular separation of these star-forming regions suggests a spiral arm instability as their origin. We explore the effects of magnetic fields on the spiral arm instability. We use three-dimensional global magnetohydrodynamical simulations of isolated spiral galaxies, comparing three different initial plasma β values (ratios of thermal to magnetic pressure) of β=∞, 50, and 10. We perform a Fourier analysis to calculate the separation of the over-dense regions formed as a result of the spiral instability. We then compare the separations with observations. We find that the spiral arms in the hydro case (β = ∞) are unstable, with the fragments initially connected by gas streams, reminiscent of the Kelvin-Helmholtz instability. In the β = 50 case, the spiral arms also fragment, but the fragments separate earlier and tend to be slightly elongated in the direction perpendicular to the spiral arms. However, in the β = 10 run the arms are stabilised against fragmentation by magnetic pressure. Despite the difference in the initial magnetic field strengths of the β = 50 and 10 runs, the magnetic field is amplified to β_arm ∼ 1 inside the spiral arms for both runs. The spiral arms in the unstable cases (hydro and β=50) fragment into regularly-spaced, over-dense regions. We determine their separation to be ∼ 0.5 kpc in the hydro and ∼ 0.65 kpc in the β = 50 case, both in agreement with the observed values found in nearby galaxies. We find a smaller median characteristic wavelength of the over-densities along the spiral arms of 0.73^+0.31_-0.36 kpc in the hydro case, compared to 0.98^+0.49_-0.46 kpc in the β = 50 case. Moreover, we find a higher growth rate of the over-densities in the β = 50 run compared to the hydro run. We observe magnetic hills and valleys along the fragmented arms in the β = 50 run, which is characteristic of the Parker instability. The role of magnetic fields in disc galaxies: spiral arm instability Raghav Arora^1, Christoph Federrath^2, Robi Banerjee^1, Bastian Körtgen^1 Received xxxx; accepted xxxx ====================================================================================================================================== § INTRODUCTION Star formation in spiral galaxies is driven by their spiral arms <cit.>. Star-forming regions that resemble a beads-on-a-string pattern are ubiquitous in the spiral arms of numerous nearby galaxies, despite the host galaxies having a broad range of morphological features and physical properties <cit.>. Recently, such regions have also been seen in the rings of barred <cit.> and lenticular galaxies <cit.>. These star-forming regions are found to be bright in the IR <cit.>, and at times also in FUV and Hα <cit.> emission, tracing various stages of star formation. Despite the variety of spiral galaxies that host these regular star-forming regions, the adjacent separations of the regions fall in the range 350-500 pc and/or integer multiples of this range <cit.>. For example, the adjacent separations of star-forming regions in the spiral arms of NGC 628 and M100 were both found to be ∼ 400 pc <cit.>.
Another prominent example being the North-Western arm of M31, where star complexes with a spacing twice this value ∼ 1.1 kpc were found <cit.>. The presence of this characteristic separation of these star-forming regions in spiral arms hints that their origin could be through a spiral arm instability. Another interesting feature observed in unison with regular star-forming regions in M31 were the presence of magnetic fields out of the plane of the galaxy with a wavelength twice that of the adjacent separation of these regions <cit.>. This indicates that magnetic fields could play an important role in their formation. Beads-on-a-string patterns have been observed along spiral arms in numerical simulations <cit.>. The physical mechanism responsible, however, has been debated. It was hypothesized to be the Kelvin-Helmholtz (KH) hydrodynamical instability <cit.>, an artifact of numerical noise <cit.> or of infinitesimally flat 2D discs <cit.>, a magnetic-jeans instability (MJI) <cit.> which was also called the feathering instability <cit.>, or a hydro instability along spiral shocks distinct from the KH instability named as the Wiggle Instability <cit.>. Its physical nature was established recently to be a combination of KHI <cit.> and a vorticity generating instability due to repeated passages of spiral shocks <cit.>. Using a subset or restricting to different physical processes, these works saw facets of the same physical instability around the spiral shock. However, we also know from 2D local simulations and linear analysis <cit.>, that the spiral instability is highly non-linear and sensitive to the physical properties of the interstellar medium (ISM). This includes the sound speed (c_s), the strength of the spiral shock etc. One such important factor are the magnetic fields. Even though the role of magnetic fields has been explored, it still remains to be seen what effect they can have in a global realistic setting. We expect from earlier works that they can affect both the length scales and the growth scales of the instability. For example, it was shown in <cit.> that the spiral shock fronts were weaker in the presence of stronger magnetic fields, not affecting the feather instability's growth rate. However, the range of their parameter space spanned only from β = 2-4. In <cit.>, it was found in 2D local linear analysis and simulations that equipartition magnetic fields (β = 1) in comparison with very weak fields (β = 100) decreased the growth rate of the spiral arm instability by a factor 4 and increased the wavelength of the dominant mode by a factor of 2. Thus having an overall stabalising effect. In three-dimensional local box simulations, <cit.> found no vorticity generating instability and attributed it to presence of magnetic fields (β = 1, 10), but found that the spiral shocks were unstable nonetheless in presence of self-gravity and magnetic fields. This was also confirmed in global 2D simulations in <cit.> where equipartition magnetic field strengths prevented the pure hydro vortical instability, but the spiral arms fragmented in presence of self-gravity despite the magnetic fields. Most of these studies are 2D <cit.> or local <cit.> and all of them are isothermal. While there are more sophisticated magnetised galaxy simulations <cit.>, they have focused on the global evolution of the disc, rather than on the spiral arm instability. 
It is yet to be seen how magnetic fields affect the spiral arm instability in 3D global disc galaxy simulations where one resolves the spiral arm instability, MJI, and also the Parker instability <cit.> that arises due to the stratified nature of the medium perpendicular to the disc. In this study, we focus on the physical effects of magnetic fields on the spiral arm instability and its impact on the spacings of the over-densities that result in the spiral arms. We perform three-dimensional self-gravitating magnetised disc galaxy simulations that employ fitted-functions for equilibrium cooling, heating and an external spiral potential. This allows us to capture the spiral arm instability in a more realistic environment and study the effects of magnetic fields on it. For this purpose, we build a library of simulations with varying initial magnetisation.The paper is organized as follows: <ref> describes our library of simulations. In <ref>,we compare the the models with different magnetisation and focus on the basic morphology of our galaxies, the approximate timescales of the spiral arm instability and the cloud spacings of the unstable spiral arms. We discuss the caveats of this work, the role played by magnetic fields on the spiral arm instability and compare them with existing observations and simulations in <ref>. We summarise our results in <ref>. § METHODS§.§ Simulation SetupOur 3D MHD disc galaxy simulations have self-gravity, an external spiral potential, magnetic fields and fitted functions for optically thin cooling and heating that include heating from cosmic and soft X-rays, the photoelectric effect, as well as the formation and dissociation of H_2 <cit.>. The cooling and heating gives us a multiphase ISM consisting of the warm neutral medium (WNM), cold neutral medium (CNM), and a cold molecular medium (CMM). These form self-consistently in our simulations. The cooling and heating curve also has the thermally unstable regime, which is important for molecular cloud formation, in the density range 1 ≤ n /cm^-3≤ 10. Our minimalist global models strike a balance by including the dominant physical effects such as self-gravity, galactic shear, thermal instabilities and magnetic fields, while at the same time avoiding complicated stochastic effects such as star formation and various feedback mechanisms. We do not study these mechanisms here since we want to focus on the effects of magnetic fields and defer an investigation of the effects of feedback to a future study. The system of equations that we solve are- ∂ρ/∂ t + ∇.(ρv)= 0,∂ (ρv)/∂ t + ∇. (ρvv) = -∇ P - ∇ (Φ_ ext+ Φ ) + 1/4π (∇×B)×B,∂B/∂ t = ∇× ( v×B ),∇^2Φ = 4π G ρ,∂/∂ t ( ρ v^2/2 + ρϵ_ int + B^2/8π ) +∇.[( ρ v^2/2 + P + ρϵ_ int + B^2/8π ).v + v_jℳ_ij] = ρ/m_ HΓ-( ρ/m_ H )^2Λ(T), where ρ is the density and v, P, B are the velocity, pressure and the magnetic field of the gas. The gravitational potential of the gas and the external gravitational potential are given by Φ, Φ_ ext. m_ H is the mass of Hydrogen atom and Γ, Λ are the heating and the cooling rates respectively <cit.>. ℳ_ij = B^2/8πδ_ij - B_iB_j/4π, and we use thepolytropic equation of state, which gives ϵ_ int = P/ρ (γ - 1), where γ = 5/3 . These represent the ideal MHD equations where we have neglected the magnetic diffusivity and fluid viscosity. We use the flash <cit.> grid-based magnetohydrodynamical code for performing the simulations. 
The disc is initialized at the centre of a cuboidal box with side length L_xy = 30 kpc in the plane of the disc and L_z = 3.75 kpc in the direction perpendicular to it. The minimum cell size of the base grid is 234 pc, i.e., the base grid has a resolution of 128×128×16 cells. However, we achieve a maximum resolution corresponding to a minimum cell size of 7.3 pc by using adaptive mesh refinement (AMR) with 5 levels of refinement corresponding to a maximum effective resolution of 4096×4096×512 cells, such that the local Jeans length is resolved with at least 32 grid cells <cit.> for R ≥ 5 kpc. We do this in two steps. First we use 4 levels of refinement for the initial 0.3 T_ rot (100 Myr) of the evolution. We then increase the maximum refinement level to 5, which saves computational costs in the initial phases of the evolution, and allows us to achieve an overall higher refinement when the spiral arms start to develop. The initial Jeans length in our models is ∼ 1.8 kpc, and thus we resolve it by 2 additional levels of refinement at the start. In order to avoid artificial fragmentation on the highest level of refinement due to violation of the Truelove criteria <cit.>, we have an artificial pressure term that is adjusted so that the local jeans length is resolved with at least four grid cells <cit.>. §.§.§ Basic setup As done in earlier studies <cit.>, we initialize the disc galaxies in our simulation suite by keeping the effective Toomre parameter (Q_ eff) constant, defined as Q_ eff = κ( c_ s^2 + v_ a^2)^1/2/π G Σ,where κ = √(2)v_ c/r and Σ are the epicyclic frequency and the surface density of the disc, c_ s is the sound speed, and v_ a is the Alfvén speed of the medium. We do this so that all the simulations have a similar response to axisymmetric gravitational perturbations. Next, we use the scale height radial profile, H(R), from H i observations of the Milky Way <cit.>, which gives us ρ (R, z) = κ c_ s√(1 + 2β^-1)/ 2 π G Q_ eff H(R)sech^2 ( z/ H(R) )where H(R) = R_⊙(0.0085 + 0.01719 R/R_⊙ + 0.00564 (R/R_⊙)^2 ) with R_⊙ = 8.5  kpc and β = 2c_ s^2/v_ a^2 is the plasma-beta (ratio of thermal to magnetic pressure) of the disc. For numerical reasons, we define the inner 5 kpc region to be gravitationally stable and keep it unresolved with Q_ eff = 20. We focus on the disc with R>5 kpc, where we have the initial Q_ eff = 3, making the region of interest gravitationally stable to axisymmetric perturbations since Q≥1. We also have pressure equilibrium at the boundaries to avoid any gas inflows and outflows due to the same. We further apply a buffer zone of 1 kpc from the inner disc for our analysis to avoid any boundary effects.We adopt a flat rotation curve for our galaxies, with the circular velocity in the plane of the galaxy given by v_ rot = v_ cR/√(R_ c^2 + R^2),where v_ c = 200   km s^-1 for Milky-Way-like galaxies and R_ c = 0.5   kpc is the core radius. This is the exact solution for the adopted dark matter potentialΦ_ dm = 1/2v_ c^2ln{1/R_ c^2 [ R_ c^2 + R^2 +(z/q )^2] }.We use a marginally lower value of v_ c = 150 km s^-1 compared to the Milky Way. We do this to isolate and focus on the effects of the spiral instability from the presence of swing instabilities in a low-shear environment.§.§.§ Turbulent initial conditionsIn addition to the circular velocity of the disc, we add a turbulent initial velocity field with v_ rms = 10   km s^-1 and a Kolmogorov scaling of k^-5/3 on scales [ 50, 200 ] pc. 
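To make the initial conditions defined above concrete, the scale-height profile and the constant-Q_eff mid-plane density can be evaluated directly from the expressions given in this section. The sketch below is only illustrative: it assumes a fiducial warm-gas sound speed (the actual c_s is listed in the parameter table and not repeated here) and uses β = 50 as an example.

import numpy as np

# constants (SI)
G, kpc, m_H = 6.674e-11, 3.086e19, 1.673e-27

# setup parameters from this section; c_s is an assumed fiducial value, not the table value
v_c, Q_eff, beta, c_s = 150.0e3, 3.0, 50.0, 7.0e3
R_sun = 8.5 * kpc

def H_disc(R):
    """Scale-height profile H(R) adopted from the Milky Way HI fit."""
    x = R / R_sun
    return R_sun * (0.0085 + 0.01719 * x + 0.00564 * x**2)

def n_mid(R):
    """Mid-plane number density (cm^-3) from rho(R, z=0) at fixed Q_eff."""
    kappa = np.sqrt(2.0) * v_c / R   # epicyclic frequency for a flat rotation curve
    rho = kappa * c_s * np.sqrt(1.0 + 2.0 / beta) / (2.0 * np.pi * G * Q_eff * H_disc(R))
    return rho / m_H * 1.0e-6

for R_kpc in (6.0, 8.5, 11.0):
    R = R_kpc * kpc
    print(f"R = {R_kpc:4.1f} kpc:  H = {H_disc(R) / kpc * 1e3:5.0f} pc,  n_mid ~ {n_mid(R):.2f} cm^-3")

With a warm-gas sound speed of this order, the resulting mid-plane densities are a few tenths of cm^-3, i.e. below the n = 1 cm^-3 threshold of the thermally unstable regime, as intended by the stability argument of the setup.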
The turbulent velocity field was constructed to have a natural mixture of solenoidal and compressible modes, generated with the methods described in <cit.>, using the publicly available code <cit.>. The details of the initial turbulent perturbations are not critical for our numerical experiments and they primarily serve to break the symmetries in the idealised setup. They also ensure that the perturbations for the instabilities under investigation here develop self-consistently rather than being seeded by numerical noise. After the initial turbulent seeds have decayed, ISM turbulence is subsequently driven primarily by gas flows and spiral arm dynamics. §.§.§ Magnetic field Magnetic fields are observed in nearby disc galaxies to be roughly in equipartition with the turbulent kinetic energy and in super-equipartition with the thermal energy in the ISM <cit.>. We characterise the strength of the magnetic field with the plasma-beta (β), that is, the ratio of the thermal to the magnetic pressure of the medium, which is also equal to the ratio of the thermal to magnetic energies. Our initial values are β∈{∞, 50, 10}, which represent the hydro, weak and moderate magnetisation cases of our disc. We choose a higher value of β than observed (β_obs ≤ 1), because we expect it to decrease with the dynamical evolution of the galaxy. We initialise the magnetic fields to be completely toroidal (m = 0 mode), which is the dominant mode found in galaxies <cit.>, with a dependence on the gas density, such that B∝ n^α, where α = 0.5 <cit.>. §.§.§ Spiral potential For generating the spiral arms, we adopt a rigidly rotating two-armed spiral potential <cit.> with a pattern speed of 13.34 km s^-1 kpc^-1, which gives us a co-rotation radius of 11.25 kpc, and a pitch angle of α = 20°. Thus, the external gravitational potential can be written as Φ_ext = Φ_dm + Φ_sp, where Φ_dm is the dark matter potential that provides the flat rotation curve (<ref>) and Φ_sp is the spiral potential <cit.>. We choose the amplitude of the spiral such that, on average, the magnitude of the force due to the spiral potential is ∼0.4 times that of the dark matter potential, analytically given by ℱ_sp = ⟨⟨ f_sp⟩_ϕ/f_dm⟩_r ∼ 0.40, where ⟨ f_sp⟩_ϕ is the azimuthal average of f_sp = ∇Φ_sp and ⟨…⟩_r denotes the radial average, taken over 6 to 11 kpc, which is our region of interest. §.§.§ Simulation parameter study Our library of simulations contains three runs having different initial magnetisation, with β∈{∞, 50, 10}, all with the same strength of the spiral arm perturbation, ℱ_sp = 0.4, rotational velocity of v_c = 150 km s^-1, and an effective Q = 3. From <ref>, it follows that fixing these values leads to an initial density field that differs only slightly (by ≤ 2 %) between the simulations with different β. The key initial parameters are summarised in <ref>, where we also show the mass-weighted average density (n), the sound speed (c_s) and the magnetic field magnitude of the disc region of interest. Note that with this parameter set, we initialise our galaxies to be marginally stable to axisymmetric gravitational instabilities, the thermal instability, as well as swing instabilities, since we have an initial Q > 1, a mid-plane density a factor of two below the thermally unstable regime, and a low-shear environment with v_c = 150 km s^-1. § RESULTS Here, we discuss the main results of our simulations.
First we show the basic evolution of the three runs, that is, their general morphology and time scales of spiral arm formation and fragmentation in <ref>. In <ref>, we focus on the cases where the spiral arms fragment into over-dense regions. Here, we also describe our method for extracting the separation of the clouds and then report them as well. We then showcase the physical properties of the spiral arms and the effects that magnetic fields have on them in <ref>.§.§ Basic Evolution Here we describe the basic morphology and evolution of our simulations. We first discuss the spiral arms that form self-consistently, and then their subsequent fragmentation patterns in <ref>. We then quantify the timescales over which we see this evolution in <ref>.§.§.§ Morphology The basic morphology of the dense gas in our galaxy is presented in <ref>, where we show the projected density of the three models. The lower limit of the colourbar is chosen such that the denser and colder regions are highlighted in the simulations. The rows represent different time snapshots as indicated in the panels, and the columns represent the runs with varying magnetisation. Starting with the first row (t = 0.5 T_ rot), we can immediately see that the dense gas is dominantly present in the spiral arms. We can understand this simply by looking at the relevant drivers of the gas dynamics in the system. Since the rotation curve that we use is the analytical solution to the dark matter potential, it is the self-gravity, magnetic fields, the external spiral potential, and the equilibrium cooling/heating that drive the time evolution of the gaseous disc. The spiral potential funnels the gas towards its minima. This forms the dense spiral arms in the presence of self-gravity and cooling, while the magnetic fields oppose this by magnetic pressure (P_ mag = B^2/ 8 π).Moreover, since the initial parameters of the galaxy are such that it is stable to toomre, thermal and swing instabilities, the spiral arms dominates the presence of dense gas in our galaxies. We can see the effect of magnetic fields, even as the spiral arms are forming, when we compare the three runs in the first row of <ref>. The spiral arms are visibly more diffuse with increasing magnetisation due to the additional pressure support of the magnetic fields. As expected, this effect is more pronounced in the β = 10 case, since the β of the gas is considerably larger in that case than in the β = 50 run, where the magnetic fields are sub-dominant in the initial phases of the evolution.We begin to see major differences between the evolutionary paths of the spiral arms between the three runs after they form, as seen in the next panel at t = 0.75 T_ rot, where the arms start fragmenting into a beads-on-a-string pattern for the hydro and β = 50 runs, while on the other hand, the spiral arms are diffusing away in the β = 10 case. This then leads to stark differences at t = 0.9 T_ rot, where the spiral arms that we see at t = 0.75 T_ rot have distinctly separated into clouds for the hydro and β = 50 cases. While they are diffused away in the β = 10 case. Moreover, we see secondary arm formation for all the three cases, visible on the inner face of the fragmented arms in the hydro and β = 50 cases, and in the absence of a fragmented arm, solitary in the β = 10 case. We continue running the β = 10 simulation for t = 1.67 T_ rot, and observe that the disc goes through cycles of arm formation and diffusion, and that these arms never manage to fragment. 
Now we focus on the two runs where we see the spiral arms fragment, namely β = ∞, 50. In the <ref> we can see morphological differences between the two runs at t = 0.75 T_ rot, even though there are no notable differences at t = 0.5 T_ rot, when the spiral arms form. The differences are better seen in the <ref>, where we plot the projected density of the two runs and highlight the cells in the spiral arms with a different colour scheme to accentuate the difference. The two rows are at different times, analogous to the last two rows in <ref>. We trace one of the arms using a friends of friends (FoF) algorithm with a linking length of 60 pc, using cells that are on the verge of being thermally unstable with n_ thresh = 0.9 n_ crit, where n_ crit = 1 cm^-3 is the critical density of the thermally unstable medium. In the first panel, at t = 0.75 T_ rot, we can see that the β = 50 run has the dense structuresall radially elongated, while the β = ∞ one has roughly spherically-shaped over-densities. The second difference is that we see wiggles reminiscent of Kelvin-Helmholtz instabilities, connected with continuously with the gas present in the spiral arms in the hydro run. On the other hand, the spiral arms separate out into distinct clouds without any Kelvin-Helmholtz-like structures in the runs with magnetic fields. There are differences that persist even at late times. At t = 0.9 T_ rot in <ref> we see a larger number of fragments more closely packed in the hydro run compared to the magnetic-field runs. This suggests a different mode of fragmentation in the presence of magnetic fields compared to without magnetic fields.§.§.§ TimescalesA more quantitative picture of the time evolution of the dense gas is presented in <ref>. Here, we show the time evolution of the mass-weighted average density of the gas above the density threshold for the thermally unstable regime, n>1 cm^-3. This is representative of gas that has the potential to become denser since it is in the thermally unstable regime. As expected, we find that this traced, dense gas, is predominantly present in the spiral arms. We visually confirm this by following the evolution of the disc with 10^5 passive tracer particles that are initialised at t=0 in our region of interest (movies to be found in the supplementary material). The three solid lines in <ref> are the three runs with different initial magnetisation. The stars indicate the times at which we show the disc in <ref>. We can break down the time evolution of the dense gas into two phases - one where the spiral arms grow and the second one where it either fragments into clouds or diffuses away. The spiral arm growth phase starts at t ≃ 0.3 T_ rot ( 100 Myr) for all the runs and it lasts till t ≃ 0.64 T_ rot (210 Myr) for the hydro and t ∼ 0.70 T_ rot (230 Myr) for the β = 50 run. In the β = 10 case on the other hand, even though the spiral arms have an appreciable amount of gas around n∼ 1 cm^-3, as seen in the column density plots in <ref>. The arms never manage to get appreciably dense. In the next phase of theevolution, the β = 10 run repeatedly forms transient arms that quickly diffuse after their formation. This phase is seen as small crests and troughs in the <ref> at t ≃ 0.54, 0.74, 0.9 T_ rot. In contrast, the spiral arms in the other two cases fragment. 
This is visible as a sudden change in the slope of lnn̅ in the same figure <ref>, which marks the end of the spiral arm growth phase.During this phase, the average density rises at a faster rate in the β = 50 case when compared with the hydro run. For similar time intervals, from t = 230 to t = 290 Myr, the density rises by a factor of just 2.2 for the hydro run, in contrast to a factor of 5 in the magnetic run. This is seen in the steeper slope of the former compared to the latter in <ref>. Looking at the tracer particle movies, we see that the hydro run separates out into distinct clouds at around t ≃ 290Myr, while the β = 50 run does it much quicker by t ≃ 270Myr.§.§ Cloud Separation and Fragmentation Modes Here we discuss how the separation (spacing) of the clouds that form in the spiral arms of the β = 50 case and the hydro case differ from each other. This gives us insight into the effects of the magnetic fields and on the fastest growing mode of the spiral instability. §.§.§ Extracting the cloud spacings We quantify the separation of the clouds and the wavelength of the unstable modes by using 1D Fourier analysis. We do this on the projected density binned along the length of the spiral arms. We first construct a spiral arm mask using analytical functions and then use this spiral coordinate to bin the density in preparation for the Fourier transformation. Similar to <cit.> we defineξ =ln (R/R_0 )sinp+θcosp,η = ln (R/R_0 )cosp-θsinp,as the spiral arm coordinates, where R_0 = 1 kpc is the scaling and p is the pitch angle of the spiral arm under consideration. The ( ξ, η )coordinates can be thought of as the ( lnR, θ )-plane rotated counter-clockwise by the angle p. Here ξ is the coordinate along the spiral arm and η is locally perpendicular to it. To define a spiral arm region of a certain thickness, we use a rectangular mask in the (ξ, η)-plane, where the thickness will be decided by the extent in the η coordinate. Our spiral arm mask is shown in <ref>, where the first panel shows the projected density of the galaxy at t = 0.75 T_ rot in the( lnR, θ ) plane along with the masked region shaded and the unit coordinate vectors (ê_ξ,ê_η) on the bottom edge of the mask. The same plot is shown in the ( x, y) plane in the second panel, and the third panel shows the binned projected density along the spiral arm (coordinate ξ). We construct the mask starting with an initial guess for the pitch angle, p_i, that determines the (ξ, η) plane. Next, we draw the rectangular mask in this plane that is then a rectangle bounded via fixed values of lower edge ( lnR_0, θ_0 ) and the maximum radial coordinate (lnR_ max ) that we determine via visual inspection at the beginning. Now, we test the correctness of this initial rectangular mask by fitting a straight line to all the(ln R, θ ) coordinates of cells that lie in this mask weighted by their densities. If it were a good mask that covers all the dense regions then the slope of this line m_ line will be close to m_i = tanp_i. However, if it is not the case, we use it for the next iteration, where p_i+1 = tan^-1(m_i). For convergence we use an absolute tolerance of p_i+1 - p_i = 0.05^∘. Once we have the mask, we simply bin the projected density on the coordinate along the spiral arm.Our algorithm is similar to what <cit.> used on observational data, but with one key difference - here we explicitly allow for a certain thickness of the spiral arm mask in the direction perpendicular to its length, while <cit.> do not. 
Instead, they just fit the curve η = const by selecting all the pixels that lie along the spiral arm by eye and determine the pitch angle p of the spiral arm via a least square fit.The projected density on the coordinate along the spiral arm is seen in the bottom panel of <ref>. Here, the peaks in the column density are the encountered over-densities along the length of the spiral arm. We chose the number of bins such that there are ≥ 4 cells in each bin. With this binned density, we finally take the 1D discrete Fourier transform, with the following convention:Σ̂_ sp[k] = 1/N∑ ^N-1 _m = 0Σ_ sp, rel[m] exp (-2 π i k m/N ),k = 0, 1,...,N-1,where N is the total number of bins, Σ_ sp, rel[m] = Σ_ sp[m]/Σ_ sp - 1 is the relative-surface density along the spiral in the mth bin and k is the wave number in units of 1/L_ spiral, with L_ spiral being the length of the spiral arm. We calculate L_ spiral by averaging over the lengths of the inner and the outer edge of the spiral arms. The power is then taken to be, P[k] = Σ̂_ sp [k] Σ̂_ sp^* [k], where Σ̂_ sp^* [k] is the complex conjugate of Σ̂_ sp [k]. As a final step, we bin the resultant power spectrum in bins of length k = 2.§.§.§ Cloud separationWe use the 1D power spectrum of the projected density derived in the last section to quantify the separation of the clouds in the spiral arms. We calculate the power spectrum at t ∈{0.75 T_ rot, 0.90T_ rot} of one of the spiral arms as an example. These times correspond to the times shown in the bottom two panels of <ref> and roughly indicate when the spiral arms are undergoing fragmentation and when the spiral arms have separated out into distinct clouds. The power spectrum for one of the arms is shown in <ref>, where the left panel is for the hydro run and the right panel is for the β = 50 case. The power on the vertical axis is plotted against the wave number k, which has units of 1/L_ spiral, where L_ spiral is the length of the spiral arm .We can see the rise in the power from t = 0.75 T_ rot to t = 0.90T_ rot as the arm fragments and the clouds separate out and become denser. This naturally gives us a higher signal, as the Σ_ sp, rel grows and the spacings become more distinct. The hydro case has more power at early times, compared to the β = 50 run, as expected, since it starts fragmenting t∼ 20 Myr earlier than the latter (c.f. Fig. <ref>). At later times, t = 0.9 T_ rot, we see distinct multiple peaks with similar power, where majority of the power resides on larger scales with slight differences, for both the runs. For the hydro run this is for k≲ 80 (l ≳ 260pc), while for the magnetic case we have k ≲ 50 (l≳ 390 pc).The power spectrum also gives us insights into the regularity of the separation of the clouds in this particular arm. At t = 0.90 T_ rot the hydro run has a major peak at k = 40 (l ∼ 500 pc). Similarly, in the β = 50 case, we see a major peak on larger scales at k = 30 (l ∼ 650 pc). Both of these correspond to the total number of clouds that we see in the respective arms. This signal in the Fourier analysis indicates that the adjacent separations of the clouds are regular. The magnetic case also has a major peak present at k = 14 (l ∼ 1.4  kpc ), which corresponds to the number of brighter clouds along the spiral arm, also visible as the taller spikes in the bottom panel of <ref>.An interesting feature in the Fourier transform common to both the runs, is the existence of major peaks that come in pairs of multiples of 2. 
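The mask construction and Fourier analysis described above translate almost line by line into NumPy. The sketch below is a schematic implementation only: the function names, the user-supplied in_mask selection, and the bin count are ours and not taken from the original analysis pipeline; it assumes cell-wise arrays of cylindrical radius R, azimuth θ, and projected density Σ for the arm region, and that every bin along the arm is populated.

import numpy as np

R0 = 1.0  # kpc, scaling used in the spiral coordinates

def spiral_coords(R, theta, p):
    """(xi, eta): the (ln R, theta) plane rotated by the pitch angle p (radians)."""
    xi = np.log(R / R0) * np.sin(p) + theta * np.cos(p)
    eta = np.log(R / R0) * np.cos(p) - theta * np.sin(p)
    return xi, eta

def fit_pitch_angle(R, theta, sigma, in_mask, p0, tol=np.deg2rad(0.05), max_iter=50):
    """Iterate a density-weighted fit of ln(R/R0) = tan(p) * theta + const inside the mask."""
    p = p0
    for _ in range(max_iter):
        sel = in_mask(R, theta, p)            # boolean selection of cells in the current rectangle
        slope = np.polyfit(theta[sel], np.log(R[sel] / R0), 1, w=sigma[sel])[0]
        p_new = np.arctan(slope)
        if abs(p_new - p) < tol:
            break
        p = p_new
    return p_new

def arm_power_spectrum(xi, sigma, nbins):
    """Bin the projected density along the arm and Fourier transform Sigma/<Sigma> - 1."""
    edges = np.linspace(xi.min(), xi.max(), nbins + 1)
    idx = np.digitize(xi, edges[1:-1])        # bin index 0 ... nbins-1
    sigma_bin = np.array([sigma[idx == b].mean() for b in range(nbins)])
    rel = sigma_bin / sigma_bin.mean() - 1.0
    fk = np.fft.fft(rel) / nbins              # hat Sigma[k] = (1/N) sum_m rel[m] exp(-2 pi i k m / N)
    power = (fk * np.conj(fk)).real
    k = np.arange(nbins)
    npair = (nbins // 2) * 2                  # bin the power spectrum in bins of length k = 2
    return k[:npair].reshape(-1, 2).mean(axis=1), power[:npair].reshape(-1, 2).sum(axis=1)

Wavelengths in physical units then follow from λ = L_spiral/k, with L_spiral the arm length averaged over its inner and outer edges as described above.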
For example, we have k ∼ (24, 48), (30, 60), (40, 80) in the hydro case and k ∼ (14, 30), (22, 44), (34, 70) for the β = 50 case. This feature could be a combination of two things - the first is the physical presence of such modes due to the nature of the spiral arm instability, and/or the second is due to the asymmetry in the negative and positive values present in the function, Σ_ sp, rel[m] = Σ_ sp[m]/Σ_ sp - 1, of which we take the Fourier transform. In the latter case we expect a single peak of the dominant mode followed by even harmonics with a decreasing power-law amplitude of the subsequent peaks. Looking at the lower panel of <ref>, we can see that the function, Σ/Σ_0 - 1, is indeed asymmetrical, showing that the voids between the clouds are not as under-dense as the clouds are over-dense. However, apart from the dominant modes at low k, the higher k modes all have comparable amplitudes, and thus we consider them to be physical modes present in the system. We test whether the differences observed in one of the spiral arms between the hydro and the magnetic cases can be generalised. For this, we run an identical pair of simulations for both the hydro and β = 50 cases with a different random seed for the initial turbulence. We then also use both the spiral arms of each simulation in our analysis. As done before, we calculate the power spectrum of the projected density along each of the arms. We show the peaks of the power spectrum in <ref>. Here, the error bars indicate the FWHM with a minimum value equal to the binning length k = 2, with respect to the physical scale in kpc on the horizontal axis. The left panel is for the hydro run and the right panel for the β = 50 run. The different colours are for the runs with different initial turbulent seeds. The star and its error bars indicate the 50th (median) and the 16th to 84th percentile range after the points are binned in bins of 0.2 kpc. We see from the <ref> that the trends we observed in one of the spiral arms carry over to the general case as well. The power in both the cases exists on large scales, with the hydro run having major power over l≳300 pc and the magnetic case having it on l≳400 pc. We also see many significant peaks that are a multiple of 2 of a lower k mode in both the magnetised as well as the hydro case. The rise in the cloud separation and unstable modes seen in the Fourier transform of one arm is also seen as a general trend of the presence of magnetic fields. Moreover, we also see that the power rises more steeply on small scales and also falls sharply after 1 kpc in the hydro case. For the magnetic-field case the rise in the power is shallower on smaller scales and spreads towards larger scales till 1.5 kpc. This is reflected in the percentiles of the distribution of the peaks, where both the median and the 16th to 64th percentile range of the hydro case rises from 0.73^+0.31_-0.36 kpc to 0.98^+0.49_-0.46 kpc in the magnetised case. §.§ Effects of magnetic fields on the physical properties of spiral armsAs we have seen, the presence of magnetic fields, regardless of their strength, influences the evolution of the spiral arms. The magnetic fields themselves, are also expected to evolve with the gas in our simulations. In order to gain insight into this, we explore the physical properties of the spiral arms in the three different runs with different initial magnetisation before they fragment or diffuse away. 
We use one of the spiral arms in each simulation, since we find that they have very similar physical properties. As done in <ref>, we trace the gas in the spiral arm under consideration at t = 0.5T_ rot using a friends of friends algorithm with a linking length of l = 60pc, which is the approximate cell length of the surrounding warm neutral medium around the spiral arms. In addition, we also use a density threshold of n = 0.9 cm^-3, which is similar to the critical density of the thermally unstable medium. The traced spiral arms is presented in <ref>, where we show the projected density of the three runs along with the traced spiral arms highlighted in a different colour scheme. We use these traced regions for all the properties we report in this section.The total mass present in the arms is similar, i.e., log_10 (M/(M_⊙)=7.90, 7.85 and 7.71 for the hydro, β = 50, and β = 10 case, respectively. However, other physical properties vary systematically between them. We can see this in the <ref> where we show the mass-weighted probability density functions (PDFs) of the log_10 of density in the left panel, the sound speed in the middle panel, and the cell-by-cell plasma-beta of the runs in the right panel. The stars on the histograms mark the average mass-weighted quantities in the respective vertical axis. In the left-most panel, as we saw in <ref>, we see that the spiral arms get more diffuse with increasing magnetisation. As seen in the mean density, that decreases by a factor of ∼ 2, possibly due to the additional opposing magnetic pressure. We find that the majority of the gas in the β = 10 case is present in the density range ≃ 1–3 cm^-3 which is thermally unstable. Despite this, it never manages to get denser due to the opposing magnetic pressure. We can also see this effect in the middle panel, where the gas in the hydro and the β = 50 cases have a lower sound speed when compared to the β = 10 case, showcasing that the gas has already cooled in the former two cases, while the gas remains warm and close to its initial temperature in the latter. This is also reflected in the scale heights of the spiral arm, where it is found to be 50.2 ± 2.2 pc in the β = 50 case and 98.6 ± 3.9 pc in the β = 10 case. The details of the scale height calculation can be found in Appendix:scale_height_estimation. Thus the presence of magnetic fields largely makes the spiral arms more diffuse and hotter.Interestingly, even though we start with a factor of 5 difference in the initial plasma-beta of the two magnetised runs, we note that they have similar values in the spiral arms. We see this in the right-most panel of <ref>, where the plasma-beta of the run with initial β = 50 has reduced by a factor of ∼ 25 to an average value of ∼ 2, and even has a non-negligible fraction of cells with values ≤ 1, where the magnetic fields dominate over the gas pressure. The β in β = 10 run reduces by a factor of 5 to reach similar values as in the β=50 case. This shows that the magnetic fields, even though not dynamically important at the beginning of the simulation do become important later on in the spiral arms. This increase in field strength could be due to the combined effects of field tangling, adiabatic compression, and cooling, or the presence of a dynamo <cit.>. These effects are much more pronounced in the β = 50 case, where the gas compresses and cools in the absence of dynamically significant magnetic fields, when compared to the β = 10 case, where they oppose compression and cooling. 
Thus, in the β = 50 case, we get magnetic fields that are dynamically significant "after" the spiral arms have become dense enough. This results in fragmentation in the presence of magnetic fields, and changes the nature of the instability when compared with the hydro case. We discuss this in detail in the next section.§ DISCUSSION Here, we discuss the physical effects of magnetic fields on the spiral arm instability and compare them with previous theoretical and observational studies. First, we focus on the stabilising effect in <ref>, their destabilising effects in <ref>, and then go on to discuss the cloud separation and unstable modes in <ref>. Lastly, we point out some caveats of our work in <ref>.§.§ StabilisationWe find that moderate initial magnetic fields with initial β = 10, can stabilise the spiral arms against fragmentation. As seen in <ref>, the spiral arms that form in this case are more diffuse and hotter compared to the other cases (hydro and β=50 models) where they fragment. This is mainly due to the increased magnetic pressure of the B-fields that opposes the gas from getting any denser. This inhibition of compression due to the additional magnetic pressure agrees well with other global disc galaxy simulations <cit.>. However, we note that our value of β = 10 for stabilisation is higher than the ones observed in other studies –β≤ 0.1 <cit.> and β≤ 1 <cit.>. As pointed out before, our models are gravitationally stable initially, and also have low shear and a warm medium.For weak magnetic fields, β = 50, even though we see that the spiral arms fragment, they do so in a different morphological manner than in the hydro run. The KHI-like wiggles as seen in the hydro case <cit.>, are not present in the weakly magnetised simulation. This is because the magnetic field in the spiral arms becomes dynamically important, with β_ arm∼ 2, which then opposes the wiggles due to magnetic tension. This stabilising effect of magnetic fields, indeed, has been reported for magnetic fields of near equipartition strengths in both global 2D <cit.> and local 3D simulations <cit.>.§.§ Destabilisation For the case with weak initial magnetisation, the magnetic fields rise to equipartition levels within the arm (c.f., right-hand panel of Fig. <ref>). However, instead of stabilising the arm, as we expect from the additional magnetic pressure, the arms still fragment. They do so by clearing out the gas within the arm into clouds ∼ 20  Myr before the hydro case (see <ref>). We attribute this to the possible presence of the Parker instability within the spiral arms. Parker instability <cit.> arises in a magnetised plasma present in a stratified medium akin to a disc galaxy. A fluid element with a small magnetic-field over-density in the disc becomes more diffuse due to the added magnetic pressure, and will tend to rise upwards. Since the medium is stratified, after rising, the fluid element looses more gas to adjust to the decrease in the ambient pressure, thus becoming lighter and more unstable <cit.>. This eventually results in a characteristic magnetic field structure of regular hills and valleys above and below the galactic plane, where clouds are expected to form in the valleys of the magnetic-field lines. In our simulations, we suspect that the initial magnetic over-densities are naturally provided by the spiral arms. 
To test the plausibility of the presence of the Parker instability, we estimate the expected growth rates and length scales from linear theory for the gas in the spiral arms before they fragment, at t = 0.50 T_ rot. Taking the average physical properties of the spiral arm presented in <ref>, with c_s,arm = 3.55 km s^-1, β_ arm = 2.32, H_ arm = 50.2 pc, and γ_e = 1, we find an inverse growth rate of τ≃ 33 Myr, and a wavelength of the fastest-growing mode of ≃ 600 pc <cit.>. As a result of the cooling present in our simulations, the gas is also expected to cool in the valleys as it gets denser <cit.>. This, along with self-gravity, is expected to increase the growth rates of the instability, which makes both the length scales and time scales remarkably close to the ones we observe in the spiral arms. From <ref>, we can roughly estimate τ_ arm∼ 30 Myr, and as we saw in <ref>, the cloud separation in the β = 50 runs is ≃ 650 pc.

In addition to this, we see the characteristic magnetic field morphology associated with the Parker instability in our spiral arms. A section of this characteristic field structure is shown at t = 0.75 T_ rot in <ref>. To produce this graph, we initialise the magnetic field lines at one yz face of the three-dimensional box in a circular plane of radius 250 pc, coloured by the field strength. The clouds are solid iso-contours with a density of 2 cm^-3, which is about twice the critical density of the thermally-unstable medium. We can see the magnetic-field lines rising above and below the plane of the spiral arm on scales of ∼ 100 pc. Since our thermally-unstable medium is in the range 1-10 cm^-3, the gas predominantly cools in the magnetic valleys as expected. Our results are in agreement with <cit.>, who found that the Parker instability in unison with the thermal instability can lead to the formation of dense clouds in sections of spiral arms. In contrast to the β=50 case, the seeds of the instability are never provided in the β = 10 case, since the spiral arms keep dispersing away before they get dense enough to fragment when the field is too strong.

§.§.§ Comparison to observations

One nearby galaxy, NGC 628, has recently been found to have evidence of magnetic Parker loops along one of its spiral arms in RM synthesis maps <cit.>. These loops are roughly coincident with the regularly spaced star-forming regions that were studied in <cit.>. A similar pattern was seen in the NW arm of M31, where the wavelength of the Parker loops, ∼ 2.3 kpc <cit.>, was found to be twice the separation of the regularly spaced star-forming regions found along the arm <cit.>. These are encouraging signs of the presence of the Parker instability. However, to draw firm conclusions, more detailed analyses of a wider sample of galaxies that exhibit this regular spacing of star-forming regions along their spiral arms are needed.

§.§.§ Comparison to simulations

Linear stability analysis and local 2D simulations have found a reduction in the growth rate of the spiral instability by a factor of 4 in the presence of equipartition magnetic fields, in comparison to the hydrodynamical case <cit.>. This is in contrast to our results, where we see an increase in the growth rates. This discrepancy could be due to their 2D approximations, in which they do not capture the onset of the Parker instability as observed in our simulations.
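As a rough consistency check of the Parker-instability estimates quoted earlier in this subsection (and not the full linear-theory calculation, which depends on the adopted dispersion relation), one can verify that the quoted growth time and wavelength correspond to a few arm sound-crossing times and roughly a dozen scale heights; a sketch using astropy units, with the values assumed as stated above, is given below.

from astropy import units as u

c_s = 3.55 * u.km / u.s          # arm sound speed quoted in the text
H   = 50.2 * u.pc                # arm scale height
tau = 33.0 * u.Myr               # quoted inverse growth rate
lam = 600.0 * u.pc               # quoted fastest-growing wavelength

t_cross = (H / c_s).to(u.Myr)    # sound-crossing time of one scale height, ~14 Myr
print(t_cross)
print((tau / t_cross).decompose())   # growth time in crossing times, ~2-3
print((lam / H).decompose())         # wavelength in scale heights, ~12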
<cit.> observed magnetic destabilisation in their local 3D simulations of the spiral shock fronts, but attributed it to the magneto-Jeans instability (MJI), as they did not observe the characteristic Parker loops. We think this is not the case in our simulations, since the spiral arms separate out into clouds when the arms are still at low densities (n_ av∼ 5 cm^-3). It was also later argued that <cit.> had an insufficient box size perpendicular to the plane of the disc to find the Parker modes <cit.>.

§.§ Cloud separation

§.§.§ Comparison to observations

So far, only four spiral galaxies, namely NGC 628 (M74), NGC 895, NGC 5474, and NGC 6946, have been analysed in detail for the separation of their regularly spaced star-forming regions using a method similar to ours <cit.>, where the spiral arm was parameterised in the (ln R, θ) plane. It was found that adjacent star-forming regions in all four galaxies were either at a spacing of 350-500 pc and/or integer multiples (2-4) of this range. This range of separations is remarkably similar to what we find, that is, ≃ 500 pc in the hydro and ≃ 650 pc in the weakly magnetised run. This is in spite of the differences between the parameters of our models and these galaxies. We found that the Fourier transform of the column density along the spiral arms exhibits peaks that are integer multiples of each other (c.f. Fig. <ref>). This effect has also been reported in all four galaxies mentioned here <cit.>. Since this is seen in both the hydro and the weakly magnetised cases, we think magnetic fields are not the main cause behind this, as previously suggested in <cit.>. This intriguing trend is also reflected in the strings of HI super-clouds found in the Carina arm of the Milky Way, separated by 700 ± 100 pc, where more massive clouds are at about twice this separation <cit.>. To draw firm conclusions from the cloud separations themselves, however, we need to expand the parameter space and tune our models to different nearby galaxies. This will help us understand the dependence of the cloud separation on the global properties of the galaxy.

§.§.§ Comparison to simulations

Other numerical works have reported an increase in the cloud separation by a factor of 2 <cit.> and 3 <cit.> in spiral arms in the presence of magnetic fields, for plasma-beta values similar to ours. This is larger than what we find, i.e., a factor of 1.3 increase in the adjacent cloud separation as well as in the average unstable mode along the spiral arms. This could be due to the limited number statistics of the clouds in their local simulation boxes (≲ 10) or other limitations introduced by the local approximation, compared to global disc simulations.

§.§ Caveats

Since we focus on the effects of magnetic fields on the formation of clouds, for the sake of simplicity and computational cost, we do not include a star formation model or various feedback mechanisms, such as supernova feedback, ionising radiation, winds from massive stars, or cosmic rays. These processes could potentially have an impact on the fragmentation process of the clouds and the structure of the spiral arms themselves. Such models including star formation and feedback will be considered in future studies. Here we focus purely on the effects of gas dynamics, which suggests that at least the onset of fragmentation of the spiral arms can be explained by self-gravity, cooling, and magnetic fields alone.
Our galaxies are gravitationally stable, have low shear (compared to the Milky Way), and are in the thermally stable regime initially. This is done to ensure that our galaxy is dominated by the spiral arms, since we focus on the spiral arm instability in a global setting.

§ SUMMARY

We study isolated spiral disc galaxies in global three-dimensional simulations with self-gravity, magnetic fields, equilibrium heating and cooling, and an external spiral potential, to investigate the impact of varying magnetic field strengths on the spiral arm instability. The spiral arms in our simulations form self-consistently and fragment into beads-on-a-string patterns. We find that the magnetic fields have a major dynamical impact on the spiral arm instability, which mainly depends upon their initial strength. Our conclusions are summarised as follows:

* For comparable spiral background potentials, we find that moderate initial magnetic fields (β = 10) stabilise the spiral arms against fragmentation, in contrast to the hydro and the weak-field case (β = 50), where the arms are unstable. The moderate magnetic field case forms arms that are more diffuse and hotter compared to the other cases, due to the additional opposing magnetic pressure.

* For the case of weak initial magnetic fields (β = 50), the spiral arms fragment in the presence of amplified equipartition magnetic fields in the arms (β_ arm∼ 2.3). The magnetic tension of the fields stabilises the vortical KHI-like wiggles present in the hydro case.

* We estimate the adjacent cloud separations to be ∼ 500 pc in the un-magnetised (hydro) case and ∼ 650 pc in the weakly-magnetised case. This is remarkably close to the separations observed in many nearby spiral galaxies, which show separations of star-forming regions in the range of 350-500 pc and/or integer multiples of this range.

* We find that the wavelength of the average unstable mode along the spiral arms increases from 0.73^+0.31_-0.36 kpc in the hydro case to 0.98^+0.49_-0.46 kpc in the weakly-magnetised case.

* Additionally, we find that the 1D Fourier power spectrum of the column density along the spiral arms shows peaks that are integer multiples of each other for both the magnetic and un-magnetised cases. This has also been reported for the nearby galaxies analysed for the regularity of star-forming regions along their spiral arms.

* The spiral arms in the weakly-magnetised case separate out into disjointed clouds along the arms around ∼ 20 Myr before the hydro case. We find evidence that this may be due to the onset of the Parker instability in the spiral arms. The linear growth rates and length scales calculated for the Parker instability are close to the values seen in the simulation. We also show the magnetic field morphology around the clouds in the arm, which forms magnetic hills and valleys (c.f., Fig. <ref>), as expected from linear theory.

With the advent of the James Webb Space Telescope, we can now resolve infrared cores observed along the spiral arms of nearby galaxies <cit.> in unprecedented detail. Future parameter studies will aim to tailor our models to nearby galaxies for a more direct comparison with these observations.

C. F. acknowledges funding provided by the Australian Research Council (Future Fellowship FT180100495 and Discovery Project DP230102280), and the Australia-Germany Joint Research Cooperation Scheme (UA-DAAD).
We further acknowledge high-performance computing resources provided by the Leibniz Rechenzentrum and the Gauss Centre for Supercomputing (grants pr32lo, pr48pi and GCS Large-scale project 10391), the Australian National Computational Infrastructure (grant ek9) and the Pawsey Supercomputing Centre (project pawsey0810) in the framework of the National Computational Merit Allocation Scheme and the ANU Merit Allocation Scheme. The simulation software, , was in part developed by the Flash Centre for Computational Science at the Department of Physics and Astronomy of the University of Rochester. aa§ SCALE HEIGHT ESTIMATION We estimate the scale height around the spiral arms by quantifying the density as a function of the z-coordinate, which is perpendicular to the plane of the galactic disc. We take the spiral arms traced via the friends of friends (FoF) algorithm (see <ref>). For each (x,y) in the arms, we define z = 0 as the point of maximum density. This makes sure that we trace the spiral arms in three dimensions. We then bin the density in bins of 20pc. This is shown in <ref>, where the density is plotted on the vertical axis and the z-coordinate on the horizontal axis. For the scale height calculation we then fit the functional form ρ(z) = ρ_0exp(-|z|/H) to the binned data. The fits are shown as dotted lines in <ref>. This gives us the ρ_0 and the scale height H. The scale heights are found to be 50.2 ± 2.2 pc for the β = 50 case and 98.6 ± 3.9 pc for the β = 10 case.
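A minimal sketch of the exponential profile fit described in this appendix, assuming binned arrays z_pc and rho (names hypothetical), could look as follows.

import numpy as np
from scipy.optimize import curve_fit

def rho_profile(z, rho0, H):
    """Exponential vertical profile, rho(z) = rho0 * exp(-|z|/H)."""
    return rho0 * np.exp(-np.abs(z) / H)

# z_pc : bin centres of the vertical coordinate (z = 0 at the density maximum), in pc
# rho  : binned density in each z bin
# popt, pcov = curve_fit(rho_profile, z_pc, rho, p0=(rho.max(), 50.0))
# rho0_fit, H_fit = popt
# H_err = np.sqrt(np.diag(pcov))[1]   # 1-sigma uncertainty on the scale height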
http://arxiv.org/abs/2311.16266v1
{ "authors": [ "Raghav Arora", "Christoph Federrath", "Robi Banerjee", "Bastian Körtgen" ], "categories": [ "astro-ph.GA" ], "primary_category": "astro-ph.GA", "published": "20231127191716", "title": "The role of magnetic fields in disc galaxies: spiral arm instability" }
http://arxiv.org/abs/2311.15750v1
{ "authors": [ "Lucas R. D. Freitas", "Tim Bauer", "Reinhold Egger", "Rodrigo G. Pereira" ], "categories": [ "cond-mat.str-el" ], "primary_category": "cond-mat.str-el", "published": "20231127120839", "title": "Electric polarization near vortices in the extended Kitaev model" }
Zhibo Yu (喻知博) [ORCID: 0000-0002-6990-9058], Department of Astronomy and Astrophysics, 525 Davey Lab, The Pennsylvania State University, University Park, PA 16802, USA; Email: [email protected]; Institute for Gravitation and the Cosmos, The Pennsylvania State University, University Park, PA 16802, USA

Fan Zou (邹凡) [ORCID: 0000-0002-4436-6923], Department of Astronomy and Astrophysics, 525 Davey Lab, The Pennsylvania State University, University Park, PA 16802, USA; Institute for Gravitation and the Cosmos, The Pennsylvania State University, University Park, PA 16802, USA

William N. Brandt [ORCID: 0000-0002-0167-2453], Department of Astronomy and Astrophysics, 525 Davey Lab, The Pennsylvania State University, University Park, PA 16802, USA; Institute for Gravitation and the Cosmos, The Pennsylvania State University, University Park, PA 16802, USA; Department of Physics, 104 Davey Laboratory, The Pennsylvania State University, University Park, PA 16802, USA

The eFEDS is a wide ≈140 deg^2 field that has extensive multiwavelength coverage. To improve the utility of the existing data, we fit source Spectral Energy Distributions (SEDs) from X-rays to far-infrared (FIR), mainly to derive stellar masses (M_⋆) and star-formation rates (SFRs) for normal galaxies and X-ray Active Galactic Nuclei (AGNs). The catalog consists of 2,057,027 galaxies and 10,373 X-ray AGNs located in the ≈60 deg^2 GAMA09 sub-field. Comparing our M_⋆ with other available catalogs and our SFRs with FIR-derived SFRs, we demonstrate the general reliability of our SED-fitting measurements. Our catalog is publicly available at https://doi.org/10.5281/zenodo.10127224.

§ INTRODUCTION

The eROSITA Final Equatorial Depth Survey (eFEDS) was the largest observational investment during the eROSITA performance verification phase. The entire field, spanning ≈140 deg^2, was observed to a depth of ≈2.2 ks by eROSITA <cit.>. eFEDS was constructed to encompass the ≈60 deg^2 GAMA09 field <cit.>, which has rich multi-wavelength data. Given the X-ray coverage from eROSITA and the well-cataloged photometric data from UV to FIR on a 10^2-deg^2 scale, eFEDS is useful for both AGN and galaxy studies. To facilitate future studies, we report M_⋆ and SFRs for ≈ 2 million sources in the eFEDS GAMA09 field, primarily utilizing the GAMA-derived photometric catalog that is based upon KiDS and VIKING imaging data <cit.>. The SED fitting was performed with the code described in <cit.>.

§ DATA AND METHODS

Our analysis focuses on the GAMA09 region since the remaining parts of eFEDS lack sufficient multi-wavelength coverage, especially the NIR coverage provided by KiDS and VIKING, which is crucial in deriving reliable M_⋆. We select X-ray AGNs based on the eFEDS X-ray main catalog <cit.>, and the intrinsic X-ray fluxes are taken from <cit.>, who corrected for absorption via X-ray spectral analyses. The rest of the sources are classified as galaxies.

The UV-to-FIR photometry is from the GAMA-KiDS-VIKING (GKV) catalog compiled in <cit.>. In general, the multiwavelength coverage is uniform, reaching 5σ depths of 24.2 mag and 21.3 mag for i-band and K_S-band, respectively. We also incorporate the HSC-Wide survey to support the optical coverage, which uniformly covers the entire GAMA09 region with a 5σ limiting magnitude of 26.1 mag in i-band <cit.>. The effective filter response and zero-point calibration across different HSC bands are also corrected. Both the GKV and HSC-Wide catalogs have accounted for Galactic extinction. Additionally, we include Herschel FIR photometry from the HELP collaboration <cit.>.
The photometric redshifts (photo-zs) and spectroscopic redshifts (spec-zs) are taken from the compilations of <cit.> and <cit.> for X-ray AGNs and normal galaxies, respectively. The photo-zs generally have good quality, with typical dispersions of a few percent and outlier fractions of less than 10%. We also drop sources near bright stars so that the impact on our overall photometry is minimal. Our final sample contains 2,057,027 normal galaxies and 10,373 X-ray AGNs <cit.>.

We apply the same methods as in <cit.> to derive the best-fit M_⋆ and SFRs. Briefly, the SED-fitting code assumes an energy-balance principle and decomposes a SED into several user-defined components (including AGNs). For normal galaxies and X-ray AGNs, we use the dense-grid parameter settings for normal galaxies and AGN candidates in <cit.>, respectively (see their Tables 4 and 5). We fit the near-UV (NUV) to FIR SEDs for galaxies and add X-ray photometry for AGNs. To account for systematic uncertainties, a 0.05 mag error is added in quadrature for all bands from NUV to NIR.

§ RESULTS

To evaluate the reliability of our M_⋆ measurements, we compare our M_⋆ for normal galaxies with the results from the GAMA collaboration (M_⋆, ref) <cit.>, which only include bright sources with GAMA spectra. For X-ray AGNs, we refer to the M_⋆ measurements from <cit.>. The comparisons are shown in the top-left panel of Figure <ref>, where Δlog M_⋆ = log M_⋆ - log M_⋆, ref. For 59,653 normal galaxies and 2,029 X-ray AGNs, the NMAD is 0.12 and 0.23 with median Δlog M_⋆ = 0.02 and -0.12, respectively, where NMAD is the normalized median absolute deviation.[NMAD is defined as 1.4826 × median absolute deviation.] To assess the SFR measurements, we compare SFRs based upon our SED-fitting and FIR-based SFRs (SFR_FIR) that are derived following the method in <cit.>. We also correct for old-star heating following Equation 25 in <cit.>. The Figure <ref> top-right panel shows the comparisons between different SFRs for both galaxies and X-ray AGNs, where ΔlogSFR = logSFR - logSFR_FIR. For 34,610 galaxies and 862 X-ray AGNs with FIR signal-to-noise ratio (SNR) > 5, the NMAD is 0.41 and 0.28 with median ΔlogSFR = -0.19 and 0.04, respectively. These values are generally as good as those in <cit.>, where they fit SEDs to three million sources in the ≈13.2 deg^2 XMM-SERVS fields.

Due to the large area and shallow X-ray depth of eFEDS, there will be a significant fraction of BL AGNs whose AGN components generally dominate the NIR emission, causing less reliable M_⋆ measurements. Thus, we also plot Broad-Line AGN (BL AGN) candidates in Figure <ref>, which are selected as having AGN components constituting >50% of the total flux density at rest-frame 1 μm. The impact of a high AGN contribution on M_⋆ is clearly shown by the widely scattered Δlog M_⋆ (NMAD = 0.66). Apart from eFEDS, we also apply the same method for SED-fitting to COSMOS, and the results are consistent with the COSMOS2020 catalog by <cit.>.

We further estimate the nominal depth of our measurements. We define "good bands" as those with SNR > 5, and we plot the number of good bands vs. i-band magnitude in the Figure <ref> bottom panel. The plot indicates that the SED quality in eFEDS degrades at i-mag ≈ 21.5. Approximately 30% of our sources are brighter than this magnitude. Among these, 10% of normal galaxies and 43% of X-ray AGNs have available spec-zs. We only show sources with i-mag < 22 in the Figure <ref> top panels, which constitute ≳90% of the sources that we compared.
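Two of the ingredients above — the 0.05 mag systematic error floor and the NMAD statistic used for the comparisons — are simple to write down; an illustrative sketch follows. The magnitude-to-flux uncertainty conversion uses the usual σ_m ≈ (2.5/ln 10) σ_f/f approximation, and all variable names are hypothetical.

import numpy as np

def add_mag_floor(flux, flux_err, mag_floor=0.05):
    """Add a systematic magnitude error floor in quadrature, working in flux space."""
    sys_err = flux * mag_floor * np.log(10.0) / 2.5
    return np.sqrt(flux_err**2 + sys_err**2)

def nmad(x):
    """Normalized median absolute deviation, 1.4826 * median(|x - median(x)|)."""
    x = np.asarray(x)
    return 1.4826 * np.median(np.abs(x - np.median(x)))

# delta = logM_thiswork - logM_ref
# print(np.median(delta), nmad(delta))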
We warn readers to use our catalog at i-mag ≳ 22 cautiously, as fainter sources normally contain only ∼6 bands primarily from HSC, and it is harder to constrain their properties due to lack of NIR coverage.Our catalog is available at https://doi.org/10.5281/zenodo.1012722410.5281/zenodo.10127224. We provide the source , SFRs, classifications (X-ray AGNs vs. normal galaxies), and necessary information that can be helpful to cross-reference GAMA objects and/or eFEDS X-ray/optical-IR counterparts. We also provide AGN fractional contributions at rest-frame 5000 Å, 1 μm, and integrated 8-1000 μm for X-ray AGNs to help readers identify sources with less-reliable . Readers should also note that our AGN selection is based upon X-ray detection only, which is generally a good tracer of black-hole accretion rates (BHAR), but this selection is incomplete. To reach better completeness, one should combine multiple selection methods such as mid-infrared <cit.> and radio <cit.>. AcknowledgmentsWe acknowledge support from NSF grant AST-2106990 and Penn State. § SEDS IN COSMOS The COSMOS field is one of the LSST Deep Drilling Fields (DDFs) and has been well characterized by many past studies <cit.>. Our purpose in working on COSMOS is to consistently measure the galaxy properties via SED-fitting in the same manner as for other DDFs <cit.>. In this work, our methods for COSMOS are the same as those for eFEDS and <cit.>. We focus on the 1.27 deg^2 UltraVISTA footprint inside COSMOS because sufficiently deep NIR coverage is critical in deriving reliable . The final catalog consists of 709,087 normal galaxies and 2,209 X-ray AGNs. The absorption-corrected X-ray data are from <cit.> and <cit.>. The UV-to-MIR data are from the Farmer catalog in the COSMOS2020 data release <cit.>. To optimize the magnitude offsets in each band, we apply the correction in Table 3 of <cit.>. We also add the 24–500 μm data from the “super-deblended" FIR photometric catalog in <cit.>. The Galactic extinction is corrected following the methods in Section 2.6 of <cit.>.The redshifts for X-ray AGNs and normal galaxies are from <cit.> and <cit.>, respectively. The quality of the photo-zs is generally good, with typical dispersions of 1–4% and outlier fractions of a few percent.We compare ourwith other measurements in the top-left panel of Figure <ref>. Thevalues for 703,081 galaxies are compared with those in the Farmer catalog. The median Δlog=0.07, and =0.15. We compare 2,086 X-ray AGNs with those in <cit.>, who also include AGN components when deriving . The median Δlog=0.09, and =0.21. We also select BL AGN candidates with AGN components contributing >50% of the total flux density at rest-frame 1 μm. The result indicates this criterion is efficient in selecting AGNs with problematicmeasurements as almost all sources with Δlog<-0.5 are selected. The impact of these BL AGN candidates on COSMOS should be much reduced compared to eFEDS because of the small area and greater depth of COSMOS. This is supported by the much-reduced fraction of sources with AGN contribution at rest-frame 1 μm >50% in COSMOS (9%) than in eFEDS (32%). In addition, among our X-ray AGNs, 36 are spectroscopically confirmed BL AGNs by <cit.>. The median Δlog is -0.40, which is consistent with the findings in <cit.>. In Figure <ref> top-right panel, we compare SED-based SFRs and FIR-based SFRs corrected for old-star heating. For 4,017 galaxies and 225 AGNs with FIR SNR >5, =0.23 and 0.27 with median Δlog SFR=-0.06 and 0.05, respectively. 
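For catalog users, the selection of BL AGN candidates described above (AGN components contributing more than 50% of the total flux density at rest-frame 1 μm, for which the SED-based stellar masses are less reliable) amounts to a simple cut on the tabulated AGN fraction; a sketch with a hypothetical column name is given below.

import numpy as np

def flag_bl_agn_candidates(frac_agn_1um, threshold=0.5):
    """Flag sources whose AGN component dominates at rest-frame 1 micron."""
    return np.asarray(frac_agn_1um) > threshold

# is_candidate = flag_bl_agn_candidates(catalog["frac_agn_1um"])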
Our measurements in COSMOS are generally consistent with the previous results. The nominal depth in COSMOS is also shown in the bottom panel of Figure <ref>. The number of good bands in COSMOS is much larger than that in eFEDS at the bright end, but it dramatically degrades at i-mag ≈25.5. Approximately 35% of our sources are brighter than this magnitude. Among these, 64% of X-ray AGNs have spec-zs from <cit.>.[The spec-zs for normal galaxies in the COSMOS2020 data release are not publicly available, so we only adopt their photo-zs. Since their photo-zs agree well with the spec-zs, the adoption of photo-zs should not materially affect our conclusions.] In the Figure <ref> top panels, we only show sources with i-mag <26. We warn readers to use our catalog at i-mag ≳26 cautiously as fainter sources generally have only a handful of bands, so the SED fitting becomes unreliable. aasjournal
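The nominal-depth diagnostic used in both fields — counting "good bands" with SNR > 5 per source — can be sketched as follows, where flux and flux_err are (N_source, N_band) arrays with hypothetical names.

import numpy as np

def count_good_bands(flux, flux_err, snr_min=5.0):
    """Number of photometric bands with SNR > snr_min for each source."""
    with np.errstate(divide="ignore", invalid="ignore"):
        snr = np.where(flux_err > 0, flux / flux_err, 0.0)
    return np.sum(snr > snr_min, axis=1)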
http://arxiv.org/abs/2311.16283v1
{ "authors": [ "Zhibo Yu", "Fan Zou", "William N. Brandt" ], "categories": [ "astro-ph.GA", "astro-ph.HE" ], "primary_category": "astro-ph.GA", "published": "20231127195646", "title": "Stellar Masses and Star-Formation Rates of Galaxies and AGNs in the eFEDS GAMA09 Field" }
http://arxiv.org/abs/2311.16231v1
{ "authors": [ "Itai Linial", "Brian D. Metzger" ], "categories": [ "astro-ph.HE" ], "primary_category": "astro-ph.HE", "published": "20231127190001", "title": "Ultraviolet Quasi-periodic Eruptions from Star-Disk Collisions in Galactic Nuclei" }
Computing the matter power spectrum, P(k), as a function of cosmological parameters can be prohibitively slow in cosmological analyses, hence emulating this calculation is desirable. Previous analytic approximations are insufficiently accurate for modern applications, so black-box, uninterpretable emulators are often used. To construct an efficient, differentiable, interpretable, symbolic emulator for the redshift zero linear matter power spectrum which achieves sub-percent level accuracy. We also wish to obtain a simple analytic expression to convert A_ s to σ_8 given the other cosmological parameters. We utilise an efficient genetic programming based symbolic regression framework to explore the space of potential mathematical expressions which can approximate the power spectrum and σ_8. We learn the ratio between an existing low-accuracy fitting function for P(k) and that obtained by solving the Boltzmann equations and thus still incorporate the physics which motivated this earlier approximation. We obtain an analytic approximation to the linear power spectrum with a root mean squared fractional error of 0.2% between k = 9×10^-3 - 9 hMpc^-1 and across a wide range of cosmological parameters, and we provide physical interpretations for various terms in the expression. We also provide a simple analytic approximation for σ_8 with a similar accuracy, with a root mean squared fractional error of just 0.4% when evaluated across the same range of cosmologies. This function is easily invertible to obtain A_ s as a function of σ_8 and the other cosmological parameters, if preferred. It is possible to obtain symbolicapproximations to a seemingly complex function at a precision required for current and future cosmological analyses without resorting to deep-learning techniques, thus avoiding their black-box nature and large number of parameters. Our emulator will be usable long after the codes on which numerical approximations are built become outdated. A precise symbolic emulator of the linear matter power spectrumDeaglan J. Bartlett mailto:[email protected]@iap.fr 1 Lukas Kammerer 2 Gabriel Kronberger 2 Harry Desmond 3 Pedro G. Ferreira 4 Benjamin D. Wandelt 1,5 Bogdan Burlacu 2 David Alonso 4 Matteo Zennaro 4Received XXX; accepted YYY ============================================================================================================================================================================================================================================================================ § INTRODUCTION Machine learning (ML) methods have great potential for simplifying and accelerating the analysis of astrophysical data sets.The primary focus has been on what one might dub “advanced numerical methods” using, for example, Gaussian processes or neural networks. In these cases, one tries to construct efficient algorithms which can be used to either infer specific physical properties from complex data sets or emulate complex processes which can then be extrapolated to new situations.Typically these methods involve constructing a set of pre-established basis functions and then inferring their weights, or building complex, expressible functions, with parameters that can be optimised via efficient gradient descent methods. 
These methods can be easily incorporated in Bayesian inference frameworks that have achieved significant success, becoming the standard practice in astrostatistics.The drawback of the more traditional, numerical ML techniques is their opaqueness; it is not always clear what information is being used and how methods trained on (necessarily imperfect) simulations will perform when applied to real-world data.A somewhat overlooked branch of machine learning which has tremendous promise for the types of problems being considered in astrophysics is Symbolic Regression (SR).With SR one tries to infer the mathematical expressions that best capture the properties of the physical system one is trying to study. The process is an attempt to mimic and systematise the practice that physicists have always used: to infer simple physical laws (i.e. formulae) from data. The field of SR has developed over the years into a vibrant and active field of research in ML, typically associated with evolutionary methods such as Genetic Programming.It has been shown that it can be used to infer some well-established laws of physics from data and infer new ones <cit.>.Within the field of cosmology, one often compresses observations from galaxy surveys into two-point correlation functions (or their Fourier transforms, power spectra), which are compared to theory through Markov Chain Monte Carlo methods to constrain cosmological parameters. As cosmological surveys become increasingly vast and precise, a fundamental limitation to the feasibility of such inferences has been the speed at which one can make this theoretical prediction, since it involves solving a complex set of coupled, highly nonlinear differential equations.Recently, instead of directly solving these equations <cit.> and adding non-linear corrections <cit.>, emulation techniques such as Neural Networks or Gaussian Processes have been used to accelerate these calculations to directly output the matter power spectrum as a function of cosmological parameters <cit.>. These methods act as black boxes and require up to several hundreds of parameters to be optimised. However, through perturbation theory, one knows analytic limits of the power spectrum and, through visual inspection, it does not appear to be an extremely complex function.As such, one wonders whether an analytic approximation exists. Indeed, for many years, the leading method of accelerating this calculation has been an analytic approximation <cit.>, however it is insufficiently precise for modern experiments. Analytic approximations to beyond ΛCDM power spectra have been proposed in the context of modified gravity <cit.>, although these still only achieve a precision of between 1 and 2%.Such an emulator has the advantage that it will not become deprecated when the codes on which current numerical methods are built become outdated, whereas other methods require the transfer of the inferred weights and biases as well as the model architecture, hindering longevity. Even in the short term, an analytic expression using standard operators is more portable, since it can be more easily be incorporated into the user's favourite programming language without the need to install or write wrappers for the model. Moreover, having an analytic expression allows one to interpret such a fit, and potentially identify physical processes which could lead to certain terms, contrary to the black-box numerical methods. 
Additionally, such expressions often contain fewer free parameters to optimise than numerical ML methods.

In <ref> we briefly describe the matter power spectrum and the Eisenstein & Hu approximation, and in <ref> we detail the SR method we use in this work. We present an analytic emulator for σ_8 as a function of other cosmological parameters in <ref> (which is easily invertible to obtain A_ s as a function of cosmological parameters), and in <ref> we give our emulator for the linear matter power spectrum. The main results of this paper are given in <ref> and <ref>. We conclude and discuss future work in <ref>. Throughout this paper "log" denotes the natural logarithm.

§ THE MATTER POWER SPECTRUM

§.§ Definition

We would like to construct an efficient, differentiable and (if at all possible) interpretable emulator for the power spectrum of the matter distribution in the Universe, P(k; θ), for wavenumber k and cosmological parameters θ. The power spectrum is defined as follows: the matter density of the Universe, ρ(x), can be decomposed into a constant (in space) background density, ρ̅, and a density contrast, δ(x), such that ρ(x)=ρ̅[1+δ(x)]. If δ̃(k) is the Fourier transform of δ(x), and the matter distribution is statistically homogeneous and isotropic, we have that

(2 π)^3 P(k;θ) δ^ D( k - k^') ≡⟨δ̃(k) δ̃^∗ (k^') ⟩,

where ⟨⋯⟩ denotes an ensemble average and δ^ D is the Dirac delta function. From observations of the Cosmic Microwave Background (CMB) <cit.>, it is known that the density fluctuations at early times were approximately Gaussian and thus fully described by P(k; θ). At these early times, the power spectrum of the comoving curvature perturbations is proportional to A_ s k^n_ s - 4, where n_ s≈ 0.9665 <cit.>. Although structure formation through gravity makes the present-day density field non-Gaussian (e.g. the intricate structure of the cosmic web is typically associated with higher order statistics), the power spectrum still holds a central role in modern cosmological analyses.

The current cosmological model is described by only six parameters: the baryonic, Ω_ b, and total matter, Ω_ m, density parameters, the Hubble constant, H_0 = 100 h km s^-1 Mpc^-1, the scalar spectral index, n_ s, the curvature fluctuation amplitude, A_ s, and the reionisation optical depth, τ. All other parameters can be derived from these six, and thus sometimes a different set of parameters is chosen. For example, instead of A_ s, one often quotes σ_8, which is the root-mean-square density fluctuation when the linearly evolved field is smoothed with a top-hat filter of radius 8 h^-1 Mpc. Specifically, one defines, for a top-hat of radius R,

σ_R^2 = ∫ dk k^2/(2 π^2) P(k; θ) | W(k,R) |^2,

where θ is the set of cosmological parameters and the Fourier transform of the top-hat filter is

W(k, R) = 3/(kR)^3 ( sin (k R) - kR cos (k R) ),

and σ_8 is simply σ_R for R = 8 h^-1 Mpc. Throughout this paper we will ignore the small dependence of the power spectrum on the reionisation optical depth parameter, and focus on the remaining five parameters. We set the neutrino mass to zero in all calculations.

§.§ Eisenstein & Hu Approximation

Since each evaluation of a Boltzmann solver to compute P(k; θ) can be expensive, the ability to emulate this procedure and replace this solver with a surrogate model has long been desirable. The most notable attempt to do this in an analytic manner is given in a series of papers by <cit.>.
In these works, an approximation is constructed based on physical arguments including baryonic acoustic oscillations (BAO), Compton drag, velocity overshoot, baryon infall, adiabatic damping, Silk damping, and cold dark matter growth suppression. Rather than repeat their findings, we refer the reader to these papers to inspect the structure of the equations. Such a model is accurate to a few percent which, although invaluable at the time of writing, is insufficiently accurate for modern cosmological analyses. It is thus the goal of this work to build upon this analytic emulator to provide sub-percent level predictions. We note that alternative symbolic approximations also exist to P(k), such as the earlier, less accurate approximation by <cit.> (BBKS). More recently, <cit.> found simple expressions using genetic programming which can achieve similar accuracy to theexpression, but we choose to use 's approximation due to its physical motivation and widespread use. § SYMBOLIC REGRESSIONTo extract analytic approximations from sampled data, we use the symbolic regression package [<https://github.com/heal-research/operon>] <cit.>.This package leverages the most popular<cit.> approach to SR, namely genetic programming <cit.>. Genetic programming describes the evolution of “computer programs”, in our case mathematical expressions encoded as expression trees. Following the principle of natural selection, over several iterations the worst performing equations (given some fitness metric) are discarded and new equations are produced by combining sub-expressions of the current population (crossover) or by randomly inserting, replacing or deleting a subtree in an expression (mutation). Over the course of several generations, the expectation is that the population of equations evolve to become fitter and thus we obtain increasingly accurate analytic expressions. We note that many other techniques exist for SR, such assupervised or reinforcement learning with neural networks <cit.>,deterministic approaches <cit.>,Markov chain Monte Carlo <cit.>,physics-inspired searches <cit.>, and exhaustive searches <cit.>. However, we chooseand thus genetic programming due to its speed, high memory efficiency and its strong performance in benchmark studies <cit.>.To improve the search, every time a terminal node appears in an expression tree (i.e. k or one of the cosmological parameters), a scaling parameter is introduced, which is then optimised <cit.> using the Levenberg–Marquardt algorithm <cit.>. We denote the total number of nodes in the expression excluding the scaling as the “length” of the model, and the “complexity” refers to the total number of nodes, including these.When comparing objective values during non-dominated sorting (NSGA2),implements the concept of ϵ-dominance <cit.>, where the parameter ϵ is defined such that two objective values which are within ϵ of each other are considered equal. This parameter therefore affects the number of duplicate equations in the population and is designed to promote convergence to a representative well distributed approximation of the global Pareto front. We choose different values for this parameter when searching for our two emulators, and these were found after some experimentation with different values to find settings which produced accurate yet compact models.Model selection is an essential part of any SR search. Since one optimises both accuracy and simplicity during the search, SR if often a Pareto-optimisation problem. 
Although principled methods exist to combine these competing requirements in the presence of statistical errors <cit.>, we do not have this ability for our current application since there is no noise in our data, and thus have to rely on more heuristic methods.We visually inspect the most accurate solution found for each model length and make a qualitative judgement as to the function which is sufficiently compact to be interpretable yet is accurate enough for our applications. Further details are given in <ref>. § ANALYTIC EMULATOR FOR SIGMA8We begin by considering the simplest emulator one may want for power spectrum related quantities: an emulator for σ_8 as a function of other cosmological parameters (A_ s,Ω_ b,Ω_ m, h,n_ s) or, equivalently, an emulator for A_ s given σ_8 and the other cosmological parameters.Although the set of neural network emulators BACCO contains a function to do this <cit.>, to the best of the authors' knowledge an analytic approximation is not currently in common use.The standard approach is to compute the linear matter power spectrum with a Boltzmann code assuming some initial guess of A_ s, then compute the integral in <ref> to obtain σ_8. For a target σ_8 of σ_8^', one should then use A_ s^' = (σ_8^'/σ_8)^2 A_ s. We wish to accelerate this process with a symbolic emulator. To compute this, we construct a Latin hypercube (LH) of 100 sets of cosmological parameters, using uniform priors in the ranges given in <ref>, which are the same as those used in <cit.>.We construct a second LH of 100 points to be used for validation. For these parameters, we compute σ_8 using<cit.> and attempt to learn this mapping using a mean squared error loss function with . For the equation search, we use a population size of 1000 with a brood size of 10 and tournament size of 5, optimising both the mean squared error and the length of the expression simultaneously, with ϵ = 10^-5 (see <ref>). For numerical stability, we fit using 10^9 A_ s instead of A_ s so that all cosmological parameters are 𝒪(1). Parameters are optimised during the search using a nonlinear least squared optimiser with up to 1000 iterations per optimisation attempt. We set the maximum allowed model length to 200 and maximum number of iterations to 10^7, although we find that both of these are much larger than the required values needed to converge to a desirable solution. The candidate expressions are comprised of standard arithmetic operations (addition, subtraction, multiplication, division), as well as the natural logarithm, cosine, power and analytic quotient operator ((x,y) ≡ x / √(1+y^2)).After 30 minutes of operation on one node of 128 cores,we find the Pareto front of expressions given in <ref>. We see that the training and validation losses are comparable at all model lengths, reaching a root mean squared error of around than 3 × 10^-3 by a length of 14. The best model found (of length 15) is given byσ_8 ≈(a_0 A_ s+ a_1 n_ s) ( a_2Ω_ b+ log( a_3Ω_ m)) log(a_4 h ) + a_5,where the optimised parameters are a = [1.61320734729× 10^8, 0.343134609906, - 7.859274, 18.200232, 3.666163, 0.003359]. We note several important features of this equation which make it desirable. First, we find that it is a highly accurate approximation, with a root mean squared fractional error on the validation set of only 0.4%, which is far smaller than the precision to which one can measure this number with cosmological experiments. Second, one sees the A_ s only appears once in this equation and as a linear term. 
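For reference, the two routes described above can be written compactly: a direct numerical evaluation of the σ_R integral from a tabulated linear P(k), and the symbolic approximation of the equation just given, using the printed parameter values. Note that, given the printed value of a_0, A_ s appears to enter as the physical amplitude (∼2 × 10^-9), with the 10^9 rescaling used during the fit absorbed into a_0. The trapezoidal integration in ln k below is an illustrative choice, not necessarily the scheme used in this work.

import numpy as np

def sigma_R(k, pk, R=8.0):
    """sigma_R from a tabulated linear power spectrum; k in h/Mpc, pk in (Mpc/h)^3, R in Mpc/h."""
    x = k * R
    W = 3.0 * (np.sin(x) - x * np.cos(x)) / x**3           # top-hat window in Fourier space
    integrand = k**3 * pk * W**2 / (2.0 * np.pi**2)         # d(sigma^2)/d ln k
    return np.sqrt(np.trapz(integrand, np.log(k)))

def sigma8_fit(As, ns, Ob, Om, h):
    """Symbolic approximation to sigma_8 with the parameters printed in the text."""
    a = [1.61320734729e8, 0.343134609906, -7.859274,
         18.200232, 3.666163, 0.003359]
    return ((a[0] * As + a[1] * ns)
            * (a[2] * Ob + np.log(a[3] * Om)) * np.log(a[4] * h) + a[5])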
Thus, it is trivial to invert this equation to obtain A_ s as a function of the other cosmological parameters, as is often needed. Finally, we note that the final parameter, a_5 = 0.003359, is much smaller than the value of σ_8 and thus this additive constant could be neglected in most applications where such precision is not necessary. § ANALYTIC EMULATOR FOR THE LINEAR POWER SPECTRUMWe now move on to the more challenging task of producing an analytic emulator for the linear matter power spectrum. Given the previous success of <cit.>, we believe it is sensible to build upon this work, not least due to the physically-motivated terms included in their fit and so that we must only have to fit a small residual (of the order of a few percent). Thus, instead of directly fitting for P(k, θ), we defineP(k; θ) ≡ P_ EH(k; θ) F(k; θ),where P_ EH(k; θ) is the zero-baryon fit of <cit.>, which does not include an attempt to fit the BAO. We plot both P(k; θ) and log F(k; θ) in <ref> for the best-fit cosmology obtained by Planck <cit.>, where we see that dividing out theterm retains the BAO part of the power spectrum and reduces the dynamic range required for the fit.As before, we obtain 100 sets of cosmological parameters on a LH using the priors in <ref> and compute both P(k;θ) withand P_ EH(k; θ) with the<cit.> implementation, using 200 logarithmically spaced values of k in the range 9× 10^-3 - 9 h Mpc^-1. We note that this is an extremely small training set compared to many power spectrum emulators, but we find that it is sufficient to obtain sub-percent level fits.We choose to symbolically regress log F (k; θ) using a mean squared error loss function, and thus wish to minimise the fractional error on this residual. We choose to fit for log F as this ensures that our final estimate of P(k; θ) is positive, as guaranteed by exponentiation, which is physically required. Additionally, we first multiply log F by 100 so that the target is 𝒪(1). We use a further 100 sets of cosmological parameters, also arranged on a LH, for validation. We choose to fit using the cosmological parameters σ_8, Ω_ b, Ω_ m, h and n_ s. We use the same settings foras in <ref>, except we choose ϵ=10^-3 and terminate our search after 10^8 function evaluations.The root mean squared error for the best function found at each model length is given in <ref>, where we see that we are able to achieve values of 𝒪(10^-3) for log F. Unlike for the σ_8 emulator, we obtain slightly worse losses for the validation set compared to training, however always by less than a factor of two.Given this set of candidate solutions, we wish to choose one which is sufficiently accurate for current applications yet is sufficiently compact to be interpretable. In <ref>,one observes a plateau in accuracy between model lengths ∼65-80 and thus it seems reasonable to choose a solution in this regime, since doubling the model length only achieves approximately a factor of two improvement in fit beyond this point. Moreover, beyond this point the training and validation curves begin to deviate, suggesting a degree of overfitting.We choose to report the model of length 77, as indicated by the dotted line in <ref>, since this provided one of the most interpretable solutions, and achieved a sub-percent error for 95% (2σ) of the cosmological parameters considered, for both the training and validation set. 
After some simplification, this can be written aslog F≈b_0 h - b_1 + ( b_2 Ω_ b/√(h^2 + b_3))^b_4 Ω_ m[ b_5 k - Ω_ b/√(b_6 + (Ω_ b - b_7 k)^2) b_8 (b_9 k)^-b_10 kcos( b_11Ω_ m - b_12 k/√(b_13 + Ω_ b^2)) - b_14( b_15 k/√(1 + b_16 k^2) - Ω_ m) cos( b_17 h/√(1 + b_18 k^2)) ]+ b_19 (b_20Ω_ m + b_21 h - log(b_22 k) + (b_23 k)^- b_24 k) cos( b_25/√(1 + b_26 k^2))+ (b_27 k)^-b_28 k( b_29 k - b_30log(b_31 k)/√(b_32 + (Ω_ m - b_33 h)^2))cos(b_34Ω_ m - b_35 k/√(b_36 + Ω_ b^2)), where the best-fit parameters for this function are given in <ref>. We find that there are 37 different parameters required for this fit, far fewer than would be used if one were to emulate this with a neural network.We plot this fit and the residuals compared tofor the Planck 2018 cosmology in <ref>, which we note is not included in either our training or validation sets. One can see that the difference between the true power spectrum and our analytic fit is almost imperceivable, and in the residuals plot we see that for all k considered, the fractional error does not exceed 0.3%. This is smaller than the error on log F given in <ref>, since we compare at the level of the full P(k; θ), such that a moderate error on log F becomes very small once substituted into <ref>. This is shown in <ref>, where we plot the distribution of fractional residuals in P(k; θ) for all the cosmologies in the training and validations sets. We obtain sub-percent level predictions for all cosmologies and values of k considered, with a root mean squared fractional error of 0.2%.Part of the appeal of a symbolic emulator is the possibility for interpretability and to easily identify what information used in the input is used to make the prediction. To begin, we note that, although we obtained our emulator by varying σ_8, Ω_ b, Ω_ m, h and n_ s, we see that <ref> contains neither σ_8 nor n_ s. For the linear matter power spectrum, one expects that A_ s and n_ s only appear as a multiplicative factor of A_ s k^n_ s - 1, with all other terms independent of these parameters. Given that theterm already contains this expression, it is unsurprising that log F is independent of n_ s. Indeed, if it did appear, this would indicate a degree of overfitting. Since σ_8 is not proportional to A_ s (see <ref>), we cannot use the same argument to explain the lack of its appearance in our expression, but can conclude that a combination of theterm and the first line of <ref> can sufficiently approximate A_ s, since this line is k independent and thus contributes to an overall offset for the emulator.Turning to the remaining lines of <ref>, we observe that each term contains an oscillation modulated by a k- and cosmology-dependent damping. Despite there being four such terms across the remaining three lines, we find that we can split these into two pairs with the same structure of the oscillations. Firstly, we have cosines with an argument proportional to 1 / √(1 + b k^2), for some constant b. This functional form (x/√(1 + y^2)) arises due to the inclusion of the analytic quotient operator, which also explains why the constant 1 appears multiple times in <ref>. These terms give oscillations which vary slowly as a function of k. In particular, as plotted in <ref>, the third line of <ref> contains approximately one cycle of oscillation across the range of k considered, with a minimum during the BAO part of the power spectrum, and a maximum just afterwards. 
Beyond this point, this term fits the non-oscillatory, decaying part of the residual beyond k∼ 1 h Mpc^-1 (compare the middle panel of <ref> to the third term plotted in <ref>).The remaining oscillatory terms are of the form cos(ω k + ϕ). The phase, ϕ, of these oscillations is proportional to the total matter density, Ω_ m, such that changing this parameter at fixed Ω_ b shifts the BAOs to peak at different values of k. The frequency of these oscillations is ω∝ 1 / √(b + Ω_ b^2) for some parameter b, such that cosmologies with a higher fraction of baryons have many more cycles of BAO in a given range of k, as one would physically expect.From <ref>, one can see how the second and fourth lines of <ref> capture the BAO signal with opposite signs, such that they combine to give the familiar damped oscillatory feature.Using Ω_ bh^2 = 0.02242 and h= 0.6766, as appropriate for the Planck 2018 cosmology <cit.>, the frequency of the oscillations are b_12 / ( h √(b_13 + Ω_ b^2)) = 146.5 Mpc and b_35 / (h √(b_36 + Ω_ b^2)) = 145.8 Mpc, and are thus approximately equal to the sound horizon, which is r_∗ = 144.6 Mpc for this cosmology. One can therefore view these frequencies as symbolic approximations to the sound horizon, although we refer the reader to <cit.> for alternative SR fits.Thus, although we did not enforce physically motivated terms in the equation search, we see that simple oscillatory contributions for the BAOs have emerged and thus our symbolic emulator is not merely a high order series expansion, but contains terms which are both compact and interpretable. We find that such terms exist in many functions given in <ref>, however we find that using shorter run times forof only 2-4 hours (compared to approximately 24 hours on a single node of 128 cores for our fiducial analysis) do not provide as interpretable expressions as <ref>.As a note of caution, one can identify a few terms in <ref> which will become problematic if extrapolated to values of k much smaller than those used to train the emulator, namely those containing log(k) and k raised to a power proportional to k. For k ≲ 10^-3h Mpc^-1 this can lead to an error on P(k) of more than one percent. Although one is likely cosmic-variance dominated in this regime so such errors should not be problematic, we know that theprovides a very good approximation, and thus we suggest that <ref> is included in a piece-wise fit, such that it is only used approximately in the range of k which were used to obtain it. A similar effect is seen if one extrapolates to higher k than considered here. Although this is far beyond the validity of the linear approximation, we caution that applying any parameterisation of the non-linear power spectrum that depends on the linear one may suffer from potentially catastrophic extrapolation failures at high k if used beyond the k range considered here. 
Again, it is potentially advisable to just use the Eisenstein & Hu fit in this regime. If the reader wishes to use a more accurate, yet less interpretable, emulator, we provide the most accurate equation found in <ref>, which has a model length of 142, with 73 parameters, and yields a root mean squared fractional error on P(k) of 0.1% for both the training and validation sets.

§ DISCUSSION AND CONCLUSION

In this paper we have found analytic approximations to σ_8 (<ref>) and the linear matter power spectrum (<ref>) as a function of cosmological parameters which are accurate to sub-percent levels. In the case of σ_8, the simple yet accurate expression we have identified can be easily inverted to obtain A_ s as a function of σ_8 and the other cosmological parameters. Our approximation to P(k) is built by fitting the residuals between the output of a Boltzmann solver and the physics-inspired approximation of <cit.>. As such, unlike neural network or Gaussian process based approaches, our expression explicitly captures many physical processes (and is thus interpretable) whilst still achieving sub-percent accuracy.

This work is the first step in a programme of work dedicated to obtaining analytic approximations to P(k) which can be used in current and future cosmological analyses. In this paper we have focused on the linear P(k), i.e. the power spectrum of the linearly evolved density fluctuations. Although this approach is valid on large scales, the real Universe is non-linear, such that non-linear corrections are required at k≳ 10^-1h Mpc^-1 to accurately model the observed matter power spectrum across a wider range of scales. Future work will be dedicated to extending our framework to capture such non-linear physics and to include redshift dependence in our emulator. Finally, in our emulator we have considered a ΛCDM Universe with massless neutrinos. In the future we will add corrections to the expressions found in this work to incorporate the effects of massive neutrinos and include beyond ΛCDM effects, such as a w_0-w_a parametrisation of dark energy.

We have demonstrated that, despite the temptation to blindly apply black-box methods such as neural networks to approximate physically useful functions, even in ostensibly challenging situations such as the matter power spectrum, one can achieve the required precision with relatively simple analytic fits. Given the unknown lifetime of current codes upon which numerical ML approximations are built and the ease of copying a few mathematical functions into your favourite programming language, finding analytic expressions allows one to more easily future-proof such emulators, and should therefore be encouraged wherever possible.

§ ACKNOWLEDGEMENTS

DJB is supported by the Simons Collaboration on “Learning the Universe.” LK was supported by a Balzan Fellowship. HD is supported by a Royal Society University Research Fellowship (grant no. 211046). PGF acknowledges support from STFC and the Beecroft Trust. BDW acknowledges support from the Simons Foundation. DA acknowledges support from the Beecroft Trust, and from the Science and Technology Facilities Council through an Ernest Rutherford Fellowship, grant reference ST/P004474/1. MZ is supported by STFC.
We made extensive use of computational resources at the University of Oxford Department of Physics, funded by the John Fell Oxford University Press Research Fund, and at the Institut d'Astrophysique de Paris.For the purposes of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising.§ DATA AVAILABILITY The data underlying this article will be shared on reasonable request to the corresponding author. We provide python implementations of <ref> at <https://github.com/DeaglanBartlett/symbolic_pofk>.aa § MOST ACCURATE ANALYTIC EXPRESSION FOUND FOR LINEAR POWER SPECTRUMThe expression we report for an analytic approximation for the linear matter power spectrum (<ref>) is not the most accurate one found, but the one which we deemed to appropriately balance accuracy, simplicity, and interpretability.It may be desirable to have a more accurate symbolic expression if interpretability is not a concern. In this case one may wish to use the most accurate equation found, which is100 log F≈ c_0 k+ c_1(Ω_ b c_2 - c_3 k/√(c_4 + k^2)) (c_34(c_35 k)^- c_36 k/√(c_39 + (- Ω_ b + Ω_ m c_37 - c_38 h)^2) - cos(Ω_ m c_32 - c_33 k )) ×(c_17(c_25 k)^- c_26 k((Ω_ b c_18 + Ω_ m c_19 - c_20 h) cos(Ω_ m c_21 - c_22 k ) + cos(c_23 k - c_24))/√(c_31 + (c_27(- Ω_ m c_28 + c_29 k)/√(c_30 + k^2) - k)^2) - c_5(Ω_ m c_12 + c_13 k)^- c_14 k(Ω_ m c_6 - c_7 k + (Ω_ b c_8 - c_9 k) cos(Ω_ m c_10 - c_11 k ))/√(c_16 + (Ω_ b c_15 + k)^2))- c_40(Ω_ m c_41 - c_42 h + c_43 k + c_44 k/√(c_45 + k^2)√(c_47 + (- Ω_ m - c_46 h)^2) - c_48(Ω_ m c_49 + c_50 k)/√(c_51 + k^2)) cos(c_52 k/√(c_53 + k^2)√(c_55 + (Ω_ m c_54 - k)^2)) - c_56 - c_57(Ω_ m c_67 + c_68 k)^- c_69 k(Ω_ m c_58 - c_59 k + (- Ω_ b c_60 - Ω_ m c_61 + c_62 h) cos(Ω_ m c_63 - c_64 k ) + cos(c_65 k - c_66))/√(c_70(Ω_ b + c_71 h/(c_72 + k^2)^0.5)^2/c_73 + k^2 + 1.0).Note that this equation has 73 parameters, which is approximately twice as many as <ref>, yet one only gains a factor of two in the fractional root mean squared error. The best-fit parameter values are reported in <ref>. We note that this function is the direct output ofand is thus over-parameterised so that some simplification could be applied. For example, one only needs two of c_1, c_2 and c_3 as these only appear as c_1c_2 and c_1c_3. Since we only provide this expression as a precise emulator and do not attempt to interpret its terms, we choose not to apply any simplifications (although see <cit.> for an automated method to do this).
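To make the piece-wise usage suggested in the discussion concrete, the following is a minimal Python sketch, not the released symbolic_pofk code: it applies the symbolic correction of <ref> only inside an assumed fitted k range and falls back to the physics-inspired baseline elsewhere. The callables baseline_pk and symbolic_logF, the k limits, and the natural-logarithm convention for the fitted ratio F are all assumptions here and should be checked against the released implementation.

    import numpy as np

    # Assumed validity range of the symbolic fit (h/Mpc); placeholders, not values quoted in the paper.
    K_MIN, K_MAX = 1e-2, 5.0

    def plin_piecewise(k, cosmo, baseline_pk, symbolic_logF):
        """Piece-wise linear P(k): baseline fit everywhere, symbolic correction only
        inside the k range used to obtain it. `baseline_pk` and `symbolic_logF` are
        hypothetical callables standing in for the baseline approximation and the
        fitted expression returning 100*log(F)."""
        k = np.atleast_1d(np.asarray(k, dtype=float))
        pk = baseline_pk(k, **cosmo)
        inside = (k >= K_MIN) & (k <= K_MAX)
        log_f = symbolic_logF(k[inside], **cosmo) / 100.0  # assuming a natural-log convention for F
        pk[inside] *= np.exp(log_f)
        return pk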
http://arxiv.org/abs/2311.15865v1
{ "authors": [ "Deaglan J. Bartlett", "Lukas Kammerer", "Gabriel Kronberger", "Harry Desmond", "Pedro G. Ferreira", "Benjamin D. Wandelt", "Bogdan Burlacu", "David Alonso", "Matteo Zennaro" ], "categories": [ "astro-ph.CO", "astro-ph.IM", "cs.LG", "cs.NE" ], "primary_category": "astro-ph.CO", "published": "20231127143321", "title": "A precise symbolic emulator of the linear matter power spectrum" }
[ Haoqiang Kang^1Haoqiang Kang completed this work as a research assistant at Columbia University.,  Xiao-Yang Liu^1, 2^1Columbia University, ^2Rensselaer Polytechnic Institute ========================================================================================================================================================================================= The hallucination issue is recognized as a fundamental deficiency of large language models (LLMs), especially when applied to fields such as finance, education, and law. Despite the growing concerns, there has been a lack of empirical investigation. In this paper, we provide an empirical examination of LLMs' hallucination behaviors in financial tasks. First, we empirically investigate LLM model's ability of explaining financial concepts and terminologies. Second, we assess LLM models' capacity of querying historical stock prices. Third, to alleviate the hallucination issue, we evaluate the efficacy of four practical methods, including few-shot learning, Decoding by Contrasting Layers (DoLa), the Retrieval Augmentation Generation (RAG) method and the prompt-based tool learning method for a function to generate a query command. Finally, our major finding is that off-the-shelf LLMs experience serious hallucination behaviors in financial tasks. Therefore, there is an urgent need to call for research efforts in mitigating LLMs' hallucination.[We release our code and data at <https://github.com/mk322/fin_hallu>.]§ INTRODUCTION Motivation. Large language models (LLMs) <cit.> have emerged as transformative tools, demonstrating unprecedented prowess in comprehending and generating human-like text across diverse applications. LLMs are revolutionizing the interface[There may be a switch from graphical user interface (GUI) to language user interface (LUI).] between humans and machines in fields such as finance, education, and law. Among many LLM applications, the finance domain stands out as a notable area of impact. Various specialized models, such as FinBERT <cit.>, BloombergGPT <cit.>, and FinGPT <cit.>, have been customized to understand and generate financial texts, showing considerable promise in aiding humans in a variety of financial tasks, from portfolio management <cit.> to predictive analysis of market trends <cit.> and sentiment analysis <cit.>. However, a fundamental deficiency inherent in these models is hallucination — generating plausible but unsupported or factually incorrect content to a reference text — which may be highly risky when deploying financial large language models (FinLLMs). Considering the sensitive and intricate characteristics of finance use cases, inaccuracies and misinformation may lead to severe consequences, such as substantial monetary losses and erosion of trust.Challenges.The hallucination issue is considered as a major challenge of deploying FinLLMs in real-world applications. Addressing the hallucination issue is nontrivial. First, the problem of properly measuring hallucinations, determining how often and to what extent LLMs produce incorrect or "hallucinated" information, particularly amidst intricate financial concepts. Second, the challenge of deployment in real-world financial scenarios. Finance is a multifaceted field, while tasks such as querying historical stock prices demand pinpoint accuracy. Will LLMs be able to consistently deliver on these demands?Contributions. In this paper, taking an empirical approach, we examine hallucination behaviors of LLMs in financial tasks. 
We summarize our work as follows: * Examining LLMs' ability of financial knowledge: We conduct an empirical investigation to elucidate the extent of hallucinations in finance, produced by LLMs. Our analysis delves into the models' capacity of memorizing and retaining financial domain knowledge, providing critical insights into their reliability in grasping intricate financial concepts and terminologies.* A case study of analyzing a real-world financial task: As a case study, we further analyze the models' performance in a fundamental financial task, namely the ability of querying historical stock prices accurately. This enables us to discern the models’ practical utility and effectiveness in addressing tasks that are quintessential in the finance sector. * Evaluating mitigation methods for hallucinations: We assess four practical methods, few-shot prompting, Decoding by Contrasting Layers (DoLa) <cit.>, the Retrieval Augmentation Generation (RAG) method and the prompt-based tool learning method that generates correct function calls. These strategies are designed to enhance the factual accuracy of the generated outputs and enable models to interact with up-to-date financial data, ensuring the relevance and reliability of the information provided. These contributions underscore the need to reduce hallucinations of LLMs and enhance their reliability for practical financial tasks in the real world.§ BACKGROUND AND RELATED WORKS The concept of "hallucination" in the context of LLM refers to instances where the generated text conflicts with either the input instructions (instruction inconsistency), the input context (input context inconsistency), the previously generated text (generated context conflict), or established world knowledge (factual inconsistency) <cit.>. In this work, we focus particularly on factual inconsistency, as it represents a serious and frequent type of error. For instance, as illustrated in Figure <ref>, the GPT4 model incorrectly interprets the financial acronym "TIF" as "Time in Force", which is not the common full name for "TIF", instead of the correct "Tax Increment Financing."Hallucinations in general-purpose LLMs. Previous research has investigated several factors contributing to LLM hallucination, such as imperfect learning and decoding methods, and knowledge gaps in the training data <cit.>. In addition, various methods have been proposed to mitigate hallucination in LLMs. One line of work involves factuality-enhanced decoding techniques <cit.>. Other works leverage external tools, notably the RAG technique, which enhances factuality by incorporating external knowledge sources in the generation process <cit.>, and the application of precise API calls <cit.> to obtain up-to-dated knowledge. In this study, we examine the effectiveness of one of the most representative decoding method of DoLa <cit.> method, a traditional RAG approach <cit.>, and an prompt-based tool learning method in mitigating hallucinations within the finance domain. Hallucinations in specific domains. The deployment of LLMs in specialized domains faces a significant challenge due to the risk of hallucinations. This is particularly critical in areas like finance, medicine and law where accuracy is paramount. Research in this area is growing, with notable efforts like the Med-HALT benchmark by <cit.> assessing hallucinations in the medical domain. Similarly, a study of ChatGPT <cit.> in the Chinese medical sector demonstrates GPT4's advancements, yet also underscores the ongoing challenges. 
In the legal field, the ChatLaw model <cit.> employs a vector database and keyword retrieval to tackle hallucinations in legal data extraction. Additionally, <cit.> advocate for incorporating domain-specific knowledge, such as NBA data, into supervised-finetuned (SFT) datasets to reduce such inaccuracies.§ METHODOLOGY FOR EMPIRICAL EXAMINATION We introduce an empirical framework tailored for evaluating the hallucination behaviours of LLMs within the financial field, as shown in Fig. <ref>. The framework presents a method to assess LLM responses on three financial tasks: recognizing financial abbreviations, explaining financial terminologies, and fetching stock prices. When provided with standalone questions, the models sometimes hallucinate with wrong facts, but the performance significantly improves when integrated with external data sources, such as financial documents and a function to generate a query command. §.§ Large Language Models (LLMs) Throughout our experiments, we use the HuggingFace weights[https://huggingface.co/meta-llama/Llama-2-7b-chat-hf][https://huggingface.co/meta-llama/Llama-2-7b-hf] of the pretrained Llama2-7B model <cit.> and its instruction-tuned+RLHF version Llama2-7B-chat <cit.>. Also, we utilize the OpenAI API to call the models of GPT3.5-turbo[https://openai.com/blog/chatgpt], and GPT4[https://openai.com/research/GPT4]. We use the greedy decoding method to generate texts for all models. In addition to the general-purpose LMs, we investigate the performance of FinMA-7B-NLP <cit.>, a multi-task fine-tuned LLaMA-1-7B model <cit.> with instruction data on five finance tasks, including question answering.[To optimize FinMA's ability to follow instructions effectively, we employ the same prompt template that the original authors used during the finetuning process.] This evaluation includes a comparison with its base model, LLaMA-1-7B <cit.>, to highlight differences on hallucination after domain-specific finetuning.Budget. For the inference process with the Llama2-7B and Llama2-7B-chat models, we utilized an A100 GPU. The cumulative GPU time for our entire project amounted to roughly 40 hours. Regarding the generation tasks with the GPT3.5-turbo and GPT4 models, the associated API costs were approximately $200. Moreover, the API costs for employing FactScore with GPT3.5-turbo were approximately $250. §.§ Financial TasksWhen selecting representative tasks in finance, we considered factors such as their relevance to real-world financial activities and the potential risks of inaccurate outputs. Based on these considerations, we identified three primary tasks, which together provide an assessment of LLM performance. Examples of prompt and outputs for these three tasks are gave in Appendix <ref>. §.§.§ Task I: Financial Abbreviation Recognition Task description. In this task, we measure a LLM model's ability to recognize financial acronyms and stock symbols. We randomly select 192 financial acronyms from Wikipedia [https://en.wikipedia.org/wiki/List_of_business_and_finance_abbreviations] and 1215 stock symbols from an online list[https://eoddata.com/symbols.aspx. We randomly select symbols that are in the stock exchanges of NASDAQ , NYSE, and AMEX.]. This selection is designed to cover a wide range of financial contexts and complexities. The primary objective is to evaluate a LLM mode's capability to accurately expand financial acronyms or provide the full company names corresponding to specific stock symbols. 
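As an illustration of the query setup described above (HuggingFace checkpoints and greedy decoding), the following is a minimal sketch for Task I; the prompt wording is a placeholder rather than the exact template used in the paper (the real templates appear in the appendix tables), and the substring rule mirrors the metric described in the following Metric paragraph.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "meta-llama/Llama-2-7b-chat-hf"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")

    def expand_acronym(acronym: str) -> str:
        # Placeholder prompt; the paper's actual few-shot templates are given in its appendix tables.
        prompt = f"What is the full name of the financial acronym '{acronym}'? Answer:"
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        output = model.generate(**inputs, max_new_tokens=32, do_sample=False)  # greedy decoding
        return tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

    def is_correct(predicted_full_name: str, actual_full_name: str) -> bool:
        # An example counts as correct if the actual full name is a substring of the prediction.
        return actual_full_name in predicted_full_name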
See Table <ref> and Table <ref> in the Appendix for example outputs and the prompt template used in this task. Metric. The accuracy for both tasks is measured using a consistent approach. The accuracy score is calculated as the ratio of accurately identified acronyms to the total number, which is 192 or 1000, depending on the task. An example is considered to be correctly recognized if the actual full name's string is a substring of the predicted full name. §.§.§ Task II: Financial Term Explanations Task description. In this task, a LLM is prompted to give an explanation of a financial terminology. We randomly select 160 infrequently-visited financial terminologies from Wikidata API[https://query.wikidata.org/], specifically targeting those with the lowest page views between 2021-01-01 and 2023-01-01. This selection process aim to emphasize more obscure financial concepts that are less commonly encountered in typical discussions. See Table <ref> and Table <ref> in the Appendix for example outputs and the prompt templates used in this task.Metric. We employ the FactScore (ChatGPT+Retrieval) metric <cit.> to measure the factuality of our generated content. This method quantifies the ratio of the number of correct atomic facts to the total number of atomic facts within a given response, benchmarking against the content of each term's explanation (on the Wikipedia full page). §.§.§ Task III: Stock Price Query Task description. In this task, we query LLMs for a historical stock price with 560 examples in total. We randomly select 70 stock tickers. For these tickers, we determine a set of four dates that is in the period covered by Llama2's pretraining data (i.e., before September 2022) <cit.>, specifically on 2022-05-23, 2022-06-22, 2022-07-22, and 2022-08-22. See Table <ref> and Table <ref> in the Appendix for example outputs and the prompt templates used in this task.Metric. In scenarios where no external tool is utilized, our assessment metrics include 1) accuracy, which is the percentage of the predicted prices that are exactly the same to their corresponding actual prices [Obtained from the Alpha Vantage API:https://www.alphavantage.co/], 2) mean absolute error (MAE), and 3) the percentage of queries where the integer parts of the predicted values are the same as that of the actual price. In the prompt-based tool learning scenario, a response is considered to be correct only if the generated query code of a function call exactly matches one of the predefined set of expected function calls. §.§ Methods for Mitigating Hallucination Few-shot prompting. We hand crafted the few-shot prompts for each task, utilizing them to guide the models in their learning process.Decoding by contrasting layers (DoLa). DoLa <cit.> is designed to enhance the factual accuracy of LLMs by contrasting the outputs from different layers of the model. This approach assumes that higher layers of the model contain more factual knowledge, which can be leveraged to reduce hallucinations and improve the accuracy of response. In our implementation of DoLa, we utilized the same contrasting layer configurations as those specified by the original authors[https://github.com/voidism/DoLa].Retrieval augmentation generation (RAG). In the tasks of acronym recognition and long-form generation of explanation of financial terms, to further improve the relevance and factuality of the generated content, we integrate RAG <cit.> that sources content directly from Wikipedia. 
By leveraging the vast informational expanse of Wikipedia, we ensure that our generated outputs are grounded in factual and current knowledge. To make this retrieval process efficient and accurate, we employ the FAISS vector store <cit.>, which ensures that the most relevant content is extracted from Wikipedia and seamlessly incorporated into our model's outputs.Prompt-based tool learning. For the stock price query task, our aim is to equip the model with the capability to use external tools. We employ a prompt-based learning technique. In each query, we assess the model's ability in generating the correct Python function call for a tailored wrapper of the Alpha Vantage API based on given natural language instructions. The model is briefed about the function parameters, encompassing the ticker, date, and price type. With this information, the model must produce the function name and the parameters correctly, adhering to Python's syntax. A response is considered as correct only if it precisely matches a set of the expected query strings. § EMPIRICAL RESULTS FOR HALLUCINATION §.§ Quantifying Hallucinations Our main results across the three benchmark tasks are given in Table <ref> and Table <ref>, and our main findings are as follows.General-purpose LLMs generate factually incorrect content in finance. An initial evaluation of relatively smaller, open-source models, specifically Llama2-7B and Llama2-7B-chat, reveals their comparatively limited capacity to assimilate extensive knowledge within the finance domain. Delving into more sophisticated models, an examination of the GPT4 model, as presented in Table <ref>, demonstrates a 82.5% and 90.4% accuracy when we directly query it in the acronym and stock symbol test. Also, in the long-form generation task that the employed FactScore metric <cit.> delineates an 81.11% score for GPT4 in a direct query setting. While these performances are notable, they inherently suggest a room for error. Taking a deeper look, upon examining the incorrect responses, we notice that certain generated answers are outdated, stemming from obsolete information. For instance, as illustrated in Figure <ref>, the GPT4 model incorrectly references "PERF" as the stock symbol for "Perfumania Holdings," not accounting for its delisting, highlighting a lapse in updating its knowledge base. In the sensitive and dynamic finance domains, these deviations and outdatedness can manifest into significant consequences. Consequently, it is essential for researchers and professionals to adopt a discerning approach when utilizing outputs from large language models and emphasize the necessity of autonomous validation for pivotal data they generate. Multi-task domain-specific finetuning could diminish LMs' general instruction-following abilities. As demonstrated in Table <ref>, FinMA-7B—a model fine-tuned for specific tasks within the finance domain—underperforms its base model, Llama1-7B, in various tasks under both zero-shot and few-shot settings. This trend indicates that while multi-task domain-specific finetuning aims to bolster a model's domain-specific capabilities, it might also lead to a decrease in its overall ability to accurately follow instructions and adapt to new tasks. Such a decrease could result in more frequent occurrences of instruction-inconsistent hallucinations <ref>. 
This observation serves as a cautionary note for the utilization of multi-task finetuning in the development of future domain-specific language models.General-purpose LLMs generate seriously unreliable real-world financial predictions.The application of LLMs to real-world tasks in the finance domain, particularly in predicting stock prices, raises significant concerns regarding the reliability of their outputs, as highlighted by the results presented in Table <ref>. In the zero-shot setup, models such as Llama2-7B and Llama2-7B-chat exhibit alarmingly high Mean Absolute Errors (MAE) of 6357.6 USD and 6380.5 USD, respectively. Furthermore, while the utilization of few-shot prompting allows models to generate responses that are closer to correct answers, there still exists a gap in terms of MAE and accuracy. This high deviation from the true values suggests substantial inaccuracies in their predictions. Interestingly, the models of GPT3.5-turbo and GPT4 opt for a more cautious approach, abstaining from generating responses to some of stock symbol recognition questions and all stock-price-related questions in the absence of external tools. This restraint is praiseworthy as it prevents the propagation of potentially erroneous and misleading financial predictions to users, fostering user trust and minimizing the risk of misinformation in such a critical domain.§.§ Mitigating Hallucination RAG significantly improves the factuality in finance. Evaluating the impact of the RAG on both foundational and finetuned models provides compelling evidence of its efficacy. As can be seen in Table <ref>, integrating RAG consistently elevates the performance of both Llama-2 and Llama-2-chat models. Table <ref> further underscores the advantage of RAG, showing substantially higher FactScores in the long-form generation task for both models when RAG is implemented. Such consistent improvements across multiple metrics and settings confirm that RAG serves as a significant enhancement to both pretrained and instruction-tuned models.Prompt-based tool learning helps significantly on the time-sensitive task. As discerned from Table <ref>, the smaller models, Llama2-7B and Llama2-7B-chat, without the integration of an external tool, exhibit negligible accuracy in handling stock queries. Conversely, the application of the prompt-based tool learning leads to a transformative elevation in performance, with Llama2-7B+tool and Llama2-7B-chat+tool achieving remarkable accuracies of 100.00% with only one training example. This enhancement in reliability and precision is critical, especially within the rapidly evolving landscape of financial domains.It's notable that the sophisticated models, GPT3.5-turbo and GPT4, refrain from addressing stock price queries in a zero-shot setting. However, when augmented with prompt-based tool tool learning, these models are adeptly optimized to provide accurate answers. The integration of zero-shot and few-shot tool learning thus emerges as an effective strategy bridging the knowledge gap and enhancing the reliability of language models, especially in the dynamic domain of finance.Few-shot learning better improves the ability to follow the question-answering format than factuality. As shown in Table <ref>, we observe that the pretrained Llama2-7B and Llama2-13B models improve significantly in their question-answering capabilities under few-shot learning, compared to their zero-shot performance. 
However, this improvement does not markedly surpass the performance of their zero-shot, instruction-tuned counterparts, Llama2-7B-chat and Llama2-13B-chat. Furthermore, for the chat variants, few-shot learning's impact is more limited, yielding only modest improvements. This trend suggests that while few-shot learning aids in adapting to question formats, its role in enhancing factual precision is less substantial. DoLa has limitations in enhancing models with knowledge gaps in training data. The DoLa decoding method is designed to enhance the factual accuracy of LMs by contrasting the outputs from different layers of the model. This approach assumes that higher layers of the model contain more factual knowledge, which can be leveraged to reduce hallucinations and improve the accuracy of responses. However, the effectiveness of DoLa is inherently dependent on the breadth and depth of knowledge encoded in the language model's pretraining dataset. As shown in Table <ref> and Table <ref>, while DoLa can improve factual accuracy in some instances, its effectiveness is limited when the underlying model lacks comprehensive knowledge in its pretraining dataset, such as some stock ticker and stock prices. This limitation is particularly noticeable in scenarios where the model is expected to provide responses based on information that might not be well-represented or current in its training data. Since DoLa relies on amplifying the knowledge already present in the model, its ability to compensate for gaps in the model's foundational knowledge is constrained.§ CONCLUSION, LIMITATIONS AND FUTURE WORKS In this study, we conduct an empirical analysis to assess the hallucination problem of LLMs in the financial domain. Our work demystifies the LLM's reliability and ability of explaining financial terminologies and concepts. Furthermore, a performance analysis reveals the practical viability and performance of these models in querying the historical stock prices. To mitigate hallucinations, we show the effectiveness of the RAG method and prompt-based tool learning to generate correct function calls, thereby ensuring the provision of factually correct and up-to-date information in the finance domain. While our research provides insights into the capabilities of LLMs in the finance domain, it is essential to acknowledge certain limitations. Firstly, our tasks, although representative, cannot encompass the full spectrum of real-world tasks in the vast and varied domain of finance. This implies that our results may not generalize to all possible financial tasks and scenarios. Secondly, the mitigation strategies we introduced and tested are task-specific. Their effectiveness might vary when applied to different tasks or in diverse financial contexts. Future research might need to adapt or expand these strategies to ensure their relevance and effectiveness across a broader range of financial applications.Moving forward, potential avenues for future works include refining the hallucination mitigation techniques for broader financial applications, exploring ways to further increase the accuracy and reliability of LLMs in dynamic financial settings, and understanding the interplay between LLM outputs and financial decision-making. Our findings highlight the critical issue of hallucinations, establishing a groundwork for advancing responsible and reliable LLM deployment in the financial domain.plainnat § OUTPUT EXAMPLES § FEW-SHOT PROMPT TEMPLATES
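The paper's own templates are provided in appendix tables that are omitted here; as a stand-in, the sketch below illustrates the style of a one-shot prompt for the tool-learning variant of the stock price query task, together with the exact-match criterion described for that task. The wrapper name get_stock_price and the prompt wording are hypothetical, not the templates actually used.

    # Hypothetical one-shot prompt for generating an Alpha Vantage wrapper call (stock price query with tools).
    ONE_SHOT_TOOL_PROMPT = (
        "You can answer by writing a call to get_stock_price(ticker, date, price_type).\n"
        "Question: What was the closing price of AAPL on 2022-05-23?\n"
        "Answer: get_stock_price('AAPL', '2022-05-23', 'close')\n"
        "Question: What was the closing price of {ticker} on {date}?\n"
        "Answer:"
    )

    def is_correct_call(generated: str, expected_calls: set) -> bool:
        # A response is correct only if it exactly matches one of the predefined expected call strings.
        return generated.strip() in expected_calls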
http://arxiv.org/abs/2311.15548v1
{ "authors": [ "Haoqiang Kang", "Xiao-Yang Liu" ], "categories": [ "cs.CL", "cs.AI", "cs.LG", "q-fin.ST" ], "primary_category": "cs.CL", "published": "20231127052713", "title": "Deficiency of Large Language Models in Finance: An Empirical Examination of Hallucination" }
EVCap: Retrieval-Augmented Image Captioning with External Visual–Name Memory for Open-World Comprehension Jiaxuan Li, Duc Minh Vo, Akihiro Sugimoto, Hideki Nakayama ================================================================================================== Large language models (LLMs)-based image captioning has the capability of describing objects not explicitly observed in training data; yet novel objects occur frequently, necessitating the maintenance of up-to-date object knowledge for open-world comprehension. Instead of relying on large amounts of data and scaling up network parameters, we introduce a highly effective retrieval-augmented image captioning method that prompts LLMs with object names retrieved from an External Visual–name memory (EVCap). We build an ever-changing object knowledge memory using objects' visuals and names, enabling us to (i) update the memory at a minimal cost and (ii) effortlessly augment LLMs with retrieved object names utilizing a lightweight and fast-to-train model. Our model, which was trained only on the COCO dataset, can be adapted to out-domain data without additional fine-tuning or retraining. Our comprehensive experiments conducted on various benchmarks and synthetic commonsense-violating data demonstrate that EVCap, comprising solely 3.97M trainable parameters, exhibits superior performance compared to other methods of equivalent model size scale. Notably, it achieves competitive performance against specialist SOTAs with an enormous number of parameters. Our code is available at <https://jiaxuan-li.github.io/EVCap>. § INTRODUCTION Advanced image captioning based on large language models (LLMs) <cit.> has focused on ever larger models trained on ever larger datasets, an approach that is no longer viable. This is because the computational cost of training such models increases exponentially and, more importantly, it is almost impossible to update the training data fast enough to keep pace with the growth of novel objects in our daily lives. Sustaining ever-changing object knowledge at a reasonable cost is a pressing concern for LLMs-based models to truly unlock open-world comprehension. Retrieval-augmented image captioning <cit.> is emerging as an alternative since it considerably reduces training costs in both time and data while producing encouraging results. Nonetheless, with their huge datastores, LLMs tend to imitate the retrieved texts, limiting their ability to describe open-world objects properly. For instance, SmallCap <cit.> considers the words “skateboard" and “wooden floor" to be a pair regardless of the visual appearance, which contains the commonsense-violating pair of “ice skates" and “wood floor" (Fig. <ref>, lower). Additionally, prompting the LLMs with a large amount of retrieved text becomes cumbersome, requiring more trainable parameters. Fig. <ref> (upper) shows that the CIDEr scores obtained by a lightweight SmallCap <cit.> with 43M trainable parameters are far away from those obtained by a heavy REVEAL <cit.> with 2.1B trainable parameters. Beyond that, due to the frequent occurrence of new objects, access to their sample texts is not always feasible, making the memory utilized in <cit.> difficult to grow. We thus aim to streamline the external memory used in previous work <cit.> by storing a sufficiently small amount of object information. In this way, not only does the model avoid parroting the example sentences, but the number of trainable parameters is also reduced drastically (Fig.
<ref>).We follow <cit.> to construct a key-value memory where the key is represented by object's features, and the value corresponds to object's name. Unlike <cit.>, which rely on object definition as the key, our method leverages the visual appearance of the object as the key because of the abundance of object images readily available on the internet. We propose an external visual–name memory tailored for ease of expansion and cost-effectiveness in upholding up-to-date object information. We present a highly effective retrieval-augmented LLMs-based image captioning method, called , that prompts frozen LLMs with object names retrieved from our proposed memory for open-world comprehension.contains a frozen image encoder ViT <cit.> and Q-Former <cit.> with trainable image query tokens for object retrieval, an attentive fusion module, a trainable linear layer for mapping between vision and language latent spaces, and a frozen LLM decoder <cit.> for generating captions. Specifically, the attentive fusion module feeds retrieved object names and visual features into a customized frozen Q-Former using trainable object name query tokens to implicitly reduce the presence of superfluous object names. As a result,amounts to only 3.97M trainable parameters. Once trained, the model can be adapted to new domains and large-scale data without further fine-tuning or retraining. Our contributions are as follows: * We provide an extensible external visual–name memory with minimal but useful object information, which enables LLMs-based models to comprehend the open world. * We present a remarkably lightweight and highly efficacious retrieval-augmented image captioningwith 3.97M trainable parameters.On in-/out-domain benchmarks and synthetic commonsense-violating dataset,trained solely on COCO dataset competes with other lightweight methods by a margin while being on par with other specialist SOTAs.§ RELATED WORKImage captioning aims to describe the contents of a given image. It can be roughly divided into two approaches: non-LLMs-based methods and LLMs-based ones. The former approaches <cit.> typically employ a visual encoder and a language decoder in an end-to-end fashion to generate captions. However, they are incapable of describing open-world objects. The latter one leverages pre-trained large-scale vision models (CLIP <cit.>, ViT <cit.>) and LLMs (GPTs <cit.>, T5 <cit.>, LLaMA <cit.>) by bridging the gap between two modalities using either pre-training with large-scale data or the learned mapper or prompt techniques. LLMs-based models <cit.> demonstrate advancements in image captioning challenges, allowing the capacity to describe anything as long as pre-trained vision models can recognize it. Our method belongs to the LLMs-based approaches, but instead of relying fully on the pre-trained vision model, we use object names retrieved from the external memory to augment LLMs-based image captioning. Novel object captioning is a branch of image captioning that describes images containing objects that were not seen during training. Non-LLMs-based methods explore more objects by learning from unpaired image-sentence sources (DCC <cit.>, NOC <cit.>) or relied on novel object detectors to recognize novel concepts (NBT <cit.>, OSCAR <cit.> and VinVL <cit.>). 
LLMs-based methods such as ViECap <cit.> leverage the pre-trained CLIP <cit.> to obtain object entities.Nevertheless, the cut-off in training time of the pre-trained object detector or CLIP prevents it from detecting novel objects that arise quickly in reality. Unlike earlier work, we can readily update our recognition of novel concepts by adding them to external memory, ensuring that we keep any new objects from the past and even the future.Retrieval-augmented image captioning is a recently popular approach that augments the captioning model with retrieved information for better open-world understanding.AoANet <cit.> uses a memory bank of image-sentence pairs and target words. SmallCap <cit.> employs image-to-text retrieval to obtain sampled captions from a captions datastore. RA-CM3 <cit.> retrieves documents from an external memory of a mixture of text and image via a dense multimodal retriever.EXTRA <cit.> and Re-ViLM <cit.> exploit the similarity of the input image and vision candidates to retrieve captions.Unlike previous methods, our external memory contains visual–name pairs to avoid redundant information in the external captions/documents.In addition, we use an attentive fusion module to mitigate the effects of irrelevant retrieved object names on caption generation.§ PROPOSED §.§ Idea ofWe aim to build a retrieval-augmented LLMs-based image captioning model with a sufficiently small yet informative external memory. It involves two challenges: (1) constructing an expandable external memory, and (2) building an effective LLMs-based model using retrieved object names.As discussed above, challenge (1) can be resolved by utilizing the visual appearance of objects.However, if we restrict our memory to only a visual–name pair for each object, our memory will be lacking in diversity. Therefore, we gather several images for each target object. Additionally, we keep the synthetic images in our memory to avoid the harm that synthetic images might cause to our method, as pointed out in <cit.>.With the capability to collect images from the internet,can be easily expanded to include novel objects from the real world effortlessly. We base our method on a frozen pre-trained vision model and LLM with several trainable layers (Fig. <ref>), giving in a model that is cheap to train. To guide the LLM, we adopt a recently popular approach called prompting as in <cit.>. We begin by matching the learned visual features from the input image with image embeddings stored in memory, retrieving object names. We also introduce an attentive fusion module designed to implicitly remove irrelevant retrieved names. Finally, following the attentive fusion, we combine the learned visual features and object name features to form a prompt for the LLM to generate a caption, thus addressing challenge (2).§.§ External visual–name memoryTo build the external visual–name memory, we first collect image–name pairs from the external data source. After that, we encode these images into image embeddings, which serve as keys in memory, and use their names as values.External data source. We utilize object images from LVIS dataset <cit.> to construct our external visual–name memory ℳ. Specifically, we use 1203 objects in LVIS, where we randomly select from one to ten images for each object, amounting to 8581 object images. Furthermore, as mentioned in Sec. <ref>, we also incorporate synthetic images in our memory construction. 
Using stable diffusion <cit.>, we generate five additional images for each object, with a prompt of “a photo of {object name}", resulting in a total of M=14596 (8581 + 5 × 1203) images. Each object image X^i is associated with an object name v^i. Note that many object images may share the same object name. For the sake of simplicity, we may regard each image as corresponding to a single name. In summary, we have M image–name pairs {(X^i,v^i)}^M_i=1 for external memory construction. External memory construction. For each image X^i, we use a frozen vision encoder ℰ(·) (see Sec. <ref> for detail) to project it into 32 embeddings with the size of 1×768 each: {𝐤^i_1,𝐤^i_2, ⋯, 𝐤^i_32}=ℰ(X^i). We then average the 32 embeddings to produce a single embedding 𝐤^i (1×768) that serves as the key (visual) in ℳ. The paired object name v^i acts as its value (name). Consequently, we have the visual–name memory ℳ={(𝐤^i, v^i)}^M_i=1, which is indexed using FAISS <cit.>, facilitating rapid searches based on similarity measures. Our memory can be expanded effortlessly by gathering additional visual–name pairs (see Sec. <ref>). §.§ Object names retrieval Image encoding. We feed the image X and image query tokens 𝐓_img into a frozen vision encoder ℰ to produce visual features 𝒬. To make the retrieval process controllable, we let the image query tokens be trainable. Thus, the image encoding process can be summarized as 𝒬=ℰ(X, 𝐓_img). We use the BLIP-2 pre-trained vision encoder <cit.>, which consists of a pre-trained vision transformer ViT-g <cit.> outputting image features (257 × 1408), and a Q-Former receiving the image features and producing |𝒬|=32 learned visual features (1 × 768 each). We denote 𝒬={𝐪_1,𝐪_2, ..., 𝐪_32}. Retrieval. Having obtained 𝒬, we calculate the cosine similarity between the query 𝐪_j∈𝒬 and the key 𝐤^i∈ℳ. The similarity calculation is given by SIM(𝐪_j, 𝐤^i) = 𝐪_j^⊤𝐤^i / (‖𝐪_j‖ ‖𝐤^i‖), where i∈ [1, M], j∈ [1, 32]. Given each 𝐪_j, we select the key with the highest similarity score, resulting in 32 key–value candidates {𝐤_j^best, v_j^best}^32_j=1. After that, we filter out candidates with repeated object names (values), and then select the top-K values. In particular, we determine the index j^⋆ of the key that has the highest SIM score. The selected values are relabelled as v_l in the retrieved top-K object names for the input image, which can be summarized as follows: {𝐤_j^best, v_j^best} = arg max_𝐤^i∈ℳ SIM(𝐪_j, 𝐤^i), j^⋆ = arg max_j SIM(𝐪_j, 𝐤_j^best), v_l ← v_j^⋆^best, where l∈[1, K]. As a result, the retrieved top-K object names are {v_l}^K_l=1. §.§ Attentive fusion Since the object names obtained from the retrieval process may be redundant, we develop an attentive fusion module to selectively distill object name features. The retrieved object names {v_l}^K_l=1 are concatenated together into a sequence 𝒮, each separated by a delimiter: 𝒮={v_1, , v_2, , ⋯, , v_K}. The sequence 𝒮 and the visual features 𝒬 are fed into a customized Q-Former ℱ(·), which is constructed from the same frozen pre-trained Q-Former used in the vision encoder ℰ. However, in order to enable the object names to receive attention from the visual features, we switch the image embedding port and the text instruction port (see <cit.> for architecture detail). Like in the image encoding process in Sec.
<ref>,we make the object name query tokens 𝐓_obj learnable during training to assist in learning object name features related to the caption.The size of 𝐓_obj is P × 768, where P indicates the number of object name query tokens.We get the object name features 𝒱=ℱ(𝒮,𝒬,𝐓_obj).§.§ Caption generation Before inputting the visual features 𝒬 and object name features 𝒱 into the LLM decoder, we concatenate (⊕) them and use a linear layer ϕ(·) to project them into the input latent space of the LLM as ϕ(𝒬⊕𝒱). The LLM used for caption generation in this work is the pre-trained Vicuna-13B <cit.>, an open-source chatbot constructed from LLaMA <cit.>. During training and evaluation, we design a prompt in a conversational format, that is similar to <cit.>:0.99in which,denotes the projected feature ϕ(𝒬⊕𝒱) after the linear layer.In training phase, given input caption tokens {c_i}^L_i=1, the LLM decoder concatenates the embedded prompt {𝐰_i}^N_i=1 and the embedded caption tokens {𝐜_i}^L_i=1 as input, and predicts the caption tokens in an autoregressive fashion, while in the evaluation phase, we only need to input the embedded prompt. We trainby minimizing the cross-entropy loss in an end-to-end way: ℒ_θ=-∑_i=1^L log p_θ(c_i|𝐰_1, ... 𝐰_N, 𝐜_1, ..., 𝐜_i-1), in which θ indicates the trainable parameters. § EXPERIMENTAL SETTINGS§.§ Training setup Implementation. uses the same image encoder as in BLIP-2 <cit.>, consisting of a ViT-g <cit.> and their pre-trained Q-Former.Since we intend to obtain object name features through cross-attention between retrieved object names and visual features, we develop a customized Q-Former, which consists of BERT <cit.> with cross-attention layers inserted at every other transformer block.We use a frozen Vicuna-13B <cit.> as the caption generator. Training dataset. For all experiments, we exclusively trainusing the training set of COCO dataset <cit.>, consisting of 82k images and 5 captions per images. The entire training process takes about 3 hours on 4 A6000 GPUs, using mixed precisions (more details in the supplementary). §.§ Evaluation setup Evaluation dataset. We evaluate , trained using the COCO training set, across four datasets: its test set, two challenging benchmarks – NoCaps validation set and Flickr30k test set, and a synthetic commonsense-violating dataset – WHOOPS. We adhere follow prior work <cit.> to use the same images of Karpathy split <cit.> on COCO test set, NoCaps <cit.> validation set, and Karpathy split on Flickr30k <cit.> test set. In addition, WHOOPS <cit.> is a synthetic image captioning dataset comprising 500 synthetic commonsense-violating images and 2500 paired captions. Compared methods. We comparewith several SOTAs. According to the trainable parameters size, they can be divided into 1) Heavyweight-training (between 100M to 5B): VinVL <cit.>, AoANet <cit.>, NOC-REK <cit.>, RCA-NOC <cit.>, ViECap <cit.>, InstructBLIP <cit.>, OSCAR <cit.>, BLIP <cit.>, BLIP-2 <cit.>, REVEAL <cit.>;2) Lightweight-training (less than 100M): MiniGPT4 <cit.>, SmallCap <cit.>, ClipCap <cit.>; and also 3) Specialist SOTAs with huge trainable parameters (larger than 5B):Qwen-VL <cit.>, CogVLM <cit.>, PaLI <cit.>, PaLI-X <cit.>. Among these methods, AoANet, NOC-REK, RCA-NOC, REVEAL, and SmallCap are retrieval-augmented captioning methods. § EXPERIMENTAL RESULTS §.§ Results on in-/out-domain benchmarksWe assessagainst SOTAs on both in-domain and out-domain benchmarks. The COCO test set can be considered as in-domain data as we only train our model on the COCO training set. 
Out-domain benchmarks are the NoCaps validation set and the Flickr30k test set.Quantitative results. Tab. <ref> details our ’s performance in comparison with SOTA methods. We first evaluate training costs in terms of training data sizes and parameters.Similar to various heavyweight-training models that exclude LLMs and the majority of lightweight-training models,is trained solely on the COCO training set. It utilizes only 3.97M trainable parameters, positioning it as the second smallest, slightly larger than MiniGPT4 with 3.94M. Among lightweight-training models, our approach outperforms others, achieving the highest scores on all benchmarks. Despite using less training data and nearly identical trainable parameters as MiniGPT4,significantly surpasses it, with a marked improvement of 10.5, 10.5, and 6.0 in CIDEr scores for each benchmark.When further compared with heavyweight-training models, the performance ofstands out among million-level models, nearly matching InstructBLIP, except in NoCaps.Note that since BLIP-2 does not include Vicuna checkpoints, InstructBLIP performs pre-training with Vicuna using the same procedure as BLIP-2, whereasdoes not involve pre-training. Against REVEAL, which also uses external memory, ourutilizes about 1/3000 training data and 1/500 training parameters yet yields comparable results.Moreover, ’s performance is on par with BLIP-2, the top-performing model with 1.2B trainable parameters. This highlights ’s efficiency and effectiveness despite its significantly smaller training cost, thanks to our external visual–name memory. Regarding specialist SOTAs, they use billion-level training data and over 5B trainable parameters, so it is acceptable that they can achieve exceptionally strong performance, surpassingby nearly 10 on all benchmarks in CIDEr scores. Qualitative results.Fig. <ref> presents a comparison of captions generated by ourand three SOTA models across three benchmarks.The captions of SmallCap are generated by its publicly accessible demo <cit.>.We generate captions of MiniGPT4 and BLIP-2 using their respective pre-trained models. As a lightweight and retrieval-augmented captioning method, SmallCap struggles to produce accurate captions for given images, primarily because it relies on retrieved captions laden with extraneous information. MiniGPT4, though aligned with the primary content of images, sometimes misses certain objects like “trees" and “headphones". This oversight stems from its focus on the main objects in images, without integrating additional cues for other objects provided by the retrieved object names. In contrast, the captions generated by ourare comparable to those of BLIP-2. §.§ Results on commonsense-violating data To explore our 's capability in describing contents in open-word settings, we further evaluate it on WHOOPS dataset, which contains commonsense-violating images. Quantitative results. In Tab. <ref>, we compare the performance of , MiniGPT4, BLIP, and BLIP-2 on WHOOPS dataset.This dataset is particularly challenging due to its inclusion of unusual objects <cit.>. Initially, as an end-to-end trained model, ourexhibits performance similar to MiniGPT4. However, there is a noticeable improvement in the CIDEr score, after the external memory is enriched with 2396 new objects from the WHOOPS dataset, each represented by 5 synthesized images generated using stable diffusion <cit.>. 
It highlights the effectiveness of our idea of incorporating an expandable external memory into the captioning model for open-world comprehension.Qualitative results. Fig. <ref> illustrates the captions generated by ,(w/WHOOPS), and three SOTAs for two images from the WHOOPS dataset.The first image challenges common sense as it unusually pairs “Einstein" with “racing car". While all SOTAs simply refer to “Einstein" as “a man", ourand (w/WHOOPS) correctly identify him. The second image shows another unusual example. Similar to other methods except for BLIP-2,can not recognize “blue cartoon character" as “Pikachu", while(w/WHOOPS) successfully predicts it because of the updated memory.In these two images, SmallCap and MiniGPT4 tend to generate captions with hallucinatory objects, a result of commonsense-violating contents present in the images.§.§ Detailed analysis Ablation study. We assess the contribution of each component prior to the LLM decoder inby incrementally integrating the image query tokens and the attentive fusion module into our baseline model. The baseline model comprises a ViT+Q-Former, a linear layer, and a LLM decoder. The quantitative results are shown in Tab. <ref>. When employing only the baseline model (Baseline), CIDEr scores drop notably by 5.7, 10.5, and 7.6 on COCO, NoCaps, and Flickr30k, respectively. The inclusion of trainable image query tokens (Baseline+) brings a marginal improvement on NoCaps and Flickr30k. However, the performance is significantly enhanced with the addition of attentive fusion (along with the introduction of external memory), indicating the pivotal role of the external visual–name memory in the overall effectiveness of . This is further corroborated by the qualitative results in Fig. <ref>, where captions from Baseline and Baseline+ inaccurately include objects like “couch" and “bed", and Baseline+ overlooks “hand".Exploration for external memory expandability.To demonstrate the scalability of the external memory in , we visualize the visual features stored in LVIS external memory, and newly synthesized data from objects appearing in the WHOOPS dataset. We employ t-SNE <cit.> to plot visual features after reducing their dimensions to 2-D (Fig. <ref>). For clear visualization, we only randomly display 3649 visual features in LVIS memory, and add 479 visual features from WHOOPS objects. Among them, 35 samples are randomly labeled. The result shows a clear clustering of LVIS objects (blue) in the external memory, as well as the successful integration and appropriate localization of new objects from WHOOPS (red) into these clusters.This pattern not only confirms the distinctiveness of visual features already present in the memory but also demonstrates the potential to accurately incorporate and differentiate new objects introduced from updated data. These findings highlight our external memory's ability to expand and maintain its effectiveness even as new data is incorporated. Impact of external memory size. We examine the impact of external memory size in Tab. <ref>. On the one hand, we randomly remove 30%, 60%, and 90% data in the external memory constructed from LVIS objects. The results show the performance gradually degrades on NoCaps as reducing 30% and 90% LVIS.Despite some unexpected increases in certain results on NoCaps (5th row) and Flickr30k (4th - 5th rows), they do not alter the overall downward trend. Similar phenomena are also noted in SmallCap <cit.>, we speculate it is due to data distribution. 
On the other hand, as we infuse WHOOPS knowledge into LVIS memory, there is a slight improvement on NoCaps (out) and Flickr30k.These observations validate the model's capability to effectively retrieve object names from an updated memory, enhancing its performance in generating captions.Impact of the number of retrieved object names. We investigate how the number of retrieved object names K (Sec. <ref>) affectin Fig. <ref>. We train the model with K from 1 to 20 and evaluate the performance under CIDEr on all three benchmarks. From the results, we can find that the model works worst on the out-domain dataset (NoCaps) when only one retrieved object name is used. As we gradually add more object names, performance fluctuates but improves. This pattern aligns with our intuition that when using one object name, the model will make errors or miss some objects in generated captions due to the incorrect object name. When we add more object names, we increase the error tolerance, and the attentive module inautomatically pays more attention to image-related object names, thus improving results. Furthermore, we observe that setting K to 10 yields relatively optimal overall performance, validating the choice of K=10 in .Analysis with different decoders. To explore the influence of different LLMs decoders on our , we experiment by substituting Vicuna-13B with GPT2 and Vicuna-7B, as detailed in Tab. <ref>. With GPT2 as the decoder,still markedly surpasses other GPT2-based models, achieving impressive gains of 11.3 and 10.0 under CIDEr on COCO and Flickr30k, compared to SmallCap. When employing Vicuna-7B, the comparison of performance trends mirrors those observed with Vicuna-13B, further attesting to the robustness and adaptability ofacross different LLM decoders. Notably, both SmallCap, which retrieves captions, and our GPT2-based , which retrieves object names, use the same GPT2 decoder. Therefore, their comparison also underscores the effectiveness of our method's object name retrieval and attentive fusion strategy.Limitations.First,cannot retrieve all objects that appear in the given image, leading to incomplete image descriptions as the second example in Fig. <ref>. We will investigate integrating object detection with image captioning to enhance completeness. Second, our focus on object representation restricts consideration of other crucial captioning elements, affecting overall performance. Similar to all models trained on the COCO dataset, ourinherits its captioning style, which limits its ability to generate varied styles. This limitation is reflected in our relatively modest performance improvements in Tab. <ref>, compared to MiniGPT4. We will overcome this limitation by exploring methodologies that encourage style diversity in the future.§ CONCLUSIONWe further advance image captioning in real-world scenarios by introducing , a novel image captioning model with object names retrieved from an external visual–name memory. The external memory is easily expandable, allowing for effortless updates with new object visuals and names.stands out for its efficiency, comprising merely 3.97M trainable parameters, yet delivering robust performance. We extensively comparewith SOTAs on various benchmarks and commonsense-violating data, demonstrating its significant superiority in performance. 
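To make the external memory construction and object name retrieval of the method section concrete, the following is a minimal sketch, not the released EVCap code: each key is the average of the 32 Q-Former embeddings of an object image, cosine similarity is implemented as an inner product over L2-normalised vectors in a FAISS index, and the 32 per-query candidates are de-duplicated before keeping the top-K names (K=10 in the paper).

    import numpy as np
    import faiss

    def build_memory(per_image_embeddings, names):
        # per_image_embeddings: list of (32, 768) arrays, one per object image; names: parallel list of strings.
        keys = np.stack([e.mean(axis=0) for e in per_image_embeddings]).astype("float32")
        faiss.normalize_L2(keys)                    # cosine similarity via inner product on unit vectors
        index = faiss.IndexFlatIP(keys.shape[1])
        index.add(keys)
        return index, names

    def retrieve_object_names(query_features, index, names, top_k=10):
        # query_features: (32, 768) learned visual features Q of the input image.
        q = np.ascontiguousarray(query_features, dtype="float32")
        faiss.normalize_L2(q)
        sims, ids = index.search(q, 1)              # best-matching key for each of the 32 query features
        best = {}                                   # drop repeated names, keeping each name's highest score
        for j in range(q.shape[0]):
            name = names[ids[j, 0]]
            best[name] = max(best.get(name, float("-inf")), float(sims[j, 0]))
        ranked = sorted(best.items(), key=lambda kv: -kv[1])
        return [name for name, _ in ranked[:top_k]]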
ieeenat_fullname[ §SUPPLEMENTARY MATERIAL FOR: RETRIEVAL-AUGMENTED IMAGE CAPTIONING WITH EXTERNAL VISUAL–NAME MEMORY FOR OPEN-WORLD COMPREHENSION]This supplementary material complements our paper with the following sections: First, we delve into the implementation specifics of our , which were not covered in the main paper (see Sec. <ref>). Second, we offer an expanded discussion on the external visual-name memory, as utilized in the main paper (see Sec. <ref>). Finally, we present additional results to evaluate the effectiveness of(see Sec. <ref>). § IMPLEMENTATION DETAILSOur method is based on Pytorch and is trained within one epoch with a batch size of 24 using mixed precisions. We optimize the model using AdamW, setting the weight decay at 0.05, and using β_1 and β_2 values of 0.9 and 0.99, respectively. A cosine learning rate (LR) decay strategy is adopted, starting with an initial LR of 1e-4. The model undergoes 5000 linear warm-up steps, beginning with a start LR of 1e-6. During the evaluation phase, we use a beam search strategy with a beam size of 5 to generate captions. § EXTERNAL VISUAL–NAME MEMORY §.§ LVIS memoryAs stated in Sec. 3.2 of the main paper, we utilize 1203 objects from the LVIS dataset. For each of these objects, we randomly select between one and ten images from LVIS. Additionally, we enrich our data by incorporating five synthetic images for each object, created using stable diffusion. We show two samples of this external visual-name memory, constructed using objects from LVIS in Fig. <ref>. §.§ WHOOPS memoryTo illustrate the scalability of the external memory in , we expand it by integrating WHOOPS knowledge into the original external visual–name memory in Sec. 5.2 and Sec. 5.3 of the main paper. Specifically, we focus on objects that are mentioned in the answers of VQA annotations in the WHOOPS dataset because of their conciseness and emphasis on key objects. For each of these objects, we produce five synthetic images employing stable diffusion. Two examples from this augmented memory, featuring newly added object images and their corresponding names, are presented in Fig. <ref>. § ADDITIONAL RESULTS §.§ Experiments on NoCaps test setWe additionally assess ouragainst SOTAs on the NoCaps test set, since we notice several other methods have also benchmarked their performance on this dataset. Note that, NoCaps test set does not have publicly accessible ground truth annotations. To obtain evaluation scores, we submitted our results to the NoCaps leaderboard.Quantitative results. Tab. <ref> presents the quantitative results of ouron the NoCaps test set. Our method outperforms all other SOTA models, both in heavyweight-training and lightweight-training categories, that have reported results on this dataset. Additionally, as a lightweight method, our approach achieves the 8th rank on the NoCaps leaderboard, only surpassed by specialized SOTAs such as CogVLM, which holds the 1st rank. Qualitative results. Fig. <ref> shows the captions generated by ouralongside those from three SOTA methods on the NoCaps test set. It also includes the object names retrieved byand the captions retrieved by SmallCap. Consistent with the findings in the main paper, SmallCap tends to generate hallucinatory objects that are absent in the input images, such as “tie" and “mouse". The same hallucinatory object “mouse" is also found in the retrieved captions, indicating that SmallCap's diminished performance is largely due to its reliance on retrieved captions containing irrelevant information. 
In comparison, our EVCap demonstrates a performance on par with BLIP-2. §.§ Further analysis Comparison of training time and GPUs used. Tab. <ref> compares the training time and the GPUs used by our EVCap and various SOTA models. Due to the diversity of GPUs employed across different models, drawing a direct comparison is challenging. Nevertheless, it is evident that the training time of our EVCap is shorter than that of most models. Number of object name query tokens. We explore the impact of varying the number of object name query tokens P (Sec. 3.4 of the main paper) on EVCap in Fig. <ref>. We train the model using different values of P, ranging from 2 to 10, and evaluate the performance under CIDEr on all three benchmarks. The results suggest that setting P=8 offers relatively optimal results. §.§ More qualitative examples More qualitative examples on the COCO test set, NoCaps validation set, and Flickr30k test set are shown in Fig. <ref>, Fig. <ref>, and Fig. <ref>, respectively.
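As a companion to the implementation details above, the optimization schedule described in Sec. <ref> (AdamW with weight decay 0.05 and betas (0.9, 0.99), 5000 linear warm-up steps starting from 1e-6, then cosine decay from the base LR of 1e-4) can be written as the following minimal sketch; the model, the total number of steps, and the data are placeholders rather than the released training code.

import math
import torch

def build_optimizer_and_scheduler(model, total_steps, base_lr=1e-4,
                                  warmup_steps=5000, warmup_start_lr=1e-6):
    # AdamW with the hyper-parameters reported in the implementation details
    optimizer = torch.optim.AdamW(model.parameters(), lr=base_lr,
                                  weight_decay=0.05, betas=(0.9, 0.99))
    start = warmup_start_lr / base_lr     # factors below multiply base_lr

    def lr_lambda(step):
        if step < warmup_steps:           # linear warm-up: warmup_start_lr -> base_lr
            return start + (1.0 - start) * step / warmup_steps
        # cosine decay from base_lr towards 0 over the remaining steps
        progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
        return 0.5 * (1.0 + math.cos(math.pi * progress))

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
    return optimizer, scheduler

# toy usage: a stand-in module trained for a few steps with batch size 24
model = torch.nn.Linear(8, 8)
opt, sched = build_optimizer_and_scheduler(model, total_steps=20000)
for step in range(3):
    loss = model(torch.randn(24, 8)).pow(2).mean()
    loss.backward(); opt.step(); opt.zero_grad(); sched.step()

At inference time the captions would then be decoded with beam search of width 5, for instance via a HuggingFace-style generate(..., num_beams=5) call if the decoder is wrapped as a transformers model.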
http://arxiv.org/abs/2311.15879v1
{ "authors": [ "Jiaxuan Li", "Duc Minh Vo", "Akihiro Sugimoto", "Hideki Nakayama" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231127145137", "title": "EVCap: Retrieval-Augmented Image Captioning with External Visual-Name Memory for Open-World Comprehension" }
Department of Mathematics, University of Oregon, Eugene, OR 97403–1222, USA [email protected] The author was partially supported by Simons Foundation Grant #849676[2010]33C45, 42C05, 42C10, 65D15, 65D32.We study orthogonal polynomials for a weight function defined over a domain of revolution, where the domain is formedfrom rotating a two-dimensional region and goes beyond the quadratic domains. Explicit constructions of orthogonal bases areprovided for weight functions on a number of domains. Particular attention is paid to the setting when an orthogonalbasis can be constructed explicitly in terms of known polynomials of either one or two variables. Several new families oforthogonal polynomials are derived, including a few families that are eigenfunctions of a spectral operator and theirreproducing kernels satisfy an addition formula.Orthogonal polynomials on domains of revolution Yuan Xu January 14, 2024 ===============================================§ INTRODUCTIONThe structure of orthogonal polynomials (OPs) in the multidimensional setting is much more complex than that in one variable.Not only an orthogonal basis is more difficult to compute and work with (cf. <cit.>), but it is also much harder to utilize multivariate OPs in analysis and computational analysis. It is therefore not surprising that the most well-studied OPsare those families on regular domains, for which an orthogonal basis can be given explicitly in terms of the OPs in onevariable (cf. <cit.>). Such domains include, for example, the product domains, the balls, and simplexes in the Euclidean space.The latter two cases have classical OPs that are orthogonal with respect to weight functions akin to the Jacobi weight in onevariable, and they have been extensively studied in the literature, not only for computational purposes but also as main toolsfor approximation theory and harmonic analysis on the domains (cf. <cit.>). In both cases, an orthogonal basis canbe given in terms of the Jacobi polynomials, which makes it possible to develop fast algorithms for computing the bases(cf. <cit.>), and to study the orthogonal structure and uncover its intrinsic properties. It follows, for example, that the spaceof OPs of degree n is an eigenspace of a second-order differential operator, aka spectral operator, and the reproducingkernel of the space enjoys a closed-form formula, aka addition formula. Together, the spectral operator and the additionformula provide essential tools for approximation theory and harmonic analysis in the localizable space of homogeneoustype <cit.>.Recently, new families of OPs have been studied for quadratic domains of revolution. There are two types of such domains, quadratic surfaces, and solid domains bounded by quadratic surfaces and, if compact, hyperplanes.The latter is of the form^d+1 = {(x,t): x≤ϕ(t),x ∈^d,t ∈ [a,b] } where [a,b] could be _+ orand ϕ is either a linear polynomial or the square root ofa polynomial of degree at most 2, nonnegative over [a,b]. The domain includes cones, hyperboloids, andparaboloids in ^d+1 <cit.>. In several of these domains, orthogonal bases for a family ofweight functions can be given explicitly in terms of OPs of one variable, which greatly facilitates their computation.More importantly, some of the bases can be given in terms of the classical OPs of one variable, which makes itpossible to identify domains and weight functions that possess a spectral operator and an addition formula. 
In particular,we know that both properties exist for a family of weight functions on the cone <cit.> and, at least partially, on the hyperboloid <cit.>. These properties allow us to carry out extensive analysis for several problems on thesedomains <cit.>. The purpose of this paper is to study OPs on other solid domains, beyond the quadratic ones, of revolution. In thecase of the quadratic domain of revolution, our OPs are constructed as a wrapped product of polynomials in the t variable and OPs on the unit ball ^d of ^d, and their orthogonality is established via the decompositionof the integral∫_^d+1 f(x,t) w(t) (t^2-x^2)^μx̣ṭ = ∫_0^∞ |ϕ(t)|^d ∫_^d f(t y,t) (1-y^2)^μỵw(t) t^2 μṭ.For the study in this paper, we change our point of view and obtain a domain in ^3,for example, by rotating a domain in the positive quadrant of ^2 around the t axis. More generally, letΩ_+ be a domain in _+^2 = {(s,t): t ≥ 0} and symmetric in the s variable. We consider the domainof revolution defined by^d+1 = {(x,t) ∈^d+1:(x, |t|) ∈Ω_+,x ∈^d,t∈},which is not necessarily a quadratic domain, and equipped it with the weight function (x,t), whereisdefined on Ω_+ and is symmetric in its first variable. In this setting, the integral over the domain can bedecomposed as ∫_^d+1 f(x,t) w(x,t) x̣ṭ= ∫_Ω_+∫_ f(s ξ,t) (ξ) (s,t) s^d-1ṣṭ,which suggests that one may construct OPs on ^d+1 by OPs of two variables on Ω_+ and spherical harmonics. The construction, however, is not obvious, nor is it simple, especially if we want to obtain bases givenexplicitly in terms of classical OPs. What it entails is a careful study of the relation between OPs on Ω_+and those on ^d+1. We shall concentrate, as reasoned in the first paragraph, on orthogonal bases that can be constructed in terms of OPs that are explicitly known, either as OPs of two variables or one variable. While our study leads to several new families of OPs for domains, and weight functions, that have not been considered before, the study also shows that the constructionof orthogonal bases on ^d+1 can be quite subtle. For example, the cone can be regarded as a rotation withΩ_+ being the right triangle. One may ask if we can consider other triangles, which are after all just an affine mapping from the right triangle. The answer, however, turns out to be mostly negative for the cone but, nevertheless,positive for the hyperboloid and double cone. If the domain Ω_+ is extended symmetrically to the negativevalues of t variable so that ^d+1 is symmetric in the t direction, we obtain several distinct domains withweight functions that possess not only explicit orthogonal bases but also the spectral operator and additionformula, and they can be derived by relating to the results of the hyperboloid. These results expand our knowledge ofdomains that possess these two essential properties and, as a result, grant access to the framework <cit.> for approximation and harmonic analysis on such domains. The paper is organized as follows. The next section is devoted to the background and preliminary and containsa review of several families of classical OPs that will be needed. The new setup for OPs over domains of revolution will bedeveloped in the third section, where no additional restriction is imposed on Ω_+. 
In the fourth section, weassume that the domain Ω_+ can be extended to a fully symmetric domain Ω in ^2 and studyOPs of two variables on Ω, so that they can be used for constructing orthogonal bases on domains of revolution.While several examples based on parallelograms in ^2 are given in the fourth section, the examples corresponding to triangle domains are given in the fifth section, which contains new families of OPs that possess the twoessential properties mentioned above. § BACKGROUND AND PRELIMINARYWe start with a review of classical OPs of one variable and two variables, which will be needed later, in the first twosubsections, and spherical harmonics and classical OPs on the unit ball in the third section. In the fourth subsection,we recall what is known for OPs over domains of revolution and lay down the basics for the study in the latter sections.§.§ OPs of one variable We are interested in orthogonal bases that can be expressed in terms of classical OPs. Since we are mostly interested in the compact setting, we first recall the Jacobi polynomials and their variants.§.§.§ Jacobi polynomialsFor , >̱ -1, the Jacobi weight function is defined by w_,(t):=(1-t)^(1+t)^,̱ -1 < x <1.Its normalization constant c'_,, defined by c'_,∫_-1^1 w_, (x)dx = 1, is given byc'_, = 1/2^++̱1 c_,with c_, := Γ(++̱2)/Γ(+1)Γ(+̱1),where c_, is the normalization constant of the Jacobi weight t^(1-t)^$̱ on the interval[0,1].The Jacobi polynomial of degreenis defined byP_n^(,)̱(t) = (+1)_n/n!_2F_1( -n, n+++̱1+1 ; 1-t/2).These polynomials are orthogonal with respect tow_,on[-1,1]; more precisely,c_,' ∫_-1^1 P_n^(,)̱(t) P_m^(,)̱(t) w_,(t) ṭ = h_n^(,)̱δ_m,n,whereh_n^(,)̱is the square of theL^2norm that satisfies (see, for example, <cit.>)h_n^(,)̱ =(+1)_n (+̱1)_n(++̱n+1)/n!(++̱2)_n(++̱2 n+1).For= =̱ ł-1/2, the weight function is the Gegenbauer weightw_ł(t):= w_ł- 12, ł - 12(t) = (1-t^2)^ł-12, ł >- 12,and the corresponding OPs are the Gegenbuer polynomialsC_n^ł, usually normalized byC_n^ł(1) = (2ł)_n/n!, where(a)_n = a (a+1)⋯(a+n-1)is the Pochhammer symbol. §.§.§ Generalized Gegenbauer polynomialForł,μ> -12, the generalized Gegenbauer polynomialsC_n^(ł,μ)satisfy the orthogonal relationΓ(ł+μ)/Γ(ł+12)Γ(μ+12)∫_-1^1 C_n^(ł,μ)(x) C_m^(ł,μ)(y)|x|^2μ (1-x^2)^ł -12x̣=_n^(ł,μ)δ_m,n.The polynomialsC_n^(ł,μ)are given explicitly by <cit.> C_2m^(ł,μ)(t)= (ł+μ)_m/(μ+12)_m P_m^(ł-12,μ-12)(2 t^2 -1),C_2m+1^(ł,μ)(t)= (ł+μ)_m+1/(μ+12)_m+1 t P_m^(ł-12,μ+12)(2 t^2 -1),whereP_n^(a,b)are the standard Jacobi polynomials. The norm square of these polynomials are equal to <cit.> _2m^(ł,μ) = (ł+12)_m(ł+μ)_m(ł+μ)/m!(μ+12)_m(ł+μ+2m), _2m+1^(ł,μ)=(ł+12)_m(ł+μ)_m+1(ł+μ)/m!(μ+12)_m+1(ł+μ+2m+1).Furthermore, in terms of the evaluation of the polynomials att=1, we can write_n^(ł,μ) = ł+μ/n+ł+μ C_n^(ł,μ)(1).§.§ Orthogonal polynomials in two variablesLetΩbe a compact domain in^2with a positive area. Letbe a weight function onΩ.[Throughout the paper we adopt the convention that the letters , , , etc.,in the sansmath font, are reserved for functions and polynomials in two variables.]We consider OPs with respect to the inner productf, g_ = c_∫_Ω f(u,v) g(u,v) (u,v) ụṿ,wherec_is the normalization constant ofsuch that1, 1 _= 1. We denote by_n(, Ω)the space of OPs of degreenforn ∈_0. It is known that_n(,Ω) = n+1,n = 0,1, 2, ….We give two examples that will serve as the building blocks for our orthogonal basis on higher dimensions. §.§.§ The product domain □= [-1,1]^2Let(u,v) = w_1(u) w_2(v), wherew_1andw_2are weight functions on[-1,1]. 
Letp_n(w_j)be orthogonal polynomials of degreenwith respect to the weight functionw_jforj =1,2. Then the product polynomials_k^n(u,v) = p_k(w_1; x) p_n-k(w_2; y),0 ≤ k ≤ n,form an orthogonal basis for_n(,□). §.§.§ The triangleand the Jacobi polynomials on the triangleLetdenote the triangle defined by= {(u,v): u ≥ 0, v ≥ 0, u+v ≤ 1}.For,,̱> -1, the classical Jacobi weight on the triangle is defined by _,,̱(u,v) = u^ v^(̱1-u-v)^, ,,̱ > -1.Let·,·_,,̱be the inner product defined byf,g _,,̱ =^_,,̱∫_ f(u,v) g(u,v) _,,̱(u,v) ỵṿ.where^_,,̱is the normalization constant of_,,̱,^_,,̱ = Γ(++̱+3)/Γ(+1)Γ(+̱1)Γ(+1).Several orthogonal bases for_m(_,,̱; )can be given explicitly in terms of the Jacobi polynomials.The triangle and the weight function_,,̱are symmetric under the simultaneous permutation of(u,v,1-u-v)and(,,̱). Hence, permuting(x,y,1-x-y)and(,,̱)simultaneously of an orthogonal basis leads to anotherorthogonal basis. We give three orthogonal bases and start with _j,m^,,̱ (u,v) = P_m-j^(2j+ ++1,)̱(2 v -1) (1-v)^j P_j^(,)(1- 2u/1-v),0 ≤ j ≤ m.The set{_j,m^,,̱: 0 ≤j ≤m}is an orthogonal basis for_m(_,,̱; ). Permutation ofvariables and parameters simultaneously leads to two more bases. The first one is given by_j,m^,,̱(u,v)= _j,m^,, (1- u-v,u)or, more explicitly, _j,m^,,̱ (u,v) = P_m-j^(2j+ +̱+1,)(2 u -1) (1-u)^j P_j^(,)̱( 2v/1-u -1),0 ≤ j ≤ m.The second one is given by_j,m^,,̱ (u,v) =_j,m^,̱, (v, 1- u-v)or, more explicitly,_j,m^,,̱ (u,v) = P_m-j^(2j+ ++̱1,)(1-2 u -2v) (u+v)^j P_j^(,̱)( u-v/u+v),0 ≤ j ≤ m.Each of the three sets {_j,m^,,̱: 0 ≤ j ≤ m}, {_j,m^,,̱: 0 ≤ j ≤ m}, and{_j,m^,,̱: 0 ≤ j ≤ m} is an orthogonal basis for _m(_,,̱; ). Moreover, _j,m^,,̱,_j,m^,,̱_,,̱ = (+1)_n-k (+̱1)_k (+1)_k (+̱+2)_n+k/(n-k)!k!(+̱+2)_k (++̱+3)_n+k×(n+k+++̱+2)(k++̱+1)/(2n+++̱+2)(2k++̱+1) =: _j,m^,,̱for 0 ≤ j ≤ m and _j,m^,,̱,_j,m^,,̱_,,̱ = _j,m^, ,and_j,m^,,̱,_j,m^,,̱_,,̱ = _j,m^,̱,.The above bases are derived by the separation of variables. More generally, if the weight functionwsatisfies(u,v) = w_1(v) w_2(u/1-v), then we can derive an orthogonal basis for_n(, )interms of OPs with respect tow_1andw_2<cit.>. We can consider, for example, _,,̱,(u,v) = u^ v^(̱1-u-v)^ (1-v)^, ,,̱, > -1.This can be written asw_1(v) w_2(u/1-v), wherew_1(v) = v^(̱1-v)^++andw_2(v)= v^(1-v)^. With this separation of variables, we can define <cit.> _j,m^,,̱, (u,v) = P_m-j^(2j+ +++1,)̱(2 v -1) (1-v)^j P_j^(,)(1- 2u/1-v)and conclude that{_j,m^,,̱,: 0 ≤j ≤m}is an orthogonal basis for_n(_,,̱,, ). §.§ Spherical harmonics and OPs on the unit ballThese polynomials are building blocks of OPs on domains of revolution. We recall their definition and basic properties; see <cit.> for further results. §.§.§ Spherical harmonicsThese are the restrictions of homogenous harmonic polynomials on the unit sphere. Let_n^dbe the space of spherical harmonics of degreenofdvariables. It is known that_n^d = n+d-1n - n+d-3n-2.An orthogonal basis of_n^dcan be given explicitly in terms of the Jacobi polynomials. The sphericalharmonics are orthogonal with respect to the surface measureon. Let{Y_ℓ^n}be an orthonormal basis of_n^d. Then1/ø_d∫_ Y_ℓ^n(ξ) Y_ℓ'^n' (ξ) (ξ) =δ_ℓ,ℓ'δ_n,n',whereø_ddenotes the surface area of. In terms of this basis, the kernelP_n(·,·)defined byP_n(x,y) = ∑_1 ≤ℓ≤_n^d Y_ℓ^n(ξ) Y_ℓ^n(η), ξ,η∈,is the reproducing kernel of_n^dinL^2(), which is the kernel of the orthogonal projection operator: L^2() ↦_n^d. The kernel satisfies an addition formulaP_n(ξ,η) = Z_n^d-22 (ξ,η ),Z_n^ł (t) = n+ł/łC_n^ł(t),whereC_n^łis the Gegenbauer polynomial. 
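This identity is easy to test numerically. For d = 3 one has λ = (d-2)/2 = 1/2, C_n^{1/2} is the Legendre polynomial P_n, and Z_n^{1/2}(t) = (2n+1)P_n(t); the sketch below (Python with scipy; the degree n and the two points are arbitrary) evaluates both sides of the addition formula at two random points of the unit sphere. Note that scipy's spherical harmonics are orthonormal with respect to the un-normalized surface measure and take the azimuthal angle first, hence the factor ω_3 = 4π needed to match the normalization used above.

import numpy as np
from scipy.special import sph_harm, eval_legendre

rng = np.random.default_rng(0)
xi = rng.normal(size=3);  xi /= np.linalg.norm(xi)
eta = rng.normal(size=3); eta /= np.linalg.norm(eta)

def angles(v):                       # (azimuth, polar angle) in scipy's convention
    return np.arctan2(v[1], v[0]), np.arccos(v[2])

n = 4
t1, p1 = angles(xi)
t2, p2 = angles(eta)
kernel = 4 * np.pi * sum(sph_harm(m, n, t1, p1) * np.conj(sph_harm(m, n, t2, p2))
                         for m in range(-n, n + 1))
lam = 0.5                            # (d - 2) / 2 for d = 3
z = (n + lam) / lam * eval_legendre(n, xi @ eta)   # Z_n^{1/2}(<xi, eta>) = (2n + 1) P_n
print(kernel.real, z)                # the two values agree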
Another important property is that spherical harmonics areeigenvalues of the Laplace-Beltrami operatorΔ_0, which is the restriction of the Laplace operator onthe unit sphere; more precisely,Δ_0 Y = - n(n+d-2) Y, ∀ Y ∈_n^d,n = 0,1,2,….These are the two properties mentioned in the introduction and they play essential roles in the approximation theory and harmonic analysis on the unit sphere. §.§.§ Classical OPs on the unit ball ^dThese are OPs orthogonal with respect the weigh function W_μ(x) = (1-x)^μ-12,x∈^d, μ > -12on the unit ball^d. The normalization constant ofW_μisb_μ^= Γ(μ+d+12) /(π^d 2Γ(μ+12)). Let_n(W_μ, ^d)be the space of OPs of degreen. An orthogonal basis of this space can be given explicitly in terms of the Jacobipolynomials and spherical harmonics. For 0 ≤m ≤n/2, let{Y_ℓ^n-2m: 1 ≤ℓ≤_n-2m}be an orthonormal basis of_n-2m^d. Define <cit.> P_ℓ, m^n (W_μ; x) = P_m^(μ-12, n-2m+d-22)(2x^2-1) Y_ℓ^n-2m(x).Then{P_ℓ,m^n(W_μ): 0 ≤m ≤n/2, 1 ≤ℓ≤_n-2m}is an orthogonal basis of_n^d(W_μ,^d).Let_m,n^μbe the square of the norm ofP_ℓ,m^n(W_μ)inL^2(_μ; ^d). Then the reproducingkernel of the space_n(W_μ,^d)can be written asP_n^μ(x,y) = ∑_0 ≤ m ≤ n/2∑_ 1 ≤ℓ≤_n-2mP_ℓ, m^n(W_μ; x) P_ℓ, m^n(W_μ;y)/_m,n^μ.This is also the kernel of the projection operator_n^μ: L^2(W_μ; ^d) ↦_n(W_μ,^d)and it satisfies an addition formula <cit.>:P_n^μ(x,y) = c_μ∫_-1^1 Z_n^μ+d-12 ( x,y + t √(1-x^2)√(1-y^2)) (1-t^2)^μ-1ṭforμ> 0and in limit forμ= 0. Moreover, there is a spectral operator that has OPs as eigenfunctions (<cit.>),(Δ -x ,∇^2 - (2μ + d-1)x, ∇ ) u = - n(n+2μ+d-1), ∀ u ∈_n(^d, W_μ).As in the case of spherical harmonics, these two properties provide powerful tools for the approximation and harmonic analysison the unit ball (cf. <cit.>).§.§ OPs on quadratic domains of revolution Letϕbe either a nonnegative linear polynomial or the square root of a nonnegative quadratic polynomialon the interval[a,b], which can be an infinite interval, say_+or. We consider the solid domainof revolution defined by𝕍^d+1 = {(x,t): x≤ϕ(t), x ∈ℝ^d, t ∈ [a,b]}.Forϕ(t)=1, the domain is a cylinder. Forϕ(t) = √(1-t^2)on[-1,1], the domain becomes the unit ball. Weconsider OPs for the family of weight functions of the form(x,t) = w(t) (t^2-x^2)^μ-12. For several domains,explicit orthogonal bases for somew(t)can be given in terms of known polynomials. We recall two particular casesthat are most relevant to our study in this paper. §.§.§ Jacobi polynomials on the coneHereϕ(t) = tand the weight function is_,μ(x,t) = (1-t)^ (t^2-x^2)^μ-12, 0 ≤ t ≤ 1,> -1, μ > -12.An orthogonal basis of_n(W_,̱,μ,^d+1)can be givenin terms of the Jacobi polynomials and the OPs on the unit ball. Form =0,1,2,…,let{P_^m(W_μ): || = m, ∈_0^d}be an orthonormal basis of_n(W_μ,^d)on the unit ball. Then the polynomials _m,^n(x,t):= P_n-m^(2μ+2m+d-1, )(1- 2t) t^m P_^m(W_μ; x/t), || = m,0 ≤ m ≤ n,consist of an orthogonal basis of_n(_,μ,^d+1), which were called the Jacobi polynomials in <cit.>. The reproducing kernel_n (_,μ)of the space_n(_,μ,^d+1)satisfies an addition formula forμ≥0and≥-12, _n (_,μ; (x,t), (y,s)) = c_μ,,d∫_[-1,1]^3Z_2n^2 μ++d (ξ (x, t, y, s; u, v)) × (1-u^2)^μ-1 (1-v_1^2)^μ+d-3/2(1-v_2^2)^-12ụṿ,wherec_μ,,dis a constant, so that_0 =1andξ(x,t, y,s; u, v) ∈[-1,1]is defined by ξ (x,t, y,s; u, v) =v_1 √(12 (ts+ x,y+ √(t^2-x^2)√(s^2-y^2)u )) + v_2 √(1-t)√(1-s),and the formula holds under limit whenμ= 0and/or= -12. 
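The orthogonality of this basis is also easy to confirm numerically in the lowest-dimensional case d = 1, where the ball OPs reduce to Gegenbauer polynomials and the cone becomes the triangle {(x,t): |x| ≤ t ≤ 1}. In the sketch below (Python with scipy) the values of μ and γ are arbitrary admissible choices used purely for illustration; the off-diagonal entries of the Gram matrix should vanish up to quadrature error.

import numpy as np
from scipy.integrate import dblquad
from scipy.special import eval_jacobi, eval_gegenbauer

mu, g = 0.7, 0.5                       # hypothetical mu > -1/2 and gamma > -1

def J(m, n, x, t):
    # cone basis for d = 1: Jacobi polynomial in t times Gegenbauer polynomial in x/t
    return (eval_jacobi(n - m, 2 * mu + 2 * m, g, 1 - 2 * t)
            * t**m * eval_gegenbauer(m, mu, x / t))

def inner(p, q):
    # integral over {(x,t): |x| <= t <= 1} with weight (1-t)^g (t^2 - x^2)^(mu - 1/2)
    f = lambda x, t: (J(*p, x, t) * J(*q, x, t)
                      * (1 - t)**g * (t * t - x * x)**(mu - 0.5))
    return dblquad(f, 0.0, 1.0, lambda t: -t, lambda t: t)[0]

idx = [(m, n) for n in range(3) for m in range(n + 1)]     # all degrees <= 2
G = np.array([[inner(p, q) for q in idx] for p in idx])
print(np.max(np.abs(G - np.diag(np.diag(G)))))             # ~1e-9: mutually orthogonal

Such a check is a convenient guard against parameter-placement slips when transcribing these bases, since swapping or shifting any of the Jacobi indices immediately produces visibly nonzero off-diagonal entries.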
Moreover, forμ> -12,> -1the second-order differential operator 𝔇_,μ : =t(1-t)∂_t^2 + 2 (1-t)x,∇_x ∂_t + ∑_i=1^d(t - x_i^2) ∂_x_i^2 - 2 ∑_i<jx_i x_j ∂_x_i∂_x_j + (2μ+d)∂_t- (2μ++d+1)(x,∇_x + t ∂_t),where∇_xandΔ_xdenote the gradient and the Laplace operator inx-variable, has the polynomials in_n(_,μ,VV^d+1)are eigenfunctions; more precisely, 𝔇_,μ u =-n (n+2μ++d) u, ∀ u ∈_n(_,μ,^d+1).A classification of such spectral operators is known ford=2(<cit.>) but not ford > 2. The operator𝔇_,μon the cone is a recent addition ford > 2. §.§.§ Gegenbauer polynomials on the hyperboloid and double cone.Forϱ> 0,ϕ(t) =√(t^2 + ρ^2),0 < |ρ| ≤|t|. The domain is defined by^d+1 ={(x,t): x^2 ≤ t^2 - ϱ^2, x ∈^d,ϱ≤ |t| ≤√(ϱ^2 +1)},which is a hyperboloid forϱ> 0and degenerates to a double cone forϱ= 0. For>̱-12,> - 12andμ> -12, we consider the weight function defined by_,μ(x,t): = |t|(t^2-ϱ^2)^-12 (1+ϱ^2-t^2)^-12(t^2-ϱ^2-x^2)^μ - 12.The OPs associated with_,μare called the Gegenbauer polynomials on the hyperboloid in <cit.>. Since the weight function is even in thetvariable, the space_n( _,μ,^d+1)naturally split into two parts depending on the parity of OPs. The space of OPs that consists of the Gegenbauer polynomials even in thetvariable possesses both addition formula and spectral properties <cit.>. These will be needed later in the paperand will be reviewed in the Subsection <ref>.§ ORTHOGONAL POLYNOMIALS ON DOMAINS OF REVOLUTION IN ^D+1 In the first subsection, we discuss our first but most essential construction of OPs for domains ofrevolution. Several examples are given in the second subsection. Further construction and examples will begiven in later sections. §.§ Orthogonal structure on domains of revolution LetΩbe a domain of^2, which we decompose asΩ= Ω_+ ∪Ω_-, whereΩ_+:= {(s,t) ∈Ω: s ≥ 0}andΩ_-:= {(s,t) ∈Ω: s ≤ 0}.We assume thatΩis symmetric in thes-variable; that is,(s,t) ∈Ω_+if and only if(-s,t) ∈Ω_-.Our goal is to consider the domain of revolution defined by^d+1 = {(x,t) ∈^d+1,x ∈^d,t ∈,(x,t) ∈Ω_+}.In words,^d+1is the domain of revolvingΩ_+ofdvariables around thetaxis.Let(s,t)be a weight function defined onΩ_+and we assume that it is nonnegative onΩ_+andhas finite moments. Let[From now on, we adopt the convention that the letters, , , etc., in the bold font, are reserved for functions and polynomials on domains of revolution.](x,t):= (x,t),(x,t) ∈^d+1.We consider OPs on^d+1with respect to the inner productf, g_ = ∫_^d+1 f(x,t) g(x,t) (x,t) x̣ṭ.The assumption onimplies the existence of OPs. LetΠ_n^d+1denote the space of polynomials of total degreenind+1variables. We denote by_n(; ^d+1)the subspace of OPs of total degreenwith respect to the inner product·,·_. It is known that_n(, ^d+1) = n+dnandΠ_n^d+1 = n+d+1n.Our first goal is to show that an orthogonal basis for_n(,^d+1)can be derived from OPs of twovariables onΩ_+and spherical harmonics. This is based on the observation that the integral over^d+1can be decomposed as a double integral overΩ_+and the sphereof^d. Indeed,settingx = s ξwithξ∈andt ∈, we can write(x,t) = (sξ,t), so that(s,t) ∈Ω. Then∫_^d+1 f(x,t) x̣ṭ = ∫_Ω_+∫_x =s f(x,t) x̣ ṭ =∫_∫_Ω_+f(s ξ,t)s^d-1ṣ ṭ (ξ),wheredenote the surface measure on. 
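For d = 2 this decomposition is easy to confirm numerically on a concrete example. In the sketch below (Python with scipy) the region Ω_+ is taken to be the triangle 0 ≤ s ≤ t ≤ 1, so the solid of revolution is the cone |x| ≤ t ≤ 1 in ℝ^3, and the test integrand is an arbitrary polynomial; both the direct Cartesian integral and the S^1 × Ω_+ form give the same value.

import numpy as np
from scipy.integrate import tplquad, dblquad, quad

f = lambda x1, x2, t: (x1**2 + x2**2) * t + x1**2 + 1.0    # a test integrand

# direct integral over the solid cone |x| <= t <= 1 in R^3
lhs = tplquad(lambda x2, x1, t: f(x1, x2, t),
              0.0, 1.0,
              lambda t: -t, lambda t: t,
              lambda t, x1: -np.sqrt(max(t * t - x1 * x1, 0.0)),
              lambda t, x1: np.sqrt(max(t * t - x1 * x1, 0.0)))[0]

# decomposed form: integrate over the circle S^1 and over Omega_+ = {0 <= s <= t <= 1}
def slice_theta(theta):
    g = lambda s, t: f(s * np.cos(theta), s * np.sin(theta), t) * s
    return dblquad(g, 0.0, 1.0, lambda t: 0.0, lambda t: t)[0]

rhs = quad(slice_theta, 0.0, 2 * np.pi)[0]
print(lhs, rhs)    # both ~ 1.4661 (= 7*pi/15 for this particular f)

The same comparison works for any Ω_+ and integrable f, which is all that the construction below relies on.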
Let_n (,Ω)be the space of OPs of two variables with respect to the inner productf,g_ = ∫_Ω f(s,t) g(s,t) (s,t) ṣṭ.We further denote by_n^(, Ω)the subspace of polynomials that are even in thesvariable; in other words,_n^(, Ω) ={ P ∈_n(, Ω): P(s,t) = P(-s,t) }.The elements of these spaces are polynomials in two variables and it is easily seen that_n(,Ω) = n+1 and_n^(,Ω) = ⌊n/2⌋ + 1.Since(s,t)is even insvariable, it follows readily thatf,g_ = 1/2∫_Ω_+ f(s,t) g(s,t) (s,t) ṣṭfor all polynomialsfandgthat are even in their first variable. Letkbe a positive integer. We define ^(k)(s,t) = |s|^k+d-1(s,t),(s,t) ∈Ω.For k ≥ 0, let {_j^m(^(2k); s, t): 0≤ j ≤⌊ m 2 ⌋} denote anorthogonal basis of _m^(^(2k), Ω). Let {Y_ℓ^k: 1 ≤ℓ≤_k^d} be anorthonormal basis of the space _k^d of spherical harmonics. Define_j,k,ℓ^n (x,t) = _j^n-k(^(2k); x, t) Y_ℓ^k (x). Then the set {_j,k,ℓ^n:1 ≤ℓ≤_k^d,0≤ j ≤⌊n-k/2⌋, 0 ≤ k ≤ n} is an orthogonal basis of _n(,^d+1) for (x) = (x,t). Moreover,_j,k^n := ⟨_j,k,ℓ^n,_j',k',ℓ'^n'⟩_ = ⟨_j^n-k(^(2k)),_j^n-k(^(2k)) ⟩_^(2k)=: _j,k^n-k.Since _j^n-k(^(2k)) is even in its first variable, we can write_j^n-k(^(2k); s,t) = 1/2[_j^n-k(^(2k); s, t)+_j^n-k(^(2k); -s, t) ],which implies immediately that it contains only even powers of s and, consequently, _j^n-k(^(2k); x,t)is a polynomial of degree n-k in (x,t) variables. Thus, _j, k, ℓ^n is a polynomial of degree n in (x,t) variables.Since Y_ℓ^k is homogeneous, we can write_j,k,ℓ^n(x,t) = _j^n-k(^(2k); s, t) s^k Y_ℓ^k(ξ),x = s ξ, ξ∈.Using the orthogonality of spherical harmonics and the identity (<ref>), we obtain⟨_j,k,ℓ^n,_j',k',ℓ'^n'⟩_= δ_ℓ,ℓ'δ_k,k'∫_Ω_j^n-k(^(2k);s, t) _j'^n'-k(^(2k);s, t) ^(2k)(s,t) ṣṭ = h_j,k^n-kδ_ℓ,ℓ'δ_k,k'δ_j,j'δ_n,n',where h_j,k^n-k is the norm square of _j^n-k(^(2k)). This proves the orthogonality and that_j, k, ℓ^n ∈_n(, ^d+1). To complete the proof, we need to show that the cardinality of{_j, k, ℓ^n} is equal to _n(, ^d+1); that is, ∑_k=0^n (⌊n-k/2⌋+1) _k^d = n+dn.By (<ref>), the left-hand side of the above equation is equal to ∑_k=0^n (⌊n-k/2⌋+1) k+d-1k -∑_k=0^n-2(⌊n-k/2⌋) k+d-1k =∑_k=0^n-2k+d-1k + n+d-2n-1 + n+d-1n = n+dn,where the last identity follows from a straightforward computation. This completes the proof. After a brief subsection on the reproducing kernels, we will give several examples for which an explicit basisfor_n^(, Ω)can be derived and, as a consequence, so can an explicit basis for_n(, ^d+1).§.§ Reproducing kernels and orthogonal seriesThe Fourier orthogonal expansion off ∈L^2(,^d+1)can be written asf = ∑_n=0^∞_n (; f), (): L^2(,^d+1) ↦_n(,^d+1),where_n()is the orthogonal projection operator. In terms of the orthogonal basis{_j,k,ℓ^n}given inTheorem <ref>, we can write_n(; f):= ∑_k=0^n ∑_j=0^⌊n-k/2⌋∑_ℓ=1^_k^d f_j,k,ℓ^n _j,k,ℓ^n,f_j,k,ℓ^n = ⟨ f, _j,k,ℓ^n ⟩_/_j,k^n.Let_n(; ·,·)be the reproducing kernel of the space_n(,^d+1), which is uniquelydetermined by∫_^d+1_n(; (x,t),(y,s))(y,s) (y,s) ỵṣ =(x,t), ∀∈_n(,^d+1).The reproducing kernel satisfies, in terms of the orthogonal basis{_j,k,ℓ^n}, _n(; (x,t),(y,s)) = ∑_k=0^n ∑_j=0^⌊n-k/2⌋∑_ℓ=1^_k^d_j,k,ℓ^n(x,t) _j,k,ℓ^n(y,s)/_j,k^n = ∑_k=0^n ∑_j=0^⌊n-k/2⌋∑_ℓ=1^_k^d_j^n-k(; x,t)_j^n-k(; y,s)/_j,k^n-kx^k y^k Z_k^d-2/2 (ξ,η),where the second identity follows from the addition formula for spherical harmonics. An addition formula holds for the kernel if the right-hand sums can be written in a closed form. 
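Incidentally, the sum defining the kernel above runs over the same index set as the basis in Theorem <ref>, and the counting identity used at the end of that proof is easy to confirm by machine for small parameters; a minimal sketch (Python):

from math import comb

def dim_H(k, d):
    # dimension of the space of spherical harmonics of degree k in d variables
    if k == 0:
        return 1
    if k == 1:
        return d
    return comb(k + d - 1, k) - comb(k + d - 3, k - 2)

def count_basis(n, d):
    # number of polynomials produced by the construction of Theorem <ref>
    return sum(((n - k) // 2 + 1) * dim_H(k, d) for k in range(n + 1))

assert all(count_basis(n, d) == comb(n + d, n) for d in range(1, 8) for n in range(15))
print("index count matches the dimension of the space of OPs for d = 1,...,7 and n = 0,...,14")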
Moreover, the projection operator is an integral operator with_n(; ·,·)as its kernel,_n (;f) = ∫_^d+1_n(; (x,t),(y,s)) f(y,s) (y,s) ỵṣ.This relation is the reason why an addition formula is a powerful tool for studying the Fourier orthogonal expansions.§.§ Cylinder Our first example is the simplest one, for whichΩis a rectangular domain. We consider as an example thesquare with weight functionΩ= [-1,1]^2 and(s,t) = s^2 (1-s^2)^μ-12(1-t^2)^ł-12,so that its rotation in thetaxis leads to the cylinder and a weight function given by^d+1 = {(x,t): (x,t) ∈Ω}and(x,t) =x^2 (1-x^2)^μ-12(1-t^2)^ł-12,where> -1,ł, μ> -12. One orthogonal basis for_n(, Ω)consists of products of Gegenbauer polynomials andgeneralized Gegenbauer polynomials_k,n^,μ,ł (s,t) = C_n-k^ł(t) C_k^(μ,)(s),0 ≤ k ≤ n.The polynomialC_k^(μ,)has the same parity askand it satisfies, in particular, C_2j^(μ,)(s) = const.P_j^(μ-12,-12)(2s^2-1).Taking into consideration of degrees, we see that{_n-2j,2j^,μ,ł: 0 ≤j ≤n/2}is an orthogonal basis of_n^(,Ω). In particular, letY_ℓ^kdenote an orthogonal basis of_k^dand recall^(2k)(t) = s^2k+d-1(s,t); then the construction in Theorem <ref> shows that_n-k-2j,2j^+k+d-12,μ,ł(x,t) Y_ℓ^k(x), 0 ≤ j ≤⌊n-k2⌋, 0 ≤ k ≤ n,is an orthogonal basis of_n(, ^d+1). Changing indexk = m - 2jshows that the basis consists of _m,j,ℓ^n; (,μ,ł)(x,t) = C_n-m^ł (t) P_j^(μ-12, +m-2j+d-22)(2x^2-1)Y_ℓ^m-2j(x).We summarize this in the following proposition. Let {Y_ℓ^k: 1 ≤ℓ≤_k^d} be an orthogonal basis of _k^d. Then{_m,j,ℓ^n; (,μ,ł)(x,t): 1 ≤ℓ≤_m-2j^d, 0 ≤ j ≤ m/2, 0≤ m≤ n}consists of an orthogonal basis for _n(, ^d+1) on the cylinder domain (<ref>). If= 0, then the product of the last two terms in (<ref>) is exactly the polynomialP_ℓ,j^m(W_μ;x)for the classical OPs on the ball (<ref>), so that the basis takes the form_m,j,ℓ^n; (0,μ,ł)(x,t) = C_n-m^ł(t) P_ℓ,j^m (W_μ;x),1 ≤ℓ≤_k^d,0≤ j ≤⌊m2⌋, 0 ≤ m ≤ n,which is the usual orthogonal basis for_n(, ^d+1)on the cylinder domain (cf. <cit.>).§.§ Conic domainsWe consider the case whenΩis the triangle symmetric in thesvariable. There are two cases. §.§.§ ConeIn this example,Ωis the triangleΩ_▵defined byΩ_▵ = {(s,t): |s| ≤ t, 0 ≤ t ≤ 1}and(s,t) = |s|^2(t^2 -s^2)^μ-12t^(̱1-t)^.The rotation ofΩ_▵in thetaxis leads to the solid cone_▵^d+1 = {(x,t): (x,t) ∈Ω_▵} ={(x,t): x≤ t, x ∈^d,0 ≤ t ≤ 1}equipped with the weight function_▵(x,t) = (x,t) =x^2 (t^2-x^2)^μ-12t^(̱1-t)^.The triangle domainΩand the cone are depicted in Figure <ref>, whereΩ_+is shaded. For= 0, OPs on the cone have been studied in <cit.>. We shall carry out the construction in our new point of view. Making a change of variables →t u, the integral overΩ_▵can be written as ∫_Ω_▵ f(s,t) (s,t) ṣṭ = ∫_0^1 ∫_-1^1 f(t u, t) (t u, t) t ụ ṭ= ∫_0^1 t^2+ +̱ 2μ (1-t)^∫_-1^1 f(t u, t) |u|^ (1-u^2)^μ -12ụṭ,from which it is easy to verify that an orthogonal basis for_m(, Ω_▵)is given by_k^m(s,t) = P_m-k^(2k+ ++̱ 2μ,)(1-2t) t^k C_k^(μ, )(s/t),0 ≤ k ≤ m,in terms of the Jacobi polynomials and the generalized Gegenbauer polynomials of degreem. Hence, using(<ref>) as in the cylinder case, we conclude that an orthogonal basis for_m^(^(2k), Ω_▵)is given by_j^m(s,t) = P_m- 2j^(4j +2k+2 ++̱ 2μ+d-1,)(1-2t) t^2j P_j^(μ-12, k+ +d-22)(2 s^2/t^2-1)with0 ≤j ≤⌊m2⌋. 
In particular, the construction in Theorem <ref>shows that an orthogonal basis for_n(_▵, _▵^d+1)is given by P_n-k- 2j^(4j +2k+ ++̱ 2μ+d-1,)(1-2t) t^2j P_j^(μ-12, k++d-22)(2 x^2/t^2-1)Y_ℓ^k (x),where1 ≤ℓ≤_k^d,0≤j ≤⌊n-k/2 ⌋, 0 ≤k ≤n, and changing indexk = m - 2jshows that the basis consists of_j,m,ℓ^n (x,t) = P_n-m^(2m++̱ 2+ 2μ+d-1,)(1-2t) P_j^(μ-12, +m-2j+d-22)(2x^2-1)Y_ℓ^m-2j(x).We summarize this in the following proposition. Let {Y_ℓ^k: 1 ≤ℓ≤_k^d} be an orthogonal basis of _k^d. Then{_m,j,ℓ^n(x,t): 1 ≤ℓ≤_m-2j^d, 0 ≤ j ≤ m/2, 0≤ m≤ n}consists of an orthogonal basis for_n(_▵, _▵^d+1) on the cone. In particular, if= 0, then the polynomial_j,m,ℓ^ncan be written in terms of OPsP_j,ℓ^m(W_μ)in (<ref>) on the unit ball^d; more precisely,_j,m,ℓ^n (x,t) = P_n-m^(2m++̱ 2μ+d-1,)(1-2t) P_j,ℓ^m(W_μ; x/t).This last case is the orthogonal basis for_n(_▵; _▵^d+1)first derived and studied in <cit.>, which satisfies several distinguished properties, including the spectral operator(<ref>)and the addition formula (<ref>) stated in Subsectioin <ref>. §.§.§ Coupled coneHere we assume thatΩis a diamond symmetric in thesvariable, which we denote as. We consider thesetting= {(s,t): |s|+|t| ≤ 1}with the weight function(s,t) = |s|^2+1|t|^2+̱1(1-(s+t)^2)^(1-(s-t)^2)^,Then the rotation in thetaxis of the diamond domain gives the coupled cone_♢^d+1 = {(x,t): (x,t)∈Ω_♢} = {(x,t): x≤ 1-t ≤ 1, x ∈^d}with the weight function defined by_♢(x,t) = x^2+1t^2+̱1(1-(x+t)^2)^-12(1-(x-t)^2)^-12.The diamond domainΩand the coupled cone^3in 3D are depicted in Figure <ref>.Changing variablesu = t-sandv= t+s, the diamond domain becomes[-1,1]^2in the(u,v)plane and the weightfunctionbecomes(u,v) = 1/2^2 |u-v|^2+1 |u+v|^2+̱1(1- u^2)^-12(1- v^2)^-12.An orthogonal basis for_n(, [-1,1]^2)is derived in <cit.> in terms of the Jacobi polynomials. For ourpurpose, we only need those that will lead to a basis for_m^(, ). These are given in<cit.> withu = cosandv = cosϕas_j,2m^(,)̱ (u,v)= P_m^(,)̱(cos (-ϕ)) P_j^(,)̱(cos (+ϕ)) + P_j^(,)̱(cos (-ϕ)) P_m^(,)̱(cos (+ϕ)), 0 ≤ j ≤ m,_j,2m+1^(,)̱ (u,v)= (u+v) [P_m^(,+̱1)(cos (-ϕ)) P_j^(,+̱1)(cos (+ϕ)) . . + P_j^(,+̱1)(cos (-ϕ)) P_m^(,+̱1)(cos (+ϕ)) ],0 ≤ j ≤ m,which are indeed polynomials of degreenin(u,v)variables, wheren = 2mor2m+1. Settingt-s= u = cosandt+s = v = cosϕ, thenu+v = 2 tand cos (±ϕ)= (t-s)(t+s) ∓√(1-(t-s)^2)√(1-(t+s)^2) = t^2-s^2 ∓√((1+t)^2 - s^2)√((1- t)^2 - s^2),both of which which are even ins, so that_j,n^(,)̱(t-s,t+s)are polynomials in(s,t)variables that are even in thesvariable. Thus, they consist of an orthogonal basis for_n^(, )withn =2mor2m+1. In particular, it follows that_j,n^(,)̱(t-x,t+x)is a polynomial in(x,t)variables. Replacingby^(2k), we obtain an orthogonal basis for the coupled cone.Let _j,n^(,)̱ be defined as in (<ref>) and (<ref>). Then an orthogonal basisfor _n(_♢, ^d+1) on the coupled cone is given by_k,ℓ^n(x,t) = _j,n-k^(+ 2k + d-1, )̱(t-x, t+x) Y_ℓ^k(x),0≤ j ≤⌊n-k/2⌋, 0 ≤ k ≤ n,where {Y_ℓ^k: 0≤ℓ≤_n^d} is an orthogonal basis of _n^d. §.§ ParaboloidWe consider the parabolic domainΩ, bounded by the linet= 0and the parabolat = 1 - s^2,and the weight function defined byΩ = {(s,t): s^2 ≤ t,0 ≤ t ≤ 1}, and(s,t) =|s|^2 t^(̱t-s^2)^.Its rotation in thetaxis is the paraboloid defined by^d+1 ={ (x,t): x^2 ≤ t, 0 ≤ t ≤ 1 }and the weight function(x)=(x,t)becomes_,,̱(x,t) =x^2 (1-t)^(̱t - x^2)^, ,̱ > -1,2 +> -d.The domain bounded by the parabola andt=1and the paraboloid in 3D are depicted in Figure <ref>. 
Using (<ref>) and changing variabless ↦√(s)andt ↦1-t, it is easy to see that∫_^d+1_,,̱(x,t)x̣ṭ = ω_d 12 ∫_ s^+d-1 t^(̱t-s)^ṣṭ =ω_d/2_+d-1/2,,̱^,whereω_ddenote the surface area ofand_,,̱^is given in (<ref>).In this case, it is easy to verify that an orthogonal basis for_m(, Ω)is given by_j^m(s,t) = P_m-j^(++j+12,)̱(1- 2 t) t^ j 2 C_j^(+12,)(s/√(t)),0 ≤ j ≤ m,in terms of the Jacobi polynomialP_k^(,)̱and the generalized Gegenbauer polynomialC_k^(ł,). For= 0,the polynomialC_j^(ł,0) = C_j^łis the ordinary Gegenbauer polynomial and this is the classical basis(<cit.> and <cit.>) on the parabola domainΩ. SinceC_j^(,)is an even polynomial ifjis even, by (<ref>), we obtain that an orthogonal basis for_m^(^(2k), Ω)consists ofP_m-2j^(2k+2j+ ++ d 2,)̱(1-2 t) t^j P_j^(+2k+d-22),(1-2x^2/t),0 ≤ j ≤⌊m/2⌋.Consequently, we obtain an orthogonal basis for the paraboloid by Theorem <ref>.Let {Y_ℓ^k: 1 ≤ℓ≤_k^d} be an orthonormal basis of _k^d. Define _j,k,ℓ^n, (,,̱)(x,t) = P_n-k-2j^(2j+ ++k+ d 2,)̱(1-2 t) t^jP_j^(+k+d-22,)(1- 2x^2/t) Y_ℓ^k(x)= _j, n-k-j^+k+d-22, ,̱(x^2,1-t) Y_ℓ^k(x) ,where _j,m-j^,,̱ is the polynomial on the triangle defined in (<ref>). Then the polynomials in{_j,k,ℓ^n, (,,̱): 1 ≤ℓ≤_k^d,0 ≤ j ≤⌊n-k2⌋,0≤ k≤ n }forms an orthogonal basis for _n(_,,̱, ^d+1) on the paraboloid. Moreover,_j,k,n^,,̱ := ⟨_j,k,ℓ^n, (,,̱),_j,k,ℓ^n, (,,̱)⟩_= _j, n-k-j^+k+d-22, ,̱ in terms of the normalization constant in (<ref>). The first equality of (<ref>) follows immediately from Theorem <ref>, whereas the second one follows by comparing with (<ref>). The norm of _j,k,ℓ^n, (,,̱) is computed by using (<ref>) and changing variables s ↦√(s) and t ↦ 1-t, so that _j,k,n^,,̱, = b_,,̱^∫_^d+1 |_j,k,ℓ^n,(,,̱)(x,t)|^2 W_,,̱(x,t) x̣ṭ= ø_d2 b_,,̱^∫_ |_j, n-k-j^+k+d-22, ,̱(s, t)|^2w_+k+d-2/2,,̱(s,t) ṣṭ = _j, n-k-j^+k+d-22, ,̱,by the definition of(<ref>). Two remarks are in order. First, for= 0, we can rewrite the basis in (<ref>), usingP_j^(,)(x) = (-1)^j P_j^(,)(-x)and settingk = m - 2j,_j, m-2j,ℓ^n,(0,,̱)(x,t) = (-1)^j P_n-m^(+m+ d 2,)̱(1-2 t) t^j P_j,ℓ^m (W_+12; x/t)in terms ofP_j^m(W_μ)in (<ref>), the classical OPs on the unit ball. In this case, we can decomposethe space_n(, ^d+1)as a direct sum according to the spaces of spherical harmonics, so that each of the subspace in the sum is an eigenspace of a second-order differential operator; in other words, the eigenvalues depend ontwo parameters, instead of just the degree of polynomials <cit.>. Moreover, we have an addition formulathat involves OPs of two variables on a parabolic domain instead ofZ_n^λ<cit.>. Theseproperties are utilized for analysis on the paraboloid in <cit.>. Second, the basis (<ref>) is derived from the composition of OPs on the right triangle,_j,m(s^2,t),withsreplaced byxand the domain^d+1comes from revolving a region bounded by the right triangle.This raises the question of a possible extension using OPs on other triangles. We need a basis_j^mfor the space_n(^(2k), Ω)for0 ≤k ≤n, which requires the triangle to have one leg ons= 0if_j^mcanbe given in terms of classical OPs of one variable. Furthermore, our construction requires_j^m (u^2,v) = m+jin order to have an OP basis for_n^(, Ω). Together, however, these requirements turn out to berather restrictive. For example, they hold for_j,m-jin (<ref>) but not for_j,m-jin (<ref>)and_j,m-jin (<ref>). A careful check of all possible cases shows that we obtain the desired basis of OPsonly when the triangle, with one leg ons=0, is a right triangle. 
In other words, the only parabolic domain of revolution that has OPs expressible by classical OPs of one variable and spherical harmonics appears to be essentially the paraboloiddiscussed in this subsection. As we shall see in the next two sections, the fully symmetric domains offer more possibilities. § DOUBLE DOMAINS OF REVOLUTIONIn this section, we assume that the domainΩis symmetric in bothsandtvariables and give a constructionof bases for_n^(Ω, )by making use of the symmetry. This construction, discussed in the firstsubsection, is more flexible and leads to several new examples that illustrate the advantage of our new approach developed in the previous section. After a brief second subsection on reproducing kernels, we present our examplesbased on parallelograms in the third subsection. Further examples will be given in the next section.§.§ Orthogonal structure on fully symmetric domains We require both the domainΩand the weight functionto be fully symmetric.A domain Ω in ^2 is called fully symmetric if (s,t) ∈Ω implies (± s, ± t) ∈Ω. A weight functionon Ω is called fully symmetric if (s,t) = (± s, ± t) for all (s,t) in a fully symmetric domain Ω.A fully symmetricΩis determined by its portion in the positive quadrant, which we denote byΩ_+ ,+ := {(u,v) ∈Ω: u ≥ 0, v ≥ 0}.For a fully symmetric weight functionand its domainΩ, we further denote√(Ω):= {(√(u),√(v)): (u,v) ∈Ω_+,+}and_±12, ±12(u,v): = u^±12 v^±12(√(u),√(v)).We now show that an orthogonal basis for_n(,Ω)can be derived from four families of orthogonal bases with respect to the inner productsf,g_±12,±12 = ∫_√(Ω) f(u,v) g(u,v) _±12,±12(u,v) ụṿ.Let Ω andbe fully symmetric. Let {_j,m(_±12, ±12): 0 ≤ j ≤ m}be an orthonormal basis of _n(_±12, ±12, √(Ω)). Define_j^n (; u,v)= _j,m(_-12, - 12; u^2,v^2),0 ≤ j ≤ m, n = 2m,v_j,m(_-12, 12; u^2,v^2),0 ≤ j ≤ m, n = 2m +1.Then {_j^n(): 0 ≤ j ≤⌊n/2⌋} is an orthonormal basis for _n^(, Ω).Furthermore, define _j^n (;u,v) =u v _j,m-1(_12, 12; u^2,v^2),0 ≤ j ≤ m-1, n = 2m, u _j,m(_12, -12; u^2,v^2),0 ≤ j ≤ m, n = 2m +1.Then {_j^n(W): 0 ≤ j ≤⌊n/2⌋} is an orthonormal basis for _n^(, Ω). By definition, _j^n is symmetric in the u variable. Moreover, since _j^2m is even in the v variable and_j^2m+1 is odd in v variable, they are orthogonal with respect to the fully symmetric weighton Ω. Changing variables u = s^2 and v = t^2, it follows readily that_j^2m, _j'^2m'_Ω= 4 ∫_Ω_+,+_j^2m(;u,v) _j'^2m'(; u,v) (u,v) ụṿ= 4 ∫_Ω_+,+_j,m(_-12, - 12; u^2,v^2)_j',m'(_-12, - 12; u^2,v^2) (u,v) ụṿ= ∫_√(Ω)_j,m(_-12, - 12; s,t) _j',m'(_-12, - 12; s,t)_-12, -12(s,t) ṣṭ = δ_j,j'δ_m,m'.The same proof also shows the orthogonality of _j^2m+1 and _j'^2m'+1. Furthermore, the polynomials_j^n are odd in the u variable, so that they are orthogonal to _j^n by symmetry. The above proof can alsobe used to establish the orthogonality of _j^n and _j'^n'. It is easy to see that the cardinality of{_j^n, _j^n} is exactly n+1 so that the set consists of an orthonormal basis of _n(, Ω).By parity, this shows that {_j^n} is an orthonormal basis for _n^(; Ω) and {_j^n} is anorthonormal basis for _n^(, Ω). We now use the above theorem to construct an orthogonal basis on the domain of revolution, which will be symmetric in thetvariable, and we shall denote it by^d+1instead of^d+1accordingly,^d+1 = {(x,t) ∈^d+1,x ∈^d,t ∈,(x, |t|) ∈√(Ω)}.To obtain a basis for_n(, ^d+1), we need an orthogonal basis for_n^(^(2k), Ω)by Theorem <ref>. Recall the definition of^(2k)given in (<ref>). 
We define _-12,±12^(k)(s,t) = |s|^k+d-1/2_-12, ±12(√(s), √(t)), (s,t) ∈√(Ω).Then the polynomials in (<ref>) consist of an orthogonal basis for_n^E(^(2k), Ω)ifwe replace_-12, ±12by_-12, ±12^(k). Consequently, by Theorem <ref>, weobtain the following corollary. Let Ω andbe fully symmetric. Let {_j,m(^(k)_±12, ±12): 0 ≤ j ≤ m}be an orthonormal basis of _n(^(k)_±12, ±12, √(Ω)). Define_j^n-k (^(2k); s, t) = _j,m(_-12, - 12^(k); s^2,t^2), 0 ≤ j ≤ m, n-k = 2m,t_j,m(_-12, 12^(k); s^2,t^2),0 ≤ j ≤ m, n-k = 2m +1. Then {_j^m(^(2k)): 0 ≤ j ≤⌊m/2⌋} is an orthonormal basis for_m^(^(2k), Ω) and _j,k,ℓ^n (x,t) = _j^n-k (^(2k); x, t) Y_ℓ^k(x), 0≤ j ≤⌊n-k/2⌋, 0 ≤ k ≤ n,where {Y_ℓ^k: 1 ≤ℓ≤_k^d} is an orthonormal basis for _k^d, consist of an orthogonal basis for the space _n(,^d+1). The space of OPs_n(,^d+1)has a natural decomposition in terms of the parity of the polynomialsin thetvariable.Letbe an even weight function in t. We denote by _n^(, ^d+1) the subspace of_n(,^d+1) that consists of polynomials even in t variable. Similarly,_n^(,^d+1) denotes the subspace that consists of polynomials odd in t variable.[Let us emphasis that _n^(, Ω) is the space of OPs on Ω⊂^2that are even in the s, or the first, variable, whereas _n^(, ^d+1) is the space of OPs on ^d+1 that are even in the t, or the last, variable. ] By the definition, it follows immediately that _n(,^d+1) = _n^(,^d+1) ⊕_n^(,^d+1).The polynomials_j,n-2m,ℓ^nin (<ref>) consist an orthonormal basis for_n^(,^d+1)and those polynomials_j,n-2m-1,ℓ^nin (<ref>) consist an orthonormal basis for_n^(,^d+1). In particular, we conclude that _n^(,^d+1) =∑_m=0^⌊n/2⌋ (m+1) _n-2m^d = ∑_m=0^⌊n/2⌋n-2m+d-1d-1._n^(,^d+1) =∑_m=0^⌊n-1/2⌋ (m+1) _n-2m-1^d= ∑_m=0^⌊n-1/2⌋n-2m+d-2d-1.Moreover, as a consequence of Corollary <ref>, orthogonal bases for these spaces can be derived fromorthogonal bases on√(Ω), which we formulate as a proposition for easier reference.Let {_j,m(^(k)_-12, ±12): 0 ≤ j ≤ m} be an orthonormal basis of_n(^(k)_- 12, ±12, √(Ω)) and let {Y_ℓ^k: 1 ≤ℓ≤_k^d} bean orthonormal basis for _k^d. Then the space _n^(,^d+1) hasan orthonormal basis given by _j,n-2m,ℓ^n(x,t) =_j,m(_-12, -12^(n-2m); x^2, t^2) Y_ℓ^n-2m (x)for 1 ≤ℓ≤_n-2m^d,0≤ j ≤m ≤⌊n/2⌋, and the space _n^(W,^d+1) has an orthonormal basis given by _j,n-2m-1,ℓ^n(x,t) =_j,m(_-12, 12^(n-2m-1); x^2, t^2) Y_ℓ^n-2m-1 (x)for 1 ≤ℓ≤_n-2m-1^d,0≤ j ≤ m ≤⌊n-1/2⌋.Two remarks are in order. First, for our goal of constructing explicit orthogonal bases on the domain^d+1, werequire our weight function contains the factor|s|^2k+d+1in_-12, ±12^(k), for which we need thelines=0to be part of the boundary of√(Ω). This, however, appears to be the only constraint for the fullysymmetric domain and weight. Consequently, there are plenty of examples of various domains of revolutions, for which explicit orthogonal basis can be written down.Second, it is often more convenient to start with√(Ω)and the weight function(u,v)defined on√(Ω), so that the weight functiononΩandon^d+1become(s,t) = (s^2,t^2) and(x,t) = (x^2,t^2)and, more conveniently,_±12, ±12and_-12,±12^(k)in (<ref>) become_±12, ±12(s,t) = s^±12 t^±12(s,t) and_-12,±12^(k)(s,t) =|s|^k+d-2/2 t^±12(s,t).In the following, we shall adopt this convention when discussing our examples.§.§ Reproducing kernels and Fourier orthogonal seriesThe reproducing kernel of_n(, ^d+1)is denoted by_n(;·,·). 
For the fully symmetricdomain, by (<ref>), we can write_n(; (x,t), (y,s)) = _n^(; (x,t), (y,s))+_n^(; (x,t), (y,s)),where_n^()and_n^()denote the the reproducing kernels for_n^(, ^d+1)and_n^(, ^d+1), respectively. Recall that the projection operator_n (; f)is an integral operator that has_n(;·,·)as its kernel. Forf ∈L^2(, ^d+1), we definef^(x,t) = 1/2[ f(x,t) + f(x,-t)] and f^(x,t) = 1/2[ f(x,t) - f(x,-t)].Thenf(x,t) = f^(x,t) + f^(x,t), andf^is even in thetvariable andf^is odd in thetvariable.The following proposition is an immediate consequence of the parity and the orthogonality of the function and the kernel.For f ∈ L^2(, ^d+1),_n(; f)(x,t) = _n^(; f^E) + _n^(; f^),where, for = or =,_n^(; f, x,t) = ∫_^d+1 f(y,s) _n^ (; (x,t), (y,s)) (y,s) ỵṣ. By its definition,_n^is the projection operatorL^2(, ^d+1) ↦_n^(,^d+1).Hence, iffis even in thetvariable, then it is easy to see that the Fourier orthogonal series offsatisfies f = ∑_n=0^∞_n^ f.This expansion could also be regarded as studying the Fourier orthogonal series offon the upper part of^d+1,denote by_+^d+1 = {(x,t) ∈^d+1: t ≥ 0},which is the rotation ofΩ^+ = {(s,t) ∈Ω: t ≥0}of the fully symmetric domainΩ. Indeed, iff∈L^2(, _+^d+1), then we can extend it to^d+1by definingf(-x,t) = f(x,t). Let us call this extended functionF. ThenFis even in thetvariable so that it has the Fourier expansion (<ref>), which gives the Fourier expansion offwhen restricted back to_+^d+1. It should be noted, however, that the above Fourier expansion forfis different from the Fourier orthogonal expansion offinL^2(, _+^d+1). For start, the dimension of_n^(, ^d+1)is different from the dimension of_n(, _+^d+1). Nevertheless, as we shall show below, the orthogonal structure of somefully symmetricand^d+1possess desirable properties that the structure of_n(, _+^d+1)may not have. By Proposition <ref>, the orthogonal basis for_n^(, ^d+1)requires OPs of two variables for the weight function_-12, -12^(n-2m), whereas the basis for_n^(, ^d+1)requires OPs for the weight function_-12, 12^(n-2m), which are different weight functions. Thus, it is not surprisingthat the spectral operator and addition formula for the two spaces could be different. Putting it another way, as shown in<cit.> for the hyperboloid, we could say that the two properties hold only for_n^(, ^d+1)orfor_n^(, ^d+1), but the same formulation may not hold for both. In Section 5, we will show that the two properties hold for_n^(, ^d+1)for someand^d+1.The above discussion shows that such a formulation provides powerful tools for studying the Fourier orthogonal series forfunctions either even or odd in thetvariable.§.§ Cylindrical domainsAs our first example, we consider the case when the domain√(Ω)is a parallelogram with one side onthe axisu = 0. The trivial case is the rectangle√(Ω) = {(u,v): 0 ≤u ≤, 0 ≤v ≤}, for whichthe corresponding^d+1will be a fully symmetric cylinder^d+1 = {(x,t): x≤,-≤ t ≤},which is the tensor product of[-,] ×^d_(cf. <cit.>). If we deform the rectangle to a parallelogram,however, we end up with several distinct domains. In view of their geometry, we consider two cases. §.§.§ Cylindrical domains caped by quadratic surfacesLet> ≥0. We consider√(Ω) = {(u,v): 0 ≤ u ≤ 1, + (1-)u ≤ v ≤+ (1-)u}.The rotation in thetaxis of the fully symmetric domainΩ={(s,t): (s^2,t^2) ∈√(Ω)}leads to^d+1 ={(x,t):+ (1-) x^2 ≤ t^2 ≤ + (1-) }.For> 0, the domain has two parts that mirror each other. 
The upper part is bounded by a cylinder with its two ends caped byhyperbolic surfaces+ (1-)x^2 = t^2and+ (1-)x^2 = t^2, whereas for= 0, the lower capof the cylinder is the cone(1-)x^2 = t^2. For= 58,= 116, the domain^d+1is depictedin the right-hand side of Figure <ref>, andΩis depicted in the left-hand side with√(Ω)being the shaded parallelogram. For, ,̱ ,> -1, let_,,̱,be the weight function defined by _,,̱,(s,t) = s^ (1- s)^(̱t-- (1-)s)^ ( + (1-)s - t)^ t^12,(s,t) ∈√(Ω).The corresponding weigh function on^d+1is defined by _,,̱,(x,t) = _,,̱,(x^2,t^2)= x^2(1- x^2 )^(t^2-- (1-)x^2)^( + (1-)s^2 - t^2)^|t|.By (<ref>), the weight function^(k)_-12,-12becomes _-12, -12^(k)(s,t) =s^k+ + d-22 (1- s)^(̱t-- (1-)s)^ ( + (1-)s - t)^ =(-)^+ϖ_k+ + d-22,(s) ϖ_,( t-(1-)s - /-),whereϖ_a,b(x) = x^a(1-x)^bis the Jacobi weight on the interval[0,1]. Consequently, an orthogonal basis for the space_m(^(k)_- 12, -12, √(Ω))can be given in terms of the Jacobi polynomials: for0 ≤j ≤m,_j,m^(k)(s,t) = P_j^(+k+d-12, )̱(1-2 s) P_m-j^(,)(1 - 2 t- (1-) s - /-).By Proposition <ref>, we obtain immediately that the space_n^(_,,̱,,^d+1)has an orthogonal basis given by _j,n-2m,ℓ(x,t) =P_j^(+n-2m+d-12, )̱(1-2 x^2) × P_m-j^(,)(1 - 2 t^2- (1-)x^2-/-)Y_ℓ^n-2m(x)with1 ≤ℓ≤_n-2m^dand0 ≤j ≤m ≤⌊n/2 ⌋. By(<ref>), we can also derive a basis for_n^(,^d+1)but it is for the weight function(x,t) = _,,̱,(x,t) |t|^-2, differing from_,,̱,(x,t)by an additional|t|^-2, since the basis in (<ref>) uses OPs for_-12, 12^(k), instead of_-12, -12^(k), which we need to choose so that it is again the product of two Jacobi weight functions. §.§.§ Cylindrical domains between two ellipsoid surfacesLet> > 0. We consider√(Ω) = {(u,v): 0 ≤ u ≤ 1,≤ v+ u ≤}.The rotation in thetaxis of the fully symmetric domainΩ={(s,t): (s^2,t^2) ∈√(Ω)}leads to^d+1 = {(x,t): ≤ t^2+ x^2 ≤},bounded by the surfaces of two ellipsoids:t^2 + x^2 = andt^2 + x^2 = . For= 1,= 12, these domains are depicted in Figure <ref>, whereΩis the shaded domain between two ellipses and two vertical lines and√(Ω)is theshaded parallelogram in the left-hand side figure. For,,̱, >-1, let_,,̱,be the weight function defined by_,,̱,(s,t) = s^ (1- s)^(̱t+s - )^ ( - t- s)^ t^12,(s,t) ∈√(Ω)so that the weight function on^d+1is given by_,,̱,(x,t)=_,,̱,(x^2,t^2)= x^2 (1- x^2)^(t^2 +x^2 - )^( - t^2 -x^2)^ |t|.In this case, the weight function^(k)_-12,-12in (<ref>) becomes _-12, -12^(k)(s,t) =s^k+ + d-22 (1- s)^(̱t+ s -)^ ( - t-s)^ = (-)^+ϖ_k+ + d-22,(s) ϖ_,( t +s - /-),whereϖ_a,bis again the Jacobi weigh on[0,1]. Thus, an orthogonal basis for the space_m(^(k)_- 12, -12, √(Ω))can be given in terms of the Jacobi polynomials as_j,m^(k)(s,t) = P_j^(+k+d-12, )̱(1-2 s) P_m-j^(,)(1 - 2 t +s - /-).Consequently, by Proposition <ref>, the space_n^(_,,̱,,^d+1)has an orthogonal basis given by _j,n-2m,ℓ(x,t) =P_j^(+n-2m+d-12, )̱(1-2 x^2) × P_m-j^(,)(1 - 2 t^2- (-1-)x^2-/-) Y_ℓ^n-2m(x),for1 ≤ℓ≤_n-2m^dand0 ≤j ≤m ≤⌊n/2 ⌋. As in the previous case, we can also give an orthogonal basis for_n^(,^d+1)but for the weight function(x,t) = _,,̱,(x,t) |t|^-2.§ DOUBLE CONIC AND HYPERBOLIC DOMAINS This section contains several new double domains of revolution on which an orthogonal basis can be givenexplicitly. By assuming that the weight function is even in thetvariable, we can divide OPs into two families based on their parity in thetvariable. For our examples, the two essential properties, spectraloperator and addition formula, hold for orthogonal spaces that consist of OPs even in thetvariable. 
Thiswas established in <cit.> for the double cone, which will be reexamined in the first subsection, for which√(Ω)is a right triangle. The new examples in follow-up subsections correspond to triangles ofdifferent types.§.§ Double coneWe consider the case that the domain√(Ω)is the right triangle with vertices(0,0),(0,1)and(1,1); that is, ∇:= √(Ω) = {(u,v):0 ≤u ≤ v≤ 1 }.The rotation in thet-axis of the fully symmetric domainΩ={(s,t): (s^2,t^2) ∈√(Ω)}leads to the double cone, again denoted by^d+1instead of^d+1, bounded by the double conicsurface and hyperplanest = ±1at its two ends, which is depicted on the right-hand side of Figure <ref>. The domainΩis depicted on the left-hand side of Figure <ref> and√(Ω)is the shadedtriangle. A family of OPs on the double cone was studied in <cit.>. Below we discuss it under our setupfor a family of weight functions more general than the one in <cit.>. §.§.§ OPs on double coneFor,,̱, >-1, let_,,̱,be the weight function defined by_,,̱,(s,t) = s^ (1- t)^(̱t- s)^ t^,(s,t) ∈∇,so that the weight function on^d+1is given by_,,̱,(x,t) =_,,̱,(x^2,t^2) = x^2 (1- t^2)^(̱t^2- x^2)^ |t|^2 .In terms of the four-parameter Jacobi weight in (<ref>) and follow the notation (<ref>),_- 12, ±12^(k)(s,t) = _k++d-2/2, ,̱, ±12(s, 1-t).Thus, by Theorem <ref>, the orthogonal basis (<ref>) for the space_m(^(2k), Ω)becomes _j,m^k++d-2/2, ,̱,-12(s^2,1-t^2), n = 2m,t _j,m^k++d-2/2, ,̱,+ 12(s^2,1-t^2), n = 2m+1, 0 ≤ j ≤ m,in terms of the OPs (<ref>) on the triangle. Hence, by Corollary <ref>, we obtain an orthogonal basis on the double cone.Let ,̱ > -1, > - d2, and ++≥ - d2. Let {Y_ℓ^k-2j} be an orthonormal basis of _n^d. Then the space _n(_,,̱,, ^d+1) has an orthogonal basis that consists of _j,k,ℓ^n (x,t) =Y_ℓ^k (x) _j,m^k++d-2/2, ,̱,-12(x^2,1-t^2), n-k = 2m,t _j,m^k++d-2/2, ,̱,+ 12(x^2,1-t^2), n-k = 2m+1for 1 ≤ℓ≤_k-2j^d and 0 ≤ j ≤ m. The basis in (<ref>) can be written in an alternative form, but it is useful in its present form forfurther examples to be discussed in the next section. For the alternative form, we need to use the generalizedGegenbauer polynomialsC_n^(ł,μ)and the Jacobi polynomialsP_n^(,)̱, which we state as a lemma.An orthogonal basis for _n^(^(2k), Ω) consists of polynomials_j^n(^(2k); s,t)= C_n-2j^(+̱12, 2 j+k++++d 2)(t) t^2j P_j^(, k++d-2/2)(2s^2/t^2-1),0 ≤ j ≤ n/2. By (<ref>), the basis (<ref>) can be given in terms of the Jacobi polynomialsP_m-j^(2j+k++++d-1/2,)̱(1- 2 t^2)t^2j P_j^(k++d-2/2,)(1- 2s^2/t^2), n = 2m,t P_m-j^(2j+k++++d+1/2,)̱(1-2 t^2) t^2j P_j^(k++d-2/2,)(1- 2s^2/t^2), n = 2m+1.Using the identity P_n^(,)̱(1-2 s^2) = (-1)^n P_n^(,̱)(2 s^2-1), the two cases in the above can be combined, up toa constant multiple, in terms of the generalized Gegenbauer polynomial defined in (<ref>), which givesthe stated basis.By Corollary <ref>, the lemma leads to an orthogonal basis for_n( _,,̱,, ^d+1)that consist ofC_n-k-2j^(+̱12, 2 j+k++++ d2)(t) t^2j P_j^(, k++d-2/2)(2x^2/t^2-1)Y_ℓ^k(x).Changing indexk ↦k-2j, we sum up the result in the following proposition.Let ,̱ > -1, > - d2, and ++≥ - d2. Let {Y_ℓ^k-2j} be an orthonormal basis of _n^d. Then the space _n(_,,̱,, ^d+1) has an orthogonal basis consisting of _j,k,ℓ^n (x,t)=C_n-k^(+̱12, k++++ d2)(t) t^2j P_j^(, k-2j++d-2/2)(2x^2/t^2-1)Y_ℓ^k-2j(x)=_j^n-k+2j(^(2k); x, t) Y_ℓ^k-2j(x),j ≤ k/2,0 ≤ k ≤ nwhere _j^n(^(2k)) is given in Lemma <ref>. 
Moreover, by (<ref>),_j,k^n(_,,̱,):=c_,,̱,∫_^d+1|_j,k,ℓ^n (x,t)|^2 _,,̱, (x,t) x̣ṭ = _n-k^+̱12, k++++ d2) h_j^( + k-2j+d-2/2,) =: _j,n-k+2j^n(^(2k))in terms of the norm of _m^(,)̱ in (<ref>) and h_m^(,)̱ in (<ref>), wherec_,,̱, is the normalization constant of _,,̱,. In the case of= 0, this basis is studied in <cit.> but written in terms of OPs on the unit ball, andfurther properties of the OPs are explored. These properties will be needed in the sequel and we describe them in the following subsection.§.§.§ Gegenbauer polynomials on the double coneFor= 0, the weight function in the previous section becomes_,̱,(x,t):= _0,,̱, (x,t) =(1- t^2)^(̱t^2- x^2)^ |t|^.The orthogonal basis for_n^d+1(_,̱,,^d+1)in (<ref>) consist of polynomials_j,k,ℓ^n (x,t) =C_n-k^(+̱12, k+++ d2)(t) t^2j P_j^(, k-2j+d-2/2)(2x^2/t^2-1) Y_ℓ^k-2j(x),which can also be rewritten in terms of the classical OPsP_ℓ,j^k (W_+12)in (<ref>) on the unit ball as_j,k,ℓ^n (x,t) = C_n-k^(+̱12, k+++ d2)(t) t^k P_ℓ,j^k (W_+12;x/t).These are the polynomials given in <cit.>, under a change of parameters(,̱,μ) ↦(, +̱12, +12),where they are called the Gegenbauer polynomials on the double cone when=12. This basis is used to reveal twodistinguished properties for_n^(W_,̱, 12, ^d+1)and_n^(W_,̱, -12, ^d+1), whichwe describe below. Notice, however, that= 12for_n^and= -12for_n^. The first property is the spectral differential equation that has the space of OPs as eigenfunctions, which is <cit.>. For n=0,1,2,…, every u ∈_n^(_,̱,12,^d+1) satisfiesthe differential equation [(1-t^2) ∂_t^2 + Δ_x -x,∇_x ^2 +2/t (1-t^2) x, ∇_x∂_t+ (2+d+1)1/t∂_t-t ∂_t - (2+̱2+d+1)( t ∂_t + x ,∇_x) ] u= -n(n + 2 +̱ 2 +d+1) u.Furthermore, every u ∈_n^(_,̱,-12,^d+1) satisfies the differential equation [(1-t^2) ∂_t^2 + Δ_x -x,∇_x^2 -x,∇_x + 2/t (1-t^2)x,∇_x ∂_t+2+d-1/t(∂_t -1/t)-(2+2+d) (t ∂_t +x ,∇_x) ] u = -n(n + 2 +̱ 2 +d+1) u,where Δ_x and ∇_x indicate that the operators are acting on x variable. The second property is an addition formula for the space of OPs, which is stated in <cit.>.Let ,̱≥ 0, +̱≥ d 2 and let = + +d/2. For n =1,2,…, _n^ (_,̱,; (x,t),(y,s) ) = ++̱12/+12 s t_n-1^ (_,̱,+1; (x,t),(y,s) ).Furthermore, for= 12, ,̱≥ 0 and n =0,1,2,…, _n^(_,̱,12; (x,t),(y,s) )=c_ c_∫_[-1,1]^2Z_m^+̱+d+12( ζ(x,t,y,s; u,v)) × (1-v^2)^-̱12(1-u^2)^-12ụṿ,where ζ(x,t, y,s; u, v) =( x,y+ u √(t^2-x^2)√(s^2-y^2)) sign(st)+ v √(1-s^2)√(1-t^2).The formula holds under the limit if either one of $̱ andis-12or both are.These two properties will be used to derive analogs for several new domains in the next section. For > 12,there is also an addition formula for_n^ (W_,̱,; (x,t),(y,s) ), more involved and given asan integral over[-1,1]^4in <cit.>, which we shall not state to avoid an overload of formulas. As we discussed in the Subsection <ref>, these two properties can be used to study the Fourier orthogonal seriesfor functions even in thet-variable on^d+1or functions on_+^d+1. 
For some of their applicationsin approximation theorem and computational harmonic analysis, see <cit.>.§.§ Double conic domainsFor > 0, we consider the domain√(Ω_)defined as a triangle with vertices at(0,0),(0, )and(1,1).More precisely,∇_ := √(Ω_) = {(u,v):0 ≤u ≤ v≤ (1-) u + ≤ 1 }, ≥ 0.The rotation in thetaxis of the fully symmetric domainΩ_ ={(s,t): (s^2,t^2) ∈√(Ω_)}leads to the conic domains of two bodies,_^d+1 ={(x,t): x^2 ≤ t^2 ≤ (1-) x^2 + ,|t| ≤ 1}.If = 1, this reduces to the double cone studied in Subsections <ref> and <ref>.For0 <<1, the domain_=1^d+1is bounded by the double conic surface and is caped by hyperbolicsurfaces{(x,t): (1-) x^2 += t^2 ≤ 1}at the two ends. For = 12, the domainΩ_is depicted in the left-hand of Figure <ref>, where√(Ω_)is the shaded triangle, andthe domain_^d+1is depicted in the right-hand side.For > 1, the domain_=1^d+1is again bounded by the double conic surface but is capped at the twoends by the surfaces{(x,t): (1-) x^2 += t^2 ≤ 1}that are elliptic rather than hyperbolic . For = 2, the domains√(Ω_)and_^d+1are depicted in the Figure <ref>.For, ,̱, >-1, let^_,,̱,be the weight function defined by^_,,̱,(s,t) = s^( (1-s)- (t-s))^(̱t-s)^(t - (1-)s)^,(s,t) ∈√(Ω_),so that the weight function on the conic domain^d+1becomes_,,̱,^ (x,t)= ^_,,̱,(x^2,t^2) = x^2( + (1-)x^2 - t^2)^(t^2- x^2)^ (t^2-(1-)x^2 )^.In this case, the triangle∇_becomes the triangle∇in (<ref>) under the affine mapping(s,t) ↦(s, s+t-s/)and the weight function^becomes^_,,̱,(s,t) =^+̱+_,,̱,(s, s+1(t-s))in terms of_,,̱,defined in (<ref>). Since the affine change of variable is applied tothe weight function_- 12, ±12^(k)(s,t), we could deduce the basis for_m(_^(2k), ∇_)from the corresponding basis for = 1by a changing variable. Indeed, by (<ref>), theweight function_- 12, ±12^(k)for 0becomes_k++d-2/2, ,̱, ±12(s, 1-s-1(t-s)).Hence, following the same argument used for = 1, we obtain the following analog of Proposition <ref>.Let ,̱ > -1, > - d2, and ++≥ - d2. Let {Y_ℓ^k-2j} be an orthonormal basis of _n^d. Then the space _n(_,,̱,^, _^d+1) has an orthogonal basis that consists of Y_ℓ^k (x) _j,m^k++d-2/2, ,̱,-12(x^2, 1-x^2-1/(t^2- x^2) ), n-k = 2m,t _j,m^k++d-2/2, ,̱,+ 12(x^2, 1-x^2-1/(t^2- x^2) ), n-k = 2m+1for 1 ≤ℓ≤_k-2j^d and 0 ≤ j ≤ m. There is however a subtlety for an analog of Proposition <ref>. Let us start with an analog ofLemma <ref> for_^(2k)associated with^_,,̱,, which holds howeveronly for polynomials of even degrees.Let _j^n(^(2k)) be the orthogonal basis (<ref>) for _n^(^(2k), Ω). Then an orthogonal basis for _n^(_^(2k), Ω_) consists of polynomials_j^n(_^(2k); s,t) = C_n-2j^(+̱12, 2 j+k++++d 2)(√(s^2+1(t^2- s^2)))(s^2+1(t^2- s^2))^j × P_j^(, k++d-2/2)(2s^2/s^2+1(t^2- s^2)-1) = _j^n(W^(2k); s, √(s^2+1(t^2- s^2))),0 ≤ j ≤ n,0 ≤ j ≤ n/2,only if n is an even integer. Moreover, the norm of _j^n(_^(2k))inL^2(_^(2k), Ω_) and the norm of _j^n(^(2k)) inL^2(_^(2k), Ω_) are equal.The first identity of the OP is verified exactly as in the proof of Lemma <ref>. If n is even, then C_n-2j^(ł,μ) is an even function, so that C_n-2j^(ł,μ)(√(x)) is a polynomial in x. Hence,_j^n(W_^(2k)) is indeed a polynomial in s and t. For n is odd, however, _j^n(W_^(2k)) is no longer a polynomial as C_n-2j^(ł,μ) is odd and C_n-2j^(ł,μ)(√(x))contains a factor √(x). To see that their norm squares are equal, we observe that the norm of _j^n(W^(2k)) can be reduced to the norm square of P_j,m(_-12, ±12), as in the proof ofTheorem <ref>. 
Since _-12, ±12^ becomes _-12, ±12 by an affine change ofvariable, the norm P_j,m(_-12, ±12^) is the same as that of P_j,m(_-12, ±12). Using the basis in Lemma <ref>, we can deduce by (<ref>) and (<ref>) anorthogonal basis for the space_n^(_,,̱,^, _^d+1)of OPsthat are even intvariable. The space _n^(_,,̱,^, _^d+1) has an orthogonal basis _j,n-2m,ℓ^,n(x,t) =_j^2m(^(2k); x, √(x^2+1(t^2-x^2))) Y_ℓ^k-2j(x)=_j, n-2m,ℓ^n(x, √(x^2+1(t^2-x^2))),where _j,k,ℓ^n are polynomials given in (<ref>). Moreover, the norm of _j,n-2m,ℓ^,n in L^2(_,,̱,^, _^d+1) is equal to the norm of _j,n-2m,ℓ inL^2(_,,̱,, ^d+1). For 1, an orthogonal basis for the space_n^(_,,̱^, ^d+1)can bederived from Proposition <ref>, but the analog of (<ref>) no longer holds. The advantage of the basis in (<ref>) lies in the spectral operator and the addition formula that we now state. Thefirst one is the existence of the spectral differential operator.For > 0 and n=0,1,2,…, there is a second order differential operator ^_x,t such thatevery u ∈_n^(^_,̱,12,^d+1) satisfies the differential equation ^_x,t u = -n(n + 2 +̱ 2 +d+1) u,where if _x,t(∂_x, ∂_t) denotes the operator in the left-hand side of (<ref>),then ^_x,t = ^_x,t(∂_x, ∂_t) is determined by ^_x,t(∂_x, ∂_t) = _x, z(∂_x, ∂_z),z = √(x^2+1(t^2-x^2)). This follows as a consequence of the second identity in (<ref>), so is the next property on the addition formula.For> 0, = 12, ,̱≥ 0, the kernel _n^(^_,̱,12; ·, ·) satisfies the addition formula (<ref>) with ζ(x,t,y,s;u,v) replaced byζ^ (x,t, y,s; u, v) = ( x,y+ u /√(t^2-x^2)√(s^2-y^2)) sign(st)+ v √(1-y^2-1(s^2- y^2))√(1-x^2-1(t^2- x^2)).The formula holds under the limit if either one of $̱ andis0or both are. §.§ Double hyperbolic domainsAs a further generation, we consider the triangle that has vertexes at(0,), (0,)and(1,1)with >> 0; that is,∇_, := √(Ω_,) = {(u,v): 0 ≤ (1-) u+≤ v ≤ (1-) u + ≤ 1 },which reduces to the triangle in the previous subsection when = 0. The rotation in thetaxis of thefully symmetricΩ_, ={(s,t): (s^2,t^2) ∈√(Ω_,))}leads to the double hyperboloid_,^d+1 ={(x,t): (1-) x^2 + ≤ t^2 ≤ (1-) x^2 + ≤ 1 }.§.§.§ =1The domain_,1^d+1is the hyperboloid bounded by the hyperbolic surface{(x,t): (1-) x^2 += t^2}and two hyperplanes att= ± 1,_1,^d+1 ={(x,t): (1-) x^2 + ≤ t^2 ≤ 1 }.For = 0.1, this domain andΩ_1,are depicted in Figure <ref>, where∇_1,is the shaded triangle. OPs for a family of weight functions on the hyperboloid_,1^d+1were studiedin <cit.> but with a different parametrization that amounts to a dilation oft ↦√(1+ρ)twith = ρ /(1+ρ), where it is shown that orthogonal bases can be derived from the corresponding basis on the double cone_1,1^d+1by a simple change of variables, which we shall also discuss below. §.§.§ 1There are three cases according to different geometry of the domain. Case 1. 0<<< 1. The domain_,^d+1is a hyperboloid bounded by the hyperbolic surface{(x,t): (1-) x^2 += t^2 ≤ 1}and capped at both ends by the hyperbolic surface{(x,t): (1-) x^2 += t^2 ≤ 1}instead of hyperplanes. For = 58and = 18, the domainsΩand_,^d+1are depicted in Figure <ref>. Case 2. 0 << 1 <.The domain_,^d+1is the hyperboloid bounded by the same hyperbolicsurface{(x,t): (1-) x^2 += t^2 ≤ 1}but capped at both ends by the surface{(x,t): (1-) x^2 += t^2 ≤}that is elliptic rather than hyperbolic. For = 2and = 18, the domainsΩand_,^d+1are depicted in Figure <ref>.Case 3. 0 < 1 <<. The domain_,^d+1is bounded by two double surfaces{(x,t): (1-) x^2 += t^2 ≤}and{(x,t): (1-) x^2 += t^2 <1},both are elliptic rather thanhyperbolic. 
The two surfaces intersect at the unit sphere. Geometrically, both surfaces that bound theupper half of_,concave downward. We shall not depict the region in this case. Let_,,̱,be the weight function defined in (<ref>). For(s,t) ∈∇_,, let^,_,,̱,(s,t)= s^( (1-s) - (t -s))^(̱t- s -(1-s))^(t- - (1-)s)^ = (-)^+̱+_,,̱,(s, t-- (1-)s/-).Then the corresponding weight function on the rotational solid_,^d+1is given by_,,̱,^,(x,t)= x^2 ((1-x^2)- (t^2 - x^2) )^ ×(t^2- x^2 -(1-x^2))^(t^2- - (1-)x^2)^.In terms of the_,,̱,defined in (<ref>), we have_,,̱,^,(x,t) = (-)^+̱+_,,̱,(x^2, t^2--(1-) x^2/-). If = 1, then an orthogonal basis for_n(_,,̱,^1,, _1,^d+1)follows immediately fromthe corresponding basis on the double cone^d+1in Proposition <ref>. More precisely,from (<ref>), we obtain the following proposition.Let ,̱ > -1, > - d2, and ++≥ - d2. Let {_j,k,ℓ^n } be the orthogonal basis ofthe space _n(_,,̱,, ^d+1) in Proposition <ref>. Then the polynomials_j,k,ℓ^(1,), n (x,t) =_j,k,ℓ^n (x, √(t^2-/1-)), 1 ≤ℓ≤_k-2j^d,j ≤ k/2,0 ≤ k ≤ n,consist of an orthogonal basis for _n(^1,_,,̱,, ^d+1_1,).In the case = 0, = μ-12and = 12, this basis is studied in <cit.> with = ρ^2/(1+ρ^2)andt ↦√(1+ρ^2)t. In particular, for = 0, the full strength of Theorem <ref> for the spectral differential operator and Theorem <ref> for the addition formula hold, which we summarize in thefollowing proposition for the record.Let = 1 and 0 ≤ < 1. Let _x,t(∂_x, ∂_t) denote the operator in the left-hand sideof (<ref>) or (<ref>), respectively. Define^1,_x,t(∂_x, ∂_t) = _x, z(∂_x, ∂_z),z = √(t^2-/1-). Then every u ∈_n^(^1,_0,,̱,12, _1,^d+1) oru ∈_n^(^1,_0,,̱,-12, _1,^d+1), respectively, satisfies^1,_x,t(∂_x, ∂_t) u = - n (n+2+̱+ 1) u .Let =1 and 0 << 1. Then the kernel _n^(^1,_0,,̱,; ·, ·) and the kernel _n^(^1,_0,,̱,+1; ·, ·) satisfies the relation (<ref>). Moreover, _n^(^1,_0,,̱,; ·, ·) satisfies the addition formula (<ref>) with ζ(x,t,y,s;u,v) replaced byζ^1, (x,t, y,s; u, v) = ( x,y+ u √(t^2-1--x^2)√(s^2-1--y^2)) sign(st)+ v/1-√(1- s^2)√(1-t^2). For 1and > 0, orthogonal bases on the domain^d+1_,can be derived from theorthogonal bases on the domain^d+1_1, , just like in the case in Section <ref> for =0.In this case, we only have an explicit orthogonal basis for the space of OPs that are even in thet-variable.Let ,̱ > -1, > - d2, and ++≥ - d2. Let {_j,k,ℓ^n } be the orthogonal basis ofthe space _n^(_,,̱,, ^d+1) in Proposition <ref>. Then the polynomials_j,k,ℓ^(,), n (x,t) =_j,k,ℓ^n (x, √(t^2-- (1-)x^2/-)),j ≤ k/2,0 ≤ k ≤ nand 1 ≤ℓ≤_k-2j^d, consist of an orthogonal basis for _n^(^,_,,̱,, ^d+1_,).This provides an orthogonal basis for_n^(^,_,,̱,, _,^d+1)givenexplicitly in terms of classical Jacobi polynomials on the simplex and spherical harmonics. Such a basis, however, does not hold for_n^(^,_,,̱,, _,^d+1)for 1.We also have analogs of the spectral differential operators and the addition formula in this setting, both hold for polynomials in_n^(^,_,,̱,, _,^d+1)with = 12.Let 0 <<. Then every u ∈_n^(^,_0,,̱,12, _,^d+1) satisfies an analog of (<ref>) with _x,t^1, replaced by^,_x,t(∂_x, ∂_t) = _x, z(∂_x, ∂_z),z = √(t^2-- (1-)x^2/-). Moreover, the addition formula (<ref>) holds for the kernel _n^(^,_0,,̱,12; (x,t),(y,s) )with ζ(x,t,y,s;u,v) replaced byζ_, (x,t, y,s; u, v) = ( x,y+ u/-√(t^2-- (1-)x^2)√(s^2-- (1-)y^2)) sign(st)+ v/ - √( - t^2+(1-)x^2)√(-s^2+ (1-)y^2).§.§ Intersections of two touching ellipsoids For0 ≤ < , we consider the domain√(Ω_,)defined as a triangle with vertices at(0,),(0, )and(1,0). 
More precisely,√(Ω) = {(u,v): (1-u) ≤ v ≤(1 - u) }.The rotation in thetaxis of the fully symmetric domainΩ ={(s,t): (s^2,t^2) ∈√(Ω)}leads to the domain bounded by two ellipsoidal surfaces that touch at the unit spherewhent =0,^d+1 = {(x,t):√(1- t^2/)≤x≤√(1- t^2/),|t| ≤}.The ellipsoidal surface{(x,t):(1- x^2) = t^2}becomes the unit sphere^dif = 1. The domainΩ_,and_,^d+1are depicted on the right-hand side of Figure <ref> for = 1and = 12, whereΩ_,is the shaded domain betweenthe circle andthe ellipse, and√(Ω_,)is the shaded triangle domain.Let_,,̱,be the weight function defined in (<ref>). For(s,t) ∈√(Ω_,), let^,_,,̱,(s,t)= s^( (1-s)-t)^( t-(1-s) )^(t-+s)^ = (-)^+̱+_,,̱,(s, t- + s/-).The corresponding weight function on the rotational solid_,^d+1is given by_,,̱,^,(x,t)= x^2 ((1-x^2)- t^2 )^(t^2-(1-x^2))^(t^2- + x^2)^ = (-)^+̱+_,,̱,(x^2, t^2-+x^2/-).where_,,̱,is defined in (<ref>). For0<< , orthogonal bases on the domain^d+1_,can be derived from the orthogonal bases on the domain^d+1_1, , but only for the space of polynomials even in thet-variable.Let ,̱ > -1, > - d2, and ++≥ - d2. Let {_j,k,ℓ^n } be the orthogonal basis ofthe space _n^(_,,̱,, ^d+1) in Proposition <ref>. Then the polynomials_j,k,ℓ^(,), n (x,t) =_j,k,ℓ^n (x,√(t^2-+x^2/-)), j ≤ k/2,0 ≤ k ≤ nand 1 ≤ℓ≤_k-2j^d, consist of an orthogonal basis for_n^(^,_,,̱,, ^d+1_,).If =0and = 1, then the domain_1,0^d+1 = ^d+1is the unit ball in^d+1and theweight function is_,,̱,^1,0(x,t) = x^2 (1-x^2 - t^2 )^|̱t|^2 (t^2-x^2)^,which becomes the classical OPs on the unit ball when ===0, as given in(<ref>), and a basis in the vein of (<ref>) is also known if 0or 0.However, the basis is new if 0. The spectral differential operator and the addition formula are known for the classical OPs.For = 0and =12, both these properties also hold for the space_n^(_0,,̱,12^1,, ^d+1), as stated below, which are new.Let 0 ≤ <. Then every u ∈_n^(^,_0,,̱,12, _,^d+1) satisfies an analog of (<ref>) with _x,t^1, replaced by^,_x,t(∂_x, ∂_t) = _x, z(∂_x, ∂_z),z = √(t^2-+ x^2/-). Moreover, the addition formula (<ref>) holds for the kernel _n^(^,_0,,̱,12; (x,t),(y,s) )with ζ(x,t,y,s;u,v) replaced byζ^, (x,t, y,s; u, v) = ( x,y+ u/-√(t^2- (1-x^2))√(s^2-(1-y^2))) sign(st)+ v/ - √( - t^2-x^2)√(-s^2 -y^2).§.§ ApplicationsA framework is developed in <cit.> for approximation theory and computational harmonic analysis on the space ofhomogeneous type(Ω, , ), whereΩis a domain in a Euclidean space,is a doubling weightfor the metric (distance)onΩ. It is based on highly localized kernels derived from OPs foronΩ, which are established with the help of an addition formula, and uses the spectral operator to relate approximation bypolynomials and smoothness of function. The framework applies to the Gegenbauer polynomials on the hyperboloidin <cit.> and leads to a characterization of best approximation by polynomials and localized tight frame, amongseveral other results.Since the new domains and weight functions in this section are related to double cone and hyperboloid by a change ofvariable, so much so that both spectral operator and addition formula are preserved, it follows readily that the framework in <cit.> is applicable and the aforementioned results hold on the new domains discussed in this section as well. 99AS. A. Agahanov,A method of constructing orthogonal polynomials of two variables for a certain class of weight functions,Vestnik Leningrad Univ.20(19) (1965), 5–10. BPSR. Barrio, J. M. Peña, and T. 
Sauer,Three term recurrence for the evaluation of multivariate orthogonal polynomials,J. Approx. Theory, 162 (2010), pp. 407–420.DaiX F. Dai and Y. Xu, Approximation theory and harmonic analysis on spheres and balls. Springer Monographs in Mathematics, Springer, 2013. DXC. F. Dunkl and Y. Xu, Orthogonal Polynomials of Several Variables. Encyclopedia of Mathematics and its Applications 155,Cambridge University Press, Cambridge, 2014.KT. H. Koornwinder (1975). Two-variable Analogues of the Classical Orthogonal Polynomials. In R. A. Askey (Ed.), Theory and Application of Special Functions, pp. 435–495. New York: Academic Press.KSH. L. Krall and I. M. Sheffer, Orthogonal polynomials in two variables, Ann. Mat. Pura Appl. (4) 76 (1967), 325–376.LN Z. Liu and A. Narayan, A Stieltjes algorithm for generating multivariate orthogonal polynomials, SIAM Journal on Scientific Computing, 45 (2023) A1125–A1147. OST S. Olver, R. M. Slevinsky, A. Townsend,Fast algorithms using orthogonal polynomials Acta Numerica29, 573–699.OTV S. Olver, A. Townsend, and GM. Vasil, Recurrence relations for a family of orthogonal polynomials on a triangle. Spectral and High Order Methods for Partial Differential Equations – ICOSAHOM 2018, 79–92,Lect. Notes Comput. Sci. Eng., 134, Springer, Cham, 2020.OX1 S. Olver and Y. Xu,Orthogonal polynomials in and on a quadratic surface of revolution.Math. Comp.89 (2020), 2847–2865. SzG. Szegő,Orthogonal polynomials. 4th edition,Amer. Math. Soc., Providence, RI. 1975.WJ. Wade, Cesáro summability of Fourier orthogonal expansions on the cylinder, J. Math. Anal. Appl, 2013. X99Y. Xu,Summability of Fourier orthogonal series for Jacobi weight on a ball in ^d, Trans. Amer. Math. Soc., 351 (1999), 2439–2458.X12 Y. Xu, Orthogonal polynomials and expansions for a family of weight functions in two variables.Constr. Approx.36 (2012), 161–190.X20Y. Xu,Orthogonal polynomials and Fourier orthogonal series on a cone. J. Fourier Anal. Appl., 26 (2020), Paper No. 36, 42 pp.X21aY. Xu, Orthogonal structure and orthogonal series in and on a double cone or a hyperboloid. Trans. Amer. Math. Soc.374 (2021), 3603–3657.X21Y. Xu, Approximation and localized polynomial frame on conic domains. J. Functional Anal.281 (2021), no. 12, Paper No. 109257, 94 pp.X21bY. Xu, Laguerre expansions on conic domains. J. Fourier Anal. Appl.27 (2021), no.4, paper No. 64, 36 pp. X23aY. Xu,Approximation and localized polynomial frame on double hyperbolic and conic domains. Constructive Approx.57 (2023), 921–976.X23bY. Xu, Fourier orthogonal series on a paraboloid. J. d'Analyse Math.149 (2023), 251–279.
http://arxiv.org/abs/2311.15554v1
{ "authors": [ "Yuan Xu" ], "categories": [ "math.CA", "33C45, 42C05, 42C10, 65D15, 65D20" ], "primary_category": "math.CA", "published": "20231127054336", "title": "Orthogonal polynomials on domains of revolution" }
printacmref=false Work at The State Key Laboratory of Blockchain and Data Security. [email protected] University [email protected] Group[1] [email protected] University [email protected] of Connecticut [email protected] Group[1] Zhan Qin is the corresponding author. [email protected] University[1] [email protected] University Top-k frequent items detection is a fundamental task in data stream mining. Many promising solutions are proposed to improve memory efficiency while still maintaining high accuracy for detecting the Top-k items. Despite the memory efficiency concern, the users could suffer from privacy loss if participating in the task without proper protection, since their contributed local data streams may continually leak sensitive individual information. However, most existing works solely focus on addressing either the memory-efficiency problem or the privacy concerns but seldom jointly, which cannot achieve a satisfactory tradeoff between memory efficiency, privacy protection, and detection accuracy.In this paper, we present a novel framework HG-LDP to achieve accurate Top-k item detection at bounded memory expense, while providing rigorous local differential privacy (LDP) protection. Specifically, we identify two key challenges naturally arising in the task, which reveal that directly applying existing LDP techniques will lead to an inferior “accuracy-privacy-memory efficiency” tradeoff. Therefore, we instantiate three advanced schemes under the framework by designing novel LDP randomization methods, which address the hurdles caused by the large size of the item domain and by the limited space of the memory. We conduct comprehensive experiments on both synthetic and real-world datasets to show that the proposed advanced schemes achieve a superior “accuracy-privacy-memory efficiency” tradeoff, saving 2300× memory over baseline methods when the item domain size is 41,270. Our code is open-sourced via the link.[<https://github.com/alibaba-edu/mpc4j/tree/main/mpc4j-dp-service>] 20 February 2007 [revised]12 March 2009 [accepted]5 June 2009 Local Differentially Private Heavy Hitter Detection in Data Streams with Bounded Memory Kui Ren=======================================================================================§ INTRODUCTIONDetecting Top-k frequent items in data streams is one of the most fundamental problems in streaming data analysis <cit.>. It forms the foundation for a multitude of critical applications across various domains, such as anomaly detection in data mining <cit.>, click analysis in web analysis <cit.>, and topic mining in social networks <cit.>. In the typical decentralized setting as illustrated in Figure <ref>, the users send local item counts to the server in a streaming fashion, and the server continuously finds hot items (i.e., items with high-frequencies, as depicted in Figure <ref>) in the item domain based on all users' local streams.Apparently, the naïve solution to count and store all items ever appearing in the data streams will incur an overwhelming memory burden for large domain sizes that are commonly encountered in practice.For example, as of 2023, there are over 1 billion videos uploaded on the Youtube platform, with over 500 hours of videos uploaded every minute <cit.>. It becomes evident that maintaining a histogram of the expanding item domain on the server side for identifying hot items is impractical. 
Many existing works thus focus on improving memory efficiency by designing advanced data structures, especially for applications where the domain size is too large to efficiently fit in the memory <cit.>. Furthermore, users' submitted streaming data often contain sensitive individual information, e.g., click analysis may reveal online behavior and topic mining may reveal political opinions. The privacy of users is under severe threat if they submit local data streams without proper privacy protection.In particular, the privacy concern has a unique characteristic in the Top-k detection problem. That is, the cold items (i.e., items with low frequencies, as depicted in Figure <ref>) are not statistical targets, but constitute the majority of the data domain and are particularly sensitive, as they reveal highly personal information specific to certain user groups. Due to its central role in streaming data analysis, Top-k frequent items detection has attracted significant research attention in recent years.However, most existing works pursue the memory efficiency or privacy protection goals separately but seldom jointly.On the memory efficiency side, a series of approaches have been proposed to improve the memory efficiency with decent accuracy for detecting the Top-k items<cit.>.The key rationale of the memory-saving stems from the fact that most items are cold while only a few items are hot in practical data streams <cit.>. Accurately recording the information of massive cold items not only wastes much memory, but also incurs non-trivial errors in hot item estimation when the memory is tight. Thus, existing methods seek to design a compact data structure to keep and guard the items and their frequencies of hot items, while possibly evicting cold items.One of the most widely adopted and effective data structures addressing this challenge is HeavyGuardian <cit.>.It introduces the separate-and-guard-hot design principle, which effectively segregates hot items from cold items, preserving the accuracy of hot item estimations. HeavyGuardian further delineates a specific strategy called Exponential Decay (ED) to guard the hot items by exponentially decreasing the probability that the possible cold items remain in the heavy part of the data structure. However, despite achieving a promising balance between accuracy and memory efficiency, none of these methods simultaneously account for privacy concerns. On the privacy protection side, Differential Privacy (DP) has been regarded as a de facto standard by both academia and industry <cit.>. In the decentralized data analytics setting,Local Differential Privacy (LDP) is the state-of-the-art approach extended from DP to the local setting, which has been widely deployed in industry, e.g., Google Chrome browser <cit.> and Apple personal data collection <cit.>.In LDP, each user perturbs his/her data with a local randomization mechanism before sending it to the server.The server could still derive general statistics from the perturbed submissions with a certain accuracy decrease. General randomization mechanisms for frequency estimation such as Generalized Randomized Response (GRR), Optimal Local Hash (OLH) <cit.>, and Hadamard Response (HR) <cit.>, can be applied to Top-k items detection as baseline methods.There also exist many works designed specifically for heavy hitter estimation under LDP, including estimates over the single-valued data <cit.>, and set-valued data <cit.>. 
However, it is noteworthy that these works neither address the data stream setting nor tackle the issue of memory efficiency.In this paper, our objective is to bridge the gap between memory-efficient heavy hitter tracking in data streams and LDP privacy protection. To achieve this, we introduce the HG-LDP framework designed for tracking the Top-k heavy hitters within data streams. This framework comprises three essential modules.First, the randomization module is responsible for randomizing the streaming data generated by users, ensuring event-level LDP privacy that is more suitable for the streaming data <cit.>.Second, the storage module records the incoming data on the server side.To this end, we integrate the HeavyGuardian data structure, and significantly optimize its implementations, i.e., dynamic parameter configuration, and sampling optimization (see details in Appendix <ref>) to facilitate the heavy hitter tasks and processing of LDP-protected noisy data. Finally, the response module processes and publishes the statistical results of heavy hitters.It is worth noting that directly applying existing LDP techniques cannot achieve satisfactory accuracy or would be even functionally infeasible, primarily due to the following two new challenges. Challenge (1): Incompatibility of Space-Saving Strategy and Large Domain Size for LDP.To highlight this challenge, we instantiate a basic scheme BGR as a baseline (detailed in Section <ref>), which directly uses the Generalized Randomized Response (GRR) mechanism <cit.> in the randomization module. The large domain size incurs two problems that jointly fail BGR: 1) the noise variance introduced by the GRR will increase as the data domain increases; 2) the space-saving strategy of the data structure introduces additional underestimation error to the noise items, which will be further amplified by the debiasing operation, required by LDP.Although existing mechanisms such as Optimal Local Hash (OLH) <cit.> and Hadamard Response (HR) <cit.> in the LDP field aim to alleviate the impact of large data domains on randomized results' accuracy, it is crucial to emphasize that we still confront a unique and unaddressed challenge.We identified that the core idea of the LDP field in addressing this problem is to encode the large data domain into a smaller one for randomization. However, the decoding of randomized data on the server side inevitably produces a multiple of diverse collision data, which can significantly disrupt the decision-making of the space-saving strategy.Challenge (2): Dynamically Changing Hot/Cold Items. Notably, cold items often constitute the majority of the data domain, and indiscriminately randomizing data across the entire domain can result in an unnecessary waste of privacy budget. The ability to distinguish between hot items and cold items during the randomization process is crucial for enhancing the accuracy of hot item estimation.However, since the labels of hot and cold items may dynamically change as the data stream evolves, randomizing data based on the previous timestamp's state may introduce a huge bias towards the prior state.This poses several new challenges, e.g., how to strike a balance between reducing unnecessary privacy budget expenditure on cold items, and how to manage such dynamically emerging bias.Addressing this challenge also mandates novel LDP mechanism designs. 
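To give a rough sense of the scale of Challenge (1), the following back-of-the-envelope sketch (our own illustration, not part of the paper's implementation) compares the expected number of decoded collision candidates per report, d/g, against the size k of the tracked structure, assuming the encoding domain size g ≈ e^ϵ + 1 commonly used by OLH-style mechanisms.

import math

def decoded_collisions_per_report(d: int, epsilon: float) -> float:
    # Expected number of domain items that decode to the same reported value
    # when a domain of size d is compressed into g ~ e^eps + 1 cells.
    g = math.exp(epsilon) + 1
    return d / g

d, k = 41_270, 20   # domain size taken from the abstract; k is the tracked Top-k size
for eps in (0.5, 1.0, 2.0, 4.0):
    c = decoded_collisions_per_report(d, eps)
    print(f"eps={eps}: about {c:,.0f} collision candidates per report (k={k})")

Even at ϵ = 4 every report decodes to hundreds of equally plausible items, far more than k, so a space-saving strategy can no longer tell a genuinely hot item apart from the collision noise.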
Contribution.In this paper, we initiate a baseline method and propose three novel advanced LDP designs under the hood of a framework HG-LDP to address these hurdles.First, we present a baseline method that directly combines the GRR mechanism with HeavyGuardian data structure. Second, we propose a newly designed LDP mechanism. It is based on the observation that the ED strategy does not need to know the specific item of the incoming data in most cases if it is not recorded in the data structure. Third, we adjust the noise distribution by dividing the privacy budget to achieve higher accuracy. Finally, we utilize the light part of HeavyGuardian to elect current cold items before they become new hot items, which further improves the accuracy of the estimated result. The main contributions are summarized as follows.* To our best knowledge, this paper is the first to track the Top-k frequent items from data streams in a bounded memory space while providing LDP protection for the sensitive streaming data.We present a general framework called HG-LDP to accommodate any proper LDP randomization mechanisms on the users' side into the space-saving data structures on the server side for the task.* By investigating the failure of naïvely combining existing LDP techniques with HG-LDP, we design three new LDP schemes, which achieve a desired tradeoff performance between accuracy, privacy, and memory efficiency.* We comprehensively evaluate the proposed schemes on both synthetic and real-world datasets in terms of accuracy and memory consumption, which shows that the proposed schemes achieve higher accuracy and higher memory efficiency than baseline methods. For instance, when the size of the domain size reaches 41,270, the proposed schemes save about 2300× size of memory over baselines.§ PRELIMINARIES §.§ Problem StatementWe consider the setting of finding Top-k items in data streams under Local Differential Privacy (LDP). Given n users, each user generates a private infinite data stream. Denote v_i^t∈Ω as the data generated by the user u_i at timestamp t. The user only sends data at the timestamp when data is generated.A server collects values from users at each timestamp t. Note that the server can only maintain a data structure with a length much smaller than the size d of data domain Ω due to its limited memory space. Whenever a query is received, the server needs to publish the Top-k items up to the latest timestamp and their counts. §.§ Privacy DefinitionsIn this paper, we provide event-level privacy guarantee <cit.>. Specifically, the event-level LDP ensures the indistinguishability of any pairs of elements in streams, e.g., every single transaction remains private in a user's long-term transactions: An algorithm ℳ satisfies ϵ-LDP, where ϵ≥ 0, if and only if for any input v, v' ∈𝔻, and any output y ∈ Range(ℳ), we have [ℳ(v) = y ] ≤ e^ϵ[ℳ(v') = y ]. The parameter ϵ is called the privacy budget, whereby smaller ϵ reflects stronger privacy guarantees. We say ℳ satisfies ϵ-LDP if for different data v and v', the ratio of distribution of output ℳ(v) and that of ℳ(v') are not greater than e^ϵ.§.§ LDP Mechanisms The Randomized Response (RR) mechanism <cit.> is considered to be the first LDP mechanism that supports binary response. It allows each user to provide a false answer with a certain probability so as to provide plausible deniability to users. The Generalized Randomized Response (GRR) mechanism <cit.> is an extension of Randomized Response (RR) <cit.>, which supports multi-valued domain response. 
Denote d as the size of the domain 𝔻. Each user with private value v ∈𝔻 reports the true value v' = v with probability p and reports a randomly sampled value v' ∈𝔻 where v' ≠ v with probability q. The probability p and q are defined as follows{ p = e^ϵ/e^ϵ + d - 1, q = 1/e^ϵ + d - 1. .where d is the size of the data domain. It is straightforward to prove ϵ-LDP for GRR, i.e., p/q≤ e^ϵ <cit.>. Assuming that each of the n users reports one randomized value. Let ĉ_i be the number of value i occurs in randomized values, the estimation of true number c̃_i of value i can be computed withc̃_i = ĉ_i - nq/p-q. The variance of the estimated result c̃_i isVar[c̃_i] = n·d-2+e^ϵ/(e^ϵ-1)^2. As shown above, the variance of the estimation result of the GRR mechanism increases linearly with the increase of d. Some other mechanisms, e.g., Optimal Local Hash (OLH) <cit.> and Hadamard Response (HR) <cit.>, are proposed to randomize data in a large data domain. Essentially, they map the data to a smaller domain before randomizing it to avoid the large variance caused by a large data domain. We defer their details to Appendix <ref>. §.§ Space-Saving Data StructureCounter-based data structures <cit.> and sketches <cit.> are two kinds of mainstream memory-efficient data structures. While sketches have been extensively studied as compressed data structures for frequency estimation, they may not be the optimal choice when it comes to heavy hitter estimation in data streams, particularly in scenarios characterized by limited storage space and real-time response requirements.This preference is underpinned by two key reasons:Firstly, sketches record counts for all items, whereas heavy hitter tasks only concern hot items.This equally treated recording of all counts results in unnecessary memory consumption.For example, the Count-Min sketch (CMS) necessitates a minimum of O(N/α×log(1/δ)) space to guarantee that the probability of error in the estimated count of each item being less than α is no less than 1-δ, with N representing the total data count <cit.>.Furthermore, as highlighted by Cormode and Hadjieleftheriou in <cit.>, sketches require additional storage for finding the counts of hot items.For instance, O(N/αlog d logδ) space increase is incurred when using group testing to find hot items, or a minimum of O(d) computational overhead is needed for hot item retrieval. Thus, in this paper, we choose to employ a counter-based data structure called HeavyGuardian proposed by Yang et al. <cit.> as the foundation for our framework. It identifies and records the high-frequency items in subsequent data streams based on observations of historical streaming data. The basic version of HeavyGuardian is a hash table with each bucket storing several KV pairs (⟨ ID, count⟩) and small counters. Specifically, each bucket is divided into two parts: a heavy part with a length of λ_h (λ_h>0) to precisely store counts of hot items, and a light part with a length of λ_l (λ_l can be 0) to approximately store counts of cold items. For each incoming item e, HeavyGuardian needs to decide whether and how to insert it into the heavy part of a bucket according to a strategy called Exponential Decay (ED). 
There are three cases when inserting an item e into the heavy part of HeavyGuardian.* The KV pair of e has been stored in the heavy part, it increments the corresponding count by 1.* The KV pair of e is not in the heavy part, and there are still empty buckets.It inserts the KV pair of e into the heavy part and sets the count to 1.* The KV pair of e is not in the heavy part, and there is no empty bucket. It decays 1 from the current least count in the heavy part with probability 𝒫=b^-c, where b is a predefined constant number (b=1.08 in <cit.>), and c is the count value. After decay, if the count becomes 0, it replaces this KV pair (the weakest KV pair) with e's KV pair, and sets the count to 1. If e is not successfully inserted into the heavy part, it is recorded in the light part. Since the heavy hitter tasks only focus on Top-k items and their counts, we set the parameters of HeavyGuardian as the number of buckets w=1, the length of the heavy part λ_h=k, and the length of light part λ_l=0 (except in one of the proposed scheme CNR). For simplicity of description, we denote the data structure of HeavyGuardian as ℋ𝒢 in the following sections. We use ℋ𝒢[i] to denote the i^th key pair in ℋ𝒢, and use ℋ𝒢[i].ID and ℋ𝒢[i].C to denote the ID and the count of an item, respectively. § HG-LDP FOR HEAVY HITTERS TRACKINGIn this section, we first introduce the HG-LDP framework for tracking heavy hitters in data streams with bounded memory space. Then, we instantiate a baseline to highlight key obstacles for achieving a satisfactory “accuracy-privacy-memory efficiency” tradeoff. §.§ Overview Figure <ref> illustrates the framework for HG-LDP, which contains three modules: randomization module, storage module, and response module. The randomization module runs on the user side to randomize the users' sensitive streaming data. The storage module and the response module run on the server side, where the storage module utilizes a space-saving data structure. In this paper, we aim to adapt and optimize the HeavyGuardian (ℋ𝒢) data structure due to its popularity and simplicity, but expect our LDP designs to be generalizable to more sophisticated space-saving data structures in the future. The randomized streaming data continuously reported by users is stored in ℋ𝒢 following the ED strategy, and the statistical results are released by the response module after debiasing. Specifically, the functions of the three modules can be summarized as the following three algorithmic components:* Randomize. It is executed in the randomization module. It takes raw data v_i^t of the i^th user at timestamp t as input, and outputs a randomized data r_i^t that satisfies LDP.* Insert. It is executed in the storage module. It inserts the randomized data r_i^t into ℋ𝒢 following the ED strategy, and updates the counts of the KV pairs in ℋ𝒢.* Response. It is executed in the response module. It obtains the hot items and their corresponding counts from ℋ𝒢 when receiving a request. Then it maps them to a list for publishing after debiasing all counts. In the following sections, we first instantiate a baseline scheme, and then propose three advanced schemes based on this framework by elaborately designing algorithms for the three modules. §.§ A Baseline Scheme: BGR We first discuss a baseline scheme BGR (Basic Scheme Combining GRR) that directly integrates an existing LDP scheme: GRR. Algorithms.The BGR algorithm is outlined in Algorithm <ref>. 
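For concreteness, the end-to-end behaviour of BGR can be sketched in a few lines of Python (this is our own illustrative code with hypothetical names, not the paper's open-sourced Java implementation): the user perturbs each item with GRR, the server inserts the noisy report into the heavy part following the ED strategy, and recorded counts are debiased with the standard GRR estimator at response time.

import math
import random

B = 1.08  # decay base of the ED strategy

def grr_randomize(v, domain, eps):
    # User side: Generalized Randomized Response over the full domain.
    d = len(domain)
    p = math.exp(eps) / (math.exp(eps) + d - 1)
    if random.random() < p:
        return v
    return random.choice([x for x in domain if x != v])

def ed_insert(hg, r, k):
    # Server side: insert a (noisy) item into a heavy part of size k
    # following the Exponential Decay strategy.
    if r in hg:                      # case 1: already tracked
        hg[r] += 1
    elif len(hg) < k:                # case 2: an empty slot remains
        hg[r] = 1
    else:                            # case 3: probabilistically decay the weakest entry
        weakest = min(hg, key=hg.get)
        if random.random() < B ** (-hg[weakest]):
            hg[weakest] -= 1
            if hg[weakest] == 0:
                del hg[weakest]
                hg[r] = 1            # the incoming item replaces the evicted one

def grr_debias(count, n, d, eps):
    # Response time: standard GRR debiasing of a recorded count.
    p = math.exp(eps) / (math.exp(eps) + d - 1)
    q = 1 / (math.exp(eps) + d - 1)
    return (count - n * q) / (p - q)

# toy run on a skewed stream
domain = list(range(100))
stream = [random.choice([1, 1, 1, 2, 2, 3] + domain) for _ in range(5000)]
hg, eps, k = {}, 2.0, 5
for v in stream:
    ed_insert(hg, grr_randomize(v, domain, eps), k)
print({item: round(grr_debias(c, len(stream), len(domain), eps)) for item, c in hg.items()})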
At timestamp t, the data v_i^t of a user u_i is randomized using GRR, and the resulting randomized value r_i^t is then transmitted to the server. Subsequently, the server incorporates r_i^t into the data structure ℋ𝒢 following the ED strategy. Note that the counts stored within ℋ𝒢 are consistently biased noisy values. To mitigate this, the server debiases all counts in the response module following the standard GRR debiasing approach <cit.> before publishing the statistical outcomes.Theoretical Analysis. Next, we theoretically analyze the error bound of the frequency estimated by BGR. Part of the error comes from the exponential decay of the counts on the server when the coming data is not recorded in ℋ𝒢. Another part of the error comes from the noise introduced by the randomization module to perturb the data with the GRR. We first give the error analysis for the ED strategy of ℋ𝒢 provided by Yang et al. <cit.> in Lemma <ref> below.Given a stream prefix S_t with t items in Ω, it obeys an arbitrary distribution and |Ω|=d. We assume that there are w buckets to store the hottest λ items mapped to them, each item is mapped to a bucket with the probability of 1/w. Let v_i be the i^th hottest item, f_i be the real frequency of v_i, and f̃_i be the estimated frequency of v_i. Given a small positive number α, we havePr[f_i-f̃_i≥α t]≤1/2α t(f_i-√(f_i^2-4P_weakE(V)/b-1))where P_weak=e^-(i-1)/w×(i-1/w)^l-1/(l-1)!, E(V)=1/w∑_j=i+1^d f_j. Our theoretical analysis follows the conclusion provided in Lemma <ref>. In fact, Lemma <ref> only considers the bias caused by exponential decays after the items are recorded as hot items, ignoring the count loss before items are recorded. However, this count loss is strongly related to the distribution of the data stream and the order of the data arrival, so it's difficult to be theoretically analyzed. Besides, as we mentioned in Section <ref>, we set the number of buckets w=1 in this paper since we only track Top-k heavy hitters and k is a small constant.Therefore, we only use the result when w=1 in Lemma <ref> and we show the error bound of BGR in Theorem <ref>. Given a stream prefix Ŝ_t with t items randomized by BGR satisfying ϵ-LDP and there is a data structure ℋ𝒢 to store the Top-k items. Let v_i be the i^th hottest item, f_i be the real frequency of v_i, f̃_i be the final estimated frequency of v_i. We havePr[f_i-f̃_i ≤(√(2tlog (2/β))+α t)·e^ϵ+d-1/e^ϵ-1]≥(1-β)(1-1/2α(1-√(1-4P_weakE(V)/b-1)))where P_weak=(i-1)!(d-k)!/(d-1)!(i-k)!, E(V)=∑_j=i+1^d f_j, α and β are small positive numbers with α,β∈(0,1).We assume that f̂_i is the frequency of noisy data recorded in ℋ𝒢 according to the ED strategy.Meanwhile, due to the ED strategy introducing additional errors during the recording of f̂_i, the frequency used for debiasing by the GRR mechanism before publication is denoted as f̅_i. Then the error bound of final debiased frequency f̃_i compared to f_i can be obtained by combining the error bounds of f̂_i - f̅_i and f_i-f̂_i-tq/p-q. The detailed proof is deferred to Appendix <ref>. Problems with Existing LDP mechanisms. Theorem <ref> shows that the error bound of BGR grows proportionally to the size of the data domain d. While BGR is sufficient for solving the task of finding Top-k items in streaming data with a small data domain, it can inevitably fall into the dilemma that the error is too large when dealing with a large data domain. 
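To make this dependence on d concrete, one can plug numbers into the GRR variance formula from the preliminaries; the snippet below (a purely illustrative calculation of ours) prints the standard deviation of a single debiased count for a fixed number of reports.

import math

def grr_std(n: int, d: int, eps: float) -> float:
    # Standard deviation of a debiased GRR count: sqrt(n * (d - 2 + e^eps) / (e^eps - 1)^2).
    return math.sqrt(n * (d - 2 + math.exp(eps)) / (math.exp(eps) - 1) ** 2)

n, eps = 100_000, 1.0
for d in (100, 1_000, 41_270):
    print(f"d = {d:>6}: std is about {grr_std(n, d, eps):,.0f}")

With n = 100,000 reports and ϵ = 1, the noise standard deviation already reaches the order of tens of thousands once d approaches the domain sizes of the real-world datasets used later, which is on the same order as the total stream length itself.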
From the proof of Theorem <ref>, we can find that the excessive error caused by the large data domain mainly comes from the randomization process of the GRR. Several LDP mechanisms have been proposed to address data randomization in large data domains. However, directly integrating these mechanisms with HeavyGuardian is still problematic in practice. The core concept behind these mechanisms revolves around mapping data from the large data domain to a smaller data domain using techniques such as hash functions <cit.>, Hadamard matrix encoding <cit.>, or Bloom filter encoding <cit.>. Subsequently, data is randomized within this reduced data domain.There are several issues with these approaches. Firstly, decoding a single randomized data on the server sideentails an exhaustive scan of the entire data domain, which becomes computationally expensive for large data domains.Furthermore, this approach implies that the server must store the entirety of the data domain, which may contradict the requirement for bounded memory consumption on the server side.Additionally, these mechanisms introduce collisions when decoding randomized data for analysis. While such collisions are typically manageable in general frequency estimation tasks due to their uniform distribution, they can render strategies like the ED strategy and other space-saving techniques unusable. Assuming that data is mapped from a large domain of size d to a smaller data domain of size g, the average number of collision data generated by decoding a data point is d/g. In essence, the arrival density of an item directly impacts its potential to be recorded within the data structure as a hot item. If decoded data is mixed with d/g-1 different data points, the true hot item may lose its advantage in being recorded within ℋ𝒢. In scenarios where the domain size d is extremely large, such that d/g surpasses the size k of ℋ𝒢, the entire scheme becomes untenable, and all data points are indiscriminately recorded with equal probability. Consequently, it is desirable to develop novel LDP mechanisms capable of effectively randomizing data within large data domains and addressing challenges posed by the dynamically changing hot/cold items while optimizing the performance of HeavyGuardian.§ ADVANCED LDP MECHANISM DESIGNS In this section, we propose three novel advanced schemes to address the aforementioned problem in BGR by designing new randomization methods, which are outlined in Figure <ref>.§.§ DSR (Domain-Shrinkage Randomization) Tasks involving heavy hitter estimation in streams often assume that the streaming data follows a Zipf distribution <cit.>. This assumption aligns well with the distribution observed in various real-world scenarios, such as purchased goods and popular songs. In these contexts, the data domain predominantly consists of a few frequently occurring hot items, while most items are relatively rare or never appear. However, the GRR mechanism in BGR randomizes a large number of hot items to these rare items for ensuring LDP, which leads to poor performance of the ED strategy.Furthermore, the protection of these rare items is critical since they often contain highly sensitive information.For instance, an individual might not be concerned about others knowing they have watched popular movies but may be apprehensive about revealing their interest in niche films, as it could inadvertently expose their personal preferences and hobbies. 
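The skew assumed above can be checked directly; the following sketch (illustrative only, using numpy's Zipf sampler) measures how much of a synthetic stream's mass falls on its 20 hottest items and how much of the domain never appears at all.

import numpy as np

rng = np.random.default_rng(0)
domain_size, n, k = 10_000, 100_000, 20

stream = rng.zipf(a=1.5, size=n) % domain_size     # fold the unbounded Zipf tail into the domain
counts = np.bincount(stream, minlength=domain_size)

top_k_mass = np.sort(counts)[-k:].sum() / n
never_seen = np.mean(counts == 0)
print(f"share of the stream held by the {k} hottest items: {top_k_mass:.1%}")
print(f"share of the domain that never appears:            {never_seen:.1%}")

On such a stream the hottest handful of items account for the bulk of all reports while the overwhelming majority of the domain is never observed, which is exactly the regime in which spending privacy budget uniformly over all d items is wasteful.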
It is based on these observations that we have designed our advanced algorithm, DSR.Specifically, we refer to the items recorded in ℋ𝒢 as the hot items, and the items not in ℋ𝒢 as the cold items.We observe that the ED strategy of ℋ𝒢 does not need to know the specific value of the cold items in most cases. It only needs to reduce the count of the KV pair with the lowest frequency in ℋ𝒢 by 1 with a certain probability when it receives a cold item. The ED strategy needs to know the specific value of the cold item to replace the KV pair in ℋ𝒢 only when the weakest KV pair (with the lowest frequency) is going to be evicted. A direct idea is to represent all cold items as “”, and randomize the data on the domain {ℋ𝒢.C}∪{}. When the weakest KV pair in ℋ𝒢 is about to be evicted, it changes back to BGR to randomize the data on the entire domain. In this way, the size of the data domain can be reduced from d to k+1 when there is no KV pair in ℋ𝒢 going to be replaced, which alleviates the low utility caused by a large domain.Algorithms. The Randomize Algorithm of DSR is presented in Algorithm <ref>. In the general case, the user randomizes data on the shrinking domain ℋ𝒢.C∪. If the server receives a "", it reduces the count of the weakest KV pair by 1 with a certain probability. However, this approach poses a challenge when the count of the weakest KV pair reduces to 0, as it becomes uncertain which cold item should replace the weakest KV pair. Furthermore, requiring the user to re-randomize the data across the entire domain can potentially violate ϵ-Local Differential Privacy (ϵ-LDP). To solve this problem, DSR requires users to switch to BGR for randomization on the entire data domain when the count of the weakest KV pair reaches 1 or less. In this case, as long as the count of the weakest KV pair is reduced by 1, it can be replaced by a new KV pair with a cold item directly. Users can subsequently switch back to randomizing data on the reduced domain once the new KV pair stabilizes (i.e., reaches a count >1).The Insert and Response algorithms are shown in Algorithms <ref> and <ref>, respectively.Due to the switch between two mechanisms with different parameters in the Randomize algorithm, a complex debiasing process is initiated during the insertion and response phases. Each switch between mechanisms necessitates debiasing of all the counts of KV pairs stored in ℋ𝒢 using the debiasing formula of the current mechanism. To prevent redundant debiasing of cumulative counts, it is imperative to multiply all the counts by the denominator of the debiasing formula of the new mechanism. For the sake of readability, the debiasing functionsin the Insert algorithm andin the Response algorithm are deferred to Appendix <ref>.Theoretical Analysis. Theoretical analysis demonstrates that the error bound of DSR in the worst-case scenario aligns with that of BGR, as illustrated in Theorem <ref>. This can be attributed to the frequent replacement of the weakest KV pair for certain data distributions, compelling users to randomize data over the entire data domain for the majority of instances. However, DSR's improvement over BGR is expected to be more substantial for datasets exhibiting a more concentrated data distribution. §.§ BDR (Budget-Division Randomization)We present a novel scheme, BDR, that further enhances accuracy beyond DSR. Although DSR demonstrates improvement over BGR, it still predominantly randomizes data similarly to BGR when there are frequent changes to the items in ℋ𝒢. 
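For reference, the switching behaviour of DSR described above can be written down in a few lines (an illustrative Python sketch with names of our own choosing; the cold-item placeholder, whose symbol is not rendered in the extracted text, is written as the string BOT). It randomizes over the reduced domain formed by the k tracked items plus the placeholder in the common case, and falls back to full-domain GRR only while the weakest tracked item is unstable.

import math
import random

BOT = "BOT"  # stands in for the cold-item placeholder

def grr(v, domain, eps):
    # Plain GRR over an arbitrary finite domain.
    d = len(domain)
    p = math.exp(eps) / (math.exp(eps) + d - 1)
    if random.random() < p:
        return v
    return random.choice([x for x in domain if x != v])

def dsr_randomize(v, hot_items, full_domain, weakest_count, eps):
    # User side of DSR (sketch). hot_items are the k IDs currently tracked in
    # the heavy part; weakest_count is the count of the weakest tracked item.
    if weakest_count <= 1:
        # A tracked item may be evicted soon: the server needs a concrete cold
        # item, so fall back to BGR-style GRR over the whole domain.
        return grr(v, list(full_domain), eps)
    # Common case: collapse every cold item to BOT and randomize over k + 1 values.
    reduced = list(hot_items) + [BOT]
    x = v if v in hot_items else BOT
    return grr(x, reduced, eps)

A report of BOT only triggers the decay step of the ED strategy on the server, while full-domain reports are handled as in BGR.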
Additionally, the complexity of debiasing is increased due to the transition between two randomization mechanisms with distinct parameters.Since the current cold value cannot be randomized and sent repeatedly, resulting in the waste of the privacy budget while awaiting new cold items. To address this problem, we designed a budget-division-based scheme (BDR) that efficiently avoids switching between different randomization mechanisms and mixing randomized data from different output data domains. Besides, we observe that the hot items stored by ℋ𝒢 after initialization may not be true hot items. Through adjustments in the allocation of the privacy budget, BDR reduces the impact of the initial ℋ𝒢 on the final result, with the probability of k/e^ϵ+k that any other item be randomized to the current "hot items".We divide the privacy budget into two parts and run three sub-randomization mechanisms ℳ_judge, ℳ_hot, and ℳ_cold. Specifically, the ℳ_judge mechanism is used to randomize whether the data is a hot item. If the ℳ_judge mechanism determines that the data is a hot item, the ℳ_hot mechanism is used to randomize the data in the data domain covered by items recorded in ℋ𝒢. The ℳ_cold mechanism randomizes the items determined to be cold by the ℳ_judge mechanism when an item in ℋ𝒢 is about to be evicted. We show the overall flow of the Randomize algorithm in Algorithm <ref>, and the ℳ_judge, ℳ_hot, and ℳ_cold mechanisms in Algorithm <ref>, Algorithm <ref>, and Algorithm <ref>, respectively.Algorithms. At timestamp t, the user obtains the current raw data v_i^t, which is a hot item or a cold item. Note that the server can write the currently recorded hot items to a bulletin board in real time or the users can obtain the set of hot items from the response module at any time. Therefore, users can always know the current hot items and cold items when randomizing their data. Firstly, the user randomizes whether v_i^t is a hot item using the ℳ_judge mechanism, which is a binary flip. The error introduced by the ℳ_judge mechanism is independent of the size of the data domain. If the ℳ_judge mechanism determines that v_i^t is a hot item, then v_i^t needs to be randomized on the data domain covered by items recorded in ℋ𝒢 with the ℳ_hot mechanism.If v_i^t is a hot item, ℳ_hot mechanism randomizes it in the data domain covered by items recorded in ℋ𝒢 as the general GRR. If v_i^t is actually a cold item, ℳ_hot mechanism uniformly and randomly maps it to any item contained in ℋ𝒢. Otherwise, the user sends “” to the server if the ℳ_judge mechanism determines that v_i^t is a cold item.We consider a special case where the ℳ_judge mechanism determines that v_i^t is a cold item, but the count of the weakest KV pair in ℋ𝒢 is reduced to 0 by the ED strategy. Then the server would need this cold value to replace the item in ℋ𝒢.Therefore, we provide the ℳ_cold mechanism, similar to the ℳ_hot mechanism, randomizing the data in the data domain covered by the cold items. When the user observes that the count of the weakest KV pair in ℋ𝒢 is equal to or smaller than 1, the user uses ℳ_cold mechanism to randomize v_i^t and then sends it to the server when v_i^t is determined to be cold. 
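The user-side logic just described can be summarized by the following sketch (illustrative Python of ours; grr denotes a plain GRR randomizer, BOT stands in for the paper's cold placeholder, and the total budget is split as ϵ = ϵ_1 + ϵ_2, with ℳ_judge a binary randomized response under ϵ_1 and ℳ_hot, ℳ_cold GRR instances under ϵ_2 over the hot and cold sub-domains).

import math
import random

BOT = "BOT"  # report meaning "my item is cold", sent when no replacement is needed

def grr(v, domain, eps):
    d = len(domain)
    p = math.exp(eps) / (math.exp(eps) + d - 1)
    if random.random() < p:
        return v
    return random.choice([x for x in domain if x != v])

def m_judge(is_hot, eps1):
    # Binary randomized response on the hot/cold flag (budget eps1).
    p1 = math.exp(eps1) / (math.exp(eps1) + 1)
    return is_hot if random.random() < p1 else (not is_hot)

def bdr_randomize(v, hot_items, full_domain, weakest_count, eps1, eps2):
    # User side of BDR (sketch); eps1 + eps2 equals the total budget eps.
    hot_items = list(hot_items)
    if m_judge(v in hot_items, eps1):
        # M_hot: GRR over the k tracked items; an item that is actually cold is
        # first mapped to a uniformly random tracked item.
        x = v if v in hot_items else random.choice(hot_items)
        return grr(x, hot_items, eps2)
    if weakest_count <= 1:
        # M_cold: the weakest tracked item may be evicted, so a concrete cold item
        # is reported, randomized over the cold sub-domain (mirroring M_hot; our reading).
        cold_items = [x for x in full_domain if x not in hot_items]
        x = v if v in cold_items else random.choice(cold_items)
        return grr(x, cold_items, eps2)
    # Otherwise a bare cold signal is enough for the ED decay step.
    return BOT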
The ℋ𝒢 has a high probability of replacing the weakest item with a cold item in this case.Note that the privacy budget consumed by ℳ_cold is the remaining budget ϵ_2 at timestamp t, and the total privacy budget for v_i^t is still limited to ϵ.Figure <ref> shows an example at 6 timestamps to illustrate the randomization process.Next, we discuss how the response module on the server debiases the counts of hot items stored in ℋ𝒢. Denote p_1 as the probability e^ϵ_1/e^ϵ_1+1, q_1 as the probability 1/e^ϵ_1+1, p_2 as the probability e^ϵ_2/e^ϵ_2+k-1, q_2 as the probability 1/e^ϵ_2+k-1. Let num denote the total number of data received by the server from the beginning of the statistics to the current timestamp, and γ_h denote the proportion of hot items. Let f̅_v be the noisy recorded count of item v, then the debiased estimation result f̃_v is calculated asf̃_v=f̅_v-γ_h· num(p_1q_2-q_1/k)-num· q_1/k/p_1(p_2-q_2)Here, γ_h can be obtained from the warm-up round or the prior knowledge of data distribution, which is discussed in detail in Section <ref>. We show the details of the Response algorithm in Algorithm <ref>. Besides, we omit the details of the Insert algorithm here since it is the same as that of BGR shown in Algorithm <ref>. Theoretical Analysis. We show that BDR satisfies ϵ-LDP as below. BDR satisfies ϵ-LDP. Firstly, ℳ_judge satisfies ϵ-LDP since p_1/q_1=e^ϵ_1. Secondly, M_hot satisfies ϵ_2-LDP since p_2k≤ p_2/q_2=e^ϵ_2. Similarly, M_cold also satisfies ϵ_2-LDP. Therefore, BDR satisfies (ϵ_1+ϵ_2)-LDP. The detailed proof is deferred to Appendix <ref>.Then we show the error bound of BDR in Theorem <ref>. Given a stream prefix Ŝ_t with t items randomized by BDR satisfying ϵ-LDP and there is a data structure ℋ𝒢 to store the Top-k items. Let v_i be the i^th hottest item, f_i be the real frequency of v_i, f̃_i be the final estimated frequency of v_i. We havePr[f_i-f̃_̃ĩ ≤(3√(tlog(3/β)/2)+α t)·(e^ϵ_1+1)(e^ϵ_2+k-1)/e^ϵ_1(e^ϵ_2-1)] ≥(1-β)(1-1/2α(1-√(1-4P_weakE(V)/b-1)))where P_weak=(i-1)!(d-k)!/(d-1)!(i-k)!, E(V)=∑_j=i+1^d f_j, α and β are small positive numbers with α,β∈(0,1). The approach of the proof is similar to that of Theorem <ref>, the error bound of final debiased frequency f̃_i compared to f_i can be obtained by combining the error bounds of f̂_i - f̅_i and f_i-f̂_i-N_hp_1q_2-(t-N_h)·q_1/k/p_1(p_2-q_2), where N_h is the number of hot items and N_h≤ t. The detailed proof is deferred to Appendix <ref>.The result of Theorem <ref> shows that BDR significantly reduces the impact of the large data domain on the accuracy of the statistical results compared to BGR and DSR (Theorem <ref>).§.§ CNR (Cold-Nomination Randomization) In BDR, we find that the privacy budget ϵ_2 is unexploited when the data is determined to be a cold item and there is no item in ℋ𝒢 that is about to be evicted, which can be observed in Figure <ref>. Besides, there is a light part in the original data structure of ℋ𝒢 used to store the counts of cold items (see Figure <ref>). The length of this part λ_l is set to 0 in BGR, DSR, and BDR.Driven by these observations, we propose a new scheme CNR, which uses these two idle resources to further improve the accuracy over BDR.Algorithms. Algorithm <ref> shows the Randomize algorithm of CNR, similar to that of BDR. All the data determined as cold items by ℳ_judge mechanism are randomized to specific cold items on the cold domain using ℳ_cold mechanism, rather than calling ℳ_cold mechanism only when there is a hot item to be evicted. 
Here, ℳ_judge, ℳ_hot, and ℳ_cold are the same as Algorithms <ref>, <ref>, and <ref> in BDR. When inserting the randomized items into ℋ𝒢, the cold items that cannot be inserted into the heavy part are inserted into the light part following the ED strategy. Then the light part helps to provide a more accurate potential hot item to become a new hot item when a value in the heavy part is about to be evicted. Note that the light part only provides selected cold items, and its count is set to 1 when a cold item enters the heavy part, just the same as BDR. Thus, the debiasing formula of the counts in the heavy part is the same as that of the BDR, avoiding debiasing the randomized counts from different output domains like DSR. We show the Insert in Algorithm <ref>, and the Response is the same as Algorithm <ref>. Theoretical Analysis. Firstly, CNR still satisfies ϵ-LDP, and the privacy budget consumed by randomizing data is ϵ_1+ϵ_2=ϵ. Then, the error bound of counts recorded in the heavy part is the same as Theorem <ref> shown in BDR, since CNR only provides a better cold item to become a new hot item when there is an item to be evicted. Note that all theoretical analyses for the error bound of the counts we provide only consider the error of the recorded counts without considering whether the items are true hot items. Since the accuracy of the hot items tracked by the scheme is influenced by both initial ℋ𝒢 and data distributions, we evaluate it by conducting a comprehensive evaluation in Section <ref>.Besides, CNR has no specific requirement for the length of the light part λ_l, as long as it satisfies λ_l>0. The longer light part can provide more accurate new hot items to the heavy part. The setting of λ_h can refer to the original ℋ𝒢 <cit.>, or set a small constant according to the specific requirements. In our experiments, setting λ_l=5 for finding Top-20 items on a concentrated data distribution can observe a significant improvement for small ϵ. Furthermore, the counters in the light part of ℋ𝒢 are tailored for cold items, and the counter size is very small, e.g., 4 bits. Therefore, CNR does not increase too much additional memory consumption compared to the other schemes and still meets high memory efficiency.§ EXPERIMENTAL EVALUATION In this section, we design experiments to evaluate our proposed schemes. The evaluation mainly includes four aspects: (1) the accuracy of the heavy hitters via the proposed schemes; (2) the accuracy achieved by the proposed schemes compared with the baselines; (3) the impact of the key parameters on the accuracy of the proposed schemes; (4) the memory size consumed by the proposed schemes compared with the baselines. Towards these goals, we conduct experiments on both synthetic and real-world datasets, and simulate to collect streaming data from users at continuous timestamps for heavy hitter analysis. Besides, we introduce different metrics to evaluate the accuracy of the results from three different aspects.To better guide the application of the schemes in practice, we also conduct supplementary experiments on more datasets and test the computation and communication overheads. Please refer to Appendix <ref> for details. §.§ SetupDatasets. We run experiments on the following datasets: * Several synthetic datasets are generated with two different distributions and three domain sizes. 
One kind of datasets are generated by randomly sampling data from a Normal distribution with variance σ=5, and others are generated from an Exponential distribution with variance σ=10. There are n=100,000 values in each dataset. * Retail dataset <cit.> contains the retail market basket data from an anonymous Belgian retail store with around 0.9 million values and 16k distinct items.* Kosarak dataset <cit.> contains the click streams on a Hungarian website, with around 8 million values and 42k URLs.* Webdocs dataset <cit.> is constructed from a collection of web HTML documents, which comprises around 300 million records, and 5.26 million distinct items. Metrics. In reality, various applications focus on different aspects of the heavy hitter estimation results. Therefore, we have to comprehensively evaluate the quality of the results from three aspects: (1) how accurately that ℋ𝒢 captures the actual heavy hitters; (2) how accurately that the ordering of the heavy hitters in ℋ𝒢; (3) how accurately that ℋ𝒢 captures the actual counts of heavy hitters. We use the following three metrics to cover each aspect:Precision.It measures the accuracy of the actual heavy hitters captured by ℋ𝒢. It is the number of actual heavy hitters divided by the number of all items in ℋ𝒢, as given byPrecision=#Actual heavy hitters in ℋ𝒢/# Heavy hitters. Normalized Discounted Cumulative Gain (NDCG). It measures the ordering quality of the heavy hitters captured by ℋ𝒢, which is a common effectiveness in recommendation systems and other related applications. NDCG is between 0 and 1 for all k, and the closer it is to 1 means the ordering quality of ℋ𝒢 is higher. The formulas for calculating NDCG is deferred to Appendix <ref>.Average Absolute Error (AAE). It measures the error of the counts of the actual Top-k items with their estimated counts recorded in ℋ𝒢, which can be calculated as AAE_k=1/k∑_i=1^k |f_actual(v_i)-f_estimated(v_i)|.If an actual hot item is not recorded by ℋ𝒢, its AAE is calculated by setting the estimated count as 0. For consistent and fair comparisons, we post-process all counts recorded by ℋ𝒢 to 0 when calculating AAE. All results in experiments are averaged with 20 repeats. §.§ Implementation Details We fully implemented our schemes and all baselines in Java to provide unified concrete performance comparisons. For all schemes, we separately implement the server and the client side, and the perturb data for communication are serialized to `byte[]'. This makes our implementation easier to be deployed in practice, in which the server and clients would communicate via network channels using byte strings.In our experiments, focus more on the effectiveness of our schemes so that we run the server and the client on a single process.All experiments are run on Ubuntu 20.04 with 96 Intel Xeon 2.20 GHz CPU and 256 GB RAM. Our source code is available for public request. Besides, we have some improvements compared with the original implementation in our re-implementation for both LDP mechanisms and original HeavyGuardian. More implementation details are deferred to Appendix <ref>.§.§ Analysis of Experimental ResultsComparison of Accuracy. We compare the accuracy of the baseline scheme and three advanced schemes with the non-private HeavyGuardian and two LDP mechanisms: Generalized Randomized Response (GRR) and Hadamard Response (HR) (HR performs the best in our evaluation, see Figure <ref> in Appendix <ref>). 
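Throughout the comparison below, the three metrics are computed as in the following reference sketch (our own code; since the paper defers its exact NDCG formula to the appendix, the standard log-discounted, binary-relevance form is used here as an assumption).

import math

def precision(true_topk, est_topk):
    # Fraction of the reported heavy hitters that are actual heavy hitters.
    return len(set(true_topk) & set(est_topk)) / len(est_topk)

def ndcg(true_topk, est_topk):
    # Binary-relevance NDCG of the reported ranking (assumed form).
    hits = set(true_topk)
    dcg = sum((1.0 if item in hits else 0.0) / math.log2(i + 2) for i, item in enumerate(est_topk))
    idcg = sum(1.0 / math.log2(i + 2) for i in range(len(true_topk)))
    return dcg / idcg

def aae(true_counts, est_counts, true_topk):
    # Average absolute error over the actual Top-k items; items missing from the
    # structure are treated as having an estimated count of 0.
    return sum(abs(true_counts[v] - est_counts.get(v, 0)) for v in true_topk) / len(true_topk)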
We evaluate all schemes on the Synthetic, Retail, Kosarak, and Webdocs datasets. The results for the three metrics, NDCG, Precision, and AAE, are shown in Figure <ref>, Figure <ref>, and Figure <ref>, respectively. Since running GRR and HR exceeds the computing or storage capabilities of our server, we only show the results of our schemes on the Webdocs dataset. In each figure, we vary the privacy budget ϵ within a range of [0.5, 5]. All schemes involve a warm-up stage for fairness of the comparison. Firstly, we observe that the accuracy of the proposed schemes BGR, DSR, BDR, and CNR improves sequentially. The improvement of DSR compared with BGR is more obvious as ϵ increases, and the advantage of CNR over BDR is more significant as ϵ decreases. We think the reason is that when ϵ is large, i.e., ϵ>1, the randomized hot items are still concentrated and randomization over the entire domain, which provides specific cold items to replace the weakest hot items in ℋ𝒢, happens less often; thus the improvement achieved by DSR is relatively significant. When ϵ is small, i.e., ϵ<1, the distribution of the randomized data is relatively uniform, so the weakest hot item in ℋ𝒢 always needs to be replaced. In this case, the advantage of CNR over BDR in providing more potential cold items to enter ℋ𝒢 is more obvious. Besides, we find that these observations are not pronounced on two of the real-world datasets. The reason is that those real-world datasets have large data domains and irregular data distributions. Therefore, ℋ𝒢 needs to replace items frequently even if ϵ is relatively large. This means that DSR always randomizes the data over the entire domain in the same way as BGR. In addition, the large data domain can also lead to low accuracy in the light part of ℋ𝒢, so the performance of CNR is similar to BDR in this case. Secondly, compared with the non-private HeavyGuardian and memory-unlimited LDP randomization mechanisms, BDR and CNR outperform GRR on all datasets in terms of all metrics when ϵ<3. Moreover, their accuracy on the synthetic dataset is close to HR, and their accuracy on all datasets is close to non-private HeavyGuardian. In all three datasets, BDR and CNR are set to ϵ_1/ϵ_2=0.5, and their parameter γ_h is calculated during the warm-up stage. We also observe that the performance of BGR and DSR gradually dominates that of GRR as the size of the data domain increases when ϵ<3.5. However, their accuracy is much lower than that of BDR and CNR when the domain size is extremely large. Finally, we observe that the NDCG of all schemes is slightly lower than their Precision on all datasets. The main reason is that NDCG considers the ordering weights of the hit items in addition to whether the true hot items are hit or not. Besides, the comparison results of all schemes in terms of AAE on all datasets are consistent with the comparison of NDCG and Precision: the AAE of the statistical results of BGR, DSR, BDR, and CNR decreases in that order.
The results of all metrics on more synthetic datasets with different domain sizes are deferred to Appendix <ref>. Firstly, Figure <ref> shows the impact of the allocation of the privacy budget ϵ on the accuracy of BDR and CNR. We observe that allocating less privacy budget to ϵ_1 and more to ϵ_2 yields higher accuracy of the statistical results for BDR and CNR. The improvement of NDCG is significant when ϵ_1/ϵ_2 decreases from 2/1 to 1/9, and the increase slows down after ϵ_1/ϵ_2 becomes less than 1/9. We think the reason is that a hot item recorded in ℋ𝒢 is randomized to a cold item with a greater probability when ϵ_1 is small, and the amount of data belonging to hot items is larger than that belonging to cold items, which makes the items in ℋ𝒢 easier to evict. Meanwhile, increasing ϵ_2 can improve the correctness of the order of the items recorded in ℋ𝒢. Therefore, reducing ϵ_1/ϵ_2 can increase the probability that hot items with a small count are replaced by other cold items, so that the real hot items can occupy ℋ𝒢 faster. This is also consistent with the experimental results of the original ℋ𝒢 (Figure 4(a), <cit.>): the accuracy of the result increases when the parameter b is reduced to make it easier for a new item to enter ℋ𝒢, but the improvement is no longer obvious once b is reduced to a certain small value. Therefore, we recommend setting ϵ_1/ϵ_2=1/9 to get near-optimal accuracy in the actual deployment of BDR and CNR. We also conduct the evaluations on synthetic datasets with different domain sizes, and obtain observations consistent with the above. The results are shown in Figure <ref>-Figure <ref> in Appendix <ref>. Moreover, we find that increasing the domain size has some impact on the accuracy of the schemes, but we can still improve the accuracy by adjusting the privacy budget allocation. Then Figure <ref> shows the impact of the parameter γ_h on the accuracy of BDR and CNR. We calculate the exact γ_h≃ 0.92. As a debiasing parameter, γ_h directly affects the counts of the statistical result, so the impact of γ_h can be clearly observed from the AAE of the result. However, the indirect impact on NDCG is not obvious, and the lines in the figure fluctuate. An interesting phenomenon can be observed from the AAE of the results: a more accurate γ_h does not necessarily give a more accurate count in the result. The reason is that the ED strategy continuously reduces the count of the weakest item with a certain probability, which causes the statistical results to be underestimated. According to the debiasing Equation <ref>, reducing γ_h can cause the debiased result to be over-estimated, thereby offsetting part of the bias introduced by the ED strategy. Finally, Figure <ref> shows the impact of the warm-up stage on the accuracy of the baseline BGR and the three proposed schemes. We compared their accuracy using five different datasets for the warm-up stage. The five datasets include a uniformly random dataset with the size of 50, a dataset with the size of 50 and a distribution skewed from the true normal distribution, and two datasets with the true normal distribution with sizes of 50 and 500. We can observe that their accuracy increases as the distribution of the dataset used in the warm-up stage approaches the true distribution and as the size of the dataset increases. Specifically, BDR and CNR set ϵ_1/ϵ_2=0.5, and they are the least affected by the warm-up stage among all schemes.
We think the reason is that the current cold items in BDR and CNR can enter ℋ𝒢 more easily to become new hot items, which reduces the impact of the accuracy of the initial ℋ𝒢 on the final statistical result. Similar observations can be obtained on the real-world datasets, and the results are shown in Figure <ref> in Appendix <ref>. Comparison of Memory Consumption. We then evaluate the total memory size consumed by all schemes when tracking Top-20 heavy hitters on the four different datasets. We present the results in Table <ref>. The proposed schemes show a significant advantage in memory consumption when the data domain is large, such as in the Kosarak and Webdocs datasets. We can observe that the memory size consumed by GRR and HR increases linearly as the domain size d increases. In contrast, the memory consumed by all space-saving schemes is only related to the number of tracked heavy hitters k, and k is usually much smaller than d. Note that GRR and HR do not have memory consumption results available for the Webdocs dataset since the computation or memory requirements of these schemes exceeded the capacity of the server used for testing. Additionally, we conduct tests to evaluate the computation and communication overhead of all schemes. The detailed results of these tests can be found in Appendix <ref>.§ DISCUSSION In this section, we discuss more details about the practical implementation and the potential extensions of these schemes. §.§ System Parameters & Implementations Warm-up Stage. In practice, our framework along with its theoretical designs and analyses is applied to a steady state where ℋ𝒢 has been filled during previous timestamps, rather than to the cold-start scenario where ℋ𝒢 is empty. Therefore, to simulate such a steady state where ℋ𝒢 is properly warm-started, all the proposed schemes include a warm-up round at the beginning of the statistics. Note that although CNR needs to use the light part of the ℋ𝒢 structure, only the heavy part should be filled in the warm-up stage, as in the other schemes. The data for the warm-up stage can be a prior dataset stored on the server or data voluntarily contributed by users in the first round of the statistics. There is no specific requirement on the amount of data in the warm-up stage. The only requirement is that the data should at least be able to fill ℋ𝒢. Besides, the closer the distribution of the prior dataset is to the real data distribution, or the larger the amount of data that users voluntarily contribute, the higher the accuracy of ℋ𝒢 in the subsequent statistics. Parameter γ_h in BDR and CNR. The debiasing formula for both BDR and CNR contains a parameter γ_h, which is the proportion of data belonging to hot items in the stream. The server actually does not know the specific value of γ_h, but it can be theoretically calculated based on prior knowledge about the data distribution. If the server has no prior knowledge about the data distribution, γ_h can also be statistically obtained from the initial ℋ𝒢 after the warm-up stage. Certainly, γ_h obtained by either of the above two methods inevitably introduces additional errors to the estimated results, and the impact is evaluated in the experiments.
However, the current design of the schemes cannot avoid it, and we leave it for future work. Privacy Parameters ϵ_1, ϵ_2 in BDR and CNR. Next, we analyze how to split the privacy budget ϵ into ϵ_1 and ϵ_2 in BDR and CNR, based on insights from our theoretical and experimental results. Our theoretical analysis in Theorem <ref> provides an error bound for estimating the count of hot items in BDR, which is equally applicable to CNR. It shows that allocating a larger portion of the privacy budget to ϵ_2 leads to a reduced error bound, which is further corroborated by our experimental results in Figure <ref>(b)(d)(f)(h). In fact, the count error only focuses on the accuracy of counts for hot items already identified by the data structure. This calculation excludes errors coming from the misclassification of hot items due to randomization with ϵ_1. However, the estimation is complex when considering the impact of ϵ_1 and ϵ_2 on the precision of the data structure ℋ𝒢 in capturing the true hot items. Increasing the privacy budget allocated to ϵ_1 does reduce the probability of determining hot data as cold and simultaneously enhances the probability that currently recorded items remain within ℋ𝒢. Nevertheless, this does not necessarily yield improved precision in capturing items within ℋ𝒢. The setting of parameter b in ℋ𝒢 <cit.> faces the same dilemma: increasing b will reduce the probability of the current cold values entering ℋ𝒢, and vice versa. Multiple factors collaboratively impact the precision of ℋ𝒢 in capturing hot items. For instance, when the initial ℋ𝒢 captures inaccurate hot items, a higher probability of eviction among recorded items within ℋ𝒢 can lead to improved precision; if the true hot items are concentrated in the first half of the data stream, a higher probability of retention for items within ℋ𝒢 can result in higher precision. It can also be observed from the experimental results that the Precision and NDCG on some data streams are not as regular as the AAE as ϵ_1 and ϵ_2 change; i.e., when hot items are distributed in a more dispersed manner under the Exponential distribution as opposed to the Normal distribution, the NDCG depicted in Figure <ref> shows that allocating a smaller fraction to ϵ_1 does not confer any discernible advantage. In <cit.>, an empirical value b=1.08 is provided. Based on our comprehensive evaluations, we suggest that setting ϵ_1/ϵ_2=0.5 achieves promising accuracy in most scenarios. Guidance on Scheme Selection. In this paper, we introduce three enhanced schemes, each making distinct trade-offs between accuracy, computational overhead, and memory usage. According to our theoretical and experimental results, we summarize the three advanced designs and the baseline in terms of accuracy, computation overhead, and memory consumption in Table <ref>. From the baseline BGR to DSR, BDR, and CNR, there is a sequential improvement in the accuracy of the results. Meanwhile, this enhancement comes at the cost of increased computational complexity on the client side or memory consumption on the server side. In practical deployment, we recommend selecting a scheme based on the specific performance requirements of the task.§.§ Extensions
w-Event-Level and User-Level Privacy. While the schemes proposed in this paper offer event-level privacy guarantees, they possess the flexibility to be extended to offer enhanced privacy protection, including w-event-level privacy and user-level privacy.
Specifically, w-event-level privacy ensures ϵ-LDP within any sliding window of size w, while user-level privacy guarantees ϵ-LDP for all streaming data contributed by an individual user. To achieve w-event-level privacy and user-level privacy for finite data streams, we could distribute the privacy budget evenly across each timestamp. This entails changing the privacy budget used for randomizing each streaming data point from ϵ to ϵ/w and ϵ/l, respectively, where l represents the length of the finite data stream. We have to mention that while there are proposed methods for privacy budget allocation that outperform the average allocation approach <cit.>, applying them to our proposed schemes presents certain challenges. The primary obstacle lies in the variation of the privacy budgets used to randomize each streaming data point, which can impede the server from debiasing the accumulated counts in the heavy list. This complication also obstructs the application of the schemes to provide user-level privacy for infinite data streams. An intuitive approach to address this issue is for the server to independently debias each incoming streaming data point using the privacy budget transmitted concurrently by the user. However, this approach may introduce increased computational complexity on the server's end and heightened communication complexity for the user. We leave this challenge for future research and exploration. Other Tasks. Since the proposed framework HG-LDP focuses on the heavy hitter estimation task, only CNR involves the light part of the data structure ℋ𝒢 to store the counts of part of the cold items. When CNR extends its functionality to store the counts of all cold items in the light part as in <cit.>, it can also support other tasks supported in <cit.>, such as frequency estimation and frequency distribution estimation. It is essential to note that these tasks, even if functionally supported, encounter a challenge related to accuracy when randomizing within large data domains. The new LDP randomization mechanisms in this paper are designed by utilizing the characteristics of the heavy hitter task to ensure the accuracy of hot items only. We intend to delve deeper into this aspect as part of our future research efforts.§ RELATED WORK An extended Related Work is in Appendix <ref>. Differentially Private Data Stream Collection. The earliest studies in differential privacy for streaming data collection originate from continuous observation of private data <cit.>. Recent works on differentially private data stream collection mainly focus on Centralized Differential Privacy (CDP). Some works study how to publish the summation of the streaming data privately <cit.>. Some works study the release of correlated streaming data; for instance, <cit.> proposes a correlated Gaussian noise mechanism. Some recent works focus on data stream collection with Local Differential Privacy (LDP) <cit.>. Tracking Heavy Hitters in Data Streams. Mining streaming data faces three principal challenges: volume, velocity, and volatility <cit.>. The existing heavy hitter estimation algorithms for data streams can be divided into three classes: Counter-based algorithms, Quantile algorithms, and Sketch algorithms <cit.>. Counter-based algorithms track a subset of items in the stream, and they quickly determine whether and how to record each newly arriving item <cit.>. The Quantile algorithms <cit.> focus on finding the smallest item that dominates ϕ n items from the data stream.
Sketch algorithms <cit.> record items with a data structure that can be thought of as a linear projection of the input; hash functions are usually used to define this projection. However, the sketch algorithms involve a large number of hash operations, which cannot meet the timeliness requirements of streaming data. Besides, all items are recorded and additional information needs to be stored for retrieval, which leads to unnecessary memory consumption <cit.>. Our design is based on Counter-based algorithms with an extended setting where the streaming data is protected by LDP.§ CONCLUSION In this paper, we proposed a framework HG-LDP for tracking the Top-k heavy hitters on data streams at bounded memory expense, while providing rigorous LDP protection. A baseline and three advanced schemes with new LDP randomization mechanisms are designed within the framework. We implement all the proposed schemes and evaluate them on both synthetic and real-world datasets in terms of accuracy and memory consumption. The experimental results demonstrate that the proposed schemes achieve a satisfactory “accuracy-privacy-memory efficiency” tradeoff. For future work, we will extend the framework to be compatible with a more diverse selection of memory-efficient data structures as well as broader types of statistical tasks to enhance its flexibility. This work is supported by the National Key Research and Development Program of China under Grant 2021YFB3100300, and the National Natural Science Foundation of China under Grant U20A20178 and 62072395. Weiran Liu is supported in part by the Major Programs of the National Social Science Foundation of China under Grant 22&ZD147. Yuan Hong is supported in part by the National Science Foundation under Grants CNS-2308730, CNS-2302689, CNS-2319277, CMMI-2326341 and the Cisco Research Award. § APPENDICES §.§ LDP Mechanisms Optimal Local Hash. The Optimal Local Hash (OLH) mechanism <cit.> is designed for randomizing private values in a large domain. It maps the value with a randomly selected hash function to a new data domain of size g≪d before randomizing the value. The randomization method is the same as GRR, with p' and q' as follows: p' = e^ϵ/(e^ϵ + g - 1), q' = 1/(e^ϵ + g - 1). It can also use the generic method to estimate the count c̃_i, with p=p' and q=(1/g)p'+((g-1)/g)q'=1/g. When the size of the new data domain is g=e^ϵ+1, the variance of c̃_i is minimized as Var[c̃_i] = n·4e^ϵ/(e^ϵ-1)^2. Hadamard Response. The Hadamard Response (HR) mechanism <cit.> encodes private values with a K× K Hadamard matrix, where K=2^⌈log_2(d+1)⌉. Except for the first row of the matrix (all values are `1'), each other row corresponds to a value in the data domain. When encoding the i^th value in the data domain, the output value is randomly selected from the column indices that have `1' in the (i+1)^th row with probability p, and from the other indices (columns with `0') with probability q, where p=e^ϵ/(1+e^ϵ) and q=1/(1+e^ϵ). Then c̃_i can be calculated as c̃_i = 2(e^ϵ+1)/(e^ϵ-1)·(ĉ_i-n/2). The variance of the estimation result c̃_i is Var[c̃_i] = n·4(e^ϵ+1)^2/(e^ϵ-1)^2.§.§ Proof of Theorem <ref> We assume that f̂_i is the frequency of noisy data recorded in ℋ𝒢 according to the ED strategy in BGR. Meanwhile, the ED strategy introduces additional error when recording f̂_i, while the frequency actually recorded and used for debiasing by the GRR mechanism before publishing is f̅_i.
According to Lemma <ref>, we have the upper bound of f̂_i-f̅_i isPr[f̂_i - f̅_i≥α t]≤1/2α t(f̂_i-√(f̂_i^2-4P_weakE(V)/b-1))⇒ Pr[f̂_i ≤f̅_i + α t]≥ (1-1/2α t(f̂_i-√(f̂_i^2-4P_weakE(V)/b-1)))The distribution of f̂_i can be decomposed into Bin(f_i, p)+Bin(t-f_i, q). Denote Bin(f_i, p) as P_1 and Bin(t-f_i, q) as P_2, according to the hoeffding inequality, we havePr[f_i p-P_1≤ζ]≥ 1-e^-2ζ^2/f_i, andPr[(t-f_i) q-P_2≤ζ]≥ 1-e^-2ζ^2/t-f_iThen, according to Bonferroni inequality, we havePr[P_1+P_2≥ f_i p+(t-f_i)q-2ζ]≥ 1-e^-2ζ^2/f_i-e^-2ζ^2/t-f_i⇒ Pr[f̂_i≥ f_i p+(t-f_i)q-2ζ]≥ 1-e^-2ζ^2/f_i-e^-2ζ^2/t-f_i⇒ Pr[f̂_i-tq/p-q-f_i≥-2ζ/p-q]≥ 1-e^-2ζ^2/f_i-e^-2ζ^2/t-f_i⇒ Pr[f_i-f̅_i + α t-tq/p-q≤ f_i-f̂_i-tq/p-q≤2ζ/p-q]≥ (1-e^-2ζ^2/f_i-e^-2ζ^2/t-f_i)(1-f_i/2α t(1-√(1-4P_weakE(V)/f̂_i^2(b-1))))⇒ Pr[f_i-f̃_i≤2ζ+α t/p-q]≥ (1-2e^-2ζ^2/t)(1-1/2α(1-√(1-4P_weakE(V)/b-1)))⇒ Pr[f_i-f̃_i≤(√(2tlog (2/β))+α t)·e^ϵ+d-1/e^ϵ-1]≥(1-β)(1-1/2α(1-√(1-4P_weakE(V)/b-1)))where ζ=√(tlog (2/β)/2), P_weak=(i-1)!(d-k)!/(d-1)!(i-k)!, and E(V)=∑_j=i+1^d f_j.§.§ Proof of Theorem <ref>We assume that f̂_i is the frequency of noisy data recorded in ℋ𝒢 according to the ED strategy in BDR. Meanwhile, the ED strategy would introduce additional error when recording f̂_i, while the frequency actually recorded and used for debiasing by the randomization mechanism before publishing is f̅_i. According to Lemma <ref>, we have the upper bound of f̂_i-f̅_i isPr[f̂_i - f̅_i≥α t]≤1/2α t(f̂_i-√(f̂_i^2-4P_weakE(V)/b-1))⇒ Pr[f̂_i ≤f̅_i + α t]≥ (1-1/2α t(f̂_i-√(f̂_i^2-4P_weakE(V)/b-1)))The distribution of f̂_i can be decomposed into Bin(f_i,p_1p_2)+Bin(N_h-f_i,p_1q_2)+Bin(t-N_h,q_1/k), where N_h is the number of hot items, N_h≤ t. Denote Bin(f_i,p_1p_2) as P_1, Bin(N_h-f_i,p_1q_2) as P_2, and Bin(t-N_h,q_1/k) as P_3, according to the hoeffding inequality, we havePr[f_ip_1p_2-P_1≤ζ]≥ 1-e^-2ζ^2/f_i,Pr[(N_h-f_i)p_1q_2-P_2≤ζ]≥ 1-e^-2ζ^2/N_h-f_i, andPr[(t-N_h)·q_1/k-P_3≤ζ]≥ 1-e^-2ζ^2/t-N_h.Then, according to Bonferroni inequality, we havePr[f_ip_1p_2-P_1+(N_h-f_i)p_1q_2-P_2+(t-N_h)·q_1/k-P_3≤ 3ζ]≥ 1-e^-2ζ^2/f_i+1-e^-2ζ^2/N_h-f_i+1-e^-2ζ^2/t-N_h-2⇒ Pr[P_1+P_2+P3≥ f_ip_1p_2+(N_h-f_i)p_1q_2+(t-N_h)·q_1/k-3ζ]≥ 1-e^-2ζ^2/f_i-e^-2ζ^2/N_h-f_i-e^-2ζ^2/t-N_h⇒ Pr[f̂_i≥ f_ip_1p_2+(N_h-f_i)p_1q_2+(t-N_h)·q_1/k-3ζ]≥ 1-e^-2ζ^2/f_i-e^-2ζ^2/N_h-f_i-e^-2ζ^2/t-N_h⇒ Pr[f̂_i-N_hp_1q_2-(t-N_h)·q_1/k/p_1(p_2-q_2)-f_i≥-3ζ/p_1(p_2-q_2)]≥ 1-e^-2ζ^2/f_i-e^-2ζ^2/N_h-f_i-e^-2ζ^2/t-N_h⇒ Pr[f_i-f_i+α t-N_hp_1q_2-(t-N_h)·q_1/k/p_1(p_2-q_2)≤ f_i-f̂_i-N_hp_1q_2-(t-N_h)·q_1/k/p_1(p_2-q_2)≤3ζ/p_1(p_2-q_2)]≥ (1-e^-2ζ^2/f_i-e^-2ζ^2/N_h-f_i-e^-2ζ^2/t-N_h)· (1-f_i/2α t(1-√(1-4P_weakE(V)/f̂_̂î^2(b-1))))⇒ Pr[f_i-f̃_̃ĩ≤3ζ+α t/p_1(p_2-q_2)]≥ (1-3e^-2ζ^2/N_h)· (1-1/2α(1-√(1-4P_weakE(V)/b-1)))⇒ Pr[f_i-f̃_̃ĩ≤(3√(N_hlog(3/β)/2)+α t)·(e^ϵ_1+1)(e^ϵ_2+k-1)/e^ϵ_1(e^ϵ_2-1)]≥(1-β)(1-1/2α(1-√(1-4P_weakE(V)/b-1)))§.§ Proof of Theorem <ref>Denote v and v' as two raw data, o_1, o_2 and o_3 as the output of mechanisms, Ω_h as the domain of hot items, Ω_c as the domain of cold items. Firstly, ℳ_judge satisfies ϵ_1-LDP, we havePr[o=“Hot"|v∈Ω_h]/Pr[o=“Hot"|v'∈Ω_c]≤p_1/q_1=e^ϵ_1The same result can be proved when o=“Cold".Secondly, we prove that ℳ_hot mechanism satisfies ϵ_2-LDP. The probability ratio of v and v' to get the same randomized item o_2∈Ω_h isPr[o_2|v∈Ω_h]/Pr[o_2|v'∈Ω_c]≤p_2/1/k≤p_2/q_2=e^ϵ_2Similarly, the ℳ_cold mechanism satisfies Pr[o_3|v∈Ω_c]/Pr[o_3|v'∈Ω_h]≤ e^ϵ_2According to the composition theorem of DP <cit.>, BDR satisfies (ϵ_1+ϵ_2)-LDP at each timestamp where ϵ_1+ϵ_2=ϵ. 
Therefore, BDR satisfies ϵ-LDP.§.§ Algorithm of Function and §.§ Normalized Discounted Cumulative Gain (NDCG) It measures the ordering quality of the heavy hitters captured by ℋ𝒢, and is a common effectiveness metric in recommendation systems and other related applications. Specifically, let V={v_1,v_2,...,v_k} be the Top-k heavy hitters in ℋ𝒢. If v_i is one of the true Top-k heavy hitters, the relevance score rel_i is rel_v_i=|k-|rank_actual(v_i)-rank_estimated(v_i)||. If v_i is not a true Top-k heavy hitter, we directly set its rel_i to 0. Then, the Discounted Cumulative Gain (DCG) is DCG_k=rel_v_1+∑_i=2^krel_v_i/log_2(i). Finally, we normalize the DCG of ℋ𝒢 by comparing it with the Ideal DCG (IDCG), which is the DCG when ℋ𝒢 records an actual list of Top-k heavy hitters: NDCG_k=DCG_k/IDCG_k. NDCG_k is between 0 and 1 for all k, and the closer it is to 1, the higher the ordering quality of ℋ𝒢. §.§ Implementation Details §.§.§ Re-implementation of LDP Mechanisms. We treat existing LDP frequency estimation approaches as privacy-preserving baselines. Specifically, we estimate the frequency of all items under LDP, and output the items with the Top-k counts as the heavy hitters. Cormode, Maddock, and Maple <cit.> placed various LDP frequency estimation approaches into a common framework, and performed a series of experiments in Python[<https://github.com/Samuel-Maddock/pure-LDP>]. Their work offered a starting point for our implementations. We carefully studied the source codes, and fully re-implemented all baseline LDP mechanisms with the following optimizations. Data serialization. In <cit.>, the client outputs the perturbed data as an object, which the server takes as its input to do data aggregation. In practice, the server and the client would communicate via a network channel. This requires object serialization and introduces additional communication and computation costs. In our implementation, we manually serialize the perturbed data to `byte[]' based on the underlying approaches. If the client outputs bit strings (e.g., OUE and RAPPOR), we compress the output bit string by representing each 8 bits as 1 byte. If the client outputs integers (e.g., OLH, HR), we represent the integer with the minimal byte length, i.e., 1-4 bytes for integers in the ranges [0, 2^8), [0, 2^16), [0, 2^24), and [0, 2^32), respectively. Choices of the hash. Some frequency estimation approaches leverage a (non-cryptographic) hash to map the input to a Boolean (BLH) or integer(s) (RAPPOR, OLH, HCMS). The performance of these approaches is greatly affected by the efficiency of the underlying hash. Meanwhile, HeavyGuardian also leverages a (non-cryptographic) hash to partition data into buckets. Note that <cit.> and <cit.> respectively use xxHash and BobHash. We invoke BobHash in all schemes since our test shows that BobHash is more efficient[Our test shows that on MacBook M1, xxHash takes about 0.01us to provide an output while BobHash takes about 0.1us.]. Besides, we find that debiasing the randomized data before storing it into HeavyGuardian can avoid the bias introduced by the ED strategy being amplified by the debiasing process. However, if a complete debiasing is performed every time randomized data arrives, it can cause the previously accumulated count to be debiased repeatedly, e.g., divided by the denominator p-q of the debiasing formula repeatedly in BGR.
Therefore, we perform partial debiasing when collecting, and divide all counts by the denominator of the debiasing formula only before publishing the results.§.§.§ Re-implementation of HeavyGuardian We treat the original HeavyGuardian as the non-private baseline. We carefully studied the existing open-source C/C++ code provided by Yang et al.[<https://github.com/Gavindeed/HeavyGuardian>] and fully re-implemented HeavyGuardian using Java. Our HeavyGuardian re-implementation has some improvements compared with the original implementation. First, the original implementation contains some hard-coded parameters for different tasks, while our re-implementation allows developers to configure them dynamically for different tasks. Second, the ED strategy in HeavyGuardian contains a Bernoulli sampling procedure, i.e., sampling a Boolean value that is true with probability 𝒫 = b^-C, where b > 1 is a predefined constant number and C is a counting value. The naive sampling method used in the original HeavyGuardian implementation is to randomly sample r ∈ [0, 1) and test whether r < b^-C. However, since finite computers cannot faithfully represent real numbers, the naive method would not produce the Boolean value with the correct distribution. In our implementation, we parse b^-C = exp(-C ·ln(b)) and leverage the 𝖡𝖾𝗋𝗇𝗈𝗎𝗅𝗅𝗂(exp(-γ)) sampling method proposed by Canonne et al. <cit.> to do the sampling with no loss in accuracy. Recall that the basic version of HeavyGuardian is a hash table with w ≥ 1 buckets storing KV pairs (⟨ ID, count ⟩). Each bucket is divided into the heavy part with size λ_h > 0 and the light part with size λ_l ≥ 0. Because heavy hitter detection focuses only on hot items, Yang et al. <cit.> recommend setting λ_l = 0 when using HeavyGuardian for heavy hitter detection tasks. Our experiments follow this recommendation and set λ_l = 0 (except for CNR). The basic version of HeavyGuardian also allows using different λ_h and different numbers of buckets w when counting the most frequent k items in the heavy hitter task. Although our implementation also allows setting w and λ_h, our experiments focus on the basic case, i.e., w = 1 and λ_h = k, to better demonstrate the effectiveness of our schemes. We obtain memory consumption by measuring the deep sizes (i.e., the size of an object including the sizes of all referenced objects, in addition to the size of the object itself) of the Objects packaging HeavyGuardian and our schemes. The tool we use is the JOL (Java Object Layout) library[<http://hg.openjdk.java.net/code-tools/jol>]. Although the error bounds we give in the theoretical analyses assume debiasing after storing, the actual error in our implementation is still bounded by, and even lower than, the theoretical results. §.§ Supplementary Experiments §.§ Extended Related Work Differentially Private Data Stream Collection. The earliest studies in differential privacy for streaming data collection originate from continuous observation of private data <cit.>. Long-term data collection from users can be regarded as the collection of data streams. These studies mainly consider the degradation of the privacy guarantee due to the repeated appearance of private data when the user's state does not change for a period of time. Recent works on differentially private data stream collection mainly focus on Centralized Differential Privacy (CDP). Some works study how to publish the summation of the streaming data privately. To avoid overestimating the sensitivity of the streaming data caused by outliers, Perrier et al.
<cit.> propose truncating the data exceeding a threshold to reduce the sensitivity. Wang et al. <cit.> point out that the threshold should be dynamically adjusted for different distributions instead of using a fixed quantile value. Therefore, they propose to use an exponential mechanism to get a more reasonable sensitivity. Some works are devoted to solving the problem that the privacy guarantee is continuously degraded due to the repeated use of data in streams over consecutive periods. Kellaris et al. <cit.> propose w-event DP, regarding the statistical results published in a sliding window as an event. Farokhi et al. <cit.> propose to address the problem of the exploding privacy budget by reducing, over time, the privacy guarantee provided for data that appeared in the past. Some works study the release of correlated streaming data. Wang et al. <cit.> propose a correlated Laplace noise mechanism, and Bao et al. <cit.> propose a correlated Gaussian noise mechanism. There are also some recent works on data stream collection with Local Differential Privacy (LDP) <cit.>. Joseph et al. <cit.> design a protocol to submit LDP-protected data only when the streaming data has changed and can have a greater impact on the statistical results. Ren et al. <cit.> propose a privacy budget segmentation framework that provides w-event LDP protection to prevent the degradation of privacy due to continuous data collection. Besides, Wang et al. <cit.> extend the proposed truncation-based CDP mechanism to LDP for the release of streaming data. Tracking Heavy Hitters in Data Streams. Mining streaming data faces three principal challenges: volume, velocity, and volatility <cit.>. The existing heavy hitter estimation algorithms for data streams can be divided into three classes: Counter-based algorithms, Quantile algorithms, and Sketch algorithms <cit.>. Counter-based algorithms track a subset of items in the stream, and they quickly determine whether and how to record each newly arriving item. Manku et al. <cit.> propose two algorithms, Sticky Sampling and Lossy Counting, which only record the items whose estimated counts exceed the threshold. Metwally et al. <cit.> design an algorithm called Space-Saving and record data with a data structure called Stream-Summary, which achieves rapid deletion, update, and insertion for each newly arriving item. Subsequently, Yang et al. <cit.> propose a new algorithm called HeavyGuardian to improve Space-Saving. Zhou et al. <cit.> also propose a framework called Cold Filter (CF) to improve Space-Saving. The Quantile algorithms, such as the GK algorithm <cit.> and the QDigest algorithm <cit.>, focus on finding the smallest item that dominates ϕ n items from the data stream.
http://arxiv.org/abs/2311.16062v1
{ "authors": [ "Xiaochen Li", "Weiran Liu", "Jian Lou", "Yuan Hong", "Lei Zhang", "Zhan Qin", "Kui Ren" ], "categories": [ "cs.CR" ], "primary_category": "cs.CR", "published": "20231127182815", "title": "Local Differentially Private Heavy Hitter Detection in Data Streams with Bounded Memory" }
Towards complete characterization of topological insulators and superconductors: A systematic construction of topological invariants based on Atiyah-Hirzebruch spectral sequence Ken Shiozaki January 14, 2024 ================================================================================================================================================================================= Source-Free Domain Adaptation (SFDA) aims to adapt a source model for a target domain, with only access to unlabeled target training data and the source model pre-trained on a supervised source domain.Relying on pseudo labeling and/or auxiliary supervision, conventional methods are inevitably error-prone. To mitigate this limitation, in this work we for the first time explore the potentials of off-the-shelf vision-language (ViL) multimodal models (e.g., CLIP) with rich whilst heterogeneous knowledge. We find that directly applying the ViL model to the target domain in a zero-shot fashion is unsatisfactory, as it is not specialized for this particular task but largely generic.To make it task specific, we propose a novel Distilling multImodal Foundation mOdel () approach.Specifically,alternates between two steps during adaptation: (i) Customizing the ViL model by maximizing the mutual information with the target model in a prompt learning manner, (ii) Distilling the knowledge of this customized ViL model to the target model. For more fine-grained and reliable distillation, we further introduce two effective regularization terms, namely most-likely category encouragement and predictive consistency.Extensive experiments show thatsignificantly outperforms the state-of-the-art alternatives. Our source code will be released.§ INTRODUCTIONUnsupervised Domain Adaptation (UDA) relies on both well-annotated source data and unannotated target data. However, due to heightened safety and privacy concerns, accessing source data freely has become difficult <cit.>.In response, Source-Free Domain Adaptation (SFDA) has gained attention as a more practical solution, aiming to transfer a pre-trained source model to the target domain using only unlabeled target data.Due to the absence of source samples, traditional distribution matching approaches are no longer viable <cit.>. The predominant alternative is self-supervised learning, which generates or mines auxiliary information to facilitate unsupervised adaptation. Two main approaches exist: constructing a pseudo source domain to leverage established UDA methods such as adversarial learning <cit.> or domain shift minimization based on distribution measurement <cit.> and mining extra supervision from the source model <cit.> or target data <cit.>. In the presence of domain distribution shift, applying the source model to the target domain introduces inevitable errors in pseudo-labeling or auxiliary supervision, thereby limiting adaptation performance.To address identified limitations, we pioneer the exploration of off-the-shelf multimodal foundation models, such as the vision-language (ViL) model CLIP <cit.>, transcending the constraints of both the source model and target data knowledge.However, direct application of the ViL model proves unsatisfactory, lacking specialization for specific tasks.To overcome this, we propose a novel task-specific distillation approach named Distilling multImodal Foundation mOdel (). Initially, we customize the ViL model through unsupervised prompt learning for imposing task-specific information. 
Subsequently, we distil the knowledge from this customized ViL model to the target model, with joint supervision through two designed regularization terms: (1) most-likely category encouragement for coarse-grained distillation and (2) predictive consistency for fine-grained distillation.Our contributions are summarized as follows.(1) Pioneering the use of generic but heterogeneous knowledge sources (e.g., the off-the-shelf ViL model) for the SFDA problem, transcending the limited knowledge boundary of a pretrained source model and unlabeled target data. (2) Development of the novelapproach to effectively distill useful task-specific knowledge from the general-purpose ViL model. (3) Extensive evaluation on standard benchmarks, demonstrating the significant superiority of ourover previous state-of-the-art alternatives under conventional closed-set settings, as well as more challenging partial-set and open-set settings.§ RELATED WORK Source-free domain adaptation. Existing SFDA approaches fall into three distinct categories. The first explicitly aligns the pseudo source domain with the target domain, treating SFDA as a specialized case of unsupervised domain adaptation. This alignment is achieved by constructing the pseudo source domain through a generative model <cit.> or by splitting the target domain based on prior source hypotheses <cit.>.The second group extracts cross-domain factors from the source domain and transfers them in successive model adaptation for aligning feature distributions across the two domains. For example, <cit.> establishes a mapping relationship from a sample and its exemplar Support Vector Machine (SVM) (an individual classifier) on the source domain to ensure individual classification on the target domain. Some approaches leverage pre-trained source models to generate auxiliary factors, such as multi-hypothesis <cit.>, prototypes <cit.>, source distribution estimation <cit.>, or hard samples <cit.> to aid in feature alignment.The third group incorporates auxiliary information refined from the unlabeled target domain. In addition to widely used pseudo-labels <cit.>, geometry information, such as intrinsic neighborhood structure <cit.> and target data manifold <cit.>, has also been exploited.Despite continual advancements, these methods are limited by the knowledge derived solely from the pretrained source model and unlabeled target data. We break this limitation by tapping into the rich knowledge encoded in off-the-shelf multimodal foundation models. Large multimodal model. Multimodal vision-language (ViL) models, such as CLIP <cit.> and ALIGN <cit.>, have shown promise across various mono-modal and multimodal tasks by capturing modality-invariant features. Approaches in this domain can be broadly categorized into two lines.The first line focuses on enhancing ViL model performance. For instance, in <cit.>, prompt learning optimizes the text encoder of ViL models through the use of tailored, learnable prompts designed for specific scenarios. Other efforts aim to improve data efficiency by repurposing noisy data <cit.>.The second line utilizes ViL models as external knowledge to enhance downstream tasks, as demonstrated in this paper. Previous work in knowledge transfer primarily falls into two frameworks. For the first scheme, where the ViL model is directly applied to the target task in a zero-shot fashion <cit.>, domain generality is leveraged without task-specific refinement. The second scheme does not focus on source model adaptation. 
Instead, it fine-tunes the ViL model to the target domain through prompt or adaptor learning with an amount of manual labels <cit.>. A method relevant to ours is the UDA method DAPL <cit.>. Although both adopt CLIP, they differ significantly in problem setting and methodology. DAPL employs CLIP to learn domain-specific prompts, aiming to disentangle domain and category information in CLIP's visual features. In contrast, our method aligns target features to a progressively customized vision-language latent space in a memory-aware fashion. Importantly, DAPL requires labeled source data, making it inapplicable in SFDA.§ METHODOLOGY Problem statement. In the context of two distinct yet interrelated domains, namely the labeled source domain and the unlabeled target domain, both characterized by the same set of C categories, the following notation is employed. The source samples and their corresponding labels are represented as 𝒳_s and 𝒴_s, respectively. Similarly, the target samples and their true labels are denoted as 𝒳_t={x_i}_i=1^n and 𝒴_t={y_i}_i=1^n, where n signifies the number of samples. We aim to learn a target model θ_t:𝒳_t→𝒴_t. This involves utilizing (1) a pre-trained source model θ_s:𝒳_s →𝒴_s, (2) unlabeled target data, and (3) a Visual-Language (ViL) model denoted as θ_v. Overview. As depicted in Fig. <ref>, the proposed framework alternates between two distinct steps to customize and distill the off-the-shelf ViL knowledge. In the first step, we engage in prompt learning on the ViL model for the purpose of task-specific customization. This serves to mitigate the guidance error within the ViL model. In particular, we adopt a mutual information-based alignment approach. This approach is characterized by its richness in context and in interaction between the target model and the ViL model, as opposed to placing blind trust in either model alone as conventional methods do. In the second step, knowledge adaptation takes place under a unique constraint that encourages the identification of the most probable category labels in the logit space, while concurrently maintaining the typical predictive consistency. The most likely category labels are determined by a carefully designed memory-aware predictor, which dynamically integrates knowledge from both the target model and the ViL model in a cumulative fashion. §.§ Task-Specific ViL Model Customization We adopt the prompt learning framework for ViL model customization, with all the parameters of the ViL model frozen throughout. The only learnable part in customization is the set of prompts, each assigned to a specific class. To optimize these prompts, we need useful supervision. In SFDA, however, it is challenging to customize such a domain-generic ViL model towards the target domain in the absence of a well-trained target domain model. This is because neither model can reasonably make predictions on its own, which means there are no clearly good supervision signals available. To address this challenge, we propose to explore the wisdom of the crowd by leveraging their predictive interaction as the supervision. Formally, we denote the predictions by the target model and the ViL model as θ_t(x_k) and θ_v(x_k), respectively, given an unlabeled target sample x_k. We conduct the customization by maximizing the mutual information of their predictions as: L_TSC = min_v -𝔼_x_k∈𝒳_tI(θ_t(x_k), θ_v(x_k, v)), where v is the prompt context to be learned and the function I(·,·) measures the mutual information <cit.>.
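As a rough illustration of this customization step, the sketch below computes a negative mutual information loss between the two prediction distributions in a PyTorch style and updates only the prompt context. It is a schematic reading of the equation above, not the released implementation; the wrapper names (target_model, clip_with_prompts) and the batch-level estimate of the joint distribution are our own assumptions.

```python
import torch

def neg_mutual_information(p_t, p_v, eps=1e-8):
    """One possible batch estimate of -I(X, Y) for two categorical prediction
    matrices p_t, p_v of shape (B, C), whose rows are softmax probabilities."""
    joint = (p_t.unsqueeze(2) * p_v.unsqueeze(1)).mean(dim=0)   # (C, C) joint distribution over the batch
    px = joint.sum(dim=1, keepdim=True)                         # marginal of the target-model prediction
    py = joint.sum(dim=0, keepdim=True)                         # marginal of the ViL-model prediction
    mi = (joint * (torch.log(joint + eps) - torch.log(px * py + eps))).sum()
    return -mi

def customization_step(batch, target_model, clip_with_prompts, optimizer_v):
    # In this step only the prompt context v is trainable; both backbones stay frozen.
    with torch.no_grad():
        p_t = target_model(batch).softmax(dim=-1)        # frozen target model
    p_v = clip_with_prompts(batch).softmax(dim=-1)       # gradients flow only into the prompts v
    loss = neg_mutual_information(p_t, p_v)
    optimizer_v.zero_grad()
    loss.backward()
    optimizer_v.step()
    return loss.item()
```

The key design choice mirrored here is that the loss depends on the joint behaviour of the two predictors rather than treating either one as a fixed teacher.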
This alignment design differs significantly from the conventional adoption of the Kullback–Leibler (KL) divergence. First of all, the mutual information is a lower optimization bound than the KL divergence, facilitating deeper alignment (see Theorem <ref>, with the proof provided in the appendix): given two random variables X and Y, their mutual information I(X, Y) and KL divergence D_KL(X||Y) satisfy the inequality -I(X, Y) ≤ D_KL(X||Y). Crucially, the KL divergence exhibits an inherent bias towards a specific prediction, making it less suitable for our context where neither prediction holds a significant advantage. On the contrary, mutual information considers the joint distribution, or correlation, between the two predictions. This distinction arises from their respective definitions: -I(X, Y)=-H(X)+H(X|Y) and D_KL(X||Y)=-H(X)+H(X:Y), where H(X|Y)=-∑ p(x,y)log p(x|y) and H(X:Y)=-∑ p(x)log p(y). The conditional entropy component H(X|Y) of mutual information explicitly captures the joint distribution, a feature absent in the KL divergence. Empirically, we also confirm the significance of incorporating this joint distribution-based interaction between the two predictions during the customization of the ViL model (see the ablation study and the task-specific knowledge adaptation analysis in Section <ref>). §.§ Memory-Aware Knowledge Adaptation As previously mentioned, even with customization for the target domain, the ViL model may not be fully adapted, since no robust target model is available beforehand. This limitation hinders effective knowledge adaptation at this stage. To address this issue, we propose the incorporation of a specialized memory-aware predictor to provide additional learning guidance, namely most-likely category encouragement, complementing the conventional predictive consistency constraint. Most-likely category encouragement. The rationale behind incorporating this learning constraint is to harness the collective knowledge of both the target model and the ViL model in order to enhance the discernment of probable category labels for each sample. Given the sluggish nature of this search process, it has been devised to function as a form of learning regularization. An illustration of this regularization process is presented in Fig. <ref>. Specifically, it is realized through two distinct steps as detailed below. (I) Memory-aware predictor. We initiate the process by generating pseudo-labels that represent the most likely category distribution, utilizing historical information stored in a prediction bank. The prediction bank archives two types of historical data for all samples in the target domain: (1) predictions from the target model, denoted by {p_i}_i=1^n, and (2) predictions from the ViL model, denoted by {p'_i}_i=1^n. Throughout the adaptation process, the predictions from the target model are updated iteratively. At the end of each training iteration, the newly predicted labels for the training batch from the target model replace their counterparts in the prediction bank. In contrast, predictions from the ViL model are updated collectively in an epoch-wise manner, triggering updates every M training iterations. This mixed-update strategy is designed to strike a balance between maintaining the stability of the customized ViL model's guidance and capturing the task-specific dynamics inherent in the adaptation process.
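To make the mixed-update policy concrete, the following sketch maintains such a prediction bank. It is an illustrative reading of the description above rather than the authors' implementation; the class and method names, as well as the uniform initialization, are our own assumptions.

```python
import torch

class PredictionBank:
    """Stores, for every target sample, the latest target-model prediction p_i
    and the latest customized ViL-model prediction p'_i."""
    def __init__(self, n_samples, n_classes):
        self.p_target = torch.full((n_samples, n_classes), 1.0 / n_classes)  # {p_i}
        self.p_vil = torch.full((n_samples, n_classes), 1.0 / n_classes)     # {p'_i}

    def update_target(self, sample_idx, probs):
        # Iteration-wise update: refresh only the entries of the current training batch.
        self.p_target[sample_idx] = probs.detach().cpu()

    def update_vil(self, all_vil_probs):
        # Epoch-wise update: refresh the ViL predictions for all samples at once,
        # triggered every M training iterations.
        self.p_vil = all_vil_probs.detach().cpu()
```

Within one epoch, update_target would be called at the end of every iteration with the indices of the current batch, while update_vil would be called once per epoch after the customized ViL model has been refreshed.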
Based on the provided prediction bank, the creation of a pseudo-label for the most probable category involves a historical prediction fusion process as:p̅_i = ω p_i + (1-ω) p'_i.Here, the weight ω, drawn from an Exponential distribution with parameter λ, is a crucial factor. This fusion introduces dynamic bias rectification (represented by p_i) based on the guidance from the customized ViL model (p'_i). The role of p_i is to provide adjustments, leading us to adopt an asymmetric random weighting approach represented by ω.(II) Category attention calibration. Subsequently, we formulate a regularization technique employing pseudo-labels acquired through category attention calibration. Specifically, we begin by identifying the top-N most probable categories using p̅_i. The indices of these identified categories are denoted by ℳ_i={m_k}_k=1^N. With ℳ_i, the target model's logit of a target domain sample x_i, denoted as l_i, is segregated into positive and negative category groups. We define this regularization as:L_MCE = min_θt𝔼_x_i∈𝒳_tlogexp( a_i / τ)/∑_j ≠ℳ_iexp( b_i ·l_i,j/τ)a_i= ∏_k=1^Nl_i,m_k,     b_i = ∑_k=1^Nl_i,m_kwhere l_i, a denotes the a-th element of l_i and τ is the temperature parameter. In Eq. (<ref>), we note that the product operation with a_i in the numerator amplifies penalties for the probability decrease on the most likely categories compared to the sum form. Similarly, the sum with b_i in the denominator serves as an increasing weighting parameter to enhance suppression of values at other locations. Moreover, a_i is more sensitive to changes than b_i due to ∂ a_i/∂ m_k∝O(n^N-1) and ∂ b_i/∂ m_k∝O(1). By combining the use of a_i and b_i, we globally impose a calibration effect on the elements corresponding to the most likely categories within the logit l_i. Essentially, attention is introduced to these potential categories, as illustrated in the box with a yellow background in Fig. <ref>. Predictive Consistency. For the purpose of knowledge adaptation, we incorporate the conventional predictive consistency loss as:L_PC =min_θ_t[- 𝔼_x_i∈𝒳_tI(θ_t(x_i), θ_v(x_i, v*))+αL_B ],where θ_t(x_i) represents the target prediction, θ_v(x_i,v) denotes the ViL prediction, and v is the prompt context learned during the initial phase of task-specific customization. The function I(·,·) corresponds to the mutual information function. The parameter α serves as a trade-off parameter, and the category balance term L_B =KL (. q̅|| 1/C) aligns with previous approaches <cit.>, preventing solution collapse by ensuring the empirical label distribution q̅ matches the uniform distribution 1/C. For the reasons elaborated in , we employ mutual information for alignment. §.§ Model training To systematically distill and leverage task-specific knowledge from the ViL model, we adopt an epoch-wise training approach for . The training process is divided into T epochs, each comprising two stages aligned with the two steps in theframework (Fig. <ref>). During the first stage, training is governed by the objective L_TSC, and in the subsequent second stage, the objective function transitions toL_MKA = L_PC + βL_MCE,where β is a trade-off parameter. We summarize the whole training procedure ofin Algorithm <ref>. § EXPERIMENTS Datasets.We evaluate four standard benchmarks: Office-31 <cit.>,Office-Home <cit.>,VisDA <cit.>and DomainNet-126 <cit.>.Among them, Office-31 is a small-scaled dataset; Office-Home is a medium-scale dataset; VisDA and DomainNet-126 are both large-scale dataset. 
The details of the four datasets are provided in . Competitors. We compare our method with 18 existing top-performing methods, organized into three groups. (1) The first group contains Source (the source model's results), CLIP <cit.> and Source+CLIP, where Source+CLIP directly averages the results of the source model and CLIP. (2) The second group includes three state-of-the-art UDA methods, DAPL <cit.>, PADCLIP <cit.> and ADCLIP <cit.>, which are also multimodal guiding-based. (3) The third group comprises 13 current state-of-the-art SFDA models: SHOT <cit.>, NRC <cit.>, GKD <cit.>, HCL <cit.>, AaD <cit.>, AdaCon <cit.>, CoWA <cit.>, SCLM <cit.>, ELR <cit.>, PLUE <cit.>, TPDS <cit.> and CRS <cit.>. For comprehensive comparisons, we implement our method in two variants: (1) -C-RN (weak version) and (2) -C-B32 (strong version). The key distinction lies in the backbone of the CLIP image encoder. Specifically, for -C-RN, ResNet101 <cit.> is employed on the VisDA dataset, while ResNet50 <cit.> is used on the other three datasets. On the other hand, -C-B32 adopts ViT-B/32 <cit.> as the backbone across all datasets. SFDA settings. We consider three distinct settings: the conventional closed-set SFDA setting, the partial-set and the open-set SFDA settings. The experiment implementation details are provided in .§.§ Comparison Results Comparison on Closed-set SFDA setting. The comparisons on the four evaluation datasets are listed in Tab. <ref>∼<ref>. -C-B32 surpasses the previous best methods CoWA (on Office-31), TPDS (on Office-Home), PLUE (on VisDA) and GKD (on DomainNet-126) by 2.2%, 9.6%, 2.0% and 11.3% in average accuracy, respectively. Specifically, -C-B32 obtains the best results on 4 out of 6 tasks on Office-31 while surpassing previous methods on all tasks of the other three datasets. As for -C-RN, besides Office-31, it obtains the second-best results and beats the previous best methods by 5.9%, 0.5% and 8.0% on Office-Home, VisDA and DomainNet-126 in average accuracy. The comparison of -C-RN shows that our method can still perform well despite using a weaker CLIP. Based on a stronger CLIP (see the results of -C-B32), our method's performance improves further, as expected. All of the results indicate that our method can boost the cross-domain performance in the closed-set SFDA setting. Comparison to CLIP-based prediction results. The original CLIP model can conduct general image classification. We carry out a quantitative comparison between our method's adaptation performance and CLIP's performance on the four datasets, averaging the adaptation results of our method grouped by the target domain name. As presented at the bottom of Tab.
<ref>, -C-B32 outperforms CLIP-B32 on all tasks. In average accuracy, -C-B32 increases the performance by 12.7%, 7.0%, 7.4% and 3.7% on Office-31, Office-Home, VisDA and DomainNet-126, respectively. Regarding the weak version, as reported at the top, -C-RN maintains similar advantages, with increases of 17.9%, 7.3%, 5.1% and 4.0%. The result shows that the domain generality of the original CLIP model cannot fully transfer to the target domain, and task-specific customization is needed. Interestingly, compared with CLIP-B32, except for VisDA with a tiny gap of 0.9%, Source+CLIP-B32 improves by at most 7.2% on average on the other datasets. Meanwhile, Source+CLIP-B32 is beaten by -C-B32 by at least 3.2%. In the group of -C-RN, we have the same observation. These results imply that directly weighting the source model and CLIP is an intuitive knowledge adaptation scheme, but it can hardly achieve deep adaptation. Considering that Source+CLIP is an averaged version, we conduct a comprehensive comparison with the weighting strategy in which the weighting coefficient of the CLIP prediction varies from 0.0 to 1.0. Here, we conduct this experiment based on the more challenging CLIP-B32 due to its large performance gap with Source (see the first row in ). For a clear view, all weighted accuracies are normalized by the corresponding -C-B32 accuracies. As shown in Fig. <ref>, no result can exceed the value of 1.0. This indicates that weighting the source model and CLIP in a zero-shot manner cannot obtain a desirable task-specific fusion, and a carefully designed distillation is necessary. Comparison on Partial-set and Open-set SFDA settings. These two settings are variations of the traditional closed-set SFDA setting, following the same protocol as SHOT <cit.> (the detailed setting introduction is provided in ). As reported in Tab. <ref>, compared with the previous best methods CoWA (Partial-set) and CRS (Open-set), our -C-B32 improves by 2.4% and 2.7%, respectively. §.§ Model Analysis Feature distribution visualization. Taking task Ar→Cl in Office-Home as a toy experiment, we visualize the feature distribution using the t-SNE tool. Meanwhile, we choose five methods for comparison, including the source model (termed Source), CLIP-B32's zero-shot predictions (termed CLIP), SHOT, TPDS and Oracle (trained on domain Cl with the real labels). As shown at the top of Fig. <ref>, from Source to -C-B32, category aliasing is gradually relieved. Compared with Oracle, -C-B32 has the most similar distribution shape. To verify this point, we also give the 3D density chart results arranged at the bottom of Fig. <ref>. These results confirm the effectiveness of our -C-B32 in terms of feature distribution. Ablation study. We evaluate (1) the effect of the objective components L_TSC, L_MCE and L_PC, (2) the effect of the mutual information optimization, and (3) the effect of task-specific customization. For the first issue, we conduct a progressive experiment to isolate the effect of each loss. The top four rows of Tab. <ref> list the ablation study results. For convenient comparison, the baseline (the first row) is the source model result. When only L_TSC is used (the second row), the accuracy largely increases on the three datasets, with an improvement of 19.1% in average accuracy compared with the baseline. As L_MCE is introduced, the accuracy evidently increases (by 3.7% on average, the third row) on top of the case with only L_TSC, and it is further enhanced by adopting the term L_PC (by 3.5% on average, the fourth row).
The results indicate: (1) all objective components positively affect the final performance, (2) L_MCE, L_PC is crucial due to providing a new soft supervision for coarse-to-fine adaptation. For the second and third issues, we propose two variation methods of -C-B32 to evaluate the effect.One is -C-B32 w/ KL where the mutual information maximization loss in L_TSC, L_PC are replace by KL divergence loss. The other one is -C-B32 w/ CLIP where the prompt learning-based customization for CLIP is cancelled, and the inputted prompt is set to the fixed template of "a photo of a [CLS]." during the entire adaptation. As presented in the last two rows in Tab. <ref>, -C-B32 (the fourth row) beats -C-B32 w/ KL and-C-B32 w/ CLIP with average improvement of 1.6% at least, respectively confirming the effect of adopting mutual information optimization and task-specific customization.§.§ Task-Specific Knowledge Adaptation AnalysisIn this part, we give a feature space shift analysis using the measure of MMD (maximum mean discrepancy) distance <cit.> to verify whether the proposed method ensures a task-specific knowledge adaptation.In this experiment, we first train a domain-invariant Oracle model over all Office-Home data with real labels, and use the logits to express the ideal task-specific space 𝒪.After that, an analysis is conducted on the transfer task Ar→Cl.During this adaptation, there are T (epoch number) intermediate target models and customized CLIP models.We feedforward the target data through each intermediate model and take the logits as a space.Thus, we obtain T intermediate target feature spaces {𝒰_k}_k=1^T and T intermediate customized CLIP feature spaces {𝒱}_k=1^T.Within this context, these intermediate spaces can depict the task-specific distillation to 𝒪. In practice, the CLIP image encoder's backbone is set to ViT-B/32.In the left of Fig. <ref>, we give the MMD distance change curve of {𝒰_k}_k=1^T (in red, termed TGT) and {𝒱}_k=1^T (in blue, termed CUS-CLIP), taking 𝒪 as the original space.It is seen that at early epochs (1∼4), TGT and CUS-CLIP sharply decrease and then maintain a gradual decrease in the following epochs.Meanwhile, this change is consistent with the accuracy varying shown in the right of Fig. <ref>. These results indicate that ourindeed encourages task-specific knowledge adaptation due to converging the ideal task-specific space.Besides, we observe two details.First, after epoch 1, CUS-CLIP's distance reduces by 2.2, which is 58.6 time of TGT's decrease of 0.038.This is because CLIP represents a heterogeneous space of vision-language, much different from the vision space 𝒪.Furthermore, the large distance decrease confirms the effect of customization.Second, the synchronized distance reductions of CUS-CLIP and TGT indicate the interaction between the target model and CLIP is a crucial design for task-specific distillation.§ CONCLUSION We present an innovative approach, referred to as , designed to tackle the SFDA problem. To the best of our knowledge, this marks the initial endeavor to address SFDA by leveraging a pretrained ViL foundation model, departing from previous approaches that predominantly concentrated on self-mining auxiliary information.is featured with alternating between customization of the ViL model and the transfer of task-specific knowledge from the customized ViL model. 
We introduce two pivotal designs: a mutual information-based alignment for ViL customization and a most-likely category encouragement for more precise adaptation of task-specific knowledge. Our method's effectiveness is validated by state-of-the-art experimental results across four challenging datasets.ieeenat_fullname § A PROOF OF THEOREM 1. Restatement of Theorem 1Given two random variables X, Y. Their mutual information I( X, Y ) and KL divergence D_KL(X || Y) satisfy the unequal relationship as follows. -I( X, Y ) ≤ D_KL( X || Y ). Proof. Suppose the probability density function (PDF) of X and Y are p(x) and p(y), respectively; their join PDF is p(x,y).We haveI(X, Y) = ∑ p(x, y)logp(x,y)/p(x)· p(y)= D_KL( p(x,y) || p(x)· p(y) ).Well known, the KL divergence is non-negative <cit.>.Thus,-I( X, Y ) ≤ 0 ≤ D_KL( X || Y )§ EVALUATION DATASETSWe evaluate four standard benchmarks below. * Office-31 <cit.> is a small-scaled dataset including three domains, i.e., Amazon (A), Webcam (W), and Dslr (D), all of which are taken of real-world objects in various office environments. The dataset has 4,652 images of 31 categories in total. Images in (A) are online e-commerce pictures. (W) and (D) consist of low-resolution and high-resolution pictures.* Office-Home <cit.> is a medium-scale dataset that is mainly used for domain adaptation, all of which contains 15k images belonging to 65 categories from working or family environments. The dataset has four distinct domains, i.e., Artistic images (Ar), Clip Art (Cl), Product images (Pr), and Real-word images (Rw).* VisDA <cit.> is a challenging large-scale dataset with 12 types of synthetic to real transfer recognition tasks. The source domain contains 152k synthetic images (Sy), whilst the target domain has 55k real object images (Re) from the famous Microsoft COCO dataset. * DomainNet-126 <cit.> is another large-scale dataset. As a subset of DomainNet containing 600k images of 345 classes from 6 domains of different image styles, this dataset has 145k images from 126 classes, sampled from 4 domains, Clipart (C), Painting (P), Real (R), Sketch (S), as  <cit.> identify severe noisy labels in the dataset.§ IMPLEMENTATION DETAILS Souce model pre-training. For all transfer tasks on the three datasets, we train the source model θ_s on the source domain in a supervised manner using the following objective of the classic cross-entropy loss with smooth label, like other methods <cit.>.L_s(𝒳_s, 𝒴_s; θ_s)=-1/n_s∑_i=1^n_s∑_c=1^Cl̃_i,c^slogp_i,c^s,where n_s is the number of the source data, p_i,c^s is the c-th element of p_i^s=θ_s(x_i^s) that is the category probability vector of input instance x_i^s after θ_s mapping; l̃_i,c^s is the c-th element of the smooth label <cit.> l̃_i^s=(1-σ) 2ptl_i^s + σ/C, in which l_i^s is a one-hot encoding of hard label y_i^s and σ=0.1.The source dataset is divided into the training set and testing set in a 0.9:0.1 ratio.Network setting. 
Themodel contains two network branches.In the target model branch, the feature extractor consists of a deep architecture and a fully-connected layer followed by a batch-normalization layer.Same to the previous work <cit.>, the deep architecture is transferred from the deep models pre-trained on ImageNet (i.e., ResNet-50 is used on Office-31, Office-Home and DomainNet-126, whilst ResNet-101 is adopted on VisDA).The ending classifier is a fully-connected layer with weight normalization.On the other hand, the ViL model branch chooses the most adopted CLIP as the implementation where the text encoder's transformer-based architecture follows modification proposed in  <cit.> as the backbone.Regarding the image encoder, we adopt two versions corresponding to the two implementations ofin this paper, including-C-B32 and -C-RN.Specifically, in -C-B32, image encoders follow ViT-B/32 architecture proposed in CLIP <cit.> while -C-RN usesResNet <cit.> as the backbone.The same as the target model mentioned above,ResNet-101 is adopted on VisDA and ResNet-50 is used on the rest datasets. Parameter setting.For the trade-off parameter α and β in the objective L_PC (Eq. (6)) and L_MKA (Eq. (7)) is set to 1.0 and 0.4 on all datasets, respectively. The parameter of Exponential distribution λ in Eq. (4) is specified to 10.0.The temperature parameters in and Eq. (5) are τ=0.1.The number of the most-likely categories is set to N=2.Training setting. We adopt the batch size of 64, SGD optimizer with momentum 0.9 and 15 training epochs on all datasets.The prompt template for initiation is the mostly used 'a photo of a [CLASS].' <cit.>.All experiments are conducted with PyTorch on a single GPU of NVIDIA RTX.§ SUPPLEMENTATION OF FULL EXPERIMENT RESULTS Full results on VisDA.As the supplement of results on VisDA, Tab. <ref> presents the full classification details over the 12 categories.It is seen that -C-RN and -C-B32 obtain the best results in 7/12 categories compared with SFDA methods.Meanwhile, -C-RN and -C-B32 are on top of the second best UDA results in 8/12 categories. Also, we note that the UDA method of ADCLIP beats -C-RN and -C-B32 on four transfer tasks.It is understandable that ADCLIP use the labelled source data, whilst our method cannot access the source data. Despite this,still presents advantages over these source data-required method (see the average accuracy).Full results of comparison to CLIP.As the supplementation of these domain-grouped results reported in the paper, Fig. <ref> gives a comprehensive visualization comparison with CLIP in the perspective of all 31 transfer tasks on the four evaluation datasets. It is seen that the results of  (marked by green circles) are above CLIP (marked by orange circles) on all tasks, whether we use -C-RN or -C-B32.Full results of Partial-set and Open-set SFDA.As the supplementation of these average results in Tab. 5, Tab. <ref> gives the full classification accuracy over 12 transfer tasks in the Office-Home dataset.As the top in Tab. <ref>, -C-B32 obtains best results on 9/12 tasks in the Partial-set SFDA and on the half tasks in the Open-set SFDA. § EXPANDED MODEL ANALYSISGrad-CAM visualization.In Fig. 
<ref>, we present the Grad-CAM visualization <cit.> comparison with the source model and two typical SFDA methods, SHOT and SCLM, based on self-supervised learning without ViL model help.For the single object-contained images (see 1∼3 column), -C-B32's attention focuses on the target object, whilst other methods cover the entire image.Regarding the multi-object-contained images (see 4∼6 column), -C-B32's attention is more consistent with the target semantics given by the real labels than other methods focusing on the wrong object.These results explain the effectiveness of -C-B32 integrating the domain generality of theViL model and the task specificity of the source model.Attention-based evolving dynamics. To better understand the working of , this part visualizes the evolving dynamics of model learning attention during the training phase. For a clear view, we display the Grad-CAM visualization results at some typical iterations, as shown in Fig. <ref>.Among the rightly classified images (the top four rows), the attention smoothly concentrates to the discriminative visual patch.In contrast, the attention of the misclassified image (the last row) converges to the meaningless one. Sensitivity of hyper-parameter. In themethod, α, β are trade-off parameters in objective L_PC (see Eq. (6)) and L_MKA (see Eq. (7)).This part discusses their performance sensitivity based on the symmetric transfer tasks Cl→Ar and Ar→Cl in the Office-Home dataset.As depicted in Fig. <ref>, when these parameters changes, there are no evident drops in the accuracy variation curves. This indicates thatis insensitive to parameters α and β.Confusion matrix. To present a quantitative observation on the category, this part gives the confusion matrix based on the classification results on the VisDA dataset. For comparison, we show the confusion matrix of the source model at the left side of Fig. <ref>.In the no-adaptation case, the misclassified data scatter over the matrix.After adaptation, the misclassified data are evidently corrected by -C-B32 at the right side of Fig. <ref>.It is seen that -C-B32 improves performance on all categories, and on some categories achieving significant growth.For instance, in the second category, the performance promotes by 68% (from 21% to 89%).Training stability.Training stability is a vital characteristic of supervised learning methods.Based on the large-size dataset VisDA, we present the adaptation details of -C-B32 using the accuracy varying curves on the target domain.For comparison, the curves of typical self-supervised methods, SHOT, SCLM and TPDS, are also depicted.As shown in Fig. <ref>, the accuracy gradually increases to the maximum.This result confirms the training stability of -C-B32.Also, -C-B32 converges much faster than SHOT, SCLM and TPDS.It indicates that introducing task-specific knowledge from the ViL model is helpful in boosting the source model adaptation.
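The per-class and adaptation-distance analyses above rely on two standard quantities: a row-normalized confusion matrix over the target categories and an MMD distance between logit spaces. The following is a minimal NumPy sketch of both; the RBF kernel and its bandwidth for MMD are our assumption (the paper only names the measure), and all function names are illustrative rather than taken from the released code.

import numpy as np

def confusion_matrix(y_true, y_pred, num_classes):
    """Row-normalized confusion matrix: entry (i, j) is the fraction of
    class-i samples predicted as class j; the diagonal is per-class accuracy."""
    cm = np.zeros((num_classes, num_classes))
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1.0
    return cm / np.maximum(cm.sum(axis=1, keepdims=True), 1.0)

def mmd2_rbf(X, Y, gamma=1.0):
    """Biased estimate of the squared MMD between two logit sets X (n, d) and
    Y (m, d) under an RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    def gram(A, B):
        sq = (np.square(A).sum(1)[:, None] + np.square(B).sum(1)[None, :]
              - 2.0 * A @ B.T)
        return np.exp(-gamma * np.maximum(sq, 0.0))
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()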
http://arxiv.org/abs/2311.16510v1
{ "authors": [ "Song Tang", "Wenxin Su", "Mao Ye", "Xiatian Zhu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231127125802", "title": "Source-Free Domain Adaptation with Frozen Multimodal Foundation Model" }
Influence of the imposed flow rate boundary condition on the flow of Bingham fluid in porous media Alberto Rosso January 14, 2024 ==================================================================================================empty empty abbrvThis paper proposes a backstepping boundary control design for robust stabilization of linear first-order coupled hyperbolic partial differential equations (PDEs) with Markov-jumping parameters. The PDE system consists of 4 × 4 coupled hyperbolic PDEs whose first three characteristic speeds are positive and the last one is negative. We first design a full-state feedback boundary control law for a nominal, deterministic system using the backstepping method. Then, by applying Lyapunov analysis methods, we prove that the nominal backstepping control law can stabilize the PDE system with Markov jumping parameters if the nominal parameters are sufficiently close to the stochastic ones on average. The mean-square exponential stability conditions are theoretically derived and then validated via numerical simulations. § INTRODUCTION Hyperbolic partial differential equations (PDEs) find applications in many engineering areas, including transportation systems <cit.>, open-channel flows <cit.>. Extensive research efforts have been made for boundary control problems of hyperbolic PDE systems.PDE backstepping, as a feedback stabilization method, has been developed for general first-order coupled hyperbolic PDEs in a series of work <cit.>. Further advances generalize the PDE backstepping for adaptive control <cit.>, robust stabilization to delay and disturbances <cit.>. These research results mainly pertain to deterministic PDE systems. On the other hand, scarce studies focus on stochastic PDE systems, the stochasticity being usually caused by uncertain parameters in real applications. The question of boundary control of uncertain hyperbolic PDEs with Markov jump parameters via the backstepping method remains open.Previous results have focused on stochastic stability analysis with distributed or boundary controllers using linear matrix inequalities <cit.>. Prieur <cit.> modeled the abrupt changes of boundary conditions as a piecewise constant function and derived sufficient conditions for the exponential stability of the switching system. Wang et al. <cit.> examined the robustly stochastically exponential stability and stabilization of uncertain linear first-order hyperbolic PDEs with Markov jumping parameters, deriving sufficient stability conditions using linear matrix inequalities (LMIs) based on integral-type stochastic Lyapunov functional.Zhang <cit.> studied traffic flow control of Markov jump hyperbolic systems, employing LMIs to derive sufficient conditions for boundary exponential stability. Auriol <cit.> first considered the mean-square exponential stability of a 2× 2 stochastic hyperbolic system using backstepping.The main results of this paper are that we propose a robust stabilizing backstepping control law for a4× 4 stochastic hyperbolic PDEs. We first design a backstepping boundary control law to stabilize the nominal system without Markov jumping parameters. We prove that the nominal control law can still stabilize the stochastic PDE system, provided the nominal parameters are sufficiently close to the stochastic ones on average. The stability conditions are derived using a Lyapunov analysis. The contribution of this paper extends to both theoretical advancements and practical applications. 
This paper is organized as follows: In Section II, we introduce the stochastic hyperbolic PDE system and state the problem under consideration. In Section III, the nominal boundary controller is designed, and the mean-square exponential stabilization of the stochastic PDEs is proposed. In Section IV, a Lyapunov analysis is conducted to prove the nominal control law achieves the mean-square exponential stability of the PDE system with Markov jumping parameters. In Section V numerical simulations verify the theoretical results.§ PROBLEM STATEMENTIn this paper, we consider the following stochastic PDE system 𝐰_t^+(x,t)+Λ^+(t) 𝐰^+_x(x,t) = Σ^++(x,t)𝐰^+(x,t) +Σ^+-(x,t) 𝐰^-(x, t) , 𝐰^-_t(x, t)-Λ^-(t) 𝐰^-_x(x, t) = Σ^-+(x,t)𝐰^+,with the boundary conditions 𝐰^+(0,t)= Q(t) 𝐰^-(0, t), 𝐰^-(1, t) = R(t)𝐰^+(1,t)+U(t),where 𝐰^+= [w_1, w_2, w_3]^𝖳, 𝐰^- = w_4, where the spatial and time domain are defined in (x,t) ∈ [0,1] ×ℝ^+. The different transport matrices are defined asΛ^+(t) = [ λ_1(t)00;0 λ_2(t)0;00 λ_3(t) ],Λ^-_i =λ_4(t)The coefficient matrices are Σ^++(x,t)∈ℝ^3× 3, Σ^+-(x,t)∈ℝ^1× 3, Σ^-+(x,t)∈ℝ^3× 1, Q(t)∈ℝ^3× 1, R(t)∈ℝ^1× 3. The function U(t) is the control input. All the parameters of the systems are stochastic and we denote their concatenation 𝒳(t) = {Λ^+(t),Λ^-(t),Σ^++(x,t),Σ^+-(x,t),Σ^-+(x,t), Q(t), R(t)}. We consider that the set 𝒳(t) corresponds to a homogeneous continuous Markov process𝒳(t), t ∈ℝ^+ with a finite number of states 𝒮 = {𝒳_1,𝒳_2,…,𝒳_r}, whose realization is right continuous. For instance, we have 𝒳_j={Λ_j^+,Λ_j^-,Σ_j^++(x),Σ_j^+-(x),Σ_j^-+(x), Q_j, R_j}.The transition probabilities P_ij(t_1,t_2) denote the probability to switch from mode 𝒳_i at time t_1 to mode 𝒳_j at time t_2 ((i,j) ∈{1, …, r}^2, 0 ≤ t_1 ≤ t_2). They satisfy  P_ij: ℝ^2 → [0,1] with ∑_j=1^r P_ij(t_1,t_2)=1. Moreover, for ϱ <t, the transition probabilities P_ij follows the Kolmogorov equation <cit.>,∂_tP_i j(ϱ, t)=-𝔠_j(t)P_i j(ϱ, t)+∑_k=1^rP_i k(ϱ, t) τ_k j(t),P_i i(ϱ, ϱ)=1,and P_i j(ϱ, ϱ)=0fori ≠ jwhere the τ_ij and 𝔠_j=∑_k=1,k≠ j^rτ_j k are non-negative-valued functions such that τ_ii(t)=0. The functions τ_ij are upper bounded by a constant τ^⋆.For 𝒳_j ∈𝒮, we denote ||𝒳_j(t)|| as||𝒳_j(t)|| =(||Λ^+_j(t)||^2+||Λ^-_j(t)||^2+||Q_j(t)||^2+||R_j(t)||^2+sup_x∈ [0,1]||Σ_j^++(x)||^2+sup_x∈ [0,1]||Σ_j^+-(x)||^2+sup_x∈ [0,1]||Σ_j^-+(x)||^2)^1/2,where we have used the standard Euclidean norm. We assume that there exists known lower bounds 𝒳 and upper bounds 𝒳 such that for all j , 𝒳≤ ||𝒳_j||≤𝒳. Moreover, we assume that the lower bounds of the stochastic velocities are always positive. More precisely, we haveΛ_i^+>0, Λ_i^->0, which implies λ_1i>0, λ_2i>0, λ_3i>0 and λ_4i>0. Using the notations 𝐰 = [𝐰^+, 𝐰^-] and Λ_i = Diag{λ_1^i,λ_2^i,λ_3^i,-λ_4^i}, the system (<ref>)-(<ref>) can be rewritten in the compact form:∂_t 𝐰(x, t)+Λ_i∂_x 𝐰(x, t)= Θ_i𝐰(x,t),for x ∈ (0,L), with the boundary condition:[[𝐰^+(0,t); 𝐰^-(1, t) ]]=G_i[[𝐰^+(1,t); 𝐰^-(0, t) ]] + [[0; U(t) ]],where the coefficient matrix Θ_i, G_i are:Θ_i =[[ Σ^++_i(x) Σ^+-_i(x); Σ^-+_i(x) 0 ]], G_i = [ [ 0 Q_i; R_i 0 ]].In the case of deterministic coefficients, equations (<ref>)-(<ref>) naturally appear when modeling traffic networks with two classes of vehicles <cit.>. § BACKSTEPPING CONTROLLER DESIGNIn this section, we propose a backstepping control design for the nominal system (that is, the system is in a known nominal deterministic state). We will then show that the stochastic system with this nominal control law is well-posed. 
Finally, we will state our main result, which is the mean-square exponential stability of the closed-loop system, provided the nominal parameters are sufficiently close to the stochastic ones on average. This result will be proved in the next section using a Lyapunov analysis. §.§ Backstepping transformation We consider in this section that the stochastic parameters are in the nominal mode 𝒳(t) = 𝒳_0. This nominal mode is not necessarily related to the set 𝒮. Our objective is to design a control law that stabilizes this nominal system. We first simplify the structure of the system (<ref>) by removing the in-domain coupling terms in the equation. More precisely, let us consider the following backstepping transformation 𝒦_0 𝒦_0𝐰= [𝐰^+; 𝐰^- -∫_0^x 𝐊_0(x, ξ)𝐰^+(ξ,t) +N_0(x, ξ) 𝐰^-(ξ, t)) d ξ ]where the kernels 𝐊_0(x,ξ) ∈ℝ^1× 3 and N_0(x,ξ)∈ℝ^1 are piecewise continuous functions defined on the triangular domain 𝒯={0 ≤ξ≤ x ≤ 1}. We have𝐊_0(x,ξ) = [ [ k^0_1(x,ξ) k^0_2(x,ξ) k^0_3(x,ξ) ]].The different kernels verify the following PDE system-Λ^-_0(𝐊_0)_x(x,ξ) +(𝐊_0)_ξ(x,ξ)Λ^+_0=-𝐊_0(x,ξ)Σ^++_0(ξ) -Σ^-+_0(ξ) N_0(x, ξ), Λ^-_0 (N_0)_x(x, ξ)+Λ^-_0 (N_0)_ξ(x, ξ) =𝐊_0(x,ξ)Σ^+-_0(ξ),with the boundary conditions (-Λ^-_0 𝐈_3-Λ^+_0) 𝐊_0^𝖳(x,x) =Σ^-+^𝖳_0(x) ,-Λ^-_0 N_0(x, 0)+𝐊_0(x,0)Λ^+_0Q_0=0,where 𝐈_3 is a 3× 3 identity matrix. The well-posedness of the kernel equations can be proved by adjusting the results from <cit.>. The solutions of the kernel equations can be expressed by integration along the characteristic lines. Applying the method of successive approximations, we can then prove the existence and uniqueness of the solution to the kernel equations (<ref>)-(<ref>). Applying the backstepping transformation, we can define the target system state ϑ asϑ=(α_1,α_2,α_3,β)=𝒦_0 𝐰.And we denote the augmented states 𝒬 = [α_1,α_2,α_3]^𝖳 The target system equations are given by: 𝒬_t(x,t)+Λ^+_0 𝒬_x(x,t) =Σ^++_0(x)𝒬(x,t)+Σ^+-_0(x) β(x,t) +∫_0^x𝐂_0^+(x,ξ)𝒬(ξ,t) dξ +∫_0^x𝐂_0^-(x,ξ)β(ξ,t)dξ, β_t(x, t)-Λ^-_0 β_x(x, t)=0, with the boundary conditions:𝒬(0,t) = Q_0 β(0, t),β(1, t)=R_0 𝐰^+(1,t) - ∫_0^1 𝐊_0(1, ξ)𝐰^+(ξ,t)+N_0(1, ξ) 𝐰^-(ξ, t)) d ξ + U(t).where the coefficients 𝐂_0^+(x,ξ) ∈ℝ^3× 3 and 𝐂_0^-(x,ξ) ∈ℝ^3× 1 are bounded functions defined on the triangular domain 𝒯. Their expressions can be found in <cit.>. Note that we still have the presence of 𝐰^+ and 𝐰^- terms in equation (<ref>), but this is not a problem since these terms will be removed using the control input. The transformation 𝒦_0 is a Volterra transformation, therefore boundedly invertible. Consequently, the states 𝐰 and ϑ have equivalent L^2 norms, i.e. there exist two constants m_ϑ>0 and M_ϑ>0 such that m_ϑ||𝐰||_L^2^2≤ ||ϑ||_L^2^2≤ M_ϑ||𝐰||_L^2^2. §.§ Nominal control law and Lyapunov functionalFrom the nominal target system (<ref>)-(<ref>), we can easily design a stabilizing control law as <cit.>:U(t)= -R_0 𝐰^+(1,t) +∫_0^1(𝐊_0(1, ξ)𝐰^+(ξ,t) . . 
+N_0(1, ξ) 𝐰^- (ξ, t)) d ξ.To analyze the stability properties of the target system (<ref>)-(<ref>), we consider the Lyapunov functional V_0 defined by V_0(t) = ∫_0^1 ϑ^𝖳(x,t) D_0(x) ϑ(x,t)dx,where D_0(x) = Diag{e^-ν/λ_1^0x/λ_1^0, e^-ν/λ_2^0x/λ_2^0, e^-ν/λ_3^0x/λ_3^0, a e^ν/Λ^-_0 x/Λ^-_0}.This Lyapunov functional is equivalent to the L^2 norm of the system, that is, there exist two constantk_1 > 0 and k_2>0 such thatk_1||ϑ||_L^2^2 ≤ V_0(t) ≤ k_2||ϑ||_L^2^2.It can also be expressed in terms of the original state as V_0(t)=∫_0^1 (𝒦_0𝐰(x,t))^𝖳 D_0(x)𝒦_0𝐰(x,t)dx.Taking the time derivative of V_0(t) and integrating by parts, we getV_0(t) ≤ -ν V_0(t) + ∫_0^1 2𝒬(x,t) D_α^0 (Σ_0^++(x)𝒬(x,t) + Σ_0^+-(x) 𝐰^-(x,t) )dx ≤ - η V_0(t) +(q_10^2+q_20^2+q_30^2 - a) β^2(0,t),where η = ν - 2/||Λ^+|| k_1 (max_x∈ [0,1] ||Σ^++_0(x)||+ (1 + 1/m_ϑ) max_x∈ [0,1] ||Σ^+-_0||(x)),D_α^0= Diag{e^-ν/λ_1^0x/λ_1^0, e^-ν/λ_2^0x/λ_2^0, e^-ν/λ_3^0x/λ_3^0}.We choose a>0 and ν>0 such thatq_10^2+q_20^2+q_30^2 - a ≤ 0, η>0.where q_10, q_20, q_30 are the elements of Q_0. Consequently, we obtain V̇_0(t)≤ -η V_0(t), which implies the L^2-exponential stability of the system.§.§ Mean-square exponential stabilizationWe now state the well-posedness of the stochastic system and then give the main result on mean-square exponential stability. We must first guarantee that the stochastic system (<ref>)-(<ref>) with the nominal controller (<ref>) has a unique solution. We have the following lemma,For any initial conditions of the Markov system 𝐰(x,t) ∈ L^2[0,1] and any initial states 𝒳(t) = 𝒳(0) for the stochastic parameters, the system (<ref>)-(<ref>) with the nominal control law (<ref>) has a unique solution such that for any t,𝔼{ ||𝐰(x,t)|| } < ∞,where the 𝔼{·} denotes the mathematical expectation. This lemma can be easily proved by adjusting the results in <cit.>. Almost every sample path of our stochastic processes are right-continuous step functions with a finite number of jumps in any finite time interval. We can then find a sequence { t_k: k =0,1,…} of stopping times such that t_0 = 0, lim_t→∞ t_k = ∞, and 𝒳(t)=𝒳(t_k) on t_k ≤ t < t_k+1. We start from time t=0 and then use <cit.> for each time interval in the whole time period. Thus, the stochastic system (<ref>)-(<ref>) has a unique solution.The main goal of this paper is to prove that the control law (<ref>) can still stabilize the stochastic system (<ref>)-(<ref>), provided the nominal parameter 𝒳_0 is sufficiently close to the stochastic ones on average. More precisely, we want to show the following sufficient condition for robust stabilization. There exists a constant ϵ^⋆>0, such that if, for all time t>0,𝔼(||𝒳(t) - 𝒳_0||) ≤ϵ^⋆,then the closed-loop system (<ref>)-(<ref>) with the control law (<ref>) is mean-square exponentially stable, namely, there exist ς,ζ>0 such that:𝔼_[0,(p(0),𝒳(0)](p(t)) ≤ςe^-ζ t p(0),where p(t)=∫_0^1 ||𝐰(x,t)||_2^2dx, while 𝔼_[0,(p(0),𝒳(0)] denotes the conditional expectation at time t=0 with initial settings of p(t) = p(0), 𝒳(t) = 𝒳(0).This theorem will be proved in the next section. § LYAPUNOV ANALYSISIn this section, we consider the closed-loop stochastic system with the nominal controller (<ref>). The objective is to prove Theorem 2. The proof will rely on a Lyapunov analysis. 
More precisely, we will consider the following stochastic Lyapunov functional candidateV(t)=∫_0^1 (𝒦_0𝐰(x,t))^𝖳 D(t,x)𝒦_0𝐰(x,t)dx,where the diagonal matrix D(t,x)=D_j(x) if 𝒳(t)=𝒳_j, and whereD_j(x) = Diag{e^-ν/λ_1^jx/λ_1^j, e^-ν/λ_2^jx/λ_2^j, e^-ν/λ_3^jx/λ_3^j, a e^ν/Λ^-_j x/Λ^-_j}.We consider that the parameters ν and a introduced in the definition of D_j can still be tuned. In the nominal case 𝒳(t) = 𝒳_0, the Lyapunov functional V(t) corresponds to V_0. It is noted that inequality (<ref>) still holds for V(t) (even if the constants k_1 and k_2 may change). §.§ Target system in stochastic mode 𝒳_jIn this section, we consider that 𝒳(t)=𝒳_j at time t. We can define the state ϑ=(α_1,α_2,α_3,β)=𝒦_0𝐰. Our objective is first to obtain the equations verified by the state ϑ that appears in the Lyapunov functional (<ref>).It verifies the following set of equations𝒬_t(x,t)+Λ^+_j 𝒬_x(x,t) = Σ^++_j(x)𝐰^+(x,t) +Σ^+-_j(x) 𝐰^-(x,t), β_t(x, t)-Λ^-_j β_x(x, t) =𝐟_1j(x) 𝐰^+(x,t) +𝐟_2j(x) β(0, t) +∫_0^x 𝐟_3j(x,ξ)𝐰^+(ξ,t) d ξ+∫_0^x 𝐟_4j(x,ξ) 𝐰^-(ξ,t) d ξ,with the boundary conditions:𝒬(0,t)=Q_j β(0, t),β(1, t)=(R_j-R_0)𝒬(1,t),where the functions are defined by:𝐟_1j(x)= Σ^-+_j(x)+Λ^-_j 𝐊_0(x, x)+𝐊_0(x, x) Λ^+_j, 𝐟_2j(x)= -𝐊_0(x, 0) Λ^+_j Q_j+N_0(x, 0) Λ^-_j, 𝐟_3j(x,ξ)= Λ^-_j (𝐊_0)_x(x, ξ)-(𝐊_0)_ξ(x, ξ) Λ^+_j - 𝐊_0(x, ξ) Σ^++_j(ξ)-N_0(x, ξ) Σ^-+_j(ξ), 𝐟_4j(x,ξ)= Λ^-_j (N_0)_x (x, ξ)+Λ^-_j (N_0)_ξ(x, ξ) -𝐊_0(x, ξ) Σ^+-_j(ξ).All the terms that depend on 𝐰 in the target system (<ref>)-(<ref>) could be expressed in terms of ϑ using the inverse transformation 𝒦_0^-1. However, this would make the computations more complex and is not required for the stability analysis. It is important to emphasize that all the terms on the right-hand side of equation (<ref>) become small if the stochastic parameters are close enough to the nominal ones. More precisely, we have the following lemmaThere exists a constant M_0, such that for any realization 𝒳(t)=𝒳_j ∈𝒮, for any (x,ξ)∈𝒯||𝐟_𝔦j|| < M_0||𝒳_j-𝒳_0||, 𝔦∈{1,2,3,4}.Considering the function 𝐟_1j(x). For all x ∈ [0,1], we have𝐟_1j(x)= Σ^-+_j(x)+Λ^-_j 𝐊_0(x, x)+𝐊_0(x, x) Λ^+_j= (Σ^-+_j(x) - Σ^-+_0(x)) + (Λ^-_j-Λ^-_0) 𝐊_0(x,x) + 𝐊_0(x,x)(Λ^+_j-Λ^+_0).Consequently, we obtain the existence of a constant K_1>0 such that||𝐟_1j||≤ K_1||𝒳_j-𝒳_0||.The other inequalities for 𝐟_2(x), 𝐟_3(x,ξ) and 𝐟_4(x,ξ) can also be derived similarly. This finishes the proof. §.§ Derivation of the Lyapunov functionLet us consider the Lyapunov functional  V defined in equation (<ref>). Its infinitesimal generator L is defined as <cit.>L V(𝐰,s_2)=lim sup _Δ t → 0^+1/Δ t×𝔼(V(𝐰(t+Δ t), 𝒳(t+Δ t))-V(𝐰(t), 𝒳(t))).We define L_j, the infinitesimal generator of V obtainedby fixing 𝒳(t) = 𝒳_j ∈𝒮. We have L_j V(𝐰) =d V/d 𝐰(ϑ, 𝒳_j) h_j(ϑ) +∑_ℓ∈𝒮(V_ℓ(𝐰)-V_j(𝐰)) τ_j ℓ,where V_ℓ(𝐰)=V(𝐰,s_2^ℓ), and where the operator h_j is defined by h_j(ϑ)=([-Λ^+_j𝒬_x(x,t)+Σ_j^++(x)𝐰^+(x,t);+Σ_j^+-(x)𝐰^-(x,t);Λ_j^-β_x(x, t)+𝐟_1j(x)𝐰^+(x,t); + 𝐟_2j(x) β(0, t)+∫_0^x 𝐟_3j(x,ξ)𝐰^+(ξ,t) d ξ;+∫_0^x 𝐟_4j(x,ξ)𝐰^-(ξ,t) d ξ ]).To shorten the computations, we denote in the sequel V(t), LV(t), V_j(t) and L_jV(t) instead of (respectively) V(𝐰,𝒳(t)), LV(𝐰,𝒳(t)), V(𝐰,𝒳_j) and L_j(V(𝐰)). From now, we consider that 𝒳(t=0)=𝒳_i_0∈𝒮. 
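Both the nominal feedback law of the previous section and the functional V(t) above reduce to one-dimensional quadratures once the backstepping kernels and the state are sampled on a spatial grid. The sketch below (illustrative names, trapezoidal quadrature, real-valued states) shows one possible implementation under the assumption that the kernel rows K_0(1,·) and N_0(1,·) have been precomputed offline.

import numpy as np

def nominal_control(R0, K0_row, N0_row, w_plus, w_minus, xs):
    """U(t) = -R0 w^+(1,t) + int_0^1 [K0(1,s) w^+(s,t) + N0(1,s) w^-(s,t)] ds.
    R0: (3,) gain; K0_row: (n, 3) samples of K0(1, xs); N0_row, w_minus: (n,);
    w_plus: (n, 3) samples of w^+(xs, t) on the grid xs in [0, 1]."""
    integrand = (K0_row * w_plus).sum(axis=1) + N0_row * w_minus
    return float(-R0 @ w_plus[-1] + np.trapz(integrand, xs))

def lyapunov_V(theta, speeds, a, nu, xs):
    """V(t) = int_0^1 theta(x,t)^T D_j(x) theta(x,t) dx for one mode j, where
    theta: (n, 4) samples of (alpha_1, alpha_2, alpha_3, beta) on the grid xs
    and speeds = (l1, l2, l3, l4) are the positive transport speeds of that mode."""
    l1, l2, l3, l4 = speeds
    D = np.stack([np.exp(-nu * xs / l1) / l1,
                  np.exp(-nu * xs / l2) / l2,
                  np.exp(-nu * xs / l3) / l3,
                  a * np.exp(nu * xs / l4) / l4], axis=1)
    return float(np.trapz((D * theta ** 2).sum(axis=1), xs))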
We have the following lemma.There exists η>0, M_1 > 0 and d_1,d_2 > 0 such that the Lyapunov functional V(t) satisfies∑_j=1^r P_i j(0, t) L_j V(t) ≤ -V(t)(η-d_1 𝒵(t) .-(M_1+d_1 r τ^⋆) 𝔼(||𝒳(t)-𝒳_0||))+∑_k=1^3 (d_2 𝔼(||𝒳(t)-𝒳_0||)-e^-ν/λ̅)α_k^2(1,t)where the function 𝒵(t) is defined as:𝒵(t)=∑_j=1^r||𝒳(t)-𝒳_0||(∂_t P_i j(0, t)+𝔠_j P_i j(0, t)) In what follows, we denote c_i positive constants. We will first compute the first term of L_j. Consider that 𝒳(t) =𝒳_j. The Lyapunov functional rewritesV_j(t)=∫_0^1ϑ^𝖳(x,t)D_j(x) ϑ(x,t)dx, d V_j/d 𝐰(𝐰) h_j(𝐰)≤-η V_j(t)+M_1 ||𝒳_j-𝒳_0|| V(t)+(c_2|𝒳-𝒳|ε_0+q_1j^2+q_2j^2+q_3j^2-a) β^2(0, t) +∑_k=1^3 (a e^ν/Λ^-_j((R_j)_k-(R_0)_k)^2 -e^-ν/λ_kjL)α_k^2(1,t),where η = ν - 2/||Λ^+|| k_1 (max_x∈ [0,1] ||Σ^++_0(x)||+ (1 + 1/m_ϑ) max_x∈ [0,1] ||Σ^+-_0(x)||)-2 𝒳 c_2/k_1ε_0, M_1=c_4+ac_3+c_2/k_1ε_0+c_1.The coefficients ν, ε_0 and a are chosen such thatη >0,c_2|𝒳-𝒳|ε_0+q_1j^2+q_2j^2+q_3j^2-a <0.where the q_1j, q_2j, q_3j are the elements of Q_j, 𝒳 and 𝒳 are the upper and lower bound of the stochastic parameters. There exists a constant C_0 such that for all 1≤ j ≤ r, V_j(𝐰) ≤ C_0 V(𝐰). Thus, we get the following inequality:d V_j/d 𝐰(𝐰) h_j(𝐰)≤-η̅V(t)+M_1 ||𝒳_j-𝒳_0|| V(t) +∑_k=1^3 (a e^ν/Λ^-_j((R_j)_k-(R_0)_k)^2 -e^-ν/λ_kjL)α_k^2(1,t),where η̅=η C_0. Now, we calculate the second term of L_j. We have:∑_l=1^r(V_l(𝐰)-V_j(𝐰)) τ_j l =∑_l=1^rτ_j l( ∫_0^1 𝒦_0^𝖳(𝐰(x,t)) D_l(x) 𝒦_0𝐰(x,t) dx.- .∫_0^1 𝒦_0^𝖳𝐰(x,t) D_j(x) 𝒦_0𝐰(x,t) dx).≤d_1 ∑_l=1^r τ_jl||𝒳_l - 𝒳_j|| V(t),We then calculate the quantity L = ∑_j=1^r P_ij(0,t) L_jV(t). Using the property of the expectation and we getL≤ -V(t)(η̅- (M_1+d_1 r τ^⋆) 𝔼(||𝒳(t)-𝒳_0||) . .+ d_1 ∑_j=1^r||𝒳_j-𝒳_0||(∂_t P_i j(0, t)+𝔠_j P_i j(0, t))) +∑_k=1^3 (d_2 𝔼(||𝒳(t)-𝒳_0||)-e^-ν/λ̅)α_k^2(1,t),This finish the proof of Lemma <ref>.§.§ Proof of Theorem 2Notice first that if ϵ^⋆ is small enough (namely smaller than e^-ν/λ̅/d_2) and if inequality (<ref>) holds, the term ∑_k=1^3 (d_2 𝔼(||𝒳(t)-𝒳_0||)-e^-ν/λ̅)α_k^2(1,t) < 0, then we have the following result based on Lemma <ref>: ∑_j=1^r P_i j(0, t) L_j V(t) ≤ -V(t)(η-d_1 𝒵(t) .-(M_1+d_1 r τ^⋆) 𝔼(||𝒳(t)-𝒳_0||)).We define the following function:ϕ(t) = η- d_1 𝒵(t) - (M_1 + d_1 r τ^⋆)𝔼(||𝒳(t)-𝒳_0||).And then, using the functional Ψ(t):Ψ(t) = e^∫_0^t ϕ(y) dy V(t).With the definition of Ψ(t), taking the expectation of the infinitesimal generator L of Ψ(t) , we get:𝔼(∑_j=1^r P_ij(0,t) L_jV(t))≤ - 𝔼(V(t)ϕ(t)).We know that 𝔼(∑_j=1^r P_ij(0,t) L_jV(t)) =𝔼(LV(t)), thus𝔼(LV(t))≤ - 𝔼(V(t)ϕ(t)).Then applying the Dynkin's formula <cit.>,𝔼(Ψ(t)) - Ψ(0) = 𝔼(∫_0^t LΨ(y)dy) ≤ 0.To calculate the 𝔼(Ψ(t)), we write down the formulation of Ψ(t):𝔼(Ψ(t)) =𝔼( V(t) e^∫_0^t ϕ(y)dy)= 𝔼( V(t) e^∫_0^t (η - d_1 𝒵(y) - (M_1 + d_1 r τ^*) 𝔼(||𝒳(y)-𝒳_0||))dy).We already know that ∫_0^t 𝒵(y)dy =∫_0^t (∑_j=1^r||𝒳(y)-𝒳_0||(∂_y P_i j(0, y) ... +𝔠_j P_i j(0, y)) V(y)) dy ≤𝔼(||𝒳(t)-𝒳_0||) + rτ^⋆∫_0^t 𝔼 (||𝒳(y)-𝒳_0||) dy,where τ^⋆ is the largest value of the transition rate. Using this inequality, we get𝔼(Ψ(t))≥𝔼( V(t) e^(-d_1 ϵ^⋆ + ∫_0^t (η - (M_1 + 2d_1 r τ^⋆) ϵ^⋆ dy)).Then we take ϵ^⋆ as ϵ^⋆ = η/2(2d_1 r τ^⋆ + M_1),thus we have 𝔼(Ψ(t)) ≥𝔼( V(t) e^(-d_1 ϵ^⋆ + η/2 t)).From the before proof, we know 𝔼(Ψ(t)) ≤Ψ(0), such that𝔼(V(t)) ≤e^d_1 ϵ^⋆e^-ζ t V(0),where ζ = η/2. The function V(t) is equivalent to the L^2-norm of the system. This concludes the proof of Theorem <ref>.§ NUMERICAL SIMULATIONIn this section, we illustrate our results with simulations. We consider that only the parameter λ_4 is stochastic. Its nominal value is -0.024. 
The five other possible values are λ_4^1=-0.02, λ_4^2=-0.023, λ_4^3=-0.024, λ_4^4=-0.025, and λ_4^5=-0.03, and the initial transition probabilities are chosen as (0.02, 0.32, 0.32, 0.32, 0.02). The transition rates τ_ij are defined in the same way as in <cit.>. The corresponding matrices in the nominal case are set as: Λ^+_0 = [[ 0.0081 0 0; 0 0.0037 0; 0 0 0.0065 ]], Λ^-_0 = -0.024, Q_0 = [ -12.29; -3; 8.45 ], R_0 = [ 0.0011 -0.1601 0.0034 ]. Solving the Kolmogorov forward equation, we obtain the probability of each state during the simulation, as shown in Fig. <ref>. From the probabilities of the Markov states, the system stays near the nominal mode over the entire simulation period. Using this Markov process, we conduct the simulation for t=400 with sinusoidal initial conditions; the closed-loop results are shown in Fig. <ref>. All the states with Markov jumping parameters almost converge to zero under the nominal control law, which is consistent with the theoretical results. § CONCLUSIONS In this paper, we proposed a backstepping control law that mean-square exponentially stabilizes a 4× 4 system of coupled hyperbolic PDEs with Markov jumping parameters. The full-state feedback boundary control law was derived using the backstepping method for a nominal system. By applying Lyapunov analysis, we proved that this nominal control law can stabilize the PDE system with Markov jumping parameters provided the nominal parameters are sufficiently close to the stochastic ones on average. Finally, we used numerical examples to illustrate the efficiency of our approach. Future work will focus on its application to traffic flow systems.
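As a small companion to the simulation above, the mode-probability curves can be reproduced by integrating the Kolmogorov forward equation. The sketch below assumes constant transition rates and a simple forward-Euler step; the rate matrix shown is only a placeholder, since the rates used in the paper follow the cited reference and are not reproduced here.

import numpy as np

def mode_probabilities(tau, p0, T, dt=1e-2):
    """Integrate dp/dt = p @ Q with Q = tau - diag(c), c_j = sum_k tau[j, k],
    i.e. the Kolmogorov forward equation for a constant rate matrix tau
    (tau[j, j] = 0); p0 is the initial mode distribution, p(T) is returned."""
    Q = tau - np.diag(tau.sum(axis=1))
    p = np.asarray(p0, dtype=float)
    for _ in range(int(round(T / dt))):
        p = p + dt * (p @ Q)
    return p

p0 = np.array([0.02, 0.32, 0.32, 0.32, 0.02])   # initial distribution from the text
rates = 0.05 * (np.ones((5, 5)) - np.eye(5))    # hypothetical placeholder rates
print(mode_probabilities(rates, p0, T=400.0))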
http://arxiv.org/abs/2312.16636v1
{ "authors": [ "Yihuai Zhang", "Jean Auriol", "Huan Yu" ], "categories": [ "math.OC", "cs.SY", "eess.SY", "math.AP" ], "primary_category": "math.OC", "published": "20231227164754", "title": "Robust Boundary Stabilization of Stochastic Hyperbolic PDEs" }
Optimal Beamforming Structure and Efficient Optimization Algorithms for Generalized Multi-Group Multicast Beamforming Optimization Tianyu Fang, Yijie Mao, Member, IEEE This work has been supported in part by the National Nature Science Foundation of China under Grant 62201347; and in part by Shanghai Sailing Program under Grant 22YF1428400. T. Fang and Y. Mao are with the School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China (e-mail: {fangty, maoyj}@shanghaitech.edu.cn).====================================================================================================================================================================================================================================================================================================================================================================================================== In this work, we focus on solving non-smooth non-convex maximization problems in multi-group multicast transmission. Leveraging Karush-Kuhn-Tucker (KKT) optimality conditions and successive incumbent transcending (SIT) duality, we thoroughly analyze the optimalbeamforming structure fora set ofoptimization problems characterized by a general utility-based objective function.By exploiting the identified optimal structure, we further unveil inherent low-dimensional beamforming structures within the problems, which are asymptotically optimal in various regimes of transmit signal-to-noise ratios (SNRs) or the number of transmit antennas. Building upon the discovered optimal and low-dimensional beamforming structures, we then propose highly efficient and toolbox-free optimization algorithms to solve a specific multi-group multicast optimization problem based on the weighted sum rate (WSR) utility function. The proposed algorithms first use the cyclic maximization (CM) framework to decompose the problem into multiple subproblems, each has an optimal or low-dimensional closed-form beamforming solution structure. Then, we propose the projected adaptive gradient descent (PAGD) algorithmto compute the optimal Lagrangian dual variables for each subproblem.Numerical results show that the proposed algorithms maintain comparable or improved WSR performance compared to baselinealgorithms, while dramatically reducing the computational complexity. Notably, the proposed ultra-low-complexity algorithms based on low-dimensional beamforming structures achieve near optimal WSR performance with extremely low computational complexity. This complexity remains independent of the number of transmit antennas, making them promising and practical for extremely large multiple-input multiple-output (XL-MIMO) applications in 6G. Multi-group multicast, transmit beamforming optimization, weighted sum-rate maximization, optimal beamforming structure.§ INTRODUCTIONWireless communication systems are continually advancing to fulfill the escalating demands for high data rates, reliability, and energy efficiency. Within this landscape, the application of multicast strategies is crucial to effectively meet the communication requirements of numerous users seeking simultaneous access to identical data. Physical-layer multicast beamforming was first proposed in <cit.> and has drawn much attention in recent years for its potential to support multi-group multicasting in various wireless services and applications, such as videoconferencing, mobile commerce, and intelligent transportation systems. 
With the emerging extremely large-scale multiple-input multiple-output (XL-MIMO) for 6G <cit.>, it is vital to develop low-complexity multi-group multicast beamforming solutions to address the high computational demands of these large-scale systems.The design of transmit beamforming in wireless communication systems aims to improve spectral or energy efficiency, typically involving the following two types of optimization problems: 1) total transmit power minimization under minimumsignal-to-interference-and-noise ratio (SINR) constraints for all user—the quality of service (QoS) problem; 2) the system utility maximization subject to a total transmit power constraint—the utility problem. These two types of problems are interconnected, and solutions to one can provide insight into the other. However, in multi-group multicast transmission, these two problems are generally NP-hard even in single-group scenarios <cit.>. Various algorithms have been proposed to address multicast beamforming problems, including globally optimal and suboptimal algorithms. In <cit.>, a globally optimal beamforming optimization algorithm employing the branch and bound (BB) method has been proposed for the QoS problem.Although this global optimal algorithm exhibits attractive performance, it is tailored to a single-group scenario and is hindered by high computational complexity.Therefore, state-of-the-art work focuses primarily on developing suboptimal algorithms capable of achieving near-optimal performance. Among the suboptimal algorithms for solving multicast beamforming problems, semi-definite relaxation (SDR) <cit.> stands out as a popular convex relaxation (CR)-based method that achieves a near-optimal solution. However, as the size of the multicast network increases, the performance of SDR degrades quickly and the computational complexity increases sharply due to the auxiliary relaxation variables. To address these drawbacks, another category of optimization algorithms based on the convex approximation (CA) has emerged and received extensive investigation. Algorithms utilizing various techniques, including successive convex approximation (SCA), <cit.>, weighed minimum mean square error (WMMSE) <cit.>, fractional programming (FP) <cit.>, and majorization-minimization (MM) <cit.>, have been proposed to solve various multi-group multicast beamforming optimization problems. Generally speaking, these CA-based algorithms are mathematically equivalent and share the same ability to find a near-optimal solution for the original problem <cit.>.All the aforementioned algorithms either transform the original non-convex problem into a high-dimensional block-wise convex problem (i.e., WMMSE and FP) or construct a sequence of convex surrogate functions at a given point (i.e., SCA and MM). And each convex subproblem is solved using standard interior-point method (IPM), typically implemented by a dedicatedsolver in optimization toolboxes, such as CVX <cit.>.However, the practical use of these algorithms is hampered by the undesirable computational complexity resulting from the iterative use of CVX optimization solvers.To further reduce the computational complexity, several approaches shift towards closed-form and low-complexity beamforming designs for each convex subproblem obtained from the CA-based methods. Specifically, for the smooth QoS problems, alternating direction method of multipliers (ADMM) <cit.> and extragradient-based<cit.> algorithms have been proposed to solve each subproblem. 
For the non-smooth utility problems, subgradient-based <cit.> and log-sum-exp (LSE)-based <cit.> algorithms have been introduced to handle each non-smooth subproblem. These methods generally offer lower complexity than the CVX-based algorithms. However, with the advent of XL-MIMO, which further boosts the number of transmit antennas by at least an order of magnitude compared to massive MIMO (e.g., several hundred or even thousands of transmit antennas), the computational complexity of these approaches still grows sharply with the number of transmit antennas. While certain low-complexity beamforming approaches, such as zero-forcing (ZF) <cit.> and weighted maximum ratio transmission (MRT) <cit.>, are employed to reduce thedimensionof the optimization problem, a unified analysis of the conditions under which these beamforming techniques are effective is currently lacking.The aforementioned ZF and MRT low-complexity beamforming approaches originate from the optimal beamforming structure <cit.>. For unicast-only transmission,the optimal beamforming structure was identified in <cit.> for both QoS and general utility problems. It was demonstrated in <cit.> that the optimal beamforming structures for these two problem types are equivalent. For multi-groupmulticast transmission,the optimalbeamforming structure was identified in <cit.>, with a particular focus on the QoS and max-min fair (MMF) problems. Building upon the optimal multi-group multicast beamforming structure identified in <cit.>, some ultra-low-complexity algorithms <cit.> are proposed for large-scale communication networks. However, these algorithms mainly focus onaddressing the QoS problem or its inverse MMF problem by iteratively solving the QoS problem via a bisection search. This approach is not applicable for solving the utility problem due to the potentially high-dimensional search.To the best of our knowledge, the optimal multi-group multicast beamforming structure for the generalized utility function-based maximization problems has not yet been identified. Additionally, efficient algorithms for directly solving these non-convex non-smooth problems with popular utility functions, such as weighted sum rate (WSR), geometric mean or harmonic mean of group rates are yet to be explored. This paper aims to bridge this gap by discovering the optimal multi-group multicast beamforming structure and efficient algorithms for solving the generalized utility function maximization problems. The key contributions of this paper are outlined as follows: * A thorough analysis of the optimal multi-group multicast beamforming structure: By leveraging the Karush-Kuhn-Tucker (KKT) optimality conditions and the successive incumbent transcending (SIT) duality <cit.>, we identify the optimal multi-group multicast beamforming structure for a generalized utility function-based maximization problem. In contrast to the approach in <cit.>, our method firstderives out the optimal solution directly through the first-order optimality conditions of the original QoS problem. We then establish the SIT duality between the QoS problem and the utility problem, revealing the multi-group multicast beamforming for both types of problems shares the same optimal structure. * A comprehensive exploration of low-dimensional beamforming structures: The revealed optimal beamforming structure provides valuable insights into the multi-group multicast beamforming design. 
This inspires us toexplore inherent low-dimensional beamforming structures that are asymptotically optimal in various regimes of transmit signal-to-noise (SNR) or the number of transmit antennas. These low-dimensional beamforming structures are particularly beneficial in reducing the computational complexity of beamforming design, especially in XL-MIMO systems. * Development of highly efficient and optimization toolbox-free algorithms based on the identified beamforming structures: Leveraging the identified optimal and low-complexity beamforming structures, we propose highly efficient and optimization toolbox-free algorithms to solve the non-smooth utility problem. Specifically, we first propose to utilize the cyclic maximization (CM) framework to decompose the problem into multiple subproblems, each has an optimal or low-dimensional closed-form beamforming solution structure.Then, the projected adaptive gradient descent (PAGD) algorithm is proposed to compute the optimal Lagrangian dual variables for each subproblem. * Numerical simulations demonstrate the superior efficiency of the proposed algorithms, as evidenced by their low central processing unit (CPU) time consumption, all while maintaining comparable or improved WSR performance compared to existing optimization algorithms. Surprisingly, the proposed ultra-low-complexity algorithms, leveraging low-dimensional beamforming structures, attain near-optimal WSR performance with remarkably low computational complexity. Importantly, this complexity remains independent of the number of transmit antennas, making them promising and practical for extensive applications in 6G, particularly in XL-MIMO scenarios.Organization: The rest of this paper is organized as follows. Section <ref> and Section <ref> provide thorough analysis of the optimal andlow-complexity multi-group multicast beamforming structures, respectively, for the generalized utility problem. In Section <ref>, we leverage the CM framework and Lagrange duality to derive closed-form solution for each convex subproblem. Simulation results are presented in Section <ref>. In the end, Section <ref> concludes the paper.Notations: Vectors and matrices are represented by bold lower-case and upper-case letters, respectively. The complex space is denoted by ℂ,the real space by ℝ, and ℝ_+ denotes the set of real values larger than 0. Expectation over a random variable s is denoted as 𝔼[s]. The magnitude of a complex number x is expressed as |x|. The circularly symmetric complex Gaussian distribution (CSCG) with zero mean and variance σ^2 is denoted as 𝒞𝒩(0,σ^2). Theconjugate transpose is represented by (·)^H. The optimal solution for a convex subproblem is denoted as (·)^⋆. The local and global optimal solutions for the original non-convex problem are represented by (·)^♢ and (·)^∘, respectively. diag{𝐲} denotes a diagonal matrix with the entries of 𝐲 along the main diagonal, and blkdiag{𝐲_1,⋯,𝐲_N} is block-diagonal matrix with vector or matrices {𝐲_1,⋯,𝐲_N} along its main diagonal.§ OPTIMAL MULTI-GROUP MULTICAST BEAMFORMING STRUCTURE§.§ System Model and Problem FormulationConsider a downlink multi-group multicast wireless communication network, where a base station (BS) equipped with L antennas simultaneously serving G non-overlapping user groups indexed by 𝒢={1,⋯,G}. In each user group g, there are K_g single-antenna users indexed by 𝒦_g={1,⋯,K_g},∀ g∈𝒢. The total number of users in the system isK=∑_g=1^G K_g. All users within the same group g require the same multicast stream s_g. 
Without loss of generality, the transmit data stream vector 𝐬=[s_1,⋯,s_G]^T is assumed to have zero mean and identity variance, i.e., 𝔼{𝐬𝐬^H }=𝐈_G. Let 𝐰_g∈ℂ^L× 1 be the corresponding beamforming vector for the stream s_g,∀ g∈𝒢, the transmitted signal at the BS is ∑_g=1^G𝐰_g s_g. The total transmit power is required to be less than or equal to the upperbound P_t, i.e., ∑_g=1^G𝐰_g^2≤ P_t. Let 𝐡_gk^H∈ℂ^1× L be the channel vector from the BS to the user k in group g,∀ k∈𝒦_g, ∀ g∈𝒢, the signal received at user kin group g is expressed as y_gk=𝐡_gk^H∑_i=1^G𝐰_i s_i+n_gk,∀ k∈𝒦_g,∀ g∈𝒢,where n_gk∼𝒞𝒩(0,σ_gk^2) denotes the additive white Gaussian noise (AWGN)at user k in group g with σ_gk^2 denoting the noise power.Each user in group g decodes the intended multicast stream s_g with the SINR of γ_gk= | 𝐡_gk^H𝐰_g|^2(∑_i=1,i≠ g^G |𝐡_gk^H𝐰_i|^2+σ_gk^2)^-1,∀ k∈𝒦_g,∀ g∈𝒢.Consequently, the achievable rate for the multicast stream s_g isR_g=min_k∈𝒦_g{log(1+γ_gk) },∀ g∈𝒢.In this study, our primary focus is on addressing a highly generalized multi-group multicast beamforming optimization problem, characterized by an utility function denoted as f(R_1,⋯,R_G). Following <cit.>, we assume that f(·) is a continuous, strictly increasing function concerning the achievable SINRs of each user. It should be the operations that preserve concavity, such as non-negative weighted sums, pointwise minimum, and so on. The generalized utilityproblem for multi-group multicast transmission is formulated as 𝒰: max_𝐖f(R_1,⋯,R_G) s.t. Tr(𝐖𝐖^H)≤ P_t, where 𝐖≜[𝐰_1,⋯,𝐰_G]. This problem poses two major challenges: the presence of multiple non-convex fractional SINR expressions (<ref>) and the non-smooth nature of the achievable rate expressions (<ref>) for multicast streams. Notably, the multicast beamforming design is inherently NP-hard, even when the problem (<ref>) is reduced to the single-group setting where inter-group interference is absent <cit.>.To address the non-smooth characteristics of problem (<ref>), we first investigate themore tractable QoS problem in (<ref>), and subsequently extend the findings to problem (<ref>) by exploring their interrelations. The QoS problem is formulated as follows 𝒫: min_𝐖 Tr(𝐖𝐖^H) s.t. γ_gk≥α_g, ∀ k∈𝒦_g,∀ g∈𝒢, where α_g refers to the QoS threshold for multicast stream s_g. Users within the same group share a common QoS threshold since the achievable rate is constrained by the worst-case user within each user group. This differs from the unicast-only scenario, where each user has an individual QoS threshold. Problem (<ref>) has been proven to be non-convex and NP-hard <cit.>.In the next subsections, we focus on identifying the optimal multicast beamforming structure for both problem (<ref>) and problem (<ref>), using an approach different from the iterative SCA method in <cit.>.§.§ Optimal Multicast Beamforming Structure for QoS problemIn this subsection, we identify the optimal multicast beamforming structure for the power minimization problem (<ref>). Specifically, we begin by using Lemma <ref> to show that the KKT conditions of (<ref>) are necessary conditions for all stationary points. Following this, we directly unveil the optimal beamforming structure of (<ref>) based on its KKT conditions. 
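For later reference, evaluating the objective of the utility problem for a candidate beamforming matrix only requires the SINR and worst-user rate expressions above. A minimal sketch follows (illustrative names; a common noise power is assumed only to keep the snippet short, and the natural logarithm is used as written in the rate expression):

import numpy as np

def multicast_group_rates(H_list, W, sigma2):
    """Per-group multicast rates R_g = min_k log(1 + SINR_gk).
    H_list[g]: (L, K_g) channels of the users in group g (one user per column);
    W: (L, G) beamforming matrix; sigma2: common noise power."""
    G = W.shape[1]
    rates = np.zeros(G)
    for g, Hg in enumerate(H_list):
        sinr = []
        for k in range(Hg.shape[1]):
            h = Hg[:, k]
            signal = np.abs(h.conj() @ W[:, g]) ** 2
            interference = sum(np.abs(h.conj() @ W[:, i]) ** 2
                               for i in range(G) if i != g)
            sinr.append(signal / (interference + sigma2))
        rates[g] = np.log(1.0 + min(sinr))
    return rates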
Linear independence constraint qualification (LICQ) <cit.> holds for problem (<ref>) when all channel vectors {𝐡_gk,∀ k∈𝒦_g,∀ g∈𝒢} are linearly independent.Proof: Constraint (<ref>) can be rewritten as∑_i=1,i≠ g^G1/σ_gk^2|𝐡_gk^H𝐰_i|^2+1-1/α_gσ_gk^2|𝐡_gk^H𝐰_g|^2_c_gk(𝐖)≤ 0.Denote the left-hand side of (<ref>) as c_gk(𝐖), then the gradient of c_gk(𝐖) with respect to 𝐖 is given as ∇ c_gk(𝐖)=2/σ_gk^2𝐡_gk𝐡_gk^H[𝐰_1,⋯,-1/α_g𝐰_g,⋯, 𝐰_G ].All gradients {∇ c_gk(𝐖) ,∀ k∈𝒦_g,∀ g∈𝒢} are linearly independent and therefore satisfy LICQ if the channel vectors {𝐡_gk,∀ k∈𝒦_g,∀ g∈𝒢} exhibit linear independence. ▪ The power minimization problem (<ref>) for multi-group multicasting beamforming would be infeasible when the channel vectors from different groups are linearly dependent, since the interference from other groups can not be eliminated by the beamforming vectors. Without loss of generality, we assume that all channel vectors {𝐡_gk} are linearly independent, and therefore problem (<ref>) is feasible. This assumption holds for widely adopted channel models, such as Rayleigh fading. Lemma <ref> implies that the KKT conditions are necessary conditions for any stationary point for problem (<ref>). Therefore, we are able to analyze the problem (<ref>) using the KKT optimality conditions. By introducing a set of Lagrange multipliers {λ_gk} for the corresponding reformulated SINR constraints (<ref>), we define the Lagrangian function of (<ref>) as ℒ_(<ref>)(𝐖,λ)=1/2∑_g=1^G 𝐰_G^2+1/2∑_g=1^G∑_k=1^K_gλ_gk(∑_i=1,i≠ g^G1/σ_gk^2|𝐡_gk^H𝐰_i|^2+1-1/α_gσ_gk^2|𝐡_gk𝐰_g|^2), where λ≜[λ_1,⋯,λ_G] with λ_g=[λ_g1,⋯,λ_gK_g]^T. The first-order derivative of ℒ_(<ref>)(𝐖,λ) with respective to 𝐰_g is given by ∂ℒ_(<ref>)/∂𝐰_g=𝐰_g+∑_i≠ g^G∑_k=1^K_iλ_ik/σ_ik^2𝐡_ik𝐡_ik^H𝐰_g-∑_k=1^K_gλ_gk/α_gσ_gk^2𝐡_gk𝐡_gk^H𝐰_g. Therefore, for any stationary point 𝐖^♢ of problem (<ref>), there exists a set of Lagrange multipliers {λ_gk^♢} satisfying the stationary conditions, i.e., ∂ℒ_(<ref>)/∂𝐰_g^♢=0, which leads to (𝐈_L +∑_i=1^G∑_k=1^K_iλ_ik^♢/σ_ik^2𝐡_ik𝐡_ik^H)𝐰_g^♢=∑_k=1^K_gλ_gk^♢/σ_gk^2(1+1/α_g)𝐡_gk𝐡_gk^H𝐰_g^♢.Equation (<ref>) is obtained from (<ref>) by adding and subtracting the term ∑_k=1^K_gλ_gk^♢/σ_gk^2𝐡_ik𝐡_ik^H𝐰_g^♢, and then setting it to zero. The derived locally optimal beamforming solution is given as𝐰_g^♢=(𝐈_L +∑_i=1^G∑_k=1^K_iλ_ik^♢/σ_ik^2𝐡_ik𝐡_ik^H)^-1∑_k=1^K_gλ_gk^♢/σ_gk^2(1+1/α_g)𝐡_gk𝐡_gk^H𝐰_g^♢.The global-optimal beamforming solution of problem (<ref>) aligns with the beamforming structure in (<ref>), as it belongs to one of the local-optimal solutions of (<ref>). For simplicity, we write out the optimal beamforming structure in matrix form as shown in the following Theorem <ref>. The optimal multi-group multicast beamforming structure for problem (<ref>) is 𝐰_g^∘=( 𝐈_L+∑_i=1^G𝐇_i Θ_i^∘𝐇_i^H)^-1𝐇_g𝐝_g^∘, ∀ g∈𝒢, where 𝐇_i≜ [𝐡_i1,⋯,𝐡_iK_i], Θ_i^∘≜diag{θ_i1^∘,⋯, θ_iK_i^∘} with θ_ik^∘=λ_ik^∘/σ_ik^2, 𝐝_g^∘≜ [d_g1^∘,⋯,d_gK_g^∘]^T with d_gk^∘=λ_gk^∘/σ_gk^2(1+1/α_g)𝐡_gk^H𝐰_g^∘, and {λ_gk^∘} are corresponding optimal dual variables. Theorem <ref> coincides with the optimal structure discovered in <cit.>. But the proof is much simpler and more intuitive, since it is built on the LICQ and KKT conditions of the original problem.By directly setting equation (<ref>) equal to zero, the optimal beamfroming structure for problem (<ref>) has another equivalent form, as given in the following Corollary <ref>. 
The optimal multicast beamforming solution 𝐰_g^∘ in (<ref>) has the following equivalent form 𝐰_g^∘=( 𝐈_L+∑_i=1,i≠ g^G𝐇_i Θ_i^∘𝐇_i^H)^-1𝐇_g𝐝̃_g^∘, ∀ g∈𝒢,where Θ_i^∘ is the same as in (<ref>), and 𝐝̃_g^∘≜ [d̃_g1^∘,⋯,d̃_gK_g^∘]^T with d̃_gk^∘=λ_gk^∘/α_gσ_gk^2𝐡_gk^H𝐰_g^∘.Although the parameters Θ_g^∘ and 𝐝_g^∘ are challenging to obtain due to the NP-hard nature of problem (<ref>), the optimal multicast beamforming structure brings valuable insights to the beamforming design. This is particularly evident when it is used to reduce the dimensions of optimization variables, which we will discuss later.So far, we have attained the optimal beamforming structure for the power minimization problem (<ref>). However, it remains challenging to solve the general utility problem (<ref>). In the following subsections, we will identify the optimal multicast beamforming structure for problem (<ref>) and discuss some valuable insights. This is a major contribution of this work. §.§ Optimal Multi-group Multicast Beamforming Structure for General Utility Function MaximizationIn this subsection, we aim to identify the optimal beamforming structure for the generalized utility problem (<ref>) using the SIT duality approach. Introduced in <cit.>, SIT duality is an optimization approach for calculating the global-optimal solution for non-convex optimization problems. It has shown its effectiveness in solving various resource allocation problems, as demonstrated in <cit.>. To illustrate the SIT principle, we first exchange the objective function and the constraint in (<ref>), resulting in the following SIT dual problem min_𝐖 Tr(𝐖𝐖^H)s.t.f(R_1,⋯,R_G)≥ f(β_1^∘,⋯,β_G^∘), where β_g^∘,∀ g∈𝒢 denotes the optimal achievable rate for the multicast stream s_g at the global-optimal solution of the original problem (<ref>). Given the assumption that f(R_1,⋯,R_G) is strictly increasing with respect to {R_1,⋯,R_G}, problem (<ref>) can be further reformulated as problem (<ref>) where the QoS threshold is given as α_g^∘=exp(β_g^∘)-1. The SIT principle tells that the optimal solution of (<ref>) can be obtained by solving a sequence of power minimization problems (<ref>) with increasing SINR constraints α_g^∘. The optimal α_g^∘ can be obtained by a G-dimensional bisection search. Based on the principle of SIT, the SIT duality between problem (<ref>) and problem (<ref>) can be established. Specifically, let 𝐇=[𝐇_1,⋯,𝐇_G] and σ=[σ_11,⋯,σ_GK_G]^T, a mapping of the general utility problem (<ref>) is defined as 𝒰: ℝ_+→ℝ_+^G,β^∘=𝒰(P_t| 𝐇,σ), where β^∘=[β_1^∘,⋯,β_G^∘]^T. 𝒰(P_t|𝐇,σ) solves problem (<ref>) based on the input parameter P_t, the output corresponds to using the optimal solution 𝐖^∘to compute the optimal rate vector β^∘. Also, define the corresponding mapping of the power minimization problem (<ref>) as𝒫: ℝ_+^G →ℝ_+,P_t=𝒫(β^∘|𝐇,σ).Similarly, 𝒫(β^∘|𝐇,σ) solves problem (<ref>) based on the input parameters β^∘, the output corresponds to the minimized transmit power at theoptimal solution. Then, the SIT dual relation between problem (<ref>) and problem (<ref>) is described in the following Proposition <ref>. The SIT duality between problem (<ref>) and problem (<ref>) is established as P_t =𝒫(𝒰(P_t|𝐇,σ ) |𝐇,σ ) β^∘ =𝒰(𝒫(β^∘| 𝐇,σ )|𝐇,σ) Proof: This conclusion can be obtained directly by using the proofs of SIT duality in existing works <cit.>.Due to space limitations, we omit the proof details in this work. 
▪ The SIT duality (<ref>) implies that the general utility problem (<ref>) can be solved by searching over rate targets β, such that the optimal objective value of solving (<ref>) for a given rate target β^∘ is equivalent to the constraint upperbound P_t in (<ref>). Therefore, problems (<ref>) and (<ref>) share the same optimal beamforming structure as shown in Theorem <ref>. The optimal beamforming solution structures for both problem (<ref>) and problem (<ref>) are equivalent to 𝐖^∘=( 𝐈_L+𝐇Θ^∘𝐇^H)^-1𝐇𝐃^∘where Θ^∘=blkdiag{Θ_1^∘,⋯,Θ_G^∘} and 𝐃^∘=blkdiag{𝐝_1^∘,⋯,𝐝_G^∘} are someparameters.Although it remains challenging to determine the optimal achievable rate target β^∘, the SIT duality helps to identify the optimal beamforming structure forproblem (<ref>). §.§ Insights from the Optimal Beamforming StructureTo better characterize the optimal multi-group multicast beamforming structure in (<ref>), we further rewrite it as 𝐖^∘=( 𝐈_L+𝐇Θ^∘𝐇^H)^-1𝐇𝐁^∘𝐏^∘,where 𝐁^∘≜blkdiag{𝐛^∘_1,⋯,𝐛^∘_G} with 𝐛_g^∘≜ [b_g1^∘,⋯,b_gK_G^∘]^T and 𝐏^∘≜diag{√(p_1^∘),⋯,√(p_G^∘)}. In this form, it is evident that the optimal multi-group multicast beamforming structure consists of the following four parts: * The first part 𝐇 is a complete channel matrix, which contains channel directions towards all users. These directions are also known as MRT directions. * The second part ( 𝐈_L+𝐇Θ^∘𝐇^H)^-1 is the inversion of the sum of an identity matrix and a weighted channel covariance matrix. It rotates the MRT directions to reduce the inter-group interference. The parameter θ_gk^∘ represents the priority assigned to user k in group g, with a lager value indicating that the beamforming vectors of other groups are more orthogonal to the corresponding channel 𝐡_gk. * The third part𝐁^∘ is a block-diagonalized coefficient matrix, which is the primary difference between multicast and unicast. Parameter b_gk^∘ represents the priority of user k in group g, with a larger values indicating that the group beamforming direction 𝐰_g is more aligned to 𝐡_gk. * The fourth part 𝐏^∘ is the power allocation matrixcontaining thepower allocated to all beamforming vectors {𝐰_g }.Considering a special case when there is a single user group g, i.e., single-group multicast transmission, the corresponding optimal beamforming structure in (<ref>) is simplified as 𝐰_g^∘=𝐇_g𝐝_g^∘. This implies that the second part is an identity matrix and the optimal beamforming solution is determined by directly optimizing the weight vector 𝐝_g ∈ℂ^K_g× 1 instead of the beamforming vector 𝐰_g∈ℂ^L× 1. Thus, the optimal weight vector 𝐝_g^∘ for maximizing the minimum received signal power |𝐡_gk^H𝐰_g|^2 is given as𝐝_g^∘=max_𝐝_gmin_k∈𝒦_g{ |𝐡_gk^H𝐇_g𝐝_g|^2}s.t.𝐇_g𝐝_g^2≤ P_t.Note that 𝐰^∘_g=𝐇_g𝐝_g^∘ is the optimal beamforming solution when the number of groups is G=1, but it is not optimal for multi-group multicast scenarios since the inter-group interference is not considered. Regarding multi-group multicast scenarios, the second part(𝐈_L+𝐇Θ^∘𝐇^H)^-1 is required to be considered. This component serves to rotate the group channel matrix 𝐇_g into the null space of 𝐇_-g≜{𝐡_11,⋯,𝐡_g-1,K_g-1,𝐡_g+1,K_g+1,⋯,𝐡_GK_G}, thereby mitigating inter-group interference. In general, it is hard to determine the optimal Θ^∘ and 𝐃^∘ due to the NP-hard nature of the multicast beamforming design. In Section <ref>, we will propose an ultra-low-complexity algorithm based on the optimal multicast beamforming structure to address such issue. 
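To make the four-part structure above concrete, the following minimal numerical sketch (Python/NumPy) assembles a beamforming matrix from given parameters Θ, 𝐁 and 𝐏 according to the structure 𝐖=(𝐈_L+𝐇Θ𝐇^H)^-1𝐇𝐁𝐏. The dimensions and the parameter values used here are illustrative placeholders only; the actual optimal Θ^∘, 𝐁^∘, 𝐏^∘ must come from solving the design problem and are not computed in this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
L, G, Kg = 16, 3, 4                      # antennas, groups, users per group (illustrative)
K = G * Kg
H = (rng.standard_normal((L, K)) + 1j * rng.standard_normal((L, K))) / np.sqrt(2)

# Placeholder parameters; the optimal Theta, B, P require solving the beamforming problem.
theta = rng.random(K)                    # theta_{gk} >= 0, one entry per user
b     = rng.random(K)                    # b_{gk}, per-user weights inside each group
p     = rng.random(G)                    # per-group power allocation p_g

R = np.eye(L) + (H * theta) @ H.conj().T             # I_L + H Theta H^H
W = np.zeros((L, G), dtype=complex)
for g in range(G):
    idx = slice(g * Kg, (g + 1) * Kg)                 # users of group g
    w_dir = np.linalg.solve(R, H[:, idx] @ b[idx])    # (I_L + H Theta H^H)^{-1} H_g b_g
    W[:, g] = np.sqrt(p[g]) * w_dir                   # scale by the allocated power
```

Note that a single L× L inverse is shared by all groups; removing the dependence of this step on L is precisely the goal of the low-dimensional structures discussed in the next section.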
§ LOW-DIMENSIONAL BEAMFORMING STRUCTUREThe primary challenge of finding the optimal beamforming solution to problem (<ref>) lies in the undetermined parameter matrices Θ^∘ and 𝐃^∘. Although the optimal parameter matrices Θ and 𝐃 are challenging to calculate, they can be easily attained or even negligible in some asymptotic scenarios, leading to low-complexity and low-dimensional beamforming solutions. In this section, we initially explore two types of low-dimensional structures: one being universal, and the other being asymptotic. We then extend some well-known low-complexity beamforming algorithms to multi-group multicast scenarios, leveraging the asymptotic analysis of the optimal parameter Θ^∘. Subsequently, we introduce low-dimensional reformulations for the original problems (<ref>) and (<ref>). §.§ Range Space (RS) Beamforming In XL-MIMO systems where the number of transmit antennas is much larger than the number of total users, i.e., L≫ K, the computational complexity of calculating the optimal beamforming sharply increases with L. Here, we provide a low-dimensional structure to reduce the computational complexity by introducing the following Proposition <ref>. Any optimal solution 𝐰_g^∘ of problem (<ref>) and problem (<ref>)must exist within the range space of the complete channel matrix 𝐇, i.e., 𝐰_g^∘=𝐇𝐚_g^∘, ∀ g∈𝒢, with 𝐚_g^∘∈ℂ^K× 1. Proof: By applying matrix identity (𝐈_L+𝐗𝐘)^-1𝐗=𝐗(𝐈_K+𝐘𝐗)^-1 where 𝐗∈ℂ^L× K and 𝐘∈ℂ^K× L,any optimal solution shown in (<ref>) can be rewritten as𝐖^∘ =𝐇(𝐈_K+Θ^∘𝐇^H𝐇)^-1𝐃^∘. Denote 𝐀^∘≜(𝐈_K+Θ^∘𝐇^H𝐇)^-1𝐃^∘∈ℂ^K× G, we conclude that any optimal beamforming vector for each user group g must lie in the RS of the complete channel matrix.▪ Leveraging Proposition <ref>, the RS beamforming is given as 𝐖=𝐇𝐀,where 𝐀∈ℂ^K× G has a lower dimension compared to 𝐖. Notably, the dimension of 𝐀 is independent of the number of transmit antennas L. It implies that substituting 𝐖 with 𝐇𝐀 in problem (<ref>) and (<ref>) significantly reduces the optimization dimension of the beamforming matrix. Further elaboration will be provided in Section <ref>.In the following subsections, we discover more low-dimensionalbeamforming structures based on the asymptotic analysis of the optimal parameter Θ^∘ in (<ref>). §.§ MRT Beamforming In the low SNR regime, i.e., σ_gk^2→∞ (therefore θ_gk=λ_gk/σ_gk^2→ 0), the system is noise-limited and the beamforming matrix in (<ref>) converges tolim_P_t→ 0𝐖^∘ =𝐇𝐃^∘, where the inversion part converges to the identity matrix and 𝐃^∘ includes both the asymptotic power allocation and coefficients of the linear combination for the group-channel direction. It implies that MRT beamforming achieves a good performance in the low SNR regime. Thus, a natural extension of the well-known MRT beamforming in multi-group multicast transmission is given as𝐖=𝐇𝐃,where 𝐃≜blkdiag{𝐝_1,⋯,𝐝_G}∈ℂ^K× G with 𝐝_g≜ [d_g1,⋯,d_gK_G]^T. It differs from Proposition <ref> since 𝐃 is a block diagonal matrix containing K variables while 𝐀 is a full matrix containing K× G variables. This strategy maximizes the minimum received signal power |𝐡_gk^H𝐰_g|^2 received at group g while ignoring the interference from other user groups.§.§ ZF-based and Regularized ZF-based BeamformingWhen SNR is high, i.e., P_t→∞, the system is in the interference-limited region. We focus on the case L≥ K with at least one spatial degree-of-freedom per user. 
In this scenario, each parameter θ_gk tends to infinity and(<ref>) converges tolim_P_t→∞𝐖^∘=𝐇(𝐇^H𝐇)^-1Θ^∘^-1𝐃^∘.The extension of ZF beamforming in multi-group multicast transmission is 𝐖=𝐇(𝐇^H𝐇)^-1𝐃.Similar to the unicast-only transmission, to achieve numerical stability and robustness to channel uncertainty, regularized ZF (RZF) beamforming is usually considered by forcing ∑_g=1^G∑_k=1^K_gθ_gk=P_t <cit.>. This leads to the following RZF beamforming𝐖=𝐇(1/P_t𝐈_K+𝐇^H𝐇)^-1𝐃.§.§ Multicast ZF and RZF-based BeamformingRecall that the optimal multi-group multicast beamforming has an equivalent form (<ref>), from which we obtainlim_P_t→∞𝐰_g^∘=(𝐇_-g𝐇_-g^H)^†𝐇_g 𝐝̃_g^∘, ∀ g∈𝒢where † denotes the pseudo-inverse of a matrix. From this asymptotic result, we propose the following two useful low-dimensional beamforming structures 𝐰_g =(𝐇_-g𝐇_-g^H)^†𝐇_g 𝐝_g, ∀ g∈𝒢, 𝐰_g =(1/P_t𝐈_L+𝐇_-g𝐇_-g^H)^†𝐇_g 𝐝_g, ∀ g∈𝒢, which are referred as multicast ZF (MZF) and multicast RZF (MRZF), respectively. Although (<ref>) has a similar structure with (<ref>), they have different mathematical implications. As P_t→∞, the matrix (𝐇_-g𝐇_-g^H)^† rotates group channel matrix 𝐇_g into the null space of the matrix 𝐇_-g, which implies any group-channel matrix 𝐇_g satisfies 𝐇_g^H(𝐇_-i𝐇_-i^H)^†𝐇_i=0, ∀ i≠ g, i∈𝒢. Note that 𝐇^H[(𝐇_-1𝐇_-1^H)^†𝐇_1,⋯,(𝐇_-G𝐇_-G^H)^†𝐇_G] is a block diagonal matrix and therefore the MZF beamforming (<ref>) is sufficient to eliminate the inter-group interference. This contrasts to the matrix inversion (𝐇^H𝐇)^-1 in the classical ZF beamforming (<ref>), which rotates allchannel vectors to be orthogonal to each other, i.e., 𝐇^H 𝐇(𝐇^H𝐇)^-1 results in a diagonal matrix.§.§ Large-scale MIMO SystemsNext, we delve into the asymptotic beamforming structure for XL-MIMO when the number of transmit antenna L goes to infinite. In (<ref>), the value of 𝐇^H𝐇 grows with L, it is obviously thatlim_L→∞𝐖^∘=𝐇(𝐇^H𝐇)^-1Θ^∘^-1𝐃^∘,which implies that the ZF-based beamforming is asymptotically optimal when L goes to infinity. A similar low dimensional structure has been proposed in <cit.> and adopted by <cit.>. This approach employs an asymptotic fixed-point iteration to directly calculate the rotated channel matrix 𝐇=𝐇(𝐈_K+Θ𝐇^H𝐇)^-1 and then optimize 𝐃. It should be noted that the proposed approach in <cit.> is only asymptotically optimal when L→∞. But it is tailored for the QoS problem and cannot be extended to solve the general utility problem (<ref>).§.§ Low-dimensional Reformulations The low-dimensional structures introduced in the previous subsections are advantageous for reducing the computational complexity in beamforming design by removing the dependence of the beamforming dimension on the number of transmit antenna L. Here, we take the RS structure (<ref>) as an example to show its benefits. To be specific, by replacing the original high-dimensional beamforming matrix, i.e., 𝐖∈ℂ^L× G,with the low-dimensional RS beamforming matrix i.e., 𝐇𝐀, the original problem (<ref>) and (<ref>) can be respectively reformulated as max_𝐀f( R_1,⋯,R_G) s.t. Tr(𝐀𝐀^H𝐅)≤ P_t, where R_g=min_k∈𝒦_glog(1+|𝐟_gk^H𝐚_g|^2/∑_i=1,i≠ g^G|𝐟_gk^H𝐚_i|^2+σ_gk^2),and min_𝐀 Tr(𝐀𝐀^H𝐅) s.t. |𝐟_gk^H𝐚_g|^2/∑_i=1,i≠ g^G|𝐟_gk^H𝐚_i|^2+σ_gk^2≥α_g, ∀ k∈𝒦_g,∀ g∈𝒢, where 𝐅≜ [𝐟_11,⋯,𝐟_GK_G]∈ℂ^K× K with 𝐟_gk=𝐇^H𝐡_gk and 𝐀≜[𝐚_1,⋯,𝐚_G]∈ℂ^K× G. 
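As a sanity check on these reformulations and on the push-through identity used in the proof of Proposition <ref>, the equivalence of the received-signal terms under 𝐖=𝐇𝐀 can be verified numerically. The sketch below (Python/NumPy, with randomly drawn channels and an arbitrary placeholder coefficient matrix 𝐀) is for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
L, G, Kg = 64, 3, 4
K = G * Kg
H = (rng.standard_normal((L, K)) + 1j * rng.standard_normal((L, K))) / np.sqrt(2)

# Push-through identity: (I_L + X Y)^{-1} X = X (I_K + Y X)^{-1}
X, Y = H, H.conj().T
lhs = np.linalg.solve(np.eye(L) + X @ Y, X)
rhs = X @ np.linalg.inv(np.eye(K) + Y @ X)
assert np.allclose(lhs, rhs)

# Any W = H A reproduces the same received-signal and power terms with K x G variables.
A = rng.standard_normal((K, G)) + 1j * rng.standard_normal((K, G))   # placeholder coefficients
W = H @ A                                   # L x G beamformers from the K x G matrix A
F = H.conj().T @ H                          # K x K reduced matrix, columns f_gk = H^H h_gk
assert np.allclose(H.conj().T @ W, F @ A)   # h_gk^H w_i == f_gk^H a_i for all users and groups
assert np.allclose(np.trace(W.conj().T @ W), np.trace(A @ A.conj().T @ F))  # power Tr(A A^H F)
```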
For both reformulated problems, the dimension of the optimization variables decreases from L× G to K× G and the complexity of matrix inversion decreases from 𝒪(L^3) to 𝒪(K^3).§ EFFICIENT BEAMFORMING ALGORITHMS FOR WSR MAXIMIZATION In this section, we take WSR as an instance of the general utility function, i.e.,f(R_1,⋯,R_G)=∑_g=1^Gζ_g R_g,where ζ_g refers to the weight of user group g. Our focus is on deriving efficient beamforming algorithms to solve the following WSR maximization problem: max_𝐖 ∑_g=1^Gζ_g R_g, s.t.Tr(𝐖𝐖^H)≤ P_t.whereR_g=min_k∈𝒦_g{log(1+γ_gk) }. This problem is more difficult to solve than the MMF problem when f(R_1,⋯,R_G)=min_g∈𝒢 R_g, since the optimal rate targets varies across distinct user groups. Utilizing SIT duality requires a G-dimensional exhaustive search over the optimal rate targets, leading to exponential computational complexity, making it impractical to solve such problem. Generally, problem (<ref>) has two primary challenges:* Non-convex SINR expressions for all users; * Non-smooth rate expressions for all multicast streams. The first challenge has been extensively studied in unicast-only transmission, classical algorithms such as WMMSE, FP, and WSR-MM have emerged to address the non-convexity of SINR expressions. The second challenge posed by non-smooth max-min rate expressions has led to the introduction of some approaches, i.e., linearization <cit.>,subgradient ascent (SA) algorithm <cit.>, and the LogSumExp (LSE)-based algorithm <cit.>. These approaches are briefly introduced below: * Linearization involves introducing auxiliary variables to substitute the max-min rates in the objective functionand adding additional rate constraints for all user groups to reformulate the problem <cit.>. Such approach inevitably increases the optimization dimension. * The SA algorithm <cit.> is a typical approach for solving non-smooth problems. Based on a certain step size, it iteratively updates the solution towards the subgradient direction of the non-smooth objective function until convergence. However, choosing an appropriate rule to update the step size is challenging and can significantly impact the convergence speed.* The LSE method approximates the non-smooth objective function using theLSE function <cit.>. For example, the LSE of R_g is LSE_g = -μlog(∑_k=1^K_gexp(-R_gk/μ) ), where R_gk=log(1+γ_gk). However, choosing a proper value for μ is challenging. The complicated LSE function introduces extra challenges in solving the original problem, and the approximation error prohibits the identification of the optimal beamforming structure.Due to the aforementioned limitations of existing algorithms, we next propose a noveloptimization algorithm to solve the non-convex non-smooth WSR problem (<ref>) based on the optimal and low-complexity beamforming structures we introduced in Section <ref> and Section <ref>.§.§ Problem ReformulationTo better characterize the multi-group multicast beamforming, we move the power constraint into the SINR expression leading to the following unconstrained problem:max_𝐖∑_g=1^G ζ_gmin_k∈𝒦_g{log(1+γ_gk )},whereγ_gk≜|𝐡_gk^H𝐰_g|^2/∑_i≠ g |𝐡_gk^H 𝐰_i|^2+σ_gk^2/P_tTr(𝐖𝐖^H) . The relation between (<ref>) and (<ref>) is provided in the following Proposition <ref>. For any locally optimal solution 𝐖^♢ of problem (<ref>), there exists a corresponding locally optimal solution 𝐖^ of the unconstrained problem (<ref>) such that 𝐖^♢=√(P_t/Tr(𝐖^𝐖^^H))𝐖^. Proof: The detail proof follows the procedure in <cit.>. 
▪ Leveraging Proposition <ref>, we can efficiently solve problem (<ref>) by directing our focus towards solving the unconstrained problem (<ref>) in the following.§.§ Cyclic Maximization MethodIn this subsection, we introduce a cyclic maximization (CM)-based method to solve problem (<ref>). The main idea of CM is to construct a high dimensional surrogate objective function and then solve each block cyclically to obtain a stationary point of the original non-convex problem <cit.>. We start with introducing the following Proposition <ref> to transform (<ref>) into a more tractable form. By introducing auxiliary variables {ξ_gk, η_gk}, problem (<ref>) can be equivalently reformulated asmax_𝐖, ξ,η∑_g=1^G ζ_g min_k∈𝒦_g{ h_gk(𝐖,ξ_gk,η_gk) },where ξ≜[ξ_11,⋯,ξ_GK_G]^T, η≜[η_11,⋯,η_GK_G]^T, andh_gk(𝐖, ξ_gk,η_gk)≜log(1+ξ_gk)+2√(1+ξ_gk){η_gk^H𝐡_gk^H𝐰_g}-|η_gk|^2(∑_i=1^G|𝐡_gk^H𝐰_i|^2+σ_gk^2/P_tTr(𝐖𝐖^H))-ξ_gk.Proof: The convex function log(1/x) with x∈ℝ_+ has the following lower boundlog(1/x)≥log(1/x_0)-1/x_0(x-x_0)with equality achieved at x_0=x. By plugging x=1/1+γ_gk and x_0=1/1+ξ_gk into (<ref>), we obtain the following surrogate functionlog(1+γ̂_gk)≥log(1+ξ_gk)-1+ξ_gk/1+γ_gk+1≥log(1+ξ_gk)-ξ_gk+(1+ξ_gk)|𝐡_gk^H𝐰_g|^2/∑_i=1^G |𝐡_gk^H𝐰_i|^2+σ_gk^2/P_tTr(𝐖𝐖^H)with equality holding when ξ_gk=γ_gk. Further, the convex fractional function |x|^2/y with x∈ℂ and y∈ℝ_+ can be lower bounded by its first order Taylor expansion as |x|^2/y≥ 2{x_0^H/y_0x}-|x_0|^2/y_0^2ywith equality achieved at (x_0,y_0)=(x,y). By substituting x=√(1+ξ_gk)𝐡_gk^H𝐰_g, y=∑_i=1^G |𝐡_gk^H𝐰_i|^2+σ_gk^2/P_tTr(𝐖𝐖^H) and η_gk=x_0/y_0 into (<ref>), we obtain the two-layer surrogate function h_gk(𝐖,ξ_gk,η_gk) in (<ref>). ▪Problem (<ref>) remains non-convex, but it is obvious that, it can be solved by the CM method with the following updates ξ_gk^[t+1]=γ_gk^[t],η_gk^[t+1]=√(1+ξ_gk^[t+1])𝐡_gk^H𝐰_g^[t]/∑_i=1^G |𝐡_gk^H𝐰_i^[t]|^2+σ_gk^2/P_tTr(𝐖^[t]𝐖^[t]^H),𝐖^[t+1]=max∑_g=1^Gζ_g min_k∈𝒦_g{ h_gk(𝐖,ξ_gk^[t+1],η_gk^[t+1]) }.For given {ξ,η}, the subproblem (<ref>) with respect to 𝐖 is a non-smooth convex quadratic programming and thus can be directly solved using standard convex optimization approaches (i.e., interior-point methods) implemented by a certain solver (i.e., SeDuMi) in the CVX toolbox <cit.>. Such approach of using CM to solve problem (<ref>) as well as using the standard solvers in CVX to solve problem (<ref>) is referred as the standard CM method as summarized in Algorithm <ref>. The equivalence between the CM framework and the classical MM theory has been confirmed in <cit.>. Besides, the recent study <cit.> has established the equivalence between WMMSE, FP, and MM for solving the WSR problems. Therefore, we could infer that algorithms using convex approximation and employing a standard optimization solver in CVX <cit.> to solve each convex subproblemexhibit comparable performance. §.§ Proposed Efficient Optimization Algorithms Although the standard CM algorithm in Algorithm <ref> successfully solves subproblem (<ref>) using the standard CVX toolbox, this approach is not cost-effective due to the substantial time occupied to parse and canonicalize the original subproblem into a standard form for CVX solvers to understand. Moreover, problem (<ref>) is non-differentiable at the points where two or more functions of {h_gk} share the same value. To avoid these limitations of solving (<ref>) directly using CVX, we propose a novel optimization algorithm based on its optimal beamforming structure. 
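For reference, the outer-layer updates (<ref>) and (<ref>) are already in closed form and can be implemented directly. A minimal sketch is given below (Python/NumPy; the 𝐖-update is deliberately left to the inner solver developed next, and the function and variable names are illustrative).

```python
import numpy as np

def cm_outer_updates(H_list, W, sigma2, Pt):
    """Closed-form update of (xi, eta) for a given W (cf. the xi- and eta-updates above).

    H_list[g]: L x K_g channel matrix of group g; W: L x G beamformers;
    sigma2[g]: length-K_g noise powers of group g; Pt: power budget.
    The W-update itself is the inner subproblem handled by the solver described next.
    """
    G = W.shape[1]
    tot_pow = np.trace(W.conj().T @ W).real
    xi, eta = [], []
    for g in range(G):
        rx = H_list[g].conj().T @ W                  # K_g x G entries h_gk^H w_i
        sig = np.abs(rx[:, g]) ** 2                  # desired-signal power |h_gk^H w_g|^2
        all_pow = np.sum(np.abs(rx) ** 2, axis=1)    # sum_i |h_gk^H w_i|^2
        noise = sigma2[g] * tot_pow / Pt             # sigma_gk^2 / P_t * Tr(W W^H)
        gamma = sig / (all_pow - sig + noise)        # modified SINR gamma_gk
        xi.append(gamma)                             # xi_gk = gamma_gk
        eta.append(np.sqrt(1 + gamma) * rx[:, g] / (all_pow + noise))
    return xi, eta
```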
Specifically, by introducing a set of slack auxiliary variables 𝐳=[z_1,⋯,z_G]^T, we aim to solve the following equivalent problem of (<ref>): max_𝐖,𝐳 ∑_g=1^Gζ_gz_gs.t.z_g≤ h_gk(𝐖,ξ_gk,η_gk),∀ g∈𝒢,∀ k∈𝒦_g. Problem (<ref>) is a smooth convex SOCP problem and can be efficiently solved via its Lagrange dual problem. The Lagrangian function of (<ref>) is given asℒ_(<ref>)(δ,𝐖,𝐳)≜∑_g=1^Gζ_gz_g- ∑_g=1^G∑_k=1^K_gδ_gk(z_g-h_gk(𝐖,ξ_gk,η_gk)), where δ_gk≥ 0 is the dual variable corresponding to constraint (<ref>), and δ≜ [δ_1,⋯,δ_G] with δ_g≜ [δ_g1,⋯,δ_gK_g]^T. The first-order derivatives of the Lagrange function ℒ_(<ref>)(δ,𝐖,𝐳) with respect to z_g and 𝐰_g are respectively given as∂ℒ_(<ref>)/∂ z_g =ζ_g-∑_k=1^K_gδ_gk, ∂ℒ_(<ref>)/∂𝐰_g =∑_k=1^K_g2d́_gk𝐡_gk-∑_i=1^G∑_k=1^K_i2(θ́_ik𝐡_ik𝐡_ik^H+δ_ikσ_ik^2/P_t𝐈_L)𝐰_g,where d́_gk,∀ g∈𝒢,k∈𝒦_g, and θ́_ik, ∀ i∈𝒢,k∈𝒦_i are respectively defined asd́_gk ≜δ_gkη_gk√(1+ξ_gk), θ́_ik ≜δ_ik |η_ik|^2.Since (<ref>) is convex and strictly feasible, it satisfies the Slater's condition and the strong duality holds <cit.>. Therefore, the optimal solution of (<ref>), together with the optimal Lagrange dual variable, satisfies the following KKT conditions ζ_g-∑_k=1^K_gδ_gk=0,g∈𝒢,∑_k=1^K_g2d́_gk𝐡_gk -∑_i=1^G∑_k=1^K_i2(θ́_ik𝐡_ik𝐡_ik^H+δ_ikσ_ik^2/P_t𝐈_L)𝐰_g=0,g∈𝒢,δ_gk(z_g-h_gk)=0,g∈𝒢,k∈𝒦_g, where (<ref>) and (<ref>) are the first-order stationary conditions, and (<ref>) refers to the complementary slackness conditions. The primal and dual feasibility conditions are omitted here.According to (<ref>), we could directly obtain the optimal slackness variable z_g as z_g^⋆=min_k∈𝒦_g{h_gk}. ∀ g∈𝒢. Moreover, from (<ref>), we further reveal the optimal beamforming solution structure for problem (<ref>) in Theorem <ref>. The optimal beamforming solution for problem (<ref>) is given by𝐰_g^⋆=(𝐇Θ́^⋆𝐇^H + S_σ𝐈)^-1𝐇_g𝐝́_g^⋆, ∀ g∈𝒢,where {δ_gk^⋆} are the optimal dual variables for problem (<ref>), Θ́^⋆≜blkdiag{Θ́_1^⋆,⋯, Θ́_G^⋆} with Θ́_g^⋆=diag{θ́_g1^⋆,⋯,θ́_gK_G^⋆} with θ́_gk^⋆=δ_gk^⋆|η_gk|^2, S_σ≜∑_i=1^G ∑_k=1^K_iδ_ik^⋆σ_ik^2/P_t, and 𝐝́_g^⋆=[d́_g1^⋆,⋯,d́_gK_g^⋆]^T with d́_gk^⋆= δ_gk^⋆η_gk√(1+ξ_gk). It is clear that the beamforming structure (<ref>) shares the same structure with the optimal multi-group multicast beamforming structure (<ref>). Substituting (<ref>) and (<ref>) into ℒ_(<ref>)(δ,𝐖,𝐳), the Lagrange dual problem of (<ref>) is given bymin_δ∈ℋ∑_g=1^G∑_k=1^K_gδ_gkh_gk(𝐖^⋆,ξ_gk,η_gk),where ℋ≜ℋ_1×⋯×ℋ_G with ℋ_g≜{δ_gk≥ 0: ∑_k=1^K_gδ_gk=ζ_g } and 𝐖^⋆≜ [𝐰^⋆_1,⋯,𝐰_G^⋆]. The dual problem is convex and the feasible space δ∈ℋ implies that each dual vector δ_g lies in the hyperplane ℋ_g. This motivates us to solve problem (<ref>) base on the projected adaptive gradient descent (PAGD) algorithm. PAGD is an optimization algorithm that minimize a function iteratively by updating the solution towards the opposite direction of the gradient with an adaptive step size in each iterative. It then includes a projection step to project the updated solution onto the feasible set.In this work, we propose to solve problem (<ref>) based on the following updating procedure in each iteration [j]: δ̅_gk^[j+1]=δ_gk^[j]-τ_gk^[j](h_gk^[j]- min_i∈𝒦_g{h_gi^[j]}),g∈𝒢,k∈𝒦_g, δ_g^[j+1]=Π_ℋ_g(δ̅_g^[j+1]), ∀ g∈𝒢, whereδ̅_g≜ [ δ̅_g1,⋯,δ̅_gK_g ]^T refers to the intermediate updates before projection, and τ_gk^[j] is the step size, given byτ_gk^[j]=δ_gk^[j]/h_gk^[j]- min_i∈𝒦_g{h_gi^[j]}+ρ_t^[j],with an increasing constant number ρ_t^[j]=ρ_c+ρ_v· j. 
Π_ℋ_g(δ̅_g^[j+1]) denotes the projection ofδ̅_g^[j+1] onto the hyperplane ℋ_g, which is defined asΠ_ℋ_g(δ̅_g^[j+1])=δ̅_g^[j+1]-∑_k=1^K_gkδ̅_g^[j+1]-ζ_g/K_g.The optimal dual vector δ^⋆ for each convex problem (<ref>) can be obtained using the proposed PAGD algorithm summarized in Algorithm <ref>. Therefore, the subproblem (<ref>) is optimally solved by substituting δ^⋆ into the optimal beamforming solution (<ref>). By employing the CM framework and PAGD to address (<ref>), we establish a highly efficient algorithm referred to as CM-PAGD. In contrast to the standard CM approach, our proposed algorithm exhibits lower computational complexity and ensures no loss in performance. This will be further demonstrated in the simulation section. The step size follows the rule of square summable but not summable <cit.>, which typically follows τ_gk^[j]=x/y+ρ_v· j,where x>0, y≥ 0 are problem-specific parameters and ρ_v is a decreasing factor. To ensure δ̅_gk^[j+1]≥ 0, we have τ_gk^[j]≤δ_gk^[j]/h_gk^[j]- min_i∈𝒦_g{h_gi^[j]}.Let x and y in (<ref>) be defined asx= δ_gk^[j] and y=h_gk^[j]- min_i∈𝒦_g{h_gi^[j]}+ρ_c, we end up with the proposed step size (<ref>). This step size enables the dual vector δ̅_g^[j+1] within the subspace {δ_gk≥ 0:∑_k=1^K_gδ_gk≤ζ_g }, and it is therefore easy to project it back onto the hyperplane ℋ_g based on the defined projection rule (<ref>).§.§ Low-dimensional Reformulations for Large Scale SystemsAs mentioned in Section <ref>, the proposed low-dimensional beamforming structures can be used to reduce the computational complexity of the beamforming design.Here, we take the RS structure 𝐖=𝐇𝐀 as an example to show its benefits in solving the WSR problem for XL-MIMO systems. By substituting 𝐖=𝐇𝐀 into (<ref>), the WSR problem is reformulated as max_𝐀 ∑_g=1^G ζ_gmin_k∈𝒦_g{log(1+ |𝐟_gk^H𝐚_g|^2/∑_i=1,i≠ g^G|𝐟_gk^H𝐚_i|^2+σ_gk^2)} s.t. Tr(𝐀𝐀^H𝐅)≤ P_t. This formulation is equivalent to (<ref>), but with significantly reduced dimensionality in the optimization variables.Therefore, we can solve it using the proposed CM-PAGD methods. Other low-dimensional reformulations based on MRT, ZF, RZF, MZF, or MRZF follow a similar process as (<ref>). To avoid redundancy, we omit the details of these reformulation here. A comprehensive comparison among different approaches will provided in the following simulation section. §.§ Convergence and Computational Complexity Analysis§.§.§ Convergence AnalysisThe proposed CM-PAGD algorithm consists of two iteration layers. The outer-layer CM framework outlined in Algorithm <ref> is guaranteed to generate a monotonically increasing sequence of objective values for (<ref>), as proven in <cit.>. Regarding the convergence of the inner-layer iteration for computing the optimal dual variables in Algorithm <ref>, it is established in <cit.> that the algorithm is guaranteed to converge if the objective function satisfies the Lipschitz condition. This condition is clearly met in our algorithm since the objective function is continuous differentiable over the convex set ℋ. For further details,readers can refer to <cit.>.§.§.§ Computational Complexity AnalysisThe computational complexity of the proposed CM-PAGD algorithm for each iteration is dominated by updating the beamforming matrix (i.e., line 5 of Algorithm <ref>) based on Algorithm <ref>. The complexity of Algorithm <ref> is dominated by the matrix inversion in line 4, with an order of 𝒪(GL^3). 
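For concreteness, one inner PAGD iteration, namely the dual update (<ref>)-(<ref>), the projection (<ref>), and the reconstruction of the beamformers from the structure (<ref>), can be sketched as follows (Python/NumPy; the function and variable names are illustrative and convergence checks are omitted). The L× L linear solves in the reconstruction step are the matrix inversions referred to above.

```python
import numpy as np

def pagd_dual_step(delta, h_val, zeta, rho):
    """One PAGD update of the dual variables of one group (cf. the updates above).

    delta, h_val: length-K_g arrays of delta_gk^[j] and h_gk evaluated at iteration j;
    zeta: group weight zeta_g; rho: the increasing constant rho_t^[j].
    """
    gap = h_val - h_val.min()                 # h_gk - min_i h_gi (zero for the worst user)
    tau = delta / (gap + rho)                 # adaptive step size, keeps delta_bar nonnegative
    delta_bar = delta - tau * gap             # descent step along the gap
    # Projection onto the hyperplane {delta >= 0 : sum_k delta_gk = zeta_g}
    return delta_bar - (delta_bar.sum() - zeta) / delta_bar.size

def beamformer_from_duals(H_list, delta, eta, xi, sigma2, Pt):
    """Closed-form w_g = (H Theta' H^H + S_sigma I)^{-1} H_g d'_g from the KKT structure."""
    L, G = H_list[0].shape[0], len(H_list)
    H = np.concatenate(H_list, axis=1)
    theta = np.concatenate([delta[g] * np.abs(eta[g]) ** 2 for g in range(G)])
    S_sigma = sum((delta[g] * sigma2[g]).sum() for g in range(G)) / Pt
    R = (H * theta) @ H.conj().T + S_sigma * np.eye(L)
    W = np.zeros((L, G), dtype=complex)
    for g, Hg in enumerate(H_list):
        d_g = delta[g] * eta[g] * np.sqrt(1 + xi[g])
        W[:, g] = np.linalg.solve(R, Hg @ d_g)
    return W
```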
The overall complexity order is 𝒪(GL^3log(ϵ_1^-1)log(ϵ_2^-1)), where ϵ_1 refers to the convergence tolerance of the outer-layer CM framework, and ϵ_2 refers to the convergence tolerance of the inner-layer PAGD algorithm. § SIMULATION RESULTS In this section, we evaluate the computational complexity, convergence, and the WSR performance of the proposed CM-PAGD algorithm based on the optimal beamforming structure or other low-dimensional beamforming structures.§.§ Simulation SetupWe consider a symmetric multi-group multicast communication network, where K_g=K_G, ∀ g∈𝒢. Unless specified otherwise, the default user set consists of G=3 groups, with K_G=4 users per group. The channel of user k is generated i.i.d. as 𝐡_gk∼𝒞𝒩(0,𝐈_L) and the noise variance at user k is set to σ_gk^2=1 so that the transmit SNR defined as SNR ≜ P_t/σ_gk^2 is numerically equal to the transmit power. For the proposed algorithms, we set the stopping tolerance for both the outer-layer CM framework and inner-layer iterative algorithms as ϵ_1= ϵ_2=10^-4. Additional, the constant ρ_t^[j] in (<ref>) is set to ρ_t^[j]=1+0.02× j for controlling the convergence accuracy of the inner-layer PAGD algorithm. Without loss of generality, we set the weights ζ_1=ζ_2=⋯=ζ_G=1 in our simulations. The initialization of the beamforming vectors for the CM framework is based on MRT directions, e.g., 𝐰_g^[0]=∑_k=1^K_g𝐡_gk, ∀ g∈𝒢. All simulation results are averaged over 100 random channel realizations. §.§ Baseline Algorithms All schemes considered in the simulation are summarized as follows: * standard CM: This is the standard CM framework we introduced in Algorithm <ref>. Each subproblem (<ref>) is solved using CVX toolbox, leading to an overall computational complexity of 𝒪([GL]^3.5log(ϵ_1^-1)). * CM-SA: This refers to the algorithm of employing the CM framework to solve problem (<ref>), while using the SA algorithm <cit.> to solve each non-smooth surrogate problem directly. The gradient of worst-case user per group is selected as the subgradient <cit.>. The overall computational complexity is 𝒪(GL^2log(ϵ_1^-1)log(ϵ_2^-2)). * CM-LSE: This refers to the algorithm of employing the CM framework to solve problem (<ref>), while using the LSE algorithm <cit.> to approximate the non-smooth objective function for each surrogate problem <cit.>. The approximated convex and smooth problem is then solved by the gradient ascent approach. The corresponding computational complexity is 𝒪(GL^2log(ϵ_1^-1)log(ϵ_2^-1)). * CM-PAGD: This is the algorithm we proposed based on Algorithm <ref> and Algorithm <ref>. Specifically, for each subproblem (<ref>) in Algorithm <ref>, instead of using CVX to solve it directly, we propose to use the PAGD algorithm in Algorithm <ref> to address it. CM-PAGD is therefore an optimization toolbox-free algorithm, which reduces the computational complexity. The corresponding computational complexity is 𝒪(GL^3log(ϵ_1^-1)log(ϵ_2^-1)). * RS CM-PAGD: This is the algorithm we proposed based on the RS property discovered in (<ref>) and CM-PAGD. Specifically, problem (<ref>) is transformed to (<ref>) using the RS property. After that, CM-PAGD is employed to solve (<ref>).The computational complexity of RS CM-PAGD is 𝒪(K^2L+GK^3log(ϵ_1^-1)log(ϵ_2^-1)). * X CM-PAGD: This is the algorithm we proposed based on the low dimensional beamforming structure discovered in(<ref>)/(<ref>)/(<ref>)/(<ref>) and CM-PAGD. 
Specifically, problem (<ref>) is first transformed based on (<ref>)/(<ref>)/(<ref>)/(<ref>)/(<ref>), respectively for the scenarios of X=MRT/ZF/RZF/MZF/MRZF. Subsequently, CM-PAGD is employed to solve the corresponding transformed problem. The computational complexity of X CM-PAGD is 𝒪(K^2L+GK_g^3log(ϵ_1^-1)log(ϵ_2^-1)).All algorithms, excludingX CM-PAGD, aim at calculating sub-optimal beamforming solutions for the WSR problem (<ref>). In contrast, X CM-PAGD algorithms are asymptotically optimal in different regimes. They notably reduce the computational complexity, but may lead to performance degradation in certain regimes.§.§ Convergence of the Proposed AlgorithmsWe first check the convergence behavior of the standard CM algorithm, the proposed CM-PAGD andRS CM-PAGD algorithms. Both CM-PAGD and RS CM-PAGD have two iteration layers, namely, one outer layer for CM and one inner layer for PAGD. The convergence to both iteration layers is illustrated in Fig.<ref> and Fig.<ref>, respectively. Fig.<ref> shows the WSR performance of all three schemes as the number ofiterations in the outer layer increases when L=16, SNR=20dB.We would observe that the convergence path of the proposed CM-PAGD and RS CM-PAGD nearly overlap with the standard CM algorithm. This is because the convex surrogate problem (<ref>) is optimally solved by the derived optimal beamforming solution structure (<ref>) together with the PAGD algorithm that successfully calculates the Lagrange dual variables. Fig. <ref> illustrates the WSR versus the number of iterations in the inner layer for both dual and primal problems. It is obviously that the duality gap between the primal and the dual objective values converges to zero when solving each subproblem (<ref>).§.§ Comparison among the Sub-optimal AlgorithmsTable. <ref> shows the WSR performance of the five sub-optimal algorithms as the transmit SNR increases from -10dB to 30dB. The number of transmit antenna is fixed to L=16. Notably, the proposed CM-PAGD and RS-PAGD algorithms solve the WSR problem effectively without any performance loss compared to the standard CM. In comparison, CM-SA and CM-LSE cause certain performance degradation especially when SNR is large. This is due to the fact that in the high SNR regime, the CM-SA algorithm is more prone to oscillate in the vicinity of the non-differentiable optimal point during the inner iteration. Additionally, the LSE approximation function introduces a lager approximation error for the CM-LSE algorithm under these conditions.Fig.<ref> illustrates the corresponding average CPU time versus the transmit SNR. Compared with other baselines schemes, the proposed algorithms exhibit substantial reduction in average CPU time. Notably, CM-PAGD and RS CM-PAGD achieve at lease a 99.65% time decrease over standard CM across all SNR regimes, highlighting their effectiveness in achieving excellent WSR performance with lower computational complexity. The CPU time consumption for CM-PAGD and RS CM-PAGD is comparable since the number of transmit antennas L=16 is of the same order as the total number of users K=12, resulting in a marginal CPU time reduction for RS CM-PAGD. Further more, despite the potential higher complexity per iteration for CM-PAGD and RS CM-PAGD compared to CM-SA and CM-LSE, they exhibit faster convergence rates since CM-SA and CM-LSE converge in the primal field, while PAGD converges in the dual field. More details about the differences between these two types of approaches can be found in <cit.>. Table. 
<ref> shows the WSRs achieved by different algorithms versus the number of transmit antennas with transmit SNR=20dB, andFig.<ref> illustrates the corresponding average CPU time. Due to the exponentially increasing computational cost of the standard CM with number of transmit antennas, we exclusively present its results for scenarios with 16 and 32 transmit antennas. Table. <ref> shows that all algorithms demonstrate nearly identical WSR performance across varying numbers of transmit antennas. Notably, the proposed RS CM-PAGD attains such excellent WSR performance while significantly reducing computational time. Its computational time is not proportional to the number of transmit antennas. This substantial decrease in computational complexity for XL-MIMO transceiver design marks the proposed X CM-PAGD as a promising algorithm for 6G. §.§ Comparison for Asymptotic Optimal Algorithms Fig.<ref> shows the WSR versus SNR comparison among different X CM-PAGD algorithms (X=MRT/ZF/RZF/MZF/MRZF) and RS CM-PAGD when the number of transmit antennas is L=16. It is evident that MRT achieves near optimal performance in the low SNR regime (i.e., SNR from -10 dB to 0 dB), while MZF and MRZF achieve asymptotically optimal performance in the high SNR regime (i.e., SNR is larger than 20 dB). The numerical results align with the theoretical analysis in Section <ref>. Also, it is observed that the classical ZF and RZF beamforming designs attain obvious performance loss compared to the near-optimal solution especially in the high SNR regime. This contrasts with the results in the unicast-only transmission <cit.>, where ZF and RZF achieve near optimal performance. This is due to the distinct relations among user channels in the multi-group multicast communication, as discussed in Remark <ref>. Fig.<ref> illustrates the WSR versus the number of transmit antennas when SNR=20dB. As the number of transmit antennas increases exponentially, RS, ZF, RZF, and MRZF all achieve asymptotically optimal WSR performance. In contrast, MRT exhibits relatively poor WSR performance due to the asymptotic channel orthogonality. It suffers from severe performance degradation due to the presence of inter-group interference. § CONCLUSIONIn this study, we analyze the optimal and low-dimensional beamforming structures for a downlink multi-antenna multi-group multicast transmission network. Specifically, by leveraging the KKT conditions and SIT duality, we identify the optimal multi-group multicast beamforming structure for a general utility function-based maximization problem that embraces the WSR and MMF problems as special cases. This structure reveals valuable insights behind the multi-group multicast beamforming design, and inspires us to discover inherent low dimensional beamforming structures that are asymptotically optimal in various regimes of transmit SNR or the number of transmit antennas. These appealing beamforming structures provide guideline to efficient beamforming design for multi-group multicast transmission. Specially, we consider a special problem when the general utility function is the WSR. By exploiting the optimal beamforming structure, we propose an efficient optimization toolbox-free algorithm based on the CM framework and the proposed PAGD algorithm to solve the problem. We further exploit the low dimensional beamforming structures, and propose RS and MRT/ZF/RZF/MZF/MRZF based beamforming algorithms to further reduce the computational complexity. 
Numerical results demonstrate that the proposed algorithms achieve near-optimal performance while significantly reducing the computational complexity compared to the baseline schemes. They emerge as promising algorithms for ultra-massive MIMO applications in 6G. The developed algorithms are not limited to WSR problems; they can be easily extended to other utility-function-based optimization problems, such as MMF and energy efficiency. Additionally, as the proposed PAGD algorithm effectively handles the non-smoothness introduced by the multicast rate expressions, it can also be extended to power-domain non-orthogonal multiple access (PD-NOMA) and rate-splitting multiple access (RSMA), since the rate expressions of the streams decoded by multiple users in PD-NOMA (and RSMA) have the same mathematical form as the multicast rate expressions.
http://arxiv.org/abs/2312.16559v1
{ "authors": [ "Tianyu Fang", "Yijie Mao" ], "categories": [ "cs.IT", "math.IT" ], "primary_category": "cs.IT", "published": "20231227124918", "title": "Optimal Beamforming Structure and Efficient Optimization Algorithms for Generalized Multi-Group Multicast Beamforming Optimization" }
a,b,c]Wenjie Xi, d]Tian Lan, d]Longye Wang, a]Chenjie Wang, b,e]Wei-Qiang Chen [a]Department of Physics and HKU-UCAS Joint Institute for Theoretical and Computational Physics, The University of Hong Kong, Pokfulam Road, Hong Kong, China[b]Department of Physics, Southern University of Science and Technology, Shenzhen 518055, China[c]Shenzhen Institute for Quantum Science and Engineering, and Department of Physics, Southern University of Science and Technology, Shenzhen, 518055, China[d]Department of Physics, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China[e]Shenzhen Key Laboratory of Advanced Quantum Functional Materials and Devices, Southern University of Science and Technology, Shenzhen 518055, [email protected] [email protected] [email protected] [email protected] [email protected]

Recently, many studies have focused on generalized global symmetry, a mixture of both invertible and non-invertible symmetries in various space-time dimensions. The complete structure of generalized global symmetry is described by higher fusion category theory. In this paper, we first review the construction of the fusion 2-category symmetry ΣB, where B is a braided fusion category. In particular, we elaborate on the monoidal structure of ΣB, which determines the fusion rules and controls the dynamics of topological operators/defects. We then take ΣsVec as an example to demonstrate how we calculate the fusion rules, quantum dimensions and 10j-symbols of this fusion 2-category. With our algorithm, all these data can be efficiently encoded and computed in a computer program. The complete program will be uploaded to GitHub soon. Our work can be thought of as explicitly computing the representation theory of B, in analogy to, for example, the representation theory of SU(2). The choices of basis bimodule maps are analogous to the Clebsch-Gordan coefficients, and the 10j-symbols are analogous to the 6j-symbols.

On a class of fusion 2-category symmetry: condensation completion of braided fusion category
January 14, 2024

Symmetry serves as a guiding principle in physics. In modern language, a generalized symmetry <cit.> is characterized by topological operators U that are invariant under any deformation of space-time. A q-form generalized symmetry in d-dimensional space-time is associated with (d-q-1)-dimensional topological operators. The collection of topological operators usually forms an algebra under composition. There are mainly two classes of higher-form generalized symmetry, namely, generalized global symmetry and generalized non-invertible symmetry. For generalized global symmetry, the composition usually follows group multiplication and the topological operators are unitary. For generalized non-invertible symmetry, the structure of composition is much more complicated and is usually characterized by a fusion category; the topological operators in this case usually do not have inverses. Given a quantum system, its generalized symmetry is uniquely characterized by the combination of all topological operators in the various space-time dimensions, which is, mathematically, described by a fusion higher category. Fusion 1-categories are widely applied in the study of many physical systems.
In rational CFTs, non-invertible symmetries are generally characterized by Verlinde lines <cit.>, for example, the Kramers-Wannier duality <cit.> line operator in the Ising CFT. Mathematically, for a given rational CFT, each simple Verlinde line corresponds to a simple object in a fusion 1-category. The structures of Verlinde lines, including composition, splitting, joining and re-coupling, are encoded by the categorical data of the fusion category, and this data can be explicitly expressed in terms of the fusion rules N_ab^c and the F symbol. (Braided) fusion 1-categories have also been used to study other physical systems such as 2+1D topological order, where the F symbol is the cornerstone for constructing lattice models and N_ab^c determines the fusion structure of anyons. For fusion 1-categories, there are many ways to obtain the F symbol, including field theory <cit.>, representation theory <cit.> (where the 6j symbol of SU(2) is an example), or directly solving the pentagon equation <cit.>; the calculation of F is, however, in general very complicated. However, higher fusion categories have not yet been well studied. Even the rigorous definition of a fusion 2-category was only proposed in 2018 <cit.>. Though it is realized that fusion 2-categories play an important role in studying higher-form (non-invertible) symmetry <cit.>, few examples of fusion 2-categories have been explicitly constructed. By now, the only examples of fusion 2-categories for which we can list all the explicit data are of the form 2Vec_G^w. In addition, approaches to finding examples with complete categorical data have not been widely explored either. Recently, in the high energy community, people have been trying to study properties of higher-form non-invertible symmetries in higher space-time dimensions by different approaches <cit.>. For example, some studies focus on the simplest non-invertible symmetry, associated with the Kramers-Wannier duality in higher dimensions <cit.>. For many proposed QFTs or lattice models with higher-form symmetries, what their complete generalized global symmetries are, or which fusion higher categories should be used to characterize them, is far from clear. Even for systems where we can find all (higher-form) symmetries, the categorical data is still incomplete. An important ingredient of a fusion 2-category is the 10j-symbol <cit.>,[In the Walker-Wang model, the 10j-symbol of a presemisimple 2-category, which is the delooping of a unitary braided fusion category, is provided.] which is analogous to the F symbol of a fusion 1-category. It has not been carefully calculated in the current literature studying higher symmetries. The only known examples of 10j-symbols are the 4-cocycles of the fusion 2-categories 2Vec_G^ω, which have invertible objects only. One may also try to solve the hexagon equation directly to get the 10j-symbol for an arbitrary fusion 2-category, but, analytically, this is almost impossible. Even for numerical calculation, the computational cost is incredibly high. Therefore, it is highly desirable to find a practical way to obtain the complete categorical data of fusion 2-categories and to derive a few simple but non-trivial examples. Knowing the explicit categorical data is very important for studying physical systems with generalized global symmetries. For example, the data of a fusion 2-category can be used to construct lattice models of 3+1D topological order <cit.> and its boundary <cit.>.
Since the data of fusion 1-category has been used to characterize 1+1D CFT, hopefully, we may use fusion 2-category to study 2+1D CFT which is also closely related to quantum phase transition in 2+1D.In this paper, we propose the algorithm for systematically constructing examples for a class of fusion 2-category Σ, the condensation completion of braided fusion 1-category , and obtaining all its categorical data. As a first application, we compute the full data of ΣsVec. Roughly speaking, we give the coefficients for all possible kinematics, including fusing, bending, braiding, recoupling, etc., of fermions and open Majorana chains.The paper is organized as the following. In section <ref>, we first review the construction of the braided fusion 1-category . We also review separable algebras in , bimodules of separable algebras and bimodule maps between the bimodules. we then review the construction of the fusion 2-category Σ.In section <ref>, we elaborate the monoidal structure of Σ, which is mainly consist of fusion algebra of objects and 1-morphisms, which correspond to topological operators/defects, and 10j-symbol which captures the generalized crossing relations between the operators/defects. In section <ref>, we impose spherical condition for Σ which gives each topological defect a quantum dimension, a pairing structure of section and retraction bimodule maps and determines the normalization factor of the 10j-symbols. In section <ref>, we explicitly compute the objects, 1-morphisms, 2-morphisms, fusion algebra and quantum dimension of a simple but fundamentally important example: ΣsVec.In section <ref>, we write down the explicit form of 10j-symbol with a chosen base. All the 10j-symbols of ΣsVec and the completecomputer program will be uploaded to github soon. Therefore, a topological quantum field theory(TQFT) can be constructed. With our algorithm, all the categorical data of Σ can be efficiently computed in computer program.§ PRELIMINARIES §.§ Braided fusion category Here we only introduce the properties of a braided fusion category (,,,α,c) that are relevant to our paper. For concrete and detailed definition, please see for example the textbook <cit.>. A monoidal category (, , , α) is a categoryequipped with a monoidal structure consists of * a tensor product : ×→,* a tensor unitwith X =X = X , ∀ X ∈ ().* an associator α, i.e. natural isomorphisms α_X, Y, Z: (X Y) Z → X(Y Z) that satisfy the pentagon diagrams. A fusion category (,,,α) is a category satisfies following conditions: * (,,,α) is a monoidal category,*is -linear,*is rigid,*is finite semi-simple,* The tensor unitis a simple object.A braided fusion category (,,,α, c) is a fusion category (,,,α) equipped with a braiding, i.e. natural isomorphisms c_X,Y:X YY X that satisfy two hexagon diagrams. For concreteness, we restrict to the case where the objects inare “vector spaces” with certain structures, while the morphisms inare “linear maps” preserving the structures. Some examples include representation categories of groups or quasi-triangular Hopf algebras, and pointed braided fusion categories (i.e. 
finite pre-metric groups).§.§ Algebras and modules in a braided fusion categoryGiven a braided fusion category (,,,α,c), the algebras and its modules inare defined in the following.An algebra is a pair (A, m:A A→ A), where A is an object inand the multiplication morphism A AA satisfies the following diagram(A A) AA(A A) A AA A A ["α_A,A,A", from=1-1, to=1-3] ["m𝕀_A"', from=1-1, to=2-1] ["𝕀_A m", from=1-3, to=2-3] ["m"', from=2-1, to=3-2] ["m", from=2-3, to=3-2]commutes.It can also be denoted as A for simplicity.(, m=𝕀_) is the trivial algebra in , and will be denoted asin the paper.Given an algebra A. A right A-module is a pair (M, r:MA→ M), where M is an object inand r is a morphism M_A A→ M_A such that the following diagram(M A) AM(A A) M AM A M ["α_M,A,A", from=1-1, to=1-3] ["r𝕀_A"', from=1-1, to=2-1] ["𝕀_M m", from=1-3, to=2-3] ["r"', from=2-1, to=3-2] ["r", from=2-3, to=3-2]commutes. A left A-module (N , l:A N→ N) is defined in the same way but with a left action l.Given two algebras A and B. A B-A-bimodule is a triple (M, l:B M→ M, r:M A→ M), where (M, l) is a left B-module and (M, r) is a right A-module such that the following diagram(B M) AB(M A) M A BMM ["α_B,M,A", from=1-1, to=1-3] ["l𝕀_A"', from=1-1, to=2-1] ["𝕀_B r", from=1-3, to=2-3] ["r"', from=2-1, to=3-2] ["l", from=2-3, to=3-2]commutes.In the following, for simplicity, we will denote a B-A-bimodule (M, l, r) with the object M or with *[_B]M_A, if their meanings are evident from the context.It also works for the left and right modules.A left A-module *[_A]M can be regarded as an A--bimodule *[_A]M_, while a right A-module *N_A can be regarded as an -A-bimodule *[_]N_A.Given an algebra (A, m) in . * *[_A]A_A≡ (A, m, m) is an A-A-bimodule.* *[_A]A*A_A≡ (A A, l_A, r_A) is an A-A-bimodule, where l_A, r_A is defined asl_A:A(A A) (A A) A A A["α_A,A,A^-1", from=1-1, to=1-2] ["m𝕀_A", from=1-2, to=1-3] r_A:(A A) A A (A A) A A["α_A,A,A", from=1-1, to=1-2] ["𝕀_A m", from=1-2, to=1-3]Given a C-B-bimodule (M, l_M, r_M) and a B-A-bimodule (N, l_N, r_N). * The triple (M N, l_MN, r_MN) is a C-A-bimodule where l,r is defined asl_MN:C(M N) (C M) N M N["α^-1", from=1-1, to=1-2] ["l_M𝕀_N", from=1-2, to=1-3] r_MN:(M N) A M (N A) M N["α", from=1-1, to=1-2] ["𝕀_M r_N", from=1-2, to=1-3] * The triple (M B N, l_MBN, r_MBN) is a C-A-bimodule where l_MBN,r_MBN is defined asl_MBN:C(M B N) (C M) B N M B N ["α^-1", from=1-1, to=1-2] ["l_M𝕀_B𝕀_N", from=1-2, to=1-4] r_MBN:(M B N) A M B (N A) M B N ["α", from=1-1, to=1-2] ["𝕀_M𝕀_B r_N", from=1-2, to=1-4]Given two right A-modules (M, r_M:M A→ M) and (N, r_N:N A→ N). A right A-module map is a morphism f:M→ N insuch that the following diagramM A M N A N ["r_M", from=1-1, to=1-3] ["f𝕀_A"', from=1-1, to=3-1] ["r_N"', from=3-1, to=3-3] ["f", from=1-3, to=3-3]commutes. Given two B-A-bimodules (M, l_M:B M→ M, r_M:M A→ M), (N, l_N:B N→ N, r_N:N A→ N). A bimodule map is a morphism f:M→ N such that f is both a left module map and a right module map. An algebra (A, m:A A→ A) is called separable if m:A A→ A admits an A-A-bimodule map σ:A→ A A such that the composition m∘σ=𝕀_A.Given a C-B-bimodule (M, l_M:C M→ M, r_M:M B→ M) and a B-A-bimodule (N, l_N:B N→ N, r_N:N A→ N). 
The relative tensor product (M[B]N, π), or simply M[B]N, inis the coequalizer shown belowM B N M N M[B]N["r_M𝕀_N", shift left=1, from=1-1, to=1-2] ["𝕀_M l_N"', shift right=1, from=1-1, to=1-2] ["π", from=1-2, to=1-3] The universal property of M[B]N is given by the following commuting diagramM B N M N M[B] NX ["π", from=1-2, to=1-3] ["r_M𝕀_N", shift left=1, from=1-1, to=1-2] ["𝕀_M l_N"', shift right=1, from=1-1, to=1-2] ["∃ ! h̃", dashed, from=1-3, to=2-3] ["∀ h"', from=1-2, to=2-3]for any X∈.The relative tensor product M [B] N is uniquely determined up to a canonical isomorphism. For a left A-module (N , l:A N→ N), in general, A [A] NN, where l̃ is given by the universal property of the relative tensor product asA A N A N A[A] NN ["π", from=1-2, to=1-3] ["mid_N", shift left=1, from=1-1, to=1-2] ["id_Al"', shift right=1, from=1-1, to=1-2] ["l̃","∼"', dashed, from=1-3, to=2-3] ["l"', from=1-2, to=2-3]For any two bimodule maps f:_C M _B→_C M' _B and g:_B N _A→_B N' _A, the relative tensor product f[B]g is given by the universal property of the relative tensor product of bimodulesM B N M NM[B] NM' B N'M' N'M'[B] N' ["r_M𝕀_N", shift left=2, from=1-1, to=1-3] ["𝕀_Ml_N"', shift right=2, from=1-1, to=1-3] ["π", from=1-3, to=1-5] ["f[B]g", dashed, from=1-5, to=3-5] ["fid_Bg"', from=1-1, to=3-1] ["r_M'𝕀_N'", shift left=2, from=3-1, to=3-3] ["π'"', from=3-3, to=3-5] ["𝕀_M' l_N'"', shift right=2, from=3-1, to=3-3] ["f g"', from=1-3, to=3-3] Given a C-B-bimodule (M, l_M, r_M) and a B-A-bimodule (N, l_N, r_N). M[B]N is the cokernel of the map f=r_M𝕀_N-𝕀_M l_N as followingM B N M N M[B] NX ["π", from=1-2, to=1-3] ["f", shift left=1, from=1-1, to=1-2] ["0"', shift right=1, from=1-1, to=1-2] ["∃!h̃", dashed, from=1-3, to=2-3] ["∀ h"', from=1-2, to=2-3] §.§ Fusion 2-category and 10j-symbol Here we introduce the definition of a fusion 2-category .We only include the properties that are relevant to our paper. For concrete and detailed definition, please see ,for example, the Ref <cit.>.A monoidal 2-categoryis a 2-categoryequipped with a monoidal structure consists of * the objects (A, B, ⋯), 1-morphisms (f, g, ⋯), and 2-morphisms (α, β, ⋯),* the hom space (A,B), which is a 1-category, consists of all 1-morphisms from object A to object B and the 2-morphisms between these 1-morphisms.* the composition functor ∘∘: (A,B) (B,C) →(A,C),(f, g) ↦ g ∘ f * an associator of bimodule compositionλ_f, g, h: (f ∘ g) ∘ hf ∘ (g ∘ h)for 1-morphisms: f : C → D, g : B → C, and h: A → B* a monoidal unit ,* a tensor product , which are defined as 2-functorsA- : →,-A : →for each object A ∈,* an interchange 2-isomorphismϕ_f,g:(fZ)∘(Bg) ⇒ (C g)∘ (fY)for each pair of 1-morphisms: f : B → C and g : Y → Z,* an invertible natural associativity 1-morphism Λ_A, B, C: (A B) C → A(B C)for any objects A, B, C ∈, which tracks the associativity of the tensor product of objects,* a pentagonator 2-isomorphismβ_A,B,C,D: (AΛ_B,C,D) ∘Λ_A,B C,D∘ (Λ_A,B,C D) ⇒Λ_A,B,C D∘Λ_A B, C,Dfor any objects A, B, C, D ∈.A fusion 2-category is a finite semisimple monoidal 2-category that has left and right duals for objects and a simple monoidal unit. 
For given objects A, B, C, K in , the associativity 1-morphism Λ_A, B, C will induce an equivalent functorΛ_A,B,C∘ - : (K,(A B) C) (K, A (B C)).And for given objects A, B, C, D, K in , the pentagonator induces a natural transformation between two equivalent functors(K,((A B) C) D) [ddd,bend right=50,"Λ_A,B,C D∘Λ_A B, C,D∘ -"left,""name=L,right][ddd,bend left=50,"(AΛ_B,C,D) ∘Λ_A,B C,D∘ (Λ_A,B,C D)∘ -"right,""name=R,left](K,A (B (C D))) [Rightarrow, from=R, to=L, "β_A,B,C,D∘ -"'],which is characterized by the 10j-symbol. §.§ The 2-category ΣB In this paper, we will focus on fusion 2-category Σ, the condensation completion of a braided fusion 1-category .We consider only the case where Σ has a spherical structure.The definition of Σ as a 2-category is given below.The monoidal structure and spherical structure of Σ will be discussed in the following sections. Given a braided fusion category , its condensation completion <cit.> Σ, as a 2-category, consists of the following data. * Objects are separable algebras in .* Given two objects A,B, the hom space (A,B) is a 1-category consists of B-A-bimodules (as objects) and B-A-bimodule maps (as morphisms).* The composition ∘ of hom spaces is a is given by the relative tensor product of bimodules and bimodule maps defined in Def. <ref> and  <ref>∘: (A,B) (B,C)→(A,C),(_B N_A, _C M_B) ↦ M∘ N:=_C M_B [B] _B N_A,(g,f) ↦ f∘ g:=f[B]g. § THE MONOIDAL STRUCTURE OF ΣBIn this section, we describe the monoidal structure of the Σ induced from the braided monoidal structure of . §.§ Tensor productin Σ The tensor productin Σ is induced by the tensor productin .Given two objects A,B∈(Σ), i.e. two separable algebras in , A B:= (AB, m_A B) with multiplication defined as(A B) (AB)[r,"α"] [dd,"m_A B"]A(B A) B[d,"𝕀_A c_B,A𝕀_B"] A(A B) B[d,"α"]A B (A A) (B B)[l,"m_A m_B"]is also a separable algebra in , and hence an object in Σ. Let _C N_B and _Z P_Y be two 1-morphisms in Σ.They are actually two bimodules in , hence N P has a natural structure of C Z-B Y-bimodule, where the right module structure is defined as(N P) (BY)[r,"α"] [dd,"r_NP"]N(P B) Y[d,"𝕀_N c_P,B𝕀_Y"] N(B P) Y[d,"α"]N P (N B) (P Y)[l,"r_N r_P"]The left module structure can be defined similarly.Then the tenor product of N and P in Σ is defined as NP := NP.For the tensor product of 2-morphisms f and g in Σ, since fg is automatically a bimodule map in , we have fg := fg.Therefore, we do not distinguish the tensor productin Σ andinin the following. §.§ Interchange lawThe tensor product must be compatible with the bimodule composition ∘, which means (M P)∘(N Q) must be equivalent to (M∘ N) (P∘ Q).This can be satisfied by the 2-isomorphism (M P)∘(N Q) (M∘ N) (P∘ Q),for the bimodules _D M_C, _C N_B, _Z P_Y, _Y Q_X.c̃_P,N;M,Q is induced by the braiding c_P, N invia the universal property of the relative tensor productM P N Q[r,"𝕀 c_P,N𝕀"] [d,"[CY]"] MN PQ [d,"[C][Y]"] (M P)∘(N Q) [r,"c_P,N;M, Q"](M∘ N) (P∘ Q)where the associator α has been dropped for simplicity.Then the interchanger ϕ_N, P for bimodules N∈(B,C) and P∈(Y,Z) is given byϕ_N, P:(NZ)∘(B P)(N∘ B) (Z∘ P)N P(C∘ N)(P∘ Y)(C P)∘ (N Y).In the following, we will denote c̃_P,N;M,Q as c̃_P,N for simplicity. 
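When the input braided fusion category is sVec, the case studied later, objects and morphisms can be encoded as ℤ_2-graded vector spaces and matrices, and the braiding is the Koszul sign. The multiplication on A⊗B defined above then reduces to (a⊗ b)(a'⊗ b')=(-1)^|b||a'|(aa')⊗(bb') for homogeneous elements. The following minimal sketch (Python/NumPy) encodes an algebra by its structure constants together with a grading and builds the tensor-product algebra with this sign; the example Cℓ_1, the algebra generated by one odd element f with f^2=1, is our illustrative choice, anticipating the ΣsVec computation.

```python
import numpy as np
from itertools import product

# A superalgebra is encoded by structure constants m[i, j, k] (e_i e_j = sum_k m[i, j, k] e_k)
# and a Z2-grading deg[i] of its basis.  Example: Cl_1 = span{1, f}, f odd, f^2 = 1.
m_cl1 = np.zeros((2, 2, 2))
m_cl1[0, 0, 0] = m_cl1[0, 1, 1] = m_cl1[1, 0, 1] = 1.0   # multiplication by the unit
m_cl1[1, 1, 0] = 1.0                                      # f * f = 1
deg_cl1 = np.array([0, 1])

def super_tensor(mA, degA, mB, degB):
    """Multiplication on A (x) B induced by the sVec braiding (Koszul sign)."""
    nA, nB = len(degA), len(degB)
    m = np.zeros((nA * nB, nA * nB, nA * nB))
    deg = np.array([(a + b) % 2 for a, b in product(degA, degB)])
    for (i, p), (j, q) in product(product(range(nA), range(nB)), repeat=2):
        sign = (-1) ** (degB[p] * degA[j])    # sign from braiding the second factor past a'
        m[i * nB + p, j * nB + q, :] += sign * np.einsum('k,r->kr', mA[i, j], mB[p, q]).reshape(-1)
    return m, deg

m2, deg2 = super_tensor(m_cl1, deg_cl1, m_cl1, deg_cl1)   # Cl_1 (x) Cl_1
# Associativity check: (e_a e_b) e_c == e_a (e_b e_c) in the tensor-product algebra.
lhs = np.einsum('abx,xcy->abcy', m2, m2)
rhs = np.einsum('bcx,axy->abcy', m2, m2)
assert np.allclose(lhs, rhs)
```

The bimodules and the relative tensor products entering the composition of 1-morphisms can be encoded in the same way, with the coequalizer computed as the cokernel of an explicit matrix (cf. the coequalizer description above); this is what makes the ΣsVec data computable in practice.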
§.§ Associator of bimodule compositionThe associator λ of bimodule composition ∘ is induced by the associator α offrom the diagram below(M N)P [r,"α_M,N,P"][d,"[A]([B] 𝕀_P)"]M (N P) [d,"[B](𝕀_M [A])"] (M∘ N)∘ P=(M[B]N)[A]P [r,dashed, "λ_M,N,P"]M[B](N[A]P)=M∘(N∘ P)It can be noticed that even the associator ofis trivial, associator of bimodule composition is not necessarily trivial. This is because that the right A-action on MN and M ∘ N could be different, and so does the left B-action on NP and N ∘ P, which may leads to a nontrivial λ. §.§ Associator bimodule and pentagonator For three objects A,B,C∈(Σ), the associator α_A,B,C :(A B) C→ A (B C) inis an algebra isomorphism. Therefore, in Σ, we can define associator 1-morphisms as Λ_A,B,C:=_A (B C) (A B) C_(A B) C, where the left module structure is induced by the algebra isomorphism α_A,B,C. It is clear that Λ_A,B,C is an invertible bimodule, and it is natural in A,B,C following the naturality of α. For example, for any bimodule _D M_C, the naturality of Λ_A, B, C in C leads to a 2-isomorphismΛ_A,B,D∘ ((A B) M)(A (B M)) ∘Λ_A,B,C. The pentagonator β_A,B,C,D is a bimodule map between the following associator bimodule in (((A B) C) D, A (B (C D))) in ΣΛ_A,B,C D∘Λ_A B, C,D[d,equal] (AΛ_B,C,D) ∘Λ_A,B C,D∘ (Λ_A,B,C D) [d,equal][l,"β_A,B,C,D"'] ((A B) C) D [r,equal] ((A B) C) Dwhere the left module structure on the left hand side is induced by α_A,B,C Dα_A B, C,D and on the right hand side is induced by (𝕀_Aα_B,C,D)α_A,B C,D(α_A,B,C𝕀_D). We omitted the associatorof bimodule composition here (in this example they are cancelled in the final result).By the pentagon equation of , the two bimodules are in fact equal to each other. Therefore, the pentagonator β_A,B,C,D is simply the identity bimodule map.§.§ Associator bimodule map As shown in eqn. (<ref>), the associator bimodule Λ_A,B,C induce an equivalent functorΛ_A,B,C∘ -: (K,(A B) C) →(K, A (B C)),which plays crucial roles in the calculation of 10j-symbols shown in eqn. (<ref>) and will be studied in this subsection.Since Σ is semisimple, we can focus on the case where all of K, A, B, C are simple objects in Σ.Furthermore, the naturality of Λ suggests that we only need to considerrepresentative objects chosen from each equivalent class of the simple objects.Thus in the following, we consider only the objects in Σ_0, a chosen set of representative objects in Σ, and the bimodules in (A, (BC) ), a chosen set of representative simple B C-A-bimodules for any A,B,C∈Σ_0.For any two separable algebras A, B∈Σ_0, AB can be decomposed into direct sum of simple separable algebras in Σ_0AB ≅⊕_M ∈Σ_0 F_M^ABM,where F^AB_M:={s: M→ A B, r: A B→ M} records the section and retraction algebra homomorphisms. We will drop F for simplicity when it does not result in any confusion. It is clear that A B can be taken as an invertible (A B)-(⊕ M)-bimodule, hence can be decomposed asA B≅⊕_M,Q F^AB_M;Q _A BQ_M,where F^AB_M;Q:={s: Q→ A B,r: A B→ Q} tracks section and retraction bimodule map. Thus, any (A B)C-K-bimodule U can be expressed asU≅⊕_M, P, Q (Q ⊗ C) ∘ P,where M∈Σ_0, P∈(K,M C), Q∈(M, A B), and the Fs in the direct sum decomposition have been dropped for simplicity.Therefore, we only need to study the A (B C)-K-bimodule Λ_A,B,C∘ (Q ⊗ C) ∘ P.Similarly, any A (B C)-K-bimodule V can be expressed asV≅⊕_N, Y, X (A ⊗ Y) ∘ X,where N∈Σ_0, X∈(K,A N), Y∈(N,B C).Since Λ_A,B,C∘(Q C)∘ P is a A (B C)-K-bimodule, it can be decomposed asΛ_A,B,C∘(Q C)∘ P ≅[N,X,Y] F^ABC; QP_KMN; YX (A Y)∘ X,where N∈Σ_0, X∈(K,A N), Y∈(N,B C). 
F^ABC; QP_KMN; YX tracks the section and retraction bimodule maps in the direct sum decomposition. The normalized retraction bimodule maps serve as a basis for the calculation of the 10j-symbol (see Sec. <ref> for the normalization), while the corresponding normalized section bimodule maps are regarded as the dual basis; taken together, they are referred to as associator bimodule maps.

§.§ 10j-symbol

The 10j-symbol can be written down by fixing the choice of representative simple objects, simple 1-morphisms and bases of 2-morphisms (associator bimodule maps). We consider the category (K,A (B (C D))) for any given A,B,C,D,K∈Σ_0. The pentagonator induces a natural transformation between two equivalent functors, as depicted below
(K,((A B) C) D) [ddd,bend right=50,"Λ_A,B,C D∘Λ_A B, C,D∘ -"left,""name=L,right][ddd,bend left=50,"(AΛ_B,C,D) ∘Λ_A,B C,D∘ (Λ_A,B,C D)∘ -"right,""name=R,left](K,A (B (C D))) [Rightarrow, from=R, to=L, "β_A,B,C,D∘ -"']
Although the pentagonator β_A,B,C,D of Σ is trivial, the 10j-symbol, which characterizes the natural transformation induced by the pentagonator, is not necessarily trivial. This phenomenon is analogous to the situation in group representation theory, where the associator of G is trivial but the 3j- and 6j-symbols are not.

For any bimodule U in (K,((A B) C) D), the natural transformation corresponds to a bimodule map between the images of the two functors, i.e. β_A, B, C, D∘ U : (AΛ_B,C,D) ∘Λ_A,B C,D∘ (Λ_A,B,C D)∘ U ⇒Λ_A,B,C D∘Λ_A B, C,D∘ U.

Since any bimodule in (K,((A B) C) D) can be decomposed as a direct sum of ((P_3 C) D) ∘ (P_2 D)∘ P_1 for M_1,M_2∈Σ_0 and P_1∈(K,M_1 D), P_2∈(M_1,M_2 C), P_3∈(M_2,A B), we only need to consider the case with U = ((P_3 C) D) ∘ (P_2 D)∘ P_1.

We denote V_1 ≡Λ_A,B,C D∘Λ_A B, C,D∘ U and V_2 ≡ (AΛ_B,C,D) ∘Λ_A,B C,D∘ (Λ_A,B,C D)∘ U, and they are objects in (K,A (B (C D))). Any A (B (C D))-K-bimodule can be expressed as a direct sum of (A (B Q_3)) ∘ (A Q_2) ∘ Q_1, with N_1,N_2∈Σ_0, Q_1∈(K,AN_1), Q_2∈(N_1,B N_2), Q_3∈(N_2,C D). Thus, the bimodule map β_A,B,C,D∘ U reduces to an endomorphism g^A,B,C,D,U_N_1,N_2;Q_1,Q_2,Q_3 of the bimodule (A (B Q_3)) ∘ (A Q_2) ∘ Q_1 satisfying g ·Z·β = YWXJ, where Z and YWXJ are normalized retraction maps (see below for details) in the direct sum decomposition of V_1 and V_2, respectively. Since the pentagonator β_A, B, C, D is trivial, i.e.
β = 𝕀, we have V_1 = V_2 = V andYWXJ = g ·Z.Therefore, the 10j-symbols, which are characterized by g, are determined by the two direct sum compositions of V, where the first decomposition is given belowΛ_A,B,C D∘Λ_A B, C,D∘((P_3 C) D) [1] (P_2 D)[2] P_1 λ[ "⊕|ζ̃^1 ⟩"',bend right,shift right=31ex,ddd]Λ_A,B,C D∘Λ_A B, C,D∘((P_3 C) D) [2] (P_2 D)[1] P_1 α_P_3,C,D Λ_A,B,C D∘ (P_3 (C D)) [2] Λ_M_2,C,D∘ (P_2 D)[1] P_1 [d, "⊕|ζ^1 ⟩"][N_2, Z, Q_3] F^M_2CD;P_2P_1_KM_1N_2;Q_3ZΛ_A,B,C D∘ (P_3 (C D)) [2](M_2 Q_3)[1] Z ["Ic"',bend right,shift right=31ex,dd] λ [N_2, Z, Q_3] F^M_2CD;P_2P_1_KM_1N_2;Q_3ZΛ_A,B,C D∘ (P_3 (C D)) [1](M_2 Q_3)[2] Z ϕ_P_3, Q_3 [N_2, Z, Q_3]F^M_2CD;P_2P_1_KM_1N_2;Q_3ZΛ_A,B,C D∘ ((A B) Q_3)[1] (P_3 N_2) [2] Z λ[ "⊕|ζ̃^2 ⟩"',bend right,shift right=31ex,ddd][N_2, Z, Q_3]F^M_2CD;P_2P_1_KM_1N_2;Q_3ZΛ_A,B,C D∘ ((A B) Q_3)[2] (P_3 N_2) [1] Z α_A,B,Q_3 [N_2, Z, Q_3]F^M_2CD;P_2P_1_KM_1N_2;Q_3Z (A (B Q_3)) [2] Λ_A,B,N_2∘ (P_3 N_2)[1] Z [d, "⊕|ζ^2 ⟩"][N_1,N_2, Q_1,Q_2, Q_3] [Z] F^M_2CD;P_2P_1_KM_1N_2;Q_3ZF^ABN_2;P_3Z_KM_2N_1;Q_2Q_1 (A (B Q_3)) [2] (A Q_2)[1] Q_1.We have used the naturality of Λ (<ref>), the interchanger (<ref>), and the decomposition (<ref>).[1] means the composition should be done firstly and [2] means the composition should be done secondly. λ is the associator of the composition of three bimodules defined in eqn. (<ref>). α is the associator of the tensor products of three bimodules.F^M_2CD;P_2P_1_KM_1N_2;Q_3Z, and F^ABN_2;P_3Z_KM_2N_1;Q_2Q_1 tracks the corresponding direct sum decompositions, while |ζ^1⟩ and |ζ^2⟩ are the normalized retractions defined in Sec. <ref> (the corresponding normalized sections are denoted as ⟨ζ^1| and ⟨ζ^2|, respectively). Note that we leave the identity maps implicit and only write the vital step in the equation. 
For simplicity, we introduce two maps |ζ̃^1 ⟩ and |ζ̃^2 ⟩ as shown in the equation, and hence the above decomposition can be depicted as left path in fig.<ref>.Similarly, the second direct sum decomposition is given by(AΛ_B,C,D) ∘Λ_A,B C,D∘ (Λ_A,B,C D)∘((P_3 C) D) [1] (P_2 D)[2] P_1 c̃_D,P_2[ ddddd,"⊕|ζ̃^3 ⟩",bend right,shift right=40ex] (AΛ_B,C,D) ∘Λ_A,B C,D∘ (Λ_A,B,C D)∘(((P_3 C)[1] P_2)(D∘ D))[2] P_1 c̃_D,(P_3 C)∘ P_2(AΛ_B,C,D) ∘Λ_A,B C,D∘ ((Λ_A,B,C∘(P_3 C)[1] P_2)D)[2] P_1 [d, "⊕|ζ^3⟩"][J,X,W]F^ABC;P_3P_2_M_1M_2J;WX (AΛ_B,C,D) ∘Λ_A,B C,D∘ (((A W)[1] X)D)[2] P_1 c^-1_D,X [J,X,W]F^ABC;P_3P_2_M_1M_2J;WX (AΛ_B,C,D) ∘Λ_A,B C,D∘ ((A W) D) [1](X D)[2] P_1 λ [J,X,W]F^ABC;P_3P_2_M_1M_2J;WX (AΛ_B,C,D) ∘Λ_A,B C,D∘ ((A W) D) [2](X D)[1] P_1 α_A,W,D[ "⊕|ζ̃^4 ⟩",bend right,shift right=40ex,ddd][J,X,W]F^ABC;P_3P_2_M_1M_2J;WX (AΛ_B,C,D) ∘ (A (W D))[2] Λ_A,J,D∘(X D)[1] P_1 [d, "⊕|ζ^4 ⟩"][J,W,N_1,Q_1,Y][X] F^ABC;P_3P_2_M_1M_2J;WXF^AJD;XP_1_KM_1N_1;YQ_1 (AΛ_B,C,D) ∘ (A (W D))[2] (A Y) [1] Q_1 λ [J,W,N_1,Q_1,Y][X] F^ABC;P_3P_2_M_1M_2J;WXF^AJD;XP_1_KM_1N_1;YQ_1 (AΛ_B,C,D) ∘ (A (W D))[1] (A Y) [2] Q_1 c_Λ_B,C,D,A[ "⊕|ζ̃^5⟩",bend right,shift right=40ex,ddddd][J,W,N_1,Q_1,Y][X] F^ABC;P_3P_2_M_1M_2J;WXF^AJD;XP_1_KM_1N_1;YQ_1 (A (Λ_B,C,D∘ (W D))[1] (A Y)[2] Q_1 c_Λ_B,C,D∘ (W D),A [J,W,N_1,Q_1,Y][X] F^ABC;P_3P_2_M_1M_2J;WXF^AJD;XP_1_KM_1N_1;YQ_1 (A (Λ_B,C,D∘ (W D)[1] Y))[2] Q_1 [d, "⊕|ζ^5⟩"][N_1,N_2,Q_1,Q_2,Q_3][J,W,X,Y] F^ABC;P_3P_2_M_1M_2J;WXF^AJD;XP_1_KM_1N_1;YQ_1F^BCD;WY_N_1JN_2;Q_3Q_2 (A ((B Q_3)[1] Q_2)[2] Q_1 c^-1_B Q_3,A [N_1,N_2,Q_1,Q_2,Q_3][J,W,X,Y] F^ABC;P_3P_2_M_1M_2J;WXF^AJD;XP_1_KM_1N_1;YQ_1F^BCD;WY_N_1JN_2;Q_3Q_2 (A (B Q_3))[1] (A Q_2)[2] Q_1 λ [N_1,N_2,Q_1,Q_2,Q_3][J,W,X,Y] F^ABC;P_3P_2_M_1M_2J;WXF^AJD;XP_1_KM_1N_1;YQ_1F^BCD;WY_N_1JN_2;Q_3Q_2 (A (B Q_3))[2] (A Q_2)[1] Q_1.In this decomposition we used interchange law several times, which will cancel each other in the end due to the naturality of braiding c.The decomposition can be depicted as the right path in the fig. <ref> with the maps |ζ̃^3⟩, |ζ̃^4⟩, and |ζ̃^5⟩.Furthermore, we introduce two maps Z and YWXJZ: V_1 → A (B Q_3)) ∘ (A Q_2)∘ Q_1,YWXJ:V_2 →(A (B Q_3)) ∘ (A Q_2)∘ Q_1.as shown in the figure.Z is the composition of the three bimodule maps, ζ̃^1, Ic, and ζ̃^2, which is determined by the 1-morphism Z for given P_i and Q_i. Here we consider only those ζ^1 and ζ^2 that is valid in fig. <ref>, i.e. they share the same 1-morphism Z. As a result, ζ^1 and ζ^2 is also uniquely determined by Z. YWXJ is the composition of ζ̃^3, ζ̃^4, ζ̃^5 and fully determined by the 1-morphisms W, X, Y and object J. Similarly, we consider only those ζ^3, ζ^4 and ζ^5 sharing Y, W, X and J. Therefore, ζ^3, ζ^4 and ζ^5 is uniquely determined by Y, W, X and J. Z and YWXJ can be regarded as two different bases of the vector space of the bimodule maps from V to (A (B Q_3)) ∘ (A Q_2)∘ Q_1. According to eqn. (<ref>) and fig. <ref>, the basis transformation can be expressed as| YWXJ⟩ = g ·| Z ⟩ = ∑_Z G_Z^YWXJ| Z ⟩.where G_Z^YWXJ is just the 10j-symbol.Since | YWXJ ⟩ is over-complete, the above transformation andthe 10j-symbol G_Z^YWXJ as a matrix is non-invertible. 
However, we can define its right inverse as|Z⟩ = ∑_YWXJ(G^-1)^Z_YWXJ|YWXJ⟩,where ∑_YWXJ G_Z^YWXJ (G^-1)_YWXJ^Z' = δ_Z^Z'.In practice, |ζ^1⟩, |ζ^2⟩, |ζ^3⟩, |ζ^4⟩ and |ζ^5⟩ are the basis of the vector spaces assigned to the five 3-simplices of the boundary of a 4-simplex, and the 10j-symbol G_Z^YWXJ is the data assigned to the 4-simplex.§ SPHERICAL STRUCTURE OF ΣB In this section, we will introduce the spherical structure of Σ, which plays a crucial role in our construction.With the spherical structure, we can define a pairing [ ρ, ξ ] between the bimodule maps ξ: f ⇒ g and ρ: g ⇒ f with f, g ∈(A, B), which is very useful in calculating the 10j-symbols.With the spherical structure, we can also define and calculate the quantum dimensions of objects and 1-morphisms, which in together determine the normalization factor of the 10j-symbols. Instead of providing a strict definition, we only introduce some properties of the spherical structure which are related to our topic. For a detailed definition, see <cit.>. §.§ Spherical structure In a spherical fusion 2-category, every object has a left and a right dual, every 1-morphism has a left and a right adjoint and every 2-morphism has a left and a right mate. For an object A, its right dual is a triple (A^⋆,e_A: A A^⋆→,i_A: → A^⋆ A) where A^⋆ is an object and e_B, i_B are two 1-morphisms (called folds).For a 1-morphism f:A→ B, its right adjoint is a triple (f^*:B→ A,η_f: 𝕀_A⇒ f^∗∘ f,ϵ_f: f^∗∘ f ⇒𝕀_B). f^∗ is a 1-morphism and η_f,ϵ_f are two 2-morphisms satisfying the cusp equations(ϵ_f ∘𝕀_f) · (𝕀_f ∘η_f)=𝕀_f, (𝕀_f^∗∘ϵ_f) · (η_f ∘𝕀_f^∗ ) =𝕀_f^∗,which can be graphically expressed as[ [scale=3] [label=right:] (0') at (2,0) ; [label=above:] (1') at (2,1) ; [label=left:] (2')at (3,0) ; [label=below:] (3') at (3,1) ; [help lines] (0')–(1') node[midway,right] B; [very thick] (2.5,1)–(2.5,0.5); [->,very thick] (2.5,0)–(2.5,0.5) node[midway,left] f; [help lines] (0')–(2') node[midway,left] ; [help lines] (3')–(1') node[midway,left] ; [help lines] (3')–(2') node[midway,left] A; ] = [ [scale=3] [label=right:] (0') at (2,0) ; [label=above:] (1') at (2,1) ; [label=left:] (2')at (3,0) ; [label=below:] (3') at (3,1) ; [help lines] (0')–(1') node[midway,right] B;[->,very thick] (2.25,0)–(2.25,0.5) node[midway,left] f; [->,very thick] (2.25,0.5) arc(180:0:0.125); [->,very thick] (2.5,0.5) arc(180:360:0.125); [very thick] (2.75,0.5)–(2.75,1) node[midway,right] f; [dashed] (2.625,0)–(2.625,0.375) node[right=4pt] η_f;; [dashed] (2.375,1)–(2.375,0.625) node[left=4pt] ϵ_f; [help lines] (0')–(2') node[midway,left] ; [help lines] (3')–(1') node[midway,left] ; [help lines] (3')–(2') node[midway,left] A; ] ,[ [scale=3] [label=right:] (0') at (2,0) ; [label=above:] (1') at (2,1) ; [label=left:] (2')at (3,0) ; [label=below:] (3') at (3,1) ; [help lines] (0')–(1') node[midway,right] A; [->,very thick] (2.5,1)–(2.5,0.5); [very thick] (2.5,0)–(2.5,0.5) node[midway,left] f∗; [help lines] (0')–(2') node[midway,left] ; [help lines] (3')–(1') node[midway,left] ; [help lines] (3')–(2') node[midway,left] B; ] = [ [scale=3] [label=right:] (0') at (2,0) ; [label=above:] (1') at (2,1) ; [label=left:] (2')at (3,0) ; [label=below:] (3') at (3,1) ; [help lines] (0')–(1') node[midway,right] A;[->,very thick] (2.25,1)–(2.25,0.5) node[midway,left] f^∗; [->,very thick] (2.25,0.5) arc(180:360:0.125); [->,very thick] (2.5,0.5) arc(180:0:0.125); [very thick] (2.75,0.5)–(2.75,0) node[midway,right] f^∗; [dashed] (2.375,0)–(2.375,0.375) node[left=4pt] η_f;; [dashed] (2.625,1)–(2.625,0.625) 
node[right=4pt] ϵ_f; [help lines] (0')–(2') node[midway,left] ; [help lines] (3')–(1') node[midway,left] ; [help lines] (3')–(2') node[midway,left] B; ]All the duals and adjoints are involutive, i.e. f=f^∗∗, B=B^⋆⋆. For a 2-morphism σ: f ⇒ g, its left/right mates agree, i.e.σ=(ϵ_f ∘𝕀_g) · (𝕀_f ∘σ^∗∘𝕀_g) · (𝕀_f ∘η_g)= (𝕀_g ∘ϵ_f^∗) · (𝕀_g ∘σ^∗∘𝕀_f) · (η_g^∗∘𝕀_f)which can be graphically expressed as[ [scale=3] [label=right:] (0') at (2,0) ; [label=above:] (1') at (2,1) ; [label=left:] (2')at (3,0) ; [label=below:] (3') at (3,1) ; [help lines] (0')–(1') node[midway,right] B; [very thick] (2.5,0.7)–(2.5,1) node[midway,left] g; [->,very thick] (2.5,0)–(2.5,0.3) node[midway,left] f;(2.5,0.5) circle (0.6pt) node[left] σ; [->,very thick] (2.5,0.3)–(2.5,0.7); [help lines] (0')–(2') node[midway,left] ; [help lines] (3')–(1') node[midway,left] ; [help lines] (3')–(2') node[midway,left] A; ] = [ [scale=3] [label=right:] (0') at (2,0) ; [label=above:] (1') at (2,1) ; [label=left:] (2')at (3,0) ; [label=below:] (3') at (3,1) ; [help lines] (0')–(1') node[midway,right] B; [->,very thick] (2.25,0)–(2.25,0.5) node[midway,left] f; [very thick] (2.25,0.5) arc(180:0:0.125);(2.5,0.5) circle (0.6pt) node[left] σ^∗; [->,very thick] (2.5,0.5) arc(180:360:0.125); [very thick] (2.75,0.5)–(2.75,1) node[midway,right] g; [dashed] (2.625,0)–(2.625,0.375) node[right=4pt] η_g; [dashed] (2.375,1)–(2.375,0.625) node[left=4pt] ϵ_f; [help lines] (0')–(2') node[midway,left] ; [help lines] (3')–(1') node[midway,left] ; [help lines] (3')–(2') node[midway,left] A; ] = [ [scale=3] [label=right:] (0') at (2,0) ; [label=above:] (1') at (2,1) ; [label=left:] (2')at (3,0) ; [label=below:] (3') at (3,1) ; [help lines] (0')–(1') node[midway,right] B;[very thick] (2.25,1)–(2.25,0.5) node[midway,left] g;(2.5,0.5) circle (0.6pt) node[left] σ^∗; [->,very thick] (2.5,0.5) arc(360:180:0.125); [very thick] (2.75,0.5) arc(0:180:0.125); [->,very thick] (2.75,0)–(2.75,0.5) node[midway,right] f; [dashed] (2.375,0)–(2.375,0.375) node[left=4pt] η_g^∗; [dashed] (2.625,1)–(2.625,0.625) node[right=4pt]ϵ_f^∗; [help lines] (0')–(2') node[midway,left] ; [help lines] (3')–(1') node[midway,left] ; [help lines] (3')–(2') node[midway,left] A; ] Then one can define the left planar trace Tr_L(ξ): 𝕀_A ⇒𝕀_A and right planar trace Tr_R(ξ): 𝕀_B ⇒𝕀_B of any 2-endomorphism ξ: f ⇒ f for any arbitrary 1-morphism f: A → B asTr_L(ξ):=ϵ_f^∗· (𝕀_f^∗∘ξ ) ·η_f= [ [scale=3] [label=above:B] (ct) at (2.5,0.4); [label=above:A] (cn) at (2.9,0); [label=right:] (0') at (2,0) ; [label=above:] (1') at (2,1) ; [label=left:] (2')at (3,0) ; [label=below:] (3') at (3,1) ; [help lines] (0')–(1'); [dashed] (2.5,0)–(2.5,0.2) node[midway,left=2pt] η_f; [dashed] (2.5,1)–(2.5,0.8) node[midway,left=2pt] ϵ_f^∗; [very thick] (2.7,0.4) arc(360:180:0.2); [->, very thick] (2.3,0.6)–(2.3,0.4) node[midway,left] f^∗; [very thick] (2.7,0.6)–(2.7,0.4); [very thick] (2.7,0.6) arc(0:180:0.2);(2.7,0.5) circle (0.6pt) node[right] ξ; [help lines] (0')–(2') node[midway,left] ; [help lines] (3')–(1') node[midway,left] ; [help lines] (3')–(2') node[midway,left] ; ],Tr_R(ξ):=ϵ_f· (ξ∘𝕀_f^∗ ) ·η_f^∗= [ [scale=3] [label=above:A] (ct) at (2.5,0.4); [label=above:B] (cn) at (2.9,0); [label=right:] (0') at (2,0) ; [label=above:] (1') at (2,1) ; [label=left:] (2')at (3,0) ; [label=below:] (3') at (3,1) ; [help lines] (0')–(1'); [dashed] (2.5,0)–(2.5,0.2) node[midway,left=2pt] η_f^∗; [dashed] (2.5,1)–(2.5,0.8) node[midway,left=2pt] ϵ_f; [very thick] (2.7,0.4) arc(360:180:0.2); [very thick] (2.3,0.6)–(2.3,0.4); [->, very 
thick] (2.7,0.6)–(2.7,0.4)node[midway,right] f^∗; [very thick] (2.7,0.6) arc(0:180:0.2);(2.3,0.5) circle (0.6pt) node[left] ξ; [help lines] (0')–(2') node[midway,left] ; [help lines] (3')–(1') node[midway,left] ; [help lines] (3')–(2') node[midway,left] ; ]One can further define the back 2-spherical trace Tr_B(ξ) and the front 2-spherical trace Tr_F(ξ) asTr_B(ξ):= Tr_R(e_B ∘ (ξ B^⋆))=Tr_L ((ξ A^⋆) ∘ i_A^⋆). Tr_F(ξ):= Tr_R(e_B^⋆∘ (B^⋆ξ)=Tr_L ((A^⋆ξ) ∘ i_A).Taking the right planar trace Tr_R(e_B ∘ (ξ B^⋆)) as example,it can be graphically expressed as[ [scale=0.8] [help lines] (5,1.5) circle (2.5) node[above=18pt] ; [help lines] (7.5,1.5) arc (0:-180:2.5 and 0.8); [help lines] at (3.8,0.5) B; [help lines] at (6,2.5) B^⋆;at (5,-0.5) η_f^∗;at (5,2.9) ϵ_f;at (5,4.3) e_B;at (5,-1.3) i_B;[help lines,dashed] (2.5,1.5) arc (180:0:2.5 and 0.8); [very thick] (5,1.2) ellipse (1 and 1.4);(4,1.2) circle (2pt) node[left] ξ; [->, very thick] (6,1.201)–(6,1.119) node[right] f^∗; ] ,i.e. evaluated by first stacking B^⋆ from back on Tr_R (ξ): 𝕀_B →𝕀_B and then composing i_B and e_B from bottom and top.In a spherical fusion 2-category, the back and front 2-spherical trace must agree, hence called 2-spherical trace, i.e. Tr_F(ξ)=Tr_B(ξ) =: Tr(ξ). The quantum dimension of a 1-morphism f: A→ B is defined as (f) := Tr(𝕀_f), and the quantum dimension of an object A is defined as (A):= (𝕀_A), where 𝕀_A is the identity 1-morphism. §.§ Normalized sections and retractionsFor 1-morphisms f,g: A → B in a spherical fusion 2-category, we can define the pairing [ · , · ]: (f,g) ×(g,f)→ k as[ ρ, ξ ] :=Tr(ρ·ξ)=Tr(ξ·ρ),where ρ∈(f,g) and ξ∈(g,f).Let (ρ, ξ) and (ρ', ξ') be two pairs of section and retraction maps in the direct sum decomposition A = ⊕ X.The pairing [ ρ', ξ ]=Tr(ρ' ·ξ)=δ^ξ'_ξ C (X), where C is a complex number. Then We introduce the normalized retraction and section maps in bra-ket notations as|ξ⟩:= ξ/√(C (X)) ⟨ξ| := ρ/√(C (X)),whose pairing is denoted as ⟨ξ' |ξ⟩ :=[ ξ', ξ ] = 1/C (X) [ ρ', ξ ] = δ^ξ'_ξNote that ξ can be considered as a basis of (A, X), while ξ is a basis of ((X, A). §.§ The calculation of 10j-symbolAs shown in eqn. (<ref>), eqn. (<ref>), and fig. <ref>,the calculation of 10j-symbol involves the normalized retractions |ζ^1⟩, |ζ^2⟩, |ζ^3⟩, |ζ^4⟩, and |ζ^5⟩, where ζ^1, ζ^2, ζ^3, ζ^4, and ζ^5 are the corresponding retractions.The bimodule maps |Z⟩ and |YWXJ⟩ are the compositions of the normalized retractions as shown in the figure and the equations.With the spherical structure, we can do some surgery depicted as[ [scale=0.4] [red] (-5,1.5) arc(180:0:1); [red] (-5,1.5)–(-5,-1.5); [red] (-3,1.5)–(-3,-1.5)node[midway,left]Z; [red] (-5,-1.5) arc(-180:0:1);[](0,2).. controls (-1.5,1) and (-1.5,-1) ..(0,-2) node[midway,right];[](0,2).. controls (1.5,1) and (1.5,-1) ..(0,-2) node[midway,right]; [] (0,2) circle (0.1) node[above=0.2] |ζ^1⟩; [] (0,-2) circle (0.1) node[above=0.2] ⟨ζ^1|; [] (-1.5,4) circle (0.1) node[left=0.2] |ζ^2⟩; [] (-1.5,-4) circle (0.1) node[left=0.2] ⟨ζ^2|;[](0,2).. controls (3,6)and (3,-6) ..(0,-2);[red](0,2).. controls (-1.5,2.8)and (0,3.2) ..(-1.5,4) node[midway,right];[red](0,-2).. controls (-1.5,-2.8)and (0,-3.2) ..(-1.5,-4) node[midway,right]X;[](-1.5,4).. controls (5,9)and (5,-9) ..(-1.5,-4);[](-1.5,4).. controls (-4,6)and (5,9) ..(5,0);[](-1.5,-4).. controls (-4,-6)and (5,-9) ..(5,0); [](-1.5,4).. controls (-3,3)and (-3,-3) ..(-1.5,-4); ] = [ [scale=0.4][](0,2).. controls (-1,1) and (-1,-1) ..(0,-2) node[midway,left];[](0,2).. 
controls (1.5,1) and (1.5,-1) ..(0,-2) node[midway,left]; [] (0,2) circle (0.1) node[above=0.2] |ζ^1⟩; [] (0,-2) circle (0.1) node[above=0.2] ⟨ζ^1|; [] (-1.5,4) circle (0.1) node[left=0.2] |ζ^2⟩; [] (-1.5,-4) circle (0.1) node[left=0.2] ⟨ζ^2|;[](0,2).. controls (3,6)and (3,-6) ..(0,-2);[red](0,2).. controls (-1.5,2.8)and (0,3.2) ..(-1.5,4) node[midway,right];[red](0,-2).. controls (-1.5,-2.8)and (0,-3.2) ..(-1.5,-4) node[midway,right]Z;[](-1.5,4).. controls (5,9)and (5,-9) ..(-1.5,-4);[](-1.5,4).. controls (-4,6)and (5,9) ..(5,0);[](-1.5,-4).. controls (-4,-6)and (5,-9) ..(5,0); [](-1.5,4).. controls (-3,3.5)and (-3.5,2) ..(-3.5,0);[](-1.5,-4).. controls (-3,-3.5)and (-3.5,-2) ..(-3.5,0); [red](-0.8,1.6).. controls (-1.1,1) and (-1.1,-1) ..(-0.8,-1.6) node[midway,left];[red](-0.8,1.6).. controls (-0.5,2.5) and (-1.8,5) ..(-2.8,2) node[midway,left];[red](-0.8,-1.6).. controls (-0.5,-2.5) and (-1.8,-5) ..(-2.8,-2) node[midway,left];[red](-2.8,2).. controls (-3.2,1) and (-3.2,-1) ..(-2.8,-2) node[midway,right]Z; ] = [ [scale=0.4][](0,2).. controls (-1,1) and (-1,-1) ..(0,-2) node[midway,left];[](0,2).. controls (1.5,1) and (1.5,-1) ..(0,-2) node[midway,left]; [] (0,2) circle (0.1) node[above=0.2] |ζ^1⟩; [] (0,-2) circle (0.1) node[above=0.2] ⟨ζ^1|; [] (-1.5,4) circle (0.1) node[left=0.2] |ζ^2⟩; [] (-1.5,-4) circle (0.1) node[left=0.2] ⟨ζ^2|;[](0,2).. controls (3,6)and (3,-6) ..(0,-2);[](-1.5,4).. controls (5,9)and (5,-9) ..(-1.5,-4);[](-1.5,4).. controls (-4,6)and (5,9) ..(5,0);[](-1.5,-4).. controls (-4,-6)and (5,-9) ..(5,0); [](-1.5,4).. controls (-3,3.5)and (-3.5,2) ..(-3.5,0);[](-1.5,-4).. controls (-3,-3.5)and (-3.5,-2) ..(-3.5,0); [red](0,2).. controls (-2,4)and (-2,-4) ..(0,-2);[red](-1.5,4).. controls (-0.5,3)and (-2.2,2) ..(-2.2,0);[red](-1.5,-4).. controls (-0.5,-3)and (-2.2,-2) ..(-2.2,0); ]and get that (Z) ⟨ Z' | Z ⟩=δ^Z_Z'(Z) ⟨ζ'^1ζ'^2|ζ^1ζ^2⟩=δ^Z_Z'δ^ζ'^1_ζ^1δ^ζ'^2_ζ^2, where ζ^1ζ^2 are the juxtaposition of ζ^2 and ζ^1 as shown in below.[scale=1.6]at (-0.5,2) (a);at (4,2) (b); [help lines, very thick] (0,0)–(0.6,0) node[midway,below] K;[help lines, very thick](0.6,0).. controls (1,0.2) and (1.4,0.4) ..(2.2,0.4) node[above right] D;[help lines, very thick](0.6,0).. controls (0.8,-0.15) and (1.0,-0.2) ..(1.2,-0.2) node[below left] M_1;[help lines, very thick](1.2,-0.2).. controls (1.4,-0.35) and (1.6,-0.4) ..(1.8,-0.4) node[above right] M_2;[help lines, very thick](1.2,-0.2).. controls (1.4,-0.05) and (1.8,0) ..(2,0) node[above right] C; [help lines, very thick] (0,0)–(0,1.5) node[midway,below] ; [help lines, very thick] (0,1.5)–(0.6,1.5) node[midway,below] ; [help lines, very thick] (1.8,-0.4)–(1.8,1.1); [help lines, very thick] (2,0)–(2,1.5); [help lines, very thick] (2.2,0.4)–(2.2,1.9);[help lines, very thick](0.6,1.5).. controls (1,1.3) and (1.4,1.1) ..(1.8,1.1) node[above left] ;[help lines, very thick](0.6,1.5).. controls (0.8,1.65) and (1.0,1.7) ..(1.2,1.7) node[above left] N_2;[help lines, very thick](1.2,1.7).. controls (1.4,1.55) and (1.8,1.5) ..(2,1.5) node[above left] ;[help lines, very thick](1.2,1.7).. controls (1.4,1.85) and (1.8,1.9) ..(2.2,1.9) node[above left] ;[very thick](0.6,0).. controls (0.7,0.5) and (0.8,0.7) ..(0.9,0.75) node[midway,left] P_1;[very thick](1.2,-0.2).. controls (1.1,0.4) and (1.0,0.7) ..(0.9,0.75) node[midway,right] P_2;[very thick](0.6,1.5).. controls (0.7,1) and (0.8,0.8) ..(0.9,0.75) node[midway,left] Z;[very thick](1.2,1.7).. 
controls (1.1,1.1) and (1.0,0.8) ..(0.9,0.75) node[midway,right] Q_3; [] (0.9,0.75) circle (0.04) node[right=0.2,red] |ζ^1⟩; [very thick](4.6,0).. controls (4.7,0.5) and (4.8,0.7) ..(4.9,0.75) node[midway,left] ;[very thick](5.2,-0.2).. controls (5.1,0.4) and (5.0,0.7) ..(4.9,0.75) node[midway,right] ;[very thick](4.6,1.5).. controls (4.7,1) and (4.8,0.8) ..(4.9,0.75) node[midway,left] ;[very thick](5.2,1.7).. controls (5.1,1.1) and (5.0,0.8) ..(4.9,0.75) node[midway,right] ; [] (4.9,0.75) circle (0.04) node[right=0.2,red] ζ^2; [very thick](5.6,0).. controls (5.7,0.5) and (5.8,0.7) ..(5.9,0.75) node[midway,left] ;[very thick](6.2,-0.2).. controls (6.1,0.4) and (6.0,0.7) ..(5.9,0.75) node[midway,right] ;[very thick](5.6,1.5).. controls (5.7,1) and (5.8,0.8) ..(5.9,0.75) node[midway,left] ;[very thick](6.2,1.7).. controls (6.1,1.1) and (6.0,0.8) ..(5.9,0.75) node[midway,right] ; [] (5.9,0.75) circle (0.04) node[right=0.2,red] ζ^1;Consequently, we can define an invertible linear mapT=∑_Z|Z⟩⟨ζ^1ζ^2|from V_ζ^1ζ^2, the vector space spanned by ζ^1ζ^2, to V_Z, the vector space spanned by | Z ⟩.And its inverse readsT^-1= ∑_Z(Z) |ζ^1ζ^2⟩⟨Z|,which satisfies T^-1T=𝕀_V_ζ^1ζ^2 and TT^-1=𝕀_V_Z.Similarly, we have ⟨ζ'^3ζ'^4ζ'^5|ζ^3ζ^4ζ^5⟩=δ^ζ'^3_ζ^3δ^ζ'^4_ζ^4δ^ζ'^5_ζ^5. We can thus define another linear mapS=∑_UYWXJ|YWXJ⟩⟨ζ^3ζ^4ζ^5|,from V_ζ^3ζ^4ζ^5 to V_YWXJ = V_Z,and its right inverseS^-1=∑_UYWXJ(X)D^WYU_J|ζ^3ζ^4ζ^5⟩⟨YWXJ|that satisfies SS^-1=𝕀_V_Z, whereD^WYU_J=(W) (Y)/(U) (J) (End_Σ(J)) n(J),n(J) is the number of equivalence classes of simple objects in the connected component of object J, (End_Σ(J)) is the dimension of the fusion 1-category (J,J), and U is a simple 1-morphism in decomposition (Y∘ W,Q_2 ∘ Q_3)=⊕_U(Y∘ W,U) ∘(U,Q_2 ∘ Q_3).As a result, we can introduce a linear map G = T^-1 S from V_ζ^3ζ^4ζ^5 to V_ζ^1ζ^2.According to eqn. (<ref>), (<ref>), and (<ref>), we haveG=T^-1S= ∑_UYWXZJ(Z) |ζ^1ζ^2⟩⟨Z|YWXJ⟩⟨ζ^3ζ^4ζ^5|= ∑_UYWXZJ G_Z^YWXJ|ζ^1ζ^2⟩⟨ζ^3ζ^4ζ^5|,i.e. the 10j-symbol G_Z^YWXJ is just the matrix element of the linear map G. Though the dimension of the vector spaces V_ζ^1ζ^2 and V_ζ^3ζ^4ζ^5 are in general different, the dimension of V_ζ^1ζ^2, V_Z and V_YWXJ are same and called the dimension of the 10j-symbol.Since T^-1SS^-1T=𝕀_V_ζ^1ζ^2, it is clear that the right inverse G^-1 is given byG^-1=S^-1T= ∑_UYWXZ(X)D^WYU_J|ζ^3ζ^4ζ^5⟩⟨YWXJ|Z⟩⟨ζ^1ζ^2|= ∑_UYWXZ D^WYU_J (G^-1)^Z_YWXJ|ζ^3ζ^4ζ^5⟩⟨ζ^1ζ^2|.Please note that G_Z^YWXJ and (G^-1)^Z_YWXJ characterize the transformation between the two bases |YWXJ⟩ and | Z ⟩.And G is only right invertible because the basis |YWXJ⟩ is over-complete. § FUSION 2-CATEGORY ΣSVEC In this section, we will give a brief introduction of a spherical fusion 2-category ΣsVec, the condensation completion of the braided fusion 1-category sVec. sVec is the category of finite dimensional super vector spaces, which consists of the following data: * Two simple objects: , the one-dimensional vector space of grade 0, and f, the one-dimensional vector space of grade 1. They can be regarded as boson and fermion living on the surface of a 3+1D topological order, respectively.* Tensor product: the tensor product of graded vector spaces.* Fusion rule:= ,f=f,f f= . * Trivial associator, i.e. 
x (yz) = (xy)z for x, y, z ∈{, f}.* The braiding: trivial except c_f, f=-1, which is consistent with fermionic statistics.In the calculation, we will choose a basis for each objects in sVec and take the following nomenclatures * The basis of : { 0}* The basis of f: { 1}* The basis of BC: { bc} or simply {bc}, where { b} is the basis of B, and { c} is the basis of C.For example, the basis of AA with A := ⊕ f is denoted as { 00,01,10,11} or just {00, 01, 10, 11}. ΣsVec is constructed following the definition in Sec. <ref> and will be illustrated in detail below. §.§ Objects in ΣsVecObjects in ΣsVec are separable algebras in sVec.There are two simple separable algebras in sVec, the trivial algebra :=(, m_: →) and a non-trivial algebra (A ≡⊕ f, m_A: AA → A), where m_ is given by0 ∙ 0 =0,and m_A is given bya ∙ b =a+b,for a, b ∈{0, 1}.Please note that the addition within the brackets is always interpreted modulo 2.In the following, we will denote the second algebra asfor simplicity.All the other separable algebras in sVec are (Morita) equivalent to eitheror  <cit.>.For example, ≅, which will be demonstrated in Sec. <ref>.As a consequence, there are only two equivalent classes of the simple objects, and we takeandas the representative objects.§.§ 1-morphisms in ΣsVec1-morphisms in ΣsVec are bimodules in sVec. For example, given two arbitrary objects C and B in ΣsVec, i.e. two separable algebras in sVec, a 1-morphism M in (B, C) is a C-B-bimodule _C M_B in sVec.In general, a C-B-bimodule in a categorycan be regarded as a left CB^rev-module (equivalent to a CB^rev--bimodule) , where B^rev≡ (B, m_B^rev) with multiplication m_B^rev = m_B · c_B, B is also a separable algebra.Therefore, the problem of finding simple bimodules reduces to the problem of finding simple left modules, which can be done by noting that all of the simple left modules of a separable algebra D incan be realized by a direct summand of free left-modules Dx for all of the simple objects x in .[Simple left modules of ]As mentioned above, the multiplication m_A ofis given bya ∙ b =a+b.Then we can define an algebrawith multiplication (m_A ⊗ m_A) · (id_Ac_A, A id_A), or simplya b ∙ c d=(-)^bca+cb+d. AA is obvious a left -modules with the action given by the algebra multiplication, and can be decomposed as AA = V⊕V', where V := (A, l_V) and V' := (A, l_V') are two simple left -modules with the action l_V 0 1 00 0 1 01 1 0 10 1 -0 11 -0 1l_V' 0 1 00 0 1 01 1 0 10 -1 0 11 0 -1and the section mapsV→ AA V'→ AA0↦ 0 0+ 11 0↦ 0 0- 111↦01- 10 1↦01+ 10 .Please check the appendix <ref> for details.V and V' are not isomorphic to each other since one can not find a module map, a map u that preserves both the Z_2 grading and the algebra action, between them.However, there is an invertible module map from V⊗ f to V':0_V 1↦ 1_V',1_V 1↦ 0_V',where the subscript shows explicitly which module the basis belongs to.Therefore, V' is isomorphic to V f ≡ Vf, and we take V and Vf as the two representative simple AA--bimodules in the calculation.Following our nomenclatures, the basis of Vf is denoted as {01, 11}.Since V ≅ V, we sometimes denote the basis of V as {00, 10} for a unified notation with the basis of Vf.[Simple A-A-bimodules] A-A-bimodules can be regarded as left A A^rev-modules, where the multiplication of A^rev is given bya ∙ b =(-)^aba+b.Therefore, the multiplication in AA^rev is justab ∙ cd:= (-)^b(c+d)a+cb+d. 
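Both the decomposition of A ⊗ A above and the decomposition of A ⊗ A^rev carried out next amount to the same computational step: splitting the regular module of a small graded algebra into its simple summands. A rough numerical sketch of this step is given below; it is our own illustration under the stated assumptions (in particular, that the eigenspaces of a generic element of the even commutant are already the simple summands, which holds for the small algebras appearing here), not the program accompanying this paper.

# Rough sketch: split the free (regular) left module of a graded algebra D,
# given by the Z2-degrees of its basis and structure constants m[i, j, k].
import numpy as np

def split_regular_module(degrees, m, seed=1):
    d = len(degrees)
    m = np.asarray(m, dtype=complex)
    L = [m[i, :, :].T for i in range(d)]   # left action of e_i: (L_i)_{kj} = m[i, j, k]
    eye = np.eye(d)
    rows = []
    # module-map condition [X, L_i] = 0, written as linear conditions on the entries of X
    for Li in L:
        comm = (np.einsum('ai,jc->acij', eye, Li)
                - np.einsum('ai,jc->acij', Li, eye))
        rows.append(comm.reshape(d * d, d * d))
    # morphisms in sVec are even maps: X[a, b] = 0 unless |e_a| = |e_b|
    for a in range(d):
        for b in range(d):
            if degrees[a] != degrees[b]:
                r = np.zeros(d * d); r[a * d + b] = 1.0
                rows.append(r[None, :])
    M = np.vstack(rows)
    _, s, vh = np.linalg.svd(M)
    null = vh[np.sum(s > 1e-9):]           # basis of the even commutant End_D(D)
    rng = np.random.default_rng(seed)
    X = sum(rng.normal() * v.reshape(d, d) for v in null)
    vals, vecs = np.linalg.eig(X)          # eigenspaces of a generic commutant element
    summands = {}
    for val, vec in zip(np.round(vals, 6), vecs.T):
        summands.setdefault(val, []).append(vec)
    return [np.array(block) for block in summands.values()]

Fed the structure constants of A ⊗ A (with the Koszul sign included in the multiplication), this splits the four-dimensional regular module into two two-dimensional summands, matching A ⊗ A = V ⊕ V' above; run on A ⊗ A^rev it should likewise return the summands W and W' found below.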
Following the same method, we found two left A A^rev-modules W:= (A, l_W) and W' := (A, l_W') with the section mapsW→ AA^revW'→ AA^rev0↦0 0+ 1 1 0↦0 0- 1 11↦0 1+ 1 0 1↦0 1- 1 0 Then we can rewrite W and W' as A-A-bimodules with bimodule actions listed belowW 0 1 0 0 1 1 1 0 0 0 1 1 1 0W' 0 1 0 0 1 1 -1 -0 0 0 1 1 1 0. Alternatively, since A is a separable algebra, A itself can be regarded as a simple A-A-bimodule with the action given by the multiplication of the algebra.We can construct another bimodule fA := fA, where the left action is given by A (fA)f(AA) = f(A ∙ A) → fA.The basis of fA are denoted as {10, 11}. For a unified notation, we sometimes denote the basis of A as {00, 01}. It is clear that W ≅ A and W' ≅ fA, hence we will choose A and fA as the two representative simple A-A-bimodules. With the above methods, we can fix the choice of representative simple 1-morphisms.In the following calculation, we use only the 1-morphisms in (B,C D), for B, C, D ∈{, A}, and the corresponding representative bimodules are, , , , , , ,and , where =.Note that we have used the relation X = X= X. §.§ Composition of 1-morphismsThe composition of 1-morphisms (bimodules) is given by the relative tensor product of modules∘: (B,C) (C,D)→(B,D),(_C N_B, _D M_C) ↦ M∘ N:=_D M_C [C]_C N_B,where the relative tensor product M[C]N is given by a quotient map MN → M[C]N satisfying( m c )n =m ( c n) → m [C]n,∀ m ∈ M, c ∈ C, n ∈ N.In case of no confusion, we will simplify _D M_C [C]_C N_B to _D M [C] N_B.Below we will give some examples of the composition of 1-morphisms, which are going to be used in the following calculations. [Composition of _ V^rev _A A and ] V^rev is a bimodule induced from V. It is a same vector space as V, and the right action on V^rev is induced from the left action on V throughd a b:= b ad = (-)^(a+d)b ()^ba+b+d.Then the composition of V^rev and V reads_ V^rev _A A∘ = _ V^rev[AA] V_ = ,with the quotient mapV^rev V→0_V^rev 0_V+i1_V^rev 1_V↦ 00_V^rev 1_V+i1_V^rev 0_V↦ 0.The detailed calculation can be found in Appendix. <ref> [∘] [A]= via the quotient map AA→ A0 0 +11↦ 00 1 +10↦ 1 0 0 -11↦ 00 1 -10↦ 0In the subsequent discussion, we will implicitly omit basis vectors that map to 0 (for example the last two lines of the preceding equations) for the sake of brevity. [(AA) ∘ Vf](AA) ∘ Vf = Vf with quotient map given by the left action on VfAAVf→ Vf 0 0 01+0 1 11+i1 0 11+i1 1 01 ↦01 0 0 11+0 1 01-i1 0 01-i1 1 11 ↦11.(AAVf) ∘ Vf = VfVf with quotient mapAAVfVf→ VfVf0 0 0101-0 1 0111-i1 0 0111+i1 1 0101 ↦0101 0 0 1101+0 1 1111+i1 0 1111+i1 1 1101 ↦1101 0 0 0111-0 1 0101+i1 0 0101-i1 1 0111 ↦0111 0 0 1111+0 1 1101-i1 0 1101-i1 1 1111 ↦1111. ∘ = AA with quotient mapAAA→ AA000 +011↦ 00 100 +111↦ 10 001 +010↦ 01 101 +110↦ 11. (A ) ∘ = AA with quotient mapAAA→ AA000 +101↦ 00 010 -111↦ 10 001 +100↦ 01 011 -110↦ 11.§.§ 2-morphisms in ΣsVec 2-morphisms in ΣsVec are bimodule maps. 
For two arbitrary C-B-bimodules _C M_B and _C N_B, a bimodule map is a linear map u between the two vector spaces M and N satisfyingu(c m)=cu(m),u(mb)=u(m)b,∀ c∈ C, b∈ B, m∈ M.For given bases of M and N, the bimodule map can be expressed as a matrix, while the composition of bimodule maps is just matrix multiplication.And it is obvious that the product of u and any nonzero complex number z is also a bimodule map.§.§ Morita equivalence of objects in ΣsVecTwo algebras B and C are Morita equivalent if and only if there exists an invertible bimodule _B M_C, or in other words, there is an invertible 1-morphism between B and C.Morita equivalence is of particular importance as it allows us to concentrate on a finite number of equivalent classes of objects, rather than an infinite number of objects inΣsVec. [A A is Morita equivalent to ] We will show that _A A V _ is invertible. We have shown that _ V^rev[A A] V_ = in example <ref>.For _AA V V^rev _AA, there is an invertible AA-AA-bimodule map V V^rev→ A A given by0_V0_V^rev ↦00 +11, 1_V1_V^rev ↦ -00 -11, 1_V0_V^rev ↦01 -10, 0_V1_V^rev ↦ -01 +10.Therefore, we have _AA V V^rev _AA≅ _AA A A _AA, hence _A A V _ is invertible, and A A is Morita equivalent to . With the same approach, we can find that there are just two Morita equivalent classes of simple objects in sVec, one is with , the other is with A.In the calculation of 10j-symbol, we only need consider the representative objects of these two classes, which are chosen asand A respectively. §.§ Tensor product of bimodulesRecall that for two arbitrary bimodules _C N_B and _Z P_Y, we can define their tensor product N P, which has a natural structure of C Z-B Y-bimodule (see Sec. <ref>). In sVec case, the bimodule structure is given bycznp:=(-)^znc nzpnpby:=(-)^bpnbpy.Since an object B can be regarded as the trivial 1-morphism _B B_B in (B,B), the tensor product _D M_CB can be defined as _D B M B _C B. [Tensor product ofand ] As discussed above, = *[_A AA](VA*)_A =: VA, where the left action is twisted by c_V,A, while the right A-action is untwisted and acted on A in VA. The action is expressed belowVA 00 01 10 11 000 00 01 10 11 010 10 11 00 01 100 i10 i11 -i00 -i01 110 -i00 -i01 i10 i11 001 01 00 -11 -10 011 11 10 -01 -00 101 i11 i10 i01 i00 111 -i01 -i00 -i11 -i10 0 00 01 10 11 1 01 00 11 10 §.§ The retraction bimodule maps Recall that the retraction bimodule maps in thedirect sum decomposition of A (BC)-K-bimodule Λ_A, B, C∘ (Q C)∘ P = ⊕ (A Y)∘ Xplays crucial roles in the calculation of 10j-symbol, where K, A, B, C, M, N ∈Σ_0, P∈(K,M C), Q∈(M, A B), X∈(K,A N), Y∈(N,B C).In the ΣsVec case, the representative objects are ΣsVec_0 = {, A}.The representative 1-morphisms are chose as (, ) = {, }, (, A ) = (,A) = {}, (A, ) = {}, (A, A ) = (A,A) = {, }, (, AA) = {, } and (A, AA) = {}. Note that the data of ΣsVec can be used to describe a 2+1D boundary of a 3+1D topological order.The object A represents a Majorana chain, while the objectrepresents the trivial chain (or just nothing) on the 2+1D boundary.The 1-morphisms are domain walls.For example,andare domain walls between trivial chains (or just nothing), hence are just boson and fermion particle respectively.andare the domain wall between the Majorana chain and the trivial chain, i.e. the Majorana zero modes. andare the particles lived on the Majorana chain, whereisdecorated by a fermion.Similarly,can be considered aswith a decorated fermion, and both of them are domain walls between vacuum and a double-Majorana-chain. 
is domain wall between a Majorana chain and a double-Majorana chain, hence a Majorana zero mode. Since all the associators in sVec are trivial, the associator bimodule Λ_A,B,C is just the identity bimodule 𝕀_ABC. Therefore, we will drop it in the following, and the direct sum decomposition reduces to(Q C)∘ P = ⊕ (A Y)∘ X. Below, we will give an example on how to calculate the retraction maps.[Retraction map in the decomposition of (V A)∘ A] In this example, we consider the retraction map in the direct sum decomposition of ( A)∘ = ⊕ (AY) ∘ X for Y ∈(, AA) = {, } and X ∈(A, A ) = {, }.Here we choose Y= and X=. The retraction can be expressed as[scale=1](a)at (-3,4);(b)at (-1,4);(c)at (1,4);(p) at (-2,3);(x) at (-1,2);(y)at (1,2);(q)at (2,3);(w)at (0,3);(k)at (-1,0.5); [above] at (a) ; [above] at (b) ; [above] at (c) ; [above] at (p) V; [above] at (x) A; [below] at (k) ;(a) – (p);[dashed] (p) – (x);(b) – (p);(c) – (x);(x) – (k); [thick,->] (1,2)–(5,2) node[midway,above] ;(a1)at (5,4);(b1)at (7,4);(c1)at (9,4);(p1) at (8,3);(x1) at (7,2);(y1)at (9,2);(q1)at (10,3);(w1)at (8,3);(k1)at (7,0.5); [above] at (a1) ; [above] at (b1) ; [above] at (c1) ; [above] at (p1) V; [above] at (x1) A ; [below] at (k1) ;(a1)–(x1);(b1)–(p1);(c1)–(p1);[dashed] (p1)–(x1);(x1)–(k1);With the standard procedure, we have ( A)∘ = V ⊗ A = VA with the quotient map VAA→ VA0_V 0_A 0_A+0_V 1_A 1_A↦00 0_V 0_A 1_A+0_V 1_A 0_A↦01 1_V 0_A 0_A+1_V 1_A 1_A↦10 1_V 0_A 1_A+1_V 1_A 0_A↦11.Similarly, we have the bimodule (A)∘ =: VA. VA is the same vector space as VA, but with different action, which is presented belowVA 00 01 10 11000 00 01 10 11 010 i10 i11 -i00 -i01 100 01 00 -11 -10 110 -i11 -i10 -i01 -i00 001 10 11 00 01 011 -i00 -i01 i10 i11 101 -11 -10 01 00 111 -i01 -i00 -i11 -i10 000 01 10 11 101 00 11 10 The quotient map is given byAVA→VA 0_A 0_V 0_A+1_A 0_V 1_A↦00 0_A 0_V 1_A+1_A 0_V 0_A↦01 0_A 1_V 0_A-1_A 1_V 1_A↦10 0_A 1_V 1_A-1_A 1_V 0_A↦11 Since, there is an invertible bimodule map ζ from VA to VA defined as00↦1/√(2)(00 +11), 01↦1/√(2)(01 +10), 10↦1/√(2)(-i01 +i10), 11↦1/√(2)(-i00 +i11).We have the direct sum decomposition (V A)∘ A = (A V)∘ A with ζ as the retraction map and its reverse ζ^-1 as the section map.§.§ The interchangers Another important bimodule map in our calculation is the interchanger ϕ_[_C]N_B, [_Z]P_Y, which is given by(NZ)∘(B P) (N∘ B) (Z∘ P) ≅ N P≅ (C∘ N)(P∘ Y) (C P)∘ (N Y),where the 2-morphism c̃_B, Z;N, P is induced from the braiding c_B, Z in sVec as shown in eqn. (<ref>).As an example, we consider the interchanger ϕ_, given byϕ_Vf, Vf = c_Vf,Vf; A A,^-1∘θ∘c_,AA; Vf, Vf,where θ is the 2-isomorphism Vf((A A)∘ Vf) ≅ ((A A)∘ Vf) Vf.The interchanger can be depicted as(a)at (-3,4);(b)at (-1,4);(c)at (1,4);(d)at (3,4);(p) at (-2,3);(x) at (-1,2);(z)at (0,1);(y)at (1,2);(q)at (2,3);(w)at (0,3);(k)at (0,-0.5); [above] at (a) ; [above] at (b) ; [above] at (c) ; [above] at (d) ; [above] at (p) Vf; [above left] at (y) Vf; [above] at (z) ; [below] at (k) ;(a) – (p);[dashed] (p) –(z) – (k);(b) – (p);(c) – (y);(d) – (y); [dashed] (y) – (z); Ic(a)at (-3,4);(b)at (-1,4);(c)at (1,4);(d)at (3,4);(p) at (-2,3);(x) at (-1,2);(z)at (0,1);(y)at (1,2);(q)at (2,3);(w)at (0,3);(k)at (0,-0.5); [above] at (a) ; [above] at (b) ; [above] at (c) ; [above] at (d) ; [above right] at (x) Vf; [above] at (q) Vf; [above] at (z) ; [below] at (k) ;(a) – (x);[dashed] (x) –(z) – (k);(b) – (x);(c) – (q);(d) – (q);[dashed] (q) – (z);We start from c_Vf, Vf; A A, and c_, AA; Vf, Vf, which are computed in the following examples. 
c_Vf, Vf; A A,: (AAVf) ∘ Vf →((AA) ∘ Vf)Vf is induced from c_Vf, Vf in sVec, which is given by01_Vf01_Vf ↦ -01_Vf01_Vf, 01_Vf11_Vf ↦11_Vf01_Vf, 11_Vf01_Vf ↦01_Vf11_Vf, 11_Vf11_Vf ↦11_Vf11_Vf. According to example <ref> and <ref>, we have (AA) ∘ Vf = Vf and (AAVf) ∘ Vf = VfVf.Therefore, we havec_Vf, Vf; A A,: VfVf→ VfVf,0101 ↦ -0101, 0111 ↦1101, 1101 ↦0111, 1111 ↦1111.[c_, AA; Vf, Vf] It is clear that c_, AA=𝕀_ AA. According to example <ref>, we have c_, AA; Vf, Vf=𝕀_VfVf. With these c̃, we can calculate the interchange bimodule maps ϕ_Vf, Vf. Since (AA) ∘ Vf = Vf (example <ref>), we have θ=𝕀_VfVf.With c_, AA; Vf, Vf=𝕀_VfVf, we have ϕ_Vf, Vf=c_Vf, Vf; A A,^-1, henceϕ_Vf, Vf: VfVf→ VfVf,0101 ↦ -0101, 0111 ↦1101, 1101 ↦0111, 1111 ↦1111. §.§ Quantum dimension The quantum dimension of a 1-morphism f of a spherical fusion 2-category is defined as (f) := Tr(𝕀_f), where the 2-spherical trace Tr(ξ) of a 2-morphism ξ is defined in eqn. (<ref>). Below is an example on the quantum dimension of the 1-morphismin ΣsVec.[Quantum dimension of f ≡] We start from the planar trace of 𝕀_f, the identity 2-morphism in (, ).The adjoint of f is f^* = with f^* ∘ f == *[_A]A*A_A and f ∘ f^* = [A]= *[_]A_with quotient map00 +11↦ 0 01 +10↦ 1.The units and counits are given byη_f:⇒*[_]A_ :0 ↦τ 0ϵ_f: *[_A]A*A_A⇒:00 ↦τ^-1 0;11 ↦τ^-1 0;01 ↦τ^-1 1;10 ↦τ^-1 1,andη_f^∗: ⇒*[_A]A*A_A:0 ↦γ ( 00+11);1 ↦γ ( 01+10),ϵ_f^∗: *[_]A_⇒:0 ↦γ^-1 0,where γ and τ are non-zero complex numbers.Then the planar traces of 𝕀_f readsTr_L(𝕀_f):⇒:0 ↦τγ^-1 0,Tr_R(𝕀_f): ⇒:0 ↦ 2 γτ^-1 0; 1 ↦ 2 γτ^-1 1.Thus the planar trace is in general dependent on the values of γ and τ, hence on the choices of the units and counits.We will show below that the spherical structure imposes extra constraints, hence largely reduces the freedom on the choices of units/counits and leads to a more deterministic planar trace.For ΣsVec, both of the objectsand A are self-dual with folds e_=i_ = and e_A=, i_A= respectively.According to eqn. (<ref>), the back 2-spherical trace of 𝕀_f readsTr_B (𝕀_f)= Tr_L ((𝕀_f ) ∘ i_)=Tr_L (𝕀_f)=τγ^-1, =Tr_R(e_A ∘ (𝕀_fA))= 2γτ^-1.Therefore, 2γτ^-1=τγ^-1, which leads to τγ^-1=±√(2).In the following, we will choose the units and counits such that the quantum dimensions are positive numbers, hence (𝕀_f) = √(2). With the same approach, we can compute the quantum dimensions of all the representative 1-morphisms (and their duals), which are all 1 except that ()=()=√(2) (same for their duals).We can also calculate the quantum dimension of the objectsand A, which are given by () := (𝕀_) = () = 1 and (A) := (𝕀_A) = () = 1 respectively.§ ONE EXAMPLE OF 10J-SYMBOL IN ΣSVEC In this section, we will show how to calculate G and G^-1 for P_1 = Q_1 = Q_3 =,P_2 =, and P_3 = Q_2 =, which has been depicted as Fig.<ref>.In the figure, the dashed and solid lines correspond to the objectand A respectively.For readers who want to skip the technical details, the results of this example can be found in eqn. (<ref>).The 10j-symbol is just the transformation between the two bases Z and YWXJ.We start from Z.Recall that YWXZ can only be chosen from the the representative 1-morphisms, which are , , , , , , , , and , hence we have Z =.The retraction ζ^1 is given by the direct sum decomposition (P_2 ) ∘ P_1 = ⊕_Z (AQ_3) ∘ Z.And we have (P_2 ) ∘ P_1 = ∘ = ⊕ and (AQ_3) ∘ Z = (A ) ∘ = *[_AA]AA*_⊕, where AA is same vector space as AA, but with different actions. 
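Before turning to the example, note that once the representative objects, 1-morphisms and normalized retractions are fixed, extracting a 10j-symbol is finite-dimensional linear algebra: every map in fig. <ref> becomes an explicit complex matrix, |Z⟩ and |YWXJ⟩ are products of such matrices, and the coefficients G_Z^YWXJ are read off by expanding |YWXJ⟩ in the basis {|Z⟩}. A minimal sketch of this last step is the following; it is an illustration of ours with hypothetical function names, not the program announced in the conclusion.

# Expand a bimodule map, realized as a matrix, in a basis of bimodule maps:
# target = sum_Z g[Z] * basis[Z], cf. |YWXJ> = sum_Z G_Z^{YWXJ} |Z>.
import numpy as np

def expand_in_basis(target, basis, tol=1e-9):
    A = np.stack([np.asarray(b, dtype=complex).reshape(-1) for b in basis], axis=1)
    t = np.asarray(target, dtype=complex).reshape(-1)
    g, _, rank, _ = np.linalg.lstsq(A, t, rcond=None)
    assert rank == len(basis), "candidate maps |Z> are not linearly independent"
    assert np.allclose(A @ g, t, atol=tol), "target is not in the span of {|Z>}"
    return g                               # the column of coefficients G_.^{YWXJ}

The expansion quoted in the next section for Y = W = f, X = 1 can be verified this way by feeding in the four matrices |Z_ab⟩ and the matrix |YWX⟩ given there.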
p_1 and p_2 is given byp_1 =1/√(2)  00 01 10 1100_V 1 0 0 i 01_Vf0 1 i 0 10_V 0 1 -i 0 11_Vf1 0 0 -i p_2 =1/√(2)  00 01 10 1100_V 1 0 0 -i 01_Vf0 i 1 0 10_V 0 -i 1 0 11_Vf1 0 0 i .It is obvious that the retraction map in the direct sum composition is given by ζ^1 = p_2^-1·𝕀_V ⊕ V_f· p_1 = p_2^-1·𝕀_V· p_1 + p_2^-1·𝕀_V_f· p_1 ≡ζ^1_0 + ζ^1_1 withζ^1_0 =1/2  00 01 10 11001 0 0 i 010 i 1 0 100 1 -i 0 11i 0 0 -1ζ^1_1 =1/2  00 01 10 11001 0 0 -i 010 -i 1 0 100 1 i 0 11-i 0 0 -1 ,where the subscript 0 suggests the bimodule map is between representative simple 1-morphisms with no decorated fermion, for example, ,and , while the subscript 1 suggests that the bimodule map is between representative simple 1-morphisms with a (decorated) fermion, for example, ,and . Since the normalization of ζ^1 is trivial, we haveζ^1 = ζ^1=  00 01 10 11001 0 0 0 010 0 1 0 100 1 0 0 110 0 0 -1 . In ΣsVec, the associator of tensor product α and the associator bimodules Λ are all trivial. For ζ̃^1, the associator of bimodule compositions λ is also trivial. The calculation of |ζ̃^1 ⟩ is vastly simplified and given by the following diagram(A A) (A A)(A A)(A A)(A A) ∘ (A A) = (A A) (A A) ∘(A A) = (A A)["𝕀_(A A)|ζ^1⟩ ", from=1-1, to=1-3] ["π_1"', from=1-1, to=3-1] ["|ζ̃^1 ⟩"', from=3-1, to=3-3] ["π_2", from=1-3, to=3-3] ,where π_1 and π_2 are the quotient map in the relative tensor product.With the standard protocol for selecting basis, the matrix of π_1 and π_2 are same, hence the matrix of ζ̃^1_0 (ζ̃^1_1) is same as the matrix of ζ^1_0 (ζ^1_1).Similarly, we can get the interchange bimodule mapIc=  00 01 10 11001 0 0 0 010 1 0 0 100 0 1 0 110 0 0 1 ,and the normalized retraction bimodule mapsζ̃^2_0 =  00 01 10 11001 0 0 0 010 0 0 0 100 0 1 0 110 0 0 0ζ̃^2_1 =  00 01 10 11000 0 0 0 010 1 0 0 100 0 0 0 110 0 0 1 . By composing ζ̃^1, Ic and ζ̃^2, we haveZ = ⊕_a,b| Z_ab⟩ =⊕_a,bζ̃_b^2·Ic·ζ̃_a^1 with| Z_00⟩ =1/2  00 01 10 11001 0 0 i 010 0 0 0 100 1 -i 0 110 0 0 0 ,| Z_01⟩ =1/2  00 01 10 11000 0 0 0 010 i 1 0 100 0 0 0 11i 0 0 -1 , | Z_10⟩ =1/2  00 01 10 11001 0 0 -i 010 0 0 0 100 1 i 0 110 0 0 0 , | Z_11⟩ =1/2  00 01 10 11000 0 0 0 010 -i 1 0 100 0 0 0 11-i 0 0 -1 .For YWXJ, in the case of ΣsVec, the object J is uniquely determined by Y, W, and X, hence it reduce to YWX, which has in total nine different choices.Here we show the result with Y=W= and X= as an example, where | YWX ⟩ reads| YWX ⟩=1/2  00 01 10 11000 0 0 -1 010 0 1 0 100 0 1 0 110 0 0 -1 ,Therefore, we have| YWX ⟩=i/2(| Z_00⟩-| Z_10⟩)+1/2(| Z_01⟩+| Z_11⟩) For better presentation of the 10j-symbols, we divide the representative 1-morphisms into three groups * the bimodules between Morita non-equivalent objects such as , *[_A]fA_, etc, which are denoted as μ* the bimodules between Morita equivalent objects and decorated by one fermion, for example, , ,etc, which are denoted as f.* the bimodules between Morita equivalent objects with no fermion decoration, for example, , ,etc, which are denoted as 1.Then eqn. 
(<ref>) becomes | ff1 ⟩=i/2(|μ_00⟩-|μ_10⟩)+1/2(|μ_01⟩+|μ_11⟩) or matrix elementsG^ff1_μ_00 = i/2 G^ff1_μ_01 = 1/2 G^ff1_μ_10 = -i/2 G^ff1_μ_11 = 1/2 The final results for the Fig.<ref> areG^T=1/√(2)  μ_00 μ_01 μ_10 μ_11μμμ_0001 0 1 0μμμ_0011 0 1 0μμμ_0100 -i 0 iμμμ_0110 -i 0 iμμμ_100-i 0i 0μμμ_101-i 0i 0μμμ_1100 1 01μμμ_1110 1 01 ff1i/√(2) 1/√(2)-i/√(2) 1/√(2) fff1/√(2)-i/√(2) 1/√(2) i/√(2) f111/√(2) i/√(2) 1/√(2)-i/√(2) f1f -i/√(2) 1/√(2) i/√(2) 1/√(2) 1f1 -i/√(2) 1/√(2) i/√(2) 1/√(2) 1ff -1/√(2)-i/√(2)-1/√(2) i/√(2) 1111/√(2)-i/√(2) 1/√(2) i/√(2) 11f -i/√(2)-1/√(2) i/√(2)-1/√(2), G^-1=1/4√(2)  μ_00 μ_01 μ_10 μ_11μμμ_0001 0 1 0μμμ_0011 0 1 0μμμ_0100 i 0 -iμμμ_0110 i 0 -iμμμ_100i 0-i 0μμμ_101i 0-i 0μμμ_1100 1 01μμμ_1110 1 01 ff1 -i/√(2) 1/√(2) i/√(2) 1/√(2) fff1/√(2) i/√(2) 1/√(2)-i/√(2) f111/√(2)-i/√(2) 1/√(2) i/√(2) f1fi/√(2) 1/√(2)-i/√(2) 1/√(2) 1f1i/√(2) 1/√(2)-i/√(2) 1/√(2) 1ff -1/√(2) i/√(2)-1/√(2)-i/√(2) 1111/√(2) i/√(2) 1/√(2)-i/√(2) 11fi/√(2)-1/√(2)-i/√(2)-1/√(2).One can easily check that GG^-1=1.§ CONCLUSIONIn conclusion, we propose a method to construct a class of fusion 2-category Σ and obtain all its categorical data. We apply this method to ΣsVec to work out all its categorical data explicitly. All the 10j-symbols of ΣsVec and the completecomputer program will be uploaded to github soon. With the example, we demonstrate that our method can be efficiently encoded to calculate all wanted categorical data in computer program.We are grateful to the helpful discussions with Thibault Décoppet. This work was supported by the National Key R&D Program of China (Grants No. 2022YFA1403700), NSFC (Grants No. 12141402), the Science, Technology and Innovation Commission of Shenzhen Municipality (No. ZDSYS20190902092905285), and Center for Computational Science and Engineering at Southern University of Science and Technology. TL is supported by start-up funding from The Chinese University of Hong Kong. LW and TL are also supported by funding from Hong Kong Research Grants Council (ECS No. 2191310). WX and CW are supported by Research Grants Council of Hong Kong (GRF 17311322) and National Natural Science Foundation of China (Grant No. 12222416). § DIRECT SUM DECOMPOSITION OF AA A⊗ A can be regarded as a left A⊗ A-module with the left actiona bc d:= a b ∙ c d.We start from a vector00+α 11,under the AA-action, we have01 ( 00+α 11) =01-α10, 10 ( 00+α 11) =10+α01=α(01+α^-110), 11 ( 00+α 11) =11-α00=-α( 00-α^-1 11).We found that it is closed if we choose α=-α^-1, namely α=±, which gives the direct sum decompositionAA = Ṽ⊕Ṽ',with Ṽ = Span{0 0+11, 01-10} and Ṽ'= Span{0 0-11, 01+10}.It can be easily show that Ṽ≅ V and Ṽ'̃≅ V' in example <ref>.§ RELATIVE TENSOR PRODUCT OF V^REV[AA] VThe bases of V^rev, A A, and V are denoted as m, ab, and n, with a, b, m, n ∈{0, 1}, respectively. Then, mn form a basis of V^rev V where m and n are bases of V^rev and V respectively.The relative tensor product V^rev[AA] V can be regarded as a subspace of V^rev V, with a quotient map satisfiedm (ab n) - ( mab )n ↦ 0,∀ a, b, m, n ∈{0, 1}. The right A A-action on V^rev is given bym a b=(-)^(a+m)b()^ba+b+m,while the left A A-action on V readsa bn = (-)^(b+n)a ()^aa+b+n. Some nontrivial ones from eqn. (<ref>) are given below0 ( 010) - ( 001 )0= 0 1- i10 ↦ 0,0( 100) - ( 010)0= i01 -10 ↦ 0, 0 ( 011) - ( 001 )1= 00 - i11 ↦ 0,where the first two leads to 0 1 ↦ 0, and 0 1 ↦ 0. 
And the result is we can choose span{ 0 0+i1 1 } as V^rev[AA] V.It is obvious that span{ 0 0+i1 1 }≅, we finally haveV^rev[AA] V = ,with quotient map 0 0+i1 1 ↦ 0. § 10J-SYMBOLS OF ΣSVEC In this appendix, by applying several special properties of ΣsVec, we can generate all the 10j-symbols of ΣsVec. First, we define two special classes of 10j-symbols and provide explicit form of all 10j-symbols in these two classes. Second, we elaborate how to generate all 10j-symbols from this two classes.§.§ Class AFor class A, we first fix the objects A, B, C, D and K to be . That means, the initial state and final state of 10j-symbol can only be chosen from three bimodules, which are ,and ⊕. And which specific bimodule is initial or final state depends on the choices of other objects and 1-morphisms, namely, M_1, M_2, N_1, N_2, P_1, P_2, P_3, Q_1, Q_2 and Q_3. Now, let's count the possible configurations of these objects and 1-morphisms. The number of the configurations is equal to the number of 10j-symbols in class A.The objects M_2, M_1, N_2, N_1 can be arbitrarily chosen fromand A, so there are 16 different configurations for objects. Once objects are all fixed, the configuration of 1-morphisms can be chosen accordingly. We list the number of possible configurations of 1-morphisms and the dimension of the vector space V_Z in table <ref>. And we can find that there are 128 one-dimensional 10j-symbols and 36 two-dimensional 10j-symbols in class A.With a proper gauge choice, the one-dimensional 10j-symbols have the same form, which is a 1× 4 matrixG= 𝐄𝐱(Q_3,P_3)  1 1 1 1, G^-1^T= 𝐄𝐱(Q_3,P_3)   1/4 1/4 1/4 1/4.where 𝐄𝐱(Q_3,P_3)=-1 if Q_3=P_3=f and 𝐄𝐱(Q_3,P_3)=1 for all other situations. This holds for the one-dimensional 10j-symbols not only in the class A, but also in all the other classes. It is also easy to check that GG^-1=1.The two-dimensional 10j-symbols in class A are listed below* AA, P_3 P_2 P_1=1μμ, Q_3 Q_2 Q_1=1μμG^-1^T= 1/4  μ 1 μ_0μ 1 μ_1μ f μ_0μ f μ_1f μ 1 f μ f 1 μ 1 1 μ f 1 1 0 0 1 -1/√(2) 1/√(2) 1/√(2)-1/√(2) f 0 1 1 01/√(2) 1/√(2) 1/√(2) 1/√(2), G=   μ 1 μ_0μ 1 μ_1μ f μ_0μ f μ_1f μ 1 f μ f 1 μ 1 1 μ f 1 1 0 0 1 -1/√(2) 1/√(2) 1/√(2)-1/√(2) f 0 1 1 01/√(2) 1/√(2) 1/√(2) 1/√(2). * AA, P_3 P_2 P_1=1μμ, Q_3 Q_2 Q_1=fμμG^-1^T= 1/4  μ 1 μ_0μ 1 μ_1μ f μ_0μ f μ_1f μ 1 f μ f 1 μ 1 1 μ f 1 0 1 1 01/√(2) 1/√(2) 1/√(2) 1/√(2) f 1 0 0 1 -1/√(2) 1/√(2) 1/√(2)-1/√(2), G=   μ 1 μ_0μ 1 μ_1μ f μ_0μ f μ_1f μ 1 f μ f 1 μ 1 1 μ f 1 0 1 1 01/√(2) 1/√(2) 1/√(2) 1/√(2) f 1 0 0 1 -1/√(2) 1/√(2) 1/√(2)-1/√(2). * AA, P_3 P_2 P_1=fμμ, Q_3 Q_2 Q_1=1μμG^-1^T= 1/4  μ 1 μ_0μ 1 μ_1μ f μ_0μ f μ_1f μ 1 f μ f 1 μ 1 1 μ f 1 0 1 1 01/√(2) 1/√(2) 1/√(2) 1/√(2) f 1 0 0 1 -1/√(2) 1/√(2) 1/√(2)-1/√(2), G=   μ 1 μ_0μ 1 μ_1μ f μ_0μ f μ_1f μ 1 f μ f 1 μ 1 1 μ f 1 0 1 1 01/√(2) 1/√(2) 1/√(2) 1/√(2) f 1 0 0 1 -1/√(2) 1/√(2) 1/√(2)-1/√(2). 
* AA, P_3 P_2 P_1=fμμ, Q_3 Q_2 Q_1=fμμG^-1^T= -1/4  μ 1 μ_0μ 1 μ_1μ f μ_0μ f μ_1f μ 1 f μ f 1 μ 1 1 μ f 1 1 0 0 1 -1/√(2) 1/√(2) 1/√(2)-1/√(2) f 0 1 1 01/√(2) 1/√(2) 1/√(2) 1/√(2), G= -   μ 1 μ_0μ 1 μ_1μ f μ_0μ f μ_1f μ 1 f μ f 1 μ 1 1 μ f 1 1 0 0 1 -1/√(2) 1/√(2) 1/√(2)-1/√(2) f 0 1 1 01/√(2) 1/√(2) 1/√(2) 1/√(2).* A A, P_3 P_2 P_1=1μμ, Q_3 Q_2 Q_1=μμ 1G^-1^T= 1/4  μμ 1_0μμ 1_1μμ f_0μμ f_1f 1 μ ff μ 11 μ 1f μμ_0 1 0 -1 0 0 1 1 0μ_1 0 1 0 1 1 0 0 1, G =   μμ 1_0μμ 1_1μμ f_0μμ f_1f 1 μ ff μ 11 μ 1f μμ_0 1 0 -1 0 0 1 1 0μ_1 0 1 0 1 1 0 0 1.* A A, P_3 P_2 P_1=1μμ, Q_3 Q_2 Q_1=μμ fG^-1^T= 1/4  μμ 1_0μμ 1_1μμ f_0μμ f_1f 1 μ ff μ 11 μ 1f μμ_0 0 1 0 -1 1 0 0 1μ_1 1 0 1 0 0 1 1 0, G=   μμ 1_0μμ 1_1μμ f_0μμ f_1f 1 μ ff μ 11 μ 1f μμ_0 0 1 0 -1 1 0 0 1μ_1 1 0 1 0 0 1 1 0.* A A, P_3 P_2 P_1=fμμ, Q_3 Q_2 Q_1=μμ 1G^-1^T= 1/4  μμ 1_0μμ 1_1μμ f_0μμ f_1f 1 μ ff μ 11 μ 1f μμ_0 0 -1 0 -1 -1 0 0 -1μ_1 1 0 -1 0 0 1 1 0, G=   μμ 1_0μμ 1_1μμ f_0μμ f_1f 1 μ ff μ 11 μ 1f μμ_0 0 -1 0 -1 -1 0 0 -1μ_1 1 0 -1 0 0 1 1 0.* A A, P_3 P_2 P_1=fμμ, Q_3 Q_2 Q_1=μμ fG^-1^T= 1/4  μμ 1_0μμ 1_1μμ f_0μμ f_1f 1 μ ff μ 11 μ 1f μμ_0 -1 0 -1 0 0 -1 -1 0μ_1 0 1 0 -1 1 0 0 1, G=   μμ 1_0μμ 1_1μμ f_0μμ f_1f 1 μ ff μ 11 μ 1f μμ_0 -1 0 -1 0 0 -1 -1 0μ_1 0 1 0 -1 1 0 0 1.* AA, P_3 P_2 P_1=μμ 1, Q_3 Q_2 Q_1=1 μμG^-1^T= 1/4  1 μμ_01 μμ_1f μμ_0f μμ_1μ f 1 μ ff μ 11 μ 1fμ_0 1 0 -1 0 0 1 1 0μ_1 0 1 0 1 1 0 0 1, G=   1 μμ_01 μμ_1f μμ_0f μμ_1μ f 1 μ ff μ 11 μ 1fμ_0 1 0 -1 0 0 1 1 0μ_1 0 1 0 1 1 0 0 1.* AA, P_3 P_2 P_1=μμ 1, Q_3 Q_2 Q_1=fμμG^-1^T= 1/4  1 μμ_01 μμ_1f μμ_0f μμ_1μ f 1 μ ff μ 11 μ 1fμ_0 0 -1 0 -1 -1 0 0 -1μ_1 1 0 -1 0 0 1 1 0, G=   1 μμ_01 μμ_1f μμ_0f μμ_1μ f 1 μ ff μ 11 μ 1fμ_0 0 -1 0 -1 -1 0 0 -1μ_1 1 0 -1 0 0 1 1 0.* AA, P_3 P_2 P_1=μμ f, Q_3 Q_2 Q_1=1μμG^-1^T= 1/4  1 μμ_01 μμ_1f μμ_0f μμ_1μ f 1 μ ff μ 11 μ 1fμ_0 0 1 0 -1 1 0 0 1μ_1 1 0 1 0 0 1 1 0, G=   1 μμ_01 μμ_1f μμ_0f μμ_1μ f 1 μ ff μ 11 μ 1fμ_0 0 1 0 -1 1 0 0 1μ_1 1 0 1 0 0 1 1 0.* AA, P_3 P_2 P_1=μμ f, Q_3 Q_2 Q_1=fμμG^-1^T= 1/4  1 μμ_01 μμ_1f μμ_0f μμ_1μ f 1 μ ff μ 11 μ 1f μ_0 -1 0 -1 0 0 -1 -1 0μ_1 0 1 0 -1 1 0 0 1, G=   1 μμ_01 μμ_1f μμ_0f μμ_1μ f 1 μ ff μ 11 μ 1f μ_0 -1 0 -1 0 0 -1 -1 0μ_1 0 1 0 -1 1 0 0 1. * A A A, P_3 P_2 P_1=1 μμ, Q_3 Q_2 Q_1=μ 1μG^-1^T= 1/4  μ 1μ_0μ 1μ_1μ fμ_0μ fμ_1f μ 1 fμ f 1μ 1 1μ f μ_0 1 0 0 1 -1/√(2) 1/√(2) 1/√(2)-1/√(2)μ_1 0 1 1 01/√(2) 1/√(2) 1/√(2) 1/√(2), G=   μ 1μ_0μ 1μ_1μ fμ_0μ fμ_1f μ 1 fμ f 1μ 1 1μ f μ_0 1 0 0 1 -1/√(2) 1/√(2) 1/√(2)-1/√(2)μ_1 0 1 1 01/√(2) 1/√(2) 1/√(2) 1/√(2).* A A A, P_3 P_2 P_1=1 μμ, Q_3 Q_2 Q_1=μ fμG^-1^T= 1/4  μ 1μ_0μ 1μ_1μ fμ_0μ fμ_1f μ 1 fμ f 1μ 1 1μ f μ_0 -1 0 0 -11/√(2)-1/√(2)-1/√(2) 1/√(2)μ_1 0 1 1 01/√(2) 1/√(2) 1/√(2) 1/√(2), G=   μ 1μ_0μ 1μ_1μ fμ_0μ fμ_1f μ 1 fμ f 1μ 1 1μ f μ_0 -1 0 0 -11/√(2)-1/√(2)-1/√(2) 1/√(2)μ_1 0 1 1 01/√(2) 1/√(2) 1/√(2) 1/√(2).* A A A, P_3 P_2 P_1=f μμ, Q_3 Q_2 Q_1=μ 1μG^-1^T= 1/4  μ 1μ_0μ 1μ_1μ fμ_0μ fμ_1f μ 1 fμ f 1μ 1 1μ f μ_0 0 -1 -1 0 -1/√(2)-1/√(2)-1/√(2)-1/√(2)μ_1 1 0 0 1 -1/√(2) 1/√(2) 1/√(2)-1/√(2), G=   μ 1μ_0μ 1μ_1μ fμ_0μ fμ_1f μ 1 fμ f 1μ 1 1μ f μ_0 0 -1 -1 0 -1/√(2)-1/√(2)-1/√(2)-1/√(2)μ_1 1 0 0 1 -1/√(2) 1/√(2) 1/√(2)-1/√(2).* A A A, P_3 P_2 P_1=f μμ, Q_3 Q_2 Q_1=μ fμG^-1^T= 1/4  μ 1μ_0μ 1μ_1μ fμ_0μ fμ_1f μ 1 fμ f 1μ 1 1μ f μ_0 0 -1 -1 0 -1/√(2)-1/√(2)-1/√(2)-1/√(2)μ_1 -1 0 0 -11/√(2)-1/√(2)-1/√(2) 1/√(2), G=   μ 1μ_0μ 1μ_1μ fμ_0μ fμ_1f μ 1 fμ f 1μ 1 1μ f μ_0 0 -1 -1 0 -1/√(2)-1/√(2)-1/√(2)-1/√(2)μ_1 -1 0 0 -11/√(2)-1/√(2)-1/√(2) 1/√(2). 
* A AA, P_3 P_2 P_1=μ 1μ, Q_3 Q_2 Q_1=1 μμG^-1^T= 1/4  μ 1μ_0μ 1μ_1μ fμ_0μ fμ_1f μ 1 fμ f 1μ 1 1μ f μ_0 1 0 0 1 -1/√(2) 1/√(2) 1/√(2)-1/√(2)μ_1 0 1 1 01/√(2) 1/√(2) 1/√(2) 1/√(2), G=   μ 1μ_0μ 1μ_1μ fμ_0μ fμ_1f μ 1 fμ f 1μ 1 1μ f μ_0 1 0 0 1 -1/√(2) 1/√(2) 1/√(2)-1/√(2)μ_1 0 1 1 01/√(2) 1/√(2) 1/√(2) 1/√(2).* A AA, P_3 P_2 P_1=μ 1μ, Q_3 Q_2 Q_1=f μμG^-1^T= 1/4  μ 1μ_0μ 1μ_1μ fμ_0μ fμ_1f μ 1 fμ f 1μ 1 1μ f μ_0 0 -1 -1 0 -1/√(2)-1/√(2)-1/√(2)-1/√(2)μ_1 1 0 0 1 -1/√(2) 1/√(2) 1/√(2)-1/√(2), G=   μ 1μ_0μ 1μ_1μ fμ_0μ fμ_1f μ 1 fμ f 1μ 1 1μ f μ_0 0 -1 -1 0 -1/√(2)-1/√(2)-1/√(2)-1/√(2)μ_1 1 0 0 1 -1/√(2) 1/√(2) 1/√(2)-1/√(2).* A AA, P_3 P_2 P_1=μ fμ, Q_3 Q_2 Q_1=1 μμG^-1^T= 1/4  μ 1μ_0μ 1μ_1μ fμ_0μ fμ_1f μ 1 fμ f 1μ 1 1μ f μ_0 -1 0 0 -11/√(2)-1/√(2)-1/√(2) 1/√(2)μ_1 0 1 1 01/√(2) 1/√(2) 1/√(2) 1/√(2), G=   μ 1μ_0μ 1μ_1μ fμ_0μ fμ_1f μ 1 fμ f 1μ 1 1μ f μ_0 -1 0 0 -11/√(2)-1/√(2)-1/√(2) 1/√(2)μ_1 0 1 1 01/√(2) 1/√(2) 1/√(2) 1/√(2).* A AA, P_3 P_2 P_1=μ fμ, Q_3 Q_2 Q_1=f μμG^-1^T= 1/4  μ 1μ_0μ 1μ_1μ fμ_0μ fμ_1f μ 1 fμ f 1μ 1 1μ f μ_0 0 -1 -1 0 -1/√(2)-1/√(2)-1/√(2)-1/√(2)μ_1 -1 0 0 -11/√(2)-1/√(2)-1/√(2) 1/√(2), G=   μ 1μ_0μ 1μ_1μ fμ_0μ fμ_1f μ 1 fμ f 1μ 1 1μ f μ_0 0 -1 -1 0 -1/√(2)-1/√(2)-1/√(2)-1/√(2)μ_1 -1 0 0 -11/√(2)-1/√(2)-1/√(2) 1/√(2).* A A A A, P_3 P_2 P_1=μ 1μ, Q_3 Q_2 Q_1=μ 1μG^-1^T= 1/4  μ 1μ_0μ 1μ_1μ fμ_0μ fμ_1f μ 1 fμ f 1μ 1 1μ f11/√(2) i/√(2) i/√(2) 1/√(2) -1+i/2 1+i/2 1+i/2 -1+i/2 f1/√(2)-i/√(2)-i/√(2) 1/√(2) -1-i/2 1-i/2 1-i/2 -1-i/2, G=   μ 1μ_0μ 1μ_1μ fμ_0μ fμ_1f μ 1 fμ f 1μ 1 1μ f11/√(2)-i/√(2)-i/√(2) 1/√(2) -1-i/2 1-i/2 1-i/2 -1-i/2 f1/√(2) i/√(2) i/√(2) 1/√(2) -1+i/2 1+i/2 1+i/2 -1+i/2.* A A A A, P_3 P_2 P_1=μ 1μ, Q_3 Q_2 Q_1=μ fμG^-1^T= 1/4  μ 1μ_0μ 1μ_1μ fμ_0μ fμ_1f μ 1 fμ f 1μ 1 1μ f1 -1/√(2) i/√(2) i/√(2)-1/√(2) 1+i/2 -1+i/2 -1+i/2 1+i/2 f -1/√(2)-i/√(2)-i/√(2)-1/√(2) 1-i/2 -1-i/2 -1-i/2 1-i/2, G=   μ 1μ_0μ 1μ_1μ fμ_0μ fμ_1f μ 1 fμ f 1μ 1 1μ f1 -1/√(2)-i/√(2)-i/√(2)-1/√(2) 1-i/2 -1-i/2 -1-i/2 1-i/2 f -1/√(2) i/√(2) i/√(2)-1/√(2) 1+i/2 -1+i/2 -1+i/2 1+i/2.* A A A A, P_3 P_2 P_1=μ fμ, Q_3 Q_2 Q_1=μ 1μG^-1^T= 1/4  μ 1μ_0μ 1μ_1μ fμ_0μ fμ_1f μ 1 fμ f 1μ 1 1μ f1 -1/√(2) i/√(2) i/√(2)-1/√(2) 1+i/2 -1+i/2 -1+i/2 1+i/2 f -1/√(2)-i/√(2)-i/√(2)-1/√(2) 1-i/2 -1-i/2 -1-i/2 1-i/2, G=   μ 1μ_0μ 1μ_1μ fμ_0μ fμ_1f μ 1 fμ f 1μ 1 1μ f1 -1/√(2)-i/√(2)-i/√(2)-1/√(2) 1-i/2 -1-i/2 -1-i/2 1-i/2 f -1/√(2) i/√(2) i/√(2)-1/√(2) 1+i/2 -1+i/2 -1+i/2 1+i/2.* A A A A, P_3 P_2 P_1=μ fμ, Q_3 Q_2 Q_1=μ fμG^-1^T= 1/4  μ 1μ_0μ 1μ_1μ fμ_0μ fμ_1f μ 1 fμ f 1μ 1 1μ f11/√(2) i/√(2) i/√(2) 1/√(2) -1+i/2 1+i/2 1+i/2 -1+i/2 f1/√(2)-i/√(2)-i/√(2) 1/√(2) -1-i/2 1-i/2 1-i/2 -1-i/2, G=   μ 1μ_0μ 1μ_1μ fμ_0μ fμ_1f μ 1 fμ f 1μ 1 1μ f11/√(2)-i/√(2)-i/√(2) 1/√(2) -1-i/2 1-i/2 1-i/2 -1-i/2 f1/√(2) i/√(2) i/√(2) 1/√(2) -1+i/2 1+i/2 1+i/2 -1+i/2.* AA, P_3 P_2 P_1=μμ 1, Q_3 Q_2 Q_1=μμ 1G^-1^T= 1/4  μμμ_00 μμμ_01 μμμ_10 μμμ_11fff f1f 1f1 11111/√(2)0 0i/√(2) 1/√(2) i/√(2) i/√(2) 1/√(2) f1/√(2)0 0 -i/√(2) 1/√(2)-i/√(2)-i/√(2) 1/√(2), G=   μμμ_00 μμμ_01 μμμ_10 μμμ_11fff f1f 1f1 1111√(2)0 0 -√(2)i1/√(2)-i/√(2)-i/√(2) 1/√(2) f√(2)0 0√(2)i1/√(2) i/√(2) i/√(2) 1/√(2).* AA, P_3 P_2 P_1=μμ 1, Q_3 Q_2 Q_1=μμ fG^-1^T= 1/4  μμμ_00 μμμ_01 μμμ_10 μμμ_11ff1 f11 1ff 11f1 01/√(2) i/√(2)0i/√(2) 1/√(2) 1/√(2) i/√(2) f 01/√(2)-i/√(2)0 -i/√(2) 1/√(2) 1/√(2)-i/√(2), G=   μμμ_00 μμμ_01 μμμ_10 μμμ_11ff1 f11 1ff 11f1 0√(2)-√(2)i 0 -i/√(2) 1/√(2) 1/√(2)-i/√(2) f 0√(2) √(2)i 0i/√(2) 1/√(2) 1/√(2) i/√(2).* AA, P_3 P_2 P_1=μμ f, Q_3 Q_2 Q_1=μμ 1G^-1^T= 1/4  μμμ_00 μμμ_01 μμμ_10 μμμ_11ff1 f11 1ff 11f1 0i/√(2) 1/√(2)01/√(2) i/√(2) i/√(2) 1/√(2) f 0 -i/√(2) 
1/√(2)01/√(2)-i/√(2)-i/√(2) 1/√(2), G=   μμμ_00 μμμ_01 μμμ_10 μμμ_11ff1 f11 1ff 11f1 0√(2)i√(2)01/√(2)-i/√(2)-i/√(2) 1/√(2) f 0 -√(2)i√(2)01/√(2) i/√(2) i/√(2) 1/√(2).* AA, P_3 P_2 P_1=μμ f, Q_3 Q_2 Q_1=μμ fG^-1^T= 1/4  μμμ_00 μμμ_01 μμμ_10 μμμ_11fff f1f 1f1 1111i/√(2)0 01/√(2) i/√(2) 1/√(2) 1/√(2) i/√(2) f -i/√(2)0 01/√(2)-i/√(2) 1/√(2) 1/√(2)-i/√(2), G=   μμμ_00 μμμ_01 μμμ_10 μμμ_11fff f1f 1f1 1111 -√(2)i 0 0√(2)-i/√(2) 1/√(2) 1/√(2)-i/√(2) f√(2)i 0 0√(2) i/√(2) 1/√(2) 1/√(2) i/√(2). * AA A, P_3 P_2 P_1=μμ 1, Q_3 Q_2 Q_1=μ 1μG^-1^T= 1/4  1 μμ_01 μμ_1f μμ_0f μμ_1μ f 1 μ ff μ 11 μ 1f 11/√(2) i/√(2)-1/√(2) i/√(2) i/√(2) 1/√(2) 1/√(2) i/√(2) f1/√(2)-i/√(2)-1/√(2)-i/√(2)-i/√(2) 1/√(2) 1/√(2)-i/√(2), G=   1 μμ_01 μμ_1f μμ_0f μμ_1μ f 1 μ ff μ 11 μ 1f 11/√(2)-i/√(2)-1/√(2)-i/√(2)-i/√(2) 1/√(2) 1/√(2)-i/√(2) f1/√(2) i/√(2)-1/√(2) i/√(2) i/√(2) 1/√(2) 1/√(2) i/√(2).* AA A, P_3 P_2 P_1=μμ 1, Q_3 Q_2 Q_1=μ fμG^-1^T= 1/4  1 μμ_01 μμ_1f μμ_0f μμ_1μ f 1 μ ff μ 11 μ 1f 1 -1/√(2) i/√(2) 1/√(2) i/√(2) i/√(2)-1/√(2)-1/√(2) i/√(2) f -1/√(2)-i/√(2) 1/√(2)-i/√(2)-i/√(2)-1/√(2)-1/√(2)-i/√(2), G=   1 μμ_01 μμ_1f μμ_0f μμ_1μ f 1 μ ff μ 11 μ 1f 1 -1/√(2)-i/√(2) 1/√(2)-i/√(2)-i/√(2)-1/√(2)-1/√(2)-i/√(2) f -1/√(2) i/√(2) 1/√(2) i/√(2) i/√(2)-1/√(2)-1/√(2) i/√(2).* AA A, P_3 P_2 P_1=μμ f, Q_3 Q_2 Q_1=μ 1μG^-1^T= 1/4  1 μμ_01 μμ_1f μμ_0f μμ_1μ f 1 μ ff μ 11 μ 1f 1i/√(2) 1/√(2) i/√(2)-1/√(2) 1/√(2) i/√(2) i/√(2) 1/√(2) f -i/√(2) 1/√(2)-i/√(2)-1/√(2) 1/√(2)-i/√(2)-i/√(2) 1/√(2), G=   1 μμ_01 μμ_1f μμ_0f μμ_1μ f 1 μ ff μ 11 μ 1f 1 -i/√(2) 1/√(2)-i/√(2)-1/√(2) 1/√(2)-i/√(2)-i/√(2) 1/√(2) fi/√(2) 1/√(2) i/√(2)-1/√(2) 1/√(2) i/√(2) i/√(2) 1/√(2).* AA A, P_3 P_2 P_1=μμ f, Q_3 Q_2 Q_1=μ fμG^-1^T= 1/4  1 μμ_01 μμ_1f μμ_0f μμ_1μ f 1 μ ff μ 11 μ 1f 1i/√(2)-1/√(2) i/√(2) 1/√(2)-1/√(2) i/√(2) i/√(2)-1/√(2) f -i/√(2)-1/√(2)-i/√(2) 1/√(2)-1/√(2)-i/√(2)-i/√(2)-1/√(2), G=   1 μμ_01 μμ_1f μμ_0f μμ_1μ f 1 μ ff μ 11 μ 1f 1 -i/√(2)-1/√(2)-i/√(2) 1/√(2)-1/√(2)-i/√(2)-i/√(2)-1/√(2) fi/√(2)-1/√(2) i/√(2) 1/√(2)-1/√(2) i/√(2) i/√(2)-1/√(2). * A A A, P_3 P_2 P_1=μ 1μ, Q_3 Q_2 Q_1=μμ 1G^-1^T= 1/4  μμ 1_0μμ 1_1μμ f_0μμ f_1f 1μ ffμ 11μ 1fμ 11/√(2) i/√(2)-1/√(2) i/√(2) i/√(2) 1/√(2) 1/√(2) i/√(2) f1/√(2)-i/√(2)-1/√(2)-i/√(2)-i/√(2) 1/√(2) 1/√(2)-i/√(2), G=   μμ 1_0μμ 1_1μμ f_0μμ f_1f 1μ ffμ 11μ 1fμ 11/√(2)-i/√(2)-1/√(2)-i/√(2)-i/√(2) 1/√(2) 1/√(2)-i/√(2) f1/√(2) i/√(2)-1/√(2) i/√(2) i/√(2) 1/√(2) 1/√(2) i/√(2).* A A A, P_3 P_2 P_1=μ 1μ, Q_3 Q_2 Q_1=μμ fG^-1^T= 1/4  μμ 1_0μμ 1_1μμ f_0μμ f_1f 1μ ffμ 11μ 1fμ 1i/√(2) 1/√(2) i/√(2)-1/√(2) 1/√(2) i/√(2) i/√(2) 1/√(2) f -i/√(2) 1/√(2)-i/√(2)-1/√(2) 1/√(2)-i/√(2)-i/√(2) 1/√(2), G=   μμ 1_0μμ 1_1μμ f_0μμ f_1f 1μ ffμ 11μ 1fμ 1 -i/√(2) 1/√(2)-i/√(2)-1/√(2) 1/√(2)-i/√(2)-i/√(2) 1/√(2) fi/√(2) 1/√(2) i/√(2)-1/√(2) 1/√(2) i/√(2) i/√(2) 1/√(2).* A A A, P_3 P_2 P_1=μ fμ, Q_3 Q_2 Q_1=μμ 1G^-1^T= 1/4  μμ 1_0μμ 1_1μμ f_0μμ f_1f 1μ ffμ 11μ 1fμ 1 -1/√(2) i/√(2) 1/√(2) i/√(2) i/√(2)-1/√(2)-1/√(2) i/√(2) f -1/√(2)-i/√(2) 1/√(2)-i/√(2)-i/√(2)-1/√(2)-1/√(2)-i/√(2), G=   μμ 1_0μμ 1_1μμ f_0μμ f_1f 1μ ffμ 11μ 1fμ 1 -1/√(2)-i/√(2) 1/√(2)-i/√(2)-i/√(2)-1/√(2)-1/√(2)-i/√(2) f -1/√(2) i/√(2) 1/√(2) i/√(2) i/√(2)-1/√(2)-1/√(2) i/√(2).* A A A, P_3 P_2 P_1=μ fμ, Q_3 Q_2 Q_1=μμ fG^-1^T= 1/4  μμ 1_0μμ 1_1μμ f_0μμ f_1f 1μ ffμ 11μ 1fμ 1i/√(2)-1/√(2) i/√(2) 1/√(2)-1/√(2) i/√(2) i/√(2)-1/√(2) f -i/√(2)-1/√(2)-i/√(2) 1/√(2)-1/√(2)-i/√(2)-i/√(2)-1/√(2), G=   μμ 1_0μμ 1_1μμ f_0μμ f_1f 1μ ffμ 11μ 1fμ 1 -i/√(2)-1/√(2)-i/√(2) 1/√(2)-1/√(2)-i/√(2)-i/√(2)-1/√(2) fi/√(2)-1/√(2) i/√(2) 1/√(2)-1/√(2) i/√(2) i/√(2)-1/√(2). 
§.§ Class BFor class B, we first fix the objects A, B, D, K to beand C to be A. That means, the initial state and final state of 10j-symbol can only be chosen from two bimodules, which areand ⊕. The number of different configurations of 10j-symbols are expressed in table <ref>. In class B, there are in total 144 one-dimensional 10j-symbols, 24 two-dimensional 10j-symbols and 1 four-dimensional 10j-symbol.The four-dimensional 10j-symbol has been calculated in Section <ref>, while the one-dimensional 10j-symbols are given in eqn. (<ref>). Below, we list all the two-dimensional 10j-symbols in class B. * A, P_3 P_2 P_1=1μ 1, Q_3 Q_2 Q_1=μμμG^-1^T= 1/4  μμ 1_0μμ 1_1μμ f_0μμ f_1f 1 μ ff μ 11 μ 1f μ 11/√(2) 1/√(2) 1/√(2)-1/√(2) 1/√(2)-1/√(2) 1/√(2) 1/√(2) f1/√(2)-1/√(2) 1/√(2) 1/√(2)-1/√(2)-1/√(2) 1/√(2)-1/√(2), G=   μμ 1_0μμ 1_1μμ f_0μμ f_1f 1 μ ff μ 11 μ 1f μ 11/√(2) 1/√(2) 1/√(2)-1/√(2) 1/√(2)-1/√(2) 1/√(2) 1/√(2) f1/√(2)-1/√(2) 1/√(2) 1/√(2)-1/√(2)-1/√(2) 1/√(2)-1/√(2).* A, P_3 P_2 P_1=1μ f, Q_3 Q_2 Q_1=μμμG^-1^T= 1/4  μμ 1_0μμ 1_1μμ f_0μμ f_1f 1 μ ff μ 11 μ 1f μ 11/√(2)-1/√(2) 1/√(2) 1/√(2) 1/√(2) 1/√(2) 1/√(2)-1/√(2) f1/√(2) 1/√(2) 1/√(2)-1/√(2)-1/√(2) 1/√(2) 1/√(2) 1/√(2), G=   μμ 1_0μμ 1_1μμ f_0μμ f_1f 1 μ ff μ 11 μ 1f μ 11/√(2)-1/√(2) 1/√(2) 1/√(2) 1/√(2) 1/√(2) 1/√(2)-1/√(2) f1/√(2) 1/√(2) 1/√(2)-1/√(2)-1/√(2) 1/√(2) 1/√(2) 1/√(2).* A, P_3 P_2 P_1=fμ 1, Q_3 Q_2 Q_1=μμμG^-1^T= 1/4  μμ 1_0μμ 1_1μμ f_0μμ f_1f 1 μ ff μ 11 μ 1f μ 1 -1/√(2) 1/√(2)-1/√(2)-1/√(2) 1/√(2) 1/√(2)-1/√(2) 1/√(2) f1/√(2) 1/√(2) 1/√(2)-1/√(2) 1/√(2)-1/√(2) 1/√(2) 1/√(2), G=   μμ 1_0μμ 1_1μμ f_0μμ f_1f 1 μ ff μ 11 μ 1f μ 1 -1/√(2) 1/√(2)-1/√(2)-1/√(2) 1/√(2) 1/√(2)-1/√(2) 1/√(2) f1/√(2) 1/√(2) 1/√(2)-1/√(2) 1/√(2)-1/√(2) 1/√(2) 1/√(2).* A, P_3 P_2 P_1=fμ f, Q_3 Q_2 Q_1=μμμG^-1^T= 1/4  μμ 1_0μμ 1_1μμ f_0μμ f_1f 1 μ ff μ 11 μ 1f μ 11/√(2) 1/√(2) 1/√(2)-1/√(2)-1/√(2) 1/√(2) 1/√(2) 1/√(2) f -1/√(2) 1/√(2)-1/√(2)-1/√(2)-1/√(2)-1/√(2)-1/√(2) 1/√(2), G=   μμ 1_0μμ 1_1μμ f_0μμ f_1f 1 μ ff μ 11 μ 1f μ 11/√(2) 1/√(2) 1/√(2)-1/√(2)-1/√(2) 1/√(2) 1/√(2) 1/√(2) f -1/√(2) 1/√(2)-1/√(2)-1/√(2)-1/√(2)-1/√(2)-1/√(2) 1/√(2). 
* AAA, P_3 P_2 P_1=μμμ, Q_3 Q_2 Q_1=1μ 1G^-1^T= 1/4  1 μμ_01 μμ_1f μμ_0f μμ_1μ f 1 μ ff μ 11 μ 1f 11/√(2) i/√(2) 1/√(2)-i/√(2) i/√(2)-1/√(2) 1/√(2) i/√(2) f1/√(2)-i/√(2) 1/√(2) i/√(2)-i/√(2)-1/√(2) 1/√(2)-i/√(2), G=   1 μμ_01 μμ_1f μμ_0f μμ_1μ f 1 μ ff μ 11 μ 1f 11/√(2)-i/√(2) 1/√(2) i/√(2)-i/√(2)-1/√(2)-1/√(2)-i/√(2) f1/√(2) i/√(2) 1/√(2)-i/√(2) i/√(2)-1/√(2) 1/√(2) i/√(2).* AAA, P_3 P_2 P_1=μμμ, Q_3 Q_2 Q_1=1μ fG^-1^T= 1/4  1 μμ_01 μμ_1f μμ_0f μμ_1μ f 1 μ ff μ 11 μ 1f 11/√(2)-i/√(2) 1/√(2) i/√(2)-i/√(2) 1/√(2) 1/√(2) i/√(2) f1/√(2) i/√(2) 1/√(2)-i/√(2) i/√(2) 1/√(2) 1/√(2)-i/√(2), G=   1 μμ_01 μμ_1f μμ_0f μμ_1μ f 1 μ ff μ 11 μ 1f 11/√(2) i/√(2) 1/√(2)-i/√(2) i/√(2) 1/√(2) 1/√(2)-i/√(2) f1/√(2)-i/√(2) 1/√(2) i/√(2)-i/√(2) 1/√(2) 1/√(2) i/√(2).* AAA, P_3 P_2 P_1=μμμ, Q_3 Q_2 Q_1=fμ 1G^-1^T= 1/4  1 μμ_01 μμ_1f μμ_0f μμ_1μ f 1 μ ff μ 11 μ 1f 11/√(2)-i/√(2) 1/√(2) i/√(2)-i/√(2)-1/√(2) 1/√(2)-i/√(2) f1/√(2) i/√(2) 1/√(2)-i/√(2) i/√(2)-1/√(2) 1/√(2) i/√(2), G=   1 μμ_01 μμ_1f μμ_0f μμ_1μ f 1 μ ff μ 11 μ 1f 11/√(2) i/√(2) 1/√(2)-i/√(2) i/√(2)-1/√(2) 1/√(2) i/√(2) f1/√(2)-i/√(2) 1/√(2) i/√(2)-i/√(2)-1/√(2) 1/√(2)-i/√(2).* AAA, P_3 P_2 P_1=μμμ, Q_3 Q_2 Q_1=fμ fG^-1^T= 1/4  1 μμ_01 μμ_1f μμ_0f μμ_1μ f 1 μ ff μ 11 μ 1f 1 -1/√(2)-i/√(2)-1/√(2) i/√(2)-i/√(2)-1/√(2)-1/√(2) i/√(2) f -1/√(2) i/√(2)-1/√(2)-i/√(2) i/√(2)-1/√(2)-1/√(2)-i/√(2), G=   1 μμ_01 μμ_1f μμ_0f μμ_1μ f 1 μ ff μ 11 μ 1f 1 -1/√(2) i/√(2)-1/√(2)-i/√(2) i/√(2)-1/√(2)-1/√(2)-i/√(2) f -1/√(2)-i/√(2)-1/√(2) i/√(2)-i/√(2)-1/√(2)-1/√(2) i/√(2).* AA, P_3 P_2 P_1=μ 11, Q_3 Q_2 Q_1=μμμG^-1^T= 1/4  μμ 1_0μμ 1_1μμ f_0μμ f_1f 1 μ ff μ 11 μ 1f μμ_01/√(2) 1/√(2) 1/√(2)-1/√(2) 1/√(2)-1/√(2) 1/√(2) 1/√(2)μ_1i/√(2)-i/√(2) i/√(2) i/√(2)-i/√(2)-i/√(2) i/√(2)-i/√(2), G=   μμ 1_0μμ 1_1μμ f_0μμ f_1f 1 μ ff μ 11 μ 1f μμ_01/√(2) 1/√(2) 1/√(2)-1/√(2) 1/√(2)-1/√(2) 1/√(2) 1/√(2)μ_1 -i/√(2) i/√(2)-i/√(2)-i/√(2) i/√(2) i/√(2)-i/√(2) i/√(2).* AA, P_3 P_2 P_1=μ 1f, Q_3 Q_2 Q_1=μμμG^-1^T= 1/4  μμ 1_0μμ 1_1μμ f_0μμ f_1f 1 μ ff μ 11 μ 1f μμ_01/√(2)-1/√(2) 1/√(2) 1/√(2) 1/√(2) 1/√(2) 1/√(2)-1/√(2)μ_1 -i/√(2)-i/√(2)-i/√(2) i/√(2) i/√(2)-i/√(2)-i/√(2)-i/√(2), G=   μμ 1_0μμ 1_1μμ f_0μμ f_1f 1 μ ff μ 11 μ 1f μμ_01/√(2)-1/√(2) 1/√(2) 1/√(2) 1/√(2) 1/√(2) 1/√(2)-1/√(2)μ_1i/√(2) i/√(2) i/√(2)-i/√(2)-i/√(2) i/√(2) i/√(2) i/√(2).* AA, P_3 P_2 P_1=μ f1, Q_3 Q_2 Q_1=μμμG^-1^T= 1/4  μμ 1_0μμ 1_1μμ f_0μμ f_1f 1 μ ff μ 11 μ 1f μμ_01/√(2) 1/√(2) 1/√(2)-1/√(2) 1/√(2)-1/√(2) 1/√(2) 1/√(2)μ_1 -i/√(2) i/√(2)-i/√(2)-i/√(2) i/√(2) i/√(2)-i/√(2) i/√(2), G=   μμ 1_0μμ 1_1μμ f_0μμ f_1f 1 μ ff μ 11 μ 1f μμ_01/√(2) 1/√(2) 1/√(2)-1/√(2) 1/√(2)-1/√(2) 1/√(2) 1/√(2)μ_1i/√(2)-i/√(2) i/√(2) i/√(2)-i/√(2)-i/√(2) i/√(2)-i/√(2).* AA, P_3 P_2 P_1=μ ff, Q_3 Q_2 Q_1=μμμG^-1^T= 1/4  μμ 1_0μμ 1_1μμ f_0μμ f_1f 1 μ ff μ 11 μ 1f μμ_01/√(2)-1/√(2) 1/√(2) 1/√(2) 1/√(2) 1/√(2) 1/√(2)-1/√(2)μ_1i/√(2) i/√(2) i/√(2)-i/√(2)-i/√(2) i/√(2) i/√(2) i/√(2), G=   μμ 1_0μμ 1_1μμ f_0μμ f_1f 1 μ ff μ 11 μ 1f μμ_01/√(2)-1/√(2) 1/√(2) 1/√(2) 1/√(2) 1/√(2) 1/√(2)-1/√(2)μ_1 -i/√(2)-i/√(2)-i/√(2) i/√(2) i/√(2)-i/√(2)-i/√(2)-i/√(2).* AA, P_3 P_2 P_1=μμμ, Q_3 Q_2 Q_1=μ 11G^-1^T= 1/4  1 μμ_01 μμ_1f μμ_0fμμ_1μ f1μ ff μ 11 μ 1fμ_01/√(2) i/√(2) 1/√(2)-i/√(2) i/√(2)-1/√(2) 1/√(2) i/√(2)μ_11/√(2)-i/√(2) 1/√(2) i/√(2)-i/√(2)-1/√(2) 1/√(2)-i/√(2), G=   1 μμ_01 μμ_1f μμ_0fμμ_1μ f1μ ff μ 11 μ 1fμ_01/√(2)-i/√(2) 1/√(2) i/√(2)-i/√(2)-1/√(2) 1/√(2)-i/√(2)μ_11/√(2) i/√(2) 1/√(2)-i/√(2) i/√(2)-1/√(2) 1/√(2) i/√(2).* AA, P_3 P_2 P_1=μμμ, Q_3 Q_2 Q_1=μ 1fG^-1^T= 1/4  1 μμ_01 μμ_1f μμ_0fμμ_1μ f1μ ff μ 11 μ 1fμ_0i/√(2) 1/√(2) i/√(2)-1/√(2) 1/√(2) 
i/√(2) i/√(2)-1/√(2)μ_1 -i/√(2) 1/√(2)-i/√(2)-1/√(2) 1/√(2)-i/√(2)-i/√(2)-1/√(2)G=   1 μμ_01 μμ_1f μμ_0fμμ_1μ f1μ ff μ 11 μ 1fμ_0 -i/√(2) 1/√(2)-i/√(2)-1/√(2) 1/√(2)-i/√(2)-i/√(2)-1/√(2)μ_1i/√(2) 1/√(2) i/√(2)-1/√(2) 1/√(2) i/√(2) i/√(2)-1/√(2). * AA, P_3 P_2 P_1=μμμ, Q_3 Q_2 Q_1=μ f1G^-1^T= 1/4  1 μμ_01 μμ_1f μμ_0fμμ_1μ f1μ ff μ 11 μ 1fμ_0i/√(2)-1/√(2) i/√(2) 1/√(2)-1/√(2)-i/√(2) i/√(2)-1/√(2)μ_1 -i/√(2)-1/√(2)-i/√(2) 1/√(2)-1/√(2) i/√(2)-i/√(2)-1/√(2), G=   1 μμ_01 μμ_1f μμ_0fμμ_1μ f1μ ff μ 11 μ 1fμ_0 -i/√(2)-1/√(2)-i/√(2) 1/√(2)-1/√(2) i/√(2)-i/√(2)-1/√(2)μ_1i/√(2)-1/√(2) i/√(2) 1/√(2)-1/√(2)-i/√(2) i/√(2)-1/√(2).* AA, P_3 P_2 P_1=μμμ, Q_3 Q_2 Q_1=μ ffG^-1^T= 1/4  1 μμ_01 μμ_1f μμ_0fμμ_1μ f1μ ff μ 11 μ 1fμ_01/√(2)-i/√(2) 1/√(2) i/√(2)-i/√(2) 1/√(2) 1/√(2) i/√(2)μ_11/√(2) i/√(2) 1/√(2)-i/√(2) i/√(2) 1/√(2) 1/√(2)-i/√(2), G=   1 μμ_01 μμ_1f μμ_0fμμ_1μ f1μ ff μ 11 μ 1fμ_01/√(2) i/√(2) 1/√(2)-i/√(2) i/√(2) 1/√(2) 1/√(2)-i/√(2)μ_11/√(2)-i/√(2) 1/√(2) i/√(2)-i/√(2) 1/√(2) 1/√(2) i/√(2).* AA, P_3 P_2 P_1=11μ, Q_3 Q_2 Q_1=μμμG^-1^T= 1/4  μμμ_00 μμμ_01 μμμ_10 μμμ_11 1 f f fff 111 f11 11/√(2) 1/√(2) 1/√(2)-1/√(2)-1/√(2) 1/√(2) 1/√(2) 1/√(2) f1/√(2)-1/√(2) 1/√(2) 1/√(2) 1/√(2) 1/√(2) 1/√(2)-1/√(2), G=   μμμ_00 μμμ_01 μμμ_10 μμμ_11 1 f f fff 111 f11 11/√(2) 1/√(2) 1/√(2)-1/√(2)-1/√(2) 1/√(2) 1/√(2) 1/√(2) f1/√(2)-1/√(2) 1/√(2) 1/√(2) 1/√(2) 1/√(2) 1/√(2)-1/√(2).* AA, P_3 P_2 P_1=1fμ, Q_3 Q_2 Q_1=μμμG^-1^T= 1/4  μμμ_00 μμμ_01 μμμ_10 μμμ_11 1 1 f f1f 1f1 ff1 11/√(2) 1/√(2)-1/√(2) 1/√(2) 1/√(2) 1/√(2) 1/√(2)-1/√(2) f1/√(2)-1/√(2)-1/√(2)-1/√(2) 1/√(2)-1/√(2)-1/√(2)-1/√(2), G=   μμμ_00 μμμ_01 μμμ_10 μμμ_11 1 1 f f1f 1f1 ff1 11/√(2) 1/√(2)-1/√(2) 1/√(2) 1/√(2) 1/√(2) 1/√(2)-1/√(2) f1/√(2)-1/√(2)-1/√(2)-1/√(2) 1/√(2)-1/√(2)-1/√(2)-1/√(2).* AA, P_3 P_2 P_1=f 1μ, Q_3 Q_2 Q_1=μμμG^-1^T= 1/4  μμμ_00 μμμ_01 μμμ_10 μμμ_11 1 1 f f1f 1f1 ff1 1 -1/√(2) 1/√(2) 1/√(2) 1/√(2)-1/√(2) 1/√(2) 1/√(2) 1/√(2) f1/√(2) 1/√(2)-1/√(2) 1/√(2) 1/√(2) 1/√(2) 1/√(2)-1/√(2), G=   μμμ_00 μμμ_01 μμμ_10 μμμ_11 1 1 f f1f 1f1 ff1 1 -1/√(2) 1/√(2) 1/√(2) 1/√(2)-1/√(2) 1/√(2) 1/√(2) 1/√(2) f1/√(2) 1/√(2)-1/√(2) 1/√(2) 1/√(2) 1/√(2) 1/√(2)-1/√(2).* AA, P_3 P_2 P_1=f fμ, Q_3 Q_2 Q_1=μμμG^-1^T= 1/4  μμμ_00 μμμ_01 μμμ_10 μμμ_11 1 f f fff 111 f11 11/√(2)-1/√(2) 1/√(2) 1/√(2) 1/√(2) 1/√(2) 1/√(2)-1/√(2) f -1/√(2)-1/√(2)-1/√(2) 1/√(2) 1/√(2)-1/√(2)-1/√(2)-1/√(2), G=   μμμ_00 μμμ_01 μμμ_10 μμμ_11 1 f f fff 111 f11 11/√(2)-1/√(2) 1/√(2) 1/√(2) 1/√(2) 1/√(2) 1/√(2)-1/√(2) f -1/√(2)-1/√(2)-1/√(2) 1/√(2) 1/√(2)-1/√(2)-1/√(2)-1/√(2). 
* AAAA, P_3 P_2 P_1=μμμ, Q_3 Q_2 Q_1=11μG^-1^T= 1/4  μμμ_00 μμμ_01 μμμ_10 μμμ_11 f f 1 fff 111 11f 11/√(2) 1/√(2) i/√(2)-i/√(2)-i/√(2) 1/√(2) 1/√(2) i/√(2) f1/√(2) 1/√(2)-i/√(2) i/√(2) i/√(2) 1/√(2) 1/√(2)-i/√(2), G=   μμμ_00 μμμ_01 μμμ_10 μμμ_11 f f 1 fff 111 11f 11/√(2) 1/√(2)-i/√(2) i/√(2) i/√(2) 1/√(2) 1/√(2)-i/√(2) f1/√(2) 1/√(2) i/√(2)-i/√(2)-i/√(2) 1/√(2) 1/√(2) i/√(2).* AAAA, P_3 P_2 P_1=μμμ, Q_3 Q_2 Q_1=1 fμG^-1^T= 1/4  μμμ_00 μμμ_01 μμμ_10 μμμ_11 f11 f1f 1f1 1ff 11/√(2)-1/√(2) i/√(2) i/√(2) 1/√(2) i/√(2) i/√(2)-1/√(2) f1/√(2)-1/√(2)-i/√(2)-i/√(2) 1/√(2)-i/√(2)-i/√(2)-1/√(2), G=   μμμ_00 μμμ_01 μμμ_10 μμμ_11 f11 f1f 1f1 1ff 11/√(2)-1/√(2)-i/√(2)-i/√(2) 1/√(2)-i/√(2)-i/√(2)-1/√(2) f1/√(2)-1/√(2) i/√(2) i/√(2) 1/√(2) i/√(2) i/√(2)-1/√(2).* AAAA, P_3 P_2 P_1=μμμ, Q_3 Q_2 Q_1=f 1μG^-1^T= 1/4  μμμ_00 μμμ_01 μμμ_10 μμμ_11 f11 f1f 1f1 1ff 11/√(2)-1/√(2)-i/√(2)-i/√(2) 1/√(2)-i/√(2)-i/√(2)-1/√(2) f1/√(2)-1/√(2) i/√(2) i/√(2) 1/√(2) i/√(2) i/√(2)-1/√(2), G=   μμμ_00 μμμ_01 μμμ_10 μμμ_11 f11 f1f 1f1 1ff 11/√(2)-1/√(2) i/√(2) i/√(2) 1/√(2) i/√(2) i/√(2)-1/√(2) f1/√(2)-1/√(2)-i/√(2)-i/√(2) 1/√(2)-i/√(2)-i/√(2)-1/√(2).* AAAA, P_3 P_2 P_1=μμμ, Q_3 Q_2 Q_1=f fμG^-1^T= 1/4  μμμ_00 μμμ_01 μμμ_10 μμμ_11 f f 1 fff 111 11f 1 -1/√(2)-1/√(2) i/√(2)-i/√(2)-i/√(2)-1/√(2)-1/√(2) i/√(2) f -1/√(2)-1/√(2)-i/√(2) i/√(2) i/√(2)-1/√(2)-1/√(2)-i/√(2), G=   μμμ_00 μμμ_01 μμμ_10 μμμ_11 f f 1 fff 111 11f 1 -1/√(2)-1/√(2)-i/√(2) i/√(2) i/√(2)-1/√(2)-1/√(2)-i/√(2) f -1/√(2)-1/√(2) i/√(2)-i/√(2)-i/√(2)-1/√(2)-1/√(2) i/√(2). §.§ Other classes It is easy to check that there are 32 different classes of 10j-symbols corresponding to different choices of A, B, C, D and K. As a result, besides of the one-dimensional 10j-symbols given by eqn. (<ref>), there still are 960 two-dimensional 10j-symbols and 16 four-dimensional 10j-symbols in total.All these 10j-symbols can be generated from 10j-symbols in class A and class B by following steps. First, we stack bimoduleoror their combination on each bimodules of the 10j-symbol in class A or class B. Second, we move those stacked bimodules to proper position through some bimodule maps. So far the explicit form of 10j-symbol is unchanged. One may note that some 1-morphisms in this new structure are not representative 1-morphisms. Finally, through some surgery, we should turn each 1-morphism to the representative 1-morphism locally. This procedure may modify the basis of V_Z and V_YWXJ and thus causes a basis transformation of the 10j-symbol. All the 10j-symbols of ΣsVec and the complete computer program will be uploaded to github soon.JHEP
http://arxiv.org/abs/2312.15947v1
{ "authors": [ "Wenjie Xi", "Tian Lan", "Longye Wang", "Chenjie Wang", "Wei-Qiang Chen" ], "categories": [ "hep-th", "math-ph", "math.MP" ], "primary_category": "hep-th", "published": "20231226082226", "title": "On a class of fusion 2-category symmetry: condensation completion of braided fusion category" }
Source Code is a Graph, Not a Sequence: A Cross-Lingual Perspective on Code Clone Detection
*A Challenge and Opportunity for Source Code Research: Through the Lens of Code Clone Detection using Python and Java. Source code can be found here <https://github.com/Ataago-AI/clone-detection>
Mohammed Ataaur Rahaman, School of Electronic Engineering and Computer Science, Queen Mary University, London, UK, [email protected]
Julia Ive, School of Electronic Engineering and Computer Science, Queen Mary University, London, UK, [email protected]
==========================================================================================================================

Current large-scale diffusion models represent a giant leap forward in conditional image synthesis, capable of interpreting diverse cues like text, human poses, and edges. However, their reliance on substantial computational resources and extensive data collection remains a bottleneck. On the other hand, the integration of existing diffusion models, each specialized for different controls and operating in unique latent spaces, poses a challenge due to incompatible image resolutions and latent space embedding structures, hindering their joint use.
Addressing these constraints, we present “PanGu-Draw”, a novel latent diffusion model designed for resource-efficient text-to-image synthesis that adeptly accommodates multiple control signals. We first propose a resource-efficient Time-Decoupling Training Strategy, which splits the monolithic text-to-image model into structure and texture generators. Each generator is trained using a regimen that maximizes data utilization and computational efficiency, cutting data preparation by 48% and reducing training resources by 51%. Secondly, we introduce “Coop-Diffusion”, an algorithm that enables the cooperative use of various pre-trained diffusion models with different latent spaces and predefined resolutions within a unified denoising process. This allows for multi-control image synthesis at arbitrary resolutions without the necessity for additional data or retraining. Empirical validations of PanGu-Draw show its exceptional prowess in text-to-image and multi-control image generation, suggesting a promising direction for future model training efficiencies and generation versatility. The largest 5B T2I PanGu-Draw model is released on the Ascend platform. Project page: https://pangu-draw.github.io

§ INTRODUCTION
The Denoising Diffusion Probabilistic Models (DDPMs) <cit.> and their subsequent enhancements <cit.> have established diffusion models as a leading approach for image generation. These advancements excel in the application of diffusion models to text-to-image synthesis, yielding high-fidelity results with large-scale models and datasets, supported by substantial computational resources <cit.>. These foundational models, capable of understanding and rendering complex semantics, have paved the way for diverse image generation tasks, accommodating various control signals such as reference images, edges <cit.>, and poses <cit.>. However, the extensive computational demand and significant data collection required by these models pose a substantial challenge.
The ambitious goal of higher fidelity and increased resolution in image synthesis pushes the boundaries of model and dataset sizes, escalating computational costs, and environmental impact. Moreover, the aspiration for versatile control and multi-resolution in image generation introduces additional complexity. Existing diffusion models, each tailored for specific controls and operating within distinct latent spaces, face the challenge of integration due to incompatible image resolutions and latent space embeddings, obstructing their concurrent utilization. This incompatibility not only leads to more resource consumption of retraining but also impedes the joint synthesis of images controlled by multiple factors, thereby limiting the scalability and practical application of such existing generative models. In response to these challenges,our work introduces a novel paradigm named “PanGu-Draw" that judiciously conserves training resources while enhancing data efficiency, thereby proposing a resource-efficient pathway forward for diffusion model scalability.As shown in Figure <ref>, the training strategies of predecessors like DeepFloyd <cit.> and GLIDE <cit.>, which employ a cascaded approach, excel in leveraging data across resolutions but suffer from inefficient inference due to their reliance on multiple models. Alternatively, Stable Diffusion <cit.> and AltDiffusion <cit.> use a Resolution Boost Training strategy aiming for cost-effectiveness by refining a single model. However, this strategy falls short on data efficiency.In light of these considerations, our PanGu-Draw framework advances the field by presenting a Time-Decoupling Training Strategy that segments the training of a comprehensive text-to-image model into two distinct generators: one dedicated to structural outlines and another to textural details. This division not only concentrates on training efforts but also enhances data efficacy. The structural generator is adept at crafting the initial outlines of images, offering flexibility in data quality and enabling training across a spectrum of data calibers; the textural generator, in contrast, is fine-tuned using low-resolution data to infuse these outlines with fine-grained details, ensuring optimal performance even during high-resolution synthesis. This focused approach not only accelerates the training process of our 5B model but also significantly reduces the reliance on extensive data collection and computational resources, as evidenced by a 48% reduction in data preparation and a 51% reduction in resource consumption.Furthermore, we introduce a pioneering algorithm named Coop-Diffusion, which facilitates the cooperative integration of diverse pre-trained diffusion models. Each model, conditioned on different controls and pre-defined resolutions, contributes to a seamless denoising process. The first algorithmic sub-module addresses inconsistencies in VAE decoders that arise during the denoising process across different latent spaces, ensuring cohesive image quality by effectively reconciling disparate latent space representations. The second sub-module confronts the challenges associated with multi-resolution denoising. Traditional bilinear upsampling for the intermediate noise map, introduced during the denoising process, can undesirably amplify the correlation between pixels. This amplification deviates from the initial Independent and Identically Distributed (IID) assumption, leading to severe artifacts in the final output image. 
However, our innovative approach circumvents this issue with a single-step sampling method that preserves the integrity of pixel independence, thus preventing the introduction of artifacts. Coop-Diffusion obviates the need for additional data or model retraining, addressing the challenges of multi-control and multi-resolution image generation with scalability and efficiency.
PanGu-Draw excels in text-to-image (T2I) generation, outperforming established models like DALL-E 2 and SDXL, as evidenced by its FID of 7.99 in English T2I. It also leads in Chinese T2I across metrics like FID, IS, and CN-CLIP-score. User feedback highlights a strong preference for PanGu-Draw, aligning well with human visual perceptions. Available on the Ascend platform, PanGu-Draw is efficient and versatile.
In summary, our contributions are manifold:
* PanGu-Draw: A resource-efficient diffusion model with a Time-Decoupling Training Strategy, reducing data and training resources for text-to-image synthesis.
* Coop-Diffusion: A novel approach for integrating multiple diffusion models, enabling efficient multi-control image synthesis at multiple resolutions within a unified denoising process.
* Comprehensive evaluations demonstrate PanGu-Draw (5B model) can produce high-quality images aligned with text and various controls, advancing the scalability and flexibility of diffusion-based image generation.

§ RELATED WORK
Text-to-Image Generation. The integration of diffusion models into the realm of text-to-image generation marks a significant stride in computational creativity <cit.>. Text-to-image synthesis models like GLIDE <cit.> and DALL-E 2 <cit.>, which incorporate CLIP image embeddings, have significantly advanced in generating diverse and semantically aligned images from textual descriptions. The Latent Diffusion model <cit.> addresses computational challenges by creating images from text-conditioned low-dimensional latent representations. Techniques like LoRA <cit.> enhance domain-specific adaptability through low-rank matrix-driven parameter offsets, avoiding catastrophic forgetting. Additionally, ControlNet <cit.> introduces spatial conditioning controls, offering flexibility in image generation under varied conditions like edges and depth. Current research also focuses on aligning model outputs with human aesthetic preferences, aiming to optimize image quality and user satisfaction <cit.>. Despite the proliferation of such specialized models, a unified framework that consolidates these disparate capabilities remains absent, limiting the potential for multi-control and complex editing in image synthesis.
Model Efficient Training and Scaling Up Strategies. Efficient training and scaling of models are pivotal for advancing large-scale neural networks. In the realm of text-to-image (T2I) diffusion models, the quest for efficiency has led to innovative training strategies. Historical methods, such as those utilized by DeepFloyd <cit.> and GLIDE <cit.>, capitalize on cascaded approaches that proficiently utilize data across various resolutions, yet their reliance on multiple models results in less efficient inference processes. Contrastingly, models like Stable Diffusion <cit.> and AltDiffusion <cit.> adopt Resolution Boost Training strategies that refine a single model for cost-effectiveness. Despite the advantages, such strategies do not fully exploit data efficiency. In scaling up strategies, training efficiency is also important.
The correlation between model size and performance is well-documented <cit.>, with larger models like SDXL <cit.> showing notable gains. Efficient adaptation and scaling are explored in <cit.> through distillation, and in <cit.> by marrying model expansion with domain-specific prompts. Serial scaling and knowledge distillation reduce training times significantly as demonstrated by <cit.>, while <cit.> proposes progressive network expansion for faster training with minimal loss. Our approach offers a novel approach to diffusion model scaling that enhances efficiency.§ PRELIMINARYGiven an image x_0, diffusion models first produce a series of noisy images x_1,...,x_T by adding Gaussian noise to x_0 according to some noise schedule given by α̅_t as follows:x_t = √(α̅_t)x_0 + √(1-α̅_t)ϵ,where ϵ∼𝒩(0, I).Diffusion models then learn a denoising model ϵ_θ(x_t, t) to predict the added noise of a noisy image x_t with the following training objective:ℒ = 𝔼_x_0 ∼ q(x_0), ϵ∼𝒩(0, I), t ∼[1,T]ϵ-ϵ_θ(x_t, t)^2,where t is uniformly sampled from {1,...,T}. Once the denoising model ϵ_θ(x_t, t) is learned, starting from a random noise x_T ∼𝒩(0, I), one can iteratively predict and reduce the noise in x_t to get a real image x_0. During the sampling process, we can predict the clean data x_0 from ϵ_θ(x_t, t) with single-step sampling as follows:x̂_0,t = 1/√(α̅_t) (x_t - √(1 - α̅_t)ϵ_θ (x_t, t)). Our text-to-image generation model is built on the model architecture proposed in Latent Diffusion Model <cit.>. In this model, a real image x_0 is first down-sampled 8 times as a lower-dimension latent code z_0 with an image encoder model E, which can be decoded with a latent decoder model D back to a real image x_0. The denoising network ϵ_θ(z_t, t, c) is parameterized as a U-Net <cit.> model, where embedding of time step t is injected with adaptive normalization layers and embedding of input text c is injected with cross-attention layers. § PANGU-DRAWIn this section, we first illustrate our resource-efficient 5B text-to-image generation model, trained with a time-decoupling training strategy and further enhanced with a prompt enhancement LLM. Then, we present our Coop-Diffusion algorithm for the cooperative integration of diverse pre-trained diffusion models, enabling multi-control and multi-resolution image generation. §.§ Time-Decoupling Training Strategy Enhancing data, training, and inference efficiency is vital for text-to-image models' practical use. Figure <ref> shows two existing training strategies: (a) Cascaded Training, using three models to incrementally improve resolution, is data-efficient but triples training and inference time. (b) Resolution Boost Training starts at 512x512 and then 1024x1024 resolution, discarding lower resolution data and offering moderate efficiency with higher training costs and single-model inference across all timesteps. These approaches differ from our time-decoupling strategy, detailed below.Responding to the need for enhanced efficiencies, we draw inspiration from the denoising trajectory of diffusion processes, where initial denoising stages primarily shape the image's structural foundation, and later stages refine its textural complexity. With this insight, we introduce the Time-Decoupling Training Strategy. This approach divides a comprehensive text-to-image model, denoted as ϵ_θ, into two specialized sub-models operating across different temporal intervals: a structure generator, ϵ_struct, and a texture generator, ϵ_texture. 
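To make the temporal split concrete, the following is a minimal sketch of how a single reverse-diffusion pass would dispatch between the two generators. This is our own illustrative pseudocode with hypothetical names and a generic scheduler step, not the released PanGu-Draw implementation:

```python
def time_decoupled_sample(eps_struct, eps_texture, z_T, timesteps, t_struct, text_emb, step_fn):
    """Reverse diffusion where large timesteps use the structure generator and
    small timesteps use the texture generator; t_struct is the hand-over point."""
    z = z_T
    for t in timesteps:                          # e.g. T, T-1, ..., 1
        model = eps_struct if t > t_struct else eps_texture
        eps = model(z, t, text_emb)              # predicted noise at the current step
        z = step_fn(z, eps, t)                   # any standard DDPM/DDIM update rule
    return z
```

Because the two networks are never active at the same step, each can be trained and deployed independently, which is exactly what the strategy exploits.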
Each sub-model is half the size of the original, thus enhancing manageability and reducing computational load.As illustrated in Figure <ref>(c), the structure generator, ϵ_struct, is responsible for early-stage denoising across larger time steps, specifically within the range T, ..., T_struct, where 0 < T_struct < T. This stage focuses on establishing the foundational outlines of the image. Conversely, the texture generator, ϵ_texture, operates during the latter, smaller time steps, denoted by T_struct, ..., 0, to elaborate on the textural details. Each generator is trained in isolation, which not only alleviates the need for high-memory computation devices but also avoids the complexities associated with model sharding and its accompanying inter-machine communication overhead.In the inference phase, ϵ_struct initially constructs a base structural image, z_T_struct, from an initial random noise vector, z_T. Subsequently, ϵ_texture refines this base to enhance textural details, culminating in the final output, z_0. This sequential processing facilitates a more resource-efficient workflow, significantly reducing the hardware footprint and expediting the generation process without compromising the model's performance or output quality, as demonstrated in our ablated experiment in Sec. <ref>.Resource-Efficient Specialized Training Regime. We further adopt specialized training designs for the above two models. The structure generator ϵ_struct, which derives image structures from text, requires training on an extensive dataset encompassing a wide range of concepts. Traditional methods, like Stable Diffusion, often eliminate low-resolution images, discarding about 48% of training data and thereby inflating dataset costs. Contrarily, we integrate high-resolution images with upscaled lower-resolution ones. This approach, as proven by our ablated experiments in Sec. <ref>, shows no performance drop, as the predicted z_T_struct still contains substantial noise. In this way, we achieve higher data efficiency and avoid the problem of semantic degeneration.Additionally, since the image structure is determined in z_T_struct and the texture generator ϵ_texture focuses on refining texture, we propose training ϵ_texture at a lower resolution while still sampling at high resolution. This strategy, as demonstrated in our ablated experiments in Sec. <ref>, results in no performance drop and no structural problems (e.g., repetitive presentation <cit.>). Consequently, we achieved an overall 51% improvement in training efficiency. Figure <ref> summarizes the data, training, and inference efficiency of different training strategies. Besides higher data and training efficiency, our strategy also achieves higher inference efficiency with fewer inference steps compared to the Cascaded Training strategy and a smaller per-step model compared to the Resolution Boost Training strategy.§.§ Coop-Diffusion: Multi-Diffusion FusionAs shown in Figure <ref>(a), there are numerous pre-trained diffusion models, such as various SD, ControlNet, image variation, etc., each tailored for specific controls and image resolutions. It is promising to fuse these pre-trained models for multi-control or multi-resolution image generation without needing to train a new model. However, the different latent spaces and resolutions of these models impede joint synthesis of images controlled by different models, thereby limiting their practical applications. 
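Coop-Diffusion, introduced in the next subsection, resolves this by mapping every model's noise prediction into one common latent space before mixing, building on the single-step clean-data prediction from the Preliminary section. A minimal sketch of that bridging-and-fusion idea, using hypothetical encoder/decoder handles rather than any specific released API, might look as follows:

```python
def bridge_prediction(eps_b, z_t_b, z_t_a, enc_a, dec_b, alpha_bar_t):
    """Map model B's noise prediction into latent space A via the pixel space:
    estimate the clean latent in space B, decode it to an image, re-encode it
    with A's encoder, then invert the forward process in space A."""
    sqrt_ab = alpha_bar_t ** 0.5
    sqrt_1mab = (1.0 - alpha_bar_t) ** 0.5
    z0_b = (z_t_b - sqrt_1mab * eps_b) / sqrt_ab     # single-step clean estimate in space B
    x0 = dec_b(z0_b)                                 # pixel-space image
    z0_a = enc_a(x0)                                 # clean estimate re-encoded into space A
    return (z_t_a - sqrt_ab * z0_a) / sqrt_1mab      # unified noise prediction in space A

def fuse_predictions(eps_a, eps_bridged, d=0.5):
    """Mix the two unified predictions with guidance strengths d and 1-d, d in [0, 1]."""
    return d * eps_bridged + (1.0 - d) * eps_a
```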
In response to these challenges, we propose the Coop-Diffusion algorithm with two key sub-modules, as shown in Figures <ref>(b) and (c), to bridge the latent space gap and the resolution gap, and to unite the denoising process in the same space.Bridging the Latent Space Gap. To bridge the latent space gap between spaces A and B, we propose to unify the model prediction in latent space A by transforming the model prediction ϵ_t' in latent space B to latent space A using the image space as an intermediate. This is done in the following way: first, we predict the clean data ẑ_0,t' using Equation (<ref>) as: ẑ_0,t' = 1/√(α̅_t) (z_t' - √(1 - α̅_t)ϵ_t'), which is then decoded into a pixel-level image x̂_0,t' using the latent decoder model D'. This image is encoded into latent space A using the image encoder model E, as z̃_0,t = E(x̂_0,t'), and finally transformed into a model prediction by inverting Equation (<ref>) as: ϵ̃_t = 1/√(1 - α̅_t) (z_t - √(α̅_t)z̃_0,t). With the united ϵ̃_t, we can now perform multi-control fusion between ϵ̃_t and ϵ_t (the prediction from model ϵ_θ with z_t in latent space A, omitted in Figure <ref> for brevity) as: ϵ_t,fuse = d ·ϵ̃_t + (1-d) ·ϵ_t, where d and 1-d are the guidance strengths of each model with d ∈ [0, 1], to guide the denoising process jointly with these two models for multi-control image generation. Algorithm <ref> further illustrates this fusion process.Bridging Resolution Gap. To integrate the denoising processes of a low-resolution model with a high-resolution model, upsampling and/or downsampling is necessary. Traditional bilinear upsampling, often applied to the intermediate result z_t during the denoising process, can undesirably amplify pixel correlation. This amplification deviates from the initial Independent and Identically Distributed (IID) assumption, leading to severe artifacts in the final images, as shown in Figure <ref>(a). Conversely, downsampling does not present this issue. To address the IID issue in upsampling, we propose a new upsampling algorithm that preserves the IID assumption, thereby bridging the resolution gap between models with different pre-trained resolutions.Figure <ref>(c) visualizes our upsampling algorithm. Specifically, for a low-resolution z_t', we use the image space as an intermediate space to transform z_t' in low-resolution space into high-resolution space as z̃_t. We first predict the noise ϵ_t' with the denoising model ϵ_θ' and then predict the clean data ẑ_0,t' as described in Eq. <ref>. This is decoded into an image x̂_0,t' using decoder D'. We then perform upsampling on x̂_0,t' to obtain its high-resolution counterpart x̂_0,t. Finally, x̂_0,t is encoded into the latent space with encoder E as ẑ_0,t, and t-step noise is added to get the final result z̃_t using Eq. <ref>.With the unified z̃_t, we can now perform multi-resolution fusion. First, we denoise with a low-resolution model to obtain the intermediate z_t' and its high-resolution counterpart z̃_t. Then, we perform denoising with a high-resolution model starting from z̃_t, and vice versa. This approach allows us to conduct one-stage super-resolution without undergoing all the low-resolution denoising steps, thereby improving inference efficiency. Algorithm <ref> further illustrates this fusion process.§ EXPERIMENTS=-1 Implementation Details. We adopt the pretrained Variational Autoencoder (VAE) model from SDXL <cit.>, and we build our structure and texture generator based on the architecture of its U-Net model with the following modifications. 
To achieve bilingual text-to-image generation (Chinese and English), we pre-train a Chinese text encoder <cit.> on our Chinese training dataset. We then concatenate the text embeddings from this Chinese text encoder with those from a pretrained English text encoder, serving as the final text embeddings for the denoising models. For multi-resolution image generation, we select a range of image resolutions around 1024x1024 and further condition the denoising model on the sinusoidal positional embeddings corresponding to the index of image resolutions. The T_struct parameter is set to 500, as suggested by our ablation study.Our models are trained on a cluster consisting of 256 Ascend 910B cards. During training, we applied several techniques to reduce redundant memory usage. These include replacing traditional attention with Flash Attention <cit.>, employing mixed-precision training <cit.>, and using gradient checkpointing <cit.>, also known as the recompute technique. These methods enable the model to fit within the memory of a single Neural Processing Unit (NPU), allowing parallelism to be applied only in the data scope and avoiding model sharding among NPUs, as well as reducing inter-machine communication overhead.Dataset Construction. To encompass the abundant concepts in the world, we collect images in various styles from multiple sources, including Noah-Wukong <cit.>, LAION <cit.>, and others, such as photography, cartoons, portraits, and gaming assets. The collected images are filtered based on CLIP score, aesthetic score, watermark presence, resolution, and aspect ratio. To improve the semantic alignment of PanGu-Draw, we discard parts of the noisy captions that are meaningless or mismatched to the image, sourced from the Internet. Instead, we recaption the collected images by first employing an open-vocabulary detector <cit.> to locate the primary subjects within the images. These subjects are then processed by LLaVA <cit.>, a high-performance vision-language model, along with prompting templates, to yield detailed image descriptions. These English annotations are subsequently translated into Chinese. Evaluation Metrics. We evaluate PanGu-Draw's text-to-image generation on COCO<cit.> with 30kimages for English, and COCO-CN <cit.> with 10k images for Chinese. The Frechet Inception Distance (FID <cit.>) is utilized to evaluate image quality and diversity. For Chinese, additional metrics include the Inception Score (IS <cit.>) and CN-CLIP-score<cit.>, assessing image quality and text-image alignment. Complementing these metrics, a user study is conducted to evaluate image-text alignment, fidelity, and aesthetics using ImageEval-prompt[https://github.com/FlagOpen/FlagEval/tree/master/imageEval] across 339 prompts.§.§ Text-to-Image GenerationEvaluation on COCO. As shown in Table <ref>, PanGu-Draw achieves a FID of 7.99, which is superior to compared methods such as DALL-E 2 and SDXL. It also achieves competitive FID with SOTA methods, indicating the effectiveness of our time-decoupling training strategy and its outstanding data and training efficiencies. Our 5B PanGu model is the best-released model in terms of FID.Evaluation on COCO-CN. As shown in Table <ref>, PanGu-Draw outperforms other released Chinese text-to-image models, including Taiyi-CN, Taiyi-Bilingual, and AltDiffusion, across all three metrics. This performance highlights PanGu-Draw's exceptional Chinese text-to-image generation capabilities and the effectiveness of our bilingual text encoder architecture.User Study. 
We conducted a user study to compare PanGu-Draw with top-performing methods, including SDXL <cit.>, Midjourney 5.2, and DALL-E 3 <cit.>. As shown in Table <ref>, PanGu-Draw achieves better results than SD and SDXL across all three metrics. It also attains approximately 99%/98% of the performance of Midjourney 5.2 and DALL-E 3, respectively, indicating PanGu-Draw's excellent text-to-image capabilities. Figure <ref> shows a collection of high-fidelity multi-resolution images generated by PanGu-Draw. As we can see, the generated images of PanGu-Draw are highly aesthetic and semantically aligned with the input prompts.

§.§ Multi-Diffusion Fusing Results
Multi-Control Image Generation. To demonstrate the effectiveness of the proposed reusable multi-diffusion fusing algorithm, Coop-Diffusion, we first present multiple results of multi-control image generation. Figure <ref> displays results from fusing an image variation model [https://huggingface.co/lambdalabs/sd-image-variations-diffusers] with PanGu-Draw. The fusing results maintain a style similar to that of the reference image, matching the texture described by the input prompt. Figure <ref> shows results from fusing PanGu-Draw with a pose/edge-to-image ControlNet model, which operates in guess mode without input prompts. Here, the fusing results combine the structure of the pose/edge image with the texture described by the input prompt.
Multi-Resolution Image Generation. We also present multi-resolution image generation results of fusing PanGu-Draw with low-resolution text-to-image and edge-to-image ControlNet models, by first denoising with the low-resolution model to get the intermediate z_t and its high-resolution counterpart z̃_t, and then performing denoising in high resolution with PanGu-Draw. Figure <ref> shows the results from the low-resolution model and our fusing algorithm Coop-Diffusion. As we can see, PanGu-Draw adds considerable detail to the low-resolution predictions, leading to high-fidelity high-resolution results. Besides, compared with the common practice of super-resolution with diffusion models, which carries out all the low-resolution denoising steps, our method achieves higher inference efficiency.

§.§ Ablation Study
In this section, we perform ablation studies to analyze our time-decoupling training strategy. The baseline model has 1B parameters, while the structure and texture generators both have 0.5B parameters. During the training process, the latter two models only train for half the steps of the baseline model, with T_struct set to 500. Both settings of the models are trained from scratch on a subset of the LAION dataset containing images of all sizes. After training, FID, IS, and CLIP-score on COCO are reported for comparison.
Time-Decoupling Training Strategy. We compare the final performance of models trained with the Resolution Boost strategy and our time-decoupling strategy in Table <ref>. We found that models trained with our strategy achieve much better performance in all three criteria, indicating the effectiveness of our strategy.
Training Designs. The structure and texture generators (ϵ_struct and ϵ_texture) are designed to train on different resolutions to improve data and training efficiency. However, this approach may negatively influence the final performance. In Table <ref>, we compare such a design with a traditional training process, where ϵ_struct discards low-resolution images, or ϵ_texture trains with high resolution.
Results on COCO show that ϵ_struct benefits from these extra up-scaled data, and ϵ_texture learns enough texture patterns at a smaller resolution.
Timestep Splitting Point. The timestep splitting point T_struct between the structure and texture generators also influences the final performance. To this end, we set T_struct to 200, 300, 500, and 700, while keeping the other settings of the structure and texture generators unchanged. As shown in Table <ref>, as T_struct increases from 200 to 700, the performance initially increases and then decreases continuously. T_struct = 500 is the optimal value, and we adopt it as the default setting in all other experiments.

§ CONCLUSION
In this paper, we present “PanGu-Draw”, a new latent diffusion model for efficient text-to-image generation that effectively integrates multiple control signals. Our approach includes a Time-Decoupling Training Strategy to separate the text-to-image process into structure and texture generation, enhancing data use and computational efficiency. Additionally, “Coop-Diffusion” is introduced, an algorithm allowing cooperative use of different pre-trained diffusion models in a unified denoising process for multi-control image synthesis at various resolutions without extra data or retraining. PanGu-Draw outperforms models like DALL-E 2 and SDXL in English T2I, achieves superior FID, IS, and CN-CLIP-scores in Chinese T2I, and receives favorable user feedback. This positions PanGu-Draw as a versatile and efficient state-of-the-art method, which is available on the Ascend platform.

§ MORE DETAILS ABOUT PANGU-DRAW
Prompt Enhancement LLM with RLAIF Algorithm. To further enhance our generation quality, we harness the advanced comprehension abilities of large language models (LLMs) <cit.> to align users' succinct inputs with the detailed inputs required by the model. Specifically, as shown in Figure <ref>, we first construct a human-annotated dataset that enriches succinct prompts with background and style descriptions, and then fine-tune the LLM to adapt a succinct prompt to an enriched one using this data. To better adapt to the inputs required by PanGu-Draw, we perform further refinement based on the Reward rAnked FineTuning (RAFT) <cit.> method. Subsequently, we use the fine-tuned LLM to expand on multiple texts, which are then input into PanGu-Draw for image generation. The best expansions are selected jointly by an aesthetic scoring model[https://github.com/christophschuhmann/improved-aesthetic-predictor] and a CLIP <cit.> semantic similarity calculation model, allowing for further fine-tuning of the LLM. Figure <ref> shows the generation results of PanGu-Draw without and with prompt enhancement. As we can see, prompt enhancement serves to add more detail and illustration to the original brief prompts, leading to better image aesthetics and semantic alignment.
Controllable Stylized Text-to-Image Generation. While techniques like LoRA <cit.> allow one to adapt a text-to-image model to a specific style (e.g., cartoon style, human-aesthetic-preferred style), they do not allow one to adjust the degree of the desired style. To this end, inspired by the classifier-free guidance mechanism, we propose to perform controllable stylized text-to-image generation by first constructing a dataset consisting of human-aesthetic-preferred, cartoon, and other samples with a pretrained human aesthetic scoring model and a cartoon image classification model, and then training the text-to-image generation model with these three kinds of samples.
For human-aesthetic-preferred and cartoon samples, we prepend a special prefix to the original prompt, denoted as c_aes and c_cartoon, respectively. During sampling, we extrapolate the prediction in the direction of ϵ_θ(z_t, t, c_style) and away from ϵ_θ(z_t, t, c) as follows: ϵ̂_θ(z_t, t, c) = ϵ_θ(z_t, t, ∅) + s · (ϵ_θ(z_t, t, c) - ϵ_θ(z_t, t, ∅)) + s_style · (ϵ_θ(z_t, t, c_style) - ϵ_θ(z_t, t, c)), where s is the classifier-free guidance scale, c_style ∈ {c_aes, c_cartoon}, and s_style is the style guidance scale.
Figure <ref> shows the controllable stylized text-to-image generation results of PanGu-Draw, including human-aesthetic-preferred and cartoon style image generation. As we can see, with the corresponding style guidance scale, PanGu-Draw can control the generated images towards the desired style.

§ IMAGE RESOLUTIONS FOR MULTI-RESOLUTION TRAINING
Table <ref> shows the list of resolutions used for multi-resolution training of our structure generation model and texture generation model.

§ MORE GENERATION RESULTS OF PANGU-DRAW
§.§ Text-to-Image Generation
Figure <ref> shows more generated images of PanGu-Draw. As we can see, the generated images are of high visual quality and are well aligned with the input prompts.
§.§ Multi-Diffusion Fusing Results
Multi-Control Image Generation. Figure <ref> shows results of multi-control image generation by fusing PanGu-Draw with different models, including image variation, depth-to-image, and edge-to-image generation models. Figure <ref> shows results of fusing two ControlNet models with our algorithm and with the algorithm proposed by ControlNet <cit.>, which fuses the features of different ControlNets before injecting them into the U-Net model. As we can see, our algorithm is able to specify the prompts of different ControlNets, thus enabling finer-grained control.
Multi-Resolution Image Generation. Figure <ref> shows the results from the low-resolution model and our fusing algorithm Coop-Diffusion, obtained by fusing the low-resolution model and our high-resolution PanGu-Draw model. As we can see, PanGu-Draw adds considerable detail to the low-resolution predictions, leading to high-fidelity high-resolution results.

§ VISUAL COMPARISON AGAINST BASELINES
Figures <ref> and <ref> show qualitative comparisons of PanGu-Draw against baseline methods, including RAPHAEL <cit.>, SDXL <cit.>, DeepFloyd <cit.>, DALL-E 2 <cit.>, ERNIE-ViLG 2.0 <cit.>, and PixArt-α <cit.>. The input prompts are also used in RAPHAEL and are provided at the bottom of the figure.
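Returning to the controllable stylized sampling rule defined above, the combination of classifier-free guidance and style guidance amounts to two nested extrapolations of the noise prediction. A minimal sketch, with our own function signature rather than the released sampler:

```python
def guided_eps(model, z_t, t, c, c_style, s=7.5, s_style=2.0):
    """Classifier-free guidance plus an extra extrapolation towards a style-prefixed prompt:
    eps_uncond + s*(eps_cond - eps_uncond) + s_style*(eps_style - eps_cond)."""
    eps_uncond = model(z_t, t, cond=None)      # unconditional prediction
    eps_cond = model(z_t, t, cond=c)           # prompt-conditioned prediction
    eps_style = model(z_t, t, cond=c_style)    # prompt with the special style prefix (c_aes or c_cartoon)
    return eps_uncond + s * (eps_cond - eps_uncond) + s_style * (eps_style - eps_cond)
```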
http://arxiv.org/abs/2312.16486v2
{ "authors": [ "Guansong Lu", "Yuanfan Guo", "Jianhua Han", "Minzhe Niu", "Yihan Zeng", "Songcen Xu", "Zeyi Huang", "Zhao Zhong", "Wei Zhang", "Hang Xu" ], "categories": [ "cs.CV", "cs.AI" ], "primary_category": "cs.CV", "published": "20231227092145", "title": "PanGu-Draw: Advancing Resource-Efficient Text-to-Image Synthesis with Time-Decoupled Training and Reusable Coop-Diffusion" }
Recovering seldom-used theorems of vector calculus and their application to problems of electromagnetism.
Antonio Pérez-Garrido
January 14, 2024
===========================================================================================================

Real-world datasets may contain multiple features that explain the training data equally well, i.e., learning any of them would lead to correct predictions on the training data. However, many of them can be spurious, i.e., lose their predictive power under a distribution shift and fail to generalize to out-of-distribution (OOD) data. Recently developed “diversification” methods <cit.> approach this problem by finding multiple diverse hypotheses that rely on different features. This paper aims to study this class of methods and identify the key components contributing to their OOD generalization abilities. We show that (1) diversification methods are highly sensitive to the distribution of the unlabeled data used for diversification and can underperform significantly when away from a method-specific sweet spot. (2) Diversification alone is insufficient for OOD generalization. The choice of the used learning algorithm, e.g., the model's architecture and pretraining, is crucial, and using the second-best choice leads to an up to 20% absolute drop in accuracy. (3) The optimal choice of learning algorithm depends on the unlabeled data, and vice versa. (4) Finally, we show that the above pitfalls cannot be alleviated by increasing the number of diverse hypotheses, allegedly the major feature of diversification methods. These findings provide a clearer understanding of the critical design factors influencing the OOD generalization of diversification methods. They can guide practitioners in how to use the existing methods best and guide researchers in developing new, better ones.
^*Equal contribution. Corresponding author: [email protected]

§ INTRODUCTION
Achieving out-of-distribution (OOD) generalization is a crucial milestone for the real-world deployment of machine learning models. A core obstacle in this direction is the presence of spurious features, i.e., features that are predictive of the true label on the training data distribution but fail under a distribution shift. They may appear due to, for example, a bias in the data acquisition process <cit.> or an environmental cue closely related to the true predictive feature <cit.>. The presence of a spurious correlation between spurious features and true underlying labels implies that there are multiple hypotheses (i.e., labeling functions) that all describe training data equally well, i.e., have a low training error, but only some generalize to the OOD test data. Previous works <cit.> have shown that in the presence of multiple predictive features, standard empirical risk minimization <cit.> (ERM) using neural networks trained with stochastic gradient descent (SGD) converges to a hypothesis that is most aligned with the learning algorithm's inductive biases. When these inductive biases are not aligned well with the true underlying predictive feature, it can cause ERM to choose a wrong (spurious) feature and, consequently, fail under a distribution shift.
Recently, diversification methods <cit.> have achieved state-of-the-art results in classification settings in the presence of spurious correlations. Instead of training a single model, these methods aim to find multiple plausible and diverse hypotheses that all describe the training data well, while relying on different predictive features, which is usually done by promoting different predictions on additional unlabeled data. The motivation is that among all the found features, there will be the true predictive one that is causally linked to the label and, therefore, remains predictive under a distribution shift.
In this work, we identify and study the key factors that contribute to the success of these diversification methods, adopting <cit.> as two recently proposed best-performing representative methods. Our contributions are as follows.
* First, through theoretical and empirical analyses, we show that diversification methods are highly sensitive to the distribution of the unlabeled data used for diversification (Fig. <ref> vs. <ref>). Specifically, each diversification method works best for different distributions of unlabeled data, and the performance drops significantly (up to 30% absolute accuracy) when diverging from the optimal distribution.
Instead of training a single model, these methods aim to find multiple plausible and diverse hypotheses that all describe the training data well, while relying on different predictive features, which is usually done by promoting different predictions on additional unlabeled data. The motivation is that among all the found features, there will be the true predictive one that is causally linked to the label and, therefore, remains predictive under a distribution shift.In this work, we identify and study the key factors that contribute to the success of these diversification methods, adopting <cit.> as two recently proposed best-performing representative methods. Our contributions are as follows.nolistsep,leftmargin=15pt * First, through theoretical and empirical analyses, we show that diversification methods are sensitive to the distribution of the unlabeled data (Fig. <ref> vs. <ref>). Specifically, each diversification method works best for different distributions of unlabeled data, and the performance drops significantly (up to 30% absolute accuracy) when diverging from the optimal distribution.* Second, we demonstrate that diversification alone cannot lead to OOD generalization efficiently without additional biases.This is similar to the in-distribution generalization with ERM, where good learning algorithm's inductive biases are necessary for generalization <cit.>. In particular, we show that these methods are sensitive to the choice of the architecture and pretraining method (Fig. <ref> vs. <ref>), and the deviation from best to second best model choice results in a significant (up to 20% absolute) accuracy drop (see Fig. <ref>). * Further, we show that a co-dependence exists between unlabeled data and the learning algorithm, i.e., the optimal choice for one depends on the other.Specifically, for fixed training data, we can change unlabeled data in a targeted way to make one architecture (e.g., MLP) generalize and the other (e.g., ResNet18) to have random guess test accuracy and vice versa. * Finally, we show that one of the expected advantages of diversification methods – increasing the number of diverse hypotheses to improve OOD generalization – does not hold up in practice and does not help to alleviate the aforementioned pitfalls. Specifically, we do not observe any meaningful improvements using more than two hypotheses.These findings provide a clearer understanding of the relevant design factors influencing the OOD generalization of diversification methods. They can guide practitioners in how to best use the existing methods and guide researchers in developing new, better ones. We provide guiding principles distilled from our study in each section and Sec. <ref>.§ RELATED WORKSpurious correlation and underspecification. As a special case of OOD generalization problem, spurious correlations can arise from the underspecified nature of the training data <cit.>. In this setting, neural networks tend to learn simple (spurious) concepts rather than the true causal concepts, a phenomenon known as simplicity bias <cit.> or shortcut learning <cit.>.Some works combat spurious correlation by improving worst-group performance <cit.>,some require group annotation <cit.>and others <cit.>aim at the no group information scenario. Diversification methods fit into the latter, as they only rely on additional unlabeled data to promote diversity between multiple hypotheses.Diversification methods. 
Recently proposed diversification methods <cit.> find multiple diverse hypotheses during training to handle spurious correlations. They introduce an additional diversification loss over multiple trained hypotheses, forcing them to rely on different features while still fitting the training data well. <cit.> use input-space diversification that minimizes the alignment of input gradients over pairs of models at all training data points. DivDis <cit.> and D-BAT <cit.> use output-space diversification, minimizing the agreement between models' predictions on additional unlabeled data. We focus on studying the latter, as these methods outperform the input-space ones by a large margin, achieving state-of-the-art performance in the setting where true labels are close to or completely correlated with spurious attributes.
Inductive biases in learning algorithms. In this work, we study the influence of the choice of the learning algorithm and, hence, its inductive bias on the performance of diversification methods. Different learning algorithms have different inductive biases <cit.>, which make a given algorithm prioritize specific solutions <cit.>. While being highly overparameterized <cit.> and able to fit even random labels <cit.>, deep learning models were shown to benefit from architectural <cit.>, optimization <cit.>, and pre-training <cit.> inductive biases. In our study, we show that diversification methods are sensitive to the choice of architecture and pretraining method.

§ LEARNING VIA DIVERSIFICATION
First, we formalize the problem of generalization under spurious correlation. Then, we present a diversification framework along with the recent representative methods, DivDis <cit.> and D-BAT <cit.>[At the time of writing, these are the best-performing diversification methods and the only existing output-space ones.], describing key differences between them: training strategies (sequential vs. simultaneous) and diversification losses (mutual information vs. agreement).

§.§ Problem Formulation
For consistency, we follow a notation similar to that of D-BAT <cit.>. Let 𝒳 be the input space and 𝒴 the output space. Both methods focus on classification, i.e., 𝒴 = {0,…,q - 1}, where q is the number of classes. We define a domain (D,h) as a distribution D over 𝒳 and a hypothesis (labeling function) h : 𝒳→𝒴. The training data is drawn from the domain (D_t, h^*), and test data from a different domain (D, h^*).
Given any domain (D,h'), a hypothesis h, and a loss function (e.g., cross-entropy loss) ℒ : 𝒴×𝒴→ℝ^+, the expected loss is defined as ℒ_D(h,h') = 𝔼_x ∼ D[ℒ(h(x), h'(x))]. Let ℋ be the set of hypotheses expressed by a given learning algorithm. We define ℋ^*_t and ℋ^* to be the optimal hypothesis sets on the train and the OOD domains: ℋ^*_t := argmin_h ∈ℋ ℒ_D_t(h,h^*), ℋ^* := argmin_h ∈ℋ ℒ_D(h,h^*).
Definition (Spurious Ratio). Given a spurious hypothesis h, the spurious ratio r^h_D, with respect to a distribution D and its labeling function h^*, is defined as the proportion of data points where h^* and h agree, i.e., have the same prediction: r^h_D = 𝔼_x ∼ D[h^*(x) = h(x)].
The spurious ratio describes how the spurious hypothesis h correlates with the true h^* on data D. A spurious ratio of 1 indicates that a given data distribution has a complete spurious correlation. On the contrary, a spurious ratio of 0 indicates that the spurious hypothesis is always in opposition to the true labeling, namely inversely correlated. Finally, a spurious ratio of 0.5 means that the spurious hypothesis is not predictive of h^*, as there is no correlation between them. We will also refer to this setting as a “balanced” data distribution. We omit h^* (and sometimes h) in the notation to keep it less cluttered, as they can be inferred from the context of a specific setting.
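As a concrete reading of the definition above, the spurious ratio is simply the empirical agreement rate between a candidate (possibly spurious) labeling function and the true one on samples from D. A minimal toy illustration, our own code rather than anything from the paper:

```python
import numpy as np

def spurious_ratio(h, h_star, X):
    """Empirical spurious ratio r^h_D: fraction of points where h agrees with h*."""
    return float(np.mean(h(X) == h_star(X)))

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
h_star = lambda X: (X[:, 0] > 0).astype(int)           # true labeling: sign of x_1
flip = rng.random(1000) < 0.05                         # 5% of points disagree
h_spur = lambda X: ((X[:, 0] > 0) ^ flip).astype(int)  # feature that mostly copies h*
print(spurious_ratio(h_spur, h_star, X))               # ~0.95 -> strong spurious correlation
```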
Finally, a spurious ratio of 0.5 means that the spurious hypothesis is not predictive of h^*, as there is no correlation between them. We will also refer to this setting as a "balanced" data distribution. We omit h^* (and sometimes h) in the notation to keep it less cluttered, as they can be inferred from the context of a specific setting.
Spurious correlation setting. In this setting, we assume that there exist one or more spurious hypotheses h ∈ ℋ_sp ⊂ ℋ^*_t ∖ ℋ^*, which generalize on D_t but not on D. Thus, the spurious ratio on the training data is close to one: r_D_t^h ≈ 1. If there is a misalignment between the inductive bias of the learning algorithm and ℋ^*, the ERM hypothesis h may be closer to hypotheses from ℋ_sp than to ℋ^*_t ∩ ℋ^*, i.e., have poor OOD generalization. The idea of diversification methods is to find multiple hypotheses from ℋ^*_t with the aim to have one with good OOD generalization (see Sec. <ref>).
§.§ Diversification for OOD Generalization
DivDis <cit.> and D-BAT <cit.> focus on the spurious correlation setting. They assume access to additional unlabeled data D_u to find multiple diverse hypotheses that all fit the training data (D_t, h^*) but disagree, i.e., make diverse predictions, on D_u. The motivation is to better cover the space ℋ^*_t and, consequently, find a hypothesis from ℋ^*_t ∩ ℋ^* that also generalizes to OOD data.
Optimization objective. Following <cit.>, we define a diversification loss A_D(h_1,h_2) that quantifies the agreement between two hypotheses on a distribution D. Then, in the case of finding K hypotheses, the training objective of a diversification method is the sum of the ERM loss and the diversification loss averaged over all pairs of hypotheses: h_1, ..., h_K = argmin_h_1, ..., h_K ∈ ℋ ∑_i = 1^K ℒ_D_t(h_i,h^*) + α/K(K-1) ∑_i = 1^K ∑_j = 1, j ≠ i^K A_D_u(h_i,h_j).
Diversification loss. Let P_h_i be the predictive distribution of a hypothesis h_i on given data D. We consider the following two diversification losses: * DivDis <cit.>: A_D(h_1,h_2) = D_KL(P_(h_1,h_2) || P_h_1 ⊗ P_h_2) + λ ∑_i ∈{1,2} D_KL(P_h_i || P̂), where the first term is the mutual information, which is equal to 0 iff P_h_1 and P_h_2 are independent. The second term is the KL-divergence between the predictive distribution of h_i and a prior distribution P̂, which is usually set to the distribution of labels in D_t. It prevents hypotheses from collapsing to degenerate solutions, such as predicting the same label for all samples. * D-BAT <cit.>: A_D(h_1,h_2) = 𝔼_x ∼ D[-log(P_h_1(x;0) · P_h_2(x;1) + P_h_1(x;1) · P_h_2(x;0))], where P_h_i(x;y) is the probability of class y predicted by h_i. In practice, both losses are computed and optimized on the additional unlabeled data D_u. Note that it is usually favorable to have the distribution of D_u different from that of D_t, i.e., r_D_u^h < r_D_t^h ≈ 1, as this enables the diversification process to distinguish between spurious and semantic hypotheses (this is also confirmed by the empirical results in Fig. <ref>-Right). In Sec. <ref>, we will show that both losses have their strengths, and the optimal choice depends on the spurious ratio of D_u.
Sequential vs. simultaneous optimization. In practice, when minimizing the diversification objective in Eq. <ref>, there are two choices: (i) optimize over all hypotheses simultaneously or (ii) find hypotheses one by one. DivDis trains simultaneously and defines hypotheses as linear classifiers that share the same feature extractor. D-BAT, on the contrary, starts with h_1 ≜ h and finds new hypotheses, defined as separate models, sequentially. For consistency and comparability with D-BAT in Sec.
<ref> analysis, we also introduce DivDis-Seq, a version of DivDis using sequential optimization, allowing us to concentrate only on the difference in diversification loss design.The two-stage framework.After findingKhypotheses, one needs to be chosen to make the final prediction, leading to a two-stage approach <cit.>, summarised as follows: *Diversification: find K diverse hypotheses _̋K^* ⊂^̋*_t. *Disambiguation: select one hypothesis ĥ∈^̋*_K given additional information (e.g., a few test labeled examples or human supervision).We identify the first diversification stage as the most critical one. Indeed, if the desired hypothesish^*is not chosen (h^* ∉_̋K^*), the second stage cannot make up for it as it is limited to only hypotheses from_̋K^*. We, therefore, focus on studying the first stage and assume access to the oracle that chooses the best available hypothesis in the second stage.§ THE RELATIONSHIP BETWEEN UNLABELED DATA AND OOD GENERALIZATION VIA DIVERSIFICATION In this section, we study how the different diversification losses of DivDis and D-BAT interact with the choice of unlabeled data. In an illustrative example and real-world datasets, we identify that neither of the diversification losses is optimal in all scenarios and that their behavior and performance are highly dependent on the spurious ratio of the unlabeled OOD data.§.§ Theoretical and Empirical Study of a Synthetic Example Synthetic 2D binary classification task. In Fig. <ref>-Left, we show a 2D task withdistributionDspanning a 2D square, i.e.,{ x =(x_1,x_2) ∈[-1,1]^2 }. We define our hypothesis space$̋ to be all possible linear classifiers h(x;β) where β is the radian of the classification plane w.r.t horizontal axis x_1. The ground truth labeling function is defined as h^⋆(x) = h(x;π/2) = ℐ{x_1 > 0} where ℐ is the indicator function, and the training distribution is defined as D_t={x=(x_1,x_2) ∈{[-1,0]×[0,1]}∪{[0,1]×[-1,0]}}. We then define a spurious feature function as h(x) = h(x;0) = ℐ{x_2 < 0} and assume that ERM converges to h. This means that the first hypothesis h_1^DB (D-BAT) and h_1^DD (DivDis-Seq) of both methods converge to h. Finally, we define different distributions of unlabeled datato have different spurious ratios r_ from 0 to 0.5 (the construction is described in Appendix <ref>).(On Optimal Diversification Loss)In the synthetic 2D binary task, let h_2^DB and h_2^DD be the second hypotheses of D-BAT and DivDis-Seq, respectively.If r_ = 0, then h_2^DB = h^⋆ and h_2^DD = h(x;π/4). Otherwise, if r_ =0.5, then h_2^DB = h(x;π) = 1 - h and h_2^DD = h^⋆.Increasing the spurious ratio r_ from 0 to 0.5 will lead to h_2^DB and h_2^DD rotating counterclockwise.See Appendix <ref> for the full proof and Fig. <ref>-Left for the visual demonstration. This proposition implies that D-BAT recovers h^⋆ when r_ = 0 (i.e., inversely correlated) and DivDis-Seq recovers h^⋆ when r_ = 0.5 (i.e., balanced).For D-BAT, this happens because the optimal second hypothesis h_2^DB is the hypothesis that disagrees with h_1^DB on all unlabeled data points i.e. h_2^DB∈{h∈^̋*_t :h(x) ≠ h_1^DB(x) ∀ x ∈}.On the contrary, the optimal second hypothesis for DivDis-Seq is independent of the first one, i.e., disagreeing on half of the data points h_2^DD∈{h∈^̋*_t : ℙ_x ∼[h(x) = h_1^DD(x)] = 1/2}. In Fig. <ref>-Left, we empirically demonstrate this behavior by training linear classifiers[In Appendix <ref>, we show additional results with more complex classifiers (i.e., MLP).] 
with D-BAT and DivDis-Seq[For completeness, we also provide results with DivDis, which is deferred to Appendix <ref>.] on such synthetic data, with 0.5k training / 5k unlabeled OOD data points (following <cit.>).We observe that the behavior suggested in Proposition <ref> is consistent with our experiments. This highlights that different diversification losses only recover the ground truth function in different specific spurious ratios. §.§ Verification on Real-World Image Data We further evaluate whether the suggested behavior holds with more complex classifiers and more complex datasets. Specifically, we use M/C <cit.> and M/F <cit.>, which are datasets that concatenate one image from MNIST with one image from either CIFAR-10 <cit.> or Fashion-MNIST <cit.>.We follow the setup of <cit.>: we use 0s and 1s from MNIST and two classes from Fashion-MNIST (coats & dresses) and CIFAR-10 (cars & trucks). The training data is designed to be completely spuriously correlated (e.g., 0s always occur with cars and 1s with trucks in M/C). We vary the spurious ratio r_ of the unlabeled data by changing the probability of 0s occurring with cars/dresses. We use LeNet <cit.> architecture for both D-BAT and DivDis(-Seq) methods.Fig. <ref>-Right shows that similar to Proposition <ref>, D-BAT is optimal when r_ = 0 whereas DivDis(-Seq) optimal setting is r_ = 0.5. Both methods observe a drastic decrease in performance away from their sweet spot (with up to 30% absolute accuracy drop).Note that it is expected that both methods reach chance-level accuracy when r_D_u→ 1, as it means that the spurious hypothesis becomes completely correlated to the true hypothesis on , and it is thus impossible to differentiate them by enforcing diversification on . In Appendix <ref>, Fig. <ref> shows that the same observation holds for the M/F dataset, and Tab. <ref> also shows the results of different spurious ratios on a larger and more realistic dataset, CelebA-CC <cit.>.<cit.> note that both the number of hypotheses K and the diversification coefficient α (Eq. <ref>) are critical hyper-parameters, that may greatly influence the performance. However, controlling for these variables, in Fig. <ref>-Right and Fig. <ref>, we find that tuning α and K is not sufficient to compensate the performance loss from the misalignment between unlabeled OOD data and the diversification loss.Takeaway. Diversification methods' performance drops drasticallywhen away from the spurious ratio (Def. <ref>) sweet spot, and neither diversification loss is optimal in all cases. Therefore, new methods should be designed to adapt to different unlabeled data distributions. § THE RELATIONSHIP BETWEEN LEARNING ALGORITHM AND OOD GENERALIZATION VIA DIVERSIFICATION In this section, we study another key component of diversification methods – the choice of the learning algorithm. First, we present a theoretical result showing that diversification alone is insufficient to achieve OOD generalization and requires additional biases (e.g., the inductive biases of the learning algorithm). Then, we empirically demonstrate the high sensitivity of these methods to the choice of the learning algorithm (architecture and pretraining method). Finally, empirically, we show that the optimal choices of the learning algorithm and unlabeled data are co-dependent.§.§ Diversification Alone Is Insufficient for OOD Generalization Diversification methods find hypotheses h_is that all minimize the training loss, i.e., h_i∈^̋*_t, but disagree on the unlabeled data . 
The underlying idea is to cover the space ^̋*_t evenly and better approximate a generalizable hypothesis from ^̋*_t ∩^̋* (e.g., see Fig. 3 in <cit.>). However, if the original hypothesis space $̋ is expressive enough to include all possible labeling functions (e.g., neural networks <cit.>), then^̋*_tessentially only constrains its hypotheses' labeling on the training dataD_twhile including all possible labelings overD, which impliesq^|D|possible labelings, whereqis the number of classes.Therefore, one might need to find exponentially many hypotheses before covering this space and approximating the desired hypothesish^* ∈^̋*_t ∩^̋*well enough. Notably, we prove that having as many diverse hypotheses as the number of data points inDis still insufficient to guarantee better than a random guess accuracy. Indeed, there always exists a set of hypotheses satisfying all the constraints of the diversification objective in Eq. <ref> while having random accuracy w.r.t the true labelingh^*on OOD data. The following Proposition <ref> formalizes this intuition in the binary classification case.Please see its proof and extension to multi-class case in Appendix <ref>.For K = (2|D| -1) and h^* the OOD labeling function, there exists a set of diverse K hypotheses h_1, ..., h_K, i.e., _D(h_i,h_j) = |{ x ∈ D:h_i(x) = h_j(x) }| / |D| ≤ 0.5 ∀ i,j ∈{1, ..., K}, i ≠ j and it holds that max _h’ ∈h_1, …, h_KAcc(h^*, h') ≤ 0.5.Since in most cases, <cit.> find2hypotheses to be sufficient to approximateh^*and the size of the used datasets is larger than2, these hypotheses should not only be diverse but also biased towards those that generalize under the considered distribution shift.Takeaway. Diversification alone cannot lead to OOD generalization efficiently and requires additional biases to be brought by a specific learning algorithm used in practice. Properties of diverse hypotheses. In Appendix <ref>, using the agreement score (AS) <cit.> as a measure of the alignment of a hypothesis with a learning algorithm's inductive biases, we study in what way D-BAT and DivDis diverse hypotheses are biased. We show that they find hypotheses that are not only diverse but aligned with the inductive bias of the used learning algorithm. According to the definition of AS, such alignment is expected for a hypothesis found by empirical risk minimization (ERM). However, it is not generally expected from diverse hypotheses (as defined in Eq. <ref>), given that the additional diversification loss could destroy this alignment. This analysis sheds light on the process by which diverse hypotheses are found and emphasizes the choice of a good learning algorithm, which is crucial, as shown in the next section. §.§ Learning Algorithm Selection: A Key to Effective Diversification Sec. <ref> argues that the right learning algorithm's inductive biases (i.e., those aligned well with the true causal hypothesish^*) are required for diversification to enable OOD generalization. In this section, we examine the “sensitivity” of this requirement by using DivDis and D-BAT with different choices of pretraining strategies and architectures on several real-world datasets.Experimental setup. We consider the following datasets. (1) A multi-class classification dataset Office-Home <cit.> consists of images of 65 item categories across four domains: Art, Product, Clipart, and Real-World. Following the experimental setting in <cit.>, we use the Product and Clipart domains during training and the Real-World domain as the out-of-distribution one. 
(2) A binary classification dataset: Waterbirds-CC <cit.>, a modified version of Waterbirds where the background and bird features are completely spuriously correlated on the training data. We report worst-group accuracy for Waterbirds-CC, i.e., the minimum accuracy among the four possible groups.We train both diversification methods using different architectures and pretraining methods, each resulting in a different learning algorithm with different inductive biases. Please see full experimental details and results in Appendix <ref>. Sensitivity to the model choice. Fig. <ref> shows that the performance of both diversification methods is highly sensitive to the choice of the learning algorithm: 1) the gap between the best and second-best model is significant (10%-20%) 2) there is no single model that performs the best over both datasets, and 3) there is a 20% standard deviation of the performance over the distribution of models (averaged over methods and datasets). Furthermore, similar to the findings of <cit.>, one cannot choose a good model reliably based on the ImageNet performance as a proxy. Indeed, the best model, according to this proxy, ViT-MAE <cit.>, underperforms significantly in all cases. Additionally, ViT-Dino <cit.>, the third best on ImageNet, completely fails for DivDis on both datasets. Overall, these results emphasize the need for a specific architecture and pretraining tailored for each dataset and method, which may require an expensive search.Increasing K does not improve performance. Finally, we study whether the performance gap between the best and second-best models tested in Fig. <ref> can be closed by increasing the number of hypothesesK, as this is allegedly the major feature and motivation of diversification methods.Tab. <ref> shows that, similar to the observation made in Sec. <ref>, increasingKdoes not bring any improvements, suggesting that the choice of the model is more important for enabling OOD generalization. In Fig. <ref>, we further show thatDivDis does not scale well to largerK(e.g.,K=64) “out-of-the-box”, and the performance drops as the number of hypotheses increases. Note that testing D-BAT in this regime would be prohibitively expensive. Takeaway. Diversification methods are highly sensitive to the choice of the learning algorithm, e.g., architecture and pretraining method.The “built-in” mechanism of increasing the number of hypotheses K does not alleviate this issue and fails at improving performance. §.§ On the Co-Dependence between Learning Algorithm and Unlabeled Data Sec. <ref> and Sec. <ref> show the sensitivity of the diversification methods to the distribution of the unlabeled data and the choice of a learning algorithm respectively. Here we further demonstrate that these choices are co-dependent, i.e., the optimal choice for one depends on the other. Specifically, we show that by only varying the distribution of unlabeled data, the optimal architecture can be changed. Experimental setup. We consider two learning algorithms (architectures)∈{MLP, ResNet18}(extension to other architectures is straightforward) and construct examples for D-BAT where one architecture outperforms the other and vice versa. To do that we build on the idea of adversarial splits introduced in <cit.>, defined on a CIFAR-10 <cit.> datasetD. 
Below, we briefly describe the construction, and refer the reader to Appendix <ref> for more details. We start by considering two hypotheses with high agreement scores <cit.> found by <cit.> for each architecture, such that the following holds: AS_MLP(h_MLP) > AS_MLP(h_RN), AS_RN(h_RN) > AS_RN(h_MLP), where AS_𝒜 stands for the agreement score measured with a learning algorithm 𝒜. As shown in <cit.>, the above inequalities suggest that each hypothesis h_𝒜 is more aligned with its corresponding learning algorithm, i.e., ERM trained with the MLP architecture will preferentially converge to h_MLP over h_RN, and vice-versa when training with ResNet18. Akin to adversarial splits <cit.>, we then use these two high-AS hypotheses to construct a dataset to change, in a targeted way, what the first hypothesis of D-BAT h_1 ≜ h_ERM converges to, depending on the used learning algorithm. Different h_1s, in turn, lead to different h_2s and, hence, different test performance. As the true labeling h^*, we use a binary classification task constructed by splitting the original 10 classes into two sets of five. Then, as Tab. <ref>-Right illustrates, we construct training data D_t to contain samples where all of h^*, h_MLP, and h_RN agree, i.e., D_t = { x ∈ D: h^*(x) = h_MLP(x) = h_RN(x)}. Thus, by design, both h_MLP and h_RN are completely spuriously correlated with h^*. Then, we define unlabeled OOD data D_u s.t. either (r^h_MLP_D_u = 0, r^h_RN_D_u = 1/2) (denoted as h^* ⊥ h_RN), or (r^h_MLP_D_u = 1/2, r^h_RN_D_u = 0) (denoted as h^* ⊥ h_MLP). This means that h^* is inversely correlated with only one of h_MLP or h_RN, while not correlated with the other hypothesis.
Results. Keeping the training data fixed, we train D-BAT (K=2) using different pairs (𝒜, D_u) of architecture and constructed unlabeled data. Tab. <ref>-Left shows that the performance of 𝒜 drops to almost random chance when h_𝒜 does not inversely correlate with h^* on the unlabeled data (i.e., for the two mismatched (𝒜, D_u) pairs). This is consistent with Sec. <ref>, where we show that the setting with r^h_1_D_u = 1/2 is disadvantageous for D-BAT. In Appendix <ref>, Tab. <ref> further shows a similar observation for a different architecture pair (ViT & ResNet18), and Fig. <ref> extends the experiment with a smooth interpolation from one unlabeled dataset setting to the other, showing a linear transition where one architecture goes from optimal performance to random-chance accuracy, and vice-versa.
Takeaway. The optimal choices of the architecture and unlabeled data are co-dependent.
§ CONCLUSION AND LIMITATIONS
This paper aims to study diversification methods and identify key components enabling their OOD generalization: the diversification loss used, the distribution of the unlabeled data, and the choice of a learning algorithm. Below, we distill some practical recommendations that follow from our analysis.
Unlabeled data and diversification loss. Sec. <ref> shows that a sub-optimal spurious ratio w.r.t. the chosen diversification loss may lead to significant performance drops. One possibility to overcome this problem is to use a mixture of diversification losses, determined by an estimate of the spurious ratio of the unlabeled data. Another is to try to collect unlabeled data with a specific spurious ratio.
Choice of the learning algorithm. Sec. <ref> demonstrates that the methods are highly sensitive to the choice of the learning algorithm's inductive bias. Future methods should be made more resilient to this choice, e.g., by modeling each hypothesis with different architectures and pretraining methods or by implementing a mechanism to choose a good model automatically.
Co-dependence. Sec.
<ref> suggests that a practitioner should not expect the best learning algorithm (e.g., architecture or pretraining choice) found on one dataset to perform well on another one (as observed in Sec. <ref>), and an additional search might be needed to achieve good performance.Then we discuss the limitations of our study:Data characteristics. We characterize the influence of the OOD data distribution through its spurious ratio.The influence of other important properties of the OOD data may need to be studied in future work. Furthermore, we mainly focused on image data to aid the comparison with <cit.>, but we expect our conclusions to be mainly data-agnostic. Co-dependence experiment only with D-BAT In Sec. <ref>, the experiment is only performed with D-BAT. We expect DivDis to have a similar co-dependence. However, its diversification loss (mutual information) and optimization strategy (simultaneous) make such a targeted experiment challenging to design. We leave an explicit demonstration for future work. PART:* AppendixThe appendix of this work is outlined as follows:*Appendix <ref> proves Proposition <ref> of Sec. <ref> (synthetic 2D task), and shows that the optimal diversification loss depends on the spurious ratio of the unlabeled data. *Appendix <ref> extends the experiment done in Sec. <ref> (synthetic 2D task) by training a multilayer perceptron (MLP) instead of a linear classifier, and shows empirically that Proposition <ref> extends to more complex classifiers. *Appendix <ref> provides additional experiments for Sec. <ref>, and shows empirically that Proposition <ref> extends to DivDis. *Appendix <ref> provides the implementation details of the experimental verificationof Proposition <ref>on real-world images (Sec. <ref>) We also provide additional results, using the M/F dataset (where MNIST and Fashion-MNIST <cit.> are concatenated), as well as the CelebA <cit.> dataset. We also show that tuning the diversification hyperparameter α is not sufficient to compensate the performance loss from the misalignment between unlabeled data and diversification loss, i.e., the conclusion of Proposition <ref> still holds when tuning α. *Appendix <ref> proves Proposition <ref> of Sec. <ref>, proving the existence of a large number of pairwise diverse hypotheses which do not generalize. A proof for a similar result in multi-class classification case is also provided. *Appendix <ref> provides an overview of the important concepts from Task Discovery <cit.> used in this paper (agreement score, adversarial splits).*Appendix <ref>, using agreement score, explains the experimental setup and results that demonstrates that D-BAT and DivDis find hypotheses that are not only diverse but aligned with the inductive bias of the used learning algorithm. *Appendix <ref> reports the experimental details and full results of Sec. <ref>. *Appendix <ref> provides a detailed explanation of how to construct the training and unlabeled data of <ref> where we show that by only changing the distribution of unlabeled data, we can influence the optimal choice of the architecture. It also contains an variant of Tab. <ref> with ViT&ResNet pair, as well as an extension of the experiment with a smooth interpolation from one unlabeled dataset setting to the other, showing a linear transition where one architecture goes from the optimal performance to random-chance accuracy, and vice-versa. § PROOF AND DISCUSSION OF PROPOSITION <REF> In Sec. 
<ref>, we make a proposition that, in the synthetic 2D example, the optimal choice of diversification loss changes with the spurious ratio of unlabeled OOD datar_.Specifically, DivDis-Seq finds the ground truth hypothesish^⋆if and only ifr_=0.5(i.e., balanced or no spurious correlation), whereas D-BAT discoversh^⋆if and only ifr_=0(i.e., inversely correlated). In this section, we provide the proof, method by method, and case by case.We first restate the Proposition <ref> as follows: Synthetic 2D Binary Classification Task. We illustrate the setting in Fig. <ref> and describe it below:*The data domain spans a 2D square, i.e., { x =(x_1,x_2) ∈ [-1,1]^2 }. *The training distribution is defined as D_t={x=(x_1,x_2) ∈{[-1,0]∪[0,1]}∪{[0,1]∪[-1,0]}}, i.e., contains data points the 1st and 4th quadrants.*Our hypothesis space $̋ contains all possible linear classifiersh(x;β)whereβis the radian of the classification plane w.r.t horizontal axisx_1.*The ground truth hypothesis ish^⋆(x) = h(x;π/2) = ℐ{x_1 > 0}, whereℐis the indicator function.*The spurious hypothesis, i.e. the one that ERM converges to, is assumed to beh(x) = h(x;0) = ℐ{x_2 < 0}.*Thus,handh^⋆agree on the training data (1st and 4th quadrants) and disagree on the 2nd and 3rd quadrants.*We vary the spurious ratio of the unlabeled OOD data distributionby varying the ratio of data points sampled from the 1st and 4th quadrants over the number of data points sampled from the 2nd and 3rd quadrants.*One possibility is to define={x=(x_1,x_2) ∈{[R(r_),1]∪[0,1]}∪{[-1,-R(r_)]∪[-1,0]}}, andR(r)=r/r-1for0 ≤ r ≤ 0.5.*LetP_h(x;y)be the probability of classypredicted by hypothesishgiven samplex. The following proof assumes both the hypotheseshand the second hypothesish_2^DBorh_2^DDdiscovered by D-BAT and DivDis-Seq have a hard margin, i.e.,P_h(x;y) ∈{0,1}.Nonetheless, we also show empirically in Sec. <ref> (Fig. <ref>) that when this hard margin condition does not hold, we get the same conclusion as Proposition <ref>.Proposition 1.(On Optimal Diversification Loss)In the synthetic 2D binary task, let h_2^DB and h_2^DD be the second hypotheses of D-BAT and DivDis-Seq, respectively.If r_ = 0, then h_2^DB = h^⋆ and h_2^DD = h(x;π/4). Otherwise, if r_ =0.5, then h_2^DB = h(x;π) = 1 - h and h_2^DD = h^⋆.In the following proof, we use -h to denote the opposite hypothesis, i.e., -h(x) = 1 - h(x). D-BAT. Plugging in the unlabeled OOD data distribution , the first hypothesis h and the second hypothesis h_2^DB, the diversification loss in D-BAT is: _(h,h^DB_2) = 𝔼_x ∼[-log(P_h(x;0) · P_h_2^DB(x; 1) + P_h(x;1) · P_h_2^DB(x;0))], Let ℒ^ERM_D_t be the ERM loss on fitting training data.D-BAT's objective is then ℒ^ERM_D_t + α_, where α is a hyperparameter.Bellow, we prove the proposition case by case:*When r_=0 (inversely correlated), the unlabeled OOD data spans {[0,1]∪[0,1]}∪{[-1,0]∪[-1,0]},i.e., the second and third quadrants.In this case, the diversification loss is:_= 𝔼_x ∼[-log(P_h(x;0) · P_h_2^DB(x; 1) + P_h(x;1) · P_h_2^DB(x;0))]= 𝔼_x ∼[-log(1 · P_h_2^DB(x; 1) + 0 · P_h_2^DB(x;0)) | x∈{[0,1]∪[0,1]}] ·1/2(2nd quadrant) + 𝔼_x ∼[-log(0 · P_h_2^DB(x; 1) + 1 · P_h_2^DB(x;0)) | x∈{[-1,0]∪[-1,0]}] ·1/2(3rd quadrant),where we assume uniform distribution over D. The hypothesis h_2^DB which minimizes the diversification loss in Eq. 
<ref> should satisfy P_h_2^DB(x; 1) = 1 and P_h_2^DB(x;0) = 1 for the data points in the second and third quadrants, respectively.Since the hypothesis space consists of linear classifiers, the two hypotheses that satisfy (i.e., with _=0) the above constraints are h^⋆ and -h, where -h(x) = 1 - ℐ{x_2 < 0} = ℐ{x_2 > 0}. When considering the entire objective ℒ^ERM_D_t + α_, only h^⋆ minimizes the objective to 0, regardless of α. Therefore, in this case, the D-BAT's solution corresponds to the ground truth function h^⋆.*When r_=0.5 (balanced), the unlabeled OOD data spans {[-1,1]∪[0,1]}∪{[-1,1]∪[-1,0]}, i.e., all four quadrants.The diversification loss is, therefore:_= 𝔼_x ∼[-log(P_h(x;0) · P_h_2^DB(x; 1) + P_h(x;1) · P_h_2^DB(x;0))]= 𝔼_x ∼[-log(1 · P_h_2^DB(x; 1) + 0 · P_h_2^DB(x;0)) | x∈{[0,1]∪[0,1]}] ·1/4(2nd quadrant) + 𝔼_x ∼[-log(1 · P_h_2^DB(x; 1) + 0 · P_h_2^DB(x;0)) | x∈{[-1,0]∪[0,1]}] ·1/4(1st quadrant) + 𝔼_x ∼[-log(0 · P_h_2^DB(x; 1) + 1 · P_h_2^DB(x;0)) | x∈{[-1,0]∪[-1,0]}] ·1/4(3rd quadrant) + 𝔼_x ∼[-log(0 · P_h_2^DB(x; 1) + 1 · P_h_2^DB(x;0)) | x∈{[0,1]∪[-1,0]}] ·1/4(4th quadrant)The hypothesis h_2 which minimizes Eq. <ref> requires P_h_2^DB(x; 1) = 1 for x in the 1st & 2nd quadrants, and P_h_2^DB(x;0) = 1 for x in the 3rd & 4th quadrants. The only hypothesis which satisfies these conditions is -h.Although -h doesn't minimize ℒ_D_t^ERM, we note that, given that the hypothesis function is hard-margin, any data point x in the 1st and 4th quadrants drives the diversification loss to positive infinity if h_2^DB(x) ≠ -h(x). This is because of the -log in the diversification loss. Therefore, in this case, D-BAT's solution is -h regardless of ℒ_D_t^ERM and α. In practice (soft-margin regime), given that the α parameter is large enough to enforce diversification, this also holds as we empirically verified in Fig. <ref>-Left. *The cases where 0 < r_ < 0.5 can be straightforwardly extended from the above two cases. Indeed, as said above, any unlabeled data point in the 1st and 4th quadrants drives the diversification loss to be positive infinity if h_2^DB≠ -h, and, thus, h_2^DB = -h Note, that this “phase transition” arises in theory as we consider all the points from D appearing in the 1st and 4th quadrants, i.e., {[R(r_), 0] ∪ [0, 1]} and{[0, -R(r_)] ∪ [-1, 0]}. In practice, when D contains only some samples from these regions, the h_2^DB will rotate counterclockwise as we increase r_ from 0 to 0.5, starting at h_GT and ending at -h_sp as seen in the empirical experiment shown in Fig. <ref>. Overall, when r_=0, there are h^⋆ and -h minimizing diversification loss with minimum 0, and only h^⋆ minimizes the whole loss (ERM + diversification loss).On the other hand, when r_=0.5, D-BAT finds -h that minimizes diversification loss but violates the ERM objective, regardless of the choice of α. DivDis-Seq. The DivDis diversification loss is_(h,h_2^DD) = D_KL(P_(h,h_2^DD) || P_h⊗P_h_2^DD)+ λ D_KL(P_h_2^DD || P_D_t).The first term on the right-hand side is the mutual information between h and h_2^DD.Minimizing mutual information on the unlabeled OOD datayields an hypothesis h_2^DD that disagrees with h on N_U/2 data points while agreeing on the other N_U/2 (where N_U is the size of unlabeled OOD data). In a finite sample size, this is equivalent to the hypotheses being statistically independent. The second term is the KL-divergence between the class distribution of h_2 on unlabeled dataand the class distribution of D_t. 
In this setting, an hypothesis h_2 minimizing this metric simply needs to classify half of the samples in the first class and the other half in the second class. Therefore: *When r_=0 (inversely correlated), minimizing ℒ_D_t^ERM(h_2^DD) + α_(h,h_2^DD) finds h_2^DD=h(x;π/4). Indeed, it is the only linear classifier that satisfies (minimizes to 0) both objectives.It classifies all data points from D_t correctly and 'half' disagrees (i.e., statistically independent) on 2nd and 3rd quadrants with h, and classifies half of the unlabeled samples in each class.*When r_=0.5 (balanced), minimizing ℒ_D_t^ERM(h_2^DD) + α_(h,h_2^DD) finds h_2^DD=h(x;π/2)=h^⋆ as the only hypothesis satisfying both losses similar to the previous case. *In general, for 0 ≤ r_≤ 0.5, the classification boundary of h_2^DD rotates counterclockwise (starting at h(x; π/4) for r_=0) as the spurious ration increases i.e. h_2^DD = h(x;β(r_ )), where β(r): [0,0.5] → [π/4, π/2] is an increasing function of r. More precisely, β(r) = π/2 - arctan (1 -2r/1-r). This is the solution that satisfy both losses, similarly to the previous cases. The decision line lies in the 2nd and 3rd quadrants, and, therefore classifies the labeled trainingdata correctly. The angle can be easily derived to satisfy the constraint that h_2^DD and h_sp agree only on half of the unlabeled OOD data. Since β(r) is strictly increasing, it is only when r_=0.5 the solution of DivDis-Seq coincides with the ground truth h^⋆ = h(x; π/2) =h(x; β(0.5)). In this setting, this means DivDis-Seq only finds h^⋆ when the unlabeled OOD data is balanced. Conclusion. Overall, we see that the two methods findh^⋆in completely different conditions, which is consistent with the observation in Fig. <ref> and thus calls for attention on one of the key components – the spurious ratio of unlabeled OOD datar_. DivDis. Since simultaneous training introduces a more complex interaction between the two hypotheses, we do not provide proof for DivDis. In Appendix <ref>, we give empirical results on the 2D example showing that DivDis only findsh^⋆whenr_=0.5, as DivDis-Seq does, but the solutions are different for other values of the spurious ratio.§ RESULTS FOR TRAINING MLPS ON 2D TASKIn this section, we investigate whether the influence of the spurious ratio of unlabeled OOD data shown in Proposition <ref> still holds when the learning algorithm is more flexible. Specifically, we use the same 2D settings, described in Sec. <ref> and Appendix <ref>, but we train a multilayer perceptron (MLP) instead of the linear classifier.The MLP consist of 3 fully-connected layers (with width 40) and has ReLU as the activation function.As shown in Fig. <ref>, we observe that D-BAT <cit.> findsh^⋆only in inversely correlated unlabeled OOD data, while DivDis-Seq, on the contrary, findsh^⋆under balanced unlabeled OOD data, which is consistent with the Proposition <ref>.Indeed, for D-BAT, whenr_ = 0.25orr_ = 0.5, the diversification loss contradicts with the cross-entropy loss on the labeled training data,causing misclassification on the training data. On the contrary, DivDis-Seq's boundary rotates counterclockwise, and its diversification loss causes no contradiction with the cross-entropy training loss. § RESULTS FOR DIVDIS ON 2D BINARY TASK In Sec. <ref>, we show how varying the spurious ratio influences the learning dynamics of DivDis-Seq and D-BAT. For completeness, we also provide results for DivDis (Fig. <ref>).The experimental setup is the same as in Sec. 
<ref>, with 0.5k / 5k training and unlabeled OOD data (with varied spurious ratio). Similarly, the hypothesis space is restricted to linear classifiers. Because DivDis optimizes simultaneously (i.e., there is no first/second model), we do not fix the first classifier tohcontrary to what was done for D-BAT and DivDis-Seq. As shown in Fig. <ref>, DivDis does not find the true hypothesish^⋆when the unlabeled OOD data is inversely correlated.On the contrary, it recovershandh^⋆when the unlabeled OOD data is balanced.Thus, DivDis and DivDis-Seq share similar learning dynamics whenr_= 0.5. § MORE DETAILS, RESULTS AND DISCUSSION FOR SEC. <REF> In Sec. <ref>, we demonstrate on real-world image data that one of the key factors influencing the performance of diversification methods is the distribution of the unlabeled OOD data (more specifically, the spurious ratior_).Here we provide more details for the experimental setup, and results (in Fig. <ref>) for M/F <cit.> dataset (where MNIST and Fashion-MNIST <cit.> are concatenated).We then provide the results on CelebA <cit.> as a verification on a large-scale dataset, as shown in Tab. <ref>.Dataset & model details. We investigate two datasets: M/C and M/F, where:*In the training set (Fig. <ref>), the spurious dataset (MNIST) completely correlates with the "true" or semantic dataset (CIFAR-10 <cit.> or Fashion-MNIST <cit.>).Specifically, in M/C, MNIST 0s and 1s always concatenate with cars and trucks, respectively. In M/F, MNIST 0s and 1s always concatenate with coats and dresses, respectively.*For the unlabeled data , where D-BAT & DivDis(-Seq)'s hypotheses make diverse predictions, the spurious ratio r_ changes, exposing that the performance of diversification is highly dependent on the unlabeled data. *We can straightforwardly vary the r_ by changing the rules of concatenation in . Specifically, take M/C as an example,we first take the samples of 0s and 1s from MNIST, as well as cars and trucks from CIFAR-10, and we make sure they have the same size and are shuffled.Then, according to spurious ratio r_, we randomly select a r_ proportion of the samples from MNIST (0s / 1s) and CIFAR-10 (cars / trucks), and concatenate 0s with cars and 1s with trucks (so that the semantic feature is correlated with the spurious feature in r_ of samples).We finally concatenate the remaining 1 - r_ proportion of samples oppositely (i.e., 0s with trucks and 1s with cars). * r_=0 means inversely correlated (all images are 0s/trucks or 1s/cars). * r_=0.5 means balanced (half of the 0s are concatenated with cars and the other half is concatenated with trucks, half of the 1s are concatenated with trucks and the other half is concatenated with cars). * r_=1 means completely spurious (all images are 0s/cars or 1s/trucks).*The test data is a hold-out balanced OOD data D (Fig. <ref>), i.e., r_D = 0.5, in which there is no spurious correlation between the MNIST and target dataset (either CIFAR-10 or Fashion-MNIST), and the labels are assigned according to CIFAR-10 (in M/C) and Fashion-MNIST (in M/F). We train a LeNet <cit.>, which contains 2 convolutional layers and 3 linear layers. Following <cit.> setup, depending on the dataset, we modify the number of channels and input / output sizes of the linear layers.We summarize these parameters in Tab. <ref>.Results on M/F dataset. In the same manner of Fig.<ref>, we show results on M/F dataset in Fig. 
<ref>-Right.We see a similar trend as Fig.<ref>:*When r_∈ [0, 0.5], (inversely correlated to balanced), the results match our observations made in .*When r_∈ [0.5, 1.0] (balanced to completely spurious), both on M/C and M/F, all methods have more and more difficulty to diversify and use the semantic features. Indeed, the unlabeled OOD data distributiongets increasingly closer to the training distribution D_t, thus we cannot expect OOD generalization. Overall the synthetic 2D binary task section, M/C, and M/F experiments suggest that, in practice, across different datasets,diversification methods' behavior and solutions are highly dependent on the spurious ratio of unlabeled OOD data.Discussion on the α hyperparameter. In the above experiments on both datasets, we use large coefficientsαfor diversification losses (A_in Eq. <ref>) as 5 / 50 / 50 for D-BAT / DivDis / DivDis-Seq, in order to study the behavior of these methods when the diversity objective is fully optimized.In Fig. <ref>-Left, we further show results for different values ofα.We observe that tuningαis not sufficient to compensate for the misalignment between the unlabeled OOD data and the diversification loss, and the performance for both methods has the same trend. Specifically, largerαgives better test accuracy in general, as shown in Fig. <ref>-Left.In Fig. <ref>-Right, we select the bestαfor each scenario (i.e., each spurious ratio of unlabeled OOD data), and observe no meaningful difference in behaviour (compared to Fig. <ref> and Fig. <ref>).Therefore, a conclusion similar to Proposition <ref> still holds: even when tuningαfor each unlabeled OOD data setting (i.e. spurious ratio), D-BAT performs best when the unlabeled data is inversely correlated, while DivDis performs best when the unlabeled data is balanced.This suggests that a practitioner might not be able to compensate a misalignment between unlabeled data and diversification loss by tuning the hyperparameterα. Results on CelebA-CC dataset. In Tab. <ref>, we further show results on large-scale real-world dataset, namely CelebA-CC <cit.>. CelebA-CC is a variant of CelebA, introduced by <cit.>, where the training data semantic attribute is completely correlated with the spurious attribute. Here, gender is used as the spurious attribute and hair color as the target. We take D-BAT and DivDis-Seq (for fair comparison on sequential training), and show their test accuracy on different degrees of spurious ratio of unlabeled OOD data (r_={0.0, 0.5, 1.0}).Consistent with our previous observations, the results show that D-BAT performs the best whenr_=0.0, and DivDis-Seq performs the best whenr_=0.5. § PROOF OF PROPOSITION <REF>We first remind our proposition: Proposition 2.For K = (2|D| -1) and h^* the OOD labeling function, there exists a set of diverse K hypotheses h_1, ..., h_K, i.e., _D(h_i,h_j) = |{ x ∈ D:h_i(x) = h_j(x) }| / |D| ≤ 0.5 ∀ i,j ∈{1, ..., K}, i ≠ j and it holds that max _h’ ∈h_1, …, h_KAcc(h^*, h') ≤ 0.5.This formulation covers our two methods of interest, D-BAT <cit.> and DivDis <cit.>. Indeed, the maximum agreement is upper-bounded by 0.5. For DivDis, the optimal solution has maximum agreement of 0.5, as seen in Appendix <ref>. For D-BAT, the optimal solution has the lowest agreement possible. Indeed, forK=2, the optimal solution has_D(h_1,h_2) = 0. Thus, both methods optimal solutions are covered when upper-bounding the maximum agreement by 0.5 (as long asK ≤ (2|D| -1)). 
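Before the formal argument, the following minimal numerical sketch (our own illustration, not code from the paper; the toy size N and variable names are hypothetical) shows the Hadamard-code construction used in the proof below: it produces 2N - 1 hypotheses whose pairwise agreement never exceeds 0.5 and none of which beats random-guess accuracy with respect to a chosen h^*.

```python
import numpy as np

def hadamard_hypotheses(m):
    """Sylvester construction: N = 2^m binary codewords of length N,
    plus their complements, i.e., 2N codewords with pairwise Hamming
    distance at least N/2."""
    H = np.array([[1]])
    for _ in range(m):
        H = np.block([[H, H], [H, -H]])
    code = (H > 0).astype(int)
    return np.vstack([code, 1 - code])

C = hadamard_hypotheses(4)           # |D| = N = 16 "data points" (toy size)
h_star = C[3]                        # pick one codeword as the OOD labeling h^*
others = np.array([c for c in C if not np.array_equal(c, h_star)])  # K = 2N - 1 hypotheses

agree = lambda a, b: np.mean(a == b)
max_pairwise = max(agree(others[i], others[j])
                   for i in range(len(others)) for j in range(i + 1, len(others)))
max_acc = max(agree(h, h_star) for h in others)
print(f"K = {len(others)}, max pairwise agreement = {max_pairwise:.2f}, "
      f"max accuracy w.r.t. h^* = {max_acc:.2f}")   # both print 0.50
```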
We prove the existence of a diverse set ofKhypotheses, satisfying the condition of Proposition 2, using a classic construction from coding theory, called the Hadamard code <cit.>.Terminology. We first make explicit the equivalence between a hypothesis space and coding theory terminology. In binary classification, a labeling function or hypothesish_ionDis a binary codeword (vector) of fixed lengthN, whereN=|D|.A set ofKhypotheses is now referred to as a codeCof sizeK. We define the Hamming distance between codewordsh_i,h_jasd(h_i,h_j) = ∑_k=1^Nℐ[h_i(k) ≠ h_j(k)]whereℐis the indicator function andh_i(k)is the hypothesis prediction on thekth data point fromD. The Hamming distance between two equal-length codewords of symbols is the number of positions at which the corresponding symbols are different.The agreement between two hypothesesh_i,h_jcan now be rewritten using the Hamming distance as_D(h_i,h_j) = 1/N∑_k=1^Nℐ[h_i(k) = h_j(k)] = 1/N(N - ∑_k=1^Nℐ[h_i(k) ≠ h_j(k)]) = 1 - d(h_i,h_j)/N. Similarly, the accuracy can also be rewritten asAcc(h^*, h') = _D(h^*, h')= 1 - d(h^*, h')/N.Proof. We first use the fact that there exists a binary codeCwith minimum distanced^* = min_x,y ∈ C, x ≠ y d(x,y) = N/2and|C| = 2N. This binary code is the Hadamard code <cit.>, also known as Walsh code. This binary code has2Ncodewords of lengthNand has the minimal distance ofN/2.We show now that we can modify the Hadamard codeCto obtain another codeC'with equivalent properties andh^* ∈ C'. ForC', it then holds thatmax _h’ ∈ C'Acc(h^*, h') ≤ 0.5, as it was shown above thatAcc(h^*, h') = 1 - d(h^*, h')/N≤ 1 - d^*/N = 1 - N/2/N = 0.5. Further, we show how to construct suchC'. Leth_1be the first codeword ofC. Let us now define a function (or transformation)f(h): {0,1 }^N →{0,1 }^N such thatf(h_1) = h^*, i.eftransformsh_1intoh^*. Since we are dealing with binary vectors, the functionf(h)can be broken down into individual bit flips i.e.f(h)(i) = h(i) if h_1(i) = h^*(i) 1 - h(i)if h_1(i) ≠ h^*(i)Applyingfto all codewords in codeCgives us a new codeC' = {h ∈ C: f(h)}andh^* ∈ C'. This operation maintains the minimum distanced = N/2since: d(f(c_i),f(c_j))= ∑_k=1^Nℐ[f(h_i)(k) ≠ f(h_j)(k)]= ∑_{k: h_1(k) = h^*(k)}ℐ[h_i(k) ≠ h_j(k)]+ ∑_{k: h_1(k) ≠ h^*(k)}ℐ[ 1 - h_i(k) ≠ 1 - h_j(k))](by definition of f)= ∑_{k: h_1(k) = h^*(k)}ℐ[h_i(k) ≠ h_j(k)]+ ∑_{k: h_1(k) ≠ h^*(k)}ℐ[ h_i(k) ≠ h_j(k))]= ∑_k=1^Nℐ[h_i(k) ≠ h_j(k)]= d(h_i,h_j) Therefore,C'is the code satisfying all the conditions of Proposition 2. As was shown before, a binary codeCis equivalent to a set of|C|hypotheses with the same properties. Thus, withN=|D|, the above construction gives us a set of(2|D| -1)(2Nminus the true labelingh^*) hypotheses satisfying the constraints of Proposition 2. This concludes our proof.□Extension to multi-class classification We used the mathematical framework of coding theory and a classical result from it, the Hadamard code <cit.>, to prove Proposition 2, specifically for binary hypotheses. However, in coding theory, it has not been proven yet whether codes with similar nice properties, similar to Hadamard's, exist for anyq-ary codes i.e. for hypotheses withqpossible classes. One exception is whenqis a prime number. Let q be the number of classes and a prime number. Let m ∈ℕ^+ s.t. |D_ood| = q^m. 
Then, for K = (q· |D_ood| -1) and h^* the OOD labeling function, there exists a set of diverse K q-ary hypotheses h_1, ..., h_K, s.t., A_D_ood(h_i,h_j) = | x ∈ D_ood: h_i(x) = h_j(x) | / |D_ood| ≤1/q∀ i,j ∈1, ..., K, i ≠ j, and it holds that max _h’ ∈h_1, …, h_KAcc(h^*, h') ≤1/q Using a similar argument from the proof of Proposition 2, <cit.> tells us that for any |D_ood| = q^m where m ∈ℕ^+, we can find a code similar to Hadamard's with minimum distance equal toN(q-1)/q and cardinality equal to q^m+1 = q· |D_ood|. By removing the semantic hypothesis from the count, we obtain that Proposition 3 holds for K = (q· |D_ood| -1). § AGREEMENT SCORE AND IMPLICIT BIAS OF DIVERSE HYPOTHESES§.§ Agreement Score and Task Discovery <cit.>In this section, we introduce more details on the background of <cit.>, as well as how we leverage the findings from it. Agreement score as a measure of inductive bias alignment. We use the agreement score (AS) <cit.> to measure the alignment between the found hypotheses and the inductive biases of a learning algorithm. It is measured in the following way: given a training datasetD_tlabeled with a true hypothesish^*, unseen unlabeled dataD, and a neural network learning algorithm, train two networks from different initializations on the same training data, resulting in two hypothesesh_1, h_2 ∼(D_t, h^*), and measure the agreement between these two hypotheses onD: AS_ (h^* ; D_t, D) =𝔼_h_1, h_2 ∼(D_t,h^*) 𝔼_x ∼ D[ h_1(x)=h_2(x) ]Recent works <cit.> show that the AS correlates well with how well a learning algorithmgeneralizes on a given training task represented by a hypothesish. Indeed, high AS is a necessary condition for generalization <cit.> (different outcomes ofhave to at least converge to a similar solution). Finally, a learning algorithm will generalize on a labeling if the labeling is aligned with the learning algorithm's inductive biases, thus, we use AS as a measure of how well a given hypothesishis aligned with the inductive biases of. Task Discovery.<cit.> use bi-level optimization (also called meta-optimization) to optimize the agreement score (i.e., Eq. <ref>) and discover, on any dataset, high-AS hypotheses (tasks in the terminology of Task Discovery) that a given learning algorithm can generalize well on. They show that there are many diverse high-AS hypotheses different from semantic human annotations. In Fig. <ref>, we show examples of the high-AS hypotheses discovered for the ResNet18 architecture on CIFAR-10 <cit.>. Adversarial dataset splits (Fig. <ref>).<cit.> also introduce the concept of adversarial dataset splits, which is a train-test dataset partitioning such that neural networks trained on the training set fail to generalize on the test set.To do that, they induce a spurious correlation between a high-AS discovered hypothesish_Dand the (target) semantic hypothesish^⋆on the training data, and the opposite correlation on the test set. Specifically, they select data points as training setD_t, such that a discovered high-AS hypothesish_D(specifically,AS(h_D) > AS(h^⋆)) completely spurious correlates withh^⋆, i.e.{ x ∈ D_t:h_D(x) = h^⋆(x) }.The test setD_testis constructed such that the two hypotheses are inversely correlated, i.e.,{ x ∈ D_test :h_D(x) ≠ h^⋆(x) }. Theoretically, a NN trained on such a training setD_tshould learn the hypotheses with a higher AS, i.e.,h_D, which would lead to a low accuracy when tested onD_test. 
This was indeed shown to hold in practice, where the test accuracy drops from 0.8 for a random split to 0.2 for an adversarial split. Adversarial splits, therefore, show that neural networks favor learning the task with a higher AS (the background color in the case of Fig. <ref>) when there are two hypotheses that can 'explain' the training data equally well. In this work, we refer to this 'preference' as an alignment between the neural network and h_D. This creates a controllable testbed for studying the effect of spurious correlations on NN training, which we also adopt in our study.
§.§ Diversification Finds Hypotheses Aligned with Inductive Biases
Disclaimer: for an introduction to the agreement score, we refer to Appendix <ref>. In this section, we study how the diversification process is biased in practice by the inductive biases of the chosen learning algorithm. Specifically, using the agreement score, we demonstrate that D-BAT and DivDis find hypotheses that are not only diverse but also aligned with the inductive bias of the learning algorithm.
Experimental setup: CIFAR-10. We build on top of the adversarial splits and construct a CIFAR-10 data split with complete spurious correlation on the training data and balanced (no spurious correlation) unlabeled OOD data, as shown in Fig. <ref>. This is a typical setting to which D-BAT <cit.> and DivDis <cit.> apply. More precisely, h^⋆ is a semantic binary classification on CIFAR-10, defined by choosing a 5 vs 5 split of the original 10 classes. We define the spuriously correlated CIFAR-10 data by using an arbitrary high-AS binary labeling h_D as the spurious hypothesis, similarly to the "adversarial splits" introduced by <cit.>. There are two reasons for using this setting: one is to easily control the data setup, and the other is to use <cit.> as a reference for the AS of different hypotheses on CIFAR-10.
Measuring the AS of found hypotheses. For a given dataset with training data D_t, unlabeled OOD data D^U, and test OOD data D, we train a diversification method (without pretraining) to find multiple diverse hypotheses h_1, …, h_K and measure their agreement scores. More precisely, for each hypothesis h_i, we measure AS_𝒜(h_i; D_t ∪ D^U, D), where 𝒜 is the same learning algorithm (e.g., ResNet18) used to find the diverse hypotheses. This AS allows us to assess whether h_i is labeling D randomly or in a way that aligns well with the inductive biases of the learning algorithm. We provide more details on the setting in Appendix <ref> and illustrate the dataset creation in Fig. <ref>.
Implicit bias of diverse hypotheses. Tab. <ref> shows the agreement score of random hypotheses h_R (true labels on D_t but random labels on D) and of the diverse hypotheses found by both diversification methods. We observe a clear gap between the two, indicating that all diverse hypotheses label D in a structured, non-random manner. Measuring the agreement score of the true or semantic hypothesis h^⋆ gives us an estimate of the expected AS value of a hypothesis aligned with the inductive bias of 𝒜 (otherwise, we would not expect 𝒜 to be able to learn h^⋆). We observe that the hypotheses found by DivDis and D-BAT have agreement scores similar to that of h^⋆, indicating good alignment with the inductive biases of 𝒜. Thus, optimizing Eq. <ref>, using neural networks as the learning algorithm, leads to diverse hypotheses implicitly biased towards those favored by its inductive biases.
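The agreement scores reported above can be estimated straightforwardly: train two networks of the same architecture from different random seeds on the same labeled data and measure how often their predictions agree on held-out data. The sketch below is our own illustration of this estimate on synthetic data (a scikit-learn MLP stands in for the learning algorithm 𝒜; it is not the implementation used in the paper).

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_train, X_heldout = rng.normal(size=(500, 20)), rng.normal(size=(2000, 20))
y_train = (X_train[:, 0] > 0).astype(int)         # hypothetical labeling h on D_t

def agreement_score():
    preds = []
    for seed in (0, 1):                            # two independent runs of the same algorithm
        net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=seed)
        net.fit(X_train, y_train)
        preds.append(net.predict(X_heldout))       # predictions on unseen data D
    return np.mean(preds[0] == preds[1])           # fraction of points where the runs agree

print(f"estimated AS = {agreement_score():.2f}")   # high AS <=> h is aligned with the algorithm's bias
```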
According to the definition of AS, such alignment is expected from an hypothesis found through empirical risk minimization (ERM), however it is not expected from diverse hypotheses (as defined in Eq. <ref>), given that the additional diversification loss could destroy this alignment. This analysis sheds light on the process by which diverse hypotheses are found, and puts an emphasis on the choice of a good learning algorithm, which is crucial, as we show in the subsequent Sec. <ref>.Similar to what is shown with ResNet in Tab. <ref>, in Tab. <ref>, we repeat the experiment with two different architectures, MLP and ViT <cit.>, on CIFAR-10. The diverse hypotheses found by D-BAT and DivDis have high AS. This demonstrates that our above conclusions also hold with different architectures.Diverse hypotheses cannot generalize without the correct pretrainingAs seen above, D-BAT and DivDis produce diverse hypotheses implicitly biased towards those favored by the inductive biases of its learning algorithm. Nonetheless, this implicit bias may not lead to OOD generalization, i.e.,h^* ∉_̋K^*, as the test accuracies in Tab. <ref> are found to be near the chance level.In Tab. <ref>, on Waterbirds, we repeat the same experiment as in Tab. <ref>, with an additional variable. The ResNet50 model is either trained from scratch or startingfrom ImageNet-1k supervised pretraining weights. We can see that pretraining does not affect whether DivDis and D-BAT find high AS hypotheses, however it greatly influences the generalization capability of the found hypotheses. These results corroborate with Sec. <ref> that the correct choice of inductive bias is crucial to unlock OOD generalization. § RESULTS AND IMPLEMENTATION DETAILS OF SEC. <REF>§.§ Experimental details Remarks on D-BAT and DivDis. *All experiments were run using DivDis and D-BAT respective codebases to ensure closest reproducibility to their presented methods and results.*DivDis' default setting is to augment the data while training. This option was disabled to ensure fair comparison to D-BAT.*For their results, DivDis rebuilt the Waterbirds<cit.> dataset from scratch. On the contrary, D-BAT used the one provided by the WILDS<cit.> library. To ensure fair comparison, both methods were run using the latter version of the dataset.*If not precised, all train, validation and test splits are taken as provided from <cit.> or WILDS.*The best models are selected according to validation accuracy.Computational resources. Each experiment can be run on a single A100 40GB GPU. Models. If not precised, the model used in most experiments is ResNet50 <cit.>. Otherwise, when using a Vision Transformer (ViT), we use a ViT-B/16[https://pytorch.org/vision/main/models/generated/torchvision.models.vit_b_16.html] <cit.>. The last exception is for DivDis Camelyon17 (DenseNet121 <cit.>).DivDis parameters.For Waterbirds variants, the optimizer is SGD, the number of epochs is 100, the learning rate is 0.001, the weight decay is 0.0001. Theαparameter (referred asλin DivDis) was tuned over{ 0.1, 1, 10 }. For Office-Home, the optimizer is SGD, the number of epochs is 50, the learning rate is 0.001, the weight decay is 0.0001, and theαparameter was tuned over{ 0.1, 1, 10 }. For Camelyon17, the original best performing setting from DivDis was used.D-BAT parameters. For Waterbirds variants and Office-Home, the optimizer is SGD, the learning rate is 0.001, the weight decay is 0.0001. Given that D-BAT optimizes sequentially, the number of epochs is an important parameter to tune. 
We tuned overepochs∈{30,100}andα∈{0.0001,0.1}. For Camelyon17, the original D-BAT best performing setting was used.§.§ Complete results of Sec. <ref> We provide the full results for Fig. <ref> (ERM baseline included) in Tab. <ref>. We also provide the accuracy of each new head of D-BAT for Tab. <ref> in Tab. <ref>. Finally, in Fig. <ref>, we further show that DivDis does not scale well to larger K (e.g.K = 64) “out-of-the-box”, and the performance drops as the number of hypotheses increases. Note that testing D-BAT in this regime would be prohibitively expensive.§.§ Pretraining strategy and architecture details. In Fig. <ref>, we vary the pretraining method and architecture, and measure the effects on performance. We provide additional details here. If not precised, the methods use a ResNet-50<cit.> model. All 8 variations are pretrained on the ImageNet-1k<cit.> dataset: *Self-supervised*SwAV <cit.> *SimCLRv2 <cit.> *MoCo-v2 <cit.> *ViT-B/16 MAE <cit.> *ViT-B/16 Dino <cit.> *Supervised*Adversarially robust classifiers <cit.>.*Resnet50 IN <cit.>, supervised pretraining on ImageNet-1k. This is the pretraining method used by <cit.> in their papers. * ViT-B/16 IN <cit.>, supervised pretraining on ImageNet-1k. Experimental details For the adversarially robust classifier, the L2-Robust ImageNet ResNet-50 (ϵ = 0.05) model was chosen, following the advice of <cit.>, as it is hypothesized that smaller values ofϵtend work better on datasets where leveraging finer-grained features are necessary (i.e., where there is less norm-separation between classes in the input space), such as Waterbirds-CC or Office-Home. Each variation hyperparameters were tuned following the same procedure as described in <ref>.§ DETAILED EXPERIMENTAL SETUP AND ADDITIONAL RESULTS FOR SEC. <REF> In Sec.<ref>, we demonstrate thatusing different inductive biases can drastically and predictably influence a diversification method.Here we provide more details on how we construct such examples.Additionally, in Tab. <ref> we also provide results where the examples are constructed using a ViT-ResNet pair (instead of MLP-ResNet pair).Finally, we provide an extension Tab. <ref> by showing how an inductive bias gradually gets favorable and vice versa through the transition of spurious ratios, in Fig. <ref>.Construction * Prerequisite: We consider a semantic (5-vs-5) binary classification task h^⋆ on CIFAR-10 <cit.> (i.e., airplane, automobile, bird, cat, deer original classes as class 1 and dog, frog, horse, ship, truck original classes as class 0). * Step 1 (Selecting hypotheses aligned with learning algorithms from <cit.>): We take two high-AS hypotheses (see Fig. <ref> for examples of such hypotheses) discovered in <cit.> for MLP and ResNet18 <cit.>, where the hypotheses (h_MLP and h_RN) satisfy Eq. <ref>: AS_MLP(h_MLP) > AS_MLP(h_RN), AS_RN(h_RN) > AS_RN(h_MLP),For example, this means the MLP hypothesis h_MLP has a high AS when training MLPs but lower AS when training ResNet18s. Also, we ensure these two hypotheses have higher AS than the true hypothesis h^⋆ to make sure that they are able to act as a spurious hypothesis. * Step 2 (Constructing training data where h^⋆, h_MLP and h_RN completely correlates): As presented in Tab. 
<ref>, in D_t row, we select the data points as training set D_t such that h^⋆, h_MLP and h_RN agree.As shown in <cit.> by adversarial splits, when two hypotheses correlate with each other (i.e., their labels are the same on training data), a neural network tends to converge to the hypothesis with higher AS.Thus, combining with the conditions in step 1 (i.e., Eq. <ref>), training MLP on D_t with ERM shoudl converge to h_MLP and training ResNet on D_t with ERM to h_RN, which is illustrated in Tab. <ref>-Right. * Step 3 (Further improving the alignment between the two hypotheses and their corresponding architectural inductive biases): This step is not necessary in general, but it allows us to find hypotheses that are better aligned with the inductive biases of the network. This is because the Task Discovery framework from <cit.> might not provide globally optimal hypotheses. The improvement goes as follows: we update h_MLP and h_RN to further increase their AS for a better alignment with the corresponding learning algorithm (i.e., MLP and h_MLP, ResNet18 and h_RN). Specifically, we train with ERM an MLP and ResNet18 on D_t and make predictions on all CIFAR-10 data except for the training data i.e. D ∖ D_t. We replace the old labels of h_MLP and h_RN on D ∖ D_t by the new labels predicted by MLP and ResNet18. This step gives us higher AS hypotheses (thus more preferred by the given architecture) that satisfy Eq. <ref> (equation also shown in Step 1). * Step 4 (Constructing unlabeled OOD data such that ResNet18 or MLP fails): As shown in Tab. <ref>, inrow, we select data points with specific hypothesis labels as the unlabeled OOD data.By design, in Tab. <ref>-Left, h^⋆ is inversely correlated to h_MLP and is not correlated (i.e. balanced) to h_RN. We know training an MLP with ERM on D_t will choose h_MLP. Therefore, D-BAT will perform well (i.e., find h^⋆) by minimizing its diversification loss.On the contrary, training a ResNet with ERM on D_t will choose h_RN. Therefore, as shown in Sec. <ref>, D-BAT cannot perform well by minimizing its diversification loss.The opposite conclusion holds forTab. <ref>-Right. * Step 5: we take D_t and(which are around 12k and 24k images, respectively) and run D-BAT <cit.> (the labels onare inaccessible), and measure the test accuracy on hold-out D∼, which is shown in Tab. <ref>-Left. The construction process is thus a white-box process (or attack), similar to adversarial attacks, but on the architectural inductive bias aspect.We reiterate that the purpose of this example is to illustrate that the choice of architectural inductive bias can have a very drastic influence on the behavior of diversification methods and this choice is co-dependent on the properties of the unlabeled OOD data. Additionally, we can demonstrate this co-dependence in a fine-grained manner, as shown in Fig. <ref>.Here we still keep the training data the same (according toD_tin Tab. <ref>), and construct the unlabeled OOD datasuch that the spurious ratios ofh_MLPandh_RNgradually switch from low to high or high to low.Hence, the Tab. <ref>-Left and the Tab. <ref>-Right correspond to the left and right extremities of the x-axis of Fig. <ref>, and in between are interpolations between the two distributions (spurious ratios).This gives a better view of how one inductive bias gets favorable through the transitions of spurious ratios, and vice versa.
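For concreteness, the selection logic of Steps 2 and 4 can be summarised in a few lines of array manipulation. The sketch below is only schematic: the three label arrays are random stand-ins for h^⋆, h_MLP and h_RN (in practice these come from the semantic task and the Task Discovery hypotheses), and the balancing of the OOD pool with respect to h_RN shown here is one simple way to obtain a 50% spurious ratio, not necessarily the exact procedure used for Tab. <ref>.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random stand-ins for the three binary labelings over the full dataset D.
# In the actual construction, h_true comes from the semantic 5-vs-5 task and
# h_mlp / h_rn are the two high-AS Task Discovery hypotheses.
n = 50_000
h_true = rng.integers(0, 2, n)   # h*
h_mlp  = rng.integers(0, 2, n)   # hypothesis favoured by MLPs
h_rn   = rng.integers(0, 2, n)   # hypothesis favoured by ResNet18s

# Step 2: training set D_t = points on which all three hypotheses agree.
d_t = np.flatnonzero((h_true == h_mlp) & (h_true == h_rn))

# Step 4 (left-hand construction): unlabeled OOD pool in which h* is
# inversely correlated with h_MLP and balanced (uncorrelated) with h_RN.
anti_mlp = np.flatnonzero(h_true != h_mlp)
agree_rn    = anti_mlp[h_true[anti_mlp] == h_rn[anti_mlp]]
disagree_rn = anti_mlp[h_true[anti_mlp] != h_rn[anti_mlp]]
k = min(len(agree_rn), len(disagree_rn))
d_ood = np.concatenate([agree_rn[:k], disagree_rn[:k]])

print(f"|D_t| = {d_t.size},  |D_ood| = {d_ood.size}")
```

Interpolating the spurious ratio of h_MLP between this construction and its mirror image (i.e., mixing points from the two pools in varying proportions) yields the intermediate distributions shown in Fig. <ref>.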
http://arxiv.org/abs/2312.16313v1
{ "authors": [ "Harold Benoit", "Liangze Jiang", "Andrei Atanov", "Oğuzhan Fatih Kar", "Mattia Rigotti", "Amir Zamir" ], "categories": [ "cs.LG" ], "primary_category": "cs.LG", "published": "20231226194753", "title": "Unraveling the Key Components of OOD Generalization via Diversification" }
We investigate the relationship between the baryonic angular momentum and mass for a sample of 36 isolated disc galaxies with resolved HI kinematics and infrared WISE photometry drawn from – and representative in terms of morphologies, stellar masses and HI-to-star fraction of – the carefully constructed AMIGA sample of isolated galaxies. Similarly to previous studies performed on non-isolated galaxies, we find that the relation is well described by a power law j_bar ∝ M_bar^α. We also find a slope of α = for the AMIGA galaxies, in line with previous studies in the literature; however, we find that the specific angular momenta of the AMIGA galaxies are on average higher than those of non-isolated galaxies in the literature. This is consistent with theories stipulating that environmental processes involving galaxy–galaxy interaction are able to impact the angular momentum content of galaxies. However, no correlation was found between the angular momentum and the degree of isolation, suggesting that there may exist a threshold local number density beyond which the effects of the environment on the angular momentum become important.

galaxies: kinematics and dynamics – galaxies: evolution – galaxies: spiral – galaxies: fundamental parameters – dark matter

§ INTRODUCTION

Viewed as a basic property of galaxies, the angular momentum holds an important place in constraining theories of galaxy formation and evolution <cit.>. Initial analytical studies on the subject proposed that angular momentum is acquired by the dark matter halo through tidal torques during the proto-galactic formation phase <cit.>. Additionally, since the baryonic matter in galaxies is thought to experience the same torques, its angular momentum is expected to follow the same distribution as that of the dark matter (DM) halo <cit.>.

On the other hand, one of the most important aspects of the angular momentum in the context of galaxy evolution studies lies in its relationship with the mass. In the framework of the cold dark matter (CDM) cosmology, the angular momentum of the DM halo (characterised by the global spin parameter) is predicted to be approximately independent of the mass <cit.>, leading to a power-law relation between the DM's specific angular momentum j_DM (i.e., the angular momentum per unit mass) and its mass M_DM: j_DM ∝ M_DM^α, with α ∼ 2/3. This relation also holds for the baryons within the DM halo, since they are expected to follow the DM in the angular momentum distribution.

The total budget of a galaxy's baryonic angular momentum is essentially provided by the stellar and gas components making up the galaxy. The initial observational study of the j–M relation for the stellar component <cit.> found a slope similar to the theoretical prediction, but also revealed that, at a given stellar mass, disc galaxies have higher specific angular momenta than early-type galaxies. Subsequent and more comprehensive studies refined these results, demonstrating the dependency of the angular momentum on galaxy morphological type <cit.>. More recently, several studies have included the gas component in the evaluation of angular momentum, providing a more complete estimate of the total baryonic content <cit.>.
Although the emerging relation of the total baryonic angular momentum does not largely differ from that of the stellar component, the emerging picture suggests that complex mechanisms are responsible for the observed angular momentum content of galaxies. For example, the retained angular momentum fraction (i.e, the ratio between the baryonic and DM angular momenta) is presumably higher for galaxies with higher baryon fraction, suggesting that these galaxies conserve better their angular momentum during their formation phase <cit.>.Numerous theoretical studies have also attempted, over the recent years, to provide a complete description of how the angular momentum of the baryonic component varies over a galaxy's lifetime. Today, the generally accepted picture is that both internal and external processes (such as star formation, stellar feedback, gas inflow and outflow, merging) are capable of affecting the angular momentum of galaxies <cit.>. This in turn can alter the position of individual galaxies in the j-M plane.While the occurrence and importance of internal mechanisms are independent of the environment, the external processes are significantly impacted by local density in the medium around galaxies. In fact, several studies on galaxy formation and evolution have shown that environment plays an important role in shaping the physical properties of galaxies <cit.>. From a morphological point of view, the neutral hydrogen () content is arguably among the most important parameters in tracing environmental processes, since it constitutes the envelope that is most affected by said processes <cit.> and the reservoir of gas out of which stars are formed (via molecular gas). Galaxies evolving in dense environments tend to be moredeficient than their counterparts in low-density regions <cit.>. On the other hand, galaxies residing in the lowest density environments are less exposed environmental processes: theircontent is higher than the average, while theirdistribution is more orderly <cit.>.Most observational investigations since the original <cit.> study have focused on either providing a better constraint of the j-M relation with respect to morphological type and gas fraction, or reconciling measured the retained fraction of angular momentum with the numerical predictions <cit.>. However, little attention was given to the environmental dependency of the angular momentum distribution <cit.>; in particular, no existing study provides analysis on galaxies selected in extremely low density environments. In this work, we investigate the specific angular momentum of a subset of the AMIGA (Analysis of the interstellar Medium in Isolated GAlaxies) sample <cit.>, the most carefully constructed sample of isolated galaxies available to date. The degree of isolation of galaxies in the catalogue was evaluated based on two main criteria: the local environment number density η_k and the total force Q exerted on the galaxies by their neighbours <cit.>. More isolated than most of their field counterparts, the galaxies in AMIGA were found to be almost “nurture free", exhibiting extremely low values for parameters that are usually enhanced by interaction <cit.>. Therefore, the sample provides, by definition, a good reference for evaluating the - relation (the angular momentum - mass relation for the baryonic component) in interaction-free galaxies in the local Universe. The aim of the present investigation is to evaluate how the environment impacts the angular momentum of disc galaxies. 
Indeed, how environmental processes affect the angular momentum content of a galaxy is not straightforwards, with the change in j being dependent on the specifications of the interactions. However, current simulations tend to agree that processes such as mergers could potentially redistribute the stellar angular momentum from the inner regions of galaxies out to their outer parts <cit.>. It is therefore possible that galaxy interactions transfer part of the (stellar and gas) disc angular momentum into the DM halo, effectively reducing the “observable” angular momentum content. However, no observational study, to date, has conclusively shown evidence of this effect. If these theoretical predictions are correct, we then expect isolated galaxies to have retained a larger fraction of their initial angular momentum – resulting in these galaxies having higher j values. We therefore make use of the AMIGA sample to investigate this hypothesis, which is undoubtedly the best existing sample candidate for the study.The paper is organised as follows: in <Ref> we describe the AMIGA sample, theand mid-infrared data used in the analysis. Next, we present details on the measurement of the specific angular momentum in <Ref>. The relation between j and the mass is then presented and analysed in <Ref>, with a discussion within the context of galaxy evolution in <Ref>. Finally, we summarise and layout the future prospects in <Ref>.§ DATA§.§ The AMIGA Sample of Isolated galaxiesThe AMIGA <cit.> galaxies were selected from the 1050 isolated galaxies of the CIG <cit.> catalogue. The original study of <cit.> found that the AMIGA sample has properties as close as possible to field galaxies, with an optical luminosity function representative of the lower density parts of galaxy environments. The study also performed a completeness test and concluded that the sample was over 80% complete for objects with B-band magnitudes brighter than 15.0 and within 100 Mpc. The morphological study of the sample revealed that it contains 14% of early-type (E/S0) galaxies, with a vast majority of the galaxies (82%) ranging from Sa to Sd Hubble types <cit.>. Several multi-wavelength studies have since then refined the AMIGA sample to ensure that it is as “nurture-free” as possible, by eliminating galaxies that are suspected to have undergone recent interaction. In particular, <cit.> mapped the projected neighbours of 950 CIG galaxies with systemic velocities higher than 1500 , down to a B magnitude limit of 17.5, and within within a radius of 0.5 Mpc around each of these galaxies. The velocity cut ensures that nearby galaxies – i.e, those closer than 20 Mpc – are not included in the AMIGA sample since their low distance would result in impractically large searching areas for potential neighbours during the evaluation of the isolation degree. In their study, the authors identified only 636 galaxies that appeared to be isolated. Subsequently, <cit.> estimated the influence of their potential neighbours on the CIG galaxies by measuring their local number density η_k[by definition, η_k can only be determined for galaxies having at least two neighbours.] and the tidal strength Q to which they are subject, providing a tool for quantifying the degree of isolation of the sample galaxies. 
These isolation parameters allowed the authors to i) find that the 950 galaxies of v>1500presented a continuous spectrum of isolation, ranging from strictly isolated to mildly interacting galaxies, and to ii) produce a subsample of the 791 most isolated AMIGA galaxies. These isolated galaxies were selected such that η_k < 2.4 and Q < -2. Although the isolation criteria were later revised by <cit.> who further reduced the sample size to 426 galaxies[the authors end up with a smaller sample because not all CIG galaxies are in the SDSS footprint.] based on photometric and spectroscopic data from the SDSS Data Release 9, there is agreement that the <cit.>'s sample of 791 galaxies provides a suitable nurture-free baseline for effectively quantifying the effects of galaxy interactions <cit.>: we will hereafter refer to this sample as the Verley07b sample.From the initial sample of 950 galaxies, we selected 38 galaxies for which high-qualitydata are available (see <Ref>). Among these, 36 galaxies (except CIG 587 & 812) were further detected in mid-infrared (see <Ref>): only these galaxies will be considered in the angular momentum analysis below, and will be referred to as the angular momentum sample (or j-sample). From this sample, 24 meet the isolation criteria of <cit.>, while the remaining 12 were classified by the authors as non-isolated. A closer look at the distribution of the j-sample's isolation parameters reveals that the groups of 24 and 12 galaxies are rather separated by the tidal force Q (left panel of <Ref>): we will therefore refer to them as the low-Q and high-Q samples respectively in the next sections.To assess how representative the j-sample is of the larger AMIGA sample of isolated galaxies, we further constrain the AMIGA sample to those galaxies for which we can reliably determine both the stellar andproperties. Among the 791 galaxies in the Verley07b sample, only 587 galaxies have both theirmasses and<cit.> infrared photometry available (see <Ref>). We refer to these 587 galaxies as the VerleyWISE sample. The different samples are summarised in the diagram of <Ref>. In <Ref> we compare the distribution of the isolation parameters, distance and morphologies in both the angular momentum and VerleyWISE samples. In terms of isolation, the galaxies in the low-Q sample occupy the same parameter space as the VerleyWISE sample although their values of the Q parameter tend to be on the upper end of the VerleyWISE sample. Furthermore, the distances and morphologies of the low-Q sample appear to be distributed similarly to those of the VerleyWISE sample. On the other hand, while the high-Q sample's morphologies are distributed roughly similar to those of the VerleyWISE sample, its distance distribution is skewed towards the lower limit: 7 out of the 12 galaxies in the sample are closer than 40 Mpc, while the median distances of the other two samples are in the range ∼60-80 Mpc.As for the trends of the stellar andmasses, <Ref> shows that the isolated j-sample follows the distribution of the VerleyWISE sample. In fact, the -to-stellar mass fractions of the galaxies in both the low-Q and high-Q samples are distributed uniformly across the stellar mass range, residing together with the majority of the VerleyWISE sample galaxies in the parameter space. Moreover, unlike the distance parameter, themass fractions of the high-Q sample present no discrepancy with those of the low-Q, although the high-Q galaxies tend to have highermass fractions than those in the low-Q sample. 
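In practice, the split used throughout this work reduces to applying the two <cit.> thresholds quoted above. The snippet below is purely illustrative: the catalogue values shown are invented placeholders, not the actual η_k and Q measurements, and only the two cuts themselves are taken from the text.

```python
import pandas as pd

# Placeholder table; the real values come from the isolation catalogue.
cat = pd.DataFrame({
    "galaxy": [1, 2, 3, 4, 5],
    "eta_k":  [1.8, 2.6, 1.2, 2.9, 2.0],      # local number density (invented)
    "Q":      [-3.1, -1.5, -2.4, -1.8, -2.7], # tidal strength (invented)
})

# Isolation criteria: eta_k < 2.4 and Q < -2
is_isolated = (cat["eta_k"] < 2.4) & (cat["Q"] < -2.0)
low_q, high_q = cat[is_isolated], cat[~is_isolated]
print(len(low_q), "low-Q (isolated) and", len(high_q), "high-Q galaxies")
```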
Compared to existing samples of non-isolated disc galaxies, the j-sample isolated galaxies are located in the high stellar mass end of the spectrum. This is shown in <Ref> where we compare the j-sample to the medians of other large galaxy samples in the literature: theflux-limited ALFALFA-SDSS sample of 9153 galaxies <cit.> (cyan circles), the HICAT-WISE sample of 3158 galaxies <cit.> and the xGASS sample of 1179 galaxies <cit.>. We also include in the figure two -selected, relatively smaller samples of resolved galaxies fromand , which we describe more extensively in <Ref>. The higher masses of the j-sample galaxies is caused by the velocity cut (threshold systemic velocity of 1500 ) imposed to isolated galaxies during the selection process, which systematically excludes low-mass galaxies. §.§DataThe measurement of the specific angular momentum requires good kinematic information of the candidate galaxies, i.e, reasonable spatial and spectral resolution data. Of the 587 galaxies making up the AMIGA VerleyWISE sample, we obtained good qualitydata for 38 galaxies, compiled from various archival sources mainly obtained with the VLA, WSRT and GMRT telescopes. Particularly, eight galaxies were detected and retrieved from the first data release[Data available through https://vo.astron.nl] of the Apertif <cit.> survey.The resolutions of thedata for each of the individual galaxies, as well as their noise levels and references are given in <Ref>. Eighteen of the 38 galaxies were published in the literature: we have obtained their reduceddatacubes (either through private communications or through the WHISP database[https://www.astro.rug.nl/ whisp/]), on which we performed the rotation curve modeling described in the last paragraph of this section. Additionally, data for 12 galaxies were retrieved from the VLA archive (their references are given in <Ref>); for these, we proceeded to calibrate and image the data using a standard data reduction procedure[adapted from <https://github.com/AMIGA-IAA/hcg_hi_pipeline>] in CASA <cit.>. Furthermore, data for 10 galaxies were retrieved from the Apertif data release, but those of CIG 468 and CIG 571 were discarded because the former lacked sufficient angular resolution and VLA data exist for the latter. For each of the remaining eight galaxies, we downloaded the spectral line data for the Apertif compound beam whose centre was closest to the galaxy of interest and which covered the correct frequency range, including the corresponding synthesized beam cube. The image cubes available in the archive are dirty cubes that have been output by the Apercal pipeline <cit.>. We performed spline fitting on the dirty cubes along the spectral axis to remove any additional continuum residuals, and conducted automated source finding using<cit.> to identify and mask emissions from the galaxy of interest. The data was then cleaned within the mask down to 0.5σ using standard Miriad tools <cit.>, and the clean cubes were primary beam corrected using the recommended Gaussian process regression models released with Apertif DR1[https://www.astron.nl/telescopes/wsrt-apertif/apertif-dr1-documentation/] <cit.>. The properties of thedata for all galaxies in the j-sample are given in <Ref>, and their moment maps and position-velocity diagrams in <Ref>. Their physical resolutions range from 1.3 to 22.9 kpc, with 32 out of the 38 galaxies having synthesised beam sizes of <10 kpc. 
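The column-density limits quoted below follow from the cube noise levels and beam sizes via the standard 21-cm brightness-temperature conversion. A minimal sketch is given here; it assumes a Gaussian beam, the coefficient 605.8 is the usual Rayleigh–Jeans factor at 1.42 GHz, and 1 M_⊙ pc^-2 of HI is taken to correspond to ≈1.25×10^20 cm^-2.

```python
import numpy as np

def nhi_sensitivity(rms_mjy_beam, bmaj_arcsec, bmin_arcsec,
                    linewidth_kms=20.0, nsigma=3.0):
    """N(HI) sensitivity of a cube, returned in cm^-2 and Msun pc^-2."""
    # brightness temperature [K] corresponding to 1 mJy/beam at 1.42 GHz
    tb_per_mjy = 605.8 / (bmaj_arcsec * bmin_arcsec)
    nhi = 1.823e18 * nsigma * rms_mjy_beam * tb_per_mjy * linewidth_kms
    return nhi, nhi / 1.25e20   # (cm^-2, Msun/pc^2)

# e.g. a 30" x 30" beam with 0.5 mJy/beam channel noise over 20 km/s
print(nhi_sensitivity(0.5, 30.0, 30.0))
```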
Furthermore, the column density sensitivities in the data range from 3×10^17 (for CIG 134) to ∼1.5×10^20 cm^-2 (for CIG 676), estimated over a 20linewidth. With the exception of the Apertif galaxies whose 3σ detection levels lie in the range ∼1-3.5 M_⊙ pc^-2, thein all galaxies in the sample is mapped to lower column density levels, reaching up to two orders of magnitude. This ensures that the full extent of the gas rotating with the discs is traced in most galaxies.Themasses of a total of 844 AMIGA galaxies were measured in <cit.> using single-dish data (GBT, Arecibo, Effelsberg and Nançay), including 587 galaxies of the Verley07b sample. All j-sample galaxies, except CIG 571, are comprised in these 587 galaxies. For this galaxy, we derived themass from the interferometric data cube and the optical distance. The downside of this method is the underestimation of themass since, by design, interferometers are poor at recovering the totalflux of galaxies. Themasses of the j-sample isolated galaxies cover the range 9.27 < log(M_/M_⊙) < 10.48, with a median of 9.93± 0.05.From thedatacubes of the isolated galaxies in the j-sample, we made use of thepackage <cit.> to model their rotation curve. The package takes as input thecube of the galaxy, and performs a three-dimensional tilted-ring model fitting to determine the kinematic and geometrical parameters. An advantage of the three-dimensional (over the traditional two-dimensional) model-fitting, specifically with thepackage, is the minimisation of the beam smearing effects that arise when dealing with low-resolution data – as is the case for some galaxies in our sample. For the algorithm to work efficiently one needs to provide initial guesses for the galaxy parameters; these are the kinematic centre, the systemic velocity, the line-of-sight inclination and position angle. For each galaxy in the j-sample, we took the optical parameters to be the initial parameters of the galaxy. To better improve the fitting procedure, we provide a 3D mask for each of the galaxies to . Each mask is constructed with the smooth and clip algorithm ofat 4σ, such that it essentially only contains theemission of the corresponding galaxy. The output ofcomprises therotation curve and surface density profile of the galaxy, computed from concentric annuli, each characterised by a set of geometrical parameters (such as inclination and position angle) and centred on the kinematic centre of the galaxy. In <Ref> we show the variation of the geometric parameters with the radius, as well as the resulting surface density profiles. The values of the average fit results are given in <Ref>. §.§ Mid-Infrared DataWe use mid-infrared<cit.> observations to trace the stellar components of the AMIGA galaxies. More specifically, we refer to theExtended Source Catalog <cit.> to obtain the photometric data of the AMIGA galaxies: these include the W1 (3.4) and W2 (4.6) fluxes – sensitive to stellar populations – of the galaxies, the stellar surface brightness profiles and the W1-W2 colours. The full source characterisation, including the star-formation sensitive bands at 12 (W3) and 23 (W4), are available in <cit.>.Thephotometries of the AMIGA galaxies were derived following the method described in <cit.> and <cit.>; first, image mosaics were constructed from single nativeframes using a technique detailed in <cit.>, and resampled to a 1" pixel scale – relative to the beam. 
Because of the modest angular size of the AMIGA galaxies (their optical radii range from 10.8” to 4.6'), the above pixel scale was appropriate to accommodate their angular sizes and no extra processing step was needed as is the case for some large nearby objects processed in <cit.>. Of the 791 galaxies in the Verley07b sample, infrared photometries of 632 galaxies were successfully and reliably extracted from the WSXC catalogue. However, only 587 of those also happen to havemasses available. For each of those, the total flux was measured in each of the four bands – including the W3 (12) and W4 (23) bands. The W1 and W2 total fluxes were estimated using a technique developed for the 2MASS <cit.>, which consists of fitting a double Sérsic profile to the axisymmetric radial flux distribution. This way, both the star-forming disc and bulge components are each represented by a single Sérsic profile. Owing to the lower sensitivity of the longer wavelength bands W3 and W4, the total fluxes of part of the sample galaxies in these bands are obtained through extrapolation of their extent to three disc scale lengths after fitting their light profiles with the double Sérsic function. However, since these longer wavelength fluxes are not used in this work, it is not relevant to discuss their measurements here. For a full description and discussion of their derivation, we refer the reader to <cit.>.Besides the total flux, the global stellar mass was also estimated for each of thedetections. This was done by estimating the mass-to-light ratio M/L_W1 in the W1 band from the W1-W2 colour, and converting the W1 flux density to the luminosity L_W1. As specified in <cit.>, this is based on the assumption that the observed W1 light is emitted by the galaxy's sole stellar population, and that the post-AGBs population are not significantly contributing to the near-infrared brightness. To evaluate M/L_W1, we make use of the new GAMA colour-to-mass calibration method in <cit.>. The average M/L_W1 found therein is 0.35±0.05, about 30% lower than the mass-to-light ratio value of of 0.5 (in the 3.6μ m band) adopted infor disc-dominated galaxies. As for , the authors estimated their stellar masses from K_ s magnitudes based on the calibration from <cit.>.Additionally to these parameters, we have also measured the W1 and W2 light profiles – the surface brightness at different radii – of a subset of 449 galaxies, including the 36 isolated galaxies in the j-sample (except CIG 587 & 812). These light profiles, presented in <Ref>, provide information on the distribution of the stellar density as a function of the radius, necessary for measuring the stellar specific angular momentum (see <Ref> below).§ THE SPECIFIC ANGULAR MOMENTUMThe specific angular momentum of a disc galaxy is defined as j ≡ J/M, where J is the orbital angular momentum of the galaxy and M its total mass. More explicitly, the specific angular momentum carried by a galaxy's component i of radius R can be written asj_i(<R) = ∫_0^R r^2 Σ_i(r)v_i(r)dr/∫_0^R r Σ_i(r)dr,where Σ_i(r) and v_i(r) are respectively the surface density and velocity of the component i at radius r. The errors associated with j_i are estimated following <cit.> and approximating the disc scale length R_ d to ∼30% the radius at the 25th magnitude R_25 (e.g,find R_ d∼0.35R_25):δ j_i = 0.3 R_25√(1/N∑_n^Nδ_v_n^2 + (V_ flat/tan( incl.)δ_ incl.)^2 + (V_ flatδ_D/D)^2),where the distance D, inclination incl. 
and radius R_25 are taken from <cit.>, and the flat velocity V_ flat evaluated from the rotation curve (see <Ref>). For all galaxies, we assume a ∼20% uncertainty on the distance (for reference,find the distance errors of the SPARC galaxies to fluctuate between 10-30%); furthermore, the error associated to the inclination is taken to be the difference between the inclinations of theand stellar discs. Finally, the error δ_v_n associated to the rotation velocity is estimated at each point n of the rotation curve, and N represents the number of radii at which j_i is evaluated. We note that, since <Ref> uses the optical disc scale length for both the stellar and gas components, and given that theusually extends further than the stars in disc galaxies <cit.>, δ j_ gas could somewhat be underestimated. As such, it must be regarded only as an indication of the uncertainties on . Furthermore, we consider that the baryonic mass of a galaxy is distributed among its two major constituents: the stellar and gas components. In the following, we denote the specific angular momenta of these two components as j_⋆ and j_ gas, respectively. Therefore, the total baryonic angular momentum can be expressed asj_ bar = f_ gas j_ gas + (1-f_ gas) j_⋆where f_ gas = M_ gas/(M_ gas+M_⋆) denotes the galaxy's gas fraction.The gas surface densities in <Ref> are obtained by applying a factor of 1.35 to thesurface densities (i.e, Σ_ gas = 1.35 Σ_) to account for the helium. We ignore the molecular component of the gas since no CO observations could be found for the galaxies. Also, the contribution of the molecular gas to the baryonic angular momentum is expected to be negligible based on previous studies <cit.>. We compute j_ gas by simply substituting therotation velocities and the gas surface densities in <Ref>. Because of the difficulty associated with correctly determining the velocities of the stars, we approximate these to the gas velocities – i.e, v_⋆(r) ≡ v_ gas(r) – and therefore determine j_⋆ using <Ref> with the stellar surface densities derived from the3.4 band photometry (seefor how the 3.4 photometry is used to trace the kinematics of the stellar disc). This approximation holds for massive disc galaxies whose stellar components exhibit regular rotational motions, unlike dwarf galaxies in which random, non-circular motions are significant. On the other hand, since the AMIGA galaxies were selected to have velocities greater than 1500 , very few low-mass galaxies were included in the sample. Specifically for the j-sample, <Ref> shows that all 36 galaxies have stellar masses higher than 10^9 M_⊙, which makes the approximation suited for the present study.§.§ The specific angular momentum of the atomic and stellar discs Mathematically, the specific angular momentum is a combined measure of how large a galaxy is and how fast it rotates. Therefore, large and fast-rotating galaxies are expected to possess a higher specific angular momentum than small, slow-rotating galaxies. On the other hand, early-type spirals are known to be larger and have higher circular velocities than their late-type counterparts, which in turn rotate faster than irregular galaxies.The isolated j-sample is constituted of 36 galaxies of mostly late morphological types (Sa to Irr), dominated by Sb and Sc morphologies (see top panel of <Ref>). 
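For reproducibility, we note that the cumulative integral defining j_i(<R) and the baryonic combination above reduce to simple quadratures over the measured surface-density and rotation-velocity profiles. The sketch below uses a toy exponential disc and a flat rotation curve in place of the measured profiles; it is illustrative only.

```python
import numpy as np

def j_profile(r, sigma, v):
    """Cumulative specific angular momentum j(<R) in kpc km/s.

    r [kpc], sigma (surface density, arbitrary units) and v [km/s] are
    sampled on the same rings; sigma's units cancel in the ratio.
    """
    num_inc = 0.5 * np.diff(r) * (r[1:]**2 * sigma[1:] * v[1:]
                                  + r[:-1]**2 * sigma[:-1] * v[:-1])
    den_inc = 0.5 * np.diff(r) * (r[1:] * sigma[1:] + r[:-1] * sigma[:-1])
    num = np.concatenate(([0.0], np.cumsum(num_inc)))
    den = np.concatenate(([0.0], np.cumsum(den_inc)))
    return np.divide(num, den, out=np.zeros_like(num), where=den > 0)

def j_baryonic(j_star, j_gas, m_star, m_hi):
    """Combine the components, with M_gas = 1.35 M_HI to account for helium."""
    m_gas = 1.35 * m_hi
    f_gas = m_gas / (m_gas + m_star)
    return f_gas * j_gas + (1.0 - f_gas) * j_star

# toy example: exponential disc (R_d = 3 kpc) with a flat 200 km/s curve
r = np.linspace(0.1, 30.0, 300)
print(j_profile(r, np.exp(-r / 3.0), np.full_like(r, 200.0))[-1])  # ~ 2 R_d V ~ 1200
```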
For each of the atomic gas and stellar components of the galaxies in the sample, we show in the bottom panel of the figure the median specific angular momentum plotted as a function of the morphological type T. The T morphologies are referenced from the RC3 scale <cit.>, where T values increase from early to late-type morphologies, such that T=0 corresponds to an S0a type and T=10 indicates an Irr galaxy. As expected, the angular momentum is highest for early-type spirals and decreases towards the late types, until about T ≈ 6-7. The mean j values at the later morphological types (T = 8 & 10) increase, but since they only contain one galaxy each, it is not clear what the actual trend is at these morphologies. A reverse correlation is seen when the specific angular momentum is plotted against the optical radius (B-band isophotal radius at the 25th magnitude taken from ), as seen in <Ref>. As expected,is systematically higher than ; this is because, on average, the gas is distributed at larger radii than the stars <cit.>, and is therefore expected to carry more angular momentum. § THE SPECIFIC ANGULAR MOMENTUM – MASS RELATIONThe current galaxy formation paradigm predicts that both the dark matter halo and baryonic disc acquire their angular momentum through gravitational torques, during the proto-galaxy formation phase <cit.>. The resulting disc, formed via the collapse and condensation of cold gas within the potential wells of the parent halo, ends up with the same specific angular momentum as the halo <cit.>. As a result, it should be expected that the baryonic j behaves as j_ bar∝ M_ bar^2/3, similarly to the DM halo. However, current observations are not consistent with this prediction. As pointed by some studies, not all the baryons carrying angular momentum may condense into the galaxy disc, explaining the discrepancy with the expectation <cit.>. Furthermore, with numerical simulations becoming more available, it has become evident that more mechanisms are at play in the angular momentum acquisition and conservation of discs throughout their lifetime; for example, the different interactions that galaxies undergo with their environment, such as mergers, are capable of affecting their total baryonic angular momentum <cit.>. In this section, we investigate the Fall relation ( vs. ) for the isolated galaxies in the j-sample and perform a comparison with the samples of non-isolated galaxies. §.§ The comparison samples To investigate whether the angular momentum of isolated galaxies behave in a particular way, in the context of galaxy evolution, we compare the AMIGA galaxies to samples of non-isolated galaxies[Since the isolation parameters η_k and Q were not determined for the galaxies making up these samples, we only consider them non-isolated in a statistical sense: the samples may include a few isolated galaxies, but the majority are not more isolated than field galaxies.] found in the literature: the large two samples(114 galaxies) and(157 galaxies) mentioned above, and three moderately small samples from <cit.>, <cit.> and <cit.>.The specific angular momentum as well as mass values are taken from the corresponding studies, which use somewhat similar methods to determine the gas kinematics. ,and <cit.> derive their rotation curves similarly to the method used in the present work, with the difference that <cit.> use the FAT <cit.> package instead of . 
Additionally,add an asymmetric drift term to their rotational velocities to correct for the non-circular motions typically prominent in low-mass galaxies. <cit.> and <cit.> build their rotation curves by fitting a tilted-ring model onto concentric ellipses taken along the spatial extent of thediscs, with the assumption that the rotation curve has a parametric functional form.The 114 galaxies inwere selected from the WesterborkSurvey of Spiral and Irregular Galaxies <cit.>, such that theirradius spans at least five resolution elements in the 30” resolution data, and their inclination between 20 and 80. The sample contains a mix of low, intermediate and high mass galaxies, withand stellar masses spanning about threeand five orders of magnitude, respectively (7.8 < log(M_/M_⊙) < 10.5 & 6.7 < log(M_ star/M_⊙) < 11.5). The stellar component of each of the individual galaxies was traced using 2MASS <cit.> K_s-band photometries (see ). Similarly, thesample was constructed by compiling 157 galaxies from six main sources: 90 spirals from the SPARC catalogue <cit.>, 30 from a sample of spirals by <cit.>, 16 dwarfs from the LITTLE-THINGS sample <cit.>, 14 dwarfs from the LVHIS sample <cit.>, four dwarfs from the VLA-ANGST sample <cit.> and finally three dwarfs from the WHISP sample. To derive the properties of the galaxies' stellar components, the authors made use of either the Spitzer 3.6μ m or the H-band 1.65μ m photometry.Of the above two comparison samples, one (CIG 626) and five (CIG 102, 147, 232, 314 & 604) galaxies respectively from theandsamples are included in the j-sample. In fact, 17 galaxies in each of these two samples are catalogued in the initial CIG sample of isolated galaxies <cit.>, but were discarded from the sample of 950 “high-velocity” galaxies discussed in <Ref> because their systemic velocities are v<1500 . These six galaxies will be discarded from the two samples in the analysis follows. Furthermore, five of the remaining 156 galaxies ofdid not have availablevalues, these were therefore removed from the sample. The final sizes of the samples are thusandgalaxies, respectively forand .Unlike the previous two samples, the last three have sizes about an order of magnitude smaller. The <cit.> sample, containing 16 normal, regularly rotating spiral galaxies, was originally drawn from <cit.>'s sample of galaxies in the Ursa Major region. The stellar component of each of the galaxies in the sample was derived using K-band luminosity profiles. The baryonic masses of the galaxies in the sample range from 9.25 < log(M_ bar/M_⊙) < 11, with only UGC 7089 having a baryonic mass lower than 10^9.6M_⊙ (corresponding to the j-sample's lower limit, see <Ref>).On the other hand, the <cit.> and <cit.> samples are essentially made of dwarf galaxies respectively selected from the nearby Lynx-Cancer void <cit.> and the LITTLE-THINGS sample. <cit.> made use of SDSS <cit.> and PanSTARRS <cit.> g-band luminosities and g-i colours to trace the stellar components of the galaxies, while those of the <cit.> sample were obtained from Spitzer 3.6μ m images. It should be noted that none of <cit.>, <cit.> or <cit.> samples include galaxies from the j-sample. §.§ The Fall relation: isolated vs. non-isolated galaxies In <Ref> we present the total baryonic angular momentumof the AMIGA galaxies, along with a comparison with the non-isolated samples mentioned above: the largerandsamples, and the three smaller samples from <cit.>, <cit.> and <cit.>. 
The <cit.> sample includes the galaxy UGC 8508, which the authors found to be an outlier in the mass-j relation because of its abnormally highfor its modest baryonic mass. Therefore, we accordingly remove UGC 8508 from the angular momentum analyses that follow. The left panel of the figure shows that the galaxies in the AMIGA angular momentum sample havevalues that are similar to those of non-isolated galaxies, with the noticeable difference that they occupy the upper end of the parameter space. A linear regression of the formlog(j_ bar/ kpc km s^-1) = α[log(M_ bar/M_⊙) - 10] + cwas fit to the angular momentum sample and to the two largest samples of non-isolated galaxies using Bayesian inference, specifically aimplementation of the Monte Carlo Markov Chain (MCMC) in the open-source PyMC3[Documentation at https://docs.pymc.io/en/v3/index.html] package <cit.>. The fitting procedure consists of assuming priors for three parameters: the slope α, the intercept c and the intrinsic scatter σ. For the slope and intercept, a gaussian prior with a mean of respectively 1 and 2 and a standard deviation of 4 was used, while for the scatter we chose an exponential prior of coefficient 1. Next, instead of a gaussian distribution, we adopt a Student-t distribution (with a degree of freedom ν for which a half-normal distribution of standard deviation 5 was chosen as prior, see <Ref>) to explore the likelihood. Because of its fatter tails, the t-distribution has the added advantage of minimising the influence of the outliers. Given the modest size of the samples in this study, especially the AMIGA j-sample of 36 galaxies, this distribution proved to be more effective at constraining the free parameters.We obtain a best-fit slope offor the isolated j-sample, about 20% lower than the theoretical slope of ∼ 2/3 predicted in hierarchical models for dark matter (we discuss this in <Ref>). Since we have altered theandsamples, and for consistency, we re-perform linear regression fits on these. As a reminder to the reader, the main changes in the samples are (i) the removal of galaxies that overlap with the j-sample and the inclusion of galaxies previously discarded by , whosevalues are non-converging. The re-derived best fit slopes areand , respectively for theandsamples. For context, the best-fit values of the slope obtained in the previous studies are 0.60±0.02 and 0.55±0.02, respectively for the originalandsamples. As a sanity check, we performed the fit on these original, non-altered samples and found consistent results with the original studies. It should be noted that the authors used a fitting method different than what we adopted here: bothandperformed the fit with the hyper-fit package <cit.>, a tool designed for fitting linear models to data with multivariate gaussian uncertainties.The results of the linear regressions are summarised in <Ref>. The first two columns of the table show respectively the different samples and their sizes, while the last three columns list respectively the slope α, intercept c and intrinsic scatter σ obtained from fitting <Ref> to each of the samples.As mentioned in <Ref>, the isolation criteria adopted in the present analysis are those defined in <cit.>, which were applied on a larger galaxy sample than the study conducted in <cit.> because of the limited SDSS footprint. In fact, <cit.> accounted for the spectroscopic redshift when evaluating the galaxies' isolation parameters, which is not available for a significant subset of the sample. 
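Returning to the regression itself, the robust linear fit described above can be written in a few lines of PyMC3 with exactly the priors listed (Gaussian priors on the slope and intercept, an exponential prior on the intrinsic scatter, a half-normal prior on the Student-t degrees of freedom). The data arrays below are synthetic placeholders standing in for the measured log M_bar and log j_bar values.

```python
import numpy as np
import pymc3 as pm

# synthetic stand-ins for the 36 measurements of log10(M_bar) and log10(j_bar)
logm = np.random.normal(10.3, 0.4, 36)
logj = 0.5 * (logm - 10.0) + 2.9 + np.random.normal(0.0, 0.15, 36)

with pm.Model():
    alpha = pm.Normal("alpha", mu=1.0, sigma=4.0)        # slope
    c     = pm.Normal("c", mu=2.0, sigma=4.0)            # intercept
    scat  = pm.Exponential("scatter", lam=1.0)           # intrinsic scatter
    nu    = pm.HalfNormal("nu", sigma=5.0)               # Student-t dof
    mu    = alpha * (logm - 10.0) + c
    pm.StudentT("obs", nu=nu, mu=mu, sigma=scat, observed=logj)
    trace = pm.sample(2000, tune=2000, target_accept=0.9,
                      return_inferencedata=True)

print(float(trace.posterior["alpha"].mean()))
```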
This results in a very strict definition of isolation, given that the AMIGA galaxies were selected from a previously-built catalogue of isolated galaxies <cit.>. A cross-match between the j-sample and the sample considered in <cit.> results in only 16 galaxies, of which one galaxy (CIG 361) does not meet the isolation criteria. For the sake of a fair comparison, we highlight this stricter sample to the right panel of <Ref> (circled stars and grey dash-dotted line). The slope measured for these galaxies is lower than that of the j-sample, but the uncertainty associated with the fit results, as well as the lack of systematic offset between the lines of best fit, suggests that they do not substantially differ from the j-sample. §.§ The Fall relation: low-mass vs. high-mass galaxies Could the narrower mass range of the AMIGA sample induce discrepancies into the results of the regressions? To probe this, we applied a lower cut of log(M_ bar/M_⊙) = 9.6 – corresponding to the lower mass limit of the isolated j-sample – on the baryonic mass of theandsamples. This resulted inandgalaxies, respectively for theandsamples (see <Ref>). A linear regression fit on these new, high-mass samples is shown in the right panel of <Ref>: while the slope of thesample has almost remained constant, that of thesample has decreased fromto . Overall, the best-fit lines of these samples remain below that of the j-sample. This is further seen in the distributions on the marginal plots of the panel, which show that the j-sample's averagevalue is higher than those of the two non-isolated samples, for similar baryonic mass distributions.The change in slope in thesample is likely due to the presence of dwarf galaxies in the sample. In fact, <cit.> found that the angular momentum of dwarf galaxies is higher than what would be expected from the extrapolation of the – relation for more massive galaxies. This suggests that the relation is a broken power law having a higher slope at the low-mass end of the relationship than at the higher-mass end. To investigate this, we derive the slope of the low-mass population of theandsamples, and for the <cit.> and <cit.> samples of dwarf galaxies: the results of the regressions are outlined in <Ref>. As the table shows, the slopes of the low-mass populations are systematically higher than those of the high-mass populations.To ensure that this is not an effect of the non-converging galaxies, we have restricted the analysis to only the converging galaxies ofsample for which convergence analysis is available; we identify 105 out of 151 galaxies meeting the convergence criterion set by the authors. Of the 46 non-converging galaxies, 80% (i.e., 37 galaxies) are low-mass galaxies according to the mass criterion set above. We found slopes of α_- = 0.67±0.06 and α_+ = 0.55±0.06, respectively for the low-mass and high-mass galaxies of the sample. Although these slopes agree within the errorbars, their consistency with the results of <Ref> argues in favour of the broken power law hypothesis. §.§ The Fall relation: gas vs. stellar discsHow is the angular momentum distributed among the galaxies' main components? In <Ref> we present the specific angular momentum as a function of the mass, separately for the gas and stellar components and colour-coded by their gas fraction f_ atm (fraction of atomic gas to total baryonic mass). For comparison, we overlay on the figure the samples of , <cit.> and <cit.>. 
As expected, the galaxies in the AMIGA angular momentum subset sit on the high-mass end with respect to the non-isolated galaxies, both in terms ofand stars. Furthermore, in terms of gas fraction, two striking trends appear for all the galaxy samples, consistently with results from <cit.>: at fixed gas mass, galaxies with hightend to have a low gas fraction; conversely, at fixed stellar mass, galaxies with hightend to have a high gas fraction. On the other hand, while the j values of the gas component of the AMIGA galaxies seem to agree with those of the non-isolated samples, we note that their stellar component presents a different trend: thevalues of most AMIGA galaxies are among the highest at a given stellar mass.By evaluating the deviations of the j-sample galaxies from the line of best fit, for each of the stellar and gas relations, and setting a maximum scatter of 2σ_i from each relation (where σ_i is the standard deviation of the scatters around the line of best fit for component i), we identify two outliers in each of the panels: CIG 188 & 232 for thedistribution, and CIG 85 & 744 for thedistribution. We find that the two galaxies with abnormally lowvalues (marked with red circles in the figure) present “normal”values with respect to the rest of the samples. Similarly, the galaxies with atypically highvalues (marked with red diamonds) exhibit “normal”values. To ensure that these galaxies are not outliers because of technical biases, we compared their angular resolutions (listed in <Ref>) to the rest of the sample and found that they are not particularly less resolved than the galaxies in the angular momentum plane.Could the outlier galaxies either have an excess in their gas content (for CIG 188 & 232) or a deficit in their stellar mass (for CIG 85 & 744)? To address the first part of the question, we compare the distribution of theand stellar masses of the galaxies in the j-sample to trends found in <cit.>, for a larger AMIGA sample, and in <cit.>, for a sample of spirals (<Ref>). The scaling relation of <cit.> was derived by fitting a linear relation to 544 AMIGA galaxies of high-qualityprofiles, selected from the Verley07b sample and whose stellar masses were estimated from mid-infrared WISE photometry. As for the <cit.>'s scaling relation, it was obtained from a sample of 600 optically-selected spiral galaxies of redshift z≤0.01, with a completeness of 99%. Similarly to <cit.>'s sample galaxies, the stellar masses of the spirals in <cit.> were also measured from WISE bands photometry. The figure shows that most j-sample galaxies sit above both relations although, as expected, an important fraction falls in the region prescribed by the <cit.> relation. We particularly note that CIG 188 presents an average gas mass, while CIG 232 shows a high gas content with respect to its stellar mass. Furthermore, an inspection of the rotation curves of these galaxies in <Ref> (and also in <Ref> discussed in the next section) shows that CIG 188 presents very low rotation velocities, with a maximum as low as ∼40 . These suggest that the deviation of CIG 232 from theplane could be caused by an excess in itsmass, while that of CIG 188 is likely due to its slow rotation. Since themasses presented here are measured from single-dish observations <cit.>, this implies that CIG 232 contains a significant amount of low-density gas in its outer regions that is not seen in its kinematic maps. 
Thetotal massderived from the galaxy's integrated map reveals that 33.2% of its single-dish flux is not recovered by the interferometric observations, higher than the median of (19.0±5.1)% for the entire j-sample.The missing gas could be in the form of faintenvelopes, similar to that around M83 <cit.>; this is all the more possible since these galaxies are isolated, hence have fewer chances of seeing their envelopes disrupted.Regarding CIG 85 & 744, <Ref> shows that these galaxies have significantly higher gas masses for their low stellar masses – they are, in fact, among the the galaxies with the lowest stellar masses. This makes them the highest gas fractions (f_ atm>0.8) in the j-sample. Two possibilities arise for these galaxies: their deviation from the -M_⋆ relation is either caused by their low M_⋆ (horizontal deviation), or by their highvalues (vertical deviation). CIG 85 and 744 are respectively classified as an irregular and a late-type spiral, with CIG 744 hosting an AGN in its centre <cit.>. Furthermore, CIG 85 presents highly disturbed optical andmorphologies, leading <cit.> to argue that the galaxy may have undergone minor mergers in the recent past. A vertical shift could be explained by the galaxies' high gas fractions, which confer them highvalues <cit.>. The particularly low inclination of CIG 85 (∼16) could also lead to an overestimation of its rotation velocity, pushing the galaxy upwards in the j plane. On the other hand, one could be tempted to attribute the horizontal shift to the lower mass-to-light ratio adopted for these galaxies' types (see <Ref>). We note that independent measurements in the literature quote a maximum stellar mass of 2.4×10^9 M_⊙ <cit.> and 1.5×10^9 M_⊙ <cit.> respectively for CIG 85 and 744, consistent with the values measured in this work. Furthermore, for these galaxies to fall on the relation at thesevalues, their M/L_W1 values would have to be increased to respectively 4.5 and 1.7, which is much higher than allowed. This therefore discards the second possibility, allowing us to conclude that these two outliers possess a lot more stellar angular momentum for their optical size, possibly as a result of their high gas fractions.§ DISCUSSIONThe AMIGA sample is, unlike field galaxies, a nurture-free sample in the sense that it is constituted of galaxies that have not undergone any major interaction in the past ∼3 Gyr <cit.>. Therefore, it represents a reference for evaluating the effects of the environment on the angular momentum. Under this consideration, the results of <Ref>, showing that the AMIGA galaxies possess higher j, further support the initial hypothesis: isolated galaxies contain higher angular momentum than their non-isolated counterparts, mainly because the effects of environmental processes on their kinematics are less important. In other words, they still possess a higher fraction of their initial angular momentum because they have undergone fewer major interactions than their counterparts in denser environments. To further demonstrate this, we present in <Ref> the distribution of the orthogonal deviation[the non-vertical, perpendicular deviation of a given data point from the j-sample line of best fit, measured as the separation between the point and the best fit line.] offrom the j-sample's scaling relation, for all the comparison samples. Most galaxies in the samples havesuch that -0.5 ≤Δ j_ bar_⊥≤ 0.0, with only a low fraction (24%) of galaxies having Δ j_ bar_⊥ > 0. 
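The orthogonal deviations discussed in this section are simply the perpendicular distances, in the log–log plane, between each galaxy and the best-fitting line. A minimal sketch follows; the slope and intercept used below are placeholders for the actual j-sample fit, and the 2σ flag is only a crude illustration of the outlier criterion.

```python
import numpy as np

def delta_j_perp(logm, logj, alpha, c):
    """Signed orthogonal offset from log j = alpha*(log M - 10) + c, in dex."""
    resid = logj - (alpha * (logm - 10.0) + c)
    return resid / np.sqrt(1.0 + alpha**2)

# toy values; alpha and c stand in for the best-fit parameters
logm = np.array([9.7, 9.8, 10.2, 10.6])
logj = np.array([2.2, 2.7, 3.1, 3.4])
dj = delta_j_perp(logm, logj, alpha=0.5, c=2.9)
flag = np.abs(dj - dj.mean()) > 2.0 * dj.std(ddof=1)   # crude 2-sigma outlier flag
print(np.round(dj, 2), flag)
```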
The right panel of the figure shows the same deviation Δ j_ bar_⊥, but plotted as a function of the baryonic mass. As expected from <Ref>, the deviation is higher for low-mass galaxies ( andsamples), with a general trend of Δ j_ bar_⊥ increasing with the baryonic mass from -0.6 dex to about 0.2 dex. Particularly, the median Δ j_ bar_⊥ values of theandgalaxies agree within their errorbars and are on average negative throughout the probed baryonic mass range. This further supports the hypothesis that the specific angular momentum of the isolated AMIGA galaxies is overall higher than those of the non-isolated galaxies. We also show the deviation of the isolated j-sample from the linear fit, exhibiting a significant scatter around the best fit line; obviously (and as expected), the deviation is among the largest for the outlier galaxies. To quantify the amplitude of the deviations, we present in <Ref> the standard deviation of the distributions of Δ j_ bar_⊥ for each of the samples. The distribution of Δ j_ bar_⊥ for the j-sample has a standard deviation (0.17 dex) larger than those of theand <cit.> samples, respectively. However, after removing the four outliers from the sample, the standard deviation of Δ j_ bar_⊥ decreases to 0.14 dex, the lowest of all six samples. This further shows that the large scatter in the j-sample is caused by the four galaxies (11% of the sample) exhibiting either peculiar rotational velocities or a high gas content. However, this low scatter remains mathematically consistent with what is expected from the removal of the outliers, and is not significantly lower than that of the comparison samples. This is inconsistent with previous studies conducted on the properties of the galaxies in the AMIGA sample, finding that their parameters sensitive to environmental processes tend to present more uniform values <cit.>. This suggests that the AMIGA galaxies could present a larger diversity in their kinematics than previously thought.The stellar masses of the j-sample galaxies were derived from their W1 magnitudes, with a mass-to-light ration of M/L_W1 = 0.35±0.05. As noted in <Ref>, this is 30% lower than the values adopted in , the largest comparison sample used in this work. To ensure that the observed highervalues are not solely due to the difference in the M/L values, we perform a test by adopting the same M/L as in . In <Ref> we show the resulting - relation, where only the stellar masses of the j-sample galaxies were altered. The change in the M/L_W1 value leads to an average increase of (0.11±0.01) dex in the baryonic masses of the galaxies, making the new line of best fit shift downward by (0.02±0.03) dex on average. However, the previously observed trend remains:is higher for the isolated galaxies than their non-isolated counterparts. §.§ AMIGA galaxies in the baryonic Tully-Fisher planeA simpler way to evaluate the normality of the rotation of disc galaxies is through the empirical baryonic Tully-Fisher relation (BTF), that links a galaxy's rotation velocity to its baryonic mass. Originally established between optical luminosity and the velocity width <cit.>, the BTF was later translated into a tight linear relation between the flat part (V_ flat) of a disc rotation curve and its total baryonic mass over several mass orders of magnitude <cit.>. 
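A simple way of extracting V_flat from a resolved rotation curve, illustrative only and not necessarily the specific algorithm adopted later in this section, is to average the outermost points for as long as they agree to within a few per cent:

```python
import numpy as np

def v_flat(v, tol=0.05):
    """Average the outer rotation curve while successive points stay within `tol`."""
    v = np.asarray(v, dtype=float)
    if abs(v[-1] - v[-2]) / v[-2] > tol:
        return np.nan                      # outer curve not flat
    flat = [v[-1], v[-2]]
    for vi in v[-3::-1]:                   # walk inwards
        if abs(vi - np.mean(flat)) / np.mean(flat) > tol:
            break
        flat.append(vi)
    return float(np.mean(flat))

print(v_flat([60, 120, 170, 195, 205, 207, 206]))   # -> 206.0 km/s
```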
The BTF relation has widely been investigated and constrained in the literature, with authors often describing the rotation of galaxies by either theline width or V_ flat <cit.>.In order to test whether the galaxies in the j-sample exhibit different rotation patterns than their non-isolated counterparts, we investigate their positions in the BTF plane. If they possess peculiar rotation velocities, they should stand out in the BTF plane. In other words, if their observed highvalues stem from overestimated rotation velocities, they should deviate from the BTF relation.From the rotation curves of the galaxies, we determine V_ flat using the algorithm adopted in <cit.> and described in <Ref>. All galaxies in the sample reach the flat part of their rotation curve, except CIG 463 & 571 who seem to be still rising. Nonetheless, <Ref> shows that these two galaxies lie within one standard deviation of the BTF relation derived by <cit.>. It is, however, worth mentioning that an inspection of CIG 571's PV diagram (<Ref>) indicates a possible overestimation of its rotation velocities in its outer regions, possibly worth investigating further with deeper data. On the other hand, CIG 329 exhibits peculiar rotation velocities in the external regions, with the outermost part of its curve hinting increasing velocities. This is likely caused by the complex kinematics of the galaxy. In fact, itsmaps and PV diagram reveal a severe warp in both sides of the galaxy'sdisc, described by <cit.> as symmetric and extreme. Although the best kinematic model for CIG 329's disc was yielded by a constant inclination, we do not discard the possibility that, in reality, the observed warps induce variations in the inclination of thedisc. As shown in <Ref>, the j-sample galaxies do not particularly favour high rotation velocities; instead, most sample galaxies are roughly evenly spread across the BTF relation, occupying both sides of the relation. §.§ Comparison with the CDM modelThe ΛCDM cosmology predicts that the baryonic specific angular momentum can be written <cit.>:j_ bar/(10^3kpc km s^-1) = k_ f [M_ bar/(10^10 M_⊙)]^2/3where the coefficient k_ f = 1.96 λ f_j f_M^-2/3 is function of the halo spin parameter λ, the baryon-to-halo specific angular momentum fraction f_j and the baryon-to-halo mass fraction f_M. <cit.> made different considerations to approximate the values of the parameters; namely, the authors adopt λ≈ 0.04±0.02 (independent of the halo mass) from N-body simulations <cit.>, f_j ≈ 1.0±0.5 based on simulations of Milky Way-like galaxies <cit.> and lastly f_M ≈ 0.05. These values constrain the coefficient k_ f to vary between 0.14 and 1.3, allowing to visualize the shape of – relation as predicted by the model.We show in <Ref> a comparison of the AMIGA angular momentum sample's – relation to the DM-rescaled model. The width of the model is determined by the value of the factor k_ f of <Ref> which, as noted above, varies between 0.14 and 1.3. Most AMIGA galaxies lie within the range predicted by the model, with the exception of two galaxies at the lower mass end (CIG 85 & 744), previously found to have higher-than-averagevalues (see <Ref>). Furthermore, as noted in <Ref>, the slope of the AMIGA – relation is shallower than the theoretical prediction of α∼ 2/3 by about 22%. 
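Two remarks may help the reader reproduce this comparison. First, the M^2/3 scaling follows from the usual spin-parameter argument: writing j_DM = √2 λ V_vir R_vir with V_vir = (G M/R_vir)^1/2 and M ∝ R_vir^3 at fixed mean overdensity gives j_DM ∝ λ M^2/3, so α = 2/3 as long as λ is mass-independent. Second, the range spanned by the model follows directly from the quoted parameter values; a minimal numerical sketch, with the extreme parameter combinations chosen to bracket k_f between ∼0.14 and ∼1.3, is:

```python
import numpy as np

def j_cdm(mbar_msun, lam=0.04, f_j=1.0, f_m=0.05):
    """Predicted j_bar [kpc km/s]: 10^3 * k_f * (M_bar/10^10 Msun)^(2/3)."""
    k_f = 1.96 * lam * f_j * f_m ** (-2.0 / 3.0)
    return 1e3 * k_f * (mbar_msun / 1e10) ** (2.0 / 3.0)

mbar = 10 ** np.linspace(9.6, 11.0, 4)
lower = j_cdm(mbar, lam=0.02, f_j=0.5)   # k_f ~ 0.14
upper = j_cdm(mbar, lam=0.06, f_j=1.5)   # k_f ~ 1.3
print(np.round(lower, 0), np.round(upper, 0))
```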
It is worth mentioning that the model neglects the dependency of f_j and f_M with the halo mass, and therefore gives a theoretical prediction independent of the actual baryonic mass.§.§ On the relation between j and galaxy isolationOne of the major results of this work is that the angular momentum of isolated galaxies, even those of the high-Q sample (<Ref>) considered as not strictly isolated, is on average higher than that of their non-isolated counterparts. This hints that the environment might play an important role in removing the angular momentum of galaxies through interactions. However, a question that remains unanswered is whether the position of a galaxy in the angular momentum space is correlated with its degree of isolation. In other words, do more isolated galaxies have lower angular momentum than their less isolated counterparts? To investigate this, we separate the isolated j-sample into three subgroups: in the first, we consider the galaxies having less than two neighbours in the <cit.> catalogue and, as a consequence, have undetermined η_k values. We identify seven galaxies in this category, including CIG 188, which was previously found to have abnormally lowvalues. The second and third bins comprise respectively the low-Q and high-Q subsamples defined in <Ref>. We colour-code these with the said isolation parameters in each of the two panels of <Ref>. With the exception of CIG 102 and 626 (labelled with black empty squares in the figure), the high-Q sample is by definition less isolated than the low-Q sample, with the first bin (of zero or one neighbour) containing the most isolated galaxies. These two exceptions are classified as less isolated because, although they have less than two neighbours, their tidal force parameter is Q>-2. In <Ref>, the distribution of galaxies of the different subgroups in the parameter space shows no correlation between the angular momentum and either of the isolation parameters η_k and Q. This is translated by the absence of any clear trends observed between the three subgroups. In particular, the most isolated subgroups (the low-Q and few neighbours subgroups) do not appear to have the highest angular momentum of the sample, nor do they present a distinct trend in the parameter space; instead, they are “randomly” distributed along the - line of best fit. Moreover, for a more complete analysis, we consider the more robust, three-dimensional definition of the isolation parameters in <cit.>. In some cases, the isolation parameters derived by <cit.> significantly differ from those of <cit.> due to both the differences in the search for neighbours and the evaluation of exerted force parameter Q. The 16 galaxies in the j-sample whose isolation parameters were evaluated by the authors are plotted in the lower panel of the <Ref>. Similarly to the top panels, no clear trend is found in the distribution of the isolation parameters along the j plane. These results, combined with the fact that isolated galaxies possess a higher angular momentum with respect to their non-isolated counterparts, suggest that there is a threshold density beyond which the effects of interactions become important in removing the angular momentum in galaxies. This implies that minor interactions between a galaxy and its neighbours will not considerably remove its angular momentum, unless the tidal forces that it experiences are important enough. 
Likewise, if the galaxy resides in a low-density environment, the effects of that environment on its angular momentum will not be significant. However, given the reduced size of the sample used in the present work and the modest robustness of the isolation parameters (see discussion in the appendix of ), further investigation is necessary to confirm the existence of such a threshold density. New and upcoming surveys with the MeerKAT <cit.> and ASKAP telescopes <cit.> will offer the possibility to investigate this by targeting galaxies in a wide range of environments. §.§ The disc stability of AMIGA galaxies The stability of galaxy discs is an important parameter in their ability to form stars. In fact, both numerical simulations and observations argue that unstable galaxy discs are more likely to host higher star formation rates than their more stable counterparts <cit.>, since star formation is thought to be provoked by the collapse of the neutral gas, which is converted into stars via molecular gas. A widely accepted method of quantifying the stability of a galaxy disc is the so-called Toomre criterion <cit.> for an axisymmetric rotating disc, which predicts that a disc is locally stable only if the pressure gradient at small scales is large enough to overcome the large-scale centrifugal forces. For a galaxy disc of neutral atomic gas, the criterion is expressed through the Toomre parameter Q_ atm = κ σ_ atm/(π G Σ_ atm), where κ is the local epicyclic frequency, G the gravitational constant, and σ_ atm and Σ_ atm are respectively the local radial velocity dispersion and local surface density of the atomic gas. A stable, poorly star-forming galaxy disc is such that Q_ atm > 1, whereas Q_ atm < 1 corresponds to a more efficiently star-forming, unstable disc. Building on this, <cit.> introduced a dimensionless global disc stability parameter, a function of the specific angular momentum, the velocity dispersion and the mass of the disc: q = j_ bar σ_ atm/(G M_ bar). Later, <cit.> established that the atomic gas fraction f_ atm varies with the global parameter q, such that f_ atm = min{1, 2.5q^1.12}. Interestingly, they also found that galaxies from various samples and including different morphologies tend to follow the model-based predictions of the f_ atm = f(q) relation. We overlay in <Ref> the AMIGA galaxies as well as the non-isolated samples on the model of <cit.>. Based on the discussion therein and for consistency in the comparisons, we adopt σ_ atm = 10 km s^-1. We also show in the figure the value q = (√(2) e)^-1, which <cit.> worked out to correspond to Q_ atm≈ 1, that is, the theoretical value at which galaxy discs turn from unstable to stable. Although the AMIGA galaxies seem to follow the trend of the theoretical model, it is interesting to note that more than half of them (19 out of 36) have atomic gas fractions higher than what the model predicts, with f_ atm values beyond the 40% margin allowed by the model. This is intriguing since it suggests that these galaxies have larger reservoirs of atomic gas than their angular momentum allows. In other words, their stability parameter q is too low for their gas content, locating them on the left-hand side of the stability line where their gaseous discs are predicted to be unstable. Viewed from this angle, a large fraction of the galaxies in the isolated j-sample (especially those at lower q values) can be interpreted as discs susceptible to collapsing on short timescales to form stars <cit.>.
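For concreteness, the global stability parameter q and the predicted gas fraction can be computed as in the following Python sketch; the numerical value adopted for G and the example galaxy are assumptions made for the illustration, while σ_atm = 10 km/s follows the choice in the text.

import numpy as np

G_KPC = 4.301e-6   # gravitational constant in kpc (km/s)^2 / M_sun (assumed value)

def stability_q(j_bar, m_bar, sigma_atm=10.0):
    # q = j_bar * sigma_atm / (G * M_bar); j_bar in kpc km/s, m_bar in M_sun, sigma in km/s
    return j_bar * sigma_atm / (G_KPC * m_bar)

def f_atm_predicted(q):
    # predicted atomic gas fraction: f_atm = min(1, 2.5 q^1.12)
    return np.minimum(1.0, 2.5 * np.asarray(q) ** 1.12)

# Example: a galaxy with j_bar = 1000 kpc km/s and M_bar = 3e10 M_sun
q = stability_q(1.0e3, 3.0e10)                       # ~0.08
stable = q > 1.0 / (np.sqrt(2.0) * np.e)             # compare with the stability threshold
print(q, f_atm_predicted(q), stable)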
In light of all the assumptions made above, it is likely that the model is not perfectly suited for highly isolated galaxies like those in the AMIGA sample, although <cit.> found it to describe well moderately isolated galaxies such as those of the THINGS and HIPASS <cit.> samples. In either case, the high gas content of the AMIGA galaxies for such moderate q values forces us to consider the possibility that many of these galaxies are (or have been) accreting a significant amount of gas in a recent period of their evolution. This is further supported by a comparison of their gas fraction with the other samples: they exhibit higher f_ atm values compared to all other five samples, consistent with the results of <Ref> and those found in <cit.>. Given the high level of isolation of AMIGA galaxies, such accretion would most likely happen through gas infall from the intergalactic medium, as opposed to accretion through galaxy mergers. Currently, the best direct method to obtain evidence of accretion is through high-sensitivity mapping of these galaxies in search of companion clouds, extra-planar gas or extended warps <cit.>. High-sensitivity data combined with existing multi-wavelength data will allow us to further investigate this in the future. The two galaxies noted in <Ref> (CIG 85 & 744) as having low stellar masses with respect to their angular momentum appear to be among the few “stable” discs in the isolated j-sample, located near the region populated mainly by dwarf galaxies. However, they are not the least massive galaxies of the sample, nor are they dwarfs. We argue that their location in the parameter space is simply a direct consequence of their position in the - relation: they exhibit a high baryonic angular momentum for an intermediate baryonic mass. As for the outliers exhibiting lower (red circles in the figure), they are the most discrepant galaxies with respect to the <cit.> model. As discussed above, these galaxies could be candidates for galaxies which have recently experienced gas accretion. § SUMMARY We have investigated the behaviour of the angular momentum of isolated galaxies through the j-mass relation, using 36 galaxies drawn from the AMIGA sample. The aim of this study was to highlight the effects of the environment on the amount of baryonic angular momentum in galaxies, particularly testing whether interactions can remove galaxies' angular momentum as expected from our current understanding of galaxy evolution. In other words, we aimed to investigate whether isolated galaxies retain a higher fraction of their angular momentum, which would translate into these galaxies having higher j values than their non-isolated counterparts. The main results of this work are as follows: * At a fixed baryonic mass, the isolated galaxies of the AMIGA sample possess a higher specific angular momentum than their non-isolated counterparts (see <Ref>). This constitutes direct evidence of the role of the environment in removing angular momentum from galaxies, predicted by numerical simulations <cit.>. In fact, galaxies in the AMIGA sample have, in theory, not undergone any major galaxy-galaxy interactions during the last ∼3 Gyr <cit.>, reducing their loss of angular momentum with respect to interacting galaxies.* High baryonic mass galaxies (≳10^9 M_⊙) are best fitted with a shallower power law compared to their lower mass counterparts.
Consequently, lower mass galaxies (≲10^9 M_⊙) exhibit a steeper power law; this change of slope is consistent with the broken power-law relation found by previous studies <cit.>, where the angular momenta of low-mass galaxies deviate from the extension of the j-mass relation of more massive spirals.* For the atomic gas component of all galaxies considered in this study (isolated and non-isolated), the specific angular momentum of gas-rich galaxies decreases with increasing gas fraction, for a fixed gas mass (<Ref>). The reverse trend is seen for the stellar component, with the specific angular momentum increasing with the gas fraction at a given stellar mass. This is a consequence of the - relation: at a fixed gas mass, gas-poor galaxies are more massive (in terms of baryons) than their gas-rich counterparts whereas, at a fixed stellar mass, it is the opposite.* Most AMIGA galaxies included in this study agree with the DM-rescaled model in the - plane, although their power-law slope () is ∼30% lower than the predicted slope (<Ref>). However, effectively testing whether isolated galaxies agree in general with the DM-rescaled model requires not only a broader range of baryonic mass, but also a tighter constraint on the width of the f_ atm = f(q) model of <cit.>. We also find that all strictly isolated galaxies (i.e., galaxies with no identified neighbour in the optical) lie within the range predicted by the DM-rescaled model. However, no clear correlation was found between the position of the AMIGA galaxies on the - relation and either of the η_k and Q isolation parameters.* Four isolated galaxies were found to exhibit abnormal amounts of stellar or gaseous angular momentum (<Ref>). The analysis of the kinematics and gas content of these galaxies shows that three possess high gas contents, while the fourth presents significantly low rotation velocities. These results, particularly the discrepancy between the AMIGA and non-isolated samples in the - plane (see <Ref>), provide clear evidence of the role of the local environment in removing angular momentum from galaxies, as suggested by previous studies <cit.>. However, one limitation of the present study is the lack of investigation of individual environmental processes that might affect the total angular momentum of the sample galaxies. For example, processes such as galactic winds and cold mode accretion are predicted to increase angular momentum <cit.>. Accounting for these individual processes, as well as targeting isolated galaxies of lower baryonic masses, are interesting avenues for future studies. Furthermore, one simplification made in this study was to approximate the circular velocities of the stars by those of the gas. Although this approximation is appropriate for the large baryonic masses of the studied galaxies, the discrepancy seen in some outliers could be resolved by independently measuring their stellar velocities from spectroscopic IFU observations. § ACKNOWLEDGEMENT We thank the anonymous reviewer for their valuable suggestions and constructive feedback on the manuscript, which helped to improve the quality and clarity of the paper.
This work used the Spanish Prototype of an SRC <cit.> service and support funded by the Spanish Ministry of Science and Innovation (MCIN), by the Regional Government of Andalusia and by the European Regional Development Fund (ERDF). AS, LVM, KMH, JG and SS acknowledge financial support from the grant SEV-2017-0709 funded by MCIN/AEI/10.13039/501100011033. AS, LVM, JG and SS received further support from the grants RTI2018-096228-B-C31 and PID2021-123930OB-C21 funded by MCIN/AEI/10.13039/501100011033, by “ERDF A way of making Europe" and by the European Union. Lastly, part of the work of LVM, JG and SS was funded by the IAA4SKA grant (Ref. R18-RT-3082) from the Economic Transformation, Industry, Knowledge and Universities Council of the Regional Government of Andalusia and the European Regional Development Fund from the European Union. § DATA AVAILABILITY The data underlying this article are available in <Ref> of this article and in the online supplementary material. The datacubes and kinematic products are available on request. § POSTERIOR DISTRIBUTION OF THE FIT PARAMETERS To determine the best-fit parameters for the j-sample and the other small-size samples in this study, we have made use of the Student-t distribution, with probability density function p(y|μ,σ,ν) = Γ((ν+1)/2)/[√(νπ) Γ(ν/2)] (1/σ)[1 + ((y-μ)/σ)^2/ν]^-(ν+1)/2, where μ, σ and ν respectively represent the mean, standard deviation and degrees of freedom; the Gamma function is written as Γ(x) = ∫_0^∞ t^x-1 e^-t dt = (x-1)Γ(x-1). As noted in <cit.>, the Student-t distribution is a general, more flexible form of the Gaussian distribution, with the additional parameter ν. Besides maintaining the advantages of Gaussian distributions, Student-t processes were shown to provide more robust results when accounting for outliers <cit.>. In practice, the fitting method is as follows: - the regression coefficients α and c of <Ref> were given Gaussian priors of standard deviation 4 and centres 1 and 2 respectively, i.e., α∼𝒩(1,4) and c ∼𝒩(2,4); - the distribution of the vertical intrinsic scatter σ was modelled by an exponential prior of coefficient 1: σ∼ Exp(1); - we chose a half-normal distribution of standard deviation 5 for the degrees of freedom: ν∼ℋ(5). This parameter essentially sets the extent of the distribution's tails, with ν=1 corresponding to the heaviest tails while ν→∞ converges to a normal distribution. By choosing a half-normal prior, we allow the tails of the likelihood to be heavier than those of a normal distribution, hence accounting for the outliers in the data; - next, the likelihood of the logj_ bar values is modelled with a Student-t distribution as defined in <Ref>, with mean μ = α (logM_ bar - 10) + c, standard deviation σ and degrees of freedom ν; - finally, 4000 Markov chain samples are drawn to determine the posterior, from which the best-fit values of the regression parameters are derived. It is worth noting that the measurement uncertainties were not accounted for in the definition of the likelihood. In principle, this does not significantly impact the regression; however, it can potentially cause the vertical intrinsic scatter σ to be overestimated. <Ref> shows the posterior distributions of the j-sample regression, for each of the regression parameters: the slope α =, intercept c = () dex and degree of freedom ν =, along with the vertical intrinsic scatter σ=() dex.
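As an illustration of the procedure described above, the following Python sketch reproduces the prior and likelihood choices; the use of the PyMC library, the synthetic stand-in data and the sampler settings are assumptions made here for the example and are not taken from the paper.

import numpy as np
import pymc as pm

# Synthetic stand-in data: x = log10(M_bar) - 10, y = log10(j_bar)
rng = np.random.default_rng(0)
x = rng.uniform(-1.5, 1.0, 36)
y = 0.55 * x + 2.9 + rng.normal(0.0, 0.2, x.size)

with pm.Model():
    alpha = pm.Normal("alpha", mu=1.0, sigma=4.0)       # slope prior, N(1, 4)
    c = pm.Normal("c", mu=2.0, sigma=4.0)               # intercept prior, N(2, 4)
    sigma = pm.Exponential("sigma", lam=1.0)            # intrinsic scatter prior, Exp(1)
    nu = pm.HalfNormal("nu", sigma=5.0)                 # degrees-of-freedom prior
    mu = alpha * x + c                                  # mean relation
    pm.StudentT("logj", nu=nu, mu=mu, sigma=sigma, observed=y)
    trace = pm.sample(draws=4000, chains=4)             # posterior draws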
All parameters exhibit unimodal distributions around their mean values, weighing in favour of the robustness of the obtained values. For comparison, we re-performed the regression by modelling the likelihood with a normal distribution (instead of a Student-t distribution): we obtained α_𝒩 = 0.52 ± 0.09, c_𝒩 = (2.96 ± 0.06) dex and σ_𝒩 = (0.20 ± 0.03) dex. These values are consistent with the above results, although we note that the associated intrinsic scatter is ∼18% larger than the previous one. § CONVERGENCE OF ANGULAR MOMENTUM The analysis conducted in this paper included all galaxies from the j-sample, with no consideration of the convergence of their baryonic angular momentum, so as not to discriminate against any particular type of galaxy. In this section we distinguish between converging and non-converging galaxies following the criteria in <cit.> and consider a galaxy converging when (i) its outermost values differ by less than 10% and (ii) the slope of the profile in logarithmic space is lower than half. That is, a galaxy is deemed converging when [j_ bar(<R_N) - j_ bar(<R_N-1)]/j_ bar(<R_N) < 0.1 and ∂logj_ bar(<R)/∂logR < 1/2, with R_N-1 and R_N the last two radii of the profile. Of the 36 galaxies in the j-sample, only 13 fulfill the above convergence criteria. As shown in <Ref>, these galaxies do not occupy a preferred position in the angular momentum space. They span the same range of baryonic masses as the non-converged galaxies and their distribution seems random. In particular, these converged galaxies do not feature among the highest-j galaxies and their line of best fit is consistent with that of the overall j-sample: above the converged galaxies of the sample. This implies that the higher j values observed in this work are independent of the convergence criteria. § TABLE OF ANGULAR MOMENTUM VALUES § MOMENT MAPS Each row of <Ref> contains the moment maps and position-velocity diagram of a CIG galaxy of the j-sample. The left panel shows the integrated maps as contours overlaid on DSS2 r-band images. The CIG ID is given in the top right corner, the lowest column density contour level (taken at 3σ) in the top left corner, the telescope whose data was used in the bottom left corner and a representation of the beam in the bottom right corner. The scale is also shown in the bottom center of the panel. The contours increment as 3σ×2^n with n=0,2,4,…. The middle panel shows the velocity fields obtained from the first moments, with the velocity values given by the horizontal bar above the panel. Finally, the rotation curve (red circles) is overlaid on the position-velocity diagram in the right panel. The blue contours represent the data, the red contours the model and the thick gray contours the mask within which the model was computed. The figure only includes five selected galaxies; the full sample is shown in the online supplementary material. § ROTATION CURVES <Ref> shows the variations of the orientation parameters (the inclination and position angle) and the optical and surface density profiles for the five galaxies included in <Ref>. The full sample is given in the online supplementary material. In <Ref> we show the rotation curves of all galaxies in the j-sample. The horizontal dashed line denotes the average velocity V_ flat along the flat part of the rotation curve.
V_ flat is estimated following the method prescribed in <cit.>; that is, starting at the outermost radius N of the rotation curve, we evaluate the mean velocity V̅ = (V_N + V_N-1)/2. As long as the velocity of the next point N-2 is such that |V_N-2 - V̅|/V̅≤ε (where ε = 0.1 is the maximum variation allowed in the flat part), the iteration continues to the next point and so on. When the above condition breaks, we take V_ flat = V̅, and estimate its error as δ_V_ flat = √((1/N)∑_n^Nδ_V_n^2 + (V_ flat/tan( incl.) δ_ incl.)^2 + δ^2_V̅), which is a function of the inclination incl.
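For concreteness, a minimal Python sketch of this iterative V_flat estimate is given below; it interprets the running mean as being updated with each accepted point and, for brevity, omits the inclination term of the error budget, so it is a sketch under those assumptions rather than the exact implementation used in the paper.

import numpy as np

def estimate_vflat(v, dv, eps=0.1):
    # v, dv: rotation velocities and their uncertainties, ordered from the
    # innermost to the outermost radius [km/s]
    v = np.asarray(v, dtype=float)
    flat = [v[-1], v[-2]]                  # start from the two outermost points
    vbar = np.mean(flat)
    for k in range(len(v) - 3, -1, -1):    # walk inwards
        if abs(v[k] - vbar) / vbar > eps:  # variation too large: flat part ends
            break
        flat.append(v[k])
        vbar = np.mean(flat)               # update the running mean
    dv_flat = np.asarray(dv, dtype=float)[-len(flat):]
    # error: quadrature sum of the measurement errors and the scatter of the flat part
    err = np.sqrt(np.mean(dv_flat ** 2) + np.std(flat) ** 2)
    return vbar, err

# Example: estimate_vflat([40, 90, 140, 160, 162, 158], [5, 5, 4, 4, 4, 5])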
http://arxiv.org/abs/2312.16661v1
{ "authors": [ "A. Sorgho", "L. Verdes-Montenegro", "K. M. Hess", "M. G. Jones", "T. H. Jarrett", "S. Sanchez-Expósito", "J. Garrido" ], "categories": [ "astro-ph.GA" ], "primary_category": "astro-ph.GA", "published": "20231227181456", "title": "The AMIGA sample of isolated galaxies -- Effects of Environment on Angular momentum" }
Bin-picking of novel objects through category-agnostic-segmentation: RGB matters Prem Raj^1, Sachin Bhadang^1, Gaurav Chaudhary^1, Laxmidhar Behera^1,2, Tushar Sandhan^1 ^1 Intelligence Systems and Control Lab, Indian Institute of Technology Kanpur, India {praj, sachinb20, gauravch, lbehera, sandhan}@iitk.ac.in ^2 Indian Institute of Technology Mandi, India [email protected] 14, 2024 ============================================================ This paper addresses category-agnostic instance segmentation for robotic manipulation, focusing on segmenting objects independently of their class to enable versatile applications like bin-picking in dynamic environments. Existing methods often lack generalizability and object-specific information, leading to grasp failures. We present a novel approach leveraging object-centric instance segmentation and simulation-based training for effective transfer to real-world scenarios. Notably, our strategy overcomes challenges posed by noisy depth sensors, enhancing the reliability of learning. Our solution accommodates transparent and semi-transparent objects, which are historically difficult for depth-based grasping methods. Contributions include domain randomization for successful transfer, our collected dataset for warehouse applications, and an integrated framework for efficient bin-picking. Our trained instance segmentation model achieves state-of-the-art performance on the public WISDOM benchmark <cit.> and also on the custom-created dataset. In a challenging real-world bin-picking setup, our bin-picking framework achieves 98% accuracy for opaque objects and 97% accuracy for non-opaque objects, outperforming the state-of-the-art baselines by a large margin. Bin-Picking, Deep-Learning, Manipulation, Class-agnostic instance segmentation § INTRODUCTION Category-agnostic instance segmentation is the task of segmenting the individual objects in a scene regardless of their class <cit.>. This method can be utilized for various robotic manipulation applications, such as robotic bin-picking of novel objects. The instance-segmentation problem has mainly been studied for cases with predefined semantic classes <cit.>. This might be useful in bin-picking for a limited set of known objects. However, it is not feasible for bin-picking in practical scenarios such as warehouse automation, where new types of objects are introduced regularly. Various state-of-the-art solutions exist for bin-picking of diverse novel objects that directly predict the optimal grasp pose without a pre-segmentation step <cit.>. They have certain disadvantages. First, these are gripper-centric solutions, and hence a solution designed for one type of gripper (e.g. a parallel-jaw gripper) cannot easily be extended to other types of grippers (e.g. a suction gripper). Category-agnostic instance segmentation is an object-centric solution and can thus be easily extended to any type of gripper. Secondly, the direct grasp-pose prediction methods do not have any object-specific information, and the predicted optimal grasp pose might result in a grasp failure during the grasp attempt for various reasons (e.g. the object slips when grasped near a corner rather than at its middle <cit.>).
Convolutional neural network (CNN) modules are used by state-of-the-art solutions for instance-segmentation tasks. However, the availability of labeled training data remains a major challenge, as the process of data labeling is labor-intensive and costly <cit.>. To overcome this, many solutions have used simulations for auto-generating the training data <cit.>, followed by sim-to-real transfer of the learning for real-world deployment <cit.>. Previously, it was believed that real-world visuals differ significantly from the simulated world and hence that sim-to-real transfer is not promising in the case of models trained with only synthetic RGB data <cit.>. Subsequently, leading work on learning class-agnostic segmentation for bin-picking has shown that if the CNN module is trained over only simulated depth images, the learning can be directly employed in the real world <cit.>. Recently, many follow-up works <cit.> have fused RGB features with depth features and shown improved results in this context. However, these methods are tested in the real world with costly, industrial-grade depth sensors that produce very accurate, high-precision depth maps. As verified in our work, this direct transfer of learning with depth-map inputs does not work for noisy depth maps produced with low-cost depth sensors, such as the RealSense D435i, which are widely used in the research community in general. The trained network is found to be highly sensitive to noise in the depth maps. One recent work <cit.> addresses this issue and proposes to augment the simulated depth maps with manually modeled noise profiles to mimic the real-world noise. However, the modeling of noise is camera-specific and does not provide a generalized solution. In our work, we revisit the problem of category-agnostic instance segmentation in the case of not-so-high-precision depth sensors and show that a model trained with simulated color (RGB) images can directly transfer to the real world with performance at the level of the state of the art if a carefully designed domain randomization strategy <cit.> is used. Additionally, our method can effectively segment transparent and semi-transparent objects, enabling them to be grasped with ease, which has always been a great challenge for depth-modality-based methods as depth sensing is poorer for such objects <cit.>. The effectiveness of the proposed method has also been shown by performing real-world bin-picking trials in a challenging bin-picking setup. The details are further elaborated in Section <ref>. In summary, the main contributions of our work are as follows: * Revisiting the sim-to-real transfer of category-agnostic instance segmentation learning amidst noisy depth sensing.* A method to generate simulated training samples with domain randomization for sim-to-real transfer with RGB images.* A simulated as well as a real dataset for category-agnostic instance segmentation in the context of warehouse applications, for training and evaluation purposes. * An integrated bin-picking framework that can also grasp transparent and semi-transparent objects effectively. The framework uses the proposed instance-segmentation method and an analytical grasp evaluation method <cit.>.
To gain a better understanding of the existing research in these areas, we will comprehensively review related works in each of these categories. §.§ Instance SegmentationInstance segmentation, which involves simultaneously detecting objects and segmenting them into pixel-level masks, has gained significant attention in recent years due to its practical applications in autonomous driving, robotics, and medical imaging. Mask R-CNN <cit.>, one of the most widely used instance segmentation methods, extends the popular object detection framework, Faster R-CNN <cit.>, by adding a segmentation branch that predicts the object mask in parallel with the object classification and bounding box regression tasks. Building on top of Mask R-CNN, many recent works have aimed to improve the accuracy and efficiency of instance segmentation, including the use of advanced backbones such as shufflenet <cit.>, feature pyramid networks <cit.>, and efficient training strategies <cit.>. Another important area of research in instance segmentation is panoptic segmentation, which combines instance segmentation with semantic segmentation to provide a unified view of the scene <cit.>. Category-agnostic instance segmentation detects and segments all object instances in an image, without prior knowledge of object categories. This technique is promising for robotics applications, as it enables robust perception and interaction in unstructured environments, where there are no predefined categories of objects <cit.>. It has the potential for complex tasks, such as bin-picking <cit.>, that require object localization in cluttered settings. §.§ Sim-to-real transferSim-to-real transfer is an important research area in robotics that focuses on developing techniques to transfer machine learning models trained in simulation to real-world settings. A variety of methods have been proposed for sim-to-real transfer, including domain randomization <cit.>, data augmentation <cit.>, and adversarial training <cit.>. Recent works have explored the use of sim-to-real transfer in a range of applications, such as robot grasping <cit.>, navigation <cit.>, motion planning <cit.> and locomotion <cit.>. In the case of the bin-picking problem, recently there are some works <cit.> that have shown that CNN models trained purely over synthetic depth maps can be directly transferred to the real world. However, in contrast to these findings, we have found that this is only true in the case of high-precision noise-free depth sensing which is costlier. Instead, models trained over only RGB images with appropriate domain randomization can successfully transfer the learning to the real world without any further finetuning. The not-so-perfect depth maps from the low-cost depth sensors can still be useful for subsequent steps in the bin-picking such as grasp pose evaluation <cit.>. §.§ Bin-pickingThe robotic bin-picking problem has a wide range of formulations depending on the type of objects in the bin (homogeneous or heterogeneous), the target application (warehouse automation or industrial parts handling), and the perception system (camera types, camera positioning, etc.) <cit.>. Specifically, in this paper, we are focusing on the bin-picking solutions that have a use-case in a warehouse automation application where a large number of novel objects with different shapes, sizes, colors, and textures, need to be handled <cit.>. One type of solution for this category of bin-picking is designed to be gripper-specific <cit.>. 
On the other hand, the gripper-agnostic works in this category involve an object-centric approach <cit.>. One type of object-centric approach uses a 3D CAD model of the target object <cit.>, which is not suitable for novel objects whose CAD model is not available. Another type of solution in this category is to apply category-agnostic instance segmentation <cit.>, which is also the focus of our proposed approach. Our work is most closely related to <cit.>, which performs category-agnostic instance segmentation for bin-picking of unknown novel objects via sim-to-real transfer. § OUR METHOD: BIN PICKING WITH UNKNOWN OBJECTS Our bin-picking framework mainly consists of two parts. One is the CNN-based model training for class-agnostic instance segmentation and the other is the grasp-pose planning using the predicted segmentation mask. First, we will describe the deep learning framework for class-agnostic instance segmentation. Next, we will describe the grasp-pose planning part in detail. A schematic diagram of our proposed method is depicted in Figure <ref> for reference. §.§ Class-agnostic Instance Segmentation Our class-agnostic instance segmentation method aims to segment previously unseen objects in the bin in a real-world setting. For this, a deep-learning-based framework is utilized via sim-to-real transfer learning. The CNN model is trained entirely in simulation and the learning is transferred directly to the real world. The crucial steps for setting up any deep learning framework are to acquire appropriate training data, choose a suitable CNN model based on the application requirements, and perform proper training. We describe the details of these components next. §.§.§ Data Generation For generating the ground-truth training samples, the PyBullet robotic simulator has been used. A synthetic environment has been created within the simulator that consists of a bin kept on a table, and objects are spawned in it randomly. The simulated camera is kept just above the bin, facing downwards, at a distance of 70 cm from the bin floor. The number of objects in the scene ranges from 1 to 20 for different samples. The 3D object models are taken from the open-source Google Scanned Objects <cit.> repository and consist of daily-use objects such as groceries, medicines, and toys. For each scene, the bin and the scene objects are assigned different textures from a pool of available options. For the bin floor, the textures are pooled from 20 different wooden textures downloaded directly from the web. For objects, the Describable Textures Dataset <cit.> is used, which consists of 5,640 texture images of 47 different categories. The camera orientation is randomized for each scene within a short range such that the objects in the scene remain in the camera view. The light parameters of the simulation are also randomized, representing a range of scene illuminations varying from bright daylight scenes to dark, dim-light scenes. §.§.§ Network design choice and the training For the CNN network design choice, we choose the standard high-performing Mask R-CNN network with ResNet-50 as the backbone. Our proposed bin-picking framework makes use of an open-loop motion planner in which the grasp pose is predicted once and then the robot executes it without further feedback from the vision. Thus, real-time vision feedback is not necessary; however, the quality of the grasp pose matters.
The grasp planning algorithm, as described in the next section, depends solely on the segmentation mask predicted by the CNN network. Thus, for our bin-picking framework, the segmentation accuracy is more important than the inference time. For the training, the PyTorch <cit.> deep learning library is used. The network was trained for 25 epochs with a batch size of 10. The training was carried out using 3 Nvidia 1080Ti GPUs. During inference, only 1 GPU is used. The training dataset consists of a total of 30,000 samples and a 9:1 ratio is kept for the training and validation sets. §.§ Grasp Planning Framework Our bin-picking framework takes the cumulative object instance segmentation mask as input and outputs the final grasp pose for the robot action. The framework for obtaining the instance segmentation mask is described in the previous subsection. To describe our grasp-pose planning method, we define the grasp pose as follows: G_i = (P_i, Θ_i, W_i, Q_i) where P_i represents the center point of the grasp pose G_i. Θ_i denotes the angle of the grasp pose. The grasp pose angle is planar, measured about the vertical (z) axis. The horizontal x-axis is assumed to be the reference zero angle. W_i refers to the width of the grasp pose rectangle, and Q_i represents the grasp quality index. The grasp pose is calculated in image coordinates and converted into the robot's world Cartesian frame. This conversion requires the intrinsic and extrinsic camera parameters obtained through a standard calibration procedure. The depth values used for this purpose are expressed in the camera's reference frame. The camera is positioned above the workspace bin at a fixed distance, facing downwards. The grasp-pose evaluation method consists of several sub-steps that are executed sequentially. The overall flow of the method is summarised in Algorithm <ref>. The details of the different components of the method are described next. §.§.§ 1. Sampling Candidate Grasp-pose The algorithm samples grasp poses using segmentation masks generated by our category-agnostic segmentation method (Section <ref>). For each segmentation instance, D grasp poses are sampled at equally spaced predefined angles (D=6 in our case). Each of these grasp poses G_i is represented by a rectangle of width gw and breadth gb in the image plane. The centers of the segmentation instances become the centers of the corresponding grasp poses. For further processing, the rectangular region corresponding to the grasp pose is cropped from the segmentation mask and horizontally aligned. Then, it is translated such that its top-left corner coincides with the origin. §.§.§ 2. Grasp Pose Subsectors Identification To ensure a comprehensive evaluation of a grasp pose G_i, our objective is to partition the complete area within the grasp pose rectangle into three distinct subsectors. The tactile contact sector S_tc denotes the section of the target object's area within the grasp pose rectangle. The unobstructed space sector S_uo encompasses the region within the grasp pose boundary where the gripping device is unlikely to encounter obstacles during the grasp attempt (i.e. the area corresponding to the background region as per the segmentation mask). The remaining segment constitutes the collision sector S_cl, indicating the area where the gripping device is prone to collide with other objects. The derivation of these subsectors is done through a simple strategy that uses the obtained segmentation mask, as detailed after the sketch below.
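The following Python sketch illustrates this pixel-wise partition of a grasp-rectangle crop, together with the object-width measurement used later for filtration; the label convention (0 for background) and the axis along which the width is measured are assumptions made for the example, not details taken from the paper.

import numpy as np

def grasp_subsectors(crop, target_id):
    # crop: 2D integer array of instance ids inside the aligned grasp rectangle
    # (0 is assumed to denote background); target_id: id of the target object
    s_tc = crop == target_id            # tactile contact sector: target-object pixels
    s_uo = crop == 0                    # unobstructed space sector: background pixels
    s_cl = ~(s_tc | s_uo)               # collision sector: pixels of other objects
    return s_tc, s_uo, s_cl

def object_width_px(s_tc):
    # maximum extent of the target object along the (assumed horizontal)
    # closing direction of the fingers, in pixels
    return int(s_tc.sum(axis=1).max()) if s_tc.any() else 0

# Example on a toy 4x6 crop with target id 2 and another object id 3:
crop = np.array([[0, 0, 2, 2, 0, 0],
                 [0, 2, 2, 2, 2, 0],
                 [0, 0, 2, 2, 0, 3],
                 [0, 0, 0, 0, 0, 3]])
s_tc, s_uo, s_cl = grasp_subsectors(crop, target_id=2)
print(object_width_px(s_tc), s_uo.sum(), s_cl.sum())   # -> 4 14 2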
In the grasp pose rectangle area, the pixels corresponding to the target object are assigned to S_tc, the pixels corresponding to the background class are assigned to S_uo and the remaining pixels are assigned to S_cl. For visualization in Figure <ref>, the subsectors S_tc, S_uo and S_cl are depicted with green, white, and red colors, respectively. §.§.§ 3. Grasp Pose Filtration We check the validity of each sampled grasp pose. A grasp pose is deemed unsuitable in the following two situations:* If the width of the target object along the orientation of the grasp pose exceeds the maximum potential opening of the gripper, then the grasp becomes unviable. To ascertain this, we compare the maximum width of the tactile contact sector S_tc with the gripper's maximum opening capacity.* Adequate space within the free-space sector must be available for the gripper's fingers to enter. To confirm this, we compare the minimum width of the unobstructed space sector S_uo on both sides of the grasp pose with the width of the gripper's fingers. §.§.§ 4. Grasp Pose Finetuning We finetune the grasp poses to enhance their effectiveness. Initially, we reposition the grasp pose's center to align with the center of S_tc. This adjustment ensures that the grasp pose's central point aligns more accurately with the center of the target object along the grasp pose orientation, resulting in improved stability during the grasping process. Furthermore, to determine the refined width, we calculate the disparity between the centers of the masks representing the left and right subparts of the region S_uo. This yields a more precise measurement of the refined width, facilitating a more accurate assessment of the available space for the gripper's fingers. §.§.§ 5. Grasp Quality Assessment If more than one grasp pose has passed the pose validation step, the grasp quality index Q_i is calculated for each of the valid grasp poses in order to rank them. For the calculation of Q_i, three quantities are taken into consideration: the first is the unobstructed-space score, which is the normalized area of the unobstructed space sector S_uo; the second is the contact-tangibility score, which is the normalized area of the tactile contact sector S_tc within a predefined rectangular region around the center; and the third is the segmentation score, which is the confidence score predicted by our instance segmentation network. Each of these components takes values between 0 and 100. The value of the grasp quality index Q_i is obtained by taking the average of the above three components. § RESULTS AND DISCUSSION In this section, we assess the effectiveness of our method for bin-picking of unknown novel objects through class-agnostic segmentation. First, in the next subsection, we evaluate our proposed framework for class-agnostic segmentation. Subsequently, an evaluation of the proposed bin-picking method is carried out with real-world bin-picking experiments. §.§ Evaluation of the Class-agnostic instance segmentation method To evaluate our proposed framework for class-agnostic instance segmentation for bin-picking applications, two datasets are considered. The first is WISDOM <cit.>, a public benchmark dataset in this domain, and the second is our custom-made dataset. In Table <ref> the two datasets are compared over various attributes.
The notable difference between the two datasets lies in the depth sensors used for capturing the depth maps. While the WISDOM dataset uses the costly, industrial-grade PhoXi camera, which produces high-precision depth maps (accuracy of 25-500 um), our custom dataset uses the commodity-level, cost-effective RealSense camera, which produces considerably noisy depth maps (accuracy of 2.5-5 mm). As evaluation metrics, the average precision (AP) and average recall (AR) are used as defined by the COCO benchmark <cit.> for the instance segmentation task. For calculating AP and AR, IoU thresholds from 0.50 to 0.95 with a step of 0.05 were used and the top-100 detections were considered. The experimental results are reported in Table <ref>. Our method has achieved better results compared to the considered baseline methods <cit.>. For baselines, we only consider class-agnostic segmentation works that are related to bin-picking applications. All the baselines use their respective custom-generated simulated data for training. The baseline <cit.> uses only a depth map as the input. The baseline <cit.> uses a fusion of depth and RGB features as the input. For <cit.>, two variants are considered: one uses only the depth map as the input and the other uses photo-realistically rendered RGB images. As shown in the table, the methods that use depth data as the input perform considerably well on the WISDOM dataset, while their performance on our custom dataset is poorer. The noisy depth maps are the reason behind the performance decline of these methods. Our method uses only synthetic RGB images as the input and is able to transfer well to the real world with the help of domain randomization. Our method achieves state-of-the-art performance on both datasets while using only the RGB image as the input. Photo-realism can also be an alternative for smooth sim-to-real transfer, as shown by the results of <cit.>. Nevertheless, this approach mandates meticulously crafted simulations and significant computational expense, resulting in limited adaptability and impracticality for real-world implementations. §.§ Bin Picking Experiments To evaluate the complete end-to-end bin-picking pipeline, we have performed real-world experiments. For the experiments, a UR5 robotic manipulator arm is used. A RealSense D435i RGB-D camera is mounted on the wrist of the manipulator in an eye-in-hand configuration. As a gripping tool, the Schunk WSG-50 gripper (a two-fingered parallel-jaw gripper) is mounted at the end-effector of the manipulator. The setup is shown in Figure <ref>. As shown in the figure, the camera looks directly downwards at the bin in which the target objects are placed. The experiments involve various daily-use objects, including transparent and semi-transparent objects. Our grasp prediction method mainly relies on the segmentation mask generated from the CNN network, which uses only the RGB image. All the grasp-pose parameters are calculated without using the depth map. For the experiments, we divide the object set into two categories: opaque objects and non-opaque objects (transparent and translucent objects). We evaluate our method along with the considered baseline methods <cit.> in two different scenarios: only opaque objects and only non-opaque objects. The bin-picking experiments are performed using our proposed method and the selected state-of-the-art baseline methods <cit.>.
For each method, a total of 100 grasp trials are performed in each scenario type (i.e. opaque and non-opaque). Initially, 15 and 10 objects are randomly thrown into the bin for the opaque and non-opaque categories, respectively. Then, objects are grasped one by one and put into the receptacle. A new iteration is started when either two consecutive failures have occurred or all the objects in the scene have been grasped. Consecutive grasp failures at the same location are counted only once. This process is repeated until the total number of grasp attempts reaches 100. The results are reported in Table <ref>. Our method outperforms all the considered baselines by a large margin. The methods <cit.> and <cit.> were trained over noise-free simulated depth images and thus perform poorly in noisy depth-sensing environments. In the case of non-opaque objects, the noise in the depth maps increases further, resulting in a further decline in performance. Our method is independent of the depth data and thus performs better. Furthermore, it is interesting to see that our method performs equally well with non-opaque objects although the training data for our instance-segmentation network does not contain any non-opaque objects. § CONCLUSION AND FUTURE WORKS This study addresses the critical challenge of category-agnostic instance segmentation for robotic manipulation, enabling versatile applications such as bin-picking with unknown objects in clutter. By focusing on object-centric segmentation and leveraging simulation-based training, our approach is able to segment unknown objects in the real world without a single real-world training sample. The devised strategy effectively addresses the inherent noise in depth sensors and enables reliable picking of objects in the absence of high-precision depth sensing. Notably, our solution accommodates transparent and semi-transparent objects, historically challenging for depth-based techniques. The contributions encompass a successful domain randomization strategy, the provision of benchmark datasets for warehouse applications, and an integrated bin-picking framework for enhanced efficiency. One of the challenges our method faces is that the segmentation quality becomes poorer when the clutter in the bin rises beyond a certain level. It would be interesting to develop a method that reliably segments only those objects that are graspable and unlikely to cause a collision during the grasp attempt, while avoiding objects that are mostly occluded. Another possible direction for future work is to incorporate depth information into the learning process for the instance segmentation task while the input to the deep network is still the RGB image only. One way to achieve this is to add depth estimation as an auxiliary task in the network design.
http://arxiv.org/abs/2312.16741v1
{ "authors": [ "Prem Raj", "Sachin Bhadang", "Gaurav Chaudhary", "Laxmidhar Behera", "Tushar Sandhan" ], "categories": [ "cs.RO" ], "primary_category": "cs.RO", "published": "20231227230546", "title": "Bin-picking of novel objects through category-agnostic-segmentation: RGB matters" }
Convergence and stability results for the particle system in the Stein gradient descent method There has recently been a lot of interest in the analysis of the Stein gradient descent method, a deterministic sampling algorithm. It is based on a particle system moving along the gradient flow of the Kullback-Leibler divergence towards the asymptotic state corresponding to the desired distribution. Mathematically, the method can be formulated as a joint limit of time t and number of particles N going to infinity. We first observe that the recent work of Lu, Lu and Nolen (2019) implies that if t ≈loglog N, then the joint limit can be rigorously justified in the Wasserstein distance. Not satisfied with this time scale, we explore what happens for larger times by investigating the stability of the method: if the particles are initially close to the asymptotic state (with distance ≈ 1/N), how long will they remain close? We prove that this happens on algebraic time scales t ≈√(N), which is significantly better. The exploited method, developed by Caglioti and Rousset for the Vlasov equation, is based on finding a functional invariant for the linearized equation. This allows us to eliminate linear terms and arrive at an improved Grönwall-type estimate. 35Q62, 35B35, 35Q68, 62-08, 65K10 Renzo Guido, Luis G. Sarasua, Arturo C. Martí January 14, 2024 ================================================= § INTRODUCTION The Stein gradient descent method is a recently and extensively studied algorithm <cit.> to sample the probability distribution ρ_∞:=e^-V(x)/Z when the normalization constant Z = ∫_ e^-V(x) x is unknown or difficult to compute. A prominent example is Bayesian inference <cit.>, used to fit parameters θ∈Θ based on the data D and an a priori distribution of parameters π(θ): the a posteriori distribution is given by ℙ(θ|D) = ℙ(D|θ)π(θ)/∫_Θℙ(D|θ')π(θ') θ'. Compared to the well-known stochastic Metropolis-Hastings algorithm and its variants <cit.>, which require a huge number of iterations, the Stein algorithm is completely deterministic. In this method, one starts with a measure μ and modifies it via the map T_ε,ϕ(x) = x + ε ϕ, where ε is a small parameter and ϕ is chosen to minimize the Kullback-Leibler divergence (T^#_ε, ϕ μ || ρ_∞), where for two nonnegative measures μ, ν the Kullback-Leibler divergence is defined as (μ || ν) = ∫_log(μ/ν(x) ) μ/ν(x) ν(x) if the density μ/ν exists, and +∞ otherwise, and T^#_ε, ϕμ is the push-forward of μ along the map T_ε, ϕ: T^#_ε, ϕμ(A) = μ(T_ε, ϕ^-1(A)) for all measurable sets A. The unique minimizer of the Kullback-Leibler divergence corresponds to the desired distribution ρ_∞. The reason for choosing this functional is that its first variation does not depend on the normalization constant Z (furthermore, this is the only functional with such a property, see <cit.>). More precisely, ϕ is chosen as a maximizer of the following optimization problem max_ϕ∈ℋ{ -d/dε(T^#_ε, ϕ μ || ρ_∞)|_ε=0 : ϕ_ℋ≤ 1 } where ℋ is a sufficiently large Hilbert space. A simple computation (see <cit.>) shows that -d/dε(T^#_ε, ϕ μ || ρ_∞)|_ε=0 = ∫_^d( ∇log(ρ_∞) ·ϕ + ∇·ϕ) μ(x) so that we see that the variation does not depend on the normalization constant Z. In the particular case that ℋ is a reproducing kernel Hilbert space with kernel K(x-y), one can obtain an explicit expression for the optimal ϕ (up to a normalization constant): ϕ∝ (∇log(ρ_∞) μ )∗ K - ∇ K ∗μ, where ∗ denotes the convolution operator f∗ g(x) = ∫_^d f(x-y) g(y) y.
In particular, if μ has a particle representation, this motivates (formally) an iterative algorithm: we set μ_0 = 1/N∑_i=1^N δ_x_0^i and, given μ_l =1/N∑_i=1^N δ_x_l^i from the l-th step, in the (l+1)-th step we compute μ_l+1 = 1/N∑_i=1^N δ_x_l+1^i by x^i_l+1 = x_l^i + ε/N∑_j=1^N [ ∇logρ_∞(x^j_l)K(x^i_l - x^j_l) - ∇ K(x^i_l - x^j_l)] (see <cit.> for more details). This shows that the Stein gradient descent method is simple and attractive for practitioners. From the analytical point of view, moving from discrete distributions to continuous ones (i.e. sending N→∞) is a delicate matter. Indeed, the Kullback-Leibler divergence (<ref>) is not well-defined for discrete distributions. However, its first variation is, which makes the algorithm (<ref>) well-defined. In <cit.>, the Stein method was connected to the ODE system ∂_t x^i(t) = -1/N∑_j=1^N ∇ K(x^i(t)-x^j(t))- 1/N∑_j=1^NK(x^i(t)-x^j(t))∇ V(x^j(t)). We note that the algorithm (<ref>) is in fact the time discretization of the ODE (<ref>) with time step ε. Considering the empirical measure ρ^N_t= 1/N∑_i=1^N δ_x_i(t), it was proved in <cit.> that, on finite intervals of time, ρ^N_t →ρ_t in the Wasserstein distance 𝒲_p, where ρ_t solves the nonlocal PDE ∂_t ρ_t = (ρ_t K∗(∇ρ_t + ∇ Vρ_t)). More rigorously, by using a Dobrushin-type argument, the authors in <cit.> established the following stability inequality 𝒲_p(μ_t, ν_t) ≤ Cexp(Cexp(CT)) 𝒲_p(μ_0, ν_0) for all times t ∈ [0,T] and measure solutions μ_t, ν_t to (<ref>), assuming that V(x) ≈ |x|^p for large x (see <cit.> for a more general setting). Having sent N →∞, one can obtain ρ_∞ as the unique stationary solution of (<ref>) by sending t→∞. §.§ Main results The paper <cit.> recasts the Stein method as a limit N →∞ and then t →∞. Yet, practical computations involve discretization in space and so they correspond in fact to the joint limit N →∞, t→∞. We first state a result showing that, in a certain scaling between N and t, one can rigorously justify the joint limit in the Wasserstein distance 𝒲_q. This applies to potentials having growth |x|^p for large x. Suppose that K, V satisfy Assumptions <ref> and <ref>. Let ρ^N_t = 1/N∑_i=1^N δ_x_i(t) where x_i(t) solve (<ref>). Let N(t)= exp(2 C exp(Ct)) where C is the constant as in (<ref>). Then, for all q ∈ [1,p), 𝒲_q(ρ_t^N(t), ρ_∞) → 0 as t→∞. We see that the number of particles is impractically large compared to the time. To understand what happens for longer time scales, we address the question of stability of the particle system (<ref>). Assuming that the initial configuration of particles ρ_0^N is close to the asymptotic state (say, with error ≈1/N), we ask for how long it remains close. For example, the estimate (<ref>) suggests that after time t ≈loglog N, the distance 𝒲_p(ρ^N_t, ρ_∞) is of order 1. Our main result improves this estimate and states that this time is of an algebraic order with respect to N rather than just logarithmic. Suppose that K, V satisfy Assumptions <ref> and <ref>. Let ρ_t be a measure solution to (<ref>). Then, there exists a constant C depending only on K and V such that for all times t satisfying 1-C t (t+1) ρ_0 - ρ_∞_^*_V>0 we have ρ_t - ρ_∞_^*_V≤C (t+1)ρ_0 - ρ_∞_^*_V/1-C (t+1) t ρ_0 - ρ_∞_^*_V where the norm ·_^*_V is defined in (<ref>). Several comments are in order. First, the conditions on K and V in Assumption <ref> are quite technical but they allow us to consider all smooth, positive-definite, sufficiently fast decaying kernels K and potentials V which grow at most like |x|^2 for large x.
Second, the exploited distance ·_^*_V is a weighted modification of the bounded Lipschitz distance (also called flat norm or Fortet-Mourier distance), commonly used in the analysis of transport-type PDEs (see, for instance, <cit.> and Section <ref> for the rigorous definition and related background). Third, we see that when ρ^N_0 - ρ_∞_^*_V≤1/N, then even for algebraic (with respect to N) time t ≤(N/2 C)^1/2-1 we have 1 - C t (t+1) ρ^N_0 - ρ_∞_^*_V≥1/2 so that with C := (2/C)^1/2 we have ρ^N_t - ρ_∞_^*_V≤C/√(N), 0 ≤ t ≤(N/2 C)^1/2 - 1, and so, possible instabilities in the particle system may occur much later compared to the time determined by the estimate (<ref>). The inspiration for Theorem <ref> comes from an insightful work of Caglioti and Rousset <cit.> who obtained similar estimates for the Vlasov equation and the vortex method for the 2D Euler equation. The starting point is to consider the dual equation (which is common in the theory of solutions in the space of measures to transport-type PDEs, see for instance the monograph <cit.>). In our case, we let μ_t := ρ_t - ρ_∞. Since ∇ρ_∞ + ∇ Vρ_∞ = 0, we have ∂_t μ_t = (μ_t K∗(∇μ_t + ∇ Vμ_t)) + (ρ_∞K∗(∇μ_t + ∇ Vμ_t)). Let g = g(T,x) be a smooth test function and consider the following dual equation ∂_t φ =∇φ· K ∗ (∇μ_t + μ_t∇ V) + (∇ρ_∞ φ) ∗∇ K - (∇ρ_∞ φ)∗ K ·∇ V - (ρ_∞ φ)∗Δ K + (ρ_∞ φ)∗∇ K ·∇ V + g (1+V), equipped with the terminal condition φ(T,x) = 0 for all x ∈. Note carefully that φ depends on the test function g and the final time T>0. An easy computation shows that ∫_0^T ∫_^d g(t,x)(1+V(x))μ_t(x) t = ∫_^d [φ(0,x)/(1+V(x))] (1+V(x))μ_0(x) so that to estimate μ_t on [0,T], one needs to control φ(0,x)/(1+V(x)), uniformly with respect to g. The crucial part of the argument in <cit.> is to find a functional of the form 𝒬(φ) ≈∫_ w(x) |φ(x)|^2 x (so that it is equivalent to a weighted L^2 norm of φ), which is invariant under the flow of the linearization of (<ref>). As the time derivative of 𝒬(φ) vanishes on the linear terms of the dual equation, the estimate on 𝒬(φ) will not yield exponential factors as obtained in (<ref>). For the Vlasov and Euler equations, the right choice was w(x) = |ρ_∞'(|x|)|. In our case, we choose 𝒬(φ) = ∫_ρ_∞(x) |φ(x)|^2 x. While this functional is not necessarily invariant, we prove that there is no positive contribution to its value under the flow of the linearization of (<ref>) (see Lemma <ref>). This yields: Suppose that K, V satisfy Assumptions <ref> and <ref>. Let φ be a solution to (<ref>) with g and T>0 fixed. Then, there exists a constant C depending only on V and K such that 𝒬(φ(t,·)) ≤ C ∫_t^T g(s,·)_L^∞() s e^C ∫_t^T μ_s_^*_V s. With Theorem <ref>, the proof of Theorem <ref> is a simple analysis of the explicit formula for solutions to (<ref>) together with a Grönwall-type inequality, see Lemma <ref>. We remark that a non-rigorous reason why the functional 𝒬 is important in the analysis of the linearized version of (<ref>) is that its dual can be interpreted as a linearization of the Kullback-Leibler divergence (<ref>) around ρ_∞. Indeed, writing ρ = ρ_∞+h where ∫_^d h = 0 (to preserve the mass), we have ∫_^dρlog(ρ/ρ_∞) x ≈∫_^d( h + h^2/ρ_∞) x = ∫_^d h^2/ρ_∞ x. One can wonder how to initially approximate the measure ρ_∞ so that the condition (<ref>) is satisfied. According to <cit.>, almost every initial configuration satisfies a condition of this type. More precisely, let us restrict to dimension d=2 for simplicity, and let λ_∞ be the product Lebesgue measure on ()^∞ := ×× ... (countably many times).
Then, from <cit.> we know that for all α∈ (0,1/2), there exists a constant C>0 such that for λ_∞-a.e. x = (x_1, x_2, ...) the empirical measure ρ^N[x] = 1/N∑_i=1^N δ_x_i satisfy the estimateρ^N[x] - ρ_∞_^*_V≤C/N^α.Note that this statement exclude a lot of initial configurations in fact. For instance, if A ⊂ is a set of measure zero, then ×× A ××× ... is of measure zero in ()^∞. Similar results are valid in arbitrary dimensions but the space ^*_V has to be slightly modified, see <cit.>. Of course, one would like to see estimates which shows that the gradient flow improves the initial estimate rather than just does not worsen it too much. This is a difficult problem, far beyond the scope of the current manuscript. Let us comment the novelties of the manuscript and put them in the context of other works. From the analytical point of view, estimates of the form (<ref>) have been only obtained before by Caglioti and Rousset for the Vlasov equation and for the vorticity formulation of 2D Euler equation <cit.>. The method uses the functional 𝒬 which is conservative for the flow of the linearized dual problem and is constructed by the methods of Hamiltonian mechanics. In our case, the functional 𝒬 has different form and can be rather interpreted via linearization of Kullback-Leibler divergence, see (<ref>). More generally, our work shows that the idea of <cit.> can be possibly applied to a much broader class of PDEs without a Hamiltonian structure. Concerning the particular case of the Vlasov equation, we also mention the work of Han-Kwan and Nguyen <cit.> who proved a negative result: if f_∞ = f_∞(v) is an unstable equilibrium (in the sense of so-called Penrose instability condition) and initially 𝒲_1(μ^N_0, f_∞) ≈1/N^α for α>0 sufficiently small thenlim sup_N →∞𝒲_1(μ^N_T_N, f_∞)>0 for T_N = O(log N). From the point of view of numerical analysis and statistics, the only available estimate addressing the convergence of the particle system in the Stein method is (<ref>) obtained by Lu, Lu and Nolen in <cit.> which belongs to the large class of convergence results of mean-field limits for Vlasov equation and aggregation equation <cit.>. First, from their result we deduced convergence of the method assuming that t ≈loglog N, see Theorem <ref>. Moreover, we provided stability estimates for the longer, more practical timescale t ≈√(N) which, up to our knowledge, are entirely new. Other interesting results for the Stein method focus on the convergence of(<ref>), assuming that the time step ε is sufficiently small and the initial distribution is an absolutely continuous measure <cit.>, or the analysis of the asymptotics t→∞ via log-Sobolev-type inequalities <cit.>, which is still a programme far from being complete. In both of these approaches, one needs the continuity of the initial distribution to consider the Kullback-Leibler divergence (<ref>) which is not well-defined for discrete measures. The paper is structured as follows. In Section <ref>, we review the theory of spaces of measures and we define the norm ·_^*_V. We also define measure solutions to (<ref>). In Section <ref> we introduce assumptions on the potential V and the kernel K. In Section <ref> we prove Theorem <ref> while in Section <ref> we prove Theorem <ref> which allows to demonstrate Theorem <ref> in Section <ref>.§ MEASURE SOLUTIONS AND THE FUNCTIONAL ANALYTIC SETTING§.§ Spaces of measures, the Wasserstein distance and the weighted bounded Lipschitz distanceWe first introduce the functional analytic framework. 
The most common notion of distance in the space of measures is probably the Wasserstein distance𝒲_p(μ, ν) = inf_π∈Π(μ,ν)(∫_^d ×^d |x-y|^p π(x,y))^1/pwhere Π(μ, ν) is a set of couplings between μ and ν, i.e. π∈Π(μ, ν) if π is a probability measure on ^d×^d such that π(A ×^d) = μ(A) and π(^d × B) = ν(B). Definition (<ref>) requires that ∫_^d (1+|x|^p) μ, ∫_^d (1+|x|^p) ν < ∞. Theorem <ref> is formulated using the Wasserstein distance.For Theorem <ref>, we need a notion of distance compatible with the duality method. This will be the bounded Lipschitz distance. To define it, we first introduce the space of bounded Lipschitz functions(^d) = {ψ:^d →: ψ_L^∞(^d) < ∞, |ψ|_ < ∞},where|ψ|_ = sup_x≠ y|ψ(x)-ψ(y)|/|x-y|.A useful fact is that |ψ|_≤∇ψ_L^∞(^d). In particular, to estimate ψ_(^d), it is sufficient to compute ψ_L^∞(^d) and ∇ψ_L^∞(^d).Given an arbitrary signed measure μ∈ℳ(^d), we recall its unique Hahn-Jordan decomposition μ = μ^+ - μ^- where both measures μ^+, μ^- are nonnegative. We define total variation of μ as a nonnegative measure |μ|:= μ^+ + μ^-. Then, for any signed measure μ such that ∫_^d(1+V(x)) |μ|(x) <∞, we define its weighted bounded Lipschitz norm asμ_^*_V := sup_φ_≤ 1∫_^dφ(x)(1+V(x)) μ(x).This is a weighted variant of the bounded Lipschitz normμ_^* := sup_φ_≤ 1∫_^dφ(x) μ(x),widely used in the analysis of PDEs, when the total mass is not conserved <cit.> (otherwise, one can use the Wasserstein distance). In our case, we introduce an additional weight V(x) to address the growth V(x) →∞ as |x|→∞. Similar weighted norms were introduced before to remove singularities in the studied problems <cit.>. §.§ Measure solutions to (<ref>) The measure solution is defined as follows. [measure solution] We say that a family of probability measures {ρ_t}_t∈[0,T] is a measure solution to (<ref>) if t ↦ρ_t is continuous (with respect to the narrow topology), for all T>0 the growth estimatesup_t ∈ [0,T]ρ_t_^*_V = sup_t ∈ [0,T]∫_^d (1+V(x)) ρ_t(x) ≤ C(T)is satisfied and for all ϕ∈ C_c^∞([0,∞)×^d)∫_0^∞∫_^d∂_t ϕ(t,x) + ∇ϕ(t,x) · K∗(∇ρ_t + ρ_t ∇ V) ρ_t(x)t + ∫_^dϕ(0,x) ρ_0(x) = 0.Three explanations are in order. First, the continuity with respect to the narrow topology means that the map t↦∫_^dψ(x) ρ_t(x) is continuous for all ψ:^d → bounded and continuous. Second, the first equality in (<ref>) follows by nonnegativity of ρ_t. Third, it is a simple computation to see that ρ_t^N = 1/N∑_i=1^N δ_x_i(t), where x_i(t) solves (<ref>) with initial condition x_i(0), is a measure solution to (<ref>) with initial condition ρ_0^N = 1/N∑_i=1^N δ_x_i(0). The measure solution to (<ref>) as in Definition <ref> was constructed in <cit.>. In our work, we will need (stronger) continuity in time with respect to the ·_^*_V norm which is given by the following lemma. Let {ρ_t} be a measure solution to (<ref>). Then, for all T>0, there exists a constant C depending on ρ_0, V, K and C(T) in (<ref>) such that for all s,t ∈ [0,T]ρ_t - ρ_s _^*_V≤ C|t-s|. As the computation is classical, we provide a short proof in the Appendix <ref>.§ ASSUMPTIONS ON THE KERNEL AND THE POTENTIALFor the sake of clarity, we specify assumptions for Theorems <ref> and <ref> separately.For both Theorems <ref> and <ref> we assume that: * K is nonnegative, symmetric K(x)= K(-x) and positive-definite, i.e. 
for all test functions ξ: ^d →^d we have ∫_^d K ∗ξ·ξ x ≥ 0,* V is a smooth and nonnegative function,* there exists p>0, C>0 and R>0 such that for all x with |x|>R we have1/C(|x|^p - 1) ≤V(x) ≤ C(|x|^p + 1), 1/C(|x|^p-1 - 1) ≤|∇ V(x)| ≤ C(|x|^p-1 + 1),1/C(|x|^p-2 - 1) ≤|∇^2 V(x)| ≤ C(|x|^p-2 + 1). For Theorem <ref> we assume additionally (as in <cit.>) that: * condition (<ref>) holds with p > 1,* K ∈ C^4(^d) ∩ W^4,∞(^d),* there exists smooth K_1/2 such that K = K_1/2∗ K_1/2 and its Fourier transform K_1/2 is positive.For Theorem <ref> we assume additionally that:* K∈ W^3,∞(^d),* V and K satisfy the following conditions:sup_x∈^d∇ V(x)/1+V(·)·∇ K(x-·)_,sup_x∈^d∇ V(x) ·∇ V(·)/1+V(·)K(x-·) _ < ∞. The only condition which is difficult to understand is (<ref>). Unfortunately, it restricts our reasoning to the case of V which can be at most quadratic at infinity (i.e. p ≤ 2 in (<ref>)).Suppose that V, K ≥ 0, V ∈ W^2,∞_loc(^d), K∈ W^1,∞(^d) and assume that V satisfies (<ref>) with exponent p. Furthermore, suppose that |∇ V|K, |∇ V||∇ K| ∈ L^∞(^d). Then, V and K satisfy (<ref>) if and only if p ∈ (0,2]. The proof is presented in Appendix <ref>. We conclude with a crucial consequence of (<ref>) which provides bounds on the vector field for the transport equation (<ref>).Let μ be a measure such that μ_^*_V<∞. Then, there exists a constant C depending only on V and K such thatK ∗ (∇μ + μ ∇ V) _L^∞(^d), ∇ K ∗ (∇μ + μ ∇ V) _L^∞(^d)≤ C μ_^*_V.Moreover,∇ V · K ∗ (∇μ + μ ∇ V) _L^∞()≤ C μ_^*_V.The proof is presented in Appendix <ref>.§ PROOF OF THEOREM <REF>Here, we prove that in the particular scaling t ≈loglog N, we can pass to the joint limit t, N →∞. The result is a simple consequence of results in <cit.>.We recall that in <cit.> the Authors prove that 𝒲_p(ρ_t^N, ρ_t) ≤ C exp(C exp(Ct)) 𝒲_p(ρ_0^N, ρ_0) ≤Cexp(C exp(Ct))/N,where C depends only on V and K. Furthermore, if ρ_0 is an absolutely continuous measure, there exists a unique solution ρ_t to (<ref>) which is an absolutely continuous measure for all t ∈ [0,∞) andρ_t →ρ_∞.The target of this Section is to combine (<ref>) and (<ref>) to prove the Theorem <ref>. We will first upgrade the convergence (<ref>).Suppose that {ρ_t} is an (absolutely contiuous) measure solution to (<ref>) with initial condition ρ_0 such that (ρ_0|ρ_∞) < ∞. Then, for all 1 ≤ q < p we have 𝒲_q(ρ_t, ρ_∞) → 0 when t →∞. As in <cit.>, since we deal with the absolutely continuous solution ρ_t, we have inequality∂_t (ρ_t|ρ_∞) + ∫_^d(∇ρ_t + ∇ V ρ_t) K ∗ (∇ρ_t + ∇ V ρ_t)x ≤ 0so that (ρ_t|ρ_∞) ≤(ρ_0|ρ_∞) < ∞. This means that∫_^dρ_t log(ρ_t) + V(x) ρ_tx ≤ C. By standard arguments (for instance, splitting the set {x ∈^d: ρ_t ≤ 1} for two sets: {x ∈^d : ρ_t ≤ e^-|x|^p/σ} and {x ∈^d: e^-|x|^p/σ≤ρ_t ≤ 1}) we have-∫_^dρ_t log^-(ρ_t)x ≤∫_^d e^-|x|^p/(2σ) x + σ ∫_^dρ_t |x|^pxwhere log^- is the negative part of log. Hence, choosing σ small enough, using that ρ_t is a probability measure and growth conditions on V in (<ref>), we getsup_t∈[0,∞)∫_^d( ρ_t |log(ρ_t)| + |x|^p ρ_t )x ≤ C. This inequality gives tightness of the sequence {ρ_t} in <cit.> to prove (<ref>) but in our case, it gives us uniform moment estimate which is relevant in the sequel.To prove the lemma, by <cit.>, it is sufficient to prove ∫_^d |x|^q ρ_t(x)x →∫_^d |x|^q ρ_∞(x)x. Let T_R be the truncation operator defined as T_R(y) =y |y|≤ R, y/|y| R |y| > R,so that |T_R(y)| ≤ |y| and |T_R(y)| ≤ R. 
Hence,|∫_^d |x|^q (ρ_t - ρ_∞)x | ≤≤|∫_^d T_R(|x|^q) (ρ_t - ρ_∞)x | +|∫_^d (|x|^q - T_R(|x|^q)) (ρ_t - ρ_∞)x |.In the second integral we can restrict to |x|>R so that ||x|^q - T_R(|x|^q)| ≤ 2|x|^p R^q-p.By (<ref>) and ∫_^d |x|^p ρ_∞ x < ∞, we conclude|∫_^d (|x|^q - T_R(|x|^q)) (ρ_t - ρ_∞)x | ≤ CR^q-p.As T_R(|x|^q) is an admissible test function for the narrow convergence, we deduce from (<ref>) and (<ref>)lim sup_t →∞|∫_^d |x|^q (ρ_t - ρ_∞)x | ≤ CR^q-p.As R is arbitrary, the proof is concluded. By the triangle inequality𝒲_q(ρ_t^N, ρ_∞) ≤𝒲_q(ρ_t^N, ρ_t) + 𝒲_q(ρ_t, ρ_∞).In view of (<ref>) and a simple inequality 𝒲_q(ρ_t^N, ρ_t) ≤𝒲_p(ρ_t^N, ρ_t) (as q ≤ p)𝒲_q(ρ_t^N, ρ_t) ≤Cexp(C exp(Ct))/N. Hence, if N= exp(2 C exp(Ct)) then 𝒲_q(ρ_t^N, ρ_t) → 0 and so, the conclusion follows by Lemma <ref>. § THE WEIGHTED L^2 ESTIMATE (PROOF OF THEOREM <REF>)§.§ Estimates on the vector field (the case )Let μ be a measure. Then, for all k = 0, 1, ..., m we have∇^k K ∗ (∇μ + μ ∇ V) _L^∞()≤ C μ_^*_m.Let k = 0. Then,K ∗∇μ_L^∞() = ∇ K ∗∇μ_L^∞()= sup_x∈| ∫_∇ K(x-y) μ(y)| ≤ ≤sup_x∈∇ K(x-·)__m μ_^*_m≤∇ K__m μ_^*_m≤ K__m+1 μ_^*_m.In the same way, we prove that ∇^k K ∗∇μ_L^∞()≤ K__m+k μ_^*_m. Concerning the term ∇^k K ∗ (μ ∇ V) we let again k=0 so thatK ∗ (μ ∇ V) _L^∞() = sup_x∈|∫_ K(x-y) ∇ V(y)μ(y) | ≤ ≤sup_x∈ K(x-·) ∇ V(·)__m μ_^*_m≤K__mV__m+1 μ_^*_m.Similarly, we prove that ∇^k K ∗∇μ_L^∞()≤K__m+kV__m+1 μ_^*_m.We first present the crucial cancellation lemma which allows to cancel the terms which are linear with respect to φ. Let 𝒬(φ) := (∫_ρ_∞|φ|^2x)^1/2 and letf(φ) :=(∇ρ_∞ φ) ∗∇ K - (∇ρ_∞ φ)∗ K ·∇ V- (ρ_∞ φ)∗Δ K + (ρ_∞ φ)∗∇ K ·∇ V.Then, ∫_ f(φ) φ ρ_∞ x = ∫_ (∇φ ρ_∞) ∗ K · (∇φ ρ_∞)x ≥ 0. In particular, if φ solves (<ref>), then∂_t 𝒬(φ) ≥1/𝒬(φ) ∫_ρ_∞ φg(1+V)x + 1/𝒬(φ) ∫_ρ_∞ φ ∇φ· K ∗ (∇μ_t + μ_t∇ V)x.The most important observation is that - ρ_∞∇ V = ∇ρ_∞. Hence,∫_ f(φ) φ ρ_∞ x =∫_ (∇ρ_∞ φ) ∗∇ Kφ ρ_∞ + (∇ρ_∞ φ)∗ K ·∇ρ_∞ φ x - ∫_(ρ_∞ φ)∗Δ Kφ ρ_∞ + (ρ_∞ φ)∗∇ K ·∇ρ_∞ φ x =: I_1 + I_2.We observe that I_1 = -∫_ (∇ρ_∞ φ) ∗K ·∇φ ρ_∞ x by integrating by parts in the first term in I_1. Similarly I_2 = ∫_(ρ_∞ φ)∗∇ K ·∇φ ρ_∞ x. Now, by standard properties of convolutions,(ρ_∞ φ)∗∇ K = (∇ρ_∞ φ)∗ K + ( ρ_∞ ∇φ)∗ K so that summing I_1 + I_2 we conclude the proof of (<ref>) (the nonnegativity follows by the positive definiteness of K). Concerning (<ref>), we observe that differentiating in time and using PDE (<ref>)∂_t 𝒬(φ)𝒬(φ) = = ∫_ρ_∞ φf(φ)x+∫_ρ_∞ φg(1+V)x + ∫_ρ_∞ φ ∇φ· K ∗ (∇μ_t + μ_t∇ V)xso that (<ref>) follows directly from (<ref>). Integrating (<ref>) in time and using φ(T,x) = 0 we deduce 𝒬(φ(t,·)) ≤∫_t^T 1/𝒬(φ(s,·))|∫_ρ_∞(x)φ(s,x)g(s,x) (1+V(x))x |s + + ∫_t^T 1/𝒬(φ(s,·))|∫_ρ_∞(x)φ(s,x) ∇φ(s,x) · K ∗ (∇μ_s + μ_s∇ V)x|s =: J_1 + J_2.Estimate on J_1. We use Hölder inequality to obtain1/𝒬(φ(s,·))|∫_ρ_∞(x)φ(s,x)g(s,x) (1+V(x))x | ≤(∫_ρ_∞(x) g^2(s,x) (1+V(x))^2x )^1/2 ≤ρ_∞ (1+V)^2_L^1()^1/2 g(s,·)_L^∞()≤ C g(s,·)_L^∞()so that J_1 can be estimated by C ∫_t^T g(s,·)_L^∞(^d) s. Estimate on J_2. 
We write φ ∇φ = 1/2∇φ^2 and integrate by parts to get two terms:∫_t^T 1/𝒬(φ(s,·))|∫_∇ρ_∞(x)φ^2(s,x) · K ∗ (∇μ_s + μ_s∇ V)x|s + + ∫_t^T 1/𝒬(φ(s,·))|∫_ρ_∞(x)φ^2(s,x)∇ K ∗ (∇μ_s + μ_s∇ V)x|s.Using ∇ρ_∞ = -ρ_∞ ∇ V, we can estimate it by∫_t^T 𝒬(φ(s,·)) ( ∇ V · K ∗ (∇μ_s + μ_s∇ V) _L^∞() + ∇ K ∗ (∇μ_s + μ_s∇ V) _L^∞())s.The L^∞ norms above can be bounded by C μ_s_^*_V using Lemma <ref> so that we obtain𝒬(φ(t,·)) ≤ C ∫_t^T g(s,·)_ L^∞() s + C ∫_t^T 𝒬(φ(s,·)) μ_s_^*_V s.Using Lemma <ref>, we conclude the proof.§ BL ESTIMATES ON Φ AND PROOF OF THEOREM <REF>The plan is to write explicit solution to (<ref>) and estimate each term separately. Note that for a general transport equation∂_t φ = ∇φ· b(t,x) + c(t,x), φ(T,x) = 0, the method of characteristics yields the following representation formulaφ(t,x) = - ∫_t^T c(s, X_t,s(x))s,where X_t,s(x) is the flow of the vector field b:∂_s X_t,s(x) = -b(s,X_t,s(x)), X_t,t(x) = x.Therefore, the solution to (<ref>) can be written asφ(t,x) = -∫_t^T(∇ρ_∞ φ) ∗∇ K(s,X_t,s(x))s + ∫_t^T (∇ρ_∞ φ)∗ K ·∇ V(s,X_t,s(x))s +∫_t^T (ρ_∞ φ)∗Δ K(s,X_t,s(x))s - ∫_t^T(ρ_∞ φ)∗∇ K ·∇ V(s,X_t,s(x))s- ∫_t^T g(s,X_t,s(x)) (1+V(X_t,s(x)))s,where X_t,s(x) is the flow of the vector field -K ∗ (∇μ_s + μ_s∇ V):∂_s X_t,s(x) = - K ∗ (∇μ_s + μ_s∇ V)(X_t,s(x)), X_t,t(x) = x.§.§ The case of .We begin with the estimates on the derivatives of the flow X_t,s.Suppose that ... Then,∇ X_t,s_W^m-1,∞()≤ C e^C ∫_t^s μ(s)_^*_m u.We start with k=1. Differentiating (<ref>) and using ∇ X_t,t(x) = I (the identity matrix), we obtain∇ X_t,s_L^∞()≤ e^∫_t^s∇ K ∗ (∇μ + μ ∇ V) _L^∞() u.Estimates on the higher derivatives follow the same way. We havemax_1≤ k ≤ m∇^k X_t,s_L^∞()≤ ≤ C + C∫_t^s max_1≤ k ≤ m∇^k X_t,u_L^∞() max_1≤ k ≤ m∇^k K ∗ (∇μ + μ ∇ V)_L^∞() u,(the first term is only due to k=1 as higher order derivatives of the initial condition vanish) where C is a numerical constant depending on m. It follows that∇ X_t,s_W^m-1,∞() =max_1≤ k ≤ m∇^k X_t,s_L^∞()≤ e^C∫_t^s max_1≤ k ≤ l∇^k K ∗ (∇μ + μ ∇ V)_L^∞() u.We conclude with Lemma <ref>. Now, we are in position to estimate solutions to (<ref>).There exists a constant C depending only on K and V such thatφ(t,·)_L^∞()≤∫_t^T 𝒬(φ(s)) + g(s,·)_L^∞ sNote thatρ_∞ φ(t)_L^1()≤ρ_∞_L^1()^1/2 𝒬(φ(t)) ≤ C 𝒬(φ(t)), ∇ρ_∞ φ(t)_L^1()≤ρ_∞|∇ V|^2 _L^1()^1/2 𝒬(φ(t)) ≤C 𝒬(φ(t)).Using (<ref>) and Young's convolutional inequality, we estimateφ(t,·)_L^∞()≤ ∫_t^T ∇ρ_∞ φ_L^1() ( ∇ K_L^∞() + K_L^∞() ∇ V_L^∞())s+∫_t^T ρ_∞ φ_L^1() (Δ K _L^∞() + ∇ K_L^∞() ∇ V_L^∞())s+∫_t^T g(s,·)_L^∞ s.Thanks to (<ref>)–(<ref>), we conclude the proof.There exists a constant C depending only on K, V and m such that for all k=1,...,m∇^k φ(t,·)_L^∞()≤C((T-t)+ g_L^1(t,T; W^k,∞())) e^C ∫_t^T μ(s)_^*_m u.Differentiating (<ref>) k times and using Young's convolutional inequality we obtain∇^k φ(t,·) _L^∞()≤ C ∫_t^T ∇ρ_∞ φ_L^1() ∇ K _W^k,∞() ∇ X_t,s_W^k-1,∞() s + C ∫_t^T ∇ρ_∞ φ_L^1()K _W^k,∞() ∇ V _W^k,∞() ∇ X_t,s_W^k-1,∞() s+C ∫_t^T ρ_∞ φ_L^1() Δ K _W^k,∞() ∇ X_t,s_W^k-1,∞() s +C ∫_t^T ρ_∞ φ_L^1() ∇ K _W^k,∞() ∇ V _W^k,∞() ∇ X_t,s_W^k-1,∞() s+ C ∫_t^T g(s,·)_W^k,∞() ∇ X_t,s_W^k-1,∞() s.Using (<ref>)–(<ref>), Theorem <ref> and Lemma <ref> we obtain∇^k φ(t,·) _L^∞() ≤ C ∫_t^T (1 + g(s,·)_W^k,∞() ) e^C ∫_t^s μ(s)_^*_m u s ≤ C ((T-t)+ g_L^1(t,T; W^k,∞())) e^C ∫_t^T μ(s)_^*_m u,which concludes the proof. We are in position to prove Theorem <ref>:Thanks to Lemmas <ref> and <ref> we haveφ(0,·)_W^m,∞()≤ C(T+ g_L^1(0,T; W^m,∞())) e^C ∫_0^T μ(s)_^*_m u,where the constant C depends only on K, V and m. 
Taking supremum over all g such that g_L^1(0,T; W^m,∞())≤ 1 and using duality formula (<ref>), we concludeμ(T)_^*_m≤ C μ(0)_^*_m(T+ 1) e^C ∫_0^T μ(s)_^*_m u.for all T ∈ [0,∞) mention here continuity wrt time to write LHS Applying Lemma <ref> we deduceAccording to (<ref>), we need to estimate φ(0,x)/1+V(x) uniformly with respect to g. From (<ref>) we haveφ(0,x)/1+V(x) = -∫_0^T (∇ρ_∞ φ) ∗∇ K(s,X_0,s(x))/1+V(x) s + ∫_0^T (∇ρ_∞ φ)∗ K ·∇ V(s,X_0,s(x))/1+V(x) s +∫_0^T (ρ_∞ φ)∗Δ K(s,X_0,s(x))/1+V(x) s - ∫_0^T(ρ_∞ φ)∗∇ K ·∇ V(s,X_0,s(x))/1+V(x) s+ ∫_0^T g(s,X_0,s(x)) 1+V(X_0,s(x))/1+V(x) s,First, we will need a lemma on quantities appearing in (<ref>).Let {μ_s}_s∈[0,T] be the family of measures and let X_t,s be defined by (<ref>). Then, there exists a constant C depending only on K and V such that|∇ X_0,s(x) | ,|∇ V(X_0,s(x))/ 1+ V(x)|,| V(X_0,s(x))/1+ V(x)|,|∇^2 V(X_0,s(x))/1+ V(x)| ≤ Ce^C ∫_0^s μ_u_^*_V u.Note thatX_0,s(x) = x - ∫_0^s K ∗ (∇μ_u + μ_u∇ V)(X_0,u(x))uso that in particular∇ X_0,s(x) = 𝕀_d - ∫_0^s ∇ K ∗ (∇μ_u + μ_u∇ V)(X_0,u(x)) ·∇ X_0,u(x)u,where 𝕀_d is the identity matrix. As ∇ K ∗ (∇μ_u + μ_u∇ V)_L^∞(^d)≤ C μ_u_^*_V (Lemma <ref>), the estimate on ∇ X_0,s follows by Grönwall lemma. We now proceed to the proof of the estimates involving potential V. We notice that the second term in (<ref>) can be estimated by ∫_0^s μ_u_^*_V u (Lemma <ref>). Now let f be one of the functions V, |∇ V|, |∇^2 V| so that the target is to estimate f(X_0,s(x))/1+V(x). We want to use the growth conditions (<ref>). If f happens to be bounded, the proof is concluded immediately. Otherwise, there exists q ∈{p, p-1, p-2}, q ≥ 0 such that|f(X_0,s(x))| ≤ C (1+ |X_0,s(x)|^q) ≤ C (1+ |x|^q + |∫_0^s K ∗ (∇μ_u + μ_u∇ V)(X_0,u(x))u |^q ).Using Lemma <ref> and simple inequality |x|^q ≤ Ce^C |x| we get|∫_0^s K ∗ (∇μ_u + μ_u∇ V)(X_0,u(x))u |^q≤|∫_0^sK ∗ (∇μ_u + μ_u∇ V) _L^∞(^d) u|^q≤ Ce^C ∫_0^s μ_u_^*_V u,It follows that|f(X_0,s(x))/1+V(x)|≤ C+C |x|^q/1+V(x)+ C e^C ∫_0^s μ_u_^*_V u/1+V(x)≤ C |x|^q/1+V(x) + Ce^C ∫_0^s μ_u_^*_V u.To conclude the proof, it remains to observe that because 0 ≤ q ≤ p, the term |x|^q/1+V(x) is bounded due to the growth conditions (<ref>). We proceed to the estimates on φ.Let φ be a solution to (<ref>) with g and T>0 fixed. Then, there exists a constant C depending only on V and K such thatφ(0,·)/1+V(·)_L^∞(^d)≤ C (T+1) g_L^1(0,T; L^∞(^d))e^C ∫_0^T μ_u_^*_V u.Using formula (<ref>) and estimating 1 ≤ 1 + V(x), we haveφ(0,·)/1+V(·)_L^∞(^d)≤ ∫_0^T (∇ρ_∞ φ) ∗∇ K(s,·)_L^∞(^d) s + ∫_0^T (∇ρ_∞ φ)∗ K(s,·) _L^∞(^d) ∇ V(s,X_0,s(·))/1+V(·)_L^∞(^d) s +∫_0^T (ρ_∞ φ)∗Δ K(s,·)_L^∞(^d) s + ∫_0^T(ρ_∞ φ)∗∇ K(s,·)_L^∞(^d) ∇ V(s,X_0,s(·))/1+V(·)_L^∞(^d) s+ ∫_0^T g(s,·)_L^∞(^d) 1+V(X_0,s(·))/1+V(·)_L^∞(^d) s.Note thatρ_∞(·) φ(s,·)_L^1()≤ρ_∞_L^1()^1/2 𝒬(φ(s,·)) ≤ C 𝒬(φ(s,·)), ∇ρ_∞(·) φ(s,·)_L^1()≤ρ_∞|∇ V|^2 _L^1()^1/2 𝒬(φ(s,·)) ≤C 𝒬(φ(s,·)),so that due to Theorem <ref> we havemax(ρ_∞(·) φ(s,·)_L^1(), ∇ρ_∞(·) φ(s,·)_L^1() )≤ C 𝒬(φ(s,·)) ≤ C g_L^1(0,T; L^∞())e^C ∫_0^T μ_u_^*_V u.By Young's convolutional inequality, for any function f:^d →,(ρ_∞ φ)∗ f(s,·)_L^∞(), (∇ρ_∞ φ) ∗ f(s,·)_L^∞()≤≤ C f_L^∞() g_L^1(0,T; L^∞())e^C ∫_0^T μ_u_^*_V u. Hence, applying it with f = K, Δ K, ∂_x_i K (for all i = 1, ..., d) and using Lemma <ref> for the potential term, we conclude the proof. Let φ be a solution to (<ref>) with g and T>0 fixed. 
Then, there exists a constant C depending only on V and K such that∇φ(0,·)/1+V(·)_L^∞(^d)≤ C(T+1) g_L^1(0,T; (^d))e^C ∫_0^T μ_u_^*_V uNote that∇φ(0,x)/1+V(x) = ∇φ(0,x)/1+V(x) - φ(0,x)/1+V(x) ∇ V(x)/1+V(x).The second term is bounded by Lemma <ref> and the assumption on the potential so it is sufficient to estimate ∇φ(0,x)/1+V(x). Differentiating each term in (<ref>) with respect to x at t=0, dividing by (1+V(x)) and estimating 1≤ 1+V(x) when there are no terms with the potential V in the numerator, we get∇φ(0,·)/1+V(·)_L^∞(^d)≤∫_0^T (∇ρ_∞ φ) ∗∇^2 K(s,·)_L^∞(^d) ∇ X_0,s_L^∞(^d) s+ ∫_0^T (∇ρ_∞ φ)∗∇ K(s,·)_L^∞(^d) ∇ V(s,X_0,s(·))/1+V(·)_L^∞(^d) ∇ X_0,s_L^∞(^d) s+ ∫_0^T (∇ρ_∞ φ)∗K(s,·)_L^∞(^d) ∇^2 V(s,X_0,s(·))/1+V(·)_L^∞(^d) ∇ X_0,s_L^∞(^d) s +∫_0^T (ρ_∞ φ)∗∇Δ K(s,·) _L^∞(^d) ∇ X_0,s_L^∞(^d) s +∫_0^T(ρ_∞ φ)∗∇^2 K (s,·) _L^∞(^d) ∇ V(s,X_0,s(·))/1+V(·)_L^∞(^d) ∇ X_0,s_L^∞(^d) s +∫_0^T(ρ_∞ φ)∗∇ K (s,·) _L^∞(^d) ∇^2 V(s,X_0,s(·))/1+V(·)_L^∞(^d) ∇ X_0,s_L^∞(^d) s + ∫_0^T ∇ g(s,·) _L^∞(^d) 1+ V(s,X_0,s(·))/1+V(·)_L^∞(^d)∇ X_0,s_L^∞(^d)s + ∫_0^T g(s,·) _L^∞(^d) ∇ V(s,X_0,s(·))/1+V(·)_L^∞(^d)∇ X_0,s_L^∞(^d)s.Now, we obtain (<ref>) directly from Lemma <ref>, estimates (<ref>) and the fact that ∇ g(s,·) _L^∞(^d)≤|g(s,·)|_. Thanks to Lemmas <ref> and <ref> we know thatφ(0,·)/1+V(·)_(^d)≤ C (T+1) g_L^1(0,T; (^d))e^C ∫_0^T μ_u_^*_V u.where the constant C does not depend on g and T. Using duality formula (<ref>) and taking supremum over all g such that g_L^1(0,T; (^d))≤ 1 we obtain_t ∈ [0,T]μ_t _^*_V≤ C (T+1) e^C ∫_0^T μ_u_^*_V u μ_0 _^*_V.By continuity of the map [0,T] ∋ u ↦μ_u in the ·_^*_V norm (Lemma <ref>), we conclude μ_T _^*_V≤ C (T+1) e^C ∫_0^T μ_u_^*_V u μ_0 _^*_V.Using Lemma <ref>, we arrive at (<ref>). § GRÖNWALL-TYPE INEQUALITIESSuppose that f, g,h: [0,T]→^+ such that h is nonincreasing,C is a nonnegative constant and f(t) ≤ h(t) + C ∫_t^T g(s) f(s)s.Then, f(t) ≤ h(t) e^C ∫_t^T g(u)u. We change variables u = T-s so thatf(T-(T-t)) ≤ h(T-(T-t)) + C∫_0^T-t g(T-u)f(T-u)u.Applying usual Grönwall's inequality to the function s ↦ f(T-s) (note that the function s ↦ h(T-s) is nondecreasing) we deducef(t) = f(T-(T-t)) ≤ h(T-(T-t)) e^C ∫_0^T-t g(T-u)u= h(t) e^C ∫_t^T g(u)u . Let y(t):[0,∞)→^+ be a continuous function such that y(t)≤α(t)e^C ∫_0^t y(s)sfor some C>0 and nondecreasing, nonnegative function α(t). Then,y(t) ≤α(t)/1-C t α(t)whenever 1-C t α(t)>0. We slightly adapt the proof from <cit.>. We fix T>0 and consider t∈[0,T]. Then,y(t)≤α(T)e^C ∫_0^t y(s)s.We let z(t) = α(T)e^C ∫_0^t y(s)s and we note thatz'(t) = Cz(t)y(t) ≤ Cz(t)^2. Integrating this differential inequality, we getz(t) ≤z(0)/1-t z(0) = α(T)/1-C t α(T).Taking t=T, we conclude the proof. § CONTINUITY OF SOLUTIONS IN TIME From <cit.>, we know that the measure solution is the fixed point of the push-forward representationρ_t = X_0,t^# ρ_0,where X_0,t is the flow of the related vector field∂_t X_0,t(x) = -K ∗ (∇ρ_t + ρ_t∇ V)(X_0,t),X_0,0(x) = x.Note carefully that since ∇ρ_∞ + ρ_∞ ∇ V = 0, we have (∇ρ_t + ρ_t∇ V) = (∇μ_t + μ_t∇ V) so that the flow map X_0,t is exactly the one defined in (<ref>). 
This fact will be relevant in the sequel.Given a test function ψ∈(^d) with ψ_(^d)≤ 1, and times s, t ∈ [0,T] we compute using (<ref>) and (<ref>)∫_^dψ(x)(1+V(x)) (ρ_t - ρ_s)(x) = = ∫_^d[ψ(X_0,t(x)) (1+V(X_0,t(x))) - ψ(X_0,s(x)) (1+V(X_0,s(x)))] ρ_0(x)= ∫_^d[ψ(X_0,t(x)) - ψ(X_0,s(x)) ]1+V(X_0,t(x))/1+V(x)(1+V(x)) ρ_0(x)=+∫_^dψ(X_0,s(x)) V(X_0,t(x)) - V(X_0,s(x))/1+V(x)(1+V(x)) ρ_0(x) =: I_1 + I_2 .For the term I_1 we use (<ref>), Lemma <ref> (to control the vector field) and (<ref>) (to control the solution on bounded intervals of time)|X_0,t(x)-X_0,s(x)|≤∫_s^t K ∗ (∇ρ_u + ρ_u∇ V)_L^∞(^d) u ≤∫_s^t ρ_u_^*_Vu ≤ C(T) |t-s|.Moreover, by Lemma <ref> and (<ref>), |1+V(X_0,t(x))/1+V(x)|, | ∇ V(X_0,t(x))/1+V(x)| ≤ Ce^C ∫_s^t ρ_u_^*_V u≤ C(T).Hence, using 1-Lipschitz continuity of ψ and(<ref>) we obtain that|I_1| ≤ C(T)|t-s| ∫_^d (1+V(x))ρ_0(x) ≤ C(T,ρ_0) |t-s|.For I_2, we first estimate|V(X_0,t(x)) - V(X_0,s(x))/1+V(x)|≤∫_s^t |∇ V(X_0,u)/1+V(x)||K ∗ (∇ρ_u + ρ_u∇ V)(X_0,u)|u ≤ C(T) ∫_s^t ρ_u_^*_V u ≤ C(T)|t-s|,where we used (<ref>) and Lemma <ref>. Hence, since ψ_L^∞(^d)≤ 1, we obtain|I_2| ≤ C(T)|t-s| ∫_^d (1+V(x))ρ_0(x) ≤ C(T,ρ_0) |t-s|.It follows that | ∫_^dψ(x) (1+V(x)) (ρ_t - ρ_s)(x) | ≤ C(T,ρ_0) |t-s|.Taking supremum over all ψ∈(^d) with ψ_(^d)≤ 1, we conclude the proof. § TECHNICAL PROOFS FROM SECTION <REF>Let g(x,y) = ∇ V(x)·∇ V(y)/1+V(y)K(x-y) and h(x,y) = ∇ V(x)/1+V(y)K(x-y). If p > 2, we see, taking x=y, that g is not bounded which proves that p ∈ (0,2] is a necessary condition. Let p ∈ (0,2]. We need to prove that g, ∇_y g, h, ∇_y h ∈ L^∞(^d ×^d). We will use the following inequality|∇ V(x)| ≤ C |∇ V(y)| + C|∇ V(x-y)| + C which is a consequence of (<ref>).Boundedness of g. Using (<ref>) we have1/C|g(x,y)| ≤(|∇ V(y)||∇ V(y)|/1+V(y) + |∇ V(x-y)||∇ V(y)|/1+V(y) +|∇ V(y)|/1+V(y)) K(x-y). The first and third terms are controlled since p ≤ 2 while the second uses additionally the control of |∇ V|K. Boundedness of h. We use (<ref>) to get1/C |h(x,y)| ≤|∇ V(y)|/1+V(y)K(x-y) + |∇ V(x-y)|/1+V(y)K(x-y) + 1/1+V(y)K(x-y).To conclude, we use boundedness of ∇ V/1+V (which holds for any p>0) and |∇ V|K. Boundedness of ∇_y h. By a direct computation,∇_y h(x,y) = -∇ V(x) ⊗∇ V(y)/(1+V(y))^2K(x-y) - ∇ V(x)⊗∇ K(x-y)/1+V(y) =: R_1 + R_2.Using (<ref>) we get| R_1 |/C≤|∇ V(y) ⊗∇ V(y)/(1+V(y))^2K(x-y) | ++ |∇ V(x-y) ⊗∇ V(y)/(1+V(y))^2K(x-y)| +|∇ V(y)|/(1+V(y))^2K(x-y), | R_2 |/C≤|∇ V(y)⊗∇ K(x-y)/1+V(y)|+|∇ V(x-y)⊗∇ K(x-y)/1+V(y)| + |∇ K(x-y)|/1+V(y) .All the terms above are bounded because ∇ V/1+V, |∇ V|K and |∇ V||∇ K| are bounded. Boundedness of ∇_y g. By a direct computation∇_y g(x,y) = ∇ V(x)·∇^2 V(y)/1+V(y)K(x-y)- ∇ V(x)·∇ V(y) ∇ V(y)/(1+V(y))^2K(x-y)- ∇ V(x)·∇ V(y)/1+V(y) ∇ K(x-y) =: P_1 + P_2 + P_3.Concerning the term P_1, we notice that since p ≤ 2, |∇^2 V| ≤ C so that P_1 can be estimated by |∇ V(x)|/1+V(y)K(x-y) = |h(x,y)| which was proved to be bounded above.Concerning the term P_2, we use (<ref>) to get|P_2|/C≤K_L^∞(^d) |∇ V|^3/(1+V)^2_L^∞(^d) + |∇ V|^2/(1+V)^2_L^∞(^d) ( ∇ VK _L^∞(^d) + K _L^∞(^d)).By the growth conditions (<ref>) and p≤ 2, |∇ V|^3/(1+V)^2_L^∞(^d) is finite and so, P_2 is bounded.Concerning the term P_3, we argue as in P_2 to get|P_3|/C≤∇ K_L^∞(^d) |∇ V|^2/1+V_L^∞(^d) + |∇ V|/1+V_L^∞(^d) ( ∇ V ∇ K _L^∞(^d) + ∇ K _L^∞(^d)).The term |∇ V|^2/1+V_L^∞(^d) is bounded because p ≤ 2 and all the other terms are bounded by assumption. The proof is concluded.Concerning (<ref>), we only prove the first estimate. The second can be proved in the same way, replacing K with ∇ K. 
We need to study two terms ∇ K ∗μ and K ∗ (μ ∇ V). For the first one,∇ K ∗μ_L^∞(^d) = sup_x ∈^d|∫_^d∇ K(x-y)/1+V(y)(1+V(y))μ(y)| ≤ ≤sup_x∈^d∇ K(x-·)/1+V(·)_ μ_^*_V≤sup_x∈^d∇ K(x-·)_ 1/1+V_ μ_^*_V,where 1/1+V∈(^d) thanks to the growth condition (<ref>). For the second one, we writeK ∗ (μ ∇ V) _L^∞(^d) =sup_x∈^d| ∫_^d K(x-y)∇ V(y)/1+V(y)(1+V(y)) μ(y)| ≤≤sup_x∈^d K(x-·)∇ V(·)/1+V(·)_≤sup_x∈^dK(x-·)_ ∇ V/1+V_ μ_^*_V≤sup_x∈^dK_ ∇ V/1+V_ μ_^*_V,where ∇ V/1+V∈(^d) due to the growth condition (<ref>). We proceed to the proof of (<ref>) which requires condition (<ref>). As before, we write |∇ V(x) · K ∗∇μ(x)| = |∫_^d∇ V(x)·∇ K(x-y) μ(y) | ≤ ≤sup_x∈^d∇ V(x)/1+V(·)·∇ K(x-·)_ μ_^*_V.Finally,we conclude| ∇ V(x) · K ∗ (μ ∇ V)(x) | = | ∫_^d∇ V(x) ·∇ V(y)K(x-y)μ(y) | ≤ ≤sup_x∈^d∇ V(x) ·∇ V(·)/1+V(·)K(x-·) _ μ_^*_V.§.§ AcknowledgementsJAC and JS were supported by the Advanced Grant Nonlocal-CPD (Nonlocal PDEs for Complex Particle Dynamics: Phase Transitions, Patterns and Synchronization) of the European Research Council Executive Agency (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 883363). JAC was also partially supported by the EPSRC grant numbers EP/T022132/1 and EP/V051121/1.abbrv
http://arxiv.org/abs/2312.16344v1
{ "authors": [ "José A. Carrillo", "Jakub Skrzeczkowski" ], "categories": [ "math.AP", "math.ST", "stat.TH", "35Q62, 35B35, 35Q68, 62-08, 65K10" ], "primary_category": "math.AP", "published": "20231226215734", "title": "Convergence and stability results for the particle system in the Stein gradient descent method" }
[email protected] of Technology SydneyAustralia [email protected] of Technology [email protected] of Technology [email protected]’s [email protected]’s [email protected] University of [email protected] University of MacauPR China Machine unlearning refers to the process of mitigating the influence of specific training data on machine learning models based on removal requests from data owners.However, one important area that has been largely overlooked in the research of unlearning is reinforcement learning.Reinforcement learning focuses on training an agent to make optimal decisions within an environment to maximize its cumulative rewards. During the training, the agent tends to memorize the features of the environment, which raises a significant concern about privacy.As per data protection regulations, the owner of the environment holds the right to revoke access to the agent's training data, thus necessitating the development of a novel and pressing research field, known as reinforcement unlearning. Reinforcement unlearning focuses on revoking entire environments rather than individual data samples. This unique characteristic presents three distinct challenges: 1) how to propose unlearning schemes for environments;2) how to avoid degrading the agent's performance in remaining environments; and 3) how to evaluate the effectiveness of unlearning.To tackle these challenges, we propose two reinforcement unlearning methods. The first method is based on decremental reinforcement learning, which aims to erase the agent's previously acquired knowledge gradually.The second method leverages environment poisoning attacks, which encourage the agent to learn new, albeit incorrect, knowledge to remove the unlearning environment.Particularly, to tackle the third challenge, we introduce the concept of “environment inference attack” to evaluate the unlearning outcomes.The source code is available at <https://anonymous.4open.science/r/Reinforcement-Unlearning-D347>. printfolios=trueReinforcement Unlearning Wanlei Zhou Received ...; accepted... ============================= § INTRODUCTION Machine learning relies on the acquisition of vast amounts of data, which may encompass sensitive information of individuals.To safeguard the privacy of individual users, data protection regulations have been proposed, e.g., the General Data Protection Regulation (GDPR) <cit.>, which empowers users to request the removal of their data.It is imperative for model owners to adhere to users' requests by removing revoked data from their datasets and ensuring that any influence these revoked data may have on the model is eliminated.This process is referred to as machine unlearning <cit.>. While significant progress has been made in conventional machine unlearning <cit.>, one area that remains an unfilled gap for unlearning is reinforcement learning (RL).RL is an essential research field in machine learning due to its ability to address complex decision-making problems in dynamic environments <cit.>. In RL, the primary objective is to train an intelligent entity, known as an agent, to interact with the environment through a specific policy. 
This policy guides its actions based on the current state.With each action taken, the agent receives a reward and consequently updates its state, creating an experience sample used to update its policy.The ultimate aim of the agent is to learn an optimal policy that maximizes its cumulative rewards over time.However, in the course of RL, agents tend to memorize features of their environments, raising substantial security concerns.Consider an RL agent designed for providing navigation guidance through real-time data from Google Maps. During its training, the agent learns from a dynamic environment using Google Street Views for its photographic content <cit.>. However, privacy issues may arise when the agent inadvertently learns and stores sensitive information, such as the locations of restricted areas. Without a forgetting mechanism, this poses a significant privacy risk, potentially compromising the anonymity of individuals and organizations. In addition, the agent's ability to forget is also crucial for ensuring the security of autonomous vehicles. A security challenge arises from malicious entities that exploit the mapping system by registering fraudulent businesses on Google Maps <cit.>. Their aim is to redirect organic search traffic away from legitimate establishments and steer it towards profit-driven scams. When training an autonomous driving agent, utilizing maps tainted by such fraudulent entries can lead to suboptimal user experiences or even result in fatal consequences. Thus, the need for the RL agent to forget such sensitive or incorrect information gives rise to a novel research field, which we term as reinforcement unlearning.Conventional machine unlearning methods are not directly applicable to reinforcement unlearning due to fundamental differences in their learning paradigms. In machine learning, unlearning involves removing specific data samples from the static training set, where data samples are independently and identically distributed. In contrast, RL is a dynamic and sequential decision-making process, where agents interact with an environment in a series of actions, and agents' experience samples are temporally dependent. Also, reinforcement unlearning is distinct from privacy-preserving RL <cit.>.Reinforcement unlearning aims to selectively erase learned knowledge from the agent's memory, ensuring the privacy of environment owners, while privacy-preserving RL focuses on preserving the agent's personal information.In essence, reinforcement unlearning presents three specific challenges.* How can we unlearn an environment from the agent's policy?In machine unlearning, a data owner can specify which data samples should be removed. However, in reinforcement unlearning, the environment owner cannot access the experience samples. The difficulty arises as these samples are dynamically accumulated during the agent's interactions with the environment, and samples are managed by the agent. Thus, the key challenge lies in effectively associating the environment that needs to be unlearned with the corresponding experience samples.* How can we prevent a degradation in the agent's performance after unlearning?In conventional machine unlearning, removing a sample leads to a decrease of performance. It is more challenging in reinforcement unlearning as unlearning an environment requires forgetting a significant number of experience samples. Ensuring that the agent maintains its performance on the retained environments becomes a considerable challenge. 
* How can we evaluate the effectiveness of reinforcement unlearning?In machine unlearning, one commonly used evaluation is using membership inference attack <cit.> to assess if the model has discarded the revoked data. However, this methodology cannot be directly applied to reinforcement unlearning, as the environment owner cannot specify which samples should be unlearned. This poses a challenge in evaluating the effectiveness of reinforcement unlearning and in determining if the agent has been over- or under-unlearned. To address these challenges, we propose two distinct unlearning methods: decremental reinforcement learning and environment poisoning. Decremental reinforcement learning involves deliberately erasing an agent's learned knowledge about a specific environment. This method finds practical applications in scenarios where certain environments become obsolete or need to be forgotten due to privacy concerns. Environment poisoning-based method aims to create poisoning experience samples by modifying the unlearning environment. This method ensures that the agent's performance in other environments remains unaffected, eliminating any negative impact on its overall capabilities. This method finds application in situations where attacks or misinformation may be present. Both methods enable an agent to unlearn specific environments while maintaining its performance in others, thereby tackling the first two challenges.To tackle the third challenge, we utilize an environment inference attack to infer an agent's training environment by observing its behavior.If the inference result after unlearning shows a substantial degradation compared to the result before unlearning, the agent has effectively unlearned that environment. In summary, we make three main contributions:* We provide a valuable step forward in machine unlearning by pioneering the research of reinforcement unlearning. The concept of reinforcement unlearning that selectively forgets learned knowledge of the training environment from the agent's memory offers novel insights and lays a foundation for future research in this emerging domain.* Reinforcement unlearning exposes an impactful vulnerability of RL – the risk of exposing the privacy of the environment owner. This vulnerability can disclose sensitive information about the environment owner's preferences and intentions. To implement reinforcement unlearning, we introduce two innovative methods: decremental RL-based and environment poisoning-based. * With limited prior research in reinforcement unlearning, to confirm the unlearning results, we introduce a novel evaluation approach, “environment inference”. By visualizing the unlearning results, this approach provides an intuitive and effective means of measuring the efficacy of unlearning techniques.§ PRELIMINARIESReinforcement Learning.  The primary objective of RL is to learn an optimal policy for the agent, enabling it to maximize the total accumulated reward and accomplish the task optimally. In the context of deep reinforcement learning (DRL), the policy is typically represented by a deep neural network.Formally, a learning environment is commonly formulated by the tuple ℳ=⟨𝒮,𝒜,𝒯,r⟩ <cit.>. Here, 𝒮 and 𝒜 denote the state and action sets, respectively, while 𝒯 represents the transition function, and r represents the reward function. At each time step t, the agent, given the current environmental state s_t∈𝒮, selects an action a_t∈𝒜 based on its policy π(s_t,a_t). 
This action causes a transition in the environment from state s_t to s_t+1 according to the transition function: 𝒯(s_t+1|s_t,a_t). The agent then receives a reward r_t(s_t,a_t), along with the next state s_t+1.This tuple of information, denoted as (s_t,a_t,r_t(s_t,a_t),s_t+1), is collected by the agent as an experience sample utilized to update its policy π. Typically, the policy π is implemented using a Q-function: Q(s,a), estimating the accumulated reward the agent will attain in state s by taking action a. Formally, the Q-function is defined as: Q_π(s,a)=𝔼_π[∑^∞_i=1γ^i· r(s_i,a_i)|s_i=s,a_i=a],where γ represents the discount factor. In deep reinforcement learning, a neural network is employed to approximate the Q-function, denoted as Q(s,a;θ), where θ represents the weights of the neural network. The neural network takes the state s as input and produces a vector of Q-values as output, with each Q-value corresponding to an action a. To learn the optimal values of Q(s,a;θ), the weights θ are updated using a mean squared error loss function ℒ(θ). ℒ=1/|B|∑_e∈ B[(r(s_t,a_t)+γmax_a_t+1Q(s_t+1,a_t+1;θ)-Q(s_t,a_t;θ))^2],where e=(s_t,a_t,r(s_t,a_t),s_t+1) is an experience sample showing a state transition,and B consists of multiple experience samples used to train the neural network.Machine Unlearning.  Machine unlearning focuses on removing specific data samples or learned knowledge from a trained model. Unlike model training, unlearning aims to erase the impact of certain data samples on the model's behavior.A straightforward unlearning method involves removing the revoked data and retraining the model from scratch.However, this approach is computationally challenging.To improve computational efficiency, several machine unlearning methods have been proposed. SISA (Sharded, Isolated, Sliced, and Aggregated) is one of the most prevalent methods <cit.>, aiming to divide the training set into disjoint shards and train each shard model separately. When a revoke request is received, only the corresponding shard model is retrained.When combining machine unlearning with reinforcement learning, a novel concept called reinforcement unlearning is introduced. This concept revolves around the selective removal or modification of the agent's acquired knowledge within the context of reinforcement learning, offering unique insights and challenges in the dynamic environment of sequential decision-making tasks.§ REINFORCEMENT UNLEARNING §.§ Problem Statement and Threat ModelProblem Definition.  The definition of “forgetting” is application-dependent, leading to different desiderata and priorities in various scenarios <cit.>. For example, in a privacy-centric application, the main goal of unlearning user data is to ensure that the unlearned model has no exposure to the data, and a successful membership inference attack would reveal that the data is not in the training set for the unlearned model. Conversely, in a bias-removing application, the aim of unlearning is to prevent the unlearned model from predicting the assigned labels of the forgotten data, as these labels may indicate unintended and biased behavior. The objective of reinforcement unlearning is to eliminate the influence of a specific environment on the agent, i.e., “forgetting an environment”. We define “forgetting an environment” as equivalent to “performing deterioratively in that environment”. This definition aligns with common sense. 
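One hedged way to operationalize this definition is to compare the agent's Monte-Carlo average return in the unlearning environment before and after unlearning, while checking that the returns in the retained environments remain (approximately) unchanged. The sketch below assumes a standard reset()/step() interface for the environments; the helper names are illustrative and not part of the method itself.

```python
def average_return(policy, env, episodes=50):
    """Monte-Carlo estimate of the (undiscounted) return of `policy` in `env`;
    `env` is assumed to follow the usual reset()/step() interface."""
    total = 0.0
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = policy(s)
            s, r, done = env.step(a)
            total += r
    return total / episodes

def forgetting_score(policy_before, policy_after, env_u, retained_envs):
    """A large drop in env_u together with small drops elsewhere indicates
    that the environment has been forgotten in the sense defined above."""
    drop_u = average_return(policy_before, env_u) - average_return(policy_after, env_u)
    drops_rest = [average_return(policy_before, e) - average_return(policy_after, e)
                  for e in retained_envs]
    return drop_u, max(abs(d) for d in drops_rest)
```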
For example, when we have thoroughly explored a place and are highly familiar with it, we can efficiently find things within it, resulting in high performance. Conversely, when we have forgotten a place, our ability to locate things diminishes, leading to deteriorative performance. It is worth noting that we cannot simply adopt the concept of “forgetting” from conventional machine unlearning, which often involves retraining the model from scratch without the revoked data. This is because reinforcement unlearning operates within a distinct learning paradigm, such as sequential decision making and dynamic learning. For instance, even if we were to retrain an agent from scratch without including a specific unlearning environment, there's a possibility that the agent still performs well in that environment. However, this positive performance could be exploited by adversaries to deduce critical information about that environment. This clearly contradicts the fundamental objective of safeguarding the environment owner's privacy in reinforcement unlearning.Formally, let us consider a set of n learning environments:(ℳ_1,…,ℳ_n). Each environment ℳ_i has the same state and action spaces but differs in state transition and reward functions.Consider the target environment to be unlearned as ℳ_u = ⟨𝒮_u, 𝒜_u, 𝒯_u, r ⟩, denoted as the `unlearning environment'. The set of remaining environments, denoted as (ℳ_1, …, ℳ_u-1, ℳ_u+1, …, ℳ_n), will be referred to as the `retaining environments'. Given a learned policy π, the goal is to update the policy π to π' such that the accumulated reward obtained in ℳ_u is minimized: min_π'||Q_π'(s)||_∞,where s∈𝒮_u,while the accumulated reward received in the retaining environments remains the same: min_π'||Q_π'(s)-Q_π(s)||_∞, where s∉𝒮_u.We assume that the owner of the trained RL model can access ℳ_u and gather trajectories within ℳ_u. A trajectory τ is denoted as a sequence of state-action pairs: τ = ((s_1, a_1), …, (s_k, a_k)), where k represents the length of the trajectory.The assumption of the model owner having access to the unlearning environment ℳ_u is reasonable, as ℳ_u is an integral part of the training data used by the model owner to train the agent. When an unlearning request is initiated, the model owner employs the proposed unlearning methods, leveraging the unlearning environment ℳ_u, to execute the unlearning process. Subsequently, the model owner physically removes ℳ_u. This approach ensures that the unlearning is performed using the relevant environment data under the ownership of the user, maintaining a practical and privacy-conscious procedure.Threat Model. Reinforcement unlearning primarily focuses on mitigating the influence of a designated unlearning environment on the trained agent. This essentially involves safeguarding the distinctive features of that environment by thwarting a particular type of attack, namely environment inference attacks. In these attacks, adversaries seek to infer a learning environment by closely observing the actions of the agent within that specific environment. Formally, consider the unlearning environment as ℳ_u = ⟨𝒮_u, 𝒜_u, 𝒯_u, r ⟩ and the unlearned policy as π'. The adversary's objective is to infer the transition function 𝒯_u by accessing 𝒮_u, 𝒜_u, r, and π'. It is essential to note that the environment inference attack differs significantly from conventional membership inference attacks in machine learning. 
Traditional membership inference attacks typically involve point-level inference, where the focus is on deducing information about an individual sample. In contrast, the environment inference attack operates at the object level, concentrating on the inference of features characterizing an entire environment that encompasses a substantial number of samples. This distinction underscores the unique nature of the environment inference attack, as it extends its scope beyond individual data points to involve the broader context of environments, introducing a new dimension to the evaluation methodology for reinforcement unlearning.Methods Overview. Both decremental reinforcement learning-based method and the poisoning-based method share the common aim of intentionally degrading the agent's performance within the unlearning environment while preserving its performance in other environments. However, they employ distinct strategies to achieve this outcome. The decremental reinforcement learning-based method involves updating the agent by minimizing its reward specifically in the unlearning environment. This is achieved through iterative adjustments to the agent's policy, aiming to reduce its effectiveness within the unlearning environment.In contrast, the environment poisoning-based method focuses on modifying the unlearning environment itself. This method involves introducing deliberate changes to the state transition function of the environment and subsequently updating the agent in this modified environment. The intention is to disrupt the agent's learned behavior in the unlearning environment. §.§ Decremental Reinforcement Learning-based Method The implementation of this method involves two main steps. The first one is the exploration of the unlearning environment ℳ_u. Initially, the agent is allowed to explore the unlearning environment, collecting experience samples specific to that environment. The nature of this exploration depends on the scenario. For instance, in the grid-world setting, the agent might traverse the unlearning grid using a random walk for a predefined number of steps. In the aircraft-landing scenario, it could involve the airplane making random landings within the unlearning environment. The second step is fine-tuning the agent. Following the exploration phase, the agent is fine-tuned using the collected experience samples. This fine-tuning process employs a newly defined loss function (Eq. <ref>) to update the policy π^* with the experience samples accumulated in the first step. This loss function is carefully designed to ensure that the agent's performance within the unlearning environment ℳ_u degrades while preserving its performance in other environments. Essentially, it guides the agent to unlearn the knowledge associated with the unlearning environment.To accomplish the aim of unlearning, we establish an optimization objective to guide the unlearning process (Eqs. <ref> and <ref>).We also introduce a new loss function (Eq. <ref>) that will be used to update the agent. This loss function is designed to minimize the influence of the previously learned knowledge and encourage the agent to modify its existing policy. By incorporating this loss function into the training procedure, we can steer the agent's learning process towards unlearning the knowledge from the given environment.ℒ_u=𝔼_s∼𝒮_u[||Q_π'(s)||_∞]+𝔼_s≁𝒮_u[||Q_π'(s)-Q_π(s)||_∞]. In Eq. 
<ref>, the first term encourages the new policy π' to work deficiently in the unlearning environment ℳ_u, while the second term drives the new policy π' to have the same performance as the current policy π in other environments.Notably, the two terms in Eq. <ref> have favorable properties.The first term directs an agent to search and attempt different policies to sufficiently explore the state space of environment ℳ_u.Thus, ℳ_u can be adequately unlearned.This property is particularly useful when ℳ_u is a sparse reward setting, i.e., the reward is 0 in most of the states in 𝒮_u.The second term motivates an agent to modify policies, instead of randomly changing its behavior.This property ensures that the agent performs consistently in those states which are not in 𝒮_u.It is essential to highlight that the accurate computation of the second term is infeasible due to its involvement with all states except 𝒮_u.Thus, during implementation, we uniformly select a consistent set of states across all environments, excluding ℳ_u. This approach is taken to mitigate computational burden and ensure a balanced impact on performance preservation across the remaining environments. The decremental RL-based method is formally described as follows. First, the agent explores the unlearning environment using a random policy. This policy ensures that the agent takes each action with equal probability.Adopting a random policy ensures a comprehensive exploration of the unlearning environment. If the agent were to use a well-established policy from its training, there is a risk of swift achievement of the target, potentially resulting in insufficient collection of experience samples within the unlearning environment. This strategic use of a random policy enables the effective collection of diverse experiences crucial for the unlearning process. For instance, in the grid world setting, where the agent has four possible actions (moving up, down, left, and right), the random policy dictates that, in each grid (i.e., state), the agent has an equal likelihood of selecting any of the four directions. As the agent progresses in its exploration, in each time step t, after taking an action a_t, it receives a corresponding reward r_t. Coupled with the current state s_t and the subsequent state s_t+1, the agent effectively creates an experience sample in the form of (s_t,a_t,r_t,s_t+1). In our method, the agent is directed to explore the unlearning environment for a specified number of steps, denoted as m. Consequently, the agent accumulates a total of m experience samples: (s_1,a_1,r_1,s_2 ),…,(s_m,a_m,r_m,s_m+1). In the second step, the agent employs these collected experience samples to fine-tune its current optimal policy π^* to a new policy π' by minimizing the custom loss function defined in Eq. <ref>. Convergence Analysis of the Method.  We proceed with examining the convergence of the method by conducting a separate analysis for each term in Eq. <ref>. For the first term in Eq. <ref>, let y_π=r(s,π(s))+γ max_π(s')Q(s',π(s')) and δ_π=y_π-Q(s,π(s)), where s' denotes the next state.Then, the loss function in Eq. <ref> can be rewritten as: ℒ=𝔼_s∼𝒮[δ_π].Similarly, the first term in Eq. <ref> can also be rewritten as: 𝔼_s∼𝒮_u[|y_π'-δ_π'|].Based on the triangle inequality, we have: 𝔼_s∼𝒮_u[|y_π'-δ_π'|]≤𝔼_s∼𝒮_u[|y_π'|]+𝔼_s∼𝒮_u[|δ_π'|].As the convergence of the learning on the loss function in Eq. 
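A minimal sketch of the two steps just described is given below. It assumes that the Q-network maps a batch of states to a vector of Q-values (one per action) and that the unlearning environment exposes a reset()/step() interface together with a uniform random-action helper; all names are illustrative. The first loop performs the random-walk exploration of ℳ_u (the loss only uses the visited states), and the second fine-tunes the network on the loss ℒ_u defined above.

```python
import torch

def unlearn_decremental(q_net, q_net_ref, env_u, retained_states, steps=1000, lr=1e-4):
    """Sketch of the decremental RL-based method. `q_net` is the policy being
    unlearned, `q_net_ref` a frozen copy of the trained policy, `env_u` the
    unlearning environment and `retained_states` a fixed batch of states
    sampled uniformly from the other environments (names are illustrative)."""
    # Step 1: random-walk exploration of the unlearning environment.
    states = []
    s = env_u.reset()
    for _ in range(steps):
        a = env_u.sample_random_action()            # uniform over the action set
        s_next, r, done = env_u.step(a)
        states.append(torch.as_tensor(s, dtype=torch.float32))
        s = env_u.reset() if done else s_next
    unlearn_states = torch.stack(states)

    # Step 2: fine-tune on the loss L_u (forget in M_u, preserve elsewhere).
    opt = torch.optim.Adam(q_net.parameters(), lr=lr)
    for _ in range(100):
        term_forget = q_net(unlearn_states).abs().max(dim=1).values.mean()
        with torch.no_grad():
            q_ref = q_net_ref(retained_states)
        term_keep = (q_net(retained_states) - q_ref).abs().max(dim=1).values.mean()
        loss = term_forget + term_keep
        opt.zero_grad()
        loss.backward()
        opt.step()
    return q_net
```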
<ref> has been both theoretically and empirically proven <cit.>[Although there have been arguments regarding the learning convergence, they have proposed solutions to improve its performance <cit.>. These solutions include adding additional terms to the loss function or using two loss functions and selecting the one that yields a larger result.], we can also conclude the convergence of 𝔼_s∼𝒮_u[|δ_π'|] in Eq. <ref>.For the first term in the right of Eq. <ref>, 𝔼_s∼𝒮_u[|y_π'|], as it is computed by accumulating the previously collected discounted rewards (Eq. <ref> and <ref>), the term converges if the rewards are bounded.The reward bound can be acquired by proper definition, i.e., r∈[-R_max,R_max].Thus, as both 𝔼_s∼𝒮_u[|y_π'|] and 𝔼_s∼𝒮_u[|δ_π'|] converge, 𝔼_s∼𝒮_u[|y_π'-δ_π'|] also converges, i.e., 𝔼_s∼𝒮_u[||Q_π'(s)||_∞] converges.For the second term in Eq. <ref>, to analyze its convergence, we need the following theorem. Let (π_i)^K_i=0 be a sequence of policies with regard to the sequence (Q_i)^K_i=0 of Q-functions learned using a fitted Q-iteration.Then, the following inequality holds.||Q_i-Q^*||_∞≤||ξ_i-1||_∞+γ||Q_i-1-Q^*||_∞+ζ||Q_i-1||_∞,where Q^* is the optimal value function, ξ_i denotes the approximation error: ξ_i=T^π_iQ_i-Q_i+1 which is also bounded, T is the Bellman operator, and ζ is a constant.Theorem <ref> provides evidence that the disparity between the learned Q-function and the optimal Q-function diminishes as the learning process advances. This reduction signifies convergence, given that the Q-function, denoted as Q, remains uniformly bounded by R_max/1-γ for any policy π <cit.>. Consequently, if the number of learning iterations is sufficiently large, the method converges. In our problem, the second term in Eq. <ref>, 𝔼_s≁𝒮_u[||Q_π'(s)(s)-Q_π(s)(s)||_∞], is intended to minimize the performance discrepancy between the unlearned policy π' and the well-trained policy π across all environments except ℳ_u. In this context, the well-trained policy π can be considered as the optimal policy, while the unlearned policy π' represents the policy we aim to learn. Notably, this learning process is analogous to the one described in Theorem <ref>, implying that the second term also exhibits convergence. §.§ Environment Poisoning-based MethodThis method is implemented by modifying the unlearning environment itself. This modification can include various changes, i.e., the poisoning actions, such as altering the layout in the grid world scenario by adding or removing obstacles and repositioning targets within the environment. After these changes are introduced, the agent is updated in this modified environment. This method aims to influence the agent's policy learning by creating a situation where its previously learned knowledge becomes less effective, particularly within the context of the unlearning environment.Specifically, this method consists of three distinct steps. Firstly, we apply a random poisoning strategy to alter the transition function of the unlearning environment. Secondly, the agent learns a new policy in this modified environment.Lastly, based on the agent's learned policy, we update the poisoning strategy and re-poison the unlearning environment. These three steps are iteratively repeated until a predetermined number of poisoning epochs is reached, effectively refining the unlearning process. 
By employing this iterative approach, the method enhances the agent's ability to unlearn specific experiences associated with the targeted unlearning environment.The schematic diagram of this method is presented in Figure <ref>, illustrating that the targeted unlearning environment ℳ_u is altered to a new one ℳ'_u with strategically introduced perturbations, e.g., adding fake obstacles, and the agent is retrained in this poisoned environment to learn a new policy π'.Let us consider the learned policy as π^*, which is regarded as the optimal policy. To refine the agent's policy, we manipulate the given environment ℳ_u. The manipulation of ℳ_u involves poisoning the transition function 𝒯_u(s'|s,a) <cit.>, where ℳ_u = ⟨𝒮_u, 𝒜_u, 𝒯_u, r⟩. We introduce a poisoned transition function denoted as 𝒯̂_u(ŝ'|s,a). After the agent takes action a in state s, instead of observing the intended state s', it will observe the manipulated state ŝ'. The challenge now lies in determining the appropriate state ŝ', which can mislead the agent's learning process. To address this, we define a new learning environment for poisoning, denoted as ℳ_p = ⟨𝒪, 𝒢, 𝒫, ℛ⟩.By constructing this new environment, we can manipulate the state transitions experienced by the agent, thus guiding its learning process in a desired manner.* 𝒪 denotes the set of poisoning states. Each state π_i∈𝒪 is the policy used by the agent during the i-th poisoning epoch. * 𝒢 is the set of poisoning actions. A poisoning action g∈𝒢 signifies a modification made to the transition function of the unlearning environment 𝒯_u. This modification determines which state should be presented to the agent as the new state. * 𝒫:𝒪×𝒢×𝒪→[0,1] defines the poisoning state transition. It describes how the agent adjusts its policy in response to poisoning actions. Specifically, 𝒫(π'|π,g) is the probability of the agent transitioning from policy π to policy π' when the unlearning environment's transition function is modified by g.* ℛ:𝒪×𝒢×𝒪→ℝ represents the reward function, which serves two purposes. Firstly, it quantifies the disparity between the current policy π_i and the updated policy π' in the unlearning environment ℳ_u. Secondly, it incorporates the rewards obtained by the agent in other environments while utilizing π_i.Specifically, the reward function is defined as ℛ_i:=λ_1Δ(π_i(s_i)||π'(s_i))+λ_2∑_s≁𝒮_u∑_aπ_i(s,a)r(s,a).ℛ_i represents the reward received during the i-th poisoning epoch. The term Δ(π_i(s_i)||π'(s_i)) indicates the difference between π_i(s_i) and π'(s_i). Here, π_i(s_i) represents the probability distribution over the available actions in state s_i under policy π_i. This difference can be measured using either KL-divergence or cosine similarity. Coefficients λ_1 and λ_2 are introduced to balance the two terms. Note that precisely computing the second term is computationally infeasible due to the involvement of states from all environments except ℳ_u.Thus, similar to the decremental RL-based method, in the implementation of the poisoning-based method, a uniform selection of states across all the environments, except ℳ_u, is performed in each poisoning epoch. The proposed poisoning-based method is outlined in Algorithm <ref>. In each poisoning epoch i, we take the first step by choosing a poisoning action g_i to modify the transition function of the unlearning environment (Lines 1-3). 
The selection process can be implemented using an ϵ-greedy strategy, where the best action is chosen with a probability of 1-ϵ, and the remaining actions are chosen uniformly with a probability of ϵ/|𝒢|-1. Here, the “best action” denotes the action that results in the highest Q-value, signifying the maximum expected future reward. This criterion guides the agent to identify the most effective way to modify the unlearning environment, ensuring optimal adjustments based on the expected outcomes.Next, the agent performs the second step by learning a new policy π_i in the altered environment (Line 4). This learning phase can be performed using any deep reinforcement learning algorithm, such as deep Q-learning <cit.>. Once π_i is learned, we execute the third step by evaluating the reward using Eq. <ref> and updating the poisoning strategy using samples (π_i-1,g_i,π_i,ℛ_i) from batch ℬ (Lines 5 and 6). The update process is carried out using the DDPG algorithm <cit.>. After all poisoning epochs are completed, we obtain a refined policy π̂ (Line 8), which allows the agent to perform poorly in the unlearning environment ℳ_u, while maintaining satisfactory performance in other environments.Convergence Analysis of the Method.  Algorithm <ref> represents a deep RL algorithm employed by the model owner. This algorithm utilizes the agent's policies as states, the modification of the unlearning environment as actions, and the performance of the agent's policies in the environments as rewards. The interaction between the algorithm and the agent's learning process is illustrated in Figure <ref>. In this figure, the model owner is engaged in learning how to poison the unlearning environment ℳ_u, while the agent is concurrently learning within the poisoned environment ℳ'_u. Given that our primary concern lies in the performance of the agent's policies, our analysis primarily revolves around these policies. Each policy π is associated with a state distribution denoted as μ_π, which can be defined as:μ_π:=(1-γ)∑^∞_t=0γ^tℙ[s_t=s|s_0∼ d_0,π],where d_0 is the initial state distribution and μ_π>0 for each state s.Here, μ_π satisfies the following Bellman flow constraints <cit.>:μ_π=(1-γ)d_0+γ∑_s'𝒯(s'|π(s'),s)μ_π(s').Then, the score of policy π can be defined as: ρ_π(ℳ,d_0):=∑_sμ_π(s)r(s,π(s)).The policy score ρ_π quantifies the quality of a policy π, with a higher score indicating a better policy.Specifically, ρ_π has the following property.For two policies π and π', the following equation holds:ρ_π-ρ_π'=∑_s∈𝒮μ_π'(s)(Q_π(s,π(s))-Q_π(s,π'(s))). Let us examine the expression Q_π(s,π(s))-Q_π(s,π'(s)). To simplify the analysis, we introduce a seminorm called the span. The span of Q is defined as sp(Q) = max_i Q(s_i,a_i) - min_j Q(s_j,a_j). This seminorm measures the maximum difference between the highest and lowest values of the function Q across different states and actions.Certainly, we have: Q_π(s,π(s))-Q_π(s,π'(s))≤ sp(Q_π). Then, we have: ρ_π-ρ_π'≤ sp(Q_π)|𝒮|∑_s∈𝒮μ_π'(s).Based on Eq. <ref> and <ref>, it can be inferred that the span sp(Q_π) is limited by the cumulative reward, while μ_π' is bounded by the transition function. The reward is predefined by users, and the initial state distribution d_0 remains fixed for a given environment. Thus, the only variable that influences the difference in policy scores, ρ_π-ρ_π', is the transition function dictated by the environment. This rationale underscores why we opt for environment-poisoning as our unlearning method. 
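To make the three-step loop described above more concrete, the following minimal sketch outlines one possible shape of a single poisoning epoch. It is an illustration under our own assumptions rather than a transcription of Algorithm <ref>; the callables q_value, retrain_agent, divergence, retained_return, and update_strategy are hypothetical placeholders for the components named in the text (e.g., deep Q-learning for retraining and a DDPG-style update for the poisoning strategy).

```python
import random

def poisoning_epoch(poison_actions, q_value, apply_poison, retrain_agent,
                    divergence, retained_return, update_strategy,
                    lam1=1.0, lam2=1.0, eps=0.1):
    """One poisoning epoch, following the three steps described in the text.

    All callables are supplied by the caller and stand in for the components
    named above; none of them are part of a released implementation.
    """
    # Step 1: epsilon-greedy selection of a poisoning action g_i, where the
    # "best" action is the one with the highest Q-value under the strategy.
    if random.random() < eps:
        g = random.choice(poison_actions)
    else:
        g = max(poison_actions, key=q_value)
    apply_poison(g)  # modify the transition function of the unlearning environment

    # Step 2: the agent learns a new policy pi_i in the poisoned environment
    # (e.g., via deep Q-learning).
    pi_i = retrain_agent()

    # Step 3: evaluate the poisoning reward R_i (divergence term plus the
    # return sampled from states outside M_u, weighted by lambda_1, lambda_2)
    # and update the poisoning strategy from the collected sample.
    reward = lam1 * divergence(pi_i) + lam2 * retained_return(pi_i)
    update_strategy(g, pi_i, reward)
    return pi_i, reward
```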
§.§ Comparison of the Two Methods We compare the two methods in terms of their ability to overcome the “over-unlearning” and “catastrophic forgetting” issues. In the context of a set of environments (ℳ_1,...,ℳ_n) and an unlearning environment ℳ_u, the objective of the decremental RL-based method is to minimize the agent's return specifically in ℳ_u while ensuring that the return in the remaining environments remains unaffected. In contrast, the poisoning-based method aims to develop a strategy to modify the unlearning environment ℳ_u into ℳ'_u and subsequently fine-tune the agent within this modified environment. The objective is to ensure that the agent adapts well to ℳ'_u while exhibiting poor performance in the original unlearning environment ℳ_u. Thus, the key distinction between the two methods lies in their approaches. The decremental RL-based method focuses on erasing the knowledge acquired by the agent in ℳ_u, while the poisoning-based method encourages the agent to learn in a new, altered environment, i.e., ℳ'_u. One advantage of the poisoning-based method over the decremental reinforcement learning-based method is its ability to address the over-unlearning issue associated with the latter. The decremental reinforcement learning-based method may inadvertently suffer from over-unlearning, which occurs when the deep reinforcement learning model is fine-tuned to degrade the agent's learning performance in ℳ_u. This issue is also observed in conventional machine unlearning scenarios <cit.>. Even if efforts are made to restrict the deterioration to ℳ_u, it may still affect other environments due to the shared distribution among them. However, the poisoning-based method inherently avoids this issue by focusing on enabling the agent to learn new knowledge rather than intentionally forgetting existing knowledge. Thus, the poisoning-based method has the potential to achieve superior performance in non-unlearning environments compared to the decremental RL-based method. A valid concern regarding the poisoning-based method is the potential occurrence of catastrophic forgetting, which arises when the continual updating of the deep reinforcement learning model results in the overwriting of previously acquired knowledge. However, this issue does not arise in the context of the poisoning-based method. The primary cause of catastrophic forgetting is a shift in the input distribution across different environments <cit.>. In our scenario, the modified environment ℳ'_u retains the same distribution as the other environments. This is because the modification is solely applied to the transition function, while the state and action spaces, as well as the reward function, remain unchanged. Specifically, the transition function dictates the evolution of states based on the actions taken by the agent, governing how states change over time. In contrast, the state and action spaces, as well as the reward function, are pre-defined by the model owner during the training of the agent. These foundational elements remain relatively unaffected by modifications to the transition function. Thus, there is no distribution shift across environments in our problem, thereby mitigating the risk of catastrophic forgetting. §.§ Environment Inference Attack One of our contributions is to propose a new evaluation methodology named environment inference attacks. This kind of attack aims to infer an agent's training environments by observing the agent's behavior <cit.>.
By employing this approach, we can assess whether the agent has successfully eradicated the knowledge of the unlearning environment. If the removal of the unlearning environment's knowledge is executed correctly, the agent's behavior in that environment should be random rather than purposeful. Thus, by observing the agent's behavior, an adversary can only infer a randomized environment, devoid of any specific knowledge. A notable environment inference attack <cit.> utilizes a genetic algorithm to identify a transition function that not only satisfies specific constraints but also provides the best possible explanation for the observed policy. Inspired by this approach, our research also employs the same genetic algorithm to infer the unlearning environments.Specifically, by observing the agent's behavior, we can obtain a policy π_target.In each iteration, we maintain a set of dynamic transition functions. For each transition function 𝒯, we train an optimal policy π^*_𝒯. By quantifying the similarity between π_target and π^*_𝒯, we obtain a fitness score denoted as Score(π_target,π^*_𝒯). Our objective is to identify a transition function 𝒯 that maximizes this fitness score. To achieve this objective, the top n candidates, referred to as the elite population, are selected based on their fitness scores and carried over to the next generation. The remaining candidates are generated through a process involving two parents selected from the previous generation, with the selection based on their scores. These selected parents undergo a two-point crossover, generating child candidates. Finally, to introduce diversity, a random mutation is applied to the child candidates.§ EXPERIMENTAL EVALUATION §.§ Experimental Setup Evaluation metrics in conventional machine unlearning <cit.> are not applicable to reinforcement unlearning.For instance, in reinforcement unlearning, there are no specific datasets to be forgotten, rendering metrics like “accuracy on forget set” irrelevant.Thus, it is necessary to propose new metrics. Cumulative Rewardquantifies the total sum of rewards accumulated by an agent while utilizing the acquired policy. The Number of Stepsquantifies the total number of steps taken by an agent to reach its goal or complete a task. Environment Similarity quantifies the resemblance between the inferred environment and the original one. It is evaluated by doing an environment inference attack.To compute the similarity, we calculate the percentage of agreement between the inferred and original environments. §.§.§ TasksThe experiments were conducted across four learning tasks: grid world, aircraft landing, virtual home and maze explorer. The virtual home and maze explorer tasks were sourced from the VirtualHome <cit.> and MazeExplorer <cit.>, respectively, while the other two tasks were developed by us.Each of these tasks was chosen to represent a broad spectrum of real-world applications, allowing us to explore the versatility and applicability of our proposed reinforcement unlearning methods.Although there are well-known RL tasks available, e.g., Gym <cit.> and Atari <cit.>, they were deemed unsuitable for our researchas those RL tasks are designed for single-environment and do not support multiple environments. Grid World.  This task consists of an agent and a predetermined destination within an environment. The objective for the agent is to navigate towards the destination. The agent has four distinct actions: moving up, down, left, and right. Aircraft Landing.  
This task simulates an aircraft landing on the ground by avoiding the obstacles. The aircraft has four available actions: moving up, down, left, and right. Virtual Home.  Virtual home is a multi-agent platform designed to simulate various activities within a household setting. Agents are situated within a simulated household environment and engage in interactions with the objects present. Maze Explorer.  Maze explorer is a customizable 3D platform. The objective is to guide an agent, learning solely from visual information, through a procedurally generated maze to collect a predetermined number of keys. In these tasks, environments are instantiated with predetermined sizes. The instantiation process involves two steps. Firstly, each environment is randomly generated, introducing variability in the placement of obstacles. Subsequently, a manual inspection is carried out to eliminate any instances of “dead locations”. These are locations within the environment that become inaccessible due to being entirely surrounded by obstacles. Specifically, in the grid world task, the arrangement of obstacles within each environment is randomized to mimic real-world scenarios where the spatial distribution of barriers is not predetermined. The subsequent manual check for dead locations ensures that the generated environments are realistic and conducive to effective agent navigation.While our unlearning methods are evaluated in tasks with discrete state and action spaces, they can also be applied to tasks with continuous state and action spaces. This is because our approaches are independent of the underlying reinforcement learning algorithms. To address continuous spaces, one can simply integrate our unlearning techniques with suitable RL algorithms, e.g., <cit.>, designed for such environments. §.§.§ Comparison MethodsAs we introduce a novel sub-field of reinforcement unlearning, there are no existing works closely related. To establish a benchmark, we propose two baseline methods. Learning from scratch (LFS). This method entails removing the unlearning environment and subsequently retraining the agent from scratch using the remaining environments when an unlearning request is received <cit.>. However, this method is not a desirable criterion for reinforcement unlearning.In the forthcoming experimental results, we will show that this approach fails to fulfill the objectives of reinforcement unlearning as defined in Section <ref>.Non-transferable learning from scratch (Non-transfer LFS).  To align with the objectives of reinforcement unlearning, we introduce a non-transferable learning-from-scratch approach. This approach is similar to the previously mentioned learning-from-scratch approach. However, a crucial distinction lies in the non-transferable version, which incorporates the non-transferable learning technique <cit.> to restrict the approach's generalization ability within the unlearning environments. In this approach, while training an agent, the model owner meticulously stores experience samples acquired from all learning environments, labeling them according to their source environment. When an unlearning request is initiated, the model owner engages in an offline retraining process. Specifically, all the collected experience samples are utilized in retraining the agent. If a sample originates from the unlearning environment, an inverse loss function is applied to minimize the agent's cumulative reward. 
Conversely, for samples from other environments, the standard loss function is used to maximize the agent's overall reward. Denoting the unlearning environment as ℳ_u=(𝒮_u,𝒜_u,𝒯_u,r), the loss functions are defined in Eq. <ref>.ℒ= -1/|B|∑_e∈ B[(r(s_t,a_t)+γmax_a_t+1Q(s_t+1,a_t+1;θ) -Q(s_t,a_t;θ))^2],if s_t∈𝒮_u,1/|B|∑_e∈ B[(r(s_t,a_t)+γmax_a_t+1Q(s_t+1,a_t+1;θ) -Q(s_t,a_t;θ))^2],otherwise.Indeed, we have also conducted evaluations on a random-walking agent, which uniformly selects actions at random in each state. This random-walking agent functions as a representation of a completely unlearned agent, signifying it has forgotten all prior knowledge and returned to the initial status. This can serve as a foundational benchmark in the evaluation. However, the performance of this agent is notably degraded, requiring, for instance, several thousand steps to accomplish a given task. While these results provide valuable insights, their inclusion in the figures can obscure the clarity of other methods under consideration. Therefore, to maintain the focus and coherence of the paper, we have opted not to include these results in the final presentation.§.§.§ Sample Complexity of Unlearning MethodsIn the learning-from-scratch method (LFS), the retraining process involves leveraging all the experience samples collected from various environments, excluding the unlearning environment. This extensive dataset is used for the comprehensive retraining of the agent. Similarly, in the Non-transfer LFS, retraining utilizes all experience samples, encompassing those from the unlearning environment.In contrast, when evaluating the performance of the decremental RL-based and the poisoning-based methods, only a small subset of these samples, approximately one-tenth, is employed to fine-tune the agent to generate the final experimental results. There might be a concern regarding our proposed methods, as they allow the agent to engage in additional interactions with the unlearning environment. In contrast, both LFS and Non-transfer LFS do not involve further interactions with any environments. However, this additional interaction in our methods does not bring any extra advantages. The purpose of engaging with the unlearning environment is solely to collect experience examples. These experience samples are not required by either LFS or Non-transfer LFS since their objective is precisely to forget this information. Therefore, the absence of these samples, i.e., the lack of such interactions, does not impact the performance of both LFS and Non-transfer LFS. §.§ Overall PerformanceThe presented experimental results were derived by averaging the outcomes across 100 rounds of repeated experiments, and a 95% confidence interval of ±3% was calculated. The variances of the average reward and steps are both below 7 and 10, respectively. However, for clarity, they are not visually presented in the figures.Figure <ref> presents the overall performance of the decremental RL-based method in the grid world setting. 
The obtained results provide compelling evidence of the profound impact of the unlearning process on the agent's performance, as evident from the average number of steps taken and the average received rewards metrics.Following unlearning, the agent demonstrates a substantial increase in the average number of steps taken and a notable reduction in the average received rewards compared to the pre-unlearning stage.For example, in Figure <ref>, after unlearning, it is evident that the average number of steps taken by the agent in the unlearning environment substantially increases from 19.34 to 55.8, while in Figure <ref>, its reward decreases from -44 to -273.5. These findings indicate a significant performance reduction in the unlearning environment, which can be interpreted as a successful unlearning outcome. Conversely, in the retained environments, we observe minimal changes in the agent's steps and rewards. This implies a successful preservation of performance in these environments. Figure <ref> presents the overall performance of the poisoning-based method in the grid world setting. It exhibits a similar trend to the decremental RL-based method. The reason for this similarity lies in the shared objective of both methods, which is to degrade the agent's performance within the targeted unlearning environment while maintaining its performance in other environments. As a result, both methods effectively achieve the goal of reinforcement unlearning by selectively modifying the agent's behavior within the specified context.However, upon closer comparison between Figures <ref> and <ref>, we can observe slight differences in the performance of the two methods in some remaining environments, such as Environments 18 and 19. In these environments, the poisoning-based method maintains almost unchanged steps and rewards between the pre-unlearning and post-unlearning stages, while the decremental reinforcement learning-based method does not achieve this. This result suggests that the decremental reinforcement learning-based method can potentially suffer from the over-unlearning issue to some extent, while the poisoning-based method demonstrates its ability to overcome this issue and retain better performance in the remaining environments after unlearning. These findings highlight the different characteristics and strengths of the two unlearning methods.In Figure <ref>, it becomes evident that the unlearning results of the learning-from-scratch (LFS) baseline method in all four experimental settings are subpar. The agent's performance in the unlearning environment remains nearly unchanged before and after the unlearning process. The reason for this lackluster performance lies in the agent's ability to generalize knowledge from other environments and apply it to the unlearning environment, despite never having encountered it before.During training, the agent learns underlying rules and strategies from various environments. For instance, in the grid world setting, the agent acquires knowledge that obstacles should be avoided while collecting the target as quickly as possible. This learned knowledge, even if it was acquired in different environments, enables the agent to still perform well in unseen environments, including the unlearning environment. As a result, the baseline method proves to be ineffective as a reinforcement unlearning technique.In contrast, the unlearning results of the Non-transfer LFS baseline method surpass those of the regular LFS due to the limitation on its generalizability. 
Notably, the Non-transfer LFS method exhibits a considerable deterioration in performance within the unlearning environment while maintaining effectiveness in other environments. These outcomes underscore the effectiveness of incorporating an inverse loss function to minimize the agent's cumulative reward in the unlearning environment. §.§ Hyperparameter Study Impact of Environment Size.  The alteration in the environment size allows us to evaluate how well the unlearning methods adapt and perform across different scales.This evaluation helps us determine the suitability of the proposed methods in different environmental settings.In the grid world setting, we extend the size of the environment from 5× 5 to 15× 15, resulting in a larger grid. Figure <ref> visually depicts the impact of this increased environment size on both methods. As illustrated in the figure, we observe that with the expansion of the environment, the discrepancy in rewards between the pre-unlearning and post-unlearning stages is magnified for both methods. The reason behind this phenomenon is that the larger grid size introduces a greater number of states for the agent to navigate. Consequently, unlearning becomes a more challenging task as the agent must modify its learned behavior to adapt to the enlarged environment. Moreover, this magnification effect can also be attributed to the increased number of possible trajectories and interactions in the expanded grid world.After unlearning, the behavior of the agent in the unlearning environment becomes randomized. As a consequence, a wider range of possible trajectories and interactions often leads to longer paths taken by the agent, thereby resulting in lower rewards attained. Hence, the discrepancy in rewards between the pre-unlearning and post-unlearning stages becomes more pronounced.Impact of Poisoning Level.  The hyperparameter, poisoning level, serves as a pivotal factor in evaluating and testing the poisoning-based method exclusively. This parameter governs the quantity of poison introduced to the agent during the unlearning process, enabling us to investigate how the method performs under varying levels of poisoning.Specifically, the poisoning level is measured by the difference between the intended state s' and the manipulated state ŝ'. To illustrate, in the grid world context where an agent's state comprises eight dimensions, a poisoning level of 3 indicates that the two states differ in three dimensions.Figure <ref> illustrates the impact of changing the poisoning level on the evaluation metrics in the grid world setting. As the poisoning level increases, the values of all the evaluation metrics demonstrate a consistent downward trend. This trend indicates an improvement in the unlearning results with higher poisoning amounts. The reason behind this promising phenomenon lies in the nature of the poisoning-based method and its strategic use of targeted perturbations. As the poisoning level escalates, the method introduces a more substantial amount of deceptive information into the agent's policy, causing a stronger deviation from the optimal path.This increase in poisoning intensity effectively compels the agent to unlearn its previous behaviors more forcefully, encouraging it to abandon suboptimal policies. Thus, the agent's learned policy becomes more adaptable and resilient, leading to enhanced performance in unlearning unwanted knowledge. 
Moreover, higher poisoning amounts facilitate a more efficient exploration of the policy space, allowing the agent to escape local optima and discover better solutions. Thus, the unlearning process becomes more effective in refining the agent's behavior and enhancing its performance.§.§ Adaptability Study Dynamic Environments.  To illustrate the performance of our methods in dynamic environments, where the features and layouts of environments can change during agent training, we introduce a slight modification to the unlearning problem. Specifically, we consider an unlearning environment denoted as ℳ_u, and we employ time steps to represent changes in the environment. As time progresses in t steps, the evolution of the unlearning environment can be represented as ℳ^1_u, …, ℳ^t_u. Thus, the problem of unlearning ℳ_u transforms into the task of unlearning ℳ^1_u, …, ℳ^t_u.The experimental results, shown in Figure <ref>, were derived by setting t=5 and averaging the outcomes across the five environments. Similar outcomes can be observed for other values of t, e.g., 3 and 8. The results indicate that even in this dynamic setting, the outcomes of post-unlearning remain notably favorable. The resilience and effectiveness are attributed to the carefully designed mechanisms inherent in our reinforcement unlearning methods. Both unlearning methods involve dynamism during their operation. The decremental RL-based method dynamically adjusts the agent's knowledge, ensuring it remains effective even as the environment undergoes alterations. Similarly, the poisoning-based method introduces dynamism by modifying the unlearning environment, ensuring the agent to perform optimally in the evolving environment.Generalization.  To assess the generalization capability of our unlearning methods, we evaluated the unlearned models in unseen environments. We established the ratio between training environments and unseen environments as 4:1, employing 20 training environments and 5 unseen environments. This configuration is analogous to the typical setting of the ratio between the size of the training set and the test set in conventional machine learning. The outcomes, shown in Figure <ref>, were derived by averaging the results across the five unseen environments. The results indicate that the performance is sustained in these unseen settings. This suggests that our methods do not compromise the models' generalization ability; instead, they selectively impact the models' performance in the unlearning environments. This success can be attributed to the precision of our unlearning methods, which erase only the features specific to each unlearning environment while preserving the underlying rules and knowledge gained from training environments.Robustness.  Robustness gauges the strength and resilience of a method in the face of external perturbations. Evaluating robustness entails assessing how the unlearning methods perform when subjected to external perturbations.To conduct the evaluation, we introduce noise to the agent's actions during both training and unlearning. The introduction of noise is achieved by randomly perturbing the probability distribution over the agent’s actions. This perturbation involves adding a small randomly generated number, falling within the range of [-0.1, 0.1], to a randomly selected probability in the distribution. This noise represents random variations or disturbances that can occur in real-world scenarios. 
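As a rough illustration of this perturbation, the sketch below adds a value drawn from [-0.1, 0.1] to one randomly chosen entry of an action distribution. The clipping and renormalization steps are our own assumptions, included only so that the perturbed vector remains a valid probability distribution.

```python
import numpy as np

def perturb_action_distribution(probs, magnitude=0.1, rng=None):
    """Add noise in [-magnitude, magnitude] to one randomly chosen action
    probability; clipping and renormalization are assumptions added here so
    that the result is still a probability distribution."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = np.asarray(probs, dtype=float).copy()
    idx = rng.integers(len(noisy))                      # randomly selected probability
    noisy[idx] += rng.uniform(-magnitude, magnitude)    # small random disturbance
    noisy = np.clip(noisy, 0.0, None)                   # keep probabilities non-negative
    return noisy / noisy.sum()                          # restore normalization

# Example: perturbing a uniform distribution over the four grid-world actions.
print(perturb_action_distribution([0.25, 0.25, 0.25, 0.25]))
```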
The corresponding results of the grid world setting are presented in Figure <ref>.Upon analyzing the outcomes, a notable observation emerges: both the decremental reinforcement learning-based and poisoning-based methods exhibit remarkable robustness against external noise. Despite the introduction of noise, the difference in both steps and rewards between the pre-unlearning and post-unlearning states remains nearly unchanged for both methods. This robustness can be attributed to the inherent adaptability and resilience of the unlearning methods. In the case of the decremental RL-based method, the gradual modification of the agent's policy allows it to withstand minor variations in observations, ensuring that its learned behavior remains stable despite external noise.Similarly, the poisoning-based method's strategic use of targeted perturbations enables the agent to develop a more adaptive policy. Thus, the agent's behavior proves to be less affected by the noise, maintaining its consistency in unlearning undesired knowledge. §.§ Environment Inference Testing This inference enables us to infer the environment that the agent needs to forget, allowing for a comparison of the inference outcomes before and after unlearning. In Figure <ref>, we present the results of the aircraft landing setting for the decremental RL-based method.Upon analyzing the figures, a compelling observation emerges. When the environment size is 5× 5, the inference attack successfully recreates about 50% of the unlearning environment before unlearning. However, after unlearning, this inference result significantly reduces to only 20%. This result provides clear evidence of a successful unlearning process.It is important to highlight that the inference attack employed in our experiments is not particularly effective. However, its use is solely for the purpose of illustrating the change in the percentage of an environment that can be inferred before and after unlearning. Evaluating our unlearning methods against a more potent inference attack is left as future research.The reason behind this success lies in the unlearning methods' capability to modify the agent's learned policy effectively. Both of the proposed methods adapt the agent's behavior to forget specific aspects of the environment while preserving essential knowledge. Thus, the environment inference attack becomes less effective in recreating the forgotten parts after unlearning. Also, the visual comparison highlights the unlearning methods' efficiency in refining the agent's policy to eliminate unwanted behaviors. The process of inferring the forgotten environment confirms the success of our unlearning methods in reducing the agent's reliance on previously learned information and adapting to changes in the environment.The results observed in the grid world scenario (Figure <ref>) are similar to the results in the aircraft landing setting, where the environment inference accuracy is significantly reduced after the unlearning process. In the grid world scenario, as the agent unlearns specific navigation patterns, the environment inference attack struggles to recreate the forgotten parts accurately. The unlearning process alters the agent's policy, leading to changes in navigation trajectories. Thus, the inferred environment fails to capture the full complexity and intricacies of the original environment. Note that these results not only show the accuracy of the inference but also exhibit the associated inference difficulty. 
A lower inference accuracy indicates a higher level of complexity in deducing information about an environment. It is essential to recognize that a diminished accuracy does not merely signify an error rate; rather, it implies that the inference process demands more time and effort to discern the intricacies of the environment accurately.In the virtual home setting, where an inference attack is not applicable, we evaluated unlearning performance by measuring the time it took the agent to complete a predefined task before and after unlearning. As illustrated in Figure <ref>, before unlearning, the agent could swiftly complete the task in 10 seconds. However, after unlearning, the agent needed 1 minute and 45 seconds to accomplish the same task in the same environment, showcasing its diminished performance resulting from the unlearning process. § RELATED WORK Machine Unlearning. The concept of machine unlearning was initially introduced by Cao et al. <cit.>. They employed statistical query learning and decomposed the model into a summation form, enabling efficient removal of a sample by subtracting the corresponding summand.Later, Bourtoule et al. <cit.> proposed SISA training, which involves randomly partitioning the training set into multiple shards and training a constituent model for each shard.In the event of an unlearning request, the model provider only needs to retrain the corresponding shard model. Warnecke et al. <cit.> shifted the focus of unlearning research from removing samples to removing features and labels. Their approach is based on the concept of influence functions, which allows for estimating the influence of data on learning models.Machine unlearning has also been explored from a theoretical perspective.Ginart et al. <cit.> introduced the concept of (ϵ,δ)-approximate unlearning, drawing inspiration from differential privacy (DP) <cit.>.Subsequently, Guo et al. <cit.> formulated unlearning as certified removal and provided theoretical guarantees. They achieved certified removal by employing convex optimization followed by Gaussian perturbation on the loss function. Gupta et al. <cit.> considered update sequences based on a function of the published model.They leveraged differential privacy and its connection to max information to develop a data deletion algorithm. Thudi et al. <cit.> argued that unlearning cannot be proven solely by training the model on the unlearned data. They concluded that unlearning can only be defined at the level of the algorithms used for learning and unlearning.Reinforcement Learning Security. While reinforcement unlearning remains an underexplored area, considerable research has been devoted to reinforcement learning security. For instance, studies have extensively investigated the vulnerabilities present in RL systems <cit.> and policy explanation with security applications <cit.>. However, it is crucial to note that these studies differ significantly from reinforcement unlearning for three distinct reasons. Firstly, the focus of prior research in reinforcement learning security primarily revolves around identifying and addressing vulnerabilities within the learning process. In contrast, our work centers on the fundamental task of how to effectively forget previously acquired knowledge. Secondly, existing studies in reinforcement learning security commonly aim to train robust agents capable of withstanding diverse adversarial activities. 
In contrast, our objective is to enable the unlearning of knowledge in a well-trained agent, highlighting a different goal and approach. Lastly, prior research endeavors to explain learned policies, particularly within security applications. In contrast, our research focuses on relearning policies based on revoke requests, giving a novel perspective on policy adaptation. § CONCLUSIONThis paper presents a pioneering research area, termed reinforcement unlearning, which addresses the crucial need to protect the privacy of environment owners by enabling an agent to unlearn entire environments. We propose two distinct reinforcement unlearning methods: decremental RL-based and environment poisoning-based approaches. These methods are designed to be adaptable to different situations and provide effective mechanisms for unlearning. Also, we introduce a novel concept termed “environment inference” to evaluate the outcomes of the unlearning process. This evaluation framework allows us to assess the efficacy of our unlearning methods and gauge the level of privacy protection achieved in reinforcement learning-driven critical applications.ACM-Reference-Format § APPENDIX Additional experimental results, along with details regarding the model architecture, are provided in this appendix.§ MODEL ARCHITECTURE In the grid world and aircraft landing settings, we employ a fully-connected neural network as the model for our reinforcement learning agent. The neural network architecture consists of an input layer, two hidden layers, and an output layer. The input layer takes a 10-dimensional vector as input, representing the relevant features of the environment. The output layer generates a 4-dimensional vector, representing the probability distribution over the four possible actions: up, down, left, and right.The network architecture incorporates two hidden layers to facilitate learning and representation of complex patterns. The first hidden layer comprises 64 neurons, while the second one consists of 32 neurons. In the virtual home and maze explorer settings, we employ a Convolutional Neural Network (CNN) comprising three CNN blocks and one hidden layer with 512 neurons. This network receives visual information as input with a size of 140× 120 and produces a 4-dimensional vector, indicating the probability distribution across the four possible actions: up, down, left, and right. The weights of these neural networks are randomly initialized.§ OVERALL PERFORMANCEFigures <ref>, <ref> and <ref> demonstrate the overall performance of the decremental RL-based method in the context of aircraft landing, virtual home and maze explorer, respectively. In all the three scenarios, the agent's behavior exhibits a notable increase in steps taken and a significant decrease in rewards achieved after the unlearning process. The reason for these trends in the three scenarios is rooted in the fundamental nature of reinforcement unlearning. The unlearning process seeks to selectively modify the agent's behavior to forget specific environments or aspects of its learning history. Thus, the agent must re-explore and adapt to new circumstances, leading to fluctuations in its performance. Figures <ref>, <ref> and <ref> provide a comprehensive view of the poisoning-based method's performance in the aircraft landing, virtual home and maze explorer settings, respectively. Remarkably, the performance trend in all the three scenarios is similar to that of the decremental RL-based method. 
This observation reinforces the effectiveness of the poisoning-based approach in reinforcement unlearning, as it consistently achieves the objective of degrading the agent's performance in the targeted unlearning environment while preserving its capabilities in other environments. The consistent performance trend across different settings shows the method's versatility and potential applicability in various RL scenarios.§ HYPERPARAMETER STUDYImpact of Environment Size. In the aircraft landing setting (Figure <ref>), the discrepancy in rewards between the pre-unlearning and post-unlearning stages becomes more pronounced for both the decremental RL-based and poisoning-based methods. Interestingly, this discrepancy in the aircraft landing setting is even larger compared to that observed in the grid world setting. The reason behind this distinction can be attributed to the inherent differences in the features and complexities of the two settings. In the grid world setting, the environment primarily consists of discrete and structured grids, with agent navigating through straightforward paths. The relatively limited environment complexity in the grid world setting allows the unlearning methods to efficiently adapt the agent's behavior and modify its policy, resulting in significant yet manageable changes in rewards. On the other hand, the aircraft landing setting is considerably more intricate and continuous, involving multiple variables and parameters governing the aircraft's landing procedures. As the size of the environment increases, the number of possible landing trajectories and configurations expands exponentially. This complexity poses a greater challenge to the unlearning methods, requiring them to navigate a more vast policy space.Impact of Environment Complexity. The complexity of an environment can be characterized by the presence and arrangement of obstacles within it. By modifying the complexity of the environment, we can assess the adaptability of the proposed methods. In the grid world setting, we examine the impact of increasing the environment complexity by introducing more obstacles, with the environment size maintained at 10× 10. Figures <ref> and <ref> show the outcomes of the decremental RL-based method.We observe that as the number of obstacles is raised from 10 to 15, there is a notable increase in both the number of steps taken by the agent and the disparity in rewards between the pre-unlearning and post-unlearning stages. This observation suggests that with a moderate increase in complexity, the unlearning process becomes more challenging, resulting in a substantial alteration in the agent's behavior, leading to changes in both step count and rewards. However, intriguingly, as we further augment the number of obstacles from 15 to 20, the difference in both steps and rewards between before and after unlearning seems to stabilize or vary less significantly.The reason behind this behavior lies in the agent's learning adaptability. When the environment complexity rises from 10 to 15 obstacles, the agent faces substantial alterations in the optimal path and must undertake considerable unlearning to adjust its behavior accordingly. As a result, we observe a noticeable increase in step count and disparity in rewards. Conversely, when the number of obstacles increases from 15 to 20, the agent has already adapted its behavior to accommodate the increased complexity. 
As the agent's policy has already been modified, further increases in obstacle count have a diminishing impact on step count and reward disparity.However, when employing the poisoning-based method (depicted in Figures <ref> and <ref>), an interesting observation emerges. Unlike the decremental RL-based method, the difference in both steps and rewards between the pre-unlearning and post-unlearning stages remains relatively stable even as the number of obstacles increases. The reason behind this intriguing behavior lies in the nature of the poisoning-based approach. When we introduce additional obstacles to the environment, the poisoning-based method operates differently compared to the decremental RL-based approach. Instead of modifying the agent's learned policy gradually, the poisoning-based method incorporates an element of targeted perturbation. As the number of obstacles increases, the poisoning-based method strategically poisons the agent's policy by introducing deceptive information during the unlearning process. This targeted perturbation causes the agent's behavior to deviate from the optimal path more significantly, leading to relatively constant differences in both step count and reward between the pre-unlearning and post-unlearning phases.By undertaking a comparative analysis, it becomes evident that the poisoning-based method introduces a higher level of stability in performance compared to the decremental RL-based method. This enhanced stability is of significant interest and has several underlying reasons. Firstly, the poisoning-based approach leverages targeted perturbations to strategically poison the agent's policy during the unlearning process. By introducing adversarial elements in a controlled manner, this method consistently influences the agent's behavior, leading to more predictable changes in its performance.Secondly, the poisoning-based method's targeted perturbations are designed to cause deliberate deviations from the optimal path. As a result, the agent's policy becomes consistently misled in the presence of additional obstacles, leading to a stable performance difference between the pre-unlearning and post-unlearning states.Moreover, the consistent impact of the poisoning-based method can be advantageous in certain scenarios. For instance, in safety-critical environments, e.g., autonomous driving, where stability and predictability are crucial, the poisoning-based approach offers a more controlled and reliable means of unlearning unwanted behaviors. On the other hand, the decremental RL-based method, gradually modifying the agent's policy, leads to more varied and less predictable changes in behavior as the environment complexity increases. This approach makes it challenging to precisely anticipate the agent's performance changes in response to additional obstacles.In Figure <ref>, the similar results observed in the context of aircraft landing corroborate the findings in the grid world setting. This outcome can be attributed to the analogous reasons previously identified in the grid world scenario.Impact of Poisoning Level.The similar results across the remaining three scenarios, as depicted in Figure <ref> for aircraft landing, Figure <ref> for virtual home, and Figure <ref> for maze explorer, further strengthen the findings of the grid world setting. In all the three cases, it is observed that a greater amount of poison introduced during the unlearning process leads to improved unlearning results. 
These consistent outcomes emphasize the significance of the poisoning-based approach in reinforcement unlearning. By strategically introducing poison to encourage the agent to forget specific information, this method provides a promising solution to address the challenges of unlearning in complex environments. § ADAPTABILITY STUDYSimilar Environments. In our evaluation, we intentionally introduced scenarios where two environments were very similar, with differences limited to just one grid in the grid world setting. The agent was then subjected to the unlearning process in one of these environments. After unlearning, we carefully examined the agent's performance in the other environment.The results of this experiment, shown in Figure <ref>, provide valuable insights. They demonstrate that when an agent undergoes unlearning in one of two similar environments, it does not exhibit deteriorated performance in the other environment. This finding supports the idea that the agent's unlearning is environment-specific and does not extend to other environments, even if they are almost identical. This can be attributed to the characteristics of the two proposed unlearning methods. In the decremental reinforcement learning-based method, the agent is fine-tuned in the unlearning environment using a new loss function. In contrast, the poisoning-based method involves the agent's retraining in a modified version of the unlearning environment. Consequently, even though the two environments are almost identical, the agent undergoes different training experiences, leading to distinct performance outcomes in the two environments. This outcome underscores the effectiveness of our reinforcement unlearning methods in achieving environment-specific knowledge removal while maintaining performance in other environments. We have also undertaken the challenge of unlearning in both of the two similar environments. The outcomes, illustrated in Figure <ref>, reveal a deterioration in the agent's performance across both environments. This observation underscores the profound impact of the unlearning process on the agent's adaptability and proficiency in similar yet distinct environments. The discernible decline in performance serves as a compelling testament to the intricacies involved in unlearning an agent, shedding light on the nuanced dynamics that govern its responses in comparable scenarios.We further investigated the scenario in which two environments are completely identical. However, the results revealed an intriguing outcome: after the agent undergoes the unlearning process in one environment, its performance in the identical counterpart deteriorates. Our hypothesis is that the agent's updated knowledge, acquired during unlearning in one environment, is seamlessly applied to the identical environment, leading to a decline in performance. We leave the differentiation between two identical environments during reinforcement unlearning as our future research.Robustness. The results observed in the aircraft landing (Figure <ref>), virtual home (Figure <ref>) and maze explorer (Figure <ref>) scenarios closely align with those in the grid world setting, showcasing the robustness of both the decremental reinforcement learning-based and poisoning-based methods against external noise. In the remaining three scenarios, the agent's behavior continues to exhibit consistent patterns even when external noise is introduced during the unlearning process. 
This consistency is crucial for practical real-world applications, where agents must maintain their adaptability and performance despite uncertainties and disturbances. For example, in autonomous driving, noise from sensor readings or unexpected environmental conditions may often be encountered. The ability of the unlearning methods to maintain their efficacy despite such disturbances reinforces their practicality and reliability in dynamic environments. § NUMERICAL RESULTSIn these numerical results, a new evaluation metric is adopted: trajectory similarity, which is defined as a sequence of state-action pairs. We evaluate the effectiveness of unlearning by comparing the similarity between two trajectories: one captured before unlearning and the other obtained after unlearning. Consider two trajectories, τ = ((s_1, a_1), …, (s_m, a_m)) and τ' = ((s'_1, a'_1), …, (s'_m', a'_m')), where τ represents the trajectory recorded before unlearning, and τ' corresponds to the trajectory collected after unlearning. The similarity between τ and τ' is computed as:sim(τ,τ')=∑^m_i=11_s_i=s'_j∧ a_i=a'_j/m,where 1 is the indicator function and j is an index in [1,m']. Table <ref> numerically showcases the performance of the decremental RL-based method in the grid world setting with the size of 10× 10.An intriguing observation is the notable difference in trajectory similarity between the unlearning environment and the retained environments. In the unlearning environment, the trajectory similarity is as low as 35%, indicating that the unlearned policy operates substantially differently from the original policy in this specific environment. However, in the retained environments, the trajectory similarity remains around 85% in average, implying that the unlearned policy only exhibits slight variations compared to the original policy. These results support the notion of successful unlearning, as the retained environments maintain a high level of performance similarity.The observation in the aircraft landing setting in Table <ref> is similar to the grid world setting. The unlearning process successfully reduces the trajectory similarity in the unlearning environments, with values as low as 28%, while maintaining relatively high trajectory similarity around 90% in average in the retained environments. These consistent results provide strong evidence of the effectiveness of our unlearning method in selectively modifying the agent's behavior within the targeted unlearning environments while preserving its performance in other retained environments.The results of the poisoning-based method align closely with those of the decremental RL-based method in the virtual home and maze explorer settings shown in Tables <ref> and <ref>, respectively. The trajectory similarity is as low as 46% and 35% in the unlearning environment while maintaining high around 87% and 84% in average in the remaining environments in virtual home and maze explorer, respectively. This similarity can be attributed to the effectiveness of both methods in modifying the agent's behavior within the targeted unlearning environments while preserving its performance in other retained environments. The poisoning-based method introduces deliberate changes to the state transition function of the unlearning environment, leading the agent to learn new, albeit incorrect, knowledge specific to the modified environment. 
Similarly, the decremental RL-based method selectively forgets learned knowledge related to the unlearning environment through iterative adjustments to the agent's policy. As a result, both methods successfully enable the agent to adapt to the unlearning environment while maintaining its performance in other retained environments. The consistent performance across different settings showcases the robustness and efficacy of our reinforcement unlearning methods.
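To complement the numerical results above, the following minimal sketch shows one way the trajectory-similarity metric defined earlier could be computed. Reading the indicator as a membership test (whether the state–action pair at step i of τ also occurs at some index j of τ') is our interpretation of the definition, not a detail stated explicitly in the text.

```python
def trajectory_similarity(tau, tau_prime):
    """Fraction of (state, action) pairs in tau that also appear in tau_prime.

    tau and tau_prime are sequences of (state, action) pairs collected before
    and after unlearning; states and actions are assumed hashable here.
    """
    if not tau:
        return 0.0
    reference = set(tau_prime)
    matches = sum(1 for pair in tau if pair in reference)
    return matches / len(tau)

# Example with short symbolic trajectories.
before = [("s1", "up"), ("s2", "up"), ("s3", "right")]
after = [("s1", "up"), ("s4", "down"), ("s5", "down")]
print(trajectory_similarity(before, after))  # 1/3 of the pre-unlearning steps survive
```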
http://arxiv.org/abs/2312.15910v1
{ "authors": [ "Dayong Ye", "Tianqing Zhu", "Congcong Zhu", "Derui Wang", "Jason", "Xue", "Sheng Shen", "Wanlei Zhou" ], "categories": [ "cs.CR", "cs.LG" ], "primary_category": "cs.CR", "published": "20231226070439", "title": "Reinforcement Unlearning" }
[email protected] Department of Physics, Tohoku University, Sendai 980-8578, Japan Leung Center for Cosmology and Particle Astrophysics, National Taiwan University, Taipei 10617, Taiwan (R.O.C.) [email protected] Okinawa Institute of Science and Technology, 1919-1 Tancha, Onna-son, Okinawa 904-0495, Japan Leggett–Garg inequalities place bounds on the temporal correlations of a system based on the principles of macroscopic realism (MR) and noninvasive measurability (NM). Their conventional formulation relies on the ensemble-averaged products of observables measured at different instants of time. However, this expectation value based approach does not provide a clear definition of NM. A complete description that enables a precise understanding and captures all physically relevant features requires the study of probability distributions associated with noncommuting observables. In this article, we propose a scheme to describe the dynamics of generic N-level quantum systems via a probability vector representation of the Schrödinger equation and define a precise notion of NM for the probability distributions of noncommuting observables. This allows us to elucidate MR itself more clearly, eliminating any potential confusion. In addition, we introduce a measure to quantify violations of NM for arbitrary quantum states. For single-qubit systems, we pinpoint the pivotal relation that establishes a connection between the disturbance of observables incurred during a measurement and the resulting NM violation. Probability vector representation of the Schrödinger equation and noninvasive measurability for Leggett–Garg inequalities Sebastian Murk0000-0001-7296-0420 January 14, 2024 ===========================================================================================================================§ INTRODUCTIONThe distinction between classical and quantum phenomena has garnered considerable attention over the years. In contrast to the deterministic nature of classical mechanics, which accurately describes physical events on the macroscopic scales we experience in our day-to-day lives, quantum mechanics is fundamentally nondeterministic, and its precise role in the emergence of macroscopic phenomena is yet to be fully understood. To examine the breakdown of quantum coherence, Leggett and Garg devised an idealized experimental bound founded on the principles of macroscopic realism (MR) and noninvasive measurability (NM) <cit.>. MR posits that physical properties of macroscopic systems exist independent of our observation, i.e. measurements on macroscopic systems merely reveal stable preexisting values. In other words, the moon is there even if nobody looks <cit.>. In a trivial extension of quantum mechanics to large scales, macroscopic objects like Schrödinger's cat are described by a superposition of distinct states, and MR is broken. A more general concept of realism within hidden variable theories encompasses MR as a subset. NM postulates that the measurement process has no bearing on the state of the system being measured, i.e. there is no backreaction of the measurement on the subsequent system dynamics. Let us consider a non-quantum system S which has a corresponding N-dimensional quantum system S_QM that shares the same observables. Then, there exists a complete set of N^2 -1 observables { Q, Q̅_1, ⋯, Q̅_N^2-2} for S_QM that uniquely determine the quantum density operator ρ̂ of S_QM via quantum tomography. 
However, in quantum mechanics the operators associated with these observables do in general not commute. The observables are assumed to be observed in S, at least when they are measured at distinct points in time. In this article, NM measurements of an observable Q of the system S (not S_QM) are defined as measurements in which the probability distributions of all other observables Q̅_n remain unchanged while the initial probability distribution of Q collapses into a more sharply peaked distribution. Such an NM measurement does not exist in quantum mechanics, but may be allowed in more general theories like hidden variable theories. We regard the collapse of the initial probability distribution of Q as mere knowledge update about Q, not as disturbance against the fundamental degrees of freedom of S, including Q. This interpretation of updating information through measurement without causing a disturbance aligns with the standard approach for macroscopic objects in classical statistical mechanics.[For an explicit example of NM in the classical theory, see Sec. <ref>, Eq. (<ref>).] Predicated on MR and NM, experimentally testable inequalities of the form derived in Ref. <cit.> (“Leggett–Garg inequalities”, abbrev. LGIs)[See Ref. <cit.> for a topical review.] bound the temporal correlations of a system in sequential measurements of observables. This is similar in spirit to the Bell <cit.> and CHSH inequalities <cit.>, which place bounds on the correlations in measurements of spatially separated systems based on the principles of realism and locality. A naive extrapolation of quantum mechanics to the macroscopic regime violates both types of inequalities. Reciprocally, the dynamics of a system that violates either LGIs or Bell/CHSH-type inequalities cannot be understood within the framework of traditional classical mechanics <cit.>. Various proposals amenable to experimental verification of LGIs have been explored, including but not limited to quasiprobabilistic approaches <cit.>, continuous variable versions <cit.>, and using expanded data sets obtained from finer-grained measurements <cit.>. Violations of LGIs have been confirmed in numerous experiments involving different physical systems and using different types of measurements <cit.> (see also Table 1 of Ref. <cit.>). Nevertheless, the precise scale up to which we can detect the quantumness of macroscopic objects in experiments remains elusive. LGIs are one of the principal tools to investigate how quantum coherence in the form of superpositions and/or entanglement is lost in the macroscopic realm. In addition, the possibility of using LGIs to probe the quantumness of gravity through gravitationally induced violations has recently been put forward <cit.>.In certain LGIs, NM plays a more crucial role compared to MR. An example of this is found in the two-time LGIs for a single qubit. Let us consider a spin observable Q= ± 1 for the qubit and perform measurements at t_1 and t_2 >t_1, estimating the probability distribution p(Q_1=Q(t_1),Q_2=Q(t_2)). Here, we do not need to assume MR since the probability p(Q_1,Q_2) is determined directly by the experiment. Then, the following inequality trivially holds: ∑_Q_1=± 1∑_Q_2 =± 1 (1+s_1 Q_1) (1+s_2 Q_2) p(Q_1,Q_2) ⩾ 0 ,with s_1=± 1 and s_2 =± 1. This inequality is rewritten as1+s_1 ⟨ Q_1 ⟩ + s_2 ⟨ Q_2 ⟩ +s_1 s_2 ⟨ Q_2 Q_1 ⟩⩾ 0 ,where the bracket is defined such that ⟨ f(Q_1,Q_2) ⟩∑_Q_1,Q_2 f(Q_1,Q_2) p(Q_1 ,Q_2) for a function f of Q_1 and Q_2. 
NM for the first measurement at t=t_1 implies ⟨ Q_2 ⟩=⟨ Q_2 ⟩_1̅, where⟨ Q_2 ⟩_1̅∑_Q_2 =± 1 Q_2 p_1̅(Q_2) ,and p_1̅(Q_2) denotes the probability to observe Q_2 at t=t_2 without performing the measurement of Q_1 at t=t_1. This yields the two-time LGI <cit.> given by[In the two-time LGI, we set ⟨ Q_1 ⟩ = ⟨ Q_1 ⟩_2̅ with ⟨ Q_1 ⟩_2̅∑_Q_1 = ± 1 Q_1 p_2̅(Q_1), where p_2̅(Q_1) denotes the probability to observe Q_1 at t=t_1 without performing the measurement of Q_2 at t=t_2. Since the future experiment at t_2 >t_1 does not affect the outcomes already determined in the past experiment at t=t_1 due to causality, ⟨ Q_1 ⟩=⟨ Q_1 ⟩_2̅ is ensured.]1+s_1 ⟨ Q_1 ⟩ +s_2 ⟨ Q_2 ⟩_1̅ +s_1 s_2 ⟨ Q_2 Q_1 ⟩⩾ 0 .The breakdown of the two-time LGI in Eq. (<ref>) is caused by Δ Q_2 ⟨ Q_2 ⟩ - ⟨ Q_2 ⟩_1̅≠ 0 in experiments. If the first measurement at t=t_1 does not affect the second measurement at t=t_2, we obtain Δ Q_2=0 as an NM result. In this case, the constraint Δ Q_2 = 0 required for NM is more stringent than the bound set by the two-time LGI in Eq. (<ref>), since the LGI can still hold even when Δ Q_2 ≠ 0. It is worth noting that the hidden variable theory for a single qubit proposed by J. S. Bell <cit.> precisely reproduces the same results of quantum mechanics by breaking NM. In his theory, all expectation values ⟨σ_a ⟩_HV of the spin component σ_a with a ∈{ x,y,z } are allowed as long as ⟨σ_x ⟩^2_HV + ⟨σ_y ⟩^2_HV + ⟨σ_z ⟩^2_HV⩽ 1 holds. Then, it is possible to define density matrices via the following tomography relation:ρ̂_HV1/2( Î + ⟨σ_x ⟩_HVσ̂_x + ⟨σ_y ⟩_HVσ̂_y + ⟨σ_z ⟩_HVσ̂_z ) ,where Î denotes the two-dimensional identy matrix and σ̂_a are the Pauli matrices. Among the states described by ρ̂_HV, there exist pure states denoted by |ψ_HV⟩⟨ψ_HV|, which also appear in quantum mechanics. Similarly, all pure states in quantum mechanics are shared in Bell's theory. Therefore, the “quantum coherence" of |ψ_HV⟩ is reproduced by a superposition of two distinct states |±_HV⟩ via |ψ_HV⟩ = c_+ | +_HV⟩ + c_- | -_HV⟩ with complex coefficients c_±, even though this is a hidden variable theory. In this realism (not MR) theory, the LGI for the single qubit in Eq. (<ref>) is not satisfied in the same manner as in quantum mechanics, since the unavoidable backreaction of measurements leads to Δ Q_2 ≠ 0. In previous studies of LGIs, arguments related to NM have relied solely on the expectation values and ensemble averages of temporal correlations of observables. However, these quantities are secondary objects, derived from the probability distributions of observables in actual experiments. To provide a clearer and more fundamental definition of NM, we introduce a probability vector representation of the Schrödinger equation in this article. The concept of NM is unambiguously defined using the probability distributions in this representation. Based on this formalism, we precisely quantify the distinction between quantum mechanics and other theories in which NM holds. This ultimately allows us to better understand how fundamental differences between quantum mechanics and NM-compatible theories arise. The remainder of this article is organized as follows: In Sec. <ref>, we review mathematical preliminaries and derive the probability vector representation of the Schrödinger equation [Eq. (<ref>)] which describes the evolution of a quantum system in terms of the probability distributions associated with its observables. In Sec. <ref>, we highlight important differences between classical [Subsec. <ref>] and quantum [Subsec. 
<ref>] dynamics based on the description of a single-qubit system and introduce measures to quantify the violation of NM[Eq. (<ref>)] caused by the backreaction of a measurement [Eq. (<ref>)] and their interrelation [Eq. (<ref>)]. The generalization to generic N-level systems is covered in Sec. <ref>. In Sec. <ref>, we outline how the classification of quantum states as either NM-conforming or NM-violating could be performed by a machine learning algorithm in the case of very large N (where a manual evaluation becomes infeasible in practice) and present a minimalistic proof-of-principle implementation. Lastly, we summarize our results and discuss their physical implications (Sec. <ref>).§ PROBABILITY VECTOR REPRESENTATIONOF THE SCHRÖDINGER EQUATION Our objective in this section is to derive a probability vector representation for quantum dynamics that is suitable for investigating violations of NM. The Schrödinger equation for an N-level system represented by the quantum state ρ̂(t) at time t is given byi ħd/dtρ̂(t) = [ Ĥ, ρ̂(t) ] ,where [ Â, B̂] ÂB̂ - B̂ denotes the commutator of two operators  and B̂. The generators λ̂_n of SU(N) satisfyλ̂_n^† = λ̂_n , [ λ̂_n ] = 0 , [ λ̂_n λ̂_n'] = N δ_nn' ,where n,n' ∈{ 1, …, dim[SU(N)]} with dim[SU(N)] = N^2-1. The corresponding Lie algebra is given by[ λ̂_n, λ̂_n'] = i ∑_n”=1^N^2-1γ_nn'^n”λ̂_n” ,where γ_nn'^n” labels real-valued coefficients. Any N-level quantum state is decomposable in terms of the SU(N) generators λ̂_n via the so-called Bloch representationρ̂(t) = 1/N( Î + ∑_n=1^N^2-1⟨λ_n(t) ⟩λ̂_n ) ,where Î denotes the N-dimensional identity matrix, and the expectation values of the SU(N) generators are given by⟨λ_n(t) ⟩ = [ λ̂_n ρ̂(t) ] .Their time derivatives are computed asd/dt⟨λ_n(t) ⟩ = [ λ̂_n d/dtρ̂(t) ] = 1/i ħ[ λ̂_n [ Ĥ, ρ̂(t) ] ] = 1/i ħ[ ρ̂(t) [ λ̂_n, Ĥ] ] .It is useful to introduce the real-valued coefficientsh_nn'-i/N[ λ̂_n'[ λ̂_n, Ĥ] ] = i/N[ Ĥ[ λ̂_n, λ̂_n'] ] = -1/N∑_n”=1^N^2-1γ_nn'^n”[ Ĥλ̂_n”] .One can then show that[ λ̂_n, Ĥ] = i ∑_n'=1^N^2-1 h_nn'λ̂_n' .The Schrödinger equation in the form of Eq. (<ref>) can thus be recast in terms of the expectation values ⟨λ_n(t) ⟩ as follows:d/dt⟨λ_n(t) ⟩ = 1/ħ∑_n'=1^N^2-1 h_nn'⟨λ_n'(t) ⟩ .This is a generalization of the standard Bloch equation for a single qubit [N=2] to arbitrary N. The spectral decomposition of the SU(N) generators λ̂_n is given byλ̂_n = ∑_k=1^N λ_n(k) P̂_n(k) ,where λ_n(k) denote their eigenvalues and P̂_n(k) their projectors, respectively. The emergent probability of λ_n(k) for the observable λ̂_n in the state ρ̂(t) isp_n(k,t) = [ P̂_n(k) ρ̂(t) ]and satisfies the normalization condition∑_k=1^N p_n(k,t) = 1 .The expectation values can be expressed in terms of their respective emergent probabilities, i.e. ⟨λ_n(t) ⟩ = ∑_k=1^Nλ_n(k) p_n(k,t) ,and their time derivatives are computed usingd/dt p_n(k,t)= [ P̂_n(k) d/dtρ̂(t) ] = 1/i ħ[ P̂_n(k) [ Ĥ, ρ̂(t) ] ] = 1/i ħ[ ρ̂(t) [ P̂_n(k), Ĥ] ] .Expanding the right-hand side of this equation with respect to the generators λ̂_n yields1/i ħ[ P̂_n(k), Ĥ] = ∑_n'=1^N^2-1 K_nn'(k) λ̂_n' ,with coefficients K_nn'(k) given byK_nn'(k) = 1/N i ħ[ λ̂_n'[ P̂_n(k), Ĥ] ] .Substituting the spectral decomposition of λ̂_n' [cf. Eq. (<ref>)] into Eq. (<ref>), we obtain1/i ħ[ P̂_n(k), Ĥ] = ∑_n'=1^N^2-1∑_k'=1^N K_nn'(k) λ_n'(k') P̂_n'(k') .Defining the coefficientsH_nn'(k,k')λ_n'(k')/N i ħ[λ̂_n'[ P̂_n(k), Ĥ] ] ,the following relation holds:1/i ħ[ P̂_n(k), Ĥ] = ∑_n'=1^N^2-1∑_k=1^N H_nn'(k,k') P̂_n'(k') .Substitution of Eq. (<ref>) into Eq. 
(<ref>) reveals that the Schrödinger equation [cf. Eq. (<ref>)] can be rewritten in the probability vector form [i.e. expressed in terms of the emergent probabilities p_n(k,t)] as follows:d/dt p_n(k,t) = ∑_n'=1^N^2-1∑_k'=1^N H_nn'(k,k') p_n'(k',t) .This formulation can be straightforwardly extended to the case of open quantum systems by considering the Lindblad master equation. The initial condition of Eq. (<ref>) is chosen such that the underlying probability distribution describes a valid quantum state. Therefore, p_n(k,0) = [ P̂_n(k) ρ̂(0) ] holds for an initial state described by the density matrix ρ̂(0). Using Eqs. (<ref>), (<ref>), and the fact that ∑_k=1^N P̂_n(k) = Î, it is straightforward to show thatd/dt∑_k=1^N p_n(k,t) = 0 ,and thus the normalization condition Eq. (<ref>) holds at any time t:∑_k=1^N p_n(k,t) = ∑_k=1^N p_n(k) = 1 ,where p_n(k) ≡ p_n(k,0). Let p⃗(t) represent the N(N^2-1)-dimensional probability vectorp⃗(t) = [ [ p_1(1,t);⋮; p_1(N,t);⋮; p_N^2-1(1,t);⋮; p_N^2-1(N,t) ]] .Using the N(N^2-1) × N(N^2-1) matrix H = [ H_nn' (k,k') ] whose elements are prescribed by Eq. (<ref>), the solution of Eq. (<ref>) is obtained as[ [ p_1(1,t);⋮; p_1(N,t);⋮; p_N^2-1(1,t);⋮; p_N^2-1(N,t) ]] = exp( t H ) [ [ p_1(1);⋮; p_1(N);⋮; p_N^2-1(1);⋮; p_N^2-1(N) ]] ,which can be rewritten in the series expansion form[ [ p_1(1,t);⋮; p_1(N,t);⋮; p_N^2-1(1,t);⋮; p_N^2-1(N,t) ]] = ∑_n=0^∞t^n/n! H^n[ [ p_1(1);⋮; p_1(N);⋮; p_N^2-1(1);⋮; p_N^2-1(N) ]] .The numerical computation of H^n does not require the exact diagonalization of H and only takes a brief amount of time. Consequently, the right-hand side of Eq. (<ref>) can be calculated without facing significant obstacles even for sizable values of N. For an NM measurement of λ̂_n at time t=0, the expectation values ⟨λ_n ⟩ of λ̂_n are determined by [cf. Eq. (<ref>)]⟨λ_n ⟩ = ∑_k=1^N λ_n(k) p_n(k) .If an NM measurement of λ̂_1 is performed and the result λ_1(1) is obtained, the λ_1 sector of the probability vector p⃗ induces a collapse of the state vector such that p'_1(k)_1,1 = δ_k1, while the other sectors remain unaffected:0.99!p⃗ = [ [ p_1(1); p_1(2);⋮; p_1(N); p_2(1);⋮; p_2(N);⋮; p_N^2-1(1);⋮; p_N^2-1(N) ]] ⇒p'_1,1 = [ [ p_1^'(1)_1,1; p_1^'(2)_1,1;⋮;p_1^'(N) _1,1; p_2^'(1)_1,1;⋮; p_2^'(N)_1,1;⋮; p_N^2-1^'(1)_1,1;⋮; p_N^2-1^'(N)_1,1 ]] = [ [1;0;⋮;0; p_2(1);⋮; p_2(N);⋮; p_N^2-1(1);⋮; p_N^2-1(N) ]] . This provides a precise definition of NM for the probability distribution. Analogously, one can define probability distributions p'_n'(k')_n,k after an NM measurement of λ̂_n at time t=0 results in the observation of the eigenvalue λ_n(k). The λ_n sector of p⃗ induces a collapse of the state vector, while the other sectors remain unaffected:p'_n'(k')_n,k = δ_n' nδ_k' k + ( 1 - δ_n'n) p_n'(k') .However, it is not assured that the post-measurement probability vector p'_n,k = [p'_n'(k^')_n,k] always represents a valid quantum state. The expectation values of λ̂_n^' are evaluated as⟨λ_n'⟩'_n,k = ∑_k'=1^N λ_n'(k') p'_n'(k')_n,k .The post-measurement density matrix is given byρ̂'_n,k = 1/N( Î + ∑_n'=1^N^2-1⟨λ_n'⟩'_n,kλ̂_n')and could possess negative eigenvalues, which would imply that the corresponding operator is no longer positive semidefinite (which, in a slight abuse of notation, may be denoted by ρ̂'_n,k 0). In this sense, the presence of negative eigenvalues signifies the violation of NM. Next, for an arbitary initial state ρ̂(0), let us introduce a measure γ_n,k that quantifies the NM violation when a measurement of λ̂_n observes λ_n(k). 
To this end, we first evaluate p_n'(k') via [ P̂_n'(k') ρ̂(0) ] [cf. Eq. (<ref>)]. After the measurement, its λ_n sector undergoes the following transition:p_n(k')⇒p'_n(k') = δ_k'k .Then, the expectation value of λ̂_n is computed as⟨λ_n⟩'_n,k = ∑_k'=1^Nλ_n'(k') p'_n(k') = λ_n(k) .For sectors n' ≠ n on the other hand, the probabilities remain unchanged,p_n'(k)⇒p'_n'(k') = p_n'(k') ,and the expectation value of λ̂_n' ≠ n is given by⟨λ_n'⟩'_n,k≡⟨λ_n'⟩ = [ λ̂_n'ρ̂(0) ] .For ρ̂(0) the pseudo-density matrix after the measurement can be defined asρ̂'_n,k = 1/N( Î + ∑_n'=1^N^2-1⟨λ_n'⟩'_n.kλ̂_n') .More specifically, it can be expressed asρ̂'_n,k = 1/N( Î + λ_n(k) λ̂_n + ∑_n' ≠ n[ λ̂_n'ρ̂(0) ] λ̂_n') .Let p'_m(n,k) label the eigenvalues in the spectral decomposition of ρ̂'_n,k, i.e. ρ̂'_n,k = ∑_m=1^N p'_m(n,k) P̂'_m(n,k) .The NM violation measure γ(n,k) for ρ̂(0) is then defined as the sum of the absolute values of all negative eigenvalues:γ_n,k∑_p'_m(n,k) < 0| p'_m(n,k) | .The conclusion of this section warrants the following final remark: one can certainly verify the violation of NM by numerically diagonalizing ρ̂'_n,k, identifying the negative eigenvalues in its spectrum, and then confirming that γ_n,k > 0. However, executing such a numerical diagonalization for large N is a notably huge task that requires a significant amount of computational resources compared to the computation of H^n. Therefore, Eq. (<ref>) provides a much more efficient way of investigating NM violations in large-N systems. Some of the p_n'(k',t) values in Eq. (<ref>) can become negative at specific instances t by takingp_n'(k') = [ P̂_n'(k') ρ̂'_n,k] and an adequate Hamiltonian Ĥ. By solving this equation, one can identify the presence of negative components within p_n'(k',t), which serves as an indicator of the NM violation for ρ̂(0) in the large-N case. § SINGLE-QUBIT SYSTEMS To account for the backreaction of quantum measurements on the probability distributions of observables for a single-qubit system, we first revisit the Bloch representation of quantum states. For a single qubit [N=2], the quantum state ρ̂ is precisely specified by the expectation values ⟨σ_x ⟩, ⟨σ_y ⟩, ⟨σ_z ⟩ of the three Pauli operators σ̂_x, σ̂_y, σ̂_z viaρ̂ = 1/2( Î + ⟨σ_x ⟩σ̂_x + ⟨σ_y ⟩σ̂_y + ⟨σ_z ⟩σ̂_z ) .The state space is represented by a Bloch sphere (see Fig. <ref>), which is defined by the inequality⟨σ_x ⟩^2 + ⟨σ_y ⟩^2 + ⟨σ_z ⟩^2 ⩽ 1 .This condition guarantees that all eigenvalues of ρ̂ remain nonnegative, which is commonly (again, in a slight abuse of notation) denoted as ρ̂⩾ 0. The emergent probabilities of the measurement outcomes ± 1 for σ̂_a with a ∈{ x,y,z } are computed asp_a ( ± 1 ) = [ P̂_a(± 1) ρ̂] ,where the projection operator P̂_a(± 1) for σ̂_a is represented by P̂_a(± 1) 1/2( ασ̂_a ). The Bloch sphere inequality of Eq. (<ref>) can be rewritten in terms of the emergent probabilities as(p_x(+1)-p_x(-1))^2 + (p_y(+1)-p_y(-1))^2 + (p_z(+1)-p_z(-1))^2⩽ 1 .To examine violations of NM for a single-qubit system described by ρ̂, it is convenient to consider the six-dimensional [in general N(N^2-1)-dimensional] real vector of emergent probabilities given byp⃗ = [ [ p_x(+1); p_x(-1); p_y(+1); p_y(-1); p_z(+1); p_z(-1) ]] .The generic form of the time-independent Hamiltonian for a single qubit is given up to a constant byĤ = ħ/2( B_x σ̂_x + B_y σ̂_y + B_z σ̂_z ) ,where B_a with a ∈{ x,y,z } denote real parameters. In this case, the Schrödinger equation [cf. Eq. (<ref>)] can be expressed through the ordinary Bloch equation [cf. Eq. 
(<ref>)] asd/dt[ [ ⟨σ_x(t) ⟩; ⟨σ_y(t) ⟩; ⟨σ_z(t) ⟩ ]] = [ [0B_z -B_y; -B_z0B_x;B_y -B_x0 ]] [ [ ⟨σ_x(t) ⟩; ⟨σ_y(t) ⟩; ⟨σ_z(t) ⟩ ]] .Before delving into the quantum probability vector representation, it is prudent to first revisit the analogous classical theory to ensure a comprehensive understanding of why the NM postulate is always satisfied by classical dynamics. §.§ Classical Dynamics The classical equation of motion of a spin vector S⃗ = [S_x(t), S_y(t), S_z(t)] is given byd/dt[ [ S_x(t); S_y(t); S_z(t) ]] = [ [0B_z -B_y; -B_z0B_x;B_y -B_x0 ]] [ [ S_x(t); S_y(t); S_z(t) ]] ,where the initial condition is specified through the continuous real parameters S_0a [a ∈{ x,y,z }] as[ [ S_x(0); S_y(0); S_z(0) ]] = [ [ S_0x; S_0y; S_0z ]] .In the following discussion, let the spin vector with its initial conditions be denoted by[ [ S_x(t); S_y(t); S_z(t) ]] = [ [ S_x(S_0x,S_0y,S_0z,t); S_y(S_0x,S_0y,S_0z,t); S_z(S_0x,S_0y,S_0z,t) ]] .Let ρ_0(S_0x, S_0y, S_0z) be the classical probability distribution of the initial spin satisfyingρ_0(S_0x,S_0y,S_0z) ⩾ 0 ,and∫∫∫ρ_0(S_0x,S_0y,S_0z) dS_0x dS_0y dS_0z = 1 .The distribution ρ(S_x,S_y,S_z,t) at time t is determined by ρ(S_x,S_y,S_z,t)= ∫∫∫δ(S_x-S_x(S_0x,S_0y,S_0z,t)) δ(S_y-S_y(S_0x,S_0y,S_0z,t)) δ(S_z-S_z(S_0x,S_0y,S_0z,t)) ×ρ_0(S_0x,S_0y,S_0z) dS_0x dS_0y dS_0z , and satisfies the equation of motion ∂/∂ tρ( S_x,S_y,S_z,t) = -(S_x,S_y,S_z) [ [0B_z -B_y; -B_z0B_x;B_y -B_x0 ]] [ [ ∂/∂ S_x; ∂/∂ S_y; ∂/∂ S_z ]] ρ( S_x,S_y,S_z,t) . Consider a measurement of S_a [a ∈{ x,y,z }] resulting in the observed spin s=±1 of S_a at t=0. The probability of s for S_a is computed viap̅_a,s = ∫∫∫Θ(sS_0a) ρ_0( S_0x,S_0y,S_0z) dS_0x dS_0y dS_0z ,where Θ(x) denotes the Heaviside step function Θ(x) 1x ⩾ 00x < 0 .The probability distribution subsequent to the measurement at t=0 is described byρ_a,s(S_0x,S_0y,S_0z) = Θ(sS_0a)/p̅_a,sρ_0(S_0x,S_0y,S_0z).From Eqs. (<ref>) and (<ref>), one can immediately ascertain that no backreaction from the measurement influences the expectation value of any observable at a future time t. Since the relationρ(S_x,S_y,S_z,t)= p̅_a,+1ρ_a,+1(S_x,S_y,S_z,t)+ p̅_a,-1ρ_a,-1(S_x,S_y,S_z,t)is satisfied at time t, the expectation value0.95!p̅_a,+1∫∫∫ O(S_x,S_y,S_z) ρ_a,+1(S_x,S_y,S_z,t) dS_x dS_y dS_z0.95!+ p̅_a,-1∫∫∫ O(S_x,S_y,S_z) ρ_a,-1(S_x,S_y,S_z,t) dS_x dS_y dS_zof a physical observable O(S_x,S_y,S_z) at time t with measurement matches the expectation value ∫∫∫ O(S_x,S_y,S_z) ρ(S_x,S_y,S_z,t) dS_x dS_y dS_z of that same observable without the measurement. This ensures the stability and predictability of the system even after measurements have been performed, thereby underscoring the classical nature of the described dynamics. Therefore, classical statistical mechanics is an example of an NM-compatible theory.§.§ Quantum Dynamics To consider measurements and violations of NM in quantum dynamical systems, we introduce the six-dimensional probability vector p⃗_cl as a comparison measure for the above-described classical theory by defining the discrete spin variables σ_a=±1 asσ_x = ϵ(S_x) = Θ(S_x) - Θ(-S_x) ,σ_y = ϵ(S_y) = Θ(S_y) - Θ(-S_y) ,σ_z = ϵ(S_z) = Θ(S_z) - Θ(-S_z) .The classical probability vector p⃗_cl(t) at time t is then given by [cf. Eq. 
(<ref>)]p⃗_cl(t) = [ [ p̅_x(+1,t); p̅_x(-1,t); p̅_y(+1,t); p̅_y(-1,t); p̅_z(+1,t); p̅_z(-1,t) ]] = [ [ ∑_σ_y,σ_zp_cl (+1,σ_y,σ_z,t); ∑_σ_y,σ_zp_cl (-1,σ_y,σ_z,t); ∑_σ_x,σ_zp_cl (σ_x,+1,σ_z,t); ∑_σ_x,σ_zp_cl (σ_x,-1,σ_z,t); ∑_σ_x,σ_yp_cl (σ_x,σ_y,+1,t); ∑_σ_x,σ_yp_cl (σ_x,σ_y,-1,t) ]] , where we use the bar to distinguish the classical probabilities p̅_a(s,t) from their quantum counterparts p_a(s,t), and p_cl (σ_x,σ_y,σ_z,t) = ∫_-∞^∞∫_-∞^∞∫_-∞^∞Θ(σ_xS_x) Θ(σ_yS_y) Θ(σ_zS_z) ρ(S_x,S_y,S_z,t) dS_x dS_y dS_z . By definition, each component p̅_a(s,t) remains nonnegative at any arbitrary time t. Note that the classical state space defined by the set of points {⟨σ_x⟩_cl, ⟨σ_y⟩_cl, ⟨σ_z⟩_cl}, where ⟨σ_a⟩_cl (+1) p̅_a(+1,t) + (-1) p̅_a(-1,t) , is a cube with a side length of 2, centered at the origin, with each side parallel to the x, y, and z axis. Embedded within this cube is the Bloch sphere with a radius of 1, making contact with the cube at its extremities, as illustrated in Fig. <ref>. Any point that is located inside of the cube, yet not within the Bloch sphere (such as those indicated in red in Fig. <ref>), corresponds to a probability distribution that does not align with our traditional understanding of quantum mechanics. Returning to quantum dynamics as delineated by Eq. (<ref>), the evolution of a single-qubit system is described by d/dt[ [ p_x(+1,t); p_x(-1,t); p_y(+1,t); p_y(-1,t); p_z(+1,t); p_z(-1,t) ]] =[ [00 -B_z/2B_z/2B_y/2 -B_y/2;00B_z/2 -B_z/2 -B_y/2B_y/2;B_z/2 -B_z/200 -B_x/2B_x/2; -B_z/2B_z/200B_x/2 -B_x/2; -B_y/2B_y/2B_x/2 -B_x/200;B_y/2 -B_y/2 -B_x/2B_x/200 ]] [ [ p_x(+1,t); p_x(-1,t); p_y(+1,t); p_y(-1,t); p_z(+1,t); p_z(-1,t) ]] . In contrast to the probabilities p̅_a(s,t) in the case of classical dynamics described by Eq. (<ref>), which are always nonnegative, p_a(s,t) can take on negative values in the quantum dynamics described by Eq. (<ref>), even when the same initial condition is chosen for both equations. The presence of negative probability components p_a(s,t) constitutes a direct indication for the violation of NM in quantum dynamics for the initial state with p⃗_cl(0) in Eq. (<ref>). Next, let us reconsider the measure γ_n,k introduced in Eq. (<ref>) to quantify the violation of NM for a single qubit. Unlike the case of large N, we can easily determine its value for N=2. Let us assume that the initial state of the qubit is described by the probability vector p⃗ of Eq. (<ref>). After performing a measurement of σ̂_a at t=0 and obtaining the result s=±1, we define a pseudo-density matrix denoted by ρ̂_a,s^'. As per Eq. (<ref>), this matrix is described by the expressionρ̂'_a,s = 1/2( Î + s σ̂_a + ∑_a' ≠ a[ σ̂_a'ρ̂(0) ] σ̂_a') .Expanding Eq. (<ref>) with respect to the identity matrix and the Pauli matrices [cf. Eq. (<ref>)] asρ̂'_a,s = 1/2( Î + ⟨σ_x⟩'_a,sσ̂_x + ⟨σ_y⟩'_a,sσ̂_y + ⟨σ_z⟩'_a,sσ̂_z)yields the following relations:⟨σ_a⟩'_a,s = s, ⟨σ_b ≠ a⟩'_a,s = [ σ̂_bρ̂(0) ] ,where a,b ∈{ x,y,z } here and in what follows. Note that ρ̂_a,s^' can possess a negative eigenvalue. Indeed, the two eigenvalues of the 2×2 matrix ρ̂'_a,s(0) are given explicitly by0.99!p_a,s_± = 1/2( 1 ±√(( ⟨σ_x⟩_a,s^')^2 + ( ⟨σ_y⟩_a,s^')^2 + ( ⟨σ_z⟩_a,s^')^2)) .Consequently, the NM violation measure γ_a,s of Eq. (<ref>) with s=±1 is evaluated as0.99!γ_a,s = max{ 0, 1/2( √(( ⟨σ_x⟩'_a,s)^2 + ( ⟨σ_y⟩'_a,s)^2 + ( ⟨σ_z⟩'_a,s)^2) - 1 ) } .The value of γ_a,s quantifies the extent to which the NM post-measurement state differs from physical post-measurement states. 
Since some of the NM post-measurement states are not realized in the experiment, γ_a,s itselfdoes not qualify as a physical quantity. However, it is closely related to a physical quantity Δ which can be determined in experiments, as elucidated below. It follows from Eqs. (<ref>) and (<ref>) that the squared expectation values of the Pauli matrices fulfill the following relation:( ⟨σ_x⟩'_a,s)^2 + ( ⟨σ_y⟩'_a,s)^2 + ( ⟨σ_z⟩'_a,s)^2= 1 + ∑_b ≠ a( [ σ̂_bρ̂(0) ] )^2 .Note that, since (⟨σ_x⟩'_a,s)^2 + (⟨σ_y⟩'_a,s)^2 + (⟨σ_z⟩'_a,s)^2 > 1, the vector [⟨σ_x⟩'_a,s, ⟨σ_y⟩'_a,s, ⟨σ_z⟩'_a,s] lies outside of the Bloch sphere. From Eqs. (<ref>) and (<ref>), γ_a,-s is found to be equivalent to γ_a,s. In what follows, we therefore let γ_a represent γ_a,s given byγ_a≡γ_a,s1/2( √(1 + ∑_b ≠ a([σ̂_bρ̂(0) ] )^2) - 1 ) .The expectation values prior to the measurement are denoted by ⟨σ_b(0) ⟩ = [ σ̂_bρ̂(0) ]. Upon solving Eq. (<ref>), the following set of three equations is obtained:⟨σ_x(0) ⟩^2 + ⟨σ_y(0) ⟩^2= 4 γ_z (1+γ_z) , ⟨σ_y(0) ⟩^2 + ⟨σ_z(0) ⟩^2= 4 γ_x (1+γ_x) , ⟨σ_z(0) ⟩^2 + ⟨σ_x(0) ⟩^2= 4γ_y (1+γ_y) .The summation of Eqs. (<ref>)–(<ref>) yields⟨σ_x(0) ⟩^2 + ⟨σ_y(0) ⟩^2 + ⟨σ_z(0) ⟩^2 = 2 ∑_aγ_a (1+γ_a) .In conjunction with the Bloch sphere condition Eq. (<ref>) for the initial state ρ̂(0) [i.e. the left-hand side of Eq. (<ref>)], this relation establishes the following upper bound for the violation of NM:∑_aγ_a(1+γ_a) ⩽1/2 .Based on Eq. (<ref>), it is possible to derive an analogous inequality for the measured observables. Upon observing σ̂_a, the averaged post-measurement state is given byρ̂_a = ∑_s=±1P̂_a,sρ̂(t=0) P̂_a,s ,where P̂_a,s= | s ⟩_a⟨ s |_a are the projection matrices of σ̂_a associated with the eigenvalues s=±1. The deviation of the expectation value of σ̂_b quantifies the backreaction of the measurement and is given by δσ_b(a) = ⟨σ_b⟩^'_a - ⟨σ_b(0) ⟩ = [ σ̂_b ρ̂_a ] - [σ̂_b ρ̂(0) ] .Based on this expression, we can introduce the measure Δ as follows:Δ 1/3∑_a∑_b( δσ_b(a) )^2 = 1/3∑_a∑_b(⟨σ_b⟩'_a - ⟨σ_b(0) ⟩)^2 .As alluded to previously, this quantity can be determined by experiments. Using Eq. (<ref>) and the cyclic property of the matrix trace, we obtainδσ_b(a) = [ ( ∑_s=±1P̂_a,sσ̂_bP̂_a,s - σ̂_b) ρ̂(0) ] .It turns out that∑_s=±1P̂_a,sσ̂_b≠ aP̂_a,s = 0 is satisfied for the Pauli matrices. From Eqs. (<ref>)–(<ref>), it follows thatδσ_b(a) = ( δ_ab - 1 ) ⟨σ_b(0) ⟩ .This yields∑_a∑_b(δσ_b(a))^2 = 2 ( ⟨σ_x(0) ⟩^2 + ⟨σ_y(0) ⟩^2 + ⟨σ_z(0) ⟩^2 ) .From Eqs. (<ref>), (<ref>), and (<ref>), we obtain a useful formula relating the abstract quantity γ_a to the experimentally accessible Δ given byΔ = 4/3∑_aγ_a( 1 + γ_a) .If Δ>0, the single qubit in the initial state ρ̂(0) does not conform to NM. Put simply, the relation in Eq. (<ref>) quantifies the extent to which the quantum world differs from a world with γ_a=0 where NM holds. Next, we present examples of qubit quantum states to check the breakdown of NM. First, consider the maximally mixed state described by ρ̂(0)=Î/2. The corresponding probability vector is given byp⃗ = [ [ p_x(+1,0); p_x(-1,0); p_y(+1,0); p_y(-1,0); p_z(+1,0); p_z(-1,0) ]] = [ [ ½; ½; ½; ½; ½; ½ ]] .Prior to the measurement of σ̂_a, all expectation values are null, i.e. ⟨σ_a(0) ⟩ = p_a(+1,0) - p_a(-1,0) = ½ - ½ = 0∀ a. 
If the result σ_x=+1 is observed after an NM measurement of σ̂_x is performed, the probability vector becomesp'_x,+1 = [ [ p_x^'(+1)_x,+1; p_x^'(-1)_x,+1; p_y^'(+1)_x,+1; p_y^'(-1)_x,+1; p_z^'(+1)_x,+1; p_z^'(-1)_x,+1 ]] = [ [ 1; 0; ½; ½; ½; ½ ]] .The associated state ρ̂_x,+1^' is an eigenstate given by | +_x⟩⟨ +_x| of σ̂_x, thus establishing it as a quantum state that is compatible with the NM postulate. Hence, the NM violation measure for this particular state vanishes, i.e. γ_x,+1=0, and thus also Δ_x,+1 =0 by virtue of Eq. (<ref>). Similarly, measurements of other components σ̂_a yield analogous quantum states. A single-qubit system whose initial state is described by ρ̂(0)=Î/2 is therefore compatible with NM. On the other hand, if an NM measurement of σ̂_y is performed on the initial state ρ̂(0) = | +_x⟩⟨ +_x| and the result σ_y=+1 is observed, the probability vector becomesp'_y,+1 = [ [ p_x^'(+1)_y,+1; p_x^'(-1)_y,+1; p_y^'(+1)_y,+1; p_y^'(-1)_y,+1; p_z^'(+1)_y,+1; p_z^'(-1)_y,+1 ]] = [ [ 1; 0; 1; 0; ½; ½ ]] .Similarly, if the result σ_y=-1 is observed instead of σ_y=+1, the probability vector becomes p'_y,-1 = [ [ p_x^'(+1)_y,-1; p_x^'(-1)_y,-1; p_y^'(+1)_y,-1; p_y^'(-1)_y,-1; p_z^'(+1)_y,-1; p_z^'(-1)_y,-1 ]] = [ [ 1; 0; 0; 1; ½; ½ ]] .Since p'_y,±1 does not adhere to the Bloch sphere condition prescribed by Eq. (<ref>), the associated states ρ̂'_y,±1 inevitably violate the NM postulate. Indeed, the corresponding NM violation measure γ_y,±1 takes on positive values, namelyγ_y,±1 = √(2) - 1/2 .In this case, under appropriate selection of B_a in Eq. (<ref>), solving Eq. (<ref>) reveals that some negative probability components p_a(s,t) appear at a future time t. The negativity of the probability components p_a(s,t) therefore serves as evidence illustrating the violation of NM, and thus ultimately LGIs. § N-LEVEL SYSTEMS In a single-qubit system, every point contained within the Bloch sphere corresponds to a quantum state that can be realized in experiments. Analogous to the Pauli operators σ̂_a for N=2, one can introduce N^2-1 observables λ̂_n to describe the dynamics of generic N-level quantum systems, cf. Eqs. (<ref>)–(<ref>). The N-level generalization of the Bloch sphere defining inequality Eq. (<ref>) is∑_n=1^N^2-1⟨λ_n⟩^2 ⩽ N-1 .Analogous to the single-qubit case with N=2, a saturation of this inequality corresponds to a pure quantum state. However, in contrast to the single-qubit case, a subset of the set of points {⟨λ_1⟩, …, ⟨λ_N^2-1⟩} that satisfy the relation prescribed by Eq. (<ref>) does not describe valid quantum states <cit.>. Consider, for instance, the quantum state described by ρ̂_1 = ∑_k p_k| k ⟩⟨ k |⩾ 0. Then, the vector λ⃗_1 = [ ⟨λ_1⟩ _1, …, ⟨λ_N^2-1⟩_1] defined by ⟨λ_n⟩_1 = [ λ̂_nρ̂_1] provides the Bloch representation of ρ̂_1, i.e. ρ̂_1 = 1/N( Î + ∑_n=1^N^2-1⟨λ_n⟩_1λ̂_n) .Another quantum state ρ̂_2 given byρ̂_2 = 1/N( Î + ∑_n=1^N^2-1⟨λ_n⟩_2λ̂_n)should satisfy[ ρ̂_1 ρ̂_2 ]= ∑_k p_k ⟨ k |ρ̂_2 | k ⟩= 1/N( 1 + ∑_n=1^N^2-1⟨λ_n ⟩_1 ⟨λ_n ⟩_2 ) ⩾ 0 .Hence λ⃗_2 = [ ⟨λ_1⟩_2, …, ⟨λ_N^2-1⟩_2 ] obeys the following necessary condition:λ⃗_1·λ⃗_2 = ∑_n=1^N^2-1⟨λ_n⟩_1⟨λ_n⟩_2⩾ -1 .It follows from Eq. (<ref>) that, when ρ̂_1 is a pure state satisfying ∑_n=1^N^2-1⟨λ_n⟩^2_1 =N-1, a pure quantum state ρ̂_2 which meets the condition ⟨λ_n⟩_2 = - ⟨λ_n⟩_1 does not exist for N ⩾ 3. Consequently, the higher-dimensional generalization of the Bloch sphere inequality given in Eq. 
(<ref>) does not suffice to guarantee physically viable quantum states described by a positive semidefinite operator ρ̂⩾ 0. The implication here is that the state space characterized by [ ⟨λ_1⟩, …, ⟨λ_N^2-1⟩] is a rather intricate manifold. Therefore, for large values of N it is in general quite difficult to determine whether a given vector [ ⟨λ_1⟩, …, ⟨λ_N^2-1⟩] corresponds to a valid quantum state or not since this requires the numerical diagonalization of ρ̂'(0). Similarly, for large N it is a numerically difficult task to check if a given probability vector of the form of Eq. (<ref>) describes a valid quantum state or not (recall that such a vector comprises N(N^2-1) components). In order to alleviate this difficulty, we propose to adopt a machine learning method. § STATE CLASSIFICATIONWITH MACHINE LEARNING The aim of our proposed machine learning method is to train an algorithm in the classification of quantum states as either NM-conforming or NM-violating based on their associated probability vectors of the form given by Eq. (<ref>). Such vectors consist of (N^2-1) probability tuples [P_1, …, P_N^2-1], each containing N entries [p_n(1,t), …, p_n(N,t)] satisfying [cf. Eqs. (<ref>) and (<ref>)]∑_k^N p_n(k,t) = 1 ,p_n(k,t)>0 ∀n,kwith n ∈{ 1, …, N^2-1 } and k ∈{ 1, …, N }. The first step in our approach is the generation of training data that can be used for supervised learning. Since our goal is to distinguish quantum states that conform to NM from those that violate NM, we generate two distinct probability vector training data sets; the first containing exclusively vectors associated with NM-conforming states, and the second containing exclusively vectors associated with NM-violating states. §.§ Training Data GenerationFor any arbitrary N ⩾ 2, we work with the generalized Gell-Mann matrix basis (GGMMB) <cit.>[Chapter 3 of Ref. <cit.> provides an overview of the relevant properties.] and generate pseudo-density states according to the spectral decomposition of Eq. (<ref>), where the projectors P̂'_m(n,k) are constructed from pseudo-randomly generated N-dimensional vectors that are orthonormalized via the Gram-Schmidt process. The individual components of the probability vectors are then generated via Eq. (<ref>), where this time the projectors P̂_n(k) are those associated with the elements of the GGMMB. The difference in the generation of probability vectors for the NM-conforming vs. the NM-violating data set lies in the pseudo-random generation of the coefficients p'_m(n,k) of Eq. (<ref>): while the normalization condition Eq. (<ref>) is always satisfied for each of the N probability tuples in both data sets, negative values p'_m(n,k)<0 are permitted in the generation of probability vectors associated with quantum states that violate the NM postulate to reflect the fact that the spectrum of the post-measurement density matrix [cf. Eq. (<ref>)] may contain negative eigenvalues (and thus cannot describe a physically valid quantum state if NM is assumed to hold). This may ultimately result in probability vectors with negative components p_n(k,t)<0. However, since this contradicts the second requirement stipulated by Eq. (<ref>), such vectors are then discarded, and only those satisfying both conditions are passed onto the NM-violating training data set. Fig. <ref> illustrates the distribution of the NM violation measure γ_n,k [Eq. (<ref>)] for 1,000 pseudo-randomly generated probability vectors in the NM-violating data sets of N ∈{ 2,3,4,5 }. 
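To make the above data-generation recipe concrete, the following minimal NumPy sketch mimics it for the smallest case N = 2, where the generalized Gell-Mann matrix basis reduces to the three Pauli matrices. This is an illustrative sketch and not the authors' Mathematica implementation described in the next subsection; the sampling distributions for the spectral weights, the acceptance thresholds, and all variable names are our own assumptions.

import numpy as np

rng = np.random.default_rng(seed=0)

def random_projectors(N):
    # Rank-1 projectors from a pseudo-random orthonormal basis (QR plays the role of Gram-Schmidt)
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    Q, _ = np.linalg.qr(A)
    return [np.outer(Q[:, m], Q[:, m].conj()) for m in range(N)]

def pauli_basis():
    # For N = 2 the generalized Gell-Mann matrix basis reduces to the Pauli matrices
    return [np.array([[0, 1], [1, 0]], dtype=complex),
            np.array([[0, -1j], [1j, 0]], dtype=complex),
            np.array([[1, 0], [0, -1]], dtype=complex)]

def probability_vector(rho, observables):
    # p_n(k) = Tr[ P_n(k) rho ], with P_n(k) the eigenprojectors of the n-th basis observable
    p = []
    for lam in observables:
        _, vecs = np.linalg.eigh(lam)
        for k in range(vecs.shape[1]):
            P = np.outer(vecs[:, k], vecs[:, k].conj())
            p.append(float(np.real(np.trace(P @ rho))))
    return np.array(p)

def sample(N, violating):
    # One training vector: spectral weights sum to one; negative weights occur only in the violating class
    while True:
        if violating:
            w = rng.uniform(-0.3, 1.0, size=N)
            if w.sum() < 0.2 or np.all(w >= 0):
                continue
            w = w / w.sum()
        else:
            w = rng.dirichlet(np.ones(N))
        rho = sum(wm * P for wm, P in zip(w, random_projectors(N)))
        p = probability_vector(rho, pauli_basis())
        if np.all(p >= 0):   # probability vectors with negative components are discarded
            return p

conforming_set = [sample(2, violating=False) for _ in range(1000)]   # desired output value 0
violating_set = [sample(2, violating=True) for _ in range(1000)]     # desired output value 1

For larger N, the hard-coded Pauli basis would have to be replaced by the full set of N^2 - 1 generalized Gell-Mann matrices.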
§.§ Supervised Learning and Probability Vector Classification Supervised learning is a class of machine learning algorithms that infer a function from labeled training data. Training data sets are typically composed of pairs in which an input object is assigned a desired output value. For our purposes, the supervised learning task corresponds to a classification task, and the inferred function is a classifier function 𝒞, i.e. a map out = 𝒞(in) between input objects (i.e. probability vectors) and output values (i.e. the state classification). An example implementation of our proposed machine learning methodology is openly available in the Github repository listed as Ref. <cit.>, including a separate file documenting the statistical distributions underlying the pseudo-random generation of the coefficients p'_m(n,k) for both training data sets. The code provided in this repository is written in Mathematica 13 <cit.>, and the supervised learning task is performed by Mathematica's built-in “Classify[]” function[A comprehensive documentation of the machine learning techniques available in Mathematica is provided in Ref. <cit.> and accessible online at https://www.wolfram.com/language/introduction-machine-learning/.]. In our example implementation, the training data set is generated such that all probability vectors from the NM-conforming [NM-violating] data set are assigned the desired output value 0 [1]. The resulting classifier function then takes an N(N^2-1)-dimensional probability vector as input and returns either 0 or 1 based on whether it has determined the state associated with the input vector to be of the NM-conforming or the NM-violating type. As a sanity check and to test the robustness of the classifier, we can feed the classifier function probability vectors for which the classification is known a priori (e.g. through independent manual determination of NM conformity) and evaluate its performance based on the accuracy of its output classifications. We stress that the sole intention of the provided code is to serve as a proof-of-principle implementation for our proposed machine learning methodology. As such, several refinements and extensions will be required in order to model real experimental applications and/or realistic large-N systems. 
While the negativity of quasiprobabilities has previously been considered as an indicator of quantumness in the Wigner–Weyl representation <cit.>, the difference in our approach is that all initial probabilities are always nonnegative for both classical and quantum dynamics. This allows us to pinpoint what physical consequences the requirement of no backreaction that is encoded in the NM postulate entails. The extent to which NM is violated by quantum dynamics is ultimately determined by the evolution of probability distributions associated with the observables of the system under consideration and can be quantified using the measure defined in Eq. (<ref>). For single-qubit systems, we derive its explicit relationship to the backreaction of a measurement (i.e. the deviation from the NM postulate) [Eqs. (<ref>), (<ref>), and (<ref>)]. As motivated by our argumentation in Sec. <ref>, we expect our scheme to be more efficient computationally compared to conventional equation-solving approaches, particularly for the dynamics of large-N systems. In this regime, the exploration of machine learning techniques (especially big data methods) appears to hold a lot of promise. The explicit treatment of large-N systems (e.g. N ≃ 2^10^3-2^10^4 in condensed matter systems) including their possible experimental realization(s) will be considered in future works. § ACKNOWLEDGEMENTSThis work was supported by an OIST SHINKA Grant. MH is supported by Grant-in-Aid for Scientific Research (Grant No. 21H05188, 21H05182, and JP19K03838) from the Ministry of Education, Culture, Sports, Science, and Technology (MEXT), Japan. SM is supported by the Quantum Gravity Unit of the Okinawa Institute of Science and Technology (OIST) and would like to thank the Particle Theory and Cosmology Group at Tohoku University for their hospitality over the course of his research visit. 99 LG:85 A. J. Leggett and A. Garg, https://doi.org/10.1103/PhysRevLett.54.857Phys. Rev. Lett. 54, 857 (1985). M:85 N. D. Mermin https://doi.org/10.1063/1.880968Phys. Today 38, 38 (1985). ELN:rev:14 C. Emary, N. Lambert, and F. Nori, https://doi.org/10.1088/0034-4885/77/1/016001Rep. Prog. Phys. 77, 016001 (2014). B:64 J. S. Bell, https://doi.org/10.1103/PhysicsPhysiqueFizika.1.195Physics 1, 195 (1964). CHSH:69 J. F. Clauser, M. A. Horne, A. Shimony, and R. A. Holt, https://doi.org/10.1103/PhysRevLett.23.880Phys. Rev. Lett. 23, 880 (1969). AGR:81 A. Aspect, P. Grangier, and G. Roger, https://doi.org/10.1103/PhysRevLett.47.460Phys. Rev. Lett. 47, 460 (1981). H:15 B. Hensen et al., https://doi.org/10.1038/nature15759Nature 526, 682 (2015). H:16 J. J. Halliwell, https://doi.org/10.1103/PhysRevA.93.022123Phys. Rev. A 93, 022123 (2016). HBLO:19 J. J. Halliwell, H. Beck, B. K. B. Lee, and S. O'Brien, https://doi.org/10.1103/PhysRevA.99.012124Phys. Rev. A 99, 012124 (2019). MNY:22 A. Matsumura, Y. Nambu, and K. Yamamoto, https://doi.org/10.1103/PhysRevA.106.012214Phys. Rev. A 106, 012214 (2022). BHM:18 S. Bose, D. Home, and S. Mal, https://doi.org/10.1103/PhysRevLett.120.210402Phys. Rev. Lett. 120, 210402 (2018). MHL:21 S. Majidy, J. J. Halliwell, and R. Laflamme, https://doi.org/10.1103/PhysRevA.103.062212Phys. Rev. A 103, 062212 (2021). G:11 M. E. Goggin et al., https://doi.org/10.1073/pnas.1005774108Proc. Natl. Acad. Sci. U.S.A. 108, 1256 (2011). R:15 C. Robens et al., https://doi.org/10.1103/PhysRevX.5.011003Phys. Rev. X 5, 011003 (2015). K:16 G. C. Knee et al., https://doi.org/10.1038/ncomms13253Nat. Commun. 7, 13253 (2016). F:16 J. A. Formaggio, D. I. 
Kaiser, M. M. Murskyj, and T. E. Weiss. https://doi.org/10.1103/PhysRevLett.117.050402Phys. Rev. Lett. 117, 050402(2016). M:19 S. Majidy et al., https://doi.org/10.1103/PhysRevA.100.042325Phys. Rev. A 100, 042325 (2019). GP:95 S. Goldstein andD. N. Page, https://doi.org/10.1103/PhysRevLett.74.3715Phys. Rev. Lett. 74, 3715 (1995). K:03 G. Kimura, https://doi.org/10.1016/S0375-9601(03)00941-1Phys. Lett. A 314, 339 (2003). G:62 M. Gell-Mann, https://doi.org/10.1103/PhysRev.125.1067Phys. Rev. 125, 1067 (1962). BK:08 R. A. Bertlmann and P. Krammer, https://doi.org/10.1088/1751-8113/41/23/235303J. Phys. A: Math. Theor. 41, 235303 (2008). GithubRep https://github.com/s-murk/ProbabilityVectorMLhttps://github.com/s-murk/ProbabilityVectorML Mathematica13 Wolfram Research, Inc., Mathematica, Version 13.0, Champaign, IL (2021). B:21 E. Bernard, Introduction to Machine Learning (Wolfram Media, Inc., 2021).
http://arxiv.org/abs/2312.16281v1
{ "authors": [ "Masahiro Hotta", "Sebastian Murk" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20231226190000", "title": "Probability vector representation of the Schrödinger equation and noninvasive measurability for Leggett-Garg inequalities" }
^1Department of Physics and Computer Science, Medgar Evers College of City University of New York, Brooklyn, NY 11225, USA ^2Department of Physics and Astronomy, Hunter College of the City University of New York, 695 Park Avenue, New York, New York 10065, USA ^3Donostia International Physics Center (DIPC), P de Manuel Lardizabal, 4, 20018 San Sebastian, Basque Country, Spain ^4Space Vehicles Directorate, US Air Force Research Laboratory, Kirtland Air Force Base, New Mexico 87117, USA We have calculated the dynamical polarization, plasmons and damping rates in semi-Dirac bands (SDB's) with zero band gap and a half-linear, half-parabolic low-energy spectrum. The obtained plasmon dispersions are strongly anisotropic and demonstrate some crucial features of both the two-dimensional electron gas and graphene. Such gapless energy dispersions lead to a localized area of undamped and low-damped plasmons in a limited range of frequencies and wave vectors. The calculated plasmon branches demonstrate an increase of their energies for a finite tilting of the band structure and a fixed Fermi level, which could be used as a signature of a specific tilted spectrum in a semi-Dirac band. Dynamical polarization function, plasmons, their damping and collective effects in semi-Dirac bands Gabrielle Ross-Harvey^1, Andrii Iurov^1[E-mail contact: [email protected], [email protected] ], Liubov Zhemchuzhna^1, Godfrey Gumbs^2,3, and Danhong Huang^4 January 14, 2024 ============================================================================================================================================================================== § INTRODUCTION Since the discovery of graphene and the "graphene miracle", two-dimensional materials with a Dirac cone and the investigation of their various electronic properties have become a crucial part of condensed matter physics. These materials include the recently discovered α-T_3 model with a flat band,<cit.> anisotropic and tilted 1T'-MoS_2, <cit.> semi-Dirac materials <cit.> and materials with Rashba spin-orbit coupling.<cit.> Anisotropy and an energy gap in the band structure of Dirac cone materials can also be induced by applying external off-resonance irradiation.<cit.> Plasmons, or collective quantum density oscillations in an interacting electron system, represent one of the most important directions in low-dimensional physics and have been investigated in great depth for graphene <cit.>, graphene with a finite band gap and buckled honeycomb lattices<cit.>, graphene-based heterostructures<cit.> at both zero and finite temperatures,<cit.> double and multi-layer systems<cit.> as well as in specific low-dimensional structures, such as fullerenes<cit.> and nanoribbons.<cit.> Specifically, there has been a large number of papers intended to study the plasmons and electronic transport in the presence of a magnetic field. 
<cit.> Considerable attention has also been directed to how the plasmons are excited,<cit.> as well as to their lifetime and instability.<cit.> It is also important to investigate how the plasmons in any new materials are affected by the most specific and distinguished features of their electronic band structure, such as a flat dispersionless band in α-T_3 materials<cit.> and plasmons in twisted graphene bilayers.<cit.> Specifically, the plasmons have been investigated in a large number of newly discovered Dirac and semi-Dirac materials with anisotropy<cit.> and tilting (and, possibly, over-tilting), <cit.> such as screening in 8-Pmmn borophene,<cit.> tilted 1T'-MoS_2,<cit.> hyperbolic plasmons in massive tilted two-dimensional Dirac materials with linear dispersions in which the mass is induced by a band gap,<cit.> optical properties in tilted Dirac systems,<cit.> kinks in the plasmons in tilted two-dimensional Dirac systems, <cit.> hyperbolic plasmon modes in borophene,<cit.> as well as in triple-component fermionic systems.<cit.> The remaining part of the present paper is organized as follows. In Section <ref>, we analyze the low-energy Hamiltonian of semi-Dirac bands and the resulting energy dispersions, as well as derive the corresponding wave functions. We discuss the peculiar properties of the energy spectrum in SDB's – half-linear and half-parabolic – in all detail, and find the doping density required to achieve a certain Fermi level. Next, in Section <ref>, we consider the polarization function for semi-Dirac bands, the specific overlap factors, the dielectric function and the plasmon dispersions together with their damping rates, and provide a detailed discussion of our obtained numerical results. Finally, the concluding remarks are made in Section <ref>. § GENERAL FORMALISM The low-energy dispersions of semi-Dirac bands (SDB's) next to the zero-energy Dirac point are linear in one direction ∽ v_F k_y and quadratic in the other ∽ a k_x^2. Apart from that, a finite tilting of the energy bands in the y-direction could also be present. As a result, for the low-energy states in SDB's we obtain the following Hamiltonian Ĥ_ξ ( k) = ħτ v_F k_y Σ̂_0^(2) + a_0 ħ^2 k_x^2 Σ̂_x^(2) + ħ v_F k_y Σ̂_y^(2), where Σ̂_0^(2) is a 2 × 2 unit matrix, Σ̂_x^(2) and Σ̂_y^(2) are Pauli matrices, and the parameter a_0 = 1/(2 m^⋆) plays the role of the inverse effective mass. The tilting parameter τ is essentially the ratio between the Fermi velocities for the diagonal and off-diagonal linear terms in Hamiltonian (<ref>), which could be either zero or finite, and even exceed unity; v_F = 1.0 · 10^6 m/s is the Fermi velocity in graphene. The explicit matrix form of Hamiltonian (<ref>) is Ĥ_ξ ( k |a, τ) = ħ {[ v_F τ k_y ħ v_F a k_x^2 - i v_F k_y; ħ v_F a k_x^2 + i v_F k_y v_F τ k_y ]}, where we used the notation a = a_0 ħ. The energy spectrum of semi-Dirac bands is obtained as the eigenvalues of Hamiltonian (<ref>) in the following form ε_λ, ξ (k̅ |a, τ) = ξ τ v_F k_y + λ √((ħ v_F k_y)^2 + (a k_x^2)^2). The corresponding wave functions are Ψ_λ = ± 1 (k̅ |a, τ) = 1/√(2) [ [ 1; λ a k_x^2 + i v_F k_y/√((v_F k_y)^2 + (a k_x^2)^2) ]] , meaning that the diagonal term ξτ v_F k_y has no effect on the wave function, which also appears to be valley-degenerate. 
Introducing a vector E̅ (k̅) = [ [ E_x (k̅); E_y (k̅) ]] = [ [ Re(a k_x^2 + i v_F k_y); Im(a k_x^2 + i v_F k_y) ]] = ( [ a k_x^2; v_F k_y ]) so that E(k̅ |a, τ) = |E̅|(k̅ |a, τ) = ε_λ, ξ (k̅ |a, τ) - ξ τ v_F k_y, we can introduce an angle Θ_E̅ (k̅ |a, τ) = tan^-1( E_y/ E_x ) = tan^-1[ (v_F k_y)/(a k_x^2) ] and rewrite wave function (<ref>) as Ψ_λ = ± 1 (k̅ |a, τ) = 1/√(2) [ [ 1; λ e^iΘ_E̅ (k̅ |a, τ) ]] . We note that the simplified representation (<ref>) of the wave function in semi-Dirac bands is given in terms of an angle Θ_E̅ (k̅ |a, τ) associated with vector (<ref>), but not directly with the components of the wave vector k̅. The most basic and informative two-dimensional plots of the band structure of semi-Dirac bands for various values of the tilting parameter and the inverse effective mass a_0 are presented in Fig. <ref>. As expected, if one component of the electron momentum is taken to be zero, the dependence on the other component is either linear (for k_x = 0) or parabolic (for k_y = 0). The band structure does not reveal any energy band gap. We also see that a finite value of τ leads to a tilting of the spectrum in the k_y-direction. The shape and size of the "horizontal" constant-energy cuts of the dispersions ε_λ, ξ (k̅ |a, τ) shown in Fig. <ref> feature the Fermi surface: the boundary between the occupied and free electronic states of semi-Dirac bands. The size of those surfaces clearly depends on the tilting. Once the parameter τ is increased, the surfaces become extended in the k_y-direction (corresponding to θ_ k = π/2 and 3 π/2). For τ = 1, which we are going to refer to as critical tilting, the surface becomes infinitely large, as it does for any τ > 1. The inverse effective mass a makes those Fermi surfaces less circular and more anisotropic. In Fig. <ref>, we also plot the vertical (k_x = const and k_y = const) cuts of the dispersions (<ref>), which reveal all the specific features of the non-trivial band structure in SDB's, such as their very specific shapes, distinct from those in graphene, which stem from the non-linear dispersion in the x-direction. The tilting can be zero, finite (0 < τ < 1), critical (τ = 1) or even over-critical (τ > 1), the latter making one of the slopes in the k_y-direction negative; the cuts also reveal the substantial anisotropy and the overall difference between the k_x- and k_y-dispersions of the semi-Dirac bands. We also demonstrate the Fermi surface in Fig. <ref>, showing both occupied and unoccupied states and the interface between them.[ An informative description with a few recipes for finding an intersection curve for two given surfaces using Wolfram Mathematica, similar to what was used here, can be found at https://community.wolfram.com/groups/-/m/t/177994] It is clearly seen that for increasing tilting, the surface becomes extended in the y-direction, increases in size, and becomes unbounded and infinite for τ≥ 1. For critical or over-critical tilting τ, it is possible to observe the Fermi surface in both the valence and conduction bands at the same time, in contrast to most known Dirac materials. A finite-size Fermi surface is also possible even for zero doping, E_F = 0.§ POLARIZATION FUNCTION AND PLASMONS IN SEMI-DIRAC BANDS The plasmon branches are obtained as the locations on the (q,ω)-plane where the dielectric function ϵ(q,ω |E_F) of a material becomes equal to zero. 
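Given any numerical routine for the dielectric function (its RPA form is specified in the following equations), this zero-crossing condition can be traced numerically. The short Python sketch below is a generic bisection-based illustration with made-up names and a made-up placeholder function; it is not the procedure used to produce the figures of this paper.

import numpy as np

def plasmon_branch(eps_re, q_values, omega_grid):
    # For each wave vector, bracket a sign change of Re[eps] on the omega grid and bisect it
    branch = []
    for q in q_values:
        vals = np.array([eps_re(q, w) for w in omega_grid])
        idx = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
        if idx.size == 0:
            branch.append(np.nan)          # no zero of Re[eps] in this frequency window
            continue
        lo, hi = omega_grid[idx[0]], omega_grid[idx[0] + 1]
        for _ in range(60):                # bisection refinement of the zero crossing
            mid = 0.5 * (lo + hi)
            if np.sign(eps_re(q, mid)) == np.sign(eps_re(q, lo)):
                lo = mid
            else:
                hi = mid
        branch.append(0.5 * (lo + hi))
    return np.array(branch)

# Toy placeholder standing in for the real part of the RPA dielectric function defined below
eps_toy = lambda q, w: 1.0 - q / (w**2 + 0.05)
omegas = plasmon_branch(eps_toy,
                        q_values=np.linspace(0.01, 1.0, 50),
                        omega_grid=np.linspace(0.01, 3.0, 300))

In an actual calculation one would additionally require the imaginary part of the polarization function to be negligible at the located zero, as discussed below, so that the traced branch corresponds to an undamped or low-damped plasmon.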
Within the random phase approximation, the dielectric function is calculatedϵ(q̅, ω |E_F) = 1 - v_c(q̅)Π(q̅, ω |E_F) , where v_c(q) = e^2 /(2 ϵ_0 ϵ_r q) is a Fourier-transformed Coulomb potential of the electron-electron interaction in a two-dimensional lattice, ϵ_r is the relative dielectric constant of the SDB sheet which essentially depends on the dielectric substrate andΠ^(0)(q,ω |E_F) is the polarization function.Within the random phase approximation, the polarization function is calculated in the following wayΠ^(0)(q_x, q_y, ω |E_F) = 1/4 π^2 ∑_ξ = ± 1∫ d k_x∫ d k_y∑_λ,λ' = ± 1 O_λ_1,λ_2 (k̅, q̅ | a, τ )× ×{ n_F [ε_λ_1, ξ (k̅ |a, τ)| μ(T,E_F), T] -n_F [ε_λ_2, ξ (k̅ + q̅ |a, τ)| μ(T,E_F), T] /ħω + i 0^ + ε_λ_1, ξ (k̅ |a, τ) -ε_λ_2, ξ (k̅ + q̅ |a, τ) }, where n_F [ε_λ_1, ξ (k̅ |a, τ)| μ(T,E_F), T] = (1+ exp[(ε_λ_1, ξ (k̅ |a, τ) - μ)/(k_B T)])^-1 is the Fermi-Dirac distribution function such that for a zero temperature it is reduced to a Heaviside step function n_F [ε_λ_1, ξ (k̅ |a, τ)| μ(T,E_F), T] ⟶Θ[E_F - ε_λ_1, ξ (k̅ |a, τ) ].The overlap factor O_λ_1,λ_2 (k̅, q̅ |a, τ) is defined as the wave function overlap between the electron states in different bands and is calculated asO_λ_1,λ_2 (k̅, q̅ |a, τ) = ⟨Ψ_λ_1( k̅ |a, τ ) |Ψ_λ_2(k̅ + q̅ |a, τ ) ⟩ Using representation (<ref>) of wave functions (<ref>) corresponding to wave vectors k̅ and k̅ + q̅, we immediatelyrewrite overlap factor (<ref>) asO_λ_1,λ_2 (k̅, q̅ |a, τ) = 1/2{ 1 + λ_1 λ_2a k_x^2 + i v_F k_y/(ħ v_F k_y)^2 + (a k_x^2)^2 a (k_x + q_x)^2 + i v_F (k_x + q_x)/[ħ v_F (k_y + q_y)]^2 + [a (k_y + q_y)^2]^2} = = 1/2{ 1 + λ_1 λ_2 cosΘ_( E_k̅, E_k̅ + q̅) } =1/2{ 1 + λ_1 λ_2E_k̅ + E_q̅ cosΘ_( E_k̅, E_q̅) /√(E_k̅^2 + E_q̅^2 + 2 E_k̅E_q̅cosΘ_( E_k̅, E_q̅) )} Overlap factors O_λ_1,λ_2 (k̅, q̅ |a, τ) shown in Fig. <ref> demonstrate a non-trivial dependence on both the magnitude and direction of wave vector shift q̅. which is different from that in graphene and most of the other known materials. However,overlap O_λ_1,λ_2 (k̅, q̅ |a, τ) in Eq. (<ref>) could be presented in terms of a single angle Θ_( E_k̅, E_k̅ + q̅)and, therefore, the inter- (λ_1 λ_2 = -1) and intra-band (λ_1 λ_2 = 1) overlaps demonstrate completely opposite angular behavior. The real and imaginary parts of polarization function (<ref>) are presented in Figs. <ref> and <ref>. The plasmon dispersions are obtained from equation (<ref>) as the zeros of dielectric function ϵ(q̅, ω |E_F).The real part of polarization function Π^(0)(q_x, q_y, ω |E_F) plays a crucial role in shaping out the plasmon branches, while the imaginary part plays a crucial role in determining the plasmon damping and (inverse) lifetime since a plasmon could be only considered stable if Im[ Π^(0)(q_x, q_y, ω |E_F) ] ⟶ 0and |ϵ(q̅, ω |E_F) |⟶ 0.We see that for a finitetransverse momentum component q_ythe results for both real and imaginary parts of polarization functionΠ^(0)(q_x, q_y, ω |E_F) are changed significantly,but in both cases the real part of the polarization function could be found bothpositive and negative which ensures that the plasmon actually exists.Since the energy dispersions of semi-Dirac bands have no energy gap,the region of an undamped plasmon is localized to the relatively small values of the wave vector q̅ and frequency ω. At the same time, we clearly see a well-defined plasmon with zero or small Im[ Π^(0)(q_x, q_y, ω |E_F) ]. A curved and nearly parabolic boundary of the particle-hole excitation region clearly resembles the plasmons in a two- dimensional electron gas (2DEG). 
This situation is expected because of the parabolic dispersions of SDB's in the k_x-direction. The plasmon branches presented in Fig. <ref> demonstrate a standard ∽√(q) behavior for q_y = 0. However, for a finite q_y the branches are not monotonic and can even be decreasing with increasing q_x, with a clear minimum, which is the result of both tilting and anisotropy. One of the most interesting features of the plasmons in semi-Dirac bands is their dependence on the tilting τ. The plasmons in two-dimensional over-tilted Dirac bands with both linear dispersions and a gap were briefly analyzed in Ref. [yan2022anomalous], in which additional branches were reported for critical and over-critical tilting. However, little is known about the damping and the lifetimes of these plasmons. It is crucial to study the tilting in connection with the parabolic dispersions in semi-Dirac bands and the corresponding unique schematics of the electronic transitions. Most importantly, the electron doping density for a given Fermi level increases with increasing tilting, as does the area of the Fermi surface between the occupied and free electronic states. This situation is expected to lead to an increased frequency (energy) of the plasmon branches for a given wave vector q̅, which we indeed observe in Fig. <ref>. It is also interesting to see that the polarization functions in the regions of low-damped plasmons (small q) demonstrate qualitatively the same behavior for different values of the tilting parameter τ. § SUMMARY AND REMARKS In this paper, we have calculated the polarization function, plasmon dispersions and their damping for semi-Dirac bands. The energy band structure of this novel material is linear in the k_y-direction and parabolic along the k_x axis, and has a zero energy band gap. The band structure of semi-Dirac bands also allows a finite (0 < τ < 1), critical (τ = 1) and over-critical tilting (τ > 1), so that one of the slopes in the k_y-direction could become zero or even negative. As a result, the area of the Fermi surface – the contact surface between the free and occupied electron states at the Fermi level – could increase and even become infinite for τ→ 1, which substantially affects all the electronic and collective properties of SDB's. In this case, a Fermi interface and a plasmon also exist even for a zero Fermi level. We have obtained a well-defined, low-damped and anisotropic plasmon branch for both zero and finite tilting in semi-Dirac bands. The boundary of the particle-hole modes or single-particle excitation spectrum region – the locations in the (q, ω) plane in which a plasmon would decay into single-particle excitations – is represented by a curved, nearly parabolic line, similar to that in a two-dimensional electron gas (2DEG). A finite tilting τ > 0 leads to an increase of the plasma frequency for a given wave vector q and an extension of the region with low-damped plasmons. We are confident that our findings and, specifically, the demonstration of the existence of a low-damped plasmon in this new class of two-dimensional materials with non-trivial semi-Dirac dispersions and a previously unseen schematics of the electronic transitions will find numerous applications in the creation of new nanoscale electronic devices, as well as in general and theoretical condensed matter physics. A.I. was supported by the funding received from TRADA-53-130, PSC-CUNY Award # 65094-00 53. D.H. would like to acknowledge the Air Force Office of Scientific Research (AFOSR). G.G. 
was supported by Grant No. FA9453-21-1-0046 from the Air Force Research Laboratory (AFRL).
http://arxiv.org/abs/2312.16117v1
{ "authors": [ "Gabrielle Ross-Harvey", "Andrii Iurov", "Liubov Zhemchuzhna", "Godfrey Gumbs", "Danhong Huang" ], "categories": [ "cond-mat.mes-hall" ], "primary_category": "cond-mat.mes-hall", "published": "20231226165156", "title": "Dynamical polarization function, plasmons, their damping and collective effects in semi-Dirac bands" }
Degrees-of-freedom penalized piecewise regression ================================================= https://orcid.org/0000-0001-7007-7773 Stefan Volz, https://orcid.org/0000-0003-1427-0776 Martin Storath Lab for Mathematical Methods in Computer Vision and Machine Learning Technische Hochschule Würzburg-Schweinfurt Ignaz-Schön-Str. 11, 97421 Schweinfurt, Germany and https://orcid.org/0000-0002-4969-7609 Andreas Weinmann Department of Mathematics and Natural Sciences Hochschule Darmstadt Schöfferstraße 3, 64295 Darmstadt, Germany Preprint December 22, 2023 Many popular piecewise regression models rely on minimizing a cost function on the model fit with a linear penalty on the number of segments. However, this penalty does not take into account the varying complexities of the model functions on the segments, potentially leading to overfitting when models with varying complexities, such as polynomials of different degrees, are used. In this work, we improve on this approach by instead using a penalty on the sum of the degrees of freedom over all segments, called degrees-of-freedom penalized piecewise regression (DofPPR). We show that the solutions of the resulting minimization problem are unique for almost all input data in a least squares setting. We develop a fast algorithm which not only computes a minimizer but also determines an optimal hyperparameter – in the sense of rolling cross validation with the one standard error rule – exactly. This eliminates manual hyperparameter selection. Our method supports optional user parameters for incorporating domain knowledge. We provide an open-source Python/Rust code for the piecewise polynomial least squares case which can be extended to further models. We demonstrate the practical utility through a simulation study and by applications to real data. A constrained variant of the proposed method gives state-of-the-art results in the Turing benchmark for unsupervised changepoint detection. § INTRODUCTION Assume we are given the noisy samples y_i = g(t_i) of a function g: T → X at time points t_1 < … < t_N, where (X, d) is a metric space and T a real interval. In many practical applications, the signal g can be well described by a piecewise function of some sort. 
For example, piecewise constant signals appear in the reconstruction of brain stimuli <cit.>, single-molecule analysis <cit.>, cellular ion channel functionalities <cit.>, the rotations of the bacterial flagellar motor <cit.>, or medication use research <cit.>. Similarly, we find higher order piecewise polynomial functions in fuel consumption estimation in automotive engineering <cit.>, the modeling of human learning in quantitative psychology <cit.>, and studies of animal ecology based on biotelemetry <cit.>.

The goal is to find a functional description of the underlying piecewise signal. This leads to a piecewise regression problem (also known as segmented regression). Popular approaches for this task are partition-penalized models; they are based on a cost functional of the competing objectives of data fidelity cost and parsimony of the segments, weighted by a parameter γ > 0 that represents their relative importance:

min_P partition of {1,…,n} γ · # P + ∑_I ∈ P min_ω ∈ Ω d_I(ω, y).

Here d_I(ω, y) measures the goodness of fit of the model function ω to the data on the interval I; for example d_I(ω, y) = ∑_i ∈ I (ω(t_i) - y_i)^2, P is a partition into discrete intervals I of the time indices {1, …, n}, and # P denotes the number of segments in the partition. Typical instances of (<ref>) are piecewise polynomial models and piecewise smooth models with ℓ^p data fidelity <cit.>. The approach has been generalized to manifold-valued data <cit.> and indirectly measured data <cit.>. An important condition on d_I and Ω is that min_ω ∈ Ω d_I(ω, y) can be computed within a reasonable time frame.

A limitation of the penalty based on the number of segments # P is that it does not take into account the complexity of the models on the segments I. Each new segment has the same cost, independently of the degrees of freedom of the regression function on the segment. As an example, consider a piecewise polynomial regression: if the regression function on each piece is polynomial up to degree 3, then a regression function typically has degree 3, even if a polynomial of lower degree gives an almost as good but sparser representation. To address this, one may introduce an extra penalty on the degrees of freedom on the segments, but this comes at the cost of introducing a new hyperparameter which complicates model selection.

§.§ Proposed method and contributions

In this work, we study a model that penalizes not the number of segments but instead the degrees of freedom (dof) of the regression function on those segments:

min_P partition of {1,…,n}, λ: P → ℕ ∑_I ∈ P ( min_ω ∈ Ω^≤λ_I d_I(ω, y) + γ λ_I ).

Here Ω^≤ν is a space of regression functions with at most ν ∈ ℕ degrees of freedom (for example a ν-dimensional linear space), λ is a function assigning to each segment I of P a number of degrees of freedom λ_I = ν ≤ ν_max, and ν_max is an upper bound on the degrees of freedom per discrete interval. To fix the ideas, one may think of Ω^≤ν as the space of polynomials of maximum degree ν - 1. We refer to (<ref>) as the degrees-of-freedom penalized piecewise regression model (DofPPR). A particularly important instance of (<ref>) is the case of (weighted) least squares fitting:

min_P partition of {1,…,n}, λ: P → ℕ ∑_I ∈ P ( min_ω ∈ Ω^≤λ_I ∑_i ∈ I w_i |ω(t_i) - y_i|_2^2 + γ λ_I ),

where w ∈ ℝ^n is a weight vector with positive entries.

A crucial point of (<ref>) (and of (<ref>)) is that it involves a model selection procedure on each segment, and the problem can be regarded as minimizing the sum of the AIC scores over all segments.
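To make the objective concrete, the following small sketch evaluates the least squares instance (<ref>) – here with uniform weights and polynomial model spaces – for one fixed candidate partition and dof assignment. It is meant purely as an illustration of the cost structure; the function names are illustrative and not part of the reference implementation, and the minimization over all partitions and dof assignments is the subject of Section <ref>.

import numpy as np

def segment_cost(t, y, dof):
    # least squares cost of a polynomial with `dof` degrees of freedom
    # (i.e. degree dof - 1) fitted to the samples (t, y) of one segment
    coeffs = np.polyfit(t, y, dof - 1)
    return float(np.sum((np.polyval(coeffs, t) - y) ** 2))

def dofppr_objective(t, y, partition, dofs, gamma):
    # value of the DofPPR target for a partition (list of index arrays),
    # a dof assignment (one integer per segment) and a penalty gamma
    return sum(segment_cost(t[idx], y[idx], k) + gamma * k
               for idx, k in zip(partition, dofs))

# toy data: a constant piece followed by a linear piece
t = np.linspace(0.0, 1.0, 20)
y = np.where(t < 0.5, 1.0, 4.0 * t - 1.0)
partition = [np.arange(0, 10), np.arange(10, 20)]
print(dofppr_objective(t, y, partition, dofs=[1, 2], gamma=0.1))

For this toy signal the data terms essentially vanish, so the printed value is close to the pure penalty γ · (1 + 2) = 0.3; spending a third degree of freedom on the second segment would increase the objective without improving the fit, which is exactly the behavior the dof penalty is meant to encode.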
In contrast to the partition penalized model (<ref>), the DofPPR model (<ref>) takes into account the complexity of an estimator on the intervals, and this allows more flexible adaptation to signals with mixed complexities. Figure <ref> illustrates the difference between the partition penalized model (<ref>) and the DofPPR model (<ref>) for piecewise polynomial least squares regression.

Just as for the partition penalized models (<ref>), minimizers of the DofPPR model (<ref>) may not be unique for general data fidelity terms. For the least squares case (<ref>), we show under certain mild assumptions on the basis function systems and up to excluding interpolating parts that the minimizing partition P^*_γ is unique for almost all (w.r.t. Lebesgue measure) input data y ∈ ℝ^n (Theorem <ref>). The corresponding discrete function estimate ŷ (which is the evaluation of the corresponding piecewise regression function at the discrete sample points t_1, …, t_n) is shown to be unique even without these assumptions (Theorem <ref>).

For a fixed γ-parameter, problem (<ref>) can be solved by adapting standard dynamic programming methods <cit.>. A main contribution of this work is a new fast algorithm that provides the full regularization path for (<ref>); i.e., it provides the mapping γ ↦ (P^*_γ, λ^*_γ) for all γ ≥ 0. (In case of non-unique minimizers, (P^*_γ, λ^*_γ) denotes a distinguished minimizer.) The result is exact up to usual numerical errors. In Theorem <ref> we show that, under the assumption that a model of complexity m can be fit to data of length n in Ø(mn) time, the complete algorithm's time complexity is no worse than Ø(n^3m^2), where n is the length of the input time series and m is an upper bound on the (local) model complexity – only considering λ such that λ_I ≤ m for all I. In particular, for the case of least squares polynomial models of degrees no larger than m for real data, we show that the presented algorithm is in Ø(n^3m).

Selecting the hyperparameter γ is a common practical issue, as the value of the parameter is difficult to interpret. For the partition-penalized model a series of criteria have been proposed, for example information based criteria <cit.>, an interval criterion <cit.>, Bayesian methods <cit.>, and cross validation <cit.>. We adopt the latter approach, and – as we deal with time series – use rolling cross validation. To obtain more parsimonious results it is common to apply the one standard error rule (OSE) to cross-validated models rather than directly choosing the model with the strictly lowest score <cit.>. Remarkably, we are able to compute a globally optimal hyperparameter with respect to rolling cross validation and the one standard error rule exactly without noteworthy extra computational effort. The key observation is that γ ↦ CV(γ) is a piecewise constant function, and that the proposed algorithm admits its exact computation for all γ ≥ 0.

We provide a Python/Rust implementation on GitHub at <https://github.com/SV-97/pcw-regrs>, featuring a fully implemented piecewise polynomial weighted least squares method. The code can be easily modified to accommodate other model spaces and cost functions. One key benefit of our method is that it does not require additional tuning, such as adjusting model parameters or hyperparameters for an optimizer. However, it is easy to add optional parameters so experts can use their knowledge or impose restrictions on the models. This makes our method useful for both experts and non-experts.
Here, we implemented the maximum number of degrees of freedom ν_total ∈ {1, …, n} as an additional optional hyperparameter.

Potential applications include dimensionality reduction of time series data and serving as a foundation for a change point detector in exploratory data analysis. A simulation study using piecewise polynomials of mixed degrees shows the advantage of the proposed DofPPR with OSE parameter selection over partition penalized models. Additionally, we highlight the method's effectiveness in exploratory data analysis for real-world time series data. A simple variant of our method that uses the same constraint on the number of changepoints as the current state-of-the-art method gives state-of-the-art results on the Turing benchmark for unsupervised changepoint detection <cit.>.

§.§ Prior and related work

To the best of our knowledge, the piecewise regression model, as defined in Equation (<ref>), has not been previously explored, and as a result, its properties and associated algorithms have not been examined in prior research. However, there exists a range of approaches that are closely related.

Piecewise regression models have a long history. Early works employ a hard constraint on the maximum number of segments <cit.>. Partition-penalized (or jump-penalized) piecewise regression with a linear penalty on the number of segments as in (<ref>) was studied in various forms with different approximation spaces and different cost functions. The piecewise constant least squares regression case seems to be best understood: <cit.> has proven uniqueness results and <cit.> obtained several consistency results. Piecewise constant regression with robust ℓ^1 cost functions has been studied by <cit.>, <cit.>, and <cit.>. Piecewise polynomial models have been studied by <cit.> and <cit.>. <cit.> and <cit.> studied first and second order splines, respectively, where <cit.> obtained results on the jump localization and on robustness to noise. <cit.> have shown consistency for models with linear segment penalty as presented in (<ref>). For piecewise polynomial regression, <cit.> provide upper bounds on the localisation error and derive global information-theoretic lower bounds.

Regarding parameter selection, some references have been given earlier in Section <ref>. Among the common approaches are information criteria, such as AIC, BIC, mBIC, MDL <cit.>, and different kinds of cross validation <cit.>. <cit.> discuss pitfalls and solutions when using an interleaved cross validation procedure.

Fast algorithms for piecewise regression models have been studied in a series of works. <cit.> proposed a dynamic programming algorithm for piecewise linear estimation. <cit.> proposed a dynamic programming algorithm for the fixed number of segments in O(n^2 T(n)) where T(n) is the time needed for fitting a model to a segment. An O(n^2) algorithm for complexity penalized least squares regression with piecewise constant model functions was proposed in <cit.>. Further algorithms for the piecewise constant regression problems were proposed by <cit.>, <cit.>, <cit.>, <cit.>. The PELT method <cit.> uses a pruning strategy to improve the time complexity to O(n) if the number of discontinuities grows sufficiently fast with the length of the signal. A related pruning strategy with similar observations has been proposed by <cit.>. For the first order spline model, an algorithm of cubic worst-case complexity has been proposed by <cit.>, and been improved to quadratic complexity by <cit.>.
For the second order spline model, algorithms of quadratic complexity have been proposed by <cit.> and <cit.>, in a discrete and the continuous setting, respectively. <cit.> proposed an efficient algorithm for piecewise continuous linear regression. Further variants include ℓ^1 data terms <cit.>, manifold-valued data terms <cit.> and an inverse problem setup <cit.>.Regularization paths of partition penalizedmodels have been investigatedby <cit.> for the piecewise constant least squares model. They have shown that theparameter space for γ is partitioned into finitely many intervals which give identical solutions. Regularization paths of further piecewisemodels have been given by <cit.>,for the piecewise constant least absolute deviation model by <cit.>, and in a changepoint detection context by <cit.>. Further related are works on the regularization paths of the lasso<cit.>.Piecewise regression is closely related to changepoint detection. Detected changepoints can define the pieces for piecewise regression, and conversely, the segment boundaries of piecewise regression can be considered as changepoints. Besides the discussed piecewise regression methods, there are a series of other changepoint estimation methods.CUSUM-based methods <cit.>, Bayesian changepoint inference <cit.>, methods based on binary segmentation <cit.>, a narrowest over the threshold method,<cit.>, or a Bayesian ensemble approach<cit.>, nonparametric maximum likelihood approaches <cit.>, multiscale testing <cit.>, random forests <cit.>, to mention only a few. We refer to <cit.> and the references therein for a more detailed overview and a comparison over selected changepoint detection methods. We note that while both concepts deal with segmenting data, changepoint detection emphasizes locating points of change, while piecewise regression focuses on fitting separate models to each segment. In particular, changepoint detection methods that are not of piecewise regression type do not necessarily come with an associated regression function on the segments. §.§ Preliminaries and notationWe call an (ordered) set of the form I = { l, l+1 , …, r} a discrete interval, and we abbreviate it by l:r . Throughout the paper we assume that the elements of a partition on {1, …, n} are discrete intervals. We occasionally use the notation y_I = (y_i)_i ∈ I for extracting a subvector with indices I. We define the set of ordered partitions of a set A to be the set of all partitions of A, such that for any two elements I,I' of a partition we have either a < a' or a' < a uniformly for all a ∈ I, a' ∈ I'. We denote this set by (A) and frequently identify its elements and the elements of its elements with the correspondingly ordered tuples. For example ((1), (2), (3,4), (5,6)) = (1:1, 2:2, 3:4, 5:6) is an ordered partition of 1:6.For any ordered partition P of a discrete interval let Λ(P) := { (λ_I)_I ∈ Pλ_I ∈ 1: I} be the space of valid sequences of degrees of freedom for P. We usually denote elements of this set by λ. For any I ∈ P the value λ_I is a degree of freedom. _i ∈ I A_i denotes the disjoint union of a family of sets {A_i}_i ∈ I indexed by a set I. Any elements of _i ∈ I A_i is naturally in bijection to a pair (i, a) such that i ∈ I, a ∈ A_i.Throughout this paper, we assume that the data sites satisfy t_1 < t_2 < … < t_n. If the data does not satisfy this constraint, we may merge data sites into a single data point by weighted averaging over the y-values of coinciding t-values; see e.g. <cit.>. 
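The combinatorial objects just introduced are easy to make tangible in code. The following sketch is purely illustrative (the algorithm of Section <ref> never enumerates partitions explicitly): it lists the ordered partitions of 1:n into discrete intervals and, for a given partition P, the valid dof sequences Λ(P), i.e. all (λ_I)_I ∈ P with 1 ≤ λ_I ≤ # I.

from itertools import combinations, product

def ordered_partitions(n):
    # all ordered partitions of 1:n into discrete intervals l:r,
    # represented as lists of (l, r) pairs (1-based, inclusive);
    # each partition corresponds to a subset of the n - 1 possible cut positions
    result = []
    for k in range(n):
        for cuts in combinations(range(1, n), k):
            bounds = [0, *cuts, n]
            result.append([(bounds[i] + 1, bounds[i + 1])
                           for i in range(len(bounds) - 1)])
    return result

def dof_sequences(partition):
    # Lambda(P): all sequences (lambda_I) with 1 <= lambda_I <= #I
    return list(product(*(range(1, r - l + 2) for (l, r) in partition)))

P = ordered_partitions(4)
print(len(P))                            # 2^(4-1) = 8 ordered partitions of 1:4
print(P[1])                              # [(1, 1), (2, 4)]
print(dof_sequences([(1, 2), (3, 4)]))   # [(1, 1), (1, 2), (2, 1), (2, 2)]

Already for moderate n this brute-force enumeration is prohibitive – there are 2^(n-1) ordered partitions alone – which is one reason for the dynamic programming approach developed in Section <ref>.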
§.§ Organization of the paperIn Section <ref>, we prove uniqueness of the minimizer in a least squares setting. In Section <ref>, we develop and discuss an algorithm that solves the DofPPR problem and computes the regularization paths, and provide parameter selection strategies. In Section <ref>, we conduct numerical experiments with simulated and real data. Section <ref> is devoted to discussion and conclusion.Most of the proofs, auxiliary results and details on the implementation are provided in the supplementary material. § UNIQUENESS RESULTS FOR DEGREES-OF-FREEDOM PENALIZED PIECEWISE LEAST SQUARES REGRESSION Let us first discuss what we generally may expect regarding the uniqueness of an estimate. The ultimate goal is to obtain a piecewise regression function which is defined on the real line. The models (<ref>), (<ref>) dealwith partitions on the discrete data sites (t_i)_i=1,…, n represented by their indices 1:n. A naturalreal counterpart of a discrete partition is a partition on the real line into intervals such that the real segments contain exactly the data sites of the corresponding discrete setting. Clearly, there are infinitely many real partitions that fullfil this requirement. As their segments contain the same data sites, all instances give the same functional value in (<ref>) and (<ref>), and so they are indistinguishable w.r.t. to these models. Thus, we consider two real partitions as equivalent, if all their respective segments contain the same data sites. A natural representative of the equivalence class is that partition which has its breakpoints at the midpoints between the data sites. In the following, we use this correspondence, and the following uniqueness results for partitions always refer to uniqueness of the discrete partitions.Let us also briefly discuss the relation of an optimal solution (P^*_γ, λ^*_γ) of (<ref>), and a corresponding regression function. If the estimated functions ϕ^λ_I = _ω∈Ω^≤λ_Id_I( ω, y) are unique for all elements of (P^*_γ, λ^*_γ), then that solution and the estimated functions define a piecewise function, denoted by ω_γ^*, which is unique up to shifts of the break locations between the sampling points corresponding to the borders of two adjacent segments of the partition. The corresponding evaluations at the discrete data sites, denoted by ŷ and given by ŷ_i = ω_γ^*(t_i) for all i ∈{1, …, n}, are well-defined and independent of these possible shifts in the breakpoints. As for the partition-penalized model (<ref>), the minimizing (discrete) partitions of the DofPPR model (<ref>) are not unique for all combinations of input signals and model parameters. This can be seen in the following example: Consider the time series with sample times x = [0,1,2] and values y = [0,1,0] and least squares polynomial fitting functions. All of the four possible models with three degrees of freedom have vanishing energy: P=(0:2), λ=(3); P=(0:1, 2), λ=(2,1); P=(0, 1:2), λ=(1,2); P=(0,1,2), λ=(1,1,1), where P denotes the partition, and λ the corresponding sequence of degrees of freedom.We describe how to resolve ambiguous situations in the estimator in general in the algorithmic part in Section <ref>. For the particularly importantleast squares setting (<ref>), we are going to show uniqueness results for almost all input data. 
To this end, for fixed partitioning P and fixed degree sequence λ = (λ_I)_I ∈ P, we denote the target function in(<ref>) by G_P, λ(y) whereG_P, λ(y)= ∑_I ∈ Pmin_ω∈Ω^≤λ_I∑_i ∈ I| ω(t_i) - y_i|_2^2 + γλ_I = ∑_I ∈ Pmin_β∈^λ_I A_I, λ_Iβ - y_I^2_2 + γλ_I,and A_I, λ_I denotes the design matrix on the segment I. A standard computation yieldsG_P, λ(y) = ∑_I ∈ P (π_I, λ_I - id) y_I^2_2 + γλ_Iwith the projection matrix (also called hat matrix)π_I, λ_I = A_I, λ_I(A_I, λ_I^TA_I, λ_I)^-1 A_I, λ_I^Tin case A has full column rank. In general, π_I, λ_I= A_I, λ_I M, where M denotes the Moore-Penrose pseudoinverse of A_I, λ_I. Given data y, wefrequently use the hat notationŷ = ŷ_I, λ_I= π_I, λ_I y.(Note that although the corresponding least squares solutions β^∗ may be non unique,A_I, λ_Iβ^∗ is unique and thus ŷ = π_I, λ_I y is well-defined.) A particular case appears whenever π_I, λ_I equals the identity: then the data on the subinterval I remains unchanged, and the corresponding fit is interpolatory.Next, we represent, for fixed partitioning P and fixed degree vector λ = (λ_I)_I ∈ P, the solution operator y ↦ G_P, λ(y) for data y. The representing matrix is given byπ̅_P,λ = [ π_I_1, λ_I_10⋯0;0 π_I_2, λ_I_2⋯0;⋮⋱⋱⋮;0⋯0 π_I_k, λ_I_k ],and the solution operator of y ↦ G_P, λ(y) is given by y ↦ŷ = ŷ_P,λ =π̅_P,λy.As a first step, we show the following lemma.Let P, Q with P ≠ Q be two partitions, λ = (λ_I)_I∈ P, μ = (μ_J)_J ∈ Q. Then either (i) the set{ y ∈^n : G_P, λ(y) = G_Q, μ(y) }has Lebesgue measure zero,or (ii)G_P, λ - G_Q, μ = αfor some constant α. The proof is given in the supplementary material.Using this lemma, we may derive uniqueness in the a.e. sense for the estimate of the regression function. Interestingly, this does not imply uniqueness a.e. w.r.t. the partitioning which we will discuss afterwards.The minimizing (discrete) function ŷ of Equation (<ref>) given by ŷ_i = ω_γ^*(t_i) for all i ∈{1, …, n} equals _P,λŷ_P,λ = _P,λπ̅_P,λy (given via (<ref>)) and is unique for almost all input data. The proof is given in the supplementary material. (Please note that we have overloaded the hat notation several times; the precise implementation should be clear from the context.) Unfortunately, we cannot get an as general uniqueness a.e. statement concerning the partitions as the following simple example shows.As a system of basis functions, we take the monomials b_j = t^j-1. Consider data sites t=0,1,2. As data y_a,b,c, we consider the polynomial t → at^2 + b t + c (a,b,c ∈ℝ) sampled at the data sites. We choose the regularization parameter γ = 1. Given b,c, we find a_0(b,c) such that the optimal solution ŷ equals the datay_a,b,c for all a >a_0(b,c). The set of all a,b,c such thata >a_0(b,c) is a non-zero set. Unfortunately, also partitioning the data sites into (0) and (1,2) and choosing interpolating constant and linear polynomials yields another optimal partitioning on a nonzero set (with however the same function values.) Despite Example <ref>, under certain mild assumption on the basis function systems (specified below) and up to excluding interpolating parts (as detailed below) we are going to show uniqueness results for the partitionings for almost all input data in the following.Concerning the basis functions (b_j)_j ∈ℕ (cf. 
(<ref>)) we assume that * the design matrices A given by A_ij = (b_j(t_i)) with b_j, 1≤ j≤ n,and t_i, 1≤ i≤ m, have maximal rank for any n,mwith n≤ m,* the corresponding projections π_I, λ_I = A_I, λ_I(A_I, λ_I^TA_I, λ_I)^-1 A_I, λ_I^T given in (<ref>)corresponding to A are atomic for any n≤ m, in the sense that π_I, λ_I cannot be further decomposed into two or more block diagonal matrices.We see that this assumption is fulfilled by the class of polynomials.Consider polynomial design matricesA ∈^n × mwith m< n, where each polynomial b_j, j ∈ℕ, is of degree j-1, e.g. A_ij = t_i^j-1, then A fulfills Assumption <ref>.The proof is given in the supplementary material. Note that basis function systems not fulfilling (ii) ofAssumption <ref> are in a sense redundant in our setup. In fact, the corresponding projection π_I, λ_I acts independently on the subintervals of I corresponding to the blocks, and thus introduces a second notion of “change point” within the interval. This behavior may often be unwanted or undesirable when modeling. We need the specific notion of standard block decompositions of the representing matrices π̅_P,λ. We call a block decomposed matrix B=diag (B_i)(not necessarily one of the form (<ref>)) standard block decomposed, if each B_i is an atomic projection in the sense above. (Note that if B_i is one-dimensional the corresponding entry of the matrix equals 1.) Obviously, each matrix π̅_P,λ possesses a standard block decomposition (which however need not equal the form (<ref>)). Further, each standard block decomposition induces a partitioning P with intervals I corresponding to the blocks of the decomposition.We next relate π̅_P,λ with a standard decomposition and a corresponding partitioning P̃ (plus degree vector λ̃): (i) If π_I, λ_I is non-interpolatory,i.e., π_I, λ_I does not equal the identity, we leave the block π_I, λ_I unchanged, and include I into the partitioning P̃; we let the corresponding degree λ̃_̃Ĩ = λ_I.(ii) If π_I, λ_I is interpolatory,i.e., π_I, λ_I equals the identity, we replace the block by 1× 1 blocks (with entry 1.) Then, the standard block decomposition π̅_P,λ is of the formπ̅_P,λ = [ p_1 0 ⋯ 0; 0 p_2 ⋯ 0; ⋮ ⋱ ⋱ ⋮; 0 ⋯ 0 p_m ].where p_i is either a 1 × 1-matrix with entry 1 or equals one of the diagonal blocks π_I_i, λ_I_i of (<ref>) (at the same position in the matrix π̅_P,λ.) This decomposition induces a partitioning P̃ consisting of the intervals I of the following form: (i) either I is an interval of the partitioning J on which π_I, λ_I is non-interpolatory, i.e., π_I, λ_I does not equal the identity or (ii) I is of length 1. A corresponding degree vector λ̃ is obtained by letting λ̃_I = λ_I in case of (i), and λ̃_I = 1 in case of (ii) respectively.We formulate the following immediate observations as a lemma. Loosely speaking, it asserts that we may restrict to consider partitions corresponding to standard block decompositions.Assume that Assumption <ref> is fulfilled. Consider a partitioning P and degree vector λ = (λ_I)_I ∈ P, with corresponding solution operator π̅_P,λ given via (<ref>). Then, (i) the matricesπ̅_P,λ= π̅_P̃,λ̃,i.e., the solution operators for partitioning P and degree vector λ = (λ_I)_I ∈ P, and for the standard decompositionP̃ and degree vector λ̃are equal. (ii) The corresponding standard partitioning P̃ together with the degree vector λ̃is unique.The proof is given in the supplementary material.We get the following uniqueness result in particular concerning (discrete) partitions. We consider the minimization problem (<ref>). 
We assume that Assumption <ref> is fulfilled for the system of basis functions (b_i)_i ∈ℕ. Then, the minimizing (discrete) function ŷ of Equation (<ref>) given by ŷ_i = ω_γ^*(t_i) for all i ∈{1, …, n} is unique for almost all input data y (w.r.t. Lebesgue measure.) Moreover, segments I^∗ of a minimizing partitioning corresponding to non-interpolating estimation are unique for almost all input data y (w.r.t. Lebesgue measure.). The proof is given in the supplementary material.Specifing the basis function system to the class of polynomials, we may formulate the following corollary.We consider the minimization problem (<ref>) for a system of polynomials (b_i)_i ∈ℕ, where each polynomial b_i, i ∈ℕ,has precisely degree i. Then, the minimizing (discrete) function ŷ of Equation (<ref>) given by ŷ_i = ω_γ^*(t_i) for all i ∈{1, …, n} is unique for almost all input data y (w.r.t. Lebesgue measure.) Moreover, segments I^∗ of a minimizing partitioning corresponding to non-interpolating estimation are unique for almost all input data y (w.r.t. Lebesgue measure.)The proof is given in the supplementary material.We further obtain the following corollary which is important for implementation purposes as well.Consider the same situation as in Corollary <ref>. To compute a minimizer ofthe problem in(<ref>) we may restrict, for a given partition P, the search space to the maximal degreeν_max(I) = max(1, I - 1)for each interval I of the partitioning P.The proof is given in the supplementary material.§ FAST ALGORITHM FOR COMPUTING THEREGULARIZATION PATHS AND MODEL SELECTIONWe next develop a fastalgorithm for computing the full regularization paths for the DofPPR problem(<ref>), meaning a solver for all hyperparameters γ≥ 0 simultaneously. This enables us to perform model selection based on rolling cross validation (with and without OSE-rule) requiring only little additional computational effort. §.§ Computing the regularization paths The general strategy is as follows: We reduce the problem to solving a collection of constrained problems which in turn are solved using dynamic programming. Extending a result from <cit.>, we find that the optimal solution as a function of γ is piecewise constant with only finitely many jumps, and we show that thepartition of _≥ 0 corresponding to this piecewise constant function may be obtained by computing the pointwise minima of finitely many affine-linear functions.As preparation, weformulate (<ref>) for partial data (t_1, …, t_r), (y_1, …, y_r) and more compactly asmin_P ∈(1:r)λ∈Λ(P)∑_I ∈ Pd_I^λ_I + γλ_Iwhere d_I^λ_I = min_ω∈Ω^≤λ_Id_I( ω, y). The core of the method is solving the following constrained problem, referred to as ν degree of freedom partition problem:B_r^ν:= min_P ∈(1:r)λ∈Λ^ν(P)∑_I ∈ P d_I^λ_I,where Λ^ν(P) := {λ∈Λ(P) ν = ∑_I ∈ P λ_I } are the dof sequences with exactly ν degrees of freedom in total.The solutions of (<ref>) and those of (<ref>) are related as follows: For fixed γ≥ 0, r ∈ 1:N the γ-penalizedproblem (<ref>) is equivalent to the problemmin_ν∈ 1:r B_r^ν + γν. The proof is given in the supplementary material.Computing the regularization paths for all possible values of γ amounts to finding the pointwise minimum of a collection of affine linear functions and in particular their critical points: By the previous lemma, computing the regularization paths is equivalent to solving min_ν∈ 1:r B_r^ν + γν. The map γ↦min_ν∈ 1:r B_r^ν + γν is the pointwise minimum of the set ℱ := {γ↦ B_r^ν + γν}_ν∈ 1:r of affine linear maps. 
This pointwise minimum is piecewise affine linear with the pieces between any two adjacent critical points being an element of ℱ. Any such piece then determines a solution for a whole range of penalties, such that the solution of the full optimization problem is a piecewise constant function of γ. This correspondence is illustrated in Figure <ref>.In Section <ref> of the supplementary material, we provide an algorithm that can be used to efficiently compute the pointwise minimum. This algorithm operates in linear time when applied to sorted inputs, and sorted inputs can be readily ensured in implementations without any overhead. At the points of intersection of two affine linear functions, our algorithm returns the model with fewer degrees of freedom in accordance with the principle of parsimony. A crucial point for an efficient algorithm is that the Bellman values B_r^ν can be computed efficiently by dynamic programming: The values B_r^ν for r ∈ 1:n, ν∈ 1:r admit the recursionB_r^ν = min_ p_R ∈ 1 : νl ∈ 0:r B_l^ν - p_R + d_l+1:r^p_Rwith initial conditions B_r^1 = d_1:r^1, B_r^0 = B_0^ν = 0.The proof is given in the supplementary material. §.§ Obtaining minimizers by backtracking and resolving ambiguities Knowing the regularization paths, we are also interested in finding the corresponding solutions, i.e. the partition of the data itself P^*_γ, the corresponding sequence of degrees of freedom λ^*_γ, andthe piecewise regression function ω^*_γ. This can be achieved in two steps: first, backtracking to obtain the results of (<ref>) for each ν = 1, …, n, and second performing a lookup using the above correspondence between γ and ν.The backtracking works as follows: When solving (<ref>), we storefor each ν =1, …, n, and r = 1, …,n the minimizing arguments of (<ref>) (that is, the leftmost boundary index of the segment ending with index r and the degrees of freedom on this segment). With these informations, an optimal partition, the corresponding sequence of degrees of freedom, and a regression function with ν degrees of freedom is computed. This gives a minimizer of the constrained problem (<ref>) for each ν =1, …, n.As for the lookup,a solution for (<ref>)for a specific γ≥ 0 is obtained by computing the minimizing argumentof (<ref>) and returning the corresponding solution of (<ref>).Recall from Section <ref> that the minimizers of the studied model are unique (in an a.e.-sense) in certain least squares settings, and that the minimizersare not unique in general. We now describe how to deal with potential ambiguities. First, to rule out general ambiguity issues with function estimation which do not stem from the piecewise estimation, we assume that, on each segment I, the fitted functions ϕ^λ_I = _ω∈Ω^≤λ_Id_I( ω, y) are either unique, or admit a canonical choice in case of non-uniqueness. Then, as mentioned in Section <ref>, an optimal solution (P^*_γ, λ^*_γ) of (<ref>) defines a piecewise function ω_γ^* uniquely up to shift of the break locations between the borders of two adjacent segments of the partition. Now inspecting again Example <ref>, it seems thatnone of the models is clearly better than the others, and the model consisting of only one segment seems attractive for parsimony reasons. However merely trying to select the partition with the lowest number of segments does not resolve ambiguities in general as we can see by considering a similar example with values 0,1,0,1 and ν=4. 
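To make the recursion of the preceding theorem and the backtracking step concrete, the following simplified sketch (unweighted least squares polynomial fits, no pruning, no caching of segment fits – so far from the optimized Rust implementation) tabulates the Bellman values B_r^ν together with minimizing arguments and then reads off a partition and its dof sequence. It enforces the constraints spelled out in the constrained form of the recursion in the supplementary proof (at most #I and at least one dof per segment); ties are broken here simply by the first minimizer encountered – the selection rule we actually employ is described next.

import numpy as np

def seg_cost(t, y, l, r, dof):
    # d_{l+1:r}^{dof}: residual sum of squares of a polynomial with `dof`
    # degrees of freedom (degree dof - 1) on the samples with 0-based
    # indices l, ..., r - 1
    ts, ys = t[l:r], y[l:r]
    c = np.polyfit(ts, ys, dof - 1)
    return float(np.sum((np.polyval(c, ts) - ys) ** 2))

def tabulate(t, y, m):
    # Bellman values B[r, nu] for r, nu = 0..n with local dofs bounded by m,
    # together with backtracking pointers arg[(r, nu)] = (l, p_R)
    n = len(t)
    B = np.full((n + 1, n + 1), np.inf)
    B[0, 0] = 0.0
    arg = {}
    for r in range(1, n + 1):
        for nu in range(1, r + 1):
            for l in range(0, r):                      # last segment is l+1:r
                for p in range(1, min(nu, r - l, m) + 1):
                    rem = nu - p                       # dofs left for 1:l
                    ok = (rem == 0) if l == 0 else (1 <= rem <= l)
                    if not ok:
                        continue
                    cand = B[l, rem] + seg_cost(t, y, l, r, p)
                    if cand < B[r, nu]:
                        B[r, nu] = cand
                        arg[(r, nu)] = (l, p)
    return B, arg

def backtrack(arg, r, nu):
    # recover the partition (as 1-based (l+1, r) pairs) and its dof sequence
    segments, dofs = [], []
    while r > 0:
        l, p = arg[(r, nu)]
        segments.append((l + 1, r))
        dofs.append(p)
        r, nu = l, nu - p
    return segments[::-1], dofs[::-1]

# toy example: a noise-free step signal
t = np.linspace(0.0, 1.0, 12)
y = np.where(t < 0.5, 0.0, 1.0)
B, arg = tabulate(t, y, m=3)
print(backtrack(arg, r=len(t), nu=2))   # ([(1, 6), (7, 12)], [1, 1])

The sketch recomputes every segment fit from scratch and is therefore much slower than the stated complexity bounds suggest; it is only meant to expose the structure of the recursion and of the backtracking.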
Here, we employ the following selection strategy between equally good models during backtracking in the dynamic program, which we call right-maximal graph-tracing (RMGT), and call uniqueness results that rely on this strategy RMGT-unique: To determine a partition of 1:r with a corresponding sequence λ ∈ Λ^ν(P) we start with a budget of ν degrees of freedom. We then select the longest rightmost segment of our partition, which corresponds to choosing the lowest l among the minimizers in the recursion from Theorem <ref>. This in turn uniquely determines that we spend p_R dofs on l+1:r. We proceed recursively downwards to reconstruct the remainder of P and λ. An implementation may choose another way to achieve uniqueness at this step. One obvious option to improve results would be to use an additional forward cross-validation at this step (for r < n) to cut down the number of options before picking a single representative. However, due to the additional incurred cost of such methods, our implementation uses the basic scheme just proposed.

Given this algorithm we may now associate to each B_r^ν a unique partition and dof-sequence. This allows us to make the following statement: There is a finite partition of ℝ_≥ 0 into intervals such that the solution to the γ-penalized partition problem is a function of γ that is constant on each element of the partition. This solution is RMGT-unique except for finitely many penalties γ – so it is in particular unique almost everywhere.

The proof is given in the supplementary material.

§.§ Parameter selection by rolling cross-validation with OSE-rule

The hyperparameter γ is chosen via rolling cross validation, a type of cross validation that respects the temporal order of the time series and avoids look-ahead bias. It is given as follows:

CV(γ) = 1/(n-1) ∑_r=1^n-1 (ω^*_γ, r(t_r+1) - y_r+1)^2.

Here ω^*_γ, r is the piecewise regression function associated to (P^*_γ, r, λ^*_γ, r) which denotes a solution of (<ref>) on the partial data (t_1, y_1), …, (t_r, y_r). (One may use other distance functions than the mean squared error.) An optimal parameter in the sense of cross validation is a value that minimizes γ ↦ CV(γ).

To obtain such an optimal parameter, we use the following key observation: As the mapping γ ↦ (P^*_γ, r, λ^*_γ, r) is piecewise constant, so is the r-th summand, CV_r(γ) = (ω^*_γ, r(t_r+1) - y_r+1)^2, in (<ref>). Hence CV is piecewise constant as the mean of finitely many piecewise constant functions. Likewise, on full data, the mapping γ ↦ (P^*_γ, n, λ^*_γ, n) is piecewise constant. Thus, the joint mapping γ ↦ [CV(γ), P^*_γ, n, λ^*_γ, n] is piecewise constant as well. By the piecewise constancy, all hyperparameters of a piece of the mapping can be considered as equivalent, and we may represent them by a single value, say the midpoints of the pieces, denoted by {γ_1, …, γ_L}. To obtain a unique result, we invoke the principle of parsimony and select the largest minimizing argument; we denote this choice by γ_CV:

γ_CV = max(argmin_γ ∈ {γ_1, …, γ_L} CV(γ)).

To obtain even more parsimonious results it is common to apply the one standard error rule (OSE) to cross-validated models rather than directly choosing the model with the strictly lowest score <cit.>. For the considered model this means taking the largest parameter such that the CV-score is within a one-standard-error window of the minimum value, so

γ_OSE = max{γ ∈ {γ_1, …, γ_L} : CV(γ) ≤ CV(γ_CV) + SE(γ_CV)},

where SE(γ) = √(Var(CV_1(γ), …, CV_n-1(γ)))/√(n-1) is the standard error of the mean in (<ref>), see <cit.>.
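Both selection rules are straightforward to apply once the piecewise constant CV summands have been tabulated on representative penalties γ_1 < … < γ_L. The following sketch is illustrative only and uses made-up toy numbers; in the actual pipeline the summands CV_r(γ_j) come out of the backtracking step. It mirrors (<ref>) and (<ref>).

import numpy as np

def select_gamma(gammas, cv_terms):
    # gammas: shape (L,); cv_terms: shape (n - 1, L) with
    # cv_terms[r, j] = CV_r(gamma_j); returns (gamma_CV, gamma_OSE)
    gammas = np.asarray(gammas)
    cv_terms = np.asarray(cv_terms, dtype=float)
    cv = cv_terms.mean(axis=0)                       # CV(gamma_j)
    j_cv = np.flatnonzero(cv == cv.min()).max()      # largest minimizing argument
    se = cv_terms[:, j_cv].std(ddof=1) / np.sqrt(cv_terms.shape[0])
    j_ose = np.flatnonzero(cv <= cv[j_cv] + se).max()
    return gammas[j_cv], gammas[j_ose]

# toy numbers: three candidate penalties, four rolling CV terms each
gammas = [0.1, 1.0, 10.0]
cv_terms = [[0.9, 1.0, 1.00],
            [1.1, 0.9, 1.05],
            [1.0, 1.1, 1.00],
            [1.2, 1.0, 1.05]]
print(select_gamma(gammas, cv_terms))   # (1.0, 10.0)

Here the middle penalty attains the smallest mean score, but the largest penalty is still within one standard error of it, so the OSE rule prefers the more parsimonious model.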
(As with γ_CV, the parameter γ_OSE is a representative for an interval of hyperparameters which are all associated to the same model and to the same score.)

The CV scoring function and the solutions of the two hyperparameter choices for the example of Figure <ref> are given in Figure <ref>.

A few important points about the above selection methods: (i) Computing γ_CV and γ_OSE comes with little additional effort. This is because the regularization paths have already been computed by the algorithm proposed further above. Furthermore, we do not need to compute ω^*_γ, r on its entire domain; it is sufficient to compute ω^*_γ, r on the rightmost segment of the defining partition. (ii) The resulting values are computed exactly up to the usual numerical errors of floating point arithmetic if the computation of the tabulations and the model fits admit these precisions. (iii) The selection strategies are invariant to (global) scaling of the signal.

A natural question arising in the above procedure is why cross-validation is done with respect to γ based on (<ref>) and not with respect to the total number of degrees of freedom ν_total based on (<ref>). The reason is that rolling cross-validation works with signals of different lengths (r = 1, ..., n-1) and the total number of degrees of freedom has a different meaning for each of those signals.

§.§ Complete algorithm and analysis of the computational complexity

Let us summarize the main steps of the proposed algorithm that computes the regularization paths of (<ref>), the hyperparameters γ_CV and γ_OSE, and corresponding piecewise regression functions:

* Compute the tabulation B_r^ν for r ∈ 1:n, ν ∈ 1:r using the recurrence in (<ref>).
* For each r ∈ 1:n:
  * Use the tabulation B and (<ref>) to compute the critical values of the piecewise affine function γ ↦ min_ν ∈ 1:r B_r^ν + γν. (This provides the correspondence of the parameters γ of (<ref>) and ν of (<ref>) for each partial data with indices 1:r.)
  * Determine the solution of (<ref>) for each ν ∈ 1:r by backtracking, resolving ambiguities by RMGT.
* For each r = 1:n-1: Use the correspondence between γ and ν to determine the piecewise constant mapping γ ↦ ω^*_γ, r, and compute the piecewise constant mapping CV_r: γ ↦ (ω^*_γ, r(t_r+1) - y_r+1)^2. (Note: For this step, it is sufficient to determine ω^*_γ, r on the rightmost segment of its corresponding partition.)
* Compute the piecewise constant mapping CV = 1/(n-1) ∑_r=1^n-1 CV_r, and determine γ_CV and γ_OSE.
* Determine the piecewise regression functions ω^*_γ for the hyperparameters γ = γ_CV or γ = γ_OSE.

For further details we refer to the supplementary material and to the commented source code. We prove the following central result on the proposed algorithm:

The time complexity for the proposed algorithm is in Ø(mn max{ m, n Φ(m,n), n^2 }), where n ∈ ℕ is the length of the input time series, m ∈ ℕ is the maximal number of degrees of freedom to consider for each segment – so the maximal local model complexity – and Φ(m,n) is the cost of fitting a model with m degrees of freedom to data of length n.

In particular, the time complexity is Ø(n^3 m^2) whenever Φ ∈ Ø(mn). For polynomials with least squares errors the time complexity is Ø(n^3 m).

The proof is given in the supplementary material.

Theorem <ref> contains the Ø(n^3) time complexity result for the piecewise constant case <cit.> as the special case m=1.

§ IMPLEMENTATION, SIMULATION STUDY AND APPLICATIONS

§.§ Implementation and experimental setup

The implementation may be found at <https://github.com/SV-97/pcw-regrs>.
The published packages are available at <https://crates.io/crates/pcw_regrs> and <https://pypi.org/project/pcw-regrs-py/>. The core part of the algorithm is implemented in Rust to provide a high-performance basis. On top of this we provide a native Python extension for ease of use. Multicore parallel processing is used to speed up the computations of the residual errors. The experiments were conducted on a Linux workstation with an AMD Ryzen 9 5900X CPU (4.6) and with 64GB RAM.If not stated differently, we use in the least squares instance of the DofPPR model (<ref>) with Ω^≤ν being the space of polynomials of maximum degree ν-1. In addition to the search space reduction stated in (<ref>), weexclude polynomials of order higher than 10 because polynomial regression with high degrees may lead to numerical instabilities. So in practice, we use ν_max(I) = min(max(1, I - 1), 11). Furthermore, we use by default uniform weights w = (1, …, 1), and the hyperparameter choice γ_OSE (cf. (<ref>)). The method accepts an optional parameter ν_total≤ n which imposes the additional upper bound on the total number of degrees of freedom ∑_I ∈ Pλ_I ≤ν_total; it is useful when more parsimonious results are desired. As mentioned in the beginning of Section <ref>, the (continuous) breakpoints of the piecewise polynomials can be placed anywhere between two data points belonging to two different segments. We here use the point that minimizes thedistance on the ordinate between the lefthand and the righthand polynomial, and take the midpoint if there is no unique solution. In the following, we frequently refer to the Turing change point detection (TCPD) benchmark data set which is specifically designed for changepoint detection studies <cit.>. This dataset comprises 37 time series, each of which has been annotated by five human experts to establish the ground truth for changepoint locations. The benchmark is designed for unsupervised detection so there is no split into training and test data set. More details will be given further below and in the paper of <cit.>. §.§ Results on simulated dataThe first timeseries we consider is a synthetic piecewise polynomial signal with Gaussian noise. We sample from p(T) + ϵ where p is a synthetically generated piecewise polynomial function on [0,1] with local degree no more than 10 with 6 jump points, ϵ is a normally distributed random variable with mean 0 and standard deviation σ, and T is a [0,1]-uniformly distributed random variable. (The full expression for the p displayed can be found in supplementary section <ref>.) The results are shown in Figure <ref>. Each plot is based on sampling the signal distribution 2000 times. The top part of each subplot shows the ground truth signal p in red, an exemplary sample in orange and the pointwise 2.5% to 97.5% quantiles of the resulting models as shaded blue regions. The bottom part of each subplot shows a histogram (with 200 bins) of the changepoints across all 2000 realizations.Figure <ref> illustrates the application of our approach to a simulated signal from the TCPD dataset. This signal wasgenerated such that it contains a single changepoint at index 146. As described in the accompanying documentation, the noise characteristics differ before and after this changepoint, with Gaussian noise preceding it and uniform noise following it. In the process, five human annotators independently identified changepoints at indices 143, 144 (three times), and 146, respectively. 
The outcome of the DofPPR analysis reveals three segments in total: two constant segments and one linear segment. The predicted changepoints are located at indices 97.5 and 143.Figure <ref> shows the runtimes in dependance of the signal length n. We observe that signals of length 2000 can be processed in around 30 seconds. Also, the scaling in this regime is more favorable than the worst case complexity stated in Theorem <ref>.§.§ Comparison to partition penalized approaches We compare the proposed method with the frequently used partition penalized approach (<ref>). As with the proposed method, we use least squares polynomial regression, but in the baseline the polynomials have fixed degrees (0,1,2 and 5). A standard solver for the latter problem is the PELT algorithm of <cit.>. The piecewise polynomial fit of fixed degree is implemented based on the Python package ruptures from <cit.>.This approach is referred to as the baseline in this subsection.A qualitative illustration using a relatively simple test signal and manually selected hyperparameters has been given in the introduction in Figure <ref>. We next provide a quantitative comparison using the following setup for the baseline method. To obtain ahyperparameter selection strategy comparable to the method we apply a k-forward cross validation with k=25 based on the one standard error rule. So for each penalty, we fit models to data segments 1:25, 1:50, 1:75, ... and cross-validate using the one standard error rule based on the resulting prediction errors. Additionally, we report the results of an oraclestrategy (Or) where we select the most parsimonious model (largest penalty) that is closest to the true number of changepoints. Both of these were based on a model population of 100 equidistantly spaced penalties from the interval [0, 50]. Finally we translated the changepoint indices returned by ruptures to the middle of the corresponding intervals to match the convention used in this paper.For the synthetic data, we generated 20 random piecewise polynomial signals with 5 changepoints and locally no more than 6 degrees of freedom via thePython library available on PyPI. We sampled these at 1000 points uniformly selected from [0,1] and added a Gaussian noise term with mean 0 and standard deviation of 0.025. We report the mean of the residual L^2 error and the Hausdorff distance of the indicated changepoints as well as the median of the difference between the indicated and true numbers of changepoints across all these signals. The reason for choosing the median in the latter case is that both our model as well as ruptures show extreme outliers in this metric for some signals.The results are reported in Table <ref>. The proposed method gives a lower residualL^2 error as well as a lower Hausdorff distance of the changepoints than the baseline approach. This improvement may be attributed to the model's more efficient utilization of the "penalty budget" due to the heterogeneous local degrees: introducing a changepoint into the heterogeneous model has a non-constant cost – in contrast to the constant jump cost in homogenous models. This allows the model to locally increase the degrees of freedom if they significantly improve the data fit, without influencing the penalty for the remainder of the model. 
The change point oracle models show that using the baseline it is theoretically possible to get closer to the true number of changepoints on average – however it is not clear what selection strategy has to be used on the penalty for this in practice. Furthermore the cross-validated models of degrees 0 and 1 show that it's also possible to get very bad models (with respect to this criterion) from the baseline when using standard model selection methods.In settings of exploratory analysis both the baseline as well as our algorithm allow users to intervene when they notice an excessive number of changepoints. In the case of baseline approach using thePELT solver this is achieved by limiting the number of segments; for our algorithm it amounts to limiting the total number of degrees of freedom of the full model. Finally it's worth mentioning that selecting both the degree as well as penalty for the baseline models comes with a relatively large runtime cost that is avoided by our algorithm's automatic selection strategy. §.§ Results on real data Next we study two real data examples from the TCPD dataset <cit.>.Figure <ref>presents the application of the proposed method to thedata. Two human annotators annotated changepoints after indices 46, 90, and 47, 91, respectively, and three human annotators saw no changepoint. Theproposed method estimates two breaks at the indices 68 and 90.The next example is thedataset which contains the total private construction spending in the US over multiple years (Figure <ref>). The accompanying documentation of this dataset suggests that potential change points occur at economical recessions. Here we have mapped the dates of the time interval into [0, 1] prior to analysis for simplicity. This dataset is interesting as it shows both seasonal waves as well as global trends. We extracted them using the proposed method in a two-stage exploratory data analysis. In a first step we fitted a model with the default settings to the data samples. We then manually restricted the total degrees of freedom to ν_total≤ 81 which eliminated some spurious segments. Using the seasonal models we calculated the means for each segments by integrating the local polynomials across their corresponding intervals. In a second step, we applied the proposed method to those points. The breakpoints of the corresponding piecewise polynomial are (informat) 07.07.1995, 09.04.2001, 03.03.2005, 17.11.2009, 12.01.2019. The breakpoints can be seen to line up with economically significant events to some extent. Interestingly the local extrema of the polynomial pieces also match up to such significant points.§.§ Results on the TCPD benchmark forunsupervised changepoint detection We now evaluate the proposed method using the full TCPD benchmark <cit.>. The benchmark consists of two separate evaluations: The Default setting aims to mimic the typical use-case where a data analyst, unfamiliar with the optimal parameter configurations, applies the algorithm to detect changepoints in a time series. The Oracle setting involves conducting agrid search across the hyperparameters to identify the configuration that yields the highest performance for each algorithm. For the proposed DofPPR method, the Default setting refers to hyperparameter selection based on rolling cross validation with the OSE rule as described above. 
For the Oracle setting, the γ-parameter was varied with a grid of 101 values evenly spaced on a log scale between 10^-3 and 10^3, following the setup used for other penalized models in the benchmark. (We note that our Oracle setupcould be further refined by exploiting the access to the full regularization paths however this is not easily implementable due to how the benchmark is structured.)The resultsare reported in the row DofPPR-OSE of Table <ref>. We observe that the Oracle score is the highest among all competitors. By contrast, the Default score seems disappointing at first glance. To understand thisdiscrepancy, a deeper inspection of the human annotations, the parameter choice of the competitors and our parameter choice strategy is required. Human annotators were shown five control time series containing known changepoints to validate annotation quality. These control data had a maximum of two changepoints per series. Inspectingsome further annotations reveals that the human annotators preferred changepoint patterns on broader time scales to changepoint patters narrower-scale changes (e.g. the construction data set discussed in Section <ref>). As outlined in Section <ref>, when the hyperparameter γ_OSE is used, it tends to adapt to narrower-scalechangepoint patterns such a seasonal patterns, often proposing a number of changepoints higher than a human annotator would. Another observation is that the top-performing method in the benchmark, binseg, uses a hard constraint on the number of change points, and so do most other of the best performing competitors (segneigh, amoc). Even the trivial method zero, whichalways returns zero changepoints, is competitive. So assuming relatively few changepoints appears to be a strong prior information, which is used by the best competitors. To obtain a fair comparison with these methods, we propose a simple variant of our method enforcing a maximum of five changepoints as binseg does. This corresponds in the proposed model to a maximum of six degrees of freedom, ν_total≤ 6. (A piecewise constant model with five changepoints has six segments, corresponding to six degrees of freedom.) Using this constrained variant, referred to as DofPPR-OSE-6 in the table, yields superior scores in both the cover metric and the F1 metric for the Default method. By the constraint, the oracle score is reduced but remains competitive.§ DISCUSSION AND CONCLUSION We have studied a piecewise regression model which is based on optimizing a tradeoffbetween a goodness of fit term and a penalty on the total number of degrees of freedoms of the regression function, and the tradeoff is weighted by a model hyperparameter. On the analytical side, we have shown that the minimizer is unique (in an Lebesgue almost everywhere setting) for the important piecewise least squares setup when interpolatory functions are excluded. For other setups, we have proposed using distinguished solutions with parsimonious models and largest rightmost segments.We have developed a fast algorithm that computes the complete regularization paths. Its worst case time complexity is Ø(n^3 m^2) whenever is the cost of fitting a model with m degrees of freedom to data of length n is in Ø(mn). For polynomials with least squares errors the time complexity is Ø(n^3 m). Furthermore, we have seen that performing model selection based on rolling cross validation (with or without one-standard-error rule) comes with little additional effort. 
For the least square instance with polynomials up to degree 10, the total processing time (including model selection) is around 10 seconds for a signal of length 1,000 on a standard desktop computer.We have provided a full reference implementation of the piecewise polynomial case. The experimental results on synthetic data have illustrated the potential of the model for piecewise regression with mixed complexities. In particular we have seen that the model automatically gives satisfactory results for piecewise regression when the underlying signal is piecewise polynomial and the noise is i.i.d. Gaussian.We also demonstrated how the method suits exploratory data analysis. The method can be used to progressively reduce the degrees of freedom in a multiscale analysis fashion. By the automatic hyperparameter tuning, it is suitable for users which are not familiar with the proposed model and the hyperparameters. Yet, it is possible to request more parsimonious results by restricting the total number of degrees of freedom.Evaluation on the TCPD benchmark underpins the models strength in unsupervised changepoint detection outperforming the competitors in the oracle score. A variant that leverage an a priori restriction on the maximum number of changepoints – as used by the best competing methods – resulted in the highest score yet recorded for this benchmark.Though our results are promising, the model's hyperparameter selection could benefit from refinement. The current strategy based on rolling cross validation with one-standard-error rule tends to adapt to high-frequency changepoint patterns which may not always align with human interpretation of the correct scale. This discrepancy suggests the need to explore the human-perceivable scale for changepoints. Incorporating seasonal components into the model could be a potential solution. Additionally, the implementation of robust estimators, such as ℓ^1 data terms, could provide resilience against outliers.§ DATA AVAILABILITYImplementations of the algorithms developed in this paper are provided at <https://github.com/SV-97/pcw-regrs>. The simulated data were generated using the packageavailable at <https://pypi.org/project/rnd-pcw-poly/> with the primary datum being given in supplementary Section <ref>. The time series data are from the Turing Change Point Detection datasetprovided at <https://github.com/alan-turing-institute/TCPD> by the authors of the corresponding paper <cit.>, and the corresponding benchmark code is provided at <https://github.com/alan-turing-institute/TCPDBench>.§ ACKNOWLEDGEMENTMartin Storathwas supported by the project DIBCO funded by the research program Informations- und Kommunikationstechnik of the Bavarian State Ministry of Economic Affairs, Regional Development and Energy(DIK-2105-0044 / DIK0264). Andreas Weinmann acknowledges support of Deutsche Forschungsgemeinschaft (DFG) under project number 514177753. [heading=myheading] Supplemental Materials: Degrees-of-freedom penalized piecewise regressionNote on the numbering: Thenumbers below 100 refer to equations, theorems etc. in the main document, and numbers above 100 to numbers in the supplementary file.§ PROOFS FOR SECTION <REF>§.§ Proof of Lemma <ref>We start out to consider the corresponding representing matrices π̅_P,λ, π̅_Q,μ given by (<ref>). If π̅_P,λ≠π̅_Q,μ, then at least one of them does not equal the identity. W.l.o.g. assume π_P,λ does not equal the identity. Then, by (<ref>), G_P, λ(y) is a non-constant quadratic form w.r.t. y. 
In turn, G_P, λ(y) - G_Q, μ(y) is a non-constant quadratic form (with gradient π_P,λ - π_Q,μ), and thus the set { y ∈^n : G_P, λ(y) = G_Q, μ(y) } has Lebesgue measure zero as a lower-dimensional manifold. If π̅_P,λ = π̅_Q,μ, then, by (<ref>), G_P, λ(y) - G_Q, μ(y) =γ∑_I( λ_I -μ_I ) which is constant.§.§ Proof of Theorem <ref>For a given partitioning P and given degreee vector λ = (λ_I)_I ∈ P, we consider those (P,λ) which yield the same projection/hat matrices π̅_P,λ, i.e., we letM_(P,λ) = { (Q,μ):π_P,λ = π_Q,μ},and pick(Q,μ) ∈ M_(P,λ). We observe that the correspondingquadratic functionals G_P, λ(y) and G_Q, μ(y) only differ by a constant by Lemma <ref>. Hence, there is (P^∗,λ^∗)such thatG_P^∗, λ^∗(y) = min_(Q,μ) ∈ M_(P,λ)G_Q, μ(y).From each M_(P,λ) we pick such a G_P^∗, λ^∗, and observe that there are at most countably many different G_P^∗, λ^∗. Further, the minimizer ω^∗ of Equation (<ref>)equals min_(P^∗, λ^∗) G_P^∗, λ^∗(y) for any data y. Taking two different setsM_(P,λ)≠ M_(P',λ'), their minimizing representatives (P^∗,λ^∗) and (P'^∗, λ'^∗) yield different projections π̅_P^∗, λ^∗≠π̅_P'^∗, λ'^∗. Hence, by Lemma <ref>., the set { y ∈^n : G_P^∗,λ^∗(y) = G_P'^∗, λ'^∗(y) } has Lebesgue measure zero. Since there are at most countably many such zero sets where two functionals are equal, we may consider their union as an exclusion set X of Lebesgue measure zero. On its complement X^c, the minimizing P^∗, λ^∗ is unique, its whole M_(P,λ) yields the same function and thus the minimizer is unique on the complement of the exclusion set.§.§ Proof of Lemma <ref> Part (i) of Assumption <ref> is a consequence of the invertibility of the corresponding Vandermonde matrices. Concerning part (ii) first note that the particular choice of polynomials is irrelevant, e.g. by plugging a change of basis matrices into the definition of π_I, λ_I in (<ref>). Towards a contradiction, assume that π_I, λ_I can be further decomposed into two or more block diagonal matrices. We denote these block matrices by B_1,…,B_k and their block sizes by d_1,…,d_k. If the size of one of the block matrices exceeds the degree λ_I, say d_r ≥λ_I, for r ∈{1,…,k}, then we consider the corresponding subinterval I_r ⊂ I together with data y_r: I_r →ℝ. If y_r equals 0 (in all components), then the estimate ŷ_r equals 0 on I_r by linearity. We now extent y_r in two ways to functions y_I and y'_I on I: we let y_I be defined by 0 on the complement of I_r in I, and we let y'_I be defined by 1 on the complement of I_r in I. By the fundamental theorem of algebra, both ŷ_I and ŷ'̂_I equal the zero vector on I. But since the polynomial hat operator reproduces constants, ŷ'̂_I should equal 1 on the complement of I_r in I. This is a contradiction. We are hence left with the case that all block matricesB_1,…,B_k haveblock sizes d_1,…,d_k smaller than λ_I. Consider such a block B_r,r ∈{1,…,k}, and arbitrary data y_r: I_r →ℝ. Then y_r is a sample of a polynomial p of degree lower than λ_I. We consider this polynomial p, sample it on the whole interval I to obtain data y, and apply the hat operatorπ_I, λ_I. By polynomial reproduction, the hat operator reproduces the sample of p, and, in turn, ŷ_r= y_r, it is the hat operator on I_r equals the identity. Since r was arbitrary, the hat operator on I equals the identity. This contradicts the assumption m<n representing the fact of more sample points than polynomial degree plus one. 
In consequence, π_I, λ_I cannot be further decomposed which completes the proof.§.§ Proof of Lemma <ref>By construction, π̅_P,λ= π̅_P̃,λ̃ which is formulated as statement (i). To see (ii) consider two standard partitions P̃_1, P̃_2 together with the degree vector λ̃_1,λ̃_2 corresponding to the partitioning P with degree vector λ = (λ_I)_I ∈ P. Then, if I in P̃_1 is of length 1 it corresponds to an identity block in P which results in a 1 × 1 block in π̅_P̃_1,λ̃_1 which implies thatI belongs to P̃_2 as well. Interchanging P̃_1 and P̃_2 shows the converse inclusion and in turn implies equality for identity blocks. Atomic blocks corresponding to non-interpolatory intervals of P are not modified both inP̃_1 andP̃_2, respectively. Together, P̃_1 = P̃_2 and λ̃_1 = λ̃_2 which implies the uniqueness of the standard decomposition. §.§ Proof of Theorem <ref>The statement on the minimizing function was formulated as Theorem <ref>. So it remains to show the statement on the segments of a minimizing partitioning. For a given partitioning P and given degree vector λ = (λ_I)_I ∈ P, we consider the set M_(P,λ) defined by (<ref>) consisting of those partitions and degree vectors (Q,μ) which yield the same projection matrices π̅_P,λ. We make the crucial observation that, in M_(P,λ), all elements have the same standard block decomposition π̅_P,λ given by (<ref>) as a consequence of Lemma <ref>. Hence, the non-interpolating intervals I of all partitions are identical. Passing to the complement of the zero set X identified in the proof of Theorem <ref>, the segments I^∗ of a minimizing partitioning corresponding to non-interpolating estimation are hence unique. This shows the second assertion of the theorem.§.§ Proof of Corollary <ref>By Lemma <ref>the system of polynomials fulfills Assumption <ref>. In consequence, the assertion of the corollary is a consequence of Theorem <ref>.§.§ Proof of Corollary <ref>By the proof of Theorem <ref> we may minimize the problem in (<ref>) by minimizing w.r.t. all standard block decomposition instead of all partitionings. On a non-1 × 1 block of the corresponding projection matrix of a block decomposition, the projection is non-interpolating. Thus the polynomial degree is strictly lower than the number of data points in the segment minus one which shows the assertion. § PROOFS OF SECTION <REF>§.§ Proof of Lemma <ref>As a first step we rewrite the problem asmin_P ∈(1:r)min_λ∈Λ(P)∑_I ∈ Pd_I^λ_I + γ∑_I ∈ P λ_I.LettingΛ^ν(P) := {λ∈Λ(P) ν = ∑_I ∈ P λ_I }be the set of all dof-sequences with a total of ν degrees of freedom we may partition Λ(P) asΛ(P) = _ν=1^r Λ^ν(P)and may consequently rewrite the minimization asmin_P ∈(1:r)min_ν∈ 1:rmin_λ∈Λ^ν(P)∑_I ∈ Pd_I^λ_I + γ∑_I ∈ P λ_I min_ν∈ 1:nmin_P ∈(1:r)min_λ∈Λ^ν(P)∑_I ∈ Pd_I^λ_I + γν.Since γν is now independent of the two inner minimizations we may pull it out of the respective target functions to obtainmin_ν∈ 1:r( min_P ∈(1:r)λ∈Λ^ν(P)∑_I ∈ Pd_I^λ_I) + γν.We recognize the inner minimization to be the values B_r^ν such that the problem to solve (assuming we know not just the values of B_r^ν but also the corresponding minimizing arguments) becomes min_ν∈ 1:r B_r^ν + γν as desired.§.§ Proof of theorem <ref>Herein we omit the explicit P from Λ^ν(P) for brevity and instead only write Λ^ν.Note that Λ^1 can only contain a single sequence, and this sequence has to have just one element since there are no other partitions of 1. 
This immediately implies that P also has to consist of just a single segment forcing the corresponding solutionB_r^1 = d_1:r^1.For notational convenience in the proof we furthermore let B_r^ν be zero whenever r or ν are 0.We will now consider B_r^ν for ν > 1: because the optimal partition P ∈(1:r) has to have a last segment R there has to be a natural number p_R ∈# R : ν corresponding to the optimal number of degrees of freedom on R. We may find B_r^ν by considering all potential last segments R and corresponding degrees of freedom p_R:B_r^ν = min_ p_R ∈ 1 : ν,   l ∈ 0:r  R = l+1:r P = LR ∈(1:r) (λ_I ∈ 1:# I)_I ∈ L∈Λ^ν - p_R∑_I ∈ L d_I^λ_I + d_R^p_Rso that L denotes the potentially empty partition obtained by removing the last segment from P and l is the last element of L (if it exists and zero otherwise). Note that since B_r^ν is the optimal energy for exactly ν degrees of freedom, if only p_R are spent on R the remaining ν - p_R dofs have to be spent on L.This may be rephrased asB_r^ν= min_ p_R ∈ 1 : ν,   l ∈ 0:r( min_ R = l+1:rLR ∈(1:r) (λ_I ∈ 1:# I)_I ∈ L∈Λ^ν - p_R∑_I ∈ L d_I^λ_I + d_R^p_R)= min_ p_R ∈ 1 : ν,   l ∈ 0:r( min_ L ∈(1:l)(λ_I ∈ 1:# I)_I ∈ L∈Λ^ν - p_R∑_I ∈ L d_I^λ_I + d_l+1:r^p_R)= min_ p_R ∈ 1 : ν,   l ∈ 0:r( ( min_ L ∈(1:l) (λ_I ∈ 1:# I)_I ∈ L∈Λ^ν - p_R∑_I ∈ L d_I^λ_I) + d_l+1:r^p_R)where we recognize the inner minimization to be another instance of the original problem such thatB_r^ν = min_ p_R ∈ 1 : νl ∈ 0:r B_l^ν - p_R + d_l+1:r^p_R. This can be interpreted (and implemented) as considering the B_r^ν as entries of an upper triangular matrix with rows indexed by ν and columns by r. In terms of the data dependencies occuring during computation any entry then depends on all the elements "above and to the left" of it.In potentially more enlightening way we may equivalently state this recursion asB_0^0 = 0 B_r+1^ν+1= min_0 ≤ l ≤ r p_l + p_r = ν + 1 0 ≤ p_l ≤ l 1 ≤ p_r ≤ r-l B_l^p_l + d_l+1:r+1^p_rfor r ∈ 1:n-1, ν∈ 1:r. Note that p_l and p_r partition ν+1 rather than ν since we don't want to spend any degrees of freedom on the implied jump between l and l+1 – however it's of course also perfectly possible to model this other scenario and solve the corresponding problem in perfect analogy to our approach. If the maximal dofs to be spent on a single segment are limited to be at most m ∈ the minimum has the additional constraint that p_r ≤ m and if the total number of dofs is to be limited with some M ∈ we additionally get ν≤ M. Both of these extensions provide easily interpretable extensions to the basic algorithm that can provide major speedups.Including these extensions an equivalent form of the forward step more amenable to implementation isB_r+1^k+1 = min_ l ∈ 0:r p_r ∈α_k,l : β_r,k,l( B_l^k + 1 - p_r + d_l+1:r+1^p_r)with r ∈ 1:n-1, k+1 ∈ 2:min{r+2, M} andα_k,l = max{ k + 1 - l, 1 }andβ_r,k,l= min{ r + 1 - l, m, k }.§.§ Proof of Theorem <ref>That there is some partition (not necessarily into intervals) follows immediately from there only being finitely many models while _≥ 0 is uncountable. Note that the functions γ↦ B_r^ν + νγ are affine-linear and as such γ↦min_ν∈ 1:r B_r^ν + γν is the pointwise minimum of a collection of r affine-linear functions. Such a minimum is piecewise affine-linear with only finitely many pieces – with the pieces being given by the original functions – and continuous. 
This induces the desired partition of _≥ 0 into finitely many intervals except for the finitely many points where two pieces meet (intersections of the original affine-linear functions). At those points the solution isn't unique and there's finitely many possible minimizers. On the interiors of the elements of the partition there's always a unique function which corresponds to the solution via its slope.§.§ Proof of Theorem <ref> We consider the complexities of the different parts of the algorithm in isolation and combine them to get a result on the full complexity. Dynamic programTo start off we will consider the dynamic program for the case with local constraints: the degrees of freedom on any segment are no more than some m ∈. For sufficiently long timeseries most segments 1:r will be long enough that the minimization at this step has to consider all p_r ∈ 1:m and left boundaries l ∈ 0:(r-1). So the minimization at each step of the recursion requires an effort of ψ∈Ø(mr). We thus find an asymptotic complexity∑_r=1^n-1∑_ν=1^r ψ(m, r) = ∑_r=1^n-1ψ(m, r) ∑_ν=1^r 1= ∑_r=1^n-1rψ(m, r)_∈Ø(mr^2)≤ (max_r ∈ 1: n-1 rψ(m, r)) ∑_r=1^n-1 1≤ (n-1)^2 ψ(m, (n-1)) ∈Ø(mn^3).Training error calculationSince there are n^2 segments and we have to consider ν∈Ø(m) possible models on each segments we'd generally expect the error calculation to require an effort of Ø(n^2 m Ψ(m,n)) where Ψ(m,n) is the effort required to calculate the error of a single model with m degrees of freedom on data of length n. As we show in Section <ref> of the supplementary material it may be possible to obtain a better bound for specific models like Ø(n^2m^2) for polynomial functions on .Model functionThe graph-tracing for the model function requires linear effor for every degree of freedom ν. There are m possible degrees of freedom and as such this is Ø(m^2). This generates m affine linear functions sorted by slope – finding the minimum of these using the algorithm from supplementary section <ref> requires exactly m steps. So the model function construction is negligible. Cross-validation functionThe remaining part to discuss is the cross-validation function construction. One expensive part of this is the prediction error calculation which we will assume to actually require a model fit – the effort of which we denote by Φ(ν, r) (and assume to be monotonically increasing in both parameters). This means we have a total effort no larger than∑_r=1^n-1∑_ν=1^m Φ(ν, r) ≤∑_r=1^n-1Φ(m, r) m ≤Φ(m, n-1) m (n-1) ∈Ø(mnΦ(m,n)).The remainder of the CV calculation involves finding pointwise minima and DP graph tracing. The graph tracing requires Ø(ν) steps for all ν∈ 1:Ø(m) and r ∈ 1:n-1 yielding a total of no more than Ø(m^2n). Constructing and finding the minima of all the affine functions is Ø(mn) in total.We find the cross-validation to be in Ø(mn max{m, Φ(m,n)}). Combining the partsCombining the partial results we find that the full algorithm 𝒜 is inØ(max{mn max{m, Φ(m,n)}, n^2 m Ψ(m,n), n^3 m}) =Ø(max{mn max{m, Φ(m,n)}, n^2 m max{Ψ(m,n), n }}) =Ø(mnmax{m, Φ(m,n), n Ψ(m,n), n^2 }).Since we always have Ψ∈Ø(Φ(m,n)) (calculating the prediction error has the same complexity as fitting the full model) we may simplify this to𝒜∈Ø(mnmax{m, n Φ(m,n), n^2 }).That Φ∈Ø(mn) implies 𝒜∈Ø(n^3 m^2) follows from direct computation of the general result.In the least squares polynomial case it's possible to compute the model errors without actually computing all the models. Instead it's possible to compute all errors at once in n^2m^2. 
This can be used to prove the lower bound of𝒜 ∈Ø(max{mn max{m, mn }, n^2m^2, n^3 m})= Ø(max{n^2m^2, n^3 m}) = Ø(n^2m max{m, n}) = Ø(n^3m).This completes the proof. § EFFICIENT COMPUTATION OF RESIDUAL ERRORS IN LEAST SQUARES POLYNOMIAL FITS The general model allows us to fit optimal piecewise polynomial models to timeseries valued in . This necessitates the computation of the residual errors of all possible polynomial models for all segments of the timeseries: for all S ∈(1:n) and degrees of freedom ν∈ 1:|S| we want to find the residual error corresponding to the polynomial p of degree ν - 1 minimizing∑_i ∈ S (p(t_i) - y_i)^2such that ϕ^ν(t_S, y_S) = p.We will now describe an algorithm that computes the residual errors of all models with at most degree d on data of length n in just Ø(n^2 d^2); so it can in particular compute all possible models in Ø(n^4). The basic idea of the described algorithm is not original – however the precise formulation resulting in the advantageous complexity may well be novel and appears to be currently unpublished. Note that a very similar algorithm to the one presented can be found in <cit.>. This algorithm differs from ours in that it only deals with what we call the data recursion, and only handles equidistantly spaced data. §.§ Computing the residual error without the associated model We will call a polynomial p of degree k a polynomial with k+1 degrees of freedom and denote k+1 by (p). Let [x]_<ν be the linear space of all polynomials with no more than ν degrees of freedom (so degree less than ν) with real coefficients and let e_1, ..., e_ν be a basis of [x]_<ν such that (e_j) = j. Then there are (α_1, ..., α_ν) = α∈^ν such that the least squares polynomial p we're after equals ∑_i=1^να_i e_i. We thus want to find α minimizing∑_j=1^n ( ∑_i = 1^να_i e_i(t_j) - y_j )^2,which we easily show to be equivalent tomin_α∈^nAα - y^2_2with A = [ e_i(t_j) ]_j=1,...,n i=1,...,ν∈^n, ν and y = (y_1, ..., y_n)^T ∈^n.The central fact used for the efficient computation is the following lemma. There is an orthogonal matrix Q ∈(n) such that the residual error of this minimization is given by (Q^T y)_ν+1:n_2^2.Given a QR decomposition of A into an orthogonal matrix Q ∈(n) and a real matrix R̃ = [R; 0_n-ν, ν ] where R ∈(ν) we find that for all α Aα - y = Q^T(Aα - y) = R̃α - Q^T y = [ Rα - (Q^T y)_1:ν; -(Q^T y)_ν+1:n ]. Squaring this expression we findAα - y^2_2 = Rα - (Q^T y)_1:ν_2^2 + (Q^T y)_ν+1:n_2^2.Since the rightmost term is independent of α it has to correspond to the residual error: minimizing our original target amounts to minimizing Rα - (Q^T y)_1:ν_2^2, but since R is invertible and the norm nonnegative this term trivially vanishes for the optimal solution α = R^-1 (Q^T y)_1:ν. This shows that (Q^T y)_ν+1:n_2^2 is indeed the residual error we're after. §.§.§ Data recursion We will now derive an recursion linking the residual error on data points 1,...,n with the one on data points 1,...,n+1.Assume now that we already know the QR decomposition as stated above for data 1,...,n. If we want to add another data point t_n+1, y_n+1 this amounts to adding another row to A and y to obtain A' := [A; e_1:ν(t_n+1) ]∈^n+1, ν and analogously a new right hand side y' ∈^n+1. Setting Q̃ := [ Q 0_n,1; 0_1,n 1 ]∈(n+1) we find thatQ̃^TA' = [R;0; e_1:ν(t_n+1) ]. We can eliminate the last row using a sequence of Givens rotations G_1, n+1, G_2, n+1, ..., G_ν, n+1∈(n+1) to obtain a QR decomposition for A':A' = ∏_i=1^ν G Q̃_=: Q' ∈(n+1)[ R';0 ]. 
We find the corresponding residual to be (Q'y')_ν+1:n+1. Since the Givens rotation G_j, n+1 only affects rows j and n+1 of whatever matrix it acts on, we find the data recursion(Q'y')_ν+1:n+1_2^2 = (Qy)_ν+1:n_2^2 + (Q'y')_n+1_2^2= (Qy)_ν+1:n_2^2 + ((Q'y')_n+1)^2relating the residual error of a ν degree of freedom model on data 1:n with the residual error of a ν degree of freedom model on data 1:n+1. We want to emphasize the fact that Q' is closely connected to Q essentially via left-multiplication by a sequence of Givens rotations.§.§.§ Degree recursion We will now similarly derive a recursion linking the residuals of the ν and ν+1 degree of freedom models on a given segment 1,...,n.Assume now that we already know the QR decomposition as stated above for data 1,...,n. We now extend our basis for [x]_<ν to a basis e_1, ..., e_ν+1 of [x]_<ν+1 and want to find the residual error of the least squares estimate from this larger space for the same data (of course under the assumption that n ≥ν+1). Adding the new basis element amounts to adding another column to A to obtain A' := [A e_ν+1(t_1:n) ]∈^n, ν + 1.We factor A' asA' = [A e_ν+1(t_1:n) ] = [ Q [ R; 0 ] e_ν+1(t_1:n) ] = Q [ [ R; 0 ] Q^T e_ν+1(t_1:n) ]and letting G ∈(n) be the product of Givens rotations eliminating components ν+2 through n of Q^T e_ν+1(t_1:n) from row ν+1 we further factorA' = QG^T [ G [ R; 0 ] [ w; 0 ] ]where [w; 0_ν+2, 1 ] = GQ^T e_ν+1(t_1:n). Since G is a product of Givens rotations that only operate on rows ν+1 : n we have G[R; 0_n-ν, 1 ] = [ R; 0_n-ν,1 ]. Setting R' = [ R̃w ], Q' = QG^T we obtain the QR decompositionA' = Q' [R'; 0_n-(ν+2) ]and the residual error thus has to equal(Q'^T y)_ν+2:n_2^2 = (G Q^T y)_ν+2:n_2^2.Once again using the property that G really only ”modifies” rows ν+1 : n we can easily obtain the new residuals from (Q^T y)_ν + 1 : n by applying the correct sequence of Givens rotations to it.§.§.§ The full algorithm We will now describe the full algorithm for obtaining the residuals for data 1:r for all r ∈ 1:n.To avoid conditioning problems the algorithm uses the Newton basis. Given a real sequence x_1,...,x_d define the associated Newton basis for the space [x]_≤ d of polynomials of degree no more than d byN^0(x) = 1,N^k(x) = ∏_i=1^k (x - x_i) fork=1,...,d.One central property of this basis is that N^k(x_j) = 0 for all j ≤ k.The algorithm splits into two major parts: a rather involved core algorithm computing all residuals for segments starting at the first data point and a simple wrapper for finding the residuals on the the remaining segments by calling into the core for each possible starting point.The core algorithm iteratively constructs the matrix R̃ row by row and column by column by applying Givens rotations to the system matrix. This yields one new residual per eliminated matrix element. It's possible to do this without actually constructing the full system matrix and instead work with just a (ν + 1)× (ν + 1) matrix.We provide an extensively documented implementation of the algorithm in the form of a Rust crate at <https://crates.io/crates/polyfit-residuals>. The repository also contains a python implementation of the core algorithm, including a basic comparison with a numpy-based implementation of the naive algorithm. The algorithm's outer loop is trivially parallelizeable. 
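For concreteness, the naive numpy-based baseline mentioned above can be sketched as follows. This is our own illustrative reconstruction (function and variable names are ours), not the code shipped with the crate: it simply solves one least squares problem per segment and per number of degrees of freedom, which is exactly the work the incremental QR / Givens recursions are designed to avoid.

[language=Python]
import numpy as np

def naive_residuals(ts, ys, max_dofs):
    """Residual sum of squares of the best polynomial with `dofs` degrees of
    freedom (degree dofs - 1) on every segment ts[start:stop] of the data.
    Performs O(n^2 * max_dofs) least squares solves; used only as a correctness
    check against the incremental algorithm described above."""
    n = len(ts)
    res = {}
    for start in range(n):  # the outer loop over starting indices
        for stop in range(start + 1, n + 1):
            t = np.asarray(ts[start:stop], dtype=float)
            y = np.asarray(ys[start:stop], dtype=float)
            for dofs in range(1, min(stop - start, max_dofs) + 1):
                # Vandermonde design matrix with columns 1, t, ..., t^(dofs-1)
                A = np.vander(t, N=dofs, increasing=True)
                coef, *_ = np.linalg.lstsq(A, y, rcond=None)
                res[(start, stop, dofs)] = float(np.sum((A @ coef - y) ** 2))
    return res

The efficient algorithm replaces the inner least squares solves by the data and degree recursions derived above; as for that algorithm, the loop over starting indices is the part that parallelizes trivially.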
On a theoretical level (given an unbounded number of processes) this yields an algorithm of complexity Ø(nd^2); in practice the complexity will remain the same, however parallelization still allows for some great performance increases for all but the smallest data sizes and maximal degrees. Some benchmarks comparing the sequential and parallel implementations may be found on the crates' website mentioned above. § LINEAR ALGORITHM FOR POINTWISE MINIMUM OF A SET OF AFFINE FUNCTIONS One commonly used algorithm for computing pointwise minima is a divide-and-conquer algorithm of quadratic complexity. A simpler linear time algorithm can be obtained by translating the computation of pointwise minima to that of finding convex-hulls on the dual of the homogenized functions. This correspondence is well-known. The Graham scan may then be used as a linear time algorithm.Alternatively the following algorithm may be used. It's robust to the potential numeric problems that can occur and integrates Occam's razor as desired by the main algorithm. (The algorithm is probably known, and we state it here for completeness as we did not find a suitable reference for it.)Where the usual recursive divide-and-conquer algorithm works by partitioningdirectly and finding definitive minima at the points of intersection, the one described instead successively makes assumptions about the minimum and backtracks once it realizes those assumptions were incorrect. It does so by scanning through the input collection in order of the slopes calculating intersections of the unchecked functions with assumed minimal segments. A visualization of the main loop is shown in Figure <ref>.[language=Python]listings/pointwise_min_nocomment.py §.§ Proof of correctness We will now prove that this algorithm actually works. Denote by f_1, ..., f_n the input functions, by a_1, ..., a_n their slopes and by F the pointwise minimum. We will use an induction argument on the size n of the input assuming that ℱ = {f_1, ..., f_n} such that a_1 > ... > a_n.For n=2 the main loop doesn't do anything and the returned initial state correctly represents the solution.We thus assume that the algorithm works for n ≥ 2 and show that it also works for inputs of size n+1 = |ℱ|. By the induction assumption in the last iteration of the loop, the two (nonempty) stacks form the pointwise minimum F of {f_1, ..., f_n}. We thus only have to show that the last iteration will find x ↦min{F(x), f_n+1(x)}. It's not hard to show that for large enough x the function f_n+1 of lowest slope will be minimal. Thus f_n+1 will be part of the correct solution F and has to intersect F in some point ξ. Lets assume that F is given by f_i_1 on the open interval I_1, f_i_2 on I_2 and so on up to f_i_k on I_k a_i_1 > a_i_2 > ... > a_i_k. There are two cases to consider: * ξ∈ I_ν for some ν (f_n+1 intersects F in a linear segment)* ξ∉I_ν for all ν (f_n+1 intersects F in a corner) We start by analyzing the first case: Assume that ν = k, f_n+1(ξ) = f_i_ν(ξ). Since a_i_ν > a_n+1 and ξ_k < ξ this implies that f_n+1(ξ_k) > f_i_ν(ξ_k) where ξ_k denotes the top of stack border and the backtracking stops immediately without removing anything resulting in the correct minimum being returned.If on the other hand ν < k we know that f_n+1(ξ) = f_i_ν(ξ) < f_i_k(ξ) for ξ < ξ_k and thus f_n+1(ξ_k) < f_i_k(ξ_k) which means f_i_k and ξ_k are removed from their stacks in the first backtracking step. 
At this point the rest of the algorithm will behave exactly the same as if it was started on ℱ∖{f_i_k} for which we know that the correct result will be returned by the induction hypothesis.A similar argument works for the second case completing the proof. § EXPRESSION FOR SYNTHETIC PIECEWISE POLYNOMIAL The piecewise polynomial function used in the numerical experiments is given (to 3 decimal places) by0.0 + 10.88 x x ∈ [0.0, 0.092) -3.688 + 128.812 x - 1299.917 x^2 + 5707.356 x^3 - 9229.693 x^4 x ∈ [0.092, 0.262) 0.37x ∈ [0.262, 0.298) -3.881 + 33.877 x - 95.406 x^2 + 87.499 x^3 x ∈ [0.298, 0.6) -88.748 + 267.536 x - 199.38 x^2x ∈ [0.6, 0.729) 1523.272 - 6132.631 x + 8230.53 x^2 - 3679.993 x^3x ∈ [0.729, 0.814) 5.383 - 5.383 x x ∈ [0.814, 1.0] . § GRAPHICAL SCHEME OF ALGORITHMFigure <ref> depicts the basic architecture of the full algorithm.[heading=myheading] ]
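The Python listing referenced in the pointwise-minimum section (pointwise_min_nocomment.py) is not reproduced in this text. The following is a hedged re-creation based only on the prose description and the proof of correctness above; the input convention ((slope, intercept) pairs) and all names are our own choices, so it should be read as a sketch rather than the shipped code.

[language=Python]
def pointwise_min(funcs):
    """Lower envelope of a non-empty family of affine functions given as
    (slope, intercept) pairs.

    Functions are scanned in order of strictly decreasing slope while two stacks
    hold the currently assumed minimal pieces and the boundaries between them;
    a new function triggers backtracking while it undercuts the top piece at the
    top boundary.  Returns (pieces, bounds): pieces[0] is minimal up to bounds[0],
    pieces[i] on [bounds[i-1], bounds[i]], and the last piece from the last boundary on.
    """
    # Sort by decreasing slope; among equal slopes only the smallest intercept can matter.
    funcs = sorted(funcs, key=lambda f: (-f[0], f[1]))
    funcs = [f for i, f in enumerate(funcs) if i == 0 or f[0] != funcs[i - 1][0]]

    def cross(f, g):
        # x-coordinate of the intersection of two affine functions with distinct slopes
        return (g[1] - f[1]) / (f[0] - g[0])

    pieces, bounds = [funcs[0]], []
    for f in funcs[1:]:
        # Backtrack: pop pieces that the new (flatter) function already undercuts
        # at their left boundary; they cannot be part of the minimum.
        while bounds and f[0] * bounds[-1] + f[1] <= pieces[-1][0] * bounds[-1] + pieces[-1][1]:
            pieces.pop()
            bounds.pop()
        bounds.append(cross(pieces[-1], f))
        pieces.append(f)
    return pieces, bounds

For instance, on the input [(2, 0), (0, -5), (-1, 1)] this sketch returns the pieces [(2, 0), (0, -5), (-1, 1)] with boundaries [-2.5, 6.0], i.e. 2x is minimal up to x = -2.5, the constant -5 on [-2.5, 6], and 1 - x from x = 6 on.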
http://arxiv.org/abs/2312.16512v1
{ "authors": [ "Stefan Volz", "Martin Storath", "Andreas Weinmann" ], "categories": [ "stat.ME", "cs.NA", "math.NA", "65K05 (Primary) 90C26 (Secondary) 62G05", "G.1.2; G.1.6" ], "primary_category": "stat.ME", "published": "20231227104158", "title": "Degrees-of-freedom penalized piecewise regression" }
Twisted restricted conformal blocks of vertex operator algebras I: g-twisted correlation functions and fusion rules
[Short title: Twisted restricted correlation functions and fusion rules]

Xu [email protected]^1, Jianqi [email protected]^2, Yiyi [email protected]^3,* (all three authors contributed equally to this work; * corresponding author)

^1 Department of Mathematics, Tongji University, 1239 Siping Road, Shanghai, 200092, Shanghai, China
^2 Department of Mathematics, University of Pennsylvania, 209 South 33rd Street, Philadelphia, 19104, PA, USA
^3 Department of Mathematics, South China University of Technology, 381 Wushan Road, Guangzhou, 510641, Guangdong, China

Abstract: In this paper, we introduce a notion of g-twisted restricted conformal block on the three-pointed twisted projective line →^1 associated with an untwisted module M^1 and the bottom levels of two g-twisted modules M^2 and M^3 over a vertex operator algebra V. We show that the space of twisted restricted conformal blocks is isomorphic to the space of g-twisted (restricted) correlation functions defined by the same datum and to the space of intertwining operators among these twisted modules. As an application, we derive a twisted version of the Fusion Rules Theorem.

MSC Classification: 17B69, 81T40

January 14, 2024
====================
<cit.>.One of the most notable applications is in the orbifold theory of VOAs <cit.>.The renowned orbifold conjecture posits that every irreducible module over the fixed-point subVOA V^G of V under some finite automorphism group G<(V) occurs in an irreducible g-twisted V-module for some g∈ G, and if V is strongly rational, then V^G also follows suit.Recent breakthroughs have established the validity of this conjecture for cyclic groups <cit.>.This has led to numerous new examples of strongly rational VOAs with irreducible modules emerging as direct summands of certain twisted modules.In the landscape of VOA theory and the associated conformal field theory (CFT for short), a crucial challenge is ascertaining the fusion algebra within the module category, entailing the computation of fusion rules among irreducible modules.By definition, the fusion rule associated to V-modules M^1, M^2, and M^3 is the dimension of the space of intertwining operators among them.In the context of certain rational orbifold CFTs, the application of the renowned Verlinde Formula <cit.> has allowed the determination of fusion rules through a concrete description of the S-matrix in their modular transformations <cit.>.On the VOA side, due to the intricate nature of twisted irreducible representations, the fusion rules were only established for certain /2 or /3-orbifold lattice VOAs.For instance, in the case of the θ-cyclic orbifold VOAs M(1)^+ and V_L^+ introduced in <cit.>, fusion rules were determined in <cit.> through explicit constructions of twisted intertwining operators for Heisenberg and lattice VOAs.In general, when dealing with an arbitrary strongly rational VOA V, the connection between fusion rules among ordinary modules over the orbifold VOA V^G and fusion rules among twisted modules over V remains elusive, and a unified method for computing fusion rules among twisted modules is currently lacking.Let V be a VOA, and let g_1, g_2, and g_3 be three finite-order automorphisms of V. The concept of twisted intertwining operators among g_1, g_2, and g_3-twisted V-modules M^1, M^2, and M^3 was initially introduced by Xu in <cit.>. Xu's definition generalizes the usual Jacobi identity of untwisted intertwining operators in <cit.> by incorporating factors involving rational powers of formal variables. 
In addition, Huang has further extended the notion of twisted modules and twisted intertwining operators to arbitrary (not necessarily commuting or of finite order) automorphisms g_1,g_2, and g_3 in <cit.> by generalizing the duality properties of untwisted intertwining operators.One approach to fusion rules from the geometric side is by exploring conformal blocks on algebraic curves associated to modules/sectors.Notably, the isomorphism between the space of correlation functions of conformal blocks on the three-pointed complex projective line (^1, ∞,1,0) associated to irreducible V-modules M^2, (M^3)', and M^1, and the space of intertwining operators of typeis well-known, as established in <cit.>.Furthermore, conformal blocks can be reconstructed from their restrictions on the bottom levels M^2(0) and M^3(0)^∗, as established in <cit.>.Then, in this context, the fusion rulecan be computed through the modules M^2(0) and M^3(0)^∗, and the bimodule A(M^1) over Zhu's algebra A(V) for the VOA V.This is the well-known Fusion Rules Theorem claimed in <cit.>.While the concept of conformal block for twisted modules has been formulated by Frenkel and Szczesny in <cit.>, the twisted version of the aforementioned story remains unexplored. In the present work, we address these questions in the simplest nontrivial scenario where g_1=1 and g_2=g_3, namely, when the V-module M^1 is untwisted, while M^2 and M^3 are both g-twisted for some automorphism g of order T<∞. We will refer to this scenario as the g-twisted case.Let I be a twisted intertwining operator among M^1, M^2, and M^3.In order to accommodate the rational powers z^1/T and w^1/T occurring in the twisted fields Y_M^2(-,z), Y_M^3(-,z), and I(-,w) simultaneously, we introduce the T-twisted projective line →^1.On this curve, we attach M^2 to 0, (M^3)' to ∞, and M^1 to a point 1∈^1 that is other than 0 and ∞.In the spirit of <cit.>, the space of g-twisted correlation functions associated to the datum(→^1, ∞, 1, 0, (M^3)', M^1, M^2) can be defined by axiomizing the behaviors of the limit function (on ) of the Puiseux series ⟨*|v'_3⟩Y_M^3(a^1,z_1)⋯ Y_M^3(a^k-1,z_k-1)I(v,w)Y_M^2(a^k,z_k)⋯ Y_M^2(a^n,z_n)v_2w^h, where v'_3∈ (M^3)', v_2∈ M^2, v∈ M^1, and a^i∈ V. Denote this space by .For the generality, we introduce a space [Σ_1(N^3, M^1, M^2)] of g-twisted correlation functions associated to the datumΣ_1(N^3, M^1, M^2):=(→^1, ∞, 1, 0, N^3, M^1, M^2), where N^3 is an arbitrary g^-1-twisted V-module. See <ref>. Our first main theorem (<ref>) establishes an isomorphism betweenand the spaceof g-twisted intertwining operators.To extend our construction to the general case where M^1 is also twisted, we have to introduce twisted curves of higher genus.For instance, the case when g_3=g_1g_2=g_2g_1 and g_1^T=g_2^T=1 involves the Fermat curve of degree T, which has genus (T-1)(T-2)/2. This will be addressed in a subsequent work. To extend the Fusion Rules Theorem to the g-twisted scenario, we introduce an auxiliary space (Σ_1(U^3, M^1, U^2)) of g-twisted restricted correlation functions associated to the datum Σ_1(U^3, M^1, U^2):=(→^1, ∞, 1, 0, U^3, M^1, U^2), where U^2 (resp. U^3) is an irreducible left (resp. right) module over the g-twisted Zhu's algebra A_g(V) introduced in <cit.>. 
The axioms we impose on (Σ_1(U^3, M^1, U^2)) are based on behaviors of the limit function of the Puiseux series eq:TheSeries, where u_2∈ M^2(0) and u'_3∈ M^3(0)^∗.A crucial difference between our axioms on (Σ_1(U^3, M^1, U^2)) and those in <cit.> is the additional non-integer shifting of the coefficient functions F_n,i(,) in the recursive formulas, due to the ramification ofat the points 0 and ∞.Our second main theorem (<ref>) establishes an isomorphism between the space of g-twisted restricted correlation functions (Σ_1(U^3, M^1, U^2)) and the space of g-twisted correlation functions [Σ_1(M(U^3), M^1, M(U^2))], where M(U^2) is the g-twisted generalized Verma module associated to U^2, and M(U^3) is the g^-1-twisted generalized Verma module associated to the right A_g(V)-module U^3 <cit.>. In the untwisted scenario, it was pointed out by Li in <cit.> that the Fusion Rules Theorem does not hold for arbitrary M^2 and M^3.The axioms of (Σ_1(U^3, M^1, U^2)) imply, in particular, that a system of correlation functions S defines a linear functional φ_S u_3⊗ v⊗ u_2↦ S*u_3(v,)u_2w^ v∈ on the vector space U^3⊗ M^1⊗ U^2.Furthermore, the linear functional φ_S vanishes on a subspace J whose definition will be provided in <ref>. The vanishing of φ_S on J can be interpreted as being invariant under the actions of the twisted chiral Lie algebras constrained at ∞ and 0. We call the space (U^3⊗ M^1⊗ U^2)/J the space of g-twisted restricted coinvariants, and the dual space ((U^3⊗ M^1⊗ U^2)/J)^∗ the space of g-twisted restricted conformal blocks, denoted by [U^3, M^1, U^2].In <ref> and <ref>, we show that there is a one-to-one correspondence between the space of g-twisted restricted conformal blocks and the space of g-twisted restricted correlation functions (<ref>) by reconstructing a system of correlation functions from a given restricted conformal block φ using the recursive formulas.In the twisted case, the occurrence of non-integer shifting of the coefficient functions F_n,i(,) poses several challenges. Our main theorem in <ref> and <ref> can also be viewed as the g-twisted and restricted version of the “propagation of vacua” theorem in <cit.>. The following diagram summarizes our main theorems in <ref>–<ref>, where we assume that M^1 is an untwisted V-module, and M^2 and M^3 are admissible g-twisted V-modules such that M^2 and (M^3)' are generalized Verma modules, with bottom levels M^2(0)=U^2 and M^3(0)^∗=U^3 being irreducible A_g(V)-modules: [column sep=1in, row sep=0.7in] [r, "Theorem thm:I=Cor"][d, "Theorem thm:CorBottom"][l][d,dashed, ""][Σ_1(U^3, M^1, U^2)][r, "Theorem thm:iso-restrictcfb-bottomcorrelation"][u][U^3, M^1, U^2][l][u,dashed]In particular, when V is g-rational <cit.>, the spaceof intertwining operators is isomorphic to the space of g-twisted restricted conformal blocks [U^3, M^1, U^2], for arbitrary irreducible g-twisted V-modules M^2 and M^3. 
In a subsequent paper, we will introduce the notions of twisted conformal blocks [Σ_1(N^3, M^1, M^2)] and twisted restricted conformal blocks [Σ_1(U^3, M^1, U^2)] using the actions of (constrained) twisted chiral Lie algebra, and demonstrate the isomorphisms in the following diagram: [column sep=0.5in, row sep=0.3in] [Σ_1(N^3, M^1, M^2)][r][d][Σ_1(N^3, M^1, M^2)][l][d][Σ_1(U^3, M^1, U^2)][r][u][Σ_1(U^3, M^1, U^2)].[l] [u] Furthermore, we will show in <ref> that the space of g-twisted restricted coinvariants (U^3⊗ M^1⊗ U^2)/J is isomorphic to both U^3⊗_A_g(V) B_g,h(M^1)⊗_A_g(V)U^2 and U^3⊗_A_g(V) A_g(M^1)⊗_A_g(V)U^2, where B_g,h(M^1) is an A_g(V)-bimodule generalizing B_h(M^1) in <cit.>, and A_g(M^1) is an A_g(V)-bimodule constructed in <cit.> that generalizes A(M^1) in <cit.>. Consequently, we have multiple methods to compute the fusion ruleswhen M^2 and (M^3)' are g-twisted generalized Verma modules. Notably, the isomorphism≅ (M^3(0)^∗⊗_A_g(V) A_g(M^1)⊗_A_g(V)M^2(0))^∗extends the renowned (untwisted) Fusion Rules Theorem in <cit.> to the g-twisted case.We also deduce several applications of the g-twisted Fusion Rules Theorem. First, we establish the finiteness of g-twisted fusion rules under the assumption that V is C_2-cofinite. Secondly, when V is strongly rational, using the main theorem of <cit.>, we find the relation between g-twisted fusion rules among irreducible g-twisted V-modules and the ordinary fusion rules among irreducible V^0-modules by decomposing M^1, M^2, and M^3 into direct sums of irreducible modules over V^0. Lastly, in <ref>, we determine the fusion rules among irreducible θ-twisted modules over the Heisenberg VOA M(1) and rank one lattice VOA V_L with L= and (|)=2. This is achieved through the calculation of A_θ(M(1,)) and A_θ(V_L+1/2), where θ is the standard involution of M(1) and V_L <cit.>. In these examples, the θ-twisted fusion rules encompass all possibilities of fusion rules among θ-twisted modules, given that θ^2=1. This paper is structured as follows. In <ref>, we introduce the twisted projective line →^1 and the space [Σ_1(N^3, M^1, M^2)] of g-twsited correlation functions. The key result in this section establishes the isomorphism betweenand .In <ref>, we introduce the space (Σ_1(U^3, M^1, U^2)) of g-twisted restricted correlation functions and demonstrate its isomorphism to [Σ_1(M(U^3), M^1, M(U^2))]. In <ref>, we reconstruct a system of correlation functions S_φ from a g-twisted restricted conformal block φ in [U^3, M^1, U^2] and establish the locality of S_φ. In <ref>, we demonstrate the associativity and other axioms of the reconstructed S_φ. In <ref>, we prove the g-twisted fusion rules theorem and discuss its applications. Finally, in <ref>, we compute the fusion rules among θ-twisted modules over the Heisenberg VOAs and the rank one lattice VOA using the g-twisted fusion rules theorem.§.§ Convention In this paper, we adopt a specific formatting convention to enhance clarity. Text that we want to emphasize, terms with clear contextual meanings, or results available in standard textbooks will be presented in italic font. Whereas terminology introduced in the context will be in bold font.We adhere to the following mathematical notation:denotes the set of natural numbers, including 0;stands for the ring of integers;represents the field of rational numbers, anddenotes the field of complex numbers.All vector spaces are defined over .Tensor products are overunless otherwise specified. 
§ SPACE OF TWISTED CORRELATION FUNCTIONS §.§ Preliminaries Throughout this article, we fix a VOA (V,Y,,) and an automorphism g∈(V) of order T.The VOA V is then decomposed into g-eigenspacesV^r=* a∈ Vg.a=e^2πr/Ta . Notably, V^0 forms a subVOA of V, and each V^r serves as a module over V^0, utilizing the same vertex operator Y (cf. <cit.>).Throughout this article, we keep the convention 0≤ r≤ T-1 for the superscript r.Unless otherwise specified, when we talk about (weak, admissible, etc.) module, we mean (weak, admissible, etc.) V-modules.We will consistently use notations like an to denote the elements in the Lie algebra Ł_g(V) (cf. eq:def:LgV) and notations like an to denote the components of a vertex operator Y(a,z) or an intertwining operator I(a,z). §.§.§ Twisted modules and the twisted Jacobi identityRecall the following definition: A weak g-twisted V-module is a vector space M equipped with a linear mapY_M V⟶(M){z},a⟼ Y_M(a,z)=∑_n∈an z^-n-1,satisfying the following axioms for all a∈ V^r, b∈ V, and u∈ M: * Index property: Y_M(a,z)=∑_n∈r/T+an z^-n-1. * Truncation property: anu=0 for n≫ 0.* Vacuum property: Y_M(,z)=𝕀_M.* Twisted Jacobi identity: z_0^-1[z_1-z_2z_0] Y_M(a,z_1)Y_M(b,z_2)u - z_0^-1[-z_2+z_1z_0] Y_M(b,z_2)Y_M(a,z_1)u= z_2^-1z_1-z_0z_2^-r/T[z_1-z_0z_2] Y_M(Y(a,z_0)b,z_2)u.A weak g-twisted V-module M is called an admissible g-twisted V-module if it admits a subspace decomposition M=⊕_n∈1/TM(n) such that amM(n)⊂ M( a-m-1+n)for any homogeneous a∈ V, any m∈, and any n∈1/T.A weak g-twisted V-module M is called a g-twisted V-module if L0 acts on it semi-simply with finite dimensional eigenspaces M_λ, and the following property holds: for each λ∈, the eigenspace M_λ+n/T vanishes when n∈ is sufficiently small.For a formal Puiseux series f(z)∈z^1/T, we employ the symbol _z f(z) for the coefficient of z^-1 in f(z).Multiplying the twisted Jacobi identity eq:Jac with z^m+r/T_1z_2^n+s/Tz_0^l and then applying _z_0_z_1_z_2, we obtain its component form as follows: Let M be a weak g-twisted module.Then, for any a∈ V^r, b∈ V^s,∑_i≥ 0li (-1)^iar/T+m+l-ibs/T+n+i -∑_i≥ 0li (-1)^l+ibs/T+n+l-iar/T+m+i=∑_j≥ 0m+r/Tj(aj+lb)r+s/T+m+n-jholds for all m,n,l∈, where aj+lb:=_zz^j+lY(a,z)b. By <ref> and taking into account <cit.>, we can readily establish a twisted version of the duality property:Let (M,Y_M) be an admissible g-twisted module and M' be its graded dual space.Then, for any a∈ V^r, b∈ V^s, and any u∈ M, u'∈ M', there exists a rational function f(z_1,z_2) with possible poles only at z_1=0,z_2=0, and z_1=z_2, such that the following identities of formal Laurent[Note that there are no fractional powers involved.] series hold:⟨*|u'⟩Y_M(a,z_1)Y_M(b,z_2)u z^r/T_1z^s/T_2=ι_z_1,z_2 f(z_1,z_2), ⟨*|u'⟩Y_M(b,z_2)Y_M(a,z_1)u z^r/T_1z^s/T_2=ι_z_2,z_1f(z_1,z_2), ⟨*|u'⟩Y_M(Y(a,z_1-z_2)b,z_2)u (z_2+z_1-z_2)^r/Tz^s/T_2=ι_z_2,z_1-z_2f(z_1,z_2),where ι_z_1,z_2,ι_z_2,z_1, and ι_z_2,z_1-z_2 send a rational function f(z_1,z_2) to its Laurent series expansions in the domains |z_1|>|z_2|, |z_2|>|z_1|, and |z_2|>|z_1-z_2| respectively.Furthermore, the component form of the twisted Jacobi identity eq:Jac' is equivalent to the existence of such a rational function. §.§.§ The associated Lie algebra and lowest-weight modules Let's recall the Lie algebra ℒ_g(V) associated toa VOA V and an automorphism g of V with order T,as introduced in <cit.>.The automorphism g can be extended to the vertex algebra V⊗[t^±1T] byg(a⊗ t^m/T):=e^-2πm/T(ga⊗ t^m/T).Denote the g-invariant subspace of V⊗[t^±1T] by ℒ(V, g). 
It is clear that ℒ(V, g) is a sub-vertex algebra of V⊗[t^±1T], with the translation operator ∇:= L-1⊗𝕀 + 𝕀⊗t.Then, ℒ_g(V) is the quotientℒ_g(V):=ℒ(V, g)/∇ℒ(V, g).For any m∈ and a∈ V, we denote the equivalent class of a⊗ t^m/T in ℒ_g(V) by am/T.Then, ℒ_g(V) is a Lie algebra, with the Lie bracket given by*am+r/Tbn+s/T =∑_j≥ 0m+r/Tj(ajb)m+n+r+s/T-j,for any a∈ V^r,b∈ V^s, and m,n∈.Moreover, Ł_g(V) has a natural gradation given by am/T= a-m/T-1,where m∈ and a is a homogeneous element of V.Let Ł_g(V)_n be the subspace of Ł_g(V) spanned by elements of degree n∈1/T.Then, we have a triangular decomposition:Ł_g(V)=Ł_g(V)_-⊕Ł_g(V)_0⊕Ł_g(V)_+,where Ł_g(V)_±=⊕_n∈1/Tℤ_>0Ł_g(V)_± n. Recall the following result in <cit.>:Let M be a weak g-twisted module. Then, the linear mapŁ_g(V)⟶(M), am/T⟼_z Y_M(a,z)z^m/T defines a representation of the Lie algebra Ł_g(V) on M.Furthermore, if M is equipped with a 1/T-gradation, then M is an admissible g-twisted module if and only if M is a graded module for the graded Lie algebra Ł_g(V). Recall the following definition in <cit.>:A weak g-twisted module M is called a lowest-weight module if there exists h∈ such that the L0-eigenspace M_h with eigenvalue h is an irreducible Ł_g(V)_0-module, and M=(Ł_g(V)_+)M_h. If this is the case, then L0 acts on M semi-simply with eigenvalues in h+1/T.We denote the eigenspace M_h+n/T by M(nT), and write u=n/T and u=h+n/T for any homogeneous element u∈ M(nT).Then, M=⊕_n∈M(n/T) is an admissible g-twisted module. An admissible g-twisted module M=⊕_n∈M(n/T) is said to be of conformal weight h∈, ifL0 acts on M semi-simply and each eigenspace M_h+n/T is precisely M(n/T).Let M=⊕_n∈ℕM(n/T) be an admissible g-twisted module.Then, its graded dual space M'=⊕_n∈ℕM(n/T)^∗naturally carries right g-twisted vertex operators given by compositions u'∘ Y_M (u'∈ M').Such a structure induces an usual admissible g^-1-twisted module structure Y_M' defined asY_M'(a, z)u' := u'∘ Y_M(e^zL1(-z^-2)^L0a, z^-1),where a∈ V and u'∈ M'. This module is called the contragredient module of M; refer to <cit.> for more details.For an admissible g-twisted module M, its components M(n/T) need NOT be finite-dimensional. Consequently, its double contragredient module M” is not necessarily equal to M itself. Hence, in general, given a g^-1-twisted module N, there is no guarantee that there exists an admissible g-twisted module M such that M' = N. Note that eq:def:contragredient implies⟨*|Y_M'(e^zL1(-z^-2)^L0a, z^-1)u'⟩u =⟨*|u'⟩Y_M(a,z)u.This allows us to spell out a translation between the right action of Ł_g(V) and the left action of Ł_g^-1(V) induced from the contragredient vertex operator Y_M'.Indeed, for any a∈ V and m∈, define θ(am/T):=∑_j≥ 0(-1)^ a/j!(L1^ja) a-j-1 + am/T.Then, θ is an anti-isomorphism between Ł_g(V) and Ł_g^-1(V) and we haveθ(am/T)u'= u'∘am/T, a∈ V, u'∈ M'.In particular, we haveθ(am/T)M'(n)⊂ M'(n-am/T)§.§.§ g-twisted intertwining operators The following definition can be found in <cit.>:Let M^1 (resp. M^2 and M^3) be a weak untwisted (resp. g-twisted) module.An g-twisted intertwining operator of typeis a linear map I(·, w)M^1 ⟶(M^2, M^3){w}, v⟼ I(v,w)=∑_m∈ v_m w^-m-1, satisfying the following axioms: * Truncation property: For any v∈ M^1, v_2∈ M^2, and a fixed λ∈, we have v_λ+nv_2=0whenever n∈ and n≫0.* Twisted Jacobi identity: For any a∈ V^r and v ∈ M^1,we havez_1^-1[z_2-wz_1] Y_M^3(a,z_2)I(v,w) -z_1^-1[w-z_2-z_1] I(v,w)Y_M^2(a,z_2)=z_2^-1w+z_1z_2^r/T[w+z_1z_2] I(Y_M^1(a,z_1)v,w). * L-1-derivative property: For any v∈ M^1, I(L-1v, w)=wI(v, w). 
Denote the space of g-twisted intertwining operators of typebyand set :=.These numbers are called the fusion rules associated to the above data. The following proposition is a straightforward consequence of the twisted Jacobi identity and the L-1-derivative property. See <cit.> and <cit.> for more details.Suppose M^1, M^2, and M^3 are of conformal weights h_1, h_2, and h_3 respectively.For any v∈ M^1, we can write the intertwining operator I(v,w) asw^h_1+h_2-h_3I(v,w)= ∑_m∈vm/T z^-m/T-1∈(M^2, M^3)z^±1/T,where vm/T=v_h_1+h_2-h_3+m/T. Furthermore, vm/TM^2(n/T)⊂ M^3(n/T+ v-m/T-1) for any homogeneous v∈ M^1 and any m,n∈.Multiplying the twisted Jacobi identity eq:twistedJac with z^m+r/T_1z_2^h+n/Tz_0^l, where m,n,l∈, then take _z_0_z_1_z_2, we obtain its component form: Let a∈ V^r and v∈ M^1, we have∑_i≥ 0li(-1)^i ar/T+m+l-ivn/T+i -∑_i≥ 0li(-1)^l+ivn/T+l-iam+r/T+i=∑_j≥ 0m+r/Tj(aj+lv)m-j+r+n/T,for any m,n,l∈.§.§ Functions on the twisted projective lineNow we introduce the algebraic curve C̅. For the general theory of algebraic curves, we refer to various algebraic geometry textbooks such as <cit.> and <cit.>, or consult <cit.> for a traditional treatment and <cit.> for an analytic approach to Riemann surfaces. Throughout this paper and its subsequences, we adopt the following conventions on an integral scheme (,Ø_) that is proper over : * We use the same notation for the analytification of . According to GAGA, the categories of coherent algebraic sheaves onand coherent analytic sheaves onare equivalent, thus the terminology of Ø_-module is unambiguous. A special case provides the equivalence between rational functions and meromorphic functions on , which allows us to use these terminologies interchangeably. We use _ to denote the constant sheaf of the field of rational functions. * The de Rham complex ofis denoted as (Ω^∙_,)̣. When the 1-forms x̣_̣1̣,⋯,x̣_̣ṇ form a basis of the first de Rham cohomology space H^1(,Ω^∙_), we will use x_1,⋯,x_n to denote its dual basis in the module of derivatives.* Unless otherwise specified, a point of , we mean a closed point of . We use Fraktur letters, such asand , to denote such points. By abuse of notation, we do not distinguish a skyscraper sheaf supported at a pointwith its stalk at .For a point , we use _ to denote its ideal sheaf and κ_ the residue field Ø_/_.* Following <cit.>, we use Ø_ to denote the complete local ring at . That is the _-adic completion of Ø_, namely the limit Ø_/_^n.It is also the completion of the stalk of Ø_ at . Now, we assume thatis a curve, i.e., =1.* Each complete local ring Ø_ is a DVR. We use v_ to denote the normalized valuation (i.e. v_(Ø_∖0)=).This valuation extends to the function field _.* A divisor onis a linear combination of points of .The support Δ of a divisor Δ is the set of points involved (i.e. has nonzero coefficient) in Δ.Any rational function f ondefines a divisor (f):=∑_∈v_(f). 
Given a divisor Δ, we use Ø(∞Δ) to denote the sheaf of meromorphic functions with possible poles along Δ.* By the Cohen structure theorem, Ø_≅t for some topological generator t of Ø_.Such an element t is called a local coordinate at .The choice ofand t provides a morphism ι_tØ_Ø_≅t, images under which are called formal expansions.The pair (,t) is thus called a local chart.§.§.§ Formal expansions of rational functionsFirst, we recall the general formal expansions.Let U be an open neighborhood of a pointof .Then any rational function f on U admits an expansion f = ∑_n=v_(f)^∞a_nt^n,where a_n∈ and t is a fixed local coordinate at .This equality is understood in the sense that the series on the right-hand side converges to f under the _-adic topology.When f is regular on U, this is just a concrete way to spell out the embedding ι_t, where each ∑_n=0^ma_nt^n serves as a representative of the class f+_^m+1.The rational case follows by taking the fractional sections on both sides of the canonical embedding. To connect this lemma with its analytic counterpart, we need the following notions.For a pointof , a (germ of) 1-cycle aroundis an element in the costalk of the cosheaf of punctured singular homology U↦ H_1(U-,), which can be presented as a 1-cycle on a sufficiently small punctured neighborhood of . Such a 1-cycle is called simple if its winding number is 1. For any meromorphic 1-form α on , its residue _α at a point ∈ is the value of the integration 1/2π∫_γα, where γ is a simple 1-cycle around .By this definition, given any rational map f→^1 and any meromorphic 1-form α on ^1, we have_f^∗α =mult_f()·_f()α.Here mult_f() denotes the multiplicity of f at , which is the κ_f()-dimension of the fiber of the direct image f_∗Ø_ at f().The coefficient a_n in <ref> can be computed by the formula a_n = _t^-n-1fṭ.Direct computation shows ∫_γt^nṭ=2πif n=-1,0 otherwise,where γ is a simple 1-cycle around .Then, the statement follows.Let t be a local coordinate at . The lemma shows the following: for any meromorphic function f, we have _fṭ = _t(ι_tf).As a corollary of this lemma, the series in <ref> converges absolutely on U- and defines a meromorphic function which can be expressed as the rational function f.The following is a special case of <cit.>, originally credited to <cit.>.Letbe distinct points.Then a collection of formal series *f_i∈Ø__i_i=1,⋯,n has the property that ∑_i=1^n__ifα = 0,α∈Γ(-,Ω^1),if and only if f_i can be extended to the same regular function on -. §.§.§ The twisted projective lineThe algebraic curve we are concerned with is a T-twisted version of the projective line. Abstractly, it is an stacky curve obtained from ^1 modulo an action of a cyclic group of order T.We represent it as a smooth projective curveequipped with a ramified covering →^1, whose Galois group is cyclic of order T. To make our expressions more explicit, we give the following ad-hoc construction. First, let C be the smooth curve overdefined by the polynomial Y^T-X. For a point ∈ C, we use (X(),Y()) to denote its (global) coordinates in ^2.We refer to the point with coordinates (X,Y)=(0,0) as 0. 
Then, we have an isomorphism Γ(C,Ø_C)=[X,Y]/(Y^T-X)≅[z^1/T] X↦ z, Y↦ z^1/T.With the identification eq:isoC, we have the following correspondences: * regular functions on C ⟷ [z^1/T]; and * regular functions on C-0 ⟷ [z^±1/T].Next, we introduce C' as another copy of C, with coordinates written as X' and Y'.We refer to the point with (X',Y')=(0,0) as ∞.Then, we can identify C-0 with C'-∞ through the isomorphism provided by X^-1=X' and Y^-1=Y'.We call this domain .Gluing C and C' alongresults a compactification of C, denoted by .Then, eq:isoC extends to an isomorphism ≅[z^1/T] and gives the following correspondence: * rational functions on⟷ (z^1/T). Finally, the coordinate X extends to a rational function →^1 provides the desired T-fold ramified covering of the projective line ^1, with branch points 0,∞ and unramified locus :=^1∖0,∞. It is straightforward to verify that its group of Deck transformations is cyclic of order T. The inverse image ^∗Ø_^1 can be interpreted as the subsheaf of Ø_ consisting of regular functions factoring through .Then, the following correspondence is straightforward: * rational functions onthat factor through →^1 ⟷ (z)⊂(z^1/T). On the other hand, the direct image _∗Ø_ can be interpreted as an extension of Ø_^1. Spelling out the stalks of _∗Ø_, we see that its sections are multi-valued functions on ^1.Our construction leads to a natural choice of local coordinates at 0, ∞, and ∈. Refer to <ref> for these coordinates, along with the established formal expansions provided by <ref>.The following corollary of <ref> plays a crucial role in the present work. For any meromorphic 1-form α on ^1, we have1T_=0^∗α + 1T_=∞^∗α +∑__=^∗α = 0,whereranges over a branch of . That is to say, for each singularity ∈^1 of α, we only take one representativefrom ^-1.Note that the summation is finite since α only has finitely many poles.The residue sum formula on ^1 indicates that the sum of residues of a meromorphic 1-form α on ^1 is zero.By eq:InvRes, we have 0= ∑_∈^1_α = ∑_mult_()^-1_^∗α,whereranges over a branch of .Then eq:ResidueSum follows from the observation that mult_(0)=T, mult_(∞)=T, and mult_()=1 for ∈.In the rest of this paper, we will frequently express a rational function onin terms of the local coordinate at the given point.For simplicity, we introduce the following shorthand notations. §.§.§ Expansions of two-variable functionsLet f(,) be a meromorphic function on × with possible poles at 0, ∞, and the divisor (z-w).We can write f(,) in the formf(,) =g(z^1/T,w^1/T)/z^m/Tw^n/T(z-w)^l,where g(z^1/T,w^1/T)∈[z^1/T,w^1/T].Fixingand varying , we can assign the following three expansions to f: * ι_=∞f, the formal expansion at the point ∞. This corresponds to the ι_z,w-expansion in formal calculus, reflecting that the series converges in the domain z>w.By <ref>, ι_=∞f = g(z^1/T,w^1/T)/z^m/Tw^n/T∑_i≥ 0l+i-1iz^-l-iw^i ∈z^-1/Tw^1/T.In particular, _z(ι_=∞f) = - 1/T_=∞fẓ∈[w^±1/T].* ι_=0f, the formal expansion at the point 0. This corresponds to the ι_w,z-expansion in formal calculus, reflecting that the series converges in the domain w>z.By <ref>, ι_=0f = g(z^1/T,w^1/T)/z^m/Tw^n/T∑_i≥ 0-li(-w)^-l-iz^i ∈w^-1/Tz^1/T.In particular, _z(ι_=0f) = 1/T_=0fẓ∈[w^±1/T].* ι_=f, the formal expansion at the point . This corresponds to the ι_w,z-w-expansion in formal calculus, reflecting that the series converges in the domain w>z-w with the argument range[Here (a:b)∈-ππ denotes the argument of the ray [a:b]∈^1. This condition guarantees thatandare in the same branch of .] (z^1/T:w^1/T)<π/2T. 
By <ref>, ι_=z^m/T = ∑_i≥ 0m/Tiw^m/T-i(z-w)^i∈w^-1/Tz-w.In particular, _z-w(ι_=f) = _=fẓ∈[w^±1/T].By <ref> and eq:twistedJac, we have the following duality property.Let M^1 be an admissible untwisted module, and let M^2 and M^3 be admissible g-twisted modules.Suppose M^1, M^2, and M^3 are of conformal weights h_1, h_2, and h_3 respectively.For any a∈ V^r, v∈ M^1, v_2∈ M^2, and v'_3∈ (M^3)', there exists a meromorphic function f on × of the form (where m,n,l∈)f(,) =g(z,w^1/T)/z^r/Tz^mw^n/T(z-w)^l(g(z,w^1/T)∈[z,w^1/T]),such that the following identities of formal Puiseux series hold:⟨*|v'_3⟩Y_M^3(a,z)I(v,w)v_2 w^h= ι_=∞f,⟨*|v'_3⟩I(v,w)Y_M^2(a,z)v_2 w^h= ι_=0f,⟨*|v'_3⟩I(Y_M^1(a,z-w)v,w)v_2 w^h= ι_=f.With the formulas eq:iota_zw,eq:iota_wz,eq:iota_wz-w,the statement follows from the the twisted Jacobi identity eq:twistedJac by the lem:SRT. Conversely, we have For any meromorphic function on × of the form eq:2ptFun, we have_z(ι_=∞z^r/Tf)-_z (ι_=0z^r/Tf)=_z-w(ι_=z^r/Tf). In particular, the twisted Jacobi identity eq:twistedJac of the intertwining operator I among an admissible untwisted module M^1 and admissible g-twisted modules M^2 and M^3 is equivalent to the existence of a meromorphic function f(,) satisfying <ref>. Fixing ∈, the meromorphic 1-form z^r/Tfẓ factors through the covering map x→^1. Hence, the lem:ResidueSum applies and gives us1T_=0z^r/Tfẓ +1T_=∞z^r/Tfẓ +_=z^r/Tfẓ = 0.Applying formulas eq:iota_zw,eq:iota_wz,eq:iota_wz-w to the above, we obtain the desired identity.§.§ Space of g-twisted correlation functions In this subsection, we define the space of g-twisted correlation functions for the g-twisted conformal blocks. Let M^1 be an admissible untwisted module, and let M^2 and M^3 be admissible g-twisted modules. Suppose M^1, M^2, and M^3 are of conformal weights h_1, h_2, and h_3, respectively. Suppose I is a g-twisted intertwining operator in .For v∈ M^1, v_2∈ M^2, v_3'∈ (M^3)', and , consider the following (n+1)-variable formal Puiseux series: ⟨*|v_3'⟩ Y_M^3(a^1,z_1)⋯ Y_M^3(a^k-1,z_k-1) I(v,w) Y_M^2(a^k,z_k)⋯ Y_M^2(a^n,z_n)v_2 w^h,Using a similar method as the proof of <cit.>, by <ref>, applying the lem:SRT, we see that there is a rational function on ^n+1 of the form:f(,)= g(,w^1/T) w^n/T∏_i=1^n z_i^r_i/Tz_i^m_i∏_j<k(z_j-z_k)^l_jk∏_p=1^n (z_p-w)^l_p,where m_i,n,l_kj,l_p∈, and g(,w^1/T)∈[,w^1/T] such that the series eq:npt-Fun is the formal expansion of f(,) in the domain*(,)∈^n+1∞>|z_1|>⋯>|z_k-1|>|w|>|z_k|>⋯>|z_n|>0.We denote the space of functions of the form eq:F(rseq) by (). Let Δ_n denote the divisor Δ_n:= (w∏_i=1^nz_i ∏_j<k(z_j-z_k) ∏_p=1^n (z_p-w)).Then the space Ø(∞Δ_n) of meromorphic functions on ^n+1 with possible poles along the divisor Δ_n, namely at the points where either z_i=0, z_i=∞, w=0, w=∞, z_j=z_k, or z_p=w, admits a decompositionØ(∞Δ_n) =⊕_0≤≤ T-1(),Note that (∅)=Ø(∞Δ_0)=Γ(,Ø_)=[w^±1/T].Following <cit.> with slight modifications, we use the following notation to denote the function f(_1,⋯,_n,) in eq:F(rseq):S_I*v_3'[1][k-1](v,)[k][n]v_2.Then we obtain a system of linear maps S_I=*(S_I)^n_V⋯ M^1⋯ V_n∈, where (S_I)_V⋯ M^1⋯ V^n(M^3)'⊗ V⊗⋯⊗ V⊗ M^1⊗ V⊗⋯ V⊗ M^2⟶Ø(∞Δ_n), v_3'⊗ a^1⊗⋯⊗ a^k-1⊗ v⊗ a^k⊗⋯⊗ a^n⊗ v_2⟼ S_I*v_3'[1][k-1](v,)[k][n]v_2. 
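To see what these maps record in the simplest case n = 0 (an unwinding of the definition which we add for orientation), the series eq:npt-Fun contains no vertex operators, and S_I evaluated on v_3' ⊗ v ⊗ v_2 is simply
\[
\bigl\langle v_3',\,I(v,w)\,v_2\bigr\rangle\,w^{h}\;\in\;\mathbb{C}[w^{\pm 1/T}],
\]
a Laurent polynomial in w^{1/T}: only finitely many modes of I(v,w) contribute because v_3' lies in the graded dual and the truncation property holds. This matches the identification of the n = 0 function space with [w^{±1/T}] noted above.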
By <ref>, we have:* each map (S_I)_V⋯ M^1⋯ V^n factors through (M^3)'⊗Sym[V,⋯,V,M^1] ⊗ M^2; and * (S_I)^n_M^1V⋯ V=(S_I)^n_VM^1⋯ V=⋯ =(S_I)^n_V⋯ VM^1.Hence we can always put the terms in the order (v,) and omit these terms unless we want to emphasize some of them.Note that for homogeneous , the function S_I*v_3'⋯v_2 z_1^r_1/T⋯ z_n^r_n/T factors through (x,⋯,x,𝕀)^n+1→(^1)^n×, and thus can be viewed as a meromorphic function on (^1)^n×.The following definition generalizes <cit.>:Let V be a VOA with an order T automorphism g, and let M^1 (resp. M^2 and N^3) be an admissible untwisted (resp. g-twisted and g^-1-twisted) module of conformal weight h_1 (resp. h_2 and h_3).Put h=h_1+h_2-h_3.A system of linear mapsS=*S^n_V⋯ M^1⋯ V_n∈, S_V⋯ M^1⋯ V^nN^3⊗ V⊗⋯⊗ V⊗ M^1⊗ V⊗⋯ V⊗ M^2⟶Ø(∞Δ_n), v_3⊗ a^1⊗⋯⊗ a^k-1⊗ v⊗ a^k⊗⋯⊗ a^n⊗ v_2⟼ S*v_3[1][k-1](v,)[k][n]v_2,is said to satisfy the twisted genus-zero property associated to the datum Σ_1(N^3, M^1, M^2):=(→^1, ∞, 1, 0, N^3, M^1, M^2)if it satisfies the following axioms for all v_3∈ N^3,v_2∈ M^2: * Truncation property: For any fixed v∈ M^1 and v_2∈ M^2, there exists N∈ depending only on v,v_2, such thatS*v_3(v,)v_2w^N/T∈[w^1/T] for all v_3∈ N^3. * Locality: The terms (v,) can be arbitrarily permuted: * each map S_V⋯ M^1⋯ V^n factors through N^3⊗Sym[V,⋯,V,M^1] ⊗ M^2; and * S^n_M^1V⋯ V=S^n_VM^1⋯ V=⋯ =S^n_V⋯ VM^1.Hence we can always put the terms in the order (v,) and omit these terms unless we want to emphasize some of them.* Homogeneous property: For homogeneous , we have S*v_3⋯v_2∈(). * Vacuum property:S*v_3(,)⋯v_2 = S*v_3⋯v_2. * L-1-derivative property: z_1S*v_3(a^1,_1)⋯v_2 = S*v_3(L-1a^1,_1)⋯v_2,w(S*v_3⋯(v,)v_2w^-h)= S*v_3⋯(L-1v,)v_2w^-h. * Associativity:__1=S*v_3(a^1,_1)⋯(v,)v_2(z_1-w)^kẓ_̣1̣ = S*v_3⋯(a^1kv,)v_2,__1=_2S*v_3(a^1,_1)(a^2,_2)⋯v_2(z_1-z_2)^kẓ_̣1̣ = S*v_3(a^1ka^2,_2)⋯v_2.* Generating property for M^2: For any a∈ V^r and m∈, we have:_=0S*v_3(a,)⋯v_2z^m+r/Tẓ =TS*v_3⋯am+r/Tv_2. * Generating property for N^3:For any a∈ V^r and m∈, we have_=∞S*v_3(a,)⋯v_2z^m+r/Tẓ = -TS*θ(am+r/T)v_3⋯v_2,where θŁ_g(V)→Ł_g^-1(V) is the anti-isomorphism defined in eq:def:theta and Ł_g^-1(V) acts on the admissible g^-1-twisted module N^3 via <ref>. The vector space consists of systems of linear maps S=*S^n_V⋯ M^1⋯ V_n∈ satisfying the above axioms is called the space of g-twisted correlation functions associated to the datum Σ_1(N^3, M^1, M^2), we denote it by [Σ_1(N^3, M^1, M^2)].When N^3 is the contragridient module of an admissible g-twisted module M^3, we call this space the space of g-twisted correlation functions of typeand denote it by .Note that we do not initially require N^3 to be the contragridient module of an admissible g-twisted module M^3 in the definition above.The system S_I given by eq:SI belongs to . We have already proven the locality (2) and the homogeneous property (3).The properties (1), (4), and (5) follow from the truncation property, the vacuum property, and the L-1-derivative property of I(·,w) and Y_M^i(·,z).For the associativity (6): by <ref>, we have __1=S_I*v_3'(a^1,_1)⋯(v,)v_2(z_1-w)^kẓ_̣1̣ ι__1= =_z_1-wS_I*v_3'⋯(Y_M^1(a^1,z_1)v,)v_2(z_1-w)^k =S_I*v_3'⋯(a^1kv,)v_2,__1=_2S_I*v_3'(a^1,_1)(a^2,_2)⋯v_2(z_1-z_2)^kẓ_̣1̣ ι__1=_2 =_z_1-z_2S_I*v_3'(Y_M^1(a^1,z_1)a^2,_2)⋯v_2(z_1-z_2)^k=S_I*v_3'(a^1ka^2,_2)⋯v_2. 
For the generating property (7):_=0S_I*v_3'(a,)⋯v_2z^m+r/Tẓ =_=0S_I*v_3'⋯(a,)v_2z^m+r/Tẓι_=0=T_zS_I*v_3'⋯Y_M^2(a,z)v_2z^m+r/T =TS_I*v_3'⋯am+r/Tv_2 For the generating property (8): using eq:contragredient',_=∞S_I*v_3'(a,)⋯v_2z^m+r/Tẓ ι_=∞ =-T_zS_I*Y_(M^3)'(e^zL1(-z^-2)^L0a, z^-1)v_3'⋯v_2z^m+r/Tẓ eq:def:theta = -TS_I*θ(am+r/T)v_3'⋯v_2.Hence S_I satisfies the twisted genus-zero property associated to . Now we have our first main theorem of this paper, which generalizes <cit.>.Let M^1 (resp. M^2 and M^3) be an admissible untwisted (resp. g-twisted) module of conformal weight h_1 (resp. h_2 and h_3).Put h=h_1+h_2-h_3.Then we have the following isomorphism of vector spaces:≅,I⟼ S_I.Given any S∈, we define I_S(·,w) M^1→(M^2,M^3)w^±1/Tw^-h,v↦ I_S(v,w)=∑_n∈vn/Tw^-n/T-1-h,where vn/T is determined by⟨*|v_3'⟩vn/Tv_2 =1T_=0S*v_3'(v,)v_2w^n/Tẉ∈[w^±1/T],where v_3'∈(M^3)' and v_2∈ M^2. Then the truncation property and the L-1-derivative property of I_S follow from the axioms (1) and (5). It remains to show the following twisted Jacobi identity of the twisted intertwining operator I_S(·,w):∑_i≥ 0li(-1)^i ar/T+m+l-ivn/T+iv_2 -∑_i≥ 0li(-1)^l+ivn/T+l-iam+r/T+iv_2=∑_j≥ 0m+r/Tj(aj+lv)m-j+r+n/Tv_2.Note that the involved summations are finite due to the truncation property.Indeed, by the generating properties eq:generating_M3, eq:IfromS, and eq:contragredient'-comp, we have⟨*|v_3'⟩∑_i≥ 0li(-1)^i ar/T+m+l-ivn/T+iv_2=∑_i≥ 0li(-1)^i ⟨*|θ(ar/T+m+l-i)v_3'⟩vn/T+iv_2=- ∑_i≥ 0li(-1)^i 1T^2_=0_=∞ S*v_3'(a,)(v,)v_2z^m+l-i-r/Tw^n/T+iẓẉ=- 1T^2_=0_=∞ S*v_3'(a,)(v,)v_2 (z-w)^lz^m+r/Tw^n/Tẓẉ.On the other hand, by eq:generating_M2 and eq:IfromS, we have⟨*|v_3'⟩∑_i≥ 0li(-1)^l+ivn/T+l-iam+r/T+iv_2=∑_i≥ 0li(-1)^l+i1T^2_=0_=0 S*v_3'(a,)(v,)v_2z^m+r/T+iw^n/T+l-iẓẉ= 1T^2_=0_=0 S*v_3'(a,)(v,)v_2 (z-w)^lz^m+r/Tw^n/Tẓẉ.Therefore, we have ⟨*|v_3'⟩∑_i≥ 0li(-1)^i ar/T+m+l-ivn/T+iv_2 -∑_i≥ 0li(-1)^l+ivn/T+l-iam+r/T+i v_2= 1T^2_=0(-_=∞-_=0) S*v_3'(a,)(v,)v_2 (z-w)^lz^m+r/Tw^n/Tẓẉ ∗ =1T_=0_= S*v_3'(a,)(v,)v_2 (z-w)^lz^m+r/Tw^n/Tẓẉ ** =1T_=0∑_j≥0m+r/Tj_=S*v_3'(a,)(v,)v_2 (z-w)^l+jw^m-j+r+n/Tẓẉ eq:associativity_p1q =∑_j≥ 0m+r/Tj1T_=0S*v_3'(al+jv,)v_2w^m-j+r+n/Tẉ eq:IfromS =∑_j≥0m+r/Tj⟨v_3'|(al+jv,)m-j+r+n/Tv_2⟩,where * follows from the residue sum formula and ** follows from applying ι_= to z^m+r/T.Indeed, we have S*v_3'(a,)(v,)v_2∈(r) by the homogeneous property of S.Hence the meromorphic 1-form S*v_3'(a,)(v,)v_2(z-w)^lz^m+r/Tw^n/Tẓ factors throughwith possible poles 0, ∞, and w on ^1.Then the lem:ResidueSum applies.For the expansion ι_=, note that the summation involved is finite,so it commutes with the integrals. Thus the equality follows. § RECONSTRUCTING G-TWISTED CORRELATION FUNCTIONS FROM RESTRICTED CORRELATION FUNCTIONSIn this section, we introduce the space of correlation functions associated to the datum (→^1, ∞, 1, 0, U^3, M^1, U^2) where U^2 (resp. U^3) is a left (resp. right) A_g(V)-module.In our application, U^2 and U^3 are the lowest-weight subspaces of M^2 and (M^3)' respectively.In general, they are only considered as irreducible modules over the g-twisted Zhu's algebra A_g(V). Recall the definition of A_g(V) in <cit.>. It is the quotient of V modulo the subspace O_g(V), which is spanned bya∘ _g b:=_z (1+z)^ a-1+δ(r)+r/T/z^1+δ(r) Y(a,z)b, a∈ V^r,b∈ V, where δ(r) is defined by δ(r)=1 if r=0,0 otherwise.By <cit.>, V^r⊆ O_g(V) for r≠ 0. Define a∗ _g b:= _z Y(a,z)b(1+z)^ a/z if a∈ V^0,0otherwise.Denote the image of a∈ V in A_g(V) by [a]. 
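For instance, when T=1 (so that g=\mathrm{id}_V), every a lies in V^0 and δ(r)=1, and the two formulas above become
\[
a\circ b=\operatorname{Res}_z\frac{(1+z)^{\operatorname{wt} a}}{z^2}\,Y(a,z)b,
\qquad
a\ast b=\operatorname{Res}_z\frac{(1+z)^{\operatorname{wt} a}}{z}\,Y(a,z)b,
\]
so that A_g(V) is Zhu's algebra A(V). More generally, for a,b∈ V^0 the products ∘_g and ∗_g are given by the same expressions as the ones defining A(V^0), which underlies the epimorphism A(V^0)→ A_g(V) recalled later.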
We have the following result:The operation ∗_g induces an associative algebra structure on A_g(V) with [] as the identity, and [] lying in its center.Let M=⊕_n∈ M(n/T) be an admissible g-twisted module. By eq:admissible, its bottom level M(0) is preserved by the zero mode operators o(a):=a a-1 (a∈ V).Note that the assignment o a↦ o(a) vanishes outside V^0.Furthermore, we haveThe bottom level M(0) is an A_g(V)-module with the action given by [a]· v=o(a)v for a∈ V and v∈ M(0). More specifically, we have o(a∘_g b)v =0, o(a)o(b)v =o(a∗ b)v, o(a)o(b)v-o(b)o(a)v =∑_j≥ 0 a-1j o(ajb)v fora∈ V^0. Given an A_g(V)-module U, the dual space U^∗ is a right module over A_g(V),where [a] acts on u'∈ U^∗ on the right by ⟨*|u'· [a]⟩u=⟨*|u'⟩[a]· u for u∈ U.When U=M(0) for some admissible g-twisted module M, we have the following formula that is dual to eq:oaob-oboa: v'o(a)o(b)-v'o(b)o(a)=∑_j≥ 0 a-1j v'o(ajb), a,b∈ V^0, v∈ U, §.§ Space of g-twisted restricted correlation functionsTo define the auxiliary space [Σ_1(U^3, M^1, U^2)] of g-twisted restricted correlation functions, we need the following two-variable functions on F_n,i(,):= z^-n/i!(w)^iw^n/z-w,for all n∈1T.The following lemmas are evident.The expansions of F_n,i(,) at =0, =∞, and = areι_=0F_n,i = -∑_j≥ 0n-j-1iz^j-nw^n-j-i-1, ι_=∞F_n,i = ∑_j≥ 0n+jiz^-n-j-1w^n+j-i, ι_=F_n, i = ∑_l=0^i ∑_p≥ 0ni-l-np w^-i+l-p(z-w)^p-l-1.The functions F_n,i(,) for successive n have the following relation:F_n,i(,) - F_n+1,i(,) = niz^-n-1w^n-i. We now give a definition that generalizes <cit.> to the g-twisted case. Note that there is an additional shifting in each recursive formula. Let M^1 be an admissible untwisted module of conformal weight h_1, and U^2 (resp. U^3) a left (resp. right) A_g(V)-module where [] acts as h_2𝕀 (resp. h_3𝕀). Put h=h_1+h_2-h_3.A system of linear maps S=*S^n_V⋯ M^1⋯ V_n∈, whereS_V⋯ M^1⋯ V^nU^3⊗ V⊗⋯⊗ V⊗ M^1⊗ V⊗⋯ V⊗ U^2⟶Ø(∞Δ_n), u_3⊗ a^1⊗⋯⊗ a^k-1⊗ v⊗ a^k⊗⋯⊗ a^n⊗ u_2⟼ S*u_3[1][k-1](v,)[k][n]u_2,is said to satisfy the twisted genus-zero property associated to the datum Σ_1(U^3, M^1, U^2):=(→^1, ∞, 1, 0, U^3, M^1, U^2)if it satisfies the following axioms for all u_2∈ U^2 and u_3∈ U^3:* Properties (2)–(6) in <ref>, with u_3∈ U^3 and u_2∈ U^2.* Monomial property: There is a linear functional φ∈ (U^3⊗ M^1ø U^2)^∗ such that S*u_3(v,)u_2=⟨*|φ⟩u_3⊗ w^-L(0)+h_1vø u_2. * Recursive formula about U^3 and V: For any a∈ V^r, we haveS*u_3(a,)⋯u_2 = S*u_3· [a]⋯u_2z^- a=+∑_k=1^n∑_i≥ 0F_ a-1+δ(r)+r/T,i(,_k)S*u_3⋯(aia^k,_k)⋯u_2=+∑_i≥ 0F_ a-1+δ(r)+r/T,i(,)S*u_3⋯(aiv,)u_2, * Recursive formula about U^2 and V: For any a∈ V^r, we have S*u_3⋯(a,)u_2 = S*u_3⋯[a]· u_2z^- a=+∑_k=1^n∑_i≥ 0F_ a-1+r/T,i(,_k)S*u_3⋯(aia^k,_k)⋯u_2=+∑_i≥ 0F_ a-1+r/T,i(,)S*u_3⋯(aiv,)u_2, The vector space consists of systems of linear maps S=*S^n_V⋯ M^1⋯ V_n∈ satisfying the above axioms is called the space of g-twisted restricted correlation functions associated to the datum Σ_1(U^3, M^1, U^2) and is denoted by [Σ_1(U^3, M^1, U^2)].Let S be a system of g-twisted correlation functions associated to the datum Σ_1(N^3, M^1, M^2) in eq:def:Sigma1, then its restriction to the bottom levels of N^3 and M^2 gives a systemof g-twisted restricted correlation functions associated to the datum Σ_1(N^3(0), M^1, M^2(0)).We first show the monomial property.Since we have S*u_3(v,)u_2∈[w^±1/T] by truncation property, it suffics to show the w-derivative of S*u_3(w^L(0)-h_1v,)u_2 vanishes for all u_3∈ N^3(0), v∈ M^1, and u_2∈ M^2(0). 
Indeed, for homogeneous v∈ M^1, we have w(S*u_3(v,)u_2w^ v) =w(S*u_3(v,)u_2w^-hw^v+h)= w(S*u_3(v,)u_2w^-h)w^v+h + S*u_3(v,)u_2w^-hw(w^v+h),by the L-1-propertyeq:L-derivative-w, = S*u_3(L-1v,)u_2w^v + (v+h)S*u_3(v,)u_2w^v-1,by the associativityeq:associativity_p1q, = _=S*u_3(,)(v,)u_2w^vẓ + (v+h)S*u_3(v,)u_2w^v-1=_=S*u_3(,)(v,)u_2zw^v-1ẓ - _=S*u_3(,)(v,)u_2(z-w)w^v-1ẓ + ⋯, applying the eq:ResidueSum to S*u_3(,)(v,)u_2zw^v-1ẓ, and noticing that its possible poles on ^1 are 0, ∞, and , = -(1T_=0+1T_=∞)S*u_3(,)(v,)u_2zw^v-1ẓ - _=S*u_3(,)(v,)u_2(z-w)w^v-1ẓ + ⋯, by the generating propertieseq:generating_M2,eq:generating_M3, and the associativityeq:associativity_p1q again, = - S*u_3(v,)L0u_2w^v-1 + S*L0u_3(v,)u_2w^v-1 - S*u_3(L0v,)u_2w^v-1 + (v+h)S*u_3(v,)u_2w^v-1= (-h_2+h_3-(v+h_1)+(v+h))S*u_3(v,)u_2w^v-1 =0. It remains to prove the recursive formulas eq:recursive_U3,eq:recursive_U2.For any homogeneous a∈ V^r, the function S*u_3(a,)⋯u_2z^m+r/T factors through ^1 and has only n+3 possible poles: 0, ∞, , and w.Expanding it at =0 and applying the lem:ResidueSum, we obtain0= (1T_=0+1T_=∞+∑_k=1^n_=_k+_=)S*u_3(a,)⋯u_2z^m+r/Tẓ.By eq:associativity_p1q,eq:associativity_p1p2,eq:generating_M2,eq:generating_M3, and eq:iota_wz-w, we have _=0S*u_3(a,)⋯u_2z^m+r/Tẓ =TS*u_3⋯am+r/Tu_2,_=∞S*u_3(a,)⋯u_2z^m+r/Tẓ =-TS*θ(am+r/T)u_3⋯u_2,_=_kS*u_3(a,)⋯(a^k,_k)⋯u_2z^m+r/Tẓ ι_=_k =∑_i≥ 0m+r/Ti_=_kS*u_3(a,)⋯(a^k,_k)⋯u_2z_k^m+r/T-i(z-z_k)^iẓ=∑_i≥ 0m+r/TiS*u_3⋯(aia^k,_k)⋯u_2z_k^m+r/T-i,_=S*u_3(a,)⋯(v,)u_2z^m+r/Tẓ ι_= =∑_i≥ 0m+r/Ti_=S*u_3(a,)⋯(v,)u_2w^m+r/T-i(z-w)^iẓ=∑_i≥ 0m+r/TiS*u_3⋯(aiv,)u_2w^m+r/T-i,where the involved summation is finite since aia^k=0 and aiv=0 for i≫ 0. Note that S*u_3⋯am+r/Tu_2 = 0if m+rT>a-1,S*θ(am+r/T)u_3⋯u_2 = 0if m+rT<a-1.Then it follows from eq:ResSumForS(ap) and <ref> that ι_=0S*u_3(a,)⋯u_2=∑_m≤a-1-r/Tz^-1-m-r/T(S*θ(am+r/T)u_3⋯u_2 -∑_k=1^n∑_i≥ 0m+r/Tiz_k^m+r/T-iS*u_3⋯(aia^k,_k)⋯u_2 -∑_i≥ 0m+r/Tiw^m+r/T-iS*u_3⋯(aiv,)u_2) =S*u_3o(a)⋯u_2z^-a -∑_k=1^n∑_i≥ 0∑_m+r/T≤a-1m+r/Tiz^-m-r/T-1z_k^m+r/T-iS*u_3⋯(aia^k,_k)⋯u_2 - ∑_i≥ 0∑_m+r/T≤a-1m+r/Tiz^-m-r/T-1w^m+r/T-iS*u_3⋯(aiv,)u_2= S*u_3o(a)⋯u_2z^- a=+∑_k=1^n∑_i≥ 0ι_=0F_ a-1+δ(r)+r/T,i(,_k)S*u_3⋯(aia^k,_k)⋯u_2=+∑_i≥ 0ι_=0F_ a-1+δ(r)+r/T,i(,)S*u_3⋯(aiv,)u_2(by <ref>),which implies eq:recursive_U3 by the injectivity of ι_=0.Finally, expanding S*u_3(a,)⋯u_2 at =∞ yields a proof of eq:recursive_U2. When N^3 is the contragridient module of M^3, the monomial property follows from the truncation property of intertwining operator I_S. However, in general we cannot assume that N^3 is a contragredient module of some M^3. See <ref>. By <ref>, there exists a linear map: [Σ_1(N^3, M^1, M^2)]⟶[Σ_1(N^3(0), M^1, M^2(0)]. If M^2 and N^3 are lowest-weight modules, thenis injective. Let S∈[Σ_1(N^3, M^1, M^2)] such that (S)=0. Then, S*u_3(v,)u_2=0 for all u_3∈ N^3(0), v∈ M^1 and u_2∈ M^2(0).Let M be the subspace M:=*v_2∈ M^2 S*u_3(v,)v_2=0 [ ] u_3∈ N^3(0), v∈ M^1 Then M^2(0)⊆ M.For any v_2∈ M, homogeneous a∈ V^r, and m∈, by eq:generating_M2,eq:recursive_U3, we have S*u_3(v,)am+r/Tv_2=_=0(S)*u_3(a,)(v,)v_2z^m+r/Tẓ=_=0(S)*u_3o(a)(v,)v_2z^m+r/T- aẓ +∑_i≥ 0F_ a-1+δ(r)+r/T,i(,)_=0(S)*u_3(aiv,)v_2z^m+r/Tẓ=0.This implies that M is a submodule of M^2 containing M^2(0), hence M=M^2. 
ThereforeS*u_3(v,)v_2=0, [] u_3∈ N^3(0), v∈ M^1, v_2∈ M^2.Similarly, using eq:generating_M3,eq:recursive_U2, we can showS*v_3(v,)v_2=0,[] v_3∈ N^3, v∈ M^1, v_2∈ M^2.Suppose all the (n+3)-point functions in S vanish.Then, for any a∈ V, by eq:associativity_p1q,ι_=S*v_3(a, )⋯(v,)v_2=∑_k∈(_=S*v_3⋯(a,)(v,)v_2(z-w)^kẓ)(z-w)^-k-1=∑_k∈S*v_3⋯(akv,)v_2(z-w)^-k-1=0,where the last equality follows from the inductive hypothesis.Since ι_= is injective, S*v_3(a,)⋯(v,)v_2=0,[] v_3∈ N^3, v∈ M^1, v_2∈ M^2.This shows the system of functions S vanishes.§.§ Extending restricted correlation functions from the bottom levelsIn this subsection, our objective is to establish an isomorphism between the spaces of correlation functions associated to the datum Σ_1(U^3, M^1, U^2) and Σ_1(M(U^3), M^1, M(U^2)) respectively. Here U^2 (resp. U^3) is an irreducible left (resp. right) A_g(V)-module, and M(-) assigns an A_g(V)-module (or A_g^-1(V)-module) to the associated generalized Verma module, as defined in <cit.>.Recall that there is an epimorphism from Ł_g(V)_0 to A_g(V) as Lie algebras. Hence any A_g(V)-module U can be regarded as an Ł_g(V)_0-module, where Ł_g(V) is the twisted Lie algebra Ł_g(V) in eq:def:LgV. Let U be an Ł_g(V)_-+Ł_g(V)_0-module by letting Ł_g(V)_- act trivially and consider the following induced module (Ł_g(V))⊗_(Ł_g(V)_-+Ł_g(V)_0)U.Then the generalized Verma module M(U) generated by U is defined to be the quotient space of the above module modulo the submodule generated by all the coefficients of the twisted Jacobi identity. <ref> shows that any system of correlation functions associated to the datum Σ_1(M(U^3), M^1, M(U^2)) restricts to one for the datum Σ_1(U^3, M^1, U^2).In the rest of <ref>, we prove that the converse is also true by adopting a similar method as in <cit.>. §.§.§ Extending U^2We denote the tensor product space T(Ł_g(V))⊗ U by M(U), where T(Ł_g(V)) is the tensor algebra of the twisted Lie algebra Ł_g(V).The space M(U) is spanned byb^1r_1/T+m_1⊗⋯⊗b^pr_p/T+m_p⊗ u, b^i∈ V^r_i, m_i∈, u∈ U, p∈. Given a system of correlation functions S U^3⊗Sym[V,⋯,V,M^1]⊗ U^2→Ø(∞Δ_n), we first extend it to U^3⊗Sym[V,⋯,V,M^1]⊗ M(U^2) byS*u_3⋯b^1r_1/T+m_1⊗⋯⊗b^pr_p/T+m_p⊗ u_2:=1T__+1=0⋯1T__+p=0S*u_3⋯u_2 z_+1^r_1/T+m_1⋯ z_+p^r_p/T+m_pẓ_̣+̣p̣⋯ẓ_̣+̣1̣.It is easy to show that such a family of functions S is well-defined using the L-1-derivative property eq:L-derivative-z. Define the radical of the family S by (S):=*v_2∈ M(U^2) S*u_3⋯v_2=0 for all u_3∈ U^3.Then define (U^2):=⋂_S(S), where S ranges over [Σ_1(U^3, M^1, U^2)].It is easy to see that for any v_2∈ M(U^2), we haveS*u_3⋯b^1r_1/T+m_1⊗⋯⊗b^pr_p/T+m_p⊗ v_2=1T__+1=0⋯1T__+p=0S*u_3⋯v_2 z_+1^r_1/T+m_1⋯ z_+p^r_p/T+m_pẓ_̣+̣p̣⋯ẓ_̣+̣1̣. Let S∈[Σ_1(U^3, M^1, U^2)], and let φ be as in eq:monomial. If φ=0 then S=0.Clearly, S*u_3(v, )u_2=0 for any u_3∈ U^3 and u_2∈ U^2. Suppose all the (n+3)-point functions S vanish. For any homogeneous a∈ V^r, by eq:recursive_U3,S*u_3(a,)⋯u_2 = S*u_3· [a]⋯u_2z^- a=+∑_k=1^n∑_i≥ 0F_ a-1+δ(r)+rT,i(,_k)S*u_3⋯(aia^k,_k)⋯u_2=+∑_i≥ 0F_ a-1+δ(r)+rT,i(,)S*u_3⋯(aiv,)u_2,and the right-hand side is 0 by the induction hypothesis.The following properties hold for (U^2): *br/T+m⊗(U^2)⊂(U^2) for all b∈ V^r and m∈.*U^2∩(U^2)=0, where U^2 is viewed as ⊗ U^2⊂ M(U^2).*br/T+m⊗ u_2∈(U^2) for all b∈ V^r and m∈ such that br/T+m<0. *b b-1⊗ u_2-1⊗ [b]· u_2∈(U^2) for b∈ V^0.lem:Rad-prop(1) is clear.For property lem:Rad-prop(2), suppose there exists a nonzero u_2∈ U^2∩(U^2). 
Then for any system of correlation functions S with a linear functional φ as in eq:monomial and any homogeneous a∈ V^r, u_3∈ U^3, v∈ M^1, and u_2∈ U^2, by eq:monomial,eq:recursive_U2, we have ⟨*|φ⟩u_3⊗ v⊗([a]· u_2) =S*u_3(v,)[a]· u_2w^ v=S*u_3(a,)(v,)u_2z^ aw^ v=-∑_i≥ 0F_ a-1+r/T,i(,)S*u_3(aiv,)u_2z^ aw^ v=0. On the other hand, by <cit.>, A_g(V) is a quotient algebra of A(V^0). Hence U^2=A(V^0)u_2, and φ vanishes on all the entire U^3⊗ M^1⊗ U^2. This implies S=0 by <ref>, which is a contradiction.For property lem:Rad-prop(3), given any u_2∈ U^2 and any homogeneous b∈ V^r with br/T+m<0, by eq:recursive_U3,eq:SfromU2toMU2, we haveS*u_3⋯br/T+m⊗ u_2= 1T_=0S*u_3⋯(b,)u_2z^r/T+mẓ= S*u_3· b⋯u_21T_=0z^- b+r/T+mẓ=+∑_k=1^n∑_i≥ 0S*u_3⋯(bia^k,_k)⋯u_21T_=0F_b-1+δ(r)+r/T,i(,_k)z^r/T+mẓ=+∑_i≥ 0S*u_3⋯(biv,)u_21T_=0F_b-1+δ(r)+r/T,i(,)z^r/T+mẓ=0,where the last equality follows from the fact that F_n,i(,)z^n is holomorphic at =0. For property lem:Rad-prop(4), given any u_2∈ U^2 and any homogeneous b∈ V^0, by eq:recursive_U2 we haveS*u_3⋯b b-1⊗ u_2= 1T_=0S*u_3⋯(b,)u_2z^ b-1ẓ= S*u_3⋯[b]· u_21T_=0z^-1ẓ=+∑_k=1^n∑_i≥ 0S*u_3⋯(bia^k,_k)⋯u_21T_=0F_b-1,i(,_k)z^ b-1ẓ=+∑_i≥ 0S*u_3⋯(biv,)u_21T_=0F_b-1,i(,)z^ b-1ẓ=S*u_3⋯[b]· u_2. In the following lemma, we use the same notation for the elements in M(U^2) and their images in M(U^2)/(U^2).For any a∈ V^r, b∈ V^s, v_2∈ M(U^2), and m,n,l∈, the following element of M(U^2)/(U^2) vanishes: -∑_i≥ 0li(-1)^i ar/T+m+l-i⊗bs/T+n+i⊗ v_2+∑_i≥ 0li(-1)^l+ibs/T+n+l-i⊗ar/T+m+i⊗ v_2 +∑_j≥ 0m+r/Tj(aj+lb)r+s/T+m+n-j⊗ v_2. Note that the summations in eq:compJacU2 are finite by property lem:Rad-prop(3). Indeed, for any system of correlation functions S, we haveS*u_3⋯∑_i≥ 0li(-1)^i ar/T+m+l-i⊗bs/T+n+i⊗ v_2=1T__+1=01T__+2=0∑_i≥ 0li(-1)^iS*u_3⋯(a,_+1)(b,_+2)v_2z_+1^r/T+m+l-iz_+2^s/T+n+iẓ_̣+̣2̣ẓ_̣+̣1̣=1T__+1=01T__+2=0 S*u_3⋯(a,_+1)(b,_+2)v_2(z_+1-z_+2)^lz_+1^r/T+mz_+2^s/T+nẓ_̣+̣2̣ẓ_̣+̣1̣ ∗ =1T__+2=0(1T__+1=0+__+1=_+2) S*u_3⋯(a,_+1)(b,_+2)v_2(z_+1-z_+2)^lz_+1^r/T+mz_+2^s/T+nẓ_̣+̣1̣ẓ_̣+̣2̣=1T__+2=01T__+1=0 S*u_3⋯(b,_+2)(a,_+1)v_2(z_+1-z_+2)^lz_+1^r/T+mz_+2^s/T+nẓ_̣+̣1̣ẓ_̣+̣2̣= + 1T__+2=0__+1=_+2 S*u_3⋯(a,_+1)(b,_+2)v_2(z_+1-z_+2)^lz_+1^r/T+mz_+2^s/T+nẓ_̣+̣1̣ẓ_̣+̣2̣ ∗∗ =1T__+2=01T__+1=0∑_i≥ 0li(-1)^l+i S*u_3⋯(b,_+2)(a,_+1)v_2z_+2^s/T+n+l-iz_+1^r/T+m+iẓ_̣+̣1̣ẓ_̣+̣2̣= + 1T__+2=0__+1=_+2∑_j≥0m+r/Tj S*u_3⋯(a,_+1)(b,_+2)v_2(z_+1-z_+2)^l+jz_+2^r+s/T+m+n-jẓ_̣+̣1̣ẓ_̣+̣2̣= ∑_i≥ 0li(-1)^i S*u_3⋯bs/T+n+l-i⊗ar/T+m+i⊗ v_2= + 1T__+2=0 S*u_3⋯(al+jb,_+2)v_2z_+2^r+s/T+m+n-jẓ_̣+̣2̣=S*u_3⋯∑_i≥ 0li(-1)^ibs/T+n+l-i⊗ar/T+m+i⊗ v_2= + S*u_3⋯∑_j≥ 0m+r/Tj(aj+lb)r+s/T+m+n-j⊗ v_2. Equality ∗ follows from the residue sum formula since both of the following functions_+1 ↦1T__+2=0S*u_3⋯(a,_+1)(b,_+2)v_2(z_+1-z_+2)^lz_+1^r/T+mz_+2^s/T+nẓ_̣+̣2̣ _+1 ↦ S*u_3⋯(a,_+1)(b,_+2)v_2(z_+1-z_+2)^lz_+1^r/T+mz_+2^s/T+nfactor through ^1, and the second fucntion has one extra possible pole at z_+1=z_+2. Furthermore, at any common pole _∗≠0 of these functions, we have __+1=_∗__+2=0⋯ = __+2=0__+1=_∗⋯since _∗ is away from the divisor _+1=_+2. Equality ∗∗ follows from expanding (z_+1-z_+2)^l at _+1=0 and z_+1^r/T+m at _+1=_+2, respectively. In particular, taking l=0 in <ref>, we have am+r/T⊗bn+s/T⊗ v_2-bn+s/T⊗am+r/T⊗ v_2≡∑_j≥ 0m+r/Tj(ajb)r+s/T+m+n-j⊗ v_2(U^2).Therefore, M(U^2)/(U^2) is a Ł_g(V)-module.Furthermore, lem:Rad-prop(4) allows us to unambiguously write u_2 for the image of b^1r_1/T+m_1⊗⋯⊗b^pr_p/T+m_p⊗ u_2 in M(U^2)/(U^2). 
Moreover, it is clear that M(U^2)/(U^2) is spanned by the following elements: u_2,where u_2∈ U^2, b^i∈ V^r_i, m_i∈ for all i, and b^1r_1/T+m_1≥⋯≥b^pr_p/T+m_p. Denote M(U^2)/(U^2) by M(U^2).Define a vertex operator byY_M(U^2): V→(M(U^2))[[z,z^-1]],Y_M(U^2)(a,z) = ∑_n∈an/T z^-n/T-1,where a∈ V and a(n/T)∈Ł_g(V) for all n∈.Furthermore, we introduce a gradation on M(U^2) by ( u_2) :=∑_i=1^p b^ir_i/T+m_i,where b_i∈ V^r_i, m_i∈, and u_2∈ U^2.Then M(U^2)=⊕_m∈M(U^2)(m/T) by the type of its spanning elements (<ref>) and lem:Rad-prop(3). The pair (M(U^2),Y_M(U^2)) in <ref> defines an admissible g-twisted module of conformal weight h_2.By lem:Rad-prop(1), the vertex operator Y_M(U^2)(-,z) is well-defined.Given a∈ V^r, by the definition of gradation 3.24', we have ar/T+nM(U^2)(mT)⊂M(U^2)(mT+ar/T+n).Hence M(U^2)(mT)=0 for m<0 by lem:Rad-prop(3).This shows ar/T+nv_2=0 for n≫ 0. The Jacobi identity of Y_M(U^2) follows from <ref>. By adopting a similar argument as <cit.>, together with the assumption that [] acts as h_2 𝕀 on U^2, we can show that Y_M(U^2) satisfies the vacuum property, and M(U^2)=⊕_n∈M(U^2))(n/T) with the bottom level M(U^2)(0)=U^2.Moreover, for any a∈ V^0, its action on M(U^2)(0)=U^2 agrees with the operator o(a)=a a-1:=_z z^ a-1Y_M(U^2)(a,z), and each M(U^2)(n/T) is an eigenspace of L0=o() with the eigenvalue n/T+h_2.So far, we have extended S to U^3⊗Sym[V,⋯,V,M^1]⊗M(U^2).The last factor can be further extended to M(U^2) since M(U^2) is a quotient module of the generalized Verma module M(U^2).Now let M^(U^3)=U^3⊗ T(Ł_g(V)). By adopting a slight modification of the argument in this subsection, we can extend S to M^(U^3)⊗Sym[V,⋯,V,M^1]⊗M(U^2) byS*u_3⊗b^pr_p/T+m_p⊗⋯⊗b^1r_1/T+m_1⋯v_2:=(-1T__+1=∞)⋯(-1T__+p=∞)S*u_3⋯u_2 z_+1^r_1/T+m_1⋯ z_+p^r_p/T+m_pẓ_̣+̣p̣⋯ẓ_̣+̣1̣.Define ^(U^3) as the intersection of all ^(S), where ^(S):=*v'_3∈ M^(U^3) S*v'_3⋯v_2=0 for all v_2∈M(U^2).Then M^(U^3):=M^(U^3)/^(U^3) is a right Ł_g(V)-module, and so a left Ł_g^-1(V)-module via the pushout along θŁ_g(V)→Ł_g^-1(V). Namely, am/T v_3:=v_3·θ(am/T), where the anti-isomorphism θ is defined in eq:def:theta.Furthermore, M^(U^3) is an admissible g^-1-twisted module of conformal weight h_3 whose vertex operator is given byY_M^(U^3)(a,z)v'_3 =∑_n∈an/Tv'_3 z^-n/T-1 =∑_n∈ v'_3·θ(an/T) z^-n/T-1,where a∈ V and v'_3∈M^(U^3).Then S is extended to M^(U^3)⊗Sym[V,⋯,V,M^1]⊗M(U^2).The first factor can be further extended to generalized Verma module M(U^3).The resulting system of correlation functions S satisfies the twisted genus-zero property associated to the datum Σ_1(M^(U^3), M^1, M(U^2)). It suffices to show the truncation property.We first show that S*u_3(v,)v_2w^n is holomorphic at =0 for all u_3∈ U^3, when n≥ v +v_2. Since M(U^2) is a V-module by <ref>, we may assume v_2=ar/T+mu_2 for some homogeneous a∈ V^r, m∈, u_2∈ U^2, and v_2≥ 0. Then S*u_3(v,)ar/T+mu_2w^n=1T_=0S*u_3(a,)(v,)u_2z^r/T+mw^nẓ=1TS*u_3· [a](v,)u_2w^n _=0z^r/T+m- aẓ=+1T∑_i≥ 0S*u_3(aiv,)u_2w^n _=0F_ a-1+δ(r)+rT, i(, )z^r/T+mẓ lem:ExpOfFpq =1TS*u_3· [a](v,)u_2w^n_=0z^r/T+m- aẓ=-1T∑_i≥ 0r/T+miS*u_3(aiv,)u_2w^r/T+m+n-i=1T⟨*|φ⟩(u_3· [a])⊗ v⊗ u_2w^n- v_=0z^r/T+m- aẓ=-1T∑_i≥ 0r/T+mi⟨*|φ⟩u_3⊗(aiv)⊗ u_2w^n- v_2 - v,which is holomorphic at =0 if n≥ v+ v_2.It remains to show S*v'_3(v,)v_2w^n is holomorphic at =0 for all v'_3∈M^(U^3), when n≥ v +v_2. We may assume v'_3=θ(ar/T+m)u_3 for some homogeneous a∈ V^r, m∈, u_3∈ U^3, and - a+r/T+m+1≥ 0. 
Then S*θ(ar/T+m)u_3(v,)v_2w^n=-1T_=∞S*u_3(a,)(v,)v_2z^r/T+mw^nẓ=-1TS*u_3· [a](v,)v_2w^n _=∞z^r/T+m- aẓ=-1T∑_i≥ 0S*u_3(aiv,)v_2w^n _=∞F_ a-1+δ(r)+rT, i(, )z^r/T+mẓ lem:ExpOfFpq = -1TS*u_3· [a](v,)v_2w^n_=∞z^r/T+m- aẓ=+1T∑_i≥ 0r/T+miS*u_3(aiv,)v_2w^r/T+m+n-i.The first term is holomorphic at =0 since u_3· [a]∈ U^3 and n≥ v+ v_2. The second term is holomorphic at =0 since aiv+ v_2 =a +v -i -1 + v_2 ≤r/T+m + n - i.By <ref>, the extended system of g-twisted correlation functions S∈M(U^3) ⊗Sym[V,⋯,V,M^1]⊗M(U^2) also satisfies the twisted genus-zero property associated to the datum Σ_1(M(U^3), M^1, M(U^2)). Now we have our main theorem in this section: Let M^1 be an admissible untwisted module of conformal weight h_1, and let U^2 (resp. U^3) be a left (resp. right) A_g(V)-module with [] acting as h_2 (resp. h_3). Put h=h_1+h_2-h_3.Then any system of g-twisted restricted correlation functions associated to the datum Σ_1(U^3, M^1, U^2) can be extended to one associated to the datum Σ_1(M^(U^3), M^1, M(U^2)) and one associated to the datum Σ_1(M(U^3), M^1, M(U^2)). Moreover, we have [Σ_1(U^3, M^1, U^2)] ≅[Σ_1(M^(U^3), M^1, M(U^2))] ≅[Σ_1(M(U^3), M^1, M(U^2))].It follows from <ref> and <ref>. Let M^2 and M^3 be admissible g-twisted V-modules of conformal weight h_2 and h_3, and assume M^2 and (M^3)' are generalized Verma modules. Then[Σ_1(M^3(0)^∗, M^1, M^2(0))]≅≅.It follows from <ref> and <ref>. § RECONSTRUCTING G-TWISTED RESTRICTED CORRELATION FUNCTIONS§.§ g-twisted restricted conformal blocks (first definition)We introduce the following notion of space of restricted coinvariants and conformal blocks: Let M^1 be an admissible untwisted module of conformal weight h_1, and U^2 (resp. U^3) a left (resp. right) A_g(V)-module on which [] acts as h_2 𝕀 (resp. h_3 𝕀). Put h=h_1+h_2-h_3.Let J be the subspace of U^3⊗ M^1⊗ U^2 spanned by the elementsu_3⊗ (L-1+L0-h_1+h)v⊗ u_2,u_3· [a]⊗ v⊗ u_2-∑_j≥ 0 aj u_3⊗aj-1v⊗ u_2, a∈ V^0,u_3⊗ v⊗ [a]· u_2-∑_j≥ 0 a-1ju_3⊗aj-1v⊗ u_2, a∈ V^0,∑_j≥ 0 a-1+r/Tj u_3⊗aj-1v⊗ u_2, a∈ V^r, r≠ 0,where u_3∈ U^3, v∈ M^1, and u_2∈ U^2.We call the quotient space (U^3⊗ M^1⊗ U^2)/J the space of g-twisted restricted coinvariants and a linear functional φ vanishing on J a g-twisted restricted conformal block associated to U^3, M^1, and U^2. We denote the vector space of g-twisted restricted conformal blocks by [U^3, M^1, U^2]. The relations f-relation1,f-relation2,f-relation3,f-relation4 are obtained from our later calculation of the twisted correlation functions. In fact, these relations are also compatible with the definitions of the usual space of (twisted) coinvariants and conformal blocks of VOAs associated to the datum (ℙ^1_, 0,1,∞, M^2, M^1,M^3) in <cit.>. We can obtain relations f-relation1,f-relation2,f-relation3,f-relation4 by restricting M^2 and M^3 to their bottom levels. We will give a more general definition of twisted (restricted) conformal block and discuss it in more detail in a subsequent paper <cit.>. Observe that ∑_j≥ 0 ajaj-1v=a∗ v and ∑_j≥ 0 a-1jaj-1v=v∗ a for a∈ V^0 and v∈ M^1, where a∗ v and v∗ a are the A(V^0)-bimodule actions defined in <cit.>.Later on, we will show that the vector space [U^3, M^1, U^2] is indeed dual to U^3⊗_A_g(V) B_g, λ(M^1)⊗_A_g(V)U^2, where B_g, λ(M^1) is a quotient of A_g(M^1) constructed in <cit.> that generalizes B_λ(M^1) in <cit.> and λ=h_2-h_3. Any systme of correlation functions S∈[Σ_1(U^3, M^1, U^2)] gives rise to a meromorphic family of linear functionas φ_S()_∈ in [U^3, M^1, U^2]. 
Given any S∈[Σ_1(U^3, M^1, U^2)], we define a meromorphic family oflinear functionas φ_S() on U^3⊗ M^1⊗ U^2 byφ_S() u_3ø vø u_2∈ U^3⊗ M^1⊗ U^2 ⟼ S*u_3(w^L(0)-h_1v,)u_2.We show φ_S() vanishes on J. Vanishing of φ_S() on f-relation1 follows from the L-1-derivative property eq:L-derivative-w.For homogeneous a∈ V^r in J, we have ∑_j≥ 0 a-1+δ(r)+r/Tj⟨*|φ_S()⟩u_3øaj-1vø u_2= ∑_j≥ 0 a-1+δ(r)+r/TjS*u_3(aj-1v,)u_2w^ v+ a-j= ∑_j≥ 0 a-1+δ(r)+r/Tj_=S*u_3(a,)(v,)u_2w^ v+ a-j(z-w)^j-1ẓ= ∑_j≥ 0 a-1+δ(r)+r/Tj_=S*u_3·[a](v,)u_2w^ v+ a-j(z-w)^j-1z^- aẓ+∑_j≥ 0 a-1+δ(r)+r/Tj_=∑_i≥ 0F_ a-1+δ(r)+r/T,iS*u_3(aiv,)u_2=· w^ v+ a-j(z-w)^j-1z^- aẓ= ⟨*|φ_S()⟩u_3· [a]ø vø u_2+∑_j≥ 0∑_i≥ 0∑_l=0^i a-1+δ(r)+r/Tj a-1+δ(r)+r/Ti-l- a+1-δ(r)-r/Tl+1-j=·⟨*|φ_S()⟩u_3øaivø u_2= ⟨*|φ_S()⟩u_3· [a]ø vø u_2.The last equality follows from the identity ∑_j≥ 0 a-1+δ(r)+r/Tj- a+1-δ(r)-r/Tl+1-j=0 for l≥ 0.This shows that φ_S() vanishes on f-relation2 (resp. f-relation4) when r=0 (resp. r≠ 0). The vanishing of φ_S() on f-relation3 can be proved by a similar method using the other recursive formula.Hence φ_S() is in [U^3, M^1, U^2] for all ∈. By the monomial property of S, the family φ_S()_∈ is constant. In particular, it is meromorphic.[U^3, M^1, U^2]≅[Σ_1(U^3, M^1, U^2)]. Given any S∈[Σ_1(U^3, M^1, U^2)], by <ref>, we have a constant family φ_S()_∈ of elements of [U^3, M^1, U^2].The rest of <ref> and the whole <ref> are dedicated to proving the opposite direction, i.e., given any g-twisted restricted conformal block φ, one can reconstruct a system of g-twisted correlation functions S_φ, such that φ_S_φ=φ and S_φ_S=S. §.§ Constructions of 3-point, 4-point, and 5-point functionsIn this subsection, we construct the g-twisted 3-point, 4-point, and 5-point functions based on a given restricted conformal block φ∈[U^3, M^1, U^2]. The general (n+3)-point functions can be inductively defined using the recursive formulas.Define the 3-point function S_M U^3⊗ M^1⊗ U^2→[w^±1/T] by the formula: u_3⊗ v⊗ u_2 ⟼ S_M*u_3(v,)u_2:=⟨*|φ⟩u_3⊗ w^h_1-L(0)v⊗ u_2. Next, we can define the 4-point functions S_VM^L: U^3⊗ V⊗ M^1⊗ U^2→Ø(∞Δ_1) and S_MV^R: U^3⊗ M^1⊗ V⊗ U^2→Ø(∞Δ_1) as follows:For homogeneous a∈ V^r, v∈ M^1, u_3∈ U^3 and u_2∈ U^2, define the 4-point functions byS_VM^L*u_3(a,)(v,)u_2: =S_M*u_3· [a](v,)u_2z^- a+∑_i≥ 0F_ a-1+δ(r)+rT, i (,)S_M*u_3(aiv,)u_2,S_MV^R*u_3(v,)(a,)u_2: =S_M*u_3(v,)[a]· u_2z^- a+∑_i≥ 0F_ a-1+rT, i (,)S_M*u_3(aiv,)u_2.Before we move on to construct 5-point functions,we first prove the following lemma which states that the 4-point functions we constructed satisfy the locality:S_VM^L*u_3(a,)(v,)u_2=S_MV^R*u_3(v,)(a,)u_2. If r≠ 0, then [a]=0 and δ(r)=0. Hence S_VM^L*u_3(a,)(v,)u_2 agrees with S_MV^R*u_3(a,)(v,)u_2.Now suppose a∈ V^0. Recalling <ref>, F_ a-1,i(,) - F_ a,i(,) =a-1iz^- aw^ a-i-1,we thus haveS_VM^L*u_3(a,)(v,)u_2-S_MV^R*u_3(v,)(a,)u_2= S_M*u_3·[a](v,)u_2z^- a-S_M*u_3(v,)[a]· u_2z^- a+∑_i≥ 0(F_ a-1, i (,)-F_ a, i (,))S_M*u_3(aiv,)u_2= ⟨*|φ⟩u_3·[a]⊗ v⊗ u_2z^- aw^- v-⟨*|φ⟩u_3⊗ v⊗ [a]· u_2z^- aw^- v+∑_i ≥ 0 a-1iz^- aw^ v⟨*|φ⟩u_3⊗aiv⊗ u_2= 0.The last equality follows from f-relation2 and f-relation3. 
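As a quick consistency check, take a=\mathbf{1} (the vacuum vector) in the definition of S_VM^L: since \operatorname{wt}\mathbf{1}=0, \mathbf{1}\in V^0, [\mathbf{1}] is the identity of A_g(V), and \mathbf{1}(i)v=0 for all i\ge 0, the sum over i vanishes term by term and
\[
S_{VM}^{L}\big\langle u_3(\mathbf{1},z)(v,w)u_2\big\rangle
= S_M\big\langle u_3\cdot[\mathbf{1}](v,w)u_2\big\rangle\, z^{0}
= S_M\big\langle u_3(v,w)u_2\big\rangle,
\]
and likewise for S_MV^R; this is the vacuum property required of the restricted correlation functions.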
For 5-point functions, we define S_VVM^L, S_VMV^L by S_VVM^L*u_3(a^1,_1)(a^2,_2)(v,)u_2=S_VMV^L*u_3(a^1,_1)(v,)(a^2, _2)u_2:=S*u_3· [a^1](a^2, _2)(v,)u_2z^- a^1+∑_i≥ 0F_ a^1-1+δ(r)+rT,i (_1,_2)S*u_3(a^1ia^2,_2)(v,)u_2+∑_i≥ 0F_ a^1-1+δ(r)+rT,i (_1,)S*u_3(a^2, _2)(a^1iv,)u_2,and S_MVV^R, S_VMV^R byS_VMV^R*u_3(a^2,_2)(v,)(a^1, _1)u_2=S_MVV^R*u_3(v,)(a^2,_2)(a^1,_1)u_2:=S*u_3(a^2, _2)(v,)[a^1]· u_2z^- a^1+∑_i≥ 0F_ a^1-1+rT,i (_1,_2)S*u_3(a^1ia^2,_2)(v,)u_2+∑_i≥ 0F_ a^1-1+rT,i (_1,)S*u_3(a^2, _2)(a^1iv,)u_2,where a^1∈ V^r, a^2∈ V^s are homogeneous, v∈ M^1, u_3∈ U^3, u_2∈ U^2, and S is the 4-point function defined in <ref>.For the well-definedness of the 5-point functions, we need to show that S_VMV^L*u_3(a^1,_1)(v,)(a^2, _2)u_2=S_VMV^R*u_3(a^1,_1)(v,)(a^2, _2)u_2,and for the proof of locality of the 5-point functions, we need to show that S_VMV^L*u_3(a^1,_1)(v,)(a^2, _2)u_2=S_VMV^R*u_3(a^2,_2)(v,)(a^1, _1)u_2,S_VVM^L*u_3(a^1,_1)(a^2, _2)(v,)u_2=S_VVM^L*u_3(a^2,_2)(a^1, _1)(v,)u_2,S_MVV^R*u_3(v,)(a^2,_2)(a^1, _1)u_2=S_MVV^R*u_3(v,)(a^1,_1)(a^2, _2)u_2. Assume com1 and com2 hold, then Well-definedness holds. Assume com1 and Well-definedness hold, then com3 holds. The proof of the first part is similar to <cit.>. Now assume com1 and Well-definedness hold, then [-1.4] S_MVV^R*u_3(v,)(a^1,_1)(a^2, _2)u_2=S_VMV^R*u_3(a^1,_1)(v,)(a^2, _2)u_2=S_VMV^L*u_3(a^2,_2)(v,)(a^1, _1)u_2=S_VMV^R*u_3(a^2,_2)(v,)(a^1, _1)u_2=S_MVV^R*u_3(v,)(a^2,_2)(a^1, _1)u_2,where the first and last equality follow from ExpansionFromRight, the second equality follows from com1, and the third equality follows from Well-definedness. Thus com3 holds.By <ref>, to show S^L_VVM, S^L_VMV,S^R_MVV, and S^R_VMV above satisfies locality, it suffices to show com1 and com2 hold. For homogeneous a^1∈ V^r, a^2∈ V^s, v∈ M^1, u_2∈ U^2, and u_3∈ U^3, com1 and com2 hold. The proof will be given at the end of this subsection. Next we show the L-1-derivative property of the twisted 3-point, 4-point and 5-point functions. For a^1, a^2∈ V, we have: S*u_3(L-1v,)u_2w^-h =w(S*u_3(v,)u_2w^-h)S*u_3(L-1a^1,_1)(v,)u_2 =z_1S*u_3(a^1,_1)(v,)u_2, S*u_3(a^1,_1)(L-1v,)u_2w^-h =w(S*u_3(a^1,_1)(v,)u_2w^-h) ,S*u_3(L-1a^1,_1)(a^2,_2)(v,)u_2 =z_1 S*u_3(a^1,_1)(a^2,_2)(v,)u_2, S*u_3(a^1,_1)(a^2,_2)(L-1v,)u_2w^-h =w(S*u_3(a^1,_1)(a^2,_2)(v,)u_2w^-h). We first show 5point-L(-1)a. By Fpq, It is straightforward to show thatwF_n, i(_1,)=(i+1) F_n,i+1(_1,), z_1 F_n,i(_1,)= -(i+1)F_n+1,i+1(_1,).Note that [L-1a+L0a]=0, and (L-1a)i=-iai-1 for a∈ V or a∈ M^1. Suppose a^1∈ V^r, we haveS*u_3(L-1a^1,_1)(a^2,_2)(v,)u_2= S*u_3(a^2,_2)(v,)[L-1a^1]· u_2z_1^- a^1-1+∑_i≥ 0F_ a^1+rT,i(_1, )S*u_3((L-1a^1)ia^2, _2)(v, )u_2+∑_i≥ 0F_ a^1+rT,i(_1, )S*u_3(a^2, _2)((L-1a^1)iv, )u_2= - a^1S*u_3(a^2,_2)(v,)[a^1]· u_2z_1^- a^1-1-∑_i≥ 0iF_ a^1+rT,i(_1, )S*u_3(a^1i-1a^2, _2)(v, )u_2-∑_i≥ 0iF_ a^1+rT,i(_1, )S*u_3(a^2, _2)(a^1i-1v, )u_2 = z_1S*u_3(a^1,_1)(a^2,_2)(v,)u_2.Thus 5point-L(-1)a holds. It's easy to see that the 4-point and 5-point functions we defined satisfy the vacuum property in <ref>. Letting a^2= in 5point-L(-1)a, we get 4point-L(-1)a.For 3point-L(-1)v, note that ⟨*|φ⟩u_3⊗L-1v⊗ u_2=-⟨*|φ⟩u_3⊗ ( v+h)v⊗ u_2, henceS*u_3(L-1v,)u_2w^-h=⟨*|φ⟩u_3⊗L-1v⊗ u_2w^- v-1-h=-⟨*|φ⟩u_3⊗ ( v+h)v⊗ u_2w^- v-1-h=w(S*u_3(v,)u_2w^-h). For 4point-L(-1)v, note that a^1iL-1v=L-1a^1iv+ia^1i-1v. 
Therefore, S*u_3(a^1,_1)(L-1v,)u_2w^-h=S*u_3(L-1v,)[a^1]· u_2w^-h+∑_i≥ 0F_ a^1-1+rT,i(_1, )S*u_3(a^1iL-1v, )u_2w^-h = w(S*u_3(a^1,_1)(v,)u_2w^-h)+∑_i≥ 0F_ a^1-1+rT,i(_1, )w(S*u_3(a^1iv, )u_2w^-h)+∑_i≥ 0w(F_ a^1-1+rT,i(_1, ))S*u_3(a^1iv, )u_2w^-h= w(S*u_3(a^1,_1)(v,)u_2w^-h).5point-L(-1)v can be proved in a similar way. We conclude this subsection by giving a proof of <ref>. We first show com1. Suppose a^1∈ V^r and a^2∈ V^s. If r=s=0, the proof follows the same suit as the argument in Section 4.2 in <cit.>. Note that the only property of φ we need to use to prove com1 is the equality⟨*|φ⟩u_3·[a]ø vø u_2-⟨*|φ⟩u_3ø vø [a]· u_2=∑_j≥ 0 a-1j⟨*|φ⟩u_3øaj-1vø u_2,which, in our case, follows from f-relation2and f-relation3. If r≠ 0, then com1 holds by ExpansionFromLeft and ExpansionFromRight. So it suffices to deal with the case where r=0 and s≠ 0.Similar to Lemma 4.13 in <cit.>, we have the following formula on module M^1: ∑_i,j≥ 0 a^1-1jsT+ a^2-1+ni(a^1ja^2i-a^2ia^1j)v= ∑_i,j≥ 0 a^1-1jsT+ a^1-j-2+ a^2+ni(a^1ja^2)iv,where a^1∈ V^0,a^2∈ V^s, and n∈. For u_3∈ U^3, u_2∈ U^2, and v∈ M^1, we write A :=S*u_3· [a^1](v,)(a^2,_2)u_2z_1^- a^1-S*u_3(a^2,_2)(v,)[a^1]· u_2z_1^- a^1,B :=∑_j≥ 0(F_ a^1,j(_1,)-F_ a^1-1,j(_1,))Su_3(a^1jv,)(a^2,_2)u_2,C :=∑_j≥ 0(F_ a^1,j(_1,_2)-F_ a^1-1,j(_1,_2))Su_3(v,)(a^1ja^2,_2)u_2. By <ref>, f-relation2, f-relation3, eq:def:3-point, ExpansionFromLeft, and ExpansionFromRight, we can derive the following expressions for A, B, and C: A =∑_i,j≥ 0 a^1-1jF_ a^2-1+sT, i(_2,) ⟨*|φ⟩u_3⊗a^1ja^2iv⊗ u_2 z_1^- a^1 w^- a^2+i+1- v,B =-∑_i,j≥ 0 a^1-1j F_ a^2-1+sT,i(_2,)⟨*|φ⟩u_3⊗a^2ia^1jv⊗ u_2 z_1^- a^1 w^- a^2+i+1- v,C =-∑_j,i≥ 0 a^1-1j F_ (a^1ja^2)-1+sT,i(_2,)⟨*|φ⟩u_3⊗(a^1ja^2)iv⊗ u_2 z_1^- a^1z_2^ a^1-j-1=· w^- a^2- a^1+j+1+i+1- v. Then, by 4.21ι__2=∞(A+B+C)= ∑_i,j≥ 0∑_n≥ 0 a^1-1jsT+ a^2-1+ni⟨*|φ⟩u_3⊗ (a^1ja^2iv-a^2ia^1jv)⊗ u_2=· z_1^- a^1 z_2^- a^2-sT-n w^sT+n- v-∑_i,j≥ 0∑_n≥ 0 a^1-1jsT+ a^1-j-2+ a^2+ni⟨*|φ⟩u_3⊗(a^1ja^2)iv⊗ u_2=· z_1^- a^1 z_2^- a^2-sT-n w^sT+n- v = 0.Thus A+B+C=0, and com1 holds.Now we show com2. Again, the case r=s=0 has been dealt with in <cit.>. It suffices to show that com2 holds for the following three cases: (1) r=0, s≠ 0, (2) r≠ 0, s≠ 0, and r+s≠ T, or (3) r≠ 0, s≠ 0, and r+s=T. Similar to (2.2.10) in <cit.>, we can rewrite the recursive formula 4.7 as follows:Su_3(a,)(v,)u_2=S*u_3·[a](v,)u_2)z^- a+_x(z^- a+1-δ(r)-r/T(w+x)^ a-1+δ(r)+r/T/z-w-x S*u_3(Y_M^1(a,x)v,)u_2),for a∈ V^r homogeneous. 
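This formal-residue form is equivalent to the original form written via the functions F_{n,i}: putting n=\operatorname{wt} a-1+\delta(r)+r/T and expanding (w+x)^{n} and (z-w-x)^{-1} in nonnegative powers of x, the coefficient of x^{i} is
\[
\frac{1}{i!}\Big(\frac{\partial}{\partial x}\Big)^{i}\,\frac{z^{-n}(w+x)^{n}}{z-w-x}\bigg|_{x=0}
=\frac{z^{-n}}{i!}\Big(\frac{\partial}{\partial w}\Big)^{i}\,\frac{w^{n}}{z-w}
=F_{n,i}(z,w),
\]
so taking \operatorname{Res}_{x} against Y_{M^1}(a,x)v=\sum_{i}a(i)v\,x^{-i-1} reproduces the sum \sum_{i\ge 0}F_{n,i}(z,w)\,S\langle u_3(a(i)v,w)u_2\rangle.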
For the case where r=0 and s≠ 0, by <ref> and recursive-other-form, we can express the left-hand side of com2 as follows: S^L_VVM*u_3(a^1,_1)(a^2,_2)(v,)u_2=S*u_3·[a^1](v,)(a^2,_2)u_2z_1^- a^1_(D1)+_x_1(z_1^- a^1(w+x_1)^ a^1/z_1-w-x_1) S*u_3(Y_M^1(a^1,x_1)v,) (a^2,_2)u_2)_(D2)+_x_0(z_1^- a^1(z_2+x_0)^ a^1/z_1-z_2-x_0) S*u_3(v,) (Y(a^1,x_0)a^2,_2)u_2_(D3) = (D1)+(D2)+(D3).By 4.8, recursive-other-form, and the Jacobi identity of Y_M^1, we can rewrite (D1),(D2), and (D3) as follows: (D1)= _x_2(z_2^- a^2+1-s/T(w+x_2)^ a^2+s/T-1/z_2-w-x_2) Su_3·[a^1](Y_M(a^2,x_2)v,)u_2 z_1^- a^1(D2)= _x_1_x_2(z_1^- a^1(w+x_1)^ a^1/z_1-w-x_1·z_2^- a^2+1-s/T(w+x_2)^ a^2+s/T-1/z_2-w-x_2) · S*u_3(Y_M^1(a^2,x_2)Y_M^1(a^1,x_1)v,)u_2(D3)= _x_0_x_2∑_n∈z_1^- a^1(z_2+x_0)^ a^1/z_1-z_2-x_0·(w+x_2)^ a^1-n-1+ a^2+s/T-1/z_2-w-x_2· z_2^- a^1+n+1- a^2+1-s/Tx_0^-n-1S*u_3(Y_M^1(a^1na^2,x_2)v,)u_2= _x_1,x_2z_2^- a^2+1-s/T+1z_1^- a^1(w+x_2)^ a^2+s/T-1(w+x_1)^ a^1/(z_2-w-x_2)(z_1(w+x_2)-z_2(w+x_1))· S*u_3(Y_M^1(a^1,x_1)Y_M^1(a^2,x_2)v,)u_2- _x_1,x_2z_2^- a^2+1-s/T+1z_1^- a^1(w+x_2)^ a^2+s/T-1(w+x_1)^ a^1/(z_2-w-x_2)(z_1(w+x_2)-z_2(w+x_1))· S*u_3(Y_M^1(a^2,x_2)Y_M^1(a^1,x_1)v,)u_2.On the other hand, by ExpansionFromLeft, we can write the right-hand side of com2 asS_VVM^L*u_3(a^2,_1)(a^1,_2)(v,)u_2= _x_2(z_2^- a^2+1-s/T(w+x_2)^ a^2+s/T-1/z_2-w-x_2) S*u_3(Y_M^1(a^2,x_2)v,)(a^1,_1)u_2_(E1)+_x_0(z_2^- a^2+1-s/T(z_1+x_0)^ a^2+s/T-1/z_2-z_1-x_0) S*u_3(v,)(Y(a^2,x_0)a^1,_1)u_2_(E2)= (E1)+(E2).By 4.7, recursive-other-form, and the Jacobi identity, we can rewrite (E1) and (E2) as(E1)= _x_2(z_1^- a^1z_2^- a^2+1-s/T(w+x_2)^ a^2+s/T-1/z_2-w-x_2) S*u_3·[a^1](Y_M^1(a^2,x_2)v,)u_2+_x_2_x_1(z_2^- a^2+1-s/T(w+x_2)^ a^2+s/T-1/z_2-w-x_2·z_1^- a^1(w+x_1)^ a^1/z_1-w-x_1)· S*u_3(Y_M^1(a^1,x_1)Y_M^1(a^2,x_2)v,)u_2,(E2)= _x_0_x_2∑_n∈z_2^- a^2+1-s/T(z_1+x_0)^ a^2+s/T-1/z_2-z_1-x_0·(w+x_2)^ a^1-n-1+ a^2+s/T-1/z_1-w-x_2· z_1^- a^1+n+1- a^2+1-s/TS*u_3(Y_M^1(a^2na^1,x_2)v,w)u_2x_0^-n-1= _x_2,x_1z_1^- a^1+1z_2^- a^2+1-s/T(w+x_1)^ a^1(w+x_2)^ a^2+s/T-1/(z_1-w-x_1)(z_2(w+x_1)-z_1(w+x_2))· S*u_3(Y_M^1(a^2,x_2)Y_M^1(a^1,x_1)v,)u_2-_x_2,x_1z_1^- a^1+1z_2^- a^2+1-s/T(w+x_1)^ a^1(w+x_2)^ a^2+s/T-1/(z_1-w-x_1)(z_2(w+x_1)-z_1(w+x_2))· S*u_3(Y_M^1(a^1,x_1)Y_M^1(a^2,x_2)v,)u_2.Thus we have (D1)+(D2)+(D3)-(E1)-(E2)= _x_1,x_2z_1^- a^1z_2^- a^2+1-s/T(w+x_1)^ a^1(w+x_2)^ a^2+s/T-1/z_1(w+x_2)-z_2(w+x_1)·(z_2/z_2-w-x_2-z_1/z_1-w-x_1)S*u_3(Y_M^1(a^1,x_1)Y_M^1(a^2,x_2)v,)u_2- _x_1,x_2z_1^- a^1z_2^- a^2+1-s/T(w+x_1)^ a^1(w+x_2)^ a^2+s/T-1/z_1(w+x_2)-z_2(w+x_1)·(z_2/z_2-w-x_2-z_1/z_1-w-x_1) S*u_3(Y_M^1(a^2,x_2)Y_M^1(a^1,x_1)v,)u_2+ _x_1_x_2(z_1^- a^1(w+x_1)^ a^1/z_1-w-x_1·z_2^- a^2+1-s/T(w+x_2)^ a^2+s/T-1/z_2-w-x_2)· S*u_3(Y_M^1(a^2,x_2)Y_M^1(a^1,x_1)v,)u_2- _x_2_x_1(z_2^- a^2+1-s/T(w+x_2)^ a^2+s/T-1/z_2-w-x_2·z_1^- a^1(w+x_1)^ a^1/z_1-w-x_1)· S*u_3(Y_M^1(a^1,x_1)Y_M^1(a^2,x_2)v,)u_2= 0.The proof of com2 for the case when r≠ 0, s≠ 0, and r+s≠ T is similar to the case when r=0 and s≠ 0, we omit it. Now, we show com2 for the case r≠ 0, s≠ 0, and r+s=T. 
By adopting a similar computation as above, using the fact that r/T=1-s/T, we can express the left-hand side of com2 as follows: S_VVM^L*u_3(a^1,_1)(a^2,_2)(v,)u_2= _x_1(z_1^- a^1+1-r/Tz_2^- a^2-s/T+1(1+x_1)^ a^1+r/T-1/z_1-z_2-z_2x_1)S*u_3·[Y(a^1,x_1)a^2](v,)u_2 +_x_1,x_2z_1^- a^1+1-r/T(w+x_1)^ a^1+r/T-1 z_2^- a^2-s/T+1(w+x_2)^s/T+ a^2/((w+x_2)z_1-(w+x_1)z_2)(z_2-w-x_2)· S*u_3(Y_M^1(a^1,x_1)Y_M^1(a^2,x_2)v,)u_2 -_x_1,x_2z_1^- a^1+1-r/T(w+x_1)^ a^1+r/T-1 z_2^- a^2-s/T+1(w+x_2)^s/T+ a^2/((w+x_2)z_1-(w+x_1)z_2)(z_2-w-x_2)· S*u_3(Y_M^1(a^2,x_2)Y_M^1(a^1,x_1)v,)u_2 +_x_2_x_1(z_1^- a^1+1-r/T(w+x_1)^ a^1+r/T-1/z_1-w-x_1·z_2^- a^2+1-s/T(w+x_2)^ a^2+s/T-1/z_2-w-x_2) · S*u_3(Y_M^1(a^2,x_2)Y_M^1(a^1,x_1)v,)u_2= (F1)+(F2)+(F3)+(F4), where (F1)-(F4) are the corresponding terms on the right-hand side. On the other hand, we can express the right-hand side of com2 as follows: S_VVM^L*u_3(a^2,_2)(a^1,_1)(v,)u_2= _x_2(z_2^- a^2+1-s/Tz_1^- a^1-r/T+1(1+x_2)^ a^2+s/T-1/z_2-z_1-z_1x_2) S*u_3·[Y(a^2,x_2)a^1](v,)u_2+ _x_2,x_1z_2^- a^2+1-s/T(w+x_1)^ a^1+r/T z_1^- a^1-r/T+1(w+x_2)^ a^2+s/T-1/((w+x_1)z_2-(w+x_2)z_1)(z_1-w-x_1)· S*u_3(Y_M^1(a^2,x_2)Y_M^1(a^1,x_1)v,)u_2-_x_2,x_1z_2^- a^2+1-s/T(w+x_1)^ a^1+r/T z_1^- a^1-r/T+1(w+x_2)^ a^2+s/T-1/((w+x_1)z_2-(w+x_2)z_1)(z_1-w-x_1)· S*u_3(Y_M^1(a^1,x_1)Y_M^1(a^2,x_2)v,)u_2+ _x_1_x_2(z_2^- a^2+1-s/T(w+x_2)^ a^2+s/T-1/z_2-w-x_2·z_1^- a^1+1-r/T(w+x_1)^ a^1+r/T-1/z_1-w-x_1) · S*u_3(Y_M^1(a^1,x_1)Y_M^1(a^2,x_2)v,)u_2= (G1)+(G2)+(G3)+(G4).It is easy to see that (F2)-(G3)= _x_1,x_2z_1^- a^1+1-r/T(w+x_1)^ a^1+r/T-1 z_2^- a^2-s/T+1(w+x_2)^ a^2+s/T-1/((w+x_2)z_1-(w+x_1)z_2)·(w+x_2/z_2-w-x_2-w+x_1/z_1-w-x_1)S*u_3(Y_M^1(a^1,x_1)Y_M^1(a^2,x_2)v,)u_2= _x_1,x_2z_1^- a^1+1-r/T(w+x_1)^ a^1+r/T-1 z_2^- a^2-s/T+1(w+x_2)^ a^2+s/T-1/(z_2-w-x_2)(z_1-w-x_1)· S*u_3(Y_M^1(a^1,x_1)Y_M^1(a^2,x_2)v,)u_2= (G4).On the other hand, (F3)-(G2)= _x_1,x_2z_1^- a^1+1-r/T(w+x_1)^ a^1+r/T-1 z_2^- a^2-s/T+1(w+x_2)^s/T+ a^2-1/((w+x_1)z_2-(w+x_2)z_1)(z_2-w-x_2)·(w+x_2/z_2-w-x_2-w+x_1/z_1-w-x_1)S*u_3(Y_M^1(a^2,x_2)Y_M^1(a^1,x_1)v,)u_2= -_x_1,x_2z_1^- a^1+1-r/T(w+x_1)^ a^1+r/T-1 z_2^- a^2-s/T+1(w+x_2)^s/T+ a^2-1/(z_1-w-x_1)(z_2-w-x_2)· S*u_3(Y_M^1(a^2,x_2)Y_M^1(a^1,x_1)v,)u_2= -(F4).Thus, (F2)+(F3)+(F4)-(G2)-(G3)-(G4)=0. Finally, recall that o(L-1a+L0a)=0, which implies that o(Y(a,x)b)=o((1+x)^- a- bY(b,-x/(1+x))a) for a, b∈ V. Then, we have(F1)= _x_1(z_1^- a^1+1-r/Tz_2^- a^2-s/T+1(1+x_1)^ a^1+r/T-1/z_1-z_2-z_2x_1)· S*u_3·[(1+x_1)^- a^1- a^2Y(a^2,-x_1/1+x_1)a^1](v,)u_2= _x_2z_1^- a^1+1-r/Tz_2^- a^2-s/T+1(1/1+x_2)^ a^1+r/T-1/z_1-z_2+z_2(x_2/1+x_2)·-1/(1+x_2)^2·(1/1+x_2)^- a^1- a^2 S*u_3·[Y(a^2,x_2)a^1](v,)u_2= _x_2(z_1^- a^1+1-r/Tz_2^- a^2-s/T+1(1+x_2)^ a^2-r/T/z_2-z_1-z_1x_2)S*u_3·[Y(a^2,x_2)a^1](v,)u_2= (G1).Therefore, (F1)+(F2)+(F3)+(F4)-(G1)-(G2)-(G3)-(G4)=0. The proof of <ref> is completed.§.§ The (n+3)-point correlation functions We can use a similar induction argument as in <cit.> to construct the (n+3)-point function S, with the well-defineness and locality of the 3-point, 4-point, and 5-point functions from the previous subsection as the base case. Note that the only property involving the A(V)-modules U^2 and U^3 we used for the induction process in <cit.> wasS*u_3·[a^1][a^2](a^3,_3)⋯ (a^n,_n)(v,)u_2-S*u_3·[a^2][a^1](a^3,_3)⋯ (a^n,_n)(v,)u_2= ∑_j≥ 0 a^1-1j Su_3·[a^1ja^2](a^3,_3)⋯ (a^n,_n)(v,)u_2,which is also true when U^2 and U^3 are modules over A_g(V). We omit the rest of the details for the induction. 
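For a^1,a^2\in V^0, the displayed relation is the image, under the epimorphism A(V^0)\to A_g(V) recalled earlier, of the commutator identity
\[
[a^1]\ast[a^2]-[a^2]\ast[a^1]=\sum_{j\ge 0}\binom{\operatorname{wt} a^1-1}{j}\,[a^1(j)a^2]
\]
in A(V^0), applied to the right A_g(V)-module U^3; it is the algebra counterpart of the formula dual to eq:oaob-oboa.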
Thus, we have a well-defined system of (n+3)-point functions for n≥ 3.S_V⋯ M⋯ V U^3⊗ V⊗⋯⊗ M^1 ⊗⋯ V⊗ U^2→Ø(∞Δ_n)u_3⊗ a^1⊗⋯⊗ v⋯⊗ a^n⊗ u_2 ↦ S*u_3(a_1,_1)⋯ (v,)⋯(a^n,_n)u_2,where u_3∈ U^3, a^1,⋯ a^n∈ V, v∈ M^1, and u_2∈ U^2. Note that S satisfies the recursive formulas eq:recursive_U2,eq:recursive_U3, and the locality in <ref>.§ ASSOCIATIVITY OF THE RECONSTRUCTED CORRELATION FUNCTIONSIn this section, we show that the system of (n+3)-point functions S we constructed in <ref> is contained in [Σ_1(U^3, M^1, U^2)].By our construction in <ref>, it remains to show the associativity in <ref>. Since the recursive formulas for the correlation functions in <ref> are different from the ones in <cit.>, there are 5 new cases arise in our case. Recall there are two formulas for the associativity: for any k∈,__1=S*u_3(a^1,_1)⋯(v,)u_2(z_1-w)^kẓ_̣1̣= S*u_3⋯(a^1kv,)u_2, __1=_2S*u_3(a^1,_1)(a^2,_2)⋯u_2(z_1-z_2)^kẓ_̣1̣= S*u_3(a^1ka^2,_2)⋯u_2. §.§ Associativity for one algebra element and one module element We first prove 5.1 for the 4-point functions. The general (n+3)-point functions case can be proved in a similar way, so we omit the details.For a^1∈ V, u_3∈ U^3, u_2∈ U^2, v∈ M^1 homogeneous, and k∈, we have __1=S*u_3(a^1,_1)(v,)u_2(z_1-w)^kẓ_̣1̣=S*u_3(a^1kv,)u_2.Suppose a^1∈ V^r. When k≥ 0, by <ref>, __1= S*u_3(a^1,_1)(v,)u_2(z_1-w)^k ẓ_̣1̣= __1=⟨*|φ⟩u_3 ⊗v⊗ [a^1]· u_2 w^- v z_1^- a^1 (z_1-w)^k ẓ_̣1̣+__1=∑_i≥ 0F_ a^1-1+r/T,i(_1,) ⟨*|φ⟩u_3⊗a^1iv⊗ u_2 w^-( a^1-i-1+ v) (z_1-w)^k ẓ_̣1̣= __1=∑_j≥ 0- a^1j w^- v- a^1-j (z_1-w)^j+k⟨*|φ⟩u_3 ⊗v⊗ [a^1]· u_2+__1=∑_i≥ 0∑_l=0^i ∑_p≥ 0 a^1-1+r/Ti-l- a^1+1-r/Tpw^l-p- a^1+1- v/(z_1-w)^l+1-p-k·⟨*|φ⟩u_3⊗a^1iv⊗u_2= ∑_i≥ 0( ∑_l=k^ia^1-1+r/Ti-l- a^1+1-r/Tl-k)w^k- a^1 +1- v⟨*|φ⟩u_3⊗a^1iv⊗ u_2= ∑_i≥ 0(∑_s=0^i-k a^1-1+r/Ti-k-s- a^1+1-r/Ts) w^k- a^1+1- v⟨*|φ⟩u_3⊗a^1iv⊗ u_2= ⟨*|φ⟩u_3⊗a^1kv⊗ u_2 w^k- a^1+1- v= S*u_3(a^1kv,)u_2,where we used the fact that ∑_s=0^i-k a^1-1+rTi-k-s- a^1+1-rTs is the coefficient of the term x^i-k in (1+x)^ a^1-1+r/T(1+x)^- a^1+1-r/T=1. When k=-1, we have__1= S*u_3(a^1,_1)(v,)u_2(z_1-w)^-1ẓ_̣1̣= ⟨*|φ⟩u_3⊗v⊗ [a^1]· u_2 w^- v- a^1+∑_i≥ 0(∑_l=0^ia^1-1+r/Ti-l- a^1+1-r/Tl+1)w^- a^1- v⟨*|φ⟩u_3⊗a^1iv⊗ u_2= ⟨*|φ⟩u_3⊗v⊗ [a^1]· u_2 w^- v- a^1-∑_i≥ 0 a^1-1+r/Ti+1w^- a^1- v⟨*|φ⟩u_3⊗a^1iv⊗ u_2.If r=0, by f-relation3 we have5.4= a^1-10⟨*|φ⟩u_3⊗a^1-1v⊗ u_2w^- v- a^1=S*v'_3(a^1-1v,)u_2. If r≠ 0, since [a^1]=0, by f-relation4 we have5.4= a^1-1+r/T0w^- a^1- v⟨*|φ⟩u_3⊗a^1-1v⊗ u_2=S*v'_3(a^1-1v,)u_2.When k<-1, the proof is similar to the proof of <cit.>, using the L-1-derivative property in <ref>, we omit the details. §.§ Associativity for two algebra elementsIn this subsection, we prove 5.2 for the 5-point functions. The general (n+3)-point functions case can be proved similarly. First, we show that the kernel J of conformal blocks in <ref> also contains some information about a generalized version of O(M^1) in <cit.>. We give the following definition with the notations in <cit.>: Let (M,Y_M) be an untwisted module, and λ be a complex number. Introduce a bilinear operator ∘_g:V⊗ M⟶ M by letting a∘_g v: =_x (1+x)^ a-1+δ(r)+r/T/x^1+δ(r) Y_M(a,x)v,where 0≤ r≤ T-1, a∈ V^r homogeneous, and v∈ M. LetO_g,λ(M):=*a∘_g u, L-1u+(L0+λ)u: a∈ V, u∈ M,and B_g,λ(M):=M/O_g,λ(M). We will show that B_g,λ(M) is a bimodule over the g-twisted Zhu's algebra A_g(V) in the next Section. The following property is necessary for the proof of associativity 5.2.Let J be the subspace spanned by elements of the form f-relation1-f-relation4 in <ref>. 
Then, U^3⊗ O_g,h_2-h_3(M^1)⊗ U^2⊆ J.By f-relation1, f-relation4, and def:circle-g, it is clear that u_3⊗ (L-1u+(L0+h_2-h_3)u)⊗ u_2∈ J, and u_3ø(b∘_g u) ⊗ u_2∈ J, where b∈ V^r with 1≤ t≤ T-1. Now let a∈ V^0. Since [L-1a+L0a]=0, it follows from f-relation2 that0= u_3·[L-1+L0a]⊗ u⊗ u_2≡ -∑_j≥ 0 a+1j u_3 ⊗(L-1a)j-1u⊗ u_2-∑_j≥ 0 aj u_3 ⊗(L0a)j-1u⊗ u_2 ≡ -u_3 ⊗((L-1a+L0a)∗ u)⊗ u_2≡ u_3 ⊗ (a∘_g u) ⊗ u_2 J.Thus u_3⊗ O_g,h_2-h_3(M^1)⊗ u_2⊆ J, in view of def:O-quotient. By <ref>, together with the L-1-derivative property of module M^1, it is easy to show the following fact (see <cit.>):u_3⊗(_x (1+x)^ a-1+δ(r)+r/T+i/x^j+1+δ(r)Y_M^1(a,x)v)⊗ u_2∈ J, j≥ i≥ 0, i, j∈.Let S be the 3-point function in <ref>. For a∈ V^r homogeneous, v∈ M^1, u_2∈ U^2, u_3∈ U^3, and j∈, we have_x (w+x)^ a-1+δ(r)+r/T/x^j+δ(r) S*u_3(Y_M^1(a,x)v,)u_2=S*u_3·[a](v,)u_2if r,j=0,0if j≥ 1.By <ref> and the change of variable formula, we have_x (w+x)^ a-1+δ(r)+r/T/x^j+δ(r) S*u_3(Y_M^1(a,x)v,)u_2=_x(w+x)^ a-1+δ(r)+r/T/x^j+δ(r)∑_n∈⟨*|φ⟩u_3⊗an+r/Tv⊗ u_2 x^-n-r/T-1w^- a+n+r/T+1- v=_x 1/w(1+x/w)^ a-1+δ(r)+r/T/(x/w)^j+δ(r)⟨*|φ⟩u_3⊗ Y_M^1(a,x/w)v⊗ u_2 w^-j- v-r/T=_z (1+z)^ a-1+δ(r)+r/T/z^j+δ(r)⟨*|φ⟩u_3⊗ Y_M^1(a,z)v⊗ u_2 w^-j- v-r/T.By eq:O-relation, the last term is 0 if j≥ 1. On the other hand, if r, j=0, by f-relation2 we have _z (1+z)^ a/z⟨*|φ⟩u_3⊗ Y_M^1(a,z)v⊗ u_2 w^- v=⟨*|φ⟩u_3 ⊗∑_i≥ 0 aiai-1v⊗ u_2 w^- v=⟨*|φ⟩u_3· [a]⊗ v⊗ u_2 w^- v=S*u_3· [a](v,)u_2.This proves eq:S-O-relation. For any u_3∈ U^3, u_2∈ U^2, v∈ M^1, a^1,a^2∈ V, and k∈, we have__1=_2S*u_3(a^1,_1)(a^2,_2)(v,)u_2(z_1-z_2)^kẓ_̣1̣=S*u_3(a^1ka^2,_2)(v,)u_2. It suffices to prove <ref> for homogeneous a^1∈ V^r and a^2∈ V^s, where 0≤ r, s<T. Note that a^1na^2∈ V^r+s for any n∈, where r+s denotes the residue of r+s modulo T.When k≥ 0, by ExpansionFromLeft, we have__1=_2 S*u_3(a^1,_1)(a^2,_2)(v,)u_2(z_1-z_2)^k ẓ_̣1̣= __1=_2 S*u_3· [a^1](a^2,_2)(v,)u_2z_1^- a^1 (z_1-z_2)^k ẓ_̣1̣+ __1=_2∑_i≥ 0F_ a^1-1+δ(r)+r/T(_1,_2) S*u_3(a^1ia^2,_2)(v,)u_2(z_1-z_2)^kẓ_̣1̣+__1=_2∑_i≥ 0 F_ a^1-1+δ(r)+r/T (_1,) S*u_3(a^2,_2)(a^1iv,)u_2(z_1-z_2)^k ẓ_̣1̣= 0+__1=_2∑_i≥ 0∑_l=0^i ∑_p≥ 0 a^1-1+δ(r)+r/Ti-l- a^1+1- δ(r)-r/Tp·z_2^-i+l-p/(z_1-z_2)^l+1-p-kS*u_3(a^1ia^2,_2)(v,)u_2+0= ∑_i≥ 0(∑_l=k^i a^1-1+δ(r)+r/Ti-l- a^1+1-δ(r)-r/Tl-k)· S*u_3(a^1ia^2,_2)(v,)u_2z_2^-i+k= S*u_3(a^1ka^2,_2)(v,)u_2. Now consider the case where k=-1. 
Similar to the case k≥ 0, we have__1=_2 S*u_3(a^1,_1)(a^2,_2)(v,)u_2(z_1-z_2)^-1ẓ_̣1̣-S*u_3(a^1-1a^2,_2)(v,)u_2= __1=_2 S*u_3· [a^1](a^2,_2)(v,)u_2z_1^- a^1 (z_1-z_2)^-1ẓ_̣1̣+__1=_2∑_i≥ 0 F_ a^1-1+δ(r)+r/T, i(_1,_2) S*u_3(a^1ia^2,_2)(v,)u_2(z_1-z_2)^-1ẓ_̣1̣+__1=_2_x_1z_1^- a^1+1-δ(r)-r/T(w+x_1)^ a^1-1+δ(r)+r/T/z_1-w-x_1 (z_1-z_2)^-1ẓ_̣1̣· S*u_3(a^2,_2)(Y_M^1(a^1, x_1)v,)u_2-S*u_3(a^1-1a^2,_2)(v,)u_2= S((u_3· [a^1])·[a^2], (v, )u_2) z_2^- a^2- a^1+__1=_2∑_i≥ 0 F_ a^2-1+δ(s)+s/T,i(_2,) S*u_3· [a^1](a^2iv,)u_2z_1^- a^1(z_1-z_2)^-1ẓ_̣1̣-∑_i≥ 0 a^1-1+δ(r)+r/Ti+1 S*u_3(a^1ia^2, z_2)(v, )u_2 z_2^-i-1+_x_1z_2^- a^1+1-δ(r)-r/T(w+x_1)^ a^1-1+δ(r)+r/T/z_2-w-x_1 S*u_3(a^2,_2)(Y_M^1(a^1, x_1)v,)u_2-S*u_3(a^1-1a^2,_2)(v, )u_2= S*(u_3· [a^1])·[a^2](v, )u_2z_2^- a^2- a^1_(A1)+_x_2z_2^- a^2+1-δ(s)-s/T- a^1(w+x_2)^ a^2-1+δ(s)+s/T/z_2-w-x_2 S*u_3· [a^1](Y_M^1(a^2,x_2)v,)u_2_(B1)-∑_i≥ 0 a^1-1+δ(r)+r/Ti+1 S*u_3·[a^1ia^2](v, )u_2z_2^- a^1- a^2_(A2)-∑_i≥ 0 a^1-1+δ(r)+r/Ti+1_x_2(w+x_2)^ a^1-i-1+ a^2-1+δ(r+s)+r+s/T/z_2-w-x_2_(B_2)· z_2^- a^1- a^2+1-δ(r+s)-r+s/TS*u_3(Y_M^1(a^1ia^2,x_2)v,)u_2_(B2)+_x_1z_2^- a^1+1-δ(r)-r/T(w+x_1)^ a^1-1+δ(r)+r/T/z_2-w-x_1 S*u_3·[a^2](Y_M^1(a^1,x_1)v,)u_2z_2^- a^2_(B3)+_x_1_x_2(w+x_1)^ a^1-1+δ(r)+r/T/z_2-w-x_1·(w+x_2)^ a^2-1+δ(s)+s/T/z_2-w-x_2_(B4)=·z_2^- a^1- a^2+2-δ(r)-δ(s)-s+r/TS*u_3(Y_M^1(a^2,x_2)Y_M^1(a^1,x_1)v,)u_2_(B4)-S*u_3·[a^1-1a^2](v, )u_2z_2^- a^1- a^2_(A3)-_x_2z_2^- a^1- a^2+1-δ(r+s)-r+s/T(w+x_2)^ a^1+ a^2-1+δ(r+s)+r+s/T/z_2-w-x_2_(B5)=·S*u_3(Y_M^1(a^1-1a^2,x_2)v,)u_2_(B5)= (A1)+(A2)+(A3)+(B1)+(B2)+(B3)+(B4)+(B5).By eq:def:3-point and f-relation2, we have (A1)+(A2)+(A3)=S*(u_3· [a^1])·[a^2](v, )u_2z_2^- a^2- a^1-∑_i≥ 0 a^1-1+δ(r)+r/Ti+1 S*u_3·[a^1ia^2(v, )u_2z_2^- a^1- a^2-S*u_3·[a^1-1a^2](v, )u_2z_2^- a^1- a^2= ⟨*|φ⟩(u_3· [a^1])·[a^2]⊗ v⊗ u_2- ∑_j≥ 0 a^1-1+δ(r)+r/Tju_3·[a^1j-1a^2]⊗ v⊗ u_2=· z_2^ a^1- a^2 w^- v = 0.The last equality follows from the fact that U^3 is a right module over A_g(V). More precisely, if r≠ 0, we have [a^1]=0, and ∑_j≥ 0 a^1-1+r/Tj[a^1j-1a^2]=0; if r=0 and s≠ 0, the last equality holds since [a^2]=[a^1j-1a^2]=0 for all j≥ 0; if r=s=0, the last equality holds since (u_3· [a^1])·[a^2]=u_3([a^1]∗_g [a^2]). On the other hand, by the Jacobi identity, we can express (B2)+(B5) as follows: (B2)+(B5)= -∑_j≥ 0_x_2_x_1-x_2 (x_1-x_2)^j-1 a^1-1+δ(r)+r/Tj (x_1+x_2)^-j·z_2^- a^1- a^2+1-δ(r+s)-r+s/T(w+x_2)^ a^1+ a^2-1+δ(r+s)+r+s/T/z_2-w-x_2· S*u_3(Y_M^1(Y(a^1,x_1-x_2)a^2,x_2)v,)u_2= -_x_2_x_1-x_21/x_1-x_2(1+x_1-x_2/w+x_2)^ a^1-1+δ(r)+r/Tz_2^- a^1- a^2+1-δ(r+s)-r+s/T·(w+x_2)^ a^1+ a^2-1+δ(r+s)+r+s/T/z_2-w-x_2 Su_3(Y_M^1(Y(a^1,x_1-x_2)a^2,x_2)v,)u_2= -_x_1,x_2(1/x_1-x_2)(w+x_1)^ a^1-1+δ(r)+r/T(w+x_2)^ a^2+δ(r+s)+r+s/T-δ(r)-r/T/z_2-w-x_2_(C1)=· z_2^- a^1- a^2+1-δ(r+s)-r+s/TS*u_3(Y_M^1(a^1,x_1)Y_M^1(a^2,x_2)v,)u_2_(C1)+_x_1,x_2(1/-x_2+x_1)(w+x_1)^ a^1-1+δ(r)+r/T(w+x_2)^ a^2+δ(r+s)+r+s/T-δ(r)-r/T/z_2-w-x_2_(C2)=· z_2^- a^1- a^2+1-δ(r+s)-r+s/T S*u_3(Y_M^1(a^2,x_2)Y_M^1(a^1,x_1)v,)u_2_(C2)= (C1)+(C2).By <ref>, we have(C1) =-∑_j≥ 0_x_2z_2^- a^1- a^2+1-δ(r+s)-r+s/T(w+x_2)^ a^2+δ(r+s)+r+s/T-δ(r)-r/Tx_2^j/z_2-w-x_2=· S*u_3(_x_1(w+x_1)^ a^1-1+δ(r)+r/T/x_1^j+1Y_M^1(a^1,x_1)Y_M^1(a^2,x_2)v,)u_2=-_x_2z_2^- a^1- a^2+1-δ(r+s)-r+s/T(w+x_2)^ a^2+δ(r+s)+r+s/T-δ(r)-r/Tx_2^j/z_2-w-x_2=· S*u_3· [a^1](Y_M^1(a^2,x_2)v,)u_2=-(B1).The last equality holds since both (B1) and (C1) are equal to 0 when r≠0. 
For (C2)+(B4), using <ref> again, we have (C2)+(B4)= _x_1,x_2(1/-x_2+x_1)(w+x_1)^ a^1-1+δ(r)+r/T(w+x_2)^ a^2+δ(r+s)+r+s/T-δ(r)-r/T/z_2-w-x_2=·z_2^- a^1- a^2+1-δ(r+s)-r+s/T S*u_3(Y_M^1(a^2,x_2)Y_M^1(a^1,x_1)v,)u_2=+_x_1_x_2(w+x_1)^ a^1-1+δ(r)+r/T/z_2-w-x_1·(w+x_2)^ a^2-1+δ(s)+s/T/z_2-w-x_2=· z_2^- a^1- a^2+2-δ(r)-δ(s)-r+s/TS*u_3(Y_M^1(a^2,x_2)Y_M^1(a^1,x_1)v,)u_2.Note that (C2)+(B4) varies when r and s take different values. There are 6 cases in total: (1) r=s=0, (2) r=0 and s≠ 0, (3) r≠ 0 and s=0, (4) r+s=T, (5) r, s≠ 0, and r+s<T, and (6) r+s>T. We only present the proof for the case r=s=0 and the case r+s=T. The proof of other cases are similar, we omit the details. Case (1): r=s=0. In this case,(C2)+(B4) = _x_1_x_2 (w+x_1)^ a^1(w+x_2)^ a^2/z_2-w-x_2(1/-x_2+x_1+1/z_2-w-x_1)z_2^- a^1- a^2=· S*u_3(Y_M^1(a^2,x_2)Y_M^1(a^1,x_1)v,)u_2 = _x_1_x_2 (w+x_1)^ a^1(w+x_2)^ a^2/z_2-w-x_2(z_2-w-x_2/(-x_2+x_1)(z_2-w-x_1))z_2^- a^1- a^2=· S*u_3(Y_M^1(a^2,x_2)Y_M^1(a^1,x_1)v,)u_2 =- _x_1 (w+x_1)^ a^1/z_2-w-x_1_x_2∑_j≥ 0((w+x_2)^ a^2/x_2^1+j)x_1^jz_2^- a^1- a^2=· S*u_3(Y_M^1(a^2,x_2)Y_M^1(a^1,x_1)v,)u_2 = -(B3), where the last equality follows from <ref>. Case (4): r+s=T. Since r,s≠ 0, we have (C2)+(B4) = _x_1_x_2 (w+x_1)^ a^1-1+r/T(w+x_2)^ a^2+s/T/z_2-w-x_2(1/-x_2+x_1)z_2^- a^1- a^2=· S*u_3(Y_M^1(a^2,x_2)Y_M^1(a^1,x_1)v,)u_2+_x_1_x_2 (w+x_1)^ a^1-1+r/T(w+x_2)^ a^2-1+s/T/z_2-w-x_2(1/z_2-w-x_1)z_2^- a^1- a^2+1=· S*u_3(Y_M^1(a^2,x_2)Y_M^1(a^1,x_1)v,)u_2 = _x_1_x_2 (w+x_1)^ a^1-1+r/T(w+x_2)^ a^2-1+s/T/z_2-w-x_2(w+x_2/-x_2+x_1+z_2/z_2-w-x_1)=· z_2^- a^1- a^2Su_3(Y_M^1(a^2,x_2)Y_M^1(a^1,x_1)v,)u_2 = _x_1_x_2 (w+x_1)^ a^1+r/T(w+x_2)^ a^2-1+s/T/z_2-w-x_1(1/-x_2+x_1)z_2^- a^1- a^2=· Su_3(Y_M^1(a^2,x_2)Y_M^1(a^1,x_1)v,)u_2 = -_x_1 (w+x_1)^ a^1+r/T/z_2-w-x_1∑_j≥0_x_2((w+x_2)^ a^2-1+s/T/x_2^1+j)x_1^jz_2^- a^1- a^2=· Su_3(Y_M^1(a^2,x_2)Y_M^1(a^1,x_1)v,)u_2 = 0=-(B3).Therefore, we have (B1)+(B2)+(B3)+(B4)+(B5)=((B1)+(C1))+((C2)+(B4)+(B3))=0, and so__1=_2 S*u_3(a^1, _1)(a^2,_2)(v, )u_2 (z_1-z_2)^-1ẓ_̣1̣ =S*u_3(a^1-1a^2,_2)(v, )u_2.This proves 5.2 for k=-1. When k<-1, it can be proved inductively using the L-1-derivative property, we omit the details. The system of (n+3)-point functions S=*S_V⋯ M⋯ V constructed from the given φ∈[U^3, M^1, U^2] in <ref> lies in [Σ_1(U^3, M^1, U^2)].It follows from the construction of S, <ref>.Now the proof of <ref> is complete.§ FUSION RULES CHARACTERIZED BY AG(V)-BIMODULESIn <ref>, we constructed a quotient space B_g,λ(M)=M/O_g,λ(M) associated to an untwisted module M and a complex number λ. In this section, we will show that B_g,λ(M)=M/O_g,λ(M) is in fact a bimodule over A_g(V), and the space of coinvariants (U^3ø M^1ø U^2)/J in <ref>is isomorphic to the tensor product space U^3⊗_A_g(V)B_g,λ(M^1)⊗_A_g(V)U^2, where λ=h_2-h_3. Moreover, we will show that the tensor products U^3⊗_A_g(V) B_g,λ(M^1)⊗ _A_g(V) U^2 is isomorphic to the tensor product U^3⊗_A_g(V) A_g(M^1)⊗ _A_g(V) U^2, which gives us two ways to compute fusion rules using A_g(V)-bimodules. Finally, we will show that the g-twisted fusion rules are all finite when the VOA V is g-rational and C_2-cofinite.§.§ The Ag(V)-bimodules Bglambda(M1) Let M be an untwisted module of conformal weight h_1, and let λ be a complex number. For a∈ V^r and u∈ M, define a∗_g u:=_z Y_M(a,z)u(1+z)^ a/z if r=00 if r≠ 0,u∗_g a:=_z Y_M(a,z)u(1+z)^ a-1/z if r=00 if r≠ 0 .Then, we have b∗_g O_g,λ(M)⊆ O_g,λ(M) and O_g,λ(M)∗ _g b⊆ O_g,λ(M) for any b∈ V. 
Moreover, B_g,λ(M)=M/O_g,λ(M) is a bimodule over A_g(V), with respect to the products a∗_g u and u∗ _g a in eq:left-right-actions. In <cit.>, Jiang and Jiao introduced a quotient space: A_g(M)=M/ O_g(M),where O_g(M) is spanned by a∘_g u for all a∈ V, u∈ M, and a∘ _g u is given by def:circle-g. They proved that A_g(M) is an A_g(V)-bimodule with left and right actions given by eq:left-right-actions. In particular, b∗_g O_g(M)⊆ O_g(M) and O_g(M)∗_g b⊆ O_g(M) for all b∈ V. Since a∘_g u=a∘ u for a∈ V^0, we introduce an intermediate subspace O^0_g,λ(M):= * a∘_g u,L-1u+(L0+λ)u:a∈ V^0, u∈ M ⊆ O_g,λ(M).By <cit.>, together with the fact that b∗_g u=u∗ _g b=0 for b∈ V^r with r>0, we have b∗_g O^0_g,λ(M)⊆ O^0_g,λ(M) and O^0_g,λ(M)∗_g b⊆ O^0_g,λ(M) for all b∈ V. In particular, for any u∈ M, we haveb∗_g (L-1u+(L0+λ)u),(L-1u+(L0+λ)u)∗_g b∈ O^0_g,λ(M)⊆ O_g,λ(M).Then the conclusion follows from O_g,λ(M)=O_g(M)+ *L-1u+(L0+λ)u:u∈ M,in view of def:O-quotient. By <ref> and <cit.>, B_g,λ(M)=M/O_g,λ(M) is a bimodule over A_g(V) with respect to the products a∗_g u and u∗ _g a in eq:left-right-actions.Since there is an epimorphism of associative algebras A(V^0)⟶ A_g(V) (<cit.>), B_g,λ(M) and A_g(M) are also bimodules over A(V^0) with actions [a]·[u]=[a∗ _g u]=[a∗ u] and [u]·[a]=[u∗_g a]=[u∗ a], where a∈ V^0 and [u]∈ B_g,λ(M) or A_g(M), in view of eq:left-right-actions.Let M^1 be an untwisted module of conformal weight h_1, U^2 (resp. U^3) be a left (resp. right) irreducible A_g(V)-modules on which [] acts as h_2𝕀 (resp. h_3𝕀). Then we have an isomorphism of vector spaces(U^3⊗ M^1⊗ U^2)/J≅ U^3⊗_A_g(V)B_g,λ(M^1)⊗_A_g(V)U^2≅ U^3ø_A(V^0) B_g,λ(M^1)ø_A(V^0)U^2,where J is given by f-relation1–f-relation4, and λ=h_2-h_3.Define a linear map ϕ: U^3⊗ M^1 ⊗ U^2 ⟶ U^3⊗_A_g(V)B_g,λ(M^1)⊗_A_g(V)U^2, ϕ(u_3ø vø u_2): =u_3ø [v]ø u_2, u_3∈ U^3, v∈ M^1, u_2∈ U^2,where [v] is the image of v∈ M^1 in B_g,λ(M^1). By <ref>, it is straightforward to see that ϕ factors through (U^3⊗ M^1⊗ U^2)/J. Denote the induced map by ϕ.Conversely, we consider the following linear map: ψ: U^3ø_A_g(V)B_g,λ(M^1)ø_A_g(V)U^2⟶ (U^3ø_ M^1ø_ U^2)/J, ψ(u_3ø [v]ø u_2):=u_3 ø vø u_2+J. Indeed, by <ref>, we have ψ(u_3ø [O_g,λ(M^1)]ø u_2)=u_3 ø O_g,λ(M^1)ø u_2+J=0. Furthermore, recall that A_g(V) is a quotient of A(V^0) <cit.>, and so U^2 and U^3 are also left and right modules over A(V^0), respectively. Let a∈ V^0, by f-relation2, f-relation3, and eq:left-right-actions, we have ψ(u_3[a]ø [v]ø u_2)=ψ(u_3ø [a∗_g v]ø u_2), ψ(u_3ø [v]ø [a] u_2)=ψ(u_3ø [v∗_g a]ø u_2). Hence ψ is well-defined. It is clear that ψ is an inverse of ϕ̅. Observe that [a∗ _g v]=[a∗ v] and [v∗_g a]=[v∗ a] for a∈ V^0, then by adopting a similar argument, we can also show that (U^3ø_ M^1ø_ U^2)/J≅ U^3ø_A(V^0) B_g,λ(M^1)ø_A(V^0)U^2.Although the A_g(V)-bimodules B_g,h_2-h_3(M^1) and A_g(M^1) are not isomorphic in general, the tensor products U^3⊗_A_g(V)B_g,h_2-h_3(M^1)ø _A_g(V)U^2 and U^3⊗_A_g(V)A_g(M^1)ø _A_g(V)U^2 are in fact isomorphic. This isomorphism was proved in <cit.> for the untwisted case under the assumption that A(V) is semi-simple. Now we drop the semi-simplicity condition.With the aforementioned assumptions, we have a linear isomorphism:U^3⊗_A_g(V) B_g,h_2-h_3(M^1)⊗ _A_g(V) U^2≅ U^3⊗_A_g(V) A_g(M^1)⊗ _A_g(V) U^2≅ U^3⊗_A(V^0) B_g,h_2-h_3(M^1)⊗ _A(V^0) U^2≅ U^3⊗_A(V^0) A_g(M^1)⊗ _A(V^0) U^2.By def:O-quotient and <ref>, B_g,h_2-h_3(M^1)= A_g(M^1)/I, where I=*[(L-1+L0+h_2-h_3)u] u∈ M^1=O_g,h_2-h_3(M^1)/O_g(M^1)is a sub-bimodule of A_g(M^1). 
Observe that∗ _gu-u∗_g +(h_2-h_3)u =(L-1+L0+h_2-h_3)u for any u∈ M^1, where ∈ V is the conformal vector. Thus,I=* []∗_g [u]-[u]∗_g []+(h_2-h_3)[u] u∈ M^1⊂ A_g(M^1).Recall that []∈ A_g(V) is a central element <cit.>, I is a sub-bimodule of A_g(M^1). (This gives an alternative proof of <ref>.) Denote the inclusion map I A_g(M^1) by ι, and A_g(V) by A for short, then by the right exactness of tensor functor, we have a right exact sequence: U^3⊗_AI ⊗_AU^21øιø 1⟶U^3⊗_A A_g(M^1) ⊗_AU^2 → U^3⊗_A(A_g(M^1)/I) ⊗_AU^2 → 0.We claim that (1øιø1)(U^3⊗_A I⊗_A U^2)=0 in U^3⊗_A A_g(M^1)⊗_AU^2. Indeed, for any u_3∈ U^3, u_2∈ U^2, and u∈ M^1, we have(1øιø1)(u_3⊗ ([]∗ [u]-[u]∗ []+(h_2-h_3)[u])⊗ u_2)=u_3([]-h_3)⊗ [u]⊗ v_2-u_3⊗ [u]⊗ ([]-h_2)u_2=0.in U^3⊗_A A_g(M^1)⊗_AU^2. Then, the first isomorphism in eq:iso-between-different-bimodule-tensors follows from eq:right-exact-sequence. The last two isomorphism follows from <ref> and its proof.In the last remark of <cit.>, the author made a false claim that the isomorphism eq:iso-between-different-bimodule-tensors does not hold in general for the untwisted case. This was due to a mistake in Example 4.22 in <cit.>. We make a correction here:In the isomorphism B_h(M(c,h_1))⊗_A(M_c) M_c(0)≅[t_0]⊗ _[t] M_c(0)≅ M_c(0), the left A(M(c,0))=[t]-action is given by t.(1ø v_c,0)=(t_0+h_2)ø v_c,0=1.tø v_c,0+h_2ø v_c,0=1øL0v_c,0+h_2ø v_c,0=h_2ø v_c,0.Thus the left [t]-module [t_0]⊗ _[t] M_c(0)≅ M_c(0) is isomorphic to M(c,h_2)(0)= v_c,h_2, and so((M(c,h_2)(0))^∗⊗ _A(M_c)B_h(M(c,h_1))⊗ _A(M_c)M_c(0))^∗≅_A(M_c)( M_c(0),M(c,h_2)(0)) is 1-dimensional, same as(M(c,h_2)(0)^∗⊗ _A(M_c)A(M(c,h_1))⊗ _A(M_c)M_c(0))^∗. Let M^1 be an untwisted lowest weight module of conformal weight h_1, M^2 (resp. M^3) be a lowest weight g-twisted module of conformal weight h_2 (resp. h_3) with bottom level U_2 (resp. U_3). Further, assume M^2 and (M^3)' are generalized Verma module. Then, we have isomorphisms:≅[Σ_1((U^3)^∗, M^1, U^2)]≅ ((U^3)^∗⊗_A_g(V)A_g(M^1)⊗_A_g(V)U^2)^∗.In particular, if V is g-rational, then eq:twisted-fusion-rules-theorem holds for any untwisted irreducible module M^1, and irreducible g-twisted modules M^2 and M^3.It follows from <ref>. In general, we have the following upper bound of the g-twisted fusion rules by <ref>: ≤((M^3(0))^∗⊗_A_g(V)A_g(M^1)⊗_A_g(V)M^2(0))^∗,where M^2 and M^3 are irreducible g-twisted modules with bottom levels M^2(0) and M^3(0), respectively, and V is an arbitrary VOA.§.§ The finiteness of twisted conformal blocks and fusion rulesIn this subsection, we assume that V is of CFT-type, i.e. V=V_0⊕ V_+, with V_0=, and V_+=⊕_n=1^∞ V_n.The finiteness of fusion rules is one of the standard assumptions for rational conformal field theory <cit.>. Li proved the finiteness of fusion rules among three irreducible untwisted modules M^1, M^2, and M^3 when the module M^1 is C_2-cofinite <cit.>. Using <ref> and eq:half-twisted-fusion-rules-theorem, we can show the finiteness of g-twisted fusion rules under the same condition.We observe that the filtrations on A(V) and A(M) studied in <cit.> also have a g-twisted analog. Define a filtration: 0=A_g(V)_-1⊂ A_g(V)_0⊂ A_g(V)_1⊂⋯ on A_g(V) by A_g(V)_n:=(⊕_i=0^n V_i+ O_g(V))/O_g(V), n∈.It is clear that A_g(V)_m∗_g A_g(V)_n⊂ A_g(V)_m+n for any m,n∈. Denote the associated graded algebra by A_g(V)=⊕_n=0^∞ (A_g(V)_n/A_g(V)_n-1). The product ·∗_g · on A_g(V) is given by ([a]+A_g(V)_m-1)∗_g ([b]+A_g(V)_n-1):=[a]∗_g [b]+A_g(V)_m+n-1,where [a]∈ A_g(V)_m and [b]∈ A_g(V)_n. It is easy to check this product is commutative. 
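Indeed, for homogeneous a,b\in V^0 with \operatorname{wt} a\le m and \operatorname{wt} b\le n, the commutator identity in A(V^0) descends to A_g(V) and gives
\[
[a]\ast_g[b]-[b]\ast_g[a]=\sum_{j\ge 0}\binom{\operatorname{wt} a-1}{j}\,[a(j)b],
\]
and \operatorname{wt}(a(j)b)=\operatorname{wt} a+\operatorname{wt} b-j-1\le m+n-1, so the commutator lies in A_g(V)_{m+n-1}; elements of V^{r} with r\neq 0 already vanish in A_g(V). Hence the induced product on the associated graded algebra is commutative.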
Recall that R(V)=V/C_2(V) is a commutative associative algebra with product (a+C_2(V))· (b+C_2(V))=a-1b+C_2(V), see <cit.>. The following Proposition is a g-twisted version of <cit.> and a refinement of <cit.>: The linear map ϕ: V⟶ A_g(V), ϕ(a)=[a]+A_g(V)_m-1 for a∈⊕_i=0^m V_i, factors through C_2(V). It induces an epimorphism of commutative associative algebras: ϕ: R(V)=V/C_2(V)⟶ A_g(V),ϕ(a+C_2(V))=[a]+A_g(V)_m, where a∈⊕_i=0^m V_i. First we show that ϕ(C_2(V))=0. Let a∈ V^r_m and b∈ V_n. Since (a-2b)=m+n+1, we have ϕ(a-2b)=[a-2b]+A_g(V)_m+n. By Lemma 2.2 in <cit.>, we have _z Y(a,z)b(1+z)^m-1+δ(r)+r/T/z^1+δ(r)+k∈ O_g(V), k≥ 0. We may choose k=0 if δ(r)=1, and k=1 if δ(r)=0. Then, [a-2b]+A_g(V)_m+n=-∑_j≥ 0m-1+δ(r)+r/Tj+1 [aj-1b]+A_g(V)_m+n=[0]+A_g(V)_m+n since aj-1b=m+n-j≤ m+n for any j≥ 0. Hence ϕ in def:R(V)-map is well-defined. Moreover, ϕ((a+C_2(v))· (b+C_2(V)))=ϕ(a-1b+C_2(V))=[a-1b]+A_g(V)_m+n-1. If 0<r<T, by 6.13” with k=0, def:g-star-product and eq:product-in-filtration, we have [a-1b]+A_g(V)_m+n-1 =-∑_j≥ 0m-1+r/Tj+1[ajb]+A_g(V)_m+n-1=[0]+A_g(V)_m+n-1=[a]∗_g [b]+A_g(V)_m+n-1=ϕ(a+C_2(V))∗_g ϕ(b+C_2(V)) since ajb=m+n-j-1≤ m+n-1 for any j≥ 0. Finally, if r=0, by def:g-star-product again, [a]∗_g [b]+A_g(V)_m+n-1 =∑_j≥ 0mj[aj-1b]+A_g(V)_m+n-1=[a-1b]+A_g(V)_m+n-1. Thus, ϕ is an epimorphism of graded commutative associative algebras. Recall that a CFT-type VOA V is called C_1-cofinite, if V/C_1(V)<∞, where C_1(V)=*L_(-1)a:a∈ V∪*a_(-1)b a,b∈ V_+. Karel and Li proved in <cit.> that R(V) is finitely generated when V is C_1-cofinite.As an immediate Corollary, we have the following fact about A_g(V): A_g(V) is an noetherian algebra if V is C_1-cofinite.Let M=⊕_n=0^∞ M_λ+n be an ordinary untwisted module of conformal weight λ. We introduce a similar filtration 0=A_g(M)_-1⊂ A_g(M)_0⊂ A_g(M)_1⊂⋯ by A_g(M)_n:=(⊕_i=0^n M_λ+i+ O_g(M))/O_g(M), n∈.It is clear that A_g(M) becomes a filtered A_g(V)-bimodule with this filtration and filtration eq:filtration of A_g(V), and A_g(M) is a graded A_g(V)-module. The following fact is similar to <ref>, and the proof is also similar: Let M be an untwisted module. There exists a epimomorphism of R(V)-modules ψ: M/C_2(M)⟶ A_g(M), ψ(u+C_2(M))=[u]+A_g(M)_m, where u∈⊕_i=0^m M_λ+i.If V is C_2-cofinite, thethe twisted fusion rules among irreducible untwisted module M^1, and g-twisted modules M^2 and M^3 are finite. It was proved by Buhl in <cit.> that an irreducible untwisted module M is C_2-cofinite if V is C_2-cofinite. In particular, we have A_g(M^1)= A_g(M^1)<∞ in view of eq:epimophism-R(V). Then, the conclusion follows from the estimate eq:half-twisted-fusion-rules-theorem. §.§ Twisted fusion rules and fusion rules for cyclic orbifold VOAsLet V be a strongly rational VOA, and g∈(V) be an automorphism of finite order T. According to <cit.>, the cyclic orbifold subVOA V^0=V^⟨ g⟩ is also strongly rational. We again let M^1 be an irreducible untwisted module, and M^2 and M^3 be irreducible g-twisted modules. Note that M^1, M^2, and M^3 are also ordinary V^0-modules.With <ref> and <cit.> or <cit.>, we can find a concrete relation between the space of twisted intertwining operators , and the space of ordinary intertwining operators of V^0-modules, which we denote byℑ_V^0. By Theorem 3.1 in <cit.>, the twisted modules M^1, M^2, and M^3 decompose into direct sum of irreducible ordinary V^0-modules:M^1=⊕_i=0^m_1 M^1,i, M^2=⊕_j=0^m_2 M^2,j, and M^3=⊕_k=0^m_3 M^3,k, where the direct summands could appear multiple times. 
Denote the A(V^0)-bimodule M^1,i/O_V^0(M^1,i) by A_V^0(M^1,i) for all i. Since a∘ M^1,i⊂ M^1,i for all i and a∈ V^0, we have A_V^0(M^1)=M^1/O_V^0(M^1)≅⊕_i=0^m_1 M^1,i/O_V^0(M^1,i)=⊕_i=0^m_1 A_V^0(M^1,i). Moreover, since V^0 is also C_2-cofinite <cit.>, we have ℑ_V^0[M^1,i][M^2,j][M^3,k]<∞ for all i,j, and k. By taking restrictions and projections onto the direct summands, using <cit.> and the fact that V^0 is rational <cit.>, we have the following identification of ℑ_V^0: ℑ_V^0 =⊕_i,j,kℑ_V^0[M^1,i][M^2,j][M^3,k] ≅( ⊕_i,j,k M^3,k(0)^∗ø _A(V^0) A_V^0(M^1,i)⊗_A(V^0) M^2,j(0))^∗≅ (M^3(0)^∗ø _A(V^0) M^1/O_V^0(M^1)⊗_A(V^0) M^2(0))^∗.On the other hand, we have ≅ (M^3(0)^∗ø _A(V^0) M^1/O_g(M^1)⊗_A(V^0) M^2(0))^∗, in view of <ref> and eq:iso-between-different-bimodule-tensors. By the proof of <ref>, O_g(M^1)/O_V^0(M^1) is an A(V^0)-sub-bimodule of A_g(M^1)=M^1/O_g(M^1). Thus, we have the following With the settings as above, we have the following relation between the fusion rules of g-twisted modules and fusion rules of ordinary V^0-modules: ∑_i,j,k[M^1,i][M^2,j][M^3,k]= +(M^3(0)^∗ø _A(V^0)O_g(M^1)/O_V^0(M^1)⊗_A(V^0) M^2(0)). § FUSION RULES AMONG THETA-TWISTED MODULES OVER THE HEISENBERG AND LATTICE VOASIn this Section, we apply <ref> to compute the fusion rules among θ-twisted modules over the Heisenberg VOA and rank one lattice VOA. We refer to <cit.> for the detailed constructions of the Heisenberg VOAs and lattice VOAs. Here we recall the definition of involution θ in <cit.>. Let L be a positive definite even lattice of rank d>0, and θ:L⟶ L be an involution of L defined by θ(α)=-α, for any α∈ L. Then, θ lifts to an involution of the Heisenberg VOA M(1) associated to 𝔥=⊗_ L and the lattice VOA V_L=M(1)⊗^ϵ[L] as follows: θ: M(1)⟶ M(1),θ(α^1(-n_1)⋯α^k(-n_k)):=(-1)^k α^1(-n_1)⋯α^k(-n_k),θ: V_L⟶ V_L,θ(α^1(-n_1)⋯α^k(-n_k) e^α):= (-1)^kα^1(-n_1)⋯α^k(-n_k)e^-α,where α^1,⋯, α^k∈𝔥, n_1,⋯, n_k≥ 1, and α∈ L. Clearly, θ^2=1. The θ-eigenspaces of eigenvalue 1 in M(1) and V_L are denoted by M(1)^+ and V_L^+, respectively, while the -1-eigenspaces are denoted by M(1)^- and V_L^-. Since M(1) and V_L are both simple VOAs, by <cit.>, the only possible nonzero intertwining operators among θ-twisted modules are of the type [1][θ][θ], [θ][1][θ], or [θ][θ][1]. Here type [g_1][g_2][g_3] means type , where M^i is a g_i-twisted module for i=1, 2 ,3. On the other hand, by <cit.>, we have [θ][θ][1]≅[θ][1][θ^-1]≅[1][θ][θ].Therefore, in order to determine the fusion rules among θ-twisted modules, where V=M(1) or V_L, we only need to determine the space of [1][θ][θ]-twisted intertwining operators.§.§ The Heisenberg VOA caseLet 𝔥=⊗_ L or any d-dimensional -vector space, equipped with a non-degenerate bilinear form (·|·). Recall the twisted affine algebra𝔥̂_+1/2=𝔥ø t^1/2[t,t^-1]⊕ K, with Lie bracket given by [a(m),b(n)]=δ_m+n,0m(a|b)K,[K,𝔥̂_+1/2]=0,a,b∈𝔥, m,n∈+1/2.Let M(1)_+1/2 be the induced module U(𝔥̂_+1/2)ø _U(𝔥̂_+1/2^+⊕ K), where 𝔥̂_+1/2^+⊕ K acts trivially on<cit.>. By Corollary 3.9 in <cit.>, the θ-twisted Zhu's algebra A_θ(M(1)) is isomorphic to , and M(1)_+1/2 is the unique θ-twisted M(1)-module.It is clear that M(1)_+1/2 is an irreducible module over twisted affine Lie algebra 𝔥̂_+1/2. Since M(1)_+1/2 is universal in the sense that any θ-twisted M(1)-module with bottom levelis a quotient module of M(1)_+1/2, it is also the θ-twisted generalized Verma module over M(1).Let M(1,)=M(1)⊗ e^ be the irreducible (untwisted) module over M(1) associated to ∈. 
Then, A_θ(M(1,))= [e^].The proof is similar to the proof of Lemma 3.7 and 3.8 in <cit.>. We briefly sketch it. For any α∈, since α(-1)∈ M(1)^-, by def:O-quotient, we haveα(-m-1)v≡ -∑_j≥ 01/2j+1α(j-m)vO_θ(M(1,)),for any m≥ 0, v∈ M(1,). We use induction on the degree n_1+⋯ +n_k=n of a spanning element v=α^1(-n_1)⋯α^k(-n_k)e^ of M(1,) to show that v≡ c e^O_θ(M(1,)), where c∈. The base case n=0 is clear. For n>0, by 6.11 we haveα^1(-n_1)⋯α^k(-n_k)e^≡ -∑_j≥01/2j+1α^1(j-n_1+1)α^2(-n_2)⋯α^k(-n_k)e^O_θ(M(1,)),where (α^1(j-n_1+1)α^2(-n_2)⋯α^k(-n_k)e^)=n_1-j-1+n_2+⋯ +n_k=n-j-1<n. Then, by the induction hypothesis, v≡ c e^O_θ(M(1,)).We note that <ref> only shows A_θ(M(1,)) is at most one-dimensional. It is not obvious that [e^]≠ 0. Using <ref>, <ref>, the Hom-tensor duality, and the fact A_θ(M(1))≅, we have [M(1,)][M_+1/2(1)][M_+1/2(1)]= _( [e^]⊗, )≤ 1. On the other hand,Abe, Dong, and Li constructed a nonzero [1][θ][θ]-twisted intertwining operator in <cit.> based on twisted vertex operators from <cit.>: 𝒴_^tw(·,w): M(1,)⟶(M_+1/2(1)){w},𝒴_^tw(e^,w)=e^-||^2log 2z^-||^2/2exp(∑_n∈ -1/2--(n)/nz^-n) exp(∑_n∈1/2+-(n)/nz^-n),see equation (4.7) in <cit.>. Using 6.12', we have the following result about the fusion rules among θ-twisted module over M(1):Let ∈^∗. Then, [M(1,)][M_+1/2(1)][M_+1/2(1)]=1. §.§ The rank one lattice VOA caseIn this subsection,we assume that L=α, with (α|α)=2 and ϵ (α,α)=1. i.e., ϵ: L× L⟶ <± 1> is trivial. Then, L is the root lattice of type A_1, with dual lattice L^∘=1/2L=L⊔ (L+1/2α). We first recall some general results about twisted representations of V_L in <cit.>. By <cit.>, V_L has two untwisted irreducible modules V_L and V_L+1/2α. On the other hand, according to <cit.>, together with <cit.>, V_L has two irreducible θ-twisted modules V_L^T_χ and V_L^T_-χ, with bottom levels T_χ= v_χ and T_-χ= v_-χ, respectively, where χ∈^×, andV_L^T_χ=M_+1/2(1)⊗ v_χ, V_L^T_-χ=M_+1/2(1)⊗ v_-χ.Moreover, consider the Lie algebra sl_2= e^α+α(-1)+ e^-α=(V_L)_1, let E=e^α+e^-α and F=e^α-e^-α, and let ŝl̂_̂2̂[θ_2]=( Eø[t,t^-1])⊕ ((α(-1)+ F)ø t^1/2[t,t^-1])⊕ Kbe the twisted affine Lie algebra associated to sl_2 and the involutionθ_2: sl_2⟶ sl_2, e^α↦ e^-α,α(-1)↦ -α(-1), e^-α↦ e^α,which is θ in 6.9 restricted to (V_L)_1. See Chapters 2 and 3 of <cit.> for more details. Then, V_L^T_χ and V_L^T_-χ are non-isomorphic irreducible modules over the twisted affine Lie algebra ŝl̂_̂2̂[θ_2]. Let x(n):=xø t^n∈ŝl̂_̂2̂[θ_2], for any x∈ sl_2 and n∈ or +1/2. By the construction of twisted modules, the action of E0 on T_χ and T_-χ are given by E0v_χ=1/2 v_χ,E0 v_-χ=-1/2 v_-χ.See Section 5.1 in <cit.>. By Lemma 3.10 and Proposition 3.12 in <cit.>, A_θ(V_L) has the following characterization: A_θ(V_L)≅[]⊕[e^α], with [e^α]=[e^-α] and [e^α]∗_θ [e^α]=4^-(α|α)[]. Moreover, by Theorem 3.13 in <cit.>, V_L is θ-rational. Thus we can apply the twisted fusion rules formula eq:twisted-fusion-rules-theorem for V_L. Let M^1 be the untwisted V_L-module V_L, we have[V_L][V_L^T_χ][V_L^T_χ]=[V_L][V_L^T_-χ][V_L^T_-χ]=1,[V_L][V_L^T_χ][V_L^T_-χ]=[V_L][V_L^T_-χ][V_L^T_χ]=0.By <ref> and Hom-tensor duality, we have [V_L][V_L^T_±χ][V_L^T_±χ]≅_A_θ(V_L)(T_±χ,T_±χ).Now 6.13 follows from 6.12 since [E]=2[e^α]∈ A_θ(V_L) acts on T_±χ= v_±χ by o(E)v_±χ=E0v_±χ=± (1/2) v_±χ, and an element f in_A_θ(V_L)(T_±χ,T_±χ) preserves E0. It remains to consider the case when M^1 is the untwisted irreducible V_L-module V_L+1/2α. We first give a spanning set of the A_θ(V_L)-bimodule A_θ(V_L+1/2α). 
A_θ(V_L+1/2α)= [e^1/2α]+ [e^-1/2α], with [E]∗_θ [e^1/2α]-[e^1/2α]∗_θ[E]=[e^-1/2α], [E]∗_θ [e^-1/2α]-[e^-1/2α]∗_θ[E]=[e^1/2α].Since α(-1)∈ V_L^-, we have a congruence formula similar to 6.11:α(-m-1)v≡ -∑_j≥ 01/2j+1α(j-m)vO_θ (V_L+1/2α),where v∈ V_L+1/2α and m≥ 0. In particular, let m=1, we have α(-1)e^1/2α≡ -(1/2) e^1/2αandα(-1)e^-1/2α≡ (1/2) e^-1/2αO_θ (V_L+1/2α). Moreover, given r∈ and u=α^1(-n_1)⋯α^k(-n_k)e^2r+1/2α∈ M(1, 2r+1/2α)⊂ V_L+1/2α, using a similar induction process as Lemma <ref> on the degree n_1+⋯ +n_k of u, we can show that u=α^1(-n_1)⋯α^k(-n_k)e^2r+1/2α≡ b_u e^2r+1/2αO_θ (V_L+1/2α),for some constant b_u∈. Now we use induction on r∈ to show that u=α^1(-n_1)⋯α^k(-n_k)e^2r+1/2α≡ c_u e^1/2αord_u e^-1/2αO_θ (V_L+1/2α), for some constants c_u, d_u∈, where k,r≥ 0, n_1≥⋯≥ n_k≥ 1, and α^1,⋯,α^k∈.When r=0, 6.17 follows from 6.16. Consider the case where r=1. Note that E=e^α+e^-α∈ V_L^+ and E=1. It follows from def:circle-g and 6.2' thatE-m-2v+E-m-1v≡ 0O_θ (V_L+1/2α),m≥ 0, v∈ V_L+1/2α. By the definition of lattice vertex operators in <cit.> and 6.16, we have (e^α)-2e^1/2α≡e^3/2α,(e^α)-1 e^1/2α≡ 0O_θ (V_L+1/2α), (e^-α)-2e^1/2α= _z E^-(α, z) z^-3 e^-1/2α=-α(-2)/2e^-1/2α+1/2α(-1)^2e^-1/2α ≡e^-1/2αO_θ (V_L+1/2α), (e^-α)-1 e^1/2α= _z E^-(α, z) z^-2e^-1/2α= -α(-1) e^-1/2α≡ -(1/2) e^-1/2αO_θ (V_L+1/2α).Choose m=0 in 6.18 we have: E-2e^1/2α+E-1e^1/2α≡ e^3/2α+e^-1/2α-(1/2)e^-1/2α≡ 0O_θ (V_L+1/2α).Hence α^1(-n_1)⋯α^k(-n_k)e^3/2α≡ b_u e^3/2α≡b_u(1/2-)e^-1/2αO_θ (V_L+1/2α), in view of 6.16. This proves 6.17 when r=1 since b_u(1/2-) is a constant. Now suppose r>1, and the conclusion holds for smaller r.Let m=2r in 6.18, by the induction hypothesis, we have(e^α)-2r-2 e^2r+1/2α= _z z^-2r-2 E^-(-α, z)e^2r+3/2α z^2r+1= e^2r+3/2α (e^-α)-2r-2 e^2r+1/2α= _z E^-(α,z)e^2r-1/2αz^-4r-1∈ M(1,(2r-1)α/2) ≡c_r-1 e^±1/2αO_θ (V_L+1/2α), (e^α)-2r-1 e^2r+1/2α = _z z^-2r-1 E^-(-α,z) e^2r+3/2α z^2r+1 =0, (e^-α)-2r-1 e^2r+1/2α = _zE^-(α,z) e^2r-1/2α z^-4r-2∈ M(1, (2r-1)/2) ≡c'_r-1e^±1/2αO_θ (V_L+1/2α),where e^±1/2α attains the same sign in the second and fourth congruence equations. By 6.18, e^2r+3/2α ≡E-2r-2e^2r+1/2α-c_r-1e^±1/2α≡ -E-2r-1 e^2r+1/2α-c_r-1e^±1/2α≡ -c'_r-1e^±1/2α- c_r-1e^±1/2α≡ p_r e^±1/2αO_θ (V_L+1/2α).where p_r=-c'_r-1-c_r-1. Now it follows from 6.16 thatα^1(-n_1)⋯α^k(-n_k)e^2r+3/2α≡ q_r e^2r+3/2α≡ p_rq_r e^±1/2α=c_r e^±1/2αO_θ (V_L+1/2α).This finishes the induction step and proves 6.17 for any r≥ 0. By adopting a similar induction argument, we can also prove 6.17for r∈_<0. Since V_L+1/2α=⊕_r∈ M(1, 2r+1/2α), by 6.17 we have A_θ(V_L+1/2α)= [e^1/2α]+ [e^-1/2α]. Finally, by eq:left-right-actions, we have [E∗_θ e^1/2α-e^1/2α∗_θ E]= _z [Y(E,z)e^1/2α(1+z)^0]=[E0e^1/2α]=[(e^α)0e^1/2α+(e^-α)0 e^1/2α]=[e^-1/2α].Similarly, [E∗_θ e^-1/2α-e^-1/2α∗_θ E]=[e^1/2α]. This proves 6.14. Let M^1 be the untwisted V_L-module V_L+1/2α, we have [V_L+1/2α][V_L^T_χ][V_L^T_χ]=[V_L+1/2α][V_L^T_-χ][V_L^T_-χ]=0,[V_L+1/2α][V_L^T_χ][V_L^T_-χ]=[V_L+1/2α][V_L^T_-χ][V_L^T_χ]=1.We show 6.19 first. By <ref> we have A_θ(V_L+1/2α)ø _A_θ (V_L) T_χ= ([e^1/2α]ø v_χ)+ ([e^-1/2α]ø v_χ).Given f∈_A_θ(V_L)(A_θ(V_L+1/2α)ø _A_θ (V_L) T_χ, T_χ), we assume that f([e^1/2α]ø v_χ)= v_χ, f([e^-1/2α]ø v_χ)=μ v_χ,,μ∈.Recall that o(E)v_χ=E(0)v_χ=(1/2) v_χ, see 6.12. 
By 6.14 and 6.22 we have: f([E]∗([e^1/2α]ø v_χ)) =f([E∗_θ e^1/2α-e^1/2α∗_θ E]ø v_χ)+ f([e^1/2α]ø o(E)v_χ)=f([e^-1/2α]ø v_χ)+ 1/2 f([e^1/2α]ø v_χ)=(μ+/2) v_χ,f([E]∗([e^-1/2α]ø v_χ)) =f([E∗_θ e^-1/2α-e^-1/2α∗_θ E]ø v_χ)+ f([e^-1/2α]ø o(E)v_χ)=f([e^1/2α]ø v_χ)+ 1/2 f([e^-1/2α]ø v_χ)=(+μ/2) v_χ.On the other hand, we have [E]. f([e^1/2α]ø v_χ)=o(E) v_χ=( /2) v_χ and [E]. f([e^-1/2α]ø v_χ)=o(E)μ v_χ=(μ/2) v_χ. Since f is an A_θ(V_L)-homomorphism, we have(μ+(/2))v_χ=(/2) v_χ and (+(μ/2))v_χ=(μ/2) v_χ. It follows that =μ=0, and f=0. By <ref>, we have [V_L+1/2α][V_L^T_χ][V_L^T_χ]=_A_θ(V_L)(A_θ(V_L+1/2α)ø _A_θ (V_L) T_χ, T_χ)=0.Replacing χ by -χ in the argument above, it is easy to see that [V_L+1/2α][V_L^T_-χ][V_L^T_-χ]=0. This proves 6.19. Next, we show 6.21. Given f∈_A_θ(V_L)(A_θ(V_L+1/2α)ø _A_θ (V_L) T_χ, T_-χ), assumef([e^1/2α]ø v_χ)= v_-χ, f([e^-1/2α]ø v_χ)=μ v_-χ,,μ∈.With a similar argument as above, we have (μ+(/2))v_-χ=-(/2) v_-χ, and (+(μ/2))v_-χ=-(μ/2) v_-χ. Thus, μ=-, and _A_θ(V_L)(A_θ(V_L+1/2α)ø _A_θ (V_L) T_χ, T_-χ)≤ 1.On the other hand, by Proposition 5.10 in <cit.>, there exists a nonzero twisted intertwining operator 𝒴̃^tw_α/2(·, w): V_L+1/2α⟶(V_L^T_χ, V_L^T_χ^(α/2)),𝒴̃^tw_α/2(u ,w)=𝒴_α/2^tw(u,w)øη_(α/2)+β,where u∈ M(1, (α/2)+β), 𝒴_α/2^tw is given by 6.13', η_(α/2)+β:T_χ⟶ T_χ^(α/2) is a linear isomorphism, and T_χ^(α/2)=T_-χ by 6.12 and the construction in Section 5.3 in <cit.>. Thus we have[V_L+1/2α][V_L^T_χ][V_L^T_-χ]=_A_θ(V_L)(A_θ(V_L+1/2α)ø _A_θ (V_L) T_χ, T_-χ)=1, and the second equality in 6.21 can be proved by a similar method.We need to use the nonzero twisted intertwining operator 𝒴̃^tw_α/2 in <cit.> for the proof of 6.21 since it is not clear from <ref> that A_θ(V_L+1/2α)= [e^1/2α]+ [e^-1/2α] is nonzero. Although we cannot achieve here, we believe there is an intrinsic proof of the facts that A_θ(M(1,))= [e^] is nonzero for any ∈, and that A_θ(V_L+1/2α)=[e^1/2α]⊕ [e^-1/2α] is a two-dimensional vector space. AcknowledgmentsWe thank Professors Yi-Zhi Huang, James Lepowsky, and Angela Gibney for their valuable discussions and suggestions.§ DECLARATIONS No funding was received for conducting this study. The authors have no competing interests to declare that are relevant to the content of this article. tocsectionReferences
http://arxiv.org/abs/2312.16278v1
{ "authors": [ "Xu Gao", "Jianqi Liu", "Yiyi Zhu" ], "categories": [ "math.QA", "math-ph", "math.MP", "17B69, 81T40" ], "primary_category": "math.QA", "published": "20231226180744", "title": "Twisted restricted conformal blocks of vertex operator algebras I: $g$-twisted correlation functions and fusion rules" }
DRAFT VERSION January 14, 2024

Entanglement dynamics of accelerated atoms interacting with the Electromagnetic Field

M. S. Soares[E-mail address: <[email protected]>], N. F. Svaiter[E-mail address: <[email protected]>]
Centro Brasileiro de Pesquisas Físicas, Rua Xavier Sigaud, 150 - Urca, Rio de Janeiro - RJ, 22290-180, Brazil

G. Menezes[E-mail address: <[email protected]>][On leave of absence from Departamento de Física, Universidade Federal Rural do Rio de Janeiro.]
Instituto de Física Teórica, Universidade Estadual Paulista, Rua Dr. Bento Teobaldo Ferraz, 271 - Bloco II, 01140-070 São Paulo, SP, Brazil

We study the effects of acceleration on entanglement dynamics using the theory of open quantum systems. In this scenario we consider two atoms moving along different hyperbolic trajectories with different proper times. The generalized master equation is used for a pair of dipoles interacting with the electromagnetic field. We observe that the proper acceleration plays an essential role in the entanglement harvesting and sudden death phenomena, and we study how the polarization of the atoms affects these results.

§ INTRODUCTION

Entanglement remains one of the most perplexing and captivating properties of quantum mechanics. One popular definition is that it occurs when the Hilbert space of a closed system cannot be described as a tensor product of pure states of subsystems <cit.>. This long-range correlation has opened up new frontiers for many research areas such as quantum information, quantum computation and quantum optics <cit.>. In this work, we discuss entanglement dynamics of uniformly accelerated atoms using the formalism constructed to deal with open quantum systems. We intend to merge some protocols of quantum optics/information with the insights from the Unruh-Davies effect <cit.>.

In Minkowski spacetime, inertial observers perform the canonical quantization with a unique Poincaré-invariant Minkowski vacuum |0,M⟩. In a generic curved spacetime this usual quantization with a unique vacuum cannot be implemented. This problem also appears for observers in Minkowski spacetime that do not travel along inertial worldlines <cit.>. In 1973, Fulling discussed the quantization of a massive scalar field performed by an observer in a uniformly accelerated reference frame <cit.>. Using the fact that the worldlines of uniformly accelerated observers are integral curves of a timelike Killing vector field, the canonical quantization can also be implemented in this situation. The vacuum state of such a construction is the Rindler vacuum |0,R⟩.
Instead of discussing non-unitarily equivalent quantization using the Bogoliubov coefficients, a different route to shed some light on the problem of defining particles without the Poincaré group is to introduce a device constructed to detect quasi-particles. In the late 1970s, Unruh and Davies proved that an accelerated observer perceives the Minkowski vacuum as a thermal bath<cit.>. They considered a model detector at rest in a uniformly accelerated reference frame interacting with the field prepared in the Minkowski vacuum. They obtained that for the uniformly accelerated observer the Minkowski vacuum appears as a thermal bath of Rindler quasi-particles. It is worth noting that the Unruh-Davies effect also appears for uniformly accelerated observers using a Glauber detector <cit.>. This discussion has been a topic of vigorous investigation over the years <cit.> Relativistic Quantum Information (RQI) has laid the theoretical groundwork for understanding quantum phenomena in the context of accelerated reference frames and also for systems in a gravitational field governed by General Relativity <cit.>.This framework helps us to understand entanglement dynamics generated by the Unruh-Davies effect. In this paper we can basically follow two paths to discuss atoms travelling in accelerated worldlines: by using the entanglement harvesting protocol or the theory of open quantum systems. The difference between both approaches is basically the timescale of the interaction with the field. Kaplanek and Tjoa in a recent work foment these differences between these two approaches. They explore the use of open quantum systems in the sense of relativistic quantum information and found a relation of the timescale to assure that the results are reliable for non-Markovian processes <cit.>.The entanglement harvesting protocol is generally used to explore initial correlations between atoms andfields. This approach assures that entanglement of formation is provided by the field and we can study better the role of the quantum field to entanglement harvesting. Many interesting results were obtained with such a technique, see e.g. Refs. <cit.>. The study of entanglement dynamics in open quantum systems has illuminated how quantum correlations evolve when systems interact with their surrounding environments. These interactions introduce the notion of decoherence and can significantly impact the persistence of entanglement. Investigating the entanglement dynamics of uniformly accelerated atoms within this framework becomes particularly intriguing, as it allows us to explore how relativistic effects and open system dynamics combine to influence quantum correlations in complex ways <cit.>. In the sense of this theory, in a recent work we have derived a master equation to deal with qubits, interacting with a scalar field, in different reference frames with different proper times to explore the role of motion in entanglement dynamics <cit.>. In this work, we have demonstrated that different proper accelerations affects the entanglement dynamics more than other system variables such as the orthogonal distance of the qubits. This is the path we are going to follow in this work.Here we generalize the results of Ref. <cit.> for the case of atoms interacting with an electromagnetic field. We revisit the entanglement dynamics of uniformly accelerated atoms using a generalized master equation. This procedure allow us to investigate the influence of acceleration in the entanglement dynamics. 
Our aim is to shed light on the behavior of initial entangled states under different acceleration and polarization conditions in order to study the degradation of such states. We also want to explore the phenomena of entanglement harvesting by considering an initial separable state for the atoms and changing these two variables of our problems. In this case, we observe that for some polarization of the atoms we do not observe any creation of entanglement states when the systems starts in some states. Our aim in this work is to present an alternative view of entanglement dynamics for non inertial atoms for a system well studied in quantum optics problems. We expect that the following results would help us to construct an even more realistic system which would be founded in a close future in the laboratory leading us to a analog model of Unruh-Davies radiation.The organization of this papers is as follows.In Sec. <ref> we present the formalism of the master equation for a pair of accelerated dipoles interacting with the electromagnetic field. We discuss the solutions of the derived master equation and study how the entanglement dynamics (entanglement degradation and the entanglement harvesting) is affected by the different values of both proper accelerations of the atoms in Sec. <ref>. Conclusions are given in sec. <ref>. We use unit such that ħ = c = k_B = 1. Graphs were drawn using the Mathematica packages.TEXTO PARA RETIRAR AS REFERÊNCIAS – In a generic curved spacetime, a natural decomposition of modes in terms of negative and positive frequency parts is not generally available. The non-uniqueness of the vacuum state is an important consequence of this circumstance <cit.>. Even for observers in Minkowski spacetime, there are non-trivial situations involving the choice of a vacuum state <cit.>. For instance, a uniformly accelerated observer in flat spacetime perceives the Minkowski vacuum as a thermal bath. This is known as the Unruh-Davies effect and has a close connection to the Hawking effect <cit.>. The discussion of quantum fields in non-inertial frames and related subjects has been a topic under vigorous investigation over the years <cit.>. An apparatus device in such studies is the Unruh-deWitt particle detector <cit.>. It is defined as an idealized two-level systemcoupled with a scalar field through a monopole interaction. Other descriptions for particle detectors are also possible as, for example, the one given by the Glauber theory <cit.>. A recent discussion on particle-detector models can be found in Ref. <cit.>. In this work, the authors have highlighted the importance of the Glauber model in the clarification on the different interpretations of the measurements performed by an accelerated detector. See also Refs.<cit.>.An essential tool in quantum computation and quantum information theory is quantum entanglement <cit.>. The literature has examined several sources of entangled quantum systems, recurrently found in solid-state physics and atoms in cavity electrodynamics. In fact, a number of proposals to generate entangled states in systems of two-level systems coupled to bosonic fields were established <cit.>. In recent years, quantum entanglement has been analysed through the lens of the relativistic quantum information theory, which studies the behaviour of atoms interacting with relativistic quantum fields <cit.>. 
A well known phenomenonin this scenario is the so-called entanglement degradation, when correlated states become uncorrelated by the interaction with a quantum field <cit.>. Another effect is the entanglement harvesting. Atoms initially prepared in a separable state can extract entanglement from the quantum vacuum <cit.>. Vacuum fluctuations are able to act as a source of entanglement for atoms coupled with quantum fields. This can be realized, for instance, when atoms are uniformly accelerated, as the Minkowski vacuum state can be conceived as a multi-particle state with Rindler excitations <cit.>. In any case, it is important to bear in mind that entanglement harvesting protocol can be considered as legitimate only when the detectors are not able to communicate <cit.>. Relativistic quantum entanglement has been investigated in different setups <cit.>. Recently Benatti and Floreanini discussed entanglement dynamics of two non-inertial atoms weakly interacting with a quantum scalar field with the same proper acceleration <cit.>.See also Ref. <cit.>. For the case of atoms also with the same proper acceleration but interacting with the electromagnetic field see Ref. <cit.>. In this work we study a pair of two-level systems travelling along two different worldlines. We generalize the construction of the master equation widely discussed in the literature <cit.>. Our formalism allows one to analyse the implications of different accelerations for entanglement degradation and entanglement harvesting. In order to quantify the content of entanglement between the two-level system, we calculate the concurrence introduced by Wootters <cit.>. § MASTER EQUATION FOR ACCELERATED ATOMS Instead of discussing atoms with a countable infinite energy levels and a continuum we are interested to discuss radiative processes between two discrete energy levels. Therefore, let us suppose the case of two identical two-level atoms interacting with a common quantized electromagnetic field. The atoms are moving along different hyperbolic trajectories in the (t, x) plane and its coordinates of such motion is known as the Rindler coordinates (η, ξ) and has the form x = e^a ξ/acosh a η, t = e^a ξ/asinh a η,where a is a positive constant. The Rindler metric is ds^2 = e^2 a ξ (dη^2 - dξ^2) - d 𝐲^2, with 𝐲 being the coordinates perpendicular to the motion of the atoms. The coordinates (η, ξ) cover only a quadrant of Minkowski space, namely, the wedge x > |t|. Lines of constant η are straight, whereas lines of constant ξ are hyperbolas x^2 - t^2 = a^-2e^2 a ξ,representing world lines of uniformly accelerated observers with associated proper accelerationα^-1 = a e^-a ξ. Hence different values for ξ correspond to different hyperbolae and hence to different proper accelerations. The accelerated observer's proper time τ is related to ξ, η by τ = e^a ξη.We have to deal with two proper times since the atoms are moving along distinct hyperbolic trajectories. To derive the Heisenberg equations of motion of the coupled system, we have to choose a common time variable. Nevertheless, the common choice of the time parameter is to choose one of the atoms proper time τ_1 or τ_2. In this work we use the Rindler coordinate time η. 
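As a quick numerical illustration of these kinematic relations, which fix the factors dτ_β/dη entering the master equation below, the following Python sketch evaluates the proper acceleration α^{-1} = a e^{-aξ}, the proper-time factor dτ/dη = e^{aξ} and a few points of the hyperbola for two atoms on different trajectories; the values of a and ξ are arbitrary example inputs.

import numpy as np

def rindler_trajectory(a, xi, eta):
    # Minkowski coordinates (t, x) of the hyperbola labelled by xi at Rindler time eta
    pref = np.exp(a * xi) / a
    return pref * np.sinh(a * eta), pref * np.cosh(a * eta)

a = 1.0                       # positive constant of the Rindler coordinates (example value)
for xi in (0.0, 0.5):         # two different hyperbolae, i.e. two different atoms
    proper_acceleration = a * np.exp(-a * xi)   # alpha^{-1} = a exp(-a xi)
    dtau_deta = np.exp(a * xi)                  # tau = exp(a xi) eta, the factor dtau/deta
    eta = np.linspace(0.0, 2.0, 5)
    t, x = rindler_trajectory(a, xi, eta)
    print(xi, proper_acceleration, dtau_deta)
    print(np.round(x**2 - t**2, 6))             # constant along the worldline, equal to a^{-2} exp(2 a xi)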
Therefore, we describe the time evolution with respect to such a parameter, which, because of (<ref>), has a functional relation to each of the proper times of the atoms.The Hamiltonian that governs the time evolution of this atomic system with respect to η can be written as H_A(η) =ω_0/2[[S^Z_1(τ_1(η))⊗]d τ_1/dη+ ⊗[S^Z_2(τ_2(η))]d τ_2/dη],The field modes decomposition in terms of creation and annihilation operators can be obtained in the Coulomb gauge following the the standard procedure of canonical quantization and can be written as the following E(t, x) =∑_λ∫ d^3 ki ω_k/(2π)^3 (2 ω_k)^1/2[a_k, λ(t) e^i k·x- a^†_k, λ(t) e^-i k·x]ε̂_λ(k), B(t, x) =∑_λ∫ d^3 ki ω_k/(2π)^3 (2 ω_k)^1/2[a_k, λ(t) e^i k·x - a^†_k, λ(t) e^-i k·x][k̂×ε̂_λ(k)],where a_k, λ and a^†_k, λ ate the annihilation and creation operators of the electromagnetic field. The index λ defines the different polarizations of the field. The electromagnetic field Hamiltonian is written in the traditional form H_F(t) =∑_λ∫d^3 k/(2π)^3ω_k a^†_k, λ(t) a_k, λ(t).In the multipolar coupling scheme and within the so-called electric dipole approximation one has that the Hamiltonian that describes the interaction between the atoms and the field is given by H_I(η) =- μ^(1)(τ_1(η))·E[x_1(τ_1(η))]dτ_1/dη- μ^(2)(τ_2(η))·E[x_2(τ_2(η))]dτ_2/dη,with the electric dipole moment operator for the i-th atom μ^(i)(τ_i(η)) given by μ^(1) = 𝐝^(1) S^(1)_-e^- i ωτ_1(η) + 𝐝^* (1) S^(1)_+e^ i ωτ_1(η).,where we have defined the i-th dipole raising and lowering, respectively, operators as S_(α)^+ = |e_α⟩⟨g_α|. The factor dτ_i/dη which appears in the above equations is justified because we are considering that both atoms are traveling along different stationary trajectories and will play a crucial role in the generalized master equation.The procedure to obtain a master equation for atoms at rest in more generic coordinate systems can be realized via the microscopic derivation applying some techniques of quantum field theory in curved space-time. This was obtained in Refs. <cit.> but we recall that in this work we are considering that the atoms are traveling along different hyperbolas and a generalization of the master equation obtained in the mentioned works is required. See Ref. <cit.>. We can follow the procedure to obtain such a master equation by simply making some identifications. The master equation obtained in the mentioned work has the standard Lindblad form ρ̇_A(η) = - i [ℋ_eff,ρ_A(η)] + ℒ{ρ_A(η)},with the effective Hamiltonian defined by ℋ_eff≡ℋ_A + ∑_ω∑_i,j∑_α, βΔ^(αβ)_ij(a_β, ω)A^† (α)_i(ω) A^(β)_j(ω),the term, ℒ{ρ_A(η)}, usually denoted as the dissipative contribution of the generalized master equation, is given by ℒ{ρ_A(η)} =1/2∑_ω∑_i,j∑_α, βΓ^(αβ)_ij(a_βω)( 2 A^(β)_j(ω)ρ_A(η) A^† (α)_i(ω)-{ A^† (α)_i(ω) A^(β)_j(ω),ρ_A(η)}),where we have associated the atomic operators as A^(α)_i(ω) = d_i^(α)S_-^(α),A^(α)_i(-ω) = A^† (α)_i(ω) =d_i^*(α)S_+^(α).The Γ^(αβ)_ij, Δ^(αβ)_ij are, respectively, real and complex functions Γ^(αβ)_ij(a_βω) =W_ij^(αβ)(a_βω) +W_ji^*(βα)(a_βω), Δ^(αβ)_ij(a_βω) = W_ij^(αβ)(a_βω) -W_ji^*(βα)(a_βω)/2i,where the function W_ij^(αβ)(a_βω) is obtained by an integral transform of the electromagnetic-field correlation functions which will be defined in the following. For various systems we are interested in, these correlation functions can usually be written as functions of the difference of the time parameter η - η'. 
We then usex_α(τ_α(η)) = x_α(η) to simplify our notations and write the electromagnetic-field correlation functions as G^(αβ)_ij(η - η') = ⟨0,M| E_i( x_α(η))E_j( x_β(η'))|0,M⟩.Using that s = η - η' and a_β =dτ_β/dη, a constant given by Eq. (<ref>), the W_ij^(αβ)(a_βω) function can be written as the followingW_ij^(αβ)(a_βω) = a_α a_β∫_0^∞ ds  e^i a_βω s G^(αβ)_ij(s).The electromagnetic-field correlation functions for inertial observers with the field prepared in the Minkowski vacuum is known as ⟨0,M| E_i( x_α(t))E_j( x_β(t'))|0,M⟩= -1/4 π^2(∂_0∂_0'δ_ij - ∂_i∂_j')(1/(t - t' - iϵ)^2- Δ𝐱^2 ),where Δ𝐱^2 = Δ x^2 + Δ y^2 + Δ z^2 and Δ x_k= x_k - x_k'. To study the role of the Unruh-Davies effect in the entanglement dynamics, we need to write the correlation function of Eq. (<ref>) in terms of the Rindler coordinates from Eq. (<ref>). With a suitable coordinate transformation of Eq. (<ref>), we obtain: G^(αβ)_ij(s) = a^4δ_ij/4 π^2 a_α^2 a_β^2ν_i^(αβ)(s)/[cosh(as - iϵ) - coshϕ]^3,withν_1^(αβ)(s)= cosh(as - iϵ) - coshϕ, ν_2^(αβ)(s)= ν_3^(αβ)(s) =cosh(as - iϵ) coshϕ -1,and coshϕ = 1 + (a_α - a_β)^2/2a_αa_β.Therefore, using the form of operators A^α_i(ω) of Eqs. (<ref>) we can write the dissipative contribution to the master equation in a more useful form ℒ{ρ_A(η)} = 1/2∑_α, β = 1^2∑_i,j = 1^3C_ij^(αβ)[ 2S_j^(β)ρ(η)S^(α)_i - { S^(α)_iS_j^(β),ρ(η) }].where we have defined the Kossakowski matrix C_ij^(αβ) as C_ij^(αβ) = δ_ij A^(αβ) - iϵ_ij3 B^(αβ) - δ_i3δ_3jA^(αβ),with A^(αβ) = Γ^(αβ)(a_βω) + Γ^(αβ)(-a_βω)/4, B^(αβ) = Γ^(αβ)(a_βω) - Γ^(αβ)(-a_βω)/4,and Γ^(αβ)(a_βω) = ∑_i, jΓ^(αβ)_ij(a_βω)d^* (α)_id^(β)_j. In the appendix of this work we have obtained the functions W_ij^(αβ), with those results we compute the functions Γ^(αβ)(a_βω) for α = β as Γ^(β)(α_βω)= a|d|^2ω^2/3π(1 + (α_βω)^2/α_βω)(1 + 1/e^2πα_βω - 1)∑_i = 1^3d̂_i^* (β)d̂_i^(β),and for α≠β in the following form Γ^(αβ)(a_βω) =a|d|^2csch^2ϕ/4πα_αα_β∑_i = 1^3d̂_i^* (α)d̂_i^(β)[ θ(ω)(𝒰̃_i(α_βω)/ 1 - e^- 2 πα_βω) + θ(-ω)(𝒰̃_i(α_β |ω|)/e^ 2 πα_β |ω| - 1) ],where we have used α_β = a_β/a, d_i^β = |d^(β)| d̂_i^β, with d̂_i^β being the unitary vector of the dipole operator and the function 𝒰̃_i(α_βω) defined by 𝒰̃_1(α_βω)= 16[ϕsin(α_βωϕ)-α_βωcos(α_βωϕ) ], 𝒰̃_2(α_βω)= 𝒰̃_3(α_βω)= 16 [ α_βωcos(α_βωϕ)coshϕ. .+ sin(α_βωϕ)sinhϕ((α_βω)^2 -csch^2 ϕ) ].Using the above equations and defining the parameter Γ_0 = a|d|^2 ω^2/4 π,we can construct the A(B)^(αβ) functions that appears in the Kossakowski matrix of Eq. (<ref>). The results of this functions is computed for α = β as A^(ββ) =Γ_01 + (α_βω)^2/3α_βω(1 + 2/e^2πα_βω - 1)∑_i = 1^3d̂_i^* αd̂_i^β, B^(αα) = Γ_01 + (α_βω)^2/3α_βω∑_i = 1^3d̂_i^* αd̂_i^β,and for α≠β A^(αβ) = Γ_0csch^2 ϕ/4 α_αωα_βω( 1 + 2/e^2πα_βω - 1)∑_i = 1^3d̂_i^* αd̂_i^β𝒰̃_i(α_βω) , B^(αβ) =Γ_0csch^2 ϕ/4 α_αωα_βω∑_i = 1^3d̂_i^* αd̂_i^β𝒰̃_i(α_βω). Once the master equation is obtained, we must analyze the entanglement dynamics by making use of some function to determine how much entanglement exists in the system. This will be discussed in the next section. § ENTANGLEMENT DYNAMICSThe aim of this section is to analyze the entanglement dynamics using the results of the previous section. There are many proposed functions to this purpose as the entanglement entropy, concurrence, negativity an others. In this work we choose to use the concurrence, introduced in Ref. 
<cit.> and the reason will be clear in this section.To simplify this discussion and the calculations we work in the basis of the collective states |G⟩ = |g_1⟩⊗|g_2⟩,|E⟩ = |e_1⟩⊗|e_2⟩, |S⟩ = 1/2(|g_1⟩⊗|e_2⟩ + |e_1⟩⊗|g_2⟩) |A⟩ = 1/2(|g_1⟩⊗|e_2⟩ - |e_1⟩⊗|g_2⟩),where we have two separated states |G⟩ an |E⟩ e two maximally entangled states, the symmetric state |S⟩ and the antisymmetric state |A⟩. The master equation in this basis can be written in the block-diagonal form ρ(τ) = ([ ρ_GG ρ_GE00; ρ_EG ρ_EE00;00 ρ_SS ρ_SA;00 ρ_AS ρ_AA ]),with ρ_ij = ⟨i|ρ|j⟩. We compute the matrix elements of the generalized master equation given by Eq. (<ref>) in the aforementioned basis. We obtain eight coupled linear differential equations involving all the matrix components of Eq. (<ref>). As we are dealing with a more general situation than the ones studied in Refs. <cit.>, our set of equations is more involved and are presented in the appendix.When dealing with atoms with most general motion, as used in this work, we have non-trivial equations for the matrix elements ρ_AS and ρ_SA. Therefore, we choose to study the entanglement dynamics using the concurrence<cit.>. The concurrence C[ρ] is a function of the matrix elements of ρ(η) which has a value C = 0 for separated states and C = 1 for maximally entangled states. It can be shown that the concurrence in this basis has the form C[ρ] = max{ 0, Q(η)},where Q(η) is written as Q(η) = √([ρ_AA(η) - ρ_SS(η) ]^2 - [ρ_AS(η) - ρ_SA(η) ]^2 )- 2 √(ρ_GG(η)ρ_EE(η)).Before we solve the master equation given by Eq. (<ref>) we perform an approximation of the values of the parameters ωα_β. We consider that ωα_2 >> ωα_1 and ωα_1 << 1. In this sense the functions of Eqs. (<ref>)-(<ref>) have the simplification: A^(2) = B^(2), A^(12) = B^(12) and A^(21) = B^(21). The dissipation contribution explicitly presented in the appendix of this work already takes these approximations into account. §.§ Entanglement Sudden DeathWe start setting the atoms to initial entanglement states. In this case, we observe the degradation of such states due to the interaction with the field.In Fig. <ref>, by considering the initial state as |S⟩ we fix a value for ωα_2 and observe that increasing ωα_1 the lifetime of the entangled states also increases. In fact, as α_β is related to the inverse of the proper acceleration, therefore decreasing its accelerations leads to the increase of the lifetime of entanglement. In this figure we consider the polarization unit vector as d̂ = (1,0,0).In Fig. <ref> we consider the same system with the same initial conditions but with a different polarization unit vector along with the z-direction d̂ = (0,0,1). Although the results for the two atom's polarization has the same pattern, in Fig. <ref> we observe that the effects of polarization in this case is a subtly change in the lifetime of the entangled states, with the x-direction polarization being the one with the larger lifetime. This dynamics appears in the same way when considering the initial state of the system as the antissimetric one |A⟩. In Fig. <ref> we observe that for a fixed value of ωα_2 and unit vector of the atoms polarization along the x-direction, the lifetime of the entangled state decreases if the proper acceleration of the first atom increase. For the polarization along the z-direction we also observe a decrease in the lifetime of the entanglement as shown in Fig. 
<ref>.As we have frequently mentioned in this work, the alternative possibility of the generalized master equation is to study the entanglement dynamics when both atoms' acceleration change. For a fixed value of the time parameter Γ_0τ and both atoms with the polarization along the x-direction, we have that the concurrence changing with the two parameters ωα_1 and ωα_2 has the form presented in Fig. <ref>. We observe that for the entanglement degradation phenomenon and the polarization along the x-direction we have similar results compared with the case with the scalar field in Ref. <cit.>.§.§ Entanglement HarvestingWe now choose the initial state of the system as a separated state |G⟩ or |E⟩. In this scenario we wish to study formation of entangled states. In Fig. <ref>, for the state |G⟩ and polarization unit vector along the x-direction we observe that, when entangled states are created, entanglement is increased by increasing ωα_1. For the same values used to study the entanglement sudden death, we do not obtain any entangled states. In Fig. <ref> we have the same initial configuration but with the polarization of the atoms along the z-direction. In this case, we observe an increase of the lifetime of the entangled states that were created and of the entanglement itself. For the initial state |E⟩ we only observe creation of entangled state for the polarization along the x-direction. For others polarizations the factor √(ρ_GGρ_EE) is always bigger than √((ρ_SS - ρ_AA)^2 -(ρ_SA - ρ_AS)^2 ). Such entanglement harvesting for this initial separated state happens for a slightly large time interval (Γ_0τ) compared with our previous results and for some small choice of the acceleration parameters. This dynamics is shown in Fig. <ref>. Finally, in Fig. <ref> we present the concurrence of entanglement formation changing with both acceleration of the atoms with polarization unit vector d̂ = (1,0,0). In our considerations, we observed that entangled states are created more when we increase ωα_2 compared with ωα_1. This pattern is also similar to the scalar field case.§ CONCLUSIONS In this paper, we investigate acceleration affects the entanglement dynamics of atoms using the open quantum systems framework. In this scenario we used the generalized master equation that allowed us to set the atoms with different values of proper acceleration. We extend our previous work, where we examined two-level systems coupled to a scalar field, to the case of atoms interacting with an electromagnetic field. Assuming that the atoms follow different hyperbolic trajectories with different proper times, which introduces some technical challenges that we address in our analysis, we have observed that this possibility affects the entanglement dynamics more than other system variables. As we are considering a pair of electric dipoles, polarization effects of atoms were studied. The function chosen to quantify how much of entanglement the system has was the concurrence.We observe that once the system is prepared in an entangled system different polarization choices lead us to enlarge the lifetime of the entangled state. This happens for both state |S⟩ and state |A⟩. In this phenomenon, for a fixed value of the proper acceleration of one atom, we observe that increasing the proper acceleration of the other decreases the lifetime of the entangled state because of the Unruh-Davies effect. 
The concurrence when both accelerations vary was also studied.In the sense of entanglement harvesting, preparing the system in the state |G⟩, fixing the proper acceleration of one atom α_2^-1 and varying the proper acceleration of the other α_1^-1 we observed that entangled states were created and the concurrence increases when the proper acceleration α_1^-1 decreases. The effect of polarization in this case was to slightly increase the lifetime of this states and the concurrence. For the system prepared in the state |E⟩ and both atoms with polarization along the x-direction we observe a certain difficult to create entangled states. The system only creates entangled states after a long time parameter with a greater concurrence, compared with the results for the state |G⟩. The polarization of the atoms plays a crucial role in this case since we do not observe entanglement harvesting for the polarization along the y or z direction.The results presented in this work would help us to understand how the Unruh-Davies effect affect the entanglement dynamics in a more realistic scenario compared with our previous work. Due to the strict relationship between Rindler coordinates and references close to the event horizons of a black hole, this work can serve as a guide for a study in a situation involving the dynamics of entanglement with atoms in a curved spacetime. This topic is under investigation by the authors.§ ACKNOWLEDGEMENTS This work was partially supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico – CNPq, under grants 303436/2015-8 (N.F.S.) and 317548/2021-2 (G.M.), and Fundação Carlos Chagas Filho de Amparo à Pesquisa do Estado do Rio de Janeiro – FAPERJ under grants E-26/202.725/2018 and E-26/201.142/2022 (G.M.).§ COMPUTATION OF THE Γ^(ΑΒ) FUNCTIONIn this appendix we present the explicit computation of the function W_ij^(αβ)(a_βω). To do so, we use Eq. (<ref>) into (<ref>). Therefore, our function is written as the following W_ij^(αβ)(a_βω)=a^4δ_ij/4 π^2 a_α a_β∫_0^∞ ds e^i a_βω sν_i^(αβ)(s)/[cosh(as - iϵ) - coshϕ]^3.For α=β, we have coshϕ = 1 and ν_1^(ββ)(s) = ν_2^(ββ)(s) = ν_3^(ββ)(s) = 2sinh^2(as/2). The above equation can be simplified as: W_ij^(ββ)(a_βω) =a^4δ_ij/16 π^2a_β^2∫_0^∞ ds e^i a_βω s/sinh^4(as/2 - i ϵ).Instead of solving this equation we use the form of Γ_ij^ββ(a_βω) from Eq. (<ref>) and, after some manipulations in the complex conjugate of the integral in Eq. (<ref>), we write it as the following Fourier transformation: Γ_ij^ββ(a_βω) =W_ij^(ββ)(a_βω) +W_ji^*(ββ)(a_βω)= a^4δ_ij/16 π^2a_β^2∫_-∞^∞ ds e^i a_βω s/sinh^4(as/2 - i ϵ).The integral that appeared in Eq. (<ref>) is similar to the one that appears in the scalar case and has the solution <cit.> Γ^(ββ)(a_βω) =δ_ij a_βω^3/3π(1 + a^2/a_β^2ω^2)×(1 + 1/e^2πα_βω/a - 1).For α≠β we choose to solve the integral of W_ij^(αβ)(a_βω) and use Eq. (<ref>). After some manipulations and using u = as we write Eq. (<ref>) as W_ij^(αβ)(a_βω)= a^3 δ_ij/4 π^2 a_α a_β∫_0^∞ du e^i ωα_β u(ν_i^(αβ)(u,ϵ)/sinh^3 ( u - i ϵ + Φ/2)sinh^3 ( u - i ϵ - Φ/2))=a^3 δ_ij/4 π^2 a_α a_β∫_0^∞ du f_i(u)withν_1 = cosh(u - i ϵ) - coshΦ, ν_2 = ν_3 = cosh(u - i ϵ)coshΦ - 1. To solve (<ref>) we make use of a contour integral in the complex plane illustrated in Fig. <ref>. We consider the contour by going from 0 to R in the real axis and closing with a semicircle in the upper half ( (u) >0), 𝒞_R,of the complex plane for ω > 0 or in the lower half ((u) <0), 𝒞_R^', for ω < 0. 
For ω>0, the poles inside the upper-half semicircle are given by u^(1)_n = Φ + i ϵ + i 2π n,with n ≥ 0. The contour integral can be split it up to ∮_ω>0du f_i(u)= ∫_0^Rdu[...] + ∫_C_Rdu f_i(u) + ∫_iR^0 du f_i(u)= 2 π i ∑_n = 0^∞Res (f_i(u); u = u^(1)_n)= π/sinh^2 Φ( 1 + 1/ e^2 πα_βω - 1)𝒰_i(ω, α_β),where we have considered the limit ϵ→ 0 and defined the following functions 𝒰_1(ω, α_β) = - 8 e^ i α_βωΦ(α_βω + i Φ),𝒰_2(ω, α_β) = 𝒰_3(ω, α_β) = 8 e^ i α_βωΦ(α_βωcoshΦ+ i/sinhΦ(1 - α_β^2ω^2sinh^2 Φ)).For ω <0, the poles inside the lower-half semicircle contour are given by u^(2)_n = Φ- i 2π n,with n > 0. As before, the contour integral can be divided into ∮_ω<0du f_i(u)= ∫_0^Rdu[...] + ∫_C^'_Rdu f_i(u) + ∫_-iR^0 du f_i(u, ϵ)= - 2 π i ∑_n = 1^∞Res (f_i(u); u = u^(2)_n)= π/sinh^2 Φ(1/e^ 2 πα_β|ω| - 1 )𝒰_i^*(|ω|, α_β).By considering the limit R →∞, the integrals along C_R and C^'_R vanishes by the Jordan lemma. We use the results from Eqs. (<ref>) and (<ref>) into Eq. (<ref>) and, after some algebraic manipulations, one gets W_ij^(αβ)(a_βω) = a^3 δ_ij/4 π a_α a_βcsch^2 Φ{θ(ω)[ ( 1 + 1/ e^ 2 πα_βω - 1)𝒰_i(ω, α_β) + 𝒱_i(ω, α_β)]..+ θ(-ω)[(1/e^ 2 πα_β|ω| - 1 )𝒰_i^*(|ω|, α_β) + 𝒱_i^*(|ω|, α_β)]},where we have defined 𝒱_i(ω, α_β) = i sinh^2 Φ/π∫_ 0^2 π du e^- ωα_β u(ν_i^(αβ)(iu)/(cos u - coshΦ)^3). It is important to point out that we are not considering the summation convention over repeated indices in this work. § EXPLICIT FORM OF THE GENERALIZED MASTER EQUATION In this appendix we present the explicit form of the master equation where we consider the contribution only of the dissipative part of it as discussed in Sec. <ref>. From Eq. (<ref>), considering ωα << 1,we only have to compute eight components that are shown in the following d/dτρ_GG(τ) = (-2 A^(11) + 2 B^(11)) ρ_GG(τ)+ (A^(11) + B^(11) - 2 (B^(12) - B^(22) + B^(21))) ρ_AA(τ)+ (A^(11) + B^(11) - 2 (B^(12) + B^(22) -B^(21))) ρ_AS(τ)+(A^(11) + B^(11) + 2 B^(12) - 2 B^(22) - 2 B^(21)) ρ_SA(τ)+ (A^(11) + B^(11) + 2 B^(12) + 2 B^(22) + 2 B^(21)) ρ_SS(τ), d/dτρ_SS(τ) =(A^(11) - B^(11)) ρ_GG(τ)+ (-B^(11) + B^(12) + B^(22) - B^(21)) ρ_AS(τ)+ (A^(11) + B^(11) + 2 (B^(12) + B^(22) + B^(21))) ρ_EE(τ)+ (-B^(11) - B^(12) + B^(22) + B^(21)) ρ_SA(τ)+ (-2 A^(11) - 2 B^(12) - 2 B^(22) - 2 B^(21)) ρ_SS(τ), d/dτρ_AA(τ) = (A^(11) - B^(11)) ρ_GG(τ)-2 (A^(11) - B^(12) + B^(22) - B^(21)) ρ_AA(τ)+ (-B^(11) + B^(12) + B^(22) - B^(21)) ρ_AS(τ)+ (A^(11) + B^(11) - 2 B^(12) + 2 B^(22) - 2 B^(21)) ρ_EE(τ)+ (-B^(11) - B^(12) + B^(22) + B^(21)) ρ_SA(τ), d/dτρ_EE(τ) = -2 ( A^(11) + B^(11) +2 B^(22)) ρ_EE(τ)+(A^(11) - B^(11)) ρ_AA(τ)+ (-A^(11) + B^(11)) ρ_AS(τ)+ (-A^(11) + B^(11)) ρ_SA(τ)+ (A^(11) - B^(11)) ρ_SS(τ), d/dτρ_AS(τ) =(A^(11) - B^(11)) ρ_GG(τ)- 2 (A^(11) + B^(22)) ρ_AS(τ)+(-B^(11) - B^(12) + B^(22) + B^(21)) ρ_AA(τ)+ (-A^(11) - B^(11) + 2 B^(12) + 2 B^(22) - 2 B^(21)) ρ_EE(τ)+ (-B^(11) - B^(12) + B^(22) + B^(21)) ρ_SS(τ), d/dτρ_SA(τ) =(A^(11) - B^(11)) ρ_GG(τ)+(-B^(11) + B^(12) + B^(22) - B^(21)) ρ_AA(τ)+ (-A^(11) - B^(11) - 2 B^(12) + 2 B^(22) + 2 B^(21)) ρ_EE(τ)-2 ( A^(11) + B^(22)) ρ_SA(τ)+ (-B^(11) + B^(12) + B^(22) - B^(21))ρ_SS(τ), d/dτρ_GE(τ) = -2 (A^(11) + B^(22)) ρ_GE(τ), d/dτρ_EG(τ) = -2 (A^(11) + B^(22)) ρ_EG(τ). unsrt
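As an illustration of how the eight coupled equations above can be handled numerically, the following Python sketch integrates a generic linear system dρ/dτ = M ρ for the vector of matrix elements (ρ_GG, ρ_SS, ρ_AA, ρ_EE, ρ_AS, ρ_SA, ρ_GE, ρ_EG) and evaluates the concurrence C[ρ] = max{0, Q} defined in the Entanglement Dynamics section along the trajectory. The generator matrix M has to be assembled from the A^(αβ) and B^(αβ) coefficients according to the equations listed above; the zero matrix used here is only a placeholder.

import numpy as np
from scipy.integrate import solve_ivp

# component order: (rho_GG, rho_SS, rho_AA, rho_EE, rho_AS, rho_SA, rho_GE, rho_EG)
def concurrence(rho):
    # C = max(0, Q) with Q as defined in the text; the radicand is real for a Hermitian rho
    gg, ss, aa, ee, as_, sa = rho[:6]
    radicand = ((aa - ss) ** 2 - (as_ - sa) ** 2).real
    q = np.sqrt(radicand) - 2.0 * np.sqrt((gg * ee).real)
    return max(0.0, q)

def integrate_master_equation(M, rho0, tau_max, n_points=200):
    # integrate d(rho)/d(tau) = M rho for the eight matrix elements above
    sol = solve_ivp(lambda tau, rho: M @ rho, (0.0, tau_max), rho0,
                    t_eval=np.linspace(0.0, tau_max, n_points))
    return sol.t, sol.y

M = np.zeros((8, 8))          # placeholder: fill with the coefficients of the equations listed above
rho0 = np.array([0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])   # initial state |S><S|, concurrence 1
tau, rho_t = integrate_master_equation(M, rho0, tau_max=5.0)
C_of_tau = [concurrence(rho_t[:, k]) for k in range(rho_t.shape[1])]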
http://arxiv.org/abs/2312.16342v2
{ "authors": [ "M. S. Soares", "N. F. Svaiter", "G. Menezes" ], "categories": [ "hep-th", "gr-qc", "quant-ph" ], "primary_category": "hep-th", "published": "20231226214425", "title": "Entanglement dynamics of accelerated atoms interacting with the Electromagnetic Field" }
^1 Institute for Physical Chemistry, ^2 Department of Physics, University of Münster, Germany ^3NanoElectronics Group, Faculty of Electrical Engineering, Mathematics and Computer Science, MESA+ Institute for Nanotechnology, and Center for Brain-Inspired Nano Systems (BRAINS), University of Twente, Enschede, The Netherlands^4Department of Applied Physics, ^5Center for Computational Energy Research (CCER), Eindhoven University of Technology, The Netherlands^*These authors contributed equally to the workNonlinear behavior in the hopping transport of interacting charges enables reconfigurable logic in disordered dopant network devices, where voltages applied at control electrodes tune the relation between voltages applied at input electrodes and the current measured at an output electrode. From kinetic Monte Carlo simulations we analyze the critical nonlinear aspects of variable-range hopping transport for realizing Boolean logic gates in these devices on three levels. First, we quantify the occurrence of individual gates for random choices of control voltages. We find that linearly inseparable gates such as the XOR gate are less likely to occur than linearly separable gates such as the AND gate, despite the fact that the number of different regions in the multidimensional control voltage space for which AND or XOR gates occur is comparable. Second, we use principal component analysis to characterize the distribution of the output current vectors for the (00,10,01,11) logic input combinations in terms of eigenvectors and eigenvalues of theoutput covariance matrix. This allows a simple and direct comparison of the behavior of different simulated devices and a comparison to experimental devices. Third, we quantify the nonlinearity in the distribution of the output current vectors necessary for realizing Boolean functionality by introducing three nonlinearity indicators. The analysis provides a physical interpretation of the effects of changing the hopping distance and temperature and is used in a comparison with data generated by a deep neural network trained on a physical device.Critical nonlinear aspects of hopping transport for reconfigurable logic in disordered dopant networks Henri Tertilt^1*, Jonas Mensing^1*, Marlon Becker^1, Wilfred G. van der Wiel^2,3, Peter A. Bobbert^3,4,5, Andreas Heuer^1 January 14, 2024 =============================================================================================================================§ INTRODUCTIONThe development of reconfigurable logic <cit.> enables new approaches for computing, using the concept of intelligent matter <cit.>. A key ingredient is nonlinear behavior <cit.>. A large range of physical properties can be generate by appropriate doping of semiconductors <cit.>. Here we study a dopant network processing unit (DNPU), where dopants are implanted in silicon with a concentration favoring variable-range hopping of electrons among the dopants close to the silicon surface <cit.>. The dopant network is contacted by electrodes deposited on the silicon surface, allowing the application of voltages and the measurement of currents. Reconfigurable logic functionality can be obtained with these DNPUs, among which Boolean functionality.The standard usage is at liquid-nitrogen temperature (T=77 K), but room-temperature operation was also demonstrated <cit.>. Increasingly complex functionalities can be achieved by interconnecting DNPUs in networks <cit.>. 
Logic circuitry based on DNPUs has the potential to outcompete CMOS-based logic in terms of energy efficiency, latency and footprint <cit.>.We consider here a DNPU with eight contacts: one electrode is chosen as output, two electrodes are chosen as inputs, while voltages applied at the other five electrodes control the input-output characteristics. To verify whether the system behaves like one of the six basic Boolean logic gates (AND, OR, NOR, NAND, XOR, XNOR), one applies voltages to the two input electrodes corresponding to the possible logic combinations `00', `10', `01', `11'. By studying the four-dimensional vector of currents at the output electrode for these inputs one can check whether the system displays the desired Boolean functionality. By introducing a fitness function for each logic gate, the functionality can be quantified and subsequently optimized by variation of the voltages at the control electrodes. Two different approaches for this optimization have been used: a genetic algorithm <cit.> and gradient descent on a deep-neural-network (DNN) surrogate model (SM) trained to reproduce the full input-output characteristics of the DNPU <cit.>.In order to obtain a detailed physical understanding of the atomic-scale behavior of the DNPUs, we recently developed a microscopic model of their functioning<cit.>. The model is based on variable-range hopping of interacting charges. The charge hopping was simulated with kinetic Monte Carlo (KMC) simulations. Like in the experiment, realizations of Boolean logic gates were obtained by varying the control voltages. We gained important insight into the functioning of DNPUs by mapping out the spatial current and voltage distribution for high-fitness realizations of specific gates. By studying the sensitivity of the fitness to variations of the control voltages, the impact of nonlinear effects and the major differences between the linearly separable AND gate and the linearly inseparable XOR gate were analyzed. In the present work, we take a complementary approach to understanding the functioning of DNPUs by analyzing their statistical properties and identifying the critical nonlinear aspects that allow for reconfigurable logic.We study the four-dimensional (4D) current vectors for the input combinations `00', `10', `01', `11' for a large number of different control voltage combinations. Experimentally, a similar study was done, giving rise to abundance plots of individual Boolean gates as a function of a minimal gate fitness <cit.>. Here, we extend and go beyond the analysis of the abundance plots with the goal to grasp the key statistical features of the distribution of Boolean gate realizations in the five-dimensional (5D) space of control voltages. As a central approach, we use a principal component analysis (PCA) <cit.> to define an orthogonal system of directions in the 4D current vector space with uncorrelated properties, resulting from an eigenvector analysis of the covariance matrix of the output current vectors. The corresponding eigenvalues contain information about the fluctuations of the current vectors in the eigenvector directions. A common application of the PCA is the simplification of high-dimensional trajectories, where directions with small eigenvalues are eliminated <cit.>. Also in the field of reservoir computing the PCA method has been used, either to construct an autoencoder <cit.> or to use it as a tool to assess the internal representation ability of the self-organized reservoir. 
Also in that application only the directions with the largest eigenvalues matter. By contrast, in the PCA applied in this work all eigenvector directions are important. We will see that even the direction with the smallest eigenvalue turns out to be essential for the logic functionality of DNPUs. We will also show that the PCA eigenvectors and eigenvalues allow an objective comparison with experimental data. Going beyond the PCA, we map the current vectors onto three variables that quantify the different nonlinear effects inherently present in the DNPU. This decomposition separates nonlinear single-input responses and nonlinear cross-input responses. Related to these three variables we introduce three key indicators that characterize the occurrence of nonlinearities in the current vector distributions. From the insights of the PCA analysis and this decomposition, key properties of the abundance plots can be understood on a deeper level than before. Specifically, we use these insights to better understand the dependence of the DNPU logic functionality on the hopping distance and on temperature. We expect that the introduced concepts have a general applicability to a large variety of nonlinear disordered systems other than DNPUs, including nanoparticle networks <cit.>. Additionally, we analyze the spatial correlation of fitness values for different gates in the 5D space of control voltages. In this way we obtain information about the hypervolume of individual gates in this space. This hypervolume is directly related to the sensitivity of the gate fitness to variations of the control voltages, which is of major practical relevance. The outline of this work is as follows. First, we introduce the theoretical background of the different concepts and models used. Then we discuss the results at the three different analysis levels mentioned above. We start with results from a general statistical analysis, in particular abundance plots and the spatial correlation of fitness values. After that we discuss the results for the output covariance matrices in the framework of the PCA. We then discuss the results obtained from the decomposition of the current vectors into the three nonlinearity indicators. Finally, we provide a summary, conclusions and an outlook. § THEORETICAL BACKGROUND §.§ Model We study DNPUs with an electrode configuration as sketched in Figure <ref>. The input voltages are U_3 (also denoted as U_ in,l, l: left) and U_2 (also denoted as U_ in,r, r: right). The control voltages are U_1, and U_4-U_7. The output electrode is grounded (U_ out=0) and the output current is I_ out. We study in this work two devices D1 and D2 with different configurations of 200 randomly placed boron dopants and 3 counterdopants in a circular area with a diameter of 150 nm. These values are representative of the experimental situation <cit.>. We consider electron hops between dopants i and j, separated by a distance r_ij. We assume phonon-mediated variable-range hopping described by the Miller-Abrahams rate Γ_ij = ν_0 exp(-2 r_ij/a) exp(-Δ E_ij/k_ BT) if Δ E_ij > 0 and Γ_ij = ν_0 exp(-2 r_ij/a) otherwise <cit.>. For the hopping prefactor ν_0 we take a typical phonon frequency, but the specific value does not matter for the statistical results reported in this work. Furthermore, k_ BT is the thermal energy, Δ E_ij the energy change involved in the hop, and a is the dopant wave function decay length, which we will call the `hopping distance'.
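As an illustration, the hop rates entering the simulations can be evaluated with a few lines of code; the following minimal Python sketch is ours, and the default prefactor value is only an assumed typical phonon frequency, not a parameter quoted in this work.

```python
import numpy as np

def miller_abrahams_rate(r_ij, delta_E, a, T, nu_0=1.0e12):
    """Miller-Abrahams hopping rate from site i to site j (sketch).

    r_ij    : distance between the dopants (same length unit as a)
    delta_E : energy change of the hop in J; uphill hops (delta_E > 0)
              are thermally suppressed, downhill hops are not
    a       : dopant wave-function decay length, the 'hopping distance'
    T       : temperature in K
    nu_0    : attempt frequency in 1/s (assumed typical phonon frequency)
    """
    k_B = 1.380649e-23  # Boltzmann constant in J/K
    rate = nu_0 * np.exp(-2.0 * r_ij / a)
    if delta_E > 0:
        rate *= np.exp(-delta_E / (k_B * T))
    return rate

# Example: a 10 nm hop with a 10 meV uphill energy change at 77 K, for a = 5 nm.
print(miller_abrahams_rate(10e-9, 10e-3 * 1.602e-19, 5e-9, 77.0))
```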
The Miller-Abrahams rate has initially been derived for electron hopping between dopants in semiconductors and it is assumed here to be valid for all hopping processes, also for hopping between the electrodes and the boron dopants. The electrodes are regarded as infinite reservoirs of electrons and are modeled as circular segments, with their centers defining the distance to the dopants. The following energy contributions are present: (i) the electrostatic energy, given by the external electrostatic potential imposed by the voltages applied to the electrodes, (ii) the Coulomb interactions between all the electrons and between the electrons and the ionized counterdopants, and (iii) a random Gaussian contribution to the dopant ionization energy with a standard deviation σ = 0.1 eV, which was found in Ref. <cit.> to yield a resistance-temperature dependence for a low voltage applied between neighboring electrodes in fair agreement with experiment. To obtain the electrostatic energy we solve the two-dimensional (2D) Laplace equation for the electrostatic potential. For this, we use a triangular mesh and a finite element method based on the MFEM library <cit.>. The dielectric constant in the Coulomb interaction is chosen as ϵ_ r=12, close to that of silicon. The hopping distance a is varied from 2.5 to 10 nm for simulations at the standard temperature 77 K and from 1.25 to 10 nm for simulations at room temperature (293 K). For the comparison of the model results with experiment, we take the typical value a=5 nm applicable to a dopant like boron in silicon. §.§ Kinetic Monte Carlo simulations We use a standard, in-house developed rejection-free KMC algorithm that considers at each step all possible electron hops in the system <cit.>. Voltages are applied to seven electrodes and the current is determined at a chosen grounded output electrode by counting the net number of electron hops to or from that electrode in the simulated time interval. Starting with as many electrons in the system as counterdopants (neutral system), 10^4 KMC equilibration steps are sufficient to reach a steady state current for all considered voltage combinations. Unless stated otherwise, we determine the current in a time interval corresponding to 10^7 KMC steps and estimate the statistical uncertainty from the current fluctuations in 100 equally long subintervals of 10^5 KMC steps. §.§ Hypercube sampling We randomly draw the five control voltages from a hypercube (hypercube sampling) such that each control voltage ranges between -1 and 1 V. The input voltages are fixed at two values representing the logic levels 0 and 1. For a given set of control voltages we obtain the four-dimensional current vector (I_00, I_10, I_01, I_11) from the KMC simulations. The hypercube sampling involved in total 10^4 different random choices of control voltages. §.§ Comparison with deep-neural-network surrogate model We compare results of KMC simulations with those of a deep-neural-network (DNN) surrogate model (SM) trained on experimental data <cit.>. The SM accurately reproduces the measured output current I_ out as a function of all seven input voltages U_1-U_7 for an experimental DNPU as sketched in Figure <ref>. Hence, our comparison is equivalent to that with a real-world device. The DNN consists of an output layer with a single neuron giving I_ out as output and an input layer with seven neurons for U_1-U_7 as inputs. In between are six hidden layers, each with 90 neurons. The DNN was trained on the experimental data for voltages U_1-U_5 in the interval [-1.2,0.6] V and U_6,U_7 in the interval [-0.7,0.3] V.
The comparison with the KMC simulations was done for U_1-U_5 in the interval [-0.5,0.5] V and U_6,U_7 in the interval [-0.3,0.3] V to avoid extrapolation beyond the trained range. The input voltages for Boolean logic in the comparison are either 0 V (logical 0) or 0.1 V (logical 1). §.§ Fitness function For the abundance plots of a given Boolean logic gate we have used the fitness function F defined as <cit.> F = m/(√( MSE) + k|c|), where m and c are the slope and offset of a linear fit of the four output currents to the logic table of the considered gate, i.e., to the vector of target outputs (0 or 1) for the four logic input combinations. MSE denotes the mean squared error of the linear regression. For the constant k we choose 0.01, as in the experimental work <cit.>. A finite value of k rewards a large relative separation of the high and low current levels, which is relevant for the experimental separation of these levels. However, as discussed below, when applying the PCA, normalized currents have to be used, which are not sensitive to this separation. §.§ Data preparation From the hypercube sampling we obtain a set of 10^4 current vectors (I_00, I_10, I_01, I_11). The fitness values used in the abundance plots are directly based on this data set. For the subsequent analysis we transform each current vector by subtracting the average current I_ av=1/4(I_00+I_10+I_01+I_11) of the four components from each component. First, in this way we increase the sensitivity to nonlinear effects relative to the average current and, second, the analytical calculations, outlined below, can be performed by solving quadratic rather than cubic equations. We denote the average of I_ av over all 10^4 current vectors as ⟨ I_ av⟩. §.§ Principal component analysis In a principal component analysis (PCA), fluctuations in a multi-component variable are expressed along orthogonal directions that are linearly uncorrelated. The first direction displays the largest fluctuations. After projecting out the first direction, the second direction displays the largest fluctuations in the remaining subspace, and so on. In many applications the PCA is used to reduce the dimension of the problem at hand. For example, projecting out the dimension with the lowest eigenvalue of the covariance matrix typically has limited impact on the data set, but allows for a lower-dimensional description. Here, we use the PCA to characterize the statistical fluctuations in the set of current vectors obtained in the hypercube sampling. This allows for the identification of relevant directions and inherent symmetries as well as a direct comparison between simulation and experiment. To apply the PCA, we consider the set of 10^4 vectors from the hypercube sampling (after subtracting I_ av from each component). From this set one can define the symmetric covariance matrix C: C= [ σ^2 (I_00) Cov(I_00, I_10) Cov(I_00, I_01) Cov(I_00, I_11); Cov(I_10, I_00) σ^2 (I_10) Cov(I_10, I_01) Cov(I_10, I_11); Cov(I_01, I_00) Cov(I_01, I_10) σ^2 (I_01) Cov(I_01, I_11); Cov(I_11, I_00) Cov(I_11, I_10) Cov(I_11, I_01) σ^2 (I_11) ], where σ^2(I_ij) are the variances and Cov(I_ij,I_kl) the covariances of the current vector components. The PCA implies diagonalization of C, yielding the eigenvalues λ_i and the corresponding eigenvectors 𝐉_i (i=0,...,3). Due to the subtraction of I_ av from each component, one eigenvalue is λ_0=0 with eigenvector 𝐉_0=1/2 (1,1,1,1). The remaining eigenvalues are sorted such that λ_j ≤λ_i if j > i.
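A minimal numpy sketch of this construction, with a random placeholder for the sampled current vectors and illustrative variable names of our own choosing, could look as follows:

```python
import numpy as np

# Placeholder for the 10^4 sampled current vectors (I_00, I_10, I_01, I_11).
rng = np.random.default_rng(0)
I = rng.normal(size=(10_000, 4))

# Subtract the average current I_av of the four components from each vector.
I_shifted = I - I.mean(axis=1, keepdims=True)

# Covariance matrix of the shifted components and its principal components.
C = np.cov(I_shifted, rowvar=False)                 # 4x4 covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)                # ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]  # sort descending

# By construction, the smallest eigenvalue is (numerically) zero with an
# eigenvector proportional to (1, 1, 1, 1)/2.
print(eigvals)
```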
§.§ Decomposition procedure In a decomposition procedure that turns out to be very useful, we map the current vectors onto four new variables via [ I_ av = 1/4 (I_11 + I_10 + I_01 + I_00 ),;M_ l = 1/4 (I_11 + I_10 - I_01 - I_00 ),;M_ r = 1/4 (I_11 - I_10 + I_01 - I_00 ),; X = 1/4 (I_11 - I_10 - I_01 + I_00 ). ] These variables have the following interpretation: (i) I_ av is the average current introduced above (it is zero if I_ av was already subtracted from the current vector components). (ii) M_ l reflects the increase of I_ out upon increasing the voltage of the left electrode. This increase is averaged over the two possible input voltages of the right electrode. M_ l can thus be interpreted as an effective conductance with respect to the left input voltage. (iii) M_ r has a similar interpretation as M_ l but with respect to the right electrode. (iv) In case the increase of I_ out upon increasing the left input voltage is independent of the right input voltage one has I_11 - I_01 = I_10 - I_00. This is equivalent to X = 0. An equivalent argument holds when left and right electrode are interchanged. Thus, X is a measure of the cross-correlation between the two inputs and can thus be interpreted as a nonlinear coupling between them. §.§ Calculation of PCA eigenvalues and eigenvectors In terms of the new variables, Eq. (<ref>) can be rewritten as [ I_00 = I_ av - M_ l - M_ r + X,; I_10 = I_ av + M_ l - M_ r - X,; I_01 = I_ av - M_ l + M_ r - X,; I_11 = I_ av + M_ l + M_ r + X. ] In principle, the eigenvectors and eigenvalues of the PCA covariance matrix can be expressed in terms of the variances and covariances of the new variables appearing on the right hand side of Eq. (<ref>). To simplify the calculation we will neglect correlations between X and the two variables M_ l and M_ r. As shown below, the corresponding Pearson correlation coefficients are indeed very small. Then, each term of the PCA matrix can, for I_ av=0, be written in the form Cov(I_ij,I_kl) = d_ lσ^2( M_ l) + d_ rσ^2( M_ r) + d_xσ^2 (X) + 2 d_ lr Cov(M_ l, M_ r), with {i,j,k,l}∈{0,1}, {d_ l,d_ r,d_x}∈{-1,1}, and {d_ lr}∈{-1,0,1}. The calculation of these covariances is straightforward. E.g., one has Cov(I_00,I_01) = σ^2( M_ l) - σ^2( M_ r) - σ^2 (X) + 2× 0 × Cov(M_ l, M_ r). Next, we define the four vectors [ 𝐯_0 =1/2 (1,1,1,1),; 𝐯_1 =1/√(2) (0,-1,1,0),; 𝐯_2 =1/√(2) (-1,0,0,1),; 𝐯_3 =1/2 (-1,1,1,-1). ] One can directly check that 𝐉_0=𝐯_0 and 𝐉_3=𝐯_3 are eigenvectors of the PCA covariance matrix with eigenvalues λ_0 = 0 and λ_3 = 4 σ^2(X), respectively. The two remaining eigenvalues can be written as λ_1,2 = 2(σ^2( M_ l) + σ^2(M_ r) ) ± 2√((σ^2(M_ l) - σ^2( M_ r))^2 + 4 [Cov( M_ l,M_ r)]^2). The corresponding eigenvectors 𝐉_1 and 𝐉_2 are linear combinations of 𝐯_1 and 𝐯_2. The result becomes particularly simple if one has σ^2(M_ l) =σ^2(M_ r) ≡σ^2 (M_ l,r). We denote this scenario as l-r symmetry. A special realization of l-r symmetry occurs when the arrangement of dopants and electrodes displays left-right mirror symmetry (up-down mirror symmetry in Figure <ref>), which in practice is only an idealized limit. The eigenvectors are then given by 𝐉_i=𝐯_i with eigenvalues [λ_0=0,;λ_1 =4 σ^2(M_ l,r) (1 +Corr(M_ l,M_ r)),;λ_2 =4 σ^2(M_ l,r) (1 -Corr(M_ l,M_ r)),;λ_3 =4 σ^2(X), ] where Corr(A,B) is the Pearson correlation coefficient between A and B. §.§ Nonlinearity indicators From the hypercube sampling one obtains distributions of the three variables M_ l, M_ r and X.
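A short sketch of this decomposition, applied to an array of (shifted) current vectors and using placeholder data and variable names of our own choosing, is given below.

```python
import numpy as np

# I_shifted: (N, 4) array of current vectors in the order (I_00, I_10, I_01, I_11),
# with the average current already subtracted; random placeholder for illustration.
rng = np.random.default_rng(1)
I_shifted = rng.normal(size=(10_000, 4))

I00, I10, I01, I11 = I_shifted.T
M_l = 0.25 * (I11 + I10 - I01 - I00)  # effective conductance w.r.t. the left input
M_r = 0.25 * (I11 - I10 + I01 - I00)  # effective conductance w.r.t. the right input
X = 0.25 * (I11 - I10 - I01 + I00)    # nonlinear coupling between the two inputs

# First moments and variances of the three distributions.
print(M_l.mean(), M_l.var(), M_r.mean(), M_r.var(), X.var())
```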
We will consider here the first moment ⟨ A⟩, the second moment ⟨ A^2 ⟩, and the variance σ^2(A) of these distributions. If the dopant network would show a purely linear response to changes in the input voltages, I_1j - I_0j would be a constant 2M_ l^0 independent of j and independent of the control voltages. For the probability distribution function of M_ l we would then have p(M_ l) = δ (M_ l - M_ l^0). The same would hold for the probability distribution function of M_ r: p(M_ r) = δ (M_ r - M_ r^0). M_ r^0 could be different from M_ l^0, e.g., if the positions of the dopants would not be fully symmetric relative to the two input electrodes. Due to nonlinear effects it is expected that M_ l and M_ r fluctuate for the different control voltages chosen in the hypercube sampling. For perfect realizations of Boolean gates we have I_01 =I_10, corresponding to M_ l = M_ r. This is automatically fulfilled if Corr(M_ l,M_ r) =1. Thus, one may expect that a high Pearson correlation between M_ l and M_ r is advantageous for the realization of all gates.For the realization of NAND and NOR gates, it is essential that an increase of an input voltage may lead to a decrease of the output current. This is equivalent to the occurrence of a negative differential resistance (NDR). In our notation, this would imply M_ l < 0 (and/or M_ r < 0). Thus, of utmost relevance is the probability that NDR occurs. For this purpose we define the left and right NDR indicators Q_ l ≡ 1/2 [1 - tanh (⟨ M_ l⟩/σ(M_ l) ) ] Q_ r ≡ 1/2 [1 - tanh (⟨ M_ r⟩/σ(M_ r) ) ].For a symmetric distribution, a value of Q_ l = 0.5 implies that in 50% of all realizations NDR is present for the left input, because the average ⟨ M_ l⟩ is then zero. In contrast, in the limitQ_ l→ 0 the first moment is positive and much larger than the standard deviation. Thus, NDR does not occur. Analogously, in the opposite limit Q_ l→ 1 NDR will always occur. In general, the larger Q_ l, the higher the probability that NDR occurs for a certain combination of control voltages. Thus, Q_ l is a measure of how likely NDR is upon variation of the left input voltage. Identical arguments hold for Q_ r. Note that in this extension of the PCA also the first moment of the distributions plays an essential role.Both the XOR and the XNOR gates are linearly inseparable. Among others things, this implies that non-monotonic behavior upon increasing the sum of the two input voltages must be present. The occurrence of such non-monotonic behavior is strongly connected to the variable X. For a perfect XOR or XNOR gate one has I_11 = I_00 and I_01 = I_10, and thus 2X =I_11 - I_01 (or, alternatively, 2X =I_11 - I_10). A large negative value of X is required for a high-fitness XOR gate and a large positive value is required for a high-fitness XNOR gate, relative to the scale of fluctuations of I_11 - I_01 (or I_11 - I_10). Since the typical scale of the fluctuations of I_11 - I_01 (or I_11 - I_10) is the same as that of M_ l (or M_ r) and since distributions are characterized by their second moments, we choose as an indicator for the nonlinear coupling between the two inputsQ_ lr≡2⟨ X^2⟩/⟨M_ l^2 ⟩ + ⟨ M_ r^2 ⟩.This completes the set of three nonlinearity indicators.§ RESULTS: GENERAL STATISTICAL PROPERTIES §.§ Gate abundances In the experimental work <cit.> the emergence of Boolean functionality was illustrated by abundance plots, representing the probability that the current vector in a random hypercube sampling has a fitness larger than a given value F_ min. 
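How such abundance curves can be obtained from the sampled current vectors is sketched below for the AND gate; the fitness follows the definition given in the Fitness function subsection above, while the placeholder data and all variable names are our own illustrative choices.

```python
import numpy as np

def gate_fitness(I_vec, logic_table, k=0.01):
    """Fitness of one current vector with respect to a target logic table."""
    L = np.asarray(logic_table, dtype=float)
    m, c = np.polyfit(L, I_vec, 1)           # linear fit: I ≈ m*L + c
    mse = np.mean((m * L + c - I_vec) ** 2)  # mean squared error of the fit
    return m / (np.sqrt(mse) + k * abs(c))

# Placeholder for the 10^4 sampled current vectors (I_00, I_10, I_01, I_11).
rng = np.random.default_rng(2)
currents = rng.normal(size=(10_000, 4))

AND = (0, 0, 0, 1)  # target outputs for the input combinations (00, 10, 01, 11)
F = np.array([gate_fitness(I_vec, AND) for I_vec in currents])

# Abundance: fraction of control-voltage samples with fitness above F_min.
for F_min in (1, 2, 5):
    print(F_min, np.mean(F > F_min))
```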
In Figure <ref> we show simulated abundance plots for all six Boolean gates at T=77 K for both devices D1 and D2, based on the random sampling of 10^4 control voltage combinations. In order to study the influence of the hopping distance a, we show results for a=2.5, 5 and 10 nm. The abundance plots are qualitatively similar for both devices, but show important quantitative differences, caused by the different locations of the (counter)dopants. A general observation is that, by chance, there are less high-fitness gates for D2.We clearly observe a similar fitness threshold-dependence of the AND and OR gates, the NAND and NOR gates, as well as the XOR and XNOR gates. This pair-wise similarity is in line with our above observationthat the values of Q_l and Q_r should be important for the realization ofNAND and NOR gates, while the value of Q_lr should be important for the realization of XOR and XNOR gates. We see in the middle panels of Figure <ref> that for the standard hopping distance a=5 nm it is much more likely to find, for example, an AND gate than an XOR gate for a random choice of control voltages.We also see from Figure <ref> that the number of logic gates with high fitness values strongly increases with decreasing a. For example, for a fitness threshold F_min=8 the number of AND gates is increased by more than an order of magnitude for both devices when comparing a=10 to a=2.5 nm. Furthermore, the fitness distributions of the 6 different gates tend to approach each other with decreasing a. We attribute the increase in the occurrence of high-fitness gates with decreasing a to the increased importance of Coulomb interactions in determining the current flow. For large a charges can hop far away to energetically favorable sites that they cannot reach for small a. In this way they can bypass sites close to other charges that are inaccessible because of Coulomb repulsion. In the extreme case of very large a the dopant network would act as a linearly resistive medium, with a trivial linear relation between the input voltages and the output current. It is the complex input-output relation for small a due to Coulomb interactions that leads to the occurrence of high-fitness gates in the 5D control voltage space. We note in passing that a further decrease of a below 2.5 nm results in a significant number of simulation runs where no output current is obtained on the typical time scale of the KMC simulations. This is a consequence of the exponential dependence of the hopping rate on a. In Figure <ref> of the Supplemental Material <cit.> we show the same data as in Figure <ref>, but with k=0 in the definition of the fitness function Eq. (<ref>). We clearly observe that for small F_ min the abundance plots are basically identical, whereas for high F_ min at least a qualitative agreement remains. This is an important observation because the interpretation of the covariance matrix is particularly simple for current vectors that have been shifted by subtracting I_av from the current components. Since the shift automatically yields c=0 in the definition of the fitness function Eq. (<ref>), the shift is equivalent to choosing k=0 for the non-shifted currents. We may thus conclude that the fitness properties after shifting the currents hardly change. 
§.§ Hypervolumes and numbers of gate realizations Next, we elucidate the origin of the observed major differences between AND and XOR gates in the abundance plots in Figure <ref>, as representatives of gates solving a linearly separable and inseparable problem, respectively. Due to reasons of continuity it is expected that in the 5D space of control voltages there exist well-defined regions where the AND or XOR gate fitness is higher than a threshold F_ min. The hypervolume of a specific region hosting a high-fitness gate realization is denoted as V_0 and the average hypervolume of these regions as ⟨ V_0⟩, while the number of different regions is denoted as N_ gates. In the two extreme cases, the different abundances of AND and XOR gates may be due to a very different number of regions of similar average hypervolume, or due to a similar number of regions with very different average hypervolume. We determined the reason for the different abundances of AND and XOR gates in the following way. We randomly choose for both devices D1 and D2 an AND gate realization and an XOR gate realization above a given fitness threshold. In addition, we randomly choose for D1 an XOR gate realization at a higher threshold (for D2 we refrained from doing this, because the XOR gate abundance at this fitness threshold is too low for statistical significance). We assume that the hypervolume V_0 of the region hosting the randomly chosen gate realization is representative for all regions hosting gate realizations, so that we do not need to distinguish between V_0 and ⟨ V_0⟩. To estimate V_0 of each region, we randomly choose control voltages restricted to a local hypercube with hypervolume Δ V incorporating the region. Then, we calculate the probability p_0 that a combination of 10^4 randomly chosen control voltages within this local hypercube leads to a gate fitness F>F_ min. This information allows us to estimate V_0 as V_0 ≈ p_0 Δ V. From the global gate abundance p_ abundance, extracted from the abundance plots, we obtain an estimate of the global hypervolume V of all gate realizations with minimal fitness F_ min as V ≈ p_ abundance V_ tot, where V_ tot=2^5 V^5=32 V^5 is the hypervolume of the global hypercube (the control voltages were chosen in a voltage range of 2 V between -1 and 1 V). An estimate of the number of distinct gate realizations with minimal fitness F_ min in the control voltage space is then found as the ratio between the global hypervolume of all gate realizations and the local hypervolume of a particular gate realization: N_ gates≈ V/V_0≈ p_ abundance V_ tot/p_0 Δ V. The results of these estimates are shown in Table <ref> for T=77 K and a=5 nm. For device D1, the hypervolume V_0 of the XOR gate realization is almost two orders of magnitude lower than that of the AND gate realization (0.00013 vs. 0.0090 V^5). This shows that for the XOR gate more subtle tuning of the control voltages is required than for the AND gate, an observation that was also made in our previous work when studying the fitness change when changing one of the control voltages <cit.>. In contrast, the estimated number of distinct realizations N_ gates of the XOR gate is of comparable magnitude as that of the AND gate (17 vs. 5). The smaller abundance of XOR gates as compared to AND gates is thus not due to a smaller number of regions, but due to a much smaller hypervolume of a region. This conclusion is supported by analogous results for device D2. Remarkably, the number N_ gates of distinct XOR gate realizations for device D1 is not very different for F_ min=5 and 10 (by chance even identical: 17).
This shows that the strong decrease of the abundance with fitness threshold F_ min in Figure <ref> is not due to a strong decrease of distinct XOR gate realizations, but due to a strong decrease of the hypervolume of the regions when increasing F_ min, as is also observed when directly comparing the values of V_0 (0.00013 vs. 0.00094 V^5). These observations are crucial for the further development of logic functionality with DNPU technology. For the consistency of our choice for the hypervolume Δ V of the local hypercube in Table <ref>, two conditions should be fulfilled. (1) The local hypercube should be chosen large enough to contain the region of control voltages with minimal fitness F_ min hosting the specific gate realization. (2) It should be small enough to avoid overlap with regions hosting other gate realizations. Condition (1) is fulfilled by making sure that at the edges of the local hypercube the gate fitness has decreased well below F_ min, implying p_0≪ 1. Condition (2) is fulfilled by making sure that V_ tot/Δ V≫N_ gates. Both conditions are fulfilled for all the cases in Table <ref>. §.§ Temperature dependence For practical reasons it is desirable to have DNPU logic functionalityat room temperature instead of 77 K. To investigate potential room-temperature functionality, we have redone all simulations for device D1 at T = 273 K. The resulting abundance plots are given in Figure <ref>. One would expect that nonlinear effects become smaller upon temperature increase, because the relative influence of Coulomb interactions, which are responsible for the nonlinear effects, then becomes less. This is expected to result in a smaller abundance of logical gates, in particular XOR and XNOR gates. This is indeed observed in a comparison of Figure <ref> with Figure <ref>. Interestingly, the behavior for T=273 K and a=5 nm is similar as for T=77 K and a=10 nm. Also, the behavior for T=273 K and a=2.5 nm is similar as for T=77 K and a=5 nm. We explain this by a compensation of a decrease of nonlinear effects with increasing temperature and an increase of similar magnitude of nonlinear effects with decreasing hopping distance. We note that for T=273 K we could not perform simulations with a = 1.25 nm, because the higher temperature allows hops that were almost impossible at T=77 K. §RESULTS: PRINCIPAL COMPONENT ANALYSIS §.§Analysis of KMC dataThe results for the eigenvectors J_i (i=1,2,3)of the PCA covariance matrix Eq. (<ref>) for T=77 K and a=5 nm are shown in the left panel of Figure <ref> for both devices D1 and D2. Since by construction 𝐉_0=𝐯_0=1/2(1,1,1,1), we do not show 𝐉_0. Remarkably, we find to an excellent approximation 𝐉_3≈𝐯_3=1/2(-1,1,1,-1). As discussed above, the equality follows if no correlations between X and M_ l or M_ r would be present. The approximate equality suggests that these correlations are indeed small, as we will explicitly verify below. Furthermore, we find 𝐉_1≈𝐯_1=1/√(2) (0,-1,1,0) and 𝐉_2≈𝐯_2=1/√(2) (-1,0,0,1) (1/√(2)=0.7071…). This suggests that there is a high but not perfect l-r symmetry of the devices.The corresponding eigenvalues λ_i are shown in the right panel of Figure <ref>, where also results for a=2.5 and 10 nm have been added. For a proper comparison all eigenvalues are divided by ⟨ I_ av^2 ⟩ as normalization. For all considered cases the normalized eigenvalues are considerably smaller than 1. 
This implies that the average current for the different input combinations is by far the dominant quantity, while changes in the current for the different input combinations can be regarded as relatively small modulations. For the same hopping distance a the normalized eigenvalues are similar within a factor of less than two for the two devices, so that the devices display statistically similar behavior. Furthermore, the normalized eigenvalues of both devices show a significant increase with decreasing a. In particular, the third normalized eigenvalue strongly increases with decreasing a. According to our above analytical solution λ_3 = 4 σ^2(X) when correlations between X and M_ l or M_ r are neglected, this observation suggests a considerable increase of σ^2(X) with decreasing a. In Figure <ref> we show the Pearson correlation coefficients among the variables M_ l, M_ r and X. The low numbers for the corresponding correlation coefficients confirms the above assumption that the correlation between X and M_ l or M_ r is very small, in particular for T = 77 K.Furthermore, we find a considerable correlation between M_ l and M_ r with only little dependence on T and a. As argued above, a large correlation between M_ l and M_ r is important for a realization of high-fitness logic gates, for which I_01≈ I_10. As shown in Figure <ref> of the Supplemental Material <cit.> the variances of M_ l and M_ r are very similar, which explicitly rationalizes why the eigenvectors 𝐉_i are so close to the 𝐯_i. §.§ Comparison with experiments We now compare results of the KMC simulations with experimental results, as emerging from the surrogate model (SM) of a physical device <cit.>. The comparison is performed on the level of the properties derived from the PCA. We remind the reader that in order to have similar ranges of voltages we have chosen a smaller variation of input and control voltages in the KMC simulations: input voltages U_2,U_3∈ (0,0.1) V, control voltages U_1,U_4,U_5∈ [-0.5,0.5] V and control voltagesU_6,U_7 ∈ [-0.3,0.3] V. As a result, the results are quantitatively different from what is reported above. As before, the results presented here are based on a sampling of 10^4 randomly chosen control voltage combinations for both simulated devices D1 and D2 as well as the SM of the physical device. As seen in the left panel of Figure <ref>, there is a fair agreement between the KMC eigenvectors and the SM eigenvectors. We have to an excellent approximation 𝐉_3≈𝐯_3 both for the simulated devices D1, D2 and the SM of the physical device. For the other two eigenvectors, there is less agreement between 𝐉_1 and 𝐯_1, 𝐉_2 and 𝐯_2. As already argued in Sect.  and seen from the comparison of devices D1 and D2 in Figure <ref>, the latter two eigenvectors are more susceptible to details of the specific dopant distribution, in particular with respect to the closeness to l-r symmetry. The approximation is therefore less accurate than for 𝐉_3. In fact, when calculating the standard deviation ratios of M_l and M_r, we get a value of σ(M_l)/σ(M_r) = 0.58 for the first Device and σ(M_l)/σ(M_r) = 0.43 for the second device, indicating l-r asymmetry. 
In contrast, the surrogate model exhibits closer l-r symmetry, with σ(M_l)/σ(M_r) = 0.9, going along with a closer similarity of𝐉_0 and 𝐯_0.Comparison of the eigenvalues in the right panel of Figure <ref> shows that, like for the simulated devices D1 and D2, the largest SM eigenvalue λ_1 is two orders of magnitude smaller than <I_ av^2> and the smallest SM eigenvalue λ_3 approximately four orders of magnitude smaller.The agreement shows that the degree of nonlinearity in the simulations and in experiment is very similar, both in terms of the NDR as well as the cross-correlation.The major difference between the KMC and SM results is the ratio of the first and the second eigenvalue. As seen from the analytical solution Eq. (<ref>), the ratio λ_1/λ_2=(1 +Corr(M_ l,M_ r))/(1 -Corr(M_ l,M_ r)) and this ratio is therefore a measure of the correlation between M_ l and M_ r. We find Pearson correlation coefficients Corr_ D1(M_ l,M_ r) = 0.468, Corr_ D2(M_ l,M_ r) = 0.348, and Corr_ SM(M_ l,M_ r) = 0.945 for D1, D2, and the SM, in agreement with the ratios found in Figure <ref>. We note that the experimental uncertainty in the number of dopants in the active region in between the electrodes is large and that our modeling of the electrodes as circular segments is very approximate. Considering these and other uncertainties and approximations, the agreement between our simulated results and the experimental results is remarkable.§ RESULTS: NONLINEARITY INDICATORS We now come to the final analysis level, which is based on the distributions of M_ l, M_ r, and X.The first moments and the variances of these quantities at T=77 K (devices D1 and D2) and T= 293 K (device D1) for a = 5 nm are shown in Table <ref>. The corresponding left and right NDR indicators Q_ l and Q_ r, given by Eq. (<ref>), are displayed in Figure <ref> for different a. Only minor differences are seen between Q_ l and Q_ r, in agreement with approximate l-r symmetry. There is a considerable difference, occurring by chance, between the two devices, with Q_ l and Q_ r for D1 larger than for D2. This difference indicates that the NDR is more pronounced for D1 than for D2.This is in agreement with the larger abundance of NAND and NOR gates in D1 than in D2 as observed in Figure <ref> for small and intermediate fitness thresholds. Additionally, we observe a very strong increase of Q_ l and Q_ r with decreasing hopping distance a, indicating that NDR becomes more pronounced when a is small. Indeed, as seen in Figure <ref>, the probability of NAND and NOR gates is strongly enhanced for decreasing a. We see from the results for device D1 that increasing the temperature from T=77 to 293 K has the same effect as increasing a, which agrees exactly with the observation made when comparing the NAND and NOR abundance plots in Figs. <ref> and <ref> for these temperatures.Theindicator Q_ lr for the nonlinear coupling between the two inputs, given by Eq. (<ref>), is shown in Figure <ref>.Also this indicator strongly increases upon decreasing the hopping distance a. For a = 2.5 nm the results for both devices are very similar. Indeed, as seen in Figure <ref>, the abundances of XOR and XNOR gates for a = 2.5 nm are also very similar for both devices. For larger a≥ 5 nm, Q_ lris larger for device D1 than for D2. This is again reflected by the higher occurrence likelihood of XOR and XNOR gates for D1 than D2 in the abundance plotsof Figure <ref>. 
Also the considerable decrease of Q_ lr with increasing temperature shows upwhen comparing the XOR and XNOR abundance plots in Figs. <ref> and <ref>.Finally, we mention that we observed from Figure <ref> that Corr(M_ l,M_ r) is large and only weakly dependent on the hopping distance a and temperature T. This is compatible with I_01≈ I_10, which is a condition for high-fitness Boolean gates. However, it is not a sufficient condition. The three nonlinearity indicators Q_ l, Q_ r, and Q_ lr sensitively depend on a and T and are much better measures for the occurrence of high-fitness gates.§SUMMARY, CONCLUSIONS AND OUTLOOK We have focused in this work on the critical nonlinear aspects of hopping transport in disordered dopants networks (DNPUs) used in reconfigurable logic. We considered DNPUs with eight electrodes: one output electrode, two symmetrically positioned input electrodes and five control electrodes. From kinetic Monte Carlo (KMC) simulations of the hopping transport, taking into account Coulomb interactions between the charges, the output currents for different voltages applied at the input and control electrodes can be calculated. This allowed us to assess the occurrence of Boolean logic in the five-dimensional (5D) space of control voltages, as quantified by a Boolean gate fitness value of the four-dimensional (4D) current vector for the different input voltages corresponding to the `01', `10', `01', and `00' logic input combinations.First, we calculated the abundance plots of the six basic Boolean gates from a random hypercube sampling of the 5D control voltage space. For a typical hopping distance of 5 nm the abundance plots for two simulated devices were found to agree well with experimental results <cit.> at liquid nitrogen temperature of 77 K. We came to the important conclusion that a small hopping distance or a low temperature is beneficial for the occurrence of high-fitness gates, because nonlinear effects due to the Coulomb interactions between the charges are then stronger than for a large hopping distance or high temperature. In a next step, we used a principal component analysis (PCA) to characterize the distribution of the current vectors in more detail. We found that the properties of the eigenvectors of the PCA matrix strongly depend on the degree of symmetry of the dopant network. The corresponding normalized eigenvalues provide a simple representation of the statistical properties of the DNPU. We found a fair agreement between the eigenvectors and the normalized eigenvalues of two simulated devices and a deep neural network (DNN) surrogate model (SM) of a physical device. This shows that our modeling at least qualitatively captures the underlying physics of the DNPUs. It is important to note that, in contrast to other applications of the PCA, all eigenvectors are of key importance. When omitting, e.g., the direction along the eigenvector with the smallest eigenvalue, one would no longer be able to assess the occurrence of XOR and XNOR gates, because of the missing information about the cross-correlation between the two inputs, contained in this eigenvalue.Finally, we defined three dimensionless nonlinearity indicators Q_ l, Q_ r, and Q_ lr, where Q_ l and Q_ r are indicators for negative differential resistance (NDR) with respect to the left and right input, important for the realization of NAND and NOR gates, and Q_ lr is an indicator for nonlinear coupling between the left and right input, important for the realization of XOR and XNOR gates. 
On this deepest analysis level, important new insights about the impacts of the hopping length, the temperature, and cross-correlations on the logic functionality were gained. In addition to the statistical properties obtained from the hypercube sampling, we considered the spatial structure of Boolean gate realizations in the 5D control voltage space. We found the surprising result that for AND and XOR gates, as representatives, respectively, of linearly separable and linearly inseparable gates, the number of regions hosting high-fitness gates is similar, despite the fact that the abundance of AND gates is much higher. This is explained by the much smaller hypervolume of the regions, and the resulting higher sensitivity of the fitness to variations of the control voltages, for XOR gates compared to AND gates. Different further applications of the presented methodology are conceivable: (1) In this work, we have modified the hopping distance. The physically relevant quantity is the ratio of the typical nearest-neighbor distance of dopants and the hopping distance. The hopping distance is difficult to change without using a different dopant-semiconductor combination, but the distance between the dopants can easily be changed by changing the dopant density. It would therefore be of interest to make a comparison with DNPUs made with a different dopant density. A lower dopant density may increase the relative importance of Coulomb interactions, with beneficial effects for the logic functionality. (2) The proposed decomposition of the current vectors is very straightforward and not dependent on the specific underlying physical realization. Thus, it could be easily adjusted to situations with, e.g., three input electrodes, or where the current vector results from different realizations of reconfigurable logic, such as nanoparticle networks <cit.>, or where other device properties such as the size are varied. Work along this line is in progress. (3) A very interesting application of DNPUs is the processing of time-dependent signals. For that case, e.g., the quantification of the cross-correlation may be very helpful to characterize the mixing of signals caused by voltage changes of different input electrodes. (4) Different realizations of the DNPUs can display significant device-to-device fluctuations, as also observed in other neuro-inspired computing systems <cit.>. The nonlinearity indicators, introduced above, may yield direct information about the properties and the consequences of these fluctuations on the device behavior. § ACKNOWLEDGEMENT This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through project 433682494–SFB 1459. We thank Dr. Unai Alegre-Ibarra for setting up the GitHub repository to make the KMC code publicly available (https://github.com/MUTUEL).
http://arxiv.org/abs/2312.16037v1
{ "authors": [ "Henri Tertilt", "Jonas Mensing", "Marlon Becker", "Wilfred G. van der Wiel", "Peter A. Bobbert", "Andreas Heuer" ], "categories": [ "cs.ET", "cs.AR", "cs.LG", "cs.NE", "stat.ME" ], "primary_category": "cs.ET", "published": "20231226125532", "title": "Critical nonlinear aspects of hopping transport for reconfigurable logic in disordered dopant networks" }
[email protected] Department of Physics, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, GermanyDepartment of Physics, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany Max Planck Institute for the Science of Light, 91058 Erlangen, GermanyFast quantum gates are crucial not only for the contemporary era of noisy intermediate-scale quantum devices but also for the prospective development of practical fault-tolerant quantum computing systems. Leakage errors, which arise from data qubits jumping beyond the confines of the computational subspace, are the main challenges in realizing non-adiabatically driven, fast gates. In this letter, we propose and illustrate the usefulness of reinforcement learning (RL) to generate fast two-qubit gates in practical multi-level superconducting qubits. In particular, we show that the RL controller offers great effectiveness in finding piecewise constant gate pulse sequences autonomously that act on two transmon data qubits coupled by a tunable coupler to generate a controlled-Z (CZ) gate with 11 ns gate time and an error rate of ∼ 4× 10^-3, making it about five times faster than state-of-the-art implementations. Such gate pulse sequences exploit the leakage space judiciously by controlling the leakage dynamics into and out of the computational subspace at appropriate times during the gate application, making it extremely fast.Designing Fast Quantum Gates with Tunable Couplers: A Reinforcement Learning Approach Michael J. Hartmann===================================================================================== As we are inching closer to building practical quantum computers, the development of fast quantum gates has become increasingly important <cit.>. This is instrumental in the current noisy intermediate-scale quantum (NISQ) era, to allow quantum algorithms to be executed reliably despite the inherent noise and fragility of qubits, as well as to achieve fault-tolerant quantum computing with efficient error-correcting gate implementations in larger systems <cit.>. Fault-tolerance means that, provided the error rates of the physical qubits are below a threshold, quantum computing devices, along with quantum error correction (QEC) and logical gate operations, would be able to be used for practical quantum computation <cit.>. One of the crucial requirements for efficient QEC is the realization of fast and efficient quantum gates <cit.>. As a hardware platform for quantum computing, superconducting circuits have shown remarkable developments and are considered promising for constructing large-scale quantum devices. However, despite recent advancements in such quantum hardware components, designing high fidelity and at the same time fast gates, remains a major challenge. Since superconducting qubits are in fact multi-level systems, the most significant challenge in achieving fast two-qubit gates is avoiding leakage errors that results from qubits leaking out of the computational basis <cit.>. These leakage errors have proven to be particularly hard to correct, and strategies to avoid them limit the amplitude of control pulses, which makes that gate duration longer despite optimization of gate pulse sequences. Another major problem for the precise operation of large-scale superconducting platforms is crosstalk originating from residual ZZ-interactions, which causes unwanted perturbations to two-qubit gate operations. 
One way to reduce crosstalk is to improve the hardware design, such as by utilizing qubits with opposite anharmonicities to create a quantum interference-induced crosstalk-cancellation effect <cit.>. The more experimentally favorable way would be to park the qubits at a highly dispersive regime with large detunings. Nevertheless, in such setups, when transitioning from a far-off detuning to an operating zone of frequencies with reduced detuning, the gates become slower. This work proposes a solution for realizing fast two-qubit gate that starts from a regime where residual couplings are low, and even though the ramps are sharp, the optimization algorithm leads to a fast gate. When dealing with global optimization problems of complex, non-convex, and non-linear systems, machine learning (ML) in combination with deep learning has recently proven to be extremely successful and is considered highly versatile for a wide range of tasks <cit.>.Reinforcement learning (RL) is a type of ML that is particularly well suited for learning to control sequential decision-making problems <cit.>. Unlike supervised (unsupervised) learning which relies on labeled (un-labeled) datasets, RL learns through interactions with the system to be controlled (called the RL environment). The RL model, called the RL-agent receives rewards or punishments as feedback that guide its learning process and allow it to adapt and improve over time. This has made RL invaluable in areas such as robotics, autonomous systems, and games, where agents need to learn optimal strategies through trial and error, outperforming supervised learning in complex, dynamic environments with superhuman capabilities. <cit.>. Following such developments in technology and various domains in engineering, it has recently been utilized to find control protocols in the domains of quantum physics for some interesting quantum systems <cit.>. It was first demonstrated for the optimization of quantum state preparations <cit.> and QEC <cit.>, and more recently we have seen its applications in other areas, in particular, in quantum state engineering <cit.>, quantum state transfer <cit.>, quantum feedback control <cit.>, etc. RL controls have also been used in real laboratory experiments recently with quantum systems, demonstrating their potential for challenging decisions and their adaptability to control such systems in real time <cit.>.In this Letter, we propose and analyze a method to design an ultrafast two-qubit controlled-Z (CZ) gate using RL-based optimization in a tunable coupler superconducting circuit setup, see Fig. <ref>. Besides being an entangling gate that can be used to generate a universal gate set, the CZ gate is a core operation in QEC with surface codes, where stabilizers can be measured via a sequence of four CZ gates <cit.>. For engineering high performance, large-scale quantum processors, tunable superconducting circuits have gained prominence, particularly due to their recently recognized capability for on-demand on-off switching of couplings between qubit pairs via frequency modulation through external flux <cit.>.This flexibility allows for precise control over the interactions within the system, making it a valuable resource for implementing high fidelity gates.We find that the RL-agent could autonomously discover high performance control pulses, while considering all leakage sub-spaces. 
Such coherent-error-avoiding flux pulses judiciously control the qubit leakage dynamics into and out of the computational subspace at appropriate times during the course of the gate, making it extremely fast. With this approach, we show that an ultrafast CZ gate can be realized with a gate-time of 11 ns with an error rate ∼ 4×10^-3.The RL-based optimization protocol we consider is depicted in Fig. <ref>, where the problem of two-qubit gate design with tunable coupler superconducting framework is embedded intothe workflow of RL. The RL-agent (shown on the left) is essentially an artificial neural network model that is responsible for deciding the control sequences (called the actions, a⃗) by optimizing the weights, θ⃗ of the model. These optimizations are directed through the scalar signal of rewards, ℛ received by the RL-agent from the RL-environment given it observes some partial information of the system after the application of the control at every step of iteration. These are called the observations, s⃗ of the RL, and the set of rules it learns by optimizing the parameters θ⃗ is called the policy, π(a⃗|s⃗) of the RL-agent, where π(a⃗|s⃗) represents a conditional probability distribution of a⃗ given s⃗. In this case, the RL-environment comprises the tunable coupler circuit, shown on the right of Fig. <ref>, comprising two data qubits and a coupler qubit, all of which are modeled as transmon qubits. Explicitly, the observation of the RL-agent consists of the computational as well as leakage-space qubit populations, i.e., s⃗ = {P_ijk}, where i, j, k = (0, 1, 2) are the indices corresponding to the energy levels of the three qubits, with j being the levels of the tunable coupler qubit. The actions of the RL-agent are choices of the tunable coupler frequency, ω_c,which are realized by the external flux, ϕ_c, ext applied to the coupler, therefore a⃗ ={ω_c}←ϕ_c, ext. The reward, ℛ is considered as a function of the process infidelity of the CZ gate defined by ℛ = -log_10(1 - ℱ), where ℱ is the gate fidelity at the end of the sequence. The learning process can be divided into iterations (called episodes) of total duration τ which is divided into sequences of duration t = τ/n, where n is the number of control steps in the sequence. If we consider a control problem with N_acontrol parameters over n control steps, the complexity of the problem scales exponentially with n as N_a∏_i=1^N_a N_i^n, where N_i is the number of divisions for the i-th control parameter, consideringa discrete control problem. For the problem under study N_a = 1, for which the complexity of the problem scales as N_w_c^n, where N_ω_c is the number of divisions of the control parameter ω_c.Instead of discrete controls, we consider continuous approximations to stepwise constant shapes of ω_c. This choice of control pulses is motivated by the fact that they are typical pulses generated by arbitrary waveform generators in current experimental setups. Despite the fact that we have a single control parameter for the RL-agent to learn, this problem turned out to be a formidable task for the RL to learn and we have found that a sophisticated RL algorithm developed in the last few years needs to be employed. Effectively, we used the recently proposed Soft-Actor-Critic (SAC) algorithm for optimization of the RL policy <cit.>. 
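To make the setup concrete, the sketch below wraps such a piecewise-constant control task as a reinforcement-learning environment and trains it with an off-the-shelf SAC implementation. The use of the gymnasium and stable-baselines3 packages, the parameter values, and the placeholder dynamics are our own illustrative choices and are not taken from this work.

```python
import numpy as np
import gymnasium as gym
from stable_baselines3 import SAC

class CZGateEnv(gym.Env):
    """Piecewise-constant coupler-frequency control for a CZ gate (sketch)."""

    def __init__(self, n_steps=10, wc_range=(4.0, 7.0)):
        super().__init__()
        self.n_steps = n_steps                  # control steps per episode
        self.wc_min, self.wc_max = wc_range     # coupler frequency window (GHz), assumed
        # Action: coupler frequency for the current segment, rescaled to [-1, 1].
        self.action_space = gym.spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)
        # Observation: populations P_ijk of the 27 tracked three-level states.
        self.observation_space = gym.spaces.Box(0.0, 1.0, shape=(27,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.k = 0
        obs = self._propagate(None)             # populations of the initial state
        return obs, {}

    def step(self, action):
        wc = self.wc_min + 0.5 * (float(action[0]) + 1.0) * (self.wc_max - self.wc_min)
        obs = self._propagate(wc)               # evolve one pulse segment at frequency wc
        self.k += 1
        done = self.k >= self.n_steps
        # Sparse reward at the end of the sequence: R = -log10(1 - F).
        reward = float(-np.log10(1.0 - self._fidelity())) if done else 0.0
        return obs, reward, done, False, {}

    def _propagate(self, wc):
        # Placeholder for the actual three-transmon time evolution.
        return np.zeros(27, dtype=np.float32)

    def _fidelity(self):
        # Placeholder for the CZ process fidelity of the accumulated evolution.
        return 0.5

model = SAC("MlpPolicy", CZGateEnv(), verbose=0)
model.learn(total_timesteps=50_000)
```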
The SAC algorithm is an actor-critic RL algorithm based on the concept of entropy regularization, where the policy π is trained to maximize a trade-off between expected return and entropy that determines the balance between exploration and exploitation, receiving a bonus reward at each time step proportional to the entropy of the policy. This makes the RL policy select actions as randomly as possible, exploiting the inherent stochasticity of the policy, encouraging the agent towards more exploration, preventing premature convergence to sub-optimal solutions, and accelerating learning. The optimal policy π^* is defined as the policy that maximizes the expected return while also maximizing entropy, given by π^* = argmax_π 𝔼_τ∼π∑_t=0^∞γ^t [ℛ(s_t, a_t, s_t+1) + αℋ(π(· |s_t))], where 𝔼_τ∼π denotes the expectation value over trajectories τ generated by following the policy π. γ^t is the discount factor raised to the power of t, where γ is a parameter between 0 and 1, representing how much the agent values future rewards relative to immediate rewards. The ℛ(s_t, a_t, s_t+1) term represents the immediate reward obtained when taking action a_t in state s_t and transitioning to state s_t+1. The αℋ(π(·|s_t)) term involves the entropy ℋ of the policy π at state s_t, weighted by a hyper-parameter α that regulates the stochasticity of the policy and encourages randomness in the actions taken by the RL-agent. The tunable coupler circuit depicted in Fig. <ref> is described by the Hamiltonian (considering ħ=1 hereinafter) H = ∑_i=1,c,2(ω_i b_i^† b_i + α_i/2 b_i^† b_i^† b_i b_i) + g_12( b_1^† b_2 + b_2^† b_1) + ∑_i=1, 2 g_ic( b_i^† b_c + b_c^† b_i ), where b_i (b_i^†) with i=1,c,2 describe the bosonic annihilation (creation) operators for the transmon qubits (i=1,2) and the coupler. The qubits are considered as weakly anharmonic oscillators possessing multiple energy levels, with anharmonicities given by α_i. The qubits interact with one another through capacitive coupling, where the next-nearest-neighbour coupling capacitance, C_12, is smaller than the nearest-neighbour coupling capacitances, {C_1c, C_2c}, which in turn are small compared to the transmon qubit capacitances {C_1, C_c, C_2}. As a consequence, the circuit analysis can be treated perturbatively. The experimentally relevant computational basis of the circuit can be described by the eigenstates |ijk⟩ at the idling point, where {i, j, k} label the energy levels of qubit 1, the coupler and qubit 2, respectively. Due to the residual coupling at the idle point, these eigenstates are slightly hybridized between the individual circuit components <cit.>. The computational subspace for the two-qubit gates between the data qubits consists of the states |ik⟩ = |00⟩, |01⟩, |10⟩ and |11⟩, where the coupler is considered to be always in the ground state. The circuit is initialized in the eigenstates |ijk⟩ before application of the gate and returns to them after completion of the gate operation; these eigenstates have maximum overlap with the bare states |i'j'k'⟩ of the circuit, whereas all the other instantaneous eigenstates during the gate constitute the leakage subspace. We aim to design the CZ gate utilizing the transverse qubit-qubit coupling to induce a phase of e^iπ in the computational state |101⟩ by using nonadiabatic transitions to the non-computational eigenstate |002⟩ and back.
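For concreteness, a minimal sketch of how this Hamiltonian can be assembled numerically with the QuTiP package is given below; the use of QuTiP, the three-level truncation, and the anharmonicity and coupling values are our own illustrative assumptions, with only the qubit and coupler frequencies taken from the parameters quoted below.

```python
import numpy as np
from qutip import destroy, qeye, tensor

N = 3  # levels kept per transmon (computational levels plus one leakage level)

# Annihilation operators for qubit 1, coupler, qubit 2 in the product space.
b = [tensor(*[destroy(N) if j == i else qeye(N) for j in range(3)]) for i in range(3)]

# Parameters in angular frequency units (2*pi*GHz); frequencies as quoted in the
# text, anharmonicities and couplings are assumed values for illustration only.
omega = 2 * np.pi * np.array([4.2, 6.32, 5.2])      # qubit 1, coupler, qubit 2
alpha = 2 * np.pi * np.array([-0.20, -0.10, -0.20])
g_1c, g_2c, g_12 = 2 * np.pi * 0.07, 2 * np.pi * 0.07, 2 * np.pi * 0.005

terms = [omega[i] * b[i].dag() * b[i]
         + 0.5 * alpha[i] * b[i].dag() * b[i].dag() * b[i] * b[i]
         for i in range(3)]
terms.append(g_12 * (b[0].dag() * b[2] + b[2].dag() * b[0]))  # direct qubit-qubit coupling
terms.append(g_1c * (b[0].dag() * b[1] + b[1].dag() * b[0]))  # qubit 1 - coupler
terms.append(g_2c * (b[2].dag() * b[1] + b[1].dag() * b[2]))  # qubit 2 - coupler

H = terms[0]
for t in terms[1:]:
    H = H + t  # total Hamiltonian with hbar = 1
```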
At the idle point, we consider the qubits to be in the highly dispersive regime where the detuning between the coupler and the qubits is large compared to the couplings, so that both the transverse and longitudinal couplings between the qubits are negligible. However as the frequency of the coupler is brought close to that of the two qubits, with proper choice of circuit parameters, the effective two-qubit transverse coupling can be switched on. We bias the qubits at the frequencies of ω_1/2π = 4.2 GHz, ω_2/2π = 5.2 GHz and ω_c/2π = 6.32 GHz. This results in an almost zero transverse coupling and a negligible ZZ-crosstalk (-8.37 kHz) [see supplemental material for details].Starting at this dispersive coupling limit, a CZ-gate can be obtained by first tuning the frequency of qubit 1 to ω_1 = ω_2 + U_2, so that the levels |101⟩ and |002⟩ become resonant, and then tuning the coupler frequency near to the data qubit frequencies. Holding the coupler frequency at this point for the time of one oscillation between these two states, the target unitary U_CZ = diag(1, 1, 1, -1) can be achieved up to single qubit phases, which can be compensated virtually.However, the gate-time for such a Rabi-oscillation based operation is long as it is given by t_gate = π/ζ_XX, where ζ_XX is the transverse coupling rate.To implement a faster gate, we use a RL-based approach as stated above to tune the coupler frequency. Based on observation O(t_i), the agent takes action A(t_i), which is selecting the frequency of the coupler. Depending on this action, the agent receives either a positive or negative reward, which guides its next move. Therefore, our gate implementation is based solely on reward-based optimization without any initial model for the expected tunable pulse shape. The goal is to quickly drive the qubit transitions to reach the desired unitary while reducing the population of leakage states at the end of the gate. In Fig. <ref> we show the piecewise controls discovered by the RL-agent in panel (a) as well as the population dynamics among the computational and leakage states in panel (b). One can see that the RL-protocol finds very fast control pulses leading to a gate-time of 11 ns where the infidelity can be suppressed to ∼ 4 × 10^-3. For finding this fastest pulse the RL-agent finds optimal conditions where the coupler excited state is also populated during the gate. The leakage population is determined by the detuning of the coupler qubit and is non-zero during the gate, although the population of the leakage states decrease at the end of the gate. Such fast gates are advantageous to overcome the effects of finite qubit lifetime, since the decoherence time for the state of the art transmons is of the order of τ̃∼ 60 μs, leading to error rates characterized by ε_τ̃ = 1 -exp (-t_gate/τ)≈ 1 × 10^-4 <cit.>. Fig. <ref>(c) and (d) show the target CZ-unitary distribution as well as the obtained unitary distribution. The most interesting result found by the RL-agent is the intuitive leakage utilization to realize the ultrafast gate. The flux pulse is designed autonomously by the RL-agent in such a way that there is population transfer first from the state |101⟩ to state |111⟩ i.e. 
The flux pulse is designed autonomously by the RL-agent in such a way that there is population transfer first from the state |101⟩ to the state |111⟩, i.e., the coupler, then to the state |002⟩ with some remaining part in |101⟩, then again back to the coupler, and finally to the state |101⟩, with the acquired phase of π. Finally, we discuss the prospects of the proposed method for experimental implementations. To explore its utility in this context, we investigated the applicability of the obtained pulse shapes in devices where the transition frequencies of the qubits do not exactly match the parameters of the assumed model, cf. Eq. <ref>. We show the robustness of our method against such parameter fluctuations in Fig. <ref>, where we apply the pulse that the RL-agent found for the qubit frequencies marked by black vertical lines to qubit frequencies in their vicinity. The plots show that the fidelity is maintained up to frequency variations of (5-10)%. Yet, one of the main strengths of the proposed method is that the RL agent can be trained with real experimental data. Crucially, this enables training the agent without knowledge of a precise model for the qubits and coupler, and can thus cope with unavoidable imperfections in the characterization of the device. Moreover, this approach can take advantage of the dynamic nature of the experimental data and enables the RL agent to learn and adapt in real time to the nuanced variations in the experimental setups, e.g. parameter drift. Another promising avenue is to use the pulse obtained from RL on a model as an initial guess, which, through the rigorous training employed in this study, serves as a well-informed starting point encapsulating valuable insights into the system dynamics and response characteristics. On the basis of this ansatz pulse, further optimization can be applied to minimize any potential model bias due to parameter variations in the experimental device. While benefitting from the robustness shown in Fig. <ref>, this iterative approach fine-tunes the pulse based on empirical feedback from the experiment itself, and hence the experimental precision can be greatly enhanced, ultimately leading to more accurate results. In summary, we illustrate a reinforcement learning (RL)-driven methodology for the design of rapid and non-intuitive pulse sequences to execute a two-qubit CZ-gate within a tunable coupler architecture. The necessity to avoid leakage into non-computational states in quantum gates has so far been understood as a requirement to employ lower pulse amplitudes, leading to slower gates. Our investigation challenges this conclusion via a pulse sequence proposed by an artificial agent. It demonstrates that, grounded solely on penalty or reward considerations, the artificial agent can assimilate effective strategies and unveil realistic parameter configurations for the modulation of piecewise-constant coupler flux pulses. These machine-discovered pulses strategically leverage leakage to noncomputational basis states, thereby optimizing the time protocols and culminating in an ultrafast CZ-gate with an impressively low duration of 11 ns and an error rate of ∼ 4×10^-3. This corresponds to improvements in gate duration by factors of ∼5.5 and ∼3, respectively, compared to the 60 ns CZ gate implemented with a tunable coupler in <cit.> and the 34 ns CZ gate of the surface-code implementation with Google's Sycamore processor <cit.>. The ability to significantly reduce gate time while maintaining a low error rate represents a pivotal step towards the practical implementation of quantum computation protocols, underscoring the high impact and relevance of our proposed RL-driven design approach.
§ ACKNOWLEDGEMENTS This work received support from the German Federal Ministry of Education and Research via the funding program Quantum Technologies—from basic research to the market under Contract No. 13N16182 MUNIQC-SC. It is also part of the Munich Quantum Valley, which is supported by the Bavarian state government, with funds from the Hightech Agenda Bayern Plus. BS thanks Lukas Heunisch for useful discussions.
http://arxiv.org/abs/2312.16358v1
{ "authors": [ "Bijita Sarma", "Michael J. Hartmann" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20231226235257", "title": "Designing Fast Quantum Gates with Tunable Couplers: A Reinforcement Learning Approach" }
[Quantscape: ] [email protected] QuantScape Inc., 4-11-18, Manshon-Shimizudai, Meguro, Tokyo, 153-0064, Japan Institut Teknologi Bandung, Jl. Ganesha No.10, Bandung, Jawa Barat, Indonesia [Bandung Institute of Technology: ][email protected] There are qubit technologies that are more promising for realizing a quantum computer than superconductivity and ion traps, such as Majorana fermions, Rydberg atoms, and silicon quantum dots, but they have yet to be fully developed. The simulation of the quantum hardware of these qubits can only be done numerically; however, classical numerical simulation is limited by the available resources. A method for simulating quantum hardware with quantum hardware may therefore be necessary. In this paper, we propose a novel method for optimizing the time propagation from initial states to given target states of a system using the Born machine. We call this method the Hamiltonian Engineering Born Machine (HEBM). We calculated the optimal Hamiltonians for propagation to the Bars-and-Stripes distribution, a Gaussian distribution, and the Gibbs state of H = -∑_j Z_j Z_j+1, and revealed that these targets can be reached rapidly and accurately. Derivation of Hamiltonians from time propagations using Born machines Andriyan B. Suksmono § INTRODUCTION In 1982, Richard P. Feynman first proposed the idea of quantum computers<cit.>. Since then, many quantum algorithms that take advantage of quantum computing have been developed, such as Grover's<cit.> and Shor's<cit.> algorithms. However, the quantum systems that can serve as qubits and quantum computers did not have good enough properties for practical use, so research on quantum algorithms did not advance rapidly until 2015, when IBM made quantum computers available on cloud platforms. Superconducting and ion-trap quantum computers are the major practically available hardware platforms. However, many other qubit candidates, such as Majorana fermions<cit.>, nitrogen-vacancy centers in the diamond lattice<cit.>, and Rydberg atoms<cit.>, could realize smaller quantum computers with a larger number of qubits. Research on these platforms as quantum computers is still ongoing, and a method that can simulate and optimize the time propagation of such quantum systems is needed. On the other hand, a lot of quantum algorithms have been developed since 2015. Alán Aspuru-Guzik proposed the foundation of Variational Quantum Algorithms (VQAs)<cit.>, and many VQAs have been released, such as the Variational Quantum Eigensolver (VQE)<cit.>, Adaptive VQE<cit.>, Multistate Contracted VQE (MC-VQE)<cit.>, and variational quantum machine learning algorithms<cit.><cit.><cit.><cit.><cit.><cit.><cit.><cit.>. The Born machine is one of the variational quantum machine learning algorithms, proposed in 2018<cit.>. The objective of this algorithm is to derive the quantum circuit and variational parameters that generate a target probability distribution. There are no constraints on the circuit. Therefore, we propose a method that derives Hamiltonians propagating initial states to given target states, the circuit being the propagator of the Hamiltonian. This method can be used to simulate the operation of quantum computers. Optimizing the architecture and compressing several gate operations into one operation is also possible. We call this method the Hamiltonian Engineering Born Machine (HEBM). It is a generalization of the Quantum Coherent Ising Born Machine (QCIBM)<cit.> that can deal with any type of Hamiltonian, whereas QCIBM can only treat the Ising model and its family.
We derived Hamiltonians for several pairs of initial and final distributions by propagation with HEBM. As a result, we confirmed that HEBM can rapidly derive Hamiltonians that propagate initial states to given target states, with an accuracy that is sufficient for practical use. The organization of this paper is as follows. Chapter <ref> describes the details of HEBM. Chapters <ref> and <ref> present the results of our calculations. Chapter <ref> is a discussion of HEBM. Chapter <ref> presents the results and discussion of the noise simulations. Chapter <ref> is the conclusion of our work. § METHOD In this section, we describe the details of HEBM. The Born machine is a variational quantum machine learning method. The objective of this algorithm is to derive the quantum circuit and variational parameters that reproduce a target distribution. The loss function of the Born machine uses kernels to this end and reads F = x^T K x + f^T K f - 2 f^T K x. This is the Maximum Mean Discrepancy (MMD) loss function. Here, x is the distribution obtained from the parametric quantum circuit and f is the target distribution. They are 2^N-dimensional vectors whose elements are x_j = |x_j|^2 for |Φ⟩ = ∑_j=0^2^N x_j |j⟩ and f_j = |f_j|^2 for |Φ_ans⟩ = ∑_j=0^2^N f_j |j⟩, in the decimal basis represented by the states of the individual qubits. K is the kernel matrix and N is the number of qubits. The kernel is crucial for the Born machine: it is the matrix that couples the elements of the two distributions. Strictly speaking, it is the inner product of the hyperfunctions that project the variables into the problem space. The kernel matrix elements can be chosen in various ways, for example as Gaussian or Poisson distributions, or from a SWAP test. This framework is the archetype of VQAs. For example, it becomes the Subspace-Search VQE (SSVQE)<cit.> in case x = E_j, the vector of trial energies E_j = ⟨Φ_j | H |Φ_j⟩; in case only j = 0 is used and the circuit is built in the manner of Adaptive VQE, it reduces to Adaptive VQE. The circuits used to prepare x are ordinarily cluster-type ansätze; however, any type of circuit can be used. We use the propagator of a Hamiltonian, expressed as exp(-(i/2) H t/N_dt)^N_dt with t = π/2, where N_dt is the number of time frames. The variational parameters are the coefficients of the Hamiltonian, which is expressed in Pauli words. The Hamiltonian takes the form H = ∑_j=0^N_o θ_j P_j, where each P_j is a product of the Pauli matrices X_j, Y_j, Z_j, and N_o is the number of terms P_j in the Hamiltonian. Hence, the propagator must be decomposed into propagators of the single terms of the Hamiltonian, which is achieved in this work by Suzuki-Trotter decomposition<cit.>. In this method, the Born machine is used for Hamiltonian engineering; this is why we call the method HEBM (Fig. <ref>). We use the Limited-memory Broyden-Fletcher-Goldfarb-Shanno-B (L-BFGS-B) method<cit.> for the optimization in HEBM. The quantum-circuit simulations are performed with the blueqat SDK, a software development kit for quantum simulation and computation in Python. The number of shots is infinity; hence the result is read as a state vector.
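As a small illustration of the MMD loss above, the following plain-Python sketch evaluates F for two probability vectors with a Gaussian kernel; the kernel bandwidths are assumptions, since the kernel actually used is not specified here.

import numpy as np

def gaussian_kernel(n_states, bandwidths=(0.25, 4.0)):
    # Kernel matrix K_ij averaged over a set of Gaussian bandwidths (an assumed choice)
    idx = np.arange(n_states)
    d2 = (idx[:, None] - idx[None, :])**2
    return np.mean([np.exp(-d2/(2.0*s)) for s in bandwidths], axis=0)

def mmd_loss(p_x, p_f, K):
    # Squared MMD between the circuit distribution p_x and the target distribution p_f
    return p_x @ K @ p_x + p_f @ K @ p_f - 2.0*p_f @ K @ p_x

# Example usage for a 4-qubit (16-state) problem:
# loss = mmd_loss(p_x, p_f, gaussian_kernel(16))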
§ RESULT In this section, we generate the optimal Hamiltonians that propagate given initial states into the target distributions. We perform this for the Bars-and-Stripes distribution, a Gaussian distribution, and the Gibbs state of H = -∑_j Z_j Z_j+1. The number of qubits is 4 in all cases. First, we perform the calculation for the BAS distribution<cit.>. In the BAS distribution, each qubit corresponds to a block of a 2 × 2 grid of tiles, and only bitstrings whose 0s and 1s form complete rows or columns have non-zero probability. Hence, the states 0, 3, 5, 10, 12, and 15 each have probability 1/6, and all the others have probability zero. We set the initial state to |1010⟩ = |10⟩. The Hamiltonian is H = ∑_j=0^3 (θ_j^XX X_j X_j+1 + θ_j^YY Y_j Y_j+1 + θ_j^ZZ Z_j Z_j+1 + θ_j^Z Z_j) with the cyclic condition j = 4 ≡ 0. We show the average of the distributions over five iterations and the loss function of the fifth iteration in Fig. <ref> and <ref>, respectively. The average is close to the target values; however, the standard deviations of |7⟩ and |8⟩ are large. In fact, the standard deviations of the distribution at the end are larger than those at half of the total number of iterations. Even though the loss function of the fifth iteration reached the 10^-8 order and the target distribution was realized, three out of five iterations converged only to values above 10^-4. The Hamiltonian that propagates |5⟩ to the BAS distribution is therefore derived only roughly. Second, we perform the calculation for the Gaussian distribution e^-(x-c)^2/2σ. The center of the distribution is c = 7.5 and the deviation is σ = 1. The initial state is the equally distributed state. The Hamiltonian is H = ∑_j=0^3 (θ_j^ZZ Z_j Z_j+1 + θ_j^ZXZ Z_j-1 X_j Z_j+1 + θ_j^X X_j), which is the parent Hamiltonian of H = -∑_j Z_j Z_j+1, with cyclic boundary conditions. We show the average of the distributions over five iterations and the loss function of the fifth iteration in Fig. <ref> and <ref>, respectively. Both the average probabilities and their standard-deviation bounds are almost equal to the exact values. The loss function reached the 10^-9 order and the target distribution is realized. Hence, the Hamiltonian that propagates the equally distributed state to the Gaussian is derived. Next, we calculated the Gibbs distribution<cit.> of H = -∑_j Z_j Z_j+1. The Hamiltonian ansatz is the same as for the Gaussian. We show the average of the distributions over five iterations and the loss function of the fifth iteration for the initial state |1010⟩ = |10⟩ in Fig. <ref> and <ref>, respectively. The probabilities of the |0⟩, |1⟩, |14⟩, and |15⟩ states deviate by about 0.05 from the target values, and the standard deviations of the other states are broad. The loss function reached only the 10^-4 order, so the target distribution is realized only approximately. Hence, a Hamiltonian that propagates a given computational-basis state to the Gibbs distribution is difficult to derive. On the other hand, the average of the distributions over five iterations and the loss function of the fifth iteration for the equally distributed initial state are shown in Figs. <ref> and <ref>, respectively. The standard deviations of the |1⟩, |2⟩, |13⟩, and |14⟩ states deviate by about 0.02 from the target values. The loss function reached the 10^-8 order and the target distribution is realized. Hence, the Hamiltonian that propagates the equally distributed state to the Gibbs distribution is derived. From these results, it is possible to derive Hamiltonians that propagate nonlocal distributions to nonlocal distributions and local distributions to local distributions; however, Hamiltonians mapping nonlocal distributions to local ones, and vice versa, cannot be derived. When we attempt to generate the BAS distribution from the equal distribution, the loss function cannot fall below 10^-2.
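For reference, the three target distributions used in this section can be tabulated directly; a short sketch is given below, where the inverse temperature beta of the Gibbs state is an assumption (its value is not stated in the text).

import numpy as np

n_qubits = 4
dim = 2**n_qubits

# Bars-and-Stripes on a 2x2 grid: only these bitstrings carry probability
bas = np.zeros(dim)
bas[[0, 3, 5, 10, 12, 15]] = 1/6

# Gaussian target centred at c = 7.5 with sigma = 1 (normalised afterwards)
x = np.arange(dim)
gauss = np.exp(-(x - 7.5)**2/(2*1.0))
gauss /= gauss.sum()

# Gibbs distribution of H = -sum_j Z_j Z_{j+1} (cyclic), with an assumed beta = 1
beta = 1.0
def ising_energy(state):
    z = 1 - 2*np.array([(state >> k) & 1 for k in range(n_qubits)])
    return -np.sum(z*np.roll(z, -1))
E = np.array([ising_energy(s) for s in range(dim)])
gibbs = np.exp(-beta*E)
gibbs /= gibbs.sum()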
Moreover, the distributions calculated at half of the convergence already reproduce the target distributions sufficiently well. § DISCUSSION In this section, we discuss the advantages and shortcomings of our method with supporting data. The Hamiltonians that propagate initial states to given target states can be derived rapidly. However, the larger the number of time frames N_dt, the longer one iteration takes. Therefore, we sampled the calculation time for each N_dt, performing the calculation for the Gaussian distribution. We show the result in Fig. <ref>. The calculation time is defined as the time needed for the loss function to drop below 10^-4. The calculation time increases rapidly for N_dt > 20 and scales as O(atan N_dt + N_dt). This indicates that our method is not yet suitable for long simulations of quantum algorithms. However, some techniques may reduce the simulation time. For example, several processes in the simulation can be compressed into one Hamiltonian; some algorithms, such as Grover's algorithm, consist of repetitions of a unit process, so compressing the unit process may benefit our method. We also studied the effect of the number of qubits. We calculated the Gibbs distribution of H = -∑_j Z_j Z_j+1 for an 8-qubit system. According to Figs. <ref> and <ref>, the final distribution deviates slightly from the target one, and the loss function stays above 10^-5. Hence, the accuracy declines a little for large systems. Moreover, the calculation time becomes longer as the number of qubits becomes larger. This can be mitigated by techniques such as the density-matrix renormalization group. § EFFECT OF NOISES We simulate the noise induced by a reservoir following the scheme of Ref. <cit.>. §.§ Method of the noise simulation We first describe the details of the noisy simulation. The time propagation of a noisy system is described by the Lindblad equation dρ/dt = -∑_j i[P_j, ρ] + ∑_k (2 L_k^† ρ L_k - [L_k^† L_k, ρ]_+) = ∑_j ℰ_j + ∑_k ℒ_k, with ℰ_j = -i[P_j, ρ] and ℒ_k = 2 L_k^† ρ L_k - [L_k^† L_k, ρ]_+. Here, ρ is the density matrix of the system, P_j is the j-th term of the Hamiltonian, and L_k is the Lindbladian of the k-th noise term. There are two main noise channels: the dephasing phase kick and the pole kick. The former is the relaxation of the phase, described by the Lindbladian L = √(γ_1)/2 Z; the latter is the relaxation of the amplitude, described by the Lindbladian L = √(γ_2)(X - iY). The propagator of the Lindblad equation is 𝒰 = e^(∑_j ℰ_j + ∑_k ℒ_k)t. The propagator of the Lindblad equation can also be Trotterized as lim_N→∞ (∏_j e^ℰ_j t/N ∏_k e^ℒ_k t/N)^N = e^(∑_j ℰ_j + ∑_k ℒ_k)t. γ_1 and γ_2 are the jump rates, expressed as γ_1 = -ln(2cos^2θ^1 - 1)/τ_0 and γ_2 = -ln(cos^2θ^2)/τ_0, respectively. τ_0 is the period of the noise, described in this paper by τ_0 = e_unit./ħ N_dt, where e_unit. is the unit of energy in SI units. This propagator can be implemented in the same way as an ordinary noiseless time propagator. Fig. <ref> depicts the quantum circuit that implements the propagator of the Lindblad equation for a unit time frame. In this paper, the effect of noise is simplified to independent noise on each qubit; hence, the noise acting on each qubit is simulated by a corresponding ancilla qubit. A raw noisy simulation would require 240 variable parameters for only two qubits. We assume the kick angles are random values whose maximum absolute value is the manually given noise variance. N is 1 for the Hamiltonian propagator because there is no difference in accuracy between N = 1 and N = 13. N is 13 for all simulations of the noises, and we use second-order Trotterization.
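A minimal sketch of one noisy time frame is given below; it assumes QuTiP's standard Lindblad normalization (which differs from the factor-of-two convention written above) and a single qubit for brevity, so it is only an illustrative mapping of the phase- and pole-kick channels.

import numpy as np
import qutip as qt

def jump_rates(theta1_deg, theta2_deg, tau0):
    # Jump rates from the kick angles, following the expressions in the text (theta1 < 45 deg required)
    t1, t2 = np.radians(theta1_deg), np.radians(theta2_deg)
    gamma1 = -np.log(2*np.cos(t1)**2 - 1)/tau0   # dephasing (phase kick)
    gamma2 = -np.log(np.cos(t2)**2)/tau0         # amplitude damping (pole kick)
    return gamma1, gamma2

def noisy_frame_propagator(H, theta1_deg, theta2_deg, tau0, dt):
    # One time frame of the dissipative propagator as a superoperator.
    # QuTiP uses L rho L^dag - {L^dag L, rho}/2, not the convention of the text.
    g1, g2 = jump_rates(theta1_deg, theta2_deg, tau0)
    c_ops = [np.sqrt(g1)/2*qt.sigmaz(), np.sqrt(g2)*(qt.sigmax() - 1j*qt.sigmay())]
    return (qt.liouvillian(H, c_ops)*dt).expm()

# Example: a single-qubit Hamiltonian term H = theta*Z with an assumed coefficient
U = noisy_frame_propagator(0.3*qt.sigmaz(), 5.0, 5.0, tau0=1.0, dt=np.pi/2/13)
print(U.iscp, U.istp)   # the resulting map should be completely positive and trace preserving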
§.§ Simulation of noises First, we show the results for the quantum Gibbs distribution. The initial state is the equally distributed state. The calculated distributions at half and at the end of the iterations, when the noise variances in the angles of the phase and pole kicks are both 5 and both 30 degrees, are shown in Figs. <ref> and <ref>, respectively. The standard deviations of the distributions when the noise variances are both 30 degrees are larger than when they are both 5 degrees, even though the average distributions are nearly exact in both cases. The convergence of the loss functions shows larger differences. We show the convergence of five samples of the loss function when the noise variances are both 5 and both 30 degrees in Figs. <ref> and <ref>, respectively. All samples converged only to the 10^-2 order when the noise variances are both 30 degrees. In contrast, all samples converged below the 10^-4 order when the noise variances are both 5 degrees. The accuracy of the HEBM calculation declines drastically as the noise variance increases. Fig. <ref> shows the convergence of the zeroth samples when the noise variances are both 0, 5, 10, 20, 30, and 45 degrees, respectively. The loss functions for noise variances above 10 degrees all converged to values above 10^-4, whereas those for noise variances of 0 and 5 degrees converged below 10^-5. Those for noise variances of 30 and 45 degrees did not converge below 10^-3. The plateaus of both are presumably due to the noise nullifying the effect of the optimizer. In addition, the result of HEBM has higher accuracy than the case in which the Hamiltonian is the Ising Hamiltonian (QCIBM) for all noise variances. We also investigated the effect of the phase kick and pole kick noises separately. We calculated the final loss every 5 degrees from 0 to 45 degrees, sampling once for both the phase and pole kicks. We show the result of this sampling in Fig. <ref>. There is no doubt that the pole kick mainly affects the accuracy. On the other hand, the phase kick lowers the accuracy by only two orders of magnitude when the noise variance is 45 degrees; however, it is unstable. Secondly, we show the results for the BAS distribution. The calculated distributions at half and at the end of the iterations, when the noise variances in the angles of the phase and pole kicks are both 5 and both 30 degrees, are shown in Figs. <ref> and <ref>, respectively. Both the distributions at the end and at half of the iterations are off the exact values at the |0⟩, |3⟩, |5⟩, |9⟩, and |15⟩ states when the noise variances are both 30 degrees. The standard deviations are also off the exact values and broad. In contrast, the calculated distributions when the noise variances are both 5 degrees are nearly exact for almost all states. The convergence of the loss functions is more variable than for the quantum Gibbs distribution. We show the convergence of five samples of the loss function when the noise variances are both 5 and both 30 degrees in Figs. <ref> and <ref>, respectively. Only one sample converged below 10^-5 when the noise variances were 5 degrees, and the samples converged to different orders of magnitude. On the other hand, all samples converged to values above 10^-2 when the noise variances were 30 degrees, and four out of five samples did not converge due to the noise, the same as for the quantum Gibbs distribution. The accuracy of the HEBM calculation thus declines drastically as the noise variance increases. Fig. <ref> shows the convergence of the zeroth samples when the noise variances are both 0, 5, 10, 20, 30, and 45 degrees, respectively.
The samples converged unstably to values below 10^-1 when the noise variance was less than 30 degrees and did not converge when it was equal to or larger than 30 degrees. We also investigated the effect of the phase and pole kicks separately for the BAS distribution; the pole kick again mainly affects the accuracy. We show the result of this sampling in Fig. <ref>. The phase kick is negligible because the bare accuracy of HEBM on the BAS distribution is itself low. In addition, the result of HEBM has higher accuracy than the case in which the Hamiltonian is the Ising Hamiltonian (QCIBM) for noise variances below 30 degrees. In this section, we also describe the relationship among the accuracy of HEBM, the KL divergence, and the noise variance for the Gaussian distribution. The initial state is the equally distributed state of the 4-qubit system, and the sampled points are b = 0.1, 0.125, 0.15, 0.175, 0.2, 0.225, 0.25, 0.275, 0.3, 0.325, 0.35, 0.4, 0.5, 1.0, 2.0, 4.0, respectively. Each value of b corresponds to a given KL divergence. Fig. <ref> shows the minimum logarithm of the loss function among 5 samples for each KL divergence and noise variance. The minimum loss functions are affected by the KL divergence when the noise variance is 5 and 45 degrees. When the noise variance is 5 degrees, the effect of the noise is nearly negligible; on the other hand, the effect of the noise is maximal when the noise variance is 45 degrees. As the KL divergence increases, the logarithm of the minimum loss function rises gradually. Consistent with the result for the BAS distribution, the KL divergence mainly affects the probability that the loss function converges to its minimum value for a given noise variance, whereas the noise variance mainly affects the accuracy. When the noise variance is below 10 degrees, the logarithms of the minimum loss function are below -4. § CONCLUDING REMARKS It was confirmed that Hamiltonians that propagate initial states to given target states can be derived by the Born machine. This method can be used for hardware simulation of quantum computers. The robustness against dephasing and depolarizing noise was also revealed: if the depolarization variance is at most about 10 degrees, HEBM can derive the Hamiltonians accurately enough on real quantum devices. Benchmarking on real quantum devices is still an open problem. Some other issues also remain: as both the sampling rate and the number of qubits increase, the calculation time increases. These can be addressed by the techniques mentioned in the previous section. This method may contribute not only to hardware simulation but also to fabricating materials, once it can be applied to large systems.
http://arxiv.org/abs/2312.16432v1
{ "authors": [ "Hikaru Wakaura", "Andriyan Bayu Suksmono" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20231227064158", "title": "Derivation of Hamiltonians from time propagations using Born machines" }
[email protected] Friedrich-Schiller-University, Institute of Applied Physics, Albert-Einstein-Str. 15, 07745, Jena, Germany Friedrich-Schiller-University, Institute of Applied Physics, Albert-Einstein-Str. 15, 07745, Jena, Germany Fraunhofer Institute for Applied Optics and Precision Engineering IOF, Albert-Einstein-Str. 7, 07745, Jena, Germany A fundamental brick of light-matter interaction at large optical intensities is the generation of a plasma. The optically-induced plasma in turn plays a fundamental role in determining the optical propagation. The plasma generation is a result of the interplay between multi-photon, tunnel and avalanche ionization. Here we use the basic rate equations to discuss an analytical model for the interaction between these physical effects. After defining a nonlinear impulse response for the system, we describe how the interplay depends on the features of the optical pulses. Our approach strongly simplifies the modelling of the propagation of ultrashort pulses, paving the way to a much easier and faster interpretation of experimental observations, with potential impact on the broad fields of ultrafast light-matter interaction and laser micro-machining. Application of the Green function formalism to the interplay between avalanche and multiphoton ionization induced by optical pulses Stefan Nolte January 14, 2024 The interaction of intense optical pulses with matter is a topic of central interest in physics. Indeed, the nonlinear optical regime is intrinsically a strongly out-of-equilibrium system due to the rapidity in the exchange of energy between the electromagnetic field and the atoms <cit.>. This allows the experimental investigation of new regimes in condensed matter physics, including many body problems and Floquet systems <cit.>, or the measurement of the material properties, using for example high harmonic generation <cit.>. Beyond the basic physics, the problem is of primary importance because strong lasers can modify the properties of a material in a temporary manner <cit.> or by inducing permanent modifications, a phenomenon widely exploited in laser micro-machining <cit.>. One common feature of the interaction between intense light and matter is the formation of plasma <cit.>. The impinging photons provide energy to the electrons of the material, thus inducing a considerable amount of electronic transitions towards higher energy states. This process takes place even in materials which are transparent in the linear regime due to the tunnel ionization (TI) <cit.> and multi-photon ionization (MPI) <cit.>. Once free carriers are generated, the optical field is accelerating them, providing on average an increase in the kinetic energy. The accelerated electrons can then collide with other less energetic electrons, inducing a field-dependent and concentration-dependent amplification. Such an effect is called avalanche ionization (AI), and it is often associated with the dielectric breakdown <cit.>. On a theoretical ground, the problem of the strong coupling between light and matter can be solved quantum-mechanically using TDDFT (Time Dependent Density Functional Theory) <cit.>, yet a very demanding approach from the computational point of view. At a larger scale, the dynamics of the sea of electrons subject to an optical field can be solved using the machinery of the Boltzmann's transport equations <cit.>.
In most of the cases, this method is prohibitive given it requires the full knowledge of the energetic dispersion for the electrons and of the loss mechanisms (e.g., excitons and electron-phonon coupling).A common and prolific approach is to define a distribution for the excited electrons n_e( r,t) averaged over the energetic and momentum states, thus depending only on space and time. This approximation works well in a wide range of materials, including liquids <cit.>, amorphous solids <cit.>, and semiconductors <cit.>.Neglecting electron diffusion in space, n_e is then dictated by the rate equation <cit.>∂ n_e/∂ t=W_PI(I)+ (α_av I - 1/τ_el) n_e - σ n_e^2.In Eq. (<ref>) I( r,t) is the optical intensity of a field with central frequency ω. The first term on the RHS (Right Hand Side) W_PI is the photo-ionization rate (PI) as predicted by the Keldysh's theory for atomic transitions under the influence of a periodic field <cit.>; the second term on the RHS α_avI n_e accounts for the electrons excited by the avalanche effect <cit.>;the third term -n_e/τ_el accounts for the average lifetime of the excited electrons due to the various recombination mechanisms. Finally, the last term proportional to the square of the density represents the nonlinear (with respect to the electron density n_e) recombination effects, such as Auger. The Keldysh's theory is amazingly capable of modelling both tunnelling and multi-photon ionization: the transition between the two regimes is demarcated by the so-called Keldysh parameter γ∝ω/q√( m c n ϵ_0 E_g/I), where E_g is the bandgap of the material, c is the speed of light, ϵ_0 is the vacuum dielectric permittivity, and finally m and q the mass and the charge of the electron <cit.>. In the MPI case γ is large, in turn yielding W_PI≈1/ħω∑_Nα_N/N I^N, where α_N is the cross-section for the ionization involving N photons. As shown below, the type of ionization (TI or MPI) does not significantly change our results, in fact providing only a different mathematical relationship between the optical intensity and the source of electrons. Thus, for simplicity, we will restrict ourselves at first on MPI and neglect TI. At the end of the Article we will show how our approach works when the full Keldysh formula is applied. Incidentally, we are for now considering the general case of multiple multi-photon transitions, although usually the smallest one fulfilling Nħω≥ E_g is the relevant one.The usage of a rate equation to model the plasma formation in strong optical fields was already described by Shen in his seminal book about nonlinear optics <cit.>, and later expanded to its final form by Kennedy in 1995 <cit.>. More complicated versions introducing different energetic states have been discussed in literature <cit.>, but the fundamental physics does not strongly depend on that. Furthermore, as discussed for example in Ref. <cit.>, the optical breakdown is usually not strongly affected by σ n_e^2, which can then be neglected.Under the assumptions made, Eq. (<ref>) is linear with respect to the electronic distribution n_e: it can then be solved using the Green's function formalismn_e( r, t) = 1/ħω∑_Nα_N/N∫_-∞^ ∞I^N( r,t^') G( r, t,t^') dt^',where the impulse response isG( r, t,t^') = e^α_av∫_t^'^t I( r,τ) dτ e^-(t-t^')/τ_el u_0(t-t^').In Eq. (<ref>) u_0 is the Heaviside function, necessary to fulfill the causality condition. The Green function also retains the reciprocity property given that G( r,t,t^')=G( r,t^',t). 
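A minimal numerical sketch of the model just introduced, with illustrative dimensionless parameters rather than material values, integrates the rate equation directly and evaluates the same density through a discretized version of the Green-function integral of Eqs. (<ref>)-(<ref>); the photon order, cross-section, avalanche coefficient and lifetime below are all assumptions for illustration only.

import numpy as np
from scipy.integrate import solve_ivp

N, alpha_N, alpha_av, tau_el, hw = 4, 1.0, 4.0, 2.0, 1.0   # illustrative, dimensionless values
tau, I0 = 1.0, 1.5
I = lambda t: I0*np.exp(-2*t**2/tau**2)                     # Gaussian pulse

# (a) direct integration of the rate equation (the sigma*n_e^2 term is neglected)
rhs = lambda t, n: alpha_N/(N*hw)*I(t)**N + (alpha_av*I(t) - 1/tau_el)*n
sol = solve_ivp(rhs, (-5*tau, 5*tau), [0.0], rtol=1e-9, atol=1e-14)

# (b) the same density from the Green-function integral: each MPI slice weighted by its gain
t = np.linspace(-5*tau, 5*tau, 4001)
dt = t[1] - t[0]
F = np.cumsum(I(t))*dt                                      # running fluence
src = alpha_N/(N*hw)*I(t)**N                                # multiphoton source W_PI
n_green = np.array([np.sum(src[:k+1]*np.exp(alpha_av*(F[k] - F[:k+1])
                    - (t[k] - t[:k+1])/tau_el))*dt for k in range(t.size)])

print(sol.y[0, -1], n_green[-1])                            # the two estimates should agree closely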
Equations (<ref>-<ref>) are the core of the reasoning and findings we are developing in this paper. In this form and as anticipated earlier, it is clear that the form of the field-induced ionization solely changes the forcing term in Eq. (<ref>), thus not affecting the solution method we are proposing here. As a matter of fact, solutions of Eq. (<ref>) in terms of integral were already sketched in the original paper by Kennedy <cit.>, and explicitly written by Feng and collaborators in Ref. <cit.>. Nonetheless, such integral solution has not been explored in depth to discuss the interplay between MPI and AI; indeed, verbatim from Ref. <cit.>: “Since the analytic solution (8) is not very informative, we have numerically solved the density equation”. Oppositely to the reported view, we show that writing such an integral in terms of the Green formalism permits to disclose how field ionization (either MPI or TI) and AI are working together and explore the underlying physical behavior. The first advantage of our approach is the possibility to clearly distinguish the origin of the excited electrons, and how MPI and AI interact with each other. To further simplify the notation, hereafter we will focus on the case when only one single multi-photon transition is relevant: the generalization to multiple simultaneous transitions is straightforward. From Eqs. (<ref>-<ref>) the net generation of excited electrons per unit time is∂ n_e/∂ t = α_N/Nħω[I^N(t)+(α_av I(t) - 1/τ_el) ∫_-∞^t I^N(t^') × e^α_av∫_t^'^t I(τ) dτ e^-(t-t^')/τ_eldt^'].Equation (<ref>) allows immediate physical interpretation: the excitation rate is the sum of the instantaneous MPI (first term on the RHS) plusthe avalanche electrons generated at each instant normalized with respect to the lifetime τ_el (the integral term). The seed for the avalanche electrons is provided by the MPI at former times, whereas the history of the pulse intensity I(t) determines the overall amplification for each electron generated by MPI. Although Eq. (<ref>) is somehow trivial under our model based upon the temporal Green function, it can not be easily extracted from numerical solutions of Eq. (<ref>).We now turn our attention to describe the type of response modelled through Eq. (<ref>). For the sake of simplicity, hereon we will omit the spatial dependence which is not relevant in our current discussion. Once a shape for the pulse is fixed, the Green function depends only on the product α_avI_0, where I_0 is the intensity peak. The shape of the response of the material depends on the relative position along the pulse profile [i.e., G(t,t^')≠ G(t-t^')] through the avalanche term, i.e., the response is not invariant with respect to time shifts; accordingly, the distribution n_e is not given by a simple temporal convolution. On a more physical ground, Eq. (<ref>) is telling us that n_e at a given instant t is the sum of the electrons excited by MPI at each previous instants, but such electron density needs to be weighted with respect to the amount of amplification -fixed by the net balance between avalanche and losses- the electrons have been subject to. From Eq. (<ref>), the position t_max of the extrema of the Green function G versus t is I(t_max) = 1/τ_elα_av.The causality condition imposes the additional constraint t_max>0. The former equation holds valid irrespective of the temporal shape I(t). Generally speaking, Eq. 
(<ref>) correctly predicts that, for larger α_av or for longer electron lifetime τ_el, the maximum of the avalanche-generated electrons shifts towards the trailing edge of the pulse, regardless of when the seed electrons (i.e., t^') have been generated. The fluence is the temporal integral of the intensity I. Defining the time-windowed fluence F(t_1,t_2) as the amount of fluence between the two instants t_1 and t_2, Eq. (<ref>) provides G(t,t^') = [ 1 + ∑_n=1^∞ α_av^n F^n(t^',t)/n!] e^-(t-t^')/τ_el u_0(t-t^'). The optical response (free electrons actually change both the imaginary and the real part of the refractive index) of the material can then be controlled by shaping the impinging pulses, with potential applications in the novel field of photonic materials encompassing a time-dependent response <cit.>. Furthermore, as the intensity is ramping up, more terms in Eq. (<ref>) become relevant: the joint action of MPI and AI effectively behaves like a multi-photon ionization of order N+n <cit.>, but encompasses an additional memory effect registering the previous slices of the optical pulses already passed through the material <cit.>. We now focus on how the MPI and AI interact. Given that Eq. (<ref>) provides the amount of electrons excited by an impulsive pulse placed in t=t^', the avalanche process excites more electrons than the MPI once lim_t→∞ F(t^', t) > 1/α_av. This means that the share of AI-excited electrons with respect to the ones ascribed to MPI depends primarily on the fluence which crossed the material after the excitation instant t^'. Hence, the shape of the pulse -including the pulse duration τ- determines the (continuous) transition between the two regimes; the transition between MPI and AI occurs at a given instant t^' along the optical pulse. When the maximum fluence F(-∞,∞) is lower than 1/α_av, MPI remains the dominant mechanism exciting the electrons to the conduction band. Actually, Eq. (<ref>) alone does not ensure the dominance of AI over MPI in the ionization process. Indeed, AI could be dominant at the front edge of the pulse, where the low intensity generates a modest number of seed electrons via the MPI to be later accelerated by the avalanche process. To assess this matter, we can calculate which instant t^' (we dub it t^*) contributes the largest number of electronic transitions. By differentiating the integrand of Eq. (<ref>) with respect to t^' and setting the derivative equal to zero, we find the condition ∂ I/∂ t = α_av/N I^2 - I/Nτ_el. For simplification, let us assume a single-humped pulse shape, see Fig. <ref> for a graphical solution of the equation <cit.>. The RHS of Eq. (<ref>) needs to be positive to ensure t^*<0 to achieve net gain, thus setting the constraint I_0>1/(α_avτ_el) with I_0 being the maximum intensity. In fact, in the absence of avalanche (α_av=0) and for τ_el→∞, the maximum of n_e corresponds to the intensity peak. When α_av is large enough, the instant t^* moves towards the front edge of the pulse, where the temporal derivative ∂ I/∂ t [RHS of Eq. (<ref>)] is non-vanishing and positive, see the red curves in Fig. <ref>(a-c). Thus, larger intensities enhance the shift of t^* towards earlier instants, the larger the α_av the larger the shift is [compare the blue and orange curves in Fig. <ref>(d)]. Finally, greater N favors the MPI by decreasing the temporal shift of t^* with respect to lower N [compare the blue and green curves in Fig. <ref>(d)]. We now proceed to show applications of Eqs.
(<ref>-<ref>) for specific optical pulse shapes: the scope is to demonstrate the versatility of our approach in determining the influence of the pulse shape on the plasma generation. We make the additional assumption that the shape of the optical pulse is fixed: we are thus neglecting self-phase modulation, both in space and time <cit.>. To be more quantitative and provide closed-form solutions for n_e, we now suppose the pulse to be a square function, I(t)=I_0 rect_τ(t), where the rect function is non-vanishing and equal to 1 only for |t|<τ/2. After defining the net gain g(I_0)=α_avI_0-1/τ_el, Eq. (<ref>) with the help of Eq. (<ref>) providesn_e(t)= α_N I_0^N/g(I_0) Nħω{[e^g(I_0)(t+τ/2)-1]u_0(-t+τ/2) } + n_e^max e^-t/τ_el u_0(t-τ/2) .In Eq. (<ref>) we also defined the peak of the electron density asn_e^max = α_N I_0^N/g(I_0) Nħω[e^g(I_0)τ -1 ].The interpretation of Eq. (<ref>) is straightforward: during the pulse the number of electrons grows exponentially with the intensity-dependent net gain g(I_0). The density n_e achieves its maximum n_e^max at the end of the pulse (t=τ/2), then exponentially decays with a lifetime determined by τ_el. The interplay between AI and MPI can be evaluated by expanding the exponential term in Eq. (<ref>) in its power series. At low gains (g(I_0)τ≪ 1), we obtain n_e^max= α_N I_0^N τ /(Nħω), i.e., the case of electrons generated only by MPI. In the opposite limit g(I_0)τ≫ 1, we have a purely exponential growth of the first electrons generated at the leading edge of the pulse, n_e^max∝. W_PI|_t=-τ/2 e^g(I_0)τ/g(I_0). For the intermediate case we expand the exponential series up to the quadratic terms, providing the following condition for the transition to an avalanche-dominated ionizationα_avI_0 > 2/τ + 1/τ_el,in agreement with Eq. (<ref>). Thus, in the case of square pulses the transition between MPI and AI does not depend on the transition order N: the shortest between the pulse duration τ and the electron lifetime τ_el is actually determining the transition between the two regimes. In agreement with the numerical simulations of Eq. (<ref>), AI becomes dominant after a given threshold intensity dependent on the pulse duration, with shorter pulses favouring MPI. Finally, from Eq. (<ref>) it is expected that the nonlinear absorption increases as I_0^N for small enough intensities, then depending as I_0^N+1 when AI kicks in, finally going to a full exponential increase when AI is largely dominant. This general trend is upper bounded in real experiments by the optical breakdown and permanent modifications induced in the material. We now pass to the most common case of a Gaussian-shaped pulse, I=I_0e^-2t^2/τ^2. The function F is then F(t,t^')=I_0τ/2√(π/2)[erf( √(2)t/τ) - erf( √(2)t^'/τ) ]. From Eq. (<ref>) we get t_max = τ√(log(τ_elα_avI_0)/2). Figure <ref> shows the Green function for two pulse durations and two values of I_0α_av, where we fixed τ_el=2τ. The peak of G versus t is always positioned after the peak of the pulse, i.e., t_max>0. Due to the exponential amplification, the peak of G steeply grows for larger products I_0α_av (comparison between columns) andfor longer pulses(comparison between different rows), as well known both from numerical simulations and experiments <cit.>. When τ_el is much longer than the pulse duration τ (e.g., femtosecond pulses), the Green function does not drop significantly for increasing time. 
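The closed-form expressions derived above can be evaluated directly; the following short sketch, in the same dimensionless units and with illustrative parameters, collects the square-pulse peak density, the Gaussian-pulse t_max and the time-windowed fluence.

import numpy as np
from scipy.special import erf

alpha_N, N, alpha_av, tau_el, hw = 1.0, 4.0, 4.0, 2.0, 1.0   # illustrative, dimensionless values

def ne_max_square(I0, tau):
    # Peak density after a square pulse; reduces to alpha_N*I0**N*tau/(N*hw) for small net gain
    g = alpha_av*I0 - 1.0/tau_el
    return alpha_N*I0**N/(g*N*hw)*np.expm1(g*tau)

def t_max_gauss(I0, tau):
    # Instant of the maximum of G for a Gaussian pulse (requires tau_el*alpha_av*I0 > 1)
    return tau*np.sqrt(np.log(tau_el*alpha_av*I0)/2)

def F_window_gauss(t1, t2, I0, tau):
    # Time-windowed fluence of I = I0*exp(-2 t^2/tau^2) between t1 and t2
    return I0*tau/2*np.sqrt(np.pi/2)*(erf(np.sqrt(2)*t2/tau) - erf(np.sqrt(2)*t1/tau))

print(ne_max_square(1.5, 1.0), t_max_gauss(1.5, 1.0), F_window_gauss(-np.inf, np.inf, 1.5, 1.0))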
On the opposite limit τ_el≪τ (e.g., nanosecond pulses), the electrons accumulation is hindered, with the the maximum of G migrating towards earlier instants, see Eq. (<ref>).The results of the integration of Eq. (<ref>) for Gaussian pulses and α_N=1 is shown in Fig. <ref>. In agreement with the shape of the Green function, n_e first reaches a maximum after the pulse peak, and then drops with a rate determined by τ_el. The maxima of n_e (magenta dashed line) are always placed at t=t_max, the latter corresponding to the peak of the Green functions, regardless of t^'. With respect to α_av, the shape of n_e versus t does not substantially changes, except for an exponential amplification. The interplay between MPI and AI can be first evaluated by finding the temporal instants where the density doubles with respect to the maximum of n_e calculated when α_av=0 (white solid line in Fig. <ref>). Such condition is achieved only when α_av overcomes a given threshold, strongly dependent on the other parameters of the pulse. After achieving the threshold, the curve follows a hyperbola-like trend. For the sake of comparison with the theory developed above, we draw in the same graph the points where the condition defined by Eq. (<ref>) is satisfied (yellow solid line). The two curves are almost parallel, with the theoretical prediction being slightly more stringent (i.e., AI dominance at earlier times and lower avalanche coefficients) than the numerical one. Next, we investigate how the maximum electron density n_e^max varies for different pulse durations τ and avalanche coefficients α_av. Typical results are shown in Fig. <ref>. In qualitative agreement with Eq. (<ref>), the electron density grows exponential both with τ and α_av. The transition to an avalanche-driven process -defined as a doubling of n_e as in Fig. <ref>- is represented by the white solid line in Fig. <ref>. In full analogy with Eq. (<ref>), the curve is hyperbolic, but with a coefficient now dependent on the multiphoton order N.We now apply our model to a real case, fused silica illuminated by optical beams emitting at two different wavelengths, λ=500 nm and λ=800nm. Whereas up to this point our calculations were carried out in normalized units, we now pass to physical units. From Ref. <cit.> we take the parameters α_av=4× 10^-4m^2J^-1 and E_g=9eV, in turn providing N=4 at λ=500nm and N=6 at λ=800nm.For the photo-ionization we employ the full Keldysh formula, thus accounting for the transition from MPI to TI as the impinging intensity increases <cit.>.Figure <ref> shows the corresponding maximum in the electron density versus the peak intensity I_0 for Gaussian pulses of different widths and different wavelengths. The solid lines represent the predicted value in the absence of avalanche and τ_el→∞. The use of the full Keldysh formula is responsible for the abrupt changes in the distribution. In agreement with the multiphoton ionization, the slope is steeper for longer wavelength due to the larger N. When avalanche is accounted for, a sudden exponential growth in n_e is taking place. Before such a divergence, the two cases match well for long enough electron lifetime τ_el. Furthermore, the generation of electrons is stronger for longer pulses for a fixed peak intensity I_0. In particular, the avalanche amplification is strongly enhanced for longer pulse durations, in agreement with the literature and Eq. (<ref>). The dashed lines in Fig. 
<ref> (labelled as simplified in each panel) is the amount of electrons generated starting from the seed induced by PI only at t=t^*, see Eq. (<ref>) and Fig. <ref>. When τ≪τ_el, the exact number of excited electrons is a little bit larger, but the trend with I_0 is almost identical. When τ_el is shorter than the pulse duration τ, the approximation is overestimating the real amount of excited electrons. In agreement with Eq. (<ref>), the onset of avalanche for a given peak intensity is determined by the interplay between electronic lifetime and pulse duration. In the case of Gaussian pulses, solution of Eq. (<ref>) is a very good proxy for determining the transition to an avalanche-dominated regime. As a last result, we aim to apply our approach to discuss how the simultaneous illumination of the material with two pulses of different duration affects the plasma generation.Recently, the relevant role of temporal contrast in determining the modifications in bulk materials has been investigated experimentally, both in fused silica and in silicon <cit.>.We consider two pulses illuminating the sample simultaneously: a short [dubbed I_s(t)] and a long pulse [dubbed I_l(t)] with a duration of 150 fs and 10 ps, respectively. We also assume that the two pulses are perfectly synchronized by placing the peak of both pulses at t=0. In applying the Keldysh formula, we simply used the sum of the two intensities: thus, we are neglecting possible coherent interference during the transitions between the excited electronic states, see e.g. the coherent control <cit.>. Eq. (<ref>) providesG_joint(t,t^') = e^α_av∫_t^'^t [ I_s( r,τ) + I_l( r,τ)] dτ e^-(t-t^')/τ_el u_0(t-t^'),that is, the total avalanche gain is simply given by the multiplication of the separate gains, at least until Eq. (<ref>) holds valid. The electron density isn_e(t) =∫_-∞^ ∞ W_PI(I_s +I_l) G_joint(t,t^') dt^'. Figure <ref> compares the maximum electron density achieved for different values of the peak intensities, without (top left panel) or with (top right panel) avalanche (note the different scaling on the vertical axis). The horizontal axis is the peak intensity of the short pulse I_s(t), whereas each curve corresponds to a different peak intensity for the long pulse I_l(t). Important to stress, here the peak intensity, which actually depends on the ratio between the pulse energy and the pulse duration, is kept fixed. We start discussing the case without avalanche, which provides the electrons excited by direct field ionization (either MPI or TI).The joint curve differs from the case of isolated pulses only when the intensities are comparable, in agreement with the Keldysh formula. When avalanche is turned on, the exponential gain caused by the short pulse alone does not significantly changein the presence of a long pulse with an intensity up to 2.0× 10^-12 Wcm^-2 and lower than the short pulse(see the blue and the black line in the bottom row of Fig. <ref>).When I_l approaches the threshold for avalanche, the exponential gain strongly differs from the gains calculated when only I_s is illuminating the material: indeed, the amplification, defined as n^max_e/n_e^max(I_l=0) and plotted in the bottom of Fig. <ref>, does not saturate to unity when I_s gets larger and larger. 
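The two-pulse configuration can be explored numerically with the same discretized Green-function integral; the sketch below uses dimensionless, illustrative parameters and a simple multiphoton source in place of the full Keldysh rate, so the numbers are not those of the 150 fs and 10 ps pulses discussed in the text.

import numpy as np

alpha_N, N, alpha_av, tau_el, hw = 1.0, 4, 4.0, 50.0, 1.0   # illustrative, dimensionless values

def n_e_max(I_of_t, t):
    # Peak density from the discretized joint Green function, with a multiphoton source ~ I^N
    dt = t[1] - t[0]
    I = I_of_t(t)
    F = np.cumsum(I)*dt
    src = alpha_N/(N*hw)*I**N
    n = [np.sum(src[:k+1]*np.exp(alpha_av*(F[k] - F[:k+1]) - (t[k] - t[:k+1])/tau_el))*dt
         for k in range(t.size)]
    return max(n)

t = np.linspace(-40.0, 40.0, 8001)
I_s = lambda t_: 1.2*np.exp(-2*t_**2/1.0**2)      # short pulse
I_l = lambda t_: 0.4*np.exp(-2*t_**2/10.0**2)     # long pulse
amplification = n_e_max(lambda t_: I_s(t_) + I_l(t_), t)/n_e_max(I_s, t)
print(amplification)   # exceeds unity once the long pulse approaches the avalanche threshold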
Essentially, using two pulses it is possible to decouple the plasma density from the intensity amplitude and shape, the latter being non-separable in the case of single-pulse illumination. If the long pulse precedes the short pulse by a delay small with respect to τ_el, such a scheme can be used to study the interaction between a tunable density of electrons (fixed by the long pulse, assumed not to induce permanent modifications in the material) and a short pulse of variable intensity. In conclusion, we introduced an analytical model based upon the Green function formalism to depict the excitation of high-energy electrons in the presence of nonlinear photo-ionization (i.e., multiphoton and tunnel) and avalanche ionization. The model allows a versatile and rapid investigation of how many electrons are excited for a given optical pulse, in fact providing a clear picture of the interplay between the different ionizations and its dependence on the parameters of the optical pulse. Due to its simplicity, the model can be readily integrated into more advanced algorithms computing the optical propagation in the nonlinear regime, such as beam propagation method codes or approximate solutions based upon the variational theory. Experimentally, our model is a fast and efficient tool to describe pump-probe set-ups measuring the temporal dynamics of the absorption after illumination with an intense pulse <cit.>. With respect to applications, our method represents a rapid solution for estimating the best parameters to inscribe permanent structures in solids. Finally, our approach paves the way to the employment of pulse shaping to control the electron density for inputs in proximity of the onset of avalanche ionization <cit.>. Supported by the Free State of Thuringia and the European Social Fund Plus (2022FGR0002). European Union's Framework Programme for Research and Innovation Horizon 2020 under the Marie Skłodowska-Curie Grant Agreement No. 889525. [Corkum(1993)]Corkum:1993 author author P. B. Corkum, title title Plasma perspective on strong field multiphoton ionization, https://doi.org/10.1103/PhysRevLett.71.1994 journal journal Phys. Rev. Lett. volume 71, pages 1994 (year 1993)NoStop [Disa et al.(2021)Disa, Nova, and Cavalleri]Disa:2021 author author A. S. Disa, author T. F. Nova, and author A. Cavalleri, title title Engineering crystal structures with light, @noopjournal journal Nat. Phys. volume 17, pages 1087 (year 2021)NoStop [Chu and Telnov(2004)]Chu:2004 author author S.-I. Chu and author D. A. Telnov, title title Beyond the Floquet theorem: generalized Floquet formalisms and quasienergy methods for atomic and molecular multiphoton processes in intense laser fields, https://doi.org/https://doi.org/10.1016/j.physrep.2003.10.001 journal journal Phys. Rep. volume 390, pages 1 (year 2004)NoStop [Oka and Kitamura(2019)]Oka:2019 author author T. Oka and author S. Kitamura, title title Floquet engineering of quantum materials, @noopjournal journal Annu. Rev. Condens. Matter Phys. volume 10, pages 387 (year 2019)NoStop [Goulielmakis and Brabec(2022)]Goulielmakis:2022 author author E. Goulielmakis and author T. Brabec, title title High harmonic generation in condensed matter, @noopjournal journal Nat. Photon.
http://arxiv.org/abs/2312.16085v1
{ "authors": [ "Alessandro Alberucci", "Chandroth P. Jisha", "Stefan Nolte" ], "categories": [ "physics.optics" ], "primary_category": "physics.optics", "published": "20231226151658", "title": "Application of the Green function formalism to the interplay between avalanche and multiphoton ionization induced by optical pulses" }
http://arxiv.org/abs/2312.16480v1
{ "authors": [ "Gilles Dowek", "Murdoch J. Gabbay" ], "categories": [ "cs.LO", "F.4.1; I.2.3" ], "primary_category": "cs.LO", "published": "20231227090605", "title": "Permissive-Nominal Logic (journal version)" }
§ INTRODUCTION

The Standard Model (SM) of elementary particles successfully describes the known particles and their interactions. It has explained the mechanism of electroweak symmetry breaking and the mechanism of mass acquisition for gauge bosons and fermions. With the discovery of the Higgs particle in 2012 <cit.>, the SM is believed to be a correct theory. However, there are some mysteries in the SM, for example the generational structure of fermions. That structure comes from the magnitude of the Yukawa interactions, which describe the interaction between charged fermions and the Higgs boson in the SM. The Yukawa interactions are simply parameterized to match experimental data. Therefore, it is not possible to explain theoretically the differences in masses and the flavor mixing between generations in the SM. In particular, the flavor mixing of leptons is predicted to be larger than that of quarks, and the magnitude of the CP phase of the lepton sector is not yet precisely known.

Leptons have another mystery related to neutrinos. In the SM the neutrinos are massless, because there are no right-handed neutrinos. However, the discovery of neutrino oscillations has shown that neutrinos have tiny masses. Although neutrino masses cannot be explained by the SM, the neutrino mass matrix is very important when considering lepton flavor mixing. In addition, it is not yet known whether neutrinos are Dirac or Majorana particles. If the neutrinos are Majorana particles, two additional CP phases, called Majorana phases, appear, and their magnitudes are not yet known.

Therefore, many studies have been conducted to understand the generation structure. For example, the Froggatt-Nielsen (FN) mechanism introduced a global U(1)_FN symmetry <cit.> to provide a natural explanation for the fermion mass hierarchies. Also, several studies of non-Abelian discrete symmetries (see for review <cit.>) have been used to explain flavor mixing <cit.>. The Yukawa couplings are controlled by these symmetries, which leads to a natural explanation for the lepton mixing angles. Much work has also been done to explain neutrino masses <cit.>. The masses of the light left-handed neutrinos are often explained using heavy right-handed Majorana neutrinos and the seesaw mechanism. Majorana neutrinos are being searched for through neutrinoless double beta (0νββ) decay at experiments such as KamLAND-Zen <cit.>. These experiments have provided an upper limit on the effective Majorana neutrino mass. According to Ref. <cit.>, the inverted-ordering region is currently being explored.

In building our flavor model, in addition to the non-Abelian discrete and U(1)_FN symmetries, we focus on the Higgs sector. Many multi-Higgs models have appeared in recent times. It is natural to assume that not only the fermions but also the Higgs sector has a similar generation structure, and there are many previous studies of that type <cit.>. We consider a three Higgs doublet model (3HDM) <cit.>, in which the number of Higgs doublets is three and the doublets carry a flavor symmetry. This means we assume the Higgs sector follows the same generation structure as the fermions. In a previous study, we considered an A_4 symmetry, which is one possible non-Abelian discrete symmetry, and the 3HDM to construct a flavor model <cit.>.
In that study, we could reproduce the lepton mixing angles and give predictions for the CP violating phase and the lightest neutrino mass. However, we only reproduced the neutrino masses in the inverted hierarchy. As mentioned above, the inverted hierarchy is currently being explored and unfavor <cit.>.In this paper, we consider the S_4 symmetry that is also a possible non-Abelian discrete symmetry. The advantage of the S_4 symmetry is that the S_4 symmetry contains one doublet that is not included in the A_4 symmetry. Because of that we can build an S_4 lepton flavor model with U(1)_FN and 3HDM, which reproduce neutrino masses in the normal hierarchy.Additionally, our model predicts the mixing angles and the magnitude of Dirac CP phase of leptons. We also predict the lightest neutrino mass, the sum of neutrino masses, the effective Majorana neutrino mass at the 0νββ decay experiment and the two Majorana phases in normal-ordering.The rest of this paper is organized as follows. In Section <ref>, we present the lepton flavor model and obtain mass matrices. In Section <ref>, we show the numerical analysis of our flavor model. In Section <ref>, we show the scalar potential analysis. In Section <ref>, we summarize this paper. We give the brief introduction of the S_4 symmetry and show the multiplication rule of the S_4 group in Appendix <ref>. We shortly introduce the 3HDM with S_4 symmetry in Appendix <ref>. § LEPTON FLAVOR MODEL WITH S_4 SYMMETRY In this section, we present a lepton flavor model with S_4 symmetry, and we show the mass matrices of charged leptons and neutrinos.The S_4 symmetry is the symmetry of the S_4 group, which is the symmetric group of order 4. The S_4 group has two types of singlets 1, 1', one type of doublet 2, and two types of triplets 3, 3', as noted in Appendix <ref>. We assign the left-handed leptons as an S_4 triplet, the right-handed electron and muon as S_4 doublets, and the right-handed tauon as an S_4 singlet. Then, we suppose three right-handed Majorana neutrinos to obtain neutrino masses with the seesaw mechanism. Uniquely, the right-handed electron neutrino is assigned as an S_4 singlet, and the right-handed muon neutrino and tauon neutrinos are assigned as S_4 doublets.Here, we introduce the three Higgs doublets that are assigned as an S_4 triplet. This means the Higgs sector has a similar generation structure to fermions. Additionally, we introduce two scalar fields Θ and X, where the field Θ is an S_4 singlet and the field X is an S_4 doublet. This allows us to preform a precise reproduction of experimental results. We impose the U(1)_FN symmetry on the added scalar fields and right-handed lepton doublet, and we limit the couplings of scalar fields. In Table 1, we summarize the particle assignments of the SU(2)_L, S_4 and U(1)_FN symmetries. We can write down the Lagrangian for Yukawa interactions and Majorana mass term in our model. The SU(2)_L× S_4× U(1)_FN invariant Lagrangian is, ℒ_Y = ℒ_ℓ + ℒ_D + ℒ_M + h.c. ,where,ℒ_ℓ = y_eμ/Λℓ̅ϕℓ_RΘ+ y_τℓ̅ϕτ_R+ y_ℓ/Λℓ̅ϕℓ_R X, ℒ_D= y_Deℓ̅ϕ̃ν_eR+y_Dμτℓ̅ϕ̃ν_R, ℒ_M= 1/2M_eRν̅_eR^Cν_eR+1/2M_μτ Rν̅_R^Cν_R.Note that y_eμ , y_τ , y_ℓ, y_De and y_Dμτ are Yukawa couplings and, M_eR and M_μτ R are the right-handed Majorana neutrino masses. We consider the Yukawa coupling y_eμ , y_τ, y_ℓ as real numbers, and y_De and y_Dμτ to be complex numbers. At the high-energy scale, we take the real VEVs of Θ and X as ⟨Θ⟩ = Θ_0,⟨ X ⟩ =(X_1,0). 
Then, after spontaneous symmetry breaking (SSB), the three Higgs doublets have the real VEVs ⟨ϕ_i ⟩ = [ 0; 1/√(2)v_i ] (i=1,2,3). For the charged lepton sector Eq. (<ref>), we derive the Yukawa interactions following the S_4 multiplication rules Eq. (<ref>)-(<ref>) in Appendix <ref>;y_eμ/Λℓ̅ϕℓ_RΘ = y_eμΘ/Λ[ ℓ̅_e; ℓ̅_μ; ℓ̅_τ ]×[ ϕ_1; ϕ_2; ϕ_3 ]×[ e_R; μ_R ]= y_eμΘ/Λ[ 1/√(2)(ℓ̅_μϕ_2 -ℓ̅_τϕ_3)e_R+1/√(6)(-2ℓ̅_e ϕ_1+ℓ̅_μϕ_2+ℓ̅_τϕ_3)μ_R] →y_eμΘ_0/Λ[ 1/2(μ̅_L v_2 -τ̅_Lv_3)e_R+1/2√(3)(-2e̅_Lv_1+μ̅_Lv_2+τ̅_Lv_3)μ_R], y_τℓ̅ϕτ_R =y_τ[ ℓ̅_e; ℓ̅_μ; ℓ̅_τ ]×[ ϕ_1; ϕ_2; ϕ_3 ]τ_R =y_τ (ℓ̅_e ϕ_1+ℓ̅_μϕ_2+ℓ̅_τϕ_3)τ_R →y_τ/√(2)(e̅_Lv_1+μ̅_Lv_2+τ̅_Lv_3)τ_R,y_ℓ/Λℓ̅ϕℓ_R X = y_ℓ/Λ[ ℓ̅_e; ℓ̅_μ; ℓ̅_τ ]×[ ϕ_1; ϕ_2; ϕ_3 ]×[ e_R; μ_R ]×[ X_1; X_2 ]= y_ℓ1/Λ(ℓ̅_eϕ_1+ℓ̅_μϕ_2+ℓ̅_τϕ_3)(e_R X_1+μ_R X_2) +y_ℓ2/Λ[1/√(2)(ℓ̅_μϕ_2-ℓ̅_τϕ_3)(e_R X_2+μ_R X_1)+1/√(6)(-2ℓ̅_eϕ_1+ℓ̅_μϕ_2+ℓ̅_τϕ_3)(e_R X_1-μ_R X_2)] →y_ℓ1/√(2)Λ(e̅_L v_1+μ̅_L v_2+τ̅_L v_3)e_R X_1 +y_ℓ2/Λ[1/2(μ̅_L v_2-τ̅_L v_3)μ_R X_1+1/2√(3)(-2e̅_L v_1+μ̅_L v_2+τ̅_L v_3)e_R X_1 ].The mass matrix from Eq. (<ref>) and Eq. (<ref>) is M_ℓ 1, whereM_ℓ1= [0 -1/√(3)y_eμΘ_0/Λ v_11/√(2)y_τ v_1; 1/2y_eμΘ_0/Λ v_2 1/2√(3)y_eμΘ_0/Λ v_21/√(2)y_τ v_2;-1/2y_eμΘ_0/Λ v_3 1/2√(3)y_eμΘ_0/Λ v_31/√(2)y_τ v_3 ]_LR .The mass matrix from Eq. (<ref>) is M_ℓ 2, whereM_ℓ2= [(1/√(2)y_ℓ1-1/√(3)y_ℓ2)v_1 X_1/Λ 0 0; (1/√(2)y_ℓ1+1/2√(3)y_ℓ2)v_2 X_1/Λ1/2 y_ℓ2 v_2 X_1/Λ 0; (1/√(2)y_ℓ1+1/2√(3) y_ℓ2 )v_3 X_1/Λ -1/2 y_ℓ2 v_3 X_1/Λ 0; ]_LR .Then, we obtain the charged lepton mass matrix M_ℓ asM_ℓ = M_ℓ 1 + M_ℓ 2= [(1/√(2)y_ℓ1-1/√(3)y_ℓ2)v_1 X_1/Λ-1/√(3)y_eμΘ_0/Λ v_11/√(2) y_τ v_1;1/2y_eμΘ_0/Λ v_2 + (1/√(2)y_ℓ1+1/2√(3)y_ℓ2)v_2 X_1/Λ1/2√(3)y_eμΘ_0/Λ v_2 +1/2 y_ℓ2 v_2 X_1/Λ 1/√(2)y_τ v_2; -1/2y_eμΘ_0/Λ v_3 + (1/√(2)y_ℓ1+1/2√(3) y_ℓ2 )v_3 X_1/Λ1/2√(3)y_eμΘ_0/Λ v_3 -1/2 y_ℓ2 v_3 X_1/Λ 1/√(2)y_τ v_3 ]_LR . Next, we calculate the Dirac neutrino mass matrix from the Lagrangian for Dirac neutrino Yukawa interactions Eq. (<ref>). As with the charged lepton mass matrix, the Dirac neutrino mass matrix M_D is obtained via the following,M_D= [ 1/√(2)y_Dev_1 0 -1/√(3)y_Dμτv_1;1/√(2) y_Dev_2 1/2y_Dμτv_2 1/2√(3)y_Dμτv_2;1/√(2) y_Dev_3-1/2y_Dμτv_3 1/2√(3)y_Dμτv_3 ]_. We also derive the right-handed Majorana neutrino mass matrix M_R from Eq. (<ref>),M_R= [ M_eR00;0 M_μτ R0;00 M_μτ R ]_.After these calculations of neutrino mass matrices, we use the type-I seesaw mechanism and obtain the left-handed Majorana neutrino mass matrix m_ν:m_ν = -M_D M_R^-1 M_D^T, (m_ν)_ij=-e^2i ϕ_De|y_De|^2v_i^2/2M_eR-e^2i ϕ_Dμτ|y_Dμτ|^2v_i^2/3M_μτ R, i=j(i,j=1,2,3), (m_ν)_ij=-e^2iϕ_De|y_De|^2v_iv_j/2M_eR+e^2i ϕ_Dμτ|y_Dμτ|^2v_iv_j/6M_μτ R,i≠ j (i,j=1,2,3),0==-20pt=[-e^2i ϕ_De|y_De|^2v_1^2/2M_eR-e^2i ϕ_Dμτ|y_Dμτ|^2v_1^2/3M_μτ R -e^2iϕ_De|y_De|^2v_1v_2/2M_eR+e^2i ϕ_Dμτ|y_Dμτ|^2v_1v_2/6M_μτ R -e^2iϕ_De|y_De|^2v_3v_1/2M_eR+e^2i ϕ_Dμτ|y_Dμτ|^2v_3v_1/6M_μτ R; -e^2iϕ_De|y_De|^2v_1v_2/2M_eR+e^2i ϕ_Dμτ|y_Dμτ|^2v_1v_2/6M_μτ R -e^2iϕ_De|y_De|^2v_2^2/2M_eR-e^2i ϕ_Dμτ|y_Dμτ|^2v_2^2/3M_μτ R -e^2iϕ_De|y_De|^2v_2v_3/2M_eR+e^2i ϕ_Dμτ|y_Dμτ|^2v_2v_3/6M_μτ R; -e^2iϕ_De|y_De|^2v_3v_1/2M_eR+e^2i ϕ_Dμτ|y_Dμτ|^2v_3v_1/6M_μτ R -e^2iϕ_De|y_De|^2v_2v_3/2M_eR+e^2i ϕ_Dμτ|y_Dμτ|^2v_2v_3/6M_μτ R -e^2iϕ_De|y_De|^2v_3^2/2M_eR-e^2i ϕ_Dμτ|y_Dμτ|^2v_3^2/3M_μτ R ]_,.9[1]0 where, ϕ_De and ϕ_Dμτ are the complex phases from the Yukawa couplings y_De and y_Dμτ.§ NUMERICAL ANALYSISIn this section, we numerically obtain the PMNS matrix from the leptons mass matrices. 
We show the numerical results such as the mixing angles of leptons, Dirac CP phase, the lightest neutrino mass, the sum of neutrino masses, the effective Majorana neutrino mass at the 0νββ decay experiment, and two Majorana phases in normal-ordering.First, we get the PMNS matrix representing the lepton flavor mixing from the mass matrices of charged leptons and neutrinos obtained in the section <ref>.We find the unitary matrix V_ℓ that diagonalizes the charged lepton mass matrix. The charged lepton mass matrix M_ℓ in Eq. (<ref>) becomesM_ℓ = [ (1/√(2)y_ℓ1-1/√(3)y_ℓ2)v_1 X'_1 -1/√(3) y'_eμ v_11/√(2) y_τ v_1;1/2 y'_eμ v_2 + (1/√(2)y_ℓ1+1/2√(3)y_ℓ2)v_2 X'_11/2√(3) y'_eμ v_2 +1/2 y_ℓ2 v_2 X'_1 1/√(2)y_τ v_2; -1/2 y'_eμ v_3 + (1/√(2)y_ℓ1+1/2√(3) y_ℓ2 )v_3 X'_11/2√(3) y'_eμ v_3 -1/2 y_ℓ2 v_3 X'_1 1/√(2)y_τ v_3 ]_LR ,where, X'_1 ≡X_1/Λ, and y'_eμ≡y_eμΘ_0/Λ. We use M_ℓM_ℓ^† to find the unitary matrix V_ℓ that diagonalizes M_ℓM_ℓ^†. Similarly, we evaluate the neutrino sector in the same way. The neutrino mass matrix m_ν in Eq. (<ref>) is rewritten to be(m_ν)_ij=-1/2e^2i ϕ_De|y'_De|^2v_i^2-1/3e^2i ϕ_Dμτ|y'_Dμτ|^2v_i^2, i=j(i,j=1,2,3), (m_ν)_ij= -1/2e^2iϕ_De|y'_De|^2v_iv_j+1/6e^2i ϕ_Dμτ|y'_Dμτ|^2v_iv_j, i≠ j (i,j=1,2,3),0==-65ptm_ν=[-1/2e^2i ϕ_De|y'_De|^2v_1^2-1/3e^2i ϕ_Dμτ|y'_Dμτ|^2v_1^2 -1/2e^2iϕ_De|y'_De|^2v_1v_2+1/6e^2i ϕ_Dμτ|y'_Dμτ|^2v_1v_2 -1/2e^2iϕ_De|y'_De|^2v_3v_1+1/6e^2i ϕ_Dμτ|y'_Dμτ|^2v_3v_1; -1/2e^2iϕ_De|y'_De|^2v_1v_2+1/6e^2i ϕ_Dμτ|y'_Dμτ|^2v_1v_2 -1/2e^2iϕ_De|y'_De|^2v_2^2-1/3e^2i ϕ_Dμτ|y'_Dμτ|^2v_2^2 -1/2e^2iϕ_De|y'_De|^2v_2v_3+1/6e^2i ϕ_Dμτ|y'_Dμτ|^2v_2v_3; -1/2e^2iϕ_De|y'_De|^2v_3v_1+1/6e^2i ϕ_Dμτ|y'_Dμτ|^2v_3v_1 -1/2e^2iϕ_De|y'_De|^2v_2v_3+1/6e^2i ϕ_Dμτ|y'_Dμτ|^2v_2v_3 -1/2e^2iϕ_De|y'_De|^2v_3^2-1/3e^2i ϕ_Dμτ|y'_Dμτ|^2v_3^2 ]_,.77[1]0 where, |y'_De|^2 ≡|y_De|^2/M_eR, and |y'_Dμτ|^2 ≡|y_Dμτ|^2/M_μτ R. We use m_ν m_ν^† to find the unitary matrix V_ν that diagonalizes m_ν m_ν^†.Next, we perform the numerical analysis with the physical input parameters the masses of the charged leptons m_e, m_μ, m_τ, and the neutrino mass-squared differences Δ m^2_21, Δ m^2_31. We assume normal hierarchy for the neutrinos and take, from NuFIT 5.2 <cit.>, the neutrino mass-squared differences Δ m^2_21 = 7.41 × 10^-5 eV^2, Δ m^2_31 = 2.507 × 10^-3 eV^2 in Table <ref>. Additionally, we use the PDG data <cit.> for the masses of the charged leptons. Our model parameters arev_1, v_2, v_3, y'_eμ, y_τ, y_ℓ1, y_ℓ2, X'_1, m_1, |y'_De|, |y'_Dμτ|and ϕ_De,where, v_1, v_2, and v_3 are the VEVs of Higgs that satisfy v_1^2+v_2^2+v_3^2=v^2 with v≈246 GeV <cit.>. For the charged lepton mass matrix M_ℓM_ℓ^†, the Yukawa couplings y_ℓ1 and y_ℓ2 are restricted to be between -π and π. The remaining variables y'_eμ, y_τ, and X'_1 are determined such that the charged lepton masses m_e, m_μ, and m_τ are reproduced when M_ℓM_ℓ^† is diagonalized.For the neutrino sector, only the two type of mass-squared differences are known, and the absolute masses are not known. Therefore, of the three neutrino masses m_1, m_2 and m_3, we vary m_1 within the range satisfied by the experiment data <cit.>. From that the remaining two masses m_2 and m_3 are fixed. Next, the variables |y'_De|, |y'_Dμτ|, ϕ_De of the neutrino mass matrix m_ν m_ν^† are determined by reproducing the neutrino masses m_1, m_2 and m_3 when m_ν m_ν^† is diagonalized. Here, ϕ_Dμτ can be absorbed as a whole phase when calculating m_ν m_ν^†, so it has no physical meaning. 
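To make the fitting procedure just described concrete, the following is a minimal numerical sketch (ours, for illustration; it is not part of the original analysis) of how the neutrino mass matrix can be assembled from the parameters defined above and diagonalized. The numerical inputs are placeholders chosen inside the quoted parameter ranges, not the fitted values of the model, and ϕ_Dμτ is set to zero since, as noted above, it can be absorbed as an overall phase.

```python
import numpy as np

def m_nu(v, yDe, yDmt, phi_De, phi_Dmt=0.0):
    """(m_nu)_ij from the seesaw formulas above:
    diagonal:     -(1/2) e^{2i phi_De} |y'_De|^2 v_i^2 - (1/3) e^{2i phi_Dmt} |y'_Dmt|^2 v_i^2
    off-diagonal: -(1/2) e^{2i phi_De} |y'_De|^2 v_i v_j + (1/6) e^{2i phi_Dmt} |y'_Dmt|^2 v_i v_j"""
    a = np.exp(2j * phi_De) * yDe**2
    b = np.exp(2j * phi_Dmt) * yDmt**2
    m = np.empty((3, 3), dtype=complex)
    for i in range(3):
        for j in range(3):
            if i == j:
                m[i, j] = -a * v[i]**2 / 2 - b * v[i]**2 / 3
            else:
                m[i, j] = -a * v[i] * v[j] / 2 + b * v[i] * v[j] / 6
    return m

# placeholder inputs: VEVs in GeV converted to eV, couplings |y'| in eV^(-1/2)
v = np.array([45.0, 100.0, 220.0]) * 1e9
mnu = m_nu(v, yDe=8.3e-13, yDmt=1.82e-12, phi_De=np.deg2rad(146.0))

# V_nu diagonalizes m_nu m_nu^dagger; the eigenvalues are the squared neutrino masses
masses_sq, V_nu = np.linalg.eigh(mnu @ mnu.conj().T)
print(np.sqrt(np.abs(masses_sq)))   # m_1, m_2, m_3 in eV (meV scale for these inputs)
```

The same pattern applies to the charged lepton sector: build M_ℓ from the expression above, form M_ℓ M_ℓ^†, and diagonalize it numerically to obtain V_ℓ.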
From the above, we can numerically obtain the unitary matrices V_ℓ and V_ν that diagonalize M_ℓM_ℓ^† and m_ν m_ν^†. The PMNS matrix U, which represents the lepton flavor mixing is obtained byU=V^†_ℓ V_ν.We denote the ij components of U as U_ij. Then we can derive the mixing angles θ_12, θ_23, and θ_13 as follows; tanθ_12 = |U_12|/|U_11|,tanθ_23 = |U_23|/|U_33|,sinθ_13 = |U_13|.Therefore, we can get the PMNS matrix U by Eq. (<ref>) and also numerically obtain the mixing angles from Eqs. (<ref>) - (<ref>). From NuFIT 5.2 <cit.>, the magnitude of the mixing angles is fixed in the range shown in Table <ref>.Our results that lay within the range of the mixing angles are shown in Table <ref>. First, the relation of the VEVs of Higgs, v_1, v_2, and v_3 are shown in the Figure <ref>.The range of v_1 is almost 38.18 [GeV] ≤ |v_1| ≤ 55.67 [GeV] and takes a relatively small value compared to v_2 and v_3. The v_2 and v_3 are almost 89.51 [GeV] ∼ 106.40 [GeV] or 218.27 [GeV] ∼ 223.30 [GeV]. Our results for the range of y'_eμ, y_τ, and X'_1 are 6.93× 10^-4≤ y'_eμ≤ 9.91× 10^-4, 1.99× 10^-3≤ y_τ≤ 1.01 × 10^-2, and 2.25 × 10^-4≤X'_1 ≤ 4.21 × 10^-2 respectively. Notice the y'_eμ is restricted to a narrow range. However, y_τ and X'_1 are approximately 10^1 or 10^2 wide. Then, the results for the ranges |y'_De| and |y'_Dμτ| are 7.90 × 10^-13 [eV^-1/2] ≤ |y'_De| ≤ 8.80 × 10^-13 [eV^-1/2] and 1.81 × 10^-12 [eV^-1/2] ≤ |y'_Dμτ| ≤ 1.84 × 10^-12 [eV^-1/2] respectively. If we consider the Yukawa couplings of Dirac neutrinos are on the order of 10^-1, the masses of the right-handed Majorana neutrinos, M_eR and M_μτ R, are on the order of 10^15 [GeV] and 10^13 [GeV]. Recall we are taking the full range of -π to π for y_ℓ1 and y_ℓ2. The results of m_1 and ϕ_De will be shown later. Next, within the allowed parameter region from above, we predict the lepton mixing angles, the Dirac CP phase δ_CP, the effective Majorana neutrino mass m_ee for 0νββ decay experiments, the sum of neutrino masses, and the Majorana phases η_1, η_2.Our prediction for the mixing angle sin^2 θ_23 and Dirac CP phase δ_CP is shown in the Figure <ref>. The Dirac CP phase δ_CP can be obtained from the two equations. One equation is the Jarlskog invariant <cit.> J_CP, which is written asJ_CP=Im[U_11U^*_21U^*_12U_22].Then, we can get δ_CP by using J_CP,sinδ_CP = J_CP/s_12 c_12 s_23 c_23 s_13 c^2_13,where, s_ij=sinθ_ij and c_ij=cosθ_ij. The other equation to obtain δ_CP iscosδ_CP = s^2_12 s^2_23 + c^2_12 s^2_13 c^2_23 - |U_31|^2 /2s_12 c_12 s_23 c_23 s_13. We find the |δ_CP| in our model falls in the range 60.12^∘∼ 76.47^∘. In addition, we can predict sinθ_23 to be in the range 0.486 ∼ 0.603. The results of sinθ_12 and sinθ_13 take the full range of possible values. Then, the prediction of the lightest neutrino mass m_light and the effective Majorana neutrino mass m_ee at the 0νββ decay experiment is shown in the Figure <ref>.We derive the effective Majorana neutrino mass m_ee using m_ee = |Σ_i U^2_1i m_i|.We find m_ee≃6.33[meV] and m_light≃5.53[meV]. Both of those results lay relatively close to upper limits. Therefore, we consider this model is a relatively easy to confirm by the near future experiments. 
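For readers who wish to reproduce this extraction step, the short sketch below (ours, not from the original analysis) turns a given pair of diagonalizing matrices V_ℓ and V_ν into the mixing angles and the Dirac CP phase, using exactly the relations quoted above: U = V_ℓ^† V_ν, tanθ_12 = |U_12|/|U_11|, tanθ_23 = |U_23|/|U_33|, sinθ_13 = |U_13|, the Jarlskog invariant J_CP, and the two determinations of δ_CP.

```python
import numpy as np

def pmns_observables(Vl, Vnu):
    """Mixing angles and Dirac CP phase from U = Vl^dagger @ Vnu."""
    U = Vl.conj().T @ Vnu
    th12 = np.arctan2(abs(U[0, 1]), abs(U[0, 0]))
    th23 = np.arctan2(abs(U[1, 2]), abs(U[2, 2]))
    th13 = np.arcsin(abs(U[0, 2]))
    s12, c12 = np.sin(th12), np.cos(th12)
    s23, c23 = np.sin(th23), np.cos(th23)
    s13, c13 = np.sin(th13), np.cos(th13)
    # Jarlskog invariant J_CP = Im[U11 U21* U12* U22]
    J = np.imag(U[0, 0] * np.conj(U[1, 0]) * np.conj(U[0, 1]) * U[1, 1])
    sin_d = J / (s12 * c12 * s23 * c23 * s13 * c13**2)
    cos_d = (s12**2 * s23**2 + c12**2 * s13**2 * c23**2 - abs(U[2, 0])**2) / (
        2 * s12 * c12 * s23 * c23 * s13)
    delta = np.arctan2(sin_d, cos_d)   # combines the sine and cosine determinations
    return th12, th23, th13, delta
```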
Lastly, the relation between the sum of neutrino masses and the complex phase of the Yukawa coupling of Dirac neutrino ϕ_De is shown in the Figure <ref>.For the comparison of the sum of neutrino masses, we find the sum of neutrino masses m_1+m_2+m_3≃66.1 [meV], which is much smaller than the experimental limit from cosmology <cit.> is 0.12 [eV].We derived |ϕ_De| to be approximately 146^∘. Finally, we show the prediction of Majorana phases η_1 and η_2 in the Figure <ref>. The Majorana phases η_1 and η_2 can be derived ase^iη_1=U_11U^*_13/c_12c_13s_13e^iδ_CP,e^iη_2=U_12U^*_13/s_12c_13s_13e^iδ_CP.Because it has not yet been determined whether the neutrino is a Dirac or Majorana particle, this is only a possible prediction for magnitude of the Majorana phases η_1 and η_2.§ POTENTIAL ANALYSISIn this section, we analyze the scalar potential. The scalar potential in our model (Table <ref>) can be written as,V = μ^2 ϕ^†ϕ + λ (ϕ^†ϕ)^2 + cϕ^†ϕ X^† X + kϕ^†ϕΘ^†Θ + (gϕ^†ϕΘ^† X + h.c.),where λ>0. We calculate this potential using the multiplication rules of the S_4 symmetry in Eq. (<ref>)-(<ref>) from Appendix <ref>. Specifically, from Eq. (<ref>) in Appendix <ref>, the first and second term in Eq. (<ref>) are derived as below,μ^2 ϕ^†ϕ + λ (ϕ^†ϕ)^2 = μ^2(|ϕ_1|^2+|ϕ_2|^2+|ϕ_3|^2)+(λ_1+2/3λ_2)(|ϕ_1|^4+|ϕ_2|^4+|ϕ_3|^4)+(2λ_1-2/3λ_2+2λ_3-2λ_4)(|ϕ_1ϕ_2|^2+|ϕ_2ϕ_3|^2+|ϕ_3ϕ_1|^2) +(λ_3+λ_4)[(ϕ_1ϕ_2^†)^2+(ϕ_2ϕ_3^†)^2+(ϕ_3ϕ_1^†)^2+h.c.].Then, we obtain the third term in Eq. (<ref>),cϕ^†ϕ X^† X =c [ϕ^†_1; ϕ^† _2;ϕ^†_3 ]×[ϕ_1; ϕ _2;ϕ_3 ]×[ X^†_1; X^†_2 ]×[ X_1; X_2 ] =c_1(|ϕ_1|^2+|ϕ_2|^2+|ϕ_3|^2)(|X_1|^2+|X_2|^2) + c_2[1/√(2) (|ϕ_2|^2-|ϕ_3|^2)(X_1^† X_2 + X^†_2 X_1)+1/√(6)(2|ϕ_1|^2+|ϕ_2|^2+|ϕ_3|^2)(|X_1|^2-|X_2|^2)].Similarly, we obtain the forth term and fifth term in Eq. (<ref>),kϕ^†ϕΘ^†Θ =k [ϕ^†_1; ϕ^† _2;ϕ^†_3 ]×[ϕ_1; ϕ _2;ϕ_3 ] |Θ|^2 =k|Θ|^2 (|ϕ_1|^2+|ϕ_2|^2+|ϕ_3|^2), g(ϕ^†ϕΘ^† X + h.c.)= (g [ϕ^†_1; ϕ^† _2;ϕ^†_3 ]×[ϕ_1; ϕ _2;ϕ_3 ]Θ^†[ X_1; X_2 ] +h.c.) = [ g (1/√(2)(|ϕ_2|^2-|ϕ_3|^2) Θ^† X_1 +1/√(6)(2|ϕ_1|^2+|ϕ_2|^2+|ϕ_3|^2) Θ^† X_2 ) + h.c.].Here, we assume the VEVs of X and Θ to (X_1,0) and Θ_0, and the VEV of ϕ_i is (0, v_i/√(2)) (i=1,2,3). Then, we consider the minimum conditions of the potential, and we obtain the following equations,v_1= ±√(-μ^2+c_1 X_1^2+ kΘ_0^2 /3λ_1 + 4λ_3 -2/3√(6)3λ_1+2λ_2/(3λ_1+4λ_4)(λ_2-2λ_3)c_2 X_1^2 ), v_2= ±√(-μ^2+c_1 X_1^2+ kΘ_0^2 /3λ_1 + 4λ_3 +1/3√(6)3λ_1-4λ_2+12λ_3/(3λ_1+4λ_4)(λ_2-2λ_3)c_2 X_1^2 - 1/√(2)gΘ_0 X_1/λ_2-2λ_3), v_3= ±√(-μ^2+c_1 X_1^2+ kΘ_0^2 /3λ_1 + 4λ_3 +1/3√(6)3λ_1-4λ_2+12λ_3/(3λ_1+4λ_4)(λ_2-2λ_3)c_2 X_1^2 + 1/√(2)gΘ_0 X_1/λ_2-2λ_3).Since there are many parameters for the scalar potential, the result in the Figure <ref> can be realized. One worry could be how the masses of the neutral and charged Higgs contribute to the flavor-changing neutral currents(FCNC). We expect the masses can be heavy enough to suppress the FCNC and use this model by adjusting the parameters of the potential. However, that is beyond the scope of this work. § SUMMARY We have built a lepton flavor model with S_4 and U(1)_FN symmetries. The left-handed leptons are assigned as an S_4 triplet and the right-handed tauon as an S_4 singlet. Then, we assign the right-handed electron and muon as an S_4 doublet and have U(1)_FN charge. We have assumed there are three right-handed Majorana neutrinos that have an S_4 symmetric charge and two types of scalar fields to reproduce the lepton flavor mixing. 
We have introduced three Higgs doublets that were assigned as an S_4 triplet, and calculated the charged lepton mass matrix and the left-handed Majorana neutrino mass matrix.We have performed a numerical analysis to obtain the PMNS matrix, and gave predictions for the mixing angle θ_23 and the Dirac CP phase δ_CP. The results were 0.486<sinθ_23<0.603 and 60.12^∘<|δ_CP|<76.47^∘, which is a strong prediction for the |δ_CP|. We also gave predictions for the lightest neutrino mass m_light, the effective Majorana neutrino mass m_ee at the 0νββ decay experiment, the sum of neutrino masses and two Majorana phases η_1 and η_2. The results were m_light≃5.53 [meV], m_ee≃6.33 [meV], m_1+m_2+m_3≃66.1 [meV]. For m_ee and m_light, we were able to obtain values relatively close to current experimentally upper limits. Finally, we analyzed the scalar potential to get the conditions for the Higgs VEVs.Our model could be extended to the quark sector.Where we could impose the S_4 symmetry on the quarks as well, and consider how to assign the S_4 charge and built a flavor model. The model would be tested by the precise observations of the CP phase in the quark sector. It is interesting to study more phenomenological aspects on the flavor physics based on flavor symmetry with multi-Higgs in the near future.AcknowledgementWe thank M. Tanimoto, Y. Shimizu, N. Benoit, Y. Watanabe and S. Takeshita for useful discussions. This work was supported by JST, the establishment of university fellowships towards the creation of science technology innovation, Grant Number JPMJFS2129.§ APPENDIX § S_4 SYMMETRY We give a brief introduction of the S_4 symmetry.The S_4 symmetry is the symmetry of the S_4 group, which is the symmetric group of order 4. The S_4 group consists of all permutations among four objects (x_1, x_2, x_3, x_4). Therefore, the number of elements of the S_4 group are 4!=24(x_1, x_2, x_3, x_4) → (x_i, x_j, x_k, x_l).The S_4 group is the smallest non-Abelian discrete symmetry group that has five irreducible representations. There are two types of singlets 1, 1', one type of doublet 2, and two types of triplets 3, 3'. The S_4 symmetry represents the symmetry of cubic geometry.Next, we show the multiplication rule of the S_4 symmetry:3⊗3 = 1⊕2⊕3⊕3',3'⊗3' = 1⊕2⊕3⊕3',3⊗3' = 1'⊕2⊕3⊕3',2⊗2 = 1⊕1'⊕2,2⊗3 = 3⊕3',2⊗3' = 3⊕3',3⊗1' = 3', 3'⊗1' = 3,2⊗1' = 2, [ a_1; a_2 ]_2⊗[ b_1; b_2 ]_2 = (a_1b_1+a_2b_2 )_1⊕ (-a_1b_2+a_2b_1 )_1'⊕[ a_1b_2+a_2b_1; a_1b_1-a_2b_2 ]_2, [ a_1; a_2; a_3 ]_3⊗[ b_1; b_2; b_3 ]_3 = (a_1b_1+a_2b_2+a_3b_3 )_1⊕[1/√(2)(a_2b_2-a_3b_3); 1/√(6)(-2a_1b_1+a_2b_2+a_3b_3) ]_2⊕[ a_3b_2+a_2b_3; a_1b_3+a_3b_1; a_2b_1+a_1b_2 ]_3⊕[ a_3b_2-a_2b_3; a_1b_3-a_3b_1; a_2b_1-a_1b_2 ]_3'.More details are shown in the review <cit.>.§ 3HDM WITH S_4 SYMMETRY We briefly introduce the 3HDM with S_4 symmetry. In general 3HDM means that the SM Higgs sector is extended to include three Higgs doublets, denoted asϕ_1=[ ϕ^+_1; ϕ^0_1 ], ϕ_2=[ ϕ^+_2; ϕ^0_2 ], ϕ_3=[ϕ^+_3; ϕ^0_3, ]where the Higgs doublets ϕ_i, with i = 1, 2, 3, have the same U(1)_Y hypercharge. One Higgs doublet consists of four real scalar fields. Thus, there are twelve real scalar fields in the 3HDM.In this study, we consider the Higgs fields to also have a flavor symmetry. Therefore, we regard three Higgs fields as a S_4 triplet,ϕ=[ ϕ_1; ϕ_2; ϕ_3 ].In general the scalar potential is given byV = μ^2 ϕ^†ϕ + λ (ϕ^†ϕ)^2,where, μ^2<0, λ>0. 
We get this potential by using the multiplication rule of the S4 symmetryV= μ^2 [ ϕ^†_1; ϕ^†_2; ϕ^†_3 ]×[ ϕ_1; ϕ_2; ϕ_3 ] +λ[ ϕ^†_1; ϕ^†_2; ϕ^†_3 ]×[ ϕ_1; ϕ_2; ϕ_3 ]×[ ϕ^†_1; ϕ^†_2; ϕ^†_3 ]×[ ϕ_1; ϕ_2; ϕ_3 ] = μ^2 (|ϕ_1|^2+|ϕ_2|^2+|ϕ_3|^2) + λ_1 (|ϕ_1|^2+|ϕ_2|^2+|ϕ_3|^2)^2 +2/3λ_2 (|ϕ_1|^4+|ϕ_2|^4+|ϕ_3|^4-|ϕ_1|^2|ϕ_2|^2-|ϕ_2|^2|ϕ_3|^2-|ϕ_3|^2|ϕ_1|^2) +λ_3 [(ϕ^†_1 ϕ_2)^2 + (ϕ^†_2 ϕ_3)^2 + (ϕ^†_3 ϕ_1)^2 + |ϕ_1|^2|ϕ_2|^2 + |ϕ_2|^2|ϕ_3|^2 + |ϕ_3|^2|ϕ_1|^2 + h.c.] +λ_4 [(ϕ^†_1 ϕ_2)^2 + (ϕ^†_2 ϕ_3)^2 + (ϕ^†_3 ϕ_1)^2 - |ϕ_1|^2|ϕ_2|^2 - |ϕ_2|^2|ϕ_3|^2 - |ϕ_3|^2|ϕ_1|^2 + h.c.] = μ^2(|ϕ_1|^2+|ϕ_2|^2+|ϕ_3|^2)+(λ_1+2/3λ_2)(|ϕ_1|^4+|ϕ_2|^4+|ϕ_3|^4)+(2λ_1-2/3λ_2+2λ_3-2λ_4)(|ϕ_1ϕ_2|^2+|ϕ_2ϕ_3|^2+|ϕ_3ϕ_1|^2) +(λ_3+λ_4)[(ϕ_1ϕ_2^†)^2+(ϕ_2ϕ_3^†)^2+(ϕ_3ϕ_1^†)^2+h.c.].Then, we may write the VEVs of Higgs doublets ϕ_i where i = 1, 2, 3 as⟨ϕ_i ⟩ = [ 0; 1/√(2)v_i ].By taking into account the minimization conditions on the potential in Eq. (<ref>), the following relation about the VEVs of Higgs is obtained,v_1^2=v_2^2=v_3^2=-μ^2/3λ_1+4λ_3.We have considered the VEVs to be real numbers. 99 ATLAS:2012yve G. Aad et al. [ATLAS],Phys. Lett. B 716, 1-29 (2012)[arXiv:1207.7214 [hep-ex]].CMS:2012qbp S. Chatrchyan et al. [CMS],Phys. Lett. B 716, 30-61 (2012)[arXiv:1207.7235 [hep-ex]].Froggatt:1978nt C. D. Froggatt and H. B. Nielsen,Nucl. Phys. B 147 (1979), 277-298Ishimori:2010au H. Ishimori, T. Kobayashi, H. Ohki, Y. Shimizu, H. Okada and M. Tanimoto,Prog. Theor. Phys. Suppl. 183 (2010), 1-163[arXiv:1003.3552 [hep-th]].Ishimori:2012zz H. Ishimori, T. Kobayashi, H. Ohki, H. Okada, Y. Shimizu and M. Tanimoto,Lect. Notes Phys. 858 (2012), 1-227, Springer. Kobayashi:2022moq T. Kobayashi, H. Ohki, H. Okada, Y. Shimizu and M. Tanimoto,Lect. Notes Phys. 995 (2022), 1-353, Springer.Ma:2001dn E. Ma and G. Rajasekaran,Phys. Rev. D 64, 113012 (2001)[arXiv:hep-ph/0106291 [hep-ph]].Altarelli:2005yp G. Altarelli and F. Feruglio,Nucl. Phys. B 720 (2005), 64-88[arXiv:hep-ph/0504165 [hep-ph]].Altarelli:2005yx G. Altarelli and F. Feruglio,Nucl. Phys. B 741 (2006), 215-235[arXiv:hep-ph/0512103 [hep-ph]].Brahmachari:2008fn B. Brahmachari, S. Choubey and M. Mitra,Phys. Rev. D 77 (2008), 073008 [erratum: Phys. Rev. D 77 (2008), 119901][arXiv:0801.3554 [hep-ph]].Altarelli:2010gt G. Altarelli and F. Feruglio,Rev. Mod. Phys. 82 (2010), 2701-2729[arXiv:1002.0211 [hep-ph]].Ishimori:2010fs H. Ishimori, Y. Shimizu, M. Tanimoto and A. Watanabe,Phys. Rev. D 83 (2011), 033004[arXiv:1010.3805 [hep-ph]].King:2013eh S. F. King and C. Luhn,Rept. Prog. Phys. 76 (2013), 056201[arXiv:1301.1340 [hep-ph]].King:2014nza S. F. King, A. Merle, S. Morisi, Y. Shimizu and M. Tanimoto,New J. Phys. 16 (2014), 045018[arXiv:1402.4271 [hep-ph]].Minkowski:1977sc P. Minkowski,Phys. Lett. B 67 (1977), 421-428. Yanagida:1979as T. Yanagida,Conf. Proc. C 7902131, 95-99 (1979) KEK-79-18-95.Gell-Mann:1979vob M. Gell-Mann, P. Ramond and R. Slansky,Conf. Proc. C 790927 (1979), 315-321 [arXiv:1306.4669 [hep-th]].Mohapatra:1979ia R. N. Mohapatra and G. Senjanovic,Phys. Rev. Lett. 44, 912 (1980) Schechter:1980gr J. Schechter and J. W. F. Valle,Phys. Rev. D 22 (1980), 2227. Yanagida:1980xy T. Yanagida,Prog. Theor. Phys. 64, 1103 (1980) KamLAND-Zen:2016pfg A. Gando et al. [KamLAND-Zen],Phys. Rev. Lett. 117, no.8, 082503 (2016)[arXiv:1605.02889 [hep-ex]]. KamLAND-Zen:2022tow S. Abe et al. [KamLAND-Zen],Phys. Rev. Lett. 130, no.5, 051801 (2023)[arXiv:2203.02139 [hep-ex]].Lavoura:2007dw L. Lavoura and H. Kuhbock,Eur. Phys. J. 
C 55 (2008), 303-308[arXiv:0711.0670 [hep-ph]].deAdelhartToorop:2010jxh R. de Adelhart Toorop, F. Bazzocchi, L. Merlo and A. Paris,JHEP 03 (2011), 035 [erratum: JHEP 01 (2013), 098][arXiv:1012.1791 [hep-ph]].Ivanov:2012ry I. P. Ivanov and E. Vdovin,Phys. Rev. D 86 (2012), 095030[arXiv:1206.7108 [hep-ph]].Degee:2012sk A. Degee, I. P. Ivanov and V. Keus,JHEP 02 (2013), 125[arXiv:1211.4989 [hep-ph]].GonzalezFelipe:2013xok R. González Felipe, H. Serôdio and J. P. Silva,Phys. Rev. D 87 (2013) no.5, 055010[arXiv:1302.0861 [hep-ph]].GonzalezFelipe:2013yhh R. Gonzalez Felipe, H. Serodio and J. P. Silva,Phys. Rev. D 88 (2013) no.1, 015015[arXiv:1304.3468 [hep-ph]].Ivanov:2012fp I. P. Ivanov and E. Vdovin,Eur. Phys. J. C 73 (2013) no.2, 2309[arXiv:1210.6553 [hep-ph]].Keus:2013hya V. Keus, S. F. King and S. Moretti,JHEP 01 (2014), 052[arXiv:1310.8253 [hep-ph]].Ivanov:2014doa I. P. Ivanov and C. C. Nishi,JHEP 01 (2015), 021[arXiv:1410.6139 [hep-ph]].Ivanov:2017dad I. P. Ivanov,Prog. Part. Nucl. Phys. 95 (2017), 160-208[arXiv:1702.03776 [hep-ph]].Das:2018qyt P. Das, A. Mukherjee and M. K. Das,Nucl. Phys. B 941 (2019), 755-779[arXiv:1805.09231 [hep-ph]].Das:2019ntw P. Das, M. K. Das and N. Khan,JHEP 03 (2020), 018[arXiv:1911.07243 [hep-ph]].Ivanov:2020jra I. P. Ivanov and F. Vazão,JHEP 11, 104 (2020)[arXiv:2006.00036 [hep-ph]].Buskin:2021eig N. Buskin and I. P. Ivanov,J. Phys. A 54 (2021), 325401[arXiv:2104.11428 [hep-ph]].Carrolo:2022oyg S. Carrolo, J. C. Romao and J. P. Silva,Eur. Phys. J. C 82 (2022) no.8, 749[arXiv:2207.02928 [hep-ph]].Vergeest:2022mqm J. Vergeest, M. Zrałek, B. Dziewit and P. Chaber,[arXiv:2203.03514 [hep-ph]].CarcamoHernandez:2022vjk A. E. Cárcamo Hernández, C. Espinoza, J. C. Gómez-Izquierdo, J. M. González and M. Mondragón,[arXiv:2212.12000 [hep-ph]].Izawa:2022viu Y. Izawa, Y. Shimizu and H. Takei,PTEP 2023, no.6, 063B04 (2023)[arXiv:2209.10201 [hep-ph]].Weinberg:1976hu S. Weinberg,Phys. Rev. Lett. 37, 657 (1976) Esteban:2020cvm I. Esteban, M. C. Gonzalez-Garcia, M. Maltoni, T. Schwetz and A. Zhou,JHEP 09, 178 (2020)[arXiv:2007.14792 [hep-ph]]. NuFIT5.2 NuFIT 5.2 (2022), http://www.nu-fit.org/. ParticleDataGroup:2022pth R. L. Workman et al. [Particle Data Group],PTEP 2022, 083C01 (2022) Planck:2018vyg N. Aghanim et al. [Planck],Astron. Astrophys. 641 (2020), A6 [erratum: Astron. Astrophys. 652 (2021), C4][arXiv:1807.06209 [astro-ph.CO]].Jarlskog:1985ht C. Jarlskog,Phys. Rev. Lett. 55 (1985), 1039.
http://arxiv.org/abs/2312.16545v1
{ "authors": [ "Yukimura Izawa" ], "categories": [ "hep-ph", "hep-ex" ], "primary_category": "hep-ph", "published": "20231227121023", "title": "$S_4$ Lepton Flavor Model with 3HDM" }
Error-free Training for Artificial Neural Network

Bo Deng
Department of Mathematics, University of Nebraska-Lincoln, Lincoln, NE, [email protected]

Abstract: Conventional training methods for artificial neural network (ANN) models never achieve a zero error rate systematically for large data. A new training method consists of three steps: first, create an auxiliary data set from conventionally trained parameters which correspond exactly to a global minimum for the loss function of the cloned data; second, create a one-parameter homotopy (hybrid) of the auxiliary data and the original data; and third, train the model on the hybrid data iteratively from the auxiliary-data end of the homotopy parameter to the original-data end while maintaining the zero-error training rate at every iteration. This continuation method is guaranteed to converge numerically by a theorem which converts the ANN training problem into a continuation problem for fixed points of a parameterized transformation in the training parameter space, to which the Uniform Contraction Mapping Theorem from dynamical systems applies.

Key Words: Artificial neural networks, stochastic gradient descent, gradient descent tunnelling, zero-error training rate, uniform contraction theorem

By definition, to train an ANN model is to find the global minimum of its loss function with a 100% accuracy rate. The theoretical solution to this problem was established by <cit.> in 1989 for finite-point classification with sufficiently many parameters. For small training data, the training problem can be solved by the gradient descent (GD) method. But for large data, such as the most popular MNIST benchmark data for handwritten digit classification (<cit.>), the problem is not known to be solved systematically. Currently, the state of the art for the MNIST problem achieves a 99.87% positive rate (PR) with about 1.5 million parameters (<cit.>). Here below we describe a method to bring the PR for supervised ANN models to 100%.

Let p=(W,b) be the weight parameters and the biases for an ANN with supervised training (<cit.>). Let q=L(p) be the loss function for the ANN with respect to a given training data set D. Denote by p̅ any local minimum of L and by p^∗ the global minimum with perfect accuracy, if it exists. To train the ANN is to find this perfect global minimum p^∗. Currently, training is done by a variety of implementations of the gradient descent method (GD) (<cit.>). Specifically, the basic idea works as follows. Let ∇ L(p) denote the gradient of L. Then, starting at an initial guess p_0 and for a learning rate parameter α>0, the next update is given by the iterative formula

p_k+1 = p_k - α∇ L(p_k)

for k=0,1,2,…. So far none of the variations has found the global minimum p^∗ with 100% PR for MNIST's full 60,000 training data.

Theory of Convergence. The theoretical basis for our method is an equivalent setting for the conventional training method. Specifically, to train the ANN from any p_0 is to find the gradient flow path p(t) satisfying the induced gradient system of equations

ṗ(t) = -∇ L(p(t)),

with p(0)=p_0 for the initial condition. All conventional GD methods are discrete approximations of the gradient flow p(t). For example, the basic searching algorithm above (the iterative formula for p_k+1) is the numerical implementation of Euler's method for the differential equations. In this equivalent setting, any local minimum point of the loss function is a stable equilibrium of the gradient system.
The converse is also true. Specifically, let ϕ_t(p_0) denote the solution operator of the gradient system satisfying the initial condition ϕ_0(p_0)=p_0. The solution p(t) to the gradient system with the IC p(0)=p_0 is p(t)=ϕ_t(p_0). That is, ϕ_t:ℝ^n→ℝ^n defines a transformation or mapping from ℝ^n to itself for every time t. Thus, the subscript 0 can be dropped and ϕ_t(p) can be used to denote the solution operator, mapping a point p to ϕ_t(p) after time t>0. In this setting, every local minimum p̅ of the loss function L is a locally stable fixed point of the solution operator ϕ_t(p) for every t≥0: ϕ_t(p̅)=p̅, t≥0. Conversely, every locally stable fixed point of ϕ_t is a local minimum point of L. In addition, a fixed point of ϕ_t for one fixed nonzero t, say t=1, is a fixed point of ϕ_t for all t>0 (<cit.>). Thus, we only need to consider the solution operator at one fixed time, say t=1, T(p):=ϕ_1(p). Such a map is called a Poincaré map. The conclusion is: a point p̅ is a locally stable fixed point of the Poincaré map T if and only if p̅ is a local minimum of the loss function L.

So far the theory is for supervised training on one set of training data. We now consider the case in which there is a family of training data sets, denoted by 𝒟_λ, where λ is a parameter from a compact interval, say 0≤λ≤1. For each λ∈[0,1], the task is to find the global minimum p^∗_λ for the same ANN model on the training data 𝒟_λ, whose corresponding loss function is denoted by L_λ. We call this type of training a parameterized training with parameter λ, or a parameterized co-training. In terms of the Poincaré mapping equivalency, for each λ we have the equivalent Poincaré map T_λ for which p̅_λ is a local minimum of L_λ if and only if p̅_λ is a locally stable fixed point of T_λ. Our method is based on the following theorem.

Theorem (Continuation of Global Minimums). Assume the perfect global minimum point p^∗_λ∈ℝ^n of L_λ exists for every λ∈[0,1] and is asymptotically stable for the Poincaré map T_λ. Assume also T_λ is continuous in λ and is differentiable at p^∗_λ. Then the global minimums form a continuous path γ:={p^∗_λ : 0≤λ≤1} in the training parameter space ℝ^n.

Proof. For each co-train parameter λ∈[0,1], let D_λ(p^∗_λ) be the linearization of the Poincaré map T_λ at the fixed point p^∗_λ. Since p^∗_λ is asymptotically stable for T_λ, which is differentiable, there is an adapted norm (<cit.>) and a small convex neighborhood U_λ of the point so that T_λ is Lipschitz continuous with Lipschitz constant smaller than 1, i.e. T_λ is locally contracting. Let ρ(p) be a C^∞ cut-off scalar function (<cit.>) in U_λ. Extend T_λ from U_λ to the entire space ℝ^n by

T_λ(p) → D_λ(p^∗_λ)p + ρ(p-p^∗_λ)[T_λ(p) - D_λ(p^∗_λ)p].

For a small enough neighborhood U_λ, the extended map is a contraction mapping in ℝ^n. Without loss of generality, we will use the same notation T_λ for the extended map. Because the interval [0,1] from which λ is taken is compact, the extended map T_λ can be made uniformly contracting for all λ∈[0,1]. As a consequence of the Uniform Contraction Mapping Theorem (<cit.>), for each λ, T_λ has a unique fixed point, which by construction is exactly the global minimum point p^∗_λ for L_λ. Also as a consequence of the Uniform Contraction Mapping Theorem, because T_λ is continuous in λ, the set of points {p^∗_λ : 0≤λ≤1} forms a continuous path in the training parameter space ℝ^n that is parameterized by the co-train parameter λ.

Method of Global Minimum Continuation

* Let D denote the training data for an ANN model.
For example, for the MNIST benchmark problem, D is the training data of 60,000 handwritten digits. Train the model by any conventional way, e.g. the stochastic gradient descent (SGD) method, to achieve a considerable positive rate. This divides the data set D into those which are correctly labelled, or trained, D_t, and those which are incorrectly labelled, or untrained, D_u.

* Create a set of auxiliary data consisting of two parts. One part is exactly the correctly labelled data D_t. The other part is cloned, or duplicated, from D_t in the same number as D_u. Specifically, for each incorrectly labelled datum from D_u, pick a trained partner datum from D_t, preferably having the same training label and without repeating. Denote this cloned data set by D̅_u. As a result, the joint data set D̅ = D_t + D̅_u is perfectly trained for the ANN model, with 100% PR, with the same weight parameters and biases p = {W,b} as for the imperfectly trained but true data D. The corresponding auxiliary system with D̅ is referred to as a training partner. The imperfectly trained parameter p is automatically the global minimum, i.e., p = p̅^∗, for the partner system.

* Introduce a parameter λ in the interval [0,1]. For each λ, create a hybrid data set from the true data D and the auxiliary partner data D̅ by 𝒟_λ = (1-λ)D̅ + λD. This technique of taking a weighted average of D̅ and D with weights (1-λ) and λ, respectively, is known as homotopy for continuation in dynamical systems (<cit.>). The trained data D_t remains unchanged for all 0≤λ≤1. The part corresponding to the erred data changes from the perfectly trained partner set D̅_u at λ=0 to the original data set D_u at λ=1. If the partnering data are chosen with the same training labels as the partneree data, then the same training labels can be used throughout for all co-train parameters 0≤λ≤1. More generally, one can create a homotopy in the classification space and directly compute the error rate from the classifying vectors. See Fig.<ref>(a).

* For the ANN model with the hybrid co-train data set 𝒟_λ from λ=0 to λ=1, the Continuation Theorem of Global Minimums guarantees that the error-free global minimum at λ=0 for 𝒟_0 = D̅ is connected all the way through 0<λ<1 to the error-free global minimum at λ=1 with 𝒟_1 = D, which is what we want for the solution of the training problem. Continuation of the GMs from λ=0 to λ=1 can be done by any continuation method. For example, it can be done by solving for equilibrium solutions of the gradient vector field ∇ L_λ(p) = 0 by Newton's method or its variations (<cit.>), starting at the partner system's global minimum p^∗_0 at λ=0. Here below we describe a new method referred to as the gradient-descent tunneling (GDT) method by way of backtrack correction.

Gradient Descent Tunneling: Backtrack Correction

* Start at λ=0 where the global minimum p = p^∗_0 is located for the partner system with training data 𝒟_0 = D̅.

* Move forward in λ to a value λ = a > 0. If p^∗_0 is inside the gradient descent's basin of attraction for the GM p^∗_a of the loss function L_a, then apply a gradient descent algorithm for a few iterations to find the GM p^∗_a. Fig.<ref>(b) illustrates this case if a = λ_1.

* If the step size λ = a taken is too large, say, a = λ_2 as shown in Fig.<ref>(b), then the parameter p = p^∗_0 is inside the basin of attraction of a local minimum point. Any gradient descent search will miss the GM p^∗_a. When this occurs, the PR indicator for the search will be less than 100%. In this case, the step size is reduced to a smaller value, say a/2, and the same step is repeated. This procedure is referred to as backtrack correction.

* The Continuation Theorem of Global Minimums guarantees the convergence of the algorithm. That is, the number of backtrack corrections required before the PR becomes 100% is finite. And the convergence also guarantees that the GM p^∗_1 for the true system with data D can be reached in finitely many steps because the extended Poincaré map is a uniform contraction on the compact interval λ∈[0,1]. In the continuation path to λ=1 and PR = 100%, there are trained models with sub-100% but high positive rates. A schematic implementation of this continuation loop is sketched below.
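The following sketch (ours, written for illustration in Python; the authors' implementation referenced in the Declarations is in Matlab) captures the backtrack-corrected continuation loop. The callables blend, train_gd, and positive_rate are assumed hooks, not routines from the paper: blend(λ) must return the hybrid data 𝒟_λ, train_gd runs a few gradient-descent iterations, and positive_rate computes the training PR.

```python
def gdt_continuation(p0, blend, train_gd, positive_rate, step=0.1, min_step=1e-4):
    """Gradient-descent tunneling with backtrack correction.

    blend(lam)          -> hybrid data set D_lam = (1 - lam)*D_partner + lam*D_true
    train_gd(p, D)      -> parameters after a few gradient-descent iterations on D
    positive_rate(p, D) -> fraction of D classified correctly by p
    p0 must be error-free on blend(0.0), i.e. on the partner data.
    """
    p, lam = p0, 0.0
    while lam < 1.0:
        a = min(step, 1.0 - lam)                     # proposed forward step in lambda
        while True:
            D_hyb = blend(lam + a)
            p_try = train_gd(p, D_hyb)
            if positive_rate(p_try, D_hyb) == 1.0:   # stayed inside the right basin
                p, lam = p_try, lam + a
                break
            a /= 2.0                                 # backtrack correction
            if a < min_step:
                raise RuntimeError("continuation stalled: step size underflow")
    return p                                         # zero-error minimum on the true data
```

The fixed step used here can also be enlarged after a run of successful steps, which is the adaptive variant described next.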
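For concreteness, here is a small sketch (ours) of this activation and of its derivative as used in backpropagation; σ is our stand-in symbol for the switch function, since the symbol used in the source text did not survive extraction, and the derivative identity is just the calculus of tanh^2(x/2).

```python
import numpy as np

def switch(x):
    """Switch activation: 0 for x <= 0 and tanh^2(x/2) for x > 0; smooth and bounded in [0, 1)."""
    return np.where(x > 0, np.tanh(x / 2.0)**2, 0.0)

def switch_grad(x):
    """Derivative of the switch activation: with y = switch(x), dy/dx = sqrt(y)*(1 - y)."""
    y = switch(x)
    return np.sqrt(y) * (1.0 - y)
```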
More specifically,the inverse ofover x>0 isρ(x)=^2(x/2), x≥ 0,having the property that both are solutions to thefollowing differential equation dy/dx=√(y)(1-y),referred to as the switch equation in <cit.>,and the sigmoid function y=(x) isrefereed to as a switch function byextension, for whichthe state is off if x≤ 0 and on if x>0.When the ReLU activation function is replaced bythe switch function , the continuation method works perfectly for initials of W,b from(-0.5,0.5) together with the standard softmax. Moreover,the convergence with the switch activation functionis at least 3 times faster than with the ReLU modelfor GDT. Also, SGD works faster with the switchfunction than the ReLU, most likely because of thesmoothness difference between them. Specifically, Fig.<ref> shows some results for a 784-100-10 model with the switch function for activation. Itshows the accuracy curves for MNIST's training and test data by the conventionally trained model with SGD and thefully-trained model with GDT. It shows that the finalaccuracy for test after GDT is always better than theSGD-trained model. It also shows thatother than the normal random fluctuations for the testaccuracy curve, little can be construed as “overfitting".The figure also shows the last and the second lastsegmentations of the transformed training data, with the secondlast achieved a clearer separation than the model withReLU from Fig.<ref>. Discussion. Mathematical data sciences is to solvetheoretical and computational scaling problems from smalldata to large data. The Universal Approximation Theoremby <cit.> solved the theoretical problem toautomate the classification problem by ANN models.For small data, the (stochastic) gradient descent methodis able to solve the training problem, i.e. finding theglobal minimum of a model's loss function. For largedata, finding the global minimum by SGD becomes a gameof chance. Our Continuation Theorem should fillthis computational gap. In the parameter {W,b} space, there are infinitely manyglobal minimums of a model's loss function. It can befound by choosing different initial parameters, or bydifferent choices of auxiliary training partners, or bydifferent training batches or iterations with SGD searching.Together with the continuation theorem, itsuggests that the global minimums form a hyper-surfacecontinuum in the parameter space. As a result,these global minimums are infinitely many but have zero probability to find by chance via SGD when the datais big and the dimension of the parameter space is big.Our theorem guarantees the convergence of the continuationalgorithm if there are enough parameters to ensure theexistence of the hybrid global minimums.Our empirical finding on the size of the ANN models whichsolved the MNIST training problem suggests that the minimaldimension of the parameter space in which a classificationproblem is solvable may not be too high, i.e., not sufferingthe so-called curse of dimensionality. This is because every ANN model has its own intrinsic dimension, likeevery physics model having finite many state variablesfor their intrinsic dimensionality. That dimension would fixsome upper bound in its number of parameters to fullydetermine the model. These two numbers plus the number ofdistinct data whose basins of attraction contain all data points should define some lower bound for theminimal numberof parameters in {W,b}. Of course, this remains asan educated conjecture based on the findings of this paperand the theory of dimension analysis (c.f. 
<cit.>).Our result also suggests the following practices forANN models. The switch function foractivation is better suited than the ReLU is forboth SGD training and GDT continuation. Becausethere are infinitely many global minimums, we canalways find those which are good for test data.Thus, we can find one we likeand then use GDT to incorporate all test data intoone fully-trained model for deployment. Over-fitting is never an issue for good model, good math, andgood code. The error-free training method is applicable for all supervised trainings of ANNs, including convolutional neural networks (CNN), spiking neural networks (SNN). It is even more advantageous because of its relatively small numbers of model parameters required, reducing carbon-footprint for AI training. It can be implemented on platforms from cloud-frame supper computers to microchips on mobile devices. The error-free learning capability of the method will enable AI to fully enter into many fields such as inventory logistic, record keeping, human resource management, fully automated grading for examinations, precision manufacturing, health care management, pharmaceutical and medical expert systems (<cit.>). It can build error-free modular systems for search engines and for large language models which alwaysinvolve supervision in various stages. Our result may redirect the question of how accurately an ANN can be trained to how to maximize the potentials of full-trained models. 99Chow82 Chow, S.N. and Hale, J., Methods of Bifurcation Theory, New York, Springer, 1982.Chow78 Chow, S.N., Mallet-Paret, J. and Yorke, J.A., A homotopy method for locating all zeros of a system of polynomials. In Functional Differential Equations and Approximation of Fixed Points: Proceedings, Bonn, July 1978 (pp. 77-88). Berlin, Heidelberg: Springer Berlin Heidelberg, 2006.Cybe89 Cybenko, G., Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems, 2(4), pp.303-314, 1989.Deng18 Deng, B., An Inverse Problem: Trappers Drove Hares toEat Lynx, Acta Biotheor, 66, pp.213–242, 2018.Deng19 Deng, B., Neuron model with conductance-resistance symmetry.Physics Letters A, 383(33), p.125976, 2019.Deng23 Deng, B., Theory of invariant manifold and foliation and uniqueness of center manifold dynamics. J. Dyn. Diff. Equat., 2023.https://doi.org/10.1007/s10884-023-10265-3Horn89 Hornik, K., Stinchcombe, M. and White, H., Multilayer feedforward networks areuniversal approximators. Neural networks, 2(5), pp.359-366, 1989.Kell03 Kelley, C.T., Solving Nonlinear Equations with Newton's Method, Society for Industrial and Applied Mathematics, 2003.Lecu98 LeCun, Y., Bottou, L., Bengio, Y. and Haffner, P., Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), pp.2278-2324, 1998.Li87 Li, T.Y., Sauer, T. and Yorke, J.A., Numerical solution of a class of deficient polynomial systems. SIAM journal on numerical analysis, 24(2), pp.435-451, 1987.Niel15 Nielsen, M.A., Neural Networks and Deep Learning (Vol. 25). San Francisco, CA, Determination press, 2015.Robb51 Robbins, H. and Monro, S., A Stochastic Approximation Method. The Annals of Mathematical Statistics, 22(3), pp.400–407, 1951.Rose58 Rosenblatt, F., The perceptron: A probabilistic model for information storage and organization in the brain. Psychological review, 65(6), p.386, 1958.web23 Image Classification on MNIST. 
https://paperswithcode.com/sota/image-classification-on-mnist https://paperswithcode.com/sota/image-classification-on-mnist, 2023.Yang23 Yang, J., Shi, R., Wei, D., Liu, Z., Zhao, L., Ke, B., Pfister, H. and Ni, B.,MedMNIST v2-A large-scale lightweight benchmark for 2D and 3D biomedical image classification. Scientific Data, 10(1), p.41,2023. DeclarationsEthical approval: Not Applied.Competing interests: None.Authors' contributions: Not Applied.Funding: None.Availability of data and materials: All trained ANN models mentioned in this article can be downloaded from figshare, https://doi.org/10.6084/m9.figshare.24328756https://doi.org/10.6084/m9.figshare.24328756 titled `Validation for error-free ANN models on MNIST'. It also contains Matlab mfiles for the SGD training and GDT continuation algorithms.
http://arxiv.org/abs/2312.16060v1
{ "authors": [ "Bo Deng" ], "categories": [ "cs.LG", "cs.NE", "math.DS" ], "primary_category": "cs.LG", "published": "20231226141519", "title": "Error-free Training for Artificial Neural Network" }
Group Multi-View Transformer for 3D Shape Analysis with Spatial Encoding Lixiang Xu, Member, IEEE, Qingzhe Cui, Richang Hong, Senior Member, IEEE, Wei Xu,Enhong Chen, Senior Member, IEEE, Xin Yuan, Member, IEEE, Chenglong Li and Yuanyan Tang, Life Fellow, IEEEThis work was financially supported by National Natural Science Foundation of China (62176085, 62172458), Scientific Research Innovation Team in Colleges and Universities of Anhui Province (2022AH010095) and Industry-University-Research Cooperation Project (GP/026/2020 and HF-010-2021) Zhuhai City, Guangdong Province, China. (Corresponding author: Richang Hong and Enhong Chen. Equal Contribution: Lixiang Xu and Qingzhe Cui.)Lixiang Xu, Qingzhe Cui and Wei Xu are with the College of Artificial Intelligence and Big Data, Hefei University, Hefei 230027, China (e-mail: [email protected]; [email protected]; [email protected]).Richang Hong is with the School of Computer Science and Information Engineering, Hefei University of Technology, Hefei 230009, China (e-mail: [email protected]).Enhong Chen is with the Anhui Province Key Laboratory of Big Data Analysis and Application, School of Computer Science and Technology, University of Science and Technology of China, Hefei, Anhui 230000, China (e-mail: [email protected]).Xin Yuan is with the School of Electrical and Mechanical Engineering, The University of Adelaide, Adelaide, SA 5005, Australia (e-mail: [email protected]).Chenglong Li is with the School of Artificial Intelligence, Anhui University, Hefei, 230601, China ([email protected]).Yuanyan Tang is with the Zhuhai UM Science and Technology Research Institute, FST University of Macau, Macau (e-mail: [email protected]).================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================ Many humanoid and multi-legged robots are controlled in positions rather than in torques, preventing direct control of contact forces, and hampering their ability to create multiple contacts 
to enhance their balance, such as placing a hand on a wall or a handrail. This paper introduces the SEIKO (Sequential Equilibrium Inverse Kinematic Optimization) pipeline, drawing inspiration from flexibility models used in serial elastic actuators to indirectly control contact forces on traditional position-controlled robots. SEIKO formulates whole-body retargeting from Cartesian commands and admittance control using two quadratic programs solved in real time. We validated our pipeline with experiments on the real, full-scale humanoid robot Talos in various multi-contact scenarios, including pushing tasks, far-reaching tasks, stair climbing, and stepping on sloped surfaces. This work opens the possibility of stable, contact-rich behaviors while getting around many of the challenges of torque-controlled robots. Code and videos are available at <https://hucebot.github.io/seiko_controller_website/>.Whole Body Admittance Control, Multi-Contact, Teleoperation, Joint Flexibility, Humanoid Robot. § INTRODUCTION Humans often use additional contact points to enhance their stability, for instance, using a handrail or a wall when walking, or to extend their reach, for instance, to grasp an object that is too far forward. While humanoid robots would benefit from a similar strategy, current robots minimize the number of contacts and use them only for feet and required interactions with the environment, such as pushing a button <cit.>.The primary challenge in controlling multi-contact lies in the redundancy of force distribution resulting from closed kinematic chains <cit.>. For a given posture with several contacts, there exists an infinity of ways to distribute force among them. For instance, a humanoid with both hands on a table can apply more or less force to the hands without any visible change in joint position.To explicitly regulate forces, most work on multi-contact whole-body control relies on torque-controlled robots with inverse dynamics controllers <cit.>. Unfortunately, inverse dynamics is highly sensitive to model and calibration errors, and identifying models for humanoids is particularly challenging <cit.>. Perfect identification of environment's properties is generally not possible. This is why most deployed robots use position control, which is simpler and more reliable <cit.>, but lacks direct control authority over contact force, thus hindering the exploitation of multi-contact strategies.In this paper, we present a control pipeline (Fig. <ref>) designed to regulate contact forces using position-controlled robots. Drawing inspiration from series elastic actuators <cit.>, our main idea is to leverage the “flexibility” that stems from mechanical parts that bend with high load but also from the non-perfect joint position tracking with PID controllers, which act as “elastic elements”. While it is not possible to compute the contact force from the posture alone <cit.>, the flexibility will make the forces converge to a unique equilibrium. We model this phenomenon to link position commands to force distribution. To invert this relation, that links force distributions to joint positions, we formulate it as a Quadratic Programming (QP) problem to leverage fast QP solvers.We conducted experiments on the Talos humanoid robot <cit.>, equipped with powerful arms but known for significant hip mechanical flexibility <cit.>. Our control pipeline is compatible with commands from autonomous planners and teleoperation, with a focus on the latter in this study. 
Well-suited for teleoperation, our method is robust against operator errors related to awareness and embodiment challenges. Unlike most existing methods, our approach enables motions close to the feasibility boundaries (in terms of kinematics, balance, and torque limits), allowing the capabilities of the hardware to be fully exploited. Our work, named SEIKO for Sequential Equilibrium Inverse Kinematic Optimization, provides the following contributions:
* An SQP formulation that computes posture deflection and joint command correction, accounting for joint flexibility in multi-contact quasi-static conditions.
* A multi-contact retargeting and control architecture for position-controlled robots with contact switch and pushing capabilities, designed to be robust against model errors.
* Validation on the Talos humanoid robot hardware with several multi-contact tasks, including the validation of our prior retargeting work, which was previously tested only in simulation for humanoid robots.
§ RELATED WORK
Our previous works <cit.> explored teleoperation and retargeting for feasible multi-contact tasks on simulated humanoids and hardware bimanual manipulators. While torque-controlled robots and inverse dynamics controllers were used there to regulate contact wrenches, this new work introduces a whole-body admittance controller to address position-controlled robots, enhancing robustness to model errors. Many works have studied teleoperation of complex robots with floating bases <cit.>, but few have explicitly addressed multi-contact scenarios. <cit.> demonstrated multi-contact teleoperation with operator whole-body tracking on HRP-4, including retargeting and position control. <cit.> introduced a dedicated human-robot interface for multi-contact teleoperation on Valkyrie, employing joint impedance control. However, both approaches lack explicit regulation of contact forces. On torque-controlled robots, multi-contact tasks have been explored both in simulation <cit.> and with real humanoids <cit.>. Whole-body inverse dynamics controllers are used for direct regulation of contact wrenches and internal forces. Joint impedance control provides a robust alternative to pure torque control, accommodating model errors while enabling force commands. <cit.> showcased multi-contact setups on the CENTAURO and COMAN+ robots, employing a QP-based whole-body inverse kinematics for postural control. A second QP computes contact force references under a quasi-static assumption, integrating these references as a torque feedforward term in the low-level joint impedance scheme. On position-controlled robots, contact force regulation is frequently ignored. <cit.> demonstrated balance stabilization through feedback laws applied prior to inverse kinematics, and <cit.> showcased ladder climbing on HRP-2 using an inverse dynamics controller optimizing joint accelerations, integrated twice to obtain position commands. While purely kinematic control is effective when the robot is far from its feasibility limits, it inherently lacks control over the robot's full configuration. Foot force difference control from <cit.> influenced a common approach for regulating contact forces: applying an admittance scheme on the effector's Cartesian position normal to the contact surface <cit.>. This scheme implicitly relies on joint impedance or flexibility, and often requires ad-hoc feedback laws on foot height, ankle joints, and the CoM.
The idea of explicitly modeling torques produced from position-controlled actuators, proposed by <cit.> and studied in <cit.>, has been applied to multi-contact tasks on the Walk-Man robot <cit.>. Similar to our approach, they differentiate the quasi-static equilibrium but their method uses pseudo-inverses and does not consider retargeting nor constraints. The closest related work is <cit.>, showcasing multi-contact tasks on the position-controlled HRP-2 humanoid. They differentiate the quasi-static equilibrium and use elastic joint models, but their method solves a cascade of QP problems, while our formulation is unified. Their purely reactive control architecture, lacking feedforward terms and retargeted references, is more sensitive to noise and violations of the quasi-static assumption. In contrast, our method allows faster motions and assumes actual joint positions cannot be measured but are estimated by the model, accommodating robots with mechanical flexibility like the Talos robot.§ PROBLEM DEFINITION Quasi-static robot configurations are defined by postural positions, joint torques, and contact wrenches q, τ, λ. For position-controlled robots, control inputs only consist of joint position commands θcmd. The whole-body retargeting stage (Fig. <ref>, <cit.>) provides a stream of desired quasi-static configurations qd,τd,λd expected to be feasible.Achieving desired contact wrenches λd is essential for multi-contact tasks, but contact wrenches can not be directly commanded on position-controlled robots. Our approach aims to indirectly control contact wrenches through joint position commands θcmd optimized to take into account the flexibility of the robot. Table <ref> lists the notations and quantities used throughout this letter.Addressing the problem involves overcoming the following challenges: * Multi-contact tasks exhibit redundancy in both kinematics and contact wrench distribution, akin to the Grasp matrix's nullspace in manipulation <cit.>).* While adding contacts is generally feasible, removing contacts challenge the robot's balance and can be infeasible.* Transitioning between contact states (enabled or disabled) involves discrete changes in problem formulation. Ensuring continuity in contact wrenches (from non-zero to zero and vice versa) and posture is essential for smooth transitions.* To ensure safety, physical limits must be enforced such as balance, joint kinematics, actuator torque limits, and contact stability conditions <cit.> prohibiting pulling, sliding, tilting.* To apply the controller to hardware, it must be robust to model errors and violations of simplifying assumptions. § METHOD§.§ Main Idea According to rigid body theory in multi-contact <cit.>, the contact wrenches of an ideal infinitely stiff mechanical system are non-unique and lie in a redundant nullspace. Real systems, however, always exhibit inherent flexibility: the structure slightly bends, and both the deflected posture and contact wrenches uniquely evolve towards the configuration minimizing overall elastic energy. Therefore, given constant joint position commands, the mapping taking into account flexibility θcmd↦ (qflex, λflex) is unique and well-defined. Our approach models and predicts this whole-body non-linear deflection effect, utilizing it for the control of contact wrenches.Specifically, we differentiate and linearize the deflection effect to consider how contact wrenches change with variations in joint position commands through the Jacobian matrix ∂λflex/∂θcmd(qflex, λflex, θcmd). 
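To make this idea concrete, consider a deliberately simplified, hypothetical example that is not part of the SEIKO formulation itself: a rigid plate of weight W resting on three vertical springs, where the spring rest heights play the role of joint position commands and the spring stiffnesses play the role of the joint/transmission flexibility K. With a rigid model alone the three contact forces are statically indeterminate, but once flexibility is modeled the quasi-static equilibrium is unique, and the sensitivity of the forces to the commands (the analogue of ∂λflex/∂θcmd) can be computed numerically. The Python/NumPy sketch below only illustrates the principle; all numerical values are arbitrary.

```python
import numpy as np

# Hypothetical 2D toy problem: a rigid plate of weight W rests on three
# vertical springs at x-positions xs, with stiffnesses k (the "flexibility"
# model) and commanded rest heights u (the "joint position commands").
W, xc = 300.0, 0.05              # weight [N] and x-position of the CoM [m]
xs = np.array([-0.3, 0.0, 0.4])  # contact locations [m]
k = np.array([2e4, 3e4, 2.5e4])  # spring stiffnesses [N/m]

def equilibrium_forces(u):
    """Unique quasi-static contact forces given commands u (small-angle model)."""
    # Plate pose: height z and tilt theta; spring force f_i = k_i (u_i - z - theta*x_i).
    # Force and torque balance: sum_i f_i = W and sum_i f_i x_i = W * xc.
    A = np.array([[k.sum(),        (k * xs).sum()],
                  [(k * xs).sum(), (k * xs**2).sum()]])
    b = np.array([(k * u).sum() - W,
                  (k * u * xs).sum() - W * xc])
    z, theta = np.linalg.solve(A, b)
    return k * (u - z - theta * xs)

u0 = np.zeros(3)
f0 = equilibrium_forces(u0)
print("forces at nominal command:", f0, " sum =", f0.sum())  # sums to W

# Numerical sensitivity of the contact forces to the commands (analogue of df/du).
eps = 1e-6
J = np.column_stack([(equilibrium_forces(u0 + eps * e) - f0) / eps
                     for e in np.eye(3)])
print("df/du =\n", J)
print("column sums (≈ 0: commands only redistribute forces):", J.sum(axis=0))
```

In SEIKO the same reasoning is carried out on the full whole-body model rather than on a toy system, and, as explained next, the resulting relation is not inverted directly but embedded in a QP.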
Instead of directly inverting this Jacobian matrix, we formulate the control problem as a Quadratic Programming (QP) which solves for position command changes and optimizes multiple objectives, similar to task space inverse dynamic approaches. We explicitly model the system's flexibility by treating each robot joint as a spring, encompassing both internal actuator impedance and mechanical flexibilities. §.§ Overall Architecture Our proposed control architecture depicted in Fig. <ref>consists of a two-stage pipeline. Firstly, SEIKO Retargeting, previously introduced in <cit.>, optimizes a desired whole-body configuration qd, λd, τd within feasibility limits. Subsequently, our novel SEIKO Controller computes corrected joint position commands θcmd for tracking λd. These joint commands are then sent to the robot's low-level servomotors and tracked by stiff internal position controllers.The controller has three goals: (i) achieve the desired contact wrenches λd, (ii) avoid violations of joint torque limits τmax, and (iii) enhance robustness against model inaccuracies. The Retargeting step is crucial as it enforces feasibility limits a priori, and generates a desired configuration to be tracked. The controller indeed exhibits reduced stability when tracking a highly infeasible non-retargeted reference.The set of effectors that may come into contact with the environment is pre-defined. Each effector's state is either: “enabled”, standing for fixed and in contact transmitting forces and torques to the environment, or “disabled”, indicating that it is free to move and is commanded by the operator. Our formulation handles both plane contacts (6 DoFs, e.g., feet) and point contacts (3 DoFs, e.g., hands).An external planner or human operator provides commands as input to the Retargeting stage: (i) Cartesian pose Xop or velocity νop commands for each free (disabled) effector, (ii) a Boolean signal that manually triggers the transition between contact states, and (iii) an optional “pushing mode” enabling explicit control of the normal force of a specific enabled contact. Our method does not plan contact sequencing, relying on external decisions for contact stances and sequence.The proposed method operates instantaneously without considering the future of unknown intention, and relies on the quasi-static assumption. The nonlinear whole-body optimizations are solved using SQP schemes with only one QP iteration per time step. This allows for quick convergence at high frequency (500) and responsiveness to input changes. §.§ Equilibrium Equation and Flexibility Model Motions of mobile robots with a floating base are governed by the equation of motion in joint space <cit.>. Under the quasi-static assumption, where q̈≈q̇≈0, this equation simplifies to represent the equilibrium, i.e. system's balance, between contact wrenches, gravity effects, and applied torques:g(q) = Sτ + J(q)^λ,which is non-linear in q. The equilibrium equation is linearized by considering small variations of the configuration (q+Δq, λ+Δλ, τ+Δτ). The differentiation is written:g(q) + gqΔq = Sτ + SΔτ+ J(q)^λ + J(q)^Δλ+ (Jq^λ)Δq,while neglecting second order terms.Stiff position-controlled robots deviate from the rigid assumption due to inherent hardware flexibility arising from factors like Series Elastic Actuators <cit.>, deformations in links or transmissions <cit.>, impedance of non-ideal position control <cit.>, or the inclusion of soft damper elements within the structure <cit.>. 
In this work, we model this flexibility as joint elastic flexibility, where the relation between joint position and generated torque is expressed as follows:τflex = K(θcmd - θflex).Note that link flexibility can also be modeled in a similar manner by introducing passive joints without actuation. Its differentiated expression is written:Δτflex = K(Δθcmd - Δθflex) = K(Δθcmd - S'Δqflex),where qflex is the deflected posture under joint flexibility and θcmd is the joint position command of actuators.The differentiated equilibrium equation (<ref>) combined with flexibility model (<ref>) is linear w.r.t. configuration changes: SKΔθcmd = T(qflex,λflex)Δqflex Δλflex + t(qflex,λflex,θ^cmd) where T(qflex,λflex) = gq(qflex)-(Jq^(qflex)λflex)-SKS'| -J(qflex)^,t(qflex,λflex,θ^cmd) = g(qflex) - Sτflex - J(qflex)^λflex.Therefore Δθcmd can also be linearly expressed from Δqflex and Δλflex using the following row decomposition:0 KΔθcmd =TB TJΔqflex Δλflex +tB tJ Δθcmd(Δqflex, Δλflex) = K^-1(TJΔqflex Δλflex + tJ),where TB, tB stands for the floating base rows and TJ, tJ for the joint rows. §.§ SEIKO Retargeting This section summarizes the SEIKO Retargeting method developed in <cit.>.The Retargeting preprocesses inputs for each disabled effector, which includes the commanded motion from the operator (comprising both pose Xop and velocity νop) and the admittance velocity command νadm (see Section <ref>). Processing includes filtering and merging these commands:Xtargetoff = 𝖿𝗂𝗅𝗍𝖾𝗋𝗂𝗇𝗀( Xref(t) ⊕Xop) Xref(t+Δ t) = 𝖻𝗈𝗎𝗇𝖽𝖣𝗂𝗌𝗍𝖺𝗇𝖼𝖾(Xref(t) ⊕Δ t(νop + νadm),  Xdoff),where Xref∈ is a reference pose that integrates velocity commands at each time step. It allows the Cartesian pose command to be expressed relative to this reference. The filtering process incorporates a smoothing low-pass filter and enforces signal's velocity and acceleration limits through time-optimal bang-bang trajectory planning. We also constrain Xref within a radius of Xdoff to prevents the reference pose to windup when the retargeted motion is saturated by the feasibility constraints.At each time step, SEIKO Retargeting solve the QP: Δqd, Δλd, Δτdargmin 𝖥𝖪off(qd) ⊕Joff(qd)Δqd⊖Xofftarget^2 + θd + Δθd - θtarget^2 + τd + Δτd^2 + λd + Δλd^2 + Δqd^2 + Δλd^2such thatdifferentiated equilibrium equation (<ref>)𝖥𝖪on(qd) ⊕Jon(qd)Δqd⊖Xtargeton = 0θmin≤θd + Δθd≤θmax -τmax≤τd + Δτd≤τmax Ccontact(λd)Δλd + ccontact(λd) ≥0 -Δ tθ̇max≤Δθd≤Δ tθ̇max -Δ tλ̇max≤Δλd≤Δ tλ̇max. The QP solves for the configuration change (<ref>), integrating it to update the desired configuration, e.g., λd(t+Δ t) = λd(t) + Δλd. The optimization minimizes tasks weighted by manually tuned parameters for stability and desired trade-off. The cost function includes disabled effector pose targets (<ref>), default joint position targets (<ref>) for regularization and mitigating kinematic local minima, joint torque minimization (<ref>) for human-like postures, contact wrench penalization (<ref>), and decision variable regularization (<ref>).Equality constraints enforce the equilibrium equation (<ref>) and ensure enabled contacts are fixed (<ref>). Inequality constraints include joint position limits (<ref>), joint torque limits (<ref>), and contact stability conditions (<ref>) considering unilaterality, friction pyramid, and center of pressure (see <cit.>). Additional constraints involve limits on joint changes (<ref>) and contact wrench changes (<ref>).We enhanced the contact switching procedure compared to prior work. 
To remove a contact, we instantly increase the weight of the wrench penalty task to a very high value and use joint velocity θ̇max and wrench velocity λ̇max limits to ensure a smooth transition. When the integrated desired wrench falls below a small threshold, the contact is removed. Enabling a contact is straightforward, as it doesn't require any special considerations, thanks to these limits. §.§ SEIKO Controller We assume that actual joint positions under flexibility cannot be directly measured but can be estimated from the model. Despite model errors, our approach relies on the model's derivatives direction to provide sufficient information about system evolution. The controller uses differentiation of the equilibrium equation with flexibility (<ref>) to model how contact wrench distribution changes with joint command changes Δθcmd. This approach generalizes previously used admittance control laws such as “foot difference control” <cit.> which implicitly depends on flexibility without considering it.A unique feedback law is applied from measured wrenches:Δλeffort = Δλd + K_p(λd - λ̃read) - K_dλ̇read,where Δλeffort is the desired effort in the controller optimization, and Δλd acts as a feedforward term. SEIKO Controller solves the following QP at each time step:: Δqflex, Δλflexargmin Δλeffort - Δλflex^2 + 𝖥𝖪off(qflex) ⊕Joff(qflex)Δqflex⊖Xoffd^2 θcmd+Δθcmd-θd^2 + Δθcmd^2 +such thatTBΔqflex Δλflex+tB = 0𝖥𝖪on(qflex) ⊕Jon(qflex)Δqflex⊖Xtargeton = 0θmin≤θcmd + Δθcmd≤θmax -τ̃max≤τflex + Δτflex≤τ̃max. The QP solves for flexible configuration changes Δqflex, Δλflex (<ref>). Joint command changes Δθcmd are obtained from the decision variables using (<ref>) and qflex, λflex, θcmd are then obtained by integration.The cost function primarily computes joint position correction Δθcmd and resulting posture deflection Δqflex to achieve the control effort on contact wrench changes Δλeffort (<ref>). It also adjusts disabled effector poses influenced by flexibility toward Retargeting's desired poses (<ref>). As secondary objectives, the optimization penalizes the discrepancy between corrected and desired joint positions (<ref>) and regularizes changes in joint commands (<ref>).Equality constraints enforce differentiated equilibrium equation with flexibility (<ref>) through upper floating base rows decomposition (<ref>) and ensure no Cartesian motion for enabled contacts (<ref>). Inequality constraints ensure kinematic limits of joint position commands θcmd (<ref>) and restrict maximum joint torques (<ref>).Joint torque limits τ̃max used as constraints are dynamically updated to prevent the integrated state |τflex| from continuously increasing when the measured joint torque |τread| reaches the defined torque limit τmax. For each joint at each time step:τ̃max(t+Δ t) =τflex+ϵ_1if |τread|>τmax ∧τ̃max(t)>τflex+ϵ_1,τ̃max(t)+ϵ_2else if |τread|<τmax-ϵ_3  ∧τ̃max(t) < τmax,τmax else if τ̃max(t) > τmax, τ̃max(t)else,where ϵ_1, ϵ_2, ϵ_3 ∈ are small positive margin parameters implementing a hysteresis effect to improve stability.§.§ State Estimation and Effectors Admittance The estimated measured wrench λ̃read in feedback law (<ref>) is computed using a complementary filter:λ̃read(t+Δ t) = α( λ̃read(t) + Δλflex) + (1-α)λread.This filter enhances closed-loop stability by mitigating dynamical effects affecting λread neglected by the quasi-static assumption. 
It introduces a trade-off between the reactive measurement and the term estimated through the integration of the predicted change Δλflex.We utilize an admittance scheme to compute an additional Cartesian velocity command for disabled effectors νadm:νadm = 𝖿𝗂𝗅𝗍𝖾𝗋𝗂𝗇𝗀( Kadmλreadoff),where filtering involves a deadband and output clamping. This control law aims to reduce interaction wrenches to zero for disabled effectors, preventing large unintended and unmodeled forces during contact establishment, facilitating foot alignment with surface orientation, and minimizing residual wrenches after contact removal. Implemented at input of the Retargeting level, this approach seamlessly integrates with operator command processing (<ref>).§ EXPERIMENTAL EVALUATION §.§ Implementation Details We implemented SEIKO in C++ using RBDL <cit.> and Pinocchio <cit.> rigid body libraries. More specifically, Pinocchio efficiently computes the analytical derivatives of the terms appearing in the differentiated equation (<ref>). We solve the QP problems using the QuadProg <cit.> solver.The entire control pipeline operates at a frequency of 500 Hz, with joint position commands interpolated at 2 kHz before being transmitted to the robot's actuators. The median computing times observed on the internal computer of the Talos robot are 0.50 ms and 0.40 ms for SEIKO Retargeting and SEIKO Controller, respectively. The maximum measured times for each were 0.56 ms and 0.43 ms, respectively.The Talos robot, manufactured by PAL Robotics, is a humanoid robot of 1.75 m height with 32 DoFs. Externally, we measured its actual total mass to be 99.7 kg, while the URDF model provided by PAL assumes a mass of 93.4 kg. This discrepancy of 6 kg can be seen by the Force-Torque sensors in the feet, which enable our controller to adapt to this model error. We changed the robot's right hand and forearm with a 3D printed part that replaced the gripper and wrist joints beyond the elbow joint. The ball-shaped hand allows us to apply high contact forces (up to 30 kg) on the arm during multi-contact tests. After removing the right forearm joints and excluding the head joints, our QP solver works with n = 25 joints.Throughout all our evaluations, we employed as flexibility model K the position-control P gains imported from PAL's Gazebo simulation of the Talos robot. Unlike other works <cit.> that estimate precise flexibility model, our approach does not heavily depend on model accuracy. This is because our differentiated formulation utilizes only the approximate “gradient” direction for whole-body control.In all subsequent experiments, an expert operator issued velocity commands for each robot's effectors using dedicated 6-DoF input devices[3Dconnexion SpaceMouse: <https://3dconnexion.com/uk/spacemouse/>], with one device assigned to each effector. Teleoperation was conducted with a clear, direct line of sight to the robot and its surrounding environment. §.§ Wrench Distribution Tracking In Fig. <ref>, we illustrate the role of SEIKO Controller in realizing multi-contact wrench distribution during a hand pushing task. The robot initiates a point contact with a vertical wall using its left hand. The “pushing mode” of SEIKO is employed to command a target trajectory for the normal force applied on the wall. 
Retargeting adjusts the robot's posture slightly forward to apply a large force (75 N), and generates the desired contact wrenches, including opposing tangential forces on the feet in the sagittal plane.It is worth noting that we didn't perform any identification or tuning of the robot flexibility model on the actual hardware, which may have significant errors. Estimating this flexibility <cit.> could enhance tracking accuracy, given that we observed near-perfect tracking performance in the Gazebo simulator which uses an ideal model.The attached video[<https://hucebot.github.io/seiko_controller_website/>] demonstrates additional multi-contact scenarios, such as stair climbing and stepping on a sloped surface (Fig. <ref>). §.§ Contact Switch Fig. <ref> illustrates the foot contact switch capabilities, showcasing the Talos robot being teleoperated to lift and then re-establish contact with the right foot. Without the Controller, weight transfer from the right to the left foot and hand occurs abruptly during the foot lift. The robot did not fall as it was operating far from its feasibility boundaries. Conversely, when the controller and admittance scheme (equation (<ref>)) were enabled, the redistribution of contact wrenches became smooth and controlled. Additionally, at t=43 s, when the foot collided with the ground, the admittance control sightly lifted the foot to prevent unwanted ground forces before contact was re-established. §.§ Whole Body Damping Imperfect stiff position control and flexibilities lead to small oscillations when disturbed, particularly noticeable on Talos in the sagittal plane, causing forward-backward oscillations. In equation (<ref>), the controller's feedback law employs a damping term with the gain parameter K_d. We show in Fig. <ref> that this unique feedback law on contact wrenches effectively attenuates these whole-body oscillations.In double support, we applied short pushes (10-12 pushes, Fig.<ref> left) to the robot's torso and observed oscillations until energy dissipation. Using the controller, we tested various damping gain (K_d = 0.0, 0.01, 0.05). We recorded unfiltered angular velocity in sagittal plane with pelvis IMU's gyroscope since it does not rely on model nor unobserved joint positions. Fig. <ref> (center) shows median and 20%-80% deciles confidence interval of sagittal motion velocity. To quantify damping (Fig. <ref> right), we estimated the averaged logarithmic decrement from oscillation peaks (δ = 𝖺𝗏𝗀(log(ω(t)/ω(t+T)))), reflecting damping of oscillation amplitudes and linked to the damping ratio for under-damped systems.In following experiments, the damping gain is set to K_d = 0.02, as higher values tended to be unstable near feasibility boundaries where model errors had a more pronounced effect. §.§ Far Reaching with Model Errors Fig. <ref> illustrates the capability of our approach to perform challenging far-reaching tasks near feasibility limits, even in the presence of large model errors. We teleoperated the right hand of the Talos robot for a forward-reaching motion as far as allowed by the controller, and added a 9 kg load during operation on the hand to induce mass model errors. The robot remained stable thanks to the tracking of foot contact wrenches and adaptation of the whole body posture. Additionally, the Controller through equation (<ref>) prevents excessive violation of joint torques, with a limit ratio set to |τread|/τmax < 0.6. 
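For reference, the damping metric used in the whole-body damping evaluation above — the averaged logarithmic decrement δ computed from successive oscillation peaks of the gyroscope signal — can be estimated with a few lines of code. The sketch below is a plausible re-implementation for illustration only: it uses SciPy's peak detection on a synthetic damped oscillation, and the signal, sampling rate, and damping values are made-up stand-ins rather than the values recorded in the experiments.

```python
import numpy as np
from scipy.signal import find_peaks

def log_decrement(omega, min_height=0.01):
    """Average logarithmic decrement from successive peaks of an oscillating signal."""
    peaks, _ = find_peaks(omega, height=min_height)
    amps = omega[peaks]
    if len(amps) < 2:
        return np.nan
    return np.mean(np.log(amps[:-1] / amps[1:]))

# Synthetic example: damped sagittal angular velocity after an impulse (made-up values).
fs, f0, zeta = 500.0, 1.2, 0.05          # sample rate [Hz], frequency [Hz], damping ratio
t = np.arange(0.0, 5.0, 1.0 / fs)
omega = np.exp(-zeta * 2 * np.pi * f0 * t) * np.cos(2 * np.pi * f0 * t)

delta = log_decrement(omega)
print("estimated log decrement :", round(delta, 3))
print("expected for this signal:", round(2 * np.pi * zeta, 3))  # envelope decay per period
```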
§.§ Robustness Evaluation We performed a comprehensive analysis of our approach's robustness using the MuJoCo simulator, as summarized in Fig. <ref>. The focus was on evaluating the impact of model errors and motion speed on system's balance. We simulated the Talos robot in double support, executing 10 motion sequences reaching a distant target with the left hand and returning to the initial posture. The number of successful trials without fall for three conditions are reported: (i) without SEIKO Controller, (ii) with SEIKO Controller but without considering joint torque limits (<ref>), (<ref>), and (iii) using the full control method. Variations included hand Cartesian motion velocity (slow 2 cm/s to fast 40 cm/s) and additional mass on the left hand (none to 12 kg).We observed that MuJoCo's soft contact model produces a more pronounced flexibility behavior than Gazebo or even the actual robot. The presented results implicitly incorporate flexibility model errors, although they are not quantified.SEIKO Retargeting without whole-body control (left) operates in open-loop and is partially robust to motion speed but struggles with model errors. Introducing SEIKO Controller (middle) significantly improves success rates, adapting joint position commands to handle additional hand mass for balance. However, unplanned posture changes and model errors near full extension reach actuator torque limits, leading to loss of control authority. Considering actuator torque limits in the controller (right) enhances robustness by optimizing posture and avoiding infeasible hand pose commands. Challenges persist at high speeds and heavy masses, where inertial effects violate the quasi-static assumption. § DISCUSSION AND CONCLUSION Our control architecture's robustness is showcased at moderate motion speeds (Fig. <ref>), but it inherently relies on the quasi-static assumption and is unsuitable for highly dynamic motions. Establishing contact with stiff position-controlled robots requires precise and slow operator commands, even if effectors admittance (<ref>) helps mitigating this problem. Future work could explore applying the proposed approach to robots using joint impedance control. As analyzed in <cit.>, we noted greater leg flexibility in the Talos robot than in our basic model. Although our controller enables successful contact transitions in teleoperated tasks, this significant difference hampers the quick contact switches needed for walking. Refining the flexibility model may allow walking capabilities.The robot fell when attempting to climb large 20 cm stairs due to exceeding arm joint torque limits during the challenging contact switch. Despite being theoretically feasible according to the retargeting model, the adaptation of joint torque limits (<ref>) is insufficient to ensure robustness if an infeasible contact transition is attempted due to model errors (e.g., underestimating the robot's weight).Our approach overcomes the inherent lack of direct control authority over contact forces of position-controlled by explicitly considering flexibilities. While torque-controlled robots are traditionally used to perform pushing and multi-contact tasks, our SEIKO control pipeline extends these capabilities to position-controlled robots. We also demonstrate robustness to model errors, safely carrying substantial unmodelled loads at arm's length. 
The unified formulation employs a single feedback law on contact forces, effectively leveraging both posture change (i.e., CoM displacement) and contact force redistribution to regulate whole-body balance. Given that the primary advantage of humanoids and other multi-limbed robots lies in their strong versatility, this research paves the way for broadening the application and deployment of more capable and adaptable multi-contact systems in real-world scenarios, including uncertain contexts and environments.
http://arxiv.org/abs/2312.16465v1
{ "authors": [ "Quentin Rouxel", "Serena Ivaldi", "Jean-Baptiste Mouret" ], "categories": [ "cs.RO" ], "primary_category": "cs.RO", "published": "20231227083200", "title": "Multi-Contact Whole Body Force Control for Position-Controlled Robots" }
An efficient approach to characterize spatio-temporal dependence in cortical surface fMRI data
Huy Dang (Dept. of Statistics, Pennsylvania State University, USA), Marzia A. Cremona (Dept. of Operations and Decision Systems, Université Laval, Canada; CHU de Québec – Université Laval Research Center, Canada), Francesca Chiaromonte (Dept. of Statistics, Pennsylvania State University, USA; Inst. of Economics and L'EMbeDS, Sant'Anna School of Advanced Studies, Italy) and Nicole Lazar (Dept. of Statistics, Pennsylvania State University, USA)
January 14, 2024
Functional magnetic resonance imaging (fMRI) is a neuroimaging technique known for its ability to capture brain activity non-invasively and at fine spatial resolution (1-3mm). Cortical surface fMRI (cs-fMRI) is a recent development of fMRI that restricts attention to signals exclusively from tissue types that have neuronal activities, as opposed to the whole brain. cs-fMRI data is plagued with non-stationary spatial correlations and long temporal dependence which, if inadequately accounted for, can hinder various types of downstream statistical analyses. We propose a fully integrated approach that captures both spatial non-stationarity and varying ranges of temporal dependence across regions of interest. More specifically, we impose non-stationary spatial priors on the latent activation fields and model temporal dependence via fractional Gaussian errors of varying Hurst parameters, which can be studied through a wavelet transformation and its coefficients' variances at different scales. We demonstrate the performance of our proposed approach via simulations and an application to a visual working memory task cs-fMRI dataset.
Keywords: cs-fMRI, spatio-temporal dependence, SPDE, wavelets, fractional Gaussian noise.
§ INTRODUCTION AND MOTIVATION
Functional magnetic resonance imaging (fMRI) is a neuroimaging technique known for its ability to non-invasively measure Blood Oxygen Level Dependent (BOLD) signals at fine spatial resolution (typically 2-3mm). The BOLD signal is defined as the changes in the ratio of oxygenated to deoxygenated blood, either in response to a task/stimulus in an experiment or as a consequence of spontaneous neural metabolism. In fMRI literature, the BOLD signal is widely taken to be a proxy of brain activity, and used to capture associations between local brain areas with different functions. Traditionally, fMRI data collected during a task are represented by 4D arrays: signals are measured at discrete locations, each representing a small cube in a partition of the 3D brain (these are known as voxels), along a time course – which produces the 4th dimension. This type of "volumetric" fMRI data contains both signals from tissues that have neuronal activities (such as gray matter) and noise from tissues that do not (such as white matter and cerebral spinal fluid). Because of spatial contiguity between active and inactive tissues, distance-based analyses are often affected by spurious noise from the latter.
Cortical surface fMRI data (cs-fMRI) is the result of a recently developed technique that uses vertices on a 2D surface mesh, instead of traditional volumetric voxels, to represent the folded, sheet-like geometry of the cerebral cortex. In multi-subject studiessurface meshes from different individuals can be aligned to a common template based on cortical folding patterns and areal features <cit.>.By virtue of being confined exclusively to cortical gray matter, cs-fMRI data is not affected by signal contamination frominactive tissues.Increased homogeneity due to the fact that data are collected only fromcortical gray matterthus improves distance-based analyses.Figure <ref> illustrates the surface mesh used to achieve the 2D representation of the cerebral cortex at different degrees of “inflation”– ranging from “white matter surface” (least inflated and preserving folding patterns) to “inflated”, “very inflated”, and “spherical” (most inflated, no folding patterns).By inflating the surface mesh, distances between mesh vertices lose their original 3D Euclidean interpretation and become more geodesic-like.Readers interested in a more comprehensive review of volumetric and cs-fMRI data are referred to <cit.>.From here on, “voxel” refers to the smallest spatial unit in volumetric fMRI data, and “vertex” to its cs-fMRI counterpart. §.§ Temporal dependence It is well known that fMRI data exhibittemporal correlations.Existing literature provides evidence of both short- and long-range temporal dependence that can be attributed to spontaneous, non task-related fluctuations in neuronal activity <cit.>.Moreover, <cit.> showed that the ranges of dependence can be affected by brain structures and functions. As a real data illustration,Figure <ref> shows the autocorrelation function (ACF), the partial autocorrelation function (PACF) and the spectrum against frequency for two resting state BOLD time series. These are taken at different brain locations from an individual in the Human Connectome Project (HCP) healthy young adults data set <cit.>.The upper panelsillustratea time series with short-range temporal dependence,as evidenced by the presence of only a few significant ACF and PACF coefficients at small lags, while the bottom panels illustrate a time series with long-range temporal dependence,with a slowly decaying autocorrelation.The latter alsoshows very large estimated spectral density at small frequencies, in contrast to the more evenly distributed spectral density across frequencies of the former. Since the spectral density is the Fourier transform of the covariance function, large spectral density estimates corresponds to large contributions to the covariance function's second moment, implying significant dependence at frequencies near 0, or equivalently, at large lags. The vast majority of fMRI literatureemploys autoregressive models of order p – that we denote as AR(p) – to prewhiten the time series separately at each voxel, or vertex, prior to further analyses.Some recent examples include the use of AR(1) in <cit.>, AR(2) in <cit.>, AR(3) in <cit.>, and AR(6) in <cit.>.However, ordinary AR(p) models are designed to capture autocorrelations up to a finite order p, and therefore are not suitable for modeling long-range dependence.In addition, we show with a simple simulation that prewhitening time series with an AR process may cause loss of information and lead to biased estimates. 
Let x = {x_t}, with x_t = ∫_-∞^∞ h(u)s(t-u)du, be a stimulus s(·) convolved with the canonical hemodynamic response function h(·), observed along times, say, t = 1, …, 256 (more detail on this convolutional representation, which is typical in fMRI data, will be provided in Section <ref>). We simulate the responses y and z by adding to x a long-range dependent noise ϵ and an auto-regressive noise η, respectively; that is
y = βx + ϵ, z = βx + η .
To simulate the long-range dependent ϵ, we use fractional Gaussian noise with Hurst parameter H = 0.8. For the auto-regressive η, we fix p = 6 and use AR coefficients estimated from actual BOLD signals (from the same HCP individual used above, at randomly selected vertices/locations). In both cases, we set β = 2 and noise variance equal to 1. We then carry out the estimation of β following the procedure used in <cit.>: we fit AR(6) models to both ϵ and η, obtain the AR coefficients in each of the two cases and use them to pre-whiten y and z, and finally regress the pre-whitened time series y_pw and z_pw on x to estimate β. Table <ref> shows estimation results in the long- and short-range dependence scenarios with the AR(6) prewhitening scheme, as well as with an ordinary regression without prewhitening. The results, based on 1000 simulation runs, indicate that ordinary regression (without prewhitening) produces unbiased estimates of β. On the contrary, AR(6) prewhitening of the time series produces a marked underestimation of β in both scenarios, including when the noise is in fact generated with an AR(6) process. This simple but informative simulation exercise illustrates the problems that can be induced by prewhitening time series with autoregressive models prior to further analyses. (A code sketch of this simulation is provided below.) In order to account for temporal correlations, we strive for a more flexible modeling approach that can accommodate both short- and long-range dependence across brain regions. Such an approach should not cause loss of information or biases, and should be implementable through a computationally efficient algorithm. In particular, to contain computational cost, it should produce sparse temporal correlations.
§.§ Spatial dependence
The sheer size of fMRI data presents a challenge when modeling correlations between voxels or vertices. In addition to sparsity in representing temporal dependences, sparsity assumptions on the spatial dependence structure are absolutely critical; without them, most standard analyses (e.g., fitting linear models) will face an immense computational burden due to the need to invert a dense VT × VT covariance matrix, where V is the number of voxels or vertices, and T is the number of time points. In traditional volumetric fMRI data V ≈ 120,000, whereas in cs-fMRI V ≈ 30,000 for each hemisphere, and T typically ranges from 200 to 1,200. In existing fMRI literature, the most common (and unrealistic) assumption is that of no spatial dependence, which means fitting a model for each time series at each voxel/vertex independently – an approach that is often referred to as massively univariate <cit.>. However, untreated spatial correlations often lead to underestimated uncertainty and inflated type I error rates in hypothesis tests <cit.>.
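Returning briefly to the temporal illustration above, a minimal Python/NumPy sketch of that simulation is given here. It is illustrative only: the regressor x is a simple boxcar stimulus convolved with a gamma-shaped response as a stand-in for the canonical HRF, the fractional Gaussian noise is generated by a direct Cholesky factorization of its autocovariance (given in Section 2.1), the AR(6) coefficients are estimated by least squares rather than taken from real BOLD data, and the number of replicates is reduced.

```python
import numpy as np

rng = np.random.default_rng(0)
T, beta, H, p = 256, 2.0, 0.8, 6

# Stand-in task regressor: boxcar stimulus convolved with a gamma-shaped response.
tt = np.arange(T)
stim = ((tt % 40) < 20).astype(float)
hrf = (tt**5) * np.exp(-tt / 1.2); hrf /= hrf.max()
x = np.convolve(stim, hrf)[:T]

# Exact fractional Gaussian noise via Cholesky of its autocovariance.
lags = np.arange(T)
acov = 0.5 * (np.abs(lags + 1)**(2*H) - 2*np.abs(lags)**(2*H) + np.abs(lags - 1)**(2*H))
L = np.linalg.cholesky(acov[np.abs(np.subtract.outer(lags, lags))])

def ar_fit(e, q):
    """Least-squares AR(q) coefficients."""
    lagged = np.column_stack([e[q - i - 1:-i - 1] for i in range(q)])
    return np.linalg.lstsq(lagged, e[q:], rcond=None)[0]

def prewhiten(v, a):
    q = len(a)
    lagged = np.column_stack([v[q - i - 1:-i - 1] for i in range(q)])
    return v[q:] - lagged @ a

b_ols, b_pw = [], []
for _ in range(200):
    eps = L @ rng.standard_normal(T)           # long-range dependent noise (H = 0.8)
    y = beta * x + eps
    b_ols.append(np.dot(x, y) / np.dot(x, x))  # ordinary regression, no prewhitening
    a = ar_fit(eps, p)                         # AR(6) fitted to the noise, as in the text
    y_pw = prewhiten(y, a)
    b_pw.append(np.dot(x[p:], y_pw) / np.dot(x[p:], x[p:]))

print("mean beta, ordinary regression  :", round(np.mean(b_ols), 2))
print("mean beta, AR(6)-prewhitened y  :", round(np.mean(b_pw), 2))  # markedly below 2
```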
While inverting the aforementioned covariance matrix has been and remains computationally unrealistic,recent years have seen a rise in efforts to acknowledge and treat spatial dependence.This is typically achieved via a combination of more sophisticated statistical models, approximation techniques, and/or down-sampling of the data.An early example of spatial dependence treatment is the work of <cit.>,employing a two-stage hierarchical Bayesian approach.First, standard regressions were fitted independently at each voxel to estimate activation coefficients; then, coefficients of voxels belonging to the same functional region were used to model within region correlations via a spatial autoregressive model. Computation was made feasible by subsampling the 3D volumetric data to 2D slice data, and by assuming stationarity of the covariance. A different approach was taken by <cit.>, which applied a Fourier transform to the fMRI time series and fitted a spatio-spectral mixed-effect model, using random effects to capture spatial correlation within and between regions of interest. Although some degrees of non-stationarity was allowed in this approach, activation was restricted to regional level instead of individual voxels.More recently, <cit.> used L Gaussian processes with different Matern covariances to model local dependence within different regions, and a region-specific random effect to account for between-region dependence. While the model incorporated non-stationary spatial dependence on full 3D volumetric data, estimation at each spatial scale (local and regional) was carried out sequentially and only aportion of the data was used at each estimation step. For example, local dependence was estimated region by region, and regional dependence was estimated using only regional averages. On a different front, <cit.> assumed a latent task activation model and represented spatial dependence via spatial priors on the activation fields. The proposed method was computationally very efficient, as estimation was carried out within the Gaussian Markov Random Fields (GMRFs) framework, and inference was made possible in continuous space thanks to an established connection between GMRFs and Gaussian Fields (GFs) with Matern covariance functions <cit.>. Their approach, however, did not allow for non-stationarity, which is an important spatial feature of fMRI data. §.§ On things to come To improve on the existing treatment of fMRI data, we propose a fully integrated approach that captures both spatial non-stationarity and varying ranges of temporal dependence across regions of interest, focusing in particular on the new class of cs-fMRI data.More specifically, we impose spatial priors on the latent activation fields as in <cit.>; however, our approach allows for non-stationarity by letting the prior hyperparameters be driven by local smoothness in the data.We model temporal dependenceusing fractional Gaussianerrors of varying Hurst parameters; the Hurst parameters can then be studied through wavelet transformation and the wavelet coefficients' variances at different scales.Bayesian inference is carried out approximating the marginal posteriorswith the Integrated Nested Laplace Approximation (INLA) approach <cit.>. The remainder of this article is organized as follows. Section 2provides the theoretical background. Section 3details our modeling approach.Section 4demonstrates the performance of our proposal through simulations and an application to real cs-fMRI data concerning a visual working memory task. 
Section 5contains final remarks. § THEORETICAL BACKGROUND§.§ Fractional Gaussian noise and wavelet transformation Long- and short-range dependence can be flexibly and sparsely modeled with fractional Gaussian noise (fGn). fGn is defined as the difference between consecutive values of fractional Brownian motion <cit.>. Given a lag ℓ > 0, the autocovariance of a fractional Gaussian process G is given by C_G(ℓ; H) = σ^2/2[(ℓ + 1)^2H - 2ℓ^2H + (ℓ -1)^2H] ℓ→∞≈σ^2 H(2H-1)ℓ^2H -2where H is the Hurst parameter and σ^2 is the variance. It is easy to see that when H = 1/2, the process corresponds to white noise.When 0 < H < 1/2, or equivalently, -2 < 2H - 2 < -1, the autocovariance decays exponentially fast with lag ℓ and the process has short-range dependence.On the other hand, when 1/2 < H < 1, the autocovariance decays at hyperbolic rate -1 < 2H - 2 < 0, causing ∑_ℓ = 0^∞ C_G(ℓ; H) = ∞, and the process has long-range dependence. When ∑_ℓ = 0^∞ C_G(ℓ; H) = ∞, individual correlations at large lags may be small, but cumulatively, their effect is significant. A consequence of untreated correlations are biases in variance estimates that do not vanish as the sample size increases, which in turn affects confidence intervals and hypothesis tests. To treat time series with varying ranges of dependence, one efficient approach is wavelet transformation. It has the desirable property of representing an autocorrelated process with coefficients that are approximately uncorrelated <cit.>. In addition, there is no loss of information as wavelet bases are orthonormal, and the transformation is calculated with 𝒪(n log_2 n) operations – comparable to fast Fourier transform <cit.>. As detailed below, the Hurst parameter can be estimated from the decaying rate of the wavelet coefficients' variances across scales. Let (ϵ_1, …, ϵ_n) be a process with covariance function given by Equation <ref>. The discrete wavelet transform decomposes the process into a set of detail coefficients {d_jk: j = 1, …, J,k = 1, …, n/2^j}, and approximation coefficients {a_Jk: k = 1, …, n/2^J}, where J is the coarsest level of the transformation. Let γ = 2H-1, and c_γ = (2π)^-2Hsin(π H)Γ(2H + 1). According to <cit.>, the covariance matrix of the wavelet transformed process is approximately diagonal, and the detail coefficients {d_jk: k = 1, …, n/2^j} and the approximation coefficients {a_Jk: k = 1, …, n/2^J} have variancesS_d_j≈σ^2 c_γ 2^jγ (2-2^γ)/(2π)^γ (1-γ), S_a_J≈σ^2 c_γ 2^(J+1)γ/(2π)^γ (1-γ).Thus, γ, or equivalently the Hurstparameter H, can be estimated through the linear relationship log_2(S_d_j) = γ j + 𝒪(1). Figure <ref> demonstrates the decorrelating effect of the discrete wavelet transform on a time series with long range dependence. Compared to the original time series (top row), its wavelet transformation (bottom row) has insignificant autocorrelation coefficients, and much more evenly distributed spectral density across frequencies. §.§ The link between GMRFs and GFsIn most analyses of interest, estimation requires calculations withprecision matrices (Q = Σ^-1),not covariance matrices (Σ). For example, if β = (β_1, ⋯, β_V) ∼ N(μ, Q^-1), then the likelihood isL(θ|β) = √((2π)^-V |Q|) exp(-(β-μ)^⊤Q(β-μ)/2). 
Thus, GMRFs' sparse, banded precision matrix structure can significantly speed up model estimation.By definition of GMRFs, such structure is a result of the conditional independence assumption.Note that while it is entirely unrealistic to assume any two data pointsto be independent, it is more reasonable to assume any two data points outside of their respective neighborhoods to be independentconditional on all others. In this case, a sparse, banded precision matrix still has a dense inverse, i.e. a dense covariance matrix. Even though estimation can beperformed speedily, there remains a serious challenge. Most GMRFs' precision matrices do not have an established correspondence with a closed-form covariance function. Real life data, including fMRI, are observed discretely fromcontinuous processes; and extrapolating beyond the discrete grid of observations requires knowing the functional form of the covariance. For example, for an unobserved vector β^*, the conditional expectationE(β^*|β) = μ_β^* - Σ_β^*βQ_ββ(β-μ_β) is not available without knowing how to relate Q_ββ to Σ_β^*β.<cit.> solved this problem for a class of covariance functions, namely the stationary Matérncovariances. Their solution relies on a result by <cit.> that links such class to the solutions of a particular stochastic partial differential equation (SPDE). Specifically, the stationary solutions β(s) to the linear fractional SPDE(κ^2 - Δ)^ν + d/2/2β(s) = 𝒲(s), s∈ℝ^dhave Matérn covariance functions C(β(0), β(s)) = σ^2/2^ν - 1Γ(ν) (κs)^ν K_ν(κs)where K_ν(·) is the modified Bessel function of the second kind and order ν, Δ = ∑_i = 1^d ∂^2/∂ s_i^2 is the Laplacian operator, 𝒲 is Gaussian noise with unit variance, κ is the spatial scale parameter, α = ν + d/2 controls the smoothness, and σ^2 = Γ(ν)/Γ(ν + d/2)(4π)^d/2κ^2ν. To complete theirproposal, <cit.> formulated GMRF representations of the solutions to the SPDE in Equation <ref>. The precision matrices Q ofsuch GMRF representations are explicitly parameterized by the SPDE parameters, or equivalently by the stationary Matérn covariance parameters. We start by giving a finite dimensional representation of the solutions to the SPDE; in symbols β(s) = ∑_l = 1^L ψ_l(s)w_l for some deterministic basis functions {ψ_l} and weights w_l. In our applications, the bases are piecewise linear, constructed by partitioning the domain into a set of non-intersecting triangles (i.e. a triangular mesh). Specifically, given a triangle with vertices i, j and k located at s_i, s_j and s_k in ℝ^2, for any inner location s, ψ_l(s) is equal to the area of the triangle formed by s, s_j, s_k divided by the area of the original triangle. It follows that ψ_i takes a value of 1 at the i^th vertex and a value of 0 at all other vertices. Let C, G, K and Q_α be L × L matrices such that C_ij = ⟨ψ_iψ_j ⟩, G_ij = ⟨∇ψ_i∇ψ_j ⟩, K_ij = κ^2 C_ij + G_ij andQ_α = K,if α = 1 KC^-1K,if α = 2 KC^-1Q_α-2C^-1K,if α = 3, 4, ...Then, if w = {w_l}∼ N(0, Q^-1_α), the finite dimensional representations of the solutions to the SPDE in Equation <ref> are GFs with precisions Q_α. If C is replaced by the diagonal matrix C̃ where C̃_ii = ⟨ψ_i,1 ⟩, we obtain GMRF representations instead of GFs. Note that ν (and hence α) is usually fixed since it is typically not identifiable in applications. Thus, if s∈ℝ^2, fixing ν = 1, 2,... gives α = 2, 3,...§.§ Integrated Nested Laplace Approximation For Bayesian hierarchical models that involve latent Gaussian fields, closed-form posterior distributions are, in general, unavailable. 
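As an illustration of this construction — not code from the paper, which works on a 2D triangulated cortical mesh — the sketch below assembles the GMRF precision Q_α for α = 2 on a one-dimensional regular grid, using the standard piecewise-linear finite element matrices and the lumped (diagonal) matrix C̃. The grid size, spacing, and κ are arbitrary, and the lumped matrix is used inside K as well, a common choice in the Markov approximation.

```python
import numpy as np

n, h, kappa = 200, 0.05, 2.0   # number of nodes, grid spacing, spatial scale (arbitrary)

# Piecewise-linear FEM matrices on a 1D regular grid.
# Lumped mass matrix C~ (diagonal): <psi_i, 1> = h in the interior, h/2 at the ends.
c_tilde = np.full(n, h); c_tilde[[0, -1]] = h / 2
# Stiffness matrix G: <grad psi_i, grad psi_j> (tridiagonal).
G = (np.diag(np.full(n, 2.0 / h)) - np.diag(np.full(n - 1, 1.0 / h), 1)
     - np.diag(np.full(n - 1, 1.0 / h), -1))
G[0, 0] = G[-1, -1] = 1.0 / h

K = kappa**2 * np.diag(c_tilde) + G          # K = kappa^2 C~ + G
Q2 = K @ np.diag(1.0 / c_tilde) @ K          # Q_alpha for alpha = 2: K C~^{-1} K

print("bandwidth of Q2:", np.max(np.abs(np.nonzero(Q2)[0] - np.nonzero(Q2)[1])))  # 2

# Sampling from the GMRF N(0, Q2^{-1}) via the Cholesky factor of the precision.
Lc = np.linalg.cholesky(Q2)
sample = np.linalg.solve(Lc.T, np.random.default_rng(1).standard_normal(n))
print("sample standard deviation:", round(sample.std(), 3))

# The implied covariance is dense even though the precision is banded.
Sigma = np.linalg.inv(Q2)
print("fraction of non-negligible covariance entries:",
      round(np.mean(np.abs(Sigma) > 1e-8 * np.abs(Sigma).max()), 3))
```

The precision is banded (sparse) while its inverse, the covariance, is dense — which is exactly the computational advantage exploited by the GMRF representation. Even with such sparse precision matrices available, the posterior distributions of a latent Gaussian model are, as noted above, generally not available in closed form, which is where approximate inference comes in.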
Sampling-based Markov-chain Monte Carlo (MCMC) methods are used instead of seeking analytical solutions.While such methods are asymptotically exact,in reality, computational cost and time may prevent one from achieving asymptotic guarantees; MCMC samples thus remain just repeated approximations of posterior distributions. The issue is particularly marked when MCMC methods are applied to latent Gaussian models; despite their versatility,they can be extremely slow and the samples may not converge <cit.>. To address this,<cit.> introduced the Integrated Nested Laplace Approximation (INLA), a method that performs direct numerical approximation of posterior distributions instead of sampling from them. Given observed data y, the latent Gaussian fieldβ = {β_v, v = 1,…, V} and a hyperparameter vector θ, the joint posterior distribution of β and θ isπ(β, θ|y)∝π(θ)π(β|θ) π(y|β, θ)∝π(θ)|Q(θ)|^1/2exp[-1/2β^⊤Q(θ)β + log(π(y|β, θ))] . Here, the goal is to approximate the posterior marginalsπ(β_v|y)= ∫π(β_v,θ| y)dθ =∫π(β_v|θ, y)π(θ|y)dθ ∝∫π(β, θ|y)/π(β_-v|β_v, θ, y)π(β, θ|y)/π(β| θ, y)dθπ(θ_l|y)= ∫π(θ|y) dθ_-l∝∫π(β, θ|y)/π(β| θ, y)dθ_-lwhere β_-v = β∖{β_v} and θ_-l = θ∖{θ_l}. The computational cost of numerical integration increases significantly with the dimension of the hyperparameter vector θ;INLA can accommodate up to about 10 hyperparameters, but becomes unfeasible in higher dimension. For a given θ, the denominator densities π(β_-v|β_v, θ, y) and π(β| θ, y) are approximated by π̃(β_-v|β_v, θ, y) and π̃(β| θ, y), respectively, using Laplace approximation. The marginals π(β_v|y) and π(θ_l|y) are then approximated by numerical integration over θ; that is π(β_v|y)≈∫π(β, θ|y)/π̃(β_-v|β_v, θ, y)π(β, θ|y)/π̃(β| θ, y)dθ π(θ_l|y)≈∫π(β, θ|y)/π̃(β| θ, y)dθ_-l .The computational speed-up provided by INLA with respect to MCMC algorithms is on the scale of seconds/minutes compared to hours/days, with comparable approximation errors. For more details, see <cit.>. § MODELING APPROACHA standard model for an fMRIdata set is y_vt = ∑_k = 0^K x_ktβ_kv + ϵ_vtwhere v ∈{1,⋯, V} indexes location,i.e. vertices in cs-fMRI, t ∈{1, ⋯, T} indexes time, and k ∈{1, ⋯, K} indexes a task or stimulus. x_kt is the convolution of a so-called stimulus time course s_k(·) for task k with the canonical hemodynamic response function h(·); namelyx_kt = ∫_-∞^∞ h(u)s_k(t-u)du. s_k(·) takes value 1 when thetask is active, and 0 otherwise. The canonical hemodynamic response function h(·), visualized in Figure <ref>, characterizes the temporal change in oxygenated blood flow for regions of the brain that are affected by the task. This typically consists of a 2 seconds delay at the onset of the stimulus, then a dip below baseline followed by a gradual increase that peaks after 4 seconds, a slow decay to below baseline level, and a return to baseline level. The duration of each phase may vary depending on the task, and the total amount of response time is approximately 15 to 20 seconds <cit.>. We can rewrite the above model for all V vertices and T time points in vector form asy = ∑_k = 0^K X_kβ_k + ϵ, ϵ∼ N(0, Σ)where y is now a VT × 1 vectorcreated by stacking V time series, each of length T, X_k is a VT × V design matrix containing activation information for task k, and β_k is a V × 1 vector of activation amplitudes. As we shall see, the spatial dependence is modeled via non-stationary Matern priors on the task activation fields β_k. 
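To illustrate how a task regressor x_kt is built in practice, the sketch below convolves a boxcar stimulus time course with a double-gamma hemodynamic response function. The specific double-gamma parameterization used here (positive peak around 5-6 seconds, undershoot around 15-16 seconds) is one common choice, not necessarily the exact canonical HRF used in the paper, and the repetition time and block design are invented for the example.

```python
import numpy as np
from scipy.stats import gamma

TR, T = 0.72, 400                      # repetition time [s] and number of scans (made up)
t_hrf = np.arange(0, 30, TR)           # HRF support of about 30 seconds

# A common double-gamma HRF: positive response minus a delayed undershoot.
hrf = gamma.pdf(t_hrf, a=6) - (1.0 / 6.0) * gamma.pdf(t_hrf, a=16)
hrf /= hrf.max()

# Stimulus time course s_k(t): 1 while the task block is on, 0 otherwise.
t = np.arange(T) * TR
stim = (((t // 20) % 2) == 1).astype(float)   # alternating 20 s off / 20 s on blocks

# Task regressor x_kt: convolution of the stimulus with the HRF, truncated to T scans.
x = np.convolve(stim, hrf)[:T]

print("regressor length:", x.shape[0], " peak value:", round(x.max(), 2))
```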
The VT × VT error covariance matrix Σ is block diagonal, with V T × T blocks {Σ_v: v = 1, …, V}, each capturing the temporal dependence at a particular vertex.§.§ Modeling varying-range temporal dependenceTemporal dependence can be modeled separately at each vertex, as the covariance matrix Σ in Equation <ref> is block diagonal. To this end,the fMRI time series at a generic vertex v is modeled as a fractional Gaussian noise process, whose covariance Σ_v is parameterized by the Hurst parameter H_v (see Section <ref>). Afterdiscrete wavelet transformation of the time series at vertex v, Equation <ref> becomes y_v^(w) = ∑_k = 0^K x^(w)_kβ_kv + ϵ_v^(w), ϵ_v^(w)∼ N(0, Σ_v^(w))where x_k^(w), y_v^(w) and ϵ_v^(w) are the discrete wavelet transforms of x_kt, y_vt and ϵ_vt,t = 1, …, T.Because of the approximately decorrelating property of the discrete wavelet transformation, the covariance matrix Σ_v^(w) is, to a good approximation, diagonal. As described in Section <ref>, the diagonal entries depend on the scale of wavelet transform, and on the range of temporal dependence at vertex v.With the goal of accommodating different ranges of dependence across brain regions of interest, while keeping the model parsimonious, we devise a data-driven scheme thatgroups the rangesinto just n_H (<< V) distinct levels, thus leaving us with only n_H Hurst parameters to estimate. We proceed as follows:* At each vertex v, we obtain preliminary estimates of coefficients {β̂_kv:k = 1, …, K} and residuals {ϵ̂_vt: t = 1, …, T}using linear regression;* At each vertex v, we take the discrete wavelet transform of the residuals {ϵ̂_vt: t = 1, …, T}, producing detail coefficients {d_jk: j = 1, …, J, k = 1, …, n/2^j}and approximation coefficients {a_Jk: k = 1, …, n/2^J}, where J is the coarsest scale of transformation;* At each vertex v, we estimate the variances {S_d_j: j = 1, …, J} in Equation <ref> computing the variances of {d_jk: j = 1, …, n/2^j};* At each vertex v, we estimate γ via the linear relationship log_2(S_d_j) = γ j + 𝒪(1)and produce a preliminary estimate of the Hurst parameter asĤ_v = (γ̂ + 1)/2;* For each region of interest r = 1, …, R, we obtain the median of such such estimates across vertices in the region,M_r^(H) = med{ Ĥ_v: v ∈ r};*We group the R regions into n_H clusters based on their median estimates ,producing cluster memberships C_r ∈{1, …, n_H} for each region r. Since no two regions share a vertex, cluster memberships C_v ∈{1, …, n_H} can also be attributed to each individual vertexv ∈{1, …, V} based on the membership of the regions it belongs to;* At each vertex v, we assume the time series to be distributed as a fractional Gaussian processparameterized by the cluster-specific Hurst parameter corresponding to the vertex membership.At the end of this procedure, any twovertices v andv' such that C_v = C_v'will have the same Hurst parameter, and thus their Σ_v^(w) and Σ_v'^(w) will have the same diagonal entries expressed in Equation <ref>. Note that we are not using theprocedure to estimate the parameters, we are just employing rough, preliminary estimates Ĥ_v to group regions and vertices, and thus reduce the number of parameters to be estimated. Note also that, in Step 3.1.2, the number of available coefficients to estimate S_d_j is n/2^j. As j increases, this number decreases by a factor of 2, causing estimation of S_d_j to be volatile at large scales. 
Hence, we recommend using only scales with at least 16 detail coefficients to generate the preliminary Ĥ_v's in Step 3.1.3.Finally, we implement the clustering in Step 3.1.6 using a simple K-mean algorithm with thenumber of clusters n_H fixed beforehand(more clusters correspond to more distinct dependence ranges, and thus to a less parsimonious model, with more parameters to be estimated). While more sophisticated approaches could be employed, in our experience a K-means with n_H ≤ 5 works sufficiently well (larger n_H values do not provide any added improvements in model fit). The choice to perform clustering on regions instead of directly on vertices is based on the assumption that vertices in the same region have similar temporal dependence. This is supported by <cit.>, in which the authors estimated Hurst parameters separately for each brain voxel, after removing spatial covariance from the data;even with spatial covariance removed, Hurst parameter estimatesstill mapped nicely intoobvious brain structures.Clusteringregions has the added advantage of reducing spurious temporal noise, which may arise naturally in the data, and/or from previous estimation steps in the algorithm. We also note that, while beyond the scope of the current article, size and interpretation of the regions being clustered can be flexibly altered by changing the choice of parcellation.Figure <ref> provides an illustration of thepreliminary estimates Ĥ_v which are represented as boxplots for each region of interestsin the Schaefer 100 parcellation <cit.>.Specifically, the illustration uses resting state time series (1,024 time points) for 6,000 vertices in the left hemisphere of an individual from the healthy young adult data set of the Human Connectome Project <cit.>.Of course, in task-related (as opposed to resting state) fMRI data, temporal dependence, and thus Hurst parameter estimates,may be influenced by the task. The horizontal red line in Figure <ref> marks H = 0.5, which corresponds to a white noise process. Although many regions have similarĤ_v values (e.g., regions 32–39), wedo see evidence of varying ranges of dependence (values between H = 0.5 and H = 1) even in this resting state data.The fact that for some regions almost all coarse estimatesare above H=0.5 (e.g., regions 3, 8, 12, 20, 43 and 47), together with the skew of the overall distribution (histogram on the right of Figure <ref>), strongly suggest the existence oflong-range dependence.§.§ Modeling non-stationary spatial dependence In this section, the task indices are suppressed for convenience, so that β_k = β and β_kv = β_v. The SPDE framework can be extended to accommodate non-stationary processes by introducing location-dependent SPDE parameters.Following <cit.>, a parameter τ is used to scale β in the SPDE in (<ref>), while keeping the variance of the Gaussian noise constant; that is (κ^2(s) - Δ)^ν + d/2/2τ(s)β(s) = 𝒲(s), s∈ℝ^dwhere s indicates the location in a space of generic dimension d. Note the distinction between this continuous coordinates notation, which is necessary for distance calculations, and the vertex indices v ∈{1, …, V}.The marginal variance becomes σ^2(s) = Γ(ν)/Γ(ν + d/2)(4π)^d/2κ^2ν(s)τ(s) .As noted in <cit.>, it is often more intuitive to reparameterize the SPDEusing the standard deviation σ and the range ρ = (8ν)^0.5/κ, where ρ is the distance at which 2 observations are approximately independent (correlation approximately 0.13) for all ν > 0.5. 
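Stepping back to the temporal model for a moment, the preliminary estimation and grouping in Steps 3.1.1-3.1.6 is compact enough to sketch in code. This is an illustration of the scheme rather than the implementation used in the paper: we use PyWavelets with a Haar wavelet and a plain K-means from scipy, and all names below are ours.

import numpy as np
import pywt
from scipy.cluster.vq import kmeans2

def hurst_preliminary(resid, min_coeffs=16):
    # detail-variance regression: log2 S_dj ~ gamma * j, with H = (gamma + 1) / 2
    # (Haar wavelet assumed here; the choice of wavelet family is ours)
    coeffs = pywt.wavedec(resid, 'haar')
    details = coeffs[-1:0:-1]                 # finest (j = 1) to coarsest
    js, logS = [], []
    for j, d in enumerate(details, start=1):
        if len(d) >= min_coeffs:              # keep scales with >= 16 coefficients
            js.append(j)
            logS.append(np.log2(np.var(d)))
    gamma_hat = np.polyfit(js, logS, 1)[0]
    return (gamma_hat + 1.0) / 2.0

def cluster_regions(H_by_region, n_H=3):
    med = np.array([np.median(h) for h in H_by_region])    # M_r^(H)
    _, labels = kmeans2(med.reshape(-1, 1), n_H, minit='++')
    return labels                                           # C_r in {0, ..., n_H - 1}

Returning to the non-stationary spatial prior: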
We model σ(s) andρ(s) as functions of the location and translate them back to τ(s) and κ(s). Specifically, we setlog(σ(s)) = log(σ_0) + θ_1 δ(s)log(ρ(s)) = logρ = log(ρ_0) + θ_2where σ_0 and ρ_0 are baseline standard deviation and range, and δ(s) is some chosen local variability score. Using the relationship ρ = (8ν)^0.5/κ and Equation <ref> we thus get logκ(s) = logκ = log (8ν)/2 - log(ρ_0) - θ_2 = logκ_0 - θ_2 logτ(s) = 1/2log(Γ(ν)/Γ(ν + d/2)(4π)^d/2) - log(σ_0) - ν(log (8ν)/2 - log(ρ_0))- θ_1log(δ(s)) + θ_2ν= logτ_0 - θ_1δ(s) + θ_2ν .Let T = diag(τ(s_v): v = 1, …, V), then the precision matrix in Equation <ref> becomes TQ_αT. <cit.> remarked that the link between SPDE and Matern parameters is no longer valid in the non-stationary adaptation. However, when κ(s) and τ(s) vary slowly over the domain, Equations <ref> provide a valid approximation to the local variances and correlation ranges. This requirement is achieved if the local variability score δ(s) is also slowly varying with s. For our applications, we calculate the local variability score as follows:*At each vertex v, we obtain initial estimates β̂_v using linear regression;* At each vertex v,we calculate δ_v = δ(s_v) as the standard deviation of the values β̂_i, i ∈ N_v,where N_v is a neighborhood of v. The size of the neighborhood N_v in Step 3.2.2 can be chosen such that δ(s) is smooth in s. For sufficiently dense data such as ours, nearest neighbors suffice. §.§ Tying it all together: joint posteriorGiven the wavelet transformed data y^(w), we can now write the joint posterior of marginal temporal variance σ, temporal Hurst parameters H_1, …, H_n_H (recall these are reduced in number with respect to the number of verticesand are not task dependent)and, for each task k = 1, …, K, activation fields β_k = {β_k1,…, β_kV}and spatial hyperparameters θ_k = {θ_k1, θ_k2}. Such joint posterior isπ(β_1,…, β_K, θ_1, …, θ_K, H_1,…, H_n_H, σ|y^(w)) ∝π(σ) [∏_k = 1^Kπ(θ_k)π(β_k|θ_k)] [∏_i = 1^n_Hπ(H_i)] [∏_v = 1^V π(y_v^(w)|β_1,…, β_K, H_1,…, H_n_H, σ)]∝π(σ) [∏_k = 1^Kπ(θ_k)|Q(θ_k)|^1/2exp(-1/2β_k^⊤Q(θ_k)β_k)] [∏_i = 1^n_Hπ(H_i)] ×[∏_v = 1^V |Σ_v^(w)|^-1/2exp(1/2(y_v^(w) - ∑_k = 0^K x^(w)_kβ_kv)^⊤Σ_v^(w)^-1(y_v^(w) - ∑_k = 0^K x^(w)_kβ_kv))]where π(σ) is Gamma(1,1); {π(H_i): i = 1, …, n_H} are Uniform(0,1); {π(θ_kj): k = 1, …, K;j = 1, 2} are Gaussian with mean 0 and precision 0.3;{π(β_k|θ_k):k = 1, …, K} are GMRFs with mean 0 and precision Q(θ) = TQ_αT, as described in Section <ref>.Based on this joint posterior distribution, marginal posteriors for different parameters are approximated using INLA, as described inSection <ref>. We shall call our approach non-stationary, varying-range Bayesian General Linear Model, hereinafter NSVR-BayesGLM.§ RESULTS §.§ SimulationsSince the main target of our proposed approach are cs-fMRI data, which provide a rendition of the cerebral cortex via a 2D surface mesh, it makes sense to consider 2D simulations. Here, to facilitate comparisons with prior literature, we build simulations on a traditional 2D brain slice. Specifically, we follow the set-up of <cit.>and compare results between our NSVR-BayesGLM and their BayesGLM approach. A 46 × 55 image is constructed from a brain mask provided by thesoftware. Activation is placed at four different sites,with varying signal strengths, spreads and degrees of smoothness.Signal strength is defined as the ratio of activation magnitude at the center of the site, M = |β_center|, to noise standard deviation, σ.Spread is defined as the radius r of the site. 
Smoothness is defined by how fast the signal decays from the center to the edge of the site, via the exponential functione^-λ d, where d is the distance to the center and λ the parameter that controls smoothness. Thus, for an activation site with signal strength M/σ, radius r and smoothness parameter λ, the activation magnitude at a location d distance away from the center isMe^-λ d if d < r and 0 otherwise.Then, at different sites, we add error time series with different dependence ranges. Specifically, we place fractional Gaussian noise processes with H = 0.8 in two active regions to represent long temporal dependence, H = 0.4 in the remain two active regions to represent short temporal dependence, and H = 0.5 outside of active regions to represent white noise. We assume no overlapping between activation sites. For convenience, we choose σ = 1 and, since the discrete wavelet transform requires T to be a power of two, T = 2^9 = 512. In reality, for any T, one can perform some “padding” to meet this requirement.A summary of the simulation design can be found in Table <ref> –the two tasks, which are depicted in Figure <ref>, alternate in a single-block design scheme. Both NSVR-BayesGLM and BayesGLM use SPDE priors to model spatial dependence of the activation fields,and both are implemented with thepackage using the parallelsolver <cit.>.The varying temporal dependence component of NSVR-BayesGLM is written and implemented in C as a custom add-on that operates within . While BayesGLM prewhitens all time-series with an AR(6) model and assumes stationary spatial priors, NSVR-BayesGLM allows varying ranges of temporal dependence in wavelet space and non-stationary spatial priors in one, unifying framework. Figure <ref>shows thetrue activation magnitudes(left) andestimates by NSVR-BayesGLM (middle) and BayesGLM (right), for all simulation settings. Different gradients of colors at activation sites portray different degrees of spatial smoothness. Whereas the non-stationary feature of NSVR-BayesGLM allows it to capture this varying spatial smoothness, BayesGLM's stationary priors assume the same smoothness level across all locations. This is evident in the latter's tendency to oversmooth activation magnitudes in the bottom and the left activation sites. BayesGLM also underestimates activation magnitudes, which is most evident at the centers of activation sites.This is likely a combined effect of prewhitening with AR(6) models, as discussed in Section <ref>, and of assuming stationarity.In addition to the activation magnitudes captured in Figure <ref>, each activationsite has a different temporal range of dependence.Here we use n_H = 3;NSVR-BayesGLM automatically groups regions with similar temporal dependence ranges – such as the right and bottom activation sites – and produces Hurst parameter estimates Ĥ = 0.373, 0.825, 0.486 for the true H = 0.4, 0.8, 0.5, respectively.Note that, if instead we use n_H = 5, NSVR-BayesGLM produces estimates Ĥ = 0.390, 0.396, 0.853, 0.871, 0.487 for the true H = 0.4, 0.4, 0.8, 0.8, 0.5, respectively.BayesGLM does not accommodate varying ranges of dependence and therefore does not produce estimates of such ranges. Figure <ref> showsthe true activation sites (left), and estimates by NSVR-BayesGLM (middle) and BayesGLM (right), for all simulation settings. 
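As a brief aside on the error component of this design, the fractional Gaussian noise series placed at the different sites can be simulated exactly from the fGn autocovariance. The following minimal, non-optimized sketch is ours and is not the simulation code behind the figures.

import numpy as np

def fgn(n, H, sigma=1.0, rng=None):
    # exact fGn simulation from the autocovariance
    # gamma(k) = sigma^2/2 (|k+1|^(2H) - 2|k|^(2H) + |k-1|^(2H));
    # O(n^3) Cholesky, fine for T = 512 (circulant embedding would be faster)
    rng = np.random.default_rng() if rng is None else rng
    k = np.arange(n)
    gam = 0.5 * sigma**2 * (np.abs(k + 1)**(2*H) - 2*np.abs(k)**(2*H) + np.abs(k - 1)**(2*H))
    cov = gam[np.abs(k[:, None] - k[None, :])]
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))
    return L @ rng.standard_normal(n)

eps_long  = fgn(512, H=0.8)   # error series inside the two long-memory sites
eps_short = fgn(512, H=0.4)   # inside the two short-memory sites
eps_white = fgn(512, H=0.5)   # elsewhere (H = 0.5 reduces to white noise)

Returning to the estimated activation sites: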
In both approaches, an estimated activation site is determined as the largest set D of contiguous locations such that the joint posterior probability P(β_v > 0 for allv ∈ D| y) > 1 - α, where α isa posterior probability threshold which we fixed at 0.05.This way of estimating activation sites is called the excursions set approach;it was introduced by <cit.>and is implemented in thepackage<cit.>. As detailed in <cit.>, since the joint posterior probability is over all vertices, there is no need to adjust for multiple comparisons – and the approach appears to be fairly insensitive to the choice of α.Based on the top plots of Figure <ref>, inTask 1, both NSVR-BayesGLM and BayesGLM produce false negatives around the edge of the top and right activationsites, likely an effect of low signal strength in those areas. Notably, BayesGLM is also particularly prone to false positives, even in the case of strong signals (Task 2, bottom plots).The stationarity assumption of BayesGLM acts like a smoother of activation magnitudes, dampening those that are larger and boosting those that are smaller. This has the effect of making false positives more likely around the edges of the bottom and left activationsites, while making false negatives less likely around the edges of the top and right activation sites.§.§ An application to visual working memory In this application, we use cs-fMRI data tocharacterize activation and temporal dependence in a visual working memory task.The data is part of a working memory study included in the healthy young adult data set of theHuman Connectome Project <cit.>. Preprocessing is conducted with the HCPminimal preprocessing pipeline <cit.>, and individual subjects' cortical areas are registered to a common template for the purpose of group analysis and intersubject comparisons <cit.>. In thestudy, subjects are presented with blocks of trials consisting of different images. Subjects indicate if an image is a 2-back repeat (i.e., occurred2 trials before) in a `2-back' condition, or if an image matches a cue stimulus in a `0-back' condition (as a control for working memory). In summary, four `2-back' blocks and four `0-back' blocks alternate randomly, for a total of eight blocks per run,each block consisting of 10 trials of 2.5 seconds each. For all subjects, a working memory score is also measured, using the NIH Toolbox List Sorting Working Memory Test <cit.>. For the purposes of our analysis,we select 10 subjects from the 100 unrelated subjects (no family relations) available from the HCP; namely, the 5 highest- and the 5 lowest-ranking in the List Sorting Working Memory Test. We analyze left and right hemispheres separately, using the approach described in Section <ref>. To reduce computational burden, we use the Connectome Workbench to subsamplevertices in each hemisphere from 32,492 to ≈ 3,000. Thus, for our modeling exerciseV ≈ 3,000 and T = 401. The BOLD response and design matrix are centered and scaled at each vertex for numerical stability. In addition, spurious variability caused by subject movements and scanner drift is removed from the BOLD signal, using 6 motion parameters estimates (translation and rotation parameters, each in 3 directions) and their temporal derivatives as regressors. We carry out the modellingon the `very-inflated' surface, and visualize results on the `inflated' surface (see Figure <ref>). 
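The nuisance-removal step just described amounts to residualizing the BOLD series on the motion covariates; a small sketch (array names are ours) of one way to do it:

import numpy as np

def remove_nuisance(Y, motion):
    # Y: T x V BOLD array; motion: T x 6 motion-parameter estimates.
    # Regress out the 6 parameters and their temporal derivatives (12 columns),
    # plus an intercept, and keep the residuals.
    dmotion = np.vstack([np.zeros((1, motion.shape[1])), np.diff(motion, axis=0)])
    X = np.column_stack([np.ones(len(Y)), motion, dmotion])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return Y - X @ beta
    # centering and scaling of the residualized series is done afterwards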
As in Section <ref>,NSVR-BayesGLM is implemented with thepackage , using the parallelsolver <cit.>, and with the varying temporal dependence component written in C as a custom add-on within . Manipulation of cortical surface fMRI data inis made possible through thepackage <cit.>. Using 8 parallel threads, computation takes on average 30 minutes per hemisphere per subject, on a Macbook Pro with M2 Max chip and 96GB memory. Figure <ref> shows the activation field estimates for the `0-back' task (left) and `2-back' task (right), taken asaverages across sessions and subjects belonging to the same group (high working memory score, top, and low working memory score, bottom). Because NSVR-BayesGLMallows non-stationarity, we can capture activation patterns with noticeably different degrees of smoothness acrossregions. In both tasks, activation is, as expected,most prominent in the striate and extrastriate visual cortex (BA V1, BA V2), followed by subregions of several functional divisions such as precentral ventral attention (BA 6, BA 44), salience/ventral attention (BA 44, BA 6), sommatosensory motor (BA 1, BA 2, BA 4p), prefrontal cortex (BA 45),intraparietal sulcus (IPS), and superior parietal lobule (SPL). This is consistent with the conclusion in a recent meta analysis of visual working memory task<cit.>. Notably, the IPS and SPL have been documented to play a crucial role in visuo-spatial attention and the analysis of spatial elements <cit.>. Functional regions that contain a larger, and/or more significantly activated areas during the `2-back' task include the precentral dorsal attention, salience/ventral attention, prefrontal cortex, inferior parietal lobule and precuneus. While the prefrontal cortex is key in regulating a largenumber of higher-order executive functions, including working memory, the precuneus is well-known for its role in visuo-spatial imagery and episodic memory retrieval <cit.>. Whencontrasting high-and low-scoring groups, the same regions seem to be activated in each task, but interestinglyactivation magnitudes are higher across the board in the low scoring group. This may suggest an increased effort in performing the tasks from subjectswho score lower in the working memory test. Figure <ref> shows Hurst parameter estimates for different brain regions, taken again as averages across subjects belonging to the same group. We note that using n_H = 5, all regions report Hurst estimates larger than 0.5, which strengthen the case for an approach that, like NSVR-BayesGLM, can accommodate long range dependences. Interestingly, subjects who score high in the working memory testhave noticeably higher Hurst estimates across almost all brain regions. This suggest an intriguing association between temporal dependence ranges and working memory performance – and one that our region-specific Hurst estimates, unlike the single score value of the test, allow us to articulate spatially.While this is only a preliminary result, it may point towards a promising, dependence range-based approach to the study of memory-related diseases. It also further strengthens our recommendation against prewhitening, which erases potentially valuable information contained in model residuals.§ DISCUSSIONIn this article we propose NSVR-BayesGLM, a first-of-its-kind approach that fully integrates the modelling of non-stationary spatial dependence and varying ranges of temporal dependence in cs-fMRI data. 
The former is implemented through an extension of the spatial SPDE prior approach in <cit.>, which allows for spatial non-stationarity driven by local spatial features of the data and results in a sparsely parameterized model. Different ranges of temporal dependence are also modeled in a data-driven fashion,combining the use of several fractional Gaussion processes of varying persistence and wavelettransformations. We demonstrate via simulations that NSVR-BayesGLM can better accommodate local activation featurescompared to a stationary counterpart, while retaining comparable power and improving false positive control. Notably, in addition to assuming stationarity, the competing method employs autoregressive models of order p = 6 to prewhiten the data, while NSVR-BayesGLMaccurately estimates and incorporates different ranges of temporal dependence at different locations. Finally, we apply NSVR-BayesGLM to a visual working memory cs-fMRI data set. The activation patterns are found to be consistent with existing literature. Moreover, NSVR-BayesGLM allows us to unveil different degrees of smoothness in the signals characterizing different regions, and an intriguing association between estimated temporal dependence profiles and working memory performance – something that may point towards further applications in the study of memory-related diseases.A limitation of the current proposal is that, while advancing the modeling of spatial and temporal dependence with respect to existing approaches, it still treats them as separate.Spatial dependence is embedded within the activation coefficients' prior hyper-parameters, while temporal dependence is imposed upon the error terms and is reflected directly in the likelihood. In reality, it is likely that the spatial correlations characterizing activation fields change with time, e.g., as task stimuli switch on and off.Ideally, one should explicitly incorporate interactions between spatial and temporal dependence – but pursuing such an extension would still be prohibitive in terms of computational burden.Indeed, computational cost remains a major challenge that influences modeling choices. As reported in Section <ref>, the current model estimation has a runtime of approximately 30 minutes per hemisphere, compared to BayesGLM's 45 seconds. The mainburden comes from inefficient memory allocation for large spatio-temporal models; takes up nearly 85GB of RAM during the estimation stage for a data set of ≈ 3000 locations/vertices and ≈ 500 time points. Note that, if we use prewhitening and do not explicitly model temporal dependence (as in the case of BayesGLM), the non-stationary spatial model is just as fast and memory-efficient.For the purpose of circumventing memory issue and reducing computational burden, we could consider an approach that combines INLA with the MCMC algorithm. This works by iterating between drawing MCMC samples from the posterior marginals of temporal dependence hyperparameters, and fitting only the non-stationary spatial component with INLA, conditioning on the MCMC samples. This “fusion” approach, also known as the Metropolis-Hastings with INLA, was proposed by <cit.>and shown to have a performance comparable to INLA via numerical examples and heuristic arguments, for some selected models. 
For our model, potential complications with parameter support boundaries and approximation errors of the conditional marginal likelihood might exist, and will need to be studied in future work.Complete code implementing the current version of NSVR-BayesGLM as well as data for reproducing our simulations are available at <https://github.com/hqd1/SPfmri>.§ ACKNOWLEDGMENTSWe utilized data from the Human Connectome Project, WU-Minn Consortium (Principal Investigators: David Van Essen and Kamil Ugurbil; 1U54MH091657) funded by the 16 NIH Institutes and Centers that support the NIH Blueprint for Neuroscience Research,and by the McDonnell Center for Systems Neuroscience at Washington University. We utilized substantial portions of thecode developed in <cit.>, we thank the authors for making such code available and for providing advice.The work of M.A. Cremona was partially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), by the Fonds de recherche du Québec Health (FRQS), and by the Faculty of Business Administration, Université Laval.The work of F. Chiaromonte was partially supported by the Huck Institute of the Life Sciences of the Pennsylvania State University. M.A. Cremona is also affiliated with CHU de Québec – Université Laval Research Center, Canada. F. Chiaromonte is also affiliated with the Sant'Anna School of Advanced Studies, Italy. agsm
http://arxiv.org/abs/2312.16346v1
{ "authors": [ "Huy Dang", "Marzia Cremona", "Nicole Lazar", "Francesca Chiaromonte" ], "categories": [ "stat.AP" ], "primary_category": "stat.AP", "published": "20231226223233", "title": "An efficient approach to characterize spatio-temporal dependence in cortical surface fMRI data" }
Cryptanalysis of a McEliece-type cryptosystem based on the correction of errors and erasures. The article was prepared within the framework of the Basic Research Program at HSE University. Kirill Yackushenoks, Higher School of Economics, Moscow, Russia, [email protected]. Fedor Ivanov, Higher School of Economics, Moscow, Russia, [email protected]. December 14, 2024. Krouk, Tavernier and Kabatiansky proposed new variants of the McEliece cryptosystem. In this letter it is shown that the variant based on the correction of errors and erasures is equivalent to the McEliece cryptosystem, but with worse public-key parameters. We also add an organic extension of the authors' idea, although one that has its own flaws. Code-based cryptography, cryptanalysis, McEliece cryptosystem, post-quantum cryptography, public-key cryptography. § INTRODUCTION McEliece is an excellent cryptosystem with many advantages, but it has one significant disadvantage: its public keys are thousands of times larger than those of other post-quantum cryptosystems, which prevents it from competing with them <cit.>. There are two main ways to try to remove this disadvantage:* Key reduction - the use of particular constructions from coding theory.* Changing the scheme itself, or more precisely, forcing the attacker to decode not within a sphere of fixed radius but over the whole set of syndromes.The authors follow the second route. They use a structured error vector: the final error vector consists of both errors and erasures, and a codeword is added to it as well. Such manipulations should make the cryptosystem resistant to attacks built on the Information Set Decoding (ISD for short) idea.<cit.> ISD-like attacks force the code length to be chosen so that the desired algorithmic security O(2^x) = O(2^n/20) is reached, where n is the length of the code. This estimate actually depends on the weight of the error as well, but it is enough for a rough bound: if we want security 2^90, then we should take n = 1800 and k ≈ n/2, which gives a 1800 by 900 matrix. The authors have an interesting idea, but they did not examine closely the attack cited in their own article. In this paper it will be shown that the equation He^T=s has one solution. The rest of the article is organized as follows: Section 2 recalls how the cryptosystem works, together with the attack that the authors themselves consider; Section 3 gives a more detailed breakdown and the cryptanalysis; Section 4 proposes an idea for improvement; Section 5 describes the implementation; Section 6 draws conclusions. § PRELIMINARY* G - k × n matrix, a generator matrix of a random linear (n, k)-code C with minimum distance d = d(C)* M, W - n × n nonsingular random matrices* D - n × n diagonal matrix with r(D) ones on its main diagonal, where r(D)<d* P_1, P_2 - n × n permutation matrices* U - n × k matrix of rank less than * G_pub = GM - the generator matrix of the public code* E_pub=(WD(UG + P_1) + P_2)M - the matrix which adds errors, erasures and a codeword, so that an attacker will have trouble decodingy = mG_pub+eE_pubThis equation describes encryption, where wt(e) = d/3. Now let us turn to decoding, i.e. decryption.
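Before doing so, the objects above are easy to instantiate over GF(2) for a toy example. The sketch below is ours and purely illustrative: it uses a random matrix G as a stand-in for the code, so the minimum distance d is only nominal, and the parameters are far too small to mean anything cryptographically.

import numpy as np

rng = np.random.default_rng(1)

def gf2_rank(A):
    A = (A % 2).astype(np.int64).copy()
    r, (rows, cols) = 0, A.shape
    for c in range(cols):
        piv = np.nonzero(A[r:, c])[0]
        if piv.size == 0:
            continue
        A[[r, r + piv[0]]] = A[[r + piv[0], r]]
        A[(A[:, c] == 1) & (np.arange(rows) != r)] ^= A[r]
        r += 1
        if r == rows:
            break
    return r

def random_invertible(n):
    while True:
        A = rng.integers(0, 2, (n, n))
        if gf2_rank(A) == n:
            return A

def random_permutation(n):
    P = np.eye(n, dtype=np.int64)
    rng.shuffle(P)            # shuffling rows of I gives a permutation matrix
    return P

# toy parameters; d is nominal only, since G is a random matrix here
n, k, d = 24, 12, 9
rD, t = 3, d // 3

G  = rng.integers(0, 2, (k, n))
M, W = random_invertible(n), random_invertible(n)
P1, P2 = random_permutation(n), random_permutation(n)
D  = np.diag([1] * rD + [0] * (n - rD))     # ones placed in the first r(D) positions
U  = rng.integers(0, 2, (n, 1)) @ rng.integers(0, 2, (1, k))   # rank at most 1

G_pub = (G @ M) % 2
E_pub = ((W @ D @ ((U @ G) % 2 + P1) + P2) @ M) % 2

m = rng.integers(0, 2, k)
e = np.zeros(n, dtype=np.int64)
e[rng.choice(n, t, replace=False)] = 1
y = (m @ G_pub + e @ E_pub) % 2             # the ciphertext the attacker observes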
We take y=mG_pub+eE_pubthenyM^-1=mG+e(WD(UG + P_1) + P_2)==(m+WDU)G+eWDP_1+eP_2Now, as described above, we need to use decoding to find the codeword of code with generating matrix (WDU+m)G. We haveyM^-1-(m+WDU)G=e(WDP_1+P_2)This matrix (WDP_1+P_2) is nonsingular therefore it has (WDP_1+P_2)^-1. We have vector e. A codeword was sent c=y-eE_pubFrom c we obtain m. Firstly, we know place for erasuresnow we just need to nullify them and decode without them. Since the error was introduced by weight d/3, from the construction of the matrix, it can only become smaller.(If it falls on erasures). So we can decode using the decoding algorithm for C code. We got the codeword sum of two codewords, now if we subtract it from what has been obtained, we get the error vector along with the erasures. Knowing the inverse for the matrix (WDP_1+P_2) that introduces errors and erasures, you can get the vector that the sender contributed.Now let's talk about the attack. In the paper, the authors consider such an attack: Eve evaluates an (n-k)× n matrix H_pub such thatG_pubH^T_pub = 0Evaluates a parity-check matrix for the code C_pub with generator matrix G_pub. It is easy to verify thatH^T_pub = M^-1H^TWhere H is some parity-check matrix for the code C. Then Eve evaluates the syndrome s = yH^T_pub and tries to solve the following equationmG_pubH^T_pub + eE_pubH^T_pub = eH^T_* = swhereH^T_*= E_pubH^T_pub = (WD(UG + P_1) + P_2)MM^-1H^T ==(WDP_1 + P_2)H^TThis equation can be considered as syndrome equation for the code C_* with the parity-check matrix H_* . Note that there is an obstacle for Eve in this way, namely, the code C_* is not equivalent to the code C as it is for McEliece system. The minimal distance of the code C_* is unknown and moreover very probably it is approximately the same as the distance of a random n, k-code what is twice less than the distance of the initial good code, like Goppa code. The authors write about this in their paper and also note that because of this attack there is no fundamental difference in persistence when using the M matrix, so they will use the permutation matrix.§ CRYPTANALYSIS 1)The first place to start is McEliece cryptanalysis on the ISD side of the attack. y=mG+e and wt(e)=d-1/2, but in the system the authors propose, there's a weight of error wt(e_*)=d/3. For the same code length as in McEliece, because of the fact that the error is lighter the ISD resistance of the cryptosystem proposed by the authors is lower. The implication then is that the code length should be longer. The authors point out that in this case, since the code C' with the check matrix E_pubH^T_pub is not equivalent to the code C, also d' = d/2 most likely. The only problem is that there may be several error vectors fitting this equation H_*e^T=s, let's try to see if there are really that many?At the end of the paper we propose the parameters n = 1023, k = 523, d = 101, t = r(D) = 33. After general evaluations we will check on these parameters. Estimation of the probability for MMT of finding e : wt(e) = t and eH^T = s:P = (k+l)/2p/2(k+l)/2p/2n-k-lt-p/ntDifficulty of finding wt(e)=d/3 versus d=(d-1)/2 via MMT, if the parameters p and l are not changed, then the first 2 multipliers in the numerator are reduced to go to the complexity, mostly the probability is taken to the degree -1, also multiplied by the complexity of one iteration, but with the same parameters it can be omitted. 
This is a calculation of how much easier the attack will become:P(McEliece)/P(This cryptosystem) = n-k-ld/3-pnd/2/n-k-ld/2-pnd/3With l = 10, p = 4 the probability will be higher by 2^16.48, it is directly proportional to the difficulty, it is already easier by that much.Let's start by estimating the number of words on average in related classes. wt(e)=t and less then ∑_i=0^tni/2^n-k.But what if you are still unlucky and all 2^206.5 words are in several classes and there are a lot of them in each class.∑_i=0^331023i/2^500≈ 2^-293.462)Parsing the number of solutions eH^T_new of a certain weight, let's start with t = d/3. To do this, we need to understand what's going on here: (WDP_1 + P_2)H^T, the first matrix is just d/3 columns of W mixed P_1, that is, after multiplying by H^T, it's a linear combination of d/6 columns with d/3 places. The weight of this: e(WDP_1 + P_2) the worst is 2d/3, but since W is random, thend/6 +3√(d/3)/2 + d/3the probability that all errors will be of this weight and less is greater than 99%, although there will still be overlaps, but more about them later. Let's substitute d=101 wt(e) = 59-this is the maximum error weight in the original code. This code unambiguously corrects all errors up to 51. That is, if there are at least 5 intersections of this error vector (vectors of erasures and permutations of errors), it will have an unambiguous error vector in the original code. If we forget about intersections (they improve the situation):* I) The matrix (WDP_1 + P_2) is reversible consequently all vectors are differentwhen multiplied by e * II) W is random consequently we can say that we have a normal distribution and e(WDP_1 + P_2) is the weight of this in half of the cases heavier than d/2, in therest lighter.From the second and the first, we have for weight d/3 of the initial error there will bend/3/2different syndromes. (Worst case scenario) so that there are multiple vectors in each class, then there are 2 vectors in each class, if there are 3 vectors in one class, then there are 1 vector in the other class - this is ideal, That is, if there is a class with n vectors that have the same syndrome and weight d/3, but then there is an n-1 class minimum in which it is the only one. But here we should consider that this is a specially degraded system, there are no intersections, we also consider that the vector e(WDP_1 + P_2) of weight greater than d/2, in some already adjacent class with the half that is lighter, but it is not necessarily true, there are also heavier vectors that are leaders of adjacent classes, that is, it does not have to fall into the half of the classes mentioned before.There may be a suggestion to use vectors not only of weight d/3, but also lighter. Then with each time light vectors that will be leaders of the adjacent class will be more than half of them consequently there will be more classes in which the total volume is not exponential. It turns out that the more vectors there are in one class, the more vectors there are in other classes, so the probability of getting into it is 1/(n-1), where n is the number of vectors of weight d/3. (It's not exactly written here) Probability without intersection of erasures and errors:n-d/6d/3/nd/3,for n=1023 and d=101, the probability without crossings would be 58%, the important thing here is that each crossing resets the weight by 2 at once! Hence, we can see that the previous estimate about halving errors of weights greater than d/2 and less than d/2 was high. 
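The numbers quoted above are easy to reproduce; the following short check, written by us with Python's math.comb, is only a numerical sanity check of the three estimates.

from math import comb, log2

n, k, d = 1023, 523, 101
l, p = 10, 4
t1, t2 = d // 3, (d - 1) // 2     # 33 and 50

# ratio of MMT success probabilities, weight d/3 versus weight (d-1)/2
ratio = comb(n - k - l, t1 - p) * comb(n, t2) / (comb(n - k - l, t2 - p) * comb(n, t1))
print(log2(ratio))                # cf. the 2^16.48 quoted above

# expected number of words of weight at most 33 per syndrome class
print(log2(sum(comb(n, i) for i in range(t1 + 1))) - (n - k))   # cf. the 2^-293.46 above

# probability that the 33 errors avoid the d/6 ~ 17 erasure positions
print(comb(n - 17, t1) / comb(n, t1))    # roughly the 58% quoted above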
All of the above leads to the fact that the parameters cannot be reduced because of this attack, so they will have to be made larger because of this:n-k-ld/3-pnd/2/n-k-ld/2-pnd/3Since we cannot tell Bob how to make errors without Eve finding out about it, so it comes out that on average with this attack of vectors e :eE_pubH_*^T=sthere will be 1-2 such vectors.It is necessary to run the algorithm ISD not until the first vector, but until the probability of finding a solution does not satisfy us.The authors choose the number of erasures and errors from the equation, a ∈ N:r(D) + 2t + a= dNow let's look at 2 codes: C: (H^T, d) and C' :((WDP_1 ++ P_2)H^T, d'), where first is the code parity-check matrix and the transpose matrix, and the second is the code minimum distance. It is also worth recalling that the matrix (WDP_1++P_2) - nonsingular. Two things follow from this: * if e_1 ≠ e_2 then e_1WDP_1 ≠ e_2WDP_1* if e ≠ 0 then eWDP_1 ≠ 0Consider the equation below at e ≠ 0:e(WDP_1+P_2)H^T = 0That means e(WDP_1+P_2) is codeword from the C code then wt(e(WDP_1+P_2))≥ d.wt(e(WDP_1+P_2)) = wt(eWDP_1+eP_2))==wt(eWDP_1)+wt(eP_2)=r(D)+wt(eP_2)≥ dwt(eP_2)≥ r(D) + 2t + a - r(D)wt(eP_2)≥ 2t + awt(e)≥ 2t + a This is the worst case, there is a situation where erasures and errors do not intersect, otherwise there will be even more d. That is, this is the lower bound. From what has been written above, it follows that d' ≥ 2t + a and a ≥ 1 so the code C' can decode wt(e) ≤ t clearly. It is worth noting again that in this cryptosystem t = d/3 and in McEliece t = d/2, ISD is less difficult, there are larger and heavier matrices to store in this cryptosystem.That's the second place to launch an attack from if the matrix M is changed to a permutation one(maybe even the first ):E_pub=((WDUG + WDP_1) + P_2)PIf you look at the first summand, you realise that it is a subcode, also that it can have at most 2^r(D) codewords. For the example from the article, it's 2^33.e_1E_pub+e_2E_pub=(e_1+e_2)(WDUG+WDP_1 + P_2))POur goal is to find 2 errors such that the code words involved in creating the error are the same, then (e_1+e_2)(WDP_1P + P_2P)=(e_1+e_2)WDP_1P + (e_1+e_2)P_2Por we need to find one vector:eWD=0 then eE_pub=eP_2P, P^*=P_2P, it should also be noted wt(eP_2P)=t.Task: restore P^*. Algorithm 1 will help with this. This algorithm is here to show schematically the process. It can be improved, but further there will be ISD attack and its complexity is much higher, so you can not bother much. 2n2^r(D)=2^r(D)+1+log_2(n) - This is how many times the while loop will run. For parameters are 2^44. For loop will run 2n times. We know that E_pub = (WD(UG + P_1) + P_2)P and P_2P nextE_pub=WD(UG + P_1)P + P_2P E_1=WD(UG + P_1)P E_2 = P_2PNow we need to develop an attack using the knowledge of E_2. The channel transmits y = mG_pub + eE_pub. Here's what we do yH_pub^T=(WDP_1 + P_2)H^T,whereH_pub^T = P^-1H^T.There are erasures and errors in the same matrix, but if you use the knowledge about E_2 then A_1 = E_1H_pub^T=WDP_1H^TA_2 = E_2H_pub^T=P_2H^T. yH_pub^T=e(A_1+A_2)=eA_1+eA_2= = [ e e ]×[ A_1; A_2 ] = yH_pub=s=(s_1||s_2)It is also worth noting that rank(A_1) = r(D). The matrix A_1 || A_2 can be reduced to the form shown in Figure 1(quasi-systematic view).Figure 2 shows a schematic view of the ISD for the matrix A_0.Now if you look at the matrix, then eA_0=s_2, for this matrix we need to run the ISD-algorithm. The input is A_0, s_2, t, the algorithm will search for t and less weight. 
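Algorithm 1 and the two figures referred to in this section are floats that are not reproduced in this text. The fragment below is ours and purely illustrative; it does not reconstruct the algorithm, it only verifies the identity that the search exploits: whenever eWD = 0, the public error matrix acts on e as P_2P alone, so eE_pub is just a permutation of e.

import numpy as np

rng = np.random.default_rng(2)
n, k, rD = 24, 12, 3

def perm(n):
    P = np.eye(n, dtype=np.int64)
    rng.shuffle(P)
    return P

G  = rng.integers(0, 2, (k, n))
U  = rng.integers(0, 2, (n, 1)) @ rng.integers(0, 2, (1, k))
W  = rng.integers(0, 2, (n, n))      # invertibility of W is irrelevant for this check
D  = np.diag([1] * rD + [0] * (n - rD))
P1, P2, P = perm(n), perm(n), perm(n)
E_pub = ((W @ D @ ((U @ G) % 2 + P1) + P2) @ P) % 2

def gf2_left_kernel(B):
    # basis of {e : e B = 0 (mod 2)}: row-reduce [B | I]; rows whose B-part
    # becomes zero carry a kernel vector in the I-part
    m = B.shape[0]
    A = np.concatenate([B % 2, np.eye(m, dtype=np.int64)], axis=1)
    r = 0
    for c in range(B.shape[1]):
        piv = np.nonzero(A[r:, c])[0]
        if piv.size == 0:
            continue
        A[[r, r + piv[0]]] = A[[r + piv[0], r]]
        A[(A[:, c] == 1) & (np.arange(m) != r)] ^= A[r]
        r += 1
        if r == m:
            break
    zero = ~A[:, :B.shape[1]].any(axis=1)
    return A[zero, B.shape[1]:]

ker = gf2_left_kernel((W @ D) % 2)   # dimension n - r(D)
for e in ker[:3]:
    lhs = (e @ E_pub) % 2
    assert np.array_equal(lhs, (e @ P2 @ P) % 2)
    assert lhs.sum() == e.sum()      # weight preserved: P_2 P merely permutes e
print("identity e W D = 0  =>  e E_pub = e P_2 P verified")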
The success probability compared to the original McEliece is P(this cryptosystem)/P(McEliece) = [ \binom{n-k-l-r(D)}{d/3-p} \binom{(k+l+r(D))/2}{p/2}^2 \binom{n}{d/2} ] / [ \binom{n-k-l}{d/2-p} \binom{(k+l)/2}{p/2}^2 \binom{n}{d/3} ]. If we substitute the numbers for McEliece [1024, 524] with t = 50 and compare with [1023, 490], t = r(D) = 33 for this cryptosystem, the ratio is approximately 2^18. Thus with probability greater than 60% we can find the vector e; if not, we repeat, and the probability rises to 84%, and so on. The vector e is unique, since otherwise a legitimate user could not decrypt the message either: this matrix construction has separated the erasures from the errors. One can of course check a candidate; it must satisfy eA_1 + eA_2 = s_1. In this way the error vector of weight t is found. § REMEDIAL IDEAS Increasing r(D) helps to solve both of these problems. But in the usual scheme t and r(D) are bound by 2t + r(D) < d. We therefore propose to build the generator matrix as G = G_0 || G_1, where G_0 is the k × n matrix of a code with an efficient decoding algorithm that can correct wt(e) = (d-1)/2 errors, and G_1 is the k × v matrix of another code. The idea is to place the erasures on G_1, with r(D) = v.* Weakness: the rate changes for the worse, R = k/(n+v) instead of k/n.* Positive: in the usual code scheme the number of correctable errors grows with n roughly as t ≈ 0.1n/2 = 0.05n, while with the additional code used for the erasures one gets t = 0.5n. Here, by errors we mean how many random errors are introduced into the code through e by the erasures.We also suggest another, more graceful way to protect against the second attack: E_pub = (WUG + WDP_1 + P_2)P. Then the second attack no longer applies, provided that WDP_1 + P_2 has an inverse matrix and E_pub is nonsingular.§ IMPLEMENTATION We ran a trial implementation of the modernised cryptosystem and of the attack algorithm of Section III using the Python language. It should be noted that decryption does not use decoding of the code C, and therefore we can treat C as a random code rather than a well-designed code with efficient decoding such as a Goppa or BCH code. Source code: https://github.com/Persequentes/search_matrix_permutation_on_E_pub § CONCLUSION We would like to point out that the authors have an interesting idea, but it has been shown that the prototype cryptosystem based on the correction of errors and erasures is equivalent to the McEliece cryptosystem, only with a lower complexity factor in the exponent. There is also one idea that helps to protect against the second attack, but with this idea it is harder to protect against the first: one needs to choose the right x. If r(D) is large enough, one can try to switch to probabilistic decoding and increase the error vector.§ ACKNOWLEDGEMENT The article was prepared within the framework of the Basic Research Program at HSE University in 2023.
http://arxiv.org/abs/2312.15912v1
{ "authors": [ "Kirill Yackushenoks", "Fedor Ivanov" ], "categories": [ "cs.CR" ], "primary_category": "cs.CR", "published": "20231226071001", "title": "Cryptoanalysis McEliece-type cryptosystem based on correction of errors and erasures" }
University of Wisconsin, Madison [email protected] prove a Łojasiewicz-Simon inequality| E(u) - 4π n | ≤ C (u) ^αfor maps u ∈ W^2,2( S^2, S^2 ).The inequality holds with α = 1 in general and with α > 1unless u is nearly constant on an open set. We obtain polynomial convergence of weak solutions of harmonic map flow u(t) : S^2 → S^2 as t →∞ on compact domains away from the singular set, assuming that the body map is nonconstant.The proof uses Topping's repulsion estimates together with polynomial lower bounds on the energy density coming from a bubble-tree induction argument.Łojasiewicz inequalities for maps of the 2-sphere Alex Waldron January 14, 2024 ================================================= empty§ INTRODUCTION§.§ Background and main resultLet (M, g) and (N,h) be compact Riemannian manifolds. Recall that a sufficiently regular map v : M → N is called harmonic if its tension field(v) = _g ∇ dvvanishes identically. Equivalently, v is a critical point of the Dirichlet energy functionalE(u) = 1/2∫_M |du|^2 dV.We are chiefly concerned with the fundamental case M = N = S^2, although some of our results are stated more generally. A classical result due to Lemaire and Wood <cit.> states that any harmonic map v : S^2 → S^2 must either be holomorphic or antiholomorphic; henceE(v) = 4 π | (v) |and v attains the minimum allowable energy within its homotopy class. The proof relies on a famous trick:since the Hopf differential (a certain quadratic expression in the components of du) of a harmonic map is holomorphic, it necessarily vanishes, forcing the map v to be holomorphic or antiholomorphic. It is natural to ask whether a quantitative version of this result holds true; specifically, whether the energy of a map with small tension must be close to a predetermined value. In his thesis <cit.>, Topping obtained a remarkable first result in this direction: any map u : S^2 → S^2 withE(u) < 4 π |(u)| + δ_0,must obey the estimate E(u) ≤4 π |(u) | + C (u) ^2. For initial data satisfying (<ref>), Topping used (<ref>) to prove that Struwe's weak solution of the harmonic map flow converges exponentially away from a finite set of singular points.Without the assumption (<ref>), one can no longer expecta map with low tension to be nearly holomorphic; instead, one might expect such a map to be close to a sum of holomorphic and anti-holomorphic maps. This intuition was borne out by the work of Ding-Tian <cit.>, Qing <cit.>, and Wang <cit.>, establishing that a sequence of maps u_iwith (u_i) _L^2→ 0 must subconverge to a bubble tree of harmonic maps. The reader may see Theorem <ref> of the current paper for a detailed statement of this theory, or recent work of Jendrej, Lawrie, and Schlag <cit.> for more precise results in the parabolic setting.In fact, the energies E(u_i) converge to the sum of the energies of the maps in the tree, which by (<ref>), is an integer multiple of 4 π:E(u_i) → 4 π n. Another remarkable result of Topping <cit.> generalizes the estimate (<ref>) to this setting: under certain assumptions on the nature of the bubble tree (described below),we have| E(u_i) - 4 π n | ≤ C' (u_i)^2. Topping was able to use (<ref>), which he calls a “quantization estimate,” to again obtain exponential convergence of harmonic map flow on compact domains away from the singular set. 
The mechanism of Topping's second theorem is more complex than the first, incorporating a “repulsion” effect between the body map, assumed to be holomorphic, and any antiholomorphic bubbles—in other words, a quantification of the Hopf-differential trick. To ensure a sufficiently strong repulsion effect, Topping makes the following assumptions: (1) holomorphic and antiholomorphic bubbles do not occur at the same points, and (2) the energy density of the body map does not vanish at any of the antiholomorphic bubble points—see <cit.> for precise statements. As shown later <cit.>, these hypotheses are necessary for an estimate of the form (<ref>) to hold. However, ifone settles for a weaker estimate in place of (<ref>), namely, a Łojasiewicz(-Simon) inequality| E(u) - 4 π n | ≤ C (u) ^α,then it turns out that Topping's assumptions can be removed.We shall prove (<ref>) with α = 1 in full generality, and with α > 1 assuming only that the body map is not identically constant. This allows us to conclude that for a solution of harmonic map flow u(t) : S^2 → S^2, if the weak limit along some sequence of times tending to infinity is nonconstant, then u(t) converges polynomially on compact domains away from the singular set; in particular, weak subsequential limits are unique in this case. §.§ Related workOur main theorem can be compared with Łojasiewicz inequalities and uniqueness-of-subsequential-limit results that have appeared in similar contexts.Assuming that u is C^2,α-close to a fixed harmonic map, (<ref>) follows directly from fundamental work of Leon Simon <cit.>.Historically, extensions of Simon's theorem to singular contexts have been exceedingly rare, Topping's theorems <cit.> being the best and perhaps earliest examples. Another instance is the work of Daskalopoulos and Wentworth <cit.> (generalized by Sibley <cit.> and Sibley-Wentworth <cit.>) establishing convergence modulo gauge and uniqueness of the bubbling set for Hermitian Yang-Mills flow on holomorphic bundles over compact Kähler manifolds.In his thesis <cit.>, the author also gave an exponential convergence result for 4D Yang-Mills flowanalogous to Topping's first convergence theorem, although in that context the bubbling set turned out to be empty.More recently, Malchiodi, Rupflin, and Sharp <cit.> obtained a Łojasiewicz inequality for the H-functional that applies near “simple bubble trees,” consisting of a single bubble on a constant body map (assuming that the domain is a surface of genus at least one). Rupflin <cit.> extended this result to the Dirichlet energy of harmonic maps and succeeded to work with a general real-analytic target. Using similar estimates, Rupflin <cit.> has also obtained Łojasiewicz inequalities for low-energy maps into 3-manifolds with generic metrics. These results appear to be almost completely disjoint from those of the present paper. (For recent progress on the related question of closeness of u to a genuine holomorphic map under the hypothesis (<ref>), see Bernand-Mantel, Muratov, and Simon, <cit.>, Topping <cit.>, and Rupflin <cit.>.) A related trend initiated by Colding and Minicozzi <cit.> aims to develop Łojasiewicz-type estimates suitable for proving uniqueness of blowup limits at “generic” singularities. These estimates are designed for analyzing type-I singularities of the mean curvature flow, and have been remarkably successful on that front <cit.>. 
Lotay, Schulze, and Szekelyhidi <cit.> have also obtained a strong uniqueness result for certain singularities of Lagrangian mean curvature flow, which are of type II.As we shall demonstrate in future work, the inequality (<ref>) can also be used to analyze (type-II) finite-time singularities of 2D harmonic map flow. Finally, we mention the recent Łojasiewicz inequality of Deruelle-Ozuch <cit.> for a version of Perelman's λ-functional near a Ricci-flat ALE space, which has applications to infinite-time convergence of Ricci-DeTurck flow <cit.>. §.§ Detailed statements The simplest version of our main result is as follows.Suppose that u ∈ W^2,2( S^2, S^2) is a map with E(u) ≤ 4 π k, for k ∈.(a)There exists an integer n ∈{ 0, …, k} such that| E(u) - 4 π n | ≤ C_<ref>a(u),where C_<ref>a depends only on k.(b) There exists L ∈, depending only on k, as follows. Let 1 < α < 2L + 2/2L + 1 and 0 < κ < _0, depending on N. Suppose that there exist open sets Γ⋐Γ̂⊂ S^2 such that E(u, Γ̂) < _0 and E(u, Γ) ≥κ.We then have| E(u) - 4 π n | ≤ C_<ref>b(u) ^α.Here C_<ref>b depends on k, κ, α, and the geometry of Γ and Γ̂. We also have the following more quantitative version.Fix ℓ∈{0, …, k}, β > 0, and κ > 0. Let 0 < λ_i ≤λ≤1/√(ℓ) and x_i ∈ S^2, for i = 1, …, ℓ, with B_2λ_i(x_i) ∩ B_2 λ_j (x_j) = ∅ for i ≠ j, and letΩ = S^2 ∖∪_i B_λ_i(x_i),U_i = U_λ_i^2λ_i(x_i) , Û_i = U_λ_i/2^4λ_i(x_i). (a) Suppose that u ∈ W^2,2(S^2, S^2) satisfies E(u) ≤ 4π k, as well asmax_i E ( u, Û_i ) ∧ E_(u, Ω) < _0andκ E_(u, U_i ) ≤ E_( u, U_i )for i = 1, …, ℓ.There exists n ∈{0, …, k} such that| E(u) - 4 π n | ≤ C_<ref>a( (u) ^2 + λ^1 - β(u) ).Here, C_<ref>a depends on k, κ, and β. (b) Let 1 < α < 2L + 2/2L + 1, where L ∈ depends on k. In place of (<ref>), assumeκ≤min_i E_( u, U_i ).We then have| E(u) - 4 π n | ≤ C_<ref>b(u) ^α.Here C_<ref>b depends on k, κ, and α.Similar results hold after reversing the roles of E_ and E_. Theorems <ref> and <ref> will both follow by contradiction from Theorem <ref> below, which states the same estimates in the context of sequences of maps with tension fields tending to zero in L^2 (so-called “almost-harmonic sequences”).We can now state our harmonic-map-flow results. There exists δ_0 > 0, depending on k, κ, α, and minλ_i, as follows. Let u_0 : S^2 → S^2 be a map as in Theorem <ref>b, satisfying (<ref>) and (<ref>), and suppose further that( E(u_0) , 4π) ≤δ < δ_0,where n ≤ k. Let u(t) be the Struwe solution of harmonic map flow with u(0) = u_0, and let 0 < T ≤∞ be the maximal time such that u(t) satisfies( E(u(t) ) , 4π) < δfor 0 < t < T. (a) For 0 ≤ t < T, we have( E(u(t)) , 4π) ≤( δ^α - 2/α + 2 - α/αmin t, T - t )^α/α - 2, ∫_t^T-t(u(s)) _L^2(S^2)ds ≤α/α - 1( δ^α - 2/α + 2 - α/αt )^α - 1/α - 2,andu(t) - u_0 _L^2(S^2)≤α/α - 1δ^α - 1/α.(b) Suppose T = ∞. Then there exists a nonconstant holomorphic map u_∞: S^2 → S^2 such thatu(t) - u_∞_L^2(S^2)≤α/α - 1( δ^α - 2/α + 2 - α/αt )^α - 1/α - 2Moreover, there exists a finite set of points z_j ∈ S^2 such that given any domain Γ⋐ S^2 ∖{z_j} and m ∈, there exists D > 0 such thatu(t) - u_∞_C^m (Γ) < D t^α - 1/α - 2for t sufficiently large.Let t_i be any sequence of times tending to infinity. Given a Struwe solution u(t) : S^2 → S^2 of harmonic map flow with E(u(0)) ≤ 4 π k, pass to any subsequence such that u(t_i) → u_∞, where u_∞ is harmonic. If u_∞ is nonconstant, then we have u(t) → u_∞ polynomially in L^2(S^2), in the sense of (<ref>), and in C^∞ on compact domains away from the bubbling set, in the sense of (<ref>). 
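Remark (not part of the statements above). The exponents in part (a) can be traced to a standard differential-inequality computation, sketched here under the simplifying assumptions that E(u(t)) > 4π n on [0, T) and that the constant in the Łojasiewicz inequality is normalized to 1. Write f(t) = E(u(t)) - 4π n, so that along the flow
f'(t) = - ‖τ(u(t))‖_{L^2}^2 ≤ - f(t)^{2/α},
by the Łojasiewicz inequality f ≤ ‖τ‖^α. Hence
d/dt [ f(t)^{(α-2)/α} ] = ((α-2)/α) f^{-2/α} f' ≥ (2-α)/α,
and integrating from 0 to t and using f(0) ≤ δ gives
f(t) ≤ ( δ^{(α-2)/α} + ((2-α)/α) t )^{α/(α-2)}.
Similarly,
∫_t^T ‖τ(u(s))‖_{L^2} ds ≤ ∫_t^T (-f'(s)) f(s)^{-1/α} ds ≤ (α/(α-1)) f(t)^{(α-1)/α},
which, combined with ∂_t u = τ(u), is the source of the L^2 distance bounds in the statements above.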
§.§ Idea of proofWe follow a stategy of decomposing an almost-harmonic map into an alternating sum of almost-holomorphic and almost-antiholomorphic maps. The basic estimate of Topping's first theorem can be applied on each map, modulo a boundary term; the goal is to make sure that the boundary terms are taken on annuli where the holomorphic and antiholomorphic energies are comparable and bounded below by a power of the radius. A key observation is that in this case, the boundary term can be controlled using Topping's repulsion estimate, giving precisely a Łojasiewicz inequality. The main difficulty in exploiting this observation is to obtain the required lower (and upper) bounds on the holomorphic and antiholomorphic energiesin the neck regions. These ultimately follow from a basic three-annulus estimate and its generalization to a “multi-annulus estimate,” needed to pass the lower bounds across ghost bubbles;however, a tricky inductive argument over the bubble tree is required to put the estimates together.§.§ AcknowledgementThe author was partially suppored by NSF DMS-2004661 during the preparation of this article. § PRELIMINARIESIn <ref>, we recall the Kähler formalism for harmonic maps. In <ref>, we recall the explicit formulae for the quantities defined in <ref> in the case Σ = N = S^2, viewed as the unit sphere inside ^3.For the purposes of understanding our main results, the reader may feel free to skip <ref> and refer only to the formulae in <ref>.§.§ Harmonic maps to Kähler manifoldsLet Σ be a Riemannian surface and N a compact Kähler manifold of complex dimension n. The complexified tangent bundles decompose according to type:TΣ^ = T^1,0Σ⊕ T^0,1Σ,TN^= T^1,0 N ⊕ T^0,1 N.Given a local holomorphic coordinate z on Σ, we have the local coordinate frames/ z = 1/2( / x - i / y), /z̅ = 1/2( / x + i / y)for T^1,0Σ and T^0,1Σ, respectively, as well as the dual framesdz = dx + i dy,dz̅ = dx - idyfor T^*^1,0Σ and T^*^0,1Σ, respectively. Given local holomorphic coordinates w^α, α = 1, …, n, on N, we have similar formulae for the frames / w^α, /w̅^α, dw^α, and d w̅^α.The metric tensor g on Σ may be extended complex-linearly to TN^. We obtain a hermitian metric by the formulav,w= g(v, w̅),which agrees with the real-valued metric on TN ⊂ TN^. If z is a local coordinate, we may letσ(z) =√(2 g ( / z, /z̅)).The metric tensor and Kähler form on Σ are given locally byg =1 /2σ^2 ( dz ⊗ d z̅ + d z̅⊗ dz ), ω_Σ = i/2σ^2 dz ∧ d z̅.Similarly, we may writeh_αβ̅ = h ( / w^α, /w̅^β),which is a Hermitian matrix. The Kähler form on N then has the local expressionω_N = i h_αβ̅ dw^α∧ dw̅^β.See e.g. <cit.>, for a few more details. Now, given a map u : Σ→ N, its differential du is naturally a section of ℰ = T^*M ⊗ u^* TN. The complexification ℰ^ = T^*M^⊗_( u^*TN )^ decomposes into four factors corresponding to the direct sum decompositions (<ref>). The components of du under this splitting can be written schematically asdu = ( [ u u; u u ]),where in local coordinates, we haveu = w_z^α dz ⊗/ w^α,u = w^α_z̅ dz̅⊗/ w^α.Here we have written w^α_z =w^α(u(z))/ z, etc. The Dirichlet energy density of u,e(u) = 1/2 |du|^2,decomposes under the above splitting ase(u) = 1/2( | u|^2 + |u|^2 + | u|^2 + | u |^2 )= | u|^2 + |u|^2=: e_(u) + e_(u). LettingE_(u) = ∫_Σ e_(u) dV,E_(u) = ∫_Σ e_(u) dV,the Dirichlet energy splits asE(u) = E_(u) + E_(u).Meanwhile, a brief calculation shows thatu^* ω_N = ( e_(u) - e_(u) ) ω_Σ.Since d ω_N = 0, the pullback is again closed and its integral is invariant under homotopy of u. 
We conclude thatE_(u) - E_(u) = ∫_Σ u^* ω_N =: κis a constant depending only on the homotopy class of u. Rearranging, we obtainE(u) = 4 πκ + 2 E_(u).Note that if N also has complex dimension one, then κ = 4 π(u).Recall that the tension field (u) is the negative L^2 gradient of the Dirichlet functional. We can decompose (u) as follows:(u) = τ(u) + τ(u)∈ u^* T^1,0N ⊕ u^* T^0,1N.In view of (<ref>), τ(u) is half the L^2 gradient of the functional E_(u), which can also be computed by integration-by-parts. This yields the formulaτ(u) = - _u^*u.Here, _u : Ω^0,0(u^* TN^1,0 ) →Ω^0,1(u^*TN^1,0 ) is the coupled -operator using the pullback of the Levi-Civita connection on N. Similarly, we haveτ(u) = - _u^*u.By the Kähler identities, we also haveτ(u) = - Λ_uu = Λ_uu.In local coordinates, we have the formulaτ(u)^α = σ^-2(z) ( w^α_ z z̅ + ^NΓ^α_βγw^β_z w^γ_z̅)and (u) = 2 (τ(u)). We have the Weitzenbock formula, for α∈Ω^0,1(u^*T^1,0N):α, ^* α = α, ∇^* ∇α + K_Σ |α|^2 + q_1( u, α) + q_2( u, α) ,whereq_1( u, α ) loc=σ^-4^N R_βγ̅δη̅( α_z̅^βα_z̅^γ w_z^δw_z^η)andq_2( u, α ) loc= - σ^-4^N R_βγ̅δη̅( α_z̅^βα_z̅^γ w_z̅ ^δw_z̅^η).By the Kato inequality, we haveα, ^* α≤ -2 |∇α|^2 + Δ | α|^2 + K_Σ |α|^2 + q_1( u, α) + q_2( u, α) ,In the case that N has nonnegative holomorphic bisectional curvature, the q_1 term is nonpositive for general α, and we obtainα, ^* α≤ -2 |∇α|^2 + Δ | α|^2 + K_Σ |α|^2 + C_2 | u|^2 |α|^2.Similar identities hold with u in place of u. We shall apply these inequalities below with α =u, in <ref>, and with α = f̅(z)u for a holomorphic function f(z), in <ref>. Last, we define the Hopf differentialΦ(u) = h(du ⊗ du)^2,0∈(Ω^1,0_Σ)^⊗ 2loc= 2 h_αβ̅ w^α_zw̅^β_ z dz ⊗ dz.Using holomorphic normal coordinates centered at p ∈Σ, we can computeΦ(u) (p) = 2 ( w^α_z z̅w̅^α_z + w^α_ z w̅^α_ z z̅) dz ⊗ dz ⊗ d z̅ = 2 ( τ(u),u +u, τ(u) ) dz ⊗ dz ⊗ d z̅ = 4 (u), dudz ⊗ dz ⊗ d z̅. §.§ The 2-sphereThe round 2-sphere S^2 ⊂^3 carries an integrable almost-complex structure I : TS^2 → TS^2 derived from the cross-product on ^3:I_u(v) = u × v,v ⊥ u.This agrees with the complex multiplication on tangent vectors coming from the identification S^2 ≅^1, given by the stereographic coordinate chartsz↦1/1 + |z|^2( 2 z, 2 z, 1 - |z|^2 ) w = z^-1 ↦1/1 + |w|^2( 2 w, - 2 w, |w|^2 - 1).A similar chart centered at any point in S^2 can be obtained by post-composing with an (3) rotation. We haveσ(z) = 2/1 + |z|^2andVol_S^2 = σ^2 Vol_^2.Letting z = x + i y, and viewing u : ≅^2 → S^2 → S^2 ⊂^3 as a vector-valued function on ^2, we havee_(u) = 1/4 σ^2 |u_x - u × u_y |^2 e_(u) = 1/4 σ^2 |u_x + u × u_y |^2(u) = 1/σ^2( Δ_^2 u + |∇_^2 u|^2 u )(u) _L^2(S^2)= σ(u) _L^2(^2).See Topping <cit.> for these formulae.§ PRE-ŁOJASIEWICZ ESTIMATES§.§ -regularity Given x ∈ S^2 and 0 < r < ∞, let D_r(x) denote the image of the disk of radius r in the stereographic coordinate chart centered at x, i.e.D_r(x) = B_2 arctan(r)(x).We shall also writeU^ρ_2_ρ_1(x) = D_ρ_2(x) ∖ D_ρ_1(x) .In contrast with <cit.>, by default, we will take norms with respect to the metric and volume form of S^2 rather than with respect to the Euclidean metric in the stereographic chart. For D_ρ with ρ≤ C, these are equivalent up to constants, so the distinction can almost always be ignored. There exists _0 > 0, depending on the geometry of N, as follows. Let σ > 1 and ρ≤ 1, and put D = D_ρ(x), D̂ = D_σρ(x). 
(a) Given u ∈ W^2,2(D̂, N) satisfying ρ^2(u) _L^2(D̂)≤ 1 andE(u, D̂) < _0,we haveρ^2du ^2_ W^1,2( D ) ≤ C_σ( E(u, D̂ ) + ρ^2 (u) _L^2(D̂)̂)^2 ).(b) Supposing that N is Kähler, we also haveρ^2u ^2_ W^1,2( D ) ≤ C_σ( E_(u, D̂ ) + ρ^2 (u) _L^2(D̂)^2 )andρ^2u ^2_ W^1,2( D ) ≤ C_σ( E_ (u, D̂ ) + ρ^2 (u) _L^2(D̂)^2 ).The same statements hold after replacing D and D̂ by annuli U_R^σ R(x) and U_σ^-1 R^σ^2R(x), respectively. The proof of (a) is a minor adaptation of the proof by Ding-Tian <cit.>, which we omit.To prove (b), we apply (<ref>) with α =u, to getu, τ(u) ≤ -2 |∇ u|^2 + Δ |u|^2 + ( K_Σ + C |∇ u|^2 ) | u|^2.The result follows by a similar argument.Suppose that N is Kähler. Let 4/3≤ q ≤ 2, K ≥ 1, 0 < ρ≤ 1, and x_0 ∈ S^2. Let U = U_ρ^2ρ(x_0) and Û = U_ρ/2^4ρ(x_0) ⊂ S^2. Suppose that ρ^2 (u) ≤ 1, E(u, Û) < _0, andE_ (u,Û) ∧ρ^2 (u)^2_L^2 ( Û)≤ K E_(u, U).Let V ⊂ U be any measurable set with|V| ≥( 1 - (2CK)^-2 ) |U|,with C sufficiently large. We then haveE_(u, U) ≤ C K ρ^2 - 4/q e__L^q/2(V).We may assume without loss of generality that ρ = 1, since the general statement follows by scaling invariance. Letp = q^*/2 = q/2(q - 1)∈ 1, 2 .By Lemma <ref> and the Sobolev inequality, we havee__L^2(U) =u ^2_L^4(U)≤ C ( E_(u, Û) + (u)^2_L^2 ( Û) ) ≤ CK E_ (u, U),where we have applied the assumption (<ref>). Then (<ref>) and Hölder's inequality givee__L^p(U)≤ C e__L^2(U)≤ C K E_ (u, U).We haveE_(u, U) = E_(u, V^c) + E_(u, V). Applying Hölder's inequality on the first term and the interpolation inequality on the second, we obtainE_(u, U)≤|V^c|^1/2 e__L^2(U)+e__L^p(V) ^1/2 e__L^q/2(V) ^1/2 .By (<ref>), we have|V^c|^1/2≤1/2CK .Inserting (<ref>-<ref>), we obtain E_(u, U) ≤1/2 E_(u, U) + ( C KE_(u, U) )^1/2 e__L^q/2(V) ^1/2.Rearranging and cancelling exponents, we have the desired estimate. Given K ≥ 1, there exist _2 > 0 (depending on K, N) and c_1 > 0 (depending on K) as follows. Suppose that u: Û→ N satisfiesE(u, Û ) < _2, ρ^2(u) ^2_L^2(Û )≤_2 E_ (u, U),andE_(u,Û ) ≤ K E_ (u, U). Thena c_1sup_ρ≤ r ≤ 2ρ∫_S^1_r e_dθ≤inf_ρ≤ r ≤ 2ρ∫_S^1_r e_dθandbArea{ x ∈ U |ρ^2 e_(u)(x) ≤ c_1 E_(u, U) }≤ (2CK)^-2Area(U).This will follow by applying a standard contradiction argument to the (1,0)-form u. We may assume ρ = 1 without loss of generality. We view U and Û as subsets of the ball B_4(0) ⊂ by stereographic projection.First let α = f(z) dz be a -valued holomorphic 1-form on U' = U^3ρ_2 ρ / 3, satistfyingα = 0, ∫_U |α|^2 = 1, and∫_U' |α|^2 ≤ K.By elementary complex analysis, α must obey estimates of the form2 c_1 sup_ρ≤ r ≤ 2ρ∫_S^1_r |α|^2 dθ≤inf_ρ≤ r ≤ 2ρ∫_S^1_r |α|^2 dθandArea{ x ∈ U | |α(x)|^2 ≤ 2 c_1}≤ (2CK)^-2.The same statements hold for a holomorphic 1-form valued in a flat holomorphic bundle over U'.We can now prove (<ref>) by contradiction. Assume that u_i : Û→ N is a sequence of nonconstant maps such thatE(u_i, Û )≤1/i, E_(u_i,Û ) ≤ K E_ (u_i, U),and(u_i) ^2_L^2(Û )≤K/i E_ (u_i, U),but for whichc_1sup_ρ≤ r ≤ 2ρ∫_S^1_r | u_i|^2 dθ≥inf_ρ≤ r ≤ 2ρ∫_S^1_r | u_i|^2 dθ.Let U ⋐ U' ⋐Û. By Lemma <ref>b and (<ref>), we haveu_i ^2_W^1,2(U')≤ C ( 1 + K/i) E_(u_i, Û) ≤ C E_(u_i, U).Letα_i =u_i/√(E_ (u, U)).We then have∫_U |α_i|^2 = 1 ∫_Û |α_i|^2 ≤ K ∫_Û | _u_iα_i |^2 = ∫ |(u_i)|^2/E_(u, U)≤K/i→ 0,and, by (<ref>),α_i _W^1,2(U')≤ C. We may pass to a weak limit in W^1,2(U'),α_i ⇀α,which again satisfies∫_U |α|^2 = 1,∫_U'|α|^2≤ K. By (<ref>-<ref>) and Lemma <ref>, we have d u_i → 0 strongly in W^1,2(U'). Passing to a subsequence, we may assume that the images u_i ( U' ) are contained in a fixed coordinate ball B ⊂ N. 
We then have_u_iα_i = α_i + ^N Γ(du_i, du_i) #α_i → 0,whereis the trivial -operator in B.Since du_i → 0 in W^1,2, and in particular in L^p for all p, while α_i is also bounded in L^p for all p, we have α_i → 0in L^2, hence α = 0. Moreover, since α_i converges weakly in W^1,2(U'), √(∫_S^1_r |α_i|^2 dθ) converges weakly in W^1,2(ρ, 2ρ) as a function of r, and in particular strongly in C^0(ρ, 2ρ), so (<ref>) is preserved. But together with (<ref>), this contradicts (<ref>), establishing (<ref>).The proof of (<ref>) is similar.§.§ Topping's estimates The following basic estimates are due to Topping <cit.>. Throughout this subsection and the next, we suppose Σ = N = S^2. For 1 ≤ q < 2, assuming E(u) ≤ 4 π k, we have√(e_(u) e_(u))_L^q(S^2)≤ C_q √(k) (u) _L^2(S^2) . Let 0 < ρ≤ R < ∞, and assume that D_ρ_i(x_i) ⊂ D_ρ(x_0) for each i. LetΩ' = D_ρ(x_0) ∖∪_i D̅_ρ_i(x_i), Ω̂' = D_2R(x_0) ∖∪_i D̅_ρ_i/2(x_i).If E_(u, D_R(x_0) ) < _1 = _1(β), then we haveE_(u, Ω' ) ≤ C_ℓ, β( ρ/R)^β( R^2 (u) _L^2 ( Ω̂' ) ^2 + E_( u, U^2R_R(x_0) )+ ∑_i = 1^ℓ(R/ρ_i)^-β E_ ( u , U^ρ_i _ρ_i/2 (x_i) ) ).After homothetically rescaling so that R = 1, the E_ terms are unchanged and the R^2 (u) ^2 term decreases; hence, we may assume without loss of generality that R = 1. Letting μ = 1 in Topping's notation, the proof of <cit.> on p. 486 givesE_(u, Ω' )≤ C ρ^2 (q - 1)/q u _L^2q/2 - q(Ω̂') ≤ C_qρ^2 (q - 1)/q( (u) _L^2 ( Ω̂' ) ^2 +u _L^q(U^2_1) + ∑_i = 1^ℓ1/ρ_i u _L^q(U^ρ_i _ρ_i/2 (x_i) ) ).The proof proceeds by applying Hölder's inequality on the remaining terms, as in <cit.>, and letting β = 4(q - 1)/q. Assume 0 < ρ_i ≤1/√(ℓ) for i = 1, …, ℓ. Write Ω = S^2 ∖∪_i D_ρ_i (x_i), Ω̂ = S^2 ∖∪_i D_ρ_i / 2(x_i), U_i = U^2 ρ_i_ρ_i(x_i), Û_i = U^4 ρ_i_ρ_i/2(x_i).Supposing that E_(u, Ω̂ ) < _0, we haveE_(u, Ω ) ≤ C_ℓ, β( (u) _L^2(Ω̂)^2 + ∑ρ_i^-β E_ ( u , U^ρ_i_ρ_i/2 (x_i) ) ).LetR = max{√(ℓ), 2 (4 C_ℓ,β)^1/β},where C_ℓ,β is the constant of the previous Proposition. Since ρ_i ≤1/√(ℓ), there must exist two antipodal points x_0, x̂_0 ∈ S^2 such thatD_1/ √(ℓ)(x_0) ∩ D_ρ_i(x_i) = ∅ = D_1/√(ℓ)(-x_0) ∩ D_ρ_i(x_i)for i = 1, …, ℓ.We apply the previous proposition twice, with ρ = 2 and R as above,to obtainE_(u, Ω∩ D_2(x_0) ) ≤1/4( R^2 (u) _L^2 (S^2) ^2 + E_( u, U^2R_R(x_0) ) + ∑_i = 1^k (R/ρ_i)^β E_ ( u , U^ρ_i _ρ_i/2 (x_i) ) ), E_(u, Ω∩ D_2(x̂_0) ) ≤1/4( R^2 (u) _L^2 (S^2) ^2 + E_( u, U^2R_R(x̂_0) )+ ∑_i = 1^ℓ(R/ρ_i)^β E_ ( u , U^ρ_i _ρ_i/2 (x_i) ) ).Since R^2 ≥ℓ, we have U^2R_R(x_0) ⊂ D_1/√(ℓ)(x̂_0), so U^2R_R(x_0) ⊂Ω∩ D_2(x̂_0); also U^2R_R(x̂_0) ⊂ D_1/k(x_0), so U^2R_R(x̂_0) ⊂Ω∩ D_2(x_0). ThereforeE_( u, U^2R_R(x̂_0) ) +E_( u, U^2R_R(x_0) ) ≤ E_(u, Ω).We may add the above two inequalities together and rearrange, to obtainE_(u, Ω ) ≤ R^2 (u) _L^2 (S^2) ^2 + ∑_i = 1^ℓ( R/ρ_i)^β E_ ( u , U^ρ_i _ρ_i/2 (x_i) ) .Finally, we absorb R into the constant to obtain the desired estimate.Let x_i ∈ S^2 and 0 < ρ_i ≤π/2, for i = 1, …, m, be such that D_2ρ_i(x_i) ∩ D_2ρ_j(x_j) = ∅ for i ≠ j. Write U_i = U^2 ρ_i_ρ_i(x_i), Û_i = U^4 ρ_i_ρ_i/2(x_i), andΛ = S^2 ∖∪ D_ρ_i(x_i), Λ̂ = S^2 ∖∪ B_ρ_i / 2(x_i).Let Ω, Ω̂, and U_i be as above. (a) Suppose that (u)≤ 1 and ∑_i = 1^ℓ E(u, U_i) + E_(u, Λ) ≤ < _0.Then( E(u, Λ̂), ) ≤ C .(b) Suppose further that ∪ D_2 ρ_i(x_i) ⊂ D_ρ_0 (x_0), and let Ω', Ω̂' be as above. for some ρ_0 ≤π/2, and letΛ' = B_ρ_0(x_0) ∖∪_i = 1^L B_ρ_i (x_i). Λ̂' = B_2ρ_0(x_0) ∖∪_i = 1^L B_ρ_i / 2(x_i). If (u)≤ρ_0^-1 and∑_i = 0^ℓ E(u, U_i) + E_(u, Λ') ≤ < _0,then( E(u, Λ̂'), ) ≤ C .§.§ Pre-Łojasiewicz estimates Fix L ∈, K≥ 1, k≥ 2, and1 < α < k/ k-1 ≤ 2. 
Let x_i ∈ S^2 and 0 < ρ_i ≤π / 2, for i = 1, …, L. Write U_i = U^2 ρ_i_ρ_i(x_i), Û_i = U^4 ρ_i_ρ_i/2(x_i),Ω = S^2 ∖∪_i B_ρ_i (x_i), Ω̂ = S^2 ∖∪_i B_ρ_i / 2(x_i). Given u: S^2 → N, suppose thatE_( u, Λ̂) < _0, E ( u, Û_i ) < _0, ρ_i (u) _L^2(Û_i)≤ 1, E_(u, Û_i) ≤ K E_(u, U_i)+ρ_i^2 (u) _L^2(Û_i)^2,andρ_i^k + E(u, Û_i) ≤ K E_(u, U_i)+ρ_i^2 (u) _L^2(Û_i)^2for i = 1, …, L. Thena∑_i=1^L ρ_i^-β E_( u, U_i ) + E_( u, Λ) ≤ C_<ref>a^α.Here C_<ref>a depends on K, L, k, n, and α.Supposing further thatE_ (u, U_i) ≤ K E_(u, Û_i) +ρ_i^2 (u) _L^2(Û_i)^2for each i, we haveb∑_i=1^L ρ_i^-β E ( u, U_i ) + ( E ( u, Λ), ) ≤ C_<ref>b^αandcmax_i ρ_i^k ≤ C_<ref>c(u) ^2 - β.Here C_<ref>b-c depend on K, L, k, n, and α. (a)Let U_i = U^2 ρ_i_ρ_i. From Proposition <ref>, we haveC E_0 ^2≥( ∫ (e_ e_ )^q/2dV )^2/q≥( inf_U_i e_)e__L^q/2(U_i).First, note from (<ref>) thatE_(u, U_i)≤ ( sup_U_i e_ )^1 - q/2∫_U_i e_^q/2dV≤( 4 K/ρ_i^2)^1- q/2 E_(u,U_i)^1 - q/2∫_U_i e_^q/2dV.Rearranging, we obtaine__L^q/2(U_i)≥ K^1 - 2/qρ_i^4/q - 2 E_(u, U_i). Next, from Young's inequality and (<ref>), we haveρ_i^2( 1 - 1/α) k E_(u, U_i)^2/α - 1≤ρ_i^k + E_(u, U_i) ≤ K ρ_i^2 inf_U_i e_. Dividing both sidesby ρ_i^2, and rearranging, we obtaininf_U_i e_≥ K^-1ρ_i^- β E_(u, U_i)^2/α - 1 ,where β = 2(1 - ( 1 - 1/α) k ) is positive, by (<ref>). Inserting (<ref>) and (<ref>) into (<ref>), we obtainC E_0 ^2≥ K^-2/qρ_i^4/q - 2 -β E_^2/α. Choosing q= 8/4 + β, and applying the power α/2 to both sides, we haveC ^α≥ρ_i^-αβ/4 E_(u, U_i).We may now apply Global Dominance of Tension, Proposition <ref>, to obtain the result.Given k ∈, K ≥ 1, and 0 < β < 1/2, there exists _2 > 0 as follows.Let x_i ∈ S^2 and 0 < ρ_i ≤λ≤π/2, for i = 1, …, ℓ, with ℓ≤ k, be such that D_2ρ_i(x_i) ∩ D_2ρ_j(x_j) = ∅ for i ≠ j. Write U_i = U^2 ρ_i_ρ_i(x_i), Û_i = U^4 ρ_i_ρ_i/2(x_i),Ω = S^2 ∖∪_i D_ρ_i (x_i),andΩ̂ = S^2 ∖∪_i D_ρ_i / 2(x_i).(a) Suppose that u ∈ W^2,2 (S^2, S^2) satisfiesE(u) ≤ 4π k, (u) ≤ 1, max_i E(u, Û_i) ∧ E_ (u, Ω̂ ) < _2,and, for some M ≥ 1 and i = 1, …, ℓ, E_(u, Û_i) ≤ KE_(u, U_i) + M ρ_i^2 (u) _L^2(Û_i)^2 , E_(u, Û_i) ≤ KE_(u, U_i) +M ρ_i^2 (u) _L^2(Û_i)^2 ,andE_(u, U_i) ≤ M (E_(u, U_i) + ρ_i^2 (u) _L^2(Û_i)^2 ). Then∑_i=1^ℓρ_i^-β E( u, U_i ) + ( E(u, Ω) , ) ≤ C_<ref> a ( (u) ^2 + λ^1 - 2 β(u) ). (b) Let m ≥ 2 and1 < α = m + β/m - 1 + 3β <2.Suppose further thatρ_j^m + 2β≤ M ( E_(u, U_j) + ρ_j^2 ^2 ),where ρ_j = max_iρ_i. We then havemax_i ρ_i^m -1 + 3β≤ C_<ref> band∑_i=1^ℓρ_i^-β E( u, U_i ) + ( E(u, Ω) , ) ≤ C_<ref> b (u) ^α.Here C_<ref>a-b depend on k, K, M, and β. Let _2 be the minimum of the constants _i from Lemma <ref> and Propositions <ref>-<ref>, and let c_1 be the constant of Lemma <ref> corresponding to the given K. 
Let q = 2/1 + β, so that 4/3≤ q < 2 and-β = 1 - 2/q,1 - 2 β = 3 - 4/q,2 - β =3 - 2/q.(a) We will first proveλ^3 - 4/q + λ^3 - 2/q^2 ≳∑_i ρ_i^1 - 2/q E_(u, U_i).Let J = { i ∈{1, …, ℓ}|ρ_i^2 (u)^2_L^2 ( Û_i )≤_2/M^2 E_(u, U_i) }.Summing over i ∈ J^c, we have∑_i ∈ J^cρ_i^1 - 2/q E_(u, U_i) ≤λ^3 - 2/q∑_i ∈ J^cρ_i^-2 E_(u, U_i) ≤M^2 λ^3 - 2/q/_2(u)^2.On the other hand, for i ∈ J, we haveρ_i^2 (u)^2_L^2 ( Û_i )≤_2/M^2 E_(u, U_i).Combined with (<ref>-<ref>), this gives theconditions (<ref>) and (<ref>-<ref>).LetV_i = { x ∈ U_i |ρ_i^2 e_(u)(x) ≥ c_1 E_(u, U_i)}.By Lemma <ref>, we have|V_i| ≥( 1 - (CK)^-2 ) |U_i|.By Lemma <ref>, we haveE_(u, U_i) ≤ C K ρ_i^2 - 4/q e__L^q/2(V_i).Applying Proposition <ref>, we obtainC k^2≥ e_ e__L^q/2(S^2) ≥1/ℓ^q/2∑_ie_ e__L^q/2(V_i) ≥c_1/ℓ^q/2∑_i ρ_i^-2 E_(u, U_i)e__L^q/2(V_i) ≥c_1/MK ℓ^q/2∑_i ρ_i^4/q - 4E_(u, U_i)^2 ≥c_1/MK ℓ^q/2 + 1( ∑_i ρ_i^2/q - 2E_(u, U_i) )^2.Rearranging and canceling squares, we get√(C MKk ℓ^q/2 + 1/c_1)λ^3 - 4/q ≥∑_i ρ_i^1 - 2/q E_(u, U_i) .Combining (<ref>) and (<ref>) now yields the claim (<ref>).Applying Proposition <ref>, we obtainE_(u, Ω ) ≤ C ( ^2 + λ^1 - 2 β).The desired estimate now follows from Lemma <ref>a. (b) We may let λ = ρ_j = maxρ_i. By our assumption (<ref>) and (a), we haveλ^m + β≤ M ( λ^-β E(u, U_j) + λ^2 - β^2_L^2(Û_j))≤ C (+λ^1 - 2β)and0≤^2 +λ^1 - 2β - λ^m + β/C.Adding ( λ^1 - 2β - λ^m - 1 + 3β/2C) ≥ 0 to the RHS and factoring, we have0≤(- λ^m - 1 + 3β/2C) (+ 2 λ^1 - 2 β),which gives the first estimate of (b). We therefore haveλ^1 - 2 β ≤ C^1 - 2 β/m + 3 β - 1 + 1 = C ^m + β/m + 3 β - 1.The second estimate of (b) now follows from (a). Let x_i ∈ S^2 and 0 < ρ_i ≤λ≤π/2, for i = 0, …, ℓ, be such thatD_2ρ_i(x_i) ∩ D_2ρ_j(x_j) = ∅for i ≠ j, and⋃_i = 1^ℓ D_2 ρ_i(x_i) ⊂ D_ρ_0 (x_0).Write U_i = U^2 ρ_i_ρ_i(x_i), Û_i = U^4 ρ_i_ρ_i/2(x_i), andΩ' = D_ρ_0 (x_0) ∖∪ D_ρ_i (x_i)Ω̂' = D_2ρ_0 (x_0) ∖∪ D_ρ_i / 2(x_i).Suppose that (u) _L^2(Ω̂' )≤ρ_0^-1, and make the assumptions (<ref>-<ref>) for i = 0, …, ℓ. We then have∑_i=0^ℓρ_i^-β E( u, U_i ) + ( E( u, Ω' ), ) ≤ C_<ref>( λ^2 - β^2+ λ^1 - 2 β(u) ).Here C_<ref> depends on k, K, M, and β. Applying Proposition <ref> in place of Proposition <ref> in the previous proof, together with (<ref>), we get E_(u, Ω') ≤ C( λ^2 ^2 + λ^2 - β^2 + λ^1 - 2 β). The desired estimate follows from Lemma <ref>b. § THREE-ANNULUS AND MULTI-ANNULUS ESTIMATES §.§ Three-annulus estimates Suppose that N is Kähler. Let n ∈ with |n| ≤ L, σ > 1, and 0 < β≤1/2 be such that2 σ^2/σ^2 + 1< σ^2 β.There exists ξ_0 > 0, depending on N, L, σ, and β, as follows.Given any x_0 ∈ S^2 and 0 < ρ≤ 1, let Û = U^σ^2 ρ_σ^-1ρ(x_0). Suppose that u: Û→ N is a nonconstant W^2,2 map withE(u, Û) < ξ_0andρ^2(u) ^2_L^2(Û)≤ξ_0 E_(u, Û) .For i = 1,2,3, let S(i) = sup_σ^i-2ρ≤ r ≤σ^i-1ρ r^-n√(_S^1_r(x_0) e_dθ).The following implications hold:a S(1) ≤σ^1 - βS(2) ⇒ S(2) < σ^βS(3) bS(3) ≤σ^1 - βS(2) ⇒ S(2) <σ^β S(1).We first check the implications when u = g(z) dz is a nonzero holomorphic 1-form on a flat annulus V = U^σ^2_σ^-1(0) ⊂.More specifically, we prove:a' S(1) ≤√(σ( 1 + (σ - 1)^2/2σ) ) S(2) ⇒ S(2) ≤√(σ/1 + (σ - 1)^2/2σ) S(3) b'S(3) ≤√(σ( 1 + (σ - 1)^2/2σ) ) S(2) ⇒ S(2) ≤√(σ/1 + (σ - 1)^2/2σ) S(1)Under the assumption (<ref>), these implications are strictly stronger than (a) and (b). Dividing g by S(2) z^n, we may reduce to the case n = 0 and S(2) = 1.To prove (a') in this case, we use the Laurent expansion of g, which readsg(z) = ∑_n = -∞^∞ a_n z^n.This givesF(r)^2 : = 1/2 π∫_S^1_r |v|^2 dθ = ∑ |a_n|^2 r^2n. 
Assume that the supremum S(2) =1 is attained at r_0 ∈ 1, σ, soF(r_0)^2 = 1 = ∑ |a_n|^2 r_0^2n.We have the identity1 = σ + σ^-1/2 + (σ - 1)^2/σ,and, for n ∈, the inequality1 ≤σ^-2n + 1 + σ^2n-1/2 + (σ - 1)^2/σ.Inserting (<ref>) into (<ref>) for each n ∈, we have1 = ∑ |a_n|^2 r_0^2n ≤1/2 + (σ - 1)^2/σ∑ |a_n|^2 r_0^2n( σ^-2n + 1+ σ^2n-1)= 1/2 + (σ - 1)^2/σ( σ^-1 F ( σ^-1 r_0 ) + σ F ( σ r_0) ) ≤1/2 + (σ - 1)^2/σ( σ^-1 S(1)^2 + σ S(3)^2 ).We insert the assumption S(1)^2 ≤σ( 1 + (σ - 1)^2/2σ) , and rearrange, to obtain1≤2 σ/2 + (σ - 1)^2/σ S(3)^2,establishing (a'). The proof of (b') is identical.The proof of (a) and (b) now follows from a standard compactness-and-contradiction argument (similar to the proof of Lemma <ref>), which we omit. Let 0 < σ^-1≤γ≤ 1. Let Û = U_ρ / 2^2σρ(x_0) and u : Û→ N as above, satisfying (<ref>-<ref>), where ξ_0 depends on N, L, σ, and γ. There exists a constant C_m > 0 such that ifC_m √( E_(u,U_ρ/2^ρ))/ρ≤γsup_ρ≤ r ≤σρ( r/ρ)^m + 1√(_S^1_r(x_0) e_dθ)thensup_ρ≤ r ≤σρ( r/ρ)^m√(_S^1_r(x_0) e_dθ) < σγsup_σρ≤ r ≤ 2σρ( r/ρ)^m√(_S^1_r(x_0) e_dθ).LetF(r) = ( r/ρ)^m√(_S^1_r(x_0) e_d θ).We prove the contrapositive for u = g(z) dz a holomorphic 1-form, as above; assumingsup_ρ≤ r ≤σρ F(r) ≥σγsup_σρ≤ r ≤ 2 σρ F(r),we must showC_m √( E_(u,U_ρ/2^ρ))/ρ > γsup_ρ≤ r ≤σρ( r/ρ) F(r)for an appropriate constant C_m.Letμ := √( E_(u,U_ρ/2^ρ))/ρ.First note that by (<ref>) and epsilon-regularity, we have F( ρ/√(2)) ≤ C_m' μfor an appropriate C_m'.By Hadamard's Theorem, log F(r) is a convex function of log(r). In view of (<ref>) and (<ref>), we must have sup_ρ≤ r ≤σρ F(r) ≤ C_m' μ.But then (<ref>) impliesF(σρ) ≤sup_σρ≤ r ≤ 2 σρ F(r) ≤C_m' μ/σγ.By convexity, we havelog F(r) ≤( 1 - log( √(2) r/ρ) /log 2 σ) log C'_m μ + log( √(2) r/ρ) /log 2 σlog( C_m' μ/σγ).for ρ/√(2)≤ r ≤σρ. This simplifies tolog F(r)≤log C_m' μ - logσγ/log 2σlog( √(2) r/ρ)≤log C_m' μ - logσ + logγ/logσ + log 2log( r/ρ) ≤logμ + log C_m' - log( r/ρ) - logγ - log 2/logσ + log 2log( r/ρ),where we have used the fact that logσγ≥ 0. Since logγ - log 2 < 0 and r / ρ≤σ, this implieslog F(r) ≤logμ + log C_m' - log( r/ρ) - logγ - log 2/logσ + log 2logσ ≤logμ + log C_m' - log( r/ρ) + log 2 - logγ.Exponentiating yields the desired result (<ref>).The general result can again be derived by a compactness-and-contradiction argument. Given nonnegative integers m,n ∈{ 0, …, L } and σ > 1, there exists ξ_0 > 0 as follows.Let 0 < σρ≤ R ≤ 1, 0 < ξ≤ξ_0, and β satisfyingC_L≤σ^β.Given u : U^2R_ρ/2 (x_0) → N, let δ =(u) _L^2 ( U^2R_ρ/2) . Suppose thatsup_ρ / 2 ≤ r ≤ R E(u, U_r^2r) < ξ_0. Let μ, ν be such thatμ≥√( E_(u,U_ρ/2^ρ))/ρ∧ξ^-1δandν≥√( E_(u, U_R^2R) )/R∧ξ^-1δ .Writef(r) = max{√(_S^1_r(x_0) e_(u) dθ), ξ^-1δ}and, for any τ > 1,F_τ(r) = sup_r ≤ s ≤τ r ( s/ρ)^m f(s), G_τ(r) = sup_τ^-1 r ≤ s ≤ r ( R/s)^n f(s).The following implications hold. (a) IfC_m μ≤sup_ρ≤ r ≤σρ( r/ρ)^m + 1 f(r)thenC^-1_mσ^ - 1( ρ/r)^βμ≤ F_σ(r) ≤ C_m ( R/σ r)^β( R/ρ)^m νfor each σ^2 ρ≤ r ≤ R/σ. (b) IfC_n ν≤sup_σ^-1 R ≤ r ≤ R ( R/r)^n + 1 f(r)thenC^-1_nσ^- 1(r/R)^βν≤ G_σ(r) ≤ C_n (r/σρ)^β( R/ρ)^n μ.for each σρ≤ r ≤ R / σ^2. (c) If both (<ref>) and (<ref>) hold, thenf(r) ≥ C_L^-1( sup_r/2≤ s ≤ 2 r f(s) ∧σ^-2μ( ρ/r)^m + β∧σ^- 2ν( r/R)^n + β)for each σ^2 ρ≤ r ≤ R/σ^2.We first claim that given any τ > 1 and β > 0 as in (<ref>), for ξ_0 > 0 sufficiently small, we have the implication F_τ(τ^-1 r) ≤τ^1 - β F_τ (r) ⇒ F_τ (r) ≤τ^β F_τ(τ r).This follows from Lemma <ref>a. 
For, ifsup_r ≤ s ≤τ r √(_S^1_r(x_0) e_(u) dθ)≤ξ^-1δ,then F_τ (r) = ( τ rρ)^mξ^-1δ andF_τ(τ r) ≥( τ^2 r/ρ)^mξ^-1δ = τ^m F_τ(r),so the implication holds (since m ≥ 0). On the other hand, ifsup_r ≤ s ≤τ r √(_S^1_r(x_0) e_(u) dθ) > ξ^-1δ,thenC_m,τξ^2 E_(U_r^τ r) ≥ r^2 δ^2,which gives the assumption (<ref>), soLemma <ref><ref> gives (<ref>). assumption F_τ(τ^-1 r) ≤τ^1 - β F_τ(r) impliessup_τ^-1 r ≤ s ≤ r √(_|x - x_0 | = s e_(u) dθ)≤τ^1- βsup_r ≤ s ≤τ r √(_|x - x_0 | = s e_(u) dθ). In particular, (<ref>) and (<ref>) giveE_( u, U_τ^-1 r^τ^2 r) ≥ c r^2 F_τ(r)^2 ≥_1^-1 r^2.We may therefore apply Theorem <ref> to deduce the RHS of (<ref>) from (<ref>).To prove (a), note that by the assumption (<ref>) and Lemma <ref>, after letting γ = (C_m σ^β)^-1, we haveF_σ(σρ ) ≤σ^1 - β F_σ (σ^2 ρ). We may apply Lemma <ref> iteratively to obtain the conclusion of (a) for r ∈ρσ^. We can then apply (<ref>) with τ = σ/2, to obtain (a) for all r in the stated range. The proof of (b) is identical. To obtain the first estimate in (c), namely f(r) ≥ C_m,n^-1sup_r/2 ≤ s ≤ 2r f(s), one can apply (<ref>) with τ = 2 as well as the corresponding statement for G_τ. The last two estimates in (c) then follow from (a) and (b).§.§ Three-annulus estimates near nonconstant maps For technical reasons, we also need the following versions of the above estimates.Suppose that N is Kähler and let ϕ : B_2(0) ⊂→ N be a smooth map. Let n ∈ with |n| ≤ L, σ > 1, and 0 < β≤1/2, and suppose that σ satisfies C_n, ϕ≤σ^β.There exists ξ_1 > 0, depending on N, L, ϕ, σ, and β,as follows.Given any x_0 ∈ S^2 and 0 < ρ≤σ^-2, let Û = U^σ^2 ρ_σ^-1ρ(x_0). Suppose that u: Û→ N is a nonconstant W^2,2 map with u( x_0 + y/ρσ^2) - ϕ( y ) _W^2,2(Û) < ξ_1andρ^2(u) ^2_L^2(Û)≤ξ_1 E_(u, Û) .Then the implications of Lemma <ref> still hold.We may rescale without loss of generality so that ρσ^2 = 1.We first check the implications assuming that u = α is a holomorphic 1-form valued in ϕ^* TN. Since ϕ^*TN is a smooth bundle on B_2(0) ⊂, by the Korn-Lichtenstein theorem (see e.g. Donaldson-Kronheimer <cit.>),there is a holomorphic frame e_1, …, e__(N) for the holomorphic structure induced by _ϕ over B_1(0). The norms of these sections are bounded above and away from zero by constants. Since a general holomorphic 1-form valued in ϕ^*T^(1,0)N is a holomorphic linear combination of {e_i }, the same calculation as in Lemma <ref> givesa” C_ϕ S(1) ≤√(σ( 1 + (σ - 1)^2/2σ) ) S(2) ⇒ S(2) ≤ C_ϕ√(σ/1 + (σ - 1)^2/2σ) S(3) b” C_ϕ S(3) ≤√(σ( 1 + (σ - 1)^2/2σ) ) S(2) ⇒ S(2) ≤ C_ϕ√(σ/1 + (σ - 1)^2/2σ) S(1).The claimed implications follow by letting σ^β absorb the constants.The implications for a general 1-form u can be now obtained by a contradiction argument similar to the proof of Lemma <ref>.Let Û = U_ρ / 2^2σρ(x_0) and u : Û→ N as above, satisfying (<ref>-<ref>).There exists C_m, ϕ > 0 such that ifC_m, ϕ√( E_(u,U_ρ/2^ρ))/ρ≤γsup_ρ≤ r ≤σρ( r/ρ)^m + 1√(_S^1_r(x_0) e_dθ)thensup_ρ≤ r ≤σρ( r/ρ)^m√(_S^1_r(x_0) e_dθ) < γσsup_σρ≤ r ≤ 2 σ' ρ( r/ρ)^m√(_S^1_r(x_0) e_dθ).This can be proved as in the previous Lemma, by reference to Lemma <ref>. Given nonnegative integers m,n ∈{ 0, …, L }, 0 < ζ' ≤ 1, and smooth maps ϕ_1, ϕ_2: B_2(0) ⊂→ N, as well as 1 < σ≤ (ζ')^-1 and β > 0 satisfyingC_m, ϕ_1∧ C_n, ϕ_2≤σ^β, where C_m, ϕ are the constants of (<ref>), there exists ξ_1 > 0 as follows.Let ρ and R ≤ 1 be such that 0 < ρ≤ (ζ')^2 R ≤ 1.Given u : U^2R_ρ/2 (x_0) → N, let δ =(u) _L^2 ( U^2R_ρ/2) . 
Suppose thatsup_ρ / 2ζ' ≤ r ≤ζ' R E(u, U_r^2r) < ξ_0,where ξ_0 is the constant of Proposition <ref>, as well asu ( x_0 + ρ y ) - ϕ_1 ( y^-1) _W^2,2( U_1/2^(ζ')^-1)< ξ_1, u ( x_0 + R y ) - ϕ_2 ( y ) _W^2,2( U^2_ζ')< ξ_1,andρ^2(u) ^2_L^2(Û)≤ξ_1 E_(u, Û) . Then the implications of Proposition <ref> still hold, after replacing C_m by C_m, ϕ_1 in (<ref>) and C_n by C_n, ϕ_2 in (<ref>).This follows by applying the proof of Proposition <ref> using Lemmas <ref>-<ref> in place of <ref>-<ref> over the inner and outer intervals U_ρ / 2^ρ / ζ' and U^2R_ζ' R.§.§ Multi-annulus estimatesNext, we will prove a weaker version of the previous estimates which applies across bubble maps.Let 0 < ζ_i ≤ζ_0 ≤1/2, σ_i > 1, and x_i ∈ S^2, for i = 1, …, ℓ, with ℓ≤ k. Assume that D_4ζ_0(x_i) ∩ D_4ζ_0(x_j) = ∅,for i ≠ j, and letΛ= S^2 ∖∪_i D_ζ_i (x_i),Λ̂= S^2 ∖∪_i D_ζ_i / σ_i (x_i).We need the following split -regularity results, which assume only that E_(u) is small, at the price of a curvature assumption on N.Suppose that N is compact Kähler with nonnegative holomorphic bisectional curvature, and let 2 ≤ p < ∞. There exists _0 > 0, depending on the geometry of N, such that if u ∈ W^2,2( Λ̂, N ) satisfies E_( u, Λ̂) < _0, thenu ^2_ L^p ( Λ) ≤ C_<ref>( E_(u, Λ̂∖Λ ) +(u) _L^2 ( Λ̂ )^2 ).Here C_<ref> depends on p, k, ζ_i, and σ_i. This is a minor adaptation of the proofs by Struwe <cit.>, Topping <cit.>, or Liu-Yang <cit.>. There exists ξ_1 > 0, depending on N, k, ζ_i, and σ_i, as follows. Suppose that u ∈ W^2,2(Λ̂, N) satisfiesE_(u, Λ̂) ∧(u)_L^2 ( Λ̂ )^2 < ξ_1,and let α∈Ω^0,1(u^*T^(1,0)N) on Λ̂. We have the estimateα^2_L^2(Λ)≤ C ( _u^* α^2_L^2(Λ̂) + ∑_i = 1^ℓ1/logσ_isup_σ_i^-1ζ_i ≤ r ≤ζ_i _S^1_r(x_i)|α|^2).We may choose ξ_1 small enough that the previous Lemma makes u ^2_ L^p ( Λ) arbitrarily small, for a fixed p > 2. Since N has nonnegative holomorphic bisectional curvature and the domain is the round S^2, (<ref>) gives the inequality∫ |∇ ( φα)|^2 + ∫φ^2 |α|^2 ≤ C ( ∫φ^2 | _u^* α |^2 + ∫φ^2 e_(u) |α|^2 + ∫ |∇φ|^2 |α|^2 ).The estimate follows by letting φ = ∏φ_i bea product of logarithmic cutoffs, with r |∇φ_i| ≤ C / logσ_i, and using the Sobolev and Hölder's inequalities together with the result of Lemma <ref>.Let 0 < γ≤ 1. Given u and α∈Ω^0,1(u^*T^(1,0)N) as in the previous lemma, suppose that_u^* α^2_L^2(Λ̂)≤α^2_L^2(Λ)/2C ,where C is the univeral constant of the previous Lemma. Assume that 4C/γ≤logσ_ifor i = 1, …, ℓ. Fixing j ∈{1, …, ℓ}, the following implication holds:γ∑_i ≠ jsup_σ_i^-1ζ_i ≤ r ≤ζ_i_S^1_r(x_i)|α|^2 ≤α^2_L^2(Λ)⇒α^2_L^2(Λ)< γsup_σ_j^-1ζ_j ≤ r ≤ζ_j_S^1_r(x_j)|α|^2. This follows by rearranging the estimate of the previous Lemma.Suppose that N has nonnegative holomorphic bisectional curvature.Given n_i ∈, i = 1, …, ℓ, ℓ≤ k, with |n_i| ≤ L, and ∑ n_i = 0,as well as 0 < ζ≤ζ_0 and 0 < γ≤ 1, there exists σ_0 >1 (depending on k, L, ζ, and γ) such that given σ≥σ_0, there exists ξ_1 > 0 (depending on k, L, ζ, and σ) as follows.Let ζ_i = ζ and σ_i = σ, for i = 1, …, ℓ, in (<ref>). Suppose that u : Λ̂→ N is a nonconstant W^2,2 map withE_(u,Λ̂ ) < ξ_1and(u) ^2_L^2(Λ̂)≤ξ_1 E_(u, Λ̂) . LetS_i = sup_σ^-1ζ≤ r ≤ζ( ζ/r)^n_i√(_d(x, x_i) = r e_( u(x) )dθ),for i = 1, …, ℓ. Fixing j ∈{1, …, ℓ}, the following implication holds:γ∑_i ≠ j S^2_i ≤E_ (u, Λ) ∀ i ≠ j⇒ E_ (u, Λ) < γ S_j.First notice that the casen_i = 0 for all i follows directly from Proposition <ref>, where α =u. 
To obtain the general case, we let f(x) be a meromorphic function on S^2 with order - n_i at x_i, which we can choose such that ∫_S^2 ∖∪_i D_ζ_0(x_i)| f |^2 dV = 1—this determines f(x) uniquely up to a unit complex number. We haveK_1^-1≤sup_σ^-1ζ≤(x,x_i) ≤ζ |f(x)| |x-x_i|^n_i≤ K_1. We now letα = f̅· u.Then ^* α = Λα = f̅^*u = f̅(u),since f is holomorphic on Λ̂. Choosing σ_0 to account for K_1, and ξ_1 to account for the size of |f| on Λ, we again obtain the desired implication from Proposition <ref>. Suppose that N has nonnegative holomorphic bisectional curvature.Given L ∈, n_i ∈, with | n_i | ≤ L, as well as 0 < ζ≤1/2 and 0 < γ≤ 1,there exists σ_0 > 1 (depending on k, L, γ, and ζ), as well as σ_1 > 0 (depending only on k, L and γ), such that given σ≥σ_0 and σ' ≥σ_1,there exists ξ_1 > 0 (depending on k, L, γ, ζ, σ, and σ') as follows.Let 0 < λ≤1/σ', p ∈ S^2, and x_1, …, x_ℓ∈ D_λ/2(p) ⊂ S^2, ℓ≤ L,withD_2 ζ_0 λ(x_i) ∩ D_2 ζ_0 λ(x_j)= ∅for each i ≠ j. WriteΛ' = D_λ(p) ∖∪_i D_ζλ(x_i), Λ̂ '= D_σ' λ(p) ∖∪_i D_σ^-1ζλ(x_i).Suppose that u : Λ̂' → N is a nonconstant W^2,2 map withE_(u,Λ̂' ) < ξ_1andλ^2 (u) ^2_L^2(Λ̂)≤ξ_1 E_(u, Λ̂) . Putm = ∑ n_i,and letS_0 = sup_λ≤ r ≤σλ( r/λ)^m√(_S^1_r(p)e_ ( u(x) )dθ), S'_0 =sup_λ≤ r ≤σλ( r/λ)^m + 1√(_S^1_r(p)e_ ( u(x) )dθ), S_i =sup_σ^-1ζλ≤ r ≤ζλ( ζλ/r)^n_i√(_S^1_r(x_i)e_ ( u(x) )dθ),andS'_i =sup_σ^-1ζλ≤ r ≤ζλ( ζλ/r)^n_i + 1√(_S^1_r(x_i) e_ ( u(x) )dθ),for i = 1, …, ℓ. We then have the following implications:aγ∑_i = 1^ℓ S^2_i ≤ E_ (u, Λ' ) /λ^2⇒ E_ (u, Λ' )/λ^2< γ( S'_0 ) ^2and bγ∑_i = 0i ≠ j^ℓ S^2_i ≤ E_ (u, Λ' ) /λ^2 ⇒ E_ (u, Λ' ) /λ^2<γ( S'_j )^2. Since Λ̂ is contained in the hemisphere D_1(p), where the conformal factor is bounded by 4, we may replace D_σλ(p) by the flat ball B_σ(0) ⊂, and assume λ = 1.We will prove the case m = 0 = n_i of (a) by contradiction. Letα =u/√( E_(u, Λ') ),and suppose that both( S'_0)^2 ≤1/γand∑_i = 1^ℓ S_i^2 ≤1/γ.For 1 ≤ r ≤σ', from (<ref>), we have√(_S^1_r(p)|α|^2 )≤S'_0/γ r.It follows that the Lorentz norm of α is bounded as follows:α_L^2, ∞(Λ̂')^2≤α^2_L^2( Λ̂' ∩ B_1(0) ) +1/γ^2( S_0' )^2 ≤2/γ^2α^2_L^2 ( Λ̂' ∩ B_1(0) ) .We choose φ = φ_0 ·∏_i = 1^ℓφ_i to be a product of logarithmic cutoffs, with φ⊂Λ̂ and φ≡ 1 on Λ. In view of (<ref>) and the fact that α_L^2(Λ') = 1, we must haveφα^2_L^2(B_1(0))≤ 1 + 1/γ.Hence (<ref>) reduces toφα_L^2, ∞(^2 )^2≤2( 1 + 1/γ)/γ^2.By <cit.>, Ladyzhenskaya's inequality allows a Lorentz space on the RHS, giving:φα_L^4(^2) ^2 ≤ C φα_L^2, ∞∇ ( φα) _L^2.Inserting (<ref>), we haveφα_L^4(^2) ^2 ≤C/γ∇ ( φα) _L^2.Integrating the Weitzenbock formula (<ref>) against φ, instead of (<ref>),we have∫ |∇ ( φα)|^2 ≤ C ( ∫φ^2 | _u^* α |^2 + ∫φ^2 e_(u) |α|^2 + ∫ |∇φ|^2 |α|^2 ).Applying Hölder's inequality on the RHS, we get∫ |∇ ( φα)|^2≤ C ( ∫φ^2 | _u^* α |^2 + ( ∫ e_(u)^2 )^1/2( ∫( φ |α| )^4 )^1/2 + ∫ |∇φ|^2 |α|^2 ). Assuming from Lemma <ref> that ( ∫ e_(u)^2 )^1/2 < δ, we can apply (<ref>) and rearrange (<ref>) to obtain∇ ( φα) _L^2( ∇ ( φα) _L^2 - C δ)≤ C ( ∫φ^2 | _u^* α |^2 + ∫ |∇φ|^2 |α|^2 ).By Hölder, we have1 = α_L^2(Λ')≤ C φα_L^4≤C/γ∇ ( φα) _L^2,so (<ref>) givesγ/C ≤ C ( ∫φ^2 | _u^* α |^2 + ∫ |∇φ|^2 |α|^2 ) ≤ C ξ_1 + C ( 1/logσ'sup_1 ≤ r ≤σ' _S^1_r(p)|α|^2 + ∑_i 1/logσsup_σ^-1ζ≤ r ≤ζ_S^1_r(x_i)|α|^2).Choosing ξ_1 small enough and σ, σ' large enough gives a contradiction. 
and( 1/C_p - δ) ( ∫ | φα |^p )^2/p≤ C ( ∫φ^2 | _u^* α |^2 + ∫ |∇φ|^2 |α|^2 ).Applying Hölder's inequality in combination with (<ref>), we haveα^2_L^2(Λ)≤( Vol(Λ) )^1/q·( ∫ | φα |^p )^2/p ≤ C_p ( ∫φ^2 | _u^* α |^2 + ∫ |∇φ|^2 |α|^2 )andα^2_L^2(Λ)≤ C_p ( ∫φ^2 | _u^* α |^2 + 1/logσ'sup_1 ≤ r ≤σ' _S^1_r(p)|α|^2 + ∑_i 1/logσsup_σ^-1ζ≤ r ≤ζ_S^1_r(x_i)|α|^2)Now, the case m = 0 = n_iof (a) and (b) follow as above. To obtain the general case of (a),letf(z) = Π (z - z_i)^n_i. On D_σ'(0) ∖ D_1(0), we haveK_1^-1≤|f(w)/ w^m | ≤ K_1,where K_1 may depend on {n_i} but not on ζ or the points x_i ∈ B_1/2(0). We can now obtain the conclusions by applying Proposition <ref> to the form α = f̅· u. The proof of (b) follows similarly by choosing f(z) = (z - z_j)^n_j + 1Π_i ≠ j (z - z_i)^n_i.As in the last proof, we begin by checking the implications for a holomorphic 1-form α on Λ. After dividing by a holomorphic function g(x) on S^2 with order n_i at x_i, we can reduce to the case n_i = 0 for i = 0, …, L and max_i ≠ 0 S_i(1) = 1.After possibly taking σ_0 larger, this implies the general case.Choose a stereographic chartwith x_0 = ∞, which sends B_R_0(x_0) to ∖ B_1(0). Let z_i ∈ B_1(0) ⊂ be the point corresponding to x_i ∈ S^2, for i ≠ 0. Let B_R̃_i(z_i) be the image of B_R_i(x_i) in the chart; we havec_ρ_0σ^-3≤R̃_i ≤ C_ρ_0σ^-2.Write α = f(z) dz in these coordinates. This is accomplished using the following “multiple” Laurent expansion.By the Cauchy integral formula, we havef(z) = 1/2 π i∫_|z| = σ^2 R_1f(w)/w - z dw - ∑_j = 1^L 1/2 π i∫_|z - z_j| = ζ^-2ρ_0f(w)/w - z_j dw= ∑_n = 0^∞ a_n z^n + ∑_j = 1^L ∑_n = 1^∞ b_n,j (z - z_j)^-n =: f_0(z) + ∑_j = 1^L f_i(z).The expansion is obtained by inserting the identity 1/w - z = ∑_i = 0^∞z^n/w^n + 1in the first term and 1/w - z = ∑_i = 0^∞-(w - z_j)^n/(z - z_j)^n + 1in the remaining terms. By construction, a_n are the nonnegative Fourier coefficients of f(z) along the circle |z| = σ^2 R_1 and b_n,j are the negative Fourier coefficients of f(z) along |z - z_j| = ζ^-2ρ_0. We therefore have the bounds_|z| = σ R_1 |f_0(z)|^2ds = ∑_n = 0^∞ |a_n|^2 (R_1 σ)^2n≤ S(2)^2 = 1, ∑_n = 0^∞ |a_n|^2 (R_1 σ)^4n≤ S(3)^2,and∑_n = 1^∞ |b_n,j|^2 (ρ_0 ζ)^4n≤ S_j(0)^2, ∑_n = 1^∞ |b_n,j|^2 (ρ_0 ζ)^2n≤ S_j(1)^2. For 1 ≤ r ≤σ, we also have√(_|z| = r |f(z)|^2ds) ≤√(_|z| = r |f_0(z)|^2ds ) + ∑_j = 1^L √(_|z| = r |f_j(z)|^2ds )≤√(∑_n = 0^∞ |a_n|^2 r^2n) + ∑_j = 1^L √(∑_n = 1^∞ |b_n,j |^2 ( r/2)^-2n) For the general case, we may reduce to the previous case by replacing f(z) byf̃(x) = f(z)/Π_j = 1^L (z - z_j)^n_j. By increasing σ_0 to account for the norm of the denominator in (<ref>), the implications in this case imply those in the original case. andS(2) = E_ (u, Λ̂).Put m = ∑_i = 1^L n_i, and letS(3) = √(sup_R ≤ r ≤σ Rr^-2 m_d(x, x_0) = r e_ (u)dθ).§.§ Cut-and-pasteLet u : S^2 → S^2 be a W^1,2 map and let n ∈ be such that| E( u, S^2 ) - 4 π n | ≤ 2 π. (a) IfE_ ( u, B_2ρ ) + E_ ( u, S^2 ∖ B_ρ ) + _U^2ρ_ρ u ≤η,then| E( u, S^2 ) - 4 π n | ≤ C η. (b) If E_( u, S^2 ∖ B_ρ ) ≤ 2 πand E_ ( u, B_2ρ ) + E( u, S^2 ) - 4 π n + _U^2ρ_ρ u ≤η,thenE_ ( u, S^2 ∖ B_ρ ) + | E( u, S^2 ) - 4 π n | ≤ C η. § BUBBLE-TREE ESTIMATESA sequence u_i : Σ→N of W^2,2 maps with E(u_i) ≤ 4 π L and (u_i)_L^2→ 0 as i →∞ will be called an almost-harmonic sequence.A collectionℬ = { J, ζ_0, {z_j }, {y_j_1, …, j_q}, ϕ_j_1, …, j_q , u_∞} will be referred to as bubble-tree data. 
Here, J = {(j_1, …, j_q ) }_q = 1^q_max is a finite indexing set of nonnegative integers; 0 < ζ_0 ≤1/2 is a positive real number; {z_j}⊂Σ, for (j) ∈ J, and y_j_1, …, j_q∈ B_1(0) ⊂^2, for (j_1, …, j_q) ∈ J (q ≥ 2), are finite sets of points; andu_∞ = ϕ_0 : Σ→ N, ϕ_j_1, …, j_q : ^2 → N,for (j_1, …, j_q) ∈ J, are finite-energy harmonic maps. These are required to satisfy:|J| ≤ 4 π L / _0, D_2 ζ_0(z_j) ∩ D_2 ζ_0(z_k) = ∅ , D_2 ζ_0(y_j_1, …, j_q, j) ∩ D_2 ζ_0(y_j_1, …, j_q, k) = ∅for j ≠ k, and, if (j_1, …, j_q) ∈ J is a terminal index (i.e. (j_1, …, j_q, k) ∉J for all k), then ϕ_j_1, …, j_q is nonconstant. Given a W^1,2 map u: B_R(x_0) → N, the outer energy scale λ_, R, x_0(u) is the smallest number λ≥ 0 such thatsup_λ < ρ < R E ( u, U^ρ_ρ/2(x_0) ) < .Note that λ = R satisfies (<ref>) vacuously, so 0 ≤λ_, R, x_0(u) ≤ R by definition.Given an almost-harmonic sequence u_i : Σ→ N, we may pass to a subsequence (again called u_i) which “converges in the bubble-tree sense,” i.e., for which there exists a set of bubble-tree data ℬ as in Definition <ref>, as follows.Given 0 < ≤_0, there exists 0 < ζ≤ζ_0 (with ζ = ζ_0 if = _0), and for all i sufficiently large,points x^i_j_1, …, j_q∈Σ and positive numbers λ^i_j_1, …, j_q→ 0, such that:* x^i_j_1, …, j_q→ z_j_1 as i →∞,* λ^i_j_1 = λ_ζ, , x^i_j_1(u_i), * λ^i_j_1, …, j_q = λ_ζλ^i_j_1, …, j_q - 1, , x^i_j_1, …, j_q(u_i(t_i)),* x^i_j_1, …, j_q = x^i_j_1, …, j_q - 1 + λ^i_j_1, …, j_q-1 y_j_1, …, j_q,* λ^i_j_1, …, j_qλ^i_j_1, …, j_q - 1→ 0 as i →∞,* u_i ( x^i_ j_1, …, j_q + λ^i_j_1, …, j_q y) →ϕ_j_1, …, j_q(y) in W^2,2_loc( ^2 ∖∪_k y_j_1, …, j_q-1, k) as i →∞, * Given any ' ≤, there exists ζ' >0 such that for all i sufficiently large, we have E (u_i, U_(ζ')^-1λ_j_1, …, j_q^ζ' λ_j_1, …, j_q - 1(x_j_1, …, j_q) ) < '. The proof is by now standard (see Song-Waldron <cit.> for a recent sketch), and we omit it. Suppose now that u_i converges in the bubble-tree sense and that ϕ_j_1, …, j_q and u_∞ are each either holomorphic, antiholomorphic, or constant, as is the case for Σ = N = S^2.Denote these disjoint subsets of J byJ = J_⨿J_⨿J_0,respectively. We also writeJ_T ⊂ J_∪ J_for the set of “terminal” bubbles in the tree, which are all nonconstant by construction. Next, given any index = (j_1, …, j_q), write_() = ( j_1, …, j_p) ∈ J_∪{0},with p ≤ q, for the nearest holomorphic index precedingin the bubble tree, or _() = (0) if there are only ghost (i.e. constant) bubbles betweenand the body map. WriteJ⃗_() = { (j_1, …, j_q, …, j_r) }⊂ J_for the set of holomorphic indices succeedingin the bubble tree, i.e., (j_1, …, j_q, …, j_r) ∈J⃗_() iff (j_1, …, j_q, …, j_r) ∈ J_ but (j_1, …, j_p, …, j_s) ∉J_ for any p ≤ s < r.We define _() and J⃗_() similarly.For each = (j_1, …, j_q) ∈ J, define a nonnegative integer M^_j_1, …, j_q as follows. For ∈ J_, suppose first that (j_1, …, j_q) ∈ J_, and letM^_j_1, …, j_q = 2 + ( order-of-vanishing of ϕ_j_1, …, j_q at ∞).For a terminal, antiholomorphic index (j_1, …, j_q) ∈ J_T ∩ J_, letM^_j_1, …, j_q = 0.For an antiholomorphic or ghost index ∈ J_^c, defineM^_j_1, …, j_q = ∑ M^_j_1, …, j_q, ℓ_q + 1, …, ℓ_n,where the sum runs over all succeeding holomorphic indices (j_1, …, j_q, ℓ_q + 1, …, ℓ_n) ∈J⃗_ (). Define nonnegative integers N^_j_1, …, j_q as follows. Supposing first that(j_1, …, j_q - 1) ∈ J_,letN^_j_1, …, j_q = order-of-vanishing of ϕ_j_1, …, j_q-1 at y_j_1, …, j_q.Suppose next that (j_1, …, j_q - 1) ∈ J_^c is an antiholomorphic or ghost index. Assuming q = 1 (i.e. 
the body map ϕ_0 is antiholomorphic or constant), defineN^_j_1 = ∑_j ≠ j_1 M^_j.Assuming q > 1, write_() = (j_1, …, j_p), and defineN^_j_1, …, j_q = N^_j_1, …, j_p+ ∑_n = p + 1^q ∑_j ≠ j_n M_j_1, …, j_p, …, j_n-1, j.Define M^_* and N^_* similarly, replacing holomorphic by anti-holomorphic and vice-versa. These integers can be bounded above by an integer L = L(k) depending only on k and _0 (which can be replaced by 1 when N = S^2). In simple cases, L can be made explicit: for instance in the situation of Topping's second theorem <cit.>, the holomorphic exponents M_*^ and N^_* are all zero, so one may take L = 0 in Theorems <ref>b and <ref>b.Now, given 0 < ≤_0, letx^i_j_1, …, j_q∈ S^2,0 < λ^i_j_1, …, j_q≤ζ,for i = 1, …, ∞, be the sequences of points and scales guaranteed by Theorem <ref>. For q ≥ 1, also let Λ^i_j_1, …, j_q = D_λ^i_j_1, …, j_q(x^i_j_1, …, j_q ) ∖∪_k D̅_ζλ^i_j_1, …, j_q, k(x^i_j_1, …, j_q, k),V^i_j_1, …, j_q= U^2ζλ^i_j_1, …, j_q -1_ζλ^i_j_1, …, j_q - 1(x^i_j_1, …, j_q ),andU^i_j_1, …, j_q = U^λ^i_j_1, …, j_q_λ^i_j_1, …, j_q / 2(x^i_j_1, …, j_q ).In addition, letΛ^i_0 = S^2 ∖∪_j D̅_ζ( x^i_j ).Given ξ_1 > 0, for i large enough, we may assume that for each (j_1, …, j_q) ∈ J_, there holdsE_(u_i, Λ^i_j_1, …, j_q ) < ξ_1,while for (j_1, …, j_q) ∈ J_,E_(u_i, Λ^i_j_1, …, j_q ) < ξ_1,and for (j_1, …, j_q) ∈ J_0,E(u_i, Λ^i_j_1, …, j_q ) < ξ_1.We are finally ready to prove our main bubble-tree estimate. For (j_1, …, j_q) ∈ J_, letS^i_j_1, …, j_q =E_( u_i, U^2 λ^i_j_1, …, j_q_λ^i_j_1, …, j_q (x^i_j_1, …, j_q ) )andS^i_j_1, …,j_q→ k = E_( u_i, U^2 ρ_0 λ^i_j_1, …, j_q_ρ_0 λ^i_j_1, …, j_q(x^i_j_1, …, j_q, k ) ).Let 0 < β < 1/2. Suppose that u_i : S^2 → N, with nonnegative holomorphic bisectional curvature, is a bubble-tree convergent almost-harmonic sequence, with bubble-tree data ℬ, for which the body map and all bubbles are either holomorphic or antiholomorphic. Writeδ_i := (u_i) → 0.Given 0 < ≤_0,there exist κ, ξ > 0, depending on , β, and ℬ, as follows.Let ζ, Λ^i_, Λ̂^i_, etc., be as in Theorem <ref>, and putμ^i_ = max{√(min{ E_( u_i, Λ̂^i_), })/λ^i_, ξ^-1δ_i }andf^i_(r) = max{√(_S^1_r(x^i_ ) e_(u_i) dθ), ξ^-1δ_i }for λ^i_j_1, …, j_q≤ r ≤ζλ^i_j_1, …, j_q - 1 (where = (j_1, …, j_q)). For i sufficiently large, we havea f^i_j_1, …, j_q(r) ≥κ( μ^i_j_1, …, j_q( λ^i_j_1, …, j_q/r)^M^_j_1, …, j_q + β + μ^i_j_1, …,j_q - 1( r/λ^i_j_1, …, j_q - 1)^N^_j_1, …, j_q + β)for each ∈ J and λ^i_j_1, …, j_q≤ r ≤ζλ^i_j_1, …, j_q - 1, as well asb f^i_j_1, …, j_q(r) ≥ c_Lsup_r/2≤ s ≤ 2 r f^i_j_1, …, j_q(s)for κ^-1λ^i_j_1, …, j_q≤ r ≤κζλ^i_j_1, …, j_q - 1. b E_( u_i, U_r^2r( x^i_j_1, …, j_q) )+ r^2_2^-1δ_i^2 ≥κ E_( u_i, U_r/2^4r( x^i_j_1, …, j_q) )for λ^i_j_1, …, j_q≤ r ≤1/2λ_0 λ^i_j_1, …, j_q - 1, andc E_( u, U^i_j_1, …, j_q) ∧ E_( u, V^i_j_1, …, j_q, j_q + 1) + _2^-1( λ^i_j_1, …, j_qδ_i )^2 ≥κ(λ^i_j_1, …, j_q e^i_j_1, … j_q)^2. We will let σ increase and ξ decrease finitely many times during the proof.1. Given 0 < γ≤ 1, we can choose σ large enough such that for each ∈ J and i sufficiently large, there holdsμ^i_j_1, …, j_q≤γsup_λ^i_≤ r ≤σλ^i_( r/λ^i_)^M^_ + 1 f^i_ (r).First assume that ∈ J_ is a holomorphic index. It follows from the definition of M^_ on a holomorphic index thatμ^i_j_1, …, j_q≤κ^-1sup_1/2σλ^i_≤ r ≤σλ^i_( r/λ^i_)^M^_ f^i_ (r).for any σ and i sufficiently large, which implies the claim after letting σ≥ 2 γ^-1κ^-1.Next, assume that ∈ J_T ∩ J_^c is a terminal but non-holomorphic index, in which case M^_ = 0. 
Then (<ref>) holds for i sufficiently large, and we can apply Theorem <ref>a with m = 0 and {n_i } = ∅, to obtain the claim.We can now prove the claim for the remaining indices J_^c = J_∪ J_0 by “inward” induction on the bubble tree, the base case being that of the holomorphic/terminal bubbles. Let (j_1, …, j_q) ∈ J_^c. Note from Definition <ref> thatM^_j_1, …, j_q = ∑_k M^_j_1, …, j_q, k.Assume that the claim has been proven for all indices succeeding a given index = (j_1, …, j_q); in particularμ^i_j_1, …, j_q, k≤γsup_λ^i_j_1, …, j_q,k ≤ r ≤σλ^i_j_1, …, j_q,k ( r/λ^i_j_1, …, j_q,k )^M^_j_1, …, j_q,k + 1 f^i_j_1, …, j_q,k(r)for each k such that (j_1, …, j_q,k) ∈ J. Let ' = ξ_0, the constant of Proposition <ref> corresponding to σ. By the last item in Theorem <ref>, we may choose ζ' > 0 such that the hypothesis (<ref>) is satisfied, with ρ = λ^i_j_1, …, j_q,k and R = ζλ^i_j_1, …, j_q. By choosing i sufficiently large, we may assume that (<ref>-<ref>) are satisfied for the relevant bubble maps.Corollary <ref>a then gives ussup_σ^-1ζλ^i_j_1, …, j_q≤ r ≤ζλ^i_j_1, …, j_q( r/ζλ^i_j_1, …, j_q)^M^_j_1, …, j_q,k f^i_j_1, …, j_q,k(r)≤γ√( E_(u,V_j_1, …, j_q, k ))/λ^i_j_1, …, j_q≤γμ^i_j_1, …, j_q, k.We may now apply Theorem <ref>a, with m = M^_j_1, …, j_q and n_k = M^_j_1, …, j_q, k (in view of (<ref>)), to complete the induction step. (To account for the case μ^i_ = ξ^-1δ_i, we can simply decrease ξ.)2. For each = (j_1, …, j_q) ∈ J and i sufficiently large, we haveμ^i_j_1, …, j_q≤γsup_σ^-1ζλ^i_ j_1, …, j_q-1≤ r ≤ζλ^i_j_1, …, j_q-1( ζλ^i_j_1, …, j_q - 1/r)^N^_ + 1 f^i_ (r).First assume that ∈ J_ is a holomorphic index. Then, taking σ larger if necessary, the claim follows as above from the definition of N^_ on a holomorphic index.Next, assume that the body map is either antiholomorphic or constant; then (<ref>) holds for i sufficiently large. Fix an index j_1, let n_j_1 = N^_j_1, and n_k = - M^_k for k ≠ j_1. In view of (<ref>), we may apply Theorem <ref> to obtain the claim, with = (j_1).We can now prove the claim for the remaining indices by “outward” induction on the bubble tree. Assume that the claim has been proven for all indices up to and including a given index = (j_1, …, j_q); we shall prove it for (j_1, …, j_q + 1) ∈ J_^c =J_∪ J_0.Note that from Definition <ref>, we haveN^_j_1, …, j_q, j_q + 1 = N^_j_1, …, j_q + ∑_k≠ j_q + 1 M^_j_1, …, j_q, k.Setting m = N^_j_1, …, j_q, n_j_q + 1 = N^_j_1, …, j_q + 1, and n_k = -M^_j_1, …, j_q, k for k ≠ j_q + 1, in view of the induction hypothesis (<ref>) and the established inequality (<ref>), Theorem <ref>b implies the claim for (j_1, …, j_q + 1), completing the induction. Claims 1 and 2 imply the hypotheses (<ref>) and (<ref>) of Corollary <ref>c, which gives the desired bounds (a) and (b). In the statement, we absorb all dependence on σ into the constant κ.Let = (j_1, …, j_q) ∈ J, and put_() = (j_1, …, j_p).(a) Supposing that _() is holomorphic (and not constant), we havef^i_(r) ≥κ( r/λ^i__ ())^N_ + β. (b) Supposing that k⃗ = (j_1, …, j_q, k_m + 1, …, k_n) ∈J⃗_( ) is holomorphic (and not constant), we havef^i_(r) ≥κ( λ^i_k⃗/r)^M_ + β. (c) Let p ≤ m ≤ q and consider three indices of the form ℓ⃗ = (j_1, …, j_m), = (j_1, …, j_m, j_m +1, …, j_q), k⃗ = (j_1, …, j_m, k_m+ 1, …, k_n).Suppose that k⃗∈J⃗_(ℓ⃗) is a holomorphic index succeeding ℓ⃗, and that J⃗_() ⊂J⃗_(ℓ⃗), i.e., there are no holomorphic indices between ℓ⃗ and k⃗ or between ℓ⃗ and . 
For i sufficiently large, we then havef^i_j_1, …, j_q(r) ≥κ( λ^i_k⃗)^M^_ℓ⃗ + β r^N^_ +β/( λ^i_ℓ⃗)^M^_ℓ⃗ + N^_ + 2 β.According to the hypotheses, the relevant indices are connected only by antiholomorphic or flat maps. The estimates then follow by running Theorem <ref>a across the connecting indices. We can encapsulate the results of this section in the following statement. Given k ∈,β > 0, and > 0, there exists δ > 0 as follows. Given any u ∈ W^2,2( S^2, S^2 ) with E(u) ≤ 4 π k and(u)< δ,there exists at least one bubble-tree decomposition of u such the estimates of Theorem <ref> and Corollary <ref>a-c hold, with κ = ξ = δ. This follows by contradiction from the last two results.§ MAIN THEOREMS The following is our main technical result. Let u_i : S^2 → S^2 be a bubble-tree convergent almost-harmonic sequence with E(u_i) ≤ 4 π k. There exist a constant K ≥ 1, depending only on k, as well as constants C_<ref>b-d and M ≥ 1, depending on the bubble-tree data ℬ and β > 0, such that for all i sufficiently large, u_i satisfies each of the following statements. (a) There exist 0 < ρ^i_j ≤ζ_0, j = 1, …, ℓ, as follows: lettingΩ_i = S^2 ∖∪_j D̅_ρ^i_j (z_j), Ω̂_i = S^2 ∖∪_j D̅_ρ^i_j / 2(z_j),U^i_j = U^ρ^i_j_ρ^i_j / 2(z_j),we haveE_(u_i, Ω̂_i) ∨ E_ (u_i, Ω̂_i ) < _2,and, for j = 1, …, ℓ,E(u, Û^i_j)< _2,E_(u, Û^i_j) ≤ KE_(u, U^i_j) + M ( ρ^i_j )^2 (u) _L^2(Û^i_j)^2 ,andE_(u, Û_i) ≤ KE_(u, U^i_j) +M ( ρ^i_j )^2 (u) _L^2(Û^i_j)^2.Moreover, we either haveE_ (u_i, Ω̂_i ) < _2 and E_(u_i, U^i_j) ≤ M (E_(u_i, U^i_j) +(ρ^i_j)^2 (u_i) _L^2(Û_i)^2 )for i = 1, …, ℓ, orE_(u_i, Ω̂_i) < _2 and E_(u_i, U^i_j) ≤ M (E_(u_i, U^i_j) +(ρ^i_j)^2 (u_i) _L^2(Û_i)^2 ).Here _2 is the constant of Theorems <ref>-<ref>. (b) Given any ρ^i_j such that (<ref>-<ref>) hold for some K, M ≥ 1, let λ_i = max_j ρ^i_j. We then have( E(u_i, S^2) , 4 π) ≤ C_<ref>b( (u_i) ^2 + λ_i^1 - β(u_i) ).Here C_<ref>b may depend additionally on K and M.(c) For ∈ J_ and k⃗∈ J_ holomorphic and antiholomorphic bubble indices, respectively, at the same bubble point z_j_1 = z_k_1∈ S^2, we have the repulsion estimateλ^i_λ^i_k⃗ < C_<ref>c(u_i) ^1/L.(c) Let L ∈ be the integer described in Remark <ref>, and choose 1 < α < 2L+2/2L + 1. Suppose that the body may u_∞ is nonconstant and holomorphic,or that the outermost nonconstant bubbles on each branch are holomorphic. Then ( E(u_i) , 4 π) ≤ C_<ref>c(u_i) ^α. Here C_<ref>c may depend additionally on α. (d) Under the same assumption as (c), given any antiholomorphic bubble index ∈ J_, we have( λ^i_)^2L + 1 + β≤ C_<ref>d(u_i) .We assume that i is sufficiently large that Theorem <ref> applies to both the holomorphic and antiholomorphic energies. We then suppress the label i, writing u = u_i, Λ_ = Λ^i_ , δ = δ_i = (u_i), etc.We now decompose the map u into almost-holomorphic and almost-antiholomorphic parts, as follows. Define Ĵ_⊂ J to be the set of all ∈ J for whichE_( u, Λ̂_) ≥ E_( u, Λ̂_).Let Ĵ_ = Ĵ_^c, with ∈Ĵ_ satisfyingE_( u, Λ̂_) < E_( u, Λ̂_).We then haveJ = Ĵ_⨿Ĵ_.Moreover, suppose that (j_1, …, j_p) ∈Ĵ_ but (j_1, …, j_p+1) ∈Ĵ_, or vice-versa; then the holomorphic and antiholomorphic energies must “cross” in the neck. 
Specifically, letf(r) = f^i_j_1, …, j_q(r) = max√(_S^1_r(x^i_j_1, …, j_q ) e_(u_i) dθ), ξ^-1δ_i , g(r) = max√(_S^1_r(x^i_j_1, …, j_q ) e_ (u_i) dθ), ξ^-1δ_i .By Theorem <ref> and (<ref>), either f(λ^i_j_1, …, j_q) = ξ^-1δ = g(λ^i_j_1, …, j_q), or else we havef(λ^i_j_1, …, j_q) ≥κ E_(u, Λ_j_1, …, j_q )≥κ E_(u, Λ_j_1, …, j_q ) ≥κ^2 g(λ^i_j_1, …, j_q ).By (<ref>), either f(λ^i_j_1, …, j_q + 1) = ξ^-1δ = g(λ^i_j_1, …, j_q + 1) , or we haveg(λ^i_j_1, …, j_q + 1) ≥κ E_(u, Λ_j_1, …, j_q + 1 )≥κ E_(u, Λ_j_1, …, j_q + 1 ) ≥κ^2 f(λ^i_j_1, …, j_q + 1).Since both f and g are continuous, by the intermediate value theorem and Theorem <ref>a, there exists ρ∈κ^-1λ_j_1, …, j_q + 1 , κλ_j_1, …, j_q such thatM^-1 g(ρ) ≤ f(ρ) ≤ M g(ρ),where M = κ^-2 - L. Theorem <ref>b also gives (<ref>-<ref>), with K = (c_L)^-1.To obtain (a), we can apply the preceding discussion and (<ref>) to the body map ϕ_0, giving (<ref>) or (<ref>) depending on whether 0 ∈Ĵ_ or Ĵ_, respectively.Also note that given any ρ^i_j as in (a), these may be decreased so that (<ref>) (rather than either (<ref>) or (<ref>)) is satisfied. In particular, again by Theorem <ref>, we haveE_( u, U_ρ^2 ρ(x_j_1, …, j_p ) ) ≤κ^-1(E_( u, U_ρ^2 ρ(x_j_1, …, j_q ) ) + ρ^2 δ^2 )and alsoE_( u, U_ρ^2 ρ(x_j_1, …, j_q ) ) ≤κ^-1( E_( u, U_ρ^2 ρ(x_j_1, …, j_q ) ) + ρ^2 δ^2 ). This in particular gives (a).Together, these giveE ( u, U_ρ/2^4 ρ(x_j_1, …, j_q ) ) ≤ K ( E_( u, U_ρ^2 ρ(x_j_1, …, j_q ) ) ∧ E_( u, U_ρ^2 ρ(x_j_1, …, j_q) ) + _2^-1ρ^2 δ^2,which is the key hypothesis (<ref>) of Theorems <ref>-<ref>. Now let I_ and I_ be the set of connected components of Ĵ_ and Ĵ_, respectively, and letI = I_∪ I_.We re-index I by the same ordering as J; accordingly, an even-length index (i_1, …, i_2n) belongs to I_ and an odd-length index (i_1, …, i_2n+1) belongs to I_.Given ≠ 0 ∈ I, letbe the “root” of the sub-tree corresponding to , i.e. the smallest index in the corresponding connected component, and letq_ = x_.Also define a radius ρ_ as follows: since, by definition, (j_1, …, j_q-1) ∈ J_, there exists a radius λ_j_1, …, j_p≤ρ = ρ_≤ζλ_j_1, …, j_p-1 for which (<ref>) is satisfied: in other words, lettingW_ = U_ρ_i_1, …, i_q + 1^2ρ_i_1, …, i_q + 1 (q_i_1, …, i_q +1), Ŵ_ = U_1/2ρ_i_1, …, i_q + 1^4 ρ_i_1, …, i_q + 1 (q_i_1, …, i_q +1),we obtainE ( u, Ŵ_) ≤ M (( E_( u, W_) ∧ E_( u, W_) ) + ρ^2 δ^2 ).We further define the domainsΩ_i_1, …, i_q = D_ρ_i_1, …, i_q(q_i_1, …, i_q ) ∖∪_k D̅_ρ_i_1, …, i_q, k( q_i_1, …, i_q, k ), Ω̂_i_1, …, i_q = D_2ρ_i_1, …, i_q(q_i_1, …, i_q ) ∖∪_k D̅_1/2ρ_i_1, …, i_q, k( q_i_1, …, i_q, k ),where the unions run over k such that (i_1, …, i_q, k) ∈ I_ or I_, respectively. TheΩ_ are disjoint, with ∪_Ω_ = S^2, ∪_i_2, …, i_qΩ_i_1, i_2, …, i_q = D_ρ_i_1 (q_i_1)for each (i_1) ∈ I, andΩ_0 = S^2 ∖∪_i_1D̅_ρ_i_1(q_i_1),whileΩ̂_i_1, …, i_q∩Ω̂_i_1, …, i_q + 1 = U_i_1, …, i_q + 1.Supposing that u= u_i is sufficiently far along the sequence, we may assume thatE_(u, Ω̂_i_1, …, i_2n ) < _2andE_(u, Ω̂_i_1, …, i_2n + 1 ) < _2,so that the hypothesis (<ref>) is satisfied. We can now assemble the desired estimates. Letting λ = maxρ_,the local pre-Łojasiewicz inequality, Theorem <ref>, gives( E(u, Ω_ ) , 4 π) ≤ C λ^1 - β(u)for each ≠ 0. 
By the triangle inequality, in view of (<ref>), we have( E(u, ∪_i D_ρ_i ) , 4 π) = ( E(u, ∪_≠ 0Ω_ ) , 4 π) ≤ C λ^1 - β(u) .Now, applying the global pre-Łojasiewicz inequality, Theorem <ref>a, and the triangle inequality again, we obtain( E(u, S^2 ) , 4 π)≤( E(u, Ω_0) , 4 π) + ∑_≠ 0( E(u, Ω_ ) , 4 π) ≤ C ( (u) ^2 + λ^1 - β(u) ).This gives (b).To obtain (c) and (d), simply notice that in case the body map is holomorphic (or the leading bubbles are all holomorphic), the lower bound (<ref>) with m = 2L + 2 follows from Corollary <ref>a, so the results follow by using Theorem <ref>b in place of <ref>a.Supposing that = (i_1, … i_p) ∈ I_p corresponds to a sub-tree {j⃗^1, …, j⃗^n }⊂ J, we have the following:j⃗^i ∈ J_∪ J_0for each i,§.§ Proof of Theorems <ref>-<ref>Suppose for contradiction that the estimate of Theorem <ref>a fails. We then have a sequence of maps u_i satisfying the assumptions, but for which(E(u), 4 π ) ≥ i (u_i) ,which implies(u_i) ≤4 π/i.In particular, u_i form an almost-harmonic sequence. But we may then pass to a bubble-tree convergent subsequence and apply Theorem <ref>a-b, to obtain the estimate(E(u_i), 4 π ) ≤ C (u_i)for some constant C and i sufficiently large. This contradicts (<ref>), establishing the result.Theorem <ref>b follows similarly by contradiction from Theorem <ref>d, since -regularity (Lemma <ref>a) ensures that the body map is not constant.Theorem <ref>a follows by contradiction from Theorem <ref>b, where λ_i ≤λ is guaranteed by the assumptions (<ref>) and (<ref>).Meanwhile, Theorem <ref>b follows by contradiction from Theorem <ref>d, whose hypotheses are guaranteed by the assumptions (<ref>) and (<ref>). §.§ Proof of Theorem <ref> and Corollary <ref> Letδ(t) = (u(t) )andΔ(t) = E(u(t)) - 4π n, where n is the unique integer such thatE(u(t)) ∈( 4 π( n - 1/2) , 4π(n + 1/2) .To avoid confusion with δ(t), we use δ_0 during the proof in place of the constant δ in the statement.By the energy identity for 2D harmonic map flow <cit.>, we know that E(u(t)) can only jump by an integer multiple of 4 π at each singular time. Since |Δ(t)| < δ_0 for 0 ≤ t < T, the function Δ(t) is in fact continuous and decreasing, and the global energy identityΔ(t_2) + ∫_t_1^t_2δ(t)^2 = Δ(t_1)is valid. Let 0 < T_* ≤ T be the largest time such that the following hold for 0 ≤ t < T_*:E_(u(t), Ω' ) ≤ 2_0, max_i E ( u(t), Û_i' ) ≤ 2_0, κ/2≤min_i E_( u(t), U_i' ),and- 2 (δ_0^α - 2/α + 2 - α/α(T_* - t) )^α/α - 2≤Δ(t) ≤ 2 (δ_0^α - 2/α + 2 - α/α t )^α/α - 2.Assuming that δ_0 is sufficiently small, we will show that each of these estimates holds with strict inequality on 0, T_* . This implies that T_* cannot be maximal unless T_* =T, which will give us the desired estimates. The following local energy identity is standard (see e.g. <cit.> or <cit.>):| ∫( e_(u(t_2)) - e_(u(t_1)) ) φdV | ≤ C sup |∇φ | √(k)∫_t_1^t_2δ(t) dt,where φ is compactly supported. The same identity holds with e_ or e in place of e_. LetU_i' = U^5/2λ_i_4/5λ_i (x_i), Û_i' = U^3λ_i_2/3λ_i (x_i), Ω' = S^2 ∖∪D̅_4/5λ_i(x_i).We can apply (<ref>) several times, to obtain:E_(u(t), Ω')< _0 + C √(k)/minλ_i^2∫_0^tδ(t) dt, E(u(t), Û_i')< _0 + C√(k)/minλ_i^2∫_0^tδ(t) dt,andE_(u(t), U'_i) ≥κ - C√(k)/minλ_i^2∫_0^tδ(t) dt.We first prove (<ref>) with coefficient 1 in place of 2 on both sides. Since the hypotheses of Theorem <ref>b are satisfied, we haveΔ(t) ≤δ(t)^α,where we may ignore the constant by taking δ_0 sufficiently small. This gives usd/dtΔ(t) ≤ - Δ(t)^2/αas long as Δ(t) ≥ 0, andd/dtΔ(t) ≤ - (-Δ(t))^2/αthereafter. 
The inequalities follow from Gronwall's Theorem.Next, we prove strict inequality in (<ref>-<ref>). By (<ref>), we have d/dtΔ(t)^α - 1/α = -α - 1/αΔ(t)^-1/αδ^2(t).Together with (<ref>), this gives∫_t_1^t_2δ(t) dt = ∫_t_1^t_2δ(t)^2 δ(t)^-1dt≤∫_t_1^t_2δ(t)^2 Δ(t)^-1/αdt= -α/α - 1∫_t_1^t_2d/dtΔ(t)^α - 1/αdt ≤α/α - 1( Δ(t_1)^α - 1/α - Δ(t_2)^α - 1/α).Apply (<ref>) with t_1 = 0 and t_2 = T_*, to obtain∫_0^T_*δ(t) dt ≤2α/α - 1δ_0^α - 1/α.Provided that δ_0 is sufficiently small, we may conclude from (<ref>-<ref>) that strict inequality holds in (<ref>-<ref>). This implies that in fact T_* = T, so the first estimate of Theorem <ref>a now follows from (<ref>). The second estimate of Theorem <ref>a follows from (<ref>) by integrating the harmonic map flow equation in time and applying the L^2 norm, then using (<ref>).To obtain Theorem <ref>b, let u_∞ be the body map along any subsequence of times; by (<ref>-<ref>), this must be nonconstant and holomorphic. The stated L^2 estimate follows by letting t_2 →∞ in (<ref>) and inserting (<ref>). The C^k improvements follow by combining (<ref>) with standard derivative estimates valid away from the bubbling set {z_j} (see e.g. Song-Waldron <cit.>).To prove Corollary <ref>, suppose that u_∞ is nonconstant, and assume without loss of generality that it is holomorphic. Let {x_i} = {z_i} be the bubbling set along the given sequence t_i →∞, put λ_j = ζ_0 for each j, and take Ω as in Theorem <ref>b-<ref>. Since u(t_i) → u_∞ strongly on Ω, there exists κ > 0 such that they hypotheses (<ref>) and (<ref>) hold, with u_0 = u(t_i) for t_i sufficiently large. Theorem <ref> gives the desired conclusions.amsinitial
http://arxiv.org/abs/2312.16686v1
{ "authors": [ "Alex Waldron" ], "categories": [ "math.DG", "math.AP" ], "primary_category": "math.DG", "published": "20231227185854", "title": "Lojasiewicz inequalities for maps of the 2-sphere" }
LLM-SAP: Large Language Model Situational Awareness Based Planning Aldo Vera================================================================== In this study, we propose two novel input processing paradigms for novel view synthesis (NVS) methods based on layered scene representations that significantly improve their runtime without compromising quality. Our approach identifies and mitigates the two most time-consuming aspects of traditional pipelines: building and processing the so-called plane sweep volume (PSV), which is a high-dimensional tensor of planar re-projections of the input camera views. In particular, we propose processing this tensor in parallel groups for improved compute efficiency as well as super-sampling adjacent input planes to generate denser, and hence more accurate scene representation. The proposed enhancements offer significant flexibility, allowing for a balance between performance and speed, thus making substantial steps toward real-time applications. Furthermore, they are very general in the sense that any PSV-based method can make use of them, including methods that employ multiplane images, multisphere images, and layered depth images. In a comprehensive set of experiments, we demonstrate that our proposed paradigms enable the design of an NVS method that achieves state-of-the-art on public benchmarks while being up to 50x faster than existing state-of-the-art methods. It also beats the current forerunner in terms of speed by over 3x, while achieving significantly better rendering quality. [* Equal contribution.] § INTRODUCTIONDue to the growing prevalence of mobile phones and virtual reality (VR) headsets with stereo cameras, novel view synthesis (NVS) in the wild is becoming increasingly popular. This task presents an interesting challenge for computer vision research, as it requires the generation of high-quality novel views with spatial and temporal consistency in dynamic environments. Its complexity stems from the fact that application environments are often unfamiliar, potentially involving dynamic content, occlusions, and disocclusions as well as varying lighting conditions. Despite these challenges, substantial progress has been made in recent years, targeting applications such as stereo magnification <cit.>, passthrough for VR headsets <cit.>, interactive 3D photography (e.g, <cit.>) and VR videos with 6DoF NVS in dynamic scenes (e.g. <cit.>). The most suitable methods for NVS in the wild work on top of multi-layer representations of scenes, such as multiplane images (MPIs) <cit.>,multisphere images (MSIs) <cit.> and layered depth images (LDIs) <cit.>. In these methods, novel views are generated from a small set of images, which are transformed into a semitransparent layered representation by a neural network. Contrary to traditional view interpolation techniques with a single depth map, layered representations can model complex appearance effects, such as transparency, lightning reflection, and complex structural patterns, even in scenes with high depth complexity. Furthermore, in contrast to neural field methods such as NeRF <cit.>, these methods are particularly suitable for NVS in the wild because they do not require per-scene optimization and offer real-time rendering. However, NVS in dynamic scenes not only requires efficient rendering but also on-the-fly generation of such layered scene representations. In this regard, all available methods fall short of meeting this requirement due to their high computational complexity for scene understanding. 
For example, the current state-of-the-art MLI method SIMPLI <cit.> can render in 120FPS but takes around 3 seconds to generate an MLI at a relatively low resolution of 480p. This hinders wider adaptation of layer-based methods in the aforementioned use cases[For example, passthrough on VR devices (generating realistic views from camera streams at the user's eyes) requires rendering rates above 30FPS at 1080p or higher.] and prevents the development of potential future applications like immersive remote control and virtual telepresence via 2.5D or stereoscopic video calling.In this paper, we take a step towards real-time NVS in the wild by presenting two contributions that dramatically speed up existing layered-based approaches. Towards this end, we identify the use of so-called plane sweep volumes (PSV) as the main source of computational complexity. Obtaining such volumes - which constitute the neural network input in existing pipelines -requires multiple re-projections of the source views, which are expensive to obtain due to the need to compute projection matrices, map source to target coordinates, as well as to sample the source view multiple times. Furthermore, large PSVs necessitate the use of either 2D convolutional neural networks with a large number of channels or networks with three-dimensional convolutional kernels (e.g. <cit.>), both of which have high computational cost.To address this issue, we introduce two novel paradigms for generating and processing plane sweep volumes, enabling significantly faster view synthesis times without compromising quality: * Plane grouping: We propose breaking the PSV into several disjunct groups which are processed in parallel. This allows for employing small (and fast) neural network backbones without introducing information bottlenecks. The number of groups presents a flexible hyper-parameter that adjusts the per plane compute budget, hence trading speed and performance.* Super-sampling: We furthermore show that neural networks are able to leverage redundancies in the PSV to improve the performance of MPI methods without added computational costs. In particular, a sparse PSV can be super-sampled into a dense MPI by giving up on the usual one-to-one plane correspondence, which significantly reduces runtimes as creating PSV planes is an expensive process. Finally, building on these two contributions, we set a new reference runtime for NVS in the wild with MPI-based methods. Through an extensive set of experiments, we demonstrate its ability to achieve state-of-the-art quality, while being 50x faster than the best existing MPI, and 10x faster than the best MLI method to date. § RELATED WORKNovel View Synthesis (NVS) is a long-standing problem at the intersection of computer graphics and computer vision with seminal works in the field dating back to the 1990s. Early methods focused on interpolating between corresponding pixels from input images <cit.> or between rays in space <cit.>. In recent years, machine learning-based methodshave driven significant advancements in both the quality of rendered images and the range of possible new views. This progress has been primarily driven by huge developments in the field of deep learning, alongside the emergence of differentiable rendering methods, which allow for 3D scene understanding based solely on 2D supervision. 
In the following paragraphs, we summarize the two most common approaches for learned NVS.Explicit, layered scene representations Multi-plane images (MPIs) present a perspective variant of volumetric representations (such as 3D voxel grids) that, instead of using a uniformly sampled volume, positions a layered scene representation in the view frustum corresponding to one of the input (or target) cameras with fronto-parallel planes along the optical axis (usually placed at uniform disparities) <cit.>. Traditionally, each plane element contains color and opacity (α) values, which allows for fast and differentiable rendering with standard alpha compositing <cit.>. Furthermore, the soft alpha enables high-quality view synthesis with smooth transitions, semi-transparencies, realistic reflections, and specular highlights. Moreover, MPIs can model occluded content for a range of views that grows linearly in the number of planes <cit.>. Importantly, MPI-based methods do not require per-scene optimization and thus lend themselves to NVS in the wild. Most notable works in this line of research include the seminal paper on stereo magnification <cit.>, the theoretical analysis in <cit.>, the local MPI fusion proposed in <cit.>, the extension to spherical representations in <cit.>, the adaptive plane placement proposed in <cit.>, as well as the current state-of-the-art method, DeepView <cit.>, which models MPI learning as an inverse problem, solved by a sophisticated variant of learned gradient descent.[For completeness, we note that there is a growing line of work generating MPIs from single image inputs but they typically achieve inferior quality compared to the multi-camera settings mentioned in Section <ref> (see e.g. <cit.>).] In a concurrent line of research, several methods propose to enrich the rigid MPI plane representation by adding per-entry depth values, which results in deformable layers that can adapt to the scene geometry. This approach, referred to as layered depth images (LDI) or multilayer images (MLI) goes back to <cit.> and has recently been proposed for in the wild NVS <cit.>. MLIs promise more compact scene representations, but as we point out in Section <ref>, the main bottleneck of any layer-based method does not lie in rendering from the intermediate representation but in generating it in the first place. Here, the two lines of research do not differ. All of the above methods perform planar reprojections of the input views assuming different depth values to create a so-called Plane Sweep Volume (PSV), which allows the network to process any given scene independently of the specific camera calibrations and relative rotations/translations between the input cameras. Since the PSV usually consists of a double-digit number of planes, its creation is time-consuming and furthermore, its magnitude necessitates the use of either 2D convolutional neural networks with high numbers of channels (e.g. <cit.>) or networks with slow three-dimensional convolutional kernels (e.g. <cit.>). Due to this computational constraint, current techniques face difficulties in producing and rendering layered scene representations in real time. In Section <ref>, we put forth two adaptations to the above-mentioned conventionalmethods that significantly reduce their runtime requirements.=-1 Implicit scene representations Explicit representations often have large memory footprints and poor scalability with image resolution. 
An alternative approach is to implicitly store light fields as functions of spatial location and viewing angle. Neural scene representations, such as NeRF <cit.>, optimize continuous scene representations using MLPs, sometimes combined with sparse explicit representations (e.g <cit.>). These methods excel in view-dependent effects, non-Lambertian surfaces, and complex objects but are traditionally limited to static scenes, per-scene optimization, and slow rendering due to numerous MLP evaluations per pixel.=-1 Recent developments have led to faster training <cit.>, increased rendering frame rate <cit.>, dynamic scene modelling <cit.>, and reduced input camera requirements <cit.>. Some techniques avoid per-scene optimization <cit.> by learning shared priors. For example, IBRNet <cit.> learns a generic view interpolation function that generalizes to unseen scenes. However, this method still requires expensive ray sampling and achieves the best performance only with scene-specific fine-tuning. Further related works NeX <cit.> proposes a hybrid of MPI and neural radiance fields, resulting in high-quality renderings in real time. Yet, this method requires a high number of input images as well as a lengthy per-scene optimization of the applied basis functions and thus is not suitable for in-the-wild NVS. A distinct solution explicitly developed for NVS on VR headsets is NeuralPassthrough <cit.>. This approach utilizes simple image-based rendering based on forward warping with learned stereo-depth. As a result, it provides fast view synthesis but cannot effectively model reflections and semi-transparencies. Furthermore, it cannot inpaint disoccluded regions, opting instead to fill these disoccluded pixels with the smoothed colors of the local vicinity. Clearly, this limits the applicability of <cit.> to use cases with very small camera offsets of just a few centimeters.§ METHOD§.§ SettingMPI methods usually generate novel views by transforming input camera views into an intermediate, layered scene representation from which new perspectives can be rendered. In particular, given a set of V input views {I_v}_v=1^V, where each image I_v∈ℝ^3,H,W, layered scene representations are generally obtained by first broadcasting the input views D times along a new depth dimension ℬ:ℝ^3,H,W→ℝ^D,3,H,W and then using inverse homography (details in Sect. <ref>) to warp the repeated input images to the MPI camera 𝒲:ℝ^D,3,H,W→ℝ^D,3,H,W assuming planar projections spaced linearly in disparity. The resulting tensor, named PSV, aligns at every disparity level all regions from all input images whose rays originated from the same corresponding disparity. This property allows scene understanding by patch comparison, a task for which convolutional neural networks are a natural fit.Thus, the PSV is processed by a mapping ℱ_θ:ℝ^B,D,V,3,H,W→ℝ^B,D,4,H,W (where B constitutes the batch dimension) parameterized with learnable parameters θ, which predicts a volumetric scene representation consisting of fronto-parallel RGBα images (MPIs) or deformable layers of RGBdα <cit.>. In the MPI case, novel views can be rendered via over-operation <cit.>: I_target = ∑_d=1^D (RGB_dα_d ∏_j=0^d-1(1-α_j)),where RGB_d and α_d are the color and opacity values of the MPI at layer d. The term ∏_j=0^d-1(1-α_j) constitutes the net transmittance of layer d. 
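To make the over operation above concrete, the following PyTorch snippet composites an MPI front-to-back. It is an illustrative sketch rather than the implementation used in the paper, and it assumes that plane colors and opacities are stored in separate tensors ordered from near to far.

```python
import torch

def composite_mpi(rgb, alpha):
    """Render a view from an MPI via alpha compositing (the 'over' operation).

    rgb:   [D, 3, H, W] colors per plane, ordered front (d=1) to back (d=D).
    alpha: [D, 1, H, W] opacities per plane.
    """
    # Net transmittance of plane d: product of (1 - alpha_j) over the planes in
    # front of it; the empty product for the front-most plane is 1.
    trans = torch.cumprod(1.0 - alpha, dim=0)
    trans = torch.cat([torch.ones_like(alpha[:1]), trans[:-1]], dim=0)
    return (rgb * alpha * trans).sum(dim=0)  # [3, H, W]
```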
This operation is efficient and fully differentiable, which enables learning ℱ_θ purely based on image supervision.Existing methods differ in where exactly the MPI camera is placed and how the mapping ℱ_θ is constructed and learned. However, what is common to all methods is the rather low inference speed due to the need to process the highly redundant PSV with a large neural network. The present method presents two novel input processing paradigms for improving this very aspect. An overview is presented in Figure <ref>. In what follows, we focus on target-centered MPI images (as in <cit.>), which we identify as an adequate setting for dynamic NVS in the wild. Nonetheless, as detailed in Section <ref> our contributions also work in the setting of static, warped MPIs. In general, we highlight that our contributions can be integrated into any method that relies on processing PSVs, encompassing MLI methods. Finally, it is also conceivable to extend our approach to stereo-depth methods based on cost volumes (e.g. <cit.>). §.§ MethodologyWe propose two novel paradigms for processing PSV tensors that leverage the fragmented nature of MPI-based NVS as well as redundancies in the PSV commonly used as input. Our approaches overcome the runtime limitations of existing methods and enable real-time generation of novel views in the wild. An overview of our method is presented in Figure <ref>. Recall that the PSV is of shape [B,D,V,3,H,W], where usually B=1 during inference and B>1 during training. There are two obvious ways of processing the PSV: (a) Most methods merge depth planes into the RGB channel dimension, which results in network inputs with triple-digit channels (3· D· V) as typical values of D range between 32 <cit.> and 80 <cit.>. This necessitates the use of large neural networks to prevent the information bottleneck that would arise from the large input PSV tensor (for example, channels in the network employed the seminal works of <cit.> range from 99 in the first layer to 1024 in the last layer). (b) The works of <cit.> and <cit.> (also in <cit.> for single image MPIs) present notable exceptions. There, depth planes are processed independently by merging the input view dimension V into the RGB channel dimension, but the depth dimension D is instead merged with the batch dimension B. In this way, the number of input channels to the network is drastically reduced to (3· V), which allows for much smaller networks. Yet, this process is inherently slow because the network needs to be queried D times. Furthermore, when processing each depth plane independently, the network cannot access cross-depth information. This can decrease the consistency between the predicted MPI planes, which in turn may lead to aliasing artifacts <cit.>.We show that the best performance-latency trade-off is achieved by processing depth planes in groups, as opposed to (a) all as one PSV or (b) independently plane-by-plane. Interestingly, we find that modeling only interactions of neighboring planes (e.g. planes within a group) is sufficient to achieve state-of-the-art performance while drastically reducing the computation time. Furthermore, inspired by results from single view novel-view synthesis with MPIs such as <cit.>, we demonstrate that performance can be further improved by letting ℱ_θ super-sample the PSV into MPIs with larger numbers of depth planes, which is significantly more efficient than increasing both input (PSV) and output (MPI) planes alike. 
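The two existing input layouts (a) and (b) amount to nothing more than different reshapes of the same PSV tensor. The toy example below (shapes only, no network; the plane and view counts are illustrative) shows why (a) forces a wide first layer while (b) multiplies the number of forward passes.

```python
import torch

# Toy PSV: batch B, depth planes D, input views V, RGB channels, spatial size H x W.
B, D, V, H, W = 1, 64, 4, 480, 800
psv = torch.rand(B, D, V, 3, H, W)

# (a) Fold all depth planes into the channel axis: one forward pass,
#     but the first convolution already sees 3*D*V = 768 input channels.
x_a = psv.reshape(B, D * V * 3, H, W)

# (b) Fold depth into the batch axis: a small 3*V = 12 channel input,
#     but the backbone runs D = 64 times and sees no cross-depth context.
x_b = psv.reshape(B * D, V * 3, H, W)
```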
The following two paragraphs present our contributions in more detail.

§.§.§ Plane grouping

Given the PSV computed as 𝒲(ℬ({I_v}_v=1^V)), we obtain the input for our neural network ℱ_θ by dividing the PSV tensor along the depth dimension into G sets of D/G planes, assuming D is divisible by G. Each set is then considered as a sample in the batch dimension and the depth, view, and color channels are merged (as in other MPI methods), resulting in the transformation 𝒢:ℝ^B,D,V,3,H,W→ℝ^B· G,D/G· V·3,H,W. Compared to approach (a) introduced above, this reduces the information bottleneck in the first layer by a factor of G. At the same time, it also reduces the number of forward passes of (b) by a factor of G. This is illustrated in Figure <ref>. As can be seen, the rendering speed decreases with the number of groups, as expected. At the same time, performance increases with G up to the point of having 32 groups (with two planes each). Compared with lower values of G, this indicates that more compute on local context is better than having more global context but less compute per plane. At the same time, comparing to G=64 shows that having at least some local context is better than processing planes completely independently. Notably, most existing approaches are either (a) on the far left (e.g. <cit.>) or (b) on the far right (e.g. <cit.>). Our results show that both approaches are sub-optimal. In fact, the best performance-latency trade-off lies between these two extremes.[Note that the specific intersection depicted here does not indicate an optimum as it depends on the scaling of the y-axes.] Importantly, our plane grouping approach allows for considerable flexibility, as the specific selection of G enables a simple adjustment to meet runtime and quality requirements, even dynamically at inference time.

§.§.§ Plane super-sampling

The performance of MPI methods is generally correlated with the number of MPI planes used for rendering <cit.>. Traditionally, increasing the number of input planes in the PSV and output MPI planes has been done with one-to-one correspondence. However, this approach results in higher computational costs, as the time required to generate the PSV is proportional to the number of depth planes, i.e., Ω(D). Taking inspiration from single view MPI methods (e.g. <cit.>), we here show that such strict correspondence is actually not necessary. Instead, redundancies in the PSV can be leveraged by the neural network to generate better results with super-sampled MPIs. As the results show, the network ℱ_θ can predict super-sampled MPIs of almost equal quality as the standard one. In particular, the performance of MPI methods can be enhanced at marginal computational cost[Only the number of output channels of the last layer of the neural network changes.] by letting the network ℱ_θ leverage redundancies in the PSV to predict super-sampled MPIs. As can be seen in Figure <ref>, this approach yields significantly better results than the regular method (compare lines vertically), while achieving comparable performance to regular models with equivalent numbers of MPI planes (compare lines horizontally). This result is significant given the elevated computational cost of generating PSV planes.
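A tensor-level sketch of the grouping transform 𝒢 and of the super-sampled output is given below. It is a schematic re-implementation for illustration only; in particular, the channel layout of the network output (S·D/G planes of four RGBα channels per group) is an assumption, and the shapes correspond to the fMPI-M configuration introduced below in the training details.

```python
import torch

def group_psv(psv, G):
    """The G transform: split the D depth planes into G groups carried in the batch axis."""
    B, D, V, C, H, W = psv.shape
    assert D % G == 0, "D must be divisible by G"
    return psv.reshape(B * G, (D // G) * V * C, H, W)

def ungroup_mpi(out, B, G, D, S):
    """Merge the per-group network outputs back into a single MPI with S*D planes."""
    _, _, H, W = out.shape
    return out.reshape(B, S * D, 4, H, W)  # 4 = RGB + alpha

psv = torch.rand(1, 32, 4, 3, 464, 800)       # D=32 input planes, V=4 views
x = group_psv(psv, G=16)                      # [16, 24, 464, 800]: network input
out = torch.rand(16, 2 * 2 * 4, 464, 800)     # per group: S*D/G = 4 RGB-alpha planes
mpi = ungroup_mpi(out, B=1, G=16, D=32, S=2)  # [1, 64, 4, 464, 800]
```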
For instance, generating a PSV of 4 input cameras at 464 × 800 takes 24.21ms for 32, 43.6ms for 64 and 85.25ms for 128 depth planes on an A100 GPU (see Figure <ref> for more details).

§.§ Training Details

In order to show the above-mentioned flexibility of our approach, we design and train three different models, each parameterizing ℱ_θ with a fully convolutional encoder-decoder network. We consider two backbones: (i) the simple 4-level U-Net architecture presented in <cit.> (details in Table <ref>) and (ii) a variant of the former where we replace the blocks of each level with the corresponding blocks of the ImageNet pre-trained ConvNeXt tiny <cit.>. Given these two backbones, we design the following three versions of our method, which we term fast MPI (or fMPI for short):
* fMPI-S: U-Net backbone, D=16, G=4, S=2
* fMPI-M: U-Net backbone, D=32, G=16, S=2
* fMPI-L: ConvNeXt backbone, D=40, G=20, S=2.
For instance, the M model receives a PSV of D=32 planes as input. It processes this input in G=16 parallel forward passes, each taking D/G=2 input planes and predicting S· D/G=4 output planes. In total, the model hence outputs an MPI of S· D=64 planes. Following <cit.>, we do not directly predict an RGB value per MPI plane pixel but opt to take the input views as an effective prior. In particular, the network outputs only a single RGB background image per plane group alongside plane-specific view weights (one for each input camera and for the background image). The final MPI RGB value is then a softmax-weighted combination of the background image and the input image colors from the corresponding pixels in the PSV (see Eq. <ref>). Since the described rendering approach is fully differentiable, it conveniently allows for training ℱ_θ solely based on image supervision. Toward this end, we construct and minimize the following loss function: ℒ(Y,Ŷ_θ) := ‖Y-Ŷ_θ‖_1 + SSIM(Y,Ŷ_θ) + λ‖VGG(Y)-VGG(Ŷ_θ)‖_1, where Y and Ŷ_θ constitute the ground truth and predicted target views. SSIM stands for the structural similarity index measure <cit.> and the term ‖VGG(Y)-VGG(Ŷ_θ)‖_1 represents the so-called perceptual loss (LPIPS), calculated from the image embeddings given by the first four layers of a VGG19 network <cit.>. We choose λ=0.01 and minimize ℒ with respect to θ under the classical empirical risk minimization regime, using the Lion optimizer <cit.> with learning rate lr=0.00009 as well as β_1=0.99 and β_2=0.90. Furthermore, we employ a learning rate decay factor of 10 for the last 20% (of 150'000) iterations. As usual, we train on patches (in our case of size 352 × 352) instead of full images. During each step, a random target view is chosen for each training sample. All experiments are performed on a set of eight NVIDIA A100 GPUs with 40GB memory. Due to computational constraints, we did not grid-search any of the hyper-parameters.

§ RESULTS

We present results on two publicly available NVS datasets, which are standard for benchmarking novel view synthesis methods. These datasets are not exclusively tied to NVS in the wild, but for a fair comparison, we will limit the methods considered in our benchmarks to those able to generalize to unseen scenes.[Excluding implicit and hybrid methods like NeX <cit.>, which achieves outstanding performance (e.g. 0.7% better SSIM on Spaces than DeepView) but is constrained to per-scene pre-training and higher numbers of input views.]
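For completeness, the training objective defined in the training details above can be sketched as follows. The SSIM and VGG feature extractors are assumed to be supplied by the training code (any standard implementations will do), and the SSIM term is written as the usual (1 - SSIM) dissimilarity so that the loss decreases as similarity increases.

```python
import torch
import torch.nn.functional as F

def fmpi_loss(pred, target, ssim_fn, vgg_features, lam=0.01):
    """L1 + SSIM-based + perceptual reconstruction loss (illustrative sketch)."""
    l1 = F.l1_loss(pred, target)
    dssim = 1.0 - ssim_fn(pred, target)                               # structural dissimilarity
    perceptual = F.l1_loss(vgg_features(pred), vgg_features(target))  # VGG feature distance
    return l1 + dssim + lam * perceptual
```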
At the time of writing, the best-performing methods include DeepView <cit.>, LiveView <cit.>, SIMPLI <cit.> and IBRNet <cit.> (see Section <ref> for more details). At least one of the former methods beats other common benchmarks like Soft3D <cit.>, LLFF <cit.> and NeRF <cit.> in terms of both runtime and performance. We furthermore include StereoMag <cit.> into our benchmark as it is currently the fastest model. §.§ Spaces We first consider the Spaces dataset published by DeepView <cit.>. This dataset consists of 100 indoor and outdoor scenes captured 5 to 14 times from different viewpoints, using a fixed rig of 16 forward-facing cameras. 90 scenes are used for training and 10 scenes are held-out for evaluation. Each image has a resolution of 480 × 800. <cit.> presents four different camera setups. We here focus on the most challenging one called "4-views, large baseline", which has four inputs (arranged in a rectangle of approximate size 40 × 25cm) and 8 target cameras.As can be seen in Table <ref>, fMPI-L achieves state-of-the-art in SSIM and PSNR, while being ∼ 50 × faster than DeepView. This suggests that target-centered MPIs can be learned without sophisticated learned gradient descent algorithms when using local cross-plane context and a powerful backbone. Furthermore, fMPI-M performs comparably to LiveView but is ∼3.5 × faster. Finally, our fastest variant beats the current forerunner in terms of both performance and speed <cit.>. Figure <ref> shows that fMPI-S can achieve good quality renderings from four input cameras with over 25FPS (see Figure <ref> for detailed runtime breakdown).[We note that, for a fair comparison, we time all models in fp32 and omit any runtime optimizations like compiling, operator fusion, kernel auto-tuning, static graph freezing, etc.]§.§ Real Forward-FacingAdditionally, we train and evaluate on the Real Forward-Facing dataset, introduced by <cit.>. This dataset is composed of 48 static indoor and outdoor scenes (40 for training and 8 for evaluation) with 20 to 62 images each from handheld cellphone captures. Camera poses are computed using the COLMAP structure from motion implementation <cit.>. 18 of the images in the test scenes are held out as target views for the test set. Evaluation is performed by selecting the five closest images to the target view as input views. Table <ref> shows a quantitative comparison of fast MPI and previous methods on this dataset. We observe that fMPI-M and fMPI-L outperform previous methods by a large margin. fMPI-S performs on par with SIMPLI while offering 100x faster runtime speed, enabling real-time NVS in the wild. This raises doubts on the marginal value of more compact scene representations, as suggested by works on multi-layer images and layered depth images.[One can of course argue that MLI-based scene representations are more memory efficient, but we emphasize that even in this case, the plane sweep volume and the neural network activations are still memory heavy.]§.§ Ablations To substantiate the claims of our two major contributions, ablations on plane grouping and super-sampling are depicted in Fig. <ref> and Fig. <ref>. Regarding the former, we find that plane grouping not only offers a simple speed-performance trade-off but more importantlyFig. <ref> shows that processing PSV planes in groups yields strictly better performance than either of the two currently common approaches (namely plane by plane or joint processing). Regarding the latter, Fig. 
<ref> shows that super/sub-sampling planes offers a simple, yet powerful, option for practitioners to gain performance or save runtime with a given MPI method. This claim is substantiated by the fact that fMPI-M matches the performance of LiveView despite using only half the number of PSV planes (32 vs 64), which yields 3x speed-ups using the same network architecture. In what follows, we review further design choices of fMPI. First, we have chosen to generate target-centered MPIs. These can be generated on the fly and are thus suitable for dynamic scene content. However, warping static MPIs to novel target views may yield temporal consistency across views. For in-the-wild applications with dynamic scene content, we foresee that the best approach will be a hybrid method, re-computing MPIs at given intervals and using view-consistent homography warpings in between. From a runtime perspective, Sect. <ref> shows that our innovations allow for generating novel MPIs at the target view at no more than 2x the latency of warping a static MPI to a novel view.[Compare the time for generating 32 MPI planes on the left of Fig. <ref> (19.6ms) to the runtime of fMPI-S (34ms), which also gives 32 MPI planes.] From a quality perspective, we find similar performance. Namely, the SSIM of fMPI-M on Spaces drops by only 0.011 when generating a single MPI (at the center of the camera rig) and warping it to all nine target views, instead of generating one MPI per target view. Second, as described in Section <ref>, in addition to input view weights, ℱ_θ also outputs a single RGB image for each plane group to facilitate inpainting. Although the benefits of this approach are relatively marginal (an increase in SSIM of 0.0027 for fMPI-M on Spaces), we opted to retain this feature due to the minimal computational overhead (∼ 3.5ms). Third, to isolate the impact of the larger backbone of fMPI-L from the fact that these weights were pre-trained on ImageNet, we retrained this model from scratch with random initialization. Interestingly, we observed that the Spaces SSIM score declined by 0.006, indicating that approximately 26.2% of the performance gap between our M and L models is attributable to pre-training.

§ LIMITATIONS AND FUTURE WORK

In this study, we presented two novel input processing paradigms for layer-based NVS methods that significantly improve their runtime requirements. Our approach is highly flexible, enabling a trade-off between performance and speed while outperforming state-of-the-art methods on public benchmarks. Notably, it is also very general and can benefit layer-based NVS methods of all sorts. Despite these improvements, some limitations and interesting areas for future work remain. Firstly, temporal consistency is not enforced in our method, which could lead to inconsistencies when synthesizing novel views in videos. We thus consider the integration of our innovations into an MPI pipeline for online video generation an interesting follow-up. Secondly, developing optimized code for homography warping and optimizing network inference times, for example by employing advanced techniques such as horizontal and vertical operator fusion, kernel auto-tuning and dynamic memory management (as done in optimization libraries like <cit.> and <cit.>), has the potential to yield significant speedups. Thirdly, more efficient backbone architectures should further enhance the real-time capabilities of our method.
In this regard, inspiration can be drawn from the vast literature on efficient segmentation models (see e.g. <cit.>). Lastly, depth-based losses could further improve rendering quality when ground-truth depth is available. Importantly, we did not consider the memory requirements of methods based on layered representations. Generally, these methods have a high memory footprint, which can pose limitations in resource-constrained environments. Exploring ways to reduce this footprint presents an intriguing avenue for future research. Finally, integrating our approach with adaptive positioning of the MPI planes in space (e.g. <cit.>), rather than fixing them at inverse disparity levels, is likely to enhance performance in practical applications and could lead to additional runtime savings.
http://arxiv.org/abs/2312.16109v1
{ "authors": [ "Jonas Kohler", "Nicolas Griffiths Sanchez", "Luca Cavalli", "Catherine Herold", "Albert Pumarola", "Alberto Garcia Garcia", "Ali Thabet" ], "categories": [ "cs.CV", "cs.LG" ], "primary_category": "cs.CV", "published": "20231226162408", "title": "fMPI: Fast Novel View Synthesis in the Wild with Layered Scene Representations" }
Transfer and Alignment Network for Generalized Category Discovery
==================================================================

Generalized Category Discovery (GCD) is a crucial real-world task that aims to recognize both known and novel categories from an unlabeled dataset by leveraging another labeled dataset with only known categories. Despite the improved performance on known categories, current methods perform poorly on novel categories. We attribute the poor performance to two reasons: biased knowledge transfer between labeled and unlabeled data and noisy representation learning on the unlabeled data. The former leads to unreliable estimation of learning targets for novel categories and the latter hinders models from learning discriminative features. To mitigate these two issues, we propose a Transfer and Alignment Network (TAN), which incorporates two knowledge transfer mechanisms to calibrate the biased knowledge and two feature alignment mechanisms to learn discriminative features. Specifically, we model different categories with prototypes and transfer the prototypes in labeled data to correct model bias towards known categories. On the one hand, we pull instances with known categories in unlabeled data closer to these prototypes to form more compact clusters and avoid boundary overlap between known and novel categories. On the other hand, we use these prototypes to calibrate noisy prototypes estimated from unlabeled data based on category similarities, which allows for more accurate estimation of prototypes for novel categories that can be used as reliable learning targets later. After knowledge transfer, we further propose two feature alignment mechanisms to acquire both instance- and category-level knowledge from unlabeled data by aligning instance features with both augmented features and the calibrated prototypes, which can boost model performance on both known and novel categories with less noise. Experiments on three benchmark datasets show that our model outperforms SOTA methods, especially on novel categories. Theoretical analysis is provided for an in-depth understanding of our model in general.
Our code and data are available at <https://github.com/Lackel/TAN>.§ INTRODUCTIONDespite remarkable breakthroughs achieved by modern deep learning systems, the majority of models are designed under a close-world setting, based on the assumption that training and test data are from the same set of pre-defined categories <cit.>. However, many practical problems such as intent detection <cit.> and relation extraction <cit.> are open-world, where the well-trained models may encounter unlabeled data containing unseen novel categories. To meet the open-world demands, Generalized Category Discovery (GCD) was widely studied in both NLP <cit.> and CV fields <cit.>.GCD requires models to recognize both known and novel categories from encountered unlabelled data based on a set of labeled data with only known categories, which can adapt models to the increasing number of categories without any additional labeling cost.Most existing works <cit.> adopt a two-stage approach to address GCD: pre-training on labeled data and then transferring the pre-trained model for pseudo-label training on unlabeled data. Even though these methods have achieved good performance on known categories, they usually perform poorly on novel categories due to the lack of supervision (yellow bar in Fig. <ref>), which limits their applications in the real world.We attribute the poor performance to two reasons: Biased knowledge transfer and Noisy representation learning.First, models pretrained on labeled data with only known categories tend to be over-confident and biased towards known categories, so instances with novel categories in unlabeled data can be easily misclassified into known categories (Fig. <ref> (a) Top), where the biased knowledge transfer can lead to noisy estimation of learning targets for novel categories (e.g., biased category prototypes <cit.> or unreliable pseudo labels <cit.>).Second, under the noisy learning targets, it is usually hard for current models to learn discriminative representations from unlabeled data effectively. Moreover, these methods solely focus on instance- <cit.> or prototype-based discrimination <cit.> with the noisy learning targets, but fail to combine them together to capture both instance- and category-level semantics, which can further disrupt the performance. To mitigate above issues, we propose a Transfer and Alignment Network (TAN) to calibrate the biased knowledge and learn discriminative features.We first propose two knowledge transfer mechanisms to calibrate the biased knowledge caused by pre-training. Specifically, we model different categories with prototypes <cit.>, then we leverage prototypes of known categories in labeled data as a prior to guide the training process on unlabeled data.On the one hand, we propose Prototype-to-Instance Transfer (P2I Trans) to cluster instances with known categories in unlabeled data around these prototypes to form compact clusters, which can make different categories discriminative and avoid boundary overlap between known and novel categories.On the other hand, inspired by the fact that similar categories may share some common features (e.g., cats and dogs), we propose Prototype-to-Prototype Transfer (P2P Trans) to transfer these prototypes to calibrate the noisy prototypes estimated from unlabeled data. To avoid negative transfer between dissimilar categories, we introduce semantic similarities between categories as weights and select only the most similar k prototypes for the prototype calibration. 
By transferring knowledge from known to novel categories, the calibrated prototypes can be used as reliable learning targets to guide the subsequent representation learning. Combining P2I and P2P Trans, TAN can learn clear decision boundaries for known categories and reliable prototypes for novel categories, which can help to alleviate the effects of biased knowledge transfer (Fig. <ref> (a) Bottom).After knowledge transfer, we further propose two feature alignment mechanisms to acquire both instance- and category-level knowledge from unlabeled data with less noise. First, we propose Instance-to-Prototype Alignment (I2P Align) to pull instance features closer to the corresponding calibrated prototypes to acquire category-level knowledge (i.e., common category semantics embedded in prototypes from multiple instances), so that these features can be discriminative and compact around the prototypes to form more distinguishable decision boundaries.Second, we propose Instance-to-Instance Alignment (I2I Align) to align instance features with their augmented features to acquire instance-level knowledge (i.e., specific instance semantics embedded in instance features), so that these features can be self-consistent and locally smooth for better representation learning <cit.>. As shown in Figure <ref>, our model achieves the best performance on three benchmark datasets, and the improved performance on novel categories further validates the effectiveness of our model.Last but not least, we theoretically justify the effectiveness of our model.Our main contributions can be summarized as follows: * We propose a Transfer and Alignment Network (TAN) to mitigate the performance gap between known and novel categories in GCD.* We propose two knowledge transfer mechanisms to alleviate the effects of biased knowledge transfer and two feature alignment mechanisms to acquire both instance- and category-level knowledge with less noise.* Extensive experiments show that our model outperforms SOTA methods, and theoretical analysis further validates the effectiveness of our model.§ RELATED WORK §.§ Generalized Category DiscoveryGeneralized Category Discovery (GCD) is a practical and challenging task formalised by <cit.>. Under the open-world setting, GCD assumes that the newly collected unlabeled data contain novel categories that have never been seen during training. Due to the lack of supervision for novel categories, previous methods mainly employed pseudo-labeling based methods <cit.> or self-supervised methods <cit.> to learn from unlabeled data. For example, <cit.> proposed to generate pseudo labels by clustering, <cit.> and <cit.> improved this method by generating more robust and consistent pseudo labels. As for self-supervised methods, <cit.> proposed a semi-supervised k-means framework with contrastive learning to learn discriminative features, and <cit.> proposed a decoupled prototypical network to decouple known and novel categories from unlabeled data. Another line of work focuses on the discovery of novel fine-grained categories from coarsely-labeled data <cit.>, which can transfer knowledge between different label hierarchies. Despite the improved overall performance, most of these methods performed poorly on novel categories because of the biased knowledge transfer and noisy representation learning, which may limit their applications in the real world.§.§ Transfer LearningTransfer learning aims at transferring knowledge from the source domain to boost model performance on the target domain <cit.>. 
GCD is related to transfer learning since we need to transfer knowledge from known to novel categories. Most of current methods are based on the pre-training and fine-tuning paradigm to transfer knowledge implicitly by initializing model parameters <cit.>. For example, <cit.> proposed to use contrastive learning to pre-train their model, and <cit.> combined supervised learning and masked language modeling to initialize their model.However, we think this paradigm is sub-optimal for GCD because models pre-trained on labeled data tend to be biased towards known categories. § METHODIn this section, we first formulate the Generalized Category Discovery (GCD) task. Then we introduce our Transfer and Alignment Network (TAN) in detail. Specifically, we first pre-train a feature encoder and learn category prototypes from both labeled and unlabeled data. Then we propose two knowledge transfer mechanisms to learn clear decision boundaries for known categories and reliable prototypes for novel categories, which can help to mitigate the effects of biased knowledge transfer. After knowledge transfer, we propose two feature alignment mechanisms to capture both instance- and category-level semantics to learn discriminative features. The framework of our model is shown in Fig. <ref> (b). §.§ Problem FormulationModels trained on a labeled dataset 𝒟^l = {(x_i,y_i)|y_i∈𝒴_k} can recognize pre-defined known categories 𝒴_k well. However, in the open world, the trained models may encounter unlabeled data 𝒟^u = {x_i|y_i∈{𝒴_k, 𝒴_n}} that contain both known categories 𝒴_k and novel categories 𝒴_n, which can make the models fail. To cope with this limitation, Generalized Category Discovery (GCD) requires models to recognize both known and novel categories based on 𝒟^l and 𝒟^u, without any annotation for novel categories. We denote M = |𝒴_k| as the number of known categories and K = |𝒴_k| + |𝒴_n| as the number of all categories. Finally, model performance will be measured on a testing set 𝒟^t = {(x_i,y_i)|y_i∈{𝒴_k, 𝒴_n}}.§.§ Pre-training and Prototype LearningWe use the pre-trained BERT <cit.> as our feature encoder F_θ to extract feature z_i = F_θ(x_i) for the input x_i. To adapt the pre-trained model to the downstream GCD task, we use cross-entropy loss on labeled data and masked language modeling (mlm) loss <cit.> on unlabeled data to pre-train F_θ, where the mlm loss can help to learn general knowledge and reduce model bias towards known categories.To transfer knowledge between different categories and acquire reliable learning targets for unlabeled data, we model categories with prototypes <cit.> and learn two sets of prototypes from the labeled and unlabeled dataset, respectively. For the labeled dataset, we take average of all instance features belonging to the same category as labeled prototypes P^l = {μ_j^l}_j=1^M, where μ_j^l = 1/|𝒟^l_j|∑_x_i∈𝒟^l_j F_θ(x_i) and 𝒟^l_j is a set of labeled instances from the category j. For unlabeled data, we follow <cit.> to perform clustering and utilize cluster centers as estimated unlabeled prototypes P^u = {μ_j^u}_j=1^K, where μ_j^u = 1/|𝒟^u_j|∑_x_i∈𝒟^u_j F_θ(x_i) and 𝒟^u_j is a set of unlabeled instances belonging to the cluster j.§.§ Knowledge TransferTo transfer knowledge between known and novel categories and alleviate the effects of biased knowledge transfer, we propose two knowledge transfer mechanisms called Prototype-to-Prototype Transfer (P2P Trans) and Prototype-to-Instance Transfer (P2I Trans). 
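As a concrete reference for the prototype estimation described above, the following sketch computes labeled prototypes as per-class feature means and unlabeled prototypes as cluster centers. The feature matrices and the use of k-means are placeholders for the encoder outputs and the clustering procedure of the cited work.

```python
import numpy as np
from sklearn.cluster import KMeans

def labeled_prototypes(feats_l, labels_l, num_known):
    """mu_j^l: mean feature of the labeled instances of each known category j."""
    return np.stack([feats_l[labels_l == j].mean(axis=0) for j in range(num_known)])

def unlabeled_prototypes(feats_u, num_all):
    """mu_j^u: cluster centers of the unlabeled features; cluster ids double as pseudo labels."""
    km = KMeans(n_clusters=num_all, n_init=10).fit(feats_u)
    return km.cluster_centers_, km.labels_
```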
§.§.§ P2P Trans.Due to the lack of supervision and the model bias towards known categories, the estimated unlabeled prototypes can be noisy and biased, especially for novel categories. So directly performing prototypical learning <cit.> on these noisy prototypes can lead to inferior results. To mitigate this issue, we propose to transfer knowledge between categories and treat the labeled prototypes as unbiased estimation for known categories to calibrate the noisy unlabeled prototypes. To avoid negative transfer between dissimilar categories, we introduce semantic similarities between categories as weights and select only the most similar k prototypes for the prototype calibration. Specifically, we measure the semantic similarity of two categories based on the Euclidean distance of the corresponding prototypes:S_i = {- ‖μ_i^u - μ_j^l‖_2|μ_j^l∈ P^l}where μ_i^u is the i-th unlabeled prototype and μ_j^l is the j-th labeled prototype. Then we select the top-k similar labeled prototypes as the transfer set S_i^' for each unlabeled prototype:T_i = { j | S_ij∈ top_k (S_i) }S_i^' = { S_ij| j ∈ T_i}where S_ij is the j-th element of S_i. Then we calculate the transfer weights by normalizing similarities in the transfer set:w_i = softmax(S_i^'/√(dim(z)))where dim(z) is the feature dimension. Then we use the transfer set and weights to calibrate each unlabeled prototype. The calibrated prototypes P^c = {μ_i^c}_i=1^K can be estimated as follows:μ_i^c = α·μ_i^u + (1-α) ·∑_j ∈ T_i w_ij·μ_j^lwhere α is a weighting factor, w_ij is the j-th element of w_i. By using category similarities as weights, we can transfer knowledge from labeled prototypes to calibrate the noisy unlabeled prototypes, where the calibrated prototypes can be used as reliable learning targets for unlabeled data later. §.§.§ P2I Trans.Since the labeled prototypes are learned from ground-truth labels, they can be viewed as unbiased estimation for known categories. So we further utilize these labeled prototypes to guide the training process of unlabeled data with known categories, based on the pseudo labels from clustering. Specifically, we need to first match the prototypes for known categories in labeled and unlabeled data. Following the assumption that the closest prototypes in the feature space represent the same category, we can find the optimal match function 𝒫 through a bipartite matching algorithm introduced by <cit.>, where μ_i^l and μ_𝒫(i)^u represent the same known category. Then we cluster unlabeled instances with known categories around the corresponding prototypes:ℒ_p2i = 1/N∑_i=1^M∑_x_j∈𝒟^u_𝒫(i)‖F_θ(x_j) - μ_i^l‖_2where N is the number of unlabeled data belonging to known categories based on clustering results. 𝒟^u_𝒫(i) is a set of unlabeled data belonging to the cluster 𝒫(i) (i.e., the i-th known category). In this way, we can form compact clusters and distinguishable decision boundaries for known categories. §.§.§ Discussion.The knowledge transfer mechanisms are effective towards the problem of biased knowledge transfer in two aspects. First, we can calibrate the biased unlabeled prototypes and estimate more reliable prototypes through the P2P Trans (Sec. Prototype Calibration), which can help to correct instances that are biased to known categories. 
Second, we can form compact clusters and clear decision boundaries for known categories through the P2I Trans, which can also help to correct the biased instances.Combining P2P and P2I Trans, our model can learn more reliable prototypes and more distinguishable decision boundaries for different categories, which can make them more discriminative and alleviate the effects of biased knowledge transfer (Fig. 2 (a) Bottom).§.§ Feature AlignmentAfter knowledge transfer, we further propose two feature alignment mechanisms called Instance-to-Prototype Alignment (I2P Align) and Instance-to-Instance Alignment (I2I Align) to learn discriminative features from unlabeled data.§.§.§ I2P Align. After P2P Trans, the calibrated prototypes are less noisy and can be treated as reliable learning targets for unlabeled data. So we propose to acquire category-level knowledge by pulling instance features of unlabeled data closer to the corresponding calibrated prototypes based on the clustering results, so that these features can be discriminative and compact around the prototypes to form more distinguishable decision boundaries for different categories:ℒ_i2p = -1/|𝒟^u|∑_i=1^K∑_x_j∈𝒟^u_i‖F_θ(x_j) - μ_i^c‖_2where |𝒟^u| is the number of unlabeled instances, 𝒟^u_i is a set of unlabeled instances belonging to the cluster i.§.§.§ I2I Align.In addition to I2P Align, we also propose Instance-to-Instance Alignment (I2I Align) to acquire instance-level knowledge from unlabeled data. Specifically, we use data augmentation methods to generate augmented instance x_i^' for each instance x_i. Then we perform instance-wise contrastive learning <cit.> to pull x_i closer to x_i^' and push other instances away from x_i, which can help to learn self-consistent and locally-smooth features for better representation learning <cit.>:ℒ_i2i = -1/|𝒟^u|∑_i=1^|𝒟^u|logexp(z_i· z_i^'/τ)/∑_j=1^2Bexp(z_i· z_j^'/τ))where z_i^' is the feature of x_i^', τ is a temperature hyper-parameter, and B is the batch size. We further add cross-entropy loss ℒ_u on unlabeled data to learn more discriminative features with cluster ids as pseudo labels <cit.>.To avoid catastrophic forgetting for knowledge acquired from labeled data and mitigate the effects of label noise in unlabeled data <cit.>, we also add cross-entropy loss ℒ_ce on labeled data with ground-truth labels.§.§.§ Overall Loss.The objective of our model is defined as:ℒ_TAN = ℒ_p2i + ℒ_i2p + ℒ_i2i + ℒ_u + βℒ_cewhere β is a weighting factor. By combining instance- and prototype-based learning, our model can capture both instance- and category-level semantics from unlabeled data, which can help to learn discriminative representations.In summary, our model can mitigate the problem of biased knowledge transfer by transferring knowledge from labeled to unlabeled data and alleviate the noisy representation learning problem by acquiring both instance- and category-level knowledge, which can help to learn clear decision boundaries for different categories and boost our model performance. §.§ Theoretical AnalysisWe formalize the error bound of our model with the theory of unsupervised domain adaptation <cit.>. 
Training on both labeled and unlabeled data, the classification error of GCD can be written as the linear weighted sum of errors on labeled and unlabeled data: ϵ(h)=γ·ϵ_u(h,ŷ^u) + (1-γ) ·ϵ_l(h,y^l)where h is a hypothesis, γ is a weighting factor, ϵ_u(h,ŷ^u)=E_x∼𝒟^u|h(x)-ŷ^u| and ϵ_l(h,y^l)=E_x∼𝒟^l|h(x)-y^l| represent the error over the sample distribution of unlabeled data 𝒟^u with pseudo labels ŷ^u and labeled data 𝒟^l with ground-truth labels y^l, respectively. Then we want to analyse how close the error ϵ(h) is to an oracle error ϵ_u(h,y^u) that evaluates the model learned on the unlabeled data with ground truth labels y^u. Following the analysis in <cit.>, difference between the two losses can be bounded by the following Lemma. Let h be a hypothesis in class ℋ. Then|ϵ(h)-ϵ_u(h,y^u)| ≤(1-γ) (1/2d_ℋΔH(𝒟^l,𝒟^u)+λ)+γρwhere d_ℋΔH(𝒟^l,𝒟^u)= 2 sup_h,h' ∈ℋ |ϵ_u(h,h')-ϵ_l(h,h')| measures the domain discrepancy between the labeled and unlabeled data in the hypothesis space ℋ. λ=ϵ_l(h^*,y^l)+ϵ_u(h^*,y^u) is the total error on the labeled and unlabeled data with the joint optimal hypothesis h^*. And ρ denotes the ratio of false pseudo labels for unlabeled data. From the Lemma 1 we can see that the bound are decided by three terms. First, the term λ is negligibly small with the joint optimal hypothesis h^* and ground-truth labels, which can be disregard. Second, for the discrepancy term d_ℋΔH(𝒟^l,𝒟^u) that can be quantified by category-level discrepancy of prototypes <cit.>, the main discrepancy of the labeled and unlabeled data is from the differences between known and novel categories. Our knowledge transfer mechanisms can mitigate this discrepancy by transferring knowledge from known to novel categories, where the P2P Trans can help to estimate more reliable prototypes for novel categories and mitigate the discrepancy between known and novel categories. Third, the ratio of false pseudo labels ρ can be gradually reduced by learning more discriminative representations during training. Our feature alignment mechanisms can capture both instance- and category-level semantics to learn more discriminative features, which can reduce the noise of pseudo labels (Sec. Accuracy of Pseudo Labels). In summary, our knowledge transfer and feature alignment mechanisms can help to tighten the bound in Lemma 1, which can prove the effectiveness of our model theoretically.§ EXPERIMENTS §.§ Experimental Setup§.§.§ Datasets.We validate the effectiveness of our model on three benchmark datasets. BANKING is an intent detection dataset in the bank domain <cit.>. StackOverflow is a question classification dataset processed by <cit.>. CLINC is a text classification dataset from diverse domains <cit.>. For each dataset, we randomly select 25% categories as novel categories and 10% data as labeled data.§.§.§ Comparison with SOTA.We compare the proposed model with various baselines and SOTA methods.Unsupervised Models.(1) DeepCluster: Deep Clustering <cit.>. (2) DCN: Deep Clustering Network <cit.>. (3) DEC: Deep Embedding Clustering <cit.>. (4) KM-BERT: KMeans with BERT embeddings <cit.>. (5) KM-GloVe: KMeans <cit.> with GloVe embeddings <cit.>. (6) AG-GloVe: Agglomerative Clustering <cit.> with GloVe embeddings. (7) SAE: Stacked Auto Encoder. Semi-supervised Models.(1) Simple: A Simple Parametric model <cit.>. (2) Semi-DC: Deep Clustering <cit.> pretrained on labeled data. (3) Self-Labeling: Self-Labeling Framework <cit.>. (4) CDAC+: Constrained Adaptive Clustering <cit.>. (5) DTC: Deep Transfer Clustering <cit.>. 
(6) Semi-KM: KMeans with BERT pretrained on labeled data. (7) DAC: Deep Aligned Clustering <cit.>. (8) GCD: Label Assignment with Semi-supervised KMeans <cit.>. (9) PTJN: Robust Pseudo-label Training <cit.>. (10) DPN: Decoupled Prototypical Network <cit.>. §.§.§ Evaluation Metrics.We measure model performance with clustering accuracy on the testing set with Hungarian algorithm <cit.>. (1) H-score: harmonic mean of the accuracy for known and novel categories, which can avoid evaluation bias towards known categories <cit.>. (2) Known: accuracy for instances with known categories. (3) Novel: accuracy for instances with novel categories.§.§.§ Implementation Details.We use the pretrained bert-base-uncased model <cit.> and adopt its suggested hyper-parameters. We only fine-tune the last four Transformer layers with AdamW optimizer.Early stopping is used during pretraining with wait patience 20.For hyper-parameters, k is set to 5, α is set to 0.8, β is set to 100 and τ is set to 0.07.Training epochs for StackOverflow, BANKING and CLINC dataset are set to {10, 20, 20}. The learning rate for pretraining and training is set to 5e^-5 and 1e^-5, respectively. For masked language modeling, the mask probability is set to 0.15 following previous works. And SimCSE <cit.> is used to generate augmented instances.§.§ Results and Discussion §.§.§ Main Results. We show the results in Table <ref>. From the results we can get following observations.First, our model gets the best performance on all evaluation metrics and datasets, which can show the effectiveness of our model.Second, our model achieves the best results on H-score (average 3.98% improvement), which means that our model can better balance the performance on known and novel categories and alleviate the effects of model bias towards known categories.Third, our model achieves the best performance on accuracy for known categories (average 0.84% improvement). Thanks to the knowledge transfer and feature alignment mechanisms, our model can form compact clusters and discriminative decision boundaries for known categories, which means that our model can boost model performance on novel categories without sacrificing the model performance on known categories. Last but not least, our model achieves the best performance on accuracy for novel categories (average 4.76% improvement). We attribute the significant improvement to following reasons. First, our knowledge transfer mechanisms (P2I Trans and P2P Trans) can help to alleviate the effects of biased knowledge transfer by calibrating the noisy prototypes and forming clear decision boundaries for both known and novel categories, where the calibrated prototypes can be used as reliable learning targets for the subsequent training. Second, our feature alignment mechanisms (I2P Align and I2I Align) can alleviate the effects of noisy representation learning by acquiring instance- and category-level knowledge simultaneously. And under the guidance of the calibrated prototypes, our model can learn discriminative features to form compact clusters with less noise. §.§.§ Ablation Study. We inspect the contribution of different components to our model on the BANKING dataset in Table <ref>. First, removing different components from TAN degrades model performance on novel categories and H-score, which can show the effectiveness of different components towards mitigating the model bias to known categories and boosting model performance on novel categories. 
Removing components related to representation learning (I2I Align, I2P Align and ℒ_u) has the greatest impact on the performance, which means that learning discriminative features is crucial for novel categories. Second, removing P2I Trans and I2I Align degrades model performance on known categories since they are responsible for learning compact and discriminative clusters for known categories. Even though removing some components (P2P Trans, I2P Align and ℒ_u) can improve the performance on known categories, they can also greatly exacerbate the model bias and degrade the performance on novel categories. In summary, our model can balance model performance on known and novel categories by mitigating the model bias and learning discriminative features.§.§.§ Prototype Calibration.We investigate the effectiveness of our Prototype Calibration (P2P Trans) mechanism by answering the following questions. (1) Can prototype distances measure semantic similarities between categories? In Table <ref>, We show the top-3 selected known categories in Eq. (3) for both known (Top) and novel (Bottom) categories, based on the prototype distance. From the results we can see that the selected categories are highly relevant to query categories, which means that modeling categories with prototypes can preserve semantic similarities between categories. And by measuring distances between prototypes, we can transfer knowledge between similar categories and calibrate the noisy prototypes. (2) Can prototype calibration help to learn better prototypes? In Fig. <ref>, we compare the average distance between the ground-truth prototypes and the prototypes before and after calibration. We can see that our model can learn prototypes that are closer to the ground-truth prototypes after calibration, which means that the prototype calibration can help to estimate more accurate and reliable prototypes. §.§.§ Real-world Applications. In the real world, the number of categories K is usually unknown. We show the robustness of our model towards this real-world setting from two aspects. (1) Number of categories estimation. We report the results on estimating the number of categories with the filtering algorithm <cit.> in Table <ref>. We can see that our estimations come very close to the ground-truth number of categories, which can show the effectiveness of our model. (2) Over Clustering.To investigate the sensitivity of our model to the number of categories, we over-estimate the number of categories used for training and testing by a factor of one point two. As shown in Table <ref>, our model (TAN (OC)) gets close performance even without knowing the ground-truth number of categories, which can show the robustness of our model towards the real-world settings. §.§.§ Accuracy of Pseudo Labels. We report the accuracy of pseudo labels generated by different models for unlabeled data in Table <ref>. Our model gets the highest accuracy, so the ratio of false pseudo labels ρ in the Sec. Theoretical Analysis can be controlled by our model, which can verify the validity of our theoretical analysis.§.§.§ Visualization. We visualize the learned embeddings of our model before and after training with t-SNE in Fig. <ref>. From the figure we can see that novel categories are mixed together before training. And the clusters are more distinguishable after training, especially for novel categories, which indicates that our model can learn discriminative features and form distinguishable decision boundaries for different categories. 
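For reference, the P2P calibration whose effect is analysed above can be written in a few lines of NumPy. The function below is a sketch rather than the released code; the hyper-parameter values follow the implementation details (k=5, α=0.8).

```python
import numpy as np

def calibrate_prototypes(P_u, P_l, k=5, alpha=0.8):
    """P2P Trans: move each unlabeled prototype toward its k most similar labeled prototypes."""
    dim = P_u.shape[1]
    P_c = np.empty_like(P_u)
    for i, mu_u in enumerate(P_u):
        sims = -np.linalg.norm(mu_u - P_l, axis=1)   # S_i: negative Euclidean distances
        top = np.argsort(sims)[-k:]                  # T_i: the k most similar labeled prototypes
        w = np.exp(sims[top] / np.sqrt(dim))
        w = w / w.sum()                              # softmax weights over the transfer set
        P_c[i] = alpha * mu_u + (1 - alpha) * (w[:, None] * P_l[top]).sum(axis=0)
    return P_c
```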
§ CONCLUSIONIn this paper, we propose Transfer and Alignment Network for GCD, which incorporates two knowledge transfer mechanisms to mitigate the effects of biased knowledge transfer and two feature alignment mechanisms to learn discriminative features with less noise. By modeling different categories with prototypes and transferring knowledge from labeled to unlabeled data, our model can calibrate the noisy prototypes for novel categories and learn more discriminative clusters for known categories, which can help to mitigate the model bias towards known categories. After knowledge transfer, our model can acquire both instance- and category-level knowledge by aligning instance features with both augmented features and the calibrated prototypes, which can help to learn more discriminative features and form more distinguishable decision boundaries for different categories. Experimental results on three benchmark datasets show that our model outperforms SOTA methods, especially for model performance on novel categories. And the theoretical analysis further justifies the effectiveness of our model.§ ACKNOWLEDGMENTSThis work was supported by National Key Research and Development Program of China (2022ZD0117102), National Natural Science Foundation of China (62293551, 62177038, 62277042, 62137002, 61721002, 61937001, 62377038). Innovation Research Team of Ministry of Education (IRT_17R86), Project of China Knowledge Centre for Engineering Science and Technology, "LENOVO-XJTU" Intelligent Industry Joint Laboratory Project.
http://arxiv.org/abs/2312.16467v1
{ "authors": [ "Wenbin An", "Feng Tian", "Wenkai Shi", "Yan Chen", "Yaqiang Wu", "Qianying Wang", "Ping Chen" ], "categories": [ "cs.CL", "cs.LG" ], "primary_category": "cs.CL", "published": "20231227083547", "title": "Transfer and Alignment Network for Generalized Category Discovery" }
Xinliang Li^*

January 14, 2024

Data partitioning that maximizes or minimizes Shannon entropy is a crucial subroutine in data compression, columnar storage, and cardinality estimation algorithms. These partitioning algorithms can be accelerated if we have a data structure to find the entropy in different subsets of the data when the algorithm needs to decide which block to construct. While it is well known how to compute the entropy of a discrete distribution efficiently, we want to efficiently derive the entropy of the data items that lie in a specific area. We solve this problem in a typical setting that arises with real data, where data items are geometric points and each requested area is a query (hyper)rectangle. More specifically, we consider a set P of n weighted and colored points in ℝ^d. The goal is to construct a low-space data structure such that, given a query (hyper)rectangle R, it computes the entropy based on the colors of the points in P∩ R in sublinear time. We show a conditional lower bound for this problem proving that we cannot hope for data structures with near-linear space and near-constant query time. Then, we propose exact data structures for d=1 and d>1 with o(n^2d) space and o(n) query time. We also provide a tuning parameter t that the user can choose to bound the asymptotic space and query time of the new data structures. Next, we propose near-linear space data structures for returning either an additive or a multiplicative approximation of the entropy. Finally, we show how we can use the new data structures to efficiently partition time series and histograms with respect to entropy.

§ INTRODUCTION Discrete entropy is defined as the expected amount of information needed to represent an event drawn from a probability distribution. That is, given a probability distribution 𝒟 over the set 𝒳, the entropy is defined as H(𝒟) = -∑_x ∈𝒳𝒟(x) ·log𝒟(x). The entropy has a few different interpretations in information theory and statistics, such as: * (Compression) Entropy is a lower bound on data compressibility for datasets generated from the probability distribution, via the Shannon source coding theorem. * (Probability) Entropy measures a probability distribution's similarity to a uniform distribution over the set 𝒳 on a scale of [0, log |𝒳|]. Because of these interpretations, entropy is a highly useful optimization objective. Various algorithms, ranging from columnar compression algorithms to histogram construction and data cleaning, maximize or minimize (conditional) entropy as a subroutine. These algorithms try to find high- or low-entropy data subsets. Such algorithms can be accelerated if we have a data structure to efficiently calculate the entropy of different subsets of the data. However, while it is known how to compute the entropy of an entire distribution efficiently, there is little work on such “range entropy queries”, where we want to efficiently derive the entropy of the data items that lie in a specific area. To make this problem more concrete, let us consider a few examples. [Columnar Compression] An Apache Parquet file is a columnar storage format that first horizontally partitions a table into row groups, and then applies columnar compression along each column within the row group.
A horizontal partitioning that minimizes the entropy within each partition can allow for more effective columnar compression. [Histogram Construction] Histogram estimation often uses a uniformity assumption, where the density within a bucket is modeled as roughly uniform. A partitioning that maximizes the entropy within each partition can allow for more accurate estimation under uniformity assumptions. [Data Cleaning] As part of data exploration, a data analyst explores different subsets of data to find areas with high entropy/uncertainty. Usually, subsets of data or items in a particular area of the data with high entropy contain dirty data, so they are good candidates for applying data cleaning methods. For example, Chu et al. <cit.> used an entropy-based scheduling algorithm to maximize the uncertainty reduction of candidate table patterns. Table patterns are used to identify errors in data. The first two problems above have a similar structure, where an outer algorithm leverages a subroutine that identifies data partitions that minimize or maximize entropy. In the third problem we aim to explore areas with high entropy by running arbitrary range entropy queries. We formulate the range entropy query problem in a typical and realistic setting that arises with real data: we assume that each item is represented as a point in Euclidean space. More specifically, we consider a set P of n weighted and colored points in ℝ^d. The goal is to construct a data structure such that, given a query (hyper)rectangle R, it computes the entropy of the points in P∩ R (denoted by H(P∩ R)). The entropy of P∩ R is defined as the entropy of a discrete distribution 𝒟_R over the colors in P∩ R: Let U_R be the set of all colors of the points in P∩ R. For each color u_j∈ U_R, we define a value (we can also refer to it as an independent event or outcome) α_j with probability w_j equal to the sum of the weights of points with color u_j in P∩ R divided by the sum of the weights of all points in P∩ R. Notice that ∑_u_j∈ U_Rw_j=1. Unfortunately, we do not have direct access to this distribution; we would need Ω(n) time to construct the entire distribution 𝒟_R in the query phase. Using the geometry of the points along with key properties from information theory, we propose data structures that find the entropy of 𝒟_R without constructing 𝒟_R explicitly. Given a set P of n weighted and colored points in ℝ^d, the goal is to construct a data structure with low space such that, given any query rectangle R, it returns H(P∩ R) in sub-linear time o(|P|). If the number of colors in P is bounded by a constant then the range entropy query problem can be easily solved. However, in the worst case the number of different colors is O(n). Our goal is to construct data structures whose query time is always sublinear with respect to n. Summary of Results One of the main challenges with range entropy queries is that entropy is not a decomposable quantity. Let P_1, P_2 be two sets of points such that P_1∪ P_2=P and P_1∩ P_2=∅. If we know H(P_1) and H(P_2), there is no straightforward way to compute H(P_1∪ P_2). In this paper, we build low-space data structures such that, given a rectangle R, we visit points or subsets of points in P∩ R in a particular order and carefully update the overall entropy. All our results for the range entropy problem can be seen in Table <ref>.
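As a concrete reference point, the following Python sketch implements the naive baseline implied by the problem statement above: it scans P∩ R and computes the entropy of the induced color distribution, which takes time linear in |P∩ R|. The point encoding and the function name are illustrative only; this is exactly the behavior the data structures below are designed to beat.

```python
import math

# A point is (coords, color, weight); coords is a tuple of d numbers.
def range_entropy_naive(points, rect):
    """Entropy of the color distribution among the points inside rect.

    rect is a list of (lo, hi) pairs, one per dimension.
    Runs in time linear in the number of points, which is the cost
    the data structures in this paper avoid at query time.
    """
    weight_per_color = {}
    total = 0.0
    for coords, color, w in points:
        if all(lo <= c <= hi for c, (lo, hi) in zip(coords, rect)):
            weight_per_color[color] = weight_per_color.get(color, 0.0) + w
            total += w
    if total == 0:
        return 0.0
    h = 0.0
    for w in weight_per_color.values():
        p = w / total
        h -= p * math.log2(p)
    return h

# Example: three colors in the unit square.
pts = [((0.1, 0.2), "red", 1.0), ((0.4, 0.6), "blue", 1.0),
       ((0.5, 0.5), "red", 1.0), ((0.9, 0.9), "green", 1.0)]
print(range_entropy_naive(pts, [(0.0, 0.6), (0.0, 1.0)]))  # entropy of {red, red, blue}
```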
* In Section <ref> we introduce some useful notation and we revisit a way to update the entropy of the union of two sets with no color in common in O(1) time. * In Section <ref>, we reduce the set intersection problem to the range entropy query problem in ℝ^2. We prove a conditional lower bound showing that we cannot hope for data structures with O(n polylog n) space and O(polylog n) query time for range entropy queries. * Exact data structure for d=1. In Section <ref>, we efficiently partition the input points with respect to their x coordinates into buckets, where each bucket contains a bounded number of points. Given a query interval R, we visit the bounded number of points in buckets that are partially intersected by R and we update the overall entropy of the buckets that lie completely inside R. For any parameter t chosen by the user, we construct a data structure in O(n^2-t) time, with O(n^2(1-t)) space and O(n^tlog n) query time. * In Section <ref>, instead of partitioning the points with respect to their geometric location, we partition the input points with respect to their colors. We construct O(n^1-t) blocks where two sequential blocks contain at most one color in common. Given a query rectangle, we visit all blocks and we carefully update the overall entropy. For any tuning parameter t chosen by the user, we construct a data structure in O(nlog^2dn + n^(2d-1)t+1log^d+1 n) time with O(nlog^2d-1n + n^(2d-1)t+1) space and O(n^1-tlog^2d n) query time. * Additive approximation. In Subsection <ref> we use known results for estimating the entropy of an unknown distribution by sampling in the dual access model. We propose efficient data structures that apply sampling in a query range in the dual access model. We construct a data structure in O(nlog^dn) time, with O(nlog^d-1n) space and O(log^d+3 n/Δ^2) query time. The data structure returns an additive Δ-approximation of the entropy with high probability. It also supports dynamic updates in O(log^d n) time. * Multiplicative approximation. In Subsection <ref> we propose a multiplicative approximation of the entropy using the results for estimating the entropy in a streaming setting. One significant difference with the previous result is that, in information theory, at least Ω(log n/(ε^2· H')) sampling operations are needed to get a (1+ε)-multiplicative approximation, where H' is a lower bound on the entropy. Even if we have efficient data structures for sampling (as we have in the additive approximation), we still do not have an efficient query time if the real entropy H is extremely small. We overcome this technical issue by considering two cases: i) there is no color with total weight more than 2/3, and ii) there exists a color with total weight more than 2/3. While in the latter case the entropy can be extremely small, an additive approximation is sufficient in order to get a multiplicative approximation. In the former case, the entropy is large, so we apply the standard sampling method to get a multiplicative approximation. We construct a data structure in O(nlog^dn) time, with O(nlog^dn) space and O(log^d+3 n/ε^2) query time. The data structure returns a multiplicative (1+ε)-approximation of the entropy. It also supports dynamic updates in O(log^d n) time. * Additive and multiplicative approximation. In Subsection <ref>, we propose a new data structure for approximating the entropy in the query range for d=1. We get the intuition from data structures counting the number of colors in a query interval.
Such a data structure finds a geometric mapping to a different geometric space, such that if at least one point with color u_i exists in the original P∩ R, then there is a unique point with color u_i in the corresponding query range in the new geometric space. Unfortunately, this property is not sufficient for finding the entropy. Instead, we need to know more information about the weights of the points and the entropy in canonical subsets of the new geometric space, which is challenging to do. We construct a data structure in O((n/ε)log^5 n) time, with O((n/ε)log^2 n) space and O(log^2 n ·(loglog n)/ε) query time. The data structure returns a (1+ε)-multiplicative and ε-additive approximation of the entropy. * Partitioning using entropy. In Section <ref> we show how our new data structures can be used to run partitioning algorithms over time series, histograms, and points efficiently. Related work Entropy has been widely used for partitioning to create histograms in databases. For example, To et al. <cit.> used entropy to design histograms for selectivity estimation queries. In particular, they aim to find a partitioning of k buckets in 1d such that the cumulative entropy is maximized. They consider a special case where they already have a histogram (so all items of the same color are accumulated to the same location) and the goal is to partition the histogram into k buckets. They propose a greedy algorithm that finds a local optimum solution. However, there is no guarantee on the overall optimum partitioning. Using our new data structures, we can find the entropy in arbitrary range queries, which is not supported in <cit.>. Our data structures can also be used to accelerate partitioning algorithms with theoretical guarantees (see Subsection <ref>) in a more general setting, where points of the same color have different locations. In addition, there are a number of papers that use entropy to find a clustering of items. Cruz et al. <cit.> used entropy for the community detection problem in augmented social networks. They describe a greedy algorithm that exchanges two random nodes between two random clusters if the entropy of the new instance is lower. Barbará et al. <cit.> used the expected entropy for categorical clustering. They describe a greedy algorithm that starts with a set of initial clusters, and for each new item decides to place it in the cluster that has the lowest entropy. Li et al. <cit.> also used the expected entropy for categorical clustering but they extend it to probabilistic clustering models. Finally, Ben-Gal et al. <cit.> used the expected entropy to develop an entropy-based clustering measure that measures the homogeneity of mobility patterns within clusters of users. All these methods do not study the problem of finding the entropy in a query range efficiently. While these methods perform well in practice, it is challenging to derive theoretical guarantees. In spatial databases items are represented as points in ℝ^d, so our new data structures could be used to find faster and better entropy-based clustering techniques. For example, we could run range entropy queries with different radii around a center until we find a cluster with small radius and small (or large) expected entropy. There is a lot of work on computing an approximation of the entropy in the streaming setting <cit.>. For a stream of m distinct values (m colors in our setting), Chakrabarti et al.
<cit.> compute a (1+ε)-multiplicative approximation of the entropy in a single pass using O(ε^-2 log(δ^-1) log m) words of space, with probability at least 1-δ. For a stream of size n (n points in our setting), Clifford and Cosma <cit.> propose a single-pass ε-additive algorithm using O(ε^-2 log n log(nε^-1)) bits with bounded probability. Harvey et al. <cit.> allow deletions in the streaming setting and they propose a single-pass (1+ε)-multiplicative algorithm using O(ε^-2 log^2 m) words of space with bounded probability. Furthermore, they propose a single-pass ε-additive approximation using O(ε^-2 log m) words of space. While some techniques from the streaming setting are useful in our query setting, the two problems are fundamentally different. In the streaming setting, preprocessing is not allowed, all data are processed one by one, and an estimation of the entropy is maintained. In our setting, the goal is to construct a data structure such that, given any query range, the entropy of the items in the range should be computed in sublinear time, i.e., without processing all items in the query range during the query phase. Let 𝒟 be an unknown discrete distribution over n values. There is an interesting line of work on approximating the entropy of 𝒟 by sampling in the dual access model. Batu et al. <cit.> give a (1+ε)-multiplicative approximation of the entropy of 𝒟 with sample complexity O((1+ε)^2 log^2 n/(ε^2· H')), where H' is a lower bound on the actual entropy H(𝒟). Guha et al. <cit.> improved the sample complexity to O(log n/(ε^2· H')), matching the lower bound Ω(log n/((2+ε)^2· H')) found in <cit.>. Canonne and Rubinfeld <cit.> describe a Δ-additive approximation of the entropy with sample complexity O(log^2(n/Δ)/Δ^2). Caferov et al. <cit.> show that Ω(log^2 n/Δ^2) sample queries are necessary to get a Δ-additive approximation. All these algorithms return the correct approximations with constant probability. If we want to guarantee the result with high probability then the sample complexity is multiplied by a log n factor. A related query to estimating the entropy is the range color query. Given a set of colored points in ℝ^d, the goal is to construct a data structure such that, given a query rectangle, it returns the number of colors in the query range.

§ PRELIMINARIES Let P be a set of n points in ℝ^d and let U be a set of m colors U={u_1, …, u_m}. Each point p∈ P is associated with a color from U, i.e., u(p)=u_i for some u_i∈ U. Furthermore, each point p∈ P is associated with a non-negative weight w(p)≥ 0. For a subset of points P'⊆ P, let P'(u_i)={p∈ P'| u(p)=u_i}, for i≤ m, be the set of points having color u_i. Let u(P')={u_i|∃ p∈ P', u(p)=u_i} be the set of colors of the points in P'. Finally, let w(P')=∑_p∈ P'w(p). The entropy of set P' is defined as H(P')=∑_i=1^m w(P'(u_i))/w(P')·log(w(P')/w(P'(u_i))). For simplicity, and without loss of generality, we can consider throughout the paper that w(p)=1 for each point p∈ P. All the results, proofs, and properties we show hold for the weighted case almost verbatim. Hence, from now on, we assume w(p)=1 and the definition of entropy becomes H(P')=∑_i=1^m (|P'(u_i)|/|P'|)log(|P'|/|P'(u_i)|)=∑_u_i∈ u(P')(|P'(u_i)|/|P'|)log(|P'|/|P'(u_i)|). If |P'(u_i)|=0, then we consider that (|P'(u_i)|/|P'|)log(|P'|/|P'(u_i)|)= 0. Updating the entropy Let P_1, P_2 ⊂ P be two subsets of P such that u(P_1)∩ u(P_2)=∅.
The next formula for the entropy of P_1∪ P_2 is known (see <cit.>): H(P_1∪ P_2)=(|P_1|H(P_1)+|P_2|H(P_2)+|P_1|log((|P_1|+|P_2|)/|P_1|)+|P_2|log((|P_1|+|P_2|)/|P_2|))/(|P_1|+|P_2|). If |u(P_2)|=1 then H(P_1∪ P_2)=|P_1|H(P_1)/(|P_1|+|P_2|)+ (|P_1|/(|P_1|+|P_2|))log((|P_1|+|P_2|)/|P_1|)+(|P_2|/(|P_1|+|P_2|))log((|P_1|+|P_2|)/|P_2|). Finally, if P_3⊂ P_1 with |u(P_3)|=1 and u(P_1∖ P_3)∩ u(P_3)=∅, then H(P_1∖ P_3)=(|P_1|/(|P_1|-|P_3|))(H(P_1)-(|P_3|/|P_1|)log(|P_1|/|P_3|)-((|P_1|-|P_3|)/|P_1|)log(|P_1|/(|P_1|-|P_3|))). We notice that in all cases, if we know H(P_1), H(P_2) and the cardinality of each subset, we can update the entropy in O(1) time. Range queries In some data structures we need to handle range reporting or range counting problems. Given P, we need to construct a data structure such that, given a query rectangle R, the goal is to return |R∩ P|, or report R∩ P. We use range trees <cit.>. A range tree can be constructed in O(nlog^d n) time, it has O(nlog^d-1n) space, and it can answer an aggregation query (such as count, sum, max, etc.) in O(log^dn) time. A range tree can be used to report R∩ P in O(log^dn + |R∩ P|) time. Using fractional cascading the log^d n term can be improved to log^d-1 n in the query time. However, for simplicity, we consider the simple version of a range tree without using fractional cascading. Furthermore, a range tree can be used to return a uniform sample point from R∩ P in O(log^d n) time. We give more details about range trees and sampling in Appendix <ref>. There is also a lot of work on designing data structures for returning k independent samples in a query range efficiently <cit.>. For example, if the input is a set of points in ℝ^d and the query range is a query hyper-rectangle, then there exists a data structure <cit.> with space O(nlog^d-1n) and query time O(log^d n + klog n). For our purposes, it is sufficient to run k independent sampling queries in a (modified) range tree with total query time O(klog^d n). Expected entropy and monotonicity Entropy is not monotone because if P_1⊆ P_2, it does not always hold that H(P_1)≤ H(P_2). Using the results in <cit.>, we can show that H(P_1)≥ ((|P_1|-1)/|P_1|)H(P_1∖{p}), for a point p∈ P_1⊂ P. If we multiply with |P_1|/n we have (|P_1|/n)H(P_1)≥ ((|P_1|-1)/n)H(P_1∖{p}). Hence, we can show that, for P_1⊆ P_2⊆ P, (|P_1|/n)H(P_1)≤ (|P_2|/n)H(P_2). The quantity (|P_1|/|P|)H(P_1) is called the expected entropy. This monotonicity property helps us to design efficient partitioning algorithms with respect to the expected entropy, for example, to find a partitioning that minimizes the cumulative or maximum expected entropy.

§ LOWER BOUND In this section, we give a lower bound for range entropy queries in the real-RAM model. We show a reduction from the set intersection problem that suggests that data structures with near-linear space and polylogarithmic query time are unlikely to exist even for d=2. The set intersection problem is defined as follows. Given a family of sets S_1, …, S_g, with ∑_i=1^g|S_i|=n, the goal is to construct a data structure such that, given a query pair of indices i, j, we can decide whether S_i∩ S_j=∅. It is widely believed that for any positive value Q, any data structure for the set intersection problem with O(Q) query time needs Ω((n/Q)^2) space <cit.>, skipping log^O(1) n factors. Next, we show that any data structure for solving the range entropy query can be used to solve the set intersection problem. Let S_1,…, S_g be an instance of the set intersection problem as we defined above.
We design an instance of the range entropy query constructing a set P of 2n points in ^2 and |U|=|⋃_i S_i|. Let n_0=0 and n_i=n_i-1+|S_i| for i=1,…, g. Let s_i,k be the value of the k-th item in S_i (we consider any arbitrary order of the items in each S_i). Let S=⋃_i S_i, and q=|S|. Let σ_1,…σ_q be an arbitrary ordering of S. We set U={1,…, q}. Next, we create a geometric instance of P in ^2: All points lie on two parallel lines L=x+n, and L'=x-n. For each s_i,k we add in P two points, p_i,k=(-(k+n_i-1), -(k+n_i-1)+n) on L, and p_i,k'=((k+n_i-1), k+n_i-1-n) on L'. If s_i,k=σ_j for some j≤ q, we set the color/category of both points p_i,k, p_i,k' to be j. Let P_i be the set of points corresponding to S_i that lie on L, and P_i' the set of points corresponding to S_i that lie on L'. We set P=⋃_i (P_i∪ P_i'). We note that for any pair i, j, points P_i ∪ P_j' have distinct categories if and only if S_i∩ S_j=∅. P uses O(n) space and can be constructed in O(n) time.Let 𝒟 be a data structure for range entropy queries with space S(n) and query time Q(n) constructed on n points. Given an instance of the set intersection problem, we construct P as described above. Then we build 𝒟 on P and we construct a range tree 𝒯 on P for range counting queries. Given a pair of indexes i, j the question is if S_i∩ S_j=∅. We answer this question using 𝒟 and 𝒯 on P. Geometrically, it is known we can find a rectangle ρ_i,j in O(1) time such that ρ_i,j∩ P=P_i∪ P_j' (see Figure <ref>). We run the range entropy query 𝒟(ρ_i,j) and the range counting query 𝒯(ρ_i,j). Let H_i,j be the entropy of P_i∪ P_j' and n_i,j=|P_i∪ P_j|. If H_i,j=log n_i,j we return that S_i∩ S_j=∅. Otherwise, we return S_i∩ S_j≠∅.The data structure we construct for answering the set intersection problem has O(S(2n)+nlog n)=O(S(2n)) space. The query time is (Q(2n)+log n) or just O(Q(n)) assuming that Q(n)≥log n.In the preceding reduction, S_i∩ S_j=∅ if and only if H_i,j=log n_i,j. If S_i∩ S_j=∅ then from the construction of P we have that all colors in P_i∪ P_j' are distinct, so n_i,j=|u(P_i∪ P_j')|. Hence, the entropy H(P_i∪ P_j') takes the maximum possible value which is H(P_i∪ P_j')=∑_v∈ u(P_i∪ P_j')1/n_i,jlog n_i,j=log n_i,j.If H_i,j≠log n_i,j we show that S_i∩ S_j≠∅. The maximum value that H_i,j can take is log n_i,j so we have H_i,j<log n_i,j. The entropy is a measure of uncertainty of a distribution. It is known that the discrete distribution with the maximum entropy is unique and it is the uniform distribution. Any other discrete distribution has entropy less than log n_i,j. Hence the result follows.We conclude with the next theorem. If there is a data structure for range entropy queries with S(n) space and Q(n) query time, then for the set intersection problem there exists a data structure with O(S(2n)) space and O(Q(2n)) query time, skipping log n factors.§ EXACT DATA STRUCTURES In this section we describe data structures that return the entropy in a query range, exactly. First, we provide a data structure for d=1 and we extend it to any constant dimension d. Next, we provide a second data structure for any constant dimension d. The first data structure is better for d=1, while the second data structure is better for any constant d>1. §.§ Efficient data structure for d=1 Let P be a set of n points in ^1. Since the range entropy query problem is not decomposable, the main idea is to precompute the entropy in some carefully chosen canonical subsets of P. 
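The query procedures in this and the following subsections repeatedly apply the constant-time update rules of the preliminaries (the formulas for merging two color-disjoint sets, inserting a batch of points of a single new color, and removing all points of one color). A minimal Python sketch of these rules, operating on (count, entropy) pairs with unit weights and base-2 logarithms, is given below; the function names are our own and only serve as an illustration.

```python
import math

def merge_disjoint(n1, h1, n2, h2):
    """Entropy of P1 ∪ P2 when P1 and P2 share no color."""
    if n1 == 0:
        return n2, h2
    if n2 == 0:
        return n1, h1
    n = n1 + n2
    h = (n1 * h1 + n2 * h2
         + n1 * math.log2(n / n1) + n2 * math.log2(n / n2)) / n
    return n, h

def insert_single_color(n1, h1, n2):
    """Insert n2 points of one brand-new color into a set of n1 points with entropy h1."""
    return merge_disjoint(n1, h1, n2, 0.0)   # a single color has entropy 0

def remove_single_color(n1, h1, n3):
    """Remove all n3 points of one color from a set of n1 points with entropy h1 (n3 < n1)."""
    n = n1 - n3
    h = (n1 / n) * (h1
                    - (n3 / n1) * math.log2(n1 / n3)
                    - (n / n1) * math.log2(n1 / n))
    return n, h
```

For instance, handling a color that already appears in a precomputed part is done by one remove_single_color followed by one insert of the combined count, which is exactly the pattern used by the query procedure described next.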
When we get a query interval R, we find the maximal precomputed canonical subset in R, and then for each color among the colors of points in R not included in the canonical subset, we update the overall entropy using Equations <ref>, <ref>, and <ref>. We also describe how we can precompute the entropy of all canonical subsets efficiently. Data Structure Let t∈[0,1] be a parameter. Let B_t={b_1, …, b_k} be k=n^1-t points in ℝ^1 such that |P∩ [b_j,b_j+1]|=n^t, for any j<n^1-t. For any pair b_i, b_j∈ B_t let I_i,j=[b_i,b_j] be the interval with endpoints b_i, b_j and let I be the set of all intervals. For any pair b_i, b_j we store the interval I_i,j and we precompute H_i,j=H(P∩ I_i,j), and n_i,j=|P∩ I_i,j|. Next, we construct an interval tree 𝒯 on I. Finally, for each color u∈ u(P) we construct a binary search tree 𝒯_u over P(u). We have |B_t|=O(n^1-t) so |I|=O(n^2(1-t)). The interval tree along with all the binary search trees have O(n) space in total. Hence we need O(n^2(1-t)) space for our data structure. In Appendix <ref> we show how we can construct the data structure in O(n^2-t) time. Query procedure Given a query interval R, we find the maximal interval I_i,j∈ I such that I_i,j⊆ R using the interval tree. Recall that we have precomputed the entropy H_i,j. Let H=H_i,j be a variable that we will update throughout the algorithm storing the current entropy. Let also N=n_i,j be the variable that stores the number of items we currently consider to compute H. Let P_R=P∩ (R∖ I_i,j) be the points in P∩ R that are not included in the maximal interval I_i,j. See also Figure <ref>, which shows an instance of the query algorithm for a query interval R; the purple points are the points in P_R. We visit each point in P_R and we identify u(P_R). For each 𝐮∈ u(P_R), we run a query in 𝒯_𝐮 with range I_i,j finding the number of points in P∩ I_i,j with color 𝐮. Let n_𝐮 be this count. If n_𝐮=0 then there is no point in P∩ I_i,j with color 𝐮, so we insert |P_R(𝐮)| items of color 𝐮 into the current entropy using Equation <ref>. In that formula, |P_1|=N, H(P_1)=H and |P_2|=|P_R(𝐮)|. We update N=N+|P_R(𝐮)|, and H with the updated entropy H(P_1∪ P_2). If n_𝐮>0 then there is at least one point in P∩ I_i,j with color 𝐮. Hence, we update the entropy H by first removing the n_𝐮 points of color 𝐮 in P∩ I_i,j and then re-inserting n_𝐮+|P_R(𝐮)| points of color 𝐮. We use Equation <ref> for removing the points with color 𝐮 with |P_1|=N, H(P_1)=H, and |P_3|=n_𝐮. We update N=N-n_𝐮 and H with the updated entropy H(P_1∖ P_3). Then we use Equation <ref> for re-inserting the points with color 𝐮, with |P_1|=N, H(P_1)=H, and |P_2|=n_𝐮+|P_R(𝐮)|. We update N=N+n_𝐮+|P_R(𝐮)| and H with the updated entropy H(P_1∪ P_2). After visiting all colors in u(P_R), we return the updated entropy H. The correctness of the algorithm follows from Equations <ref>, <ref>. For each color 𝐮∈ u(P_R) we update the entropy including all points of color 𝐮. For a query interval R we run a query in the interval tree to find I_i,j in O(log n) time. The endpoints of R intersect two intervals [b_h, b_h+1] and [b_v, b_v+1]. Recall that, by definition, each such interval contains O(n^t) points from P. Hence, |P_R|=O(n^t) and |u(P_R)|=O(n^t). For each 𝐮∈ u(P_R), we spend O(log n) time to search 𝒯_𝐮 and find n_𝐮. Then we update the entropy in O(1) time. Overall, the query procedure takes O(n^tlog n) time. Let P be a set of n points in ℝ^1, where each point is associated with a color, and let t∈ [0,1] be a parameter.
A data structure of O(n^2(1-t)) size can be computed in O(n^2-t) time, such that given a query interval R, H(P∩ R) can be computed in O(n^tlog n) time. In Appendix <ref> we extend this data structure to any constant d>1. §.§ Efficient data structure for d>1 While the previous data structure can be extended to higher dimensions, here we propose a more efficient data structure for d>1. In this data structure we split the points with respect to their colors. The data structure has some similarities with the data structure presented in <cit.> for the max query under uncertainty, however the two problems are different and there are key differences on the way we construct the data structure and the way we compute the result of the query.Data Structure We first consider an arbitrary permutation of the colors in U, i.e. u_1, …, u_m. The order used to partition the items is induced from the permutation over the colors. Without loss of generality we set u_j=j for each j≤ m. We split P into K=O(n^1-t) buckets P_1,…, P_K such that i) each bucket contains O(n^t) points, and ii) for every point p∈ P_i and q∈ P_i+1, u(p)≥ u(q). We notice that for any pair of buckets P_i, P_i+1 it holds |u(P_i)∩ u(P_i+1)|≤ 1, see Figure <ref>. We slightly abuse the notation and we use P_i to represent both the i-th bucket and the set of points in the i-th bucket. For each bucket P_i, we take all combinatorially different (hyper)rectangles R_i defined by the points P_i. For each such rectangle r, we precompute and store the entropy H(P_i∩ r) along with the number of points n(P_i∩ r)=|P_i∩ r|. In addition, we store u^+(r), the color with the maximum value (with respect to the permutation of the colors)in r∩ P_i. Furthermore, we store u^-(r), the color with the minimum value in r∩ P_i. Let n^+(r)=|{p∈ r∩ P_i| u(p)=u^+(r)}| and n^-(r)=|{p∈ r∩ P_i| u(p)=u^-(r)}|.Finally, for each bucket P_i we construct a modified range tree 𝒯'_i over all R_i, such that given a query rectangle R it returns the maximal rectangle r∈ R_i that lies completely inside R. We note that r∩ P_i = R∩ P_i. This can be done by representing the d-dimensional hyper-rectangles as 2d-dimensional points merging the coordinates of two of their corners.Overall, we need O(nlog^2d-1 n) space for the modified range trees 𝒯'_i, and O(n^1-t· n^2dt)=O(n^(2d-1)t+1) space to store all additional information (entropy, counts, max/min color) in each rectangle. This is because there are O(n^1-t) buckets, and in each bucket there are O(n^2dt) combinatorially different rectangles. Overall, our data structure has O(nlog^2d-1n + n^(2d-1)t+1) space. Query Procedure We are given a query (hyper)rectangle R. We visit the buckets P_1,… P_K in order and compute the entropy for R∩(P_1∪…∪ P_i). Let H be the overall entropy we have computed so far. For each bucket P_i we do the following: First we run a query using 𝒯'_i to find r_i∈ R_i that lies completely inside R. Then we update the entropy H considering the items in P_i∩ r_i. If u^-(r_i-1)=u^+(r_i) then we update the entropy H by removing n^-(r_i-1) points with color u^-(r_i-1) using Equation <ref>. Then we insert n^-(r_i-1)+n^+(r_i) points of color u^+(r_i) in H using Equation <ref>. Finally, we remove n^+(r_i) points of color u^+(r_i) from the precomputed H(P_i∩ r_i) using Equation <ref> and we merge the updated H with H(P_i∩ r_i) using Equation <ref>. 
We note that in the last step we can merge the updated H with the updated H(P_i∩ r_i) because no color from the points used to compute the current H appears among the points used to compute the current H(P_i∩ r_i). On the other hand, if u^-(r_i-1)≠ u^+(r_i), then we merge the entropies H and H(P_i∩ r_i) directly using Equation <ref>. In each bucket P_i we need O(log^2d n) time to identify the maximal rectangle r_i inside R. Then we need O(1) time to update the current entropy H. Overall, we need O(n^1-tlog^2d n) time. Fast Construction All range trees can be computed in O(nlog^2d n) time. Next, we focus on computing H(P_i∩ r) for all rectangles r∈ R_i. We compute the other quantities n(P_i∩ r), u^-(r), and u^+(r) in a similar way. A straightforward way is to consider every possible rectangle r and compute its entropy independently in linear time. There are O(n^2dt) rectangles so the running time is O(n^2dt+1). We propose a faster construction algorithm. The main idea is to compute the entropy for rectangles in a specific order. In particular, we compute the entropy of rectangles that contain c points after we compute the entropies for rectangles that contain c-1 points. Then we use Equations <ref>, <ref> to update the entropy of the new rectangle without computing it from scratch. Overall, we construct the data structure in O(n^(2d-1)t+1log^d+1 n) time. We describe the missing details in Appendix <ref>. Let P be a set of n points in ℝ^d, where each point is associated with a color, and let t∈ [0,1] be a parameter. A data structure of O(nlog^2d-1n + n^(2d-1)t+1) size can be computed in O(nlog^2d n + n^(2d-1)t+1log^d+1 n) time, such that given a query hyper-rectangle R, H(P∩ R) can be computed in O(n^1-tlog^2d n) time.

§ APPROXIMATE DATA STRUCTURES In this section we describe data structures that return the entropy in a query range, approximately. First, we provide a data structure that returns an additive approximation of the entropy, and next we provide a data structure that returns a multiplicative approximation efficiently. Finally, for d=1, we design a deterministic and more efficient data structure that returns an additive and multiplicative approximation of the entropy. §.§ Additive approximation In this Subsection, we construct a data structure on P such that given a query rectangle R and a parameter Δ, it returns a value h such that H(P∩ R)-Δ≤ h≤ H(P∩ R)+Δ. The intuition comes from the area of finding an additive approximation of the entropy of an unknown distribution in the dual access model <cit.>. Let D be a fixed distribution over a set of values α_1, …, α_N. Each value α_i has a probability D(α_i) which is not known, such that ∑_i=1^N D(α_i)=1. The authors in <cit.> show that if we ask O(log^2(N/Δ)·log N/Δ^2) sample queries in the dual access model, then we can get a Δ-additive approximation of the entropy of D with high probability in O((log^2(N/Δ)·log N/Δ^2)·𝒮) time, where 𝒮 is the running time to get a sample. In the dual access model, we consider that we have a dual oracle for D, which is a pair of oracles (SAMP_D, EVAL_D). When required, the sampling oracle SAMP_D returns a value α_i with probability D(α_i), independently of all previous calls to any oracle. Furthermore, the evaluation oracle EVAL_D takes as input a query element α_i and returns the probability weight D(α_i). Next, we describe how the result above can be used in our setting. The goal in our setting is to find the entropy H(P'), where P'=P∩ R, for a query rectangle R. The colors in u(P') define the distinct values in distribution D.
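To make the dual access interface concrete, the sketch below shows a simplified plug-in estimator built only from the two oracles: `sample_color` plays the role of SAMP_D (a uniform point sample from P∩ R obtained from the sampling range tree, reduced to its color) and `color_count`/`count` play the role of EVAL_D (range counting with 𝒯_i and 𝒯). This is only an illustration of how the oracles are consumed; the estimator actually used, with its refinements and the sample complexity quoted above, is the one of <cit.>.

```python
import math

def estimate_entropy_additive(sample_color, color_count, count, rect, num_samples):
    """Simplified dual-access plug-in estimator: average of log2(1/p(u)) over
    colors u drawn with probability p(u) = |P(u) ∩ R| / |P ∩ R|.

    sample_color(rect): one color drawn proportionally to its frequency in rect (SAMP_D).
    color_count(u, rect), count(rect): range counting queries (EVAL_D).
    """
    total = count(rect)
    if total == 0:
        return 0.0
    acc = 0.0
    for _ in range(num_samples):
        u = sample_color(rect)                  # SAMP_D
        p = color_count(u, rect) / total        # EVAL_D
        acc += math.log2(1.0 / p)
    return acc / num_samples
```

The expectation of each term log2(1/p(u)) under the sampling distribution is exactly the entropy of the color distribution, so the sample mean is an unbiased estimate; controlling its variance is what the cited algorithm's additional machinery is for.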
By definition, the number of colors is bounded by |P'|=O(n). The probability weight is defined as |P'(u_i)|/|P'|. We note that in <cit.> they assume that they know N, i.e., the number of values in distribution D. In our case, we cannot compute the number of colors |u(P')| efficiently. Even though we can easily compute an O(log^d n) approximation of |u(P')|, it is sufficient to use the loose upper bound |u(P')|≤ n. This is because, without loss of generality, we can assume that there exist n-|u(P')| values/colors with probability (arbitrarily close to) 0. All the results still hold. Next we present our data structure to simulate the dual oracle. Data structure For each color u_i∈ U we construct a range tree 𝒯_i on P(u_i) for range counting queries. We also construct another range tree 𝒯 on P for range counting queries, which is independent of the color. Next, we construct a range tree 𝒮 on P for range sampling queries. In particular, by pre-computing the number of points stored in the subtree of each node of the range tree, we can return a sample in a query region efficiently. For more details the reader can check Appendix <ref> and <cit.>, where the authors propose a data structure for finding k samples in a query region efficiently [While it is known how to get k independent weighted samples in a query hyper-rectangle in O(log^dn + klog n) time <cit.>, the overall asymptotic query time of our problem remains the same if we use a range tree as described in Appendix <ref> with O(klog^d n) query time.]. We need O(nlog^dn) time to construct all the range trees, while the overall space is O(nlog^d-1n). Query procedure The query procedure involves the algorithm for estimating the entropy of an unknown distribution in the dual access model <cit.>. Here, we only need to describe how to execute the oracles SAMP_D and EVAL_D in P'=P∩ R using the data structure. * SAMP_D: Recall that SAMP_D returns α_i with probability D(α_i). In our setting, values α_1,…, α_n correspond to colors. So, the goal is to return a color u_i with probability proportional to the number of points with color u_i in P'. Indeed, 𝒮 returns a point p uniformly at random in P'. Hence, the probability that a point with color u_i is found is |P'(u_i)|/|P'|. * EVAL_D: Recall that given a value α_i, EVAL_D returns the probability weight D(α_i). Equivalently, in our setting, given a color u_i, the goal is to return |P'(u_i)|/|P'|. Using 𝒯_i we run a counting query in the query rectangle R and find |P'(u_i)|. Then using 𝒯, we run a counting query in R and we get |P'|. We divide the two quantities and return the result. In each iteration, every oracle call SAMP_D and EVAL_D executes a constant number of range tree queries, so the running time is O(log^d n). The algorithm presented in <cit.> calls the oracles O(log^2(n/Δ)·log n/Δ^2) times to guarantee the result with probability at least 1-1/n, so the overall query time is O(log^d+1 n ·log^2(n/Δ)/Δ^2). We note that if Δ<1/√(n) then the query time is Ω(nlog n). However, it is trivial to compute the entropy in P∩ R in O(nlog n) time by traversing all points in P∩ R. Hence, the additive approximation is non-trivial when Δ≥1/√(n). In this case, log^2(n/Δ)=O(log^2 n). We conclude that the query time is bounded by O(log^d+3n/Δ^2). We conclude with the next theorem. Let P be a set of n points in ℝ^d, where each point is associated with a color.
A data structure of O(nlog^d-1n) size can be computed in O(nlog^d n) time, such that given a query hyper-rectangle R and a real parameter Δ, a value h can be computed in O(log^d+3n/Δ^2) time, such that H(P∩ R)-Δ≤ h≤ H(P∩ R)+Δ, with high probability. This data structure can be made dynamic under arbitrary insertions and deletions of points using well known techniques <cit.>. The update time is O(log^d n). §.§ Multiplicative approximation In this Subsection, we construct a data structure such that given a query rectangle R and a parameter ε, it returns a value h such that (1/(1+ε))H(P∩ R)≤ h≤ (1+ε)H(P∩ R). The intuition comes from the area of finding a multiplicative approximation of the entropy of an unknown distribution in the dual access model <cit.> and the streaming algorithms for finding a multiplicative approximation of the entropy <cit.>. In particular, in this section we extend the streaming algorithm proposed in <cit.> to work in the query setting. We use the notation from the previous Subsection where D is an unknown distribution over a set of values α_1, …, α_N. It is known <cit.> that if we ask O(log N/(ε^2· H')) queries in the dual access model, where H' is a lower bound on the actual entropy of D, i.e., H(D)≥ H', then we can get a (1+ε)-multiplicative approximation of the entropy of D with high probability, in O((log N/(ε^2· H'))·𝒮) time, where 𝒮 is the time to get a sample. We consider that we have a dual oracle for D, which is a pair of oracles (SAMP_D, EVAL_D), as we had in the additive approximation. Similarly to the additive approximation, in our setting we do not know the number of colors in P'=P∩ R or, equivalently, the number of values N in distribution D. However it is sufficient to use the upper bound |u(P')|≤ n considering n-|u(P')| colors with probability (arbitrarily close to) 0. If we use the same data structure constructed for the additive approximation, we could solve the multiplicative approximation as well. While this is partially true, there is a big difference between the two problems. What if the actual entropy is very small, so that H' is also extremely small? In this case, the factor 1/H' will be very large, making the query procedure slow. We overcome this technical difficulty by considering two cases. If H' is large, say H'≥ 0.9, then we can compute a multiplicative approximation of the entropy efficiently applying <cit.>. On the other hand, if H' is small, say H'<0.9, then we use the ideas from <cit.> to design an efficient data structure. In particular, we check if there exists a value a_M with D(a_M)>2/3. If it does not exist then H' is large so it is easy to handle. If a_M exists, we write H(D) as a function of H(D∖{a_M}) using Equation <ref>. In the end, if we get an additive approximation of H(D∖{a_M}) we argue that this is sufficient to get a multiplicative approximation of H'. Data Structure For each color u_i we construct a range tree 𝒯_i over P(u_i) as in the previous Subsection. Similarly, we construct a range tree 𝒯 over P for counting queries. We also construct the range tree 𝒮 for returning uniform samples in a query rectangle. In addition to 𝒮, we also construct a variation of this range tree, denoted by 𝒮̅. Given a query rectangle R and a color u_i, 𝒮̅ returns a point from {p∈ R∩ P| u(p)≠ u_i} uniformly at random. In other words, 𝒮̅ is a data structure over P that is used to return a point in a query rectangle uniformly at random excluding points of color u_i. While 𝒮̅ is an extension of 𝒮, the low level details are more tedious.
We describe 𝒮̅ in Appendix <ref>. The complexity of the proposed data structure is dominated by the complexity of 𝒮̅. Overall, it can be computed in O(nlog^d n) time and it has O(nlog^d n) space. Query procedure First, using 𝒯 we get N=|P∩ R|. Using 𝒮 we get log (2n)/log 3 independent random samples from P∩ R. Let P_S be the set of returned samples. For each p∈ P_S with u(p)=u_i, we run a counting query in 𝒯_i to get N_i=|P(u_i)∩ R|. Finally, we check whether N_i/N>2/3. If we do not find a point p∈ P_S (assuming u(p)=u_i) with N_i/N> 2/3 then we run the algorithm from <cit.>. In particular, we set H'=0.9 and we run O(log n/(ε^2· H')) oracle queries SAMP_D or EVAL_D, as described in <cit.>. In the end we return the estimate h. Next, we assume that the algorithm found a point with color u_i satisfying N_i/N>2/3. Using 𝒮̅ (instead of 𝒮) we run the query procedure of the previous Subsection and we get an ε-additive approximation of H((P∖ P(u_i))∩ R), i.e., the entropy of the points in P∩ R excluding points of color u_i. Let h' be the ε-additive approximation we get. In the end, we return the estimate h=((N-N_i)/N)· h'+(N_i/N)log(N/N_i)+((N-N_i)/N)log(N/(N-N_i)). Correctness It is straightforward to see that if there exists a color u_i containing more than 2/3 of all points in P∩ R then u_i∈ u(P_S) with high probability. For completeness, in Appendix <ref> we prove that this is the case with probability at least 1-1/(2n). Hence, with high probability, we make the correct decision. If there is no such color, then in Appendix <ref> we show that the entropy in this case should be H(P∩ R)>0.9. Hence, O(log n/ε^2) oracle queries are sufficient to derive a (1+ε)-multiplicative approximation of the correct entropy. The interesting case is when we find a color u_i such that N_i/N>2/3 and N_i/N<1 (if N_i/N=1 then H(P∩ R)=0). Using the results of the previous Subsection along with the new data structure 𝒮̅, we get h'∈ [H((P∖ P(u_i))∩ R)-ε, H((P∖ P(u_i))∩ R)+ε] with probability at least 1-1/(2n). We finally show that the estimate h we return is a multiplicative approximation of H(P∩ R). From Equation <ref>, we have H(P∩ R)=((N-N_i)/N)H((P∖ P(u_i))∩ R)+(N_i/N)log(N/N_i)+((N-N_i)/N)log(N/(N-N_i)). Since h'∈ [H((P∖ P(u_i))∩ R)-ε, H((P∖ P(u_i))∩ R)+ε], we get h∈[H(P∩ R) - ε(N-N_i)/N_i, H(P∩ R) + ε(N-N_i)/N_i]. If we show that (N-N_i)/N_i≤ H(P∩ R) then the result follows. By the definition of entropy we observe that H(P∩ R)≥ (N_i/N)log(N/N_i)+((N-N_i)/N)log(N/(N-N_i)). In Appendix <ref> we show that (N-N_i)/N_i≤ (N_i/N)log(N/N_i)+((N-N_i)/N)log(N/(N-N_i)), if 1>N_i/N>2/3. We conclude that h∈[(1-ε)H(P∩ R), (1+ε)H(P∩ R)]. Analysis We first run a counting query on 𝒯 in O(log^d n) time. Then the set P_S is constructed in O(log^d+1n) time, running O(log n) queries in 𝒮. In the first case of the query procedure (no point p with N_i/N>2/3) we run O(log n/ε^2) oracle queries, so in total it runs in O(log^d+1 n/ε^2) time. In the second case of the query procedure (point p with N_i/N>2/3) we run the query procedure of the previous Subsection using 𝒮̅ instead of 𝒮, so it takes O(log^d+3 n/ε^2) time. Overall, the query procedure takes O(log^d+3 n/ε^2) time. Let P be a set of n points in ℝ^d, where each point is associated with a color. A data structure of O(nlog^dn) size can be computed in O(nlog^d n) time, such that given a query hyper-rectangle R and a parameter ε∈(0,1), a value h can be computed in O(log^d+3 n/ε^2) time, such that (1/(1+ε))H(P∩ R)≤ h≤ (1+ε)H(P∩R), with high probability. This structure can be made dynamic under arbitrary insertions and deletions of points using well known techniques <cit.>.
The update time is O(log^d n). §.§ Efficient additive and multiplicative approximation for d=1 Next, for d=1, we propose a deterministic, faster approximate data structure with polylogarithmic query time that returns an additive and multiplicative approximation of the entropy H(P∩ R), given a query rectangle R. Instead of using the machinery for entropy estimation on unknown distributions, we get the intuition from data structures that count the number of colors in a query region R. In <cit.>, the authors presented a data structure to count/report colors in a query interval for d=1. In particular, they map the range color counting/reporting problem for d=1 to the standard range counting/reporting problem in ℝ^2. Let P be the set of n colored points in ℝ^1. Let P̅ be the corresponding set of points in ℝ^2 they construct (initially P̅=∅). For every color u_i∈ U, without loss of generality, let P(u_i)={p_1, p_2,…, p_k} such that if j<ℓ then the x-coordinate of point p_j is smaller than the x-coordinate of point p_ℓ. For each point p_j∈ P(u_i), they construct the 2-d point p̅_j=(p_j, p_j-1) and they add it to P̅. If p_j=p_1, then p̅_1=(p_1, -∞). Given a query interval R=[l,r] in 1-d, they map it to the query rectangle R̅=[l,r]× (-∞, l). It is straightforward to see that a point of color u_i exists in R if and only if R̅ contains exactly one transformed point of color u_i. Hence, using a range tree 𝒯̅ on P̅ they can count (or report) the number of colors in P∩ R efficiently. While this is more than enough to count or report the colors in P∩ R, for the entropy we also need to know (in fact precompute) the number of points of each color u_i in P', along with the actual entropy in each canonical subset. Notice that a canonical subset/node in 𝒯̅ might belong to many different query rectangles R̅ that correspond to different query intervals R. Even though a point of color u_i appears only once in R̅∩P̅, there can be multiple points with color u_i in R∩ P. Hence, there is no way to know in the preprocessing phase the exact number of points of each color present in a canonical node of 𝒯̅. We overcome this technical difficulty by pre-computing, for each canonical node v in 𝒯̅, monotone pairs with approximate values of (interval, number of points) and (interval, entropy) over a sufficiently large number of intervals. Another issue is that entropy is not monotone, so we split it into two monotone functions and we handle each of them separately until we merge them in the end to get the final estimation. Before we start describing the data structure we prove some useful properties that we need later. For a set of colored points P'⊆ P, with N=|P'|, let F(P')=N· H(P')=∑_u_i∈ u(P')N_i·log(N/N_i), where N_i is the number of points in P' with color u_i. We prove the next lemma in Appendix <ref>. The function F(·) is monotonically increasing. Furthermore, F(P')=O(Nlog N), and the smallest non-zero value that F(·) can take is at least log N. Data structure We apply the same mapping from P to P̅ as described above <cit.> and construct a range tree 𝒯̅ on P̅. Then we visit each canonical node v of 𝒯̅. If node v contains two points with the same color then we can skip it because this node will not be returned as a canonical node for any query R̅. Let v be a node such that P̅_v does not contain two points with the same color. Let also x_v be the smallest x-coordinate of a point in P̅_v. Finally, let U_v=u(P̅_v), and P(U_v)={p∈ P| u(p)∈ U_v}. Notice that P(U_v) is a subset of P and not of P̅. We initialize an empty array S_v of size O(log n/ε).
Each element S_v[i] stores the maximum x coordinate such that (1+ε)^i≥ |P(U_v)∩ [x_v,x]|. Furthermore, we initialize an empty array H_v of size O(log n/ε). Each element H_v[i] stores the maximum x coordinate such that (1+ε)^i≥ F(P(U_v)∩ [x_v,x]). We notice that both functions, F(·) and the cardinality of points, are monotonically increasing. For every node of 𝒯̅ we use O(log n/ε) space, so in total the space of our data structure is O((n/ε)log^2 n). In Appendix <ref> we show how we can construct the data structure 𝒯̅ in O((n/ε)log^5 n) time. Query procedure Given a query interval R=[a,b], we run a query in 𝒯̅ using the query range R̅. Let V={v_1, …, v_k} be the set of k=O(log^2 n) returned canonical nodes. For each node v∈ V we run a binary search in array S_v and a binary search in H_v with key b. Let ℓ_v^S be the minimum index such that b≤ S_v[ℓ_v^S] and ℓ_v^H be the minimum index such that b≤ H_v[ℓ_v^H]. From their definitions, it holds that |P(U_v)∩ R|≤ (1+ε)^ℓ_v^S≤ (1+ε)|P(U_v)∩ R|, and F(P(U_v)∩ R)≤ (1+ε)^ℓ_v^H≤ (1+ε)F(P(U_v)∩ R). Hence, we can approximate the entropy of P(U_v)∩ R, defining ℋ_v=(1+ε)^ℓ_v^H/(1+ε)^(ℓ_v^S-1). The next Lemma shows that ℋ_v is a good approximation of H(P(U_v)∩ R). It holds that H(P(U_v)∩ R)≤ℋ_v≤ (1+ε)^2H(P(U_v)∩ R). We have ℋ_v=(1+ε)^ℓ_v^H/(1+ε)^(ℓ_v^S-1). From their definitions, we have that |P(U_v)∩ R|≤ (1+ε)^ℓ_v^S≤ (1+ε)|P(U_v)∩ R|, and F(P(U_v)∩ R)≤ (1+ε)^ℓ_v^H≤ (1+ε)F(P(U_v)∩ R). It also holds that (1+ε)^(ℓ_v^S-1)≤ |P(U_v)∩ R| and (1+ε)^(ℓ_v^S-1)≥|P(U_v)∩ R|/(1+ε). Hence ℋ_v≤ (1+ε)F(P(U_v)∩ R)/(|P(U_v)∩ R|/(1+ε))≤ (1+ε)^2H(P(U_v)∩ R). Furthermore, ℋ_v≥ F(P(U_v)∩ R)/|P(U_v)∩ R|=H(P(U_v)∩ R). We find the overall entropy by merging together pairs of canonical nodes. Notice that we can do it easily using Equation <ref> because all colors are different between any pair of nodes in V. For example, we apply Equation <ref> for two nodes v, w∈ V as follows: ((1+ε)^ℓ_v^Sℋ_v+(1+ε)^ℓ_w^Sℋ_w+(1+ε)^ℓ_v^Slog(((1+ε)^ℓ_v^S+(1+ε)^ℓ_w^S)/(1+ε)^(ℓ_v^S-1)) + (1+ε)^ℓ_w^Slog(((1+ε)^ℓ_v^S+(1+ε)^ℓ_w^S)/(1+ε)^(ℓ_w^S-1)))/((1+ε)^(ℓ_v^S-1)+(1+ε)^(ℓ_w^S-1)). In the end we compute the overall entropy ℋ. The next Lemma shows the correctness of our procedure. If we set ε←ε/(4· c·loglog n), it holds that H(P∩ R)≤ℋ≤ (1+ε)H(P∩ R)+ε, for a constant c>0. We assume that we take the union of two nodes v, w∈ V using Equation <ref>. We can use this equation because nodes v, w do not contain points with similar colors. Let H_1=H(P(U_v)∩ R), H_2=H(P(U_w)∩ R), N_1=|P(U_v)∩ R|, and N_2=|P(U_w)∩ R|. We have ℋ_v,w=((1+ε)^ℓ_v^Sℋ_v+(1+ε)^ℓ_w^Sℋ_w+(1+ε)^ℓ_v^Slog(((1+ε)^ℓ_v^S+(1+ε)^ℓ_w^S)/(1+ε)^(ℓ_v^S-1)) + (1+ε)^ℓ_w^Slog(((1+ε)^ℓ_v^S+(1+ε)^ℓ_w^S)/(1+ε)^(ℓ_w^S-1)))/((1+ε)^(ℓ_v^S-1)+(1+ε)^(ℓ_w^S-1)). Using Lemma <ref>, we get ℋ_v,w≤((1+ε)^4N_1H_1+(1+ε)^4N_2H_2+(1+ε)^2N_1log((1+ε)^2(N_1+N_2)/N_1)+(1+ε)^2N_2log((1+ε)^2(N_1+N_2)/N_2))/(N_1+N_2) and we conclude that ℋ_v,w≤ (1+ε)^4H((P(U_v)∪ P(U_w))∩ R)+(1+ε)^2log(1+ε)^2. Similarly, if we have computed ℋ_x,y for two other nodes x,y∈ V, then ℋ_x,y≤ (1+ε)^4H((P(U_x)∪ P(U_y))∩ R)+(1+ε)^2log(1+ε)^2. If we compute their union, we get ℋ_v,w,x,y≤ (1+ε)^6H((P(U_v)∪ P(U_w)∪ P(U_x)∪ P(U_y))∩ R)+[(1+ε)^4+(1+ε)^2]log(1+ε)^2. In the end of this process we have ℋ≥ H(P∩ R) because all intermediate estimations of the entropy are larger than the actual entropy.
For a constant c, it also holds that ℋ≤ (1+ε)^clog(log n)H(P∩ R)+∑_j=1^clog(log n)/2(1+ε)^2jlog(1+ε)^2. This quantity can be bounded by ℋ≤ (1+ε)^clog(log n)H(P∩ R)+clog(log n)(1+ε)^clog(log n)log(1+ε). We have the factor log(log n) because |V|=O(log^2 n), so the number of levels of the recurrence is O(log(log n)). Next, we show that if we set ε←ε/(4· clog(log n)), then ℋ≤ (1+ε)H(P∩ R)+ε. We have (1+(ε/4)/(clog(log n)))^clog(log n)≤ e^ε/4≤ 1+ε. The first inequality holds because of the well known inequality (1+x/n)^n≤ e^x. The second inequality is always true for ε∈ (0,1). Then we have (1+ε)clog(log n)log(1+ε/(4· clog(log n)))≤ 2clog(log n)log(1+ε/(4· clog(log n))). Next, we show that this quantity is at most ε. Let L=clog(log n) and let f(x)=x-2Llog(1+x/(4L)) be a real function for x∈[0,1]. We have f'(x)=1-2L/(L·ln(16)+x·ln(2)). We observe that ln(16)≈ 2.77 and x·ln(2)≥ 0, so f'(x)≥ 0 and f is monotonically increasing. So f(x)≥ f(0)=0. Hence, for any ε∈[0,1] we have ε-2Llog(1+ε/(4L))≥ 0. We conclude with ℋ≤ (1+ε)H(P∩ R)+ε. We need O(log^2 n) time to get V from 𝒯̅. Then, we run binary searches for each node v∈ V, so we spend O(log^2 n·log(log n/ε))=O(log^2 n·(loglog n)/ε) time. We merge and update the overall entropy in time O(|V|), so in total the query time is O(log^2 n·(loglog n)/ε). Let P be a set of n points in ℝ^1, where each point is associated with a color, and let ε∈(0,1) be a parameter. A data structure of O((n/ε)log^2 n) size can be computed in O((n/ε)log^5 n) time, such that given a query interval R, a value h can be computed in O(log^2 n·(loglog n)/ε) time, such that H(P∩ R)≤ h≤ (1+ε)H(P∩ R)+ε.

§ PARTITIONING The new data structures can be used to accelerate some partitioning algorithms with respect to the (expected) entropy. Let 𝒟 be one of our new data structures over n items that can be constructed in O(P(n)) time, has O(S(n)) space, and given a query range R, returns a value h in O(Q(n)) time such that (1/α)H-β≤ h≤α· H+β, where H is the entropy of the items in R, and α≥ 1, β≥ 0 are two error thresholds. On the other hand, the straightforward way to compute the (expected) entropy without using any data structure has preprocessing time O(1), query time O(n), and it returns the exact entropy in a query range. In most cases we consider the expected entropy to partition the dataset, as this is mostly the case in entropy-based partitioning and clustering algorithms. Besides being a useful quantity bounding both the uncertainty and the size of a bucket, it is also monotone. All our data structures can work for both the entropy and the expected entropy almost verbatim. We define two optimization problems. Let the max-partition problem be the problem of constructing a partitioning with k buckets that maximizes/minimizes the maximum (expected) entropy in a bucket. Let the sum-partition problem be the problem of constructing a partitioning with k buckets that maximizes/minimizes the sum of (expected) entropies over the buckets. For simplicity, in order to compare the running times, we skip the log(n) factors from the running times. Partitioning for d=1 We can easily solve the max-partition problem using dynamic programming: A[i,j]=min_ℓ<i max{A[i-ℓ,j-1], E[i-ℓ+1,i]}, where A[i,j] is the minimum max entropy of the first i items using j buckets, and E[i,j] is the expected entropy of items i through j (a code sketch follows below). Since E[·,·] is monotone, we can find the optimum A[i,j] by running a binary search on ℓ, i.e., we do not need to visit all indexes ℓ<i one by one to find the optimum. Without using any data structure the running time to find A[n,k] is O(kn^2). Using 𝒟, the running time for partitioning is O(P(n)+knQ(n)).
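For illustration, the following Python sketch evaluates the recurrence above for the max-partition problem. Here `expected_entropy(a, b)` is an abstract oracle standing in for a range entropy query over items a..b (one of the data structures 𝒟 in this paper); the array names A and E follow the notation above. For clarity the sketch scans all split points instead of applying the binary search just discussed, so it performs O(kn^2) oracle calls.

```python
import math

def max_partition_dp(n, k, expected_entropy):
    """A[i][j] = minimum over partitions of items 1..i into j buckets of the
    maximum expected entropy of a bucket; expected_entropy(a, b) is an oracle
    for items a..b (1-indexed, inclusive)."""
    INF = math.inf
    A = [[INF] * (k + 1) for _ in range(n + 1)]
    A[0][0] = 0.0
    for j in range(1, k + 1):
        for i in range(1, n + 1):
            for l in range(1, i + 1):            # last bucket is items i-l+1 .. i
                cand = max(A[i - l][j - 1], expected_entropy(i - l + 1, i))
                if cand < A[i][j]:
                    A[i][j] = cand
    return A[n][k]
```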
If we use the data structure from Section <ref> for t=0.5, then the running time is O(kn√(n))=o(kn^2). Next we consider approximation algorithms for the max-partition and sum-partition problems. It is easy to observe that the maximum value and the minimum non-zero value of the optimum solution of the max-partition problem are bounded polynomially in n. Let [l_M, r_M] be the range of the optimum values. We discretize the range [l_M, r_M] by a multiplicative factor (1+ε). We run a binary search on the discrete values. For each value e∈[l_M, r_M] that we consider, we construct a new bucket by running another binary search on the input items, trying to expand the bucket until its expected entropy is at most e. We repeat the same for all buckets and we decide whether we should increase or decrease the error e in the next iteration. In the end the solution we find is within a (1+ε) factor of the max expected entropy in the optimum partitioning. Without using any data structure, we need O(nlog(1/ε)) time to construct the partitioning. If we use 𝒟 we need time O(P(n)+kQ(n)log(1/ε)). If we use the data structure in Subsection <ref> we have partition time O(n+(k/ε^2)log(1/ε))=o(nlog(1/ε)). If we allow a Δ-additive approximation in addition to the (1+ε)-multiplicative approximation, we can use the data structure in Subsection <ref>, having partition time O(n+(k/Δ^2)log(1/ε))=o(nlog(1/ε)). Next, we focus on the sum-partition problem. It is known from <cit.> (Theorems 5, 6) that if the error function is monotone (such as the expected entropy) then we can get a partitioning with (1+ε)-multiplicative approximation in O(P(n)+(k^3/ε^2)Q(n)) time. Hence, the straightforward solution without using a data structure returns a (1+ε)-approximation of the optimum partitioning in O((k^3/ε^2)n) time. If we use the data structure from Subsection <ref> we have running time O(n+k^3/ε^4), which is o((k^3/ε^2)n), and multiplicative error (1+ε)^2. If we set ε←ε/3 then in the same asymptotic running time we have error (1+ε). If we also allow a Δ· n additive approximation, we can use the additive approximation data structure from Subsection <ref>. The running time will be O(n+k^3/(ε^2Δ^2))=o((k^3/ε^2)n). Partitioning for d>1 Partitioning and constructing histograms in high dimensions is usually a very challenging task, since most of the known algorithms with theoretical guarantees are very expensive <cit.>. However, there is a practical method with some conditional error guarantees that works very well in any constant dimension d and it has been used in a few papers <cit.>. The idea is to construct a tree having a rectangle containing all points in the root. In each iteration of the algorithm, we choose to split (on the median in each coordinate, or by finding the best split) the (leaf) node with the minimum/maximum (expected) entropy (see the sketch below). As stated in previous papers, let us make the assumption that an optimum algorithm for either the max-partition or the sum-partition problem is an algorithm that always chooses to split the leaf node with the smallest/largest expected entropy. Using the straightforward solution without data structures, we can construct an “optimum” partitioning in O(kn) time by visiting all points in every newly generated rectangle. Using 𝒟, the running time of the algorithm is O(P(n)+kQ(n)). In order to get an optimum solution we use the exact data structure from Subsection <ref>. The overall running time is O(n^(2d-1)t+1+kn^1-t). This is minimized for n^(2d-1)t+1=kn^1-t⇔ t=t^*=log k/(2dlog n), so the overall running time is O(kn^1-t^*)=o(kn). If we allow a (1+ε)-multiplicative approximation we can use the multiplicative approximation data structure from Subsection <ref>. The running time will be O(n+k/ε^2)=o(kn).
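The greedy tree construction for d>1 described above can be summarized by the following sketch; `expected_entropy(pts)` is again an abstract oracle (an exact or approximate range entropy query), and the median split on a rotating coordinate is one simple choice of split rule. This is a schematic illustration under these assumptions, not the exact procedure of the cited works.

```python
import heapq
from statistics import median

def greedy_partition(points, k, expected_entropy):
    """Keep splitting the leaf with the largest expected entropy until k leaves exist.

    points: list of (coords, color, weight); expected_entropy(pts) is an oracle.
    Returns the list of leaves (each leaf is a list of points)."""
    d = len(points[0][0])
    heap = [(-expected_entropy(points), 0, 0, points)]  # (-entropy, depth, tiebreak, leaf)
    tiebreak = 1
    while len(heap) < k:
        neg_e, depth, _, leaf = heapq.heappop(heap)
        axis = depth % d                                 # rotate through the coordinates
        m = median(p[0][axis] for p in leaf)
        left = [p for p in leaf if p[0][axis] <= m]
        right = [p for p in leaf if p[0][axis] > m]
        if not left or not right:                        # degenerate split: stop here
            heapq.heappush(heap, (neg_e, depth, tiebreak, leaf))
            tiebreak += 1
            break
        for part in (left, right):
            heapq.heappush(heap, (-expected_entropy(part), depth + 1, tiebreak, part))
            tiebreak += 1
    return [leaf for _, _, _, leaf in heap]
```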
If we allow a Δ-additive approximation, then we can use thefrom Subsection <ref> with running time O(n+k/Δ^2)=o(kn). § CONCLUSIONIn this work, we presented efficient data structures for computing (exactly and approximately) the entropy of the points in a rectangular query in sub-linear time.Using our new data structures we can accelerate partitioning algorithms for columnar compression (Example <ref>) and histogram construction (Example <ref>). Furthermore, we can accelerate the exploration of high uncertainty regions for data cleaning (Example <ref>).There are multiple interesting open problems derived from this work. i) Our approximate data structures are dynamicbut our exact data structures are static. Is it possible to have dynamic data structure for returning the exact entropy? ii) We showed a lower bound for designing exact data structures when P∈^d for d≥ 2. Does the lower bound extend for d=1? iii) There is still a gap between the proposed lower bound and upper bound. An interesting problem is to close that gap. iv) Can we extend the faster deterministic approximation data structure from Subsection <ref> in higher dimensions?abbrv§ LOWER BOUND PROOF Lemma <ref>.In the preceding reduction, S_i∩ S_j=∅ if and only if H_i,j=log n_i,j.If S_i∩ S_j=∅ then from the construction of P we have that all colors in P_i∪ P_j' are distinct, so n_i,j=|u(P_i∪ P_j')|. Hence, the entropy H(P_i∪ P_j') takes the maximum possible value which is H(P_i∪ P_j')=∑_v∈ u(P_i∪ P_j')1/n_i,jlog n_i,j=log n_i,j.If H_i,j≠log n_i,j we show that S_i∩ S_j≠∅. The maximum value that H_i,j can take is log n_i,j so we have H_i,j<log n_i,j. The entropy is a measure of uncertainty of a distribution. It is known that the discrete distribution with the maximum entropy is unique and it is the uniform distribution. Any other discrete distribution has entropy less than log n_i,j. Hence the result follows. § OMITTED ALGORITHMS AND DATA STRUCTURES FROM SECTION <REF>§.§ Fast construction of data structure for d=1 In order to construct the data structure we need to compute H_i,j for every interval I_i,j. A straightforward algorithm is the following: We first visit all intervals I_i,i+1 and compute the entropy by traversing all points in P∩ I_i,i+1. Then we repeat the same for intervals I_i,i+2. More specifically, we first make a pass over P and we compute H_i,i+2 for each i={1,3, 5, …}. Then, we make another pass over P and we compute, H_i,i+2 for each i={2,4, 6, …}. We continue with the same way for intervals I_i,i+ℓ. Overall the running time is upper bounded by O(n+∑_ℓ=2^n^1-tℓ·n^1-t/ℓn)=O(n^3-2t). We can improve the construction with the following trick. The overall algorithm remains the same. However, when we compute H_i,i+ℓ, notice that we have already computed H_i,i+ℓ-1. Hence, we can use H_i,i+ℓ-1 and only traverse the points in P∩ I_i+ℓ-1,i+ℓ updating H_i,i+ℓ-1 as we did in the query procedure. Each interval I_i+ℓ-1,i+ℓ contains O(n^t) points so we need only O(n^tlog n) time to find the new entropy. For each ℓ, we need O(n^1-t/ℓn^t) time to find all H_i,i+ℓ for i={1, 1+ℓ, 1+2ℓ,…}. Hence, we need O(ℓn^1-t/ℓn^t) time to compute all entropies H_i,i+ℓ. Overall we can construct our data structure in O(∑_ℓ=1^n^1-tℓ·n^1-t/ℓn^t)=O(n^2-t) time.§.§ Extension to any constant dimension d≥ 1Data Structure. For any dimension d we construct a k-d tree <cit.>, denote it with 𝒜, but we stop the construction after having O(n^1-t) leaf nodes. Each leaf node contains O(n^t) points. Each leaf node v corresponds to a rectangle R_v. 
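Both the query procedures and the incremental constructions in this paper repeatedly update a stored entropy when a small number of points of a known colour are inserted or removed, instead of recomputing it from scratch. The helper below shows one way to perform such an update, derived from the identity H = log N - (1/N)·Σ_i N_i log N_i; it is a generic sketch for illustration and not the exact update equations referenced in the text.

import math
from collections import Counter

def xlogx(x):
    return 0.0 if x == 0 else x * math.log2(x)

def updated_entropy(H, N, n_colour, delta):
    """Entropy after adding `delta` points (possibly negative, for removals) of a
    colour that currently has `n_colour` occurrences in a multiset of N points
    whose entropy is H. Uses S = sum_i N_i log N_i = N (log N - H)."""
    S = N * (math.log2(N) - H) if N > 0 else 0.0
    S += xlogx(n_colour + delta) - xlogx(n_colour)
    N_new = N + delta
    return math.log2(N_new) - S / N_new if N_new > 0 else 0.0

def direct_entropy(bag):
    n = len(bag)
    return sum((c / n) * math.log2(n / c) for c in Counter(bag).values())

# quick check against a direct computation
bag = list("aaabbc")
h = direct_entropy(bag)
assert abs(updated_entropy(h, len(bag), bag.count("b"), 2)
           - direct_entropy(bag + ["b", "b"])) < 1e-9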
Let ℛ be the set of all rectangles defined by the leaf nodes of the k-d tree. For each possible rectangle r over the corner vertices of rectangles in ℛ we compute and store the entropy H_r=H(P∩ r). We compute it by simply visit all points in P∩ r. Let 𝐫 be the set of all possible rectangles r. We construct a modified range tree 𝒯 over the set of rectangles 𝐫 such that given a query rectangle R we find the maximal rectangle r∈𝐫 that lies completely inside R. We can do it by storing the rectangles in 𝐫 as points in R^2d merging their opposite corners. Finally, for each color u_i∈ u(P), we construct a range tree 𝒯_i for range counting queries.There are O(n^1-t) leaf nodes and |𝐫|=O(n^2d(1-t)). Hence, 𝒯 uses O(n^2d(1-t)log^2d-1 n) space, while all 𝒯_i range trees have O(nlog^d-1 n) space. Overall, the space of this simple data structure is O(n^2d(1-t)log^2d-1 n). The data structure can be constructed in O(n^2d(1-t)log^2d n+nlog^d n) time. Query procedure Given a query rectangle R we find the maximal rectangle r∈𝐫 using 𝒯. Using 𝒜 we find the set of nodes V_R in 𝒜 that are partially intersected by R. We know that |V_R|=O(n^(1-t)(1-1/d)) (see <cit.>). Let P_R=P∩ (R∖ r), as we had in the 1d case. We visit each point in P_r and we update the entropy H_r as we did in the 1d case. We need O(log^2d m) time to find r and O(n^(1-t)(1-1/d)log^d n) time to return the entropy. The overall query time is O(log^2d n + n^(1-t)(1-1/d)log^d n).We conclude with the following theorem. Let P be a set of n points in ^d, where each point is associated with a color, and let t∈ [0,1] be a parameter. A data structure of O(n^2d(1-t)log^2d-1 n) size can be computed in O(n^2d(1-t)log^2d n+nlog^d n) time, such that given a query hyper-rectangle R, H(P∩ R) can be computed in O(log^2d n + n^(1-t)(1-1/d)log^d n) time. § FAST CONSTRUCTION ALGORITHM IN ANY CONSTANT DIMENSION.Let L_d be the points in P sorted in ascending order with respect to their d-th coordinate. For each color u_k we construct a range tree 𝒯_k for range counting queries. Furthermore, we construct a range tree 𝒯 for range counting queries (independent of color). Let P_i be a bucket. Assume that we have already computed the entropy for every rectangle that contains c-1 points in P_i. We traverse all rectangles containing c points: Let p be any point in P_i. We assume that p lies in the bottom hyperplane of the hyper-rectangle (with respect to d-th coordinate). Next we find the points that lie in the next 2d-2 sides of the rectangle. In particular we try all possible sets of 2d-2 points in P_i. We notice that each such set, along with the first point p, define an open hyper-rectangle, i.e., a hyper-rectangle whose bottom hyperplane with respect to the d-th coordinate passes through point p and there is no top hyperplane with respect to coordinate d. We find the top-hyperplane by running a binary search on L_d. For each point q∈ P_i we check in the binary search, let r be the hyper-rectangle defined by the set of 2d points we have considered. Using 𝒯, we run a range counting query on r∩ P_i. If |r∩ P_i|<c then we continue the binary search on the larger values. If |r∩ P_i|>c, we continue the binary search on the smaller values. If |r∩ P_i|=c then let q∈ P_i be the point on the top hyperplane we just checked in the binary search. We run another binary search on L_d to find the hyper-rectangle r'⊆ r that contains c-1 points. Again, we use the range tree 𝒯 to find the rectangle r' as we run the binary search on L_d. We have, H(r∩ P_i)=H((r'∩ P_i)∪{q}). 
Let u(q)=u_k. Using 𝒯_k we count n(r',u_k) the number of points in r' with color u_k. Let H be the entropy of H(P_i∩ r') by removing n(r',u_k) points of color u_k from P_i∩ r' as shown in Equation <ref>. Finally, we get the entropy H(P_i∩ r) by updating H, inserting n(r',u_k)+1 points of color u_k, as shown in Equation <ref>.The running time is bounded by O(n^(2d-1)t+1log^d+1 n) time, because we have O(n^1-t) buckets, each rectangle in a bucket contains at most O(n^t) points so we have to check O(n^t) values of c, then we take O(n^t) possible points p, and all sets of size 2d-2 are O(n^(2d-2)t). For each such rectangle we run two binary searches where each step takes O(log^d n) time to run the range counting query.§ OMITTED PROOFS FROM SUBSECTION <REF>Let D be a discrete distribution over m values {α_1,…, α_m} and let D(α_i)>0 for at least two indices i. If there is no index j such that D(α_j)> 2/3, then H(D)>0.9. We have the minimum value of H(D), when D is concentrated over one value. Since there is no index j with D(α_j)>2/3, in the worst case we assume there is index j with D(α_j)=2/3. The rest probability weight 1/3 is assigned over another value α_j' (anything else increases the entropy). Then H(D)≥ D(α_j)log D(α_j) + (1-D(α_j))log1/1-D(α_j)=2/3log3/2+1/3log 3 ≈ 0.918 > 0.9.Let u_i be the color with |P(u_i)∩ R|/|P∩ R|>2/3, and let B be the event that u_i∈ u(P_S). The following holds: [B]≥ 1-1/(2n). Let B_j be the event that the j-th point selected in P_S does not have color u_i. We have [B_j]≤ 1/3. Then we have [⋂_j B_j]≤1/3^|P_S|, since the random variables B_j's are independent. We conclude that [B]=1-[⋂_j B_j]≥ 1-1/3^|P_S|=1-1/2n.If 1>N_i/N>2/3, it holds that N-N_i/N_i≤N_i/NlogN/N_i+N-N_i/NlogN/N-N_i. Let α=N_i/N. We define f(α)=αlog1/α+(1-α)log1/1-α - 1/α+1. We get the first and the second derivative and we have f'(α)=1/α^2+log1/α -log1/1-α, and f”(α)=α^2-α·ln 4+ ln 4/(α-1)α^3ln 2. For 2/3<α<1, the denominator of f”(α) is always negative, while the nominator of f”(α) is positive. Hence f”(α)≤ 0 and f'(α) is decreasing. We observe that f'(0.75)>0 while f'(0.77)<0, hence there is a unique root of f' which is β∈(0.75,0.77). Hence for α≤β f'(α)≥ 0 so f(α) is increasing, while for α> β we have f'(α)≤ 0 so f(α) is decreasing. We observe that f(0.5)=0 and lim_α→ 1 f(α)=0. Notice that 0.5<2/3<β<1, so f(α)≥ 0 for α∈[0.5,1). Recall that 2/3<α< 1 so f(α)≥ 0. The result follows. § RANGE TREES AND SAMPLING We first give a high level overview of range trees and then explain how we can sample uniformly at random in a query rectangle using them.For d=1, the range tree on P is a balanced binary search tree T of O(log n) height. The points of P are stored at the leaves of T in increasing order, while each internal node v stores the smallest and the largest values/coordinates, α_v^- and α_v^+, respectively, contained in its subtree. The node v is associated with an interval I_v=[α_v^-, α_v^+] and the subset P_v=I_v∩ P. For d>1, T is constructed recursively:We build a 1D range tree T_d on the x_d-coordinates of points in P. Next, for each node v∈ T_d, we recursively construct a (d-1)-dimensional range tree T_v on P_v, which is defined as the projection of P_v onto the hyperplane x_d=0, and attach T_v to v as its secondary tree. The size of T in ^d is O(nlog^d-1 n) and it can be constructed in O(nlog^d n) time.For a node v at a level-i tree, let p(v) denote its parents in that tree. If v is the root of that tree, p(v) is undefined. 
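The two inequalities established in the omitted proofs above are easy to sanity-check numerically; the snippet below evaluates the extremal two-point distribution (2/3, 1/3) and scans the second inequality on a grid. It is a quick illustration only, not part of the argument.

import math

def H(probs):
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

# If no colour carries more than 2/3 of the mass, the entropy exceeds 0.9;
# the extremal case used in the proof is the two-point distribution (2/3, 1/3).
assert H([2 / 3, 1 / 3]) > 0.9            # roughly 0.918

# For 2/3 < alpha < 1 the proof shows (1 - alpha)/alpha <= H([alpha, 1 - alpha]).
for i in range(1, 1000):
    alpha = 2 / 3 + i / 3000
    assert (1 - alpha) / alpha <= H([alpha, 1 - alpha]) + 1e-12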
For each node v of the d-th level of T, we associate a d-tuple ⟨ v_1, v_2, …, v_d=u⟩, where v_i is the node at the i-th level tree of T to which the level-(i+1) tree containing v_i+1 is connected. We associate the rectangle □_v=∏_j=1^d I_v_j with the node v. For a rectangle R=∏_i=1^d δ_i , a d-level node v is called a canonical node if for every i∈ [1,d], I_v_i⊆δ_i and I_p(v_i)⊈δ_i. For any rectangle R, there are O(log^d n) canonical nodes in , denoted by 𝒩(R), and they can be computed in O(log^d n) time <cit.>.can be maintained dynamically, as points are inserted into P or deleted from P using the standard partial-reconstruction method, which periodically reconstructs various bottom subtrees. The amortized time is O(log^d n); see <cit.> for details.A range tree can be used to answer range (rectangular) aggregation queries, such as range counting queries, in O(log^d n) time and range reporting queries in O(log^d n + K) time, where K is the output size. The query time can be improved to O(log^d-1 n) using fractional cascading. See <cit.> for details. However, for simplicity, in this work we use the simpler version of it with the term log^d n in the query time. Sampling. A range tree can be used to return a uniform sample in a query rectangle. More formally, the goal is to construct a data structure such that given a query rectangle R, a uniform sample in P∩ R is returned in O(log^d n) time. We construct a standard range tree T on the point set P. For each d-level node v of the tree we precompute and store c(v)=|P∩□_v|, i.e., the number of points stored in the subtree with root v. The space of T remains O(nlog^d-1n) and the construction time O(nlog^d n). We are given a query rectangle R. We run the query procedure in the range tree T and we find the set of canonical nodes 𝒩(R). For each node v∈𝒩(R), we define the weight w_v=c(v)/∑_v'∈𝒩(R) c(v'). We sample one node v from 𝒩(R) with respect to their weights. Then we get a random number in [1,c(v)]. Let k be that number. Using the precomputed counters in the children of v we can recursively find in O(log n) time the point with the k-th smallest d-coordinate among points in P∩□_v. The running time is O(log^d n + log n)=O(log^d n). It is easy to argue that each point has equal probability to be selected. Let p∈ P∩ R, and let p∈ P∩□_v, for a node v∈𝒩(R). The probability of selecting p is exactly c(v)/∑_v'∈𝒩(R)c(v')·1/c(v)=1/|P∩ R|. §.§ Sampling excluding a color Next, we extend the previous data structure to handle the following query: Given a query rectangle R and a color u_j, the goal is to return a uniform sample among the points in (P∩ R)∖ P(u_j).In each d-level node v, we store a hashmap M_v having as keys the colors of the points stored in leaf nodes of the subtree rooted at v, and as values the number of leaf nodes in the subtree rooted at v with color key. In other words, in each node v we store M_v[u_i]=|P_v(u_j)|) for each u_i=u(P_v). We also store the cardinality c_v=|P_v|, as we had before. The modified range tree can be constructed in O(nlog^d n) time and it has O(nlog^d n) space. Given a query rectangle R we get the set of canonical nodes 𝒩(R). For each node v∈𝒩(R) we define the weight w_v=c(v)-M_v[u_j]/∑_v'∈𝒩(R) c(v')-M_v'[u_j]. We sample one node v from 𝒩(R) with respect to the weights w. Then we get a random number in [1,c(v)-M_v[u_j]]. Let k be that number. 
Using the counters and the hashmap M in the children of v we can recursively find in O(log n) time the point with the k-th smallest d-coordinate among points in (P∩□_v)∖ P(u_j). The running time is O(log^d n + log n)=O(log^d n). It is easy to argue that each point has equal probability to be selected. Let p∈ (P∩ R)∖ P(u_j), and let p∈ P∩□_v, for a node v∈𝒩(R). The probability of selecting p is exactly c(v)-M_v[u_j]/∑_v'∈𝒩(R)c(v')-M_v'[u_j]·1/c(v)-M_v[u_j]=1/|(P∩ R)∖ P(u_j)|.§ ADDITIVE AND MULTIPLICATIVE APPROXIMATION FOR D=1Assume that we have a set P'⊆ P with N=|P'| and |u(P')|>2 colors. Then the minimum entropy is encountered when we have |u(P')|-1 colors having exactly one point, and one color having |P'|-|u(P')|+1 points. Let consider any other arbitrary instance. Let u_i be the color with the maximum number of points in P'. We consider any other color u_j≠ u_i having at least 2 points, so |P'(u_i)|≥ |P'(u_j)|≥ 2. We assume that we move one point from color u_j to color u_i and we argue that the new instance has lower entropy. If this is true, we can iteratively apply it, and whatever the initial instance is, we can create an instance as described in the lemma with lower entropy. Hence, the minimum entropy is encountered when we have |u(P')|-1 colors having exactly one point, and one color having all the rest |P'|-u(P')+1 points.Initially, we haveH(P')=∑_ℓ∈ u(P')N_ℓ/NlogN/N_ℓ=∑_ℓ∈ u(P')N_ℓ/N(log N - log N_ℓ)=log N -1/N∑_ℓ∈ u(P')N_ℓlog N_ℓ.The new instance has entropyH'=H(P')-1/N(-N_ilog N_i - N_jlog N_j + (N_i+1)log (N_i+1) + (N_j-1)log (N_j-1)).Next, we show thatH'≤ H(P')⇔ -N_ilog N_i - N_jlog N_j + (N_i+1)log (N_i+1) + (N_j-1)log (N_j-1)≥ 0.We define the functionf(x)=(x+1)log (x+1) - xlog x +(N_j-1)log (N_j-1) - N_jlog N_j,for x≥ N_j≥ 2. We have f'(x)=log(x+1)-log(x)≥ 0 for x>0, so function f is monotonically increasing for x≥ 2. Since x≥ N_j, we have f(x)≥ f(N_j)≥ 0. Hence, we proved that the new instance has lower entropy. In particular if N_i=N_j then the new instance has no higher entropy, and if N_i>N_j then the new instance has strictly lower entropy. Lemma <ref>. The function F(·) is monotonically increasing. Furthermore, F(P')=O(Nlog N), and the smallest non-zero value that F(·) can take is at least log N.Let p∈ P be a point such that p∉ P'. We show that F(P'∪{p})≥ F(P'). If u(p)∉ u(P') it is clear that F(P'∪{p})≥ F(P') because all nominators in the log factors are increasing and a new positive term is added to the sum. Next, we focus on the more interesting case where u(p)∈ u(P'). Without loss of generality assume that u(P')={u_1,…, u_k} and u(p)=u_k. We have F(P'∪{p})=∑_i=1^k-1N_ilogN+1/N_i + (N_k+1)logN+1/N_k+1. For i<k, each term N_ilogN+1/N_i in F(P'∪{p}) is larger than the corresponding term N_ilogN/N_i in F(P') (1). Let g(x)=xlogc+x/x, for any real number c>2. We have g'(x)=(c+x)lnc+x/x - c/(c+x)ln(2). Using the well known inequality ln a≥ 1-1/a, we note that (c+x)ln(1+c/x)≥ (c+x)cx/x(c+x)=c so g'(x)≥ 0 and g(x) is monotonically increasing. Hence we have (N_k+1)logN+1/N_k+1≥ N_klogN/N_k (2). From (1), (2), we conclude that F(P'∪{p})≥ F(P').The inequality in the end follows straightforwardly from Lemma <ref> (we actually show a more general result in Lemma <ref>).The data structure 𝒯̅ can be constructed in O(n/log^5 n) time. The structure of 𝒯̅ can be constructed in O(nlog^2 n) time. For each color 𝐮∈ u(P), we construct a 1d binary search tree T_𝐮. In total, it takes O(nlog n) time. These auxiliary trees are useful for the construction of our main data structure. 
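In one dimension, the role of the auxiliary per-colour trees can be emulated with sorted coordinate lists and binary search, which is all that the construction below requires from them. The class is an illustrative stand-in, not the actual implementation.

from bisect import bisect_left, bisect_right, insort
from collections import defaultdict

class ColourCounter:
    """For each colour, keep the sorted x-coordinates of its points; the number
    of points of that colour inside [lo, hi] is then two binary searches away
    (the role played by the per-colour trees in the construction above)."""
    def __init__(self):
        self.xs = defaultdict(list)

    def insert(self, x, colour):
        insort(self.xs[colour], x)

    def count(self, colour, lo, hi):
        xs = self.xs[colour]
        return bisect_right(xs, hi) - bisect_left(xs, lo)

cc = ColourCounter()
for x, colour in [(1, "red"), (2, "red"), (3, "blue"), (7, "red"), (9, "blue")]:
    cc.insert(x, colour)
assert cc.count("red", 1, 7) == 3 and cc.count("blue", 2, 8) == 1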
A 2d range tree consists of one search binary tree with respect to x-coordinate and for each node in this tree there is a pointer to another tree based on the y coordinates. Hence, it is a 2-level structure. Recall that we need to compute the values in tables S_v, H_v for each node v in the 2-level trees. For each tree in the second level we do the following. We visit the nodes level by level. Assume that we have already computed S_v[i] and H_v[i]. In order to compute the next value in H_v (or S_v), we run a binary search on the x-coordinates of P that are larger than H_v[i] (or S_v[i]). Let x' be the x-coordinate value we check. We visit all colors u stored in the leaf nodes of the subtree with root v and we run another binary search on T_u to get the total number of points of color u in the range [x_u,x']. In that way we check whether the interval [x_u,x'] satisfies the definition of H_v[i+1] (or S_v[i+1]). Based on this decision we continue the binary search on the x-coordinates of P. Using the data structures T_𝐮 to run counting queries when needed, in each level we spend time O(log n/(∑_z∈ℒlog n_z)log n)=O(nlog^3 n/), where ℒ is the set of leaf nodes of the current 2-level tree and n_z is the number of points with color equal to the color of point stored in z.Notice that we run this algorithm only for the nodes of the tree that do not contain points with same colors. The tree has O(log n) levels so for each 2-level tree we spend O(nlog^4 n/) time. We finally notice that the 1-level tree in 𝒯̅ has O(log n) levels and two nodes of the same level do not “contain” any point in common. Hence, the overall running time to compute all values S_v[i], H_v[i] is O(nlog^5 n/). Lemma <ref>. If we set ←/4· c·loglog n, it holds that H(P∩ R)≤ℋ≤ (1+)H(P∩ R)+, for a constant c>0.We assume that we take the union of two nodes v, w∈ V using Equation <ref>. We can use this equation because nodes v, w do not contain points with similar colors. Let H_1=H(P(U_v)∩ R), H_2=H(P(U_w)∩ R), N_1=|P(U_v)∩ R|, and N_2=|P(U_2)∩ R|. We have[1.1]ℋ_v,w=(1+)^ℓ_v^Sℋ_v+(1+)^ℓ_w^Sℋ_w+(1+)^ℓ_v^Slog((1+)^ℓ_v^S+(1+)^ℓ_w^S/(1+)^ℓ_v^S-1) + (1+)^ℓ_w^Slog((1+)^ℓ_v^S+(1+)^ℓ_w^S/(1+)^ℓ_w^S-1)/(1+)^ℓ_v^S-1+(1+)^ℓ_w^S-1.So we get[1.1]ℋ_v,w≤(1+)^4N_1H_1+(1+)^4N_2H_2+(1+)^2N_1log((1+)^2N_1+N_2/N_1)+(1+)^2N_2log((1+)^2N_1+N_2/N_2)/N_1+N_2and we conclude thatℋ_v,w≤ (1+)^4H((P(U_v)∪ P(U_w))∩ R)+(1+)^2log(1+)^2. Similarly if we have computed ℋ_x,y for two other nodes x,y∈ V, thenℋ_x,y≤ (1+)^4H((P(U_x)∪ P(U_y))∩ R)+(1+)^2log(1+)^2. If we compute their union, we getℋ_v,w,x,y≤ (1+)^6H((P(U_v)∪ P(U_w)∪ P(U_x)∪ P(U_y))∩ R)+[(1+)^4+(1+)^2]log(1+)^2. In the end of this process we haveℋ≥ H(P∩ R)because all intermediate estimations of entropy are larger than the actual entropy. For a constant c, it also holds thatℋ≤ (1+)^clog(log n)H(P∩ R)+∑_j=1^clog(log n)/2(1+)^2jlog(1+)^2.This quantity can be bounded byℋ≤ (1+)^clog(log n)H(P∩ R)+clog(log n)(1+)^clog(log n)log(1+).We have the factor log(log n) because |V|=O(log^2 n) so the number of levels of the recurrence is O(log(log n)).Next, we show that if we set ←/4· clog(log n), then ℋ≤ (1+)H(P∩ R)+.We have(1+/4/clog(log n))^clog(log n)≤ e^/4≤ 1+.The first inequality holds because of the well known inequality (1+x/n)^n≤ e^x. The second inequality is always true for ∈ (0,1). Then we have(1+)clog(log n)log(1+/4· clog(log n))≤ 2clog(log n)log(1+/4· clog(log n)).Next, we show that this quantity is at most . Let L=clog( log n) and letf(x)=x-2Llog(1+x/4L)be a function of x∈[0,1]. 
We have f'(x)=1-2L/(Lln(16)+xln(2)). Since ln(16)≈ 2.77 > 2 and xln(2)≥ 0, the denominator satisfies Lln(16)+xln(2)≥ 2L, so f'(x)≥ 0 and f is monotonically increasing. So f(x)≥ f(0)=0. Hence, for any ε∈[0,1] we have ε-2Llog(1+ε/(4L))≥ 0. We conclude with ℋ≤ (1+ε)H(P∩ R)+ε.

§ APPROXIMATE PARTITIONING

It is easy to observe that the maximum value and the minimum non-zero value of the optimum solution are bounded polynomially in n. Let [l_M, r_M] be the range of the optimum values. We discretize the range [l_M, r_M] by a multiplicative factor (1+ε). We run a binary search on the discrete values. For each value e∈[l_M, r_M] that we consider, we construct a new bucket by running another binary search on the input items, trying to expand the bucket until its expected entropy is at most e. We repeat the same for all buckets and we decide whether we should increase or decrease the error e in the next iteration. In the end, the solution we find is within a (1+ε) factor of the maximum expected entropy in the optimum partitioning. Without using any data structure, we need O(nlog(1/ε)) time to construct the partitioning. If we use the data structure, we need time O(P(n)+kQ(n)log(1/ε)). If we use the data structure in Subsection <ref> we have partition time O(n+k/ε^2log(1/ε))=o(nlog(1/ε)). If we allow a Δ additive approximation in addition to the (1+ε) multiplicative approximation, we can use the data structure in Subsection <ref> having partition time O(n+k/Δ^2log(1/ε))=o(nlog(1/ε)).
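A compact sketch of the guessing procedure described in this appendix is given below: for a guessed error e it greedily grows buckets while their expected entropy stays at most e, and the smallest feasible guess among the discretised values is returned. The brute-force expected-entropy oracle and the linear scan over guesses (in place of binary search) are simplifications for illustration.

import math
from collections import Counter

def expected_entropy(points, lo, hi):
    n, m = len(points), hi - lo
    if m == 0:
        return 0.0
    h = sum((c / m) * math.log2(m / c) for c in Counter(points[lo:hi]).values())
    return (m / n) * h

def buckets_needed(points, e):
    # greedily extend each bucket while its expected entropy stays at most e
    # (plain linear scan; the paper replaces it by binary search + the data structure)
    n, used, start = len(points), 0, 0
    while start < n:
        end = start + 1
        while end < n and expected_entropy(points, start, end + 1) <= e:
            end += 1
        used, start = used + 1, end
    return used

def approx_min_max_entropy(points, k, eps):
    lo, hi = 1e-9, math.log2(len(points))        # crude bounds on the optimum value
    guess, guesses = lo, []
    while guess <= hi:
        guesses.append(guess)
        guess *= 1 + eps
    feasible = [g for g in guesses if buckets_needed(points, g) <= k]
    return min(feasible) if feasible else hi

# example: approx_min_max_entropy(list("aabbbccddeee"), 4, 0.1)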
http://arxiv.org/abs/2312.15959v1
{ "authors": [ "Sanjay Krishnan", "Stavros Sintos" ], "categories": [ "cs.DS", "cs.DB" ], "primary_category": "cs.DS", "published": "20231226084724", "title": "Range Entropy Queries and Partitioning" }
Computing Balanced Solutions for Large International Kidney Exchange Schemes When Cycle Length Is Unbounded Márton Benedek1 Péter Biró1 Gergely Csáji1Matthew Johnson2 Daniel Paulusma2 Xin Ye2 January 14, 2024 =========================================================================================================== We prove that for every k≥ 10, the online Ramsey number for paths P_k and P_n satisfies r̃(P_k,P_n) ≥5/3n-2, matching up to an additive constant term the upper bound recently obtained by Bednarska-Bzdęga, given that k is fixed, and disproving a conjecture by Cyman, Dzido, Lapinskas and Lo. § INTRODUCTIONOnline Ramsey numbers, sometimes referred to as online size Ramsey numbers, were firstly introduced by Beck <cit.> and independently by Kurek and Ruciński <cit.>. One way to define them is by a one-player game where the board is the infinite graph K_ℕ, with a 2-colouring of its edges by red and blue, which is hidden from the player. Given two (finite) graphs G and H, at each round the player chooses an edge of the graph and its colour is revealed, and the game ends when either a red copy of G or a blue copy of H is fully revealed. The online Ramsey number r̃(G,H) is the minimum number of rounds needed for the player to end the game for any colouring of the board. A more common but equivalent definition of the online Ramsey number r̃(G,H) is by a combinatorial game R̃(G,H) played by two Players, Builder and Painter, where the board is the infinite vertex set ℕ. At each round, Builder chooses a previously not yet selected edge between two vertices and Painter colours it red or blue. The game ends when either a red copy of G or a blue copy of H is created. Builder's goal is to end the game as fast as possible while Painter's goal is to make it last as long as possible. The online Ramsey number r̃(G,H) is the number of rounds in a game when both players play optimally. This notion is closely related to the size Ramsey number r̂(G,H), the minimum number of edges in a graph for which every 2-colouring of its edges by red and blue contains either a red copy of G or a blue copy of H. In particular, we have r̃(G,H) ≤r̂(G,H). If G=H then it is common to write r̃(G) and r̂(G) instead of r̃(G,G) and r̂(G,G), respectively, for simplicity. A significant attention was given to online Ramsey numbers for paths. Beck <cit.> proved that the size Ramsey number r̂(P_n) is linear in n, implying that the online Ramsey number r̃(P_n) is linear as well, where by P_n we denote the path consisting of n vertices. Grytczuk, Kierstead and Prałat <cit.> and Prałat <cit.> studied the value of r̃(P_k,P_n) and determined it when max{k,n }≤ 9. In <cit.> they gave bounds also for the general case, showing that k+n-3 ≤r̃(P_k,P_n) ≤ 2k+2n-7. Recently, Bednarska-Bzdęga <cit.> proved that r̃(P_k,P_n) ≤5/3n+12k, which improves the upper bound when n is large compared to k. As for the lower bound, Cyman, Dzido, Lapinskas and Lo <cit.> proved that r̃(P_k,P_n) ≥3/2n+k/2-7/2 for every k ≥ 5. They believed that the strategy they suggested for Painter in their proof was asymptotically optimal, and therefore made the following conjecture.For every k ≥ 5, we havelim_n →∞r̃(P_k,P_n)/n=3/2.Other variants of online Ramsey numbers for paths were also studied. 
For example, in <cit.> and in <cit.> the authors studied online Ramsey numbers for ordered paths in infinite complete ordered graphs and hypergraphs, and in <cit.> the authors considered the induced version.In this paper we continue this line of research and study the online Ramsey number for paths r̃(P_k,P_n). The main result of our paper is a lower bound for r̃(P_k,P_n) when k ≥ 10, which matches the upper bound obtained by Bednarska-Bzdęga up to an additive constant term when k is fixed, and therefore also disproves <Ref>.For k≥ 10 we haver̃(P_k,P_n) ≥5/3 n - 2.For every fixed k≥ 10 we havelim_n →∞r̃(P_k,P_n)/n=5/3.§.§ NotationAs mentioned above, for an integer ℓ≥ 1 we denote by P_ℓ the path on ℓ vertices (and ℓ-1 edges). By a round of the game we mean one turn of Builder followed by one turn of Painter. The first round of the game is round 1. We often consider the graph spanned by the blue (red) edges which we call the blue (red) graph. We also say blue (red) component to refer to a connected component in the blue (red) graph. § PROOF IDEAIn order to prove a lower bound for the value of r̃(P_k, P_n ) we provide Painter with an explicit strategy (see <Ref>). Following the proofs of previous results, the first feature of our strategy is that it does not allow any red copy of P_k to be created. This is shown in <Ref>. It is then left for us to prove that, if Painter follows this strategy, it takes at least 5/3n-2 rounds for Builder to create a blue copy of P_n.In our proof for <Ref> we let Painter follow <Ref>, and we denote the set of possible moves in this strategy by . We split all possible moves of Builder into categories, and for each category we instruct Painter with a response. Additionally, for each move of Painter M∈, we assign a variable x_M to count the number of times this move was played throughout the game. We then define a setof parameters of the board which we track throughout the course of the game. These parameters are in fact functions. Given a parameter X ∈ we write X(t) for the value of this parameter in the graph obtained after precisely t rounds have been played, and accordingly we write (t) {X(t)  :  X∈}. We denote by N the number of rounds the game lasts until Builder wins. In particular we can write ∑_M∈ x_M = N.Two of the parameters we consider inplay a key role in our proof. One, denoted by , is the number of blue components in the graph, and the other, denoted by , is the sum of diameters of blue components in the graph. For example, if Painter played a blue move for the first time in the game on round t_0, then we have (t_0) = 1 and (t_0) = 1. More importantly, since a red copy of P_k is never created, at the end of the game there is a blue copy of P_n, and therefore we have (N) - (N) ≥ n-2 (see <Ref>). Hence, our goal is to control the growth of the function (t) - (t), showing that (N) - (N) ≥ n-2 implies that N is large enough.To do so, we analyse the change of values of parameters inover the rounds, in terms of moves in . This yields a set of linear inequalities, where the final values of the parameters in (N) are bounded by linear combinations of the variables {x_M  :  M ∈}. Recalling that ∑_M∈ x_M = N, and considering the set of linear inequalities mentioned above, we can bound N from below by solving the linear programming problem associated. In our specific case, this linear programming problem can be solved by a linear combination of the inequalities on (N), and we define this exact linear combination to be our potential function β (see (<ref>)). 
Naturally, as β is a linear combination of the parameters in , it is also a function of t. In fact, β(t) is bounded from below by 5/3(t) - 1/3(t). Hence, we control the growth of the latter instead of controlling the growth of (t) - (t). Consequentially, we get to control the increment of the potential function β over the rounds.More precisely, our goal is showing that the potential function β satisfies the following. (1) β(0) = 0.(2) β(t) - β(t-1) ≤ 1 for every t ∈ [N].(3) β(N) ≥5/3n - 2.Given the above, we getN ≥∑_t∈ [N]β(t) - β(t-1) = β(N) - β(0) ≥5/3n - 2,implying our main result <Ref>. Item (1) will follow immediately from the definition of β. Item (2) is where we bound the increment of β. We prove this in <Ref> which is our main lemma. Then item (3) will follow easily as β(t) ≥5/3(t) - 1/3(t) and (N) - (N) ≥ n-2. Hence, <Ref> is the heart of our argument.We would like to note that the use of the potential function β is not completely needed in our proof, as one could solve the linear programming problem associated and derive the same bound. However, this would lead to a tedious computation involving many parameters, whereas when using the potential function β it suffices to bound its increments case by case.The reason that <Ref> gives the asymptotically correct lower bound lies in a certain core idea implemented in Painter's moves. To point it out, we need to give some background on the proofs of the upper and lower bounds as given in <cit.>. In the proof of the upper bound in <cit.>, Builder follows a strategy which starts by constructing 1/3n + o(n) many blue copies of P_3, which lasts for n+o(n) rounds of the game. This is Stage 1 of the strategy. Then, in Stages 2 and 3, Builder merges all these blue paths into one blue copy of P_n. For these two stages 2/3n + o(n) more rounds are enough. This results in a total of 5/3n + o(n) rounds, and shows that r̃(P_k,P_n) ≤5/3n + o(n).In <cit.> the authors prove the lower bound r̃(P_k, P_n) ≥3/2n + o(n), for k≥ 5, by providing Painter with the following strategy. Painter colours red every edge built by Builder, unless it creates a red copy of either a P_k or a cycle, in which case Painter colours it blue. This way, no red copy of P_k can ever appear, so Builder wins after creating a blue copy of P_n. By a careful analysis of this process, they prove that it takes Builder at least 3/2n+o(n) rounds to win. Moreover, their analysis of this strategy is optimal, as Builder can indeed create a blue copy of P_n in 3/2n+o(n) rounds when Painter follows their suggested strategy. Builder firstly creates a red copy of P_k-1, and then builds two edges vu and vw adjacent to one of its endpoints v which are both coloured blue by Painter, as illustrated in <Ref>. This constructs a blue path uvw of length 3. From this point on, Builder increases the length of this blue path by 2 every three rounds. In the next round Builder builds the edge xy, where x is the neighbour on the red copy of P_k-1 of its endpoint v, and Painter colours it red. This creates another red copy of P_k-1 which intersects an existing one with k-2 vertices. Then Builder repeats the same moves played for the first red copy of P_k-1, building two edges yu and yz. These two edges are coloured blue by Painter, increasing the length of the blue path by 2 within three rounds. 
Repeating this (as illustrated in <Ref> with one more repetition), Builder indeed wins after 3/2n + o(n) rounds.The main reason that Painter's strategy from <cit.> does not yield a lower bound greater than 3/2n + o(n) is that red copies of P_k-1 are easily built, and once a red copy of P_k-1 is built, Painter has no choice but colouring blue any edge incident to an endpoint of it. This makes the process of building a blue copy of P_n rather quick; as explained above, Builder can increase the length of a blue path by 2 every three rounds. Thus, the first idea of our proof is to prevent Builder from playing this sequence of moves. One possible way for Painter to achieve this, is by colouring blue any edge e=xy where x is a second to last vertex on a red P_k-1. Indeed, if y is not adjacent to any blue edge and Painter colours the edge e=xy blue, then Builder does not increase lengths of blue paths faster than Stage 1 in the proof of the upper bound in <cit.>. However, if y is already adjacent to a blue edge, colouring it blue increases the length of a blue path too fast. Therefore, in this case, we call the vertex y terminal and we instruct Painter to colour the edge red. If Builder later claims another edge incident to y, Painter will be forced to colour it blue. This results, in the worst-case, with the merger of two blue components in 2 moves, which is similar to Stages 2 and 3 in the proof of the upper bound in <cit.>. Having these two cases for how to colour any edge incident to a vertex which is next to an endpoint of a long red path, guarantees that Builder cannot increase lengths of blue paths too fast.By a careful analysis for k large enough, it can be shown that this idea alone prevents Builder from creating a blue copy of P_n in less than ( 3/2+ϵ)n+o(n) rounds for some absolute constant ϵ>0, therefore disproving <Ref>. However, it is not sufficient by itself to give the asymptotically optimal lower bound of 5/3n+o(n) and prove <Ref>. Indeed, by following a strategy as we describe above, Builder can still create disjoint copies of the structure depicted in <Ref>. Doing this, Builder can build 7 blue copies of P_3 in 20 rounds, which is slightly faster than Stage 1 of the upper bound in <cit.>, where Builder creates an average of 1 blue P_3 every 3 rounds. Therefore, it can be shown that when Painter follows this strategy, it is possible for Builder to build a blue P_n in at most (5/3-δ)n+o(n) rounds with δ=5/63. Thus, to prove the optimal bound 5/3n+o(n) from <Ref>, we must address this possible scenario in Painter's strategy. To do so, we label three vertices as centres for each red component having size at least three, and we instruct Painter to never colour an edge blue if this edge is adjacent to a centre vertex (except for a particular case). Doing that, Painter prevents the graph in <Ref> from being created by Builder, and in general, prevents Builder from creating many blue copies of P_3 faster than in Stage 1 of the upper bound <cit.>. For computational reasons, we need three centres per large red component, and not any less.Our strategy for Painter does allow red copies of P_9 to be created, but not red copies of P_10. Hence, we prove the lower bound for the case k=10. Since trivially we have r̃(P_k, P_n ) ≥r̃(P_10, P_n ) for every k≥ 10, our general result follows.This summarises the ideas behind our proof. In <Ref> we describe an explicit strategy for Painter and we state several immediate properties of it. 
In <Ref> we define the set of parameterswhich we track throughout the course of the game, and we define the potential function β which is a linear combination of those. Then we state and prove <Ref> which is our main lemma, and show how it implies <Ref>. In <Ref> we discuss further directions and open problems. § PAINTER'S STRATEGYBefore describing Painter's strategy we introduce some terminology. We say that a red component is large if it consists of at least three vertices. Otherwise, we say it is small. Note that a small red component is in fact a copy of a P_2. Every large red component contains three vertices which we call centres. A vertex of a large red component which is not a centre is either outer or terminal. Every vertex on a red component has at most one label amongst centre, outer and terminal. By a red P_2 component we mean a red component which is isomorphic to P_2.We present Painter's strategy by spelling out the cases where an edge built by Builder is coloured red and the cases where it is coloured blue. In principle, it is enough to spell out the cases where an edge is coloured red and let Painter colour it blue otherwise. However, since we analyse the course of the game based on the set of moves of Painter, and we use them to track the change in values of parameters in(defined in the next section), it will be more convenient to describe both the red and the blue moves explicitly. Then, we show that this strategy is exhaustive, meaning that it provides Painter with a response for each possible move of Builder. [Strategy for Painter]Let xy be an edge built by Builder in the last round. Painter colours it according to the following rules in order of preference from first to last. See <Ref> for illustrations of the moves. Painter colours xy red, in any of the following cases. A Neither x nor y is adjacent to any red edge.B x is an endpoint of a red P_2 component and y is not adjacent to any red edge. Painter labels all three vertices of the new red component as centres.C Both x and y are endpoints of two different red P_2 components. Painter arbitrarily chooses three consecutive vertices on the obtained red P_4 and labels them as centres. Painter labels the fourth vertex as an outer vertex.D x is a centre of a large red component and y is not adjacent to any red edge. Painter labels y as an outer vertex.E x is a centre of a large red component and y is an endpoint of a red P_2 component. Painter labels y and the other endpoint of the red P_2 component as outer vertices.F x is an outer vertex of a large component and y is adjacent to precisely one blue edge and no red edges. Painter labels y as terminal.Painter colours xy blue in any of the following cases. G Either x or y is incident to at least 2 blue edges.H x is an outer vertex of a large red component and y is not adjacent to any red nor blue edge.I x is an outer vertex of a large red component and y is an endpoint of a red P_2 component.J x is a terminal vertex of a large red component.K Both x and y are vertices of large red components. <Ref> is exhaustive. Indeed, if xy is the edge built by Builder in round t of the game, then each of x and y can be exactly one of the following five options: (1) a vertex not contained in any red component; (2) a vertex on a small red component; (3) a centre vertex on a large red component; (4) an outer vertex on a large red component; (5) a terminal vertex on a large red component. <Ref> shows that <Ref> covers all possible combinations of options for x and y. 
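One way to make the strategy above concrete is the following Python skeleton, which classifies the two endpoints of the new edge and applies the moves in their stated order of preference, updating the centre/outer/terminal labels along the way. It is a simplified illustration (for instance, symmetric orientations are tried in a fixed order and ties are resolved arbitrarily) and not the formal object analysed in the rest of the paper.

from collections import defaultdict

class Board:
    """Illustrative bookkeeping for the Painter strategy above. red[v] and
    blue[v] are neighbour sets; label[v] is 'centre', 'outer' or 'terminal'
    for vertices of large red components."""
    def __init__(self):
        self.red = defaultdict(set)
        self.blue = defaultdict(set)
        self.label = {}

    def _add(self, graph, a, b):
        graph[a].add(b)
        graph[b].add(a)

    def red_component(self, v):
        seen, stack = {v}, [v]
        while stack:
            u = stack.pop()
            for w in self.red[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen

    def kind(self, v):
        if not self.red[v]:
            return 'none'
        if len(self.red_component(v)) == 2:
            return 'P2'
        return self.label.get(v, 'outer')   # defensive default; every large-component
                                            # vertex is labelled when it joins

    def paint(self, x, y):
        """Colour the new edge xy following moves A-K in order of preference."""
        for a, b in ((x, y), (y, x)):
            ka, kb = self.kind(a), self.kind(b)
            if ka == 'none' and kb == 'none':                        # move A
                self._add(self.red, a, b)
                return 'red'
            if ka == 'P2' and kb == 'none':                          # move B
                self._add(self.red, a, b)
                for v in self.red_component(a):
                    self.label[v] = 'centre'
                return 'red'
            if ka == 'P2' and kb == 'P2' and b not in self.red_component(a):   # move C
                a_nb, b_nb = next(iter(self.red[a])), next(iter(self.red[b]))
                self._add(self.red, a, b)
                for v in (a_nb, a, b):
                    self.label[v] = 'centre'
                self.label[b_nb] = 'outer'
                return 'red'
            if ka == 'centre' and kb == 'none':                      # move D
                self._add(self.red, a, b)
                self.label[b] = 'outer'
                return 'red'
            if ka == 'centre' and kb == 'P2':                        # move E
                other = next(iter(self.red[b]))
                self._add(self.red, a, b)
                self.label[b] = self.label[other] = 'outer'
                return 'red'
            if ka == 'outer' and kb == 'none' and len(self.blue[b]) == 1:      # move F
                self._add(self.red, a, b)
                self.label[b] = 'terminal'
                return 'red'
        if len(self.blue[x]) >= 2 or len(self.blue[y]) >= 2:         # move G
            self._add(self.blue, x, y)
            return 'blue'
        for a, b in ((x, y), (y, x)):
            ka, kb = self.kind(a), self.kind(b)
            if (ka == 'outer' and kb == 'none' and not self.blue[b]) \
               or (ka == 'outer' and kb == 'P2') or ka == 'terminal':          # moves H, I, J
                self._add(self.blue, x, y)
                return 'blue'
        self._add(self.blue, x, y)                                   # move K
        return 'blue'

board = Board()
print(board.paint(1, 2), board.paint(2, 3), board.paint(1, 4))  # red red red (moves A, B, D)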
A terminal vertex is never of type 0. Indeed, a vertex y is labelled terminal only when move F is played, and for this to happen it must be already incident to precisely one blue edge. The next lemma captures an important and relatively simple property of <Ref>. If Painter follows <Ref> then no red copy of P_10 is ever built in the game R̃(P_10,P_n).We prove this lemma by a series of claims. For two vertices x,y ∈ℕ we write (x,y) for the graph distance between them in the red graph.Every red component is a tree. By <Ref>, whenever Painter colours an edge e=xy red we have that x and y are in two different red components. Therefore, no cycle inside a red component is ever created.Every large red component contains precisely three centres. Moreover, if x and y are two centres in the same red component, then (x,y) ≤ 2.The only moves which create a large red component are moves B and C, and by definition, for either of them, the new large red component contains precisely three centres which are consecutive vertices. Hence, it suffices to show that the vertices which later join a large red component are never labelled as centres. This is indeed true, as moves D, E and F are the only moves where vertices are added to a large red component, and by definition, new added vertices are always labelled as either outer or terminal vertices.If z is an outer vertex in a large red component, then there exists a centre c in the same large red component such that (c,z) ≤ 2. The vertex z was labelled as an outer vertex when Painter played either move C, D or E. For each of those moves it is easy to verify that there exists a centre c of the same large red component for which (c,z) ≤ 2.For every terminal vertex w in a large red component, there exists an outer vertex z in the same large red component for which (w,z)=1. This follows easily as w was labelled terminal when Painter played move F, and therefore shares a red edge with an outer vertex z. We are now ready to finish the proof. Let w_1 and w_2 be two vertices of a red component. Suppose first that w_1 and w_2 are both terminal. Then according to the claims above, there exist outer vertices z_1 and z_2 and centres c_1 and c_2 of the same large red component, such that (w_1,z_1)=1, (w_2,z_2)=1, (z_1,c_1)≤ 2 and (z_2,c_2)≤ 2. Moreover, we know that (c_1,c_2) ≤ 2. Therefore by the triangle inequality we have (w_1,w_2)≤(w_1,z_1) + (z_1,c_1) + (c_1,c_2)+(c_2,z_2) + (z_2,w_2) ≤ 1+2+2+2+1 =8.The other cases can be analysed the same way, and allow us to conclude that (w_1,w_2) ≤ 8 for any two vertices w_1 and w_2 in the same large red component. Thus each red component is a tree of diameter at most 8, and therefore does not contain any copy of P_10.§ POTENTIAL FUNCTION AND MAIN LEMMAAs explained in <Ref>, the potential function β is derived from the solution of the linear programming problem associated to <Ref>. In order to obtain this linear programming problem, we analyse the course of the game, assuming that Painter follows <Ref>, by tracking the values of certain parameters of the board of the game.We say that a vertex of a red component is of type 0 if it is not adjacent to any blue edge and of type 1 if it is adjacent to precisely one blue edge. We consider the following parameters of the board. 
* : Number of blue components.* : Sum of diameters of blue components.* : Number of vertices of type 0 on small red components.* : Number of vertices of type 1 on small red components.* : Number of centre vertices of type 0.* : Number of centre vertices of type 1.* : Number of outer vertices of type 0.* : Number of outer vertices of type 1.* : Number of terminal vertices of type 1.As mentioned in <Ref>, we denote this set of parameters by . Moreover, this set of parameters is in fact a set of functions from ℤ_≥ 0 to ℤ_≥ 0 (in particular, non-negative), where for an integer t ∈ℕ and for X ∈, we write X(t) for the value of the parameter X for the board of the game after precisely t rounds have been completed.We are now ready to define our potential function.β(t) 5/3(t) - 1/3(t) + 1/2(t) + 1/2(t) + 2/3(t) + 1/2(t) + (t) + 2/3(t) + (t). <Ref> follows easily from the next couple of lemmas. Suppose that Painter follows <Ref> and that the game R̃(P_10,P_n) ends after N rounds, for some integer N ∈ℕ. Then we have (N) - (N) ≥ n-2.Let B_1, …, B_ℓ be the non-empty blue components of the graph after N rounds were completed, for some integer ℓ. So we have (N) = ℓ and (N) = ∑_i=1^ℓdiam(B_i), and diam(B_i) ≥ 1 for all i ∈ [ℓ]. Since the game has ended we know by <Ref> that at least one of the blue components contains a copy of P_n, assume without loss of generality that it is B_1, and therefore diam(B_1) ≥ n-1. Hence,(N) - (N) = ∑_i=1^ℓ(diam(B_i) - 1 ) ≥diam(B_1) - 1 ≥ n-2,as required.The next lemma is the key ingredient in our argument. We have β(0) = 0. Moreover, if Painter follows <Ref> and the game R̃(P_10,P_n) lasts for N rounds, for some integer N ∈ℕ, then for every t ∈ [N] we haveβ(t) - β(t-1) ≤ 1.Before proving <Ref>, we show how together with <Ref> it implies <Ref>.Suppose that Painter follows <Ref> and the game R̃(P_10,P_n) lasts for N rounds, for some integer N ∈ℕ. By <Ref> we know that (N) - (N) ≥ n-2 and clearly we also have (N) ≥ 1. Recall that all parameters inreturn non-negative values, so we getβ(N) ≥5/3(N) - 1/3(N) = 5/3((N) - (N) ) + 4/3(N) ≥5/3(n - 2) + 4/3 = 5/3n - 2.By <Ref> we get thatN ≥∑_t∈ [N]β(t) - β(t-1) = β(N).Combining both inequalities above we get N ≥5/3n-2, as we wanted. It is left to prove <Ref>. For any function f : ℕ→ℝ and t ∈ℕ we write Δ f(t)f(t)-f(t-1).As β(0)=0 is trivially true as X(0) = 0 for all X ∈. Hence, it suffices to show that when Painter plays according to <Ref>, for every t ∈ [N] we have Δβ(t) ≤ 1. For this purpose, we consider for each move A to K the possible changes of values of parameters in , and show that they all lead to Δβ(t) ≤ 1. For some of those moves, we will need to distinguish between several cases, depending on the number of blue incident edges to either x or y. However, the case analysis remains elementary.A Let α_0 and α_1 be the number of vertices of type 0 and 1, respectively, amongst x and y at the end of round t-1. Then we have α_0 + α_1 ≤ 2, Δ(t)=α_0, Δ(t)=α_1, and the other parameters do not change. ThereforeΔβ(t)= 1/2α_0 + 1/2α_1 = 1.B Let x' be the vertex such that the edge x'x was red before this move was played. Suppose that at the end of round t-1 there were α_0 and α_1 vertices of type 0 and 1, respectively, amongst x,x', and that y was adjacent to precisely a blue edges. We have α_0 ≤α_0 + α_1 ≤ 2, Δ(t)=-α_0, Δ(t)=-α_1, Δ(t)=α_0+1_a=0, Δ(t)=α_1+1_a=1, and the other parameters do not change. 
ThereforeΔβ(t)=-1/2α_0 + 2/3(α_0+1_a=0) - 1/2α_1 + 1/2(α_1+1_a=1) ≤1/6α_0 + 2/3(1_a=0+1_a=1) ≤1/6· 2+2/3= 1.C Let x',y' be the vertices such that the edges x'x and y'y were red before this move was played. Without loss of generality we may suppose that Painter labels x, x' and y as centres. Let α_0 and α_1 be the number of vertices of type 0 and 1, respectively, amongst x',x,y at the end of round t-1, so we have α_0 ≤α_0 + α_1 ≤ 3. Let a be the number of blue edges incident to y' at the end of round t-1. Then we have Δ(t)=-α_0-1_a=0, Δ(t)=-α_1-1_a=1, Δ(t)=α_0, Δ(t)=α_1, Δ(t) = 1_a=0, Δ(t) = 1_a=1, and the other parameters do not change. ThereforeΔβ(t)= 1/2(-α_0-1_a=0) + 1/2(-α_1-1_a=1) + 2/3α_0 + 1/2α_1 + 1_a=0 + 2/3· 1_a=1≤1/6α_0 +1/2(1_a=0+1_a=1) ≤1/6· 3 +1/2= 1.D Let a be the number of blue edges incident to y at the end of round t-1. Then we have Δ(t) = 1_a=0, Δ(t) = 1_a=1, and the other parameters do not change. ThereforeΔβ(t)= 1_a=0+2/3· 1_a=1≤ 1_a=0+1_a=1≤ 1.E Let y' be the vertex such that the edge y'y was red before this move was played. Let α_0 and α_1 be the number of vertices of type 0 and 1, respectively, amongst y and y'. We have α_0 + α_1 ≤ 2 and Δ(t)=-α_0, Δ(t)=-α_1, Δ(t)=α_0, Δ(t)=α_1, and the other parameters do not change. ThereforeΔβ(t)= -1/2α_0 + - 1/2α_1 + α_0 + 2/3α_1 ≤1/2(α_0+α_1) ≤ 1.F We have Δ(t)=1, and the other parameters do not change. Therefore Δβ(t) = 1. G Define the following functionγ(t) 1/2(t) + 1/2(t) + 2/3(t) + 1/2(t) + (t) + 2/3(t) + (t).Then we have β(t) = 5/3(t) - 1/3(t) + γ(t).Assume without loss of generality that, at the end of round t-1, the vertex x is adjacent to at least two blue edges and y is adjacent to precisely a blue edges. There are several cases to consider. * If y is a vertex on a small red component, then we have Δ(t) = -1_a=0, Δ(t) = 1_a=0 - 1_a=1 and therefore Δγ(t) = -1/21_a=0 + 1/2(1_a=0 - 1_a=1) ≤ 0.* If y is a centre vertex on a large red component, then we have Δ(t) = -1_a=0, Δ(t) = 1_a=0 - 1_a=1 and therefore Δγ(t) = -2/31_a=0 + 1/2(1_a=0 - 1_a=1) ≤ 0.* If y is an outer vertex on a large red component, then we have Δ(t) = -1_a=0, Δ(t) = 1_a=0 - 1_a=1 and therefore Δγ(t) = -1_a=0 + 2/3(1_a=0 - 1_a=1) ≤ 0.* If y is a terminal vertex then we have Δ(t) ≤ 0 and therefore Δγ(t) = 0.* If y is neither of the above then clearly we have Δγ(t) = 0.In any case we get that Δγ(t) ≤ 0. We also have, in any case, that Δ(t) ≤ 0 and Δ(t) ≥ -1. Therefore Δβ(t) ≤1/3≤ 1. H Let a be the number of blue edges incident to x at the end of round t-1. We then have Δ(t) = 1_a=0, Δ(t) = 1, Δ(t) = -1_a=0 and Δ(t) = 1_a=0-1_a=1, and the other parameters do not change. ThereforeΔβ(t)≤5/3-1/3· 1_a=0-1_a=0+2/3(1_a=0-1_a=1) = 5/3 -2/3(1_a=0+1_a=1) =1.I Let a and b be the numbers of blue edges incident to x and y, respectively, at the end of round t-1. We then have Δ(t) = 1-1_a=1-1_b=1, Δ(t) = 1, Δ(t) = -1_a=0, Δ(t) = 1_a=0-1_a=1, Δ(t)=-1_b=0 and Δ(t) = 1_b=0-1_b=1, and the other parameters do not change. ThereforeΔβ(t)≤5/3-1/3(1-1_a=1-1_b=1)-1_a=0+2/3(1_a=0-1_a=1)-1/21_b=0+1/2(1_b=0-1_b=1) = 4/3 -1/3(1_a=0+1_a=1)-1/6· 1_b=1≤ 1.J As mentioned in <Ref>, x is a terminal vertex at the end of round t-1, so it has precisely one blue edge incident to it. In particular, we get Δ(t) ≤ -1. Considering this, and repeating the case analysis from move G for the value of the function γ as defined in (<ref>), we obtain Δγ(t) ≤ -1. 
We also have Δ(t)= 1, Δ(t)≥ -1, and thereforeΔβ(t)≤5/3+1/3 - 1 = 1.K Let c_0, c_1, o_0, o_1 be respectively the numbers of centre vertices of type 0, of centre vertices of type 1, of outer vertices of type 0 and of outer vertices of type 1 amongst x,y. Then we have c_0+c_1+o_0+o_1=2, Δ(t) = 1-c_1-o_1, Δ(t) = 1, Δ(t)=-c_0, Δ(t)=c_0-c_1, Δ(t)=-o_0, Δ(t)=o_0-o_1, and the other parameters do not change. ThereforeΔβ(t)= 5/3-1/3(1-c_1-o_1) -2/3c_0 + 1/2(c_0-c_1) - o_0 + 2/3(o_0-o_1) = 4/3 - 1/6(c_0+c_1) - 1/3(o_0 + o_1)≤4/3 - 1/6(c_0+c_1+o_0+o_1)= 1. Therefore, we get that Δβ(t) ≤ 1 for every t∈ [N], proving the statement. § CONCLUDING REMARKSAs we mentioned in <Ref>, we determine the asymptotic value of r̃(P_k,P_n) for a fixed k ≥ 10, by matching our lower bound from <Ref> with the upper bound from <cit.>. In the case k=3, Cyman, Dzido, Lapinskas and Lo <cit.> showed that r̃(P_3,P_n)= ⌈5(n-1)/4⌉. For k=4, they also showed that r̃(P_4,P_n) ≥⌈7n/5-1 ⌉, and later it was proved independently by Bednarska-Bzdęga <cit.> and Zhang and Zhang <cit.> that in fact r̃(P_4,P_n) = ⌈7n/5-1 ⌉.The most natural continuation of this work would be to obtain exact values of r̃(P_k,P_n) for all values of k and n. However, for 5 ≤ k ≤ 9 or when k k(n) goes to infinity with n, even the asymptotic value of r̃(P_k,P_n) is still unknown, or whether the limit exists.While in this paper we considered paths, another natural line of research would be to study online Ramsey numbers for cycles, or of cycles and paths, which may be closely related to it, as shown in <cit.>.*Acknowledgements. The authors would like to thank their PhD supervisor Professor Béla Bollobás for his support and valuable comments. The first author is funded by Trinity College, Cambridge. The second author is funded by EPSRC (Engineering and Physical Sciences Research Council) and by the Cambridge Commonwealth, European and International Trust. amsplain
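As a small numerical illustration of the bookkeeping in the case analysis above, the snippet below evaluates β from the nine tracked parameters and re-checks two of the computed increments. The assignment of coefficients to parameters is read off from the parameter list and the cases (e.g., moves A and D); the snippet itself is illustrative only and not part of the proof.

from fractions import Fraction as F

# Weights for the nine parameters, in the order they are listed in the paper:
# (sum of blue diameters, number of blue components, small type-0, small type-1,
#  centre type-0, centre type-1, outer type-0, outer type-1, terminal type-1).
WEIGHTS = (F(5, 3), F(-1, 3), F(1, 2), F(1, 2), F(2, 3), F(1, 2), F(1), F(2, 3), F(1))

def beta(params):
    return sum(w * p for w, p in zip(WEIGHTS, params))

zero = (0,) * 9

# Move A with two endpoints of type 0: only the "small type-0" counter grows by 2,
# so beta increases by exactly 1, matching the case analysis.
assert beta((0, 0, 2, 0, 0, 0, 0, 0, 0)) - beta(zero) == 1

# Move D on a type-1 vertex: one new outer vertex of type 1, increment 2/3 <= 1.
assert beta((0, 0, 0, 0, 0, 0, 0, 1, 0)) - beta(zero) == F(2, 3)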
http://arxiv.org/abs/2312.16628v1
{ "authors": [ "Adva Mond", "Julien Portier" ], "categories": [ "math.CO" ], "primary_category": "math.CO", "published": "20231227162646", "title": "The asymptotic of off-diagonal online Ramsey numbers for paths" }
JaColBERT and Hard Negatives, Towards Better Japanese-First Embeddings for Retrieval: Early Technical Report Benjamin Clavié January 14, 2024 ============================================================================================================plain plainMulti-access Edge Computing (MEC) can be implemented together with Open Radio Access Network (O-RAN) over commodity platforms to offer low-cost deployment and bring the services closer to end-users. In this paper, a joint O-RAN/MEC orchestration using a Bayesian deep reinforcement learning (RL)-based framework is proposed that jointly controls the O-RAN functional splits, the allocated resources and hosting locations of the O-RAN/MEC services across geo-distributed platforms, and the routing for each O-RAN/MEC data flow. The goal is to minimize the long-term overall network operation cost and maximize the MEC performance criterion while adapting possibly time-varying O-RAN/MEC demands and resource availability.This orchestration problem is formulated as Markov decision process (MDP). However, the system consists of multiple BSs that share the same resources and serve heterogeneous demands, where their parameters have non-trivial relations. Consequently, finding the exact model of the underlying system is impractical, and the formulated MDP renders in a large state space with multi-dimensional discrete action.To address such modeling and dimensionality issues, a novel model-free RL agent is proposed for our solution framework. The agent is built from Double Deep Q-network (DDQN) that tackles the large state space and is then incorporated with action branching, an action decomposition method that effectively addresses the multi-dimensional discrete action with linear increase complexity. Further, an efficient exploration-exploitation strategy under a Bayesian framework using Thomson sampling is proposed to improve the learning performance and expedite its convergence. Trace-driven simulations are performed using an O-RAN-compliant model. The results show that our approach is data-efficient (i.e., converges significantly faster) and increases the returned reward by 32% than its non-Bayesian version. Moreover, it outperforms Deep Deterministic Policy Gradient by up to 41%.O-RAN, Multi-access Edge Computing, Network Orchestration, Deep Reinforcement Learning, Bayesian Learning § INTRODUCTIONOpen Radio Access Network (O-RAN) is one of the most promising technologies for future RANs <cit.>. In O-RAN, the legacy hardware-based RANs are replaced with softwarized RANs <cit.>. And the Base Station (BS) functions are disaggregated into Radio Unit (RU), Distributed Unit (DU), and Central Unit (CU) <cit.>. The CU and DU can be deployed as Virtual Network Functions (VNFs) and executed through Virtual Machine (VM) instances or lightweight containers across geo-distributed cloud infrastructures <cit.>. It enables flexible deployment and dynamic resource scaling based on users' demands and network conditions, which potentially reduces operational expenses <cit.>. Furthermore, another key enabler of 5G+, Multi-access Edge Computing (MEC), brings serverless computing for diverse 5G+ use cases through Function as a Service that can be implemented over virtualized platforms to run the applications. The MEC is expected to serve heterogeneous services, including emerging delay-sensitive applications such as Tactile Internet applications, and hosting them in the same infrastructures as RANs is a way to deliver the services much closer to the users <cit.>. 
However, jointly orchestrating the O-RAN and MEC configurations while serving the admitted legacy traffic and heterogeneous MEC demands is non-trivial. Indeed, O-RAN enables a flexible selection of centralization degrees through the functional splits <cit.>. However, such flexibility also creates a problem in finding the optimal split for each BS (which functions are at the DU and CU). The optimal split selection is highly affected by computing resource availability, traffic demands, link capacity, and routing delay between RUs, DUs, and CUs <cit.>. Each split also induces a different data load over the xHaul links[The paths connecting a core network (EPC) to CUs, CUs to DUs, and DUs to RUs are backhaul (BH), midhaul (MH), fronthaul (FH), respectively. The integration of these elements is called Crosshaul/xHaul transport network.], has different constraint requirements, and requires different computing resources. Since the DUs and CUs are virtualized, they need to be allocated a certain amount of virtualized computing resources (e.g., virtual CPU, storage and memory), where their optimal allocation depends on the split selections. They also need to be implemented over geo-distributed cloud platforms (servers), which creates a placement problem of the optimal location to execute each DU/CU. In addition to the delay and resource availability dependence, this placement is affected by the decisions of split selections and allocated resources.The problem becomes more complex when the RANs also need to accommodate the MEC demands that have diverse service requirements, i.e., an autonomous vehicle application has a different maximum allowable delay compared to 3D gaming, Virtual Reality, etc.To illustrate, the MEC services can be hosted together with the CUs (as proposed in <cit.>) or even with the DUs (as suggested by experimental studies in <cit.>). Such flexibility raises another placement problem in finding the optimal hosting location for each MEC service, either with the DUs or CUs. Hosting the MEC services with the CUs (at a more centralized server) may have processing cost/performance gain, i.e., due to resource pooling and a more powerful computing server, but it induces a higher routing delay <cit.>. Contrarily, the MEC services can be deployed closer to the users by hosting them with the DUs to reduce the incurred delay <cit.>. When allocating the computing resources and determining the placement locations for these services, we should consider the resource availability, which is affected by the resources, locations, and splits of O-RAN. Clearly, this pairing makes the decisions among O-RAN/MEC configurations intertwined. Furthermore, the legacy traffic/MEC demands and resource availability might vary over time, suggesting to dynamically reconfigure the O-RAN/MEC system to adjust to the varying conditions. However, altering the O-RAN/MEC configurations at runtime may require additional costs or even disrupts the network operations. Such a reconfiguration should be prudently performed by considering long-term consequences. In addition, softwarized RANs have different behavior than legacy RANs, where their configurations often have non-trivial relations, high variance, and dependence on platform and platform load <cit.>. Therefore, it becomes impractical to obtain the perfect model of the underlying system and find the optimal configurations over time. 
To this end, albeit having minimal modeling assumptions about the underlying system, we aim to solve the above O-RAN/MEC orchestration problem by dynamically controlling: (i) the split selection for each BS, (ii) how much the allocated resources for each DU/CU and MEC service, (iii) where to host each DU/CU and MEC service, and (iv) how to route the legacy traffic/MEC demands between RUs, DUs, and CUs. Our objective here these decisions are being made to minimize the long-term operating expenses of the network, and at the same time, to maximize the MEC performance criterion.§.§ Methodology and ContributionsWe propose and study the joint O-RAN/MEC orchestration problem, where it jointly controls the split selections for the BSs, the resource allocation and placement for each DU/CU/MEC service over geo-distributed platforms, and the routing for each data to the hosting locations. Our system model follows the latest proposal of O-RAN architecture with multiple BSs sharing the same computing and link resources. We model the operations as a time-slotted system, where at each time slot, there are arbitrary incoming legacy traffic and MEC demands and resource availability. At each time slot, the control decisions are being selected to minimize the long-term overall operation cost and maximize the MEC performance criterion. This sequential decision-making problem is formulated as Markov decision process (MDP). Since obtaining the exact model for the underlying O-RAN/MEC system is non-trivial and it is possibly unknown in practice, our solution framework is developed using a model-free reinforcement learning (RL) approach, where the O-RAN/MEC system is seen as a black-box environment, and we do not make any particular assumptions about the underlying system and state transition probability distribution. However, the resulting formulation renders a semi-continuous state space with multi-dimensional discrete action space, which causes a curse dimensionality issue. To address the dimensionality issue of the state space with discrete action, we adopt a Double Deep Q-network (DDQN)-based approach <cit.>. Since the action space is also multi-dimensional discrete, the number of estimated outputs for DDQN is expected to grow combinatorially with the number of control decisions (e.g., the number of O-RAN/MEC configurations and BSs).To tackle such prohibitive complexity, we incorporate action branching <cit.>, an action decomposition method that decomposes the multi-dimension action into sub-actions and utilizes shared decision module followed by neural network branches, with DDQN (BDDQN). This decomposition exhibits a linear growth of the number of estimated outputs while still maintaining the shared decisions. The proposed branching in <cit.> assumes that each sub-action has the same dimensional size, while we adopt it suited to our problem, where each sub-action can have a different size. However, solving a high-dimensional MDP typically requires numerous trial-and-error interactions. It can become a time-consuming and costly operation, particularly when reconfiguring the virtualized resources (VMs/containers) of the O-RAN/MEC system is expensive and incurs overhead delay. In this case, an efficient exploration-exploitation strategy plays a crucial role. Motivated by the advantages of using a Bayesian neural network <cit.>, we propose a Bayesian framework-based Thompson sampling to encourage data-efficient exploration and improve learning performance. 
We tailor Gaussian Bayesian Linear Regression (BLR) <cit.> into BDDQN by modifying the output/last layer at each branch to approximate the posterior distribution of the set of Q values. Hence, Bayesian BDDQN not only utilizes estimates of the Q values but also exploits uncertainties over the estimated Q values and employs them to perform Thompson sampling.Further, we evaluate our proposed approach using collected traces from real demands and a range of network topologies. Following the evaluation results, we proved that our approach is data-efficient, where it significantly converges faster and improves the learning performance by up to 32% than its non-Bayesian counterpart. Moreover, it offers the cost-saving benefits by 41% compared to DDPG. We summary our contributions as follows: * We propose and study the joint O-RAN/MEC orchestration problem, where it jointly controls the O-RAN functional splits, the allocated resources and placement locations of O-RAN/MEC services, and the routing for each O-RAN/MEC data flow. This problem is formulated as MDP. * We propose a novel model-free deep RL framework, Bayesian BDDQN, to solve the formulated MDP. It is constructed from action branching of DDQN (BDDQN) to tackle the multi-dimensional and large action space with linear growth of the neural network outputs. Further, we tailor Bayesian learning into BDDQN by modifying the last layer at each branch to exploit uncertainties to enable data-efficient exploration while also improving the learning performance. It is the first work that tailors Bayesian learning with a branching DDQN algorithm. * We perform a battery of tests on our approach using O-RAN compliant model and collected measurement traces from real traffic demands.The rest of this paper is organized as follows. Sec. <ref> discusses our contributions with regards to prior works. In <ref>, we discuss the O-RAN/MEC orchestration model and the formulated MDP problem. In Sec <ref>, we explain how we design our solution approach, Bayesian BDDQN. The detailed experiment setups and simulation results are discussed in Sec <ref>. Finally, we conclude this paper in Sec. <ref>.§ RELATED WORKRAN orchestration. There are some works studied RAN orhestration. For example, by using predetermined models, <cit.> studied altering the functional split at runtime to maximize throughput, to maximize revenue <cit.>, and to minimize the inter-cell interference and FH utilization <cit.>. The work in <cit.> aimed to maximize the served traffic by efficiently schedule the radio/computing resources. Further, other studies proposed model-free approaches such as for orchestrating the radio resource with functional split <cit.>, managing the interplay between computing and radio resource <cit.>, energy-aware BS <cit.>, green RAN-based functional split selections <cit.>, and joint RAN slicing, scheduling and online model training <cit.>. Possibly, <cit.> and <cit.> are the closest RAN orchestration with this paper, where the proposed frameworks jointly control the splits, resources and placement location, and routing for each data flow.However, none of these work study on the resource sharing between RANs and MEC, although their parameters are highly coupled. Joint O-RAN/MEC orchestration. The idea of deploying MEC with O-RAN has been proposed by <cit.> and it is followed by experimental studies of the DU and CU to share their resources with MEC services <cit.>. Recent works study how to manage RANs and MEC parameters together. 
For example, <cit.> proposed an optimal network design framework to jointly optimize the functional split of RANs and MEC deployment, <cit.> proposed additional degrees of freedom, where the operators can also execute their CU/DU/MEC at servers located in several locations, <cit.> expanded the problem in <cit.> with RAN slicing, where the operators can dynamically reconfigure the isolated slices of RAN and MEC functions together, and <cit.> proposed fast and near-optimal algorithms for networking, storage and computation resources in the joint network-MEC system. However, these works relied on fine-tuning models and assumptions, while we adopt model-free approaches. Using model free approaches, <cit.> developed an ML-based predictor that learns to efficiently share the computing resource between RAN and other workflows, such as edge services. In <cit.>, the authors proposed a Bayesian online learning for controlling RAN resources and intelligent edge service parameters, aiming to minimize the overall energy cost while satisfying the performance targets. Our orchestration problem differs from these works. In addition to the resource allocation, we consider for the coupling between the functional splits, hosting placement, routing for each service, and the impact of altering the configurations at runtime. Bayesian RL in networking. One of Bayesian learning approaches for network orchestration that close with our framework is Bayesian contextual bandit. This technique has been applied to minimize the power consumption in virtualized BS <cit.>, to optimize the BS handover <cit.>, to assign CPU of virtualized BS <cit.>, and to jointly control RAN resources with edge AI <cit.>. However, our orchestration problem requires a full RL formulation while these works only consider the exogenous parameters for the RL state. Perhaps, the closest approach to our framework is action branching method of deep RL for network orchestration as studied in <cit.> for O-RAN auto-reconfiguration, <cit.> for controlling the network slicing reconfiguration, and <cit.> for vehicular networks. However, none of these work adopt Bayesian learning in their deep RL approaches. Here, we tailor an efficient Thompson sampling through Gaussian BLR into the Q and target networks of the deep RL algorithm to enable data-efficient exploration and improve the learning performance.§ SYSTEM MODEL AND PROBLEM FORMULATIONIn the latest O-RAN proposals <cit.>, the protocol stacks (or functions) of a BS can be disaggregated into an RU, DU and CU via a functional split. The DU and CU are then hosted as VMs or containers on top of commodity platforms across geo-distributed edge cloud infrastructures.Similar to O-RAN entities, MEC utilizes a virtualized platform to run the application <cit.>. Hence, O-RAN and MEC can share the same infrastructures. ETSI proposed the MEC deployment on the core network (EPC) or as close as the CU, where the RAN and MEC interaction are performed after the PDCP function (and onwards). Recent studies have experimentally validated and suggested hosting the service co-located with the DU, particularly for delay-sensitive applications, where the interaction can be performed from lower functions through an MEC agent <cit.>.Then, in order to manage resources and interaction of the BSs and MEC, O-RAN has envisioned learning-based orchestration, namely RAN Intelligent Controler (RIC) <cit.>. 
The controller further is deployed as an xApp in the Non-Real-Time (Non-RT) for closed loop control greater equal than 1 sec and an rApp in Near-Real-Time (Near RT) RIC for 10 ms to 1 sec. Our framework works as an xApp where it enforces a policy at every period of t = 1, ...., T. The optimal policy at each time t depends on the state, which is observed at the beginning of each period via O1 interface. §.§ ModelFunctional Split & MEC. Let us consider O-RAN/MEC system with K BSs, where each BS-k can be disaggregated into RU-k, DU-k and CU-k.The DUs and CUs are virtualized components (e.g., VM/container workloads) that can be executed at commodity platforms (e.g., white-box servers), while RUs are radio units. The detailed the functional split nomenclature and their requirements have been defined by 3GPP in <cit.>, summarized in Table <ref>. Following the latest O-RAN proposals, the split betweenthe DU and RU, called the Low Layer Split (LLS), can implement Option 7.x (O7) and Option 8 (O8). And, the split between the CU and DU, called the High Layer Split (HLS), can employ Option 2 (O2), Option 4 (O4) and Option 6 (O6). Then, we have four selections of the functional splits for our model: Split 1 (S1) – O2 for the HLS and O7 for the LLS; Split 2 (S2) – O4 for the HLS and O7 for the LLS; Split 3 (S3) – O6 for the HLS and O7 for the LLS; and Split 4 (S4) – legacy C-RAN system, which implements Option 8 (O8), i.e., all the BBU are hosted at the servers as an integrated DU/CU and the RF functions are at the RU. We define the possible splits that can be deployed at each BS by the set 𝒱 = {S1,S2,S3,S4}, which is illustrated in Fig. <ref>. And, the MEC services can be hosted together with these functions (e.g., co-located with the CUs or DUs).Demand. We focus on the uplink demands as the MEC data typically goes upstream, but it can easily be extended for the downlink. The O-RAN/MEC system must serve the incoming demands from legacy traffic (e.g., mobile broadband) and heterogeneous MEC services. Each different type of service is isolated from others via hard slicing, but soft slicing (e.g., spatial multiplexing) is applied among same type of services when they are at the same location/server.We model the incoming demands from the users associated to BS k at time t as follow: i) λ_kc^t (Mbps) is the demand from the aggregated request of MEC service type c ∈𝒞 that need to be transferred to the MEC hosting location; and ii) λ_k0^t (Mbps) is the legacy traffic (type 0) that must be routed from RU-k to the hosting locations of DU-k and CU-k, and then to the EPC/Internet.All these demands associated with BS-k are denoted by λ^t_k = {λ_kc^t :c ∈𝒞̃ := 0 ∪𝒞}, and λ^t = {λ_k :k ∈𝒦} defines the set of the demands originated all the BSs. Network & Server.We model a packet-based O-RAN/MEC network as a graph of 𝒢 = (ℐ, ℰ), where ℐ is the set of physical nodes, which includes the subsets: 𝒦 = {1, ..., K} of RUs, ℒ = {1, ..., L} of hosting platforms (servers), EPC (index 0) and routers; and ℰ is a set of available links. The DUs are usually hosted at the far-edge servers (close to the RUs) while CUs are at more centralized locations. We define ℒ_D⊆ℒ and ℒ_C⊆ℒ as the sets of candidate servers to host DUs and CUs, respectively. The MEC services can be processed at the same hosting servers (co-located) with DUs and CUs[The MEC services are originally proposed to be deployed with EPC or CUs (PDCP function and onwards) <cit.>. 
However, recent study experimentally validated that they can be deployed at the DUs using an MEC agent <cit.>.], and we define the set of available servers that can process MEC requests with ℒ_E ⊆(ℒ_D ∪ℒ_C) ⊆ℒ. Each server has processing capacity P_l, ∀ l ∈ℒ. The execution of the DUs, CUs and MEC services needs a certain number of computing resources, where 𝒳_0 and 𝒳_c, c ∈𝒞 are the set of available configurations (flavors) that determine the amount of the allocated computing resources for CUs/DUs and MEC services, respectively[We use a different set of flavors between O-RAN and MEC as they may require a different type of resources, i.e., legacy traffic affects CPU resources <cit.> while mobile edge AI needs GPU resources for its processing <cit.>. ]. Further, each node is connected through link (i,j) ∈ℰ, which has a data transfer capacity c_ij (Mbps) and delay d_ij (secs). Let p_0 ∈𝒫_k and p_c ∈𝒫_k denote the paths to transfer the data flow of legacy traffic and MEC type c, respectively, from RU-k to the destination server; and 𝒫_k is the set of all paths connecting RU-k to the servers.Then, we have 𝒫 = ∪_k=1^K 𝒫_k as a set of all paths connecting RUs to servers and consider the data flow is unsplittable. The path p_0 ∈𝒫_k is selected by p_0 := p^FH∪ p^MH∪ p^BH. Then, the paths p^FH∈𝒫_k^FH⊆𝒫_k, p^MH∈𝒫_k^MH⊆𝒫_k,andp^BH∈𝒫_k^BH⊆𝒫_k are the shortest path at FH, MH and BH among the set of available paths for RU-k. The path p_c ∈𝒫_k is selected depending on the MEC hosting locations. When the MEC services are deployed with the DUs, p_c := p^FH; otherwise, p_c := p^FH∪ p^MH (co-located with the CUs). Each of these paths also have a total delay of its links, and we denote the respective path delay as d_p_0, d_p_c, d_p^FH,d_p^MH,andd_p^BH. Finally, Fig. <ref> illustrate an example of our system model. §.§ Problem Formulation We model the O-RAN/MEC orchestration as a time-slotted system. At each time slot t, the operator needs to take an action that controls their O-RAN/MEC system configurations to adapt to time-varying legacy/MEC demands and resource availability while respecting constraint requirements. This sequential decision-making problem is formulated as MDP and formalized as follows.§.§.§ ActionWe introduce a set of control variablesv^t := { v_k^t ∈𝒱 : k ∈𝒦} to activate the functional splits for the BSs at time t. This variable determines which functions of the BS to be placed at DU and CU. We define variables x^t := {x_k^t ∈𝒳_0 : k ∈𝒦} and y^t := {y_k^t ∈𝒳_0 :k ∈𝒦} to determine the allocated resources (flavors) for DUs and CUs at the servers. The MEC services can be hosted at the same server with DUs and CUs. And we define the allocated resources for them with z^t := {z_kc^t ∈𝒳_c : c ∈𝒞, k ∈𝒦}, where their placement over DUs/CUs is defined by a set of binary variablesζ^t := {ζ_kc^t ∈{0,1} : c ∈𝒞, k ∈𝒦}, i.e., it is equal to one if the services are co-located with the CUs and otherwise with the DUs.The function placement of the DUs and CUs over the candidate servers are defined by α^t := {α_k^t ∈ℒ_D ⊆ℒ: k ∈𝒦} and β^t := {β^t ∈ℒ_C ⊆ℒ : k ∈𝒦}, respectively. Then, the action at time t can be formalized:a^t:= { v^t, x^t, y^t, z^t α^t, β^t, ζ^t }∈𝒜 := {𝒱×ℒ_D ×ℒ_C ×𝒳_0^2×𝒳_c^|𝒞|×ℤ^|𝒞|}^ |𝒦|,where the action space 𝒜 is a finite set of all pairs control variables associated with all the BSs. 
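To make the structure of this action space concrete, the short Python sketch below (our own illustration, not part of the paper's implementation) enumerates the sub-action spaces of a single BS and contrasts the size of the flat joint action space with the number of per-branch outputs that the action-branching agent of Sec. IV will require; all set sizes (splits, flavors, candidate servers, MEC types) are placeholder values rather than quantities fixed by the model.

# Illustrative sketch (not the authors' code): sub-action spaces of one BS
# for the orchestration action a^t = {v, x, y, z, alpha, beta, zeta}.
# Set sizes below are placeholder values.
import math

SPLITS      = ["S1", "S2", "S3", "S4"]     # v_k: functional split (set V)
FLAVORS_O   = list(range(16))              # x_k, y_k: DU/CU flavors (set X_0)
FLAVORS_MEC = list(range(16))              # z_kc: MEC flavors (set X_c)
DU_HOSTS    = ["l1", "l2", "l3", "l4"]     # alpha_k: candidate DU servers (L_D)
CU_HOSTS    = ["l5", "l6"]                 # beta_k: candidate CU servers (L_C)
MEC_SIDE    = [0, 1]                       # zeta_kc: 1 = with the CU, 0 = with the DU
N_MEC_TYPES = 2                            # |C|: MEC service types per BS

def sub_action_spaces():
    """Sub-action spaces M_k of one BS, in the order they appear in a^t."""
    spaces = [SPLITS, FLAVORS_O, FLAVORS_O, DU_HOSTS, CU_HOSTS]
    for _ in range(N_MEC_TYPES):           # one flavor and one placement per MEC type
        spaces += [FLAVORS_MEC, MEC_SIDE]
    return spaces

spaces = sub_action_spaces()
flat_size     = math.prod(len(s) for s in spaces)   # joint (combinatorial) action space
branched_size = sum(len(s) for s in spaces)         # outputs needed with action branching

print(f"sub-actions per BS      : {len(spaces)}")
print(f"flat joint actions / BS : {flat_size}")
print(f"branched outputs / BS   : {branched_size}")

Across K BSs the flat joint space grows as the product of the per-BS sizes, whereas the branched representation grows only linearly in the number of BSs and sub-actions, which is the dimensionality argument exploited later by BDDQN.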
Note that the paths p_0 and p_c can be directly determined once the action in (<ref>) is known; hence, we treat the routing for each data flow as part of the environment.§.§.§ State The state observation consists of: (i) the incoming demands from legacy traffic and MEC demands λ^t; (ii) the last deployed splits of the BSs v^t-1; (iii) the last allocated DU resources x^t-1 (iv), CU resources y^t-1 and (v) MEC resources z^t-1; and (vi) the last hosting servers/locations for the DUs α^t-1, (vii) CUs β^t-1 and (viii) MEC services ζ^t. It gives us information about time dynamic of our variable interests: (i) the incoming legacy/MEC demands that must to be served; (ii) currently active split at each BS; (iii) the resource availability for DU, (iv) CU and MEC (v) that helps to decide increasing/decreasing the allocated resources;(vi) the availability of the servers to host the DUs, (vii) CUs and (viii) MEC services. Then, we can formalize the state observation of the O-RAN/MEC orchestration at time t as: s^t:= {λ^t, v^t-1, x^t-1, y^t-1, α^t-1, β^t-1, ζ^t-1}∈𝒮 := {ℝ^|𝒞̃|×𝒱×ℒ×𝒳_0^2×𝒳_c^|𝒞|×ℤ^|𝒞|}^|𝒦|, where 𝒮 is the state space. The first point is an exogenous parameter that does not depend on the action, but provides the contextual information about the users' needs. The other points are the network state, which are highly affected the last selected action. §.§.§ Reward & Learning ObjectiveIn this evaluation, the reward is computed from the total network operation cost and MEC performance criterion, which is in this case, we consider the delay cost of elastic MEC services[Note that the inelastic MEC services are considered through hard delay constraints and denoted by the set 𝒞_A ⊆𝒞. Depending on the applications, each service has a different requirement, such arising in Tactile Internet (≤1ms). Then, 𝒞_B ⊆𝒞 is a set of delay-elastic MEC services, and we focus on delay-sensitive services. However, other performance criteria can be trivially tailored (e.g., throughput).]. The source of monetary costs for network operation are accounted from reserving the computing resources, instantiation/reconfiguration cost, penalty cost due to SLA violation, and routing cost. The costs for reserving a certain amount of DU/MEC and CU/MEC computing resources can be calculated as: ∑_k ∈𝒦 f_DM( x_k^t + ∑_c ∈𝒞 (1-ζ_kc^t) z_kc^t ),∑_k ∈𝒦 f_CM( y_k^t + ∑_c ∈𝒞ζ_kc z_kc^t ). The left hand sides in (<ref>) and (<ref>) represent the allocated resources for each DU and CU, while the right hand sides represent the allocated resources for MEC, which depend on the MEC placement location, e.g., whether co-located with the DU or CU. The cost functions f_DM (x_k^t)andf_CM (y_k^t) charge the allocated DU/MEC and CU/MEC resources into monetary units ($). When allocating the resources, the operator must respect the actual resource utilization. If the allocated resources are less than the actual resource utilization, it can cause some declined/disrupted service demands that can trigger monetary compensation due to SLA violation. And we define this cost: ∑_k ∈𝒦f_D( max (0, x̂_k^t - x_k^t, ŷ_k^t - y_k^t) + ∑_c ∈𝒞max (0, ẑ_kc^t - z_kc^t) ).where f_D(.) is the penalty cost function, x̂_k^t is the actual resource utilization of DU-k, ŷ_k^t is the respective utilization of CU-k, and ẑ_kc^t is for MEC service type c associated with BS-k. Note that for simple case, x̂_k^t, ŷ_k^t and ẑ_kc^t can be linear relations with their demands. <cit.>. 
However, in practice, the actual resource utilization also depends on hosting platform, resource availability, and many unknown factors <cit.>. Hence, we characterize them using the collected measurement traces.Further, the penalty can be also induced when the enforced action violates the constraint requirements. For instance, the total allocated resources must not exceed their server capacity: ∑_l ∈ℒ f_D( max( 0,∑_k ∈𝒦( 1_=l (α_k^t) (x_k^t + ∑_c ∈𝒞 (1-ζ_kc^t) z_kc^t) + 1_= l (β_k^t) (y_k^t + ∑_c ∈𝒞ζ_kc^tz_kc^t) ) - P_l).The deployed configurations also have to respect both O-RAN and MEC service requirements. In O-RAN, the incurred delay at HLS/LLS of each BS has to respect the delay requirement of the selected split:∑_p ∈𝒫_k, k ∈𝒦 f_D( max (0, d_p^FH - d_v^L, d_p^MH - d_v^H ), where d_v^H (secs) and d_v^L (secs) are the delay requirements of split v at HLS and LLS, respectively,as seen in Table <ref>. Also, each inelastic MEC service has a delay requirement: ∑_k ∈𝒦, c ∈𝒞_A f_D( max (0, D^c_k - d^th_c), where D_kc and d^th_c are the total delay and the delay threshold for each inelastic MEC service c ∈𝒞_A, respectively. In our evaluation, the total delay D_kc is calculated from the routing delay d_p_c and processing delay. We follow the delay model from an experimental study of OpenFace <cit.> in C-RAN, which can be calculated: D_kc := λ_kc d_p_c + δ_1 ( λ_kcρ_l/z_kc) + δ_2 (ẑ_kc/P_l )^2, where ρ_l is the computational processing capability (Mbps/cycles) of server l. The routing delay is calculated depending on the hosting location: d_p_c := (1-ζ_kc^t) d_p^MH + d_p^FH. And the processing delay is constant at a very low demand, but it increases with the demand for a service that consumes high computing resources. In order to avoid the above SLA violation (due to insufficient resource allocation and constraint violation) or wasted resources (due to resource overprovisioning), the operator can dynamically instantiate additional resource and reconfigure the O-RAN/MEC system following control variables in (<ref>). However, altering the configurations at runtime may require additional costs or even disrupt the network operations. We define the cost that arises for instantiating additional resources:∑_k ∈𝒦f_I( max (0, x_k^t - x_k^t-1) + max (0, y_k^t - y_k^t-1)+ ∑_c ∈𝒞max (0, z_kc^t - z_kc^t-1)).Then, the reconfiguration cost can arise when the operator decides to alter the BS split or reallocate a new flavor:∑_k ∈𝒦 f_R( |x_k^t - x_k^t-1| + |y_k^t - y_k^t-1| + ∑_c ∈𝒞 |y_kc^t - y_kc^t-1| ).This cost also arises when migrating the MEC hosting location between DU and CU:∑_k ∈𝒦 f_R( ∑_c ∈𝒞 z_kc^t |ζ^t_kc-ζ_kc^t-1| ),or migrating the DU/CU resources to a new server location:∑_k ∈𝒦 f_R((x_k^t + ∑_c ∈𝒞 (1- ζ_kc^t) z_kc^t ) 1_≠α^t-1(α_k^t) + (y_k^t + ∑_c ∈𝒞ζ_kc^t z_kc^t ) 1_≠β_k^t-1(β_k^t)), where f_I(.)andf_R(.) are the instantiation and reconfiguration cost functions.When moving the CU/DU/MEC instances into a new hosting location, the whole resources of an instance have to be migrated. A soft migration method can be applied to avoid disrupted operation by creating a duplicate instance that has the same functionality with the old one. Hence, all the new created resources at the new location are accounted for the reconfiguration cost. 
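As a concrete illustration of the delay model above, the following sketch (our own illustrative Python, with placeholder parameter values, not the paper's implementation) evaluates D_kc for a single MEC flow hosted either with the DU or with the CU; the routing term follows the placement description, i.e., only the FH path is traversed when the service is co-located with the DU, and the FH plus MH paths when it is co-located with the CU.

# Illustrative sketch of D_kc = lam*d_route + delta1*(lam*rho/z) + delta2*(z_hat/P)^2.
# All parameter values are placeholders, not measurements.
def mec_delay(lam, z_alloc, z_used, d_fh, d_mh, hosted_at_cu,
              rho=1.0, P_l=20.0, delta1=1.0, delta2=1.0):
    """Total delay of one MEC flow.
    lam     : demand lambda_kc (Mbps)
    z_alloc : allocated computing resource z_kc (RCs)
    z_used  : actual utilisation hat{z}_kc (RCs)
    d_fh/mh : fronthaul / midhaul path delays (s)
    """
    d_route = d_fh + (d_mh if hosted_at_cu else 0.0)      # routing delay d_{p_c}
    d_proc  = delta1 * (lam * rho / max(z_alloc, 1e-9))   # load vs. allocated resources
    d_load  = delta2 * (z_used / P_l) ** 2                # grows with server utilisation
    return lam * d_route + d_proc + d_load

# Example: a 10 Mbps flow with 4 RCs allocated, hosted with the DU vs. with the CU.
print(mec_delay(10.0, 4.0, 3.5, d_fh=1e-4, d_mh=3e-4, hosted_at_cu=False))
print(mec_delay(10.0, 4.0, 3.5, d_fh=1e-4, d_mh=3e-4, hosted_at_cu=True))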
However, when instantiating/reallocating a new resource at the same server, the raised overhead cost is calculated from the difference between the old and new allocated resource <cit.>.In addition to the above computing-related resource managements, we also have the routing cost for reserving a bandwidth to transfer the data flow to the destination, which can be denoted:∑_ k ∈𝒦, p ∈𝒫_kf_H( r_p^FH, v + r_p^MH, v + r_p^BH, v) where r_p^FH, v, r_p^MH, v,andr_p^BH, v are the transferred data flow over FH, MH and BH, and f_H(.) is the routing cost function. This routing cost depends on the selected splits, incurred data flows (see Table <ref>), and hosting locations. Reward.We define the reward with the case where the operator aims to minimize the overall operation cost and maximize MEC performance criterion. And we define this through a scalarized reward as: r(s^t, a^t ) := -J(s^t, a^t) + ηB(D(s^t, a^t )).where J(s^t, a^t) is the overall operation cost, parameter η defines the relative importance of the delay cost D(s^t, a^t ) := ∑_k ∈𝒦, c ∈𝒞_B D_kc (s^t, a_kc^t) to the operation cost, and B is a smooth function that models the delay cost associated to elastic MEC services. The overall network cost J(s^t, a^t) is accounted from outputs of the cost functions f_DM, f_CM, f_D, f_I, f_R andf_H. For simplicity, we assume that these costs functions are proportional with their inputs and introduce the respective coefficients κ_DM, κ_CM, κ_D, κ_I, κ_R and κ_H ($/units) that charge every unit of the input into monetary units ($), e.g., f_DM (v) := κ_DM v. Similarly, the smooth function B(.) is also assumed as a linear function that maps the incurred delay into a monetary value (e.g., negative reward). Objective. The objective of our learning framework is to find an optimal policy that takes a sequence of actions from the action space given a sequence of state observations, which maximize the expected long-term accumulated reward starting at time slot τ, 𝔼_π∑_τ = 0^∞[γ^τ r^τ + t | π], as: π^*(s) := max𝔼_π∑_τ = 0^∞[γ^τ r^τ + t | π], where the discount γ is set to γ = 1 during the online operation; otherwise, γ∈ (0,1].§ BAYESIAN BDDQN ALGORITHMWe design our solution framework following a model-free RL paradigm, which treats the O-RAN/MEC system as a black-box environment and imposes no assumptions about the system state and state transition probability distribution. However, the formulated MDP raises challenging dimensionality issues because the state space is semi-continuous and the action space is multi-dimensional discrete. To address such a challenging RL problem, we proposed a novel agent called Bayesian BDDQN.It is built from DDQN <cit.> to address an RL problem that has a large state space with discrete action space. Since the discrete action space is multi-dimensional that consists of multiple degrees of freedom, it renders a combinatorial growth of the number of possible actions that DDQN needs to estimate. Hence, we incorporate an action decomposition method through action branching <cit.> into DDQN (BDDQN) to reduce the complexity into a linear increase. Then, we employ an efficient exploration-exploitation strategy using a Thompson sampling by tailoring a Bayesian framework into BDDQN; hence, it not only exploits the estimated Q functions but also utilizes their uncertainties <cit.>. 
This strategy becomes essential, particularly when an efficient pre-training model may not be available and performing trial-and-error interactions with the environment becomes time-consuming and costly.The detailed design of our proposed Bayesian BDDQN is discussed as follows. §.§ DDQN Let us define the optimal action-value function (Q function) Q^*(s,a) as the maximum expected reward after observing some sequences s, then following some policies π and taking some actions a: Q^*(s,a) := max_π𝔼 [ ∑_τ^∞γ r^τ+t | s^t = s, a^t = a]. We can find the optimal policy π^* := 𝔼_s ∼ℰ[r + γmax_a' Q(s', a') | s',a' ] if the the optimal value Q(s', a') given the sequences at the next time slot s' for all the possible actions a' is known. Since finding the optimal Q function via the value iteration method is impractical, the Q function can be estimated through a function approximator such as a neural network <cit.>. The estimated Q function parameterized by a neural network (Q-network) with weights θ can be represented as: Q(s,a; θ) ≈ Q(s,a) and trained by minimizing the loss function: L(θ) := 𝔼_s, a,r, s' ∼𝒟[ u - Q(s, a; θ) ], where u is the Temporal Difference (TD) target and the transition {s,a,r,s'} is collected by a random sampling from the stored experience 𝒟. The TD target of DQN <cit.> is represented by:u^DQN := 𝔼_s' ∼𝒮 [r + γmaxQ̃(s', a; θ̃)],where Q̃(s', a'; θ̃) is the target network parameterized by weights θ̃. The TD target in DQN is frequently overestimated in relation to the actual Q-function. Thus, this overestimation issue is addressed by using Double DQN (DDQN) <cit.>, which modifies the TD target into: u^DDQN := 𝔼_s' ∼𝒮 [r + γQ̃(s', a'max Q(s', a'; θ); θ̃)].§.§ BDDQNAlthough DDQN can effectively address many RL applications with a large state space with discrete action <cit.>, it does not intend to handle a multi-dimensional discrete action space, such as arising in our problem. It renders a combinatorial growth in the number of estimated Q values with the increase of control decisions (e.g., O-RAN/MEC configurations and BSs).We discuss how to adopt an action decomposition method through action branching <cit.> with DDQN (BDDQN) to convert such a combinatorial growth into a linear increase. We define ℳ_k := { v_k^t, x_k^t, y_k^t, z_kc^t α_k^t, β_k^t, ζ_kc^t : c ∈𝒞} as the set of all control variables from the action set defined in (<ref>) that are associated with each BS-k; and M_k = |ℳ_k|. Then, we can decompose the action a into sub-actions a_km, ∀ m ∈ℳ_k, ∀ k ∈𝒦, where each sub-action represent each control variable at every BS[From this point, we use the term sub-action to refer each control variable.]. Hence, the action in (<ref>) can also be represented by a := {a_km : m ∈ℳ_k, k ∈𝒦}. Each of sub-actions also takes values from a finite set of the sub-action space 𝒜_km⊆𝒜 that describes the m-th control space associated with BS-k. As the RL problem has M_k sub-actions at every BS, the number of possible actions to be estimated when directly applying DDQN becomes ∏_k=1^K∏_c=1^M_k |𝒜_km|. By adopting an action decomposition method such as action branching, the estimated Q values can be reduced into ∑_k = 1^K ∑_c = 1^M_k |𝒜_km|. The initial method outlined in <cit.> has effectively addressed applications with a discretized continuous action space, but its effectiveness has not yet been proven in scenarios where the action space is inherently multi-dimensional. Additionally, they still assumed that every sub-action space has the same dimensional size. 
Hence, we can not directly adopt it to our problem. We describe how to tailor action branching to DDQN suited for our RL problem.We use the common state s (defined in (<ref>)).Then, the Q value corresponds to sub-action a_km at common state s can be denoted as Q_km (s, a_km) and estimated following the Q network in Fig. <ref>. The TD target of BDDQN is set as a single global learning target (e.g., u_km := u, ∀ k ∈𝒦, ∀ m ∈ℳ) and similar to the DDQN TD target in (<ref>), but we average all the dimensions of the sub action: u := r +γ1/K∑_k=1^K 1/M_k∑_m=1^M_kQ̃_km( s', a_km∈𝒜_kmmax Q_km(s', a_km) ), where Q̃_km is the target network. Then, the Q network is trained to minimize the following loss function: L(θ):= 𝔼_s, a, r, s' ∼𝒟[ 1/K∑_k = 1^K 1/M_k∑_m=1^M_k [u - Q_km(s, a_km; θ) ]^2 ].BDDQN Architecture. Fig. <ref> shows the Q-network architecture of DDQN. BDDQN is built from an input layer, a shared representation segment with several hidden layers, and neural network branches. The input layer is constructed from a Linear layer with ReLU activation function and has an input size |s| to receive the common state observation s. Then, we build the shared representation segment using two fully connected Linear layers (each layer with ReLU activation). The outputs of this shared segment become the input of neural network branches, where each branch aims to estimate the Q-value Q_km(s, a_km). And we simply apply a Linear layer that has an output size of |𝒜_km|.Since BDDQN does not take uncertainties of the estimated Q values into account (e.g., it only computes a point of estimates), it adopts an ϵ-greedy strategy to select the action, i.e., a :=[ a_k1maxQ_k1 (s, a_k1) , ...,a_KM_KmaxQ_KM_K (s, a_K,M_K) ] by probability 1- ϵ and otherwise, to select randomly. §.§ A Bayesian Framework for BDDQNAlbeit BDDQN is suited to address a large state space with a multi-dimensional action space, one challenging issue rises in our problem is to perform an effective exploration-exploitation strategy. Note that our agent tries to control virtualized resources (e.g., VMs), where trial-and-error interactions with environment can be a time-consuming and costly. And, the pre-training models also may not be always accurate or available. Therefore, we adopt an efficient Thomson sampling-based method under a Bayesian framework for solving our high dimensional RL problem. The idea is to remove the last Linear layer at each branch and replace it with the feature representation layer, where we deploy BLR on top of it. In BDDQN, the Q and target networks are constructed following a deep neural network architecture illustrated in Fig <ref>. It utilizes a Linear layer for the last (output) layer at each branch and the agent learns the Q function through empirical estimates of the regression problem in (<ref>). Therefore, the Q function (corresponding to possible sub-actions of each branch) can be represented as a linear transformation of features as: Q_km(s,a_km) := ϕ_km(s;θ)^⊤ω_a_km, where ϕ_km(s;θ) ∈ℝ^d is the feature representation of the Q network and ω_a_km∈ℝ^d, ∀ a_km∈𝒜_km is the parameters of the last linear layer for each possible sub-action. Since The target network follows the same architecture with the Q network, we can redefine (<ref>) by: u := r +γ1/K∑_k=1^K 1/M_k∑_m=1^M_kϕ̃_km(s';θ̃)^⊤ω̃_â_km,whereϕ̃_km(s;θ̃^⊤) and ω̃_â_km are the feature representation and last layer parameters for target network; and we have â_km := max_a_kmϕ_km(s';θ)^⊤ω_a_km. 
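For concreteness, the following PyTorch-style sketch (ours, with placeholder layer sizes) shows the shared trunk, the per-branch feature layers producing phi_km(s), and the per-sub-action weight vectors omega kept outside the back-propagated parameters, since in the Bayesian variant they are obtained from the BLR posterior described in the equations that follow rather than learned by gradient descent.

# Illustrative PyTorch sketch (not the authors' implementation) of the branching
# Q-network with a shared representation segment and per-branch feature heads.
import torch
import torch.nn as nn

class BranchingQNet(nn.Module):
    def __init__(self, state_dim, branch_sizes, hidden=256, feat_dim=128):
        super().__init__()
        self.trunk = nn.Sequential(               # input layer + shared representation
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # one feature head phi_km per sub-action dimension (branch)
        self.feature_heads = nn.ModuleList(
            [nn.Sequential(nn.Linear(hidden, feat_dim), nn.ReLU())
             for _ in branch_sizes]
        )
        # last-layer weights omega_{a_km}: one feat_dim vector per possible sub-action;
        # initialised arbitrarily here, drawn from the BLR posterior in the Bayesian agent
        self.omegas = [0.01 * torch.randn(n, feat_dim) for n in branch_sizes]

    def features(self, s):
        h = self.trunk(s)
        return [head(h) for head in self.feature_heads]   # list of phi_km(s)

    def q_values(self, s):
        # Q_km(s, a_km) = phi_km(s)^T omega_{a_km} for every sub-action of every branch
        return [phi @ w.T for phi, w in zip(self.features(s), self.omegas)]

# Example: one BS with branches {split, DU flavor, CU flavor, DU host, CU host}.
net = BranchingQNet(state_dim=32, branch_sizes=[4, 16, 16, 4, 2])
qs = net.q_values(torch.randn(1, 32))
action = [int(q.argmax(dim=1)) for q in qs]   # greedy sub-action per branch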
Clearly, the regression in (<ref>) results in a linear regression problem of the last layers. Instead of solving this regression directly (e.g., using a point of estimate), Bayesian BDDQN uses Gaussian BLR, which renders the approximated posterior on the weight parameters ω_a_km and eventually the Q function.Let consider the prior and likelihood choices conjugates to each other, i.e., the posterior resulting from the Bayesian updating process is in the same parametric family as the prior (e.g., Gaussian distribution). Hence, the posterior update can be computationally tractable since there exists a closed form. Given the experience replay buffer 𝒟 = {s^τ, a^τ, u^τ}, we build disjoint datasets: 𝒟 = ∪_k=1^K ∪_m=1^M_k∪_a_km=1^|𝒜_km|𝒟_a_km, where 𝒟_a_km is the datasets correspond to sub-action a_km. Then, our interest is to approximate the posterior distribution of ω_a_km and correspondingly Q_km(s,a_km): ℙ(ω_a_km, 𝒟_a_km) and ℙ(Q_km(s,a_km), 𝒟_a_km). Following BLR, for each sub-action a_km and the corresponding dataset 𝒟_a_km, we construct u_a_km∈ℝ^|𝒟_a_km| as the concatenation of target values in set 𝒟_a_km and a matrix Φ_a_km from a concatenation of feature column vectors corresponds to each branch as Φ_a_km = {ϕ(s_i) }_i=1^|𝒟_a_km|∈ℝ^d × |𝒟_a_km|. Then, the posterior distribution of ω_a_km can be represented as: ω_a_km∼𝒩 (μ_a_km, Σ_a_km), and the mean and variance can be computed as:μ_a_km = 1/σ^2_ϵΣ_a_kmΦ_a_km𝐮_a_kmΣ_a_km = ( 1/σ^2_ϵΦ_a_kmΦ_a_km^⊤+ 1/σ𝕀)^-1 where 𝕀 is identity matrix and σ_ϵ is the standard deviation of bias. By utilizing (<ref>), we can have Q_km(s, a_km) | 𝒟_a_km := ω_a_km^⊤ϕ_a_km(s).Let ω_km = {ω_a_km: a_km∈𝒜_km}, ω̃_km = {ω̃_a_km: a_km∈𝒜_km}, μ_km = {μ_a_km: a_km∈𝒜_km}, and Σ_km = {Σ_a_km: a_km∈𝒜_km}. We perform Thompson sampling for the exploration-exploitation strategy by adopting (<ref>). At every time slot, each sub-action can be selected by:a_km := max_a_kmω_km^⊤ϕ_km (s;θ), and a := { a_km : k ∈𝒦, m ∈ℳ_k}. Then, the objective of our agent is to learn the Q network presented in Fig. <ref> by minimizing the loss function:L(θ):= 𝔼_s, a, r, s' ∼𝒟[ 1/K∑_k = 1^K 1/M_k∑_m=1^M_k [u -ω_km^⊤ϕ_km(s, θ) ]^2 ]. The TD target u in (<ref>) above can be computed as: u:=r +γ1/K∑_k=1^K 1/M_k∑_m=1^M_kω̃_â_km^⊤ϕ̃_km(s';θ̃),whereâ_km :=max_a_kmω_km^⊤ϕ_km (s';θ)Finally, Algorithm <ref> summarizes the learning process of Bayesian BDDQN.§ RESULTS AND DISCUSSION§.§ Experiment Setup Our simulations rely on a Milan-based MEC topology (N1) <cit.> and real traffic demands <cit.>. We also use a synthetic topology (N2) generated using Waxman algorithm <cit.> with parameters of link probability (0.5) and edge length control (0.1). Each BS in N1 and N2 serves three categories of slicing: legacy traffic (e.g., mobile broadband), delay-elastic MEC services (e.g., massive machine-type communications) and delay-inelastic MEC services (e.g., ultra-reliable low latency communications). Since the difficulty capturing the actual computing behavior of the traffic in <cit.> in a tracable model, we utilize a deep neural network model that maps the traffic demands into the actual computing utilization following <cit.>[The model is trained using real collected measurements from two different platforms (e.g., Platform A and B) that run srsRAN <cit.>. Then, the architecture is constructed from an input, an output and three hidden layers. The size of hidden layers are 128, 64 and 16. The model is trained using Adam optimizer over 200 epochs. It uses mini-batches with size of 128 and MSE loss function. 
The detailed experiments are available in <cit.>.]. We use the term Reference Core (RC) to define a computing unit, i.e., 1 RC translates to 1 virtual CPU unit. Among the O-RAN functions, LP, HP, LM, HM, LR, HR and PD yield 48%, 17%, 7%, 7%, 0.5%, 0.5%, 10%, 10% of the total BBU computing utilization, respectively, c.f. <cit.>. A single cluster of O-RAN/MEC system in N1 and N2 consists of 1 EPC, 6 available servers (4 servers for DUs/MEC and 2 servers for CUs/MEC), and 4 RUs (default), where the routers are co-located with each node. N1 and N2 have per link delay latency between 0 and 0.1 ms, capacity between 30 Gbps and 160 Gbps, and link weights between 0 to 0.1. We set the computing capacity of each server as H_l = 20RCs, ∀ l ∈ℒ_D ⊆ℒ and H_l = 100RCs, ∀ l ∈ℒ_C ⊆ℒ. The computational processing at each server to process the MEC services is assumed ρ_l = 1 /Mbps/cycles, and we set MEC delay parameters δ_1 =: δ_2 = 1. For simplicity, we set the available flavors homogeneously for each service with |𝒳_c| := |𝒳_0| := 16 that translates into { 0,1, .... 13,14, 15} RCs. Then, we define two different environments for our evaluation, where we utilize Platform A with N1 (OM1) and Platform B with (OM2). Unless otherwise stated, we use OM1 as the default environment.Further, we set the default coefficient fees with κ_DM := 0.25, κ_D := 5, κ_R := κ_I := 0.05, and κ_H := 1. Since executing the BS functions or MEC services at a more centralized computing platform can gain central processing benefits, we set κ_CM := 0.5 κ_DM (see <cit.> with ≈ 10 BSs). The detailed Q-network of Bayesian BDDQN is presented in Fig. <ref>. In total, there are ∑_k = 1^K M_k branches, and each branch has a representation layer with the size of F = 128 and an output with size of |𝒜_km|. The batch size and replay buffer capacity are 128 and 10^6, respectively. We set the learning rate of Adam optimizer <cit.> with 10^-4. The exploration-exploitation strategy is based on Thompson sampling (under a Bayesian BDDQN framework). The time horizon for a single episode is 144 time slots. The agent updates its posterior distribution at every T_p = 1440 time slots (10 episodes), the target network at every T_g = 1440 time slots (10 episodes), the policy by re-sampling its posterior distribution at every T_s = 144 time slots (1 episode).§.§ Numerical EvaluationWe compare our proposed solution, Bayesian BDDQN, to its non-Bayesian version and a state-of the art RL approach, DDPG. In DDPG, we relax the discrete action defined in (<ref>) into a continuous action. Then, we discretize the selected action by estimating the output of DDPG into the nearest discrete value. Since the output must be positive, we use a Sigmoid function at the output layer.§.§.§ Delay coefficient feeFig. <ref> illustrates the learning performance of Bayesian BDDQN compared to its non-Bayesian version and DDPG over various delay coefficients of the elastic-delay services when the pre-training model is not available. Albeit having different delay coefficients, it shows that Bayesian BDDQN successfully learns the optimal policy and converges to the best policy that the agent can learn. Moreover, it shows that Bayesian BDDQN can obtain a higher return reward than other approaches. Our findings reveal that the episodic reward of BDDQN can be improved by using a Bayesian approach (Bayesian BDDQN) up to 31.25%, 32.76%, and 29.76% when B=100, B=10, and B=1, respectively. 
Moreover, compared to DDPG, the episodic reward of Bayesian BDDQN is significantly higher by 31.12%, 31.71%, and 25.07% when B=100, B=10, and B=1, respectively. It means that our approach offers a more cost-saving compared to both benchmarks. Furthermore, by adopting Thompson sampling with a Bayesian framework, our approach can also significantly expedite the learning convergence compared to its non-Bayesian version. Fig. <ref> indicates that our approach is data-efficient, which can converge within less than 12 episodes, precisely after the posterior distribution at each branch is updated. On the contrary, the non-Bayesian needs around 125 episodes to converge. Although the convergence speed of DDPG can be as fast as Bayesian BDDQN, DDPG is still underperformed, offering a considerable lower episodic reward. DDPG utilizes Ornstein–Uhlenbeck (OU) noise for the exploration-exploitation strategy, originally used for continuous action spaces, and its reward performance is degraded when the action space is discrete such as arising in our problem.The above evaluations highlight that Bayesian BDDQN is data-efficient (i.e., offering fast learning convergence) and, at the same time, obtains the highest episodic reward (hence, the most cost-efficient) over all the delay coefficient settings.§.§.§ Delay minimizationFig. <ref> shows how our approach successfully reduces the incurred delay cost of delay-elastic MEC services. Overall, the incurred delay cost can be reduced after the agent converges by around 41%, 52% and 48% when B=100, B=10, and B=1, respectively. Our findings also indicate that the rate at which the cost decreases depends on the coefficient for the delay cost. When the coefficient is high (B=100), the delay cost is potentially larger and can produce a much negative reward. As a result, our approach aims to minimize this impact quickly. And this is validated from Fig. <ref>, where the cost-savings start to converge after around 15 episodes. Conversely, when the coefficient is low (B=1), the delay potentially produces a lower cost. As a result, our approach prioritizes delay cost reduction to a lesser extent in this case, i.e., the cost starts to converge after 50 episodes. §.§.§ Impact of reconfiguration fee Fig. <ref> depicts the learning performance of Bayesian BDDQN over different reconfiguration fees when the pretraining model is unavailable. Regarding reward performance, the results show that Bayesian BDDQN outperforms its non-Bayesian version and DDPG. The performance gap between Bayesian BDDQN and its non-Bayesian version remains at around 25-31% for both reconfiguration fees. However, compared to DDPG, Bayesian BDDQN performance gain increases with the reconfiguration fees, where it gains 31.12% for κ_R = 0.05 and 44.04% for κ_R = 0.5. This evaluation highlights that the reconfiguration fee does not significantly affect Bayesian and non-Bayesian BDDQN performance.§.§.§ Impact of the number of BSsWe evaluate Bayesian BDDQN over a different number of BSs. The higher number of the BSs indicates the larger action and state spaces in our problem. As illustrated in Fig. <ref>, Bayesian BDDQN has a similar reward and convergence performance although the size of action space is different. Some slight differences are found, which indicate that the BS can have different costs than others. When K=4, Bayesian BDDQN converges just after 12 episodes and obtains the episodic reward (average per BS) around -800. The same trend also appears when K=1 and K=2. 
§.§.§ Pretraining modelsWe evaluate the reward and convergence performance of Bayesian BDDQN when the pretraining model ("Bayesian w/ pretraining") is available. The pretraining model has been trained for 200 episodes in OM1. We utilize it as the weight initialization of Bayesian BDDQN learning, which possibly can further expedite its convergence. The environment for this evaluation is still on OM1, but the input traffic demands are on a different day. We use a Bayesian version without pretraining (Bayesian w/o pretraining) and a non-Bayesian version with a pretraining model for benchmarks ("Non-Bayesian w/ pretraining"). For"Non-Bayesian w/ pretraining", we modify the maximum epsilon parameter from ϵ_max = 1 into ϵ_max = 0.1 to encourage less exploration. Fig. <ref> shows that "Bayesian w/ pretraining" offers the fastest convergence rate and obtains the highest return of the episodic reward compared to the benchmarks. In particular, "Bayesian w/ pretraining" produces a high episodic reward directly at its first learning episode. Then, when the learning goes, it slightly improves the returned reward and eventually outperforms "Bayesian w/o pretraining" and "Non-Bayesian w/ pretraining" by up to 7.02% and 22.4%, respectively. Moreover, even leveraging a pretraining model, "Non-Bayesian w/ pretraining" still underperforms the Bayesian approaches. This evaluation validates that a Bayesian approach can offer performance gain of the non-Bayesian approach even when the pretraining model is unavailable. And this gain can be further increased when a pretraining model is leveraged. §.§.§ Transfer LearningWe study the performance of Bayesian BDDQN when the pretraining is available and utilize it for transfer learning ("Bayesian w/ transfer") to a different environment and context. We leverage a pretraining model trained over 200 episodes in OM and use it for the weight initialization of Bayesian BDDQN on OM2. For benchmarks, we compare this approach with Bayesian BDDQN without pretraining ("Bayesian w/o transfer") and its non-Bayesian version with pretraining ("Non-Bayesian w/ transfer"). For"Non-Bayesian w/ pretraining", the maximum epsilon parameter is modified from ϵ_max = 1 into ϵ_max = 0.1. Fig. <ref> illustrates that, albeit the pretraining model is leveraged from different O-RAN/MEC systems (environments) and demands (context), "Bayesian w/ transfer" can deliver the highest reward than other benchmarks. Moreover, "Bayesian w/ transfer" can converge as soon as the learning goes (e.g., less than five episodes), and then its reward surpasses "Bayesian w/o transfer," which learns directly from OM2. Fig. <ref> also shows that regardless of with/without transfer learning, Bayesian approaches have a better reward performance than non-Bayesian, even with transfer learning. These findings emphasize the generalization of our proposed approach over heterogeneous O-RAN/MEC systems, where we possibly reuse the existing pretraining models across different O-RAN/MEC systems and contexts. At the same time, it can offer a fast learning convergence and high reward performance. §.§.§ Penalty Fig. <ref> shows the penalized cost as a result of the enforced actions that violate the constraint requirements. For both Bayesian and non-Bayesian versions, it shows that at the beginning of learning episodes, the cost due to constraint violation is considerably high when the pretraining model is not utilized. 
One main reason is that the agent needs some exploration, which produces numerous constraint violations and a severe penalty cost. In contrast, when the pretraining model is adopted, the approaches do not require considerable exploration as they have already acquired experience from prior training, which is valuable in the current learning process. However, even when a pretraining model is not available, Bayesian BDDQN can offer fast penalty cost reduction, and eventually the cost can reach zero or near zero. Moreover, when a pretraining model is leveraged, Bayesian BDDQN delivers a further advantage, where the penalty cost reaches zero or near zero from the very first episodes. § CONCLUSION In this paper, we have proposed a novel O-RAN/MEC orchestration framework that dynamically controls the split selection for each BS, the allocated resources for each DU/CU and MEC service, the placement of each DU/CU and MEC service over geo-distributed infrastructures, and the routing for each legacy/MEC data flow. The objective is to minimize the long-term overall network operation cost and maximize the MEC performance criterion while adapting to possibly time-varying O-RAN/MEC demands and resource availability. We have proposed Bayesian BDDQN as the solution framework based on a model-free RL paradigm. We developed this framework using a combination of DDQN and action branching, BDDQN, to tackle the large state space and multi-dimensional action space. Further, we tailored a Bayesian framework-based Thompson sampling into BDDQN to encourage data-efficient exploration and improve learning performance. The numerical results have shown that Bayesian BDDQN is provably data-efficient: it converges faster and improves the learning performance by up to 32% compared with its non-Bayesian counterpart, and at the same time it brings cost-saving benefits of up to 41% compared to DDPG.
http://arxiv.org/abs/2312.16142v1
{ "authors": [ "Fahri Wisnu Murti", "Samad Ali", "Matti Latva-aho" ], "categories": [ "cs.NI", "cs.LG" ], "primary_category": "cs.NI", "published": "20231226180449", "title": "A Bayesian Framework of Deep Reinforcement Learning for Joint O-RAN/MEC Orchestration" }
0000-0003-0760-0618 ICC - Universidad de Buenos Aires, Conicet, Buenos Aires, Argentina; 0000-0003-1556-2623 Dipartimento di Scienze Pure e Applicate, Università di Urbino, Urbino, Italy; 0000-0001-8911-1580 Dipartimento di Matematica e Informatica, Università di Cagliari, Cagliari, Italy
[500]Theory of computation Concurrency [300]Theory of computation Operational semantics
A Reversible Perspective on Petri Nets and Event Structures
Hernán Melgratti, Claudio Antares Mezzina, G. Michele Pinna
===========================================================
§ INTRODUCTION § PRELIMINARIES § EVENT STRUCTURES In this section, we provide an overview of the fundamentals of prime event structures. Subsequently, we delve into the reversible variant of prime event structures, following the presentation in <cit.>. § NETS § CAUSAL NETS § CAUSAL NETS AND EVENT STRUCTURES In this section, we establish the connection between causal nets and event structures. § REVERSIBLE CAUSAL NETS AND REVERSIBLE EVENT STRUCTURES In this section we introduce a reversible version of causal nets and show that they are an operational counterpart for reversible event structures. §.§ Correspondence between reversible causal nets and reversible event structures § CAUSAL NETS AND OCCURRENCE NETS § APPLICATIONS § CONCLUSIONS § ACKNOWLEDGMENT This work has been partially supported by the BehAPI project funded by the EU H2020 RISE under the Marie Sklodowska-Curie action (No: 778233), by the Italian PRIN 2020 project NiRvAna – Noninterference and Reversibility Analysis in Private Blockchains, the Italian PRIN 2022 project DeKLA – Developing Kleene Logics and their Applications, the INdAM-GNCS E53C22001930001 project RISICO – Reversibilità in Sistemi Concorrenti: Analisi Quantitative e Funzionali, and the European Union - NextGenerationEU SEcurity and RIghts in the CyberSpace (SERICS) Research and Innovation Program PE00000014, projects STRIDE and SWOP.
http://arxiv.org/abs/2312.16714v1
{ "authors": [ "Hernán Melgratti", "Claudio Antares Mezzina", "G. Michele Pinna" ], "categories": [ "cs.CL", "cs.LO" ], "primary_category": "cs.CL", "published": "20231227204748", "title": "A Reversible Perspective on Petri Nets and Event Structures" }
Goal-Oriented Integration of Sensing, Communication, Computing, and Control for Mission-Critical Internet-of-Things
Jie Cao, Member, IEEE, Ernest Kurniawan, Senior Member, IEEE, Amnart Boonkajay, Member, IEEE, Sumei Sun, Fellow, IEEE, Petar Popovski, Fellow, IEEE, Xu Zhu, Senior Member, IEEE
Jie Cao and Xu Zhu are with the School of Electronic and Information Engineering, Harbin Institute of Technology, Shenzhen 518055, China (Corresponding author: Jie Cao, e-mail: [email protected]). Ernest Kurniawan, Amnart Boonkajay and Sumei Sun are with the Institute for Infocomm Research, Agency for Science, Technology and Research, Singapore 138632. Petar Popovski is with the Department of Electronic Systems, Aalborg University, Denmark.
====================================================================================================================
Driven by the development goals of the network paradigm and the demand for various functions in the sixth-generation (6G) mission-critical Internet-of-Things (MC-IoT), we foresee a goal-oriented integration of sensing, communication, computing, and control (GIS3C) in this paper. We first provide an overview of the tasks, requirements, and challenges of MC-IoT. Then we introduce an end-to-end GIS3C architecture, in which goal-oriented communication is leveraged to bridge and empower sensing, communication, control, and computing functionalities. By revealing the interplay among multiple subsystems in terms of key performance indicators and parameters, this paper introduces unified metrics, i.e., task completion effectiveness and cost, to facilitate S3C co-design in MC-IoT. The preliminary results demonstrate the benefits of GIS3C in improving task completion effectiveness while reducing costs. We also identify and highlight the gaps and challenges in applying GIS3C in future 6G networks. Goal-oriented communication, integration, mission-critical Internet-of-Things. § INTRODUCTION Mission-critical Internet-of-Things (MC-IoT) has recently gathered significant attention, showing the potential to revolutionize many emerging applications such as the industrial metaverse and intelligent transportation <cit.>. However, the fast growth of completely automated, highly dynamic, and fully intelligent IoT networks is likely to exceed the capability of the latest fifth-generation (5G) wireless systems <cit.>. Future sixth-generation (6G) MC-IoT requires further advances in current 5G systems to improve the ability to support various functions <cit.>. As shown in Fig. 1 and Table I, an intelligent task typically involves a set of complex processes. It has been a challenge to meet the stringent and heterogeneous requirements of multiple subsystems simultaneously, due to the complex interdependence among various subsystems with limited network and hardware resources. Both academia and industry have recently shown considerable interest in researching extreme ultra-reliable and low-latency communication to facilitate the development of 6G MC-IoT <cit.>.
Mobile edge computing (MEC) is able to reduce transmission latency and then enable more reliable and secure communications. Big data with artificial intelligence (AI) enables accurate prediction and reasonable decision-making to improve transmission efficiency and intelligence. Moreover, there has been substantial independent research dedicated to enhancing the performance of sensing, control, and computing, with numerous innovative methods being proposed <cit.>. However, rather than struggling to overcome physical and technical limits to implement these underlying functionalities separately, sensing-communication-control-computing (S3C) co-design appears to be a more promising solution <cit.>. For example, the performance of sensing can be influenced by the allocation of computing resources, which also has an impact on data size and type in transmission. Likewise, control performance may be limited by wireless fading channels and insufficient computing power. Recently, there has been a widespread and comprehensive investigation into the domain of integrated sensing, communication, and computing <cit.>, showcasing the benefits of co-design. Though these studies have inspired the investigation of S3C co-design, there remain several unresolved challenges in its realization: (a) a huge amount of heterogeneous data leads to a high processing burden and low transmission efficiency, and (b) different parameters and the lack of unified metrics hinder system-level global optimization. The quality of 6G mission-critical services is normally characterized by the efficiency of completing specific tasks, instead of conventional task-agnostic performance metrics like latency, reliability, and throughput <cit.>. Also, inadequate consideration of packet contents and receiver tasks leads to a large number of useless packets being transmitted, resulting in wasted resources and hindering S3C co-design. However, most of the current systems are designed to recover data accurately while ignoring the contents/semantics of transmitted packets or their impacts on the receiver. Meanwhile, the existing communication technologies have nearly approached the physical-layer capacity limit <cit.>. Therefore, there is an urgent need to develop a new communication paradigm. In this regard, adopting goal-oriented communication as a bridge between multiple functionalities to communicate and exchange desired information can effectively enhance the successful execution of tasks. We believe it is timely to study the goal-oriented integration of sensing, communication, computing, and control (GIS3C) in 6G MC-IoT <cit.>. GIS3C aims at extracting and transmitting the relevant semantic features needed to make the receiver accomplish a goal with the desired effectiveness <cit.>. This can help to reduce the amount of data and clarify the coupling between multiple subsystems. However, GIS3C is still in its infancy and there are a large number of open issues, such as: (a) How does GIS3C empower MC-IoT and facilitate the co-design of S3C? (b) Is there any unified system-level metric to break down the barriers among different subsystems? We are aware that GIS3C can be studied from many perspectives, such as AI-based approaches. In this paper, we focus more on the information flow and data transmission of GIS3C in 6G MC-IoT. The main contributions are as follows. * We first overview the tasks, requirements, and metrics of 6G MC-IoT, from the perspective of data transmission. Also, the challenges of realizing S3C co-design in 6G MC-IoT are summarized.
* We introduce an end-to-end (E2E) GIS3C architecture, based on which environment-aware sensing, semantic communication, context-aware control, and situation-aware computing are analyzed. * We provide a comprehensive analysis toreveal the interplay among multiple subsystems in terms of key performance indicators (KPIs) and parameters. We illustrate how GIS3C can be used to facilitate S3C co-design, and introduce task completion effectiveness and cost as the unified metrics. Furthermore,preliminary results are provided to verify the effectiveness of the introduced GIS3C method. The rest of this article is organized as follows. Section II first reviewsthe tasks, requirements and challenges of the integrated system in 6G MC-IoT. Sections III and IV illustrate how GIS3C can be used to empower sensing, communication, control and computing, as well as to facilitatetheir integration. Conclusion and future work are presented in Section V. § TASKS AND CHALLENGES OF S3C FOR6G MC-IOT To clarify the coupling between multiple subsystems, we first specify their tasks and requirements in Subsection II-A. The challenges of realizing S3C co-design in 6G MC-IoT are summarized in Subsection II-B. §.§ Tasks and Requirements of 6G MC-IoT Accurate and timely sensingis the basis of 6G MC-IoT. Multiple smart devices are installedto collect environmental parameters and observe physical processes, based on whichmultiple sub-tasks including monitoring, detection, and localization are completed, as summarized in Table I. For the monitoring task, multiple parameters, e.g., speed, direction, and position, are required to be monitored accurately, which is evaluated by the monitoring error. Also, such measurements provide holistic supervision andreal-time alerts for abnormal conditions<cit.>, where the detection accuracy is the critical KPI and can be improved with a larger sampling rate. Precise positioning of mobile devices is also one of the important sub-tasks of sensing, where a higher localization resolution is obtained by a larger sampling duration. Timely and reliable transmission is the core of implementing 6G MC-IoT. Reliable transmission with a low error rate ensures that the packets can be transmitted successfully, by optimizing data size and channel use. With power and bandwidth allocation, effective transmission with low latency and high throughput can enable massive amounts of status information to be updated in real time. For example, extremely reliable communication with negligible latency is necessary to support autonomous and safe driving.This is supplemented by the need for a secure transmission. Reliable and automatic control is the key to successful task execution in 6G MC-IoT. Most of control tasks are aimed at the stable operation of the equipment, where the stability margin is designed to be maximized. By designing the control law and period, the steady-state error is minimizedfor automatic tracking and the overshoot is minimized to ensure that the control task is completed within the given constraints. For example,loadfrequencycontrol shares the burdenof power regulationin the interconnectedpowersystem. It canensure that the grid operates in a stable frequency range while avoiding frequency overshooting, by adjusting the generator to allow power generation to cope with load fluctuations. Efficient computing provides the critical support for 6G MC-IoT. 
MEC can be adopted for training based on collected data, which can assist in understanding situations such as road condition analysis in autonomous driving. In addition,inference and decision-making cannot be accomplished without computation. For example, load forecasting in smart gridand predictive control in smart factory require massive amounts of computing resources. Insuch tasks, multiple KPIs including resource utilization, service response time and reliability are adopted to evaluate the effectiveness of computing. §.§ Challenges of Realizing S3C Co-Design in6G MC-IoT Even though the rapid development of 5G and the introduction of several emergingtechnologies have made mission-critical services possible, the following challenges remain unexplored in realizing S3C co-design in 6G MC-IoT. Huge Amounts of Data: To constantly track the state of the environment, a great number of connected sensors and devices are employed, which generates large amounts of data.Hence, it can be a challenging task to store, process, and analyze big data while ensuring low latency and high efficiency. To alleviate the burden,MEC has been widely usedto mine and aggregate data in a decentralized manner, which bringsthepossibility to enable low latency transmission. However, a large amount of data is still discarded even with MEC, resulting in wasted resources, as much of the information is not useful to accomplish a certain task such as fault detection and automatic control. Multiple sources with heterogeneous traffic: The 6G MC-IoT paradigm encompasses a significant portion of big data, which is characterized by heterogeneity, manifested in high dimensionality and diverse forms of expression, as shown in Table I. This situation poses a major challenge for data management due to the introduction of multiple sources with different attributes. Additionally, the communication system must be capable of accommodating heterogeneous data and diverse types of traffic. Specifically, the coexistence of short and long packets, as well as the presence of both delay-sensitive and tolerant data, necessitates the development of a robust and dependable communication infrastructure. Different data types and structures for sensing, communication, and control complicate integration design. Coupled Multiple Functionalities: As highlighted in Subsection II-A,the 6G MC-IoTcomprises interconnected subsystems, including sensing, communication, control, and computing, which exhibit intricate interdependencies. Therefore,to design a comprehensive and fully integrated system, a holistic approach is required. However, the dynamics and interactions between these subsystems remain ambiguous and the impact of different parameters on system performance needs to be further investigated. The absence of standardized protocols and unified metrics poses a substantial challenge when it comes to integrating multiple subsystems. § E2E GIS3C FOR 6G MC-IOT In this section, we propose an E2E GIS3C architecture in 6G MC-IoT, based on which GIS3C-empowered multiple subsystems are illustrated. §.§ GIS3C in 6G MC-IoT As shown in Fig. 2, semantic communication (SemCom), focusing on transmitting symbols convey the semantic and effectiveness level of goal-oriented communication, serves as a bridge among sensing, control, and computing subsystems. Based on pioneering work done by Weaver<cit.>, GIS3Cis designed to extract and transmit the relevant information needed to make the receiver accomplish a goal with the desiredeffectiveness. 
Furthermore, GIS3C is expected to empower multiple subsystems by making them more “understandable” and “communicable” based on the common task and shared knowledge, as elaborated in the following subsections. §.§ Environment-aware Sensing In 6G MC-IoT, multiple sensors monitor the physical process based on the given task/goal and environment. Traditional sensing can be empowered by GIS3C for environment-aware sensing. Unlike the traditional one that samples and transmits all packets, only important, relevant, and urgent information is sampledin environment-aware sensing, which is able to reduce the amount of data. Also, these data are assigned more resources with higher priority. To achieve this, the foremost and crucial thing is to characterize the semantic attributes and guide the design of sampling and scheduling<cit.>. However, determining semantic attributes of sensing packets based on the task and common knowledge is still challenging. In this paper, wefocus on various semantic metrics from the effectiveness level, as discussed below. §.§.§ Time-Based Metric Age of Information (AoI) is a measure of freshness for an information flow, a typical time-based semantic metric, defined as the time elapsed since the latest successfully received packetwas generated at the source<cit.>. Moreover, the value of information and age of loop were proposed to evaluate the impact of AoI on different applications andcharacterize the freshness of information in theclosed-loop system. However, AoI-based metrics imply that the freshest packet has the most valuable information, which is not applicable to allscenarios. §.§.§ Error-Based Metric Mean square error (MSE) is a well-known metric that characterizes the accuracy aspect of information, which serves as one of the important semantic attributes. The importance of information is explicitly defined by the real-time square error between transceivers, based on which the sampling is triggered by the state error<cit.>. §.§.§ Goal-Based Metric Combining time-based metrics with error-based metrics, age of incorrect information (AoII) characterizes the impact of the prolongation of one inaccurate state on semantic recovery<cit.>. Also, a context-based metric, named as urgency of information (UoI), has been proposed to measure the importance of the non-uniform context-dependence of state information<cit.>. Furthermore,considering the specific control task, a goal-oriented sampling and scheduling policy was proposed for the minimization of the actuation error<cit.>. §.§ Semantic Communication Usually, the semantic features are obtained byextracting and encoding the transmitted source data, and then filtered and compressedaccording to thecommon task and shared knowledge. After that, semantic data is transmitted through traditional data-oriented encoding and decoding overwireless networks<cit.>. GIS3C allows only the transmission of semantic features of interest to the task, rather than raw data, which alleviates bandwidth pressure and reduces the redundant data. However, semantic extraction/filtering for various data types with dynamic tasks and network states is challenging. The key techniques are discussed as follows. §.§.§ Semantic Extraction and Filtering Several semantic extraction technologies have been explored in the field of computer science. For different data types such as text, audio, and image,natural language processing, speech signal processing, and computer vision have attracted extensive attention and have been thoroughly researched<cit.>. 
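To make the effectiveness-level metrics reviewed above more concrete, the following minimal sketch (not part of the original article) tracks the Age of Information, the per-slot squared error and a discrete-time rendering of the Age of Incorrect Information for a single monitored source. The random-walk source model, the sending slots and the one-slot delay are hypothetical choices made purely for illustration.

```python
# Schematic status-update trace: the source is a +/-1 random walk and samples
# generated at `send_slots` arrive after a fixed one-slot delay (assumed values).
import random

random.seed(0)
T = 30                             # number of time slots (illustrative)
send_slots = {0, 5, 9, 14, 22}     # slots at which a fresh sample is generated
delay = 1                          # constant transmission delay in slots (assumption)

x = 0            # true source state
x_hat = 0        # receiver-side estimate (last delivered sample)
last_gen = 0     # generation time of the freshest delivered sample
in_flight = {}   # arrival slot -> (generation slot, sampled value)

aoi_trace, mse_trace, aoii_trace = [], [], []
aoii = 0

for t in range(T):
    # deliver any packet scheduled to arrive at slot t
    if t in in_flight:
        last_gen, x_hat = in_flight.pop(t)
    # effectiveness-level metrics at slot t
    aoi = t - last_gen                        # Age of Information
    err = (x - x_hat) ** 2                    # squared error (MSE sample)
    aoii = aoii + 1 if x != x_hat else 0      # Age of Incorrect Information
    aoi_trace.append(aoi)
    mse_trace.append(err)
    aoii_trace.append(aoii)
    # generate and send a new sample if this is a sending slot
    if t in send_slots:
        in_flight[t + delay] = (t, x)
    # the source evolves as a random walk
    x += random.choice([-1, 1])

print("mean AoI :", sum(aoi_trace) / T)
print("mean MSE :", sum(mse_trace) / T)
print("mean AoII:", sum(aoii_trace) / T)
```

In a goal-oriented design it is traces of this kind, rather than raw throughput, that the sampling and scheduling policies discussed above would aim to optimize.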
Thanks to the development of big data and computing ability, several AI-based semantic extractions such as deep learning-based, reinforcement learning-based and knowledge base-based techniques have been investigated for different scenarios. §.§.§ Semantic Encoding Besides physical turbulence and noise (e.g., Gaussian noise and multi-path fading), semantic noise also involves semantic mismatch, ambiguity, and interpretation errors. Consequently, beyond simple data compression, GIS3C strives to effectively combat the semantic noise and transmit the semantic meaningwith adequate encoding and decoding schemes. Considering the task at the receiver, it is important butchallenging to quantify semantic information and derive semantic capacity. Furthermore, the robustness of GIS3C can be improved byintegrating the channel informationinto semantic encoding/decoding, or by considering source and channel encoding/decoding together, which has attracted a lot of attention. §.§ Context-Aware Control Based on the decoded semantic meaning, the controller applies reasoning methods for obtaining high-level contextual information and can proactively correct transmission-induced errors. In contrast, conventional controllers can only compute control commands based on received packets. Context-aware control allows the prediction of outcomes throughthe complex awareness of the process, thus exploiting all data and emergingan optimal solution based on advanced computing. Specifically, the accuracy of context awareness depends on the context modeling and reasoning, which is challenging for coupled systems with heterogeneous traffic and dynamic tasks.The key techniques are discussed as follows. §.§.§ Modeling Generally, different data types such as sensing, communication, and computing are typically presented in different formats that might not be readily understandable to the user or device. Based on the specific task and common knowledge, context modeling is adopted to represent these data into meaningful terms, which are expected to be simplicity, reusability, and scalability. Context modeling is achieved through a variety of approaches, such as graphical models, logic-based and ontology-based models<cit.>, which are suitable for different scenarios and requirements. §.§.§ Reasoning Reasoning or the evaluation of context involves extracting new knowledge from the modeled data of the available context. This step can be divided into three phases: a) pre-processing of context data to eliminate inaccurate values; b) fusion of sensor data to generate more precise information; c) context inference to obtain new context information from lower-level context sources. Furthermore, techniques for processing the contextual available input can be classified as learning or inference, such as rule-based and fuzzy logic-based methods for the classification task, as well as learning-based methods for the clustering task<cit.>. §.§ Situation-Aware Computing A strong computational base enables intelligent sensing, semantic communication,and context-aware control. By allocating computing resources in data sampling, features representation, and inference as well as control command computing dynamically,resource utilization can be improved. In contrast, the pre-allocation of fixed computing resources leads to inefficiencies. Situation-aware computing is able to offload tasks and allocate resources dynamically and intelligently based on the tasks. 
However, the establishment of knowledge bases and adaptive resource allocation are challenging, as discussed below. §.§.§ Task Offloading and Allocation Task offloading is the process of transferring computation-intensive tasks to a set of remote computing machines (e.g., cloud or edge servers) that can process the tasks. Specifically, task offloading is adopted based on the requirements and common knowledge by the following three steps: a) task priority assignment, b) redundant task elimination, and c) task scheduling. An efficient task offloading strategy can significantly reduce the latency and energy consumption while improving task execution efficiency. §.§.§ Resource Provisioning Determining sufficient computational resources for performing each task is challenging due to the complex coupling among S3C. For example, since computation and communication compete against each other for the shared time resource, a higher computing delay leads to less time budget for communication, and therefore more transmission errors, i.e., degraded communication dependability. On the other hand, a reduction in computation error tends to reduce the value error, and therefore increase the control dependability. Furthermore, with a longer learning process and a higher central processing unit (CPU) rate, a more compressed and accurate semantic feature with lower inference error can be obtained. § GIS3C-ASSISTED 6G MC-IOT In the last section, we have shown that multiple subsystems can be enhanced by GIS3C. However, the independent design of each subsystem still results in low resource utilization. In this section, we first investigate the interplay among different subsystems, and then illustrate how GIS3C can be used in 6G MC-IoT and provide a use case to verify our method. §.§ Interplay among S3C As shown in Fig. 3, sensing, communication, control and computing are tightly coupled with each other through different parameters and KPIs. Environment-aware sensing is performed based on the analyzed environment states and control actions, which provides high-quality sampled data for transmission. The sampled data size depends on the sampling type (e.g., event-triggered or time-triggered), sampling rate and sampling duration, which induces the sampling cost 𝒞_sa in terms of energy or computing resources. Also, these parameters affect the sensing performance 𝒥_sa, such as the monitoring error and detection accuracy. For example, a larger monitoring duration yields a smaller detection error but requires more energy consumption. Sampled data size and sensing performance in turn affect the performance and overhead of semantic representation. Semantic extraction/filtering is accomplished through computation, and the result is transmitted over wireless networks for control command calculation. The sampled source data is extracted and encoded as semantic data for transmission. The receiver decodes the semantic features and infers their meaning. The effectiveness of feature extraction depends on the allocation of computing resources 𝒞_cp. The compression ratio also affects the communication overhead 𝒞_cm and the performance of semantic inference 𝒥_cm. Furthermore, resource allocation and scheduling over wireless networks influence the transmission performance of semantic data. A lower compression ratio with limited resource allocation can save communication cost 𝒞_cm but incur more inference errors.
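The couplings just described can be made tangible with a deliberately simplified numerical sketch. All functional forms below, i.e. how the sampling rate, compression ratio, CPU share and bandwidth map to 𝒥_sa, 𝒥_cm, 𝒥_cn and to the costs, are invented stand-ins rather than the models used in this article, and the budget and search grids are arbitrary.

```python
# Toy model of the S3C couplings; every functional form is an assumption for
# illustration only and is not taken from the article.
import itertools

def effectiveness(sample_rate, compress, cpu, bandwidth):
    """Map one allocation to subsystem scores and a system-level score in [0, 1]."""
    j_sa = sample_rate / (1.0 + sample_rate)            # sensing: saturating gain
    j_cp = cpu / (0.5 + cpu)                            # computing: diminishing returns
    data = sample_rate * (1.0 - compress)               # volume put on the channel
    j_cm = j_cp * (1.0 - min(1.0, 0.5 * data / max(bandwidth, 1e-9)))  # congestion hurts
    j_cn = j_sa * j_cm                                  # control needs fresh, correct input
    return j_sa * j_cm * j_cn                           # one possible choice of chi(.)

def cost(sample_rate, compress, cpu, bandwidth):
    # all sub-costs converted to a single energy-like unit with unit weights (assumption)
    return sample_rate + cpu + bandwidth + 0.2 * compress

budget = 3.0
grid = [0.25 * k for k in range(1, 9)]       # candidate resource levels
levels = [0.1 * k for k in range(0, 10)]     # candidate compression ratios

# Goal-oriented co-design: search the allocation jointly against the system-level score
best = max(
    (c for c in itertools.product(grid, levels, grid, grid) if cost(*c) <= budget),
    key=lambda c: effectiveness(*c),
)

# Analogue of the bit-oriented independent design: a fixed, pre-determined split
fixed = (budget / 3.0, 0.0, budget / 3.0, budget / 3.0)

print("co-design  :", best, " effectiveness =", round(effectiveness(*best), 3))
print("fixed split:", fixed, " effectiveness =", round(effectiveness(*fixed), 3))
```

Since the fixed split is itself one of the candidate allocations, the joint search can by construction only do at least as well under the same cost budget; the gap between the two printed numbers is the toy analogue of the gain reported for the goal-oriented co-design later in this section.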
The semantic communication performance includes the transmission error and inference error, which is affected by the allocated communication and computing resources, as well as the optimized communication parameters. Control commands are inferred and reasoned from decoded semantic information based on context analysis and common knowledge. The controller computes the control demand based on the inferred semantic meaning and context,and then provides guidance for taking action. On one hand, making decisions based on the inferred information and context consumes a lot of computing resources 𝒞_cp and control resources 𝒞_cn. Itscorrectness is also influenced by the allocated resources and the semantic inference performance 𝒥_cm. On the other hand, packet loss and transmission delay may lead to inaccurate operations, resulting in worse control performance 𝒥_cn such as system downtime and security risks. §.§ Goal-Oriented System-Level Performance Metric Based on the coupling analysis among multiple subsystems, this subsection compares the proposed GIS3C scheme with thebit-oriented independent design method, as shown in Fig. 4. Particularly, we introduce the task completion effectiveness as the main KPI, i.e., 𝒥_sys=χ(𝒥_sa,𝒥_cm,𝒥_cn,𝒥_cp), which measures the probability and satisfaction that goals are met or tasks are completed. The system-level task completion effectiveness depends on the performance of multiple subsystems with the mapping function χ(·). Similarly, the system-level task cost can be obtained by converting all the costs into the same domain, i.e., 𝒞_sys=c_sa(𝒞_sa)+c_cm(𝒞_cm)+c_cn(𝒞_cn)+𝒞_cp, where c(·) is the transformation function for different domain costs. The mapping functions χ(·) and c(·) can be derived based on the analysis in Subsection IV-A and specific tasks. With the system-level task completion effectiveness and cost, multiple subsystems can be designed jointly with dynamic resource allocations and flexible optimizations. Furthermore, this can help to reduce the amount of data and integrate multiple subsystems with heterogeneous traffic by representing different information as a unified way based on the task and common knowledge. In contrast,in the conventional independent design, resource allocations for each subsystem must be determined in advance based on requirements, in which any failure of subsystems may result in system downtime. With the assistance of GIS3C, the task completion effectiveness can be improved with less cost by collaborating multiple subsystems. For example, if the controller receives/decodes contradictory information from different sources, it can analyze these information globally and compare these with historical information, and then infer the correct control command based on contextual information. §.§Use Case In this subsection, weconsider the load frequency control (LFC) system in smart grid as a case study. The goal of the considered LFC system is to maintain the balance between power load and generation at minimal cost<cit.>. To ensure the stability of power systems, multiple sensors are employed to monitor the frequency deviation and detect cyber-attacks. Then the controller computes control commands, guidingthe actuator (e.g., engine or motor) to adjust power generation to ensure grid stability. Three methods are introduced and compared as follows. Bit-oriented independent design: Pre-allocate fixed resources to each subsystem based on the relationship between sub-tasks and the system-level task, as shown in the upper part of Fig. 4. 
Bit-oriented co-design: Dynamically allocate resources to each subsystem to maximize their performance with the total cost constraint. Goal-oriented co-design: Analyze the interplay among GIS3C-empowered multiple subsystems and their relationship to the system-level task, and then dynamically allocate resources to each subsystem for maximizing the task completion effectiveness with the cost constraint, as shown in the bottom part of Fig. 4. In the considered LFC system, resources such assensing power, CPU rate, wireless communication resources (e.g., bandwidth or power) affect the system stability in different ways, as analyzed in Subsections IV-A and IV-B. Fig. 5 verifies the effectiveness of our proposed GIS3C method. Task completion effectiveness is defined as the availability of the grid, which measures the probability that the grid operates within the allowable frequency deviation range. Also,task costs for multiple subsystems are converted and normalized into energy consumption. Compared to the existing methods,the proposed goal-oriented co-design can achieve a higher availability with the same cost constraints, especially with a lower cost constraint (up to 15% improvement). This is due to that with the assistance of GIS3C, resources can beallocated dynamically and intelligently to different subsystems to assist in accomplishing system-level tasks. § CONCLUSIONS AND FUTURE WORK In this article,we have introduced the GIS3C for 6G MC-IoT, which sheds light on the development of future 6G networks. The tasks, requirements, and challenges of supporting 6G MC-IoThave been overviewed. We have provided a comprehensive introduction to E2E GIS3C architecture. In particular, environment-aware sensing, semantic communication, context-aware control, and situation-aware computing have been analyzed. Additionally, the interplay among multiple subsystems has been revealed,based on which a system-level metric has been proposed to facilitate S3C co-design in MC-IoT. Although this article has pointed out some possible research opportunities in the GIS3C, there are still many gaps and challenges in applying GIS3C in 6G MC-IoT. Therefore, in the following, we discusssome possible research directions for future goal-oriented SemCom and S3C in 6G MC-IoT, such as a) Real-time green semantic communication: semantic communication requires more computing resources for feature extractions and model training, which further complicates the power control and energy management. How to balance the energy efficiency and stringent communication performance of the 6G MC-IoT with limited bandwidth provisioning and low transmit power remains unexplored. b) Co-existence of heterogeneous networks: for the co-existence of bit-oriented and goal-oriented networks, interaction analysis and optimization are still in their early stage. (c) Semantic native communication: How to implement SemCom in task/goal-unaware systems with the unknown environment requires further investigation. Also, it is worth investigating how to design adaptive (universal) SemCom with implicit semantics in 6G MC-IoT. 1 IEEEtran ref1 J. Cao et al., “Toward Industrial Metaverse: Age of Information, Latency and Reliability of Short-Packet Transmission in 6G,”IEEE Wireless Communications, vol. 30, no. 2, pp. 40-47, April 2023. ref2 S. He, K. Shi, C. Liu, B. Guo, J. Chen and Z. Shi, “Collaborative Sensing in Internet of Things: A Comprehensive Survey,”IEEE Communications Surveys & Tutorials, vol. 24, no. 3, pp. 1435-1474, thirdquarter 2022. ref3 X. Hou, J. 
Wang, Z. Fang, Y. Ren, K. -C. Chen and L. Hanzo, “Edge Intelligence for Mission-Critical 6G Services in Space-Air-Ground Integrated Networks,” IEEE Network, vol. 36, no. 2, pp. 181-189, April 2022. ref4 P. Park, S. Coleri Ergen, C. Fischione, C. Lu and K. H. Johansson, “Wireless Network Design for Control Systems: A Survey,”IEEE Communications Surveys & Tutorials, vol. 20, no. 2, pp. 978-1013, Secondquarter 2018. ref3gpp 3GPP Technical Report, TR 22.837 V2.0.0, “Feasibility study on integrated sensing and communication,” 15 June 2023. ref5 Adam MM, Zhao L, Wang K, Han Z., “Beyond 5G Networks: Integration of Communication, Computing, Caching, and Control,” arXiv preprint, doi: arXiv:2212.13141, 2022. ref6 U. Demirhan and A. Alkhateeb, “Integrated Sensing and Communication for 6G: Ten Key Machine Learning Roles,”IEEE Communications Magazine, vol. 61, no. 5, pp. 113-119, May 2023. ref8 W. Yang et al., “Semantic Communications for Future Internet: Fundamentals, Applications, and Challenges,” IEEE Communications Surveys & Tutorials, Early Access, 2022. ref9_1 X. Luo, H. -H. Chen and Q. Guo, “Semantic Communications: Overview, Open Issues, and Future Research Directions,”IEEE Wireless Communications, vol. 29, no. 1, pp. 210-219, February 2022. ref9 D. Gündüz et al., “Beyond Transmitting Bits: Context, Semantics, and Task-Oriented Communications,” IEEE Journal on Selected Areas in Communications, vol. 41, no. 1, pp. 5-41, Jan. 2023. ref11 M. Kountouris and N. Pappas, “Semantics-Empowered Communication for Networked Intelligent Systems,”IEEE Communications Magazine, vol. 59, no. 6, pp. 96-102, Jun. 2021. ref11_1 Antzela Kosta; Nikolaos Pappas; Vangelis Angelakis, “Age of Information: A New Concept, Metric, and Tool,” Now Foundations and Trends, 2017. ref12 E. Fountoulakis, N. Pappas and M. Kountouris, “Goal-Oriented Policies for Cost of Actuation Error Minimization in Wireless Autonomous Systems,”IEEE Communications Letters, vol. 27, no. 9, pp. 2323-2327, Sept. 2023. ref18 R. A. C. Diaz, M. Ghita, D. Copot, I. R. Birs, C. Muresan and C. Ionescu, “Context Aware Control Systems: An Engineering Applications Perspective,” IEEE Access, vol. 8, pp. 215550-215569, 2020. globecom23_CJ J. Cao, E. Kurniawan,A. Boonkajay and S. Sun, “Goal-Oriented Scheduling and Control Co-Design in Wireless Networked Control Systems,” to be published in proc. IEEE Globecom, Dec. 2023.
http://arxiv.org/abs/2312.16064v2
{ "authors": [ "Jie Cao", "Ernest Kurniawan", "Amnart Boonkajay", "Sumei Sun", "Petar Popovski", "Xu Zhu" ], "categories": [ "cs.NI", "eess.SP" ], "primary_category": "cs.NI", "published": "20231226143052", "title": "Goal-Oriented Integration of Sensing, Communication, Computing, and Control for Mission-Critical Internet-of-Things" }
Stanley decompositions of modules of covariants Markus Hunziker January 14, 2024 =============================================== Quantum properties of the state associated to the gluon Green's function in the BFKL approach are studied using adiscretization invirtuality space. Considering the coupling constant as imaginary, its density matrix corresponds to a pure state for any energy. Non–linear corrections due to high gluon densities are modelled through a suppression of infrared modes in the Hamiltonian making it no longer hermitian.This introduces quantum decoherence into the evolution equation. When the coupling is real this leads to unbounded normalization of states which becomesbounded for sufficient saturation of infrared modes. Physicalquantum properties,such as a purity smaller than one or a positive von Neumann entropy, hence are recovered when the infrared/ultraviolet original symmetry of the formalism is broken. Similarly to the work of Armesto, Domínguez, Kovner, Lublinsky and Skokov in <cit.>, an evolution equation of Lindblad type for the normalized density matrix describing the open system is obtained.§ INTRODUCTIONApplication of concepts of quantum information in the context of hadronic reactions and the microscopic theory of strong interactions,Quantum Chromodynamics (QCD), has been a field of intense research during recent years. At the heart of this interest lies the possible relation of the confinement problem of strong interactions withentanglement: Confinement of colored charges into colorless hadrons is interpreted as a particularly strong form of entanglement of microscopic degrees of freedom <cit.>, see Ref. <cit.> for a recent review. Several proposals <cit.>have emerged in the literature that address different ways to study entanglement in the context of hadronic reactions. An interesting subset of such studies refers to the creation of entanglement entropy in the so–called low x limit of QCD. With Q^2 being the resolution scale in a Deep Inelastic Scattering (DIS) event,Bjorken x is generically defined as the ratio of this hard scale and the squared center–of–mass energy s. The low x limit therefore refers to the perturbative high energy limit of strong interactions. Recent studies <cit.> explore entanglement and its imprints in multiplicity distributions in the low x limit using the color dipole model, where the evolution towards largeY=ln(1/x) is understood as the subsequent branching of color dipoles (see also <cit.> for studies within the related Color Glass Condensate framework). In the present work we take a slighly different perspective. Instead of making use of the color dipole picture, we study the QCD density matrix in the low x limit within the Balitsky–Fadin–Kuraev–Lipatov (BFKL) formalism<cit.>. This framework identifies reggeized gluons as the relevant degrees of freedom in the t-channel of high energy factorized scattering amplitudes, which form the starting point for a resummation of high energy logarithms (see <cit.> for a discussion and derivation of the BFKL evolution in the context of an effective action framework, based on reggeized gluon fields).For perturbative scattering amplitudes, high energy factorization is achieved via the exchange of a single reggeized gluon which results at cross-section level into a two reggeized gluon state in the overall color singlet state. 
This constitutes the starting point for the resummation of perturbative terms enhanced by powers of Y to all orders in the strong coupling constant, generating a bound state oftwo reggeized gluons called the hard or BFKL Pomeron (see <cit.> for a recent work on the conformal properties of this bound state). This procedure generates a powerlike rise of cross-sections with s. While such a rise has been observed in experimental data (see, e.g. <cit.>),it eventually leads – if continued to arbitarily high center–of–mass energies – to a violation of unitary bounds. To tame this growth, it is hence needed to extend the resummation to the exchange of multiple reggeized gluons allowing for vertices changing the number of exchanged reggeized gluons in the t-channel. For a sufficiently inclusive cross-section, the simplest one is the 2 to 4 reggeized gluon transition vertex which in the multicolor limit turns into a triple Pomeron vertex. Apart from slowing down the growth with energy, inclusion of such number–changing elementsalso has an important consequence in the dynamics of thetransverse momentum space.While the BFKL kernel is symmetric with respect to incoming and outgoing t–channel momenta, this symmetry is broken once number–changing elements, such as the triple Pomeron vertex are included<cit.>. The combination oflinear BFKLevolution with these new vertices results in acancellation of infrared modes and the evolution acquires an effective scale, known as saturation scale <cit.>, which increases with Y. In the following we explore the consequences of such dynamics using an explicit construction ofa density matrix for the two reggeized gluon state, employing a matrix representation of the leading order BFKL evolution equation first proposed in <cit.>, which stems from adiscretization of the dynamics in transverse momentum space. This approach is useful becauseit allows a transparent understanding of quantum mechanical properties of the scattering process. The outline of our work is as follows: In Sec. <ref> we provide an introduction to the matrix representation of BFKL evolution, while in Sec. <ref> the corresponding density matrix is investigated.Sec. <ref> introduces a modification of this framework due to infrared screening, while in Sec. <ref> we explore the consequences of the resulting non–hermitian Hamiltonian and derive an evolution equation of the Lindblad type. In Sec. <ref> we finally present theconclusions and outlook for future work.§ HAMILTONIAN IN MATRIX REPRESENTATIONHigh energy scattering in QCD and supersymmetric theories can be described by the BFKL approach when the leading logarithms of the center–of–mass energy are resummed <cit.>. Following the work in Ref. <cit.> we consider the azimuthal–angle averaged forward BFKL equation and discretize it in the virtuality spaceof t–channel Reggeized gluons. After regularizing it in a finite–length box we obtain a square–matrix representation of the Hamiltonian. Its spectrum contains positive and negative eigenvalues <cit.>. If the virtuality space for the propagators of Reggeized gluons in the forward BFKL equation is discretized, after azimuthal angle averaging, the following matrix representation is obtained∂/α∂ Y|ϕ^(N)>= Ĥ^_N |ϕ^(N)> ,where α = α_s N_c/π, see  <cit.> for the details of this result. Note that each iteration of this equation corresponds to a contribution to the total cross section, structure functions in DIS. 
The Hamiltonian matrix elements take the form(ℋ̂^_N)_i, j = ∑_n=1^N-1δ_i^j+n/n+∑_n=1^N-1δ_i+n^j/n-2 h(i-1) δ_i^j = θ(i-j)/i -j + θ(j-i)/j -i - 2 h(i-1) δ_i^j.h(i) is the harmonic number. To obtain this square N × N matrix representationan upper cut–off in the virtuality integrations is introduced. This corresponds to a one–dimensional box. The limit to recover the original equation corresponds toN →∞. Eq. <ref> is built from shift matrices, H^_N= ∑_n=1^N-1(Ŝ_ IR)^n/n +∑_n=1^N-1(Ŝ_ UV)^n/n + Ĝ,where (Ĝ)_i,j= -2 h(i-1) δ_i^j, (Ŝ_ IR)_i,j = δ_i^j+1 and (Ŝ_ UV)_i,j = δ_i+1^j. This is natural since the Hamiltonian generates the symmetric diffusion towards infrared and ultraviolet values of the virtuality starting at the initial condition, towards the quantum state at a generic Y. These are driven, respectively, by Ŝ_ IR andŜ_ UV. Ĝ accounts for Reggeized t–channel propagators, which correspond to the generation of multiple rapidity gaps in the final state.In order to operate in a Hilbert space of normalized quantum states at any value of the energy, an analytic continuation in the coupling constant to the line (with real λ) α = i λis performed. A standard Schrödinger equation then drives the dynamics:i∂/∂ Y|ϕ^(N)>=-λĤ^_N |ϕ^(N)> .The formal solution can be written in iterative form, |ϕ^(N)> =e^ i λ Y ℋ̂^_N |φ_0^(N_0)> = {1+∫_0^Y d y_1( i λℋ̂^_N) + ∫_0^Y d y_1( i λℋ̂^_N) ∫_0^y_1 d y_2( i λℋ̂^_N). + . ∫_0^Y d y_1( i λℋ̂^_N) ∫_0^y_1 d y_2( i λℋ̂^_N)∫_0^y_2 d y_3( i λℋ̂^_N)+ ⋯} |φ_0^(N_0)> ,where the initial condition |φ_0^(N_0)> ≡(φ_1^0, φ_2^0, …, φ_N^0)^T excites a single virtuality component:φ_i^0=δ_i^N_0 and corresponds to the square of a single gluon propagator in the t–channel, the tree level contribution to the scattering amplitude in the forward limit. If |ϕ^(N)>= (ϕ_1, ϕ_2, …, ϕ_N)^Tthen i ∂ϕ_j/∂ Y =- λ∑_l=1^N((1-δ_l^j)/|l-j|-2 h(j-1) δ_l^j) ϕ_l .The growth with energy in this square truncation depends on the matrix size. The N × N matrix Ĥ^_N is symmetric and real. It can be diagonalized, it has normalized real eigenvectors |ψ_L^(N)>with real eigenvaluesλ_L^(N) (L=1, …, N), and spectral decompositionĤ^_N= ∑_L=1^N λ_L^(N)|ψ_L^(N)> <ψ_L^(N)|. As it is shown in Fig. <ref>, the spectrum of Ĥ^_N has a largest positive eigenvaluefor any N, which tends to 4 ln2 when N →∞, with a gap with respect to the lower ones which are mostly negative <cit.>.The number of positive eigenvalues slowly grows with N (e.g. the second positive eigenvalue appears when N=165). Since any initial condition vector may be expanded in the complete basis of eigenvectors, |φ_0^(N_0)> = ∑_L=1^N c_L^(N_0)|ψ_L^(N)>, it is then possible to express the gluon Green's function state as|ϕ^(N)> = e^ i λ Y ℋ̂^_N|φ_0^(N_0)> = ∑_L=1^N c_L^(N_0) e^ i λ Y λ_L^(N)|ψ_L^ (N)>,where, for any Y, <ϕ^(N).|ϕ^(N)> =1and< φ_0^(N_0). |φ_0^(N_0)> = ∑_L=1^N |c_L^(N_0) |^2  =  1.For the sake of clarity, let us focus on the N=5 case with Hamiltonianℋ̂^_5 = ( [ 0 1 1/2 1/3 1/4; 1-2 1 1/2 1/3; 1/2 1-3 1 1/2; 1/3 1/2 1 -11/3 1; 1/4 1/3 1/2 1 -25/6; ]) .It has the eigenvalues (λ_1^(5), … , λ_5^(5)) = (-4.9838,-4.07483,-3.03006,-1.59174,0.847101).Since <ψ_L^(5)|.ψ_M^(5)> = δ_L,M then, for a particular initial condition, e.g. |φ_0^(3)> = ( [ 0; 0; 1; 0; 0; ])  = ∑_L=1^5 c_L^(3)| ψ_L^(5)>,we have (c_1^(3), … , c_5^(3)) = (0.166387, 0.753377, 0.278794, 0.492161, -0.291187). The discretized version of the Green's function reads as inEq. 
<ref> (with η = i λ Y): |ϕ^(5)> ≃ ( [ 0.24 -0.28 0.031 0.014 -0.0006; 0.13 0.23 -0.20 -0.15 -0.0040; 0.08 0.24 0.08 0.6 0.028; 0.06 0.19 0.14 -0.27 -0.11; 0.041 0.12 0.10 -0.38 0.12; ]) ([ e^0.85 η; e^-1.6 η; e^-3.0 η; e^-4.1 η; e^-5.0 η; ]). In the following section we investigate different aspects of this quantum system encoded in its density matrix, which is suited for the description not only of pure but mainly of mixed states. § DENSITY MATRIX IN VIRTUALITY SPACE The vector state |ϕ^(N)> represents a system isolated from any external information which evolves with energy following <ref>. Unitary evolution is driven by the operator Û (Y) = e^i λ Y Ĥ_N^□. In the N–dimensional Hilbert space of discretized virtualities there exists a pure–state operator describing the initial condition <ref> at Y=0; ρ̂_ pure^(N,N_0) (0) = |φ_0^(N_0)> <φ_0^(N_0)|. Its evolution with Y is ρ̂_ pure^(N,N_0) (Y) = |ϕ^(N) (Y)> <ϕ^(N) (Y)| = ∑_L,M=1^N (ρ̂_ pure^(N,N_0)(Y))_L,M |ψ_L^(N)> <ψ_M^(N)| . This is a (real) hermitian matrix with elements (ρ̂_ pure^(N,N_0)(Y))_L,M = c_L^(N_0) (c_M^(N_0))^* e^i λ Y (λ_L^(N) - λ_M^(N)). For an infinitesimal energy interval d Y, ρ̂_ pure^(N,N_0) (Y+d Y) ≃ ρ̂_ pure^(N,N_0)(Y) + i λ d Y [Ĥ_N^□, ρ̂_ pure^(N,N_0) (Y) ] . Therefore d ρ̂_ pure^(N,N_0)/i λ d Y = [Ĥ_N^□, ρ̂_ pure^(N,N_0)] = ∑_L,M=1^N c_L^(N_0) (c_M^(N_0))^* (λ_L^(N) - λ_M^(N)) e^i λ Y (λ_L^(N) - λ_M^(N)) |ψ_L^(N)> <ψ_M^(N)|, a Liouville–von Neumann equation for the pure–state density operator <ref>. Its trace is unity and time independent, Tr(ρ̂_ pure^(N,N_0) (Y)) = ∑_L,M=1^N (ρ̂_ pure^(N,N_0)(Y))_L,M <ψ_M^(N)|ψ_L^(N)> = ∑_L=1^N |c_L^(N_0)|^2 = 1 . This implies that ρ̂_ pure^(N,N_0) allows for the proper evaluation of expectation values of operators, <Â>_Y = Tr (Â ρ̂_ pure^(N,N_0)). Since it is an idempotent matrix with trace one, it has a single non–zero eigenvalue λ_ρ̂_ pure^(N,N_0) = 1. It is a projector onto a one-dimensional subspace within the Hilbert space of possible quantum states. There exists complete knowledge of the state of the system at any Y. This density matrix is called a pure state; its purity is one: Tr (ρ̂_ pure^(N,N_0))^2 = 1. The fact that the rank of the density matrix associated to the Green's function state is one for any value of Y implies that its von Neumann entropy, S_ vN^(N,N_0) (Y) = -Tr(ρ̂_ pure^(N,N_0) (Y) log_2 ρ̂_ pure^(N,N_0) (Y)) = - λ_ρ̂_ pure^(N,N_0) (Y) log_2 λ_ρ̂_ pure^(N,N_0) (Y) , given in terms of the single non–zero eigenvalue of the density matrix, is zero. This is natural since it is a measure of the amount of uncertainty or lack of information associated to a quantum state. In order to have a mixed state, multiple non–zero eigenvalues must be present in the spectrum of the density matrix. The effective dimension of a mixed state is defined as the inverse of its purity, d^ eff (ρ̂^(N) ) = ( Tr (ρ̂^(N) (η))^2 )^-1, and provides a measure of how many pure states contribute significantly to the mixture. In the next section we focus on how to modify this picture when non–linear higher–order corrections are introduced in the formalism. This is a very complicated problem if treated in full generality. It is nevertheless possible to study how one of its main effects, the suppression of infrared components, affects the hermiticity and spectrum of eigenvalues of the BFKL Hamiltonian in the theoretical framework under discussion in this work.
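The construction reviewed in the two previous sections is straightforward to reproduce numerically. The sketch below, which is not part of the original text, builds the discretized Hamiltonian, checks that its leading eigenvalue approaches 4 ln 2 as N grows, evolves the pure initial condition along the line α = iλ and verifies that the norm, the purity and the von Neumann entropy behave as stated; only standard numpy routines are used and the values λ = 0.2, Y = 2 are arbitrary illustrative choices.

```python
import numpy as np

def harmonic(n):
    """Harmonic number h(n) = sum_{k=1}^{n} 1/k, with h(0) = 0."""
    return sum(1.0 / k for k in range(1, n + 1))

def bfkl_hamiltonian(N):
    """Square truncation of the forward BFKL kernel: 1/|i-j| off the diagonal,
    -2 h(i-1) on the diagonal, with virtuality indices i, j = 1..N."""
    H = np.zeros((N, N))
    for i in range(1, N + 1):
        for j in range(1, N + 1):
            H[i - 1, j - 1] = 1.0 / abs(i - j) if i != j else -2.0 * harmonic(i - 1)
    return H

# the largest eigenvalue tends to 4 ln 2 ~ 2.77 as the box size N grows
for size in (5, 50, 200):
    top = np.linalg.eigvalsh(bfkl_hamiltonian(size)).max()
    print(f"N = {size:4d}   largest eigenvalue = {top:.4f}   (4 ln 2 = {4 * np.log(2):.4f})")

# evolution of the pure initial condition exciting the N0-th virtuality along alpha = i*lambda
N, N0, lam, Y = 5, 3, 0.2, 2.0          # lam and Y are arbitrary sample values
H = bfkl_hamiltonian(N)
phi0 = np.zeros(N)
phi0[N0 - 1] = 1.0
evals, evecs = np.linalg.eigh(H)
phi = evecs @ (np.exp(1j * lam * Y * evals) * (evecs.T @ phi0))   # e^{i lam Y H} |phi_0>

rho = np.outer(phi, phi.conj())                       # density matrix of the evolved state
print("norm      :", np.vdot(phi, phi).real)          # stays equal to one
print("purity    :", np.trace(rho @ rho).real)        # equal to one: still a pure state
p = np.linalg.eigvalsh(rho)
p = p[p > 1e-12]
print("vN entropy:", float(-(p * np.log2(p)).sum()))  # vanishes for a pure state
```

For N = 5 the spectrum quoted in the text is recovered up to rounding, and the last three lines confirm that for imaginary coupling the Green's function state remains a normalized pure state with vanishing entropy at any Y.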
§ SCREENING OF INFRARED DIFFUSION An important consequence of introducing the interaction of the BFKL Pomeron with multiple reggeized gluon states,isthe suppression of diffusion into low virtualities (see, e.g.<cit.>). In order to investigate the implication of this effect inthe quantum properties of the BFKL states we will study a modification of the original Hamiltonianin Eq. <ref> which has been already investigated in <cit.>. This amounts to introducing an asymmetry between infrared and ultraviolet diffusion by suppressing the former in the form (ℋ̂^ dressed_N)_i, j = ∑_n=1^N-1(j/i)^κδ_i^j+n/n+∑_n=1^N-1δ_i+n^j/n-2 h(i-1) δ_i^j = (j/i)^κθ(i-j)/i -j + θ(j-i)/j -i - 2 h(i-1) δ_i^j.We will study in which range of the real–valued positive parameter κ it is possible to operate with a proper quantum state in any region of the coupling constant complex plane. The infrared–dressed Hamiltonian <ref> is no longer Hermitian. This has an important effect on the spectrum of the theory. It still consists of real eigenvalues where the largest, positive, one gets rapidly reduced as κ increases <cit.>. This is easy to understand since as κ→∞ the Hamiltonianbecomes a triangular matrix whose eigenvalues correspond to the diagonal elements -2 h (i-1), with i=1,2, …. This is shown for N=5, 50, 200 in Fig. <ref>.For example, for κ=0.5, N=5 the original spectrumis modified to (-4.92985,-4.00719,-2.94808,-1.52508,0.576859). The associated Green's function state for the initial condition ofEq. <ref> reads|ϕ^(5)> ≃( [0.28 -0.340.040.010.00;0.110.30 -0.25 -0.160.00;0.060.260.100.550.03;0.040.180.15 -0.26 -0.10;0.020.110.10 -0.340.10; ]) ([ e^0.58 η; e^-1.53η;e^ -2.95η;e^-4.01 η;e^-4.93 η;]).As we have already discussed, when κ=0 the norm of the BFKL state along the line α = i λ is one for any Y. Even if it is non–Hermitian, the dressed Hamiltonian does not generate complex eigenvaluesalthough its eigenvectors do not form an orthogonal set. This implies that the quantum state is no longer normalized to unity. In Fig. <ref> this normalization is shown for N=5 and different values of the screening parameter κ. With oscillations in λ Y, the scaling variable, the norm is larger than one for non–zero values of κ. It is possible to interpret this change in the normalization of the state as a consequence of the interaction withdiagrams containing multiple reggeized gluon states.Their influence in the system is parametrizedby κ. A study where this idea is put forward can be found in Ref. <cit.> where it is shown that in non-linear evolution equations in the zero conformal spin sector (as the one considered here) the Green's functions receives the bulk of the contributions from anti–collinear configurations where the infrared/ultraviolet symmetry is manifestly broken. To studythe quantum state in the physical region it is needed to analytically continueto thereal line for α. For this the path α = λ + i e^-σλtanh(σλ) with σ=50 is chosen (see Fig. <ref>). This particular choice is arbitrary but it allows us to transit from a region of bounded normalization of the quantum state, very close to the imaginary axis, towards the physical region of real coupling smoothly,while keeping the modulus of the coupling small. Our conclusions are independent of the choice of path (as they also are of the size of N).As expected from the generic properties of the BFKL Pomeron,a fastrise of the norm of the BFKL quantum state appears when the system approaches the region of physical coupling. 
This effect is larger for larger values of Y and is plotted in Fig. <ref> (top). This drastically changes when introducing the infrared screening, as can be seen in Fig. <ref> (down). In this case the normalization of the state is smaller than one for any value of the coupling near the real line and decreases as the energy increases. In Fig. <ref> (top) it is shown, for α≃ 0.2, N=5 and increasing values of κ, how the infrared dressing removes a large fraction of the probability associated to the state as Y increases. The infrared suppression saturates its effect at κ∼ O(5). This final state configuration at larger κ is invariant with N, see Fig. <ref> (down) for N=100. Along the real line for the coupling, and upon evolution, the original system (with κ=0) rearranges itself from the initial pure state at Y=0 into the asymptotic configuration. After a finite amount of evolution in energy the different virtuality components of the quantum state converge to the same stable configuration. This is seen in Fig. <ref> where five distinct pure state initial conditions, exciting different virtuality modes, are plotted. It can be seen how the five-component state (for N=5) loses “memory” of the initial condition very rapidly. For real coupling and sufficiently large κ, the rapidity evolution of the quantum state can become stable in a given range of Y. An example is Fig. <ref> where, for N=5 and κ=5, we see how all virtuality modes reach a flat behaviour at large values of Y. Non–Hermitian Hamiltonians appear in open quantum systems where some sort of interaction with an external environment is present. In the BFKL system this external actor would be the higher–order non–linear quantum corrections. In the next section we investigate the possibility of generating decoherence effects due to the suppression of infrared modes and how this is related to the normalization of the quantum state. We will also show how to regain a probabilistic picture within this setup.
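The same numerical sketch extends directly to the infrared-dressed case: the lower-triangular (infrared) entries carry the (j/i)^κ suppression, the drift of the leading eigenvalue with κ can be followed, and the norm of the evolved state can be evaluated along the continuation path α = λ + i e^{-σλ} tanh(σλ) used above. The matrix exponential is taken through the eigenbasis of the non-symmetric matrix, which assumes diagonalizability, and the sampled values of κ, λ and Y are again purely illustrative.

```python
import numpy as np

def dressed_hamiltonian(N, kappa):
    """Infrared-dressed truncation: entries with i > j are suppressed by (j/i)**kappa;
    kappa = 0 recovers the symmetric BFKL matrix."""
    H = np.zeros((N, N))
    for i in range(1, N + 1):
        for j in range(1, N + 1):
            if i == j:
                H[i - 1, j - 1] = -2.0 * sum(1.0 / k for k in range(1, i))
            elif i > j:
                H[i - 1, j - 1] = (j / i) ** kappa / (i - j)
            else:
                H[i - 1, j - 1] = 1.0 / (j - i)
    return H

# the leading eigenvalue decreases as the infrared screening is switched on
# (the eigenvalues stay real, so only rounding-level imaginary parts are dropped)
for kappa in (0.0, 0.5, 1.0, 5.0):
    top = np.linalg.eigvals(dressed_hamiltonian(5, kappa)).real.max()
    print(f"kappa = {kappa:3.1f}   largest eigenvalue = {top:+.4f}")

def norm_after_evolution(kappa, lam, Y, sigma=50.0, N=5, N0=3):
    """Norm of e^{alpha Y H} |phi_0> along alpha = lam + i e^(-sigma lam) tanh(sigma lam)."""
    alpha = lam + 1j * np.exp(-sigma * lam) * np.tanh(sigma * lam)
    H = dressed_hamiltonian(N, kappa)
    phi0 = np.zeros(N, dtype=complex)
    phi0[N0 - 1] = 1.0
    # exponentiate through the (generally non-orthogonal) eigenbasis;
    # scipy.linalg.expm would be an equivalent alternative
    evals, evecs = np.linalg.eig(alpha * Y * H)
    phi = evecs @ (np.exp(evals) * np.linalg.solve(evecs, phi0))
    return np.vdot(phi, phi).real

for kappa in (0.0, 5.0):
    norms = [round(norm_after_evolution(kappa, lam=0.2, Y=Y), 3) for Y in (1, 2, 4, 8)]
    print(f"kappa = {kappa}: norms at Y = 1, 2, 4, 8 ->", norms)
```

The first loop shows the reduction of the leading eigenvalue with κ; the second evaluates the norm near the physical coupling at κ = 0 and κ = 5, so that the qualitative difference described above can be checked directly.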
<ref> (top), a rapid rise for the originalĤ^□_N is found. The dressed Hamiltonina, Ĥ_N^ dressed, leads on the other hand to a rapid transition,faster as κ≥ 4 grows, from the initial pure statetowards a highly–mixed state at large values of Y. This picture is not modified as N grows, see Fig. <ref> (down) for N=100. In order to deal with an open system driven by a non–hermitian Hamiltonian such as the dressed hamiltonian in Eq. <ref>, it is useful to express it as a sum of symmetric and antisymmetric parts, i.e.Ĥ^ dressed_N= Ĥ^+_N + Ĥ^-_N,Ĥ^+_N=(Ĥ_N^+)^T  = 1/2(Ĥ^ dressed_N + (Ĥ^ dressed_N)^T) , Ĥ^-_N=- (Ĥ^-_N)^T  = 1/2(Ĥ^ dressed_N - (Ĥ^ dressed_N)^T) .The evolution with energy of the quantum state is now expressed in terms of these two Hamiltonians. For a coupling of the form α = λ + i f(λ), with λ, f ∈, it reads∂/∂ Y|ϕ^(N)>= (λ + i f(λ))Ĥ^+_N |ϕ^(N)> + (λ + i f(λ))Ĥ^-_N |ϕ^(N)> ,∂/∂ Y<ϕ^(N)|= (λ - i f(λ)) <ϕ^(N)|Ĥ^+_N-(λ - i f(λ))<ϕ^(N)|Ĥ^-_N .For the density matrix ρ̂^(N) (Y) = |ϕ^(N)><ϕ^(N)| this implies an evolution driven by commutators and anticommutators,∂/∂ Yρ̂^(N) = λ{Ĥ^+_N, ρ̂^(N)} + i f(λ) [Ĥ^+_N ,ρ̂^(N)]+ λ[Ĥ^-_N , ρ̂^(N)] + i f(λ) {Ĥ^-_N , ρ̂^(N)}.The derivative of its trace is∂/∂ YTr (ρ̂^(N) ) = 2 λTr (Ĥ^+_N ρ̂^(N) ).This translates into the purity Tr ((ρ̂^(N) )^2) =(∑_L=1^N |ϕ_L|^2 )^2of the quantum state: ∂/∂ Y Tr((ρ̂^(N))^2)= 4 λTr (ρ̂^(N) ) Tr (Ĥ^+_N ρ̂^(N) ) .The corresponding von Neumann entropy of the quantum system S_ vN^(N) = -Tr(ρ̂^(N)log_2 ρ̂^(N)) is studied in Fig. <ref>. It can be seen how a very fast decoherence process takes place at small energy. After reaching a maximum at a small value of Y, with an entropy, S_ vN^(N)≃ 0.5, corresponding to a highly but not completely mixed state, there is a phase of monotonic decrease toa finite constant value of entropy when N, Y→∞. The physical origin of this asymptotic finite von Neumann entropy is an interesting source of study for future works. To evaluate quantum averages of operators, <O>_Y =Tr(Ô Ω̂^(N)), it is needed to use a normalized density matrix with trace one see, e.g. <cit.>), Ω̂^(N) ≡ ρ̂^(N)/ Tr(ρ̂^(N)).This is idempotent, sinceTr ((ρ̂^(N) )^2) = ( Tr (ρ̂^(N) ) )^2. Its energy derivativefollows∂/∂ YΩ̂^(N) =i f(λ) [Ĥ^+_N ,Ω̂^(N)]+ i f(λ) {Ĥ^-_N , Ω̂^(N)}+λ{Ĥ^+_N, Ω̂^(N)} + λ[Ĥ^-_N , Ω̂^(N)] - 2 λ Ω̂^(N)< Ĥ^+_N>_Y.The non–linear last term ensures probability conservation, ∂/∂ Y Tr (Ω̂^(N)) = 0. It is interesting to observe that< Ĥ^+_N>_Y tends to zero in the large Y limit (Fig. <ref>) if thevalues of κ are sufficiently largeto stabilize the entropy. There are very interesting approaches in the literature which have investigated the concept of a density matrix in non–linear evolution equations. A particular one, in the context of the Color Glass Condensateis that of Ref. <cit.>. Although it is a much more sophisticated approach than the onepresented here, they also found a decrease of purity with energy and were able to describe the system with a Linbland equation associated to an open system (it is also known as Franke–Gorini–Kossakowski–Lindblad–Sudarshan equation <cit.>). The evolution equation in <ref> can be written in Lindblad form once Ĥ^+_N = L̂_N^T L̂_N, with L̂_N = D̂_N Q̂_N, D̂_N= diag(√(μ_1), …, √(μ_N)),where μ_i are the eigenvalues of Ĥ^+_N,i.e.Tr(Ĥ^+_N Ω̂^(N)) =Tr(L̂_N Ω̂^(N)L̂_N^T) .Finally, the evolution equation reads∂/∂ YΩ̂^(N) = (λ + i f(λ) ) Ĥ^ dressed_NΩ̂^(N) +h.c. 
- 2 λ Ω̂^(N) Tr(L̂_N Ω̂^(N)L̂_N^T).This corresponds to a driven open quantum system with no dissipation but some external fluctuations acting to preserve probability. On the real line, f=0, it can also be written as ∂/∂ YΩ̂^(N) = λ (Ĥ^-_N + L̂_N^T L̂_N)Ω̂^(N) +h.c. - 2 λ Ω̂^(N) Tr(L̂_N Ω̂^(N)L̂_N^T).This equation has quasi–Lindbladian structure where Ĥ^-_N corresponds to coherent evolution of a system inside a quantum environment which generatesdissipation represented by L̂_N^T L̂_N, minus quantum fluctuations responsible for probability conservation.Instead of having N^2-1 Lindblad operators as in the standard Lindblad equation there is only one, L̂_N. The associatedentropy, Fig. <ref>, reaches a plateau at a non–zero value for large Y. This follows a fast period of Lindblad decoherence. The roles of system and environment are somehow reversed from what one would naively argue. It would be more natural to obtain a picture where the hermitian hamiltonian Ĥ^+_N drives the evolution, receiving corrections encoded in Ĥ^-_N, but just the oppositeappears. It is also noteworthy that the corrections due to quantum fluctuations, after the period of decoherence, generate a small and negative contribution to the derivative of the density matrix (Fig. <ref> times -2 λ Ω̂^(N)). Interpreted as a gain and loss system we find that the environment provides the latter while the non–hermitian contribution provides a source of the former where now the trace of the density matrix is conserved. § CONCLUSIONS The resummation of high energy logarithms present inscattering amplitudes in QCD and supersymmetric theories leads to the BFKL equation. By discretizing the space of virtualities in loop corrections and regularizing it inside a box, it is possible to explore quantum properties of the state associated to the gluon Green's function. For imaginary values of the coupling its normalization is bounded and preserved under rapidity evolution. This implies that pure states of well defined virtuality are evolved into pure states characterized by a density matrix with purity one. As a model of non–linear corrections, which introduce contributions to unitarity via suppression of the diffusion intro infrared modes, we study the spectrum of a modified Hamiltonian whose largest positive eigenvalue can become arbitrarily small. This introduces quantum decoherence which affects the normalization of the state forcing to analytically continue the coupling to the real line. This shows how the rapidity evolution of the state generates unbounded normalization for usual BFKL while a bounded one for enough saturation of infrared modes. This implies that to obtain correct quantum properties at high energies such as a purity smaller than one or a positive von–Neumann entropy it is needed to break the infrared/ultraviolet original symmetry of the BFKL equation.Stemming from a non–hermitian Hamiltonian, the density matrix describing the open system, when normalized, fulfils an evolution equation of Lindblad type with dissipation and quantum fluctuations, Eq. <ref>.Much work remains to be done for the future. This includes the introduction of higher order corrections both in QCD and supersymmetric theories and the implementation of a more precise model of unitarization corrections to allow for a more complete comparison to previous works where the Lindblad equation of an open system appears <cit.>. 
Although the results here presented are robust, it is desirable to connect with other approaches in forthcoming works.§ ACKNOWLEDGEMENTS The work of G.C. was supported by the Fundação para a Ciência e a Tecnologia (Portugal) under project CERN/FIS-PAR/0032/2021 and contract ‘Investigador FCT - Individual Call/03216/2017’ and by project EXPL/FIS-PAR/1195/2021. M.H. would like to thank the IFT UAM/CSIC for hospitality.The work of A.S.V.is partially supported by the Spanish Research Agency (Agencia Estatal de Investigación) through the Grant IFT Centro de Excelencia Severo Ochoa No CEX2020-001007-S, funded by MCIN/AEI/10.13039/501100011033 and the Spanish Ministry of Science and Innovation grant PID2019-110058GB-C21/ C22. It has also received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 824093. 99 Armesto:2019mna N. Armesto, F. Dominguez, A. Kovner, M. Lublinsky, V. Skokov,JHEP 05 (2019), 025. Gong:2021bcp W. Gong, G. Parida, Z. Tu, R. Venugopalan,Phys. Rev. D 106 (2022) no.3, L031501.deJong:2021wsd W. A. de Jong, K. Lee, J. Mulligan, M. Płoskoń, F. Ringer, X. Yao,Phys. Rev. D 106 (2022) no.5, 054508.Li:2021kcs T. Li et al. [QuNu],Phys. Rev. D 105 (2022) no.11, L111502.Barata:2021yri J. Barata, C. A. Salgado,Eur. Phys. J. C 81 (2021) no.10, 862.Davoudi:2020yln Z. Davoudi, I. Raychowdhury, A. Shaw,Phys. Rev. D 104 (2021) no.7, 074505.Briceno:2020rar R. A. Briceño, J. V. Guerrero, M. T. Hansen, A. M. Sturzu,Phys. Rev. D 103 (2021) no.1, 014506.Liu:2020eoa J. Liu, Y. Xin,JHEP 12 (2020), 011.Chakraborty:2020uhf B. Chakraborty, M. Honda, T. Izubuchi, Y. Kikuchi, A. Tomiya,Phys. Rev. D 105 (2022) no.9, 094503.Lamm:2019uyc H. Lamm et al. [NuQS],Phys. Rev. Res. 2 (2020) no.1, 013272.Mueller:2019qqj N. Mueller, A. Tarasov, R. Venugopalan,Phys. Rev. D 102 (2020) no.1, 016007.Beane:2018oxh S. R. Beane, D. B. Kaplan, N. Klco, M. J. Savage,Phys. Rev. Lett. 122 (2019) no.10, 102001.Kharzeev:2017qzs D. E. Kharzeev, E. M. Levin,Phys. Rev. D 95 (2017) no.11, 114008.Klebanov:2007ws I. R. Klebanov, D. Kutasov, A. Murugan,Nucl. Phys. B 796 (2008), 274-293.Beck:2023xhh D. Beck, J. Carlson, Z. Davoudi, J. Formaggio, S. Quaglioni, M. Savage, J. Barata, T. Bhattacharya, M. Bishof, I. Cloet, et al.[arXiv:2303.00113 [nucl-ex]]. Dumitru:2023qee A. Dumitru, A. Kovner, V. V. Skokov,Phys. Rev. D 108 (2023) no.1, 014014. Duan:2021clk H. Duan, A. Kovner, V. V. Skokov,Phys. Rev. D 105 (2022) no.5, 056009.Dumitru:2022tud A. Dumitru, E. Kolbusz,Phys. Rev. D 105 (2022), 074030.Ramos:2022gia G. S. Ramos, M. V. T. Machado,Phys. Rev. D 105 (2022) no.9, 094009.Kou:2022dkw W. Kou, X. Wang, X. Chen,Phys. Rev. D 106 (2022) no.9, 096027. Ehlers:2022oal P. J. Ehlers,Annals Phys. 452 (2023), 169290.Asadi:2023bat P. Asadi, V. Vaidya,Phys. Rev. D 108 (2023) no.1, 014036.Kou:2023azd W. Kou, X. Chen,Phys. Lett. B 846 (2023), 138199.Kutak:2023cwg K. Kutak,[arXiv:2310.18510 [hep-ph]].Gursoy:2023hge U. Gürsoy, D. E. Kharzeev, J. F. Pedraza,[arXiv:2306.16145 [hep-th]].Barata:2023jgd J. Barata, W. Gong, R. Venugopalan,[arXiv:2308.13596 [hep-ph]].Hentschinski:2023izh M. Hentschinski, D. E. Kharzeev, K. Kutak, Z. Tu,Phys. Rev. Lett. 131 (2023) no.24, 241901.Hentschinski:2022rsa M. Hentschinski, K. Kutak, R. Straka,Eur. Phys. J. C 82 (2022) no.12, 1147.Hentschinski:2021aux M. Hentschinski, K. Kutak,Eur. Phys. J. C 82 (2022) no.2, 111.Liu:2023eve Y. Liu, M. A. Nowak, I. Zahed,[arXiv:2302.01380 [hep-ph]].Liu:2023zno Y. Liu, M. A. Nowak,I. Zahed,Phys. Rev. 
D 108 (2023) no.9, 094025.Liu:2022bru Y. Liu, M. A. Nowak, I. Zahed,Phys. Rev. D 108 (2023) no.3, 034017.Liu:2022qqf Y. Liu, M. A. Nowak, I. Zahed,Phys. Rev. D 107 (2023) no.5, 054010.Liu:2022hto Y. Liu, M. A. Nowak, I. Zahed,Phys. Rev. D 105 (2022) no.11, 114028.Liu:2022ohy Y. Liu, M. A. Nowak, I. Zahed,Phys. Rev. D 105 (2022) no.11, 114027.Liu:2022urb Y. Liu, M. A. Nowak, I. Zahed,Phys. Rev. D 105 (2022) no.11, 114021.Liu:2019yye Y. Liu, M. A. Nowak, I. Zahed,Phys. Rev. D 105 (2022) no.5, 054021. Duan:2020jkz H. Duan, C. Akkaya, A. Kovner, V. V. Skokov,Phys. Rev. D 101 (2020) no.3, 036017. Lipatov:1985uk L. N. Lipatov,Sov. Phys. JETP 63 (1986) 904 [Zh. Eksp. Teor. Fiz.90 (1986) 1536]. Lipatov:1976zz L. N. Lipatov, Sov. J. Nucl. Phys.23 (1976) 338 [Yad. Fiz.23 (1976) 642].Fadin:1975cb V. S. Fadin, E. A. Kuraev, L. N. Lipatov,Phys. Lett.B 60 (1975) 50.Kuraev:1976ge E. A. Kuraev, L. N. Lipatov, V. S. Fadin,Sov. Phys. JETP 44 (1976) 443 [Zh. Eksp. Teor. Fiz.71 (1976) 840].Kuraev:1977fs E. A. Kuraev, L. N. Lipatov, V. S. Fadin,Sov. Phys. JETP 45 (1977) 199 [Zh. Eksp. Teor. Fiz.72 (1977) 377].Balitsky:1978ic I. I. Balitsky, L. N. Lipatov,Sov. J. Nucl. Phys.28 (1978) 822 [Yad. Fiz.28 (1978) 1597]. Lipatov:1995pn L. N. Lipatov,Nucl. Phys. B 452 (1995), 369-400.Lipatov:1996ts L. N. Lipatov,Phys. Rept. 286 (1997), 131-198.Hentschinski:2018rrf M. Hentschinski,Phys. Rev. D 97 (2018) no.11, 114027.Hentschinski:2011tz M. Hentschinski,A. Sabio Vera,Phys. Rev. D 85 (2012), 056006.GomezBock:2020zxp M. Gómez Bock, M. Hentschinski, A. Sabio Vera,Eur. Phys. J. C 80 (2020) no.12, 1193.Hentschinski:2020rfx M. Hentschinski, [arXiv:2010.14748 [hep-ph]].Hentschinski:2020tbi M. Hentschinski, K. Kutak, A. van Hameren,Eur. Phys. J. C 81 (2021) no.2, 112 [erratum: Eur. Phys. J. C 81 (2021) no.3, 262].Hentschinski:2021lsh M. Hentschinski,Phys. Rev. D 104 (2021) no.5, 054014.Chachamis:2022jis G. Chachamis, A. Sabio Vera,JHEP 07 (2022), 109.Hentschinski:2012kr M. Hentschinski, A. Sabio Vera, C. Salas,Phys. Rev. Lett. 110 (2013) no.4, 041601.Mueller:2002zm A. Mueller, D. Triantafyllopoulos,Nucl. Phys. B 640 (2002), 331-350. Bartels:2007dm J. Bartels, K. Kutak,Eur. Phys. J. C 53 (2008), 533-548. Gribov:1983ivg L. V. Gribov, E. M. Levin, M. G. Ryskin,Phys. Rept. 100 (1983), 1-150. McLerran:1993ni L. D. McLerran, R. Venugopalan,Phys. Rev. D 49 (1994), 2233-2241. BethencourtdeLeon:2011xks N. Bethencourt de León, G. Chachamis, A. Romagnoni, A. Sabio Vera,Eur. Phys. J. C 80 (2020) no.6, 549. Sergi:2014eja A. Sergi, K. G. Zloshchastiev,Phys. Rev. A 91 (2015) no.6, 062108.Franke:1976tx V. A. Franke,Teor. Mat. Fiz. 27 (1976), 172-183Gorini:1975nb V. Gorini, A. Kossakowski, E. C. G. Sudarshan,J. Math. Phys. 17 (1976), 821 Lindblad:1975ef G. Lindblad,Commun. Math. Phys. 48 (1976), 119 Li:2020bys M. Li, A. Kovner,JHEP 05 (2020), 036.
http://arxiv.org/abs/2312.16743v1
{ "authors": [ "G. Chachamis", "M. Hentschinski", "A. Sabio Vera" ], "categories": [ "hep-th", "hep-ph" ], "primary_category": "hep-th", "published": "20231227232155", "title": "Von Neumann entropy and Lindblad decoherence in the high energy limit of strong interactions" }
Generalized Dualities for Heterotic and Type I Strings Falk Hassler [E-mail address: [email protected]], Yuho Sakatani [E-mail address: [email protected]] and Luca Scala [E-mail address: [email protected]] ^∗^University of Wrocław, Faculty of Physics and Astronomy, Maksa Borna 9, 50-204 Wrocław, Poland ^†Department of Physics, Kyoto Prefectural University of Medicine, 1-5 Shimogamohangi-cho, Sakyo-ku, Kyoto 606-0823, Japan We define generalized dualities for heterotic and type I strings based on consistent truncations to half-maximal gauged supergravities in more than three dimensions. The latter are constructed from a generalized Scherk-Schwarz ansatz in heterotic double field theory that satisfies the strong constraint. Necessary and sufficient conditions on the resulting embedding tensor are discussed, showing that only certain gaugings, called geometric, can arise from this procedure. For all of them, we explicitly construct the internal geometry and gauge potentials. In general, this construction is not unique and permits different uplifts which are used to define generalized T-duality. Two examples are worked out underlying the utility of our approach to explore new dualities and uplifts of half-maximal gauged supergravities. empty§ INTRODUCTIONGeneralized dualities originate from the desire to extend abelian T-duality on the string's worldsheet to a larger class of target spaces. In this way, less and less restrictive families of dualities were revealed over time. These include, for example, non-abelian T-duality<cit.>, Poisson-Lie T-duality<cit.> and dressing cosets<cit.>. All of them are summarized under the term generalized T-dualities. As the name T(arget space)-duality suggests, they relate the classical dynamics of closed strings in different, dual target spaces by canonical transformations of the two-dimensional worldsheet theory. In contrast to abelian T-duality, which is known to be a genuine symmetry of string theory, their fate under quantum corrections is still under active investigation. Still, they received a new wave of interest over the last years since their underlying structures seem to be vital to integrablity in string theory and to gauged supergravities (gSUGRA) in various dimensions. The latter allow to go beyond strings, and hint, through solution preserving transformations for M-theory, that generalized U-dualities might also apply to membranes <cit.>.To approach generalized dualities from the low-energy effective description of string and M-theory, consistent truncations have proven to be a valuable tool <cit.>. They split the effective theory's spacetime into an external and internal part. The latter is fixed in terms of an ansatz, whose parameters become fields on the external spacetime. Effectively, this removes degrees of freedom from the theory and explains the name truncation. This process, however, is only consistent if the field equations of the truncated theory obtained by * applying the truncation ansatz to the action before computing the field equations by variation of the truncated action, * plugging the truncation ansatz into the field equations of the initial theory,are equivalent. Remarkably, this consistency condition results in severe constraints on admissible ansätze for truncations, making them in general very hard to find. In this framework, generalized dualities arise as equivalence relations identifying different consistent truncation ansätze which result in the same truncated theory. 
Therefore, in order to explore generalized dualities, one might start from the lower-dimensional, truncated theory and ask what are its higher-dimensional origins, called uplifts.Implementing this programme for maximal gSUGRAs in ten or less dimensions gives rise to all the currently known generalized U-dualities <cit.>. Since the understanding of generalized dualities for membrane models is still very limited, their connection to consistent truncation is currently the best way to access them. This is in stark contrast to the bosonic string case, where at first T-dualities were discovered on the worldsheet and later related to the low-energy effective theory. When it comes to generalized dualities, there is still a family of string theories that has not been studied extensively, namely, heterotic and type I. Recently, the first explorations in this direction have shown that -models can be defined for them <cit.> and used to construct new integrable deformations <cit.>. Motivated by these hints towards interesting new physics, we take the complementary approach and use insights from half-maximal gSUGRAs as a natural way to define generalized dualities for heterotic and type I strings. Like their maximal relatives, they admit a full classification in terms of the embedding tensor <cit.>, and their uplifts are the concern of this article.Half-maximal gSUGRAs in 10-d dimensions, for 1<d≤ 6, are classified by the embedding of their gauge group into the Lie group= (d) ×O(d,𝔫) with(d) = ℝ^+ 0 < d ≤ 5(2)d = 6 ,where 𝔫≥ 0 counts the number of vector multiplets they have in addition to the gravity one. In three dimensions (where d=7), the product form ofin (<ref>) is replaced by =O(8,𝔫). Conceptually this case is not different, but it is algebraically much more demanding and we thus restrict our discussion to d≤ 6. Concerning possible uplifts, one may consider all the five perturbative superstring theories and M-theory. Depending on the amount of supersymmetry in the low-energy limit, they can be assigned to the two categories: Maximal (32 real supercharges) Half-maximal (16 real supercharges) * M-theory / type IIA * Type IIB * Heterotic E_8 × E_8 * Heterotic SO(32) * Type I. Both cases are interesting but they require different strategies. To uplift to any theory in the first column, the truncation ansatz has to break half of the supersymmetry. This scenario has been analyzed in Exceptional Field Theory (ExFT) <cit.>, where the consistency of the truncation imposes strong restrictions on the number 𝔫≤ d of allowed vector multiplets <cit.>. This bound arises because the largest subgroup of E_d+1(d+1) (the duality group of ExFT on a d-dimensional internal space) capable of hosting the embedding tensor of a half-maximal gSUGRA is O(d, d). On the other hand, reproducing the low-energy limit of any of the theories in the second column requires generalized Scherk-Schwarz-type <cit.> uplifts on generalized parallelizable spaces <cit.>. Here, 𝔫≥ d is required, showing that these two approaches are not redundant but complementary. A priori, there is no upper bound on the number of vector multiplets; however, cancellation of the gauge anomaly in ten dimensions by the Green-Schwarz mechanism requires 𝔫=496 <cit.>. Therefore, we focus on uplifts of consistent truncations of the shaded region in the theory space of half-maximal gSUGRAs, Fig.<ref>. They are central in defining generalized dualities for heterotic and type I strings.Our presentation in this article follows the steps outlined above. 
We first introduce generalized Scherk-Schwarz reductions of half-maximal SUGRAs in Section <ref>. Next, we ask which gSUGRAs can be realized in terms of these compactifications. More specifically, we define in Section <ref> their embedding tensor in terms of generalized Lie derivatives for the different duality groups (<ref>) and then check how they relate to the generalized Scherk-Schwarz reductions presented before. It turns out that the central object in all these considerations is a generalized frame field. Initially, this latter object is used implicitly, but in Section <ref> we eventually turn to its explicit construction, which extends the results of <cit.>. For a generic upliftable gSUGRA with gauge group G, the frame is not unique but depends on the choice of certain admissible subgroups H⊂ G. This is an hallmark of generalized dualities, as we further explain in Subsection <ref>. To better clarify our construction, in Section <ref> we present two examples. The first one explicitly demonstrates heterotic generalized T-duality at work, while the second one underlines that our construction is completely systematic and general, and that it can be applied also to complicated settings with various non-trivial fluxes. A final Section <ref> deals, in the end, with the conclusions of our analysis. § GENERALIZED SCHERK-SCHWARZ REDUCTIONIn the following, we present how half-maximal gSUGRAs can be derived from a generalized Scherk-Schwarz reduction of the ten-dimensional theory. Although conceptually not very different from the maximal setup, this kind of reduction has received much less attention in the literature to now. The ten-dimensional, ungauged starting point is double field theory (DFT) with duality group (10, 10+n), where n denotes the dimension of the gauge group . This theory is also known as heterotic DFT <cit.> and can be either formulated in terms of a generalized dynamical metric or in terms of a generalized frame, containing all the dynamical fields except for the dilaton. For our purposes, the latter is more suited and we will review, now, this construction before specifying the reduction ansatz. §.§ Heterotic double field theoryIn the frame formulation of heterotic DFT, all the bosonic fields are contained in the T-duality-invariant dilaton d and in the generalized vielbein 𝔼_^∈(10,10+n). The latter relates flat indices, like , to their curved counterparts, here [Since in the paper we will make use of a large number of different kind of indices, we summarized them for the reader's convenience in Appendix <ref>.]. Both these kind of indices take values in the fundamental representation of the duality group. Instead of using the fundamental fields directly, the action and the associated field equations can be written in terms of the following generalized fluxes𝔽_≡ -3 𝔻_[𝔼_^ 𝔼_||] ,𝔽_≡ 2 𝔻_ d - ∂_𝔼_^ ,with 𝔻_≡𝔼_^ ∂_. Here, the indicesandare raised or lowered with the (10,10+n)-invariant metrics η_ and η_, respectively. In all the equations we have to impose the section condition η^ ∂_ · ∂_ ·= 0, solved by the canonical choice∂_ = [ ∂_m∂_ ∂^m ]=[ ∂_m 0 0 ] .This dictates how we decompose (10,10+n)→(10)× by putting the theory on section. The m indices label the fundamental of the first factor in the decomposition, whilecorrespond to the adjoint of the second factor. 
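Before giving the explicit parametrization of the generalized vielbein below, a quick numerical sanity check is useful: the lower-triangular product of an (e, ν)-block, an A-shift and a B-shift used in that parametrization is indeed an element of O(10,10+n). The following sketch confirms that the product of the three factors preserves the invariant metric η; the block sizes and the choice κ = 1 are toy values, taken purely for illustration.

```python
# Check that the generalized vielbein parametrization used below, a product of
# an (e, nu)-block, an A-shift and a B-shift, preserves the O(d, d+n) metric,
# i.e. E @ eta @ E.T == eta. Toy block sizes d_s, n_g and kappa = identity.
import numpy as np

rng = np.random.default_rng(1)
d_s, n_g = 3, 2

e = np.eye(d_s) + 0.3 * rng.standard_normal((d_s, d_s))   # invertible vielbein block
A = rng.standard_normal((d_s, n_g))                        # gauge potential block
B = rng.standard_normal((d_s, d_s)); B = B - B.T           # antisymmetric B-field
kappa = np.eye(n_g)
nu = np.eye(n_g)                                           # any kappa-orthogonal matrix works

Z = np.zeros
eta = np.block([[Z((d_s, d_s)), Z((d_s, n_g)), np.eye(d_s)],
                [Z((n_g, d_s)), kappa,         Z((n_g, d_s))],
                [np.eye(d_s),   Z((d_s, n_g)), Z((d_s, d_s))]])

D1 = np.block([[e,             Z((d_s, n_g)), Z((d_s, d_s))],
               [Z((n_g, d_s)), nu,            Z((n_g, d_s))],
               [Z((d_s, d_s)), Z((d_s, n_g)), np.linalg.inv(e).T]])
D2 = np.block([[np.eye(d_s),   -A,            -0.5 * A @ kappa @ A.T],
               [Z((n_g, d_s)), np.eye(n_g),   kappa @ A.T],
               [Z((d_s, d_s)), Z((d_s, n_g)), np.eye(d_s)]])
D3 = np.block([[np.eye(d_s),   Z((d_s, n_g)), -B],
               [Z((n_g, d_s)), np.eye(n_g),   Z((n_g, d_s))],
               [Z((d_s, d_s)), Z((d_s, n_g)), np.eye(d_s)]])

E = D1 @ D2 @ D3
print(np.allclose(E @ eta @ E.T, eta))    # True: E is an O(d, d+n) element
```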
Flat indices follow the same convention, namely _ = [ _â_ ^â ] .The Lagrangian density of heterotic DFT is given by _DFT = -2d ℝ withℝ = -112 ℍ^ ℍ^ ℍ_ 𝔽̂_^ 𝔽̂_^ - 14 ℍ^ 𝔽̂_^ 𝔽̂_^ + 2 ℍ^ 𝔻_𝔽_ - ℍ^ 𝔽_ 𝔽_ .Here, ℍ_∈(10,10+n) is a constant matrix (chosen for convenience to be diagonal),𝔽̂_^≡𝔽_^ + 𝔼_^ 𝔼_^ 𝔼_^ Σ_^ ,and Σ_^ is a constant torsion implementing the gauge groupin ten dimensions. This torsion is fixed byΣ_^ = f_^ 0otherwise ,where f_^ denote the structure constants of 's Lie algebra. The reason to introduce this contribution in a separate term is that it cannot be generated by a frame _^ which is compatible with the section (<ref>). To keep track of this special contribution we decorate quantities that contain it with a hat. Additionally, one needs to adapt the generalized Ricci scalar ℝ by adding the generalized scalarℤ̂≡η^ (- 16 𝔽̂_^ 𝔽̂_^ + 2 𝔻_𝔽_- 𝔽_ 𝔽_) ,which reduces to 1/3! Σ_ Σ^ under the section condition. Subtracting this term, we define the modified generalized Ricci scalar asℝ̂≡ -112 ℍ^ ℍ^ ℍ_ 𝔽̂_^ 𝔽̂_^ - 14 ℍ^ 𝔽̂_^ 𝔽̂_^ + 2 ℍ^ 𝔻_𝔽_ - ℍ^ 𝔽_ 𝔽_ - ℤ̂ . To see how this quantity relates to the action of ten-dimensional half-maximal supergravityS=∫^10 x √(-g) _10with g = (g_mn) , _10 ≡-2Φ(R + 4 g^mn ∂_mΦ ∂_nΦ -112 Ĥ_mnp Ĥ^mnp -14 κ_ F_mn^ F^mn) ,with the field strengths Ĥ_3≡1/3! Ĥ_mnpx^m∧ x^n∧ x^p ,F_2^ ≡1/2! F_mn^x^m∧ x^n , defined byĤ_3≡ B_2 - 12 κ_ A^∧ A^ - 13! f_ A^∧ A^∧ A^ , F_2^ ≡ A^ + 12 f_^ A^∧ A^ ,we assume the parametrization𝔼_^ ≡[ e_â^n 0 0; 0 ν_^ 0; 0 0 e^â_n ][ δ_n^p- A_n^ -1/2 A_n^ A_p; 0 δ_^ A_p; 0 0 δ^n_p ][δ_p^m0 - B_pm;0δ_^0;00δ^p_m ], -2d ≡-2Φ√(-g) ,of the generalized frame and dilaton. Plugging it into (<ref>), we findS = ∫ x^10 [-2dℝ̂ - ∂_m(4 -2Φ√(-g) g^mn ∂_nΦ)] ,and thereby match the SUGRA action up to a boundary term. Finally, we can slightly modify ℝ̂ intoℝ̂'= -112 ℍ^ ℍ^ ℍ_ 𝔽̂_^ 𝔽̂_^ - 14 ℍ^ 𝔽̂_^ 𝔽̂_^ + ℍ^ 𝔽_ 𝔽_ - ℤ̂to have all derivatives contained in the generalized fluxes and ℤ̂. As a result we can write the action of the ten-dimensional half-maximal SUGRA asS = ∫^10 x[-2dℝ̂' - ∂_m(2 -2Φ√(-g) K_n^nm)] ,where we have definedK_mn^p ≡12 (F_m^p_n + F_n^p_m - F_mn^p) , F_mn^p ≡ -2 ∂_[m e_n]^â e_â^p .In this paper we will consider S = ∫^10 x-2dℝ̂' without the boundary term as the ten-dimensional action. §.§ Hohm-Samtleben-like split formulationIn order to prepare the ground for the dimensional reduction of this action on an internal, d-dimensional, space we follow <cit.> and choose a different splitting of the (10,10+n) indices,x^ = [ x^μ x^I x_μ ],with x^I ≡[ x^ix^ x_i ] .There are several things to note here: first, we decompose(10,10+n)→(D)×(d,d+n) , where D:=10-d and then further(d,d+n)→(d) × ,with μ=1,…,D and i=D+1,…,10 . We also change the parametrization of the generalized frame and dilaton accordingly to𝔼_^= [e_^μ- e_^ν A_ν^I -e_^ν (B_νμ+12 A_ν^K A_μ K); 0_A^I_A^K A_μ K; 0 0e_μ^ ],-2d=-2ϕ√(-(g_μν)) .With this choice, one eventually obtains the fluxes2𝔽̂_= e_^μ e_^ν e_^ρ _μνρ , 𝔽̂_^C = e_^μ e_^ν _μν^I _I^C , 𝔽̂_^= 2 e_[^μ e_]^ν D_μ e_ν^ , 𝔽̂_ B^C = e_^μ _B^I (D_μ_I^C - A_μ^J Σ_JI^K _K^C) , 𝔽̂^_ C= e_μ^ _C e_^μ ,𝔽̂_ABC= _ABC≡_ABC + _A^I _B^J _C^K Σ_IJK , 𝔽_= 2 e_^μ D_μϕ -2 e_[^μ e_]^ν D_μ e_ν^ ,𝔽_A= _A - e^-1 _A e,where _A≡_A^I ∂_I , e≡ (e_μ^), and _AB^C, _A are the generalized fluxes associated with _A^I. 
Furthermore, we have defined_μνρ ≡ 3 D_[μB_νρ] - 3A_[μ^I ∂_νA_ρ]I + η_IJ A_[μ^I [A_ν, A_ρ]]^J_D -Σ_IJK A_μ^I A_ν^J A_ρ^K , _μν^I≡ 2 ∂_[μ A_ν]^I - [A_[μ, A_ν]]_D^I + Σ_JK^I A_μ^J A_ν^K + ∂^I B_μν ,where [·,·]_D is the DFT D-bracket and we made use of the covariant derivative D_μ≡∂_μ - _A_μ. The latter is defined in terms of the generalized Lie derivative for the internal space, the one for the duality group (d,d+n), which acts on a generalized vector W^J of weight zero by another vector V^I as_V W^I = V^J ∂_J W^I - ( ∂_J V^I - ∂^I V_J ) W^J .From this, we find that the action of the covariant derivative on the fundamental tensors is given byD_μ e_ν^= ∂_μ e_ν^ - A_μ^I ∂_I e_ν^ , D_ρ B_μν= ∂_ρ B_μν - A_ρ^I ∂_I B_μν , D_μ_I^A = ∂_μ_I^A - (A_μ^J ∂_J _I^A + _J^A ∂_I A_μ^J - _J^A ∂^J A_μ I) , D_μϕ= ∂_μϕ - A_μ^I ∂_Iϕ + 12 ∂_I A_μ^I ,and can be easily extended to all the other relevant fields or combinations of them.Taking into account the splitting x^m = [ x^μ x^i ], which follows from (<ref>), the section fixed in (<ref>) implies ∂^μ=0 . With that, the ten-dimensional SUGRA action (<ref>) can be rewritten asS = ∫^10 x [_D - ∂_μ(2 e -2ϕ _ν^νμ)] , _D≡ e -2 ϕ(R+4 D_μϕ D^μϕ -112 _μνρ ^μνρ + 18 D̂_μℍ^IJD̂^μℍ_IJ - 14 ℍ_IJ ^μν I_μν^J -V),where e≡√(-(g_μν)) , _μ^νρ is defined in terms of e_^μ similarly to K_m^np (equation (<ref>)), andℍ_IJ ≡^A_I ℍ_AB^B_J ,D̂_μℍ_IJ ≡ D_μℍ_IJ - 2 A_μ^K Σ_K(I^L ℍ_J)L .Furthermore, we have defined the following quantitiesR ≡ R - e^μ _μν^I ∂_I e^ν_ , V≡ -( + 14 ℍ^IJ ∂_Ig^μν ∂_Jg_μν + e^-2 ℍ^AB _A e _B e -2 e^-1 ℍ^AB _A _B e) , ≡ -112 ℍ^AD ℍ^BE ℍ_CF _AB^C _DE^F - 14 ℍ^AB _AD^C _BC^D + ℍ^AB _A _B - ℤ̂ .Note that the last two terms in the potential V have been missed in <cit.>, but they are important to reproduce the scalar potential for non-unimodular gaugings. §.§ Scherk-Schwarz ansatzEach half-maximal gauged supergravity can be characterized by its embedding tensor X_^ <cit.>, where hatted indices are in the fundamental representation of the duality groupdefined in (<ref>). There are various consistent choices for this tensor, but some of them cannot be uplifted to ten-dimensional SUGRA. Here, we will not consider those and instead focus on theories which result from a reduction of the action (<ref>). The possible form of the embedding tensor X_^ is, therefore, further restricted; we will ignore, for example, trombone gaugings, since they are known to lack a ten-dimensional action. After imposing these constraints, the embedding tensor X_^ can be decomposed into the two (d,d+n) tensors X_AB^C and ξ_A. Comparing this residual duality group with (<ref>), we find 𝔫=d+n, and since n≥ 0, we immediately see how the restriction to the shaded region in the theory space (Fig.<ref>) arises.To reproduce all gauged supergravities compatible with the requirements elaborated above we need to make an appropriate Scherk-Schwarz reduction ansatz. For this purpose, we consider a generalized frame field E_A^I∈ℝ^+×(d,𝔫) that satisfies_E_A E_B^I = - X_AB^C E_C^I ,where the generalized Lie derivative is given in (<ref>). How to explicitly construct such a frame will be the objective of Section <ref>; here, we just assume that it exists. According to (<ref>), the matrix E_A^I is an element of ℝ^+×(d,𝔫) and thus we need to rescale it to obtain an (d,𝔫) matrix:_A^I ≡Δ E_A^I ∈(d,𝔫) .From our construction of E_A^I, we find that _A^I and Δ satisfyF_A≡ - 2ϕ ∂_I( -2ϕ _A^I) = e^Δ (d-9) ξ_A ,withD_AΔ = ξ_A , and D_A≡ E_A^I ∂_I. 
Furthermore, we define the generalized fluxes for E_A^I as before by F_ABC ≡ - 3 E_[A^I ∂_I E_B^J E_C]J , andF̂_ABC≡ F_ABC + Σ_ABC .Using these quantities, we make the Scherk-Schwarz ansatze_^μ= Δ(x^i) ê_^μ(x^μ) , B_μν= -2Δ(x^i) B̂_μν(x^μ) , A_μ^I= e^-Δ(x^i) A_μ^A(x^μ) _A^I(x^i) ,_A^I= _A^B(x^μ) _B^I(x^i) , -2 ϕ= -2 ϕ̂(x^μ) + (8-d) Δ(x^i) + ln v ,where v≡(v_i^a) is the left-invariant one-form which we will define later. We, then, obtain_ABC = e^Δ_A^D _B^E _C^F F_DEF ,_A = _A^B F_B ,and their hatted counterparts that are required to evaluate the contributions (<ref>) to the reduced action. Under the above ansatz, the quantities that appear in the action (<ref>) become_2^I= -Δ ( A^A + 12 F̂_BC^A A^B∧ A^C + ξ_B A^B∧ A^A -2 ξ^A B̂_2) _A^I, _3= -2Δ (B̂_2 + 2 A^A ξ_A∧B̂_2 - 12A^A∧ A_A -13! F̂_ABC A^A∧ A^B∧ A^C ) , D_μϕ= ∂_μϕ̂ + 8-d2 A_μ^A ξ_A, D_μℍ_IJ= ∂_μℍ_IJ -2 A_μ^A F̂_A(I^K ℍ_J)K-2 A_μ(I ℍ_J)K ξ^K + 2 A_μ^A ξ_(I ℍ_J)A ,where _A^I is used to relate flat indices with curved ones. This only leaves the potential V to be computed – a thing we will deal with in Subsection <ref>. Note that the scale factor Δ completely drops out from the action, and we have successfully reproduced the action of a half-maximal gauged supergravity up to a boundary term. § HALF-MAXIMAL GSUGRAS IN MORE THAN THREE DIMENSIONSWe now come back to the main challenge of this article, namely, the construction of the frames that satisfy the algebra (<ref>) with constant X_AB^C. They are central in both the generalized Scherk-Schwarz reduction already discussed in Subsection <ref> and the definition of generalized dualities with which we will deal later. Our approach to tackle it consists of two steps: first, we need to clarify the algebraic structure underlying the Lie derivative given in (<ref>) and its extension to larger duality groups, as they appear in (<ref>). From this analysis, eventually, conditions on upliftable gaugings are identified and further refined in Subsection <ref>. Afterwards, frames are constructed from basic elements of group theory in Subsection <ref>. For all these steps we will set Σ_ABC from the last section to 0 and thereby just consider uplifts to the abelian sector. We only drop this condition in Subsection <ref> by an additional twist of the frame which recovers the full non-abelian gauge sector. §.§ Generalized Lie derivativeIn (<ref>) we have already encountered the generalized Lie derivative in DFT. To make contact with half-maximal gSUGRAs, we furthermore need to understand how it can be extended to their duality groups given in (<ref>). For this purpose, the more algebraic formulation<cit.>_V W^≡ V^ ∂_ W^ + (t^α̇)_^ ∂_ V^(t_α̇)_^W^ + w(∂_ V^) W^is better suited. Note that this Lie derivative is different form its DFT counterpart and we will see later in Subsection <ref> how they can be related. The definition (<ref>) relies on the generators (t_α̇)_^ of the respective duality groups; we will deal with them in full detail in the next paragraph. In contrast to (<ref>), generalized vectors W^ with arbitrary weight w are here considered; for example, parameters of generalized diffeomorphisms have a natural weight w=β, whereβ≡1/8-d .It is customary to define the Y-tensorY^_≡δ^_ δ^_ + (t^α̇)_^(t_α̇)_^ + β δ^_ δ^_to express the section condition required for this Lie derivative to close asY^_∂_ · ∂_ ·= 0 .At the same time it allows to write the generalized Lie derivative (<ref>) in the more compact form_V W^ ≡ V^ ∂_ W^ - W^ ∂_ V^ + Y^_ ∂_ V^W^ + (w-β) (∂_ V^) W^ . 
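For orientation, the internal generalized Lie derivative introduced above can easily be checked in a toy setting. The sketch below, restricted purely for illustration to the O(2,2) case with the gauge block suppressed and with vectors of weight zero, verifies symbolically that the symmetric part of the bracket is a total derivative, ℒ_V W + ℒ_W V = ∂^♯(V·W), which is the property underlying the D-bracket used earlier.

```python
# Symbolic check of the internal generalized Lie derivative,
#   (L_V W)^I = V^J dJ W^I - (dJ V^I - d^I V_J) W^J ,
# for the O(2,2) case (gauge block suppressed), indices raised with the
# off-diagonal metric eta. The symmetric part is a total derivative:
#   L_V W + L_W V = eta^{-1} d(V.W).
import sympy as sp

x1, x2, xt1, xt2 = coords = sp.symbols('x1 x2 xt1 xt2')   # doubled coordinates
eta = sp.Matrix([[0, 0, 1, 0], [0, 0, 0, 1],
                 [1, 0, 0, 0], [0, 1, 0, 0]])
inv_eta = eta.inv()

def gen_lie(V, W):
    res = []
    V_low = eta * V
    for I in range(4):
        term = sum(V[J] * sp.diff(W[I], coords[J]) for J in range(4))
        for J in range(4):
            dI_VJ = sum(inv_eta[I, K] * sp.diff(V_low[J], coords[K]) for K in range(4))
            term -= (sp.diff(V[I], coords[J]) - dI_VJ) * W[J]
        res.append(term)
    return sp.Matrix(res)

V = sp.Matrix([x1*xt1, x2**2, xt1*xt2, x1 + xt2])
W = sp.Matrix([x2*xt2, x1*xt1, xt2**2, x2 + xt1])

lhs = gen_lie(V, W) + gen_lie(W, V)
rhs = inv_eta * sp.Matrix([sp.diff((V.T * eta * W)[0], c) for c in coords])
print(sp.simplify(lhs - rhs))   # zero vector: symmetric part is a total derivative
```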
Coming back to the generators of the duality group, note that we distinguish between them in the curved indices, denoted by (t_α̇)_^, and their flat counterparts (t_α)_^. However, they can always be chosen to have the same components and, therefore, we just give more details on the latter. As the duality group decomposes into two factors, namelyand (d,𝔫), we choose the generators accordingly,{t_α} = {t̃_α̃,t_α̂} . Let us start with the dimension independent (d,𝔫) factor and its generators t_α̂. Since we want, eventually, to take into account solutions of the section condition, like (<ref>), it is convenient to decompose them following the branching(d,𝔫)→(d)×(n) .For example, the adjoint representation branches as=2pt1.8[adj→(0.25em1,1,1)⊕(fund,fund)⊕(adj,1)⊕(1,0.25em1,1)⊕ (fund, fund)⊕ (0.25em1,1, 1) ,; -2-1 0 0 1 2 ]where the numbers below each summand indicate a natural grading which arises by the decomposition. Following this prescription, we denote the generators of (d,𝔫) by{t_α̂}≡{R_a_1a_2, R_a^, K^a_1_a_2,R__1_2 , R^a_,R^a_1a_2} ,where a=1,…,d and =1̇,…,ṅ , and the indices of R_a_1 a_2, R__1_2 and R^ab are antisymmetric (according to the first, thirdand last term in (<ref>)). Their commutators and matrix realizations in the fundamental representation are given in Appendix <ref>. To obtain the latter, one also needs the branching of the fundamental under (<ref>). This reads[ =2pt1.8fund →(fund,1) ⊕(1,fund) ⊕(fund,1);-1 0 1 ]and, therefore, we compose generalized vectors Z_A's asZ_A = [ Z_aZ_ Z^a ] .Now, let us come to , which depends on the internal dimension of the reduction d,= ℝ^+ d≤ 5(2) d=6 .Their generators will be denoted ast̃_α̃≡ R_* d≤ 5 R^_1__2d=6 ,where =+,- and R^_=0 . The generators t_α̂ and t̃_α̃ commute with each other and R^_ satisfies the sl(2) algebra[R^_, R^_] = δ_^ R^_ -δ_^ R^_ .We define, also, the dual generators{t^α}≡{t̃^α̃,t^α̂} .They are dual in the sense that the adjoint index is raised with the inverse of the respective Killing metric, where the normalization is fixed by the R_1=fundamental representation. Alternatively, this can be rephrased as the requirement that _R_1(t^α t_β) is diagonal[Concretely, we find 2_R_1(t̃^α̃ t̃_β̃)= -β (c_d+n) δ^α̃_β̃ , _R_1(t^α̂ t_β̂)= -2 δ^α̂_β̂ d≤ 5 , _R_1(t̃^α̃ t̃_β̃)= -(12+n) δ^α̃_β̃ , _R_1(t^α̂ t_β̂)= -4 δ^α̂_β̂ d=6 , where c_d=2,2,6,8,14 for d=1,2,3,4,5 .]. As expected, the grading of the dual generators is the opposite and we find again decompositions that mimic (<ref>) and (<ref>); to be specific{t^α̂} ≡{R^a_1a_2 , R^a_, K_a_1^a_2,R^_1_2 , R_a^,R_a_1a_2}andt̃^α̃ ≡ R^* d≤ 5 R__1^_2d=6 ,whereR^* ≡ - β^-1 R_* , R_^≡ - R^_ , K_a^b ≡ - K^b_a . Finally, we join these two parts together to obtain the full matrices (t_α)_^ that are needed in the definition (<ref>) of the generalized Lie derivative. At this point, one has to fix how the hatted indices decompose. For d<5, this is straightforward, because it reduces to the decomposition (<ref>); for larger d, however, the situation becomes more subtle since the ten-dimensional theory (<ref>) contains a three-form which is dual to a six-form, describing an NS5-brane. This has one time direction and five spacial ones, and all of the latter have to be in the compact d-dimensional space in the reduction. That can only happen for d≥ 5. If this bound is saturated, the 5-form is dual to a scalar field that we call Z_* = 15!ϵ^a_1… a_5 Z_a_1… a_5 . In d=6 the same quantity is dual to the vector Z^-a = 15!ϵ^a b_1… b_5 Z_b_1… b_5 which is related by the (2) S-duality to Z^+a. 
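For concreteness, generators of this kind can be realised as the η-antisymmetric matrices t_MN = η S_MN, with S_MN the elementary antisymmetric matrices; raising the adjoint label with η, this normalisation reproduces the contraction identity (t^α̂)_A^D (t_α̂)_B^C = η_AB η^CD - δ_A^C δ_B^D that is used below to simplify the embedding tensor. The following sketch (toy values d=2, n=1 and κ=1, chosen only for illustration) checks both statements numerically.

```python
# One consistent realisation of the so(d, d+n) generators in the graded basis
# (Z_a, Z_frak, Z^a): t_MN = eta S_MN with S_MN elementary antisymmetric
# matrices. The sketch checks that they preserve eta and satisfy
#   sum_{M<N} (t^MN)_A^D (t_MN)_B^C = eta_AB eta^CD - delta_A^C delta_B^D .
import numpy as np
from itertools import combinations

d, n = 2, 1
dim = 2 * d + n
eta = np.zeros((dim, dim))
eta[:d, -d:] = np.eye(d); eta[-d:, :d] = np.eye(d)   # pairs Z_a with Z^a
eta[d:d + n, d:d + n] = np.eye(n)                    # kappa block for Z_frak
inv_eta = np.linalg.inv(eta)

lhs = np.zeros((dim,) * 4)
for M, N in combinations(range(dim), 2):
    S = np.zeros((dim, dim)); S[M, N], S[N, M] = 1.0, -1.0
    t = eta @ S                                      # (t_MN)_A^B, row A, column B
    assert np.allclose(t @ eta + eta @ t.T, 0.0)     # lies in so(d, d+n)
    t_dual = eta @ (np.outer(inv_eta[:, M], inv_eta[:, N])
                    - np.outer(inv_eta[:, N], inv_eta[:, M]))
    lhs += np.einsum('ad,bc->abcd', t_dual, t)

rhs = (np.einsum('ab,cd->abcd', eta, inv_eta)
       - np.einsum('ac,bd->abcd', np.eye(dim), np.eye(dim)))
print(np.allclose(lhs, rhs))                         # True
```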
Therefore, we choose the following decompositions:Z_ = Z_A = [ Z_aZ_ Z^a ]d≤ 4[ Z_A Z_* ] = [ Z_aZ_ Z^a Z_* ]d=5 Z_ A = [ Z_ a Z_ Z_^a ]d=6 ,where A=1,…,d + 𝔫 .All (d,𝔫) generators, which are presented in full detail in the Appendix <ref>, act from the left on Z_A. 's action is more involved: first we have R_* in dimensions d≤ 5; this is realized by the matrices,[ for d≤ 4:for d=5:;(R_*)_A^B ≡β[ δ_a^b 0 0; 0 δ_^ 0; 0 0 δ^a_b; ] , (R_*)_^≡β[ δ_a^b 0 0 0; 0 δ_^ 0 0; 0 0 δ^a_b 0; 0 0 0-2 ] , ]depending on the value of d, as explained before. Consequently, (R_*)_^ is proportional to the identity matrix in d≤ 4. In d=6 , we have to additionally implement the (2)-factor; this is achieved by defining(R^_)_^≡ (R^_)_α A^β B = (δ_^ δ^_-12δ_^δ^_) δ_A^B , (t_α̂)_^≡ (t_α̂)_α A^β B = δ_^ (t_α̂)_A^B ,where (t_α̂)_A^B take the same form as those in d≤ 4 . §.§ Embedding tensorAfter having settled the explicit details of the generalized Lie derivative, we come back to obtaining the generalized frames satisfying the algebra_E_ E_ = - X_^ E_ .Note that this is still not the algebra (<ref>) we based the generalized Scherk-Schwarz reduction on in the last section, since the frames here are valued in the duality groups given in (<ref>) and not in (d,𝔫). Later on, it will become clear how these two algebras are related; for the moment, we just look at how the embedding tensor X_^ arises from the frame. To this end, it is convenient to introduce the Weitzenböck connectionΩ_^≡ D_ E_^E_^ = - D_ E_^ E_^ ,with D_≡ E_^ ∂_. Employing the definition of the generalized Lie derivative (<ref>) from the last subsection, we can express the embedding tensor asX_^ = Ω_^ + (t^α)_^ Ω_^ (t_α)_^ - β Ω_^ (t_0)_^ ,where (t_0)_^=-δ_^ can be understood as a generator of an ℝ^+ symmetry. To see how this additional symmetry relates to the duality group, we distinguish two different cases depending on the value of d. * In d≤ 5 , we make a general decomposition Ω_^ = Ω_^α (t_α)_^ = Ω_^α̂ (t_α̂)_^ + Ω_^* (R_*)_^ + Ω_^0 (t_0)_^ , where we use the splitting of generators (<ref>) and (<ref>). To see how the different components of the Weitzenböck connection contribute to the embedding tensor, we first define the invariant tensors 𝕀_^ ≡ (R_*)_^ + (β-1) (t_0)_^ = [ δ_A^B 0; 0 0 ] , 𝕁_^ ≡ -(β t_0 + R_*)_^ = [ 0 0; 0 1 ] , andℙ_^α̂_β̂ ≡𝕀_^ δ_β̂^α̂ + (t_β̂· t^α̂)_^ . While the first two can be easily seen to commute with all the generators due to their block form, the last is slightly more complicated. It can be understood as a projector onto the totally antisymmetric representation in the decomposition of the tensor product of the fundamental with the adjoint, -0.2em1⊗0.25em1, 1→0.25em2,1⊕0.75em1,1,1 , because it satisfies ℙ_^α̂_β̂ (t^β̂)_^=0 . Note that the projector has to be rescaled by 13 to have the correct normalization. Using these three invariants and remembering (t^α̂)_*^=(t^α̂)_^*=0 , we find X_^= ℙ_^α̂_β̂ Ω_^β̂ (t_α̂)_^ + (t^α̂)_^ ξ_ (t_α̂)_^ + β^-1 ξ_(R_*)_^ + 𝕁_^ Ω_^α̂ (t_α̂)_^ - β^-1 𝕀_^ ϑ_ 𝕁_^ + 1β-1 𝕁_^ ϑ_ 𝕀_^ , where ξ_≡𝕀_^ (β Ω_^* - Ω_^0),ϑ_≡Ω_^0 - β Ω_^ . Therefore, the only non-vanishing components of X_^ are X_AB^C= F_AB^C + (t^α̂)_A^D ξ_D (t_α̂)_B^C + ξ_A δ_B^C , X_*B^C= ξ_B^C + 1β-1 ϑ_* δ_B^C, X_A*^*= -2 ξ_A - β^-1 ϑ_A , where X_*B^C and X_A*^* are not present in d≤ 4 , and we have defined F_AB^C ≡ℙ_A^α̂D_β̂ Ω_D^β̂ (t_α̂)_B^C ,ξ_A^B ≡Ω_*^α̂ (t_α̂)_B^C . Since, as explained above, ℙ_^α̂_β̂ projects on the totally antisymmetric representation, we find that F_ABC≡ F_AB^D η_DC is totally antisymmetric. 
Moreover, from the antisymmetricity in AB of (t_α̂)_AB≡ (t_α̂)_A^C η_CB, ξ_AB≡ξ_A^C η_CB is also antisymmetric with respect to these two indices. By applying the identity (t^α̂)_A^D (t_α̂)_B^C = η_AB η^CD - δ_A^C δ_B^D , X_AB^C simplifies to X_AB^C= F_AB^C + η_AB ξ^C + ξ_A δ_B^C - ξ_B δ_A^C . As a crosscheck of our results so far, we can take d=5 and truncate ϑ_. After the redefinitions ξ_→ -1/2 ξ_ , ξ_AB→ -ξ_AB , and F_ABC→ -f_ABC, our expression (<ref>) for X_^ reproduces the known embedding tensor given in equation (3.6) of <cit.>. * In d=6 , we follow conceptually the same steps and begin with the decomposition Ω_^ = Ω_^α (t_α)_^ = Ω_^α̂ (t_α̂)_^ + Ω_^ (R^_)_^ + Ω_^0 (t_0)_^ . Similarly to the case d≤ 5 , we define ℙ_^α̂_β̂≡δ_^ δ_A^B δ_β̂^α̂ + (t_β̂ t^α̂)_^ , which again satisfies ℙ_^α̂_β̂ (t^β̂)_^=0 , and after a rescaling by 13 it projects on the totally anti-symmetric representation in (<ref>), which is nowa doublet of (2). Accordingly, we define F_^≡ℙ_^α̂_β̂ Ω_^β̂ (t_α̂)_^≡ F_ AB^C δ_^ , and then F_ ABC≡ F_ AB^D η_DC is totally antisymmetric in ABC . Consequently, we have X_^ = F_ AB^C δ_^ + (t^α̂)_A^D ξ_ D (t_α̂)_B^C δ_^ + 2 ξ_ A δ_^ δ_B^C - ξ_ A δ_^ δ_B^C + 2 (ϑ_ A δ_^ - ϑ_ A δ_^) δ_B^C , after the decomposition (t_α̂)_^=(t_α̂)_B^C δ_^, and having defined ξ_ A≡Ω_ A^ -Ω_ A^0 ,ϑ_ A≡Ω_ A^0 - β Ω_ A^ , which mimic their counterparts in the discussion of d≤ 5. Similarly, with (t^α̂)_A^D (t_α̂)_B^C = η_AB η^CD - δ_A^C δ_B^D , we further simplify it to X_^= F_ AB^C δ_^ + η_AB η^DC ξ_ D δ_^ - ξ_ B δ_A^C δ_^ + ξ_ A δ_^ δ_B^C - ϵ_ ϵ^ ξ_ A δ_B^C+ 2 (ϑ_ A δ_^ - ϑ_ A δ_^) δ_B^C . Again a crosscheck with the existing literature is in order. One has to truncate ϑ_, and after the redefinitions ξ_→ -1/2 ξ_ and F_ ABC→ -f_ ABC our result for X_^ matches equation (2.24) of <cit.>. To sum up, X_^ is decomposed into{F_ABC, ξ_A}d≤ 4 , {F_ABC, ξ_A, ϑ_A, ξ_AB, ϑ_*}d=5 , {F_ ABC, ξ_ A, ϑ_ A}d=6 .§.§ Scherk-Schwarz ansatz revisitedWe now are in the position to identify what we call half-maximal geometric gaugings as those that admit an uplift to the ten-dimensional action (<ref>) we discussed in the Section <ref>. As before, we have to distinguish between the two cases depending on d. * In d ≤ 5, the standard solution of the section condition (<ref>) has to be supplemented with ∂_*=0 in d=5 <cit.>. Consequently, one finds that Ω_*^α̂=0 and this, in turn, implies ξ_AB=ϑ_*=0 , leading, eventually, toX_AB^* = 0. We can use this observation to restrict the frame algebra for the duality group ℝ^+ ×(d,𝔫) to the second factor. This is done in two stages: first, the frame algebra (<ref>) becomes _E_A E_B^ = - X_AB^C E_C^ after removing the * component fromand . Then, we note that the generalized Lie derivative with indexrestricted to I is the same as that of double field theory, and we find _E_A E_B^I = _E_A E_B^I = - X_AB^C E_C^I , where X_AB^C is a restriction of X_^. To compare this setting with the generalized Scherk-Schwarz reduction discussed in Subsection <ref>, we need to parametrize the frame in terms of an (d,𝔫) frame, _A^I, and two additional fields. A suitable parametrization is E_^ = exp[Δt_0 - λ̅ (R_* + β t_0)]_^ _^ , with the new fields being Δ and λ̅. * Turning to the case d=6, we have to impose ∂_-I=0 to supplement the section condition, and we find F_-ABC=0 . A direct consequence is that X_+A+B^-C=0, and thus, replicating the discussion for the previous case, we find that the relation (<ref>) still holds for the submatrix E_A^I≡ E_+A^+I. 
In the following we will denote +I and +A simply as I and A respectively, as the plus components appear much more often than their minus counterparts. In this case the parametrization of the frame E_^ = [exp(Δt_0) exp(-γ R^-_+) exp[- λ̅ (R^+_+ + β t_0)]]_^ _^ is made in terms of an additional field γ.Combining the two results, we see that in any dimensions d≤ 6 (<ref>) is satisfied withX_AB^C= F_AB^C + η_AB η^DC ξ_D + ξ_A δ_B^C - ξ_B δ_A^C .Now we have all that is needed to eventually relate the embedding tensor identified in this section to the generalized Scherk-Schwarz reduction from the last section. Restricting the parametrizations (<ref>)and (<ref>) for E_^ to the (d,𝔫) subsector one reproduces (<ref>). Furthermore, plugging the frame _A^I in the generalized fluxes (<ref>) and (<ref>) gives rise toF_A = Δ (2 D_A ϕ - D_A Δ - ∂_I E_A^I) , F_ABC = X_[ABC] .To simplify the fluxes in (<ref>) we take the ansatz-2ϕ =-2ϕ̂(x^μ)λ̅(x^i)for ϕ to obtainF_A = -Δ (D_A Δ + D_Aλ̅ + ∂_I E_A^I) .Moreover, we findξ_A = D_AΔand, byimposing the heterotic section[We call the heterotic section the partial fixing of the section ∂_=[ ∂_I 0 ] for d≤ 5 and ∂_=[ ∂_+ I 0 ] for d=6.], the trombone gauging reduces toϑ_A = Ω_A^0 - β ∂_ E_A^ = D_AΔ - β D_Aλ̅ - β ∂_I E_A^I ,after taking into accountΩ_A^0 = β D_Aλ̅ - D_AΔas a result of the above parametrization. At end, we combine them to obtainF_A = -Δ (1+β^-1) ξ_A + β^-1 Δ ϑ_A.From the above relations we compute the scalar potential in (<ref>) as[Note that in (<ref>), we reintroduced hats over the generalized fluxes F_ABC, although in our discussion we dropped Σ_ABC for the moment. Therefore strictly, we are dealing with unhatted quantities. However, after reintroducing Σ_ABC as we will do later, it is easy to see that the general expression with the hats holds.]V =-2Δ [ -112 ℍ^AD ℍ^BE ℍ_CF F̂_AB^C F̂_DE^F - 14 ℍ^AB F̂_BD^C F̂_AC^D+ - (9-d) ℍ^AB ξ_A ξ_B +2 β^-1 ℍ^AB ξ_A ϑ_B + β^-2 ℍ^AB ϑ_A ϑ_B] .For example, in D=4 and D=5, this is consistent with (2.11) and (3.16) of <cit.> respectively, and in D=6 it is consistent with (3.7) of <cit.>. Then the ten-dimensional action depends on x^i only through the overall factor-β^-1 (Δ-β λ̅) ,and if it reduces to a suitable scalar density v on the internal space, parametrized by x^i, then the full action integral (<ref>) splits intoS = ∫^D x (x^i-independent terms) ×∫^d x v .In this case, the dimensional reduction of the full ten-dimensional action to D dimensions is possible. However, this only works when the trombone gauging vanishes and the section condition holds. Then, we have-β^-1 D_A (Δ-β λ̅) = - ∂_I E_A^I= D_A ln v ,where v is identified with the determinant of the Maurer-Cartan 1-form v_i^a, to be defined later in Subsection <ref>.Then, the only thing left to be verified is that v gives rise to a left-invariant measure on the coset E_A^I is defined on. This is, indeed, the case, as we discuss in details in Appendix <ref>.§ CONSTRUCTION OF THE FRAMESIn the last section we identified necessary constraints for half-maximal gSUGRAs to admit an uplift to the ten-dimensional action (<ref>). However, we did not yet construct the explicit ansätze for the uplifts; this is what we are going to do now and we will see that further constraints arise. All the relevant fields can be easily read-off from the extended frame E_^; thus, we are left with getting those of them that satisfy the section condition for heterotic/type I supergravity, namely that they only depend on the coordinates x^i. 
Their embedding tensors X_^ are called geometric and satisfy a Leibniz algebra,T_∘ T_ = X_^ T_ ,to be called heterotic/type I geometric algebra. §.§ Heterotic/type I geometric algebrasAssume that the frame E_^ just depends on the coordinates x^i. Then, we can always find a global transformations such that it becomes the identity element of the respective duality group at the distinguished point x^i = 0. We define a geometric gauging by the requirement that the Weitzenböck connection evaluated at this point ,W_^ =Ω_^|_x^i=0= W_^δ (t_δ)_^ ,has only contributions from W_a^β, while all the others are removed imposing the section condition. At this point, one computes the components of the embedding tensor, which are coordinate-independent. As before, the details depend on the dimensions d. One thing they all have in common are the contributions coming from (d,𝔫) leading to the constants in W_A^β̂W_a^β̂ = [f_a^b_1b_2f_a^b_ f_a,b_1^b_2f_a^_1_2f_a,b^f_a,b_1b_2 ] .Keep in mind that we used here the decomposition of the generators already encountered in (<ref>). However, not all of these components appear in X_^, but only those that survive after antisymmetrizing the lower indices, namely[f_a^bcf_a^b_ f_[a,b]^cf_a^f_[a,b]^f_[a,bc] ] . * Additionally, for d≤ 5, there are two abelian generators (R_*+β t_0) and t_0 , with the corresponding structure constants denoted by [ f_a Z_a ] . Note that the generator (R_*+β t_0) vanishes in d≤ 4 , and therefore the structure constants f_a appear only in d=5 . After making suitable redefinitions, we parametrize all geometric gaugings by 3 F_abc =h_abc , F_ab^c =f_ab^c + δ_a^c Z_b - δ_b^c Z_a , F_a^bc =f_a^bc , F_ab =h_ab , F_a =f_a , F_ =0, F^abc =0 , and ξ_A = [ Z_a 0 0 ] , ϑ_A = [ β f_a-Z_aβ f_b^b_β f_b^ba ] ,ξ_AB=0 , ϑ_* =0 . * In d=6, one has to take into account four more generators, R^α_β of sl(2), and t_0. We arrange these as [ (R^+_++β t_0) R^+_- R^-_+ t_0 ] to obtain [ W_+a^β̃W_+a^0 ] = [f_a f_a+^- f_a-^+Z_a ] . Again not all of these components will be part of X_^; indeed f_a+^- disappears completely. The reason for this can be understood by looking at the general expressions (<ref>) and (<ref>) from where it is clear thatf_a+^- can appear only through ξ_α A and ϑ_α A . However, under W_-a^α=W_-a^0=0 , we easily see that W_-a+b^-c cannot appear in ξ_α A and ϑ_α A . Consequentially, geometric gaugings for d=6 are given by 2 F_+ABC =F_ABC ,F_-ABC =0 , ξ_+A =[ Z_a 0 0 ],ξ_-A =[ f_a-^+00 ], ϑ_+A = [ β f_a - Z_aβ f_b^b_β f_b^ba ], ϑ_-A =[ -β f_a-^+ 0 0 ], where F_ABC is the same as in (<ref>).Some of the gaugings we discovered here were already known in the context of extended Drinfel'd algebras <cit.>. The new ones that have not been discussed before, originate from h_abc and h_ab. It is possible to write the universal expressionX_a= 12!h_abc R^bc + h_ab^ R^b_ + f_ab^c K^b_c + 12!f_a^bc R_bc + f_a^b_ R_b^ + 12!f_a^ R_ + f_a (R_*+β t_0) + f_a-^+ R^-_+ - Z_a (K + t_0), X_= f_a^b_K^a_b - f_a^R^a_ + 12!f_^R_ + 12!h_abR^ab - Z_a R^a_ + f_a^a_ (R_*+β t_0), X^a = f_b^ca K^b_c - f_b^a R^b_ + (12 f_bc^a - 2 Z_[b δ_c]^a ) R^bc + f_b^ba (R_*+β t_0) , X_* =0 , X_-a= -f_b-^+ K^b_a - f_a-^+ (R_*+β t_0) + f_a R^+_- , X_-= f_a^a_ R^+_- - f_a-^+ R^a_ , X_-^a = f_b^ba R^+_- + f_b-^+ R^ab ,for all geometric gaugings in d≤ 6, where K≡ K^a_a and X_* is defined only in d=5 while X_-A is defined only in d=6. Moreover, we denote X_+A as X_A and R_* as R^+_+ in the latter case. The full structure of the geometric algebras is presented in Appendix <ref>. 
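Two simple algebraic properties of the form X_AB^C = F_AB^C + η_AB ξ^C + ξ_A δ_B^C - ξ_B δ_A^C can be checked directly: its part symmetric in the last two indices is pure ξ, X_A(BC) = ξ_A η_BC, and for ξ_A = 0 the matrices (T_A)_B^C = -X_AB^C lower to antisymmetric ones, i.e. they lie in so(d,𝔫). The following sketch verifies both for random totally antisymmetric F_ABC and random ξ_A (toy dimension 5 and κ = 1, and of course such random data is not required to satisfy the quadratic constraint).

```python
# Assemble X_AB^C = F_AB^C + eta_AB xi^C + xi_A d_B^C - xi_B d_A^C from random
# totally antisymmetric F_ABC and random xi_A, then check
#   (i)  X_A(BC) = xi_A eta_BC ,
#   (ii) for xi = 0 the generators lie in so(d, frak n).
import numpy as np

rng = np.random.default_rng(2)
dim = 5
eta = np.zeros((dim, dim))
eta[:2, -2:] = np.eye(2); eta[-2:, :2] = np.eye(2); eta[2, 2] = 1.0
inv_eta = np.linalg.inv(eta)

F = rng.standard_normal((dim,) * 3)
F = (F - np.transpose(F, (1, 0, 2)) + np.transpose(F, (1, 2, 0))
     - np.transpose(F, (2, 1, 0)) + np.transpose(F, (2, 0, 1))
     - np.transpose(F, (0, 2, 1))) / 6.0             # project onto F_[ABC]
xi = rng.standard_normal(dim)

def embedding_tensor(F, xi):
    delta = np.eye(dim)
    return (np.einsum('abd,dc->abc', F, inv_eta)
            + np.einsum('ab,c->abc', eta, inv_eta @ xi)
            + np.einsum('a,bc->abc', xi, delta)
            - np.einsum('b,ac->abc', xi, delta))

X = embedding_tensor(F, xi)
X_low = np.einsum('abc,cd->abd', X, eta)             # X_ABC
sym = 0.5 * (X_low + np.transpose(X_low, (0, 2, 1)))
print(np.allclose(sym, np.einsum('a,bc->abc', xi, eta)))                      # True

X0 = embedding_tensor(F, np.zeros(dim))
print(all(np.allclose(X0[a] @ eta + eta @ X0[a].T, 0) for a in range(dim)))   # True
```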
§.§ Generalized frame fieldsWe have now completed all the algebraic considerations we needed and we can finally start with the construction of the generalized frame that realize the geometric algebras. For several other duality groups this construction has been already executed successfully. A common pattern that emerges is that one should start from the parametrizationE_^ = M_^ V̂_^ ,whereV̂_^ = V_^ N_^decomposes further into the frame V_^ and a N_^ given byN_^≡[exp(-12! 𝔟_mn R^mn) exp(-𝔞_m^ R^m_)]_^ .The matrix M_^ mediates the adjoint action of an underlying Lie group and is therefore constrained by(M^-1)_^M_^ = -v^ X_^ ,where v^ is an extension of the ordinary Maurer-Cartan 1-form, and satisfies the modified Maurer-Cartan equation <cit.>v^ = - 12 X_^ v^∧ v^ -w^ ,with the 2-form w^. The exterior derivative of this object encodes the violation of the Jacobi identity for X_[]^ <cit.>, but still it does not carry on any new physics. In fact, this 2-form is irrelevant for the construction of the generalized frames, sincew^ X_^=0holds. All these objects are defined on the coset G/H, where G is the gauge group of the gSUGRA in D dimensions, which is completely fixed by the embedding tensor. H is a subgroup of G that we specify next. To this end, we first introduce the matrices(T_)_^ = - X_^which satisfy[ T_, T_ ] = X_^ T_under normal matrix multiplication due to the Leibniz identity(T_∘ T_) ∘ T_ + T_∘ (T_∘ T_) = T_∘ (T_∘ T_) .They decompose into three distinguished parts,T_ = [ T_a T_ὰ T_ά ]= [T_a T_α̃ ] .We already know the index a from (<ref>), while the two additional conditions,X_()^a = 0 and X_()^ὰ = 0 ,fix the full decomposition completely. The gauge group G is generated by T_a and T_ὰ, while the Lie algebra of its subgroup H is spanned by T_ὰ. Revisiting (<ref>) in this light, it is instructive to rewrite it asM^-1 M = v^ T_ ,to see that it just defines the left-invariant Maurer-Cartan form for a coset representative M∈ G/H. For example, one might choose this representative asM_^ = exp( x^c T_c )_^ . Finally, we look at V̂_^, which depends on the dimension d and readsV_^≡[ V_A^I 0; 0 v ]d≤ 5 [ 1 0; 0 v ]⊗V_A^I d=6 ,whereV_A^I ≡[ v_a^i 0 0; 0 δ_^ 0; 0 0 v^a_i ] .Here, v^a_i originates from v_i^ x^i after adopting the parametrizationv_i^ = [ v_i^av_i^v_ia v_i^* ]d≤ 5 [ v_i^ a v_i^ v_i^_a ]d=6 .Last but not least, we have v_a^i, that represents the dual vector fields to the one-forms v^a_i.With the frame fixed, we can now start to compute the Weitzenböck connectionΩ_^= M_^ M_^ (M^-1)_^ [Ω̂_^ - V̂_^ (M^-1 ∂_IM)_^]= M_^d M_^ (M^-1)_^ (Ω̂_d^ + A_d^ X_^),where we have introducedΩ̂_a^ = v_a^i V̂_^ ∂_i V̂_^ , A_a^≡ v_a^i v_i^ ,to keep the equations more compact. In order to evaluate Ω̂_a^, we need three more ingredients. First, the Maurer-Cartan equation (<ref>), which results inv^a = - 12 X_BC^a v^B∧ v^C = -(12 f_bc^a v^b∧ v^c - f_b^ac v^b∧ v_c - f_b^a_ v^b∧ v^ + f_b-^+ v^b∧ v^-a) ,and is needed to cope with the derivatives of the one-forms in (<ref>). Next, we need to deal with their dual vector fields. For them, after using v^c(v_a,v_b)=-v^c([v_a,v_b]), we getv^c([v_a,v_b]) = -v^c(v_a,v_b) = f_ab^c - 2 f_[a^cdA_b]d - 2 f_[a|^c_A_|b]^ + 2 f_[a|-^+ A_|b]^-c .The last contribution comes for the flat derivative𝔇_a N_J^L (N^-1)_L^K = -𝔇_a 𝔞^_i R^i_ - 12! (𝔇_a 𝔟_ij - κ_ 𝔇_a𝔞_i^ 𝔞_j^) (R^ij)_J^K,where 𝔇_a≡ v_a^i ∂_i . 
Putting all of them together gives rise toΩ̂_a^= -𝔇_a V̂_^ V̂_^ - V̂_^ V̂_^ 𝔇_a N_^(N^-1)_^ = k_ab^c (K^b_c)_^ + 𝔇_aln v (R_*+β t_0)_^ - V̂_^ V̂_^ 𝔇_a N_^(N^-1)_^ ,where we have definedk_ab^c ≡ v_a^j v_b^i ∂_i v_j^c , 𝔣_ij^≡ 2 ∂_[i𝔞_j]^ ,and𝔥_ijk≡ 3 (∂_[i𝔟_jk] - κ_ ∂_[i𝔞_j^ 𝔞_k]^) . To further proceed, it is important to keep in mind that only certain projections of Ω̂_a^ will enter the embedding tensor X_^. To take this fact into account quantitatively, we define the equivalence relation ∼ which allow to neglect terms from the Weitzenböck connection that will not contribute. This identifies the following three classes of terms * terms of the form S_ab^c (K^b_c)_^+S_ba^b (R^*+β t_0)_^ with S_[ab]^c=0, * terms of the form S_ade (R^de)_^ with S_[ade]=0, * terms of the form S_a (R^+_-)_^,which can appear in Ω̂_a^ but not in X_^. Applying it to (<ref>) leads to the simplificationsΩ̂_a^ ∼[-(12 f_ab^c - f_[a^cdA_b]d - f_[a|^c_A_|b]^ + f_[a|-^+ A_|b]^-c)K^b_c + k_[ba]^b (R_*+β t_0) + 12! 𝔣_ab^ R^b_ + 13! 𝔥_abcR^bc]_^ ,after taking into account (<ref>) and 𝔇_aln v = v_b^i 𝔇_a v_i^b = k_ba^b. From here on, we suppress the last two indices of the Weitzenböck connection and rather use Ω̂_a that arises from Ω̂_a ^ = Ω̂_a^β (t_β)_^.Eventually, we want to verify that our ansatz (<ref>) results in the embedding tensor (<ref>) for the geometric gaugings from the previous subsection. To this end, we need to check that Ω_^ in (<ref>) gives rise to the correct X_^. Since the embedding tensor is invariant under the adjoint action mediated by M_^, it is sufficient to consider the two terms in the bracket on the right hand side of (<ref>)[To come to this conclusion, keep in mind that the map from the Weitzenböck connection and the embedding tensor (<ref>) is equivariant under the adjoint action of the duality group.]. They give rise toΩ̂_a + A_a^B X_B ∼ X_a + 12 f_ba^c K^b_c + 12 f_ba^b (R_*+β t_0) +(12 𝔣_ac^ - f_c^ A_a^ - A_a^ Z_c - f_c^d A_ad - f_c-^+ A_a^-) R^c_ +(13! 𝔥_abc + 12 h_bc A_a^ + 12 f_bc^d A_ad - 2 Z_b A_ac - f_b-^+ A_a^-_c ) R^bc ,after taking into account (<ref>) and the definition of A_a^B (<ref>). Note again that R_* is R^+_+ in d=6. The right hand side has to matchΩ_a ∼ X_a + 12 f_ba^c K^b_c + 12 f_ba^b (R_*+β t_0) - 13 h_abc R^bc - 12 h_ab^ R^b_ .One can understand this relation as the inverse of the map (<ref>). At this point, it is also clear why we need the equivalence relation. As the map has a non-trivial kernel, we can only specify the Weitzenböck connection up to elements from this kernel. It is straightforward to check that this choice for Ω_a will indeed result in the embedding tensor in (<ref>). The right hand sides of (<ref>) and (<ref>) only match if we impose𝔣_ab^= -h_ab^ + 2 A_[a^f_b]^ + 2 A_[a^Z_b] + 2 A_[a|d|f_b]^d + 2 A_[a^-f_b]-^+ , 𝔥_abc= -2 h_abc - 3 A_[a^ h_bc] - 3 A_[a|d f_bc]^d - 12 A_[ab Z_c] - 6 A_[a^-_bf_c]-^+ ,which can also be expressed more compactly as𝔣_2^= f_2^ - X_ c^ v^∧ v^c, 𝔥_3 = h_3 - 12 X_ b c v^∧ v^b∧ v^c .From the definition (<ref>), we see that this only constrains the derivatives of the potentials 𝔞_i^ and 𝔟_ij, required to fully fix the generalized frame. To obtain the potentials, one has to integrate, which requires that certain integrability conditions are satisfied. These are represented by Bianchi identities, which we discuss and check at the end of the next subsection. §.§ Non-abelian twist and Bianchi identitiesUp to now, we have neglected Σ_ABC which was essential in Section <ref> to obtain a non-abelian gauge groupin ten dimensions. 
These are different from the geometric gaugings in Subsection <ref> because they require to violate the heterotic section condition for the frame. However, this is not an inconsistency, since the gauge algebra still closes. One might even avoid this situation completely by working with a twisted generalized Lie derivative, but this comes at the cost of more complicated expressions. Therefore, we rather impose the the right-twist_A^I=_A^J _J^I,with_J^I=[ δ_j^i 0 0; 0 u_^ 0; 0 0 δ^j_i ],to achieve the same objective. The twist matrix _J^I is chosen such thatF̂_ABC= - 3 _[ A^I ∂_I _B^J _C]J= F_ABC + _A^I_B^J_C^KΣ_IJKholds and thereby it relates hatted and unhatted quantities. As can be quickly verified, this requiresΣ_IJ^K = f_^ = -2 u_[^∂_ u_]^ u^_ .Regarding this last equation, we notice two things: for non-vanishing f_, u_^ has to depend on the coordinates x^, which are not allowed by heterotic sections. Moreover, u_^ needs to be an element in the adjoint representation of the gauge group, with the inverse u^_ defined such that u_^ u^_ = δ_^.To complete the discussion in the last subsections, we have to compute all the additional contributions to the embedding tensor that result from the twist (<ref>). Fortunately, only the following components of F_AB^C from (<ref>) are affected:F̂_= f_ ,F̂_a= F_a - 𝔞_a^ f_ ,F̂_ab=F_ab + 𝔞_a^𝔞_b^ f_ ,F̂_abc=F_abc - 𝔞_a^𝔞_b^𝔞_c^f_ . Moreover, the field strengths in (<ref>) have to be modified to𝔣_ij^ ≡ 2 (∂_[i𝔞_j]^+1/2𝔞_[i^𝔞_j]^ f_^),𝔥_ijk ≡ 3 (∂_[i𝔟_jk] - κ_ ∂_[i𝔞_j^ 𝔞_k]^-1/3𝔞_[i^𝔞_j^𝔞_k]^ f_) .For these field strengths, we now finally check the Bianchi identities. * In d≤ 5, we find 𝔣_2^= -12 (f_eb^ F_cd^e - X_,b^ F_cd^ + X_b e^ F_cd^e) v^b∧ v^c∧ v^d+ (f_ec^ F_β̃d^e - X_ c^ F_β̃d^ + X_ce^ F_β̃d^e-12 X_β̃e^ F_cd^e) v^β̃∧ v^c∧ v^d + (12 X_ d^ F_β̃γ̃^-X_[β̃|e^ F_|γ̃]d^e) v^β̃∧ v^γ̃∧ v^d = 13! L_bcd^ v^b∧ v^c∧ v^d + 12 L_cβ̃d^ v^β̃∧ v^c∧ v^d + 12 L_β̃γ̃d^ v^β̃∧ v^γ̃∧ v^d . Note that here we encounter the indices α̃ which appear in the decomposition (<ref>) of the Leibniz algebra's generators, and we have defined the Leibniz identity as L_^≡ X_^ X_^ - X_^ X_^ + X_^ X_^ . Of course this tensor vanishes due to (<ref>). Similarly we find 𝔥_3 + 12 κ_ 𝔣_2^∧𝔣_2^= 18 L_abcd v^a∧ v^b∧ v^c∧ v^d + 112 (L_α̃bcd-3 L_bα̃cd) v^α̃∧ v^b∧ v^c∧ v^d + 14 L_α̃β̃cd v^α̃∧ v^β̃∧ v^c∧ v^d . * In d=6 , by defining Δ_^≡ - 14 ϵ_ ϵ^ (L_ A, B,^ + L_ B, A,^) , we find 𝔣_2^= 13! L_bcd^ v^b∧ v^c∧ v^d + 12 L_cβ̃d^ v^β̃∧ v^c∧ v^d + 12 (L+Δ)_β̃γ̃d^v^β̃∧ v^γ̃∧ v^d , and𝔥_3 + 12 κ_ 𝔣_2^∧𝔣_2^= 18 L_abcd v^a∧ v^b∧ v^c∧ v^d + 112 (L_α̃bcd-3 L_bα̃cd) v^α̃∧ v^b∧ v^c∧ v^d + 14 (L+Δ)_α̃β̃cd v^α̃∧ v^β̃∧ v^c∧ v^d . Thus the Bianchi identities are ensured by the Leibniz identity. §.§ Generalized dualitiesAfter settling all the technical aspects of consistent truncations and the corresponding uplifts of half-maximal gSUGRAs, we are ready to look at the results from the point of view of generalized dualities. Conceptually the situation is not different from the one for the bosonic string, type II strings or M-theory. The key points that are important for those cases, and for our as well, are: * All the relevant information about the internal space is encoded in the generalized frame E_^ we constructed in the last section. The generalized Scherk-Schwarz ansatz presented in Section <ref>, makes this information accessible and allows to obtain the metric and all the relevant gauge potentials.* All of the physics in the resulting D-dimensional gauged supergravity is just governed by the embedding tensor. 
The explicit realization in terms of a generalized frame, as long as it exists, is irrelevant.* While the gauge group G is completely fixed by the embedding tensor, the subgroup H required to construct the coset G/H for the frame is not. Of course it can not be chosen completely arbitrarily and it has to be compatible with the embedding tensor. More specifically, after the transformation with an appropriate element of the respective duality group, the embedding has to be in the form of (<ref>). Combining these points, it is obvious that when there are different admissible choices for the subgroup H there are also different possible uplifts to the ten-dimensional low-energy effective action of the heterotic/type I string. Each of them might have very different configurations for the metric and the gauge potentials, but still all of them result exactly in the same physics. This is the definition of a duality, or when there are more than two choices for H, a plurality.More formally, we define p-plural geometric algebras spanned by the generators T_^(i), i=1,…,p as those that can be related by transformations ^(ij)_^ inacting asT^(i)_^ = ^(ij)_^ T^(j)_ .From this data, one constructs all the plural generalized frames and, therewith, the corresponding metrics and gauge potentials.At this point, the trombone gauging ϑ_ requires additional attention. In D=5, it is given byX_^ + X_^ = - 3 ϑ_ ,while in D=4, we haveX_^ = - 2 (2d+n) ϑ_ .Under the generalized dualities, the structure constants X_^ transform covariantly, and with them the trombone gauging ϑ_ too. Therefore, if we require the absence of a trombone gauging, this can not be reintroduced by a generalized duality transformation. However, the situation is different in D≥ 6 , where X_^ =X_AB^C contains only F_AB^C and ξ_A. In this case, we cannot express ϑ_ as a trace of X_^ and, consequently, a trombone gauging can arise after a generalized duality in this case. As an example, take the algebra whose only only non-vanishing structure constants are f_12^2=1 and f_13^3=1 and comes with ϑ_A=0. In D≥ 6, by performing a generalized duality which exchanges T_a with T^a, the algebra is mapped to one with f_2^12=1 and f_3^13=1, which has non-vanishing ϑ^a=β f_b^ba, and the dual background does not uplift to the standard ten-dimensional supergravity. In D=4 or D=5, if we apply the same duality map (T_a↔ T^a), the resulting algebra goes beyond geometric algebras. This means that, under this duality, the generalized frames get the dependence on the dual coordinates (without breaking the section condition), and if we stick to the standard section, this duality should be prohibited. In the context of Poisson-Lie T-duality, the issue of the upliftability in the presence of f_b^ba has been discussed in <cit.>, and the appearance of a trombone gauging or the need to change the section can be naturally understood in the framework of generalized supergravity <cit.>.§ EXAMPLESThe space of admissible embedding tensors is very large and exploring it systematically is an extremely hard problem inherited by the geometric gaugings introduced in the last section. Most challenging is to solve the Leibniz identity. By brute force, we identified two, more or less random, solutions with are geometric in d=6 with ξ_A≠ 0. 
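The quadratic (Leibniz) constraint that such a search has to solve, X_BC^E X_AE^D = X_AB^E X_EC^D + X_AC^E X_BE^D, can be tested numerically for any candidate X_AB^C. A minimal checker of this kind is sketched below; purely for illustration it is applied to su(2) structure constants, for which the Leibniz identity reduces to the Jacobi identity, and to a random tensor. The explicit data presented next can be tested in the same way once assembled into X_AB^C.

```python
# Measure the violation of the quadratic (Leibniz) constraint
#   X_BC^E X_AE^D = X_AB^E X_EC^D + X_AC^E X_BE^D
# for an arbitrary candidate X_AB^C. Illustrated on su(2) (Leibniz = Jacobi)
# and on a random tensor, which generically fails.
import numpy as np

def leibniz_violation(X):
    lhs = np.einsum('bce,aed->abcd', X, X)
    rhs = np.einsum('abe,ecd->abcd', X, X) + np.einsum('ace,bed->abcd', X, X)
    return np.abs(lhs - rhs).max()

eps = np.zeros((3, 3, 3))                       # su(2): X_AB^C = eps_ABC
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[b, a, c] = 1.0, -1.0
print(leibniz_violation(eps))                   # 0.0

rng = np.random.default_rng(3)
print(leibniz_violation(rng.standard_normal((6, 6, 6))) > 0)   # True: generic X fails
```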
The first one permits to construct two backgrounds which are related by a generalized T-duality, while the second one is based on a more complicated gauging and emphasizes that our approach is applicable to any gauging that can be brought into the form (<ref>). §.§ A generalized dualityLet us start choosing the gaugings f_ab^c, f_a^bc, h_abc, Z_a, f_a, f_a^b_I, f_a^IJ, and h_ab^I as6 f_12^3= 1 , f_45^6= 1 , f_1^23 = 1 , h_123 = 1, h_456 = -12 , Z_1= 1 , f_1= 2 , f_1^6_2 = -1 , f_1^12 = 1 , h_16^1 = 1 , h_16^2 = 12 , h_45^1 = -1 ,such that the Leibniz identity holds (<ref>). Here the trombone gauging ϑ_± A vanishes and ξ_± A has the only non-zero component ξ_+1=1 . We also have F_-ABC=0, and F_+ABC can be expressed by using the structure constants given in the first line of (<ref>).Following the general procedure outlined in Subsection <ref>, we first have to fix the coset representative M. A parametrization different from (<ref>), namelyM = x T_1y T_2z T_3u T_4v T_5w T_6 ,turns out the be convenient. Of course, (<ref>) would also work, but it would result in more complicated expressions related by a diffeomorphism to the ones given here. Note that M_^ is regular everywhere and has unit determinant. Next, we compute the one-form fields v^ asv_i^+A = (v_i^a, v_i^I, v_ia) , v_i^-A = 0 ,where2 v_i^a= [ 1 0 y 0 0 0; 0 1 0 0 0 0; 0 0 1 0 0 0; 0 0 0 1 0 v; 0 0 0 0 1 0; 0 0 0 0 0 1 ], v_i^I=[ w w/2 0 ⋯; 0 0 0 ⋯; 0 0 0 ⋯;-v 0 0 ⋯; 0 0 0 ⋯; 0 0 0 ⋯ ], v_i^a= [ -5 w^2/8-y^2 -zy000;z00000;000000;v w000w/2 -v/2;000 -w/200;000000 ].From the matrix v_i^a, we obtain the matrix V_^ of (<ref>), which has unit determinant too. Moreover, combining the one-form fields v^ and the structure constants X_^ gives rise to the field strengths𝔣_2^1= - x∧ w, 𝔣_2^2= -12u∧ v , 𝔣_2^I= 0, I≥3 ,𝔥_3= -2x∧ y∧ z - wx∧ y∧ v +u∧ v∧ w ,required to fix the matrix N_^ in (<ref>) up to gauge transformations. Since the Bianchi identities are satisfied, it is possible to integrate them, resulting in the associated potentials𝔞_1^1= uv - xw, 𝔞_1^2= - x2w, 𝔟_2= u wx∧ v -2 xy∧ z + u (1-x2)v∧ w .Combining these results, we obtain the matrix E_^ which is regular and has unit determinant. In this way, one systematically constructs the generalized frame fields for any geometric algebra.A generalized T-duality arises in this example by an (6,6+n) transformation2 T'_± a= T_± a ,a=1,3,4 , T'_± a= T_±^a ,a=2,5 , T'_± 6= T_± 6- 12 T_±^6 - T_±^1 , T'_±^1= T_±^1 + T_±^6 , T'_±^2= T_±^2 - T_±^1 + T_± 6 - 12 T_±^6 , T'_±^I= T_±^I ,I≥3 , T'_±^a= T_±^a ,a=1,3,4 , T'_±^a = T_± a ,a=2,5 , T'_±^6 = 54 T_±^6 - 12 T_± 6 + 12 T_±^1 - T_±^2 .As we demanded in Subsection <ref>, it results in a new geometric algebra with the structure constants4 f'_12^2= 2 , f'_12^3 = 1 , f'_13^2 = -1 , f'_15^5 = 2 , f'_1^23 = 1 , f'_4^56= 1 , Z'_1 = 1 , f'_1 = 2 .Now the generators T'_a form a Lie subalgebra, and the computation becomes easier. Again, we have to first parametrize the coset representativeM = x T'_1y T'_2z T'_3u T'_4v T'_5w T'_6 ,and then obtainv_i^+A = (v_i^a, v_i^I, v_ia)= (v_i^a, 0, 0) , v_i^-A = 0 ,wherev_i^a= [ 1 2 y-z y 0 2 v 0; 0 1 0 0 0 0; 0 0 1 0 0 0; 0 0 0 1 0 0; 0 0 0 0 1 0; 0 0 0 0 0 1 ].Since here 𝔣_2^I = 0 and 𝔥_3=0 , we choose 𝔞_1^I=0 and 𝔟_2=0 , resulting in the simple generalized frame E_^=M_^ V̂_^ , which still has unit determinant.§.§ Proof of conceptTo show that our approach works in full generality, we switch now to a more difficult gauging. 
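Before working through this second gauging, let us note that the explicit data quoted above lends itself to symbolic verification. For the dual frame of the previous paragraph, where v_i^I and v_ia vanish, the modified Maurer-Cartan equation reduces to dv^a = -½ f'_bc^a v^b ∧ v^c; the following sketch, with the coordinate ordering (x,y,z,u,v,w) and the rows of the quoted matrix read as the coordinate index i, confirms this component by component.

```python
# Symbolic check of the Maurer-Cartan data of the dual frame above: with
# f'_12^2 = 2, f'_12^3 = 1, f'_13^2 = -1, f'_15^5 = 2 the one-forms
# v^a = v_i^a dx^i satisfy  d_i v_j^a - d_j v_i^a + f'_bc^a v_i^b v_j^c = 0 .
import sympy as sp

x, y, z, u, v, w = X = sp.symbols('x y z u v w')
# rows: coordinate index i, columns: flat index a
V = sp.Matrix([[1, 2*y - z, y, 0, 2*v, 0],
               [0, 1,       0, 0, 0,   0],
               [0, 0,       1, 0, 0,   0],
               [0, 0,       0, 1, 0,   0],
               [0, 0,       0, 0, 1,   0],
               [0, 0,       0, 0, 0,   1]])

f = {}                                    # f'_bc^a, antisymmetric in (b, c); 0-based labels
for b, c, a, val in [(0, 1, 1, 2), (0, 1, 2, 1), (0, 2, 1, -1), (0, 4, 4, 2)]:
    f[(b, c, a)], f[(c, b, a)] = val, -val

def maurer_cartan_holds():
    for a in range(6):
        for i in range(6):
            for j in range(6):
                expr = sp.diff(V[j, a], X[i]) - sp.diff(V[i, a], X[j])
                expr += sum(f.get((b, c, a), 0) * V[i, b] * V[j, c]
                            for b in range(6) for c in range(6))
                if sp.simplify(expr) != 0:
                    return False
    return True

print(maurer_cartan_holds())    # True: the (reduced) Maurer-Cartan equation holds
```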
Specifically, by taking f_ab^c, f_a^bc, h_abc, Z_a, f_a, f_a^b_I, f_a^IJ, and h_ab^I as6 f_23^3 = 1 , f_24^4= 1 , f_25^5= -1 , f_1^34= -1 , f_1^56= 2 , f_2^34= 1 , f_2^56= 1 , h_123 = 1 , h_134= -1 , h_156= 12 , h_234= 1 , h_256= 14 , Z_1= 1 , f_1= 2, f_1^3_1 = 1 , f_1^4_2= 1 , f_2^3_1= -1 , f_2^4_2= -1 , f_1^12 = 1 , f_2^12= -1 , h_13^1 = 1 , h_14^2= 1 , h_23^1= -1 , h_24^2= -1 ,and then parametrizing the coset representative M byM = x T_1y T_2z T_3u T_4v T_5w T_6 ,we find that M_^ is regular everywhere and has unit determinant. Next, we compute the one-form fields v^ asv_i^+A = (v_i^a, v_i^I, v_ia) , v_i^-A = 0 ,wherev_i^a= [ 1 0 sin (√(2) y)/√(2)-sin ycos y -1 0 0; 0 1 z u-v 0; 0 0 1 0 0 0; 0 0 0 1 0 0; 0 0 0 0 1 0; 0 0 0 0 0 1 ], v_i^I=[ sin y -sin(√(2) y)/√(2)+1-cos (√(2) y)/2 +z u+1-2 cos y +cos (√(2) y)/2 0 ⋯;-z-u 0 ⋯; 0 0 0 ⋯; 0 0 0 ⋯; 0 0 0 ⋯; 0 0 0 ⋯ ], v_i^a= [c_1c_2c_3c_4 -w/2v/2;(u-z)^2-v w+2 z/2 -2 u^2+v w+2 z^2/4 -uz -w/4v/4; -uu0000;000000;w/2w/40000;000000 ], c_1≡(u-z) [2 sin y -√(2)sin (√(2) y)] +2 (u+z-1) cos y +(1-u+z) cos (√(2) y)-u (u+1)-z (z+3)+1/2 , c_2≡√(2) (u-z) sin(√(2) y)-2 u sin y +(u+z)^2-2 z cos y/2 , c_3≡1-cos (√(2) y)/2 + u + siny , c_4≡1 +cos (√(2) y)/2 - z - cos y.Like in the last subsection, we also need the field strengths𝔣_2^1= [cos (√(2) y)-cos y-sin(√(2) y)√(2)]x∧ y + ( y- x)∧ z, 𝔣_2^2= [sin (√(2) y)√(2)-sin y ] x∧ y + ( y- x)∧ u ,𝔣_2^I= 0 , I≥3 ,𝔥_3= [u-sin(√(2) y)√(2)-2 cos y]x∧ y∧ z +x∧ y∧{[z - 2 sin y +√(2)sin (√(2) y)]u + w2v + v2w} + 2 ( x- y)∧ z∧ u - ( x+12y)∧ v∧ w,and the associated potentials𝔞_1^1 = [sin y -sin (√(2) y)√(2)-cos (√(2) y)2]x +(y-x)z, 𝔞_1^2 = 12[cos (√(2) y)-2 cos y ]x +(y-x)u, 𝔟_2 =b_12x∧ y + 2(x-y)z∧ u -(x+y2)v∧ w , b_12 ≡14 { u (x-y-4) [2sin y -√(2)sin (√(2) y)]-2 u cos y +u cos (√(2) y)+4 u z+2 v w + [√(2) x sin (√(2) y)-(8-2 x+2 y) cos y - (1+2 x-2 y) cos (√(2) y) +2 sin y -√(2) y sin (√(2) y)-3 √(2)sin (√(2) y)] z} ,allowing us to obtain the matrix N_^ . Combining these results, we obtain again the generalized frame E_^, which is regular and has unit determinant.§ CONCLUSIONSThe main goal of this paper was to fill a gap in the literature and to define generalised T-dualities for heterotic/type I string theories. In order to do this, we leveraged half-maximal gSUGRAs and their possible uplifts to ten dimensions. The starting point for our discussion is heterotic DFT with the duality group (10,10+n). After a generalized Scherk-Schwarz reduction, this gives rise to half-maximal gSUGRAs in various dimensions; in our discussion we considered the case D≥ 4. Although conceptually not very different from other reductions of this type, we are not aware of previous results of this kind starting from heterotic DFT. Therefore, we developed the required ansätze in three steps: * Subsection <ref> introduced heterotic DFT in the frame formalism and related its action to the one of ten-dimensional half-maximal SUGRA;* Next, Subsection <ref> set the ground for dimensional reductions by introducing a splitting of the ten-dimensional target space into a d-dimensional internal part and a D-dimensional external one in the spirit of the Hohm-Samtleben prescription;* Based on this decomposition, Subsection <ref> introduced the appropriate Scherk-Schwarz ansatz and discussed the resulting gSUGRAs. Note that, up to this point, our discussion held for arbitrary non-abelian gauge groupsin ten dimensions, characterized by structure constants Σ_. 
To keep the discussion lighter, we temporarily restricted ourselves to abelian groups with Σ_=0, while investigating the features of half-maximal gSUGRAs in D≥ 4. This was possible because the details of the analysis only depended on Σ_ in a very simple way, such that it could be later reintroduced by a twist of the generalized frame in Subsection <ref>. To identify upliftable gSUGRAs, we analyzed their embedding tensors and expressed them in terms of a generalized Lie derivative for the respective duality groupsin Subsection <ref>. The details for this depend on the dimension d of the internal space. For example, d=6 is more cumbersome to work out as the factorpicked up an additional (2) instead of the usual ℝ^+. Moreover, d=5 is more complicated than d≤ 4, due to the presence of an additional coordinate on the extended space originating from NS5-branes. Subsequently, we turned to the relation between the generalized fluxes, that arose in the generalized Scherk-Schwarz reduction ansatz of Section <ref>, and the embedding tensor. A frame incan always be used as an alternative way to encode the latter and we showed how this works in Subsection <ref>, comparing our results with the known results for the embedding tensor in the literature. Finally Subsection <ref> presented upliftability conditions which restrict certain components of the embedding tensor and that are required to make contact with the previously obtained Scherk-Schwarz ansatz.Section <ref> dealt with the explicit construction of generalized frames that satisfy those conditions. First, we found that the section condition, which is required for the closure of the generalized Lie derivative, imposes additional constraints. They give rise to what we call heterotic/type I geometric algebras in Subsection <ref>. Imposing a well-established ansatz, generalized frames were constructed explicitly for all geometric gauge algebras. In particular, we found (<ref>), relating the field strength of the potentials in the ansatz with the gaugings. As we mentioned before, most of the discussion was preliminary made for Σ_=0; Subsection <ref>, finally employed a further right-twisting of the generalized frames to reintroduce this torsion term, extending our results to arbitrary non-abelian groupsin ten dimensions. Central consistency checks in our construction were the Bianchi identities for the field strengths given in (<ref>). They were verified at the end of Subsection  <ref>. With all these tools in place, Subsection <ref> showed that generalized dualities follow exactly the same rules as for the bosonic string or in M-theory. Finally, several examples demonstrated the utility and generality of the presented results in Section <ref>. §.§ AcknowledgementsWe would like to thank David Osten for helpful discussions and comments on the draft. The work by YS is supported by JSPS KAKENHI Grant Number JP23K03391. FH and LS are supported by the SONATA BIS grant 2021/42/E/ST2/00304 from the National Science Centre (NCN), Poland. LS acknowledges financial support from the doctoral school of the University of Wrocław.§ INDEXOLOGYIn the main text, we use many different indices. We did our best to keep things as simple as possible, but at some point alphabets run out and one has to resort to further decorations which in turn get more and more exotic. To assist the reader, we provide here a summary of the most important indices we are using and the groups they describe. 
When two indices appears in the second column they refers, respectively, to curved and flat ones; also, to make the table lighter, when we did not use explicitly, in the main text, the splitting of curved or flat indices, we introduced a – in their place. Group IndicesSplitting(10,10+n) , [ _μ _I ^μ ] , [ _ _ ^ ] (d,d+n) I , A [ _i_ ^i ] , [ _a_ ^a ] ,(10) –, [ ^â _â ] (d) [ ^i _i ] , [ ^a _a ] (D) [ ^μ _μ ] ,[ ^ _ ], –, _A = [ _a_ ^a ]d≤ 4[ _A _* ] = [ _a_ ^a _* ]d=5_ A = [ _ a _ _^a ]d=6 (2) α=+,-ℝ^+ *It is also helpful to keep in mind the generators and their respective decompositions for all the relevant groups. Group Generators Decomposition T_ {T_a , T_ὰ , T_ά}= { T_a , T_α̃} (adj) {t_α̇} , {t_α} –, {t̃_α̃,t_α̂}G {T_a , T_ὰ}H {T_ὰ}(d,d+n) (adj) {t^α̂} {R^a_1a_2 , R^a_, K_a_1^a_2,R^_1_2 , R_a^,R_a_1a_2} {t̃_α̃} R^* d≤ 5 R__1^_2d=6 , § DUALITY ALGEBRA O(D,N)For completeness, here we write all the details of the Lie algebras (d,𝔫). Its generators are decomposed according to (<ref>) and satisfy the following commutation relations:2 [K^a_b, K^c_d]= δ^c_b K^a_d-δ^a_d K^c_b , [K^a_b, R_] =0 , [K^a_b, R^c_]= δ^c_b R^a_ , [K^a_b, R_c^]= -δ_c^a R_b^ , [K^a_b, R^cd]= 2 δ^cd_be R^ae , [K^a_b, R_cd]= -2 δ_cd^ae R_be , [R_, R_]=-2 (δ_[ R_]-δ_[ R_]) , [R_, R^cd]= 0 , [R_, R_cd]= 0, [R_, R^c_]= - 2 κ_[ κ_]^ R^c_ , [R_, R_c^] = - 2 κ^_[ κ_] R_c^ , [R^ab, R^cd]= 0 , [R^ab, R_cd]= -4 δ^[a_[c K^b]_d] , [R^ab, R^c_]= 0, [R^ab, R_c^]= -2 κ^ δ^[a_c R^b]_ , [R_ab, R_cd]= 0 , [R_ab, R_c^]= 0, [R_ab, R^c_]= -2 κ_ δ_[a^c R_b]^ , [R^a_, R^b_]= - κ_ R^ab , [R^a_, R_b^]= -κ_^ K^a_b - δ^a_b κ^ R_ , [R_a^, R_b^]= - κ^ R_ab .Their fundamental representation can be expressed in term of the matrices(R_)_A^B≡[ 0 0 0; 0 κ_ δ_^ - κ_ δ_^ 0; 0 0 0 ] , (K^c_d)_A^B ≡[ δ_a^c δ_d^b 0 0; 0 0 0; 0 0 - δ_d^a δ_b^c; ] , (R_c_1c_2)_A^B≡[ 0 0 0; 0 0 0; 2 δ_c_1c_2^ab 0 0 ] , (R^c_1c_2)_A^B≡[ 0 0 2 δ^c_1c_2_ab; 0 0 0; 0 0 0; ] , (R_c^)_A^B ≡[ 0 0 0; - δ_c^b δ^_ 0 0; 0κ^ δ^a_c 0; ] , (R^c_)_A^B ≡[ 0 δ^c_a δ_^ 0; 0 0 -κ_ δ_b^c; 0 0 0; ] .As expected, one can check that they leave the following metric invariant:η_AB = [ 0 0 δ_a^b; 0κ_ 0; δ_b^a 0 0 ]. § ON THE LEFT INVARIANCE OF THE MEASUREThe second integral in (<ref>) is well-defined only if v defines a good measure. To check when this is the case, we introduce a matrix representation of the generators of the gauge group G as(T_à)_^ = -X_à^ .Following (<ref>), they decompose into the generators T_ὰ of H and the remaining T_a that span the coset G/H with dimension d. The matrix M_^, which is used in (<ref>) to construct the generalized frame fields, is here be expressed as suggested in (<ref>). Moreover, we decompose the left hand side of (<ref>) according to M^-1M = v^aT_a + ω^ὰT_ὰ . Under left multiplication by a constant element of g ∈ G, the coset representative M_^ transforms asM_^(x) →[g M(x)]_^ .Alternatively, this can be reexpressed as a transformation of the coordinates x^i → x'^i accompanied by a compensating H-transformation from the right given by the element h(x', x)∈ H. 
Under this action, we find that v^a(x) =v_i^a(x)x^i transforms asv^a(x') = (Ad_h)_b^a(x', x) v^b(x),whereh T_a h^-1≡ (Ad_h)_a^b T_b + (Ad_h)_a^β̌ T_β̌ .Without loss of generality one can take x^i=0 and relabel x' as x to findv^a(x) = (Ad_h)_b^a(x, 0) v^b(0) .Form the transformation of the frame, one sees that the volume element changes according toμ(x) ≡1d!ϵ_a_1⋯ a_d v^a_1(x)∧⋯∧ v^a_d(x) = (Ad_h(x, 0)) μ(0) .Next, we want to show that the determinant of the adjoint action is 1 and therefore the measure is left-invariant. One way to do so is to parametrize Ad_h(x,0) in terms of functions γ^α̃ as(Ad_h)_b^a(x, 0)=exp(γ^α̌(x) T_α̌)_b^a .From the form of the geometric algebra we already discussed, we conclude(Ad_h(x, 0)) =exp(- γ^α̌(x) X_α̌b^b) ,and furthermore we see thatX_α̌b^b = β^-1 ϑ_α̌ .If the trombone gauging ϑ_A is absent, then, the measure becomes left invariant, namelyμ(x) = ^d x v(x) = μ(0) withv≡(v_i^a) . § GEOMETRIC ALGEBRAWe present, here, the full structure of the studied geometric algebras. In d≤ 5 , this can be expressed as follows (note that T_* is absent in d≤ 4)T_a∘ T_b = f_ab^c T_c + f_ab^ T_ + f_abc T^c , T_a∘ T_ = -f_a^c_ T_c + f_a^ T_ + Z_a T_ - f_ac T^c, T_a∘ T^b= f_a^bc T_c + f_a^b T_ - f_ac^b T^c + 2 Z_a T^b , T_a ∘ T_*= (Z_a - f_a) T_* , T_∘ T_b = f_b^c_ T_c - f_b_^ T_ -Z_b T_ + f_bc T^c , T_∘ T_ = f_c T^c + δ_ Z_c T^c , T_∘ T^b= -f_c^b_ T^c , T_∘ T_*= - f_c^c_ T_* , T^a∘ T_b = -f_b^ac T_c - f_b^a T_ + (f_bc^a+2 δ^a_b Z_c-2 δ^a_c Z_b) T^c, T^a∘ T_ = f_c^a_ T^c, T^a∘ T^b= f_c^ab T^c, T^a ∘ T_*= - f_c^ca T_* , T_*∘ T_b=0 , T_*∘ T_ =0 , T_*∘ T^b=0 , T_*∘ T_*=0 .In d=6 , instead, we findT_+a∘ T_+b = f_ab^c T_+c + f_ab^ T_ + f_abc T_-^c , T_+a∘ T_-b = f_a-^+ T_+b + (f_ab^c-f_a δ_b^c ) T_-c + f_ab^ T_- + f_abc T_-^c , T_+a∘ T_+ = -f_a^c_ T_+c + f_a^ T_+ + Z_a T_+ - f_ac T^c , T_+a∘ T_- = -f_a^c_ T_-c + f_a-^+ T_+ + f_a^ T_- +(Z_a-f_a) T_- - f_ac T_-^c , T_+a∘ T_+^b= f_a^bc T_+c + f_a^b T_+ - f_ac^b T_+^c + 2 Z_a T_+^b , T_+a∘ T_-^b= f_a^bc T_-c + f_a^b T_- + f_a-^+ T_+^b - f_ac^b T_-^c + (2 Z_a-f_a) T_-^b , T_-a∘ T_+b = -f_b-^+ T_+a + f_a T_-b , T_-a∘ T_-b = -f_b-^+ T_-a + f_a-^+ T_-b , T_-a∘ T_+ = f_a T_- , T_-a∘ T_- = f_a-^+ T_- , T_-a∘ T_+^b= δ_a^b f_c-^+ T_+^c + f_a T_-^b, T_-a∘ T_-^b= f_a-^+ T_-^b + δ_a^b f_c-^+ T_-^c, T_+∘ T_+b = f_b^c_ T_+c - f_b_I^ T_+ -Z_b T_+ + f_bc T_+^c , T_+∘ T_-b = f_b^c_ T_-c -f_c^c_ T_-b - f_b_^ T_- -Z_b T_- + f_bc T_-^c , T_+∘ T_+ = f_c T_+^c + δ_ Z_c T_+^c , T_+∘ T_- = -f_c^c_ T_- + f_c T_-^c + δ_ Z_c T_-^c , T_+∘ T_+^b= -f_c^b_ T_+^c , T_+∘ T_-^b= -f_c^b_ T_-^c - f_c^c_ T_-^b , T_-∘ T_+b = f_c^c_ T_-b -f_b-^+ T_+ , T_-∘ T_-b = -f_b-^+ T_- , T_-∘ T_+ = f_c^c_ T_- + δ_ f_c-^+ T_+^c , T_-∘ T_- = δ_ f_c-^+ T_-^c, T_-∘ T_+^b= f_c^c_ T_-^b , T_-∘ T_-^b= 0 , T_+^a∘ T_+b = -f_b^ac T_+c - f_b^a T_+ + (f_bc^a+2 δ^a_b Z_c-2 δ^a_c Z_b) T_+^c, T_+^a∘ T_-b = -f_b^ac T_-c - f_c^ca T_-b - f_b^a T_- + (f_bc^a+2 δ^a_b Z_c-2 δ^a_c Z_b) T_-^c, T_+^a∘ T_+ = f_c^a_ T_+^c, T_+^a∘ T_- = -f_c^ca T_- + f_c^a_ T_-^c , T_+^a∘ T_+^b= f_c^ab T_+^c, T_+^a∘ T_-^b= f_c^ab T_-^c - f_c^ca T_-^b, T_-^a∘ T_+b = f_c^ca T_-b + (δ^a_b f_c-^+ - δ_c^a f_b-^+) T_+^c, T_-^a∘ T_-b = (δ^a_b f_c-^+ - δ_c^a f_b-^+) T_-^c, T_-^a∘ T_+ = f_c^ca T_- , T_-^a∘ T_- = 0, T_-^a∘ T_+^b= f_c^ca T_-^b, T_-^a∘ T_-^b= 0. JHEP
http://arxiv.org/abs/2312.16283v1
{ "authors": [ "Falk Hassler", "Yuho Sakatani", "Luca Scala" ], "categories": [ "hep-th" ], "primary_category": "hep-th", "published": "20231226190000", "title": "Generalized Dualities for Heterotic and Type I Strings" }
Osaka University [email protected] University [email protected] University [email protected] University [email protected] University [email protected] Dependencies (ODs) have many applications, such as query optimization, data integration, and data cleaning. Although many works addressed the problem of discovering OD (and its variants), they do not consider datasets with missing values, a standard observation in real-world datasets. This paper introduces the novel notion of Embedded ODs (eODs) to deal with missing values. The intuition of eODs is to confirm ODs only on tuples with no missing values on a given embedding (a set of attributes). In this paper, we address the problem of validating a given eOD. If the eOD holds, we return true. Otherwise, we search for an updated embedding such that the updated eOD holds. If such embedding does not exist, we return false. A trivial requirement is to consider an embedding such that the number of ignored tuples is minimized. We show that it is NP-complete to compute such embedding. We therefore propose an efficient heuristic algorithm for validating embedded ODs. We conduct experiments on real-world datasets, and the results confirm the efficiency of our algorithm.Fast Algorithm for Embedded Order Dependency Validation (Extended Version) Takahiro Hara January 14, 2024 ========================================================================== § INTRODUCTION §.§ MotivationIntegrity constraints are commonly used in a number of applications in data profiling <cit.>, such as data integration and cleaning <cit.>, query optimization <cit.>, and schema design <cit.>. Among integrity constraints in relational databases, the most important constraints are functional dependencies (FDs) <cit.> and order dependencies (ODs) <cit.>. Informally, FDs describe that the values of some attributes functionally determine the value of others. An FD X→ Y says that, for each tuple in a database, the values of the attributes in X determine the values of the attributes in Y. ODs describe that the order of tuples w.r.t. given attributes determines the order of tuples w.r.t. other attributes. An OD X ↦ Y says that, if all values in X are increasing (or decreasing), then all values in Y must also be increasing (or decreasing). Because FDs are a special case of ODs <cit.>, this paper focuses on ODs.Many works addressed the problem of discovering ODs <cit.>, because ODs help query optimization, violation detection, and data repairing, to name a few. It is well-known that real-world datasets contain errors <cit.>.To deal with erroneous data, there are some variants of ODs, such as approximate ODs <cit.>. Surprisingly, although missing values are standard observations in real-world data, no existing works have considered ODs on data with missing values. We therefore consider how to define ODs when datasets contain missing values, and use the concept Embedded FDs <cit.>. This concept defines FDs by extracting only complete data on some embedded attributes (i.e., tuples that have no missing values on the attributes).This idea helps find useful ODs that are valid semantically in relations, whereas they may be invalid in the other OD definitions. The found dependencies can be used to define integrity requirements and completeness requirements.We consider, in Table <ref>, OD Rank ↦ Salary; Salary increases as Rank increases. 
This is a semantically valid OD, but cannot be found by the existing OD definition because of the missing value (denoted by ⊥) of t_2.By considering a “sub-table”, where only no missing values are permitted in Salary, i.e., t_2 is removed from the relation, Rank ↦ Salary holds. This example demonstrates the effectiveness of embedded ODs (eODs), and they can find meaningful ODs that cannot be found from the original OD definition. Therefore, this paper considers an algorithm that, given a pair of attribute lists, returns its OD validity and an embedding which makes the pair valid under the embedding (if such an embedding exists). The formal problem definition appears in Section <ref>.§.§ ChallengeActually, checking whether a given OD candidate (a pair of attributes) is valid is not a difficult task, because a state-of-the-art algorithm OD <cit.> achieves this. However, efficiently computing an embedding on which the pair of attributes is valid is not trivial. A straightforward approach is to enumerate all possible embeddings (i.e., attribute lists) and check the validity under the embeddings by using the state-of-the-art algorithm, see Section <ref>. Trivially, this approach incurs a factorial cost w.r.t. the number of attributes, so this does not scale well to large relational tables.As noted above, ODs are used in, for example, query optimization. If embedding computation incurs a substantial computational cost, we cannot support query optimization, as embedding computation can be the main bottleneck. Therefore, an algorithm that efficiently solves our problem is required.§.§ ContributionThis paper proposes an efficient algorithm that overcomes the above challenge. To compute a valid embedding, we can focus only on violation tuples and their attributes with missing values. Our task then becomes an evaluation of whether these tuples can be removed by adding these attributes to an embedding. This approach can reduce the search space and avoid the factorial cost.To summarize, this paper makes the following contributions. We* formulate the novel concept of embedded order dependencies,* present an algorithm that efficiently checks whether or not an OD holds in some embedding, and* evaluate the performance of our proposed algorithm on real-world datasets. Comparison with our conference version. The above contents appear in our conference version <cit.>. When considering embedding (i.e., a sub-table that contains no missing values), applications would be happy if the sub-table size is maximized, i.e., the number of ignored tuples is minimized. Then, a natural requirement is to consider an embedding that yields the sub-table. However, as we show in this paper, it is NP-complete to find the sub-table. The proof of this hardness is a new contribution. Providing this fact further justifies the design of our heuristic algorithm. § PRELIMINARIES§.§ Problem DefinitionWe use 𝐑 to denote a relational schema, and 𝐫 is a specific relational instance (table). Also, we use A (B) to denote an attribute of 𝐫, whereas 𝐗,𝐘 are lists of attributes. Let s,t represent tuples ∈𝐫, and s_A is the value of s on A. For ease of understanding, we first define order dependencies. An order dependency over a schema 𝐑 is a statement of the form 𝐗↦_≤𝐘, where 𝐗,𝐘∈𝐑. An order dependency 𝐗↦_≤𝐘 is valid iff for any two tuples s,t ∈𝐫, s_X ≤ t_X ⇒ s_Y ≤ t_Y[Actually, extending to s_X ≥ t_X ⇒ s_Y ≤ t_Y, s_X ≤ t_X ⇒ s_Y ≥ t_Y, and s_X ≥ t_X ⇒ s_Y ≥ t_Y is also possible. For ease of presentation, we use the increasing order.]. 
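To make the definition concrete (and anticipating the notion of embedding introduced next), the following minimal Python sketch, with invented table values and given only as an illustration, checks a single-attribute OD of the form A ↦_≤ B over the tuples that are complete on the attributes of interest, as in the Rank/Salary example from the introduction:

from itertools import combinations

def od_holds(rows, lhs, rhs):
    # X |->_<= Y for single attributes: s[lhs] <= t[lhs] implies s[rhs] <= t[rhs] for every pair
    for s, t in combinations(rows, 2):
        for x, y in ((s, t), (t, s)):
            if x[lhs] <= y[lhs] and not x[rhs] <= y[rhs]:
                return False
    return True

# Toy version of the Rank/Salary table (values are invented; None marks a missing value).
rows = [{"Rank": 1, "Salary": 3000},
        {"Rank": 2, "Salary": None},   # counterpart of t_2
        {"Rank": 3, "Salary": 4000},
        {"Rank": 4, "Salary": 5000}]

# Order comparisons against the missing value are undefined, so the plain OD cannot even be
# evaluated on the full table; restricting to tuples complete on {Rank, Salary} works.
complete = [r for r in rows if None not in (r["Rank"], r["Salary"])]
print(od_holds(complete, "Rank", "Salary"))   # True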
Next, let 𝐄 be a subset of attributes of 𝐑. We call 𝐄 embedding, and it defines 𝐫^𝐄∈𝐫, which is a set of tuples with no missing values on 𝐄. Then, we define a new concept, embedded order dependencies. An embedded order dependency (eOD) over a relational schema 𝐑 is a statement of the form 𝐄: 𝐗↦_≤𝐘, where 𝐗,𝐘⊆𝐄⊆𝐑. An eOD𝐄: 𝐗↦_≤𝐘 is valid iff for any two tuples s,t ∈𝐫^𝐄, s_X ≤ t_X ⇒ s_Y ≤ t_Y. This paper considers the following problem:Input Given a statement 𝐄: 𝐗↦_≤𝐘, check whether it is valid.OutputIf true, return valid. Otherwise, one of the following options is returned: (1) valid with 𝐄' iff there is an embedding 𝐄' ⊃𝐄 such that 𝐄': 𝐗↦_≤𝐘 holds, and (2) not valid if there does not exist such 𝐄'. If AB: A ↦_≤ B holds, this problem returns “valid.” If it does not, this problem searches for an embedding 𝐄' ⊃ AB such that 𝐄': A ↦_≤ B holds. Then, if ABC: A ↦_≤ B holds, this problem returns “valid with ABC.” On the other hand, if there is no𝐄' ⊃ AB such that 𝐄': A ↦_≤ B holds, this problem returns “not valid.” Before solving the above problem, we introduce two important concepts split and swap <cit.>. They are used as remarks that an OD under “≤” can be invalid. In addition, in <cit.>, the concept of merge is introduced, and it makes an OD under “<” invalid. We formally define them below. Given tuples s,t ∈𝐫 and attributes A,B ∈𝐑, there is a split if s_A=t_A but s_B ≠ t_B.Given tuples s,t ∈𝐫 and attributes A,B ∈𝐑, there is a merge if s_A ≠ t_A but s_B = t_B.Given tuples s,t ∈𝐫 and attributes A,B ∈𝐑, there is a swap if s_A < t_A but s_B > t_B. From the definition of split and merge, a split on (A,B) implies a merge on (B,A) and vice-versa. Splits invalidate ODs under “≤”, and merges invalidate ODs under “<”. Swaps invalidate ODs under both “≤” and “<”. Based on this observation, we introduce the validity of eODs under “≤” and“<”. Specifically, we have the following three lemmas according to <cit.>. 𝐄: A ↦_≤ B is valid, iff there is neither a split nor a swap on attributes A,B in 𝐫^𝐄.𝐄: A ↦_< B is valid, iff there is neither a merge nor a swap on attributes A,B in 𝐫^𝐄.𝐄: A ↦_< B is invalid, if there is a merge or a swap on A,B in 𝐫^𝐄.Although the above statements have the embedding 𝐄, the validity focuses on 𝐫^𝐄 and attributes A and B. Therefore, we can directly use these lemmas. Then, as shown in <cit.>, we have:𝐄: A ↦_< B is valid ⇔𝐄: B ↦_≤ A is valid. This theorem means that we only need to check for eODs under “<” for validity. This is because if the OD under “<” is valid, it means that the OD under “≤” is also valid (except for the case where the left-hand side, LHS, and right-hand side, RHS, are swapped).§.§ Naïve AlgorithmWe here consider a naïve algorithm that solves our problem. Recall Theorem <ref>, and validating 𝐄: B ↦_< A is sufficient to validate 𝐄: A ↦_≤ B. We do this because it is easier to find errors for an eOD under “<” <cit.>, and we can focus on finding where swaps and merges occur. The naïve algorithm has the following steps.* It finds swaps and merges for B ↦_< A through FindErrors[This is based on the validation algorithm from ORDER <cit.>. We extend the algorithm so that we can obtain 𝐒 (a set of swaps) and 𝐌 (a set of merges). We still use sorted partitions <cit.> to efficiently find swaps and merges. 
While the original implementation terminates whenever it finds a swap or merge, our implementation adds the tuples with this swap (merge) to 𝐒 (𝐌) and continues to scan the corresponding sorted partitions.].* It next checks whether all of these errors disappear under the given embedding through CheckForErrorDeletion[For each swap s ∈𝐒 or merge m ∈𝐌, CheckForErrorDeletion checks whether there is a missing value ⊥ in an attribute in 𝐄 on the tuple that causes the error. If the tuple has a missing value on the attribute, then 𝐫^𝐄 does not have the tuple. Thus, CheckForErrorDeletion removes the error (tuple).]. If so, it returns valid.* Otherwise, for every possible embedding 𝐄' ∈𝐑 such that 𝐄⊂𝐄', it repeats steps 1 and 2. If there exists 𝐄' such that B ↦ A holds, it returns 𝐄'.* If there does not exist such an 𝐄', it returns not valid. This algorithm exactly solves our problem. However, it requires a factorial number of checks in the worst case if the OD is invalid under the given 𝐄. § HARDNESS OF COMPUTING EMBEDDING FOR MINIMIZING IGNORED TUPLES Although our problem does not have a constraint on 𝐄', it is natural to consider that the number of ignored tuples on 𝐄' (i.e., |𝐫^𝐄 - 𝐫^𝐄'|) should be minimized. Unfortunately, it is hard to compute such 𝐄' efficiently. Given 𝐄: 𝐗↦_≤𝐘, assume that there exits 𝐄' such that 𝐄': 𝐗↦_≤𝐘 holds. Then, it is NP-complete to find 𝐄' such that the number of ignored tuples on 𝐄' is minimized among all embeddings that provide 𝐄”: 𝐗↦_≤𝐘. Proof. To prove this theorem, we show that there exists an instance of this problem which can be reduced to the weighted minimum set cover problem, which is NP-complete. The input of the weighted set cover problem is a collection of subsets Q_i⊆ P each of which has a positive weight w_i, and P = ⋃ Q_i. The output of this problem is the collection of Q_j such that P = ⋃ Q_j and the sum of the weights is minimized.Now let w_i be the number of ignored tuples when A_i is added into 𝐄, i.e., w_i = |𝐫^𝐄 - 𝐫^𝐄'|. Similarly, let Q_j be a set of tuples with missing values on A_i. At a first look, this setting is the same as the weighted set cover problem, but it is different. It is important to notice that w_i is generally variable and depends on the current 𝐄'[𝐄' is initialized by 𝐄. Whenever 𝐄'𝐄' + A_i, the tuples with missing value on A_i is ignored. Notice that such tuples may have missing values on A_j.]. We hence consider the following instance:* s^i_X = i and s^i_Y = ⌊i + 1/2⌋, where s^i represents the i-th tuple in a given table.* Each tuple has at most one missing value.Then, each pair of (s^2j-1,s^2j) causes a merge, where j is an integer, and we do not have other swaps and merges. Let O be a set of integers not larger than |𝐫|, where |𝐫| is the number of tuples in 𝐫. Furthermore, let Q_i be a set of non-negative odd integers j such that s^j has a merge and is ignored if A_i is added into 𝐄. In this condition, w_i is not variable anymore. Therefore, we see that there exists an instance that can fall into the weighted minimum set cover problem.§ PROPOSED ALGORITHM To efficiently solve the problem in this paper, we propose ValidEOD. (As this algorithm outputs an arbitrary 𝐄' (if necessary), it is regarded as a heuristic solution for our problem under the constraint considered in Section <ref>.)Main idea. This algorithm improves the efficiency of the naïve algorithm by leveraging the fact that, even if a given eOD 𝐄: A ↦_≤ B does not hold, we need to compute an 𝐄' ⊃𝐄 from only pairs of tuples violating the eOD. 
This needs a cheaper cost, because what we have to do is to check whether these tuples have missing values or not. (If not, these tuples cannot be removed on any 𝐄' ⊃𝐄.)We introduce an example with a swap that illustrates this main idea by using Table <ref>.Suppose that we want to validate AB: A ↦_≤ B. From Theorem <ref>, it is sufficient to validate AB: B ↦_< A. In Table <ref>, B↦_< A has a swap because of tuples t_2 and t_3, and this still exists on 𝐫^AB. Therefore, AB: A ↦_≤ B is invalid. We then want to compute a possible 𝐄' ⊃𝐄 such that the OD holds. We need to add attributes with missing values into this embedding. We add D (the attribute in which t_2 has a missing value) to the embedding AB, and validate ABD: B ↦_< A. As 𝐫^ABD does not contain t_2, the swap disappears. We then see that ABD: B ↦_< A is valid (and consequently ABD: A ↦_≤ B is also valid) without enumerating a number of embeddings. Overview. ValidEOD has two phases. It first validates a given eOD. If it does not hold, then ValidEOD computes a new embedding (if it exists) for which the OD would hold.First phase: Validating eOD. ValidEOD validates the given eOD in the following way. First, it obtains a set 𝐍 of attributes with missing values. It then finds all swaps and merges for OD B ↦_< A, which is implemented by FindErrors. This runs in linear time with respect to the number of rows in the relational table and returns sets 𝐒 and 𝐌 of swaps and merges, respectively.Then, ValidEOD checks whether all errors disappear under the given embedding 𝐄 via CheckForErrorDeletion. That is, it evaluates whether the found swaps and merges still exist on 𝐫^𝐄.It is trivial that CheckForErrorDeletion needs a cost proportional to |𝐒| and |𝐌|. If no errors remain, ValidEOD returns valid. Otherwise, it proceeds to the next phase.Second phase: New embedding computation. Next, ValidEOD searches for an embedding 𝐄' ⊃𝐄 such that 𝐄': A ↦_≤ B holds, through UpdateEmbedding, which is described in Algorithm <ref>. As mentioned before, we need to care about whether pairs of tuples that have swaps or merges can be removed by adding attributes to a new embedding. If such tuples have no missing values, they absolutely violate the given OD. On the other hand, if the tuples have missing values, we may be able to remove the violations by updating the embedding. Based on this idea, we incrementally update 𝐄 so that swaps and merges are removed. UpdateEmbedding specifically updates 𝐄 as follows:* For each swap s ∈𝐒, UpdateEmbedding checks whether adding an attribute from 𝐍 to the embedding removes s. If true, UpdateEmbedding adds this attribute to a new embedding 𝐄' (initialized by 𝐄) and removes s from 𝐒. Otherwise, UpdateEmbedding tests the next attributes. If these tests still cannot remove s, it is guaranteed that 𝐒≠∅. In this case, UpdateEmbedding returns not valid.* If UpdateEmbedding does not return not valid in the above step, it runs the same operations for each merge m ∈𝐌.* If all swaps and merges are removed in the above steps, UpdateEmbedding returns valid with 𝐄'. Space complexity of ValidEOD is trivially O(|𝐒| + |𝐌| + |𝐍|).Time complexity. The first phase needs O(n + |𝐒| + |𝐌|) time, where n is the number of rows in 𝐫^𝐄. The second phase needs O(|𝐍|(|𝐒|+|𝐌|)) time. Thus, the time complexity of ValidEOD is O(n + (|𝐍|(|𝐒| + |𝐌|))). That is, the time of ValidEOD is linear to the number of attributes, removing the factorial cost held by the naïve algorithm. § EXPERIMENT This section reports our experimental results. 
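Before turning to the measurements, here is a compressed Python sketch of the two phases described in the previous section. It is a toy re-implementation given only for illustration (the evaluated implementation is in C++, and its FindErrors uses sorted partitions rather than the quadratic scan below); the helper names and table values are ours.

from itertools import combinations

def find_errors(rows, A, B):
    # Naive quadratic stand-in for FindErrors: swaps and merges of B |->_< A
    # (validating B |->_< A suffices for A |->_<= B, as noted in the preliminaries).
    swaps, merges = [], []
    for s, t in combinations(rows, 2):
        lo, hi = (s, t) if s[B] <= t[B] else (t, s)
        if lo[B] < hi[B] and lo[A] > hi[A]:
            swaps.append((lo, hi))        # swap on (B, A)
        if lo[B] != hi[B] and lo[A] == hi[A]:
            merges.append((lo, hi))       # merge on (B, A)
    return swaps, merges

def update_embedding(rows, E, errors, candidates):
    # Second phase: extend E so that every violating pair loses one tuple in r^E'.
    E2 = set(E)
    for s, t in errors:
        if any(s[a] is None or t[a] is None for a in E2):
            continue                      # pair already excluded by attributes added so far
        extra = next((a for a in candidates if s[a] is None or t[a] is None), None)
        if extra is None:
            return None                   # this violation survives under every extension
        E2.add(extra)
    return E2

def valid_eod(rows, E, A, B, attrs):
    sub = [r for r in rows if all(r[a] is not None for a in E)]   # r^E
    swaps, merges = find_errors(sub, A, B)                        # first phase
    if not swaps and not merges:
        return "valid", set(E)
    N = [a for a in attrs if a not in E and any(r[a] is None for r in rows)]
    E2 = update_embedding(rows, E, swaps + merges, N)             # second phase
    return ("valid with", E2) if E2 is not None else ("not valid", None)

# Toy table in the spirit of the swap example above (values invented): the second and third
# tuples form a swap on (B, A), and the second tuple has a missing value on D.
rows = [{"A": 1, "B": 1, "C": 1, "D": 1},
        {"A": 3, "B": 2, "C": 1, "D": None},
        {"A": 2, "B": 3, "C": 1, "D": 1}]
print(valid_eod(rows, {"A", "B"}, "A", "B", ["A", "B", "C", "D"]))   # ('valid with', {'A', 'B', 'D'})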
All experiments were conducted on a Ubuntu 20.04 LTS machine with 2.4GHz Intel Core i9-12900 and 64GB RAM.Datasets. We used two real-world datasets, Adult (<https://archive.ics.uci.edu/>) and NCVoter (<https://www.ncsbe.gov/>). The existing works <cit.> also used these datasets (but they removed tuples with missing values). Adult consists of 32,000 tuples with 15 attributes and has 4,262 missing values, whereas NCVoter consists of 256,000 tuples with 19 attributes and has 796,496 missing values.Evaluated algorithms. We compared our algorithm with the naïve algorithm. Since this is the first work that deals with eODs, there are no other algorithms that can deal with eODs (i.e., we do not have existing competitors). These algorithms were single-threaded, implemented in C++, and compiled by g++ 9.4.0 with -O3 flag.Parameters. Given 𝐗↦_≤𝐘, 𝐗 (resp. 𝐘) is called the left-hand (resp. the right-hand) side or LHS (resp. RHS). To measure the performance of each algorithm, we varied the sizes of LHS and RHS. The default sizes of LHS and RHS were one. When we varied the size of LHS (resp. RHS), we fixed that of RHS (resp. LHS). For each test, we randomly generated LHS and RHS from the attribute set and repeated this 10 times.Result. Figure <ref> shows how the naïve and our algorithms scale with respect to the LHS and RHS sizes. The first observation is that the proposed algorithm is several orders of magnitude faster than the naïve algorithm. Note that we omit the result of the naïve algorithm on NCVoter because it did not terminate on them within a few hours even for a single experiment (i.e., 10 iterations). The datasets have more than 10 columns, and the factorial cost is huge. Therefore, this result is reasonable.We also observe that our algorithm is not affected by the LHS and RHS sizes. Actually, this is expected from our theoretical analysis. The time of our algorithm is dependent on the distributions of 𝐒 and 𝐌, not on the LHS and RHS sizes, see our time complexity analysis.As stated above, our algorithm runs in time proportional to |𝐒| + |𝐌|. Figure <ref> shows the average 𝐒 and 𝐌 on each dataset. By comparing Figure <ref> with Figure <ref>, it is clear that the running time of our algorithm follows the tendencies in Figure <ref>. This result empirically validates our theoretical analysis in Section <ref>. § RELATED WORK §.§ Exact ODsLanger and Naumann presented ORDER <cit.>, which finds all minimal ODs that hold in a given table. ORDER itself is intentionally incomplete (as discussed in <cit.>). Although it uses many pruning techniques that dramatically decrease the running time, it has the factorial worst-case time complexity with respect to the number of attributes. Later, Schlichta et al. presented FASTOD <cit.>, an OD discovery algorithm with the exponential worst case time complexity with respect to the number of attributes and linear complexity with respect to the number of tuples. They achieved this by mapping ODs to a set-based canonical representation. Exact ODs are useful for data with no errors (missing values). However, there is a possibility that many potentially useful ODs exist but can never be found by this approach (whereas they may be found by ours or other approximate ODs). Notice that ODs are a special case of eODs when the embedding is equal to all attributes in a dataset.§.§ Approximate ODsThis topic (AODs) was defined in <cit.>. 
Since ODs may not hold on datasets with errors, <cit.> considers the minimum number of tuples that must be removed from a given table for the OD to hold. However, this problem is computationally expensive with respect to the numbers of tuples and attributes.Also, in <cit.>, Jin et al. formalized the AOD discovery problem and developed efficient algorithms for AOD discovery with an error measure optimization.Although AODs find ODs that hold on dirty data, they do not consider data with missing values. Hence, AODs cannot provide the completeness requirement that helps data cleaning and data schema design (whereas eODs can do this).§.§ Embedded Functional FependenciesWei and Link introduced the concept of Embedded Functional Dependencies (eFDs) <cit.>. These are used to establish a robust schema design framework independent of the interpretation of missing values. In <cit.>, they present row-efficient, column-efficient, and hybrid approaches for discovering eFDs.Although eFDs provide valuable data completeness and data integrity requirements, they do not consider integrity requirements with respect to order. Since ODs subsume FDs, eODs also subsume eFDs. § CONCLUSIONThis paper proposed a new concept of embedded order dependencies to deal with order dependencies with missing values and to satisfy the integrity and completeness requirements. A naïve algorithm incurs a factorial cost with regard to the number of attributes, so it does not scale well to large relational databases. Motivated by this, we presented an efficient algorithm that checks whether an OD holds on a given embedding and returns (if possible) an embedding on which it holds. We conducted experiments on real-world datasets, and the results demonstrate the efficiency of our eOD validation algorithm.This research is partially supported by AIP Acceleration Research JPMJCR23U2 and JST CREST JPMJCR21F2.ACM-Reference-Format
http://arxiv.org/abs/2312.16033v2
{ "authors": [ "Alejandro Ramos", "Takuya Uemura", "Daichi Amagata", "Ryo Shirai", "Takahiro Hara" ], "categories": [ "cs.DB" ], "primary_category": "cs.DB", "published": "20231226124925", "title": "Fast Algorithm for Embedded Order Dependency Validation (Extended Version)" }
http://arxiv.org/abs/2312.16266v1
{ "authors": [ "Amar Aryan" ], "categories": [ "astro-ph.HE" ], "primary_category": "astro-ph.HE", "published": "20231226100451", "title": "Unveiling diverse nature of core collapse supernovae" }
Ensemble Learning to Assess Dynamics of Affective Experience Ratings and Physiological ChangeAll authors contributed equally to this work.Felix Dollack1, Kiyoshi Kiyokawa1, Huakun Liu1, Monica Perusquia-Hernandez1, Chirag Raman2, Hideaki Uchiyama1, Xin Wei11Nara Institute of Science and Technology2Delft University of Technology{felix.d, kiyo, liu.huakun.li0, m.perusquia, hideaki.uchiyama, wei.xin.wy0}@is.naist.jp, [email protected] 14, 2024 =========================================================================================================================================================================================================================================================================================================================Given a three-valued definition of validity, which choice of three-valued truth tables for the connectives can ensure that the resulting logic coincides exactly with classical logic? We give an answer to this question for the five monotonic consequence relations st, ss, tt, ss∩ tt, and ts, when the connectives are negation, conjunction, and disjunction. For ts and ss∩ tt the answer is trivial (no scheme works), and for ss and tt it is straightforward (they are the collapsible schemes, in which the middle value acts like one of the classical values). For st, the schemes in question are the Boolean normal schemes that are either monotonic or collapsible.§ CHARACTERIZING CLASSICAL LOGICOur goal in this paper is to provide a characterization of different ways in which classical logic can be presented in a three-valued setting. More precisely, our goal is to inventory which three-valued truth tables for negation, conjunction and disjunction can be paired with three-valued definitions of validity so as to yield exactly the same inferences that are obtained in more standard presentations of classical logic. While this project is mostly theoretical, it also has philosophical and conceptual motivations, about which we shall say more after stating the central results.Toward our main goal, we first need to say more about the more standard ways in which classical logic has been characterized. Given a denumerable set of propositional variables P={p, q, r, p', q', r',..}, a propositional logic is a triple ⟨ℒ, C, ⊢⟩ such that C is a finite set of n-ary connectives, ℒ is the set of formulae generated from P by application of the connectives in C, and ⊢ is a relation between sets of formulae in ℒ. In this paper, we will mainly focus on the set C={, ∨, ∧}, comprised of negation, disjunction and conjunction, forming a standard set of connectives in presentations of classical logic. Given a propositional logic, what characterizes this logic as classical? One prominent answer to this question relies on two-valued semantics. On that view, a propositional logic is classical if the connectives are interpretable by specific two-valued truth functions, and ⊢ is interpretable by a specific relation between sets of truth values (where the values in question can be represented by 1 and 0, standing for True and False). Thus, for negation, conjunction, and disjunction to be classical, they must be interpretable by functions coextensional with f_(x)=1-x, f_∨(x,y)=max(x,y), f_∧(x,y)=min(x,y) on the set 𝒱={1,0}. Moreover, ⊢ is classical provided Γ⊢Δ if and only if for every valuation function v which is a homomorphism from (ℒ,(, ∨, ∧)) to (𝒱,(f_, f_∨, f_∧)), {v(A) : A∈Γ}⊆{1} implies {v(B) : B∈Δ}∩{1}≠∅. 
That is, for every valuation, the truth of the premises in Γ implies the truth of some conclusion in Δ, which we can write Γ_2 Δ. But what justifies the choice of these tables, and of this definition of validity? One possibility to answer this question is to look at syntax, namely proof-theory. Of particular interest to us is Gentzen's perspective on the connectives and on the consequence relation. The leading idea behind Gentzen's approach in his seminal work <cit.> is that what makes a logic classical is the fact that the connectives and the consequence relation obey specific rules. The way Gentzen describes this is by specifying on the one hand structural rules governing the consequence relation ⊢, and on the other operational rules governing the connectives. Arguably, this perspective is more explanatory than the semantic perspective, because it tells us how inferences are shaped to begin with.As structural rules, Gentzen proposed various properties such as reflexivity, monotonicity, contraction, and the Cut rule. For operational rules Gentzen proposed analytic rules, telling us how an inference involving a connective in premise position or in conclusion position depends on other inferences not involving that connective but only involving subformulae. For negation, conjunction, and disjunction, the rules of his calculus 𝐋𝐊 are as follows:Does it matter whether one starts from a proof-theoretic or from a semantic characterization of classical logic? One may say that it does not, considering that the semantic and the syntactic perspective can be made to coincide. Gentzen's sequent calculus 𝐋𝐊, which characterizes ⊢ syntactically, is sound and complete for the semantic interpretation _2 of the consequence relation between sets of formulae. However, Gentzen <cit.> has shown that Cut is eliminable from 𝐋𝐊. In other words, the set of provable sequents in 𝐋𝐊 minus Cut is the same as the set of provable sequents in 𝐋𝐊, i.e., exactly those that are classically valid.So, in the same way in which there is a different syntactic characterization of classical logic than the one based on 𝐋𝐊, one can also ask if there can be different semantic characterizations of classical logic beside the two-valued approach. As it turns out, some authors have provided various three-valued semantics for classical logic. For instance, Girard in <cit.> offers a non compositional semantics based on three-valued valuations (the so-called Schütte valuations), while Cobreros et al. <cit.> do the same using the Strong Kleene valuations, and more recently, Szmuc and Ferguson <cit.> and Ferguson <cit.> show that the Weak Kleene valuations also work. All of these characterizations are given by the so-called st-consequence relation, defined by the fact that when all premises in Γ take the value 1 in the set {1, 12, 0}, some conclusion in Δ takes a value other than 0—see <cit.> and <cit.> for related discussions of this notion of consequence.In <cit.>, it is shown that beside st, other substructural consequence relations admit connectives satisfying Gentzen's operational rules, and are representable by means of three-valued operators. A case of interest is the non-reflexive relation ts, defined by the fact that when all premises in Γ take a value other than 0, some conclusion takes the value 1—see <cit.>, <cit.> and <cit.>. Moreover, <cit.> shows that when the language contains constants for the truth values, ts admits as Gentzen-regular connectives the same Strong Kleene negation, conjunction and disjunction as st. 
Similar results, both positive and negative, are obtained for alternative definitions of logical consequence, in particular for ss (preservation of the value 1 from premises to conclusion), for tt (preservation of non-falsity) and for their intersection ss∩ tt.[The consequence relation ss ∩ tt over the Strong Kleene valuations renders the well-known logic RM_fde, i.e., the first-degree entailment fragment of the relevant logic R-mingle, as well documented and discussed in <cit.>.] The results in <cit.>, however, did not purport to give a trivalent characterization of classical logic as defined above, namely in terms of both structural and operational rules. Instead, they focus only on operational rules, and for the most part they assume that the language can express all truth values, including the third value 12. Given these results, we are led to the following more general question: what are all the three-valued schemes that can be used to characterize exactly those inferences valid in the two-valued presentation of classical logic, i.e., _2? To answer this question, we determine, for the five definitions of semantic validity mentioned above (st, ss, tt, ss∩ tt, and ts), which three-valued truth tables can be assigned to negation, conjunction, and disjunction so as to yield all and only the inferences of the two-valued presentation of classical logic. For ts, the answer is trivial: no scheme will work, since p⊢ p fails in ts—see, e.g., <cit.> and <cit.>. For the remaining four definitions of validity, the answer is less obvious, in particular in the case of st. For st, it is a contested matter whether it supports classical meta-inferences such as the Cut rule—see, e.g., <cit.> and <cit.>. However, here we are interested primarily in whether st can support the same classical inferences as two-valued semantics. Our work proceeds as follows: in Section <ref> we start by a review of three-valued definitions of validity, with an indication of the valuation schemes playing a central role in our results. Section <ref> presents our main results, whose proof we defer to the Appendix to ease reading. Section <ref> concludes with comparisons and a discussion of the philosophical value of those results. § DEFINITIONS The question we are investigating in this paper can be put as follows: given a three-valued definition of logical consequence , what set of truth tables(or scheme) for the connectives can be such as to ensure that the resulting consequence relation ^_ coincides with classical consequence. In this sectionwe first introduce the five notions of validity of interest in a trivalent setting, where the truth values are going to be 1, 12, and 0. We then define the relevant properties of connectives and their truth tables, and give an overview of the way in which these properties constrain classicality for different consequence relations.§.§ Logical Consequence This section introduces five entailment relations corresponding to distinct ways of thinking of validity in a three-valued setting. They include the so-called pure, mixed and intersective definitions of logical consequence, as defined in <cit.>. 
Let a valuation be a function v from formulae to the set { 1, 12, 0 }.
(ss-validity) Γ ⊨^ss Δ if and only if for every valuation v, if v(A) = 1 for every A ∈ Γ, then v(B) = 1 for some B ∈ Δ.
(tt-validity) Γ ⊨^tt Δ if and only if for every valuation v, if v(A) ∈ {1, 12} for every A ∈ Γ, then v(B) ∈ {1, 12} for some B ∈ Δ.
(st-validity) Γ ⊨^st Δ if and only if for every valuation v, if v(A) = 1 for every A ∈ Γ, then v(B) ∈ {1, 12} for some B ∈ Δ.
(ts-validity) Γ ⊨^ts Δ if and only if for every valuation v, if v(A) ∈ {1, 12} for every A ∈ Γ, then v(B) = 1 for some B ∈ Δ.
(ss ∩ tt-validity) Γ ⊨^ss∩tt Δ if and only if Γ ⊨^ss Δ and Γ ⊨^tt Δ; or equivalently, if and only if for every valuation v, inf{v(A) | A ∈ Γ} ≤_ℚ sup{v(B) | B ∈ Δ}, where ≤_ℚ is the usual order over the rational numbers.
Equivalently, the first four notions can be restated in disjunctive form:
(ss-validity) Γ ⊨^ss Δ if and only if for every valuation v, either v(A) ∈ {0, 12} for some A ∈ Γ, or v(B) ∈ {1} for some B ∈ Δ.
(tt-validity) Γ ⊨^tt Δ if and only if for every valuation v, either v(A) ∈ {0} for some A ∈ Γ, or v(B) ∈ {1, 12} for some B ∈ Δ.
(st-validity) Γ ⊨^st Δ if and only if for every valuation v, either v(A) ∈ {0, 12} for some A ∈ Γ, or v(B) ∈ {1, 12} for some B ∈ Δ.
(ts-validity) Γ ⊨^ts Δ if and only if for every valuation v, either v(A) ∈ {0} for some A ∈ Γ, or v(B) ∈ {1} for some B ∈ Δ.
Basically, the pure notions of validity are the ones definable in terms of the preservation of a fixed set of designated values between premises and conclusions; they include ss (preservation of value 1) and tt (preservation of values that are not 0). The mixed notions of validity st and ts define logical consequence not in terms of preservation but in terms of specific constraints between values that can differ for premises and conclusions (not going from truth to falsity for st, or from non-falsity to non-truth for ts). Finally, the intersective notion of validity ss∩ tt has also been called order-theoretic in <cit.>, because it is equivalent to requiring that, relative to the total ordering of truth-values 0<12<1, the largest value of the conclusions should not be smaller than the smallest value of the premises. Although more entailment relations are conceivable, in <cit.> these five were identified as the so-called intersective mixed consequence relations.[These consequence relations are called intersective mixed consequence relations in <cit.> because they are all the consequence relations definable as intersections between mixed consequence relations (which include pure consequence relations as defined in <cit.>). From the lattice displayed in Figure <ref> notice that ss ∩ tt is the only intersective consequence relation which is not a pure or a mixed consequence relation. Given the inclusion between the logics, the other consequence relations are all the possible intersections between mixed and pure consequence relations. See <cit.> for more details.] They form a natural class by corresponding to the three-valued monotonic consequence relations (namely such that if Γ⊢Δ, then Γ,Γ' ⊢Δ,Δ'). These consequence relations are related as depicted in Figure <ref>, in which a lower relation is an extensional subset of a higher relation.
§.§ Schemes for the connectives: Boolean normal, Monotonic, Collapsible
We define a three-valued valuation scheme as a triple (f_¬, f_∧, f_∨) of operations, namely of three-valued truth tables for the connectives.
The properties of a scheme are defined in terms of the properties of its operations. Here we single out three main properties of interest: Boolean normality, monotonicity, and collapsibility.We first define Boolean normal operations, that is operations that behave on Boolean values like their corresponding (“normal”) counterpart in classical logic. This property is also referred to in the literature as normality (<cit.>), or as regularity (<cit.>). For more on the origin of this terminology, going back to <cit.> and <cit.>, see <cit.> and references therein.[The authors in <cit.> have introduced a related property which they called hyper-classicality which they defined as follows: “a three-valued matrix is hyper-classical if the restriction of its associated function to the classical domain (values 1 and 0) will have its image in the classical codomain (values 1 and 0)”. According to this definition, all Boolean normal schemes are hyper-classical.] An n-ary operation ⋆ is Boolean normal if and only if for {a_1,...,a_n}⊆{0,1}, ⋆(a_1,...,a_n)=⋆^(a_1,...,a_n), where ⋆^ is the corresponding operation over the usual two-element Boolean algebra.A scheme is Boolean normal iff each of its operations is.Next, we assume that truth values are ordered with regard to ≤_ I in terms of their so-called informational value, as described in <cit.>, that is: 12 <_ I 0 and 12 <_ I 1, as depicted in Figure <ref>. Given such an ordering relation <_ I, we can define the componentwise ordering based on this order as follows: ⟨ a_1,...,a_n ⟩≤_ I^comp⟨ b_1,...,b_n ⟩ if and only if a_j ≤_ I b_j for all 1 ≤_ I j ≤ _ I k.An n-ary operation ⋆ is (upward) monotonic if and only if whenever ⟨ a_1,...,a_n ⟩≤_ I^comp⟨ b_1,...,b_n ⟩ then ⋆(a_1,...,a_n) ≤_ I⋆(b_1,...,b_n).A scheme is monotonic if and only if each of its operations is. Next, we assume that truth values are ordered with regard to ≤_ I in terms of their so-called informational value, as described in <cit.>, that is: 12 <_ I 0 and 12 <_ I 1, as depicted in Figure <ref>. Given such an ordering relation <_ I, we recall that a lexicographical ordering based on this order is defined as follows: ⟨ a_1,...,a_n ⟩ <_ I^lex⟨ b_1,...,b_n ⟩ if and only if a_1 <_ I b_1 or there is some k with 1 ≤_ I k ≤_ I n such that for all j <_ I k, a_j =_ I b_j and a_k <_ I b_k.An n-ary operation ⋆ is (upward) monotonic if and only if whenever ⟨ a_1,...,a_n ⟩≤_ I^lex⟨ b_1,...,b_n ⟩ then ⋆(a_1,...,a_n) ≤_ I⋆(b_1,...,b_n).A scheme is monotonic if and only if each of its operations is. In 2D truth table format, monotonic operations are such that no two distinct classical values are found next to one another, horizontally or vertically. This is easy to prove and we refer to Appendix <ref> for a tighter characterization.By combining Boolean normality and monotonicity, we obtain the Boolean normal monotonic operations for negation, conjunction and disjunction presented in the truth tables in Figure <ref>. Here and elsewhere, when a cell contains more than one value, this means that any choice of a value renders an operation with the desired properties, independently of the choice of values in other cells (in this case, Boolean normality and monotonicity).To introduce our last relevant property, we define “α-collapsers”, operations τ_α defined as τ_α(0)=0, τ_α(1)=1, and τ_α(12)=α, for α=0 or α=1. As can be seen, collapsers preserve Boolean values, and collapse the third value onto α. 
An n-ary operation ⋆ is an α-collapsible version of a classical operation ⋆^ iff τ_α(⋆(x_1, …, x_n)) = ⋆^(τ_α(x_1), …, τ_α(x_n)). A scheme is α-collapsible if and only if all of the operations are α-collapsible.In terms of truth tables, the 1-collapsible (henceforth, truth-collapsible) and the 0-collapsible (falsity-collapsible) operations are as reported in Figures <ref> and <ref>, respectively.We can see how these translates the definitions. First, the Boolean corners of the table should yield the same values as the corresponding Boolean operations, up to τ_α. Second, in an area in which one can move by applying τ_α to one or both of the inputs, all output values should be the same, again up to τ_α. Therefore, the truth-collapsible scheme is one in which the values 1 and 12 play the same functional role, whereas in the falsity-collapsible case the values 0 and 12 play the same role. Figures <ref> and <ref> display the Boolean normal collapsible operations. Finally, notice that no collapsible negation is monotonic, because the third value yields a determinate value for collapsible negations, and an indeterminate for the monotonic negation. This implies that no collapsible scheme is monotonic, and conversely. Various examples from the literature can be given to illustrate those schemes. The well-known Strong Kleene scheme and the Bochvar/Weak Kleene scheme are both Boolean normal monotonic schemes. Boolean normal monotonic schemes also include other schemes, such as the scheme characteristic of Lisp logic as discussed in Fitting's <cit.>, first introduced by McCarthy in <cit.>—also to be found in the presupposition projection literature, in particular in Peters' <cit.>.[This scheme can be viewed as a compromise between a Strong Kleene and a Weak Kleene scheme in that it is asymmetric: binary operations are understood as Weak Kleene on their first argument, and Strong Kleene on the second.] Likewise, the collapsible schemes are not just theoretical possibilities: an example of truth-collapsible scheme can be found in Cantwell's <cit.>, under the name “non-bivalent classical valuation”. Cantwell gives tables for negation, conjunction, and disjunction that are Boolean normal truth-collapsible. Additionally, he defines a conditional operator (originally introduced independently by Cooper in <cit.>), which is not Boolean normal (it yields the value 12 when the antecedent has value 0, and takes the value of the consequent otherwise), but which could be shown to be truth-collapsible.[For visually-inclined readers, we include Cantwell's truth-tables below:12∧121212 12 12∨121212 12 12 →12121212 12 12 12 ] §.§ Interaction of logical consequence and schemes: Overview of the resultsWith the definition of logical consequence and of a valuation scheme in hand, we canrestate our main goal more precisely as follows.Given a schemeand a definition of logical consequence , we write _^ the corresponding consequence relation, namely the set of valid arguments based on the schemerelying on thedefinition of validity. Our key question is: for a given definition of validity , what scheme is inferentially classical? The relevant definition of inferential classicality is as follows:Given a schemeand a definition of logical consequence , we say _^ is inferentially classical if and only if for every pair of sets of formulae Γ, Δ, we have Γ_^Δ if and only if Γ_2 Δ.Before stating the main result of this paper, we justify the choice of the properties of the schemes highlighted above. 
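To make the notion of inferential classicality concrete, the following small Python sketch (purely illustrative, and not part of the formal development) brute-forces all valuations of the atoms occurring in a sequent, and compares st-consequence over the Strong Kleene tables with two-valued classical consequence on a few sample sequents:

from itertools import product

NEG = {0: 1, 0.5: 0.5, 1: 0}   # Strong Kleene negation; conjunction is min, disjunction is max

def ev(f, v):
    # formulas: atoms are strings; compound formulas are ('not', A), ('and', A, B), ('or', A, B)
    if isinstance(f, str):
        return v[f]
    if f[0] == 'not':
        return NEG[ev(f[1], v)]
    return (min if f[0] == 'and' else max)(ev(f[1], v), ev(f[2], v))

def atoms(f):
    return {f} if isinstance(f, str) else set().union(*map(atoms, f[1:]))

def valid(prem, concl, values, desig_prem, desig_concl):
    ats = sorted(set().union(*(atoms(f) for f in prem + concl)))
    for vals in product(values, repeat=len(ats)):
        v = dict(zip(ats, vals))
        if all(ev(f, v) in desig_prem for f in prem) and \
           not any(ev(f, v) in desig_concl for f in concl):
            return False
    return True

def st_valid(prem, concl):
    return valid(prem, concl, (0, 0.5, 1), {1}, {0.5, 1})

def classically_valid(prem, concl):
    return valid(prem, concl, (0, 1), {1}, {1})

tests = [(['p', ('not', 'p')], ['q']),           # explosion
         ([], [('or', 'p', ('not', 'p'))]),      # excluded middle
         (['p'], [('and', 'p', 'q')])]           # a classically invalid sequent
for prem, concl in tests:
    print(st_valid(prem, concl) == classically_valid(prem, concl))   # True on all three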
Boolean normality provides an upper bound for classicality: for every consequence relation, it ensures that the arguments it supports involving negation, conjunction, and disjunction are a subset of the classical arguments (see Lemma <ref>), and is furthermore a necessary property for this to hold with many consequence relations (see Lemma <ref>). Monotonicity, for the specific case of st, provides a lower bound: it ensures that the classical inferences are a subset of the ones supported (Lemma <ref>). Collapsibility, finally, provides either a lower bound or an upper bound, depending on which consequence relation is considered (Lemmas <ref>, <ref>, <ref>, <ref>).

§ MAIN CHARACTERIZATION RESULTS

With these ingredients in place we are ready to present the main results of this paper. The results fall in two main classes: each of the consequence relations st, ss, tt supports a positive characterization of classical logic; ss ∩ tt and ts, on the other hand, fail to support classical logic for any scheme. We start with the presentation of those negative results, for which the explanation is straightforward, reading Figure <ref> bottom up.

§.§ Negative results: ts and ss ∩ tt

As is well known, the consequence relation ts is nonreflexive, hence no scheme can combine with it to make it classical. ⊨^ts_𝐗 ≠ ⊨_2, for every three-valued scheme 𝐗. For an atomic proposition p, independently of 𝐗, p ⊨^ts_𝐗 p does not hold, while p ⊨_2 p does. In Section <ref> and in Appendix <ref> we will show that inferential classicality can be obtained inductively from two parts, essentially distinguishing the role of formulae with and without connectives: (i) some structural properties for atomic propositions (namely reflexivity), (ii) some Gentzen regularity for the connectives. Here, with ts we show how the first condition is broken and prevents inferential classicality, independently of the connectives. From a structural point of view, ss ∩ tt is a Tarskian relation, unlike ts: it is reflexive, monotonic, and transitive. Despite that, it fails to support classical logic. As the following result shows, it cannot support both the Law of Excluded Middle and the principle of Explosion in a way that makes negation coherent. ⊨^ss∩tt_𝐗 ≠ ⊨_2, for every three-valued scheme 𝐗. First, consider a formula p and a valuation v in which v(p)=1/2. For the classical inference ⊨_2 p, ¬p to ss ∩ tt-hold, it must be that v(¬p)=1, that is, ¬(1/2)=1. Second, consider atomic formulae p, q and a valuation v in which v(p)=1/2 and v(q)=0. For the classical inference p, ¬p ⊨_2 q to ss ∩ tt-hold, it must be that ¬(1/2)=0. Contradiction. This result is closely related to Theorem 4.3 of <cit.>, showing that ss ∩ tt admits no Gentzen-regular negation. The result holds even when the consequence relation is restricted to single conclusions. To validate Explosion, the negation of 1 and of 1/2 must be 0. To satisfy the entailment from p to ¬¬p, the negation of 0 must be different from 0 when p is valued 1/2. To satisfy the converse entailment from ¬¬p to p, the negation of 0 cannot be 1, so it must be 1/2. But then when p is valued 1, ¬¬p is valued 1/2, so p cannot entail ¬¬p in all cases. A simple takeaway from this result is that when entertaining the ss ∩ tt definition of logical consequence there isn't a three-valued scheme 𝐗 that supports the same valid inferences as the two-valued presentation of classical logic, mainly because there isn't a truth table for negation that supports the same valid inferences in that respect.
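The impossibility just noted involves only finitely many candidate negations and valuations, so it can also be confirmed by brute force. The sketch below (again ours, purely for illustration, with 1/2 encoded as 0.5) enumerates all 27 possible three-valued negation tables and checks that none makes both the Law of Excluded Middle and Explosion ss ∩ tt-valid.

```python
from itertools import product

V = (0, 0.5, 1)
DESIGNATED = {"s": {1}, "t": {0.5, 1}}   # strict vs tolerant standards

def valid(premise_vals, conclusion_vals, prem_std, concl_std):
    """If all premises are designated under prem_std, some conclusion
    must be designated under concl_std."""
    if all(v in DESIGNATED[prem_std] for v in premise_vals):
        return any(v in DESIGNATED[concl_std] for v in conclusion_vals)
    return True

def ss_and_tt_valid(cases):
    """cases: iterable of (premise value tuple, conclusion value tuple)."""
    return all(valid(p, c, "s", "s") and valid(p, c, "t", "t") for p, c in cases)

survivors = []
for n0, nh, n1 in product(V, repeat=3):            # a candidate negation table
    neg = {0: n0, 0.5: nh, 1: n1}
    lem = [((), (p, neg[p])) for p in V]                       # |- p, ~p
    explosion = [((p, neg[p]), (q,)) for p in V for q in V]    # p, ~p |- q
    if ss_and_tt_valid(lem) and ss_and_tt_valid(explosion):
        survivors.append(neg)

print(survivors)   # prints []: no three-valued negation validates both
```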
But the problem isn't restricted to negation: as shown in <cit.>, other connectives also cannot be given an appropriate truth table so as to validate the intended inferences, for example, the material conditional. Interestingly enough, in <cit.> it is shown that some connectives (like conjunction and disjunction) do indeed have compatible truth tables that validate the target inferences—at least in the restricted language where only those connectives are featured. With these reflections, we hope to shed some light on the aforementioned impossibility regarding ss ∩ tt, by making the appropriate qualifications.[One may wonder whether this result shows that "negative" or negation-related connectives cannot be supported by any three-valued truth table when the ss ∩ tt consequence relation is around, but "positive" or non-negation-related connectives can. A discussion of this is far beyond the scope of this paper, but we hope to elaborate on this in further research.] Before turning to the positive results, it is important to mention that the negative results presented in this section can be easily generalized to many-valued semantics with more than three values, since the proofs of these statements appear to be independent of the number of nonclassical values. The natural requirement for this generalization is that the ts-consequence relation defined for this many-valued semantics be nonreflexive and that the ss ∩ tt be such that ss lacks tautologies and tt lacks logical contradictions.

§.§ Positive results: ss, tt, and st

The fact that ss and tt can support classical logic separately follows from the simple fact that the value 1/2 can be made to mirror the role of either 0 or 1 in a given scheme. This is the sense in which collapsibility (whether for falsity or truth) yields classical logic. Let 𝐗 be a three-valued scheme. ⊨^ss_𝐗 = ⊨_2 if and only if 𝐗 is falsity-collapsible (see Fig. <ref>). See Theorems <ref> and <ref> in Appendix. Let 𝐗 be a three-valued scheme. ⊨^tt_𝐗 = ⊨_2 if and only if 𝐗 is truth-collapsible (see Fig. <ref>). See Theorems <ref> and <ref> in Appendix. One direction of those two results—the one stating the sufficient conditions—is not surprising, arguably: ss and tt are pure consequence relations, i.e., they can be formulated as preservation of some set of values, usually called designated values. In this sense, the set of designated values that characterizes ss-validity consists in the singleton {1}, while tt-validity can be characterized as preserving the values on the set {1, 1/2}. If we think of designated values as representing truth and undesignated values as representing falsity, the results above are foreseeable. In ss, the intermediate value doesn't belong to the set of designated values: that is why the falsity-collapsible scheme works. On the other hand, in tt, the intermediate value belongs to the set of designated values, and in this case, the truth-collapsible scheme works. However, the other direction of these results—the one stating the necessary conditions—is more surprising, in that no schemes other than the collapsible ones work in the intended way. The case of st is the least straightforward among the five trivalent consequence relations examined here. For this consequence relation we get a disjunctive characterization involving collapsibility and monotonicity as separate conditions. One way in which this may be understood is by looking at negation first: when negation is monotonic, the value 1/2 cannot be interpreted uniformly as 1 or 0, and likewise for the other connectives.
When negation is collapsible, then 1/2 can be thought of as playing the role of 1 or 0 across connectives. Let 𝐗 be a three-valued scheme. ⊨^st_𝐗 = ⊨_2 if and only if 𝐗 is Boolean normal and either monotonic (see Fig. <ref>) or collapsible (see Fig. <ref> and <ref>). See Theorems <ref> and <ref> in Appendix. It follows from Theorem <ref>, using the ss-consequence relation, that there are 8192 different three-valued presentations of classical logic. Similarly, according to Theorem <ref>, we also obtain 8192 different three-valued presentations of classical logic using the tt-consequence relation. Finally, as a consequence of Theorem <ref>, there are 528 different three-valued presentations of classical logic with the st-consequence relation. In the next section, we will explore how all of these results can be connected with similar investigations about whether ss, tt, st, ts and ss ∩ tt can support the operational rules of Gentzen's proof system for classical logic.

§ COMPARISONS AND PERSPECTIVES

The results of the previous sections tell us, given a three-valued definition of validity, exactly which three-valued truth tables for negation, conjunction, and disjunction warrant classical inferences for the resulting logic. In subsection <ref>, we compare this finding to results established in <cit.>. In subsection <ref> we discuss some philosophical implications of our work regarding the definition of classical logic.

§.§ Gentzen-regular connectives

In <cit.>, a goal partly related to the one discussed here was pursued. Namely, given a three-valued definition of validity, it was asked which three-valued operators are Gentzen-regular relative to it. Basically, a Gentzen-regular connective is a connective whose behavior can be characterized in terms of the bidirectional rules of Gentzen's LK—these rules can therefore be understood as introduction and elimination sequent rules, respectively. For example, the rule whereby Γ, A, B ⊢ Δ iff Γ, A∧B ⊢ Δ corresponds to Gentzen's rule when conjunction occurs in premise position. And the rule whereby Γ ⊢ A∧B, Δ iff Γ ⊢ A, Δ and Γ ⊢ B, Δ corresponds to Gentzen's rule for conjunction in conclusion position—see also Figure <ref>. We give a more precise definition of Gentzen-regularity in Appendix <ref>, since the definition applies to any n-ary connective, beyond negation, conjunction, and disjunction. Clearly, when dealing with the usual two-valued semantics for classical logic, all connectives are Gentzen-regular. However, a consequence relation can fail to be classical at the structural level, but still admit Gentzen-regular connectives. This means that unlike us here, <cit.> did not seek a three-valued characterization of classical logic qua combination of operational and structural rules. Instead, they focused merely on the operational side of Gentzen's proof system for classical logic. Furthermore, <cit.>'s results did not seek to characterize schemes (namely sets of truth tables), but looked at connectives one by one. Consequently, their approach is not limited to negation, conjunction and disjunction, or to a particular set of operators, but it covers arbitrary n-ary truth-functional operators. However, they assume the language to be constant-expressive, which means that the constants 1, 0 and 1/2 are expressible by means of constant symbols. For comparison, let us consider st and ts first.
Under the assumption of constant expressiveness, <cit.> proved that st and ts admit a unique Gentzen-regular negation, a unique Gentzen-regular conjunction, and a unique Gentzen-regular disjunction, described by the Strong Kleene tables. In the case of ts, it therefore admits exactly one Gentzen-regular scheme involving negation, disjunction and conjunction. Above we saw that ts admits no trivalent scheme supporting classical logic. There is no contradiction there, since Gentzen-regularity pertains only to the operational rules of a proof system for classical logic, and not to structural rules. This situation may be interpreted by saying that although ts does not support classical inferences, it can support Gentzen-regular connectives that describe, in a way, classical connectives. For st, the situation is different: as mentioned in the previous section, Figures <ref>, <ref> and <ref> together indicate that st admits 528 distinct schemes involving negation, conjunction and disjunction, supporting classical inferences, including the Weak Kleene scheme and more (512 collapsible schemes, and 16 monotone schemes). However, <cit.>'s result implies that st admits a unique Gentzen-regular scheme, namely the Strong Kleene one. Whence comes the difference? Here the answer concerns the assumption of constant expressiveness. In <cit.>, Chemla and Égré left as an open issue the characterization of Gentzen-regular three-valued operators when the language does not admit constants for all truth-values. The present inventory can be seen as answering this problem for the case in which the constants are not expressible. These comparisons raise the more general question of what may be needed besides the Gentzen-regularity of a connective in order to guarantee that the logic be inferentially classical. The following result gives an answer to this question: A propositional logic L=⟨⊢, C⟩ is (inferentially) classical if and only if its connectives in C are Gentzen-regular and ⊢ is such that for Γ and Δ two sets of atomic propositions, Γ⊢Δ iff Γ∩Δ≠∅. See Appendix <ref>. A consequence relation like ts obviously fails the structural condition expressed in this lemma, and so cannot support classical logic despite admitting Gentzen-regular connectives. On the other hand, st satisfies the condition, just like ss and tt. Finally, while ss ∩ tt too satisfies it, it is shown in <cit.> that it does not admit a Gentzen-regular negation. More generally, we believe the above lemma could be used to answer the question we posed relative to arbitrary connectives besides negation, conjunction and disjunction, drawing on the fact that the notion of Gentzen-regularity can be defined for arbitrary finite operators as discussed in <cit.>. We leave this investigation for future work.

§.§ Philosophical perspectives

The results of this paper show that classical logic can be obtained in a variety of ways in a three-valued setting. This raises the following question: from these various presentations of classical logic, is one of them more fundamental than the others? Besides, aren't all of them just superfluous in comparison to the standard two-valued presentation of classical logic? Let us consider ss and tt first. Relative to those systems, Theorems <ref> and <ref> establish that the collapsible schemes support exactly the classical inferences. But they are also schemes in which the middle value mirrors exactly one of the classical values. Hence, this middle value may be judged entirely redundant.
We can find instances of this observation in the literature. In <cit.>, for instance, Cantwell puts forward a system of trivalent truth tables for negation, conjunction, disjunction, and a conditional operator. This system turns out to encapsulate exactly one of the truth-collapsible schemes of Figure <ref>, and it is called "Non-Classical Bivalent" by Cantwell, precisely because it yields classical logic when paired with tt-validity, as presented in <cit.>. In this regard, the interest of Cantwell's conditional operator—proposed earlier by <cit.>, see <cit.> for a comparison—shows up precisely when his conjunction, disjunction and conditional are paired with Strong Kleene negation so as to yield a noncollapsible, nonclassical system. More generally, the collapsible schemes lend themselves to a reduction technique on truth values presented as a "grouping reduction" in <cit.>, whose goal is precisely to merge truth values that play the same role in premise position and in conclusion position of arguments. As shown there, for ss and tt, grouping reductions basically fulfill Suszko's goal in <cit.>: they suggest that an appeal to three truth values is idle when it comes to representing classical inferences in a compositional way, and that two values are all we need. What about st? It was proved that the minimum number of truth values needed to represent a reflexive, monotonic and transitive consequence relation is exactly two, but that it is three if the relation is reflexive and monotonic but nontransitive (viz. <cit.>, Corollary 4.7). But as shown by <cit.>, st is not a transitive consequence relation. For st, therefore, we cannot argue, as we did for the ss and tt systems, that three values are idle in comparison to using just two values. Besides, as argued by <cit.> and by <cit.>, the use of a third truth value is independently motivated to represent special semantic status, such as vagueness, or absurdity, or paradoxicality. And for systems of inferences involving sentences with this third semantic status, preserving classical logic for inferences is a conservative benefit.[Notice, however, that not all the schemes that render classical logic with the st consequence relation are compatible with naive non-trivial theories of truth, vagueness, paradoxicality and so on. In fact, only the monotonic ones are. To wit, consider a Liar sentence λ and observe that there can't be a stable valuation for it in a collapsible scheme where v(λ) = v(¬Tr ⌜λ⌝), where obviously Tr is a naive truth predicate and ⌜λ⌝ is a quotation name for the Liar sentence.] We can therefore answer the questions raised above as follows: for consequence relations like ss and tt, collapsible schemes constitute roundabout ways of representing classical logic compared to the two-valued definition. In the case of st, the situation is more complex: while the two-valued approach to classical logic sets a benchmark for the definition of classicality all across the board, we may find different foundations for classicality at the inferential level. At this point, however, more work remains to be done to generalize the present results to more connectives, but also to many-valued logics beyond three values.[This last point has been explored recently by Fitting in <cit.>, <cit.>. In those papers, the author shows how to build the counterpart of some nonclassical logics using the st consequence relation defined for algebras with more than three values.]
Furthermore, the present investigations are limited to the propositional case, but one may also be interested in the question of what all the three-valued presentations of classical logic are when such a system is understood as first-order logic. Interestingly enough, the generalizations of the previously discussed results are not always immediate, and the issue is somehow related to the understanding of the universal and existential quantifiers as infinitary versions of conjunction and disjunction, respectively.[For some cases, like the Strong Kleene or the Weak Kleene schemes, it is well known and relatively obvious how to devise appropriate quantifiers. This is also true for some Boolean normal collapsible schemes whose operations have only classical outputs. However, both when looking at the Boolean normal monotonic and the Boolean normal collapsible schemes, there are some (algebraically speaking) asymmetric schemes, where the same pair of inputs gives a certain output in a given order, and another output when considered in the opposite order. For instance, some Boolean normal monotonic schemes are such that 0 ∧ 1/2 = 1/2 although 1/2 ∧ 0 = 0. How is one supposed to generalize this asymmetric behavior in order to conceive, e.g., an appropriately infinitary version of this conjunction? It is not obvious whether having a false instance is enough for the quantified statement to be false, or if it is also required that no instance receives the value 1/2. These, and other similar issues, replicate in the case of the other quantifier, as they do for the Boolean normal collapsible schemes.]

§ APPENDIX: PROOFS

§.§ Proofs common to several consequence relations

Let 𝐗 be a three-valued scheme. If 𝐗 is Boolean normal, then ⊨^𝐑_𝐗 ⊆ ⊨_2, with 𝐑 ∈ {ss, tt, st}. We need to prove that if Γ ⊭_2 Δ then Γ ⊭^𝐑_𝐗 Δ, for every Γ, Δ. By a straightforward induction, under the assumption of Boolean normality, it is easy to show that for every classical two-valued valuation v there is a three-valued valuation v^* assigning the same value as v to every formula. Thus, given the notions of ss-, tt- and st-consequence relations, if v is a witness of Γ ⊭_2 Δ, then v^* is a witness of Γ ⊭^𝐑_𝐗 Δ. Let 𝐗 be a three-valued scheme. If 𝐗 is falsity-collapsible, then ⊨_2 ⊆ ⊨^sy_𝐗, with y ∈ {s, t}. Suppose Γ ⊭^sy_𝐗 Δ, i.e., either there is a three-valued valuation v such that v(A)=1 and v(B)=0 for every A ∈ Γ and B ∈ Δ, if y=t, or there is a three-valued valuation v such that v(A)=1 and v(B) ∈ {0, 1/2} for every A ∈ Γ and B ∈ Δ, if y=s. Now we will show that in both cases Γ ⊭_2 Δ, i.e., that there is a classical two-valued valuation v^* such that v^*(A)=1 and v^*(B)=0 for every A ∈ Γ and B ∈ Δ. Consider either of the cases and take v^* to be defined as follows: v^*(p) = 0 if v(p) = 1/2, and v^*(p) = v(p) otherwise. Now we show by induction on the complexity of the formula that, on the one hand, if v(A)=1, then v^*(A)=1 and, on the other hand, if v(A) ∈ {0, 1/2}, then v^*(A)=0. Base case: If A is a propositional letter, then it holds by definition of the valuation v^*. Inductive step: Here we need to consider three cases: * A = ¬B. * If v(¬B) = 1 then v(B) ∈ {0, 1/2}. By IH v^*(B)=0, then v^*(¬B)=1. * If v(¬B) ∈ {0, 1/2} then v(B) = 1. By IH v^*(B)=1, then v^*(¬B)=0. * A = B ∧ C. * If v(B ∧ C) = 1 then v(B)=v(C)=1. By IH v^*(B)=v^*(C)=1, then v^*(B ∧ C)=1. * If v(B ∧ C) ∈ {0, 1/2} then v(B) ∈ {0, 1/2} or v(C) ∈ {0, 1/2}. By IH, v^*(B)=0 or v^*(C)=0, and then v^*(B ∧ C)=0. * A = B ∨ C. * If v(B ∨ C) = 1 then v(B)=1 or v(C)=1. So, depending on which of these two is the case, by IH v^*(B)=1 or v^*(C)=1, and then v^*(B ∨ C)=1.
* If v(B ∨ C)∈{0,12} then v(B) ∈{0. 12} and v(C) ∈{0. 12}. By IH v^*(B)=v^*(C)=0, then v^*(B ∨ C)=0.This shows v^* is a classical two-valued valuation witnessing Γ⊭_2Δ, and therefore that _2 ⊆ ^st_𝐗 as desired.Let 𝐗 be a three-valued scheme. If 𝐗 is truth-collapsible, then _2 ⊆ ^yt_𝐗, with y ∈{s,t}.The proof is similar to the previous Lemma, and so we leave it to the reader.§.§ The proofs for stLet 𝐗 be a three-valued scheme. If 𝐗 is monotonic, then _2 ⊆ ^st_𝐗.Assume there is a inference such thatΓ⊭^st_𝐗Δ. Then, there is a valuation v, such that v(A)=1 for every A ∈Γ and v(B)=0 for every B ∈Δ. Now we will show that Γ⊭_2Δ, i.e.,that there is a classical two-valued valuation v^* such that v^*(A)=1 and v^*(B)=0, for every A ∈Γ and B ∈Δ. We take v^* to be defined as follows:v^*(p)=0 ifv(p)=12v(p) otherwise Now we show by induction on the complexity of the formula that, on the one hand, if v(A)=1, then v^*(A)=1 and, on the other hand, if v(A)=0, then v^*(A)=0.Base case: If A is a propositional letter, then it holds by definition of the valuation v^*.Inductive step: Here we need to consider three cases:* A =B.* If v( B)= 1 then v(B)=0. By IH v^*(B)=0, then v^*( B)=1. * If v( B)= 0 then v(B)= 1. By IH v^*(B)=1, then v^*( B)=0. * A = B ∧ C.* If v(B ∧ C)= 1 then v(B)=v(C)=1. By IH v^*(B)=v^*(C)=1, then v^*(B ∧ C)=1. * If v(B ∧ C)= 0 then v(B)=0 or v(C)=0. Then depending on which of the disjuncts holds, by IH v^*(B)=0 or v^*(C)=0, and then v^*(B ∧ C)=0.* A = B ∨ C.* If v(B ∨ C)= 1 then v(B)=1 or v(C)=1. So, depending on which of these two is the case, by IH v^*(B)=1 or v^*(C)=1, and then v^*(B ∨ C)=1. * If v(B ∨ C)= 0 then v(B) =0 and v(C)=0. By IH v^*(B)=v^*(C)=0, then v^*(B ∨ C)=0. This shows v^* is a classical two-valued valuation witnessing Γ⊭_2Δ, and therefore that _2 ⊆ ^st_𝐗 as desired.Let 𝐗 be a three-valued scheme. If 𝐗 is Boolean normal monotonic, or Boolean normal collapsible, then ^st_𝐗 =_2.From Lemmas <ref>, <ref>, <ref> and <ref>.Up until now we proved that certain three-valued schemes—belonging in particular into the class of normal Boolean monotonic, or normal Boolean collapsible schemes—render classical logic when equipped with the st definition of logical consequence. If possible, we also would like to prove the converse. That is to say, that if a three-valued scheme renders classical logic when equipped with the st definition of logical consequence, then said scheme belongs in one and only one of the two classes described before. Below, we show this to be the case. However, to prove this we need both some definitions and some important lemmata, that will do all the heavy-lifting for us. Let 𝐗 be a three-valued scheme. If 𝐗 is not Boolean normal, then ^st_𝐗 ⊈ _2.Suppose 𝐗 is not Boolean normal, then some operation behaves in a way such that some classically invalid inferences are valid in 𝐗. * Let's start with negation. Ifis such that (1) ∈{12, 1} thenp ^st_𝐗 p. On the other hand if (0)∈{12, 0}, then p ^st_𝐗 p. * So, having proved that negation must be Boolean normal, if it is ∨ which is not Boolean normal, then p ∨ p ^st_𝐗 p, or p ∨ p ^st_𝐗p, or p ^st_𝐗 p ∨ p * Again, knowing that negation is Boolean normal, if it is ∧ which is not Boolean normal, then p ^st_𝐗 p ∧ p, or p ∧ p ^st_𝐗 p, or p ^st_𝐗 p ∧ p But none of these are valid in classical logic, whence ^st_𝐗 ⊈ _2. From this Lemma, since the classical values are determined, we can conclude that there are in principle at most three possible negations to consider: (12)∈{0, 12, 1}. 
And actually, what we will prove next is that each of these negations selects exactly the truth tables we have proved are enough to obtain classical logic. In other words, we will prove the following:Letbe a three-valued scheme. If ^st_𝐗 =_2 we have three cases:(1) If (12)=12 then conjunction and disjunction are Boolean normal monotonic (the operations on Fig. <ref>).(2) If (12)=0 then conjunction and disjunction are operations of a Boolean normal truth-collapsible scheme (the operations on Fig. <ref>).(3) If (12)=1 then conjunction and disjunction are operations of a Boolean normal falsity-collapsible scheme (the operations on Fig. <ref>).By Lemma <ref> we assume Boolean normality. We will prove cases (1) and (2), since (3) is similar.Case (1) Assume then that (12)=12. We will show that the other operations are monotonic. * The case of the conjunction:* First we show that in every , (12∧12)=12.* Assume on the contrary that (12∧12)=1. Then we would have a counterexample to the following classically valid inference: p ∧ p⊭^st_𝐗 q (v(q)=0, v(p)=12).* Assume now that (12∧12)=0. Then we would have a counterexample to the following classically valid inference: ( p ∧ p) ⊭^st_𝐗p ∧ p (v(p)=12).* Now, having proved the previous case, we show that (12∧ 1)=12 (we leave to the reader the case (1∧12)=12).* Assume on the contrary that (12∧ 1)=0. Then we would have a counterexample to the following classically valid inference: p ⊭^st_𝐗 (q ∧ q)∧ p (v(p)=1, v(q)=12). * Assume now that (12∧ 1)=1. Then we would have a counterexample to the following classically valid inference: (p ∧ p) ∧ q ⊭^st_𝐗 q (v(p)=12, v(q)=1). * We show now that (12∧ 0)≠ 1 (we left to the reader the case (0∧12)≠ 1). If it were the case that (12∧ 0)= 1 then we would have a counterexample to the following classically valid inference: p ∧ q ⊭^st_𝐗 q (v(p)=12, v(q)=0). * The case of the disjunction:* First we show that in every , (12∨12)=12.* Assume on the contrary that (12∨12)=1. Then we would have a counterexample to the following classically valid inference: p ∨ p⊭^st_𝐗( p ∨ p) (v(p)=12). * Assume now that (12∨12)=0. Then we would have a counterexample to the following classically valid inference: ⊭^st_𝐗p ∨ p (v(p)=12).* Now, having proved the previous case, we show that (12∨ 1)≠ 0 (we left to the reader the case (1∨12)≠12)). If it were the case that (12∨ 1)= 0 then we would have a counterexample to the following classically valid inference: q ⊭^st_𝐗p ∨ q (v(p)=12, v(q)=1). * We show now that (12∨ 0)=12 (we left to the reader the case (0∨12)=12).* Assume on the contrary that (12∨ 0)=1. Then we would have a counterexample to the following classically valid inference: (p ∨ p) ∨ q ⊭^st_𝐗q (v(p)=12, v(q)=0). * Assume now that (12∨ 0)=0. Then we would have a counterexample to the following classically valid inference: ⊭^st_𝐗(p ∨ p) ∨ q (v(p)=12, v(q)=0). Case (2) Assume now that (12)=0. We will show that the other operations belong to some of the truth-collapsible schemes. * The case of the conjunction:* First we show that (1∧12)≠ 0 (we left to the reader the case (12∧ 1)≠ 0). Assume on the contrary that (1∧12)=0. Then we would have a counterexample to the following classically valid inference: (p ∧ q), p ⊭^st_𝐗 q (v(p)=1, v(q)=12).* Now, we show that (12∧12)≠ 0. Assume on the contrary that (12∧12)=0. Then we would have a counterexample to the following classically valid inference: p ⊭^st_𝐗 p ∧ p (v(p)=12). * We show now that (12∧ 0)= 0 (we left to the reader the case (0∧12)= 0).* Assume on the contrary that (12∧ 0)=1. 
Then we would have a counterexample to the following classically valid inference: p ∧ p ⊭^st_𝐗 q (v(p)=12, v(q)=0). * Assume now that (12∧ 0)=12. Then we would have a counterexample to the following classically valid inference: ⊭^st_𝐗 (p ∧ p) (v(p)=12). * The case of the disjunction:* First we show that (1∨12)≠ 0 (we left to the reader the case (12∨ 1)≠ 0). Assume on the contrary that (1∨12)=0. Then we would have a counterexample to the following classically valid inference: p ⊭^st_𝐗 p ∨ q (v(p)=1, v(q)=12).* Now, we show that (12∨12)≠ 0. Assume on the contrary that (12∨12)=0. Then we would have a counterexample to the following classically valid inference: p ⊭^st_𝐗 p ∨ p (v(p)=12). * We show now that (12∨ 0)≠ 0 (we left to the reader the case (0∨12)≠ 0). Assume on the contrary that (12∨ 0)=0. Then we would have a counterexample to the following classically valid inference: ⊭^st_𝐗p ∨ p (v(p)=12). Case (3) Similar to the Case (2), so we leave it to the reader.Let 𝐗 be a three-valued scheme. If ^st_𝐗 =_2, then 𝐗 is Boolean normal, and either monotonic, or collapsible.Immediate from Lemmas <ref> and <ref>.§.§ The proofs for ss Let 𝐗 be a three-valued scheme. If 𝐗 is falsity-collapsible, then ^ss_𝐗 ⊆ _2. Suppose Γ⊭_2Δ, i.e.,there is a two-valued valuation v such that v(A)=1 and v(B)=0, for every A ∈Γ and B ∈Δ. Now we will show that Γ⊭^ss_𝐗Δ, i.e.,that there is a three-valued valuation v^* such that v^*(A)=1 and v^*(B)∈{12,0}, for every A ∈Γ and B ∈Δ. We take v^* to be defined as follows:v^*(p)= v(p) Now we show by induction on the complexity of the formula that, on the one hand, if v(A)=0, then v^*(A)∈{12,0} and, on the other hand, if v(A)=1, then v^*(A)=1.Base case: If A is a propositional letter, then it holds by definition of the valuation v^*.Inductive step: Here we need to consider three cases:* A =B.* If v( B)= 0 then v(B)=1. By IH v^*(B)=1, then since 𝐗 is falsity-collapsible v^*( B)∈{0,12}. * If v( B)= 1 then v(B)= 0. By IH v^*(B)∈{12,0}, then since 𝐗 is falsity-collapsible v^*( B)=1. * A = B ∧ C.* If v(B ∧ C)= 0 then v(B)=0 or v(C)=0. By IH v^*(B)∈{12,0} or v^*(C)∈{12,0}, then since 𝐗 is falsity-collapsible v^*(B ∧ C)∈{0,12}. * If v(B ∧ C)= 1 then v(B)= v(C)= 1. Then, by IH v^*(B)= v^*(C)= 1. Thus, since 𝐗 is falsity-collapsible v^*(B ∧ C)=1. * A = B ∨ C.* If v(B ∨ C)= 1 then v(B)=1 orv(C)=1. So, depending on which of these two is the case, by IH v^*(B)=1 or v^*(C)=1, and then since 𝐗 is falsity-collapsible v^*(B ∨ C)=1. * If v(B ∨ C)= 0 then v(B)=v(C)=0. By IH v^*(B)∈{12,0} and v^*(C)∈{12,0}, then since 𝐗 is falsity-collapsible v^*(B ∨ C)∈{0,12}.This shows v^* is a three-valued valuation witnessing Γ⊭^ss_𝐗Δ, and therefore that ^ss_𝐗 ⊆ _2 as desired.v^*(p)= 12ifv(p)=0v(p) otherwise Now we show by induction on the complexity of the formula that, on the one hand, if v(A)=0, then v^*(A)∈{12,0} and, on the other hand, if v(A)=1, then v^*(A)=1.Base case: If A is a propositional letter, then it holds by definition of the valuation v^*.Inductive step: Here we need to consider three cases:* A =B.* If v( B)= 0 then v(B)=1. By IH v^*(B)=1, then since 𝐗 is falsity-collapsible v^*( B)∈{0,12}. * If v( B)= 1 then v(B)= 0. By IH v^*(B)=12, then since 𝐗 is falsity-collapsible v^*( B)=1. * A = B ∧ C.* If v(B ∧ C)= 0 then v(B)=0 or v(C)=0. By IH v^*(B)=12 or v^*(C)=12, then since 𝐗 is falsity-collapsible v^*(B ∧ C)∈{0,12}. * If v(B ∧ C)= 1 then v(B)= v(C)= 1. Then, by IH v^*(B)= v^*(C)= 1. Thus, since 𝐗 is falsity-collapsible v^*(B ∧ C)=1. * A = B ∨ C.* If v(B ∨ C)= 1 then v(B)=1 orv(C)=1. 
So, depending on which of these two is the case, by IH v^*(B)=1 or v^*(C)=1, and then since 𝐗 is falsity-collapsible v^*(B ∨ C)=1. * If v(B ∨ C)= 0 then v(B)=v(C)=0. By IH v^*(B)=v^*(C)=12, then since 𝐗 is falsity-collapsible v^*(B ∨ C)∈{0,12}.This shows v^* is a three-valued valuation witnessing Γ⊭^ss_𝐗Δ, and therefore that ^ss_𝐗 ⊆ _2 as desired.Let 𝐗 be a three-valued scheme. If 𝐗 is falsity-collapsible, then ^ss_𝐗 =_2. From Lemmas <ref> and <ref>Let 𝐗 be a three-valued scheme. If ^ss_𝐗 =_2, then 𝐗 is falsity-collapsible (i.e.,the operations are those of the schemes in Figure <ref>).We will show that if a three-valued scheme 𝐗 is not falsity-collapsible then _2 ⊈^ss_𝐗. We will show it by cases, considering in order each of the connectives of each possible non-falsity-collapsible scheme. * The case of the negation:* Assume 𝐗 is such that 1 = 1. Then, p,p ⊭^ss_𝐗 q, but of course p,p _2q.* Assume 𝐗 is such that 12≠ 1. Then, q ⊭^ss_𝐗 p,p, but of course q _2p,p.* Assume 𝐗 is such that 0 ≠ 1. Then, q ⊭^ss_𝐗 p,p, but of course q _2p,p. * The case of the conjunction:* Assume 𝐗 is such that x ∧ y ≠ 1, for x=1 and y=1. Then, p, q ⊭^ss_𝐗p ∧ q, but of course p, q _2p ∧ q. * Assume 𝐗 is such that x ∧ y = 1, for x≠ 1. Then, p ∧ q ⊭^ss_𝐗 p, but of course p ∧ q _2 p. * Assume 𝐗 is such that x ∧ y = 1, for y≠ 1. Then, p ∧ q ⊭^ss_𝐗 q, but of course p ∧ q _2 q. * The case of the disjunction:* Assume 𝐗 is such that x ∨ y ≠ 1, for x= 1. Then, p ⊭^ss_𝐗 p ∨ q, but of course p _2 p ∨ q.* Assume 𝐗 is such that x ∨ y ≠ 1, for y= 1. Then, q ⊭^ss_𝐗 p ∨ q, but of course q _2p ∨ q. * Assume 𝐗 is such that x ∨ y = 1, for x ≠ 1 and y ≠ 1. Then, p ∨ q ⊭^ss_𝐗 p, q, but of course p ∨ q _2 p, q. §.§ The proofs for tt We omit all the proofs of this section, since basically they are dual to those for ss. Let 𝐗 be a three-valued scheme. If 𝐗 is truth-collapsible, then ^tt_𝐗 ⊆ _2. Suppose Γ⊭_2Δ, i.e.,there is a two-valued valuation v such that v(A)=1 and v(B)=0, for every A ∈Γ and B ∈Δ. Now we will show that Γ⊭^tt_𝐗Δ, i.e.,that there is a three-valued valuation v^* such that v^*(A)∈{12,1} and v^*(B)=0, for every A ∈Γ and B ∈Δ. We take v^* to be defined as follows:v^*(p)= 12ifv(p)=1v(p) otherwise Now we show by induction on the complexity of the formula that, on the one hand, if v(A)=0, then v^*(A)=0 and, on the other hand, if v(A)=1, then v^*(A) ∈{1, 12}.Base case: If A is a propositional letter, then it holds by definition of the valuation v^*.Inductive step: Here we need to consider three cases:* A =B.* If v( B)= 0 then v(B)=1. By IH v^*(B)=12, then since 𝐗 is truth-collapsible v^*( B)=0. * If v( B)= 1 then v(B)= 0. By IH v^*(B)=0, then since 𝐗 is truth-collapsible v^*( B)∈{1,12}. * A = B ∧ C.* If v(B ∧ C)= 0 then v(B)=0 or v(C)=0. By IH v^*(B)=0 orv^*(C)=0, then since 𝐗 is truth-collapsible v^*(B ∧ C)=0. * If v(B ∧ C)= 1 then v(B)= v(C)= 1. Then, by IH v^*(B)= v^*(C)= 12. Thus, since 𝐗 is truth-collapsible v^*(B ∧ C)∈{1,12}. * A = B ∨ C.* If v(B ∨ C)= 1 then v(B)=1 orv(C)=1. So, depending on which of these two is the case, by IH v^*(B)=12 or v^*(C)=12, and then since 𝐗 is truth-collapsible v^*(B ∨ C)∈{1,12}. * If v(B ∨ C)= 0 then v(B)=v(C)=0. By IH v^*(B)=v^*(C)=0, then since 𝐗 is truth-collapsible v^*(B ∨ C)=0.This shows v^* is a three-valued valuation witnessing Γ⊭^tt_𝐗Δ, and therefore that ^tt_𝐗 ⊆ _2 as desired.Let 𝐗 be a three-valued scheme. If 𝐗 is truth-collapsible, then ^tt_𝐗 =_2.From Lemmas <ref> and <ref>.Let 𝐗 be a three-valued scheme. 
If ^tt_𝐗 =_2, then 𝐗 is truth-collapsible (i.e.,the operations are those of the schemes in Figure <ref>).We will show that if a three-valued scheme 𝐗 is not truth-collapsible then _2 ⊈^tt_𝐗. We will show it by cases, considering in order each of the connectives of each possible non-truth-collapsible schema. * The case of the negation:* Assume 𝐗 is such that 1 ≠ 0. Then, p,p ⊭^tt_𝐗 q, but of course p,p _2q. * Assume 𝐗 is such that 12≠ 0. Then, p,p ⊭^tt_𝐗q, but of course p,p _2q. * Assume 𝐗 is such that 0 = 0. Then, q ⊭^tt_𝐗 p,p, but of course q _2 p,p. * The case of the conjunction:* Assume 𝐗 is such that x ∧ y ≠ 0, for x=0. Then, p ∧ q ⊭^tt_𝐗p, but of course p ∧ q _2 p. * Assume 𝐗 is such that x ∧ y ≠ 0, for y=0. Then, p ∧ q ⊭^tt_𝐗 q, but of course p ∧ q _2 q. * Assume 𝐗 is such that x ∧ y = 0, for x≠ 0 and y≠ 0. Then, p, q ⊭^tt_𝐗 p ∧ q, but of course p, q _2p ∧ q. * The case of the disjunction:* Assume 𝐗 is such that x ∨ y = 0, for x≠ 0. Then, p ⊭^tt_𝐗 p ∨ q, but of course p _2 p ∨ q.* Assume 𝐗 is such that x ∨ y = 0, fory ≠ 0. Then, q ⊭^tt_𝐗p ∨ q, but of course q _2 p ∨ q. * Assume 𝐗 is such that x ∨ y ≠ 0, for x= 0 and y= 0. Then, p ∨ q ⊭^tt_𝐗 p, q, but of course p ∨ q _2 p, q. § GENTZEN-REGULARITY AND CLASSICAL LOGIC Following <cit.>, we call a connective Gentzen-regular if its behavior, whether in the conclusion or in the premise of an argument, can be explained fully in terms of conjunction of sequents involving the subformulae related by that connective. Formally, the definition is the following: Given a consequence relation ⊢, an n-ary connective C (for n≥ 0) is Gentzen-regular for it if there exist ℬ^p⊆𝒫({1,..., n})×𝒫({1,..., n}) and ℬ^c⊆𝒫({1,..., n})×𝒫({1,..., n}) such that ∀Γ, Δ, ∀ F_1, ..., F_n: [ Γ, C(F_1, ..., F_n) ⊢Δ ⋀_(B_p,B_c)∈ℬ^pΓ, {F_i: i∈ B_p}⊢{F_i: i∈ B_c}, Δ;Γ⊢ C(F_1, ..., F_n) , Δ ⋀_(B_p,B_c)∈ℬ^cΓ, {F_i: i∈ B_p}⊢{F_i: i∈ B_c}, Δ;] The next lemma relates this feature of the connectives, what it is for a logic to be classical, and a structural condition on sets of atomic propositions (atom-sharing between premises and conclusions).A propositional logic L=⟨ℒ, ⊢, C⟩ is inferentially classical if and only if its connectives in C are Gentzen-regular and ⊢ is such that for Γ and Δ any two sets of atomic propositions, Γ⊢Δ iff Γ∩Δ≠∅.The left-to-right direction holds because in classical logic, and in any logic that satisfies the same inferences, connectives are Gentzen-regular, and inferences involving only atomic propositions behave as described.Conversely, suppose the right-hand-side holds for a logic L. Consider then sets of premises and conclusions Γ and Δ. If Γ and Δ only contain atomic propositions, then Γ⊢Δ holds in L iff Γ∩Δ≠∅, by hypothesis, iff it holds in classical logic then.By induction on the complexity of the formulae involved, the assumption of Gentzen-regularity allows us to generalize this equivalence between L and classical logic to inferences with non-atomic propositions. Indeed, the Gentzen regularity rules reduce the validity of any inference Γ⊢Δ to the validity of a conjunction of inferences involving formulae of strictly lower syntactic complexity.[One limit case may be mentioned: Gentzen-regularity rules may reduce the complexity so much that they eliminate the formula altogether. You may obtain this through an empty conjunction in the Definition <ref>. As an illustration, ⊤ seen as a 0-ary connective, has such a rule for itsGentzen-conclusion-rule: Γ⊢⊤, Δ is valid no matter what. 
This edge case does not block the inductive step of this proof.] For concreteness, consider Γ', A∨B ⊢ Δ. This holds if and only if both Γ', A ⊢ Δ and Γ', B ⊢ Δ hold. This shows how the Gentzen premise-rule reduces the verification of inferences with disjunction in premises to the verification of strictly simpler inferences, with no disjunctions in premises. Eliminating connectives one after the other thanks to Gentzen rules, in premises and in conclusion, we can recursively reduce the complexity of the inferences until no more reduction is possible. That is, we can find sets of atomic propositions Γ_i, Δ_i such that Γ⊢Δ holds if and only if the conjunction of the Γ_i ⊢ Δ_i holds. Three remarks may be made about this result. The first is that a logic can obey the conditions of Lemma <ref> without coinciding exactly with classical logic. For instance, if C = {¬, ↔}, with ¬ and ↔ obeying the expected Gentzen rules, then the resulting logic is inferentially classical but is only a fragment of classical logic (because it is functionally incomplete). The second is that irrespective of how Gentzen-regular connectives are named in ℒ, what matters is which operations they correspond to. To use the same example as in the proof, if Γ', A∧B ⊢ Δ holds iff Γ', A ⊢ Δ and Γ', B ⊢ Δ hold in ℒ, then it means that "∧" is actually just another name for disjunction in that logic. The third, finally, is that the structural condition in the Lemma simply corresponds to a form of (strong) Reflexivity on the atoms. It can be verified that it directly implies the admissibility of other classical structural rules, including Exchange, Contraction, Weakening, and Cut, for sequents involving only atoms. For example, if Γ, Γ', Δ, Δ' are sets of atoms, then it follows that Γ⊢Δ, p and Γ', p⊢Δ' imply Γ, Γ'⊢Δ, Δ'.

§ MONOTONIC OPERATORS

Given a truth table for a unary or binary operator f, the operator is monotonic only if no two horizontally or vertically adjacent cells of the corresponding matrix contain a 1 and a 0. In the unary case, suppose as a particular case that f(1/2)=0 and f(1)=1. Then, although 1/2 <_I 1, their images by f are incomparable. The other cases are symmetric. In the binary case, suppose as a particular case of vertically adjacent cells that f(1/2,1)=0 when f(0,1)=1. Then although (1/2,1) <^comp_I (0,1), their images by f are incomparable, which violates monotonicity. The other cases are symmetric. A binary Boolean normal operator f is monotonic if and only if no two adjacent cells of its matrix get values 1 and 0 and f(1/2,1/2) is not greater than or incomparable with the value of any other cell. From left to right, suppose that f(1/2,1/2) is incomparable with or greater than the value of some other cell. Since (1/2,1/2) <^comp_I (x,y) for all other cells (x,y), this violates monotonicity. The other condition is entailed by Fact <ref>. From right to left, suppose that f is normal but not monotonic. Then there exist (x,y) ≤^comp_I (x',y'), but either f(x,y) >_I f(x',y'), or f(x,y) and f(x',y') are incomparable. If (x,y) is of type (c,1/2) or (1/2,c) with c classical, and (x',y') of type (c,c), then necessarily one of them is 1 and the other 0. If (x,y) is (1/2, 1/2), then f(x,y) must be a classical value c (otherwise f(x,y) ≤_I f(x',y') would hold), and f(x',y') is either 1/2 or the other classical value, so that f(1/2,1/2) is greater than or incomparable with the value of another cell.
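As a purely computational cross-check of the counts reported in the main text (8192 presentations of classical logic for ss, 8192 for tt, and 528 for st), one can enumerate the three-valued operations directly. The following script is our illustration rather than part of the original development; it encodes 1/2 as 0.5, relies on the characterization theorems above (falsity-/truth-collapsibility for ss/tt, Boolean normality plus monotonicity or collapsibility for st), and exploits the fact that the relevant properties are defined operation by operation, so that scheme counts are products of per-operation counts.

```python
from itertools import product

V = (0, 0.5, 1)                                   # 0.5 encodes the third value 1/2
CLASSICAL = (0, 1)
neg2, and2, or2 = (lambda a: 1 - a), min, max     # two-valued counterparts

def tau(alpha, x):                                # the alpha-collapser
    return alpha if x == 0.5 else x

def leq_info(a, b):                               # information order
    return a == b or a == 0.5

def all_ops(arity):                               # every three-valued operation of this arity
    cells = list(product(V, repeat=arity))
    for outs in product(V, repeat=len(cells)):
        yield dict(zip(cells, outs))

def boolean_normal(op, op2, arity):
    return all(op[c] == op2(*c) for c in product(CLASSICAL, repeat=arity))

def monotonic(op, arity):
    cells = list(product(V, repeat=arity))
    return all(leq_info(op[a], op[b]) for a in cells for b in cells
               if all(leq_info(x, y) for x, y in zip(a, b)))

def collapsible(op, op2, arity, alpha):
    return all(tau(alpha, op[c]) == op2(*(tau(alpha, x) for x in c))
               for c in product(V, repeat=arity))

def count(pred, arity):
    return sum(1 for op in all_ops(arity) if pred(op, arity))

# ss: falsity-collapsible schemes
ss = (count(lambda f, n: collapsible(f, neg2, n, 0), 1)
      * count(lambda f, n: collapsible(f, and2, n, 0), 2)
      * count(lambda f, n: collapsible(f, or2,  n, 0), 2))

# tt: truth-collapsible schemes
tt = (count(lambda f, n: collapsible(f, neg2, n, 1), 1)
      * count(lambda f, n: collapsible(f, and2, n, 1), 2)
      * count(lambda f, n: collapsible(f, or2,  n, 1), 2))

# st: Boolean normal and either monotonic or collapsible
# (the three cases below are pairwise disjoint, so the counts can be summed)
def st_count(prop):
    return (count(lambda f, n: boolean_normal(f, neg2, n) and prop(f, neg2, n), 1)
            * count(lambda f, n: boolean_normal(f, and2, n) and prop(f, and2, n), 2)
            * count(lambda f, n: boolean_normal(f, or2,  n) and prop(f, or2,  n), 2))

st = (st_count(lambda f, op2, n: monotonic(f, n))
      + st_count(lambda f, op2, n: collapsible(f, op2, n, 1))
      + st_count(lambda f, op2, n: collapsible(f, op2, n, 0)))

print(ss, tt, st)   # expected: 8192 8192 528
```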
http://arxiv.org/abs/2312.16035v1
{ "authors": [ "Bruno da Ré", "Damian Szmuc", "Emmanuel Chemla", "Paul Égré" ], "categories": [ "math.LO", "03B05, 03B47, 03B50" ], "primary_category": "math.LO", "published": "20231226125337", "title": "On three-valued presentations of classical logic" }
AutoTask: Executing Arbitrary Voice Commands by Exploring and Learning from Mobile GUI

Lihang Pan and Bowen Wang (equal contribution), together with several co-authors and Yuanchun Shi — Department of Computer Science and Technology, Tsinghua University, Beijing, China ([email protected]). January 14, 2024.

Voice command interfaces (VCIs) have gained increasing importance, enabling hands-free and eyes-free interaction with digital devices. However, the inherent complexity in constructing effective voice interfaces has limited the VCIs' functionalities to only a small fraction of GUI applications and tasks. This paper presents AutoTask, a VCI capable of automating any task in any mobile application without configuration or modification from developers or end users. The primary challenge for AutoTask is the lack of knowledge, as it needs to accomplish unknown tasks (e.g., user commands) within an unknown environment (e.g., GUI). To address this challenge, AutoTask employs two strategies: (1) trial and error: AutoTask explores the GUI, attempts potential operation sequences, and recovers from errors through backtracking; (2) learning from the environment: AutoTask accumulates experiences during exploration and summarizes correct knowledge from these experiences. We implemented AutoTask on Android devices and conducted an evaluation study, which proved the feasibility of AutoTask.

CCS Concepts: Human-centered computing → Natural language interfaces; Human-centered computing → User interface programming.

[Teaser figure] To execute the command "import contacts from contacts.vcf" in the Contacts application (version 4.8.17), AutoTask first clicks the "Add" button, transitioning from Page 1 to Page 2. AutoTask finds that it can only manually add a contact on Page 2; hence, it reverts to Page 1 and clicks another button labeled "Fix & Manager". It then completes the import process through subsequent steps (4 & 5). After finishing the task, AutoTask synthesizes knowledge from its experiences, improving its ability for future commands.
Consequently, existing VCIs support only a limited set of predefined intents, failing to cover the actual needs of users <cit.>.Large language models (LLMs) have been applied to numerous domains and have significantly reduced the cost of system development and deployment <cit.>. While LLMs help understand user commands and alleviate the development burden of VCIs, their application is limited in scope <cit.>. Developers are still required to pre-define a set of intents for the voice interface <cit.>. Additionally, they must configure how these intents are executed <cit.> and address potential errors arising from LLMs <cit.>.In this paper, we present AutoTask, a ready-to-use VCI that operates without any modifications or configurations by either developers or end users. It is capable of executing any intent within any application. AutoTask accomplishes arbitrary tasks (i.e., user commands) in an unknown environment (i.e., the GUI), the primary challenge of which is the lack of necessary knowledge. To overcome this, AutoTask (1) engages in trial and error: exploring the GUI, attempting possible operation sequences, and recovering from errors through backtracking; and (2) learns from the environment by accumulating experiences during exploration and summarizing knowledge from them.As illustrated in Figure <ref>, AutoTask comprehends the semantics of the GUI and the user command and determines an operations sequence to carry out the given task. For example, to execute the command "Import contacts from contacts.vcf", AutoTask first chooses to click the "Add" button on Page 1. It then emulates this operation on the GUI, leading to content updates and a transition from Page 1 to Page 2. AutoTask subsequently evaluates the correctness of the executed operation sequence. If an error is detected, AutoTask revokes its actions to rectify it. For instance, upon reaching Page 2, AutoTask recognizes that it can only manually add a single contact there and cannot perform a batch import from a file. Consequently, it reverts to Page 1 and selects another button labeled "Fix & Manager". This process continues until the task is successfully completed. Additionally, AutoTask improves its performance by accumulating experiences while navigating the GUI and summarizing knowledge from these experiences, including: * Environmental knowledge: for example, in Figure <ref>, AutoTask can learn that clicking the "Add" button does not lead to batch importing of contacts. This knowledge can expedite AutoTask's execution of subsequent commands.* Task knowledge: for instance, in Figure <ref>, AutoTask can learn that the intent "Import contacts from a file" requires a parameter specifying the file name. This knowledge aids AutoTask in understanding the semantics of the commands.* Execution knowledge: as shown in Figure <ref>, AutoTask can learn the correct operation sequence for importing contacts from a file. This sequence can be directly replayed to accomplish similar tasks and help execute other commands. This paper makes two main contributions: * We introduce a new paradigm in which an agent accomplishes unknown tasks in an unknown environment. The agent explores the environment to find a solution and summarizes its experiences into knowledge to enhance its capabilities.* We present a ready-to-use VCI named AutoTask, where end users can automate any intent with a single command. Experimental results proved its usability. We implemented AutoTask on Android smartphones and conducted an evaluation study. 
The experimental results proved its usability.§ RELATED WORKVoice command interfaces can effectively reduce the interaction burden, enabling users to interact with devices hands-freely and eyes-freely <cit.>. However, constructing a VCI for mobile devices requires significant effort <cit.>, leading to the existing VCIs covering only a limited set of GUI functionalities. The workload primarily encompasses determining the supported intent set, understanding user natural language commands accurately, and executing tasks correctly. Self-improvement of VCIs during runtime is a crucial approach to reducing effort <cit.> but has yet to gain widespread support. Table <ref> compares existing VCIs with AutoTask in these four aspects. §.§ Supported intents of VCIsThe earliest voice interfaces on smartphones only supported single-step GUI operations <cit.>, for example, clicking a button already present on the GUI. The intent sets of this kind of VCI are limited to the current GUI contents. Although the VCIs can be applied to any mobile application without any configuration or modification, GUI tasks typically require multiple operations (e.g., clicking several buttons sequentially), and providing voice commands for each step would impose a significant interaction burden. Therefore, this approach is mainly used for accessibility purposes <cit.> and has not been widely adopted by ordinary users.Task-oriented VCIs <cit.> (e.g., Siri) address the aforementioned issues and reduce the interaction burden. End users only provide a single voice command, and the virtual assistant can automatically complete a multi-step task on the GUI (e.g., setting a 9:00 am alarm). However, existing task-oriented voice assistants only support a limited set of intents. For instance, Siri does not support sending WhatsApp messages. This results in two challenges: on one hand, developers need to invest a significant amount of effort (e.g., conducting formative studies <cit.>) to determine a useful intent set; on the other hand, end users often complain about the discoverability of functionalities <cit.> and the lack of support for necessary intents <cit.>.AutoTask differs significantly from the two categories of voice interfaces mentioned above. AutoTask is a task-oriented voice assistant capable of automating multi-step tasks with a single command. However, AutoTask does not rely on a predefined set of intents and can accommodate any intent that can be executed on the GUI without requiring additional overhead from developers or end users. §.§ Understanding User CommandsA task-oriented voice assistant understands the user command to: (1) classify the command into a specific intent (e.g., sending a message); and (2) identify parameters for the intent (e.g., message content and message recipient) <cit.>. The most traditional approach to command understanding is using context-free grammar (e.g., regular expressions <cit.> and combinatory categorial grammar (CCG) <cit.>). Additionally, researchers have employed more sophisticated algorithms (e.g., word dependency <cit.>, n-grams <cit.> and word embedding <cit.>) to extract features from commands and create natural language processing scripts. These solutions heavily rely on handcrafted rules <cit.>, necessitating significant developer effort; however, their performance is relatively poor <cit.> and cannot satisfy users' needs.Nowadays, many systems utilize deep neural networks to comprehend user commands, achieving satisfactory results <cit.>. 
Before the widespread adoption of pre-trained models, researchers needed to collect a large amount of training data <cit.> and meticulously fine-tune the model's architecture and parameters to achieve good results <cit.>. This process involved a significant workload. With the development of pre-trained models, researchers only provide prompts and a few optional examples, and large language models (LLMs) can successfully understand the commands <cit.>.An often overlooked issue is that interactive systems may inaccurately interpret user commands, leading to conversation breakdowns <cit.>. This problem increases the user's burden <cit.> and reduces their willingness to engage in interactions <cit.>. Existing solutions all entail additional costs. For instance, developers can programmatically address breakdowns <cit.>. AutoVCI <cit.> and SOVITE <cit.> require additional user interactions to ensure accurate command understanding.AutoTask utilizes LLMs to comprehend user commands, enabling support for any task in any application with minimal development effort. To address the issue of LLM errors, AutoTask does not require developer or user involvement; instead, it automatically learns from the mobile GUI and continually adjusts its command understanding results. §.§ Executing the CommandExisting VCIs search for and execute scripts in databases based on the command understanding results; the scripts can automatically carry out the user commands. These scripts can be categorized into two types based on their origins: those created by developers and those generated by end users.The scripts for the majority of commercial voice assistants are manually created by developers <cit.>. For example, Siri directly invokes functions implemented by developers to execute voice commands. Because developers need to write a script for each intent, this approach significantly limits the number of functionalities available in the voice interface <cit.>.Since end users have a strong demand for voice assistants that can support their personalized needs, researchers have proposed different methods to collect execution scripts from users. One typical approach is program by demonstration (PBD) <cit.>. Users can demonstrate how they complete tasks on the GUI; this process is recorded and automatically transformed into execution scripts. Researchers have also attempted to extract execution scripts from users' historical behavior records automatically <cit.>. While this approach avoids the explicit burden of demonstration, it also results in unpredictable system capabilities and relies on the time and quality of data accumulation. Note that scripts collected from users are not always correct. For example, replaying the operation sequence demonstrated by users may fail due to pop-up windows or application version updates <cit.>. Existing solutions require users to handle exceptions manually <cit.>, which introduces additional burdens.AutoTask does not require any predefined scripts, whether they originate from developers or end users. It dynamically calculates a potential operation sequence on the GUI at runtime. Furthermore, it assesses whether the sequence is correct and automatically handles errors. The entire process does not necessitate any user intervention. §.§ Self-ImprovementMany systems can learn and improve themselves through interactions with users. A typical application scenario is learning user preferences to provide better services (e.g., recommendations <cit.>, navigation <cit.>, and scheduling <cit.>). 
AutoVCI <cit.> can enhance its semantic understanding ability through multi-turn dialogues with users <cit.>. In the field of machine learning, this approach is known as "human-in-the-loop" <cit.>, in which users contribute to improving machine capabilities by providing annotations. This approach has achieved significant success in training large language models <cit.>.Reinforcement learning is a common approach for improving machine capabilities without user intervention: AI-driven agents enhance themselves based on rewards provided by the environment <cit.>. This approach has been widely applied in fields such as gaming <cit.> and autonomous driving <cit.>, but it has seen limited application in executing user commands on GUIs <cit.>. AppBuddy <cit.> is a preliminary attempt in this direction; however, it suffers from issues like sparse rewards <cit.> and excessive trial-and-error steps, making it unsuitable for direct application in interactive systems. AutoTask shares a similar concept with reinforcement learning: it autonomously summarizes knowledge from its explorations of the GUI, all without requiring user intervention. § PROBLEM FORMULATION & SOLUTIONThis paper focuses on problems of the following form: an intelligent agent is required to complete an unknown task in an unknown environment. This type of problem is prevalent in applying artificial intelligence (AI) to daily tasks for two primary reasons. Firstly, it is impossible for the developers to collect corpora and pre-train an agent for every real-world scenario. Secondly, end users often struggle to, or choose not to, provide comprehensive, structured descriptions and step-by-step procedures for tasks. Solving problems characterized by these patterns can significantly broaden the application of AI in everyday tasks.AutoTask is a solution to the problem in the field of VCIs: it automates the execution of interaction intents (i.e., tasks) expressed through natural language commands by simulating user operation sequences within the GUI (i.e., the environment) of mobile devices. AutoTask supports any applications and intents, ensuring comprehensive coverage of GUI tasks for voice assistants.The core challenge of "completing unknown tasks in an unknown environment" lies in the lack of knowledge, which includes: * Lack of environmental knowledge. Although the intelligent agent can observe and interact with the environment, there is no prior knowledge about how the environment will change after interactions. For example, AutoTask can acquire GUI content; however, it lacks knowledge about how the GUI will change after simulating user actions (e.g., clicking a button on the screen).* Lack of task knowledge. The agent does not possess a set of supported tasks or have any predefined understanding related to task semantics or the interpretation of external inputs. For example, we have not provided AutoTask with a predefined set of intents or information about intent parameters. Additionally, we have not provided models or scripts to assist AutoTask in recognizing the intents and parameters within user commands.* Lack of execution knowledge. Given the absence of environmental and task knowledge, the agent lacks the knowledge of how to execute tasks. For AutoTask, this is reflected in the absence of execution scripts (whether provided by developers or end users) for the intent. 
To address this challenge, we propose an "explore-learn" strategy that comprises: * Trial and error: The agent explores the environment and attempts to execute the task. It recovers from errors through backtracking when necessary.* Learn from the environment: The agent accumulates experiences (records of actions and observations) during the exploration of the environment. From these experiences, the agent learns knowledge, enabling it to (a) directly execute intents that have been completed in the past and (b) expedite the exploration when executing unknown intents. The system design of AutoTask is an application of this strategy to the field of voice assistants, which will be elaborated upon in the next section. § SYSTEM DESIGN Figure <ref> illustrates the AutoTask pipeline. Upon receiving a user command, AutoTask checks whether the intent expressed in the command has been previously executed. If it has, AutoTask automatically executes the task by replaying the operation sequence, with adjustments based on the current command's parameter values <cit.>. If the command has not been executed previously or if the replay of the sequence fails, AutoTask enters the "explore-learn" mode, which can be divided into two parts: * Trial and error, which can be further subdivided into forward exploration and backward backtracking: * During the forward exploration, AutoTask selects an optimal operation within the current GUI content (the understanding module and the deciding module), which is then automated by programmatically injecting an event into the GUI (the executing module). After obtaining the resulting GUI content, AutoTask assesses whether the current task is completed and whether the executed operation sequence is correct (the checking module). Based on this assessment, it decides whether to terminate the execution, continue forward exploration, or initiate backward backtracking.* During the backward backtracking, AutoTask undoes the last action (the backtracking module) and evaluates whether the current task is completed and whether the operation sequence is correct (the checking module). Based on this evaluation, AutoTask decides whether to terminate the execution, continue backward backtracking, or start forward exploration. * Learning from the environment, which encompasses:* Accumulating experiences: AutoTask records all its decisions (e.g., outputs of the modules) and GUI observations.* Summarizing knowledge: AutoTask extracts correct knowledge from experiences at appropriate times. The details of AutoTask's knowledge are illustrated in Table <ref>.* Applying knowledge: AutoTask utilizes its knowledge during task execution to directly execute previously completed intents or expedite the exploration process. §.§ Learning from the GUI: Summarizing Experiences into Knowledge §.§.§ Experiences of AutoTask AutoTask automatically records its experiences during runtime; these experiences assist the system in executing the current task and are also summarized into knowledge to support subsequent tasks. As depicted in Figure <ref>, we represent AutoTask's experiences as a graph, where nodes represent GUI pages, and edges record the results of the modules. Please refer to the corresponding sections for detailed results of each module. §.§.§ Environmental knowledge AutoTask's environment is the GUI, and its environmental knowledge describes the contents of GUI pages and the transitions between pages.
This knowledge can be categorized into the following two types:* (Type-1) Contents and page transitions of the GUI, which are stored in the form of triplets (S, O, D). S and D represent the GUI pages before and after the operation (represented through HTML, as indicated in Figure <ref>). O = (E, A, P), where E, A, and P denote the element, the action, and the parameters (e.g., the text for a "text input" action), respectively; these three components together describe a GUI operation. For example, Edge (5) in Figure <ref> corresponds to the following triplet: (Page 4, (button labeled "contacts.vcf", click, null), Page 3).* (Type-2) Contents or transitions that do not exist in the GUI. This kind of knowledge is expressed in natural language. For example, one piece of environmental knowledge (Type-2) summarized for Edge (1) in Figure <ref> could be, "By clicking the 'Add' button on the home page, you can only manually add a single contact. Importing contacts from files is not supported". This knowledge may prevent AutoTask from attempting to click the "Add" button for future commands (e.g., "Import contacts from cloud backup"), thereby expediting the execution of subsequent tasks. §.§.§ Task knowledge Task knowledge helps understand the semantics of the tasks. AutoTask's task is to execute natural language commands, the semantics of which are typically described by intents and parameters (also called slots in some works <cit.>). A piece of task knowledge comprises (1) an intent name, (2) parameter names, (3) a command corresponding to the intent, and (4) values of the parameters in that command. An intent may appear in multiple pieces of task knowledge, since the same intent can be expressed by different commands. For example, the task knowledge from Figure <ref> is (1) intent name - "import contacts from file"; (2) parameter name list - "file name"; (3) command - "import contacts from contacts.vcf"; (4) parameter values - "file name = 'contacts.vcf'". §.§.§ Execution knowledge Execution knowledge describes how to execute the command in the GUI. It can be categorized into the following two types: * (Type-1) The correct operation sequence for the command. For example, the first type of execution knowledge for "import contacts from contacts.vcf" is "click 'Fix & manage', click 'Import from file', click <file name>". The angle brackets (<>) denote parameter values in the command.* (Type-2) Descriptions of how to avoid incorrect operations. This knowledge comprises lessons summarized in natural language, corresponding to errors made by AutoTask during execution. As exemplified in Figure <ref>, AutoTask makes a mistake when executing "save Alice, 2122000000": it clicks "Save" without adding any information about the contact. A possible piece of knowledge summarized from this error could be, "You should pay attention to the order of steps; actions that may finalize a task (e.g., clicking the Save button) should be performed last". §.§.§ From experiences to knowledge AutoTask's experiences need to be further summarized into knowledge because some experiences may be redundant or erroneous. Environmental knowledge (Type-1): When AutoTask simulates an operation on the GUI and obtains the contents of the resulting page, the experience is immediately transformed into environmental knowledge (Type-1).
AutoTask's observations of the environment are always correct; as a result, AutoTask can directly and instantaneously convert the related experiences into knowledge.Execution Knowledge (Type-1): AutoTask identifies the shortest path in its experiences that (1) connects the starting point and the endpoint[The GUI screen where AutoTask thinks the task is completed] and (2) encompasses all parameters. For example, in Figure <ref>, the path "3-4-5" satisfies the aforementioned criteria. This path is considered the correct operation sequence for the task and is added to the database. Such knowledge is only summarized after the task is completed because AutoTask can only determine the task's endpoint at that time.Task Knowledge: AutoTask records the command understanding result (to be discussed in <ref>) at the final step of the correct path as a piece of task knowledge. This type of knowledge will be summarized once AutoTask completes the task; otherwise, the command understanding result may be incorrect.Environmental Knowledge (Type-2) & Execution Knowledge (Type-2): The purpose of these two types of knowledge is to prevent errors during task execution. AutoTask compares its experiences with the correct path to identify erroneous steps (e.g., Step 1 in Figure <ref> & <ref>). AutoTask categorizes the reasons for errors into two types:* Lack of environmental knowledge. For example, the error in step 1 of Figure <ref> occurs because AutoTask does not know whether there will be an "import from file" option after clicking the "Add" button. Since importing from a file is a way to "add" a batch of contacts, AutoTask considers trying the "Add" button worthwhile.* Lack of execution knowledge. For example, the error in step 1 of Figure <ref> happens because AutoTask does not know how to determine the order of operations when multiple GUI actions are related to the command. AutoTask summarizes a lesson in natural language for each error and utilizes it to avoid future errors. This knowledge can only be summarized after completing the task. Otherwise, AutoTask cannot accurately determine whether a step is correct. We employ LLM to compile this type of knowledge. The prompt will be discussed in section <ref>.§.§ The Understanding ModuleIn the understanding module, AutoTask comprehends the GUI and the user command, the results of which serve as inputs for subsequent modules. This module enables AutoTask to augment information about the environment (i.e., the GUI) and the task (i.e., the user command) with its knowledge.§.§.§ Understanding the GUIThe GUI semantics are formed by the contents of GUI pages and the transitions between pages. AutoTask can obtain page data through APIs provided by the operating system (e.g., Android AccessibilityService[<https://developer.android.com/reference/android/accessibilityservice/AccessibilityService>]). However, the incompleteness of GUI semantics arises because the contents after GUI operations cannot be foreseen. AutoTask addresses this issue by querying environmental knowledge to infer the elements "hidden" behind the current GUI elements. This process can be divided into two steps: * AutoTask retrieves elements from the environmental knowledge (Type-1) that can be reached through one or multiple operations starting from the current GUI elements.* To filter out irrelevant elements, these elements are transformed into vectors, and their similarities to the user command are computed (details are discussed in section <ref>). 
The semantic understanding result includes only elements with similarities exceeding a threshold. As exemplified in Figure <ref> (C), the button "App pinning" is very related to the user command "enable app pinning" and is reachable by operating an element on the current GUI (clicking "Security & privacy" and then clicking "More security & privacy"). As a result, it is added to the "target" property of the button "Security & privacy". The GUI semantics effectively guide AutoTask in selecting the correct operations. As illustrated in Figure <ref>, AutoTask successfully executes the command "enable SIM lock". This is not challenging since SIM lock and security are highly semantically related. AutoTask also learns GUI-related knowledge during execution. Next, AutoTask executes the command "enable App pinning". Without relevant GUI knowledge, executing this command is challenging: AutoTask may blindly attempt to click on different buttons on the Android Settings homepage, such as "Application", "Display", and "Safety". However, GUI knowledge can assist AutoTask in directly choosing "Security & privacy" without additional explorations. During GUI understanding, AutoTask discovers a high semantic similarity between the "App pinning" button and the user command, and there exists an operation sequence from "Security & privacy" to "App pinning".§.§.§ Understanding the command During the process of understanding command semantics, AutoTask generates a natural language phrase that describes the intent conveyed in the user command. It also detects the parameters and their values from the command. Command understanding needs to be performed in each iteration of AutoTask because the understanding results may be updated as experiences accumulate <cit.>.The command semantics can guide AutoTask in taking correct operations in the GUI. For example, in the command "Save Alice, 2122000000 to contact" (Figure <ref>), if AutoTask realizes that "Alice" and "2122000000" are parameters for "create a new contact", it will use the two parameters during task execution, that is, entering them into text boxes[Another way to use a parameter is to select an item with corresponding text in a list <cit.>.]. The results of command understanding may be stored as task knowledge (as already discussed in <ref>), which can assist AutoTask in comprehending subsequent commands.AutoTask employs task knowledge and an LLM to understand the semantics of commands. We calculate the semantic similarities between the historical commands in the task knowledge and the current user command (details will be discussed in <ref>). Historical commands with similarities greater than a threshold will be selected as examples. The LLM will utilize these examples, the executed operation sequence, and current GUI contents to calculate the command understanding results. The prompts will be discussed in <ref>. §.§ The Deciding ModuleIn the deciding module, AutoTask calculates the most possible operation in the current GUI to complete the user command. AutoTask identifies all operations[The parameter for text input will be determined later] available in the GUI. Table <ref> provides an overview of the types of supported operations. Subsequently, AutoTask assigns scores to these operations, with higher scores indicating a greater possibility of being the next operation. 
Each operation's score is calculated based on a basic score and a penalty factor: score = basic_score / (1 + penalty): * Basic Score (1.0 - 8.0), which is calculated by adding the following two components together: * Likert scale (1.0 - 7.0). We employ an LLM to assess the relevance of the operations to the user's command. We use a 7-point Likert scale, where 1 indicates extremely low relevance, and 7 denotes very high relevance. To assist the LLM in calculating the scores, we retrieve relevant environmental knowledge (Type-2) and execution knowledge (Type-1 & 2) from the knowledge base, as discussed in <ref>. Further details about the prompt will be provided in <ref>.* Tie-breaking score (0.0 - 1.0). We calculate the semantic similarities between operations and tasks (please refer to section <ref> for more details) as the tie-breaking scores. The tie-breaking scores increase the differentiation between different operations. While Likert scales are typically effective at identifying the most relevant operation, they may lack granularity when scoring less relevant options. The tie-breaking scores prevent operations from receiving identical scores, thereby avoiding AutoTask being reduced to brute-force searching. * Penalty factor (0.0 - positive infinity). It is important to note that penalizing an operation does not necessarily mean that the final score of the element will not be the highest. For example, when an operation is penalized by the checking module, AutoTask may attempt other operations and find that these operations are even less relevant to the user command. In this case, AutoTask may retry the penalized operation. The penalty factor consists of two components: * Repetition penalty, used to penalize operations that have already appeared in the executed operation sequence[Note that a combination of an action, a GUI element, and a parameter describes an operation. A repetitive operation implies that AutoTask has arrived at the current GUI screen.]. The repetition penalty is fixed at 10.* Backtracking penalty, used to penalize operations considered incorrect by the checking module. The backtracking penalty is initialized at 0 and can be updated by the checking module (see <ref> for details).AutoTask selects the operation with the highest score as the next to be executed. If the current operation is text input, AutoTask utilizes an LLM to calculate the text content. Please refer to <ref> for more details about the prompt. §.§ The Executing Module & the Backtracking ModuleIn the executing module, we utilize the accessibility API to inject operations into the GUI based on the results from the deciding module. Conversely, the backtracking module injects interaction events (e.g., scrolling backward) to undo previous operations (e.g., scrolling forward, as indicated in Table <ref>). Both modules retrieve the GUI hierarchy after injecting events, which will be converted into HTML format (as shown in Figure <ref>(B)) and used by other modules. Operations may only cause some minor localized changes in the GUI, so we compare the GUI pages before and after the operations to identify newly appeared elements, which are then marked with a boolean property named "new" (as shown in Figure <ref> in the Appendix).We remove elements that meet both of the following two criteria from the GUI hierarchy: * Low interaction importance. 
This criterion applies when the element itself and its descendant elements (if any) cannot be interacted with and do not contain text or descriptions.* Low layout importance. This criterion applies when the element has no sibling elements or all sibling elements are considered to have "low interaction importance".§.§ The Checking ModuleAfter AutoTask performs GUI operations (both in the executing module and the backtracking module), the checking module conducts two checks on the completed operation sequence: completeness and correctness.§.§.§ completeness checkThe checking module employs an LLM to determine whether the current task is completed. This check is also performed during the backtracking process to address "overshoot" issues, where unnecessary operations are executed after task completion. For more prompt details, please refer to <ref>. The checking module also considers the task completed when the number of executed steps exceeds a threshold (set to 20 in our implementation).When AutoTask considers the task as completed, we present the user with a list describing the shortest execution path (as discussed in <ref>). Each item in the list includes (1) a screenshot, (2) a rectangular bounding box used to highlight the operated element in the screenshot, and (3) a text description of the operation, e.g., "Text input: Alice". The user can choose one of the following options: * Confirming the correctness of the execution process. The system summarizes the knowledge and terminates.* Confirming that the task is not yet completed. The system continues running and starts the correctness check. The current page will not be considered as the endpoint for the current command.* Forcing termination. The system stops running directly without knowledge summarization. Note that the first type of environmental knowledge is still summarized and accumulated.* Ignoring (default[In the evaluation study, we assumed that users would select this option.]). If the step threshold is exceeded, AutoTask stops running without knowledge summarization. Otherwise, the system summarizes knowledge and terminates. §.§.§ correctness checkIf a task is not completed, AutoTask checks whether the last step[Backtracking steps or the steps being undone will not be considered as last steps. For example, if "A-B-C" forms an operation sequence and AutoTask uses operation D to undo operation C, then, even though the sequence is "A-B-C-D", we regard B as the last step.] currently being executed is correct, i.e., whether AutoTask can continue to fulfill the user's instruction. The essence of a correctness check is checking the correctness of the deciding module with more experiences and knowledge accumulated from the GUI. For example, in Figure <ref>, AutoTask clicks the "Add" button to import contacts from the file. However, after simulating user interaction and obtaining new GUI contents, the checking module can discover that the result of the deciding module is erroneous.AutoTask applies an LLM to conduct the correctness check. Please refer to <ref> for the detailed prompt of LLM. It is worth noting that AutoTask takes into account the backtracking penalty of the last step. If an operation has a high backtracking penalty but is still executed, the possibility of other operations may be lower. 
In such a case, the checking module should be more tolerant and consider it correct, providing an opportunity for further exploration.If the last operation is considered to be incorrect, the checking module will calculate a penalty (0-9) to describe the severity of the error: 0 indicates that the error in the last operation is due to preceding steps already being incorrect; 9 indicates a very serious error in the last operation itself. This penalty will be accumulated into the current backtracking penalty of the last operation. Consequently, the backtracking penalty of an operation may be greater than 9, which indicates that it has been rejected by the checking module several times. We use LLM to calculate the penalty, with details of its prompt discussed in <ref>. § IMPLEMENTATIONIn this section, we describe how AutoTask utilizes LLMs for computational purposes. Please refer to the Appendix for more detailed examples. §.§ AutoTask's contextAutoTask's context describes its state and is widely used throughout the computational process, as shown in Figure <ref> in the Appendix. It encompasses (1) the user command, (2) the executed operation sequence, and (3) the current GUI contents represented in HTML. The GUI content is augmented with environmental knowledge (Type-1), as discussed in <ref>. (4) the latest semantic comprehension result of the instruction (if any). §.§ Embedding & Similarity: Choosing One or More Answers from Several CandidatesAutoTask utilizes the Embedding & Similarity approach to select one or multiple answers from several candidates for a given question. Both the question and the candidates are transformed into vectors using the embedding API provided by OpenAI. We regard each candidate's cosine similarity with the question as its score. A higher score indicates that the candidate is more likely to be the correct answer. Table <ref> summarizes the usage of this method. It is worth noting that the description of elements includes their surrounding elements, as they may exhibit strong semantic relevance. §.§ Text Completion: Answering QuestionsAutoTask utilizes the text completion approach to generate an answer for a given question. The question is passed as part of the prompt to the LLM (gpt-4), and the response generated by the LLM serves as the answer. Table <ref> summarizes the usage of this method. The output template specifies the JSON format the LLM response should adhere to, which AutoTask can parse easily.§ EVALUATION STUDYThe evaluation study has two goals: (1) to validate that AutoTask can correctly execute user instructions without predefined knowledge, and (2) to validate that knowledge accumulation can effectively accelerate AutoTask's execution of user commands. We did not evaluate AutoTask's feasibility regarding executing intents that have been carried out before. In such cases, AutoTask only replays the recorded operation sequences (with parameter adjustments), and the feasibility has already been validated in previous works <cit.>. §.§ ApparatusWe conducted the evaluation on an Android virtual machine (Pixel_XL running Android 11). No modifications were made to the virtual machine or system, and AutoTask can run on commercial physical devices. §.§ TasksWe validated the capabilities of AutoTask on the following datasets: * PixelHelp <cit.>. Consisting of natural language commands and their corresponding operation sequences, PixelHelp was revised for this study. 
Due to system and application upgrades, some outdated instructions were manually removed, leaving a total of 67 instructions.* UGIF <cit.>. This dataset also comprises natural language commands and their corresponding operation sequences. We concentrated on instructions related to Android Settings, as knowledge accumulation is more pronounced within the same application. We used these tasks to assess the impact of knowledge accumulation on AutoTask's performance. Similar to PixelHelp, outdated instructions were removed, resulting in 100 remaining instructions. We made adjustments to the commands in the datasets, including: * Changing the tone from inquiry to command. Both datasets were compiled from tutorials on the internet, where instructions were mostly presented in the form of inquiries. We modified the instructions to align them with how users typically interact with voice assistants. For example, we changed "how to turn off WiFi" to "turn off WiFi".* Supplementing missing parameters. Some commands lacked parameters and were not executable. We added parameters to these instructions. For example, we changed "how to delete a Google account" to "delete the Google account named Alice".§.§ MetricsSuccess rate, i.e., the ratio of the number of successfully completed tasks to the total number of tasks.The step accuracy of a task, i.e., the ratio of the number of correct steps to the minimum number of steps required to complete the task. The correct steps are defined as the longest subsequence (instead of substring) of the actually executed steps that satisfy the following criterion: the minimum steps needed to complete the task start with the subsequence. For completed tasks, this metric is always 1. For tasks that were not successfully completed, this metric indicates the proximity of AutoTask to successful completion.The step redundancy rate of a task, i.e., the ratio of the difference between the number of executed steps and the number of correct steps to the number of executed steps. This metric evaluates the system's efficiency. Tasks that succeeded with a step redundancy rate of 0 are referred to as "tasks completed without redundancy".Non-redundant completion rate, i.e., the ratio of the number of tasks completed without redundancy to the total number of tasks. §.§ BaselineWe utilized the LLM approach proposed by Android in the Wild (AITW) <cit.> as the baseline. Similar to AutoTask, this approach employs an LLM to automate user instructions without requiring any configuration or modifications, making it applicable to arbitrary GUI intents. However, the baseline solution lacks an explicit self-checking and backtracking mechanism, although it can undo previous actions by performing certain actions (e.g., clicking the back button). Furthermore, it does not summarize and accumulate knowledge from the execution process. Note that in the original baseline solution, the prompt used only included information about the most recent five operations. We adjusted it to include the complete operation sequence to ensure consistency with AutoTask. When the number of execution steps in the baseline approach exceeds 20, we also forcibly terminate it. §.§ ProcedureAutoTask and the baseline are executed in the same order for the commands from the two datasets (shuffled beforehand). After completing each task (normal completion or exceeding the maximum number of steps), the next instruction is automatically executed. 
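For concreteness, the evaluation metrics defined above can be expressed in a few lines of code. The following is a minimal sketch of our own (not the evaluation scripts used in the study), in which `executed` is the operation sequence a system actually performed and `reference` is the minimum-length sequence that completes the task; the "correct steps" are the longest subsequence of `executed` that the reference sequence starts with.

```python
from typing import List, Tuple

def correct_steps(executed: List[str], reference: List[str]) -> int:
    """Length of the longest subsequence of `executed` that `reference` starts with."""
    count = 0
    for step in executed:
        if count < len(reference) and step == reference[count]:
            count += 1
    return count

def step_accuracy(executed: List[str], reference: List[str]) -> float:
    # Ratio of correct steps to the minimum number of steps required for the task.
    return correct_steps(executed, reference) / len(reference)

def step_redundancy(executed: List[str], reference: List[str]) -> float:
    # Share of executed steps that were not correct; 0.0 means "completed without redundancy".
    if not executed:
        return 0.0
    return (len(executed) - correct_steps(executed, reference)) / len(executed)

def success_rate(outcomes: List[bool]) -> float:
    return sum(outcomes) / len(outcomes)

def non_redundant_completion_rate(results: List[Tuple[bool, List[str], List[str]]]) -> float:
    # `results` holds (succeeded, executed, reference) per task.
    ok = [s and step_redundancy(e, r) == 0.0 for s, e, r in results]
    return sum(ok) / len(results)
```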
Throughout this process, AutoTask accumulates knowledge when a task ends successfully but does not utilize knowledge derived from other tasks[AutoTask still utilizes the first type of environment knowledge accumulated during the execution of the current task.]. After completing all tasks, experimenters manually check whether each task has been executed correctly.We categorized the tasks in UGIF into three types based on AutoTask's execution results: (Type-1) tasks that AutoTask can complete without redundancy; (Type-2) tasks that AutoTask can complete with redundancy; (Type-3) tasks that AutoTask is unable to complete. Tasks of Type-1 and 2 are collectively referred to as Type-A tasks; AutoTask has already summarized knowledge for Type-A tasks in Phase 1. Tasks of Type-2 and 3 are collectively referred to as Type-B tasks; there is room for improvement in AutoTask's performance on Type-B tasks. We then evaluate the improvement of AutoTask's performance with the accumulated knowledge. For each Type-B task, we randomly select a certain number of tasks from Type-A tasks[We guarantee that the selected Type-A tasks do not include the Type-B task to be tested.]. We evaluate the performance of AutoTask after accumulating the knowledge from these tasks. We repeat the aforementioned random selection process ten times and compute the average results to mitigate the influence of random noise. To explore the impact of the amount of knowledge on AutoTask's performance, we repeated the process several times with different percentages of the selected Type-A tasks: 20%, 40%, 60%, 80%, and 100%. §.§ Results §.§.§ Success rateThe accuracy of AutoTask in PixelHelp is 91.2% (6 errors in 67 commands) and that in UGIF is 93.0% (7 errors in 100 commands). The two metrics for the baseline are 52.2% (32 errors in 67 commands) and 67.0% (33 errors in 100 commands), respectively. The chi-square test (p < 0.001) proved that AutoTask significantly outperformed the baseline in both datasets. We identified the following two reasons: * AutoTask demonstrates excellent capability in detecting task completion, with precision at 99.4% and recall at 100%. Although the baseline achieves 100% accuracy in verifying task completion, its recall rate is relatively low: 36.1% (15 tasks from PixelHelp and 11 tasks from UGIF) of the errors are due to "overshoot", that is, the baseline executes unnecessary steps after the tasks had already completed.* AutoTask exhibits higher accuracy in its behavior on the GUI (results from the deciding and checking modules). The step accuracy of AutoTask is 93.5% (PixelHelp: 93.6%, UGIF: 93.4%), whereas the baseline stands at 82.9% (PixelHelp: 77.4%, UGIF: 87.2% ).We analyzed tasks that AutoTask did not complete correctly and categorized the reasons for errors into three main types: * AutoTask failed to properly ground instructions to the GUI, leading to blind attempts on the interface. A typical example is the command "check my chromebook if any", where AutoTask did not recognize "chromebook" as a connected device and instead kept clicking irrelevant buttons such as "Display" and "System". Three tasks failed due to this reason.* AutoTask was misled by information in the command, continuously trying to accomplish irrelevant tasks. For instance, with the command 'lock screen when app unpinning', AutoTask focused on what it perceived as the keyword "lock" and repeatedly tried to add a personal identification number (PIN) to the phone. 
Nine tasks failed due to this misunderstanding.* AutoTask mistakenly believed the task was completed. This occurred once in our experiments with the command "show system applications". AutoTask successfully navigated to the "Installed Applications" page and thought the task was finished. However, it was expected to click the "More Options" button and then select the "Show System" option. §.§.§ Redundancy The step redundancy rates of AutoTask across both datasets are 8.54% and 8.96%, significantly lower than the corresponding baseline results (46.0%, p<0.001; 32.0%, p<0.001). AutoTask requires backtracking in only 12 tasks (PixelHelp: 7, UGIF: 5; 7.14%), with an average backtrack count of 2.83 (min=1, max=8, sd=5.18) within these tasks. This indicates that AutoTask requires minimal backtracking to accomplish tasks. §.§.§ Performance improvement through knowledge accumulation The quantities of the three task types in UGIF are 88, 5, and 7, respectively. Figure <ref> illustrates how the success rate and the step accuracy increase with the accumulation of knowledge. When AutoTask learns all the knowledge, these metrics can reach 85.7% and 97.6%, respectively. Only one Type-3 task cannot be completed even after accumulating all the knowledge. Furthermore, knowledge accumulation effectively improves the efficiency of task completion. Figures <ref> & <ref> demonstrate how the average step redundancy rate and non-redundant completion rate for Type-2 and Type-3 tasks change as the accumulated knowledge increases. Upon acquiring all available knowledge, the step redundancy rates for these two task types decrease from 46.7% to 16.1% and from 97.1% to 4.76%, respectively, with only 3 tasks (2 Type-2, 1 Type-3) still requiring backtracking. § DISCUSSION §.§ Generalization of Knowledge: Across Versions and Applications While the study evaluated the performance improvement brought about by knowledge accumulation within the same application, this knowledge can be applied across different versions of the same application or even across applications with similar functionalities (e.g., iMessage vs. WhatsApp). These applications follow similar design principles and semantics <cit.>. For instance, Figure <ref> depicts an earlier version of a contact application (version 1.7.31). In contrast to Figure <ref>, its home page does not contain "Fix & manage", and "Import from file" is hidden under "Settings". However, AutoTask can still benefit from the knowledge it has summarized. For example, it will refrain from attempting to click the "Add" button, thereby expediting the execution of the command. The generalization of knowledge across versions and different applications also carries some risks, as the accuracy of such knowledge cannot be guaranteed. For example, if AutoTask first imports contacts from a file in version 1.7.31 of the Contacts application, it may summarize the knowledge that this functionality can be found by clicking the "Settings" button. However, when it attempts the same task in version 4.8.17 of the Contacts application based on this knowledge, it might experience decreased efficiency, even though it can still complete the task correctly through backtracking. This is because the functionality of importing contacts from a file has been relocated in the new version. One solution is to estimate the confidence level of its knowledge. While this goes beyond the scope of this paper, it is a promising direction for future research.
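As a purely hypothetical illustration of this future-work idea (this is not part of AutoTask as implemented, and all names, thresholds, and discount values below are our own assumptions), such a confidence estimate could simply down-weight knowledge recorded in a different application or application version, so that stale entries can still guide exploration but are trusted less:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class KnowledgeEntry:
    text: str            # e.g. a Type-2 lesson expressed in natural language
    app_id: str
    app_version: str
    relevance: float     # embedding similarity to the current command, in [0, 1]

def confidence(entry: KnowledgeEntry, app_id: str, app_version: str,
               cross_version_discount: float = 0.5,
               cross_app_discount: float = 0.2) -> float:
    """Heuristic confidence: full trust only for the same app and version."""
    if entry.app_id != app_id:
        return cross_app_discount
    if entry.app_version != app_version:
        return cross_version_discount
    return 1.0

def retrieve(knowledge: List[KnowledgeEntry], threshold: float,
             app_id: str, app_version: str) -> List[KnowledgeEntry]:
    """Rank knowledge by relevance weighted by confidence; drop low-scoring entries."""
    scored = [(e.relevance * confidence(e, app_id, app_version), e) for e in knowledge]
    scored = [(s, e) for s, e in scored if s >= threshold]
    return [e for s, e in sorted(scored, key=lambda x: x[0], reverse=True)]
```

Under such a scheme, stale cross-version knowledge would influence the scoring of operations less strongly, and errors like the relocated "Import from file" example above would be cheaper to recover from.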
§.§ The Generalization of AutoTask: to Other Devices and Tasks While we implemented AutoTask on Android smartphones and conducted evaluation experiments, AutoTask can generalize to other devices and platforms (e.g., web browsers <cit.>) as long as they provide APIs, with which AutoTask can (1) retrieve the current contents on the GUI and (2) simulate user interactions in the GUI. AutoTask may also generalize to non-graphical interfaces, such as command line interfaces <cit.>, to reduce the learning curve and interaction burdens. Besides, AutoTask can be viewed as a proxy for existing GUIs <cit.>, and its interaction modality is not limited to voice interaction. GUI mapping <cit.> (e.g., mapping a smartphone GUI to a smartwatch GUI <cit.>) is a typical example. Developers only need to be concerned with the visual mapping rules <cit.> instead of the execution logic of the applications. AutoTask can automatically operate the original GUI (e.g., smartphone GUI) based on the user's interaction behaviors on the new GUI (e.g., smartwatch GUI).AutoTask "accomplishes unknown tasks in an unknown environment", and all tasks that align with this pattern can benefit from the "explore-learn" strategy outlined in this paper with little effort required from developers or end users. For example, crafting prompts that yield high-performance results can be challenging for end users <cit.>. In this problem, the environment is the LLM that has not been fully explored and the task is to generate an effective prompt. However, the effectiveness is not well-defined. AutoTask can attempt an initial prompt to determine the capability boundaries of the LLM and then refine it (similar to the backtracking process described in this paper) based on the responses while accumulating knowledge to enhance its prompt-generation capabilities. Similar concepts <cit.> are found in ReAct <cit.> and Reflexion <cit.>. However, these approaches do not explicitly summarize knowledge to enhance their capabilities. AutoTask can also extend beyond the digital world into the physical world and be applied in various fields, such as embodied intelligence <cit.>.§ LIMITATION & FUTURE WORKAutoTask does not interact with users during its execution. However, user commands may be incomplete or ambiguous <cit.>, and AutoTask should request clarification or additional information when necessary. Additionally, it can proactively ask questions to prune its GUI exploration based on the user's answers. For example, when a user needs to enable App pinning (Figure <ref>, Page 3), AutoTask may not be familiar with this feature and attempt various incorrect operations. AutoTask can significantly expedite command execution if it proactively asks the user questions like, "Is this feature related to Security?" to gain relevant insights.AutoTask utilizes an online LLM (gpt-4) service provided by OpenAI via HTTPS requests, and the computational process is slow. Although optimizing the efficiency of LLMs is beyond the scope of this paper, future work could construct smaller and faster models, for example, through techniques such as knowledge distillation <cit.>, to reduce waiting time for end users.Users often have complex intents that cannot be covered by a single GUI task <cit.>. For example, in the command "Send my schedule to Alice", a voice interface is expected to accomplish two tasks (retrieve the schedule and send a message) sequentially. 
In future work, AutoTask can be combined with complex task decomposition systems <cit.> to satisfy users' complex needs. § CONCLUSION In this paper, we present AutoTask, a voice command interface that automates voice commands by simulating GUI interactions. To make it applicable across different applications and GUI tasks, AutoTask requires no configuration or modification from developers or end users. Instead, AutoTask explores the GUI to attempt different operation sequences and accumulates knowledge from these explorations to enhance its capabilities. The evaluation study proves the feasibility of this approach: AutoTask performs significantly better than the baseline when no knowledge is accumulated, and knowledge accumulation further improves its performance. AutoTask addresses the special case in the voice assistant domain of accomplishing unknown tasks in an unknown environment, a problem pattern that applies to many other similar scenarios. We hope that AutoTask can inspire future work to apply general artificial intelligence to everyday tasks with little effort from the developers or the end users.
http://arxiv.org/abs/2312.16062v1
{ "authors": [ "Lihang Pan", "Bowen Wang", "Chun Yu", "Yuxuan Chen", "Xiangyu Zhang", "Yuanchun Shi" ], "categories": [ "cs.HC" ], "primary_category": "cs.HC", "published": "20231226142036", "title": "AutoTask: Executing Arbitrary Voice Commands by Exploring and Learning from Mobile GUI" }
Zuse Institute Berlin, Takustr. 7, 14195 Berlin, Germany
Potsdam Institute for Climate Impact Research, PO Box 60 12 03, 14412 Potsdam, Germany
FutureLab on Game Theory and Networks of Interacting Agents, Complexity Science Department, Potsdam Institute for Climate Impact Research, PO Box 60 12 03, 14412 Potsdam, Germany
Institute for Theoretical Physics, Technische Universität Berlin, Hardenbergstr. 36, 10623 Berlin, Germany
Potsdam Institute for Climate Impact Research, PO Box 60 12 03, 14412 Potsdam, Germany
Bernstein Center for Computational Neuroscience Berlin, Humboldt Universität, 10115 Berlin, Germany

In this letter we present a stochastic dynamic model which can explain economic cycles. We show that the macroscopic description yields a complex dynamical landscape consisting of multiple stable fixed points, each corresponding to a split of the population into a large low and a small high income group. The stochastic fluctuations induce switching between the resulting metastable states, and excitation oscillations just below a deterministic bifurcation. The shocks are caused by the decisions of a few agents who have a disproportionate influence over the macroscopic state of the economy due to the unequal distribution of wealth among the population. The fluctuations have a long-term effect on the growth of economic output and lead to business cycle oscillations exhibiting coherence resonance, where the correlation time is controlled by the population size, which is inversely proportional to the noise intensity.

Capital Inequality Induced Business Cycles
Eckehard Schöll
January 14, 2024

The complex networks approach is a transdisciplinary paradigm to capture the nonlinear dynamics of a multitude of natural, technological, or social systems. In order to predict and help to understand the effects of economic crises or shocks, and to guide policymakers to handle such situations <cit.>, the economy should be modeled as a complex socioeconomic system with a plethora of network interactions between agents (households) and market institutions, taking into account that wealth and power are heterogeneously and unequally distributed among the population. The classical approach in economics is to assume complete rationality and to consider only a single representative agent, who then solves a long-term optimization problem in order to maximize the long-term benefits of increased consumption. The typical use of convex functions results in the existence of a unique fixed point that is then disturbed by external shocks <cit.>. These models in general lack the dynamical complexity needed to describe the economic reality observed, and completely disregard the highly non-uniform distribution of wealth in typical modern economies <cit.>. The long-lasting effects of the 2008 financial crisis have led to the paradigm shift in economics that business cycles and random fluctuations are interdependent in economic growth theory <cit.>. This work addresses the question of how stochastic interactions between individuals can give rise to fluctuations of macroeconomic quantities, and the associated long-term effects on economic growth. We start from a modified version of the agent-based model for business cycles and economic inequality presented in <cit.>, but use the Langevin equation approach <cit.> and a moment closure for a description of the underlying agent-based model for large but finite population size.
This approach allows us to first study the deterministic system for an infinite population and then use the Langevin approach to understand the effect of finite-size fluctuations. Such macroscopic models are advantageous with respect to comparing them to data, since they only deal with average and aggregate quantities, which in reality are much easier to obtain than the refined data necessary to specify the initial conditions of an agent-based model.

Agent-Based “Micro” Model. We study a stochastic model of N ≫ 1 households i (“agents”) in a fully connected network, characterized by two dynamic variables (K_i, S_i). Household capital K_i ≥ 0 is accumulated by saving a fraction of household income given by its current saving rate S_i. Although in principle S_i could be any real number in [0,1], we assume S_i is one of M>1 discrete saving rate levels s_1<…<s_M. Agent i independently and stochastically updates S_i at random times given by a Poisson process with common jump rate 1/τ. At each update, i either explores or imitates. With probability ϵ, i switches to any of the saving rate levels uniformly at random (“exploration”). With probability 1-ϵ, i will instead copy the saving rate S_j of any agent j, drawn with a probability that depends on j's current consumption C_j (“imitation”). We assume that the probability to choose agent j for imitation is governed by a Boltzmann distribution with inverse temperature β,

P(S_i → S_j) = (1/Z) exp(β C_j),   Z = ∑_j=1^N exp(β C_j),

resulting in a voter model with coevolving transition probabilities <cit.>. In the low-temperature limit β→∞, this “softmax policy” converges to the imitate-the-best (“argmax”) policy used in <cit.>, where agents deterministically adopt the saving rate of the agent with the highest consumption in their neighborhood. Our generalization to a stochastic softmax policy can be interpreted as representing rational decision-making under uncertain measurements of others' consumption, similar to <cit.>, and it is a common assumption in behavioral economics and machine learning. Household consumption C_i = (1 - S_i) I_i is that part of income I_i which is not saved. Income depends on gross economic production Y, determined by a Cobb–Douglas production function <cit.>

Y = A K^ϵ_K L^ϵ_L,

where K = ∑_i=1^N K_i is aggregate capital, L is aggregate labor, A is a constant, and ϵ_L and ϵ_K are elasticities, here ϵ_K = ϵ_L = 1/2. Household i supplies its capital and fixed labor l_i = L/N to the economy for production and is compensated at wage w = ∂Y/∂L and capital return r = ∂Y/∂K, resulting in an income <cit.> of

I_i = r K_i + w L/N = A√(L)(K_i/√(K) + √(K)/N)/2.

Investing the saved fraction S_i of I_i into capital growth results in a coupled, nonlinear evolution of capital stocks,

K̇_i = S_i I_i - κ K_i = (r S_i - κ) K_i + w S_i L/N,

where κ>0 is the common capital depreciation rate.

Macro-Model. To study the agent-based model's oscillatory behavior in the large system limit N→∞, we focus on a few aggregate quantities: the vector of occupation numbers n = (n_1,...,n_M) of all saving rate levels, and the capital distribution in each of these levels. This admits an approximation via a chemical Langevin equation <cit.> combined with a moment closure approach for the capital distributions in each saving rate level. The Langevin equation incorporates fluctuations in the transition rates due to the finite size of the system.
This contrasts the usual ways fluctuations are introduced into macro-economic growth models based on demand shocks, credit defaults, or technological progress <cit.>. The time evolution of the occupation numbers follows an Itô stochastic differential equation (SDE),

dn = ∑_k,l=1^M α_kl ν_kl dt + ∑_k,l=1^M √(α_kl) ν_kl dB_kl,

where ν_kl = e_k - e_l indicates a transition between levels s_k → s_l, e_k is the k-th unit vector in ℝ^M, and dB_kl are the increments of uncorrelated white noise. Due to imitation and exploration, the transition rate for k → l is

α_kl = (1-ϵ) (n_k/(τ Z)) n_l ⟨exp(β C_i)⟩_l + ϵ n_k/(τ M),

where ⟨X_i⟩_l = (n_l)^-1 ∑_{i: S_i = s_l} X_i denotes the population average of agents in saving rate level l. For the moment closure, we consider the p-th moment (p ≥ 1) of the capital distribution among those households whose saving rates are in level l: m_l^p = ⟨K_i^p⟩_l. We cannot directly compute the evolution of the capital moments using Eq. (<ref>), since when a household switches to a different saving rate, it takes its capital stock with it. This leads to correction terms <cit.> that directly couple Eq. (<ref>) with the evolution of the capital moments,

dm_l^p = ( p(r s_l - κ) m_l^p + p w s_l (L/N) m_l^{p-1} + ∑_k=1^M [(m_k^p - m_l^p)/n_l] α_kl ) dt + ∑_k=1^M [(m_k^p - m_l^p)/n_l] √(α_kl) dB_kl.

Since τ ≫ 1, this results in a slow-fast system, where the occupation numbers are the slow variables. We apply a Taylor approximation of the exponential in Eq. (<ref>) and, in order to better capture the maximum consumption in each level, we expand about the mean consumption ⟨C_i(t)⟩_l,

⟨exp(β C_i)⟩_l = exp(β ⟨C_i⟩_l) ∑_p=0^∞ (β^p/p!) ⟨(C_i - ⟨C_i⟩_l)^p⟩_l.

This reduces the systematic error from underestimating the maximal consumption when using finitely many terms, but introduces a further nonlinearity. The moments of the consumption distributions are easily computed from the moments of the capital distribution using C_i = (1-S_i) I_i and Eq. (<ref>), see <cit.>. We choose to include the third moment of the consumption and capital distributions, since the micro-model displays significant amounts of skewness <cit.>. We hence truncate the moment closure at p=3. This truncation affects the dynamics only by decreasing the accuracy of the maximum consumption estimate, while the evolution equations for the capital moments are unaffected.

Results. In our simulations, we restrict ourselves to equidistant saving rate levels s_l within the interval [0.05,0.95]. Let us first explore the basic phase space structure by ignoring the noise terms. For a sufficiently large number of levels M and sufficiently high inverse temperature β, this deterministic approximation displays a complex dynamical landscape with several fixed points, each corresponding to a different distribution of saving rates (Fig. <ref>): we observe a split of the population into two groups as in <cit.>. Most agents sit in a low saving rate level with a very low capital stock, but a small group of agents has very high saving rates and owns most of the capital in the economy; this state is prominent for all parameters considered. This gives the few “high savers” crucial influence on the overall dynamics. With the addition of noise, we switch to a meso-scale for a large but finite number of households. Here the multi-stability on the macro-scale results in excitation oscillations and switching between the now metastable states. In the following we will focus on the case M=5. Above the bifurcation in Fig. <ref>(a) the fluctuations lead to switching between the metastable states.
Just below the bifurcation, i.e., below the critical inverse temperature β, the system takes an excursion through the phase space before returning to the original stable state. This is evidenced by the net transition rates shown in Fig. <ref>. Therefore, the presence of intrinsic fluctuations induces macroeconomic shocks. This stands in contrast to the deterministic model, which does not produce these shocks. The split of the population into two groups of agents with high and low capital stock, respectively, leads to a disparity of influence that drives the transitions between the two metastable states of the system. Each switching transition is preceded by an abrupt spike in mean consumption in the saving rate level to which the agents then switch. This spike exponentially increases the transition rate into that level according to Eq. (<ref>). This is shown in Fig. <ref> for β above the deterministic bifurcation but otherwise the same parameters as in Fig. <ref>, and for two saving levels s_2 (a,c,e) and s_5 (b,d,f). The preceding spike for a switch to a higher mean saving rate is depicted in (a) for level s_2, which most agents will finally adopt, and in (b) for the highest saving rate level s_5. The blow-ups (c), (d) show that the spike in s_2 happens on a much faster timescale than the spike in s_5 and cannot be attributed to changes in the economic variables, i.e., the market dynamics, since the capital return r (and thus also the wages) is almost constant during the spike. Panel (e) visualizes that during the spike significant amounts of capital are transferred by a small number of agents (0.1% of the population <cit.>) switching from s_5 to s_2, while all other capital flows are significantly smaller or reduce the average capital in s_2. Panel (f) shows that this capital flow leads to the increase in average consumption, which then draws all the agents into this level. This shows the disparity of influence, which is generated by the average capital difference between high and low savers. The decisions of a tiny fraction of influential agents can lead to tipping of the entire macroeconomy. The small number of "high savers" moving to the lower saving rate is due to the finite-size fluctuations of the transition rates, which are then amplified by the economic inequality and the fact that the level s_2 has a very low occupation. This combination creates a timescale separation, allowing for the sudden increase in mean capital. The consumption spike in the high saving rate level s_5 in Fig. <ref> happens on a much slower timescale, which is associated with the economic variables, mainly the depreciation rate κ. The change in occupation numbers then follows on the slow timescale. Our model exhibits hysteresis of business cycles, referring to long-term effects of fluctuations on economic growth, which corroborates a recent paradigm shift in economic growth theory <cit.>. We find that fluctuations can lead to a long-term change in production if the system is in the multistable regime and population growth is introduced <cit.>. For details see <cit.>. Coherence resonance is a common phenomenon in noisy excitable nonlinear systems. It describes the non-monotonic dependence of the coherence of noise-induced oscillations upon noise intensity, i.e., there exists an optimum noise intensity that maximizes coherence <cit.>. A common measure of coherence is correlation time <cit.>.
With this measure <cit.>, coherence resonance is visible as a maximum of the correlation time at non-zero noise intensity. Since population growth is a major driver of economic growth, it is natural to ask how the oscillatory behavior changes as the system size increases. After rescaling Eqns. (<ref>), (<ref>) to densities c = n/N, the population size N directly corresponds to a noise intensity parameter Γ = 1/√(N) if L/N = const. In Fig. <ref> the correlation time of the economic production is plotted vs. noise intensity for several inverse temperatures β. For β=5, i.e., below the bifurcation in Fig. <ref>(a), the correlation time exhibits a very flat region of increased coherence. Above the bifurcation (β=8) the correlation time increases drastically. For β=50 we have a clear peak of optimum coherence at Γ ≈ 1.7×10^-3, and a very broad second maximum upon further increase of the noise intensity. Note that no deterministic bifurcation is involved in the dramatic change of the behavior of the correlation time when going to larger inverse temperatures. However, the time spent near each metastable state changes drastically (see Fig. 2(b),(c) in <cit.>). This illustrates that the precision with which the agents imitate the behavior of the agents with the highest consumption can have a strong effect on the coherence of the business cycle oscillations. The presence of coherence resonance indicates that fluctuations in the system can dramatically affect the business cycles and make them more coherent at a certain noise intensity defined by the size of the population. In more realistic models stochastic fluctuations can arise from other sources as well, and although care should be taken since we are dealing with multiplicative noise, it seems plausible that other sources of noise might lead to oscillatory behavior with similar characteristics, because the underlying phase space structure strongly influences the response to fluctuations. In conclusion, we have developed a macroscopic model which captures the characteristic dynamical features of the agent-based model proposed in <cit.>, and beyond that includes specific effects of stochastic fluctuations like coherence resonance. The decision-making process of the households creates a high degree of multistability in the infinite population limit, particularly when many saving rate levels are allowed. The multistable states result in a split of the population into a small group with high saving rate and high capital, and a large group of low savers with low capital stock, where the smaller group can exert great influence on the entire economy. The effect of finite-size fluctuations arising in the case of a large but finite population size leads to the possibility of excitation oscillations and stochastic switching between metastable states, which correspond to a synchronized change of saving strategy of a majority of agents in the population. Compared to <cit.>, the increased population size results in business cycles that are much more abrupt and more akin to rare isolated events than sustained oscillations. In going beyond the agent-based model <cit.> we are able to deal with a substantially larger population and show that the capital inequality leads to timescale separation, which can cause rapid changes in macroscopic variables.
We also find that only about 0.1% of the population <cit.> are responsible for triggering a recession period. With the introduction of economic growth through a growing population, we show that the fluctuations can lead to long-lasting recessions in economic production, which is commonly discussed as hysteresis in the economics community. Hysteresis of business cycles has typically been linked to fluctuations in financing, debt and monetary policy, and only in a few cases to heterogeneous agents. Our model is considerably simpler, and explains the metastability as well as the fluctuations solely from the decision-making process of heterogeneous households, in contrast to external sources of noise. To the best of our knowledge, such long-term effects on growth resulting solely from the collective saving behavior of households have not been noted before. In our model, we find coherence resonance and a qualitative change in the correlation time when the system switches from excitation oscillations to the stochastic switching regime for larger β, which may also elucidate the effects of other sources of stochastic fluctuations.

This work was supported by DFG (German Research Foundation) - Projects No. 429685422 and 440145547 and under Germany’s Excellence Strategy through grant EXC-2046 The Berlin Mathematics Research Center MATH+ (project no. 390685689).

[1] J. D. Farmer, M. Gallegati, C. Hommes, A. Kirman, P. Ormerod, S. Cincotti, A. Sanchez, and D. Helbing, A complex systems approach to constructing better models for managing financial markets and the economy, The European Physical Journal Special Topics 214, 295 (2012).
[2] L. Ball, Long-term damage from the great recession in OECD countries, European Journal of Economics and Economic Policies 11, 149 (2014).
[3] J. D. Farmer and D. Foley, The economy needs agent-based modelling, Nature 460, 685 (2009).
[4] D. Acemoğlu, S. Johnson, J. Robinson, P. Querubin, D. Ticchi, and A. Vindigni, Modern economic growth (2009).
[5] Y. M. Asano, J. J. Kolb, J. Heitzig, and J. D. Farmer, Emergent inequality and business cycles in a simple behavioral macroeconomic model, Proceedings of the National Academy of Sciences 118, e2025721118 (2021), doi:10.1073/pnas.2025721118.
[6] L. Chancel, T. Piketty, E. Saez, and G. Zucman, World inequality report 2022 (Harvard University Press, 2022).
[7] V. Cerra, A. Fatás, and S. C. Saxena, Hysteresis and business cycles, Journal of Economic Literature 61, 181 (2023).
[8] G. Dosi, M. C. Pereira, A. Roventini, and M. E. Virgillito, Causes and consequences of hysteresis: aggregate demand, productivity, and employment, Industrial and Corporate Change 27, 1015 (2018).
[9] D. T. Gillespie, The chemical Langevin equation, The Journal of Chemical Physics 113, 297 (2000).
[10] J.-H. Niemann, S. Winkelmann, S. Wolf, and C. Schütte, Agent-based modeling: Population limits and large timescales, Chaos: An Interdisciplinary Journal of Nonlinear Science 31 (2021), doi:10.1063/5.0031373.
[11] L. E. Blume, The statistical mechanics of strategic interaction, Games and Economic Behavior 5, 387 (1993).
[12] G. Szabó and C. Tőke, Evolutionary prisoner's dilemma game on a square lattice, Phys. Rev. E 58, 69 (1998).
[13] A. Traulsen, D. Semmann, R. D. Sommerfeld, H.-J. Krambeck, and M. Milinski, Human strategy updating in evolutionary games, Proceedings of the National Academy of Sciences 107, 2962 (2010).
[14] R. D. McKelvey and T. R. Palfrey, Quantal response equilibria for normal form games, Games and Economic Behavior 10, 6 (1995).
[15] C. W. Cobb and P. H. Douglas, A theory of production, The American Economic Review 18, 139 (1928).
[16] Note that the wages and returns are the same for every agent, since they are determined by the production Y. The compensation is then the product with the individual labour and capital inputs.
[17] M. Wiedermann, J. F. Donges, J. Heitzig, W. Lucht, and J. Kurths, Macroscopic description of complex adaptive networks coevolving with dynamic node states, Phys. Rev. E 91, 052801 (2015).
[18] See supplemental material at [url...] for a derivation of the macroscopic system, the time-scale separation, the hysteresis of business cycles, the case with more available levels, the mean consumption, and the correlation time.
[19] C. Kuehn, Moment closure—a brief review, in Control of Self-Organizing Nonlinear Systems, edited by E. Schöll, S. H. L. Klapp, and P. Hövel (Springer International Publishing, Cham, 2016), pp. 253–271.
[20] A. S. Pikovsky and J. Kurths, Coherence resonance in a noise-driven excitable system, Phys. Rev. Lett. 78, 775 (1997).
[21] N. B. Janson, A. G. Balanov, and E. Schöll, Delayed feedback as a means of control of noise-induced motion, Phys. Rev. Lett. 93, 010601 (2004).
[22] A. Zakharova, A. Feoktistov, T. Vadivasova, and E. Schöll, Coherence resonance and stochastic synchronization in a nonlinear circuit near a subcritical Hopf bifurcation, Eur. Phys. J. ST 222, 2481 (2013).
[23] P. M. Geffert, A. Zakharova, A. Vüllings, W. Just, and E. Schöll, Modulating coherence resonance in non-excitable systems by time-delayed feedback, Eur. Phys. J. B 87, 291 (2014).

Supplemental Materials: Capital Inequality Induced Business Cycles

§ DERIVATION OF THE MACROSCOPIC SYSTEM
The chemical Langevin equation is a well-known approximation for agent-based micro models with fully connected networks <cit.> and with discrete agent states S_i of agent i, which are the available saving rate levels s_1, ..., s_M in our case. Having defined a set of allowed saving rate levels, we can average the transition probabilities P(S_i→ S_j) = 1/Z exp(β C_j) with Z=∑_j'=1^N exp(β C_j'), over the resulting subpopulations of agents with identical saving rate, to find the transition rates α̂_kl between the saving levels k and l, α̂_kl =1/τ∑_i:S_i=s_k∑_j: S_j=s_l P(S_i→ S_j)=1/τ∑_i:S_i=s_k n_l ⟨ P(S_i→ S_j) ⟩_l = n_k n_l/τ Z⟨exp (β C_j)⟩_l, for the imitation behavior. Here Z = ∑_l' = 1^M n_l'⟨exp (β C_j)⟩_l' is the normalizing factor, ensuring ∑_k',l' = 1^M α_k'l' = N/τ, which is the rate of the combined Poisson processes from each agent.
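As a side remark, the averaged imitation rates are straightforward to evaluate numerically. The following minimal sketch (in Python) only illustrates this bookkeeping before the exploration term is added below; the number of levels, occupation numbers, per-level consumption values, τ and β used here are hypothetical illustration values, not parameters of the model.

import numpy as np

# Minimal numerical sketch of the averaged imitation rates derived above.
# All concrete numbers (M, tau, beta, n_occ, C_level) are hypothetical.
M = 5                                          # number of saving-rate levels
tau = 10.0                                     # mean waiting time between updates
beta = 8.0                                     # inverse temperature of the choice rule
n_occ = np.array([700., 100., 80., 60., 60.])  # occupation numbers n_l (sum = N)
C_level = np.array([0.9, 1.1, 1.2, 1.3, 1.5])  # representative consumption per level
mean_exp_C = np.exp(beta * C_level)            # stands in for <exp(beta*C_i)>_l
Z = np.dot(n_occ, mean_exp_C)                  # Z = sum_l' n_l' <exp(beta*C)>_l'
# alpha_hat[k, l] = n_k * n_l * <exp(beta*C)>_l / (tau * Z), imitation only
alpha_hat = np.outer(n_occ, n_occ * mean_exp_C) / (tau * Z)
# the combined Poisson processes fire at total rate N / tau
assert np.isclose(alpha_hat.sum(), n_occ.sum() / tau)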
Adding the simple transition rates for the exploration behavior, we get the complete transition rates α_kl=(1-ϵ)n_k/τ Zn_l⟨exp(β C_i)⟩_l + ϵn_k/τ M.These are needed for the stochastic differential equation (SDE) of the occupation numbersdn = ∑_k,l=1^Mα_klν_kldt + ∑_k,l=1^M√(α_kl)ν_kldB_kl.To account for the fact that agents switching between saving levels take their capital with them, we derive an SDE for the moments { m_l^p}_p=1^∞ of the capital distributions of agents in each level l, which can be performed in a similar way as in <cit.>. Indeed, this can be done for any quantity X_i(t) that is associated with the agents i and satisfies a differential equation Ẋ_i = F_l(X_1, ... X_N) if agent i is in level l and jumps.Let J_l(t)⊆{1,...,N} denote the index set of agents with savings rate s_l at time t andf_l(t) = ⟨ F_l(X_i)⟩_l = ∑_i∈ J_l(t)F_l(X_i) the averaged evolution equation. Then we can write the time evolution of the averaged quantity x_l(t) := 1/n_l(t)∑_i ∈ J_l(t) X_i(t) asd x_l(t)≈ x_l(t+t') - x_l(t) = ∑_i ∈ J_l(t+t') X_i(t+t') - n_l(t+t')x_l(t)/n_l(t+t').Now we have the sum over the values X_i(t +t') which are associated with the agents in level l at time t+t'. To account for the agents changing saving rate in the time interval, we can split this into the three useful sets: J_l(t) contains those agents that already were in level l at time t, J_l(t+t')∖ J_l(t) contains those agents that arrived in that time interval, and J_l(t)∖ J_l(t+t') contains those agents that left in that time interval. Then∑_i ∈ J_l(t+t') X_i(t+t')= ∑_i ∈ J_l(t)[ X_i(t) + dt Ẋ_̇i̇(t)]+ ∑_i∈ J_l(t+t')∖ J_l(t)[ X_i(t) + dt Ẋ_̇i̇(t)]- ∑_i∈ J_l(t)∖ J_l(t+t')[ X_i(t) + dt Ẋ_̇i̇(t)]= n_l(t) [ x_l + dtf_l]+∑_k≠ l( dt α_kl+√(α_kl) dB_kl)[ x_k + dtf_k]- ∑_k≠ l( dt α_lk+√(α_lk) dB_lk)[ x_l + dtf_l],where we have used the transition rates, α_kl, to approximate the number of agents switching in a smalltime interval [t, t+t']. This is valid, since the underlying stochastic process assumes that agents are uniformly picked at random to update their saving rate. This means that for any small enough time interval the agents switching between levels have the same distribution as the levels themselves. Now we omit terms of order dt^2 and dt dB_kl, which is valid since we are interested in the limit t'→ 0. We getx_l(t+t') - x_l(t)= 1/n_l(t+t')(-x_l(t)[n_l(t+t')-n_l(t)] + n_l(t)f_l(t)dt . +∑_k≠ l( dt α_kl+√(α_kl) dB_kl)x_k- ∑_k≠ ldt α_lk+√(α_lk) dB_lkx_l))and expanding n_l(t+t')-n_l(t) in the numerator by Eq. (<ref>) and taking t'→ 0 we obtaind x_l = f_l(t) dt + ∑_k=1^M x_k(t)-x_l(t)/n_l(t)(α_kldt + √(α_kl)dB_kl)Notably, agents leaving a level do not have an impact on that level's distribution. These additional terms couple the stochasticity of the agent-based model to the market dynamics, which gives rise to the excitation oscillations.Using the population averages for the capital stock, we can find the evolution equations for the capital moments m_l^p = ⟨ K_l^p⟩. From Eq. (<ref>) in the main text, we get:f_l^p= ⟨d/dtK_i^p ⟩_l = ⟨ p K_i^(p-1)K̇_̇i̇⟩= p (rs_l -κ)⟨ K_i^p⟩ + pws_l L/N ⟨ K_i^(p-1)⟩Combining this result with the additional terms due to switching and the Taylor approximation Eq. (<ref>) from the main text gives the final closed system of SDEs, where the moments of the consumption distribution, which are needed for the Taylor approximation, are easily computed from using C_i = (1-S_i)I_i and Eq. 
(<ref>) in the main text⟨ C_i^p⟩_l= (1-s_l)^p∑_ρ=0^p pρ r^ρ m_l^ρ (wL/N)^p-ρThe non-linearity in the wage w and capital return r only depend on aggregate capital K, K = ∑_i =1^N K_i = ∑_l=1^L∑_{ i| S_i=s_l} K_i = ∑_l =1^L n_l ⟨ K_i ⟩_l,which only depends on macroscopic variables, so there is no need to omit terms when considering only a finite number of moments. § TIME SCALE SEPARATIONWe already discussed that the large update time τ≫ 1 leads to a slow-fast system. However, the coupling between decision process and market dynamics creates multiple time scales. The fastest time scale arises from the combination of capital inequality and agents changing their saving rate, which is suppressed near the fixed points, because most agents do not actually change their saving rates when imitating.The rate at which agents change their saving rate is not N/τ which is the rate at which decisions are made (since we have N Poisson processes with rate 1/τ) but this rate is reduced by the momentary rate at which they choose to imitate another agent with the same saving rate N/τ - ∑_l=1^Mα_ll which is illustrated in Fig.<ref>.While the system is near a metastable state, most agents choose to imitate an agent with the same saving rate, but when an excitation occurs suddenly most households choose to change their saving rate instead of remaining in their current state (see Fig. <ref> in the main text). So the excitations can be seen as an expression of uncertainty in the population.The next fastest timescale is the normal economic dynamics given by Eq. (<ref>), followed by the dynamics of the occupation numbers, as illustrated in Fig.<ref> of the main text. The slowest timescale is the switching between metastable states and the excitation oscillations. In the main text we briefly mentioned that a lot of time can pass between economics shocks. In Fig. <ref>(b, c) we show the histograms for the resting times near each metastable state, for the same system considered in Fig.<ref> in the main text. For β=50 we see resting times above 250τ even for a relatively small number 287 of observations. Also, the resting time distributions depend on the metastable state from which the process escapes, which illustrates the importance of including multiplicative noise that controls the noise intensity near the fixed points.Although there are no bifurcations between β=15 and β=50, we observe a drastic change of the resting time distribution. As β increases, the expected resting time for both states increases and for higher β the system spends more time in the state with higher average saving rate, which is generally desirable, since it generates a higher level of economic output and also implies less economic inequality.§ HYSTERESIS OF BUSINESS CYCLES Hysteresis of business cycles in economics refers to the long term effects of economic shocks on economic growth <cit.>. We consider an exponentially growing population while keeping L/N = l_0= const., which is a classical approach to introduce economic growth<cit.>. In the multistable regime (large β) we immediately obtain different degrees of growth for the different states, since they correspond to different values of aggregate capital and thus production.In Fig. <ref>(a) we see that for a finite population, the fluctuations in the system are strong enough to excite the system to switch between the different states, and that there are realizations of the stochastic process where switching is a rare economic event. 
Since we assume that the agents' decision when to switch their saving rate is uncorrelated, the noise intensity is proportional to √(1/N) (Eqs. (<ref>), (<ref>) in the main text). Hence, as the population grows, the fluctuations decrease, and transitions between the multistable states become less likely. So essentially the economy will settle into one of these states and the future growth rate will be fixed (Fig. <ref>). § THE CASE WITH MORE AVAILABLE LEVELSAs shown in the main text, we find more fixed points, when considering the more realistic case with more than 5 available saving rate levels. In this case, we find that the new fixed points correspond to a similar situation as in the case M=5 mainly studied in the paper. Each of the fixed points correspond to a situation where the majority of agents sit in a level with s_l<0.5 (Fig.<ref>). Now that we have a higher resolution of the saving rates, we see that the second group of high savers is distributed along all the saving rate levels s_l>0.5. Notably a higher mean saving rate ⟨ S_i ⟩ corresponds to a smaller group of high savers and thus lower capital inequality. With the addition of noise we again observe switching between the (now)metastable states, however in a much more complicated setting. Instead of switching between two states, we have 11 states and the switching dynamics become much more complex. It is not clear which state the system will occupy after leaving a given state. It would be interesting to study the oscillatory behavior of this more complicated model and what the effects on economic growth would be.§ SPIKES OF MEAN CONSUMPTIONIn order to understand how the sudden consumptionspikes arise, we need to discuss the different mechanisms that can induce an increase of mean consumption. The first possibility is an increase in mean capital due to the market dynamics, which is clearly not the case here, since the returns and thus also the wages are almost constant during the spike in Fig. <ref>(c) in the main text. The only other way the mean consumption can increase is through the influx of capital that other agents carry to a given level.For a given time series we can directly calculate the change of capital Δ_k ⟨ K_i⟩_l due to these mechanismsΔ_km_l^1 = ∫_t_0^t_1 dt' m^1_k - m^1_l/n_lα_klfor k≠ l Δ_lm_l^1 = ∫_t_0^t_1 dt'(rs_l-κ)m_l^1 fork = lWhere Δ_l m_l^1 denotes the changes due to the market dynamics. Note that the integrants sum up to give the right-hand side of Eqn. (<ref>) in the main text, for p=1. Integrating the individual contributions to the change in mean capital over the highlighted time period of the spike in Fig. <ref>(c) in the main text shows that during the spike the greatest contribution comes from agents that switch from s_5 to s_2, and all the other contributions are either negligible or negative in the case of agents coming from s_1, where the agents carry less capital on average (Fig. <ref>(e) in the main text).In the same way we can split the terms in d⟨ C_i⟩_l/dt and verify that the transfer of capital from the highest saving rate level is actually responsible for the increase in mean consumption during the spike (Fig. <ref> e, f in the main text). 
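For completeness, the two contributions above can be evaluated from a stored realization by numerical quadrature. The following is a minimal sketch; the array layout (times, m1, n_occ, alpha, r) and the use of the trapezoidal rule are assumptions made for illustration and are not prescribed by the model.

import numpy as np

# times[j] is the sampled time grid, m1[j, l] the first capital moments,
# n_occ[j, l] the occupation numbers, alpha[j, k, l] the transition rates
# and r[j] the capital return at time times[j]; kappa is the depreciation rate.
def switching_contribution(times, m1, n_occ, alpha, l, k):
    # Delta_k m_l^1 = int dt' (m_k^1 - m_l^1) / n_l * alpha_kl,  for k != l
    integrand = (m1[:, k] - m1[:, l]) / n_occ[:, l] * alpha[:, k, l]
    return np.trapz(integrand, times)

def market_contribution(times, m1, r, s_l, kappa, l):
    # Delta_l m_l^1 = int dt' (r s_l - kappa) m_l^1
    integrand = (r * s_l - kappa) * m1[:, l]
    return np.trapz(integrand, times)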
Similarly, we can calculate the ratio of agents that are involved in the capital transfer from s_5 to s_2 during the spike in Fig. <ref> (in the main text), 1/N∫_t_0^t_1 dt' α_52 = 0.14 %.

§ CORRELATION TIME
For a Markovian stochastic process the auto-correlation C(τ)= 𝔼[X̃_t X̃_t+τ], where X̃_t = X_t - 𝔼[X_t], measures the correlation of a process with a shifted version of itself. So we can immediately see that this is useful for detecting possible periodicities in the process. If the auto-correlation has high local maxima at a set of shifts τ = nT for some T∈ℝ, n∈ℕ, this indicates that realizations of the process are likely to be T-periodic. The simplest example is a sine wave with added noise: even if the noise is quite strong, and the oscillations might be hard to identify by just looking at the time series, the auto-correlation will often show the periodicity. However, when the oscillations do not follow a simple periodicity, spotting them becomes much harder, as the auto-correlation of a Markov process also decays exponentially and eventually X_t and X_t+τ become uncorrelated. An improved measure for coherence is the correlation time τ_corr of a signal, although here, too, there are two different definitions in the literature <cit.>. We choose τ_corr = 1/C(0)∫_0^∞| C(τ)| dτ, where we have used the physical definition of the autocorrelation function <cit.>. To see how the correlation time measures the coherence of a process, we can make use of the relation between the spectral power density S(ω) = 𝔼[ |ℱ[X_t](ω)|^2] and the auto-correlation function, given by the Wiener–Khinchin theorem, S(ω) = ℱ[C(τ)](ω). Note that the Fourier transforms of C and X_t need not exist in general, and the theorem can be stated more broadly, but for most physical applications we can assume their existence.

[1] J.-H. Niemann, S. Winkelmann, S. Wolf, and C. Schütte, Agent-based modeling: Population limits and large timescales, Chaos: An Interdisciplinary Journal of Nonlinear Science 31 (2021), doi:10.1063/5.0031373.
[2] M. Wiedermann, J. F. Donges, J. Heitzig, W. Lucht, and J. Kurths, Macroscopic description of complex adaptive networks coevolving with dynamic node states, Phys. Rev. E 91, 052801 (2015).
[3] V. Cerra, A. Fatás, and S. C. Saxena, Hysteresis and business cycles, Journal of Economic Literature 61, 181 (2023).
[4] D. Acemoğlu, S. Johnson, J. Robinson, P. Querubin, D. Ticchi, and A. Vindigni, Modern economic growth (2009).
[5] P. M. Geffert, A. Zakharova, A. Vüllings, W. Just, and E. Schöll, Modulating coherence resonance in non-excitable systems by time-delayed feedback, The European Physical Journal B 87, 291 (2014).
[6] A. S. Pikovsky and J. Kurths, Coherence resonance in a noise-driven excitable system, Phys. Rev. Lett. 78, 775 (1997).
http://arxiv.org/abs/2312.16708v1
{ "authors": [ "Sören Nagel", "Jobst Heitzig", "Eckehard Schöll" ], "categories": [ "physics.soc-ph", "math.DS" ], "primary_category": "physics.soc-ph", "published": "20231227201003", "title": "Capital Inequality Induced Business Cycles" }
Improved decoding of expander codes: fundamental trade-off between expansion ratio and minimum distance of inner code Kuan Cheng [Center on Frontiers of Computing Studies, Peking University, Beijing 100871, China. Email: [email protected]] Minghui Ouyang[School of Mathematical Sciences, Peking University, Beijing 100871, China. Email: [email protected]] Chong Shangguan[Research Center for Mathematics and Interdisciplinary Sciences, Shandong University, Qingdao 266237, China, and Frontiers Science Center for Nonlinear Expectations, Ministry of Education, Qingdao 266237, China. Email: [email protected]] Yuanting Shen[Research Center for Mathematics and Interdisciplinary Sciences, Shandong University, Qingdao 266237, China. Email: [email protected]] ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= Tanner codes are graph-based linear codes whose parity-check matrices can be characterized by a bipartite graph G together with an inner code C_0. Expander codes are Tanner codes whose defining bipartite graph G has good expansion property. The landmark work of Sipser and Spielman showed that every bipartite expander G with expansion ratio δ>3/4 together with a parity-check code defines an expander code which corrects Ω(n) errors in O(n) time, where n is the code length. Viderman showed that δ>2/3-Ω(1) is already sufficient. Our paper is motivated by the following natural and fundamental problemin decoding expander codes:Question: What are the sufficient and necessary conditions that δ and d_0 should satisfy so that every bipartite expander G with expansion ratio δ and every inner code C_0 with minimum distance d_0 together define an expander code which corrects Ω(n) errors in O(n) time?We give a near-optimal solution to the question above, showing that δ d_0>3 is sufficient and δ d_0>1 is necessary. Our result significantly improves the previously known result of Dowling and Gao, who showed that d_0=Ω(cδ^-2) is sufficient, where c is the left-degree of G. We suspect that δ d_0>1 is also sufficient to solve the question above. § INTRODUCTIONGraph-based codes are an important class of error-correcting codes that have received significant attention from both academia and industry. They have a long history in coding theory, dating back to Gallager's <cit.> celebrated low-density parity-check codes (LDPC codes for short). LDPC codes are a class of linear codes whose parity-check matrices can be characterized by low-degree (sparse) bipartite graphs, called factor graphs. Gallager analysed the rate and distance of LDPC codes, showing that with high probability, randomly chosen factor graphs give rise to error-correcting codes attaining the Gilbert-Varshamov bound. He also presented an iterative algorithm for decoding these codes from errors caused by a binary symmetric channel. 
Since the 1990s, LDPC codes have received increased attention due to their practical and theoretical performance (see <cit.> for a few examples and <cit.> for a survey).As a generalization of LDPC codes, Tanner <cit.> introduced the so-called Tanner codes, as formally defined below. Let c,d,n be positive integers and L:=[n], where [n]={1,…,n}. Given a (c,d)-regular bipartite graph G with bipartition V(G)=L∪ R and a [d,k_0,d_0]-linear code C_0[The reader is referred to <ref> for basic definitions on graphs and codes.], the Tanner code T(G,C_0)⊆F_2^n is the collection of all binary vectors x∈F_2^n with the following property: for every vertex u∈ R, x_N(u) is a codeword of the inner code C_0, where N(u)⊆ L is the set of neighbors of u and x_N(u)=(x_v:v∈ N(u))∈F_2^d denotes the length-d subvector of x with coordinates restricted to N(u); in other words,T(G,C_0):={x∈F_2^n:x_N(u)∈ C_0for every u∈ R}. Expander codes are Tanner codes whose defining bipartite graphs have good expansion properties, namely, they are bipartite expanders. To be precise, for real numbers α,δ∈(0,1], a (c,d)-regular bipartite graph G with bipartition V(G)=L∪ R with L=[n] is called a (c,d,α,δ)-bipartite expander if for each subset S⊆ L with |S|≤α n, S has at least δ c|S| neighbors in R, i.e.,|N(S)|:=|∪_v∈ SN(v)|≥δ c|S|.As each S⊆ L can have at most c|S| neighbors in R, being a (c,d,α,δ)-bipartite expander means that every bounded size subset in L has as many neighbors in R as possible, up to a constant factor.Sipser and Spielman <cit.> studied the Tanner code T(G,C_0) with G being a bipartite expander and C_0 being a parity-check code. For simplicity, let Par={(x_1,…,x_d):∑_i=1^d x_i=0} denote the parity-check code in F_2^d. They remarkably showed that the expansion property of G can be used to analyse the minimum distance and decoding complexity for T(G,Par). Roughly speaking, they showed that for every bipartite expander G with sufficiently large expansion ratio δ>1/2, T(G,Par) has minimum distance linear in n, which further implies that T(G,Par) defines a class ofasymptotically good codes. More surprisingly, they showed that if the expansion ratio is even larger, say δ>3/4, then for every such G, T(G,Par) admits a linear-time decoding algorithm that corrects a linear number of errors in the adverserial noise model! Soon after, Spielman <cit.> showed that expander codes can be used to construct asymptotically good codes that can be both encoded and decoded in linear time.Given the strong performance of expander codes, they have been of particular interest in both coding theory and theoretical computer science, and have been studied extensively throughout the years. For example, <cit.> utilized expander codes to attain near MDS codes with linear-time decoding. A line of research <cit.> improved the distance analysis and decoding algorithm for expander codes in various settings. Very recently, a sequence of works applied expander codes on quantum LDPC and quantum Tanner code construction, finally achieving asymptotically good constructions and linear-time decoding <cit.>.Given the discussion above, it is natural to suspect that the expansion ratio δ plays a prominent role in analysing the properties of T(G,Par). More precisely, one can formalize the following question. Note that throughout we always assume that c,d,α,δ are constants while n tends to infinity. 
What is the minimum δ>0 such that every (c,d,α,δ)-bipartite expander G with V(G)=L∪ R and |L|=n defines an expander code T(G,Par)⊆F_2^n that corrects Ω_c,d,α,δ(n) errors in O_c,d,α,δ(n) time? This question has already attracted a considerable amount of attention. Sipser and Spielman <cit.> used the bit-flipping algorithm (which developed upon the original algorithm of Gallager <cit.>) to show that δ>3/4 is sufficient to correct (2δ-1)α n errors in O(n) time. Using linear programming decoding, Feldman, Malkin, Servedio, Stein and Wainwright <cit.> showed that δ>2/3+1/3c sufficient to correct 3δ-2/2δ-1α· n errors, while at the cost of a poly(n) decoding time. Viderman <cit.> introduced the “Find Erasures and Decode” algorithm to show that δ>2/3-1/6c is sufficient to correct Ω(n) errors in O(n) time. Moreover, he also shows that there exists a (c,d,α,1/2)-bipartite expander G such that T(G,Par) only has minimum distance two, and therefore cannot correct even one error. Viderman's impossibility result implies that δ>1/2 is necessary for the assertion of <ref> holding for every (c,d,α,δ)-bipartite expander.The above results only consider the case where the inner code C_0 is a parity-check code. It is therefore tempting to think about whether one can benefit from a stronger inner code C_0. Let us call a code good if it can correct Ω(n) errors in O(n) time. Chilappagari, Nguyen, Vasic and Marcellin <cit.> showed that if G has expansion ration δ>1/2 and C_0 has minimum distance d(C_0)≥max{2/2δ-1-3,2}, then every such Tanner code T(G,C_0) is good. The result above implies that for ϵ→ 0 and δ=1/2+ϵ, d(C_0)=Ω(ϵ^-1) is sufficient to make every Tanner code T(G,C_0) good. Very recently, Dowling and Gao <cit.> significantly relaxed the requirement on δ by showing that for every δ>0,d(C_0)≥Ω(cδ^-2)is sufficient[More precisely, d(C_0)≥2t+c(t-1)^2-1 with t>1/δ.] to make every Tanner code T(G,C_0) good, and be able to correct α n errors. In particular, their result implies that as long as the minimum distance of C_0 is large enough, any tiny positive expansion ratio is sufficient to construct a good Tanner code!Putting everything together, it is interesting to understand how the expansion ratio δ of G and minimum distance d_0 of C_0 affect the goodness of the Tanner code. We have the following generalized version of <ref>. What are the sufficient and necessary conditions that δ and d_0 should satisfy so that every (c,d,α,δ)-bipartite expander G with V(G)=L∪ R, |L|=n, and every inner linear code C_0⊆F_2^d with d(C_0)≥ d_0, together define an expander code T(G,C_0)⊆F_2^n that corrects Ω_c,d,α,δ(n) errors in O_c,d,α,δ(n) time? The main purpose of this paper is to give a near-optimal solution to the above question, as presented in the next subsection. §.§ Main resultsDeterministic decoding of expander codes. Our main result, which significantly improves upon (<ref>), is presented as follows. Let G be a (c,d,α,δ)-bipartite expander and C_0 be a [d,k_0,d_0]-linear code, where c,d,α,δ,d_0,k_0 are positive constants. If δ d_0>3, then there exists a linear-time decoding algorithm for the Tanner code T(G,C_0) which can correct γ n errors, where γ=2α/d_0(1+c/(d_0-t)). The theorem above shows that δ d_0>3 is sufficient to make every Tanner code T(G,C_0) good.On the other hand, the next proposition shows that for every d_0≥ 2, δ d_0>1 is necessary. 
For every d,d_0≥2 and n≥ 10d_0, there exist constants 0<α<1,c≥3 and a (c,d,0.9α,1/d_0)-bipartite expander G with V(G)=L∪ R and |L|=n such that for every [d,k_0,d_0]-linear code C_0, the Tanner code T(G,C_0) has minimum Hamming distance at most d_0. <ref> and <ref> together show that our requirement δ d_0=Ω(1) is indeed near-optimal for <ref>. Moreover, we have the following conjecture on the fundamental trade-off between δ and d_0. If δ d_0>1, then for every (c,d,α,δ)-bipartite expander G and every inner code C_0⊆F_2^d with d(C_0)≥ d_0, the expander code T(G,C_0)⊆F_2^n can correct Ω_c,d,α,δ(n) errors in O_c,d,α,δ(n) time.Randomized decoding of expander codes. Another important direction in the study of expander codes is to understand the maximum number of errors that can be corrected in a linear-time decoding algorithm. In a recent work, Chen, Cheng, Li, and Ouyang <cit.> obtained a quite satisfactory answer to this problem for T(G,Par). More precisely, they showed that for every δ>1/2 and (c,d,α,δ)-bipartite expander G, T(G,Par) has minimum distance at least α/2(1-δ)· n-O(1), and this is tight up to a 1-o(1) factor. Moreover, for δ>3/4, they also gave a linear-time decoding algorithm which corrects 3α/16(1-δ)· n errors. A similar problem for general expander codes T(G,C_0) was studied by <cit.>.Our decoding algorithm that proves <ref> is deterministic, and corrects γ n errors in linear time. Our next result shows that one can correct more errors by using a randomized algorithm. Let G be a (c,d,α,δ)-bipartite expander and C_0 be a [d,k_0,d_0]-linear code, where c,d,α,δ,d_0,k_0 are positive constants. If δ d_0 > 3, then there exists a linear-time randomized decoding algorithm for Tanner code T(G,C_0) such that if the input has at most α n errors from a codeword, then with probability 1-exp{-Θ_c, δ, d_0( n ) }, the decoding algorithm can output the correct codeword.§.§ Notations and definitions A graph is a pair G = (V,E), where V is a set whose elements are called vertices, and E is a set of 2-subsets of V, whose elements are called edges. For a vertex u∈ V, the set of neighbors of u in G is denoted by N(u):={v∈ V:{u,v}∈ E}. For a subset S⊆ V(G), let N(S)=∪_u∈ S N(u) be the set of all neighbors of vertices in S. A graph G is bipartite if V(G) admits a bipartition V(G)=L∪ R such that both L and R contain no edge. Furthermore, G is (c,d)-regular if every vertex v∈ L has exactly c neighbors in R and every vertex u∈ R has exactly d neighbors in L.Let F_2={0,1} denote the finite field of size 2. A code C is simply a subset of F_2^n. For two vectors x=(x_1,…,x_n), y=(y_1,…,y_n)∈F_2^n, the Hamming distance between x and y, denoted by d_H(x,y), is the number of coordinates where x and y differ, i.e., d_H(x,y)=|{i∈[n]: x_i≠ y_i}|. The minimum distance of a code C⊆F_2^n, denoted by d(C), is the minimum of d_H(x,y) among all distinct x,y∈ C. Let wt(x) denote the number of coordinates where x are non-zero. A code C⊆F_2^n is said to be an [n,k,d(C)]-linear code if it is a linear subspace in F_2^n with dimension k, and minimum distance d(C). It is well-known that for every linear code C, d(C)=min{ wt(x):x∈ C∖{0}}.Throughout, let G be a (c,d,α ,δ)-bipartite expander and C_0 be a [d,k_0,d_0] linear code. Let T(G,C_0) be the Tanner code defined by G and C_0. Let Check be the error-detection algorithm of C_0, which checks whether a vector in F_2^d is a codeword of C_0.Assume that Check takes h_0 time. Similarly, let Decode be the correct-correction algorithm of C_0, which corrects up to ⌊d_0-1/2⌋ errors. 
Assume that Decode takes t_0 time. Note that h_0,t_0 are constants depending only on C_0 but not on n.Conventionally speaking, let us call vertices in L variables and vertices in R constraints. Given a vector x∈F_2^n, which is the corrupted from some codeword y∈ T(G,C_0), let us call a constraint u∈ R satisfied if x_N(u)∈ C_0, otherwise call it unsatisfied. §.§ Some related works In this subsection we briefly review two previous works <cit.> that are closely related to our decoding algorithms for Theorems <ref> and <ref>. Let us start from the decoding algorithm of Sipser and Spielman <cit.>. We summarize as follows the so-called iterated decoding or message-passing algorithm of <cit.> which decodes T(G,Par). * Let y∈ T(G,C_0) be the correct codeword that we want to decode from the received vector x. In the first round, the algorithm runs Check(x_N(u)) for every u∈ R. If a constraint u is unsatisfied, then it sends a “flip” message to every variable in N(u)⊆ L. Sipser and Spielman showed that as long as the expansion ratio of G is sufficiently large (δ>3/4) and the number of corruptions in x is sufficiently small but not identically zero (i.e., 1≤ d_H(x,y)≤(2δ-1)α· n), then there must exist a variable v∈ L that receives >c/2 flip messages, which implies that more than half constraints in N(v) are unsatisfied. The algorithm then flips x_v and updates x and the status of the constraints in N(v). Note that since Par is the parity-check code, flipping x_v makes all satisfied constraints in N(v) unsatisfied and all unsatisfied constraints in N(v) satisfied. Therefore, by flipping x_v one can strictly reduce the number of unsatisfied constraints.* The algorithm then runs the above process repeatedly. As long as there are still unsatisfied constraints, the algorithm can find the desired v∈ L so that flipping x_v strictly reduces the number of unsatisfied constraints. As there are at most |R|=cn/d unsatisfied constraints, the above process must stop in O(n) rounds and therefore yields an O(n) time decoding algorithm. Dowling and Gao <cit.> extends Sipser and Spielman's algorithm from T(G,Par) to the more general setting T(G,C_0) by making use of the minimum distance of C_0. Note that their algorithm works for linear codes defined on any finite field but we will only describe it for F_2. * Roughly speaking, the algorithm begins by setting a threshold t≤⌊d_0-1/2⌋ and then runs Decode(x_N(u)) for every u∈ R. If a constraint u∈ R satisfies 1≤ d_H( Decode(x_N(u)),x_N(u))≤ t-1, then it sends a “flip” message to every variable v∈ N(u) with Decode(x_N(u))_v≠ x_v. Note that Decode(x_N(u))∈F_2^d is a codeword in C_0. The algorithm then flips all x_v for those v receiving at least one flip, and then updates x. Dowling and Gao showed that as long as the minimum distance d_0 of C_0 is sufficiently large, i.e., it satisfies (<ref>), then flipping all variables that receive at least one flip can reduce the number of corrupted variables in x by some positive fraction.* In the next steps the algorithm runs the above process repeatedly. As the number of corrupted variables is at most O(n), the algorithm will stop in O(log n) rounds. Crucially, in order to show that the running time of the algorithm is still linear-order but not of order nlog n, the authors proved that the running time of every single round is within a constant factor of the number of corrupted variables at the beginning of this round. 
As the numbers of corrupted variables form a decreasing geometric sequence with the leading term at most n, it is not hard to check that the total running time, which is within a constant factor of the sum of this geometric sequence, is also O(n).§.§ Key new ideas in our work In this subsection, we briefly introduce the key new ideas in our work. Let us focus on the deterministic decoding algorithm which proves <ref>. Let us begin by analysing the following two possible places where the previous algorithm in <cit.> could be improved.In every decoding round of the above algorithm, the constraints in R which satisfy 1≤ d_H( Decode(x_N(u)),x_N(u))≤ t-1 (and hence send at least one and at most t-1 flips to L) in fact have two status, as detailed below. Let A be the set of constraints u∈ R that sends at least one flip and Decode(x_N(u)) computes the correct codeword in C_0 (i.e., Decode(x_N(u))=y_N(u)); similarly, let B be the set of constraints u∈ R that sends at least one flip and Decode(x_N(u)) computes an incorrect codeword in C_0 (i.e., Decode(x_N(u))≠ y_N(u)). Two possible places where the previous algorithm could be improved:(i) It could be the case that every constraint u∈ A satisfies d_H( Decode(x_N(u)),x_N(u))=1 and hence sends only one correct flip to L; in the meanwhile, every constraint u∈ B may satisfy d_H( Decode(x_N(u)),x_N(u))=t-1 and sends as many as t-1 flips to L, which could be all wrong. In this case, the constraints in R altogether send |A| correct flips and (t-1)|B| wrong flips to the variables in L.(ii) Unfortunately, the situation could be even worse. Recall that our bipartite graph G is (c,d)-regular. It could be the case that the neighbors of the constraints in A are highly concentrated (e.g., all |A| correct flips are received by as few as |A|/c variables in L), and the neighbors of the constraints in B are highly dispersed (e.g., all (t-1)|B| possibly wrong flips are received by as many as (t-1)|B| variables in L). As a consequence, a small number of corrupted variables but a large number of correct variables in L receive flip messages. Given the two issues above, if we flip all variables that receive at least one flip, then in the worst case we could correct |A|/c old corrupt variables but produce (t-1)|B| new corrupt variables. Recall that to make the algorithm in <cit.> work, in each round we need to reduce the number of corrupted variables by at least a positive fraction, which implies that in the worst case it is necessary to have |A|/c≥(t-1)|B|. Together with some lower bound on |A| and upper bound on |B| (see <cit.> for details),one can prove that in such worst scenario (<ref>) is necessary for Dowling and Gao's algorithm to work.Our new algorithm begins by noting that we could indeed fix the two problems mentioned above. To do so, we introduce several new ideas as briefly presented below. Key new ideas in our work: Let F:={i∈[n]:x_i≠ y_i} denote the set of corrupt variables in x. Similarly to <cit.>, our new algorithm begins by setting a threshold t=⌈1/δ⌉ and then runs Decode(x_N(u)) for every u∈ R. (a) To fix the first problem, if a constraint u∈ R satisfies 1≤ d_H( Decode(x_N(u)),x_N(u))≤ t-1, then instead of sending a flip message to every v∈ N(u) with Decode(x_N(u))_v≠ x_v, the new algorithm just arbitrarily picks exactly one such variable v, and sends a flip message to only this specific v. 
By doing so, every constraint in A∪ B sends exactly one flip to L.(b) To fix the second problem, we associate each v∈ L a counter τ_v∈{0,1,…,c} that counts the number of flips received by v. For each m∈ [c], let S_m denote the set of variables that receive exactly m flips. Then, instead of flipping every variable that receives at least one flip, i.e., instead of flipping ∪_m=1^c S_m, we only flip S_m for some m∈ [c]. Crucially, we show that if the number |F| of corrupt variables is not too large, then there must exist some m∈[c] such that |S_m| has the same order as |F|, and more importantly, a (1/2+κ)-fraction of variables in |S_m| are corrupted (and therefore can be corrected by the flipping operation), where κ is an absolute positive constant. It thus follows that by flipping all variables in S_m, one can reduce |F| by some positive fraction.Note that the details of (a) and (b) can be found in <ref>, where we call the algorithm corresponding to (a) and (b) “EasyFlip” and write EasyFlip(x,m) as the output of the EasyFlip if x is the input vector and S_m is flipped (see <ref>).However, there is still a gap that needs to be fixed, that is, how to find the required S_m? A plausible solution is to run EasyFlip(x,m) for every m∈ [c]. This would increase the total running time roughly by a c factor, which will still be O(n), provided that the original running time is O(n). Unfortunately, by doing so we still cannot precisely identify the required S_m, as in general we do not know how to count the number of corrupted variables in some corrupted vector. We will fix this issue by introducing our third key new idea: (c) Note that what we can explicitly count in each round of the algorithm is the number of unsatisfied constraints. Roughly speaking, our strategy is to run EasyFlip iteratively for a large but still constant number of times and then pick the final output that significantly reduces the number of unsatisfied constraints.More precisely, assume that we will run EasyFlip iteratively for s rounds. Let x^0:=x and write x^1:= EasyFlip(x^0,m_1) as the output of the 1st EasyFlip invocation where the variables in S_m_1 is flipped for some m_1∈ [c]; more generally, for k∈[s], write x^k:= EasyFlip(x^k-1,m_k) as the output of the kth EasyFlip invocation where the variables in S_m_k is flipped for some m_k∈ [c]. Note that in <ref> we call the above iterated invocations of EasyFlip as “DeepFlip”, and write x^k:= DeepFlip(x,(m_1,…,m_k)) as the output of the kth EasyFlip invocation. For 0≤ k≤ s, let F^k⊆ L and U^k⊆ R denote the sets of corrupted variables and unsatisfied constraints caused by x^k, respectively. We prove that there are constants 0<ϵ≪ϵ'≪ϵ”<1 such that the following two wordy but useful observations hold:(c1) If the number of corrupted variables is reduced dramatically then the number of unsatisfied constraints is reduced significantly, i.e., if for some k∈[s], |F^k|≤ϵ |F^0|, then |U^k|≤ϵ'|U|;(c2) If the number of unsatisfied constraints is reduced significantly, then the number of corrupted variables must be reduced by a least a constant fraction i.e., if for some k∈[s], |U^k|≤ϵ' |U^0|, then |F^k|≤ϵ”|F|.Below we will briefly argue how we will make use of the two observations (c1) and (c2). Recall that in (b) we have essentially guaranteed that for every k∈[s], there exists some m^*_k∈[c] such that by flipping S_m^*_k in EasyFlip, one could reduce the number of corrupted variables by an η-fraction, for some η∈(0,1). 
It follows that if we run DeepFlip iteratively for (m_1,…,m_s)=(m^*_1,…,m^*_s), then we have that |F^s|≤(1-η)^s|F|<ϵ |F|, provided that s>log_(1-η)^-1ϵ^-1 is sufficiently large (but still a constant independent of n). Therefore, if we run DeepFlip thoroughly for all (m_1,…,m_s)∈ [c]^s, then by (c1) there must exist at least one[Clearly, DeepFlip(x,(m^*_1,…,m^*_s)) gives a candidate for such x^k.] x^k:= DeepFlip(x,(m_1,…,m_k)) with k≤ s such that |U^k|≤ϵ'|U|. Moreover, using the last inequality, such x^k and (m_1,…,m_k) can be explicitly identified. Now by (c2) we can conclude that the number of corrupted variables is indeed reduced by at least a constant fraction.Note that the above brute-force search only increases the total running time by at most a c^s factor. The details of (c) and the analysis of DeepFlip can be found in <ref> and <ref>. Moreover, we call the algorithm that runs DeepFlip(x,(m_1,…,m_s)) thoroughly for all (m_1,…,m_s)∈ [c]^s as “HardSearch”, and it is discussed in <ref> and <ref>. The discussion above basically shows that every HardSearch invocation could reduce the number of corrupted variables by some constant fraction.By running HardSearch iteratively for O(log n) rounds the total number of corrupted variables will be smaller than ⌊d_0-1/2⌋, which can be easily corrected by running Decode for every u∈ R. The main algorithm that puts everything together is called “MainDecode”, and is presented in <ref> and <ref>.In order to show that the total running time is still linear in n, we adopt an argument similar to that in the previous works (e.g., <cit.>). We show that the running time of every HardSearch invocation is within a constant factor of the number of corrupted variables at the beginning of this invocation.Lastly, we would like to mention that our randomized decoding algorithm which proves <ref> basically follows from the same framework. The main difference is that we design a delicate random sampling procedure to select a subset of ∪_m∈ [c] S_m to flip. We show that this procedure can ensure that with high probability, corrupted variables are more than correct variables in this subset as long as the total number of corruptions is at most α n. §.§ Future research directions In this subsection we list some possible directions for future research. * Perhaps the most attractive problem is to try to solve <ref> and <ref>. We have shown that δ d_0>3 is sufficient for <ref>. However, when d_0=2 our result does not recover the best-known record δ>2/3-Ω(1) of Viderman<cit.> or the earlier result δ>3/4 of Sipser and Spielman <cit.>. Therefore, as a first step to solve <ref> and <ref> in the full generality, it would be interesting to prove that the weaker condition δ d_0>4/3 or even δ d_0>3/2 is also sufficient for <ref>. * Our decoding algorithm is built upon the earlier works of Sipser and Spielman <cit.> and Dowling and Gao <cit.>. Since the work of Viderman <cit.> makes a step beyond <cit.> by introducing the so-called “Find Erasures and Decode” algorithm, it would be interesting to know if one can utilize the new idea of <cit.> to essentially improve our bound δ d_0>3. * Although we showed that when δ d_0>3, every Tanner code T(G,C_0) can correct Ω_c,d,α,δ(n) errors in O_c,d,α,δ(n) time, we do not make a serious attempt to optimize the hidden constants. It is always interesting to find the optimal constants in Ω(·) and O(·). In particular, finding the optimal fraction of errors that one can correct in linear time is crucial in decoding expander codes. 
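Before turning to the auxiliary lemmas, we record a schematic sketch of the brute-force search described in item (c) of the discussion of our key new ideas above. This is an illustration only, not the authors' pseudocode (the formal algorithms appear later); easy_flip(x, m) and count_unsatisfied(x) are assumed subroutines, and eps_prime plays the role of ϵ' above.

from itertools import product

# Run EasyFlip along every sequence (m_1, ..., m_s) in [c]^s and return the
# first intermediate word whose number of unsatisfied constraints has dropped
# below eps_prime times the initial one.
def hard_search(x, c, s, eps_prime, easy_flip, count_unsatisfied):
    u0 = count_unsatisfied(x)
    for ms in product(range(1, c + 1), repeat=s):   # all (m_1,...,m_s) in [c]^s
        y = x
        for m in ms:                                # k-th step is DeepFlip(x, (m_1,...,m_k))
            y = easy_flip(y, m)
            if count_unsatisfied(y) <= eps_prime * u0:
                return y                            # significant drop found
    return x   # under the assumptions of the analysis this line is not reached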
§ COLLECTION OF SOME AUXILIARY LEMMAS Given two subsets S,T⊆ V(G), let E(S,T) denote the set of edges with one endpoint in S and another endpoint in T. For every positive integer t, let * N_≤ t(S)={u∈ V(G):1≤|N(u)∩ S|≤ t}, * N_t(S)={u∈ V(G):|N(u)∩ S|=t}, * and N_≥ t(S)={u∈ V(G): |N(u)∩ S|≥ t}. We will make use of the following crucial property of bipartite expander graphs. Let G be a (c,d,α,δ)-bipartite expander. Then for every set S⊆ L with |S|≤α n and every integer t∈[d], the following inequality holds |N_≤ t(S)|≥δ(t+1)-1/t· c|S|. By double-counting the number of cross edges between S and N(S), one can infer that c|S| =|E(S,N(S))|=∑_i=1^d i|N_i(S)|≥ |N_≤ t(S)|+(t+1)(|N(S)|-|N_≤ t(S)|)≥ (t+1)|N(S)|-t|N_≤ t(S)|≥ (t+1)δ c|S|-t|N_≤ t(S)|, where the last inequality follows by the property of bipartite expanders. It is known that for all integers c,d,n≥ 2, a random (c,d)-regular bipartite graph G with bipartition V(G)=L∪ R and |L|=n satisfies the following property with probability 1-(e/α)^-α n (see Theorem 26 in <cit.> or Proposition A.3 in <cit.>). For all 0<α<1, all subsets S of L with size α n have at least n(c/d(1-(1-α)^d)-2α√(c ln(e/α))) neighbors. The following proposition is an easy consequence of the above discussion. For all integers d,n and every constant 0<δ<1, there exists a (c,d,α,δ)-bipartite expander G with V(G)=L∪ R and |L|=n, for some sufficiently small α and sufficiently large c. Set α=(δ/100c)^2 such that for any α'≤α, we have that 1-α'd≤ (1-α')^d≤ 1-(1-δ/100)·α'd. Thus, one can infer that n(c/d(1-(1-α')^d)-2α'√(c ln(e/α')))≥ n(c/d·(1-δ/100)α'd-2α'√(c ln(e/α')))=cα'n(1-δ/100-2√(ln(e/α')/c))≥δ cα'n, where the last inequality follows from a sufficiently large constant c. The following result on the dimension and minimum distance of the expander code T(G,C_0) is well-known. Let G be a (c,d,α,δ)-bipartite expander and C_0 be a [d,k_0,d_0]-linear code where δ d_0>1. Then the Tanner code T(G,C_0) has dimension at least (1-c/d(d-k_0))· n and minimum Hamming distance at least d_0δ⌊α n⌋. § PRELIMINARIES§.§ Basic definitions from graph theory For a positive integer n, let [n]:={1,…,n}. Given a graph G, we use V(G) to denote its vertex set and E(G) to denote its edge set. For a subset S⊆ V(G), let N(S)=∪_u∈ S N(u). A graph G is bipartite if V(G) admits a bipartition V(G)=L∪ R such that both L and R contain no edge. For convenience, let L=[n]. A bipartite graph G with bipartition V(G)=L∪ R is said to be (c,d)-regular if every vertex in L has degree c and every vertex in R has degree d. Informally speaking, a bipartite expander is a bipartite graph with the property that every small subset of L has many neighbors in R. A bipartite graph G with bipartition V(G)=L∪ R is called a (c,d,α,δ)-bipartite expander if it has the following properties: * G is (c,d)-regular;* for every S⊆ L with |S|≤α n, |N(S)|≥δ c|S|. §.§ Basic definitions from coding theory Let F_2={0,1} denote the finite field of size 2. A code C is simply a subset of F_2^n. For two vectors x=(x_1,…,x_n), y=(y_1,…,y_n)∈F_2^n, the Hamming distance between x and y, denoted by d_H(x,y), is the number of coordinates where x and y differ, i.e., d_H(x,y)=|{i∈[n]: x_i≠ y_i}|. The minimum distance of a code C⊆F_2^n, denoted by d(C), is the minimum of d_H(x,y) among all distinct x,y∈ C. Let wt(x) denote the number of coordinates where x is non-zero. A code C⊆F_2^n is said to be an [n,k_0,d_0]-linear code if it is a linear subspace in F_2^n with dimension k_0, and minimum distance d_0.
A code C⊆F_2^n is said to be an [n,k_0,d_0]-linear code if it is a linear subspace in F_2^n with dimension k_0, and minimum distance d_0.For a subset S⊆ [n] and a vector x∈𝔽_2^n, let x_S:=(x_i:i∈ S)∈𝔽_2^|S| denote the subvector of x with coordinates restricted to S. Given a d-right regular bipartite graph G with bipartition V(G)=L∪ R and a [d,k_0,d_0]-linear code C_0, the Tanner code T(G,C_0) is defined byT(G,C_0):={x∈F_2^n:x_N(u)∈ C_0for every u∈ R}. § DETERMINISTIC DECODING: PROOF OF Let G be a (c,d,α,δ)-bipartite expander and C_0 be a [d,k_0,d_0]-linear code, where c,d,α,δ,d_0,k_0 are positive constants. If δ d_0>3, then there exists a linear-time decoding algorithm for the Tanner code T(G,C_0) which can correct γ n errors, where γ= 2α/d_0(1+c/(d_0-t)). We need to set up some parameters. Suppose that d_0 > 3/δ - 1. Let t = ⌊1/δ⌋. Take ϵ_0 > 0 such that d_0 > 3/δ - 1 + 2ϵ_0 and ⌊1/δ + ϵ_0 ⌋ = ⌊1/δ⌋. For every 0<ϵ_1< ϵ_0 δ^2/100, let ϵ_2=ϵ_1/c+1·δ(t+1)-1/t>0 andϵ_3 = ϵ_2 ( 2(1-ϵ_1) ( 1/2 +ϵ_0 δ^2/2) - 1 ) > 0. It is not hard to check that ϵ_1,ϵ_2 and ϵ_3 are all well-defined. Lastly, let ϵ_4=δ d_0-1/d_0-1·(1-ϵ_3), ℓ=⌈log_1-ϵ_3(⌊d_0-1/2⌋1/γ n)⌉ and s_0=⌈log_1-ϵ_3(ϵ_4δ d_0-1/d_0-1)⌉. §.§ The main decoding algorithm – MainDecode Given a corrupt vector x∈𝔽_2^n with at most γ n corruptions, our main decoding algorithm (see <ref> below) works as follows. The algorithm is divided into two parts. In the first part (see steps 2-10 below), it invokes HardSearch (see <ref> below) recursively for ℓ rounds, where in every round, the number of corrupt variables is reduced by a (1-ϵ_3)-fraction. After ℓ executions of HardSearch, the number of corrupt variables is reduced to at most ⌊d_0-1/2⌋. Then, in the second part of the algorithm (see steps 11-13 below), it applies the decoder of the inner code C_0 to finish decoding.The next two lemmas justify the correctness and the linear running time of MainDecode. (i) Let x be the input vector of HardSearch and let F be the set of corrupt variables of x. Let x':= HardSearch(x) and F' be the set of corrupt variables of x'. If |F|≤γ n, then |F'|≤(1-ϵ_3)·|F|.(ii) In step 11 of MainDecode, the number of corrupt variables in x^ℓ is at most ⌊d_0-1/2⌋. (i) Let x be the input vector of HardSearch and F be the set of corrupt variables of x. If |F|≤γ n, then the running rime of HardSearch is at most O(n+|F|).(ii) Moreover, if the number of corrupt variables in the input vector of MainDecode is at most γ n, then the running time of MainDecode is O(n). Assuming the correctness of the above two lemmas, one can prove <ref> as follows. Let y∈ T(G,C_0) be a codeword and x∈F_2^n be a corrupted vector. Let F={i∈[n]:x_i≠ y_i} be the set of the corrupt variables of x with respect to y. To prove the theorem, it suffices to show that as long as |F|≤γ n, MainDecode finds y correctly in linear time. We will analyse the following two cases: * If the algorithm returns x^i for some 0≤ i≤ℓ-1, then as |U^i|=0, we must have x^i∈ T(G,C_0). Let F^i be the set of the corrupt variables of x^i. Then it follows by <ref> (i) thatd(x^i,y)=|F^i|≤(1-ϵ_3)^i |F|≤(1-ϵ_3)^iγ n<d(T(G,C_0)), which implies that x^i=y.* If the algorithm does not return x^i for any 0≤ i≤ℓ-1, then it follows by <ref> (ii) that d(x^ℓ,y)≤⌊d_0-1/2⌋. Therefore, one can find y by running Decode for every u∈ R.Moreover, by <ref> the running time of MainDecode is O(n), completing the proof of the theorem. The remaining part of this section is organised as follows. 
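The two phases of MainDecode described at the beginning of this subsection can be summarized by the following schematic sketch; it is a structural illustration, not the authors' pseudocode. Here hard_search(x) and unsatisfied(x) are assumed subroutines, decode0 is the decoder of the inner code C_0, ell is the constant ℓ fixed above, and R_neighbors maps every constraint u∈R to its ordered neighborhood N(u), with variables indexed 0,…,n-1.

# Schematic sketch of MainDecode.
def main_decode(x, ell, R_neighbors, hard_search, unsatisfied, decode0):
    # phase 1 (steps 2-10): at most ell rounds of HardSearch
    for _ in range(ell):
        if not unsatisfied(x):        # U^i is empty, so x is already a codeword
            return x
        x = hard_search(x)
    # phase 2 (steps 11-13): at most floor((d_0-1)/2) corruptions remain, so
    # running Decode at every constraint recovers the codeword consistently
    y = list(x)
    for u, nbrs in R_neighbors.items():
        local_word = decode0([x[v] for v in nbrs])
        for v, bit in zip(nbrs, local_word):
            y[v] = bit
    return y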
In <ref> below we will introduce the basic building block of deterministic decoding – EasyFlip, which also corresponds to items (a) and (b) in <ref>. In <ref> we will introduce the algorithm DeepFlip, which runs EasyFlip iteratively for a constant number of times. DeepFlip corresponds to item (c) in <ref>. In <ref> we will introduce HardSearch, which is designed by running DeepFlip thoroughly for all choices of (m_1,…,m_s) until the number of unsatisfied constraints is significantly reduced. The proofs of <ref> and <ref> are also presented in <ref>. §.§ The basic building block of deterministic decoding – EasyFlip In this subsection, we will present the algorithm EasyFlip (see <ref> below), which is the basic building block of our deterministic decoding. It roughly contains the following two parts:* EasyFlip (i): in the first part (see steps 1-6 below), it invokes Decode for each constraint u∈ R and sends flips to some variables v∈ L;* EasyFlip (ii): in the second part (see steps 7-11 below), it counts the number of flips received by each variable in L and flips all variables that receive exactly m flips. Our goal is to show that there must exist an integer m∈[c] such that by flipping all variables v∈ L that receive exactly m flips, one can reduce the number of corrupt variables in x' by a (1-ϵ_3)-fraction, as compared with x. Note that for this moment, it suffices to prove the existence of such an m and we do not need to find it explicitly. Indeed, later we will find the required m by exhaustive search.We make the discussion above precise by the lemma below. Let x be the input vector of EasyFlip and let F be the set of corrupt variables of x. If |F|≤α n, then there exists an integer m∈[c] such that the following holds. Let x'= EasyFlip(x,m) be the output vector of EasyFlip and F' be the set of corrupt variables of x'. Then |F'|≤ (1-ϵ_3)|F|. The next lemma shows that EasyFlip runs in linear time. If |F|≤α n, then the running time of EasyFlip is at most O(n+|F|) ,where the hidden constant depends only on t_0,c,d.§.§.§ Proof ofLet us first introduce some notations and easy inequalities. Let y∈ T(G,C_0) be the correct codeword that we want to decode from x. Let A be the set of constraints u∈ R that sends a flip and Decode(x_N(u)) computes the correct codeword in C_0 (i.e., Decode(x_N(u))=y_N(u)). Similarly, let B be the set of constraints u∈ R that sends a flip and Decode(x_N(u)) computes an incorrect codeword in C_0 (i.e., Decode(x_N(u))≠ y_N(u)). By the definition of A and N_≤ t(F), it is easy to see thatA={u∈ R:1≤|N(u)∩ F|≤ t}=N_≤ t(F).Therefore, it follows by (<ref>) and <ref> that|A|≥δ(t+1)-1/t· c|F|.Moreover, since a constraint u∈ R computes an incorrect codeword in C_0 only if it sees at least d_0-t corrupt variables in its neighbors (recall that d(C_0)≥ d_0), we have thatB={u∈ R:|N(u)∩ F|≥ d_0-t and ∃ω∈ C_0 s.t.1≤ d_H(ω,x_N(u))≤ t}⊆ N_≥ d_0-t(F).By counting the number of edges between F and N(F), we have that(d_0-t)|B|≤|E(F,B)|≤|E(F,N(F))|=c|F|,which implies that|B|≤c|F|/d_0-t. Consider the following two equalities,∑_k = 1^d k · |N_k(F)| =c |F|,and∑_k = 1^d |N_k(F)|=|N(F)|≥ δ c |F|. 
By multiplying the second by 1/δ+ϵ_0 and subtracting the first one, we have∑_k = 1^t (1/δ + ϵ_0 - k) |N_k(F)| - ∑_k = t+1^d (k - 1/δ - ϵ_0) |N_k(F)| ≥( ( 1/δ + ϵ_0 ) δ - 1) c |F|≥ϵ_0 δ c |F|,Moreover, it follows by (<ref>) and (<ref>) that∑_k = 1^t (1/δ + ϵ_0 - k) |N_k(F)| - ∑_k = t+1^d (k - 1/δ - ϵ_0) |N_k(F)| ≤∑_k = 1^t (1/δ + ϵ_0 - k) |N_k(F)| - ∑_k = d_0-t^d (k - 1/δ - ϵ_0) |N_k(F)| ≤ (1/δ + ϵ_0 - 1) |N_≤ t(F)| - (d_0 - t - 1/δ - ϵ_0)|N_≥ d_0-t(F)|≤ (1/δ + ϵ_0 - 1) |A| - (d_0 - t - 1/δ - ϵ_0)|B|. As d_0 > 3/δ - 1 + 2ϵ_0 and t = ⌊1/δ⌋, we have that d_0 - t - 1/δ - ϵ_0 > 1/δ + ϵ_0 - 1. Combining the above two inequalities, one can infer thatϵ_0 δ c |F| ≤ (1/δ + ϵ_0 - 1) |A| - (d_0 - t - 1/δ - ϵ_0)|B| ≤ (1/δ + ϵ_0 - 1) (|A| - |B|) ≤1/δ (|A| - |B|) ,which implies that|A| - |B| ≥ϵ_0 δ^2 c |F|.On the other hand, since A and B are disjoint subsets of N(F), we have that|A| + |B| ≤ |N(F)|≤ c |F|.For every integer m∈[c], let S_m be the set of variables in L receiving exactly m flips. It is easy to see that the variables in S_m receive a total number of m|S_m| flips. In EasyFlip, every constraint in A∪ B sends exactly one flip L. The total number of flips sent by constraints in R and received by variables in L is exactly|A|+|B|=∑_m=1^c m|S_m|. Let Z be the set of correct variables receiving at least one flip, i.e., Z=(∪_m=1^cS_m)∖ F. Observe that the set F' of corrupt variables in the output vector x' consists of corrupt variables not flipped by EasyFlip, which is F∖ S_m, and correct variables which are erroneously flipped by EasyFlip, which is S_m∩ Z. Therefore,F'=(F∖ S_m)∪(S_m∩ Z). Let α_m denote the fraction of corrupt variables in S_m. Then we have thatα_m=|S_m∩ F|/|S_m|  and  1-α_m=|S_m∩ Z|/|S_m|.Let β_m denote the fraction of flips sent from A to S_m among all flips received by S_m, i.e.,β_m=the number of flips sent from A to S_m/m|S_m|. The following inequality is crucial in the analysis of EasyFlip. For every m∈[c], α_m≥β_m.As every variable in S_m receives the same number of m flips, by (<ref>) the number of flips received by S_m∖ F is (1-α_m)m|S_m|. Moreover, by (<ref>) the number of flips sent from B to S_m is (1-β_m)m|S_m|. Since the constraints in A always compute the correct codewords in C_0, they always send correct flips to their neighbors in L. Therefore, the flips received by S_m∖ F (which are the wrong flips) must be sent by B, which implies that(1-α_m)m|S_m|≤(1-β_m)m|S_m|,where the inequality follows from the fact that B could also send flips to S_m∩ F (which are the correct flips). Thus, α_m≥β_m, as needed. The following result shows that there exists an integer m∈[c] such that there exists a large set S_m that contains many corrupt variables. If |F|≤α n, then there exists an integer m∈[c] such that α_m≥(1-ϵ_1)|A|/|A|+|B| and |S_m|≥ϵ_2|F|. Suppose for the sake of contradiction that for every m∈[c], we have either α_m<(1-ϵ_1)|A|/|A|+|B| or |S_m|<ϵ_2|F|. Then, by counting the number of flips sent from A to L (which is exactly |A|), we have that|A| =∑_m=1^cβ_m m|S_m|≤∑_m=1^cα_m m|S_m|<(1-ϵ_1)|A|/|A|+|B|∑_m=1^cm|S_m|+∑_m=1^cmϵ_2|F|=(1-ϵ_1)|A|+c(c+1)/2·ϵ_2|F|,where the first inequality follows from <ref>, the second inequality follows from our assumption on α_m and |S_m|, and the last equality follows from (<ref>).Rearranging gives that|A|<ϵ_2(c+1)/2ϵ_1· c|F|=δ(t+1)-1/2t· c|F|,contradicting (<ref>). 
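To make the two phases of EasyFlip concrete, the following is a minimal Python sketch of ours, not the paper's pseudocode: the graph representation (adjacency lists right_nbrs), the inner decoder decode_C0, and the choice of which disagreeing position receives a constraint's single flip are all placeholders or assumptions. Phase (i) lets every constraint whose local word is within distance at most t of a C_0-codeword (but is not itself a codeword) send exactly one flip; phase (ii) flips exactly the variables that received m flips.

def easy_flip(x, m, right_nbrs, decode_C0, t):
    # x: list of bits indexed by L; right_nbrs[u]: ordered neighbourhood N(u)
    # decode_C0: placeholder decoder returning a nearby codeword of C_0 (as a bit list)
    flips = [0] * len(x)                       # tau_v: number of flips received by v
    for nbrs in right_nbrs:                    # phase (i): every constraint votes
        local = [x[v] for v in nbrs]
        cw = decode_C0(local)
        dist = sum(a != b for a, b in zip(cw, local))
        if 1 <= dist <= t:                     # constraint in A or B: send one flip
            j = next(i for i, (a, b) in enumerate(zip(cw, local)) if a != b)
            flips[nbrs[j]] += 1
    x_new = list(x)
    for v, cnt in enumerate(flips):            # phase (ii): flip exactly S_m
        if cnt == m:
            x_new[v] ^= 1
    return x_new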
Next we will show that by flipping all of the variables in S_m, where m satisfies the conclusion of <ref>, one can reduce the size of the set of corrupt variables by a (1-ϵ_3)-fraction, thereby proving <ref>. Let m∈[c] satisfy the conclusion of <ref>.Combining the two inequalities (<ref>) and (<ref>), one can infer that|A|/|A|+|B| =1/2+ |A|-|B|/2(|A|+|B|)≥1/2+ +ϵ_0 δ^2 c|F|/2c|F| = 1/2 + ϵ_0 δ^2/2.Therefore, it follows by (<ref>) thatα_m≥(1-ϵ_1)|A|/|A|+|B|≥ (1-ϵ_1) (1/2 + ϵ_0 δ^2/2) .It follows by (<ref>) that|F'| =(|F|-|S_m∩ F|)+|S_m∩ Z|=|F|-(2α_m-1)|S_m|≤|F|-(2(1-ϵ_1)(1/2+ϵ_0 δ^2/2)-1)|S_m| =|F|-(ϵ_3/ϵ_2)|S_m|≤|F|-ϵ_3|F|,as needed, where the second equality follows by (<ref>), the first inequality follows by (<ref>), the last equality follows by the definition of ϵ_3 and the last inequality follows by <ref>. We will conclude by the following inequality, which shows that for an arbitrary m∈ [c], flipping S_m would not significantly increase the number of corrupt variables. For arbitrary x∈𝔽_2^n and m∈[c], let x':= EasyFlip(x,m). Let F and F' be the sets of corrupt variables of x and x', respectively. Then |F'|≤ (1+c/d_0-t)|F|.Since the constraints in A always compute the correct codewords in C_0, they always send correct flips to their neighbors in L. Therefore, the wrong flips must be sent by B. Therefore, in the worst case (i.e., assuming that A=∅), we have that|F'|≤ |F|+|B|≤(1+c/d_0-t)|F|,where the second inequality follows from (<ref>). §.§.§ Proof of Let U={u∈ R:x_N(u)∉ C_0} be the set of unsatisfied constraints with respect to x. The following inequalities will be useful. First, it follows from (<ref>) and (<ref>) that A∪ B⊆ U⊆ N(F). As A∩ B=∅, we have that|A|+|B|≤ |U|≤ |N(F)|≤ c|F|.Second, it follows by (<ref>) that|S_m|≤|A|+|B|/m≤ c|F|.Recall that we divided EasyFlip roughly into two parts, according to the discussion above <ref>.We will compute the running time of these two parts separately. EasyFlip (i): * For each constraint u∈ R, invoking Decode(x_N(u)) requires t_0 time, and computing the Hamming distance d_H( Decode(x_N(u)),x_N(u)) requires O(d) time. Therefore this process takes O((t_0+d)|R|) time for all constraints in R. * We associate each constraint u∈ R with an indicator vector z^u∈{0,1}^d that indexes the neighbor of u that receives the flip sent from u[Assume that the d coordinates of z^u are labelled by d the neighbors of u.]. Initially, z^u:=0^d. If u sends a flip to some v∈ N(u)⊆ L, then update z^u_v:=1. Note that for every u∈ R, z^u has at most one non-zero coordinate. Initializing and updating the vectors z^u for all u∈ R take O(d|R|)+O(|A|+|B|) time, where we need O(d|R|) time to initialize and O(|A|+|B|) time to update, as R sends |A|+|B| flips to L. * In total, EasyFlip (i) takes O((t_0+d)|R|+|A|+|B|) time.EasyFlip (ii): * We associate each variable v∈ L with a counter τ_v∈{0,1,…,c} to count the number of flips received by v. Initially, set τ_v:=0 and then update τ_v=|{u∈ N(v):z^u_v=1}|. The algorithm in fact flips all variables in S_m={v∈ L: τ_v=m}. Initializing and updating the counters τ_v for all v∈ L take O(cn) time, as we need O(cn) time to initialize and O(cn) time to update. * Lastly, we need to flip |S_m| variables, which needs O(|S_m|) time. * In total, EasyFlip (ii) takes O(cn+|S_m|) time. 
To sum up, the running time of EasyFlip is at mostO((t_0+d)|R|+|A|+|B|)+O(cn+|S_m|)=O((t_0/d+1)cn+c|F|)=O(n+|F|),where the first equality follows from |R|=cn/d, (<ref>) and (<ref>).§.§ Running EasyFlip iteratively for a constant number of times – DeepFlip In this subsection, we will present and analyse DeepFlip (see <ref> below), which is designed by running EasyFlip iteratively for s times for a particular choice of (m_1,…,m_s)∈ [c]^s. Note that by iteratively we mean a sequence of operations x^0:=x,x^1:= EasyFlip(x^0,m_1),…,x^s= EasyFlip(x^s-1,m_s).Our goal is to show that as long as the number of corrupt variables in x is not too large, by running EasyFlip iteratively for a large enough (but still a constant) number of times, there exists a vector (m_1,…,m_s)∈[c]^s such that the number of corrupt variables in the final output x^s is at most a (1-ϵ_3)-fraction of the number of corrupt variables in the initial input x. Most importantly, later we will show that such a vector (m_1,…,m_s) can be found explicitly and efficiently.The above assertion will be made precise by the lemma below.Let x be the input vector of DeepFlip and let F be the set of corrupt variables of x. If |F|≤γ n, then for every s≥ s_0 there exists a non-empty subset M⊆ [c]^s such that the following holds for every (m_1,…,m_s)∈ M. Let x^s:=DeepFlip(x,(m_1,…,m_s)) be the output vector of DeepFlip and F^s be the set of corrupt variables of x^s. Then |F^s|≤ (1-ϵ_3)|F|. The next lemma shows that for every fixed (m_1,…,m_s)∈[c]^s, the algorithm DeepFlip runs in linear time. If |F|≤γ n and s is a constant, then the running time of DeepFlip is at most O(n+|F|), where the hidden constant depends only on t_0,c,d,s. §.§.§ Proof ofGiven (m_1,…,m_s)∈ [c]^s and x^0:=x, for each k∈[s], let x^k:=EasyFlip(x^k-1,m_k). With this notation,x^s= EasyFlip(x^s-1,m_s)= DeepFlip(x,(m_1,…,m_s)),is exactly the output vector of DeepFlip. Let F be the set of corrupt variables in x and U be the set of unsatisfied constraints with respect to x. Sometimes we will also use F^0:=F and U^0:=U. For k∈[s], define F^k and U^k similarly with x replaced by x^k. It is not hard to observe thatN_≤ d_0-1(F)⊆ U⊆ N(F),where the first inclusion follows holds since d(C_0)=d_0.The following lemma can be viewed as an “idealized” version of <ref>. With the above notation, the following holds. If |F|≤α n, then there exists a vector (m_1,…,m_s)∈[c]^s such that (i) |F^s|≤(1-ϵ_3)^s|F|;(ii) for each k∈[s], |U^k|≤ (1-ϵ_3)^k· c|F|;(iii) |U^s|≤(1-ϵ_3)^s·d_0-1/δ d_0-1·|U|. As |F|≤α n, by <ref>, there exists m_1∈[c] such that x^1= EasyFlip(x,m_1) satisfies|F^1|≤(1-ϵ_3)|F|≤α n.Continuing this process, it follows by <ref> that for each k∈[s], there exists m_k∈[c] such that x^k= EasyFlip(x^k-1,m_k) satisfies|F^k|≤(1-ϵ_3)|F^k-1|≤ (1-ϵ_3)^k|F|≤α n.Such a vector (m_1,…,m_s)∈[c]^s clearly satisfies property (i).To prove (ii), note that it follows by (<ref>) and (<ref>) that for each k∈[s],|U^k|≤ |N(F^k)|≤ c|F^k|≤(1-ϵ_3)^k· c|F|,as needed.To prove (iii), note that as |F|≤α n, applying <ref> in concert with (<ref>) gives thatδ d_0-1/d_0-1· c|F|≤ |N_≤ d_0-1(F)|≤|U|.Combining the equation above and (i) gives that|U^s|≤ c|F^s|≤(1-ϵ_3)^s· c|F|≤ (1-ϵ_3)^s·d_0-1/δ d_0-1·|U|,completing the proof of (iii). 
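Before discussing how a good choice of (m_1,…,m_s) is actually found, the following hedged sketch (ours, with easy_flip, unsatisfied, the threshold test and the failure handling as placeholders) shows the control flow of DeepFlip and of the exhaustive search over [c]^s that HardSearch performs.

from itertools import product

def deep_flip(x, ms, easy_flip, unsatisfied, threshold):
    # Run EasyFlip iteratively with the prescribed flip multiplicities ms,
    # aborting when the number of unsatisfied constraints grows too large.
    xk = x
    for k, m in enumerate(ms, start=1):
        xk = easy_flip(xk, m)
        if unsatisfied(xk) > threshold(k):
            return None                        # this choice of (m_1,...,m_s) is rejected
    return xk

def hard_search(x, c, s, deep_flip_on_x, unsatisfied, eps4):
    # Try all (m_1,...,m_s) in [c]^s until the unsatisfied-constraint count
    # drops to an eps4-fraction of its initial value; by the analysis such a
    # choice exists whenever the number of corruptions is at most gamma*n.
    u0 = unsatisfied(x)
    for ms in product(range(1, c + 1), repeat=s):
        cand = deep_flip_on_x(x, ms)
        if cand is not None and unsatisfied(cand) <= eps4 * u0:
            return cand
    return x

MainDecode then simply applies this search a constant number ℓ of times and finishes with the inner decoder, as described earlier.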
<ref> (i) indicates that there exists an “ideal” choice, say (m^*_1,…,m^*_s)∈[c]^s, such that if |F|≤α n, then after the execution of EasyFlip iteratively for s times (guided by (m^*_1,…,m^*_s)), the number of corrupt variables in the final output x^s is at most a (1-ϵ_3)^s-fraction of the number of corrupt variables in the initial input x^0=x.Unfortunately, in general there is no way to compute the number of corrupt variables in the input and output of each execution of EasyFlip. From this perspective, there is no easy way to explicitly find the ideal (m^*_1,…,m^*_s)∈[c]^s. However, <ref> (iii), which is a consequence of <ref> (i), essentially shows that if the number of corrupt variables reduces dramatically, then the number of unsatisfied constraints also reduces significantly - fortunately, it is clear that this quantity can be computed in linear time! The analysis of our deterministic decoding algorithm relies heavily on this observation.The above discussion motivates the following definition. Given the input vector x of DeepFlip, let M be the set consisting of all vectors (m_1,…,m_s)∈[c]^s which satisfy the following two properties: (a) for each k∈[s], |U^k|≤ (1-ϵ_3)^k· cγ n;(b) |U^s|≤ϵ_4 |U|, where ϵ_4=δ d_0-1/d_0-1·(1-ϵ_3).The following result is an easy consequence of <ref>. If |F|≤γ n and s≥ s_0, then M≠∅.Since |F|≤γ n<α n, there exists a vector (m_1,…,m_s)∈[c]^s that satisfies <ref>. By substituting |F|≤γ n into <ref> (ii), it is easy to see that such a vector also satisfies <ref> (a). Moreover, by substituting s≥ s_0=⌈log_1-ϵ_3(ϵ_4δ d_0-1/d_0-1)⌉ into <ref> (iii), it is not hard to see that <ref> (b) also holds. Therefore, M≠∅, as needed. As briefly mentioned above, in general one cannot explicitly find the ideal (m^*_1,…,m^*_s)∈[c]^s which reduces the number of corruptions dramatically. Instead, under a stronger condition |F|≤γ n (recall that <ref> assumes |F|≤α n), <ref> shows that for every (m_1,…,m_s)∈ M, x^s= DeepFlip(x,(m_1,…,m_s)) reduces the number of corrupt variables of x by a (1-ϵ_3)-fraction, which makes every member of M an acceptable (which may be not ideal) choice for DeepFlip.Now we are ready to present the proof of <ref>. First of all we would like to show that <ref> is well-defined, namely, for every |F|≤γ n and (m_1,…,m_s)∈ M, DeepFlip(x,(m_1,…,m_s)) does not return . Indeed, as (m_1,…,m_s)∈ M, by <ref> (a) we have that for every 1≤ k≤ s, |U^k|≤ (1-ϵ_3)^k· cγ n, which implies that U^k always passes the test in step 5 of <ref>. Therefore, under the assumption of <ref>, the output of DeepFlip is a vector x^s∈F_2^n.To prove the lemma, let us assume for the moment that |F^s|≤α n. Given the correctness of this assertion, by applying <ref> in concert with (<ref>) gives thatδ d_0-1/d_0-1· c|F^s|≤ |N_≤ d_0-1(F^s)|≤|U^s|.Moreover, by combining the above equation and <ref> (b), we have thatδ d_0-1/d_0-1· c|F^s|≤|U^s|≤ϵ_4|U|≤δ d_0-1/d_0-1·(1-ϵ_3)· c|F|,which implies that|F^s|≤ (1-ϵ_3)|F|,as needed.Therefore, it remains to show that |F^s|≤α n. We will prove by induction that for each 0≤ k≤ s, |F^k|≤α n/1+c/(d_0-t)≤α n. For the base case k=0, it follows by assumption that |F^0|≤γ n<α n/1+c/(d_0-t). Suppose that for some k∈[s] we have that |F^k-1|≤α n/1+c/(d_0-t). Since x^k=EasyFlip(x^k-1,m_k), it follows by <ref> that|F^k|≤ (1+c/d_0-t)|F^k-1|≤α n.Therefore, we have thatδ d_0-1/d_0-1· c|F^k|≤|N_≤ d_0-1(F^k)|≤|U^k|≤(1-ϵ_3)^k· cγ n≤ cγ n,where the first inequality follows from <ref>, the second inequality follows from (<ref>), and the third inequality follows from <ref> (a). 
The last equation implies that|F^k|≤d_0-1/δ d_0-1γ n<d_0/2γ n=α n/1+c/(d_0-t),as needed, where the second inequality follows from the assumption δ d_0>3 and the last equality follows from the definition of γ in <ref>.The proof of the lemma is thus completed. §.§.§ Proof ofNote that DeepFlip essentially consists of s EasyFlip invocations, which compute x^k:= EasyFlip(x^k-1,m_k) for all k∈[s]. So it suffices to analyse their total running time.We will need the vectors {z^u:u∈ R} and the counters {τ_v:v∈ L} defined in the proof of <ref>. We will also compute U^k={u∈ R:x^k_N(u)∉ C_0} for all 0≤ k≤ s and associate it with an indicator vector λ∈{0,1}^|R| to record the set of unsatisfied constraints[Assume that the coordinates of λ are labeled by constraints in R.]. Note that all of {z^u:u∈ R}, {τ_v:v∈ L}, and λ will be updated with the computation of x^k during DeepFlip.By <ref>, computing x^1:= EasyFlip(x^0,m_1) takes time O(n+|F^0|). Given x^k-1:= EasyFlip(x^k-2,m_k-1), let us analysis the running time of computing x^k:= EasyFlip(x^k-1,m_k), where k∈{2,…,s}. Recall that we divided EasyFlip roughly into two parts (see the discussion above <ref>). We will compute the running time of these two parts separately. EasyFlip (i): * Note that we know S_m_k-1, which is the set of flipped variables in the computation of x^k-1. Therefore, to compute x^k we only need to invoke Decode(x^k-1_N(u)) and compute d_H( Decode(x^k-1_N(u)),x^k-1_N(u)) for those u∈ N(S_m_k-1), since these are the only constraints that could possibly see a status change after computing x^k-1. This process takes at most O((t_0+d)|N(S_m_k-1)|) time over all the constraints in N(S_m_k-1). * Note that we also know the current value of z^u for every u∈ R, which indexes the neighbor of u that receives the flip sent from u in the computation of x^k-1. Note that z^u has at most one non-zero coordinate. Now to compute x^k, if a constraint u∈ N(S_m_k-1) sends a flip to a variable v∈ L, then we update z^u by setting z^u_v:=1 and all other coordinates 0. This process takes at most O(|N(S_m_k-1)|) time. * In total, to compute x^k EasyFlip (i) takes O((t_0+d)|N(S_m_k-1)|) time.EasyFlip (ii): * We know the current value of τ_v for every v∈ L, which counts the number of flips received by v in the computation of x^k-1. It is not hard to see that to compute x^k, a counter τ_v may change only if v is a neighbor of the constraints in N(S_m_k-1), as other constraints would not send any flip. So, updating the counters for all v∈ L takes at most O(c|N(S_m_k-1)|) time. * Lastly, we need to flip |S_m_k| variables, which needs O(|S_m_k|) time. * In total, to compute x^k EasyFlip (ii) takes O(c|N(S_m_k-1)|+|S_m_k|) time. To sum up, the running time of computing x^k is at mostO((t_0+d)|N(S_m_k-1)|)+O(c|N(S_m_k-1)|+|S_m_k|)=O((t_0+d+c)|N(S_m_k-1)|+|S_m_k|).Updating U^k. Note that in DeepFlip (see step 5 in <ref>) we also need to compute |U^k| for each k∈[s]. It is clear that U^0 can be found by invoking Check(x^0_N(u)) for all u∈ R, which takes time O(h_0|R|). Then we need to initialize λ to a vector that represents U^0, which takes time O(|R|)=O(n). Moreover, given λ which stands for U^k-1, to compute U^k we only need to invoke Check(x^k_N(u)) for those u∈ N(S_m_k), since these are the only constraints that could possibly see a status change after computing x^k. Therefore, for each k∈[s], the running time of computing |U^k| is at most O(h_0|N(S_m_k)|). Running time of DeepFlip. 
Noting that F^0=F, the running time of DeepFlip is at mostO(n+|F|)+∑_k=1^s O(|S_m_k|+|N(S_m_k)|)≤ O(n+|F|)+∑_k=1^s O(|S_m_k|) ≤ O(n+|F|)+∑_k=1^s O(|F^k|)≤ O(n+|F|)+∑_k=1^s O((1+c/d_0-t)^k|F|)=O(n+|F|),where the three inequalities follow from |N(S_m_k)|≤ c|S_m_k|,(<ref>) and <ref>, respectively. §.§ Running DeepFlip thoroughly until significantly reducing the number of unsatisfied constraints – HardSearch In this subsection, we describe and analyse HardSearch (see <ref> below). Given an input vector x∈F_2^n with at most γ n corruptions, HardSearch runs DeepFlip(x,(m_1,…,m_s)) over all choices of (m_1,…,m_s)∈[c]^s until it finds one, say (m'_1,…,m'_s), such that the number of unsatisfied constraints with respect to DeepFlip(x,(m'_1,…,m'_s)) is at most an ϵ_4-fraction of the number of unsatisfied constraints with respect to x. Then <ref> shows that the number of corruptions in x' is at most a (1-ϵ_3)-fraction of the number of corruptions in x. Therefore, running HardSearch iteratively for ℓ rounds gives us a (1-ϵ_3)^ℓ-reduction on the number of corruptions. §.§.§ Proof of To prove (i), let M be the set of vectors in [c]^s which satisfy the two conditions in <ref> with respect to F and s, where |F|≤γ n ands=s_0. By our choices of F and s, it follows by <ref> that M≠∅. By <ref>, as long as HardSearch finds a vector (m_1,…,m_s)∈ M, it would output a vector x'= DeepFlip(x,(m_1,…,m_s)) such that |U'|≤ϵ_4 |U|[This holds since (m_1,…,m_s)∈ M satisfies <ref> (b).] and |F'|≤(1-ϵ_3)|F|, as needed.It remains to prove (ii), which is an easy consequence of (i). Let F^i be the set of corruptions in x^i for all 0≤ i≤ℓ. Then by (i) for every 0≤ i≤ℓ-1, we have either x^i∈ T(G,C_0) (if |U^i|=0) or |F^i+1|≤(1-ϵ_3)|F^i| (if |U^i|≠ 0). Therefore, after at most ℓ=⌈log_1-ϵ_3(⌊d_0-1/2⌋1/γ n)⌉ iterative executions of HardSearch, the number of corrupt variables is at most(1-ϵ_3)^ℓγ n≤⌊d_0-1/2⌋1/γ n·γ n=⌊d_0-1/2⌋,as needed. §.§.§ Proof of Proof of <ref> (i). Fix an arbitrary total order on [c]^s, and assume that HardSearch go through all vectors in [c]^s increasingly in this order. Note that in the worst case, HardSearch could invoke DeepFlip for c^s times. We will estimate the running time of HardSearch for this worst case.Note that the input vectors of all DeepFlip invocations are the same, i.e., equal to x, and we do not want to write down the whole vector x from time to time, as it could cost a lot of time. Instead, we will use a binary vector w∈F_2^n to record the flipped variables during each invocation of DeepFlip.If the current choice (m_1,…,m_s)∈[c]^s does not satisfy step 6 of HardSearch, then one can use w together with the current value of x' to recover x (indeed, x=x'+w) and then turn to a new choice of(m_1,…,m_s)∈[c]^s.We will show that updating w in any DeepFlip invocation costs only O(|F|) time, where F is the set of corrupt variables of x. Indeed, we initially set w=0^n and if a variable v∈ L is flipped, then w_v is increased by 1 (addition modulo 2). Note that DeepFlip(x,(m_1,…,m_s)) flips a total number of ∑_k=1^s|S_m_k| variables. For each k∈[s], let F^k be the set of corrupt variables of DeepFlip(x,(m_1,…,m_k)). It follows that both wt(w) and the time of updating w can be bounded from the above byO(∑_k=1^s|S_m_k|)=O(∑_k=0^s-1|F^k|)=O(∑_k=0^s-1(1+c/d_0-t)^k|F|)=O(|F|),where the second and third inequalities follow from (<ref>) and <ref>, respectively.Observe that HardSearch contains at most c^s DeepFlip invocations and each DeepFlip contains s EasyFlip invocations. 
For notational convenience, we use the pair (j,k)∈[c^s]×[s] to index the (j,k)-th EasyFlip in HardSearch.* In the 1st DeepFlip, initializing and updating w take O(n+|F|) time. Therefore, by <ref>, the running time of the 1st DeepFlip is O(n+|F|).* For each 2≤ j≤ c^s, we will compute the running time of the j-th DeepFlip as follows. * First, we will use w to recover the input vector x of HardSearch. More precisely, we flip all x_v with w_v=1, which takes O( wt(w))=O(|F|) time, as shown by (<ref>).* Note that the (j,1)-th EasyFlip only needs to invoke Decode for the neighbors of the variables {v∈ L:w_v=1},as the status of the other constraints would not change. After that,we reset w to 0^n. The above process altogether takes O(|F|) time.* For 2≤ k≤ s, the analysis of the running time of the (j,k)-th EasyFlip is similar to the one presented in the proof <ref>, which takes O(|F|) time.* Note that |U| and |U'| are already known in each DeepFlip invocation, as shown in the proof of <ref>.To sum up, the j-th DeepFlip requires O(|F|) time.Therefore, the running time of HardSearch is O(n+|F|). Proof of <ref> (ii). Let x^0=x be the input vector of MainDecode and F^0 be the set of corrupt variables of x^0. Note that MainDecode contains ℓ HardSearch invocations. For every i∈[ℓ], let x^i:=HardSearch(x^i-1) and F^i be the set of corrupt variables in x^i. Moreover, similar to the notation above, we use the triple (i,j,k)∈[ℓ]×[c^s]×[s] to index the (i,j,k)-th EasyFlip and the pair (i,j)∈ [ℓ]×[c^s] to index the (i,j)-th DeepFlip in MainDecode.* By <ref> (i), computing x^1:=HardSearch(x^0) takes O(n+|F^0|) time.* For each 2≤ i≤ℓ, we analysis the running time of computing x^i:=HardSearch(x^i-1) as follows. * First, we need to reset the vector w, which records the flipped variables of x^i-2 in computing x^i-1, to 0^n. By (<ref>) the above process takes time O( wt(w))=O(|F^i-2|). We also need to update w during the computation of x^i, which takes time O(|F^i-1|) by (<ref>). * Note that the (i,1,1)-th EasyFlip only needs to invoke Decode for the neighbors of the variables that are flipped in the (i-1,c^s,s)-th EasyFlip, which takes time O(|F^i-2|), as by (<ref>) the size of the neighbors of those variables are bounded by O(|F^i-2|). After that, by <ref> we need an extra O(|F^i-1|) time to finish this EasyFlip.For 2≤ k≤ s, the analysis of the running time of the (i,1,k)-th EasyFlip is similar to the one presented in the proof of <ref>, which is at most O(|F^i-1|). Therefore, the (i,1)-th DeepFlip takes O(|F^i-2|+|F^i-1|) time.* For 2≤ j≤ c^s, similar to the discussion in the proof of (i), the running time of the (i,j)-th DeepFlip takes O(|F^i-1|) time.* The set U^i-1 is already computed in the (i-1)-th HardSearch.Therefore, the i-th HardSearch takes time O(|F^i-2|+|F^i-1|). To sum up, the total running time of MainDecode isO(n+|F^0|)+∑_i=2^ℓ O(|F^i-2|+|F^i-1|)= O(n+|F^0|)+∑_i=0^ℓ-1O(|F^i|) =O(n+|F^0|)+∑_i=0^ℓ-1O((1-ϵ_3)^i|F^0|)=O(n),where the second equality follows from <ref>.§ RANDOMIZED DECODING: PROOF OFIn this section, we present our randomized decoding for Tanner codes which can correct more errors. The general strategy is as follows. We use a voting process to derive a set S of candidate variables to flip. Then we design a special sampling process to pick a large fraction of variables from S and flip them. 
We repeat the above operations until the number of corrupted variables is guaranteed to be below γ n, in which case our deterministic decoding MainDecode in <ref> can work correctly, or we run out of time and stop. Finally, we use MainDecode to get the codeword.

Let γ be the relative decoding radius of Theorem <ref>. The exact randomized decoding is given as <ref>.

Let G be a (c,d,α,δ)-bipartite expander and C_0 be a [d,k_0,d_0]-linear code, where c,d,α,δ,d_0,k_0 are positive constants. If δ d_0 > 3, then there exists a linear-time randomized decoding algorithm for the Tanner code T(G,C_0) such that if the input has at most α n errors from a codeword, then with probability 1-exp{-Θ_c,δ,d_0(n)}, the decoding algorithm can output the correct codeword.

§.§ Proof of

Recall that we defined A to be the set of constraints u∈ R that send a flip and for which Decode(x_N(u)) computes the correct codeword in C_0 (see (<ref>)), and B to be the set of constraints u∈ R that send a flip and for which Decode(x_N(u)) computes an incorrect codeword in C_0 (see (<ref>)). Also, recall that we let α_m denote the fraction of corrupt variables in S_m (see (<ref>)) and we let β_m denote the fraction of flips sent from A to S_m among all flips received by S_m (see (<ref>)). Now we let M:= A ∪ B.

Next, we bound the size of P in an arbitrary iteration.

For every constant ϵ > 0, with probability ≥ 1 - exp{-Θ_c,ϵ(|M|)}, the size of P is in [(1-ϵ)|M|/2c , (1+ϵ)|M|/2c].

In Algorithm <ref>, for every m∈ [c], each variable in S_m is picked independently with probability m/2c. For each v∈ S, let X_v be the indicator random variable of the event that the variable v is picked. So for every v∈ S_m, ℙ[X_v = 1] = m/2c. Let X = ∑_v∈ S X_v. It is easy to see that X = |P|. By the linearity of expectation, we have that

𝔼[X] = ∑_v ∈ S𝔼[X_v] = ∑_m∈ [c]m/2c|S_m| = |M|/2c,

where the last equality follows from (<ref>). By Hoeffding's inequality,

ℙ[ X∈[(1-ϵ)|M|/2c , (1+ϵ) |M|/2c]] ≥ 1- 2exp{-2 (ϵ|M|/2c)^2/|S|}≥ 1- 2exp{-ϵ^2|M|/2c^2},

where the second inequality follows from the fact that |S| ≤ |M|.

Next, we show that P contains significantly more corrupted variables than uncorrupted variables.

There exists a constant ϵ such that with probability ≥ 1 - exp{-Θ_c,ϵ(|M|)}, the number of corrupted variables in P is at least (1/2+ϵ)|M|/2c.

For every v∈ S, let Y_v be the indicator random variable of the event that X_v=1 and v∈ F. Let Y=∑_v∈ SY_v. By definition, Y=|P∩ F|. Note that for every v∉ S∩ F, ℙ[Y_v = 1] = 0. By the linearity of expectation, we have that

𝔼[Y] = ∑_v∈ S𝔼[Y_v] = ∑_m∈ [c]∑_v∈ S_m𝔼[Y_v] = ∑_m∈ [c]∑_v∈ S_m∩ Fm/2c = ∑_m∈ [c]m/2cα_m|S_m| ≥ ∑_m∈ [c]m/2cβ_m |S_m|,

where the inequality follows from <ref>.

By the definition of β_m, the quantity mβ_m |S_m| is exactly the number of flips sent from A to S_m. Hence, one can infer that

𝔼[Y] ≥ ∑_m∈ [c](number of flips sent from A to S_m)/2c = (total number of flips sent by A)/2c = |A|/2c,

where the last equality holds because each constraint in R can send at most one flip, so A sends exactly |A| flips in total.

Set ϵ=ϵ_0δ^2/4>0. It follows by (<ref>) that

|A|≥(1/2+ϵ_0δ^2/2)|M|=(1/2+2ϵ)|M|.

Thus, one can infer that

𝔼[Y] ≥ |A|/2c ≥ (1/2+ 2ϵ) |M|/2c.

By Hoeffding's inequality,

ℙ[ Y ≥ ( 1/2+ ϵ) |M|/2c] ≥ 1-exp{-2 (ϵ|M|/2c)^2/|S|}≥ 1-exp{-ϵ^2|M|/2c^2},

where the second inequality follows from the fact that |S| ≤ |M|.

The next lemma shows that as long as the number of unsatisfied constraints is small enough, we can ensure that the number of corrupt variables is at most γ n. Hence we can finish the decoding with MainDecode.

If |U| ≤( δ - 1/d_0) cγ n and |F| ≤α n, then |F| ≤γ n.

Suppose that γ n<|F|≤α n. By <ref>, we have that

|U|≥δ d_0-1/d_0-1c|F|>(δ -1/d_0)cγ n,

which is a contradiction.
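A hedged sketch of the sampling step just analysed (ours, not the paper's pseudocode; the array flips_received and the random source are placeholders): each variable that received m flips, i.e. each member of S_m, is put into P independently with probability m/2c, and every variable in P is flipped.

import random

def sample_and_flip(x, flips_received, c, rng=random):
    # flips_received[v] = m means v belongs to S_m (0 means v received no flip)
    P = [v for v, m in enumerate(flips_received)
         if m >= 1 and rng.random() < m / (2 * c)]
    x_new = list(x)
    for v in P:
        x_new[v] ^= 1                          # flip every picked variable
    return x_new, P

With this rule, 𝔼|P| = ∑_m∈[c] (m/2c)|S_m| = |M|/2c, which is exactly the quantity shown to concentrate in the first lemma above.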
Next, we show that if the input has at most α n corruptions, then the decoding can give the correct codeword with a high probability.If an input has distance at most α n from a codeword, then with probability 1-exp{ - Θ_c,δ,d_0(n) }, <ref> outputs the correct codeword. In each iteration, consider the case that the number of errors |F| is at most α n. If |U| ≤( δ - 1/d_0) cγ n, then by <ref>, |F| ≤γ n. Therefore, it follows by <ref> that when δ d_0 > 3, all errors can be corrected. Otherwise, we claim that the number of corrupt variables can be decreased by a constant fraction in this iteration.Recall that M=A∪ B⊆ U. It follows by (<ref>) that|M|=|A|+|B| ≥δ(t+1)-1/tc|F|.Note that by <ref>, with probability 1-exp{-Θ_c,δ,d_0(n) }, the number of corruptions in P is at least (1/2+ϵ)|M|/2c where ϵ>0 is a constant. Also note that by <ref>, with probability 1-exp{-Θ_c,δ,d_0(n) }, the size of P is in [(1-ϵ/2)|M|/2c , (1+ϵ/2) |M|/2c]. When both of the above events happen, by flipping all variables in P, the number of corruptions is reduced by at least 3ϵ |M|/ 4c. It follows by (<ref>) that3ϵ |M|/4c≥3ϵ(δ(t+1)-1)/4t|F|.This shows the number of corrupt variables indeed is decreased by a constant fraction in this iteration.As a result, after at most logγ/α/log(1-3ϵ(δ(t+1)-1)/4t)iterations, the number of corruptions is at most γ n. Then the decoding can call <ref> to correct all errors. <ref> runs in linear time.Notice that the number of iterations is a constant. For each iteration, running the decoder of C_0 for all right vertices takes linear time. Sending messages and deriving S_i, i∈ [c] also take linear time, since each right vertex can send at most 1 message. For picking P, notice that picking each variable uses a constant number of bits since the probability of picking 1 variable is a constant. Flipping bits in P also takes linear time as P ⊆ [n]. By <ref>, the MainDecode process takes linear time. So the overall running time is linear. If the input word has at most α n errors, then by <ref>, with probability 1-exp{ -Θ_c, δ, d_0(n) }, the decoding outputs the correct codeword. Moreover, the running time is linear by <ref>. § A LOWER BOUND ON : PROOF OFIn this section we will prove the following result, which provides a necessary condition δ d_0>1 for <ref>. Our proof borrows some ideas from a result of Viderman <cit.>, which showed that δ>1/2 is necessary for <ref>. For every d,d_0≥2 and n≥ 10d_0, there exist constants 0<α<1,c≥3 and a (c,d,0.9α,1/d_0)-bipartite expander G with V(G)=L∪ R and |L|=n such that for every [d,k_0,d_0]-linear code C_0, the Tanner code T(G,C_0) has minimum Hamming distance at most d_0.The required graph G is constructed as follows. We will ignore the floorings and ceilings whenever they are not important. Assume that V(G) admits a bipartition with V(G)=L∪ R, where L=[n]. Let L_1=[n-d_0], L_2=L\ L_1={n-d_0+1,…,n}, and R=R_1∪ R_2∪ R_3, where the R_i's are pairwise disjoint, and are constructed as follows. * Given d,d_0≥ 2 and n≥ 10d_0, by <ref>, there exist a constant 0<α<1 and an integer c≥ 2 such that there exists a (c-1,d,α,3/4)-bipartite expander G_1 with bipartition V(G_1)=L_1∪ R_1 and |L_1|=n-d_0.* Let R_2={u_1,…,u_c} be a set of c constraints, where for each i∈ [c],N(u_i)={(i-1)(d-d_0)+1,…,(i-1)(d-d_0)+d-d_0}∪ L_2. * Set m=n-d_0-(d-d_0)c/d. Let R_3={u'_1,…,u'_m} be a set of m constraints, where for each i∈[m],N(u'_i)={(i-1)d+1+(d-d_0)c,…,(i-1)d+d+(d-d_0)c}.It is routine to check that G is (c,d)-regular. 
Next, we will show that every subset S of L with |S|≤0.9α n has at least c|S|/d_0 neighbors in R. Let S_1=S∩ L_1 and S_2=S∖ S_1. It follows by n≥ 10d_0 that |S_1|≤0.9α n≤α(n-d_0)=α |L_1|. Therefore, we have that

|N(S_1)|≥|N(S_1)∩ R_1|≥3/4(c-1)|S_1|≥(3/4-1/4)c|S_1|=1/2c|S_1|≥1/d_0c|S_1|,

where the third inequality follows from c≥3 and the last inequality follows from d_0≥2. Moreover, by the definition of G, if S_2≠∅ then |N(S_2)|=c. Therefore, it follows by |S_2|≤ d_0 that

|N(S_2)|≥1/d_0c|S_2|.

Combining the two inequalities above, one can infer that

|N(S)| =|N(S)∩ (R_1∪R_3)|+|N(S)∩ R_2|≥ |N(S_1)∩ R_1|+|N(S_2)∩ R_2|≥1/d_0c|S_1|+1/d_0c|S_2|=1/d_0c|S|,

which implies that G is a (c,d,0.9α,1/d_0)-bipartite expander.

Let C_0⊆F_2^d be a [d,k_0,d_0]-linear code and assume without loss of generality that 0^d-d_01^d_0∈ C_0. It suffices to show that y:=0^n-d_01^d_0∈ T(G,C_0), which implies that d(T(G,C_0))≤ d_0 as T(G,C_0) is a linear code. Indeed, it is routine to check that for every u∈ R_1∪ R_3, y_N(u)=0^d∈ C_0 and for every u∈ R_2, y_N(u)=0^d-d_01^d_0∈ C_0, as needed.
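The final membership check can be made explicit by the following hedged sketch (ours; the neighbourhood list and the representation of C_0 as a set of bit tuples are placeholders): a word y lies in T(G,C_0) exactly when its restriction to every constraint neighbourhood is a codeword of C_0.

def in_tanner_code(y, right_nbrs, C0):
    # y: list of bits indexed by L; right_nbrs[u]: the neighbourhood N(u) of constraint u
    # C0: the inner code given as a set of length-d bit tuples
    return all(tuple(y[v] for v in nbrs) in C0 for nbrs in right_nbrs)

# For y = [0]*(n - d_0) + [1]*d_0 and the neighbourhoods of the construction above,
# every restriction equals 0^d or 0^{d-d_0}1^{d_0}; assuming both belong to C_0,
# the check passes and hence d(T(G, C_0)) <= d_0.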
http://arxiv.org/abs/2312.16087v1
{ "authors": [ "Kuan Cheng", "Minghui Ouyang", "Chong Shangguan", "Yuanting Shen" ], "categories": [ "cs.IT", "math.CO", "math.IT" ], "primary_category": "cs.IT", "published": "20231226152149", "title": "Improved decoding of expander codes: fundamental trade-off between expansion ratio and minimum distance of inner code" }
http://arxiv.org/abs/2312.16547v1
{ "authors": [ "Devriş İşler", "Elisa Cabana", "Alvaro Garcia-Recuero", "Georgia Koutrika", "Nikolaos Laoutaris" ], "categories": [ "cs.CR", "cs.DB" ], "primary_category": "cs.CR", "published": "20231227121759", "title": "FreqyWM: Frequency Watermarking for the New Data Economy" }
Improving One-Shot Transmission in NR Sidelink Resource Allocation for V2X Communication Hojeong Lee and Hyogon Kim Received ...; accepted... ========================================================================================emptyThe Society of Automotive Engineers (SAE) has specified a wireless channel congestion control algorithm for cellular vehicle-to-everything (C-V2X) communication in J3161/1. A notable aspect of J3161/1 standard is that it addresses persistent packet collisions between neighboring vehicles. Although the chances are slim, the persistent collisions can cause so called the wireless blind spot once the event takes place. Then the involved vehicles cannot inform their presence to neighboring vehicles, an extremely dangerous condition for driving safety. J3161's solution to the problem is stochastic one-shot transmission, where the transmission occasionally occurs in a resource that is not originally reserved. Through the one-shot transmission, the worst-case packet inter-reception time (PIR) is bounded and the wireless blind spot problem can be effectively mitigated. Interestingly, the standard one-shot transmission scheme does not resolve the persistent collision relation itself. This paper shows that by breaking out of the relation as soon as the persistent collision condition is identified, vehicles can improve the worst-case PIR by approximately 500 ms, the number of packet collisions per persistent collision event by 10%, and the number of total collisions by 15% to 57% over the standard one-shot transmission.SAE J3161/1, one-shot transmission, persistent collision, communication outage, packet inter-reception time (PIR), packet reception ratio (PRR).§ INTRODUCTIONThe 3rd Generation Partnership Project (3GPP) has been standardizing Cellular V2X (C-V2X) communication technology since Release 14. One of the core features of the C-V2X standards is the sidelink communication that enables direct communication between vehicles. The wireless resource for the sidelink communication can be allocated in either centralized or distributed manner. The base station orchestrates the allocation in the former (called Mode 1), whereas vehicles autonomously allocate the resource using a distributed algorithm in the latter (called Mode 2) <cit.>. Because vehicles could be out of network coverage, Mode 2 is considered the base mode. The distributed resource allocation algorithm used in Mode 2 is called the Sensing-Based Semi-Persistent Scheduling (SB-SPS) <cit.>. As the name implies, the SB-SPS algorithm (henceforth “SPS” for convenience) utilizes the sensed resource use pattern of neighbor vehicles to avoid the resource locations where the neighbors are expected to transmit. It also utilizes each neighbor’s explicit resource reservation information for its next transmission written in the control part of the packet <cit.>. Once a host vehicle decides on a resource least likely to be used by its neighbors, it uses the same resource for a certain number of subsequent packet transmissions without additional signaling, hence the name Semi-Persistent Scheduling. In this paper, we will call this series of packets using the same frequency resource over the average one-second period by the name of a “packet run.” Note that the same resource can be kept in the next packet run with the resource keep probability 0 ≤ P_k ≤ 0.8 <cit.>, so it can be a few seconds before a vehicle re-selects a different resource <cit.>.An issue with SPS is that it cannot completely eliminate packet collisions. 
For instance, two vehicles that came to re-select a resource almost simultaneously can pick the same resource. If their message transmission periods are the same, the packet collision will repeat at every subsequent message transmission. The higher the resource keep probability P_k, the longer the repeated packet collisions. At a given resource keep probability P_k, the number of packet runs that retain the same frequency resource at a vehicle is geometrically distributed. Then, the two vehicles will break out of the consecutive packet collisions relation as the shorter of the two packet runs expires and the vehicle with the shorter run re-selects. Here, the average number of the shorter packet run is given by𝔼[min(l_1,l_2)] = 1/2p-p^2for p = 1-P_k. According to Irwin-Hall distribution, the number of packet transmissions in the packet runs tends to a normal distribution centered at lX where X is the run length of each packet run. For instance, X=10 if the messaging period is 100 ms. As Fig. <ref> shows, the average duration in Eq. (<ref>) can easily persist for over a few seconds once the repeated collision event takes place, especially as P_k → 0.8. The worst-case duration can be longer. Note that blindly reducing P_k is not a solution, either, because it makes re-selections more frequent in SPS. Consequently, it causes more packet collisions and generally leads to poor packet delivery performance <cit.>.The cost of a long, repeated-collision event is potentially high. The event renders the involved vehicles to be unrecognized by neighboring vehicles and by each other during the persistent collision event. Onboard sensors, such as radars, LiDARs, and cameras may still be able to maintain awareness in many situations, but in others like the non-light-of-sight (NLoS) positions, neighboring vehicle movements become difficult to trace under the consecutive packet loss event. Therefore, in safety-critical situations, this communication lull may push a vehicle into dangerous driving conditions <cit.>, which might even be worse off than not relying on the C-V2X safety communication at all.In order to address the persistent packet collision problem, the Society of Automotive Engineers (SAE) J3161/1 stipulates stochastic One-Shot Transmission mechanism for C-V2X communication environment <cit.>. Under the scheme, vehicles occasionally transmit their packets in a resource that is not originally reserved (see Fig. <ref>(a)). Through the one-shot transmission, the worst-case packet inter-reception time (PIR) is bounded and the wireless blind spot problem can be effectively mitigated. Interestingly, however, the standard one-shot transmission scheme does not resolve the persistent collision relation itself. Such passive approach neglects unnecessary packet collisions thereby degrading packet delivery performance under SPS. In this paper, we demonstrate that by exiting the relation as soon as the persistent collision condition is identified, the vehicles can further improve PIR and packet reception ratio (PRR) than in the standard one-shot transmission. Compared with the duration of repeated collisions in SPS (Fig. <ref>), the proposed scheme reduces the worst-case PIR to less than 3 seconds at the farthest required communication range of 320 meters <cit.> for P_k=0.8. Note that this performance is approximately the minimum duration in SPS. Therefore, the proposed scheme significantly improves the long-tail PIR values. 
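To put the expected duration 𝔼[min(l_1,l_2)] = 1/(2p−p²) quoted above in concrete terms: for P_k = 0.8 (p = 0.2) it gives 1/0.36 ≈ 2.78 packet runs, i.e. roughly 2.8 s of repeated collisions on average at 10 transmissions per run and 100 ms per transmission. The following Monte Carlo sketch (ours; it assumes l_1 and l_2 are independent geometric variables on {1,2,…} with parameter p = 1−P_k, as stated in the text) reproduces this number.

import random

def geometric(p, rng=random):
    # number of Bernoulli(p) trials up to and including the first success
    k = 1
    while rng.random() >= p:
        k += 1
    return k

def mean_collision_runs(p_keep, trials=200_000):
    p = 1.0 - p_keep
    return sum(min(geometric(p), geometric(p)) for _ in range(trials)) / trials

runs = mean_collision_runs(0.8)        # ~2.78, matching 1/(2p - p^2) = 1/0.36
print(runs, runs * 10 * 0.1, "s")      # ~2.8 s at RRP = 100 ms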
Compared with the standard J3161/1 one-shot mechanism, the proposed solution reduces the worst-case PIR by approximately 500 ms, the number of packet collisions per persistent collision event by 10%, and the total number of collisions by 15% to 57%. The rest of the paper is organized as follows. Section <ref> briefly discusses the prior work on resolving the problem of repeated packet collisions. In particular, it includes the previous efforts to improve the performance of one-shot transmission. Section <ref> provides the background on SPS resource allocation and the standard J3161/1 one-shot transmission. It then discusses the proposed modification to SPS. Section <ref> evaluates the performance of SPS, one-shot, and the proposed scheme in terms of PRR, PIR, and the collision statistics. Finally, Section <ref> concludes the paper.

§ RELATED WORK

Bazzi et al. <cit.> proposed to curtail semi-persistent resource use at a fixed length, overriding repeated reselection of the same resource beyond a certain limit, even when such reselection is prescribed by a high probability of keeping the same frequency resource. As a side-effect, any persistent collisions are limited to a deterministic length. However, the proposed limit is significantly longer than that stipulated for the one-shot transmission in J3161/1. Gholmieh et al. <cit.> showed that continuous packet collisions can be reduced through one-shot transmission without noticeable loss of packet reception ratio (PRR) performance. Fouda et al. <cit.> analyzed the performance according to the range of the one-shot counter, and showed that the tail performance of information age (IA) and inter-packet gap (IPG) was better when the one-shot counter range was [2..6] rather than the larger [5..15]. Saifuddin et al. <cit.> showed that under J3161/1 congestion control, a timer-based one-shot transmission performs better than the standard counter-based scheme. This is because when the J3161/1 congestion control intervenes, the Inter-Transmit Time (ITT) increases, which implies that as the vehicle density increases, the time it takes for the one-shot counter to reach zero also increases. Consequently, the duration required to escape from consecutive collisions becomes longer, particularly compared to situations with lower vehicle density.

Unfortunately, none of the counter-based or timer-based one-shot transmission methods proposed so far resolves the persistent collision condition itself. However, through the one-shot transmission a vehicle can ascertain whether another vehicle is using the same resource. Therefore, it is neither desirable nor necessary to resume transmission on a resource identified as involved in the persistent collision condition. The enhancement discussed below shows that, by breaking out of the condition as soon as it is identified, packet delivery performance is indeed improved.

§ SOLUTION APPROACH

§.§ Background

Under SPS, each vehicle runs a re-selection counter that is decremented upon each transmission. Once the frequency resource for the next packet run is re-selected, the counter is randomly initialized to a value in

[5×100/max(RRP,20) : 15×100/max(RRP,20)]

where RRP is the resource reservation period in SPS <cit.>.
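As a quick worked example of the initialization range above (ours, for illustration only): RRP = 100 ms gives [5 : 15], RRP = 50 ms gives [10 : 30], and any RRP at or below 20 ms gives [25 : 75].

def reselection_counter_range(rrp_ms):
    # [5*100/max(RRP,20) : 15*100/max(RRP,20)] from the SPS rule cited above
    scale = 100 / max(rrp_ms, 20)
    return int(5 * scale), int(15 * scale)

print(reselection_counter_range(100))   # (5, 15)
print(reselection_counter_range(50))    # (10, 30)
print(reselection_counter_range(20))    # (25, 75)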
In this paper, we will assume RRP=100 ms for illustrative purposes because it is the most popular messaging period for safety-related V2X applications, although the proposed scheme also applies to other RRP configurations.

SAE J3161/1 also defines a congestion control algorithm, where the gap between the transmitted packets, called the Inter-Transmission Time (ITT), is controlled as follows <cit.>:

ITT[s] = 0.1 if VD ≤ 25;  VD/250 if 25 < VD < 150;  0.6 if 150 ≤ VD,

where the average vehicle density VD is obtained from the instantaneous VD as

VD(t) ← 0.05 × VD + 0.95 × VD(t-1).

Here the instantaneous VD is the number of vehicles within 100 m of the observing vehicle.

§.§ One-shot transmission in J3161/1

The stochastic one-shot transmission in J3161/1 effectively mitigates the continuous collision problem by providing a mechanism to occasionally step aside and use a resource different from the currently reserved one. It is executed more frequently than re-selection, and is executed regardless of whether or not a persistent collision event is ongoing. In case a persistent collision event is underway, the one-shot transmission fragments the long series of packet collisions into many smaller ones. Compared with the unmodified SPS, therefore, neighbor vehicles occasionally have a chance to notice a vehicle that was formerly unheard due to consecutive collisions.

According to SAE J3161/1, each vehicle additionally runs a one-shot counter C_O. It is randomly initialized to a value in [2:6], and decremented with each transmission under SPS. Let R_sps denote the resource allocated by SPS, and R_os the resource used for one-shot transmission. Based on the values of C_R and C_O, the following logic is executed:

* If C_O=0 and C_R = 0
 * Transmit in R_os instead of in R_sps; Initialize C_O
 * Perform re-selection conditioned on P_keep; Initialize C_R
* If C_O=0 and C_R ≠ 0
 * Transmit in R_os instead of in R_sps; Initialize C_O
 * Do not decrement C_R
* If C_O ≠ 0 and C_R = 0
 * Perform re-selection conditioned on P_keep
 * If moving, initialize C_R and C_O
 * If not moving, initialize only C_R

In case neither counter is 0, a normal SPS-reserved transmission takes place and both counters are decremented by 1. Notice that there is no provision to exit the persistent packet collision condition itself.

§.§ Proposed enhancement

We propose an enhancement that curtails the persistent collision event as soon as an involved vehicle finds it. The vehicle then performs the SPS re-selection, exiting the collision relation. In this paper, we assume that the one-shot resource R_os is randomly selected among the candidate resources recommended by SPS when R_sps was selected, and that it lies within 3 slots after R_sps. This is to minimize the interference to other vehicles' SPS transmissions that could happen at the chosen one-shot resource. Also, limiting the time offset of the one-shot transmission to within 3 slots of the originally reserved resource minimizes the impact on the packet delay budget (PDB). If no SPS-recommended resource exists within the time limit, random selection is performed under the same time constraint. Depending on the application, the PDB constraint may be less strict; then a one-shot resource can be selected with a larger offset, reducing the potential for packet collisions. Since the proposed scheme utilizes the information already available after the R_sps selection, it does not incur the additional processing overhead that would be incurred if we separately invoked the complex SPS processing.
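For reference, the counter handling described above can be sketched as follows. This is a minimal sketch of ours, not standard text: the state object, reselect(), init_C_R() and the choice of R_os are placeholders, and we assume that the Condition (3) transmission simply uses the newly selected R_sps. The next paragraph spells out how the proposed enhancement modifies Condition (2).

import random

def next_transmission(st, reselect, init_C_R, rng=random):
    # st carries C_O, C_R, R_sps, R_os and a 'moving' flag; returns the
    # resource used for this transmission and updates the counters.
    if st.C_O == 0 and st.C_R == 0:            # Condition (1)
        resource = st.R_os                     # one-shot instead of R_sps
        st.C_O = rng.randint(2, 6)
        st.R_sps = reselect(st)                # re-selection conditioned on P_keep
        st.C_R = init_C_R()
    elif st.C_O == 0:                          # Condition (2): C_R != 0
        resource = st.R_os
        st.C_O = rng.randint(2, 6)             # C_R is left untouched here
    elif st.C_R == 0:                          # Condition (3)
        st.R_sps = reselect(st)
        st.C_R = init_C_R()
        if st.moving:
            st.C_O = rng.randint(2, 6)
        resource = st.R_sps
    else:                                      # normal SPS transmission
        resource = st.R_sps
        st.C_O -= 1
        st.C_R -= 1
    return resource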
In the enhancement, in Condition (2) in the logic presented above, we replace * Do not decrement C_Rwith * If other vehicle is observed to transmit in R_sps, perform SPS re-selection; Initialize C_R and C_O* Otherwise, do not decrement C_R Fig. <ref>(a) and (b) illustrate the difference between J3161/1's one-shot transmission and the enhancement. In the figure, `UE' refers to a user equipment, which is the on-board V2X communication unit on a vehicle. The standard one-shot scheme in Fig. <ref>(a) detects the persistent collision condition by intermittently performing one-shot transmission. However, even after it finds the condition through the one-shot transmission, the involved vehicles UE1 and UE2 do not exit the condition but keep using the collision resource R_sps for future transmissions until the next re-selection. In the proposed enhancement (Fig. <ref>(b)), however, the persistent collision condition itself is resolved by the vehicle that first realizes the condition and immediately moves to the one-shot resource R_os. Namely, UE1 first performs the one-shot transmission and realizes that its previous resource R_sps has another user UE2, so UE1 performs a re-reselection through SPS and moves to a different resource. The persistent packet collision condition is thus broken. Finally, when three or more vehicles are involved in a persistent packet collision condition, the condition can be found by the one-shot performing vehicle through high Received Signal Strength Indicator (RSSI) value even if the transmission(s) there is not decoded. But the probability of such multi-vehicle persistent collisions is deemed low, so we focus on the two-vehicle case in this paper.§ PERFORMANCE EVALUATIONIn this section, we evaluate the impact of the proposed enhancement in terms of packet delivery performance. To this end, we conducted simulation experiments comparing three schemes in the C-V2X environment, as follows: * SPS without one-shot transmission (“SPS”)* One-shot transmission as defined by J3161/1 (“One-shot”)* Proposed solution (“Proposed”)As to the metrics, we utilized the distribution of Packet Inter-Reception time (PIR), number of consecutive collisions and collision events, and Packet Reception Ratio (PRR). PIR and PRR are the most significant measures of packet delivery performance in V2X communication and are defined as follows <cit.>: * PRR: ratio of successfully received packets to the total packets transmitted within 320 meters and binned into 20-meter segments * PIR: time elapsed between two successive receptions of two different packets transmitted from the same vehicle within the same range of 320 metersempty§.§ Simulation settingFor simulation experiments, we employed LTEV2Vsim <cit.>, an open-source simulator designed for C-V2X environment.Specific simulation parameter values are summarized in Table <ref>. Note that the presented results are obtained with J3161/1 congestion control enabled whereas hybrid automatic repeat and request (HARQ) disabled because we consider periodic broadcast traffic such as Basic Safety Message (BSM) <cit.>. Also, we do not use the blind retransmission because it doubles the bandwidth use, which can cause a severe congestion and the J3161/1 congestion control will increase the ITT. It will make information update much less frequent, betraying the very purpose of periodic beaconing through BSM packets.§.§ Packet inter-reception time Fig. <ref> shows how One-shot and Proposed improve the tail PIR distribution over the unmodified SPS. 
For ρ=100 veh./km (or equivalently VD=20 in Eq. (<ref>)), One-shot and Proposed can respectively limit the PIR below 2,200 and 1,700 ms at 10^-6 probability whereas SPS has high PIR with slowly decreasing probability. In fact, SPS cannot suppress the PIR under 3,500 ms even at 10^-4 probability in any traffic density. For other vehicle densities, we can observe similarly significant improvement enabled by the two one-shot schemes. However, the improvement is not free. In particular, the standard one-shot transmission trades extremely large PIR values for more numerous intermediate PIR values by breaking long consecutive collision events into multiple medium-length events. For example, with ρ=100 veh./km, PIR values between 200 ms and 400 ms have increased in number. Note that it is an inevitable cost to avoid the dire situations where the periodic beacons from two close vehicles suffer from effectively becoming “ghosts" for an extended period of time <cit.>. Comparing One-shot and Proposed, the latter always outperforms former. This is because Proposed eliminates the residual consecutive collision events after the first one-shot opportunity.A further improvement could be made possible by forcing a one-shot transmission as soon as the resource re-selection is performed. Assuming that consecutive packet collisions between more than two vehicles are rare, we can force each vehicle perform a one-shot transmission for the first packet transmission after a re-selection with a probability of 1/2. In case there is indeed another vehicle that selected the same resource with the same RRP, one of the vehicles can identify the consecutive collision condition and move to other resource with probability 1/2. However, it may have a complicated affect on SPS, so we leave it for a future work. §.§ Collision statistics Fig. <ref> compares the collision characteristics of the three schemes. Fig. <ref>(a) shows the average number of consecutive collisions in a persistent collision event. Although the tail distribution information is not available as in Fig. <ref>, the average run length can be shown to decrease with the proposed enhancement. Although Proposed significantly reduces the number of such events over One-shot, the run length per event only slightly differs between them. We can easily estimate the run length for Proposed. It is the minimum X of two (colliding) one-shot runs X1,X2 ∼ U[2..6], i.e., 𝔼[X]=𝔼(min[X1,X2]) = ∑_i=2^5 2·1/5·6-i/5 = 2.4where a one-shot run is defined by the run length of packet transmissions before the one-shot counter (C_O) expires.In case of One-shot, it is slightly higher than Proposed. It can be illustrated by an example. Suppose C_O(u)=2 and C_O(v)=6 for two vehicles u and v. With Proposed, the number of collisions would be 2. However, with the standard one-shot, the number of collisions before the first one-shot by u is 2, but the next ones could be longer than this minimum length. For instance, if u chooses C_O(u) > 2, the next collision run will have a length larger than 2, increasing the average.For ρ = 200 veh./km, both One-shot and Proposed reduce the average run length of packet collisions compared to SPS, which is expected. But an unexpected result is that SPS has smaller number of consecutive collisions than at ρ=100 veh./km. This is because the J2945/1 congestion control begins to kick in, changing the packet collision dynamics. 
The table below shows that the average ITT is approximately 155 ms for ρ=200 veh./km.

ρ (veh./km):     100    200    300    400    500
Average ITT (s): 0.101  0.155  0.232  0.313  0.387

Because only discrete RRP values k·100 (k≥ 1) ms are allowed above 100 ms in SPS <cit.>, vehicles are transmitting at an ITT of either 100 ms or 200 ms. When vehicles with different RRPs meet with persistent collisions, the one with a 200 ms RRP will still suffer consecutive collisions as before, whereas the one with a 100 ms RRP will experience a collision only at every other transmission. The latter does not qualify as consecutive collisions. However, for the vehicles with a 200 ms RRP, the collision run length will also be shortened because the counterparts with a 100 ms RRP decrement their C_R twice as fast. This cuts off the consecutive collisions faster than the situation where both vehicles decrement their re-selection counters at the same rate. As ρ increases further, the population ratio between the vehicles with different RRP values affects how fast a persistent collision event is cut off by the small-RRP counterpart.

Fig. <ref>(b) reveals an important aspect of one-shot transmission schemes. In particular, One-shot increases the total number of packet collisions compared to SPS. This is natural because one-shot transmissions occur outside the resource originally reserved (R_sps). When a persistent packet collision does occur, each one-shot eliminates a collision. But with far higher probability the event does not occur, and the one-shot transmissions may only collide with SPS transmissions from other vehicles. Indeed, the figure shows that the latter outweighs the former. This is the second cost aspect of the one-shot transmission, from the perspective of packet collisions. Unlike the standard one-shot, however, the proposed enhancement has an even lower number of packet collisions than SPS. This is because once a one-shot transmission identifies the persistent collision condition, the remaining consecutive packet collisions, which both SPS and the standard one-shot scheme would retain, are avoided.

§.§ Packet reception ratio

The main purpose of one-shot transmission is to avoid the rare but dangerous V2X communication outages at all costs. However, it is worthwhile to consider how adversely average packet delivery performance is affected by it. Fig. <ref> compares the PRR for the three schemes. For readability, we show only ρ=100 and ρ=300 veh./km; other vehicle densities show the same qualitative result. Although Gholmieh et al. <cit.> argued that PRR performance is hardly affected, there is a small PRR drop according to our simulation results. We believe that it is unavoidable because one-shot transmission frequently leaves the reserved resource wasted and instead uses a non-reserved resource. Because the proposed enhancement reduces such incursions through early re-selection, it leads to slightly better PRR than the original one-shot transmission scheme. In essence, the proposed scheme does not degrade the PRR performance compared to the standard one-shot transmission scheme.

§ CONCLUSION

In safety-critical driving situations, preventing rare but dangerous communication outages is also important. The SAE J3161/1 standard prescribes the so-called one-shot transmission that stochastically allows vehicles to step aside from potential persistent packet collisions and reveal their presence.
However, it is too passive because it does not go as far as using the standard re-selection mechanism readily provided by the Semi-Persistent Scheduling (SPS) upon finding the persistent collision condition. This paper proposes an enhancement to the one-shot scheme to perform the re-selection. Proposed enhancement not only improves the PIR tail distribution but reduces the number of packet collisions over both SPS and the one-shot scheme. Moreover, it leads to slightly better PRR performance than the standard one-shot scheme.§ ACKNOWLEDGMENTSThis research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ICT Creative Consilience program (IITP-2023-2020-0-01819) supervised by the IITP (Institute for Information & communications Technology Planning & Evaluation. 1 IEEEtran38214 3GPP, NR; Physical layer procedures for data (Release 16), 3GPP TS 38.214, 2022.38212 3GPP, NR; Multiplexing and channel coding (Release 17), 3GPP TS 38.212, 2023.blindspot A. Bazzi, C. Campolo, A. Molinaro, A. O. Berthet, B. M. Masini, and A. Zanella, “On wireless blind spots in the C-V2X sidelink," IEEE Trans. Veh. Technol., vol. 69, no. 8, pp. 9239–9243, Aug. 2020.jeon18 Y. Jeon, S. Kuk, and H. Kim, “Reducing Message Collisions in Sensing-Based Semi-Persistent Scheduling (SPS) by Using Reselection Lookaheads in Cellular V2X,” Sensors, 18(12), 4388, 2018.j3161 SAE, On-Board System Requirements for LTE-V2X V2V Safety Communications, SAE J3161/1, 2022. 37885 3GPP, Technical Specification Group Radio Access Network; Study on evaluation methodology of new Vehicle-to-Everything (V2X) use cases for LTE and NR, 3GPP TR 37.885 V15.3.0, June 2019. gholmieh2021c R. Gholmieh and H. Abbasi, “C-V2X (LTE-V2X) Performance Enhancement Through SAE J3161/1 Probabilistic One-Shot Transmissions," in Proceedings of IEEE Global Communications Conference, 2021.fouda2021interleaved A. Fouda, R. Berry, and I. Vukovic, “Interleaved One-shot Semi-Persistent Scheduling for BSM Transmissions in C-V2X Networks," in Proceedings of IEEE Vehicular Networking Conference (VNC), 2021.saifuddin2023addressing M. Saifuddin, M. Zaman, Y. Fallah, and J. Rao, “Addressing Rare Outages in CV2X with Time-Controlled One-shot Resource Scheduling," TechRxiv, 2023. 38321 3GPP, NR; Medium Access Control (MAC) Protocol Specification, Release 15,v16.3.0, TS 38.321, Jan.2021.ltev2vsimG. Cecchini, A. Bazzi, B. M. Masini, and A. Zanella,“LTEV2Vsim: An LTE-V2V Simulator for the Investigation of Resource Allocation for Cooperative Awareness," in Proc. IEEE Int'l Conf. on Models and Technologies for Intelligent Transportation Systems, Naples, Italy, 2017.J2735 SAE,Dedicated short range communications (DSRC) message set dictionary, SAE J2735, 2016.
http://arxiv.org/abs/2312.15914v1
{ "authors": [ "Hojeong Lee", "Hyogon Kim" ], "categories": [ "cs.NI", "eess.SP" ], "primary_category": "cs.NI", "published": "20231226071811", "title": "Improving One-Shot Transmission in NR Sidelink Resource Allocation for V2X Communication" }
http://arxiv.org/abs/2312.16680v1
{ "authors": [ "Yaroslav Balytskyi", "Yevgen Kotukh", "Gennady Khalimov", "Sang-Yoon Chang" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20231227185133", "title": "$\\mathcal{PT}$-symmetric mapping of three states and its implementation on a cloud quantum processor" }
Department of Physics, Simon Fraser University, Burnaby, BC, V5A 1S6, Canada
Department of Physics, Simon Fraser University, Burnaby, BC, V5A 1S6, Canada
Kavli Institute for Theoretical Sciences, University of Chinese Academy of Sciences, Beijing 100190, China
Department of Physics, Simon Fraser University, Burnaby, BC, V5A 1S6, Canada
Center For Nanophase Materials Sciences, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA
Center For Nanophase Materials Sciences, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA
Department of Physics, University of Florida, Gainesville FL 32611

Dopant impurity potentials determined by ab-initio supercell DFT calculations are used to calculate the optical conductivity of overdoped LSCO and Tl-2201 in the superconducting and normal states. Vertex corrections are included, to account for the effect of forward scattering on two-particle properties. This approach was previously shown to provide good, semiquantitative agreement with measurements of superfluid density in LSCO. Here we compare calculations of conductivity with measurements of THz conductivity on LSCO using identical impurity, band, and correlation parameters, and find similarly good correspondence with experiment. In the process, we delineate the impact of the different disorder mechanisms on single-particle and transport relaxation processes. In particular, we reveal the critical role of apical oxygen vacancies in transport scattering and show that transport relaxation rates in LSCO are significantly reduced when apical oxygen vacancies are annealed out. These considerations are shown to be crucial for understanding the variability of experimental results on overdoped LSCO in samples of nominally identical doping but different types. Finally, we give predictions for Tl-2201 THz conductivity experiments.

Optical conductivity of overdoped cuprates from ab-initio out-of-plane impurity potentials
P. J. Hirschfeld
January 14, 2024

§ INTRODUCTION

The standard model of superconductivity in metals relies upon the BCS pairing instability, generalized to include attraction mediated by fluctuations other than lattice phonons. The superconducting state condenses from a Landau Fermi-liquid normal state, which can be significantly renormalized by interactions, but which nevertheless contains well-defined fermionic quasiparticles as the elementary excitations. Cuprate high-temperature superconductors continue to present clear challenges to the Landau–BCS paradigm, particularly in the underdoped to optimally doped regime, where the normal state is a strange metal and a host of intertwined orders survive as remnants of the Mott-insulating parent compound. Nevertheless, a Landau Fermi-liquid description is expected to re-emerge at sufficiently high doping levels, as kinetic energy must eventually dominate in the high density limit.
It is therefore valid and important to test the extent to which the Landau–BCS paradigm can provide a description of cuprate physics, particularly on the overdoped side.A complication in pursuing this approach is the role of disorder, which acts in nonintuitve ways in a d-wave superconductor and can mask some of the clear experimental signatures expected in the clean limit.To this end, we have been pursuing a program that attempts to accurately incorporate disorder into the calculation of physical properties of overdoped cuprates, so that the Landau–BCS approach can be tested against experiment.A particularly important set of electrodynamic measurements has been carried out on highly crystalline MBE-grown films of LSCO, in which doping has been controllably tuned across the overdoped regime.In a nutshell, these experimentsrevealed features that at first sight seemed at odds with d-wave BCS theory.Superfluid density, ρ_s, displayed a clear linear temperature dependence, a hallmark of cleansuperconductivity, but simultaneously showed a strong correlation between T_c and zero-temperature superfluid density, something that is usually only associated with the pair-breaking effects of disorder.THz spectroscopy carried out on the same samples showed that the linear temperature dependence of ρ_s was accompanied by large transport scattering rates, of the order of 2 T_c, and a large fraction of residual, uncondensed spectral weight in the T → 0 limit.Both of these observations were unexpected for a clean d-wave superconductor.Our approach to capturing this behavior in LSCO has been based on a semirealistic, tight-binding parameterization of ARPES electronic structure, <cit.> with Fermi liquid corrections applied at the lowest energies.In our early work, we employed a simplified model in which the defects were treated as point scatterers, for concreteness and computational simplicity.It was shown that weak, Born-limit scatterers, representing dopant defects located away from the CuO_2 planes, provided an excellent and internally consistent description of a wide range of physical properties, including superfluid density,<cit.> THz optical conductivity,<cit.> and thermal properties.<cit.>In the latter case, the approach was successfully extended to include Tl-2201, again basing the calculations on ARPES-determined electronic structure.<cit.>Despite the convincing agreement with experiment, a number of concerns emerged in response to that early work, in particular that the use of the Born limit implied arbitrarily small impurity potentials, inconsistent with the actual dopants in LSCO and .<cit.> In addition,questions were raised about the magnitude of thenormal-state scattering rate Γ_N,which includes the combined effect of the impurity potential and concentration, and was taken essentially as a fit parameter.Finally, the weakness of the point scatterer approximation was recognized already in Ref. Lee-Hone:2018, where it was noted that dopant impurity potentials in cuprates must have significant spatial extent, given their location outside of the CuO_2 planes.The importance of extended impurity potentials was also pointed out in Ref. 
Wang2022, although the details of those calculations have been challenged.<cit.>To address these concerns, we have embarked on a significant new program, starting with ab-initio calculations of the impurity potentials of the three main defect species in LSCO and Tl-2201:<cit.> the Sr dopant and the apical oxygen vacancy in LSCO; and the Cu defect that cross substitutes for Tl in Tl-2201.These calculations employ a Wannier-function-based approachto obtain the impurity potentials in tight-binding form, which are then used in subsequent calculations of the dirty d-wave superconductor.As expected, the ab-initio potentials are extended in real space, resulting in strongly momentum-dependent matrix elements inand requiring vertex corrections for the calculation of two-particle properites.This procedure was implemented in Ref. Ozdemir:2022 for the case of superfluid density.With the shape and magnitude of the potentials fixed by the first principles calculations, a good, semi-quantitative account of the doping and temperature dependent superfluid density in LSCO was obtained starting only from very reasonable assumptions about defect concentrations.In the present work, we extend this approach to the calculation of optical conductivity, which is a sensitive probe of the processes that relax charge currents in a d-wave superconductor.We show that with the same impurity potentials, and with assumptions about defect concentration similar toRef. Ozdemir:2022, we obtain good, semiquantitative agreement with THz spectroscopy of MBE-grown LSCO thin films,<cit.> in terms of the magnitude of the conductivity; the degree of pair breaking and residual spectral weight; and the overall scale of the transport relaxation rate.The calculations reveal a crucial role for apical oxygen vacancies in transport scattering, due to their scattering potential having nearly pointlike character, resulting in significant scattering intensity at large momentum transfers.Indeed, within the LSCO system, we can understand the significant differences in residual conductivity/resistivity between ozone-annealed microbridges<cit.> and cm^2 thin films<cit.> entirely on the basis of apical oxygen vacancies, with the residual resistivity of well-annealed microbridges approaching the intrinsic limit set by the Sr dopants on their own.Additionally, we provide predictions for Tl-2201, for which no comparable measurements of optical conductivity currently exist.We begin by briefly summarizing the methods used to capture the electronic structure of LSCO and Tl-2201, followed by sections describing how the ab-initio impurity potentials are obtained and then self-consistently incorporated into the theory of disorder pair breaking in a dirty d-wave superconductor. Readers interested in further details are referred to our earlier work on superfluid density.<cit.>We follow this with a presentation of the formalism used to calculate optical conductivity, including vertex corrections.§ HOMOGENEOUS ELECTRONIC STRUCTUREAs in our previous work on overdoped cuprates,<cit.> our models for LSCO and Tl-2201 are built on two-dimensional tight-binding parameterizations of the ARPES-determined Fermi surfaces and band structures.<cit.>The semi-realistic nature of these models is particularly important for overdoped LSCO, which undergoes a Lifshitz transition around p = 19% hole doping, as shown in Fig. 
<ref>, at which a van Hove singularity at the antinodal points passes through the Fermi level.As a result, the electronic dispersion near the antinodes is very flat. This enhances the local density of states and impurity scattering rate near the antinodes but, due to the suppression of Fermi velocity, shown in Fig. <ref>, makes the contributions from these parts of the Fermi surface relatively unimportant to two-particle, transport-like properties such as superfluid density and optical conductivity.Nevertheless, it is important that the antinodal regions be treated very carefully: as previously discussed,<cit.> calculations that convert momentum sums to Fermi-surface integrals assuming the usual infinite linearization of the electronic dispersion near the Fermi surface lead to unphysical artifacts such as a strong, doping-dependent enhancement of the overall scattering rate, and therefore of impurity pair breaking, at the Lifshitz transition.Although computationally more expensive, calculations based directly on momentum sums eliminate these artifacts, and are necessary close to the van Hove crossing.In the case of LSCO, the doping evolution of the electronic structure is captured by a doping-dependent interpolation of the ARPES tight-binding bandstructures.<cit.>The chemical potential, which is the only parameter with significant doping dependence, is set by the correspondence between hole doping and Fermi volume.In order to capture the many-body renormalization that occurs at the lowest energies, an overall mass renormalization m^∗/m = 2.5, determined independently via comparison with specific heat data,<cit.> is applied to the ARPES bands.The limited ARPES data for Tl-2201<cit.> means that the doping evolution of the Fermi surface is generated via rigid bandshift, with the doping dependence of the chemical potential again set by the Fermi volume.It should be noted that, unlike for LSCO, there is no additional many-body renormalization required for Tl-2201, as the ARPES measurements of Platé et al. were carried out at sufficiently low energies (tens of meV) to capture the low energy dispersion directly.Additional details of the tight-binding dispersions can be found in Refs. Lee-Hone:2017 and LeeHone2020.The doping evolution of the LSCO and Tl-2201 Fermi surfaces, shown in Fig. <ref>, illustrates the qualitative differences between the materials, and shows how proximity to Fermi-surface replicas in neighbouring Brillouin zones is key to understanding the importance of umklapp processes in antinodal scattering, which are sketched for the Sr dopants and Cu substituents in Figs. <ref>(d) and (i), respectively.§ IMPURITIES IN LSCO AND TL-2201The ab-initio calculations of impurity potentials for each of the three main defect species (the Sr dopant and apical oxygen vacancy in LSCO, and the Cu–Tl cross-substitution in Tl-2201) were performed in the following way, as described in more detail in Ref. Ozdemir:2022.For each type of impurity, two DFT calculations were carried out: one for a 3 × 3 × 1 supercell containing a single impurity (La_35SrCu_18O_72, La_36Cu_18O_71 or Tl_35Ba_36Cu_19O_108), and one for a pure reference system (La_4Cu_2O_8 or Tl_4Ba_4Cu_2O_12).The DFT calcuations were Wannier-projected to define pairs of one-orbital tight-binding Hamiltonians: one for the supercell Hamiltonian for the i^th impurity,H_supercell^i, and one for the reference Hamiltonian, H_0. 
The difference between the two tight-binding models then defines the corresponding impurity potential, in tight-binding form:

H_imp^i ≡ (H_supercell^i - μ^i N̂^i) - (H_0 - μ_0 N̂) = ∑_𝐑,𝐑^',σ δH_𝐑𝐑^'^i c_𝐑σ^† c_𝐑^'σ ≡ ∑_𝐑,σ V_𝐑^i c_𝐑σ^† c_𝐑σ + ∑_𝐑𝐑^',σ δt_𝐑𝐑^'^i c_𝐑σ^† c_𝐑^'σ ,

Here μ^i and μ_0 are the chemical potentials of the simulation with and without the impurity, respectively, and the 2D lattice vectors 𝐑 and 𝐑^' are measured in a coordinate system in which the impurity sits directly above (or below) the origin. The impurity potential consists of a set of site energies, V_𝐑^i, along with local modifications to hopping integrals, δt_𝐑𝐑^', in the vicinity of the impurity site. These have been tabulated in Ref. Ozdemir:2022, along with symmetry-generated form factors, and a detailed technical exposition of the DFT calculations and Wannier-projection method. We note that DFT impurity potentials are initially calculated in units of the DFT-derived nearest-neighbour hopping |t|, with the physically relevant value of |t| subsequently set by taking the experimentally measured value from ARPES, thereby correcting for the tendency of DFT calculations to systematically overestimate the electronic bandwidth in correlated materials. The real-space impurity Hamiltonian is then recast in momentum space to obtain the matrix elements between Bloch states, V^i_𝐤𝐤^':

V^i_𝐤𝐤^' = ∑_𝐑,𝐑^' δH_𝐑𝐑^'^i e^-i𝐤·𝐑 e^i𝐤^'·𝐑^' = ∑_𝐑 V_𝐑^i e^-i(𝐤-𝐤^')·𝐑 + ∑_𝐑𝐑^' δt_𝐑𝐑^'^i e^-i𝐤·𝐑 e^i𝐤^'·𝐑^'.

Note that the site energies, V_𝐑^i, give rise to terms that depend only on the momentum transfer, 𝐪 ≡ 𝐤 - 𝐤^'. (The hopping modifications cannot be expressed in this way, but turn out to be small compared to the site energies.) This allows the dominant part of the impurity potential to be visualized in 𝐪 space. In Fig. <ref>, we plot the scattering intensity |V^i_𝐪|^2 for the impurity types that typically occur in LSCO and Tl-2201.<cit.> Due to the fact that the apical oxygen site located closest to the CuO_2 planes is site-centered (see Fig. <ref>(a)), the associated impurity potential is nearly point-like, with significant scattering intensity at all momentum transfers. Large-𝐪 scattering is crucial to the relaxation of charge currents, particularly in a d-wave superconductor, for which inter-nodal scattering dominates electrical relaxation.<cit.> The Sr dopants, on the other hand, contribute scattering intensity that is concentrated near 𝐪 = 0 (and umklapp replicas). This is due, in part, to the Sr site nearest the CuO_2 plane being plaquette-centered (see Fig. <ref>(b)), which means that a defect at that location affects the four neighboring Cu sites equally, imparting a nonzero range to the impurity potential. The doping process in LSCO is often assumed to be synonymous with addition of Sr, i.e., that each Sr simply adds a hole to the band. However, ARPES measurements on LSCO reveal a discrepancy between Fermi volume and Sr concentration,<cit.> suggesting that the actual relation is . This was considered in Ref. Ozdemir:2022, along with the conventional relation, . While the assumed form of n_Sr(p) had no significant effect on superfluid density due to the fact that Sr dopants, with their scattering intensity concentrated near 𝐪 = 0, are not a strong source of pair breaking, the result highlights the complexity of the doping process.
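The qualitative contrast between a point-like and a plaquette-centred potential in 𝐪 space can be illustrated with a short numerical sketch. The Python fragment below is purely illustrative and is not taken from the paper: the site energies are made-up placeholders rather than the ab-initio values tabulated in Ref. Ozdemir:2022, and only the diagonal (site-energy) part of V^i_𝐤𝐤^' is evaluated, i.e., V(𝐪) = ∑_𝐑 V_𝐑 e^(-i𝐪·𝐑).

```python
import numpy as np

def v_q(site_energies, qx, qy):
    """Diagonal (site-energy) contribution to the impurity matrix element,
    V(q) = sum_R V_R * exp(-i q . R).  Each entry of site_energies is
    (Rx, Ry, V_R), with R in units of the in-plane lattice constant a and
    q in units of 1/a."""
    return sum(v * np.exp(-1j * (qx * rx + qy * ry))
               for rx, ry, v in site_energies)

# Placeholder site energies (units of |t|) -- NOT the ab-initio values:
# a point-like defect acting on a single Cu site, versus a plaquette-centred
# defect acting equally on its four neighbouring Cu sites.
point_like = [(0.0, 0.0, 1.0)]
plaquette = [(+0.5, +0.5, 0.25), (-0.5, +0.5, 0.25),
             (+0.5, -0.5, 0.25), (-0.5, -0.5, 0.25)]

for name, sites in [("point-like", point_like), ("plaquette", plaquette)]:
    v0 = abs(v_q(sites, 0.0, 0.0)) ** 2       # forward scattering, q = 0
    vpi = abs(v_q(sites, np.pi, 0.0)) ** 2    # large momentum transfer
    print(f"{name:10s}  |V(0)|^2 = {v0:.3f}   |V(pi,0)|^2 = {vpi:.3f}")
```

With these placeholder values, the single-site potential gives |V_𝐪|^2 that is constant in 𝐪, whereas the plaquette-centred potential falls from its 𝐪 = 0 value to zero at 𝐪 = (π, 0), mirroring the qualitative difference between the nearly point-like and the forward-scattering-dominated defects discussed above.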
The most likely reason for such discrepancies is the presence of O vacancies in some samples, and a possible negative correlation between the two dopants,<cit.> suggesting that high concentrations of Sr dopants drive out apical oxygen.The apical oxygen vacancies in LSCO were shown in Ref. Ozdemir:2022to be the dominant source of pairbreaking if their concentration is significant (at or above the few percent level). That they can occur in high-T_c samples to such extent is well-known,<cit.> butconcentrations are difficult to determine independently. Based on x-ray data,<cit.> even well-annealed crystals (i.e., annealed at 500^∘C for 1 week, in 1 atm O_2) can have apical oxygen vacancies at the 9% level.While these results were established decades ago, the O content in the most recent high-quality samples still depends sensitively on geometry and synthesis method, as discussed below.The plot of |V^i_𝐪|^2 in Fig. <ref>(d) reveals why, from a transport perspective, Tl-2201 is qualitatively cleaner than LSCO.The Tl_2O_2 double layers, which form an additional structural element not found in LSCO, are relatively well separated from the CuO_2 planes. The high volatility of Tl_2O_3 at the growth temperature leads to a deficit of Tl, which is replaced by Cu on roughly 4% to 7.5% of Tl sites.<cit.> These excess Cu atoms, being further from the CuO_2 planes,produce a softer, longer-range potential than the Sr dopants in LSCO, generating impurity matrix elements that are sharply peaked near 𝐪 = 0 (and umklapp replicas), with very flat valleys of near-zero scattering intensity in between. The Cu cross-substituents also play a vital role in the overdoping of Tl-2201, as Cu^+ has a valence of -2 relative to Tl^3+, making it an effective hole dopant.As in Ref. Ozdemir:2022, we therefore set the concentration of Cu defects (as a percentage of in-plane Cu atoms) to be n_Cu = p/2, with n_Cu varying from 8% to 15% across the overdoped range. To the extent that it is present, interstitial O^2- can be argued to play a similar role, as it similarly dopes two holes, and is located in the Tl_2O_2 double layers.This has an additional benefit, as these interstitial oxygen atoms act as an oxygen buffer that minimizes the equilibrium concentration of apical oxygen vacancies.LSCO has no equivalent oxygen reservoir, so is highly exposed to the formation of apical oxygen vacancies, and the associated strong, point-like scattering potentials. § DIRTY D-WAVE SUPERCONDUCTIVITYThe “dirty d-wave” theory of cuprate superconductors is built around the Nambu-space Green's function of a superconductor.Within the Matsubara formalism, the renormalized Green's function is writtenG(,i ω_n)=- i ω̃_,nτ_0 + Δ̃_,nτ_1 + ξ_τ_3/ω̃_,n^2 + Δ̃_,n^2 + ξ_^2 ,where ξ_ is the band dispersion relative to the Fermi level, the τ_i are the Pauli matrices in particle–hole space,are the renormalized Matsubara frequencies, and Δ̃_,n≡Δ_+̨Σ_1(,̨ω_n) is the renormalized superconducting gap.Note that for the type of momentum-dependent scattering generated by extended impurity potentials, the self energies Σ_0 and Σ_1 are both nonzero and are explicitly momentum dependent, unlike the case for point scatterers. In principle, the electronic dispersion is also renormalized as ξ̃_ = ξ + Σ_3, however, asargued in Ref. 
Ozdemir:2022, a Σ_3 self energy is unnecessary, as any impurity renormalization of the quasiparticle bands is already captured in the phenomenological ARPES-derived dispersions.The renormalization equations for ω_n and Δ_𝐤 take the formω̃_,n=ω_n + 1/N∑_i,n_i ω̃_,n/ω̃_,n^2+Δ̃_,n^2+ξ_^2-i Γ_N^U/G_0Δ̃_,n = Δ_ + 1/N∑_i,n_i Δ̃_,n/ω̃_,n^2+Δ̃_,n^2+ξ_^2 .Here the impurity potentials of the extended, out-of-plane defects, V^i_,, are treated to second order, and it was shown in Ref. Ozdemir:2022 that the scattering phase shifts associated with these defects are sufficiently weak that the Born approximation is adequate.We also allow for a small concentration of strong scattering impurities, which are treated as pointlike unitarity scatterers in the t-matrix approxation.These are parameterized by their contribution to the normal-state scattering rate, Γ_N^U, and generate an additive term in the Σ_0 self energy, inversely proportional to the momentum-integrated Green's function,G_0(iω_n) = 1/π N N_0∑_12Tr[τ_0 G(,i ω_n)] ,where N is the number of sites in the lattice and N_0 is the DOS per spin at the Fermi level.As in earlier work based on weak-coupling BCS,<cit.> we assume a separable form for the pairing interaction, V_0 d_ d_, where V_0 parameterizes the pairing strength and the eigenfunction d_ takes the form of the simplest d-wave harmonic of the square lattice,d_ = [cos(k_x a) - cos(k_y a)] .Here a is the in-plane lattice parameter, and d_ satisfies the normalization condition .In terms of this, theBCS gap equation can be written Δ_ = 2 T/N∑_ω_n>0^Ω_c∑_ V_0 d_ d_Δ̃_,n/ω̃_,n^2+Δ̃_,n^2+ξ_^2,where Ω_c is a high frequency cutoff and thesum runs over the first Brillouin zone.As shown previously,<cit.> the combined effect of pairing strength, V_0, and energy cutoff, Ω_c, can be captured by a notional clean-limit transition temperature, T_c0.It is important to note that T_c0 does not imply the transition temperature that would be achieved if disorder was removed from the material. In any real cuprate, inelastic scattering and other fluctuations would destroy superconductivity well before reaching that temperature, something that can be seen, for instance, in the strong downward curvature of ρ_s(T) on the approach to T_c in YBCO,<cit.> which is not a feature of weak-coupling BCS.In our model, we take T_c(p) to have the parabolic shape implied by experiment, and solve the gap equation in the presence of disorder to infer T_c0(p), with these quantities shown in Ref. Ozdemir:2022, for LSCO and Tl-2201.§ OPTICAL CONDUCTIVITYThe electric conductivity can be calculated using the Kubo formula that relates the conductivity, σ, to the retarded current–current correlation function, Π:σ^jj(Ω) = -e^2 Im Π^jj(𝐪=0,Ω)/Ω,where e is the electron charge, and j=x,y,z denotes the spatial direction inreal space. Pointlike scatterers do not renormalize the current vertex for even parity superconductors,<cit.> therefore the bare current–current response is sufficient for calculating conductivity in that case. In contrast,the bare current vertex is modified in the presence of extended scatterers<cit.> and the current–current correlation function withimpurity-dressed current vertex then readsΠ^jj(q=0,iΩ)= T/N∑_,ω_nTr[ v^j_ G(,iω_n) G (,iω_n+iΩ) .. ×Λ^j(, iω_n,iω_n+iΩ) ]. Here ω_n and Ω are the fermionic and bosonic Matsubara frequencies, and v^j_ is the bare current vertex, which for a general dispersion is v^j_= dξ_/dk^j. 
For the remainder of this section, we replace the full-Brillouin-zonemomentum summation with an angular average over the Fermi surface and integrate out ξ_. The initial and final momentum in the impurity potential functions are restricted to the Fermi surface. Since the main contribution to the current–current correlation function comes from quasi-particles near the Fermi surface, this is a reliable and computationally efficient approach for the conductivity in the energy ranges we are interested in. Nevertheless, we have cross-checked the Fermi-surface-based approach against more computationally expensive calculations that employ a full-Brillouin-zone momentum summation and, as long as the Fermi surface is not too close to the van Hove singularity, the two methods are in good agreement.The vertex correction can be described as Λ^j(ϕ,iω_n,iω_n+iΩ)=v^j_ϕ[ τ_0 γ_0(ϕ,iω_n,iω_n+iΩ). +τ_1γ_1(ϕ,iω_n,iω_n+iΩ).+τ_3γ_3(ϕ,iω_n,iω_n+iΩ) ],where ϕ is the angle over the 2D cylindrical Fermi surface.Finally, for the conductivity we obtain σ(Ω)^jj = -e^2 T/Ω∫_-∞^∞ dω[f(ω + Ω) - f(ω)] (L^-+_jj - L^++_jj) .The vertex function depends on iω_n and iω_n+iΩ, and must beanalytically continued to obtain the physical conductivity as a function of real frequency Ω. In Eq. (<ref>), the integrand L^± +_jj denotes L^± +_jj(iω_n →ω± iη ,iω_n+iΩ→ω+Ω+iη)and L^± +_jj isL^± +_jj = ⟨ (v^j_ϕ)^2 [γ_0,ϕ^±+(I_ϕ^±+ + J_ϕ^±+) + γ_1,ϕ^±+K_ϕ^±+] ⟩.Here, the Fermi-surface average is ⟨⋯⟩ = 1N_0∫_0^2πdϕ/2π N_ϕ[ ⋯]and the angle-dependent single-spin DOS isN_ϕ=|k_F(ϕ)|^2/πħ d v_F(ϕ)· k_F(ϕ) ,where d is the interlayer spacing. The other components of Eq. (<ref>) are I_ϕ^±+= Δ̃^±_ϕΔ̃^'+_ϕ + ω̃^±_ϕω̃^'+_ϕQ^±_ϕ Q^'+_ϕ(Q^±_ϕ + Q^'+_ϕ)J_ϕ^±+= 1(Q^±_ϕ + Q^'+_ϕ)K_ϕ^±+= ω̃^±_ϕΔ̃^'+_ϕ +Δ̃^±_ϕω̃^'+_ϕQ^±_ϕ Q^'+_ϕ (Q^±_ϕ + Q^'+_ϕ) ω̃_ϕ^± = ω̃_ϕ(ω± i δ) ω̃_ϕ^'+ = ω̃_ϕ(ω + Ω + i δ) Δ̃_ϕ^± = Δ̃_ϕ(ω± i δ) Δ̃_ϕ^'+ = Δ̃_ϕ(ω + Ω + i δ)Q^±_ϕ = √((Δ̃^±_ϕ)^2- (ω̃^±_ϕ)^2)Q^'+_ϕ = √((Δ̃^'+_ϕ)^2- (ω̃^'+_ϕ)^2) .Here, the branch cut for the complex square-root function is along the negative real axis. The renormalized energy ω̃(ϕ,ω) and gap Δ̃(ϕ,ω)are obtained by solving the self-consistent equations for the self-energies,ω̃^±(ϕ,ω) = ω± i η + n_impπ∫_ϕ' N_ϕ' |V_ϕϕ'|^2 ω̃_ϕ^'^±/Q^±_ϕ^'-Γ^U_N/g^± ,Δ̃^±(ϕ,ω) = Δ_ϕ + n_impπ∫_ϕ' N_ϕ' |V_ϕϕ'|^2 Δ̃_ϕ^'^±/Q_ϕ^'^±.Here g^± = ⟨ω̃_ϕ^±/Q^±_ϕ⟩, and the self-consistent equations for thevertex functions areγ_0 ±,+ =1+ ∫_ϕ' F_ϕϕ'γ_0 ±,+( I_±,++ J_±,+) + ∫_ϕ' F_ϕϕ'γ_1 ±,+ K_±,+ ,γ_1 ±,+ =- ∫_ϕ' F_ϕϕ'γ_0 ±,+ K_±,+- ∫_ϕ' F_ϕϕ'γ_1 ±,+( I_±,+ - J_±,+) ,∫_ϕ' F_ϕϕ' = ∫_0^2πdϕ'/2ππ n_imp N_ϕ' |V_ϕϕ'|^2v_Fϕ·v_Fϕ'/|v_Fϕ|^2. The vertex correction for the τ_3 component vanishesin this approximation due to particle–hole symmetry near the Fermi surface. The self-consistent equations for the vertex functions are solved using standard iteration methods, followed by numerical calculation of the conductivity. In the next section, we present and discuss the results. § RESULTSAb-initio calculations of impurity potentials have been presented in Ref. Ozdemir:2022. These potentials were then employed to calculate the optical conductivity of LSCO using the formalism described in Sec. <ref>, with the results plotted in Fig. <ref>.In order to compare with the THz experiments from Ref. Mahmood:2017 shown in the last row of Fig. 
<ref>, the conductivity calculations were performed for overdoped samples at hole doping levels of p = 22.3%, 24.4% and 25.2%, corresponding to superconducting transition temperatures of 27.5 K, 13.5 K and 7 K.These doping levels are sufficiently beyond the van Hove doping that the momentum-sum and Fermi-surface-integral methods agree.For this reason, Fermi-surface integrals have been used to calculate all the conductivities and self-energies presented in this section.In all cases, the doping-dependent Sr concentrationhas been assumed but, as with the superfluid density in Ref. Ozdemir:2022, the conductivity spectra are not particularly sensitive to that choice. In accordance with the THz experiments, calculations of σ(ν) have been carried out in both the normal state (T > T_c) and deep within the superconducting state (T = 1.6 K). Note that because our model only contains elastic disorder scattering, there is no additional temperature dependence of σ(ν) once we reach the normal state. By virtue of the conductivity sum rule, the shaded regions in between the normal-state and superconducting-state σ(ν) spectra indicate the spectral weight that condenses to form the superfluid density, and therefore provide a graphic illustration of the degree of pair breaking (i.e., superfluid suppression).To illustrate the importance of apical oxygen vacancies to transport relaxation in LSCO, conductivity spectra are presented for five different apical oxygen vacancy concentrations, ranging from = 0% to 8%.Comparison with the experimental results from Ref. Mahmood:2017 plotted in the bottom row of Fig. <ref> show that close agreement with experient is achieved when apical oxygen vacancy concentration is within the range = 4% to 6%.As pointed out earlier, it is extremely difficult to obtain an independent measurement of apical oxygen vacancy concentration in these materials, but x-ray structural refinements on LSCO single crystals report as high as 9% in well-annealed crystals.<cit.> The THz experiments in Ref. Mahmood:2017 were, by necessity, carried out on large, cm^2, MBE-grown thin films, and the high degree of crystallinity acheived in the MBE process likely makes the annealing out of oxygen vacancies relatively difficult, due to the need to diffuse oxygen in laterally from the edges of the large samples. To further illustrate the sensitivity to oxygen annealing, we show the effect of apical oxygen vacancy concentration on residual (T → 0) normal-state conductivity/resistivity of LSCO, in Fig. <ref>.Here, we compare with two different types of experimental data, taken from Ref. 
Mahmood:2017, showing dc transport measurements on ozone-annealed microbridges, and THZ spectroscopy of cm^2 thin films.While the data agree at the higher T_c end, they display a striking bifurcation at lower T_c (i.e., when more heavily overdoped), with the ozone-annealed microbridges exhibiting consistently better conductivity/lower residual resistivity.The experimental data overlay curves of calculated conductivity/resistivity for apical oxygen vacancy concentrations ranging from = 0% to 10%, providing a ready explanation of the variance between the two sample types.This is consistent with our conjecture that the need to laterally diffuse oxygen in these highly crystalline materials provides a kinetic barrier to annealing out oxygen vacancies in larger samples, with the required diffusion length in the microbridges, by contrast, being only a matter of microns.For the larger samples, it is also consistent with themeasurements of Kim et al.,<cit.> suggesting that high concentrations of Sr dopants drive out apical oxygen. We present calculated conductivity spectra for Tl-2201 in Fig. <ref>, with doping levels chosen to give the same T_c's as for LSCO in Fig. <ref>.Due to a lack of suitable Tl-2201 samples, no experimental data on the THz conductivity exist, so these figures serve as a prediction and as a comparison with LSCO. There are several features to note.While there is some slight doping dependence of the assumed defect concentration (the excess Cu atoms that substitute onto some of the Tl sites) the dominant variation with doping is driven by T_c itself, which in turns sets the size of the superconducting energy gap, and therefore the sensitivity to pair breaking.For the T_c = 27.5 K material, the majority of the spectral weight condenses into the superfluid, leaving a narrow residual Drude peak at the lowest temperatures, riding on a broad background absorption at frequencies out to the gap energy and beyond.(As previously discussed in Ref. Lee-Hone:2018, in the context of point scatterers, there is usually no sharp gap feature in the optical conductivity of d-wave superconductors.)As T_c becomes smaller (and along with it, the energy gap) the residual Drude peak increases in width and decreases in magnitude, with more and more of the absorption shifting into the broad background.Interestingly, the calculated spectra for LSCO in Fig. <ref> suggest that if cm^2 thin films of LSCO could be prepared with apical oxygen vacancy concentrations in the 1% range, they would show very similar behaviour, i.e., would display charge dynamics that are comparably as `clean' as for Tl-2011, a material often noted for its chemical purity.This illustrates a somewhat surprising point: thatTl-2201's reputation as one of the cleaner cuprates is not primarily due to qualitatively lower cation disorder, or to that disorder being located further from the CuO_2 planes, but from having an additional structural unit — the Tl_2O_2 double layers — that act as a reservoir for interstitial oxygen, serving as a buffer that suppresses the formation of apical oxygen vacancies.To further explore the low frequency charge dynamics of LSCO and Tl-2201 in the normal state, we show angle-resolved plots of scattering rate, lifetime and mean free path in Fig. <ref>. 
In the case of LSCO, the calculations have been carried out at a fixed doping of p = 23.5%, without (= 0%) and with (= 8%) apical oxygen vacancies.For Tl-2201, we show results for optimal doping (p = 16%) and strong overdoping (p = 30%), with a concomitant change in the concentration of Cu substituents (n_Cu = 8% and 15%, respectively). A key feature of our calculation is the inclusion of vertex corrections, allowing us to properly take into account the forward-scattering character of the impurity potentials.This enables us to explore the difference between one-particle and two-particle (transport) scatteringrates, which differ by the angle-dependent vertex function, γ_0(ϕ).This is plotted in the first row of Fig. <ref>, in panels (a) to (d).We see that vertex corrections in LSCO turn out to be small, even in the absence of apical oxygen vacancies (i.e., when the only scatterer is the Sr dopants, which have a relatively weak scattering potential).By contrast, in the Tl-2201 system, the vertex corrections lead to significant differences between single-particle and transport lifetimes of order one. As mentioned above, the spatially extended nature of the realistic disorder model gives rise to impurity matrix elements V_, with very strong momentum dependence. This, combined with anisotropic electronic structure, leads to elastic scattering rates that vary strongly around the Fermi surface, something that is a well-established part of cuprate phenomenology.<cit.>Transport is zone-diagonal dominated, as per Ioffe and Millis,<cit.> due to a combination of factors: the ability for small-q processes to efficiently scatter between antinodes in adjacent Brillouin zones (i.e., to give rise to significant umklapp scattering) and, in the case of LSCO, the deep depression of the antinodal v_F(ϕ) in the vicinity of the van Hove singularity.On the experimental side, a comprehensive Dingle analysis of quantum oscillation data in overdoped Tl-2201 yields single-particle mean free paths in the range 330 Å to 410 Å, noting that strong self selection in quantum oscillatory experiments preferentially favours the parts of the sample with longest mean free path.<cit.>This is in qualitative accord with Figs. <ref>(o) and (p).Two-particle mean free paths inferred from magnetotransport measurements in overdoped Tl-2201 are larger, of the order of 500 Å to 1000 Å,<cit.> confirming both the zone-diagonal-dominated nature of transport, and the presence of vertex corrections of order 2 to 3, in line with our ab-initio calculations.§ CONCLUSIONSWe have demonstrated that a materials-specific “dirtyapproach, previously shown to quantitatively agree with superfluid density data in two of the most-studied overdoped cuprate materials, LSCO and Tl-2201, describes THz conductivity data on the same LSCO films with similar accuracy. 
Our study has highlighted the role of apical oxygen vacancies in LSCO insamples produced by different techniques, andsuggested that strong variations in DC resistivity seen insamples with the same nominal doping are consistent with different levels of O vacancies.Since the O vacancies produce a relatively large and short-range potential relative to Sr, the O-vacancy concentration has important consequences for the angular dependence of the scattering rate in the normal and superconducting states, and therefore for the relative importance of forward-scattering processes.Our calculations indicate that if scattering from the O vacancy could be removed by annealing, LSCO wouldexhibit dramatically different low-frequency conductivity spectra, with narrow Drude peaks in σ(Ω), reminiscent of our predictions for Tl-2201.The spectral weight available to form the superfluid would also be significantly increased.In the Tl-2201 system, the doping by Tl–Cu cross substitution induces much longer-range scattering and relatively weak potentials, leading to a strongly momentum-dependent scattering rate.Vertex corrections, included here in our calculations of the conductivity, are correspondingly more important.Although THz measurements of the conductivity have not yet been performed, we make clear predictions for the expected conductivity spectra, including quite narrow Drude components even in the normal state.The materials-specific analysis confirmsquasiparticle mean free paths that are longer than in LSCO by roughly a factor of three, as deduced in earlier phenomenological analyses.The materials-specific dirty d-wave approach has now succeeded not only in quantitatively reproducing puzzling experimental results on superfluid density and THz conductivity, but also confirmed the choice of phenomenological parameters used earlier to fit specific heat, Volovik effect, and thermal conductivity of the same materials.<cit.>Armed with these confirmations of the theory in the superconducting state, it will now be interesting to see if various puzzles in the normal state can be addressed by the same approach, e.g., the angle-dependence of normal-state elastic scattering in cuprates measured by angle-dependent magnetoresistance (ADMR).Of course, the physics of linear-T resistivity and other non-Fermi liquid effects are not included in this approach, so our theory can perforce only apply in the overdoped regime far from any critical point.Nevertheless, it will be useful to use it to separate the relatively mundane materials-specific effects discussed from the true exotic physics of interacting fermions located elsewhere in the cuprate phase diagram.Finally, we should remark that the disorder-averaged theory can also break down when samples become strongly inhomogeneous.A recent study showed that in quantum simulations of disordered d-wave superconductors, the disorder-averaged theory was accurate to surprisingly high disorder levels, but broke down for very low average superfluid densities when the system broke up into distinct islands at low temperatures.<cit.>Such patchiness of isolated superconducting regions has indeedbeen observed in some samples of LSCO.<cit.> Whether the ideal disorder-driven zero-temperature transition to the normal metal is controlled in the best samples by pairbreaking or inhomogeneity is an important open question that requires further experimental work.We are grateful for useful discussions with N. P. Armitage, J. S. Dodge, S. A. Kivelson, T. A. Maier, D. J. Scalapino, J. E. 
Sonier, and J. M. Tranquada.D.M.B.acknowledges financial support from the Natural Science and Engineering Research Council of Canada.P.J.H. acknowledges support from NSF-DMR-1849751. V.M. was supported by NSFCand by the priority program of the Chinese Academy of Sciences . The first-principles calculations in this work (X.K. and T.B.) were conducted at the Center for Nanophase Materials Sciences and used resources of the Compute and Data Environment for Science (CADES) at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725. In addition we used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. DOEunder Contract No. DE-AC02-05CH11231. 41 fxundefined [1]ifx#1fnum [1]#1firstoftwosecondoftwo fx [1]#1firstoftwosecondoftwonoop [0]secondoftworef[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0]rl [1]href #1 @bib@innerbibempty[Yoshida et al.(2006)Yoshida, Zhou, Tanaka, Yang, Hussain, Shen, Fujimori, Sahrakorpi, Lindroos, Markiewicz, Bansil, Komiya, Ando, Eisaki, Kakeshita,and Uchida]Yoshida:2006hw author author T. Yoshida, author X. J. Zhou, author K. Tanaka, author W. L. Yang, author Z. Hussain, author Z. X. Shen, author A. Fujimori, author S. Sahrakorpi, author M. Lindroos, author R. S.Markiewicz, author A. Bansil, author SeikiKomiya, author Y. Ando, author H. Eisaki, author T. Kakeshita,andauthor S. Uchida, title title Systematic doping evolution of the underlying Fermi surface of , 10.1103/PhysRevB.74.224510 journal journal Phys. Rev. B volume 74, pages 224510 (year 2006)NoStop [Božović et al.(2016)Božović, He, Wu, andBollinger]Bozovic:2016ei author author I. Božović, author X. He, author J. Wu,andauthor A. T. Bollinger,title title Dependence of the critical temperature in overdoped copper oxides on superfluid density, 10.1038/nature19061 journal journal Nature volume 536, pages 309–311 (year 2016)NoStop [Lee-Hone et al.(2017)Lee-Hone, Dodge, and Broun]Lee-Hone:2017 author author N. R. Lee-Hone, author J. S. Dodge,and author D. M. Broun,title title Disorder and superfluid density in overdoped cuprate superconductors, 10.1103/PhysRevB.96.024501 journal journal Phys. Rev. B volume 96, pages 024501 (year 2017)NoStop [Mahmood et al.(2019)Mahmood, He, Božović, andArmitage]Mahmood:2017 author author F. Mahmood, author X. He, author I. Božović, and author N. P. Armitage,title title Locating the missing superconducting electrons in overdoped cuprates, 10.1103/PhysRevLett.122.027003 journal journal Phys. Rev. Lett. volume 122, pages 027003 (year 2019)NoStop [Lee-Hone et al.(2018)Lee-Hone, Mishra, Broun, andHirschfeld]Lee-Hone:2018 author author N. R. Lee-Hone, author V. Mishra, author D. M. Broun,andauthor P. J. Hirschfeld,title title Optical conductivity of overdoped cuprate superconductors: Application to La_2-xSr_xCuO_4, 10.1103/PhysRevB.98.054506 journal journal Phys. Rev. B volume 98, pages 054506 (year 2018)NoStop [Lee-Hone et al.(2020)Lee-Hone, Özdemir, Mishra, Broun, and Hirschfeld]LeeHone2020 author author N. R. Lee-Hone, author H. U. Özdemir, author V. Mishra, author D. M. Broun,andauthor P. J. Hirschfeld,title title Low energy phenomenology of the overdoped cuprates: Viability of the Landau–BCS paradigm, 10.1103/PhysRevResearch.2.013228 journal journal Phys. Rev. Res. 
volume 2, pages 013228 (year 2020)NoStop [Platé et al.(2005)Platé, Mottershead, Elfimov, Peets, Liang, Bonn, Hardy, Chiuzbaian, Falub, Shi, Patthey, and Damascelli]Plate:2005 author author M. Platé, author J. D. F. Mottershead, author I. S. Elfimov, author D. C. Peets, author Ruixing Liang, author D. A. Bonn, author W. N. Hardy, author S. Chiuzbaian, author M. Falub, author M. Shi, author L. Patthey,and author A. Damascelli, title title Fermi surface and quasiparticle excitations of overdoped , 10.1103/PhysRevLett.95.077001 journal journal Phys. Rev. Lett. volume 95, pages 077001 (year 2005)NoStop [Wang et al.(2022)Wang, Xu, Zhang, and Wang]Wang2022 author author Da Wang, author Jun-Qi Xu, author Hai-Jun Zhang,andauthor Qiang-Hua Wang,title title Anisotropic scattering caused by apical oxygen vacancies in thin films of overdoped high-temperature cuprate superconductors, 10.1103/PhysRevLett.128.137001 journal journal Phys. Rev. Lett. volume 128, pages 137001 (year 2022)NoStop [Özdemir et al.(2023)Özdemir, Mishra, Lee-Hone, Kong, Berlijn, Broun, and Hirschfeld]Ozdemir_comment author author H. U. Özdemir, author Vivek Mishra, author N. R. Lee-Hone, author Xiangru Kong, author T. Berlijn, author D. M. Broun,andauthor P. J. Hirschfeld,title title Comment on “Anisotropic scattering caused by apical oxygen vacancies in thin films of overdoped high-temperature cuprate superconductors”, 10.1103/PhysRevLett.131.049701 journal journal Phys. Rev. Lett. volume 131, pages 049701 (year 2023)NoStop [Wang et al.(2023)Wang, Xu, Zhang, and Wang]Wang_reply author author Da Wang, author Jun-Qi Xu, author Hai-Jun Zhang,andauthor Qiang-Hua Wang,title title Wang et al. reply:,10.1103/PhysRevLett.131.049702 journal journal Phys. Rev. Lett. volume 131,pages 049702 (year 2023)NoStop [Özdemir et al.(2022)Özdemir, Mishra, Lee-Hone, Kong, Berlijn, Broun, and Hirschfeld]Ozdemir:2022 author author H. U. Özdemir, author Vivek Mishra, author N. R. Lee-Hone, author Xiangru Kong, author T. Berlijn, author D. M. Broun,andauthor P. J. Hirschfeld,title title Effect of realistic out-of-plane dopant potentials on the superfluid density of overdoped cuprates, 10.1103/PhysRevB.106.184510 journal journal Phys. Rev. B volume 106, pages 184510 (year 2022)NoStop [Yoshida et al.(2007)Yoshida, Zhou, Lu, Komiya, Ando, Eisaki, Kakeshita, Uchida, Hussain, Shen, andFujimori]Yoshida:2007 author author T. Yoshida, author X. J. Zhou, author D. H. Lu, author Seiki Komiya, author Y. Ando, author H. Eisaki, author T. Kakeshita, author S. Uchida, author Z. Hussain, author Z. X.Shen,and author A. Fujimori, title title Low-energy electronic structure of the high- cuprates studied by angle-resolved photoemission spectroscopy, 10.1088/0953-8984/19/12/125209 journal journal J. Phys. Condens. Matter volume 19, pages 125209 (year 2007)NoStop [Momono et al.(1994)Momono, Ido, Nakano, Oda, Okajima, and Yamaya]Momono:1994et author author N. Momono, author M. Ido, author T. Nakano, author M. Oda, author Y. Okajima,and author K. Yamaya, title title Low-temperature electronic specific heat ofand . 
Evidence for a d-wave superconductor, 10.1016/0921-4534(94)90768-4 journal journal Physica C volume 233, pages 395–401 (year 1994)NoStop [Wang et al.(2007)Wang, Yan, Shan, Wen, Tanabe, Adachi, and Koike]Wang2007 author author Yue Wang, author Jing Yan, author Lei Shan, author Hai-Hu Wen, author Yoichi Tanabe, author Tadashi Adachi,and author Yoji Koike, title title Weak-couplingBCS superconductivity and unpaired electrons in overdopedsingle crystals, 10.1103/PhysRevB.76.064512 journal journal Phys. Rev. B volume 76, pages 064512 (year 2007)NoStop [Durst and Lee(2000)]Durst:2000 author author Adam C. Durst and author Patrick A. Lee, title title Impurity-induced quasiparticle transport and universal-limit Wiedemann–Franz violation in d-wave superconductors, 10.1103/PhysRevB.62.1270 journal journal Phys. Rev. B volume 62, pages 1270–1290 (year 2000)NoStop [Horio et al.(2018)Horio, Hauser, Sassa, Mingazheva, Sutter, Kramer, Cook, Nocerino, Forslund, Tjernberg, Kobayashi, Chikina, Schröter, Krieger, Schmitt, Strocov, Pyon, Takayama, Takagi, Lipscombe, Hayden, Ishikado, Eisaki, Neupert, Månsson, Matt, and Chang]Horio:2018 author author M. Horio, author K. Hauser, author Y. Sassa, author Z. Mingazheva, author D. Sutter, author K. Kramer, author A. Cook, author E. Nocerino, author O. K. Forslund, author O. Tjernberg, author M. Kobayashi, author A. Chikina, author N. B. M. Schröter, author J. A. Krieger, author T. Schmitt, author V. N. Strocov, author S. Pyon, author T. Takayama, author H. Takagi, author O. J. Lipscombe, author S. M. Hayden, author M. Ishikado, author H. Eisaki, author T. Neupert, author M. Månsson, author C. E.Matt,and author J. Chang, title title Three-dimensional Fermi surface of overdoped La-based cuprates,10.1103/PhysRevLett.121.077004 journal journal Phys. Rev. Lett. volume 121,pages 077004 (year 2018)NoStop [Kim et al.(2017)Kim, Christiani, Logvenov, Choi, Kim, Minola, and Keimer]Kim:2017tk author author Gideok Kim, author Georg Christiani, author Gennady Logvenov, author Sungkyun Choi, author Hun-Ho Kim, author M. Minola,and author B. Keimer, title title Selective formation of apical oxygen vacancies in La_2xSr_xCuO_4,10.1103/PhysRevMaterials.1.054801 journal journal Phys. Rev. Mater. volume 1, pages 054801 (year 2017)NoStop [Torrance et al.(1988)Torrance, Tokura, Nazzal, Bezinge, Huang, and Parkin]Torrance:1988iz author author J. B. Torrance, author Y. Tokura, author A. I. Nazzal, author A. Bezinge, author T. C. Huang,and author S. S. P. Parkin, title title Anomalous disappearance of high-T_c superconductivity at high hole concentration in metallic La_2xSr_xCuO_4,10.1103/PhysRevLett.61.1127 journal journal Phys. Rev. Lett. volume 61, pages 1127–1130 (year 1988)NoStop [Higashi and Kitazawa(1991)]Higashi1991 author author Iwami Higashi and author Hideaki Kitazawa, title title Single-crystal x-ray diffraction analysis of (La_1-xSr_x)_2CuO_4-δ (x=0.047), https://doi.org/10.1016/0921-4534(91)92078-P journal journal Physica C: Supercond. volume 185-189,pages 551–552 (year 1991)NoStop [Liu et al.(1992)Liu, Hughes, Angel, Hackwell, Shibaeva, Edwards, and Edwards]Liu:1992jx author author R. S. Liu, author S. D. Hughes, author R. J. Angel, author T. P. Hackwell, author R. P. Shibaeva, author P P Edwards,and author P. P. 
Edwards, title title Crystal structure and cation stoichiometry of superconductingsingle crystals, 10.1016/0921-4534(92)90192-F journal journal Physica C: Superconductivity volume 198, pages 203 (year 1992)NoStop [Kolesnikov et al.(1992)Kolesnikov, Korotkov, Kulakov, Shibaeva, Molchanov, Tamazyan, and Simonov]Kolesnikov:1992gg author author N. N. Kolesnikov, author V. E. Korotkov, author M. P. Kulakov, author R. P. Shibaeva, author V. N. Molchanov, author R. A. Tamazyan,and author V. I. Simonov, title title Structure of superconducting single crystals of 2201 thallium cuprate (Tl_1. 85Cu_0.15)Ba_2CuO_6, T_c = 110 K, 10.1016/0921-4534(92)90343-B journal journal Physica C: Supercond. volume 195, pages 219 (year 1992)NoStop [Hasegawa et al.(2001)Hasegawa, Takei, Izawa, and Matsuda]Hasegawa:2001bt author author M. Hasegawa, author H. Takei, author K. Izawa,and author Y. Matsuda, title title Crystal growth techniques for Tl-based cuprate superconductors, 10.1016/S0022-0248(01)01189-7 journal journal J. Cryst. Growth volume 229, pages 401 (year 2001)NoStop [Peets et al.(2010)Peets, Liang, Raudsepp, Hardy, andBonn]Peets:2010p2131 author author D. C. Peets, author R.-X. Liang, author M. Raudsepp, author W. N. Hardy,and author D. A. Bonn, title title Encapsulated single crystal growth and annealing of the high-temperature superconductor Tl-2201, 10.1016/j.jcrysgro.2009.10.042 journal journal J. Cryst. Growth volume 312, pages 344–350 (year 2010)NoStop [Kamal et al.(1994)Kamal, Bonn, Goldenfeld, Hirschfeld, Liang, and Hardy]KAMAL:1994p701 author author S. Kamal, author D. A. Bonn, author N Goldenfeld, author P. J. Hirschfeld, author R.-X. Liang,and author W. N. Hardy, title title Penetration depth measurements of 3D XY critical-behavior in 6.95 crystals, https://doi.org/10.1103/PhysRevLett.73.1845 journal journal Phys. Rev. Lett. volume 73, pages 1845–1848 (year 1994)NoStop [Hirschfeld et al.(1988)Hirschfeld, Wölfle, and Einzel]Hirschfeld:1988 author author P. J. Hirschfeld, author P. Wölfle,and author D. Einzel, title title Consequences of resonant impurity scattering in anisotropic superconductors: Thermal and spin relaxation properties, 10.1103/PhysRevB.37.83 journal journal Phys. Rev. B volume 37, pages 83–97 (year 1988)NoStop [Hussey et al.(1996)Hussey, Cooper, Wheatley, Fisher, Carrington, Mackenzie, Lin,and Milat]Hussey:1996eb author author N. E. Hussey, author J. R. Cooper, author J. M. Wheatley, author I. R. Fisher, author A. Carrington, author A. P. Mackenzie, author C. T. Lin,and author O. Milat, title title Angular dependence of the 𝑐-axis normal state magnetoresistance in single crystal , 10.1103/PhysRevLett.76.122 journal journal Phys. Rev. Lett. volume 76, pages 122–125 (year 1996)NoStop [Ioffe and Millis(1998)]Ioffe:1998p386 author author L. B. Ioffe and author A. J. Millis, title title Zone-diagonal-dominated transport in high- cuprates, 10.1103/PhysRevB.58.11631 journal journal Phys. Rev. B volume 58, pages 11631–11637 (year 1998)NoStop [Valla et al.(2000)Valla, Fedorov, Johnson, Li, Gu, and Koshizuka]Valla:2000en author author T. Valla, author A. V. Fedorov, author P. D. Johnson, author Q. Li, author G D. Gu,and author N. Koshizuka, title title Temperature dependent scattering rates at the Fermi surface of optimally doped , 10.1103/PhysRevLett.85.828 journal journal Phys. Rev. Lett. volume 85, pages 828 (year 2000)NoStop [Abrahams and Varma(2000)]Abrahams:2000hr author author E. Abrahams and author C. M. 
Varma, title title What angle-resolved photoemission experiments tell about the microscopic theory for high-temperature superconductors, 10.1073/pnas.100118797 journal journal Proc.Natl. Acad. Sci. volume 97, pages 5714–5716 (year 2000)NoStop [Varma and Abrahams(2001)]Varma:2001bb author author C. M. Varma and author Elihu Abrahams, title title Effective Lorentz force due to small-angle impurity scattering: magnetotransport in high- superconductors, 10.1103/PhysRevLett.86.4652 journal journal Phys. Rev. Lett. volume 86, pages 4652–4655 (year 2001)NoStop [Kaminski et al.(2005)Kaminski, Fretwell, Norman, Randeria, Rosenkranz, Chatterjee, Campuzano, Mesot, Sato, Takahashi, Terashima, Takano, Kadowaki, Li, and Raffy]Kaminski:2005ge author author A. Kaminski, author H. M. Fretwell, author M. R. Norman, author M. Randeria, author S. Rosenkranz, author U. Chatterjee, author J. C. Campuzano, author J. Mesot, author T. Sato, author T. Takahashi, author T. Terashima, author M. Takano, author K. Kadowaki, author Z. Z. Li,and author H. Raffy, title title Momentum anisotropy of the scattering rate in cuprate superconductors, 10.1103/PhysRevB.71.014517 journal journal Phys. Rev. B volume 71, pages 014517 (year 2005)NoStop [Abdel-Jawad et al.(2006)Abdel-Jawad, Kennett, Balicas, Carrington, Mackenzie, Mckenzie, and Hussey]AbdelJawad:2006df author author M. Abdel-Jawad, author M. P. Kennett, author L. Balicas, author A. Carrington, author A. P. Mackenzie, author R. H. Mckenzie,and author N. E. Hussey, title title Anisotropic scattering and anomalous normal-state transport in a high-temperature superconductor, 10.1038/nphys449 journal journal Nat.Phys. volume 2, pages 821–825 (year 2006)NoStop [Yamasaki et al.(2007)Yamasaki, Yamazaki, Ino, Arita, Namatame, Taniguchi, Fujimori, Shen, Ishikado, andUchida]Yamasaki:2007hx author author T. Yamasaki, author K. Yamazaki, author A. Ino, author M. Arita, author H. Namatame, author M. Taniguchi, author A. Fujimori, author Z.-X.Shen, author M. Ishikado,and author S. Uchida, title title Unmasking the nodal quasiparticle dynamics in cuprate superconductors using low-energy photoemission, 10.1103/PhysRevB.75.140513 journal journal Phys. Rev. B volume 75, pages 140513 (year 2007)NoStop [Chang et al.(2008)Chang, Shi, Pailhés, Mansson, Claesson, Tjernberg, Bendounan, Sassa, Patthey, Momono, Oda, Ido, Guerrero, Mudry, and Mesot]Chang:2008cb author author J. Chang, author M. Shi, author S. Pailhés, author M. Mansson, author T. Claesson, author O. Tjernberg, author A. Bendounan, author Y. Sassa, author L. Patthey, author N. Momono, author M. Oda, author M. Ido, author S. Guerrero, author C. Mudry,and author J. Mesot, title title Anisotropic quasiparticle scattering rates in slightly underdoped to optimally doped high-temperature La_2xSr_xCuO_4 superconductors, 10.1103/PhysRevB.78.205103 journal journal Phys. Rev. B volume 78, pages 205103 (year 2008)NoStop [Grissonnanche et al.(2021)Grissonnanche, Fang, Legros, Verret, Laliberté, Collignon, Zhou, Graf, Goddard, Taillefer, and Ramshaw]Grissonnanche:2021hw author author GaëlGrissonnanche, author YawenFang, author AnaelleLegros, author SimonVerret, author FrancisLaliberté, author Clément Collignon, author Jianshi Zhou, author DavidGraf, author Paul A Goddard, author L. Taillefer,and author B. J. 
Ramshaw,title title Linear-in temperature resistivity from an isotropic Planckian scattering rate, 10.1038/s41586-021-03697-8 journal journal Nature volume 595, pages 667–672 (year 2021)NoStop [Rourke et al.(2010)Rourke, Bangura, Benseman, Matusiak, Cooper, Carrington, and Hussey]Rourke:2010bl author author P. M. C.Rourke, author A. F.Bangura, author T. M.Benseman, author M. Matusiak, author J. R. Cooper, author A. Carrington,and author N. E. Hussey,title title A detailed de Haas–van Alphen effect study of the overdoped cuprate ,https://iopscience.iop.org/article/10.1088/1367-2630/12/10/105009 journal journal New J. Phys. volume 12, pages 105009 (year 2010)NoStop [Mackenzie et al.(1996)Mackenzie, Julian, Sinclair, andLin]Mackenzie:1996p199 author author A. P. Mackenzie, author S. R. Julian, author D. C. Sinclair,and author C. T. Lin, title title Normal-state magnetotransport in superconductingto millikelvin temperatures,https://doi.org/10.1103/PhysRevB.53.5848 journal journal Phys. Rev. B volume 53,pages 5848–5855 (year 1996)NoStop [Proust et al.(2002)Proust, Boaknin, Hill, Taillefer,and Mackenzie]Proust:P2lqZi4f author author Cyril Proust, author Etienne Boaknin, author R. W. Hill, author Louis Taillefer,andauthor A. P. Mackenzie,title title Heat transport in a strongly overdoped cuprate: Fermi liquid and a pure d-wave BCS superconductor, https://doi.org/10.1103/PhysRevLett.89.147003 journal journal Phys. Rev. Lett. volume 89, pages 147003 (year 2002)NoStop [Deepwell et al.(2013)Deepwell, Peets, Truncik, Murphy, Kennett, Huttema, Liang, Bonn, Hardy, and Broun]Deepwell:2013uu author author D. Deepwell, author D. C. Peets, author C. J. S. Truncik, author N. C. Murphy, author M. P. Kennett, author W. A. Huttema, author Ruixing Liang, author D. A. Bonn, author W. N. Hardy,and author D. M. Broun, title title Microwave conductivity and superfluid density in strongly overdoped , 10.1103/PhysRevB.88.214509 journal journal Phys. Rev. B volume 88, pages 214509 (year 2013)NoStop [Pal et al.(2023)Pal, Kreisel, Atkinson, and Hirschfeld]Pal2023 author author Mainak Pal, author Andreas Kreisel, author W. A. Atkinson,andauthor P. J. Hirschfeld,title title Simulating superconducting properties of overdoped cuprates: The role of inhomogeneity, 10.1103/PhysRevB.107.144501 journal journal Phys. Rev. B volume 107, pages 144501 (year 2023)NoStop [Li et al.(2022)Li, Sapkota, Lozano, Du, Li, Wu, Kundu, Koch, Wu, Winn, Chi, Matsuda, Frontzek, Bo žžin, Zhu, Božžovi ćć, Pasupathy, Drozdov, Fujita, Gu, Zaliznyak, Li, andTranquada]Tranquada2022 author author Yangmu Li, author A. Sapkota, author P. M. Lozano, author Zengyi Du, author Hui Li, author Zebin Wu, author Asish K.Kundu, author R. J. Koch, author Lijun Wu, author B. L. Winn, author Songxue Chi, author M. Matsuda, author M. Frontzek, author E. S.Bo žžin, author Yimei Zhu, author I. Božžovi ćć, author Abhay N. Pasupathy, author Ilya K. Drozdov, author Kazuhiro Fujita, author G. D. Gu, author I. A. Zaliznyak, author QiangLi,and author J. M.Tranquada, title title Strongly overdoped : Evidence for Josephson-coupled grains of strongly correlated superconductor, 10.1103/PhysRevB.106.224515 journal journal Phys. Rev. B volume 106, pages 224515 (year 2022)NoStop
http://arxiv.org/abs/2312.16632v1
{ "authors": [ "D. M. Broun", "H. U. Özdemir", "Vivek Mishra", "N. R. Lee-Hone", "Xiangru Kong", "T. Berlijn", "P. J. Hirschfeld" ], "categories": [ "cond-mat.supr-con", "cond-mat.str-el" ], "primary_category": "cond-mat.supr-con", "published": "20231227163650", "title": "Optical conductivity of overdoped cuprates from ab-initio out-of-plane impurity potentials" }
A Non-Uniform Low-Light Image Enhancement Method with Multi-Scale Attention Transformer and Luminance Consistency Loss

Xiao Fang, Xin Gao, Baofeng Li, Feng Zhai, Yu Qin, Zhihang Meng, Jiansheng Lu, Chun Xiao

Affiliations: [1] School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, 100876, China; [2] China Electric Power Research Institute Company Limited, Beijing, 100192, China; [3] School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072, China; [4] State Grid Shanxi Marketing Service Center, Taiyuan, 030032, China

Low-light image enhancement aims to improve the perception of images collected in dim environments and provide high-quality data support for image recognition tasks. When dealing with photos captured under non-uniform illumination, existing methods cannot adaptively extract the differentiated luminance information, which will easily cause over-exposure and under-exposure. From the perspective of unsupervised learning, we propose a multi-scale attention Transformer named MSATr, which sufficiently extracts local and global features for light balance to improve the visual quality. Specifically, we present a multi-scale window division scheme, which uses exponential sequences to adjust the window size of each layer. Within different-sized windows, the self-attention computation can be refined, ensuring the pixel-level feature processing capability of the model. For feature interaction across windows, a global transformer branch is constructed to provide comprehensive brightness perception and alleviate exposure problems. Furthermore, we propose a loop training strategy, using the diverse images generated by weighted mixing and a luminance consistency loss to improve the model's generalization ability effectively. Extensive experiments on several benchmark datasets quantitatively and qualitatively prove that our MSATr is superior to state-of-the-art low-light image enhancement methods, and the enhanced images have more natural brightness and outstanding details. The code is released at https://github.com/fang001021/MSATr.

§ INTRODUCTION

Due to unavoidable environmental or technical limitations, the quality of images captured in low-light conditions is often displeasing. It can negatively impact subsequent tasks such as image classification<cit.>, target detection<cit.> and image generation<cit.>. Therefore, low-light image enhancement (LLIE) aims to improve the quality of images acquired in poor-lighting environments. It helps to enhance the visual quality, and the enhanced images can also be used for subsequent advanced vision tasks. In the early research on low-light image enhancement, traditional methods<cit.> mainly use artificially designed models and filter structures. However, problems like color bias and insufficient details often occur when processing low-light images. It is also tricky to flexibly make adaptive adjustments in the face of diverse and complex scenarios. In recent years, deep learning-based LLIE methods have developed rapidly, relying on their excellent deep feature extraction capabilities to improve image restoration performance without difficult manual parameter adjustment. Most deep learning methods<cit.> learn feature representations from pairs of images in a supervised manner, requiring images of different brightness in the same scene during the training process.
However, meeting such stringent data requirements in actual application scenarios is complex, and an overly strict supervision process may also cause over-fitting problems. Therefore, many unsupervised methods have been proposed to complete low-light image enhancement tasks. Some methods<cit.> use GAN<cit.> structures to distinguish the difference between low-light and normal image sets, learning brightness enhancement rules and optimizing detail representation, while other unsupervised methods<cit.> constrain features such as brightness and color through a large number of no-reference loss functions.However, whether it is a supervised or unsupervised method, how to deeply fit low-light image data features from limited training data, adaptively process diverse images in natural scenes, and balance image brightness is still a vital issue in this visual task. As shown in Figure <ref>, when processing low-light images with relatively balanced illumination in input A, most existing deep-learning models can effectively improve visual quality by enhancing brightness, improving detail representation, and reducing noise. However, in the real world, due to differences in the reflection properties of different objects or the influence of fill-light equipment such as flash, uneven low-light images similar to input B are very common, in which the character is in the highlighted area, and the background is almost entirely dark. When enhancing this type of image, many methods may have difficulty adaptively perceiving the brightness differences in different regions, resulting in over-enhancement of bright areas and under-enhancement of dark spots.In response to the above problems, some low-light image enhancement algorithms <cit.> use attention calculations during the convolutional encoding and decoding processes to alleviate over- and under-enhancement to a certain extent. However, in recent years, literature<cit.> has demonstrated the inevitable loss of information during the convolution compression process, which may make the brightness characteristics and structural information more challenging to capture. In the Vision Transformer proposed in 2021, Dosovitskiy et al. <cit.> guided image classification by slicing the image into blocks and combining attention features between image blocks. Compared with convolutional attention, the transformer columnar structure for images can better handle long-distance dependencies, avoid information loss, and achieve better experimental results under multiple visual tasks. However, in low-light image enhancement tasks, considering complex scenes such as non-uniform lighting, how to use the columnar structure of the image transformer to extract features from limited image information better, avoid information loss while ensuring global brightness balance, and enhance the local detail representation of the image is a great difficulty in improving the attention-based low-light image enhancement process. Therefore, we propose a multi-scale attention Transformer named MSATr for nun-uniform low-light image enhancement. This model performs learning through generative adversarial networks independent of training data pairs. Through the local-global features extraction network, MSATr performs multi-scale window division and self-attention calculation, enhancing pixel-level detail information inside the window and the feature interaction across different windows, to achieve the overall brightness balance and finer details performance. 
To further solve the image exposure problems, we propose a new loop training process and an uneven loss to help the model deeply understand the complex lighting conditions in various natural scenes, thereby achieving adaptive enhancement of non-uniform low-light images and avoiding overexposure.In summary, the main contributions of our paper are summarized as follows:* A multi-scale attention Transformer named MSATr is presented for low-light image enhancement to solve the over-exposure and under-exposure problems. MSATr refines local feature processing through self-attention computation within multi-scale windows, ensuring finer details in enhanced images and reducing information loss. The designed local-global transformer branch network strengthens the fusion of features across different regions. It also limits the overall brightness based on multi-level regional information, generating a more natural enhanced image.* A consistency loop training strategy is proposed to improve the model's adaptability under limited low-light data. During the model training, pairs of non-uniform images are generated by weighted mixing across photos with different brightness levels. These mixed pictures provide new and diverse references, improving the model's illumination balancing ability and generalization performance. In addition, the designed luminance consistency loss can more effectively constrain the loop process and accelerate training convergence. * Comprehensive experiments on several benchmark datasets are conducted to consistently endorse the superiority of MSATr. The results are measured in terms of visual quality and multiple image quality assessment metrics. In contrast to existing low-light image enhancement approaches, MSATr proves to be particularly adapted to nun-uniform brightness balancing and detail enhancement.§ RELATED WORK This section reviews related research work on low-light image enhancement, which mainly includes traditional and deep learning methods, and then introduces our motivation for using the transformer structure. §.§ Traditional methods Histogram equalization (HE) <cit.> is a commonly used simple and fast method in traditional low-light image enhancement tasks. Researchers have proposed a variety of enhancement algorithms based on global or local histograms (contrast-limited adaptive histogram equalization <cit.>, dualistic sub-image histogram equalization method <cit.>, adaptive histogram equalization<cit.> and brightness bi-histogram equalization method<cit.>) to adjust the pixel value range of the input image dynamically. Lee et al. <cit.> proposed LDR, which uses the connection between adjacent pixels and the global gray level difference to adjust the brightness of local areas, thereby achieving better visual effects locally and globally. However, these traditional HE methods cannot analyze the characteristics of the image itself, and adjustments based on the pixel value range may lead to the loss of crucial information in the picture, resulting in distortion. Therefore, some traditional algorithms decompose low-quality images' illumination and reflection components based on Retinex theory <cit.> and perform brightness estimation and restoration. Jobson et al. 
<cit.> first proposed the single-scale Retinex algorithm (SSR), using Gaussian blur input as the illumination map to solve the reflection component, and then optimized the multi-scale Retinex algorithm <cit.> (MSR) on this basis, achieving better results by fusing multiple Gaussian functions with different variances. In addition to the methods of solving the reflection component, Xiao et al. <cit.> proposed the LIME model, which directly finds the maximum intensity of each pixel in three channels to construct and optimize the illumination map, improving image quality and computational efficiency. However, due to the relatively fixed algorithm framework, these traditional methods often fail to achieve the expected enhancement effects in some aspects. They cannot consider the detailed information and color information of the image itself in the process of enhancing brightness. At the same time, manual adjustment of parameters is required when facing complex low-light scenes and diverse low-light images, which results in fewer practical applications in multiple scenarios. §.§ Deep learning methods In recent years, deep learning has gradually become the leading solution for various image enhancement tasks <cit.> due to its better accuracy and robustness. In low-light image enhancement, existing deep learning methods can be divided into supervised and unsupervised approaches based on data requirements. §.§.§ Fully supervised methods Most existing low-light image enhancement methods rely on paired image data to learn differences such as brightness between images. The first low-light image enhancement algorithm based on deep learning, LLNet <cit.>, simultaneously achieves end-to-end brightness enhancement and denoising through an improved stacked denoising encoder. After that, Lv et al. <cit.> proposed an end-to-end multi-branch enhancement network MBLLEN, which extracts effective feature representations through the feature extraction module, enhancement module, and fusion module to improve model performance, completing several subtasks such as low-light image brightness enhancement and noise removal. Compared with end-to-end networks, deep learning methods based on physically interpretable Retinex theory have better enhancement performance in most cases. Wei et al. <cit.> decomposed the image into reflection components and smooth illumination through a deep learning network for the first time and reconstructed the image through the enhancement network. On this basis, Wu et al. <cit.> further optimized the image decomposition process and proposed a deep expansion network URetinex-Net, which realizes image enhancement through three modules and uses powerful deep model learning capabilities to simulate data-related priors. Recently, Guo et al. <cit.> separated the three tasks of brightness enhancement, noise removal, and color correction, proposed the Bread model, and tried to solve the coupling relationship between noise elimination and color distortion in the brightness-chromaticity space. However, stringent paired images are required in supervised methods that rely on professional equipment and rigorous acquisition processes. In addition, the fully supervised training strategy can easily lead to overfitting of the model, and the model's generalization performance is not good enough. When faced with non-uniform low-light images in unseen multiple scenes, the supervised model may have difficulty adaptively balancing image brightness, resulting in over-exposure and under-exposure. 
§.§.§ Unsupervised methods Unlike supervised methods that require paired data, unsupervised methods have received more and more attention and research in recent years due to their lower data requirements and better generalization capabilities. Jiang et al. <cit.> proposed Enlighten-GAN, which used a generative confrontation method to train an unsupervised model for the first time and ensured image enhancement quality through a global-local discriminator. However, the simple generator structure cannot effectively guarantee fine image information processing. Therefore, Fu et al. <cit.> built LE-GAN, trained the model in a generative adversarial manner, and combined attention calculations during the encoding and decoding process to optimize the network's feature extraction capabilities to solve noise and color deviation problems and improve visual quality. In addition to generative adversarial methods, some unsupervised methods use finely combined reference-free loss functions to constrain the image enhancement process. Since there is no need to train a discriminator, these methods reduce the number of parameters and computational overhead to a great extent and do not require additional data annotation at all. Guo et al. <cit.> proposed a zero-reference deep learning method, Zero-DCE, which transforms the image enhancement task into a specific curve estimation. These curves are then used to dynamically adjust pixel values of low-light images to speed up model inference. Although discrete pixel value curve correction can significantly reduce the number of parameters, it ignores local area correlation. After that, Ma et al. <cit.> proposed the self-calibration model SCI, which completes self-supervised learning of image enhancement through a multi-level illumination self-calibration module that shares weights without requiring any data labels. Compared with supervised learning methods, most unsupervised methods may have difficulty fitting the distribution characteristics of training data and processing multi-scale feature information, leading to worse detail performance and higher noise levels. Although the single discriminator or no-reference loss function of the method can better constrain the overall brightness level and various color texture information of the enhanced image, it is difficult to adaptively divide the light and dark areas and adjust the enhancement intensity for unsupervised methods when facing non-uniform low-light images. §.§ Vision Transformer application Vaswani et al. <cit.> initially proposed that the Transformer structure be applied to natural language processing <cit.> (NLP) and fully mine the feature correlation between data through Attention calculation and feed-forward neural network. Due to this structure's excellent global vision and attention mechanism, more and more image processing works have used the Transformer structure as the backbone network and achieved good results. Dosovitskiy et al. <cit.> first proposed a Vision Transformer (ViT), which used the Transformer structure for non-overlapping image blocks and achieved higher accuracy and faster computing speed than the convolutional structure in image classification tasks. Based on the multi-scale and high-resolution characteristics of the image itself, Liu et al. <cit.> proposed the Swin-transformer structure, which reduces model parameters and calculation volume by dividing windows and window shifts. 
In recent years, researchers<cit.> have applied the vision transformer structure to image reconstruction tasks, verifying the feasibility of the structure and better feature processing capability than convolutional networks. Liang et al. <cit.> proposed SwinIR, using serial Swin modules to perform multiple image restoration tasks, including image denoising, image compression, etc. Deng et al. <cit.> improved the positional encoding process of ViT. They built an encoder and decoder based on the Transformer structure, which surpassed the existing convolutional structure-based methods in style transfer. To better extract features and reduce information loss, this paper introduces the vision transformer structure for low-light image enhancement. Its attention mechanism can also effectively balance brightness and reduce color deviation and noise interference. However, unlike other image tasks, low-light image enhancement requires pixel-level feature extraction capabilities to ensure the detailed representation in enhanced images. This may need a more refined transformer network. At the same time, the significant feature differences in light and dark areas of low-light shots also challenge the attention calculation process. Therefore, how to improve detailed feature-extraction capabilities and better balance differentiated light through attention computation is a great challenge for low-light image enhancement under the transformer framework.§ METHOD This section first introduces the MSATr's overall network structure and training process, then shows the critical local-global attention network, and finally lists several essential loss functions used in the training process. §.§ Network architectures As shown in Figure <ref>, in the generative adversarial network, MSATr is the generator to complete the end-to-end low-light image enhancement task. The model inputs a three-channel RGB low-illumination image, maps the data to a high-dimensional feature space through a local-global network, and calculates the image's multi-feature representation based on multi-head attention<cit.>. Then, multiple convolutions are used to unify the feature dimensions and merge channel features. Finally, the obtained local-global elements are fused through a convolutional network to regenerate the enhanced image. To better train the local-global network, the adversarial process refers to the dual discriminator structure of Enlighten-GAN <cit.>. Among them, the global discriminator is used to determine the category of the entire image. In contrast, the local discriminator is used to identify the small patches randomly cut from the input image, which helps to enhance the brightness and detail performance of local areas. In addition, a loop training network and a luminance consistency loss function are also added to the training process.§.§ Local-global attention network Figure <ref> shows the low-light image enhancement network structure. The network is divided into local-global feature extraction, feature fusion, and multi-layer convolution image generation. The local-global attention network establishes pixel-level feature correlation, thereby achieving light and dark area discrimination and adaptive enhancement while maintaining good detail performance and noise levels. §.§.§ Local attention feature extraction network The local network focuses on detailed features via in-window attention computation. 
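As a rough illustration of this in-window attention step, the sketch below partitions a feature map into non-overlapping windows and runs multi-head self-attention independently inside each window. It is a simplified PyTorch sketch under assumed tensor shapes and layer sizes, not the released MSATr implementation.

```python
import torch
import torch.nn as nn

def window_partition(x, ws):
    # x: (B, H, W, C) -> (B * num_windows, ws * ws, C)
    B, H, W, C = x.shape
    x = x.reshape(B, H // ws, ws, W // ws, ws, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, C)

def window_reverse(win, ws, H, W):
    # inverse of window_partition: (B * num_windows, ws * ws, C) -> (B, H, W, C)
    B = win.shape[0] // ((H // ws) * (W // ws))
    x = win.reshape(B, H // ws, W // ws, ws, ws, -1)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, -1)

class WindowAttention(nn.Module):
    """Self-attention computed independently inside each ws x ws window."""
    def __init__(self, dim, ws, heads=4):
        super().__init__()
        self.ws = ws
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                      # x: (B, H, W, C), channel-last
        B, H, W, C = x.shape
        win = window_partition(x, self.ws)     # (B * nW, ws * ws, C)
        h = self.norm(win)
        out, _ = self.attn(h, h, h)            # attention restricted to each window
        return window_reverse(win + out, self.ws, H, W)
```

Stacking several such layers with increasing window sizes then yields the multi-scale scheme described next.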
To deal with brightness and color features that vary significantly between different areas, we propose a multi-scale window division mechanism, which can effectively improve the model's ability to extract diverse information. Starting with the smallest window size 2, we continuously increase the size of each i-th layer based on an exponential sequence: Size_i=2^i. Taking into account both accuracy and efficiency, we use three multi-scale window attention layers. In different layers, a larger window can perceive more brightness information, especially for continuous changes in light and dark areas under non-uniform illumination. In contrast, for fragmented light and dark changes, a smaller window can help the model locate the light and dark dividing line and achieve better details.As shown in Algorithm 1, the local attention branch first performs patch segmentation through a 1×1 convolution layer to extract pixel-level detailed features. Then, it passes through the window attention calculation module with windows 2, 4, and 8 in sequence. The window attention calculation method of W-MSA <cit.> is used in each calculation module. Unlike the Swin transformer, because there is another branch for global attention calculation, MSATr abandons the original Swin window shift step to reduce computational complexity.§.§.§ Global attention feature processing network Self-attention calculation and pixel-level feature extraction within a maximum window of 8×8 have been implemented in the local branch, so the global feature processing network focuses on establishing feature connections between these windows. Through the integration of local and global information, pixel-window-pixel feature correlation can be found to achieve more detailed and comprehensive image generation. We first divide the image into multiple patches, each containing representative information in a large window. Operationally, the global network first performs patch segmentation through a convolutional layer with a convolution kernel size of 8×8 and a stride of 8, and then projects input patches into a sequential feature embedding ε. Given the input embedding sequence Z_l={ε_l1+P_l1,ε_l2+P_l2,...,ε_lL+P_lL}of length L and positional encoding P_l, two multi-head self-attention modules in series are used to establish the global connection across windows. The input sequence is encoded into query (Q), key (K), and value (V):Q = Z_lW_q, K = Z_lW_k, V = Z_lW_v,where W_q,W_k,W_v are parameter matrices. The multi-head attention is then calculated by 𝔽_MSA(Q,K,V) = Concat(Attention_1(Q,K,V), . . . , Attention_N (Q,K,V)),where N is the number of attention heads.To unify local and international features for subsequent feature fusion, the global feature vector dimensions are adjusted through Patch_Recovering, which uses multi-layer upsampling and deconvolution. Ultimately, the network dimensionally stacks the extracted local attention features with the global attention features, performs feature fusion through the convolutional network, and generates a three-channel RGB image. §.§ Consistency loop training process To solve the problem of over-enhancement and under-enhancement, we generate diverse non-uniform data by random splicing and mixing across images. Relying on the newly developed data, the loop training process improves the model's adaptive ability and generalization performance. 
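Before turning to the details of the data synthesis, the global branch and the local-global fusion described above can be summarized in a compact sketch. The layer widths, token counts and the two-block depth below are illustrative assumptions rather than the exact MSATr configuration: patches are embedded by an 8×8 strided convolution, processed by two multi-head self-attention blocks with learned positional embeddings, recovered to the input resolution by a transposed convolution, and finally fused with the local features.

```python
import torch
import torch.nn as nn

class GlobalBranch(nn.Module):
    """8x8 patch embedding -> two self-attention blocks -> recover resolution."""
    def __init__(self, in_ch=3, dim=96, heads=4, img=256):
        super().__init__()
        self.embed = nn.Conv2d(in_ch, dim, kernel_size=8, stride=8)
        n_tokens = (img // 8) ** 2                       # assumes img x img inputs
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, dim))
        self.blocks = nn.ModuleList(
            [nn.MultiheadAttention(dim, heads, batch_first=True) for _ in range(2)])
        self.recover = nn.ConvTranspose2d(dim, dim, kernel_size=8, stride=8)

    def forward(self, x):                                # x: (B, 3, H, W)
        z = self.embed(x)                                # (B, dim, H/8, W/8)
        B, C, h, w = z.shape
        z = z.flatten(2).transpose(1, 2) + self.pos      # (B, L, dim) token sequence
        for attn in self.blocks:
            z = z + attn(z, z, z)[0]                     # residual self-attention
        z = z.transpose(1, 2).reshape(B, C, h, w)
        return self.recover(z)                           # back to (B, dim, H, W)

class Fusion(nn.Module):
    """Concatenate local and global features and decode an RGB image."""
    def __init__(self, dim_local=48, dim_global=96):     # channel widths are assumed
        super().__init__()
        self.out = nn.Sequential(
            nn.Conv2d(dim_local + dim_global, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, f_local, f_global):
        return self.out(torch.cat([f_local, f_global], dim=1))
```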
As shown in Figure <ref>, new training data can be obtained in each round of loop training by mixing the input low-light image with the enhanced image produced by the model. During image synthesis, a random region is first cropped from both the low-light image and the enhanced image. Then, a random weighted mixture is computed in that region according to the following formula:

I'=αI_out+(1-α)I_in,

where I_out and I_in represent the enhanced image and the input image, respectively, and α is a random mixing weight between 0 and 1. The larger its value, the closer the mixed region is to the enhanced image. After this, the resulting non-uniform image is fed back into the model to obtain a secondary enhancement. Ideally, this secondary enhanced image should be consistent with the first enhanced image, which sharpens the model's ability to balance brightness. At the same time, the richer data modes improve the model's generalization performance.

§.§ Loss functions

This section introduces the loss functions involved in the training process. First, the luminance consistency loss used during loop training is proposed. The section also covers the adversarial loss, the identity-invariant loss, and the self-feature preserving loss.

§.§.§ Luminance consistency loss

According to Section <ref>, we synthesize new image data for loop training to improve the model's ability to process uneven images. To constrain the training process and strengthen the model's perception of brightness, the luminance consistency loss is defined as:

ℒ_U=1/α mn∑_i=0^m-1∑_j=0^n-1[I(i,j)-K(i,j)]^2,

where I and K denote the re-enhanced (second-pass) image and the first-pass enhanced image, and mn is the area of the randomly cut patch.

This loss constrains the two enhancement passes to be consistent, thereby strengthening the light-balancing ability of the model. In addition, the pixel-wise, strongly supervised nature of this constraint accelerates training convergence.

§.§.§ Adversarial loss

To impose constraints on the MSATr generation network and complete the low-light image enhancement task, the adversarial process uses a local-global dual discriminator for unsupervised learning. The performance of the enhancement network is evaluated through the discriminator's judgment of the generated images, encouraging them to move closer to the distribution of real normal-light images.

The global discriminator and generator losses are:

ℒ_D^Global=log(D^Global(x_r))+log(1-D^Global(x_f)), ℒ_G^Global=-log(1-D^Global(x_f)),

where D^Global denotes the global discriminator, and x_r and x_f denote sampled real normal-light images and enhanced images, respectively.

The local discriminator and generator losses are defined analogously:

ℒ_D^Local=log(D^Local(x_r^patch))+log(1-D^Local(x_f^patch)), ℒ_G^Local=-log(1-D^Local(x_f^patch)),

where x_r^patch and x_f^patch denote local patches randomly cropped from the corresponding images.

§.§.§ Self-feature preserving loss

To keep the enhanced image consistent with the content of the input low-light image, the perceptual loss proposed by Johnson et al. <cit.> is adopted during training. We use a pre-trained VGG model to extract feature maps and compare the content similarity of the two images.
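Before formalizing this perceptual term, the loop-training step introduced above can be condensed into a short sketch: a randomly weighted mix of the input and its first enhancement is built on a random region, re-enhanced, and constrained to agree with the first pass. The patch size, the omission of the 1/α rescaling, and the training-step snippet are simplifying assumptions, not the exact MSATr code.

```python
import torch
import torch.nn.functional as F

def mix_patch(low, enhanced):
    """Paste a randomly weighted mix of `enhanced` into `low` on a random region."""
    B, C, H, W = low.shape
    ph, pw = H // 2, W // 2                       # assumed patch size: half the image
    top = torch.randint(0, H - ph + 1, (1,)).item()
    left = torch.randint(0, W - pw + 1, (1,)).item()
    alpha = torch.rand(1).item()                  # random mixing weight in (0, 1)
    mixed = low.clone()
    mixed[:, :, top:top + ph, left:left + pw] = (
        alpha * enhanced[:, :, top:top + ph, left:left + pw]
        + (1 - alpha) * low[:, :, top:top + ph, left:left + pw])
    return mixed, (top, left, ph, pw)

def luminance_consistency_loss(second_pass, first_pass, region):
    """Pixel-wise MSE between the re-enhanced mixed image and the first
    enhancement, evaluated on the mixed region (1/alpha rescaling omitted)."""
    top, left, ph, pw = region
    a = second_pass[:, :, top:top + ph, left:left + pw]
    b = first_pass[:, :, top:top + ph, left:left + pw]
    return F.mse_loss(a, b)

# One loop-training step (model and optimizer assumed to exist):
# out1 = model(low)                         # first enhancement
# mixed, region = mix_patch(low, out1.detach())
# out2 = model(mixed)                       # second enhancement of the mixed input
# loss_u = luminance_consistency_loss(out2, out1, region)
```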
The self-feature preserving loss can be expressed as:

ℒ_C=1/N_l∑_i=1^N_l||ϕ_i(G(x_l))-ϕ_i(x_l)||_2,

where ϕ_i(·) denotes the features extracted from the i-th layer of the VGG16 network, N_l is the number of layers used, x_l is the input low-light image, and G(x_l) is the image generated by the enhancement network.

§.§.§ Identity invariant loss

To avoid over-enhancement and to speed up the model's convergence on normal-light images, normal images are also fed to the enhancement network during training, and the following identity-invariant loss constrains the output:

ℒ_I=||G(x_r)-x_r||_2,

where x_r denotes a real normal-light image used as network input and G(x_r) is the corresponding output of the enhancement network. The identity-invariant loss encourages the network to preserve the brightness level of images that are already well exposed, thereby avoiding over-exposure and information loss.

§ EXPERIMENTS

This section describes MSATr's implementation details and parameter settings, introduces the supervised and unsupervised datasets and evaluation metrics used, reports extensive comparative experiments between MSATr and state-of-the-art deep learning methods to verify its enhancement performance and generalization ability on multiple datasets, and finally examines each module's contribution through ablation experiments.

§.§ Implementation details

The model is implemented in PyTorch and optimized with the Adam optimizer, with β_1 and β_2 set to 0.9 and 0.999. The learning rate starts at 5e-5 and decays over the course of training. In each training iteration, batch-size low-light and normal-light pictures are randomly selected from the dataset, and a 256×256 patch is randomly cropped from each picture for training. Test images are resized to 512×512 before being fed to the network. All training and testing were performed on an NVIDIA 3090 Ti GPU.

§.§ Datasets and metrics

To verify the effectiveness of the proposed method, we evaluate the model on several public datasets. The LOL dataset <cit.> contains 485 pairs of low-light and normal-light training images and 15 pairs of test images. In addition, we use five reference-free natural dark image datasets, DICM <cit.>, LIME <cit.>, NPE <cit.>, MEF <cit.>, and VV[ https://sites.google.com/site/vonikakis/datasets], to test the generalization performance of the trained model. The evaluation metrics include PSNR, SSIM, LPIPS<cit.>, and NIQE<cit.>. The first three are full-reference metrics that require normal-light images for comparison: PSNR reflects the pixel-value difference between the evaluated image and the reference image, SSIM measures their similarity in brightness, contrast, and structure, and LPIPS measures the distance between their feature maps obtained from a deep feature extractor. NIQE is a no-reference metric that quantifies the deviation of the assessed image from natural image statistics.
The lower the value, the closer the image to be evaluated is to the realistic image.§.§ Comparison with state-of-the-art methods We compared MSATr with several traditional methods, i.e., LDR<cit.> and LIME<cit.>, and several state-of-the-art deep learning methods, i.e., Retinex-Net<cit.>, Uretinex-Net<cit.> and Bread<cit.>, Zero-DCE<cit.>, EnlightenGAN<cit.>, SCI<cit.> and LE-GAN<cit.>. The details of these methods are shown in Table <ref>. All models were fully retrained and tested using official codes in the same experimental environment.§.§.§ Comparative experimental results on the LOL test setTable <ref> shows the quantitative comparison between MSATr and other competitors on the LOL test set. Our method outperforms the other unsupervised methods in all reference indicators significantly and is close to the best-supervised methods. Under the no-reference index NiQE, MSATr achieved the best score, meaning the enhanced image has a more natural and realistic visual effect. Figure <ref> shows some test image results of LOL. The image generated by MSATr has more uniform illumination levels and fewer artifacts and can consider the overall brightness, color, and local details. Most other contrast methods have problems such as insufficient enhancement, color distortion, and residual noise. §.§.§ Generalization ability comparison Generalization performance is significant for deep learning models. Therefore, in this section, the model trained on the LOL dataset mentioned above performs image enhancement testing on five unsupervised datasets. Then, the quality of the generated images is evaluated. As shown in Table <ref>, except for LIME, MSATr achieved the best results on the other four unsupervised datasets, outperforming all advanced supervised and unsupervised methods on the average of the five datasets. In addition, it can be seen from the experimental results that unsupervised methods are generally better than supervised methods in terms of generalization performance. At the same time, based on data advantages, unsupervised methods have excellent development potential and practical application value. Figure <ref> shows no-reference test images and enhancement effects. The red box represents the bright area of the image, and the blue box represents the dark area of the image. It can be seen that the enhanced appearance of MSATr has the best visual effect. When processing pictures with non-uniform lighting, it can adaptively balance and improve the brightness according to the different brightness between areas, making the overall brightness more natural. It can not only better enhance objects in dark places but also avoid exposure to bright spots. Most other contrast methods have problems of over-enhancement and under-enhancement.§.§.§ Analysis of adaptive capabilities of deep learning methods This article focuses on adapting the intensity during enhancement and avoiding over-exposure and information loss. We believe the model should always recover the image based on color, luminance, and structure information rather than simply increasing the brightness, especially under non-uniform lighting. Therefore, the experiment in this section analyzes the adaptive perception ability of the deep learning model.We re-input the enhanced image into the models several times and check for over-exposure. Figure <ref> shows the qualitative results of each low-light image enhancement algorithm for one enhancement and three repeated enhancements. 
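This repeated-enhancement check is straightforward to script. The sketch below assumes a generic `model` callable that maps images in [0, 1] to images in [0, 1]; a brightness trace that keeps climbing across rounds indicates over-enhancement rather than adaptive behaviour.

```python
import torch

@torch.no_grad()
def repeated_enhancement(model, image, n_rounds=3):
    """Re-apply the enhancer to its own output and track the mean brightness."""
    brightness = [image.mean().item()]
    x = image
    for _ in range(n_rounds):
        x = model(x).clamp(0.0, 1.0)
        brightness.append(x.mean().item())
    return x, brightness   # a steadily drifting brightness list signals over-exposure
```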
Except for MSATr, all deep learning-based methods produce severe exposure under repeated enhancement, resulting in unacceptable information loss. This means that our MSATr truly realizes the adaptive perception of the brightness of the input image and can perform adaptive enhancement according to the image's characteristics to avoid over- and under-enhancement.§.§ Ablation study This section conducts ablation experiments to verify the effectiveness of the local-global structure by retaining a single local attention enhancement network and a global attention enhancement network and retraining them. At the same time, the experiment verified its impact on the network by removing the cyclic training process and luminance consistency loss. The quantitative experimental results are shown in Table <ref>. On 6 test sets, the NIQE index has declined to a certain extent whether it is a single local or global enhancement network, which shows that the local-global structure can effectively combine its advantages, taking into account the overall and detailed characteristics of the generated image. In addition, after removing the luminance consistency loss, the NIQE index dropped significantly on all data sets, which proves the model's strong brightness perception and excellent enhancement effect.Figure <ref> shows the impact of local-global structure and luminance consistency loss of visual quality. It can be seen that the enhanced image of a single local branch network is distorted in brightness and color and cannot balance the brightness of the entire image nicely, but the details are retained relatively clearly. The overall brightness of the generated image with a single global structure is more coordinated, but the details (smaller numbers) are blurry. A complete model trained without luminance consistency loss has difficulty capturing dark areas and performing targeted enhancement when processing non-uniform low-light images, which will affect the adaptive enhancement performance of the entire network for low-light images. The complete MSATr network can synthesize local and global information, ensuring overall brightness coordination and better detail processing. Under luminance consistency loss, over-expose and under-expose will not occur when low-light images are processed.§ CONCLUSION This paper introduces an unsupervised low-light image enhancement network MSATr. The multi-scale attention network can better retain pixel-level detailed information through multi-scale window division, self-attention computation and feature interaction. At the same time, a consistency loop training process is used to enhance the model's adaptability for non-uniform low-light images and generalization performance. The brightness of the enhanced image is more balanced, details are more apparent, and the possibility of over-exposure is reduced. The PSNR, SSIM, LPIPS, and NIQE indicators of MSATr in multiple data sets are better than other advanced low-light image enhancement methods, which quantitatively proves the advantages of MSATr over existing methods in low-light image enhancement tasks. The ablation experiment verified the working characteristics of the local-global transformer structure and the effectiveness of the loop training strategy. AcknowledgmentsThe authors would like to thank their colleagues from the machine learning group for discussions on this paper. 
This work was supported by Science & Technology Project of State Grid Corporation of China (No.5400-202355230A-1-1-ZN)Author ContributionsXiao Fang: Methodology, Software, Writing - Original Draft, Writing - Review & Editing.Xin Gao: Conceptualization, Methodology, Supervision, Writing - Original Draft, Writing - Review & Editing.Baofeng Li: Software, Validation, Funding acquisition.Feng Zhai: Conceptualization, Resources, Funding acquisition.Yu Qin: Software, Validation, Funding acquisition.Zhihang Meng: Writing - Review & Editing.Jiansheng Lu: Conceptualization, Resources, Funding acquisition.Chun Xiao: Conceptualization, Validation, Funding acquisition.Availability of data and materialsThe datasets supporting the results of this article are LOL, DICM, LIME, NPE, MEF and VV public datasets, and the authors confirm that the datasets are indicated in reference list.§ DECLARATIONS * Competing interests: The authors have no competing interests to declare that are relevant to the content of this article.* Ethics approval: Not applicable. * Consent to participate: Not applicable.* Consent for publication: Not applicable.
http://arxiv.org/abs/2312.16498v1
{ "authors": [ "Xiao Fang", "Xin Gao", "Baofeng Li", "Feng Zhai", "Yu Qin", "Zhihang Meng", "Jiansheng Lu", "Chun Xiao" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231227100711", "title": "A Non-Uniform Low-Light Image Enhancement Method with Multi-Scale Attention Transformer and Luminance Consistency Loss" }
Direct observation of topological magnon polarons in a multiferroic material

Jinsheng Wen
January 14, 2024

These authors contributed equally to this work.

Affiliations: National Laboratory of Solid State Microstructures and Department of Physics, Nanjing University, Nanjing 210093, China; Department of Applied Physics, Nanjing University of Science and Technology, Nanjing 210094, China; School of Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China; J-PARC Center, Japan Atomic Energy Agency (JAEA), Tokai, Ibaraki 319-1195, Japan; Laboratory for Neutron Scattering and Imaging, Paul Scherrer Institute (PSI), CH-5232 Villigen, Switzerland; Collaborative Innovation Center of Advanced Microstructures, Nanjing University, Nanjing 210093, China

Magnon polarons are novel elementary excitations possessing hybrid magnonic and phononic signatures, and are responsible for many exotic spintronic and magnonic phenomena. Despite long-term sustained experimental efforts in chasing for magnon polarons, direct spectroscopic evidence of their existence is hardly observed. Here, we report the direct observation of magnon polarons using neutron spectroscopy on a multiferroic Fe_2Mo_3O_8 possessing strong magnon-phonon coupling. Specifically, below the magnetic ordering temperature, a gap opens at the nominal intersection of the original magnon and phonon bands, leading to two separated magnon-polaron bands. Each of the bands undergoes mixing, interconverting and reversing between its magnonic and phononic components. We attribute the formation of magnon polarons to the strong magnon-phonon coupling induced by Dzyaloshinskii-Moriya interaction. Intriguingly, we find that the band-inverted magnon polarons are topologically nontrivial. These results uncover exotic elementary excitations arising from the magnon-phonon coupling, and offer a new route to topological states by considering hybridizations between different types of fundamental excitations.

Magnons and phonons, quanta of spin waves and lattice vibrations respectively, constitute two fundamental collective excitations in ordered magnets. When there is a strong magnon-phonon coupling, they can be hybridized to form a gap at the intersection of the original magnon and phonon bands (Fig. <ref>a)<cit.>. The hybridized bands feature mixed, interconverted and reversed magnonic and phononic characters, and the associated quasiparticles are defined as magnon polarons<cit.>. Magnon polarons can resonantly enhance the spin-pumping effect<cit.>, and provide a phonon-involved way to generate and manipulate spin currents carried by magnons thanks to their hybrid nature<cit.>, signifying promising potentials in spintronics technology<cit.>. More recently, it has been predicted that magnon-polaron bands can exhibit nonzero Chern numbers and large Berry curvatures, giving rise to the thermal Hall effect<cit.>. These proposals have motivated sustained experimental efforts in chasing for magnon polarons <cit.>. However, direct spectroscopic evidence with their delicate band structures being explicitly unveiled by neutron spectroscopy is still rare.
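The anticrossing picture sketched in Fig. 1a can be captured by a minimal two-level model: at each momentum a bare magnon mode and a bare phonon mode are coupled by a constant g, and diagonalizing the resulting 2×2 matrix opens a gap of 2g at their crossing while mixing their characters. The short numerical sketch below uses arbitrary illustrative parameters, not those of Fe_2Mo_3O_8.

```python
import numpy as np

k = np.linspace(0.0, 1.0, 201)          # reduced momentum (illustrative)
omega_m = np.full_like(k, 12.0)         # flat magnon branch, meV (assumed)
omega_p = 25.0 * k                      # acoustic phonon branch, meV (assumed)
g = 0.8                                 # magnon-phonon coupling, meV (assumed)

upper, lower, magnon_weight_upper = [], [], []
for wm, wp in zip(omega_m, omega_p):
    H = np.array([[wm, g], [g, wp]])    # basis: (magnon, phonon)
    vals, vecs = np.linalg.eigh(H)      # eigenvalues sorted ascending
    lower.append(vals[0])
    upper.append(vals[1])
    magnon_weight_upper.append(abs(vecs[0, 1]) ** 2)   # magnonic fraction of upper branch

# At the nominal crossing (omega_p == omega_m) the gap equals 2*g and each
# hybridized branch carries a 50/50 mixture of magnon and phonon character.
```

Observing such an avoided crossing, together with the accompanying interchange of spectral weight between the branches, in a real material has nevertheless remained difficult.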
This is primarily because: i) materials with strong magnon-phonon coupling that can result in such excitations with prominent features are scarce; ii) magnons and phonons are rarely observed in the same energy-momentum window by neutron spectroscopy due to their different dynamical structure factors, which hinders the exploration of the interaction effects between them. To overcome these difficulties, Fe_2Mo_3O_8 is a prime candidate material<cit.>. Fe_2Mo_3O_8 is a multiferroic material with a polar P6_3mc space group (No. 186). Below the Néel temperature T_N∼60 K, both the space-inversion and time-reversal symmetries are broken <cit.>, as shown in Fig. <ref>b. The magnetism in Fe_2Mo_3O_8 arises from Fe^2+ ions, while Mo^4+ ions form nonmagnetic spin-singlet trimers<cit.>. The Fe^2+ ions on each Fe-O layer form a bipartite honeycomb network with different magnetic moments in corner-shared tetrahedra and octahedra (Fig. <ref>b, c)<cit.>. The absence of an inversion centre between nearest-neighbour (NN) Fe sites allows for a non-zero in-plane Dzyaloshinskii-Moriya (DM) interaction. Fe_2Mo_3O_8 exhibits long-range collinear antiferromagnetic order below T_N, with antiparallel yet uncompensated moments on each Fe-O layer stacking antiferromagnetically along the c axis (Fig. <ref>b)<cit.>. Furthermore, it is noteworthy that the magnetic configuration of Fe_2Mo_3O_8 can be controlled by either an external magnetic field or chemical doping, leading to a metamagnetic transition into a ferrimagnetic state (Supplementary Fig. 1c, d)<cit.>. More crucially, there has been accumulating evidence that the spin and lattice degrees of freedom are strongly coupled in Fe_2Mo_3O_8<cit.>, rendering it a promising platform to probe the long-sought magnon polarons<cit.>. In this work, we perform high-resolution neutron spectroscopy measurements and fully map out the magnon and phonon bands in multiferroic Fe_2Mo_3O_8 with strong magnon-phonon coupling<cit.>. Due to the acquisition of spin components from magnon conversion, the acoustic phonons show up together with magnons at small momenta. By examining the interaction effects between them, we directly observe the long-sought magnon polarons, which we show to be topologically nontrivial. These results not only unambiguously identify a new type of excitation, but also provide fresh ground to study topological states.

Magnons at high energies

Figure <ref>d shows the elastic neutron scattering results for single crystals of Fe_2Mo_3O_8. At 1.5 K, a significant increase in the magnetic scattering intensity is observed at the magnetic Bragg peak (1, 0, 0), indicating the establishment of antiferromagnetic order. On the other hand, (1, 0, 1) corresponds to the magnetic Bragg peak of the ferrimagnetic order<cit.>. Fitting the temperature dependence of the integrated intensities at (1, 0, 0) yields T_N=59.3 K and a critical exponent of 0.129, which closely matches the value of 0.125 expected for a two-dimensional Ising system<cit.>. These results are consistent with the magnetic susceptibility measurements (Supplementary Fig. 1c), suggesting that Fe_2Mo_3O_8 is a two-dimensional collinear antiferromagnet with strong magnetocrystalline anisotropy along the c axis. Figure <ref>e shows the inelastic neutron scattering (INS) results along the [100] direction at 1.5 K, obtained on a triple-axis spectrometer. No acoustic bands are found to disperse up from the magnetic Bragg peak (1, 0, 0). Instead, only two seemingly flat bands with a sizable gap are observed between 10 and 16 meV.
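As an aside, the critical exponent quoted above follows from a standard power-law fit of the order-parameter Bragg intensity, I(T) ∝ (1 − T/T_N)^(2β). A minimal least-squares version with placeholder data (not the measured intensities) is sketched below.

```python
import numpy as np
from scipy.optimize import curve_fit

def bragg_intensity(T, I0, Tn, beta):
    """Order-parameter-squared power law below the transition."""
    t = np.clip(1.0 - T / Tn, 0.0, None)
    return I0 * t ** (2.0 * beta)

# Placeholder data; replace with the measured integrated (1, 0, 0) intensities.
T_data = np.array([5.0, 20.0, 35.0, 45.0, 52.0, 56.0, 58.0, 59.0])
I_data = bragg_intensity(T_data, 1.0, 59.3, 0.129)

popt, _ = curve_fit(bragg_intensity, T_data, I_data, p0=(1.0, 60.0, 0.2))
I0_fit, Tn_fit, beta_fit = popt
print(f"T_N = {Tn_fit:.1f} K, beta = {beta_fit:.3f}")
```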
Upon warming, these two modes soften slightly, while the scattering intensities decrease dramatically (Fig. <ref>f). Above T_ N, the peaks disappear. This suggests that the two modes correspond to the spin-wave excitations originating from long-range magnetic order, rather than the crystal-electric-field excitations<cit.>. Given that there are four sublattices of Fe^2+ ions in the magnetic unit cell of (Fig. <ref>b), we consider these two magnon modes to be doubly degenerate in the collinear antiferromagnetic state, as also found in its sister compound Co_2Mo_3O_8<cit.>. We note that the energy scales of the excitation spectra in the other isostructural compound Ni_2Mo_3O_8<cit.> are rather different from those in (Fig. <ref>e) and Co_2Mo_3O_8<cit.>. In Ni_2Mo_3O_8, the ground state of the crystal-electric-field levels of Ni^2+ ions is a nonmagnetic singlet, while the interplay between crystal-electric-field effect and magnetic exchange interactions gives rise to a magnetic order with a relatively low transition temperature of T_ N=5.5 K. As a consequence, spin waves are observed below 1.5 meV, and higher-energy excitations are believed to arise from the crystal-electric-field spin excitons with a robust temperature dependence<cit.>.Anomalous phonons at low energiesTo examine the excitation spectra of in a larger momentum-energy space, we next performed INS measurements on a time-of-flight spectrometer. The two magnon modes between 10 to 16 meV can also be observed at 6 K along both [100] and [110] directions (Fig. <ref>a, b). These two modes are flat along [001] direction (Fig. <ref>c), indicating negligible interlayer coupling consistent with the two-dimensional nature of the magnetism deduced from the critical exponent (Fig. <ref>d). This behaviour allows us to integrate the intensities along [001] to improve the statistics when studying the magnon-related excitations, as we did in Fig. <ref>a, b. Turning to the lower energy window, we observe additional excitations alongside the two intense magnon modes (Fig. <ref>a-c). These low-energy modes introduce multiple bands that cannot be solely explained by linear spin-wave theory. Even when accounting for the nonlinearity of spin waves, the extended spin-wave theory fails to account for the presence of multiple bands at significantly lower energies<cit.>. Therefore, we propose that these additional modes to have a phononic origin, which can also be supported by the following facts. The scattering intensities of the low-energy excitations become stronger as the wavevector Q increases (Supplementary Fig. 2a-c), which is characteristic of phonons. Figure <ref>a shows a specific mode among these excitations, with an onset energy around 5 meV along the [100] direction. Notably, we find such onset excitations at (1, 0, 1) also correspond to the maximum of the excitations along the [001] direction as shown in Fig. <ref>b, while no significant signal can be observed at the intense magnetic Bragg peak (1, 0, 0) (Fig. <ref>d). Furthermore, the scattering intensities of these excitations tend to be stronger at larger Qs as shown in Fig. <ref>c. These findings indicate that these excitations can be interpreted as phonons connected by a saddle point around 5 meV, rather than gapped spin waves. Our theoretical calculations, which reproduce the multiple bands well, provide further support for the interpretation of these low-energy modes as phonons (Supplementary Fig. 4). 
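The Q-dependence argument used above can be stated schematically: the one-phonon cross-section grows roughly as Q², whereas magnetic scattering is suppressed by the magnetic form factor at large Q. The toy comparison below uses a generic Gaussian-like form-factor decay as an assumption, not the tabulated Fe²⁺ form factor.

```python
import numpy as np

Q = np.linspace(0.5, 6.0, 120)              # momentum transfer |Q| (arbitrary units)

phonon_weight = Q ** 2                      # one-phonon cross-section grows ~ Q^2
form_factor = np.exp(-(Q / 4.0) ** 2)       # schematic magnetic form factor (assumed width)
magnon_weight = form_factor ** 2            # magnetic cross-section falls as |f(Q)|^2

# Intensity that keeps strengthening towards large Q therefore points to a
# lattice (phononic) origin, while purely magnetic signal fades with Q.
```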
On the other hand, these modes exhibit some anomalous behaviours that suggest they are not ordinary phonons. At 100 K, which is above T_ N, they become invisible at small Qs but persist at large Qs, as shown in Fig. <ref>d-f and Supplementary Fig. 2d-f. Furthermore, it is noteworthy that the saddle point of the phonon spectra around 5 meV, previously considered as an electromagnon (∼ 1.2 THz) in multiferroics by some optical measurements<cit.>, will be electric-dipole active in the antiferromagnetic phase<cit.>. These results indicate these low-energy phonons acquire some spin components<cit.> through the strong magnon-phonon coupling, as discussed later, leading to additional magnetic scattering intensities at small Qs at 6 K. However, at 100 K, as the magnons collapse, phonons recover their original properties and can only be observed at large Qs, where the intrinsic dynamic structure factors are sufficiently large.Formation of magnon polaronsThe appearance of acoustic phonons at small momenta together with magnons enables us to examine the interaction between them, and we now focus on the regions where the weak and dispersive acoustic phonons tend to intersect with the intense and relatively flat magnons (Fig. <ref>a, b). Apparently, there exist spectral discontinuities at the nominal intersections between the phonons and magnons (Fig. <ref>d1, e1), indicative of the gap opening. To better resolve the gaps, we use a two-dimensional curvature method<cit.> to increase the sharpness of the bands (Fig. <ref>d2, e2). There are multiple gaps around the two magnon bands at about 11 and 14 meV. Together with the gap opening, the sudden change of the dispersions and the intensities in proximity to the gap unambiguously indicate that magnons and phonons are strongly hybridized and inverted, and consequently two magnon-polaron bands separated by the gap form, as illustrated in Fig. <ref>a. For the top magnon-polaron band, it changes from dominant magnonic to phononic character as it propagates. As a result, the band's velocity increases but the intensity decreases dramatically around the original intersection. The trend is opposite for the bottom magnon-polaron band. The observation that the low-energy dispersive phonons convert into magnons with enhanced intensity supports our earlier explanations for the anomalous phonon behaviours, as phonons involved with magnon conversion can carry spins<cit.>. Importantly, when the magnons collapse above T_ N, phonons revert to their original dispersions and scattering intensities (Fig. <ref> and Supplementary Fig. 2). The spectroscopic characteristics of hybrid magnon polarons and low-energy phonons acquiring spin components in the resonant and off-resonant regions, respectively, are both manifestations of the strong magnon-phonon coupling in .To further illustrate the formation and evolution of the magnon-polaron excitations, we plot a series of constant-energy (E) contours near the two anticrossing regions as shown in Fig. <ref>a-c (higher energy), and d-f (lower energy). In each region, there are always two sets of excitations around the zone centre, representing two magnon-polaron bands separated by the gap. The shapes and intensities of these two excitations change as the energy increases, manifesting the magnon-phonon interconversion within each magnon-polaron band during propagation. To better clarify these, we first plot a set of constant-E scans along [100] direction through the zone centre (Fig. 
<ref>i), which correspond to the lower anticrossing region (Fig. <ref>d-f). As the energy increases, it can be found that there is a spectral weight transfer between the two magnon-polaron bands as they propagate. A similar approach can be applied to the constant-E scans at other energies, so that we can extract a series of fitted peak centres and areas. We plot the relative peak position (Δ q) and the ratio of the spectral weight for each band to the total spectral weight of the two magnon-polaron bands as a function of energy (Fig. <ref>j). When the top (inner) and bottom (outer) magnon-polaron bands are approaching the anticrossing point from low energies, they are of primarily magnonic and phononic natures respectively, so the top band dominates the spectral weight as magnons are much stronger. When the two bands reach the anticrossing point, where Δ q has its minimum, strongest hybridization happens so that the magnonic and phononic components become comparable. Across this position, the main components of the two bands are reversed, and so are their relative intensity ratios. Eventually, the bottom band which is of primarily magnonic nature dominates the spectral weight. Figure <ref>g, h shows similar behaviours for the constant-E scans corresponding to the higher anticrossing region (Fig. <ref>a-c). We also examine the magnon-polaron excitations along the orthogonal [-120] direction, and the results are also similar (Supplementary Fig. 3). These results elaborate the band inversion between the original magnon and phonon bands. Together with the gaps in the dispersions (Fig. <ref>), these constitute the hallmarks for the magnon polarons.Dzyaloshinskii-Moriya mechanism and topologyTo understand the underlying mechanism of the observed magnon polarons, we develop an effective two-dimensional model that contains both the magnon and phonon terms (Methods). Without the magnon-phonon coupling, the obtained magnon and phonon dispersions are shown in Fig. <ref>a. Such dispersions capture the main features of the INS spectra (Fig. <ref>) except for the anticrossings between magnons and phonons. In principle, since magnetic exchange interactions depend on the relative ion positions, magnon-phonon coupling can arise due to the lattice vibrations. In , the magnon-phonon coupling induced by the DM interaction dominates those by other Heisenberg interactions (See details in the Methods). Generally speaking, only the component of the DM vector parallel to the magnetic moments will enter into the spin waves<cit.>. On the other hand, the DM vector here with only the in-plane component allowed, which is perpendicular to the magnetic moments (Fig. <ref>b, c), typically will not contribute to the magnetic ground state as well as the spin excitation spectra<cit.>. We consider it is the lattice vibrations that make the in-plane DM interaction come into play, which in return closely couple magnons and phonons (Methods). The mechanism that the vibrant DM vector can disturb the procession of the magnetic moments is illustrated in Fig. <ref>c. In Fig. <ref>b, the calculated spectra of the coupled system are shown (Parameters can be found in Supplementary Table 2). The gaps between the magnon and phonon bands and interconversions of their spectral components (Fig. <ref>b) well reproduce the characteristics of the magnon polarons observed experimentally, indicating that the DM-interaction-induced magnon-phonon coupling can give rise to the magnon-polaron excitations. 
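The band topology discussed next can be extracted numerically from any two-band Bloch Hamiltonian with the standard lattice (Fukui-Hatsugai) link-variable method. Since the full magnon-polaron Hamiltonian and its parameters are given in the Methods and Supplementary material rather than reproduced here, the sketch below uses a generic two-band model purely as a stand-in.

```python
import numpy as np

def h_toy(kx, ky, m=1.0):
    """Generic gapped two-band Bloch Hamiltonian used only as a stand-in."""
    sx = np.array([[0, 1], [1, 0]], complex)
    sy = np.array([[0, -1j], [1j, 0]], complex)
    sz = np.array([[1, 0], [0, -1]], complex)
    return np.sin(kx) * sx + np.sin(ky) * sy + (m + np.cos(kx) + np.cos(ky)) * sz

def chern_number(hfun, band=0, n=60):
    """Lattice (Fukui-Hatsugai) Chern number of the selected band."""
    ks = np.linspace(-np.pi, np.pi, n, endpoint=False)
    u = np.empty((n, n, 2), complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, vecs = np.linalg.eigh(hfun(kx, ky))
            u[i, j] = vecs[:, band]
    total_phase = 0.0
    for i in range(n):
        for j in range(n):
            # Wilson loop around one plaquette of the discretized Brillouin zone
            u1 = np.vdot(u[i, j], u[(i + 1) % n, j])
            u2 = np.vdot(u[(i + 1) % n, j], u[(i + 1) % n, (j + 1) % n])
            u3 = np.vdot(u[(i + 1) % n, (j + 1) % n], u[i, (j + 1) % n])
            u4 = np.vdot(u[i, (j + 1) % n], u[i, j])
            total_phase += np.angle(u1 * u2 * u3 * u4)
    return int(round(total_phase / (2 * np.pi)))

print(chern_number(h_toy, band=0))   # gives +-1 in the topological phase of the toy model
```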
Importantly, near the gaps, it is found that large Berry curvatures can be induced (Fig. <ref>d, e). Accordingly, the Chern number for each band is calculated and labeled in Fig. <ref>b. These results show that the magnon-polaron excitations are topologically nontrivial, in accordance with our experimental observation of the band inversion between magnons and phonons. To account for the three-dimensional nature of the phonons, we have also developed a three-dimensional model, as presented in Supplementary Fig. 4.DiscussionsBy now, combing our experimental spectra and theoretical calculations, we have provided compelling evidence that in there exist topological magnon polarons. Remarkably, in addition to the gap opening between the original magnon and phonon bands, we have also observed the band inversion. Such a case is analogous to that in topological insulators induced by the spin-orbit coupling, but involves the interconversion between magnons and phonons induced by the DM interaction. Therefore, our work provides new perspectives in seeking for topological states—that is to go beyond a single type of elementary excitations such as electrons, magnons or phonons only, and to consider the topology in their hybrid form.We note that a recent magneto-Raman scattering study<cit.> has reported the existence of topological magnon polarons due to the zigzag antiferromagnetic order in the monolayer FePSe_3, where the magnon-phonon coupling originates from the anisotropic exchange interactions<cit.>. It implies that the topological nature of band-inverted magnon polarons can be inherent and resilient, irrespective of the particular form of magnon-phonon coupling<cit.>. In our model calculations, we find that the DM term is larger than the Heisenberg terms (Supplementary Table 2), supporting the strong magnon-phonon coupling induced by the DM interaction in . Notably, this coupling mechanism involves the interaction between in-plane phonons and in-plane magnons that emerges as the leading order in , surpassing the general magnetoelastic coupling<cit.> that arises from perpendicular easy-axis anisotropy (See details in the Methods).The strong DM-interaction-induced magnon-phonon coupling in leads to the formation of topological magnon polarons in the magnon-phonon resonant region, with prominent features that are observed by our neutron spectroscopy measurements. Additionally, it gives rise to other intriguing phenomena even away from the anticrossing region, such as the anomalous phonons carrying spins (Fig. <ref> and Supplementary Fig. 2), low-energy excitations with electric dipole activity<cit.>, and the emergence of thermal Hall effect<cit.>. These results showing strong magnon-phonon coupling and hybrid excitations in suggest it to be a prime candidate in developing phonon-controllable spintronic devices. 53 fxundefined [1]ifx#1fnum [1]#1firstoftwosecondoftwo fx [1]#1firstoftwosecondoftwonoop [0]secondoftworef[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0]rl [1]href #1 @bib@innerbibempty[Kittel(1958)]PhysRev.110.836 author author C. Kittel, title title Interaction of Spin Waves and Ultrasonic Waves in Ferromagnetic Crystals, 10.1103/PhysRev.110.836 journal journal Phys. Rev. volume 110, pages 836–841 (year 1958)NoStop [Kamra et al.(2015)Kamra, Keshtgar, Yan, and Bauer]PhysRevB.91.104409 author author AkashdeepKamra, author HedyehKeshtgar, author PengYan,and author Gerrit E. 
W.Bauer, title title Coherent elastic excitation of spin waves, 10.1103/PhysRevB.91.104409 journal journal Phys. Rev. B volume 91, pages 104409 (year 2015)NoStop [Shen and Bauer(2015)]PhysRevLett.115.197201 author author Ka Shen and author Gerrit E. W. Bauer, title title Laser-induced spatiotemporal dynamics of magnetic films, 10.1103/PhysRevLett.115.197201 journal journal Phys. Rev. Lett. volume 115, pages 197201 (year 2015)NoStop [Hayashi and Ando(2018)]PhysRevLett.121.237202 author author Hiroki Hayashi and author Kazuya Ando, title title Spin pumping driven by magnon polarons, 10.1103/PhysRevLett.121.237202 journal journal Phys. Rev. Lett. volume 121, pages 237202 (year 2018)NoStop [Uchida et al.(2011)Uchida, Adachi, An, Ota, Toda, Hillebrands, Maekawa,and Saitoh]Uchida2011 author author Ken-ichiUchida, author HirotoAdachi, author T An, author T Ota, author M Toda, author B Hillebrands, author S Maekawa,and author E Saitoh, title title Long-range spin Seebeck effect and acoustic spin pumping, 10.1038/nmat3099 journal journal Nat. Mater. volume 10,pages 737 (year 2011)NoStop [Weiler et al.(2012)Weiler, Huebl, Goerg, Czeschka, Gross, and Goennenwein]PhysRevLett.108.176601 author author M. Weiler, author H. Huebl, author F. S. Goerg, author F. D. Czeschka, author R. Gross,and author S. T. B. Goennenwein, title title Spin pumping with coherent elastic waves,10.1103/PhysRevLett.108.176601 journal journal Phys. Rev. Lett. volume 108,pages 176601 (year 2012)NoStop [Chumak et al.(2015)Chumak, Vasyuchka, Serga, and Hillebrands]Chumak2015 author author A. V. Chumak, author V. I. Vasyuchka, author A. A. Serga,and author B. Hillebrands, title title Magnon spintronics, 10.1038/nphys3347 journal journal Nat. Phys. volume 11,pages 453–461 (year 2015)NoStop [Kajiwara et al.(2010)Kajiwara, Harii, Takahashi, Ohe, Uchida, Mizuguchi, Umezawa, Kawai, Ando, Takanashi, Maekawa, and Saitoh]nature464_262 author author Y. Kajiwara, author K. Harii, author S. Takahashi, author J. Ohe, author K. Uchida, author M. Mizuguchi, author H. Umezawa, author H. Kawai, author K. Ando, author K. Takanashi, author S. Maekawa,and author E. Saitoh,title title Transmission of electrical signals by spin-wave interconversion in a magnetic insulator, 10.1038/nature08876 journal journal Nature volume 464, pages 262–266 (year 2010)NoStop [Zhang et al.(2019)Zhang, Zhang, Okamoto, and Xiao]PhysRevLett.123.167202 author author Xiaoou Zhang, author Yinhan Zhang, author Satoshi Okamoto,andauthor Di Xiao, title title Thermal Hall Effect Induced by Magnon-Phonon Interactions, 10.1103/PhysRevLett.123.167202 journal journal Phys. Rev. Lett. volume 123, pages 167202 (year 2019)NoStop [Park et al.(2020)Park, Nagaosa, and Yang]doi:10.1021/acs.nanolett.0c00363 author author SungjoonPark, author Naoto Nagaosa,and author Bohm-Jung Yang, title title Thermal Hall Effect, Spin Nernst Effect, and Spin Density Induced by a Thermal Gradient in Collinear Ferrimagnets from Magnon-phonon Interaction, 10.1021/acs.nanolett.0c00363 journal journal Nano Lett. volume 20, pages 2741–2746 (year 2020)NoStop [Ma and Fiete(2022)]PhysRevB.105.L100402 author author Bowen Ma and author Gregory A. Fiete, title title Antiferromagnetic insulators with tunable magnon-polaron Chern numbers induced by in-plane optical phonons, 10.1103/PhysRevB.105.L100402 journal journal Phys. Rev. 
B volume 105, pages L100402 (year 2022)NoStop [Takahashi and Nagaosa(2016)]PhysRevLett.117.217205 author author Ryuji Takahashi and author Naoto Nagaosa, title title Berry curvature in magnon-phonon hybrid systems, 10.1103/PhysRevLett.117.217205 journal journal Phys. Rev. Lett. volume 117, pages 217205 (year 2016)NoStop [Go et al.(2019)Go, Kim, and Lee]PhysRevLett.123.237207 author author GyungchoonGo, author Se Kwon Kim,and author Kyung-Jin Lee, title title Topological Magnon-Phonon Hybrid Excitations in Two-Dimensional Ferromagnets with Tunable Chern Numbers, 10.1103/PhysRevLett.123.237207 journal journal Phys. Rev. Lett. volume 123, pages 237207 (year 2019)NoStop [Zhang et al.(2020)Zhang, Go, Lee, and Kim]PhysRevLett.124.147204 author author Shu Zhang, author Gyungchoon Go, author Kyung-Jin Lee,andauthor Se Kwon Kim, title title SU(3) Topology of Magnon-Phonon Hybridization in 2D Antiferromagnets, 10.1103/PhysRevLett.124.147204 journal journal Phys. Rev. Lett. volume 124, pages 147204 (year 2020)NoStop [Park and Yang(2019)]PhysRevB.99.174435 author author SungjoonPark and author Bohm-JungYang, title title Topological magnetoelastic excitations in noncollinear antiferromagnets, 10.1103/PhysRevB.99.174435 journal journal Phys. Rev. B volume 99, pages 174435 (year 2019)NoStop [Ogawa et al.(2015)Ogawa, Koshibae, Beekman, Nagaosa, Kubota, Kawasaki, and Tokura]Ogawa8977 author author Naoki Ogawa, author Wataru Koshibae, author Aron Jonathan Beekman, author Naoto Nagaosa, author Masashi Kubota, author Masashi Kawasaki,and author YoshinoriTokura, title title Photodrive of magnetic bubbles via magnetoelastic waves, 10.1073/pnas.1504064112 journal journal Proc. Natl. Acad. Sci. USA volume 112, pages 8977 (year 2015)NoStop [Holanda et al.(2018)Holanda, Maior, Azevedo, andRezende]holanda2018detecting author author J. Holanda, author D. S. Maior, author A. Azevedo,andauthor S. M. Rezende,title title Detecting the phonon spin in magnon-phonon conversion experiments, 10.1038/s41567-018-0079-y journal journal Nat. Phys. volume 14, pages 500 (year 2018)NoStop [Liu et al.(2021)Liu, Granados del Águila, Bhowmick, Gan, Thu Ha Do, Prosnikov, Sedmidubský, Sofer, Christianen, Sengupta, and Xiong]PhysRevLett.127.097401 author author Sheng Liu, author Andrés Granados del Águila, author DhimanBhowmick, author Chee KwanGan, author T. Thu Ha Do, author M. A. Prosnikov, author David Sedmidubský, author Zdenek Sofer, author Peter C. M. Christianen, author Pinaki Sengupta,and author Qihua Xiong, title title Direct observation of magnon-phonon strong coupling in two-dimensional antiferromagnet at high magnetic fields,10.1103/PhysRevLett.127.097401 journal journal Phys. Rev. Lett. volume 127,pages 097401 (year 2021)NoStop [Kikkawa et al.(2016)Kikkawa, Shen, Flebus, Duine, Uchida, Qiu, Bauer,and Saitoh]PhysRevLett.117.207203 author author Takashi Kikkawa, author Ka Shen, author Benedetta Flebus, author Rembert A. Duine, author Ken-ichi Uchida, author Zhiyong Qiu, author Gerrit E. W. Bauer,and author Eiji Saitoh, title title Magnon Polarons in the Spin Seebeck Effect,10.1103/PhysRevLett.117.207203 journal journal Phys. Rev. Lett. volume 117,pages 207203 (year 2016)NoStop [Li et al.(2020)Li, Simensen, Reitz, Sun, Yuan, Li, Tserkovnyak, Brataas, and Shi]PhysRevLett.125.217201 author author Junxue Li, author Haakon T. 
Simensen, author Derek Reitz, author Qiyang Sun, author Wei Yuan, author Chen Li, author Yaroslav Tserkovnyak, author Arne Brataas,and author Jing Shi, title title Observation of Magnon Polarons in a Uniaxial Antiferromagnetic Insulator, 10.1103/PhysRevLett.125.217201 journal journal Phys. Rev. Lett. volume 125, pages 217201 (year 2020)NoStop [Cornelissen et al.(2017)Cornelissen, Oyanagi, Kikkawa, Qiu, Kuschel, Bauer, van Wees, and Saitoh]PhysRevB.96.104441 author author L. J. Cornelissen, author K. Oyanagi, author T. Kikkawa, author Z. Qiu, author T. Kuschel, author G. E. W. Bauer, author B. J.van Wees,and author E. Saitoh, title title Nonlocal magnon-polaron transport in yttrium iron garnet, 10.1103/PhysRevB.96.104441 journal journal Phys. Rev. B volume 96, pages 104441 (year 2017)NoStop [Petit et al.(2007)Petit, Moussa, Hennion, Pailhès, Pinsard-Gaudart, and Ivanov]PhysRevLett.99.266604 author author S. Petit, author F. Moussa, author M. Hennion, author S. Pailhès, author L. Pinsard-Gaudart,and author A. Ivanov, title title Spin Phonon Coupling in Hexagonal Multiferroic YMnO_3, 10.1103/PhysRevLett.99.266604 journal journal Phys. Rev. Lett. volume 99, pages 266604 (year 2007)NoStop [Oh et al.(2016)Oh, Le, Nahm, Sim, Jeong, Perring, Woo, Nakajima, Ohira-Kawamura, Yamani, , Yoshida, Eisaki, Cheong, Chernyshev, and Park]oh2016spontaneous author author Joosung Oh, author Manh Duc Le, author Ho-Hyun Nahm, author Hasung Sim, author Jaehong Jeong, author TG Perring, author Hyungje Woo, author KenjiNakajima, author SeikoOhira-Kawamura, author ZahraYamani, , author Y. Yoshida, author H. Eisaki, author S.-W. Cheong, author A. L. Chernyshev,and author Je-Geun Park, title title Spontaneous decays of magneto-elastic excitations in non-collinear antiferromagnet (Y, Lu)MnO_3,10.1038/ncomms13146 journal journal Nat. Commun. volume 7, pages 13146 (year 2016)NoStop [Sukhanov et al.(2019)Sukhanov, Pavlovskii, Bourges, Walker, Manna, Felser, andInosov]PhysRevB.99.214445 author author A. S. Sukhanov, author M. S. Pavlovskii, author Ph. Bourges, author H. C. Walker, author K. Manna, author C. Felser,and author D. S. Inosov, title title Magnon-polaron excitations in the noncollinear antiferromagnet Mn_3Ge, 10.1103/PhysRevB.99.214445 journal journal Phys. Rev. B volume 99, pages 214445 (year 2019)NoStop [Man et al.(2017)Man, Shi, Xu, Xu, Chen, Sullivan, Zhou, Xia, Shi, and Dai]PhysRevB.96.100406 author author Haoran Man, author Zhong Shi, author Guangyong Xu, author Yadong Xu, author Xi Chen, author Sean Sullivan, author JianshiZhou, author Ke Xia, author Jing Shi,andauthor Pengcheng Dai,title title Direct observation of magnon-phonon coupling in yttrium iron garnet, 10.1103/PhysRevB.96.100406 journal journal Phys. Rev. B volume 96, pages 100406 (year 2017)NoStop [Bao et al.(2020)Bao, Cai, Si, Wang, Wang, Shangguan, Ma, Dong, Kajimoto, Ikeuchi, Yu, Sun, Li, and Wen]PhysRevB.101.214419 author author Song Bao, author Zhengwei Cai, author Wenda Si, author Wei Wang, author Xiaomeng Wang, author Yanyan Shangguan, author Zhen Ma, author Zhao-Yang Dong, author Ryoichi Kajimoto, author Kazuhiko Ikeuchi, author Shun-Li Yu, author JianSun, author Jian-XinLi,and author JinshengWen, title title Evidence for magnon-phonon coupling in the topological magnet Cu_3TeO_6, 10.1103/PhysRevB.101.214419 journal journal Phys. Rev. 
B volume 101, pages 214419 (year 2020)NoStop [McCarroll et al.(1957)McCarroll, Katz, and Ward]McCarroll1957 author author William H.McCarroll, author LewisKatz,and author RolandWard, title title Some Ternary Oxides of Tetravalent Molybdenum, 10.1021/ja01577a021 journal journal J. Am. Chem. Soc. volume 79, pages 5410–5414 (year 1957)NoStop [Wang et al.(2015)Wang, Pascut, Gao, Tyson, Haule, Kiryukhin, and Cheong]Wang2015 author author Yazhong Wang, author Gheorghe L. Pascut, author Bin Gao, author Trevor A. Tyson, author Kristjan Haule, author Valery Kiryukhin,and author Sang-Wook Cheong, title title Unveiling hidden ferrimagnetism and giant magnetoelectricity in polar magnet Fe_2Mo_3O_8, 10.1038/srep12268 journal journal Sci. Rep.volume 5, pages 12268 (year 2015)NoStop [Kurumaji et al.(2015)Kurumaji, Ishiwata, and Tokura]PhysRevX.5.031034 author author T. Kurumaji, author S. Ishiwata,and author Y. Tokura,title title Doping-Tunable Ferrimagnetic Phase with Large Linear Magnetoelectric Effect in a Polar Magnet Fe_2Mo_3O_8, 10.1103/PhysRevX.5.031034 journal journal Phys. Rev. X volume 5, pages 031034 (year 2015)NoStop [Kurumaji et al.(2017a)Kurumaji, Takahashi, Fujioka, Masuda, Shishikura, Ishiwata, and Tokura]PhysRevB.95.020405 author author T. Kurumaji, author Y. Takahashi, author J. Fujioka, author R. Masuda, author H. Shishikura, author S. Ishiwata,and author Y. Tokura, title title Electromagnon resonance in a collinear spin state of the polar antiferromagnet Fe_2Mo_3O_8, 10.1103/PhysRevB.95.020405 journal journal Phys. Rev. B volume 95, pages 020405 (year 2017a)NoStop [Csizi et al.(2020)Csizi, Reschke, Strini ćć, Prodan, Tsurkan, Kézsmárki, and Deisenhofer]PhysRevB.102.174407 author author B. Csizi, author S. Reschke, author A. Strini ćć, author L. Prodan, author V. Tsurkan, author I. Kézsmárki,and author J. Deisenhofer, title title Magnetic and vibronic terahertz excitations in Zn-doped Fe_2Mo_3O_8, 10.1103/PhysRevB.102.174407 journal journal Phys. Rev. B volume 102, pages 174407 (year 2020)NoStop [Kurumaji et al.(2017b)Kurumaji, Takahashi, Fujioka, Masuda, Shishikura, Ishiwata, and Tokura]PhysRevLett.119.077206 author author T. Kurumaji, author Y. Takahashi, author J. Fujioka, author R. Masuda, author H. Shishikura, author S. Ishiwata,and author Y. Tokura, title title Optical Magnetoelectric Resonance in a Polar Magnet (Fe,Zn)_2Mo_3O_8 with Axion-Type Coupling, 10.1103/PhysRevLett.119.077206 journal journal Phys. Rev. Lett. volume 119, pages 077206 (year 2017b)NoStop [Ideue et al.(2017)Ideue, Kurumaji, Ishiwata, and Tokura]Ideue2017 author author T. Ideue, author T. Kurumaji, author S. Ishiwata,andauthor Y. Tokura, title title Giant thermal Hall effect in multiferroics, 10.1038/nmat4905 journal journal Nat. Mater. volume 16,pages 797 (year 2017)NoStop [Bertrand and Kerner-Czeskleba(1975)]bertrand1975etude author author D. Bertrand and author H. Kerner-Czeskleba, title title Étude structurale et magnétique de molybdates d'éléments de transition, 10.1051/jphys:01975003605037900 journal journal J. Phys. volume 36, pages 379–390 (year 1975)NoStop [Varret et al.(1972)Varret, Czeskleba, Hartmann-Boutron, andImbert]varret1972etude author author F. Varret, author H. Czeskleba, author F. Hartmann-Boutron, and author P. Imbert,title title Étude par effet Mössbauer de l'ion Fe^2+ en symétrie trigonale dans les composés du type (Fe, M)_2Mo_3O_8 (M= Mg, Zn, Mn, Co, Ni) et propriétés magnétiques de (Fe, Zn)_2Mo_3O_8, 10.1051/jphys:01972003305-6054900 journal journal J. Phys. 
volume 33, pages 549–564 (year 1972)NoStop [Reschke et al.(2020)Reschke, Tsirlin, Khan, Prodan, Tsurkan, Kézsmárki, andDeisenhofer]PhysRevB.102.094307 author author S. Reschke, author A. A. Tsirlin, author N. Khan, author L. Prodan, author V. Tsurkan, author I. Kézsmárki,and author J. Deisenhofer, title title Structure, phonons, and orbital degrees of freedom in Fe_2Mo_3O_8, 10.1103/PhysRevB.102.094307 journal journal Phys. Rev. B volume 102, pages 094307 (year 2020)NoStop [Collins(1989)]collins1989magnetic author author Malcolm FCollins, @nooptitle Magnetic critical scattering (publisher Oxford University Press,year 1989)NoStop [Gao et al.(2023)Gao, Chen, Wu, Flynn, Duan, Chen, Huang, Liebman, Li, Ye, Stone, Podlesnyak, Abernathy, Adroja, Duc Le, Huang, Nevidomskyy, Morosan, Balents, and Dai]Gao2023 author author Bin Gao, author Tong Chen, author Xiao-Chuan Wu, author Michael Flynn, author Chunruo Duan, author Lebing Chen, author Chien-Lung Huang, author Jesse Liebman, author Shuyi Li, author Feng Ye, author Matthew B.Stone, author AndreyPodlesnyak, author Douglas L.Abernathy, author Devashibhai T.Adroja, author Manh Duc Le, author Qingzhen Huang, author Andriy H. Nevidomskyy, author Emilia Morosan, author Leon Balents,and author Pengcheng Dai,title title Diffusive excitonic bands from frustrated triangular sublattice in a singlet-ground-state system,10.1038/s41467-023-37669-5 journal journal Nat. Commun. volume 14, pages 2051 (year 2023)NoStop [Reschke et al.(2022)Reschke, Farkas, Strinić, Ghara, Guratinder, Zaharko, Prodan, Tsurkan, Szaller, Bordács, Deisenhofer, and Kézsmárki]Reschke2022 author author S. Reschke, author D. G. Farkas, author A. Strinić, author S. Ghara, author K. Guratinder, author O. Zaharko, author L. Prodan, author V. Tsurkan, author D. Szaller, author S. Bordács, author J. Deisenhofer,and author I. Kézsmárki, title title Confirming the trilinear form of the optical magnetoelectric effect in the polar honeycomb antiferromagnet Co_2Mo_3O_8, 10.1038/s41535-021-00417-3 journal journal npj Quantum Mater. volume 7, pages 1 (year 2022)NoStop [Soda et al.(2014)Soda, Matsumoto, Månsson, Ohira-Kawamura, Nakajima, Shiina, andMasuda]PhysRevLett.112.127205 author author M. Soda, author M. Matsumoto, author M. Månsson, author S. Ohira-Kawamura, author K. Nakajima, author R. Shiina,and author T. Masuda, title title Spin-Nematic Interaction in the Multiferroic Compound Ba_2CoGe_2O_7, 10.1103/PhysRevLett.112.127205 journal journal Phys. Rev. Lett. volume 112, pages 127205 (year 2014)NoStop [Zhang et al.(2011)Zhang, Richard, Qian, Xu, Dai, and Ding]doi:10.1063/1.3585113 author author P. Zhang, author P. Richard, author T. Qian, author Y.-M. Xu, author X. Dai,and author H. Ding, title title A precise method for visualizing dispersive features in image plots,10.1063/1.3585113 journal journal Rev. Sci. Instrum. volume 82, pages 043712 (year 2011)NoStop [Chisnell et al.(2015)Chisnell, Helton, Freedman, Singh, Bewley, Nocera, and Lee]PhysRevLett.115.147201 author author R. Chisnell, author J. S. Helton, author D. E. Freedman, author D. K. Singh, author R. I. Bewley, author D. G. Nocera,and author Y. S. Lee, title title Topological Magnon Bands in a Kagome Lattice Ferromagnet, 10.1103/PhysRevLett.115.147201 journal journal Phys. Rev. Lett. 
volume 115, pages 147201 (year 2015)NoStop [Luo et al.(2023)Luo, Li, Ye, Xu, Yan, Zhang, Ye, Chen, Hu, Teng, Smith, Yakobson, Dai, Nevidomskyy, He, and Zhu]doi:10.1021/acs.nanolett.3c00351 author author Jiaming Luo, author Shuyi Li, author Zhipeng Ye, author Rui Xu, author Han Yan, author Junjie Zhang, author GaihuaYe, author Lebing Chen, author Ding Hu, author Xiaokun Teng, author William A. Smith, author Boris I. Yakobson, author Pengcheng Dai, author Andriy H. Nevidomskyy, author Rui He,and author Hanyu Zhu, title title Evidence for Topological Magnon-Phonon Hybridization in a 2D Antiferromagnet down to the Monolayer Limit, 10.1021/acs.nanolett.3c00351 journal journal Nano Lett. volume 23, pages 2023–2030 (year 2023)NoStop [Strobel and Le Page(1983)]STROBEL1983329 author author Pierre Strobel and author Yvon Le Page, title title Growth and morphology of single crystals of hexagonal molybdates(IV) M_2Mo_3O_8 (M = Mn, Fe, Co, Ni), https://doi.org/10.1016/0022-0248(83)90370-6 journal journal J. Cryst. Growth volume 61, pages 329–338 (year 1983)NoStop [Strobel et al.(1982)Strobel, Le Page, and McAlister]STROBEL1982242 author author P. Strobel, author Y. Le Page,and author S.P. McAlister,title title Growth and physical properties of single crystals of Fe^ II_2Mo^ IV_3O_8, https://doi.org/10.1016/0022-4596(82)90003-2 journal journal J. Solid State Chem. volume 42,pages 242–250 (year 1982)NoStop [Kajimoto et al.(2011)Kajimoto, Nakamura, Inamura, Mizuno, Nakajima, Ohira-Kawamura, Yokoo, Nakatani, Maruyama, Soyama, Shibata, Suzuya, Sato, Aizawa, Arai, Wakimoto, Ishikado, Shamoto, Fujita, Hiraka, Ohoyama, Yamada, and Lee]doi:10.1143/JPSJS.80SB.SB025 author author Ryoichi Kajimoto, author Mitsutaka Nakamura, author Yasuhiro Inamura, author Fumio Mizuno, author Kenji Nakajima, author Seiko Ohira-Kawamura, author Tetsuya Yokoo, author Takeshi Nakatani, author Ryuji Maruyama, author Kazuhiko Soyama, author Kaoru Shibata, author Kentaro Suzuya, author Setsuo Sato, author Kazuya Aizawa, author Masatoshi Arai, author Shuichi Wakimoto, author Motoyuki Ishikado, author Shin-ichi Shamoto, author Masaki Fujita, author Haruhiro Hiraka, author Kenji Ohoyama, author Kazuyoshi Yamada,and author Chul-Ho Lee, title title The Fermi Chopper Spectrometer 4SEASONS at J-PARC,10.1143/JPSJS.80SB.SB025 journal journal J. Phys. Soc. Jpn. volume 80, pages SB025 (year 2011)NoStop [Stuhr et al.(2017)Stuhr, Roessli, Gvasaliya, Rønnow, Filges, Graf, Bollhalder, Hohl, Bürge, Schild, Holitzner, C., Keller, and Mühlebach]STUHR201716 author author U. Stuhr, author B. Roessli, author S. Gvasaliya, author H.M. Rønnow, author U. Filges, author D. Graf, author A. Bollhalder, author D. Hohl, author R. Bürge, author M. Schild, author L. Holitzner, author Kaegi C., author P. Keller,and author T. Mühlebach, title title The thermal triple-axis-spectrometer EIGER at the continuous spallation source SINQ, https://doi.org/10.1016/j.nima.2017.02.003 journal journal Nucl. Instrum. Methods Phys. Res., Sect. A volume 853, pages 16–19 (year 2017)NoStop [Nakamura et al.(2009)Nakamura, Kajimoto, Inamura, Mizuno, Fujita, Yokoo, and Arai]doi:10.1143/JPSJ.78.093002 author author MitsutakaNakamura, author RyoichiKajimoto, author YasuhiroInamura, author FumioMizuno, author MasakiFujita, author TetsuyaYokoo,and author MasatoshiArai, title title First demonstration of novel method for inelastic neutron scattering measurement utilizing multiple incident energies, 10.1143/JPSJ.78.093002 journal journal J. Phys. Soc. Jpn. 
volume 78, pages 093002 (year 2009)NoStop [Inamura et al.(2013)Inamura, Nakatani, Suzuki, andOtomo]inamura2013development author author YasuhiroInamura, author TakeshiNakatani, author JiroSuzuki,and author ToshiyaOtomo, title title Development status of software “Utsusemi" for chopper spectrometers at MLF, J-PARC, 10.7566/JPSJS.82SA.SA031 journal journal J. Phys. Soc. Jpn. volume 82, pages SA031 (year 2013)NoStop [Ewings et al.(2016)Ewings, Buts, Le, van Duijn, Bustinduy, and Perring]EWINGS2016132 author author R.A. Ewings, author A. Buts, author M.D. Le, author J. van Duijn, author I. Bustinduy,and author T.G. Perring, title title Horace: Software for the analysis of data from single crystal spectroscopy experiments at time-of-flight neutron instruments, https://doi.org/10.1016/j.nima.2016.07.036 journal journal Nucl. Instrum. Methods Phys. Res., Sect. A volume 834, pages 132–142 (year 2016)NoStop [Moriya(1960)]PhysRev.120.91 author author Tôru Moriya, title title Anisotropic Superexchange Interaction and Weak Ferromagnetism, 10.1103/PhysRev.120.91 journal journal Phys. Rev. volume 120, pages 91–98 (year 1960)NoStop [Chen et al.(2022)Chen, Mao, Chung, Stone, Kolesnikov, Wang, Murai, Gao, Delaire, and Dai]Chen2022 author author Lebing Chen, author Chengjie Mao, author Jae-Ho Chung, author Matthew B. Stone, author Alexander I. Kolesnikov, author Xiaoping Wang, author Naoki Murai, author Bin Gao, author Olivier Delaire,and author Pengcheng Dai, title title Anisotropic magnon damping by zero-temperature quantum fluctuations in ferromagnetic CrGeTe_3, 10.1038/s41467-022-31612-w journal journal Nat. Commun. volume 13, pages 4037 (year 2022)NoStop [Fukui et al.(2005)Fukui, Hatsugai, and Suzuki]Fukui_JPSJ2005 author author TakahiroFukui, author YasuhiroHatsugai,and author HiroshiSuzuki, title title Chern Numbers in Discretized Brillouin Zone: Efficient Method of Computing (Spin) Hall Conductances, 10.1143/JPSJ.74.1674 journal journal J. Phys. Soc. Jpn. volume 74, pages 1674–1677 (year 2005)NoStop MethodsSingle-crystal growth and characterisations. High-quality single crystals of were grown by the chemical-vapor-transport method<cit.>. The well mixed raw powder materials of Fe_2O_3, MoO_2 and Fe in a stoichiometric molar ratio of 2:9:2 were sealed in an evacuated quartz tube with TeCl_4 as the transport agent. The tube was placed into a two-zone horizontal tube furnace with the hot and cold end set at 980^∘C and 845^∘C, respectively. These temperatures were maintained for 10 days for the crystal growth, after which they were cooled naturally in the furnace to room temperature. The magnetisation was measured with a 15.1-mg single crystal using the vibrating sample magnetometer option integrated in a Physical Property Measurement System (PPMS-9T) from Quantum Design.Neutron scattering experiments. Our neutron scattering measurements were performed on 4SEASONS, a time-of-flight spectrometer located at the MLF of J-PARC in Japan<cit.> and EIGER, a thermal-neutron triple-axis spectrometer located at the SINQ of PSI in Switzerland<cit.>. The sample array consisted of ∼150 pieces of single crystals weighing about 3.39 g in total. They were coaligned using a backscattering Laue X-ray diffractometer and glued on aluminum plates using a trademarked fluoropolymer CYTOP-M. These plates were assembled together on an aluminum holder, which was then mounted into a closed-cycle refrigerator for measurements, in a manner that the (H, 0, L) plane was the horizontal plane, as shown in Supplementary Fig. 1a. 
For the measurements on 4SEASONS, we chose a primary incident energy E_ i=30.04 meV and a Fermi chopper frequency of 250 Hz, with an energy resolution of 1.56 meV at the elastic line. Since 4SEASONS was operated in a multiple-E_ i mode<cit.>, it had other E_ is of 11.93 and 17.95 meV, with respective energy resolutions of 0.56 and 0.84 meV at the elastic line. Note that on direct geometry time-of-flight spectrometers such as 4SEASONS, the energy resolution is improved as the energy transfer increases. We set the angle of the incident neutron beam direction parallel to c axis to be zero. Scattering data were collected by rotating the sample about [-120] direction from 60^∘ to 180^∘ in a step of 1^∘ or 2^∘ for 6 and 100 K, respectively. We counted 20 minutes for each step. In this setup, we found the data with E_ i∼18 meV were overlapped with and contaminated by the elastic line from the next lower E_ i in the region around 15.5 meV. To eliminate this, we used a similar setup as previous measurements, but suppressed the incident neutrons with E_ i∼12 meV and lower by the disk chopper in the second measurements on 4SEASONS. In the work, the results with E_ i∼12 meV and E_ i∼30 meV were all based on the first measurement, while those with E_ i∼18 meV at 100 and 6 K were based on the first and second measurements, respectively. These data were reduced and analyzed using the software suites Utsusemi<cit.> and Horace<cit.>. For the measurements on EIGER, data were collected in the (H, 0, L) scattering plane with a horizontal-focusing analyzer. We fixed the final wavevector k_ f=2.662 Å^-1 corresponding to an energy of 14.7 meV. We used a hexagonal structure with the refined lattice parameters a=b=5.773(3) Å and c=10.054(3) Å<cit.>. The wavevector Q was expressed as (H, K, L) in the reciprocal lattice unit (r.l.u.) of (a^*, b^*, c^*)=(4π/√(3)a, 4π/√(3)b, 2π/c). In this paper, the measured neutron scattering intensities S( Q, E) from 4SEASONS were corrected by the magnetic form factor of Fe^2+ ions and divided by the Bose factor via χ”( Q, E)=|f( Q)|^-2(1- e^-E/k_ BT)S( Q, E), where k_ B was the Boltzmann constant.Theoretical calculations of the bands and their topology. We start the calculations with an effective two-dimensional model that contains both the magnon (H_ m) and phonon (H_ p) terms. The symmetry allowed H_ m on a bipartite honeycomb layer with two inequivalent Fe sites in Fig. <ref>c of the main text can be written as:H_ m=∑_i jJ_ij S_i· S_j-∑_iΔ_i (S^z_i)^2+∑_⟨ ij⟩ D_ij·( S_i× S_j).Here, J_ij is the Heisenberg exchange interaction between the spins S_i and S_j, which is considered up to the third-nearest neighbour (TNN) to fit the magnon spectra shown in Fig. <ref> of the main text. It is noted that the nearest-neighbour (NN) exchange interaction J_1 and TNN exchange interaction J_3 are homogeneous over the whole lattice while the next-nearest-neighbour (NNN) exchange interactions can take different values for the bonds between the Fe_ t sites (denoted by J_2^ t) and those between the Fe_ o sites (denoted by J_2^ o). Δ_i is the single-ion anisotropy constant of the local spin at the site i, which can be distinct for Fe_ t (denoted by Δ^ t) and Fe_ o (denoted by Δ^ o) sites as well. D_ij is the vector of the DM interaction between the NN sites i and j. This term is present because in this case the midpoint between any NN Fe sites is no longer an inversion symmetry centre<cit.>. 
Due to the preservation of the mirror symmetry with the mirror plane perpendicular to the honeycomb layer passing through any two NN Fe sites, the NN DM interaction here only has the in-plane component<cit.>, as shown in Fig. <ref>c of the main text. The phonon part H_ p considering only Fe^2+ ions can be expressed as follows in the harmonic approximation:H_ p=∑_i P^2_i/2M+1/2∑_i, j u_i^T K_ij( R_i^0- R_j^0) u_j,where M is the ion mass of Fe^2+, P_i is the momentum of the ion at the site i, u_i is the displacement of the ion i from its equilibrium position R_i^0, and K_ij is the dynamical matrix along the bond ij.Magnon-phonon coupling can naturally arise in Eq. <ref> when lattice vibrations are taken into account, because the exchange interaction J_ij and the DM vector D_ij depend on the positions of the ions i and j. In a collinear antiferromagnet with perpendicular easy-axis anisotropy, to the lowest order of u_i and δ S_i= S_i-⟨ S_i⟩, where ⟨ S_i⟩ is the ground state expectation value of S_i, the isotropic Heisenberg interaction leads to a cubic magnon-phonon coupling term<cit.>. This term alone is only capable of causing the softening and broadening of spin waves<cit.>. On the other hand, the anisotropic DM interaction gives rise to a quadratic magnon-phonon coupling term<cit.>, that can create a gap between the magnon and phonon bands, leading to the formation of magnon polarons. It is also worth noting that magnetoelastic coupling, which arises from single-ion magnetostriction and is generally present in magnets with crystalline anisotropy<cit.>, primarily couple out-of-plane phonons with in-plane magnons<cit.>. However, this mechanism contradicts the experimental observations where the main hybridizations occur between magnons and in-plane polarised phonons (Fig. <ref>a, b). Additionally, the calculated acoustic phonons with out-of-plane polarisation have lower energy than the magnons, suggesting the negligible hybridization through magnetoelastic coupling (Supplementary Fig. 4). Hence, we consider the DM-interaction-induced magnon-phonon coupling as the dominant term in , and for simplicity, we neglect the effects of isotropic Heisenberg interactions and magnetoelastic coupling. The final expression for this coupling H_ mp is given by,H_ mp=DS/| R_ij^0|∑_⟨ ij⟩( u_i- u_j)(Î_3-R̂_ij^0R̂_ij^0)(δ S_i+δ S_j),where D=| D_ij| is the magnitude of the DM interaction, | R_ij^0| is the bond length, S=2 is the total electron spin of the Fe^2+ ion, Î_3 is a 3×3 identity matrix, R̂_ij^0 is the unit vector along bond ij, and R̂_ij^0R̂_ij^0 is the Kronecker product between two R̂_ij^0s. Note that for a collinear antiferromagnet with moments aligned along c axis, spins precess about the c axis. In this case, only phonons involving in-plane displacements can couple to magnons according to Eq. <ref>, because δ S_i only has the in-plane component. This can be significantly different from the magnetoelastic coupling mentioned earlier<cit.>.In general, the dynamical matrix K_ij at the bond ij has 6 (3) independent parameters for a three- (two-) dimensional system. For our two-dimensional effective model, it can be assumed that K_ij only has the longitudinal components along the bond ij to reasonably reduce the number of tuning parameters. Then, the elastic potential energy of the phonons can be expressed as following by the Hooke's law <cit.>,1/2∑_i<jk_ij[R̂_ij^0·( u_i- u_j)]^2.Here, k_ij is the longitudinal spring constant of the bond ij. 
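As a minimal illustration of how the longitudinal-spring potential above determines the in-plane phonon branches, the following sketch builds and diagonalizes the harmonic dynamical matrix of a single honeycomb layer keeping only one nearest-neighbour spring constant (the actual fit, described next, retains several neighbours with distinct constants); the spring constant, ion mass and bond length below are placeholders rather than fitted values.

import numpy as np

# Nearest-neighbour bond vectors from one sublattice to the other (bond length d).
d = 1.0
deltas = d * np.array([[0.0, 1.0],
                       [np.sqrt(3) / 2, -0.5],
                       [-np.sqrt(3) / 2, -0.5]])
k1 = 1.0   # longitudinal spring constant (placeholder)
M = 1.0    # ion mass (placeholder)

def dynamical_matrix(kvec):
    # 4x4 in-plane dynamical matrix: two sublattices times two Cartesian components.
    on_site = np.zeros((2, 2), dtype=complex)
    off_site = np.zeros((2, 2), dtype=complex)
    for delta in deltas:
        n = delta / np.linalg.norm(delta)
        proj = np.outer(n, n)                      # longitudinal projector along the bond
        on_site += k1 * proj
        off_site -= k1 * proj * np.exp(1j * kvec @ delta)
    return np.block([[on_site, off_site],
                     [off_site.conj().T, on_site]]) / M

def frequencies(kvec):
    w2 = np.linalg.eigvalsh(dynamical_matrix(kvec))
    return np.sqrt(np.clip(w2, 0.0, None))

# Two acoustic and two optical in-plane branches along a representative direction.
for x in np.linspace(0.0, 1.0, 5):
    print(np.round(frequencies(x * np.array([2.0 * np.pi / (np.sqrt(3) * d), 0.0])), 3))

At the zone centre the two acoustic branches come out at zero frequency, as they must; supplementing such a phonon block with the magnon term and the DM-induced coupling above gives the coupled spectra discussed in the text.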
To fit the phonon dispersions of , it is considered up to the fourth nearest neighbour (FNN), with the NN k_1, NNN k_2^ t and k_2^ o, TNN k_3, and FNN k_4.By using the standard Holstein-Primakoff transformation, local spins can be mapped to canonical bosons to the leading orders: S^+_i=√(2S)a_i and S^z_i=S-a^†_ia_i when i belongs to the Fe_ t sites, or S^+_i=√(2S)b_i^† and S^z_i=b^†_ib_i-S when i belongs to the Fe_ o sites. Then, the Hamiltonian of the coupled system can be expressed in a generalized Bogoliubov-de Gennes (BdG) form as H=1/2 X_ k^†Ĥ( k) X_ k, where X_ k=(a_ k, b_ k, a^†_- k, b^†_- k,u_ k,P_ k)^T. Here, u_ k(P_ k) is the four-vector for the two-dimensional displacements (momenta) of the Fe_ t and Fe_ o sites. It is noted that their out-of-plane displacements and momenta are omitted because they are decoupled from the other degrees of freedom in our simple effective model. In this representation, the commutation relation of the X_ k vector isg ≡[ X_ k^†,X_ k] = [ I_2;-I_2; -iI_4;iI_4; ].The eigenvalue and eigenvectors of the coupled system satisfiesgĤ( k) |ϕ_n k⟩ = σ_nn E_n k|ϕ_n k⟩, ⟨ϕ_n k|g|ϕ_m k⟩ = σ_nm,where σ = σ_z ⊗ I_6 acts on the particle-hole space. In the BdG formulation of the Hamiltonian of the coupled system, the second half of the eigenstates are redundant to the first due to the artificial particle-hole symmetry. In Fig. <ref> of the main text, only the lowest four independent energy bands are displayed to make comparison with the experimental results.The Berry curvature Ω_n k of the eigenvector |ϕ_n k⟩ can be defined asΩ_n k = i ⟨∇_ kϕ_n k|g^-1×|∇_ kϕ_n k⟩.Then, in the spirit of the momentum space discretization method <cit.>, the gauge-invariant Chern number and Berry curvatures can be computed from Eq. <ref>.We note that our effective two-dimensional model used to obtain the results in Fig. <ref> of the main text is a simplified model, but since the magnons of this system are perfectly two dimensional, and the out-of-plane phonons do not couple to the magnons at the leading order (Eq. <ref>), it is able to capture the essence of the magnon polarons (Figs. <ref> and <ref> in the main text). Nevertheless, to account for the three dimensionality of the phonons, we have also developed a three-dimensional model (Supplementary Fig. 4). Data availabilityThe data supporting the findings of this study are available from the corresponding author J.W. upon reasonable request.Code availabilityThe codes used for the theoretical calculations of magnon-polaron bands in this study are available from the corresponding author J.W. upon reasonable request. AcknowledgementsWe would like to thank Qi Zhang, Yuan Wan, Peng Zhang and Xiangang Wan for stimulating discussions. We also thank Hongling Cai for allowing us to use their X-ray diffraction machine. The work was supported by National Key Projects for Research and Development of China with Grant No. 2021YFA1400400 (J.-X.L. and J.W.), National Natural Science Foundation of China with Grant Nos. 12225407, 12074174 (J.W.), 12074175 (S.-L.Y.), 11904170 (Z.-Y.D.),and 12004191 (W.W.), Natural Science Foundation of Jiangsu province with Grant Nos. BK20190436 (Z.-Y.D.) and BK20200738 (W.W.), China Postdoctoral Science Foundation with Grant Nos. 2022M711569 and 2022T150315, Jiangsu Province Excellent Postdoctoral Program with Grant No. 20220ZB5 (S.B.), and Fundamental Research Funds for the Central Universities. We acknowledge the neutron beam time from J-PARC with Proposal Nos. 
2020B0002 and 2021I0001, and from EIGER with Proposal No. 20200062. Author contributionsJ.W. conceived the project. S.B. prepared the samples with assistance from Y.S., Z.H., J.L., X.Z. and B.Z. S.B., Y.S., R.K., M.N., T.F. and Z.H. carried out the neutron scattering experiments. S.B. and J.W. analysed the experimental data. Z.-L.G., Z.-Y.D., W.W., S.-L.Y. and J.-X.L. performed the theoretical analyses. J.W., S.B., Z.-L.G. and J.-X.L. wrote the paper with inputs from all co-authors. Competing InterestsThe authors declare no competing financial interests. Additional informationCorrespondence and request for materials should be addressed to J.W. ([email protected]), J.-X.L. ([email protected]) or S.-L.Y. ([email protected]).
http://arxiv.org/abs/2312.15943v1
{ "authors": [ "Song Bao", "Zhao-Long Gu", "Yanyan Shangguan", "Zhentao Huang", "Junbo Liao", "Xiaoxue Zhao", "Bo Zhang", "Zhao-Yang Dong", "Wei Wang", "Ryoichi Kajimoto", "Mitsutaka Nakamura", "Tom Fennell", "Shun-Li Yu", "Jian-Xin Li", "Jinsheng Wen" ], "categories": [ "cond-mat.str-el", "cond-mat.mtrl-sci" ], "primary_category": "cond-mat.str-el", "published": "20231226081314", "title": "Direct observation of topological magnon polarons in a multiferroic material" }
Non-Invertible Anyon Condensation and Level-Rank Dualities

Clay Córdova[[email protected]@uchicago.edu] and Diego García-Sepúlveda[[email protected]@uchicago.edu]

Kadanoff Center for Theoretical Physics & Enrico Fermi Institute, University of Chicago

We derive new dualities of topological quantum field theories in three spacetime dimensions that generalize the familiar level-rank dualities of Chern-Simons gauge theories. The key ingredient in these dualities is non-abelian anyon condensation, which is a gauging operation for topological lines with non-group-like, i.e. non-invertible, fusion rules. We find that, generically, dualities involve such non-invertible anyon condensation and that this unifies a variety of exceptional phenomena in topological field theories and their associated boundary rational conformal field theories, including conformal embeddings, and Maverick cosets (those where standard algorithms for constructing a coset model fail). We illustrate our discussion in a variety of isolated examples as well as new infinite series of dualities involving non-abelian anyon condensation including: i) a new description of the parafermion theory as (SU(N)_2× Spin(N)_-4)/𝒜_N, ii) a new presentation of a series of points on the orbifold branch of c=1 conformal field theories as (Spin(2N)_2× Spin(N)_-2× Spin(N)_-2)/ℬ_N, and iii) a new dual form of SU(2)_N as (USp(2N)_1× SO(N)_-4)/𝒞_N arising from conformal embeddings, where 𝒜_N, ℬ_N, and 𝒞_N are appropriate collections of gauged non-invertible bosons.

December 2023

§ INTRODUCTION

In this paper we derive new dualities of topological quantum field theories (TQFTs) in three spacetime dimensions. Our results generalize the celebrated level-rank dualities of Chern-Simons gauge theories, the most familiar of which are of the form:

SU(N)_K↔ U(K)_-N,-N, USp(N)_K↔ USp(K)_-N, SO(N)_K↔ SO(K)_-N.

For unitary groups these dualities have been explored in <cit.>, the dualities for SO and USp were derived in <cit.>, while those for more general orthogonal groups were discussed in <cit.>. Additionally, there are also dualities involving exceptional groups derived in <cit.>. Beyond their intrinsic conceptual interest, these dualities are important e.g. in that they provide non-trivial evidence for proposals of phase diagrams of 3D gauge theories <cit.>, and establish the existence of families of time-reversal invariant TQFTs <cit.>.

Typically, the starting point to prove these dualities is to establish an equivalence of the associated chiral algebras that appear on the edge of the TQFT equipped with suitable boundary conditions <cit.>. For connected, simple, and simply-connected gauge groups these are the familiar Kac-Moody current algebras (for an overview see e.g. <cit.>), while for other global forms of the gauge group, obtained by quotienting by central elements, they are instead extensions of the Kac-Moody algebras <cit.>.
Often, the initial step in deriving these equivalences of chiral algebras is to find a larger chiral algebra that contains as a subalgebra the two chiral algebras of interest and then study how representations of the larger chiral algebra decompose under restriction to the subalgebras. Frequently, the larger algebra is taken to be a Kac-Moody algebra at level one, with the same central charge as that of the two subalgebras combined, in which case such embeddings fall under the so-called conformal embeddings. In this technique, duality of chiral algebras is intimately related with conformal field theories (CFTs) that are derived from appropriate quotients of chiral algebras; namely, coset CFTs.A well-known example illustrates the general procedure.Starting from the embedding:SU(N)_K× SU(K)_N⊂ SU(NK)_1,we learn that the chiral algebra SU(N)_K can be presented as a coset:SU(N)_K≅SU(NK)_1/SU(K)_N.Passing to the bulk TQFT one then obtains a duality of Chern-Simons theories:SU(N)_K≅SU(NK)_1× SU(K)_-N/ℤ_K.A particularly subtle point in the above is the appearance of the quotient by ℤ_K, the common center of the gauge group. As we review in Section <ref> this quotient necessarily appears so that the boundary CFT has a unique ground state without additional topological degrees of freedom.We can also directly interpret this quotient as a gauging operation on the theory SU(NK)_1× SU(K)_-N.Specifically this theory has abelian anyons, i.e. lines with abelian fusion rules, which are bosonic and hence may condense.In the language of higher symmetry <cit.>, these are one-form global symmetries and condensing them is equivalent to gauging this one-form symmetry.As an operation on the initial TQFT this condensation operation acts as a simple algorithm <cit.>:*We remove all lines that braid non-trivially with the condensing abelian anyons.Such removed lines are often said to be confined.*We identify any remaining lines that differ by fusion with the condensing abelian anyons.This is the step of forming gauge orbits.*If a remaining line a is invariant under fusion with s condensing abelian anyons, then in the resulting theory the line a is split into s distinct lines.This three-step gauging procedure allows us to treat many foundational examples of duality amongst TQFTs.However, as originally noted in <cit.>, there are certain cosets where this procedure of gauging by the common center to isolate a CFT with a unique vacuum fails.Traditionally such cosets were often referred to as Maverick cosets and in these cases, the construction of a standard CFT proceeds in an ad hoc manner.Two infinite series of such cosets are known:SU(k)_2/Spin(k)_4, c=2(k-1)/k+2, andSpin(2N)_2/Spin(N)_2× Spin(N)_2,c=1,as well as a finite list of exceptional cases summarized in Section <ref> below.One of the main results of this paper is to provide a uniform analysis of these cosets, and their implications for duality in 3D TQFTs.As we will exhibit, a key idea unifying these cosets is the appearance of non-abelian bosonic anyons in the associated TQFTs, as first explored in <cit.>. To obtain boundary CFTs with unique vacua and no additional topological degrees of freedom we must condense such non-abelian bosons. This procedure unifies the treatment of Maverick and more familiar cosets, and in fact is even crucial for a complete understanding of the conformal embeddings behind more familiar level-rank dualities as we discuss in Section <ref>. 
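The three-step rule above is simple enough to automate when the condensing anyons are abelian. The following sketch is a self-contained illustration in the simplest abelian setting rather than one of the cosets studied in this paper: it condenses the bosonic anyon a=4 of U(1)_8, whose anyons a=0,...,7 have topological spin a^2/16 and mutual braiding phase ab/8 (in units of 2π), and reproduces the expected answer U(1)_8/ℤ_2 ≅ U(1)_2.

from fractions import Fraction

N = 4                       # U(1)_{2N}, here U(1)_8, with anyons a = 0, ..., 2N-1
anyons = list(range(2 * N))
spin = lambda a: Fraction(a * a, 4 * N) % 1        # topological spin mod 1
braid = lambda a, b: Fraction(a * b, 2 * N) % 1    # mutual braiding phase in units of 2*pi

boson = 4
assert spin(boson) == 0                            # the condensing line must be a boson
algebra = {0, boson}                               # lines generated by the boson under fusion

# Step 1: keep only lines braiding trivially with every condensed line (the rest are confined).
kept = [a for a in anyons if all(braid(a, b) == 0 for b in algebra)]

# Step 2: identify lines that differ by fusion with a condensed line (gauge orbits).
orbits = sorted({tuple(sorted({(a + b) % (2 * N) for b in algebra})) for a in kept})

# Step 3: a line fixed by s condensed lines splits into s lines (here no line is fixed).
for orbit in orbits:
    s = sum(1 for b in algebra if (orbit[0] + b) % (2 * N) == orbit[0])
    print("orbit", orbit, "spin", spin(orbit[0]), "-> splits into", s)

The two surviving orbits have spins 0 and 1/4, i.e. the anyon content of U(1)_2. For the Maverick cosets just discussed the condensing bosons are instead non-abelian, and this naive three-step rule must be generalized, which is precisely the subject of what follows.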
Non-abelian anyon condensation in 3D TQFTs can also be fruitfully described using the language of higher symmetry. Indeed, as already mentioned, abelian anyons are generators of one-form global symmetries i.e. they are line topological operators with abelian fusion rules.Anyons with more general non-abelian fusion rules are therefore interpreted as non-invertible one-form symmetries.Such generalized symmetries have recently been investigated particularly in spacetime dimension greater than two.(See e.g. <cit.> for recent reviews and lectures).Relatedly, on the boundary the bulk one-form symmetries restrict to ordinary, zero-form symmetries of the CFT.For abelian anyons, these boundary zero-form symmetries are group-like, but for non-abelian anyons, the boundary zero-form symmetries are a general fusion category. In this context, early foundational work on non-invertible symmetries and gauging was done in <cit.>, and a rigorous mathematical treatment of condensing or gauging general symmetries was carried out in <cit.>. We briefly review this formalism in Appendix <ref>.From a more physical point of view, a treatment non-invertible topological lines in 2D theories as symmetries was pioneered in <cit.> and further discussed in <cit.>.The relationship between the 3D bulk TQFT and 2D boundary CFT is a foundational example of the general paradigm of a bulk topological field theory controlling the generalized symmetry of the boundary theory <cit.>. The general idea of gauging and condensing non-invertible symmetries has been explored in <cit.> and the relationship between bulk and boundary non-invertible gauging has been utilized in <cit.>.Finally, recent discussions of gauging non-invertible symmetries in 2D CFTs, closely related to our analysis below, include in particular <cit.>.Below we often make use of the language of generalized symmetries, interchangeably using non-abelian and non-invertible, as well as gauge and condense.§.§ An Invitational Example with Fibonacci Anyons As an illustrative example to show how non-abelian anyon condensation is in fact central to many dualities of TQFTs consider the exceptionalconformal embedding <cit.>:SU(2)_1× SU(2)_3↪ (G_2)_1.In CFT, the existence of this embedding can be interpreted in two ways: *The branching functions of the coset (G_2)_1/SU(2)_1 give the characters of SU(2)_3.*The branching functions of the coset (G_2)_1/SU(2)_3 give the characters of SU(2)_1. Translating the first statement to 3D TQFTs following <cit.> gives the dualitySU(2)_3≅ (G_2)_1× SU(2)_-1.The simplicity of the TQFTs involved in this duality make it easy to verify explicitly.For instance, SU(2)_k has k+1 anyons obeying (truncated) fusion rules of SU(2) representations. Meanwhile (G_2)_1 is the theory of Fibonacci anyons, the simplest non-abelian TQFT consisting of two anyons 1 and ϕ obeying:ϕ×ϕ=1 +ϕ.Thus, for example both sides of the duality (<ref>) have four total anyons, and one may readily verify that their fusion rules and spins are identical.The previous –seemingly simple– chain of ideas raises however an immediate puzzle. Suppose instead that we make use of the second implication of the embedding (<ref>), then proceeding blindly along the same steps would have lead to the proposed duality:SU(2)_1?≅ (G_2)_1× SU(2)_-3.We note that, as in (<ref>), there is no common center of the gauge groups on the right-hand side. However, (<ref>) is obviously false, for instance the number of lines (2 vs. 8) does not match. 
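The counting quoted here is easy to make quantitative. The following sketch is only an illustrative consistency check: it uses the standard spins h_j=j(j+1)/(k+2) of SU(2)_k primaries, h=2/5 for the non-trivial Fibonacci anyon of (G_2)_1, and c((G_2)_1)=14/5, and compares the two sides of the correct duality as well as the line counting that invalidates the naive proposal.

from fractions import Fraction

def su2_spins(k):
    # Spins h_j = j(j+1)/(k+2) of SU(2)_k primaries, written via n = 2j = 0, ..., k.
    return sorted(Fraction(n * (n + 2), 4 * (k + 2)) % 1 for n in range(k + 1))

def c_su2(k):
    return Fraction(3 * k, k + 2)

# (G_2)_1 is the Fibonacci theory: anyons {1, phi} with h = 0 and 2/5, and c = 14/5.
fib_spins = [Fraction(0), Fraction(2, 5)]
c_fib = Fraction(14, 5)

# Correct duality SU(2)_3 = (G_2)_1 x SU(2)_{-1}: spins mod 1 and central charges agree.
rhs_spins = sorted((hf - h1) % 1 for hf in fib_spins for h1 in su2_spins(1))
print("SU(2)_3 spins:             ", su2_spins(3))
print("(G_2)_1 x SU(2)_{-1} spins:", rhs_spins)
print("central charges agree:", c_su2(3) == c_fib - c_su2(1))

# The naive proposal SU(2)_1 = (G_2)_1 x SU(2)_{-3} already fails by counting lines: 2 vs 8.
print("number of lines:", len(su2_spins(1)), "vs", len(fib_spins) * len(su2_spins(3)))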
The resolution of this puzzle is that while (G_2)_1× SU(2)_-3 does not have any condensable abelian anyons, there are non-abelian bosonic anyons which may condense, and doing so leads to a correct, and novel, duality.To demonstrate this we begin with the correct duality (<ref>). Reversing orientation (flipping the signs of all levels), and tensoring by (G_2)_1, we obtain(G_2)_1× SU(2)_-3≅ (G_2)_1× (G_2)_-1× SU(2)_1.We now use that (G_2)_1× (G_2)_-1 is a Drinfeld center, i.e., it is of the form G_k× G_-k.In particular for any such theory it is known that one can gauge/condense all the anyons and obtain a trivial theory <cit.>. The novelty here is that such condensation is necessarily non-abelian. Concretely then, we write𝒵(𝐅𝐢𝐛)≡ (G_2)_1× (G_2)_-1, where the notation above indicates that 𝒵(𝐅𝐢𝐛) is the Drinfeld center of the Fibonacci anyons (<ref>). Condensing 𝒵(𝐅𝐢𝐛) in (<ref>) then leads to a new duality:SU(2)_1≅(G_2)_1× SU(2)_-3/𝒵(𝐅𝐢𝐛). In summary, (G_2)_1× SU(2)_-3 has non-abelian bosonic anyons, or in a different language, it has a non-anomalous non-invertible one-form symmetry, and gauging it, one finds an equivalence with the SU(2)_1 Chern-Simons gauge theory. These non-abelian condensable bosons thus play the role of the common center of the gauge group in more familiar examples. Compare for instance with the more familiar duality (<ref>). In particular, this non-invertible one-form symmetry must be gauged to obtain the duality expected from the existence of the conformal embedding (<ref>). We study this example of non-abelian anyon condensation in more detail in Section <ref> below, and review the condensation of 𝒵(𝐅𝐢𝐛) to the trivial theory in Appendix <ref>. §.§ Summary of Selected Results Having illustrated the ubiquitous nature of non-abelian anyon condensation let us summarize several key results derived using this formalism below.Note that the central charges of the first infinite family of Maverick cosets in (<ref>)match those of the parafermion CFTs <cit.>, so it is natural to suggest that this infinite Maverick family reproduces the parafermions. The parafermions also have two standard coset descriptions given by the SU(2)_k/U(1)_2k, or (SU(k)_1× SU(k)_1)/SU(k)_2 cosets <cit.>. We therefore conjecture the infinite series of dualities:SU(k)_2× Spin(k)_-4/𝒜_k≅SU(2)_k× U(1)_-2k/ℤ_2≅SU(k)_1× SU(k)_1× SU(k)_-2/ℤ_k,for some suitable collection of condensable non-abelian anyons 𝒜_k on the left-hand side. Below in Section <ref> we verify this result explicitly for the first non-trivial case k=3.In this case the parafermion theory in question coincides with the three-state Potts model and the algebra of non-abelian anyons is generated by:𝒜_3 = (1,0) + (1,8) + (8,4),where we label SU(3) representations by their dimension and Spin(3)≅ SU(2) representations by their Dynkin index (i.e. 
their dimension is the label plus one.)In particular, the anyon (8,4) is non-abelian with fusion rule:(8,4)× (8,4)=∑_i=0^4(1,2i) + (8,2i)→ (1,0)+(1,8) + (8,4),where the first equation denotes the fusion in the full TQFT, and the arrow indicates its projection back to 𝒜_3.We also analyze the second infinite sequence of Maverick cosets in (<ref>).Since all of these have c=1 they must correspond to rational points in the moduli space of c=1 CFTs.We conjecture that these cosets correspond to the orbifold points of U(1)_2N modulo its ℤ_2 reflection symmetry, which we denote as U(1)^Orb_2N.Lifting to TQFTs leads to the proposalU(1)^Orb_2N≅Spin(2N)_2× Spin(N)_-2× Spin(N)_-2/ℬ_N.For a suitable collection of condensable non-abelian anyons ℬ_N. In particular we verify this for the first non-trivial case N=3 where:ℬ_3=(1,0,0) + (1,4,4) + (20',0,4) + (20',4,0) + (15,2,2).The fusion of the first four anyons above is abelian, but the last one is non-abelian with:(15,2,2)× (15,2,2)=∑_i,j=0^2[(1,2i,2j) + (15,2i,2j)+(20',2i,2j)]→ℬ_3,where again the first equality is the fusion in the full TQFT and the arrow indicates the restriction to ℬ_3.We note that the Chern-Simons theory with the orbifold action is equivalent to changing the gauge group from U(1) to O(2). Level-rank dualities involving O(N) were studied in <cit.> and we have the equivalence U(1)^Orb_2N≅ O(2)^0_2N,0 where the additional subscript and superscript on O(2) indicate other possible levels.Because of this equivalence (<ref>) can also be cast as a duality of orthogonal type Chern-Simons theories:O(2)^0_2N,0≅Spin(2N)_2× Spin(N)_-2× Spin(N)_-2/ℬ_N. Beyond analyzing these families of Maverick cosets we can also study many of the isolated examples in Section <ref> and derive a variety of dualities.All of these examples have c<1 and thus correspond to some (possibly non-diagonal) minimal model. For instance, the simplest of these leads to the Ising TQFT:IsingTQFT≅SU(4)_1× SU(2)_-10/𝒜,where 𝒜 = (1, 0) + (6,10) + (1,6) + (6,4),and both (1,6) and (6,4) have non-abelian fusion. Armed with our improved understanding of non-abelian anyon condensation, we also revisit the conformal embeddings generalizing the example of Section <ref> to obtain other level-rank dualities involving non-abelian anyon condensation.For instance revisiting the embedding (<ref>) of unitary groups allows us to derive:SU(Nk)_1≅SU(N)_k× SU(k)_N/𝒜_N,k,where 𝒜_N,k is a suitable collection of non-abelian anyons.For instance for N=3 and k=2 this is the non-abelian anyon:𝒜_3,2=(1,0)+ (8,2). Similarly, a less explored example arises from the conformal embeddingSO(N)_4× SU(2)_N↪ USp(2N)_1.This implies the level-rank duality:SU(2)_N≅USp(2N)_1× SO(N)_-4/𝒜_N,where for instance in the case N=3:𝒜 = (1, 0) + (14,4_1),with non-abelian fusion:(14,4_1) × (14,4_1) = (1,0) + (1,4_1) + (14, 0) + (14,4_1)→ (1,0) + (14,4_1).In general, many of the previous examples can be understood from the “principle of coset inversion,” first derived mathematically in <cit.>. We explain this in Section <ref> from a physics point of view and summarize it in mathematical form in Appendix <ref>. The principle may be summarized as follows: one may rearrange the numerator and denominator of any coset, provided we also allow for the possibility of non-abelian anyon condensation. 
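A first consistency check on these statements is that the chiral central charges work out. The sketch below is our own bookkeeping with the standard formula c(G_k)=k dim(g)/(k+h^∨); the helper functions are not taken from the main text. It verifies that c(SU(k)_2)-c(Spin(k)_4) reproduces the parafermion value 2(k-1)/(k+2), that c(Spin(2N)_2)-2c(Spin(N)_2)=1 as required for the orbifold identification, that c(USp(2N)_1)-c(SO(N)_4)=c(SU(2)_N), that c(SU(N)_k)+c(SU(k)_N)=c(SU(Nk)_1), and the Ising value 1/2 for the first isolated example.

from fractions import Fraction

def c_su(n, k):
    return Fraction(k * (n * n - 1), k + n)

def c_so(n, k):
    # dim so(n) = n(n-1)/2 and h_vee = n - 2; with the Spin(n)_k normalization this also covers n = 3, 4.
    return Fraction(k * n * (n - 1), 2 * (k + n - 2))

def c_usp(n, k):
    # usp(2n): dim = n(2n+1), h_vee = n + 1, so this is the central charge of USp(2n)_k.
    return Fraction(k * n * (2 * n + 1), k + n + 1)

for k in range(3, 9):
    assert c_su(k, 2) - c_so(k, 4) == Fraction(2 * (k - 1), k + 2)    # parafermion central charge

for N in range(3, 9):
    assert c_so(2 * N, 2) - 2 * c_so(N, 2) == 1                       # consistent with c = 1
    assert c_usp(N, 1) - c_so(N, 4) == c_su(2, N)                     # SU(2)_N from USp(2N)_1 x SO(N)_-4
    for k in range(2, 6):
        assert c_su(N, k) + c_su(k, N) == c_su(N * k, 1)              # SU(Nk)_1 from SU(N)_k x SU(k)_N

assert c_su(4, 1) - c_su(2, 10) == Fraction(1, 2)                     # the Ising case quoted above
print("all central-charge checks passed")

These checks are of course only necessary conditions; the full matching of anyon content relies on the non-invertible condensations 𝒜, ℬ, 𝒞 described above and on the coset inversion principle just stated.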
This principle holds abstractly, independently of the description of the CFT in terms of WZW models.For instance we can even revisit the well-known TQFT associated to the unitary minimal modelsM(k+3,k+2) ≅SU(2)_k× SU(1)_1× SU(2)_-k-1/ℤ_2,where M(k+3,k+2) denotes the k-th minimal model TQFT, with k=1 the Ising model. Allowing for non-abelian anyon condensation implies that in general there is also an equivalence:SU(2)_k× SU(2)_1≅M(k+3,k+2) × SU(2)_k+1/𝒜_k,which we explicitly check for k=2 in Section <ref> (for k=1 this expression may be checked by the three-step gauging rule).§ COSETS, INTERFACES, AND BULK-BOUNDARY CORRESPONDENCE In this section we review the relationship between coset CFTs and associated boundary conditions in topological Chern-Simons theories.We pay particular attention to the interplay between gapless and gapped degrees of freedom at the boundary, whose understanding is important for determining dualities of the bulk topological theories.Let us first recall the coset construction purely in the context of 2D CFT. Our starting point is a G_k WZW theory based on a group G and an integer level k. We take the group G to be compact and simply-connected. Let H be now a subgroup of G such that the Lie algebra of H embeds into the Lie algebra of G with embedding index ℓ. Then, there is an associated embedding of affine Lie algebras:H_k̃⊆ G_k , k̃=ℓ k . Coset CFTs are constructed by expanding a chiral algebra with characters χ_Λ(q) in terms of the characters of a smaller algebra with characters χ_λ(q):χ_Λ(q) = ∑_λ b_Λ^λ(q) χ_λ(q),q=e^2 π i τ,with τ the modular parameter. In our context, we assume the bigger chiral algebra is that of the G_k WZW theory, and the smaller chiral algebra is that of the H_k̃ WZW theory, with H_k̃ embedded in G_k. The quantities b_Λ^λ(q) are called branching functions. The point of the previous expansion is to notice that, since the characters χ_Λ, χ_λ are modular covariant, the branching functions inherit some form of modular covariance and can be thus thought of as the characters of a new CFT with torus partition functionZ_CosetCFT(T^2) = ∑_Λ,λ |b_Λ^λ(q)|^2.This is the so-called GKO coset construction <cit.>, where for simplicity we have restricted ourselves to diagonal theories. A subtlety in the construction above is the generic appearance of multiple copies of the vacuum and of many copies of the same chiral or Virasoro primary in the partition function. Relatedly, not all branching functions are non-zero, so the naive modular covariance of the branching functions in general requires further analysis.[ Historically, these confusions led to the search for methods to remove such a degeneracy, such as the so-called “identification current method” or “fixed point resolutions” that made the final result into a CFT with a single vacuum, a non-degenerate modular S-matrix, etc. The result of such a procedure is what in some older literature is known as “the” coset CFT (See for instance <cit.>).] In modern terms, we recognize the vacuum degeneracy as the presence of a topological sector coupled to gapless, CFT degrees of freedom. In some circumstances it is desirable to remove this degeneracy by, roughly speaking, “gauging away” this topological sector resulting in a CFT with a unique vacuum. 
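In particular, the GKO construction assigns to the coset the central charge c_{G_k}-c_{H_k̃}. As a simple self-contained illustration (using only c(SU(2)_k)=3k/(k+2), and not a new result), the coset presentation of the minimal models recalled above reproduces the familiar value c=1-6/((k+2)(k+3)).

from fractions import Fraction

def c_su2(k):
    return Fraction(3 * k, k + 2)

def c_minimal(k):
    # Virasoro minimal model M(k+3, k+2); k = 1 is the Ising model with c = 1/2.
    return 1 - Fraction(6, (k + 2) * (k + 3))

for k in range(1, 10):
    assert c_su2(k) + c_su2(1) - c_su2(k + 1) == c_minimal(k)
print("GKO central charge of SU(2)_k x SU(2)_1 / SU(2)_{k+1} matches M(k+3, k+2)")

Note that this counting is insensitive to the vacuum degeneracy issue just described, which is governed by the topological sector rather than by the central charge.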
In general, we should therefore differentiate between two possible notions of cosets:*A coset CFT with degenerate vacua, where topological sectors are retained.*A coset CFT with a unique vacuum state, where topological sectors have been removed.The partition function (<ref>) corresponds to the first notion above. The distinction between these possibilities is particularly important in the special case where the topological sector is all there is, as occurs for example in the case of conformal embeddings discussed below.The phenomenon just described has been previously noticed and interpreted in terms of projection into universes and/or vacua (See <cit.>), and here we provide a further interpretation in the context of TQFTs with gapped and gapless boundaries, and lines ending or not perpendicularly at a topological junction.Turning now to the relationship between 3D TQFTs and 2D CFTs, a natural question to ask is what are the TQFTs associated to these two possible notions of cosets. As shown in <cit.> the TQFT that reproduces the coset CFT with a single vacuum at the boundary is given –often, but crucially not always– by the product Chern-Simons theory:G_k× H_-k̃/Z ,where Z is the common center of groups G and H. As in <cit.>, Z is, for the time being, some abelian discrete group. Shortly, this assumption will be lifted.With an eye towards future generalizations let us deduce why (<ref>) is correct.In particular, we would like to understand the difference between the boundary conditions in the Chern-Simons theories which differ by whether we gauge or not the common center. As we illustrate, this difference can be usefully phrased in terms of topological interfaces.Consider first the case were we gauge the common center in the bulk as in (<ref>). This situation is depicted in the left in Figure <ref>,where we denote the boundary condition as (G_k/H_k̃)_Z with a subindex to emphasize that the corresponding bulk has the center one-form symmetry gauged.[When discussing non-chiral 2D theories we should really picture the 3D theory with a left and right boundary component. In the following we illustrate one boundary component and we assume the same conditions on the left and right.This means that we consider diagonal theories.] As mentioned above, this is the case where we have a theory with single vacuum on the boundary. Standard examples involving an abelian ℤ_2 gauging that one could keep in mind are the minimal models SU(2)_k× SU(2)_1× SU(2)_-k-1 /ℤ_2, or the parafermions SU(2)_k× U(1)_-2k/ℤ_2. Separately, we note that in the Chern-Simons theory G_k× H_-k̃, the common center indicates the presence of abelian anyons which define a gaugable one-form symmetry <cit.>. We can thus consider the topological interface generated by gauging the common center one-form symmetry on the right half of space, as depicted on the right in Figure <ref>. Placing this topological interface together with the coset boundary (G_k/H_k̃)_Z of Figure <ref>, we obtain the construction depicted in Figure <ref>.Since the interface is topological, we can move it towards the boundary to generate a new boundary condition for the theory without the common center one-form symmetry gauged. The latter boundary condition therefore differs from (G_k/H_k̃)_Z by some topological action, and so we call the new boundary condition (G_k/H_k̃) without a subindex to emphasize that the corresponding bulk has no one-form symmetry gauged. The result of all previous manipulations is depicted in Figure <ref>. 
In summary, we see that the key difference between the coset boundary condition when the bulk is given by G_k× H_-k̃ and (G_k× H_-k̃)/Z is determined by some topological degrees of freedom, here represented by the topological interface separating the product with and without the common center gauged. Notice the similarity with our comment above regarding the distinction between two notions of coset CFTs: one with and one without topological degrees of freedom removed.Now that we understand how the boundary conditions (G_k / H_k̃) and (G_k / H_k̃)_Z are distinguished from each other we study how to setup the boundary condition (G_k/H_k̃) in G_k× H_-k̃ recalling the well-known steps derived in <cit.>.The key point is that requiring the variation of the action to vanish implies the bulk equations of motion, but also the vanishing of a boundary termk/4π∫_∂ XTr'_G(δ A A) - k̃/4π∫_∂ XTr'_H(δ B B),where A and B are gauge fields based on the Lie algebras of G and H respectively, Tr'_G and Tr'_H are the respective representation-independent traces, and∂ X is the boundary of our bulk 3D spacetime X. Imposing A_0 = 0 and B_0 = 0 at the boundary gives the canonical chiral WZW boundary <cit.>. However, since we have taken H to embed in G there is another boundary condition where we ask for the gauge field A projected onto the Lie algebra of H to equal the gauge field B, which also makes (<ref>) vanish. The action then reduces, following the steps of <cit.> to an expression involving WZW actions:i k S_WZW(U) - i k̃ S_WZW(V) + ik ∫Tr'_G λ (∂_ϕU U^-1 - ∂_ϕ V V^-1),where U and V are the Maurer-Cartan fields that arise when we integrate the time components of A and B respectively in the bulk, and λ is a Lagrange multiplier. As first described in <cit.>, changing variables and using the Polyakov-Wiegmann formulagives the path integral of (the chiral version of) the gauged WZW model.[It is sometimes useful to express the trace in the algebra of H in the path integral in terms of that of G by noting that the embedding relates the normalization of the traces Tr'_H = Tr'_G/ℓ with ℓ the embedding index.] In summary, we see that what we have called above the (G_k/H_k̃) boundary condition corresponds to (the chiral version of) the gauged WZW action.The path integral, clearly, takes into account contributions resulting from topological sectors. This can be illustrated for instance by taking H_k̃ = G_k so that we obtain the well-known G_k/G_k topological coset field theory <cit.> on the boundary, whose partition function over a Riemann surface Σ has been evaluated (see <cit.>) to be Z_G_k/G_k = dim(𝒱), with dim(𝒱) the number of conformal blocks of the corresponding G_k WZW model. On the torus, this evaluates to the number of chiral primaries of the underlying G_k WZW model. We can also view this G_k/G_k boundary condition as an instance of the universal gapped boundary for a bulk theory of the form 𝒞×𝒞, with 𝒞 = G_k.[An easy way to see that the previous gapped boundary always exists is to notice that upon unfolding the interface is simply the identity interface between 𝒞 and 𝒞.] Gauging the center one-form symmetry Z in the bulk implies that the gauge group on the boundary is no longer H, but a non-simply-connected version based on the same Lie algebra. The corresponding summation over bundles/insertion of anyons generating Z descends into a corresponding summation on the gauged WZW model on the boundary. 
The corresponding model based on a non-simply-connected gauge group has a single vacuum and the so-called “identification current method” has been applied to remove multiple copies of the same chiral primary, as studied in <cit.> (see also <cit.> for recent related discussions). This corresponds to the boundary condition that we have called(G_k/H_k̃)_Z above.Let us now interpret the prior story in terms of lines ending at the boundary. To do this consider first two extreme scenarios: that of the canonical (chiral) CFT boundary with no topological sectors other than the identity, illustrated in Figure <ref>, and that of a purely topological boundary, illustrated in Figure <ref>.In the CFT boundary case all lines can end perpendicularly at the boundary on a non-topological junction, which generates the local operators of the RCFT <cit.>. Under parallel fusion with the boundary, the bulk lines becomes the Verlinde lines <cit.> of the RCFT <cit.>. In particular, this point of view is one way in which we can explain why the local operators and Verlinde lines of an RCFT follow the same fusion ring; namely, that of the bulk MTC <cit.>, and why furthermore Verlinde lines in an RCFT generate a modular tensor category (MTC) instead of merely a fusion category. Importantly, because the bulk lines form a MTC, and all lines can end perpendicularly at the boundary with this choice of boundary condition, it trivially follows that the set of lines that can end perpendicularly at the boundary have non-trivial mutual braiding. That is, the braiding matrix projected to those lines that can end perpendicularly at the boundary is non-degenerate (i.e., the projected braiding matrix has maximal rank).We can compare the discussion above with the case of a purely topological boundary. Since we are assuming Z to be abelian, only a square root of the total number of simple lines can end perpendicularly at the boundary <cit.>, generating a topological junction in the process, and under parallel fusion with the boundary such lines becomes invisible[Technically, one summarizes this statement saying that such lines participate in a Lagrangian subgroup.] (See Figure <ref>). In particular, this set of lines has a size strictly smaller than that of the simple objects of the full bulk MTC, and in which all lines braid trivially with each other. That is, the braiding matrix obtained from (<ref>) projected to those lines that can end perpendicularly at the boundary is maximally degenerate (i.e., the projected braiding matrix has rank 1).We now come back to the situation of the gauged WZW boundary condition (G_k/H_k̃). The general situation is complicated because of the variety of ways that the topological sector can intertwine with the CFT sector. However, we can get an idea of the general situation by considering the example SU(2)_1× SU(2)_1× SU(2)_-2 which is (up to the ℤ_2 common center) the standard coset description of the Ising model. The branching rules of the characters are:χ^SU(2)_1_0χ^SU(2)_1_0 = χ^I_0 χ^SU(2)_2_0 + χ^I_v χ^SU(2)_2_2, χ^SU(2)_1_0χ^SU(2)_1_1 = χ^I_σ χ^SU(2)_2_1, χ^SU(2)_1_1χ^SU(2)_1_0 = χ^I_σ χ^SU(2)_2_1χ^SU(2)_1_1χ^SU(2)_1_1 = χ^I_0 χ^SU(2)_2_2 + χ^I_v χ^SU(2)_2_0.Comparing with (<ref>), we obtain a total of six operators in the CFT, so not all lines of the bulk end at a non-trivial local operator of the boundary theory. 
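As an aside, the branching rules just quoted can be verified at the level of conformal weights: each term on the right-hand side must reproduce the weight of the left-hand side up to a non-negative integer. A minimal Python sketch of this bookkeeping (ours, not part of the original analysis; we read the label v as the weight-1/2 Ising primary) is:

from fractions import Fraction as F

def h_su2(j, k):
    # conformal weight of the SU(2)_k primary labeled j = 0, ..., k
    return F(j * (j + 2), 4 * (k + 2))

h_ising = {'0': F(0), 'v': F(1, 2), 'sigma': F(1, 16)}

# branchings of SU(2)_1 x SU(2)_1 into Ising x SU(2)_2 quoted above
branchings = {(0, 0): [('0', 0), ('v', 2)],
              (0, 1): [('sigma', 1)],
              (1, 0): [('sigma', 1)],
              (1, 1): [('0', 2), ('v', 0)]}
for (j1, j2), terms in branchings.items():
    lhs = h_su2(j1, 1) + h_su2(j2, 1)
    for lab, j in terms:
        diff = h_ising[lab] + h_su2(j, 2) - lhs
        assert diff >= 0 and diff.denominator == 1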
Furthermore, every operator of the CFT sector appears twice, with a non-trivial topological operator appearing in the branching space associated to the ℤ_2 generator in the bulk: the line (1,1,2) in SU(2)_1× SU(2)_1× SU(2)_-2. The doubling of the spectrum can be then thought of as fusing this topological operator with the different CFT sectors.We can also obtain the same conclusion by observing[This is a consequence of the fact that Ising≅ (SU(2)_1× SU(2)_1× SU(2)_-2)/ℤ_2. Ungauging the ℤ_2 by gauging the quantum zero-form symmetry allows us to write SU(2)_1× SU(2)_1× SU(2)_-2 in terms of Ising and a twisted ℤ_2 gauge theory. Comparing the spectrum of the lines on both sides we can convince ourselves that such twisted ℤ_2 gauge theory is U(1)_2× U(1)_-2.] SU(2)_1× SU(2)_1× SU(2)_-2≅Ising× U(1)_2× U(1)_-2 .Then the gauged WZW boundary condition corresponds to setting the CFT boundary condition on the Ising factor, but the topological boundary condition on the U(1)_2× U(1)_-2 factor, which explains both why not all lines end perpendicularly at the boundary, and why the spectrum of Ising is doubled. Thus we see that the (G_k/H_k̃) boundary on the one hand is similar to the CFT boundary condition in that the boundary is non-topological and there exists a subset of the bulk lines ending perpendicularly at the boundary such that their mutual braiding is non-trivial and the junctions at the end of the lines are non-topological. On the other hand, there also exists lines that generate topological junctions which have trivial mutual braiding. The gauged WZW boundary condition (G_k/H_k̃) is like the topological boundary condition in that the braiding of those lines ending perpendicularly is degenerate, but unlike the purely topological boundary, not maximally so. That is, the braiding matrix projected to those lines that can end perpendicularly at the boundary is degenerate, but with neither minimal nor maximal rank.In hindsight, this explains why historically the naive proposal (<ref>) led to a degenerate CFT with multiple vacua – since there are multiple topological junctions from the bulk– and degenerate modular S-matrix –since the braiding of the bulk lines ending at the boundary is degenerate–. Upon gauging Z in the bulk to obtain (G_k× H_-k̃)/Z what we are doing is making those lines generating topological junctions transparent and identified with each other, so in the boundary the corresponding degeneracy is removed, and we find a standard 2D RCFT: that which the methods in the older literature provided via the “field identification” and “fixed point resolution” methods.§.§ Application to Level-Rank Dualities A well-known and important application of coset CFTs is to level-rank dualities of 3d TQFTs <cit.>.Consider for instance the conformal embeddingSU(N)_k× SU(k)_N↪ SU(Nk)_1. At the level of the branching rules (<ref>), the embedding (<ref>) means that the characters of SU(Nk)_1 can be expanded in terms of products of those of SU(N)_k and SU(k)_N. We can then construct the coset CFT with single vacuum (SU(Nk)_1/SU(N)_k)_Z, and by the previous conformal embedding the result must be equivalent to the SU(k)_N WZW theory. We have then the equivalence of chiral algebrasSU(k)_N⟷( SU(Nk)_1/SU(N)_k)_Z,Expressing now both sides in terms of their bulk TQFTs we obtain the dualitySU(k)_N≅SU(NK)_1× SU(N)_-k/ℤ_N. 
More generally, the logic of the above example is simply that if we find two different descriptions of the same chiral algebra either in terms of a WZW theory on one side, and a coset description on the other, or by relating two different coset descriptions on either sides, we can use the relationship between bulk and boundary to find dual descriptions of the same bulk TQFT. In particular, we must take care to ensure that the vacuum degeneracy on both sides matches.Often we implicitly assume that all such degeneracy is removed, as in (<ref>) by a suitable gauging.In familiar examples this is achieved by gauging a one-form symmetry in the bulk (the common center group), but in the examples to follow the required gauging will be more subtle. § DUALITIES VIA NON-INVERTIBLE ANYON CONDENSATION In the previous section, we have outlined how to construct (bosonic) dualities of 3D TQFTs by making clever use of the boundary theory and a proper understanding of the coset construction of 2D RCFTs. However, most of our discussion relied on the standard assumption <cit.> that gauging the common center Z of G_k× H_-k̃is sufficient to find a CFT with a single vacuum. However, already in <cit.> it was observed that there are exceptions to the idea that gauging by an abelian common center Z in G_k× H_-k̃ is sufficient to remove all boundary topological degrees of freedom and obtain a standard boundary CFT with a unique vacuum, as in the case of the conformal embeddings.In the context of the GKO coset construction, it is also known that there are exceptions to the methods mentioned in footnote <ref> to construct CFTs with single vacuum from the “naive” partition function (<ref>). In these exceptions there are additional selection rules (i.e.vanishing branching functions) and field identifications that are not a consequence of group-theoretical selection rules and thus cannot be submitted to the “field identification method”<cit.> mentioned above to find a CFT with single vacuum and non-degenerate S-matrix. Importantly, again the conformal embeddings appear as part of such exceptions (see <cit.>). We stress that if the conformal embedding has two factors in the denominator we mean the full quotient appears as an exception, and not a standard coset based on just one of the factors. Furthermore, it is known that a handful of gapless cosets also share this feature. In the literature, these cosets have been referred to as “Maverick cosets,” and the known examples have been constructed long ago in <cit.>. In some sense, Maverick cosets are then a hybrid in between the standard cosets and the conformal embeddings. To the extent of the authors' knowledge, there is no known classification of Maverick cosets. The previous observations suggest that in order to understand these exceptions we must relax the assumption that the bulk is given by G_k× H_-k̃ gauged by an abelian one-form symmetry Z. Indeed, in the past few years it has been understood that the notion of gauging can be extended to the case where the symmetries are not necessarily group-like <cit.> , and it is natural to explore the consequences of such generalized gauging in the present context. In the following discussion we will not need a rigorous presentation of non-abelian anyon condensation, and content ourselves with a physical presentation. In Appendix <ref> we present a summary of the rigorous statements that we use, and outline them in a physics nomenclature in the next subsection. 
More extensive treatments of non-abelian anyon condensation from a physics perspective may be found in <cit.>. See <cit.> for a review.To begin, it is instructive to study the example of the exceptional conformal embedding discussed briefly in Section <ref>, which shows the inevitability of non-abelian anyon condensation in the general construction of coset CFTs, and in particular the conformal embeddings. The example is given by the embedding SU(2)_1× SU(2)_3↪ (G_2)_1.In 2D CFT terms, this conformal embedding is translated to the following branching rules:χ^(G_2)_1_1 = χ^SU(2)_1_0 χ^SU(2)_3_0 + χ^SU(2)_1_1 χ^SU(2)_3_3, χ^(G_2)_1_7 = χ^SU(2)_1_0 χ^SU(2)_3_2 + χ^SU(2)_1_1 χ^SU(2)_3_1,where we have labeled the integrable representations of (G_2)_1 by the dimensionality of the representation 𝐑, and those of SU(2)_k as standard by an integer i=0,…,k. The notation is analogous whenever we consider negative levels in TQFT.These branching rules can be regarded in many forms. In the simplest scenario, we can consider the coset (G_2)_1/SU(2)_1, and the branching rules above show that the branching functions are the characters of SU(2)_3. Specifically, using the torus partition function for coset CFTs (<ref>), we findZ_T^2[ (G_2)_1/SU(2)_1] = | χ^SU(2)_3_0|^2 + | χ^SU(2)_3_1|^2 + | χ^SU(2)_3_2|^2 + | χ^SU(2)_3_3|^2 = Z_T^2[SU(2)_3].The result is already an ordinary 2D CFT, with no vacuum degeneracy and a non-degenerate modular S-matrix. Clearly, this means that there are no topological sectors to consider, and the coset theory is simply(G_2)_1/SU(2)_1 = SU(2)_3. More interestingly, we can consider the coset (G_2)_1/SU(2)_3, and then the branching rules (<ref>)-(<ref>) tell us to regard the characters of the SU(2)_1 theory as the branching functions in the corresponding coset decomposition. More precisely, using (<ref>) again:Z_T^2[ (G_2)_1/SU(2)_3] = 2|χ^SU(2)_1_0|^2 + 2| χ^SU(2)_1_1|^2 = 2 Z_T^2[SU(2)_1],from which it is clear that now we do obtain a topological sector. However, notice that the group G_2 has trivial center, so we cannot possibly interpret the previous degeneracy in terms of some ℤ_2 common center as in more standard description of cosets. Even more concretely, if we translate the previous observation to the associated 3D TQFTs, it is straightforward to check that (G_2)_1× SU(2)_-3 simply does not have an abelian anyon in its spectrum.[See Tables <ref> and <ref> later in the paper to check this statement.] The resolution of this puzzle is that while (G_2)_1× SU(2)_-3has no abelian boson in its spectrum, it does have a non-abelian one consisting of the product of the line 7∈ (G_2)_1 and the line 2 ∈ SU(2)_-3. Moreover, this is the exact combination giving the additional topological local operator in the branching rules (<ref>). Therefore, if this boson were to end on the boundary, it would explain the degeneracy above, and noticing that both 7∈ (G_2)_1 and 2 ∈ SU(2)_3 follow Fibonacci fusion rules, provides us with a natural guess for the topological sector present on top of the CFT sector:(G_2)_1/SU(2)_3= 𝐅𝐢𝐛^TQFT× SU(2)_1^CFT = (G_2)_1/(G_2)_1× SU(2)_1^CFT.Shortly, we will give a more rigorous derivation of this topological sector using level-rank duality and studying choices of boundary conditions. 
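As a consistency check of this guess, the conformal weights work out: in the branching rules (<ref>)-(<ref>) each term matches the corresponding (G_2)_1 weight modulo non-negative integers, and (7,2) is the only non-trivial spin-zero anyon of (G_2)_1× SU(2)_-3. A short Python sketch of this bookkeeping (ours; the weights are hard-coded from the standard affine formulas) reads:

from fractions import Fraction as F

def h_su2(j, k):
    return F(j * (j + 2), 4 * (k + 2))

h_g2 = {1: F(0), 7: F(2, 5)}   # (G_2)_1 primaries labeled by dimension

# branchings (<ref>)-(<ref>): (G_2)_1 label -> [(SU(2)_1 label, SU(2)_3 label)]
branchings = {1: [(0, 0), (1, 3)], 7: [(0, 2), (1, 1)]}
for R, terms in branchings.items():
    for j1, j3 in terms:
        diff = h_su2(j1, 1) + h_su2(j3, 3) - h_g2[R]
        assert diff >= 0 and diff.denominator == 1

# (7,2) is the unique non-trivial boson of (G_2)_1 x SU(2)_{-3}
bosons = [(R, j) for R in h_g2 for j in range(4)
          if (h_g2[R] - h_su2(j, 3)).denominator == 1]
assert bosons == [(1, 0), (7, 2)]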
In passing, if we follow this proposal, it is clear that we can label the states in the coset (G_2)_1/SU(2)_3 as (0,i) and (ϕ,i), with fusion rules(0,i) × (0,j) = (0,i+j), (0,i) × (ϕ,j) = (ϕ,i+j), (ϕ,i) × (ϕ,j) = (0,i+j) + (ϕ,i+j),where i,j=0,1 labels the integrable representations in the SU(2)_1 CFT factor.Gauging by the non-abelian boson in the bulk would then remove the topological sector on the boundary, leaving us only with the non-degenerate CFT sector. This is clearly what we would have naively expected from the branching rules; that is, the SU(2)_1 WZW theory. Concretely:SU(2)_1≅(G_2)_1× SU(2)_-3/𝒵(𝐅𝐢𝐛).Below in Section <ref> we will verify this statement by a direct computation using non-abelian anyon condensation. Therefore, this is a situation where the coset RCFT lives at the boundary of a bulk TQFT whose only non-trivial boson is non-abelian. Correspondingly, to find the RCFT with single vacuum on the boundary (in this example SU(2)_1) we have to identify fields in the branching rules that are not related by some abelian action as in e.g., the case of the Ising model at the end of Section <ref>. In other words, to find the RCFT with single vacuum in the boundary we would have to gauge/condense the non-abelian boson in the bulk.Generically then, gauging non-abelian anyons in the bulk leads to some RCFT in the boundary, which may however also appear as a boundary condition in a different-looking bulk TQFT. Matching these different descriptions to one another is one way to guess dualities of TQFTs, which we may then verify directly as a statement solely in the context of 3D TQFTs.Finally, one can also consider the full coset (G_2)_1/SU(2)_1× SU(2)_3, which is topological since the central charge vanishes. Indeed, computing the coset torus partition function (<ref>), one obtains:Z_T^2[(G_2)_1/SU(2)_1× SU(2)_3] = 4. Using similar steps as above the concrete proposal for the 2D TQFT is:(G_2)_1/SU(2)_1× SU(2)_3 = 𝐅𝐢𝐛^TQFT⊗ℤ_2^TQFT= (G_2)_1/(G_2)_1⊗SU(2)_1/SU(2)_1The previous discussion can also be derived by examining choices of boundary conditions implied by the TQFT duality:(G_2)_1× SU(2)_-1≅ SU(2)_3,where it is straightforward to see that the spectrum of lines match on both sides. This shows that we can set the canonical SU(2)_3 RCFT boundary condition in the product (G_2)_1× SU(2)_-1, which gives the coset result (<ref>). On the other hand, we can take the orientation reversal of the previous duality and tensor the resulting expression by (G_2)_1 which allows us to write(G_2)_1× SU(2)_-3≅ (G_2)_1× (G_2)_-1× SU(2)_1.Then, by this duality, in the bulk theory (G_2)_1× SU(2)_-3 we are allowed to take as a boundary condition a canonical SU(2)_1 RCFT boundary condition for the SU(2)_1 factor, but the purely topological boundary condition for the factor (G_2)_1× (G_2)_-1 given by the diagonal Lagrangian algebra. Recall this type of topological boundary is given by the G_k/G_k 2D TQFT, which has as many topological local operators as the G_k WZW theory has chiral primaries, with the same fusion rules as the G_k WZW theory. 
The possibility of choosing this “combined CFT-Topological” boundary condition explains both the degeneracy of two in the torus partition function (<ref>) as well as the precise Fibonacci TQFT factor in (<ref>), since (G_2)_1 has Fibonacci fusion rules.Finally, we can tensor (<ref>) by a SU(2)_-1 factor to find (G_2)_1× SU(2)_-1× SU(2)_-3≅ (G_2)_1× (G_2)_-1× SU(2)_1× SU(2)_-1,which allows us to set a purely topological boundary condition for (G_2)_1× SU(2)_-1× SU(2)_-3 by choosing the purely topological boundary in both the (G_2)_1× (G_2)_-1 and SU(2)_1× SU(2)_-1 factors given by the corresponding diagonal Lagrangian algebras. Similarly, the duality explains the resulting 2D TQFT (<ref>) in the coset (G_2)_1/(SU(2)_1× SU(2)_3), and the corresponding torus partition function (<ref>).Now that we have convinced ourselves of the a priori generic appearance of non-abelian anyon condensation in the context of level-rank dualities, we will explore more examples where this phenomenon occurs. One such instance will be in the case of the conformal embeddings, as hinted above. In <cit.> it was suggested that the failure of the “field identification method” in the case of the Maverick cosets could be explained in terms of non-abelian anyon condensation, and we use this observation to propose dualities by first inspecting the resulting CFT, checking directly via non-abelian anyon condensation as a statement in the bulk TQFT, and then comparing different coset descriptions with the same TQFT data. Finally, a proper physical interpretation of certain mathematical results outlined in the next subsection will make manifest that non-abelian anyon condensation already appears even in standard examples of the coset construction, such as in the well-known coset description of the minimal models. We summarize the precise mathematical statements in Appendix <ref>.Below in this section, we will first discuss in generality the main setup that describes the interplay between dualities and non-abelian anyon condensation, and then we consider the concrete case of Maverick cosets since the underlying CFT (with single vacuum) is non-trivial and it is most straightforward to repeat the arguments pertaining to the case of gauging by an abelian one-form symmetry based on the boundary CFT. The case of the conformal embeddings is slightly more complicated, so we discuss them later in Section <ref>. Since they have vanishing central charge, it seems like applying analogous arguments would lead to a rather trivial duality with the empty theory. This is indeed correct, but certain mathematical arguments will allow us to find non-trivial dualities nevertheless.§.§ The General Picture To proceed further, we need to formalize the statements about the coset construction that we have used to convince ourselves of the general necessity of non-abelian condensation. The mathematical framework that we use that addresses such general scenario, and that is behind the results discussed in this subsection, corresponds to the formalism of local modules of special symmetric commutative Frobenius algebras described in <cit.>. To avoid a heavy mathematical digression, we summarize the definitions and results in this language in appendix <ref>, and in this subsection we approach such results from a physical point of view.The main point is to unpack the results of <cit.> in the language of non-invertible one-form symmetry gauging. 
According to this result, starting from the decomposition of an affine Lie algebra, or more generally, a chiral algebra described by an MTC ℳ in terms of a smaller one described by an MTC ℳ' (as in (<ref>)) we can writeℳ≅ (𝒞×ℳ')/𝒜,where 𝒞 is the MTC describing the coset theory and the quotient by 𝒜 stands for some one-form gauging that is not necessarily abelian, and under rather general conditions on 𝒜 (see Appendix <ref> for the more precise statement) we can "solve" for 𝒞 and obtain the coset MTC in terms of ℳ and ℳ' as:𝒞≅ (ℳ×ℳ')/ℬ,for some new one-form gauging ℬ. The latter is what in the work of Moore and Seiberg <cit.> was identified as the common center of the affine Lie algebras. However, notice that from the current point of view we have no reason to believe that this is generally the case. Indeed, the conformal embeddings and Maverick cosets are explicit counterexamples to the common center rule, but they are still comfortably described by (<ref>) when we allow for non-invertible anyon condensation.The previous situation is the one encountered most often, but when the general conditions on 𝒜 are not strictly fulfilled a mild variation of the previous theorem still holds. Namely, there exist (generically non-invertible) one-form symmetries 𝒯_1 and 𝒯_2 such that:𝒞/𝒯_1≅ (ℳ×ℳ')/𝒯_2.Essentially, what the conditions on 𝒜 do is ensure that the chiral algebra described by 𝒞 in (<ref>) already appears "extended," as otherwise the latter form of the theorem will perform the extension in any case.We may consider our previous SU(2)_1× SU(2)_3↪ (G_2)_1 example from this point of view. Taking ℳ = (G_2)_1, we can take ℳ' to be SU(2)_1 or SU(2)_3, with 𝒞 the remaining factor. When ℳ' = SU(2)_1, ℬ is trivial, and (<ref>) reproduces (<ref>). When ℳ' = SU(2)_3, however, the previous formalism allows ℬ to be non-trivial and non-invertible. As a result, (<ref>) readily reproduces (<ref>), which we previously reproduced by cleverly manipulating the factors. Equations (<ref>) and (<ref>) represent the abstract form of such manipulations, and hold for general MTCs.§.§.§ An Interesting Corollary Now, we notice the following rather interesting corollary of Eqs. (<ref>) and (<ref>). That is, from the CFT perspective, (<ref>) may be interpreted as the existence of some branching rules (<ref>) for ℳ in terms of the ℳ' data. But from the TQFT perspective alone, it means that not only can we obtain 𝒞 in terms of ℳ and ℳ' via (<ref>)–which is the standard form of the coset construction– but that we can write ℳ in terms of the coset 𝒞 and ℳ' after some one-form symmetry gauging. The gauging is generically by a non-invertible symmetry, even if the gauging by ℬ in (<ref>) is abelian and given by the common center. Because of this inversion property between (<ref>) and (<ref>), we refer to such expressions as the "coset inversion formulas" or "coset inversion theorem" in the following. To illustrate this coset inversion theorem, we may consider the standard coset description for minimal models (SU(2)_k× SU(2)_1× SU(2)_-k-1)/ℤ_2. By the formulas above, we expect that SU(2)_k× SU(2)_1 can be written in terms of the k-th minimal model asSU(2)_k× SU(2)_1≅M(k+3,k+2) × SU(2)_k+1/𝒜_k,for some generically non-invertible gauging 𝒜_k, where M(k+3,k+2) stands for the k-th minimal model with k=1 the Ising model. As a check, it is readily shown that the combination of the (1,3) primary[As usual, we have denoted primaries in minimal models as pairs (r,s) in the Kac table <cit.>.]
in the M(k+3,k+2) minimal model and the line in the adjoint representation in SU(2)_k+1 are such that h^M(k+3,k+2)_1,3 + h^SU(2)_k+1_2 = (k+1)/(k+3) + 2/(k+3) = 1, so the product line for any k is a boson that is generically non-abelian. For k=1 the gauging on the right-hand side by this combination is a standard abelian ℤ_2 gauging. However, already at k=2 it can be seen that the gauging generically involves a non-invertible boson. It is amusing to check that this is indeed the case –so that (<ref>) holds– which we verify in Section <ref> below by a direct computation on non-abelian anyon condensation. §.§ Maverick Cosets and DualitiesIn this section we provide some explicit proposals of dualities involving non-abelian anyon condensation based on the many observations that we have made previously in this work. The case of the conformal embeddings is treated separately in Section <ref>. In the following we will need the explicit expression for the Maverick cosets. The list of Maverick cosets known to date <cit.> and their central charges is:SU(k)_2/Spin(k)_4,c = 2(k-1)/(k+2), Spin(2N)_2/Spin(N)_2× Spin(N)_2,c = 1, (E_6)_2/USp(8)_2,c = 6/7, (E_7)_2/SU(8)_2,c = 7/10,(E_8)_2/Spin(16)_2,c = 1/2, (E_8)_2/SU(2)_2× (E_7)_2,c = 7/10,(E_7)_2/SU(2)_2× Spin(12)_2,c = 8/10,SU(4)_1/SU(2)_10,c = 1/2,Spin(7)_1/SU(2)_28,c = 7/10.It is straightforward to check that for the Maverick cosets G_k / H_k̃ above there is indeed a non-abelian boson in the spectrum of the associated G_k× H_-k̃ TQFT.§.§.§ First Family of Maverick Dualities As a warm-up, let us start considering the simplest example of a Maverick coset corresponding to the k=3 case in the first infinite family of Maverick cosets (<ref>):SU(3)_2/SU(2)_8,where we have used the exceptional isomorphism of chiral algebras Spin(3)_k = SU(2)_2k<cit.>. The central charge of this coset is c=4/5, meaning it could in principle be the Tetracritical Ising Model or the three-state Potts model (TSPM). Fortunately, already in <cit.> from an analysis of the branching rules it was noticed that the result actually corresponds to the TSPM, which is known to allow for other coset descriptions. For instance, SU(2)_3/U(1)_6, or (SU(3)_1× SU(3)_1)/SU(3)_2 are coset descriptions that reproduce the TSPM <cit.>.Translating then to TQFTs by the rules of <cit.>, we observe that both of the standard cosets SU(2)_3/U(1)_6, or (SU(3)_1× SU(3)_1)/SU(3)_2 translate to the TQFTs SU(2)_3× U(1)_-6/ℤ_2, orSU(3)_1× SU(3)_1× SU(3)_-2/ℤ_3respectively, both of which are readily seen to match the spectrum of the TSPM after applying the three-step gauging procedure <cit.>. However, translating the Maverick coset version of the TSPM to TQFTs is not so straightforward. Obviously, SU(3)_2× SU(2)_-8 has too many lines to identify it by itself with the TSPM, so we must gauge some lines away in order to make them coincide. However, SU(2) and SU(3) have a trivial common center, so it does not seem possible to use the standard gauging procedure of <cit.> to do so. Notice that SU(3)_2× SU(2)_-8 does have a non-trivial abelian boson in its spectrum; namely (1,8), [Here and below we use the notation of denoting a line in SU(3)_2 by the dimension of its representation in bold letters, and a line in SU(2)_k by the standard integer i=0,1,…,k.] but gauging it gives SU(3)_2× SO(3)_-4, which also does not match the spectrum of the TSPM.The solution to this conundrum, of course, is that SU(3)_2× SU(2)_-8 has yet another non-trivial boson in its spectrum: (8,4), and it is a non-abelian one!
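A quick way to confirm these last statements is a spin scan: with h_8 = 3/5 in SU(3)_2 and h_j = j(j+2)/40 in SU(2)_8, the only spin-zero products in SU(3)_2× SU(2)_-8 are (1,0), (1,8) and (8,4). The Python sketch below is ours and purely illustrative (the SU(3)_2 weights are hard-coded from the affine formula); it performs this scan and also reproduces the coset central charge c = 4/5.

from fractions import Fraction as F

def h_su2(j, k):
    return F(j * (j + 2), 4 * (k + 2))

# conformal weights of the six SU(3)_2 primaries
h_su3_2 = {'1': F(0), '3': F(4, 15), '3b': F(4, 15),
           '6': F(2, 3), '6b': F(2, 3), '8': F(3, 5)}

# spin-zero anyons of SU(3)_2 x SU(2)_{-8}
bosons = [(R, j) for R in h_su3_2 for j in range(9)
          if (h_su3_2[R] - h_su2(j, 8)).denominator == 1]
assert bosons == [('1', 0), ('1', 8), ('8', 4)]

# coset central charge of SU(3)_2/SU(2)_8
assert F(2 * 8, 2 + 3) - F(3 * 8, 8 + 2) == F(4, 5)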
Clearly, we can now attempt to condense it in order to reproduce the spectrum of the TSPM. It goes without saying, the non-abelian anyon condensation calculation indeed reproduces the spectrum of the TSPM. The prior calculation was done in <cit.> as a test example of the formalism of non-abelian anyon condensation. Instead, later in Section <ref> we will provide an alternative argument based on exceptional conformal embeddings that simplifies the calculation considerably, on top of relating quite directly this Maverick coset with conformal embeddings and the standard arguments for level-rank duality.In summary, (<ref>) and (<ref>) are three different coset descriptions for the same underlying 3D TQFT (i.e., we have found three different Chern-Simons-like descriptions of the same underlying MTC data), two of which are rather standard and one which involves non-abelian anyon condensation. Since all of these descriptions describe the same underlying theory, we can now relate all the previous Chern-Simons-like descriptions of the TSPM to one another, obtaining the dualities:SU(3)_2× SU(2)_-8/𝒜_3≅SU(2)_3× U(1)_-6/ℤ_2≅SU(3)_1× SU(3)_1× SU(3)_-2/ℤ_3,where the first of these involves non-abelian anyon condensation by the algebra object 𝒜_3 = (1,0) + (1,8) + (8,4). Of course, starting with these expressions we can now attempt to “make the dualities proliferate” by gauging zero-form or one-form symmetries on either side, turning-on background fields possibly with discrete torsion, coupling to spin structure or combining these dualities with previously known ones, etc. We do not attempt this here, and we set it aside for future work.This example can be generalized to the complete first infinite family of Maverick cosets (<ref>) noticing that the central charges match those of the parafermion CFTs <cit.>, so it is natural to suggest that this infinite Maverick family reproduces the parafermions. Indeed, the three-state Potts model corresponds to the ℤ_3 parafermion, so we could have foreseen the previous results by merely matching the central charges to that of the parafermions.The parafermions also have two standard coset descriptions given by the SU(2)_k/U(1)_2k, or (SU(k)_1× SU(k)_1)/SU(k)_2 cosets <cit.>, which generalize the cosets of the TSPM above. Using the same arguments (although now with lack of a general calculation valid for any k), we expect the infinite family of dualitiesSU(k)_2× Spin(k)_-4/𝒜_k≅SU(2)_k× U(1)_-2k/ℤ_2≅SU(k)_1× SU(k)_1× SU(k)_-2/ℤ_k,for some suitable algebra object 𝒜_k on the left-hand side. For k=3 this reproduces the result for the TSPM above. §.§.§ Second Family of Maverick Dualities We study now the second infinite family of Maverick cosets (<ref>). Since the value of the central charge is one there are now two natural possibilities for such infinite family: either an infinite family corresponding to the free boson branch of c=1 RCFTs, or an infinite family corresponding to the orbifold branch of c=1 RCFTs. To decide for one of these families, we will have to perform an explicit calculation in the simplest case of the second infinite family of Maverick cosets (<ref>), corresponding to N=3. For the sake of presentation, we do not perform this calculation here but show this computation in Section <ref> below. The calculation shows that for N=3, the result of the non-abelian anyon condensation is actually the orbifold of U(1)_6, which we denote U(1)^Orb_6. 
Based on this result, we conjecture that after some condensation of non-abelian bosons, the second infinite family of Maverick cosets leads to the orbifold branch of c=1 RCFTs.This suggests the following dualities between theories on the orbifold branch and Chern-Simons-like theories based on Spin groups:U(1)_2N^Orb≅Spin(2N)_2× Spin(N)_-2× Spin(N)_-2/ℬ_Nfor some suitable algebras ℬ_N. The case we will verify explicitly below corresponds to N=3, and it is straightforward to check that in the cases N=1 and N=2 the formula still holds.A quick argument in support of the previous proposal that does not rely on going through the whole computation in Section <ref> is given as follows. We can use the exceptional isomorphisms of chiral algebrasSpin(3)_k≅ SU(2)_2k, Spin(4)_k≅ SU(2)_k× SU(2)_k,Spin(6)_k≅ SU(4)_k,to massage the expression for the Maverick coset (<ref>) at N=3 in the following way:Spin(6)_2× Spin(3)_-2× Spin(3)_-2 ≅ SU(4)_2× SU(2)_-4× SU(2)_-4≅ SU(4)_2× Spin(4)_-4.That is, using the exceptional isomorphisms of chiral algebras we have written the Maverick coset (<ref>) at N=3 as the Maverick coset in the first infinite Maverick family (<ref>) at k=4. This is the next example in that family after the TSPM at k=3 studied above.Let us assume now that the first infinite family of Maverick cosets (<ref>) is given by the parafermion CFTs, as pointed out above. Then, the ℤ_4 parafermion can actually be identified with the U(1)_6^Orb orbifold CFT <cit.>, and we have the explicit resultU(1)_6^Orb≅SU(4)_2× Spin(4)_-4/𝒜_4≅Spin(6)_2× Spin(3)_-2× Spin(3)_-2/ℬ_3,for some appropriate algebra objects 𝒜_4≅ℬ_3. In Section <ref> below we will verify this statement by a direct computation on non-abelian anyon condensation. Naturally, since this specific case turns out to be a parafermion, we can actually further identify (<ref>) with the other Chern-Simons-like expressions in Eqn. (<ref>) for k=4. §.§.§ Isolated Maverick Dualities We now study a few of the isolated Maverick cosets (<ref>)-(<ref>), all of which have c<1 and thus correspond to some (possibly non-diagonal) minimal model. The simplest of these is an interesting example leading to the Ising TQFT:IsingTQFT≅SU(4)_1× SU(2)_-10/𝒜,for some appropriate algebra object 𝒜. We verify this example by a direct computation in Appendix <ref>. The Maverick coset (<ref>) similarly leads to the Ising model and can also be checked by an explicit calculation. Minimal models, and in particular the Ising model, have many (standard) coset descriptions. Using these many descriptions and equating them to the Maverick result above, we obtain many instances of dualities involving a gauging by a non-invertible symmetry. Explicitly, for instance:SU(4)_1× SU(2)_-10/𝒜≅(E_8)_2× Spin(16)_-2/𝒜'≅SU(2)_1× SU(2)_1× SU(2)_-2/ℤ_2 ≅ SU(2)_2× U(1)_-4/ℤ_2≅ (E_8)_1× (E_8)_1× (E_8)_-2≅USp(4)_1× SU(2)_-1× SU(2)_-1/ℤ_2,for some appropriate algebra objects 𝒜 and 𝒜'.We can apply a similar strategy to the Tricritical Ising TQFT, which also has many (standard) coset descriptions as well as three Maverick descriptions, (<ref>), (<ref>), and (<ref>). Following the same steps, we obtain:Spin(7)_1× SU(2)_-28/𝒜≅(E_8)_2× SU(2)_-2× (E_7)_-2/𝒜'≅(E_7)_2× SU(8)_-2/𝒜” ≅ SU(2)_2× SU(2)_1× SU(2)_-3/ℤ_2≅(E_7)_1× (E_7)_1× (E_7)_-2/ℤ_2≅ (F_4)_1× Spin(9)_-1≅ USp(6)_1× USp(4)_-1× SU(2)_-1/ℤ_2≅SU(3)_2× SU(2)_-2× U(1)_-12/ℤ_6,for some appropriate algebra objects 𝒜, 𝒜', 𝒜”. The last case is notable since the gauging is by an abelian anyon, but it cannot be interpreted as a common center.
Rather, it is a quantum one-form symmetry present due to the specific Chern-Simons levels. § EXPLICIT CHECKS VIA NON-ABELIAN ANYON CONDENSATION In this section, we present a detailed description of some computations involving non-abelian anyon condensation, specifically with the aim of providing a non-trivial check of the dualities claimed above. First, we summarize some consistency conditions that will allow us to pin down the resulting theory after non-abelian anyon condensation. We follow the heuristic rules of <cit.> for this purpose. Non-abelian anyon condensation was formalized in <cit.>, but the rules of <cit.> are more useful for practical calculations. Following the logic of the three-step gauging procedure reviewed in Section <ref>, we aim to obtain the spectrum and topological spins of the anyons in the gauged theory without working out the full MTC data.The rules of <cit.> enable this approach. In successive subsections, we consider many explicit examples of interest. In the following, we refer to the original theory before condensation/gauging as the parent theory, and to the one obtained after condensation/gauging as the child theory. For obvious reasons, we also refer to the parent and child as uncondensed and condensed theories, respectively.As in the case of standard gaugings of abelian anyons (one-form global symmetries), only bosons can condense, reflecting the fact that the topological spins capture the 't Hooft anomalies <cit.>.[Relatedly, commutative algebras in an MTC necessarily decompose in terms of simple anyons with trivial topological spin. In particular then, special symmetric commutative Frobenius algebras decompose in terms of simple anyons with trivial topological spin.] When we perform anyon condensation, the simple anyons of the parent theory are generically split into many terms associated with excitations in the child theory:a ⟶∑_i n_i^a a_i, n_i^a∈ℕ.It is useful at this intermediate stage to distinguish between genuine line operators and non-genuine line operators, which necessarily arise at the end of topological surface operators <cit.>. Below we often refer to genuine line operators as unconfined excitations, and non-genuine line operators as confined. Shortly we will see how to differentiate between confined excitations and unconfined ones on the right-hand side of a restriction.The labels a_i on the right-hand side of (<ref>) do not necessarily correspond to genuine line operators in the child, and furthermore, many labels a_i corresponding to different anyons a in the parent theory do not necessarily correspond to different excitations in the child theory. Below we will write certain consistency conditions for non-abelian anyon condensation, and imposing such conditions may imply identifications between these many labels. In the following, we will refer to the decomposition (<ref>) as the restriction of the anyon a.[In this context, some references refer to the splitting a →∑_i n_i^ a a_i as "branching". We avoid this terminology to prevent confusion with the a priori different concept of branching rules of affine Lie algebras.] We call the different elements a_i on the right-hand side of the restriction (<ref>) the components of the anyon a in the parent theory. When an anyon a restricts to a single label with unit multiplicity on the right-hand side (i.e., it does not "split"), we abuse notation and call the corresponding component on the right-hand side a as well.
When the latter happens, the corresponding (non-necessarily genuine) line operator in some sense “descends trivially” from the parent theory. We adopt the previous nomenclature in the next subsections.We assume that when we condense non-abelian anyons, the resulting theory has fusion rules fulfilling the following standard requirements: associativity, existence of a unique vacuum, and existence of unique conjugate representations with a unique way to annihilate to the vacuum.Additionally, condensation is subject to the following rules <cit.>: 1) A sector that condenses has a component that is indistinguishable from the vacuum sector in the condensed phase. Specifically:c ⟶ (c_1≡ 0) + ∑_i>1n_i^ cc_i,n_i^ c∈ℕ,where we assume the vacuum component to have multiplicity one. 2) We require fusion of the old and new labels to be compatible with the restriction, i.e., restricting to the resulting theory and fusion must commute:a × b = ∑_c N_ab^c c ⟹( ∑_in_i^ aa_i) ×( ∑_jn_j^ bb_j) = ∑_c,kN_ab^cn_k^ cc_k.Additionally, if a and a̅ are conjugates to each other in the parent:a ⟶∑_i n_i^ aa_i⟹a̅⟶∑_in_i^ aa̅_i.In order for these compatibility conditions to hold, it may be necessary to identify two different labels a_i and b_j on the right sides of the restrictions of two anyons a and b in the parent. As a consequence of these assumptions the quantum dimensions are preserved under restriction:a ⟶∑_b n_b^ ab ⟹ d_a = ∑_b n_ b^ad_b, which will be a crucial tool below when doing explicit computations. 3) Confined and deconfined excitations c_i are distinguished by their lift to anyons c in the parent theory. If the set of all the components that we identify to a certain c_i lift to anyons in the parent theory that do not all have the same topological spin, the corresponding c_i confines and thus it does not correspond to a simple object in the MTC data describing the condensed theory. §.§ Checking the SU(2)_1≅( (G_2)_1× SU(2)_-3)/𝒵(𝐅𝐢𝐛) Duality In this first check we analyze the straightforward example described in Section <ref> and Section <ref> through direct, in-depth computation of non-abelian anyon condensation. The spectra of (G_2)_1 and SU(2)_3 are given in Tables <ref> and <ref> respectively. We call ϕ the only non-trivial anyon of (G_2)_1, which obeys Fibonacci fusion rules. SU(2)_k for general k consists of k+1 lines labeled from 0 to k, and with fusion rules given byΛ_1×Λ_2 = ∑_Λ = |Λ_1-Λ_2|^min(Λ_1+Λ_2,2k-Λ_1-Λ_2)Λ,where the sum is restricted such that Λ_1 + Λ_2 - Λ is even. We denote the lines in (G_2)_1× SU(2)_-3 in the obvious way: (0,i) and (ϕ,i) with i=0,1,2,3.It is easy to check that there is only one non-trivial boson (ϕ,2) in the spectrum of (G_2)_1× SU(2)_-3, and it is non-abelian. Let us assume that such anyon can condense, which is the statement that(ϕ,2) ⟶ 0 + (ϕ,2)_2, d_(ϕ,2)_2 = 1 + √(5)/2. The simplest way to see that on top of 0 there must be a single further label on the right-hand side of (<ref>) is to check the conservation of the quantum dimension. Since d_0 = 1, the remaining quantum dimension is 1+√(5)/2, which is too small to allow a splitting. For future reference, another method to check that only a single line is allowed on top of 0 in (<ref>) is to inspect the fusion of (ϕ,2) with itself:(ϕ,2) × (ϕ,2)= (0,0) + (0,2) + (ϕ,0) + (ϕ,2) ⟶ 0 + 0 + …,where the arrow means that we have taken the restriction of the anyons on the right-hand side of the equality, and since (ϕ,2) condenses, a vacuum arises on the right-hand side. 
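For readers who want to automate such manipulations, here is a minimal Python sketch (ours, and purely a convenience) of the ingredients used so far: the SU(2)_k fusion rule quoted above, Fibonacci fusion for (G_2)_1, and the quantum-dimension bookkeeping behind the restriction of (ϕ,2).

from math import sin, pi, isclose

def fuse_su2(a, b, k):
    # SU(2)_k fusion: labels |a-b|, |a-b|+2, ..., min(a+b, 2k-a-b)
    return list(range(abs(a - b), min(a + b, 2 * k - a - b) + 1, 2))

def fuse_fib(a, b):
    # (G_2)_1 fusion: labels 0 (vacuum) and 1 (phi), with phi x phi = 0 + phi
    return [a + b] if 0 in (a, b) else [0, 1]

def d_su2(a, k):
    # quantum dimension of the label a in SU(2)_k
    return sin((a + 1) * pi / (k + 2)) / sin(pi / (k + 2))

# the fusion (phi,2) x (phi,2) in (G_2)_1 x SU(2)_{-3} quoted above
prod = [(x, y) for x in fuse_fib(1, 1) for y in fuse_su2(2, 2, 3)]
assert sorted(prod) == [(0, 0), (0, 2), (1, 0), (1, 2)]

# quantum-dimension bookkeeping for (phi,2) -> 0 + (phi,2)_2
phi = (1 + 5 ** 0.5) / 2                      # d_phi in (G_2)_1
assert isclose(phi * d_su2(2, 3), 1 + phi)    # remaining dimension is (1+sqrt(5))/2

The actual argument continues in the text below.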
This means that we need (ϕ,2) to split into two components on the left-hand side. Since one of them is the vacuum 0 by assumption, the other one must be a single line with quantum dimension 1 + √(5)/2. Furthermore, since the vacuum 0 is self-conjugate, we deduce that (ϕ,2)_2 must also be self-conjugate. We also observe that all anyons of the form (0,i) cannot split, since they do not have large enough quantum dimension. Similarly, the anyons (ϕ,0) and (ϕ,3) cannot split.We can deduce the fate of (ϕ,2)_2 upon condensation by studying its fusion with (0,2):(0,2) × (ϕ,2) = (ϕ,0) + (ϕ,2) ⟶ 0 + …This implies that (0,2), which we already deduced does not split, is conjugate with one component of (ϕ,2). But (0,2) is self-conjugate and it cannot be identified with the vacuum, since it has neither the appropriate quantum dimension, nor the spin to condense. Therefore, we must identify (0,2) ≅ (ϕ,2)_2. Now, note that (ϕ,2)_2 lifts to (ϕ,2) while (0,2) in the child theory lifts to (0,2) in the parent theory, but (0,2) and (ϕ,2) have different topological spins, so it follows that (0,2) ≅ (ϕ,2)_2 confine. A completely analogous argument shows that (ϕ,0) belongs in the restriction of (ϕ,2) and we have the identification (ϕ,0) ≅ (ϕ,2)_2≅ (0,2).Since (ϕ,1) has sufficient quantum dimension to split we must check if it does. Computing its self-fusion:(ϕ,1) × (ϕ,1) = (0,0) + (0,2) + (ϕ,0) + (ϕ,2) ⟶ 0 + 0 + …,we deduce that it splits into two components (ϕ,1) → (ϕ,1)_1 + (ϕ,1)_2.Now that we know the splitting pattern we have to assign quantum dimensions. It is straightforward to check that(ϕ,3) × (ϕ,1) = (0,2) + (ϕ,2) ⟶ 0 + …,implies that (ϕ,3) belongs in the restriction of (ϕ,1) since (ϕ,3) is self-conjugate, and since d_(ϕ,3) = (1 + √(5))/2, by conservation of the quantum dimension it must be that the other component of (ϕ,1) has unit quantum dimension. That is, d_(ϕ,1)_1 = (1 + √(5))/2 and d_(ϕ,1)_2 = 1. From the fusions(0,1) × (ϕ,1) = (ϕ,0) + (ϕ,2) ⟶ 0 + …, (0,3) × (ϕ,1) = (ϕ,2) ⟶ 0 + …,we deduce by similar arguments as above that (0,1),(0,3) ∈ (ϕ,1). From the quantum dimensions d_(0,1) = (1 + √(5))/2 and d_(0,3)=1 we obtain that there is a single way to fit (0,1) and (0,3) in the restriction of (ϕ,1), implying the identifications (ϕ,3) ≅ (ϕ,1)_1≅ (0,1) and (ϕ,1)_2≅ (0,3). All in all, we obtain the condensation pattern:(0,0) → (0,0),(0,1) → (0,1),(0,2) → (0,2),(0,3) → (0,3),(ϕ,0) → (0,2),(ϕ,1) → (0,1) + (0,3),(ϕ,2) → 0 + (0,2),(ϕ,3) → (0,1),from which it is straightforward to check that only (0,0) and (0,3) lift to anyons in the parent that have common topological spin and thus do not confine. They also have the correct fusion rules, spins and quantum dimensions to match those of SU(2)_1, as expected.§.§ Checking the SU(2)_1× SU(2)_2≅( M(5,4) × SU(2)_3) / 𝒜 Duality In this subsection we check the coset inversion formulas (<ref>)-(<ref>) observed on mathematical grounds in Section <ref>. Specifically we consider the example given in Eqn. (<ref>), and check thatSU(2)_k× SU(2)_1≅M(k+3,k+2) × SU(2)_k+1/𝒜,k ≥ 1,in the case k=2. When k=1 the gauging is abelian and the expression is easily checked by the three-step gauging rule <cit.>. The case k=2 is more interesting, since the gauging is by a non-invertible one-form symmetry.When k=2, M(5,4) corresponds to the Tricritical Ising Model, whose spectrum can be found in Table <ref>. 
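Before running through the condensation, the starting point of the analysis below can be confirmed with a few lines of Python (ours; the tricritical Ising weights are hard-coded): in the product M(5,4)× SU(2)_3 the only non-trivial spin-zero anyon is (ε',2), built from the weight-3/5 primary and the SU(2)_3 label 2.

from fractions import Fraction as F

def h_su2(j, k):
    return F(j * (j + 2), 4 * (k + 2))

# tricritical Ising weights for 1, eps, eps', eps'', sigma, sigma'
h_tim = {'1': F(0), 'eps': F(1, 10), "eps'": F(3, 5), "eps''": F(3, 2),
         'sig': F(3, 80), "sig'": F(7, 16)}

bosons = [(a, j) for a in h_tim for j in range(4)
          if (h_tim[a] + h_su2(j, 3)).denominator == 1]
assert bosons == [('1', 0), ("eps'", 2)]   # only the vacuum and (eps',2)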
The non-trivial fusion rules of M(5,4) areε”×ε” = 0, ε”×ε' = ε, ε”×ε = ε', ε”×σ = σ, ε”×σ' = σ', ε' ×ε' = 0 + ε', ε' ×ε = ε + ε”, ε' ×σ = σ + σ', ε' ×σ' = σ, ε×ε = 0 + ε', ε×σ = σ + σ', ε×σ' = σ, σ×σ = 0 + ε + ε' + ε”, σ×σ' = ε + ε', σ' ×σ' = 0 + ε”.Meanwhile, the spectrum of SU(2)_3 and that of the expected result SU(2)_1× SU(2)_2 are shown in Tables <ref> and <ref> respectively. Their fusion rules are easily derived from Eqn. (<ref>). In the following we denote the anyons in the obvious way as in the product M(5,4) × SU(2)_3.We start by noticing that in the product M(5,4) × SU(2)_3 there is a single non-trivial boson (ε',2), and it is non-abelian. Then, condensing (ε',2) is the statement that we have the splitting(ε',2) → 0 + (ε',2)_2,with d_(ε',2)_2 = (1+√(5))/2 by conservation of the quantum dimension, so (ε',2) cannot split further.Now notice that the following pairs of anyons{(0,0), (ε',2)}, {(ε”,3), (ε,1)}, {(σ', 3), (σ,1)}, {(σ',0), (σ,2)}, {(ε”,0), (ε,2)}, {(0,3), (ε',1)}share the same topological spins as the anyons in SU(2)_1× SU(2)_2, respectively in the same order as shown in Table <ref>.Based on this observation, consider the second pair {(ε”,3), (ε,1)} (since the first pair is basically the condensing boson), and study the fusion rules in this pair:(ε,1) × (ε,1) = (0,0) + (0,2) + (ε',0) + (ε',2) ⟶ 0 + 0 + …(ε”,3) × (ε,1) = (ε',2) ⟶ 0.The first fusion rule tells us that (ε,1) splits in two, while the second one implies that (ε”,3) ∈ (ε,1). So, we have the splitting(ε,1) → (ε”,3) + (ε,1)_2.Using the same arguments, the same conclusions follow for each of the pairs mentioned above; namely, the second anyon in a pair splits in two, and one of its components corresponds to the first anyon in the same pair.Checking for confinement given this structure is now easy. Take for example the anyons (ε',0) and (0,2) and consider their fusion with (ε',2):(ε',0) × (ε',2) = (0,2) + (ε',2) ⟶ 0 + …,(0,2) × (ε',2) = (ε',0) + (ε',2) ⟶ 0 + ….This means that (ε',0), (0,2) ∈(ε',2), and it is easy to see that there is only one way to match quantum dimensions with the restriction (<ref>). We must have the identification(0,2) ≅ (ε',0) ≅ (ε',2)_2.Clearly, all these excitations lift to anyons of different topological spin in the parent theory, so we deduce their confinement in the child theory.A similar argument runs for the remaining excitations. That is, for any anyon not listed in (<ref>) we can find an anyon that appears second in a pair of (<ref>) such that from their fusion we can deduce that the former anyon belongs in the restriction of the latter anyon. This implies the confinement of all excitations, except for those that appear in the first entry of the pairs in (<ref>), which exactly matches the spectrum of SU(2)_2× SU(2)_1 shown in Table <ref>, as desired. §.§ Checking the U(1)^Orb_6≅ (SU(4)_2× Spin(4)_-4)/𝒜 Duality This is an important example of a Maverick coset which, as we will verify, upon non-invertible anyon condensation gives a theory on the orbifold branch of c=1 theories; namely, the orbifold of the U(1)_6 free boson, which we denote U(1)^Orb_6. This allows us to write the following Maverick duality of Chern-Simons-like theories:U(1)^Orb_6≅SU(4)_2× Spin(4)_-4/ℬ_3,for some appropriate gauging by an algebra object ℬ_3 that we write below. This example was studied in Section <ref> above, as the first example in the second infinite family (<ref>) of Maverick cosets. To check this example, we follow manipulations similar to those in the previous examples.
The spectrum of SU(4)_2 is shown in Table <ref>, and the subset of the fusion rules that we will use below can be described as follows. The line 10 generates a 𝐙_4 symmetry such that we have the orbits 10^2 = 20', 10^3 = 1̅0̅, 10×15 = 6, 10×6 = 15, 10×4 = 20, 10×20 = 2̅0̅, 10×2̅0̅ = 4̅, 10×4̅ = 4.The line 15 will be important and is such that15×15 = 1 + 15 + 20'.Finally, we will also need the fusion 4×4̅ = 1 + 15.Other fusion rules may be found using the Kac software program <cit.>. The spectrum of SU(2)_4 is shown in Table <ref>, whose fusion rules can be read off from Eqn. (<ref>). In this subsection we write (𝐑,i,j) where 𝐑 labels the line in representation 𝐑 of SU(4), and i,j = 0, …, 4 label the corresponding lines in the two SU(2)_-4 factors. The 10, 20', and 1̅0̅ act as simple currents, so if we understand the fate of (1,i,j), (15,i,j), and (4,i,j) upon gauging/condensation we can deduce the rest by acting with such simple currents. For future reference, the spectrum of U(1)^Orb_6 is presented in Table <ref>.In the product SU(4)_2× SU(2)_-4× SU(2)_-4 we have a set of eight abelian bosons(1,0,0), (1,0,4),(1,4,0), (1,4,4), (20',0,0), (20',0,4),(20',4,0), (20',4,4),and a set of five non-abelian bosons(10,1,3),(10,3,1),(1̅0̅,1,3),(1̅0̅,3,1),(15,2,2).We first show that (10,1,3) cannot condense. To show this, let us assume it does and find an inconsistency. Consider the fusion(10,1,3) × (1̅0̅,1,3) = (1,0,0) + (1,0,2) + (1,2,0) + (1,2,2),which shows that (10,1,3) and (1̅0̅,1,3) are conjugates in the parent theory, so if one condenses and splits so does the other. If (10,1,3) condenses it necessarily splits since d_(10,1,3) = 3, but this is inconsistent with (<ref>) since there are no non-trivial bosons on the right-hand side. It follows that (10,1,3) cannot condense. By a similar argument (1̅0̅,1,3), (10,3,1), and (1̅0̅,3,1) do not condense.The only non-abelian boson that we can potentially condense is therefore (15,2,2). But notice that we cannot condense all abelian bosons on top of this non-abelian one, since (15,2,2) × (15,2,2) = (all abelian bosons) + (15,2,2) + …,and the quantum dimension of (15,2,2) is d_(15,2,2) = 8, meaning that it can split into at most eight labels. However, we have eight abelian bosons, so if all abelian bosons condense on top of (15,2,2) we find nine vacua on the right-hand side of the previous fusion, leading to an inconsistency. For the sake of presentation, let us first take as condensing bosons(1,4,4) → 0,(20',0,4) → 0,(20',4,0) → 0,(15,2,2) → 0 + …,or mathematically, the algebra object ℬ_3 given by the non-simple objectℬ_3 = (1,0,0) + (1,4,4) + (20',0,4) + (20',4,0) + (15,2,2),and check that gauging gives rise to the spectrum of the U(1)^Orb_6. After that we will see that other options are not consistent. Notice that since (1,4,4) → 0 condenses, the lines of the form (1,a,b) arrange according to the gauging of Spin(4)_4 down to SO(4)_4. We will use this fact repeatedly later, so for the reader's convenience, we have summarized the spectrum of SO(4)_4 in terms of the lines of the parent in Table <ref>. Entries that do not appear there confine, in accordance with the gauging of Spin(4)_4 to SO(4)_4. The theory SO(4)_4 can be understood easily from the three-step gauging rule <cit.>, so we do not reproduce such details here.
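As elsewhere, the starting point can be confirmed by a spin count. The following Python sketch (ours; the SU(4)_2 weights are hard-coded from the affine formula) checks that the eight abelian and five non-abelian candidates listed above indeed have integer spin in SU(4)_2× SU(2)_-4× SU(2)_-4.

from fractions import Fraction as F

def h_su2(j, k):
    return F(j * (j + 2), 4 * (k + 2))

# conformal weights of the SU(4)_2 primaries
h_su4_2 = {'1': F(0), '4': F(5, 16), '4b': F(5, 16), '6': F(5, 12),
           '10': F(3, 4), '10b': F(3, 4), '15': F(2, 3),
           '20': F(13, 16), '20b': F(13, 16), "20'": F(1)}

def spin(R, i, j):
    # topological spin (mod 1) of (R,i,j) in SU(4)_2 x SU(2)_{-4} x SU(2)_{-4}
    return (h_su4_2[R] - h_su2(i, 4) - h_su2(j, 4)) % 1

abelian = [('1', 0, 0), ('1', 0, 4), ('1', 4, 0), ('1', 4, 4),
           ("20'", 0, 0), ("20'", 0, 4), ("20'", 4, 0), ("20'", 4, 4)]
nonabelian = [('10', 1, 3), ('10', 3, 1), ('10b', 1, 3), ('10b', 3, 1), ('15', 2, 2)]
assert all(spin(*a) == 0 for a in abelian + nonabelian)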
Notice that although the lists of anyons in Table <ref> do not confine in SO(4)_4, their associated lines of the form (1,a,b) in the current example may still confine because there could be additional identifications that imply their confinement.To study how (15,2,2) restricts we study the fusion with anyons of the form (1,a,b) with a and b even:(15,2,2) × (1,0,2)= (15,2,0) + (15,2,2) + (15,2,4) (15,2,2) × (1,2,0)= (15,0,2) + (15,2,2) + (15,4,2) (15,2,2) × (1,0,4)= (15,2,2) (15,2,2) × (1,2,2)= (15,0,0) + (15,0,2) + (15,0,4) +(15,2,0) + (15,2,2) + (15,2,4) +(15,4,0) + (15,4,2) + (15,4,4).From the condensation of Spin(4)_4 to obtain SO(4)_4 we know that (1,2,0), (1,0,2) and (1,2,2)_1 are not identified with each other. For instance, if we assume (1,0,2) and (1,2,0) identify, we obtain that(1,0,2) × (1,0,2) = (1,0,0) + (1,0,2) + (1,0,4),and(1,0,2) × (1,2,0) = (1,2,2)would imply that (1,2,2) condenses, which is not possible since (1,2,2) does not have the correct topological spin to do so.The set of fusions (<ref>)-(<ref>) upon restricting (15,2,2) on the right-hand side imply that (1,0,2), (1,2,0), (1,0,4), and one component of (1,2,2) must belong to the restriction of (15,2,2).[It is straightforward to check by computing self-fusions that (1,0,2), (1,2,0), (1,0,4) do not split.] Using also the previous argument that (1,0,2), (1,2,0), and(1,2,2) cannot identify with each other, we obtain that the restriction of (15,2,2) takes the form(15,2,2) ⟶ 0 + (15,2,2)_2 + (15,2,2)_3 + (15,2,2)_4 + (15,2,2)_4,with(15,2,2)_2≅ (1,0,4) ≅ (1,4,0),d_(1,0,4) = 1, (15,2,2)_3≅ (1,0,2) ≅ (1,4,2),d_(1,0,2) = 2, (15,2,2)_4≅ (1,2,0) ≅ (1,2,4),d_(1,2,0) = 2, (15,2,2)_5≅ (1,2,2)_1,d_(1,2,2)_1 = 2,with no further splittings since we have saturated the conservation of quantum dimension. In the restriction above we had to make a choice between (1,2,2)_1 or (1,2,2)_2 to appear in the restriction of (15,2,2). Since in SO(4)_4 the lines (2,2)_1 and (2,2)_2 are symmetric between each other, the choice is actually immaterial, and by definition we have chosen (1,2,2)_1 to be the one appearing in the restriction of (15,2,2).In passing, let us note that now (1,0,2), (1,2,0), and (1,2,2)_1 have an additional lift to (15,2,2), which implies that while (1,0,2), (1,2,0), and (1,2,2)_1 were unconfined in the SO(4)_4 theory, in the current example they confine. Next let us show that (10,3,1) confines. This is easily seen by acting with our condensing abelian bosons:(1,4,4) × (10,3,1)= (10,1,3) (20',0,4) × (10,3,1)= (10,3,3) (20',4,0) × (10,3,1)= (10,1,1).Using that the abelian bosons condense, we deduce the identifications (10,3,1) ≅ (10,1,3) ≅ (10,1,1) ≅ (10,3,3). From this is easy to see that (10,3,1) and the rest of the labels on the list confine. A completely analogous argumentallows us to deduce the identifications(10,1,1) ≅ (10,3,3) ≅ (10,1,3) ≅ (10,3,1), (1,1,1) ≅ (1,3,3) ≅ (20',1,3) ≅ (20',3,1), (1,1,3) ≅ (1,3,1) ≅ (20',1,1) ≅ (20',3,3),from which in turn we can deduce the corresponding confinement of all the labels in the lists above. The remaining anyons of the form (1,i,j) that we have not treated here all confine, as they already confined as (i,j) in SO(4)_4.In summary, the anyons of the form (1,i,j) that do not confine are (1,0,0), (1,0,4), and (1,2,2)_2. It is straightforward to see that they match the anyons labelled by 0, 2 and 5 respectively in Table <ref> showing the spectrum of the U(1)^Orb_6 theory, both in their conformal weight as in their quantum dimensions. 
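The dimension and spin bookkeeping behind the last few statements can also be restated in a few lines. The following sketch is illustrative only; it assumes the same SU(2)_4 and SU(4)_2 weights as above, and simply re-checks the saturation of d_(15,2,2) = 8, the spins of the unconfined (1,i,j) lines, and the confinement mechanism for (1,0,2), (1,2,0) and (1,2,2)_1.

```python
from fractions import Fraction as F

h2 = lambda i: F(i * (i + 2), 24)                  # SU(2)_4 weights
spin = lambda h4, i, j: (h4 - h2(i) - h2(j)) % 1   # SU(4)_2 x SU(2)_-4 x SU(2)_-4

# Restriction (15,2,2) -> 0 + (1,0,4) + (1,0,2) + (1,2,0) + (1,2,2)_1 saturates d = 8.
components = {"0": 1, "(1,0,4)": 1, "(1,0,2)": 2, "(1,2,0)": 2, "(1,2,2)_1": 2}
assert sum(components.values()) == 8

# Unconfined lines in the (1,i,j) sector and their spins mod 1.
unconfined = {(0, 0): F(0), (0, 4): F(0), (2, 2): F(1, 3)}   # (1,2,2)_2 lifts to (1,2,2)
for (i, j), expected in unconfined.items():
    assert spin(F(0), i, j) == expected

# (1,0,2), (1,2,0) and one component of (1,2,2) are identified with components of
# (15,2,2); since (15,2,2) has spin 0 while these lines do not, they confine.
assert spin(F(2, 3), 2, 2) == 0
for i, j in [(0, 2), (2, 0), (2, 2)]:
    assert spin(F(0), i, j) != 0
print("restriction bookkeeping verified")
```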
As another check, the fusion rules for these lines in the orbifold theory are indeed the same as those of the corresponding lines in the SO(4)_4 theory.

We now move to work out lines of the form (15,a,b). Notice that the simple currents relate (15,a,b) and (6,a,b), so studying the former fixes the latter. Start by analyzing (15,0,0): (15,0,0) × (15,0,0) = (1,0,0) + (15,0,0) + (20',0,0). Thus (15,0,0) does not split since (20',0,0) does not condense. From (15,0,0) × (15,2,2) = (15,2,2) + … we deduce that upon restricting (15,2,2) on the right-hand side, since (15,0,0) is self-conjugate, it belongs to the restriction of (15,2,2). In particular, it must be identified with one of the quantum dimension two components of (15,2,2). As such, it follows that (15,0,0) confines.

To figure out what component we have to identify (15,0,0) with precisely, consider (15,2,2) = (15,0,0) × (1,2,2) ⟶ (15,0,0) × (1,2,2)_1 + (15,0,0) × (1,2,2)_2 ⟶ 0 + …, where in the upper arrow we have restricted the right-hand side, while in the lower arrow we have restricted the left-hand side. Since (1,2,2)_1 and (1,2,2)_2 are self-conjugate, and (1,2,2)_1∈ (15,2,2) but (1,2,2)_2∉ (15,2,2) it must be that (15,0,0) ≅ (1,2,2)_1. In passing, acting with the condensing abelian bosons as in (<ref>)-(<ref>) and (<ref>)-(<ref>), we obtain the additional identifications (15,0,0) ≅ (1,2,2)_1≅ (15,4,4) ≅ (15,0,4) ≅ (15,4,0).

With the result (<ref>) it is straightforward to extract the content of the lines of the form (15,a,b), since we can do (15,a,b) = (15,0,0) × (1,a,b) ≅ (1,2,2)_1× (1,a,b), and then we can proceed to use the fusion rules of SO(4)_4 to derive the splitting of (15,a,b) in terms of (1,a,b)'s which we already know. Of course, this means that (15,a,b) will not give us any new lines in the spectrum of the child theory, since all of them can be identified in terms of lines that we have already considered. To check that none of the unconfined lines already obtained confine upon lifting them to (15,a,b)'s it is useful to know the explicit results (15,0,2) ⟶ (1,2,0) + (1,2,2)_2, (15,2,0) ⟶ (1,0,2) + (1,2,2)_2, (15,4,2) ⟶ (1,2,0) + (1,2,2)_2, (15,2,4) ⟶ (1,0,2) + (1,2,2)_2, (1,2,2) ⟶ (1,2,2)_1 + (1,2,2)_2, (20',2,2) ⟶ (1,2,2)_1 + (1,2,2)_2. Thus, the unconfined lines on the right-hand side of (15,0,2), (15,2,0), (15,2,4), (15,4,2), (1,2,2), and (20',2,2) are all the same; namely (1,2,2)_2, which, as is easy to see, does not confine when considering the new splittings above, since all left-hand sides share the same topological spin. Similarly, from the fusion rules of SU(2)_4× SU(2)_4 it is straightforward to check that (1,0,4), (1,4,0) are never contained in (1,2,2) × (1,a,b), for any a,b other than a=b=2, and thus also not contained in (15,a,b) = (1,2,2)_1× (1,a,b), so there are no new liftings that could confine (1,0,4).

Before working out the lines of the form (4,a,b), let us apply the simple current (10,0,0) to the unconfined spectrum found thus far, consisting of (1,0,0), (1,0,4) and (1,2,2)_2. We find (10,0,0) × (1,0,0) = (10,0,0), (10,0,0) × (1,0,4) = (10,0,4), (10,0,0) × (1,2,2)_2 = (10,2,2)_2. It is straightforward to see that further actions of the simple current just permute these lines among themselves and the unconfined lines already found. Of course, we can apply (10,0,0) to all the rest of the lines we have found before, but this would either give confined sectors, or unconfined ones that are related to the lines on the right-hand side above by identifications already obtained at the level of (1,a,b)'s and (15,a,b)'s.
Thus the action of the simple currents provides us with three new unconfined excitations in the child theory: (10,0,0), (10,0,4), and (10,2,2)_2. We can easily check that they match, both in topological spin and quantum dimension, the lines labelled as 1, 3 and 4 in Table <ref> outlining the spectrum of U(1)^Orb_6.

We move now to study the restriction of lines of the form (4,a,b). First, notice that using our condensing abelian bosons as in (<ref>)-(<ref>), (<ref>)-(<ref>), or as in Eqn. (<ref>) we can derive the identifications (4,1,1) ≅ (4,3,3) ≅ (2̅0̅,1,3) ≅ (2̅0̅,3,1), (4,1,3) ≅ (4,3,1) ≅ (2̅0̅,1,1) ≅ (2̅0̅,3,3), (4̅,1,1) ≅ (4̅,3,3) ≅ (20,1,3) ≅ (20,3,1), (4̅,1,3) ≅ (4̅,3,1) ≅ (20,1,1) ≅ (20,3,3). The identifications imply that we can concentrate our attention to the leftmost anyons in each line. It is easy to check that these identifications do not lead to the confinement of any of the anyons involved.

Consider now the fusion rule (4,1,3) × (4̅,1,3) = (1,0,0) + (1,0,2) + (1,2,0) + (1,2,2) + (15,0,0) + (15,0,2) + (15,2,0) + (15,2,2), which upon restriction implies the splittings (4,1,3) ⟶ (4,1,3)_1 + (4,1,3)_2, (4̅,1,3) ⟶ (4,1,3)_1 + (4,1,3)_2, and a similar fusion implies the splittings (4,1,1) → (4,1,1)_1 + (4,1,1)_2, (4̅,1,1) → (4,1,1)_1 + (4,1,1)_2. We can now consider the crossed fusion rule (4,1,3) × (4̅,1,1) = (15,2,2) + … ⟶ 0 + …, which implies that one component of (4,1,3) identifies with one component of (4̅,1,1). Let us define the subindex 2 in the previous splitting to be such that this holds. We find then (4,1,1) ⟶ (4,1,1)_1 + (4,1,1)_2, (4,1,3) ⟶ (4,1,3)_1 + (4,1,1)_2, (4̅,1,1) ⟶ (4,1,1)_1 + (4,1,1)_2, (4̅,1,3) ⟶ (4,1,3)_1 + (4,1,1)_2, from which it is easy to see that (4,1,1)_2 confines. Because of this, the lines with subindex 1 in the previous restrictions share the same quantum dimension.

Next we must determine the assignment of quantum dimensions. The easiest way to do this at this point is to use the formula (<ref>) relating the central charge with topological spins and quantum dimensions: e^iπ c/4 = 1/√(∑_i d_i^2)∑_i d_i^2 θ_i, and plugging in c=1 and the values for the unconfined excitations, whose topological spins and quantum dimensions we know (except of course for the quantum dimension that we want to determine). The equation can then be solved for the remaining unknown quantum dimension, which we assign to the unconfined excitations above. The result is that the assignments must be d_(4,1,1)_1 = √(3) and d_(4,1,1)_2 = 2√(3).

A slicker argument that does not involve using an additional formula nor using the central charge or other quantities as input, but that uses the same manipulations with fusions and quantum dimensions that we have used until now is given as follows. Split the quantum dimensions as: 3√(3) = 3√(3)/a + (a-1) 3√(3)/a, a>1, and recall that quantum dimensions satisfy the fusion algebra as in Eqn. (<ref>), meaning that the assignments of quantum dimensions must be consistent with the fusion (<ref>).[In (<ref>) we have written the fusion of (4,1,3) and its conjugate, but an analogous fusion holds for (4,1,1) and its conjugate.] However, we have already determined the quantum dimensions on the right-hand side of (<ref>), and they are given by 1s and 2s.
It is actually sufficient to know that the right-hand side of (<ref>) gives an integer quantum dimension for any fusion product on the left-hand side.Thus, we must have that 27/a^2∈ℕ_>0, and (a-1)27/a^2∈ℕ_>0, where we may think of these two conditions as arising from the (4,1,1)_1× (4,1,1)_1 and (4,1,1)_1× (4,1,1)_2 fusion products, respectively. The first condition gives that a=3√(3)/√(n) for some positive integer n, and on the second condition this gives (3√(3n)-n) ∈ℕ_>0. It is now direct to see that the only possible solutions are n = 3, 12, which lead however to the same splitting of the quantum dimension:3√(3) = √(3) + 2√(3).To decide which quantum dimension to assign to (4,1,1)_1, notice that the fusion product of two genuine line operators must be genuine line operators. This means that in (<ref>), the product of the quantum dimensions in the fusion (4,1,1)_1× (4,1,1)_1 is bounded by the sum of the quantum dimensions of the unconfined excitations on the right-hand side. It is straightforward to check that the latter sums to nine, so the only consistent assignment is the one we found above: d_(4,1,1)_1 = √(3) and d_(4,1,1)_2 = 2√(3).In summary, the unconfined excitations that we have found in the (4,i,j) sector are (4,1,1)_1, (4,1,3)_1, and their conjugates. All of them have the same quantum dimension d_(4,1,1)_1 = √(3). We can check now that this matches with the lines 6,7,8,9 in the U(1)^Orb_6 spectrum shown in Table <ref>.Using the condensing abelian bosons as above is easy to show that the remaining anyons of the form (4,i,j) confine. For example, (4,0,0) × (20',0,4) = (20,0,4) and (20',0,4) → 0, but (4,0,0) has topological spin e^2 π i 5/16 while (20,0,4) has topological spin e^2 π i 13/16, implying that (4,0,0) confines.§.§.§ Inconsistency of Condensing Different Abelian Bosons We conclude this subsection indicating how making a putative different choice of condensing bosons (including the non-abelian one) other than (<ref>) leads to an inconsistency.We have already shown that condensing all abelian bosons leads to an inconsistency, so we need to take a subset of them closed under fusion. First suppose we were to take (1,4,4), (20',0,0), (20',4,4) to condense on top of the non-abelian boson (15,2,2). Since (20',0,0) condenses:(15,0,0) × (15,0,0) = (1,0,0) + (20',0,0) + (15,0,0) ⟶ 0 + 0 + …,so (15,0,0) must split into two components each with unit quantum dimension. However, because (1,4,4) → 0, (1,2,2) → (1,2,2)_1 + (1,2,2)_2 each with quantum dimension 2, since this follows from the gauging of Spin(4)_4 to SO(4)_4. This is inconsistent since the left-hand side of(15,2,2) = (15,0,0) × (1,2,2).condenses, but the right hand side decomposes into fusions of components of quantum dimensions one and two. Since conjugates need to have the same quantum dimension, there is no way to accommodate a pair of conjugates that fuse to a vacuum.Now suppose we were to take (1,4,4), (1,0,4), (1,4,0) as condensing abelian bosons. From (<ref>) we see that (15,0,0) does not split in this case. However now(1,2,2) × (1,2,2) = (1,0,0) + (1,0,4) + (1,4,0) + (1,4,4) + …⟶ 0+0+0+0+…,implies (1,2,2) splits into four unit quantum dimension components. The same argument as before using (<ref>) implies an inconsistency.The case when we try to condense (1,0,4), (20',4,0) and (20',4,4) is a bit different. First, notice that if we take (1,0,4) to condense then the right SU(2)_4 factor condenses to SU(3)_1, and (1,i,2) → (1,i,2_1) + (1,i,2_2), for any i=0 , … , 4. 
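The statement just used, that condensing the spin-one boson of the right SU(2)_4 factor produces SU(3)_1, can itself be verified with the same spin and dimension bookkeeping employed throughout; the following minimal sketch is illustrative only, with the standard weight formulas and our own variable names.

```python
from fractions import Fraction as F
from math import isclose, sin, pi

# SU(2)_4 data: weights h_a = a(a+2)/24 and quantum dimensions.
h = {a: F(a * (a + 2), 24) for a in range(5)}
d = {a: sin(pi * (a + 1) / 6) / sin(pi / 6) for a in range(5)}

# a = 4 is an abelian boson (h = 1), so it can condense; fusion with it
# identifies a ~ 4 - a.
assert h[4] % 1 == 0 and isclose(d[4], 1.0)

# a = 1 ~ a = 3 lift to different spins (1/8 vs 5/8): these identified lines confine.
assert h[1] % 1 != h[3] % 1
# a = 2 is a fixed point with d = 2: it splits into two unit-dimension lines 2_1, 2_2.
assert isclose(d[2], 2.0)

# Surviving spectrum {0, 2_1, 2_2} has spins {0, 1/3, 1/3} and unit dimensions,
# matching SU(3)_1 (h_j = j(3-j)/6 for j = 0, 1, 2).
su3_1 = sorted(F(j * (3 - j), 6) for j in range(3))
assert sorted([F(0), h[2] % 1, h[2] % 1]) == su3_1
print("SU(2)_4 / Z_2 = SU(3)_1 bookkeeping verified")
```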
We now observe that the fusion products of (15,2,2) with (1,2,0), (1,4,0), (1,0,2), (1,2,2) and (1,4,2) all will have a (15,2,2) on the right-hand side and thus will have a vacuum after restriction. In particular it follows that (1,4,0), (1,4,2_1), (1,2,0) ∈ (15,2,2). [Since the fusion rules in SU(3)_1 are symmetric between 2_1 and 2_2 we could have chosen (1,4,2_2) here instead. It is easy to see that the same conclusions follow.] It is easy to check that (1,2,0) splits, and notice that (1,4,0) cannot possibly be identified with (1,4,2_1), as if this was the case we could fuse both sides with (1,4,0) obtaining an identification of (1,4,2_1) with (1,0,0) which is not possible since (1,4,2_1) does not have the topological spin to condense. It follows that the restriction of (15,2,2) must be of the form(15,2,2) ⟶ 0 + (1,4,0) + (1,4,2_1) + (1, 2, 0) + b,for some b with quantum dimension d_b = 3. However we now take that (1,2,2_1) must also be in the restriction (15,2,2), and the only candidate that matches the quantum dimension above is (1, 2, 0), which is however inconsistent since identifying (1, 2, 0) with (1,2,2_1) implies upon fusing with (1, 2, 0) that (1,0,0) + (1,2,0) + (1,4,0) is identified with (1,0,2_1) + (1,2,2_1) + (1,4,2_1), but none of these last anyons can condense, thus finding an inconsistency.Finally, had we taken a single abelian boson to condense on top of the non-abelian one, (15,2,2) would have split into just three components. Either (1,4,0) or (1,0,4) would not condense and it would belong in the restriction of (15,2,2). So, the third anyon in the restriction would have quantum dimension 6. It is then easily seen that either (1,2,0) or (1,0,2) do not split and belong to (15,2,2), but there is no component in the decomposition that matches the quantum dimension, leading to an inconsistency. § REVISITING CONFORMAL EMBEDDINGS AND LEVEL-RANK DUALITIES In this section we revisit the conformal embeddings of <cit.> and their relation to gauging of non-invertible symmetries. We have already touched upon this matter in Section <ref>, and here we will generalize that discussion to other conformal embeddings. In particular, in the case of the exceptional conformal embeddings of <cit.>, we will see that running an analogous argument to that of the classical embeddings of <cit.> will already lead us to the consideration of non-abelian anyon condensation to make the dualities work. §.§ Revisiting Classical Conformal Embeddings Let us start motivating our discussion with the simple case of that of the unitary series of conformal embeddingsSU(N)_k× SU(k)_N↪ SU(Nk)_1,N,k ∈ℕ_≥ 2,whose standard 3D TQFT duality interpretation, as reviewed at the end of Section <ref>, is <cit.>:SU(k)_N≅SU(Nk)_1× SU(N)_-k/ℤ_N. To relate conformal embeddings to the gauging of non-invertible symmetries, we point out that the following alternative results also applies:SU(Nk)_1≅SU(N)_k× SU(k)_N/𝒜_N,kwhere 𝒜_N,k is some Frobenius algebra object (for brevity, on the rest of this section when we write a letter in calligraphic font we always mean some appropriate Frobenius algebra object that we can use to gauge, and from now on we will not remark on what such objects stand for). The previous statement has been discussed in many mathematical references <cit.>, but in our context it is most quickly understood from the coset inversion formula (<ref>) with ℳ = SU(Nk)_1, 𝒞 = SU(k)_N and ℳ' = SU(N)_k. In this sense, (<ref>) is nothing but the standard form of cosets, Eqn. 
(<ref>) of the coset inversion formulas, while (<ref>) is “the parent statement” Eqn. (<ref>) translated to a form more akin to physics, and for the particular example at hand. Clearly, the unitary groups play no role in the previous argument, and as such the same would hold for any other conformal embeddings. We explore this in the next sections below.As we will see shortly, in the form (<ref>) the algebra object that we need to gauge is generically that of a non-invertible symmetry, although in particular cases it may simplify to some abelian gauging. For instance, it is easy to see that if we apply this form of the conformal embedding of unitary groups for N=k=2 we obtainSU(4)_1≅SU(2)_2× SU(2)_2/ℤ_2,where the ℤ_2 algebra is given by 𝒜 = (0,0) + (2,2), where (i,j) stands for the spin i/2 and j/2 representations of each SU(2)_2 factor. Eqn. (<ref>) can be verified by a simple use of the three-step gauging procedure <cit.>. The simplest instance where we need to consider gauging by a non-invertible one-form symmetry to make (<ref>) valid occurs at N=3 and k=2, which we verify by a direct non-abelian anyon condensation computation on the following example. Further conformal embeddings will be studied in Section <ref> and Section <ref> below once we finish illustrating this example.§.§.§ Checking the SU(6)_1≅ (SU(3)_2× SU(2)_3)/𝒜 DualityIn this subsection we check by direct computation that SU(6)_1 can be found as a non-abelian anyon condensation of SU(3)_2× SU(2)_3. This is the simplest example in the infinite tower of conformal embeddingsSU(N)_k× SU(k)_N↪ SU(Nk)_1,N,k ∈ℕ_≥ 2,where we can think of SU(Nk)_1 as a non-abelian anyon condensation of SU(N)_k× SU(k)_N. Such conformal embeddings are particularly interesting because of their intimate relation to level-rank dualities <cit.>. We follow the same procedure as in Section <ref>, so the reader may find useful to read that section first before going through this example. The spectrum of SU(2)_3 is given in Table <ref>, while the spectrum of SU(3)_2 is given in Table <ref>. In the following, we write (𝐑,i) for a line in SU(3)_2× SU(2)_3, where 𝐑 labels a line in representation 𝐑 of SU(3) and i=0,1,2,3 labels the corresponding line in SU(2)_3. We will need the fusion rules for SU(2)_k which can be found above in Eqn. (<ref>), and the fusion rules for SU(3)_2:3×3 = 3̅ + 6, 3×3̅ = 1 + 8, 3×8 = 3 + 6̅, 3×6 = 3̅, 3×6̅ = 83̅×3̅ = 3 + 6̅, 3̅×8 = 3̅ + 6, 3̅×6 = 3 , 3̅×6̅ = 8,8×8 = 1 + 8, 8×6 = 3̅, 8×6̅ = 3,6×6 = 6̅ , 6×6̅ = 1, 6̅×6̅ = 6.Notice that in the spectrum of SU(3)_2 the lines 6 and 6̅ act as ℤ_3 simple currents, meaning that if we know the fate of (1,i) and (8,i) upon gauging/condensation, we can deduce that of (6,i), (3,i), and their conjugates by acting with (6,0) and (6̅,0). Thus, we concentrate on (1,i) and (8,i). To begin, observe that there is only one non-trivial boson in the product theory SU(3)_2× SU(2)_3; namely, the boson (8,2), with quantum dimension d_(8,2) = 3+√(5)/2. Let us assume this boson condenses, which has to be the case since it would be the only way to obtain SU(6)_1 from a condensation of SU(3)_2× SU(2)_3.Assuming that (8,2) condenses is the statement that (8,2) restricts as (8,2) → 0 + (8,2)_2,where by conservation of the quantum dimension, d_(8,2)_2 = 1+√(5)/2. This is too small to allow a further splitting, so (8,2) must restrict to just two components.Let us now notice that the line (1,2) cannot split as it does not have large enough quantum dimension to do so. 
With this observation, consider(8,2) × (1,2) = (8,0) + (8,2) ⟶ 0 + …,which shows that (1,2) ∈ (8,2) since (1,2) is self-conjugate. Matching quantum dimensions the only possibility is to have the identification (1,2) ≅ (8,2)_2, which in turn implies the confinement of these components since they lift to anyons in the parent theory with different topological spins.To study the result of (1,1) and (1,3) after condensation, notice that these lines cannot split since they do not have large enough quantum dimension, and consider the fusion rules(8,2) × (1,1) = (8,1) + (8,3),and(8,2) × (1,3)= (8,1) = (1,3) + (1,2) × (1,3) =(1,3) + (1,1),where in the last expression the first equality comes from doing the standard fusion on the parent theory and the second line comes from first restricting (8,2) → 0 + (1,2) on the left-hand side, and then performing the fusion of these components with (1,3). Comparing both expressions, we obtain the restriction of (8,1) into an abelian anyon (1,3) and a non-abelian anyon (1,1):(8,1) ⟶ (1,1) + (1,3).It is easy to check now that the topological spin of (1,1) is not equal to that of (8,1) in the parent theory, and thus it follows that (1,1) confines. Meanwhile, the topological spins of (8,1) and (1,3) match.Using this information in the first fusion rule (<ref>) we obtain(8,2) × (1,1)= (1,1) + (1,3) + (8,3) = (1,1) + (1,1) + (1,3),where in the first equality we have used that (8,1) → (1,1) + (1,3) on the right side of (<ref>), and in the second equality we have instead first restricted (8,2) → 0 + (1,2). We must therefore identify (1,1) ≅ (8,3).It remains to study (8,0), which does not split since it does not have large enough quantum dimension. Now, inspect the fusion(8,2) × (8,0)= (1,2) + (8,2) = (1,2) + 0 + (1,2) = (8,0) + (8,2) = (8,0) + 0 + (1,2),where in the first line we have performed the fusion on the parent theory and then used the restriction (8,2) → 0 + (1,2), while in the second line we have first restricted on the left-hand side and later computed the corresponding fusions, after which we used the (8,2) restriction again. It follows that we must identify (8,0) ≅ (1,2). Clearly, the only unconfined lines in the (1,i) and (8,i) sectors are (1,0) and (1,3) (up to identifications). Following our comments above, we can now consider the action of the simple currents of SU(3)_2 in the form (6,0) and (6̅,0) over the confined and unconfined excitations already found, which generates the rest of the sectors not considered up to this point. The spectrum of unconfined excitations is seen to match that of the expected SU(6)_1 theory, whose spectrum is summarized in Table <ref>. Furthermore, since in this example the lines in the child theory descend trivially from those of the parent, we can easily check the ℤ_6 fusion rules expected of SU(6)_1 by computing them directly in the parent:(6,3)^2 = (6̅,0),(6,3)^3 = (1,3),(6,3)^6 = (1,0).The condensation computation thus points that indeed[To make this precise we would have to check the modular S-matrix, F-symbols and R-symbols, but the previous consistency conditions do not provide us with these, and we need external sources to point us to what the result should be.] SU(6)_1≅SU(3)_2× SU(2)_3/𝒜_3,2with the algebra element 𝒜_3,2 = (1,0) + (8,2).§.§.§ Continuing Classical Embeddings In the same way that we obtained the main duality for unitary groups via non-abelian anyon condensation; namely, Eqn. 
(<ref>) in the last section, we can find additional dualities based on other groups as long as they participate in some conformal embedding. The list of conformal embeddings can be found in <cit.>, which we use in the following. Let us start this discussion by recalling the conformal embeddings based on Spin groups studied in <cit.> which are the next natural examples to consider:SO(N)_K× SO(K)_N↪ Spin(NK)_1,Neven,Keven,Spin(N)_K× SO(K)_N↪ Spin(NK)_1,Neven,Kodd,Spin(N)_K× Spin(K)_N↪ Spin(NK)_1,Nodd,Kodd.These conformal embeddings yield the following standard 3D TQFT duality interpretation, obtained by the common center argument in accordance with <cit.>:Spin(N)_K≅Spin(NK)_1× Spin(K)_-N/ℤ_2,Nodd,KoddSpin(N)_K≅ Spin(NK)_1× SO(K)_-N,Neven,KoddSO(N)_K≅Spin(NK)_1× Spin(K)_-N/B,Nodd,KevenSO(N)_K≅Spin(NK)_1× SO(K)_-N/ℤ_2,Neven,Keven,where B above is such that B = ℤ_2×ℤ_2 for k=04 and B = ℤ_4 for k=24. These dualities (derived in <cit.>) may be interpreted as solving for the cosets as in Eqn. (<ref>). If we invert these expressions into the initial ones (<ref>), which exhibit the embeddings of the two factors into the bigger algebra as we did in the unitary case, we add the dualitiesSpin(NK)_1 = SO(N)_k× SO(k)_N/𝒜_N,k,Neven,Keven, Spin(NK)_1 = Spin(N)_k× SO(k)_N/𝒜_N,k,Neven,Kodd, Spin(NK)_1 = Spin(N)_k× Spin(k)_N/𝒜_N,k,Nodd,Kodd,which generically involve gauging by a non-invertible symmetry. We verify this in appendix <ref> in the N=k=3 case, which is the simplest example of the above dualities that involve non-invertible anyon condensation. A similar story holds for the dualities and embeddings associated with symplectic groups <cit.>:USp(2N)_k× USp(2k)_N↪ Spin(4Nk)_1,which have the 3D TQFT duality interpretation USp(2N)_k≅Spin(4Nk)_1× USp(2k)_-N/ℤ_2⟷ Spin(4NK)_1≅USp(2N)_k× USp(2k)_N/𝒜_N,k,where the left duality can be obtained by the common-center argument, as in <cit.>. The right duality is obtained following the arguments outlined above both in the unitary and Spin cases, and it can be readily verified in the case N=2, k=1 where both sides are related by abelian anyon condensation. The gauging on the right duality is generically by a non-invertible symmetry, however, as the next simplest case N=k=2 already shows. In the product USp(4)_2× USp(4)_2 there are both abelian and non-abelian bosons, but it can be checked that no abelian gauging is sufficient to give Spin(16)_1 back.In the previous examples the standard form of the cosets (<ref>), (<ref>)-(<ref>) and the left duality on (<ref>) studied in <cit.> all involve gauging by an abelian symmetry. However, it is well-known that there is yet an additional infinite series of conformal embeddings with a tensoring of two affine lie algebras embedding into a bigger algebra <cit.>. Namely:SO(N)_4× SU(2)_N↪ USp(2N)_1.This is an interesting example, as suppose we tried to follow the same logic as in the aforementioned “standard coset dualities” and try to isolate SU(2)_N in terms of USp(2N)_1 and SO(N)_-4, and running the common center procedure. For N=2 there are no novelties and we find SU(2)_2≅ USp(4)_1× U(1)_-4 / ℤ_2.[Here we have used that SO(2)_k≅ U(1)_k.] However, already at N=3 we observe that there is no abelian boson in the spectrum of USp(6)_1× SO(3)_-4, so the common center procedure manifestly does not work for this value of N. The cure to the puzzle just mentioned is clear based on what we have studied so far in this work, and it is to extend the common center procedure and allow for gauging by non-invertible symmetries. 
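A simple necessary condition underlying all of these embedding statements is that the central charges match exactly, which provides a quick cross-check on the families discussed in this subsection. The sketch below is illustrative only; it assumes the standard formula c = k dim G/(k + h^∨), and the helper function names are ours.

```python
from fractions import Fraction as F

def c(dim, h_dual, k):
    # central charge of the G_k WZW / Chern-Simons theory
    return F(k * dim, k + h_dual)

c_su = lambda N, k: c(N * N - 1, N, k)                 # SU(N)_k
c_usp = lambda N, k: c(N * (2 * N + 1), N + 1, k)      # USp(2N)_k
c_so = lambda N, k: c(N * (N - 1) // 2, N - 2, k)      # SO(N)_k / Spin(N)_k, N > 4

# SU(N)_k x SU(k)_N -> SU(Nk)_1, e.g. the SU(3)_2 x SU(2)_3 -> SU(6)_1 case above.
for N, k in [(2, 2), (3, 2), (4, 3)]:
    assert c_su(N, k) + c_su(k, N) == c_su(N * k, 1)

# SO(N)_4 x SU(2)_N -> USp(2N)_1; for N = 3 the orthogonal factor is read as
# SU(2)_8, in line with SO(3)_4 = SU(2)_8 / Z_2 used below.
assert c_su(2, 8) + c_su(2, 3) == c_usp(3, 1)
for N in [5, 6, 7]:
    assert c_so(N, 4) + c_su(2, N) == c_usp(N, 1)
print("central charges of the embeddings match")
```

Since gauging a one-form symmetry (invertible or not) does not change the chiral central charge, the same numbers also check the corresponding gauged dualities.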
Indeed, USp(6)_1× SO(3)_-4 has two non-abelian bosons in its spectrum, and we can condense them and find SU(2)_3 correspondingly. We study the details of this procedure in appendix <ref>. In general then, we have the dualitySU(2)_N≅USp(2N)_1× SO(N)_-4/𝒜_N,where as the prior example shows, we must allow for the gauging of a non-invertible one-form symmetry on the right-hand side.Of course, as we did previously for the other conformal embeddings, we can still apply the coset inversion theorem and write the duality implied by (<ref>). This gives:USp(2N)_1≅SU(2)_N× SO(N)_4/𝒜_N,and as above, for N=2 the gauging is by an abelian symmetry, but at N=3 we already need non-abelian anyon condensation to make the duality valid. §.§ Further Conformal Embeddings Now we move to discuss the rest of the conformal embeddings, with a focus on those that involve non-invertible symmetries. A list of the conformal embeddings can be found in <cit.>. We begin considering conformal embeddings with a product of two affine lie algebras in the denominator, as in the previous subsection, but we study those associated to exceptional Lie algebras. These embeddings have been previously studied in <cit.>, and here we extend and understand the associated dualities in terms of non-invertible symmetries. The subset of these embeddings for which non-abelian anyon condensation is necessary to understand the dualities areSU(2)_1× SU(2)_3↪ (G_2)_1, SU(2)_1× USp(6)_1↪ (F_4)_1,SU(3)_1× SU(3)_2↪ (F_4)_1, SO(3)_4× (G_2)_1↪ (F_4)_1, SU(2)_3× (F_4)_1↪ (E_7)_1, USp(6)_1× (G_2)_1↪ (E_7)_1, SU(2)_7× (G_2)_2↪ (E_7)_1, SU(3)_2× (G_2)_1↪ (E_6)_1,Here we recognize the example SU(2)_1× SU(2)_3↪ (G_2)_1 we elaborated upon in Section <ref> and in Section <ref>. Basically the same argument follows through in all of these embeddings. Let us quickly recall this argument in an alternative example. For instance, if we take the third example in the first line and try to isolate SU(3)_2 we obtain SU(3)_2≅ (F_4)_1× SU(3)_-1, with no further gauging on the right-hand side and everything is in order. If we instead try to isolate SU(3)_1, as explained previously there is no abelian common center to gauge by, but we still have to take into account non-abelian anyon condensation, and we obtainSU(3)_1≅(F_4)_1× SU(3)_-2/𝒵(𝐅𝐢𝐛),where by 𝒵(𝐅𝐢𝐛) we mean an algebra object that can be traced back to a Lagrangian algebra responsible for gauging away (G_2)_1× (G_2)_-1 or (F_4)_1× (F_4)_-1 down to the trivial theory as in Section <ref>, for which reason we interpret it as gauging by a Drinfeld center of a Fibonacci fusion category. An analogous example of this same form was verified in detail by a direct non-abelian anyon condensation computation in Section <ref>. All of the conformal embeddings in (<ref>) yield dualities that can be obtained by the same arguments, and with one exception, in all of them we gauge by some Fibonacci Drinfeld center. The only exception is the first example in the third line, where(E_7)_1≅SU(2)_7× (G_2)_2/𝒵((G_2)_2),which is however obtained by the same arguments. We discuss now those conformal embeddings with a simple group, or single affine Lie algebra in the denominator of the conformal embedding, such as in the SU(3)_1/SU(2)_4 example. We clearly cannot use the same arguments that were used to obtain the “standard coset form” of the dualities, since now we do not have two factors in the denominator. That is, we will not have an analog of expressions such as (<ref>) or their counterparts for algebras other than unitary. 
However, we still have an analog of the dualities in the form outlined in this work, Eqn. (<ref>). Indeed, this is tantamount to using (<ref>) with 𝒞 describing the trivial theory. In the example SU(3)_1/SU(2)_4 this is nothing but the statement that we can gauge SU(2)_4 by some algebra to obtain SU(3)_1. Indeed, it is well-known that SU(3)_1≅ SU(2)_4/ℤ_2.Clearly, the general story for an arbitrary conformal embedding will be essentially the same, but where we allow to gauge by a non-invertible symmetry as per (<ref>).With the previous remarks in mind let us summarize a few results. Exploring the conformal embeddings, we find the following infinite families of dualitiesSpin(N^2-1)_1≅SU(N)_N/𝒜_N SU ( N( N ± 1)/2 )_1≅SU(N)_N ± 2/𝒜_N Spin( N(N-1)/2 )_1≅Spin(N)_N-2/𝒜_N Spin( (N^2+N-2)/2 )_1≅Spin(N)_N+2/𝒜_N,Notice that the first of these corresponds to the conformal embedding used in <cit.> in the context of 2D CFT to propose the deep IR of 2D Adjoint QCD. Here we use the conformal embedding to establish a duality of 3D TQFTs. There are various checks that can be performed in the series above. The case N=4 in (<ref>) gives Spin(9)_1≅ Spin(4)_6, which we verify by a explicit computation in appendix <ref> and is actually equivalent to the duality (<ref>) for N=3.We finish mentioning the isolated cases (i.e., no infinite family) of the conformal embeddings where there is a single affine Lie algebra in the denominator. The complete list of these cases can be obtained by reading <cit.>, and yields the following list of dualities: SU(16)_1≅ Spin(10)_4 / 𝒜, SU(27)_1≅ (E_6)_6/ 𝒜, Spin(70)_1≅ SU(8)_10 / 𝒜,USp(4)_1≅ SU(2)_10 / 𝒜,USp(20)_1≅ SU(6)_6/ 𝒜,(E_6)_1≅ SU(3)_9 / 𝒜,(E_7)_1≅ SU(3)_21/𝒜,(G_2)_1≅ SU(2)_28 / 𝒜,Spin(16)_1≅ Spin(9)_2 / 𝒜,Spin(128)_1≅ Spin(16)_16 / 𝒜,Spin(42)_1≅ USp(8)_7/𝒜,USp(32)_1≅ Spin(12)_8/𝒜,Spin(78)_1≅ (E_6)_12/𝒜,Spin(133)_1≅ (E_7)_18/𝒜 Spin(248)_1≅ (E_8)_30/𝒜, USp(14)_1≅ USp(6)_5 / 𝒜,USp(56)_1≅ (E_7)_12 / 𝒜,(E_8)_1≅ USp(4)_12 / 𝒜,Spin(26)_1≅ (F_4)_3 / 𝒜,Spin(52)_1≅ (F_4)_9/ 𝒜,Spin(14)_1≅ (G_2)_4 / 𝒜,(E_6)_1≅ (G_2)_3 / 𝒜.The first example on the second line, USp(4)_1≅ SU(2)_10/𝒜, is a known example that has been studied in the literature mainly with the aim of testing and understanding the formalism of non-abelian anyon condensation (see for example <cit.>). The last example in this list (E_6)_1≅ (G_2)_3 / 𝒜 can be verified very easily by a non-abelian anyon condensation computation, which we outline in appendix <ref>.§.§.§ Three-State Potts Model Maverick Coset from Level-Rank Duality In this subsection we show the simplest duality implied by the Maverick cosets, Eqn. (<ref>), but instead of doing the calculation directly, we will first use our knowledge of the conformal embeddings to simplify the calculation. To understand the three-state Potts Model, we make use of the first of the embeddings in (<ref>), and writeSU(2)_3≅ (G_2)_1× SU(2)_-1.It is also straightforward to check the duality U(1)_6≅ SU(3)_1× SU(2)_-1, so that we haveSU(2)_3× U(1)_-6≅ SU(3)_-1× (G_2)_1× SU(2)_-1× SU(2)_1.Now we can use the first embedding on the second line of (<ref>), and use our knowledge of non-abelian anyon condensation to isolate (G_2)_1, obtaining:(G_2)_1≅(F_4)_1× SO(3)_-4/𝒵(𝐅𝐢𝐛).This duality may be obtained from the arguments outlined previously, but it is simple and sufficiently interesting that we verify it by a direct non-abelian anyon condensation on the next subsection.We also use the third embedding on the first line of (<ref>) to claimSU(3)_2≅ (F_4)_1× SU(3)_-1. 
These expressions allow us to introduce the first SU(3)_-1 factor on the right-hand side of (<ref>) into the quotient given by (<ref>), with the 𝒵(𝐅𝐢𝐛) denominator acting trivially on that SU(3)_-1 factor, and write SU(2)_3× U(1)_-6≅SU(3)_2× SO(3)_-4/𝒵(𝐅𝐢𝐛)× SU(2)_-1× SU(2)_1. Finally, gauging the ℤ_2 Drinfeld center SU(2)_1× SU(2)_-1 we obtain SU(2)_3× U(1)_-6/ℤ_2≅SU(3)_2× SO(3)_-4/𝒵(𝐅𝐢𝐛), ≅SU(3)_2× SU(2)_-8/𝒜, which is indeed what we have obtained above using Maverick coset considerations instead of several conformal embeddings/exceptional level-rank duality manipulations. On the last line we have used that by definition SO(3)_4≅ SU(2)_8/ℤ_2.

§.§.§ Checking the (G_2)_1≅( (F_4)_1× SO(3)_-4)/𝒵(𝐅𝐢𝐛) Duality

This calculation arises when we prove the duality suggested by the simplest Maverick coset (<ref>) using exceptional conformal embeddings. See the previous subsection. The spectrum of (F_4)_1 is that of (G_2)_1, but with the only non-trivial line having spin 3/5 instead of 2/5. We call ϕ the only non-trivial anyon of (F_4)_1, which obeys Fibonacci fusion rules. The spectrum of SO(3)_4 is shown in Table <ref>, and its non-trivial fusion rules are 2 × 2 = 0+2+4_1+4_2, 2 × 4_1 = 2 + 4_2, 2 × 4_2 = 2 + 4_1, 4_1× 4_1 = 0 + 4_1, 4_1× 4_2 = 2, 4_2× 4_2 = 0 + 4_2.

We first need to determine the bosons that condense. It is easy to verify that the only bosons in the product (F_4)_1× SO(3)_-4 are (ϕ,4_1) and (ϕ,4_2), which are non-abelian. From the fusions (ϕ,4_1) × (ϕ,4_1) = (0,0) + (0,4_1) + (ϕ,0) + (ϕ,4_1), (ϕ,4_1) × (ϕ,4_2) = (0,2) + (ϕ,2), (ϕ,4_2) × (ϕ,4_2) = (0,0) + (0,4_2) + (ϕ,0) + (ϕ,4_2), it is clear that (ϕ,4_1) and (ϕ,4_2) cannot simultaneously condense, so we must make a choice. However, lines 4_1 and 4_2 are symmetric in SO(3)_4, so the choice is actually immaterial. We choose (ϕ,4_1) to condense: (ϕ,4_1) ⟶ 0 + (ϕ,4_1)_2. The quantum dimension of (ϕ,4_1)_2 is d_(ϕ,4_1)_2 = (1+√(5))/2, so there is no further splitting allowed by conservation of the quantum dimension. Since (ϕ,4_2) does not condense, we can also see from the above fusion rules that it does not split and d_(ϕ,4_2) = (3 + √(5))/2.

The quantum dimensions do not allow (0,4_1) and (0,4_2) to split, but in principle they allow for (0,2) to split. This is not the case, which can be verified from (0,2) × (0,2) = (0,0) + (0,2) + (0,4_1) + (0,4_2) since no non-trivial bosons appear on the right-hand side. Since none of these anyons split, the fusion (ϕ,4_1) × (0,4_1) = (ϕ, 0) + (ϕ, 4_1) ⟶ 0 + …, implies the identification (ϕ,4_1)_2≅ (0,4_1), which in turn implies the confinement of such excitations.

We now consider the rest of the (ϕ,i) anyons that are not bosons. Start by noticing that (ϕ, 2) × (ϕ, 2) = (0,0) + (ϕ, 4_1) + … ⟶ 0 + 0 + …, implies that (ϕ, 2) splits: (ϕ, 2) → (ϕ, 2)_1 + (ϕ, 2)_2. To assign quantum dimensions we can consider the fusion (ϕ,2) × (0,2) = (ϕ,4_1) + … ⟶ 0 + …, so (0,2) belongs to the restriction of (ϕ,2). Let (ϕ, 2)_2 be by definition the component that identifies with (0,2). The quantum dimensions must then be d_(ϕ,2)_1 = (1 + √(5))/2, and d_(ϕ,2)_2 = (3 + √(5))/2. With this knowledge it is now straightforward to take the fusions (ϕ,2) × (0,4_2) and (ϕ,2) × (ϕ,4_2) and deduce the identifications (ϕ,2)_1≅ (0,4_2) and (ϕ,2)_2≅ (ϕ,4_2) using the by-now usual arguments.

Finally, (ϕ,0) is self-conjugate and cannot split. Moreover, the fusion (ϕ,0) × (0,4_1) = (ϕ,4_1) ⟶ 0 + … implies the identification (ϕ,0) ≅ (0,4_1), which in turn shows the confinement of such excitations.
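Before summarizing, the input data used in this derivation can be cross-checked mechanically. The following sketch is illustrative only; it assumes h_ϕ = 3/5 and d_ϕ = (1+√5)/2 for (F_4)_1, the SO(3)_4 weights h_2 = 1/5 and h_4_1 = h_4_2 = 3/5 with quantum dimensions (3+√5)/2 and (1+√5)/2, and it verifies that (ϕ,4_1), (ϕ,4_2) are indeed the only non-trivial bosons of the product and that the dimension bookkeeping of the splittings above is consistent.

```python
from fractions import Fraction as F
from math import isclose

phi = (1 + 5 ** 0.5) / 2

# (F_4)_1 and SO(3)_4 data assumed above.
hF = {"0": F(0), "phi": F(3, 5)}
dF = {"0": 1.0, "phi": phi}
hS = {"0": F(0), "2": F(1, 5), "4_1": F(3, 5), "4_2": F(3, 5)}
dS = {"0": 1.0, "2": phi * phi, "4_1": phi, "4_2": phi}

# The only non-trivial bosons of (F_4)_1 x SO(3)_-4 are (phi,4_1) and (phi,4_2).
bosons = [(a, b) for a in hF for b in hS
          if (hF[a] - hS[b]) % 1 == 0 and (a, b) != ("0", "0")]
assert sorted(bosons) == [("phi", "4_1"), ("phi", "4_2")]

# Dimension bookkeeping: (phi,4_1) -> 0 + (phi,4_1)_2 and
# (phi,2) -> (phi,2)_1 + (phi,2)_2, as used above.
assert isclose(dF["phi"] * dS["4_1"], 1 + phi)            # phi^2 = 1 + phi
assert isclose(dF["phi"] * dS["2"], phi + phi * phi)      # phi^3 = phi + phi^2
print("boson list and dimension bookkeeping verified")
```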
To summarize, we have the following condensation pattern:(0,0) → 0,(0,2) → (0,2),(0,4_1) →(0,4_1),(0,4_2) → (0,4_2), (ϕ, 0) → (0,4_1),(ϕ,2) → (0,4_2) + (0,2),(ϕ,4_1) → 0 + (0,4_1),(ϕ,4_2) → (0,2). Studying the lifts implied by the previous restrictions to the anyons in the parent theory, we can check that the unconfined excitations are the vacuum and (0,4_2). These have indeed the correct quantum dimensions, fusion rules and spin to recognize the result as (G_2)_1. § ACKNOWLEDGEMENTS We thank Jimmy Huang, and Carolyn Zhang for helpful conversations. CC and DGS acknowledge support from the Simons Collaboration on Global Categorical Symmetries, the US Department of Energy Grant 5-29073, and the Sloan Foundation. DGS is also supported by a Bloomenthal Fellowship in the Enrico Fermi Institute at the University of Chicago. § REVIEW OF (2+1)D TQFTS AS MODULAR TENSOR CATEGORIES In this appendix we summarize a few definitions and results about (unitary) (2+1)D TQFTs in terms of modular tensor categories (MTC). We will limit ourselves to those results that are useful to follow the main text. For a more in-depth discussion of the definitions and results used here we refer to <cit.> which is particularly clear in our context. Other general references are <cit.>, or Section 5 in <cit.>.§.§ Modular Tensor Categories Axiomatically, a unitary bosonic (2+1)D TQFT is described by a unitary modular tensor category 𝒞. Recall that a category in general is composed by a set of objects and a set of morphisms between them. We denote the objects of the category Obj(𝒞) and the morphisms between objects a and b in Obj(𝒞) as Hom(a,b). Throughout, we assume that in the categories we are concerned with, Obj(𝒞) is a set, the category admits direct sums of objects, and morphism-sets are vector spaces over a field that in physics we take to be the complex numbers. We also assume:Semi-simplicity: Any object can be written as a direct sum of finitely many simple objects, where simple objects s are such that the self-junction space Hom(s,s) is one-dimensional.Finiteness: The number of simple objects in the category is finite.We denote the set of simple objects in a category as ℐ.A UMTC is a special type of monoidal, or tensor category, which in turn is essentially a category equipped with a notion of tensor product in between its objects. That is, if a,b are objects in 𝒞, then there exists an object a ⊗ b in 𝒞. The full definition of a monoidal, or tensor category is technically more involved, requiring many consistency conditions which precise details however we will not need. See <cit.> for more details. To define a modular tensor category, we do find useful to first define a Ribbon category. A Ribbon category is a tensor category that meets the following extra requirements. First, to every object a ∈Obj(𝒞) there is an object a^∨∈Obj(𝒞), the (right) dual of a, such that there exists morphisms b_a∈Hom(1,a ⊗ a^∨) and d_a∈Hom(a^∨⊗ a, 1). We also require the existence of certain morphisms called braiding and twist:Braiding: c_a,b∈Hom(a ⊗ b, b ⊗ a),Twist:θ_a∈Hom(a ⊗ a).All these morphisms are subject to various consistency conditions that can be found, e.g., in <cit.>. 
They allow a useful pictorial notation, whereby objects are denoted by lines with a label a and morphisms are read as a diagram from bottom to top:< g r a p h i c s >< g r a p h i c s >< g r a p h i c s >< g r a p h i c s > A MTC is then a ribbon category, subject to the assumptions stated above, and such that the matrixs_a,bTr(c_a,b∘ c_b,a) =< g r a p h i c s >with entries on the simple objects is non-degenerate.[Here we should be more precise with the definition of the trace. Intuitively, it is clear that we have to take the loops of the anyons involved. See <cit.> for a more precise version of this statement.] Imposing that this matrix is unitary, up to some conventional overall factor, one obtains the definition of a unitary MTC. In physics one does not quite use s, but rather S = S_0,0s, with S unitary and S_0,0 = 1/𝒟, where 𝒟 is the total quantum dimension (see below). The modular S-matrix satisfies S_ab = S_ba = S_a̅ b^*.As a special type of monoidal (or fusion) category, a UMTC further consists, in particular, of an associator or F-symbols dictating the associativity of line junctions, constrained by the so-called pentagon equations. We will not need these for our applications, so we refer the reader to the references above and the original works <cit.> for details.Let us now unpack the previous definitions in a more familiar context. Physically, the objects of the UMTC 𝒞 correspond to the anyons, or line operators of the TQFT. The morphisms of 𝒞 are associated to junctions of the lines. The tensor product is merely denoted by × and it corresponds to the well-known commutative fusion algebraa × b = ∑_c N_ab^cc,which is also associative(a × b) × c = a × (b × c).The quantities N_ab^c are non-negative integers called the fusion coefficients, and they satisfy N_ab^c = N_ba^c as well as ∑_eN_ab^e N^d_ec = ∑_f N^d_af N^f_bcby associativity. We require the existence of a unique transparent line operator, or unique identity object 0:0 × a = a × 0 = a,and the existence of a unique conjugate sector a̅ for any a in 𝒞, defined such thata ×a̅ = a̅× a = 0 + ∑_c ≠ 0 N_a a̅^cc.These conjugate sectors, well-known in the physics context, are nothing but the dual objects defined above mathematically.For our purposes, we define the quantum dimension of a simple anyon a as the expectation value of the corresponding unknot:d_a =< g r a p h i c s >where an anyon and its conjugate satisfy d_a = d_a̅.We say that a line operator, or an anyon a is abelian if d_a = 1. This is actually equivalent to the statement that fusion of any such anyon a with an arbitrary anyon b always gives back a single anyon with unit multiplicity. That is, for a abelian N_ab^c = δ^c_c' for some simple c' in 𝒞. In particular, since the number of simple objects is finite, abelian anyons always generate some finite abelian group. Otherwise, we say the anyon is non-abelian.An important consequence of the associativity of the fusion algebra is that the matrices 𝒩_a defined by the entries (𝒩_a)_b^ c =N_ab^c are mutually diagonalizable. Moreover, an eigenvector of the matrices 𝒩_a is the vector whose entries are the quantum dimensions, with eigenvalue d_a. This is equivalent to the statement that quantum dimensions follow the fusion algebra. That is:d_a d_b = ∑_c N_ab^cd_c.It is also useful to define the total quantum dimension 𝒟 = √(∑_a ∈ℐ d_a^2). In a unitary MTC we can choose a Verlinde basis of anyons <cit.> in which the twist morphism is diagonalized on the basis of simple anyons. 
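As a concrete illustration of the last statements (the quantum dimensions furnishing a common positive eigenvector of the fusion matrices 𝒩_a and satisfying the fusion algebra), the following minimal numerical sketch recovers d_a from the fusion coefficients alone, for the Fibonacci and Ising fusion rules; it is purely illustrative and the function and variable names are ours.

```python
import numpy as np

def quantum_dims(N):
    """N[a][b][c] = N_ab^c.  The quantum dimensions form the common
    Perron-Frobenius eigenvector of the fusion matrices (N_a)_b^c."""
    total = sum(np.array(Na, dtype=float) for Na in N)
    vals, vecs = np.linalg.eig(total)
    v = np.abs(vecs[:, np.argmax(vals.real)].real)
    d = v / v[0]                                  # normalize so that d_0 = 1
    for a, Na in enumerate(N):                    # d_a d_b = sum_c N_ab^c d_c
        assert np.allclose(d[a] * d, np.array(Na, dtype=float) @ d)
    return d

phi = (1 + 5 ** 0.5) / 2
# Fibonacci: {1, tau}, tau x tau = 1 + tau
fib = [[[1, 0], [0, 1]],
       [[0, 1], [1, 1]]]
assert np.allclose(quantum_dims(fib), [1.0, phi])

# Ising: {1, eps, sigma}, eps x eps = 1, eps x sigma = sigma, sigma x sigma = 1 + eps
ising = [[[1, 0, 0], [0, 1, 0], [0, 0, 1]],
         [[0, 1, 0], [1, 0, 0], [0, 0, 1]],
         [[0, 0, 1], [0, 0, 1], [1, 1, 0]]]
assert np.allclose(quantum_dims(ising), [1.0, 1.0, 2 ** 0.5])
print("quantum dimensions recovered from the fusion matrices")
```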
More precisely, the twist morphism is proportional by a phase to the identity morphism< g r a p h i c s >and we have abused notation and called the proportionality phase with the same symbol as the twist morphism θ_a. This quantity is known as the topological spin of the anyon a:θ_a = e^2 π i h_a,and we call h_a to the spin of the line a, or abuse terminology from 2D CFT and call it the conformal weight of the line. In the text we use both nomenclatures interchangeably. The spin of a line operator is always a rational number <cit.>. To avoid potential confusion, we stress the difference between spin and topological spin. In the main text we always keep the difference sharp, referring always to the quantity θ_a as topological spin, and the quantity in the exponent as spin, never interchangeably.We have the following action of the anyons looping around each other:< g r a p h i c s >Taking b to be the trivial anyon we find an expression for the quantum dimensions in terms of the modular S-matrix: d_a = S_a0/S_00.The (chiral) central charge of the (2+1)D TQFT c_- is defined by the topological spins and the quantum dimensions of the anyons bye^i π/4c_- = 1/𝒟∑_a ∈ℐ d_a^2 θ_a.It is a non-trivial result that the combination on the right-hand side is a phase. Note that the MTC data only fixes the chiral central charge c_-, and thus the corresponding central charge of the edge modes, modulo eight. On the other hand, a given boundary CFT with known central charge does fix the chiral central charge of its TQFT bulk. § MATHEMATICAL RESULTS ON GAUGING, COSETS AND DUALITIES In this appendix we summarize some mathematical nomenclature and results from <cit.> that were claimed in Section <ref> and allow a better understanding of it. We separate this appendix in two subsections. In the first one we mostly summarize definitions, while in the second one we discuss certain theorems on Local Algebra Modules pertaining to the subject of cosets. §.§ Frobenius Algebras and Local A-Modules An (associative)[It is known that the theory of local modules can be defined when there is a non-trivial associator defining the algebra (see for example <cit.>). However, since in (part of) this work we are interested in giving a physical interpretation to the results of <cit.> in terms of non-invertible symmetries and their relation to TQFT dualities, we restrict to using their definitions and assumptions.] algebra (with unit) 𝒜 in a Ribbon category 𝒞 is a triple (𝒜,m,η) consisting of an object 𝒜 in 𝒞, a multiplication morphism m ∈Hom(𝒜⊗𝒜, 𝒜) and a unit morphism η∈Hom(1,𝒜) such thatm ∘ (m ⊗id_𝒜) = m ∘ (id_𝒜⊗ m), and m ∘ (η⊗id_𝒜) = id_𝒜 = m ∘ (id_𝒜⊗η).The multiplication and unit are often denoted pictorially as follows:< g r a p h i c s >Similarly, a coalgebra in 𝒞 is a triple (𝒜,Δ,ϵ) consisting of an object 𝒜, a comultiplication map Δ∈Hom(𝒜, 𝒜⊗𝒜) and a counit ϵ∈Hom(𝒜,1) possessing analogous coassociativity and counit properties as above, as well as similar pictorial notations. We say an algebra 𝒜 in a braided tensor category is commutative if m ∘ c_𝒜,𝒜 = m, and similarly for cocommutativity.A left module over an algebra 𝒜∈Obj(𝒞) is a pair (M,ρ_M) consisting of an object M in 𝒞 and a morphism ρ_M∈Hom(𝒜⊗ M,M) such thatρ_M∘ (m ⊗id_M) = ρ_M∘ (id_M⊗ρ_M), andρ_M∘ (η⊗id_M) = id_M.For brevity we will refer to left modules just as modules. The definition of right modules is analogous. 
Taking modules as objects, and the subspaces{ f ∈Hom(N,M) | f ∘ρ_N = ρ_M∘ (id_𝒜⊗ f)}as morphisms from (N,ρ_N) to (M,ρ_M), one defines the category of (left) 𝒜-modules (and similarly when considering right modules instead).Relatedly, an 𝒜-bimodule is a triple (M,ρ^l_M,ρ^r_M) such that (M,ρ^l_M) is a left 𝒜-module, (M,ρ^r_M) is a right 𝒜-module, and the left and right actions of 𝒜 commute.We are interested in the following definitions:*An algebra endowed with a counit ϵ∈Hom(𝒜, 1) in a Ribbon category is called symmetric if the two morphisms defined pictorially < g r a p h i c s >in Hom(𝒜,𝒜^∨) are equal. *An algebra in a tensor category is Frobenius if we have a quintuple (𝒜,m,η,Δ,ϵ) such that (𝒜,m,η) is an algebra, (𝒜,Δ,ϵ) a coalgebra, satisfying the compatibility condition(id_𝒜⊗ m) ∘ (Δ⊗id_𝒜) = Δ∘ m = (m ⊗id_𝒜) ∘ (id_𝒜⊗Δ).*We call a Frobenius algebra special if ϵ∘η = β_1 id_1 and m ∘Δ = β_𝒜 id_𝒜, for some non-zero numbers β_1, β_𝒜. *An algebra 𝒜 is called simple if all bimodule endomorphisms when we consider 𝒜 as a bimodule over itself are proportional to the identity morphism.The following results are useful to know: *A commutative symmetric Frobenius algebra 𝒜 has trivial twist θ_𝒜 = id_𝒜. Physically, this mathematical statement captures the fact that only bosons are non-anomalous, and thus, only them can participate in a Frobenius algebra. *Conversely, every commutative Frobenius algebra with trivial twist is symmetric. *A commutative symmetric Frobenius algebra is also cocommutative. A module (M,ρ_M) over a commutative symmetric special Frobenius algebra 𝒜 in a Ribbon category is called local if and only ifρ_M∘ P^l/r_𝒜 (M) = ρ_M,where P^l_𝒜(a) is the following morphism defined pictorially by the combination of multiplications, comultiplications and braiding:< g r a p h i c s >and an analogous pictorial expression defines P^r_𝒜(a) with all the braidings above reversed.These are all the definitions we will need. The main result (or rather, a version of it) for which we need the previous definitions, and upon which the theory of non-invertible anyon condensation rests, is the following rather fundamental theorem of modular tensor categories: If we have a special symmetric commutative Frobenius algebra 𝒜 in a modular tensor category 𝒞, and if additionally 𝒜 is a simple algebra, then the category 𝒞^loc_𝒜 of local 𝒜-modules is also a modular tensor category. The latter MTC is what we physically refer to as “the gauged theory” or “the condensed theory.”Throughout this work we have mentioned multiple times the concept of a “Lagrangian algebra.” For our purposes a Lagrangian algebra is a special type of Frobenius algebra such that after condensation, we obtain the trivial theory as a result. If we condense just on half of spacetime then, we obtain a gapped boundary for the original theory before condensation. For a precise mathematical definition, see <cit.>. An important statement is that any theory of the form 𝒞⊠𝒞̅ has a Lagrangian algebra (the “diagonal” Lagrangian algebra) of the form𝒜 = ∑_c ∈ℐ(c, c̃) ,where the sum runs over all simple objects of 𝒞 and c̃ stands for the image of 𝒞 in 𝒞̅. §.§ Local Algebra Modules and CosetsThroughout the text we have given a rather physical picture of the ideas at play. Here we focus on a few more mathematical statements that are the basis for many claimed statements, mainly in Section <ref>. 
In Appendices <ref> and <ref> we have already stated and unpacked with more precision many of the definitions used here.In the following, we have to recall that the TQFT obtained by the gauging of a one-form symmetry in a 3D TQFT described by some MTC 𝒞, is given by the category 𝒞^loc_𝒜 of local 𝒜-modules for some special symmetric commutative Frobenius algebra object 𝒜 over 𝒞 (see Appendix <ref> for definitions). The category 𝒞^loc_𝒜 is again an MTC, so it appropriately describes a new 3D TQFT descending from the parent one described by 𝒞. We stress that from the physics viewpoint the previous result does not necessarily restrict to the notion of gauging by some group-like symmetry, so the construction generically involves gauging by a non-invertible symmetry.The primary focus of this section is to state and discuss the main result of <cit.> in our context. This result states that when a MTC ℳ is written as the category of local 𝒜-modules for some commutative symmetric special Frobenius algebra 𝒜 in the direct product of two MTCs 𝒞 and ℳ':ℳ≅ (𝒞⊠ℳ')^loc_𝒜,and if we require the algebra 𝒜 in 𝒞⊠ℳ' to be such that the only subobject of 𝒜 of the form a ×1 is 1×1, where 1 is the tensor unit, then there exists a commutative symmetric special Frobenius algebra ℬ in the direct product ℳ⊠ℳ', such that if we take the corresponding category of local modules, we obtain𝒞≅ (ℳ⊠ℳ')^loc_ℬ.The algebra ℬ can in principle be computed from the categories ℳ and ℳ', and the modularity of 𝒞 can actually be derived from that of ℳ and ℳ'. In practice, it is often found to be more useful to inspect for a Frobenius algebra object (a set of condensing bosons), and carry the non-abelian anyon condensation procedure.The previous is often sufficient for most practical applications, but it is important to point out that if we do not require 𝒜 to fulfill the condition below Eqn. (<ref>), then a version of the previous theorem still holds. Namely, there exist certain algebras 𝒯_1 and 𝒯_2 such that if we take local modules:𝒞^loc_𝒯_1≅ (ℳ⊠ℳ')^loc_𝒯_2.Essentially, what the requirement over 𝒜 below (<ref>) does is to trivialize the gauging on the left-hand side.The previous mathematical results have a clear interpretation in physics. In the standard coset construction, Eqn. (<ref>) is the statement that an affine lie algebra embeds in another one, with the corresponding TQFTs described by the MTCs ℳ' and ℳ respectively. The coset theory, whose construction we reviewed in the CFT context in Section <ref>, is then described in the corresponding TQFT setup by the MTC 𝒞 whose content is in principle determined by the data of ℳ and ℳ' via (<ref>). Notice however that the previous theorem in some sense generalizes the standard GKO coset construction since nowhere above it was needed that ℳ or ℳ' were MTCs associated with those of an affine Lie algebra. They may be arbitrary MTCs (see <cit.> for some work on generalizing the coset construction to higher spin currents in the context of 2D CFT). More importantly, a second point of generalization, stressed above, is that ℬ in (<ref>) does not necessarily have to correspond to some abelian gauging. 
As seen in the previous section, one instance where this phenomenon takes place would be the Maverick cosets <cit.>.One form of the statement that a duality exists follows from this mathematical perspective when the same MTC 𝒞, not (necessarily) having a path-integral or gauge theory description, appears “connecting” two different pairs of MTCs (ℳ_1,ℳ'_1) and (ℳ_2, ℳ'_2) having such a description. That is:ℳ_1≅ (𝒞⊠ℳ'_1)^loc_𝒜_1, andℳ_2≅ (𝒞⊠ℳ'_2)^loc_𝒜_2,for some Frobenius algebras 𝒜_1 and 𝒜_2. By the theorem above we can isolate 𝒞 in two different ways, and write(ℳ_1⊠ℳ'_1)^loc_ℬ_1≅ (ℳ_2⊠ℳ'_2)^loc_ℬ_2for some Frobenius algebras ℬ_1 and ℬ_2. For instance, these could correspond to two different descriptions of the parafermions. One of them could be a Maverick coset description, as in Section <ref>, and the other could be the standard coset description SU(2)_k× U(1)_-2k/ℤ_2<cit.>. This is essentially the path we followed for the first infinite family of Maverick dualities studied in Section <ref>. It could also be that 𝒞 itself admits a field theory description, in which case (<ref>) allows us to express such a description in terms of a different one based on ℳ and ℳ'. The latter is what happens with the conformal embedding SU(N)_k× SU(k)_N↪ SU(Nk)_1 reviewed at the end of Section <ref>. Here we can take a standard Chern-Simons description for SU(k)_N, but we can take an alternative one given by the right-hand side of (<ref>). The latter is nothing but the SU(Nk)_1× SU(N)_-k / ℤ_N expression of the same theory, which is essentially the content of the duality (<ref>). The form (<ref>) of the theorem is sometimes useful. To illustrate this <cit.>, we can consider in the contexts of the conformal embeddings:(E_8)_1/SU(3)_6× SU(2)_16,which implies that (E_8)_1 can be written as in (<ref>) with 𝒞 = SU(3)_6 and ℳ' = SU(2)_16. In this case we need to use (<ref>), and fortunately the gaugings are abelian so we can easily verify thatSU(3)_6/ℤ_3≅ (E_8)_1× SU(2)_-16/ℤ_2.That is, both sides involve a non-trivial gauging, which is the main point of the theorem in the form (<ref>). § FURTHER EXAMPLES ON NON-ABELIAN ANYON CONDENSATION In this appendix we present in detail various check example computations of non-invertible anyon condensation that pertain to the global subject of the paper but for the sake of organization we have summarized here instead. The reader may want to first look at the beginning of Section <ref> where we outline the rules that we use below to verify such examples on non-invertible anyon condensation.§.§ (G_2)_1× (G_2)_-1⟶1 This is a rather trivial example of condensation by a non-abelian anyon that condenses the theory to the trivial theory. The example is trivial in the sense that the theory is of the form G_k× G_-k, which is known to condense to the vacuum <cit.>. Equivalently, a gapped boundary exists that separates the trivial theory from G_k× G_-k, which hosts topological degrees of freedom described by the G_k/G_k topological coset. For completeness, we write here the condensation computation explicitly, since this particular example is easy and it appears twice in this work, both in our example in Section <ref> and in the derivation of the three-state Potts model Maverick duality from exceptional conformal embeddings.The spectrum of (G_2)_1 is shown in Table <ref>. The lines in the double are denoted (0,0), (ϕ,0), (0,ϕ), and (ϕ, ϕ). 
Here we have abused notation and used ϕ to denote the non-trivial entry in both (G_2)_1 and (G_2)_-1, but there is no ambiguity, as we can tell them apart by their position in the ordered pair. Assume that the unique non-trivial boson (ϕ,ϕ) condenses. That is:(ϕ, ϕ) ⟶ 0 + (ϕ, ϕ)_2, or in mathematical terms, the algebra that we condense is 𝒜 = (0,0) + (ϕ,ϕ), which is nothing but the Lagrangian algebra given by the diagonal anyons that always exists in a theory of the form 𝒞×𝒞, with 𝒞 = (G_2)_1 here. By conservation of the quantum dimension, d_(ϕ, ϕ)_2 = (1+√(5))/2, implying that (ϕ, ϕ) can only split into two lines. Further, since (ϕ,ϕ) and 0 are self-conjugate, it follows that (ϕ, ϕ)_2 also is self-conjugate. It is easy to check that the other lines (ϕ,0) and (0,ϕ) cannot split and are also self-conjugate. Since (ϕ, ϕ) condenses, the fusion rules(ϕ, ϕ) × (0, ϕ) = (ϕ, 0) + (ϕ, ϕ) ⟶ 0 + …,(ϕ, ϕ) × (ϕ, 0) = (0, ϕ) + (ϕ, ϕ) ⟶ 0 + …imply that (ϕ, ϕ) contains in its restriction the conjugates of (0, ϕ) and (ϕ, 0), which are however self-conjugate. Since their quantum dimensions are d_(0, ϕ) = d_(ϕ, 0) = (1+√(5))/2, they cannot be identified with the vacuum, and must be identified with (ϕ, ϕ)_2. In other words, we have the identification (0, ϕ) ≅ (ϕ,0) ≅ (ϕ, ϕ)_2. These labels therefore lift to lines in the parent theory with different topological spins, and so it follows that all such excitations confine. The only non-trivial line in the gauged theory is thus the vacuum, obtaining the expected result. §.§ Spin(9)_1≅( Spin(3)_3× Spin(3)_3)/𝒜 In this subsection we consider an example in the infinite family of conformal embeddings corresponding to spin groups:Spin(Nk)_1/Spin(N)_k× Spin(k)_N, for k, N odd. This family of embeddings is intimately related to level-rank dualities for orthogonal groups (see <cit.>), and furthermore conformal embeddings with spin groups in the numerator have the very important property of describing the low-energy dynamics of 2D QCD <cit.>.[More precisely, fermionic conformal embeddings SO(N)_1/ H_k̃ describe the low-energy dynamics of 2D QCD with gauge group H and k̃ the index of the embedding, which in this case corresponds to the Dynkin index of the representation in which the fermions transform.][Rigorously speaking this result is conjectural, but it is supported by highly non-trivial evidence, such as the matching of anomalies.] For the specific case N=k=ν the previous conformal embeddings are also intrinsically related to anomalous theories for time-reversal symmetry that realize a value ν for such anomaly <cit.> (once we add fermionic invertible factors). We will consider the example N=k=3, as this gives rise to the simplest example where the numerator (Spin(9)_1) has interesting non-abelian fusion rules; namely, Ising fusion rules. In this case, there exists the exceptional isomorphism of chiral algebras Spin(3)_3≅ SU(2)_6. So the task consists of condensing some (non-abelian) boson(s) in SU(2)_6× SU(2)_6, and matching the unconfined lines of the condensed theory with those of Spin(9)_1. The spectrum of SU(2)_6 is shown in Table <ref>, and the fusion rules can be read from (<ref>). The lines in the product theory are denoted as (i,j), where i,j=0,…,6 is the label of a single factor. There are four bosons in the product theory SU(2)_6× SU(2)_6: (0,0), (2,4), (4,2), and (6,6). The latter is the "common center" of the product, which is abelian. The remaining two non-trivial bosons are non-abelian.
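The boson content just quoted can be cross-checked numerically from the standard SU(2)_k data. The short script below (an illustrative check, not needed for the argument) uses the conformal weights h_a = a(a+2)/(4(k+2)) and quantum dimensions d_a = sin(π(a+1)/(k+2))/sin(π/(k+2)), and confirms that (0,0), (2,4), (4,2) and (6,6) are the only spin-integer lines, with only (6,6) abelian.

```python
import math
from fractions import Fraction

k = 6                                   # SU(2)_6; the label a = 0..k is twice the spin

def h(a):
    # conformal weight of the SU(2)_6 primary a
    return Fraction(a * (a + 2), 4 * (k + 2))

def qdim(a):
    # quantum dimension of the SU(2)_6 primary a
    return math.sin(math.pi * (a + 1) / (k + 2)) / math.sin(math.pi / (k + 2))

# In SU(2)_6 x SU(2)_6 the topological spin of (a, b) is h_a + h_b (mod 1),
# so bosons are exactly the labels with integer h_a + h_b.
bosons = [(a, b) for a in range(k + 1) for b in range(k + 1)
          if (h(a) + h(b)).denominator == 1]
print(bosons)                                        # [(0, 0), (2, 4), (4, 2), (6, 6)]
print({lab: round(qdim(lab[0]) * qdim(lab[1]), 3) for lab in bosons})
# (2,4) and (4,2) have dimension (1 + sqrt(2))^2 ~ 5.828; only (6,6) is abelian.
```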
Merely condensing the common center in Spin(4)_6≅ SU(2)_6× SU(2)_6 leads to the well-known answer SO(4)_6. Then, as expected, one needs to take at least one of the non-abelian bosons to condense in order to obtain Spin(9)_1. Assume that (2,4) condenses; that is:(2,4) ⟶ 0 + …It is easy to see that since(0,a) × (0,a) = (0,0) + ∑_i (0,a_i),the lines (0,a) and (a,0) do not split and are self-conjugate. With this observation, consider the fusion(0,2) × (2,4) = (2,2) + (2,4) + (2,6) ⟶ 0 + …,which implies that (0,2) belongs to the restriction of (2,4). This means that (2,4) restricts as(2,4) ⟶ 0 + (2,4)_2 + (2,4)_3,with (2,4)_2≅ (0,2), and since d_(0,2) = 1 + √(2) by conservation of the quantum dimension we must have that d_(2,4)_3 = 1 + √(2). Notice that a priori the quantum dimension is sufficiently large to allow (2,4) to split into four lines, but this is ruled out by the fusion(2,4) × (2,4) = (0,0) + (2,4) + (4,2) + …,since there are not enough bosons on the right side that could potentially condense to accommodate four vacua. Furthermore, we also see from this fusion rule that (2,4) can split as (<ref>) only if (4,2) also condenses. Thus we have deduced that if (2,4) condenses, (4,2) also condenses.By a similar argument, (2,0) also belongs to the restriction of (2,4). However, (0,2) and (2,0) cannot be identified with each other. To see this, consider the fusion with (0,6) which has unit quantum dimension:(0,6) × (2,4)= (2,2) = (0,2) × (2,0) != (0,2) × (0,2) = (0,0) + (0,2) + (0,4)= (0,6) + …,where in the third equality in the first row we have (wrongly) assumed that (0,2) and (2,0) identify, and in the second row we have first restricted (2,4) on the left-hand side and isolated the trivial fusion between the vacuum and (0,6). Since both rows must agree, (0,6) must identify with some line in the last equality of the first row, but only one line has unit quantum dimension; namely, the vacuum. This identification, however, is equivalent to the statement that (0,6) condenses, which is not possible since it does not have the correct topological spin to do so. We thus reach the conclusion that (0,2) and (2,0) cannot be identified with each other.Gathering this knowledge, we find the restrictions of (2,4) and (4,2) (the latter follows from the same arguments, once we know it condenses as we have shown above):(2,4) ⟶ 0 + (0,2) + (2,0),(4,2) ⟶ 0 + (0,2) + (2,0).We now deduce a few more identifications. First:(2,4) × (6,6) = (4,2) ⟹ (6,6) + … = 0 + …,where we have used the restrictions of (2,4) and (4,2), and then isolated the fusion of (6,6) with the vacuum on the left-hand side. Only the vacuum on the right-hand side has unit quantum dimension that can match that of (6,6). Thus, we have deduced that (6,6) condenses: (6,6) → 0.In passing, notice that what we have just found is that the condensing algebra is𝒜 = (0,0) + (2,4) + (4,2) + (6,6),and in these terms what we want to show is thatSpin(9)_1≅Spin(3)_3× Spin(3)_3/𝒜,for the algebra object 𝒜 in (<ref>) above.Now (4,0) × (0,2) = (4,2) → 0 + …, but (0,a) and (a,0) are self-conjugate for any a, and thus we must have the identifications (2,0) ≅ (0,4), (0,2) ≅ (4,0). Similarly, (2,6) × (6,6) = (4,0), but (6,6) restricts only to the vacuum, and thus we have the identification (2,6) ≅ (0,4). By similar arguments, we find, overall: (2,6) ≅ (4,0) ≅ (0,2) ≅ (6,4), and (6,2) ≅ (0,4) ≅ (2,0) ≅ (4,6). 
It is straightforward to check from these identifications that the corresponding labels confine on the child theory.The simple currents (0,6) and (6,0) identify with each other since (0,6) = (6,6) × (6,0) and (6,6) condenses. Using that (4,4) = (6,6) × (2,2) we obtain (2,2) ≅ (4,4), and using that (2,2) = (0,6) × (2,4) we obtain the restriction:(2,2) ⟶ (0,6) + (2,0) + (0,2).Notice that (0,6) always lifts to anyons in the parent theory that share the same topological spin, and thus it does not confine.We now study lines (a,b) that have a and b odd integers. First, notice that since(1,1) × (1,1) = (0,0) + (0,2) + (2,0) + (2,2),(1,1) cannot split. A similar argument shows that (1,5) also does not split. Since (1,1) × (6,6) = (5,5) and (1,5) × (6,6) = (5,1) and (6,6) condenses, we find the identifications (1,1) ≅ (5,5) and (1,5) ≅ (5,1). Furthermore:(1,1) × (1,5) = (2,4) + …⟶ 0 + …,which means (1,5) is conjugate to (1,1) in the child theory, but (1,1) is self-conjugate, which implies the further identifications (1,1) ≅ (1,5) ≅ (5,1) ≅ (5,5).To make progress, study now the fusion rules(3,3) × (3,3)= (0,0) + (2,4) + (4,2) + (6,6) + …⟶ 0+0+0+0+… (1,1) × (3,3)= (2,2) + (2,4) + (4,2) + (4,4) ⟶ 0+0+…The first fusion rule allows two possibilities: either (3,3) splits into four distinct labels or one label with multiplicity two. The first possibility is however not consistent with (<ref>), so it follows that (3,3) restricts as (3,3) → 2(1,1) for consistency in between both (<ref>) and (<ref>). This restriction also shows that (1,1) confines, as it lifts to anyons in the parent theory with different topological spins.Consider now the rest of the lines with two odd entries. In particular, let us consider the fusion rule(1,1) × (1,3) = (0,2) + (0,4) + (2,2) + (2,4) ⟶ 0 + …,which implies that (1,1) belongs in the restriction of (1,3) (a similar statement holds for (3,1)). In turn, by conservation of the quantum dimension, this implies that (1,3) has a restriction of the form:(1,3) ⟶ (1,3)_1 + (1,1),with d_(1,3)_1 = √(2). Clearly, there is no sufficient quantum dimension for further splitting.Using the standard arguments: (1,3) × (6,6) = (5,3) ⇒ (1,3) ≅ (5,3) and (3,1) × (6,6) = (3,5) ⇒ (3,1) ≅ (3,5), meaning the restrictions we have to consider are(1,3) ⟶ (1,3)_1 + (1,1),and(3,1) ⟶ (3,1)_1 + (1,1). It is straightforward to check that the three fusion rules (1,3) × (1,3), (1,3) × (3,1), and (3,1) × (3,1) contain two vacua after restriction. Since we know (1,1) is self-conjugate, this means that (1,3)_1 and (3,1)_1 must both be self-conjugate, but conjugate to each other as well. We have to then identify (1,3)_1≅ (3,1)_1. Clearly, the previous identifications do not lead to confinement of (1,3)_1. Finally, the lines in SU(2)_6× SU(2)_6 of the form (2m,2n+1) with m,n integer confine. For example, (0,1) × (6,6) = (6,5), and since (6,6), condenses we have to identify (0,1) ≅ (6,5). However, these expressions lift to anyons of different topological spins and thus confine. The same argument holds for the rest of such lines. All in all, the spectrum of unconfined lines that we have found is given by (0,0), (6,0), and (1,3)_1 with topological spins and quantum dimensions that match those of Spin(9)_1, in accordance with the conformal embedding Spin(3)_3× Spin(3)_3↪ Spin(9)_1. The previous can be readily verified by looking at the spectrum of Spin(9)_1, presented in Table <ref>. 
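As a quick numerical sanity check of the final answer (not needed for the argument above), one can verify the standard total-quantum-dimension bookkeeping for condensation, D²_child = D²_parent/(dim 𝒜)², together with the dimension count of the restriction of (2,4). The sketch below is our own illustration of that bookkeeping:

```python
import math

def qdim(a, k=6):
    # quantum dimension of the SU(2)_k primary labelled a
    return math.sin(math.pi * (a + 1) / (k + 2)) / math.sin(math.pi / (k + 2))

D2_su2_6 = sum(qdim(a) ** 2 for a in range(7))      # total dimension^2 of SU(2)_6
D2_parent = D2_su2_6 ** 2                            # SU(2)_6 x SU(2)_6

dim_A = 1 + 2 * qdim(2) * qdim(4) + qdim(6) ** 2     # A = (0,0) + (2,4) + (4,2) + (6,6)
D2_child = D2_parent / dim_A ** 2
print(round(D2_child, 6))                            # 4.0 = 1^2 + 1^2 + sqrt(2)^2,
                                                     # matching {0, (6,0), (1,3)_1}

# dimension count of the restriction (2,4) -> 0 + (0,2) + (2,0)
assert abs(qdim(2) * qdim(4) - (1 + 2 * qdim(2))) < 1e-9
```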
§.§.§ Fusion Rules In the current example the child theory has interesting fusion rules (Ising fusion rules), and the condensation procedure is also sufficiently straightforward to deduce them explicitly from the parent theory. We turn to do this next.First, (0,0) and (6,0) descend trivially from the parent theory so they maintain ℤ_2 fusion rules: (0,6) × (0,6) = (0,0).To deduce the fusion rule (0,6) × (1,3)_1 we fuse (0,6) with (1,3):(0,6) × (1,3)= (1,3) = (1,3)_1 + (1,1) = (0,6) × (1,3)_1 + (0,6) × (1,1) = (0,6) × (1,3)_1 + (1,5) ≅ (0,6) × (1,3)_1 + (1,1),where in the first row we have performed the fusion in the parent theory and restricted the result, while in the second row we have first restricted on the left-hand side, then performed the fusions from the lines that descend trivially from the parent, and then used the identification (1,5) ≅ (1,1). Since both rows must match, we have the fusion rule in the child theory:(0,6) × (1,3)_1 = (1,3)_1. Deducing (1,3)_1× (1,3)_1 is slightly more complicated. We consider the fusion (1,3) × (1,3) in the parent theory:(1,3) × (1,3)= (0,0) + (0,2) + (0,4) + (0,6) + (2,0) + (2,2) + (2,4) + (2,6)= (1,3)_1× (1,3)_1 + (1,3)_1× (1,1) + (1,1) × (1,3)_1+ (1,1) × (1,1),where in the first row we have performed the fusion in the parent theory, while in the second row we have restricted the left-hand side first.The matching of (<ref>) with (<ref>) is simplified if we use the explicit fusion in Eqn. (<ref>), finding:(0,4) + (0,6) + (2,4) + (2,6) = (1,3)_1× (1,3)_1 + (1,3)_1× (1,1) + (1,1) × (1,3)_1.We use now that if a genuine line operator fuses with another genuine line operator it must give genuine line operators, while if a genuine line operator fuses with a confining/non-genuine line operator, it must give confining/non-genuine line operators. Since (1,1) confines and (1,3)_1 does not, the previous means that we can extract the fusion (1,3)_1× (1,3)_1 by inspecting the unconfined excitations on the left hand side of (<ref>) after restriction. Doing this we obtain:(1,3)_1× (1,3)_1 = 0 + (0,6).As promised, the fusion rules (<ref>), (<ref>), and (<ref>) are indeed those of Ising.§.§ ( SU(4)_1× SU(2)_-10)/𝒜 This is an amusing example that we touched upon in Section <ref> of a Maverick duality expressing the Ising TQFT (or rather any of its many coset descriptions) in terms of SU(4)_1 and SU(2)_10 Chern-Simons gauge theories via non-abelian anyon condensation. The spectrum of SU(4)_1 can be found in Table <ref> and that of SU(2)_10 can be found in Table <ref>. SU(4)_1 follows ℤ_4 fusion rules, while those of SU(2)_10 can be obtained from Eqn. (<ref>). The product theory has three non-trivial bosons: (6,10), (1,6), and (6,4). The first of these is abelian, while the other two are non-abelian. Condensing just the non-abelian boson (1,6) is tantamount to condensing only the anyon 6 in SU(2)_10, which leads to SU(2)_10/𝒜≅ USp(4)_1 where 𝒜 = 0 + 6. In the product theory, one would then obtain SU(4)_1× USp(4)_-1. This points to the fact that one should instead condense all available bosons to obtain the Ising TQFT:(6,10) ⟶ 0, (1,6) ⟶ 0 + (1,6)_2, (6,4) ⟶ 0 + (6,4)_2,where the non-abelian anyons cannot be split further, as it is not allowed by the fusion rules. For example:(1,6) × (1,6) = (1,0) + (1,6) + …⟶ 0 + 0 + …,implies that (1,6) can have at most a twofold split. It is also easy to see from (6,10) × (1,6) = (6,4) that (1,6) and (6,4) must share the same restriction: (1,6)_2≅ (6,4)_2. 
This finishes the discussion as far as the bosons go.We focus now on anyons that are fermions, i.e., that have topological spin θ = -1. These are (6,0), (1,10), (1,4), and (6,6). Using similar fusion rule arguments as above we can deduce that the anyons in the parent (1,4) and (6,6) share the same restriction, and moreover:(6,0) × (1,4) = (6,4) ⟶ 0 + …,implies (1,4) → (6,0) + (1,4)_2. We can show that the second component (1,4)_2 confines by studying the fusion with the boson (1,6):(1,4) × (1,6) = (1,6) + …⟶ 0 + …so (1,4) and (1,6) must have one of their components identified. But (6,0) ∈ (1,4) cannot condense (i.e., we cannot identify it to the vacuum), so the only consistent identification is(1,4)_2≅ (1,6)_2,from which it is easy to study their lift to anyons in the parent and check that they confine. The remaining fermion is (1,10), which identifies with (6,0) since (6,10) × (1,10) = (6,0) and (6,10) → 0, so (1,10) ≅ (6,0). At this point the list of unconfined anyons are the trivial one, and one fermion (6,0). Let us focus now on the spectrum of anyons in the parent with topological spin θ = e^2 π i / 16. These correspond to (4,3), (4̅,3), (4,7), and (4̅,7). The quantum dimensions of all these anyons in the parent is d_(4,3) = √(2) + √(2 + √(3)), and fusions of the form(4,3) × (4̅,3) = (1,0) + (1,2) + (1,4) + (1,6) ⟶ 0 + 0 + …,allows to conclude that this set of anyons all split in two. Furthermore, because (6,10) → 0, the fusions (6,10) × (4,3) = (4̅,7) and (6,10) × (4̅,3) = (4,7) imply that the following pairs of anyons in the parent share their restriction:(4,3) ≅ (4̅,7), and (4,7) ≅ (4̅,3) To deduce the splitting of the quantum dimensions we will consider the fusion with the anyon (4̅,9). This anyon is such that(4,9) × (4̅,9) = (1,0) + (1,2) ⟶ 0 + …,so it does not split, and such that(4,9) × (4,9) = (6,0+2) = (6,0) + (6,2),so it is not self-conjugate. That is, (4̅,9) corresponds to a different excitation in the child theory. Then:(4̅,9) × (4,3) = (1,6) + (1,8) ⟶ 0 + …,implies that (4,9) ∈ (4,3), so the splitting of the quantum dimensions in (4,3) corresponds to d_(4,9) = √(2+√(3)), and another component with quantum dimension √(2). From the previous we also conclude that (4,9) and (4̅,9) confine. Finally, from the fusions(4,3) × (4,7) = (6,4) + (6,6) + (6,8) + (6,10) ⟶ 0 + 0 + …,(4,3) × (4,3) = (6,0) + (6,2) + (6,4) + (6,6) ⟶ 0 + …,we conclude that the restrictions must be(4,3) = (4,3)_1 + (4,9),(4,7) = (4,3)_1 + (4̅,9),with (4,9) and (4̅,9) confining. (4,3)_1 is then the anyon that descends to the spin field with topological spin θ = e^2 π i /16 in the Ising model.It is now straightforward to argue for the confinement of the remaining excitations. For example, (6,10) × (1,3) = (6,7) ⟹ (1,3) ≅ (6,7). Studying the lift of their spin to the parent, it can readily be checked for the confinement of (6,7) and (1,3). A similar argument holds for the rest of the anyons, with the exception of (4,5). To argue for the confinement of this anyon, notice(4,5) × (4,5) = (6,4) +(6,10) ⟶ 0 + 0 + …,so (4,5) splits in two, and(4,9) × (4,5) = (6,4) + (6,6) ⟶ 0 + …,(4̅,9) × (4,5) = (1,4) + (1,6) ⟶ 0 + …,imply that (4,9), (4̅,9) ∈ (4,5). As argued above, (4,9) and (4̅,9) cannot identify with each other, so it must be that (4,5) ⟶ (4,9) + (4̅,9), and thus all components of (4,5) confine.Then, the unconfined excitations are 0, (6,0) and (4,3)_1, which as expected, gives the data of the Ising TQFT. 
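The boson content of SU(4)_1 × SU(2)_-10 quoted at the beginning of this example, as well as the set of θ = e^{2πi/16} lines used above, can be confirmed with a short spin census. This is an illustrative check under explicit conventions: the SU(4)_1 labels 1, 4, 6, 4̄ are mapped to N-ality r = 0, 1, 2, 3 with h_r = r(N-r)/(2N), and the SU(2)_10 factor enters with a minus sign since it sits at level -10.

```python
from fractions import Fraction

def h_su2(a, k):                      # SU(2)_k primary a = 0..k
    return Fraction(a * (a + 2), 4 * (k + 2))

def h_suN1(r, N):                     # SU(N)_1 primary labelled by its N-ality r
    return Fraction(r * (N - r), 2 * N)

su4_labels = {"1": 0, "4": 1, "6": 2, "4bar": 3}   # assumed label -> N-ality map
k = 10

spins = {(x, a): h_suN1(r, 4) - h_su2(a, k)        # minus sign: SU(2)_10 at level -10
         for x, r in su4_labels.items() for a in range(k + 1)}

bosons = [lab for lab, s in spins.items() if s.denominator == 1 and lab != ("1", 0)]
print(bosons)                              # [('1', 6), ('6', 4), ('6', 10)]

sixteenth = [lab for lab, s in spins.items() if s % 1 == Fraction(1, 16)]
print(sixteenth)                           # [('4', 3), ('4', 7), ('4bar', 3), ('4bar', 7)]
```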
§.§ (E_6)_1≅ (G_2)_3/𝒜 This is an interesting example, mentioned in Section <ref>, of a single affine Lie algebra embedding into another one. The spectrum of (G_2)_3 can be found in Table <ref>, while the spectrum of (E_6)_1, which is the expected result, is presented in Table <ref>. The (E_6)_1 Chern-Simons theory has ℤ_3 fusion rules. Those of (G_2)_3 are fairly unwieldy to write down, so instead we just use them as we need them throughout the calculation. Our starting theory (G_2)_3 has only one non-trivial boson, namely 64, and it is non-abelian. We assume it condenses, and the fusion rule64×64 = 1 + 7 + 27 + 77 + 14 + 64⟶ 0 + 0 + …needs 64 to split in two for consistency with the right-hand side. Thus, 64→ 0 + 64_2, with d_64_2 = (3+√(21))/2. We can now use the fusion rules7 ×64 = 27 + 77 + 14 + 64⟶ 0 + … 14 ×64 = 7 + 27 + 77 + 64⟶ 0 + … 77 ×64 = 7 + 27 + 14 + 64⟶ 0 + …from which we conclude that 7, 14, and 77 belong in the restriction of 64. Clearly, the only possibility is to have the identifications 7≅14≅77≅64_2, from which we also deduce the confinement of these excitations. It only remains to study 27. Examining the self-fusion27×27 = 1 + 2 (64) + …⟶ 0 + 0 + 0 + …,we see that 27 splits in three: 27→27_1 + 27_2 + 27_3. To assign quantum dimensions it is sufficient to study the fusion27×64 = 64 + …⟶ 0 + …,from which we deduce that one component of 27 must belong in the restriction of 64. Obviously, 27 cannot condense, so one of its components, say 27_3, must identify with 64_2: 27_3≅64_2, so 27_3 confines, and we must have d_27_1 = d_27_2 = 1 and d_27_3 = (3 + √(21))/2. The fusion rules of the remaining non-confined excitations 0, 27_1 and 27_2 can be deduced from the associativity of the fusion and the fact that all these components are abelian. They indeed have the correct spins, fusion rules, and quantum dimensions for us to recognize the expected result (E_6)_1. So we have found that(G_2)_3/𝒜 = (E_6)_1,where 𝒜 = 1 + 64. §.§ SU(2)_3≅( USp(6)_1× SO(3)_-4)/𝒜 In this subsection we consider an example in the infinite family of conformal embeddings studied in Section <ref>:SO(N)_4× SU(2)_N↪ USp(2N)_1,where we attempt to express SU(2)_N in terms of SO(N)_4 and USp(2N)_1. We consider N=3, which is the simplest case in which non-abelian anyon condensation must be considered. The spectra of USp(6)_1 and SO(3)_4 can be found in Tables <ref> and <ref>, respectively. The fusion rules of USp(6)_1 are given as follows:14' ×14' = 1, 14' ×14 = 6, 14' ×6 = 14, 14×14 = 1 + 14, 14×6 = 14' + 6, 6×6 = 1 + 14.The fusion rules of SO(3)_4 may be found in Eqn. (<ref>). It is easy to see that the product USp(6)_1× SO(3)_-4 has two bosons, both of which are non-abelian: (14,4_1) and (14,4_2). Running a similar argument to the one at the beginning of Section <ref>, we can see that only one of them can condense, and since 4_1 and 4_2 play symmetric roles, the choice is immaterial. We choose (14,4_1) to condense, and thus (14,4_2) does not split and has quantum dimension d_(14,4_2) = (3+√(5))/2. To study how (14,4_1) splits, consider the fusion and restriction (14,4_1) × (1,4_1) =(14, 0) + (14,4_1) → 0.
Then, (1,4_1) belongs to the restriction of (14,4_1) and therefore confines.We can study the fate of the boson (14,4_2) by computing the fusion with (14,2):(14,2) × (14,2)= (1,0) + (14,4_1) + …⟶ 0 + 0 + …, (14,4_2) × (14,2)= (1, 2) + (1, 4_1) + (14, 2) + (14, 4_1) ⟶ 0 + …The first fusion says that (14,2) splits into two components (14,2) → (14,2)_1 +(14,2)_2, and the second fusion implies the identification (14,4_2) ≅ (14,2)_1[Here we have defined (14,2)_1 to be the component of (14,2) that identifies.] and corresponding confinement of these excitations. Correspondingly, d_(14,2)_1 = (3 + √(5))/2 and d_(14,2)_2 = (1 + √(5))/2. We may now identify the remaining component of (14,2) with (1,4_2), in accordance with the fusion(1, 4_2) × (14, 2) = (14, 2) +(14, 4_1) ⟶ 0 + …and matching of quantum dimensions. So, indeed (14,2)_2≅ (1,4_2).There are two additional identifications that we can deduce, namely(14,4_2) × (1,2) = (14, 2) + (14,4_1) ⟶ 0 + …implies (1,2) ≅ (14,4_2), and so (1,2) confines. Also(14,0) × (1,4_1) = (14,4_1) ⟶ 0 + …implies (14,0) ≅ (1,4_1), and since (1,4_1) confines, so does (14,0).We move-on now to consider anyons of the form (14',a) and (6,a), with a a label in SO(3)_-4. Studying self-fusions as usual we deduce that (6,2) and (6,4_1) split into two components, while (6,4_2) does not split. The fusion(6,4_2) × (6,2) = (14,4_1) + …⟶ 0 + …,implies that (6,2) restricts as (6,2) → (6,4_2) + (6,2)_2, and as such (6,4_2) confines. The remaining component is identified with (14',4_2) because of the fusion(14',4_2) × (6,2) = (14,2) + (14,4_1) ⟶ 0 + …and the matching of the quantum dimensions, so we obtain the full restriction (6,2) → (14',4_2) + (6,4_2).Similarly, we have the fusions (14',0) × (6, 4_1) = (14,4_1) → 0 + … and (14', 4_1) × (6,4_1) = (14,0) + (14,4_1) → 0 + …. From this we find the restriction (6,4_1) → (14',0) +(14',4_1), and in turn we deduce the confinement of (14',4_1). Finally, from the fusions (14', 2) × (6, 4_2) and (6, 0) × (14', 4_1) we may deduce the identifications (14',2) ≅ (6,4_2) and (6, 0) ≅ (14', 4_1) and the corresponding confinement of such excitations.All in all, considering identifications we get the unconfined excitations (14', 4_2), (1,4_2) and (14',0) on top of the vacuum. This correctly reproduces the spectrum of the expect result SU(2)_3 (presented in Table <ref>) according to the conformal embedding (<ref>). JHEPmod
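As a final numerical cross-check of this last example (not needed for the argument), the total-quantum-dimension bookkeeping can be verified once more. The quantum dimensions used below are the ones implied by the fusion rules and restrictions above; in particular d_2 = (3+√5)/2 and d_4_1 = d_4_2 = (1+√5)/2 in SO(3)_4 are inferred rather than quoted directly, so this is a consistency check under those assumptions:

```python
import math

phi = (1 + math.sqrt(5)) / 2             # golden ratio

# Quantum dimensions read off from the fusion rules quoted above.
usp6_1 = [1.0, 1.0, phi, phi]             # 1, 14', 14, 6
so3_4 = [1.0, phi**2, phi, phi]           # 0, 2, 4_1, 4_2  (assumed, see the text)

D2_parent = sum(d**2 for d in usp6_1) * sum(d**2 for d in so3_4)
dim_A = 1 + phi * phi                     # A = (1,0) + (14,4_1)
D2_child = D2_parent / dim_A**2

# SU(2)_3 for comparison.
k = 3
D2_su2_3 = sum((math.sin(math.pi * (a + 1) / (k + 2))
                / math.sin(math.pi / (k + 2)))**2 for a in range(k + 1))
print(round(D2_child, 6), round(D2_su2_3, 6))   # both ~7.236068
```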
http://arxiv.org/abs/2312.16317v1
{ "authors": [ "Clay Cordova", "Diego García-Sepúlveda" ], "categories": [ "hep-th", "cond-mat.str-el", "math.QA" ], "primary_category": "hep-th", "published": "20231226195315", "title": "Non-Invertible Anyon Condensation and Level-Rank Dualities" }
JaColBERT and Hard Negatives, Towards Better Japanese-First Embeddings for Retrieval: Early Technical Report. Benjamin Clavié ([email protected]). January 14, 2024. Document retrieval in many languages has largely relied on multi-lingual models, leveraging the vast wealth of English training data. In Japanese, the best performing deep-learning based retrieval approaches rely on multilingual dense embeddings. In this work, we introduce (1) a hard-negative augmented version of the Japanese MMARCO dataset and (2) JaColBERT, a document retrieval model built on the ColBERT model architecture, specifically for Japanese. JaColBERT vastly outperforms all previous monolingual retrieval approaches and competes with the best multilingual methods, despite unfavourable evaluation settings (out-of-domain vs. in-domain for the multilingual models). JaColBERT reaches an average Recall@10 of 0.813, noticeably ahead of the previous monolingual best-performing model (0.716) and only slightly behind multilingual-e5-base (0.820). These results are achieved using only a limited, entirely Japanese, training set, more than two orders of magnitude smaller than those used by multilingual embedding models. We believe these results show great promise to support retrieval-enhanced application pipelines in a wide variety of domains. § CONTRIBUTIONS In this first version of this work, we: * Release a dataset of hard-negatives for the Japanese language subset of MMArco[https://huggingface.co/datasets/bclavie/mmarco-japanese-hard-negatives] * Release JaColBERT[https://huggingface.co/bclavie/JaColBERT], a Japanese-only version of ColBERT trained on the dataset above, which outperforms all existing Japanese models on retrieval tasks and is competitive with multilingual e5 models, despite the latter having been trained on the training sets associated with our evaluation data. § DATA There is a growing number of Japanese NLP datasets <cit.>, many of them introduced through constrained automatic translation methods in order to leverage the vast wealth of data annotated for English. However, as of yet, there appears to have been a lack of large scale datasets to train generalist retrieval models. The release of MMARCO and its Japanese subsplit <cit.>, the multi-lingual version of the MS MARCO Document Retrieval dataset <cit.>, has provided an initial large scale dataset to be used for this purpose. MS MARCO is one of the most widely used datasets for training document embedding models, and has been shown to provide models with impressive generalisation on a wide variety of retrieval tasks <cit.>, such as the ones in the BEIR benchmark <cit.>. §.§ Generating Japanese Hard Negatives In both the existing literature <cit.> and informal discussions, hard negatives are highlighted as particularly important for training retrieval models. A hard negative is a negative example that looks very similar to a positive example, and serves to improve a model's ability to discriminate between relevant and "relevant-looking" irrelevant passages. There are many ways of generating hard negatives.
Human annotation, while excellent, is prohibitively time-consuming and costly at the scale required, thus, hard negatives are generally generated by existing retrieval methods, both sparse (BM25...) and dense, such as document embedding models or cross-encoders.To support the development of stronger Japanese retrieval models, we generate hard negatives for the MMARCO dataset, using two approaches:Multilingual e5 embeddings The current leading multilingual dense document embeddings, with a strong variety on many languages, including Japanese. We embed the entirety of the MMARCO Japanese Corpus, then retrieve the 110 most similar documents for each of them. We discard the 10 most similar documents, as MMARCO is a lossy dataset: some passages for a query are not annotated as positive examples, although they would indeed be considered relevant. Discarding the most similar documents help us avoid integrating these false negatives to our training data. Finally, we randomly sample 25 examples, and choose them as our e5-generated hard negatives. BM25 We use the Anserini <cit.> implementation of BM25, as well as their default Japanese Analyser. For each individual query, we retrieve up to 10 similar documents, once again discarding the first ten matches.The generated data is used to train JaColBERT, along with the initial training negatives provided in the original dataset. We make our full dataset available to support future work [https://huggingface.co/datasets/bclavie/mmarco-japanese-hard-negativeshttps://huggingface.co/datasets/bclavie/mmarco-japanese-hard-negatives]. Recently, the release of MIRACL <cit.> and Mr.TyDi <cit.>, two multilingual information retrieval datasets, have also provided us with large corpora and a small subset of annotated positive examples. In future work, we believe it would be useful to generate hard-negatives for those datasets on a large scale, to further diversify training sets. We do not do so in this work due to both limited compute and wanting to keep both of those datasets unseen for out-of-domain evaluation of JaColBERT. We also do not explore the Japanese AIO QA retrieval competition datasets, which could provide another useful data source in the future. § EMBEDDINGS & RETRIEVAL Document retrieval, particularly in the context of RAG (Retrieval-Augmented Generation) pipelines, has emerged as an increasingly important topic at the intersection of NLP and Information Retrieval.Most retrieval methods have strong tradeoffs:* Traditional sparse approaches, such as BM25, are strong baselines, but do not leverage any semantic understanding, and thus hit a hard ceiling.* Cross-encoder retriever methods are powerful, but prohibitively expensive over large datasets: they must process the query against every single known document to be able to output scores.* Dense retrieval methods, using dense embeddings in vector databases, are lightweight and perform well, but are data-inefficient (they require hundreds of millions if not billions oftraining examples pairs to reach state-of-the-art performance) and generalise poorly in a lot of cases, as representing every single aspect of a document, to be able to match it to any related query, into a single vector is not a solved problem.Recent work has focused on attempting to leverage the benefits of both sparse and dense retrieval methods. This very recent line of work has produced very capable models such as SPLADE <cit.>, ColBERT <cit.> and SparseEmbed <cit.>. 
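At the core of the ColBERT family, which we build upon and describe next, is a late-interaction scoring step in which each query token embedding is matched against its best-scoring document token embedding and the per-token maxima are summed. A schematic sketch of this MaxSim operator is given below; it is an illustration, not the actual JaColBERT implementation.

```python
import numpy as np

def maxsim_score(query_vecs: np.ndarray, doc_vecs: np.ndarray) -> float:
    """Late-interaction score between one query and one document.

    query_vecs: (n_query_tokens, dim) L2-normalized token embeddings
    doc_vecs:   (n_doc_tokens, dim)   L2-normalized token embeddings
    """
    sim = query_vecs @ doc_vecs.T          # (n_q, n_d) token-token cosine similarities
    return float(sim.max(axis=1).sum())    # best document token per query token, summed

def rank_documents(query_vecs, docs_vecs):
    # docs_vecs: list of per-document token-embedding matrices
    scores = [maxsim_score(query_vecs, d) for d in docs_vecs]
    return np.argsort(scores)[::-1]        # document indices, best first
```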
Specifically, ColBERT, and its second version <cit.>, leverage multiple tricks to build upon the strong representation power of transformer models such as BERT <cit.> to represent documents as bags-of-centroids by representing documents as being composed of many smaller contextualised vectors, rather than a single, large dense representation.In this work, we introduce JaColBERT, building upon ColBERT to efficiently train a Japanese retrieval model, without the need of billion training examples from datasets in other languages. §.§ JaColBERT Leveraging the data and model architectures described above, we introduce JaColBERT. Our approach builds upon ColBERT to efficiently train a Japanese retrieval model, without the need of billion training examples from multilingual training sets.To train this initial version of JaColBERT, we randomly sample ten million(Query, PositivePassage, NegativePassage) triplets from our hard-negative augmented MMARCO dataset to serve as our training step. Training on those ten million triplets takes 10 hours.We initialise JaColBERT from Tohoku University's bert-base-japanese-v3. BERT is generally considered as the strongest base model for ColBERT models, as RoBERTa has been anecdotally noted to struggle to learn the kind of representation needed for this approach to work. As a result, we do not evaluate Waseda University's Japanese RoBERTa. We have not yet evaluated Studio Ousia's LUKE <cit.> models either, although they may be a suitable abse model for this approach.We train JaColBERT on 8 NVidia L4 GPUs, with a total batch size of 128 (16 per GPU). We perform training for the full number of steps to iterate over all the training pairs once (roughly 78 000). The model is trained with 8000 warm-up steps, and a learning rate of 5e-6. Our experiments showed worse performance with other common learning rates, with 3e-6 being the closest. Learning rates of 1e-5 and 2e-5 resulted in noticeable performance degradation at early evaluation steps, and were not evaluated further.We set the maximum query length to 64, and the maximum document length to 228. As per ColBERTv2 <cit.>, each vector representation, when indexing documents, is compressed to 2-bits, allowing for efficient storage of large volumes of data, with no impact on retrieval performance. §.§ Fio Embeddings As an early precursor to this work, we also released fio-base-japanese-v0.1 (Fio). Fio is initiated from bert-base-japanese-v3,trained for three epochs on JNLI <cit.> and JSNLI <cit.>, then fine-tuned for a single epoch (due to compute constraints) on an extremely small subset (100 000 sentence pairs, half negatives and half positives) of MMARCO, as well as subsamples of MIRACL and Mr.TiDy. Fio is trained using AnglE optimisation <cit.>. More detail on Fio is outside the scope of this report and available on the associated https://ben.clavie.eu/fio_v1release blog post.At this stage, Fio for retrieval remains a proof of concept and should not be used in lieu of JaColBERT or multilingual e5 models on these tasks. However, we believe that the data released with this work should allow to easily train a version a monolingual dense embedding model with strong retrieval performance, and intend to do so in the future.§ EVALUATIONWe evaluate various models, including JaColBERT, on three datasets: two document retrieval ones (MIRACL and Mr.TyDi) and a Question-Answering one (JSQuAD). We report the recall@K for k={3, 5, 10} for the retrieval datsets and recall@K for k={1, 5, 10} for JSQuAD. 
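For clarity, a minimal sketch of how such a Recall@K figure can be computed is given below. This is an illustration of the protocol (here, the fraction of queries with at least one relevant passage in the top K, sometimes called Success@K); the exact benchmark code may differ in details, e.g., by averaging per-query recall over all relevant passages.

```python
def recall_at_k(ranked_ids, relevant_ids, k):
    """Fraction of queries with at least one relevant passage in the top-k results.

    ranked_ids:   {query_id: [doc_id, ...]} documents sorted by decreasing score
    relevant_ids: {query_id: set of relevant doc_ids}
    """
    hits = 0
    for qid, ranking in ranked_ids.items():
        if set(ranking[:k]) & relevant_ids[qid]:
            hits += 1
    return hits / len(ranked_ids)

# e.g. scores = {k: recall_at_k(runs, qrels, k) for k in (3, 5, 10)}
```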
We do not report Recall@1 on MIRACL and as it can be a flawed metric, since retrieval datasets are known for the presence of false negatives. JSQuAD, on the other hand, asks questions relevant to a specific context and has a low volume corpus, so we choose to report recall@1. Our evaluation approach is based on the one of Nouu.Me [https://github.com/nouu-me/document_vector_search_benchmarkhttps://github.com/nouu-me/document_vector_search_benchmark], modified to support additional retrieval methods and datasets. We make the exact version of our evaluation code available [https://github.com/bclavie/document_vector_search_benchmarkhttps://github.com/bclavie/document_vector_search_benchmark]. §.§ Datasets and set-upTo speed up evaluation on limited hardware, we evaluate in the following setting:[Anytime random sampling is mentioned, it is initialised with the random seed 42.]JSQuAD <cit.> We use the validation split, as the test split was not available at the time of this work. Passages explicitly listed as containing an answer for the query are treated as relevant passages, and every other passage is considered irrelevant. The total document count is 1145 documents. MIRACL <cit.> We use the 860 evaluation queries provided. The associated positive passages are considered relevant. We use the validation split as hard negatives are readily available for it in benchmarks. For each query, we sample the top two hundred associated hard negatives [Hard negatives are provided by http://github.com/oshizo/JapaneseEmbeddingEvalhttp://github.com/oshizo/JapaneseEmbeddingEval]. Duplicates are removed. The total resulting document count is 156722.Mr.TyDi(test set) <cit.> We use the provided 720 evaluation queries and their associated positive examples. We additionally sample a random hundred thousand passages from the full corpus, to serve as negative examples. The total document count is 100242. §.§ ModelsOn top of our models, we evaluate an array of existing japanese document representation models: the best performing of Nagoya University's simcse-ja family of embeddings models (sup-simcse-ja-base and sup-simcse-ja-large) <cit.>, GLuCoSE-base-ja[https://huggingface.co/pkshatech/GLuCoSE-base-jahttps://huggingface.co/pkshatech/GLuCoSE-base-ja] and sentence-bert-base-ja-*-v2 models[https://huggingface.co/sonoisa/sentence-bert-base-ja-mean-tokens-v2https://huggingface.co/sonoisa/sentence-bert-base-ja-mean-tokens-v2]. We also report results for the current best-performing embedding models for Japanese Document Retrieval, the multilingual-e5 <cit.> family of models.Unlike the other models, the evaluated e5 models are multilingual in nature, not Japanese-specific, and have been previously exposed to all three of our evaluation datasets during training. Table <ref> provides an overview of which evaluation tasks the various models have been exposed to during training [Information is not provided for the sentence-bert-base-ja-* models, however they were trained prior the release of MIRACL.].§ RESULTS AND DISCUSSION JaColBERT considerably outperforms all existing embedding approaches evaluated on all three tasks, despite all three of them being out-of-domain. Although all three of these tasks are general domain, this suggests strong generalisation potential, with only light fine-tuning potentially needed. 
This makes JaColBERT a strong candidate for a variety of use-cases involving document retrieval, and easily integrable into RAG pipelines through its strong synergy with the DSPy (Demonstrate-Search-Predict) <cit.> approach.On MIRACL and Mr.TyDi, JaColBERT's overall performance lags slightly behind both the base and small multi-lingual e5 models on MIRACL and Mr.TyDi. This small gap in performance can likely be at least partially explained by e5 models having been trained on the training set of both datasets.The difference is greater with the large version of multilingual-e5, possibly due to the larger model being able to make better use of its large pretraining data and in-domain knowledge, although JaColBERT remains not far behind, and considerably closer than all previous Japanese-based approaches. Noticeably, on the dataset which e5 has had no direct exposure to, JSQuAD, JaColBERT outperforms multilingual-e5-large.Moreover, the relatively small performance delta between our approach and these existing models is notable, as JaColBERT has been trained on just 10M triplets for 10 hours on 8 GPUs. On the other hand, multilingual e5 models are the result of an extensive two-step training process with unsupervised training on more than 3.8B sentence pairs followed by supervised training on a variety of retrieval and language entailment datasets.This highlights the strong potential of ColBERT-based retrieval approaches, and greatly reduces the need to rely on extremely costly pre-training datasets to obtain satisfactory results. We believe that this model, as well as future iterations of it, are a strong first step towards supporting generalisable Japanese document retrieval with Japanese-only ressources and lower amounts of compute. § FINAL WORD, CONCLUSION AND FUTURE WORK Thank you for reading this report. I hope it proves to be useful for your research or applications, and I'd highly encourage any researcher or practitioner wanting to further build upon this to reach out! In this work, we have built upon the wealth of work existing in NLP, IR, and Japanese-specific NLP to produce both an improved Japanese retrieval training dataset, augmented with hard negatives, and the best monolingual Japanese document retrieval model on three benchmarks. This model, JaColBERT, leverages the ColBERT model architecture to create efficient Japanese document representation, optimised for retrieval. It significantly outperforms all existing Japanese embedding approaches, and comes closer to matching multilingual models trained on vastly larger amounts of data, despite being evaluated out-of-domain on benchmarks which are in-domain for the multilingual models.We release both our augmented training dataset and JaColBERT, to support applications and future research.Our work also highlights the many shortcomings of our current approach. Our data augmentation techniques, training methods as well as training time are constrained by limited resources, and further work could considerably improve performance.Notably, this work only leverages 10 million training triplets from MMARCO, generates hard negatives in a naive way and does not use or augment any of the other common retrieval datasets. We also do not use the scoring generated from already strong models as teachers for JaColBERT, which has been shown to improve retrieval performance <cit.>. Finally, our evaluation relies on subsamples of large-scale datasets, and evaluating on full-size benchmarks could yield more insight. 
We plan on exploring this in future work.UTF8goth splncs04
http://arxiv.org/abs/2312.16144v1
{ "authors": [ "Benjamin Clavié" ], "categories": [ "cs.CL", "cs.AI" ], "primary_category": "cs.CL", "published": "20231226180705", "title": "JaColBERT and Hard Negatives, Towards Better Japanese-First Embeddings for Retrieval: Early Technical Report" }
Cryptoanalysis McEliece-type cryptosystem based on correction of errors and erasuresThe article was prepared within the framework of the Basic Research Program at HSE University.Kirill Yackushenoks Higher School of Economics Moscow, Russia [email protected] Fedor Ivanov Higher School of Economics Moscow, Russia [email protected] 14, 2024 =====================================================================================================================================================================================The Multi-Agent Pathfinding (MAPF) problem involves finding a set of conflict-free paths for a group of agents confined to a graph. In typical MAPF scenarios, the graph and the agents' starting and ending vertices are known beforehand, allowing the use of centralized planning algorithms. However, in this study, we focus on the decentralized MAPF setting, where the agents may observe the other agents only locally and are restricted in communications with each other. Specifically, we investigate the lifelong variant of MAPF, where new goals are continually assigned to the agents upon completion of previous ones. Drawing inspiration from the successful AlphaZero approach, we propose a decentralized multi-agent Monte Carlo Tree Search (MCTS) method for MAPF tasks. Our approach utilizes the agent's observations to recreate the intrinsic Markov decision process, which is then used for planning with a tailored for multi-agent tasks version of neural MCTS. The experimental results show that our approach outperforms state-of-the-art learnable MAPF solvers. The source code is available at https://github.com/AIRI-Institute/mats-lp.§ INTRODUCTIONMulti-agent pathfinding (MAPF) is a non-trivial problem inspired by numerous practical applications like automated warehouses, video games, intelligent transport systems, etc. A large body of works <cit.> study this problem in a centralized setting, i.e., it is assumed that a central control unit exists that i) has a knowledge of the full state of the environment (locations of the agents, their goals, positions of the static obstacles, etc.) at any time moment; ii) is in charge of providing conflict-free solutions to MAPF queries. Indeed, various flavors of MAPF problems are studied within this setting, e.g., Classical MAPF <cit.> when each agent is assigned a unique goal, Colored MAPF <cit.> when the agents are split into teams and agents of one team are interchangeable, Anonymous MAPF <cit.>, when any agent can pursue any goal, etc. One MAPF variant, that we study in this work, is the Lifelong MAPF (LMAPF). In this setting, the agents must constantly pursue goals (provided externally), i.e., when an agent reaches its goal, it is immediately assigned another one. This setting is motivated by the real-world delivery applications when a group of robots has to constantly deliver some items dispersed in the shared environment, e.g., items of goods in the warehouse, documents in the office building, medicine in the hospital, etc.One of the ways to solve LMAPF is to adapt existing MAPF solvers to the lifelong setting. One of such recent methods, RHCR <cit.>, involves centralized re-planning every k time-steps. Indeed, when k is small and the number of agents is large, the performance of such an approach degrades significantly as it may take too much time for a solver to construct a joint collision-free plan. Bounded-horizon planning can mitigate this issue to a certain extent; indeed, RHCR utilizes this technique. However, this is still limited. 
An appealing orthogonal approach is to solve LMAPF in a distributed fashion, i.e., model it as a decentralized sequential decision-making process when every agent individually decides what action to take at each time step. Most of the state-of-the-art decentralized (L)MAPF solvers are the learnable ones <cit.>. However, the performance of these solvers may rely heavily on the dataset of problem instances used at the learning (training) stage. Their performance often drops significantly in setups that are unlike the latter ones. This is a general problem known in machine learning as low generalization. To mitigate this issue, hybrid approaches were proposed that typically include a (search-based) global planner and a (local) learnable policy that is tailored to follow the global plan while resolving the potential inter-agent conflicts <cit.>. Such approaches also have limitations because, under challenging cases, it is required to move significantly away from the local sub-goal for the agents to disperse in bottlenecks. Learning-based methods may demonstrate low efficiency in this kind of task. Thus, another way of combining the learning-based and search-based approaches is desirable.This work follows a hybrid search-and-learning approach to create a decentralized MAPF solver. However, our methodology is different from the ones described above. On the one hand, we rely on the (lightweight) learnable policy that can drive an agent toward a goal. On the other hand, to improve the agent's ability to cooperate, we utilize Monte-Carlo Tree Search (MCTS). This powerful technique is usually used for antagonistic game environments and single-agent tasks <cit.>. In this work, we follow the seminal AlphaGo approach <cit.> and design a variant of MCTS that uses the suggested learnable policy to evaluate environmental states and provide action distributions to grow the search tree. This contributes to the effective simulation of the different variants of how the agent and the neighboring agents might behave in the future and focus on the most prominent variants (using the MCTS machinery). As a result, all agents can exhibit (implicit) coordination and successfully solve challenging LMAPF instances (i.e., the ones involving long corridors or tight passages, etc.). From the reinforcement learning (RL) perspective our approach may be attributed as model-based RL, i.e., we rely both on the learnable policy and on the model of the world to, first, simulate different variants of how the world might evolve in response to our action, and, second, to choose the most promising action, based on this simulation process.In the empirical evaluation, we compare our method, which we dub MATS-LP (Multi-agent Adaptive Tree Search with the Learned Policy), to the state-of-the-art competitors, i.e., Primal2 <cit.> and SCRIMP <cit.> and show that it numerous cases MATS-LP notably outperforms them. § RELATED WORKS Two streams of research are particularly relevant to our work: learnable (lifelong) MAPF methods and utilizing MCTS for multi-agent systems and MAPF in particular. Next, we review both of these domains. Learnable (L)MAPF Solvers Among the recent works dedicated to MAPF, one of the first ones that were specifically dedicated to creating a learning-based MAPF solver was <cit.>. A combination of reinforcement learning and learning from expert demonstrations was used to create a learnable policy called Primal, tailored to solve conventional MAPF problems. 
Later in <cit.>, an enhanced version of this solver, Primal2, was introduced. The latter was equipped with special corridor reasoning techniques, aiming at avoiding the deadlocks in narrow corridors, and it supported lifelong MAPF setting (therefore, we choose Primal2 as one of the baselines we compare our method to). Among the other learnable MAPF solvers that use reinforcement learning to obtain a decision-making policy, one can name <cit.>. The learnable methods introduced in <cit.> add communication capabilities to the agents, i.e., allow the agents to communicate to resolve deadlocks and avoid congestion. In this work, we compare with one of the most recent communication-based methods, i.e., SCRIMP <cit.>. However, it is worth noting that our method does not rely on agent communication. MCTS for MAPF Initially, Monte Carlo Tree Search (MCTS) algorithms demonstrated their effectiveness in competitive games with complete information, such as chess or Go <cit.>. More recent versions of MCTS utilize deep neural networks to approximate the values of game states instead of relying solely on simulations. These approaches have also shown promising results in single-agent scenarios, where agents can learn a model of the environment and play Atari games <cit.>. Besides gaming, MCTS methods have found applications in other domains, such as matrix multiplication optimization <cit.> and theorem proving using the Hyper Tree approach <cit.>. Additionally, MCTS techniques have demonstrated applicability in robotics <cit.>.Despite the growing interest in utilizing MCTS for multi-agent tasks, there have been limited applications of MCTS for MAPF. In their work <cit.>, the authors propose a multi-agent MCTS for Anonymous MAPF in a grid-world environment. Their environment has a dense reward signal (the agent who reached any goal on the map received a reward and ended the episode), and there are no obstacles, making collision avoidance easier. The authors build a separate tree for each agent using a classical algorithm. They then jointly apply the best actions (forming a plan) from the trees in the simulator to receive true scores of the solution and update the trees on that difference. This approach performs well even with a large number of agents.A recent paper <cit.> proposed a more sophisticated approach for multi-agent planning that combines RL and MCTS. The authors suggested a two-part scheme that includes a goal achievement module and a conflict resolution module. The latter was trained using MCTS. The construction of the search tree for each of the agents was also performed independently, and actions for other agents were selected using the currently trained policy. This work used MCTS only during training to train the conflict resolution policy.§ BACKGROUNDMulti-agent Pathfinding We rely on commonly-used MAPF assumptions as described in the survey work on this topic  <cit.>. The timeline is divided into time steps, and a graph G=(V, E) represents the positions of K agents. Each agent can either wait in its current vertex or move to an adjacent one at each time step. We assume that the outcomes of the actions are deterministic and no inaccuracies occur when executing the actions. A sequence of such actions is referred to as a plan. For different agents, two plans are conflict-free if there are no vertex or edge collisions, meaning that agents do not swap vertices simultaneously or occupy the same vertex at the same time step. 
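The conflict-free condition above is easy to state programmatically. Below is a minimal sketch, assuming plans are given as equal-length sequences of vertices, one entry per time step:

```python
def has_conflict(plan_a, plan_b):
    """plan_a, plan_b: lists of vertices, one entry per time step (equal length)."""
    for t in range(len(plan_a) - 1):
        if plan_a[t] == plan_b[t]:                                     # vertex conflict
            return True
        if plan_a[t] == plan_b[t + 1] and plan_a[t + 1] == plan_b[t]:  # edge (swap) conflict
            return True
    return plan_a[-1] == plan_b[-1]                                    # vertex conflict at the last step

def is_conflict_free(plans):
    return not any(has_conflict(p, q)
                   for i, p in enumerate(plans) for q in plans[i + 1:])
```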
The MAPF problem generally asks to find a set of K plans Plans=plan_1, plan_2, ..., plan_K, s.t. a plan for agent i starts at the predefined start vertex and ends at the predefined goal vertex, and all pairs of plans are conflict-free. In MAPF, it is common to minimize one of the following cost objectives: SOC=∑_i=1^K cost(plan_i) or makespan = max_i cost(plan_i). Here, cost(plan_i) represents the individual plan's cost, which is the number of time steps taken by agent i to reach its goal. In this work, we consider the lifelong variant of MAPF (LMAPF), where immediately after an agent reaches its goal, it is assigned another one (via an external assignment procedure) and has to continue moving to a new goal. Thus, LMAPF generally asks to find a set of K initial plans and update each agent's plan when it reaches the current goal and receives a new one. In extreme cases, when some goal is reached at each step, the plans' updates are needed constantly (i.e., at each time step). Thus, one may think of MAPF as a sequential decision-making problem – at each time step, the next action (for all agents) should be decided. We assume the goal assignment unit is external to the system, and the agents' behavior does not affect the goal assignments. We also assume that any LMAPF instance is additionally characterized by the episode length, L, measured in time steps. After L time steps have passed, the instance is considered to be done (despite some agents being on their way to the currently assigned goals). Conventional MAPF success measures like SOC or makespan are not directly applicable to LMAPF. The most commonly used performance measure in LMAPF is the throughput, which is the average number of goals the agents achieve per one time step. Technically, it is computed as the ratio of the total number of reached goals to the episode length. Multi-agent Partially Observable Markov Decision Process In our work, agents receive information about other agents not on the entire map but only in some local observation of their current position. We assume that each agent is aware of the global goals of other agents visible to them at the current moment. Additionally, each agent is assumed to possess a complete map of static obstacles. The observation function can be defined differently depending on the type of graph. In our experiments, we use 4-connected grids and assume that an agent observes the other agents in an area of size m× m, centered at the agent's current position. In such conditions of partial observability, the agent learns a policy function that allows it to choose an action based on its observation. This setting can formally be represented as a partially observable multi-agent Markov decision process <cit.>: M=⟨ S, A, U, P, R, O, γ⟩. At each timestep, each agent u ∈ U, where U = {1, …, K}, chooses an action a_u ∈ A, forming a joint action 𝐣∈𝐉 = A^K. This joint action leads to a change in the environment according to the transition function P(s' | s, 𝐣): S ×𝐉× S → [0, 1]. After that, each agent receives an individual observation o_u ∈ O based on the global observation function G(s, a): S × A → O, and an individual reward R(s, u, 𝐣): S × U ×𝐉→ℝ that depends on the current state, the agent, and the joint action. Thus, the joint reward is r=∑_u R(s, u, 𝐣). To make decisions, each agent conditions a stochastic policy on its observation o_u: π_u(a_u | o_u): O × A → [0, 1]. The task of the learning process is to optimize the policy π_u for each agent so as to maximize the expected cumulative reward over time.
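The decentralized decision loop implied by this formalization, together with the throughput metric, can be summarized with the following schematic sketch; `env` and `policies` are placeholders rather than an actual API:

```python
def run_lmapf_episode(env, policies, episode_length):
    """Decentralized LMAPF rollout: each agent maps its local observation to an action."""
    observations = env.reset()                # one local observation per agent
    goals_reached = 0
    for _ in range(episode_length):
        joint_action = [policies[u].act(observations[u]) for u in range(env.num_agents)]
        observations, newly_reached = env.step(joint_action)   # the environment assigns new goals itself
        goals_reached += newly_reached
    return goals_reached / episode_length     # throughput: goals per time step
```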
Monte-Carlo Tree Search In our work, we use Monte-Carlo Tree Search (MCTS) as a model-based variant of the learnable agent's policy π_u. MCTS is a powerfulsearch method well-suited for sequential decision-making problems. Paired with state-of-the-art machine learning techniques, MCTS has recently achieved super-human performance in various board- and video games, see <cit.> for example. In MCTS-based methods, the agent picks an action given a state of the environment based on extensive simulating of how the environment would change and what rewards would be obtained if different actions are sequentially executed. MCTS is composed of four steps executed iteratively and intended to simultaneously build and explore the search tree: selection, expansion, simulation, and backpropagation. Selection is aimed at descending the constructed so far search tree. Conceptually, this can be seen as picking the most promising partial plan. To balance between the exploration and the exploitation, MCTS relies on assessing the nodes using the probabilistic upper confidence bound applied to the tree (PUCT) <cit.>.When the tree is descended, and the leaf node is picked, the latter is expanded by selecting an un-probed action and adding a new node to the tree. The added node is evaluated by simulating actions using a random or learnable policy, and the resulting reward is specially backpropagated through the tree. The process is repeated until the time budget is reached. When it happens, the action corresponding to the most visited outgoing edge of the root node is chosen to be executed. In this work, we will present our adaptation of MCTS for multi-agent partially-observable pathfinding. § METHOD Our method combines two principal ingredients. First, we employ the machinery of MCTS for an agent to reason about the possible future states of the environment and to choose the most promising action to be performed at the current time step, i.e., such action that, on the one hand, maximizes the chance of reaching the goal (eventually) and, on the other hand, decrease the chances of collisions and deadlocks with the other agents. Second, we use a learnable policy inside the MCTS simulation step. This policy is, indeed, approximated by a neural network and is tailored to accomplish MAPF tasks from the perspective of the single agent. We utilize the prominent actor-critic reinforcement learning method, i.e., Proximal Policy Optimization (PPO) <cit.>, to pre-train such a policy. Importantly, as this policy is extensively used in MCTS to simulate the future states of the environment, it should be computationally efficient (fast). In practice, this means that the neural network that approximates the policy should contain a low number of parameters (weights). Motivated by this, we use a relatively compact neural network in this work that contains 161 thousand parameters compared to millions of them in conventional state-of-the-art learnable policies (e.g., the number of parameters in one of the recent methods we compare, SCRIMP, is about 9 million).§.§ Solving Decentralized MAPF Tasks with RL Numerous multi-agent Reinforcement Learning (MARL) algorithms can be used to solve the MAPF problem in partial observability. For incorporating an algorithm within MCTS in our case, the family of actor-critic methods, such as PPO <cit.>, MAPPO <cit.>, or FACMAC <cit.>, is the most suitable. 
In our experiments, we utilize the PPO algorithm, which learns a shared policy independently for each agent. In addition to choosing the algorithm, it is necessary to define the observation space and the reward function with which the algorithm is trained in the environment. We make available to the agent local information comparable to that used in the PRIMAL2 algorithm: the agent has information about the static obstacles on the entire map, knows its current target, and can obtain information about the other agents and their current targets within its field of view. We refer to the proposed approach as CostTracer, a name that emphasizes the design of the reward function and of the neural network inputs. It utilizes only two input matrices and a simple reward function. A schematic representation of CostTracer is outlined in Figure <ref>.

The agent's observation is defined as two matrices of the observation size m× m. The first matrix represents the positions of the other agents (+1 if an agent is present and 0 if not). The second matrix represents the normalized inverted cost-to-go function. Each time a target is received, the cost-to-go function is calculated using the breadth-first search (BFS) algorithm. It is provided to the agent in a normalized and inverted form: a value of 1 in the matrix corresponds to the cell closest to the target among those visible within the agent's observation, obstacles are represented by -1, and all other values fall between 0 and 1.

We define the reward function as follows: the agent receives a reward of +r if it reaches a cell that is closer to the goal than any cell visited so far in the current episode, where closeness is measured by the shortest-path distance given by the cost-to-go function. In all other cases, the agent receives a reward of 0. This reward function provides a dense signal while preventing reward exploitation, as the behavior that maximizes the reward guarantees getting closer to the goal.

Our neural network architecture employs a Spatial Encoder and Action/Value Decoder heads for the actor and critic components, drawing inspiration from the AlphaZero approach <cit.> (see Figure <ref>). The proposed architecture stands out by utilizing significantly fewer parameters than PRIMAL2 and SCRIMP, enabling the algorithm to be trained on a single GPU in less than one hour. Despite its simplicity compared to other state-of-the-art algorithms, this setup demonstrates promising results, as shown in the experimental section.
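To illustrate the observation encoding and the reward just described, a simplified sketch is given below. It assumes a 4-connected grid stored as a boolean numpy array; the exact normalization of the inverted cost-to-go and all function and variable names are our own illustrative choices, not the released CostTracer code.

```python
import numpy as np
from collections import deque

def cost_to_go(obstacles, goal):
    """BFS shortest-path distances to `goal` on a 4-connected grid (True = obstacle)."""
    dist = np.full(obstacles.shape, np.inf)
    dist[goal] = 0
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < obstacles.shape[0] and 0 <= nc < obstacles.shape[1]
                    and not obstacles[nr, nc] and dist[nr, nc] == np.inf):
                dist[nr, nc] = dist[r, c] + 1
                queue.append((nr, nc))
    return dist

def build_observation(obstacles, dist, other_agents, pos, m):
    """The two m x m input matrices, centred at the agent's position `pos`."""
    half = m // 2
    H, W = obstacles.shape
    agents = np.zeros((m, m))
    costs = np.full((m, m), -1.0)     # obstacles and out-of-map cells keep -1
    visible = []
    for i in range(m):
        for j in range(m):
            r, c = pos[0] - half + i, pos[1] - half + j
            if 0 <= r < H and 0 <= c < W and not obstacles[r, c] and np.isfinite(dist[r, c]):
                visible.append((i, j, r, c))
                if (r, c) in other_agents:
                    agents[i, j] = 1.0
    if visible:
        d = np.array([dist[r, c] for _, _, r, c in visible])
        # Assumed normalization: the visible cell closest to the goal gets 1 and the
        # farthest gets 0; the paper fixes only the endpoints, not the exact formula.
        inv = 1.0 - (d - d.min()) / max(d.max() - d.min(), 1.0)
        for (i, j, _, _), v in zip(visible, inv):
            costs[i, j] = v
    return agents, costs

def step_reward(dist, pos, best_dist_so_far, r=1.0):
    """+r only when the agent gets closer to its goal than ever before in this episode."""
    if dist[pos] < best_dist_so_far:
        return r, dist[pos]
    return 0.0, best_dist_so_far
```

Computing the cost-to-go once per assigned goal keeps the per-step cost low: at every step only the local m× m window has to be cropped and re-normalized.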
§.§ Multi-agent Neural MCTS for Intrinsic MDPs

The scheme of multi-agent neural MCTS is sketched in Figure <ref>. Since only partial information is available to each agent in the environment, the use of a centralized scheduler is not possible. To be able to plan in such situations, we suggest using an intrinsic MDP (IMDP). To this end, an intrinsic environment is created based on the egocentric observation of the agent (obstacles, other agents, and their current goals). Only the agents that the agent observes at the current step are included in this environment. All other cells that are not obstacles are considered empty.

Even within this intrinsic environment, the number of agents can be substantial. To tackle this, we present an action masking technique contingent on the proximity of the other agents to the agent for which the planning is conducted. The agents' proximity is established using the BFS algorithm within the field of view. For the first K agents, including the agent itself, all feasible actions that avoid obstacles are considered (invalid actions are masked). Conversely, for the remaining agents, we adopt a greedy policy whereby only the action with the highest probability is used. We denote the set of distant agents as 𝐃 and restrict the action space of each of these agents to the single action with the highest probability, A_𝐃^u = argmax_a_u ∈ A π(o_u, a_u) (predicted by CostTracer). The final number of transitions is determined by multiplying the numbers of unmasked actions of the proximal agents. During the lookahead search in such an MDP, the joint reward r of all agents is maximized. The reward function of the IMDP is identical to the reward function of CostTracer.

Each node within the search tree corresponds to an intrinsic state s of the IMDP. For every joint action 𝐣 applied in state s, an edge (s, 𝐣) is established to store a set of statistics {N(s,𝐣), Q, r, π_𝐣}. Here, N represents the visit count, Q is the mean joint Q-value, r is the joint reward acquired from the IMDP upon executing action 𝐣, and π_𝐣 stands for the probability of the joint action 𝐣. Notably, we use the term s to refer to the state of the IMDP. The search process is divided into three distinct stages:

Selection. Node selection starts from the tree root s^0, which is the initial state of the IMDP. The selection process continues until a leaf node is reached, which we denote as s^l, where l represents the length of a single iteration of the lookahead search. Each action is chosen based on the statistics stored in the nodes. This procedure follows the PUCT bound, as utilized in <cit.>:

𝐣^k = argmax_𝐣( Q(s, 𝐣) + c π_𝐣 √(∑_𝐢 N(s, 𝐢))/(1 + N(s,𝐣))).

Here, 𝐢 ranges over all possible joint actions from the current node, and π_𝐣=∏_u π_u(a_u | o_u) is the probability of the joint action. The constant c controls the influence of the policy distribution on Q. The transition to the next state of the intrinsic environment is performed by applying 𝐣 in it. π_𝐣 and the value estimate of the node, v^l = ∑_u v(o_u, 𝐣), are calculated using CostTracer, and the reward r is accumulated using the signal provided by the IMDP.

Expansion. At the final step l, a new node is created. The transition to the next state of the intrinsic environment is carried out by applying the action 𝐣^l. The action probabilities π_u(a_u^l | o_u^l) are calculated using CostTracer, and the reward for each agent R(s,u, 𝐣) is accumulated using the signal provided by the IMDP. The statistics of the new node are initialized as follows: N^l(s^l, 𝐣^l)=0, Q^l=0, r=∑_u R(s,u, 𝐣), π_𝐣^l= ∏_u π_u^l(a_u^l | o_u^l).

Backpropagation. This is the final step, where the accumulated statistics along the trajectory are updated. The update is computed using a discount factor γ, similar to the classic RL setup. To form an estimate of the cumulative discounted reward for the trajectory, we use:

G^k = ∑_τ=0^l-1-k γ^τ r_k+1+τ + γ^l-k v^l.

After that, the statistics for each edge (s^k-1, 𝐣^k) are updated as follows:

Q(s^k-1,𝐣^k) := (N(s^k-1,𝐣^k) · Q(s^k-1,𝐣^k) + G^k)/(N(s^k-1,𝐣^k) + 1),

N(s^k-1, 𝐣^k) := N(s^k-1,𝐣^k) + 1.

The final action for the agent is determined as the action belonging to the most visited edge outgoing from the tree's root, as measured by the number of visits N(s, 𝐣). The action a_u of the agent on behalf of which the IMDP was built is taken from 𝐣. The final joint action in the global environment is composed of the actions of all egocentric agents, each planned with MCTS in its own IMDP. After executing this joint action in the environment, each agent receives its local observation, recreates its IMDP, and the process repeats.
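The per-edge statistics and the selection/backpropagation rules above can be condensed into the following sketch. Class and function names are illustrative; the enumeration of joint actions, the CostTracer calls producing priors and value estimates, and the IMDP simulation itself are omitted.

```python
import math

class Edge:
    """Statistics stored for an edge (s, j) of the search tree."""
    def __init__(self, prior, reward):
        self.N = 0          # visit count N(s, j)
        self.Q = 0.0        # mean joint Q-value
        self.r = reward     # joint IMDP reward obtained by applying j in s
        self.prior = prior  # joint-action probability, prod_u pi_u(a_u | o_u)

def select_joint_action(edges, c):
    """PUCT rule: argmax_j Q(s, j) + c * pi_j * sqrt(sum_i N(s, i)) / (1 + N(s, j))."""
    total_visits = sum(e.N for e in edges.values())
    return max(edges, key=lambda j: edges[j].Q
               + c * edges[j].prior * math.sqrt(total_visits) / (1 + edges[j].N))

def backpropagate(path, leaf_value, gamma):
    """Update the traversed edges with the discounted returns G^k.

    `path` lists the edges from the root to the expanded leaf, in order;
    `leaf_value` is v^l, the CostTracer value estimate of the new node.
    """
    G = leaf_value                          # G^l = v^l
    for edge in reversed(path):             # edges (s^{l-1}, j^l), ..., (s^0, j^1)
        edge.Q = (edge.N * edge.Q + G) / (edge.N + 1)
        edge.N += 1
        G = edge.r + gamma * G              # G^{k-1} = r_k + gamma * G^k

def root_action(root_edges):
    """The executed joint action is the most visited edge at the tree root."""
    return max(root_edges, key=lambda j: root_edges[j].N)
```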
§ EMPIRICAL EVALUATION

§.§ Experimental Setup

To evaluate the efficiency of MATS-LP, we have conducted a set of experiments comparing it with existing learnable approaches tailored to solving LMAPF problems. The episode length was set to 512 in all the experiments. All the agents had the same parameters: the field of view was 11 × 11, all possible actions were considered only for the 3 closest agents (including the main agent), the γ-value was set to 0.96, the number of expansions per iteration to 250, and the coefficient c to 4.4. More details and the values of the remaining parameters are given in the Hyperparameters section below.

For the comparison, two other learnable approaches were chosen: PRIMAL2 <cit.>, a state-of-the-art method for solving lifelong MAPF, and SCRIMP <cit.>, a recently presented method that has shown impressive results in solving single-shot MAPF. According to the results presented in the original SCRIMP paper, it clearly outperforms other existing approaches, PICO <cit.> and DHC <cit.>; thus, they were not taken as baselines. We have used the implementations and the network weights provided by the authors of PRIMAL2[https://github.com/marmotlab/PRIMAL2] and SCRIMP[https://github.com/marmotlab/SCRIMP]. The code of SCRIMP has been adapted to solving lifelong MAPF. To be more precise, the SCRIMP-local version was used, which has a limited communication radius (5) and shows better results. The size of the field of view for SCRIMP was set to 3×3, while for PRIMAL2 it was 11×11.

The comparison was conducted on three types of maps with different topologies. The first one consists of 20×20 grids with randomly placed obstacles, with obstacle density varying from 0% to 30%. In total, 40 random maps were used, and for each map 5 different instances with randomly placed start and goal locations were generated. The second type of map is the maze-like environments generated using the generator taken from the PRIMAL2 repository. We have generated mazes of sizes 10×10, 20×20, and 30×30, 50 maps per size, with 1 (randomly generated) problem instance per map. Finally, the 33 × 46 warehouse map from <cit.> was used for evaluation; 10 random instances on this map were generated.

To train the CostTracer algorithm, we used an open-sourced asynchronous implementation of the PPO algorithm[https://github.com/alex-petrenko/sample-factory]. A ResNet encoder with one residual layer was utilized as the Spatial Encoder, and the hidden layer sizes of the multi-layer perceptron (MLP) blocks of the Action/Value Decoder were set to 32. The training process used a discount factor (γ) of 0.96 and a learning rate of 0.00019. More detailed parameter descriptions can be found in the Hyperparameters section below. We employed a Bayesian hyperparameter search to optimize the algorithm's parameters and architecture. In total, we conducted 100 algorithm runs, which roughly corresponds to 120 GPU hours using a single Titan RTX. The model that showed the best results with fewer parameters was chosen.

§.§ Results

The results on the random maps are presented in Figure <ref>. In this experiment MATS-LP outperforms SCRIMP in all the cases, gaining 15.6% higher throughput on average. At the same time, PRIMAL2 demonstrates poor performance, with less than half the throughput of MATS-LP on average.
Such behavior is explained by the fact that PRIMAL2 is tailored to maps that consist of corridors, such as maze environments. Moreover, it was trained on maze-like maps, as was MATS-LP; thus, random maps are out-of-distribution for both of these approaches. The shaded areas indicate the 95% confidence intervals. A detailed analysis of the results has shown that throughput can vary significantly from map to map, as some maps contain a narrow passage dividing the map into two parts, and many agents get stuck trying to pass through the passage in opposite directions, blocking each other.

The results of the second series of experiments on maze-like maps of various sizes are presented in Figure <ref>. As in the first series of experiments, MATS-LP significantly outperforms both competitors. Compared to SCRIMP, it has shown 46.2% higher throughput on average, while PRIMAL2 was outperformed by 38.8%. In most of the cases, SCRIMP and PRIMAL2 demonstrate almost the same efficiency on average, with the only exception of the 30×30 maze maps with 32 or 64 agents, where PRIMAL2 substantially outperformed SCRIMP, demonstrating somewhat better scalability on this type of map.

The last series of experiments involved a warehouse-like map taken from <cit.>. We utilized the same way of generating start and goal locations for the agents as in the original paper: start locations for all the agents may be placed only on the left or right edge of the map, while goal locations may be placed only near the obstacles in the middle of the map. Due to the limitations imposed on the possible start locations, the total number of agents cannot exceed 192. Following these rules, we have generated 10 different problem instances. The results of these experiments are shown in Figure <ref>. In addition to measuring the average throughput of the approaches, we have also estimated the time required to make a decision about the next action per agent and conducted an ablation study of MATS-LP.

The left plot of Figure <ref> shows the averaged throughput. Again, MATS-LP demonstrates better performance: its throughput is 15.8% higher than that of SCRIMP (on average), and 27.1% higher than that of PRIMAL2. The middle plot demonstrates the time required by each of the solvers to make a decision about the next action for a single agent (decision time). We added a line for CostTracer, the learnable policy used within MATS-LP, to this plot. Clearly, its decision time is almost constant, while that of MATS-LP increases from 103ms to 300ms when the number of agents goes from 32 to 192. This may be explained by the increasing number of agents that appear in the field of view of each agent and the fact that MATS-LP predicts the actions of these observable agents, which takes time (as it necessitates running CostTracer more frequently). Similarly, the decision time for SCRIMP is not constant and ranges from 25ms to 107ms, due to the need for coordinating movements with a larger number of agents. Although MATS-LP requires more time than SCRIMP to make a decision, its scalability is slightly better, i.e., the difference in decision time between 192 agents and 32 agents is 3x for MATS-LP and 4x for SCRIMP.

The right plot in Figure <ref> demonstrates the results of the ablation study for MATS-LP. CostTracer is MATS-LP with MCTS turned off. We have also evaluated MATS-LP with a random policy instead of CostTracer.
The term “No proximal planning” refers to the variant where planning is executed solely for the egocentric agent, with only the highest-probability actions selected for the other agents. Furthermore, we experimented with increasing the number of expansions to 500 and reducing it to 125, compared to the 250 expansions used by the basic version of MATS-LP. The worst results are demonstrated by the version that utilizes a random policy instead of CostTracer, which indicates the crucial importance of the learnable policy. The next lowest throughput is obtained by CostTracer alone, whose throughput is almost half that of MATS-LP. The results of the version that plans for the egocentric agent only get worse as the number of agents grows, since the increased density of agents increases the need for coordination between them. The versions with an increased/decreased number of expansions show results slightly better and slightly worse than the basic version, respectively. The latter indicates that, while MATS-LP has a relatively high decision time, it can actually be adjusted to the required decision time or even run in an anytime fashion, adapting to a specific time budget.

§.§ Hyperparameters

Table <ref> presents the hyperparameters of the CostTracer and MATS-LP approaches. “Number of agents” denotes the number of agents in the environment in which CostTracer was trained. The table's parameters marked “tuned” were optimized using Bayesian search. For the other parameters, we used default values commonly used in other studies. The parameter root exploration ratio corresponds to the noise (with uniform distribution) added at the tree root that facilitates exploration in the MCTS algorithm.

§ CONCLUSION

In this work, we have studied the Lifelong MAPF problem and suggested a solver based on Monte-Carlo Tree Search equipped with a (lightweight) learnable policy tailored to solve MAPF from the individual agent's perspective. The resultant solver is decentralized and does not require explicit agent communication. Empirically, we have shown that our solver can generalize well to unseen LMAPF instances and outperform state-of-the-art competitors in different challenging setups. A prominent direction for future research is to develop a fully learnable MCTS for LMAPF, i.e., to learn the simulation policy with MCTS itself, as in <cit.>.
http://arxiv.org/abs/2312.15908v1
{ "authors": [ "Alexey Skrynnik", "Anton Andreychuk", "Konstantin Yakovlev", "Aleksandr Panov" ], "categories": [ "cs.AI", "cs.LG", "cs.MA" ], "primary_category": "cs.AI", "published": "20231226065722", "title": "Decentralized Monte Carlo Tree Search for Partially Observable Multi-agent Pathfinding" }