Time evolution of entanglement entropy after quenches in two-dimensional free fermion systems: a dimensional reduction treatment

Pasquale Calabrese

January 14, 2024

Department of Physics, Chuo University, Bunkyo, Tokyo 112-8551, Japan
SISSA and INFN, via Bonomea 265, 34136 Trieste, Italy
International Centre for Theoretical Physics (ICTP), Strada Costiera 11, 34151 Trieste, Italy

We study the time evolution of the Rényi entanglement entropies following a quantum quench in a two-dimensional (2D) free-fermion system. By employing dimensional reduction, we effectively transform the 2D problem into decoupled chains, a technique applicable when the system exhibits translational invariance in one direction. Various initial configurations are examined, revealing that the behavior of the entanglement entropies can often be explained by adapting the one-dimensional quasiparticle picture. However, intriguingly, for specific initial states the entanglement entropy saturates to a finite value without the reduced density matrix converging to a stationary state. We discuss the conditions necessary for a stationary state to exist and delve into the necessary modifications to the quasiparticle picture when such a state is absent.

§ INTRODUCTION

One of the most fundamental problems connecting quantum and statistical physics is how a statistical ensemble emerges in a closed many-body quantum system that evolves unitarily <cit.>. The common wisdom is that entanglement generates non-local correlations unique to quantum systems that spread throughout the system by unitary evolution: the resulting reduced density matrix for a local subsystem relaxes to a statistical ensemble such as the Gibbs ensemble <cit.>, or, in the case of integrable systems, a generalized Gibbs ensemble (GGE) <cit.>.
Given that the (von Neumann) entanglement entropy quantifies the amount of entanglement between the subsystem and the rest of the system <cit.>, understanding its behavior in a unitary evolution is important to clarify how entanglement spreads and how thermodynamics arises in isolated quantum systems.A popular and tractable setup to investigate the unitary evolution of a quantum many-body system is the quantum quench: the system is initially prepared in a non-equilibrium pure state |ψ_0⟩ and then it is let evolve in time with a post-quench Hamiltonian H, |ψ(t)⟩=e^- t H|ψ_0⟩.In recent years, this protocol has been investigated not only theoretically but also experimentally thanks to remarkable developments in cold atom and ion trap systems <cit.>.The quench dynamics of the entanglement entropy in one-dimensional systems has been extensively studied in the literature.It has been found that it linearly increases in time and eventually saturates to a constant <cit.>; the latter can be identified with the thermodynamic entropy of the (generalized) Gibbs ensemblethat describes locally the system at large times <cit.>.For integrable systems, this behavior is explained by the quasiparticle picture <cit.> in whichthe entanglement growth is due to thepropagation of pairs of entangled quasiparticles.This picture has been validated for one-dimensional free <cit.> and interacting integrable systems <cit.> and generalized to different contexts <cit.> and quantities <cit.>, see also the reviews <cit.>.The same qualitative behavior for the time evolution of the entanglement entropy has been found in generic non-integrable/chaotic interacting models with no quasiparticles, see e.g. <cit.>.For many years, there has been a prevailing belief that the microscopic mechanism for the entanglement growth is fundamentally different in integrable and chaotic systems; only recently a unifying picture emerged in the space-time duality approach <cit.>. The Rényi entanglement entropies are a natural and important generalization of von Neumann one, not only because they help to calculate the former via the replica trick <cit.>, but also because they carry further relevant information about the system and are measurable in cold atom and ion trap experiments <cit.>.While the quasiparticle picture captures theevolution of the Rényi entropies for free systems <cit.>, it breaks down for interacting integrable models <cit.>.In higher dimensions d≥2, the entanglement entropy in equilibrium has been largely investigated, see e.g. Refs. <cit.>;on the contrary its time evolution after a quantum quench has received little attention, mainly in field theory context <cit.>.For this reason, here we study the quench dynamics of Rényi entanglement entropies in a 2D free-fermion system. In particular, we apply a dimensional reduction approach, which was introduced in Ref. <cit.> and then has been applied to study, e.g., the (symmetry-resolved) entanglement entropy at equilibrium <cit.>.For a finite 2D system, periodic in both directions, this approach works as follows.The initial configuration should be translationally invariant (non necessarily one-site shift invariant as we shall see) along one of the axes. Next we should choose as subsystem a periodic strip in this direction, as shown in Fig. 
<ref>.We can then decompose the Rényi entanglement entropies into the sum of the single-interval entanglement entropies of decoupled one-dimensional (1D) systems, for which exact results are known.We apply this strategy to calculate analytically the time evolution of the Rényi entanglement entropies for several particular initial configurations.We will see that our results can be explained in terms of a direct adaptation of the 1D quasiparticle picture, except for one particular initial configuration.The reason of such a mismatch is that the reduced density matrix does not attain a stationary value — even if its entanglement entropy does tend to a constant value.We then discuss the general conditions under which there is no stationary state in our 2D setting.From this observation, we deduce how the quasiparticle picture has to be modified to describe the behavior of the entropy in the absence of a stationary state.The paper is organized as follows. In Sec. <ref>, weintroduce the setup and some basic quantities, including the Rényientanglement entropy. In Sec. <ref>, wedescribe the dimensional reduction approach. In Sec. <ref>, we apply it toobtain analytically the behavior of the entropies in quenches fromseveral initial configurations. In Sec. <ref>, we give a physical interpretation of the results obtained in the previous section in terms of the quasiparticle picture.In Sec. <ref>, we analyze the conditions for the existence of a stationary state and discuss how the quasiparticle picture modifies in that case.We finally draw our conclusions and present some outlooks in Sec. <ref>.We also include several appendices, where we derivesome of the results presented in the main text.§ SETUP AND DEFINITIONSWe consider free fermions on a 2D square lattice with isotropic hopping between nearest-neighbor sites. The system is described by the Hamiltonian H = -1/2∑_⟨ i,i'⟩ a_ i^† a_ i' +H.c.,where 𝐢=(i_x,i_y) is a vector identifying a site of the lattice, ⟨𝐢,𝐢'⟩ stands for the nearest neighbors, and a_𝐢=a_i_x,i_y(a_𝐢^†=a_i_x,i_y^†) is the annihilation (creation) operator of the fermion on the 𝐢-th site.We assume that the system size L_x(L_y) along the x(y)-axis is even and that periodic boundary conditions are imposed along bothdirections. Moving to Fourier modesã_𝐪 = ã_q_x,q_y = 1/√(L_xL_y)∑_𝐢 e^-𝐪· 𝐢 a_𝐢,with quasi-momenta q_x=0,2π/L_x,...,2π(L_x-1)/L_x and q_y=0,2π/L_y,...,2π(L_y-1)/L_y, the Hamiltonian (<ref>) is diagonalized as H = ∑_ qϵ_𝐪ã_𝐪^†ã_𝐪,where the single-particle dispersion is ϵ_𝐪=-cos q_x-cos q_y. We consider the quantum quench described by the time-evolved state |ψ(t)⟩=e^- t H|ψ_0⟩ with an initial configuration |ψ_0⟩ that is not an eigenstate of the Hamiltonian (<ref>). We take as a subsystem A a periodic strip of length ℓ, as depicted in Fig. <ref>. That is, subsystem A is the set of sites 𝐢 satisfying i_x∈[0, ℓ-1]. The state of A isdescribed by the reduced density matrix ρ_A(t)=_B(|ψ(t)⟩⟨ψ(t)|),where _B denotes the trace over the subsystem B.The Rényi entanglement entropy, S_n(ρ_A)=1/1-nlog(ρ_A^n),measures the degree of entanglement between subsystems A and B. 
In the limit n→ 1, it gives the von Neumann entanglement entropy,S_1(ρ_A)=lim_n→1S_n(ρ_A)=-(ρ_Alogρ_A).Hereafter, we write S_n(ρ_A) as S_n unless explicitly stated.In this paper, we considerinitial states |ψ_0⟩ that satisfy Wick theorem.Therefore, since the post-quench Hamiltonian is a quadratic fermionic operator, the time-evolved reduced density matrix ρ_A(t) is Gaussian and it is fully characterized by the two-point correlation matrix restricted to subsystem A <cit.>,Γ_𝐢,𝐢'(t)= 2 ⟨ψ(t)|𝐚_𝐢^†𝐚_ i'|ψ(t)⟩ -δ_𝐢,𝐢',where 𝐚_𝐢 = (a_𝐢^†,a_𝐢) and 𝐢,𝐢'∈ A. Γ is a matrix of dimension 2V_A× 2V_A, and V_A is the size of the subsystem A, V_A=ℓ L_y. Using the standard algebra of Gaussian operators, the entanglement entropy can be expressed in terms of the two-point correlation matrix Γ as <cit.>S_n=1/2(1-n)log[ (I+Γ/2)^n+(I-Γ/2)^n],where I is the 2V_A× 2V_A identity matrix. For finite V_A, we can compute the trace in Eq. (<ref>) and obtain the exact value of the entanglement entropy by numerically diagonalizing the two-point correlation matrix Γ.We will use this method as a benchmark of the analytic results obtained in the following sections.To perform analytical calculations, it is useful to write the right-hand side of Eq. (<ref>) as a Taylor series in the moments [Γ^m]. To this end, we introduce the function h_n(x),h_n(x)=1/1-nlog[ (1+x/2)^n + (1-x/2)^n ]. This function can be expanded ash_n(x)=∑_m=0^∞ a_n(2m) x^2m,and, therefore, Eq. (<ref>) can be rewritten in the form S_n=1/2∑_m=0^∞ a_n(2m) (Γ^2m).As we will see in the following sections, the precise form of the coefficients a_n(2m) is never needed and hence will not be reported.§ DIMENSIONAL REDUCTIONIn this section, we present the dimensional reduction approach that we will employ in Sec. <ref> to calculate analytically the time evolution of the Rényi entanglement entropy in different quantum quenches.The treatment is generically valid for initial states |ψ_0⟩ that are invariant under k-site translations in the y-direction, with k being a factor of L_y.For clarity, we present first the case k=1 and after we generalize straightforwardly to arbitrary k. §.§ One-site shift-invariant states in the transverse direction We start considering the case when the initial state |ψ_0⟩ is translationally invariant in the y-direction.Since the Hamiltonian (<ref>) preserves the translational symmetry, the time-evolved state |ψ(t)⟩ is also invariant.Given the geometry of the subsystem A considered, it is useful to take the Fourier transform only along the y-direction by introducing the fermionic operators in a mixed space-momentum basisc_i_x,q_y =1/√(L_y)∑_i_y e^- q_y i_y a_i_x,i_y,which is the core of the dimensional reduction method.The two-point correlation function Γ can be written asΓ_(i_x, j_y), (i_x', j_y') =1/L_y∑_n_y=0^L_y-1 e^ i2 π n_y/L_y(j_y-j_y')(Γ_q_y)_i_x, i_x',where Γ_q_y is the 2ℓ× 2ℓ two-point correlation matrix in the mixed space-momentum representation and the sum of q_y=2π n_y/L_y runs on the L_y allowed transverse modes.As a consequence, modulo the Fourier transform which is a unitary operation, we have the decompositionΓ≃⊕_n_y=0^L_y-1Γ_q_y,and so the entanglement entropy admits the decomposition in L_y independent termsS_n = ∑_n_y=0^L_y-1 S_n(Γ_q_y),where we denoted by S_n(Γ_q_y) the Rényi entropy of the Gaussian state of a 1D chain with correlation matrix Γ_q_y. Notice that the decomposition(<ref>) is valid for arbitrary values of L_x and L_y, not necessarily in the thermodynamic limit. 
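The decomposition above is easy to check numerically. As a minimal sketch (the code, the chosen sizes, and the test state are our own illustrative assumptions, not part of the derivation), the following Python snippet builds the correlation matrix of a Gaussian state that is one-site shift invariant in the y-direction — for simplicity a Fermi-sea eigenstate of the 2D hopping Hamiltonian rather than one of the quench states studied below — restricts it to a periodic strip of width ℓ, and verifies that the von Neumann entropy of the strip equals the sum of the single-interval entropies of the decoupled 1D chains labeled by q_y:

```python
import numpy as np

# Illustrative sizes (assumptions, not values used in the paper).
Lx, Ly, ell = 20, 6, 6

qx = 2 * np.pi * np.arange(Lx) / Lx
qy = 2 * np.pi * np.arange(Ly) / Ly
QX, QY = np.meshgrid(qx, qy, indexing="ij")
occ = (-np.cos(QX) - np.cos(QY)) < 0          # occupied modes eps_q < 0: a Fermi-sea Gaussian state

def vn_entropy(C):
    """von Neumann entropy of a Gaussian state from the eigenvalues of C_{ij} = <a_i^dag a_j>."""
    nu = np.clip(np.linalg.eigvalsh(C), 1e-12, 1 - 1e-12)
    return float(-np.sum(nu * np.log(nu) + (1 - nu) * np.log(1 - nu)))

# Full 2D correlation matrix restricted to the strip A = {0 <= i_x < ell, all i_y}.
sites = [(ix, iy) for ix in range(ell) for iy in range(Ly)]
C2d = np.zeros((len(sites), len(sites)), dtype=complex)
for a, (ix, iy) in enumerate(sites):
    for b, (jx, jy) in enumerate(sites):
        phase = np.exp(1j * (QX * (ix - jx) + QY * (iy - jy)))
        C2d[a, b] = np.sum(phase[occ]) / (Lx * Ly)
S_full = vn_entropy(C2d)

# Dimensional reduction: one ell x ell correlation matrix per transverse momentum q_y.
S_sum = 0.0
for ny in range(Ly):
    occ_x = occ[:, ny].astype(float)          # occupied q_x modes in this q_y sector
    dx = np.arange(ell)[:, None] - np.arange(ell)[None, :]
    C1d = (np.exp(1j * np.outer(dx.ravel(), qx)) @ occ_x / Lx).reshape(ell, ell)
    S_sum += vn_entropy(C1d)

print(S_full, S_sum)                           # the two values agree up to numerical precision
```

The same check applies verbatim to the time-evolved states considered in the next section, since only translational invariance along the transverse direction is used.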
§.§ k-site shift-invariant states in the transverse direction Let us now assume that the initial state |ψ_0⟩ is invariant under k-site translations in the y-direction with k being a factor of L_y.Once again, since the Hamiltonian (<ref>) preserves the translational symmetry, the time-evolved state |ψ(t)⟩ is also invariant under k-site translations.This property is inherited by the two-point correlation matrix Γ(t) (<ref>), and its entries satisfy Γ_(i_x,i_y+mk),(i_x',i_y'+mk) = Γ_(i_x,i_y),(i_x',i_y'), ∀ m∈ℤ.Therefore, it is convenient to decompose the index in the y-direction as i_y=kj_y+p, with j_y=0,…, L_y/k-1 and p=0, …, k-1, and then rearrange the entries of Γ(t) in k× k blocks of the formΓ_(i_x, j_y), (i_x', j_y')(t)=2⟨ψ(t)|𝐚⃗_i_x, j_y^†𝐚⃗_i_x', j_y'|ψ(t)⟩ -δ_i_x, i_x'δ_j_y, j_y'where 𝐚⃗_i_x, j_y=(𝐚_i_x, kj_y, 𝐚_i_x,kj_y+1,…, 𝐚_i_x, kj_y+k-1), with i_x∈ A.Once again, we take the Fourier transform only along the y-direction using Eq. (<ref>). The two-point correlation function Γ can be then written asΓ_(i_x, j_y), (i_x', j_y') =k/L_y∑_n_y=0^L_y/k-1e^ i2 π k n_y/L_y(j_y-j_y')(𝒢_q_y^(k))_i_x, i_x',where (𝒢_q_y^(k))_i_x,i_x'=U (Γ_q_y^(k))_i_x, i_x'U^†.Here U is a unitary matrix with entries U_pp'=e^ i2π (p/L_y+pp'/k)/√(k) and (Γ_q_y^(k))_i_x, i_x' is the 2kℓ× 2kℓ two-point correlation matrix in the mixed space-momentum representation whose entries are rearranged in k× k blocks as(Γ_q_y^(k))_i_x, i_x' = 2⟨ψ(t)|𝐜_i_x,q_y^†𝐜_i_x',q_y|ψ(t)⟩ -δ_i_x,i_x',where q_y=2π n_y/L_y and𝐜_i_x,q_y = (𝐜_i_x,q_y 𝐜_i_x,q_y+2π/k⋯𝐜_i_x,q_y+2π(k-1)/k).with 𝐜_i_x,q_y=(c_i_x,q_y^†,c_i_x,-q_y). Given Eq. (<ref>), the matrix Γ takes the formΓ=k/L_y∑_n_y=0^L_y/k-1𝒢_q_y^(k)⊗ T_q_y,where we have introduced the matrices (T_q_y)_j_yj_y'=e^ ikq_y(j_y-j_y'). These matrices mutually commute and can be diagonalized simultaneously byV_j_y, j_y'=1/√(k)e^ i2π j_yj_y'/ksuch that (VT_q_yV^†)_j_y, j_y'=k δ_q_y, j_yδ_q_y, j_y'.As a consequence, we have the decomposition(I⊗ V)Γ(I⊗ V^†)=⊕_n_y=0^L_y/k-1𝒢_q_y^(k),showing that the two-point correlation matrix Γ is block diagonal in the q_y transverse momentum sectors.Plugging Eq. (<ref>) into Eq. (<ref>), and taking into account that 𝒢_q_y^(k) and (Γ_q_y^(k)) are related by a unitary transformation (<ref>), we finally obtain S_n=∑_n_y=0^L_y/k-1S_n(Γ_q_y^(k)) =1/2∑_n_y=0^L_y/k-1∑_m=0^∞ a_n(2m) [(Γ_q_y^(k))^2m].Accordingly, the entanglement entropy in our 2D system is the sum of the single-interval entanglement entropies of L_y/k one-dimensional fermionic chains, each univocally characterized by the correlation matrices Γ_q_y^(k). § EXAMPLESIn this section, using the dimensional reduction approach described in Sec. <ref> and invoking results for 1D systems, we calculate the entanglement entropy in quantum quenchesfrom several initial states.In particular, we analytically derive its exact behavior in the ballistic regime in which t,ℓ→∞ with t/ℓ fixed, taking the thermodynamic limit in the longitudinal direction, L_x→∞, with the transverse one L_y finite. Here, we only present the results of the calculations, while their physical interpretation will be discussed in Sec. <ref>. For the concrete initial configurations that we will consider, the time evolution of the entanglement entropy can be calculated using Eq. (<ref>). The latter requires the knowledge of the matrix Γ_q_y^(k) defined in Eq. (<ref>), which involves the mixed space-momentum correlations ⟨ c_i_x, q_y^† c_i_x', q_y'⟩ and ⟨ c_i_x, q_y c_i_x', q_y'⟩. 
Their timeevolution can be easily computed employing the Heisenberg picture since the momentum modes ã_ q that diagonalize the post-quench Hamiltonian (<ref>) evolve trivially in time as ã_ q(t)=e^- itϵ_ qã_ q. Therefore, taking the partial Fourier transform in the x direction, we have⟨ψ(t)|c_i_x, q_y^† c_i_x', q_y'|ψ(t)⟩= 1/L_x∑_q_x, q_x' e^- i(q_xi_x-q_x' i_x') e^ i t(ϵ_ q-ϵ_ q')⟨ψ_0|ã_ q^†ã_ q'|ψ_0⟩and ⟨ψ(t)|c_i_x, q_y c_i_x', q_y'|ψ(t)⟩= 1/L_x∑_q_x, q_x'e^ i(q_xi_x+q_x' i_x') e^- it(ϵ_ q+ϵ_ q')⟨ψ_0|ã_ qã_ q'|ψ_0⟩.Writing now the operators ã_ q and ã_ q^† in terms of the real space ones, a_ i and a_ i^†, we find⟨ψ(t)| c_i_x, q_y^† c_i_x', q_y'|ψ(t)⟩= 1/L_x^2 L_y∑_q_x, q_x'∑_ j,j' e^- i(q_x i_x - q_x' i_x') × e^ it(ϵ_ q-ϵ_ q')e^ i( q· j- q'· j')⟨ψ_0| a_ j^† a_ j'|ψ_0⟩and⟨ψ(t)| c_i_x, q_y c_i_x', q_y'|ψ(t)⟩=1/L_x^2 L_y∑_q_x, q_x'∑_ j,j' e^ i (q_x i_x + q_x' i_x') × e^- it(ϵ_ q+ϵ_ q')e^- i( q· j- q'· j')⟨ψ_0| a_ j a_ j'|ψ_0⟩. Eqs. (<ref>) and (<ref>) recast the time evolution of the correlators ⟨ c_i_x, q_y^† c_i_x', q_y'⟩, ⟨ c_i_x, q_y c_i_x', q_y'⟩ and, consequently of the matrix Γ_q_y^(k), in terms of the t=0 correlators ⟨ a_ j^† a_ j'⟩ and ⟨ a_ j a_ j'⟩. §.§ Collinear Mott insulator stateWe start with the quantum quench from the collinear Mott insulator state, which is defined as |CM⟩ = ∏_i_x=0^L_x/2-1∏_i_y=0^L_y-1 a_2i_x,i_y^†|0⟩,where |0⟩ is the space vacuum state; i.e., a_ i|0⟩=0 for all i.We schematically represent this configuration in Fig. <ref> (a).The collinear Mott insulator state (<ref>) is a product state and, therefore, the entanglement entropy for any bipartition is zero.According to Eq. (<ref>), since the state (<ref>) is invariant under single-site translations in the y-direction, the entanglement entropy after the quench can be calculated from the correlation matrix Γ_q_y^(1). Theentries of this matrix, see Eq. (<ref>), are given by the correlators ⟨ c_i_x, q_y^† c_i_x', q_y'⟩ and ⟨ c_i_x, q_y c_i_x', q_y'⟩, whose time evolution can be obtained with Eqs. (<ref>) and (<ref>) in terms of ⟨ a_ j^† a_ j'⟩⟨ a_ j a_ j'⟩ for the initial state, which in thiscase read⟨ CM|a_ j^† a_ j'| CM⟩= δ_ j,j'/2 [1+(-1)^j_x],and⟨ CM|a_ j a_ j'| CM⟩ =0.Note that the pairing correlation functions such as ⟨ c_i_x,q_y c_i_x',-q_y⟩ vanish because the state (<ref>) has a definite number of excitations.Plugging Eqs. (<ref>) and (<ref>) in Eqs. (<ref>) and (<ref>) respectively,we obtain(Γ_q_y^(1))_i_x,i_x'(t) =( -C_i_x',i_x(t) 0 0C_i_x,i_x'(t) ),where C_i_x,i_x'(t) is the 1D correlation matrix after the quench to the tight binding fermionic chain from the Néel state, see, e.g., <cit.>. In the thermodynamic limit L_x→∞, it readsC_i_x,i_x'(t) =(-1)^i_x'∫_0^2π q_x/2π e^- q_x(i_x-i_x')-2 t cos q_x. Plugging Eq. (<ref>) into Eq. (<ref>) with k=1, we obtain S_n =∑_n_y=0^L_y-1 S_n(C)= L_y S_n(C).The asymptotic form of the 1D entanglement entropy with correlation matrix (<ref>) in the ballistic regime is known <cit.>,S_n(C) ≃log (2) ∫_0^2π q_x/2πmin(ℓ,2t|v_x(q_x)|). where v_x(q_x)=∂_q_xϵ_𝐪=sin q_x is the fermion velocity in the x-direction and, by ≃, we always mean equal in the thermodynamic and ballistic limits.Hence for the 2D model we obtain thatS_n(t) ≃ L_y log (2) ∫_0^2π q_x/2πmin(ℓ,2t|v_x(q_x)|).The expression above shows that the entanglement entropy linearly increases in time for t<ℓ/max(2v_x)=ℓ/2, while for t≫ℓ/2 it saturates to a constant value, lim_t→∞ S_n ≃V_A log(2).In Fig. 
<ref>, we report the time evolution of the entanglement entropy for the quench from the collinear Mott insulator state.The curve is the analytic result (<ref>), which agrees well with the exact numericaldata obtained using Eq. (<ref>).§.§ Mott insulator state We now consider the quantum quench from the Mott insulator state, which is defined as |M⟩ = ∏_i_x+i_y=even a_𝐢^†|0⟩.This state is represented schematically in Fig. <ref> (b).As in the case of the collinear Mott insulator state discussed in Sec. <ref>, this configuration is also a product state and, therefore, the entanglement entropy at t=0 is zero. Since the Mott insulator state is invariant under two-site translations in the y-direction, the entanglement entropy after the quench can be obtained by applying Eq. (<ref>) once we have determined the time evolution of the correlation matrix Γ_q_y^(2). Using Eqs. (<ref>) and (<ref>), we only need the two-point spatial correlations in the initial configuration to calculate it. For the Mott insulator state, ⟨ M| a_ j^† a_ j'| M⟩ = δ_ j,j'/2 [1+(-1)^j_x+j_y],and⟨ M| a_ j a_ j'| M⟩ =0.If we insert them in Eqs. (<ref>) and (<ref>) respectively, we obtain that the matrix Γ_q_y^(2) is(Γ_q_y^(2))_i_x,i_x'(t) = U e^ t cos q_y σ_z⊗σ_z ×(-C_i_x',i_x(t) 00C_i_x,i_x'(t)) ⊗σ_xe^- t cos q_y σ_z ⊗σ_z U^-1,where the matrix C is the same as in Eq. (<ref>), σ_μ are the Pauli matrices, and U=I/2+∑_μ=x,y,zσ_μ⊗σ_μ/2.In the thermodynamic limit L_x→∞, C is given byEq. (<ref>). Plugging this result into Eq. (<ref>) with k=2 and using σ_x^2m=2, we obtain S_n =L_y S_n(C),which coincides with the expression found inEq. (<ref>) for the collinearMott insulator state (<ref>).The matrix C is the same in the Mott insulator and in the collinear Mott insulator states, both at finite size L_x and when we take L_x→∞, and therefore the entanglement entropy presents exactly the same time evolution in both quenches and for this reason we do not report any numerical test. §.§ Collinear dimer state Let us now analyze the quantum quench from the collinear dimer state,|CD⟩ = ∏_i_x=0^L_x/2-1∏_i_y=0^L_y-1a_2i_x,i_y^†-a_2i_x+1,i_y^†/√(2)|0⟩,which is represented in Fig. <ref> (c).Unlike the previous examples, this configuration is not a product state for each site, but the (2i_x,i_y)-th and (2i_x+1,i_y)-th pairs of sites are entangled by the singlet pairing. However, since these singlet pairs do not cross the boundaries of the subsystem considered (because we choose ℓ to be even and its endpoints to be also endpoints of singlets), we have that the entanglement entropy is zero before the quench. The collinear dimer state (<ref>) has one-site translational symmetry in the y-direction. Since it is invariant under two-site translations in the x-direction, it is convenient to rearrange the entries of the correlation matrix Γ_q_y^(1), which enters in the computation of the entropy (<ref>), as(Γ_q_y^(1))_i_x,i_x' = 2⟨( 𝐜_2i_x,q_y^† 𝐜_2i_x+1,q_y^† ) ( 𝐜_2i_x',q_y 𝐜_2i_x'+1,q_y ) ⟩ -δ_i_x,i_x'I_4,with i_x,i_x'∈ [0,ℓ/2-1]. The correlators in Γ_q_y^(1) can be calculated using Eqs. 
(<ref>) and (<ref>) with ⟨ CD|a_ j^† a_ j'| CD⟩ = 1/2δ_ j,j' -1/4δ_j_y,j_y'δ_j_x±1,j_x' ±(-1)^j_x/4δ_j_y,j_y'δ_j_x±1,j_x',and⟨ CD|a_ j a_ j'| CD⟩ =0.In this way, once we have properly organized all the entries of Γ_q_y^(1), we find that it takes the form(Γ_q_y^(1))_i_x,i_x' = U (-T_i_x',i_x^ D 0 0 T_i_x,i_x'^ D) U^†,where U=I_4/2+∑_μ=x,y,zσ_μ⊗σ_μ/2 is a 4×4 unitary matrix and T^ D is equal to the 1D two-point correlation matrix of the quench to the tight binding fermionic chain from the dimer state (see e.g. Ref. <cit.>). In the thermodynamic limit L_x→∞, T^ D is a block Toeplitz matrix,T_i_x,i_x'^ D = ∫_0^2π q_x/2π e^-2 q_x(i_x-i_x') g_ D(q_x),generated by the 2× 2 symbol g_ D(q_x)g_ D(q_x) = e^σ_zq_x/2 (σ_-sin q_xe^-2 t cos q_x -σ_x cos q_x ) e^-σ_zq_x/2,with σ_±=σ_y±σ_z.Plugging Eq. (<ref>) into Eq. (<ref>), we obtain S_n= L_y S_n(T^ D).In the ballistic regime, the asymptotic form of entropy S_n(T^ D) is known and reads <cit.>S_n(T^ D) ≃∫_0^2π q_x/2π h_n(cos q_x) min(ℓ,2t|v_x(q_x)|), providing that the Rényi entanglement entropy after the quench in 2D behavesasS_n≃ L_y ∫_0^2π q_x/2π h_n(cos q_x) min(ℓ,2t|v_x(q_x)|).Therefore, the entanglement entropy increases linearly in time for t<ℓ/2, while it approaches the constant value lim_t→∞ S_n≃V_A ∫_0^2π q_x/2π h_n(cos q_x)at large times t≫ℓ/2.In Fig. <ref>, we check the validity of the analytic result (<ref>). We plot it for n→ 1 and n=2, as a function of time (solid curves) and we compare with the exact numerical value computed using Eq. (<ref>). §.§ Staggered dimer state We now investigate a modification of the previous initial configuration, the staggered dimer state, which is defined as |SD⟩ = ∏_i_x+i_y=evena_i_x,i_y^†-a_i_x+1,i_y^†/√(2)|0⟩and illustrated in Fig. <ref> (d).In this case, there are L_y singlet pairs crossing the boundary of the subsystem and, consequently, the initial entanglement entropy is S_n=L_ylog(2) at t=0. This initial offset is subleading (and hence negligible) in the ballistic limit because it does not scale with the volume V_A=ℓ L_y.The staggered dimer state (<ref>) is invariant under two-site translations in the y-direction, i.e. k=2.Since it is also invariant under two-site translations in the x-direction, it is convenient to rearrange the entries of the matrix Γ_q_y^(2)as (Γ_q_y^(2))_i_x,i_x' = 2⟨( c_2i_x,q_y^†c_2i_x+1,q_y^† ) ( c_2i_x',q_yc_2i_x'+1,q_y ) ⟩-δ_i_x,i_x'I_8,with i_x,i_x'∈[0,ℓ/2-1].The definition of c_i_x,q_y is given in Eq. (<ref>). The entries of Γ_q_y^(2) can be calculated by plugging the initial state correlators⟨ SD| a_ j^† a_ j'| SD⟩ = δ_ j,j'/2 -1/4δ_j_y,j_y'δ_j_x±1,j_x' ±(-1)^j_x+j_y/4δ_j_y,j_y'δ_j_x±1,j_x,and ⟨ SD|a_ j a_ j'| SD⟩=0, into Eqs. (<ref>) and (<ref>).Then we find that Γ_q_y^(2) is of the form(Γ_q_y^(2))_i_x,i_x' = U (-(T_q_y+π^ SD)_i_x',i_x 0 0(T_q_y^ SD)_i_x,i_x') U^†,where U=I_8/2+∑_μ=x,y,zσ_μ⊗ I_2⊗σ_μ/2 is a 8× 8 unitary matrix and, in the thermodynamic limit L_x→∞, T^ SD_q_y is a block Toeplitz matrix,(T_q_y)_i_x,i_x'^ SD =∫_0^2π q_x/2π e^-2q_x(i_x-i_x') g_q_y^ SD(q_x),with 4× 4 symbolg_q_y^ SD(q_x) = -(e^-σ_z tcos q_y⊗ e^σ_zq_x/2)(I⊗σ_xcos q_x+σ_x⊗σ_+e^-2 t cos q_x)(e^σ_z tcos q_y⊗ e^-σ_zq_x/2).In the ballistic limit, the asymptotic form of the moments [ (Γ_q_y^(2))^2m] with Eq. (<ref>) can be obtained by an analogous procedure used in Ref. <cit.> to derive Eq. (<ref>). A tedious but straitforward calculation leads to the final result [(Γ_q_y^(2))^2m] ≃4ℓ-4∫_0^2π q_x/2π ×[1-(cos q_x)^2m] min(ℓ,2t|v_x(q_x)|).Plugging it into Eq. 
(<ref>) with k=2, we arrive at S_n ≃L_y ∫_0^2π q_x/2π h_n(cos q_x)min(ℓ,2t|v_x(q_x)|).This expression is equal to Eq. (<ref>) for the collinear dimer state. This coincidence only occurs in the limit L_x→∞, for finite L_x the entanglement entropy in these two quenches is different. We check the validity of the analytical prediction (<ref>) in Fig. <ref>.It shows that, for n→1 and n=2, Eq. (<ref>) agrees well with the exact results obtained by evaluating numerically Eq. (<ref>). §.§ Diagonal dimer state We next take as initial configuration the diagonal dimer state,|DD⟩ =∏_i_x=0^L_x/2-1∏_i_y=0^L_y-1a_2i_x,i_y^†-a_2i_x+1,i_y+1^†/√(2)|0⟩.An illustration of it can be found in Fig. <ref> (e). The diagonal dimer state is invariant under one-site translations in the y-direction and under two-site translations in the x-direction. Therefore, we rearrange the entries of Γ_q_y^(1) as we have done in Eq. (<ref>). The two-point spatial correlations in the initial state are in this case,⟨ DD| a_ j^† a_𝐣'| DD⟩ = δ_ j,j'/2 -1/4δ_j_x±1,j_x'δ_j_y±1,j_y' ∓(-1)^j_x/4δ_j_x±1,j_x'δ_j_y±1,j_y',and ⟨ DD|a_ j a_𝐣'| DD⟩=0. Inserting them in Eqs. (<ref>) and (<ref>),we find that the matrix Γ_q_y^(1) can be written as(Γ_q_y^(1))_i_x,i_x' =U(-(T_q_y^ DD)_i_x',i_x 0 0 (T_-q_y^ DD)_i_x,i_x')U^†,where U=I_4/2-∑_μ=x,y,zσ_μ⊗σ_μ/2. In the thermodynamic limit L_x→∞, T^ DD_q_y is the block Toeplitz matrix(T_q_y^ DD)_i_x,i_x' = ∫_0^2π q_x/2π e^-2 q_x(i_x-i_x') g_q_y^ DD(q_x),with symbolg_± q_y^ DD(q_x) =-e^q_x/2σ_z [σ_x cos (q_x+ q_y) +e^-2 t cos q_xσ_+ sin(q_x+ q_y)] e^-q_x/2σ_z.The calculation of the moments [(Γ_q_y^(1))^2m] with Eq. (<ref>) in the ballistic limit is analogous to the computation of Eq. (<ref>). We find[(Γ_q_y^(1))^2m] ≃2ℓ- 2∫_0^2π q_x/2π{1-[cos (q_x+q_y)]^2m} ×min(ℓ,2t|v_x(q_x)|).Substituting it into Eq. (<ref>) with k=1, we obtain that the entanglement entropy behaves as S_n≃∑_q_y∫_0^2π q_x/2π h_n(cos(q_x+q_y)) min(ℓ,2t|v_x(q_x)|).Note that, unlike the previous cases, the contribution to the entropy of each mode q_y is different. At large times, t≫ℓ/2, the entropy saturates tolim_t→∞ S_n≃ℓ∑_q_y∫_0^2π q_x/2π h_n(cos(q_x+q_y)). In Fig. <ref>, we analyze the entanglement entropy in the quantum quench from the diagonal dimer state. We obtain an excellent agreement between the analytic result of Eq. (<ref>) and the numerical values computed using Eq. (<ref>). §.§ Crossed dimer stateHere we consider the quantum quench starting from the crossed dimer state|C⟩ = ∏_i_x=0^L_x/2-1∏_i_y=0^L_y/2-11/2 (a_2i_x,2i_y^†-a_2i_x+1,2i_y+1^†)× (a_2i_x+1,2i_y^†-a_2i_x,2i_y+1^†) |0⟩,which is schematically illustrated in Fig. <ref> (f). Since this configuration is invariant under two-site translations in the y-direction, the entanglement entropy after the quench can be calculated by evaluating the moments [(Γ_q_y^(2))^2m]. To adapt the computation to the two-site translation symmetry in the x-direction, we rearrange the entries of Γ_q_y^(2) as in Eq. (<ref>). This matrix can be calculated employing Eqs. (<ref>) and (<ref>) with the initial statetwo-point spatial correlators ⟨ a_ j^† a_ j'⟩ and ⟨ a_ j a_ j'⟩. 
For the crossed dimer state,the latter read ⟨ C| a_ j^† a_ j'| C⟩ = δ_ j,j'/2 - 1/8 (δ_j_x+1,j_x'δ_j_y+1,j_y' +δ_j_x-1,j_x'δ_j_y-1,j_y' +δ_j_x-1,j_x'δ_j_y+1,j_y' +δ_j_x+1,j_x'δ_j_y-1,j_y') - (-1)^j_x/8 (δ_j_x+1,j_x'δ_j_y+1,j_y' -δ_j_x-1,j_x'δ_j_y-1,j_y' -δ_j_x-1,j_x'δ_j_y+1,j_y' +δ_j_x+1,j_x'δ_j_y-1,j_y') - (-1)^j_y/8 (δ_j_x+1,j_x'δ_j_y+1,j_y' -δ_j_x-1,j_x'δ_j_y-1,j_y' +δ_j_x-1,j_x'δ_j_y+1,j_y' -δ_j_x+1,j_x'δ_j_y-1,j_y') - (-1)^j_x+j_y/8 (δ_j_x+1,j_x'δ_j_y+1,j_y' +δ_j_x-1,j_x'δ_j_y-1,j_y' -δ_j_x-1,j_x'δ_j_y+1,j_y' -δ_j_x+1,j_x'δ_j_y-1,j_y'), and ⟨ C| a_ i a_ i'| C⟩=0. We then obtain that Γ_q_y^(2) is of the form(Γ_q_y^(2))_i_x,i_x' = U e^ t σ_z⊗σ_z ⊗ Icos q_y (- m̂(q_y) ⊗ T_i_x',i_x^ C 00 m̂(q_y) ⊗ T_i_x,i_x'^ C) e^- t σ_z⊗σ_z⊗ I cos q_y U^†,where U=I_8/2-∑_μ=x,y,zσ_μ⊗ I_2⊗σ_μ/2. The matrix m̂_q_y is defined asm̂_q_y =-σ_z cos q_y +σ_ysin q_y,and, in the thermodynamic limit L_x→∞, T^ C readsT_i_x,i_x'^ C =∫_0^2π q_x/2π e^-2 (i_x-i_x') g_ C(q_x).Here the symbol g_ C(q_x) is given by g_ C(q_x) =e^q_x/2σ_z ( σ_x cos q_x + σ_+sin q_xe^-2 t cos q_x ) e^-q_x/2σ_z.Since the trace of the tensor product of two matrices is the product of the traces of the matrices, the moments [(Γ_q_y^(2))^2m] in the present case are given by [(Γ_q_y^(2))^2m] =2 [m̂_q_y^2m][(T^ C)^2m].By simple algebra, one finds [m̂_q_y^2m]=2 while, by employing the stationary phase method of Ref. <cit.>, one can also obtain the asymptotic form of [(T^ C)^2m] in the ballistic limit,[(T^ C)^2m] ≃ℓ- ∫_0^2π q_x/2π [1-(cos q_x)^2m]×min(ℓ,2t|v_x(q_x)|).Therefore, putting the previous results together, the moments of the matrix (<ref>) are[(Γ_q_y^(2))^2m] ≃4ℓ -4∫_0^2π q_x/2π [1-(cos q_x)^2m]×min(ℓ,2t|v_x(q_x)|)and, plugging them in Eq. (<ref>) with k=2, we finally find that S_n=L_y ∫_0^2π q_x/2π h_n(cos q_x) min(ℓ,2t|v_x(q_x)|).In particular, the stationary value of the entanglement entropy at large times, t≫ℓ/2 is lim_t→∞S_n ≃V_A∫_0^2π q_x/2π h_n(cos q_x). In Fig. <ref>, we plot the entanglement entropy in the quantum quench starting from the crossed dimer state. It shows that the analytic expression obtained in Eq. (<ref>) agrees well with the exact result obtained numerically with Eq. (<ref>).§.§ Partially-filled product stateSo far, we have considered initial states with a defined number of particles, i.e. eigenstates of the particle number operator Q=∑_ ia_𝐢^† a_𝐢. Let us now study quenches from configurations that break this U(1) symmetry. We can construct them using as building block the 1D product state|θ⟩_i_y=∏_i_x=0^L_x-1( sinθ/2 + cosθ/2 a_𝐢^†)|0⟩_i_y,where |0⟩_i_y=⊗_i_x=0^L_x-1|0⟩_𝐢 with |0⟩_𝐢 being the local vacuum state for the 𝐢-th site (in 1D spin language this is a tilted ferromagnetic state).The angle θ∈[0,π) controls the probability of finding a particle at the site i and, therefore, tunes how much the particle number symmetry is broken <cit.>. At θ=0 (π), the state (<ref>) is fully-occupied (empty) and it preserves this U(1) symmetry, whereas it breaks it for θ≠ 0,π; in particular, the symmetry is maximally broken at θ=π/2. The state (<ref>) does not satisfy Wick theorem and, therefore, its reduced density matrix is not Gaussian and we cannot calculate the entanglement entropy from the two-point correlation matrix using Eq. (<ref>). However, the cat version of (<ref>),|PF⟩_i_y = 1/√(2+2(cosθ)^L_x)(|θ⟩_i_y- |-θ⟩_i_y).does satisfies Wick theorem and its reduced density matrix is Gaussian, as explicitly shown, e.g., in Ref. 
<cit.>.From the 1D state | PF⟩_i_y, we can construct twodifferent initial configurations in the two-dimensional square lattice,for which the entanglement entropy evolves differently after a quench to H. We study them separately in the next subsections. §.§.§ Partially-filled product state ILet us first consider the state|PF_ I⟩ =⊗_i_y=0^L_y-1|PF⟩_i_y,where |PF⟩_i_y is defined in Eq. (<ref>).Since (<ref>) is invariant under one-site translations in the y-direction, we can employ Eq. (<ref>) with k=1 to calculate the evolution of the entanglement entropy after the quench. The entries of the correlation matrix Γ_q_y^(1) that enters in such equation can be determined using Eqs. (<ref>) and (<ref>) with the initial values of the two-point spatial correlators. In this case, given the product state structure of the initial configuration (<ref>), we have⟨ PF_ I|a_ i^† a_ i'| PF_ I⟩=_i_y⟨ PF| a_i_x, i_y^† a_i_x', i_y| PF⟩_i_yδ_i_y, i_y'and ⟨ PF_ I|a_ i a_ i'| PF_ I⟩=_i_y⟨ PF| a_i_x, i_y a_i_x', i_y| PF⟩_i_yδ_i_y, i_y'.The correlators for the 1D state | PF⟩_i_y have beencalculated in, e.g. Ref. <cit.>. Using them here, we have⟨ PF_ I|a_ i^† a_ i'| PF_ I⟩=δ_i_x, i_x'δ_i_y, i_y'/2-δ_i_y, i_y'/2L_x∑_q_xe^- iq_x(i_x-i_x')cosΔ_q_xand⟨ PF_ I|a_ i a_ i'| PF_ I⟩= - iδ_i_y, i_y'/2L_x∑_q_xe^- iq_x(i_x-i_x')sinΔ_q_x,withcosΔ_q_x =2|cosθ|-cos q_x(1+cos^2 θ)/1+cos^2θ-2|cosθ|cos q_x,sinΔ_q_x = -sin^2θsin q_x/1+cos^2θ-2|cosθ|cos q_x.Inserting them in Eqs. (<ref>) and (<ref>) and taking the limit L_x→∞, we find that Γ_q_y^(1) is a block Toeplitz matrix,(Γ_q_y^(1))_i_x,i_x' =∫_0^2π q_x/2π e^- q_x(i_x-i_x') g_q_y^ PF(q_x),generated by the 2× 2 symbol g_q_y^ PF(q_x)g_q_y^ PF(q_x) =σ_z cosΔ_q_x+σ_y e^-2 t ϵ_𝐪σ_zsinΔ_q_x. The moments [(Γ_q_y^(1))^2m] of a block Toeplitz matrix can be calculated by applying directly the stationary phase method of Ref. <cit.>. In our case, we get[Γ_q_y^2m] ≃2ℓ-2∫_0^2π q_x/2π [1-(cosΔ_q_x)^2m]×min(ℓ,2t|v_x(q_x)|).Plugging it into Eq. (<ref>) with k=1, we finally obtain thatS_n≃L_y∫_0^2π q_x/2π h_n(cosΔ_q_x) min(ℓ,2t|v_x(q_x)|).In this case, the entropy converges at large times t≫ℓ/2 to the valuelim_t→∞ S_n≃V_A∫_0^2π q_x/2π h_n(cosΔ_q_x). In Fig. <ref> (a), we plot the time evolution of the entanglement entropy after the quench from the state |PF_ I⟩ for different values of the angle θ and the Rényi index n. We find that Eq. (<ref>) agrees well with the numerical value of the entropy calculated with Eq. (<ref>). In Fig. <ref> (b), we represent the stationary value of the entropy given by Eq. (<ref>) as a function of theinitial angle θ. It can be seen that the entanglement entropy monotonically increases as θ increases until θ=π/2, the angle at which the initial state (<ref>) maximally breaks the U(1) particle number symmetry. This maximum value for the von Neumann entropy isS_1^ max= V_A(2 log (2)-1).§.§.§ Partially-filled product state II From the 1D state (<ref>), we can also construct theconfiguration|PF_ II⟩ =⊗_i_x=0^L_x-1|PF⟩_i_x.The difference compared to state (<ref>) is that this is a product state in the x-direction while the other was in the y-direction.Since the native 1D state (<ref>) is a cat and not a product state, the two definitions are inequivalent. 
This state |PF_ II⟩ is invariant under one site translations in the y and x directions.Since it is a product state along the x direction its two-point spatial correlation functions satisfy⟨ PF_ II|a_ i^† a_ i'| PF_ II⟩=_i_x⟨ PF| a_i_x, i_y'^† a_i_x, i_y'| PF⟩_i_xδ_i_x, i_x'and ⟨ PF_ II|a_ i a_ i'| PF_ II⟩=_i_x⟨ PF| a_i_x, i_y a_i_x, i_y'| PF⟩_i_xδ_i_y, i_x'.As we did in the other partially-filled product state, we can apply the results of Ref. <cit.> for the correlations of the 1D state | PF⟩_i_x. Then we have⟨ PF_ II|a_ i^† a_ i'| PF_ II⟩=δ_i_x, i_x'δ_i_y, i_y'/2-δ_i_x, i_x'/2L_y∑_q_ye^- iq_y(i_y-i_y')cosΔ_q_yand⟨ PF_ II|a_ i a_ i'| PF_ II⟩= - iδ_i_x, i_x'/2L_y∑_q_ye^- iq_y(i_y-i_y')sinΔ_q_y. Inserting these correlators in Eqs. (<ref>) and (<ref>) and taking the thermodynamic limit L_x→∞, the matrix Γ_q_y^(1) that gives the entanglement entropy (<ref>) is block Toeplitz, (Γ_q_y^(1))_i_x,i_x' =∫_0^2π q_x/2π e^- q_x(i_x-i_x') g_q_x^ PF(q_y),in which the symbol is the same as in Eq. (<ref>) but exchanging the moments q_x and q_y. As in the previous examples, in the ballistic limit, the asymptotic form of the moments [(Γ_q_y^(1))^2m] for Eq. (<ref>) can be obtained by applying the stationary phase method for block Toeplitz matrices of Ref. <cit.>. Here we find[(Γ_q_y^(1))^2m] ≃2ℓ-2[1-(cosΔ_q_y)^2m]×∫_0^2π q_x/2πmin(ℓ,2t|v_x(q_x)|).Applying this result in Eq. (<ref>) with k=1, we get that the entanglement entropy evolves in time after the quench asS_n≃∑_q_y h_n(cosΔ_q_y) ∫_0^2π q_x/2πmin(ℓ,2t|v_x(q_x)|),and saturates to lim_t→∞ S_n = ℓ∑_q_y h_n(cosΔ_q_y)at large times t≫ℓ/2. Fig. <ref> (a) shows the entanglement entropy of the time-evolved state |PF_ II(t)⟩, comparing the analytic result of Eq. (<ref>) (solid lines) withthe exact value (symbols) obtained numerically with Eq. (<ref>) for several initial angles θ and Rényi index n.In Fig. <ref> (b), we plot the saturation value of the entanglement entropy found in Eq. (<ref>) as a function of θ. We can see that, as in the case of the state | PF_ I⟩, the saturation value is maximum at θ=π/2 (at least for n=1,2), at which the particle number symmetry is maximally broken at t=0, although it is not in general monotonic in θ. The nonanalytic behavior for n=∞ in Fig. <ref> (b) disappears when L_y is sufficiently large. § QUASIPARTICLE PICTUREIn this section, we discuss the physical interpretation of the results of the previous section in terms of the quasiparticle picture, originally developed for one-dimensional integrable systems <cit.>. The idea of the quasiparticle picture is the following:in one dimension the quench generates uniformly pairs of entangled quasiparticles that propagate ballistically in opposite directions with opposite momenta.Therefore, the entanglement at time t after the quench is proportional to the number of pairs of entangled quasiparticles that are shared by subsystem A and its complement B:S_n=∫_0^2π dq/2π s_n( q) min(ℓ, 2|v(q)|t), where v(q) is the velocity of the quasiparticle with momentum q and s_n(q) its contribution to the entanglement. The function min(ℓ, 2|v(q)|t) counts the quasiparticles that are shared by subsystem A and its complement at time t. However, in d>1 as in the present paper, the multiplet structure of the quasiparticles is generically much more complicated than simple pairs and their counting is far from trivial.Fortunately, the dimensional reduction comes in our help, leading to extremely simple results. 
Indeed,since the Rényi entanglement entropy can be decomposed in the single-interval entanglement entropies of decoupled 1D chains, we can directly apply the quasiparticle picture in each decoupled chain, which are labeled by the transverse momentum q_y=2 π n_y/L_y.Therefore, after the quench, in the n_y-chain pairs of entangled excitations propagate with momentum ± q_x and velocity v_x(q_x)=∂_q_xϵ_ q.If we denote the contribution to the entanglement entropy of each pair of entangled modes as s_n( q) and sum over all thedecoupled chains the result (<ref>), then we expectS_n=∑_n_y=0^L_y/k-1∫_0^2π dq_x/2π s_n( q) min(ℓ, 2|v_x(q_x)|t). Eq. (<ref>) predicts that the entanglement entropy increases linearly for t< ℓ/2 and tends to a constant at large times t≫ℓ/2, this is the qualitative behavior that we have found in all the examples studied in Sec. <ref>. To obtain quantitative predictions from Eq. (<ref>), we have to determine a specific expression of s_n(𝐪).For one-dimensional free-fermion systems, as proposed in Ref. <cit.> for generic integrable systems, the analogous function s_n(q) can be read off from the limit t→∞, in which the reduced density matrix ρ_A(t) is expected to relax to a GGE. We can apply the same idea here. In particular, for the 2D free-fermion model (<ref>), one may expect that the infinite time limit of ρ_A(t) exists and it is described bya GGE, i.e.,lim_t→∞ρ_A(t) = _B(ρ^GGE) ≡ρ_A^GGE.As in one dimension <cit.>, since the post-quench Hamiltonian is diagonal in terms of the modes ã_ q^† and ã_ q, the GGE can be written in terms of thethe conserved mode occupation numbers n̂_𝐪=ã_𝐪^†ã_𝐪 as ρ^GGE =e^-∑_𝐪λ_𝐪n̂_𝐪/(e^-∑_𝐪λ_𝐪n̂_𝐪),where the Lagrange multipliers λ_ q are determinedby the expectation value of n̂_𝐪, (n̂_𝐪ρ^GGE) = ⟨ψ_0|n̂_𝐪|ψ_0⟩≡ n_𝐪.We stress that this is valid under the quite general assumption that non-Abelian charges like ∑_j (-1)^j c_j c_j+m are not activated by the initial state; their activation would lead to an altered dynamics <cit.>.If the reduced density matrix ρ_A(t) relaxes to the GGE as in Eq. (<ref>), then the Rényi entanglement entropy for ρ_A(t) at large times must be equal to the entropy of the statistical ensemble ρ_A^GGE,lim_t→∞ S_n(ρ_A(t)) =S_n(ρ_A^GGE).We can deduce the specific form of s_n(𝐪) from the above equation as follows. For a large subsystem with volume V_A, S_n(ρ^GGE_A) is proportional to V_A because it is an extensive thermodynamic quantity.Hence the entropy S_n(ρ^GGE_A) is given by the volume of subsystem A times the density of the Rényi entropy evaluated in the GGE of Eq. (<ref>). That is, S_n(ρ_A^GGE)=V_A/L_xL_y S_n(ρ^GGE)=ℓ∑_n_y=0^L_y-1∫_0^2π q_x/2π h_n(2n_𝐪-1),where the function h_n(x) is defined in Eq. (<ref>).In the second line, we have taken the thermodynamic limit L_x→∞ to derive it.On the other hand, if we take the large time limit t→∞ in Eq. (<ref>), we obtain lim_t→∞ S_n(ρ_A) = ℓ∑_n_y=0^L_y/k-1∫_0^2π q_x/2π s_n(𝐪).Comparing Eqs. (<ref>) and (<ref>) and naturally grouping together the chains with equal n_y modulo k, we can conclude that s_n(𝐪)=∑_j=0^k-1h_n(2n_q_x, q_y+2π j/k-1).Finally, plugging this result into Eq. (<ref>), we findS_n=∑_n_y=0^L_y-1∫_0^2π q_x/2π h_n(2n_𝐪-1)min(ℓ,2t|v_x(q_x)|).We emphasize that this predictionassumes the thermodynamic limit in the x-direction, but the value of L_y is an arbitrary integer, also as small as 1 or 2.The quasiparticle formula (<ref>) coincides with the analytic expressions found for most of the particular initial states investigated in Sec. 
<ref>.For example, the mode occupation number for the collinear Mott insulator state (<ref>) isn_𝐪=1/2 and, substituting it in Eq. (<ref>), we find Eq. (<ref>).Also occupation number for the Mott insulator state (<ref>) is n_𝐪=1/2.In a similar way, it is straightforward to check that the prediction of the quasiparticle picture (<ref>) reproduces the analytic expressions for the entanglement entropy obtained in Eqs. (<ref>) (collinear and staggered dimer states), (<ref>) (diagonal dimer state), (<ref>) (partially-filled product state I), and (<ref>) (partially-filled product state II).However, the situation is different for the crossed dimer state discussed in Sec. <ref>. In this case, the modeoccupation number isn_𝐪 = ⟨C|n̂_𝐪|C⟩ =1-cos q_xcos q_y/2and, if we plug it in the quasiparticle prediction (<ref>), then we obtain S_n=∑_n_y=0^L_y-1∫_0^2π q_x/2π h_n(cos q_xcos q_y) min(ℓ,2t|v_x(q_x)|),which does not match the correct result in Eq. (<ref>), as also shown in Fig. <ref>.Comparing Eqs. (<ref>) and (<ref>), it is clear that the reason why the quasiparticle picture breaks down is that the identification (<ref>) does not hold.Since this identity has been derived assuming Eq. (<ref>), the mismatch means that, in this particular quench, the reduced density matrix ρ_A(t) does nottend to ρ^GGE_A at t→∞. In the following section, we will discern the reason why this occurs.§ EXISTENCE OF STATIONARY STATE In the previous section, we have found that the prediction of the quasiparticle picture (<ref>) does not match the correct result (<ref>) for the crossed dimer state.We claimed that the reason for this disagreement is that the reduced density matrix ρ_A(t) does not relax to the GGE ρ_A^ GGE at large times.In this section, we show that, in two-dimensional free fermionic systems, the reduced density matrix ρ_A(t) does not tend to a stationarystate after quenches from certain particular initial configurations, including the crossed dimer state.Furthermore, we derive a criterion which allows us to know whether the stationary state exists or not for a given initial state and how the quasiparticle picture must be modified in its absence. §.§ Absence of stationary state To show that ρ_A(t)does not relax to a stationary state for particular initial states, we start by decomposing the post-quench Hamiltonian (<ref>) as H=H^X +H^Y,where H^X = ∑_i_y=0^L_y-1 H_i_y^X, = -1/2∑_i_y=0^L_y-1∑_i_x=0^L_x-1 a_i_x+1,i_y^†a_i_x,i_y +H.c.,andH^Y = ∑_i_x=0^L_x-1 H_i_x^Y, = -1/2∑_i_x=0^L_x-1∑_i_y=0^L_y-1 a_i_x,i_y+1^†a_i_x,i_y +H.c.. As clear from the above equations, the terms H^X and H^Y only contain fermion hoppings in the x- and y-direction respectively.Observe that, in terms of the Fourier modes (<ref>), H^X and H^Y are diagonalH^X=-∑_ qcos q_x ã_ q^†ã_ q, H^Y=-∑_ qcos q_y ã_ q^†ã_ q,and, therefore, they commute with each other. This allows us to write the time-evolved state |ψ(t)⟩ as |ψ(t)⟩ =e^- t H^Y|ψ_X(t)⟩,where |ψ_X(t)⟩=e^- tH^X|ψ_0⟩ is the time-evolved state after a quantum quench with a Hamiltonian in which the fermion hopping is allowed only in the x-direction (i.e., L_y copies of 1D chains). Accordingly, the reduced density matrix ρ_A(t) can be written as ρ_A(t)=_B( e^- t H^Y|ψ_X(t)⟩⟨ψ_X(t)| e^ t H^Y).Given its definition in Eq. (<ref>), we have [H_i_x^Y,H_i_x'^Y]=0; then the time-evolution operator e^- t H^Y in Eq. 
(<ref>) can be decomposed into two operators that respectively act only on the subsystems A and B as e^- t H^Y =U_A(t) U_B(t),with U_A(t)=∏_i_x=0^ℓ-1e^- t H^Y_i_x,U_B(t)=∏_i_x=ℓ^L_x-1e^- t H^Y_i_x.Note that U_A acts only on subsystem A and hence it can be taken out the partial trace _B.Therefore, we can rewrite Eq. (<ref>) as ρ_A(t)=U_A(t) ρ_X,A(t)U_A^†(t),where ρ_X,A(t)=_B|ψ_X(t)⟩⟨ψ_X(t)|.Note that, in the latter expression, U_B and U_B^† cancel each other in the partial trace _B due to the cyclic property. From Eq. (<ref>), we conclude that, when [U_A(t),ρ_X,A(∞)]≠ 0, the limit lim_t→∞ρ_A(t) does not exist, i.e., there is no stationary state.§.§ Condition for the existence of stationary stateHaving established that the existence of a stationary state is related to thevanishing of the commutator [U_A, ρ_X, A(∞)],the next natural step is to determine under which conditions this commutatoris not zero.In Appendix <ref>,we prove the following. If we assume that ρ_X, A(t) relaxes to astationary state, i.e. lim_t→∞ρ_X, A(t) exists, thenthe stationary state of ρ_A(t) exists if and only if ρ_X, A(t) restores the translational symmetry in the y-direction for large times, that is𝒯_A (lim_t→∞ρ_X,A(t)) 𝒯_A^-1 = lim_t→∞ρ_X,A(t),where 𝒯_A is the translation operator in the y-direction action on subsystem A.This result allows us to know whether the stationary state exists ornot from ρ_X,A. For example, in Fig. <ref>, we schematically represent the time evolution of ρ_X,A(t) in quenches starting from the Mott insulator and the crossed dimer states.In the case of the Mott insulator state, after a long time, the hopping in the x-direction makes the distribution of fermions uniform for every i_y-th row, and the translational symmetry in the y-direction is restored. Hence the stationary state exists according to theresult announced before. On the other hand, for the crossed dimer state, since the hopping in the x-direction does not change the amount of entanglement between each row, after long times, the entanglement between 2i_y-thand (2i_y+1)-th rows in the initial state remains, while no entanglement appears between (2i_y-1)-th and 2i_y-th rows. This means that ρ_X,A for the crossed dimer state never restores the translational symmetry in the y-direction.Therefore, the reduced density matrix ρ_A(t) does not relax to a stationary state in this case. In general, as we show in Appendix <ref>, the densitymatrix ρ_X, A(t) restores the translational invariance in they-direction at large times, i.e. Eq. (<ref>) issatisfied, if and only if ⟨ψ_0|ã_q_x,q_y^†ã_q_x,q_y'|ψ_0⟩ =0 ∀ q_x,q_y≠ q_y'. Therefore, we can conclude that ρ_A(t) relaxes to a stationary state at large times if and only if the initial state satisfies Eq. (<ref>). In fact, one can check that such correlator is always zero for all the initial states discussed in Sec. <ref> except for the crossed dimer state, for which we have⟨ C|ã_q_x,q_y^†ã_q_x,q_y+π| C⟩ = -/2cos q_x sin q_y. §.§ Entanglement entropy in the absence of stationary state: the crossed dimer stateAlthough, as we have just seen, there is not a stationary state in aquench from the crossed dimer state, in Sec. <ref> we showed that the entanglement entropy saturates to a constant value at large times.This behavior can be explained as follows. According to Eq. (<ref>), the reduced density matrices ρ_A(t) and ρ_X, A(t) are related by a unitary transformation. 
This means that their entropies are equal,S_n(ρ_A(t)) = S_n(ρ_X,A(t)).Therefore, this identity implies that, if ρ_XA(t) relaxes to a stationary state, then the entropy tends to a constant value at large times, even if the limit lim_t→∞ρ_A(t) does not exist. The missing point is to determine the stationary value of the entropy in the absence of stationary state for ρ_A(t). We have shown that, if [U_A, ρ_X, A(∞)]=0, then both ρ_A(t) and ρ_X, A(t) relax to the same stationary state, which is described by a GGE that can be built either from the local conserved charges of H=H^X+H^Y or of H^X. However, the set of conserved charges of H^X is not equal to that of the total Hamiltonian H. Therefore, if ρ_A(t) tends to a stationary state, only the charges shared by H^X and H can be activated. On the contrary, if there is not stationary state for ρ_A(t), it exists the possibility that charges only conserved by H^X can be activated. In such cases, the entanglement entropies derived from the GGEs associated to H and H^X are different, and the correct stationary value of the entanglement entropy is given by the latter. This explains why the quasiparticle picture discussed in Sec. <ref> does not work for the crossed dimer state, for which ρ_A(t) does not relax as we have already shown.In fact, for this initial state, we find that, to obtain the saturation value of Eq. (<ref>) at large times, the correct GGE should be built with the following conserved chargesn̂_q_x,i_y^±≡1/√(2)(d_q_x,2i_y^†± d_q_x,2i_y+1^† )(d_q_x,2i_y± d_q_x,2i_y+1),where q_x=0,…, 2π(L_x-1)/L_x, i_y=0,…, L_y/2-1, and d̂_q_x,i_y is the partial Fourier transform of the fermionic a_𝐢 with respect to the x-directiond_q_x,i_y≡1/√(L_x)∑_i_x=0^L_x-1e^- q_x i_x a_𝐢.By simple algebra, one finds that {n̂_q_x,i_y^±} commute with H^X while they do not commute with the total Hamiltonian H=H^X+H^Y.In the same way as in Eq. (<ref>), we can calculate the entropy of a GGE built with the charges {n_q_x, y_i^±}. In the thermodynamic limit L_x→∞, it is S_n = ℓ∑_i_y=0^L_y/2-1∫_0^2π q_x/2π[h_n(2n_q_x,i_y^+-1)+h_n(2n_q_x,i_y^–1)],where we have introduced n_q_x,i_y^±=⟨ψ_0|n̂_q_x,i_y^±|ψ_0⟩. In particular, for the crossed dimer state, we obtainn_q_x,i_y^±=⟨C|n̂_q_x,i_y^±|C⟩=1±cos q_x/2.Plugging it into Eq. (<ref>), we indeed recover the large time limit of the evolution from the crosseddimer state in Eq. (<ref>). At this point, Eq. (<ref>) for the finite time evolution is recovered by a straightforward application of the quasiparticle picture. 
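A quick numerical comparison makes the difference between the two ensembles explicit. The sketch below (our own illustration; the momentum discretization, the value of L_y, and the variable names are assumptions, not taken from the main text) evaluates, for the crossed dimer quench and n=1, the stationary entropy density per site predicted by the GGE of H, built from the mode occupations n_q = (1 - cos q_x cos q_y)/2, and the one obtained from the charges n̂^±_{q_x,i_y} of H^X, whose expectation values are (1 ± cos q_x)/2; only the latter matches the large-time result derived above.

```python
import numpy as np

# Stationary entropy density per site after the quench from the crossed dimer state (n = 1),
# comparing the GGE built from the charges of H with the one built from the charges of H^X.

def h1(x):
    """The function h_n(x) of the main text in the limit n -> 1."""
    p = np.clip((1 + x) / 2, 1e-12, 1 - 1e-12)
    return -p * np.log(p) - (1 - p) * np.log(1 - p)

Ly = 4                                           # any even transverse size (illustrative choice)
qx = np.linspace(0, 2 * np.pi, 4000, endpoint=False)
qy = 2 * np.pi * np.arange(Ly) / Ly

# GGE of H^X: the occupations of n^+-_{q_x,i_y} are (1 +- cos q_x)/2, so the density per site
# is the integral of h_1(cos q_x) over dq_x/(2 pi), independent of the transverse momentum.
density_HX = np.mean(h1(np.cos(qx)))

# Naive GGE of H, built from the mode occupations n_q = (1 - cos q_x cos q_y)/2.
density_H = np.mean([np.mean(h1(np.cos(qx) * np.cos(q))) for q in qy])

print(density_HX, density_H)   # ~0.386 vs ~0.540 for L_y = 4: the GGE of H overestimates the entropy
```

Since h_n(x) decreases with |x| and |cos q_x cos q_y| ≤ |cos q_x|, the GGE of H necessarily overestimates the stationary entropy for this quench, consistently with the mismatch discussed in the previous section.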
In Appendix <ref>, we consider a quench from another initial configuration in which ρ_A(t) does not relax to a stationary state and show that, also in this case, the large time behavior of the entanglement entropy is captured by a GGE built with from the charges of H^X instead of H.§ CONCLUSIONSTo summarize, we studied the time evolution of the entanglement entropy following quantum quenches in a translationally invariant 2D free-fermion lattice.We have considered different initial Gaussian configurations that present a periodic pattern in both directions of the system.By applying dimensional reduction and exploiting thewell-known results for 1D free fermionic chains, we analytically determined the exact behavior of the Rényi entanglement entropies after the quench.We found that, for most of the initial states, the standard quasiparticle picture developed for 1D systems can be readily adapted to correctly explain the evolution of the entropy.Instead, for one particular initial state, the quasiparticle picture does not generalize straightforwardly because the Rényi entanglement entropies saturate to a value different from the onepredicted by the GGE built with the local conserved charges of the post-quench Hamiltonian.We traced back the origin of this disagreement to the absence of a stationary limit for the reduced density matrix.Starting from this observation, we deduced that the correct stationary entanglement entropy for this initial state is given by the GGE constructed with the local conserved charges of a Hamiltonian with only hoppings in the longitudinal direction. We also obtained the general conditions for the existence of the stationary limit of the reduced density matrix, which are related to the restoration of the translational symmetry in the transverse direction at large times.The dimensional reduction approach employed here can be easily generalized to study thetime evolution in two-dimensional free fermionic systems of other quantitiesfor which there areexact results in one dimension. These include the entanglement negativity <cit.>, charge fluctuations <cit.>, symmetry-resolved entanglement entropies <cit.>, and therecently introduced entanglement asymmetry <cit.>. The latter measures how much a symmetry is broken in a subsystem, and has been employed to observe a quantum version <cit.> of the Mpemba effect <cit.>. A relevant question is whether this effect may also occur in higher dimensions and under what conditions it happens. Another easy but interesting generalization would be to use dimensional reduction to study quench problems in 2D free bosonic models to contrast with existing field theory literature <cit.>. We thank V. Alba, S. Murciano, and C. Rylands for fruitful discussions.SY is supported by Grant-in-Aid for JSPS Fellows (Grant No. JP22J22306). PC and FA acknowledge support from ERC under Consolidator Grant number 771536 (NEMO). We thank the authors of Ref. <cit.> for sharing their results with us before submission.§ NOTE ADDED After the completion of this work we became aware of the parallel work <cit.>, where also the entanglement entropy of 2D free fermion systems is studied. However, in this manuscript the dimensional reduction is not used and the emphasis is more on the shape on the entangling region rather than on the different initial states.§ PROOF OF THE CONDITION OF EQ. (<REF>) FOR THE EXISTENCE OF STATIONARY STATEIn this Appendix, we prove that reduced density matrix ρ_A(t) relaxes to a stationary state if and only if Eq. 
(<ref>) holds.In other words, the goal of the appendix is to prove that the following two propositions are necessary and sufficient conditions for each other:The limitlim_t→∞ρ_A(t)exists. For large times, ρ_X,A(t) in Eq. (<ref>) restores the translational symmetry in the y-direction, i.e., the following equation holds:𝒯_A ρ_X,A(∞) 𝒯_A^-1=ρ_X,A(∞).To this end, we make the following assumptions:ρ_X,A(t) relaxes to the GGE associated to H^X (cf. Eq. (<ref>)) at large time, i.e., lim_t→∞ρ_X,A(t) =ρ_X,A^ GGE. If the stationary state for ρ_A(t) exists, then lim_t→∞ρ_A(t) =ρ_A^ GGE,where ρ_A^ GGE is the GGE built from the conserved charges of H.Note that we can use Asm. <ref> only when Prop. <ref> is true. To prove Prop. <ref> ⟺ Prop. <ref>, we first show that Prop. <ref> is true if and only if the following proposition is true: ρ_X,A(t) converges to ρ_A^GGE for large times, i.e., ρ_X,A^GGE=ρ_A^GGE. Proposition <ref> ⟹ Prop. <ref> can be shown as follows: from Eq. (<ref>), lim_t→∞ρ_A(t) = lim_t→∞ U_A(t) ρ_A,X(t)U_A^†(t).Assumption <ref> and Prop. <ref> allow us to replace ρ_A,X in the above equation with ρ_A^GGE, which readslim_t→∞ρ_A(t) = lim_t→∞ U_A(t) ρ_A^GGEU_A(t)^† = ρ_A^GGE.Here we have used [U_A(t),ρ_A^GGE]=0 in the last equality.The above equation clearly shows thatProp. <ref>⟹Prop. <ref>.Proposition <ref> ⟹ Prop. <ref> can also be shown in a similar way: from Eq. (<ref>), lim_t→∞ρ_X,A(t) = lim_t→∞ U_A^†(t) ρ_A(t) U_A(t).Proposition <ref> and Asm. <ref> allow us to replace ρ_A in the above equation with ρ_A^GGE, which results in lim_t→∞ρ_X,A(t) = lim_t→∞ U_A^†(t) ρ_A^GGE U_A(t) = ρ_A^GGE.Here we again used [U_A(t),ρ_A^GGE]=0 in the last equality. From the previous equation, we can conclude thatProp. <ref>⟹Prop. <ref>.Eqs. (<ref>) and (<ref>) implyProp. <ref>⟺Prop. <ref>.By (<ref>), proving Prop. <ref> ⟺ Prop. <ref> is equivalent to show Prop. <ref> ⟺ Prop. <ref>. Let us therefore prove the latter.Proposition<ref> ⟹ Prop. <ref> can be shown as follows: from Prop. <ref>, we obtain 𝒯_A ρ_X,A^GGE𝒯_A^-1 = 𝒯_A ρ_A^GGE𝒯_A^-1, = _B[𝒯_A ρ^GGE𝒯_A^-1].Inserting the identity 𝒯_B^-1𝒯_B in _B in the above equation and using the cyclic property of trace, we obtain 𝒯_A ρ_X,A^GGE𝒯_A^-1 = _B[𝒯_B 𝒯_Aρ^GGE𝒯_A^-1𝒯_B^-1 ].According to Eq. (<ref>), ρ^GGE is invariant under translations in the y-direction, namely, 𝒯_B 𝒯_Aρ^GGE𝒯_A^-1𝒯_B^-1 = ρ^GGE.Substituting it into Eq. (<ref>), we obtain 𝒯_A ρ_X,A^GGE𝒯_A^-1 = ρ^GGE_Aand using Prop. <ref>𝒯_A ρ_X,A^GGE𝒯_A^-1 = ρ^GGE_X,A.The above equation shows that Prop. <ref>⟹Prop. <ref>.Proposition <ref> ⟹ Prop. <ref> can be shown as follows:Without loss of generality, ρ_X,A^GGE can be expressed as ρ_X,A^GGE = e^-ℋ_A/_A(e^-ℋ_A),where ℋ_A is the entanglement Hamiltonian which is given by <cit.> ℋ_A = ∑_𝐢,𝐢'∈ A K_𝐢,𝐢' a_ i^† a_ i',with K being a V_A× V_A Hermitian matrix.Therefore, Prop. <ref> is equivalent to stating that the entanglement Hamiltonian ℋ_A is invariant under translations in the y-direction, i.e., Prop. <ref>⟺𝒯_A ℋ_A𝒯_A^-1 =ℋ_A.This implies that we can decompose ℋ_A in the transverse momentum sectors by taking the partial Fourier transform (<ref>) in the y-direction,ℋ_A = ∑_q_y∑_i_x,i_x'=0^ℓ-1 [K(q_y)]_i_x,i_x' c_i_x,q_y^† c_i_x',q_y,where K(q_y) is a ℓ×ℓ Hermitian matrix.On the other hand, if we express the matrix U_A(t), defined inEq. (<ref>), in the mixed space-momentum basis,U_A(t) =e^- t ∑_i_x=0^ℓ-1∑_q_ycos q_y c_i_x,q_y^†c_i_x,q_y,we find thatU_A(t) c_i_x,q_y U_A^†(t)=c_i_x,q_y e^- t cos q_y.Combining this result with Eq. 
(<ref>), we obtain U_A(t) ℋ_A U_A^†(t) =ℋ_A. Therefore, applying this identity in Eq. (<ref>), U_A(t) ρ_X,A^GGE U_A^†(t) = ρ_X,A^GGE. Thus, if we take the limit lim_t→∞ρ_A(t)= lim_t→∞ U_A(t) ρ_X,A(t) U_A^†(t), we apply Asm. <ref>, lim_t→∞ρ_A(t) = lim_t→∞ U_A(t) ρ_X,A^GGE U_A^†(t), and Eq. (<ref>), we obtain lim_t→∞ρ_A(t) =ρ_X,A^GGE. Equation (<ref>) shows that, if ρ_X,A restores the transverse translational symmetry at large times, then ρ_A(t) relaxes to a stationary state, i.e., Prop. <ref> holds. Since we can use Asm. <ref> when Prop. <ref> holds, we can replace the left-hand side of Eq. (<ref>) with ρ_A^GGE and ρ_A^GGE= ρ_X,A^GGE. Therefore, Prop. <ref> ⟹ Prop. <ref>. This ends the proof of Prop. <ref> ⟺ Prop. <ref>. From (<ref>) and (<ref>), we have finally proved that Prop. <ref> ⟺ Prop. <ref>.

§ PROOF OF THE CONDITION (<REF>) FOR THE RESTORATION OF TRANSLATIONAL INVARIANCE IN THE Y-DIRECTION

In this Appendix, we show that ρ_X,A(t) restores the translational symmetry in the y-direction at large times, i.e., Prop. <ref> is true, if and only if Eq. (<ref>) is satisfied. As mentioned in Appendix <ref>, Prop. <ref> is equivalent to saying that the stationary state lim_t→∞ρ_X,A(t)=ρ_X,A^GGE is block diagonal in the q_y momentum sectors. Thus, we can verify whether Prop. <ref> is satisfied or not by calculating the mixed space-momentum correlator C_i_x,i_x'^q_y,q_y'(t) = Tr_A(ρ_X,A(t) c_i_x,q_y^† c_i_x',q_y'). Given Asm. <ref>, it is clear that C_i_x,i_x'^q_y,q_y'(∞)=0 ∀ i_x,i_x'∈ A, q_y≠ q_y' if and only if ρ_X,A^GGE is block diagonal in the q_y momentum sectors. Thus, in the following, we prove that C_i_x,i_x'^q_y,q_y'(∞)=0 if and only if Eq. (<ref>) holds. Using Asm. <ref> and performing the Fourier transform in the x-direction, we can rewrite C_i_x,i_x'^q_y,q_y'(∞) as C_i_x,i_x'^q_y,q_y'(∞)= 1/L_x∑_q_x,q_x' e^-i q_x i_x+i q_x'i_x' Tr(ρ^GGE_X a_𝐪^† a_𝐪'). We recall that ρ_X^GGE is a GGE built with conserved charges of H^X, which is diagonalized as H^X=∑_q_x∑_i_y cos q_x d_q_x,i_y^† d_q_x,i_y =∑_q_x∑_μ cos q_x α_q_x,μ^†α_q_x,μ, where α_q_x,μ=∑_i_y [U(q_x)]_μ,i_y d_q_x,i_y with U(q_x) being an L_y× L_y unitary matrix. Therefore, without loss of generality, the conserved charges of H^X are given by Q̂_q_x,μ=α_q_x,μ^†α_q_x,μ and hence ρ_X^GGE can be written as ρ_X^GGE = e^-∑_q_x,μλ_q_x,μQ̂_q_x,μ/Tr(e^-∑_q_x,μλ_q_x,μQ̂_q_x,μ). Even without knowing explicitly the form of U(q_x), the previous equation shows that ρ_X^GGE has a block-diagonal form in the q_x momentum sectors. This implies that Tr(ρ^GGE_X ã_𝐪^†ã_𝐪')= δ_q_x,q_x' Tr(ρ^GGE_X ã_q_x,q_y^†ã_q_x,q_y'). Plugging it into Eq. (<ref>), we obtain C_i_x,i_x'^q_y,q_y' (∞) = 1/L_x∑_q_x e^-i q_x(i_x-i_x') Tr(ρ^GGE_X ã_q_x,q_y^†ã_q_x,q_y'). Since the charges ã_q_x,q_y^†ã_q_x,q_y' are conserved by H^X, Tr(ρ^GGE_X ã_q_x,q_y^†ã_q_x,q_y')= ⟨ψ_0|ã_q_x,q_y^†ã_q_x,q_y'|ψ_0⟩ and, substituting it into Eq. (<ref>), we arrive at C_i_x,i_x'^q_y,q_y' (∞) = 1/L_x∑_q_x e^-i q_x(i_x-i_x')⟨ψ_0|ã_q_x,q_y^†ã_q_x,q_y'|ψ_0⟩. From this expression, we find that C_i_x,i_x'^q_y,q_y' (∞)=0 if and only if ⟨ψ_0|ã_q_x,q_y^†ã_q_x,q_y'|ψ_0⟩=0 ∀ q_x, q_y≠ q_y'.

§ INHOMOGENEOUS PARTIALLY-FILLED PRODUCT STATE

In this Appendix, we analyze another example of a quench in which ρ_A(t) does not relax to a stationary state, and thus the standard quasiparticle picture must be modified, see Sec. <ref>. Specifically, we consider the quantum quench from the inhomogeneous partially-filled product state |IPF⟩=∏_i_y=0^L_y-1| IPF⟩_i_y, where | IPF⟩_i_y is similar to the 1D cat state of Eq.
(<ref>) but now the angle θ that controls the occupation probability depends on the coordinate i_y, |IPF⟩_i_y = 1/√(2+2(cosθ_i_y)^L_x) (|θ_i_y⟩-|-θ_i_y⟩). When θ_i_y=θ for all i_y, |IPF⟩ is the partially-filled product state I discussed in Sec. <ref>. Evaluating the correlator of Eq. (<ref>) in the state | IPF⟩, we have ⟨ IPF|ã_q_x,q_y^†ã_q_x,q_y'| IPF⟩ = 1/(2L_y)∑_i_y e^i(q_y-q_y')i_y (1- cosΔ_q_x,i_y), where cosΔ_q_x,i_y is obtained by replacing θ in Eq. (<ref>) with θ_i_y. If θ_i_y is independent of i_y, which corresponds to the case in Sec. <ref>, this correlator vanishes for all q_x and q_y≠ q_y'. In that case, according to the results in Sec. <ref>, ρ_A(t) relaxes to a stationary state and the prediction of the standard quasiparticle picture (<ref>) works. On the other hand, when θ_i_y depends on i_y, the correlator of Eq. (<ref>) is in general nonzero, and Eq. (<ref>) is not satisfied. This implies that a stationary state for ρ_A(t) does not exist. When θ_i_y depends on i_y, the state |IPF⟩ has no translational symmetry in the y-direction, hence we cannot apply the dimensional reduction approach of Sec. <ref>. Instead, when the initial state is a product state over the rows, as in Eq. (<ref>), we can calculate the Rényi entanglement entropy without using the dimensional reduction as follows. Let us decompose the time evolution Hamiltonian as in Eq. (<ref>). Since the operator e^-i t H^X has no dynamics in the y-direction, it preserves the initial product structure of each row. Therefore, the reduced density matrix ρ_X,A(t), introduced in Eq. (<ref>), and obtained in this case from the state e^-i t H^X|IPF⟩, is of the form ρ_X,A(t) =⊗_i_y=0^L_y-1ρ_X,A,i_y(t), where ρ_X,A,i_y(t) = Tr_B,i_y( e^-i t H_i_y^X|IPF⟩_i_y⟨IPF| e^i t H_i_y^X), and Tr_B,i_y is the partial trace over the sites of the subsystem B in the i_y-th row. Since ρ_A(t) and ρ_X,A(t) are related by the unitary transformation (<ref>), we have S_n(ρ_A(t))=S_n(ρ_X,A(t)). Therefore, if we further take into account that the Rényi entanglement entropy is additive in the tensor product, S_n(ρ_A)=S_n(ρ_X,A) = ∑_i_y=0^L_y-1 S_n(ρ_X,A,i_y). Since ρ_X,A,i_y is nothing but the reduced density matrix of the partially-filled product state I of Sec. <ref> with L_y=1, the asymptotic form of S_n(ρ_X,A,i_y) in the space-time scaling limit can be obtained by just setting L_y=1 in Eq. (<ref>). It reads S_n(ρ_X,A,i_y) ≃∫_0^2π dq_x/2π h_n(cosΔ_q_x,i_y) min(ℓ,2t|v_x(q_x)|). Plugging the above equation into Eq. (<ref>) yields S_n ≃∑_i_y=0^L_y-1∫_0^2π dq_x/2π h_n(cosΔ_q_x,i_y) min(ℓ,2t|v_x(q_x)|). Accordingly, the saturation value of the Rényi entanglement entropy is lim_t→∞ S_n= ℓ∑_i_y=0^L_y-1∫_0^2π dq_x/2π h_n(cosΔ_q_x,i_y). On the other hand, applying the standard quasiparticle picture of Sec. <ref> to the state |IPF⟩, i.e. Eq. (<ref>) with the expectation value of the conserved charges (<ref>), we find S_n ≃ L_y ∫_0^2π dq_x/2π h_n (L_y^-1∑_i_y=0^L_y-1cosΔ_q_x,i_y)×min(ℓ,2t|v_x(q_x)|). In Fig. <ref>, we plot as a function of time the Rényi entanglement entropies in the quench from the inhomogeneous partially-filled product state with the inhomogeneous angle θ_i_y=2π i_y/L_y. The plot shows that the expression of Eq. (<ref>) (solid lines) agrees well with the exact values (symbols) of the entropies obtained numerically with Eq. (<ref>), whereas the prediction of the quasiparticle picture in Eq.
(<ref>) (dashed curves) does not match the correct result.This discrepancy means that, as in the case of the crossed dimer state, the entanglement entropies derived from the GGEs associated with H and H^X are different, and the correct one is the latter.In the present case, one finds that the GGE that reproduces the saturation value of the entanglement entropy in Eq. (<ref>) is built with the charges n̂_q_x,i_y = d_q_x,i_y^† d_q_x,i_y,which commute with H^X but not with H.Repeating the derivation of Eq. (<ref>) for the current case, we find that the entropy of the GGE built with {n̂_q_x,i_y} isS_n = ∑_i_y=0^L_y-1∫_0^2π q_x/2π h_n(2n_q_x,i_y-1),where n_q_x,i_y=⟨ψ_0|n̂_q_x,i_y|ψ_0⟩.For the state |IPF⟩, this expectation value isn_q_x,i_y =1-cosΔ_q_x,i_y/2.Plugging it into Eq. (<ref>), we indeed obtain Eq. (<ref>). 10pssv-11 A. Polkovnikov, K. Sengupta, A. Silva, and M. Vengalattore,Nonequilibrium dynamics of closed interacting quantum systems, http://dx.doi.org/10.1103/RevModPhys.83.863Rev. Mod. Phys. 83, 863 (2011).ge-16 C. Gogolin and J. Eisert,Equilibration, thermalisation, and the emergence of statistical mechanics in closed quantum systems,http://iopscience.iop.org/article/10.1088/0034-4885/79/5/056001Rep. Prog. Phys. 79, 056001 (2016).cem-16 P. Calabrese, F. H. L. Essler, and G. Mussardo,Introduction to “Quantum Integrability in Out of Equilibrium Systems”, https://iopscience.iop.org/article/10.1088/1742-5468/2016/06/064001J. Stat. Mech. (2016) 064001.Deutsch-1991J. M. Deutsch, Quantum statistical mechanics in a closed system, https://doi.org/10.1103/PhysRevA.43.2046Phys. Rev. A 43, 2046(1991).Srednicki-1994 M. Srednicki, Chaos and quantum thermalization, https://doi.org/10.1103/PhysRevE.50.888Phys. Rev. E 50, 888 (1994).Tasaki-1998 H. Tasaki, From Quantum Dynamics to the Canonical Distribution: General Picture and a Rigorous Example, https://doi.org/10.1103/PhysRevLett.80.1373Phys. Rev. Lett. 80, 1373 (1998).Rigol-2008 M. Rigol, V. Dunjko, and M. Olshanii, Thermalization and its mechanism for generic isolated quantum systems, https://doi.org/10.1038/nature06838Nature 452, 854 (2008).akpr-16 L. D’Alessio, Y. Kafri, A. Polkovnikov, and M. Rigol,From Quantum Chaos and Eigenstate Thermalization to Statistical Mechanics and Thermodynamics,http://dx.doi.org/10.1080/00018732.2016.1198134Adv. Phys. 65, 239 (2016).Rigol-2007 M. Rigol, V. Dunjko, V. Yurovsky, and M. Olshanii, Relaxation in a completely integrable many-body quantum system: an ab initio study of the dynamics of the highly excited states of 1d lattice hard-core bosons,https://doi.org/10.1103/PhysRevLett.98.050405Phys. Rev. Lett. 98, 050405 (2007).bs-08 T. Barthel and U. Schollwock,Dephasing and the Steady State in Quantum Many-Particle Systems,http://dx.doi.org/10.1103/PhysRevLett.100.100601Phys. Rev. Lett. 100, 100601 (2008).cdeo-08 M. Cramer, C. M. Dawson, J. Eisert, and T. J. Osborne, Exact Relaxation in a Class of Nonequilibrium Quantum Lattice Systems, http://dx.doi.org/10.1103/PhysRevLett.100.030602Phys. Rev. Lett. 100, 030602 (2008).cef-12b P. Calabrese, F. H. L. Essler, and M. Fagotti,Quantum quenches in the transverse field Ising chain: II.Stationary state properties,http://dx.doi.org/10.1088/1742-5468/2012/07/P07022J. Stat. Mech. (2012) P07022.ilievski-15 E. Ilievski, J. De Nardis, B. Wouters, J.-S. Caux, F. H. L. Essler, and T. Prosen,Complete Generalized Gibbs Ensembles in an Interacting Theory, https://doi.org/10.1103/PhysRevLett.115.157201Phys. Rev. Lett. 115, 157201 (2015).vr-16 L. Vidmar and M. 
Rigol,Generalized Gibbs ensemble in integrable lattice models,http://dx.doi.org/10.1088/1742-5468/2016/06/064007J. Stat. Mech. (2016) 064007.essler-2016 F. H. L. Essler and M. Fagotti, Quench dynamics and relaxation in isolated integrable quantum spin chains https://iopscience.iop.org/article/10.1088/1742-5468/2016/06/064002J. Stat. Mech. (2016) 064002.amico-08 L. Amico, R. Fazio, A. Osterloh, and V. Vedral,Entanglement in many-body systems, https://doi.org/10.1103/RevModPhys.80.517Rev. Mod. Phys. 80, 517 (2008).ccd-09 P. Calabrese, J. Cardy, and B. Doyon,Entanglement entropy in extended quantum systems, https://doi.org/10.1088/1751-8121/42/50/500301J. Phys. A 42, 500301 (2009).laflorencie-14 N. Laflorencie,Quantum entanglement in condensed matter systems,https://doi.org/10.1016/j.physrep.2016.06.008Phys. Rep. 643, 1 (2016).Kinoshita-2006 T. Kinoshita, T. Wenger, and D. S. Weiss, A quantum Newton's cradle, https://doi.org/10.1038/nature04693Nature 440, 900 (2006).hofferberth-2007 S. Hofferberth, I. Lesanovsky, B. Fischer, T. Schumm, and J. Schmiedmayer, Non-equilibrium coherence dynamics in one-dimensional Bose gases, https://doi.org/https://doi.org/10.1038/nature06149Nature 449, 324 (2007).trotzky-2012 S. Trotzky, Y.-A. Chen, A. Flesch, I. P. McCulloch, U. Schollwöck, J. Eisert, and I. Bloch, Probing the relaxation towards equilibrium in an isolated strongly correlated one-dimensional Bose gas, https://doi.org/https://doi.org/10.1038/nphys2232NaturePhys. 8, 325 (2012). gring-2012 M. Gring, M. Kuhnert, T. Langen, T. Kitagawa, B. Rauer, M. Schreitl, I. Mazets, D. A. Smith, E. Demler, and J. Schmiedmayer, Relaxation and Prethermalization in an Isolated Quantum System, https://doi.org/10.1126/science.1224953Science 337, 1318 (2012).Cheneau-2012 M. Cheneau, P. Barmettler, D. Poletti, M. Endres, P. Schauß, T. Fukuhara, C. Gross, I. Bloch, C. Kollath, and S. Kuhr, Light-cone-like spreading of correlations in a quantum many-body system, https://doi.org/10.1038/nature10748Nature 481, 484 (2012).langen-2013 T. Langen, R. Geiger, M. Kuhnert, B. Rauer, and J. Schmiedmayer, Local emergence of thermal correlations in an isolated quantum many-body system, https://doi.org/https://doi.org/10.1038/nphys2739Nature Physics 9, 640 (2013).mainert-2013 F. Meinert, M. J. Mark, E. Kirilov, K. Lauber, P. Weinmann, A. J. Daley, and H.-C. Nägerl, Quantum Quench in an Atomic One-Dimensional Ising Chain, https://doi.org/10.1103/PhysRevLett.111.053003Phys. Rev. Lett. 111, 053003 (2013).langen2015experimental T. Langen, S. Erne, R. Geiger, B. Rauer, T. Schweigler, M. Kuhnert, W. Rohringer, I. E. Mazets, T. Gasenzer, and J. Schmiedmayer, Experimental observation of a generalized Gibbs ensemble, https://doi.org/10.1126/science.1257026Science 348, 207 (2015).Islam-2015 R. Islam, R. Ma, P. M. Preiss, M. Eric Tai, A. Lukin, M. Rispoli, and M. Greiner, Measuring entanglement entropy in a quantum many-body system, https://doi.org/10.1038/nature15750Nature 528, 77 (2015).Kaufman-2016 A. M. Kaufman, M. E. Tai, A. Lukin, M. Rispoli, R. Schittko, P. M. Preiss, and M. Greiner,Quantum thermalization through entanglement in an isolated many-body system, https://doi.org/10.1126/science.aaf6725Science 353, 794 (2016).Calabrese-2005 P. Calabrese and J. Cardy, Evolution of entanglement entropy in one-dimensional systems, https://doi.org/10.1088/1742-5468/2005/04/p04010J. Stat. Mech. (2005) P04010.dls-13 J. M. Deutsch, H. Li, and A. 
Sharma, Microscopic origin of thermodynamic entropy in isolated systems,http://dx.doi.org/10.1103/PhysRevE.87.042135Phys. Rev. E 87, 042135 (2013). Alba-2017 V. Alba and P. Calabrese, Entanglement and thermodynamics after a quantum quench in integrable systems, https://doi.org/10.1073/pnas.1703516114PNAS 114, 7947 (2017).Fagotti-2008 M. Fagotti and P. Calabrese, Evolution of entanglement entropy following a quantum quench: Analytic results for the XY chain in a transverse magnetic field, https://doi.org/10.1103/PhysRevA.78.010306Phys. Rev. A 78, 010306(R) (2008).ep-08 V. Eisler and I. Peschel,Entanglement in a periodic quench, https://doi.org/10.1002/andp.200810299Ann. Phys. 17, 410 (2008).eisler-2009 I. Peschel and V. Eisler, Reduced density matrices and entanglement entropy in free lattice models, https://doi.org/10.1088/1751-8113/42/50/504003J. Phys. A: Math. Theor. 42, 504003 (2009).nr-14 M. G. Nezhadhaghighi and M. A. Rajabpour,Entanglement dynamics in short- and long-range harmonic oscillators,https://doi.org/10.1103/PhysRevB.90.205438Phys. Rev. B 90, 205438 (2014).bkc-14 L. Bucciantini, M. Kormos, and P. Calabrese,Quantum quenches from excited states in the Ising chain,https://doi.org/10.1088/1751-8113/47/17/175002J. Phys. A: Math. Theor. 47, 175002 (2014).bfsed-16 A. S. Buyskikh, M. Fagotti, J. Schachenmayer, F. Essler, and A. J. Daley,Entanglement growth and correlation spreading with variable-range interactions in spin and fermionic tunneling models,https://doi.org/10.1103/PhysRevA.93.053620Phys. Rev. A 93, 053620 (2016).hbmr-18 L. Hackl, E. Bianchi, R. Modak, and M. Rigol,Entanglement production in bosonic systems: Linear and logarithmic growth, https://doi.org/10.1103/PhysRevA.97.032321Phys. Rev. A 97, 032321 (2018).ddr-23 G. Del Vecchio Del Vecchio, B. Doyon, and P. Ruggiero, Entanglement Rényi Entropies from Ballistic Fluctuation Theory: the free fermionic case,https://doi.org/10.48550/arXiv.2301.02326arXiv:2301.02326. ckc-13 M. Collura, M. Kormos,and P. Calabrese, Stationary entanglement entropies following an interaction quench in 1D Bose gas,https://doi.org/10.1088/1742-5468/2014/01/P01009J. Stat. Mech. P01009 (2014)ac-18 V. Alba and P. Calabrese,Entanglement dynamics after quantum quenches in generic integrable systems,https://doi.org/10.21468/SciPostPhys.4.3.017SciPost Phys. 4, 17 (2018). ac-17a V. Alba and P. Calabrese, Quench action and Rényi entropies in integrable systems, https://doi.org/10.1103/PhysRevB.96.115421Phys. Rev. B 96, 115421 (2017).mac-20 R. Modak, V. Alba, and P. Calabrese, Entanglement revivals as a probe of scrambling in finite quantum systems,https://doi.org/10.1088/1742-5468/aba9d9J. Stat. Mech. (2020) 083110.mpc-19 R. Modak, L. Piroli, and P. Calabrese Correlation and entanglement spreading in nested spin chains, https://doi.org/10.1088/1742-5468/ab39d5J. Stat. Mech. (2019) 093106.kb-21 K. Klobas and B. Bertini, Entanglement dynamics in Rule 54: Exact results and quasiparticle picture, https://doi.org/10.21468/SciPostPhys.11.6.107SciPost Phys. 11, 107 (2021).kbp-21 K. Klobas, B. Bertini, and L. Piroli, Exact Thermalization Dynamics in the “Rule 54” Quantum Cellular Automaton, https://doi.org/10.1103/PhysRevLett.126.160602Phys. Rev. Lett. 126, 160602 (2021). bfpc-18 B. Bertini, M. Fagotti, L. Piroli, and P. Calabrese, Entanglement evolution and generalised hydrodynamics: noninteracting systems, https://doi.org/10.1088/1751-8121/aad82eJ. Phys. A: Math. Theor. 51, 39LT01 (2018).kb-21 K. Klobas and B. 
Bertini, Entanglement dynamics in Rule 54: Exact results and quasiparticle picture, https://doi.org/10.21468/SciPostPhys.11.6.107SciPost Phys. 11, 107 (2021). rc-23 C. Rylands and P. Calabrese, Transport and entanglement across integrable impurities from Generalized Hydrodynamics,https://link.aps.org/doi/10.1103/PhysRevLett.131.156303Phys. Rev. Lett. 131, 156303 (2023).abf-19 V. Alba, B. Bertini, and M. Fagotti Entanglement evolution and generalised hydrodynamics: interacting integrable systems, https://scipost.org/10.21468/SciPostPhys.7.1.005SciPost Phys. 7, 005 (2019).ma-20 M. Mestyán and V. Alba,Molecular dynamics simulation of entanglement spreading in generalized hydrodynamics, https://scipost.org/10.21468/SciPostPhys.8.4.055SciPost Phys. 8, 055 (2020).bbtc-18 B. Bertini, E. Tartaglia, and P. Calabrese,Entanglement and diagonal entropies after a quench with no pair structure, https://doi.org/10.1088/1742-5468/aac73fJ. Stat. Mech. (2018) 063104.ctd-19 X. Cao, A. Tilloy, and A. De Luca,Entanglement in a fermion chain under continuous monitoring, https://doi.org10.21468/SciPostPhys.7.2.024SciPost Phys. 7, 024 (2019).bc-18 A. Bastianello and P. Calabrese,Spreading of entanglement and correlations after a quench with intertwined quasiparticles, https://scipost.org/10.21468/SciPostPhys.5.4.033SciPost Phys. 5, 33 (2018).bmc-20 A. Bastianello and M. Collura, Entanglement spreading and quasiparticle picture beyond the pair structure https://scipost.org/10.21468/SciPostPhys.8.3.045SciPost Phys. 8, 045 (2020).ac-20 V. Alba and F. Carollo, Spreading of correlations in Markovian open quantum systems https://doi.org/10.1103/PhysRevB.103.L020302Phys. Rev. B 103, 020302 (2021). ca-22 F. Carollo and V. Alba, Entangled multiplets and unusual spreading of quantum correlations in a continuously monitored tight-binding chain, https://doi.org/10.1103/PhysRevB.106.L220304Phys. Rev. B 106, L220304 (2022).bbc-20 B. Bertini and P. Calabrese, Prethermalization and thermalization in entanglement dynamics, https://doi.org/10.1103/PhysRevB.102.094303Phys. Rev. B 102, 094303 (2020).cara-22 F. Carollo and V. Alba, Dissipative quasiparticle picture for quadratic Markovian open quantum systems, https://doi.org/10.1103/PhysRevB.105.144305Phys. Rev. B 105, 144305 (2022).acar-22 V. Alba and F. Carollo, Hydrodynamics of quantum entropies in Ising chains with linear dissipation, https://doi.org/10.1088/1751-8121/ac48ecJ. Phys. A: Math. Theor. 55, 074002 (2022).ceh-19 S. Chapman, J. Eisert, L. Hackl, M. P. Heller, R. Jefferson, H. Marrochio, and R. C. Myers, Complexity and entanglement for thermofield double states, https://doi.org/10.21468/SciPostPhys.6.3.034SciPost Phys. 6, 034 (2019). lcp-22 G. Lagnese, P. Calabrese, and L. Piroli, Entanglement dynamics of thermofield double states in integrable models, https://doi.org/10.1088/1751-8121/ac646bJ. Phys. A: Math. Theor. 55, 214003 (2022).ts-23 X. Turkeshi and M. Schiró, Entanglement and correlation spreading in non-Hermitian spin chains, https://doi.org/10.1103/PhysRevB.107.L020403Phys. Rev. B 107, L020403 (2023).hcc-22 D. X. Horvath, P. Calabrese, and O. A. Castro-Alvaredo, Entanglement of Stationary States in the Presence of Unstable Quasiparticles, https://doi.org/10.1007/JHEP04(2023)091JHEP 04 (2023) 091.ge-19 M. Gruber and V. Eisler, Magnetization and entanglement after a geometric quench in the XXZ chain, https://doi.org/10.1103/PhysRevB.99.174403Phys. Rev. B 99, 174403 (2019).fg-23 S. Fraenkel and M. 
Goldstein, Extensive long-range entanglement in a nonequilibrium steady state https://doi.org/10.21468/SciPostPhys.15.4.134SciPost Phys. 15, 134 (2023).csrc-22 L. Capizzi, S. Scopa, F. Rottoli, and P. Calabrese, Domain wall melting across a defect, https://doi.org/10.1209/0295-5075/acb50aEPL 141, 31002 (2023).kkysytd-22 D. Kagamihara, R. Kaneko, S. Yamashika, K. Sugiyama, R. Yoshii, S. Tsuchiya, and I. Danshita, Rényi entanglement entropy after a quantum quench starting from insulating states in a free boson system, https://doi.org/10.1103/PhysRevA.107.033305Phys. Rev. A 107, 033305 (2023).ykyt-22 S. Yamashika, D. Kagamihara, R. Yoshii, and S. Tsuchiya, Evolution of entanglement entropy in strongly correlated bosons in an optical lattice, https://doi.org/10.48550/arXiv.2209.13340arXiv:2209.13340cc-06 P. Calabrese and J. Cardy, Time-dependence of correlation functions following a quantum quench, https://doi.org/10.1103/PhysRevLett.96.136801Phys. Rev. Lett. 96 (2006) 136801.ac-19b V. Alba and P. Calabrese, Quantum information scrambling after a quantum quench, https://doi.org/10.1103/PhysRevB.100.115150Phys. Rev. B 100, 115150 (2019).ac-19V. Alba and P. Calabrese,Quantum information dynamics in multipartite integrable systems, https://iopscience.iop.org/article/10.1209/0295-5075/126/60001EPL 126, 60001 (2019).ctc-14 A. Coser, E. Tonni, and P. Calabrese, Entanglement negativity after a global quantum quench, https://iopscience.iop.org/article/10.1088/1742-5468/2014/12/P12017J. Stat. Mech. (2014) P12017.mac-21 S. Murciano, V. Alba, and P. Calabrese, Quench dynamics of Rényi negativities and the quasiparticle picture, in https://link.springer.com/chapter/10.1007/978-3-031-03998-0_14A. Bayat, S. Bose, H. Johannesson (eds) Entanglement in Spin Chains, Springer (2022), p. 397.pbc-22 G. Parez, R. Bonsignori, and P. Calabrese, Dynamics of charge-imbalance-resolved entanglement negativity after a quench in a free-fermion model, https://doi.org/10.1088/1742-5468/ac666cJ. Stat. Mech. (2022) 053103.bkl-22 B. Bertini, K. Klobas, and T.-C. Lu, Entanglement Negativity and Mutual Information after a Quantum Quench: Exact Link from Space-Time Duality, https://doi.org/10.1103/PhysRevLett.129.140503Phys. Rev. Lett. 129, 140503 (2022).d-17 J. Dubail,Entanglement scaling of operators: a conformal field theory approach, with a glimpse of simulability of long-time dynamics in 1+1d, http://dx.doi.org/10.1088/1751-8121/aa6f38J. Phys. A 50, 234001 (2017).lsa-21 A. Lerose, M. Sonner, and D. A. Abanin, Scaling of temporal entanglement in proximity to integrability, https://doi.org/10.1103/PhysRevB.104.035137 Phys. Rev. B 104, 035137 (2021).ggs-21 G. Giudice, G. Giudici, M. Sonner, J. Thoenniss, A. Lerose, D. A. Abanin, and L. Piroli, Temporal Entanglement, Quasiparticles, and the Role of Interactions https://doi.org/10.1103/PhysRevLett.128.220401Phys. Rev. Lett. 128, 220401 (2022).parez2021b G. Parez, R. Bonsignori, and P. Calabrese, Quasiparticle dynamics of symmetry resolved entanglement after a quench: the examples of conformal field theories and free fermions, https://doi.org/10.1103/PhysRevB.103.L041104Phys. Rev. B 103, L041104 (2021).parez2021 G. Parez, R. Bonsignori, and P. Calabrese, Exact quench dynamics of symmetry resolved entanglement in a free fermion chain, https://doi.org/10.1088/1742-5468/ac21d7J. Stat. Mech. (2021) 093102.pvcc-22 L. Piroli, E. Vernier, M. Collura, and P. Calabrese, Thermodynamic symmetry resolved entanglement entropies in integrable systems, https://doi.org/10.1088/1742-5468/ac7a2dJ. 
Stat. Mech. (2022) 073102. bcckr-22 B. Bertini, P. Calabrese, M. Collura, K. Klobas, and C. Rylands, Nonequilibrium Full Counting Statistics and Symmetry-Resolved Entanglement from Space-Time Duality, https://doi.org/10.1103/PhysRevLett.131.140401Phys. Rev. Lett. 131, 140401 (2023).bkccr-23 B. Bertini, K. Klobas, M. Collura, P. Calabrese, and C. Rylands, Dynamics of charge fluctuations from asymmetric initial states, https://doi.org/10.48550/arXiv.2306.12404arXiv:2306.12404.kkr-21 J. Kudler-Flam, Y. Kusuki, and S. Ryu, Correlation measures and the entanglement wedge cross-section after quantum quenches in two-dimensional conformal field theories,https://doi.org/10.1007/JHEP04mac-23 S. Murciano, V. Alba, and P. Calabrese, Symmetry-resolved entanglement in fermionic systems with dissipation, https://doi.org/10.48550/arXiv.2303.12120arXiv:2303.12120bkalc-22 B. Bertini, K. Klobas, V. Alba, G. Lagnese, and P. Calabrese Growth of Rényi Entropies in Interacting Integrable Models and the Breakdown of the Quasiparticle Picture, https://doi.org/10.1103/PhysRevX.12.031016Phys. Rev. X 12, 031016 (2022).ghk-18 S. Gopalakrishnan, D. A. Huse, V. Khemani, and R. Vasseur, Hydrodynamics of operator spreading and quasiparticle diffusion in interacting integrable systems, https://doi.org/10.1103/PhysRevB.98.220303Phys. Rev. B 98, 220303 (2018).Ares-2023 F. Ares, S. Murciano, and P. Calabrese, Entanglement asymmetry as a probe of symmetry breaking, https://doi.org/https://doi.org/10.1038/s41467-023-37747-8Nature Commun. 14, 2036 (2023).amvc-23 F. Ares, S. Murciano, E. Vernier, and P. Calabrese, Lack of symmetry restoration after a quantum quench: an entanglement asymmetry study, https://doi.org/10.21468/SciPostPhys.15.3.089SciPost Phys. 15, 089 (2023).ckacmb-23 C. Rylands, K. Klobas, F. Ares, P. Calabrese, S. Murciano, and B. Bertini,Microscopic origin of the quantum Mpemba effect in integrable systems,https://doi.org/10.48550/arXiv.2310.0441arXiv:2310.04419.makc-23 S. Murciano, F. Ares, I. Klich, and P, Calabrese, Entanglement asymmetry and quantum Mpemba effect in the XY spin chain, https://doi.org/10.48550/arXiv.2310.07513arXiv:2310.07513. ac-23 F. Caceffo and V. Alba, Negative tripartite information after quantum quenches in integrable systems, https://doi.org/10.48550/arXiv.2305.10245arXiv:2305.10245.c-18 P. Calabrese, Entanglement and thermodynamics in non-equilibrium isolated quantum systems,https://doi.org/10.1016/j.physa.2017.10.011Physica A 504, 31 (2018)c-20 P. Calabrese, Entanglement spreading in non-equilibrium integrable systems,https://doi.org/10.21468/SciPostPhysLectNotes.20SciPost Phys. Lect. Notes 20 (2020). lk-08 A. M. Läuchli and C. Kollath,Spreading of correlations and entanglement after a quench in the one-dimensional Bose–Hubbard model,http://dx.doi.org/10.1088/1742-5468/2008/05/P05018J. Stat. Mech. P05018 (2008), 1141.kh-13 H. Kim and D. A. Huse,Ballistic spreading of entanglement in a diffusive nonintegrable system,http://dx.doi.org/10.1103/PhysRevLett.111.127205Phys. Rev. Lett. 111, 127205 (2013).pl-18 R. Pal and A. Lakshminarayan,Entangling power of time-evolution operators in integrable and nonintegrable many-body systems,https://doi.org/10.1103/PhysRevB.98.174304Phys. Rev. B 98, 174304 (2018).bkp-19 B. Bertini, P. Kos, and T. Prosen,Entanglement spreading in a minimal model of maximal many-body quantum chaos,https://doi.org/10.1103/PhysRevX.9.021033Phys. Rev. X 9, 021033 (2019).pbcp-20 L. Piroli, B. Bertini, J. I. Cirac, and T. 
Prosen,Exact dynamics in dual-unitary quantum circuits,https://doi.org/10.1103/PhysRevB.101.094304Phys. Rev. B 101, 094304 (2020).nrvh-17 A. Nahum, J. Ruhman, S. Vijay, and J. Haah,Quantum entanglement growth under random unitary dynamics, https://doi.org/10.1103/PhysRevX.7.031016Phys. Rev. X 7, 031016 (2017).nz-20 T. Zhou and A. Nahum,Entanglement membrane in chaotic many-body systems,https://doi.org/10.1103/PhysRevX.10.031066Phys. Rev. X 10, 031066 (2020).mck-22 S. Murciano, P. Calabrese, and R. M. Konik, Post-Quantum Quench Growth of Renyi Entropies in Low Dimensional Continuum Bosonic Systems, https://doi.org/10.1103/PhysRevLett.129.106802Phys. Rev. Lett. 129, 106802 (2022). hlw-94 C. Holzhey, F. Larsen, and F. Wilczek,Geometric and renormalized entropy in conformal field theory, http://dx.doi.org/10.1016/0550-3213(94)90402-2Nucl. Phys. B 424, 443 (1994).cc-04 P. Calabrese and J. Cardy,Entanglement entropy and quantum field theory,http://dx.doi.org/10.1088/1742-5468/2004/06/P06002J. Stat. Mech. (2004) P06002.lukin-2019 A. Lukin, M. Rispoli, R. Schittko, M. E. Tai, A. M. Kaufman, S. Choi, V. Khemani, J. Léonard, and M. Greiner,Probing entanglement in a many-body–localized system,https://doi.org/10.1126/science.aau0818Science 364, 256 (2019).brydges-19 T. Brydges, A. Elben, P. Jurcevic, B. Vermersch, C. Maier, B. P. Lanyon, P. Zoller, R. Blatt, and C. F. Roos, Probing entanglement entropy via randomized measure- ments,http://dx.doi.org/10.1126/science.aau4963Science 364, 260 (2019).elben-2020 A. Elben, R. Kueng, H. Y. R. Huang, R. van Bij-nen, C. Kokail, M. Dalmonte, P. Calabrese, B. Kraus, J. Preskill, P. Zoller, and B. Vermersch,Mixed-state entanglement from local randomized measurements, https://doi.org/10.1103/PhysRevLett.125.200501Phys. Rev. Lett. 125, 200501 (2020).neven-21 A. Neven, J. Carrasco, V. Vitale, C. Kokail, A. Elben, M. Dalmonte, P. Calabrese, P. Zoller, B. Vermersch, R. Kueng, and B. Kraus,Symmetry-resolved entanglement detection using partial transpose moments,https://doi.org/10.1038/s41534-021-00487-yNpj Quantum Inf. 7, 152 (2021).vek-21 V. Vitale, A. Elben, R. Kueng, A. Neven, J. Carrasco, B. Kraus, P. Zoller, P. Calabrese, B. Vermersch, and M. Dalmonte,Symmetry-resolved dynamical purification in synthetic quantum matter,https://doi.org/10.21468/SciPostPhys.12.3.106SciPost Phys. 12, 106 (2022).rvm-22 A. Rath, V. Vitale, S. Murciano, M. Votto, J. Dubail, R. Kueng, C. Branciard, P. Calabrese, and B. Vermersch,Entanglement barrier and its symmetry resolution: theory and experiment,https://doi.org/10.1103/PRXQuantum.4.010318PRX Quantum 4, 010318 (2023).ecp-10 J. Eisert, M. Cramer, and M. B. Plenio, Area laws for the entanglement entropy, https://doi.org/10.1103/RevModPhys.82.277Rev. Mod. Phys. 82, 277 (2010).pedc-05 M. B. Plenio, J. Eisert, J. Dreissig, and M. Cramer,Entropy, entanglement, and area: analytical results for harmonic lattice systems,https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.94.060503Phys. Rev. Lett. 94, 060503 (2005).gk-06 D. Gioev and I. Klich,Entanglement entropy of fermions in any dimension and the Widom conjecture,https://doi.org/10.1103/PhysRevLett.96.100503Phys. Rev. Lett. 96, 100503 (2006).wolf-06 M. M. Wolf, Violation of the Entropic Area Law for Fermions, https://doi.org/10.1103/PhysRevLett.96.010404Phys. Rev. Lett. 96, 010404 (2006).Li-06 W. Li, L. Ding, R. Yu, T. Roscilde, and S. Haas,Scaling Behavior of Entanglement in Two- and Three-Dimensional Free Fermions,https://doi.org/10.1103/PhysRevB.74.073103Phys. Rev. 
B 74, 073103 (2006).bcs-06 T. Barthel, M.-C. Chung, and U. Schollwöck,Entanglement scaling in critical two-dimensional fermionic and bosonic systems,https://journals.aps.org/pra/abstract/10.1103/PhysRevA.74.022329Phys. Rev. A 74, 022329 (2006).cepd-06 M. Cramer, J. Eisert, M. B. Plenio, and J. Dreissig,An entanglement-area law for general bosonic harmonic lattice systems,https://journals.aps.org/pra/abstract/10.1103/PhysRevA.73.012309Phys. Rev. A 73, 012309 (2006).ch-07 H. Casini and M. Huerta,Universal terms for the entanglement entropy in 2+1 dimensions, https://doi.org/10.1016/j.nuclphysb.2006.12.012Nucl. Phys. B 764, 183 (2007)fz-07 S. Farkas and Z. Zimboras,The von Neumann entropy asymptotics in multidimensional fermionic systems,http://dx.doi.org/10.1063/1.2800167J. Math. Phys. 48, 102110 (2007).Helling-11 R. C. Helling, H. Leschke and W. L. Spitzer,A special case of conjecture by Widom with implications to fermionic entanglement entropy,https://doi.org/10.1093/imrn/rnq085Int. Math. Res. Not. 2011, 1451 (2011).jk-12 A. J. A. James and R. M. Konik, Understanding the entanglement entropy and spectra of 2D quantum systems through arrays of coupled 1D chains, https://doi.org/10.1103/PhysRevB.87.241103Phys. Rev. B 87, 241103(R) (2013).cmv-12 P. Calabrese, M. Mintchev, and E. Vicari,Entanglement entropies in free fermion gases for arbitrary dimension,https://arxiv.org/abs/1110.6276EPL 97, 20009 (2012).c-13 J. Cardy, Some Results on Mutual Information of Disjoint Regions in Higher Dimensions, https://doi.org/10.1088/1751-8113/46/28/28540J. Phys. A: Math. Theor. 46, 285402 (2013).swingle B. Swingle,Renyi entropy, mutual information, and fluctuation properties of Fermi liquids,https://doi.org/10.1103/PhysRevB.86.045109Phys. Rev. B 86, 045109 (2012).cmv-12b P. Calabrese, M. Mintchev, and E. Vicari,Exact relations between particle fluctuations and entanglement in Fermi gases,http://dx.doi.org/10.1209/0295-5075/98/20003EPL 98, 20003 (2012).aefsb-14 F. Ares, J. G. Esteve, F. Falceto, and E. Sánchez-Burillo, Excited state entanglement in homogeneous fermionic chains, https://doi.org/10.1088/1751-8113/47/24/245301J. Phys. A: Math. Theor. 47 245301 (2014).Frerot-2015 I. Frérot and T. Roscilde, Area law and its violation: A microscopic inspection into the structure of entanglement and fluctuations, https://doi.org/10.1103/PhysRevB.92.115129Phys. Rev. B 92, 115129 (2015).tr-19 M. T. Tan and S. Ryu,Rényi and Symmetry-resolved Entanglement Entropy in Two-dimensional Fermi Gas from Multi-dimensional Bosonization,https://doi.org/10.1103/PhysRevB.101.235169Phys. Rev. B 101, 235169 (2020).mrc-20 S. Murciano, P. Ruggiero, and P. Calabrese, Symmetry resolved entanglement in two-dimensional systems via dimensional reduction, https://doi.org/10.1088/1742-5468/aba1e5J. Stat. Mech. (2020) 083102.fraenkel-20 S. Fraenkel and M. Goldstein, Symmetry resolved entanglement: Exact results in 1D and beyond, https://doi.org/10.1088/1742-5468/ab7753J. Stat. Mech. (2020) 033106.nnt-14 M. Nozaki, T. Numasawa, and T. Takayanagi, Quantum Entanglement of Local Operators in Conformal Field Theories, https://doi.org/10.1103/PhysRevLett.112.111602Phys. Rev. Lett. 112, 111602 (2014).clm-16 H. Casini, H. Liu, and M. Mezei,Spread of entanglement and causality,https://doi.org/10.1007/JHEP07m-17 M. Mezei, On entanglement spreading from holography, https://doi.org/10.1007/JHEP05 cotler2016 J. S. Cotler, M. P. Hertzberg, M. Mezei, and M. T. 
Mueller, Entanglement growth after a global quench in free scalar field theory, https://doi.org/https://doi.org/10.1007/JHEP11(2016)166JHEP 11 (2016) 166.ms-20 M. Mezei and W. van der Schee, Black Holes Often Saturate Entanglement Entropy the Fastest, https://doi.org/10.1103/PhysRevLett.124.201601Phys. Rev. Lett. 124, 201601 (2020).lm-16 Y. Lemonik and A. Mitra, Entanglement properties of the critical quench ofO(N) bosons,https://doi.org/10.1103/PhysRevB.94.024306Phys. Rev. B 94, 024306 (2016).cp-00 M. C. Chung and I. Peschel,Density-matrix spectra for two-dimensional quantum systems, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.62.4191Phys. Rev. B 62, 4191 (2000).peschel-02 I. Peschel, Calculation of reduced density matrices from correlation functions, https://doi.org/10.1088/0305-4470/36/14/101J. Phys. A: Math. Gen. 36, L205 (2003). Calabrese-2012 P. Calabrese, F. H. L. Essler, and M. Fagotti, Quantum quench in the transverse field Ising chain: I. Time evolution of order parameter correlators https://doi.org/10.1088/1742-5468/2012/07/P07016J. Stat. Mech. (2012) P07016.fe-13 M. Fagotti and F. H. L. Essler, Reduced Density Matrix after a Quantum Quench, https://doi.org/10.1103/PhysRevB.87.245107Phys. Rev. B 87, 245107 (2013).fagotti M. Fagotti, On Conservation Laws, Relaxation and Pre-relaxation after a Quantum Quench, https://doi.org/10.1088/1742-5468/2014/03/P03016J. Stat. Mech.(2014) P03016. gec-17 S. Groha, F. H. L. Essler, and P. Calabrese, Full Counting Statistics in the Transverse Field Ising Chain, https://doi.org/10.21468/SciPostPhys.4.6.043SciPost Phys. 4, 043 (2018).fac-23 F. Ferro, F. Ares, and P. Calabrese, Non-equilibrium entanglement asymmetry for discrete groups: the example of the XY spin chain, https://doi.org/10.48550/arXiv.2307.06902arXiv:2307.06902cool E. B. Mpemba and D. G. Osborne, Cool?, https://iopscience.iop.org/article/10.1088/0031-9120/4/3/312Phys. Educ. 4, 172 (1969).luraz Z. Lu and O. Raz,Nonequilibrium thermodynamics of the Markovian Mpemba effect and its inverse,https://doi.org/10.1073/pnas.1701264114PNAS 114, 5083 (2017).raz2 I. Klich, O. Raz, O. Hirschberg, and M. Vucelja, The Mpemba index and anomalous relaxation,https://doi.org/10.1103/PhysRevX.9.021060Phys. Rev. X 9, 021060 (2019).kumar A. Kumar and J. Bechhoefer, Exponentially faster cooling in a colloidal system, https://www.nature.com/articles/s41586-020-2560-x Nature 584, 64 (2020).gjsb-23 M. Gibbins, A. Jafarizadeh, A. Smith, and B. Bertini, Quench dynamics in d > 1 dimensional lattices: the free fermionic case,ArXiv:2310.XXXXX. | http://arxiv.org/abs/2310.18160v1 | {
"authors": [
"Shion Yamashika",
"Filiberto Ares",
"Pasquale Calabrese"
],
"categories": [
"cond-mat.stat-mech",
"hep-th",
"quant-ph"
],
"primary_category": "cond-mat.stat-mech",
"published": "20231027141045",
"title": "Time evolution of entanglement entropy after quenches in two-dimensional free fermion systems: a dimensional reduction treatment"
} |
Department of Mathematics, Boston College, 140 Commonwealth Avenue, Chestnut Hill, MA 02467, USA [email protected]

Toulouse Mathematics Institute, Université Paul Sabatier, 118 route de Narbonne, F-31062 Toulouse Cedex 9, France [email protected]

We show that except for one exceptional case, a lamination on the boundary of a 3-dimensional handlebody H is a Hausdorff limit of meridians if and only if it is commensurable to a lamination with a `homoclinic leaf'. This is a precise version of a philosophy called Casson's Criterion, which appeared in unpublished notes of A. Casson. Applications include a characterization of when a non-minimal lamination is a Hausdorff limit of meridians, in terms of properties of its minimal components, and a related characterization of which reducible self-homeomorphisms of ∂ H have powers that extend to subcompression bodies of H.

Homoclinic leaves, Hausdorff limits, and homeomorphisms
Cyril Lecuire
January 14, 2024
=======================================================

§ INTRODUCTION

Let H be a 3-dimensional handlebody[The body of this paper is written in greater generality, with the pair (H,S) replaced by a compact, orientable, 3-manifold M with hyperbolizable interior, together with an essential connected subsurface S⊂∂ M such that the multi-curve ∂ S is incompressible in M. However, everything we do is just as interesting in the handlebody case.] with genus g ≥ 2 and let S:=∂ H. A simple closed curve m on S is called a meridian if it bounds an embedded disk in H but not in S. Equip S with an arbitrary hyperbolic metric, and consider the set of geodesic laminations on S with the Hausdorff topology. We refer the reader to <cit.> for more information on laminations.

§.§ Homoclinic leaves

In J.P. Otal's thesis <cit.> the following is stated; it is attributed to an unpublished manuscript of A. Casson.

['Casson's Criterion'] A geodesic lamination on S is a Hausdorff limit of meridians if and only if it has a homoclinic leaf.

We call it a `statement' here instead of a theorem because it is not true as written, as we'll see later on. However, the connection between homoclinic leaves and meridians has been well studied, partly motivated by this statement, see for example the papers <cit.> and Long's earlier paper <cit.>. To define homoclinic, let H̃ be the universal cover of H, which is homeomorphic to a thickened infinite tree. A path ℓ : ℝ⟶H̃ is called homoclinic if there are sequences s_i,t_i∈ℝ such that |s_i-t_i|→∞, and sup_i d_H̃(ℓ(s_i),ℓ(t_i))<∞. Here, distance in H̃ is measured using the lift of any Riemannian metric on H. Since H is compact, the choice of metric does not matter.
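To make the boundedness condition above concrete, here is a small illustrative sketch that is not part of the paper's arguments: we record a path in H̃, up to bounded error, by the element of π_1(H) ≅ F_g labelling the translate of a fixed fundamental domain that contains it, so that by the Milnor–Švarc lemma the distances d_H̃(ℓ(s),ℓ(t)) can be replaced by word distances in the free group. The encoding of the path, the function names, and the thresholds min_gap and max_dist below are our own assumptions, and the finite search is only a proxy for the requirement |s_i-t_i|→∞.

```python
# Illustrative sketch (not from the paper): track a path in the universal
# cover of a genus-g handlebody H by the element of pi_1(H) = F_g indexing
# the translate of a fundamental domain containing it.  The homoclinic
# condition then asks for far-apart times whose group positions stay at
# bounded word distance.

def free_reduce(word):
    """Freely reduce a word in F_g; letters are 'a','A','b','B',..., where
    an upper-case letter is the inverse of the lower-case one."""
    out = []
    for x in word:
        if out and out[-1] == x.swapcase():
            out.pop()          # cancel x x^{-1}
        else:
            out.append(x)
    return out

def inverse(word):
    return [x.swapcase() for x in reversed(word)]

def word_distance(u, v):
    """Distance in the Cayley graph of F_g: length of the reduction of u^{-1} v."""
    return len(free_reduce(inverse(u) + v))

def looks_homoclinic(path, min_gap, max_dist):
    """path[t] = reduced word recording where the path sits at integer time t.
    Return pairs (s,t) with |s-t| >= min_gap whose positions stay max_dist-close."""
    witnesses = []
    for s in range(len(path)):
        for t in range(s + min_gap, len(path)):
            if word_distance(path[s], path[t]) <= max_dist:
                witnesses.append((s, t))
    return witnesses

# A toy path that wanders out along powers of 'a' and then returns.
path = [free_reduce(['a'] * k) for k in range(6)] + \
       [free_reduce(['a'] * (5 - k)) for k in range(6)]
print(looks_homoclinic(path, min_gap=8, max_dist=1))
```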
A path ℓ : ℝ⟶ S is called homoclinic if it has a lift to ∂H̃ that is homoclinic as above, and a complete geodesic on S is called homoclinic if it has a (possibly periodic, if the geodesic is closed) arc length parametrization that is homoclinic. As an example, any geodesic meridian m on S is homoclinic, for as m lifts to a simple closed curve in ∂H̃, any lift of a periodic parameterization of m is also periodic, and therefore homoclinic. On the other hand, if γ is an essential simple closed curve that is not a meridian, any periodic parameterization of γ lifts to a properly embedded biinfinite path in ∂H̃ that is invariant under a nontrivial deck transformation, and is readily seen to be non-homoclinic. We should mention that the definition introduced above is not quite what Casson and Otal called `homoclinic' (in French, `homoclinique'), but rather what Otal calls `faiblement homoclinique', or `weakly homoclinic'. However, the definition above has been adopted in most subsequent papers. While some of the discussion below is incorrect if we use Casson's definition, it can all be modified to apply. In particular, the same counterexamples show that Statement <ref> is still false using Casson's original definition. See <ref>. In his thesis <cit.>, Otal showed that any Hausdorff limit of meridians has a homoclinic leaf. (This statement was later extended by Lecuire <cit.> from handlebodies to more general 3-manifolds.) However, the converse is not true. First of all, any Hausdorff limit of meridians is connected, and there are disconnected laminations on S that have homoclinic leaves, e.g. the union of two disjoint simple closed curves, one of which is a meridian. There are also connected laminations with homoclinic leaves that are not Hausdorff limits of meridians. For example, let λ = m ∪ℓ be a lamination with two leaves, where m is a nonseparating meridian and the two ends of ℓ spiral around m in the same direction, but from opposite sides, as on the left in Figure <ref>. Then λ has a homoclinic leaf, but it is not a Hausdorff limit of simple closed curves: any simple geodesic that approximates ℓ closely is trapped and forced to spiral forever around m, so cannot be closed. As another example, let λ be the lamination on the right in Figure <ref>, which has three leaves, a meridian m and two leaves spiraling onto it. Then λ is a Hausdorff limit of simple closed curves, but it is not a limit of meridians. Indeed, given a simple closed curve μ on S, an arc α⊂μ is called an m-wave disjoint from m if it is homotopic rel endpoints in H to an arc of m and int(α) ∩ m=∅. Any meridian μ that intersects m has an m-wave disjoint from m: one looks for an `outermost' arc of intersection with a disk bounded by μ in a disk bounded by m. However, no simple closed curve that is Hausdorff-close to λ has an m-wave disjoint from m, so λ is not a limit of meridians. In both of these examples, the problem lies with the spiraling isolated leaves. One way to address this is as follows. We say that two laminations μ_1,μ_2 on S are commensurable if they contain a common sublamination ν such that for both i, the difference μ_i ∖ν is the union of finitely many isolated leaves. We say μ_1,μ_2 are strongly commensurable if they contain a common ν such that for both i, the difference μ_i ∖ν is the union of finitely many isolated leaves, none of which are simple closed curves. So, is Casson's Criterion at least true up to strong commensurability?
It turns out the answer is still no: the lamination λ in Figure <ref> contains a meridian as a leaf, but it is not strongly commensurable to a limit of meridians. Indeed, suppose that λ' is a Hausdorff limit of meridians that contains λ. In each component T ⊂ S∖ m, there is a unique homotopy class rel m of m-waves in T. So λ' contains a leaf ℓ that either intersects T in an arc in this homotopy class, or is contained in T and is obtained by spinning an arc in this homotopy class around m. In either case, ℓ intersects transversely the component of λ contained in T, a contradiction. One can also make similar examples of laminations that contain a meridian but are not even commensurable to a Hausdorff limit of meridians, by replacing the two curves on either side of m in Figure <ref> with minimal laminations that fill the two components of S ∖ m. It turns out, though, that this is basically the only counterexample. Let's say that a lamination λ on S is exceptional if S has genus 2, there is a separating meridian m on S that is either disjoint from λ or is a leaf of λ, and there are minimal sublaminations of λ that fill the two components of S ∖ m. If λ is a geodesic lamination on S that is not exceptional, then λ is strongly commensurable to a Hausdorff limit of meridians if and only if it is strongly commensurable to a lamination with a homoclinic leaf. See Theorem <ref> for a more general statement and for the proof. In our view, this is the strongest version of Casson's criterion that is likely to be true for arbitrary geodesic laminations. It may be, though, that the original Casson Criterion is true for minimal laminations. Our methods only work up to strong commensurability, though. For instance, if λ is minimal filling on S and contains a homoclinic leaf, to prove that λ is a limit of meridians one would have to ensure that the meridians produced in Lemma <ref> do not run across any diagonals of the ideal polygons that are components of S ∖λ. The main tool in the proof of Theorem <ref> is a complete characterization of the minimal laminations onto which the two ends of a homoclinic simple geodesic on S can accumulate. Suppose that h is a homoclinic simple biinfinite geodesic on S and that the two ends of h limit onto minimal laminations λ_-,λ_+ ⊂ S. Then either * the two ends of h are asymptotic on S, * one of λ_-,λ_+ is an intrinsic limit of meridians, or * λ_-,λ_+ are contained in incompressible subsurfaces S_-,S_+ ⊂ S that bound an essential interval bundle B⊂ H through which λ_- and λ_+ are homotopic. Here, a minimal lamination λ⊂ S is an intrinsic limit of meridians if it is strongly commensurable to the Hausdorff limit of a sequence of meridians that are contained in the smallest essential subsurface S(λ) ⊂ S containing λ, see Proposition <ref> for a number of equivalent definitions. We refer the reader to Theorem <ref> and Corollary <ref> for more precise and more general versions of the above that apply both to homoclinic biinfinite geodesics, and also to pairs of `mutually homoclinic' geodesic rays on S. Examples of (3) are shown in Figure <ref>. On the left, λ_-,λ_+ are simple closed curves that bound an embedded annulus A in H and B is a regular neighborhood of A. The geodesic h is homoclinic since the annulus A lifts to an embedded infinite strip ℝ× [-1,1]⊂H̃ and the two ends of a lift of h are asymptotic to ℝ_+ ×{-1} and ℝ_+ ×{1}, respectively. On the right, we write H=Y× [-1,1] where Y is a genus two surface with one boundary component.
The laminations λ_± are minimal (in the picture they are drawn as `train tracks') and fill Z×{± 1}, where Z⊂ Y is a torus with two boundary components. Here, B=Z× [-1,1]. Interval bundles are essential to the study of meridians on handlebodies, and it is no surprise that they appear in Theorem <ref>. For example, subsurfaces bounding such interval bundles are the `incompressible holes' studied by Masur-Schleimer <cit.>, and interval bundles appear frequently in Hamenstädt's work on the disk set, see e.g. <cit.>. We note that the interval bundles B appearing in Theorem <ref> may be twisted interval bundles over non-orientable surfaces, in which case λ_-=λ_+ and S_-=S_+. See <ref> for background on interval bundles.

§.§ Hausdorff limits via their minimal sublaminations

The previous two theorems suggest that if a lamination λ is a Hausdorff limit of meridians, one might expect to see minimal sublaminations of λ that are intrinsic limits of meridians, or pairs of components that are homotopic through essential interval bundles in H. In fact, we show the following. Suppose that λ⊂ S is a nonexceptional geodesic lamination that is a finite union of minimal components. Then λ is strongly commensurable to a Hausdorff limit of meridians if and only if either * λ is disjoint from a meridian on S, * some component of λ is an intrinsic limit of meridians, or * there are components λ_±⊂λ that fill incompressible subsurfaces S_±⊂ S, such that S_± bound an essential interval bundle B⊂ H, the laminations λ_± are essentially homotopic through B, and there is a compression arc α for B that is disjoint from λ. In (3), a compression arc for B is an arc from ∂ S_- to ∂ S_+ that is homotopic in H, keeping its endpoints in ∂ S_±, to a fiber of the interval bundle B. See <ref> and Figure <ref> for more explanation. Note that Theorem <ref> does not say anything interesting about which minimal filling laminations λ on S are strongly commensurable to Hausdorff limits of meridians, just that that happens if and only if (2) holds, which is trivial. Indeed, for minimal filling laminations, it is not clear that there should be an easy way to `identify' Hausdorff limits of meridians. The point of Theorem <ref>, though, is that it reduces the characterization of Hausdorff limits of meridians to the minimal filling case. We note that it should be possible to replace the part of the proof of Theorem <ref> that references homoclinic geodesics with arguments similar to those used in Masur-Schleimer's paper <cit.>.

§.§ Extension of reducible maps to compression bodies

As another application of our techniques, we consider extension properties of homeomorphisms f : S ⟶ S. To motivate this, recall that a subcompression body of H is a 3-dimensional submanifold C ⊂ H with S ⊂∂ C that is obtained by choosing a finite collection Γ of disjoint meridians on S, taking a regular neighborhood of S and a collection of discs in H with boundary Γ, and adding in any complementary components that are topological 3-balls. We say C is obtained by compressing Γ. We usually consider subcompression bodies only up to isotopy, and we allow the case that Γ=∅, in which case we recover the trivial subcompression body, which is just a regular neighborhood of S. Two examples where Γ contains a single meridian are drawn in Figure <ref>. On the left, we compress a separating meridian m_1 and obtain a subcompression body of H that has two `interior' boundary components contained in int(H); these are the tori drawn in gray.
On the right, we compress a nonseparating meridian m_2 and obtain a subcompression body with a single torus interior boundary component. Note that compressing Γ={m_1,m_2} gives the same subcompression body as compressing m_2, because we fill in complementary components that are balls. See <ref> for details. Biringer-Johnson-Minsky <cit.> showed[Their condition was really that the measured lamination λ_+ lies in the `limit set' of the handlebody, i.e. that it is a limit of meridians in 𝒫ℳℒ(S), but that is equivalent to being strongly commensurable to a Hausdorff limit of meridians. See <ref> for more information about the limit set.] that the attracting lamination λ_+ of a pseudo-Anosov map f : S ⟶ S is strongly commensurable to a Hausdorff limit of meridians if and only if f has a nonzero power that extends to a homeomorphism of some subcompression body of H. Both statements are `genericity' conditions on f with respect to the structure of H and both were previously studied in the literature, see e.g. <cit.> and <cit.>. Here, we show that extension of powers of a homeomorphism f : S ⟶ S to subcompression bodies can be detected by looking at extension of powers of its components in the Nielsen-Thurston decomposition. More precisely, recall that f is pure if there are disjoint essential subsurfaces S_i ⊂ S, such that f=id on S_id:=S ∖∪_i S_i, and where for each i, if we set f_i := f|_S_i, then either * S_i is an annulus and f_i is a power of a Dehn twist, or * f_i is a pseudo-Anosov map on S_i. It follows from the Nielsen-Thurston classification <cit.> that every homeomorphism of S has a power that is isotopic to a pure homeomorphism. Let f : S ⟶ S be a pure homeomorphism. Then f has a power that extends to a nontrivial subcompression body of H if and only if either: * there is a meridian in S_id, * for some i, the map f_i : S_i ⟶ S_i has a power that extends to a nontrivial subcompression body of H that is obtained by compressing a set of meridians in S_i, or * there are (possibly equal) indices i,j such that S_i,S_j bound an essential interval bundle B in H, such that some power of f|_S_i ∪ S_j extends to B, and there is a compression arc α for B whose interior lies in S_id. In (3), note that if (for simplicity) B is a trivial interval bundle, then f|_S_i∪ S_j extends to B exactly when f_i,f_j become isotopic maps when S_i,S_j are identified through B. More generally, a power of f|_S_i∪ S_j extends to B when f_j is obtained from f_i by multiplying by a periodic map that commutes with f_i. For the proof of Theorem <ref>, it is necessary to extend the theorem of Biringer-Johnson-Minsky <cit.> referenced above to the case of pseudo-Anosovs on essential subsurfaces of S. More precisely, we show that (2) above holds exactly when the attracting lamination of f_i is an intrinsic limit of meridians. This is done in Theorem <ref>; the proof is basically the same as theirs, although we reorganize it into separate topological and geometric arguments in a way that makes it clearer than the original version. In contrast to the pseudo-Anosov case, one cannot determine when a pure homeomorphism f: S ⟶ S has a power that extends by looking at its attracting lamination. Here, the `attracting lamination' of f is the union of all its twisting curves and all the attracting laminations of its pseudo-Anosov components. Indeed, set H=Y × [-1,1], where Y is a surface with boundary, and let f : Y ⟶ Y be a pseudo-Anosov map such that f=id on ∂ Y.
Then S = Y ×{-1}∪∂ Y × [-1,1] ∪ Y ×{+1} and if we let F : S⟶ S be f× id on Y×{± 1} and the identity map on the rest of S, while we let G : S ⟶ S be f× id on Y ×{1}, and f^2× id on Y ×{-1}, and the identity map on the rest of S, then F,G have the same attracting lamination, but F extends to a homeomorphism of H, while G does not. In fact, it follows from Theorem <ref> that no power of G extends to a nontrivial subcompression body of H.

§.§ Other results of interest

There are two other theorems in this paper that we should mention in the introduction. In <ref> we study the disk set 𝒟(S,M) of all isotopy classes of meridians in an essential subsurface S ⊂∂ M with ∂ S incompressible, where here M is a compact, irreducible 3-manifold with boundary. We show in Proposition <ref> that either 𝒟(S,M) is small, meaning that it is either empty, has a single element, or has a single non-separating element and infinitely many separating elements that one can explicitly describe, or 𝒟(S,M) is large, meaning that it has infinite diameter in the curve complex 𝒞(S). This result will probably not surprise any experts, but we have never seen it in the literature. In <ref> we show how essential interval bundles in a compact 3-manifold with boundary M can be seen in the limit sets in ∂ℍ^3 associated to hyperbolic metrics on int(M). This picture was originally known to Thurston <cit.>, and was studied previously under more restrictive assumptions by Walsh <cit.> and Lecuire <cit.>. We need a more general theorem in the proof of Theorem <ref>: in particular, we need a version that allows accidental parabolics. This is Theorem <ref>. Our proof is also more direct and more elementary than those of <cit.>. See <ref> for more context and details.

§.§ Outline of the paper

Section <ref> contains all the necessary background for the rest of the paper. We discuss the curve complex, the disc set, compression bodies, interval bundles, the Jaco-Shalen and Johannson characteristic submanifold theory, compression arcs, and geodesic laminations. §3 and §4 are described in the previous subsection. §5 contains a discussion of homoclinic geodesics, intrinsic limits of meridians, and some of their basic properties. The main point of §6 is Theorem <ref>, which is the more precise and general version of Theorem <ref> above. §7 is devoted to Theorem <ref>, which characterizes Hausdorff limits of meridians and combines Theorems <ref> and <ref> above. §8 contains our extension of Biringer-Johnson-Minsky <cit.> to partial pseudo-Anosovs, and §9 contains the proof of Theorem <ref>, which generalizes Theorem <ref> above.

§.§ Acknowledgements

The authors would like to thank Jeff Brock, Juan Souto and Sebastian Hensel for useful conversations. The first author was partially supported by NSF grant DMS-1308678.

§ PRELIMINARIES

§.§ Subsurfaces with geodesic boundary

Suppose S is a finite type hyperbolic surface with geodesic boundary. A connected subsurface with geodesic boundary in S is by definition either * a simple closed geodesic X on S, which is the degenerate case, or * an immersed surface X⟶ S such that the restriction to int(X) and to each component of ∂ X is an embedding, and where each component of ∂ X maps to a simple closed geodesic on S. In (2), the point is that our surface is basically an embedding, except that we allow two boundary components of X to map to the same geodesic in S. We will usually suppress the immersion and write X ⊂ S, abusing notation.
We considerX,Y⊂ S to be equal if they are either the same simple closed geodesic, or if they are both immersions as in (2) and the interiors of their domains have the same images. We say X,Y are essentially disjoint if either: * X,Y are disjoint simple closed geodesics, * X is a simple closed geodesic, Y is not, and X is disjoint from int(Y), or vice versa with X,Y exchanged, or* X,Y have nonempty disjoint interiors. More generally, we define a (possibly disconnected) subsurface with geodesic boundary in S to be a finite union of essentially disjoint connected subsurfaces with geodesic boundary.Any connected essential subsurface T ⊂ S that is not an annulus homotopic into a cusp of S determines a unique connected subsurface with geodesic boundary X such that the images of π_1 T and π_1 X in π_1 S are conjugate. Here, we say that X is obtained by tightening T. More generally, we can tighten adisconnected T to a disconnected X by tightening all its components. Tightening is performed as follows. If T is an annulus, then we let X be the unique simple closed geodesic homotopic to the core curve of T. Otherwise, we obtain X by homotoping T so that every component of ∂ T is either geodesic or bounds a cusp in S ∖ T, and then adding in any components of S ∖ T that are cusp neighborhoods.Alternatively, let T̃ be a component of the pre-image of T in the universal cover S̃, which is isometric to a convex subset of ^2, let Λ_T ⊂∂^3 be the set of limit points of T̃ on ∂_∞^2, and let X̃ be the convex hull of Λ_T within S̃. Then X̃ projects to an X as desired.Conversely, suppose X is a subsurface with geodesic boundary in S. Then there is a compact essential subsurface T ↪ S, unique up to isotopy and called a resolution of X, that tightens to X. When X is a simple closed geodesic, we take T to be a regular neighborhood of X. Otherwise, construct T by deleting half-open collar neighborhoods of all boundary components of X, and deleting open neighborhoods of all cusps of T. Note that subsurfaces with geodesic boundary X,Y are essentially disjoint if and only if they admit disjoint resolutions.§.§ The curve complexLet S be a compact orientable surface, possibly with boundary, and assume that S is not an annulus.The curve complex of S, written 𝒞 (S ), is the graph whose vertices are homotopy classes of nonperipheral, essential simple closed curves on S and whose edges connect homotopy classes that intersect minimally.When S is a 4-holed sphere, minimally intersecting simple closed curves intersect twice, while on a punctured torus they intersect once.Otherwise, edges in 𝒞 (S ) connect homotopy classes that admit disjoint representatives.Masur-Minsky <cit.> have shown that the curve complex is Gromov hyperbolic, when considered with the path metric in which all edges have unit length.Klarreich <cit.> (see also <cit.>) showed that the Gromov boundary ∂_∞𝒞 (S) is homeomorphic to the space of ending laminations of S: i.e. filling, measurable geodesic laminations on S with the topology of Hausdorff super-convergence. §.§ The disc setSuppose that S⊂∂ M is an essential subsurface of the boundary of a compact, irreducible 3-manifold M, and that ∂ S is incompressible in M.An essential simple closed curve γ on M is called a meridian if it bounds an embedded disc in M.By the loop theorem, γ is a meridian if and only if it is homotopically trivial in M. 
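For example, if M is a solid torus, then up to isotopy ∂ M contains a single meridian, namely the boundary of a meridian disc, and it is nonseparating in ∂ M; a longitude of ∂ M generates π_1 M and so is not a meridian. By contrast, the boundary of a handlebody of genus at least 2 contains infinitely many isotopy classes of meridians, for instance the images of any one meridian under powers of a Dehn twist about a meridian that intersects it essentially.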
The disc set of S in M, written 𝒟 (S,M), is the (full) subgraph of𝒞 (S) whose vertices are the meridians of S in M.When convenient, we will sometimes regard 𝒟 (S,M) as a subset of the space of projective measured laminations (S), instead of as a graph.The following is an extension of a theorem of Masur-Minsky <cit.>, which they prove in the case that S is an entire component of ∂ M. The subset 𝒟 (S,M) of 𝒞 (S) is quasiconvex. To prove Theorem <ref> as stated above, one follows the outline of <cit.>: given a,b ∈𝒟 (S,M), the goal is to construct a well-nested curve replacement sequence from a=a_1,…,a_n=b consisting of meridians, which must be a quasi-geodesic by their Theorem 1.2.The sequence (a_i) is created by successive surgeries along innermost discs, and the only difference here is that one needs to ensure that none of the surgeries create peripheral curves.However, the surgeries create meridians and S has incompressible boundary. §.§ Compression bodies We refer the reader to 2 of <cit.> for a more detailed discussion of compression bodies, and state here only a few definitions that will be used later on.A compression body is a compact, orientable, irreducible 3-manifold C with a π_1-surjective boundary component ∂_+ C, called the exterior boundary of C. The complement ∂ C ∖∂_+ C is called the interior boundary, and is written ∂_- C. Note that the interior boundary is incompressible.For if an essential simple closed curve on ∂_- C bounds a disk D ⊂ C, then C ∖D has either one or two components, and in both cases, Van Kampen's Theorem implies that ∂_+ C, which is disjoint from D, cannot π_1-surject.Suppose M is a compact irreducible 3-manifold with boundary, let Σ be a component of ∂ M and let S ⊂Σ is an essential subsurface. A subcompression body of (M,S) is a compression body C⊂ M with exterior boundary Σ that can be constructed as follows. Choose a set Γ of disjoint, pairwise nonhomotopic simple closed curves on S that are all meridians in M. Let C'⊂ M be the union of Σ with a set of disjoint disks in M whose boundaries are the components of Γ, and define C⊂ M to be the union of a regular neighborhood of C' ⊂ M together with any components of the complement of this neighborhood that are topological 3-balls. Here, we say that C⊂ M is obtained by compressing Γ. Note that the irreducibility of M implies that no component of ∂ C is a 2-sphere, and hence that C is irreducible, and therefore a compression body. See <cit.>for details about constructing compression bodies via compressions. When the set Γ above is empty, we obtain the trivial subcompression body of (M,S), which is just a regular neighborhood of Σ⊂∂ M. At the other extreme, we can compress a maximal Γ, which gives the `characteristic compression body' of (M,S), defined via the following fact.[The characteristic compression body] Suppose Mis anirreduciblecompact 3-manifold, that Σ is a component of ∂ M and that S ⊂Σ is an essential subsurface such that the multicurve ∂ S is incompressible in M. Thenthere is a unique (up to isotopy) subcompression bodyC:=C(S,M) ⊂ Mof (M,S), called the characteristic compression body of (M,S), such that a curve γ in S is a meridian in C if and only if it is a meridian in M.Moreover, C can be constructed by compressing anymaximal set of disjoint, pairwise nonhomotopic meridians in S.This is a version of a construction of Bonahon <cit.>, except that he only defines the characteristic compression body when S is an entire boundary component of M. 
In that case, the interior boundary components of the characteristic compression body are incompressible in M, so Bonahon's construction can be used to reduce problems about 3-manifolds to problems about compression bodies and about 3-manifolds with incompressible boundary. The reader can also compare the fact to Lemma 2.1 in <cit.>, which is the special case of the fact where M is a compression body and S is its exterior boundary, so that C=M is obtained by compressing any maximal set of disjoint, nonhomotopic meridians in M. Let Γ be a maximal set of disjoint, pairwise nonhomotopic M-meridians on S, and define C by compressing Γ. We have to check that any curve in S that is an M-meridian is also a C-meridian. Suppose not, and take an M-meridian m ⊂ S that is not a C-meridian, and that intersects Γ minimally. Since Γ is maximal, m intersects some component γ⊂Γ. Then there is an arc α⊂γ with endpoints on m and interior disjoint from m, that is homotopic rel endpoints in M to the arcs β',β”⊂ m with the same endpoints. (Here α is an `outermost' arc of intersection on a disk bounded by γ, where the intersection is with the disk bounded by m, see e.g. Lemma 2.8 in <cit.>.) Since m is in minimal position with respect to Γ, the curves m' = α∪β' and m”=α∪β” are both essential, and are M-meridians in S that intersect Γ fewer times than m. So by minimality of m, both m',m” are C-meridians, implying that α is homotopic rel endpoints to β' and β” in C. This implies m is a C-meridian, contrary to assumption. For uniqueness, suppose we have two subcompression bodies C_1,C_2 of (M,S) in which all curves in S that are meridians in M are also meridians in C_1,C_2. Since C_1,C_2 are subcompression bodies of (M,S), the kernels of the maps π_1 Σ ⟶ π_1 C_i induced by inclusion are both normally generated by the set of all elements of π_1 Σ that represent simple closed curves in S that are meridians in M. Hence, the disk sets 𝒟(Σ,C_i) are the same for i=1,2. It follows that C_1,C_2 are isotopic in M, say by Corollary 2.2 of <cit.>. §.§ Interval bundles In this paper, an interval bundle always means a fiber bundle B ⟶ Y, where Y is a compact surface with boundary, and where all fibers are closed intervals I. Regarding the fibers as `vertical', we call the associated ∂ I-bundle over Y the horizontal boundary of B, written ∂_H B. An interval bundle that is isomorphic to Y ×[-1,1] is called trivial, and we often call nontrivial interval bundles twisted. All 3-manifolds in this paper are assumed to be orientable, but even when the total space B of an interval bundle is orientable, the base surface Y may not be. Indeed, let Y be a compact non-orientable surface and let π: Ŷ⟶ Y be its orientation cover. Then the mapping cylinder B := Ŷ× [0,1] / ∼, where (x,1)∼(x',1) whenever π(x)=π(x'), is orientable, and is a twisted interval bundle over Y, where the fiber over y∈ Y is obtained by gluing together the two intervals {x}× [0,1] and {x'}× [0,1] along (x,1) and (x',1), where π^-1(y)={x,x'}. The horizontal boundary ∂_H B here is Ŷ×{0}, which is homeomorphic to the orientable surface Ŷ. Note that B is double covered by the trivial interval bundle Ŷ× [-1,1]. Suppose that B⟶ Y is an interval bundle and B is orientable. If Y is orientable, then B is a trivial interval bundle. If Y is nonorientable, then B is isomorphic to the mapping cylinder of the orientation cover of Y. If Y and B are orientable, then so is the associated line bundle, so the bundle is trivial.
If Y is nonorientable, the horizontal boundary ∂_H B⊂∂ B is an orientable surface that double covers Y, and from there it's easy to construct the desired isomorphism to the mapping cylinder of the projection ∂_H B ⟶ Y. An interval bundle B⟶ Y comes with a canonical involution σ, which is well defined up to isotopy, and which is defined as follows. If B≅ Y × [-1,1] is a trivial interval bundle, we define σ : Y × [-1,1] ⟶ Y × [-1,1], σ(y,t)=(y,-t). And if B is the twisted interval bundle B ≅Ŷ× [0,1] / ∼ above, we define σ : Ŷ× [0,1] / ∼⟶Ŷ× [0,1] / ∼, σ(ŷ,t) = (ι(ŷ),t), where ι is the nontrivial deck transformation of the orientation cover. Note that σ is always an orientation reversing involution of B, so in particular, when we give the surface ∂_H B its boundary orientation, the restriction σ|_∂_HB is also orientation reversing. We also recall the following well-known fact. If S is a compact, orientable surface with nonempty boundary, the trivial interval bundle S × [-1,1] is homeomorphic to a handlebody. It's a nice topology exercise to visualize the homeomorphism. Regard S as the union of a polygon and a collection of bands (long, skinny rectangles), each of which is glued along its short sides to two sides of the polygon. Thickening, the picture becomes a ball with 1-handles attached. Note that if S=S_g,b has genus g and b boundary components, then the handlebody S× [-1,1] has genus 2g+b-1, since that is the rank of the free group π_1(S × [-1,1])≅π_1 S; indeed, π_1 S_g,b is free of rank 1-χ(S_g,b)=2g+b-1. Finally, suppose π : B ⟶ Y is an interval bundle and f : ∂_HB ⟶∂_HB is a homeomorphism. We say that f extends to B if there is a homeomorphism F:B ⟶ B such that F|_∂_HB=f. We leave the following to the reader. The following are equivalent: * f extends to B, * f ∘σ is isotopic to σ∘ f on ∂_HB, * after isotoping f, there is a homeomorphism f̅ : Y ⟶ Y such that π∘ f = f̅∘π, * there is a homeomorphism from B to either Y × [-1,1] or Ŷ× [0,1]/∼, taking horizontal boundary to horizontal boundary, such that f=F|_∂_HB, and where either F : Y × [-1,1] ⟶ Y × [-1,1], F(y,t)=(f̅(y),t), for some homeomorphism f̅ :Y⟶ Y, or F : Ŷ× [0,1]/∼ ⟶Ŷ× [0,1]/∼, F(ŷ,t)=(f̅(ŷ),t), for some homeomorphism f̅ :Ŷ⟶Ŷ commuting with the deck group of Ŷ⟶ Y, and hence covering a homeomorphism of Y. §.§ The characteristic submanifold of a pair Suppose that M is a compact, orientable 3-manifold and that S ⊂∂ M is an incompressible subsurface. In the late 1970s, Jaco-Shalen <cit.> and Johannson <cit.> described a `characteristic' submanifold of (M,S) that contains the images of all nondegenerate maps from interval bundles and Seifert fibered spaces. There is a perfectly embedded Seifert pair (X,Σ) ⊂ (M,S), unique up to isotopy and called the characteristic submanifold of (M,S), such that any nondegenerate map (B,F)⟶ (M,S) from a Seifert pair (B,F) is homotopic as a map of pairs into (X,Σ). A Seifert pair is a 3-manifold pair that is a finite disjoint union of interval bundle pairs (B,∂_H B) and S^1-bundle pairs. Here, an S^1-bundle pair (B,F) is a 3-manifold B fibered by circles, where F ⊂∂ B is a compact subsurface saturated by fibers. A Seifert pair (X,Σ) ⊂ (M,S) is well embedded if X ∩∂ M =Σ⊂ S and the frontier of X in M is a π_1-injective surface, and is perfectly embedded if it is well embedded, no component of the frontier of X in M is homotopic into S, and no component of X is homotopic into another component. When (B,F) is a connected Seifert pair, a map f: (B,F)⟶ (M,S) is essential if it is not homotopic as a map of pairs into S.
Notice that this only depends on the image of f and not on f itself. One says f is nondegenerate if it is essential, its π_1-image is nontrivial, its π_1-image is non-cyclic when F=∅, and no fiber of B is nullhomotopic in (M,S). For disconnected (B,F), one says f is nondegenerate if its restriction to every component is nondegenerate.The following is very well known. If int(M) is hyperbolizable and (B,F) is an S^1-bundle pair that is perfectly embedded in (M,S), then either* (B,F) is a `fibered solid torus', i.e. B is an S^1-bundle over a disk, and F ⊂∂ B ≅ T^2 is a collection of fibered parallel annuli, or* (B,F) is a `thickened torus', i.e. B is an S^1-bundle over an annulus, so is homeomorphic to T^2 × [0,1], and each component of F is either a torus or a fibered annulus.So in particular, the components of the characteristic submanifold of (M,S) are either interval bundles, solid tori, or thickened tori. Suppose that(B,F) is a perfectly embedded S^1-bundle pair in M. Then B ⟶ Y is an S^1-bundle, where Y is a compact 2-orbifold, and the cyclic subgroup Z ⊂π_1 B corresponding to a regular fiber is normal in π_1 B. In a hyperbolic 3-manifold, any subgroup of π_1 that has a cyclic normal subgroup is elementary, say by a fixed point analysis on ∂_∞^3. So, π_1 B is either cyclic or isomorphic to ^2. It follows that Y is a disc, in which case B is a fibered solid torus, or Y is an annulus, in which case B is a thickened torus. In this paper we will mostly be interested in interval bundles. For brevity, we'll use the following terminology, which differs slightly from the terminology above used by Jaco-Shalen. An essential interval bundle in (M,S) is an essential, well-embedded interval bundle pair (B,∂_H B)↪ (M,S).Note that the horizontal boundary of any essential interval bundle is an incompressible subsurface of S.The definition above differs from a well embedded interval bundle pair in that we are excluding boundary-parallel interval bundles over annuli, and differs from a perfectly embedded interval bundle pair in that we are allowing components of the frontier of an interval bundle over a surface that is not an annulus to be boundary parallel. For instance, if Y is a surface with boundary and Y'⊂ Y is obtained by deleting collar neighborhoods of the boundary components, and we set M=Y× [-1,1], which is a handlebody, then (Y'× [-1,1],Y'×{-1,1}) is an essential interval bundle in (M,∂ M), but is not perfectly embedded. However, note that any essential interval bundle (B,∂_H B)↪ (M,S) is perfectly embedded in (M,∂_H B).§.§ Compression arcs Suppose (B,∂_H B) ⊂ (M,S) is an essential interval bundle. An arc α⊂ S with endpoints on ∂ (∂_H B) and interior disjoint from ∂_H B is calleda compression arc if it is homotopic in M to a fiber of B, while keeping its endpoints on∂(∂_HB). See Figure <ref>.To link this definition with more classical ones, it is easy to see that there is a compression arc for B if and only if Fr(B) is boundary compressible, see <cit.> for a definition.Write our interval bundle as π : B ⟶ Y. Let α be a compression arc for B. After isotoping the bundle map π, we can assume that α is homotopic rel endpoints to a fiber π^-1(y), where y∈ Y. Suppose c is an oriented, two-sided, essential, simple closed loop Y based at y, and suppose that either c is nonperipheral in Y, or that Y is an annulus or Möbius band. Write π^-1(c)=c_-∪ c_+, where c_± are disjoint simple closed oriented loops in X based at y_±, and where the orientations of c_± project to that of c. 
The concatenation m(c):=c_- ·α· c_+^-1·α^-1 is homotopic to a meridian on S. So, a compression arc α allows one to make compressible curves on S from essential curves on Y. See Figure <ref>. Since α is homotopic rel endpoints to the fiber π^-1(y), the curve m(c) is homotopic in M to a curve in B that projects under π to c · c^-1, and hence m(c) is nullhomotopic in M. Checking orientations, one can see that m(c) is homotopic to a simple closed curve on S.So, we only have to prove that m(c) is homotopically essential on S. Suppose that c_-,c_+ are freely homotopic on ∂_H B as oriented curves. (This happens exactly when the curve c ⊂ Y bounds a Möbius band in Y.) Then m(c) is homotopic to the commutator of two essential simple closed curves on S that intersect once, and hence is essential since S is not a torus.We can now assume that that c_± are not freely homotopic in ∂_H B as oriented curves. If m(c) is inessential, then c_± are freely homotopic on S, so c_± are homotopic in ∂_H B to boundary components c_±' ⊂∂_H B that bound an annulus in S ∖∂_HB. In this case c_± are peripheral, so we may assume that Y is either an annulus or a Möbius band. If Y is a Möbius band, we are in the situation of the previous paragraph and are done. So, Y is an annulus, and ∂_HB is a pair of disjoint annuli on S, where c_±' lie in different components of ∂_HB. Since c_±' bound an annulus in S∖∂_HB, the interval bundle B is inessential, contrary to our assumption. In fact, more is true.[Arcs that produce meridians] Suppose (B,∂_H B) ⊂ (M,S) is an essential interval bundle and let α⊂ S be an arc with endpoints on ∂_H B and interior disjoint from ∂_H B. Let X ⊂ S be a regular neighborhood of α∪∂_H B within S. Then there is a meridian in X if and only if we have either: * the endpoints of α lie on the same component c of ∂ (∂_HB), and there is an arc β⊂ c such that α∪β is a meridian, or * α is a compression arc.Note that in the second case the endpoints of α lie on distinct components of ∂ (∂_HB), so in particular the two cases are mutually exclusive.The reason we say X `contains a meridian' instead of `is compressible' is that X may not be an essential subsurface of S, and we want to emphasize that the essential curve in X that is compressible in M is actually essential in S. For example, letY be a compact surface with boundary, Y' ⊂ Y be obtained by deleting a collar neighborhood of ∂ Y, set B=Y' × [-1,1] and M=Y × [-1,1], and let α be a spanning arc of B in ∂ M.The `if' direction is immediate: in case (1) we are essentially given a meridian in X, and in case (2) we can appeal to Claim <ref>. We now work on the `only if' direction. Write our regular neighborhood of ∂_H B∪α as X=∂_H B ∪ R where R is a rectangle with two opposite `short' sides on the boundary of ∂_H B. Let D ⊂ M be an essential disc whose boundary is contained in X, and where D intersects the frontier Fr(B)⊂ M in a minimal number of components. Let a ⊂ D ∩Fr(B) be an arc that is `outermost' in D, i.e. there is some arc a'⊂∂ D with the same endpoints as a such that a,a' bound an open disk in D that does not intersect Fr(B).We claim that a' ⊂ R. If not, then a'⊂∂_H B, and bounds a disk in B with the arc a ⊂∂ B. Writing the interval bundle as π: B⟶ Y, the projection π(a ∪ a') in Y is then also nullhomotopic, so π(a') is homotopic rel endpoints into ∂ Y.Lifting this homotopy through the covering map ∂_H B ⟶ Y we get that a' is inessential in ∂_H B, i.e. is homotopic in ∂_H B rel endpoints into ∂(∂_H B). 
We can then decrease the number of components of D ∩Fr(B), contradicting that this number is minimal. So, a' ⊂ R. Again by minimality of the intersection, the endpoints of a' lie on opposite short sides of R, so α is homotopic to a' through arcs in R with endpoints on Fr(B). Since a' is homotopic rel endpoints to a ⊂Fr(B), it follows that α is homotopic rel endpoints into Fr(B). If the two endpoints of α lie on the same component of ∂(∂_H B), we are in case (1), and otherwise we are in case (2). §.§ Laminations We assume the reader is familiar with geodesic and measured laminations on finite type hyperbolic surfaces. See e.g. <cit.>. Suppose λ is a connected geodesic lamination on a finite type hyperbolic surface S with geodesic boundary. We say that λ fills an essential subsurface T ⊂ S if λ⊂ T and λ intersects every essential, non-peripheral simple closed curve in T. For every connected λ, there is a unique subsurface with geodesic boundary (as in <ref>) that is filled by λ, which we denote by S(λ). It is the minimal subsurface with geodesic boundary in S that contains λ. Here, S(λ) can be constructed by taking a component λ̃⊂S̃⊂^2 of the preimage of λ, letting C⊂^2 be the convex hull of the set of endpoints of leaves of λ̃ in ∂^2, and projecting C into S. Suppose that M is a compact, orientable irreducible 3-manifold and let S ⊂∂ M be an essential subsurface. The limit set of (S,M) is the closure Λ(S,M) of the set {meridians γ⊂ S} in 𝒫ℳℒ(S), where 𝒫ℳℒ(S) is the space of projective measured laminations on S. The limit set was first studied by Masur <cit.> in the case that M is a handlebody, with S its entire boundary. In this case, Kerckhoff <cit.> later proved that the limit set has measure zero in 𝒫ℳℒ(S), although a mistake in his argument was later found and fixed by Gadre <cit.>. In some ways, Λ(S,M) acts as a dynamical limit set. For instance, let Map(S) be the mapping class group of S, and let Map(S,M) ⊂Map(S) be the subgroup consisting of mapping classes represented by restrictions of homeomorphisms of M. Then we have: * If Λ(S,M) is nonempty, it is the smallest nonempty closed subset of 𝒫ℳℒ(S) that is invariant under Map(S,M). * If Map(S,M) contains a pseudo-Anosov map on S, then Λ(S,M) is the closure of the set of the attracting and repelling laminations of pseudo-Anosov elements of Map(S,M). Note that Map(S,M) contains a pseudo-Anosov map on S if and only if the disk set 𝒟(S,M) has infinite diameter in the curve complex 𝒞(S), where the latter condition was discussed earlier in Proposition <ref>. See also <cit.>. For the first part, just note that Dehn twists T_m around meridians m⊂ S are in Map(S,M), so if A ⊂𝒫ℳℒ(S) is nonempty and invariant, λ∈ A and m is a meridian, then m=lim_i T^i_m(λ) is also in A, implying Λ(S,M)⊂ A. For the second part, take a pseudo-Anosov f∈Map(S,M) with attracting lamination λ_+, say. If m is a meridian in S, then T_m^i ∘ f ∘ T^-i_m are pseudo-Anosov maps on S and their attracting laminations converge to m, and then the argument finishes as before. §.§ Laminations on interval bundles Suppose that Y is a compact hyperbolizable surface with boundary, and that B⟶ Y is an interval bundle over Y. Endow Y and the horizontal boundary ∂_H B with arbitrary hyperbolic metrics such that the boundary components are all geodesic. Suppose we have two geodesic laminations λ_± on ∂_H B.
We say that λ_± are essentially homotopic through B if there is a lamination λ and a homotopy h_t : λ⟶ B, t∈ [-1,1], such that h_± 1 is a homeomorphism onto λ_±, and where (h_t) is not homotopic into ∂_H B. When B is a trivial interval bundle, λ_± are essentially homotopic through B if and only if we can write B≅ Y × [-1,1] in such a way that λ_±=λ̅×{± 1} for some geodesic lamination λ̅ on Y. This is an easy consequence of the fact that on a surface, homotopic laminations are isotopic. In general: Suppose that λ_± are disjoint or equal geodesic laminations on ∂_HB. Then the following are equivalent. * λ_± are essentially homotopic through B. * λ_± is isotopic on ∂_H B to σ(λ_∓), where σ is the canonical involution of B discussed in <ref>. Moreover, (1) and (2) imply (3): there is a geodesic lamination λ̅ on Y such that λ_-∪λ_+ is isotopic on ∂_HB to the preimage (π|_∂_HB)^-1(λ̅). Here, (3) does not always imply (1,2), since it could be that λ̅ has two components, (π|_∂_HB)^-1(λ̅) has four, and these components are incorrectly partitioned into the two laminations λ_±. However, that's the only problem, so for instance if λ_± are minimal then (1) - (3) are equivalent. While we have phrased things more generally in this section, we can always assume in proofs that our hyperbolic metrics have been chosen so that the covering map π|_∂_HB: ∂_HB ⟶ Y is locally isometric. Here, we're using the fact that given two hyperbolic metrics with geodesic boundary on a compact surface, a geodesic lamination with respect to one metric is isotopic to a unique geodesic lamination with respect to the other hyperbolic metric. In this case, we can remove the word `isotopic' from (2) and (3). The fact is trivial when B is a trivial interval bundle. When B is nontrivial, lift the homotopy to the trivial interval bundle B'⟶ B that double covers B, giving homotopic laminations λ_±' ⊂∂_HB'. The equivalence (1) ⟺ (2) follows since the canonical involution on B' covers that of B. For (2) ⟹ (3), note that since λ_± are disjoint or equal and differ by σ, their projections π(λ_±)⊂ Y are the same, and are a geodesic lamination λ̅ on Y. § LARGE AND SMALL DISK SETS AND COMPRESSION BODIES Suppose that S⊂∂ M is an essential subsurface of the boundary of a compact, irreducible 3-manifold M, and that ∂ S is incompressible in M. The following is probably known to some experts, but we don't think it appears anywhere in the literature, so we give a complete proof. With M,S as above, either * 𝒟(S,M) has infinite diameter in 𝒞(S), * S has one nonseparating meridian δ, and every other meridian is a band sum of δ, * S has a single meridian, which is separating, or * 𝒟(S,M) =∅. In case (1), we will say that 𝒟(S,M) is large, and in cases (2)–(4), we will say that 𝒟(S,M) is small. Similarly, if C(S,M) is the characteristic compression body defined in Fact <ref>, then C(S,M) is said to be large or small depending on whether 𝒟(S,M) is large or small. See also the discussion of small compression bodies in Section 3 of <cit.>. Here, recall that a band sum of a meridian δ is the boundary of a regular neighborhood of δ∪β, where β is a simple closed curve on S that intersects δ once.
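Concretely, since β intersects δ exactly once, a regular neighborhood of δ∪β in S is a once-punctured torus, and (for suitable choices of basepoint and orientations) its boundary, the band sum, represents the commutator βδβ^{-1}δ^{-1} in π_1 M.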
Any such band sum must be a meridian: as an element of π_1 M it is a commutator with a trivial element, since the meridian δ bounds a disc and is therefore trivial in π_1 M. Also, (2) includes the case when M is a solid torus and S=∂ M, in which case there is only one (nonseparating) meridian. When M is not a solid torus, though, every nonseparating meridian has infinitely many band sums. Before beginning the proof, we first establish the following: Suppose S is not a torus, γ⊂ S is a meridian on S and δ is a meridian that lies in a component T ⊂ S ∖γ. If γ is not a band sum of δ, there is a pseudo-Anosov f: T⟶ T that extends to a homeomorphism of M. The condition that γ is not a band sum is necessary. For if M is a handlebody with S=∂ M, and γ is a separating meridian that bounds a compressible punctured torus T ⊂ S, then T has only a single meridian δ. This δ is non-separating and γ is a band sum of δ. Any map T⟶ T that extends to a homeomorphism of M must then fix δ, so cannot be pseudo-Anosov. Similarly, if S is a torus and γ is a meridian, the complement of γ is an annulus, which does not admit any pseudo-Anosov map. Suppose first that γ is not separating. Any simple closed curve that intersects γ once can be used to create a band sum. Now S is not a torus, and cannot be a punctured torus either, since then its boundary would be compressible. So, there is a pair α,β of band sums of γ that fill S ∖γ. By a theorem of Thurston <cit.>, the composition of twists T_α∘ T_β^-1 is pseudo-Anosov. Each twist extends to M, because twists about meridians can be extended to twists along the disks they bound. Now suppose γ separates S, and suppose that R is the component of T ∖ (γ∪δ) adjacent to γ and δ. Any curve in R that bounds a pair of pants with γ and δ is also a meridian. Such curves are constructed as the boundary of a neighborhood of the union of γ, δ and any arc in R joining the two. Therefore, there is a pair α,β of such curves that fills R. As before, f=T_α∘ T_β^-1 is a pseudo-Anosov on R that extends to M. However, there was nothing special about δ in the above construction. So if there is some (non-peripheral) meridian δ' ⊂ T with δ≠δ', there is also a pseudo-Anosov f' on the corresponding surface R', such that f' extends to M. Since R and R' fill T, <cit.> says that for large i the composition f^i (f')^i is a pseudo-Anosov on T. See Figure <ref>. The only case left to consider is when δ is the only (non-peripheral) meridian in T. Since new meridians usually can be created by joining δ and γ with an arc and taking a regular neighborhood, the only possibility here is that T is a punctured torus, in which case this construction always just produces γ again. But then γ is a band sum of δ. When S is a torus, distinct curves have nonzero algebraic intersection number, so either there are no meridians or there is a single meridian (two distinct meridians would have zero algebraic intersection number, since each bounds in M). So, we assume S≠ T^2 below. We first claim that if there are two meridians in S, neither of which is a band sum of the other, then 𝒟(S,M) has infinite diameter in the curve complex. To see this, suppose γ_1,γ_2 are such meridians. Claim <ref> gives two pseudo-Anosov maps f_1,f_2, each defined on the component of S ∖γ_i that contains γ_j, where i≠ j. Since the component of S∖γ_1 containing γ_2 and the component of S∖γ_2 containing γ_1 together fill S, for large k the composition f_1^k f_2^k is a pseudo-Anosov map on the entire surface S, by <cit.>. Any such composition extends to M, so maps meridians to meridians. As pseudo-Anosovs act with unbounded orbits on the curve complex <cit.>, this implies that the set of meridians has infinite diameter in 𝒞(S).
Starting now with the proof of the proposition, suppose there are no nonseparating meridians in S. If γ,δ are distinct (separating) meridians, then an innermost disk surgery produces another separating meridian γ_2 disjoint from γ_1, see <cit.>. By thepreviousparagraph, 𝒟(S,M) hasinfinite diameter in the curve complex. So, the only other options are if𝒟(S,M) = ∅, or ifthe only meridian is a single separating curve.Suppose now that there is a non-separating meridian γ in S.By Claim <ref>, unless the disc set has infinite diameter in the curve complex, any meridiandisjoint from γ must bea band sum of γ. So,either we are in case (2) of the proposition, or there is some meridian δthat intersects γ. Anyinnermost disk surgery ofδ along γmust produce a band sum β of γ. However, this β must then bound a punctured torus T containing γ, and δis then forced to lie inside T,which gives a contradiction, see Figure <ref>. § WINDOWS FROM LIMIT SETS Let N=Γ\^3 be an orientable, geometrically finite hyperbolic 3-manifold, let Λ⊂∂^3 be the limit set of Γ, and letCC(N) := Γ\ CH(Λ) ⊂ Nbe the convex core of N. Equip ∂ CC(N)withits intrinsic length metric, which is hyperbolic, see for instance <cit.>.Let S_± be (possibly degenerate) incompressible subsurfaces with geodesic boundary in ∂ CC(N) that are either equal or are essentially disjoint, as in <ref>. LetS̃_±⊂∂ CH(Λ) ⊂^3be lifts of S_±, where if S_-=S_+, we require that S̃_-≠S̃_+. Let Γ_±⊂Γ be the stabilizers of S̃_±, let Λ_±⊂∂^3 be their limit sets and Δ=Γ_+∩Γ_-.The lift S̃_± is isometric to a convex subset of ^2. Let ∂_∞ S_±⊂∂^2 be the boundary of S̃_±. By <cit.>, say, the inclusion S̃_±↪^3 extends continuously to a Γ_±-equivariant quotient mapι_± : ∂_∞S̃_±⟶Λ_±⊂∂^3.Λ_- ∩Λ_+ =Λ_Δ. Next, suppose Δ is nonempty and is not a cyclic group acting parabolically on either S̃_- or S̃_+, and let C̃_±⊂S̃_± be the convex hulls of the subsets ι_±^-1(Λ_Δ) ⊂∂_∞S̃_±. Then C̃_± are Δ-invariant, the quotients C_± :=Δ\C̃_± are (possibly degenerate) subsurfaces with geodesic boundary in S_±, and there is an essential homotopy from C_- to C_+ in CC(N) that is the projection of a homotopy from C̃_- to C̃_+. Above, C_± are (possibly degenerate) subsurfaces with geodesic boundary in S_±, as defined in <ref>, but it follows from the above and Theorem <ref> that there are `resolutions' (see <ref>) C_±'⊂ S_± such that C_±' bound an interval bundle in CC(M). So informally, the theorem says that the intersection Λ_- ∩Λ_+ is exactly the limit set of the fundamental group of some essential interval bundle in (CC(M),S_-'∪ S_+'). The term `window' comes from Thurston <cit.> and refers to interval bundles; for example, one can `see through' a trivial interval bundle from one horizontal boundary component to the other.The assumption that Δ is not cyclic and acting parabolically on either S̃_± is just for convenience in the statement of the theorem. (Just to be clear, note that an element γ∈Δ can act parabolically as an isometry of ^3, but hyperbolically on the convex subsets S̃_±⊂^2.) If Δ is cyclic and acts parabolically on S̃_+ the subset C̃_+ in the statement of the theorem will be empty. However, using the same proof one can construct a homotopy from a simple closed curve on S_+ bounding a cusp of S_+ to some simple closed curve on S_-. As mentioned in the introduction, a version of Theorem <ref> was known to Thurston, see his discussion of the Only Windows Break Theorem in <cit.>. 
Precise statements for geometrically finite N without accidental parabolics were worked out in Lecuire's thesis <cit.> and by Walsh <cit.>; note that Walsh uses the conformal boundary instead of the convex core boundary, but the two points of view are equivalent. However, for our applications in this paper, we need to allow accidental parabolics in S̃_±, which are not allowed in those theorems. Also, our proof is more direct and natural[In both<cit.>, the authors focus on proving that the boundary components of C̃_± project to simple closed curves in S_±, but that isn't sufficient to say that C̃_± projects to a subsurface with geodesic boundary in S_±, which is what they then claim. E.g. in <cit.> it is stated that under a covering map, the boundary of a subset goes to the boundary of the image, but this isn't true.]than those in <cit.>, despite the extra complication coming from parabolics. Finally, the assumption that N is geometrically finite is not really essential for the theorem statement. With a bit more work dealing with degenerate ends, one can prove the theorem for all finitely generated Γ. Essentially, the point is to use Canary's covering theorem <cit.> to show that degenerate NP-ends in the covers N_± := Γ_±\^3 have neighborhoods that embed in N, and then to use this to prove that geodesic rays in ^3 that converge to points in Λ_- ∩Λ_+ cannot exit degenerate ends in N_±. After showing this, the proof of Claim <ref> extends to the general case. However, we don't have an application for that theorem in mind, so we'll spare the reader the details.§.§ Proof of Theorem <ref>. We first focus on proving that Λ_-∩Λ_+ = Λ_Δ. For each ξ∈∂^3, let Γ_±(ξ) ⊂Γ_± be the stabilizer of ξ. Let ξ∈∂^3 and suppose that Γ_-(ξ) and Γ_+(ξ) are both nontrivial. Then they are equal.By the Tameness Theorem <cit.>, we can identify CC(N) topologically with a subset of a 3-compact manifold with boundary M, whereCC(N) ⊃ int(M), CC(N) ∩∂ M= ∂ CC(N),and where ∂ CC(N) is a collection of essential subsurfaces of ∂N̅. Let ∂_χ=0 M be the union of all torus boundary components of M, and let (X,Σ) be the characteristic submanifold of the pair (M,S_-∪ S_+ ∪∂_χ=0 M), as in <ref>. Since Γ_± are both contained in a discrete group Γ, both Γ_±(ξ) are contained in the stabilizer Γ(ξ), which is either infinite cyclic, or rank 2parabolic.Suppose first that Γ(ξ) is rank 2 parabolic. The groups Γ_±(ξ) are both cyclic, since S_± are incompressible hyperbolic surfaces, so their fundamental groups do not contain ^2 subgroups. So, we can write Γ_±(ξ)=⟨γ_±⟩ for closed curves γ_± on S_±.Both γ_± are homotopic into some fixed component T ⊂∂_χ=0 M, the component whose fundamental group can be conjugated to stabilize ξ. So, there is a component (X_0,Σ_0) ⊂ (X,Σ) of the characteristic submanifold such that Σ_0 intersects T and both γ_± are homotopic on S_± into Σ_0. Since M ≇T^2 × [0,x], the component (X_0,Σ_0) is either an interval bundle over an annulus (so, a fibered solid torus), or an S^1-bundle pair, so by Fact <ref>, X_0 is either a fibered solid torus or a thickened torus. In either case, Σ_0 intersects each of S_± in a fibered annulus, and these annuli are disjoint, so they are parallel on a torus boundary component of X_0, implying that γ_± are homotopic in M, and hence Γ_±(ξ) are conjugate in Γ. But since Γ_±(ξ) have the same fixed point at infinity, the conjugating element must fix ξ, and therefore commute with the two groups, implying Γ_-(ξ)=Γ_+(ξ). Now assume Γ(ξ) is cyclic. 
Pick a basepoint p∈ S_-, say, and let γ_- ⊂ S_-be a loop based at p representing a generator of Γ_-(ξ). Represent a generator of Γ_+(ξ) as α·γ_+ ·α^-1, where α is an arc from p∈ S_- to a point in S_+, and γ_+ is a loop in S_+.Since Γ_± stabilize distinct components S̃_±, the arc α is not homotopic into S_-∪ S_+. So, α is a spanning arc of an essential map from an annulus, where the boundary components of the annulus map to powers of γ_±. It follows that the loops γ_± are homotopic on S_± into Σ_0 for some component (X_0,Σ_0) ⊂ (X,Σ). If X_0 is an I-bundle with horizontal boundary Σ_0, then as γ_± are not proper powers in π_1 S_±, they are both primitive in π_1 X_0, and hence γ_± (rather than their powers) are homotopic in X_0 ⊂ M. Similarly, if (X_0,Σ_0) is a fibered solid torus, Σ_0 is a collection of parallel annuli on ∂ X_0, so since γ_± are primitive in π_1 S_±, they are homotopic on S_± to simple closed curves in these annuli, and hence are homotopic to each other in X_0. It follows that there are generators for Γ_±(ξ) that are conjugate in Γ, but since these generators both fix ξ, they are equal. For all ξ∈Λ_-∩Λ_+, we have Γ_-(ξ)=Γ_+(ξ). Moreover, Λ_Δ = Λ_- ∩Λ_+. Let N_±⊂^3 be the 1-neighborhood of the convex hull of Λ_±, and for small ϵ>0, let T_±(ϵ) ⊂^3 be the set of all points that are translated less than ϵ by some parabolic element of Γ_±. If ϵ is at most the Margulis constant ϵ_0, then T_±(ϵ) is a disjoint union of horoballs in ^3. The sets N_± and T_±(ϵ) are Γ_± invariant. Since Γ_± is a finitely generated subgroup of Γ, which is geometrically finite, Γ_± is geometrically finite as well by <cit.>. So, the action of Γ_± on N_±∖ T_±(ϵ) is cocompact, see e.g. Theorem 3.7 in <cit.>, implying that either the functionD_+ : ^3 ⟶_>0, D_+(x) = min{d(x,γ(x))|γ∈Γ_+loxodromic} is bounded above on N_+ ∖ T_+(ϵ) by some B(ϵ)>0, or Γ_+ is elementary parabolic. A similar statement holds for - instead of +. With ϵ_0 the Margulis constant, the Margulis Lemma then implies that if ϵ>0 is sufficiently small with respect to B(ϵ_0), and Γ_+ is not elementary parabolic, thenT_-(ϵ) ∩ N_+⊂ T_+(ϵ_0), and similarly with -,+ exchanged. Indeed, if not then we have (say) a point p ∈^3 that is translated by less than ϵ by some parabolic γ_- ∈Γ_- and by at most B by some loxodromic γ_+ ∈Γ_+. If ϵ is small with respect to B, then both γ_- and [γ_+,γ_-] translates p by at most ϵ_0, so they generate an elementary discrete group by the Margulis lemma applied to Γ, implying that γ_+ fixes the fixed point of γ_-, which contradicts that they generate a discrete group. Fix ξ∈Λ_+ ∩Λ_-. We claim that Γ_-(ξ)=Γ_+(ξ). By Claim <ref> it suffices to show that whenever Γ_-(ξ) is nontrivial, say, so is Γ_+(ξ). First, assume that Γ_-(ξ) is elementary parabolic. We claim that Γ_+(ξ) is elementary parabolic as well. Assume not, and let α be a geodesic ray in ^3 converging to ξ. Then α(t) lies in T_-(ϵ) ∩ N_+ for large t, and therefore in T_+(ϵ_0) for large t by (<ref>), which implies ξ is a parabolic fixed point of Γ_+ as well, a contradiction. Next, suppose that Γ_-(ξ) is elementary loxodromic. If ξ is a parabolic fixed point of Γ_+, we are done, so let's assume this isn't the case. Let α be the axis of Γ_-(ξ), parametrized so α(t)→ξ as t →∞. Since ξ is not a Γ_+ parabolic fixed point, there are t_i→∞ such that α(t_i) ∉T_+(ϵ) for all i. Since the action of Γ_+ on N_+ ∖ T_+(ϵ) is cocompact, if p∈^3 is a fixed basepoint, there are elements γ_i^+ ∈Γ_+ such that sup_i d(γ_i^+(p),α(t_i)) < ∞. 
Since the action of Γ_-(ξ) on α is cocompact, there are then elements γ_i^-∈Γ_-(ξ) with sup_i d(γ_i^+(p),γ_i^-(p))< ∞. By discreteness of Γ, after passing to a subsequence we can assumeγ_i^+ =γ^-_i∘ g for some fixed g∈Γ. Hence, for all i we have γ_i^+ ∘ (γ_1^+)^-1 =γ_i^- ∘ (γ_1^-)^-1∈Γ_+ ∩ (Γ_-(ξ)) ⊂Γ_+(ξ), so we are done. Finally, we want to show that Λ_- ∩Λ_+ = Λ_Δ. The inclusion Λ_Δ⊂Λ_- ∩Λ_+ is clear. So, take ξ∈Λ_- ∩Λ_+. We can assume that Γ_±(ξ)=1, since otherwise we're in the cases handled above. Let α be a geodesic ray in ^3 converging to ξ. As in the previous case, since ξ is not a parabolic fixed point of Γ_+, there are t_i→∞ such that α(t_i) ∉T_+(ϵ_0) for all i.Discarding finitely many i, we have α(t_i) ∈ N_+, so it follows from (<ref>) that α(t_i) ∉T_-(ϵ). Fixing a base point p∈^3, asΓ_- acts cocompactly onN_- ∖ T_-(ϵ) and Γ_+ acts cocompactly on N_+ ∖ T_+(ϵ_0), there are elements γ_i^±∈Γ_± such thatsup_i d(γ_i^±(p),α(t_i))< ∞.So passing to a subsequence, γ_i^+ = γ_i^-∘ g for some fixed g∈Γ, and thenγ_i^+ ∘ (γ_1^+)^-1 =γ_i^- ∘ (γ_1^-)^-1∈Γ_+ ∩Γ_-=Δ for all i. But applying this sequence to p and letting i→∞ gives a sequence of points in the orbit Δ(p) that converge to ξ, so ξ∈Λ_Δ.Now assume that Δ≠ 1. We want to construct the interval bundle W mentioned in the statement of the theorem. After an isotopy on ∂ CC(N), let's assume that S_± is a subsurface of ∂ CC(N) with geodesic boundary. Consequently, we allow degenerate subsurfaces, where S_± is a simple closed geodesic, as well as subsurfaces where only the interior is embedded and two boundary components can coincide. As ∂ CC(N) may have cusps, we also must allow S_± to be noncompact with finite volume, rather than compact. Recall that S̃_± is isometric to a convex subset of ^2, and that if ∂_∞ S_±⊂∂^2 is the boundary of S̃_± the inclusion S̃_±↪^3 extends continuously to a Γ_±-equivariant quotient mapι_± : ∂_∞S̃_±⟶Λ_±⊂∂^3.Moreover, if ξ,ξ' ∈∂_∞S̃_+, say, we have ι_+(ξ)=ι_+(ξ') if and only if there is an element γ∈Γ_+ that acts hyperbolically on S̃_+ ∪∂_∞S̃_+ with fixed points ξ,ξ'∈∂_∞S̃_+, but acts parabolically on ^3. By discreteness of the action Γ_+ S̃_+, each ξ∈∂_∞S̃_+ has the same image under ι_+ as at most one other ξ'. Similar statements holds with - instead of +. All this is a consequence (for instance) of Bowditch's theory of the boundary of a relatively hyperbolic group <cit.>: since the action Γ_±^3 is geometrically finite, Λ_± is a model for the Bowditch boundary of the group Γ_± relatively to its maximal parabolic subgroups, so the statement above follows from Theorem 1.3 of <cit.>, say[See also Theorem 5.6 of <cit.>, which says that thereis a continuous equivariant extension ι_± of the inclusion S̃_±↪^3 as above. This theorem is stated in a much more general setting, though, and our statement is a trivial case.].Let ι_±^-1(Λ_Δ) ⊂∂_∞S̃_±.Since Δ≠ 1 and is not cyclic parabolic, ι_±^-1(Λ_Δ) has at least two points, so it has a well-defined convex hull C̃_±⊂ S_±.[Convex hulls] One of the following holds. * Δ is cyclic and acts hyperbolically on S̃_+. The convex hull C̃_+ is its geodesic axis, which is precisely invariant under Δ⊂Γ, so that the quotient C_+:=Δ\C̃_+ embeds as a simple closed geodesic in S_+. * C̃_+ is a subsurface of S̃_+ with geodesic boundary, the interior int(C̃_+) is precisely invariant under Δ⊂Γ, and the quotient C_+ := Δ\C̃_+ is a generalized subsurface of S_+ with compact geodesic boundary. A similar statement holds with - instead of +. Let's work with +, for concreteness. 
If g ∈Δ, then g(Λ_Δ)=Λ_Δ, so g leaves ι_+^-1(Λ_Δ) invariant by equivariance of ι_+. Hence g leaves C̃_+ invariant. Let's suppose first that C̃_+ has nonempty interior, since that is the more interesting case. We'll address the case that C̃_+ is a biinfinite geodesic at the end of the proof. Let g∈Γ_+∖Δ.We want to showg(int(C̃_+)) ∩ int(C̃_+)=∅.Assume this is not the case. By Claim <ref>, the fixed points of g in ∂_∞ S_+ lie outside ι_+^-1(Λ(Δ)). So, we cannot have g(C̃_+)⊂C̃_+, as then we'd have g^n(C̃_+)⊂C̃_+ for all n, contradicting that points of S̃_+ converge to the fixed points of g under iteration. Considering backwards iterates, we also cannot have C̃_+ ⊂ g(C̃_+). Therefore, ∂C̃_+ and ∂ g(C̃_+) intersect transversely. Since C̃_+ has nonempty interior, Δ is nonelementary, and therefore the fixed points of loxodromic isometries of Δ are dense in Λ_Δ. Loxodromic fixed points of Δ are in particular not parabolic fixed points in Γ_±, so any biinfinite geodesic in C̃_+ is a limit of biinfinite geodesics in C̃_+ whose endpoints are not fixed points of parabolic isometries of Γ_±. By the previous paragraph, there are thenbiinfinite geodesics α_+,β_+ in C̃_+ such that g(α_+) and β_+ intersect transversely, and where the endpoints of α_+,β_+ project under ι_+ to points ξ_α,ξ_α',ξ_β,ξ_β'∈Λ_Δ that are not parabolic fixed points in Γ_±. Let α_- be the geodesic in S̃_- whose endpoints in ∂_∞S̃_- map to the points ξ_α,ξ_α' underι_-.Define β_- similarly. Then α:= α_+ ∪{ξ_α,ξ_α'}∪α_-, β:= β_+ ∪{ξ_β,ξ_β'}∪β_- are two simple closed curves on the closure cl(∂ CH(Λ_Γ))⊂^3 ∪∂^3, which is homeomorphic to a sphere. For instance, the arcs α_± are disjoint and ξ_α≠ξ_α', since the endpoints of α_+ are not parabolic fixed points. Now consider how the two simple closed curves g(α),β intersect. The arcs β_- and g(α_+) are disjoint since S̃_-≠S̃_+. The arcs g(α_-),β_- are disjoint since g∉Γ_- and hence g(α_-) lies on a different translate of S̃_- than β_- ⊂S̃_-. Moreover, since g(α_+),β_+ intersect transversely in S̃_+, the endpoints of g(α_+) and β_+ are distinct in ∂_∞S̃_+, and since none of them are parabolic fixed points, the points g(ξ_α),g(ξ_α'),ξ_β,ξ_β' are all distinct. But by assumption, g(α_+) intersects β_+ transversely in a single point! This shows that g(α) and β intersect exactly once, transversely, which is a contradiction. By precise invariance of the action on the interior, the quotient int(C_+)=Δ / int(C̃_+) embeds in the finite volume surface S_+, so C_+ has finite volume itself.So if ∂ C_+ is non-compact, it must have two noncompact boundary components that are asymptotic. Lifting, we get two boundary components β_1,β_2 of C̃_+ that are asymptotic. Since C̃_+ is convex, it is contained in the subset of ^2 bounded by β_1,β_2, and hence the common endpoint of β_1,β_2 is an isolated point of Λ_Δ, which is a contradiction since Δ is not elementary. The case when C̃_+ is a biinfinite geodesic is similar. Here, Δ must be cyclic, acting on S̃_+ with axis C̃_+, and acting either parabolically or loxodromically on ^3. In the parabolic case, C̃_+ compactifies to a simple closed curve on the sphere cl(CH(Λ_Γ))⊂^3 ∪∂^3, so no translate g(C̃_+), g∈Γ_+, can intersect C̃_+ transversely, since if it did we'd get two simple closed curves on the sphere that intersect once. In the loxodromic case, we get a similar contradiction by looking at the simple closed curve cl(C̃_+ ∪C̃_-) ⊂ cl(∂M̃) and its g-image. So, C̃_+ is precisely invariant under Δ⊂Γ. 
The quotient C_+ := Δ\C̃_+ is obviously compact, and is therefore a simple closed geodesic in S_+. We claim that C_- and C_+ are homeomorphic. If C_± are isotopic in ∂ CC(M) this is clear, and otherwise we argue as follows. The subgroups π_1 C_± are both represented by Δ, so are conjugate in π_1 M. The fact that every curve in C_- is homotopic to a curve in C_+ (and vice versa) implies that C_± are isotopic to subsurfaces C_±' ⊂Σ in the boundary Σ of a component (X,Σ) of the characteristic submanifold[Really, we need to be using resolutions of our subsurfaces with geodesic boundary, as discussed in <ref>.] of (CC(M),S_-∪ S_+), see <ref>, and that even within X every closed curve in C_-' is homotopic to a closed curve in C_+', and vice versa. When X is a solid torus or thickened torus, C_±' are annuli, while if X is an interval bundle, C_±' bound a vertical interval bundle in X, and are homeomorphic.So, let f : C_- ⟶ C_+ be a homeomorphism, lift f to a Δ-equivariant homeomorphism f̃ : C̃_-⟶C̃_+ and let F : C̃_-× [0,1] ⟶ CH(Λ)where F(x,·) parametrizes the geodesic from x to f(x). Then F is Δ-equivariant, and projects to an essential homotopy from C_- to C_+, as desired.§.§ An annulus theorem for laminations Suppose M is a compact, orientable, hyperbolizable 3-manifold with nonemptyboundary and let S = ∂_χ<0 M be the union of all non-torus boundary components of M. When α,β⊂ S are disjoint simple closed curves that are essential and homotopic in M, but not homotopic in S, the Annulus Theorem says that there is an essential embedded annulus A⊂ M with ∂ A =α∪β, see Scott <cit.>. More generally, equip S with an arbitrary hyperbolic metric. An essential homotopy between two geodesic laminations λ_± on S is a mapH : (λ× [-1,1],λ×{-1,1}) ⟶ (M,S)where λ is a lamination, such that H maps λ×{± 1} homeomorphically onto λ_±, and where H is not homotopic rel λ×{-1,1} into ∂ M.Here is an `Annulus Theorem' for minimal laminations. Let λ_-,λ_+ be two minimal geodesic laminations on S that are either disjoint or equal, and assume that S (λ_±) are incompressible in M. If λ_± are essentially homotopic in (M,S), there is an essential interval bundle (B,∂_H B) ⊂ (M,S) such that λ_± fill ∂_H B, and where λ_± are essentially homotopic through B, as in <ref>.Here, S (λ_±) are the subsurfaces with geodesic boundary filled by λ_±, as in <ref>. The assumption that they are incompressiblegeneralizes theassumption that α,β are homotopically essential in M in the Annulus Theorem.Identify M∖∂_χ=0 M with the convex core of a geometrically finite hyperbolic 3-manifold. Set S_± := S(λ_±). Lift the essential homotopy from λ_- to λ_+ to a homotopy from lifts λ̃_-⊂S̃_- to λ̃_+⊂S̃_+ in ^3. Under the homotopy, which has bounded tracks, corresponding leaves of λ̃_± have the same endpoints in ∂^3. The endpoints of λ̃_± are dense in ∂_∞S̃_±, so this means that the subsurfaces C_±⊂ S_± constructed in Theorem <ref> are just C_±=S_±. Passing to disjoint or equal resolutions S_±' of S_± and applying Theorem <ref> gives an interval bundle B where λ_± fill ∂_H B=S_-'∪ S_+'. We claim that λ_± are essentially homotopic through B. By Fact <ref>, it suffices to show that if σ is the canonical involution of B, as described in<ref>, then σ(λ_±) is isotopic to λ_∓ on S_∓'. Using the notation of Theorem <ref>, σ lifts to a Δ-equivariant involution σ̃ of B̃ that exchanges S̃_-' and S̃_+', where here Δ = Γ_-∩Γ_+. 
By equivariance, σ̃ extends continuously to the identity on Λ_Δ, so σ̃(λ̃_-) is a lamination on S̃_+ with all the same endpoints at infinity as λ̃_+, and hence equals λ̃_+. The claim follows.§ LAMINATIONS ON THE BOUNDARYSuppose that M is a compact, orientable 3-manifold with hyperbolizable interior and nonempty boundary ∂ M.Equip M with an arbitrary Riemannian metricand lift it toa Riemannian metric on the universal cover M̃.As in the introduction, a biinfinite path or ray h on ∂M̃ is called homoclinic if there are points s^i,t^iwith |s^i-t^i|→∞ such thatsup_i d_M̃ (h(s^i),h(t^i)) < ∞.Two rays h_+,h_- on ∂M̃ are called mutually homoclinic if there areparameters s_±^i →∞such thatsup_i d_M̃ (h_+(s_+^i),h_-(s_-^i)) < ∞. Here, a rayis a continuous mapfrom an interval [a,∞), and a biinfinite path isa continuous map from .We will also call rays and paths on ∂ M (mutually) homoclinic if they have lifts that are (mutually) homoclinic paths on ∂M̃. We refer the reader to <ref> for some comments on alternate definitions of homoclinic that exist in the literature.Note that if we divide a biinfinite homoclinic pathinto two rays, theneither one of the two rays isitself homoclinic, orthe two rays are mutually homoclinic.Also,these definitions are metric independent: since M is compact, any two Riemannian metrics on M lift to quasi-isometric metrics on M̃, and a path is homoclinic or mutually homoclinic with respect to one metric if and only if it is with respect to the other metric. Here are some examples. * Suppose that D is aproperly embedded disc in M, and h : ⟶∂ M is a path that covers ∂ D⊂∂ M. Then h is homoclinic: indeed, D lifts homeomorphically to M̃, so h lifts to a path in M̃ with compact image.* Suppose that ϕ : (S^1 × [0,1], S^1 ×{0,1}) ⟶ (M,∂ M)is an essential embedded annulus. Then rays covering the two boundary components of the annulus are mutually homoclinic: indeed, ϕ lifts to ϕ̃: × [0,1] ⟶M̃ ,and we have sup_t∈ d(ϕ̃(t,0), ϕ̃(t,1)) < ∞, so restricting to t∈ [0,∞) we get two mutually homoclinic rays in M̃. It will be convenientbelow to work with aparticular choice of metric on M.[An explicit metric on M] Let ∂_χ<0 Mbe the union of allcomponents of ∂ Mthat have negative Euler characteristic, i.e. are not tori. Thurston's Haken hyperbolization theorem, see <cit.>, implies thatthere is a hyperbolic 3-manifold N = ^3 / Γ homeomorphic to the interior of M, where every component of ∂_χ<0 Mcorresponds to a convex cocompact end of N. A torus T ⊂∂ M, on the other hand, determines a cusp of N. So, in other words, N is `minimally parabolic': the only parabolics come from torus boundary components of M. For each T, pick an open neighborhood N_T ⊂ N of the associated cusp that is the quotient of a horoball in ^3 by a ^2-action. Then M ≅ CC(N)∖⋃_toriT⊂∂ M N_T,and we willidentify Mwith the right-hand side everywhere below. Then * M̃⊂^3 is obtained from the convex hull CH(Γ) ⊂^3 of the limit set of Γ by deleting an equivariant collection of horoballs, and* the path metric induced on ∂_χ<0 M is hyperbolic <cit.>,and thepath metric induced on every torus T ⊂∂ Mis Euclidean. We now specialize to the case of paths that are geodesics on∂ M.Recallfrom Example (1) above that one can make homoclinic paths by running around the boundaries of disks in ∂ M. The following shows that discs are essential in such constructions.Suppose that S ⊂∂ M is an essential subsurface. Then the inclusion of any lift S̃⊂∂M̃ is a quasi-isometric embedding into M̃. 
Moreover if S is incompressible then any pair of mutually homoclinic infinite rays on S are asymptotic and no biinfinite geodesic γ in S is homoclinic. Think of M as embedded in a complete hyperbolic 3-manifold N as in (<ref>), write N=Γ\^3, and let M̃⊂^3 be the preimage of M, so that M̃ is obtained from the convex hull CH(Γ) by deleting an equivariant collection of horoballs. Fix a subgroup Δ < Γ that represents the conjugacy class associated to the image of the fundamental group of S ⊂ M. To show that the inclusion S̃↪M̃ is a quasi-isometric embedding, it suffices to show that Δ is undistorted in Γ. But since Γ is geometrically finite and Δ is finitely generated, it follows from a result of Thurston (see Proposition 7.1 in <cit.>) that the group Δ is geometrically finite, and geometrically finite subgroups of (say, geometrically finite) hyperbolic 3-manifold groups are undistorted, cf. Corollary 1.6 in <cit.>. For the `moreover' statement, assume S is incompressible, so that S̃ is simply connected, and consider a pair of infinite rays h^± : [0,∞) →S̃ that are geodesic for the induced hyperbolic metric and t^±_n→+∞ such that d_M̃(h^+(t_n^+),h^-(t_n^-)) is bounded. Since the inclusion S̃⊂M̃ is a quasi-isometric embedding, d_S̃(h^+(t_n^+),h^-(t_n^-)) is also bounded. Since S̃ is simply connected and hyperbolic, this is possible only if h^+ and h^- are asymptotic on S̃. Taking h^+=h^- we get that a geodesic ray on S̃ cannot be homoclinic. Taking h^+≠ h^-, we get that any pair of mutually homoclinic infinite rays on S are asymptotic. In particular, two disjoint geodesic rays contained in a homoclinic biinfinite geodesic would have to be asymptotic. This is impossible for a geodesic in a simply connected hyperbolic surface. Example (2) above shows how embedded annuli in M can be used to create mutually homoclinic rays. In analogy to Fact <ref>, one can show that annuli are essential in such a construction. For instance, suppose M is acylindrical. Then work of Thurston, see <cit.> and more generally <cit.>, says that we can choose the hyperbolic manifold N so that ∂ CC(N) ≅∂_χ<0 M is totally geodesic. Hence, the preimage of ∂_χ<0 M in M̃⊂^3 is a collection of hyperbolic planes. Any geodesic ray on ∂_χ<0 M then lifts to a geodesic in ^3, and two geodesic rays on ∂_χ<0 M are mutually homoclinic if and only if their geodesic lifts are asymptotic in ^3, which implies that they were asymptotic on ∂_χ<0 M. §.§ Alternate definitions of homoclinic Above, we defined a path h : I ⟶∂M̃ to be homoclinic if there are s^i,t^i ∈ I with |s^i-t^i|→∞ such that sup_i d_M̃ (h(s^i),h(t^i)) < ∞. Some other papers use slight variants of this definition. For example, the definition of (faiblement) homoclinique in Otal's thesis <cit.> is almost the same as what is written above, except that distances are computed in the intrinsic metric on ∂M̃ instead of in M̃. This is equivalent to our definition, though: the nonobvious direction follows from Fact <ref>, which says that boundary components of M̃ quasi-isometrically embed in M̃. And in the definition of homoclinique in Lecuire's earlier work <cit.>, distances are computed not in M̃, but within ^3, with respect to a given identification of M̃ with the convex core of some minimally parabolic hyperbolic 3-manifold, as discussed in Example <ref>. When M has tori in its boundary, the inclusion M̃↪^3 is not a quasi-isometric embedding, but the following lemma says that d_^3 is bounded if and only if d_M̃ is bounded, so Lecuire's earlier definition is equivalent to ours.
Whenever x,y∈M̃, we haved_^3(x,y) ≤ d_M̃(x,y)≤ e^d_^3(x,y)/2 d_^3(x,y).Set N := Γ\^3, so that M̃ is obtained from the convex hull CH := CH(Λ(Γ)) of the limit set of Γ by deleting horoball neighborhoods around all rank two cusps. Take a ^3-geodesic γ from x to y. Then γ lies inside CH, and it can only penetrate the deleted horoball neighborhoods to a depth of d(x,y)/2. Now, whenever B⊃ B' are horoballs in ^3 such that d_^3(∂ B, B') ≤ d(x,y)/2, the closest point projectionπ : B ∖ B' ⟶∂ Bis well defined and e^d(x,y)/2-lipschitz. (Indeed, it suffices to take B as the height 1 horoball in the upper half space model and B' as the height e^d(x,y)/2 horoball, and then the claim is obvious.) So, the parts of γ above that penetrate the deleted horoballs can be projected back into ∂M̃, and if we do this the resulting path has length at most e^d(x,y)/2d(x,y). We should mention the version of homoclinic defined in Casson's original unpublished notes. There, M is a handlebody, and if we regard ∂M̃↪^3 as above, then a simple geodesic h : I ⟶∂M̃ is called homoclinic if when we subdivide h into two rays h_±, these rays limit onto subsets A_±⊂^3 ∩∂^3 such that A_+∩ A_-≠∅. This definition is stronger than all the ones mentioned above: if A_+∩A_- contains a point on ∂M̃, rather than at infinity, then the definition of homoclinic above is obviously satisfied. Otherwise, h_± have to have a common accumulation point in ∂^3, which corresponds to an end ξ of M̃, and one can use the treelike structure of the universal cover M̃ of the handlebody M to say that h_± have to both intersect a sequence of meridians (m_i) on M̃ that cut off smaller and smaller neighborhoods of ξ. The times t^i_pm when h_± intersects m_i then work in the definition of homoclinic above. In fact, Casson's definition is strictly stronger. For instance, if h_± both spiral around disjoint simple closed curves γ_±⊂∂M̃, then h is homoclinic by our definition but not by Casson's. However, Statement <ref> still fails using Casson's original definition, due to the examples in Figure <ref>.§.§ Waves, Tight position, and Intrinsic limits As in the previous section, let Mbe a compact, orientablehyperbolizable 3-manifold withnonempty boundary∂ M,which wethink of as the convex coreof a hyperbolic 3-manifold with horoball neighborhoods of its rank 2 cusps deleted.Suppose that m is a meridian multicurve on ∂ M, and let γ⊂∂ M be a simple geodesic ray or a simple biinfinite geodesic.An m-waveis a segment β⊂γ that has endpoints on m, and is homotopic rel endpoints in M to an arc of m.If γ has no m-waves, andevery infinite length segment of γ intersects m, then we say that γ is in tight position with respect to m.Waves and tight position were discussed previously in <cit.>, for instance. Note that in our definition, an m-wave βcan intersect m in its interior.More generally, an m-wave of a lamination is an m-wave of one of its leaves, anda lamination is in tight positionwith respect to m if all of its leaves are.Note that from this perspective, ifa geodesic γ is in tight position with respect to some multicurve m (regarded as a lamination), then it is in tight position with respect to some component of m. 
As an example, a meridian γ can never be in tight position with respect toanother meridian m: taking discs with boundaries γ and m that are transverse and intersect minimally, any arc of intersection of these disks terminates in a pair of intersection points of γ and mthat bound a m-wave ofγ.More generally,we have the following fact.[Tight position^3 quasi-geodesic] Letγbe a simple geodesic ray or biinfinite geodesic on ∂ M. If γis in tight position with respect to some meridian m thenany lift γ̃⊂∂M̃ of γ is a quasigeodesic in ^3. In particular, γ is an M̃-quasigeodesic, and is not homoclinic.Intersecting with m breaks γ into a union offinite arcs. By simplicity of γ,these arcs fall into only finitely many homotopy classes rel m, andthere is a universal upper bound L=L(γ,m) on their lengths.Let D beadisc with boundary m and let D̃be the entire preimage of D in M̃. Tightness means that the path γ̃intersects infinitely many components of D̃, and intersects no single component more than once. In the notation of Example <ref>, we have that M̃⊂^3 is obtained from CH(Γ) by deleting an equivariant collection of horoballs. Eachcomponent of D̃separates CH(Γ), so if γ'is a segment of γ,any geodesic in ^3 joining the endpoints ofγ' mustintersect each of the discs that γ' intersects.Hence, if ϵ>0is the minimum distance between any two components of D̃,then γ̃ is a (L/ϵ,L)-quasigeodesic.We now describe how to createsystems of meridians with respect to which a given lamination is in tight position. Suppose that λ is ageodesic lamination on ∂ M and m=⊔_i=1^n m_iis ageodesic meridian multi-curve on ∂ M, and βis an m-wave in λ whose interior is disjoint from m. Then the pair of points ∂β separates some component m_i of m into two arcs m_i^1,m_i^2, both of which are homotopic to β rel endpoints in M.We perform aλ-surgery on mby replacing m_i^1 (say) with β, thus constructing a new multicurve m':= (β∪ m_i^2) ⊔_j≠ i m_j. This notion of surgery appears in many other references, e.g. <cit.>. We summarize its elementary properties here: Suppose that λis a geodesic lamination. * If m is a meridian and λ has an m-wave, it also has an m-wave whose interior is disjoint from m,so a λ-surgery can be performed.* Any curve m' obtained by λ-surgery on a meridian m as above is a meridian.* If m is a cutsystem for M, i.e. a multi-curve of meridiansbounding discs that cut Minto balls and 3-manifolds with incompressible boundary, then some λ-surgery on a component of m is another cut system. For (1),suppose that λ has an m-wave β. Let m̃ be the entire preimage of min the cover ∂M̃, and lift βto an arc β̃starting and ending on some fixed component m_0 ⊂m̃.Since each component of m̃ separates ∂M̃,there is some “outermost” subarc β̃' that has endpoints on the same component of m̃, and that has interior disjoint from m̃. This β̃'projects to an m-wave of λwhose interior is disjoint from m.For (2), note that if m':=β∪ m_2is obtained by λ-surgery on m, as above, then m,m' are homotopic in M, and hence m' is nullhomotopic. Also, if m' is inessential in ∂ M, then β is homotopic on ∂ M to m_2, implying that λ and m were not in minimal position, a contradiction since they are both geodesic. Hence m' is a meridian.For (3), consider an m-wave in λ whose interior is disjoint from m and say that ∂β⊂ m_1. Then ∂β separates m_1 into two arcs m_1^1,m_1^2. 
It is not difficult to see that either (β∪ m_1^1) ⊔_j≠ 1 m_j or (β∪ m_1^2) ⊔_j≠ 1 m_j is a cut system.Thefollowing lemma is amodification ofa result of Kleineidam–Souto <cit.>that is essentialfor everything below.Suppose λis a geodesic lamination onS=∂ Mand mis a meridian multi-curve. Then either* there exists a finite sequence of λ-surgeries on m that terminates in some meridian multi-curve m' where λ has no m'-waves,* S(λ) contains a sequence of meridians (γ_i) such that i(λ,γ_i) → 0,with respect to every transverse measure on λ.Here, (2) makes sense even when λadmits no transverse measure of full support. Note that if λ is a minimal lamination and ∂ S(λ) is incompressible, then (2)implies thatλ is anintrinsic limit of meridians. The two cases depend on whether λ contains infinitely manyhomotopyclasses of m-waves, or not. Here, our homotopies are througharcs on S,keeping their endpoints on m.If there are only finitely many classes of m-waves in λ, then a finite sequence ofλ-surgeries converts m into amulti-curve m' such thatλ has no m'-waves,as each surgery decreases the number of waves by at least one.If there are infinitely many homotopy classesof m-waves in λ, then we can choose a sequence of parameterized m-waves α_i : [0,1] ⟶ such that * the two sequencesof endpoints (α_i(0)) and (α_i(1))both converge, and if either sequence converges into a simple closed curve γ⊂λ, then it approaches γfrom only one side,* no α_i and α_j are homotopic keeping their endpoints on m, for i≠ j. To construct the desired sequence of meridians, let β^0_i be the shortest geodesic on S from α_i(0) to α_i+1(0),and define β^1_i similarly.For large i, the union β^0_i ∪α_i ∪β^1_i ∪α_i+1is an essential closed curve in S(λ)that is nullhomotopic in M.It may not be simple, since β^0_i and β^1_imay overlap, but it has at most one self intersection. So by the Loop Theorem <cit.>, one of the three simple closed curves obtained by surgery on it is a meridian γ_i. Now,the fact that the endpoints can approach a simple closed curve in λ only from one sideimplies that for large i, the curves γ_ido not intersect any simple closed curve contained in λ.Since γ_ionly intersects λ along the arcs β^0_i and β^1_i, whosehyperbolic lengths converge to zero,it follows that i(γ_i,λ) → 0for any transverse measure on λ.Here is an important application of Lemma <ref>. Suppose λ⊂∂_χ<0 M is a minimal geodesic lamination and that ∂ S(λ) is incompressible in M. Leth ⊂ S(λ)be asimple geodesic ray or biinfinite geodesic that is disjoint from λ orcontained in λ. Then either* any lift h̃⊂∂M̃ of h is a quasi-geodesic in M̃.* S(λ) contains a sequence of meridians (γ_i) such that i(λ,γ_i) → 0,with respect to every transverse measure on λ. In particular, if h is homoclinic, then λsatisfies (2).Assume that (2) does not hold. Given a cut system m for M, Lemma <ref> and Fact <ref> (3) say that we can perform λ-surgeries until we obtain a new cut system msuch that λ∪ hhas no m-waves. If m intersects λ, then λ∪ his in tight position with respect to m, so (1) follows from Fact <ref>. Therefore, we can assume m does not intersect λ.Up to isotopy,we can also assume that S(λ)does not intersect m. Since ∂ S(λ) is assumed to be a collection of incompressible curves, it follows that S(λ) is itself incompressible, so (1) follows from Fact <ref>. We now come to the central definition of the section. 
A minimal geodesic lamination λ⊂∂_χ < 0 M is an intrinsic limit of meridians if there is a transverse measure[It is currently unknown whether the particular transverse measure matters: we might suspect that a measured lamination is a projective limit of meridians if and only if the same is true for any other measured lamination with the same support, but there could also very well be a counterexample.] on λ and a sequence of meridians (γ_i) contained in S(λ) such that γ_i →λ in the space of projective measured laminations on S(λ).

Using Lemma <ref>, we can prove the following proposition, which gives several equivalent characterizations of intrinsic limits. Suppose λ⊂ S=∂ M is a minimal geodesic lamination and ∂ S(λ) is incompressible. The following are equivalent: * λ is an intrinsic limit of meridians, * given (some/any) transverse measure on λ, there is a sequence of meridians (γ_i) in S(λ) such that i(γ_i,λ)→ 0, * there is a biinfinite homoclinic geodesic in S(λ) that is either a leaf of λ, or is disjoint from λ, * given any transverse measure on λ, there is a sequence of essential (possibly non-simple) closed curves (γ_i) in S(λ) such that each γ_i is nullhomotopic in M, and i(γ_i,λ)→ 0.

Note that when we say ∂ S(λ) is incompressible, we mean that no closed curve that is a boundary component of S(λ) is nullhomotopic in M. This condition is mainly here to make statements and proofs easier. For instance, without this assumption our proof of (4) ⟹ (2) may produce peripheral meridians, but peripheral meridians can't be used in (2) ⟹ (1).

(2) ⟹ (1). Fix some transverse measure on λ. By (2), i(γ_i,λ)/ℓ(γ_i) → 0, so after passing to a subsequence we can assume that (γ_i) converges projectively to a measured lamination μ in S(λ) that does not intersect λ transversely. As λ fills S(λ), μ is supported on λ.

(1) ⟹ (3). After passing to a subsequence, we can assume that (γ_i) converges in the Hausdorff topology to some lamination, which must then be an extension of λ by finitely many leaves. (3) follows from an unpublished criterion of Casson, see Lecuire <cit.> for a proof, that states that any Hausdorff limit of meridians has a homoclinic leaf.

(3) ⟹ (2). This is an immediate corollary of Lemma <ref>.

(4) ⟺ (2). The implication (2) ⟹ (4) is immediate, so suppose (γ_i) is a sequence of essential closed curves in S(λ) that are nullhomotopic in M and i(λ,γ_i)→ 0. By Stallings' version of the Loop Theorem, for each i there is a meridian γ_i' that is obtained from γ_i by surgery at the self intersection points. Such surgeries can only decrease the intersection number with λ, so (2) follows.

We will also need the following criterion in the next section. Suppose λ⊂∂_χ<0 M is a minimal lamination such that S(λ) is compressible but ∂ S(λ) is incompressible, and that there is a sequence (A_i) of essential embedded annuli in (M, S(λ)) with i(∂ A_i,λ)→ 0. Then λ is an intrinsic limit of meridians.

Pick a meridian m⊂ S(λ). For each i, let T_i : M ⟶ M be the Dehn twist along the annulus A_i. Then for any sequence n_i ∈ℤ, the curves T_i^n_i(m) are meridians, and if n_i grows sufficiently fast, then i(T_i^n_i(m),λ)/ℓ(T_i^n_i(m)) → 0. Hence, after passing to a subsequence, T_i^n_i(m) converges projectively to a lamination λ' supported in S(λ) with zero intersection number with λ, implying λ' and λ have the same support, so λ is an intrinsic limit of meridians.

§ LIMITS OF HOMOCLINIC RAYS

In this section we characterize the laminations onto which pairs of disjoint mutually homoclinic rays can accumulate. Let M be a compact orientable hyperbolizable 3-manifold and equip ∂_χ<0 M with an arbitrary hyperbolic metric.
Let h_± be two disjoint, mutually homoclinic simple geodesic rays on ∂_χ<0 M that accumulate onto (possibly equal) minimal laminations λ_±, and suppose that the multicurve ∂ S(λ_±) is incompressible in M. Then one of the following holds: * one of λ_+ or λ_- is an intrinsic limit of meridians, * h_+ and h_- are asymptotic on ∂_χ<0 M, and either * any two mutually homoclinic lifts h̃_± to ∂M̃ are asymptotic on ∂M̃, or * λ:=λ_-=λ_+ is a simple closed curve that is homotopic in M to a nontrivial power γ^n, n>1, of some closed curve γ in M, * h_± are not asymptotic on ∂_χ<0 M, and there is an essential (possibly nontrivial) interval bundle B ⊂ M such that λ_± each fill a component of ∂_H B, and λ_± are essentially homotopic through B, as in <ref>.

The proof of Theorem <ref> is given in <ref>. One can construct examples of mutually homoclinic rays in each of cases (1)–(3). For concreteness, suppose that M is a handlebody. For (1), pick two meridians λ_-,λ_+ on M and let h_± spiral onto λ_±. One can also produce similar examples by letting λ_± be arbitrary laminations in disjoint subsurfaces S(λ_±) that are spheres with at least 4 boundary components, all of which are compressible in M, and letting h_± accumulate onto λ_±. For (2) (a), take λ to be any simple closed curve on ∂ M that is essential in M but has no nontrivial roots in π_1 M, and let h_± spiral around λ in the same direction. We'll discuss (2) (b) in Remark <ref>. For (3), write M = S × [-1,1], where S is a surface with boundary, let λ be a lamination on S, and let h_± be rays in corresponding leaves of λ_± := λ×{± 1}. Then M̃≅S̃× [-1,1], so there are lifts of h_± that are mutually homoclinic. One can also construct similar examples of (3) where the interval bundle B is twisted.

In case (1), we expect it is possible that S(λ_-) is incompressible, say, while λ_+ is an intrinsic limit of meridians. For instance, suppose C is a compression body with connected, genus-at-least-two interior boundary ∂_- C and exterior boundary ∂_+C. Let f : C ⟶ C be a homeomorphism such that f|_∂_+ C and f|_∂_- C are both pseudo-Anosov, with attracting laminations λ_+,λ_-, respectively. We expect that there are rays ℓ_±⊂λ_± that are mutually homoclinic. But λ_+ is an intrinsic limit of meridians, while S(λ_-) is incompressible.

The assumption that ∂ S(λ_±) is incompressible is necessary in Theorem <ref>. For instance, suppose M is a compression body whose exterior boundary is a genus 3 surface S, where the only meridian on S is a single separating curve γ. Let T be the component of S ∖γ that is a punctured genus 2 surface. Then there are distinct minimal geodesic laminations λ,λ' ⊂ T, each of which fills T, that are properly homotopic in M: just take distinct laminations that are identified when we cap off the puncture of T to get a closed genus 2 surface. Corresponding ends of corresponding leaves of λ,λ' are mutually homoclinic rays that accumulate onto λ,λ', respectively, but none of (2)–(3) hold. One could write down a version of Theorem <ref> that omits the assumption that ∂ S(λ_±) is incompressible, but the conclusion would be relative to capping off S(λ_±), and the statement would be more complicated.

The reader may be wondering about (2) (b) in Theorem <ref>, and how it differs from (2) (a). A relevant example of two asymptotic rays that have mutually homoclinic, nonasymptotic lifts to ∂M̃ is given in Figure <ref>. On the left we have a solid torus that is a boundary-connect-summand of M, which (say) is a handlebody.
The biinfinite geodesic h is homoclinic and its two ends are mutually homoclinic rays that both spiral onto a simple closed curve λ, the (2,1)-curve on the solid torus. Then λ is homotopic to the square of the core curve of the solid torus. Although the two ends of h are asymptotic on ∂ M, any lift h̃ in ∂M̃ will have ends that are mutually homoclinic, but nonasymptotic. On the right, we have drawn the preimage λ̃ of λ, and two lifts h̃_1,h̃_2 of h. Note that one end of h̃_1 is asymptotic to an end of h̃_2. When M is a compression body, Casson–Gordon prove in <cit.> that any simple closed curve λ⊂∂ M that has a nontrivial root in π_1 M lies on the boundary of a solid torus that is a boundary connect summand of M, exactly as in Figure <ref>. When M has incompressible boundary, such λ come from components of the characteristic submanifold of M, see <ref>, that are either solid tori or twisted interval bundles over nonorientable surfaces.

Here is a slightly more refined version of Theorem <ref> that applies to homoclinic biinfinite geodesics on ∂_χ < 0 M. Suppose that M is as in Theorem <ref>, that h is a homoclinic biinfinite simple geodesic on some component S ⊂∂_χ < 0 M, that h_± are the two ends of h, that h_± limit onto λ_±, and that ∂ S(λ_±) is incompressible in M. Then one of (1)–(3) in Theorem <ref> holds. Moreover, in case (2), if λ:=λ_± is not an intrinsic limit of meridians then either (i) after reparametrizing h, there is some s such that h(-s) and h(s) are joined by a geodesic segment c with h ∩ int(c) =∅, such that c, h|_(-∞,-s] and h|_[s,∞) bound an embedded geodesic triangle Δ⊂ S with one ideal vertex, and c ∪ h([-s,s]) is a meridian in M, or (ii) λ is a simple closed curve on S, the two ends of h spiral around λ in the same direction, and any neighborhood of the union h ∪λ⊂ S contains a meridian. And in case (3), we can choose the interval bundle B such that h contains a subarc α⊂ h that is a compression arc for B.

Let h̃ be a homoclinic lift of h on ∂M̃. By Lemma <ref>, either one of λ_± is an intrinsic limit of meridians in M, in which case we're in case (1) and are done, or both ends of h̃ are quasi-geodesic in M̃. Since h̃ is homoclinic, it follows that its two ends are mutually homoclinic, so we're in the setting of Theorem <ref> and one of (2)–(3) holds.

Assume we're in case (2) of Theorem <ref>, and set λ:=λ_±. Assume first that λ is a simple closed curve. Since the two ends of h are asymptotic, they spiral around λ in the same direction. Let U be a neighborhood of h ∪λ on ∂_χ<0 M. Then h is a homoclinic geodesic contained in U, so Lemma <ref> implies that U is compressible, as desired in (ii).

Now suppose that λ is not a simple closed curve, in which case we're in case (2) (a) of Theorem <ref>. We show (i) holds. Let's start by constructing the desired geodesic triangle. Parametrize h, pick a universal covering map ℍ^2 ⟶ S, lift h to a path ĥ in ℍ^2, and let ξ = lim_{t→ +∞}ĥ(t) ∈∂_∞ℍ^2. Since the two ends of h are asymptotic on S, there is a deck transformation γ : ℍ^2 ⟶ℍ^2 such that ξ = lim_{t→ -∞}γ∘ĥ(t). It follows that if we use a particular arc-length parametrization of h, we may assume that for each t ∈ℝ, the points ĥ(t) and γ∘ĥ(-t) lie on a common horocycle tangent to ξ. Fix some large s such that the geodesic segment ĉ joining ĥ(s) and γ∘ĥ(-s) is shorter than the injectivity radius of S, and therefore projects to a simple geodesic segment c in S. Let Δ̂⊂ℍ^2 be the triangle bounded by ĉ and the two rays ĥ([s,∞)) and γ∘ĥ((-∞,-s]). Let g : ℍ^2 ⟶ℍ^2 be a deck transformation.
We claim that g ∘ĥ(ℝ) ∩ int(Δ̂) = ∅. If not, then since Δ̂ has geodesic sides, two of which are disjoint from g ∘ĥ, it follows that one of the two endpoints of g ∘ĥ is ξ. If it's the positive endpoint, then g fixes ξ, and the axis of g projects to a (simple) closed curve on S, around which the two ends of h spiral, contradicting that λ isn't a simple closed curve. If the negative endpoint of g ∘ĥ is ξ, then g ∘γ^-1 fixes ξ and we get a similar contradiction.

Next, we claim that we have g(int(Δ̂)) ∩ int(Δ̂)=∅ as long as g≠ id. Suppose that for some g≠ id the intersection is nonempty. Then g(ξ) ≠ξ, since otherwise we have a contradiction as in the previous paragraph. The previous paragraph implies that the sides of the triangles g(Δ̂), Δ̂ that are lifts of rays of h do not intersect the interior of the other triangle. So, the only way the interiors of g(Δ̂), Δ̂ can intersect is if ĉ and g(ĉ) intersect. However, this does not happen since we chose s large enough so that ĉ projects to a simple geodesic segment in S.

The previous two paragraphs imply that Δ̂ projects to an embedded geodesic triangle Δ in S whose interior is disjoint from h, as desired in (i). By construction, c and h([-s,s]) are simple geodesic segments and, since g ∘ĥ(ℝ) ∩ int(Δ̂) = ∅ for any g≠ id, they have disjoint interiors. It follows that c ∪ h([-s,s]) is an essential simple closed curve on S, since it is the concatenation of two geodesic segments with disjoint interiors. We need to show it is nullhomotopic in M. Now, if h̃ is a lift of h to ∂M̃, its ends are mutually homoclinic, and hence are asymptotic on ∂M̃ by the assertion in case (2) of Theorem <ref>. Therefore, after choosing compatible lifts, the projection Δ̂⟶Δ factors through a geodesic triangle Δ̃⊂∂M̃ bounded by h̃([s,∞)), h̃((-∞,-s]) and a geodesic segment c̃. The curve c ∪ h([-s,s]) is the projection of the closed curve c̃∪h̃([-s,s]) ⊂M̃, and therefore is nullhomotopic in M.

Now assume we are in case (3). Let S_± be the components of ∂_H B containing λ_±. We may assume that h is in minimal position with respect to ∂ S_±. Since h is simple and the ends of h limit onto minimal laminations that fill S_±, we have that h intersects ∂ S_- ∪∂ S_+ at most twice. Furthermore, in the case that S_-=S_+, the homoclinic geodesic h cannot be contained entirely in the incompressible surface S_±, by Fact <ref>. So, h is the concatenation of two rays in S_± and an arc α such that int(α) lies outside S_±. Let X⊂ S be the union of S_± and a regular neighborhood of α. Since h is homoclinic, there is a meridian on X by Fact <ref>. If the two endpoints of α lie on different boundary components of ∂_H B, then α is a compression arc for B by Fact <ref>. So, we may assume that the two endpoints of α lie on the same boundary component c of ∂_H B. Fact <ref> then says that α is homotopic rel endpoints in M to an arc of c. So, if we make a new path h' ⊂∂_H B from h by replacing α with that arc of c, then h' is still homoclinic, so it cannot be boundedly homotopic to a geodesic in ∂_H B by Fact <ref>, which implies that its ends h_± are asymptotic, a contradiction to the assumption in (3).

§.§ Proof of Theorem <ref>

The proof proceeds in a few cases. As in Example <ref>, we identify M with the convex core CC(N) of a geometrically finite hyperbolic 3-manifold N=ℍ^3 / Γ, and we identify the universal cover M̃ with the preimage of CC(N) in ℍ^3, which is the convex hull of the limit set of Γ. Note that the closure of M̃ in ℍ^3 ∪∂ℍ^3 is a ball.
There are four cases to consider: (A) Both λ_± are simple closed curves. We show that either (1) or (2) holds. (B) λ_- and λ_+ are distinct, in which case the surfaces S(λ_±) are disjoint, but one of these surfaces is compressible, say S(λ_+). We show (1). (C) At least one of λ_± is not a simple closed curve, and both S(λ_±) are incompressible. We show (2) (a) or (3) holds. (D) λ_- = λ_+, which is not a simple closed curve, and S(λ_±) is compressible. We show either (1) or (2) (a) holds.

(A) and (B) above are the easiest. Our proof in case (C) involves a hyperbolic geometric interpretation of the characteristic submanifold of a pair, as discussed in §3 of <cit.> and in Walsh <cit.>; our argument is a bit more complicated than theirs, since we have to deal with accidental parabolics. In case (D), our argument adapts and fills some gaps in a surgery argument of Kleineidam–Souto <cit.> and Lecuire <cit.>.

Proof of (A). Assume that both λ_± are simple closed curves. If one of λ_± is a meridian, we are in case (1) and are done. So, we may assume that both λ_± are incompressible in M. If λ_-≠λ_+, then we are in case (3) by the Annulus Theorem. So we may assume the two curves are the same, and write λ:=λ_±. We claim that h_± spiral around λ in the same direction, so that they are asymptotic on ∂ M. Suppose not, and pick mutually homoclinic lifts h̃_± in ∂M̃. Then h̃_- and h̃_+ are asymptotic to lifts λ̃ and α(λ̃) of λ, where α∈Γ is a deck transformation. Any lift of λ is a quasi-geodesic in M̃, and hence in ℍ^3, so h̃_± are quasi-geodesic rays, and therefore have well-defined endpoints in ∂ℍ^3, which must be the same since the two rays are mutually homoclinic. Since h_± spiral around λ in opposite directions, this means that α∈Γ takes one endpoint of λ̃ in ∂ℍ^3 to the other endpoint of λ̃. Since λ̃ is stabilized by a loxodromic isometry in Γ, and Γ is torsion-free and discrete, this is impossible.

Suppose we are not in case (2) (a), so there are mutually homoclinic lifts h̃_± that are not asymptotic on ∂M̃. As in the previous paragraph, we may assume that h̃_- and h̃_+ are asymptotic to lifts λ̃ and α(λ̃) for some deck transformation α∈Γ. Since h̃_± are not asymptotic, λ̃≠α(λ̃). As before, α fixes the common endpoint of h̃_± in ∂ℍ^3, which is also a fixed point of the cyclic group ⟨β⟩⊂Γ of loxodromic isometries preserving λ̃. As Γ is discrete and torsion-free, the stabilizer of this fixed point in Γ is an infinite cyclic group containing both α and β; since α∉⟨β⟩, the subgroup ⟨β⟩ is a proper subgroup, so β is a nontrivial power of some element of Γ, and (2) (b) follows.

Proof of (B). Suppose that λ_± are distinct, in which case the surfaces S(λ_±) are disjoint, but that one of these surfaces is compressible, say S(λ_+). We claim that λ_+ is an intrinsic limit of meridians, in which case (1) holds and we are done. If not, take a meridian m ⊂ S(λ_+) and apply Lemma <ref>. We obtain a new meridian m' ⊂ S(λ_+) such that λ_+ has no m'-waves. Since λ_+ fills S(λ_+) and the boundary components ∂ S(λ_±) are incompressible, it follows that λ_+ is in tight position with respect to m'. So after possibly restricting the domains, h_+ is in tight position with respect to m', while h_- never intersects m'. This contradicts the fact that h_± are mutually homoclinic, since if h̃_± are homoclinic lifts in M̃, for large t the point h̃_+(t) is separated from the image of h̃_- by arbitrarily many lifts of m'.

Proof of (C). Assume that at least one of λ_± is not a simple closed curve, and that S_± := S(λ_±) are incompressible. Note that S_± are equal or have disjoint interiors. We want to prove that we're in case (2) or (3). Lift h_± to mutually homoclinic rays h̃_±⊂∂M̃.
Fact <ref> implies that each inclusion S̃_±↪M̃ is a quasi-isometric embedding, so if S̃_- = S̃_+, then the two mutually homoclinic rays h̃_± are actually asymptotic on ∂M̃. If this is true for all lifts h̃_±, we are in case (2) (a) and are done. So, we can assume below that S̃_- ≠S̃_+. Note that it may still be that λ_-=λ_+ and S_-=S_+.Let Γ_±⊂Γ be the stabilizer of S̃_± and let Λ_±⊂∂^3 be the limit set of Γ_±. Since Γ_± acts cocompactly on S̃_±, the inclusion S̃_±↪^3 extends continously to a map ∂_∞S̃_±⟶Λ_±⊂∂^3, by the main result of <cit.>. In particular, h̃_± have well defined endpoints in ∂^3, and since they're mutually homoclinic, they have the same endpoint ξ∈Λ_- ∩Λ_+ ⊂∂^3. We now apply Theorem <ref>. Since ξ∈Λ_- ∩Λ_+, using the notation of Theorem <ref>, the rays h̃_± are either eventually contained in the convex hulls C̃_±⊂S̃_±, or are asymptotic onto their boundaries. But C̃_± project to (possibly degenerate) generalized subsurfaces C_± with geodesic boundary in S_±, and the rays h_± limit onto filling laminations in S_±, so it follows that actually C_±=S_±, and that there is a homotopy from S_- to S_+ in M that is the projection of a homotopy from S̃_- to S̃_+. Since one of λ_± is not a simple closed curve, this means they are both not simple closed curves and the (a priori degenerate) subsurfaces with geodesic boundary S_± are not simple closed geodesics.Let S_±' ⊂ int(S_±) be obtained by deleting small collar neighborhoods of ∂ S_±, so that S_±' are both actually embedded, still contain λ_±, and are either disjoint or equal. Since S_±' are incompressible and homotopic in M, Theorem <ref> implies that they bound an essential interval bundle B⊂ M. Moreover, the fact that the homotopy from S_- to S_+ is the projection of a homotopy from S̃_- to S̃_+ means that we can assume that there is a component B̃⊂M̃ of the preimage of B that intersects ∂M̃ in S̃_±'. Note that B̃ is invariant under Δ =Γ_- ∩Γ_+, since any element of Δ preserves S̃_±', and hence B̃. We claim that λ_± are essentially homotopic through B. By Fact <ref>, it suffices to show that if σ is the canonical involution of B, as described in<ref>, then σ(λ_±) is isotopic to λ_∓ on S_∓'. Well, σ lifts to a Δ-equivariant involution σ̃ of B̃ that exchanges S̃_-' and S̃_+', where here Δ = Γ_-∩Γ_+. By equivariance, σ̃ extends continuously to the identity on Λ_Δ, so in particular its extension fixes ξ, and hence σ̃(h_±) is properly homotopic to h_∓ on S_∓, which implies σ̃(λ_±) is isotopic to λ_∓ as desired.If h_± are not asymptotic on ∂ M, then we are in case (3) and are done. So, assume h_± are asymptotic. Then there is some γ∈Γ such that γ(h̃_-) ⊂S̃_+' and is asymptotic to h̃_+. This γ fixes the endpoint ξ∈∂^3 of h̃_±. Moreover, γ(B̃) is a component of the preimage of B that contains S_+', and therefore equals B̃. So, γ exchanges S̃_±', and therefore γ^2 ∈Δ. But then h̃_± are asymptotic to the axes of γ^2 S̃_±, implying that h_± accumulate onto simple closed curves in ∂ M, contradicting our assumption in (C).Proof of (D). 
Assume that λ_-=λ_+, write λ=λ_± for brevity, assume that λ is not a simple closed curve, and that S(λ) is compressible. We want to prove that either λ is an intrinsic limit of meridians, or h_± are asymptotic, as are any pair of mutually homoclinic lifts h̃_±.

If λ is an intrinsic limit of meridians, we are done. So assume it is not; since S(λ) is compressible with incompressible boundary, Lemma <ref> lets us choose a meridian m ⊂ S(λ) with respect to which λ is in tight position. Let m̃ be its full preimage in ∂M̃, and let h̃_± be any pair of mutually homoclinic lifts in ∂M̃. Truncating if necessary, we can assume that h_± are in tight position with respect to m, and hence the lifts h̃_± are quasigeodesic rays in ℍ^3, by Fact <ref>. Since they are mutually homoclinic, h̃_- and h̃_+ converge to the same point ξ∈∂_∞ℍ^3, and tightness further implies that after restricting to appropriate subrays, h̃_- and h̃_+ intersect exactly the same components of m̃, in the same order. Reparametrizing, we have

h̃_± : [0,∞) ⟶∂M̃, with h̃_+(i), h̃_-(i) ∈m̃_i for all i∈ℕ,

where each m̃_i is a component of m̃, and where h̃_±(t) ∉m̃ when t∉ℕ. Let

d_i := d_m̃( h̃_+(i), h̃_-(i))

be the distance along m̃_i between h̃_+(i) and h̃_-(i).

There is some uniform ϵ>0, independent of the particular chosen lifts h̃_±, such that either * h̃_± are asymptotic on ∂M̃, and hence h_± are asymptotic on ∂ M, or * lim inf_i d_i ≥ϵ.

Let's assume that h̃_+ and h̃_- are not asymptotic on ∂M̃, and write d = lim inf_i d_i. Fix some transverse measure on λ. If d is small, we will construct meridians γ⊂ S(λ) with very small intersection number with λ. Since λ is not an intrinsic limit of meridians, there is some fixed lower bound for such intersection numbers, which will give a contradiction for small d.

Suppose d is small and pick 0 ≪ i < j such that d_i,d_j < 2d, let b_i be the (unique) shortest path on m̃ from h̃_-(i) to h̃_+(i), and define b_j similarly. Let γ̃_ij be the loop on ∂M̃ obtained by concatenating the four segments h̃_+([i,j]), h̃_-([i,j]), b_i and b_j in the obvious way.

We first claim that after fixing i, it is possible to choose j such that γ̃_ij is homotopically essential on ∂M̃. Assume not, let S̃⊂∂M̃ be the component containing h̃_±, fix a universal covering map ℍ^2 ⟶S̃, and lift the rays h̃_±|_[i,∞) to rays 𝔥_± : [i,∞) →ℍ^2 in such a way that b_i lifts to a segment connecting 𝔥_-(i) to 𝔥_+(i). Now, there are infinitely many j>i with d_j<2d. For each such j, we know that γ̃_ij is homotopically inessential on ∂M̃, so the points 𝔥_-(j) and 𝔥_+(j) are at most 2d away from each other in ℍ^2. This gives sequences of points exiting the rays 𝔥_± that are always at most 2d apart, so 𝔥_- is asymptotic to 𝔥_+. Hence, h̃_- is asymptotic to h̃_+, contrary to our assumption.

We now fix large i,j such that d_i,d_j<2d and γ̃:=γ̃_ij is homotopically essential on ∂M̃. Then γ̃ projects to a homotopically essential loop γ⊂∂ M that is homotopically trivial in M. Note that if i,j are chosen large enough and d is small, then γ⊂ S(λ). Furthermore, since the segments b_i,b_j are the only parts of γ̃ that intersect λ, and these segments have hyperbolic length less than 2d, the intersection number i(γ,λ) is small when d is small. (Recall that λ is a minimal lamination that is not a simple closed curve, so no leaves have positive weight, and hence hyperbolic length can be compared to intersection number.) But λ is not an intrinsic limit of meridians, so Proposition <ref> (4) says that there is some positive lower bound for the intersection numbers of λ with essential curves that are nullhomotopic in M.
Hence, we get a contradiction if d is small.

Suppose we have two pairs {a,b} and {c,d} of points in m, all four of which are distinct. We say the two pairs are unlinked in m if in the induced cyclic ordering on {a,b,c,d}⊂ m, a is adjacent to b and c is adjacent to d, and we say that the two pairs are linked otherwise. If i,j∈ℕ, i< j, then the pairs {h_+(i),h_-(i)} and {h_+(j),h_-(j)} are unlinked in m. For an example where the pairs are linked, see Figure <ref>.

The proof below works in general whenever h_± are simple geodesic rays on ∂ M in tight position with respect to m, where neither h_+ nor h_- spirals onto a simple closed curve. The essential observation used in the proof is that the closure cl(∂M̃) ⊂ℍ^3 ∪∂ℍ^3 is homeomorphic to a sphere: indeed, the closure of M̃ in ℍ^3∪∂ℍ^3 is a ball, since M̃⊂ℍ^3 is convex with nonempty interior, and the closure of the boundary is the boundary of the closure. We obtain the unlinking property above by exploiting separation properties of arcs and curves on cl(∂M̃).

Since h_± are in tight position with respect to m, both lifts h̃_± cross m̃_i exactly once. Since h̃_± limit to the same point in ∂ℍ^3, they must then cross m̃_i in the same direction. In other words, the tangent vectors h_+'(i), h_-'(i) point to the same side of m. The same statement holds for j. This allows us to break into the following two cases: (a) the arcs h_±|_[i,j] start and end on the same side of m, i.e. the vectors h_±'(i) point to the opposite side of m as the vectors h_±'(j), or (b) the arcs h_±|_[i,j] start and end on different sides of m, i.e. all four velocity vectors h_±'(i), h_±'(j) point to the same side of m, see Figure <ref>.

First, assume we're in case (a). Let α_± := h̃_±|_[i,j], which we regard as oriented arcs in ∂M̃ starting on m̃_i and ending on m̃_j. Let γ : M̃⟶M̃ be the deck transformation taking m̃_j to m̃_i. Then the arcs β_± :=γ∘h̃_±|_[i,j] start on γ(m̃_i) and end on γ(m̃_j)=m̃_i, and since we're in case (a) they end on the same side of m̃_i as the arcs α_± start. Note that γ(m̃_i) is not m̃_i or m̃_j. Indeed, if γ(m̃_i)=m̃_i then we'd have γ=id, contradicting that γ(m̃_j)= m̃_i. And if γ(m̃_i)=m̃_j, then γ^2 would leave m̃_i invariant, implying that γ^2=id, which is impossible since π_1 M has no torsion.

We claim that the interiors of the arcs β_± do not intersect m̃_i or m̃_j, and the arcs α_± do not intersect γ(m̃_i). Indeed, the interiors of β_± don't intersect m̃_i because the arcs β_± end on m̃_i and intersect each component of m̃ at most once, by tight position. The interiors of β_± don't intersect m̃_j because any arc from m̃_i to m̃_j intersects at least j-i+1 components of m̃ (counting m̃_i and m̃_j), while any proper subarc of β_± intersects at most j-i components of m̃. Here, for the j-i+1 bound we are using tight position of h_±, the definitions of m̃_i,m̃_j, and the fact that each component of m̃ separates ∂M̃. The fact that the arcs α_± don't intersect γ(m̃_i) is similar: any arc from m̃_i to γ(m̃_i) must pass through at least j-i+1 components of m̃, while any proper subarc of α_± intersects at most j-i components, and α_± do not end on γ(m̃_i)≠m̃_j.

Let A ⊂ cl(∂M̃)≅ S^2 be the annulus that is the closure of the component of cl(∂M̃) ∖ (m̃_i ∪m̃_j) that contains the side of m̃_i on which the arcs α_± start and the arcs β_± end. Then α_± are two disjoint arcs in A that join m̃_i to m̃_j, and therefore α_± separate A into two rectangles. The component γ(m̃_i) on which the arcs β_± start is contained in the interior of one of these two rectangles.
Therefore, the two arcs β_± must lie in the same component of A ∖ (α_+ ∪α_-). Looking at endpoints, this means the pairs {h̃_+(i),h̃_-(i)} and {γ∘h̃_+(j),γ∘h̃_-(j)} are unlinked in m̃_i, and the claim follows.

Now assume that we're in case (b). The curve m̃_i separates ∂M̃, and we let X⊂∂M̃ be the closure of the component of ∂M̃∖m̃_i into which the velocity vectors h̃_±'(i) and (γ∘h̃_±)'(j) all point. The closure cl(X)⊂ℍ^3 ∪∂ℍ^3 is homeomorphic to a disk, since cl(∂M̃) is a sphere. As before, we let γ : M̃⟶M̃ be the deck transformation taking m̃_j to m̃_i. Then the rays α_± := h̃_±([i,∞)), β_±:=γ∘h̃_±([j,∞)) are all contained in X. Note that α_± both limit to a point ξ∈∂ℍ^3, while β_± limit to γ(ξ) ∈∂ℍ^3. The union α_-∪α_+ compactifies to an arc in cl(X), since the two rays limit to the same point of ∂ℍ^3. The same is true for β_-∪β_+. Hoping for a contradiction, suppose that the points in the statement of the claim are linked. Then the pairs of endpoints of α_- ∪α_+ and β_- ∪β_+ are also linked on m̃_i = ∂ cl(X). We now have two arcs on the disk cl(X) with linked endpoints on ∂ cl(X), so the two arcs must intersect. As α_±,β_± are all disjoint, the only intersection can be on ∂ℍ^3, so their endpoints at infinity must all agree, i.e. γ(ξ)=ξ.

Since γ(ξ)=ξ, all the rays γ^k ∘h̃_+ limit to ξ, where k∈ℤ. Hence, all these (quasi-geodesic) rays are pairwise mutually homoclinic, and for each pair k,l, the rays γ^k ∘h̃_+ and γ^l ∘h̃_+ eventually intersect the same components of m̃, in the same order, although their initial behavior may be different. In analogy with the setup of Claim <ref>, let d_k,l be the liminf of the distances from γ^k ∘h̃_+ to γ^l ∘h̃_+ along the components of m̃ that they both intersect. We claim that there are k,l such that d_k,l < ϵ, where ϵ is the constant from Claim <ref>. Indeed, for N> ℓ(m)/ϵ, it is impossible to pack N points at least ϵ apart in any component of m̃. So if we let k range over a set F ⊂ℤ of size N, whenever a component of m̃ intersects all γ^k ∘h̃_+, k∈ F, two such intersections must be within ϵ of each other. There are infinitely many such components of m̃ and F is finite, so we can pick k,l ∈ F such that γ^k ∘h̃_+ and γ^l ∘h̃_+ are within ϵ on infinitely many such components. Finally, γ^k ∘h̃_+ and γ^l ∘h̃_+ are mutually homoclinic lifts of h_+, and d_k,l < ϵ, so the exact same argument as in Claim <ref> shows that γ^k ∘h̃_+ and γ^l ∘h̃_+ are asymptotic on ∂M̃. It follows that h_+ spirals onto a (simple) closed curve in ∂ M in the homotopy class of (a primitive root of) γ^{l-k}. (Indeed, γ^{l-k} lifts to a deck transformation of the universal cover ℍ^2 ⟶∂ M, and the axis of this deck transformation is asymptotic to suitably chosen lifts of both γ^k ∘h̃_+ and γ^l ∘h̃_+.) This is a contradiction, though, since h_+ limits onto λ, which is not a simple closed curve.

Assume now that our mutually homoclinic rays h̃_± are not asymptotic on ∂M̃; if no such pair of lifts exists, then we are in case (2) of the theorem and are done. By Claim <ref>, there is some ϵ>0 such that d_m̃_i(h̃_+(i),h̃_-(i)) ≥ϵ for all sufficiently large i. We will show that λ is an intrinsic limit of annuli, in the sense of Lemma <ref>, which says that then λ is an intrinsic limit of meridians. The proof is an adaptation and correction of a surgery argument of Lecuire <cit.>.
As there are two gaps[The first gap is that the sentence “Quitte à extraire, la suite (gh^-1)^2n g(l̃^1) converge vers une géodésique γ̃⊂ p^-1(α_1) dont la projection l ⊂∂ M est une courbe fermé.”at the end of the proof of Affirmation C.3 isn't adequately justified; this is fixed in Claim <ref>. The secondis that the assumption d(l_+^2(y_i),l_+^2(y_j))<ε'in the statement of Affirmation C.3 is never actually verified, and does not come trivially from a compactness argument.This is fixed in Claim <ref>.] in Lecuire's earlier argument, we give the proof in full detail below, without many citations of <cit.>. Given 0<δ<ϵ, there are choices of i<jsuch thateither (I) The points h_+(i) and h_+(j) bound a segment I_+ ⊂ m of length less than δ, and similarly with - instead of +. The four velocity vectors h_+'(i), h_+'(j), h_-'(i), h_-'(j) all point to the same side of m, and the four segments h_+([i,j]), h_-([i,j]), I_+ and I_- have disjoint interiors. So, the curves γ_±:= h_±([i,j]) ∪ I_±⊂∂ M are simple and disjoint.(II) The points h_+(i) and h_-(j) bound a segment I_+ ⊂ m of length less than δ, andsimilarly the points h_-(i) and h_+(j) bound a segment I_- ⊂ m of length less than δ. The four velocity vectors h_+'(i), h_+'(j), h_-'(i), h_-'(j) all point to the same side of m, and the four segments h_+([i,j]), h_-([i,j]), I_- and I_+ have disjoint interiors. So, the curve γ⊂∂ M obtained by concatenating all four segments is simple. See Figure <ref> for a very useful picture. Note that in the picture, the velocity vectors of all paths intersecting m point to the same side of m, i.e. `up', and all 4-tuples of points are unlinked as in Claim <ref>. Start by fixing a circular order on m. Define `the right' to be the direction in m thatis increasing with respect to the circular order, and define `the left' similarly. Since λ is minimal and not a simple closed curve, it has infinitely many leaves ℓ that are notboundary leaves. Fix some such ℓ, making sure that h_±⊄ℓ if the given rays happen to lie inside the lamination λ. The ray h_+accumulates onto both sides of ℓ, so if we fix p ∈ℓ∩ m, the set h_+() accumulates onto p from both sides, and similarly with - instead of +. Fix an interval J ⊂ m of length δ centered at p, and write J=J_l ∪ J_r as the union of the closed subintervals to the `left' and to the `right' of p. Note that p ∉h_±(), so each intersection of h_± with J lies in exactly one of J_l or J_r.Let's call an index i left-closest if either h_+(i)or h_-(i) lies in J_l and is closer to p than any previous h_±(k),k<i, that lies in J_l. Right-closest is defined similarly using J_r, and we call an index i closest if it is either left or right closest. Note that since δ<ϵ we can never have both h_+(i),h_-(i) in J simultaneously, so no i is both right-closest and left-closest at the same time. Since there are infinitely many i of both types, at some point there will be a transition where some i_l is left-closest, some i_r>i_l is right-closest, and there are no closest indices in between. Let i_c be the smallest closest index that is bigger than i_r. (Here, c stands for `center', since the corresponding point on J will lie between the points we get fromthe indices i_l and i_r.) We now have three points in J, so two of the corresponding velocity vectors point to the same side of m. 
Let i,j ∈{i_l,i_r,i_c} be the two corresponding indices, and for concreteness, let's assume for the moment that h_+(i) and h_+(j) are the corresponding points in J, deferring a discussion of the other cases to the end of the proof. Note that since the rays h_± are mutually homoclinic and are in tight position with respect to m, the velocity vectors h_-'(i) and h_-'(j) point to the same sides of m as h_+'(i) and h_+'(j), respectively, and so all four vectors point to the same side. That is, (a)h_+'(i), h_+'(j), h_-'(i), h_-'(j) all point to the same side of m. (b) the segment I_+ ⊂ J bounded by h_+(i) and h_+(j) contains no element h_+(k) or h_-(k) where k is between i and j. Let I_-⊂ m ∖ J be the segment that is bounded by the points h_-(i) and h_-(j). Suppose for a moment that we knew that I_- had length less than δ.Then for each k, it is impossible that both h_+(k) or h_-(k) lie in I_-, as we're assuming that corresponding intersections of h_± with m stay at least ϵ>δ apart. In particular, if k is between i,j and we apply the unlinking condition of Claim <ref> twice, once to i,k and once to j,k, we get from this and (b) above that neither element h_+(k) or h_-(k) is contained in I_-. So, the four segments h_+([i,j]), h_-([i,j]), I_+ and I_- have disjoint interiors, and we're in the situation of case (I) in the claim, as desired.As constructed above, however, there is unfortunately no reason to believe that the interval I_- has length less than δ. To rectify this, recall that λ actually has infinitely many non-boundary leaves ℓ^n.For each such ℓ^n and p^n∈ℓ^n ∩ m, we can repeat the above construction using constants δ^n→ 0, producing points (say) h_+(i^n),h_+(j^n) that lie within the length δ^n-interval J^n ∋ p^nand that satisfy properties (a) and (b) above.Choose the sequence p^n ∈ℓ^n ∩ m so that it is monotonic in the circular order induced on m, and let δ^n → 0 fast enough so that the associated intervals I_+^n are all disjoint, so that in the circular order on m we haveh_+(i^1) < h_+(j^1) < h_+(i^2) < h_+(j^2) < ⋯ < h_+(i^n),and where each I_+^n is the interval [h_+(i^n),h_+(j^n)], rather than the complementary arc on m that has endpoints h_+(i^n),h_+(j^n). Then Claim <ref> implies thath_-(i^1) > h_-(j^1) > h_-(i^2) > h_-(j^2) > ⋯ > h_-(i^n).Discarding finitely many n, we can assume all the point h_+(i^n),h_+(j^n) lie in an interval U⊂ m of length less than δ. Since the points h_-(i^n),h_-(j^n) are at least ϵ>δ away from the corresponding + points, they all lie in m∖ U. Then since the interval I_-^n is defined to be disjoint from I_+^n ⊂ U,we must have I_-^n=[h_-(j^n), h_-(i^n)], rather than the other interval with those endpoints. It follows that at least for large n, all the intervals I_-^n are disjoint. Since m as compact, we can then pick some n where I_-^nhas length less than δ, as desired.Therefore, we are in case (I) in the statement of the claim, and are done.In the argument above we have simplified the notation by assuming that we have points h_+(i^n),h_+(j^n) ∈ J^n satisfying conditions (a) and (b), which put us in case (I) at the end. Up to exchanging +,-, the only other relevant case is when, our chosen points are h_-(i^n),h_+(j^n) ∈ J^n.After passing to a subsequence in n, if we are not in the case already addressed, then we may assume that our chosen points are h_-(i^n),h_+(j^n) ∈ J^n for all n. 
And after exchanging + with - and passing to a further subsequence, we may assumeh_-(i^1) < h_+(j^1) < h_-(i^2) < h_+(j^2) < ⋯ < h_-(i^1) in the circular order on m, and that the interval I_+^n = [h_-(i^n), h_+(j^n)]. Everything from then on works exactly as above: if we set I_-^n to be the interval bounded by h_+(i^n), h_-(j^n)that is disjoint from I_+^n, then for some n we have that the length of I_-^n is less than δ, and it is easy to verify that we are in case (II) of the claim. We now finish the proof of Theorem <ref>. Suppose we are in case (I) of Claim <ref>. Then the two simple closed curves γ_± drawn on the left in Figure <ref> are the projections to M of the paths in M̃ obtained by concatenating the arcs h̃_±|_[i,j] with lifts Ĩ_±⊂m̃_j of the intervals I_±⊂ m, see Figure <ref>. We can homotope one path to the other in M̃ while preserving the fact that the endpoints are points on m̃_i,m̃_j that differ by the unique deck transformation taking m̃_i to m̃_j. So projecting down, the simple closed curves γ_± are freely homotopic in M, and hence bound an annulus A ↪ M. See the left part of Figure <ref>.There is a uniform lower bound (depending on λ,m) for the angle at any intersection point of any leaf of λ with m, and the points h_±(i) are at least ϵ away from each other in m.This implies that there is a uniform lower bound for the Hausdorff distance between h_±|_[i,j] on ∂ M. As long as the bound δ on the lengths of I_± is small enough, the geodesics in the homotopy classes of γ_± stay very close to h_±|_[i,j], and are therefore distinct. So, the curves γ_± are not homotopic in ∂ M, and hence bound an essential annulus A ↪ M.Choosing i,j to be large and δ to be very small, ∂ Ais contained in S(λ) and its intersection number with λ is small.Hence, λ is an intrinsic limit of annuli, in the sense of Lemma <ref>, so we're done. Case (II) is similar. Here, the single simple closed curve γ described in Claim <ref> (II)bounds a Möbius band B ↪ M, see the right side of Figure <ref>. Since ∂ M is orientable, B is not boundary parallel, and hence by JSJ theory the boundary of a regular neighborhood of B is an essential annulus A ↪ M whose boundary consists of two disjoint curves that are both homotopic to γ on ∂ M. As in case (I), we can make the intersection number of ∂ A with λ arbitrarily small, so λ is an intrinsic limit of annuli, and we are done.The proof (D) above is quite delicate. Most of this delicacy comes from Claims <ref> and <ref>, which are needed to ensure that the annuli approximating λthat are produced immediately afterward are embedded.But while we are able to prove these claims using arguments involving the planarity of the closure of ∂M̃ in ^3 ∪∂_∞^3, one would not have to worry about these annuli being embedded if there was a strong `Annulus Theorem' guaranteeing that any essential singular annulus in an irreducible 3-manifold M can be surgered to give an essential embedded annulus. If this were true, Claims <ref> and <ref> could be replaced by a one paragraph compactness argument. Here, a singular annulus is a map f: (A,∂ A) ⟶ (M,∂ M) where A = S^1 × [0,1]. We say f is essential if it is nothomotopic rel ∂ Ainto ∂ M.Such an annulus theorem follows from the JSJ decomposition when M has incompressible boundary.When M has compressible boundary, there is a similar theorem as long as the original singular annulus has a spanning arc that is not homotopic rel ∂ into ∂ M, see Cannon-Feustel <cit.>. 
However, our proof above does not provide such annuli, and indeed such annuli do not exist in compression bodies (the M of most interest to us), since any proper arc in a compression body M is homotopic rel ∂ into ∂ M. In fact, in a general M, one cannot always surger essential singular annuli to produce embedded essential annuli. For instance, the two curves in Figure <ref> bound an essential singular annulus that cannot be surgered to give an embedded essential annulus. However, in that example, one can surger to get a meridian in the handlebody, so maybe an essential singular annulus can always be surgered to give either a meridian or an essential embedded annulus? This also turns out not to be true. Suppose P is a pair of pants and let M=P× [0,1], which is homeomorphic to a genus two handlebody. If γ : S^1 ⟶ P is an immersed curve whose image is a figure-8 spine of P, then the singular annulus γ×id : S^1 × [0,1] ⟶ P × [0,1] is essential, but the three embedded annuli that one can obtain from it by surgery are all inessential, and no meridian can be created by surgery either. However, we expect this is the only counterexample. The first author of this paper has spent considerable time trying to prove this with a tower argument, but pushing down the tower is very subtle, since if the obvious constructions fail, one has to characterize the figure-8 example.

§ HAUSDORFF LIMITS OF MERIDIANS

Let M be an orientable hyperbolizable compact 3-manifold, and equip ∂_χ<0M with a hyperbolic metric. In <cit.>, Lecuire showed that every lamination λ on ∂_χ<0M that is a Hausdorff limit of meridians contains a leaf that is a homoclinic geodesic. The converse is not true: certainly in order to be a Hausdorff limit of meridians λ needs to be connected, and as explained in Figure <ref> in the introduction there are even connected laminations that contain homoclinic leaves but are not Hausdorff limits of meridians.

We say that two laminations μ_1,μ_2 are commensurable if they contain a common sublamination ν such that for both i, the difference μ_i ∖ν is the union of finitely many leaves. μ_1 and μ_2 are strongly commensurable if they contain a common sublamination ν such that for both i, the difference μ_i ∖ν is the union of finitely many leaves, none of which are simple closed curves.

Suppose that S ⊂∂_χ<0 M is a connected subsurface with geodesic boundary, such that ∂ S is incompressible, and that the disc set 𝒟(S,M) is `large', i.e. it has infinite diameter in the curve graph C(S). Let λ be a geodesic lamination in int(S) that is a finite union of minimal laminations, and assume that the following does not hold: (⋆) S is a closed, genus two surface, there is a separating meridian μ that does not intersect λ transversely, and λ intersects transversely the two nonseparating meridians disjoint from μ. Then λ is strongly commensurable to a Hausdorff limit of meridians in S if and only if λ is strongly commensurable to a lamination containing a homoclinic leaf, and this happens if and only if one of the following holds: * λ is disjoint from a meridian on S, * some component of λ is an intrinsic limit of meridians, or * there is an essential (possibly nontrivial) interval bundle B ⊂ M over a compact surface Y that is not an annulus or Möbius band, and there are components λ_±⊂λ that each fill a component of ∂_h B (possibly the same component, if ∂_h B is connected), such that λ_± are essentially homotopic through B, as in <ref>, and there is a compression arc α for B that is disjoint from λ.
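As an illustration of the role the word `strongly' plays in the statement above, and purely as a toy example that is not part of the argument, suppose ν is a minimal geodesic lamination in S that is not a simple closed curve, and let γ be a simple closed geodesic in S disjoint from ν. Set

λ_1 := ν, λ_2 := ν∪γ.

Taking the common sublamination to be ν itself, we have λ_1 ∖ν = ∅ and λ_2 ∖ν = γ, so λ_1 and λ_2 are commensurable. They are not strongly commensurable: by minimality the only sublaminations of λ_1 = ν are ∅ and ν, the difference λ_2 ∖∅⊃ν is not a union of finitely many leaves, and the difference λ_2 ∖ν = γ is a simple closed curve. In the theorem, then, a Hausdorff limit of meridians is allowed to differ from λ by finitely many non-closed leaves, but not by extra simple closed curves.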
Recall from Proposition <ref> that when 𝒟(S,M) does not have infinite diameter in C(S), it is either empty, consists of a single separating meridian, or consists of a single non-separating meridian m and all separating curves that are band sums of m. In these cases, it is obvious what the Hausdorff limits of meridians are. For instance, in the last case a finite union λ of minimal laminations in S is strongly commensurable to a Hausdorff limit of meridians if and only if either λ=m or λ⊂ S ∖ m. For the `if'direction, note that if λ⊂ S ∖ m then it can be approximated by an arc with endpoints on opposite sides of m, and doing a band sum with m gives a curve that approximates λ. For the `only if' direction, just note that all meridians are either equal to m or are contained in S ∖ m. The case (⋆) above really is exceptional. In that case, (1) holds, and at least when μ⊂λ we have that λ contains a homoclinic leaf, butλ is not commensurable to a Hausdorff limit of meridians. Indeed, let T_±⊂ S ∖μbe the two components of S ∖μ and hoping for a contradiction, take a sequence of meridians (m_n) that Hausdorff converges to λ'⊃λ. We can assume after passing to a subsequence that m_n has an μ-wave in T_+ (say) for all n. Since T_+ is a compressible punctured torus, there is a unique homotopy class rel μ of μ-wave in T_+, so λ' contains a leaf ℓ that either intersects T_+ in an arc in this homotopy class, or is contained in T_+ and is obtained by spinning an arc in this homotopy class around μ. But then ℓ intersects nontrivially every nonperipheral minimal lamination in T_+ other than the unique nonseparating meridian μ_+ of T_+, so λ is disjoint from μ_+, contrary to assumption. §.§ The proof of Theorem <ref> Most of the proof of Theorem <ref> is contained in the following results. Assume that S⊂∂_χ<0 M is a connected subsurface with geodesic boundary, ∂ S is incompressible, and 𝒟(S,M) is large. Suppose λ⊂ S is a lamination, there is a meridian μ that does not intersect λ transversely, and that if S is a closed surface of genus 2 then μ is nonseparating. Then λ is strongly commensurable to a Hausdorff limits of meridians on S. The proof of Lemma <ref> uses some ideas that the first author developed with Sebastian Hensel, whom we thank for his contribution. We may assume that λ is a finite union of minimal components. It suffices to assume μ is not a leaf of λ, as long as we prove the conclusion both for such a λ and for λ∪μ. Assume first that μ is nonseparating in S. Let (c_i) be a sequence of simple closed curves on S that Hausdorff-converges to λ. One can do this by constructing for each component λ_0 ⊂λ a simple closed geodesic approximating λ_0, by taking an arc that runs along a leaf of λ_0 for a long time, and then closing it up the next time it passes closest to its initial endpoint in the correct direction. Let α be a simple closed curve on S that intersects μ once, and intersects all the components of λ. For each k, let γ_i^k be the geodesic homotopic to the `band sum' of μ and T_c_i^k(α), where T_c_i is the twist around c_i and a band sum of two curves intersecting once is the boundary of a regular neighborhood of their union. Note that γ_i^k is a meridian for all i,k. If (k_i) is a sequence that increases quickly enough, (γ_i^k_i) converges to a lamination strongly commensurable to λ. And if we pick a meridian β on S that intersects both μ and λ, then T_μ^i ∘ T_γ_i^k_i^i(β) Hausdorff converges to a lamination strongly commensurable to λ∪μ. Now suppose μ is separating. 
We claim that there is another separating meridian in S that is disjoint from μ. Let m be a maximal multicurve of meridians in S that contains μ as a component. Since 𝒟(S,M) is large, m≠μ. If m has a separating component distinct from μ, we are done. So, suppose we have a nonseparating component m_0 ⊂ m. We can make a (separating) band sum of m_0 that is disjoint from μ unless m_0 lies in a puncturedtorus component of S ∖μ. So, we assume the latter is true. Since 𝒟(S,M) is large, it cannot be that m=μ∪ m_0, since then all meridians are disjoint from m_0. So, there is another component m_1 of m, which we can assume is also nonseparating. This m_1 must lie on the opposite side of μ from m_0, and as before we're done unless the component of S∖μ containing m_1 is also a punctured torus. But in this case, S is a genus two surface contrary to assumption.Let T ⊂ S ∖μ be a component that contains a nonperipheral separating meridian, which wecall μ'. Let V be the other component. Write λ = λ_T ∪λ_V, where λ_T⊂ T and λ_V⊂ S ∖ T. Let C be the compression body with exterior boundary equal to the component of ∂ M that contains S, that one obtains by compressing the meridian μ. We claim that there are sequences of simple closed curves (α_i), (β_i) in T such that the following two properties hold: * (α_i) and (β_i) both Hausdorff converge toa geodesic lamination strongly commensurable to λ_T,* for all i, α_i and β_i bound an essential annulus in C.To construct these sequences, start by picking a simple closed curve α in T such that α and each component of λ_T together fill T. Let β be a simple closed curve onT such that α, β,μ bound a pair of pants in T.In C, we can compress the boundary component μ of this pair of pants, so α,β bound an annulus in C. Moreover, this annulus is essential, since otherwise α,β bound an annulus in T, implying T is torus with the one boundary component μ, contradicting the fact that there is a separating nonperipheral meridian in T. Then find a sequence (c_i) of simple closed curves in T that Hausdorff converge toa geodesic lamination strongly commensurable to λ_T, take k_i to be a fast increasing sequence and set α_i=T_c_i^k_i(α) and β_i:=T_c_i^k_i(β). Since α fills with every component of λ_T, the curve β intersects every component of λ_T. It follows that(α_i) and (β_i) Hausdorff converge toa geodesic lamination strongly commensurable to λ_T. And since each c_i is nonperipheral in T, each component of c_i bounds an annulus in C with a curve on the interior boundary of C, so the twist T_c_i extends to C, implying that α_i,β_i bound an annulus in C as desired above. Now let C' be the compression body obtained by compressing both μ and μ', so C ⊂ C' ⊂ M. Note that since both curves are separating and are disjoint, Proposition <ref> says that 𝒟(S,C') is large, so we can pick a meridian m∈𝒟(S,C') that intersects μ and every component of λ. Fix a sequence of geodesic muticurves (d_i) in V that Hausdorff converges to λ_V. As with the twists T_c_i in C, the twists T_d_i extend to C'. And the compositions T_α_i∘ T_β_i^-1 extend to C' because the curves bound annuli in C ⊂ C'. We then define γ_i:=(T_α_i∘ T_β_i^-1)^k_i∘ T_d_i^k_i(m) for some fast increasing k_i→∞. These γ_i are all meridians and Hausdorff converge to a lamination strongly commensurable to λ. To obtain λ∪μ instead, hit γ_i with high powers of twists around μ. Here is a more powerful version of Lemma <ref>. The idea of the proof is more or less the same, but more complicated. 
Suppose that ν,η are disjoint geodesic laminations on S that are finite unions of minimal components, and set λ=ν∪η. Suppose also that no component of ν is a meridian.Let X be the union of the subsurfaces with geodesic boundary that are filled by the components of ν. Suppose that there are disjoint, nonhomotopic meridians μ,μ' on S that are disjoint from η, and a sequence of homeomorphismsf_i : S ⟶ S, f_i|_S ∖ int(X)=idsuch that μ_i:=f_i(μ) and μ_i':=f_i(μ') are both sequences of meridians that Hausdorff converge to laminations strongly commensurable to ν. Then λ=ν∪η is strongly commensurable to a Hausdorff limit of meridians in S.Before proving the proposition, we record the following application. Suppose that λ is a geodesic lamination on S that is a finite union of minimal components. If either * some component ν⊂λ that is not a simple closed curve is an intrinsic limit of meridians, * there are (possibly equal) components λ_±⊂λ, neither of which is a simple closed curve, and where each fills a component of the horizontal boundary (possibly the same component if ∂_h B is connected) of an essential interval bundle(B,∂_H B) ↪ (M,S),where λ_± are essentially homotopic through B, and where there is a compression arc α for B that is disjoint from λ, then λ is strongly commensurable to a Hausdorff limit of meridians. Suppose some component ν⊂λ that is not a simple closed curve is an intrinsic limit of meridians. Setting X:=S(λ) we can take (μ_i) to be any sequence of meridians in X that Hausdorff converges to a lamination strongly commensurable to ν. Moreover, since ν fills X and is a limit of meridians, the disc set 𝒟(X,M) is large, so for each i there is some meridian μ_i' disjoint from μ_i. Since there are only finitely many topological types of pairs of disjoint curves in X up to the pure mapping class group of X, after passing to a subsequence we can assume that all μ_i,μ_i' are of the form in the proposition. The desired conclusion follows.In the second case, we let X be the subsurface with geodesic boundary obtained by tightening ∂_H B and set ν=λ_-∪λ_+. Write the interval bundle as π: B ⟶ Y, where Y is a compact surface with boundary. We can assume without loss of generality that α is a strict compression arc, i.e. that it is homotopic rel endpoints to a fiber π^-1(y),y∈∂ Y. Note that since λ_± are not simple closed curves, Y is not an annulus or Möbius band. Since λ_± are essentially homotopic through B, Fact <ref> says that if our reference hyperbolic metrics are chosen appropriately, we have that λ_-∪λ_+= (π|_∂_HB)^-1(λ̅) for some geodesic lamination λ̅ on Y. Since λ_± together fill ∂_HB, the lamination λ̅ is minimal and fills Y. So in particular, it has no closed, one-sided leaves, and therefore if we pick a nonzero transverse measure on λ̅, we have that λ̅ is the projective limit of a sequence of two-sided nonperipheral simple closed curves (c_i) in Y, by Theorem 1.2 of <cit.>. Homotope the c_i on Y to based simple loops at y ∈∂ Y, let m(c_i) be the associated compressible curves on S constructed in Claim <ref>, and let μ_i be the geodesic meridians on S in their homotopy classes. Then (μ_i) Hausdorff converges to a lamination strongly commensurable to λ_-∪λ_+. After passing to a subsequence, we can assume that all the c_i differ by pure homeomorphisms of Y, in which case the meridians μ_i are as required in Proposition <ref>, for some μ,f_i.Note that since our compression arc is assumed to be disjoint from λ, all the μ_i are disjoint from η := λ∖λ_±, and hence so is our μ. 
We create disjoint meridians μ_i' similarly, by taking some c_i' on Y disjoint from c_i, and letting μ_i' be the geodesic meridian homotopic to m(c_i'). It then follows from Proposition <ref> that λis strongly commensurable to a Hausdorff limit of meridians as desired. We now prove the proposition. Assume that μ,μ' are disjoint meridian on S that are disjoint from η, that f_i : S ⟶ S are homeomorphisms that are the identity outside of X, and that μ_i:=f_i(μ) and μ_i':=f_i(μ') are sequences of meridians that Hausdorff converge to laminations strongly commensurable to ν.We want to show that ν∪η is strongly commensurable to a Hausdorff limit of meridians on S. We now basically repeat the argument in Lemma <ref>, so the reader should make sure that they understand that argument before continuing here.Suppose μ (say) is nonseparating in S. Choose a simple closed curve α on S that intersects μ once and intersects essentially each component of η.Let α_i := f_i(α), and note that α_i intersects μ_i once, and also intersects essentially each component of η. Let (c_i) be a geodesic multi-curve that Hausdorff converges to η, and let γ_i^k be the geodesic homotopic to the `band sum' B(μ_i,T_c_i^k(α_i)) = T_c_i^k(B(μ_i,α_i)) = T_c_i^k ∘ f_i( B(μ,α)),where here B(·,·) takes in two simple closed curves that intersect once and returns the boundary of the regular neighborhood of their union. If one of the inputs in a band sum is a meridian, then so is the output, so γ_i^k is a meridian for all i,k. The given equalitiesare true at least for large i. The first equality holds because μ is disjoint from η, f_i=id on the subsurfaces filled by the components of η, and therefore μ_i is disjoint from c_i for large i. The second equality is obvious from the definitions of μ_i,α_i.Let (k_i) be a fast increasing sequence. After passing to a subsequence, we can assume that (γ_i^k_i) Hausdorff converges to a lamination λ. We claim that λ is strongly commensurable to ν∪η. First, using the second term in (<ref>), if k_i is huge with respect to i, then c_i is contained in a small neighborhood of γ_i^k_i, and so since (c_i) Hausdorff converges to η, we have λ⊃η.We claim that each γ_i^k_i essentially intersects each component X_0 ⊂ X. If not, then from the third term in (<ref>) it follows that B(μ,α) is disjoint from X_0. But μ essentially intersects X_0, since otherwise the Hausdorff limit of the μ_i will not contain the associated component ν_0 ⊂ν. So, μ,α and X_0 all lie in the punctured torus T⊂ S bounded by B(μ,α). But since α intersects every component of η, we have that η intersects T as well, in a collection of arcs disjoint from μ. Since X_0 is disjoint from η, X_0=μ, so ν_0=μ is a meridian, contrary to our standing assumption.It now follows that the Hausdorff limit λ essentially intersects each component of X. Since γ_i^k_i is disjoint from μ_i and (μ_i) Hausdorff converges to a lamination containing ν, The laminations λ,ν cannot intersect transversely. Sinceeach component of X is filled by a component of ν, we have λ⊃ν.Finally, if Y is the union of all the subsurfaces with geodesic boundary that are filled by the components of η, then as f_i=id outside X and c_i ⊂ Y for large i, the intersection of γ_i^k_i with S ∖ (X ∪ Y) is properly homotopic to the intersection ofB(μ,α), which is independent of i. It follows that λ∖ (ν∪η) is a finite collection of non-closed leaves, so we are done.We can now assume that both μ,μ' are separating, so that μ_i,μ_i' are also separating for all i. 
Let T_i ⊂ S ∖μ_i be the component containing μ_i', and let V_i be the other component. Note that T_i is not a punctured torus, since it contains a nonperipheral separating curve. Since ∂ T_i ∩η =∅, we haveη = η_T⊔η_V, where the first term is the intersection of η with T_i, and the second term is defined similarly. Note that since f_i=id on S∖ X, all the μ_i induce the same two-element partition of the components of S ∖ X, so at least after passing to a subsequence the decomposition of η above is actually independent of i, which is why we have omitted the i in the notation. Let C be the compression body whose exterior boundary is the component of ∂ M containing S, and which is obtained by compressing the curve μ. Let C' be similarly obtained by compressing both μ and μ', so that C ⊂ C' ⊂ M. Since C' admits two nonhomotopic disjoint separating meridians, the disc set 𝒟(S,C) is large by Proposition <ref>, so we can pick a meridian m∈𝒟(S,C) that intersects every component of ν∪η, as well as μ,μ'. Let C_i⊂ C_i' ⊂ M be the compression bodies obtained by compressing μ_i,μ_i'. Then f_i extends to a map C' ⟶ C_i', implying that m_i:=f_i(m) is a meridian in C_i'.As in the proof of Lemma <ref>, we can pick sequences (α_i),(β_i) of simple closed curves in T_i such that (α_i) and (β_i) both Hausdorff converge to η_T, and where α_i,β_i bound an essential annulus in C_i for all i. As in the lemma, T_α_i∘ T_β_i^-1 extends to C_i'. Let (c_i) be a sequence of multicurves in V_i that Hausdorff converges to η_V. Each component of c_i bounds an annulus in C_i' with a curve on the interior boundary of C_i', and hence the multitwist T_c_i extends to a homeomorphism of C_i'.For any given k, setγ_i^k := (T_α_k∘ T_β_k^-1)^k ∘ T_c_k^k(m_i). We claim that for fast increasing k_i, the curves γ_i^k_i Hausdorff converge to a lamination that is strongly commensurable to ν∪η as desired. This is proved using the same types of arguments we employed in the nonseparating case above. In particular, recall that m was selected to intersect all components of ν∪η. Since f_i is supported on subsurfaces filled by components of ν, all the m_i=f_i(m) intersect all components of ν∪η, and hence for large k_i they intersect α_k_i,β_k_i. So, γ_i^k_i is twisted many times around α_k_i,β_k_i, and hence its Hausdorff limit contains ν. Similarly, the m_i intersect c_k_i for large i. Since c_k lies in V_k, it is disjoint from α_k⊂ T_k and β_k⊂ T_k,and thus the Hausdorff limit of γ_i^k_i contains η. Finally, the Hausdorff limit has no other minimal components because α_k,β_k,c_k are contained in subsurfaces filled by components of ν∪η, and m_i=f_i(m) is constant outside this subsurfaces.We can now start the proof of the theorem. Suppose that λ⊂ S is a lamination and (⋆) does not hold, so that it is not the case that S is a genus two surface and λ is disjoint from a separating meridian μ, but intersects the two nonseparating meridians disjoint from μ. 
We want to show that λ is strongly commensurable to a Hausdorff limit of meridians if and only if it is strongly commensurable to a lamination containing a homoclinic leaf, which happens if and only if either: * λ is disjoint from a meridian, * some component of λ is an intrinsic limit of meridians, or * there is an essential (possibly nontrivial) interval bundle B ⊂ M over a compact surface Y that is not an annulus or Möbius band, and there are components λ_± ⊂ λ that each fill a component of ∂_H B, such that λ_± are essentially homotopic through B, as in <ref>, and there is a compression arc α for B that is disjoint from λ.

Hausdorff limit ⟹ homoclinic leaf. Suppose first that λ is strongly commensurable to a Hausdorff limit of meridians λ'. Then by <cit.>, there is a homoclinic leaf h ⊂ λ', so λ is strongly commensurable to a lamination with a homoclinic leaf as desired.

Homoclinic leaf ⟹ (1), (2) or (3). We now assume we have a homoclinic leaf h in some lamination strongly commensurable to λ. The two ends of h limit onto (possibly equal) components λ_± ⊂ λ. If one of S(λ_±) has compressible boundary, there is a meridian disjoint from λ, so we are in case (1) and are done. So, ∂ S(λ_±) is incompressible, and we're in the situation of Theorem <ref> and Corollary <ref>. We now break into cases. If one of λ_± is an intrinsic limit of meridians, we're in case (2) and are done. If we're in case (3) of Theorem <ref> and Corollary <ref>, we're in case (3) of the theorem and are done, unless the given interval bundle B ⟶ Y is over an annulus or Möbius band. But in that case, letting c be a boundary component of Y, we can consider the geodesic meridian μ on S homotopic to the m(c) constructed in Claim <ref>, using the compressing arc given by Corollary <ref>. This μ is disjoint from λ, so we're in case (1) of the theorem. Finally, suppose that the two ends of h are asymptotic on S, so that λ_- = λ_+. Let's separate further into the cases (i) and (ii) in Corollary <ref>. In case (i), using the notation of the corollary, the curve c ∪ h([-s,s]) is a meridian disjoint from λ. So, we're in case (1) of the theorem. In case (ii), let T be a neighborhood of h ∪ λ_± that is either a punctured torus or a pair of pants, depending on whether the two ends of h limit onto opposite sides of λ_±, or onto the same side. Because we're in case (ii), there is a meridian in T. Hence, one of the boundary components of T is a meridian, and is disjoint from λ, so we're done.

(1), (2) or (3) ⟹ Hausdorff limit. Suppose (1), (2) or (3) holds. We want to show λ is strongly commensurable to a Hausdorff limit of meridians. If λ is disjoint from a meridian, then we're done by Lemma <ref>. If a component of λ is an intrinsic limit of meridians, we're done by the first part of Corollary <ref>. In case (3) above, we're done by the second part of Corollary <ref>. § EXTENDING PARTIAL PSEUDO-ANOSOVS TO COMPRESSION BODIES Let M be a compression body with exterior boundary Σ. Let S ⊂ Σ be an essential subsurface such that ∂ S is incompressible. In this section, we prove: Suppose that f : Σ ⟶ Σ is a partial pseudo-Anosov supported on S. Then f has a power that extends to a nontrivial subcompressionbody of (M,S) if and only if the attracting lamination of f is a projective limit of meridians that lie in S. When S = Σ, this is a theorem of Biringer-Johnson-Minsky <cit.>.
The proof of Theorem <ref> is basically the same as their proof, but we need to go through it anyway, to note the places that parabolics appear, and to deal with the fact that we are looking at subcompression bodies of (M,S) rather than of M. Also, before starting on the bulk of the proof in <ref>, we isolate part of the argument into a separate purely topological subsection, <ref>. This separation of the argument into distinct topological and geometric parts makes it more understandable than the original version, we think. §.§ Dynamics on the space of marked compression bodiesLet Σ be a closed, orientable surface, and let S⊂Σ be an essential subsurface. The space of marked S-compression bodies is defined to be CBod(S) = {(C,h:Σ→∂_+ C) }/∼,where here C is a compression body, h is a homeomorphism, and * the multicurve h(∂ S)⊂∂_+C is incompressible, * there is a multicurve on S such that h(S) is a cut system for C, i.e. h(S) bounds a collection of disks that cut C into balls and trivial interval bundles over the interior boundary components.We declare (C_i,h_i:S →∂_+ C_i), i=1,2 to be equivalent (written ∼ as above) if there is a homeomorphism ϕ : C_1→ C_2 that respects the boundary markings: that is, ϕ∘ h_1 and h_2 are homotopic maps S →∂_+ C_2.We write (C_1,h_1) ⊂ (C_2,h_2) if there is an embedding ϕ: (C_1,∂_+ C_1)(C_2,∂_+ C_2) that respects the boundary markings. This gives a partial ordering on CBod(S).We often identify Σ with ∂_+ C instead of specifying the boundary marking, and simply write C for an element of CBod(S).So CBod(S) is the set of all compression bodies with exterior boundary Σ that one obtains by compressing curves in S (without compressing boundary curves) up to the obvious equivalence.A marked S-compression body (C,h) has a disk set 𝒟(C) ⊂𝒞(S), where a simple closed curve γ∈𝒞(S) lies in the disc set if h(γ) is a meridian in C. In fact, the disk set 𝒟(C) determines (C,h) up to equivalence, say by an argument similar to the last paragraph of the proof of Fact <ref>. The set CBod(S) can then be identified with the `set of all disk sets' in 𝒞 (S).It then inherits a topology as a subset of the power set 𝒫( 𝒞 (S)), wherein D_n → D if and only if for every c∈𝒞(S), we have either c∈ D and c∈ D_n for all large n, or c∉D and c∉D_n for all large n. If C_n →C in CBod(S), then C ⊂ C_n for large n. Suppose that C is obtained by compressing a finite set Γ of disjoint simple closed curves on S.For large n, we have Γ⊂𝒟 (C_n), so 𝒞⊂ C_n.CBod(S) is compact. As 𝒫( 𝒞 (S)) is compact, we want to show that CBod(S) is closed. Suppose C_n is a sequence of marked compression bodies with disk setsD_n = 𝒟(C_n) ⊂𝒞(S),and that D_n → D ⊂𝒞(S). Let Γ be a maximal set of disjoint, pairwise non-homotopic elements of D. Compressing Γyields a marked compression body C. Since Γ is finite,Γ⊂ D_n for large n, so 𝒟 (C)⊂ D_n.Thus, 𝒟 (C)⊂ D.It therefore suffices to show D ⊂𝒟(C). Suppose this is not the case, and pick β∈ D∖𝒟(C) such that the intersection number of β and Γ is minimal. By maximality of Γ, this intersection number cannot be zero. Since β∈ D, if n is large we have β∈𝒟(C_n). By an outermost disk argument, if γ∈Γ is a component that intersects β, there is an arc c⊂γ with endpoints on β and interior disjoint from β, that is homotopic rel endpoints in C_n to the two arcs b_1,b_2⊂β into which β is cut by ∂ c. Passing to a subsequence, we can assume that c,b_1,b_2 are independent of n. Then c ∪ b_1 and c ∪ b_2 are both meridians in C_n for all large n, and hence lie in D. 
Since they intersect Γ fewer times than β does, they lie in 𝒟(C). But then β (which is a band sum of the two curves) also lies in 𝒟(C), a contradiction. Let f : Σ ⟶ Σ be a homeomorphism with f = id on Σ ∖ S. Then f acts on CBod(S) by f · (C,h) = (C, h ∘ f^-1). When we regard marked S-compression bodies as compression bodies with exterior boundary equal to Σ, we'll just write C and f(C) for a marked compression body and its image. Note that f(C) = C if and only if f extends to a homeomorphism of C. Fixing M ∈ CBod(S) and f as above, let 𝒜 be the set of accumulation points in CBod(S) of the f-orbit of M, and let 𝒜_min = {C ∈ 𝒜 | ∄ D ∈ 𝒜 such that D ⊊ C} be the subset consisting of all minimal elements of 𝒜. 𝒜_min is a finite f-orbit that contains a single element C_f such that C_f ⊂ M. Moreover, C_f is the unique maximal element of CBod(S) such that C_f ⊂ M and a power of f extends to C_f. This result was proved in <cit.> when S = Σ. Our proof follows the same general lines, but is topological instead of hyperbolic geometric. We proceed with a series of lemmas. 𝒜_min is nonempty, finite and f-invariant. 𝒜 is nonempty, since CBod(S) is compact. This implies that 𝒜_min is nonempty, for example since the `height' of a compression body is nonnegative and decreases under strict containment, see Section 3 of <cit.>. By Claim <ref>, 𝒜_min is discrete. But 𝒜_min is closed in 𝒜, which is closed in CBod(S), which is compact. So, 𝒜_min is compact, and must be finite. Finally, 𝒜_min is f-invariant since 𝒜 is and the f-action respects containment. Suppose that for i=1,2 we have C_i ∈ CBod(S) with C_i ⊂ M, and that f^i(C_1) = C_1 while f^j(C_2) = C_2. Then there is an element C ∈ 𝒜_min such that C_1, C_2 ⊂ C ⊂ M. Every element of 𝒜 is the image under a power of f of an accumulation point of the sequence f^nij(M), so since 𝒜_min is f-invariant there is some C' ∈ 𝒜_min to which a subsequence of f^nij(M) limits. As C_1, C_2 ⊂ f^nij(M) for all n, we must have C_1, C_2 ⊂ C'. By Claim <ref>, there is some n such that f^nij(M) ⊃ C'. Then C := f^-nij(C') ∈ 𝒜_min is contained in M and must contain C_1, C_2 as well. There is a unique element C_f ∈ 𝒜_min that is contained in M, and 𝒜_min is an f-orbit. Applying the previous lemma to two copies of the trivial compression body Σ × I shows that 𝒜_min has an element that is contained in M. So, suppose that C, D ∈ 𝒜_min are both contained in M. By the previous lemma, there is another element of 𝒜_min that contains them both, which contradicts the minimality assumption unless C = D. Therefore, there is a unique element C_f ∈ 𝒜_min that is contained in M. To show that 𝒜_min is an f-orbit, suppose that C ∈ 𝒜_min. Since C is an accumulation point, there is some n such that f^n(M) ⊃ C. Then f^-n(C) ⊂ M, implying that f^-n(C) = C_f by uniqueness. This finishes the proof of Theorem <ref>, since Lemma <ref> shows that a power of f extends to C_f and Lemmas <ref> and <ref> imply that any subcompression body of M to which a power of f extends is contained in C_f. §.§ The proof of Theorem <ref> Let S ⊂ Σ = ∂_+ M be a compact essential subsurface, with ∂ S incompressible in M, and let f : Σ ⟶ Σ be a pseudo-Anosov map on S. The `only if' direction of the theorem is trivial. Namely, suppose that some power f^k extends to a nontrivial subcompressionbody C of (M,S). Pick a meridian m ⊂ S for C. Then (f^k(m)) is a sequence of meridians in M that lie in S, and converge to the attracting lamination of f. For the `if' direction of the theorem, assume that no nonzero power of f extends to a nontrivial subcompression body of (M,S).
We must show that the attracting lamination λ^+ is not in the limit set Λ(S,M). The argument is similar to the proof of the main theorem in <cit.>. As such, we will sketch the argument in places and refer to <cit.> for details. Consider the sequence M_n = f^-n(M) of marked S-compression bodies, where we consider the exterior boundary of each M_n as identified with the surface Σ. Fix a base point [X] ∈ 𝒯(Σ) and give the interior of each M_n a geometrically finite hyperbolic metric such that the end adjacent to the exterior boundary Σ = ∂_+ M is convex cocompact, and when its conformal boundary is identified with Σ, the conformal structure is [X]. Let ρ_n : π_1 Σ ⟶ PSL_2(ℂ), N_n := ℍ^3/ρ_n(π_1 Σ) be a representation (unique up to conjugacy) uniformizing the interior of M_n and compatible with our markings, in the sense that ρ_n is the composition of the map π_1 Σ ⟶ π_1 M_n ≅ π_1 N_n induced by inclusion and a faithful uniformizing representation of π_1 N_n. Note that the kernel of ρ_n is ker(ρ_n) = f^-n_*(ker(π_1 Σ → π_1 M)). By Theorem <ref> and the assumption that no power of f extends to a nontrivial subcompression body of M, the only minimal accumulation point of (M_n) in CBod(S) is the trivial compression body. So in particular, we can choose a subsequence (M_n_j) that converges to the trivial compression body. By the compactness of generalized Bers slices (see <cit.>), we may assume after appropriate conjugations and passing to a further subsequence that (ρ_n_j) converges algebraically to a representation ρ_∞ : π_1 Σ ⟶ PSL_2(ℂ), N_∞ := ℍ^3/ρ_∞(π_1 Σ), and that N_∞ can be identified with the interior of a compression body M_∞ with exterior boundary Σ in such a way that the end of N_∞ adjacent to Σ is convex cocompact with conformal structure [X] and the representation ρ_∞ is compatible with the marking in the same way as before. The disk set 𝒟(S,M_∞) consists of all simple closed curves on S represented by elements γ ∈ π_1 Σ with ρ_∞(γ) = 1. By Chuckrow's Theorem (see <cit.>), ρ_∞(γ) = 1 if and only if ρ_n_j(γ) = 1 for all sufficiently large j. Since (M_n_j) converges to the trivial compression body in the topology of CBod(S), it follows that the surface S ⊂ Σ = ∂_+ M_∞ is incompressible in M_∞. The repelling lamination λ^- of f is unrealizable in N_∞. The proof is almost identical to that of <cit.>, so we offer a sketch and we refer the reader to their paper for details. Fixing an M-meridian γ ⊂ S, the sequence f^-n_j(γ) converges in the Hausdorff topology to a lamination λ_M that is the union of λ^- and finitely many leaves spiraling onto it. It suffices by <cit.> to show that λ_M is unrealizable. So, hoping for a contradiction, assume λ_M is realizable; then λ_M is carried by a train track τ that maps nearly straightly into N_∞ (see <cit.>). By algebraic convergence, τ also maps nearly straightly into N_n_j when j is large. Since f^-n_j(γ) → λ_M, for large j the curve f^-n_j(γ) is carried by τ. This implies that f^-n_j(γ) is geodesically realizable in N_n_j for large j, contradicting the fact that it is homotopically trivial. By work of Thurston <cit.>, the π_1-injective surface S ⊂ ∂_+ M_∞ is isotopic into a degenerate end of N_∞ with ending lamination λ^-. In particular, the peripheral curves of S represent cusps in N_∞ and every non-peripheral curve on S has hyperbolic type in N_∞. Any pair of disjoint non-peripheral simple closed curves on S can then be realized geodesically by a pleated surface S ⟶ N_∞ in the given homotopy class, and Thurston's compactness of pleated surfaces (see <cit.>) implies the following.
Let α ⊂ S be a simple closed curve. Then for every k, there is some K such that for any other simple closed curve β in S, we have d_𝒞(S)(α, β) ≤ k ⟹ d_N_∞(α_∞, β_∞) ≤ K, where α_∞ and β_∞ are the geodesics in N_∞ in the homotopy classes of α and β. Hoping for a contradiction, suppose now that λ^+ ∈ Λ(S, M). When regarded as an element of ∂_∞ 𝒞(S), the support of λ^+ is then an accumulation point of 𝒟(S,M) ⊂ 𝒞(S). If α ∈ 𝒞(S), then for n = 1,2,… the sequence (f^n(α)) is a quasi-geodesic path that limits to λ^+ ∈ ∂_∞ 𝒞(S), see <cit.>. Since 𝒟(S,M) is a quasi-convex subset (see Theorem <ref>, due to Masur-Minsky), there is a constant C and for each n a meridian γ_n ∈ 𝒟(S,M) with d_𝒞(S)(f^n(α), γ_n) ≤ C. Translating the points f^n(α) and γ_n by f^-n, this becomes: d_𝒞(S)(α, f^-n(𝒟(S,M))) ≤ C. By Lemma <ref>, an element γ_n_j ∈ f^-n_j(𝒟(S,M)) at distance at most C from α in 𝒞(S) can be geodesically realized in some fixed compact subset A ⊂ N_∞. Algebraic convergence implies that for sufficiently large j this geodesic can be pulled back and tightened to a geodesic in N_n_j. But by construction, γ_n_j is a meridian in M_n_j, so it cannot possibly be realized geodesically in N_n_j, which is a contradiction. § EXTENDING REDUCIBLE MAPS TO COMPRESSION BODIES We present here a generalization of <cit.> that characterizes which (possibly reducible) mapping classes of the boundary of a 3-manifold M have powers that extend to sub-compression bodies. In what follows, let M be a compression body with exterior boundary ∂_+ M. Let S ⊂ ∂_+ M be an essential subsurface such that ∂ S is incompressible. Let f : ∂_+ M ⟶ ∂_+ M be a homeomorphism that is `supported' in S, meaning that f = id on ∂_+ M ∖ S. We say that f is pure if there are disjoint, compact, essential f-invariant subsurfaces S_i ⊂ S, i = 1,…, n, such that f = id on S_id := S ∖ ∪_i S_i, and where for each i, if we set f_i := f|_S_i, then either * S_i is an annulus and f_i is a power of a Dehn twist, or * f_i is a pseudo-Anosov map on S_i. It follows from the Nielsen-Thurston classification, see <cit.>, that every f has a power that is isotopic to a pure homeomorphism. When f is pure, with associated restrictions f_i : S_i ⟶ S_i as above, we define a geodesic lamination λ = ∪_i λ_i on S, where λ_i ⊂ S_i, as follows. If f_i is pseudo-Anosov, we let λ_i be the support of the attracting lamination of f_i. If f_i is a Dehn twist, we let λ_i be the core curve of the annulus S_i. So defined, λ is called the attracting lamination of the pure homeomorphism f. Suppose that S ⊂ ∂_+ M is an essential subsurface such that the multicurve ∂ S is incompressible. Let f : ∂_+ M ⟶ ∂_+ M be a pure homeomorphism supported in S. Then f has a power that extends to a nontrivial subcompressionbody of (M,S) if and only if either: * there is a meridian in S_id, * for some i, the map f_i has a power that extends to a nontrivial subcompressionbody of (M,S_i), or * there are (possibly equal) indices i,j such that S_i, S_j bound an essential interval bundle B in M, such that some power of f|_S_i ∪ S_j extends to B, and there is a compression arc α for B whose interior lies in S_id. Recall from <ref> that a `subcompressionbody' of (M,S) is a compression body obtained from ∂_+ M by compressing some meridian multicurve in S. Theorem <ref> says that (2) is equivalent to asking that the attracting lamination (say) of f is a projective limit of meridians in S_i. In (3), the condition that a power of f|_S_i ∪ S_j extends to B is easier to check.
Indeed, if σ : ∂_H B ⟶∂_HB is the canonical involution, as defined in <ref>, then by Fact <ref> we have that f^k|_∂_HB extends to B exactly when σ∘ f_i^k is isotopic to f_j^k. When B is a twisted interval bundle, f_i,f_j are both pseudo-Anosovs on ∂_HB and this means that as mapping classes we have f_j = g ∘ f_i for some finite order g commuting with both f_i,f_j, see e.g. McCarthy's thesis <cit.>. When B is a trivial interval bundle, σ indentifies S_i and S_j, and we have similarly that f_j=g ∘σ(f_i) for some g commuting with both. Let's start with the `if' direction, since that's easier. If there is a meridian in S_id, then f extends to the compression body obtained by compressing that meridian. Suppose (2) holds, so that some power f_i^k extends to a nontrivial subcompression body C of (M,S_i). Then f also extends to C, since all the S_j, where j≠ i, bound trivial interval bundles with subsurfaces of the interior boundary of C. So we're done. The only interesting case is if (3) holds, so that some S_i,S_j bound an essential interval bundle B in M such that some power of f^k|_S_i ∪ S_j extends to B, andthere is a compression arc α for B whose interior lies in S_id. Here, let S' ⊂ S be the smallest essential subsurface containing S_i,S_j and α; so, S' is obtained from a regular neighborhood of the union of these three subsets of S by capping off any inessential boundary components with discs. Let C be the characteristic compression body of the pair (M,S'), as defined in Fact <ref>. We claim that f^k extends to C. To see this, note that we can construct C as follows. For concreteness, first assume that the boundary components of S_i,S_j that contain the endpoints of α bound an annulus A ⊃α on S. Then S'=S_i∪ S_j ∪ A, the annulus A is parallel in M to component A'⊂Fr(B) that is an annulus with the same boundary curves as A, and C is the union of the interval bundle B, the solid torus bounded by A,A', and a trivial interval bundle over ∂_+ M ∖ S'. We can then extend f^k to C by letting it be the given extension of f^k|_S_i∪ S_j on B, the identity on the solid torus, and the obvious fiber preserving extension of f^k |_∂_+ M ∖ S' to the adjacent interval bundle. The case that the boundary components of S_i,S_j that contain the endpoints of α do not bound an annulus on S is similar, except that instead of the solid torus above we take a thickened disk bounded by a rectangular neighborhood of α on S∖ (S_i∪ S_j), and a rectangular neighborhood of the homotopic arc on the frontier of B. We now work on the `only if' direction. Passing to a power, suppose that f extends to a nontrivial subcompressionbody C of (M,S). We may assume that there is no proper, f-invariant essential subsurface S' ⊂ S such that f|_S' extends to a nontrivial subcompression body of (M,S'). If there were such a subsurface S', we could replace S by a minimal such S', therefore reducing the argument to the minimal case we are assuming we are in above.If f=id on S, the fact that there is a nontrivial subcompressionbody of (M,S) means there is a meridian in S=S_id, so we're in case (1) and are done. This case may seems silly, but observe that if f is some complicated pure homeomorphism where there's a meridian in S_id, the associated `minimal' case that we pass to in the previous paragraph is where S is an annular neighborhood of some such meridian, and f=id on S. Assume from now on that f is not the identity map of S. We claim that every meridian m∈𝒟(C,S) intersects every component of λ. 
Indeed, suppose some λ_i is disjoint from some such m and let S' be the component of S ∖ S_i containing m. Since f extends to C and S' ⊂∂ C_+ is f^k-invariant, f extends to the characteristic subcompressionbody C' of the pair (C,S'), defined by starting with ∂_+ M and compressing all meridians of C that lie in S', see Fact <ref>. Since m is a meridian in C', this C' is nontrivial, which contradicts the mimimality assumption in the first paragraph. Pick a meridian m∈𝒟(C,S). Since m intersects all components of λ,the sequence of meridians m_i:=f^i(m) Hausdorff converges to a lamination λ' strongly commensurable to λ. Applying Theorem <ref> to the pair (C,S) and using that all meridians in C intersect all components of λ, we have that either: * some component λ_a is an intrinsic limit of meridians lying in S(λ_a), in which case Proposition <ref> (applied to f_a : S_a ⟶ S_a) says we're in case (2), or * there are indices a,b such that S_a,S_b bound an essential interval bundle B ⊂ C, where λ_a,λ_b are essentially homotopic in B, and where there is a compression arc α⊂ S for B that is disjoint from λ, and hence can be isotoped so that its interior lies in S_id. Let's assume we're in the last case, since otherwise we're done. We want to show that some power off_a ∪ f_b : ∂_H B ⟶∂_HB extends to B. First, suppose that B is a twisted interval bundle, so that S_a=S_b,f_a=f_b,λ_a=λ_b. Using just the index a from now on, if σ is the canonical involution of B, then Fact <ref> implies that σ(λ_a) is isotopic to λ_a. Let A ⊂ T(S) be the axis of f_a on the Teichmüller space T(S). By Theorem 12.1 of <cit.> and Theorem 2 of <cit.>, we have that A,σ(A) are asymptotic, so as they are both pseudo-Anosov axes they must be equal by discreteness of the action of the mapping class group. Since σ has finite order, it then fixes A pointwise. Now the group Γ= ⟨σ,f_a⟩⊂(∂_HB) is isomorphic to the direct product of a finite group fixing A pointwise and a cyclic group of pseudo-Anosovs, so for some positive k we have σ∘ f_a^k = f_a^k in (∂_H B). By Theorem 3 of <cit.> we may isotope f_a^k,σ so that they commute, while preserving that σ^2=id; we can then alter the bundle map π : B ⟶ Y so that the new σ is still the canonical involution. It follows that f_a^k is a lift to ∂_HB of a pseudo-Anosov map g: Y ⟶ Y, and hence f_a^k extends to B as desired, see Fact <ref>. Next, assume B is a trivial interval bundle, with canonical involution σ that switches S_a,S_b. As in the previous paragraph, we have that σ(λ_b) is isotopic to λ_a, so Γ = ⟨ f_a, σ∘ f_b ∘σ^-1⟩⊂(S_a) is a direct product Γ = F ×⟨ϕ⟩ of a finite group F and a cyclic group generated by a pseudo-Anosov ϕ, where if we quotient by F then f_a and σ∘ f_b ∘σ^-1 both project to positive powers of ϕ. It suffices to show that they project to the same positive power of ϕ, for then we are done by the same argument as in the previous paragraph. For this, recall that all meridians of C intersect S_a,S_b, so these surfaces are `holes' for the disk set of C, as discussed in <cit.>. So with m_i=f^i(m) the sequence of meridians in C constructed above, Lemma 12.20 of <cit.> says that for each i, the distance in the arc complex of S_a betweenm_i ∩ S_a = f_a^i(m ∩ S_a) andσ(m_i∩ S_b) = (σ∘ f_b ∘σ^-1)^i σ(m∩ S_b)is at most 6. However, if f_a and σ∘ f_b ∘σ^-1 project to different positive powers of ϕ, their stable translation lengths on the arc complex of S_a are different, which is a contradiction. amsplain | http://arxiv.org/abs/2310.18412v1 | {
"authors": [
"Ian Biringer",
"Cyril Lecuire"
],
"categories": [
"math.GT"
],
"primary_category": "math.GT",
"published": "20231027180827",
"title": "Homoclinic leaves, Hausdorff limits and homeomorphisms"
} |
Code-mixing is a well-studied linguistic phenomenon in which two or more languages are mixed in text or speech. Several works have been conducted on building datasets and performing downstream NLP tasks on code-mixed data. Although it is not uncommon to observe code-mixing of three or more languages, most available datasets in this domain contain code-mixed data from only two languages. In this paper, we introduce OffMix-3L, a novel offensive language identification dataset containing code-mixed data from three different languages. We experiment with several models on this dataset and observe that BanglishBERT outperforms other transformer-based models and GPT-3.5. *These two authors contributed equally to this work. WARNING: This paper contains examples that are offensive in nature. § INTRODUCTION Code-mixing and code-switching are common linguistic phenomena observed in both speech and text. While the two terms are often used interchangeably, code-mixing is defined as the use of words or morphemes from multiple languages within a single utterance, sentence, or discourse, whereas code-switching refers to the deliberate alternation between multiple languages within the same context <cit.>. The first case is often spontaneous, while the second case is purposeful. However, both are widely observed in bilingual and multilingual communities. As described in <cit.>, several social, linguistic, and cognitive factors are behind these two phenomena. Socially, this often serves as a sign of group identity that allows individuals to navigate multiple social and cultural affiliations. Linguistically, it is common for bilingual speakers to be unable to find a word for a specific concept in one language and thus to use a word from another language to aid communication. Additionally, there are several cases, even in monolingual communities, when code-mixing might be the most convenient way to express a concept, as in the case of English loan words such as feedback used in various languages. Most commonly, code-mixing is a bilingual phenomenon. <cit.>, for example, estimates that by the year 2035, over half of children enrolled in kindergarten in California will have grown up speaking a language other than English. Another study conducted by <cit.> shows that bilingualism is a common practice in European countries such as Germany and Spain. However, in cosmopolitan cities and areas like New York, London, Singapore, and others, code-mixing with three or more languages is fairly common. This is also observed in countries like Luxembourg and regions such as West Bengal and South-East India, where more than two languages are commonly used on a daily basis. Several papers have presented code-mixed datasets for various NLP tasks <cit.>. However, most of these datasets are bilingual, leaving the processing of code-mixing in three or more languages largely unexplored. In this paper, we present a Bangla-Hindi-English dataset annotated for offensive language identification. To the best of our knowledge, this is one of the first datasets to contain code-mixing between more than two languages. The main contributions of this paper are as follows: * We introduce OffMix-3L, a novel three-language code-mixed test dataset in Bangla-Hindi-English for offensive language identification.
OffMix-3L contains 1,001 instances annotated by speakers of the three languages. We make OffMix-3L freely available to the community.[https://github.com/LanguageTechnologyLab/OffMix-3L] * We provide a comprehensive evaluation of several monolingual, bilingual, and multilingual models on OffMix-3L. We present OffMix-3L exclusively as a test set due to the unique and specialized nature of the task. The size of the dataset, while limiting for training purposes, offers a high-quality testing environment with gold-standard labels that will serve as a benchmark in this domain. Given the scarcity of similar datasets and the challenges associated with data collection, OffMix-3L provides an important resource for the rigorous evaluation of offensive language identification models, filling a critical gap in multi-level code-mixing research. § RELATED WORK There have been several studies describing Bangla-English <cit.>, Hindi-English <cit.>, and Bangla-Hindi <cit.> code-mixing and code-switching separately. Code-mixing between these three languages has also been studied separately in NLP. There have been few studies conducted on offensive language identification for Bangla-English code-mixed data. The work by <cit.> focused on detecting Bangla-English code-mixed and transliterated offensive comments on Facebook. Another Bangla-English dataset was gathered by <cit.>, who collected 2,200 instances. Comparatively more work has been carried out for Hindi-English code-mixing. <cit.> uses fastText <cit.> to represent 10,000 instances collected from different sources. Other offensive language datasets collected from Facebook and Twitter were introduced by <cit.>. <cit.> proposes the Fused Attention-based Network (FA-Net), which introduces a fusion of attention mechanisms with collective and mutual learning between local and sequential features for Hindi-English offensive language and hate speech classification. <cit.> uses character-level embeddings, a GRU, and an attention layer for offensive language identification in Hindi-English code-mixed text. To the best of our knowledge, no existing work focuses specifically on Hindi-Bangla code-mixing. However, some studies have focused on code-mixing across multiple Indian languages, including Bangla and Hindi. The work by <cit.> focuses on offensive language identification in Dravidian languages. A few similar works include <cit.>. § THE OFFMIX-3L DATASET We choose a controlled data collection method, asking the volunteers to freely contribute data in Bangla, English, and Hindi. This decision stems from several challenges of extracting such specific code-mixed data from social media and other online platforms. Our approach ensures data quality and sidesteps the ethical concerns associated with using publicly available online data. Such datasets are often used when it is difficult to mine them from existing corpora.
As examples, for fine-tuning LLMs on instructions and conversations, semi-natural datasets like <cit.> and <cit.> have become popular. Data Collection A group of 10 undergraduate students fluent in the three languages was asked to prepare 250 to 300 social media posts each. They were allowed to use any language, including Bangla, English, and Hindi, to prepare posts on several daily topics like politics, sports, education, social media rumors, etc. They were also asked to switch languages if and wherever they felt comfortable doing so. The inclusion of emojis, hashtags, and transliteration was also encouraged. The students had the flexibility to prepare the data as naturally as possible. Upon completion of this stage, we gathered 1,734 samples that contained at least one word or sub-word from each of the three languages, using langdetect <cit.>, an open-source Python tool for language identification. Data Annotation We annotate the dataset in two steps. First, we recruited three students from social science, computer science, and linguistics, fluent in the three languages, to serve as annotators. They annotated all 1,734 samples with one of two labels (Non-Offensive or Offensive), with a raw agreement of 63.7%. We then took the 1,106 instances on which all three annotators agreed on the labels and used them in a second step. To further ensure high-quality annotation, we recruited a second group of annotators consisting of two NLP researchers fluent in the three languages. After their annotation, we calculated a raw agreement of 91% <cit.> and a Cohen's kappa score of 0.82. After the two stages, we keep only the instances where both annotators agree, ending up with a total of 1,001 instances. The label distribution is shown in Table <ref>. Dataset Statistics A detailed description of the dataset statistics is provided in Table <ref>. Since the dataset was generated by people whose first language is Bangla, we observe that the majority of tokens in the dataset are in Bangla. There are several Other tokens in the dataset that are not from Bangla, English, or Hindi. The Other tokens primarily contain transliterated words as well as emojis and hashtags. There are also several misspelled words that have been classified as Other tokens. Synthetic Train and Development Set We present OffMix-3L as a test dataset, and we build a synthetic train and development set that contains code-mixing for Bangla, English, and Hindi. We use two English training datasets annotated with the same labels as OffMix-3L, namely OLID <cit.> and SOLID <cit.>. We randomly select 100,000 data instances, carefully choosing an equal number of Non-Offensive and Offensive instances. We then use the Random Code-mixing Algorithm <cit.> and r-CM <cit.> to generate the synthetic code-mixed dataset.
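To make the filtering step in the Data Collection paragraph concrete, the sketch below shows one way the three-language check could be implemented with langdetect. This is an illustration rather than the authors' released code: the per-token use of langdetect and the helper names (token_languages, is_three_language_mixed) are our own assumptions, and token-level language identification is noisy for very short tokens, so a real pipeline would likely add heuristics for emojis, hashtags, and transliterated words.

# Hypothetical sketch of the three-language filter described above.
# Assumes the langdetect package is installed and posts are whitespace-tokenized.
from langdetect import detect, DetectorFactory

DetectorFactory.seed = 0  # make langdetect deterministic across runs

TARGET_LANGS = {"bn", "en", "hi"}  # ISO 639-1 codes for Bangla, English, Hindi

def token_languages(post: str) -> set:
    """Return the set of language codes detected among the tokens of a post."""
    detected = set()
    for token in post.split():
        try:
            detected.add(detect(token))
        except Exception:
            # Emojis, hashtags, or very short tokens may not be detectable.
            continue
    return detected

def is_three_language_mixed(post: str) -> bool:
    """Keep a post only if at least one token from each target language is found."""
    return TARGET_LANGS.issubset(token_languages(post))

posts = ["..."]  # the candidate pool contributed by the students would go here
kept = [p for p in posts if is_three_language_mixed(p)]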
Multilingual Models We use mBERT <cit.> and XLM-R <cit.> as multilingual models, which are trained on 104 and 100 languages, respectively, including Bangla-English-Hindi. We also use IndicBERT <cit.> and MuRIL <cit.>, which cover 12 and 17 Indian languages, respectively, including Bangla-English-Hindi. We also perform hyper-parameter tuning for all the models to prevent overfitting. Prompting We use prompting with the GPT-3.5-turbo model <cit.> from OpenAI for this task. We use the API for zero-shot prompting (see Figure <ref>) and ask the model to label the test set. Additionally, we run the same experiments separately on the synthetic and natural datasets, splitting both in a 60-20-20 way for training, evaluation, and testing purposes. § RESULTS In this experiment, synthetic data is used as the training set and natural data is used as the test set. The F1 scores of the monolingual models range from 0.43 to 0.66, with BERT performing the best. mBERT is the best of all the multilingual models with an F1 score of 0.63. In addition, zero-shot prompting with GPT-3.5-turbo provides a weighted F1 score of 0.57. The best task fine-tuned model is HateBERT with an F1 score of 0.60. Among all the models, BanglishBERT scores 0.68, which is the best achieved F1 score. These results are available in Table <ref>. We perform the same experiment using synthetic data for training and testing. We present results in Table <ref>. Here, mBERT and XLM-R, with 0.88 F1 scores, are the best-performing models. § ERROR ANALYSIS We observe Other tokens in almost 39% of the whole dataset, as shown in Table <ref>. These tokens occur due to transliteration, which poses a challenge for most of the models since not all of them are pre-trained on transliterated tokens. BanglishBERT did well since it recognizes both Bangla and English tokens. However, the total number of Hindi-English tokens is smaller than that of Bangla-English tokens, justifying HingBERT's inferior performance compared to BanglishBERT (see Table <ref>). Misspelled words and typos are also observed in the dataset; these are, for the most part, unknown tokens for the models, making the task even more difficult. Some examples that are classified wrongly by all the models are available in Appendix <ref>. § CONCLUSION AND FUTURE WORK In this paper, we presented OffMix-3L, a Bangla-English-Hindi code-mixed offensive language identification dataset containing 1,001 instances. We also created 100,000 synthetic instances in the same three languages for training. We evaluated various monolingual, bilingual, and multilingual models on these two datasets. Our results show that when training on synthetic data and testing on OffMix-3L, BanglishBERT performs the best. When using synthetic data for both training and testing, multilingual models such as mBERT and XLM-R perform well. In the future, we would like to expand OffMix-3L so that it can serve as both training and testing data. Additionally, we are working on pre-training Bangla-English-Hindi trilingual code-mixing models for offensive language identification. § ACKNOWLEDGMENTS We thank the annotators who helped us with the annotation of OffMix-3L.
We further thank the anonymous workshop reviewers for their insightful feedback. Antonios Anastasopoulos is generously supported by NSF award IIS-2125466. § LIMITATIONS Although most datasets for such downstream tasks are scraped from real-world social media posts, in our case the data instances are generated in a semi-natural manner, meaning that they were generated by people but not scraped from social media directly. This was done due to the complexity of extracting content that contains this specific three-language code-mixing. Also, the dataset is comparatively small in size, since it is costly to generate data with a specific set of people who are fluent in all three target languages. § ETHICS STATEMENT The dataset introduced in this paper, which centers on the analysis of offensive language in Bangla-English-Hindi code-mixed text, adheres to the ACL Ethics Policy (https://www.aclweb.org/portal/content/acl-code-ethics) and seeks to make a valuable contribution to the realm of online safety. OffMix-3L serves as an important resource for the moderation of online content, which will make it easier to create safer digital environments. Moreover, the contributors and annotators of the dataset were paid respectable remuneration and attended two sessions with a psychologist, before starting and after completing the work, to ensure their mental health was not compromised throughout the course of this dataset preparation. § EXAMPLES OF MISCLASSIFIED INSTANCES [Figure: examples of instances misclassified by all models.]
"authors": [
"Dhiman Goswami",
"Md Nishat Raihan",
"Antara Mahmud",
"Antonios Anastasopoulos",
"Marcos Zampieri"
],
"categories": [
"cs.CL",
"cs.AI"
],
"primary_category": "cs.CL",
"published": "20231027095935",
"title": "OffMix-3L: A Novel Code-Mixed Dataset in Bangla-English-Hindi for Offensive Language Identification"
} |
"authors": [
"Gabriel J. S. Bliard"
],
"categories": [
"hep-th",
"hep-lat"
],
"primary_category": "hep-th",
"published": "20231027133506",
"title": "Perturbative and non-perturbative analysis of defect correlators in AdS/CFT"
} |
Unveil Sleep Spindles with Concentration of Frequency and Time
Department of Biomedical Engineering, Duke University, Durham, NC, 27708 USA ([email protected])
Courant Institute of Mathematical Sciences, New York University, New York, NY, 10012 USA ([email protected])
Hau-Tieng Wu
January 14, 2024
Objective: Sleep spindles contain crucial brain dynamics information. We introduce the novel non-linear time-frequency analysis tool 'Concentration of Frequency and Time' (ConceFT) to create an interpretable automated algorithm for sleep spindle annotation in EEG data and to measure spindle instantaneous frequencies (IFs). Methods: ConceFT effectively reduces stochastic EEG influence, enhancing spindle visibility in the time-frequency representation. Our automated spindle detection algorithm, ConceFT-Spindle (ConceFT-S), is compared to A7 (non-deep learning) and SUMO (deep learning) using the Dream and MASS benchmark databases. We also quantify spindle IF dynamics. Results: ConceFT-S achieves F1 scores of 0.749 in Dream and 0.786 in MASS, which are equivalent to or surpass those of A7 and SUMO with statistical significance. We reveal that spindle IF is generally nonlinear. Conclusion: ConceFT offers an accurate, interpretable EEG-based sleep spindle detection algorithm and enables spindle IF quantification. § INTRODUCTION Sleep spindles are brief bursts of activity within the sigma frequency range (approximately 11-16 Hz) of the electroencephalogram (EEG) signal, with durations ranging from 0.5 to 2 seconds, as noted by Iber et al. in 2007 <cit.>. These spindles are physiologically generated by the thalamic reticular nucleus in coordination with specific thalamic nuclei, modulated by corticothalamic and thalamocortical connections, and manifest as spindle-shaped patterns in EEG readings. They are distinctive features of the N2 stage, a non-rapid eye movement (NREM) sleep phase signifying a transitional state between light and deep sleep. Understanding sleep spindles is crucial for unraveling the intricacies of sleep architecture, the processes governing sleep, memory consolidation, and broader cognitive functions <cit.>. These spindles also offer insights into healthy sleep patterns and neurological disorders. For example, reduced sleep spindle activity and coherence have been observed in schizophrenia patients <cit.>, suggesting impaired memory consolidation <cit.>. The grouping of spindle activity and fast brain oscillations by slow oscillations during slow-wave sleep represents an essential feature in the processing of memories during sleep <cit.>. Aging introduces changes in sleep spindles, potentially affecting their role in memory and sleep maintenance mechanisms <cit.>. In epilepsy patients, the highest density of sleep spindles often appears away from the epileptic focus <cit.>, and alterations in sleep spindle density are linked to neurodegenerative disorders such as narcolepsy <cit.>. Notably, spindles exhibit variable oscillatory frequencies, characterized by "instantaneous frequency" (IF) fluctuations (alternative names include time-varying frequency and internal frequency modulation). Linear chirp rates for sleep spindle frequency acceleration or deceleration were quantified using matching pursuit with Gabor chirplet dictionaries <cit.>. IF dynamics were further explored with the continuous wavelet transform (CWT) <cit.>, with an alternative visualization offered in <cit.>.
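To make the chirplet-based quantification mentioned above concrete, a Gabor chirplet atom with center time t_0, center frequency f_0, chirp rate c, and width σ can be written as below. This is a standard textbook parametrization rather than a formula quoted from the cited works, so the exact normalization there may differ.

% One common parametrization of a Gabor chirplet atom and its instantaneous frequency
g_{t_0,f_0,c,\sigma}(t) \;=\; \frac{1}{(\pi\sigma^2)^{1/4}}
\exp\!\Big(-\frac{(t-t_0)^2}{2\sigma^2}\Big)
\exp\!\Big(2\pi i \big[f_0 (t-t_0) + \tfrac{c}{2}(t-t_0)^2\big]\Big),
\qquad
\frac{1}{2\pi}\,\frac{d}{dt}\arg g_{t_0,f_0,c,\sigma}(t) \;=\; f_0 + c\,(t-t_0).

The IF of each atom is the linear function f_0 + c(t - t_0), so fitting a spindle with such atoms yields, by construction, a single linear chirp rate per spindle; this is the limitation revisited later when motivating a nonparametric treatment of spindle IF.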
The absence of sleep spindle deceleration is linked to sleep-related disorders like sleep apnea <cit.>. More negative chirp rates are found in children with autism compared to normal controls <cit.>. We refer readers to the state-of-the-art review of sleep spindle <cit.> and its role in sleep disorders <cit.> for more details. Conventional sleep spindle detection relies on manual EEG signal inspection, which is labor-intensive and prone to errors and inter-rater discrepancies among experts<cit.>. Consequently, there is a growing demand for reliable automated algorithms to streamline the process, reducing the burden on experts and improving spindle annotation consistency. Various algorithms have emerged over the years, encompassing time-frequency (TF) analysis <cit.>,matching pursuit <cit.>, signal decomposition <cit.>, bayesian algorithms <cit.>, decision trees <cit.>, and support vector machines <cit.>. Additionally, deep neural network (DNN) approaches have been introduced, such as the the DOSED algorithm based on the Convolutional Neural Network (CNN) framework <cit.>, the SpindleNet architecture that fuses a CNN and a Recurrent Neural Network (RNN) <cit.> that has potential for real-time applications, and the U-Net architecture repurposed for spindle detection <cit.>. Although DNN frameworks offer the potential to improve the precision of spindle detection, their inherent black-box nature restricts interpretability, and the demanding training phases required can pose practical application challenges. This paper focuses on the application of a recently developed nonlinear TF analysis algorithm known as Concentration of Frequency and Time (ConceFT) <cit.> to spindle research. This application involves the creation of an automated annotation algorithm and an investigation into internal frequency modulation. ConceFT was designed to produce a precise and robust TFR for noisy time series data, improving the quantification of dynamics <cit.>.ConceFT consists of two components: a nonlinear version of the commonly used short-time Fourier transform (STFT) or CWT, known as the synchrosqueezing transform (SST) <cit.>, and a nonlinear adaptation of the widely employed multitapering technique <cit.>. It is worth noting that the TFR generated by STFT or CWT tends to be blurred, even in noise-free signals, due to the uncertainty principle <cit.>, limiting our ability to measure oscillatory component dynamics. SST addresses this issue by incorporating the phase information hidden in STFT or CWT. Additionally, the nonlinear generalization of multitapering reduces the impact of noise. In our context, where spindles are considered deterministic oscillatory components affected by the stochastic nature of EEG signals, ConceFT is a suitable tool for exploring spindles. For visual examples of TFR determined by STFT, CWT, and ConceFT, please refer to Figure <ref> and <cit.>.Our main contribution lies in leveraging ConceFT to develop an accurate and interpretable automated sleep spindle detection algorithm called ConceFT-Spindle (ConceFT-S). In essence, we can perceive the Time-Frequency Representation (TFR) derived from ConceFT as a precise, time-varying power spectrum. This allows us to apply a bandpass filter concept to extract information relevant to sleep spindles. The concept behind ConceFT-S is straightforward: we calculate EEG energy within the sigma frequency band in the TFR generated by ConceFT and apply specific thresholding rules to identify the presence of spindles. 
ConceFT-S bears similarities to the approach described in <cit.>, which utilizes CWT. For a comprehensive understanding of ConceFT-S, see Figure <ref>. Our second contribution involves introducing a method to leverage ConceFT for enhanced quantification of spindle IF. In the existing literature, common approaches for studying spindle IF include matching pursuit <cit.> with linear chirplets and linear-type time-frequency analysis methods like STFT and CWT. It is worth noting that linear chirps and associated chirplets have linear IF, making them inherently parametric methods. Consequently, unless the redundant frame used in the matching pursuit is expanded to encompass more general patterns, the estimated IF remains linear. However, as shown in Figure <ref>, spindle IF is generally nonlinear. This observation raises the question of whether it is possible to achieve a more accurate quantification of IF. While expanding the redundant frame is theoretically feasible, to the best of our knowledge, it remains largely unexplored, and the associated computational complexity needs careful consideration. Given the nonparametric nature of spindle IF, an ideal tool is TF analysis, such as STFT and CWT, which are, however, limited by the uncertainty principle. We demonstrate that ConceFT offers a suitable approach to improve the quantification of spindle IF, revealing that spindle IF is typically nonlinear. In other words, the IF may not consistently accelerate or decelerate linearly over time. The paper is structured as follows. Section <ref> provides the mathematical foundation, which includes a phenomenological model of sleep spindle, a concise overview of time-frequency analysis, and the fundamental principles of ConceFT along with its numerical implementation. Section <ref> offers a comprehensive introduction to the proposed automated sleep spindle detection algorithm, ConceFT-S, and the quantification of spindle IF. In Section <ref>, we present the analysis results, demonstrating ConceFT-S' improved performance in spindle detection and an exploration of spindle IF using two benchmark databases. Sections <ref> and <ref> contain the discussion and conclusion of this paper, respectively. § MATHEMATICAL BACKGROUND We start with a phenomenological model of sleep spindle, followed by ConceFT and its numerical implementation. A quick summary of TF analysis is postponed to Section S-I in the Online Supplementary. §.§ Mathematical model for spindles Sleep spindles are distinctive burst-like 10-15 Hz sinusoidal cycles observed in sleeping mammals' EEG. They are named after their spindle-like waveform <cit.>. According to the American Academy of Sleep Medicine (AASM), human sleep spindles manifest on the cortical surface as distinct waves with an 11-16 Hz frequency, typically within the 12-14 Hz range. These waves last more than 0.5 seconds and are most prominent in central derivations. Notably, spindle frequency can change, exhibiting speed acceleration or deceleration <cit.>. See Figure <ref> for an illustration, which suggests that the acceleration or deceleration of spindle frequency is nonlinear. In short, sleep spindles are deterministic oscillations amid the stochastic EEG signal, combining defined characteristics with observed nonlinear frequency changes. In summary, sleep spindles contribute to the non-stationary nature of the EEG signal as follows: (F1) The frequency and magnitude of a spindle are time-varying. (F2) Spindles occur over short time intervals. (F3) Spindles coexist with the stochastic component of the EEG signal.
In light of the notable characteristics (F1)-(F3), instead of directly modeling the complicated underlying mechanism of sleep spindles, we introduce a phenomenological model aimed at capturing the essential features of spindles within the EEG signal. We model an EEG signal with sleep spindles asE(t) = s(t)+Φ(t),where s(t) is a deterministic oscillatory signal describing the spindle and Φ is a random process describing the stochastic part of the EEG signal. The spindle s fulfills the following assumptions. Fix a small constant ϵ>0.s(t) = ∑_l=1^L a_l(t)cos(2πϕ_l(t)) ,where L∈ℕ, and for each l=1,…,L, (C1) a_l(t)>0 is a bump-like C^1 function describing the envelope shape of the l-th spindle supported on an interval lasts for about 0.5-2 s, which we call the amplitude modulation (AM) ; (C2) ϕ_l(t) is a C^2 function that is strictly monotonically increasing denoting the phase function of the l-th spindle; (C3) ϕ_l'(t)>0 is the instantaneous frequency (IF) of the l-th spindle so that and |ϕ”_l(t)|≤ϵϕ'_l(t) for all t∈ I_l;The density of spindle is quantified by L divided by the epoch length under investigation. Over a 30 s epoch, L could be roughly 6 in adults, 4 in elders, and more up to 20 in teenagers <cit.>.Note that L, ϕ_l(t), ϕ_l'(t) and a_l(t) might be different from one spindle to another, and depends on age, gender, genetics, recording location, etc <cit.>. For example, the support of a_l is roughly 1-2 seconds in adults, a bit shorter in elders, and longer up to 4 s in infants. We do not assume any parametric form of ϕ'_l(t) besides constraining its time-varying speed in (C3), while in the literature, ϕ'_l is usually assumed to be linear <cit.>; that is, a linear chirp, or a cosine function in <cit.>. The stochastic part of the EEG signal Φ satisfies (C4) Φ(·) is a long range dependent locally stationary random process and approximately stationary over a 30 s epoch. The scientific consensus holds that the EEG signal is non-stationary and possesses long-range temporal dependencies <cit.>. However, we can reasonably assume that, within a short time frame, the stochastic part of the EEG is “relatively” stationary. This leads us to model Φ as a locally stationary random process <cit.>, which imparts a form of stationarity over an epoch, typically lasting 30 seconds. In summary, we depict the spindle as a deterministic oscillatory signal with preassigned frequency characteristics that coexists within the stochastic component of the EEG signal.§.§ A quick review of time-frequency analysis TF analysis tools can be broadly classified into three main categories: linear, bilinear, and nonlinear <cit.>. In general, these tools convert the input signal into a function defined on the TF domain or time-scale domain (or more generally time-frequency-chirp domain), called the TFR. Linear-type TF analysis involves dividing the signal into segments and computing the spectrum for each segment. The choice of segmentation method distinguishes different approaches. For example, the STFT or Gabor transform employs a fixed window for segmentation, while the CWT adapts the segments based on wavelet dilation <cit.>. Bilinear-type TF analysis quantifies oscillatory properties using cross-correlation perspectives and various smoothing techniques, encompassing methods like the Wigner-Ville distribution and the Cohen class <cit.>. Nonlinear-type TF analysis aims to represent signals in a data-driven manner, often modifying linear or bilinear TF analyses by incorporating phase information. 
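To make the model (<ref>) concrete before turning to specific methods, the following minimal Python sketch synthesizes an epoch that obeys (C1)-(C3), with a simple AR(1) process standing in for the locally stationary noise of (C4). The function names, the Hann envelope, the cosine-shaped IF profile, and the AR coefficient are illustrative assumptions, not the construction used for the figures in this paper.

import numpy as np

def make_spindle(fs=200.0, dur=1.0, f0=13.0, df=1.0):
    # one synthetic spindle following (C1)-(C3): bump-like AM envelope and a
    # slowly varying, nonlinear IF that drifts from f0+df down to f0-df
    t = np.arange(int(dur * fs)) / fs
    am = np.hanning(t.size)                          # bump-like amplitude modulation (C1)
    inst_freq = f0 + df * np.cos(np.pi * t / dur)    # nonlinear, slowly varying IF (C3)
    phase = np.cumsum(inst_freq) / fs                # phi(t), strictly increasing (C2)
    return am * np.cos(2 * np.pi * phase)

def make_epoch(fs=200.0, epoch=30.0, n_spindles=4, snr_db=0.0, seed=0):
    # 30 s "EEG" epoch: deterministic spindles s(t) plus a stochastic part Phi(t),
    # here an AR(1) surrogate for the locally stationary noise in (C4)
    rng = np.random.default_rng(seed)
    n = int(epoch * fs)
    s = np.zeros(n)
    for on in rng.choice(n - int(2 * fs), size=n_spindles, replace=False):
        burst = make_spindle(fs, dur=rng.uniform(0.5, 2.0), f0=rng.uniform(12.0, 14.0))
        s[on:on + burst.size] += burst
    e = rng.standard_normal(n)
    phi = np.zeros(n)
    for k in range(1, n):
        phi[k] = 0.95 * phi[k - 1] + e[k]
    phi *= np.linalg.norm(s) / (np.linalg.norm(phi) * 10 ** (snr_db / 20))
    return s + phi, s

Because the IF of each synthetic burst is known exactly, such an epoch is convenient for checking how faithfully a given TFR recovers a nonlinear IF.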
Over recent decades, various practical methods have emerged, such as the reassignment method (RM), the SST, empirical mode decomposition (EMD), and several variations. For a comprehensive overview of the field, readers can refer to a recent review in <cit.>.It is worth noting that the commonly employed matching pursuit method <cit.> can be viewed as a nonlinear-type TF analysis tool. In the matching pursuit technique, the signal is adaptively approximated using atoms within a pre-defined redundant frame. If needed, the signal can be subsequently transformed into a TFR by using the selected atoms.In general, linear-type TF analysis methods like the STFT and CWT are susceptible to the uncertainty principle <cit.>, which introduces blurriness in the resulting TFR. Additionally, they rely on the chosen window (or mother wavelet) and lack adaptivity to the signal characteristics. Bilinear-type TF analysis, exemplified by the Wigner-Ville distribution, encounters limitations such as interference when dealing with signals composed of multiple oscillatory components or those with time-varying frequencies, even when a single oscillatory component is present <cit.>. Nonlinear-type TF analysis emerges as a solution to these challenges. Among these methods, the widely used EMD lacks a theoretical foundation, potentially leading to erroneous interpretations with real data. In contrast, the RM and SST have been rigorously developed with theoretical support <cit.>. The SST and its variations incorporate phase information to mitigate blurriness caused by the uncertainty principle, resulting in a TFR less reliant on window choice <cit.>. While the SST is a nonlinear method, its robustness to various types of noise, including non-stationary and heteroscedastic noises, has been established <cit.>. However, its effectiveness can diminish when the signal-to-noise ratio (SNR) is low, typically below 1 dB. Thus, SST is a suitable tool when quantifying signal dynamics, like IF, is the focus and the SNR is not too low. To enhance nonlinear-type TF analysis in low SNR conditions, one can consider the concept of multi-tapering (MT) <cit.>.The core idea behind MT involves using orthonormal windows, denoted as h_1,…,h_J, to render noise components independent when employing methods like the RM or the SST with different windows. Averaging the results from RM or SST using these J orthonormal windows helps mitigate the impact of noise <cit.>. However, practical constraints limit the number of orthonormal windows due to the Nyquist rate, typically ranging from 6 to 10. To overcome this limitation imposed by the Nyquist rate, the concept of generalized MT emerged, leading to the development of the ConceFT algorithm <cit.>. The fundamental idea behind the ConceFT is reducing the noise impact through generating more windows. Note that a point x:=(x_1,…,x_J)∈ℂ^J on the unit sphere Ω^J-1⊂ℂ^J could lead to a linear combination of J orthonormal windows, denoted as h^[ x]:=∑_i=1^Jx_ih_i. Consequently, using n points on Ω^J-1 results in n TFRs via SST. Averaging these n TFRs yields the desired TFR. Since the windows are not entirely independent, this approach deviates from the traditional MT technique. It's vital to note that the generalized MT technique leverages the nonlinearity of SST to break the dependence between windows effectively. 
This approach proves highly valuable in handling low SNR scenarios in real data <cit.>, and its performance is guaranteed with theoretical underpinning <cit.>.§.§ SST and ConceFT in a nutshell We start with a summary of SST. Suppose the signal we want to analyze is a realization of a generalized random process Y with 𝔼Y=f and a finite covariance structure, where f could be as general as a tempered distribution. Take a window function h(t) that is real symmetric centered at 0, smooth and decay fast so that h is a Schwartz function. We assume that h is the Gaussian function to simplify the discussion. Mathematically, the STFT of Y associated with h is defined as V_Y^(h)(t,ξ):=⟨ Y,h(·-t)e^-i2πξ (·-t)⟩ ,where t∈ℝ is the time, ξ∈ℝ^+ is the frequency, h is the window function chosen by the user, and the notation ⟨·,·⟩ means the evaluation of the random process Y with the Schwartz function h(·-t)e^-i2πξ (·-t), which can be formally understood as ∫ Y(x)h(x-t)e^-i2πξ (x-t)d x. In other words, STFT is obtained by dividing the signal into pieces, and evaluate the Fourier transform of each piece. By patching them together, we obtain the information about how the signal oscillates at each time. Second, evaluate the reassignment rule, ω^(h)_f(t,ξ):= ν - ℑ𝔪V_f^(𝒟h)(t,ξ)/2π V_f^(h)(t,ξ),where ℑ𝔪 means taking the imaginary part, and 𝒟h is the derivative of h <cit.>. Equation (<ref>) is well-defined on every point (t,ν) where V_f^(h)(t,ν)≠ 0.Third, the linear TFR determined by STFT is sharpened by:S^(h)_f(t,ν):= ∫_𝔑_tV^(h)_f(t,ξ)δ_|ν-ω^(h)_f(t,ξ)|dξ,where 𝔑_t:={ξ: |V^(h)_f(t,ξ)|>θ_0} <cit.> and θ_0>0 is the chosen threshold.The main step to sharpen the linear TFR is using the phase information of STFT via (<ref>). Hence, (<ref>) should be understood as two separate steps: * Select all entries (t,ξ) so that the frequency information provided by ω^(h)_f(t,ξ) is νvia δ_|ν-ω^(h)_f(t,ξ)|;* Gather all non-zero STFT coefficients to the entry (t,ν). The SST can be effectively applied to investigate the adaptive harmonic model <cit.>; that is, a multicomponent oscillatory signal with each component oscillating with slowly varying amplitude and frequency. For instance, in the case of f(t)=A(t)cos(2πϕ(t)), the TFR determined by SST concentrates around ϕ_l'(t) while encoding the AM function A(t) as the intensity. This representation is notably less influenced by the choice of window and robust the noise <cit.>. Although not necessary for our current discussion, it is worth noting that SST allows users to achieve more signal processing missions, e.g., recovering individual oscillatory components or denoising. With SST, we now describe ConceFT. Take J orthonormal windows, h_1,h_2,…,h_J∈ L^2(ℝ), where J∈ℕ. We focus on the first J Hermite windows due to its property of having a minimal essential support in the TF domain <cit.>. For 𝐱:=(x_1,…,x_J)∈Ω^J-1, we have a new window h_𝐱:= ∑_j=1^J x_j h_j, which satisfies h_𝐱_L^2=1.Fix Q∈ℕ much larger than J and randomly uniformly sample Q points from Ω^J-1, denoted as 𝐱_1,…,𝐱_Q. The ConceFT of Y is C_Y^(J,Q)(t,ν) :=1/Q∑_k=1^Q|S^(h_𝐱_k)_Y(t,ν)| , where t∈ℝ and ν>0. The traditional MT approach <cit.> is a special ConceFT when Q=J, and 𝐱_k=e_k, the unit vector with the k-th entry 1. Since more than J non-orthonormal windows are used in ConceFT, it is called the “generalized MT” scheme. Since 𝐱_k and 𝐱_i are in general not orthonormal, the noises in V^(h_𝐱_k)_f(t,ν) and V^(h_𝐱_i)_f(t,ν) are dependent. 
However, the nonlinearity of SST drives the noise in S^(h_𝐱_k)_f(t,ν) and S^(h_𝐱_i)_f(t,ν) to be less correlated compared with that in V^(h_𝐱_k)_f(t,ν) and V^(h_𝐱_i)_f(t,ν) <cit.>. Due to the reduced correlation, the impact of noise on the final TFR is reduced via the averaging. In practice, ConceFT is especially effective when the SNR is low.See Figure <ref> for a comparison of TFRs determined by STFT and CWT.In STFT, four labeled spindles produce blurred representations due to the uncertainty principle, limiting IF quantification. ConceFT sharpens these bumps into curves, resulting in a cleaner TFR with reduced influence from EEG background noise. Furthermore, ConceFT significantly concentrates the spectral spreading in the low-frequency region (around 0-3Hz) compared to STFT, particularly around slow oscillations. This enables us to extract IF from the TFR using a curve fitting algorithm, as explained in the next section. §.§ Numerical implementationDenote the discretized signal as 𝐟∈ℝ^N at the sampling rate f_s>0, where N∈ℕ. So the recording duration is T = N/f_s s. Assume the recording starts at time 0. To evaluate the ConceFT of 𝐟, we take the top J∈ℕ hermite windows and uniformly sample them over the interval [-10, 10] at a sampling period of dt = 10/K. Denote the discretized Hermite windows as 𝐡_1, 𝐡_2,𝐡_3 ∈ℝ^2K+1. Also denote 𝐡'_1,𝐡'_2,𝐡'_3 ∈ℝ^2K+1 as the derivatives of these Hermite windows.We then uniformly sample Q random points z_1, …, z_Q∈Ω^J-1⊂ℂ^J and obtain Q∈ℕ new window functions 𝐠_1, …, 𝐠_Q∈ℂ^2K + 1 (and their derivatives 𝐠'_1, …, 𝐠'_Q∈ℂ^2K + 1) by the formulas 𝐠_i = ∑_j=1^J z_i(j) 𝐡_j and 𝐠'_i = ∑_j=1^J z_i(j) 𝐡'_j, where i = 1, …, Q.For each i = 1, …, Q, the STFT of 𝐟 with the window 𝐠_i, denoted as 𝐕_i ∈ℂ^N × M, is evaluated by𝐕_i(n, m) = ∑_k=1^2K+1𝐟(n + k - K - 1) 𝐠_i(k) e^-i2π (k-1)m /M,where M∈ℕ is the number of frequency bins in the frequency axis, m=1,…,M, n=1,…,N, and we set 𝐟(l) := 0 when l < 1 or l > N. Here, M can be arbitrarily picked, which balanced between the frequency resolution and the computational time. Similarly, we define the other STFT of 𝐟 using 𝐠'_i, denoted as 𝐕'_i ∈ℂ^N × M. To sharpen each 𝐕_i, we choose a threshold υ > 0 and calculate the reassignment operatorsΩ_i(n, m) = -ℑ𝔪N/2π𝐕_i'(n,m)/𝐕_i(n,m)when |𝐕_i(n,m)|> υ and -∞when |𝐕_i(n,m)|≤υ,where i = 1,…,Q.With the reassignment operator, the SST of 𝐟 with the window 𝐠_i, denoted as 𝐒_i∈ℂ^N × M, is evaluated by 𝐒_i(n, m) = ∑_l;|l- Ω_𝐢(n, m)|<ν𝐕_i(n, l),where ν>0 is a small constant.Finally, the TFR of 𝐟 determined by ConceFT, denoted as 𝖢𝖥𝖳_𝐟∈ℂ^N × M, is 𝖢𝖥𝖳_𝐟(n, m) = 1/Q∑_i=1^Q |𝐒_i(n, m)| . § MATERIALS AND METHODS§.§ Annotated Databases We consider two open-access benchmark databases in this study. The first dataset is the Dream database from the University of MONS-TCTS Laboratory and Universite Libre de Bruxelles-CHU de Charleroi Sleep Laboratory <cit.>, which was previously used to evaluate automatic spindle detection algorithms <cit.>. It contains 30 minutes of Polypolysomnography (PSG) recordings from 8 subjects with various sleep disorders with a sampling frequency at 50 Hz for one subject, 100 Hz for another subject, and 200 Hz for the other subjects. The dataset was annotated by two experts. Expert 1 annotated all 8 subjects, while Expert 2 annotated the first 6 subjects. Subjects 1 and 3 were annotated over the C3-A1 channel, while other subjects were annotated over the CZ-A1 channel. The original EEG signals were resampled into 50 Hz for the purpose of standardization. 
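Before describing how these recordings were annotated, the numerical implementation of ConceFT summarized above can be condensed into the following Python sketch. It follows the same steps (Hermite windows, Q random unit-norm window combinations, STFT, reassignment, synchrosqueezing, and averaging of magnitudes) but simplifies the exact discretization; for instance, the M frequency bins below span [0, f_s) instead of being restricted to [0, 20] Hz, and all names are illustrative rather than the authors' MATLAB implementation.

import numpy as np

def hermite_windows(J, K):
    # first J orthonormal Hermite functions sampled on [-10, 10] with 2K+1 points,
    # together with their numerical derivatives with respect to the Hermite argument u
    u = np.linspace(-10.0, 10.0, 2 * K + 1)
    h = np.zeros((J, u.size))
    h[0] = np.pi ** -0.25 * np.exp(-u ** 2 / 2)
    if J > 1:
        h[1] = np.sqrt(2.0) * u * h[0]
    for j in range(2, J):
        h[j] = np.sqrt(2.0 / j) * u * h[j - 1] - np.sqrt((j - 1) / j) * h[j - 2]
    return h, np.gradient(h, u, axis=1), u[1] - u[0]

def conceft(f, fs, J=3, Q=30, M=400, K=None, thresh=1e-8, seed=0):
    # TFR of a real signal f by ConceFT: SST with Q random unit-norm combinations
    # of J Hermite windows, magnitudes averaged as in (8)
    rng = np.random.default_rng(seed)
    K = K if K is not None else int(fs)              # window half-length, following the text
    h, dh, du = hermite_windows(J, K)
    dh = dh * du * fs                                # convert d/du to d/dt (t in seconds)
    N = f.size
    fpad = np.concatenate([np.zeros(K), f, np.zeros(K)])
    frames = np.stack([fpad[n:n + 2 * K + 1] for n in range(N)])        # (N, 2K+1)
    E = np.exp(-2j * np.pi * np.outer(np.arange(2 * K + 1), np.arange(M)) / M)
    freqs = np.arange(M) * fs / M                    # frequency (Hz) of each bin
    C = np.zeros((N, M))
    for _ in range(Q):
        z = rng.standard_normal(J) + 1j * rng.standard_normal(J)
        z /= np.linalg.norm(z)                       # random point on the unit sphere
        g, gd = z @ h, z @ dh                        # combined window and its derivative
        V = (frames * g) @ E                         # STFT with window g, cf. (5)
        Vd = (frames * gd) @ E
        mask = np.abs(V) > thresh
        est = freqs[None, :] - np.imag(Vd / np.where(mask, V, 1.0)) / (2 * np.pi)
        idx = np.rint(est / (fs / M)).astype(int)    # reassigned frequency bin, cf. (6)
        ok = mask & (idx >= 0) & (idx < M)
        S = np.zeros((N, M), dtype=complex)
        for n in range(N):                           # synchrosqueeze along frequency, cf. (7)
            np.add.at(S[n], idx[n][ok[n]], V[n][ok[n]])
        C += np.abs(S)
    return C / Q, freqs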
For the first 6 subjects, which were annotated by both experts, the union of the two scorings was used as the ground truth. The intra-rater agreement was reported in <cit.>. Denote e_1, e_2∈{0,1}^N to represent experts' annotations, where 0 indicates no spindles and 1 indicates the existence of spindle. The union annotation is determined by an element-wise OR operation on e_1 and e_2. The last 2 subjects were not used in this research following the conventions of previous research <cit.>.The second database is the second subset (SS2) of the Montreal Archive of Sleep Studies (MASS) <cit.>. The use of this dataset was approved by the Duke Institutional Review Board. This subset comprises full-night PSG recordings of 19 young and healthy participants. Two experts separately annotated sleep spindles on the EEG channel C3-A1. Expert E1 annotated all 19 recordings, while Expert E2 made annotations for 15 of them. The EEG signals were sampled at 256 Hz but were resampled at 50 Hz for the standardization purpose in this research. It is worth mentioning that while Expert E1 adhered to the standard AASM scoring guidelines, Expert E2 used a similar approach to Ray et al. <cit.> that utilized both broad-band EEG signals (0.35-35 Hz) and sigma-filtered signals (11-17 Hz).Furthermore, Expert E2 did not set a minimum duration for spindles, and four of the 19 nights were not assessed due to perceived poor sleep quality or inconsistent signal integrity. The significant difference in the annotated spindles by E1 and E2 is well known <cit.>, so we used the same union scheme in the Dream database as the ground truth, and only 15 subjects were utilized, following a common practice in the literature <cit.>. §.§ Proposed ConceFT-S Algorithm The overall flowchart of the proposed ConceFT-S algorithm is shown in Figure <ref>, where the EEG signal from Subject 2 in the Dream Dataset between 605 to 615 seconds is demonstrated.Below we detail the algorithm step by step, and provide details about parameter selection.§.§.§ Step 1: Evaluate ConceFT Denote the raw EEG signal as 𝐟∈ℝ^N sampled at the sampling rate f_s>0. Compute ConceFT of 𝐟 with the first J Hermite windows, where the Hermite window implementation is detailed in Section <ref>. Denote the result as 𝖢𝖥𝖳_𝐟∈ℂ^N × M. §.§.§ Step 2: Indices for Spindle DetectionGiven 𝖢𝖥𝖳_𝐟, we calculate the sigma band amplitude from 12 Hz to 15 Hz. These values picked to account for the spectral leakage outside of the 12-14 Hz, the range used by Combrisson et al <cit.>.𝖠_σ(n) =Δ_f ∑_m: 12 ≤𝖿𝗋𝖾𝗊(m) ≤ 15𝖢𝖥𝖳_𝐟(n, m) ,where 𝖿𝗋𝖾𝗊(m) is frequency associated with the m-th frequency index and Δ_f:=𝖿𝗋𝖾𝗊(2)-𝖿𝗋𝖾𝗊(1).Next, we calculate the power density of four frequency bands, where the delta band is B_δ:=[0.5-4] Hz, the theta band is B_δ:=[4-8] Hz, the alpha band is B_α:=[8-12] Hz, and the sigma band is B_σ:=[12-15] Hz, by 𝖯_(n)= Δ_f /|B_|∑_m:𝖿𝗋𝖾𝗊(m) ∈ B_ |𝖢𝖥𝖳_𝐟(n, m)|^2 ,where =δ,θ,α,σ. The normalized sigma band power is then computed by dividing 𝖯_σ(n) by the sum of the power in the four bands𝖯̂_σ(n) = 𝖯_σ(n)/𝖯_δ(n) + 𝖯_θ(n) + 𝖯_α(n) + 𝖯_σ(n) .Introduce a threshold parameter δ>0 such that the hard threshold for the sigma band amplitude is θ_a:=mean(𝖠_σ) + δ×std(𝖠_σ). We apply this hard threshold to 𝖠_σ and get a subset of indices T_1 ⊂{1, … N}; that is,i ∈ T_1 if 𝖠_σ(i) is greater than the hard threshold. At the same time, we apply the second threshold parameter ϵ to 𝖯̂_σ(n) and get another subset of indices T_2 ⊂{1, … N}; that is, i ∈ T_2 if 𝖯̂_σ(i) is greater than ϵ. 
We do not use the mean and standard deviation for 𝖯̂_σ(n) because it is already normalized to take into account the individual signal characteristics.Define ℐ=T_1∪ T_2.§.§.§ Step 3: Post-processing Cluster ℐ into ℐ_1,…, ℐ_L so that each ℐ_i contains consecutive integers, ℐ_k∩ℐ_l=∅, ℐ=∪_l=1^Lℐ_l, and indices in ℐ_i are smaller than those in ℐ_i+1. Due to the randomness of the EEG signal, we cannot directly use ℐ_1,…, ℐ_L to estimate sleep spindles, and we modify them by the following rules before estimating sleep spindles. First, if the distance between ℐ_i and ℐ_i+1, defined as (ℐ_i, ℐ_i+1)=min_k∈ℐ_i, l∈ℐ_i+1{|k-l|}, is less than 300 ms for any i, bridge the time gap by replacing ℐ_i by ℐ_i∪ℐ_i+1 and deleting ℐ_i+1. Suppose we end up with a new set of intervals, ℐ_1,…, ℐ_L', where L'≤ L.Subsequently, define a soft threshold calculated from the sigma band amplitude by ϑ_a:=0.5×(mean(𝖠_σ) + δ×std(𝖠_σ)). For each 1≤ i≤ L', denote ℐ_i:={s_i,s_i+1,…,e_i}. Suppose s_i' and e_i' are the closest points at which the sigma band amplitude intersects with ϑ_a. Update ℐ_i by ℐ'_i:={s'_i,s'_i+1,…,e'_i}. Denote the new set of intervals as ℐ'_1,…, ℐ'_L'.Once more, if the distance between ℐ'_i and ℐ'_i+1 is less than 300 ms for any i=1,…,L'-1, bridge the time gap by replacing ℐ'_i by ℐ'_i∪ℐ'_i+1 and deleting ℐ'_i+1. Suppose we end up with a new set of intervals, ℐ'_1,…, ℐ'_L”, where L”≤ L'.Finally, any ℐ'_i shorter than 300 ms or longer than 3,000 ms are discarded, and we end up with the final set of intervals, ℐ'_1,…, ℐ'_L”', where L”'≤ L”, which are our final spindle estimates. The post-processing steps follow Combrison et al <cit.> with a slight modification in the duration criterion. The cutoff durations of 500 ms and 2,000 ms were used in <cit.> while we changed them to 300 ms and 3,000 to fit the characteristics of the datasets used in this research. Their code implementations are available at <https://github.com/EtienneCmb/visbrain>.§.§.§ Parameter selection In Step 1, choose J=3, K = f_s, Q=30 and M = 4000 over the frequency range [0,20] Hz in our implementation of ConceFT. The choice of K is based on the rule of thumb in the TF analysis that the chosen window should encompass approximately 10 cycles of the oscillatory component, or more if the signal is noisy. Considering that our target oscillations are in the sigma band (11-16 Hz), K = f_s corresponds to a one-second signal span, encompassing approximately 11 to 16 oscillations. This range effectively mitigates the influence of the stochastic EEG component. While theoretically, a higher Q might enhance performance, empirical evidence shows that performance plateaus around 20 or 30 <cit.>. Given the trade-off between performance and computational efficiency, we set Q=30. For further details, see the Supplemental Material in <cit.>.In Step 2, the two threshold parameters are optimized through a non-exhaustive grid search in the training phase, where we search δ from {1,1.5,2,2.5,3} and ϵ from {0.05,0.2,0.35,0.5}. This optimization is chosen to balance between the prediction accuracy and computational speed. 
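As a summary of Steps 1-3 and the parameter choices above, a minimal Python sketch of the ConceFT-S decision rule is given below. It operates on a TFR such as the one returned by the conceft sketch earlier, uses δ=2 and ϵ=0.5 as defaults (the values selected for the Dream dataset in the experiments), and the helper names and exact vectorization are illustrative assumptions.

import numpy as np

def detect_spindles(C, freqs, fs, delta=2.0, eps=0.5):
    # ConceFT-S thresholding on a time-by-frequency TFR C, cf. Steps 1-3
    df = freqs[1] - freqs[0]
    sigma = (freqs >= 12.0) & (freqs <= 15.0)
    A = df * C[:, sigma].sum(axis=1)                            # sigma-band amplitude
    bands = {'delta': (0.5, 4.0), 'theta': (4.0, 8.0), 'alpha': (8.0, 12.0), 'sigma': (12.0, 15.0)}
    P = {}
    for name, (lo, hi) in bands.items():
        sel = (freqs >= lo) & (freqs <= hi)
        P[name] = df / (hi - lo) * (C[:, sel] ** 2).sum(axis=1) # band power density
    P_rel = P['sigma'] / (P['delta'] + P['theta'] + P['alpha'] + P['sigma'])
    hard = A.mean() + delta * A.std()                           # hard threshold on A_sigma
    cand = (A > hard) | (P_rel > eps)                           # indices in T1 union T2
    runs = _bridge(_runs(cand), gap=int(0.3 * fs))              # merge gaps shorter than 300 ms
    soft = 0.5 * hard                                           # soft threshold
    runs = _bridge([_extend(r, A, soft) for r in runs], gap=int(0.3 * fs))
    return [(s / fs, e / fs) for s, e in runs if 0.3 * fs <= e - s <= 3.0 * fs]

def _runs(mask):
    # [start, end) index pairs of consecutive True samples
    edges = np.flatnonzero(np.diff(np.r_[0, mask.astype(np.int8), 0]))
    return list(zip(edges[::2], edges[1::2]))

def _bridge(runs, gap):
    out = []
    for s, e in runs:
        if out and s - out[-1][1] < gap:
            out[-1] = (out[-1][0], e)
        else:
            out.append((s, e))
    return out

def _extend(run, A, soft):
    # grow a run outwards until the sigma-band amplitude falls below the soft threshold
    s, e = run
    while s > 0 and A[s - 1] > soft:
        s -= 1
    while e < A.size and A[e] > soft:
        e += 1
    return s, e

For an EEG channel f sampled at 50 Hz, C, freqs = conceft(f, 50.0) followed by detect_spindles(C, freqs, 50.0) returns the candidate (onset, offset) pairs in seconds after gap bridging, soft-threshold extension, and the 300-3000 ms duration rule.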
§.§ Spindle IF estimate With the TFR determined by ConceFT, 𝖢𝖥𝖳_𝐟∈ℂ^N × M, and the detected spindle or spindle labeled by experts, we could estimate the spindle IF by fitting a curve into the TFR by solving the following optimization problem.c^*=_c:{1,…,n}→ℳ∑_ℓ=1^n 𝐑(ℓ,c(ℓ)) - λ∑_ℓ=1^n-1|Δ c(ℓ)|^2 ,where the spindle is assumed without loss of generality to live in the first n samples to simplify the discussion, ℳ⊂{1,…,M} is the frequency band [10,15] Hz, Δ c:ℳ_-^n-1, where ℳ_-:={i-j|i,j∈ℳ} so that Δ c(ℓ):=c(ℓ+1)-c(ℓ) for ℓ=1,…,n-1, λ>0 is the penalty term constraining the regularity of the fit curve c, and 𝐑(ℓ,q)=log|𝖢𝖥𝖳_𝐟(ℓ,q)|/∑_i=1^N∑_j=1^M|𝖢𝖥𝖳_𝐟(i,j)|. Here, Δ c is the numerical differentiation of the curve c and 𝐑(ℓ,q) is a normalization of the TFR. This optimization can be efficiently solved by the penalized forward-backward greedy algorithm <cit.>. To make a connection with existing literature that often assumes that the spindle IF is linear, we carry out the following curve fitting scheme. Suppose the spindle IF is c^* over interval [-t_c, t_c] extracted by (<ref>). Fit a quadratic polynomial c̃(t)=β_0+β_1 t+1/2β_2t^2 into c^* by minimizing the least squared error, where β_0, β_1 and β_2 are the mean rate, linear chirp rate and quadratic chirp rate with the unit Hz, Hz/s and Hz/s^2 respectively, and calculate the relative root mean square error (RRMSE), which is defined as c̃-c^*_2/c^*_2. For a comparison, fit a linear polynomial č(t)=γ_0+γ_1 t into c^* by minimizing the least squared error, where γ_0 and γ_1 are the mean rate and linear chirp rate with the unit Hz and Hz/s respectively, and calculate the RRMSE. We assume spindle IFs are independent, so (β_0,β_1,β_2)^⊤ (and (γ_0,γ_1)^⊤) of different spindles are independent. §.§ Comparison with existing detection algorithmsWe compare the proposed ConceFT-S algorithm with two state-of-the-art automatic spindle detection algorithms, A7 <cit.> and SUMO <cit.>. A7 has the best performance among non-DNN-based algorithms, and SUMO has the best performance among DNN-based algorithms. §.§.§ A7The A7 algorithm operates by setting thresholds on four key parameters: both the absolute and relative power within the sigma band (11-16 Hz), as well as the covariance and correlation between broadband-filtered EEG signals (0.3-30 Hz) and signals filtered within the sigma band (11-16 Hz) <cit.>.The authos showed that the A7 algorithm achieved the best F1 score on the younger cohort and the second best on the older cohort. It is shown in <cit.> that A7 outperforms DETOKS <cit.>. Thus we focus on A7 in this work and use the A7 implementation in <https://github.com/swarby/A7_LacourseSpindleDetector>. §.§.§ SUMOThe SUMO algorithm is a one-dimensional variant of the U-Net architecture tailored for sleep spindle detection. Kaulen et al. benchmarked SUMO using the MODA dataset, demonstrating its superior performance over A7 in sensitivity, recall, and F1 score <cit.>. This algorithm is thus chosen in this study. Spindle U-Net <cit.> achieved a similarly good performance but was not used in this research as it shares the same architecture as SUMO. To apply SUMO to the Dream dataset, we adjust the training approach to suit the data's specifics. We divided the 1800 s EEG signals from each individual into fifteen 120-s segments. This 120 s segment length was chosen because it is the nearest integer divisor of 1800 to 115, the segment length employed in <cit.>. 
For every test participant, the 75 blocks from the other 5 subjects were allocated to training and validation datasets in a 13:2 split. The implementation of SUMO is available at <https://github.com/dslaborg/sumo>. §.§ Statistical Analysis All quantities are presented as the mean ± standard deviation. Continuous variables are analyzed using the Wilcoxon ranksum test. P values <0.05 are considered statistically significant with Bonferroni correction. We applied a leave-one-subject-out cross-validation (LOSOCV) scheme to assess ConceFT-S performance on both datasets. In each fold, we selected δ and ϵ by finding the combination that maximizes the average F1 score on the training subjects. These thresholds were then used to evaluate ConceFT-S on the test subject. We reported average performance metrics across all train-test splits for each dataset. We adopted the analysis-by-event approach <cit.>to reliably identify sleep spindles, comparing estimated spindles to the ground truth, as described in Section <ref> on an event basis. For each spindle in the expert annotation, we calculated the temporal intersection with the closest estimated spindle divided by their union. If this relative overlap exceeded a threshold (set to 0.2, a common suggestion in the literature <cit.>), we counted it as a true positive (TP). If an expert-annotated spindle lacked any detected spindle with sufficient relative overlap, we considered it a false negative (FN). Similarly, if a detected spindle lacked any annotated spindle with sufficient relative overlap, we categorized it as a false positive (FP). We reported sensitivity (SEN), precision (PRE), and F1 score (F1) as performance metrics.§ RESULTS The computation was conducted in the MATLAB R2022a environment using a 2GHz Quad-Core Intel Core i5 processor and 16GB of RAM. The Matlab implementation of the proposed algorithm can be found in <https://github.com/rsbci/ConceFT-Spindle>. The sensitivity analysis of ConceFT-S can be found in Section S-II in the Online Supplementary.§.§ Data VisualizationWe start by visually assessing ConceFT's effectiveness. Figure <ref> shows a comparison of different TFRs given by STFT and ConceFT, utilizing segments from Subject 2 in the Dream dataset.In the first segment, the experts annotated a spindle around the 404th s while another spindle around the 1603th s was annotated for the second segment. In both TFRs, we can observe energy concentration in the sigma band for each spindle.However, ConceFT exhibits sharper power concentration compared to STFT, especially around the 404th second. Additionally, ConceFT significantly reduces spectral spreading in the low-frequency region of STFT (0-3Hz) around the 408th second and 1600th second. Another notable observation is the cardiogenic artifact (indicated by blue arrows). While this artifact is relatively prominent in STFT, ConceFT mitigates its impact.§.§ Performance of ConceFT-S In Tables <ref>, we show the SEN, PRE, and F1 values for the three detection algorithms tested, including A7, SUMO and ConceFT-S. The ConceFT-S achieves an average sensitivity of 0.709, precision of 0.807, and F1 score of 0.749 on the Dream Dataset, and an average sensitivity of 0.789, precision of 0.801, and F1 score of 0.786 on the MASS SS2 subset. ConceFT-S outperforms A7 in F1 with statistical significance, where the p-value is p < 10^-4on MASS and p=0.0411 in Dream. 
On the other hand, ConceFT-S outperforms SUMO in F1 without statistical significance in both datasets, where the p-value is p = 0.132 in Dream and p = 0.836 on MASS. The thresholds used for testing were stable across subjects within each dataset, with δ equal to 2 and ϵ to 0.5 for the Dream dataset and δ equal to 2.5 and ϵ to 0.2 for the MASS dataset. Next, we consider the summary of average spindle density and duration in the N2 sleep stage for each subject in the MASS and Dream datasets, by experts' annotation and different algorithms. The calculation of spindle duration was conducted on spindles that were labeled TP and reported in median ± median absolute deviation since there exist 6 statistical outliers (longer than 5 seconds) in the spindle duration in the MASS dataset, while the spindle density is reported in mean ± standard deviation. The spindle durations (densities respectively) of experts' annotation were 1.14±0.43 s (0.104±0.03 per second respectively) for the MASS dataset and 1.0±0.10 s (0.065±0.025 per second respectively) for the Dream dataset. On average, the spindle length is longer and the density is higher in MASS compared with Dream, which probably comes from the annotation strategies or the existence of subjects with sleep disorders in Dream. Meanwhile, the spindle durations (densities respectively) determined by ConceFT-S were 0.96±0.32 s (0.101±0.002 per second respectively) for the MASS dataset and 1.0±0.34 s (0.057±0.024 per second respectively) for the Dream dataset. Except for the spindle durations in the MASS dataset (p < 10^-4), there is no significant difference between expert annotations and predictions by ConceFT-S. The RRMSE of spindle durations (densities respectively) between annotations and predictions by A7, SUMO and ConceFT-S are 0.213, 0.085 and 0.232 respectively (0.330, 0.353 and 0.202 respectively) in the Dream dataset, and 0.220, 0.131 and 0.186 respectively (0.275, 0.234 and 0.238 respectively) in the MASS dataset. Note that while overall ConceFT-S does not outperform SUMO, the performance is comparable. §.§ Exploration of spindle instantaneous frequency To explore the dynamics of spindles, see Figure <ref> for an illustration of a 30 s EEG signal with several spindles. It is evident that the spindle cycles' durations are not constant; they decrease, as quantified by the IF condition (C2) in the phenomenological model (<ref>). This time-varying frequency is visualized in the TFR of the EEG signal determined by ConceFT in the middle panel in Figure <ref>. A closer look at the dominant curve in the TFR and the fitted curve shows that the curve is not linear. That is, the spindles shown in Figure <ref> are not linear chirps. To further explore the spindle dynamics in terms of IF, we first explore IF on the subject level. For each subject in each dataset, we gather the IFs of all spindles labeled by experts using the curve extraction (<ref>). We align all spindles by the middle points of their labels and denote c_i,j(t) to be the estimated IF of the jth spindle of the i-th subject on [-t_i,j,t_i,j], where t_i,j>0. Then we assess the mean and standard deviation of all IFs at each time t when there are at least 3 spindles that last longer than t. See Figure <ref> for an illustration. For each subject, we show a functional plot of all of its spindles (in gray), along with the associated mean and mean ± standard deviation of IFs. It is clear that the averaged IF is not linear. Next, we quantify the IF on the spindle level.
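Before reporting the chirp-rate statistics, a short sketch of how each labeled spindle is reduced to the quantities used below (the extracted IF curve c*, the linear and quadratic chirp parameters, and their RRMSEs) may be helpful. The dynamic program stands in for the penalized forward-backward greedy algorithm referenced earlier, and the function names and the per-bin smoothness penalty are illustrative assumptions.

import numpy as np

def extract_ridge(C, freqs, fs, band=(10.0, 15.0), lam=1.0):
    # maximum-penalized-sum ridge through the TFR over the band of interest,
    # solved with a simple dynamic program
    sel = np.flatnonzero((freqs >= band[0]) & (freqs <= band[1]))
    R = np.log(C[:, sel] / C.sum() + 1e-12)          # normalized log-TFR
    n, m = R.shape
    score, back = R[0].copy(), np.zeros((n, m), dtype=int)
    jump = (np.arange(m)[:, None] - np.arange(m)[None, :]) ** 2
    for t in range(1, n):
        trans = score[None, :] - lam * jump          # previous score minus smoothness penalty
        back[t] = trans.argmax(axis=1)
        score = R[t] + trans.max(axis=1)
    path = np.empty(n, dtype=int)
    path[-1] = score.argmax()
    for t in range(n - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return freqs[sel[path]]                          # IF estimate in Hz

def chirp_fits(c_star, fs):
    # least-squares linear and quadratic chirp fits to an extracted IF curve,
    # returning (gamma_0, gamma_1), (beta_0, beta_1, beta_2) and the two RRMSEs
    t = (np.arange(c_star.size) - c_star.size / 2) / fs
    gamma = np.polyfit(t, c_star, 1)[::-1]
    beta = np.polyfit(t, c_star, 2)[::-1]
    beta[2] *= 2.0                                   # quadratic term is defined as (1/2) beta_2 t^2
    lin = gamma[0] + gamma[1] * t
    quad = beta[0] + beta[1] * t + 0.5 * beta[2] * t ** 2
    rr = lambda fit: np.linalg.norm(fit - c_star) / np.linalg.norm(c_star)
    return gamma, beta, rr(lin), rr(quad)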
In the Dream (MASS respectively) dataset, 63.8% (65.3% respectively) of spindles have negative linear chirp rate, and 71.0% (71.3% respectively) of spindles have negative quadratic chirp rate, which both have significant difference with p<10^-8 (p<10^-8 respectively) by applying the binomial test with the null hypothesis that the positive and negative rates are of the same ratio. The distributions of (β_0,β_1,β_2)^⊤, (γ_0,γ_1)^⊤ and RRMSE are shown in Figure <ref>, where the mean ± standard deviation of β_0, β_1, β_2, γ_0 and γ_1 in the Dream (MASS respectively) dataset are 13.313±0.797,-0.589± 2.414, -7.924± 14.8695, 13.009± 0.735 and -0.731±2.382 respectively (13.427± 0.706, -0.526± 1.724, -4.902± 11.841, 13.207± 0.627 and -0.599± 1.715 respectively). In the Dream (MASS respectively) dataset, the Pearson's linear correlation coefficients between β_1 and β_2, β_1 and β_3 and β_2 and β_3 are -0.01, -0.355 and -0.125 respectively, where only β_1 and β_3 and β_2 and β_3 are different from 0 with statistical significance with p<10^-8 and p=0.01 respecitvely (0.114, -0.393 and -0.097 respectively, where all are different from 0 with statistical significance and p<10^-8).The RRMSEs of the Dream and MASS datasets are 0.034± 0.018 and 0.025 ± 0.013 respectively. In both datasets, the quadratic chirp fits better than the linear chirp with statistical significance, where the p<10^-8 by applying the Wilcoxon signed-rank test to the RRMSEs of linear and quadratic polynomial fits. This result supports that the spindle IF is in general not linear. §.§ Sensitivity analysis of ConceFT-SWe test the robustness of ConceFT-Spindle algorithm with respect to its parameters, mainly M, J, and Q as a sensitivity analysis. The first parameter of interest is M, which indicates the frequency resolution of TFR in the frequency domain. For M=40,400,4000, the F1 scores are 0.698, 0.717, and 0.749 respectively for the Dream dataset, and the F1 scores are 0.773, 0.782, and 0.786 respectively for the MASS dataset. On the other hand, for each corresponding M, the computational times are 4, 10, and 45 seconds for calculating the ConceFT of a 30-second EEG signal. This shows that there is no significant decline in the F1 score as the M decreases. The p-value between M=400 and M=4000 is p=0.329 on Dream and p = 0.619 on MASS while p-value between M=40 and M=4000 is p=0.310 on Dream and p = 0.340 on MASS. Thus. when a fast computation is needed, we could speed up the computation by slightly sacrificing the performance.Figure <ref> shows the F1 value of ConceFT-S with J=2,3,4 and Q=20, 30, 40. To speed up the calculation, this analysis was conducted with M=400. Across both datasets, the F1 score is the lowest when J=2 while there is no obvious trend between J=3 and J=4. Changing Q does not seem to have a consistent effect on the F1 score. Overall, the performance of the algorithm is stable across datasets and parameters. § DISCUSSION In this work, we introduce a novel Time-Frequency (TF) analysis tool known as ConceFT for the study of sleep spindles. Our contributions can be summarized as follows. The first contribution is developing an automatic spindle detection algorithm, ConceFT-Spindle (ConceFT-S), based on ConceFT, and compare it with two state-of-the-art detectors, A7 <cit.> and SUMO <cit.>, on two public benchmark databases.Our second contribution focuses on demonstrating how ConceFT can be applied to investigate the spindle IF. 
We present evidence that the spindle IF exhibits non-linearity, challenging the assumption of a linear chirp frequency across time. The overall performance of ConceFT-S is equivalent to, if not better than, existing results in the literature. Notably, You et al. <cit.> and Jiang et al. <cit.> have assessed an extensive range of spindle detectors on the same dataset, utilizing the identical Leave-One-Subject-Out Cross-Validation (LOSO-CV) methodology applied in our research. Their studies reveal a wide spectrum of F1 scores, ranging from 0.175 for DETOKS to 0.739 for Spindle U-Net, with Spindle U-Net sharing structural similarities with SUMO and both being rooted in the U-Net neural network framework. While deep learning techniques have consistently displayed superior performance over traditional methods in sleep spindle detection, our findings underscore that straightforward and interpretable thresholding techniques, when combined with an improved TFR, can yield comparable results to deep learning methods. Notably, compared with “black-box” deep learning models, an advantage of ConceFT-S is its interpretability, which is essential for scientific research. Grounded in the amplitude and power within the sigma frequency band, it offers transparent insights into the detection process, fostering understanding and trust, and bridging the gap between computational outcomes and practical applications. Another advantage rooted in its interpretability is its simplicity; it does not necessitate extensive preprocessing, such as pre-filtering or windowing, nor does it require the level of tuning typically associated with deep learning approaches.Beyond the development of ConceFT-S, the primary advantage of the proposed analysis framework, ConceFT, lies in its capacity to investigate spindle IFs. The IF of an oscillatory time series, along with its associated phase, encapsulates intricate physiological dynamics. The quantification of these features through TF analysis tools has paved the way for various clinical applications, including automatic sleep apnea detection <cit.>, early prediction of acute hemorrhage <cit.>, and numerous others. In the context of spindle analysis, it's crucial to remember that the alternation between the acceleration and deceleration of sleep spindle frequencies is linked to sleep-related disorders, such as sleep apnea <cit.>. Consequently, it becomes an intriguing subject to further investigate whether the more detailed dynamics of spindle IF contain valuable physiological or clinical insights.The present study has certain limitations, and topens avenues for future exploration. First, it is important to note that our analysis was conducted using a single EEG channel. While we demonstrated its performance in this context, its applicability to different channels remains unvalidated. Furthermore, it is well-established that sleep spindles originating from various brain regions are associated with distinct generators <cit.>. Prior research, such as that presented in <cit.>, has shown that spatio-spectral-temporal analysis can illuminate the potential involvement of spindles in coordinating cortical activity during consolidation. The high-resolution TFR generated by ConceFT might be valuable in distinguishing the characteristics of spindles recorded from different channels. Second, our focus has been on epochs labeled as N2 stages by experts, which may limit practical application, given that expert sleep stage annotations may not always be readily available. 
In such cases, it would be beneficial to employ automatic sleep stage classification algorithms <cit.>. Third, although the algorithm has been validated using two publicly available datasets, further validation on larger datasets and application to clinical scenarios is warranted. Lastly, our model <ref> is purely phenomenological and serves the objectives outlined in this paper. However, its potential for further exploration is not guaranteed. One possible direction for enhancement involves incorporating physiological evidence. For instance, there is strong and significant correlation between spectral and temporal features over pre-spindle and spindle periods <cit.>, suggesting a connection between the appearance of spindles and the stochastic component Φ. This could be considered in future models.§ CONCLUSIONConceFT represents a novel nonlinear-type TF analysis tool that holds significant potential for visualizing and analyzing the dynamics of biomedical signals. The central emphasis of this paper is its application in the study of sleep spindles. Through our research, we have demonstrated that ConceFT enables the development of a straightforward, interpretable, and precise automatic sleep spindle detection algorithm. Additionally, it facilitates the exploration of spindle dynamics, specifically with regards to its IF. The clinical relevance and implications of ConceFT in the brain wave research will be further investigated in our forthcoming work. § ACKNOWLEDGMENTThe authors thank Dr. Kraines and the Department of Mathematics at Duke University for funding this project through the PRUV program and offering computing resources. H.-T. Wu thank Dr. Anna Mullins for indicating literature about the sleep spindle acceleration and deceleration. ieeetr | http://arxiv.org/abs/2310.18381v1 | {
"authors": [
"Riki Shimizu",
"Hau-Tieng Wu"
],
"categories": [
"q-bio.NC",
"cs.LG",
"cs.NA",
"math.NA"
],
"primary_category": "q-bio.NC",
"published": "20231027024658",
"title": "Unveil Sleep Spindles with Concentration of Frequency and Time"
} |
Phase-space entropy cascade and irreversibility of stochastic heating in nearly collisionless plasma turbulence William D. Dorland January 14, 2024 ===============================================================================================================In many multi-microphone algorithms for noise reduction, an estimate of the relative transfer function (RTF) vector of the target speaker is required. The state-of-the-art covariance whitening (CW) method estimates the RTF vector as the principal eigenvector of the whitened noisy covariance matrix, where whitening is performed using an estimate of the noise covariance matrix. In this paper, we consider an acoustic sensor network consisting of multiple microphone nodes. Assuming uncorrelated noise between the nodes but not within the nodes, we propose two RTF vector estimation methods that leverage the block-diagonal structure of the noise covariance matrix. The first method modifies the CW method by considering only the diagonal blocks of the estimated noise covariance matrix. In contrast, the second method only considers the off-diagonal blocks of the noisy covariance matrix, but cannot be solved using a simple eigenvalue decomposition. When applying the estimated RTF vector in a minimum variance distortionless response beamformer, simulation results for real-world recordings in a reverberant environment with multiple noise sources show that the modified CW method performs slightly better than the CW method in terms of SNR improvement, while the off-diagonal selection method outperforms a biased RTF vector estimate obtained as the principal eigenvector of the noisy covariance matrix. Acoustic sensor networks, relative transfer function vector, beamforming, covariance whitening § INTRODUCTION Acoustic sensor networks (ASNs) with multiple spatially distributed microphone nodes are of rising interest for speech communication applications due to their ability to capture spatially diverse information <cit.>. This allows ASNs to be deployed, e.g., for speech enhancement <cit.>, and sound source localization <cit.>,in applications such as smart speakers or hearing aids connected with external microphones. In these applications the desired speech signal is often corrupted by background noise. To achieve noise reduction, multi-microphone algorithms like the minimum variance distortionless response (MVDR) beamformer can be used <cit.>, requiring an estimate of the noise covariance matrix and the relative transfer function (RTF) vector of the target speaker. In this paper, we consider an ASN where the noise component between all nodes is assumed to be uncorrelated, which is for example the case in a diffuse noise field when the distance between the nodes is large or when different nodes capture different noise sources. This results in a block-diagonal structure of the noise covariance matrix. Exploiting this covariance matrix structure, we propose two methods to estimate the RTF vector of the target speaker in an ASN with at least three nodes. The first method involves a modification of the state-of-the-art covariance whitening (CW) method <cit.>. Instead of using the entire estimated noise covariance matrix for whitening, the proposed CW-D method considers only the diagonal blocks, which allows for efficient inversion and square-root decomposition. 
The CW and CW-D methods both estimate the RTF vector as the best rank-1 approximation of the whitened noisy covariance matrix, which can be achieved via an eigenvalue decomposition (EVD).The second method only requires the noisy covariance matrix and no estimate of the noise covariance matrix. Assuming uncorrelated noise between nodes, all information required for RTF vector estimation is contained in the off-diagonal blocks of the noisy covariance matrix. In the off-diagonal selection (ODS) method, an optimization problem is formulated to estimate the entire RTF vector using only the internode correlations of the noisy covariance matrix. Contrary to the first method, the solution of this optimization problem cannot be computed via an EVD, and we propose to use an iterative optimization procedure.In the experimental evaluation with reverberant real-world recordings and multiple noise sources, the performance is evaluated in terms of RTF vector estimation accuracy and signal-to-noise ratio (SNR) improvement when applying the RTF vector estimates in an MVDR beamformer. The results show that the proposed CW-D method performs slightly better than the CW method. In addition, the proposed ODS method outperforms a biased estimator using the EVD of the entire noisy covariance matrix, especially at low input SNRs where the influence of noise on the diagonal blocks is most severe. § SIGNAL MODEL AND NOTATION We consider an ASN with N spatially distributed nodes, where node n∈{1,…,N} contains M_n microphones, i.e., in total M = ∑_n=1^N M_n microphones. The considered acoustic scene consists of a single target speaker and undesired ambient noise. The noisy m-th microphone signal of the n-th node can be written in the short-time Fourier transform (STFT) domain asY_n,m(k,l) = X_n,m(k,l) + V_n,m(k,l),where k is the frequency bin index and l is the frame index, which - for the sake of brevity - are omitted in the remainder of this paper wherever possible. The speech and noise signal components are denoted by X_n,m and V_n,m, respectively. The M_n-dimensional signal vector for the n-th node is defined as_n = [Y_n,1,Y_n,2, …,Y_n,M_n]^T,where {·}^T denotes the transpose operator. By stacking all node-wise signal vectors, the M-dimensional signal vector , containing all microphone signals in the ASN, is defined as= [_1^T, _2^T, …, _N^T]^T.The speech vectors _n andand the noise vectors _n andare defined similarly to (<ref>) and (<ref>), respectively. For the speech component, we assume a multiplicative transfer function model <cit.>, allowing the speech vector to be written as=X_,where ∈ℂ^M is the target RTF vector, which relates the speech component in the reference microphone X_ to the speech component in all other microphones. Hence, the entry ofcorresponding to the reference microphone is equal to 1.It should be noted that the reference microphone is chosen for the entire ASN and not per node. The RTF vector for node n is defined as _n∈ℂ^M_n, such that = [_1^T,_2^T,…,_N^T]^T, where all _n are normalized to the same reference. Assuming that the speech and noise signals are mutually uncorrelated, the noisy covariance matrixcan be written in terms of the speech covariance matrixand the noise covariance matrixas= ℰ{^H} =+,where ℰ{·} denotes the expectation operator and {·}^H denotes the Hermitian transpose operator. The node-wise covariance matrices for the n-th node, n, n and n, are defined similarly to (<ref>), using the node-wise vectors _n, _n and _n, respectively. 
Using (<ref>), the speech covariance matrixcan be written as a rank-1 matrix spanned by the RTF vector , i.e., = ^H,where = ℰ{|X_|^2} denotes the speech power spectral density (PSD) in the reference microphone.In this paper, we make the central assumption that the noise component is uncorrelated between different nodes.This assumption is realistic, e.g., for a diffuse noise field when the distance between the nodes is large enough <cit.> or when nodes capture different noise sources. For the noise correlation between different microphones within each node, no assumption is made, implying that within each node the noise component may be partially correlated. The node-wise noise covariance matrices n are assumed to be full-rank. Figure <ref> schematically depicts the structure of the entire noise covariance matrix , where yellow indicates high correlation (within each node), and white indicates low correlation (between the nodes). To visualize the influence of such ablock-diagonal noise covariance matrix on the noisy covariance matrix, Figure <ref> also depicts the rank-1 speech covariance matrixand the resulting noisy covariance matrix . It can clearly be seen thatonly contains information about the target RTF vector in its off-diagonal blocks (orange and green), as they are unaffected by the noise.Note that for N=2 nodes, the off-diagonal block ofonly contains information about scaled versions of _1 and _2, which cannot be unified into the vector =[_1^T,_2^T ]^T, as the speech PSD(see (<ref>)) evokes a scaling ambiguity for the RTF vector part that does not contain the reference microphone. This scaling ambiguity, however, can be lifted when N≥ 3, since direct information about the relative scaling of the different parts is contained in adjacent off-diagonal blocks.To achieve noise reduction, we consider the MVDR beamformer, which requires an estimate of the noise covariance matrixand an estimate of the RTF vector . The filter vectorof the MVDR beamformer is given by <cit.> = /^H,yielding the output signal Z = ^H when applied to the noisy input signals. The filtered speech and noise components are defined as Z_x = ^H and Z_v = ^H, respectively. § RTF VECTOR ESTIMATION METHODS In this section, we present different RTF vector estimation methods, where we first discuss the general idea of the rank-1 approximation (Section <ref>), which is the basis for the biased estimator and the state-of-the-art CW method (Section <ref>). In Section <ref>, we propose a modification of the CW method by exploiting the assumed block-diagonal structure of the noise covariance matrix. In Section <ref>, a novel cost function is proposed, where only the off-diagonal blocks of the noisy covariance matrix are selected to compute the best fitting RTF vector. §.§ Rank-1 Approximation and Biased Estimator To motivate the RTF vector estimation methods in the following sections, we first consider the case where an estimate of the speech covariance matrixis available. In practice, the rank-1 model in (<ref>) does not perfectly hold, e.g., due to an insufficient frame length. Hence, the RTF vectoris often estimated as the best rank-1 approximation of<cit.>, i.e., the vector solving the optimization problemmin_ - ''^H _F^2_J('),whereis a scaled (non-normalized) version ofand ·_F denotes the Frobenius norm. 
Considering the gradient of the cost function J(') in (<ref>), i.e.,∇ J() = -2 (-^H),and setting it to zero, the solutions can be found by solving the eigenvalue problem= (^H).Hence, the best rank-1 approximation ofis a scaled version of the principal eigenvector , corresponding to the maximum eigenvalue . The RTF vector estimate can then be obtained as= /𝐞_^T,where _ is a selection vector containing all zeros except for the entry corresponding to the reference microphone, which equals 1. If only the noisy covariance matrixis available, a biased estimate may be obtained as the best rank-1 approximation of , i.e.,min_ - ^H _F^2,where the bias obviously is larger for lower SNR. The biased RTF vector estimate ^ B is obtained as in (<ref>), whereis the principal eigenvector of . §.§ Covariance Whitening (CW) To compensate for the influence of the ambient noise on the RTF vector estimate, a frequently used approach is to perform whitening of the noisy covariance matrix using an estimate of the noise covariance matrix, i.e., = ^-1/2^-H/2, where = ^1/2^H/2 corresponds to a square-root decomposition, e.g., the Cholesky decomposition <cit.>.=0pt Similarly to (<ref>), the optimization problem in the whitened domain is given bymin_ - ^-1/2^H^-H/2_F^2,i.e., the principal eigenvectorof the whitened noisy covariance matrixcorresponds to a scaled version of the whitened RTF vector estimate. By de-whitening and normalizing , the CW RTF vector estimate is obtained as ^ CW = ^1/2/_^T^1/2. §.§ Covariance Whitening Using Diagonal Blocks (CW-D) To leverage the assumed block-diagonal structure of the noise covariance matrix (see Figure <ref>), we propose to only consider the diagonal blocks ofin the CW method. The modified noise covariance matrixis constructed as a block-diagonal matrix containing the node-wise noise covariance matrices n on its diagonal blocks (and all zeros in the off-diagonal blocks). For a block-diagonal matrix, the inverse and the square-root decomposition can be performed efficiently on the separate diagonal blocks, such that the matrix required for the whitening operation in (<ref>) is given by ^-1/2 = [[ 1^-1/2 M_1M_2… M_1M_N; M_2M_1 2^-1/2…⋮;⋮⋮⋱⋮; M_NM_1…… N^-1/2;]].The whitened RTF vector estimate is obtained as the principal eigenvector of ^-1/2^-H/2, where de-whitening and normalization is performed similarly to (<ref>) using ^1/2 to obtain the RTF vector estimate ^ CW D.§.§ Off-Diagonal Selection (ODS) In the optimization problem for the biased estimator in (<ref>), it can directly be seen that biased information is used, as the diagonal blocks ofcontains both speech and noise information. To avoid this problem without compensating for the noise directly (as in the CW and CW-D methods), we propose the following optimization problem min_⊙( -^H )_F^2_J(),where we only select the off-diagonal blocks of(see Figure <ref>) and its respective rank-1 approximation by means of the selection matrixand ⊙ denotes the Hadamard product, i.e., the element-wise multiplication of matrices. The selection matrix is defined as = [[ M_1M_1 M_1M_2… M_1M_N; M_2M_1 M_2M_2…⋮;⋮⋮⋱⋮; M_NM_1…… M_NM_N;]],containing all ones except for the diagonal blocks, which contain zeros. Similarly to (<ref>), the gradient of the cost function J(') in (<ref>) is given by∇ J() = -2 (⊙(-^H)).Setting the gradient equal to zero yields(⊙) = (⊙(^H)).In contrast to (<ref>), this does not correspond to an eigenvalue problem, since the Hadamard product does not allow for further simplification. 
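Before turning to how (<ref>) can be solved, we note for reference that the CW procedure described above condenses to a few lines; the sketch below uses a Cholesky factor as the square-root decomposition and assumed variable names. The CW-D variant applies the same steps after replacing the noise covariance matrix by its block-diagonal part assembled from the node-wise matrices.

```python
import numpy as np

def rtf_cw(Ry, Rv, ref=0):
    """Covariance-whitening RTF estimate for one frequency bin.
    Ry, Rv: (M, M) noisy and noise covariance estimates; ref: reference mic index."""
    L = np.linalg.cholesky(Rv)            # Rv = L L^H, i.e. Rv^{1/2} = L
    L_inv = np.linalg.inv(L)
    Rw = L_inv @ Ry @ L_inv.conj().T      # whitened noisy covariance matrix
    _, vecs = np.linalg.eigh(Rw)          # eigenvectors, ascending eigenvalues
    phi = vecs[:, -1]                     # principal eigenvector
    h = L @ phi                           # de-whitening
    return h / h[ref]                     # normalize to the reference microphone
```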
To the best of our knowledge, there is no closed-form solution or well-defined operation like the (generalized) EVD to solve (<ref>).Nevertheless, iterative optimization procedures like gradient-descent or the quasi-Newton method can be used <cit.>. After solving the unconstrained optimization problem in (<ref>), the RTF vector estimate ^ ODS is obtained by normalizing the solution to the reference entry.In general, it should be noted that the optimization problems for the ODS method in (<ref>) and the biased estimator in (<ref>) only require the noisy covariance matrix , whereas the optimization problem for the CW and CW-D methods in (<ref>) also require an estimate of the noise covariance matrix .§ EVALUATIONIn this section, we evaluate the performance of the presented RTF vector estimation methods using real-world recordings. The considered performance measures are the Hermitian angle between the ground truth RTF vector and the estimated RTF vector, and the intelligibility-weighted SNR improvement of the MVDR beamformer using the respective RTF vector estimates. §.§ Setup and ImplementationFigure <ref> depicts the considered acoustic scene for the evaluation, consisting of a target speaker in an acoustically treated laboratory with dimensions 7×6×2.7 m^3 and a reverberation time T_60≈ 500 ms. As speech material, four different talkers (two male and two female) from the EBU SQAM CD <cit.> and the VCTK corpus <cit.> were used.All utterances had a duration of 20 s. The ambient noise was generated by four loudspeakers in the corners of the room.Different versions of multi-talker babble noise were played back at approximately the same level by the four loudspeakers.The acoustic sensor network for the evaluation consisted of four nodes with uniform linear arrays, placed at about 0.5 m distance from the noise loudspeakers. Nodes 1-3 contained four microphones each, while node 4 contained three microphones, giving a total of M=15 microphones. For all arrays, two different microphone spacings of 1 cm and 3 cm were considered. The first microphone of node 1 was chosen as the reference microphone. The speech and noise components were recorded separately at a sampling rate of 16 kHz and mixed subsequently at an SNR of SNR_in={-5,0,5} dB in the reference microphone.For the implementation of the algorithms, an STFT framework with a frame length of 512 samples (corresponding to 32 ms), a frame overlap of 50%, and a square-root-Hann window for analysis and synthesis was used. The covariance matrices were estimated in batch, where for each frequency binwas estimated during speech activity andwas estimated during speech pauses. For each frequency bin, speech-plus-noise and noise-only frames were determined by means of a speech presence probability (SPP) estimator <cit.>, which was computed on one microphone per node and subsequently averaged. The MVDR beamformer was computed according to (<ref>), where the estimated noise covariance matrix was used in conjunction with one of the four presented RTF vector estimates: * CW: EVD ofwhitened with .* CW-D: CW using block-diagonal noise covariance matrix .* Biased estimator: EVD of .* ODS: Iterative optimization method using only off-diagonal blocks of . Optimized using Matlab'sfunction <cit.> supplied with the gradient and initialized on a random vector. 
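As a rough, self-contained stand-in for the quasi-Newton solver used above, plain gradient descent on the ODS cost function with the gradient in (<ref>) could look as follows. The step size, iteration count and random initialization are illustrative assumptions and would need to be tuned to the scale of the covariance data; no claim is made that this simple scheme matches the convergence behavior of the solver used in the experiments.

```python
import numpy as np

def off_diagonal_mask(node_sizes):
    """Selection matrix with ones everywhere except the node-wise diagonal blocks."""
    M = sum(node_sizes)
    mask = np.ones((M, M))
    start = 0
    for Mn in node_sizes:
        mask[start:start + Mn, start:start + Mn] = 0.0
        start += Mn
    return mask

def rtf_ods(Ry, node_sizes, ref=0, step=1e-3, n_iter=2000, seed=0):
    """Gradient-descent sketch of the ODS estimator: minimize
    || mask o (Ry - h h^H) ||_F^2 over h, then normalize to the reference entry."""
    mask = off_diagonal_mask(node_sizes)
    rng = np.random.default_rng(seed)
    M = Ry.shape[0]
    h = rng.standard_normal(M) + 1j * rng.standard_normal(M)    # random initialization
    for _ in range(n_iter):
        grad = -2.0 * (mask * (Ry - np.outer(h, h.conj()))) @ h
        h = h - step * grad                                      # illustrative fixed step
    return h / h[ref]
```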
As a measure of RTF vector estimation accuracy, we use the Hermitian angle <cit.>between the ground truth RTF vectorand the estimated RTF vector , i.e., θ = arccos( |^H|/_2 _2 ), averaged over all frequency bins. For each target position, the ground truth RTF vector was computed via the EVD of the oracle speech covariance matrix, obtained using the measured room impulse response convolved with white Gaussian noise. The second performance measure is the intelligibility-weighted SNR improvement <cit.> Δ SNR =SNR_out - SNR_in,max, where the output SNR SNR_out is computed using the filtered speech and noise signal components and SNR_in,max is the highest input SNR among all microphones. §.§ Results and DiscussionFor different input SNRs, Figure <ref> depicts the Hermitian angle and the SNR improvement for the considered RTF vector estimation methods, where the bars represent the mean over 32 conditions (four target speaker positions, four speakers, two microphone spacings) and the error bars depict the standard deviation. First, it can be observed that in general the Hermitian angle in the upper panel of Figure <ref> decreases with increasing input SNR, implying more accurate estimation at higher SNRs. Although at low SNRs the CW and CW-D methods achieve a lower Hermitian angle than the biased estimator and ODS method, these differences become negligible at higher input SNRs.Second, it can be observed that in terms of SNR improvement the differences between the methods are more noticeable than in terms of Hermitian angle. At all input SNRs, the CW and CW-D methods consistently achieve around 12 dB of SNR improvement and outperform the biased estimator and the ODSmethod. This indicates a clear benefit of compensating for the noise using the estimated noise covariance matrixinstead of using biased or selected information from the covariance matrix . The performance of the CW and CW-D methods is similar, although the CW-D method attains a slightly higher SNR improvement, particularly at higher input SNRs. These results indicate a good validity of the block-diagonal model for the noise covariance matrix for the considered scenario. Hence, inverting only the diagonal blocks, cf. (<ref>), seems to be sufficient or even beneficial, as it may reduce estimation errors of .Comparing the SNR improvement of the biased estimator with the ODS method, it can be observed that at an input SNR of -5 dB, the ODS method significantly outperforms the biased estimator (by about 1.5 dB). At higher input SNRs, the advantage of using only the off-diagonal blocks vanishes, and it seems more beneficial to use all information as the influence of noise diminishes. This indicates a higher robustness of the EVD of the full covariance matrix towards deviations from the rank-1 speech model compared to selecting only the off-diagonal blocks. At low SNRs, however, it seems more advantageous to exclude biased information and use only the off-diagonal blocks, which are affected less by noise, leading to a better performance of the ODS method compared to the biased estimator.§ CONCLUSIONIn this paper, we presented and compared different RTF vector estimation methods leveraging the assumed block-diagonal structure of the noise covariance matrix in an acoustic sensor network with multiple nodes. 
In an evaluation with real-world recordings, the modified CW method, which only considers the diagonal blocks of the noise covariance matrix, showed equal or even slightly better results than the original CW method at a lower complexity. Furthermore, we proposed a novel optimization problem for RTF vector estimation by selecting only the off-diagonal blocks of the noisy covariance matrix which are assumed not to be affected by noise. The evaluation results showed that the ODS method clearly outperforms a biased estimator in terms of SNR improvement, especially at low SNRs, showing that the selection of only unbiased information is beneficial if the influence of noise is large. | http://arxiv.org/abs/2310.18199v1 | {
"authors": [
"Wiebke Middelberg",
"Henri Gode",
"Simon Doclo"
],
"categories": [
"eess.AS"
],
"primary_category": "eess.AS",
"published": "20231027151820",
"title": "Relative Transfer Function Vector Estimation for Acoustic Sensor Networks Exploiting Covariance Matrix Structure"
} |
Generalized Firefly Algorithm for Optimal Transmit Beamforming Tuan Anh Le and Xin-She Yang T. A. Le and X.-S. Yang are with the Faculty of Science and Technology, Middlesex University, London, NW4 4BT, UK. Email: {t.le; x.yang}@mdx.ac.uk.This paper has been presented in part at the IEEE Vehicular Technology Conference (VTC 2023-Spring), Florence, Italy, June, 20-23, 2023.January 14, 2024 ===============================================================================================================================================================================================================================================================================================================================empty empty We consider the problem of closed-loop robotic grasping and present a novel planner which uses Visual Feedback and an uncertainty-aware Adaptive Sampling strategy (VFAS) to close the loop. At each iteration, our methodbuilds a set of candidate graspsby generating random perturbations of a seed grasp. The candidates are then scored using a novel metric which combines a learned grasp-quality estimator, the uncertainty in the estimate and the distance from the seed proposal to promote temporal consistency. Additionally, we present two mechanisms to improve the efficiency of our sampling strategy: We dynamically scale the sampling region size and number of samples in it based on past grasp scores. We also leverage a motion vector field estimator to shift the center of our sampling region. We demonstrate that our algorithm can run in real time (20 Hz) and is capable of improving grasp performance for static scenes by refining the initial grasp proposal. We also show that it can enable grasping of slow moving objects, such as those encountered during human to robot handover. Video: <https://youtu.be/8DRe2OFlf7o>§ INTRODUCTIONA traditional robotic grasping pipeline typically uses an external camera which provides a scene representation as input to a grasp planner which then proposes a set of candidate grasps either for the full scene or for a target object. More often than not, the execution of one of these grasps is carried out in an open loop fashion, with little or no new sensor information used after the selection of the best grasp candidate. Under such circumstances, grasping may fail due to poor pose or object shape estimation, camera calibration and other perception artifacts.A closed loop control system periodically incorporates sensor data as a task progresses, computes an error metric as a function of the current and goal states and takes actions to reduce this error. In the context of grasping, the goal is typically encoded as a 6D pose to be achieved by the robot end effector. A lot of progress has been made in the last decade in grasp learning to produce faster, more accurate and reliable grasp planners that output large number of grasp pose candidates. However, even when some of those grasp planners may be able to run in real-time, their outputs are not temporally consistent, i.e., the output at any given frame is independent of the previous one, causing discontinuous jumps of the goal pose. The lack of temporal consistency makes it hard to design closed loop behaviors around these algorithm's outputs. It also poses a challenge for the downstream motion planner that consumes this goal pose. 
Moreover, in the case of eye-in-hand systems, the images obtained as the manipulator gets close to the object may fall outside of the training distribution of these planners, causing them to fail. In this paper, we address the problem of driving a robot manipulator to a successful grasp on a target object in a closed loop manner. A particular challenge for this task is to provide suitable scene information (for feedback) at all times during the manipulation task. If we only consider the use of a fixed external camera, the robot motion may produce occlusions as it navigates to execute the grasp. A wrist-mounted camera provides the best occlusion-free perspective at the moment of executing a grasp, but cannot possibly keep the target grasp in view at all times while the robot moves due to kinematics. Therefore, in this work, we limit ourselves to providing a closed loop mechanism to drive the gripper from a pre-grasp position to a final successful grasp using the visual feedback from a wrist-mounted camera (Figure <ref>).Our approach begins with an initial grasp proposal provided by any available grasp planner with a global view of the scene as input. We allow the robot to navigate to a pre-grasp position, that is, a retracted pose from where if the gripper moves forward, it can execute the grasp. We leverage the fact that, in most cases, this grasp proposal will be either correct or close to correct. Therefore, if we search in a small neighborhood around this seed grasp, our local grasp planner should be able to find a high quality grasp. Once we have found this grasp using the local information provided by the wrist camera, our task becomes that of “rediscovering” the same high quality grasp on the next frame.To search for the highest quality grasp in this region, we sample many grasp candidates around the seed grasp and evaluate their quality. We use a Grasp Evaluator network trained with synthetic data which operates on the raw unsegmented point cloud data from the wrist camera. This network can provide a quality metric for each grasp and, combined with other scoring heuristics, allows us to produce a final score for each sampled candidate. The highest scoring grasp becomes the seed grasp in the next iteration of the algorithm. We propose two mechanisms to improve the performance of our sampling-based approach. First, based on the previous seed grasps scores, we dynamically scale the sampling region size and the number of samples in it: we scale up when no good grasps are found, and scale down when a good grasp is found. Second, we make use of a Motion Vector Field Estimator and utilize the motion of points in the vicinity of the current grasp as a signal of object movement and use it to bias the center of our sampling region (the seed grasp) accordingly. The main contributions of our work are as follows: * A sampling-based closed loop grasping algorithm: Our algorithm takes in RGB-D inputs from a wrist camera and iteratively finds the highest quality grasp in a small region around a seed grasp. Unlike traditional grasp planners, our algorithm is designed from the ground up to output a temporally consistent high quality grasp. When initialized with an appropriate seed, the output of our algorithm can be consumed downstream by a suitable controller to drive a robotic gripper towards a successful grasp despite object disturbances. * Adaptive sampling: We present two mechanisms that aid the efficiency of our sampling search mechanism. 
We linearly scale both the sampling region size and number of samples based on previous grasp scores to quickly recover when losing track of a grasp. We leverage a motion vector field estimator to bias the center of our sampling region to improve tracking of moving objects.* Grasp scoring mechanism: We show a simple scoring heuristic which takes into account the inherent grasp quality of a candidate but applies penalties to promote grasp temporal consistency. Additionally, we quantify and penalize the uncertainty in the grasp quality through the injection of synthetic noise into our Grasp Evaluator network.is the first to track grasps in a temporally consistent manner in 6 DoF at 20 Hz while also refining the grasp quality iteratively. We demonstrate that a system running our algorithm can improve grasping performance on both static and dynamic scenes. § RELATED WORKIn this section, we first review relevant work on grasp learning, followed by the progress in incorporating learning in closed-loop grasping with visual feedback. §.§ Grasp LearningClassic research on robotic grasping treats the problem as a planning or optimization problem where certain geometric or mechanical constraints are considered for the grasping contact regions <cit.>. Recent data-driven approaches explored grasp detection and evaluation without explicitly modeling these constraints. A lot of works in grasp learning focus on learning SE(2) grasps. For example, in <cit.>, an input RGB image is mapped to a vector that encodes a good grasp in SE(2). Learning for SE(2) grasps has been demonstrated to be quite successful when equipped with high capacity visual encoders<cit.>. More recently, grasp learning in SE(3) has gained popularity and is often tackled with more advanced network models.Gualtieri <cit.> presented one of the earliest works on grasp learning in six dimensions. In their work, a binary classifier is adopted to evaluate the heuristics-based grasp proposals. Besides point cloud, many different volumetric representations have also been demonstrated for grasp learning, including voxel <cit.>, signed distance function (SDF) <cit.> and graphs <cit.>. Shape completion has also been explored to improve the quality of grasp detection <cit.>. In this work, we choose point cloud to represent the geometry as it not only preserves fine details of object shape information, but also has been well-explored for grasp learning in real-time.Grasp learning on point cloud can be further categorized to two domains: grasp detection and grasp evaluation. In <cit.> and <cit.>, the authors applied PointNet++ <cit.> to encode point features and infer grasps around the points. Zhao <cit.> further extended the works by adding another "grasp region network" to infer grasp orientation as a categorical distribution for points determined to have a high grasp quality. On the other side, similar network structures have been proposed in <cit.>, <cit.> and <cit.> to evaluate the quality of grasp proposals. Grasp detection usually assumes no prior information about grasp candidates from previous time steps, therefore ensuring temporal consistency becomes a challenge. Since this work is not intended to solve end-to-end grasp tasks and we assume some priors to initialize the system, we focus on learning a good grasp evaluation function and rely on adaptive sampling to ensure spatial and temporal consistency. 
§.§ Closed-Loop GraspingGrasping in a closed-loop manner becomes important when the system has to deal with perception errors and object disturbances. Early works in this aspect attempt to guide the robot via visual servoing or object tracking, and the grasps are limited from top-dowm <cit.>. Some more recent works adopt similar ideas. For example, Marturi <cit.> proposed a work that explicitly tracks 6DoF object pose and combines this with grasp poses computed offline to achieve dynamic grasp planning. Other approaches such as <cit.>, <cit.> and <cit.> requires prior knowledge of the object shape thus are more restricted for real-world applications. Our closed-loop grasp system estimates motion at the scene level. Thus it does not require prior information of the object, neither does it explicitly track the object motion, making it more general in real-world scenarios.In <cit.>, the authors proposed generative grasping convolutional neural network (GG-CNN) for SE2 grasping and claimed that a lightweight network ruining at high frequency could enable closed-loop grasping scenarios. Our system follows similar principles by making the grasp evaluation module computation efficient and capable of running at real-time (20Hz). Therefore, instead of iteratively performing grasp detection across the whole scene and relying on similarity metrics to ensure temporal consistency <cit.>, we choose to adaptively sample around the seed grasp from last frame and evaluate the grasps candidates in real time. From this perspective, our work is closely related to that of Yang <cit.>. Our approach differs in that we sample many candidates in both translation and rotation around a single seed (as opposed to a single perturbation in translation around many seeds). We also gather information from the scene-level motion vector field and use grasp scores history to dynamically adjust the sampling parameters. Lastly, our system is not limited to a single task and can be applied to other applications without further tuning.§ METHODOLOGY We assume a grasp planner has selected a grasp candidate to be executed and the robot has successfully navigated to a pre-grasp location which is retracted from that target grasp G_s. Using the RGB-D information provided by a short range camera mounted on the wrist of the robot (RealSense D405), our high level objective is to servo the gripper from this pre-grasp location to a successful grasp on the target object in a closed loop manner. Provided the initial grasp candidate is in the vicinity of an actual high quality grasp, we posit that a grasp evaluator network will converge to this grasp or a similar one by continuously evaluating a set of randomly sampled grasps around the seed grasp G_s. More specifically, we aim to develop a real-time algorithm which takes in a seed grasp pose and a frame of RGB-D data from the wrist camera and outputs the highest quality grasp in a region around the seed grasp. Applying such algorithm frame by frame results in G_s being constantly updated and a suitable controller can therefore drive the robot gripper to a successful grasp. In the next few sections we will provide more details about the full pipeline, which is shown in Figure <ref>) §.§ Grasp SamplingOur pipeline begins with a seed grasp G_s. During the first iteration, this seed grasp is provided by some off-the-shelf grasp planner. The assumption behind our sampling strategy is that this initial grasp proposal, while not perfect, is usually close to a high quality grasp. 
Given G_s, we would like to sample a set of N grasps that lie in a small region around it. To do this, we generate N transformation matrices where the translation vector is sampled from a uniform distribution with ±2 cm on each axis. In similar fashion, the rotation matrix is generated from a uniform distribution of Euler angles with zero degrees for roll and±5^∘ for both pitch and yaw. In our convention (see Figure <ref>, the roll axis matches the finger closure direction and therefore is not such an important factor in grasp quality. Our full set of perturbed grasps can then be obtained by right multiplying G_s with these transformations matrices.The sampling region limits mentioned above determine what we call our nominal sampling region. However, we can dynamically scale N and the region limits at run-time based on the circumstances. In practice, we scale both the region size and the number of perturbed grasps N using a fixed scaling factor λ. If on a given iteration no good grasps are found as determined by our grasp scoring policy (all scores less than 0.5), we scale up the sampling region and N by 30% (λ=1.3) on the next iteration. This scaling may continue up to triple the nominal size. As soon as an iteration yields a good enough grasp (a score greater than 0.5), we revert the sampling region to its nominal size.As described so far, our algorithm relies exclusively on the sampling region size to keep track of a moving object. Consider the simplified case of an object with only a single successful grasp pose moving at a constant speed. If we have discovered this high quality grasp at frame t, our only hope to recover this grasp at frame t+1 is that the sampling region is large enough to account for the object translation during that time interval. However, if we had a mechanism to keep track of the object movement, we can bias the pose around which we sample grasps and improve our tracking performance. This is the role of the motion field estimator.We use GMFlow <cit.>, a transformer-based optical flow estimation algorithm on the RGB feed from our wrist camera. We lift the flow field from the image domain to our point cloud by taking the difference in depth values per pixel across two frames, creating a dense 3D vector field that describes the motion of the corresponding 3D points. At each iteration, we collect the 3D flow for all points in a sphere of radius ρ around G_s, average them and obtain a single 3D velocity vector which, multiplied by the iteration time period produces the translational offset which we apply on the following iteration to G_s. The dynamic size of our sampling region in combination with the use of a motion vector field to bias the region center is what we refer to as adaptive sampling.§.§ Grasp Evaluation NetworkOnce we have a set of N perturbed grasps, we require an evaluation network that can provide a quality metric for each. This grasp quality metric Q ∈ [0,1] is one of the main factors used to determine the final score of a grasp (more details in section <ref>).We draw inspiration from GraspNet to design this Grasp Evaluator network. The backbone of the network is based on PointNet++, followed by a fully connected MLP with a sigmoid activation function applied at the output. Like GraspNet, the input to this network is the combination of the observed point cloud by the wrist camera and the gripper point cloud, each labeled using an extra binary feature indicating whether the point belongs to the scene (0) or to the gripper (1). 
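To make the pipeline concrete, the sketch below illustrates (i) the grasp sampler of the previous subsection, which right-multiplies the 4x4 seed pose with random perturbations whose limits and count are scaled by the current factor, and (ii) the assembly of the labeled point-cloud input just described. The function names, the nominal sample count, and the SciPy Euler convention (with roll mapped to the finger-closure axis and held at zero) are assumptions made for illustration.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def sample_perturbed_grasps(G_seed, n_nominal=100, scale=1.0,
                            trans_lim=0.02, rot_lim_deg=5.0, rng=None):
    """Candidate grasps as random perturbations of the 4x4 seed pose G_seed:
    uniform translations within +/- trans_lim*scale [m] per axis, zero roll,
    and uniform pitch/yaw within +/- rot_lim_deg*scale degrees."""
    rng = rng or np.random.default_rng()
    grasps = []
    for _ in range(int(round(n_nominal * scale))):
        T = np.eye(4)
        pitch, yaw = rng.uniform(-rot_lim_deg * scale, rot_lim_deg * scale, size=2)
        T[:3, :3] = R.from_euler("xyz", [0.0, pitch, yaw], degrees=True).as_matrix()
        T[:3, 3] = rng.uniform(-trans_lim * scale, trans_lim * scale, size=3)
        grasps.append(G_seed @ T)    # perturbation applied in the grasp frame
    return grasps

def evaluator_input(scene_points, gripper_points):
    """Stack scene and gripper point clouds with a binary feature marking
    scene points (0) and gripper points (1)."""
    pts = np.vstack([scene_points, gripper_points])
    labels = np.concatenate([np.zeros(len(scene_points)), np.ones(len(gripper_points))])
    return np.column_stack([pts, labels])    # shape (P, 4): x, y, z, label
```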
The gripper point cloud is obtained by uniformly sampling the gripper proximal and distal link meshes. Unlike GraspNet, we decide to crop the scene pointcloud to a smaller region of interest around the grasping area. Using the grasp coordinate frame, we crop the scene pointcloud with a rectangular prism with limits x = ± 10 cm, y = ± 5 cm and -10<z<3 cm (see Figure <ref>).To train our Grasp Evaluator, we use the ACRONYM dataset <cit.>. We utilize Isaac Gym to spawn objects and our gripper in the location corresponding to the grasp poses from ACRONYM. From each grasp position, we render the depth image from the wrist camera view, generate the corresponding point cloud and apply the same crop parameters explained above. These cropped point clouds are stored alongside the grasp label to build our training dataset.All grasps from ACRONYM are generated on the object mesh surface. In our application, it is entirely possible to query grasp candidates (by our grasp sampler) which are more retracted from the object, in collision with it or even in an empty area with the object far away from the grasp. For this reason, we need to augment the data to capture these cases and label them accordingly. Following GraspNet convention, we call these hard negative grasps. We generate them by applying both positive and negative offsets in the grasp Z axis (see Figure <ref>) as well as translations on the X,Y axes from the original set of grasps. Additionally, we verify that none of these generated grasps are similar to those in the original set (using Euclidean distance as similarity metric).There are two data augmentations we perform to help bridge the sim2real gap. First, at training time, we add Gaussian noise to our point clouds with zero mean and 2mm of standard deviation on each axis. Additionally, our dataset contains the normal vectors for each point cloud. If the angle between the camera normal and a given point normal is between 80 and 90 degrees, we drop this point with a probability of 70%. This is meant to represent the fact that most real depth cameras do not reliably provide depth information on surfaces which are at a shallow angle with respect to the camera normal direction.§.§ Grasp ScoringOur grasp evaluator network outputs a quality metric Q ∈ [0,1], where 1 represents a high quality grasp and 0 a poor quality grasp. Because our network is trained as a classifier, the distribution of Q is heavily biased towards the values of 0 and 1. The underlying function our network approximates is complex and may have sharp discontinuities since small perturbations of the grasp pose can cause a grasp to go from success to failure. When evaluating grasps in such regions, even small amounts of noise in the point cloud can drive the quality towards either side of the classifier decision boundary, potentially creating a large spread in the quality metric. We consider that a true high quality grasp is that which can endure small perturbations while still producing a successful grasp and therefore we want to penalize grasp candidates which present large spreads in quality due to noise.In such cases, a naive solution can be to take many measurements to average out the noise. However, this would hurt our real time performance. We propose instead to inject synthetic noise to the measured point cloud. For each grasp candidate G_n and its corresponding point cloud X_n, we apply gaussian noise using the same parameters as during data augmentation at train time. 
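As a side note, the region-of-interest crop described above amounts to expressing the scene points in the grasp coordinate frame and keeping those that fall inside the quoted rectangular prism; a minimal sketch with assumed names is given below.

```python
import numpy as np

def crop_scene_for_grasp(scene_points, G_grasp,
                         x_lim=0.10, y_lim=0.05, z_lim=(-0.10, 0.03)):
    """Keep scene points inside the rectangular prism (x = +/-10 cm, y = +/-5 cm,
    -10 cm < z < 3 cm) defined in the grasp frame given by the 4x4 pose G_grasp."""
    R_wg, t_wg = G_grasp[:3, :3], G_grasp[:3, 3]
    pts_g = (scene_points - t_wg) @ R_wg      # rows of R_wg^T (p - t), i.e. grasp-frame coords
    keep = (np.abs(pts_g[:, 0]) <= x_lim) & \
           (np.abs(pts_g[:, 1]) <= y_lim) & \
           (pts_g[:, 2] >= z_lim[0]) & (pts_g[:, 2] <= z_lim[1])
    return scene_points[keep]
```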
We create k copies of X_n, each with a different noise applied to it, and create a larger batch of inputs for network inference. After the forward pass through the evaluator network, we collect the mean quality q̅_̅n̅ for each candidate, as well as the spread in quality scores q_sp = max(Q_n)-min(Q_n) where Q_n represents the set of k quality values corresponding to grasp candidate G_nWe design a scoring function that penalizes large changes to G_s to promote temporal consistency. To achieve this, we compute distance metrics for translation T and rotation R between the seed and the candidate grasp and multiply each with penalty terms k_1 and k_2 respectively. The translation distance metric T is simply the L2 norm between the position of G_s and G_n. For the rotation distance, we compute the rotation between G_s and G_n as R_sn =R_s^T R_n where R represents the rotation matrix for each grasp. Then we compute the axis-angle representation as Tr(R_sn) = 1 + 2 cos(θ) and use the value of θ as the rotation distance R: S_n = q̅_̅n̅ - k_1*T - k_2*R - k_3*q_sp The score S_n corresponding to a grasp candidate G_n is then mainly composed of the mean quality and penalized by three different terms corresponding to translational and rotational distance from the seed and quality spread. The penalty weights k_1 through k_3 allows us to adjust each term's influence on the final score. The highest scoring grasp is fed to a simple low pass filter <cit.> to further enforce smoothness in the pose change of G_s over time before updating the goal for the cartesian controller which servos the gripper. § EVALUATION AND RESULTS To evaluate our method, we design three experiments. First, we want to quantify howcan improve grasping performance on a static scene. Second, we would like to evaluate our ability to track a moving object and then proceed to grasp it. Lastly, we attempt a human to robot handover task, where the object pose is completely unrestricted. In order to perform these experiments, we utilize a Franka Emika Panda robot arm equipped with a custom 4-bar linkage gripper. The robot is placed right next to a table where we place the objects for the static case or the rotating table in the moving objects case.As explained previously, our method requires an initial grasp proposal for initialization. For both the static and moving objects experiments, we utilize Contact-GraspNet <cit.> (CGN) to provide the initial grasp seed G_s. Additionally, we need a fast controller capable of driving the gripper towards a moving target as our algorithm updates the best possible grasp at each iteration. For this task, we utilize a simple cartesian controller with a proportional gain to reduce the error between the current pose and the target pose. It must be noted that we can't naively servo the gripper directly towards the goal as we must approach the grasp pose from the pre-grasp position to avoid colliding with the object. Our grasping logic is such that we first target the pre-grasp position and if we can track it within certain tolerance for 5 iterations, we change the target to the final grasping pose. Note that in the current iteration of this work, this movement towards the final grasp pose is done in an open loop manner, due to the wrist camera inability to provide a reliable point cloud when the object geometry surpasses the gripper finger tips. §.§ Static objects For this experiment, the goal is to clear all the objects on the table. 
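For clarity, the scoring rule described above, mean quality penalized by translational distance, rotational distance and quality spread, reduces to the following sketch; the penalty weights are placeholders and the quality values are assumed to come from the k noise-injected evaluations of the candidate.

```python
import numpy as np

def score_grasp(q_vals, G_seed, G_cand, k1=1.0, k2=1.0, k3=1.0):
    """S_n = mean(q) - k1*T - k2*R - k3*(max(q) - min(q)) for one candidate grasp."""
    q_vals = np.asarray(q_vals)
    T = np.linalg.norm(G_cand[:3, 3] - G_seed[:3, 3])                    # translation distance
    R_sn = G_seed[:3, :3].T @ G_cand[:3, :3]
    theta = np.arccos(np.clip((np.trace(R_sn) - 1.0) / 2.0, -1.0, 1.0))  # rotation distance
    return q_vals.mean() - k1 * T - k2 * theta - k3 * (q_vals.max() - q_vals.min())
```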
We utilize a total of 8 objects, with a maximum of 4 on the table for a given scenario. We define 5 scenarios with a subset of 4 objects and another 5 with the remaining 4 objects for a total of 10 scenarios (Figure <ref>). Each scenario presents a different arrangement of the objects on the table. We record the grasp success rate for each object across the 10 scenarios, where each scenario is attempted 3 times.We first observe the scene from a position where all the objects are visible. We utilize a pre-trained and fine-tuned model of Mask R-CNN <cit.> to segment the objects point cloud and query CGN for grasps on each object. We ranked the returned grasps based on their kinematic feasibility and score higher those which can be achieved with the robot arm further away from joint limits (since the cartesian controller cannot gracefully handle joint limits).For our baseline, we record the grasp success rate that results from executing the grasps proposed by CGN without our algorithm. Then we repeat the experiment with CGN proposing a grasp, but usingafter the robot reaches the pre-grasp position to drive the gripper towards a refined grasp proposal. Results are shown in Table <ref>.provides a significant improvement over the baseline across all objects, but especially on objects where small perturbations can result in failed grasp, such as the can which is typically grasped from the top. §.§ Moving objects In this experiment, we use a custom built turntable (Figure <ref>) to analyze the effect of our adaptive sampling strategies when tracking objects moving at different speeds. We place an object in a starting position at three different radial distances (6, 10 and 14cm) and use a fixed angular velocity of 2π/10 rad/s to generate trials at a low, medium and higher speed (3.8, 6.3 and 8.8 cm/s respectively).The robot will observe the scene and query CGN for grasps and move to the pre grasp position. At this point, the turntable is commanded to move in a random direction, with a goal position between ±60^∘ and ±120^∘. The robot needs to track the object throughout this movement and grasp it after it comes to a stop. The objects used in this experiment are the mug, bowl, bottle and can. We compare our algorithm grasping performance with and without adaptive sampling. When running without adaptive sampling, the motion vector field estimator and the dynamic sampling region resizing are disabled. Results in Table <ref> show that, as the object movement speed increases, our adaptive sampling strategies are fundamental to the final performance of the system.§.§ Human to Robot HandoverFor our final experiment, we demonstrate thatcan also be used to enable human to robot handover tasks, where the object pose is completely unrestricted and may change in both translation and rotation. Unlike the previous experiments, we do not provide an initial grasp proposal to start tracking the object. Instead, we place the robot arm in a fixed position, hard-code the seed grasp to be 15cm in front of the gripper and let our algorithm run continuously searching for a good grasp in this area. During this initialization stage, G_s is not updated. Once a good grasp is found (score greater than 0.5), we transition to the tracking phase where the seed is updated on each iteration to track the high quality grasp found in the previous iteration. The participants were instructed to pick any of 4 objects (bowl, bottle, small box, mustard) and slowly present the object to the robot in the area in front of the gripper. 
They may freely move and change the object pose as desired during the tracking phase. We consider the handover to be successful if the robot grasps the object within the first 20 seconds of starting the tracking phase. A total of 8 participants performed 3 trials with each object for a total of 96 trials (24 trials per object). The overall success rate was 81.25% with 3, 5, 4 and 7 failures for the bowl, bottle, small box and mustard respectively.§ CONCLUSIONS We presented a new closed-loop grasping method . We demonstrated that it can effectively refine and track an initial grasp proposal solely using the feedback provided by an RGB-D camera mounted on the wrist of a robot manipulator. This is enabled by our real-time sampling strategy, which is capable of evaluating a large number of candidate grasps and scoring them to promote temporal consistency of the output target grasp. Results show thatsignificantly improves the grasping performance for static objects and enables the possibility of grasping moving objects.However, there are some limitations to our method. In its current form we cannot guarantee that the final grasp executed on the object will maintain the original grasp affordance. It is entirely possible that our algorithm will shift and drift the grasp on the object either moving within a continuous grasp manifold or even jump to a different one (for example with a mug, the grasp might “walk” through the rim and, in some cases, jump to the handle) based on how the object is moving. Additionally, due to the fact that no semantic information is provided, if objects are cluttered or moving in close proximity, the grasp proposal could jump from one object to another. Segmentation algorithms are, as of today, too computationally expensive and cannot be incorporated into our approach without a dramatic drop in our sampling frequency. Lastly, grasping faster moving objects likely will require an additional estimator which predicts the future movement of the target to provide a feed-forward mechanism in our control loop.Perhaps one the biggest challenges of building a closed loop grasping system is the motion planner which needs to consume a constantly evolving goal pose. In our experiments, we use a cartesian controller to drive the robot movement due to its computational efficiency. However, this approach is completely unaware of the scene or the robot kinematics, which can cause the robot to collide with other objects or itself, as well as encountering joint limits during motion. A robust system will require a collision-aware motion planner which can replan in the order of tens of milliseconds while keeping the target object in view.IEEEtran | http://arxiv.org/abs/2310.18459v1 | {
"authors": [
"Pedro Piacenza",
"Jiacheng Yuan",
"Jinwook Huh",
"Volkan Isler"
],
"categories": [
"cs.RO"
],
"primary_category": "cs.RO",
"published": "20231027201230",
"title": "VFAS-Grasp: Closed Loop Grasping with Visual Feedback and Adaptive Sampling"
} |
Laboratories for Computational Physics and Fluid Dynamics,U.S. Naval Research Laboratory, 4555 Overlook Ave SW, Washington, DC 20375 Discontinuous Galerkin method; pressure equilibrium; spurious pressure oscillations; contact discontinuity; contact interfaceA note on reducing spurious pressure oscillations in fully conservative discontinuous Galerkin simulations of multicomponent flows Eric J. Ching, Ryan F. Johnson, and Andrew D. Kercher January 14, 2024 ================================================================================================================================== |#1{{#1}} #1#2#1e#2#1#1@pprintTitleoddheademptyevenheademptyoddfootevenfootoddfoot DISTRIBUTION STATEMENT A. Approved for public release. Distribution is unlimited.footnote-1 § INTRODUCTION The compressible, inviscid, multicomponent Euler equations in d spatial dimensions are given by∂ y/∂ t+∇·ℱ(y)=0where t is the time, y(x,t):ℝ^d×ℝ^+→ℝ^m is the vector of m conservative variables, expanded asy=(ρ v_1,…,ρ v_d,ρ e_t,C_1,…,C_n_s)^T,and ℱ(y):ℝ^m→ℝ^m× d is the convective flux, the kth spatial component of which is written as ℱ_k(y)=(ρ v_kv_1+Pδ_k1,…,ρ v_kv_d+Pδ_kd,v_k(ρ e_t+P),v_kC_1,…,v_kC_n_s)^T.x=(x_1,…,x_d) is the vector of physical coordinates, ρ is the density, v=(v_1,…,v_d) is the velocity vector, e_t is the mass-specific total energy, C=(C_1,…,C_n_s) is the vector of n_s species concentrations, and P is the pressure, which is computed using the ideal gas law asP=R^0T∑_i=1^n_sC_i, where R^0 is the universal gas constant and T is the temperature. The mass fraction of the ith species is computed asY_i=ρ_i/ρ,where ρ_i=W_iC_i is the partial density, with W_i denoting the molecular weight of the ith species, and ρ=∑_i=1^n_sW_iC_i is the total density. In this work, we assume d=1 and mixtures of thermally perfect gases, such that the specific heat capacities vary with temperature.It is well-known that spurious pressure oscillations are typically generated at contact interfaces when using fully conservative numerical schemes to solve Equation (<ref>) <cit.>, an issue not observed when considering a monocomponent calorically perfect gas. These pressure oscillations may lead to solver failure and can still occur even if discrete entropy stability is satisfied <cit.>. Quasi-conservative methods are usually employed to prevent the occurrence of such oscillations. For example, the double-flux approach <cit.> employs elementwise-constant, thermodynamically frozen auxiliary variables to mimic a calorically perfect gas and mathematically guarantee preservation of pressure equilibrium at contact interfaces. A popular numerical method that has successfully been applied to multicomponent flows (both nonreacting and reacting) in recent years is the discontinuous Galerkin (DG) method <cit.>. This family of numerical schemes boasts a number of desirable properties, such as arbitrarily high order of accuracy on unstructured meshes, compatibility with local grid and polynomial adaptation, and suitability for modern compute hardware. Initial efforts to extend DG schemes to multicomponent flows employed the double-flux approach <cit.>. This work, however, focuses on fully conservative DG schemes so that any issues associated with not maintaining conservation are circumvented. 
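For reference, the mixture relations above, pressure from the ideal gas law for a mixture of thermally perfect gases and mass fractions from molar concentrations, translate directly into a short helper; SI units and the function names are assumptions for illustration.

```python
import numpy as np

R0 = 8.314462618   # universal gas constant [J/(mol K)]

def mixture_pressure(T, concentrations):
    """Ideal-gas pressure P = R^0 T sum_i C_i for species concentrations C_i [mol/m^3]."""
    return R0 * T * np.sum(concentrations)

def mass_fractions(concentrations, molecular_weights):
    """Y_i = W_i C_i / rho with rho = sum_i W_i C_i."""
    partial_densities = np.asarray(molecular_weights) * np.asarray(concentrations)
    return partial_densities / partial_densities.sum()
```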
A handful of fully conservative DG schemes have been introduced that can maintain pressure equilibrium at contact interfaces (while preserving order of accuracy in smooth regions of the flow and without artificial dissipation) in an approximate sense, i.e., pressure oscillations can occur but remain small over long periods of time and do not cause the solver to diverge if the solution is adequately resolved. For example, Johnson and Kercher <cit.> computed a canonical test case involving the advection of a constant-pressure, hydrogen/oxygen thermal bubble. They found that a colocated scheme successfully maintained pressure equilibrium, but overintegration, which is important for reducing aliasing errors, rapidly led to solver divergence. As such, they developed an overintegration approach in which the flux is evaluated with a modified state based on an approximate pressure that is projected onto the selected finite element space via interpolation. This approach was used to successfully compute complex multicomponent flows, including a moving detonation wave and a reacting shear layer configuration <cit.>. Additionally, it can maintain pressure equilibrium when using multidimensional, curved elements <cit.> and can be easily incorporated into a positivity-preserving, entropy-bounded framework <cit.>. Bando <cit.> further investigated this flux-evaluation approach and compared it to another approach in which the pressure is projected onto the finite element space via L^2-projection (as opposed to interpolation). He found that in the same thermal-bubble test case, the interpolation-based approach is simpler and yielded smaller deviations from pressure equilibrium although both approaches (unlike standard flux evaluation) maintained stability over long times. He also investigated the influences of initialization strategy and choice of quadrature rule. A somewhat similar approach was proposed by Franchina et al. <cit.>, in which the primitive variables are treated as the unknowns and an L^2-projection of the pressure is employed. Implicit time integration was used in <cit.>; though compatible with explicit time stepping, a solution-dependent Jacobian term must be inverted at each iteration, making it arguably less appropriate for explicit time integration (which was used in <cit.> and <cit.>).In this short note, we revisit the flux-evaluation approaches considered in <cit.> and <cit.>, which successfully maintained pressure equilibrium in the canonical hydrogen/oxygen thermal-bubble advection test. We consider high-pressure, nitrogen/n-dodecane thermal-bubble advection at low and high velocities. This type of test case has been previously used to assess numerical schemes designed for simulating transcritical/supercritical, real-fluid flows with, for example, cubic equations of state and more complicated thermodynamic relations <cit.>. Here, although we restrict ourselves to the thermally perfect gas model, the increased nonlinearity of the thermodynamics of the nitrogen/n-dodecane mixture is nevertheless perhaps more representative of certain complicated mixtures at realistic conditions than the simpler hydrogen/oxygen case. Furthermore, it makes the considered case more effective at revealing deficiencies in numerical techniques. 
Also of interest is the influence of projecting additional variables (other than solely pressure) to the finite element trial space.§ MATHEMATICAL FORMULATION§.§ Discontinuous Galerkin discretization The semi-discrete form of Equation (<ref>) is given as: find y∈ V_h^p such that∑_κ∈𝒯(∂ y/∂ t,𝔳)_κ-∑_κ∈𝒯(ℱ(y),∇𝔳)_κ+∑_ϵ∈ℰ(ℱ^†(y^+,y^-,n),𝔳)_ℰ=0∀ 𝔳∈ V_h^p,wheredenotes the inner product, 𝒯 is the set of cells κ, ℰ is the set of interfaces ϵ, n is the normal vector at a given interface, y^+ and y^- are the states at both sides of a given interface, ℱ^†(y^+,y^-,n) is the numerical flux, and V_h^p is the space of basis and test functionsV_h^p={𝔳∈[L^2(Ω)]^m∀κ∈𝒯,.𝔳|_κ∈[𝖯_p(κ)]^m} ,with 𝖯_p(κ) denoting the space of polynomial functions of degree no greater than p in κ. Only periodic boundary conditions are considered in this work, such that all interfaces are interior interfaces and the jump operator, ·, is defined as · =(·)^+-(·)^-. We employ a nodal basis using Gauss-Lobatto points, such that the element-local solution approximation is given byy_κ=∑_j=1^n_by_κ(x_j)ϕ_j,where n_b is the number of basis functions, {ϕ_1,…,ϕ_n_b} is the set of basis functions, and { x_1,…,x_n_b} is the set of node coordinates. The flux in Equation (<ref>) is approximated asℱ_κ≈∑_k=1^n_cℱ(.(z(y_κ))|_x_k)φ_k,where n_c≥ n_b (with n_c>n_b corresponding to overintegration), {φ_1,…,φ_n_c} is the corresponding set of basis functions (also based on Gauss-Lobatto points), 𝒫 is a projection operator, and z(y):ℝ^m→ℝ^m is a vector of intermediate state variables. A major focus of this study is assessing the effects of various choices of 𝒫 and z, which will be detailed in the next subsection. The integrals in Equation (<ref>) are computed with a quadrature-free approach <cit.> that is also employed in <cit.>. The overall trends observed here are expected to also apply to a more conventional quadrature-based approach, which was employed in <cit.>; indeed, in the context of maintaining pressure equilibrium in the hydrogen/oxygen thermal-bubble configuration, both the quadrature-free and quadrature-based approaches yielded very similar results. §.§ Flux evaluation This subsection discusses the choices of 𝒫 and z considered in this study. §.§.§ Projection operators As in <cit.>, three projection operators are considered: * 𝒫_1(·)=id(·), the identity function. This corresponds to a standard flux evaluation, such that the RHS of Equation (<ref>) reduces to ∑_k=1^n_cℱ(y_κ(x_k))φ_k, regardless of the choice of z.* 𝒫_2(·)=ℐ(·), interpolatory projection onto V_h^p. Note that using a colocated scheme, this is equivalent to the identity function. This approach was first introduced by Johnson and Kercher <cit.>.* 𝒫_3(·)=Π(·), L^2-projection onto V_h^p.In this study, non-identity projection is applied only when overintegration is employed. Optimal convergence was previously observed when using interpolatory projection, 𝒫_2, <cit.> and L^2-projection, 𝒫_3, <cit.>; therefore, we do not assess order of convergence here.§.§.§ Intermediate state variables Three choices of z are considered: * z_1=(ρ v_1,…,ρ v_d,P,C_1,…,C_n_s)^T, where total energy is replaced with pressure, which was proposed by Johnson and Kercher <cit.>. Note that this was the only choice of intermediate state variables considered in <cit.> and <cit.>. * z_2=(v_1,…,v_d,P,C_1,…,C_n_s)^T, where total energy is replaced with pressure and momentum is replaced with velocity.* z_3=(v_1,…,v_d,P,T,Y_1,…,Y_n_s-1)^T, which represents a full set of primitive variables. 
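The practical difference between the interpolation-based and L^2-projection operators can be illustrated on a single one-dimensional p=2 element with a scalar field standing in for the pressure. The sketch below uses the three Gauss-Lobatto solution nodes and the 5-point Gauss-Lobatto rule as the overintegration points; it is a simplified scalar illustration under these assumptions, not the full flux-evaluation scheme of this work.

```python
import numpy as np

# p = 2 Gauss-Lobatto solution nodes and a 5-point Gauss-Lobatto rule on [-1, 1].
x_sol = np.array([-1.0, 0.0, 1.0])
x_oi = np.array([-1.0, -np.sqrt(3.0 / 7.0), 0.0, np.sqrt(3.0 / 7.0), 1.0])
w_oi = np.array([1.0 / 10.0, 49.0 / 90.0, 32.0 / 45.0, 49.0 / 90.0, 1.0 / 10.0])

def lagrange_basis(nodes, x):
    """Values of the Lagrange basis defined on `nodes`, evaluated at the points `x`."""
    V = np.ones((len(x), len(nodes)))
    for j, xj in enumerate(nodes):
        for m, xm in enumerate(nodes):
            if m != j:
                V[:, j] *= (x - xm) / (xj - xm)
    return V

def project_pressure(p_func):
    """Pressure at the overintegration points from (a) interpolatory projection onto P2
    (evaluate at the solution nodes, then re-expand) and (b) L2-projection onto P2."""
    phi = lagrange_basis(x_sol, x_oi)            # basis at the overintegration points
    p_interp = phi @ p_func(x_sol)               # interpolation-based projection
    M = phi.T @ (w_oi[:, None] * phi)            # mass matrix via the 5-point rule
    b = phi.T @ (w_oi * p_func(x_oi))            # load vector
    p_l2 = phi @ np.linalg.solve(M, b)           # L2-projection
    return p_interp, p_l2

# Example: p_interp, p_l2 = project_pressure(lambda x: 1.0 / (1.0 + 25.0 * x**2))
```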
§.§.§ Integration We compute p=2 solutions using both colocated integration and overintegration. The former was found to maintain stability over long times in the hydrogen-oxygen thermal-bubble advection test <cit.>, whereas the latter, in conjunction with 𝒫_1 (i.e., standard flux evaluation), rapidly led to solver divergence. For colocated integration, only 𝒫_1 is applied, which corresponds to standard colocated integration. For overintegration, the 5-point Gauss-Lobatto nodal set is used for the integration points. Note that Bando <cit.> assessed the effects of different quadrature rules. Such an investigation is not repeated here.§ RESULTS This section presents results for the advection of a high-pressure, nitrogen/n-dodecane thermal bubble <cit.>. Although this test case was presented in <cit.> and <cit.> with discontinuous initial conditions, discontinuity-capturing schemes were applied in those studies. Here, we are specifically interested in the performance of DG schemes without any discontinuity-capturing techniques (e.g., dissipative limiters or artificial viscosity). Similar to the hydrogen/oxygen thermal-bubble configuration considered in <cit.>, we modify the initial condition to be smooth (but still with high gradients) asY_n-C_12H_26=1/2[1-tanh(25|x|-0.2)],Y_N_2= 1-Y_n-C_12H_26,T =T_min+T_max/2-T_max-T_min/2tanh(25|x|-0.2) K,P = 6 MPa,where T_min=363 K and T_max=900 K. In the hydrogen/oxygen case, despite the smooth initial condition, underresolution leads to spurious pressure oscillations that, with standard overintegration, grow rapidly and lead to solver divergence <cit.>. We consider two advection velocities, v=1 m/s, as in <cit.>, and v=600 m/s, as in <cit.>, since we have found that changing the velocity can yield markedly different results. The computational domain is [-0.5,0.5] m, partitioned into 50 line cells. Both sides of the domain are periodic. The HLLC <cit.> flux function is employed. All solutions are initialized using interpolation and then integrated forward in time for ten advection periods using the third-order strong-stability-preserving Runge-Kutta scheme <cit.> with CFL=0.8 based on the order-dependent linear-stability constraint. To assess deviations from pressure equilibrium, we compute the following global quantity as a function of time <cit.>:Δ P(t)=max_xP(t,x)-min_xP(t,x),where the maximum and minimum are evaluated over all solution nodes. All simulations are performed using a modified version of the JENRE® Multiphysics Framework <cit.> that incorporates the techniques described in this work. In the following, τ denotes the time to advect the solution one period, and P_0 is the initial pressure of 6 MPa. §.§ Low velocity, v=1 m/s Figure <ref> presents the temporal variation of Δ P for the following approaches: * Colocated integration.* Standard overintegration, i.e., 𝒫_1.* Modified overintegration using all combinations of {𝒫_2,𝒫_3} and { z_1,z_2,z_3}.Δ P is sampled every τ/10 seconds until the solution either diverges or is advected for ten periods (i.e., t=10τ). The solution diverges rapidly in the case of standard overintegration or modified integration with {𝒫_2,z_3}. 𝒫_2 in conjunction with z_2 or z_3 maintains solution stability for longer times, although the solution nevertheless blows up well before t=10τ. Only colocated integration and 𝒫_3, regardless of choice of z, maintain stability through ten advection periods. Deviations from pressure equilibrium are overall smallest with colocated integration. 
In the context of 𝒫_3, z_3 preserves pressure equilibrium slightly better than z_1 and z_2, both of which give nearly identical results. At early times, however, 𝒫_2 yields the smallest pressure oscillations (which then grow rapidly before leading to solver failure). Figure <ref> presents the temperature profiles for the stable solutions at t=10τ. Despite most effectively maintaining pressure equilibrium, colocated integration leads to appreciable oscillations in the temperature profile that are not observed in the 𝒫_3 solutions, all of which exhibit good agreement with the exact solution. §.§ High velocity, v=600 m/s As in the previous subsection, Figure <ref> displays the temporal variation of Δ P for the considered approaches. The 𝒫_2 solutions again initially exhibit very small pressure oscillations before diverging, with the z_3 case diverging most rapidly. Furthermore, when using 𝒫_3, z_3 again gives smaller deviations from pressure equilibrium than z_1 and z_2. However, key differences with the low-velocity case are observed. First, the magnitude of pressure oscillations is generally greater in the high-velocity case, although the 𝒫_3 solutions yield deviations of similar magnitude between both velocities. Second, standard overintegration maintains stability throughout the simulation. Additionally, of the stable solutions, the colocated case exhibits the largest pressure deviations, which would likely continue to grow if the simulation is run for longer times. In contrast, the maximum pressure deviations for the other stable solutions seem to have either plateaued or begun to plateau. Pressure equilibrium is the most well-preserved in the 𝒫_3 solutions, particularly in conjunction with z_3. Figure <ref> presents the temperature profiles of the stable solutions at t=10τ. Temperature oscillations are observed in the cases of colocated integration and standard overintegration. In contrast, the 𝒫_3 solutions exhibit better agreement with the exact temperature.To help explain why overintegration with 𝒫_3 can lead to higher solution stability than both standard overintegration (i.e., 𝒫_1) and overintegration with 𝒫_2, we take the 𝒫_1 solution at t=10τ and then evaluate the pressure in the cell κ=[-0.22,-0.2] m with the 𝒫_1, 𝒫_2, and 𝒫_3 projection operators. The resulting pressure-deviation profiles in the given cell are displayed in Figure <ref>. Noticeable pressure deviations are located at the faces in the 𝒫_1 case (i.e., the pressure is evaluated normally); since 𝒫_2 (at least using Gauss-Lobatto points) does not modify pressure at the faces, overall deviation from pressure equilibrium is not appreciably reduced. In contrast, 𝒫_3 markedly improves preservation of pressure equilibrium throughout the cell, indicating that L^2-projection can act as a mechanism to reduce spurious oscillations that initially emerge via, for instance, underresolution or inexact evaluation of the flux. In the case of z_2 or z_3, oscillations of other variables can also be mitigated. However, it should be noted that 𝒫_2 can reduce pressure oscillations if deviations are large inside the cell but not at the faces. Furthermore, there is no guarantee that 𝒫_3 will always reduce spurious oscillations. § CONCLUDING REMARKS In this short note, we reevaluated previously introduced techniques designed to reduce spurious pressure oscillations at contact interfaces in multicomponent flows in the context of overintegrated DG discretizations <cit.>. 
Specifically, we focused on strategies that do not (a) introduce conservation error, (b) rely on artificial viscosity or limiting, or (c) degrade order of accuracy in smooth regions of the flow. The considered techniques, which employ a projection of the pressure (and potentially additional variables) onto the finite element trial space via either interpolation or L^2-projection, were previously shown to maintain approximate pressure equilibrium and thus stable solutions over long times (in contrast to a standard DG scheme with overintegration) in the advection of a hydrogen/oxygen thermal bubble. In <cit.>, interpolation-based projection was considered the preferred approach due to better preservation of pressure equilibrium.In this work, we considered a more challenging test case: advection of a high-pressure nitrogen/n-dodecane thermal bubble at both low and high velocities. All simulations were performed on a 50-cell grid using p=2. Key observations from this test case are as follows: * Interpolation-based projection always led to solver divergence. Although it overall maintained stability for longer times than standard overintegration in the low-velocity case, it was outperformed by standard overintegration in the high-velocity case. * L^2-projection always maintained solution stability. It produced larger pressure deviations than colocated integration in the low-velocity case but more effectively maintained pressure equilibrium than all other approaches in the high-velocity case. * Performing L^2-projection of a full set of primitive variables (specifically, mass fractions, pressure, temperature, and velocity) more effectively preserved pressure equilibrium than if only pressure or both pressure and velocity were projected. However, projecting a smaller set of variables may be sufficient for stability.* L^2-projection led to better predictions of temperature than colocated integration and standard overintegration.* Colocated integration always maintained solution stability, but led to inferior temperature predictions, indicating that higher resolution (than in the case of overintegration) may be needed to offset the greater integration error.These findings suggest that this configuration is more effective than the hydrogen/oxygen configuration at testing the ability of a numerical scheme to preserve pressure equilibrium, even without consideration of a cubic equation of state or more complicated thermodynamic relations commonly associated with it <cit.>. Furthermore, it is also valuable to consider different advection velocities. It should also be noted that although the L^2-projection-based approach is the only overintegration strategy that prevented solver divergence across all considered conditions, the interpolation-based approach is simpler and was shown to maintain robustness across a variety of challenging test cases involving more realistic flow conditions and geometries, including moving detonation waves and a chemically reacting shear layer <cit.>, suggesting that it may still be a reliable choice. Future work will involve consideration of real-fluid effects, in particular a cubic equation of state and thermodynamic departure functions <cit.>, in multiple dimensions. The ability of the considered techniques, specifically the L^2-projection-based strategy, to maintain pressure equilibrium and stability will be further assessed in this more challenging context of transcritical and supercritical flows.
Furthermore, a fully conservative finite volume scheme that mathematically guarantees preservation of pressure equilibrium was recently developed by Fujiwara et al. <cit.>; an extension to DG schemes may indeed be worth pursuing.§ ACKNOWLEDGMENTS This work is sponsored by the Office of Naval Research through the Naval Research Laboratory 6.1 Computational Physics Task Area. elsarticle-num | http://arxiv.org/abs/2310.17792v1 | {
"authors": [
"Eric J. Ching",
"Ryan F. Johnson",
"Andrew D. Kercher"
],
"categories": [
"physics.flu-dyn"
],
"primary_category": "physics.flu-dyn",
"published": "20231026214409",
"title": "A note on reducing spurious pressure oscillations in fully conservative discontinuous Galerkin simulations of multicomponent flows"
} |
Stability of Inverse Problems for Steady Supersonic Flows Past Perturbed Cones]Stability of Inverse Problems for Steady Supersonic Flows Past Lipschitz Perturbed ConesGui-Qiang G. Chen: Mathematical Institute, University of Oxford, Radcliffe Observatory Quarter, Woodstock Road, Oxford, OX2 6GG, UK; School of Mathematical Sciences, Fudan University, Shanghai 200433, China [email protected] Pu: Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China; School of Mathematical Sciences, Fudan University, Shanghai 200433, China [email protected] Yongqian Zhang: School of Mathematical Sciences, Fudan University, Shanghai 200433, China [email protected] [2020]35B07, 35B20, 35D30, 35L65, 35L67; 76J20, 76L05, 76N10 We are concerned with inverse problems for supersonic potential flows past infinite axisymmetric Lipschitz cones. The supersonic flows under consideration are governed by the steady isentropic Euler equations for axisymmetric potential flows, which give rise to a singular geometric source term. We first study the inverse problem for the stability of an oblique conical shock as an initial-boundary value problem with both the generating curve of the cone surface and the leading conical shock front as free boundaries. We then establish the existence and asymptotic behavior of global entropy solutions with bounded BV norm of the inverse problem, under the condition that the Mach number of the incoming flow is sufficiently large and the total variation of the pressure distribution on the cone is sufficiently small. To this end, we first develop a modified Glimm-type scheme to construct approximate solutions by self-similar solutions as building blocks to balance the influence of the geometric source term. Then we define a Glimm-type functional, based on the local interaction estimates between weak waves, the strong leading conical shock, and self-similar solutions. Meanwhile, the approximate generating curves of the cone surface are also constructed. Next, when the Mach number of the incoming flow is sufficiently large, by asymptotic analysis of the reflection coefficients in those interaction estimates, we prove that appropriate weights can be chosen so that the corresponding Glimm-type functional decreases in the flow direction. Finally, we determine the generating curves of the cone surface and establish the existence of global entropy solutions containing a strong leading conical shock, besides weak waves. Moreover, the entropy solution is proved to approach asymptotically the self-similar solution determined by the incoming flow and the asymptotic pressure on the cone surface at infinity. [ Yongqian Zhang January 14, 2024 ==================== § INTRODUCTION We are interested in the structural stability of inverse problems for the three-dimensional (3-D) steady supersonic potential flows past a Lipschitz perturbed cone with given states of the incoming flow together with Lipschitz perturbed pressure distributions on its surface. The shock stability problem of steady supersonic flows past Lipschitz cones is fundamental for the mathematical theory of the multidimensional (M-D) hyperbolic systems of conservation laws, since its solutions are time-asymptotic states and global attractors of general entropy solutions of time-dependent initial-boundary value problems (IBVP) with abundant nonlinear phenomena, besides its significance to many fields of applications including aerodynamics; see <cit.> and references cited therein. 
Meanwhile, the corresponding inverse problems play essential roles in airfoil design; see <cit.>. As indicated in <cit.>, when a uniform supersonic flow of constant speed from the far-field (negative infinity) hits a straight cone, given a constant pressure distribution that is less than a critical value on the cone surface, the vertex angle of the cone can be determined such that there is a supersonic straight-sided conical shock attached to the cone vertex, and the state between the conical shock-front and the cone can be obtained by the shooting method, which is a self-similar solution; see Fig. <ref>. In this paper, we focus our analysis on the stability of an inverse problem, along with the background self-similar solutions, in the steady potential flows that are axisymmetric with respect to the x–axis, given the pressure distributions of gas on the cones, whose boundary surfaces in ℝ^3, formed by the rotation of generating curves of form Γ:= {(x,b(x)): x≥0} around the x–axis, are to be determined; see Fig. <ref>. To be precise, the governing 3-D Euler equations for steady potential conical flows are of the form:{ (ρ u)_x+(ρ v)_y=-ρ v/y, v_x-u_y=0, .together with the Bernoulli law:u^2+v^2/2+c^2/γ-1= u_∞^2/2+c_∞^2/γ-1,where U:=(u,v)^⊤ is the velocity in the (x,y)–coordinates, ρ is the flow density, and U_∞=(u_∞,0)^⊤ and ρ_∞ are the velocity and the density of the incoming flow, respectively. The Bernoulli law in (<ref>) is obtained via the constitutive relation between pressure p and density ρ: p=ρ^γ, with γ>1 for the polytropic isentropic gas and γ=1 for the isothermal flow, under scaling. In particular, c=:√(γ p/ρ) is called the sonic speed, and M:=√(u^2+v^2/c^2) is called the Mach number.The Bernoulli law (<ref>) can be written asu^2+v^2/2+(u^2+v^2)M^-2/γ-1= u_∞^2/2+M_∞^-2/γ-1.Without loss of generality, we may choose u_∞=1 by scaling; otherwise, we can simply scale: U→ u_∞^-1U, in system (<ref>) and (<ref>). With the fixed u_∞=1, M_∞→∞ is equivalent to p_∞→0, or c_∞→0.System (<ref>) can be written in the form:∂_xW(U)+∂_yH(U)=E(U,y)with U=(u,v)^⊤, whereW(U)=(ρ u,v)^⊤, H(U)=(ρ v,-u)^⊤, E(U,y)=(-ρ v/y,0)^⊤,and ρ is a function of U determined by the Bernoulli law (<ref>).When ρ>0 and u>c, U can also be presented by W(U)=(ρ u,v)^⊤, i.e., U=U(W), by the implicit function theorem, since the Jacobian:(∇_UW(U))=-ρ/c^2(u^2-c^2)<0.Regarding x as the time variable, (<ref>) can be written as∂_xW+∂_yH(U(W))=E(U(W),y).Therefore, system (<ref>)–(<ref>) becomes a hyperbolic system of conservation laws with source terms of form (<ref>). Such nonhomogeneous hyperbolic systems of conservation laws also arise naturally in other problems from many important applications, which exhibit rich phenomena; for example, see <cit.> and the references cited therein.Throughout this paper, the following conditions are assumed: p^b(x)>0 for x>0,p^b(x)=p_0,where x_0>0, p_0∈ (0, p^*) for some p^*>0 to be determined by γ>1, and(p^b)'_+(x)=lim_t→ x+p^b(t)-p^b(x)/t-x∈BV([0,∞)).The velocity of the incoming flow U_∞=(1,0)^⊤ is supersonic: M_∞>1. Given a perturbed pressure distribution p^b(x) on the cone surface, the problem is axisymmetric with respect to the x-axis. Thus, it suffices to analyze the problem in the half-space {y≤0}. 
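We remark that the non-degeneracy of ∇_UW(U) used in the reduction U=U(W) above is elementary to verify symbolically. The following sketch does so with a computer algebra system; the use of sympy and the chosen symbol names are assumptions made only for this illustration, and the check reproduces the identity det(∇_UW(U))=ρ(1-u^2/c^2)=-(ρ/c^2)(u^2-c^2) stated above.

import sympy as sp

# Symbolic check (illustrative; symbol names are choices for this sketch only) that
# det(grad_U W(U)) = rho*(1 - u^2/c^2), with rho = rho(u, v) determined by the Bernoulli
# law (u^2 + v^2)/2 + c^2/(gamma - 1) = B_infty and c^2 = gamma*rho^(gamma - 1).
u, v, gamma, B = sp.symbols('u v gamma B_infty', positive=True)
c2 = (gamma - 1) * (B - (u**2 + v**2) / 2)        # c^2 expressed through the Bernoulli law
rho = (c2 / gamma) ** (1 / (gamma - 1))           # invert c^2 = gamma * rho^(gamma - 1)
W = sp.Matrix([rho * u, v])                       # W(U) = (rho*u, v)
det = sp.simplify(W.jacobian([u, v]).det())
print(sp.simplify(det - rho * (1 - u**2 / c2)))   # expected output: 0

In particular, the determinant is negative whenever u>c, so that U can indeed be recovered from W by the implicit function theorem, as noted above.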
Then the inverse problem is to find the generating curve y=b(x)≤ 0 of the cone surface and a global solution in the domain:Ω={(x,y) : x≥0,y<b(x)}with its upper boundary:Γ={(x,y) : x≥0,y= b(x)}such thatU·n|_Γ=0,where n=n(x,b(x)) =(-b'(x),1)^⊤/√(1+(b'(x))^2) is the corresponding outer normal vector to Γ at a differentiable point (x,b(x))∈Γ.With this setup, the inverse stability problem can be formulated into the following initial-boundary value problem (IBVP) for system (<ref>):Cauchy Condition:U|_x=0=U_∞:= (1,0)^⊤, Boundary Condition:p(x,b(x))=p^b(x). We first introduce the notion of entropy solutions for problem (<ref>)–(<ref>).(Entropy Solutions). Consider the inverse problem (<ref>)–(<ref>). A function b(x)∈Lip([0,∞)) is called a generating curve Γ of the cone surface as defined in (<ref>), and a vector function U=(u,v)^⊤∈(BV_loc∩L^∞)(Ω) with Ω defined in (<ref>) is called an entropy solution of (<ref>)–(<ref>) if they satisfy the following conditions: (1) For any test function ϕ∈C_0^1(ℝ^2;ℝ) and ψ∈C_0^1(Ω;ℝ),∬_Ω(ρ uϕ_x+ρ vϕ_y-ρ vyϕ)xy +∫_-∞^0ϕ(0,y)ρ_∞u_∞y=0,∬_Ω(vψ_x-uψ_y)xy=0,(2) For any convex entropy pair (ℰ,𝒬) with respect to W of (<ref>), i.e., ∇^2ℰ(W)≥0 and ∇𝒬(W)=∇ℰ(W)∇ H(U(W)),∬_Ω(ℰ(W(U))φ_x+𝒬(W(U))φ_y +∇_Wℰ(W(U))E(U,y)φ)xy+∫_-∞^0ℰ(W(U_∞))φ(0,y)y≥0for any φ∈C_0^1(Ω;ℝ) with φ≥0. For the potential flow, the Bernoulli law (<ref>) givesM^2/2+1/γ-1=B_∞/c^2for B_∞=u_∞^2/2+c_∞^2/γ-1, c^2=γρ^γ-1, and p=ρ^γ. Then the assumptions on pressure p^b can be reduced to the equivalent ones on the Mach number M_b on the unknown boundary Γ.Main Theorem (Existence and stability). Let (<ref>)–(<ref>) hold, and let 1<γ<3 and0<p_0<p^*:=((√(γ+7)-√(γ-1)) √(γ-1/16γ))^2γ/γ-1.Assume that M_∞ is sufficiently large and ϵ_0 is sufficiently small such thatT.V. p^b<ϵ_0,then the following statements hold: Main (Main)Main* Global existence: IBVP (<ref>)–(<ref>) determines a boundary y=b(x)=∫_0^xb'(t)t with b'(x)∈BV(ℝ_+), which is a small perturbation of the generating curve of the straight-sided cone surface: y=b_0x, and admits a global entropy solution U(x,y) with bounded total variation:sup_x>0T.V.{U(x,y):-∞<y<b(x)}<∞in the sense of Definition <ref>, containing a strong leading shock-front y=χ(x)=∫_0^xs(t) t with s(x)∈BV(ℝ_+) which is a small perturbation of the strong straight-sided conical shock-front y=s_0x, so that the solution between the leading shock-front and the cone surface is a small perturbation of the background self-similar solution of the straight-sided cone case, where s_0 denotes the slope of the corresponding straight-sided shock-front, and b_0 is the slope of the generating curve of the straight-sided cone surface. * Asymptotic behavior: For the entropy solution U(x,y),lim_x→∞sup{|U_ϑ(x,y)-Ũ (σ;s_∞,G(s_∞))| :χ_ϑ(x)<y<b_ϑ(x)}=0with Ũ(σ;s_∞,G(s_∞)) satisfying Ũ(s_∞;s_∞,G(s_∞))=G(s_∞),Ũ(b_∞';s_∞,G(s_∞))·(-b_∞',1)=0,12|Ũ(b_∞';s_∞,G(s_∞))|^2 +γ (p^b_∞)^γ-1/γγ-1 =12+γ p_∞^γ-1/γγ-1,wherep^b_∞=lim_x→∞p^b(x), s_∞=lim_x→∞s_ϑ(x), b'_∞=lim_x→∞(b_ϑ)'_+(x),Ũ(σ;s,G(s)) is the state of the self-similar solution, and G(s) denotes the state connected to state U_∞ by the strong leading shock-front of speed s. During the last forty years, the shock stability problem has been studied for the perturbed cones with small perturbations of the straight-sided cone. For polytropic potential flow near the cone vertex, the local existence of piecewise smooth solutions was established in <cit.> for both symmetrically perturbed cone and pointed body, respectively. 
Lien-Liu in <cit.> first analyzed the global existence of weak solutions via a modified Glimm scheme for the uniform supersonic isentropic Euler flow past over a piecewise straight-side cone, provided that the cone has a small opening angle (the initial strength of the shock-front is relatively weak) and the Mach number of the incoming flow is sufficiently large. Later on, Wang-Zhang considered in <cit.> for supersonic potential flow for the adiabatic exponent γ∈(1,3) over a symmetric Lipschitz cone with an arbitrary opening angle less than the critical angle and constructed global weak solutions that are small perturbations of the self-similar solution, given that the total variation of the slopes of the perturbed generating curves of the cone is sufficiently small and the Mach number of the incoming flow is sufficiently large. In addition, for the isothermal flows (i.e., γ=1), Chen-Kuang-Zhang in <cit.> made full use of delicate expansions up to second-order as the Mach number of the incoming flow goes to infinity and provided a complete proof of the global existence and asymptotic behavior of conical shock-front solutions in BV when the isothermal flows past Lipschitz perturbed cones that are small perturbations of the straight-sided one.When the surface of the perturbed cone is smooth, using the weighted energy methods, Chen-Xin-Yin established the global existence of piecewise smooth solutions in <cit.>. They considered a 3-D axisymmetric potential flow past a symmetrically perturbed cone under the assumption that the attached angle is sufficiently small and the Mach number of the incoming flow is sufficiently large. This result was also extended to the M-D potential flow case;see <cit.> for more details. Under a certain boundary condition on the cone surface, the global existence of the M-D conical shock solutions was obtained in <cit.> when the uniform supersonic incoming flow with large Mach number past a generally curved sharp cone. Meanwhile, using a delicate expansion of the background solution, Cui-Yin established the global existence and stability of a steady conical shock wave in <cit.> for the symmetrically perturbed supersonic flow past an infinitely long cone whose vertex angle is less than the critical angle. More recently, by constructing new background solutions that allow the speeds of the incoming flows to approach the limit speed, the global existence of steady symmetrically conical shock solutions was established in Hu-Zhang <cit.> when a supersonic incoming potential flow hits a symmetrically perturbed cone with an opening angle less than the critical angle. We also remark that some pivotal results have been obtained on the stability of M-D transonic shocks under symmetric perturbations of the straight-sided cones or the straight-sided wedges, as well as on Radon measure solutions for steady compressible Euler equations of hypersonic-limit conical flows; see <cit.> and the references cited therein.Corresponding to these shock stability problems, two types of inverse problems have been considered. One type is for the problems of determining the shape of the wedge in the planar steady supersonic flow for the given location of the leading shock front. This kind of inverse problems and the related inverse piston problems have been considered by Li-Wang in <cit.>, where the leading shock-front is assumed to be smooth and the characteristic method is applied to find the piecewise smooth solution with the leading shock as its only discontinuity; see also <cit.>. 
The other one is for the problems of determining the shape of the wedge or the cone with given pressure distribution on it in the planar steady supersonic flow (cf. <cit.>) or axisymmetric conical steady supersonic flow. Though various numerical methods and the linearized method have been proposed to deal with this type of problems, there seems no rigorous result on the existence of solutions to such inverse problems for steady supersonic flow past a cone.In this paper, we develop a modified Glimm scheme to establish the global existence and the asymptotic behavior of conical shock-front solutions of the inverse problem in BV in the flow direction, when the isentropic flows past cones with given pressure distributions on their surfaces, which are small perturbations of a constant pressure less than the critical value. Mathematically, our problem can be formulated as a free boundary problem governed by 2-D steady isentropic irrotational Euler flows with geometric structure.There are two main difficulties in solving this problem: One of them is the singularity generated by the geometric source term, and the other is that, compared to the shock stability problem for supersonic flows past a cone, the generating curve of the cone is unknown. For supersonic flows past an axisymmetric cone with the given generating curve, a modified Glimm scheme developed by Lien-Liu in <cit.> is used to construct approximate solutions (see also <cit.>). In the previous construction, in order to incorporate with the geometric source term and the boundary condition on the approximate generating curve, the center (x_0,0) of the self-similar variable σ=x-x_0/y is defined to be the intersection of the x-axis and the line on which the current approximate generating curve (a line segment of a polyline) lies, and the center is changed according to the random choice at each step when the ordinary differential equations (<ref>) are solved. As a result, the approximate solution on the approximate generating curve is a piecewise constant vector-valued function that satisfies the boundary condition everywhere. However, in the inverse problem under consideration in this paper, the generating curve of the cone is to be determined, apriori unknown, so that the approach in Lien-Liu (c.f.<cit.>) could not apply directly.To overcome the new difficulties, we first fix the center of the self-similar variable to be the origin when solving the differential equations (<ref>) and then develop a modified Glimm scheme to construct approximate solutions U_Δ x,ϑ(x,y) via the self-similar solutions as building blocks in order to incorporate the geometric source term. In our construction, the grid points are fixed at the beginning, which are the intersections of lines x=x_h, h∈ℕ, and the rays issuing from the origin (the vertex point of the cone). Consequently, this construction allows us to find new terms θ_b(h) to control the increasing part of the Glimm type functional near the approximate boundary (see Lemma <ref>), while it brings us an extra error so that the boundary conditions on the approximate boundary are no longer satisfied everywhere, but are satisfied at the initial point of each approximate boundary at each step. Nevertheless, in Proposition <ref>, we are able to prove that this error goes to zero as the grid size Δ x tends to zero. Furthermore, we make careful asymptotic expansions of the self-similar solutions with respect to M_∞^-1. 
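To make the role of these self-similar building blocks concrete, the following sketch integrates the ordinary differential equations referred to above in the variable σ=y/x, with c^2 recovered from the Bernoulli law (u_∞=1). It is a minimal illustration only: the adiabatic exponent, the value of c_∞^2, the starting state, and the σ-interval below are hypothetical placeholders, not data taken from the background solution.

import numpy as np

# Hypothetical illustration: march (u, v) in sigma = y/x for the self-similar equations,
# with c^2 = (gamma - 1)*(1 - u^2 - v^2)/2 + c_inf^2 from the Bernoulli law (u_inf = 1).
GAMMA = 1.4        # placeholder adiabatic exponent with 1 < gamma < 3
C_INF_SQ = 1.0e-4  # placeholder c_inf^2 (large incoming Mach number)

def c_sq(u, v):
    return 0.5 * (GAMMA - 1.0) * (1.0 - u * u - v * v) + C_INF_SQ

def rhs(sigma, state):
    u, v = state
    c2 = c_sq(u, v)
    denom = (1.0 + sigma ** 2) * c2 - (v - sigma * u) ** 2
    return np.array([c2 * v / denom, -c2 * v / (sigma * denom)])

def march(state0, sigma0, sigma1, n_steps=2000):
    # Classical fourth-order Runge-Kutta march from sigma0 to sigma1.
    h = (sigma1 - sigma0) / n_steps
    s, y = sigma0, np.array(state0, dtype=float)
    for _ in range(n_steps):
        k1 = rhs(s, y)
        k2 = rhs(s + 0.5 * h, y + 0.5 * h * k1)
        k3 = rhs(s + 0.5 * h, y + 0.5 * h * k2)
        k4 = rhs(s + h, y + h * k3)
        y = y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        s += h
    return y

# Placeholder supersonic state with v < 0, marched from one ray toward a neighboring one.
print(march((0.95, -0.20), sigma0=-0.25, sigma1=-0.22))

In the scheme constructed below, such a march is performed between consecutive grid rays, starting from the value selected by the random choice or produced by a local Riemann solution.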
We then make full use of the asymptotic expansion analysis of the background solutions with respect to M_∞^-1 to calculate the reflection coefficients K_r,1, K_w,2, K_s, and μ_w,2 of the weak waves reflected from both the boundary and the strong leading shock, and the self-similar solutions reflected from the strong leading shock to prove thatlim_M_∞→∞(|K_r,1||K_w,2|+|K_r,1||K_s||μ_w,2|)<1.Based on this, we choose some appropriate weights, independent of M_∞, in the construction of the Glimm-type functional and show that the functional is monotonically decreasing. Then the convergence of the approximate solutions is followed by the standard approach for the Glimm-type scheme as in <cit.>; see also <cit.>. Finally, the existence of entropy solutions and the asymptotic behavior of the entropy solutions are also proved.The remaining part of this paper is organized as follows: In <ref>, we give some preliminaries of the homogeneous system (<ref>) and then study Riemann-type problems in several cases and self-similar solutions of the unperturbed conic flow. Also, we calculate the limit states of related quantities as M_∞→∞. In <ref>, we construct a family of approximate solutions via a modified Glimm scheme. In <ref>, we establish some essential interaction estimates in a small neighborhood in the limit state. Then, in <ref>, we define the Glimm-type functional and show the monotonicity of the Glimm-type functional and, in <ref>, we prove that there exists a subsequence of approximate solutions converging to the entropy solution. Finally, in <ref>, we give the asymptotic behavior of the entropy solution which, together with the existence theory, leads to our main theorem. § RIEMANN PROBLEMS AND SELF-SIMILAR SOLUTIONS OF THE UNPERTURBED CONIC FLOW Regarding x as the time variable, the simplified system of (<ref>):{ (ρ u)_x+(ρ v)_y=0, v_x-u_y=0, .is strictly hyperbolic with two distinctive eigenvalues:λ_i=uv+(-1)^ic√(u^2+v^2-c^2)/u^2-c^2,i=1, 2,for u>c_* and u^2+v^2<q_*^2, wherec_*=√(γ-1/γ+1+2c_∞^2/γ+1), q_*=√(1+2c_∞^2/γ+1).Denote q:=√(u^2+v^2) and θ:=arctanv/u. Thenλ_i=tan(θ+(-1)^iθ_m), i=1, 2,whereθ_m:=arctanc/√(q^2-c^2)is the Mach angel. A direct computation indicates θ_m∈(0,π/2). Next, we introduce the following lemma, whose proof can be found in <cit.>. For u>c_* and q<q_*,(-λ_i,1)·(∂λ_i/∂ u,∂λ_i/∂ v) = γ+1/2√(q^2-c^2)^3(θ+(-1)^iθ_m),i=1, 2. Then, settingr_i(U)=(-λ_i(U),1)/(-λ_i(U),1)·∇λ_i(U),i=1, 2,we see that r_i(U)·∇λ_i(U)=1 for i=1, 2.Denote the supersonic part of the shock polar byS((u_∞,0))={(u̅,v̅): c̅^2< u̅^2+v̅^2≤ 1},where (u̅,v̅) satisfies the Rankine–Hugoniot condition:{ ρ̅(u̅s-v̅)=ρ_∞s,u̅+v̅s=1, .with u̅^2+v̅^2/2+γρ̅^γ-1/γ-1 =1/2+c_∞^2/γ-1. LetS_1^-((u_∞,0))={(u̅,v̅) :(u̅,v̅)∈ S((u_∞,0)), v̅<0}be the part of shock polar corresponding to the λ_1–characteristic field. Similar to <cit.>, we can parameterize the shock polar S_1^-((u_∞,0)) by a C^2–function:G:s↦ G(s;U_∞) ,where G(s;U_∞) is a supersonic state connected with U_∞ by a shock of speed s. For simplicity, we write G(s;U_∞) as G(s) and use u̅(s) and v̅(s) to denote the components of G(s), that is, G(s)=(u̅(s),v̅(s))^⊤. Then we have the following property for S_1^-((u_∞,0)) (cf. <cit.>). For s<λ_1(U_∞), ρ̅(s) is a strictly monotonically decreasing function of s, andu̅(s) is a strictly monotonically increasing function of s.As in <cit.>, let σ=y/x. Then the equations in (<ref>) become{ (1-u^2/c^2)σ^2 u_σ -2uv/c^2σ^2 v_σ -(1-v^2/c^2)σ v_σ- v=0, u_σ+σ v_σ=0. 
.or, equivalently,{ u_σ=c^2v/(1+σ^2)c^2-(v-σ u)^2,v_σ=-c^2v/σ((1+σ^2)c^2-(v-σ u)^2),ρ_σ=ρ v(v-σ u)/σ((1+σ^2)c^2-(v-σ u)^2). .Given a constant state (u̅,v̅)=G(s) on S_1^-((u_∞,0)), there exists a local solution Ũ(σ;s,G(s))=(ũ(σ;s),ṽ(σ;s)) of system (<ref>) with initial data:(ũ(s;s), ṽ(s;s))=(u̅, v̅).This solution can be extended to an end-point (ũ(σ_e;s), ṽ(σ_e;s)) with ṽ(σ_e;s)/ũ(σ_e;s)=σ_e. As (u̅,v̅) varies on S_1^-((u_∞,0)), the collection of these end-states forms an apple curve (Fig. <ref>) through U_∞; see <cit.>. For these solutions, we have following properties, whose proof can be found in <cit.>. For ũ(s;s)>c̃(s;s) and σ∈(s,σ_e), then ũ(σ;s)σ-ṽ(σ;s)<0,∂ũ/∂σ<0,∂ṽ/∂σ<0,c̃(σ;s)-ṽ(σ;s)-σũ(σ;s)/√(1+σ^2) >c̃(s;s)-ṽ(s;s)-sũ(s;s)/√(1+s^2)>0,with ũ^2+ṽ^2/2+c̃^2/γ-1= 1/2+c_∞^2/γ-1.Thus, we obtain the following estimate of the self-similar solution (ũ(σ;s), ṽ(σ;s)). For ũ(s;s)>c̃(s;s) and σ∈(s,σ_e),1/1+s^2< ũ(σ;s)<ũ(s;s),c̃(s;s)<c̃(σ;s)<c̃(σ_e;s)<√((γ-1)s^2/2(1+s^2)+1/M_∞^2). To obtain the asymptotic expansion of the self-similar solution, we need the following properties of the shock polar. Setp^*=((√(γ+7)-√(γ-1)) √(γ-1/16γ))^2γ/γ-1. Let 1<γ<3 and p_0∈(0,p^*). For M_∞ large enough, the equations:{ ρ_0(u^♯s^♯-v^♯)=ρ_∞s^♯,u^♯+v^♯ s^♯=1,(u^♯)^2+(v^♯)^2/2+c_0^2/γ-1=1/2+c_∞^2/γ-1.have a unique solution (u^♯,v^♯,s^♯) with s^♯<0, where ρ_∞=p_∞^1/γ,ρ_0=p_0^1/γ, and c_0=√(γ) p_0^γ-1/2γ, such thatlim_M_∞→∞u^♯ =lim_M_∞→∞u_a=1-2c_0^2/γ-1, lim_M_∞→∞v^♯ =lim_M_∞→∞v_a =-√(2c_0^2/γ-1-2c_0^2)(1-2c_0^2/γ-1), lim_M_∞→∞ c_a^2 = c_0^2.whereu_a:=1/1+(s^♯)^2,v_a:=s^♯/1+(s^♯)^2,c_a:=√((γ-1)(s^♯)^2/2(1+(s^♯)^2)+1/M_∞^2), Proof. From the first two equations of (<ref>), we haveu^♯=ρ_0+ρ_∞(s^♯)^2/ρ_0(1+s^2),v^♯=(ρ_0-ρ_∞)s^♯/ρ_0(1+(s^♯)^2).With the help of the third equation of (<ref>), we have( ρ_0+ρ_∞(s^♯)^2/ρ_0(1+(s^♯)^2))^2 +((ρ_0-ρ_∞)s^♯/ρ_0(1+(s^♯)^2))^2 =1+2(c_∞^2-c_0^2)/γ-1,which gives(s^♯)^2 =2(c_0^2-M_∞^-2)ρ_0^2/(γ-1)(ρ_0^2-(γ M_∞^2)^-2/γ-1)-2(c_0^2-M_∞^-2)ρ_0^2=2c_0^2/γ-1-2c_0^2-2(γ-1)/(γ-1-2c_0^2)^2M_∞^-2 +O(M_∞^-4/γ-1)as M_∞→∞.Therefore, noting that s^♯<0, we obtains^♯=-√(2c_0^2/γ-1-2c_0^2)(1-γ -1/2c_0^2(γ-1-2c_0^2)M_∞^-2) +O(M_∞^-4/γ-1)as M_∞→∞.Substituting the above expansion into (<ref>)–(<ref>), yields (<ref>). □ Let 1<γ<3 and p_0∈(0,p^*). For M_∞ large enough, there exists ρ_d such thatρ_0^γ-1=ρ_d^γ+1-ρ_∞^γ+1/ρ_d^2-ρ_∞^2,and the equations:{ ρ_d(u_ds_d-v_d)=ρ_∞s_d,u_d+v_ds_d=1,u_d^2+v_d^2/2+c_d^2/γ-1=1/2+c_∞^2/γ-1, .have a unique solution (u_d,v_d,s_d) with s_d<0 and c_d=√(γρ_d^γ-1). Moreover, foru^♭:=1/1+s_d^2, v^♭:=s_d/1+s_d^2,we havelim_M_∞→∞ c_d^2 =c_0^2, lim_M_∞→∞u^♭ =lim_M_∞→∞u_d =1-2c_0^2/γ-1, lim_M_∞→∞ v^♭ = lim_M_∞→∞ v_d =-√(2c_0^2/γ-1-2c_0^2) (1-2c_0^2/γ-1). Proof. For each ρ_0, it is direct to find ρ_d such that (<ref>) holds. Moreover, p_d∈ (0,p^*) when M_∞ is large enough. By Lemma <ref>, we obtain the unique solution of (<ref>):s_d=2(c_0^2-c_∞^2)/γ-1-2(c_0^2-c_∞^2),u_d =ρ_d+ρ_∞s_d^2/ρ_d(1+s_d^2), v_d=(ρ_d-ρ_∞)s_d/ρ_d(1+s_d^2).Then we haves_d=-√(2c_0^2/γ-1-2c_0^2)(1-γ -1/2c_0^2(γ-1-2c_0^2)M_∞^-2) +O(M_∞^-4)as M_∞→∞.Substituting the above expansion into (<ref>)–(<ref>) yields (<ref>).□Given p_0∈(0,p^*), we now solve the following problem for s_0<σ<b_0:σ^2(1-ũ^2/c̃^2) ũ_σ -2ũṽσ^2/c^2ṽ_σ - (1-ṽ^2/c̃^2) ṽ_σσ-ṽ=0, ũ_σ+σṽ_σ=0,with the boundary conditions:ρ̃(ũs_0-ṽ)=ρ_∞s_0,ũ+ṽs_0=1 , ρ̃=ρ_0,ṽ=b_0ũ,and define (ũ(σ;s_0),ṽ(σ;s_0))=(1,0) for σ<s_0. Indeed, we have the following lemma.Let 1<γ<3 and p_0∈(0,p^*). 
For M_∞>K_1, problem (<ref>) has a unique solution ((ũ(σ;s_0), ṽ(σ;s_0)) containing a supersonic conical shock-front issuing from the vertex. In addition,lim_M_∞→∞(σ, c̃^2(σ;s_0))=(tanθ_0, c_0^2),lim_M_∞→∞(ũ(σ;s_0),ṽ(σ;s_0) =(cos^2θ_0, sinθ_0cosθ_0),andlim_M_∞→∞ũ(σ;s_0)/c̃(σ;s_0) =γ-1-2c_0^2/(γ-1)c_0>1,cos(θ_0±θ_m^0)>0,where θ_0=-arctan√(2c_0^2/γ-1-2c_0^2) and θ_m^0=lim_M_∞→∞θ_m for σ∈[s_0,b_0).Proof. Given p_0∈(0,p^*), by the shooting method as in <cit.>, problem (<ref>) has a unique solution (ũ(σ;s_0),ṽ(σ;s_0)) with ũ(s_0;s_0)>c̃(s_0;s_0), ρ̃(b_0;s_0)=p_0^1/γ, and ṽ(b_0;s_0)=b_0ũ(b_0;s_0).We then focus on the asymptotic expansions (<ref>). Lemma <ref> indicates that1/1+s_0^2<ũ(σ;s_0)≤ũ(s_0;s_0),c̃(s_0;s_0)≤c̃(σ;s_0)<c_0 <√((γ-1)s_0^2/2(1+s_0^2)+1/M_∞^2),for σ∈[s_0,b_0). Meanwhile, it follows from (<ref>) that c̃(s_0;s_0)>c_d. Then, due to Lemma <ref>, we see that u^♯<ũ(s_0;s_0)<u_d and s^♯<s_0<0. Therefore, we havec_d<c̃(s_0;s_0)≤c̃(σ;s_0)<c, u_a=1/1+(s^♯)^2<1/1+s_0^2< ũ(σ;s_0)≤ũ(s_0;s_0)<u_d.From Lemma <ref>–<ref>, we havelim_M_∞→∞c̃(σ;s_0)=c_0^2, lim_M_∞→∞ũ(σ;s_0)=1-2c_0^2/γ-1. Since ṽ(σ;s_0)<0, from the Bernoulli laws,ũ^2+ṽ^2/2+c̃^2/γ-1= 1/2+c_∞^2/γ-1, (u^♯)^2+(v^♯)^2/2+c_0^2/γ-1=1/2+c_∞^2/γ-1,we concludelim_M_∞→∞ṽ(σ;s_0)= -√(2c_0^2/γ-1-2c_0^2) (1-2c_0^2/γ-1). Again, by Lemma <ref>, we know thatṽ(s_0;s_0)/ũ(s_0;s_0)>b_0>σ≥ s_0>s^♯.Combining all the expansions obtained above, we obtain (<ref>).Furthermore, sincelim_M_∞→∞( ũ√(ũ^2+ṽ^2-c̃^2))^2 -( ṽc̃)^2 =lim_M_∞→∞(ũ^2-c̃^2)(ũ^2+ṽ^2)>0,we concludecos(θ_0±θ^0_ma) =lim_M_∞→∞cos(θ±θ_m) =lim_M_∞→∞ũ√(ũ^2+ṽ^2-c̃^2)∓ṽc̃/ũ^2+ṽ^2>0.This completes the proof.□Next, for G(s), we have the following expansions, whose proof can be found in <cit.>. For G(s)=(u̅(s),v̅(s))^⊤,lim_M_∞→∞(u̅(s_0), v̅(s_0)) =(cos^2θ_0, cosθ_0sinθ_0), lim_M_∞→∞(u̅_s(s_0), u̅_s(s_0)) =(-sin2θ_0cos^2θ_0, cos2θ_0cos^2θ_0),where u̅_s(s)= du̅(s)/ d s and v̅_s(s)= dv̅(s)/ d s.Now, we introduce the elementary wave curves of system (<ref>). We denote by W(p_0,p_∞) the curve formed by Ũ(σ;s_0)=(ũ(σ;s_0),ṽ(σ;s_0))^⊤ for s_0<σ<b_0, where p_0 is the corresponding pressure of the state at the endpoint. As in <cit.> (also cf. <cit.>), we parameterize the elementary i-wave curves for system (<ref>) in a neighborhood of W(p_0,p_∞):O_r(W(p_0,p_∞))=⋃_s_0<σ<b_0{U: |U-Ũ(σ;s_0)|<r}byα_i↦Φ_i(α_i;U)with Φ_i∈C^2 and.∂Φ_i/∂α_i|_α_i=0 =r_i(U) ,i=1,2. In the sequel, defineΦ(α_1,α_2;U)=Φ_2(α_2;Φ_1(α_1;U)).Denote Ũ(σ;σ_0,U_l) the solution to the ODE system (<ref>) with initial dataŨ|_σ=σ_0=U_lfor U_l∈ O_r(W(p_0,p_∞)). Then, as in <cit.>, we have For p_0∈(0,p^*),lim_M_∞→∞.Ũ(σ;σ_0)/σ|_{σ=σ_0, U_l∈ W(p_0,p_∞)} =(sinθ_0cos^3θ_0,-cos^4θ_0)^⊤. With all the limits given above, we obtain the following lemma, which is essential in wave-interaction estimates. For U_l∈ W(p_0,p_∞),lim_M_∞→∞(r_1(U_l),r_2(U_l))=4cos^2(θ_0+θ_m^0)cos^2(θ_0-θ_m^0) cos^2θ_0cos^2θ^0_msin(2θ_m^0)(γ+1)^2,lim_M_∞→∞(G'(s_0),r_1(G(s_0))) =-2 cos^2(θ_0-θ_m^0)cosθ^0_mcos^3θ_0sin(θ_0+θ_m^0)γ+1,lim_M_∞→∞(r_2(G(s_0)),G'(s_0)) =2 cos^2(θ_0+θ_m^0)cosθ^0_mcos^3θ_0sin(θ_0-θ_m^0)γ+1,lim_M_∞→∞(Ũ(σ;σ_0,G(s_0))/σ,G'(s_0)) =- cos^5θ_0sinθ_0,lim_M_∞→∞(r_2(U_l),Ũ(σ;σ_0,U_l)/σ) =2/γ+1cos^4θ_0cos^2(θ_0+θ_m^0)cosθ_m^0sinθ_m^0,lim_M_∞→∞(r_1(U_l),Ũ(σ;σ_0,U_l)/σ) =-2/γ+1cos^4θ_0cos^2(θ_0-θ_m^0)cosθ_m^0sinθ_m^0. Furthermore, we have the following propositions, which will be used in the construction of building blocks of our approximate solutions. 
For M_∞ sufficiently large, there exists ε_1>0 such that, for any U_r and U_l lie in O_ε_1(W(p_0,p_∞)), the Riemann problem (<ref>) with initial dataU|_x=x̅={ U_rfor y>y̅,U_lfor y<y̅, .admits a unique admissible solution consisting of at most two elementary waves α_1 for the 1-characteristic field and α_2 for the 2-characteristic field. Moreover, states U_l and U_r are connected byU_r=Φ(α_1,α_2;U_l).For M_∞ sufficiently large, there exists ε_2>0 such that, for any U_l∈ O_ε_2( W(p_0,p_∞)) and p_1, p_2∈ O_ε_2(p_0), there is δ_1 solving the equation:12|Φ(δ_1,0;U_l)|^2+γγ-1p_2^γ-1/γ =12|U_l|^2+γγ-1p_1^γ-1/γ. Proof. From (<ref>), we havelim_M_∞→∞1/2 .∂ |Φ(δ_1,0;U_l)|^2/∂δ_1|_δ_1=0 =U_l· r_1(U_l)≠0.By the implicit function theorem, there exists δ_1 such that (<ref>) holds, provided ε_2 sufficiently small.□For M_∞ sufficiently large, there exists ε_3>0 such that, for any U_l=U_∞ and U_r∈ O_ε_3( W(p_0,p_∞))∩ O_ε_3(G(s_0)), the Riemann problem (<ref>) with initial data (<ref>) admits a unique admissible solution that contains a strong 1-shock s_1 and a 2-weak wave β_2 of the 2-characteristic field. Moreover, states U_l and U_r are connected byU_r=Φ_2(β_2;G(s_1;U_l)). Proof. It follows from (<ref>) and Lemma <ref> thatlim_M_∞→∞(.∂Φ_2(β_2;G(s_1;U_l))/∂(s_1,β_2)|_{s_1=s_0, β_2=0}) =-lim_M_∞→∞(r_2(G(s_0),G'(s_0)))≠0.The existence of the solution of this Riemann problem is ensured by the implicit function theorem for ε_3 sufficiently small. □To end this section, we introduce the following interaction estimate given by Glimm <cit.> for weak waves (see also <cit.>). Let U_l∈W(p_0,p_∞), α, β, and δ satisfyΦ(δ_1,δ_2;U_l)=Φ(β_1,β_2;Φ(α_1,α_2;U_l)).Thenδ=α+β+O(1)Q^0(α,β),where Q^0(α,β)=∑{|α_i||β_i|:α_i and β_j approach}, and O(1) depends continuously on M_∞<∞. § APPROXIMATE SOLUTIONS In this section, we construct approximate solutions for system (<ref>) with (<ref>)–(<ref>) by a modified Glimm scheme. Compared to the modified Glimm scheme developed in <cit.>, in our construction, the grid points are fixed at the very beginning, which are independent of the approximate solution and the random choice.Given ϵ>0 and Δ x>0, there exist piece-wise constant functions p^b_Δ x such thatT.V. p^b_Δ x(·)≤T.V. p^b(·), p^b_Δ x-p^b_L^∞≤ϵ,wherep^b_Δ x(x)= p^b_Δ x,0=p_0 for x∈[0,x_0),p^b_Δ x,h+1 for x∈[x_h,x_h+1) and h∈ℕ,with p^b_Δ x,h+1 being constants on the corresponding intervals and x_h=x_0+hΔ x for h∈ℕ. Then, from Lemma <ref>, for p^b_Δ x,0=p_0, there exists (ũ(σ;s_0),ṽ(σ;s_0)) such that p̃(b_0;s_0)=p_0 and ṽ(b_0;s_0)=b_0ũ(b_0;s_0).We now define the difference scheme. Choose ϑ=(ϑ_0,ϑ_1,ϑ_2,…,ϑ_h,…) randomly in [0,1). For 0< x< x_0, letb_Δ x,ϑ(x)=b_0x,χ_Δ x,ϑ(x)=s_0x.We denote Γ_Δ x,ϑ,0={(x,b_Δ x,ϑ(x)) :0≤ x< x_0}, S_Δ x,ϑ,0={(x,χ_Δ x,ϑ(x)) :0≤ x< x_0}, and Ω_Δ x,ϑ,0={(x,y) :y<b_Δ x,ϑ(x), 0≤ x< x_0}. In region Ω_Δ x,ϑ,0, we then defineU_Δ x,ϑ(x,y)=(u_Δ x,ϑ(x,y),v_Δ x,ϑ(x,y))^⊤≜(ũ(σ;s_0),ṽ(σ;s_0))^⊤for y/x=σ∈(s_0,b_0),U_∞ for y/x=σ<s_0,and, on boundary Γ_Δ x,ϑ,0, we setU_Δ x,ϑ(x, b_Δ x,ϑ(x))=U_Δ x,ϑ^b(x)=(u_Δ x,ϑ^b(x),v_Δ x,ϑ^b(x))^⊤≜ (ũ(b_0;s_0),ṽ(b_0;s_0))^⊤. On x=x_h for h∈ℕ, the grid points are defined to be the intersections of line x=x_h with the self-similar raysy=( b_0+nΔσ)x.Here Δσ>0 is chosen so that Δσ>4Δ xx_0max_i=1,2{|λ_i(G(s_0))|}, and hence the numerical grids satisfies the usual Courant-Friedrichs-Lewy condition. 
Then we define the approximate solution U_Δ x,ϑ to be a piece-wise smooth solution to the self-similar system (<ref>), the approximate solution U_Δ x,ϑ^b on the boundary, the approximate boundary Γ_Δ x,ϑ={(x,y) : y=b_Δ x,ϑ(x)}, and the numerical grids inductively in h, h=0,1,2,⋯. Suppose that the approximate solution has been defined on x<x_h. The grid points on x=x_h are denoted by y_n(h) for n∈ℤ. Setr_h,n=y_n(h)+ϑ_h(y_n+1(h)-y_n(h)).Then the approximate solution U_Δ x,ϑ(x_h,y) for y∈ (y_n(h), y_n+1(h)) is defined to be the solution U_self,Δ x,ϑ(σ(x,y)) of (<ref>) with the self-similar variable σ(x,y)=y/x_h and with the initial data:σ=r_h,n/x_h: U_self,Δ x,ϑ =U_Δ x,ϑ(x_h,r_h,n)≜ U_Δ x,ϑ(x_h-,r_h,n+) .For the discontinuities at the grid points (x_h,y_n(h)) for n∈ℤ, we solve the Riemann problems for (<ref>) with the Riemann data:U|_x=x_h={ U_Δ x,ϑ(x_h,y_n(h)-), U_Δ x,ϑ(x_h,y_n(h)+), .and the solution consisting of rarefaction waves and shock waves has form U_Rie(η) with η=y-y_n(h)x-x_h. Setting σ_h,n+1/2≜12x_h(y_n+1(h+1)+y_n(h+1)) for n∈ℤ, then, in the region:Ω_h+1,n={(x,y):x_h< x<x_h+1, σ_h,n+1/2>σ>σ_h,n-1/2},along the ray{(x,y) : y-y_n(h)/x-x_h=η, x_h<x<x_h+1},the approximate solution U_Δ x,ϑ(x,y) is defined to be the solution: U_self,Δ x,ϑ(σ(x,y)) of (<ref>) with the self-similar variable σ(x,y)=y/x and with the initial data:σ=y_n/x_h: U_self,Δ x,ϑ=U_Rie(η). The approximate boundary Γ_Δ x,ϑ={(x,y) : y=b_Δ x,ϑ(x)} is traced continuously; see <cit.>. For x∈(0,x_0), let b_Δ x,ϑ(x)=b_0x. Suppose that the approximate solution is constructed for x<x_h and that y_n_b,h<b_Δ x,ϑ(x_h-)<y_n_b,h+1. We call interval y_n_b,h-1<y<y_n_b,h+1 the boundary region at x=x_h. In this boundary region, we first solve the self-similar problem (<ref>) with the initial data:σ=r_h,n_b-1/x_h: U_self=U_Δ x,ϑ(x_h-,r_h,n_b-1+),and with the self-similar variable σ(x_h,y)=y/x_h. We denote the solution by U_self(σ(x_h,y)). Given p^b_Δ x,h+1, by Proposition <ref>, there is β_1 such that1/2|Φ(β_1,0;U_self(σ(x_h,b_Δ x,ϑ(x_h))))|^2 +γ/γ-1(p^b_Δ x,h+1)^γ-1/γ =1/2+γ/γ-1p_∞^γ-1/γ.Then we defineU_Δ x,ϑ^b(x_h)≜Φ(β_1,0;U_self(σ(x_h,b_Δ x,ϑ(x_h)))),andb_Δ x,ϑ(x) =b_Δ x,ϑ(x_h-)+v^b_Δ x,ϑ(x_h)/u^b_Δ x,ϑ(x_h) (x-x_h) . Next, solve again the self-similar problem (<ref>) with initial data U_-(σ(x_h,b_Δ x,ϑ(x_h)))=U_Δ x,ϑ^b(x_h) and with the self-similar variable σ(x_h,y)=y/x_h. Denote the solution by U_-(σ(x_h,y)). We define the approximate solution in the boundary region asU_Δ x,ϑ(x_h,y)=U_-(σ(x_h,y)).The discontinuities at (x_h,y_n_b,h-1) are resolved by the same methods as before. The leading strong conical shock S_Δ x,ϑ={(x,y) : y=χ_Δ x,ϑ(x)} next to the uniform upstream flow is also traced continuously; see <cit.>. For x∈(0,x_0), let χ_Δ x,ϑ(x)=s_0x. Suppose that the approximate solution is constructed for x<x_h and that y_n_χ,h-1<χ_Δ x,ϑ(x_h-)<y_n_χ,h. We call interval y_n_χ,h-1<y<y_n_χ,h+1 the front region at x=x_h. In this front region, we first solve the self-similar problem (<ref>) with the initial dataσ=r_h,n_χ/x_h: U_self=U_Δ x,ϑ(x_h-,r_h,n_χ+),and with the self-similar variable σ(x_h,y)=y/x_h. Denote the solution by U_self(σ(x_h,y)). Then we solve the Riemann problem (<ref>) with the initial dataU(x_h,y)={ U_∞, y<χ_Δ x,ϑ(x_h-), U_self(σ(x_h,χ_Δ x,ϑ(x_h))),χ_Δ x,ϑ(x_h-)<y<y_n_χ,h+1. .The solution U(x,y) contains a weak 2-wave β_2 and a relatively strong 1-shock wave s_Δ x,ϑ(h+1) such thatU_self(σ(x_h,χ_Δ x,ϑ(x_h)))=Φ(0,β_2;G(s_Δ x,ϑ(h+1);U_∞)).Then, letχ_Δ x,ϑ(x)=χ_Δ x,ϑ(x_h-)+s_Δ x,ϑ(h+1)(x-x_h) . 
Next, solve again the self-similar problem (<ref>) with initial data U_+(σ(x_h,χ_Δ x,ϑ(x_h)))=G(s_Δ x,ϑ(h+1);U_∞) and with the self-similar variable σ(x_h,y)=y/x_h. Denote the solution by U_+(σ(x_h,y)). We define the approximate solution in the front region asU_Δ x,ϑ(x_h,y)={ U_∞, y<χ_Δ x,ϑ(x_h), U_+(σ(x_h,y)), χ_Δ x,ϑ(x_h)<y<y_n_χ,h+1. .The discontinuities at (x_h,y_n_χ,h) are resolved by the same methods as before.§ RIEMANN-TYPE PROBLEMS AND INTERACTION ESTIMATES Let Ω_Δ x,ϑ,h={(x,y) : y<b_Δ x,ϑ, x_h-1≤ x< x_h} and h∈ℕ_+. In order to define the approximate solutions in Ω_Δ x,ϑ≜⋃_k=0^∞Ω_Δ x,ϑ,k, the approximate boundary Γ_Δ x,ϑ≜⋃_k=0^∞Γ_Δ x,ϑ,k, and the approximate leading shock S_Δ x,ϑ≜⋃_k=0^∞ S_Δ x,ϑ,k, we need a uniform bound of them to ensure that all the Riemann problems and the differential equations (<ref>) are solvable. To achieve this,the following formulas are used: (i)If f∈C^1(ℝ), thenf(t)-f(0)=t∫_0^1f_t(μ t)μ. (ii)If f∈C^2(ℝ), thenf(s,t)-f(s,0)-f(0,t)+f(0,0)=st∫_0^1∫_0^1f_st(μ s,λ t)μλ. From now on, we use Greek letters α, β, γ, and δ to denote the elementary waves in the approximate solution, and α_i, β_i, γ_i, and δ_i stand for the corresponding i-th components for i=1,2. As in <cit.>, a curve I is called a mesh curve provided that I is a space-like curve and consists of the line segments joining the random points one by one in turn. I divides region Ω_Δ x, ϑ into two parts: I^- and I^+, where I^- denotes the part containing line x=x_0. For any two mesh curves I and J, we use J>I to represent that every mesh point of curve J is either on I or contained in I^+. We say J is an immediate successor to I if J>I and every mesh point of J except one is on I in general but three when these points are near the approximate boundary or the approximate shock.Assume now that U_Δ x,ϑ has been defined in ⋃_k=0^hΩ_Δ x,ϑ,k and the following conditions are satisfied: H_1(h): {S_Δ x,ϑ,k}_k=0^h forms an approximate strong shock S_Δ x,ϑ|_0≤ x<x_h, and {Γ_Δ x,ϑ,k}_k=0^h forms an approximate boundary Γ_Δ x,ϑ|_0≤ x<x_h, both of which emanate from the origin. H_2(h):In each Ω_Δ x,ϑ,k for 0≤ k≤ h, the strong 1-shock S_Δ x, ϑ, k divides Ω_Δ x,ϑ,k into two parts: Ω_Δ x,ϑ,k^- and Ω_Δ x,ϑ,k^+, where Ω_Δ x,ϑ,k^+ is the part between S_Δ x, ϑ, k and Γ_Δ x, ϑ, k; H_3(h): U_Δ x,ϑ|_Ω_Δ x,ϑ,k^-=U_∞, U_Δ x,ϑ|_Ω_Δ x,ϑ,k^+∈ O_ε_0(G(s_0))∩ O_ε_0( W(p_0,p_∞)), andU_Δ x,ϑ(x,b_Δ x, ϑ(x)-)=U_Δ x,ϑ, k^b∈O_ε_0(G(s_0))∩ O_ε_0( W(p_0,p_∞))for x_k≤ x<x_k+1, 0≤ k≤ h, and 0<ε_0<min{ε_j,j=1,2,3}, where ε_j are introduced in Propositions <ref>–<ref> for j=1,2,3. We prove that U_Δ x,ϑ can be defined in Ω_Δ x,ϑ,h+1 satisfying H_1(h+1) – H_3(h+1). As in <cit.> (see also <cit.>), we consider a pair of the mesh curves (I,J) lying in {x_h-1<x<x_h+1}∩Ω_Δ x, ϑ with J being an immediate successor of I.Now, let Λ be the region between I and J, and letU_Δ x,ϑ∈O_ε_0(G(s_0))∩ O_ε_0(W(p_0,p_∞)).Λ is between Γ_Δ x,ϑ and S_Δ x, ϑ. In this case, we consider the interactions between weak waves. From the construction of the approximate solutions, the waves entering Λ issuing from (x_h-1,y_n-1(h-1)) and from (x_h-1,y_n(h-1)) are denoted by α=(α_1,α_2) and β=(β_1,β_2), respectively. We denoteσ_0 =r_h-1,n/x_h-1, σ̅_0 =r_h-1,n-1/x_h-1, σ̂_0 =r_h-1,n-2/x_h-1, σ_1 =y_n(h-1)/x_h-1=y_n(h)/x_h, σ_2 =y_n-1(h-1)/x_h-1=y_n-1(h)/x_h,andU_1=U_Δ x,ϑ(x_h-1-,r_h-1,n+), U_2=U_Δ x,ϑ(x_h-1-,r_h-1,n-1+),U_3=U_Δ x,ϑ(x_h-1-,r_h-1,n-2+). Case 1.1. Let δ=(δ_1,δ_2) be the waves issuing from (x_h,y_n-1(h)); see Fig. <ref>. 
Then we need to solve the following equations of δ=(δ_1,δ_2):Ũ(σ_1;σ_2,Φ(δ_1,δ_2;U_l)) =Φ(β_1,0;Ũ(σ_1;σ_2,Φ(α_1,α_2;U_l))),where U_l=Ũ(σ_2;σ̂_0,U_3). Equation (<ref>) has a unique solution δ=(δ_1,δ_2) such thatδ_1=α_1+β_1+O(1)Q(Λ),δ_2=α_2+O(1)Q(Λ),whereQ(Λ)=Q^0(Λ)+Q^1(Λ)withQ^0(Λ)=∑{|α_j||β_k| : α_j and β_k approach}, Q^1(Λ)=|β_1||Δσ|,and Δσ=σ_1-σ_2, where O(1) depends continuously on M_∞ but independent of (α, β, Δσ).Proof. Lemma <ref> yieldslim_M_∞→∞(.∂Φ(δ_1,δ_2;U_l)/∂(δ_1,δ_2)|_{δ_1=δ_2=0,U_l∈ W(p_0,U_∞)})=4cos^2(θ_0+θ_m^0)cos^2(θ_0-θ_m^0)cos^2θ_0cos^2θ^0_msin(2θ_m^0)(γ+1)^2.Then, by the implicit function theorem, system (<ref>) has a unique C^2–solution:δ=δ(α,β,Δσ;U_l)in a neighborhood of (α,β,Δσ,U_l)=(0,0,0,G(s_0)). Due to (<ref>), we haveδ_i(α,β,Δσ;U_l)=δ_i(α,0,Δσ;U_l)+δ_i(α,β,0;U_l) -δ_i(α,0,0;U_l)+O(1)|β||Δσ|=α_i+β_i+O(1)Q^0(Λ)+O(1)|β||Δσ| ,where β_2=0. Then the proof is complete. □Case 1.2. Let δ=(δ_1,δ_2) be the waves issuing from (x_h,y_n(h)); see Fig. <ref>. Then we need to solve the following equations of δ=(δ_1,δ_2):Φ(δ_1,δ_2;Ũ(σ_1;σ_2,U_l)) =Φ(β_1,β_2;Ũ(σ_1,σ_2;Φ(0,α_2;U_l))),where U_l satisfies Ũ(σ̅_0;σ_1,Φ(0,α_2;U_l))=U_2. Similarly, we have the following lemma. Equation (<ref>) has a unique solution δ=(δ_1,δ_2) such thatδ_1=β_1+O(1)Q(Λ),δ_2=α_2+β_2+O(1)Q(Λ),whereQ(Λ)=Q^0(Λ)+Q^1(Λ)withQ^0(Λ)=∑{|α_j||β_k| : α_j and β_k approach}, Q^1(Λ)=|α||Δσ|,and Δσ=σ_1-σ_2. Here O(1) depends continuously on M_∞ but independent of (α, β, Δσ). Λ_b covers the part of Γ_Δ x,ϑ but none of S_Δ x, ϑ. We take three diamonds at the same time, as shown in Fig. <ref>. Let Δ_h,n_b,h-1, Δ_h,n_b,h, and Δ_h,n_b,h+1 denote the diamonds centering in (x_h,y_n_b,h-1), (x_h,y_n_b,h), and (x_h,y_n_b,h+1), respectively, and denote Λ_b=Δ_h,n_b,h-1∪Δ_h,n_b,h∪Δ_h,n_b,h+1. Let α and γ be the weak waves issuing from (x_h-1,y_n_b,h-1-1) and (x_h-1,y_n_b,h-1-2) respectively, and entering Λ_b. We divide α=(α_1,α_2) into parts α_l=(α_l,1,0) and α_r=(α_r,1,α_r,2), where α_l and α_r entering Δ_h,n_b,h-1 and Δ_h,n_b,h, respectively. Moreover, let γ=(γ_1,γ_2) issuing from (x_h-1,y_n_b,h-2), and let δ be the outgoing wave issuing from (x_h,y_n_b,h-1). For simplicity of notation, we denoteσ_α=σ(x_h-1,y_n_b,h-1-1),σ_b(h-1)=σ(x_h-1,b_Δ x,ϑ(x_h-1)), σ_b(h)=σ(x_h,b_Δ x,ϑ(x_h)), Δσ_α=σ_b(h-1)-σ_α,Δσ̅_α=σ_b(h)-σ_α,Δσ_b_h=σ_b(h)-σ_b(h-1), σ_γ=σ(x_h-1,y_n_b,h-1-2),Δσ_γ= σ_α-σ_γ,σ_0=σ(x_h-1,r_h-1,n_b-2),and U_1=U_Δ x,ϑ(x_h-1-,r_h-1,n_b-2+). Let U_l=Φ(α_l,1,0;Ũ(σ_α;σ_0,U_1)).To gain the estimates of δ, we need to deal with the equation:12|Φ(β_1,0;Ũ(σ_b(h);σ_α,U_l))|^2 +γγ-1(p^b_Δ x,h+1)^γ-1/γ= 12|Ũ(σ_b(h-1);σ_α,Φ(α_r,1,α_r,2;U_l))|^2 +γγ-1(p^b_Δ x,h)^γ-1/γ,and then we obtain the following lemma. Equation (<ref>) has a unique solution β_1=β_1(α_r,1,α_r,2,Δσ_α,Δσ̅_α,ω_h+1;U_l)∈C^2 in a neighborhood of (α_r,1,α_r,2,Δσ_α,Δσ̅_α,ω_h+1,U_l)=(0,0,0,0,0,G(s_0)) with ω_h+1=p^b_Δ x,h+1-p^b_Δ x,h such thatδ_1 =α_r,1+α_l,1+γ_1 +K_r,1α_r,2 +K_σ,1Δσ_b_h+K_b,1ω_h+1+O(1)Q(Λ_b) δ_2 =γ_2+K_r,2α_r,2 +K_σ,2Δσ_b_h+K_b,2ω_h+1+O(1)Q(Λ_b),withQ(Λ_b)=Q^0((α_1,0),γ)+|α_1||Δσ_γ|+|α_r,1||Δσ_α|,where O(1) depends continuously on M_∞. Moreover, when α_r,1=α_r,2=Δσ_α=Δσ̅_α=ω_h+1=0, p^b_Δ x,h+1=p_0, and U_l=G(s_0),lim_M_∞→∞K_r,1=-cos^2(θ_0+θ_m^0)cos^2(θ_0-θ_m^0),lim_M_∞→∞|K_b,i|<∞,lim_M_∞→∞K_r,2=0,lim_M_∞→∞K_σ,i=0,for i=1,2.Proof. 
A direct computation leads to.12∂ (|Φ(β_1,0;Ũ(σ_b(h);σ_α,U_l))|^2)/∂β_1| _{δ_1=Δσ̅_α=0, U_l=G(s_0)}=r_1(G(s_0))· G(s_0).Lemma <ref>, together with the implicit function theorem, implies that there is a unique C^2-solutionβ_1=β_1(α_r,1,α_r,2,Δσ_α,Δσ̅_α,ω_h+1;U_l)in a neighborhood of (α_r,1,α_r,2,Δσ_α,Δσ̅_α,ω_h+1,U_l)=(0,0,0,0,0,G(s_0)). Using (<ref>)–(<ref>), we haveβ_1 =β_1(α_r,1,α_r,2,Δσ_α,Δσ_α,ω_h+1;U_l) +K̅_σ,1(Δσ̅_α-Δσ_α)=β_1(α_r,1,0,Δσ_α,Δσ_α,0;U_l) +K̅_σ,1Δσ_b_h+K̅_r,1α_r,2+K̅_b,1ω_h+1=α_r,1 +K̅_σ,1Δσ_b_h+K̅_r,1α_r,2+K̅_b,1ω_h+1+O(1)|α_r,1||Δσ_α|.Taking derivative with respect to Δσ_b_h in (<ref>) at (α_r,1,α_r,2,Δσ_α,Δσ̅_α,ω_h+1,U_l)=(0,0,0,0,0,G(s_0)), we obtainG(s_0)· r_1(G(s_0))∂β_1/∂Δσ_b_h +G(s_0)·∂Ũ(σ_α+Δσ̅_α+Δσ_b_h;σ_α,G(s_0))/∂Δσ_b_h=0,which yieldslim_M_∞→∞.∂β_1/∂Δσ_b_h| _{α_r,1=α_r,2=Δσ_α=Δσ̅_α=ω_h+1=0, p^b_Δ x,h+1=p_0, U_l=G(s_0)}=0.Similarly, we havelim_M_∞→∞.∂β_1/∂ω_h+1| _{α_r,1=α_r,2=Δσ_α=Δσ̅_α=ω_h+1=0, p^b_Δ x,h+1=p_0, U_l=G(s_0)} =lim_M_∞→∞-p_0^-1/γr_1(G(s_0))· G(s_0)>-∞,lim_M_∞→∞.∂β_1/∂α_r,2| _{α_r,1=α_r,2=Δσ_α=Δσ̅_α=ω_h+1=0, p^b_Δ x,h+1=p_0, U_l=G(s_0)} =-cos^2(θ_0+θ_m^0)cos^2(θ_0-θ_m^0).By the construction of the approximate solution, we haveŨ(σ_b(h);σ_γ,Φ(δ_1,δ_2;U_m)) =Φ(β_1,0;Ũ(σ_b(h);σ_α,Φ(α_l,1,0;Ũ(σ_α;σ_γ,Φ(γ_1,γ_2;U_m)))))with U_m=U_Δ x,ϑ(x_h,y_n_b,h-1-2-). Then, a similar argument as in Case <ref> gives (<ref>)–(<ref>). This completes the proof.□ In Case <ref>, for b'_h:=b_Δ x,ϑ'(x_h-) for h∈ℕ_+,b'_h+1-b'_h=K_c,2α_r,2+K_c,σΔσ_b_h+O(1)ω_h+1+O(1)|α_r,1||Δσ_α|with O(1) depending continuously on p_0 such thatlim_M_∞→∞K_c,σ|_{α_r,2=α_r,1=Δσ_α=Δσ̅_α=ω_h+1=0, p^b_Δ x,h+1=p_0,U_l=G(s_0)}=-1,lim_M_∞→∞K_c,2|_{α_r,2=α_r,1=Δσ_α=Δσ̅_α=ω_h+1=0, p^b_Δ x,h+1=p_0,U_l=G(s_0)} =-4/γ+1 cos^2θ^0_mcos^2(θ^0_m+θ_0)/cos^2θ_0. Proof. From (<ref>), we haveb'_h+1-b'_h =Φ^(2)(β_1(α_r,1,α_r,2,Δσ_α,Δσ̅_α,ω_h+1;U_l),0; Ũ(σ_b_h;σ_α,U_l))/Φ^(1)(β_1(α_r,1,α_r,2,Δσ_α,Δσ̅_α,ω_h+1;U_l),0;Ũ(σ_b_h;σ_α,U_l))-Ũ^(2)(σ_b_h-1;σ_α,Φ(α_r,1,α_r,2;U_l))/Ũ^(1)(σ_b_h-1;σ_α,Φ(α_r,1,α_r,2;U_l)).By (<ref>)–(<ref>), we obtainb'_h+1-b'_h =Φ^(2)(β_1(α_r,1,0,Δσ_α,Δσ̅_α,0;U_l),0;Ũ(σ_b_h;σ_α,U_l))/Φ^(1)(β_1(α_r,1,0,Δσ_α,Δσ̅_α,0;U_l),0;Ũ(σ_b_h;σ_α,U_l)) -Ũ^(2)(σ_b_h-1;σ_α,Φ(α_r,1,0;U_l))/Ũ^(1)(σ_b_h-1;σ_α,Φ(α_r,1,0;U_l)) +K_c,2α_r,2+O(1)ω_h+1=Φ^(2)(β_1(α_r,1,0,Δσ_α,Δσ_α,0;U_l),0;Ũ(σ_b_h-1;σ_α,U_l))/Φ^(1)(β_1(α_r,1,0,Δσ_α,Δσ_α,0;U_l),0;Ũ(σ_b_h-1;σ_α,U_l)) -Ũ^(2)(σ_b_h-1;σ_α,Φ(α_r,1,0;U_l))/Ũ^(1)(σ_b_h-1;σ_α,Φ(α_r,1,0;U_l)) +K_c,σΔσ_b_h+K_c,2α_r,2+O(1)ω_h+1= K_c,σΔσ_b_h+K_c,2α_r,2+O(1)ω_h+1+O(1)|α_r,1||Δσ_α|.By similar calculation in Lemma <ref>, we havelim_M_∞→∞K_c,σ|_{α_r,2=α_r,1=Δσ_α=Δσ̅_α=ω_h+1=0,p^b_Δ x,h+1=p_0, U_l=G(s_0)}=-1,lim_M_∞→∞K_c,2|_{α_r,2=α_r,1=Δσ_α=Δσ̅_α=ω_h+1=0,p^b_Δ x,h+1=p_0, U_l=G(s_0)} =-4/γ+1 cos^2θ^0_mcos^2(θ^0_m+θ_0)/cos^2θ_0.Then the proof is complete.□ For Δ x sufficiently small,|b'_h-σ_b(h-1)| ≥6|Δσ_b_h|,where σ_b(h)=b_Δ x,ϑ(x_h)/x_h.Proof. Using the notation as in Case <ref>, we haveb'_h=b_Δ x,ϑ(x_h)-b_Δ x,ϑ(x_h-1)Δ x.Then a direct computation leads to|b'_h-σ_b(h-1)|=|b_Δ x,ϑ(x_h)-b_Δ x,ϑ(x_h-1)Δ x-σ_b(h-1)|=|σ_b(h)x_h-σ_b(h-1)x_h-1Δ x-σ_b(h-1)|=|x_hΔ x||σ_b(h)-σ_b(h-1)|≥ 6|σ_b(h)-σ_b(h-1)|for Δ x small enough.□Denote θ_b(h)=|σ_b(h-1)-b'_h| that measures the angle between boundary Γ_Δ x,ϑ,h and the ray issuing from the origin and passing through (x_h-1,b_Δ x,ϑ(x_h-1)). Then we have the following estimate for θ_b(h). For M_∞ sufficiently large and Δ x sufficiently small,θ_b(h)-θ_b(h+1)≥ |Δσ|-|K_c,2||α_r,2|-C|ω_h+1|-C|α_r,1||Δσ_α|,where h∈ℕ_+, and constant C>0 is independent of M_∞ and Δ x. 
Proof. We consider the following two different cases.1. σ_b(h-1)<b'_h so that σ_b(h)>σ_b(h-1). * If b'_h+1>σ_b(h), then it follows from Lemma <ref> thatθ_b(h)-θ_b(h+1)= b'_h-σ_b(h-1)-(b'_h+1-σ_b(h))= (1-K_c,σ)Δσ_b_h-K_c,2α_r,2+O(1)ω_h+1+O(1)|α_r,1||Δσ_α|≥ |Δσ_b_h|-|K_c,2||α_r,2|-C|ω_h+1|-C|α_r,1||Δσ_α|. * If b'_h+1<σ_b(h), then, from Lemma <ref>–<ref>, we haveθ_b(h)-θ_b(h+1)= b'_h-σ_b(h-1)-(σ_b(h)-b'_h+1)= 2(b'_h-σ_b(h-1))+b'_h+1-σ_b(h)-(b'_h-σ_b(h-1))≥ (11+K_c,σ)|Δσ_b_h|+K_c,2α_r,2+O(1)ω_h+1+O(1)|α_r,1||Δσ_α|≥ |Δσ_b_h|-|K_c,2||α_r,2|-C|ω_h+1|-C|α_r,1||Δσ_α|. 2. σ_b(h-1)>b'_h so that σ_b(h)<σ_b(h-1).* If b'_h+1>σ_b(h), then it follows from Lemma <ref>–<ref> thatθ_b(h)-θ_b(h+1)=σ_b(h-1)-b'_h-(b'_h+1-σ_b(h))= 2(σ_b(h-1)-b'_h)+σ_b(h)-b'_h+1-(σ_b(h-1)-b'_h)≥ (11+K_c,σ)|Δσ_b_h|-K_c,2α_r,2-O(1)ω_h+1-O(1)|α_r,1||Δσ_α|≥ |Δσ_b_h|-|K_c,2||α_r,2|-C|ω_h+1|-C|α_r,1||Δσ_α|. * If b'_h+1<σ_b(h), then, from Lemma <ref>, we haveθ_b(h)-θ_b(h+1)=σ_b(h-1)-b'_h-(σ_b(h)-b'_h+1)=(-1+K_c,σ)Δσ_b_h+K_c,2α_r,2+O(1)ω_h+1+O(1)|α_r,1||Δσ_α|≥ |Δσ_b_h|-|K_c,2||α_r,2|-C|ω_h+1| -C|α_r,1||Δσ_α|. Note that we have used the fact thatlim_M_∞→∞K_c,σ|_{α_r,2=β_1=Δσ_α=Δσ̅_α=ω_h+1=0, p^b_Δ x,h+1=p_0,U_l=G(s_0)}=-1in above estimates. This completes the proof.□ Λ_s covers the part of S_Δ x,ϑ but none of Γ_Δ x, ϑ. We take three diamonds at the same time, as shown in Fig. <ref>. Let Δ_h,n_χ,h-1, Δ_h,n_χ,h, and Δ_h,n_χ,h+1 be the diamonds centering in (x_h,y_n_χ,h-1), (x_h,y_n_χ,h), and (x_h,y_n_χ,h+1), respectively. Denote Λ_s=Δ_h,n_χ,h-1∪Δ_h,n_χ,h∪Δ_h,n_χ,h+1. Let α and γ be the weak waves issuing from (x_h-1,y_n_χ,h-1+1) and (x_h-1,y_n_χ,h-1+2) respectively and entering Λ_s. We divide α into parts α_l=(α_l,1,0) and α_r=(α_r,1,α_r,2) where α_l and α_r enter Δ_h,n_χ,h and Δ_h,n_χ,h+1, respectively. Moreover, let γ=(γ_1,0), and let δ be the outgoing wave issuing from (x_h,y_n_χ,h+1).Then, for simplicity of notation, we denoteσ_α=σ(x_h-1,y_n_χ,h-1+1), σ_χ(h-1)=σ(x_h-1,χ_Δ x,ϑ(x_h-1)), σ_χ(h)=σ(x_h,χ_Δ x,ϑ(x_h)), Δσ_α=σ_α-σ_χ(h-1), Δσ̅_α=σ_α-σ_χ(h), Δσ_χ_h=σ_χ(h)-σ_χ(h-1), σ_γ=σ(x_h-1,y_n_χ,h-1+2), Δσ_γ= σ_γ-σ_α. To gain the estimates of (s_h+1,δ), we need to deal with the equation:Ũ(σ_α;σ_χ(h),Φ(0,β_2;G(s_h+1;U_∞))) =Φ(α_l,1,0;Ũ(σ_α;σ_χ(h-1),G(s_h;U_∞))),to obtain the following lemma. Equation (<ref>) has a unique solution (s_h+1,β_2) in a neighborhood of(α_l,1,α_r,γ,Δσ_α,Δσ_χ_h,s_h)=(0,0,0,0,0,s_0),such thatδ_1=α_r,1+γ_1+μ_w,1Δσ_χ_h+K_w,1α_l,1+O(1)Q(Λ_s),δ_2=α_r,2+μ_w,2Δσ_χ_h+K_w,2α_l,1+O(1)Q(Λ_s),s_h+1= s_h+K_sα_l,1+μ_sΔσ_χ_h,withQ(Λ_s)=|γ_1||Δσ_γ|+Q^0(α_r,γ),where O(1) depends continuously on M_∞. In addition, for α_l=0, Δσ_α=Δσ_χ_h=0, and s_h=s_0, denoting the derivative of G by G_s, thenlim_M_∞→∞K_w,1=0, K_w,2=(r_1(G(s_0)), G_s(s_0))(r_2(G(s_0)), G_s(s_0)), K_s=(r_2(G(s_0)),r_1(G(s_0))(r_2(G(s_0)),G_s(s_0)),lim_M_∞→∞μ_w,1=0,μ_w,2=(∂Ũ/∂(Δσ_χ_h), G_s(s_0))(r_2(G(s_0)), G_s(s_0)),lim_M_∞→∞μ_s∈(-1,0). Proof. From Lemma <ref> and the implicit function theorem, (<ref>) has a unique C^2–solution (s_h+1,β_2) such thats_h+1=s_h+1(α_l,1,Δσ_α,Δσ̅_α,Δσ_χ_h,s_h),β_2=β_2(α_l,1,Δσ_α,Δσ̅_α,Δσ_χ_h,s_h).A direct computation leads toβ_2 =β_2(α_l,1,Δσ_α,Δσ̅_α,Δσ_χ_h,s_h)=μ_w,2Δσ_χ_h+K_w,2α_l,1 +β_2(0,Δσ_α,Δσ_α,0,s_h)=μ_w,2Δσ_χ_h+K_w,2α_l,1.Similarly, we haves_h+1=s_h+1(α_l,1,Δσ_α,Δσ̅_α,Δσ_χ_h,s_h) =μ_sΔσ_χ_h+K_sα_l,1+s_h. Next, we compute the coefficients: K_s, K_w,2, μ_w,2, and μ_s. 
Differentiating equation (<ref>) with respect to α_l,1 and Δσ_χ_h, and then letting α_l,1=Δσ_α=Δσ_χ_h=0 and s_h=s_0, we can obtainr_2(G(s_0))K_w,2+G_s(s_0)K_s=r_1(G(s_0)),r_2(G(s_0))μ_w,2+G_s(s_0)μ_s =∂Ũ∂ (Δσ_χ_h)(σ_χ(h);σ_χ(h),G(s_0)).Then Cramer's rule gives the result. Moreover, since θ_0<0<θ^0_ma and θ_0±θ^0_ma∈(-π/2,π/2),lim_M_∞→∞μ_s=cosθ_0sinθ_m^0/sin(θ_0-θ_m^0)∈(-1,0). By the construction of the approximate solution, we haveŨ(σ_γ;σ_α,Φ(δ_1,δ_2;Ũ(σ_α;σ_χ(h),U_m)))=Φ(γ_1,0; Ũ(σ_γ;σ_α, Φ(α_r,1,α_r,2; Ũ(σ_α;σ_χ(h), Φ(0,β_2(α_l,1,Δσ_α,Δσ̅_α,Δσ_χ_h,s_h);U_m))))),withU_m=G(s_h+1;U_∞).Then, by similar arguments as in Case <ref>, we obtainδ_1 =α_r,1+γ_1+μ_w,1Δσ_χ_h+K_w,1α_l,1+O(1)|γ_1||Δσ_γ|+O(1)Q^0(α_r,γ), δ_2 =α_r,2+μ_w,2Δσ_χ_h+K_w,2α_l,1+O(1)|γ_1||Δσ_γ|+O(1)Q^0(α_r,γ).This completes the proof.□ For Δ x sufficiently small,|s_h-σ_χ(h-1)|≥6|Δσ_χ_h|. Proof. Using the notation as in Case <ref>, we haveσ_χ(h)=χ_Δ x,ϑ(x_h)/x_h, s_h=χ_Δ x,ϑ(x_h)-χ_Δ x,ϑ(x_h-1)Δ x.Then a direct computation leads to|s_h-σ_χ(h-1)|=|χ_Δ x,ϑ(x_h)-χ_Δ x,ϑ(x_h-1)Δ x-σ_χ(h-1)|=|σ_χ(h)x_h-σ_χ(h-1)x_h-1Δ x-σ_χ(h-1)|=|x_hΔ x||σ_χ(h)-σ_χ(h-1)|≥ 6|σ_χ(h)-σ_χ(h-1)|,for Δ x small enough.□ Denote θ_χ(h)=|σ_χ(h-1)-s_h| that measures the angle between the leading shock S_Δ x,ϑ,h and the ray issuing from the origin and passing through (x_h-1,χ_Δ x,ϑ(x_h-1)). Then we have the following estimate for θ_χ(h). For M_∞ sufficiently large and Δ x sufficiently small,θ_χ(h)-θ_χ(h+1)≥ |Δσ_χ_h|-|K_s||α_l,1|,with h≥0.Proof. We consider the following two different cases:1. σ_χ(h-1)<s_h so that σ_χ(h)>σ_χ(h-1). * If s_h+1>σ_χ(h), then it follows from Lemma <ref> thatθ_χ(h)-θ_χ(h+1)= s_h-σ_χ(h-1)-(s_h+1-σ_χ(h)) =(1-μ_s)Δσ_χ_h-K_sα_l,1≥ |Δσ_χ_h|-|K_s||α_l,1|. * If s_h+1<σ_χ(h), then, from Lemmas <ref>–<ref>, we haveθ_χ(h)-θ_χ(h+1)=s_h-σ_χ(h-1)-(σ_χ(h)-s_h+1)= 2(s_h-σ_χ(h-1))+s_h+1-σ_χ(h)-(s_h-σ_χ(h-1))≥ (11+μ_s)|Δσ_χ_h|+K_sα_l,1≥ |Δσ_χ_h|-|K_s||α_l,1|. 2. σ_χ(h-1)>s_h so that σ_χ(h)<σ_χ(h-1). * If s_h+1>σ_χ(h), then it follows from Lemmas <ref>–<ref> thatθ_χ(h)-θ_χ(h+1)=σ_χ(h-1)-s_h-(s_h+1-σ_χ(h))=2(σ_χ(h-1)-s_h)+σ_χ(h)-s_h+1-(σ_χ(h-1)-s_h)≥ (11+μ_s)|Δσ_χ_h|-K_sα_l,1≥ |Δσ_χ_h|-|K_s||α_l,1|. * If s_h+1<σ_χ(h), then, from Lemma <ref>, we haveθ_χ(h)-θ_χ(h+1)=σ_χ(h-1)-s_h-(σ_χ(h)-s_h+1)=(-1+μ_s)Δσ_χ_h+K_sα_l,1≥ |Δσ_χ_h|-|K_s||α_l,1|. Note that we have used the fact that μ_s∈(-1,0) as M_∞→∞ in above estimates. This completes the proof.□ § GLIMM-TYPE FUNCTIONAL AND COMPACTNESS OF THE APPROXIMATE SOLUTIONS For each I⊂∪_k=1^h+1Ω_Δ x,ϑ,k, there exists k_I with 1≤ k_I≤ h+1 such that I∩Γ_Δ x,ϑ, k_I≠∅. Next, as in <cit.>, we assign each mesh curve I⊂⋃_k=1^h+1Ω_Δ x,ϑ,k with a Glimm-type functional F_s(I); see also <cit.>: (Weighted total variation). DefineL_0^(i)(I)=∑{|α_i|:α_i is the weak i-wave crossing I},L_1(I)=∑{|ω_k|: k>k_I},L_s(I)=θ_χ(I)for θ_χ(I)=θ_χ(h) in Lemma <ref> when S_Δ x,ϑ crossing I,L_b(I)=θ_b(I) for θ_b(I)=θ_b(h) in Lemma <ref> when Γ_Δ x,ϑ crossing I.Then the weighted total variation is defined asL(J)=L_0^(1)(I)+K_2L_0^(2)(I)+K_1L_1(I)+K_3L_s(I)+K_4L_b(I),where K_l are positive constants for l=1,2,3,4.Letσ^*=b_0+C_1∑_h=1^∞|ω_h|,σ_*=s_0-ϖ,where s_0 is the velocity of the leading shock of the background solution, ϖ and C_1 are constants to be determined; see also <cit.>. Note that ϖ and ∑_h≥1|ω_h| are chosen so small that the largeness of M_∞ implies the smallness of b_0-s_0, which leads to the smallness of σ^*-σ_*. We now define the total interaction potential.(Total interaction potential). 
DefineQ_0(I) =∑{|α||β| : α and β are weak waves crossing I and approach},Q_1(I) =∑{|α||σ_α-σ_*| : α is a weak 1-wave crossing I},Q_2(I)=∑{|α||σ^*-σ_α| : α is a weak 2-wave crossing I},where σ_α is the σ-coordinate of the grid point where α issues. Then the total interaction potential is defined asQ(I)=Q_0(I)+2Q_1(I)+2Q_2(I). Now, we are able to define the Glimm-type functional.(Glimm-type functional).LetF(I)=L(I)+KQ(I),where K is a big real number to be chosen later.LetE_Δ x,ϑ(Λ)={ Q(Λ)(defined in Case <ref>), ξ(|α_r,2|+|ω_h+1|+|Δσ_b_h|+Q(Λ_b)) (defined in Case <ref>), ξ(|α_l,1|+|Δσ_χ_h|+Q(Λ_s)) (defined in Case <ref>), .with ξ>0 sufficiently small and to be chosen later.In order to make the Glimm-type functional monotonically decreasing, we have to choose the weights carefully in the functional, based on the underlying features of the wave interactions governed by the system. Indeed, we have the following lemma (cf. <cit.>). Let K_r,1, K_w,2, K_s and μ_w,2 be given by Lemmas <ref> and <ref>. Thenlim_M_∞→∞(|K_r,1||K_w,2|+|K_r,1||K_s||μ_w,2|)<1. Proof. Lemmas <ref>–<ref> givelim_M_∞→∞|K_r,1||K_w,2|=|sin(θ_0+θ_m^0)sin(θ_0-θ_m^0)|,lim_M_∞→∞|K_r,1||K_s||μ_w,2| =cos^2(θ_0+θ_m^0)cos^2(θ_0-θ_m^0)lim_M_∞→∞|(r_1(U),r_2(U))(r_2(G(s_0),G'(s_0)))|| ((∂Ũ)/(∂Δσ_χ_h),G_s(s_0;U_∞))(r_2(G(s_0;U_∞)),G_s(s_0;U_∞))|=sin2θ_m^0cosθ_0|sinθ_0|sin^2(θ_0-θ_m^0).Note that θ_0∈(-π/2,0) and θ_0±θ_m^0∈(-π/2,π/2), and that θ_m^0∈(0,π/2). Then, when θ_0+θ_m^0<0,lim_M_∞→∞(|K_r,1||K_w,2|+|K_r,1||K_s||μ_w,2|) <2sinθ_m^0cosθ_0-sin(θ_0+θ_m^0)sin(θ_m^0-θ_0)=1;when θ_0+θ_m^0>0,lim_M_∞→∞(|K_r,1||K_w,2|+|K_r,1||K_s||μ_w,2|) <2cosθ_m^0|sinθ_0|+sin(θ_0+θ_m^0)sin(θ_m^0-θ_0)=1.This implies the expected result.□At this stage, we are able to choose the coefficients in the Glimm-type functional (cf. <cit.>). There exist positive constants K_2 and K_3 such thatlim_M_∞→∞(K_2|K_w,2|+K_3|K_s|)<1,lim_M_∞→∞(K_2|μ_w,2|-K_3)<0, lim_M_∞→∞(K_2-|K_r,1|)>0. Proof. Let K_r,1^*=lim_M_∞→∞|K_r,1|, K_w,2^*=lim_M_∞→∞|K_w,2|, K_s^*=lim_M_∞→∞|K_s|, and μ_w,2^*=lim_M_∞→∞|μ_w,2|. Then, by Lemma <ref>,K_r,1^*(K_w,2^*+K_s^*μ_w,2^*)<1.Hence, we choose K_2 such thatK_2>K_r,1^*, K_2(K_w,2^*+K_s^*μ_w,2^*)<1,which impliesK_2K_s^*μ_w,2^*<1-K_2K_w,2^*.Then we take K_3 such thatK_3>K_2μ_w,2^*, K_3K_s^*<1-K_2K_w,2^*,and the proof is complete.□With the coefficients chosen properly, we can derive a decay property for the Glimm-type functional. Let M_∞ be sufficiently large, and let σ^*-σ_* and ∑_h≥1|ω_h| be sufficiently small. Let I and J be a pair of space-like mesh curves with J being an immediate successor of I. The region bounded by the difference between I and J is denoted as Λ. Then there exist positive constants ϵ_∞, K, and K_l for l=1,2,3,4, such that, if F(I)<ϵ_∞, thenF(I)≤ F(J)-1/4E_Δ x, ϑ(Λ),where E_Δ x, ϑ(Λ) is given by (<ref>).Proof. When M_∞ is large enough, according to Lemma <ref>, there are constants K_2 and K_3 so thatK_2|K_w,2|+K_3|K_s|<1-ξ_0, K_2|μ_w,2|-K_3<-ξ_0, K_2-|K_r|-K_4|K_c,2|>ξ_0,for some ξ_0>0.Now, as in <cit.>, we prove the result inductively; see also <cit.>. We consider three special cases as in <ref>, depending on the location ofΛ. From now on, we use C to denote a universal constant depending only on the system, which may be different at each occurrence.Case 1. Λ lies between Γ_Δ x,ϑ and S_Δ x, ϑ. We consider the case as in Lemma <ref>. 
Notice that(L_0^(1)+K_2L_0^(2))(J)-(L_0^(1)+K_2L_0^(2))(I)≤ CQ(Λ),L_b(J)-L_b(I)=0,(K_1L_1+K_3L_s)(J)-(K_1L_1+K_3L_s)(I)=0.Then we obtainL(J)-L(I)≤ CQ(Λ).For the terms contained in Q, we haveQ_0(J)-Q_0(I)≤ CL(I)Q(Λ)-Q^0(Λ). For Case 1.1:(Q_1+Q_2)(J)-(Q_1+Q_2)(I)= |δ_1|(σ_2-σ_*)-|α_1|(σ_2-σ_*)-|β_1|(σ_1-σ_*) +|δ_2|(σ^*-σ_2)-|α_2|(σ^*-σ_2)≤ C(σ^*-σ_*)Q(Λ)-|Δσ||β_1|. For Case 1.2:(Q_1+Q_2)(J)-(Q_1+Q_2)(I) ≤|δ_1|(σ_1-σ_*)-|β_1|(σ_1-σ_*)+|δ_2|(σ^*-σ_1)-|α_2|(σ^*-σ_2)-|β_2|(σ^*-σ_1)≤ C(σ^*-σ_*)Q(Λ)-|Δσ||α_2|,which givesQ(J)-Q(I)≤ -(1-C(L(I)+σ^*-σ_*))Q(Λ).When L(I) and σ^*-σ_* are small enough, and K is sufficiently large, it follows thatF(J)-F(I)≤-{K(1-C(L(I)+σ^*-σ_*))-C}Q(Λ)≤-1/4Q(Λ). Case 2. Λ_b=Δ_h,n_b,h-1∪Δ_h,n_b,h∪Δ_h,n_b,h+1 covers a part of Γ_Δ x,ϑ but none of S_Δ x, ϑ. Direct computation shows thatL_0^(1)(J)-L_0^(1)(I)≤ |K_r,1||α_r,2| +|K_σ,1||Δσ_b_h|+|K_b,1||ω_h+1|+CQ(Λ_b),L_0^(2)(J)-L_0^(2)(I)≤ -|α_r,2|+|K_r,2||α_r,2| +|K_σ,2||Δσ_b_h|+|K_b,2||ω_h+1|+CQ(Λ_b),L_1(J)-L_1(I)=-|ω_h+1|,L_s(J)-L_s(I)=0,L_b(J)-L_b(I)=-|Δσ_b_h|+|K_c,2||α_r,2|+C|ω_h+1|+C|α_r,1||Δσ_α|.Combining the above estimates together, we obtainL(J)-L(I) ≤-(K_2-|K_r,1|-K_4|K_c,2|-K_2|K_r,2|)|α_r,2| -(K_1-|K_b,1|-K_2|K_b,2|-CK_4)|ω_h+1| -(K_4-|K_σ,1|-K_2|K_σ,2|)|Δσ_b_h|+CQ(Λ_b)+C|α_r,1||Δσ_α|.For the terms contained in Q, noting that |Δσ_α|≤|Δσ_γ|, we haveQ_0(J)-Q_0(I) ≤ -Q^0((α_1,0),γ)+CL(I)(|α_r,2| +|Δσ_b_h|+|ω_h+1|+CQ(Λ_b)),Q_1(J)-Q_1(I) =|δ_1|(σ_γ-σ_*)-(|α_l,1|+|α_r,1|)(σ_α-σ_*)-|γ_1|(σ_γ-σ_*)≤-(|α_l,1|+|α_r,1|)|Δσ_γ|+C(σ^*-σ_*)(|α_r,2| +|Δσ_b_h|+|ω_h+1|+CQ(Λ_b)),Q_2(J)-Q_2(I) =|δ_2|(σ^*-σ_γ)-|α_r,2|(σ^*-σ_α)-|γ_2|(σ^*-σ_γ)≤ C(σ^*-σ_*)(|α_r,2| +|Δσ_b_h|+|ω_h+1|+CQ(Λ_b)).Then we concludeQ(J)-Q(I) ≤ -Q^0((α_1,0),γ)+CL(I)(|α_r,2| +|Δσ_b_h|+|ω_h+1|+CQ(Λ_b))-|α_1||Δσ_γ| -|α_r,1||Δσ_α|+2C(σ^*-σ_*)(|α_r,2| +|Δσ_b_h|+|ω_h+1|+CQ(Λ_b))≤ -(1-C(L(I)+σ^*-σ_*))Q(Λ_b)+C(L(I)+σ^*-σ_*)(|α_r,2| +|Δσ_b_h|+|ω_h+1|).Finally, combining all the estimates above together, we obtainF(J)-F(I) ≤-{K(1-C(L(I)+σ^*-σ_*))- C}Q(Λ_b)-{K_2-|K_r,1|-K_4|K_c,2|-K_2|K_r,2|-KC(L(I)+σ^*-σ_*)}|α_r,2|-{K_1-|K_b,1|-K_2|K_b,2|-CK_4-KC(L(I)+σ^*-σ_*)}|ω_h+1|-{K_4-|K_σ,1|-K_2|K_σ,2|-KC(L(I)+σ^*-σ_*)}|Δσ_b_h|.Taking suitably large K_1, then, when K is sufficiently large, and L(I) and σ^*-σ_* are sufficiently small, we concludeF(J)-F(I)≤ -ξ/4(|α_r,2|+|ω_h+1|+|Δσ_b_h|+Q(Λ_b)),for some ξ>0 small enough.Case 3. Λ_s=Δ_h,n_χ,h-1∪Δ_h,n_χ,h∪Δ_h,n_χ,h+1 covers a part of S_Δ x,ϑ but none of Γ_Δ x, ϑ. A direct computation shows thatL_0^(1)(J)-L_0^(1)(I)≤ -|α_l,1|+|K_w,1||α_l,1|+|μ_w,1||Δσ_χ_h|+CQ(Λ_s),L_0^(2)(J)-L_0^(2)(I) ≤ |K_w,2||α_l,1|+|μ_w,2||Δσ_χ_h|+CQ(Λ_s),L_1(J)-L_1(I)=0,L_s(J)-L_s(I)≤ -|Δσ_χ_h|+|K_s||α_l,1|,L_b(J)-L_b(I)=0.Combine the above estimates together, we obtainL(J)-L(I)≤ -(1-|K_w,1|-K_2|K_w,2|-K_3|K_s|)|α_l,1|-(K_3-|μ_w,1|-K_2|μ_w,2|)|Δσ_χ_h|+CQ(Λ_s)≤ -(1-K_2|K_w,2|-K_3|K_s|-|K_w,1|)|α_l,1| -(K_3-K_2|μ_w,2|-|μ_w,1|)|Δσ_χ_h|+CQ(Λ_s). 
For the terms contained in Q, we haveQ_0(J)-Q_0(I) ≤ -Q^0(α_r,γ) +CL(I)(|α_l,1|+|Δσ_χ_h|+Q(Λ_s)),Q_1(J)-Q_1(I) = |δ_1|(σ_α-σ_*)-(|α_l,1|+|α_r,1|)(σ_α-σ_*)-|γ_1|(σ_γ-σ_*)≤ -|γ_1||Δσ_γ|+C(σ^*-σ_*)(|α_l,1|+|Δσ_χ_h|+Q(Λ_s)),Q_2(J)-Q_2(I) = |δ_2|(σ^*-σ_γ)-|α_r,2|(σ^*-σ_α)≤ C(σ^*-σ_*)(|α_l,1|+|Δσ_χ_h|+Q(Λ_s)).Then we deduce thatQ(J)-Q(I)≤ -(1-C(L(I)+σ^*-σ_*))Q(Λ_s)+C(L(I)+σ^*-σ_*)(|α_l,1|+|Δσ_χ_h|).Finally, combining all the estimates above together, we obtainF(J)-F(I)≤ -{K(1-C(L(I)+σ^*-σ_*)) -C}Q(Λ_s)-{1-K_2|K_w,2|-K_3|K_s|-|K_w,1|-CK(L(I)+σ^*-σ_*)}|α_l,1|-{K_3-K_2|μ_w,2|-|μ_w,1|-CK(L(I)+σ^*-σ_*)}|Δσ_χ_h|.When K is sufficiently large, and L(I) and σ^*-σ_* are sufficiently small, we conclude thatF(J)-F(I)≤ -ξ/4(Q(Λ_s)+|α_l,1|+|Δσ_χ_h|)for some ξ>0 small enough. Combining the above three cases, we conclude our result.□ Now, let I_h be the mesh curve in the stripe: {(x,y) :x_h-1≤ x≤ x_h} for h∈ℕ_+; that is, I_h connects all the mesh points in the strip. Let I and J be any pair of mesh curves with I_h<I<J<I_h+1, and let J be an immediate successor of I. That is, the mesh points on J differ from those on I by only one point generally (except three points near the approximate boundary or near the approximate shock), and the region bounded by the difference between I and J is denoted by Λ. Proposition <ref> suggests that the total variation of the approximate solution is uniformly bounded.Moreover, we have the following estimates for the approximate boundary and the approximate leading shock: There exists a constant C̅>0, independent of Δ x, ϑ, and U_Δ x, ϑ, such thatT.V.{s_Δ x,ϑ:[0,∞)}=∑_h=0^∞|s_h+1-s_h|≤C̅∑_h≥1|ω_h|,T.V.{b'_Δ x,ϑ:[0,∞)}=∑_h=0^∞|b'_h+1-b'_h|≤C̅∑_h≥1|ω_h|. Proof. Notice thatT.V.{s_Δ x,ϑ:[0,∞)} =∑_h=0^∞|s_h+1-s_h|≤ O(1)∑_Λ_sE_Δ x,ϑ(Λ_s)≤ O(1)∑_ΛF(I)-F(J)≤ O(1)F(I_1).Similarly, we haveT.V.{b'_Δ x,ϑ:[0,∞)}≤ O(1)F(I_1).Therefore, C̅ in the statement can be determined. □We choose C_1=2C̅ and ϖ=2C̅∑_h≥1|ω_h| in (<ref>). The largeness of M_∞ and the smallness of ∑_h≥1|ω_h| imply the smallness of σ^*-σ_*. Then, following <cit.>, we conclude Under assumptions (<ref>)–(<ref>), if M_∞ is sufficiently large and ∑_h≥1|ω_h| is sufficiently small, then, for any ϑ∈Π_h=0^∞[0,1) and Δ x>0, the modified Glimm scheme introduced above defines a sequence of global approximate solutions U_Δ x,ϑ(x,y) such thatsup_x>0T.V.{U_Δ x,ϑ(x,y):(-∞,b_Δ x,ϑ(x))}<∞,∫_-∞^0|U_Δ x,ϑ(x_1,y+b_Δ x,ϑ(x_1))-U_Δ x,ϑ(x_2,y+b_Δ x,ϑ(x_2))|y< L_1|x_1-x_2|,for some L_1>0 independent of U_Δ x,ϑ, Δ x, and ϑ.§ CONVERGENCE OF THE APPROXIMATE SOLUTIONSIn <ref>, the uniform bound of the total variation of the approximate solutions U_Δ x,ϑ has been obtained. Then, by Propositions <ref>–<ref>, the existence of convergent subsequences of the approximate solutions {U_Δ x,ϑ} follows. Now we are going to prove that there is a convergent subsequence of the approximate solutions {U_Δ x,ϑ} whose limit is an entropy solution to our problem.Take Δ x=2^-m, m=0,1,2,⋯. For any randomly chosen sequences ϑ=(ϑ_0,ϑ_1,ϑ_2,…,ϑ_h,…), we obtain a set of approximate solutions, which are denoted by {(u_m,v_m)}. 
It suffices to prove that there is a subsequence (still denoted by) {(u_m,v_m)} such that∬_Ω_Δ x,ϑ(ϕ_xρ_m u_m+ϕ_yρ_m v_m -ρ_m v_mϕ/y)xy +∫_-∞^y_0(0)ϕ(x_0,y)ρ(x_0,y)u(x_0,y)y→0for any ϕ(x,y)∈C_0^1(ℝ^2;ℝ), and∬_Ω_Δ x,ϑ(ϕ_x v_m-ϕ_y u_m) xy→0for any ϕ(x,y)∈C_0^1(Ω;ℝ), as m→∞.We now prove (<ref>) only, since (<ref>) can be deduced analogously.For simplicity, we drop the subscript of (u_m,v_m), and rewrite (<ref>) as∬_Ω_Δ x,ϑ(ϕ_xρ u+ϕ_yρ v-ρ vϕ/y)xy +∫_-∞^y_0(0)ϕ(x_0,y)ρ(x_0,y)u(x_0,y)y=∑_h=1^∞∬_Ω_Δ x,ϑ,h(ϕ_xρ u+ϕ_yρ v-ρ vϕ/y)xy +∫_-∞^y_0(0)ϕ(x_0,y)ρ(x_0,y)u(x_0,y)y.By the shock waves and the upper/lower edges of rarefaction waves, each Ω_Δ x,ϑ,h can be divided into smaller polygons: Ω_Δ x,ϑ,h,j, j=0,-1,-2,⋯, alternatively, where Ω_Δ x,ϑ,h,0 is the uppermost area below the approximate boundary Γ_Δ x,ϑ,h. Then we have∬_Ω_Δ x,ϑ(ϕ_xρ u+ϕ_yρ v-ρ vϕ/y)x y +∫_-∞^y_0(0)ϕ(x_0,y)ρ(x_0,y)u(x_0,y)y=∑_h=1^∞∑_j=0^-∞∬_Ω_Δ x,ϑ,h,j(ϕ_xρ u+ϕ_yρ v-ρ vϕ/y)xy +∫_-∞^y_0(0)ϕ(x_0,y)ρ(x_0,y)u(x_0,y)y=-∑_h,j∬_Ω_Δ x,ϑ,h,jϕ((ρ u)_x+(ρ v)_y+ρ v/y)xy +∑_h,j∬_Ω_Δ x,ϑ,h,j( (ϕρ u)_x+(ϕρ v)_y)xy +∫_-∞^y_0(0)ϕ(x_0,y)ρ(x_0,y)u(x_0,y)y=:1+2 +3. We first have1→0 as Δ x→ 0.Proof. To deal with the first term 1 in (<ref>), we use the transform:σ=y/x,η=y-y_n(h)/x-x_h,where (x_h,y_n(h)) are the center of the Riemann problem, and n depends on j. Then we obtain1= -∑_h,j∬_Ω_Δ x,ϑ,h,j(σ-η)ϕρ/σ(y_n(h)-η x_h)(-σ^2(1-u^2/c^2) u_σ+2uvσ^2/c^2v_σ +(1-v^2/c^2) v_σσ+v) ησ-∑_h,j∬_Ω_Δ x,ϑ,h,j(η-σ)ϕ/σx_h-y_n(h)(-η(ρ u)_η+(ρ v)_η) ησ.From the construction of the approximate solutions, the first term of (<ref>) vanishes. For the second term, we have-η(ρ u)_η+(ρ v)_η=O(1)Δσ,where Δσ is the change of the σ-coordinate in domain Ω_Δ x,ϑ,h,j. Denote the rarefaction waves in Ω_Δ x,ϑ,h alternatively by α_R,h,i. Then we have1=O(1)∑_h,jΔη(Δσ)^2,with Δη=O(1)α_R,h,i. According to Proposition <ref>, the total strength ∑_i|α_R,h,i| of rarefaction waves in Ω_Δ x,ϑ,h is bounded, so that1=O(1) diam(suppϕ) Δ x,which gives desired result. □Next, applying Green's formula in each Ω_Δ x,ϑ,h,j, we obtain2+3 =∑_h=1^∞∫_-∞^b_Δ x,ϑ(x_h)ϕ(x_h,y)(ρ(x_h-,y)u(x_h-,y)-ρ(x_h+,y)u(x_h+,y))y +∑_h=0^∞∫_x_h^x_h+1ϕ(x,b(x)) ρ(x,b(x)) (v(x,b(x))-u(x,b(x))b'(x)) x +∑_h,i∫_W_h,i(s_h,i(ρ^+u^+-ρ^-u^-)-(ρ^+v^+-ρ^-v^-))ϕx=: 4+5+6,where W_h,i={(x,y) : y=w_h,i(x)=s_h,i(x-x_h)+y_n(h)for some n} are shock waves or upper/lower edges of rarefaction waves lying in Ω_Δ x,ϑ,h, and ρ^±=ρ(x,w_i,h(x)±), u^±=u(x,w_i,h(x)±), and v^±=v(x,w_i,h(x)±).We now show There exists a subsequence of {(u_m,v_m)} such that 4→0 as m→∞.Proof. 
The first term on the right hand of (<ref>) can be rewritten as4=∑_h≥1V_hwithV_h =∑_n=n_χ,h+1^n_b,h-1∫_y_n-1(h)^y_n(h)ϕ(x_h,y)(ρ(x_h-,y)u(x_h-,y)-ρ(x_h+,y)u(x_h+,y))y+∫_χ_Δ x,ϑ(x_h)^y_n_χ,h(h)+1ϕ(x_h,y)(ρ(x_h-,y)u(x_h-,y)-ρ(x_h+,y)u(x_h+,y))y+∫_y_n_b,h(h)-1^b_Δ x,ϑ(x_h)ϕ(x_h,y)(ρ(x_h-,y)u(x_h-,y)-ρ(x_h+,y)u(x_h+,y))y.To show 4→0 for some subsequence {(u_m,v_m)}, we now introduceV=∑_h≥1V_hwithV_h =∑_n=n_χ,h+1^n_b,h-1∫_y_n-1(h)^y_n(h)ϕ(x_h,y_n(h))(ρ(x_h-,y)u(x_h-,y)-ρ(x_h+,y)u(x_h+,y))y +∫_y_n_b,h(h)^b_Δ x,ϑ(x_h)ϕ(x_h,y_n_b,h+1(h))(ρ(x_h-,y)u(x_h-,y)-ρ(x_h+,y)u(x_h+,y))y+∫_y_n_b,h-1(h)^y_n_b,h(h)ϕ(x_h,y_n_b,h(h))(ρ(x_h-,y)u(x_h-,y)-ρ(x_h+,y)u(x_h+,y)) y+∫_y_n_χ,h(h)^y_n_χ,h+1(h)ϕ(x_h,y_n_χ,h+1(h))(ρ(x_h-,y)u(x_h-,y)-ρ(x_h+,y)u(x_h+,y))y+∫_χ_Δ x,ϑ(x_h)^y_n_χ,h(h)ϕ(x_h,y_n_χ,h(h))(ρ(x_h-,y)u(x_h-,y)-ρ(x_h+,y)u(x_h+,y))y=:∑_n=n_χ,h+1^n_b,h-1∫_y_n-1(h)^y_n(h)ϕ(x_h,y_n(h))(ρ(x_h-,y)u(x_h-,y)-ρ(x_h+,y)u(x_h+,y))y+V̌_h^(0)+V̌_h^(1)+V̂_h^(1)+V̂_h^(0).From the construction of approximate solutions, we haveV̌_h^(0) =O(1)Δ x(|α_r,2|+|ω_h+1|+|Δσ_b_h|+Q(Λ_b))(see Case <ref>),V̂_h^(0) =O(1)Δ x(|α_l,1|+|Δσ_χ_h|+Q(Λ_s))(see Case <ref>).From Proposition <ref>, we obtain∑_h=1^∞(V̌_h^(0)+V̂_h^(0))=O(1)Δ xF(I_1).Then we write V̌_h^(1) and V̂_h^(1) asV̌_h^(1) =∫_y_n_b,h-1(h)^y_n_b,h(h)ϕ(x_h,y_n_b,h(h)) (ρ(x_h-,y)u(x_h-,y)-ρ̌(x_h+,y)ǔ(x_h+,y)) y +∫_y_n_b,h-1(h)^y_n_b,h(h)ϕ(x_h,y_n_b,h(h))(ρ̌(x_h+,y)ǔ(x_h+,y)-ρ(x_h+,y)u(x_h+,y))yandV̂_h^(1) =∫_y_n_χ,h(h)^y_n_χ,h+1(h)ϕ(x_h,y_n_χ,h(h)) (ρ(x_h-,y)u(x_h-,y)-ρ̂(x_h+,y)û(x_h+,y))y +∫_y_n_χ,h(h)^y_n_χ,h+1(h)ϕ(x_h,y_n_χ,h(h))(ρ̂(x_h+,y)û(x_h+,y)-ρ(x_h+,y)u(x_h+,y))y,whereǓ(x_h+,y)= Ũ(yx_h;r_h,n_b,h-1x_h,U(x_h+,r_h,n_b,h-1)),Û(x_h+,y)=Ũ(yx_h;r_h,n_χ,h-1x_h,U(x_h+,r_h,n_χ,h-1)),and ρ̌ and ρ̂ are determined via Bernoulli's equation. By the construction of the approximate solutions near the boundary and near the leading shock, we have∫_y_n_b,h-1(h)^y_n_b,h(h)ϕ(x_h,y_n_b,h(h))(ρ̌(x_h+,y)ǔ(x_h+,y)-ρ(x_h+,y)u(x_h+,y)) y=O(1)Δ x(|α_r,2|+|ω_h+1|+|Δσ_b_h|+Q(Λ_b))(see Case <ref>), ∫_y_n_χ,h(h)^y_n_χ,h+1(h)ϕ(x_h,y_n_χ,h(h))(ρ̂(x_h+,y)û(x_h+,y)-ρ(x_h+,y)u(x_h+,y)) y=O(1)Δ x(|α_l,1|+|Δσ_χ_h|+Q(Λ_s)) (see Case <ref>).Similarly, by Proposition <ref>, we conclude∑_h=1^∞∫_y_n_b,h-1(h)^y_n_b,h(h)ϕ(x_h,y_n_b,h(h))(ρ̌(x_h+,y)ǔ(x_h+,y) -ρ(x_h+,y)u(x_h+,y))y =O(1)Δ xF(I_1),∑_h=1^∞∫_y_n_χ,h(h)^y_n_χ,h+1(h)ϕ(x_h,y_n_χ,h(h))(ρ̂(x_h+,y)û(x_h+,y)-ρ(x_h+,y)u(x_h+,y))y =O(1)Δ xF(I_1).SettingV̅_h = ∑_n=n_χ,h+1^n_b,h-1∫_y_n-1(h)^y_n(h)ϕ(x_h,y_n(h))(ρ(x_h-,y)u(x_h-,y) -ρ(x_h+,y)u(x_h+,y))y +∫_y_n_b,h-1(h)^y_n_b,h(h)ϕ(x_h,y_n_b,h(h)) (ρ(x_h-,y)u(x_h-,y) -ρ̌(x_h+,y)ǔ(x_h+,y))y +∫_y_n_χ,h(h)^y_n_χ,h+1(h)ϕ(x_h,y_n_χ,h(h))(ρ(x_h-,y)u(x_h-,y) -ρ̂(x_h+,y)û(x_h+,y))y. As in <cit.> (see also <cit.>), letH=∏_h=0^∞[0,1) ={ϑ=(ϑ_0,ϑ_1,ϑ_2,…, ϑ_h,…) : ϑ_h∈[0,1), h=0,1,2,⋯}.Denoting y̅=y_n-1(h)+ϑ_h(y_n(h)-y_n-1(h)), we obtain from (<ref>) thatρ(x_h-,y)u(x_h-,y)-ρ(x_h+,y)u(x_h+,y)=ρ(x_h-,y)u(x_h-,y) -ρ(x_h-,y̅)u(x_h-,y̅)+ρ(x_h+,y̅)u(x_h+,y̅) -ρ(x_h+,y)u(x_h+,y)=O(1)|α|+O(1)|α||Δσ_α|+O(1)|Δσ_α|=O(1)(|α|+|Δσ_α|),where α is an elementary wave in Ω_Δ x,ϑ,h,j and Δσ_α is the change of the σ-coordinate in the elementary wave α. Denote the elementary waves in Ω_Δ x,ϑ,h by α_h,i. Then weV̅_h=O(1)(∑_i≤0|α_h,i|+σ^*-σ_*)Δ x,which implies∑_h≥1∫_HV̅_h^2ϑ=O(1)diam(suppϕ)(∑_i≤0|α_h,i|+σ^*-σ_*)^2Δ x. Next, we need the following lemma. The approximate solutions {U_Δ x,ϑ(x,y)} satisfy∫_0^1∫_y_n-1(h)^y_n(h)(U_Δ x,ϑ(x_h-,y)-U_Δ x,ϑ(x_h+,y))y ϑ_h=O(1)(Δ x)^3+O(1)(|α|+|β|)(Δ x)^2. Proof. 
We now give a proof when α and β are both shock waves, since the remaining cases can be obtained similarly.Suppose that α and β issue from (x_h-1,y_n-1(h-1)) and (x_h-1,y_n(h-1)), and end at (x_h,r_1) and (x_h,r_2), respectively. Set a_1=r_1-y_n-1(h)/y_n(h)-y_n-1(h) and a_2=r_2-y_n-1(h)/y_n(h)-y_n-1(h). From the construction of approximate solutions, we have∫_0^1∫_y_n-1(h)^y_n(h)(U_Δ x,ϑ(x_h-,y)-U_Δ x,ϑ(x_h+,y))y ϑ_h=∫_0^1∫_y_n-1(h)^y_n(h)U_Δ x,ϑ(x_h-,y)y ϑ_h- ∫_0^1∫_y_n-1(h)^y_n(h)U_Δ x,ϑ(x_h+,y)y ϑ_h=∫_y_n-1(h)^r_1Ũ(y/x_h;σ_2,U_l)y +∫_r_1^r_2Ũ(y/x_h;σ_2,Φ(0,α_2;U_l))y +∫_r_2^y_n(h)Ũ(y/x_h;σ_1,Φ(β_1,0;Ũ(σ_1;σ_2,Φ(0,α_2;U_l))))y -a_1∫_y_n-1(h)^y_n(h)Ũ(y/x_h;σ_2,U_l)y -(a_2-a_1)∫_y_n-1(h)^y_n(h)Ũ(y/x_h;σ_2,Φ(0,α_2;U_l))y -(1-a_2)∫_y_n-1(h)^y_n(h)Ũ(y/x_h;σ_1,Φ(β_1,0;Ũ(σ_1;σ_2,Φ(0,α_2;U_l))))y.Since Ũ(y/x_h;σ_2,Φ(0,α_2;U_l)) =Ũ(y/x_h;Ũ(σ_1;σ_2,Φ(0,α_2;U_l))), we obtain∫_0^1∫_y_n-1(h)^y_n(h)(U_Δ x,ϑ(x_h-,y)-U_Δ x,ϑ(x_h+,y))y ϑ_h=∫_y_n-1(h)^r_1(1-a_1)(Ũ(y/x_h;σ_2,U_l)-Ũ(y/x_h;σ_2,Φ(0,α_2;U_l)))y -∫_y_n-1(h)^r_1(1-a_2)(Ũ(y/x_h;σ_1,Φ(β_1,0;Ũ(σ_1;σ_2,Φ(0,α_2;U_l)))) -Ũ(y/x_h;Ũ(σ_1;σ_2,Φ(0,α_2;U_l))))y +∫_r_1^r_2a_1(Ũ(y/x_h;σ_2,Φ(0,α_2;U_l))-Ũ(y/x_h;σ_2,U_l))y -∫_r_1^r_2(1-a_2)(Ũ(y/x_h;σ_1,Φ(β_1,0;Ũ(σ_1;σ_2,Φ(0,α_2;U_l)))) -Ũ(y/x_h;Ũ(σ_1;σ_2,Φ(0,α_2;U_l))))y +∫_r_2^y_n(h)a_2(Ũ(y/x_h;σ_1,Φ(β_1,0;Ũ(σ_1;σ_2,Φ(0,α_2;U_l)))) -Ũ(y/x_h;Ũ(σ_1;σ_2,Φ(0,α_2;U_l))))y -∫_r_2^y_n(h)a_1(Ũ(y/x_h;σ_2,U_l)-Ũ(y/x_h;σ_2,Φ(0,α_2;U_l)))y.Then, by Taylor's expansion, we haveŨ(y/x_h;σ_2,U_l) -Ũ(y/x_h;σ_2,Φ(0,α_2;U_l)) =U_l-Φ(0,α_2;U_l)+A_1(y-y_n-1(h))+O(1)(y-y_n-1(h))^2, Ũ(y/x_h;σ_1,Φ(β_1,0;Ũ(σ_1;σ_2,Φ(0,α_2;U_l)))) -Ũ(y/x_h;Ũ(σ_1;σ_2,Φ(0,α_2;U_l))) =Φ(β_1,0;Ũ(σ_1;σ_2,Φ(0,α_2;U_l))) -Ũ(σ_1;σ_2,Φ(0,α_2;U_l))+A_2(y-y_n(h))+O(1)(y-y_n(h))^2,withA_1= . ∂_y(Ũ(y/x_h;σ_2,U_l)-Ũ(y/x_h;σ_2,Φ(0,α_2;U_l))) |_y=y_n-1(h),A_2= . ∂_y(Ũ(y/x_h;σ_1,Φ(β_1,0;Ũ(σ_1;σ_2,Φ(0,α_2;U_l)))) -Ũ(y/x_h;Ũ(σ_1;σ_2,Φ(0,α_2;U_l)))) |_y=y_n(h).A direct computation leads to∫_0^1∫_y_n-1(h)^y_n(h)(U_Δ x,ϑ(x_h-,y)-U_Δ x,ϑ(x_h+,y))y ϑ_h= O(1)(y_n(h)-y_n-1(h))^3+12A_1(r_1-y_n-1(h))(r_1-y_n(h))-12A_2(r_2-y_n-1(h))(r_2-y_n(h)).Noting that A_1=O(1)|α| and A_2=O(1)|β|, together with the Courant-Friedrichs-Lewy condition, we conclude(<ref>).□Substituting U_Δ x,ϑ in Lemma <ref> by ρ u and carrying out the same process lead to∫_0^1V̅_hϑ_h =O(1)(diam(suppϕ)+∑_i≤0|α_h,i|)(Δ x)^2=O(1)(Δ x)^2.As in (<ref>), we obtainV̅_k=O(1)(∑_i≤0|α_k,i|+σ^*-σ_*)Δ x.Then∑_h>k∫_HV̅_hV̅_k ϑ≤ ∑_h>k|∫_0^1 V̅_hdϑ_h|∫_0^1| V̅_k| ϑ̂_h =O(1)(diam(suppϕ))^2Δ x,where ϑ̂_h =ϑ_0⋯ϑ_h-1ϑ_h+1⋯.SinceV̅_L^2(H) =∑_h≥1∫_HV̅_h^2ϑ +2∑_h>k∫_HV̅_hV̅_k ϑ,we conclude thatV̅_L^2(H)→0as Δ x→0,which, combining with (<ref>)–(<ref>), gives a subsequence (still denoted by) {(u_m,v_m)} such that V→0 almost everywhere. Meanwhile, we haveV_h-V_h = ∑_n=n_χ,h+1^n_b,h-1∫_y_n-1(h)^y_n(h)(ϕ(x_h,y_n(h))-ϕ(x_h,y))(ρ(x_h-,y)u(x_h-,y)-ρ(x_h+,y)u(x_h+,y))y +∫_y_n_b,h-1(h)^y_n_b,h(h)(ϕ(x_h,y_n_b,h(h))-ϕ(x_h,y)) (ρ(x_h-,y)u(x_h-,y)-ρ(x_h+,y)u(x_h+,y))y+∫_y_n_b,h(h)^b_Δ x,ϑ(x_h)(ϕ(x_h,y_n_b,h+1(h))-ϕ(x_h,y))(ρ(x_h-,y)u(x_h-,y)-ρ(x_h+,y)u(x_h+,y))y +∫_χ_Δ x,ϑ(x_h)^y_n_χ,h(h)(ϕ(x_h,y_n_χ,h(h))-ϕ(x_h,y))(ρ(x_h-,y)u(x_h-,y)-ρ(x_h+,y)u(x_h+,y))y +∫_y_n_χ,h(h)^y_n_χ,h+1(h)(ϕ(x_h,y_n_χ,h+1(h))-ϕ(x_h,y))(ρ(x_h-,y)u(x_h-,y)-ρ(x_h+,y)u(x_h+,y))y=O(1)Δ x∑_n=n_χ,h+1^n_b,h-1∫_y_n-1(h)^y_n(h)(ρ(x_h-,y)u(x_h-,y)-ρ(x_h+,y)u(x_h+,y))y +O(1)(Δ x)^2=O(1)(∑_i≤0|α_h,i|+σ^*-σ_*+1)(Δ x)^2,which leads to4-V= ∑_h≥1V_h-V_h =O(1) diam(suppϕ)Δ x.Thus, 4→0 as m→∞ for some subsequence {(u_m,v_m)}. □ 5, 6→0 as Δ x→ 0.Proof. 
Sinceb'(x)=v(x_h+,b(x_h)-)/u(x_h+,b(x_h)-)for x∈(x_h,x_h+1),it follows from the construction of our approximate solution thatv(x,b(x))-u(x,b(x))b'(x)=O(1)Δ x.Therefore, we have5 =O(1) diam(suppϕ) Δ x→0 as Δ x→0. As for 6, we divide this term into three parts. The first part is the integral along the leading shock, where W_h,i=S_Δ x,ϑ,h. For this part, by similar arguments in treating 5, we have∑_h∫_S_Δ x,ϑ,h[s_h(ρ^+u^+-ρ^-u^-)-(ρ^+v^+-ρ^-v^-)]ϕx =O(1)Δ x.The second part is the integral along the upper or lower edges of rarefaction waves and therefore vanishes automatically. The third part is the integral along the weak shock waves, that is, W_h,i≠ S_Δ x,ϑ,h. In this case, by (<ref>), we haveρ^+u^+-ρ^-u^-=(ρ^+u^+-ρ^-u^-)|_x=x_h++O(1)(ρ^+u^+-ρ^-u^-)|_x=x_h+Δ x,ρ^+v^+-ρ^-v^-=(ρ^+v^+-ρ^-v^-)|_x=x_h++O(1)(ρ^+v^+-ρ^-v^-)|_x=x_h+Δ x.Thus, in view of the Rankine-Hugoniot conditions, we obtain∑_is_h,i(ρ^+u^+-ρ^-u^-)-(ρ^+v^+-ρ^-v^-)=O(1)∑_i|α_S,h,i|Δ x,where α_S,h,i are the weak shock waves in Ω_Δ x,ϑ,h. Combining all the three parts together, we have4 =O(1) diam(suppϕ) ∑_i|α_S,h,i|Δ x+O(1)Δ x.By Proposition <ref>, ∑_i|α_S,h,i| is uniformly bounded with respect to h. Therefore, 4→0 as Δ x→0.□ With all the arguments stated above, a standard procedure as in <cit.> gives the following main theorem. Suppose that (<ref>)–(<ref>), 1<γ<3, and 0<p_0<p^*. Then, when M_∞ is sufficiently large, there are ϵ_0>0 anda null set 𝒩 such that, if T.V. p^b<ϵ_0, for each ϑ∈∏_h=0^∞[0,1)\𝒩, there exist both a subsequence {Δ_i}_i=0^∞⊂{Δ x} of the mesh size with Δ_i→0 as i→∞ and a triple of functions b_ϑ(x) with b_ϑ(0)=0, χ_ϑ(x) with χ_ϑ(0)=0, and U_ϑ(x,y)∈ O_ε_0(G(s_0)∩ W(p_0,p_∞)) such that Man (Man)Man* b_Δ_i,ϑ(x) converges to b_ϑ(x) uniformly in any bounded x-interval;* χ_Δ_i,ϑ(x) converges to χ_ϑ(x) uniformly in any bounded x-interval;* b'_Δ_i,ϑ(x) converges to b'_ϑ(x)∈ BV([0,∞)) a.e.and b_ϑ(x) x=b'(x), a.e. with |b'_ϑ-b_0|<Cϵ_0;* s_Δ_i,ϑ(x) converges to s_ϑ(x)∈ BV([0,∞)) a.e. and χ_ϑ(x) x=σ_ϑ(x), a.e. with |s_ϑ-s_0|<Cϵ_0;* U_Δ_i,ϑ(x,·) converges to U_ϑ∈L^1_loc(-∞,b_ϑ(x)) for every x>0, and U_ϑ is a global entropy solution of the inverse problem (<ref>)–(<ref>) and satisfies (<ref>)–(<ref>). § ASYMPTOTIC BEHAVIOR OF GLOBAL ENTROPY SOLUTIONS To establish the asymptotic behavior of global entropy solutions, we need further estimates of the approximate solutions. There exists a constant M_1, independent of U_Δ x,ϑ, Δ x, and ϑ, such that∑_ΛE_Δ x,ϑ(Λ)<M_1for E_Δ x,ϑ(Λ) given as in (<ref>).Proof. By Proposition <ref>, for any interaction region Λ⊂{(h-1)Δ x≤ (h+1)Δ x} for h≥1, we have∑_ΛE_Δ x,ϑ(Λ)≤ 4∑_Λ(F(I)-F(J))≤ 4F(I_1).Thus, choosing M_1=4F(I_1)+1, the proof is complete.□For any t>0, let ℒ_j,ϑ(t-), j=1,2, be the total variation of j-weak waves in U_ϑ crossing line x=t, and let ℒ_j,Δ x, ϑ(t-), j=1,2, be the total variation of j-weak waves in U_Δ x,ϑ crossing line x=t. Then we have As x→∞,∑_j=1^2ℒ_j,ϑ(x-)→0. Proof. Let U_Δ_i,ϑ be a sequence of the approximate solutions introduced in Theorem <ref>, and let the corresponding term E_Δ x,ϑ(Λ) be defined in (<ref>). As in <cit.>, denoted by E_Δ x,ϑ the measure of assigning quantities E_Δ x,ϑ(Λ) to the center of Λ. 
Then, by Lemma <ref>, we can choose a subsequence (still denoted as) E_Δ_i,ϑ such thatE_Δ_i,ϑ→ dE_ϑas Δ_i→0with E_ϑ(Λ)<∞.Therefore, for ϵ_1>0 sufficiently small, we can choose x_ϵ_1 (independent of U_Δ_i,ϑ), Δ_i, and ϑ such that∑_h>[x_ϵ_1/Δ x] E_Δ_i,ϑ(Λ_h,n)<ϵ_1.Let X_ϵ_1^1=(x_ϵ_1,χ_Δ_i,ϑ(x_ϵ_1)) and X_ϵ_1^2=(x_ϵ_1,b_Δ_i,ϑ(x_ϵ_1)) be the two points lying in the approximate leading shock y=χ_Δ_i,ϑ(x) and the approximate boundary y=b_Δ_i,ϑ(x), respectively. Let χ_Δ_i,ϑ^j be the approximate j–generalized characteristic issuing from X_ϵ_1^j for j=1,2, respectively. According to the construction of the approximate solutions, there exist constants M̂_j>0, j=1,2, independent of U_Δ_i,ϑ, Δ_i, and ϑ,such that|χ_Δ_i,ϑ^j(x_1)-χ_Δ_i,ϑ^j(x_2)|≤M̂_j(|x_1-x_2|+Δ_i) forx_1,x_2>x_ϵ_1.Then we choose a subsequence (still denoted by) Δ_i such thatχ_Δ_i,ϑ^j→χ_ϑ^jas Δ_i→0for some χ_ϑ^j∈Lip with (χ_ϑ^j)' bounded.Let two characteristics χ_ϑ^1 and χ_ϑ^2 intersect with the cone boundary Γ_ϑ and the leading shock S_ϑ at points (t_ϵ_1^1,χ_ϑ^1(t_ϵ_1^1)) and (t_ϵ_1^2,χ_ϑ^2(t_ϵ_1^2))for some t_ϵ_1^1 and t_ϵ_1^2, respectively. Then, as in <cit.>, we apply the approximate conservation law to the domain below χ_Δ_i,ϑ^1 and above χ_Δ_i,ϑ^1 and use Lemma <ref> to obtainℒ_j,Δ_i,ϑ(x-)≤ C∑_h>[x_ϵ_1/Δ x] E_Δ_i,ϑ(Λ_h,n)<Cϵ_1for j=1,2, x>t_ϵ_1^1+t_ϵ_1^2. This completes the proof.□For p^b_∞:=lim_x→∞p^b(x), s_∞:=lim_x→∞s_ϑ(x), and b'_∞=lim_x→∞(b_ϑ)'_+(x),lim_x→∞sup{|U_ϑ(x,y)-Ũ(σ;s_∞,G(s_∞))| : χ_ϑ(x)<y<b_ϑ(x)}=0,12|Ũ(b_∞';s_∞,G(s_∞))|^2+γ (p^b_∞)^γ-1/γγ-1=12+γ p_∞^γ-1/γγ-1,Ũ(b_∞';s_∞,G(s_∞))·(-b_∞',1)=0.Proof. For every x∈[x_k-1,x_k), we have|U_ϑ(x,y)-Ũ(σ;s_Δ_i,ϑ,G(s_Δ_i,ϑ))| +|Ũ(b_Δ_i,ϑ';s_Δ_i,ϑ,G(s_Δ_i,ϑ))·(-b_Δ_i,ϑ',1)| +|12|Ũ(b_Δ_i,ϑ';s_Δ_i,ϑ,G(s_Δ_i,ϑ))|^2 +γ (p^b_Δ x,k)^γ-1/γγ-1-12-γ p_∞^γ-1/γγ-1|≤ C (∑_j=1^2ℒ_j,Δ_i,ϑ(x-)+|Δ_i|).By Theorem <ref>, letting i→∞, we obtainsup_χ_ϑ(x)<y<b_ϑ(x)|U_ϑ(x,·)-Ũ(σ;s_ϑ,G(s_ϑ))| +|Ũ((b_ϑ)_+';s_ϑ,G(s_ϑ))·(-b_ϑ',1)| +|12|Ũ((b_ϑ)_+';s_ϑ,G(s_ϑ))|^2+γ (p^b)^γ-1/γγ-1-12-γ p_∞^γ-1/γγ-1|≤ C ∑_j=1^2ℒ_j,ϑ(x-).Then, using Lemma <ref> and noting that Ũ(σ;s,G(s)) is a continuous function with respect to σ and s, we conclude our result.□Acknowledgments. This work was initiated when Yun Pu studied at the University of Oxford as a recognized DPhil student through the Joint Training Ph.D. Program between the University of Oxford and Fudan University – He would like to express his sincere thanks to both the home and the host universities for providing him with such a great opportunity. The research of Gui-Qiang G. Chen is partially supported by the UK Engineering and Physical Sciences Research Council Awards EP/L015811/1, EP/V008854/1, and EP/V051121/1. The research of Yun Pu was partially supported by the Joint Training Ph.D. Program of China Scholarship Council, No. 202006100104. The research of Yongqian Zhang is supported in part by the NSFC Project 12271507. Conflict of Interest. The authors declare that they have no conflict of interest. The authors also declare that this manuscript has not been previously published, and will not be submitted elsewhere before your decision.Data Availability: Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study. 99Abbott1959 I. H. Abbott and A. E. von Doenhoff. Theory of Wing Sections: Including a Summary of Airfoil Data. Dover Publications, Inc., New York, 1959.Abzalilov2005 D. F. Abzalilov. Minimization of the wing airfoil drag coefficient using the optimal control method. Izv. Ross. Akad. Nauk Mekh. Zhidk. 
Gaza, 40(6):173–179, 2005.Anderson2019 J. D. Anderson Jr. Hypersonic and High-Temperature Gas Dynamics. American Institute of Aeronautics and Astronautics, Inc., 2019.Bressan2000 A. Bressan. Hyperbolic Systems of Conservation Laws. The One-Dimensional Cauchy Problem. Oxford Lecture Series in Mathematics and its Applications, 20. Oxford University Press, Oxford, 2000. Caramia2019 G. Caramia and A. Dadone. A general use adjoint formulation for compressible and incompressible inviscid fluid dynamic optimization. Comput. & Fluids, 179:289–300, 2019.Xiang2021 G.-Q. Chen, J. Chen, and W. Xiang. Stability of attached transonic shocks in steady potential flow past three-dimensional wedges. Commun. Math. Phys., 387(1):111–138, 2021.Fang2009 G.-Q. Chen and B. Fang. Stability of transonic shock-fronts in three-dimensional conical steady potential flow past a perturbed cone. Discrete and Continuous Dynamical Systems, 23:85–114, 2009.Fang2017 G.-Q. Chen and B. Fang. Stability of transonic shocks in steady supersonic flow past multidimensional wedges. Adv. Math., 314:493–539, 2017.Chen Feldman2018 G.-Q. Chen and M. Feldman. The Mathematics of Shock Reflection-Diffraction and von Neumann's Conjectures. Princeton University Press, 2018.Kuang2021 G.-Q. Chen, J. Kuang, and Y. Zhang. Stability of conical shocks in the three-dimensional steady supersonic isothermal flows past Lipschitz perturbed cones. SIAM J. Math. Anal., 53(3):2811–2862, 2021.Chen Liu1994 G.-Q. Chen, C. D. Levermore, and T.-P. Liu. Hyperbolic conservation laws with stiff relaxation terms and entropy. Comm. Pure Appl. Math., 47(6):787–830, 1994.Chen Wagner2003 G.-Q. Chen and D. H. Wagner. Global entropy solutions to exothermically reacting, compressible Euler equations. J. Differential Equations, 191(2):277–322, 2003.Chen Wang2022 G.-Q. Chen and Y. Wang. Global solutions of the compressible Euler equations with large initial data of spherical symmetry and positive far-field density. Arch. Ration. Mech. Anal., 243(3):1699–1771, 2022.Chen2006 G.-Q. Chen, Y. Zhang, and Zhu D.-W. Existence and stability of supersonic Euler flows past lipschitz wedges. Arch. Ration. Mech. Anal., 181:261–310, 2006.Chen2001 S. Chen. Existence of stationary supersonic flows past a pointed body. Arch. Ration. Mech. Anal., 156(2):141–181, 2001.Chen2020 S. Chen. Mathematical Analysis of Shock Wave Reflection. Springer, 2020.Chen2000 S. Chen and D. Li. Supersonic flow past a symmetrically curved cone. Indiana Univ. Math. J., 49(4):1411–1435, 2000.Chen2004 S. Chen, Z. Wang, and Y. Zhang. Global existence of shock front solutions to the axially symmetric piston problem for compressible fluids. J. Hyperbolic Differ. Equ., 1(1):51–84, 2004.Xin2002 S. Chen, Z. Xin, and H. Yin. Global shock waves for the supersonic flow past a perturbed cone. Commun. Math. Phys., 228(1):47–84, 2002.Courant1948 R. Courant and K. O. Friedrichs. Supersonic Flow and Shock Waves. Interscience Publishers, Inc., New York, N. Y., 1948.Yin2007 D. Cui and H. Yin. Global conic shock wave for the steady supersonic flow past a cone: isothermal case. Pacific J. Math., 233(2):257–289, 2007.Yin2009 D. Cui and H. Yin. Global supersonic conic shock wave for the steady supersonic flow past a cone: polytropic gas. J. Differential Equations, 246(2):641–669, 2009.Dafermos2016 C. M. Dafermos. Hyperbolic Conservation Laws in Continuum Physics, 4th Edition, Springer-Verlag, Berlin, 2016.Glimm1965 J. Glimm. Solutions in the large for nonlinear hyperbolic systems of equations. Comm. Pure Appl. 
Math., 18:697–715, 1965.Glimm1970 J. Glimm and P. D. Lax. Decay of Solutions of Systems of Nonlinear Hyperbolic Conservation Laws. Memoirs of the American Mathematical Society, No. 101. American Mathematical Society, Providence, R.I., 1970.Goldsworthy1952 F. A. Goldsworthy. Supersonic flow over thin symmetrical wings with given surface pressure distribution. Aeronautical Quarterly, 3(4):263–279, 1952.Golubkin1988 V. N. Golubkin and V. V. Negoda. Calculation of the hypersonic flow over the upwind side of a wing of small span at high angles of attack. Zh. Vychisl. Mat. i Mat. Fiz., 28(10):1586–1594, 1600, 1988.Golubkin1994 V. N. Golubkin and V. V. Negoda. Optimization of hypersonic wings. Zh. Vychisl. Mat. i Mat. Fiz., 34(3):446–460, 1994.Hu2019 D. Hu and Y. Zhang. Global conic shock wave for the steady supersonic flow past a curved cone. SIAM J. Math. Anal., 51(3):2372–2389, 2019.Keyfitz1991 B. L. Keyfitz and G. Warnecke. The existence of viscous profiles and admissibility for transonic shocks. Comm. Partial Differential Equations, 16(6-7):1197–1221, 1991.Lax1973 P. D. Lax. Hyperbolic Systems of Conservation Laws and the Mathematical Theory of Shock Waves. Conference Board of the Mathematical Sciences Regional Conference Series in Applied Mathematics, No. 11. Society for Industrial and Applied Mathematics, Philadelphia, Pa., 1973.Li2014 J. Li, I. Witt, and H. Yin. On the global existence and stability of a multi-dimensional supersonic conic shock wave. Commun. Math. Phys., 329(2):609–640, 2014.Li2022 Q. Li and Y. Zhang. An inverse problem for supersonic flow past a curved wedge. Nonlinear Anal. Real World Appl., 66:Paper No. 103541, 20, 2022.Li2006 T. Li and L. Wang. Global exact shock reconstruction for quasilinear hyperbolic systems of conservation laws. Discrete Contin. Dyn. Syst., 15(2):597–609, 2006.Wang2007 T. Li and L. Wang. Existence and uniqueness of global solution to an inverse piston problem. Inverse Problems, 23(2):683–694, 2007.Li2007 T. Li and L. Wang. Existence and uniqueness of global solution to an inverse piston problem. Inverse Problems, 23(2):683–694, 2007.Li2009 T. Li and L. Wang. Global Propagation of Regular Nonlinear Hyperbolic Waves, Progress in Nonlinear Differential Equations and their Applications, 76. Birkhäuser Boston, Ltd., Boston, MA, 2009.Liu1999 T.P. Liu and W. Lien. Nonlinear stability of a self-similar 3-dimensional gas flow. Commun. Math. Phys., 204:525–549, 1999.Maddalena2020 F. Maddalena, E. Mainini, and D. Percivale. Euler's optimal profile problem. Calc. Var. Partial Differential Equations, 59(2):Paper No. 56, 33, 2020.Mohammadi2001 B. Mohammadi and O. Pironneau. Applied Shape Optimization for Fluids. Numerical Mathematics and Scientific Computation. The Clarendon Press, Oxford University Press, New York, 2001. Oxford Science Publications.Pu2023 Y. Pu and Y. Zhang. An inverse problem for determining the shape of the wedge in steady supersonic potential flow. J. Math. Fluid Mech., 25(2):Paper No. 25, 22, 2023.Qu2020 A. Qu and H. Yuan. Radon measure solutions for steady compressible Euler equations of hypersonic-limit conical flows and Newton's sine-squared law. J. Differential Equations, 269(1):495–522, 2020.Robinson1956 A. Robinson and J. A. Laurmann. Wing Theory. Cambridge University Press, Cambridge, 1956.Smoller1983 J. Smoller. Shock Waves and Reaction-Diffusion Equations. Springer, New York, 1983.Vorobev1998 N. F. Vorobev. On an inverse problem in the aerodynamics of a wing in a supersonic flow. J. Appl. Mech. Tech. 
Phys., 39(3):86–91, 1998.Wang2014 L. Wang. An inverse piston problem for the system of one-dimensional adiabatic flow. Inverse Problems, 30(8):085009, 17, 2014.Wang2019 L. Wang and Y. Wang. An inverse Piston problem with small BV initial data. Acta Appl. Math., 160:35–52, 2019.Wang2009 Z. Wang and Y. Zhang. Steady supersonic flow past a curved cone. J. Differential Equations, 247(6):1817–1850, 2009.Xin2006 Z. Xin and H. Yin. Global multidimensional shock wave for the steady supersonic flow past a three-dimensional curved cone. Anal. Appl. (Singap.), 4(2):101–132, 2006.Xu2009 G. Xu and H. Yin. Global multidimensional transonic conic shock wave for the perturbed supersonic flow past a cone. SIAM J. Math. Anal., 41(1):178–218, 2009.Zhang1999 Y. Zhang. Global existence of steady supersonic potential flow past a curved wedge with a piecewise smooth boundary. SIAM J. Math. Anal., 31(1):166–183, 1999.Zhang2003 Y. Zhang. Steady supersonic flow past an almost straight wedge with large vertex angle. J. Differential Equations, 192(1):1–46, 2003.
Quantum simulation of the tricritical Ising model in tunable Josephson junction ladders Michele Burrello======================================================================================= A novel technique based on a "-uncertainty principle" is introduced in the study of subellipticity of the -Neumann problem. As an application, we determine the sharp order of subellipticity at the origin for a class of dilation-invariant special domains in ambient dimension ≤ 4. § INTRODUCTION§.§ Subellipticity in the ∂̅-Neumann problem The -Neumann problem, formulated by Spencer in the 1950s, is the staple of the partial differential equations approach to complex analysis in several variables. We briefly recall its formulation mostly in order to establish notation, referring to <cit.> or <cit.> for details. Given a domain Ω⊆^n (n≥ 2) and an integer q≥ 0, denote by L^2_(0,q)(Ω) the Hilbert space of (0,q)-forms, with scalar product defined with respect to the ambient Euclidean metric. The natural differential operatorhas a maximal extension :L^2_(0,q)(Ω)→ L^2_(0,q+1)(Ω) which, being closed and densely defined, admits a Hilbert space adjoint ^*:L^2_(0,q+1)(Ω)→ L^2_(0,q)(Ω). We follow the custom of denoting by the same symbol(resp. ^*) operators acting on forms of varying degrees (equivalently, we think ofand ^* as operators acting on ⊕_q L^2_(0,q)(Ω)).The Hodge Laplacian:=^*+^*is a nonnegative self-adjoint operator preserving form degrees, where the natural domain of definition ofisdom()={u∈dom()∩dom(^*)u∈dom(^*), ^* u∈dom()}.The -Neumann problem (at the level of (0,q)-forms) is the noncoercive boundary value problem u = v u∈dom()where v∈ L^2_(0,q)(Ω) is the datum, and the noncoercive boundary conditions are hidden in the requirement that u lie in the domain of , specifically in the conditions u∈dom(^*) and u∈dom(^*). If Ω⊂^n is bounded and pseudoconvex (see <cit.> or <cit.>), thenhas closed range and trivial null space at the level of (0,q)-forms for every q≥ 1. Hence, in positive degrees the -Neumann problem has a unique solution u∈ L^2_(0,q)(Ω) for every datum v∈ L^2_(0,q)(Ω). By Hodge theory, this solvability amounts to the vanishing of the L^2-Dolbeault cohomology groups in positive degrees. Existence of weak solutions of (<ref>) is thus elegantly settled, and the next natural question is that of regularity. Since interior regularity follows from classical elliptic theory, in broad terms the problem is to determine under which assumptions on the boundary b Ω boundary regularity in the -Neumann problem holds. This problem gave rise to deep and multi-faceted contributions by several authors through the last six decades. We refer the reader to the various existing surveys and accounts, e.g., <cit.>. In this paper, we study the local regularity problem, where regularity is measured in the L^2-Sobolev scale, on pseudoconvex domains Ω with smooth boundary. From now on, we restrict our considerations to the level of (0,1)-forms (our methods should be extendable to forms of higher degrees, but we wish to keep at a minimum the complexity of the setting). A crucial notion in the local regularity theory of the -Neumann problem is that of a subelliptic estimate, which we proceed to recall. Let p∈ bΩ be a boundary point and s>0. A subelliptic estimate of order s is said to hold at p if there exist a constant C>0 and a neighborhood V of p in ^n such that the inequality ‖ u‖_s^2≤ C( ‖ u‖^2 + ‖^*u‖^2+‖ u‖^2)holds for all (0,1)-forms u in the domain of ^* having coefficients in C^∞_c(Ω) and supported in Ω∩ V. 
Here ‖·‖_s is the L^2-Sobolev norm of order s, and ‖·‖=‖·‖_0 is the L^2 norm. The interest in (<ref>) stems from the fact that it implies a local gain of boundary regularity in the L^2-Sobolev scale for the -Neumann problem. More precisely, assume that Ω is bounded, pseudoconvex and has a smooth boundary, and denote by W^t_(0,1)(U)⊆ L^2_(0,1)(U) the L^2-Sobolev space of order t≥ 0 (where U is an open set). If a subelliptic estimate of order s holds at p, then we have the following: for every datum v∈ L^2_(0,1)(Ω) such that v_|Ω∩ U∈ W^t_(0,1)(Ω∩ U), where U is a neighborhood of p in ^n, we have u_|Ω∩ U'∈ W^t+2s_(0,1)(Ω∩ U'), where u is the solution of (<ref>) and U' is a possibly smaller neighborhood of p. We refer the reader to Theorem 1.13 of <cit.> for a more detailed statement, including implications for the canonical solution of the -problem and for the Bergman projection. A few basic and well-known remarks about subelliptic estimates are in order: * The validity of a subelliptic estimate is a local property of the boundary b Ω. This is trivial, because the condition u∈dom(^*) is local (as recalled at the beginning of Section <ref>). Thus, whether a subelliptic estimate of a given order s is valid at p only depends on the germ of b Ω at p. * By a result of Sweeney (see <cit.> and the remarks following its statement), the validity of (<ref>) is independent of the Hermitian metric chosen to define the L^2 spaces of forms and the adjoint ^*. In particular, any (locally defined) Hermitian metric could have been used in place of the standard Euclidean metric of ^n. We will take advantage of this freedom in Section <ref> below. * As a consequence of remarks (1) and (2), the validity of a subelliptic estimate of order s at p is invariant under local biholomorphisms. In slightly imprecise terms, it solely depends on the CR geometry of the germ (b Ω, p). §.§ The sharp order of subellipticity. Previous work Our focus in the present paper is on the best possible order of a subelliptic estimate. It is convenient to give the following definition. [Sharp order of subellipticity] Given a germ of pseudoconvex domain (Ω,p) with smooth boundary (where p∈∂Ω), we define the sharp order of subellipticity ass(Ω, p):=sup{s>0 (<ref>) holds for some choice of C and V}.Notice that the set of exponents s≥ 0 for which a subelliptic estimate of order s holds at p is either [0,s(Ω, p)] or [0,s(Ω, p)), and there are examples of the latter possibility in dimension three or higher (see <cit.>). If Ω is strongly pseudoconvex at p, then s(Ω, p)= 1/2 (and the sup is achieved) <cit.>. The invariant s(Ω, p) is completely understood in dimension 2. If Ω⊆^2, thens(Ω, p)=1/2m,where 2m is an even integer, or possibly +∞ (in which case no subelliptic estimate holds), admitting various equivalent descriptions, e.g., it equals the maximum order of contact of smooth complex analytic curves with b Ω at p, and it also equals the minimum number of iterated commutators of horizontal vector fields required to span the tangent space of b Ω at p, where horizontal refers to the natural CR structure on the boundary. This result is due to Kohn <cit.> and Greiner <cit.>. See also <cit.>. In dimension three or higher, the present understanding of the invariant s(Ω, p) is much more limited. 
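To fix ideas in the two-dimensional case: for the model domains Ω_m={(z_1,z_2)∈^2  (z_2)>|z_1|^2m}, with m≥1 an integer, the results of Kohn and Greiner just cited give s(Ω_m,0)=1/2m. Indeed, the smooth complex curve z_1↦(z_1,0) has contact of order exactly 2m with bΩ_m at the origin, and no smooth complex curve has higher order of contact; for m=1 one recovers the Siegel upper half-space, which is biholomorphic to the unit ball, and the strongly pseudoconvex value 1/2 recalled above.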
The basic qualitative problem of deciding whether s(Ω,p) is positive, that is, whether a subelliptic estimate holds at p, has been settled by Catlin <cit.>: we haves(Ω, p)>0 if and only ifΔ^1(b Ω, p)<+∞,where Δ^1(b Ω, p) is the D'Angelo 1-type of (bΩ, p), defined as the maximum order of contact of one-dimensional (possibly singular) complex analytic varieties with ∂Ω at p (see <cit.> for this notion of type). More precisely, Catlin states in <cit.> that s(Ω; p)≥1/Δ^1(bΩ; p)^n^2Δ^1(bΩ; p)^n^2,while <cit.> contains a neater upper bound asserting that s(Ω; p) is at most the reciprocal of a different "1-type" invariant, which is always larger than Δ^1(b Ω; p) (see Theorem <ref> below for a statement of this upper bound).A few years before Catlin's work, Kohn developed a very different approach to the subellipticity problem. In<cit.>, he introduced subelliptic multiplier ideals, namely ideals I_s(p) consisting of germs f of smooth functions at p having the property that ‖ f u‖_s^2≤ C( ‖ u‖^2 + ‖^*u‖^2+‖ u‖^2) for the same space of forms u as in (<ref>). He devised a number of procedures of an algebraic nature allowing to pass from an element f∈ I_s(p) to a related element g∈ I_s'(p), where s' is smaller (in a controlled way) than s. If by a suitable sequence of such procedures one is able to obtain that 1∈ I_s(p), then the inequality s(Ω,p)≥ s is established. This approach to the subellipticity problem turns out to be successful when the boundary of Ω is real-analytic near p (thanks to a result of Diederich–Fornæss <cit.>): in this case, s(Ω, p)>0 if and only if the boundary does not contain any germ of positive dimensional complex analytic variety near p, a condition that, in the real analytic case, is equivalent to the finiteness of Δ^1(bΩ; p). Unfortunately, the problem of generalizing Kohn's multiplier ideal approach beyond the real-analytic category appears to be very difficult, and it remains an open question whether a suitable variant of Kohn's method may be used to prove (<ref>). The quantitative lower bounds on s(Ω; p) (in terms of geometric invariants of the germ of ∂Ω at p) that can be extracted from the results of Kohn and Catlin are quite poor. Indeed, there is a substantial gap between Catlin's upper and lower bounds, while Kohn's original approach fails completely to produce any lower bound on s(Ω; p) depending only on the dimension and the D'Angelo 1-type (see, e.g., <cit.>). The problem of obtaining such estimates via a refinement of Kohn's multiplier ideal method has been revived recently by works of Siu <cit.> and Kim–Zaitsev <cit.>, who in particular solved it for the important class of so-called special domainsΩ={(z,z_n+1)∈^n+1 (z_n+1)>|F(z)|^2}⊂^n+1,where F:^n→^m is holomorphic. The lower bound obtained by Kim–Zaitsev is of the form s(Ω; 0)≥ e^-C_n Δ^1(bΩ; 0)^C_n (see <cit.> and the remark that follows). The exponential lower bounds on the sharp order of subellipticity in terms of the D'Angelo 1-type obtained by Catlin (and by Kim–Zaitsev in the case of special domains) are of course very far from being sharp in dimension 2 (cf. the results by Kohn and Greiner cited above), and this is expected to be the case in higher dimensions too. In fact, D'Angelo conjectured <cit.> that s(Ω; p)≥1/B^1(bΩ; p), where B^1(bΩ; p) is a further notion of 1-type taking integer values in the interval [Δ^1(bΩ; p), 2^2-nΔ^1(bΩ; p)^n-1] (the conjecture has been verified by <cit.> for "regular coordinate domains"). 
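A simple family illustrating the class of special domains (<ref>) and the invariant Δ^1 is the diagonal one: F(z)=(z_1^m_1,…,z_n^m_n) with integer exponents m_j≥1, so that |F(z)|^2=∑_j|z_j|^2m_j. In this case Δ^1(bΩ;0)=2max_j m_j: the contact order 2max_j m_j is realized by the complex line along the coordinate axis carrying the largest exponent, and one checks that no (possibly singular) holomorphic curve γ does better, since ∑_j|γ_j(t)|^2m_j vanishes at t=0 to order at most 2(max_j m_j)·ord(γ). Even in such decoupled examples, the general-purpose lower bounds recalled above are exponentially far from the reciprocal 1/Δ^1(bΩ;0) of the type.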
§.§ Content of the paper In view of the above discussion, it would be desirable to have a technique to establish subelliptic estimates, alternative to both Catlin's potential theoretic approach and Kohn's multiplier ideal machinery, capable of capturing more precise information about the invariant s(Ω; p). Motivated by these considerations, we introduce below a new method, based on a -uncertainty principle, in the study of subellipticity of the -Neumann problem. As an application of our method, we determine the sharp order of subellipticity at the origin for a class of "homogeneous special domains" of dimension ≤ 4 (Theorem <ref>). Here "homogeneity" refers to scaling, not to a transitive group action, see below for precise definitions. A satisfactory aspect of this result is that the domains to which it applies exhibit a rich geometry, e.g., the D'Angelo type may fail to be upper semicontinuous, which is a well-known difficulty appearing in dimension three and higher. We believe that our method allows to establish quite precise bounds for the sharp order of subellipticity of more general domains than those covered by Theorem <ref>, but this we have to leave to future investigations. Let us discuss a bit more precisely the content of the paper.Given a rigid domain Ω={(z,z_n+1)∈^n+1 (z_n+1)>φ(z)}⊂^n+1,where φ:^n→ is a (germ of) smooth plurisubharmonic function vanishing at zero, in Section <ref> we reduce the problem of bounding from below s(Ω; 0) to that of proving certain spectral gap estimates for a family of quadratic forms 𝐄^ξφ defined on test functions on ^n and depending on φ and a "semiclassical" parameter ξ→+∞ (Proposition <ref>). In Section <ref>, we elaborate a ∂̅-uncertainty principle originally introduced in <cit.> (Lemma <ref>). We combine it with a sublevel set estimate for one-variable complex polynomials (Lemma <ref>), and deduce a local one-dimensional estimate designed to analyze the quadratic forms 𝐄^ξφ in the case where φ=|F|^2 with F:^n→^m a polynomial mapping (Proposition <ref>). In order to pass from the local one-dimensional estimates of Section <ref> to the n-dimensional spectral gap estimates appearing in Proposition <ref> and needed for subellipticity, some geometric information about the polynomial mapping F has to be used. A notion that appears to be relevant here is that of an approximate minimal eigenvector field for φ, which we discuss in Section <ref>, highlighting its relation with various existing notions of "type". Finally, in Section <ref> we apply the theory developed in the preceding sections to the proof of the already cited Theorem <ref>. The reader may jump right away to Section <ref> for the definition of the relevant class of domains and a statement of the result. A few additional comments may help to clarify the scope of our method. First of all, we only concern ourselves with rigid domains, since translation invariance in (z_n+1) allows one to exploit Fourier analysis, which shows that (<ref>) is equivalent to a certain one-parameter family of estimates. Here the parameter is of course the frequency variable "dual" to (z_n+1). After some rather standard manipulations, this family of estimates produces a sufficient condition for subellipticity involving the quadratic forms 𝐄^ξφ, that is, Proposition <ref>. 
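In schematic terms: decomposing u into pure frequencies e^iξ t in the invariant variable t=(z_n+1), each tangential Cauchy–Riemann operator acts on the frequency-ξ component as a weighted operator of the form e^∓ξφ∘∂∘ e^±ξφ, and the weight e^-2ξφ built into 𝐄^ξφ arises precisely from this conjugation; see Section <ref> for the precise statements.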
Secondly, while a ∂̅-uncertainty principle might eventually turn out to be useful for more general rigid domains, it seems to be already highly non-trivial to put together the local information it provides and to obtain the desired spectral gap estimates in the narrower class of special domains (<ref>), whose pseudoconvex geometry reduces to the algebraic geometry of the polynomial mapping F. On the other hand, proving good lower bounds for the sharp order of subellipticity for special domains (e.g., proving D'Angelo's conjecture or something of comparable strength) is already a very desirable (and apparently difficult) goal, and so it seems reasonable to think of the category of special domains as a wide enough arena for our investigations. The natural question is then: what prevents us from obtaining a good (or indeed, any positive) lower bound for s(Ω; 0) when Ω is a general special domain (of D'Angelo finite type), thus completing our program of proving subellipticity via appropriate ∂̅-uncertainty principles? We provide below a couple of remarks in this regard. All the "local hard analysis" of the paper happens in dimension one (and in Section <ref>). The passage from the local one-dimensional analysis to the global n-dimensional one is accomplished in two steps: an argument with the approximate minimal eigenvector fields alluded to above yields a "single scale" n-dimensional estimate, and a scaling argument then allows to deduce the final global result from the single scale estimate. We are able to make the first step work under the assumption that the null space of the Jacobian of F is one-dimensional at critical points outside the origin; the most expedient way to carry out the second step is instead to assume that F is homogeneous (that is, its components are homogeneous polynomials of the same degree). We can check that the two assumptions generically co-exist only in dimension n≤ 3 (hence, for domains in dimension n+1≤ 4). The resulting class of low-dimensional rigid domains {(z_n+1)>|F(z)|^2} is the one that we use as test-bed for the whole approach in Section <ref>. We point out that, despite the low-dimensionality, the domains treated in that section display some of the features that make the subellipticity problem difficult in dimension three of higher, e.g., the lack of semi-continuity of the 1-type of the boundary at the origin. This is probably the most satisfactory aspect of our results. We conclude this introduction by pointing out that, of the two assumptions that we make in Section <ref> (one-dimensionality of the null-space of the Jacobian and homogeneity), the first one is the more serious one. We believe in fact that the second may be relaxed (if not dispensed with), while dealing with higher dimensional null-spaces should require new ideas. §.§ Notation and terminologyWe denote by 𝔻 the open unit disc in the complex plane and by D(z,r) the disc of center z∈ and radius r>0. We use ^+ for the group of positive real numbers and ^× for the group of nonzero complex numbers. The n-dimensional complex projective space is denoted by ℙ^n and the unit sphere in ^n is denoted by ^2n-1. Integrals with no explicit indication of the measure, e.g. ∫_D f, are always w.r.t. Lebesgue measure on the domain of integration D. The approximate inequalities A≲_λ B and B≳_λ A mean that A≤ KB for a constant K that is allowed to depend only on the parameter λ and possibly the ambient dimension n. 
We will also occasionally use the big O notation.The standard Hermitian product and norm in ^n are denoted (·, ·) and |·| respectively. As usual, ∂/∂ z_j=1/2(∂/∂ x_j-i∂/∂ y_j) and ∂/∂z_j=1/2(∂/∂ x_j+i∂/∂ y_j), where z_j=x_j+iy_j are the standard coordinates on ^n. Let U⊆^n be open and let F_1,…, F_m:U→ be holomorphic. Then the real-analytic functionφ(z)=∑_ℓ=1^m|F_ℓ(z)|^2is said to be a Hermitian Sum of Squares (in short, HSOS). Any HSOS is plurisubharmonic (see (<ref>) below). If m=n, we say that φ is a Hermitian Sum of n Squares (HSOnS). Let U⊆^n be an open neighborhood of 0 and let φ:U→ be smooth and plurisubharmonic (plush). We associate to φ the rigid domainΩ= {(z,z_n+1)∈ U× (z_n+1)>φ(z)}⊂^n+1.Plurisubharmonicity of φ is equivalent to pseudoconvexity of Ω (at points of the boundary where z∈ U). We always assume, without loss of generality, that φ(0)=∂φ/∂ z_1(0)=…=∂φ/∂ z_n(0)=0.This assumption of course means that the origin of ^n+1 is on the boundary bΩ of the domain and the tangent space to bΩ at the origin is the real hyperplane of equation (z_n+1)=0.Finally, we use the symbol W^s for L^2-based Sobolev spaces of order s∈.§.§ AcknowledgmentWe would like to thank J. P. D'Angelo and D. Zaitsev for helpful discussions on the subject of this paper.§ SUBELLIPTICITY VIA SPECTRAL GAP ESTIMATESLet U⊆^n be an open set and let φ:U→ be a smooth plurisubharmonic (plush, for brevity) function.Denote byH = (∂^2φ/∂z_j∂ z_k)_j,k=1^nthe Levi form (a.k.a., the complex Hessian) of φ. Plurisubharmonicity of φ amounts to pointwise nonnegativity of the Hermitian matrix-valued function H, that is, (H(z)v,v)=∑_j,k=1^n∂^2φ/∂z_j∂ z_k(z)v_kv_j≥ 0∀ v∈^n,z∈ U.By the spectral theorem, H(z) is unitarily diagonalizable with nonnegative eigenvalues. We denote by λ_j(z) the j-th eigenvalue of H(z) w.r.t. the natural increasing ordering, that is, λ_1(z)≤…≤λ_n(z). Being the zeros of the characteristic polynomial of H(z), which depends continuously on z, the eigenvalues λ_j(z) are continuous in z too. We associate to any smooth plush function φ:^n→ the quadratic form ^φ defined on functions w:^n→ by the formula^φ(w) = ∫_^n|∇^0,1w|^2e^-2φ + 2∫_^nλ_1|w|^2e^-2φ. Here ∇^0,1w =(∂ w/∂z_1, …, ∂ w/∂z_n) is the (0,1)-part of the gradient of w and |∇^0,1w|=√(∑_j=1^n|∂ w/∂z_j|^2) is its Euclidean norm. The precise domain of definition of the quadratic form ^φ will not be an issue, as we will only need to consider ^φ(w) when w is a test function. The definition is extended to vector-valued functions w=(w_1,…, w_n)∈ C^∞_c(^n; ^n) as follows:^φ(w) = ∑_j=1^n^φ(w_j). The main result of this section is the following. [Spectral gap estimates imply subellipticity] Let s≤ 1. Suppose that there exist a positive constant C≥ 1 such that the estimate^ξφ(u)≥ C^-1ξ^2s∫_^n|u|^2e^-2ξφholds for every ξ≥ C and u∈ C_c^∞({|z|<C^-1}). Then the -Neumann problem on the rigid domainΩ= {(z,z_n+1)∈^n+1 (z_n+1)>φ(z)}satisfies a subelliptic estimate of order s at the origin.Before turning to the proof of this proposition, let us discuss the behavior of estimate (<ref>) under scalings. Define, for R>0,_Rw(z) = R^-nw(R^-1z).Notice that the real Jacobian determinant of z↦ Rz equals R^2n, and hence ‖_Rw‖_L^2 = ‖ w‖_L^2 for every R>0.[^φ behaves well under scalings] Define, for R>0,φ_R(z)=φ(R^-1z).Then^φ(w) = R^2 ^φ_R(Dil_Rw).As a consequence, if d>0 and s=1-δ/d, where δ∈ [0,1], then the spectral gap estimate (<ref>) of Proposition <ref> holds if and only if ^R^dφ_R(w)≥ C^-1R^-2δ∫_^n|w|^2e^-R^dφ_Rholds for every R≥ C^1/d and w∈ C^∞_c({|z|≤ C^-1R }). 
First of all, notice that |∇^0,1(_Ru)|^2 = R^-2|_R(∇^0,1u)|^2. Denoting by H_R the Levi form of φ_R, we have H_R(z)=R^-2H(R^-1z). If λ_1, R(z) is the minimal eigenvalue of H_R(z), then R^2λ_1, R(z)=λ_1(R^-1z). Thus, changing variables we get^φ(w)= ∫_^n|∇^(0,1)w|^2e^-2φ+2∫_^nλ_1|w|^2 e^-2φ= ∫_^n |_R(∇^(0,1)w)|^2e^-2φ_R+2∫_^nλ_1(R^-1 z)|_Rw|^2 e^-2φ_R= R^2{∫_^n |∇^(0,1)(_Rw)|^2e^-2φ_R+2∫_^nλ_1,R|_Rw|^2 e^-2φ_R}. This proves (<ref>). The rest of the statement follows by another simple change of variable. Condition (<ref>) is of course tailored for φ homogeneous of degree d, that is, φ(Rz)=R^dφ(z) for every R>0. In this case, in order to show that the -Neumann problem on the rigid domain associated to φ satisfies a subelliptic estimate of order s at the origin, one needs to show that ^φ(w)≳ R^-2δ∫_^n|w|^2e^-φ∀ w∈ C^∞_c({|z|< R }), ∀ R large, where δ=1-sd. The remainder of this section consists of a series of standard manipulations of the subelliptic estimate (<ref>) that finally lead us to the quadratic forms 𝐄^ξφ and the sufficient condition of Proposition <ref>.§.§ Preliminaries on pseudoconvex rigid domainsSince a defining function for Ω is given by r(z,z_n+1) = φ(z)-z_n+1-z_n+1/2i, the n -linearly independent vector fieldsL_j = ∂/∂ z_j + 2i∂φ/∂ z_j∂/∂ z_n+1 (j=1,…, n)restrict to a global frame of the CR bundle T^1,0bΩ:=⊗ TbΩ∩ T^1,0^n+1. Adding L_n+1=-2i∂/∂ z_n+1 to the collection (<ref>), we obtain a global frame of T^1,0^n+1. We equip ^n+1 with the Hermitian metric h defined by the condition that this frame is orthonormal. The operatorsand ^* will be defined with respect to this metric. In the sequel, we identify bΩ⊂^n+1 with ^n× via the global system of coordinatesz=(z_1,…, z_n)∈^n,t=(z_n+1)∈.§.§ A standard reduction to the boundaryThe boundary reduction to be discussed now is well-known and works equally well if the domain Ω is not rigid. We confine our discussion to the rigid case, in order to avoid introducing additional notation that is unnecessary for our purposes. If ω_1=dz_1,…,ω_n=dz_n, ω_n+1=∂ r is the dual frame to {L_j}_1≤ j≤ n+1, a (0,1)-form u can be uniquely represented asu=∑_j=1^n+1u_jω_jand, if u∈ C^∞_c,(0,1)(^n+1), then u_|Ω∈dom(^*) if and only if u_n+1=0 identically on bΩ. We have the formulasu=∑_1≤ j<k≤ n+1(L_ju_k-L_ku_j) ω_j∧ω_k, ^*u=-∑_1≤ j ≤ n+1(L_j+b_j)u_j,where b_j=L_j(logρ) and ρ is the density of the volume form of h w.r.t. Lebesgue measure. In view of the orthonormality of {ω_j}, we have| u|_h^2=∑_1≤ j<k≤ n+1|L_ju_k-L_ku_j|^2.Hence, if u is a smooth (0,1)-form supported on a fixed neighborhood of the origin and u∈dom(^*), thenQ(u) := ‖ u‖^2_L^2+‖^*u‖^2_L^2+‖ u‖^2_L^2≃ ∑_1≤ j<k≤ n+1∫_Ω|L_ju_k-L_ku_j|^2+∫_Ω|∑_1≤ j≤ n+1L_j u_j|^2+∑_1≤ j≤ n+1∫_Ω|u_j|^2 As is well-known, the L^2-Sobolev norm of order 1 of u_n+1 is controlled by Q(u) (see, e.g., <cit.>). Thus,Q(u) ≳Q(u)+‖ u_n+1‖_W^1(Ω)^2≳ ∑_1≤ j<k≤ n∫_Ω|L_ju_k-L_ku_j|^2+∫_Ω|∑_1≤ j≤ nL_j u_j|^2+∑_1≤ j≤ n∫_Ω|u_j|^2,where we are left only with tangential operators and components. Since there are no constraints on u_1,…, u_n, our task can now be reduced to a boundary estimate. In detail, if v=(v_1,…, v_n) is an n-tuple of test functions defined on bΩ, we setQ_b(v)=∑_1≤ j<k≤ n∫_bΩ|L_jv_k-L_kv_j|^2+∫_bΩ|∑_1≤ j≤ nL_j v_j|^2+∑_1≤ j≤ n∫_bΩ|v_j|^2,where integrals are with respect to Lebesgue measure in the boundary coordinates (<ref>). For later reference, we observe thatL_j=∂/∂ z_j+i∂φ/∂ z_j∂/∂ t (j=1,…, n),in these coordinates. 
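For instance, in the model case φ(z)=|z|^2 (the Siegel upper half-space), bΩ is the Heisenberg group and the frame becomes L_j=∂/∂ z_j+iz̄_j∂/∂ t, which is, up to normalization, its standard CR frame; a general rigid boundary is handled by the same formalism, with ∂φ/∂ z_j in place of z̄_j.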
Of course, translation invariance in the t-direction of these operators reflects translation invariance of the rigid domain Ω. [Reduction to a boundary subelliptic estimate] Assume that there exists a neighborhood V of the origin on the boundary bΩ such that Q_b(v)≳‖ v‖_W^s(bΩ)^2=∑_1≤ j≤ n‖ v_j‖_W^s(bΩ)^2holds for every n-tuple v=(v_1,…, v_n) of test functions supported on V. Then the -Neumann problem on Ω satisfies a subelliptic estimate of order s at the origin. Let V be a small neighborhood of the origin in ^n+1 such that V∩ bΩ⊆ V, and let u∈ C^∞_c, (0,1)(V) be such that u_n+1=0 on bΩ. The frame L_j is invariant under translations in z_n+1, and in particular in (z_n+1). Hence, applying (<ref>) to v_h(z,t)=(u_j(z,t+i(φ(z))+h))_j and integrating in h>0, we conclude that (<ref>) controls the tangential Sobolev norm of order s of u_j for j=1,…, n. It is well-known that this implies the conclusion. See, e.g., Section 3 of <cit.>.It is convenient to use the notationE_b(v):=∑_1≤ j<k≤ n∫_bΩ|L_jv_k-L_kv_j|^2+∫_bΩ|∑_1≤ j≤ nL_j v_j|^2,for the "main term" in Q_b.The reader familiar with the tangential Cauchy–Riemann complex may recognize that E_b is the quadratic form of the Kohn Laplacian _b on the CR hypersurface bΩ, for an appropriate choice of metric and background measure. §.§ Two basic identitiesWe think of the n-tuples of test functions v=(v_1,…, v_n) of Proposition <ref>as ^n-valued functions on bΩ≃^n_z×_t. In the next proposition, scalar differential operators act componentwise on v. Also, the n× n matrix-valued function H(z) is lifted to a function on bΩ and acts on ^n-valued functions by pointwise column-vector multiplication. [Basic identities] Let v∈ C^∞_c(bΩ;^n). Then E_b(v) = ∑_1≤ j≤ n∫_bΩ |L_jv|^2-2i∫_bΩ( H∂ v/∂ t,v)= ∑_1≤ j≤ n∫_bΩ |L_jv|^2+2i∫_bΩ((tr(H)I_n-H)∂ v/∂ t,v), where E_b is defined by formula (<ref>) and I_n is the n× n identity matrix. This is a standard commutator computation. Expanding the squares in the formula defining E_b(v) and integrating by parts twice one obtainsE_b(u) = ∑_1≤ j,k≤ n{∫_bΩ |L_jv_k|^2 +∫_bΩ (L_kv_kL_jv_j-L_jv_kL_kv_j)}= ∑_1≤ j,k≤ n{∫_bΩ |L_jv_k|^2 +∫_bΩ (L_kv_kL_jv_j+L_j(L_kv_k)v_j+[L_k, L_j]v_kv_j)}= ∑_1≤ j,k≤ n{∫_bΩ |L_jv_k|^2 +∫_bΩ [L_k, L_j]v_kv_j}. Identity (<ref>) follows from the commutator formula [L_k, L_j]=-2i∂^2φ/∂z_j∂ z_k∂/∂ t. Identity (<ref>) can then be deduced by the further integration by parts identity∫_bΩ|L_jv_k|^2 = ∫_bΩ(|L_j v_k|^2-[L_j,L_j])v_kv_k = ∫_bΩ(|L_j v_k|^2+2i∂^2φ/∂z_j∂ z_j∂ v_k/∂ tv_k). Let δ>0 be small. By assumption (<ref>), in a small enough neighborhood of the origin |∂φ/∂ z_j|^2≤δ/n and the maximal eigenvalue of H is bounded by a constant C. Then, if v∈ C^∞_c(bΩ; ^n) is supported on such a neighborhood, we have |L_jv|^2≥1/2|∂ v/∂z_j|^2-δ/n|∂ v/∂ t|^2 and 2|( H∂ v/∂ t,v)|≤δ|∂ v/∂ t|^2+δ^-1C^2|v|^2. The basic identity (<ref>) then gives Q_b(v) ≳ ∫_bΩ{∑_1≤ j≤ n|∂ v/∂z_j|^2 - 2δ|∂ v/∂ t|^2}≳ ∫_bΩ{|∇_^n v|^2/4 - 2δ|∂ v/∂ t|^2}, where ∇_^n is the ordinary gradient in ^n≡^2n, and the last inequality is obtained by integration by parts. This elementary estimate expresses the microlocal ellipticity of _b in the complement of a conical neighborhood of the characteristic direction span{L_1,…, L_n}^⊥⊂ T^*bΩ. 
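The integration by parts invoked in the last step above amounts to the identity ∑_1≤ j≤ n∫_bΩ|∂ v/∂z_j|^2=1/4∫_bΩ|∇_^nv|^2 for compactly supported v: writing the Wirtinger derivative as 1/2(∂/∂ x_j± i∂/∂ y_j), the mixed terms produced by expanding the square are imaginary parts of the integrals ∫_bΩ∂ v/∂ x_j·∂ v/∂ y_j (with one of the two factors conjugated), and one integration by parts in x_j and one in y_j show that each such integral equals its own conjugate, hence is real and gives no contribution.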
§.§ Fourier analysis If f:bΩ→ is a function, we denote by ℱf:^n×→ its "vertical Fourier transform", that is, its Fourier transform in the t-variable, normalized as follows:ℱf(z,ξ) = 1/√(2π)∫_ f(z,t)e^-iξ tdt(z∈^n, ξ∈).We will not comment on convergence of this and other integrals in what follows, as our functions f will always be smooth and rapidly decaying in t. Applying Plancherel theorem in the variable t, we obtain the identity∫_bΩfg=∫_^n×ℱfℱgfor f,g:bΩ→, where the implicit measures are Lebesgue in (z,t) in the LHS and Lebesgue in (z,ξ) in the RHS.The vertical Fourier transform of a ^n-valued function v=(v_1,…, v_n) is defined componentwise: ℱv(z,ξ):=(ℱv_1(z,ξ), …, ℱv_n(z,ξ)). We denote by 𝒮_vert(bΩ; ^n) the space of smooth functions v:bΩ→^n that are compactly supported in z and Schwartz in t, the latter property meaning that |t|^N|∂^N v/∂ t^N| is bounded for every N∈. By standard facts of Fourier analysis, the vertical Fourier transform is an isomorphism of 𝒮_vert(bΩ; ^n) with inverse v↦ℱv(z,-ξ). If w:^n→^n is a function, we denote by ∇^0,1w (resp. ∇^1,0w) the (0,1)-part (resp. the (1,0)-part) of its gradient, that is, the matrix ∇^0,1w=(∂ w_k/∂z_j)_1≤ j,k≤ n (resp. ∇^1,0w=(∂ w_k/∂ z_j)_1≤ j,k≤ n). The Hilbert–Schmidt norm of this matrix is denoted |∇^0,1w| (resp. |∇^1,0w|), that is, |∇^0,1w|^2=∑_1≤ j,k≤ n|∂ w_k/∂z_j|^2 (resp. |∇^1,0w|^2=∑_1≤ j,k≤ n|∂ w_k/∂ z_j|^2). [Fourier transformed basic identities] Given v∈𝒮_vert(bΩ; ^n), define the one-parameter families of ^n-valued test functionsw_ξ^+(z) = e^ξφ(z)ℱv(z,ξ) w_ξ^-(z) = e^ξφ(z)ℱv(z,-ξ), where ξ∈ and z∈^n. ThenE_b(v)= ∫_{∫_^n |∇^0,1 w_ξ^+|^2e^-2ξφ+2ξ∫_^n( Hw_ξ^+,w_ξ^+) e^-2ξφ}dξ= ∫_{∫_^n | ∇^1,0w_ξ^-|^2e^-2ξφ+2ξ∫_^n((tr(H)I_n-H)w_ξ^-,w_ξ^-) e^-2ξφ}dξ. The operator -i∂/∂ t and multiplication by ξ are conjugated under ℱ, that is,ℱ(-i∂ f/∂ t)(z,ξ) = ξℱf(z,ξ). Hence, L_j=∂/∂ z_j+i∂φ/∂ z_j∂/∂ t is conjugated to∂/∂ z_j-ξ∂φ/∂ z_j=e^ξφ∘∂/∂ z_j∘ e^-ξφ.Similarly, L_j=∂/∂z_j-i∂φ/∂z_j∂/∂ t is conjugated to∂/∂z_j+ξ∂φ/∂z_j=e^-ξφ∘∂/∂z_j∘ e^ξφ. By (<ref>) and Plancherel's identity (<ref>), E_b(v) = ∑_1≤ j≤ n∫_bΩ |L_jv|^2-2i∫_bΩ( H∂ v/∂ t,v)= ∫_{∑_1≤ j≤ n∫_^n|∂(e^ξφℱv)/∂z_j|^2e^-2ξφ+2ξ∫_^n( He^ξφℱv,e^ξφℱv) e^-2ξφ}dξ. This is the first stated identity. Similarly, we exploit the second basic identity (<ref>) to write E_b(v) = ∑_1≤ j≤ n∫_bΩ |L_jv|^2+2i∫_bΩ( (tr(H)I_n-H)∂ v/∂ t,v)= ∫_{∑_1≤ j≤ n∫_^n|∂(e^-ξφℱv)/∂ z_j|^2e^2ξφ-2ξ∫_^n((tr(H)I_n-H)e^-ξφℱv,e^-ξφℱv) e^2ξφ}dξ. Changing variables ξ↔ -ξ, we obtain the second desired identity. §.§ Proof of Proposition <ref> Suppose that v∈𝒮_vert(bΩ;^n) is vertically Fourier-supported on {ξ≥ 0}, that is,ℱv(z,ξ) = 0∀ z∈^nand ξ≤ 0. Then the first identity of Proposition <ref> plus the trivial lower bound H≥λ_1I_n (as quadratic forms) givesE_b(v)= ∫_0^+∞{∫_^n |∇^0,1 w_ξ^+|^2e^-2ξφ+2ξ∫_^n( Hw_ξ^+,w_ξ^+) e^-2ξφ}dξ≥ ∫_0^+∞{∫_^n |∇^0,1 w_ξ^+|^2e^-2ξφ+2ξ∫_^nλ_1|w_ξ^+|^2 e^-2ξφ}dξ. 
If instead v∈𝒮_vert(bΩ;^n) is vertically Fourier-supported on {ξ≤ 0}, that is,ℱv(z,ξ) = 0∀ z∈^nand ξ≥ 0, thenE_b(v)= ∫_0^+∞{∫_^n |∇^1,0 w_ξ^-|^2e^-2ξφ+2ξ∫_^n((tr(H)I_n-H)w_ξ^-,w_ξ^-) e^-2ξφ}dξ≥ ∫_0^+∞{∫_^n |∇^1,0 w_ξ^-|^2e^-2ξφ+2ξ∫_^nλ_1|w_ξ^-|^2 e^-2ξφ}dξ,because the eigenvalues of the Hermitian matrix tr(H)I_n-H areλ_1+…+λ_n-λ_j (1≤ j≤ n),and in particular they are all larger or equal to λ_1+…+λ_n-1≥λ_1.Since |∇^1,0w|=|∇^0,1w|, the lower bounds (<ref>) and (<ref>) can be rewritten as follows: E_b(v) ≥∫_0^+∞^ξφ(w_ξ^+)dξfor v∈𝒮_vert(bΩ; ^n) positively Fourier-supported, and E_b(v) ≥∫_0^+∞^ξφ(w_ξ^-)dξ.for v∈𝒮_vert(bΩ; ^n) negatively Fourier-supported. Recall that w_ξ^+, w_ξ^- are defined as in Proposition <ref> and that ^φ is defined on vector-valued functions by (<ref>). By Proposition <ref>, our goal is to prove that if v=(v_1,…, v_n)∈ C^∞_c(bΩ; ^n) is supported on a small neighborhood V of the origin, thenQ_b(v) ≳‖Λ^s v‖_L^2^2, where Λ^s is the standard pseudodifferential operator with symbol (1+|τ|^2)^s/2. Here τ∈^2n+1 is the dual, or cotangent, variable on ^n×≃^2n+1, and in particular τ_2n+1 is dual to t. From now on, every norm will be an L^2 norm and we will omit subscripts. Fix a test function χ_0(τ) identically equal to one on a large ball centered at 0∈^2n+1. Next, consider the covering of the unit sphere ^2n = {τ∈^2n+1 |τ|=1} given by the upper hemisphere Ω_+={τ∈^2n τ_2n+1>0}, the lower hemisphere Ω_-{τ∈^2n τ_2n+1<0}, and the equatorial strip Ω_eq={τ∈^2n |τ_2n+1|<δ}, where δ is a small positive parameter. We let {σ_+, σ_-, σ_eq}⊂ C^∞(^2n) be a partition of unity subordinate to this covering. Finally, setχ_∙(τ) = (1-χ_0(τ))σ_∙(τ/|τ|)(∙∈{eq, +, -}). Denote by P_∙ the Fourier multiplier operator with symbol χ_∙(τ). We let operators act componentwise on vector-valued functions. In order to prove inequality (<ref>), it is enough to establish the four boundsQ_b(v) ≳‖Λ^s P_∙ v‖^2 (∙∈{0, eq, +,-}) for every v∈ C^∞_c(V; ^n). The case ∙=0 of (<ref>) is trivial, and the case ∙=eq is a standard consequence of microlocal ellipticity in the equatorial conical region where χ_eq is supported. See Remark <ref> (recall that s≤ 1). We are left with the nonelliptic cases ∙=+ and ∙=-. This is where the key assumption (<ref>) is used. If V is contained in {|z|<C^-1}×, and we choose a test function ρ'∈ C^∞_c(^n) supported on {|z|<C^-1} and identically one on V (when thought of as a function on bΩ≃^n×), then ‖Λ^sP_+v‖^2≲‖|∂/∂ t|^sP_+v‖^2≲‖|∂/∂ t|^s(ρ'(z)P_+v)‖^2+‖ v‖^2, where we used the fact that (1+|τ|^2)^s/2≲ |τ_2n+1|^s on the support of (1-χ_0(τ))σ_+(τ/|τ|) for the first bound, and the fact that (1-ρ'(z))P_+ is a smoothing operator for the second (recall that v is supported on V). Here |∂/∂ t|^s denotes the operator with symbol |τ_2n+1|^s. Applying the partial Plancherel identity (<ref>), we find‖|∂/∂ t|^s(ρ'P_+v)‖^2 = ∫_|ξ|^2s{∫_^n|ℱ(ρ'P_+v)|^2}dξ= ∫_|ξ|^2s{∫_^n|ℱ(ρ'P_+v)e^ξφ|^2e^-2ξφ}dξ Notice that since vertical Fourier transform and multiplication by ρ' commute (because ρ' is independent of t), the vertical Fourier support of ρ'(z)P_+v(z,t) is contained in {ξ≥ C} (if the ball where the cut-off function χ_0 is identically one is chosen large enough) and ℱ(ρ'P_+v)e^ξφ is supported on {|z|<C^-1} for every ξ. Hence, by (<ref>) and inequality (<ref>)‖|∂/∂ t|^s(ρ'P_+u)‖^2 ≲ ∫_C^+∞^ξφ(ℱ(ρ'P_+v)e^ξφ)dξ≤E_b(ρ'P_+v)≲Q_b(v), where the last step is a standard commutation argument in pseudodifferential calculus. This completes the proof of the microlocal bound (<ref>) with ∙=+. 
The case ∙=- is entirely analogous. In particular, the reader should not worry about the conjugation appearing in (<ref>), since a valid replacement of (<ref>) in the ∙=- case is|| |∂/∂ t|^s(ρ'P_-u)||^2 = ∫_C^+∞ξ^2s{∫_^n|ℱ(ρ'P_-u)(z,-ξ)e^ξφ|^2e^-2ξφ}dξ. § AN UNCERTAINTY PRINCIPLE FOR THEOPERATOR The goal of this section is to prove a local lower bound for the energy form ^φ in the one-dimensional case, when Δφ is comparable to the modulus squared of a complex polynomial. [Main one-dimensional estimate] Let P(ζ) be a polynomial in one complex variable of degree d. Assume that all the roots of P are in the disc D(0,1/2) and that the leading coefficient of P has modulus A≥ C_d, a positive constant depending only on the degree. Let φ∈ C^2(𝔻; ) and B>0 be such that B^-1|P(ζ)|^2≤Δφ(ζ)≤ B|P(ζ)|^2 ∀ζ∈𝔻. Then∫_𝔻|∂ w/∂z|^2e^-2φ+∫_𝔻Δφ|w|^2e^-2φ≳_d, B A^2/d+1∫_𝔻|w|^2e^-2φ holds for all w∈ C^1(𝔻).Inequality (<ref>) is sharp, as far as the dependence on A is concerned. This may be seen considering φ(ζ)=A^2|ζ|^2d+2, in which case Δφ(ζ)≃_d |P|^2, where P(ζ)=Aζ^d. The LHS of (<ref>), evaluated on a test function w supported on {ϵ≤ |ζ|≤ 2ϵ} and such that ‖ w‖_L^∞≲ 1 and ‖∇ w‖_L^∞≲ϵ^-1, is O(1+A^2ϵ^2d+2), while ∫ |w|^2e^-2φ≳ϵ^2, at least if A^2ϵ^2d+2≲ 1. Optimising in ϵ in this range, we obtain (LHS)/∫ |w|^2e^-2φ≲_d A^2/d+1. Notice that the hypothesis of the proposition is that Δφ is comparable to the modulus squared of a perturbation of the polynomial Aζ^d, so one should think of Proposition <ref> as a stability property of the "local spectral gap" of ^φ. We also remark that the proof for the special case Δφ≃ A^2|ζ|^2d is significantly simpler, e.g. the clustering argument of Section <ref> is not needed. The proof of Proposition <ref> is given in Section <ref>, after a number of preliminary facts are established.§.§ A basic -uncertainty principleWe begin with the following basic formulation of a -uncertainty principle (-UP), appearing in<cit.> as Lemma 12. We reproduce the proof (actually, a slight simplification thereof).Let V:D(z,r)→[0,+∞) be a measurable function and definec:=inf_z'∈ D(z,r)∖ D(z,r/2)V(z'). If w∈ C^1(D(z,r)), then∫_D(z,r)|∂ w/∂z|^2+∫_D(z,r)V|w|^2≳min{c,1/r^2}∫_D(z,r)|w|^2. It is enough to prove (<ref>) in the case z=0 and r=1, that is, on the unit disc 𝔻. The general case follows by scaling. Moreover, we may clearly assume that ∫_𝔻|w|^2=1, and thatV(z)=0|z|<1/2 c1/2≤ |z|<1 . We use two inequalities. The first one is an elementary consequence of Cauchy integral formula: ∫_𝔻|h|^2≲∫_𝔻∖1/2𝔻|h|^2 ∀ h∈𝒪(𝔻)∩ L^2(𝔻). The second one is a Poincaré-type inequality for the operator ∂/∂z:∫_𝔻|w-Bw|^2≲∫_𝔻|∂ w/∂z|^2∀ w∈ C^1(𝔻), where B is the Bergman projection of the unit disc 𝔻, that is, the orthogonal projector of L^2(𝔻) onto the subspace of holomorphic functions 𝒪(𝔻)∩ L^2(𝔻). Inequality (<ref>) is an immediate corollary of the L^2-solvability of the -equation on the unit disc (cf. Theorem 4.3.4 of <cit.>, or Section 5 of <cit.>). We argue by contradiction, assuming that w∈ C^1(𝔻) satisfies ∫_𝔻|w|^2=1 and ∫_𝔻|∂ w/∂z|^2+c∫_𝔻∖1/2𝔻|w|^2≤ amin{c,1}, where a>0 is a small absolute constant. It follows that ∫_𝔻∖1/2𝔻|w|^2≤ a. Moreover, by (<ref>), we have∫_𝔻|w-Bw|^2≲ a. Using (<ref>), we deduce that(∫_𝔻|Bw|^2)^1/2 ≲ (∫_𝔻∖1/2𝔻|Bw|^2)^1/2≤ (∫_𝔻∖1/2𝔻|w-Bw|^2)^1/2 + (∫_𝔻∖1/2𝔻|w|^2)^1/2≲ √(a) For a small enough, the inequalities ‖ Bw‖_L^2(𝔻)≲√(a) and ‖ w-Bw‖_L^2(𝔻)≲√(a) are in contradiction with our normalization ‖ w‖_L^2(𝔻)=1. 
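Two elementary test functions (our illustration, not part of the original proof) show why the rate min{c,1/r^2} is the natural one in the lemma just proved. Take V equal to c on D(z,r)∖ D(z,r/2) and 0 on D(z,r/2). For w≡1 the left-hand side equals (3/4)cπ r^2≲ c∫_{D(z,r)}|w|^2, so no bound better than ≃ c can hold. For w(ζ)=W((ζ-z)/r) with a fixed W∈ C^∞_c(D(0,1/2)), the potential term vanishes and
\[
\int_{D(z,r)}\Big|\frac{\partial w}{\partial\bar\zeta}\Big|^2
=\int_{\mathbb D}\Big|\frac{\partial W}{\partial\bar\zeta}\Big|^2
=\frac{1}{r^2}\,\frac{\int_{\mathbb D}|\partial W/\partial\bar\zeta|^2}{\int_{\mathbb D}|W|^2}\int_{D(z,r)}|w|^2 ,
\]
so no bound better than ≃ 1/r^2 can hold either, however large c is.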
§.§ A weighted -UP What we actually need in our proof of Proposition <ref> is a generalization of the basic -UP of Lemma <ref>, where Lebesgue measure is multiplied by a factor e^-2φ, that is, we need lower bounds for the one-dimensional quadratic form ∫_D(z,r)|∂ w/∂z|^2e^-2φ+∫_D(z,r)V|w|^2e^-2φ,where φ∈ C^2(D(z,r)) satisfies appropriate assumptions. One may interpret functions w:D(z,r)→ as sections of a trivial holomorphic line bundle over the disc (the -operator being well-defined on sections of a holomorphic line bundle). Then the factor e^-2φ can be interpreted as a metric on this bundle. Holomorphic changes of trivializations transform φ into φ + (G), where G is a holomorphic function. Thus, the relevant properties of φ are those that are invariant under such transformations, i.e., expressible in terms of the Laplacian Δφ, which expresses the curvature of the metric e^-2φ. Notice that the energy (<ref>) is stable under bounded perturbations of φ, and that we know how to bound it in the "flat" case φ=0 by Lemma <ref>. These two facts imply that an uncertainty principle holds whenever the metric has bounded curvature. This is the content of Lemma <ref> below. Let V:D(z,r)→[0,+∞) be a measurable function and definec:=inf_z'∈ D(z,r)∖ D(z,r/2)V(z'). Let φ∈ C^2(D(z,r); ) be such that |Δφ(ζ)|≤ Br^-2 for all ζ∈ D(z,r). Then∫_D(z,r)|∂ w/∂z|^2e^-2φ+∫_D(z,r)V|w|^2e^-2φ≳_B min{c,1/r^2}∫_D(z,r)|w|^2e^-2φ holds for all w∈ C^1(D(z,r)). Under the hypothesis of the Lemma, φ= G+O(B), where G is holomorphic. A proof of this known fact is provided in Appendix <ref>. Applying Lemma <ref> to w e^-G, we get∫_D(z,r)|∂ w/∂z|^2e^-2G+∫_D(z,r)V|w|^2e^-2G≳min{c,1/r^2}∫_D(z,r)|w|^2e^-2G,which immediately yields the thesis. §.§ A Sublevel Set Lemma In order to apply the weighted -UP of Lemma <ref> to V=|P|^2, where P is a complex polynomial, we need some quantitative control on the sublevel sets of P. This is the content of Lemma <ref> below, which in turn exploits the following elementary lemma. Let (X,d) be a finite metric space of cardinality N and let L>0. There there exists a subset S⊆ X and a collection of radii {r_s}_s∈ S satisfying the following: * L≤ r_s≤ 4^N-1L, * if s,s'∈ S are distinct, then d(s,s')> 2r_s+2r_s', * for every x∈ X there is a unique s∈ S such that d(x,s)≤ r_s. By the pigeonhole principle, for every x∈ X we can choose a radius r_x∈ [L, 4^N-1L] such that no point s of X satisfies the bounds r_x< d(x,s)≤ 4r_x. The set S is built applying the following greedy algorithm: * Set initially k=0, S_0:=empty set and X_0:=X. Go to step (2). * If X_k is empty, then halt. Otherwise go to step (3). * Pick a point s∈ X_k with largest possible radius r_s and defineX_k+1:=X_k∖{y∈ X d(y,s)≤ r_s},S_k+1:=S_k∪{s}. * Increase the counter k by one and go back to step (2). We omit the easy verification of the conditions in the statement. Let P(z) be a nonzero polynomial of one complex variable of degree d. Denote by A the absolute value of its leading coefficient. Then there is a subset R of the set of roots of P and a collection of radii {r_ζ}_ζ∈ R with the following properties: * r_ζ≲_d A^-1/d+1, * on the disc D(ζ, 4r_ζ), we have |P(z)|≲_d r_ζ^-1, * on the set ∖(⋃_ζ∈ RD(ζ, 2r_ζ)), we have |P(z)|≳_d A^1/d+1. We argue by induction on the degree, the case d=0 being trivial. If P has degree d≥ 1 and leading coefficient of modulus A, we apply the Clustering Lemma to its set of roots with L=A^-1/d+1. We obtain a subset of roots S and a collection of radii r_s≃_dA^-1/d+1 (s∈ S). 
Denote by 𝒞_s the "cluster at s", namely the set of roots of P at distance at most r_s from s. We also denote by m_ζ the multiplicity of ζ as a root of P and we define d_s:=∑_ζ∈𝒞_sm_ζ. We now argue differently depending on whether there is a single cluster or more than one. Case 1: there is only one cluster. In this case S={s} and all the roots ζ of P are in the disc D(s, r_s). Thus, on the disc D(s,4r_s) we have|P(z)|=A∏_ζ|z-ζ|^m_ζ≲_d A (A^-1/d+1)^d=A^1/d+1≃_d r_s^-1, and in the complement of the disc D(s, 2r_s) we have the reverse inequality |P(z)|≳_d A^1/d+1. Case 2: there are two or more clusters. Since all the roots are contained in ⋃_s∈ SD(s, r_s), on the complement of the set ⋃_s∈ SD(s, 2r_s) we have the lower bound|P(z)|≥ A∏_s∈ S r_s^d_s≃_d A^1/d+1. Let now z∈ D(s, 2r_s). Since D(s, 2r_s) has empty intersection with D(s', 2r_s') for every s'≠ s and all the roots of P in the latter disc (that is, those in the cluster 𝒞_s') are actually contained in D(s',r_s'), it is easily seen that|z-ζ|≃_d |s-s'| ∀ z∈ D(s,2r_s), ζ∈𝒞_s'. Hence, on D(s,2r_s) we have|P(z)|≃_d A∏_s'≠ s |s'-s|^d_s'∏_ζ∈𝒞_s|z-ζ|^m_ζ. Let P_s be the polynomial of degree d_s<d with leading coefficient A_s:=KA∏_s'≠ s |s'-s|^d_s' that has a root of multiplicity m_ζ at every ζ∈𝒞_s. Here K is a positive constant depending only on d to be fixed soon. By what we said above, |P(z)|≃_d|P_s(z)| on the disc D(s, 2r_s). Applying the inductive hypothesis to P_s, we obtain a subset R_s⊆𝒞_s and a collection of radii {r_s,ζ}_ζ∈ R_s. We now show that R=⋃_s∈ SR_s is the desired set of roots of P. Notice that the inequality |s-s'|≳_d A^-1/d+1 (valid for s'∈ S∖{s}) givesr_s, ζ≲_d A_s^-1/d_s+1≲_d K^-1/d_s+1(A∏_s'≠ sA^-d_s'/d+1)^-1/d_s+1=K^-1/d_s+1A^-1/d+1.This verifies property (1) of the statement. Choosing K large enough we may ensure that r_s, ζ≤1/4r_s (recall that r_s is comparable to A^-1/d+1, not only bounded above). This guarantees that, if ζ∈ R_s, then D(ζ, 4r_s,ζ)⊆ D(s, 2r_s), so that the bound |P_s(z)|≲_d r_s,ζ^-1 valid on D(ζ, 4r_s,ζ) can be transferred to |P|. Property (2) is also verified.Finally, on the setD(s,2r_s)∖(⋃_ζ∈ R_sD(ζ, 2r_s,ζ)) we have|P(z)|≃_d|P_s(z)|≳_d A_s^1/d_s+1≳_d A^1/d+1, where the last inequality is the same that we used in (<ref>). Property (3) follows from (<ref>) and the last inequality. §.§ Proof of Proposition <ref> By the Sublevel Set Lemma (Lemma <ref>) we have a set R⊆ D(0,1/2) of cardinality at most d and a collection of radii r_ζ≲_dA^-1/d+1, one for each ζ∈ R. Choosing C_d large enough, we see that the discs D(ζ, 4r_ζ) are contained in the unit disc. By conditions (2) and (3) of the Sublevel Set Lemma, we are in a position to apply the weighted -UP (Lemma <ref>), which yields∫_D(ζ, 4r_ζ)|∂ w/∂z|^2e^-2φ+∫_D(ζ, 4r_ζ)|P|^2|w|^2e^-2φ≳_d, Bmin{r_ζ^-2, A^2/d+1}∫_D(ζ, 4r_ζ)|w|^2e^-2φ. By condition (1), we see that the minimum above is ≃_d A^2/d+1. Summing over ζ∈ R, we get∫_Ω|∂ w/∂z|^2e^-2φ+∫_Ω|P|^2|w|^2e^-2φ≳_d, B A^2/d+1∫_Ω|w|^2e^-2φ, where Ω=⋃_ζ∈ RD(ζ, 2r_ζ). By condition (3) of Lemma <ref> one gets a similar estimate in the complementary region 𝔻∖Ω. The proof is complete.§ APPROXIMATE MINIMAL EIGENVECTOR FIELDS In this section we introduce the notion of approximate minimal eigenvector field for a plush function φ, and prove two basic propositions about it. [Approximate minimal eigenvector field] Let φ be a smooth plush function defined near p∈^n. 
A germ of holomorphic vector field X at p is said to be an approximate minimal eigenvector field for φ at p if X(p)≠0 and there is a positive constant C such that (H(z)X(z), X(z))≤ C λ_1(z)for every z in a neighborhood of p (recall that H is the Levi form of φ defined in (<ref>)). Notice that the inequality (H(z)X(z), X(z))≳λ_1(z) automatically holds near p, so (<ref>) really expresses the fact that X behaves approximately as an eigenvector field of minimal eigenvalue of the Levi form H(z). The key role played by approximate minimal eigenvector fields in our proof of Theorem <ref> below stems from the following observation. Let X be an approximate minimal eigenvector field for φ, and denote by exp_p(ζ X) the solution of the complex ODE ∂/∂ζexp_p(ζ X) = X(exp_p(ζ X))exp_p(0 X)=p,which is defined in a neighborhood of 0∈. The mapping ζ↦exp_p(ζ X) is holomorphic andΔ(φ(exp_p(ζ X)))= (H(exp_p(ζ X))∂/∂ζexp_p(ζ X), ∂/∂ζexp_p(ζ X))= (H(exp_p(ζ X))X(exp_p(ζ X)), X(exp_p(ζ X)))≃ λ_1(exp_p(ζ X)). Loosely speaking, along the analytic disc ζ↦exp_p(ζ X), the plush function φ (or more precisely, the metric e^-2φ) is as flat as possible (cf. also Remark <ref>), a very convenient property in light of the -UPs discussed in the previous section. The next proposition shows that approximate minimal eigenvector fields indeed exist, for an appropriate class of (germs of) plush functions. [Existence of approximate minimal eigenvector fields] Let φ be a HSOnS defined near p∈ U. Assume that the kernel of H(p) is one-dimensional. Then there exists an approximate minimal eigenvector field for φ at p.In order to state our second proposition about approximate minimal eigenvector fields, we need to recall a notion of type introduced in <cit.>. [Catlin's 1-type] We say that the collection of holomorphic mapsΨ={ψ_t:D(0,t)→^n+1}_t∈ (0,1], is a family of analytic discs shrinking to q∈^n+1 iflim_t→ 0ψ_t(0)=q,sup_t‖ψ_t'‖_L^∞(D(0,t))<+∞,inf_t|ψ_t'(0)|>0. If q∈ bΩ, then we define the order of contact T(Ψ) of Ψ={ψ_t}_t∈ (0,1] as the supremum over all m≥ 0 such that dist_bΩ(ψ_t(ζ))=O(t^m) ∀ t∈ (0,1],∀ζ∈ D(0,t). Here dist_bΩ is the Euclidean distance to the boundary (or any other comparable distance). Let Ω⊂^n+1 be a domain with smooth boundary. We define the Catlin's 1-type of bΩ at q as T^1(bΩ; q)=sup T(Ψ), where the supremum is over all families of analytic discs shrinking to q.The definition above consists of an equivalent restatement of properties (i), (ii) and (iii) at p. 148 of <cit.> in the case q=1 and T=(0,1], plus the additional requirement that the centers of the analytic discs converge to a given point q∈^n+1. Notice that there is no loss in considering only (0,1] as set of parameters for the family of discs. The terminology "family of analytic discs shrinking to q" is ours, as is the term "Catlin's 1-type" for the resulting notion of type. Of course, T^1(bΩ; q) only depends on the germ of bΩ at q.[Approx. minimal eigenvector fields vs. Catlin's 1-type] Let φ:U→ be a smooth plush function, where U⊆^n is open, and Ω={(z,z_n+1)∈ U× (z_n+1)>φ(z))} the associated pseudoconvex rigid domain. Let p∈ U and let q=(p, τ+iφ(p)) be a boundary point of Ω "over p". Let X be an approximate minimal eigenvector field for φ at p and m_0∈ be such thatλ_1(exp_p(ζ X))=O(|ζ|^m_0).Thenm_0+2≤ T^1(bΩ; q). The proof of Proposition <ref> is given in Section <ref>, while the proof of Proposition <ref> occupies Section <ref>.§.§ Existence of approximate minimal eigenvector fields Let φ=|F|^2, whereF=(F_1,…, F_n):U→^nis a holomorphic mapping. 
Let J be the complex Jacobian matrix of the mapping F, that is,J_jk(z)=∂ F_j/∂ z_k(z) (1≤ j, k≤ n).We will make use of the following elementary identity for the Levi form of φ (defined in (<ref>)): H_jk:=∂^2φ/∂z_j∂ z_k=∑_ℓ=1^n ∂ F_ℓ/∂ z_j∂ F_ℓ/∂ z_k,that is,H= J^*J,where J^* is the (pointwise) conjugate transpose of J. Notice that J is a square matrix and that H(z)=| J(z)|^2. An equivalent reformulation of (<ref>) is(H(z)v,v) = |J(z)v|^2∀ v∈^n, ∀ z ∈ U.Thus, H(z)= J(z). We need a basic integral formula from perturbation theory. Let A be an n× n matrix with complex coefficients. Denote by E(λ) the generalized eigenspace of A of eigenvalue λ, that is, E(λ) = ⋃_k≥ 1 (λ I_n-A)^k, where I_n is the n× n identity matrix. Let D(z,r) be a disc in the complex plane not containing any of the eigenvalues of A in its boundary. ThenΠ=1/2π i∫_∂ D(z,r) (ζ I_n-A)^-1dζ,where the boundary circle is oriented counterclockwise, is a linear projector onto ⊕_λ∈ D(z,r)E(λ) (see, e.g., <cit.>[formula (1.16) at p. 67]). We want to apply formula (<ref>) to A=J(z), in order to obtain a projection-valued holomorphic function of z. Notice that here we take advantage of the fact that φ is a Hermitian Sum of n Squares, that is, that J(z) is a square matrix. Since J(p) has one-dimensional kernel, it has at most one Jordan block corresponding to the zero eigenvalue, which a priori could be of size >1. To avoid this, we use a simple trick. Let M be a n× n unitary matrix such that the kernel and the range of MJ(p) have trivial intersection. Such a unitary matrix exists, because the two subspaces have complementary dimensions and the modification J(p)↦ MJ(p) rotates the range without affecting the kernel. As a result, MJ(p) induces an invertible linear map on its range, and therefore the Jordan block of MJ(p) corresponding to the zero eigenvalue must be of size 1. Equivalently, the characteristic polynomial of MJ(p) has a simple zero at p. Consider then the mapping F(z)=M(F(z)), whose Jacobian is J(z)=MJ(z). By continuity and our choice of M, there is ε>0 and a neighborhood V⊆ U of p such that, for every z∈ V, the matrix MJ(z) has only one eigenvalue α(z) of multiplicity 1 such that |α(z)|<ε, and no eigenvalue of modulus =ε. By (<ref>), there is a holomorphic matrix-valued map Π(z) such thatJ(z)Π(z) = α(z)Π(z)∀ z∈ V, and Π(p) is a linear projection onto J(p) =J(p). Let v_0 be a nonzero vector in the kernel of J(p). Then the holomorphic vector fieldX(z):=Π(z)v_0satisfies X(p)=v_0 and(H(z)v(z), v(z)) = |J(z)X(z)|^2 = |J(z)X(z)|^2 = |α(z)|^2 |X(z)|^2,for every z∈ V. Since every eigenvalue of J(z) different from α(z) has modulus >ε, we have| J(z)|^2 = |(J(z))|^2≃_φ |α(z)|^2∀ z∈ V,and since only one eigenvalue of H vanishes at 0, we similarly have H(z)≃_φλ_1(z).Both approximate identities hold throughout V (shrinking it, if needed). Using H(z)=| J(z)|^2, we conclude that (H(z)v(z), v(z))≃_φλ_1(z)|v(z)|^2, as we wanted.§.§ Approximate minimal eigenvector fields vs. Catlin's 1-type We are going to prove a statement slightly more general than Proposition <ref>. To formulate it, we require a definition. Let φ:U→ be a smooth plush function, where U is open and p∈ U. We denote by 𝔥_p(φ) the supremum over all m≥ 0 for which there exists a nonsingular analytic disc ψ:𝔻→ U through p (that is, ψ(0)=p and ψ'(0)≠ 0) such thatΔ(φ∘ψ)(ζ)=O(|ζ|^m). From (<ref>), one sees immediately thatm_0≤𝔥_p(φ),where m_0 is as in the statement of Proposition <ref>. The promised more general statement is the following. [𝔥_p(φ) vs. 
Catlin's 1-type] Let φ:U→ be a smooth plush function and Ω={(z,z_n+1)∈ U× (z_n+1)>φ(z))} the associated pseudoconvex rigid domain. Let p∈ U and let q=(p, τ+iφ(p)) be a boundary point of Ω "over p". Then𝔥_p(φ)+2≤ T^1(bΩ; q). By translation invariance in (z_n+1), we may assume that τ=0. Let ψ:𝔻→ U be a nonsingular analytic disc through p such that (<ref>) holds. Rescaling the ζ-variable, we may assume that |ψ'(ζ)|≃ 1 on the whole disc. Consider the subharmonic function f(ζ)=φ(ψ(ζ))-φ(p). We have‖ f‖_L^∞(D(0,t))≲ t,‖Δ f‖_L^∞(D(0,t))≲ t^m+2t^-2∀ t∈ (0,1]. Hence, Lemma <ref> gives a holomorphic function G_t∈𝒪(D(0,t)) such thatφ∘ψ=(iG_t+iφ(p))+O(t^m+2)on D(0,t)andt||G_t'(ζ)||_L^∞(D(0,0.5t)≲ t+t^m+2≲ t. Subtracting an imaginary constant from G_t, we may assume that (G_t(0))=0. Define ψ_t(ζ):=(ψ(0.5ζ), iG_t(0.5ζ)+iφ(p)) (ζ∈ D(0,t)) By the estimates just established, one sees that Ψ={ψ_t}_t∈ (0,1] is a family of analytic discs shrinking to q (notice that (iG_t(0))=O(t^m+2), so ψ_t(0) converges to q=(p, iφ(p)) as t goes to zero). Finally, evaluating the defining function r(z,z_n+1)=φ(z)-(z_n+1) on the discs we see that the family Ψ has order of contact at least m+2 with the boundary. Hence T^1(bΩ; p+iφ(p))≥ m+2. The conclusion follows taking the supremum over all nonsingular analytic discs through p.§ SHARP SUBELLIPTIC ESTIMATES FOR A CLASS OF HOMOGENEOUS SPECIAL DOMAINS In this section we show how the tools introduced above allow to exactly determine the sharp order of subellipticity for an interesting class of special domains. We begin with the definition of the relevant class of plush functions.Let n and d be positive integers. We define ℋ_n,d as the class of plush functions φ:^n→ satisfying the following properties: * φ is a HSOnS. * φ is -homogeneous of degree 2d, that is,φ(λ z)=|λ|^2dφ(z)∀ z∈^n,∀λ∈^×,where d≥ 1 is an integer. * φ vanishes at the origin and only there. By condition (1), φ=∑_ℓ=1^n|F_ℓ|^2, with each F_ℓ:^n→ holomorphic, while condition (2) is equivalent to saying that each F_ℓ is a polynomial of degree d. Condition (3) ensures that the mapping F=(F_1,…, F_n) descends to a non-constant holomorphic map (or a regular morphism, in the language of algebraic geometry)F:ℙ^n-1→ℙ^n-1.By condition (3), 0∈^n is an isolated point of the variety {F_ℓ(z)=0∀ℓ}, which is known to be equivalent to the finiteness of the D'Angelo type Δ^1(bΩ; 0), whereΩ={(z,z_n+1)∈^n+1 (z_n+1)>φ(z))}is the rigid domain associated to φ. See <cit.>. We call domains of this form homogeneous special domains. The next definition singles out the key quantity associated to the self-mapping F of ℙ^n-1 that dictates which subelliptic estimates at the origin hold on Ω. We recall that the order at 0 of a holomorphic map f:𝔻→^n (or a germ thereof) is the quantityν_0(f) = min{k≥ 1f^(k)(0)≠ 0}if f is nonconstant, while ν_0(f)=+∞ if f is constant. Equivalently, ν_0(f)=sup{m≥ 0|f(ζ)-f(0)|=O(|ζ|^m)}, from which it is apparent that the order is well-defined for every holomorphic f:𝔻→ Y, where Y is a complex manifold.If G:ℙ^n-1→ℙ^n-1 is holomorphic, we denote by 𝔱(G) the maximal order of flatness of G along nonsingular analytic discs, that is,𝔱(G)=sup_ψν_0(G∘ψ),where ψ: 𝔻→ℙ^n-1 is holomorphic and satisfies ψ'(0)≠ 0.We can finally state our sharp subellipticity theorem. Let φ=|F|^2 be a plush function in the class ℋ_n,d, H its Levi form (see (<ref>)), and Ω the associated homogeneous special domain. Assume moreover thatH(p)≤ 1 ∀ p∈^n∖{0}. 
Then the -Neumann problem on Ω satisfies a subelliptic estimate at the origin of order s=(2max{d, 𝔱(F)})^-1 and no better one. The quantity 2max{d, 𝔱(F)} equals the upper semicontinuous envelope of Catlin's type T^1(bΩ; 0) (see (<ref>) below). Hence,s(Ω; 0)=1/T^1(bΩ; 0). Before passing to the proof of the theorem, we give a couple of remarks that may clarify the scope of its statement. * If n=1 or n=2, then the hypothesis (<ref>) is automatically satisfied and 𝔱(F)≤ d. * If n=3, then the hypothesis is generically satisfied in every degree. The invariant 𝔱(F) may exceed the degree d, leading to a more complex behavior of the invariant s(Ω; 0) on homogeneous special domains. * As for the case n=4, we are not aware of any result about the existence or nonexistence of plush functions as in the statement. Anyhow, the question is really about regular morphisms of ℙ^4 into itself and goes beyond the subject matter of the present paper. * If n≥ 5, no φ∈ℋ_n,d satisfies (<ref>) at every point p≠ 0, that is, there is no domain in dimension 6 or higher to which our theorem applies. Proofs of the above facts and more details are in Section <ref>. We prove Theorem <ref> by reducing it to a key estimate (Section <ref>), whose proof is given in Sections <ref> through <ref>.§.§ Reduction to a key estimate Recall the following fundamental theorem of Catlin, which we formulate using the terminology of Definition <ref>. Let Ω⊂^n+1 be smooth and pseudoconvex. Assume that q∈ bΩ is a boundary point and that a subelliptic estimate of order s holds at q. If Ψ is a family of analytic discs shrinking to q with order of contact T(Ψ), then s≤1/T(Ψ). Hence,s(Ω; q)≤1/T^1(bΩ; q).Since s(Ω; q) is lower semicontinuous in q, we are allowed to replace T^1(bΩ; q) in the above estimate with its upper semicontinuous envelope T^1(bΩ; q), i.e., the smallest upper semicontinuous function larger than T^1(bΩ; q): T^1(bΩ; q)=lim_ϵ→ 0sup_|q'-q|<ϵ T^1(bΩ; q').Indeed, one may directly see that the resulting inequalitys(Ω; q)≤1/T^1(bΩ; q)follows from Theorem 3 of <cit.>. The following proposition compares the "regularized" Catlin's 1-type T^1(bΩ; 0) with the invariants of Definition <ref> and Definition <ref>, when Ω is a homogeneous special domain. [𝔥_p(φ) vs. Catlin's 1-type on homogeneous special domains] Let φ∈ℋ_n,d. We have2max{d, 𝔱(F)}=sup_p∈^n𝔥_p(φ)+2≤T^1(bΩ; 0). By homogeneity, sup_p∈ B(0,ϵ)𝔥_p(φ) = sup_p∈^n𝔥_p(φ) for every ϵ>0. Thus, the inequality follows from Proposition <ref>. If ψ is a nonsingular analytic disc in ^n, thenΔ(φ∘ψ) = ∑_ℓ=1^n|∂ (F_ℓ∘ψ)/∂ζ|^2.It follows that the largest m such that Δ(φ∘ψ)=O(|ζ|^m) equals 2ν_0(F∘ψ)-2. If the disc is through the origin, then ν_0(F∘ψ)=d. If ψ'(0) is parallel to ψ(0), then ν_0(F∘ψ)=1. Thus, it is enough to show thatsup_ψν_0(F∘ψ) = 𝔱(F),where the supremum is over all analytic discs ψ such that ψ(0)≠ 0 and ψ'(0)∉ψ(0). Denoting π:^n∖{0}→ℙ^n-1 the canonical projection, it is clear that ν_0(F∘ψ)≤ν_0(π∘ F∘ψ)=ν_0(F∘ψ), where ψ= π∘ψ. On the other hand, we claim that every nonsingular analytic disc ψ in ℙ^n-1 has a lift ψ in ^n such that ν_0(F∘ψ)=ν_0(F∘ψ). To verify the claim, one may assume without loss of generality that ψ(0)=(0,…, 0,1) and F(ψ(0,…, 0,1))=(0,…, 0,1) and work in local coordinates(w,ζ):=(w_1,…, w_n-1, ζ)=(z_1z_n^-1, …, z_n-1z_n^-1, z_n)near that point, so that π becomes the projection onto the first n-1 components. The mapping F takes the form (w,ζ)↦ (Q(w), ζ^dR(w)), where Q and R are holomorphic. 
If the nonsingular analytic disc ψ is expressed as w(t) in coordinates, then it is enough to set ψ(t)=(w(t), R(w(t))^-1/d).We now state the key inequality needed for the proof of Theorem <ref>. Let φ∈ℋ_n,d. * For every C, there exists ϵ>0 such that^φ(w)≥ϵ∫_{|z|≤ C} |w|^2e^-2φ∀ w∈ C^∞_c(^n). * Let p∈^n∖{0} be such thatH(p)≤ 1. Then there exists C<+∞ and an ^+-invariant neighborhood V of p such that^φ(w)≥ C^-1∫_V∩{|z|≥ C} |z|^2(2d/𝔥_p(φ)+2-1)|w|^2e^-2φfor every w∈ C^∞_c(^n).Before delving into the proof of Lemma <ref>, let us show how it implies Theorem <ref>.By part (2) of Lemma <ref>, every p∈^2n-1 admits an ^+-invariant open neighborhood V_p and a constant C_p>0 such that^φ(w)≥ C_p^-1∫_V_p∩{|z|≥ C_p} |z|^2(2d/𝔥_p(φ)+2-1)|w|^2e^-2φ∀ w∈ C^∞_c(^n). A compactness argument gives then a constant C such that^φ(w)≥ C^-1∫_|z|≥ C |z|^-2(1-2ds)|w|^2e^-2φ∀ w∈ C^∞_c(^n), where s=1/sup_p𝔥_p(φ)+2. Combining this estimate and part (1) of Lemma <ref>, we see that the sufficient condition (<ref>) with this value of s is satisfied. Thus,s(Ω; 0)≥1/sup_p𝔥_p(φ)+2. By Catlin's necessary condition and Proposition <ref>, we have the chain of estimatesT^1(bΩ; 0) ≤1/s(Ω; 0)≤sup_p∈^n𝔥_p(φ)+2=2max{d, 𝔱(F)}≤T^1(bΩ; 0). The proof of Theorem <ref> is complete. We now turn to the proof of Lemma <ref>.§.§ Proof of the local energy boundWe now prove item (1) in Lemma <ref>, which is a simple application of the basic -UP (Lemma <ref>). Pick p∈^2n-1 such that λ_1(p)>0. By continuity and -homogeneity there exists a small constant ϵ>0 such that λ_1 does not vanish onD':={z∈^n 1/2≤ |(z,p)|≤ 1,|z-(z,p)p|≤ϵ}. Consider the setD:={z∈^n |(z,p)|≤ 1,|z-(z,p)p|≤ϵ}.Cutting D into discs parallel to p and applying the basic -UP to each disc, one gets∫_D|∇^0,1w|^2+∫_Dλ_1 |w|^2≳∫_D|w|^2≥∫_|z|≤ϵ|w|^2.Applying this estimate to w(ϵ^-1 Cz), exploting the homogeneity of λ_1 one finds that∫_ϵ^-1CD|∇^0,1w|^2+∫_ϵ^-1CDλ_1 |w|^2≳_C ∫_|z|≤ C|w|^2.Since e^-2φ≃_C 1 on ϵ^-1CD and {|z|≤ C}, we finally obtain (<ref>). Notice that all we used about φ for the local energy bound is its -homogeneity.Let us move to the proof of part (2) of Lemma <ref>.§.§ Preparation The case H(p)= 0 is trivial. In fact, 𝔥_p(φ)=0 in this case and the conclusion follows from the fact that λ_1(z)≳ |z|^2d-2 on an ^+-invariant neighborhood of p. From now on, we assume that H(p)=1. By Proposition <ref>, there exists an approximate minimal eigenvector X defined on an open neighborhood W of p. Let B be a small -codimension one ball perpendicular to X(p), that is, B={p+a a⊥ X(p), |a|<ε}.Denote by Θ(z';ζ) the solution of the complex ODE∂Θ/∂ζ(z';ζ) = X(Θ(z';ζ))Θ(z';0)=z'∈ B . If B and τ>0 are small enough, we obtain a holomorphic mappingΘ: B× D(0,τ)→ W.The transversality of ∂Θ/∂ζ(p;0)=X(p) and B ensures, shrinking B and τ enough, that Θ induces a biholomorphism onto its (open) image. We also shrink W so that Θ itself is a biholomorphism. In other words, what we did is foliating a neighborhood of p with analytic discs whose complex tangent vectors are approximate minimal eigenvectors of H. We now consider the restriction of the plurisubharmonic function φ to these analytic discs. The same computation as in (<ref>) givesΔ_ζ (φ∘Θ(z'; ζ)) ≃λ_1(Θ(z'; ζ)).Since H(p)=1, we have λ_1(z)≃ H(z)=| J(z)|^2 near p. Hence, λ_1(Θ(z'; ζ)) is comparable to the modulus squared of the holomorphic function𝒥(z'; ζ):= J(Θ(z'; ζ)).Notice that 𝒥(p;0)=0. 
By the Weierstrass Preparation Theorem we have two cases to consider: (a) ζ↦𝒥(p;ζ) vanishes identically on D(0,τ); (b) 𝒥(z';ζ) equals, modulo a unit of the ring of germs of holomorphic functions at (p;0), a Weierstrass polynomialζ^m+∑_k=1^m c_k(z')ζ^m-k, where each c_j(z') is a holomorphic germ vanishing at p. In case (a), J is nontrivial on the whole range of the analytic disc ζ↦Θ(p;ζ). Since ∂Θ/∂ζ(z';ζ) lies in the kernel of J(Θ(z';ζ)) for every point at which the latter is nontrivial, the chain rule shows that F∘Θ(p;ζ) is independent of ζ. (Viceversa, it is easily seen that if F is constant on the disc through p then necessarily 𝒥(p;ζ) vanishes identically in ζ.) Thus, case (a) above may be ruled out because F has finite fibers, as a consequence of Definition <ref>.Notice that the degree m≥ 1 of the Weierstrass polynomial clearly satisfies (cf. Definition <ref>) 2m ≤𝔥_p(φ) (recall (<ref>)). Let us record in a proposition what we obtained so far. [Convenient foliation of a neighborhood of p] Under the assumptions of Lemma <ref> there is a biholomorphismΘ: B^n-1×𝔻→ W,where B^n-1⊆^n-1 is the unit ball, such that Θ(0;0)=p and Δ_ζ (φ∘Θ(z'; ζ)) ≃λ_1(Θ(z'; ζ))≃ |ζ^m+∑_k=1^m c_k(z')ζ^m-k|^2.The coefficients c_k are holomorphic functions on B^n-1 that vanish at z'=0, and2m≤𝔥_p(φ). Notice how we rescaled the small ball B and the disc D(0,τ) to unit size, as we may (at the cost of modifying the implicit constants).§.§ Scaling We now exploit the fact that the complex Hessian H(z) is homogeneous of degree 2d-2:(H(Rz)v,v) = R^2d-2(H(z)v,v)∀ R>0, z∈^n,v∈^n,and that the eigenvalues λ_j(z) are -homogeneous functions of degree 2d-2. By homogeneity, we may assume without loss of generality that the point p in the second item of Lemma <ref> has unit distance from the origin, i.e., p∈^2n-1. Let Θ:B^n-1×𝔻→ W be the biholomorphism and letP_z'(ζ)=ζ^m+∑_k=1^mc_k(z')ζ^m-kbe the Weierstrass polynomial of Proposition <ref>. We may assume that every root of P_z' is contained in D(0,1/2) and that W⊆{1/2< |z|< 3/2}. This is easily achieved by a suitable change of variables in z' and ζ. We choose γ>1 and an open spherical cap K⊂ W∩ S^2n-1 centered at p with the property that{t q 1≤ t< γ, q∈ K }⊆ W.The collection of open sets {W_k:=γ^kW}_k≥ 0 has bounded overlapping and covers V:={tq t≥ 1,q∈ K}, that is,1_V≤∑_k≥ 0 1_W_k≲ 1. The next thing we are going to do is rescale the biholomorphism Θ to adapt it to the sets W_k. LetΘ_k(z';ζ):=γ^kΘ(z';ζ) (k∈),which defines a biholomorphism of B^n-1×𝔻 onto W_k. We have the simple change of variables identity∫_W_k g dℋ_2n≃γ^2kn∫_B^n-1(∫_𝔻 g∘Θ_k(z';ζ) dℋ_2(ζ))dℋ_2n-2(z'),for every g:W_k→ (say, nonnegative and measurable). Here ℋ_m is the m-dimensional (Lebesgue, or Hausdorff) measure and the implicit constant is uniform in k. Now that the stage is set, we come to the proof of (<ref>). The local contribution to our energy form coming from W_k is^φ_W_k(w):=∫_W_k |∇^0,1w|^2e^-2φ+2∫_W_kλ_1 |w|^2 e^-2φ (w∈ C^∞_c(^n)).Since Θ_k(z';ζ)=(Θ_k,1(z'; ζ), …, Θ_k,n(z'; ζ)) is holomorphic, we have ∂/∂ζ(w∘Θ_k)= ∑_j=1^n∂ w/∂z_j∘Θ_k ∂Θ_k,j/∂ζ,and Cauchy–Schwarz gives|∂/∂ζ(w∘Θ_k)|≲γ^k |∇^0,1w|∘Θ_k. 
Hence, changing variables with (<ref>), we get^φ_W_k(w) ≃ γ^2kn∫_B^n-1∫_𝔻 |∇^0,1w|^2∘Θ_k e^-2φ∘Θ_k+γ^2kn∫_B^n-1∫_𝔻λ_1∘Θ_k |w∘Θ_k|^2 e^-2φ∘Θ_k≳ γ^2k(n-1)∫_B^n-1{∫_𝔻|∂/∂ζ(w∘Θ_k)|^2e^-2φ∘Θ_k+∫_𝔻γ^2kλ_1∘Θ_k |w∘Θ_k|^2 e^-2φ∘Θ_k}= γ^2k(n-1)∫_B^n-1{∫_𝔻|∂/∂ζ(w∘Θ_k)|^2e^-2γ^2kdφ∘Θ+∫_𝔻γ^2kdλ_1∘Θ |w∘Θ_k|^2 e^-2γ^2kdφ∘Θ}≃ γ^2k(n-1)∫_B^n-1{∫_𝔻|∂/∂ζ(w∘Θ_k)|^2e^-2γ^2kdφ∘Θ+∫_𝔻Δ_ζ(γ^2kdφ∘Θ) |w∘Θ_k|^2 e^-2γ^2kdφ∘Θ},where in the third line we used the homogeneity of φ and λ_1, and in the last step we used (<ref>). We are now in a position to apply the Main one-dimensional estimate (Proposition <ref>). Since the leading coefficient of the rescaled Weierstrass polynomial is A=γ^kd, we get∫_𝔻|∂/∂ζ(w∘Θ_k)|^2e^-2γ^2kdφ∘Θ+∫_𝔻Δ_ζ(γ^2kdφ∘Θ) |w∘Θ_k|^2 e^-2γ^2kdφ∘Θ≳γ^2kd/m+1∫_𝔻|w∘Θ_k|^2 e^-2γ^2kdφ∘Θ,if k≥ k_0 is large enough. Plugging this into the lower bound on ^φ_W_k(w), we get ^φ_W_k(w)≳γ^2k(d/m+1-1)γ^2kn∫_B^n-1{∫_𝔻|w∘Θ_k|^2 e^-2φ∘Θ_k}≃γ^2k(d/m+1-1)∫_W_k|w|^2e^-2φ,again for k≥ k_0. Since |z|≃γ^k on W_k, summing over k≥ k_0, and recalling (<ref>) and (<ref>), we finally obtain (<ref>). This completes the proof of Theorem <ref>. § EXAMPLESLet φ∈ℋ_n,d, F and F be as in Section <ref>. Let p∈^n∖{0} and π(p)∈ℙ^n-1 its projection. It is easy to see that the differentials of the mappings F:^n→^n and F:ℙ^n-1→ℙ^n-1 at p and π(p) respectively have kernels of the same dimension. Thus, hypothesis (<ref>) in Theorem <ref> can be reformulated asdF(q)≤ 1∀ q∈ℙ^n-1.§.§ Case n=1The application of Theorem <ref> to this case is particularly straightforward. Indeed, there is (up to scalar multiples) only oneφ∈ℋ_1,d for every d, and F is the identity map of the empty space ℙ^0. We obtain the following corollary (already covered by the much more general result of Kohn and Greiner).The special homogeneous domainΩ={(z_1,z_2)∈^2 (z_2)>|z_1|^2d} satisfies a subelliptic estimate of order 1/2d at the origin, and no better one.Notice that in this case the only nontrivial part of the crucial Lemma <ref> is the local estimate, and all we need of the machinery developed in the paper is the basic -UP. §.§ Case n=2In this case F is a holomorphic self-map of ℙ^1. It is obvious that dF_q≤ 1 for every q∈ℙ^1, and it is easy to see that 𝔱(F)≤ d. Indeed, this amounts to the fact that F is a branched cover of degree d, as may be seen representing the map in appropriate local parameters. Thus, Theorem <ref> gives the following corollary.Let F_1(z_1,z_2),F_2(z_1,z_2) be homogeneous polynomials of degree d≥ 1 without common zeros outside the origin. Then the special homogeneous domainΩ={(z_1,z_2,z_3)∈^3 (z_3)>|F_1(z_1,z_2)|^2+|F_2(z_1,z_2)|^2} satisfies a subelliptic estimate of order 1/2d at the origin, and no better one. §.§ Case n=3Things get more subtle and interesting when n=3. Let us first show that examples exist in abundance.Let d≥ 1. Then the generic holomorphic mapping F:ℙ^2→ℙ^2 of degree d satisfies (<ref>). Proofs of related statements appear in the literature (see e.g. <cit.>), as was pointed out to us by J. Silverman on MathOverFlow. We provide details for completeness. Let N(d) be the dimension of the vector space of homogeneous polynomials of degree d in three variables. Let v_d:ℙ^2→ℙ^N(d)-1 be the Veronese embedding [(z_1,z_2,z_3)]↦ [(z_1^α_1z_2^α_2z_3^α_3)_α_1+α_2+α_3=d]. For every F:ℙ^2→ℙ^2 of degree d, let L be the projection of ℙ^N(d)-1 onto ℙ^2 induced by the 3× N(d) matrix of coefficients of F. Then F = L∘ v_d. 
Applying Theorem 1.1 of <cit.> to S=v_d(ℙ^2), one gets that the ramification curve on S of a general projection of S to ℙ^2 is smooth. The variety Z={(dF)=0} of critical points of F is the preimage of such ramification curve via v_d. Since v_d is an isomorphism onto its image, Z is generically smooth. To conclude, it is enough to observe that any point q such that dF_q≥ 2 is a singular point of Z. This follows from Jacobi's formula for the differential of the determinantd(A)=tr(adj(A)dA) (A∈^n× n), where adj(A) is the transpose of the cofactor matrix, which is the zero matrix if A≥ 2. Next, we notice that the invariant 𝔱(F) of a holomorphic self-map of ℙ^2 need not be ≤ d, as the following example shows. Consider the mapF(z)=(z_1^2+z_2z_3, z_2^2+z_1z_3, z_3^2),whose Jacobian matrix is J(z) = [ 2z_1z_3z_2;z_3 2z_2z_1;00 2z_3 ] = 2z_3(4z_1z_2-z_3^2)So the ramification locus of F is the union of a line and a conic. It is an easy computation to check that the rank of J(z) is 2 at every point of the ramification locus. Hence, our mapping satisfies the hypothesis (<ref>) of Theorem <ref>. By direct jet computation, one may show the following. If p=[(z_1,z_2,z_3)] belongs to the ramification locus of F and it is different from[(1,0,0)], [(0,1,0)], [(1,1,2)], [(1,ξ,2ξ^2)], [(1,ξ^2,2ξ)]with ξ a primitive 3rd rooth of unit, then for every nonsingular analytic discs ψ with ψ(0)=p we have ν_0(F̃∘ψ)≤2 and equality is attained.If p is [(1,1,2)], [(1,ξ,2ξ^2)], or [(1,ξ^2,2ξ)] (which lie on the conic, but not on the line), then for every nonsingular analytic disc ψ with ψ(0)=p we have ν_0(F∘ψ)≤3 and equality is attained. For example, consider ψ(ζ)=(1+ξζ,ξ^2-ζ-ξζ^2/2,2ξ) and set ψ=π∘ψ.Finally, if p is [(1,0,0)] or [(0,1,0)] (which are the intersection points of the line and the conic), then for every nonsingular analytic disc ψ for which ψ(0)=p, we have ν_0(F∘ψ)≤4 and equality is attained. For example, consider ψ(ζ)=(1+ζ^3/2,ζ,-ζ^2), thenF∘ψ(ζ)=(1+ζ^4/6,-ζ^5/2,ζ^4)that is F has order 4 along a nonsingular curve through the point (1,0,0), hence setting ψ=π∘ψ, we obtain the claimed equality.Therefore, we have 𝔱(F)=4 and Theorem <ref> allows us to conclude thats(Ω;0)=1/8whereΩ={(z,z_n+1)∈^n+1 z_n+1>|z_1^2+z_2z_3|^2+|z_2^2+z_1z_3|^2+|z_3|^2} .§.§ Case n≥ 5Let's prove that Theorem <ref> is vacuous in dimension n≥ 5. This is a fact in algebraic geometry, whose proof we provide for completeness. Let n-1≥ 4. Every regular morphism F:ℙ^n-1→ℙ^n-1 of degree d≥ 2 admits a point q where dF(q)≥ 2. Let Y_2 be the subvariety of the space of square matrices ^n × n given by the condition rk M≤ n-2. We claim that its codimension is 4. To see this, consider the complex manifold Z_2={(M,V)∈^n × n×Gr_n-2,n : Mv∈ V∀ v∈^n } where Gr_r,k is the Grassmannian of r-planes in ^k. Notice that Z_2 is a vector bundle on Gr_n-2,n with fibers of complex dimension n(n-2). Since the dimension of Gr_n-2,n is 2(n-2), we have _ Z_2=n^2-4. Since Z_2 is a desingularization of Y_2, it must have the same dimension n^2-4. This proves the claim. Clearly, the projectivization ℙ(Y_2) in ℙ^n^2-1 has also codimension 4. Given F:^n→^n homogeneous of degree d≥ 2, its Jacobian J_F = (∂ F_j/∂ z_k)_j,k:^n→^n× n induces a map J_F:ℙ^n-1→ℙ^n^2-1. For a generic choice of F, the image of J_F is a subvariety of dimension n-1. To see this, it is enough to exhibit an example in every degree d≥ 2, e.g., F=(z_1^d,…, z_n^d). Therefore, if n-1≥ 4, J_F(ℙ^n-1) must intersect ℙ(Y_2). This proves that the statement is true for a generic choice of F. 
To conclude, let ℙ^N(d,n) be the projectivization of the space of n-tuples of homogeneous polynomials of degree d in n variables. The set W = {([p], [F])∈ℙ^n-1×ℙ^N(d,n) rk(J_F(p))≤ n-2} is Zariski-closed. Thus, the same is true of its projection onto the second factor. Since we proved above that this projection contains a Zariski-open subset, the thesis follows.§ A harmonic function on a disc can be represented as the real part of a holomorphic function; Proposition <ref> below states that a function on a disc whose Laplacian is bounded can be represented as the real part of a holomorphic function plus a controlled error. The proof of Proposition <ref> uses another quite elementary fact. [Gradient estimate for holomorphic functions] ||F'||_L^∞(D(z,0.5r))≲ r^-2||Re(F)||_L^2(D(z,r)) ∀ F∈𝒪(D(z,r)). Let B be the Bergman projector on the unit disc 𝔻. Let F(ζ) be a square-integrable holomorphic function on 𝔻. By the mean value property, \overline{F(ζ)}-\overline{F(0)} is orthogonal to the Bergman space and hence B(\overline{F(ζ)}-\overline{F(0)})=0. This identity may be rewritten as F(ζ)+\overline{F(0)}=2B(Re F(ζ)). Differentiating, we get F'(ζ)=2∂_ζ B(Re F(ζ)). Ellipticity (or the Cauchy integral) yields ||F'||_L^∞(0.5𝔻)≲ ||B(Re F)||_L^2(𝔻)≤ ||Re F||_L^2(𝔻). If F is an arbitrary holomorphic function on 𝔻, applying this inequality to F(tζ) (t<1), we get t||F'||_L^∞(0.5t𝔻)≲ t^-1||Re F||_L^2(t𝔻). Letting t tend to 1, the monotone convergence theorem allows us to deduce the inequality of the statement for every holomorphic F, when D(z,r)=𝔻. Translation invariance and scaling give the full thesis.[Bounded curvature implies almost harmonicity] Let φ∈ C^2(D(z,r); ℝ) be such that |Δφ(ζ)|≤ Cr^-2 for all ζ∈ D(z,r). Then there exists a holomorphic function G:D(z,r)→ℂ such that φ = Re(G)+O(C), r||G'||_L^∞(D(z,0.5r))≲ ||φ||_L^∞(D(z,r))+C. It is enough to prove the case z=0 and r=1; the general case follows by translation invariance and scaling. Define b(ζ):=Δφ(ζ) for ζ∈𝔻 and b(ζ):=0 for ζ∉𝔻. Let b*Γ be its Newtonian potential, where Γ(ζ)=(1/2π)log|ζ|. Then φ-b*Γ is harmonic on 𝔻, and hence there exists G∈𝒪(𝔻) such that φ=Re(G)+b*Γ. Since Γ is uniformly integrable on discs of radius one, we have |b*Γ|≲ C. Hence, Proposition <ref> gives ||G'||_L^∞(0.5𝔻)≲ ||Re(G)||_L^∞(𝔻)≲ ||φ||_L^∞(𝔻)+C.
Kraichnan_model [email protected] Rudolf Peierls Centre for Theoretical Physics, University of Oxford, Clarendon Laboratory, Parks Road, Oxford OX1 3PU, UK St. John’s College, Oxford OX1 3JP, UK Institute for Research in Electronics and Applied Physics, University of Maryland, College Park, MD, 20742, USA Rudolf Peierls Centre for Theoretical Physics, University of Oxford, Clarendon Laboratory, Parks Road, Oxford OX1 3PU, UK Balliol College, Oxford, OX1 3BJ, UK Department of Astrophysical Sciences, Princeton University, Princeton, New Jersey 08543, USA Princeton Plasma Physics Laboratory, Princeton, New Jersey 08540, USA Rudolf Peierls Centre for Theoretical Physics, University of Oxford, Clarendon Laboratory, Parks Road, Oxford OX1 3PU, UK Merton College, Oxford OX1 4JD, UK Rudolf Peierls Centre for Theoretical Physics, University of Oxford, Clarendon Laboratory, Parks Road, Oxford OX1 3PU, UK University College, Oxford OX1 4BH, UK Rudolf Peierls Centre for Theoretical Physics, University of Oxford, Clarendon Laboratory, Parks Road, Oxford OX1 3PU, UK Institute for Research in Electronics and Applied Physics, University of Maryland, College Park, MD, 20742, USA Department of Physics, University of Maryland, College Park, Maryland 20740, USAWe consider a nearly collisionless plasma consisting of a species of `test particles' in one spatial and one velocity dimension, stirred by an externally imposed stochastic electric field—a kinetic analogue of the Kraichnan model of passive advection. The mean effect on the particle distribution function is turbulent diffusion in velocity space—known as stochastic heating. Accompanying this heating is the generation of fine-scale structure in the distribution function, which we characterize with the collisionless (Casimir) invariant C_2 ∝∬ dx dv⟨ f^2 ⟩—a quantity that here plays the role of (negative) entropy of the distribution function. We find that C_2 is transferred from large scales to small scales in both position and velocity space via a phase-space cascade enabled by both particle streaming and nonlinear interactions between particles and the stochastic electric field. We compute the steady-state fluxes and spectrum of C_2 in Fourier space, with k and s denoting spatial and velocity wavenumbers, respectively. In our model, the nonlinearity in the evolution equation for the spectrum turns into a fractional Laplacian operator in k space, leading to anomalous diffusion. Whereas even the linear phase mixing alone would lead to a constant flux of C_2 to high s (towards the collisional dissipation range) at every k, the nonlinearity accelerates this cascade by intertwining velocity and position space so that the flux of C_2 is to both high k and high s simultaneously. Integrating over velocity (spatial) wavenumbers, the k-space (s-space) flux of C_2 is constant down to a dissipation length (velocity) scale that tends to zero as the collision frequency does, even though the rate of collisional dissipation remains finite. The resulting spectrum in the inertial range is a self-similar function in the (k,s) plane, with power-law asymptotics at large k and s. Our model is fully analytically solvable, but the asymptotic scalings of the spectrum can also be found via a simple phenomenological theory whose key assumption is that the cascade is governed by a `critical balance' in phase space between the linear and nonlinear time scales. 
We argue that stochastic heating is made irreversible by this entropy cascade and that, while collisional dissipation accessed via phase mixing occurs only at small spatial scales rather than at every scale as it would in a linear system, the cascade makes phase mixing even more effective overall in the nonlinear regime than in the linear one.Phase-space entropy cascade and irreversibility of stochastic heating in nearly collisionless plasma turbulence William D. Dorland January 14, 2024 ===============================================================================================================§ INTRODUCTION Understanding the nature of turbulent cascades in nearly collisionless space and astrophysical plasmas is an outstanding problem <cit.> with a diverse range of applications, from solving the coronal-heating problem <cit.> to interpreting radiation emission from accretion disks around black holes <cit.>. A major difficulty distinguishing such turbulence from its fluid counterparts lies in the fact that fluctuations evolve in the six-dimensional phase space of single-particle positions and velocities. Dissipation (in the sense of irreversible entropy production) occurs via particle collisions <cit.>, which in a nearly collisionless plasma are activated only when the particle distribution function develops large gradients in velocity space.Turbulent dissipation in nearly collisionless plasmas is often considered from an energetics perspective. This point of view focuses on the cascade of bulk kinetic and electromagnetic energy from large to small spatial scales (see, e.g., <cit.> and references therein) and the physical processes, such as magnetic reconnection <cit.> and wave-particle interactions <cit.>, that convert this energy into the internal energy of the particles. Whilst energy-cascade and transfer mechanisms are important, they provide a fundamentally incomplete picture of turbulent dissipation. Without collisions, entropy is formally conserved (along with an infinity of Casimir invariants <cit.>), and any transfer of energy between particles and fields is formally reversible.It was realized by <cit.> that entropy production in the long-time limit of a nearly collisionless plasma must remain finite even as the collision frequency ν tends to zero. This idea crystallized in <cit.>, where the notion of entropy cascade in the context of gyrokinetics was introduced. Below the Larmor scale, in a phase-space inertial range between the injection range at large scales and the collisional dissipation range at small scales, the (negative) entropy of the perturbed distribution function cascades in both position and velocity space via a nonlinear perpendicular phase-mixing mechanism <cit.>. Because this is a constant-flux cascade, the turbulent heating rate in a gyrokinetic plasma is finite and independent of ν even as ν→ 0^+, analogous to so-called dissipative anomalies <cit.> in hydrodynamics, where viscous dissipation is finite in infinite-Reynolds-number turbulence.Entropy cascade outside the gyrokinetic approximation is a frontier topic just beginning to be explored <cit.>. In this paper, we study turbulent dissipation and entropy cascade via an analytically solvable model introduced by <cit.>. 
We consider the `1D-1V' electrostatic, full-f Vlasov equation with a model diffusive collision operator for a test-particle species, where instead of the electric field being self-consistently determined by Poisson's equation, it is externally determined to be a stochastic Gaussian, white-noise source with a specified spatial correlation function. Physically, this model represents the evolution of a low-density minority species in a multi-component plasma whose dielectric response is dominated by the other, more abundant species. This model is the plasma-kinetic analogue of the Kraichnan <cit.> model of passive advection, where a scalar field [Or a vector field, such as a magnetic field, in which case the model is named after Kazantsev, who, in the year before Kraichnan, pursued the same type of approximation for the kinematic-dynamo problem <cit.>.], such as temperature or concentration of dye, is passively advected by an externally determined random flow, allowing for analytical calculations of the passive-scalar statistics. The Kraichnan model has been dubbed the `Ising model' <cit.> of fluid turbulence because it is a solvable model that exhibits many properties also present in real systems, and so serves as a theoretical laboratory to study turbulence <cit.>. It is in this spirit that we investigate our solvable model of kinetic plasma turbulence.We decompose the distribution function into its mean and fluctuating parts, f =⟨ f ⟩ + δ f, where ⟨ ... ⟩ denotes ensemble averaging over realizations of the random electric field. In our model, particles are stochastically accelerated by the electric field and undergo random walks in velocity space, resulting in bulk heating of ⟨ f ⟩ <cit.>. In the context of magnetized plasmas, this phenomenon is often referred to as stochastic heating <cit.>. Accompanying this heating is the generation of velocity and spatial structure in the perturbed distribution function δ f via linear phase mixing and nonlinear interactions between particles and the turbulent electric field <cit.>.We characterize this structure via the collisionless (Casimir) invariant C_2 = (1/L) ∬ dx dv⟨ f^2 ⟩/2, where L is the system size. C_2 has been considered before as a measure of phase-space structure and as a cascaded quantity in kinetic plasma turbulence <cit.>, and is closely related to the part of the traditional entropy, S = -∬ dx dv f log f, associated with the perturbed distribution function and entering additively in the free-energy invariant of δ f gyrokinetics <cit.>. The conservation of C_2 is broken by particle collisions, and, in particular, when collisions are modeled as a linear diffusion operator in velocity space, as they are in this paper, the collisional dissipation of C_2 is negative-definite. Because the time irreversibility of our system can be tracked via non-conservation of C_2, it can be used as a generalized (negative) entropy of the distribution function.The diffusion of ⟨ f ⟩ injects δ C_2 = (1/L) ∬ dx dv⟨δ f^2 ⟩/2 fluctuations at large scales. These are then cascaded to small scales in both position and velocity space, where they are ultimately dissipated by collisions. This phase-space cascade of δ C_2 is due to both linear phase mixing and nonlinear interactions between particles and the stochastic electric field. 
We analyze this cascade by computing the steady-state Fourier spectrum and fluxes of δ C_2 in both position and velocity space, with dual variables (wavenumbers) k and s, respectively.In the absence of nonlinearity, linear phase mixing advects the spectrum from low to high |s|, giving rise to an `inertial range' in s where there is a constant flux of δ C_2 from injection to dissipation scales, at every k <cit.>. The resulting steady-state spectrum is flat in the inertial range, with an exponential cutoff at a ν-dependent collisional scale s_ν.Under the Kraichnan model, we find that the nonlinear term in the evolution equation for the Fourier spectrum becomes a fractional Laplacian operator <cit.> in k space, which leads to anomalous diffusion <cit.>. Whereas fractional Laplacians (and fractional derivatives in general) are usually introduced in an ad-hoc manner to model systems with anomalous diffusion, both in plasma physics <cit.> and a wide variety of other contexts <cit.>, here it emerges naturally as a result of our assumptions about the electric field.The linear phase mixing and the turbulent fractional diffusion intertwine the position- and velocity-space cascades in such a way that the resulting spectrum is a self-similar function in the (k,s) plane, with power-law asymptotics at large k and s. Even though the Kraichnan model is fully analytically solvable, we can also recover these asymptotic scalings via a phenomenological theory whose key assumption is that the cascade is governed by a `critical balance' in phase space between the linear and nonlinear time scales.The δ C_2 flux has components in both k and s directions. The flow of δ C_2 occurs along outward unwinding spirals in (k,s) space. This circuitous route to dissipation scales is due to the nonlinearity generating modes that can linearly propagate from high to low |s|, called `phase-unmixing' modes <cit.>, a stochastic generalization of the textbook phenomenon of plasma echo <cit.>. The net result after adding together contributions to the flux from the phase-mixing and phase-unmixing modes is that δ C_2 is cascaded to both high s and high k simultaneously, and collisional dissipation only occurs at scales comparable to, or smaller than, the dissipation length and velocity scales, both of which tend to zero when the collision frequency ν does. Integrating over velocity (spatial) wavenumbers, the flux of δ C_2 in k (s) space is constant down to these dissipation scales, beyond which perturbations are thermalized by collisions. The rate of collisional dissipation is finite and independent of the collision frequency as ν→ 0^+—a clear analytical example of a `dissipative anomaly' in a kinetic system. This turbulent dissipation ultimately mediates the irreversibility of the stochastic heating. The rest of this paper is organized as follows. In Section <ref>, we introduce the Vlasov-Kraichnan model. In Section <ref>, we construct a phenomenological theory that captures the asymptotic scalings of the Fourier spectrum of δ C_2. Then, in Section <ref>, we directly calculate the 1-D phase-space fluxes of δ C_2 in Fourier space in a statistical steady state. In Section <ref>, we calculate the inertial-range Fourier spectrum and its corresponding 2-D fluxes in (k,s) space. Finally, in Section <ref>, we conclude our results and discuss their implications. 
Supplementary calculations and discussions are exiled to Appendices <ref>-<ref>.§ KRAICHNAN MODEL FOR A 1D-1V ELECTROSTATIC PLASMA We consider a test-particle species composed of particles with charge q and mass m in a 1D periodic box of length L, and subject to an external electric field E. At t = 0, we assume the particle distribution function f to have no spatial variation, but we keep its velocity dependence generic, only assuming that f is square-integrable and has finite kinetic energy. We denote the number density of the distribution function as n_0 and the initial thermal velocity as v_th,0 = √(2T_0/m), where T_0 is the initial temperature of the particles. An example initial condition with these properties is a Maxwellian,f(x,v,t = 0) = n_0/√(π) v_th,0e^-v^2/v_th,0^2≡ F_M(v). The Vlasov equation for the particle distribution function is∂ f/∂ t + v ∂ f/∂ x + E∂ f/∂ v = C[f],where C[f] is the collision operator, and we have absorbed q and m into the definition of E, denoting q E /m → E.We assume E to be a Gaussian white-noise field, with zero mean and correlation function⟨ E(x, t) E(x', t') ⟩ = 2 D(x,x')δ(t-t'),where ⟨ ... ⟩ denotes ensemble averaging over realizations of E and δ is a Dirac delta distribution. We assume E to be statistically homogeneous and isotropic in space, so D(x,x') = D(r), where r = |x-x'|. We chooseD(r) = ∑_k e^i k rD̂(k),where k ∈ (2π/L) ℤ andD̂(k) = De^-(η k)^2/(k^2+L_E^-2)^(α+1)/2.Here, D is a constant diffusion coefficient with dimensions (length^1-α) × (time^-3), 0 < α≤ 2, and L_E and η represent the integral length scale and dissipation scale, respectively, of the stochastic electric field.The electric field is chosen so that its correlation function (<ref>) has a power-law spectrum ∝ |k|^-(α+1) in the inertial range 1/L_E≪ k ≪ 1/η; note that (<ref>) is defined in a similar way to the velocity field in the fluid passive scalar Kraichnan model <cit.>. We identify two distinct regimes: α≠ 2 and α = 2. When α≠ 2, the field is multiscale, reminiscent of turbulent fields in fully developed turbulence. When α = 2, the spectrum is sufficiently steep that the field is effectively single-scale. This case is known as the Batchelor regime <cit.>. While there are important differences between the two regimes <cit.>, some of which we will discuss, many of the properties of the model considered in this paper will be qualitatively the same in both regimes.It will be useful to decompose the distribution function into its mean and fluctuating parts:f = ⟨ f ⟩ + δ f.We make no assumption of δ f being small compared to ⟨ f ⟩.The effect of collisions will be to wipe out fine-scale velocity-space structure in the distribution function. To model this in the simplest possible way, we ignore collisions between our test-particle species and the other species in the plasma, and represent collisions within the test species as a linear diffusion in velocity space acting only on δ f, viz.,C[δ f] = ν∂^2 δ f/∂ v^2,where ν is the collision frequency (multiplied by v_th,0^2), which we consider to be vanishingly small, taking ν→ 0^+. It is not a problem that (<ref>) does not conserve energy or vanish on a Maxwellian because collisions will only matter for the parts of δ f with sharp gradients in v [This approximation may seem overly simplified; for example, it neglects collisions acting on ⟨ f ⟩, collisional drag, and nonlinearity <cit.>. However, we expect these effects to be subdominant compared to collisional diffusion of δ f for the following reasons. 
In Section <ref>, we find that ⟨ f ⟩ satisfies a diffusion equation in velocity space. Its gradients therefore become ever smaller in time, so weak collisions will be negligible for ⟨ f ⟩. Collisional drag should be negligible in the limit ν→ 0^+ because it contains one fewer velocity derivative than collisional diffusion. We assume that dropping nonlinearity in the collision operator is reasonable because we anticipate that δ f will be small at the scales where collisions matter. Indeed, we find in Sections <ref> and <ref> that the Fourier spectrum of δ f decays as a power law in wavenumber space, with collisional cutoffs occurring at large wavenumbers, where the spectrum is small relative to its magnitude at large scales.].

§.§ Stochastic heating

We first work out the effect of the turbulent electric field on the mean distribution function. Ensemble averaging (<ref>) over realizations of the stochastic electric field, we get ∂⟨ f ⟩/∂ t + v ∂⟨ f ⟩/∂ x +⟨ E ∂δ f/∂ v⟩ = 0. The equation for δ f is then ∂δ f/∂ t + v ∂δ f/∂ x + E ∂δ f/∂ v -⟨ E ∂δ f/∂ v⟩=- E ∂⟨ f ⟩/∂ v + ν∂^2 δ f/∂ v^2. To compute the ensemble average of the nonlinear term in (<ref>), we apply the Furutsu-Novikov theorem <cit.> for splitting correlators. For a Gaussian field E that depends on variables 𝐪, this theorem states that ⟨ E(𝐪) F[E] ⟩ = ∫ d𝐪'⟨ E(𝐪) E(𝐪') ⟩⟨δ F[E]/δ E(𝐪')⟩, where F[E] is a differentiable functional of E. Formally integrating (<ref>) with respect to time, we have that δ f = - ∫^t dt”[v ∂δ f/∂ x+ E ∂δ f/∂ v -⟨ E ∂δ f/∂ v⟩ + E ∂⟨ f ⟩/∂ v - ν∂^2 δ f/∂ v^2]. Combining (<ref>) and (<ref>), we have ⟨ E(x,t) δ f(x,t) ⟩= ∫^t dt' ∫ dx' ⟨ E(x,t) E(x',t') ⟩⟨δ[ δ f(x,t) ]/δ E(x',t')⟩= - ∫ dx' D(x-x') δ(x-x') ∂⟨ f ⟩/∂ v = -D(0) ∂⟨ f ⟩/∂ v. Therefore, (<ref>) becomes ∂⟨ f ⟩/∂ t = D_0∂^2 ⟨ f ⟩/∂ v^2, where D_0 = D(0) is the `turbulent collisionality.' We have dropped the streaming term in (<ref>) because our initial condition is spatially homogeneous, so ⟨ f ⟩ at future times does not depend on x. The solution to (<ref>) is ⟨ f ⟩ = ∫ dv' f(v',t=0) 1/√(4 π D_0t)e^-(v-v')^2/4D_0t. The mean kinetic-energy density ⟨ K ⟩ of this distribution function is ⟨ K ⟩ = K_0 + m n_0 D_0 t, where K_0 is the initial kinetic-energy density. For the Maxwellian initial condition (<ref>), (<ref>) becomes simply ⟨ f ⟩ = n_0/√(π) v_th e^-v^2/v_th^2, with a growing thermal speed: v_th = √(v_th,0^2 + 4 D_0 t).

As particles get stochastically accelerated by the electric field, they undergo Brownian random walks in velocity space, leading to bulk heating of the distribution function <cit.>, with secularly growing kinetic energy (<ref>). In magnetized plasmas, this phenomenon is usually referred to as stochastic heating <cit.>.

§.§ C_2 budget: injection and dissipation

Because the stochastic electric field continuously heats the distribution function, viz., (<ref>), energy is not a conserved quantity in our system. However, what is conserved in the absence of collisions is the quadratic quantity C_2 = 1/L∬ dx dv1/2⟨ f^2 ⟩ = C_2,0 + δ C_2, where C_2,0 = 1/L∬ dx dv1/2⟨ f ⟩ ^2, δ C_2 = 1/L∬ dx dv1/2⟨δ f^2 ⟩. C_2 can only change via collisions. Using (<ref>) and (<ref>), we have d C_2/d t = 1/L∬ dx dv⟨δ f C[δ f] ⟩= - ν/L∬ dx dv⟨ |∂δ f/∂ v|^2 ⟩. Thus, when collisions are approximated as a linear diffusion in velocity space, they provide negative-definite dissipation of C_2. Because -C_2 is conserved in the absence of collisions and positive-definitely increased by collisions, we can interpret it as a `generalized entropy' of the distribution function.
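Both statements above, the secular heating of ⟨ f ⟩ and the corresponding budget for C_2, are easy to check numerically. The following minimal sketch (parameter values are arbitrary illustrative choices, with dimensional factors suppressed) represents the white-noise acceleration by independent Gaussian velocity kicks of variance 2 D_0 Δt per step, which is all that enters the evolution of ⟨ f ⟩, and compares the measured thermal speed and the rate of decrease of C_2,0 with the corresponding formulas quoted in the text.

```python
import numpy as np

# Illustrative parameters (arbitrary choices; dimensional factors suppressed)
n0, m, vth0, D0 = 1.0, 1.0, 1.0, 0.05
rng = np.random.default_rng(1)

# Monte Carlo stochastic heating: white-noise kicks with <dv^2> = 2 D0 dt per step
npart, dt, nsteps = 100_000, 1e-3, 2000
v = rng.normal(0.0, vth0 / np.sqrt(2.0), npart)        # Maxwellian initial velocities
for _ in range(nsteps):
    v += np.sqrt(2.0 * D0 * dt) * rng.standard_normal(npart)
t = nsteps * dt
print(np.sqrt(2.0 * np.mean(v ** 2)),                  # measured v_th(t)
      np.sqrt(vth0 ** 2 + 4.0 * D0 * t))               # predicted v_th(t)

# Decay of C_{2,0} for the heated Maxwellian, checked against the formula above
vgrid = np.linspace(-20.0, 20.0, 400_001)
def C20(time):
    vth = np.sqrt(vth0 ** 2 + 4.0 * D0 * time)
    fM = n0 / (np.sqrt(np.pi) * vth) * np.exp(-(vgrid / vth) ** 2)
    return 0.5 * np.trapz(fM ** 2, vgrid)

h = 1e-4
T, dTdt = 0.5 * m * (vth0 ** 2 + 4.0 * D0 * t), 2.0 * m * D0
lhs = (C20(t + h) - C20(t - h)) / (2.0 * h)            # numerical dC_{2,0}/dt
rhs = -n0 ** 2 * np.sqrt(m) / (8.0 * np.sqrt(np.pi)) * T ** -1.5 * dTdt
print(lhs, rhs)                                        # should agree closely
```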
We will henceforth use -C_2 and `entropy' synonymously.

Stochastic heating is accompanied by the decrease of C_2,0. Indeed, using (<ref>) and integrating by parts, we have d C_2,0/d t = - D_0/L∬ dx dv|∂⟨ f ⟩/∂ v|^2 = -D_0∫ dv|∂⟨ f ⟩/∂ v|^2 ≤ 0 generically, and for a Maxwellian in particular, d C_2,0/d t = -n_0^2 √(m)/8 √(π)1/T^3/2d T/dt, where T = m v_th^2/2, with v_th given by (<ref>).

To work out the δ C_2 budget, we can combine (<ref>) and (<ref>), giving d δ C_2/d t = D_0∫ dv|∂⟨ f ⟩/∂ v|^2 - ν/L∬ dx dv⟨ |∂δ f/∂ v|^2 ⟩. Thus, without collisions, as C_2,0 decreases as a result of the stochastic heating of ⟨ f ⟩, δ C_2 increases to maintain entropy balance. Once δ f has developed sharp enough gradients, collisions dissipate δ f, increasing the total entropy. The irreversibility of stochastic heating therefore hinges on the collisional dissipation of δ f.

If the δ C_2 fluctuations evolve faster than the mean C_2,0 and reach a quasi-steady state (as we shall argue that they do), then (<ref>) becomes D_0 ∫ dv|∂⟨ f ⟩/∂ v|^2 = ν/L∬ dx dv⟨ |∂δ f/∂ v|^2 ⟩. In the case where the initial condition is Maxwellian, combining (<ref>) and (<ref>) and substituting this expression into (<ref>) yields a direct balance between the heating rate of ⟨ f ⟩ and the collisional dissipation of δ f. The velocity-space gradients of ⟨ f ⟩ inject δ C_2 at large scales, and collisions dissipate δ C_2 at small scales. As in any prototypical turbulent system, the only way the steady-state balance (<ref>) between injection and dissipation at such disparate scales can hold is if there is a constant-flux cascade bridging them. This is precisely what we will find in the following sections, viz., a phase-space cascade of δ C_2 in both position and velocity space. This cascade will be our focus for the rest of this paper.

Before continuing, we note that in Appendix <ref>, we discuss alternative formulations of the thermodynamics of our system in terms of other collisionless invariants, and why in this work we have chosen to study C_2 over those other invariants.

§.§ Evolution of Fourier spectrum

We analyze how δ C_2 is partitioned between scales in phase space via its Fourier spectrum. We define δf̂(k,s) = 1/2 π L∬ dx dv e^-i(kx-sv) δ f(x,v), where k and s are dual variables to x and v, respectively. Fourier transforming (<ref>), we get that δf̂(k,s) satisfies ∂δf̂/∂ t + k∂δf̂/∂ s- i s∑_p[Ê(p) δf̂(k-p) -⟨Ê(p) δf̂(k-p) ⟩] = i sÊ(k)⟨f̂⟩(s) -ν s^2 δf̂. We define the Fourier spectrum as F̂(k,s) = 1/2⟨ |δf̂(k,s)|^2 ⟩, which satisfies Parseval's theorem, δ C_2 = 1/L∬ dx dv1/2⟨δ f^2 ⟩ = 2 π∑_k∫ dsF̂(k,s), has the budget equation (equivalent to (<ref>) in spectral space) d δ C_2/d t = 2 π[∑_k∫ dsŜ - 2 ν∑_k∫ ds s^2 F̂], and satisfies the evolution equation ∂F̂/∂ t + k∂F̂/∂ s + κs^2 (-Δ_k)^α/2F̂ = Ŝ - 2 ν s^2 F̂. The source term in (<ref>) and (<ref>) is Ŝ(k,s) = D̂(k) s^2 ⟨f̂⟩^2, which for a Maxwellian initial condition evaluates to Ŝ(k,s) = n_0^2/(2 π)^2 D̂(k) s^2 e^-s^2v_th^2/2, where v_th is given by (<ref>). There are several steps required to go from (<ref>) to (<ref>) and (<ref>), primarily using (<ref>) to calculate correlation functions involving the electric field. These steps are technical, so we detail them in Appendix <ref>. In (<ref>), (-Δ_k)^α/2 is a fractional Laplacian <cit.> of order α/2 (in k space), and κ is a turbulent diffusion coefficient ∝ L D, for which the exact expression is given in Appendix <ref>.
To obtain this term, we have taken the limit η→ 0^+ in (<ref>), which is analogous to taking the zero-viscosity limit in hydrodynamic turbulence, and k L_E≫ 1, which is a convenient limit to take to focus on how the distribution function is stirred in the `inertial range' of the stochastic electric field.The fractional Laplacian is a non-local integral operator, generalizing the Laplacian to non-integer order. Its Fourier transform satisfiesL/2 π∫ dk e^ikr (-Δ_k)^α/2F̂(k,s) = |r|^α F(r,s),whereF(r,s) = L/2 π∫ dk e^ikrF̂(k,s).Note that we have converted sums over k into integrals (multiplied by the inverse step size L/2 π).Mathematically, the fractional Laplacian describes abstract `particles' with coordinates (k,s) undergoing random jumps in k space, so-called Lévy flights, which leads to superdiffusion <cit.>. In the limit α→ 2^-, the fractional Laplacian becomes (minus) a regular Laplacian <cit.>, corresponding to regular Brownian motion. This limit, the Batchelor regime, was studied before in <cit.>, although we shall present some new conclusions about it below.In the following sections, we will solve for steady-state solutions of (<ref>). To this end, we further assume that the source is localized at small (k,s) and injects δ C_2 at a constant rate,L ∫ dk dsŜ(k,s) = ε > 0.These assumptions may appear questionable in view of (<ref>) and (<ref>) (is the source really constant in time?) and of (<ref>) and (<ref>) (is it really local?). Before continuing, we address these concerns.Regarding the constancy of the source, we observe that in (<ref>), the injection rate of δ C_2 via the stochastic heating of ⟨ f ⟩, using (<ref>) and dimensional analysis, scales asε = -d C_2,0/dt∝D_0 n_0^2/v_th^3 = D_0 n_0^2/(v_th,0^2 + 4 D_0 t)^3/2.This scaling is particularly obvious, from (<ref>), for the case of a Maxwellian initial condition. Comparing (<ref>) and (<ref>), we see that the injection rate is, in fact, time-dependent, ∝ t^-3/2, even though in our calculations we would like to treat ε as a constant. However, if the cascade of δ C_2 is set up quickly compared to the decay of the source (which, as will be discussed in Section <ref>, it is on a ν-independent time scale when α < 2 and a time scale ∝ | logν | when α = 2), it is reasonable to treat the source as approximately constant on the time scale of the δ f evolution, and then solve for the steady-state spectrum. This becomes an ever-better approximation at long times, because the rate of change of (<ref>) is a decreasing function of time. Also note that, as long as ε > 0, its actual value is not important; because the spectrum-evolution equation (<ref>) is linear, it has no special amplitude scale, and so the size of ε is just a global modifier of the spectrum's amplitude.Regarding the locality of the source, it is easily satisfied in s space, as the velocity derivatives of ⟨ f ⟩ become ever smaller over time as the distribution function stochastically heats. This is clearly seen in the case of a Maxwellian initial condition, where the source (<ref>) is a Gaussian in s, with a width ∼ 1/v_th. In k space, the source is not, in fact, truly localized, since D̂(k) is a power law, viz., (<ref>), so fluctuations are injected at all scales where the electric field exists. 
However, this turns out not to be fatal to our formalism: we will neglect the source when solving for the inertial-range spectrum in Sections <ref> and <ref> and argue a posteriori in Section <ref> that this choice was justified.

§ PHENOMENOLOGICAL THEORY OF PHASE-SPACE CASCADE

In Sections <ref> and <ref>, we perform a detailed analysis of (<ref>). However, the key results of those sections can, in fact, be intuited via a much simpler route, which we follow here first.

As discussed in Section <ref>, stochastic heating can only truly be made irreversible by particle collisions. For collisions to be relevant even in the limit ν→ 0^+, we conjecture that δ C_2 undergoes a cascade in both position and velocity space. Analogously to phenomenological theories of hydrodynamic turbulence <cit.>, we assume that there exists an inertial range in phase space, whereby the flux ε is processed between scales, from the injection to the dissipation range. Under this assumption, our task is then to ascertain the form of the spectrum F̂ in (k,s) space in the inertial range.

Fluctuations of δ f develop fine-scale structure in velocity space via linear phase mixing, which manifests in (<ref>) as advection of F̂ in s at the rate k. Dimensionally, the phase-mixing time is, therefore, τ_p ∼ s/k. Likewise, δ f develops fine-scale structure in position space via nonlinear mixing by the stochastic electric field, which manifests in (<ref>) as fractional diffusion of F̂ in k with the diffusion coefficient κ s^2. Dimensionally, using (<ref>), the turbulent-diffusion time is, therefore, τ_d ∼ k^α/κ s^2. The ratio of these time scales (taken to the power 1/(α+1) for analytical convenience and in anticipation of the results of Section <ref>) is ξ = (τ_d/τ_p)^1/(α+1) = k/(κs^3)^1/(α+1).

We conjecture that the structure of F̂ in (k,s) space is governed by the parameter ξ. In particular, we assume that the spectrum is a product of power laws in k and s, with different scaling exponents depending on whether ξ is small or large: F̂(k,s) ∼ k^a s^-b, ξ≪ 1; k^-c s^d, ξ≫ 1. When ξ≪ 1, the turbulent-diffusion time is much shorter than the phase-mixing time, so we expect the inertial-range spectrum to satisfy, to lowest order in ξ, κ s^2(-Δ_k)^α/2F̂ = 0, and, therefore, to be independent of k. Consequently, a = 0 in (<ref>). When ξ≫ 1, the phase-mixing time is much shorter than the turbulent-diffusion time. Then, to lowest order in 1/ξ, we expect the spectrum to satisfy k∂F̂/∂ s = 0, and, therefore, to be independent of s, the same as in the linear regime <cit.>. Consequently, d = 0 in (<ref>).

To find b and c, we invoke our initial assumption of a constant-flux cascade. As in any Kolmogorov-style theory, we need a prescription for the cascade time τ_c, which is the typical time for δ C_2 to be transferred across phase-space scales (ℓ, u), where ℓ∼ 1/k and u ∼ 1/s.
We conjecture that the cascade time is set by the phase-mixing and turbulent-diffusion time scales (<ref>) and (<ref>), and that the latter two must balance along the path of the cascade:τ_c∼τ_p∼τ_dτ_c∼κ^-1/3ℓ^(2-α)/3∼κ^-1/(α + 1) u^(2-α)/(α + 1).In wavenumber space, this condition is satisfied when ξ∼ 1, i.e.,s ∼κ^-1/3 k^(α+1)/3.This is a kinetic, phase-space analogue of the critical-balance conjecture in magnetohydrodynamic turbulence <cit.>.Assuming a constant flux of δ C_2 in position space, using the ℓ scaling in (<ref>), and using (<ref>) to relate fluctuations in real space and wavenumber space, we getv_thδ f_ℓ^2/τ_c∼εv_thδ f_ℓ^2 ∼ε κ^-1/3ℓ^(2-α)/3 L ∫ ds F̂∼ε κ^-1/3k^-(5-α)/3,where δ f_ℓ is the characteristic amplitude of δ f at spatial scale ℓ. Let us compare this result to the 1-D k spectrum that follows from (<ref>). We assume (and verify a posteriori) that b > 1, so that the integral of F̂ over s is dominated by the region ξ≳ 1, i.e.,s ≲κ^-1/3 k^(α+1)/3).This gives∫ dsF̂∼∫^κ^-1/3 k^(α+1)/3_0 ds k^-c∼ k^-c + (α+1)/3.Requiring consistency between (<ref>) and (<ref>) yields c = 2.We can now perform the same exercise in velocity space, viz.,v_thδ f_u^2/τ_c∼εv_thδ f_u^2 ∼εκ^-1/(α + 1) u^(2-α)/(α + 1) L ∫ dk F̂∼ε κ^-1/(α + 1) s^-3/(α + 1),where δ f_u is the characteristic amplitude of δ f at velocity scale u. On the other hand, using (<ref>) and assuming that the integral over k of F̂ is dominated by the region ξ≲ 1, ork ≲κ^1/(α+1) s^3/(α+1),we have∫ dkF̂∼∫^κ^1/(α+1) s^3/(α+1)_0 dk s^-b∼ s^-b + 3/(α + 1).Requiring consistency between (<ref>) and (<ref>) yields b = 6/(α+1).Assembling the scalings that we have surmised above, we find that the spectrum (<ref>) isF̂(k,s) ∼s^-6/(α + 1),ξ≪ 1, k^-2,ξ≫ 1.For α = 2, this was derived, by means of a formal solution, in <cit.>, but the phenomenological argument and physical interpretation presented here are new.In summary, we have constructed a phenomenological theory according to which δ C_2 undergoes a phase-space cascade in both position and velocity space. At large k (ξ≫ 1), the spectrum is phase-mixing-dominated and has a power-law scaling in k. At large s (ξ≪ 1), the spectrum is turbulent-diffusion-dominated and has a power-law scaling in s. These two regimes are separated in phase space by the critical-balance region ξ∼ 1, where the linear and nonlinear time scales are comparable. The 1-D k and s spectra are dominated by contributions from this critical-balance region.Since the collision operator is diffusive in velocity space and ν is assumed to be small, collisional dissipation must necessarily occur at fine velocity-space scales (large s). It is therefore unsurprising that a kinetic system with injection of a quadratic invariant exhibits a constant-flux cascade of that invariant in velocity space. However, there is no a priori scale in position space where the dissipation must happen, so it is nontrivial that there also exists a cascade in position space. This inertial range in position space emerges due to the nonlinear field-particle interactions in (<ref>), which mix position and velocity space <cit.>. A reader interested in how the above results are obtained more rigorously should read Sections <ref> and <ref>, where we solve (<ref>) properly for the spectrum and fluxes of δ C_2. A reader interested only in the big picture can skip straight to Section <ref>.§ PHASE-SPACE FLUXESWe now verify the phenomenological theory presented in Section <ref> by solving (<ref>). 
It is useful to write this equation in flux-gradient form:∂F̂/∂ t + ∇·Γ̂ = Ŝ - 2 ν s^2 F̂,where ∇ = (∂/∂ k, ∂/∂ s ) is a gradient operator in the (k,s) space, and the flux Γ̂ = (Γ̂^k, Γ̂^s ) has componentsΓ̂^s = k F̂,which is clear from (<ref>), andΓ̂^k = iκ s^2 1/L∫^+∞_-∞ dr e^-ikr|r|^α/r F(r,s).The latter expression can be derived using (<ref>), viz., by writing the Fourier transform of the nonlinear term asκ s^2 |r|^αF(r,s) = -i r [ i κ s^2 |r|^α/r F(r,s) ].Inverse-Fourier transforming the term in brackets in (<ref>) yields (<ref>) <cit.>. In steady state, (<ref>) reads∇·Γ̂ = k∂F̂/∂ s + κs^2 (-Δ_k)^α/2F̂ = Ŝ - 2 ν s^2 F̂. In Sections <ref> and <ref>, we compute Γ̂^k and Γ̂^s integrated over s and k, respectively. Then, in Section <ref>, we compute the spectrum F̂(k,s) and analyze the full flux Γ̂ in (k,s) space. This order of presentation may seem awkward, but is, in fact, necessary. This is because the integrated fluxes inform us of the nature of the solution of (<ref>). This solution is in turn needed to compute the 2-D flux in (k,s) space. §.§ Constant flux in k Let us integrate (<ref>) over all s. The s flux vanishes at s →±∞, and we are left with an equation for ĝ(k) = ∫^+∞_-∞ ds s^2 F̂, which satisfiesκ (-Δ_k)^α/2ĝ = Ŝ - 2 νĝ,where Ŝ(k) = ∫^+∞_-∞ dsŜ(k,s). This integrated source Ŝ injects ĝ, which is diffused by the nonlinear term and dissipated by collisions.The source (<ref>) is peaked at low k, with characteristic width L_E^-1. To analyze the behavior of ĝ(k) in the region k L_E ≫ 1, we approximate Ŝ(k) ≈ (ε/L)δ(k). Fourier transforming (<ref>) and solving for g(r), the Fourier transform of ĝ(k), yieldsg(r) = ε/2 πκ1/|r|^α + ℓ^α_ν,whenceĝ(k) = 1/L∫ dr e^-ikr g(r) = ε/2 πκ L∫ dre^-ikr/|r|^α + ℓ^α_ν,where we have defined the `Kolmogorov' (dissipation) scale asℓ_ν≡(2 ν/κ)^1/α.For α = 2, (<ref>) simplifies toĝ(k) = ε/2 πκ L∫ dre^-ikr/r^2 + ℓ^2_ν =εℓ_ν/4 ν Le^-|k| ℓ_ν.For α < 2, (<ref>) does not have a simple closed-form expression, but it can be manipulated into an integral where the exponential in the integrand is decaying rather than oscillatory:ĝ(k) = εℓ_ν/2 πν L∫^∞_0 dzsin(πα /2) z^α/z^2α + 2z^αcos(πα /2)+1 e^-|k| ℓ_νz.This will be useful in what follows. Deriving (<ref>) from (<ref>) requires some work, which is done in Appendix <ref>.The solution (<ref>) implies that the rate of δ C_2 injection by the source equals the rate of δ C_2 dissipation by collisions. Indeed, using (<ref>) and (<ref>), and converting the sums over k into integrals, the collisional dissipation can be written in terms of ĝ:2 ν L ∫ dkĝ(k) = 2 ν∫ dkε/2 πκ∫ dre^-ikr/|r|^α + ℓ^α_ν = ε ℓ_ν^α∫ drδ(r) 1/|r|^α + ℓ^α_ν = ε = L ∫ dk dsŜ,as expected. Since (<ref>) is a Green's function solution to (<ref>), it is straightforward to show that this balance also holds for arbitrary Ŝ(k). Importantly, (<ref>) applies even in the limit ν→ 0^+; emergence of such finite collisional dissipation in the collisionless limit is known as a dissipative anomaly <cit.>. This result, although perhaps obvious from the steady state of (<ref>), demonstrates constructively that the steady-state solution to (<ref>) is well defined in the collisionless limit.Using the solution (<ref>), we can now compute the 1-D k flux of δ C_2, viz., from (<ref>),L ∫ dsΓ̂^k = i κ∫_-∞^+∞ dr e^-ikr|r|^α/r g(r).For α = 2, this flux reduces toL ∫ dsΓ̂^k = - Lκ∂ĝ/∂ k =sgn(k)ε/2 e^-|k| ℓ_ν,where we used the solution (<ref>). 
For α < 2, L ∫ dsΓ̂^k = sgn(k)ε/π∫^∞_0 dzsin(πα /2) z^α-1/z^2α + 2z^αcos(πα /2) + 1e^-|k| ℓ_ν z.The derivation of (<ref>) can be found in Appendix <ref>. When k ℓ_ν≪ 1, both (<ref>) and (<ref>) are constant and equal to sgn(k)ε/2 (in this limit, the integral over z in (<ref>) is π/2, which is also shown in Appendix <ref>). This result, one of the most important of this paper, means there is a constant-flux cascade in position space, viz., there exists an inertial range, 1/L_E ≪ k ≪ 1/ℓ_ν, unaffected directly by forcing or collisions, where δ C_2 is transferred from larger to smaller scales.At k ℓ_ν≳ 1, collisions become relevant. The dissipation rate at scales ≳ 1/k is𝒟̂(k) = 2 ν L ∫^+k_-k dk' ĝ(k').Substituting (<ref>) or (<ref>) into (<ref>) gives, for α = 2,𝒟̂(k) = εℓ_ν/2∫^+k_-k dk' e^-|k'| ℓ_ν = ε(1-e^-k ℓ_ν)or, for α < 2,𝒟̂(k) = 2 ε/π∫^∞_0 dzsin(πα /2) z^α-1/z^2α + 2z^αcos(πα /2)+1 (1-e^-k ℓ_νz).From (<ref>) and (<ref>), we see that the collisional dissipation is only order-unity when k ℓ_ν≳ 1, i.e., below the Kolmogorov scale (<ref>), which, therefore, deserves the name that we have given it. Past k ℓ_ν≳ 1, the dissipation balances the δ C_2 injected by the forcing (𝒟̂(k) ≃ε when kℓ_ν≫ 1; derived in Appendix <ref>). As discussed at the end of Section <ref>, because the collision operator is diffusive in velocity space, there is no a priori scale in position space where the dissipation must happen. Rather, this dissipation range in position space forms because of the collisionless dynamics. Note that ℓ_ν→ 0 as ν→ 0, so arbitrarily fine-scale structure in position space can be generated in the collisionless limit. Because of the constant-flux cascade, all of the δ C_2 injected at large scales reaches the dissipation range, no matter how small ℓ_ν is.§.§ Constant flux in sLet us now integrate (<ref>) over all k. The k flux vanishes at k →±∞, resulting in the following equation for the s flux: ∂/∂ s∫ dkΓ̂^s = ∫ dkŜ - 2 ν s^2 ∫ dkF̂.Unfortunately, unlike (<ref>), this equation is not closed and cannot be explicitly solved without knowing the spectrum itself. However, we can still learn a key lesson from it. In view of (<ref>) and (<ref>), the characteristic width over which ∫ dkŜ falls off in s is 1/v_th. But for small ν, collisions can only be relevant when s is large. Balancing the phase-mixing term with the collision term in (<ref>) tells us that collisions will start to matter whenk/s∼ν s^2.We know from Section <ref> that collisional dissipation will start to occur around kℓ_ν∼ 1, so taking k ∼ 1/ℓ_ν in (<ref>) gives us the collisional velocity scaleu_ν∼( ν^α+1/κ)^1/3 α. When s u_ν≪ 1, we expect that we can drop the collision term in (<ref>). Integrating the remaining terms in the equation from -s to s, where 1/v_th≪ s ≪ 1/u_ν, yields, using (<ref>), ∫ dkΓ̂^s(s) - ∫ dkΓ̂^s(-s) =∫^+s_-s ds'∫^+∞_-∞ dkŜ(k,s') ≈sgn(s) ε/L.Because the distribution function is real, the Fourier spectrum satisfiesF̂(k, -s) = F̂(-k, s).Together with the definition (<ref>) of Γ̂^s, this implies that ∫ dkΓ̂^s(-s) = -∫ dkΓ̂^s(s), and so (<ref>) becomesL ∫ dkΓ̂^s(s) = sgn(s)ε/2for 1/v_th≪ s ≪ 1/u_ν. There is, therefore, also an inertial range in s where there is a constant flux of δ C_2 from small to large |s|. Because the collision operator can only be relevant at large |s| when ν is small, the existence of a forced steady state does indeed require such a constant-flux inertial range in s.We note that this result holds even in the absence of nonlinearity. 
Linear phase mixing can lead to the development of arbitrarily fine scales in velocity space, albeit at the price of a diverging δ C_2 as ν→ 0^+ <cit.>. However, nonlinearity is required for there to be a cascade in position space: note from (<ref>) that ℓ_ν→∞ as κ→ 0. Indeed, the solution of (<ref>) for κ = 0 is simply ĝ = Ŝ(k)/(2 ν). The Vlasov equation is linear in this limit, so the spatial Fourier components of the distribution function are uncoupled from one another, and the k dependence of the Fourier spectrum is then simply set by the source.

§ PHASE-SPACE SPECTRUM

We now compute the spectrum in (<ref>). From Sections <ref> and <ref>, we know that our model exhibits a phase-space entropy cascade in which the rate of δ C_2 injection by the forcing at large scales is balanced by collisional dissipation at small scales, with the distribution function developing arbitrarily small scales in phase space in the limit ν→ 0^+. This results in an inertial range in k (s) where the flux of δ C_2 integrated over velocity (position) wavenumbers is constant. Therefore, in this inertial range, viz., for 1/L_E ≪ k ≪ 1/ℓ_ν and 1/v_th≪ s ≪ 1/u_ν, we can meaningfully consider (<ref>) in the absence of the forcing and collisional terms, viz., k∂F̂/∂ s = -κ s^2 (-Δ_k)^α/2F̂, and seek a solution for the spectrum that supports constant fluxes in k and s.

§.§ Self-similar inertial-range solution

Because of the reality condition (<ref>), we only need to solve (<ref>) for F̂(k,s) on half of the (k,s) plane. We choose to solve for the spectrum in the upper half-plane, -∞ < k < ∞ and s ≥ 0 [This solution can be directly compared to earlier results obtained in terms of the Hermite numbers m, as the Hermite transform is analogous to a Fourier transform when m ≫ 1, with m ∼ s^2/2 <cit.>.]. To deal with the fractional Laplacian, we Fourier transform this equation in k space. Using (<ref>), we find that F(r,s) satisfies ∂/∂ r∂/∂ sF = -i κ s^2 |r|^α F. Because F(r,s) is the Fourier transform of F̂(k,s), which is purely real, F(r,s) must satisfy the reality condition F(-r,s) = F^*(r,s). We therefore only have to solve (<ref>) for r > 0, for which |r| = r.

The inertial-range solution to (<ref>) that does not depend on details of the forcing or dissipation ranges is a similarity solution <cit.>: F = ε κ^-1/(α + 1) s^-βϕ(y), y = (κ s^3)^1/(α+1)r, where β can be constrained by the fact that the flux L ∫ dkΓ̂^s must be constant in the inertial range, as per (<ref>): L ∫ dkΓ̂^s = ε s^(-β + 3/(α + 1))∫_-∞^+∞d ξ ξ ∫_-∞^+∞ dy e^-iξ yϕ(y) = ε/2, which requires β = 3/(α + 1). The integration variable dual to y, ξ = k/(κs^3)^1/(α+1), has previously appeared in (<ref>) as the ratio (raised to the power 1/(α+1)) of the turbulent-diffusion and phase-mixing time scales.

Substituting (<ref>) into (<ref>) gives us an ordinary differential equation for ϕ: d^2 ϕ/dy^2 + i(α+1)/3y^α-1ϕ = 0. To solve this equation, consider the transformation ϕ = √(y)g(z), z = 2/√(3(α+1))e^-i π /4y^(α + 1)/2. Then, (<ref>) becomes a modified Bessel equation <cit.>: z^2 d^2 g/dz^2 + z d g/dz -[z^2 + 1/(α+1)^2]g = 0. Therefore, the solution to (<ref>) that vanishes at y →∞ is ϕ(y) = c√(y)K_1/(α + 1)(2/√(3(α+1))e^-i π /4y^(α+1)/2), where y > 0 and K is the modified Bessel function of the second kind. The reality condition ϕ(y) = ϕ^*(-y) constrains us to pick the phase of c so that the real part of ϕ is even in y and the imaginary part is odd. This is accomplished by setting c = C(e^-i π /4)^1/(α+1), where C > 0 is real and determined by the constraint (<ref>).
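As a consistency check on the similarity solution just obtained, one can evaluate the Bessel-function form of ϕ(y) numerically and confirm that it satisfies the ordinary differential equation above and decays at large y. A minimal mpmath sketch, with α = 1 chosen for illustration and C = 1 as an arbitrary normalization (the true C is fixed by the flux constraint):

```python
import mpmath as mp

alpha = 1.0     # illustrative value; any 0 < alpha <= 2 can be tried
C = 1.0         # arbitrary normalization (the true C is set by the flux constraint)

def phi(y):
    # phi(y) = c sqrt(y) K_{1/(alpha+1)}( 2/sqrt(3(alpha+1)) e^{-i pi/4} y^{(alpha+1)/2} )
    y = mp.mpf(y)
    c = C * mp.exp(-1j * mp.pi / 4) ** (1 / (alpha + 1))
    z = 2 / mp.sqrt(3 * (alpha + 1)) * mp.exp(-1j * mp.pi / 4) * y ** ((alpha + 1) / 2)
    return c * mp.sqrt(y) * mp.besselk(1 / (alpha + 1), z)

# residual of  phi'' + i (alpha+1)/3 y^(alpha-1) phi = 0,  and decay of |phi|
for y in (0.5, 1.0, 2.0, 4.0):
    resid = mp.diff(phi, y, 2) + 1j * (alpha + 1) / 3 * mp.mpf(y) ** (alpha - 1) * phi(y)
    print(y, abs(resid), abs(phi(y)))   # residual ~ 0 (to numerical-differentiation accuracy)
```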
For generic α, we are unable to evaluate the integrals in (<ref>) analytically, to compute C, although it is straightforward to see that C is finite. For α = 1, C is easily computed: this calculation can be found in Appendix <ref>.Using the solution (<ref>) for ϕ, we inverse-Fourier transform back to k space to obtain,F̂(k,s) =2εL^-1κ^-2/(α + 1) s^-6/(α + 1)Re∫_0^∞ dy e^-iξ yϕ(y),where ξ is given by (<ref>), and we have used the reality condition for ϕ. As we stated at the beginning of this calculation, this expression is valid only for s ≥ 0; the spectrum for negative s can be found by combining (<ref>) and (<ref>). For generic α, we are unable to simplify (<ref>) further. For α = 1, (<ref>) has a simple closed form, derived in Appendix <ref>. For α = 2, the spectrum can be written in terms of incomplete Gamma functions: see <cit.>. In our terms, (<ref>) for α = 2 reduces toϕ(y) = C π√(3)Ai( e^-i π/6 y),where Ai is the Airy function. This is clear by the fact that for α = 2, (<ref>) is an Airy equation. §.§ 2-D spectrumTo show what (<ref>) looks like, we present contour plots of the normalized spectrum L_0^2 v_th^6/(α + 1)F̂(k L_0 , s v_th) for α = 1 (for which an explicit expression can be found in Appendix <ref>) in Fig. <ref>. We use normalized units s v_th and k L_0, whereL_0 = ( v_th^3/κ)^1/(α+1).This length scale is a natural choice, because the similarity variable (<ref>) in these units isξ = k/(κs^3)^1/(α+1) = k L_0/(s v_th)^3/(α+1),and, therefore, the spectrum, up to its amplitude, is independent of κ.For ξ∼ 1, the spectrum is a nontrivial function of k and s. For ξ small or large, it has asymptotics given by (<ref>) (computed below), confirming the phenomenological theory presented in Section <ref>. As discussed in Section <ref>, the distinction between ξ small and large can be understood in terms of competition between the phase mixing and turbulent diffusion for control of the phase-space cascade. To compute the asymptotics (<ref>) from the full solution (<ref>), the ξ≪ 1 limit can be found by simply setting ξ = 0 in (<ref>). For the ξ≫ 1 limit, we use the following result from Fourier analysis. Suppose u(y) is a function that is smooth for y > 0 and has the scaling u(y) ∼ y^λ-1 as y → 0^+, where λ > 0. Then∫_0^∞ dy e^-iξ y u(y) = Γ(λ)/(i ξ)^λ + o( ξ^-λ)as ξ→∞ <cit.>. Note that (<ref>) has the seriesϕ(y) = A - B (e^-i π/2)^1/(α + 1) y + 𝒪(y^α + 1)as y → 0^+, where A and B are real and positive constants. Combining (<ref>) and (<ref>), one finds that the contribution to the integral in (<ref>) from the constant part of (<ref>) is purely imaginary and so vanishes. The next-order term from the linear part of (<ref>) has a real component; the integral is therefore 𝒪(ξ^-2) (since λ = 2), hence F̂(k,s) ∼ k^-2 as ξ→∞.Note that, the 1-D k and s spectra, which can be found by integrating out the similarity variable ξ in (<ref>), agree with (<ref>) and (<ref>).Before continuing, it is instructive to assess the region of validity of (<ref>) in the (k,s) plane. This spectrum is infrared-divergent and thus breaks down as (k,s) → (0,0); information about the functional form of the source (<ref>), which would regularize the spectrum at long wavelengths, is lost by construction in the similarity solution (however, (<ref>) does contain information about the flux from the source via the constraint (<ref>)). Of course, the inertial-range spectrum is no longer valid in the region where the source (<ref>) is concentrated, viz., when sv_th≲ 1 and k L_E≲ 1. 
For these reasons, we have not plotted the spectrum near the origin in the (k,s) plane in Fig. <ref>, and likewise for the fluxes plotted in the following sections [The fact that the inertial-range spectrum (<ref>) is scale-free and has no infrared cutoff means that v_th in (<ref>) and (<ref>) is formally an arbitrary velocity normalization for (<ref>). However, to emphasize that the true spectrum has an outer scale at s v_th≲ 1, we still normalize s by v_th in (<ref>), and we do not plot the spectrum or fluxes for s v_th < 1 in any of the plots in Section <ref>.].In addition to the solution (<ref>) lacking an outer scale, the approximation that the nonlinear term is a fractional Laplacian to lowest order also breaks down when k L_E≲ 1. This is clear in the derivation of the nonlinear term in Appendix <ref>, where the fractional Laplacian emerges as the lowest-order term when k L_E≫ 1. Yet, away from s = 0, (<ref>) is, in fact, continuous across k = 0. By dropping the finite-k L_E corrections in (<ref>), we have shrunk the boundary layer k L_E≲ 1 to a point [For the source term to be non-zero, it is important that we do not take the limit L_E→∞, otherwise ε→ 0. This is because ε∝ D_0^-1/2∝ L^-α/2_E, as can be seen in (<ref>). Physically, turbulent diffusion is a large-scale effect and so is dominated by the largest electric-field scales.]. The spectrum is then continuous across k = 0.We can also now address the concern of the lack of locality in k space of the source term (<ref>) and whether dropping this term in the inertial range was justified, as discussed at the end of Section <ref>. Consider the two asymptotic regions, ξ≪ 1 and ξ≫ 1 (assuming also sv_th≫ 1, s u_ν≪ 1, and k ℓ_ν≪ 1). In the former region, the nonlinear term is dominant over the phase-mixing term, as per (<ref>). Balancing the nonlinear term with (<ref>) yields an inhomogeneous contribution to F̂∝⟨f̂⟩^2. Since sv_th≫ 1, this term is strongly suppressed, e.g., exponentially so in the case of a Maxwellian initial condition [see (<ref>)], so this term is negligible compared to the ξ≪ 1 asymptotic in (<ref>). In the the latter region (ξ≫ 1), the phase-mixing term is dominant over the nonlinear term, as per (<ref>). Balancing the phase-mixing term with (<ref>) yields an inhomogeneous contribution to F̂∝D̂(k)/k, which is ∝ k^-(2+α) when kL_E ≫ 1. This contribution is subdominant to the homogeneous part of the spectrum, which scales like k^-2 when ξ≫ 1, viz., (<ref>). We cannot explicitly estimate the inhomogeneous contribution to the spectrum from the source in the ξ∼ 1 region, but the above analysis suggests it should be subdominant to (<ref>). Therefore, even though the source is multiscale in k, there is still an inertial range in the position space.Finally, we observe that the inertial-range solution (<ref>) extends to infinity, consistent with k_ν∼ 1/ℓ_ν→∞ ands_ν∼ 1/u_ν→∞ as ν→ 0^+, viz., (<ref>) and (<ref>). A finite ν will introduce finite collisional scales k_ν and s_ν, such that collisions cut off the spectrum when k ≳ k_ν and s ≳ s_ν. 
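The α = 2 reduction of ϕ to an Airy function quoted above can also be checked numerically against the general Bessel-function form, using the standard identity relating Ai and K_{1/3}. A minimal mpmath sketch (again with C = 1 as an arbitrary normalization):

```python
import mpmath as mp

C = 1.0
nu = mp.mpf(1) / 3                       # 1/(alpha+1) for alpha = 2

def phi_bessel(y):                       # general Bessel-function form at alpha = 2
    y = mp.mpf(y)
    c = C * mp.exp(-1j * mp.pi / 4) ** nu
    z = mp.mpf(2) / 3 * mp.exp(-1j * mp.pi / 4) * y ** 1.5
    return c * mp.sqrt(y) * mp.besselk(nu, z)

def phi_airy(y):                         # Airy form quoted in the text
    return C * mp.pi * mp.sqrt(3) * mp.airyai(mp.exp(-1j * mp.pi / 6) * mp.mpf(y))

for y in (0.3, 1.0, 3.0):
    print(y, abs(phi_bessel(y) - phi_airy(y)))   # differences at round-off level
```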
§.§ 2-D flux: phase-space circulationsTo gain insight about the pathways in phase space taken by fluctuations from injection to dissipation scales, it is informative to examine the vector flux Γ̂, which, in terms of the similarity solution (<ref>), has the components (<ref>),Γ̂^k(k,s) = -2εL^-1 s^-1Im∫^∞_0 dy e^-i ξ y y^α-1ϕ(y),and (<ref>),Γ̂^s(k,s)= 2εL^-1κ^-1/(α + 1) s^-3/(α + 1)ξRe∫^∞_0 dy e^-i ξ yϕ(y).To obtain (<ref>), we used (<ref>) and changed variables from r to y in the integral. Note that these expressions are valid only for s ≥ 0. For s < 0, combining (<ref>), (<ref>), and (<ref>), we have thatΓ̂(k, -s) = -Γ̂(-k,s). As was the case for the Fourier spectrum, the flux is a nontrivial function of k and s for ξ∼ 1, but can be simplified when ξ is small or large. The k flux has the asymptoticsΓ̂^k(k,s) ∼εL^-1×-s^-1,ξ≪ 1,sgn(k)κ^α/(α + 1) |k|^-α s^(2α -1)/(α + 1),ξ≫ 1, α≠ 2,κk^-3 s^2,ξ≫ 1, α = 2.The s flux has the asymptoticsΓ̂^s(k,s) ∼εL^-1 κ^-2/(α + 1)k s^-6/(α + 1),ξ≪ 1,k^-1,ξ≫ 1.Here, we have retained the signs of terms as well as factors of ε and κ in (<ref>) and (<ref>), but not order-unity constants. To evaluate the asympotics of the fluxes for s < 0, these expressions must be combined with (<ref>).We can derive these results in the same way as we did the asymptotics of the spectrum in Section <ref>. The asymptotics (<ref>) for Γ̂^s come directly from combining (<ref>) and (<ref>). For Γ̂^k, the ξ≪ 1 expansion in (<ref>) comes from evaluating (<ref>) at ξ = 0 (note that the integral is positive). For ξ≫ 1, we can combine (<ref>), (<ref>), and (<ref>). This gives, to lowest order, as ξ→∞,Γ̂^k≃ 2εL^-1 s^-1 AΓ(α) sin( πα/2) sgn(k) |ξ|^-α.The lowest-order k flux vanishes when α = 2, so (<ref>) only gives the ξ≫ 1 limit in (<ref>) for α≠ 2. We need to go to next order for α = 2, which yieldsΓ̂^k≃ 2εL^-1 s^-1 BΓ(3) sin( π/3) sgn(k) |ξ|^-3,giving the ξ≫ 1, α = 2 asymptotic in (<ref>).We first discuss the Batchelor case (α = 2), where(Γ̂^k, Γ̂^s ) = (-κ s^2 ∂F̂/∂ k, k F̂).To work out by what physical mechanism the phase-space cascade is enabled in different parts of the (k,s) plane, it is useful to compare the ratio of the k and s fluxes. Using the normalizations from Section <ref>, we have that the ratio of the dimensionless fluxes isR = L_0^4 Γ̂^k(k L_0,s v_th)/L_0^2 v_th^2Γ̂^s(k L_0,s v_th) = -1/ξIm∫^∞_0 dy e^-i ξ y y ϕ(y)/Re∫^∞_0 dy e^-i ξ yϕ(y) ,which is a function solely of ξ. It has the asymptoticsR ∼-ξ^-1,ξ≪ 1,ξ^-2,ξ≫ 1.Thus, when ξ≪ 1, the flux is diffusion-dominated (dominated by its k component), and when ξ≫ 1, the flux is phase-mixing-dominated (dominated by its s component). When ξ∼ 1, the two fluxes are comparable; this can be seen in the plot of R in Fig. <ref>. The regions of the dominance of the two fluxes are, therefore, separated in phase space by the critical-balance line (<ref>), the same as the phase-mixing and turbulent-diffusion time scales, viz., (<ref>).We plot the (nondimensionalized) vector flux (<ref>) in Fig. <ref>(a). Using the asymptotics (<ref>) and noting the signs of the fluxes in (<ref>) and (<ref>), we see that in the diffusion-dominated region (ξ≪ 1), Γ̂^k is negative when s > 0 and positive when s < 0, and in the phase-mixing-dominated region (ξ≫ 1), Γ̂^s is positive when k > 0 and negative when k < 0. The flux therefore gives rise to counterclockwise circulation of δ C_2 in (k,s) space. The sign changes in the components of the flux that enable this circulation occur at ξ = ξ_2 ≃ 0.747, where R has a zero (as can be seen in Fig. 
<ref>), and at ξ = 0, below (above) which R diverges positively (negatively). The first point corresponds to Γ̂^k changing sign in the top right and bottom left quadrants, while the second point corresponds to Γ̂^s changing sign from positive to negative between the top right and top left quadrants, as well as between the bottom left and bottom right quadrants. This latter effect occurs because, when sgn(ks) = - 1, perturbations are phase unmixed rather than phase mixed, being advected to low |s| rather than high |s| <cit.>. The phase-unmixing modes are a stochastic instantiation of the plasma-echo effect <cit.>.While the phase unmixing does undo the phase mixing, we will see in Section <ref> that the flux of the phase-mixing modes outweighs that of the phase-unmixing ones, thus enabling the constant-flux cascade. In Fig. <ref>(a), this manifests in the fact that the circulation swirls outward. Indeed, in the top right (bottom left) quadrant, below (above) the line ξ = ξ_2 where Γ̂^k = 0, there is a flux of δ C_2 to both high |k| and high |s| simultaneously, toward the dissipation wavenumbers k_ν∼ 1/ℓ_ν and s_ν∼ 1/u_ν.§.§ Non-local transportWe now examine the α≠ 2 cases. The important difference compared to the Batchelor regime is that the k flux is now non-local in k space. Note that Γ̂^k can be written as <cit.> [When α = 1, (<ref>) is proportional to a Hilbert transform <cit.> in k space. In plasma kinetics, non-local fluxes given by Hilbert transforms also show up in the context of Landau-fluid closures; viz., in the Hammett-Perkins closure <cit.>, the heat flux is proportional to the Hilbert transform of the temperature (in position space).]Γ̂^k = κ s^2 c_α/α∫^∞_0 dp F̂(k-p,s)-F̂(k+p,s)/p^α,where c_α is given by (<ref>). The derivation of this expression is given in Appendix <ref>. The interpretation of (<ref>) is that `particles' (parcels of δ C_2) cross the point (k,s) from points (k ± p,s), with cumulative probability ∝ p^-α. The particles undergo Lévy flights in k space, so the flux at a point (k,s) receives contributions not just from nearby particles taking small jumps but also from faraway ones taking large jumps.The net effect is an enhancement of Γ̂^k compared to Γ̂^s. For these cases, the ratio R of the nondimensionalized fluxes isR= L_0^4 Γ̂^k(k L_0,s v_th)/L_0^2 v_th^6/(α + 1)Γ̂^s(k L_0,s v_th)= -(s v_th)^(2-α)/(α + 1)1/ξIm∫^∞_0 dy e^-i ξ y y^α-1ϕ(y)/Re∫^∞_0 dy e^-i ξ yϕ(y) .Since Γ̂^k is positive when ξ≫ 1 and negative when ξ≪ 1, viz., (<ref>), there is always an order-unity ξ_α at which Γ̂^k = 0. Unlike the Batchelor case (<ref>), the ratio of fluxes (<ref>) is not a function solely of ξ. Therefore, (<ref>) implies that along curves of constant ξ∼ 1 ≠ξ_α, the flux is always diffusion-dominated as s v_th→∞. Furthermore, in the asymptotic region ξ≫ 1 (which for α = 2 was the phase-mixing-dominated region), we show in Appendix <ref> that, as α gets smaller, the region where the flux is asymptotically phase-mixing-dominated shrinks. In fact, for α < 1/2, there is no asymptotic region at all where the flux is phase-mixing-dominated (except along the curve ξ = ξ_α).As an example, we plot the (nondimensionalized) vector flux for α = 1 in Fig. <ref>(b) (explicit expressions for the components of the flux in this case can be found in Appendix <ref>). In this case, the flux is phase-mixing-dominated only in the region |s v_th| ≲ 1 (irrespective of k), which in Fig. 
<ref>(b) we indicate with gray lines.These results do not mean that there is no effective phase mixing for these cases. The ratio (<ref>) of nonlinear and linear time scales gives the relative local transport of δ C_2 in s versus k space. While the non-local transport of δ C_2 in k space dominates over its flux in s, locally, the phase-mixing time (<ref>) of F̂ is still shorter than the diffusion time (<ref>) when ξ≫ 1. This dominant local phase mixing is what sets up the lowest-order, constant-flux-in-s spectrum in the ξ≫ 1 region, viz., (<ref>) and (<ref>).The fluxes also obey critical balance. It is straightforward to show, using (<ref>) and (<ref>) with calculations analogous to (<ref>) and (<ref>), that the 1-D fluxes (<ref>) and (<ref>) are dominated by contributions from the critical-balance region (<ref>) (ξ∼ 1). Even though the 2-D s flux is subdominant to the k flux, the fact that L ∫ dkΓ̂^s is constant in the s inertial range implies that phase mixing still provides an effective route to dissipation scales in velocity space. §.§ Shell-averaged fluxTo understand the net effect of having both phase-mixing modes that propagate from low to high |s| and phase-unmixing modes that propagate from high to low |s|, it is useful to consider the flux shell-averaged in k,Γ̅≡(Γ̂^k(k,s)-Γ̂^k(-k,s), Γ̂^s(k,s) + Γ̂^s(-k,s) ).While the components of (<ref>) depend on α, their (nondimensionalized) ratio does not (up to a prefactor). Note that, using (<ref>) and integrating by parts, (<ref>) can be rewritten asΓ̂^k(k,s) = 2εL^-1 s^-1×3/α+1Re[ϕ'(0^+) + ξ^2 ∫^∞_0 dy e^-i ξ yϕ(y) ],where the derivative of ϕ is taken at y → 0^+. Using (<ref>) and (<ref>), we getR̅ = L_0^4 [Γ̂^k(k L_0,s v_th)-Γ̂^k(-k L_0,s v_th)]/L_0^2 v_th^6/(α + 1)[Γ̂^s(k L_0,s v_th) + Γ̂^s(-k L_0,s v_th)]= 3/α+1k L_0/s v_th.Remarkably, this expression is valid everywhere in the (k,s) plane, independent of ξ being small or large. The flux is radial at both large s v_th and large k L_0. We plot Γ̅ for α = 1 in Fig. <ref>, which clearly exhibits this feature.Note that the components of the shell-averaged flux are positive-definite. We are only able to show directly that this property holds for α = 1; this calculation can be found in Appendix <ref>. An argument as to why the fluxes are positive-definite for the general case is as follows. Since (<ref>) is positive, the components of (<ref>) are either both positive or both negative for all (k,s) in the inertial range (no sign reversals are possible, otherwise the flux would not be divergence-free in the inertial range: see (<ref>)). If the components were negative-definite, there would be a sink at the origin. However, there is a source at the origin, so the shell-averaged fluxes must therefore be positive-definite.This positive definiteness is important. The circulatory nature of the fluxes in Fig. <ref> is due to phase-mixing modes (with sgn(ks) = 1) and phase-unmixing modes (with sgn(ks) = -1) propagating in opposite directions in s. Since the s component of (<ref>) is equal to k[F̂(k,s) - F̂(-k,s)], the shell-averaged flux being positive-definite means that the spectral amplitudes of the phase-mixing modes are greater than those of the phase-unmixing modes. 
Therefore, by adding together the fluxes of the two modes, we are left with a net flux that points outward to both high k and high s toward the dissipation scales, in agreement with our analysis in Sections <ref> and <ref>.

§ CONCLUSION

§.§ Summary

In this paper, we have presented a solvable model of kinetic plasma turbulence, in which the electric field is decoupled from the particle distribution function and taken to be an externally imposed Gaussian field, white-noise in time and power-law in k space. The effect of this stochastic electric field on the mean distribution function is diffusion in velocity space, often referred to as stochastic heating <cit.>. The resulting energization of particles is a collisionless process. Indeed, the heating rate is set by the turbulent collisionality and is independent of ν [see (<ref>)]. However, the irreversibility of stochastic heating hinges on the presence of collisions. As ⟨ f ⟩ heats, δ f fluctuations are excited, transferring the (negative) `entropy' C_2,0 = (1/L)∬ dx dv⟨ f ⟩^2/2 into δ C_2 = (1/L)∬ dx dv⟨δ f^2 ⟩/2, which then cascades to small scales in both position and velocity space simultaneously. This cascade is then cut off by collisions at fine phase-space scales, thereby rendering the heating irreversible. The irreversibility of stochastic heating is therefore enabled by the entropy cascade.

We have analyzed this cascade in the Fourier-transformed (k,s) space, where an `inertial range' forms to bridge the injection and dissipation scales of δ C_2. Integrated over k or s, the flux of δ C_2 is constant in this inertial range. Importantly, there is no collisional dissipation at scales much larger than the dissipation scale (ℓ_ν,u_ν) [see (<ref>) and (<ref>)], which tends to (0,0) as the collision frequency does.

In the 2-D (k,s) space, the Fourier spectrum of δ C_2 has a self-similar profile, with power-law asymptotics at high k and s, respectively. We find that these asymptotic scalings can be deduced by a phenomenological theory whose governing principle is that the cascade satisfies a critical balance in phase space between the time scales of linear phase mixing and turbulent diffusion. Because there is nothing in our phenomenological theory that is unique to 1D-1V, we also expect these ideas to apply in 2D-2V and 3D-3V. While the one-dimensional |k| and |s| spectra (<ref>) and (<ref>) should be the same, the two-dimensional |k|-|s| spectrum (integrated over angles) will not be the same as (<ref>) because of the wavenumber Jacobian being different from unity in higher dimensions.

§.§ Fast cascade and the effectiveness of phase mixing

In a linear system, phase mixing acts as a route to collisional dissipation at every spatial scale <cit.>, but in the model presented here, collisional dissipation is only non-negligible below the dissipation scale ℓ_ν. Following the commonly held intuition that the effect of phase mixing (and Landau damping) on turbulent systems is that it steepens the spectrum (of, e.g., electromagnetic energy) at every scale by an amount set by the Landau-damping rate <cit.>, one may be led to conclude from our results that phase mixing is less effective in a nonlinear system than in a linear one. However, in fact, phase mixing is even more effective in a nonlinear system.
This is because the presence of nonlinearity produces fine structure in position space and thus enhances the rate at which the distribution function develops fine phase-space gradients, reducing the time that it takes collisions to activate, τ_ν.To see this, note that in a steadily driven system, as considered in this paper, the total δ C_2 in steady state divided by the injection rate ε gives a reasonable estimate for τ_ν. For a linear system, restricting ourselves to a single k and noting that the spectrum is flat in s up to a collisional cutoff s_ν∝ν^-1/3 given by the balance (<ref>) <cit.>, we find <cit.>τ_ν∼δ C_2/ε∝∫^s_ν ds ∝ν^-1/3.For smaller and smaller ν, it takes longer and longer for collisions to dissipate the velocity-space structure of the distribution function, and in the collisionless limit, the amount of δ C_2 stored in phase space diverges <cit.>. This is consistent with a constant cascade time, set by the phase-mixing time, viz., τ_c ∼ (k v_th)^-1 (with k fixed).In the presence of nonlinearity, when α = 2, the 1-D spectra (<ref>) and (<ref>) scale like k^-1 and s^-1, respectively <cit.>, the former scaling in agreement with the classical Batchelor <cit.> spectrum of a passive scalar advected by a single-scale flow. Integrating the 1-D spectrum up to the collisional cutoff (<ref>) givesτ_ν∼δ C_2/ε∝∫^1/ℓ_ν dk k^-1∝ | logν |.Although formally also divergent as ν→ 0^+, (<ref>) is asymptotically shorter than (<ref>). In this case, the cascade time is also constant, viz., τ_c∼κ^-1/3, as can be seen in (<ref>), even though many k's are involved.When α < 2, the phase-space cascade is even more efficient. From (<ref>), we have that τ_c goes to zero as k and s go to infinity, so fluctuations `turn over' faster to finer phase-space scales the deeper they are in the inertial range (compared to the constant cascade times in the linear and Batchelor regimes). As a result, the 1-D spectra (<ref>) and (<ref>) are steeper than k^-1 and s^-1, so the total steady-state δ C_2 and τ_ν are independent of ν.Thus, the presence of nonlinearity reduces the collision time from a (negative) fractional power of ν (<ref>) in the linear regime to a time scale that is only logarithmic with ν (<ref>) when α = 2, or one that is independent of ν when α < 2. To interpret this shortening of τ_ν in the nonlinear regime, note that linear phase mixing still processes δ C_2 from injection to dissipation scales in the inertial range, but so does, simultaneously, nonlinear mode coupling, in a critically-balanced fashion. The net result is fast dissipation, but only at wavenumbers k ≳ k_ν∼ 1/ℓ_ν and s ≳ s_ν∼ 1/u_ν, which, by construction, satisfy the critical-balance condition (<ref>). The nonlinear cascade ensures that all the injected δ C_2 flux at large phase-space scales is rapidly dissipated at small scales via collisions, no matter how small is ν. This reduction of τ_ν is also an a posteriori justification of the assumption (<ref>) of there being a separation of time scales between the time that it takes δ C_2 to reach quasi-steady state and the time that it takes for the injection rate ε to decay due to the stochastic heating of ⟨ f ⟩ [Formally, this assumption is not valid in the Batchelor regime in the collisionless limit, since the total δ C_2 stored in phase space and collision time τ_ν diverge, viz., (<ref>). However, this divergence is only logarithmic, so one may expect quasi-steady states in the Batchelor regime to be possible when ν is finite.]. 
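The comparison of collisional-activation times in this subsection amounts to a few lines of arithmetic, collected in the sketch below (order-unity prefactors and dimensional factors are suppressed; the values of α and ν are arbitrary illustrative choices):

```python
import numpy as np

def inertial_range_summary(alpha):
    """1-D inertial-range spectral exponents and the scaling of tau_nu with nu."""
    k_exp = -(5.0 - alpha) / 3.0          # L * int ds F ~ k^{-(5-alpha)/3}
    s_exp = -3.0 / (alpha + 1.0)          # L * int dk F ~ s^{-3/(alpha+1)}
    if alpha < 2.0:
        tau_nu = "independent of nu"      # spectra steeper than 1/k and 1/s
    else:
        tau_nu = "~ |log nu|"             # Batchelor regime: k^{-1} and s^{-1} spectra
    return k_exp, s_exp, tau_nu

for alpha in (0.5, 1.0, 1.5, 2.0):
    print(alpha, inertial_range_summary(alpha))

# linear benchmark (no nonlinearity): tau_nu ~ nu^{-1/3}
for nu in (1e-2, 1e-4, 1e-6):
    print(nu, nu ** (-1.0 / 3.0), abs(np.log(nu)))
```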
§.§ Implications and outlook As discussed in Section <ref>, these results provide a conceptual understanding of the role of phase mixing in turbulent plasmas. Recent theoretical <cit.> and numerical <cit.> studies have suggested a statistical `fluidization' of turbulent, collisionless plasmas by stochastic plasma echoes suppressing phase mixing in the (spatial) inertial range. This might have seemed to be at odds with Landau damping <cit.> clearly being identified in turbulent settings numerically <cit.> in several works, as well in Magnetospheric Multiscale (MMS) Mission observations of the turbulent plasma in the Earth's magnetosheath <cit.>. While our model is quite simplified and does not include self-consistent electric fields and hence Landau damping, insofar as Landau damping and phase mixing are intimately related processes, our results indicate that these two seemingly contradictory sets of results can in fact be compatible.This work also has implications for the relaxation of mean distribution functions in nearly collisionless plasmas. The existence of the entropy cascade implies that collisions will always dissipate fine-scale structure in the distribution function, even when ν is vanishingly small, but it does not necessarily imply that the rate by which the distribution function relaxes toward a Maxwellian is enhanced. This is clear in our model from the fact that, whereas δ f develops sharp phase-space gradients, ⟨ f ⟩ does not, so Coulomb collisions are never activated for it. Mean distributions in space plasmas can be highly non-Maxwellian, e.g., in the solar wind <cit.>. Developing a theoretical formalism to predict the form of such non-equilibrium distribution functions is an outstanding problem. A direct consequence of entropy cascades is that theories of relaxation that assume phase-volume conservation <cit.> may not apply to nearly collisionless plasmas that are strongly turbulent. An alternative approach is to examine how the turbulent phase-space correlations of δ f drive the evolution of the mean distribution function <cit.>.There is much opportunity to understand phase-space entropy cascades in nearly collisionless plasmas better, with theory, numerical simulations, laboratory experiments, and spacecraft data. With regards to the latter two, the works <cit.> suggest that measuring entropy cascades in real plasmas is a realizable endeavor just beginning to be possible.M.L.N is grateful to I. Abel, T. Adkins, M. Allen, W. Clarke, S. Cowley, P. Dellar, J.-B. Fouvry, M. Kunz, R. Meyrand, F. Rincon, and J. Squire for helpful discussions related to this work. W.S would like to thank A. Bhattacharjee and H. Weitzner for useful discussions. M.L.N was supported by a Clarendon Scholarship. R.J.E was supported by a UK EPSRC studentship. W.S. was supported by a grant from the Simons Foundation/SFARI (560651, AB). The work of A.A.S. was supported in part by UK EPSRC (grant EP/R034757/1), STFC (grant ST/W00090311), and the Simons Foundation via a Simons Investigator award. W.D.D. and M.L.N were supported by the US Department of Energy through grant DEFG0293ER54197 and Scientific Discovery Through Advanced Computing (SciDAC) grant UTA18000275.§ INVARIANTS ALTERNATIVE TO C2In this appendix, we discuss invariants of the Vlasov equation alternative to C_2. It is straightforward to show that (<ref>) conserves an infinite number of invariants, so-called Casimir invariants <cit.>. 
Indeed, focusing on invariants defined in the averaged sense, for any smooth g(f), the functionalG[f] = 1/L∫ dx dv⟨ g(f) ⟩satisfies, using (<ref>) for the collsion operator,d G/d t = 1/L∫ dx dv⟨ g'(f) C[f] ⟩= - ν/L∫ dx dv⟨ g”(f) (∂⟨ f ⟩/∂ v + ∂δ f/∂ v) ∂δ f/∂ v⟩≃ - ν/L∫ dx dv⟨ g”(f) | ∂δ f/∂ v|^2⟩,where the term with two derivatives on δ f dominates when ν→ 0^+. Note that while in this paper ⟨ ... ⟩ denotes ensemble averaging with respect to the stochastic electric field, the arguments in this section hold for any averaging procedure for which one can decompose f = ⟨ f ⟩ + δ f, where ⟨δ f ⟩ = 0.When ν = 0, every G[f] is formally conserved. Furthermore, if g(f) is convex, i.e., g”(f) ≥ 0 everywhere, then the corresponding Casimirs are negative-definitely dissipated by collisions. We label Casimirs with positive-definite time evolution (which are minus the convex functionals of f) as `generalized entropies' (cf. entropy functions in hyperbolic partial differential equations <cit.>). This set includes -C_2, as well as the traditional entropy S = - ∬ dx dv f log f.We define the relative entropy asR[f] = G[f] - G[⟨ f ⟩],which has the budgetd R[f]/dt + d G[⟨ f ⟩]/dt = - ν/L∫ dx dv⟨ g”(f) | ∂δ f/∂ v|^2⟩.Using (<ref>), G[⟨ f ⟩] satisfiesd G[⟨ f ⟩]/dt = -D_0 ∫ dv g”(⟨ f ⟩) | ∂⟨ f ⟩/∂ v|^2,which is negative-definite if g(f) is convex. Therefore, we can interpret (<ref>) analogously to (<ref>). In the absence of collisions, as G[⟨ f ⟩] decreases via stochastic heating, R[f] increases to maintain the G[f] balance. Once δ f has developed sharp enough gradients, collisions dissipate the total G[f].We have shown that C_2 is anomalously dissipated as ν→ 0^+ and is cascaded, i.e., exhibits an inertial range unaffected directly by collisions or forcing. In principle, invariants other than C_2 can also have these properties. A system mathematically similar to (<ref>) in which this happens is the advection-diffusion equation for a scalar advected by a turbulent flow: <cit.> showed that the family of invariants ∫ dx θ^2n, where θ is a scalar field and n is a positive integer, satisfy constant-flux cascades; in contrast, for a passive scalar advected by a smooth, chaotic flow (Batchelor regime), only the quadratic invariant (n = 1) is cascaded. This is because for higher-order invariants,logarithmic correlations of the passive scalar give rise to injection of those invariants by the source at all scales, preventing the formation of an inertial range.It is likely that a similar situation happens in the Vlasov-Kraichan model between the cases α < 2 and α = 2, but such a calculation is beyond the scope of this work. If it were true, then for α < 2, the cascade of C_2 does not necessarily hold deeper physical meaning than the cascade of any other convex functional of f.However, we still believe that C_2 is a particularly useful quantity in kinetic plasma turbulence. Because it is quadratic in f, C_2 is the only invariant (up to weight functions in the integrand of (<ref>)) that satisfies G[⟨ f ⟩ + δ f] = G[⟨ f ⟩] + G[δ f],and so the relative entropy (<ref>) is a function solely of δ f. This property is useful for conceptualizing the budget (<ref>) as a transfer of entropy between ⟨ f ⟩ and δ f, as any other Casimir invariant involving higher powers of f will necessarily involve cross terms containing both ⟨ f ⟩ and δ f. Furthermore, C_2 is the only invariant that lends itself to a simple Fourier analysis. 
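As a quick numerical illustration of the additivity property singled out above (a sketch of ours, not part of the original derivation; grid sizes, the form of ⟨f⟩, and the fluctuation amplitude are arbitrary toy choices), the snippet below builds an ensemble of distribution functions f = ⟨f⟩ + δf with zero-mean fluctuations and compares G[f] with G[⟨f⟩] + G[δf] for the quadratic Casimir g(f) = f² and for a quartic one, g(f) = f⁴.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ens, n_x, n_v = 4000, 16, 16                            # ensemble members and (x, v) grid

mean_f = np.exp(-np.linspace(-2, 2, n_v) ** 2)            # a smooth <f>(v)
mean_f = np.broadcast_to(mean_f, (n_x, n_v))
delta_f = rng.normal(0.0, 0.3, (n_ens, n_x, n_v))         # zero-mean fluctuations
f = mean_f[None, :, :] + delta_f

def G(field, g):
    """Ensemble- and phase-space-averaged Casimir functional (up to a constant factor)."""
    return g(field).mean()

for name, g in [("quadratic g(f)=f^2", lambda x: x ** 2),
                ("quartic   g(f)=f^4", lambda x: x ** 4)]:
    total = G(f, g)
    split = G(mean_f, g) + G(delta_f, g)
    print(f"{name}:  G[f] = {total:.4f},  G[<f>] + G[df] = {split:.4f}")
```

For the quadratic functional the two numbers agree up to the finite-ensemble statistical error of the vanishing cross term 2⟨⟨f⟩ δf⟩, whereas the quartic functional retains cross terms mixing ⟨f⟩ and δf, which is precisely the property that distinguishes C_2.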
For these reasons, in this paper, we have chosen to analyze phase-space turbulence using C_2 exclusively.§ DERIVATION OF THE FOURIER-SPECTRUM EQUATION In this appendix, we give a detailed derivation of (<ref>).Multiplying (<ref>) by δf̂^*(k,s), adding to the resulting equation its complex conjugate, and then ensemble averaging gives∂F̂/∂ t + k∂F̂/∂ s + sIm∑_p⟨Ê(p) δf̂(k-p) δf̂^* (k) ⟩= Ŝ - 2 ν s^2 F̂.Note that the second term in the Fourier sum in (<ref>) vanishes under multiplication by δf̂^*(k,s) and ensemble averaging. The source term Ŝ, defined in (<ref>), comes from the second term on the right-hand side of (<ref>). It can be found via application of the Furutsu-Novikov theorem (<ref>) and using the fact that (<ref>) implies⟨Ê(k,t) Ê(k',t') ⟩ = 2 D̂(k) δ_k,-k'δ(t-t'),where δ_k,-k' is the Kronecker delta. §.§ Derivation of (31) The δ C_2 budget (<ref>) in terms of F̂ can be found by taking the time derivative of (<ref>) and using (<ref>). Assuming the spectrum goes to zero at s →±∞, the free-streaming term vanishes by integration over s. For the nonlinear term, note that the summand inIm∑_k,p ⟨Ê(p) δf̂(k-p,s) δf̂^* (k,s) ⟩,after taking p → -p and k → k-p, and applying reality conditions on the electric field, is equal to its own complex conjugate. Therefore, its imaginary part vanishes. What is left is injection by the source and dissipation by collisions, viz., (<ref>). §.§ Derivation of (32) We now close the triple correlator in (<ref>). Using (<ref>), we have⟨Ê(p) δf̂(k-p)δf̂^* (k) ⟩ = ∫^t dt' ∑_p'⟨Ê(p,t) Ê(p',t') ⟩×⟨δ[ δf̂(k-p,t) δf̂^* (k,t) ] /δÊ(p',t')⟩.As in Section <ref>, the functional derivative can be computed by formally integrating the relevant evolution equation. Using (<ref>), we can writeδf̂(k-p) δf̂^*(k) =∫^t dt” {is ∑_p”[Ê(p”) δf̂(k-p-p”) δf̂^*(k)-Ê(-p”) δf̂^*(k-p”) δf̂(k-p)] + (...) },where (...) represents terms that will vanish after we take the functional derivative. Therefore, we get⟨δ[ δf̂(k-p,t) δf̂^* (k,t) ] /δÊ(p',t')⟩= is [⟨δf̂(k-p-p',t') δf̂^*(k,t') ⟩- ⟨δf̂^*(k+p',t') δf̂(k-p,t') ⟩]H(t-t'),where H is the Heaviside step function. Combining this expression with (<ref>) and (<ref>) yields⟨Ê(p) δf̂(k-p) δf̂^* (k) ⟩ = i sD̂(p) [F̂(k)-F̂(k-p)].Using this expression in (<ref>), we get∂F̂/∂ t + k∂F̂/∂ s + s^2∑_pD̂(p) [F̂(k)-F̂(k-p)]= Ŝ - 2 ν s^2 F̂.§.§ The case of alpha = 2: Batchelor limit Let us simplify the nonlinear term in (<ref>) further. We start with the case of α = 2. In the limit k L_E≫ 1,we suppose (and check a posteriori) that (<ref>) is sufficiently steep in wavenumbers that we can consider k ≫ p and Taylor-expand the summand of the wavenumber sum in (<ref>):F̂(k)-F̂(k-p) ≃ -p ∂F̂/∂ k + 1/2p^2 ∂^2 F̂/∂ k^2.This `Batchelor approximation' was first used for the problem of passive-scalar mixing in fluids <cit.> and amounts to approximating the electric field as effectively single-scale. Substituting (<ref>) back into the sum in (<ref>), the first term vanishes because it is odd in p, and we are left with∑_pD̂(p) [F̂(k)-F̂(k-p) ] ≃ D_2 ∂^2 F̂/∂ k^2,whereD_2 = 1/2∑_p p^2 D̂(p) =D/2∑_p p^2 e^-(η p)^2/(p^2+L_E^-2)^3/2.In the limit η→ 0^+, (<ref>) is logarithmically divergent, being ∝ log( L_E/ η); without a small-scale cutoff, the approximation (<ref>) is invalid. This is because the k^-3 spectrum in (<ref>) is only marginally in the Batchelor regime <cit.>. The Batchelor limit generically applies when the electric field is spatially smooth, corresponding to a rapidly decaying spectrum D̂(k). 
We choose the particular form of D̂(k) in (<ref>) in order to match onto the Batchelor limit and fractional cases with one functional form of the correlation function, but we could just as well have picked a steeper (<ref>) for α = 2 that would not have required a small-scale cutoff. Therefore, without loss of generality, we keep (<ref>) without modification. §.§ The alpha < 2 cases: representation in terms of the fractional LaplacianFor α < 2, it is convenient to manipulate the nonlinear term in (<ref>) in position space and then Fourier transform back to k space. We start by noting that∑_k e^ikr∑_p D̂(p)[F̂(k)-F̂(k-p)] =[D(0)-D(r)]F(r).We now take the limits η→ 0^+ and k L_E≫ 1. When η = 0, note that (<ref>) is the kernel of the Bessel potential <cit.>, and can therefore be written asD(r) = L D/2 π2^1-α/2√(π)L^α_E/Γ(α+1/2)(r/L_E)^α/2K_α/2(r/L_E),where K is the modified Bessel function of the second kind, and Γ is the Gamma function. For α < 2 and r/ L_E≪ 1, (<ref>) has a series expansionD(r) = D_0 - D_α r^α + 𝒪(r^2/L^2-α_E),whereD_0 = L D/2 πΓ(α/2)√(π)/Γ(α+1/2) L^α_E, D_α = L D/2 π√(π) |Γ(-α/2)|/ 2^α Γ(α+1/2) . We now Fourier transform back to k space. To lowest order in (r / L_E)^2-α≪ 1 (equivalently, (k L_E)^2-α≫ 1), the r^α term is dominant over the r^2 term, which gives1/L∫ dr e^-ikr[D(0)-D(r)]F(r) ≃1/L∫ dr e^-ikr D_α |r|^α F(r) = D_α (-Δ_k)^α/2F̂(k).Here (-Δ_k)^α/2 is a fractional Laplacian <cit.> of order α/2, in k space, viz.,(-Δ_k)^α/2F̂(k) = c_α p.v.∫_-∞^+∞dpF̂(k)-F̂(k-p)/|p|^α+1,wherec_α = 2^αΓ(α+1/2)/√(π)|Γ(-α/2)|,and p.v. means that the integral is defined in the principal-value sense.Thus, using (<ref>) and (<ref>), we have that (<ref>) becomes (<ref>), whereκ = D_α, α < 2, D_2, α = 2,with D_α given by (<ref>) and D_2 given by (<ref>). §.§ Derivation of (104)Here, we derive the form (<ref>) of the non-local k flux in the α < 2 cases, as analyzed in Section <ref>.We first rewrite (<ref>) as an integral over positive p:(-Δ_k)^α/2 F̂(k) = c_α p.v.∫_-∞^+∞dpF̂(k)-F̂(k-p)/|p|^α+1= c_α∫_0^∞dp2 F̂(k) - F̂(k-p) - F̂(k+p) /p^α + 1.In (<ref>), we can write p^-(α + 1) = -(1/α) ∂_p p^-α and integrate by parts. The boundary terms vanish (assuming F̂ vanishes at infinity), and we are left with(-Δ_k)^α/2F̂(k)= -c_α/α∫_0^∞dp∂F̂/∂ p(k-p) + ∂F̂/∂ p(k+p)/p^α= ∂/∂ kc_α/α∫_0^∞dpF̂(k-p) - F̂(k+p)/p^α.Combining (<ref>) and the definition ∂_k Γ̂^k = κ s^2 (-Δ_k)^α/2F̂ yields (<ref>). § DETAILED CALCULATIONS FOR SECTION IV AIn this appendix, we derive the general α < 2 expressions for ĝ(k) (<ref>) and the k flux integrated over s (<ref>). §.§ Derivation of (63) and (70) We start with ĝ(k), as given in (<ref>). As discussed in Section <ref>, it is useful to convert this integral into one where the exponential in the integrand is decaying rather than oscillatory. To do this, we perform an auxiliary expansion, taking |r|^α→ (r^2+ϵ^2)^α/2 and then ϵ→ 0 in the integral (<ref>):ĝ(k) = ε/2 πκ Llim_ϵ→ 0 ℐ_𝒞_1,whereℐ_𝒞_1 = ∫_𝒞_1 dre^-ikr/(r^2+ϵ^2)^α/2 + ℓ^α_ν,and 𝒞_1 is a contour running along the real line. We choose the branch cuts so that(r^2+ϵ^2)^α/2= (r+iϵ)^α/2(r-iϵ)^α/2 = r^α/2_+e^i αθ_+/2 r^α/2_-e^i αθ_-/2,where r_± = |r ± i ϵ|, θ_+∈ [-π/2,3 π/2 ], andθ_-∈ [-3 π/2,π/2]. Note the branch points are at r_br, ± = ∓ i ϵ. We first treat the case k < 0. We close 𝒞_1 by making a semi-circle in the upper half of the complex plane and going around the branch cut with the branch point r_br, -, as depicted in Fig. <ref>. 
By Cauchy's residue theorem, since the integrand in (<ref>) has no poles, ℐ_𝒞_1 + ℐ_𝒞_2 + ℐ_𝒞_3 + ℐ_𝒞_4 + ℐ_𝒞_5 + ℐ_𝒞_6 = 0. When the radius R of the contours 𝒞_2 and 𝒞_6 is large, the integrands along these contours are ∝ e^-|k| R sinθR^1-α (here θ is the angle of the contour with respect to the positive real axis). Thus, ℐ_𝒞_2 and ℐ_𝒞_6 tend to zero as R →∞. Because the integrand is finite at r = r_br, -, ℐ_𝒞_4 tends to zero as the radius of 𝒞_4 shrinks to zero. Therefore, ℐ_𝒞_1 = - ℐ_𝒞_3 - ℐ_𝒞_5. Noting that along 𝒞_3, θ_± = π/2, while along 𝒞_5, θ_+ = π/2 and θ_- = -3 π/2, we have that-(ℐ_𝒞_3 + ℐ_𝒞_5) = -i∫_∞^ϵ dze^-|k|z/(z^2-ϵ^2)^α/2e^i πα /2 + ℓ^α_ν-i∫^∞_ϵ dze^-|k|z/(z^2-ϵ^2)^α/2e^-iπα /2 + ℓ^α_ν= 2 ∫_ϵ^∞ dz e^-|k|z×sin(πα /2)(z^2-ϵ^2)^α/2/(z^2-ϵ^2)^α + 2(z^2-ϵ^2)^α/2 ℓ^α_νcos(πα /2) + ℓ^2α_ν,where we have changed variables to z = -i r and used the fact that ℐ_𝒞_5 = ℐ_𝒞_3^*. Now we can safely take the limit ϵ→ 0. Using Cauchy's residue theorem and also redefining z →ℓ_ν z, we haveĝ(k) = ε/2 πκ Llim_ϵ→ 0 ℐ_𝒞_1 = ε/2 πκ Llim_ϵ→ 0[ -(ℐ_𝒞_3 + ℐ_𝒞_5) ] = εℓ_ν/2 πν L∫^∞_0 dzsin(πα /2) z^α/z^2α + 2z^αcos(πα /2)+1 e^-|k| ℓ_νz,which is (<ref>). For k > 0, 𝒞_1 can be closed by making a contour in the lower half plane, yielding the same expression (<ref>). Integrating this expression in k and multiplying by 2 ν L immediately yields (<ref>). §.§ Derivation of (67)For the k flux, combining (<ref>) and (<ref>), we haveL ∫ dsΓ̂^k = i ε/2 π∫ dr |r|^α/re^-ikr/|r|^α + ℓ^α_ν.This integral can be equated to one where the exponential term is decaying rather than oscillatory by again substituting |r|^α→ (r^2+ϵ^2)^α/2 and then taking ϵ→ 0. The method from Appendix <ref> can be applied again, resulting in (<ref>). The main difference is that this time, the auxillary expansion in ϵ introduces a pole at r = 0 in the integrand of (<ref>), making it necessary to deform the contour along the real line. However, this residue contribution vanishes when ϵ→ 0, so whether the contour is deformed below or above the pole does not change the final answer.Given the expression (<ref>), we can take its asymptotics for k ℓ_ν≪ 1, as analyzed in Section <ref>. In this limit, the inertial-range flux is approximately equal to the rate of δ C_2 injection. Changing variables to u = z^α, we haveL ∫ dsΓ̂^k≃sgn(k)ε sin(πα /2) /πα∫^∞_0 du/u^2 + 2ucos(πα /2)+1 = sgn(k)ε sin(πα /2) /πα1/sin(πα /2)πα/2 = sgn(k)ε/2,where the u integral was done using <cit.>, formula 3.252(1). Note this result also implies that (<ref>) approximately satisfies 𝒟̂(k) ≃ε when k ℓ_ν≫ 1. §.§ Expressions for alpha = 1 case Finally, we note that when α = 1, ĝ(k), 𝒟̂(k), and L ∫ dsΓ̂^k can all be expressed in terms of known special functions. For the sake of completeness, we give these expressions here. For ĝ(k), using <cit.>, formula 5.2.13, we haveĝ(k) = εℓ_ν/2 πν L∫^∞_0 dzz e^-|k| ℓ_νz/z^2+1= -εℓ_ν/2 πν L[cos(|k| ℓ_ν) Ci(|k| ℓ_ν) + sin(|k| ℓ_ν) si(|k| ℓ_ν) ],where Ci(z) and si(z) are cosine and sine integral functions <cit.>, respectively.For 𝒟̂(k), using <cit.>, formula 5.2.12, we have𝒟̂(k) = 2 ε/π∫^∞_0dz1-e^-k ℓ_νz/z^2 + 1= ε{1 - 2/π[sin(k ℓ_ν) Ci(k ℓ_ν) - cos(k ℓ_ν) si(k ℓ_ν)] }.When kℓ_ν≪ 1, (<ref>) has the series𝒟̂(k) = ε{2/π[1 - γ - log (kℓ_ν)]kℓ_ν + 𝒪((kℓ_ν)^2) },which vanishes as kℓ_ν→ 0. Here, γ is the Euler–Mascheroni constant. Note that the finite-kℓ_ν corrections are linear and logarithmic, in contrast to the α = 2 case (<ref>), where the first-order correction (after Taylor expanding the exponential) is just linear in kℓ_ν. 
When kℓ_ν≫ 1, (<ref>) has the series𝒟̂(k) = ε[1-2/π1/kℓ_ν + 𝒪((kℓ_ν)^-3) ]. Likewise, the integrated k flux (<ref>) isL ∫ dsΓ̂^k = sgn(k)ε/π∫^∞_0 dze^-|k| ℓ_ν z/z^2 + 1 = sgn(k)ε/π[sin(k ℓ_ν) Ci(k ℓ_ν) - cos(k ℓ_ν) si(k ℓ_ν)].When k ℓ_ν≪ 1, this expression has the seriesL ∫ dsΓ̂^k = sgn(k)ε/2{1 - 2/π[1 - γ - log (kℓ_ν)]kℓ_ν + 𝒪((kℓ_ν)^2) },in agreement with (<ref>) as kℓ_ν→ 0.§ CLOSED-FORM EXPRESSIONS OF INERTIAL-RANGE SPECTRUM AND FLUXES FOR ALPHA = 1In Sections <ref> and <ref>, we plotted the inertial-range spectrum and its corresponding vector flux, respectively, for α = 1. For this value of α, the spectrum and fluxes have simple closed forms, which we derive in this appendix.When α = 1, (<ref>) reduces toϕ(y) = C̃e^-(1 - i)y/√(3),where we have absorbed all order-unity constants into C̃, where C̃ = 3^1/4 2^-3/4√(π) C. In (<ref>), therefore, we have2 Re∫_0^∞ dy e^-iξ yϕ(y)= 2 C̃Re∫_0^∞ dy e^-y/√(3)e^-iy(ξ-1/√(3))= 2 √(3) C̃ 1/3ξ^2-2√(3)ξ + 2.The constant C̃ can be found via the flux constraint (<ref>):1/2 = 2 √(3) C̃ ∫^+∞_-∞d ξ ξ/3ξ^2-2√(3)ξ + 2 =2 √(3) C̃ π/3C̃ = √(3)/4 π.Therefore, using (<ref>) and (<ref>), the spectrum isF̂(k,s)= 3 ε/2 π L κs^-31/3ξ^2-2√(3)ξ + 2= 3 ε/2 π L 1/3k^2 -2 √(3 κ)k s^3/2 + 2κ s^3 .This expression clearly satisfies the asymptotics given by (<ref>), and the results for the 1-D spectra (<ref>) and (<ref>) apply.The s flux (<ref>) is then simplyΓ̂^s = k F̂ = 3 ε/2 π L k/3k^2 -2 √(3 κ) k s^3/2 + 2κ s^3 .The k flux (<ref>), using (<ref>), isΓ̂^k = -2εL^-1 s^-1Im∫^∞_0 dy e^-i ξ yϕ(y).= -√(3) ε/2 π L sIm∫_0^∞ dy e^-y/√(3)e^-iy(ξ-1/√(3))= 3 ε/ 2 π L s√(3) ξ-1/3ξ^2-2√(3)ξ + 2= 3 ε/ 2 π L√(3 κ)k s^1/2-κ s^2/3k^2 -2 √(3 κ)k s^3/2 + 2κ s^3 . Finally, we show that the components of the shell-averaged flux (<ref>) for α = 1 are positive-definite. For the s flux, using (<ref>) and (<ref>), we haveΓ̂^s(k,s) + Γ̂^s(-k,s) = 3 ε k/2 π L κ s^3(1/3ξ^2-2√(3)ξ + 2-1/3ξ^2+2√(3)ξ + 2) = 3 ε k/2 π L κ s^34 √(3)ξ/9 ξ^4 + 4≥ 0.For the k flux, using (<ref>) and (<ref>), we haveΓ̂^k(k,s) - Γ̂^k(-k,s) = 3 ε/ 2 π L s(√(3) ξ-1/3ξ^2-2√(3)ξ + 2+√(3) ξ+1/3ξ^2+2√(3)ξ + 2) = 3 ε/ 2 π L s6 √(3)ξ^3/9 ξ^4 + 4≥ 0. § CHARACTERIZATION OF 2-D FLUXES FOR ALPHA < 2 CASESIn this section, we characterize the (non-local) 2-D fluxes for the α < 2 cases. The asymptotics of the (normalized) ratio of fluxes (<ref>) areR ∼-(k L_0)^-1(s v_th)^(5 - α)/(α + 1),ξ≪ 1, (k L_0)^1-α(s v_th)^(2 α - 1)/(α + 1),ξ≫ 1.Unlike in the Batchelor regime, the ratios in (<ref>) are no longer functions solely of ξ, and,therefore, the regions where R is small or large are not necessarily divided by the critical-balance curve (<ref>). Note that we focus our analysis on the top right quadrant of the (k,s) plane, but our arguments can be extended to the whole plane.For ξ≪ 1, above the curve (<ref>), which is the diffusion-dominated region for α = 2, the s and k flux are comparable along the curves v_th∼ (k L_0)^(α + 1)/(5-α).At sufficiently large k L_0, this curve falls below the critical-balance curve (<ref>), outside the ξ≪ 1 regime, and hence Γ̂^k is dominant over Γ̂^s when ξ≪ 1 for all α.For ξ≫ 1, below the curve (<ref>), which is the phase-mixing-dominated region for α = 2, the s and k flux are comparable along the curves v_th∼ (k L_0)^(α^2-1)/(2 α - 1).There are now various regimes with different behaviors.When 1 < α < 2, the flux is phase-mixing-dominated (diffusion-dominated) below (above) the curve (<ref>). 
At sufficiently large k L_0, the curve (<ref>) falls below the critical-balance curve (<ref>), effectively widening the diffusion-dominated region from the ξ≪ 1 region down to the curve (<ref>). Note that for this range of α, (<ref>) is concave, same as (<ref>).When α = 1, (<ref>) is horizontal in k L_0, and so the flux is only phase-mixing-dominated for s v_th≪ 1, irrespective of k. We plotted this case in Fig. <ref>(b).When 1/2 < α < 1, s v_th in (<ref>) is a decreasing function of k L_0, shrinking the phase-mixing-dominated region further. As an example, we plot a diagram for the case α = 3/4 in Fig. <ref>. Note that when α < -1 + √(3)≃ 0.732, the dependence of s v_th on k L_0 in (<ref>) is steeper than (k L_0)^-1, so the area of the phase-mixing-dominated region becomes finite.When α = 1/2, the flux is phase-mixing-dominated only when k L_0 ≪ 1, irrespective of s. Note that k must satisfy k L_E ≫ 1 to be in the inertial range (and ξ≫ 1 for these asymptotics to hold), so the phase-mixing-dominated region is likely nonexistent.When 0 < α < 1/2, the curve (<ref>) is convex. The phase-mixing-dominated region is bounded below by (<ref>) and above by (<ref>), extending from k = 0 to the intersection of (<ref>) and (<ref>), viz., at k L_0 = 1. The region where the flux is phase-mixing-dominated is not asymptotic, i.e., everywhere in the ξ≫ 1 region, in the limit ξ→∞, either in the limit k L_0 →∞ and/or s v_th→ 0, the flux is diffusion-dominated. Therefore, in this case, the only part of the (k,s) plane where the flux is phase-mixing dominated is along the curve ξ = ξ_α, where the k flux vanishes. | http://arxiv.org/abs/2310.18211v1 | {
"authors": [
"Michael L. Nastac",
"Robert J. Ewart",
"Wrick Sengupta",
"Alexander A. Schekochihin",
"Michael Barnes",
"William D. Dorland"
],
"categories": [
"physics.plasm-ph",
"astro-ph.SR",
"physics.space-ph"
],
"primary_category": "physics.plasm-ph",
"published": "20231027153357",
"title": "Phase-space entropy cascade and irreversibility of stochastic heating in nearly collisionless plasma turbulence"
} |
Gate-tunable topological superconductivity in a supramolecular electron spin lattice

Rémy Pawlak,^1∗† Jung-Ching Liu,^1† Chao Li,^1† Richard Hess,^1† Hongyan Chen,^2 Carl Drechsel,^1 Ping Zhou,^3 Robert Häner,^3 Ulrich Aschauer,^3,4 Thilo Glatzel,^1 Silvio Decurtins,^3 Daniel Loss,^1 Jelena Klinovaja,^1 Shi-Xia Liu,^3∗ Wulf Wulfhekel,^2 & Ernst Meyer^1

^1 Department of Physics, University of Basel, Klingelbergstrasse 82, 4056 Basel, Switzerland
^2 Physikalisches Institut, Karlsruhe Institute of Technology, Wolfgang-Gaede-Str. 1, 76131 Karlsruhe, Germany
^3 Department of Chemistry, Biochemistry and Pharmaceutical Sciences, University of Bern, Freiestrasse 3, 3012 Bern, Switzerland
^4 Department of Chemistry and Physics of Materials, University of Salzburg, Jakob-Haringer-Strasse 2A, 5020 Salzburg, Austria
^† These authors contributed equally; ^∗ To whom correspondence should be addressed; E-mails: [email protected], [email protected]

The Dynamic Time Warping (DTW) distance is a popular similarity measure for polygonal curves (i.e., sequences of points). It finds many theoretical and practical applications, especially for temporal data, and is known to be a robust, outlier-insensitive alternative to the Fréchet distance. For static curves of at most n points, the DTW distance can be computed in O(n^2) time in constant dimension. This tightly matches a SETH-based lower bound, even for curves in ℝ^1.

In this work, we study dynamic algorithms for the DTW distance. Here, the goal is to design a data structure that can be efficiently updated to accommodate local changes to one or both curves, such as inserting or deleting vertices, and that, after each operation, reports the updated DTW distance. We give such a data structure with update and query time O(n^1.5 log n), where n is the maximum length of the curves. The natural follow-up question is whether this time bound can be improved; under the aforementioned SETH-based lower bound, we could even hope for linear update time.

As our main result, we refute these hopes and prove that our data structure is conditionally optimal, up to subpolynomial factors. More precisely, we prove that, already for curves in ℝ^1, there is no dynamic algorithm to maintain the DTW distance with update and query time O(n^1.5 - δ) for any constant δ > 0, unless the Negative-k-Clique Hypothesis fails. This holds even if one of the curves is fixed at all times, whereas the points of the other curve may only undergo substitutions. In fact, we give matching upper and lower bounds for various further trade-offs between update and query time, even in cases where the lengths of the curves differ.

The Negative-k-Clique Hypothesis is a recent but well-established hypothesis from fine-grained complexity that generalizes the famous APSP Hypothesis, and has successfully led to several lower bounds.

Acknowledgements. This work is part of the project CONJEXITY that has received funding from the European Research Council (ERC) under the European Union’s Horizon Europe research and innovation programme (grant agreement No. 101078482). This work is part of the project TIPEA that has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 850979).
This research was supported by Independent Research Fund Denmark grant 2020-2023 (9131-00044B) “Dynamic Network Analysis” and the VILLUM Foundation grant 37507 “Efficient Recomputations for Changeful Problems”. This project has additionally received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 899987. This research benefited from meetings at the Max Planck Institute for Informatics, Saarbrücken, and from discussions at Dagstuhl Seminar 22461 `Dynamic Graph Algorithms'.§ INTRODUCTION Sequence similarity measures are fundamental in computational geometry and string algorithms and serve as essential tools in a variety of application domains for analyzing time-ordered data, including videos, audio files, time-series measurements, and GPS tracking data. One of the most widely used similarity measures is theDynamic Time Warping (DTW) distance, with applications in speech recognition <cit.>, handwriting and online signature matching <cit.>, gesture recognition <cit.>, medicine <cit.>, song recognition <cit.>, motion retrieval <cit.>, time series clustering <cit.>, and time series database search <cit.>. See also the survey by Senin <cit.>. In this paper, we pick up the study of the well-motivated dynamic Dynamic Time Warping problem, where the goal is maintaining the DTW distance of two sequences that undergo changes. We provide (1) a new dynamic algorithm that is significantly faster than the previous state of the art, and (2) strong evidence that this algorithm isoptimal, up to lower-order factors, conditioned on a well-established hardness assumption from fine-grained complexity theory. We thus conditionally resolve the time complexity of dynamic DTW, successfully closing the problem. Dynamic Time Warping The DTW distance of two sequences (or curves) P = (p_1,…,p_n) and Q = (q_1,…,q_m) is defined as follows. Imagine a dog walking along P and its owner walking along Q. Both owner and dog start at the beginning of their curves, and in each step the owner may stay in place or jump to the next point along P and the dog may stay in place or jump to the next vertex along Q, until both of them have reached the end of their curves. Formally, this yields a traversal T = ((i_1,j_1), …, (i_t,j_t)) with i_1 = j_1 = 1, i_t = n, j_t = m and (i_k+1,j_k+1) ∈{(i_k + 1, j_k), (i_k, j_k + 1), (i_k + 1, j_k + 1)} for each step k ∈{1,…,t-1}. The cost of a traversal is the sum of all distances of dog and owner during the traversal, that is, the cost of traversal T is ∑_k=1^t d(p_i_k, q_j_k).The DTW distance of P and Q, denoted (P, Q), is then defined as the minimum cost of any traversal.Note that for this definition to make sense, we need to fix a distance measure d(·,·) on vertices. In a typical scenario, the vertices p_1,…,p_n, q_1,…, q_m lie in a low-dimensional Euclidean space ℝ^d' and their distance is an L_p norm d(x,y) = x-y_p. Throughout this paper, for the algorithms we assume that d(·,·) can be evaluated in constant time (apart from that, it can be arbitrary), and the lower bounds apply to curves in ℝ^1 (meaning that vertices lie in ℝ^1 and the distance measure is d(x,y) = |x-y|). 
This means that both algorithms and lower bounds apply very generally to DTW for various distance measures d(·, ·), for example, to ℝ^O(1) under any L_p norm.Static DTW Algorithms The time complexity of computing the DTW distance of two static curves is well-understood: Given two curves P and Q of length n and m, we can compute their DTW distance in time O(nm) by a simple dynamic programming algorithm; the time becomes O(n^2) when both curves have length at most n. This running time is almost the best known, up to mild improvements <cit.>. Moreover, assuming the Strong Exponential Time Hypothesis (or the Orthogonal Vectors Hypothesis) from fine-grained complexity theory, there is a matching lower bound stating that DTW cannot be computed in truly subquadratic time O(n^2-δ), for any constant δ > 0 <cit.>. This lower bound applies even for curves in ℝ^1. Even if we relax the goal to a constant-factor approximation, no algorithm running in truly subquadratic time is known (see <cit.> for polynomial-factor approximation algorithms and approximation algorithms for restricted input models), though for this approximate setting conditional lower bounds are still amiss.Dynamic DTW Algorithms In this paper we study DTW on dynamically changing sequences. This problem was introduced by Nishi et al. <cit.>. Here, both curves receive updates that insert a vertex anywhere in the sequence, or remove any vertex from the sequence, or substitute any vertex by a new vertex. We may also receive queries, upon which we should return the DTW distance of P and Q. Our goal is to store P and Q in a data structure that supports these updates and queries.In short: The sequences P and Q undergo updates such as insertion, deletion, or substitutions of a vertex, and for each query we want to recompute their current DTW distance. The motivation for this dynamic problem is threefold <cit.>: (1) Big data applications produce quickly changing data, and the time-ordered data relevant for DTW is no exception. (2) For audio or video applications one can imagine that a file gets edited and after each edit some similarity score should be recomputed. (3) The time complexity of dynamic versions of sequence similarity measures is of inherent algorithmic interest; see the related work section below for further examples.Note that the results for static algorithms yield some upper and lower bounds for the time complexity of the dynamic DTW problem. We focus for simplicity on the case that update and query time must be equal. Then by using a static algorithm to recompute the DTW distance we get update/query time O(n^2), if both curves have length at most n.Nishi et al. <cit.> improved this to update/query time O(n + m + #chg), where #chg is the number of changes in the dynamic programming table. Since in the worst case #chg = Θ(n m), their algorithm yields the same update/query time O(n^2) as the static recomputation, in case both curves have length about n. On the other hand, starting from two empty curves and using O(n) updates we can create any worst-case instance of size n, and thus the static lower bound implies that O(n) updates and one query cannot be done in time O(n^2-δ) for any δ>0 assuming the Strong Exponential Time Hypothesis. It follows that the update/query time cannot be O(n^1-δ). However, these static bounds leave a large gap. §.§ Our ResultsIn this paper, we improve upon both the quadratic upper bound and the linear lower bound. 
Specifically, for the upper bound we design a new algorithm for dynamic DTW distance with update and query time O(n^1.5 log n). For the lower bound, we show that the update and query time cannot be improved to O(n^1.5-δ) for any constant δ > 0 assuming the Negative-k-Clique Hypothesis. Since our upper and lower bound match, we fully resolve the time complexity of the dynamic DTW distance, up to lower-order factors and assuming the Negative-k-Clique Hypothesis. In fact, our precise results do not assume that query and update time are equal. Instead, we obtain trade-off results both for the upper and lower bound, and both trade-offs match up to lower-order factors. In the remainder of this section we discuss these results in detail. Note that a substitution update can be simulated by a deletion and an insertion. Therefore, any data structure that supports insertions and deletions automatically also supports substitutions.

Upper Bound For a dynamic DTW data structure on curves P,Q, we denote by n and m the current length of P and Q respectively at any point in the update and query sequence. To simplify notation, we assume that n ≥ m always holds.[This assumption can be easily removed, but then in the time bounds n is replaced by max{n,m} and m is replaced by min{n,m}.] In Section <ref> we design a data structure for dynamic DTW with the following guarantees.

Theorem (upper-bound trade-off). For any constant β ∈ [0, 1/2], there is a data structure that maintains curves P and Q of (changing) lengths n and m and supports insertion and deletion updates in time O(n m^β log m) and DTW queries in time O(n m^1-β log m). The data structure takes space O(n m log m).

We give a high-level description of this data structure. The curves P and Q induce a vertex-weighted grid graph G of n columns and m rows, where we have horizontal, vertical, and diagonal edges. We call such a graph a rectangular graph, referencing its rectangular bounding box. The DTW distance of P and Q corresponds to the length of the shortest xy-monotone path[A sequence of pairs (x_1,y_1), (x_2,y_2), …, (x_ℓ,y_ℓ) such that both x_1,x_2,…, x_ℓ and y_1,y_2,…, y_ℓ are monotone.] from node (1, 1) to node (n, m). Throughout this paper, our coordinates refer to matrix indexing. Hence, we consider the shortest xy-monotone path from the top left to the bottom right corner.

We show an upper bound that is more general than Theorem <ref>. We choose some β ∈ [0, 1/2]. We partition P and Q into Θ((n + m) m^-β) subcurves containing O(m^β) vertices each. This partitions the rectangular graph into O(n m^1-2β) rectangular subgraphs, each containing O(m^2β) vertices. For each rectangular subgraph G', we store a distance matrix A' with dimensions m^β × m^β. This matrix considers all vertices S on the top and left boundaries of the bounding rectangle and all vertices T on the bottom and right boundaries, and records (by using known techniques for multiple-source shortest paths in planar graphs) all st-distances for s ∈ S and t ∈ T. At query time, we combine the O(nm^1-2β) matrices, spending O(m^β log m) time per matrix, by a wavefront algorithm to compute the DTW distance of P and Q in O(n m^1-β log m) time.
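To make this query-time combination step concrete, here is a small sketch of ours (not the paper's implementation): it illustrates a simplified variant in which only Q is split into strips of consecutive vertices, each strip's boundary-to-boundary distance matrix is computed by brute force rather than with the planar-graph machinery, and the matrices are combined with min-plus products. It therefore demonstrates only correctness of the combination idea, not the stated running times; curves are taken in ℝ^1, and the result is checked against the direct O(nm) dynamic program.

```python
import numpy as np

def dtw_direct(P, Q):
    """Reference O(n*m) dynamic program over the vertex-weighted grid graph."""
    n, m = len(P), len(Q)
    C = np.full((n, m), np.inf)
    for i in range(n):
        for j in range(m):
            w = abs(P[i] - Q[j])                      # curves in R^1: d(x, y) = |x - y|
            if i == 0 and j == 0:
                C[i, j] = w
                continue
            best = np.inf
            if i > 0: best = min(best, C[i - 1, j])
            if j > 0: best = min(best, C[i, j - 1])
            if i > 0 and j > 0: best = min(best, C[i - 1, j - 1])
            C[i, j] = w + best
    return C[n - 1, m - 1]

def strip_matrix(P, Q, j0, j1):
    """Boundary-to-boundary matrix M of the strip of columns j0..j1:
    M[k, i] = cheapest xy-monotone path cost from (k, j0) to (i, j1),
    excluding the weight of the entry vertex (k, j0) itself."""
    n = len(P)
    M = np.full((n, n), np.inf)
    for k in range(n):
        row = np.full((n, j1 - j0 + 1), np.inf)
        row[k, 0] = 0.0                               # start at (k, j0); its weight is already paid
        for j in range(j0, j1 + 1):
            for i in range(k, n):
                if i == k and j == j0:
                    continue
                best = np.inf
                if i > k: best = min(best, row[i - 1, j - j0])
                if j > j0: best = min(best, row[i, j - j0 - 1])
                if i > k and j > j0: best = min(best, row[i - 1, j - j0 - 1])
                row[i, j - j0] = abs(P[i] - Q[j]) + best
        M[k, :] = row[:, j1 - j0]
    return M

def dtw_by_strips(P, Q, b):
    """Combine per-strip boundary matrices with min-plus products: a simplified,
    brute-force analogue of the wavefront combination of the matrices A'."""
    n, m = len(P), len(Q)
    # DP values of the first column of the grid.
    front = np.cumsum([abs(p - Q[0]) for p in P]).astype(float)
    j = 0
    while j < m - 1:
        j_next = min(j + b, m - 1)
        M = strip_matrix(P, Q, j, j_next)
        front = np.min(front[:, None] + M, axis=0)    # min-plus product of front with M
        j = j_next
    return front[n - 1]

rng = np.random.default_rng(1)
P = rng.integers(0, 10, 14).tolist()
Q = rng.integers(0, 10, 9).tolist()
print(dtw_direct(P, Q), dtw_by_strips(P, Q, b=3))     # both print the same DTW value
```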
At update time, intuitively, any update affects O(nm^-β) matrices A' (assuming n ≥ m). We can update each matrix A' in time O(m^2β log m), and thus we can perform the whole update in time O(n m^-β · m^2β log m) = O(n m^β log m).

Lower Bound We show that the above trade-off of update and query time is optimal, up to subpolynomial factors and assuming a plausible hypothesis from fine-grained complexity theory. The specific hypothesis concerns the Negative-k-Clique problem for some constant k: Given an undirected k-partite graph G with N nodes and integer edge-weights of absolute value at most N^O(k), decide whether G has a k-clique with negative total edge weight. This problem has a naive O(N^k)-time algorithm, and no much faster algorithms are known. This lack of progress led to the formulation of the Negative-k-Clique Hypothesis, which postulates that Negative-k-Clique cannot be solved in time O(N^k-δ) for any constants k ≥ 3 and δ > 0. For k=3 this hypothesis is equivalent to the famous All-Pairs Shortest Paths hypothesis <cit.>, so it is very believable. The generalization to k > 3 is very plausible, and was successfully applied as a hardness assumption in several contexts <cit.>, even as the basis for public-key cryptography schemes <cit.>. Moreover, the Negative-k-Clique Hypothesis is known to imply the Orthogonal Vectors Hypothesis <cit.>, which implies the static lower bounds for DTW. We use the Negative-k-Clique Hypothesis to prove the following matching lower bound for our dynamic DTW algorithm.

Theorem (lower bound for dynamic DTW). Fix constants c ≥ 1, β ∈ [0,1/2], γ ∈ (0, 1] and δ, ε > 0. If there is a data structure that maintains curves P and Q of length n and m with m ∈ Ω(n^γ-ε) ∩ O(n^γ+ε), has preprocessing time O((n+m)^c), and supports substitution updates in time O(n · m^β-δ) and DTW queries in time O(n · m^1-β-δ), then the Negative-k-Clique Hypothesis fails (for some constant k depending solely on (c, β, γ, δ, ε)).

We next give a high-level overview of this conditional lower bound. We first define a new intermediary algorithmic problem which we call Intermediary. Then we present reductions from Negative-k-Clique to Intermediary, and from Intermediary to dynamic DTW; see Figure <ref> for an illustration.

Problem (Intermediary). In the Intermediary problem, the task is to maintain a directed (n_r × n_c)-grid graph, with the weights of horizontal, vertical, and diagonal edges defined using auxiliary parameters:

* non-negative integer identifiers r_i and c_j for all rows i ∈ [n_r] and columns j ∈ [n_c], respectively;
* non-negative integer weights d_i for all rows i ∈ [n_r];
* Booleans b_j for all columns j ∈ [n_c];
* an integer U > n_r · n_c · (max_i d_i · max_i r_i · max_j c_j)^2.

Horizontal edges (i, j) → (i, j+1) and vertical edges (i, j) → (i+1, j) have weight U. The weight of the diagonal edge (i, j) → (i+1, j+1) is d_i · b_j if r_i = c_j, and √(U) otherwise. An update update(j,x) sets b_j := x (for x ∈ {0, 1}). A query returns the length of the shortest path from (0, 0) to (n_r-1, n_c-1), unless this length is at least U · |n_r - n_c| + √(U), in which case it returns ∞.
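To illustrate the definition, the following small sketch (ours; the identifiers, weights, and instance size are arbitrary toy choices, and U is deliberately chosen as a perfect square so that √(U) is an integer) materializes the edge weights of an Intermediary instance and answers queries by a straightforward shortest-path dynamic program over the directed grid. It only demonstrates the update/query semantics, not the efficiency that the theorems below are concerned with.

```python
import math

def diagonal_weight(i, j, r, c, d, b, U):
    """Weight of the diagonal edge (i, j) -> (i+1, j+1) as defined above."""
    return d[i] * b[j] if r[i] == c[j] else math.isqrt(U)

def query(n_r, n_c, r, c, d, b, U):
    """Shortest (0,0) -> (n_r-1, n_c-1) path length, or infinity if it is
    at least U*|n_r - n_c| + sqrt(U)."""
    INF = float("inf")
    dist = [[INF] * n_c for _ in range(n_r)]
    dist[0][0] = 0
    for i in range(n_r):
        for j in range(n_c):
            if i + 1 < n_r:                      # vertical edge of weight U
                dist[i + 1][j] = min(dist[i + 1][j], dist[i][j] + U)
            if j + 1 < n_c:                      # horizontal edge of weight U
                dist[i][j + 1] = min(dist[i][j + 1], dist[i][j] + U)
            if i + 1 < n_r and j + 1 < n_c:      # diagonal edge
                w = diagonal_weight(i, j, r, c, d, b, U)
                dist[i + 1][j + 1] = min(dist[i + 1][j + 1], dist[i][j] + w)
    answer = dist[n_r - 1][n_c - 1]
    return math.inf if answer >= U * abs(n_r - n_c) + math.isqrt(U) else answer

# A toy instance with n_r = 3, n_c = 4.
r, c, d = [1, 2, 3], [1, 2, 5, 3], [5, 7, 2]
b = [1, 1, 1, 1]
s = 3 * 4 * (max(d) * max(r) * max(c)) ** 2 + 1
U = s * s                                        # a perfect square exceeding the required bound
print(query(3, 4, r, c, d, b, U))                # U + 12: two matched diagonals plus one U-edge
b[0] = 0                                         # update(0, 0): set b_0 := 0
print(query(3, 4, r, c, d, b, U))                # U + 7: the first diagonal now has weight 0
```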
In Section <ref> we reduce from Intermediary to dynamic DTW:

Theorem (reduction from Intermediary to dynamic DTW). Suppose that there is a data structure that maintains curves P,Q (subject to substitutions of vertices in Q and DTW queries) with preprocessing time T_P(n,m), update time T_U(n,m), and query time T_Q(n,m), where n=|P| and m=|Q|. Then any (n_r × n_c)-size instance of the Intermediary problem can be solved with preprocessing time O(T_P(n,m)+n+m), update time O(T_U(n,m)), and query time O(T_Q(n,m)), where n = O(n_r) and m = O(n_c).

In Section <ref> we then reduce from Negative-k-Clique to Intermediary:

Theorem (lower bound for Intermediary). Fix constants c ≥ 1, β ∈ [0,1/2], γ ∈ (0, 1], and δ, ε > 0. If the Intermediary problem for n_c ∈ Ω(n_r^γ-ε) ∩ O(n_r^γ+ε) can be solved with preprocessing time O((n_r + n_c)^c), update time O(n_r · n_c^β-δ), and query time O(n_r · n_c^1-β-δ), then the Negative-k-Clique Hypothesis fails (for some constant k depending solely on (c, β, γ, δ, ε)).

Notice that Intermediary is in fact a dynamic shortest path problem on planar graphs. In <cit.> a lower bound is given for a dynamic shortest path problem on planar graphs, where updates select a single edge and modify its weight. However, in Intermediary we are not allowed to modify the weight of single edges. In fact, the weight of an edge can take at most two different values. In our lower bound, we design gadgets that consist of batches of edges. We then ensure that a shortest path either follows all the edges in a batch, or none of them. By properly modifying the weight of each edge in a batch, we simulate the capability of choosing between multiple weights, instead of just two, for each gadget. This is again not enough, because an update changes the weights of multiple edges (and also gadgets), not just a selected one. Let us now provide some (oversimplified) intuition on how we deal with this difficulty. All gadgets can be thought of as having two different weights, a large and a small one. The large weights are never modified, and they ensure a restricted form for the shortest path. The small weights can be thought of as noise on top of the large weights, which cannot invalidate the aforementioned shortest-path property, but encodes the critical information for the lower bound. Now, when we modify the weight of a gadget, this may indeed “accidentally” modify the (small) weight of some other gadget as well. However, it is ensured that when the weight of a gadget is accidentally changed, then this gadget could anyway not be part of a shortest path, due to its large weight. The actual construction is of course more elaborate, and at times needs to modify the large weights as well.

§.§ Further Related Work Static Dynamic Time Warping DTW is a well-studied similarity measure with a wealth of prior work. Recall that in the static setting computing DTW requires essentially quadratic time. Attempting to bypass this barrier, previous works studied DTW from many creative angles, e.g., on binary inputs (i.e., both curves have at most two different vertices) <cit.>, on run-length encoded curves <cit.>, parametrized by the DTW distance <cit.>, approximation algorithms <cit.>, communication complexity <cit.>, and many other settings <cit.>. DTW is also related to other similarity measures in computational geometry and stringology: Fréchet Distance The Fréchet distance is defined similarly to DTW, except that the cost of a traversal is the maximum distance that dog and owner have at any point during the traversal (instead of the sum of all distances as in DTW). The Fréchet distance has the nice feature of being a metric, but it is less outlier-resistant than the DTW distance. The Fréchet distance can also be computed in time O(n^2) <cit.>, and not in time O(n^2-δ) for any δ > 0 assuming the Strong Exponential Time Hypothesis <cit.>. Similarly to DTW, the Fréchet distance has been studied from various angles, see, e.g.
<cit.>. To the best of our knowledge, the Fréchet distance has not been studied in a dynamic setting. The line of research that comes closest to a dynamic setting is that of nearest neighbor data structures for the Fréchet distance. Here, a set S of curves is given as a static input that we can preprocess to build a data structure. As a query we are then given a curve Q, and the task is to compute the curve P ∈ S with smallest Fréchet distance to Q. Since between two queries the curve Q can change completely, one can show that a naive recomputation of all distances is (near-)optimal for exact algorithms. Research has thus focused on approximation algorithms <cit.> and on restricted curves <cit.>.

Edit Distance The edit distance is a similarity measure on strings that counts the number of character insertions, deletions, and substitutions needed to transform one string into the other. It can also be computed in time O(n^2) <cit.>, but not in time O(n^2-δ) for any δ > 0 assuming the Strong Exponential Time Hypothesis <cit.>, even on binary strings <cit.>. Edit distance on dynamically changing strings admits an exact algorithm with Õ(n) update and query time <cit.> (which is optimal assuming the Strong Exponential Time Hypothesis), and it admits an n^o(1)-approximation algorithm with n^o(1) update and query time <cit.>. Edit distance is usually studied with unit cost for insertions, deletions, and substitutions, but there is also a weighted variant in which the cost of any operation depends on the involved characters. The quadratic-time algorithm also works for this weighted edit distance. In the dynamic setting, weighted edit distance admits the same trade-off as we show for DTW in this paper, i.e., it has an exact algorithm with Õ(n^1+β) update time and Õ(n^2-β) query time for any β ∈ [0, 1/2] <cit.>, and there is a matching lower bound assuming the All-Pairs Shortest Paths hypothesis <cit.>.

Although these results are in the area of string algorithms, whereas here we study DTW, they constitute the most closely related work in the literature. In fact, there is a reduction from edit distance to DTW <cit.>, so DTW can be viewed as the harder problem. Therefore, our dynamic DTW algorithm is a generalization of the dynamic weighted edit distance algorithm, following a similar approach. On the other hand, the lower bound for dynamic weighted edit distance in <cit.> produces strings with Θ(n) different characters. Thus, when an update introduces a new character, then Θ(n) distances to the existing characters need to be specified. This setting cannot be modelled using the DTW distance of curves over ℝ^d' (for d'=o(n)), where each point can be described by O(d') coordinates, which is not enough to encode Θ(n) independent distances. We circumvent this issue by reducing from Negative-k-Clique instead of All-Pairs Shortest Paths. The resulting conditional lower bound follows the same high-level ideas as in <cit.>, but needs to deviate considerably in the details, because we need gadgets that test for cliques instead of triangles.

§ PRELIMINARIES In this section we present the key concepts required for our results: rectangular graphs, distance measures between curves, and conditional lower bounds. We denote by ℕ the natural numbers, by ℝ the reals, and for all n ∈ ℕ by [n] the integers 1 up to (and including) n. We use matrix indexing: a matrix with n rows and m columns is an n × m matrix. Row i and column j correspond to the point (i, j). When drawing a grid graph in the plane, (1, 1) indicates the top left vertex.

Graphs and shortest paths.
A graph G = (V, E) is a set of vertices V connected by edges E. The graph G may be vertex-weighted (assigning every v ∈ V some weight ω(v) ∈ ℝ) or edge-weighted (assigning every e ∈ E some weight ω(e) ∈ ℝ). A path π in G is any sequence of distinct vertices such that consecutive vertices are connected by an edge in E. When G is vertex-weighted, the cost of a path π is c(π) := ∑_v ∈π ω(v). When G is edge-weighted, the cost c(π) is the sum of ω(e) over all edges e between consecutive vertices in π. For any two vertices u, v ∈ V, we denote by d_G(u, v) the minimum cost c(π) over all paths π that have u and v as their endpoints.

Rectangular graphs. A rectangular graph G is specified by two integers (A, B). The vertex set V contains A · B vertices: a vertex (i, j) for all (i, j) ∈ [A] × [B]. We can embed the graph on an integer grid by giving every vertex (i, j) ∈ V coordinates (i, j). The edge set E is defined by the following edges:

* between (i, j) and (i + 1, j) (“vertical edges”),
* between (i, j) and (i, j + 1) (“horizontal edges”), and
* between (i, j) and (i + 1, j + 1) (“diagonal edges”).

We intuitively refer to the columns and rows of G; G is a plane embedded grid graph that contains 2(A + B) - 4 vertices on the outer face. Rectangular (sub)graphs may be vertex-weighted or edge-weighted. A path π in a rectangular graph is xy-monotone whenever its sequence of vertices (i, j) is non-decreasing in both i and j. For any two vertices (i, j), (x, y) ∈ V, we denote by d_G( (i, j), (x, y) ) the cost of the cheapest xy-monotone path from (i, j) to (x, y). Equivalently, the distance d_G( (i, j), (x, y) ) is the cost of the cheapest path from (i, j) to (x, y) in the directed graph where each vertical edge points from (i, j) to (i+1, j) and so forth. A rectangular subgraph R of G is defined by two intervals [a, b] ⊆ [A], [c, d] ⊆ [B]. Its vertices are V_R := { (i, j) ∈ V | (i, j) ∈ [a, b] × [c, d] } and it contains an edge between two vertices in V_R whenever they share an edge in E. Note that we do not reindex the vertices. E.g., given a rectangular graph for A = B = 10, and the rectangular subgraph R given by [2, 4] and [6, 9], the set V_R contains the vertex (4, 9), but not (1, 1).

Discrete distance measures. Consider some metric space X. We denote for any pair (p, q) ∈ X by d(p, q) their distance. A curve P is any finite ordered sequence of points in X; we refer to the points in P as vertices of P. Any two curves P = (p_1, …, p_n) and Q = (q_1, …, q_m), with n and m vertices respectively, induce a vertex-weighted rectangular graph G with A = n and B = m, where the weight of each vertex (i, j) is ω( (i, j) ) = d(p_i, q_j). Given two curves P and Q, we can define a distance measure that captures the similarity between P and Q. There are two commonly used discrete similarity measures between curves P and Q, each of which can be formalized using the vertex-weighted rectangular graph induced by P and Q. The first measure is the discrete Fréchet distance. Denote by Π all xy-monotone paths with (1, 1) and (n, m) as their endpoints. For any π ∈ Π the bottleneck cost is b(π) := max_(i, j) ∈π d(p_i, q_j). The discrete Fréchet distance is subsequently defined as min_π∈Π b(π) = min_π∈Π max_(i, j) ∈π d(p_i, q_j). The second distance measure is the Dynamic Time Warping (DTW) distance, which is equal to the minimum cost over all xy-monotone paths from (1, 1) to (n, m) in G: min_π∈Π c(π) = min_π∈Π ∑_(i, j) ∈π d(p_i, q_j).
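For concreteness, here is a small sketch of ours (not part of the original text) that evaluates both definitions directly on the vertex-weighted rectangular graph induced by two toy curves in ℝ^1 by dynamic programming over xy-monotone paths: the vertex weights are aggregated with max for the discrete Fréchet distance and with + for the DTW distance.

```python
def grid_distances(P, Q):
    """Discrete Frechet and DTW distances of curves P, Q in R^1, computed via the
    induced vertex-weighted grid: xy-monotone paths from (1,1) to (n,m)."""
    n, m = len(P), len(Q)
    INF = float("inf")
    frechet = [[INF] * m for _ in range(n)]
    dtw = [[INF] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            w = abs(P[i] - Q[j])                      # vertex weight d(p_i, q_j)
            if i == 0 and j == 0:
                frechet[i][j], dtw[i][j] = w, w
                continue
            preds = [(i - 1, j), (i, j - 1), (i - 1, j - 1)]
            best_f = min(frechet[x][y] for x, y in preds if x >= 0 and y >= 0)
            best_d = min(dtw[x][y] for x, y in preds if x >= 0 and y >= 0)
            frechet[i][j] = max(w, best_f)            # bottleneck cost b(pi)
            dtw[i][j] = w + best_d                    # additive cost c(pi)
    return frechet[n - 1][m - 1], dtw[n - 1][m - 1]

P, Q = [0, 4, 4, 1], [0, 5, 1]
print(grid_distances(P, Q))   # prints (1, 2): discrete Frechet distance 1, DTW distance 2
```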
Upper bounds and (conditional) lower bounds. In the static problem variant we are given as input the metric space X and two curves P = (p_1, …, p_n) and Q = (q_1, …, q_m). Given P and Q we may construct the corresponding rectangular graph G using O(nm) time and space. To compute our distances, we may perform depth-first search from the vertex (1, 1) until we find the vertex (n, m) (where the weight of an edge ((i, j), (x, y)) is the weight associated to (x, y)). Both distances can be computed faster (in O(nm) time) using dynamic programming. Whilst the above static algorithms are simple, there are conditional lower bounds showing that they are also optimal (up to subpolynomial factors). For both the discrete Fréchet distance and the DTW distance these conditional lower bounds can be based on SETH: The Strong Exponential Time Hypothesis (SETH) asserts that for any δ > 0, there is an integer k > 3 such that k-SAT cannot be solved in O(2^(1 - δ) n) time. Consider any δ' > 0 and a static algorithm that can compute the DTW or the discrete Fréchet distance between curves P and Q in time O((nm)^1 - δ'). The conditional lower bounds by <cit.> show that such an algorithm (even for curves in ℝ^1 under any L_p metric) would imply that SETH is wrong.

Dynamic DTW distance. In the dynamic problem variant, we have two curves P = (p_1, …, p_n) and Q = (q_1, …, q_m) stored in some data structure subject to vertex insertions and deletions. We choose to formulate the problem using insertions, deletions, and queries. A deletion may select either P or Q and any vertex in the curve, and remove the vertex from the sequence. An insertion has as input a point p and may select either P or Q and some vertex in the curve. It inserts p into the curve after (or before) the selected vertex. We focus on the DTW distance, and require that a query reports the DTW distance between P and Q. The conditional lower bounds for the static problem variant imply lower bounds for the dynamic version. Indeed, consider any δ' > 0 and a dynamic algorithm with U(n, m) update time and Q(n, m) query time such that (n + m) · U(n, m) + Q(n, m) ∈ O((nm)^1 - δ'). This dynamic algorithm violates SETH, as we may answer the static problem variant using (n + m) updates and a single query. In this paper, we provide a much stronger lower bound: not only presenting a higher lower bound but also restricting the query and update time individually. We condition our lower bound on the Negative-k-Clique Hypothesis:

Definition (Negative-k-Clique). In the Negative-k-Clique problem, the input is an undirected k-partite graph G with N nodes and integer edge weights, and the task is to decide whether there exists a k-clique in G with a negative sum of all edge weights. Let W be the sum of the absolute values of all edge weights in G. The Negative-k-Clique Hypothesis postulates that for all δ > 0 the Negative-k-Clique problem for W = N^O(k) cannot be solved in time O(N^k - δ).

§ UPPER BOUND In this section, we present a data structure to dynamically maintain the DTW distance between two curves P and Q. Let P be a curve with n vertices, and let Q be a curve with m vertices, where n ≥ m. Moreover, fix β ∈ [0, 1/2]. We prove Theorem <ref> as we design a dynamic algorithm with O(n m^β log m) update time and O(n m^1-β log m) query time.
Before we state the algorithm and associated data structure, we state an auxiliary result that encapsulates Klein's multiple-source shortest path algorithm <cit.> and the SMAWK algorithm <cit.>. Let us denote ℝ̅ = ℝ_≥ 0 ∪ {∞} and recall that the min-plus product x ⋆ A of a row vector x ∈ ℝ̅^X and a matrix A ∈ ℝ̅^X×Y is a row vector y ∈ ℝ̅^Y such that y[j] = min_i∈X (x[i] + A[i,j]) holds for all j ∈ Y.

Lemma. Let G be a weighted plane digraph with N vertices, and let S={s_1,…,s_|S|} and T={t_1,…,t_|T|} be such that (t_1=s_1,s_2,…,s_|S|=t_|T|,…,t_2) is the cycle around the outer face. Let D ∈ ℝ̅^S×T be a matrix such that, for every s ∈ S and t ∈ T, the entry D[s,t] encodes the shortest-path distance from s to t. There exists a data structure of size O(|S| · |T|) that can be constructed in O((N+|S| · |T|) log N) time and, given a vector x ∈ ℝ̅^S, computes the min-plus product x ⋆ D ∈ ℝ̅^T in O(|S|+|T|) time.

Let w(G) denote the total weight of all arcs in G. For any real parameter W > w(G), consider a plane digraph G^+W obtained from G by introducing a cost-W backward arc (v,u) along with every arc (u,v) in G. Moreover, let D^+W ∈ ℝ̅^S×T be a matrix such that, for every s ∈ S and t ∈ T, the entry D^+W[s,t] encodes the shortest-path distance from s to t in G^+W. Observe that all entries in D^+W are finite because there are arcs in both directions between every two adjacent vertices around the outer face of G. Moreover, when comparing the costs of paths in G^+W, the path containing fewer cost-W edges is always cheaper. Consequently,

D[s,t] = D^+W[s,t] if D^+W[s,t] < W, and D[s,t] = ∞ otherwise,

and, for any other parameter W' > w(G), we have

D^+W'[s,t] = D^+W[s,t] + (W'-W) ⌊ D^+W[s,t]/W ⌋.

Our data structure consists of a value W > w(G) (say, w(G)+1) and the matrix D^+W. It can be constructed in O((N+|S| · |T|) log N) time by a direct application of Klein's multiple-source shortest path algorithm <cit.> on G^+W. This algorithm, after O(N log N)-time preprocessing, allows O(log N)-time computation of any distance between a vertex on the outer face and any other vertex.

At query time, given x ∈ ℝ̅^S, we determine W' = 1 + W + max{x[s] : x[s] ≠ ∞} and construct x^+W' ∈ ℝ̅^S obtained from x by replacing all infinite entries with W'. Then, we compute the min-plus product y^+W' := x^+W' ⋆ D^+W' and return a vector y obtained from y^+W' by replacing with ∞ all entries with values W' or more.

As for correctness, observe that, for every s ∈ S and t ∈ T, we have x^+W'[s] + D^+W'[s,t] = x[s] + D[s,t] < W' if x[s] ≠ ∞ and D[s,t] ≠ ∞, and x^+W'[s] + D^+W'[s,t] ≥ W' if x[s] = ∞ or D[s,t] = ∞. Consequently, y^+W'[t] = y[t] if y[t] ≠ ∞ and y^+W'[t] ≥ W' otherwise.

To design an efficient implementation, consider the vertices in S ∪ T and their cyclic order on the outer face of G. By definition of S and T, the rows and columns of D^+W' are ordered so that s_1≺…≺ s_|S| and t_1≺…≺ t_|T|, respectively; hence D^+W' satisfies the Monge property <cit.>; see <cit.>. Thus, we can use the SMAWK algorithm <cit.> to compute x^+W' ⋆ D^+W' in O(|S|+|T|) time given constant-time random access to D^+W'. The relation between D^+W' and D^+W reduces constant-time random access to D^+W' to constant-time random access to D^+W (which is a part of our data structure).
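The following short sketch of ours illustrates the ingredients of this query procedure numerically on a toy matrix: the min-plus product x ⋆ D, the recovery of D from D^+W via the threshold W, the rescaling rule relating D^+W' to D^+W, and the replacement of infinite entries by W'. For simplicity it uses the naive O(|S|·|T|)-time product; the point of the lemma is that, for Monge matrices, SMAWK brings this down to O(|S|+|T|).

```python
import numpy as np

INF = np.inf

def min_plus(x, D):
    """Naive min-plus product: (x * D)[t] = min_s (x[s] + D[s, t])."""
    return np.min(x[:, None] + D, axis=0)

# A toy matrix playing the role of D^{+W} for some W > w(G).
W = 100
D_plus_W = np.array([[3.0, 7.0, 104.0],
                     [105.0, 2.0, 6.0]])

# Recover D: entries >= W correspond to pairs that are unreachable in G (distance infinity).
D = np.where(D_plus_W < W, D_plus_W, INF)

# Query vector x (possibly with infinite entries), and W' chosen as in the proof above.
x = np.array([5.0, INF])
W_prime = 1 + W + x[np.isfinite(x)].max()

# Rescaling rule: D^{+W'}[s,t] = D^{+W}[s,t] + (W' - W) * floor(D^{+W}[s,t] / W).
D_plus_Wp = D_plus_W + (W_prime - W) * np.floor(D_plus_W / W)

# Replace infinite entries of x by W', multiply, then threshold the result at W'.
x_plus = np.where(np.isfinite(x), x, W_prime)
y_plus = min_plus(x_plus, D_plus_Wp)
y = np.where(y_plus < W_prime, y_plus, INF)

print(y)                  # [ 8. 12. inf]
print(min_plus(x, D))     # the same values, computed directly with infinities
```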
*Data structure and query algorithm. Having established our prerequisites, we present an overview of our data structure. Recall that the two curves P and Q induce a vertex-weighted n × m rectangular graph, where the weight of the vertex (i, j) is the distance d(p_i, q_j). We denote this graph by G. The DTW distance between P and Q is the minimal cost of an xy-monotone path from (1, 1) to (n, m) in G.

We fix a parameter β ∈ [0, 1/2] and show that we can efficiently compute this minimal cost path through three steps. Our data structure is illustrated by Figure <ref>. Our first step (Lemma <ref>) is that, if n ≥ m, we dynamically maintain a partition 𝐏 of P into O(n m^-β) subcurves, where each subcurve has at least 1/4 m^β and at most 4 m^β vertices. Similarly, we maintain a partition 𝐐 of Q into O(m^1-β) subcurves with a size in [1/4 m^β, 4 m^β]. Secondly, we note that each of the O(nm^1-2β) pairs of subcurves (P_a, Q_b) ∈ 𝐏 × 𝐐 corresponds to a rectangular subgraph R^ab in G with O(m^β) points on its boundary and O(m^2β) points in its interior. Let S be the vertices on the left and top boundary of R^ab and T be the vertices on the right and bottom boundary of R^ab. Intuitively, we would like to build a matrix such that, for s ∈ S and t ∈ T, the entry A^ab[s, t] stores the distance d_R^ab(s, t) = d_G(s, t). For technical reasons, we transform the vertex-weighted rectangular graph R^ab into an edge-weighted alignment graph D^ab such that, for each pair of vertices (s, t) in R^ab, the length of the shortest path from s to t in D^ab uniquely corresponds to the length of the shortest xy-monotone path from s to t in R^ab. The distances from S to T in D^ab are stored using the data structure of Lemma <ref>. There are O(nm^1-2β) rectangular subgraphs R^ab, each of which has O(m^2β) edges and vertices. For each R^ab, we store the alignment graph D^ab in a data structure of size O(m^2β log m); thus, our data structure takes O(nm log m) space. Finally, we show that this data structure allows us to compute the DTW distance between P and Q in O(n m^1-β log m) time through the following “wavefront” algorithm (Figure <ref>): Consider the vertex-weighted grid graph G between P and Q. We compute for all x ∈ [n] and all y ∈ [m] the distances d_G((1, 1), (x, 1)) and d_G((1, 1), (1, y)). We call this set of O(n) values the wavefront 𝒲. Throughout the algorithm, we maintain a wavefront 𝒲 of size O(n) and store for each point (i, j) ∈ 𝒲 the value d_G((1, 1), (i, j)). We iteratively update the wavefront as follows: as long as the point (n, m) is not in the wavefront, there always exists at least one rectangle R^ab whose left and top facet coincide with the wavefront. We select one such rectangle R^ab, remove its left and top facet S from the wavefront and replace them with the bottom and right facet T. We show that we can perform this operation in O(m^β) time by querying the data structure of Lemma <ref>. After O(nm^1-2β) iterations, we add the point (n, m) to the wavefront and we know the length of the shortest xy-monotone path from (1, 1) to (n, m). Thus, given our data structure, we compute the DTW distance between P and Q in O(n m^1-β log m) time. In the remainder of this section we formalise each data structure component.

*Dynamic partitions. We dynamically maintain a partition of P and a partition of Q into subcurves of size O(m^β) under very specific conditions:
Updates in Q are handled analogously. Denote by m the size of Q during updates. Then at all times, we have that m ∈ [1/2 m_0, 3/2 m_0].We store for each subcurve its two boundary vertices and its size, and we store all subcurves in a balanced binary tree sorted by size.Each vertex stores a pointer to the subcurve that contains it.Finally, we store the numbers n and m^β_0. Suppose that we insert a vertex p, preceding a vertex p_i thatlies in the subcurve P_a, or we delete a vertex p lying in a subcurve P_a.We add/remove p to P_a, incrementing/decrementing the size of the subcurve.If p_i was the left boundary vertex of P_a, we make p the left boundary vertex.We update our balanced binary tree in O(log n) time. If we add a vertex p to P, we add it to the subcurve P_1 ∈𝐏 that contains its successor on P.If the size of P_1 is larger than 2m_0^β, we spend O(m_0^β) time to split P_1 into two subcurves of roughly equal size (by iterating over all vertices in P_1 and selecting the median).If, after deleting a vertex p from P (and thereby from P_1 ∈𝐏) the size of P_1 is smaller than 1/2m^β_0, we consider an arbitrary subcurve P_2 incident to P_1 and join the two subcurves. The resulting curve P' must have length at most 2.5m_0^β. If P' is longer than 2m^β_0 we split P' along its median: creating two subcurves whose length lie in[1/2 m_0^β, 2 m_0^β].If we update Q we may change m.For all curves P_a ∈𝐏 we showed that their size remains in [1/2 m_0^β, 2m_0^β] ⊆ [1/4m^β, 4m^β].*Dynamically storing distance matrices.The partitions 𝐏 and 𝐐 partition our vertex-weighted grid graph G into rectangles.Each pair of subcurves (P_a, Q_b) induces a rectangular grid graph R^ab where we want to store the xy-monotone distance matrix A^ab between all boundary vertices of R^ab.We denote by d_G( (i, j), (x, y) ) the cost of the cheapest xy-monotone path from (i, j) to (x, y) in G (d_G( (i, j), (x, y) ) = ∞ if no such path exists). Let P_a = (p_α, …, p_δ) and Q_b = (q_γ, …, q_ν) be two curves with N and M vertices, respectively. We define the alignment graph D^ab as a rectangular graph with N × M vertices and the following edges: * (i, j)→ (i + 1, j) of weight d(p_i + 1, q_j) (“vertical edges”),* (i, j)→ (i, j + 1) of weight d(p_i, q_j + 1) (“horizontal edges”),* (i, j)→ (i + 1, j + 1) of weight d(p_i + 1, q_j + 1) (“diagonal edges”). Let P_a = (p_α, …, p_δ) and Q_b = (q_γ, …, q_ν) be two curves with N and M vertices. Denote by R^ab and D^ab their rectangular and alignment graph, respectively.For p_i, p_x ∈ P_a and q_j, q_y ∈ Q_b, the cost d_G( (i, j), (x, y) ) is equal to:d_R^ab ( (i, j), (x, y))=d_D^ab( (i, j), (x, y))+ d(p_i, q_j). Any xy-monotone path from (i, j) to (x, y) in G must be contained in the rectangular graph R^ab. Thus, there exists a bijection between xy-monotone paths from (i, j) to (x, y) in G and in D^ab. The cost of any xy-monotone path from (i, j) to (x, y) in G, equals the cost of the uniquely corresponding xy-monotone path from (i, j) to (x, y) in D^ab (plus d(p_i, q_j)). Moreover, the orientation of edges guarantees that all paths in D^ab are monotone.Having established our alignment graph D^ab, we are ready to define our update procedure.Let P and Q be two dynamic curves and assume that |P| = n ≥ |Q| = m.We can maintain a partition of P and Q into subcurves with Θ(m^β) vertices each where, for all (P_a,Q_b), we store the graph D^ab, with sources S on the left and top boundary and targets T on the right and bottom boundary, using the data structure of <ref>. 
Our data structure requires O(n m) space and has O(n m^βlog m) update time.First, we describe our data structure.We want to, at all times, maintain a pointer to the following data structure that stores partitions 𝐏 and 𝐐 of P and Q, respectively, into subcurves that have a size in [1/4m^β, 4m^β].For all O(n m^1 - 2β) pairs of subcurves P_a and Q_b, the rectangular subgraph R^ab of G has size O( m^2β). Our data structure stores D^ab, with sources S on the left and top boundary and targets T on the right and bottom boundary, using the data structure of Lemma <ref>. This requires O(m^2β) space and O(m^2βlog (m^β)) time to construct per subgraph R^ab. Thus, the total space used is O(nm) and we may construct this data structure in O(nm log m) total time.We describe our update strategy. For each update we increment a counter c by 1.Whilst c ≤ m_0/2, we may dynamically maintain P and Q by performing updates such that each subcurve P_a and Q_b has O(m^β) vertices (Lemma <ref>). During every such update, by Lemma <ref>, at most O(1) subcurves P_a ∈P and Q_b ∈Q change.Whenever we change a subcurve Q_b(e.g., the subcurve Q_b lost a vertex, or is obtained by splitting a previous subcurve along its median) we do the following: for all O(n m^-β) subcurves P_a, we consider the rectangular subgraph R^ab of G. This graph has O(m^2β) weighted vertices.We construct the corresponding alignment graph in O( m^2β) time and apply the construction algorithm of Lemma <ref> in O(m^2βlog m) time. Since at most O(1) subcurves P_a and Q_b change, each update takes O( n m^-β m^2βlog m) = O(n m^βlog m) total time. Given any (P, Q), n_0 and m_0, by our above reasoning, we may statically construct partitions 𝐏 and 𝐐 where subcurves have a size in [m_0^β, 2 m_0^β] (and the associated data structure) in O( n_0 m_0 log m_0) time. Hence, when the counter reaches c = m_0/2, we can rebuild the data structure in O(n_0 m_0 log m_0) time. This yields amortized update time O(n m^βlog m). In what follows we apply a classic deamortization scheme, to prove the lemma. We maintain at all times the above data structure twice, referring to them as the first and second copy.Each copy stores a counter, c_0 and c_1 respectively, that counts the number of updates processed by each copy.At all times, we maintain a pointer to one of the two copies, indicating the current `up to date' data structure.We denote by n_0 and m_0 the initial size of P and Q respectively (before any updates) and by n and m the current size of Q.We assume that the first copy has, before receiving any updates, P and Q partitioned into subcurves of size [m_0^β, 2 m_0^β] and that is has recorded the value m_0.Our deamortization scheme ensures that we always perform fewer than m_0/2 updates to the first copy.Our counter c_0 starts at 8 m_0/32.We note that for readability, we over-estimate our constants to be able to write them as multiples of two.When c_0 = 9 m_0/32, we record the value m_1 = m and store it in the second copy. Note that m_1 ∈ [1/2m_0, 6/4 m_0].In addition, we record the curves P^1 = P, Q^1 = Q, n_1 = n, and c_1 = 0. From this point onwards, we start recording updates to the first copy in a queue. Whilst c_0 ∈ [9 m_0/32, 10 m_0/32], we construct as our second copy our data structure on (P^1, Q^1) in O(n_1 m_1 log m_1) total time, doing Θ(n_1 log m_1) = Θ(n log m) work per update. 
When c_0 = 10 m_0/32, the queue of the first copy contains at most m_0/32≤2 m_1/32 elements.From hereon, each time time c_0 is incremented, we perform an update in the first copy, add it to the queue, dequeue up to four updates from the queue and apply them to the second copy (incrementing c_1 by four).When c_0 = 11 m_0/32, both the first and second copy store the same data structure. Moreover, c_1 ≤ 4 ·m_0/32≤8 m_1/32. We continue applying all updates to both data structures (incrementing c_1 and c_0 by 1) until c_1 = 8 m_1/32. At this point, we record P^0 = P, Q^0 = Q, m_0 = m, n_0 = n and set c_00.We note that m_0 ∈[1/2m_1, 6/4 m_1]. From hereon, we perform the process with the two copies exchanged.Since at all times, c_0 ≤m_0/2 and c_1 ≤m_1/2, we may always apply Lemma <ref> to perform our O(1) updates in O(n m^βlog m) time. Computing the DTW distance.Finally, we are ready to show our main theorem:* We store P and Q in the data structure of Lemma <ref> which has the desired space usage and update time.What remains is to show that we can compute the DTW distance between P and Q. Consider the rectangular graph G and the partition 𝐏 = (P_1, … P_N) and 𝐐 = (Q_1, …, Q_M).Consider the set of all rectangular subgraphs R^ab for P_a and Q_b with a∈ [N] and b ∈ [M]. In O(n) time, we compute for every integer i ∈ [n] the cost of the vertical path from (1, 1) to (i, 1).Similarly, for each j ∈ [m] we compute the cost of the horizontal path from (1, j).We denote these vertices of G as the “wavefront” 𝒲.(Note that, whilst our paths are xy-monotone curves that are increasing, the wavefront is a decreasing xy-monotone curve.) Throughout our algorithm, we maintain the invariant that for each vertex (i, j) ∈𝒲, we store the value d_G( (1, 1), (i, j)).We iteratively expand 𝒲 as follows.At each iteration, there exists at least one rectangular graph R^ab whose left and top facets coincide with 𝒲.Denote by S all vertices on the left and top facet of R^ab and by T all vertices on the right and bottom facet. We remove all (a, b) ∈ S from 𝒲, and add all (x, y) ∈ T to 𝒲.This ensures that 𝒲 remains an xy-monotone curve. To satisfy our invariant, we need to compute a vector where each coordinate corresponds to a point (x, y) ∈ T and where the value at that coordinate records d_G( (1, 1), (x, y)).Observe that any xy-monotone path in G from (1, 1) to (x, y) ∈ T must go through a vertex (i, j) ∈ S. Thus, the length of theshortest xy-monotone path in G from (1, 1) to (x, y) ∈ T is equal to:d_G( (1, 1), (x, y) )= min_ (i, j) ∈S(d_G( (1, 1), (i, j)) -d(p_i,q_j) +d_R^ab( (i, j), (x, y) ) ) =min_ (i, j) ∈S( d_G( (1, 1), (i, j))+ d_D^ab ( (i, j),(x, y)) ).Given this relation between distances in G and distances in our alignment graphs D^ab, we can compute our desired output by applying Lemma <ref> for the vector assigning d_G( (1, 1), (i, j)) to each (i,j)∈ S.In O(m^β) time, we iterate over each (x, y) ∈ T, for which there exists a unique entry in the output vector that stores the value d_G( (1, 1), (x, y) ) =min_ (i, j) ∈ S ( d_G( (1, 1), (i, j)) + d_D^ab ( (i, j), (x, y)) ), and add (x, y) to our wavefront. It follows that, in O(m^β) time, we processed R^ab, removing S from the wavefront, adding T and maintaining our invariant. After O( nm^1-2β ) iterations (taking O( nm^1-β) total time), we process the last rectangle R^NM and thus add the point (n, m) to our wavefront.Via our invariant, we have computed the shortest xy-monotone path in G from (1, 1) to (n, m) and therefore the DTW distance between P and Q. 
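For reference, the following sketch spells out the quadratic-time dynamic program that the wavefront algorithm above accelerates; the distance d is instantiated as an absolute difference, which matches curves on the real line but is otherwise an illustrative choice.

```python
# A minimal reference implementation of DTW(P, Q) as the cheapest xy-monotone
# path in the vertex-weighted grid, where vertex (i, j) costs d(P[i], Q[j]).
def dtw(P, Q, d=lambda a, b: abs(a - b)):
    n, m = len(P), len(Q)
    INF = float("inf")
    cost = [[INF] * m for _ in range(n)]
    cost[0][0] = d(P[0], Q[0])
    for i in range(n):
        for j in range(m):
            if i == j == 0:
                continue
            best = INF
            if i > 0:
                best = min(best, cost[i - 1][j])        # vertical step
            if j > 0:
                best = min(best, cost[i][j - 1])        # horizontal step
            if i > 0 and j > 0:
                best = min(best, cost[i - 1][j - 1])    # diagonal step
            cost[i][j] = best + d(P[i], Q[j])
    return cost[n - 1][m - 1]
```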
§ REDUCING FROM INTERMEDIARY TO DYNAMIC DTW In this section, we study the Intermediary problem.We note that to better match previous results, we index from 0 to (n-1).*For any instance of Intermediary, we show that one may maintain two curves P and Q, where P has n ∈ O(n_r) vertices and Q has m ∈ O(n_c) vertices, so that every update in Intermediary corresponds to changing the position of four vertices in Q.Our curves are created in such a way that we may compute from (P, Q) the output of Intermediary in O(1) time. The reduction For a fixed instance of Intermediary, our construction (Figure <ref>) takes place on the real line and maps every row i to a curve α_i and every column j to a curve β_j.The curve P is simply the concatenation over rows i = 0 to (n_r - 1) of α_i. The curve Q is the concatenation over columns j = 0 to (n_c - 1) of β_j. Note that an update Update(j, x) in Intermediary then corresponds to translating all vertices in β_j to the vertices of the new curve β_j'. Hence, any update in Intermediary is realized by O(1) translations in Q. Denote by ⋆ the point -U^5∈.Denote bya curve that visits the point ⋆ eight times consecutively.Every row i in Intermediary defines the curve α_i:→ α_i^1→ α_i^2 → α_i^3→ α_i^4→ = →U^4 + 2r_i U^3+ d_i4 →2U^4 + 2 r_i U^3 - d_i4 →3U^4 - 2 r_i U^3 + d_i4 →4U^4 - 2 r_i U^3 - d_i4 →Every column j in Intermediary defines a curve β_j:→ β^1_j→ β^2_j → β^3_j→ β^4_j→ = →U^4 + 2 c_j U^3 - U→ 2U^4 + 2 c_j U^3 + U→ 3U^4 - 2 c_j U^3 + (-1)^b_j U→ 4U^4 - 2 c_j U^3 - (-1)^b_j U → We denote by P the curve obtained by concatenating, over all rows i∈ [n_r], the curves α_i.We denote by Q the curve obtained by concatenating, over all columns j∈ [n_c], the curves β_j. We denote by R the O(n_r) × O(n_c) vertex-weighted rectangular graph induced by (P, Q), as defined in <ref>. For any i, j, the curves α_i and β_j induce a 20 × 20 vertex-weighted rectangular graph which we call the gadget R_ij.Each gadget R_ij is a subgraph of R. We call vertices incident to the boundary facets of R_ij the boundary vertices. Each pair (x, y) of vertices in α_i ×β_j corresponds to a vertex in R_ij. We assign these vertices a color as follows: * If x = y = ⋆ then the vertex is orange. * If either x or y equals ⋆ (but not both) then the vertex is white. * If x = α_i^k and y = β_j^k for some k ∈ [4], then the vertex is grey.* Otherwise, the vertex is yellow. Any xy-monotone path that realises (P, Q) uses as few white vertices as possible, then as few yellow vertices as possible and finally as few grey vertices as possible.Reducing from Intermediary. We show the following desirable property of our curves P and Q:For our curves P and Q, there exists an xy-monotone path π^* realizing (P, Q) that contains no white boundary vertices. For a proof by contradiction, suppose that every path realizing (P,Q) contains a white boundary vertex. Let us fix a path π that visits the fewest such vertices. First, suppose that π visits a white boundary vertex u located on the boundary of the entire graph R. By symmetry, we may assume that u lies in the first row of R. Let (2,y) be the first vertex on π that lies in the second row of R, and let π_1 be the prefix of π from the origin (1,1) to (2,y). Consider the following alternative path π'_1 : (1,1) → (2,2) → (2,3) →⋯→ (2,y).Observe that, for each column y'∈ [y], the cost of (1,y') is the same as the cost of (2,y'), and π_1 must visit at least one of these two vertices.Consequently, π_1 is at least as expensive as π'_1. 
At the same time, π'_1 avoids white boundary vertices, whereas π_1 contains at least one such vertex (u). Thus, by replacing π_1 by π'_1, we transform π into a path π' that contains fewer white boundary vertices, contradicting the choice of π.Henceforth, we may assume that π contains a white boundary vertex that is not located on the boundary of the entire graph R. Let us take the first such vertex u (along π). Let B be the connected component of u in the subgraph of R spanned by white boundary vertices; note that B is a box spanning two rows and four columns (or, symmetrically, spanning four rows and two columns). Moreover, let W be the maximum box of white vertices containing B (it spans 16 rows and 4 columns, or 4 rows and 16 columns).Let t be the last vertex of π that lies above or to the left of W, and let v be the first vertex of π that lies below or to the right of W.If t is above W and v is to the right of W, then the t v subpath of π can be rerouted along the row just above W and the column just to the right of W (we use a diagonal edge whenever we switch from a row to a column or vice versa). Such a detour does not contain any white vertices, so it is cheaper than the original path, contradicting the minimality of π (Observation <ref>)If t is to the left of W and v is below W, then the t v subpath of π can be rerouted along the column just to the left of W and row just below W. Again, such a detour does not contain any white vertices, so it is cheaper than the original path, contradicting the minimality of π.If t is above W and v is below W, then the t v path contains at least 16 white vertices (one per row of W). In this case, let s be the last vertex of π that lies to the left of W. Since u was the first white boundary vertex on π, then s must be located within the same gadget R_ij as the upper half of W. Consequently, the s v path that goes along the column just to the left of W and then along the row just below W contains at most 4 internal white vertices (one for each of the middle four rows of R_ij). Such a detour is thus cheaper than the original path,contradicting the minimality of π. Finally, suppose that t is to the left of W whereas v is to the right of W. In this case, let s be the last vertex of π that lies above B or to the left of W. Since u was the first white boundary vertex on π, then s must be located with the same gadget R_ij as the upper half of B, or within the right half of the adjacent gadget R_i(j-1). Consequently, the s v path that goes along the row of s and along the column just to the right of W contains exactly 4 interval vertices of positive cost: one white vertex per column of W. However, the original s v subpath of π must have also contained such 4 white vertices. The costs of white vertices within W are uniform along columns, so the detour is not more expensive. At the same time, the detour avoids B (and thus any white boundary vertices) whereas the original path contained u. This contradicts the definition of π. 
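The construction of the curves α_i and β_j above is mechanical; the following sketch builds P and Q explicitly. The names r, dwt, c, b (row identifiers and weights, column identifiers and activations of the Intermediary instance) are illustrative, and the points live on the real line as in the construction above.

```python
# A minimal sketch of the curve construction: every row i contributes the block
# alpha_i and every column j the block beta_j, each preceded and followed by the
# point star = -U**5 repeated eight times.  An update to column j only moves the
# four interior vertices of beta_j.
def alpha(i, r, dwt, U):
    star = [-U**5] * 8
    return star + [U**4 + 2*r[i]*U**3 + dwt[i]/4,
                   2*U**4 + 2*r[i]*U**3 - dwt[i]/4,
                   3*U**4 - 2*r[i]*U**3 + dwt[i]/4,
                   4*U**4 - 2*r[i]*U**3 - dwt[i]/4] + star

def beta(j, c, b, U):
    star = [-U**5] * 8
    sign = 1 if b[j] % 2 == 0 else -1            # (-1)**b_j
    return star + [U**4 + 2*c[j]*U**3 - U,
                   2*U**4 + 2*c[j]*U**3 + U,
                   3*U**4 - 2*c[j]*U**3 + sign*U,
                   4*U**4 - 2*c[j]*U**3 - sign*U] + star

def build_curves(r, dwt, c, b, U):
    P = [x for i in range(len(r)) for x in alpha(i, r, dwt, U)]
    Q = [x for j in range(len(c)) for x in beta(j, c, b, U)]
    return P, Q
```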
Consider our curves P and Q and their induced rectangular graph. We define the blocks (denoted by 𝐁) of this graph as all maximal connected components of orange vertices. Two blocks B_1, B_2 ∈𝐁 are:
* Horizontally adjacent if there exists a horizontal line that intersects B_1 and B_2 consecutively.
* Vertically adjacent if there exists a vertical line that intersects B_1 and B_2 consecutively.
* Diagonally adjacent if they are not horizontally/vertically adjacent and there exists a line with slope -1 that intersects B_1 and B_2 consecutively.

Let B_1, B_2 ∈𝐁 be two blocks that are horizontally (or vertically) adjacent. Then for any u ∈ B_1 and v ∈ B_2 the shortest xy-monotone path from u to v has weight 4U^5 + 10 U^4.

If (B_1, B_2) are horizontally adjacent then any shortest xy-monotone path from u to v consists of orange vertices plus exactly four white vertices corresponding to the pairs (⋆, β_j^1), (⋆, β_j^2), (⋆, β_j^3), (⋆, β_j^4) for some integer j. Since orange vertices have weight zero, it follows that the weight of this path is:
d(⋆, β_j^1) + d(⋆, β_j^2) + d(⋆, β_j^3) + d(⋆, β_j^4) = (U^5 + U^4 + 2 c_j U^3 - U) + (U^5 + 2U^4 + 2 c_j U^3 + U) + (U^5 + 3U^4 - 2 c_j U^3 + (-1)^b_j U) + (U^5 + 4U^4 - 2 c_j U^3 - (-1)^b_j U) = 4 U^5 + 10 U^4.
If (B_1, B_2) are vertically adjacent then any shortest xy-monotone path from u to v consists of orange vertices plus exactly four white vertices corresponding to the pairs (α_i^1, ⋆), (α_i^2, ⋆), (α_i^3, ⋆), (α_i^4, ⋆) for some integer i. Since orange vertices have weight zero, it follows that the weight of this path is:
d(α_i^1, ⋆) + d(α_i^2, ⋆) + d(α_i^3, ⋆) + d(α_i^4, ⋆) = (U^5 + U^4 + 2 r_i U^3 + d_i/4) + (U^5 + 2U^4 + 2 r_i U^3 - d_i/4) + (U^5 + 3U^4 - 2 r_i U^3 + d_i/4) + (U^5 + 4U^4 - 2 r_i U^3 - d_i/4) = 4 U^5 + 10 U^4.
This concludes the proof.

Let B_1, B_2 ∈𝐁 be two blocks that are diagonally adjacent. Denote by R_ij the unique gadget that intersects both blocks. Then for any u ∈ B_1 and v ∈ B_2 the shortest xy-monotone path from u to v has weight greater than U^3 if r_i ≠ c_j. It has weight 4U + d_i · b_j otherwise.

Since (B_1, B_2) are diagonally adjacent, any shortest xy-monotone path from u to v consists of orange vertices plus exactly four grey vertices contained in R_ij. Since orange vertices have weight zero, it follows that the weight of this path is:
d(α_i^1, β_j^1) + d(α_i^2, β_j^2) + d(α_i^3, β_j^3) + d(α_i^4, β_j^4) = | 2 (r_i - c_j) U^3 + d_i/4 + U | + | 2 (r_i - c_j) U^3 - d_i/4 - U | + | - 2 (r_i - c_j) U^3 + d_i/4 - (-1)^b_j U | + | - 2 (r_i - c_j) U^3 - d_i/4 + (-1)^b_j U |.
If r_i ≠ c_j this is at least U^3. If r_i = c_j and b_j = 0, this is equal to: (d_i/4 + U) + (d_i/4 + U) + (U - d_i/4) + (U - d_i/4) = 4 U. If r_i = c_j and b_j = 1, this is equal to: (d_i/4 + U) + (d_i/4 + U) + (d_i/4 + U) + (d_i/4 + U) = 4 U + d_i.

For any instance of Intermediary with n_r rows and n_c columns,
* If DTW(P, Q) ≥ |n_r - n_c| · (4 U^5 + 10 U^4) + U^3, then Intermediary outputs ∞.
* Else the output of Intermediary is equal to: DTW(P, Q) - |n_r - n_c| · (4 U^5 + 10 U^4) + |n_r - n_c| · U - min{ n_r, n_c }· 4 U.

By Lemma <ref> there exists a path π^* in the rectangular graph induced by P and Q that realises DTW(P, Q) and intersects no white boundary vertices. It follows immediately that π^* intersects a sequence of blocks 𝐁^* ⊂𝐁 where, for every two consecutive blocks B_1, B_2 ∈𝐁^*, B_1 and B_2 are either horizontally, vertically or diagonally adjacent. The path π^* may be partitioned into subpaths whose endpoints lie in consecutive blocks in 𝐁^*.
The weight of π^* is equal to the total weight of these subpaths. For any consecutive blocks B_1, B_2 ∈𝐁^* that are horizontally or vertically adjacent, by Lemma <ref>, the weight of any subpath of π^* with its endpoints in (B_1, B_2) is 4 U^5 + 10 U^4. For any consecutive blocks B_1, B_2 ∈𝐁^* that are diagonally adjacent (both intersecting the gadget R_ij), by Lemma <ref>, the weight of any subpath of π^* with its endpoints in (B_1, B_2) is:
* At least U^3 whenever r_i ≠ c_j.
* Equal to 4 U + d_i · b_j otherwise.

By Observation <ref>, the path π^* takes as few white vertices as possible (prioritizing diagonals consisting of grey vertices whenever possible). Thus, it contains exactly 4 |n_r - n_c| white vertices. This implies that there are exactly |n_r - n_c| pairs of consecutive blocks (B_1, B_2) that are horizontally or vertically adjacent, and min{ n_r, n_c } pairs of consecutive blocks (B_1, B_2) that are diagonally adjacent. By Lemma <ref>, the subpaths of π^* between blocks B_1 and B_2 that are vertically or horizontally adjacent have a total weight of exactly |n_r - n_c| (4 U^5 + 10 U^4 ). We may apply the same argument to Intermediary, noting that the horizontal and vertical edges taken in Intermediary have a total weight of exactly |n_r - n_c| U.

We now consider two cases. First, the case where Intermediary outputs ∞. In other words, the shortest (0, 0) → (n_r - 1, n_c - 1) path in Intermediary has weight at least |n_r - n_c| U + √(U). This occurs if and only if there does not exist a path Π in Intermediary from (0, 0) to (n_r - 1, n_c - 1) where for all diagonals from (i, j) to (i+1, j+1) in Π: r_i = c_j. It follows by Lemma <ref> that for the corresponding pairs of diagonal blocks (B_1, B_2), the shortest path from any vertex u ∈ B_1 to any vertex v ∈ B_2 has weight at least U^3. We note that 𝐁^* must include at least one consecutive pair (B_1, B_2) that is diagonally adjacent. The subpath of π^* between any such B_1 and B_2 has weight at least U^3, and the path π^* thus has weight at least |n_r - n_c| (4 U^5 + 10 U^4 ) + U^3.

Second, the case where Intermediary outputs a finite value. Consider each path Π in Intermediary from (0, 0) to (n_r - 1, n_c - 1) where for all diagonals from (i, j) to (i+1, j+1) in Π: r_i = c_j. Denote by D = { (i, j) } the set of diagonals taken by Π. The cost of Π is equal to |n_r - n_c| U + ∑_(i, j) ∈ D d_i · b_j. Since Π is xy-monotone, there exists at least one path π' in our rectangular grid graph where the corresponding block sequence 𝐁' contains min{ n_c, n_r } pairs of consecutive blocks B_1, B_2 ∈𝐁' that are diagonally adjacent, where every such B_1, B_2 share a gadget R_ij for a diagonal (i, j) ∈ D.

Now consider the set of all paths π' where the corresponding set of blocks 𝐁' contains min{ n_c, n_r } pairs of consecutive blocks B_1, B_2 ∈𝐁' that are diagonally adjacent, and where for all (B_1, B_2) that share a gadget R_ij: r_i = c_j. Denote by D' the set of all pairs (i, j) for these gadgets R_ij. By our above analysis, the weight of π' is equal to:
|n_r - n_c| (4 U^5 + 10 U^4 ) + ∑_(i, j) ∈ D' (4U + d_i · b_j) = |n_r - n_c| (4 U^5 + 10 U^4 ) + 4 min{ n_r, n_c } U + ∑_(i, j) ∈ D' d_i · b_j.
The path π^* equals the path π' with minimal weight and so the lemma follows.

For any instance of Intermediary, we may compute P and Q in O(n_c + n_r) time.
For each update in Intermediary, we only need to translate 4 vertices in Q in O(1) time to maintain (P, Q). By computing DTW(P, Q) we may answer a query in Intermediary in O(1) additional time. Thus:

*

Combining this with the lower bound (Theorem <ref>) from the next section gives:

*

§ INTERMEDIARY LOWER BOUND

In this section we prove the following theorem:

*

For example, Theorem <ref> implies that given polynomial preprocessing time, no data structure can have both the update and the query time significantly better than O(n ·√(m)). To this end, we recall the definition of Negative-k-Clique:

*

A switch in notation. To facilitate our proofs, we make a slight switch in notation. For starters, we assume that in the Intermediary problem we have n = n_r rows and m = n_c columns. We refer to any edge in Intermediary by its tail and its type (horizontal/vertical/diagonal). For example, the diagonal edge (i,j) is the edge from (i,j) to (i+1,j+1). When we say shortest path we always refer to a shortest path from (0,0) to (n-1, m-1). We sometimes refer to (0,0) as the top-left corner and to (n-1, m-1) as the bottom-right corner.

We denote for any integer A by [A] := {0, …, A-1}. We denote for A and B with B > A the integer intervals [A,B] = {A, A+1, …, B} and [A,B) = {A, A+1, …, B-1}. Any positive integer p∈[m] can be written as p = ∑_i∈ [log_2m] b_i · 2^i, for some Booleans b_i. We say that the binary number between bits x and y of p is ∑_i∈ [x,y] b_i · 2^i-x. For example, 4 = 1· 2^2 + 0· 2^1 + 0· 2^0, and the number between bits 1 and 2 is 1· 2^2-1 + 0· 2^1-1 = 2.

Finally, we use the notation Ω_k(·), Θ_k(·), O_k(·) to suppress factors depending only on k, the parameter of the Negative-k-Clique problem we are reducing from. Throughout the proofs, one can think of k as a sufficiently large constant.

§.§ Some initial tools

In our reduction from Negative-k-Clique to Intermediary we construct instances of Intermediary that satisfy the following additional restrictions. Let g = Θ_k(N^2) be a parameter we specify later. We assume that:
* n≥ m,
* both n-1 and m-1 are multiples of g,
* there exist constants A_-1, A_0, A_1, A_2, A_3, A_4, A_5 such that:
* A_-1 = 1,
* √(U)≥ A_5, and
* A_i = 100(n+N)^10k M · A_i-1 = N^O(k) for i ∈ [6], where M is defined as in Definition <ref>,
* for any row i, it holds that d_i < 2A_4,
* there always exists an increasing sequence s_0, s_1, …, s_m-2 such that 0≤ s_0 < s_m-2 < n-1 and r_s_i = c_i for all i∈ [m-1].

Intuitively, the gaps between the constants A_i are large enough to allow us to treat the constants independently. Furthermore, there always exists a path that uses only diagonals of weight smaller than √(U) and no horizontal edges (see Lemma <ref>). We later prove that the restrictions of <Ref> hold in the instances created by our reduction from Negative-k-Clique to Intermediary. Before that, we first prove some results related to instances having these restrictions. We start with the following simple observation, showing that it never helps to take a horizontal edge:

A shortest path from (0,0) to (n-1,m-1) uses no horizontal edges, exactly n-m vertical edges, and exactly m-1 diagonal edges. Furthermore, for each diagonal edge from (i,j) to (i+1,j+1) used by a shortest path, it holds that r_i=c_j and therefore the edge's weight is less than √(U). Finally, the total weight of any shortest path is less than (n-m)· U + √(U).

Let s_0, …, s_m-2 be the increasing sequence from <Ref>. Consider the following path P from (0,0) to (n-1,m-1). Whenever we are at vertex (i,j), if j=m-1, we move vertically until we reach (n-1,m-1).
Else, if i<s_j then we move (vertically) to (i+1,j). Else, we move (diagonally) to (i+1,j+1).To see that P reaches (n-1,m-1), we prove that it never reaches a vertex (i,j) with j<m-1 and i>s_j. This is true for the initial vertex (0,0) as s_0≥ 0. It remains true when we move vertically, because we only do so under the condition that i>s_j. It also remains true when we move diagonally, because either j+1=m-1 (trivially true) or we had i=s_j (inductively) and we move to (i+1,j+1); but by assumption s_j+1>s_j, meaning i+1 ≤ s_j+1. The above shows that when P is in vertex (i,j) with j<m-1, then i≤ s_j <n-1, meaning it does not stop there. Thus it will only stop at vertex (n-1,m-1). As P does not use any horizontal edge, this means that it takes exactly m-1 diagonal edges. Furthermore, in every step it proceeds by one row, meaning it takes n-1 edges in total. Therefore n-m of them are vertical edges, and the total cost of the path is (n-m)· U + ∑_j=0^m-2 d_s_j· b_j. As all diagonal edges used by the shortest path have weight less than √(U), and by Assumption <ref>, the total cost of the path is less than (n-m) · U + √(U).On the other hand, the maximum amount of diagonal edges on any shortest path is m-1, meaning that any shortest path needs to take at least n-m vertical edges. If it takes more vertical edges, or if it takes at least one horizontal edge, then the weight of the path is at least (n-m+1)· U, meaning that it cannot be a shortest path. Therefore it takes no horizontal edge, exactly n-m vertical edges, andexactly m-1 diagonal edges. If it takes a diagonal edge (i,j) with r_ic_j then the cost is at least (n-m) · U + √(U) and the path is again not a shortest path. We now give results related to a certain structure we use in the main reduction. Intuitively, we define a certain type of subgraph that we call a gadget, which can only be traversed diagonally in a shortest path. See Figure <ref> for an illustration.Let α_r, α_c be non-negative integers, and Γ be the rectangular subgraph induced by all vertices (i,j) with α _r≤ i ≤α_r + and α_c ≤ j ≤α_c +. We say Γ is a γ-gadget if: * for all (i,j) contained in Γ it holds that r_i=c_j if and only if i-α_r = j-α_c,* d_α_r = γ, b_α_c = 1,* d_α_r+-1 = -γ, b_α_r+-1 = 1,We say that the (i-α_r, j-α_c) diagonal edges are the main diagonal edges of the γ-gadget. The cost of a γ-gadget is equal to the sum of weights of its main diagonal edges ∑_p∈ [g] d_α_r+p· b_α_c+p.For ease of notation, we sometimes say that Γ is a gadget, instead of a γ gadget. From a high level view, the interesting property of a gadget is that a shortest path can either follow all its main diagonal edges, or not use any main diagonal edge of the gadget at all. Indeed, entering the gadget comes at some high cost γ for the main diagonal.Exiting the gadget through the last diagonal has an edge with a cost that includes -γ, to cancel out the earlier cost of γ.If γ is large enough then a shortest path that paid the (large) cost γ, must reach the last diagonal edge of the gadget, in order to gain the -γ.One more restriction of the instances created by our reduction from Negative-k-Clique tois the following:For any vertex (i,j) with both i and j being multiples of , we have an((n-i) ·)-gadget with (i,j) being its top left corner. Furthermore, every row i(where neither i nor i+1 is a multiple of ) has weight d_i at most n. We note that a vertex may be contained in up to four gadgets. 
For example, vertex (, ) is shared by four gadgets, the ones with their top left corner being (0, 0), (0, ), (, 0), (, ). On the other hand, every diagonal edge is contained in exactly one gadget.Given <Ref>, we obtain the following result:Assuming <Ref>, any shortest path that uses a main diagonal edge of a gadget must use all main diagonal edges of this gadget. Let P be a shortest path. If P is at the top left corner (i,j) of a gadget Γ and follows a vertical edge, then it must continue vertically until it reaches the top left corner (i+, j) of another gadget, because by Lemma <ref> P cannot use any non-main diagonal edge. Else, if P follows a (main) diagonal edge of Γ, it either follows all main diagonal edges of Γ, in which case it reaches the top left corner (i+,j+) of another gadget, or P follows the first main diagonal edge (i,j) of Γ but not all of them.Based on the above, and as P starts at (0,0), which is the top left corner of a gadget, there are two cases. Either the claim of the lemma directly holds, or there exists a first vertex (η',θ') which is the top left corner of a gadget Γ such that P follows the main diagonal edge (η', θ') of Γ, but not all the rest.As P does not use any horizontal edges, and we assumed it does not use all main diagonal edges of Γ, it cannot reach vertex (η'+, θ'+). Additionally, P does not use any non-main diagonal edge, meaning it must use some main diagonal edge (μ-1, θ'+-1) of another gadget, where μ is a multiple of . We show that we can modify the subpath of P between vertices (η'+1, θ'+1) and (μ, θ'+) while reducing the cost, thus contradicting the fact that P is a shortest path.Notice that in this subpath P uses μ-η'- vertical edges (cost (μ-η'-)U), and also pays - (n-μ-) · for the final diagonal edge. We instead use all the main diagonal edges of Γ, thus reaching (η'+, θ'+), and then use vertical edges to reach (μ, θ'+). The cost is (μ-η'-)U for the vertical edges, plus - (n-η') · for the final diagonal edge, plus the cost of the rest of the diagonal edges. By assumption, the cost of each of the rest of the diagonal edges is less than n. We conclude that the new cost is improved by at least - n^2 > 0.§.§ Reduction We now describe the reduction from Negative-k-Clique to . Weight function We assume that the input graph G_0=(V=V_0∪ V_1∪…∪ V_k-1, E, w) has N nodes, where N is a multiple of k, and that G is a complete k-partite graph. Each of the parts V_i contains exactly N/k nodes. We identify the N nodes with integers in [N], such that V_i = [i· N/k, (i+1)· N/k)]. Notice that any k clique must have exactly one node in each V_i.The function w encodes the weight of an edge, that is for u,v∈ E we have that the weight of the edge connecting u,v is w(u,v). Recall that M = N^O(k) is equal to ∑_u,v∈ E |w(u,v)|. We introduce two new auxiliary nodes α=N, β=N+1, and we extend w so that w(α, β)=w(α,u)=w(β,u)=2M, for each u∈ V.Similarly we extend w(u,u)=0 and w(u,v)=2M for u,v in the same part V_i and u v. This ensures that if S is a node set of size at least 2 containing α or β or two nodes from the same part V_i, then ∑_u,v∈ S, u<v w(u,v) is too large (at least M).Furthermore, we extend w to take sets as arguments. For two node sets S,T we define w(S,T) to be the sum of w(i,j) over all unordered pairs i,j with one endpoint in S and the other in T. Notice that this is not the same as ∑_u∈ S, v∈ T w(u,v); for example w(S,S) does not double count the weight of each pair. 
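The following sketch makes the set-valued extension of w explicit: unordered pairs with one endpoint in each set are counted once, so w(S,S) does not double count any pair. Representing w as a symmetric table is an illustrative choice.

```python
# A minimal sketch of w(S, T): sum w[i][j] over unordered pairs {i, j} with one
# endpoint in S and the other in T, counting each pair at most once.  Pairs with
# i == j are skipped, which is consistent with the convention w(u, u) = 0 above.
def w_sets(w, S, T):
    seen = set()
    total = 0
    for i in S:
        for j in T:
            if i == j:
                continue
            key = (min(i, j), max(i, j))
            if key not in seen:
                seen.add(key)
                total += w[i][j]
    return total
```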
Splitting the parts of V In what follows, we partition the set { V_i | i ∈ [k] } into four disjoint sets 𝒱_1, 𝒱_2, 𝒱_3, 𝒱_4. For each i∈{1,2,3,4}, we refer to eachV_j ∈𝒱_i as a part of V that contains N/k nodes.We may select for all parts V_j ∈𝒱_i one such node to create a sequence of vertices (v_0,v_1, … v_|𝒱_i|-1).There are then(N/k)^|𝒱_i| sequences that can be generated by picking vertices this way.We describe a very non-straightforward way to iterate over these sequences when i=1 or i=3. This helps us encode interesting information in our gadgets in Section <ref>.More formally, we use three positive parameters ρ_1, ρ_2, ρ_3, with ρ_1+ρ_2+ρ_3<1, that we fix later. Based on these parameters, we split the parts of V into four disjoint sets, the first containing ρ_1 k parts, the second ρ_2 k parts, the third ρ_3 k parts, and the last containing the rest of the parts (at most (1-ρ_1-ρ_2-ρ_3)k+3 parts). More formally, let * 𝒱_1 = V_0, …, V_ρ_1 k-1,* 𝒱_2 = V_ρ_1 k, …, V_ρ_1 k+ρ_2 k-1,* 𝒱_3 = V_ρ_1 k+ρ_2 k, …, V_ρ_1 k+ρ_2 k + ρ_3 k-1,* 𝒱_4 = V_ρ_1 k+ρ_2 k+ρ_3 k, …, V_k-1. For i∈2,4 let τ_i=(N/k)^|𝒱_i|. We denote by U_i the set of all τ_i sequences (containing exactly one node from each part in 𝒱_i). We order U_i in arbitrary order, and use the notation U_i(j) to refer to the j-th sequence in U_i. Iterating over all U_i(j), j∈ [τ_i], we generate all different sequences (each containing exactly one node from each part in 𝒱_i).For i∈1,3, our way of iterating over all different sets containing exactly one node from each part in 𝒱_i is more technical. Let N = 2^1+log_2N, that is N is twice the smallest power of 2 that is at least as large as N. Given a sequence of x nodes u_0, u_1, …, u_x-1, we let f(u_0, u_1, …, u_x-1) = ∑_i=0^x-1 u_i N^i. Notice that, as N is larger than N, for any y∈ [x], we can retrieve u_y, given f(u_0, u_1, …, u_x-1). Furthermore, since N is a power of 2, it suffices to read the binary number z between bits log_2N· y and log_2N· (y+1)-1, to retrieve u_y. In fact, as u_y consists of log_2N bits, the topmost bit of z is always zero. We need to ensure this technicality for reasons that will become apparent in Section <ref>.Seen in the reverse order, given a non-negative integer p ∈ [N^|𝒱_1|], we define a sequence U_1(p)=(u_0, u_1, …, u_|𝒱_1|-1). Let z_y be the binary number between bits log_2N· y and log_2N· (y+1)-1 of p, for y∈ [|𝒱_1|]. Then we define u_y=z_y if z_y ∈ V_y, and u_y=α otherwise.Similarly, given a non-negative p ∈ [N^|𝒱_3|], we define a sequence U_3(p)=(v_0, v_1, …, v_|𝒱_3|-1). Let z'_y be the binary number between bits log_2N· y and log_2N· (y+1)-1 of p, for y∈ [|𝒱_3|]. Then we define v_y=z'_y if z'_y ∈ V_|𝒱_1|+|𝒱_2|+y, and v_y=β otherwise.For i∈1,3, let τ_i = N^|𝒱_i|, and notice that iterating over all U_i(j), j∈ [τ_i], we generate all different sequences containing exactly one node from each part in 𝒱_i (along with some “garbage” sequences that contain the node α or β). Furthermore, τ_i = O_k(N^|𝒱_i|), so intuitively the redundancy we introduce is small.We use U_i(j) ∘ U_i'(j') to refer to the concatenation of the two sequences U_i(j) and U_i'(j'), for i,i'∈ [4], j∈ [τ_i], j'∈ [τ_i']. Applying IntermediaryFor a given instance of Negative-k-Clique, we define the following variables: * = 12+ log_2 + N + (|𝒱_1|+|𝒱_2|+|𝒱_3|)|𝒱_4|(N+2) + 2N(|𝒱_1|+|𝒱_2|)|𝒱_3|(N+2) * =, * =, * =, * =, * n= () +1, * m = () +1. Until <Ref>, we only need that g=O_k(N^2). 
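A small sketch of the encoding f defined above and of how a sequence is read back block by block, including the "garbage" rule by which a block that does not encode a node of the expected part is replaced by the auxiliary node α = N. The base is twice the smallest power of two that is at least N, so the y-th node is exactly the binary number between bits log_2(base)·y and log_2(base)·(y+1)-1. Function names are illustrative.

```python
# A minimal sketch (names illustrative) of the base-Nbar sequence encoding.
def nbar(N):
    p = 1
    while p < N:
        p *= 2
    return 2 * p                                 # twice the smallest power of two >= N

def encode(nodes, N):                            # f(u_0, ..., u_{x-1})
    base = nbar(N)
    return sum(u * base**y for y, u in enumerate(nodes))

def decode_U1(p, parts, N):                      # parts[y] = set of nodes of the y-th part
    base, alpha = nbar(N), N
    out = []
    for y in range(len(parts)):
        z = (p // base**y) % base                # the y-th block of bits of p
        out.append(z if z in parts[y] else alpha)
    return out
```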
We subsequently create an instance ofby creating a rectangular graph that consists of O(n/g) × O(m/g) gadgets (see Figure <ref>). The rows of our graph have a head and a tail of Y_1 gadgets, and a center of H gadgets.The columns of our graph have a head of 1 gadget, followed by a center of τ_2 B gadgets.This splits the center of the rectangular graphs (defined by the centers of the rows and columns) into τ_2 B blocks of H × B gadgets.The high-level idea of our reduction, is that we construct our gadgets in such a way that the shortest path incrosses exactly one block, and followsa path with certain properties within that block. Each combination of a block andpath with the aforementioned properties corresponds to a k-Clique in our problem instance. Showing that we may apply <Ref>. Notice that = Θ_k(N^2), n≥ m and n-1,m-1 are multiples of . Recall the constant A_i from Assumption <ref>. In <Ref> we specify the weight of each row and it is straightforward to verify that max_i∈ [n] d_i < 2. Let % be the modulo operator. We set the identifier of row i to r_i=i% and similarly the identifier of column j to c_j=j%. Therefore the increasing sequence of Assumption <ref> exists, with s_i=i being a witness.Finally, we let U = 1+n· m· (max_id_i·max_ir_i·max_jc_j· A_5)^2 = N^O(k), which ensures both the requirement from the statement ofthat U > n· m· (max_id_i·max_ir_i·max_jc_j)^2, and that √(U)≥ A_5. We conclude that <Ref> indeed holds. Showing that we may apply <Ref> We straightforwardly ensure <Ref>: We make each vertex (i,j) with both i,j being multiples ofthe top left corner of a (n - i) gadget. Furthermore, for any y such that neither y nor y+1 is a multiple of , we set the weight of row y to be at most n· (see <Ref> for a specification of the row weights).For ease of notation, we refer to the gadget with top left corner (i , j) as the (i,j) gadget. We say that we use the (x,y) gadget to denote that we move from (x, y) to ((x+1), (y+1)) using all the main diagonal edges of the gadget. As defined earlier, the cost of the (x,y) gadget is ∑_i∈ [] d_x+i b_y+i, that is the cost a path pays to use this gadget.Notice that Lemma <ref> applies, and it hints that we can view gadgets as single diagonal edges. In what follows we heavily use Lemma <ref>, even without explicitly stating it.Our reduction works in τ_4 epochs, each one corresponding to a different U_4(s). Each of these epochs is further divided into O(τ_1) phases, each one corresponding to a different U_1(·).When we are at the s epoch and the t phase, we say we are at phase (s,t). For every phase (s,t) we always have s∈ [τ_4], t∈ [τ_1] and U_1(τ_1-t-1)∌α.We now describe a certain type of path from (0,0) to (n-1,m-1) that we call a restricted path (see <Ref>). We later show that a shortest path needs to be a restricted path. 
For some j∈ [τ_2], l∈ [τ_3] during phase (s,t), a path from (0,0) to (n-1,m-1) is said to be a (j,l)-restricted path, or simply a restricted path, if it has the following form (Figure <ref>)* We move vertically from the top left corner of the (0,0) gadget to the top left corner of the (j+t,0) gadget.* We use the (j+t,0) gadget.* We move diagonally to the top left corner of the (, -j-t) = (, (τ_2-j-1)+τ_1-t) gadget.* We move vertically to the top left corner of the (+τ_1-t-1+l, (τ_2-j-1)+τ_1-t) gadget.* We move diagonally to the top left corner of the (+τ_1+l-1, (τ_2-j-1)+τ_1) gadget.* We use the (+τ_1+l-1, (τ_2-j-1)+τ_1) gadget.* We use the (+τ_1+l, (τ_2-j-1)+τ_1+1) gadget.* We move diagonally to the top left corner of the (+τ_1+t+l+2, (τ_2-j-1)+τ_1+t+3) gadget.* We move vertically to the top left corner of the (+2τ_1+τ_3, (τ_2-j-1)+τ_1+t+3) = (,(τ_2-j-1)+τ_1+t+3)) gadget.* We move diagonally to the top left corner of the (+j+τ_1-t-1, (τ_2-1)+2τ_1+2) = (+j+τ_1-t-1, τ_2) gadget. * We use the (+j+τ_1-t-1, τ_2) gadget. * We move vertically from ((+j+τ_1-t),m-1) to (n-1,m-1). For our reduction, we first describe the desired costs of the gadgets at phase (s,t), and show the lower bound. In <Ref> we show how to implement the gadgets, by the weights d_i of rows, as well as the activations b_j of columns.In what follows, one can think of h(i,j,l) as being equal to w(U_1(i)∘ U_2(j), U_3(l)). For reasons related to the actual implementation of of our gadget costs, h(·, ·, ·) is a function with a very technical definition, that we specify in <Ref>.Let h be a function satisfying the following constraints: h(i,j,l), with i∈ [τ_1], j∈ [τ_2], l∈ [τ_3], is equal to w(U_1(i)∘ U_2(j), U_3(l)) when U_1(i)∌α and U_3(l)∌β. Furthermore, for any i,j such that U_1(i)∌α, there exists an l_i,j∈ [τ_3] such that U_3(l_i,j) ∌β and h(i,j,l_i,j) ≤ h(i,j,l) for any l ∈ [τ_3]. Finally h(i,j,l) ≤ 2N^2M for any i,j,l.At phase (s,t), with U_1(τ_1-t-1)∌α, we say that the gadgets have the desired costs if the following hold:* The (j+t,0) gadgets, j∈ [τ_2], have cost + (τ_2-j) + 2M +w(U_1(τ_1-t-1)∘ U_2(τ_2-j-1), U_1(τ_1-t-1)∘ U_2(τ_2-j-1)∘ U_4(s)).* The ( + j+τ_1-t-1, τ_2) gadgets, j∈ [τ_2], have cost + j·. * The (y_1,0) gadgets and the (y_2, τ_2) gadgets, with y_1<, y_1 ∉j+t | j∈ [τ_2], y_2≥, y_2∉ + j+τ_1-t-1 | j∈ [τ_2], have cost at least +.* The (y,x) gadgets, for y< or y≥, and 0<x<τ_2, have cost .* The (+i+l, j+i+1) gadgets, with i∈ [τ_1-1], j∈ [τ_2], l∈ [τ_3], have cost + (-l) + 4M + h(i,j,l) - h(i+1,j,l).* The (+τ_1-1+l, j+τ_1) gadgets, with j∈ [τ_2], l∈ [τ_3], have cost + (-l) + 2M + h(τ_1-1,j,l).* The (+y, j+i+1) gadgets, i∈ [τ_1], j∈ [τ_2], y∈ [] ∖ [i,i+τ_3), have cost at least + (+i-y). * The (+τ_1+l, j+τ_1+1) gadgets, with j∈ [τ_2], l∈ [τ_3], have cost + 2M + w(U_3(l),U_3(l)∘ U_4(s)).* The (+y, j+τ_1+1) gadgets, with y∈ [τ_1]∪ [τ_1+τ_3,), and j∈ [τ_2], have cost at least +.* The (+y, j+τ_1+i+2) gadgets, with i∈ [τ_1], j∈ [τ_2], y∈ [], have cost + (-τ_1-1+y-i). * The (+y, j+2τ_1+2) gadgets, with j∈ [τ_2], y∈ [], have cost at least +. [inline]Probably we have 4 more constants than the edit distance paper. One is the constant for the gadget . We are probably not able to make it equal to U or √(U) in the curves... but even if we can, it's cleaner that way. Also theis not needed in edit distance; they use a larger cost (e.g. ) in the irrelevant entries, and 0 in the relevant ones. In our case I do not see how to do it, because we have weaker updates. And two related to the noise we allow... 
ϵ itself, and the constant with which we multiply our weights , in order to be higher than the noise. [inline]In the first column, they also add some . This is only so that every query has the same form. In our case it would get very ugly, because we also have the M terms that depend on t. We now show what the cost of a restricted path at phase (s,t) is:For any j∈ [τ_2], l∈ [τ_3] during phase (s,t), the (j,l)-restricted path has cost (n-m)U +() + τ_2 + 2(t+1)· + (4t+6)M + h(τ_1-t-1,τ_2-j-1, l) +w(U_1(τ_1-t-1)∘ U_2(τ_2-j-1), U_1(τ_1-t-1)∘ U_2(τ_2-j-1)∘ U_4(s)) + w(U_3(l),U_3(l)∘ U_4(s))We analyze the cost of the (j,l)-restricted path step by step.* We move vertically from the top left corner of the (0,0) gadget to the top left corner of the (j+t,0) gadget.Cost = (j+t)· U.* We use the (j+t,0) gadget.Cost = + (τ_2-j)· + 2gM + w(U_1(τ_1-t-1)∘ U_2(τ_2-j-1), U_1(τ_1-t-1)∘ U_2(τ_2-j-1)∘ U_4(s)).* We move diagonally to the top left corner of the (, -j-t) = (, (τ_2-j-1)+τ_1-t) gadget.Cost =(-j-t-1).* We move vertically to the top left corner of the (+τ_1-t-1+l, (τ_2-j-1)+τ_1-t) gadget.Cost = (τ_1-t-1+l) U.* We move diagonally to the top left corner of the (+τ_1+l-1, (τ_2-j-1)+τ_1) gadget.Cost telescoping to t(+(-l) · + 4M)+ h(τ_1-t-1,τ_2-j-1, l) - h(τ_1-1,τ_2-j-1,l).* We use the (+τ_1+l-1, (τ_2-j-1)+τ_1) gadget.Cost = +(-l) · + 2M+ h(τ_1-1,τ_2-j-1, l).* We use the (+τ_1+l, (τ_2-j-1)+τ_1+1) gadget.Cost =+ 2M + w(U_3(l),U_3(l)∘ U_4(s)).* We move diagonally to the top left corner of the (+τ_1+t+l+2, (τ_2-j-1)+τ_1+t+3) gadget.Cost =(t+1)(+(+l)·).* We move vertically to the top left corner of the (+2τ_1+τ_3, (τ_2-j-1)+τ_1+t+3) = (,(τ_2-j-1)+τ_1+t+3)) gadget.Cost =(τ_1+τ_3-t-l-2)· U.* We move diagonally to the top left corner of the (+j+τ_1-t-1, (τ_2-1)+2τ_1+2) = (+j+τ_1-t-1, τ_2) gadget. Cost =(j+τ_1-t-1).* We use the (+j+τ_1-t-1, τ_2) gadget. Cost =+j·.* We move vertically from ((+j+τ_1-t),m-1) to (n-1,m-1).Cost = (-j-τ_1+t)U. Summing up all the costs proves the lemma. During phase (s,t), let C_s,t be the weight of the minimum weight k-Clique that contains the nodes in U_1(τ_1-t-1) ∘ U_4(s). For j∈ [τ_2], l∈ [τ_3], the minimum cost of any (j,l)-restricted path is (n-m)U +() + τ_2 + 2(t+1)· + (4t+6)M + C_s,t - w(U_4(s), U_4(s)). For a phase (s,t), we always assume that s∈ [τ_4], t∈ [τ_1], U_1(τ_1-t-1)∌α. When j,l are such that U_3(l)∌β, we have that h(τ_1-t-1,τ_2-j-1, l) = w(U_1(τ_1-t-1)∘ U_2(τ_2-j-1), U_3(l)). Therefore the cost of any (j,l)-restricted path, with U_3(l)∌β, is(n-m)U +() + τ_2 + 2(t+1)· + (4t+6)M +w(U_1(τ_1-t-1)∘ U_2(τ_2-j-1), U_3(l)) + w(U_3(l),U_3(l)∘ U_4(s)) +w(U_1(τ_1-t-1)∘ U_2(τ_2-j-1), U_1(τ_1-t-1)∘ U_2(τ_2-j-1)∘ U_4(s)) = (n-m)U +() + τ_2 + 2(t+1)· + (4t+6)M +w(U_1(τ_1-t-1)∘ U_2(τ_2-j-1)∘ U_3(l) ∘ U_4(s), U_1(τ_1-t-1)∘ U_2(τ_2-j-1)∘ U_3(l) ∘ U_4(s)) - w(U_4(s), U_4(s)) By definition of C_s,t, the minimum cost of any (j,l)-restricted path, with U_3(l)∌β is therefore (n-m)U +() + τ_2 + 2(t+1)· + (4t+6)M + C_s,t - w(U_4(s), U_4(s)).We now need to argue that when j,l are such that U_3(l)∋β, the (j,l)-restricted path has larger cost. 
In these cases, by <Ref> there exists an l_τ_1-t-1,τ_2-j-1 such that U_3(l_τ_1-t-1,τ_2-j-1)∌β and h(τ_1-t-1,τ_2-j-1,l_τ_1-t-1,τ_2-j-1) ≤ h(τ_1-t-1,τ_2-j-1,l).At the same time we have that w(U_3(l_τ_1-t-1,τ_2-j-1),U_3(l_τ_1-t-1,τ_2-j-1)∘ U_4(s)) ≤ M by definition of M and the fact that U_3(l_τ_1-t-1,τ_2-j-1)∌β, while w(U_3(l),U_3(l)∘ U_4(s)) ≥ 2M by the fact that U_3(l_τ_1-t-1,τ_2-j-1) ∋β.We conclude that the (j,l)-restricted path's cost is larger than the (j,l_τ_1-t-1,τ_2-j-1)-restricted path's cost. We now show that during phase (s,t), the shortest path is a restricted path.At phase (s,t) any shortest path is a restricted path. The proof proceeds in four steps. Each step relates to some constants used in our construction: U and : By Lemmas <ref> and <ref>, a shortest path uses exactly τ_2+1 gadgets, exactly n-m vertical edges, and no horizontal edges. As every gadget has a cost of at least , and every vertical edge costs U, the cost of any shortest path is at least (n-m)· U + (τ_2+1). : Due to Lemma <ref> and Assumption <ref>, the cost of a shortest path must be smaller than (n-m)· U + (τ_2+1) +.If y_1≥ then vertex (n-1,m-1) is unreachable from the top left corner of the (y_1,0) gadgets. Similarly if y_2 < then the top left corner of the (y_2,τ_2) gadget is unreachable from (0,0). Out of the rest of the (y_1,0) gadgets and (y_2,τ_2) gadgets (for y_1 < , y_2 ≥), the only ones with cost smaller than + are the (j+t,0) gadgets and the (+j+τ_1-t-1, τ_2) gadgets.Therefore a shortest path must first move vertically to the top left corner of a (j_1+t,0) gadget and use it. Furthermore, at some point it must use some (+j_2+τ_1-t-1, τ_2) gadget, and from then on it can only move vertically to (n-1,m-1).We also prove that j_2 ≥ j_1. The reason is that the maximum number of (y,x) gadgets with y< we can use is -j_1-t, and the maximum number of (y,x) gadgets with y≥ we can use is j_2+τ_1-t. Therefore we need to use at least τ_2+1--(j_2-j_1)-τ_1+2t = 2t+3-(j_2-j_1) many (y,x) gadgets with y∈ [, ). If j_2 < j_1 then this is more thangadgets. But by construction, we would then need to use some (y, j+2τ_1+2) gadget, y∈ [, ), which would incur an extracost.This implies that the shortest path would have a cost of at least (n-m)· U + (τ_2+1) +, a contradiction. : The cost for using the (j_1+t,0) gadget is at least + (τ_2-j_1), while the cost for using the (+j_2+τ_1-t-1, τ_2) gadget is + j_2 ·.Due to Lemma <ref> and Assumption <ref>, the cost of a shortest path must be smaller than (n-m)· U + (τ_2+1) + (τ_2+1). As j_2≥ j_1, this implies that j_2=j_1. , part I: We claim that a shortest path starting from the (j_1+t+1,1) gadget uses all gadgets diagonally until it reaches the (, (τ_2-j_1-1) + τ_1 - t) gadget. Suppose this is not the case, then there exists some first (y, (τ_2-j_1-1) + τ_1 - t) gadget, y>, reached by the shortest path. * If y≤, then the shortest path used the (y-1, (τ_2-j_1-1) + τ_1 - t -1 ) gadget, and the cost was at least +. In this case we could improve the shortest path, by first moving diagonally from the top left corner of the (j_1+t+1,1) gadget to the top left corner of the (, (τ_2-j_1-1) + τ_1 - t) gadget, and then vertically to the (y, (τ_2-j_1-1) + τ_1 - t) gadget. This would use the same number of gadgets (but all of them would have cost , while in the original path at least one gadget costs at least an extra ) and the same number of vertical edges.* If y>, then the path cannot possibly reach the (+j_1+τ_1-t-1, τ_2) gadget. 
This is because even if it takes only diagonal edges after reaching the (y, (τ_2-j_1-1) + τ_1 - t) gadget, it reaches the (y+j_1+-τ_1+t, τ_2) gadget. But then it cannot reach the (+j_1+τ_1-t-1, τ_2) gadget, as y+j_1+-τ_1+t >+j_1+(2τ_1+2)-τ_1+t > +j_1+τ_1-t-1. A completely symmetrical argument shows that the shortest path reaches the (, (τ_2-j_1-1)+τ_1+t+3) gadget, moves diagonally to the top left corner of the (+j_1+τ_1-t-1, τ_2) gadget, uses it, and then moves vertically to (n-1,m-1)., part II So far we proved that for some j_1∈ [τ_2] a shortest path reaches the (, (τ_2-j_1-1)+τ_1-t) gadget, and then moves to the top left corner of the (, (τ_2-j_1-1)+τ_1+t+3) gadget. Therefore it needs to use some (y,(τ_2-j_1-1)+τ_1+1) gadget, y∈ [, ). Notice that if this gadget had cost at least +, the shortest path would have cost at least (n-m)· U + (τ_2+1) +; but this contradicts Lemma <ref>. Therefore it must use some (+τ_1+l_0, (τ_2-j_1-1)+τ_1+1) gadget, for some l_0∈ [τ_3].Additionally, the cheapest way to move from the top left corner of the (, (τ_2-j_1-1)+τ_1-t) gadget to the top left corner of the (+τ_1+l_0, (τ_2-j_1-1)+τ_1+1) gadget is by moving vertically to the top left corner of the (+τ_1-t-1+l_0, (τ_2-j_1-1)+τ_1-t) gadget and then diagonally to the (+τ_1+l_0, (τ_2-j_1-1)+τ_1+1) gadget. The reason is that for each x∈ [t+1], the shortest path must use some (+y, (τ_2-j_1-1)+τ_1-t+x) gadget, for y∈ []. It cannot be y>τ_1-t-1+l_0+x, because then the top left corner of the (+τ_1+l_0, (τ_2-j_1-1)+τ_1+1) gadget would be unreachable. If y<τ_1-t-1+l_0+x, then the cost is at least + (-l_0+1), while the cost of using y=τ_1-t-1+l_0+x is less than + (-l_0+1). Therefore the suggested path uses the gadget with the smallest cost, for every x.With a completely symmetrical argument, when the shortest path reaches the (+τ_1+l_0, (τ_2-j_1-1)+τ_1+1) gadget, it continues diagonally to the (+τ_1+l_0+t+2, (τ_2-j_1-1)+τ_1+t+3) gadget, and then vertically to the (, (τ_2-j_1-1)+τ_1+t+3) gadget.By the above discussion and Lemma <ref>, the shortest path at phase (s,t) is completely defined by a j_1∈ [τ_2] and an l_0∈ [τ_3]. Summing up all the costs (exactly as in Lemma <ref>), we get that the cost of the shortest path at phase (s,t) has costD_t + w(U_1(τ_1-1-t)∘ U_2(τ_2-j_1-1), U_1(τ_1-1-t)∘ U_2(τ_2-j_1-1) ∘ U_4(s)) + h(τ_1-1-t, τ_2-j_1-1, l_0) +w(U_3(l_0), U_3(l_0)∘ U_4(s)).Notice that if U_3(l_0)∋β, then the above expression is larger than in any other case with U_3(l) ∌β. Therefore we can assume U_3(l_0) ∌β for the shortest path. By Definition <ref> we therefore have h(τ_1-1-t, τ_2-j_1-1, l_0) = w(U_1(τ_1-1-t)∘ U_2(τ_2-j_1-1), U_3(l_0)). But then the above minimizes at D_t + (C_s,t-w(U_4(s),U_4(s))). Putting it all together, we conclude that the shortest path is in fact the (j_1,l_0)-restricted path.§.§ Gadget implementation For a gadget (η, θ), we say that its i-th row is row η+i, and similarly its j-th column is column θ+j.Let % denote the modulo operator. We use r_i = i%, c_j=j%, for all i,j. Therefore the i-th row of a gadget always has the same identifier, and similarly for the j-th column. We say that the corresponding column of the i-th row is the i-th column, and vice versa.We now give a high level overview of the weights of all rows and the activations of all columns. Suppose we have an (η, θ) gadget. * The first row has weight (n-η). Along with the last row, these two ensure that the gadget is an (n-η) gadget. * The next = 8 rows of every gadget have weights that are independent of each other. 
Each of them describe a different constant. * The next = log_2 rows of every gadget can be thought of as one block. The weight of each row is a different power of 2. The intuition is that we can encode numbers by activating the proper columns. * The next =N+2 rows of every gadget can be thought of as one block. Each of the topmostand the bottommostgadgets is associated with some set of nodes, and this block encodes this set. * The next =(|𝒱_1|+|𝒱_2|)|𝒱_4|(N+2) rows can be thought of as one block. It describes the cost between two node sets. One of them relates to the rows intersecting the gadget and contains one node in every part of 𝒱_1∪𝒱_2. The other node set relates to the columns intersecting the gadget and contains one node in every part of 𝒱_4. This block is further subdivided into sub-blocks. The cost of each sub-block is equal to the weight of an edge across the two node sets.* The next =|𝒱_3||𝒱_4|(N+2) rows can be thought of as one block, similar to the previous one. This time the rows relate to a node set with one node in every part of 𝒱_3. * The next =N(|𝒱_1|+|𝒱_2|)|𝒱_3|(N+2) rows can be thought of as N blocks. Each block describes the cost between two node sets. One of them relates to the diagonals (not the rows) intersecting the gadget and contains one node in every part of 𝒱_3. The other node set relates to the columns intersecting the gadget and contains one node in every part of 𝒱_1∪𝒱_2. Each block is further subdivided into sub-blocks, describing the weight of an edge across the two node sets. From a high-level view, the idea is that using N blocks we simulate different weights per row, which are essential in order to relate one of the node sets to the diagonals, instead of the rows.* The nextrows is a similar block, for technical reasons. * The last row has weight - (n-η). Therefore, as claimed, = 2+ +++++ 2 = 12+ log_2 + N + (|𝒱_1|+|𝒱_2|+|𝒱_3|)|𝒱_4|(N+2) + 2N(|𝒱_1|+|𝒱_2|)|𝒱_3|(N+2).We now define the weights of the rows more formally. It is straightforward to verify that for every row i its weight d_i is non-negative, less than 2, and in case neither i nor i+1 are a multiple of g then d_i ≤ n, as required by Assumptions <ref> and <ref>.We note that the weights of the rows at phase (s,t) do not depend on s or t. When we do not specify the weight of a row, it is implied that its weight is 0.Let (η, θ) be a gadget.First row of (η,θ) gadget: The weight of this row is(n-η). The nextrows of (η,θ) gadget: * If η=j+i, for some j∈ [τ_2], i∈ [τ_1], then the first row has weight (τ_2-j) + (g-(|𝒱_1|+|𝒱_2|)|𝒱_4|)2M + w(U_1(τ_1-i-1)∘ U_2(τ_2-j-1), U_1(τ_1-i-1)∘ U_2(τ_2-j-1)). Otherwise the first row has weight . * If η=+j+τ_1-i-1, for some j∈ [τ_2], i∈ [τ_1], then the second row has weight j·. Otherwise the second row has weight . * If η = +y, for some y∈ [], then the third row has weight (-y) + (-(|𝒱_1|+|𝒱_2|)|𝒱_3|)2M.* If η = +y, for some y∈ [], then the fourth row has weight (-(|𝒱_1|+|𝒱_2|)|𝒱_3|)2M.* If η = +y, for some y∈ [], then the fifth row has weight (+y-2τ_1) + (-(|𝒱_1|+|𝒱_2|)|𝒱_3|)2M.* If η = +y, for some y∈ [τ_1]∪ [τ_1+τ_3,), then the sixth row has weight .* If η = +τ_1+l, for l∈ [τ_3], then the seventh row has weight (-|𝒱_3||𝒱_4|)2M + w(U_3(l), U_3(l)). * If η = +y, for some y∈ [], then the eighth row has weight .The nextrows of (η,θ) gadget: If η∈ [, ), then the i-th of these rows has weight 2^i-1. The nextrows of (η,θ) gadget: If η=j + i or η=+j+τ_1-i-1 for some i∈ [τ_1], j∈ [τ_2], then the u-th row has weightfor every u∈ U_1(τ_1-i-1). Else the α-th row has weight . 
The nextrows of (η,θ) gadget: If η=j + i for some i∈ [τ_1], j∈ [τ_2], then let u_0, u_1, …, u_|𝒱_1|-1 be the sequence U_1(τ_1-i-1) and u_|𝒱_1|, u_|𝒱_1|+1, …, u_|𝒱_1|+|𝒱_2|-1 be the sequence U_2(τ_2-j-1). The weight of thei'|𝒱_4|(N+2) + j'(N+2) + u-th row, i'∈ [|𝒱_1|+|𝒱_2|], j'∈ [|𝒱_4|], u∈ [N+2], is equal to 2M+w(u_i',u). The nextrows of (η,θ) gadget: If η=+τ_1+l for some l∈ [τ_3], then let u_0, u_1, …, u_|𝒱_3|-1 be the sequence U_3(l). The weight of thei'|𝒱_4|(N+2) + j'(N+2) + u-th row, i'∈ [|𝒱_3|], j'∈ [|𝒱_4|], u∈ [N+2], is equal to 2M+w(u_i',u). The nextrows of (η,θ) gadget: These rows are non-zero only if η=+y for some y∈ [].For i' ∈ [|𝒱_3|], i∈ [N], let z_i'∈ [N] be the number described between bits log_2N· i' and log_2N· (i'+1) - 1 of y. If i'<|𝒱_1|, (z_i'-i) ∈ V_|𝒱_1|+|𝒱_2|+i' and i ∈ V_i', then let u_i',i = z_i' - i. Else if i'≥|𝒱_1| and z_i'∈ V_|𝒱_1|+|𝒱_2|+i' let u_i',i = z_i'. Else let u_i',i = β.The weight of the i|𝒱_3|(|𝒱_1|+|𝒱_2|)(N+2) + i'(|𝒱_1|+|𝒱_2|)(N+2) + j'(N+2) + u-th row, i∈ [N], i'∈ [|𝒱_3|], j'∈ [|𝒱_1|+|𝒱_2|], u∈ [N+2], is equal to 2M+w(u_i',i,u). The nextrows of (η,θ) gadget: They have the same weights with the previousrows, with the only difference being that the sign of the w(·,·) term is flipped.More formally, these rows are non-zero only if η=+y for some y∈ [].For i' ∈ [|𝒱_3|], i∈ [N], let z_i'∈ [N] be the number described between bits log_2N· i' and log_2N· (i'+1) - 1 of y. If i'<|𝒱_1|, (z_i'-i) ∈ V_|𝒱_1|+|𝒱_2|+i' and i ∈ V_i', then let u_i',i = z_i' - i. Else if i'≥|𝒱_1| and z_i'∈ V_|𝒱_1|+|𝒱_2|+i' let u_i',i = z_i'. Else let u_i',i = β.The weight of the i|𝒱_3|(|𝒱_1|+|𝒱_2|)(N+2) + i'(|𝒱_1|+|𝒱_2|)(N+2) + j'(N+2) + u-th row, i∈ [N], i'∈ [|𝒱_3|], j'∈ [|𝒱_1|+|𝒱_2|], u∈ [N+2], is equal to 2M-w(u_i',i,u).Last row of (η,θ) gadget: The weight of this row is - (n-η). Similarly, we define the activations of the columns of a gadget (η, θ) at phase (s,t). In this case, the activations of columns may depend on s or t. Whenever we do not specify some column, it is implied that it is not active. First column of (η,θ) gadget: This column is always active. The nextcolumns of (η,θ) gadget: * If θ=0 then the first column is active.* If θ=τ_2 then the second column is active.* If θ=j+i+1 for some i∈ [τ_1], j∈ [τ_2], then the third column is active.* If θ=j+i+1 for some i∈ [τ_1-1], j∈ [τ_2], then the fourth column is active* If θ=j+τ_1+i+2 for some i∈ [τ_1], j∈ [τ_2], then the fifth column is active.* If θ=j+τ_1+1 for some j∈ [τ_2], then the sixth and seventh columns are active.* If θ=j+2τ_1+2 for some j∈ [τ_2], then the eighth column is active.The nextcolumns of (η,θ) gadget: * If θ=j+i+1 for some i∈ [τ_1], j∈ [τ_2], then the activation of the x-th column is equal to the x-th bit in the binary representation of i·.* If θ=j+τ_1+i+2 for some i∈ [τ_1], j∈ [τ_2], then the activation of the x-th column is equal to the x-th bit in the binary representation of (τ_1-i-1).The nextcolumns of (η,θ) gadget: If θ=0 or θ=τ_2, then the u-th column is activated if and only if u ∉U_1(τ_1-t-1). We note that this is the only case where the activation of columns depends on t. The nextcolumns of (η,θ) gadget: If θ=0 then let v_0, v_1, …, v_|𝒱_4|-1 be the sequence U_4(s). The i'|𝒱_4|(N+2) + j'(N+2) + u_j'-th column is activated, i'∈ [|𝒱_1|+|𝒱_2|], j'∈ [|𝒱_4|]. The nextcolumns of (η,θ) gadget: If θ=j+τ_1+1, j∈ [τ_2], then let v_0, v_1, …, v_|𝒱_4|-1 be the sequence U_4(s). The i'|𝒱_4|(N+2) + j'(N+2) + u_j'-th column is activated, i'∈ [|𝒱_3|], j'∈ [|𝒱_4|]. 
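The block-internal row and column indices used above and below, of the form i'|𝒱_4|(N+2) + j'(N+2) + u, are simply a mixed-radix encoding of the triple (i', j', u). The following small Python sketch (our own illustration, not code from the paper; the helper names are hypothetical) makes the bijection explicit; the same pattern is used with |𝒱_1|+|𝒱_2| or |𝒱_3| as the middle radix in the other blocks.

# illustration only: encode/decode the triple (i', j', u) used in the index
# expressions i'*|V4|*(N+2) + j'*(N+2) + u (assuming u < N+2 and j' < radix_mid)
def encode_triple(i2, j2, u, radix_mid, N):
    return i2 * radix_mid * (N + 2) + j2 * (N + 2) + u

def decode_triple(idx, radix_mid, N):
    u = idx % (N + 2)
    j2 = (idx // (N + 2)) % radix_mid
    i2 = idx // ((N + 2) * radix_mid)
    return i2, j2, u

assert decode_triple(encode_triple(3, 1, 7, radix_mid=5, N=10), radix_mid=5, N=10) == (3, 1, 7)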
The nextcolumns of (η,θ) gadget: These columns are non-zero only if θ=j+i+1 for some i∈ [τ_1], j∈ [τ_2].For j'∈ [|𝒱_1|], let x_i,j' be the number described between bits log_2N· j' and log_2N· (j'+1) - 1 of i. Let u_i,j'=x_i,j' if x_i,j'∈ V_i', and u_i,j' = α otherwise. We activate the columns x_i,j'|𝒱_3|(|𝒱_1|+|𝒱_2|)(N+2) + i'(|𝒱_1|+|𝒱_2|)(N+2) + j'(N+2) + u_i,j', i'∈ [|𝒱_3|].For j'∈ [|𝒱_1|, |𝒱_1|+|𝒱_2|), let v_j' be the (j'-|𝒱_1|)-th node in U_2(j). We activate the columns i'(|𝒱_1|+|𝒱_2|)(N+2) + j'(N+2) + u_j', i'∈ [|𝒱_3|]. The nextcolumns of (η,θ) gadget: These columns are non-zero only if θ=j+i+1 for some i∈ [τ_1-1], j∈ [τ_2]. They have the same activations with the previouscolumns, with the only difference being that we use (i+1) wherever we previously used i, and that i∈ [τ_1-1] instead of i ∈ [τ_1].For j'∈ [|𝒱_1|], let x_(i+1),j' be the number described between bits log_2N· j' and log_2N· (j'+1) - 1 of (i+1). Let u_(i+1),j'=x_(i+1),j' if x_(i+1),j'∈ V_i', and u_(i+1),j' = α otherwise. We activate the columns x_(i+1),j'|𝒱_3|(|𝒱_1|+|𝒱_2|)(N+2) + i'(|𝒱_1|+|𝒱_2|)(N+2) + j'(N+2) + u_(i+1),j', i'∈ [|𝒱_3|].For j'∈ [|𝒱_1|, |𝒱_1|+|𝒱_2|), let v_j' be the (j'-|𝒱_1|)-th node in U_2(j). We activate the columns i'(|𝒱_1|+|𝒱_2|)(N+2) + j'(N+2) + u_j', i'∈ [|𝒱_3|].Last column of (η,θ) gadget: This column is always activated. Gadgets' costs We are now ready to prove that with the defined row weights and column activations, the costs of our gadgets are the desired ones.At phase (s,t), where U_1(τ_1-t-1)∌α, the cost of any gadget is the desired cost, according to Definition <ref>.For every gadget, the sum of the weight of its first and its last row is . Furthermore, the first and the last column of every gadget is activated. Therefore, the first and the last row contribute a cost of .We begin with the cases where the desired cost is related to the weights of edges in G_0, as these are the most technically challenging ones. The (j+t,0) gadgets, j∈ [τ_2]. Desired cost: + (τ_2-j) + 2M + w(U_1(τ_1-t-1)∘ U_2(τ_2-j-1), U_1(τ_1-t-1)∘ U_2(τ_2-j-1)∘ U_4(s)).From rows 1, …,, only the first row has both non-zero weight (τ_2-j) + (g-(|𝒱_1|+|𝒱_2|)|𝒱_4|)2M + w(U_1(τ_1-t-1)∘ U_2(τ_2-j-1), U_1(τ_1-t-1)∘ U_2(τ_2-j-1)), and the corresponding (first) column is activated.The nextrows all have weight 0.For the nextrows, we have non-zero weight () in row u if and only if u∈ U_1(τ_1-t-1). However, in these cases the corresponding columns are deactivated, therefore the total contribution is zero.Out of the nextrows, the corresponding columns activated are the i'|𝒱_4|(N+2) + j'(N+2) + u_j', where i'∈ [|𝒱_1|+|𝒱_2|], j'∈ [|𝒱_4|], and u_j' is the j'-th node of U_4(s). For each such column, the weight of the corresponding row is equal to the weight of the edge between the i'-th node of U_1(τ_1-t-1)∘ U_2(τ_2-j-1) and u_j', plus 2M. Summing up these costs gives (|𝒱_1|+|𝒱_2|)|𝒱_4|2M + w(U_1(τ_1-t-1)∘ U_2(τ_2-j-1), U_4(s)).The next +2 rows all have zero weight.Summing up all the costs, we get that the cost of such a gadget is + (τ_2-j) + 2M + w(U_1(τ_1-t-1)∘ U_2(τ_2-j-1), U_1(τ_1-t-1)∘ U_2(τ_2-j-1)∘ U_4(s)). The (+i+l, j+i+1) gadgets, with i∈ [τ_1], j∈ [τ_2], l∈ [τ_3]. Desired cost if i<τ_1-1: + (-l) + 4M + h(i,j,l) - h(i+1,j,l). Desired cost if i=τ_1-1: + (-l) + 2M + h(τ_1-1,j,l). From rows 1, …,, if i=τ_1-1 only the third row has both non-zero weight ((-i-l) + (g-(|𝒱_1|+|𝒱_2|)|𝒱_3|)2M) and the corresponding (third) column is activated. 
Else only the third and the fourth row have both non-zero weight ((-i-l) + (g-(|𝒱_1|+|𝒱_2|)|𝒱_3|)2M and (g-(|𝒱_1|+|𝒱_2|)|𝒱_3|)2M) and the corresponding columns (third and fourth) are activated.Out of the nextrows, the x-th of them has weight 2^x-1 and thecorresponding column is active if and only if the x-th bit in the binary representation of i· is 1. Therefore the total cost from these rows is i·.The next + rows all have zero weight.The nextrows all have corresponding columns that are not activated.The analysis for the rest of the rows is the most technically challenging part of the proof. Out of the nextrows, we examine all the corresponding columns that are activated. We take two cases: * For j'∈ [|𝒱_1|], let x_i,j' be the number described between bits log_2N· j' and log_2N· (j'+1) - 1 of i. Let u_i,j'=x_i,j' if x_i,j'∈ V_i', and u_i,j' = α otherwise. In other words, u_i,j' is the j'-th node of U_1(i).For i'∈ [|𝒱_3|], columns x_i,j'|𝒱_3|(|𝒱_1|+|𝒱_2|)(N+2) + i'(|𝒱_1|+|𝒱_2|)(N+2) + j'(N+2) + u_i,j' are activated.Let us fix i'∈ [|𝒱_3|]. To describe the weight of the corresponding row, first let z_i',i+l∈ [N] be the number described between bits log_2N· i' and log_2N· (i'+1) - 1 of i+l. If i'<|𝒱_1|, (z_i',i+l-x_i,j') ∈ V_|𝒱_1|+|𝒱_2|+i' and x_i,j'∈ V_i', then let v_i',x_i,j',i+l = z_i',i+l - x_i,j'. Else if i'≥|𝒱_1| and z_i',i+l∈ V_|𝒱_1|+|𝒱_2|+i' let v_i',x_i,j',i+l = z_i',i+l. Else let v_i',x_i,j',i+l = β.The weight of row x_i,j'|𝒱_3|(|𝒱_1|+|𝒱_2|)(N+2) + i'(|𝒱_1|+|𝒱_2|)(N+2) + j'(N+2) + u_i,j' is equal to 2M+w(v_i',x_i,j',i+l, u_i,j').Notice that ∑_p∈ [|𝒱_1|], q∈ [|𝒱_3|] w(v_q,x_i,p,i+l, u_i,p) is a function of i and i+l, therefore a function of i and l, which we call g(i,l). It holds that g(i,l) ≤ |𝒱_1||𝒱_3|2M for any i,l.Furthermore, assume that U_1(i)∌α and U_3(l) ∌β (thus u_i,j' = x_i,j').Then the (p+1)log_2N-1 bit of both U_1(i) and U_3(l) is always 0 for any p, meaning that when we add i and l, there is no carry from the (p+1)log_2N-1 to the (p+1)log_2N bit. In effect, the number between bits i'log_2N and (i'+1)log_2N-1 of i+l is the same as the sum of the number between bits i'log_2N and (i'+1)log_2N-1 of i and the number between bits i'log_2N and (i'+1)log_2N-1 of l. Viewing it the other way around, the number between bits i'log_2N and (i'+1)log_2N-1 of l (the i'-th node in U_3(l)) is equal to the number between bits i'log_2N and (i'+1)log_2N-1 of i+l minus the number between bits i'log_2N and (i'+1)log_2N-1 of i (this difference is exactly v_i',x_i,j',i+l).We conclude that if U_1(i)∌α and U_3(l) ∌β, then w(v_i',x_i,j',i+l, u_i,j') is the weight of the edge between the j'-th node of U_1(i) and the i'-th node of U_3(l). Therefore g(i,l) = w(U_1(i), U_3(l)) ≤ M.We now show that if α∈ U_1(i) or β∈ U_3(l), then g(i,l) is too large. This is later used to ensure the properties of h(·, ·, ·) specified by Definition <ref>. * If α∈ U_1(i) then g(i,l) contains at least |𝒱_3| terms that are 2M, and the sum of all negative terms is at least -M, by definition of M.* Similarly, if v_q,x_i,p,i+l = β, for any p,q, then g(i,l) has |𝒱_1| terms that are 2M, and the sum of all negative terms is at least -M, by definition of M.* If α∉U_1(i) but β∈ U_3(l), let r be the smallest term in U_3(l) that is equal to β. Then, as we argued previously and by definition of r, it should be that for r'≤ r we have that the r'-th node in U_3(l) is equal to v_r',x_i,j',i+l. 
Therefore v_r,x_i,j',i+l= β, and as in the previous case g(i,l) has |𝒱_1| terms that are 2M, and the sum of all negative terms is at least -M, by definition of M.* For j'∈ [|𝒱_1|, |𝒱_1|+|𝒱_2|) the activated columns are the i'(|𝒱_1|+|𝒱_2|)(N+2) + j'(N+2) + u_j', i'∈ [|𝒱_3|], where u_j' is the (j'-|𝒱_1|)th node of U_2(j). Notice that these columns are distinct from the ones activated in the previous case, as u_j'∈ V_j', while u_i,j was always either α or in some V_j” with j”∈ [|𝒱_1|]. The weight of these rows is as previously, but now we have that v_i',x_i,j',i+l = z_i',i+l ifz_i',i+l∈ V_|𝒱_1|+|𝒱_2|+i', and v_i',x_i,j',i+l = β otherwise.The weight of row i'(|𝒱_1|+|𝒱_2|)(N+2) + j'(N+2) + u_j' is equal to 2M+w(v_i',x_i,j',i+l, u_j').Notice that ∑_p∈ [|𝒱_2], q∈ [|𝒱_3|] w(v_q,x_i,p+|𝒱_1|,i+l, u_p+|𝒱_1|) is a function of i,j,l which we call g'(i,j,l) (we now have a dependence on j because of the definition of u_p+|𝒱_1|). This is upper bounded by |𝒱_2||𝒱_3|2M.With the same arguments as previously, if U_1(i)∌α and U_3(l) ∌β, then g'(i,j,l) = w(U_2(j), U_3(l)). Let h(i,j,l) = g(i,l) + g'(i,j,l). We conclude that the total cost is (|𝒱_1|+|𝒱_2|)|𝒱_3|2M + h(i,j,l). If U_1(i)∌α and U_3(l) ∌β then h(i,j,l) = w(U_1(i)∘ U_2(j), U_3(l)) ≤ M. On the other hand, if U_1(i)∋α orU_3(l)∋β then h(i,j,l)≥ g(i,l) ≥min|𝒱_1|,|𝒱_3|2M-M, which is at least 2M for sufficiently large k. Therefore, for any fixed i,j such that U_1(i)∌α, it holds that there exists an l_i,j such that U_3(l_i,j)∌β and h(i,j,l_i,j) ≤ h(i,j,l) for all l. Finally, for any i,j,l we upper bound h(i,j,l) by 2N^2M, using the upper bounds of g and g'.This proves the desired cost when i=τ_1-1, as the nextrows all have non-activated corresponding columns.When i<τ_1-1, then we have the exact same analysis for the nextrows, with the only difference being that we use i+1 instead of i, and we reverse the sign of the weights. Therefore we get an additional cost (|𝒱_1|+|𝒱_2|)|𝒱_3|2M - h(i+1,j,l) from these rows. Along with the additional ( - (|𝒱_1|+|𝒱_2|)|𝒱_3|)2M cost from the fourth row of the gadget, this proves we indeed get the desired cost.The (+τ_1+l, j+τ_1+1) gadgets, with j∈ [τ_2], l∈ [τ_3]. Desired cost: + 2M + w(U_3(l),U_3(l)∘ U_4(s)).From rows 1, …,, only the seventh row has both non-zero weight ((-|𝒱_3||𝒱_4|)2M + w(U_3(l), U_3(l))), and the corresponding (seventh) column is activated.Out of the next ++ rows, all their corresponding columns are not activated.Out of the nextrows, the corresponding columns activated are the i'|𝒱_4|(N+2) + j'(N+2) + u_j', with i'∈ [|𝒱_3|], j'∈ [|𝒱_4|], and u_j' is the j'-th node of U_4(s). For each such column, the weight of the corresponding row is equal to the weight of the edge between the i'-th node of U_3(l) and u_j', plus 2M. Summing up these costs gives |𝒱_3||𝒱_4|2M + w(U_3(l), U_4(s)).The next 2 rows all have non-activated corresponding columns.Summing up all the costs, we get + 2M + w(U_3(l),U_3(l)∘ U_4(s)). The ( + j+τ_1-t-1, τ_2) gadgets, j∈ [τ_2]. Desired cost: + j·.From rows 1, …,, only the second row has both non-zero weight j· and its corresponding (second) column is activated.The nextrows all have weight 0.Out of the nextrows, we have non-zero weight in row u if and only if u∈ U_1(τ_1-t-1). However, in these cases the corresponding columns are deactivated, therefore the total contribution is zero.The next + +2 rows all have weight 0.Therefore the cost of such a gadget is the desired cost + j·. 
The (y_1,0) gadgets and the (y_2, τ_2) gadgets, with y_1<, y_1 ∉{j+t | j∈ [τ_2]}, y_2≥, y_2∉{ + j+τ_1-t-1 | j∈ [τ_2]}. Desired cost: At least +.We only argue about the (y_1, 0) gadgets, as the situation is similar for the (y_2, τ_2) gadgets. We only need a lower bound, therefore we can ignore rows 1, …,+.For the nextrows we take two cases: * The α-th row has weight . Notice that the corresponding column is activated, because we always assume α∉U_1(τ_1-t-1). Therefore the gadget has cost at least +.* The α-th row does not have weight . From the definition of row weights, this means y_1=j+i for some i∈ [τ_1], j∈ [τ_2]. But as i ≠ t, we have that U_1(τ_1-i-1) ≠ U_1(τ_1-t-1). Let u be a node in U_1(τ_1-i-1) such that u∉U_1(τ_1-t-1). Then the u-th row has weight and the corresponding column is activated, meaning that again the cost of the gadget is at least +.The (y,x) gadgets, for y< or y≥, and 0<x<τ_2. Desired cost: .From rows 1, …,, only the first and the second row have non-zero weight, but the first and second columns are not activated.The nextrows all have weight 0.Out of the next + rows, all corresponding columns are not activated.The next + 2 rows all have zero weight.Therefore the cost of such a gadget is the desired cost . The (+y, j+i+1) gadgets, i∈ [τ_1], j∈ [τ_2], y∈ [] ∖ [i,i+τ_3). Desired cost: At least + (+i-y).From rows 1, …,, the third row has both non-zero weight (greater than (-y)) and the corresponding (third) column is activated. Out of the nextrows, the x-th of them has weight 2^x-1 and the corresponding column is activated if and only if the x-th bit in the binary representation of i· is 1. Therefore the total cost from these rows is i·.This proves that the cost of such a gadget is at least + (+i-y). The (+y, j+τ_1+1) gadgets, with y∈ [τ_1] ∪ [τ_1+τ_3,), and j∈ [τ_2]. Desired cost: At least +.From rows 1, …,, the sixth row has weight and the sixth column is activated.Therefore the cost of such a gadget is at least +. The (+y, j+τ_1+i+2) gadgets, with i∈ [τ_1], j∈ [τ_2], y∈ []. Desired cost: + (-τ_1-1+y-i).From rows 1, …,, only the fifth row has both non-zero weight ((+y-2τ_1)) and the corresponding (fifth) column is activated. Out of the nextrows, the x-th of them has weight 2^x-1 and the corresponding column is activated if and only if the x-th bit in the binary representation of (τ_1-i-1)· is 1. Therefore the total cost from these rows is (τ_1-i-1)·.Out of the next +++2 rows, all their corresponding columns are deactivated.We conclude that the cost of such a gadget is + (-τ_1-1+y-i). The (+y, j+2τ_1+2) gadgets, with j∈ [τ_2], y∈ []. Desired cost: At least +.From rows 1, …,, the eighth row has weight and the corresponding (eighth) column is activated. Therefore the cost of such a gadget is at least +. §.§ Lower boundWe finally prove our lower bound for . Recall that for i∈ [1,3] we have |𝒱_i| = ρ_i k, |𝒱_4| = k-|𝒱_1|-|𝒱_2|-|𝒱_3|. Additionally, for i∈ [1,4] we have τ_i = Θ(N^|𝒱_i|). Finally, n=n_r=Θ(τ_3+τ_1τ_2), m=n_c=Θ(τ_1τ_2).*Let ρ_1 = βγ/(c+2), ρ_2 = (1-β)γ/(c+2), ρ_3 = 1/(c+2). Notice that with these ρ_1, ρ_2, ρ_3 values, and as n=Θ(τ_3+τ_1τ_2), we get n=Θ(τ_3).For i∈ [|𝒱_1|] let u_i = i · N/k, and let the integer p=f(u_0, …, u_|𝒱_1|-1) be the encoding of this sequence (therefore U_1(p) = u_0, …, u_|𝒱_1|-1, and U_1(p)∌α). Given a Negative-k-Clique instance, for a sufficiently large constant k, we use the reduction of <Ref> to formulate an instance of . We properly set the activations of the columns so that we start at phase (0,p).
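The following Python-style sketch (our own summary; all names are hypothetical placeholders for the data structure operations and the reduction quantities described in the next paragraph) previews how the phases are iterated.

# high-level sketch of the phase iteration spelled out below (hypothetical names)
best_clique_cost = float("inf")
for s in range(tau_4):
    for t in admissible_t():                 # t in [tau_1] with alpha not in U_1(tau_1 - t - 1)
        update_column_activations(ds, s, t)  # only the boundary-gadget columns change
        path_length = query_shortest_path(ds)
        best_clique_cost = min(best_clique_cost, clique_cost_from_length(path_length, s, t))
# a Negative-k-Clique exists iff best_clique_cost < 0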
For any given s∈ [τ_4], we iterate over all phases (s,t) with t∈ [τ_1] and U_1(τ_1-t-1)∌α by properly updating our data structure. In each phase we query our data structure.Let C_s,t be the minimum cost of a k-Clique in G_0 that includes all nodes in U_1(τ_1-t-1) and U_4(s). By Lemma <ref> the shortest path at phase (s,t) is a restricted path. Therefore we can obtain the length of the shortest restricted path at phase (s,t) by querying the data structure. By Corollary <ref> we can retrieve C_s,t given the length of the shortest restricted path. As we iterate over all relevant (s,t), we can compute the minimum cost of any k-Clique, which means we can decide whether there exists a Negative-k-Clique. Concerning the running time, notice that we switch from a phase (s,t) to a phase (s,t') a total of O(τ_1 τ_4) times, and we only need to update the columns of the (y,0) and the (y,τ_2) gadgets, for any y. There are at most O(N^2) such columns. We switch from a phase (s,t) to a phase (s',t') with s' ≠ s a total of τ_4-1 times, and every time we need to update the columns of the (y,0), the (y,τ_2) and the (y,j+τ_1+1) gadgets, for j∈ [τ_2] and all y. There are O(τ_2 ) such columns.Therefore the time we spend to solve Negative-k-Clique is O(T_p(n,m) + (τ_1+τ_2) τ_4 N^2 T_u(n,m) + τ_1τ_4 T_q(n,m)). Assuming the Negative-k-Clique Hypothesis, for all δ'>0 we have that T_p(n,m) + (τ_1+τ_2) τ_4 N^2 T_u(n,m) + τ_1 τ_4 T_q(n,m) = Ω(N^k-δ'), that is, T_p(n,m) + (τ_1+τ_2) τ_4 N^2 T_u(n,m) + τ_1 τ_4 T_q(n,m) = Ω(τ_1 ·τ_2 ·τ_3 ·τ_4 N^-δ'), and therefore T_p(n,m)/τ_4 + (τ_1+τ_2) T_u(n,m) + τ_1 T_q(n,m) = Ω(τ_1 ·τ_2 ·τ_3 · N^-2-δ'). We now have that T_p(n,m)/τ_4 = O((N^ρ_3k+N^ρ_1k+ρ_2k)^c / N^k-ρ_1 k-ρ_2 k-ρ_3 k) = O(N^ck/(c+2) / N^(1-2/(c+2))k) = O(1). Therefore the term T_p(n,m)/τ_4 is negligible. As ρ_1 ≤ρ_2, we get τ_2 T_u(n,m) + τ_1 T_q(n,m) = Ω(τ_1 ·τ_2 ·τ_3 · N^-2-δ'). Thus either τ_2 T_u(n,m) = Ω(τ_1 ·τ_2 ·τ_3 · N^-2-δ') or τ_1 T_q(n,m) = Ω(τ_1 ·τ_2 ·τ_3 · N^-2-δ').Assume τ_2 T_u(n,m) = Ω(τ_1 ·τ_2 ·τ_3 · N^-2-δ'), then T_u(n,m)= Ω(τ_1 ·τ_3 · N^-2-δ') = Ω(N^ρ_1k-1· n · N^-2-δ')= Ω(m^β/N· n · N^-2-δ')= Ω(m^β· n · N^-3-δ'). But m = Θ(N^ρ_1 k+ρ_2 k), therefore there exists a sufficiently large k such that N^-3-δ' = Ω(m^-δ), which gives us that T_u(n,m) = Ω(n · m^β-δ). Repeating the same arguments gives that if τ_1 T_q(n,m) = Ω(τ_1 ·τ_2 ·τ_3 · N^-2-δ') then T_q(n,m) = Ω(n · m^1-β-δ). Finally, to prove that m ∈Ω(n^γ-ε) ∩ O(n^γ+ε) notice that: * m=Θ(N^ρ_1 k+ρ_2 k), thus m∈Ω(N^ρ_1 k+ρ_2 k-2) ∩ O(N^ρ_1 k+ρ_2 k) = Ω(N^γ/(c+2) k-2) ∩ O(N^γ/(c+2) k).* n=Θ(N^ρ_3 k), thus n∈Ω(N^ρ_3 k-1) ∩ O(N^ρ_3 k) = Ω(N^1/(c+2) k-1) ∩ O(N^1/(c+2) k).For sufficiently large k we get m∈ O(N^γ/(c+2) k) ≤ O(N^γ/(c+2)k - γ + ε/(c+2)k - ε) = O(N^((1/(c+2))k-1)(γ+ε)) ≤ O(n^γ+ε).Similarly, m∈Ω(N^γ/(c+2) k - 2) ≥Ω(N^γ/(c+2)k - ε/(c+2)k) = Ω(N^((1/(c+2))k)(γ-ε)) ≥Ω(n^γ-ε). | http://arxiv.org/abs/2310.18128v3 | {
"authors": [
"Karl Bringmann",
"Nick Fischer",
"Ivor van der Hoog",
"Evangelos Kipouridis",
"Tomasz Kociumaka",
"Eva Rotenberg"
],
"categories": [
"cs.CG"
],
"primary_category": "cs.CG",
"published": "20231027131945",
"title": "Dynamic Dynamic Time Warping"
} |
Department of Applied Physics, The University of Tokyo, 7-3-1 Hongo, Tokyo 113-8656, Japan
Research Center for Advanced Science and Technology, The University of Tokyo, 4-6-1 Komaba, Tokyo 113-8656, Japan
Research Center for Department of Applied Physics and Physico-Informatics, Keio University, 3-14-1 Hiyoshi, Japan
Research Center for Advanced Science and Technology, The University of Tokyo, 4-6-1 Komaba, Tokyo 113-8656, Japan
We analyze quantized pumping in a nonlinear non-Hermitian photonic system with nonadiabatic driving. The photonic system is made of a waveguide array, where the distances between adjacent waveguides are modulated. It is described by the Su-Schrieffer-Heeger model together with a saturated nonlinear gain term and a linear loss term. A topological interface state between the topological and trivial phases is stabilized by the combination of a saturated nonlinear gain term and a linear loss term. We study the pumping of the topological interface state. We define the transfer-speed ratio ω /Ω by the ratio of the pumping speed ω of the center of mass of the wave packet to the driving speed Ω of the topological interface. It is quantized as ω /Ω =1 in the adiabatic limit. It remains quantized for slow driving even in the nonadiabatic regime, which is a nonadiabatic quantized pump. On the other hand, there is almost no pumping for fast driving. We find a transition in pumping as a function of the driving speed.Nonadiabatic nonlinear non-Hermitian quantized pumping Satoshi Iwamoto January 14, 2024 ======================================================§ INTRODUCTION The topological insulator is a prominent idea in contemporary physics<cit.>. A typical example is the quantum Hall effect or a Chern insulator in a two-dimensional system described by the Chern number. The Thouless pump is a dynamical counterpart of a Chern insulator<cit.>, where the Chern number is defined in the space-time variables. The pumped charge per cycle is quantized. In particular, a topological edge state is pumped in the quasicrystal model<cit.>, the Rice-Mele model<cit.>, and the Su-Schrieffer-Heeger (SSH) model<cit.>.Photonic systems provide us with an ideal playground of topological physics <cit.>. Various topological phases are realized in photonic crystals by modulating the hopping amplitude spatially. The simplest example is the SSH model<cit.>. In particular, a large-area topological interface laser was theoretically proposed by using the topological interface state of the SSH model<cit.>. The Thouless pumping is realized by using spatially modulated waveguides<cit.>, where the hopping amplitudes between waveguides are spatially modulated by changing the distances between the adjacent waveguides. Dynamics is governed by the Schrödinger equation<cit.>, where the direction z of the waveguide acts as time t. Nonlinear Thouless pumping has been studied in photonic systems<cit.>. Recently, pumping by modulating the topological interface state of the SSH model was proposed in a linear Hermitian system<cit.>, which is not the Thouless pumping.Non-Hermicity<cit.> and nonlinearity<cit.> naturally arise in topological photonics, which has expanded the field of topological physics starting from condensed matter physics. Photon loss is well described by a non-Hermitian loss term. On the other hand, the gain has a saturation, which is described by a nonlinear term. Stable laser emission occurs in the presence of both of these terms<cit.>.
The interplay of non-Hermicity and nonlinearity is interesting<cit.>.The Thouless pumping is valid only in the adiabatic limit. The pumped charge is not quantized in the nonadiabatic regime<cit.>. It is an interesting question whether pumping can be quantized in the nonadiabatic regime in general.In this paper, we study nonadiabatic quantized pumping in a nonlinear non-Hermitian photonic system. The basic structure is described by the SSH model, which has the topological and trivial sectors. The interface state emerges at the interface between these two sectors. We investigate the pumping of the state by changing the pumping velocity, and find a transition between quantized charge pumping and no pumping. We define the transfer-speed ratio by the ratio ω /Ω, where ω is the pumping speed of the wave packet while Ω is the driving speed of the topological interface. A transfer-speed ratio of 1 means that the wave packet perfectly follows the driving. In slow pumping, the wave function of the interface state perfectly follows the modulation, and hence the transfer-speed ratio is 1. On the other hand, in fast pumping the wave function collapses because it cannot follow the modulation, and hence, the transfer-speed ratio is zero.§ MODEL We consider an array of spatially modulated waveguides<cit.>, as illustrated in Fig.<ref>. The basic structure is described by the SSH chain made of N sites as in Fig.<ref>(b), which is a bipartite system. We take N to be an even integer. The hopping parameter κ _A,n has a site dependence, while κ _B does not. They are given by κ _A,n( z) =κ( 1+λtanh[ ( n-n_IF( z) ) /ξ] ) , κ _B=κ, where n_IF( z) is the position of the interface as a function of z; λ >0 and ξ >0 represent the interface modulation amplitude and the interface width, respectively. Small (large) ξ represents a sharp (smooth) interface. The function κ _A,n-κ is illustrated in Fig.<ref>(a) with the choice of n_IF( z) =N/2. We set ξ =20, λ =0.5, κ =1, γ =0.1, χ =1 and η =10 in numerical studies, unless otherwise stated.The system is governed by<cit.> i dψ _n/dz=∑_m M_nmψ _m-iγ( 1-χ( 1-( -1) ^n) /2/( 1+|ψ _n| ^2/η) ) ψ _n, with a site-dependent hopping matrix representing the SSH model, M_nm( z) =([ 0 κ _A,n( z) 0 0 0 ⋯; κ _A,n( z) 0 κ _B 0 0 ⋯; 0 κ _B 0 κ _A,n( z) 0 ⋯; 0 0 κ _A,n( z) 0 κ _B ⋱; 0 0 0 κ _B 0 ⋱; ⋮ ⋮ ⋮ ⋱ ⋱ ⋱ ] ) , where ψ _n is the amplitude at site n, with n=1,2,3,⋯ ,N; γ represents the constant loss in each waveguide; γχ represents the amplitude of the optical gain via stimulated emission induced only at the odd sites; η represents the saturation parameter of the nonlinear gain<cit.>. All these parameters are positive. The system turns out to be a linear model in the limit η→∞ . On the other hand, γ controls the non-Hermicity, where the system is Hermitian for γ =0. In Eq.(<ref>) we measure z in units of 1/κ and the loss parameter γ in units of κ, where κ is defined in Eq.(<ref>).The explicit equations for a finite chain with length N follow from Eq.(<ref>) as i dψ _2n-1/dz= κ _Bψ _2n-2+κ _A,n( z) ψ _2n -iγ( 1-χ/( 1+|ψ _2n-1| ^2/η) ) ψ _2n-1, i dψ _2n/dz= κ _Bψ _2n+1+κ _A,n( z) ψ _2n-1-iγψ _2n. We solve this set of equations together with the boundary condition ψ _n( z=0) =δ _n,n_IF. It is convenient to regard z as time t. Then, Eqs.(<ref>) and (<ref>) are the Schrödinger equations describing a quench dynamics starting from the interface site by giving an input to it with the initial condition (<ref>). We consider a case where n_IF( z) =N/2 for the relaxation process z<z_0, and n_IF( z) =N/2+Ω( z-z_0) for the driving process z>z_0.
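As an illustration of how the explicit equations above can be integrated numerically, the following minimal Python sketch (our own, not the authors' code; the unit-cell indexing of the interface position, the chain length, and the Runge-Kutta integrator are assumptions of this illustration) propagates the quench and the subsequent driving with the parameter values quoted above.

import numpy as np

N_sites = 400            # number of sites (even); the paper uses N = 800
N_cells = N_sites // 2
kappa, lam, xi = 1.0, 0.5, 20.0
gamma, chi, eta = 0.1, 1.0, 10.0
z0, Omega = 100.0, 0.5   # driving starts at z0; here Omega = kappa/2 (slow driving)

def n_if(z):
    # interface position (in unit cells): fixed during relaxation, then moved linearly
    return N_cells / 2 if z < z0 else N_cells / 2 + Omega * (z - z0)

def rhs(z, psi):
    # dpsi/dz from i dpsi_n/dz = (hopping terms) + saturable gain / linear loss
    cells = np.arange(1, N_cells + 1)
    kA = kappa * (1.0 + lam * np.tanh((cells - n_if(z)) / xi))  # bond between sites 2n-1 and 2n
    odd, even = psi[0::2], psi[1::2]                            # psi_{2n-1}, psi_{2n}
    h_odd = kA * even
    h_odd[1:] += kappa * even[:-1]                              # kappa_B * psi_{2n-2}
    h_even = kA * odd
    h_even[:-1] += kappa * odd[1:]                              # kappa_B * psi_{2n+1}
    net_loss_odd = gamma * (1.0 - chi / (1.0 + np.abs(odd) ** 2 / eta))
    dpsi = np.empty_like(psi)
    dpsi[0::2] = -1j * h_odd - net_loss_odd * odd
    dpsi[1::2] = -1j * h_even - gamma * even
    return dpsi

# quench: single-site input near the interface, then fourth-order Runge-Kutta in z
psi = np.zeros(N_sites, dtype=complex)
psi[N_sites // 2] = 1.0
z, dz = 0.0, 0.01
for _ in range(int(200 / dz)):          # propagate up to z = 200
    k1 = rhs(z, psi)
    k2 = rhs(z + dz / 2, psi + dz / 2 * k1)
    k3 = rhs(z + dz / 2, psi + dz / 2 * k2)
    k4 = rhs(z + dz, psi + dz * k3)
    psi = psi + dz / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    z += dz

weights = np.abs(psi) ** 2
print("center of mass of |psi|^2:", np.sum(np.arange(N_sites) * weights) / np.sum(weights))

The printed center of mass corresponds to the wave-packet position whose evolution in z is analyzed below.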
See Fig.<ref>(c) for the modulation of the hopping parameters with this choice of Eqs.(<ref>) and (<ref>). In what follows, we may occasionally regard z as time t. The pumping is said adiabatic only for Ω≪κ, which means that any finite driving is nonadiabatic.§ NONADIABATIC REGIME We start with a review of the stationary solution in the case of Ω =0 , where the interface n_IF( z) is given by the constant (<ref>) in the hopping parameter (<ref>) for all z. This case was studied previously, where the stationary solution is found to be<cit.>Ψ _A( n)exp[ -κλ/ 2ξ( 1+c_2) (n-n_IF( z) )^2] , Ψ _B( n) -ix/ηc_2κλ/ξexp[ -κλ/2ξ( 1+c_2) (n-n_IF( z) )^2] ,together with (<ref>) for all z, where c_2 is a certain constant: See Eq.(78) in the reference<cit.>. This is a generalization of the Jackiw-Rebbi solution to a nonlinear non-Hermitian model. The quench dynamics was also examined, where the wave packet spreads from the delta function (<ref>) and reaches the stational distribution (<ref>) as in Fig.<ref>(a1) and (b1). Namely, the interface state (<ref>) is formed in the relaxation process (z<z_0 ).We investigate the driving process, where the interface n_IF( z) is modulated as in Eq.(<ref>) for z>z_0. In the adiabatic approximation (Ω≃ 0), the wave packet is expected to perfectly follow the interface state (<ref>) together with (<ref>). This is indeed the case, as numerically checked in Fig.<ref>(a2) and (b2) for Ω =κ /8.What is unexpected is the numerical result for larger values of Ω. For slow driving Ω <κ, the state maintains its form as in Fig. <ref>(a1)∼(a5), whose center perfectly follows the sample modulation (<ref>), as shown by the green line in Fig.<ref>(b1)∼(b5). On the other hand, for fast driving Ω >κ, the wave packet collapses and spreads, as shown in Fig.<ref>(b6)∼(a8) and (b6)∼(b8).We show the snapshot at z=100, 150 and 200 for Ω =κ /2, 1 and 4 in the system with z_0=100 in Fig.<ref>. The wave packet does not change its form for Ω =κ /2 and 1, as shown in Fig. <ref>(a) and (b), which leads to the quantized pumping. On the other hand, the wave packet is collapsed for Ω =4κ as shown in Fig. <ref>(c), where the pumping does not occur.We show the z evolution of the maximum value of the wave packet in Fig. <ref>(a). The maximum value increases in the relaxation process and reaches a stational value. It maintains as it is for slow driving (Ω <κ). On the other hand, it decreases as soon as fast driving (Ω >κ) is started at z=100. We plot the maximum value at z=200 in Fig.<ref>(b). It is constant for slow driving and decreases for fast driving.We have shown the existence of the critical value Ω =κ for the driving velocity. In order to determine this critical value more clearly, we calculate the expectation value of the mean position by⟨ x( z) ⟩≡∑_n( n-n_ IF( 0) ) |ψ _n( z) | ^2/∑_n|ψ _n( z) | ^2.We show the z evolution of ⟨ x( z) ⟩for various Ω in Fig.<ref>(a). It is almost zero in the relaxation process (z<z_0). On the other hand, it increases almost linearly in the driving process (z>z_0). Namely, ⟨ x( z) ⟩ is well approximated by⟨ x( z) ⟩ =ω( z-z_0) .We define the pumping velocity ω at each Ω by this formula.We then calculate numerically the ratio ω /Ω of the pumping speed ω to the driving speed Ω. We plot ω /Ω as a function of Ω in Fig.<ref>(b), where it is found that the pumping speed is almost identical to the driving speed, i.e., ω =Ω for Ω≤κ. It means that the wave packet follows the modulation n_IF( z). On the other hand, ω decreases as the increase of Ω. 
It means that the modulation is too fast for the wave packet to follow it. The transition occurs near Ω≃κ, where the system is nonadiabatic.The quantized pumping in the nonadiabatic regime requires both non-Hermicity and nonlinearity. The wave packet rapidly collapses for the Hermitian system (γ =0) as in Fig.<ref>(a). This is because the delta function contains components not only from the topological interface state but also from many bulk states, where the bulk states spreads rapidly over the sample<cit.>. The wave packet exponentially grows and does not reach the stationary solution for the linear non-Hermitian system ( γ≠ 0, η =∞ ) as in Fig.<ref>(b). The wave packet rapidly decreases if there is the loss term without the gain term (γ≠ 0,χ =0) as in Fig.<ref>(c).§ DISCUSSION We have shown that the nonlinear non-Hermitian pump is quantized even in the nonadiabatic regime. The combination of the saturated gain and linear loss greatly stabilizes the wave packet, and hence, it is robust even for nonadiabatic driving. It is highly contrasted to the Thouless pump, which is broken in the nonadiabatic regime<cit.>. Our work provides an example that the interplay among non-Hermicity and nonlinearity gives an intriguing phenomena in the nonadiabatic regime.We compare the maximum driving speed Ω_max in the nonadiabatic regime with those in the previous work<cit.>, where the system is Hermitian and linear. It is of the order of Ω_max/κ=30/5000 0.006 in the Thouless pumping based on the Rice-Mele model, and it is of the order of Ω_max/κ=31/400 0.075 in the pumping of the topological interface state. On the other hand, it is of the order of Ω_max/κ=1 in the present mechanism. Hence, the ratio is much faster than the previous ones. This is because that the wave packet is stabilized by the combination of the non-Hermicity and nonlinearity.Femto-second laser writing waveguide<cit.> or semiconductor waveguide<cit.> are used in experiments.Typical distance between waveguides is 15μ m (5μ m) and the length of the waveguide is 500mm(50μ m) for a femto-second laser writing<cit.> (semiconductor waveguide<cit.>) waveguide. We have set N=800, which corresponds to 12mm for a femto-second laser writing waveguide and 4mm for a semiconductor waveguide, in numerical simulations.M.E is supported by CREST, JST (Grants No. JPMJCR20T2) and Grants-in-Aid for Scientific Research from MEXT KAKENHI (Grant No. 23H00171). N. Ishida is supported by the Grants-in-Aid for Scientific Research from MEXT KAKENHI (Grants. No. JP21J40088). Y. Ota is supported by the Grants-in-Aid for Scientific Research from MEXT KAKENHI (Grants. Nos. 22H01994 and 22H00298). S. Iwamoto is supported by CREST, JST (Grants No. JPMJCR19T1) and the Grants-in-Aid for Scientific Research from MEXT KAKENHI (Grants. Nos. 22H00298 and 22H01994).99 Hasan M. Z. Hasan and C. L. Kane, Colloquium: Topological insulators, Rev. Mod. Phys. 82, 3045 (2010).Qi X.-L. Qi and S.-C. Zhang, Topological insulators and superconductors, Rev. Mod. Phys. 83, 1057 (2011).Thouless D.J. Thouless, Quantization of particle transport, Phys. Rev. B 27, 6083 (1983).Niu84 Q. Niu and D. J. Thouless, Quantised adiabatic charge transport in the presence of substrate disorder and many-body interaction, J. Phys. A: Math. Gen. 17 2453 (1984).Niu Q. Niu, Towards a quantum pump of electric charges, Phys. Rev. Lett. 64, 1812 (1990).Citro Roberta Citroand Monika Aidelsburger, Thouless pumping and topology, Nature Reviews Physics volume 5, pages 87–101 (2023).Kraus Yaacov E. 
Kraus, Yoav Lahini, Zohar Ringel, Mor Verbin and Oded Zilberberg, Topological States and Adiabatic Pumping in Quasicrystals, Phys. Rev. Lett. 109, 106402 (2012).Verbin Mor Verbin, Oded Zilberberg, Yoav Lahini, Yaacov E. Kraus and Yaron Silberberg, Topological pumping over a photonic Fibonacci quasicrystal, Phys. Rev. B 91, 064201 (2015).RWang R. Wang, X. Z. Zhang, and Z. Song, Dynamical topological invariant for the non-Hermitian Rice-Mele model, Phys. Rev. A 98, 042120 (2018)LonghiPump Stefano Longhi, Topological pumping of edge states via adiabatic passage, Phys. Rev. B 99, 155150 (2019)Raghu F. D. M. Haldane, S. Raghu, Possible Realization of Directional Optical Waveguides in Photonic Crystals with Broken Time-Reversal Symmetry, Phys. Rev. Lett. 100, 013904 (2008).KhaniPhoto A. B. Khanikaev, S. H. Mousavi, W.-K. Tse, M. Kargarian, A. H. MacDonald, G. Shvets, Photonic topological insulators, Nature Materials 12, 233 (2013).Hafe2 M. Hafezi, E. Demler, M. Lukin, J. Taylor, Robust optical delay lines with topological protection, Nature Physics 7, 907 (2011).Rech Mikael C. Rechtsman, Julia M. Zeuner, Yonatan Plotnik, Yaakov Lumer, Daniel Podolsky, Felix Dreisow, Stefan Nolte, Mordechai Segev and Alexander Szameit, Photonic Floquet topological insulators, Nature 496, 196 (2013).Hafezi M. Hafezi, S. Mittal, J. Fan, A. Migdall, J. Taylor, Imaging topological edge states in silicon photonics, Nature Photonics7, 1001 (2013).WuHu L.H. Wu and X. Hu, Scheme for Achieving a Topological Photonic Crystal by Using Dielectric Material. Phys. Rev. Lett. 114 , 223901 (2015).KhaniSh A. B. Khanikaev and G. Shvets, Two-dimensional topological photonics, Nature Photonics 11, 763 (2017).LuRev L. Lu. J. D. Joannopoulos and M. Soljacic, Topological photonics, Nature Photonics 8, 821 (2014).OzawaRev T. Ozawa, H. M. Price, A. Amo, N. Goldman, M. Hafezi, L. Lu, M. C. Rechtsman, D. Schuster, J. Simon, O. Zilberberg and L. Carusotto, Topological photonics, Rev. Mod. Phys. 91, 015006 (2019).OtaRev Y. Ota, K. Takata, T. Ozawa, A. Amo, Z. Jia, B. Kante, M. Notomi, Y. Arakawa, S.i Iwamoto, Active topological photonics, Nanophotonics9, 547 (2020).IwamotoRev S. Iwamoto, Y. Ota and Y. Arakawa, Recent progress in topological waveguides and nanocavities in a semiconductor photonic crystal platform, Optical Materials Express 11, 319 (2021).Jean P. St-Jean, V. Goblot, E. Galopin, A. Lemaitre, T. Ozawa, L. Le Gratiet, I. Sagnes, J. Bloch and A. Amo, Lasing in topological edge states of a one-dimensional lattice, Nature Photonics 11, 651 (2017).Parto M. Parto, S. Wittek, H. Hodaei, G. Harari, M. A. Bandres, J. Ren, M. C. Rechtsman, M. Segev, D. N. Christodoulides and M. Khajavikhan, Edge-Mode Lasing in 1D Topological Active Arrays, Phys. Rev. Lett.120, 113901 (2018).Zhao H. Zhao, P. Miao, M. H. Teimourpour, S. Malzard, R. El-Ganainy, H. Schomerus and L. Feng, Topological hybrid silicon microlasers, Nat. Com. 9, 981 (2018).Han Changhyun Han, Myungjae Lee, S. Callard, Christian Seassal and Heonsu JeonLasing at topological edge states in a photonic crystal L3 nanocavity dimer array, Light: Science & Applications volume 8, Article number: 40 (2019)Ota18 Y. Ota, R. Katsumi, K. Watanabe, S. Iwamoto and Y. Arakawa, Topological photonic crystal nanocavity laser, Communications Physics1, 86 (2018).Ishida N. Ishida, Y. Ota, W. Lin, T. Byrnes, Y. Arakawa and S. 
Iwamoto, A large-scale single-mode array laser based on a topological edge mode, Nanophotonics 11, 2169 (2022).SUSYLaser Motohiko Ezawa, Natsuko Ishida, Yasutomo Ota, Satoshi Iwamoto, Supersymmetric non-Hermitian topological interface laser, Phys. Rev. B 107, 085302 (2023).Kraus2 Yaacov E. Kraus, Zohar Ringel, Oded Zilberberg, Four-dimensional quantum Hall effect in a two-dimensional quasicrystal, Phys. Rev. Lett. 111, 226401 (2013).Ke Yongguan Ke, Xizhou Qin, Feng Mei, Honghua Zhong, Yuri S. Kivshar, Chaohong Lee, Topological phase transitions and Thouless pumping of light in photonic waveguide arrays, Laser Photonics Rev. 10, 6, 995 (2016).Zilber Oded Zilberberg, Sheng Huang, jonathan Guglielmon, Mohan Wang, Kevin P. Chen, yaacov E. Kraus Mikael C. rechtsman, Photonic topological boundary pumping as a probe of 4D quantum Hall physics, Nature 553, 59 (2018).Krivo S.G. Krivoshlykov, Quantum-Theoretical Formalism for Inhomogeneous Graded-Index Waveguides (Akademie Verlag, Berlin, 1994).Longhi S. Longhi, Quantum-optical analogies using photonic structures, Laser Photonics Review 3, 243 (2009).Jurg Marius Jurgensen and Mikael C. Rechtsman, Chern Number Governs Soliton Motion in Nonlinear Thouless Pumps, Phys. Rev. Lett. 128, 113901 (2022).Jurg2 Marius Jurgensen, Sebabrata Mukherjee, Christina Jorg, and Mikael C. Rechtsman, Fractionally Quantized Topological Nonlinear Thouless Pumping of Solitons, CLEO 2023, Technical Digest Series (Optica Publishing Group, 2023), paper FM1B.1.Mosta Nader Mostaan, Fabian Grusdt and Nathan Goldman, Quantized topological pumping of solitons in nonlinear photonics and ultracold atomic mixtures, Nature Communications 13, 5997 (2022).QFu Qidong Fu, Peng Wang, Yaroslav V. Kartashov, Vladimir V. Konotop, and Fangwei Ye, Nonlinear Thouless Pumping: Solitons and Transport Breakdown, Phys. Rev. Lett. 128, 154101 (2022).QFu2 Qidong Fu, Peng Wang, Yaroslav V. Kartashov, Vladimir V. Konotop, and Fangwei Ye, Two-Dimensional Nonlinear Thouless Pumping of Matter Waves, Phys. Rev. Lett. 129, 183901 (2022).Yuang Jiale Yuan; Chenran Xu; Han Cai; Da-Wei Wang, Gap-protected transfer of topological defect states in photonic lattices, APL Photonics 6, 030803 (2021).LFeng L. Feng, R. El-Ganainy and L. Ge, Non-Hermitian photonics based on parity–time symmetry, Nature Photonics, 11, 752 (2017).Weimann S. Weimann, M. Kremer, Y. Plotnik, Y. Lumer, S. Nolte, K. G. Makris, M. Segev, M. C. Rechtsman and A. Szameit, Topologically protected bound states in photonic parity–time-symmetric crystals, Nat. Mat.16, 433 (2017).GanaRev R. El-Ganainy, K. G. Makris, M. Khajavikhan, Z. H. Musslimani, S. Rotter and D. N. Christodoulides, Non-Hermitian physics and PT symmetry, Nature Physics, 14, 11 (2018).Ley D. Leykam and Y. D. Chong, Edge Solitons in Nonlinear-Photonic Topological Insulators, Phys. Rev. Lett. 117, 143901 (2016).Zhou X. Zhou, Y. Wang, D. Leykam and Y. D. Chong, Optical isolation with nonlinear topological photonics, New J. Phys. 19, 095002 (2017).Malzard S. Malzard and H. Schomerus, Nonlinear mode competition and symmetry-protected power oscillations in topological lasers, New. J. Phys. 20, 063044 (2018).SmiRev D. Smirnova, D. Leykam, Y. Chong and Y. Kivshar, Nonlinear topological photonics, Applied Physics Reviews 7, 021306 (2020).NLPhoto M. Ezawa, Phys. Rev. B 104, 235420 (2021).Harrari G. Harari, M. A. Bandres, Y. Lumer, M. C. Rechtsman, Y. D. Chong, M. Khajavikhan, D. N. Christodoulides, M. Segev, Topological Insulator Laser: Theory, Science 359, eaar4003 (2018).Bandres M. A. 
Bandres, S. Wittek, G. Harari, M. Parto, J. Ren, M. Segev, D. N. Christodoulides, M. Khajavikhan, Topological Insulator Laser: Experiments, , Science 359, 1231 (2018).Hass A.U. Hassan, H. Hodaei, M. A. Miri, M. Khajavikhan, and D.N. Christodoulides, Nonlinear reversal of the PT-symmetric phase transition in a system of coupled semiconductor microring resonators, Phys. Rev. A 92, 063807 (2015).MalzardOpt S. Malzard, E. Cancellieri, and H. Schomerus, Topological dynamics and excitations in lasers and condensates with saturable gain or loss , Optics Express 26, 22506 (2018).EzawaLaser M. Ezawa, Nonlinear non-Hermitian higher-order topological laser, Phys. Rev. Research 4, 013195 (2022).Privi Lorenzo Privitera, 1 Angelo Russomanno, 2,3 Roberta Citro, 4 and Giuseppe E. Santoro, Nonadiabatic Breaking of Topological Pumping, Phys. Rev. Lett. 120, 106601 (2018).Tuloup Thomas Tuloup, Raditya Weda Bomantara, Jiangbin Gong, Breakdown of quantization in nonlinear Thouless pumping, New J. Phys. 25 083048 (2023).Gara Ivan L. Garanovich, Stefano Longhi, Andrey A. Sukhorukov, Yuri S. Kivshar, Light propagation and localization in modulated photonic lattices and waveguides Physics Reports, 518, 1 (2012)Davis K. M. Davis, K. Miura, N. Sugimoto, and K. Hirao, Writing waveguides in glass with a femtosecond laser, Optics Letters 21, 1729 (1996)SzaFemto Alexander Szameit and Stefan Nolte, Discrete optics in femtosecond-laser-written photonic structures J. Phys. B: At. Mol. Opt. Phys. 43 163001 (2010)Foresi J. S. Foresi, P. R. Villeneuve, J. Ferrera, E. R. Thoen, G. Steinmeyer, S. Fan, J. D. Joannopoulos, L. C. Kimerling, Henry I. Smith and E. P. Ippen, Photonic-bandgap microcavities in optical waveguides, Nature 390, 143 (1997)Foster Mark A. Foster, Amy C. Turner, Jay E. Sharping, Bradley S. Schmidt, Michal Lipson & Alexander L. Gaeta, Broad-band optical parametric gain on a silicon photonic chip, Nature 441, 960 (2006)Sun Lu Sun, Hongwei Wang, Yu He, Yong Zhang, Guojing Tang, Xintao He, Jianwen Dong and Yikai Su, Broadband and Fabrication Tolerant Power Coupling and Mode-Order Conversion Using Thouless Pumping Mechanism Laser Photonics Rev. 16, 2200354 (2022) | http://arxiv.org/abs/2310.17987v1 | {
"authors": [
"Motohiko Ezawa",
"Natsuko Ishida",
"Yasutomo Ota",
"Satoshi Iwamoto"
],
"categories": [
"cond-mat.mes-hall",
"physics.optics"
],
"primary_category": "cond-mat.mes-hall",
"published": "20231027090002",
"title": "Nonadiabatic nonlinear non-Hermitian quantized pumping"
} |
Effect of interfacial Dzyaloshinskii - Moriya interaction in spin dynamics of an Antiferromagnet coupled Ferromagnetic double - barrier Magnetic Tunnel Junction Reeta Devi[[email protected]], Nimisha Dutta[[email protected]], Arindam Boruah[[email protected]] and Saumen Acharjee[[email protected]] January 14, 2024 =========================================================================================================================================================================== We study the proportional clustering problem of <cit.> and relate it to the area of multiwinner voting in computational social choice. We show that any clustering satisfying a weak proportionality notion of <cit.> simultaneously obtains the best known approximations to the proportional fairness notion of <cit.>, but also to individual fairness <cit.> and the “core” <cit.>. In fact, we show that any approximation to proportional fairness is also an approximation to individual fairness and vice versa. Finally, we also study stronger notions of proportional representation, in which deviations do not only happen to single, but multiple candidate centers, and show that stronger proportionality notions of <cit.> imply approximations to these stronger guarantees. § FAIR CLUSTERINGFair decision-making is a crucial research area in artificial intelligence. To ensure fairness, a plethora of different fairness notions, algorithms and settings have been introduced, studied, and implemented. One area in which fairness has been applied extensively is clustering.In centroid clustering, we are given a set of n data points which we want to partition into k clusters by choosing k “centers” and assigning each point to a center by which it is represented well. Fairness now comes into play when the data points correspond to human individuals. Fairness notions in clustering usually depend on one important decision: whether demographic information (such as gender, income, etc.) is taken into account or whether one is agnostic to it. A large part of work on fair clustering has focused on incorporating such demographic information, starting with the seminal work of <cit.> who aimed to proportionally balance the number of people of a certain type in each cluster center.However, not all work of fair clustering relies on demographic information. For instance, in the problem of selecting fair polling locations <cit.>, the composition of agents at the polling locations is arguably less relevant than the placement being fair for each individual agent. Independently, and in different contexts, <cit.> and <cit.> deviated from incorporating demographic information and instead tried to derive fairness notions from the instance itself. For <cit.> this lead to their notion of individual fairness: given a population of size n, with k cluster centers to be opened, every agent should be entitled to a cluster center not further away than their n/k-th neighbor. While this is not always exactly achievable, <cit.> gave a simple algorithm achieving a 2-approximation to this notion.On the other hand, <cit.> were motivated not by being fair towards individual members of the population (or agents), but towards groups of agents, defining their notion of proportional fairness: given a population of size n, with k cluster centers to be opened, no group of size at least n/k should be able to suggest a cluster center they would be better off with. 
While this notion is also not achievable exactly, <cit.> gave a simple (1 + √(2))-approximation to this notion.Further, the problem of fair clustering seems inherently similar to the problem of proportional multiwinner voting from the field of Computational Social Choice <cit.>. Here, the goal is to elect a committee of size k based on the votes of n agents. In fact, the notion of proportional fairness of <cit.> was directly inspired by the corresponding proportionality notions of <cit.>, which also require that no group of size n/k agents could deviate to a candidate they would like to see in the committee. Finally, for both proportional fairness and individual fairness as studied by <cit.> and <cit.>, agents only care about their closest cluster. However, there are settings, in which agents cannot only be represented by one candidate. This motivated <cit.> to introduce a generalization of proportional fairness, in which agents care not only about the closest, but the q closest clusters. This is of relevance in their setting of sortition, in which an α percentage of the population should be guaranteed to have an α percentage of the panel. Further, in the polling location setting of <cit.>, the outcome should be truly proportional to the underlying population as well, it is not enough to just open one polling location for a large group of agents, even if it is close to all of them. So far, the notions of individual and proportional fairness for clustering have existed in parallel, with similarities between the two being acknowledged but not formalized. Further, the connections between the fairness notions in computational social choice and in fair clustering have not really been explored (for exceptions see related work). In this work, we aim to tackle these challenges. Our contributions. We show that an approximation to proportional fairness also is an approximation to individual fairness and vice versa (see <ref>). Remarkably, the obtained bounds are provably tight. Similarly, an approximation to proportional fairness is also an approximation to the transferable core.Moreover, we extend the connection to multiwinner voting. To this end, we adapt the proportionality notions of rank-JR, rank-PJR and rank-PJR+ for multiwinner voting introduced by <cit.> to work with metric preferences, and derive the following connection: If a clustering outcome satisfies rank-JR (see <ref>), then it achieves the best currently known approximations to individual fairness, proportional fairness, and the transferable core (see <ref>). Further, the most common algorithm for proving the approximations, greedy capture, always generates outcomes that satisfy rank-JR. For an overview over the bounds, see <ref>.As mentioned earlier, in a recent preprint, <cit.> introduced a generalization of proportional fairness called the q-core (<ref>). This is motivated by settings in which agents are represented not only by one center, but by their closest q centers. In their spirit, we also lift the notions of individual fairness and the transferable core to variants in which the agents care about their q closest centers (<ref>). We again prove interconnections between the three fairness notions (see <ref>). Moreover, we provide another connection to multiwinner voting: If a clustering outcome satisfies rank-PJR, then it achieves constant-factor approximations to the q-core, to q-individual fairness, and to the q-transferable core, for any value of q simultaneously (see <ref>). 
Further, any outcome of the expanding approvals algorithm <cit.> satisfies rank-PJR+, which in turn satisfies rank-PJR.For an overview over the bounds, see <ref>.Closest in spirit to our work is that of <cit.>: They introduce two fairness notions, DPRF and UPRF (<ref>), of which the former notion is equivalent to rank-PJR. While <cit.> observed that any clustering outcome satisfying DPRF or UPRF is also a constant-factor approximation to proportional fairness, the connections to the other fairness notions and in-between these fairness notions are new.For completeness' sake, we also show that UPRF implies approximations for individual and proportional fairness and the transferable core (<ref>) and their “q-closest” counterparts (<ref>).Finally, in <ref>, we discuss challenges in the case that the candidate space is infinite: Here, the established algorithms greedy capture and expanding approvals become NP-hard to compute. Combining techniques by <cit.>, we are able cope with this challenges and show that rank-JR and rank-PJR still can be used to obtain constant-factor approximations to proportional fairness and the q-core. §.§ Related Work Individual Fairness. After the initial definition of individual fairness by <cit.> much of the follow-up work has focused on obtaining bi-criteria approximation guarantees: clustering algorithms which achieve a good individual fairness and, e.g., k-means or k-center approximation at the same time <cit.>. The work of <cit.> also inspired <cit.> to define a notion of individual fairness in the setting of approval-based multiwinner voting, which they term individual representation. Finally, there are several papers studying notions of individual fairness, which are different from the one by <cit.>, for instance, <cit.> or <cit.>.Proportional Fairness. After the initial work by <cit.> introducing the concept of proportional fairness, <cit.> improved the results by <cit.> by showing that their Greedy Capture algorithm achieves better approximation guarantees in certain metric spaces (including the Euclidean space with the 2-norm) and gave improved results for graph metrics as well as improved complexity results. Then, <cit.> introduced proportionality notions inspired by <cit.> and the notion of the transferable core from algorithmic game theory. Further, the work closest to ours is a recent preprint by <cit.> which introduces proportionality axioms and rules directly inspired from social choice theory to proportional clustering. Among their results, they show that every rule satisfying their proportionality axiom is also (1 + √(2))-proportionally fair, thereby achieving the best possible bound for proportional fairness. The general connection to social choice or further connections to the aforementioned fairness notions of <cit.> or <cit.> remain unexplored, though. Finally, another recent preprint of <cit.> studies proportional representation in the setting of sortition (see e.g., <cit.>) in which they define stronger proportional fairness guarantees (and selection rules) inspired by <cit.> and their greedy capture rule. Multiwinner Voting. Multiwinner voting is the branch of computational social choice theory dealing with selecting multiple instead of just one candidate as a winner. In particular, there is lots of work dealing with so-called proportional multiwinner voting: the multiple candidates selected should “proportionally” represent the underlying population. 
Much of this literature, starting with <cit.>, has focused on approval preferences, see <cit.> for a recent book on this topic. However, proportionality notions also exist for ordinal preferences, dating back to the seminal work of <cit.> who deals with the famous STV rule. These notions were recently strengthened by <cit.> and <cit.>, with the latter forming the basis for the proportionality axioms we discuss in this paper.§ PRELIMINARIES Let (𝒳, d) be a metric space with a distance function d : 𝒳×𝒳→ℝ satisfying the triangle inequality. Let i ∈𝒳 be a point. For r ∈ℝ, define B(i, r) = {j ∈𝒳 : d(i,j) ≤ r} to be the ball of radius r around i. For W ⊆𝒳, denote by d(i, W) = min_c ∈ W d(i, c) the distance from i to the closest point in W. Further, for q ≤ |W|, we denote by d^q(i, W) the distance from i to the q-th closest point in W, i.e., |{ c ∈ W : d(i,c) ≤ d^q(i, W) }| = q. Note that d^1(i, W) = d(i, W). It is easy to see that d^q also fulfills a form of triangle inequality: For i, i' ∈ N it holds that d^q(i, W) ≤ d(i, i') + d^q(i', W) <cit.>.Throughout the paper, we are given a set of agents N = [n] and a (possibly infinite) set of candidates (facilities) C, both of which lie in a metric space (𝒳, d), and a number k ∈ℕ^+. A clustering or outcome now is any subset W ⊆ C of candidates with |W| ≤ k. The elements c ∈ W are called centers.For our examples, we will often rely on so-called graph metrics. In these metrics, the points lie in a weighted graph G = (𝒳, E, w) and the distance between two points is the shortest distance in the graph according to w between these two points.§.§ Fairness notions for proportional clustering In the area of fair clustering, several proportionality notions were introduced.We will briefly introduce the notions relevant for our work alongside some examples. Lastly, we will introduce two novel fairness measures. Notions for being represented by a single center. We will start with the notion of (approximate) proportional fairness of <cit.> and <cit.>. The idea is that if there is a candidate c such that at least n/k agents are closer to c by a factor of α than to the closest cluster center in the outcome, then they will deviate to c. If there is no such candidate, then the outcome is α-proportional fair. For α≥ 1 an outcome W satisfies α-proportional fairness, if there is no group N' ⊆ N of agents with | N' |≥n/k and candidate c ∉ W such that α· d(i, c) < d(i, W) for all i ∈ N'.It is known that a 1-proportional fair outcome need not exist, but a (1 + √(2))-proportional fair outcome can be computed for any metric space <cit.>.Next, the notion of individual fairness was independently introduced by <cit.>. For a given agent i∈ N, we let r_N,k(i) be the radius of the smallest ball that encloses n/k agents, i.e., r_N,k(i) = min{r ∈ℝ : |B(i, r) ∩ N| ≥n/k}. We drop the subscript N or even both subscripts N and k if they are clear from context. For this definition to properly work, we additionally need to make the assumption that N ⊆ C, i.e., the entire metric space can be chosen as cluster centers. Otherwise, a secluded group of agents without any possible cluster centers around them would never be able to get a center close to them in the outcome. Indeed, this is a plausible restriction in metric clustering as oftentimes the centers may be picked from the (infinite) set of points in the metric space, and was also studied in proportional clustering <cit.>. For an instance with N ⊆ C, an outcome W satisfies β-individual fairness if d(i, W) ≤β r_N,k(i) for all i ∈ N.
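To make the two notions above concrete, the following brute-force Python sketch (our own illustration, not code from the cited works; the representation of an instance by distance lists and all names are assumptions) checks α-proportional fairness and computes the smallest β for which a given outcome is β-individually fair on a finite instance.

from math import ceil, inf

def is_alpha_proportionally_fair(dist, n, candidates, W, k, alpha=1.0):
    # dist[i][c]: distance between agent i and candidate c; W: chosen outcome.
    # Since group sizes are integers, |N'| >= n/k is equivalent to |N'| >= ceil(n/k).
    threshold = ceil(n / k)
    d_to_W = [min(dist[i][c] for c in W) for i in range(n)]
    for c in candidates:
        if c in W:
            continue
        blocking = sum(1 for i in range(n) if alpha * dist[i][c] < d_to_W[i])
        if blocking >= threshold:       # this group could jointly deviate to c
            return False
    return True

def individual_fairness_factor(agent_dist, d_to_W, n, k):
    # agent_dist[i]: distances from agent i to all n agents (including 0 to itself);
    # returns the smallest beta with d(i, W) <= beta * r(i) for all agents i
    t = ceil(n / k)
    beta = 0.0
    for i in range(n):
        r_i = sorted(agent_dist[i])[t - 1]
        if r_i == 0:
            beta = beta if d_to_W[i] == 0 else inf
        else:
            beta = max(beta, d_to_W[i] / r_i)
    return beta

Both routines simply mirror the definitions and run in polynomial time; they are not meant to be efficient implementations.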
Finally, <cit.> introduced a notion of core based on the concept of transferable utilities from game theory.[We remark that <cit.> simply calls this notion “core”.] Comparing to proportional fairness, this notion considers the average utility for each group. An outcome W is in the (γ, α)-transferable core if there is no group of agents N' ⊆ N and candidate c ∉ W with | N' |≥γn/k andα∑_i ∈ N' d(i, c) < ∑_i ∈ N' d(i, W).Consider the instance (a) depicted on the left of <Ref> with k = 5 and the associated graph distance metric. Further, assume that cluster centers can only be placed on the depicted agents. In this instance, n/k = 2 and therefore, two agents should be able to “control” one cluster center. The outcome W = {1,2,3,6,9} satisfies 1-proportional fairness. Every agent on the left has a distance of 0 to the outcome, while every agent on the right has a distance of 0 if they are selected themselves, or 1 otherwise.To see the difference between proportional fairness, individual fairness, and the transferable core, again consider the instance (a), this time with k = 4, so n/k = 2.5. Here, the outcome W = {1,2,6,7} satisfies 1-proportional fairness, however it does not satisfy 1-individual fairness. Agent 8 could look at their 2 closest neighbors, 5 and 9, both at a distance of 1. However, the distance of 8 to the outcome is 2. Observe that W also is not in the (1,1)-transferable core. Here, for the group N' = {8, 9, 10} and candidate c = 9, we have ∑_i ∈ N' d(i, c) = 2 < ∑_i ∈ N' d(i, W) =4.Proportionally representative fairness. The observation that 1-proportional fairness cannot always be satisfied and can produce outcomes they consider “not-proportional” prompted <cit.> to introduce two notions they call Proportionally Representative Fairness (PRF), one for “discrete” and for “unconstrained” clustering, i.e., for the case that the candidate space is bounded or unbounded. We begin with the “discrete” version.An outcome W satisfies Discrete Proportionally Representative Fairness (DPRF) if there is no ℓ∈ℕ, no group N' ⊆ N with size | N'|≥ℓn/k, and no distance threshold y ∈ℝ such that |⋂_i ∈ N' B(i, y) ∩ C |≥ℓ and|⋃_i ∈ N' B(i, y) ∩ W| < ℓ. Further, as a way to deal with potentially unbounded candidate spaces, they define a notion of representative fairness only based on the location of the agents.An outcome W satisfies Unconstrained Proportionally Representative Fairness (UPRF) if there is no ℓ∈ℕ, no group N' ⊆ N with size | N'|≥ℓn/k, and no distance threshold y ∈ℝ such that max_i, i' ∈ N' d(i, i') ≤ y and|⋃_i ∈ N' B(i, y) ∩ W| < ℓ. Indeed, in a recently updated version of their paper, <cit.> already point out some connections between their notions and proportional fairness. Any outcome that satisfies DPRF also satisfies (1 + √(2))-proportional fairness. Any outcome that satisfies UPRF also satisfies 3 + √(17)/2-proportional fairness. We remark that technically the relationship between UPRF and DPRF has not been explored yet by <cit.>. We point out some connections between the two notions in <ref>. Therein, we will also show that UPRF cannot always be satisfied (without for instance restricting k ≤ n). Implicitly using this restriction, <cit.> show that UPRF and DPRF can be satisfied using variants of the expanding approvals algorithm <cit.> which we will introduce in <ref> below.[<cit.> do not explicitly reference expanding approvals. However, their algorithm is equivalent to expanding approvals when run ordinal preferences.] Consider the instance depicted in (b) in <Ref> with k = 5. 
Again, consider the clustering {1,2,3,6,9}. This outcome does not satisfy DPRF, since for y = 4, it holds that ⋂_i = 5^10 B(i, 4) = {6,7,9}⊆{5, …, 10} = ⋃_i = 5^10 B(i, 4), however from {5, …, 10} only 2 agents instead of the required 3 got selected into the clustering. It does however satisfy UPRF. One can easily check this for ℓ = 1,2. For ℓ = 3, in the group {5, …, 10} the distance between the agents 5 and 10 is 5. Thus, UPRF only requires 3 agents to be selected which are at a distance at most 5 away from one of these agents. Since 1 to 4 are at a distance of 5 to agent 5 this is satisfied.Notions for being represented by multiple centers. Recently, <cit.> introduced a fairness measure that considers for each agent not only the closest center, but the first q closest centers. This setting comes into play when agents are represented not only by one, but by multiple centers. The notion called α-q-core naturally generalizes α-proportional fairness.For α≥ 1 an outcome W is in the α-q-core, if there is no ℓ∈ℕ and no N' ⊆ N with |N'| ≥ℓn/k and set C' ⊆ C with q ≤| C' |≤ℓ such that α· d^q(i, C') < d^q(i, W) for all i ∈ N'. It is easy to see that α-proportional fairness is the same as the α-1-core, since independently of the value of ℓ, there must be a single candidate who improves upon the outcome for at least n/k agents. In their work <cit.> show that for any q a 5 + √(41)/2-q-core outcome can be found if N = C, i.e., every agent also corresponds to a possible candidate [In addition their randomized algorithm also selects each agent with the same probability, a desirable property in the context of sortition, i.e., the randomized selection of citizen assemblies <cit.>]. Consider again instance (a) of <ref> with k=5 and the outcome W = {1, 2, 3, 6, 9}. As mentioned above, W satisfies 1-proportional fairness. For the 3-core however, consider the set N' = {5, …, 10} deviating to C' = {6, 9, 10}. The distance of any member of N' to 6, 9, or 10 is at most 3, while the distance to the third most distant center in the outcome is at least 10. Thus, when considering the distances to the third most distant candidate in C', every agent in N' would improve by at least a factor of 10/3. Thus, W is only in the 10/3-3-core. In the spirit of the α-q-core, we introduce novel generalizations of individual fairness and the transferable core. Generalizing individual fairness is straightforward: The q-th furthest cluster center must be within the radius of the q n/k agents. To this end let r^q_N, k = r_N, k/q = min{r ∈ℝ : |B(i, r) ∩ N| ≥ q|N|/k}. As for the β-individual fairness, we require N ⊆ C for this definition to work. Further, we also require that k ≤ n, for otherwise, the agent may deserve more than one candidate in the outcome, exactly in the same location. For β≥ 1 and for an instance with N ⊆ C and k ≤ n, an outcome W satisfies β-q-individual fairness if d^q(i, W) ≤β r^q_N, k for all i ∈ N. For the transferable core, we propose the following generalization, in which a group of γ q ⌈n/k⌉ agents diverges to another cluster set if their average distance would improve by a factor of α when compared to the average distance to the current outcome. An outcome W is in the (γ, α)-q-transferable core if there is no ℓ∈ℕ, no group of agents N' ⊆ N with | N'|≥γℓn/k, and no set of candidates C' ⊆ C with q ≤| C'|≤ℓ such that α∑_i ∈ N' d^q(i, C') < ∑_i ∈ N' d^q(i, W). 
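Since the q-core quantifies over all group sizes ℓ, checking it exhaustively is expensive; still, the quantity d^q and the test for a single proposed deviation are straightforward. The following is a rough helper sketch in the same spirit as above (names are ours); it only verifies whether one concrete pair (N', C') witnesses an α-q-core violation, exploiting that for a fixed C' the least demanding choice is ℓ = |C'|.

def d_q(i, S, d, q):
    # distance from i to its q-th closest element of S (assumes q <= |S|)
    return sorted(d(i, s) for s in S)[q - 1]

def witnesses_q_core_violation(N_prime, C_prime, W, d, n, k, q, alpha=1.0):
    # N_prime must be large enough to deserve ell = |C_prime| centers, and
    # every member must improve its q-th closest center by more than alpha.
    ell = len(C_prime)
    if not (q <= ell and len(N_prime) >= ell * n / k):
        return False
    return all(alpha * d_q(i, C_prime, d, q) < d_q(i, W, d, q) for i in N_prime)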
§.§ Algorithms We will briefly introduce three selected algorithms that were studied in the setting of proportional clustering.The greedy capture algorithm <cit.> starts off with an empty outcome W. It maintains a radius δ and smoothly increases δ. If there is a candidate c such that at least n/k agents are in a distance of at most δ around this candidate, it adds it to W and deletes the agents. If an agent is in a distance of at most δ to a candidate in W, then it gets deleted as well. This is continued until all agents are deleted.The expanding approvals algorithm <cit.> behaves similarly to greedy capture. It starts off with an empty outcome W and a radius δ as well. Further, it gives each agent a budget b_i = k/n. It then smoothly increases the radius δ. When there is a candidate c ∉ W for which the agents at a distance of at most δ have a budget of at least 1, it decreases the budget of these agents collectively by exactly 1 and adds c to W.In a recent paper, <cit.> introduced fair greedy capture, a randomized generalization of greedy capture for the setting of sortition. It works in the setting in which N = C and is parameterized by some parameter q ≤ k. Like greedy capture it smoothly increases a radius around each agent/candidate. Once this radius contains at least q n/k agents, it selects q of them uniformly at random and deletes in total ⌈ q n/k⌉ of these agents. Together with an adequate final sampling step, it can be shown, that this selects each agent with a probability of exactly k/n.Consider the instance (a) in <Ref>. Here, with k = 4, greedy capture, would first open one of 1,…, 4 with δ = 0 and remove all agents from 1, …, 4. Then for δ = 1 it could either open 6 or 9, removing all adjacent agents to it. Then there are two agents remaining, which would be assigned to either 6 or 9 for δ = 2. Thus, in this instance, greedy capture only opens two clusters. On the other hand, expanding approvals, would give each agent a budget of 4/10 = 2/5. For δ = 0, it will open a cluster from 1, …, 4 and decrease their budgets by exactly 1, for instance it could set the budget of 1 and 2 to 0 and of 3 to 1/5. Then for δ = 1 it could again open 6 and 9, for instance by removing the budget of 6 and 9 to 1/5 and of 5,7,8,10 to zero. The remaining budget is exactly 1, which would be spent for δ = 10 on 5. Thus, one possible final outcome is {1,5,6,9}.Finally, we consider fair greedy capture with parameter q=2. The smallest ball to contain at least qn/k = 5 agents is centered either at 6 or at 9 and has radius 3; thus it contains the agents 5, …, 10. It selects two of them uniformly at random, and then deletes all but one of them. Say the algorithm selects 5 and 9 and deletes all but agent 10. The next ball to contain at least 5 agents is centered at 5, has radius 10, and contains agents 1, …, 4, 10. Again, the algorithm selects two of the agents uniformly at random. So a possible outcome is {1, 5, 9, 10}. §.§ Multiwinner voting In multiwinner voting with approval preferences, we are given a set N of voters (or agents), a set C of candidates, and a committee size k. We focus here on multiwinner voting with approval preferences, i.e., for each voter i ∈ N we are given a subset A_i ⊆ C of candidates they approve. For such preferences, we call a set N' ⊆ N of voters ℓ-large if |N'| ≥ℓn/k, and ℓ-cohesive if |⋂_i ∈ N' A_i |≥ℓ. 
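To keep the clustering procedures of the previous subsection concrete, here is a rough, event-driven Python sketch of greedy capture; the discretization over the finitely many agent-candidate distances, the function names, and the tie-breaking are our own choices, and expanding approvals can be sketched analogously by replacing the deletion of captured agents with the budget bookkeeping described above.

import math

def greedy_capture(agents, candidates, d, k):
    # Instead of growing delta continuously, jump over the sorted distances.
    threshold = math.ceil(len(agents) / k)
    remaining = set(agents)
    W = []
    for delta in sorted({d(i, c) for i in agents for c in candidates}):
        if not remaining:
            break
        # agents within delta of an already opened center are captured
        remaining = {i for i in remaining
                     if all(d(i, w) > delta for w in W)}
        opened = True
        while opened:
            opened = False
            for c in candidates:
                if c in W:
                    continue
                ball = {i for i in remaining if d(i, c) <= delta}
                if len(ball) >= threshold:
                    W.append(c)
                    remaining -= ball
                    opened = True
    return W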
We say that a committee satisfies JR if for every 1-cohesive and 1-large group N' there exists an i ∈ N' with | A_i ∩ W |≥ 1; PJR if for every ℓ∈ [k] and ℓ-cohesive and ℓ-large group N' it holds that |⋃_i ∈ N' A_i ∩ W |≥ℓ; PJR+ if for every ℓ∈ [k] and 1-cohesive and ℓ-large group N' it holds that |⋃_i ∈ N' A_i ∩ W |≥ℓ or ⋂_i ∈ N' A_i ⊆ W. To define similar axioms for voters and candidates in a distance metric, we follow <cit.> (who defined this for weak ordinal preferences) and generalize their notions to look at each possible distance separately. Let (𝒳, d) be a distance metric, let N, C ⊆𝒳, and let k ∈ℝ^+. Let Π be an approval proportionality axiom. An outcome W satisfies rank-Π if for every y ∈ℝ, for the multiwinner approval voting instance in which each i ∈ N has approval set B(i, y) ∩ C, the corresponding committee W satisfies Π. For example, an outcome W satisfies rank-JR if for every y ∈ℝ and for every group N' ⊆ N of at least n/k agents whose balls of radius y all contain a common candidate (|⋂_i ∈ N' B(i, y) ∩ C| ≥ 1), there exists an agent i ∈ N' whose ball of radius y contains a center c ∈ W. Let us provide another illustrative example to give some intuition for the rank-Π axioms. To see the differences between the proportionality axioms, consider the instances depicted in <Ref>. First consider instance (a) on the left with N=C={1, …, 10}, k = 4, and the outcome W = {1,2,3,6}. Here, n/k = 2.5. First, we note that this outcome does not satisfy 1-proportional fairness: The agents 8, 9, 10 are closer to 9 than they are to the closest winner in W. It does however satisfy rank-JR: Among every group of at least three agents that have a common candidate within distance y, there is one agent that has a cluster center w ∈ W within distance y. For example, 8, 9, 10 have candidate 9 at distance 1, and the distance of 9 to the closest center is also 1. This outcome does not satisfy rank-PJR though, since the group {5, …, 10} would deserve at least two candidates within distance 2 in W. To see the subtle difference between rank-PJR and rank-PJR+, consider instance (b) on the right, again with N=C={1, …, 10} and k = 4, but with the outcome W = {1,2,3,9}. This outcome satisfies rank-PJR: For y = 0, only the group {1, …, 4} shares a candidate, but it also has a center at distance 0. For y = 1, the group {8, 9, 10} has candidate 9 within range 1, and there is also a center within range of the group. For y = 2, the group {5, 6, 9} has candidate 6 within range, and the group {6, 8, 9, 10} has candidate 9 within range, and both groups are close enough to center 9. For y=3, the group {5, …, 10} only has candidate 6 in common range. Finally, as every agent has a center within distance y=4, the outcome satisfies rank-PJR. However, it does not satisfy rank-PJR+: For the group {5, …, 10} and y=3, candidate 6 is in common range, but there is only one center within this range.§.§ First observations We first state some connections between rank-JR, rank-PJR, and rank-PJR+ and the greedy capture and expanding approvals algorithms. greedy capture satisfies rank-JR. Let r ≥ 0. Then, after greedy capture reached δ = r, there is no unchosen candidate c ∉ W with at least n/k agents at a distance of at most r. Therefore, greedy capture satisfies rank-JR. expanding approvals satisfies rank-PJR+. Let r ≥ 0. Then, after expanding approvals reached δ = r, the agents at a distance of at most r to a candidate c ∉ W must have a total budget of less than 1.
Thus, if there are ℓn/k agents at a distance at most r, they had a total budget of ℓ at the start of the algorithm and must have thus spent their money on at least ℓ candidate who are at a distance of at most r to an agent in that group. Thus, expanding approvals satisfies rank-PJR+. Next, we add a bit more context to the proportionally representative fairness notions by <cit.>. As our first small result, we start off by placing DPRF into the context of social choice and show that it is indeed equivalent to rank-PJR. An outcome satisfies DPRF if and only if it satisfies rank-PJR. An outcome W ⊆ C satisfies DPRF if and only if for all ℓ∈ [k] and all N' ⊆ N with |N'| ≥ℓn/k and |⋂_i ∈ N' B(i,y) ∩ C|≥ℓ for all y ∈ℝ, we have |⋃_i ∈ N' B(i,y) ∩ W|≥ℓ. That is, for every y ∈ℝ, every ℓ-cohesive and ℓ-large group N' ⊆ N fulfills the property of PJR. As for UPRF, we observe that we can characterize instances into two meaningful groups: Those in which UPRF is implied by DPRF and those in which an outcome satisfying UPRF need not exist. We remark that the former group contains all instances with N ⊆ C and k ≤ n, which, as argued above, are two plausible restrictions in many settings. Consider an instance in which for each agent i ∈ N there are at least k/n unique candidates at distance 0 to i. Then any outcome satisfying DPRF also satisfies UPRF. In any instance that does not have this property, an outcome satisfying UPRF need not exist. For the first part, consider an outcome W that does not satisfy UPRF, i.e., there is a group N' ⊆ N with |N'| ≥ℓn/k with y = max_i, i' ∈ N' d(i, i') and |⋃_i ∈ N' B(i, y) ∩ W| < ℓ. As we have at least k/n candidates per agent at distance 0, we have |⋂_i ∈ N' B(i, y) ∩ C| ≥ |N'| ·k/n≥ℓ. Thus, W also does not satisfy DPRF.For the second part, consider an instance with n < k, in which there is an agent i and less than k/n candidates with distance 0 to i. Then the set N' = {i} has size at least 1 ≥ℓn/k for ℓ≥k/n > 1. As max_i, i' ∈ N' d(i, i') = 0, UPRF requires for any outcome W that |B(i, 0) ∩ W| ≥ℓ≥k/n, but there aren't sufficiently many candidates in B(i,0). We further observe that, even if N ⊆ C and k ≤ n, UPRF is not equivalent to DPRF. We can even show a slightly stronger statement (recall that DPRF implies rank-JR). In instances with N ⊆ C and k ≤ n, there exist outcomes that satisfy UPRF but not rank-JR. Consider an instance with k = 1 and n = 3 agents {1, 2, 3} and one candidate c on a path, such that d(1, 2) = d(2, 3) = 1 and d(3, c) = 2 with all other distances induced by this path. Then the outcome W = {c} satisfies UPRF, since the distance between 1 and 3 is 2. However, it does not satisfy rank-JR, as the three agents all have candidate 2 within distance 1. § DEVIATING TO A SINGLE CANDIDATE We begin our main results with notions relating to single candidates, that is individual fairness, proportional fairness, and the transferable core, as defined in the previous section. As outlined in <ref>, we show that a constant factor approximation to either individual and proportional fairness implies a constant factor approximation to their other two notions. Further, we derive approximation bounds under the proportionality notions of rank-JR and UPRF. §.§ Connections between individuality, proportionality, and the transferable core We begin by exploring the connections between individual fairness, proportional fairness and the transferable core. 
Perhaps surprisingly, we show that individual fairness and proportional fairness are the same up to a constant factor of at most 2. If N ⊆ C, then an outcome that satisfies α-proportional fairness also satisfies (1 + α)-individual fairness, and an outcome that satisfies β-individual fairness also satisfies 2β-proportional fairness. If N = C, then an outcome that satisfies β-individual fairness also satisfies (1 + β)-proportional fairness. First, consider an outcome W that satisfies α-proportional fairness. For each j ∈ N let N_j = { i ∈ N : d(i,j) ≤ r(j)}. As N ⊆ C, there must exist an i ∈ N_j with d(i, W) ≤α d(i, j), since otherwise N_j could deviate to j. Thus, it must hold that d(j, W) ≤ d(j, i) + d(i, W) ≤ d(i, j) + α d(i, j) = (1 + α) d(i,j), which by choice of N_j is at most (1+α) r(j). Thus, W satisfies (1+α)-individual fairness. Consider next an outcome W that satisfies β-individual fairness and assume that N ⊆ C. Let N' ⊆ N with |N'| ≥ n/k and let c ∉ W be an unchosen candidate. We claim that the agent i^* ∈ N' maximizing d(i^*, c) fulfills d(i^*, W) ≤ 2β d(i^*, c), which implies that W satisfies 2β-proportional fairness. Note that the radius r(i^*), i.e., the smallest radius around i^* enclosing ⌈n/k⌉ agents, is at most the distance from i^* to the most distant agent in N', that is, there exists i' ∈ N' such that r(i^*) ≤ d(i^*, i'). Then d(i^*, W) ≤β r(i^*) ≤β d(i^*, i') ≤β (d(i^*, c) + d(i', c)) ≤β· 2 d(i^*, c). Finally, let us consider the setting N = C and let W be an outcome that satisfies β-individual fairness. Consider a set N' ⊆ N with |N'| ≥ n/k and a candidate c ∉ W. Again we choose i^* to be the agent of N' with greatest distance to c. As N = C and |N'| ≥ n/k, we have r(c) ≤ d(c, i^*); thus d(c, W) ≤β d(c, i^*). Therefore, d(i^*, W) ≤ d(i^*, c) + d(c, W) ≤ (1+β) d(i^*, c). As a consequence, the bicriteria approximation algorithms for individually fair clustering (e.g., by <cit.>) also apply to proportional fairness, and vice versa. We next show that all three provided bounds are tight. For every α, β > 1 and ε > 0, there are instances with N = C for which there exist (a) an outcome which satisfies α-proportional fairness, but not (α + 1 - ε)-individual fairness, and (b) an outcome which satisfies β-individual fairness, but not (β + 1 - ε)-proportional fairness. Moreover, there are instances with N ⊆ C for which there exists (c) an outcome which satisfies β-individual fairness, but not (2β - ε)-proportional fairness. The constructed instances are based on points under a graph metric and are depicted in <ref>. First, for the example for proportional fairness and N=C, consider the instance (a) with N consisting of 6 agents and k = 2. There the outcome {2, 3} satisfies α-proportional fairness, since the only deviating coalition could be {4, 5, 6}: by deviating to 5 they could improve by at most a factor of α, and if they deviated to 4 it would be by at most a factor of (α + 1)/2 ≤α. Further, due to agent 5 the outcome is also only (α + 1)-individually fair. For the example for individual fairness and N=C, consider the instance (b) with 6 agents and k=2. The outcome {w_1, w_2} satisfies β-individual fairness as d(c, W) ≤β r(c). Note that d(i, W) = 1+β while r(i) = 2 for each i ∈{1, 2, 3}. However, for each i ∈{1, 2, 3}, we have d(i, c) = 1 ≮ 1/(1+β) · d(i, W); thus the outcome does not satisfy (1+β - ε)-proportional fairness. For the example for individual fairness and N ⊆ C, consider the instance (c) with agent set N = {1, 2, 3, 4}, candidate set C = N ∪{c, w}, and k=1.
There, the outcome {w} satisfies β-individual fairness as for each i ∈ N, the smallest radius containing n/k=4 agents is 2, and the distance to w is 2β. However, for each i ∈ N, d(i, c) = 1 ≮1/2βd(i, W); thus it does not satisfy (2β-ε)-proportional fairness.We can also show that an approximation to proportional fairness, implies an approximation to the transferable core. Thus, transitively individual fairness also implies an approximation to the transferable core.Any outcome that satisfies α-proportional fairness is in the (γ, γ(α+1)/γ-1)-transferable core for any γ > 1. First, consider an outcome that satisfies α-proportional fairness. Let N' ⊆ N be a group of size n' ≥γn/k, c ∉ W, and η = ⌈n/k⌉. Further, let N' = {i_1, …, i_n'}, ordered by their increasing distance to c, i.e., d(i_j, c) ≤ d(i_j+1, c) for every j ∈ [n'-1]. Consider agent i_η. Due to α-proportional fairness, there exists an agent j_0 ∈ J_0 = {i_1, …, i_η} such that d(j_0, W) ≤α d(j_0, c). We fix j_0 and find further agents j_λ for λ = 1, …, n' - η as follows. Let J_λ = {i_1, …, i_η+λ}∖{j_0, …, j_λ-1}. As |J_λ| = η, we can fix a j_λ∈ J_λ such that d(j_λ, W) ≤α d(i_η+λ, c). Let N” = N' ∖{j_0, …, j_n'-η}. Note that for each i ∈ N”, we haved(i, W) ≤ d(i, c) + d(c, j_0) + d(j_0, W) ≤ d(i, c) + (1+α) d(i_η, c).As d(i_η, c) ≤ d(i_z, c) for each z = η, η+1, …, n', we have d(i_η, c) ≤1/n'-|N”|∑_z=η^n' d(i_z, c). Furthermore, ∑_z=η^n' d(i_z, c) ≤∑_i ∈ N' d(i, c). Finally, as n' ≥γn/k and |N”| = η-1 ≤n/k, we have |N”|/n'-|N”|≤1/γ - 1. We can now piece this together to get∑_i ∈ N' d(i, W)= ∑_i ∈ N” d(i, W) + ∑_λ = 0^n' - η d(j_λ, W)≤ (1+α) · |N”| ·1/n'-|N”|∑_z=η^n' d(i_z, c) + ∑_i ∈ N” d(i, c) + α∑_z=η^n' d(i_z, c)≤(α+1/γ-1 + α + 1) ∑_i ∈ N' d(i, c) = (γ(α+1)/γ-1) ∑_i ∈ N' d(i, c),which yields the promised (γ, γ(α+1)/γ-1)-transferable core.To complement this, we give a (non-tight) lower bound of γα +1/γ - 1.For any α≥ 1, γ > 1, and ε > 0 there exists an instance in which an α-proportional fair outcome is not in the (γ, γα +1/γ - 1 - ε)- transferable core. Consider an instance with a candidate c in which there is a set N_0 of ⌈n/k - 1⌉ agents at distance 0 to c, while the rest of the agents are at distance 1 to c. Our outcome W contains one cluster center c_1 at distance α to every agent not in N_0. It is easy to see that this is indeed α-proportional, the agents could only deviate to c and improve by at most a factor of α. Further, we get that ∑_i ∈ N' d(i, W) = (⌈n/k - 1⌉) (α + 1) + (γn/k - ⌈n/k - 1⌉)α while ∑_i ∈ N' d(i,c) = (γn/k - ⌈n/k - 1⌉). Thus, in this instance ∑_i ∈ N' d(i, W)/∑_i ∈ N' d(i,c) = (⌈n/k - 1⌉) (α + 1) + (γn/k - ⌈n/k - 1⌉)α/γn/k - ⌈n/k - 1⌉= α + (⌈n/k - 1⌉) (α + 1)/γn/k - ⌈n/k - 1⌉≥α + ( n/k - 1) (α + 1)/γn/k -n/k + 1≥α + (α + 1)(n/k) - (α + 1) /(γ - 1) (n/k) + 1= α + (α + 1) - k/n(α + 1) /(γ - 1) + k/n. As k/n goes to 0 this goes to α + α + 1/γ - 1 = γα +1/γ - 1. §.§ Fairness bounds for rank-JR outcomes While the previous results give constant factor approximation bounds, these are not meeting the currently best known approximation factors. To this end, we strengthen the result of <cit.> and show that only rank-JR is needed to achieve the best currently known approximation to proportional fairness, individual fairness, and the transferable core simultaneously. The bound for the transferable core improves upon the analysis of <cit.> who gave a slightly weaker (γ, max(4, 3γ-1/γ-1))-transferable core guarantee for any γ>1.Let W be an outcome satisfying rank-JR. 
Then it also satisfies (1 + √(2))-proportional fairness, 2-individual fairness, and is in the (γ, 2γ/γ-1)-transferable core for any γ > 1. For proportional fairness, our proof follows the lines of <cit.>. Let N' ⊆ N be a group of size at least n/k and let c ∉ W such that d(i', c) ≤ d(i', W) for all i' ∈ N'. Let i = _i' ∈ N' d(i', c) and let y = d(i, c). Then c ∈⋂_i' ∈ N' B(i', y) ∩ C, and rank-JR implies the existence of an agent j ∈ N' and a candidate c' ∈ W such that c' ∈ B(j, y) ∩ W. Then we have d(j, W) ≤ d(j, c') ≤ d(i, c) = y as well as d(j, c) ≤ y. Using this together with the triangle inequality, we obtainmin(d(i, W)/d(i, c), d(j, W)/d(j, c))≤min(d(i, c) + d(j, c) + d(j, W)/d(i, c), d(j, W)/d(j, c)) ≤min(2 d(i, c) + y/d(i, c), d(i, c)/y) ≤min( 2 + y/d(i, c), d(i, c)/y) ≤ 1 + √(2). For individual fairness, we again assume that N ⊆ C. Let i ∈ N be any agent and let N' ⊆ N be a group of at least n/k agents closest to i (including i). Let y = max_i' ∈ N' d(i, i'). Then i ∈⋂_i' ∈ N' B(i, y) ∩ C, and rank-JR implies the existence of an agent j ∈ N' and a candidate c' ∈ W such that c' ∈ B(i, y) ∩ W. By the triangle inequality, d(i, W) ≤ d(i, j) + d(j, W) ≤ 2 max_i' ∈ N' d(i, i'); thus W satisfies 2-individual fairness.For the transferable core, let N' ⊆ N be a group of size n' ≥γn/k and let c ∉ W. Let η = ⌈n/k⌉. We can now use the same argumentation as in the proof of <Ref>: Again, we order the agents i_1, …, i_n' in N' by their increasing distance to c. We choose the elements j_λ∈ J_λ, λ = 0, …, n' - η, such that d(j_λ, W) ≤ d(i_η+λ, c); such a j_λ always exists by rank-JR, as |J_λ| = η. By replacing every occurrence of α by 1 in the inequality in the proof of <Ref>, we obtain the claimed bound.§.§ Fairness bounds for UPRF outcomes Finally, for UPRF, one can obtain weaker bounds. Let W be an outcome satisfying UPRF. Then it also satisfies 3 + √(17)/2-proportional fairness, 3-individual fairness, and is in the (γ, 3γ/γ-1)-transferable core for any γ > 1. The proportional fairness immediately follows from <cit.> or by a similar proof to <cit.>. For individual fairness, we again assume that N ⊆ C. Let i ∈ N be any agent and let N' ⊆ N be a group of at least n/k agents closest to i (including i). Let y = max_i' ∈ N' d(i, i'). Then, the maximum distance between any two points in N' is at most 2y and UPRF implies that there must be a j ∈ N' and a candidate c' ∈ W such that c' ∈ B(i, 2y) ∩ W. By the triangle inequality, d(i, W) ≤ d(i, j) + d(j, W) ≤ 3 max_i' ∈ N' d(i, i'); thus W satisfies 3-individual fairness.For the transferable core, let N' ⊆ N be a group of size n' ≥γn/k and let c ∉ W. Let η = ⌈n/k⌉. We can now use the same argumentation as in the proof of <Ref>: Again, we order the agents i_1, …, i_n' in N' by their increasing distance to c. We choose the elements j_λ∈ J_λ, λ = 0, …, n' - η, such that d(j_λ, W) ≤ 2d(i_η+λ, c); such a j_λ always exists by UPRF, as |J_λ| = η. By replacing every occurrence of α by 2 in the inequality in the proof of <Ref>, we obtain the claimed bound. Concerning the tightness of the bounds, we observe that <cit.> already proved the tightness of the proportional fairness bound. For individual fairness, we can show that the bound is tight, while for the transferable core we can derive the same bound as in <Ref>. For any ε > 0 there exist instances in which an outcome satisfying UPRF is not 3 + √(17)/2 - ε proportional fair, or 3 - ε-individual fair, or in the (γ, γ + 1/γ - 1 - ε)-transferable core. 
Firstly, the case for proportional fairness follows from <cit.>. For individual fairness, again consider an instance with k = 1, three agents i_1, i_2, i_3and one candidate c_1 on a path, such that d(i_1, i_2) = d(i_2, i_3) = 1 and d(i_3, c_1) = 2. As argued before, just selecting c_1 satisfies UPRF. However, it only satisfies 3-individual fairness, since it is at distance 3 to agent i_2 who is at distance 1 to both i_1 and i_3. For the transferable core, we observe that the instance described in <Ref> with α = 1 also applies to UPRF. § DEVIATING TO GROUPS OF CANDIDATES In this section we turn to fairness measures that take into account deviations to more than one candidate, i.e., the notions of q-core, q-individual fairness, and q-transferable core. We show that the proportionality notion of rank-PJR does not only guarantee approximations for q = 1, but for any q simultaneously. Such a generalized proportionality can be important in settings, in which agents cannot only be represented by one candidate, such as the sortition setting of <cit.> or the polling location setting of <cit.>, instead the outcome should be truly proportional to the underlying population, i.e., an α percentage of the population should get an approximately α percentage of the panel or of the opened polling locations.§.§ Connections between q-individuality, the q-core, and the q-transferable core We begin by again establishing the relations between the notions. We show that the approximation guarantees for q = 1 nearly generalize to arbitrary q. An important distinction we need to make here, is that for the q-transferable core, we need to strict ℓ — the number of candidates the agents deviate to — to be less than 2q. After this theorem, we give an example that this requirement is indeed necessary and that without it, no approximation would be possible.For any q ≤ k, any outcome in the α-q-core is (2α + 1)-q-individually fair if N ⊆ C and k ≤ n and in the (γ, γ(α+1)/γ - 1)-q-transferable core if 2q>ℓ. Further, if N ⊆ C, any β-individually fair outcome is in the 2β-q-core.Let W be in the α-q-core and i ∈ N'. Further, let x = r^q_N, k(i). Thus, if the q n/k closest agents to i would deviate to q of them, their q-distance would be at most 2x. Hence, by the α-q-core, there must be one agent of them, with a q-distance of α 2x to W and thus the q-distance of i to W is at most (2α + 1)x.For the q-transferable core for 2q>ℓ, consider a set N' ⊆ N with n' = |N'| ≥γℓn/k and let C' ⊆ C with q ≤ |C'| ≤ℓ. Our proof is a simple generalization of the proof of <ref>: We let η = ⌈ℓn/k⌉ and choose agents j_λ∈ J_λ, λ = 0, …, n'-η such that d^q(j_λ, W) ≤α d^q(j_λ, C'); such an agent is guaranteed to exist as W is in the α-q-core. Now, for each i ∈ N” = N' ∖{j_0, …, j_n'-η}, we need to bound d^q(i, W). As |C'| ≤ℓ < 2q, among the q candidates in C' closest to i and those closest to j_0, we find at least one common candidate c due to the pigeonhole principle. That is, there exists a candidate c ∈ C' such that d(i, c) ≤ d^q(i, C') and d(j_0, c) ≤ d^q(j_0, C'). Therefore, in analogy to (<ref>), we haved^q(i, W) ≤ d(i, c) + d(j_0, c) + d^q(j_0, W) ≤ d^q(i, C') + (α+1) d^q(i_η, C').As again |N”|/n'-|N”|≤1/γ-1, we obtain a (γ, γ(α+1)/γ-1)-q-transferable core.Next, let W be a β-individually fair outcome. Let N' and C' witness a q-core violation. As in the previous proofs, there must be a candidate c ∈ C' such that at least qn/k agentsN”⊆ N' who have this candidate in their first q. Let i = _i ∈ N' d(i,c). 
Then since W satisfies β-individual representation, it must hold thatd^q(i, W) ≤βmax_i' ∈ N” d(i, i') ≤β (d(i,c) + max_i' ∈ N' d(i',c)) ≤ 2 β d^q(i,C').Indeed, the bound on ℓ for the q-transferable core is necessary, without it instances might exist, which satisfy all of our proportionality notions, but no approximation to the q core for any finite approximation factor.There exists an instance with N=C with an outcome W that satisfies rank-PJR, UPRF, and 1-q-individual fairness, is in the 1-q-core, and in the support of fair greedy capture parameterized by q; however, it is not in the (γ, α)-q-transferable core for ℓ≥ 2q, any γ≥ 1, and any value of α. Our instance contains two sets N_1, N_2 of ⌈n/k⌉ and ⌊ (k-1) n/k⌋ agents, respectively. All agents in N_1 lie on one point, and all agents in N_2 lie on a point with distance 1 to the agents in N_1, i.e., d(i, j) = 1 if i ∈ N_1 and j ∈ N_2 and d(i, j) = 0 otherwise. Our outcome W contains one agent in N_1 and (k-1) agents in N_2. The outcome satisfies rank-PJR: For y=0, as B(i, 0) ∩ N_x = ∅ if i ∉N_x, we only have to consider sets N' ⊆ N_1 and N' ⊆ N_2; and for these sets, there are sufficiently many centers close by. As for y=1 we have N ⊆ B(i, 1) for every i ∈ N; therefore we satisfy rank-PJR. The argumentation for UPRF is identical. Further, by a similar argument, as both sets cannot deviate, the outcomes satisfies both the 1-q-core and 1-q-individual fairness. It can also be selected by fair greedy capture, as fair greedy capture can select arbitrary candidates after deleting all agents.Now consider a set N' ⊆ N with |N'| ≥ 2q ·γ·n/k that contains an agent j ∈ N' ∩ N_1 and a set C' that contains q agents from N_1 and at least q agents from N_2.As there is only one center within distance 0 from j, we have d^q(j, W) = 1; however, as |C' ∩ N_2| ≥ |C' ∩ N_1| = q, we have d^q(i, C') = 0 for every i ∈ N'. Therefore, for any choice of α,∑_i ∈ N' d^q(i, W) ≥ 1 > α∑_i ∈ N' d^q(i, C') = 0. §.§ Fairness bounds for rank-PJR outcomes Next, we turn to rank-PJR (or DPRF) and show that it implies strong approximate proportionality guarantees for all q simultaneously, for our generalized proportionality notions. For the q-core, comparing with the work of <cit.> who were able to obtain a 5 + √(41)/2∼5.702 approximation in the special case in which N = C, we obtain a bound of 3 + 2√(2)∼ 5.828, however without any assumptions on the candidate or agent sets. Let W be an outcome that satisfies rank-PJR. Then, for any q ≤ k, * W is in the (3+2√(2))-q-core;* W satisfies 3-q-individual fairness if N ⊆ C and k ≤ n;* W is in the (γ, 3γ + 1/γ-1)-q-transferable core for any γ > 1 and if 2q>ℓ.We start with the proof for the q-core. Let N' ⊆ N be a group of agents with | N'|≥ℓn/k and let C' ⊆ C with q ≤| C' |≤ℓ such that d^q(i, C') ≤ d^q(i, W) for any i ∈ N'. We follow the basic proof idea of <cit.> and have every i ∈ N' put a mark on their top q candidates from C'. Since there are q ℓn/k markings and at most ℓ candidates, there must exist a candidate with at least q n/k markings. Let c be this candidate and N” be the at least qn/k agents marking c. Out of these, let i_1 be the agent maximizing d(i_1, c) and i_2 be the agent minimizing d^q(i_2, C'). Also, let C”⊆ C' be the set of the q candidates closest to i_2.Now, for every c' ∈ C” and every i ∈ N”, we haved(i, c') ≤ d(i, c) + d(c, i_2) + d(i_2, c') ≤ d(i_1, c) + 2 d^q(i_2, C')y.In other words, |⋂_i ∈ N” B(i, y) ∩ C| ≥ q. Now rank-PJR implies that |⋃_i ∈ N” B(i, y) ∩ W| ≥ q. 
Since the distance of i_2 to any other agent i ∈ N” is at most d(i, c) + d(i_2, c) ≤ d(i_1, c) + d^q(i_2, C'),d^q(i_2, W) ≤ d(i_1, c) + d^q(i_2, C') + y ≤ 2 d(i_1, c) + 3 d^q(i_2, C') ≤ 2 d^q(i_1, C') + 3 d^q(i_2, C').Therefore, we obtain that W is in the α-q-core withα≤min(d^q(i_2, W)/d^q(i_2, C'), d^q(i_1, W)/d^q(i_1, C')) ≤min(2 d^q(i_1, C') + 3 d^q(i_2, C')/d^q(i_2, C'), d(i_1, c) + d(i_2, c) + d(i_2, W)/d^q(i_1, C')) ≤min(2 d^q(i_1, C') + 3 d^q(i_2, C')/d^q(i_2, C'), 3 d^q(i_1, C') + 4 d^q(i_2, C')/d^q(i_1, C'))≤min(3 + 2 d^q(i_1, C')/d^q(i_2, C'), 3 + 4 d^q(i_2, C')/d^q(i_1, C'))≤max_x ≥ 0min(3 + 2x, 3 + 4/x) = 3 + 2 √(2). For q-individual fairness, let x = r^q_N, k(i). Then the maximum distance between any two of the closest q n/k agents to i (including i) is 2x. Thus, rank-PJR requires that there are at least q candidates with a distance of at most 2x from any of these agents and therefore d^q(i, W) ≤ 3x.Finally, we prove the bound for the q-transferable core for the case that 2q>ℓ. Let N' ⊆ N be a group of at least n' ≥γ q n/k agents, and let C' ⊆ C with q ≤ |C'| ≤ℓ. Without loss of generality we can assume that n' = ⌈γℓn/k⌉. Let η = ⌈ℓn/k⌉, and let N' = {i_1, …, i_n'} be ordered such that d^q(i_z, C') ≤ d^q(i_z+1, C') for all z ∈ [n'-1]. As in the above proof for the q-core, we have the agents i_1, …, i_η mark their top q candidates. As |C'| ≤ℓ and q η≥ q ℓn/k, there is at least one candidate c which is marked by a set N” of at least qn/k agents. Let y = d^q(i_η, C'). Then |⋂_j ∈ N” B(j, y) ∩ C| ≥ q, therefore by rank-PJR, |⋃_j ∈ N” B(j, y) ∩ W| ≥ q. This allows us to provide a bound on d^q(i, W) for every i ∈ N': As |C'| ≤ℓ < 2q, we have that B(i, d^q(i, C')) ∩ C' intersects B(j, y) ∩ C' for every j ∈ N”. As there are at least q centers within a distance of y from the agents in N”, we have d^q(i, W) = d^q(i, C') + 2y. Therefore,∑_i ∈ N' d^q(i, W) ≤ 2n'y + ∑_i ∈ N' d^q(i, C').As y = d^q(i_η, C') ≤ d^q(i_z, C') for all η≤ z ≤ n', we have2n'y ≤ 2n' ·1/n'-η+1∑_z=η^n' d^q(i_z, C') ≤2n'/n'-η+1∑_i ∈ N' d^q(i, C'). Finally, as γ q n/k≤ n' ≤γ q n/k+1 and η-1 ≤ q n/k, we obtain a (γ, α)-q-transferable core withα≤ 1 + 2n'/n'-η+1≤ 1 + 2(γ q n/k+1)/(γ-1) qn/k≤ 1 + 2γ/γ-1 + 2/(γ-1)qn/k≤3γ + 2 k/qn - 1/γ-1.If k/qn > 1, then, by rank-PJR, the outcome W must contain the q closest candidates for every agent. Therefore d^q(i, W) = d^q(i, C) ≤ d^q(i, C') for every agent i, and we obtain a trivial (γ, 1)-q-core. If k/qn≤ 1, then the above bound yields α≤3γ+1/γ-1. §.§ Fairness bounds for UPRF outcomesSimilarly, under the condition that N ⊆ C, we derive bounds for UPRF. Interestingly enough, the condition N ⊆ C is already strong enough for the q-core guarantee to be stronger under UPRF than under rank-PJR, without this condition, improving from approximately 5.828 to 5.372 Nevertheless, for the q-transferable core, we were not able to obtain an equally good bound as for for rank-PJR. In an instance with N ⊆ C let W be an outcome that satisfies UPRF. Then, for any q ≤ k, * W is in the (5 + √(33)/2)-q-core;* W satisfies 3-q-individual fairness if k ≤ n;* W is in the (γ, 5γ + 1/γ-1)-q-transferable core for any γ > 1 and if 2q>ℓ.We start with the α-q-core. Let N' ⊆ N be a group of agents with | N'|≥ℓn/k and let C' ⊆ C with q ≤| C' |≤ℓ such that d^q(i, C') ≤ d^q(i, W) for any i ∈ N'. As in the proof of <Ref>, the agents in N' mark their top q candidates from C', and we obtain a candidate c that is marked by a set N” of at least qn/k agents. 
Out of these, let i_1 be the agent maximizing d(i_1, c) and i_2 be the agent minimizing d^q(i_2, C').Now, for every i, i' ∈ N” we have d(i, i') ≤ d(i, c) + d(c, i) ≤ 2d(i_1, c) = y. Thus, UPRF implies that |⋃_i ∈ N” B(i, y) ∩ W| ≥ q. Since the distance of i_2 to any other agent i ∈ N” is at most d(i_1, c) + d^q(i_2, c), we get thatd^q(i_2, W) ≤ d(i_1, c) + d^q(i_2, C') + y ≤ 3d(i_1, c) + d^q(i_2, C') ≤ 3d^q(i_1, c) + d^q(i_2, C').Therefore, we have an α-q-core withα≤min(d^q(i_2, W)/d^q(i_2, C'), d^q(i_1, W)/d^q(i_1, C'))≤min(3 d^q(i_1, C') + d^q(i_2, C')/d^q(i_2, C'), d(i_1, c) + d(i_2, c) + d^q(i_2, W)/d^q(i_1, C')).As the right denominator is upper-bounded by 4d^q(i_1, C')+2d^q(i_2, C'), we obtainα ≤min(1 + 3 d^q(i_1, C')/d^q(i_2, C'), 4 + 2 d^q(i_2, C')/d^q(i_1, C')) ≤max_x ≥ 0min(1 + 3x, 4 + 2/x) = 5 + √(33)/2. For q-individual fairness, let x = r^q_N, k(i) for an agent i ∈ N'. Then the maximum distance between any two of the closest q n/k agents to i (including i) is 2x. Thus, UPRF requires that there are at least q candidates within a distance of at most 2x from any of these agents and therefore d^q(i, W) ≤ 3x.Finally, for the q-transferable core with 2q>ℓ, we again follow the proof of <Ref>. Letting N' ⊆ N with |N'| = n' = ⌈γℓn/k⌉ and C' ⊆ C with q ≤ |C'| ≤ℓ, we again have the agents i_1, …, i_η pick their top q candidates and will find a candidate c marked by a set N” of at least q n/k agents. Let y = d^q(i_η, C'). Then the largest distance between any two agents in N” is at most 2y; therefore by UPRF, |⋃_j ∈ N” B(j, 2y) ∩ W| ≥ q. Therefore, in analogy to the other proof, we have d^q(i, W) ≤ d^q(i, C') + 4y; thus we obtain a (γ, α)-q-transferable core withα≤ 1 + 4n'/n'-η+1≤5γ + 2 k/qn - 1/γ - 1.As we can again assume that k/qn≤ 1 we obtain a bound of 5γ + 1/γ-1.§.§ Fairness bounds obtained by Fair Greedy Capture Finally, we consider the fair greedy capture algorithm and derive some bounds for it. Ebadian and Micha [] introduced the algorithm and used it to obtain constant factor approximation guarantees for the q-core.fair greedy capture parameterized by q satisfies the 5 + √(41)/2∼ 5.7-q-core. If k divides n, then fair greedy capture parameterized by 1 satisfies the 10-q-core for all q ≤ k.Interestingly enough, by slightly altering the analysis, we can improve upon the bound of <cit.> and show that fair greedy capture parameterized by q is actually in the 5-q-core. Moreover, we again achieve similar bounds for the q-transferable core and for q-individual fairness as we did for rank-PJR and UPRF.Let W be an outcome returned by fair greedy capture parameterized by q ≤ k on an instance with N=C. Then, * W is in the 5-q-core;* W satisfies 3-q-individual fairness if k ≤ n;* W is in the (γ, 5γ/γ-1)-q-transferable core for any γ > 1 and if 2q>ℓ.We start with the α-q-core. Just as in previous proofs, let N' ⊆ N with |N'| ≥ℓn/k and C' ⊆ C=N with q ≤ |C'| ≤ℓ. Then, after having the agents in N' put marks on their top q candidates, there is a candidate c (which is also an agent) that is marked by a set N” of at least q n/k agents. Again, let i_1 be the agent maximizing d(i_1,c), but let i_2 be the agent minimizing d(i_2, c). Then for every i ∈ N” we have d(i_2, i) ≤ d(i_2, c) + d(c, i) ≤ d(i_2, c) + d(i_1, c)y. Therefore, N”⊆ B(i_2, y); thus fair greedy capture captured B(i_2, y) (or a subset of it), or parts of N” were captured by another ball of radius at most y. Let i ∈ N” be such an agent; suppose that i was captured by a ball centered around some point p with radius at most y. 
Thend^q(i, W) ≤ d(i, p) + d^q(p, W) ≤ 2(d(i_1, c) + d(i_2, c)) ≤ 2(d(i_1, c) + d(i, c)) ≤ 2(d^q(i_1, C') + d^q(i, C')).Similarly, we can boundd^q(i_1, W) ≤ d(i_1, c) + d(c, i) + d^q(i, W) ≤ 3(d^q(i_1, C') + d^q(i, C')).This way, we can obtain an upper bound of α ≤min(d^q(i_1, W)/d^q(i_1, C'), d^q(i, W)/d^q(i, C'))≤min(3(d^q(i, C') + d^q(i_1, C))/d^q(i_1, C'), 2(d^q(i, C') + d^q(i_1, C'))/d^q(i, C')) ≤min(3 + 3 d^q(i, C')/d^q(i_1, C'), 2 + 2 d^q(i_1, C')/d^q(i, C')) ≤max_x ≥ 0min(3 + 3x, 2 + 2/x) = 5. For q-individual fairness, it follows similarly, that since there is a ball of radius r^q_N, k(i) covering the next qn/k agents to i, that one of them must be contained in a ball of a not larger radius and thus, d^q(i,W) ≤ 3r^q_N, k(i).For the q-transferable core for 2q>ℓ, consider a set N' ⊆ N with n' = |N'| ≥γℓn/k and let C' ⊆ C=N with q ≤ |C'| ≤ℓ. We again follow the proof of <Ref>. We let η = ⌈ℓn/k⌉ and choose agents j_λ∈ J_λ, λ = 0, …, n'-η in a similar manner as done for the q-core: After having the agents in J_λ put marks on their top q candidates, there is a candidate c (which is also an agent) that is marked by a set J'_λ of at least q n/k agents. As for every i ∈ J'_λ, d(i, c) ≤ d^q(i, C') ≤ d^q(i_η+λ, C'), we know that J'_λ is contained in a ball of radius at most y = 2d^q(i_η+λ, C'). Thus, fair greedy capture captures a ball of radius at most y which contains at least one agent from J'_λ: this is our agent j_λ. As the captured ball will contain at least q centers, we haved^q(j_0, W) ≤ 2y ≤ 4d^q(i_η+λ, C').It remains to bound d^q(i, W) for each i ∈ N” = N' ∖{j_0, …, j_n'-η}. As |C'| ≤ℓ < 2q, among the q candidates in C' closest to i and those closest to j_0, we find at least one common candidate c due to the pigeonhole principle. That is, there exists a candidate c ∈ C' such that d(i, c) ≤ d^q(i, C') and d(j_0, c) ≤ d^q(j_0, C'). Therefore,d^q(i, W) ≤ d(i, c) + d(j_0, c) + d^q(j_0, W) ≤ d^q(i, C') + 5 d^q(i_η, C').As again |N”|/n'-|N”|≤1/γ-1, we get the stated q-transferable core by bounding ∑_i ∈ N' d^q(i, W) in the same manner as in (<ref>), replacing α by 4 and d(i, c) by d^q(i, C').§ POLYNOMIAL TIME COMPUTABILITY Finally, we shortly remark on polynomial time computability. Whenever the candidate space C is finite, it is straightforward to implement the algorithms introduced in <ref> in polynomial time. However, as shown by <cit.>, once C is unbounded and the metric space is only implicitly given (e.g., some distance norm over C = ℝ^t), computing greedy capture becomes NP-hard. For Euclidean distances over C = ℝ^t, <cit.> were nevertheless able to give an approximate version of greedy capture, which approximates proportional fairness up to a factor of 2 + ε for any ε > 0 in this special metric space. For general metric spaces, <cit.> gave a simple proof that in an instance with N ⊆ C, any clustering which is α-proportionally fair when restricted to the instance with candidate set N is 2 α-proportionally fair in the whole instance. In the follow-up work of <cit.>, they used a very similar approach to this and showed that running expanding approvals using only the pairwise distances between the agents results in a 3-proportionally fair outcome.We combine both approaches and show that any outcome W which satisfies rank-JR restricted to the instances with candidate set N ∪ W satisfies 3-proportional fairness in the entire instance. Thus, running greedy capture restricted to the agents would also provide a 3-proportionally fair outcome. 
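As a toy illustration of this reduction (reusing the greedy_capture sketch given after the algorithms subsection, with data of our own choosing, and assuming N ⊆ C), one would simply run the procedure with the candidate set replaced by the agent set:

agents = list(range(10))                 # ten agents on a line
d = lambda x, y: abs(x - y)              # a simple path metric
W = greedy_capture(agents, candidates=agents, d=d, k=4)
# by the discussion above, such an outcome is then 3-proportionally fair
# even with respect to the full (possibly infinite) candidate space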
We further also provide a generalization to the q-core, for which we are able to obtain the same approximation guarantee as for fair greedy capture.Consider an instance I with N ⊆ C and an outcome W and let I' be the instance with agent set N and candidate set N ∪ W. If W satisfies rank-JR in I', then W satisfies 3-proportional fairness. If W satisfies rank-PJR in I', then W is in the 5-q-core for all q ≤ k. First consider a group N' deviating to a single candidate c ∈ C and assume that W satisfies rank-JR in I'. Let i = _i ∈ N' d(i,c). Then, for any i' ∈ N',d(i,i') ≤ d(i, c) + d(c, i') ≤ d(i,c) + max_i^* ∈ N' d(i^*, c) = y. Now, as W satisfies rank-JR in I', there must exist a c' ∈ W and i”∈ N' with d(i”,c') ≤ y. Let i^* = _i^* ∈ N' d(i^*, c). Then, our outcome W satisfies α-proportional fairness with α ≤min(d(i^*, W)/d(i^*, c), d(i”, W)/d(i”, c)) ≤min(d(i^*, c) + d(i”, c) + d(i”, W)/d(i”, c), d(i^*, c) + d(i,c)/d(i”, c)) ≤min(2 (d(i^*, c) + d(i”,c))/d(i^*, c), d(i^*, c) + d(i”,c)/d(i”, c)) ≤min( 1 + 2d(i”,c)/d(i^*, c), 1+ d(i^*, c)/d(i”,c)) = max_x ≥ 0min(2 + 2x, 1 + 1/x) = 3. For the q-core and rank-PJR, there must again be a candidate c and a set N” of at least qn/k agents marking c (compare with the proof of <ref>). Further, again, let i_1 be the agent maximizing d(i_1, c) and i_2 be the agent minimizing d^q(i_2, C'). Now, for every i ∈ N” we have d(i_2, i) ≤ d(i_2, c) + d(c, i) ≤ d^q(i_2, c) + d(i_1,c) = y. In other words,| ⋂_i ∈ N” B(i, y) ∩ C | ≥| ⋂_i ∈ N” B(i, y) ∩ N' | ≥ q,and as W satisfies rank-PJR in I', we have |⋃_i ∈ N” B(i, y) ∩ W| ≥ q.[We remark that this even holds for the instance restricted to the candidate set N' ∪ W.] Thus,d^q(i_2, W) ≤ d(i_1, c) + d^q(i_2, C') + y = 2 (d(i_1, c) + d^q(i_2, C')),and, we have an α-q-core with α being at mostmin(d^q(i_2, W)/d^q(i_2, C'), d^q(i_1, W)/d^q(i_1, C'))≤min(2(d^q(i_1, C') + d^q(i_2, C'))/d^q(i_2, C'), d(i_1, c) + d(i_2, c) + d^q(i_2, W)/d^q(i_1, C')).As the right denominator is upper-bounded by 3d^q(i_1, C')+3d^q(i_2, C'), we obtainα ≤min(2 + 2 d^q(i_1, C')/d^q(i_2, C'), 3 + 3 d^q(i_2, C')/d^q(i_1, C')) = max_x ≥ 0min(2 + 2x, 3 + 3/x) = 5.§ CONCLUSION AND FUTURE WORKIn this paper, we studied proportional clustering from a social choice perspective and showed that generalizations of proportionality axioms enable near optimal approximations of notions from clustering. An interesting open question, both relevant to social choice and clustering is related to a different relaxation of proportional fairness (or core fairness) introduced by <cit.>. Instead of bounding the factor by which the agents can improve, they bound the size of the deviating coalition (similar to the transferable core). In that sense, no group of size γn/k should exist, who could all deviate to a candidate they like more. In their work, they show that there are instances for which no solution with γ < 2 can exist while for any ε > 0 a solution with γ = 16 + ε exists. Since these results only care about the relative ordering of the candidates, they also translate to clustering. Closing this bound or improving it for certain metric spaces, seems like an interesting problem. 
Similarly, it would be interesting to study the probabilistic analog of the core introduced by <cit.> and <cit.>, especially whether their results generalize to the q-core and whether certain metric spaces admit simple combinatorial algorithms to compute the probabilistic core. Further, the expanding approvals algorithm is more of a family of algorithms, parameterized by how they select the candidates and how they deduct the budgets. Is there any way to axiomatically distinguish the different variants of the expanding approvals rule? In the context of approval-based multiwinner voting, the Method of Equal Shares <cit.> can be seen as an instantiation of the family of expanding approvals rules which provides stronger proportionality guarantees than other rules in the family. Is something similar possible for our setting, e.g., can one go from proportionality axioms inspired by PJR to proportionality axioms inspired by the stronger axiom of EJR <cit.>? Naturally, our work still leaves several open questions when it comes to the approximation factors of our notions. What are the best attainable factors for proportional fairness and the q-core? Further, the open questions of <cit.> whether the bound of 2 on individual fairness can be improved for Euclidean spaces and of <cit.> whether for graph metrics with N = C a 1-proportional fair clustering always exists remain open.
"authors": [
"Leon Kellerhals",
"Jannik Peters"
],
"categories": [
"cs.LG",
"cs.CY",
"cs.GT"
],
"primary_category": "cs.LG",
"published": "20231027141256",
"title": "Proportional Fairness in Clustering: A Social Choice Perspective"
} |
Alessia Cattabriga and Paolo Cavicchioli, Alma Mater Studiorum, Bologna, Italy
A Markov theorem for plat closure of surface braids in Dunwoody and periodic Takahashi manifolds
February 2022
================================================================================================ In this article we deal with the problem of finding equivalence moves for links in Dunwoody and periodic Takahashi manifolds. We represent these manifolds using Heegaard splittings and we represent the embedded links as plat closures of elements in the braid group of the corresponding Heegaard surfaces. More precisely, starting from an open Heegaard diagram for such manifolds, we determine the plat slide equivalence moves algorithmically and compute them explicitly in some cases. Keywords: plat closure of braids, links in 3-manifolds, Dunwoody manifolds, Takahashi manifolds, algorithm. MSC Classification: 57K10, 57K30, 57-08, 57-04 § INTRODUCTION The representation of links as plat closures of braids with an even number of strands goes back to the works of Hilden <cit.> and Birman <cit.>, who proved that any link in ℝ^3 may be represented as the plat closure of a braid. Since then, much work has been done using this representation in the direction of studying the equivalence problem of links, for example by defining and analyzing link invariants such as the Jones polynomial (see <cit.>). Using Heegaard splittings, in <cit.> Doll introduced the notion of (g,b)-decomposition or generalized bridge decomposition for links in a closed, connected and orientable (from now on c.c.o.) 3-manifold, opening the way to the study of links in 3-manifolds via surface braid groups and their plat closures <cit.>. The equivalence of links in 3-manifolds, under isotopy, has recently been described in <cit.>, where the authors find a finite set of moves that connect braids in B_g, 2n, the braid group on 2n strands of a surface of genus g, having isotopic plat closures. In their result some moves are explicitly described as elements in B_g, 2n, while others, called plat slide moves, depend on a Heegaard surface for the manifold and are explicitly described only for the case of Heegaard genus one, that is for lens spaces and S^2 × S^1. Moreover, in <cit.> an algorithm to determine plat slide moves for manifolds with Heegaard genus two is presented. In this paper we deal with the same problem, but in the case of two infinite families of manifolds: Dunwoody and (periodic) Takahashi manifolds. Dunwoody manifolds are c.c.o. 3-manifolds introduced in <cit.> by means of a graph with a cyclic symmetry determining an open Heegaard diagram. Some interesting results connect these manifolds and cyclic branched coverings of (1, 1)-knots (i.e., knots admitting a (1,1)-decomposition): in <cit.> and <cit.> it has been proven that the class of Dunwoody manifolds coincides with the class of strongly-cyclic branched coverings of (1, 1)-knots. In <cit.> the family of Dunwoody manifolds has been generalized to also include manifolds with non-empty boundary, while, more recently, other graphs with cyclic symmetry generalizing those introduced in <cit.> have been studied (see <cit.>). Takahashi manifolds are c.c.o. 3-manifolds introduced in <cit.> by Dehn surgery with rational coefficients along a specific 2n-component link in S^3 (see Figure <ref>). Several important classes of 3-manifolds, such as (fractional) Fibonacci manifolds <cit.> and Sieradski manifolds <cit.>, represent notable examples of periodic Takahashi manifolds.
These have been intensively studied in many papers, such as <cit.>. Among Takahashi manifolds the periodic ones are those whose surgery coefficients present an order n cyclic symmetry: in <cit.> the author proves that these manifolds are cyclic coverings of the connected sum of two lens spaces or S^3 branched over a (2,1)-knot. In the paper, we determine algorithmically plat slide moves for links in these manifolds: in order to do so, we use the fact that both families admit a symmetric open Heegaard diagram depending on a finite number of integer parameters. Our approach could be easily generalized to other families of manifolds with the same property. The paper is organized as follows. In Section <ref> we recall the definition of Heegaard diagram, plat closure of links in 3-manifolds and the equivalence moves introduced in <cit.>. In Section <ref> (resp. <ref>), after giving the definition of the family of Dunwoody (resp. Takahashi) manifolds, we compute the equivalence moves for the considered family of 3-manifolds, together with some notable examples. In the following, manifolds are always assumed to be closed, connected and orientable and are considered up to homeomorphisms. Links inside them, that is, closed 1-dimensional smooth submanifolds, will be considered up to isotopy. § PRELIMINARIES In this section we briefly recall the notion of Heegaard splitting and Heegaard diagram for 3-manifolds as well as the definition of plat closure of links in 3-manifolds. §.§ Heegaard splittings Let M be a 3-manifold; a Heegaard splitting for M is the data of a triple (H^*,H,ϕ), where H^* and H are two copies of a genus g oriented handlebody (see Figure <ref>) standardly embedded in ℝ^3 and ϕ:∂ H →∂ H^* is an orientation reversing homeomorphism such that M=H ∪_ϕ H^*. The surface ∂ H∪_ϕ∂ H^*⊆ M has genus g and is called a Heegaard surface for M. Each 3-manifold admits Heegaard splittings <cit.>: we call Heegaard genus of a 3-manifold M the minimal genus of a Heegaard surface for M. For instance, the 3-sphere S^3 is the only 3-manifold with Heegaard genus 0, while the manifolds with Heegaard genus 1 are lens spaces (i.e., cyclic quotients of S^3) and S^2× S^1. While S^3, as well as lens spaces, have, up to isotopy, only one Heegaard surface of minimal genus (and those of higher genera are stabilizations of that of minimal genus), in general, a manifold may admit non-isotopic Heegaard surfaces of the same genus (see <cit.> for the case of Seifert manifolds). Each Heegaard splitting (H^*,H,ϕ) can be represented by means of a (closed) Heegaard diagram, that is a triple (Σ,𝐝^*,𝐝), with Σ=∂ H^* and 𝐝^*= {d_1^*, …, d_g^*} (resp. 𝐝= {d_1, …, d_g}) a set of meridians[We recall that a set of meridians for a handlebody of genus g is a collection of closed curves 𝐝^* = {d^*_1, …, d^*_g} on its boundary, such that the curves bound properly embedded pairwise disjoint disks D_1, …, D_g (dotted in Figure <ref>) and cutting the handlebody along these disks yields a 3-ball.] for H^* (resp. the image in Σ, via ϕ, of a set of meridians for H). Moreover, one of the two systems of meridians, say 𝐝^*, can always be chosen in a standard way, as in Figure <ref>. If we cut Σ along 𝐝^*, we obtain a sphere with 2g holes, say D_1, …, D_2g, distinct and paired, each pair corresponding to a certain meridian d_i^*.
Moreover the curves of𝐝 will be naturally cut into arcs joining the holes in various ways, giving a graph on the sphere with 2g holes called open Heegaard diagram of(H^*,H,ϕ) and still denoted by (Σ, 𝐝^*, 𝐝).On the contrary, if we have an open Heegaard diagram, in order to obtain the closed one we just need to identify the D_i's. It can be done by attaching 1-handles connecting the paired disks and arcs along the handles, connecting the paired vertices of the graph. The result will be a set of closed curves which will represent the system 𝐝. §.§ Plat closure of links in 3-manifolds Let M be a 3-manifold, fix a Heegaard diagram (Σ,𝐝^*,𝐝) for M so that Σ is the boundary of a standard handlebody and 𝐝^* are chosen in a standard way (see Figure <ref>); denote with g the genus of Σ. Consider B_g,2n=π_1(C_2n(Σ), *) the braid group on 2n strands of Σ, that is, the fundamental group of the configuration space of 2n (unordered)points P_1,…, P_2n in Σ. A presentationof B_g,2nis given in <cit.>.Referring to Figure <ref> the generators are σ_1,…,σ_2n-1, the standard braid ones, and α_1,…,α_g,β_1,…, β_g, where α_i (resp. β_i) is the braid whose strands are all trivial except the first one which goes once along the i-th longitude (resp. i-th meridian) of Σ.Moreover, we fix a set of n disjoint arcsρ_1,…,ρ_n embedded into Σ, such that ∂ρ_i={P_2i-1,P_2i}, for i =1,…,n, as in Figure <ref>. Now we can associate to each element γ∈ B_g, 2n a link in M, calledtheplat closure of γ, and denoted with γ̂, obtained “closing” a geometric representative of γ in Σ× [0,1] by connecting P_2i-1×{0} withP_2i×{0} through ρ_i×{0} and P_2i-1×{1} withP_2i×{1}throughρ_i×{1}, for i=1,…,n.For each link L in M, there exists a braid γ∈∪_n ∈ℕ B_g, 2n such that γ̂=L.Moreover, as provedin<cit.>, two braids γ_1, γ_2 ∈⋃_n∈ℕ B_g,2n have isotopic plat closures if and only if γ_1 and γ_2 differ by a finite sequence of the following moves(M1) σ_1γ⟷γ⟷γσ_1(M2) σ_2iσ_2i+1σ_2i-1σ_2iγ⟷γ⟷γσ_2iσ_2i+1σ_2i-1σ_2i(M3) σ_2σ_1^2σ_2γ⟷γ⟷γσ_2σ_1^2σ_2 (M4) α_jσ_1^-1α_jσ_1^-1γ⟷γ⟷γα_jσ_1^-1α_jσ_1^-1 j=1,…, g (M5)β_jσ_1^-1β_jσ_1^-1γ⟷γ⟷γβ_jσ_1^-1β_jσ_1^-1 j=1,…, g (M6)γ⟷ T_k(γ)σ_2kT_k: B_g,2n→ B_g, 2n+2 T_k(α_i)=α_i,T_k(β_i)=β_i T_k(σ_i)={[σ_ii<2k; σ_2kσ_2k+1σ_2k+2σ_2k+1^-1σ_2k^-1i=2k; σ_i+2 i>2k;].(psl_i^*)γ⟷γβ_ii = 1, …, g(psl_i)γ⟷d̅_i γ,i = 1, …, g,where d̅_i∈ B_g,2 is a braid representative of d_i with 2 strands.[Since d_i bounds a disk in M it is always possible such a representation.] So in order to determine completely the moves, it is necessary to describe the d̅_i as a word in terms of the braid generators.From now on we will call psl-moves the moves psl_i.§.§ From Heegaard diagrams to psl-moves In <cit.> an explicit formula for psl-moves in 3-manifold with Heegaard genus one is computed, while in<cit.>, an algorithm to compute the psl-moves for 3-manifolds of Heegaard genus two is described, starting from an open Heegaard diagram of the manifold. However, the same idea works in higher genus, so we briefly recall it. Let (Σ, 𝐝^*, 𝐝) be an open Heegaard diagram for M. In the corresponding closed one on Σ, we color in blue the arcs along the 1-handles arising from the identifications of the paired disks, and in red the other arcs. 
Note that, by construction, moving along each d_i, any red arc is followed by a blue arc and vice-versa. In order to represent the curves d_i in terms of generators of the braid group we can proceed as follows: (1) orient each curve and divide it into elementary pieces consisting of couples of subsequent red-blue arcs, (2) connect the two ends of each piece with a fixed base point by fixed arcs, so as to obtain an elementary closed curve, (3) catalogue each elementary closed curve arising in this way up to homotopy and (4) represent it in terms of generators of the surface braid group. This procedure gives rise to a dictionary that can be used to implement a computer algorithm that takes as input the graph corresponding to the open Heegaard diagram and gives as output the words d̅_i corresponding to the psl-moves. It works by visiting the cycles of the graph and replacing each elementary arc with the corresponding word in the dictionary. § DUNWOODY MANIFOLDS Let a, b, c, n ∈ℕ, with n>0 and a+b+c>0. Let Γ = Γ(a,b,c,n) be the regular, trivalent, planar graph depicted in Figure <ref>. The graph contains n top and n bottom circles, each having d = 2a+b+c vertices; we denote them with D_1^u, …, D_n^u and D_1^d, …, D_n^d respectively (all the indexes are considered mod n). For each i = 1, …, n, there are a parallel arcs, called upper (resp. lower) arcs, connecting D_i-1^u (resp. D_i-1^d) with D_i^u (resp. D_i^d), b parallel arcs, called diagonal arcs, connecting D_i^u to D_i-1^d, and c parallel arcs, called vertical arcs, connecting D_i^u to D_i^d. Clearly, Γ has a cyclic symmetry of order n sending D_i^u into D_i-1^u and D_i^d into D_i-1^d, for i=1,…,n. The one-point compactification of the plane brings Γ onto S^2. Now, let r, s be two given integers: we give a clockwise (resp. counterclockwise) orientation to each D_i^u (resp. D_i^d) and we enumerate the vertices as in Figure <ref> (all the numbers are to be considered mod d), for i = 1, …, n. If we cut off from S^2 the regions of the 2n disks bounded by each D_i^u and D_i^d and not containing any arc of Γ, and we glue D_i^u with D_i+s^d, for i=1,…,n, so that vertices having the same labelling correspond, we obtain a regular graph of degree four in an orientable surface Σ_n of genus n. Of course, by construction, we can always consider r mod d and s mod n. Through the identification, the nd arcs of Γ connect with each other via their endpoints creating m closed curves e_1, …, e_m, where e_i is the curve passing through the vertex labelled a+b+1 in D_i^u. Now, denote with d_i = D_i^u = D_i+s^d and set 𝒟 = {d_1, …, d_n}, ℰ = {e_1, …, e_m}. Naturally, if we cut Σ_n along 𝒟 we do not disconnect the surface; if m = n and cutting Σ_n along ℰ also does not disconnect the surface, then (Σ_n, 𝒟, ℰ) is a Heegaard diagram of genus n of a 3-manifold, completely determined by the 6-tuple (a,b,c,n,r,s): we call such a manifold a Dunwoody manifold. Clearly not all the 6-tuples σ=(a,b,c,n,s,r) ∈ℤ^6 determine a Dunwoody manifold, that is, are such that the set ℰ contains exactly n closed curves not disconnecting the surface Σ_n. We call such 6-tuples admissible and denote with D(σ) (resp. H(σ)) the Dunwoody manifold (resp. open Heegaard diagram) associated to an admissible 6-tuple σ. In Figure <ref> we depict the case of M(1,1,1,3,2,1), which is homeomorphic to S^1× S^1× S^1 (see <cit.>). §.§ The dictionary for Dunwoody manifolds Given a Dunwoody manifold M=M(a,b,c,n,s,r), we use the open Heegaard diagram of Figure <ref> for the construction of the dictionary.
The elementary blue-red pieces are those corresponding to the four types of arcs of the graph, i.e., upper, lower, diagonal and vertical, each with two possible orientations. We use P_1 as base point (see Figure <ref>) and, in the figures, we color in grey the fixed arcs connecting the elementary pieces with the base point. * Horizontal arc, upper or lower (see Figure <ref>): we denote with A^U_i (resp. A^L_i) an upper (resp. lower) arc connecting D_i^u (resp. D_i^d) with D_i-1^u (resp. D_i-1^d), and denote with A^U_i (resp. A^L_i) an upper (resp. lower) arc connecting D_i^u (resp. D_i^d) with D_i+1^u (resp. D_i+1^d): A^U_i = β_i-1^-1α_i-1, A^U_i = β_iα_i+1, A^L_i = α_i-s-1^-1, A^L_i = α_i-s+1^-1. * Vertical arc (see Figure <ref>): the description of the elementary piece corresponding to the vertical arc depends on the gluing parameter s of the 6-tuple determining M. We denote with C_i^s the elementary blue-red piece corresponding to a vertical arc connecting the disks D_i^u and D^d_i with parameter s (so that D^u_i is glued to D^d_i+s), while we use the arrow on the left to denote the orientation: ↓ C_i^s= w_i,sα_i+n-s^-1, ↑ C_i^s= w_i,s^-1α_i, where w_i,s = ∏_j = 0^kβ_i+j, with 1≤ k≤ n and k≡ n-s-1 (mod n). * Diagonal arc: from the construction of the Dunwoody manifold it is quite straightforward to see that the elementary curve corresponding to a diagonal arc with gluing parameter s is equal to the one corresponding to a vertical arc with gluing parameter s+1, that is ↓ B_i^s = ↓ C_i^s+1, ↑ B_i^s = ↑ C_i+1^s+1, where ↓B_i^s denotes the elementary piece corresponding to a diagonal arc connecting the disks D_i^u and D^d_i-1 with parameter s, and ↑ B_i^s denotes the elementary piece corresponding to a diagonal arc connecting the disks D_i^d and D^u_i+1 with parameter s. In the case of the Dunwoody manifold M(1,1,1,3,2,1), homeomorphic to S^1× S^1× S^1 (see <cit.>) and depicted in Figure <ref>, if we denote with e_1 the blue curve (label ×), e_2 the green one (label ∘) and e_3 the red one (no label) we have, starting from the vertex labeled 3:[ e_1= β_1 β_2 α_3^-1β_3 α_1 β_3^-1α_3 α_1^-1; e_2= β_2 β_3 α_1^-1β_1 α_2 β_1^-1α_1 α_2^-1; e_3 = β_3 β_1 α_2^-1β_2 α_3 β_2^-1α_2 α_3^-1. ] §.§ Fibonacci and Sieradski manifolds Recall that the Minkus manifold M_n(2a+1,2r) is the n-fold cyclic covering branched on the 2-bridge knot b(2a+1,2r), with (2a+1, 2r) = 1. In <cit.> it is proved that M_n(2a+1,2r) = M(a,0,1,n,r,s̅), where s̅ can be computed on Γ(a,0,1,n,r,0) as follows: consider the vertex labelled v = a+b+1 of D^u_1, orient downwards the arc connecting this vertex with D^d_1 and call e_1 the cycle of ℰ passing through v. Now, follow along this orientation the arcs of Γ belonging to e_1, and count the arcs running from D^u_i-1 to D^u_i or D^d_i-1 to D^d_i and those running from D^u_i to D^u_i-1 or D^d_i to D^d_i-1. The difference between the first type of arcs and the second one is s̅. For r=1 and a=2, the knot b(5,2) is the figure-eight knot, so M(2,0,1,n,1,s̅) is a Fibonacci manifold, while when r=a=1, the knot b(3,2) is the trefoil knot, so M(1,0,1,n,1,s̅) is a Sieradski manifold. For both families of manifolds it is possible to compute explicitly the words related to the curves: * Fibonacci case M(2,0,1,n,1,s̅): we have s̅ = 0 for all n ∈ℕ; so the curve e_i starting at the vertex labeled 3 of D_i^u is the red curve (and thicker curve) in Figure <ref>, with related word: e_i = α_i^-1β_i-1^-1α_i-1α_i^-1β_iα_i+1α_i^-1.
* Sieradski case M(1,0,1,n,1,s̅): we have s̅ = -2 for n>2; so the curve e_i starting at the vertex labeled 2 of D_i^u is the red curve (and thicker curve) in Figure <ref>, with related word: e_i = β_iβ_i+1α_i+2^-1β_i+1^-1α_i+1α_i^-1. § PERIODIC TAKAHASHI MANIFOLDS Let L_n⊆S^3 be the link with 2n components represented in Figure <ref>: it is a closed chain of 2n unknotted components having a cyclic symmetry of order n which permutes the unknotted components. The Takahashi manifold T_n (p_1 / q_1,…, p_n/q_n; r_1/s_1,…, r_n/s_n) is the manifold obtained by Dehn surgery on S^3, along the link L_n, with surgery coefficients alternately equal to p_k / q_k and r_k / s_k, with 1 ≤ k ≤ n. Without loss of generality, we can always suppose (p_k,q_k)=1, (r_k,s_k)=1 and p_k,r_k≥ 0. If p_1/q_1=p_2/q_2=⋯ =p_n/q_n=p/q and r_1/s_1=r_2/s_2=⋯ =r_n/s_n=r/s, i.e., the surgery coefficients have the same cyclic symmetry of order n of the link L_n, the Takahashi manifold is called periodic and it is denoted simply by T_n(p/q,r/s). In <cit.> the authors describe an open Heegaard diagram for a periodic Takahashi manifold T_n(p/q,r/s) in the case p/q,r/s≥ 0 and p/q,r/s≠ 1, that, up to isotopy, is the one depicted in Figure <ref>, where the couples of corresponding circles are D_i^u, D_i^d for i = 1, …, 2n, glued according to the orientation and so that the fat red points are identified; without loss of generality we always assume p,q,r,s≥ 0. In Figure <ref> we depict the case of T_2(1/2, 2/3). §.§ The dictionary for periodic Takahashi manifolds For the construction of the dictionary in the setting of Takahashi manifolds, we refer to the Heegaard diagram of Figure <ref>: we have eight types of elementary arcs and we denote them according to Figure <ref>. Following Figure <ref>, we can describe the words associated to the elementary red-blue arcs. As in the Dunwoody case, we use P_1 as the base point and, in the figures, we color in grey the arcs connecting the elementary pieces with the base point. * Arcs A^U and A^L: they are as the horizontal arcs in the Dunwoody case, so the result is the same (see Figure <ref>): A^U_2i = β_2i+1^-1α_2i+1, A^U_2i = β_2i+1α_2i, A^L_2i = α_2i+1^-1, A^L_2i = α_2i^-1. * Arcs B and C: these coincide with vertical or diagonal arcs of the Dunwoody case with s = 0, so we have B_2i-1 = α_2i^-1, B_2i-1 = α_2i-1, ↓ C_2i-1 = α_2i-1^-1, ↑ C_2i-1 = α_2i-1. * Arc F: we obtain the curve depicted in Figure <ref>. In order to find the corresponding word we need to decompose the original loop into three elements, see Figure <ref>. Following these three elementary pieces we obtain F_2i = [α_2i+1, β_2i+1] β_2i+2^-1α_2i+2 = α_2i+1^-1β_2i+1^-1α_2i+1β_2i+1β_2i+2^-1α_2i+2 and F_2i = β_2i [β_2i-1, α_2i-1] α_2i-2 = β_2iβ_2i-1^-1α_2i-1^-1β_2i-1α_2i-1α_2i-2, where [α,β] = α^-1β^-1αβ. * Arcs G, X and Y: for all these kinds of arcs the procedure is similar to the previous one, obtaining the following words: G_2i-1 = β_2i^-1β_2i+1^-1α_2i+1, G_2i-1 = β_2i-1β_2i-2α_2i-3, X_2i-1 = β_2i+1^-1α_2i+2^-1, X_2i-1 = β_2i-1^-1α_2i-2^-1, Y_2i-1 = α_2i+1^-1, Y_2i-1 = α_2i-3^-1.
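Before turning to the example below, we note that once the dictionary is fixed, the word of any curve can be assembled mechanically by the procedure recalled in the section on psl-moves: visit the cycle of oriented elementary arcs and substitute each arc with its word in the generators α_i, β_i. The following Python sketch illustrates only this bookkeeping; the arc labels and the two dictionary entries in the example are hypothetical placeholders, not the actual entries computed above, and the encoding of words is ours.

# A minimal sketch (not the authors' implementation) of the dictionary-based
# procedure: a curve of the open Heegaard diagram is visited as a sequence of
# oriented elementary arcs, and each arc is replaced by its word in the braid
# generators alpha_i, beta_i.  A word is a list of (letter, index, exponent).

def invert(word):
    """Inverse of a word in the free group on the generators."""
    return [(g, i, -e) for (g, i, e) in reversed(word)]

def reduce_word(word):
    """Free reduction: cancel adjacent x^e x^-e pairs."""
    out = []
    for g, i, e in word:
        if out and out[-1][0] == g and out[-1][1] == i and out[-1][2] == -e:
            out.pop()
        else:
            out.append((g, i, e))
    return out

def word_of_curve(arcs, dictionary):
    """Concatenate dictionary entries along a cycle of oriented arcs.

    `arcs` is a list of (arc_label, orientation) with orientation +1/-1;
    `dictionary` maps an arc label to its word for the +1 orientation.
    """
    word = []
    for label, orientation in arcs:
        piece = dictionary[label]
        word += piece if orientation == +1 else invert(piece)
    return reduce_word(word)

def show(word):
    names = {"a": "alpha", "b": "beta"}
    return " ".join(f"{names[g]}_{i}^{e:+d}" for g, i, e in word)

if __name__ == "__main__":
    # Hypothetical toy dictionary with two elementary arcs, for illustration only.
    dictionary = {
        "A_1": [("b", 1, -1), ("a", 1, +1)],   # e.g. beta_1^-1 alpha_1
        "C_1": [("b", 1, +1), ("a", 2, -1)],   # e.g. beta_1 alpha_2^-1
    }
    cycle = [("A_1", +1), ("C_1", +1), ("A_1", -1)]
    print(show(word_of_curve(cycle, dictionary)))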
In the case of the Takahashi manifold T_2(1/2, 2/3) depicted in Figure <ref>, if we denote with e_1 the blue curve (label ×), e_2 the red one (no label), e_3 the green one (label ∘) and e_4 the black one (label ⋆), following the arrow we have[e_1= B_1F_2X_3F_2A^L_4 = α_2^-1β_2 β_1^-1α_1^-1β_1 α_1 α_4 β_1^-1α_2^-1β_2 β_1^-1α_1^-1β_1 α_1 α_4 α_1^-1;e_2= B_3F_4X_1F_4A^L_2 = α_4^-1β_4 β_3^-1α_3^-1β_3 α_3 α_2 β_3^-1α_4^-1β_4 β_3^-1α_3^-1β_3 α_3 α_2 α_3^-1;e_3 = A^U_2Y_3G_1Y_3B_1A^U_2Y_3B_1 = β_3^-1α_3 α_1^-1β_2^-1β_3^-1α_3 α_1^-1α_2^-1β_3^-1α_3 α_1^-1α_2^-1;e_4 = A^U_4Y_1G_3Y_1B_3A^U_4Y_1B_3= β_1^-1α_1 α_3^-1β_4^-1β_1^-1α_1 α_3^-1α_4^-1β_1^-1α_1 α_3^-1α_4^-1. ] | http://arxiv.org/abs/2310.17962v1 | {
"authors": [
"Alessia Cattabriga",
"Paolo Cavicchioli"
],
"categories": [
"math.GT",
"57K10, 57K30, 57-08, 57-04"
],
"primary_category": "math.GT",
"published": "20231027081950",
"title": "A Markov theorem for plat closure of surface braids in Dunwoody and periodic Takahashi manifolds"
} |
Tailoring Photocurrent in Weyl Semimetals via Intense Laser Irradiation Gopal Dixit January 14, 2024 ======================================================================= We introduce an equivariant version of contextuality with respect to a symmetry group, which comes with natural applications to quantum theory. In the equivariant setting, we construct cohomology classes that can detect contextuality.This framework is motivated by the earlier topological approach to contextuality producing cohomology classes that serve as computational primitives in measurement-based quantum computing. § INTRODUCTIONIn quantum theory, measurement statistics is described by a family of probability distributions associated to quantum measurements that can be performed simultaneously. Such a family of distributions is called contextual if a joint probability distribution cannot describe the measurement statistics. The sheaf-theoretic approach of <cit.> is a natural framework to capture this notion. More recently, a topological approach has been introduced in <cit.> based on chain complexes and tailored for applications in measurement-based quantum computing. Simplicial distributions introduced in <cit.> is a new framework that unifies these two earlier approaches and goes beyond by generalizing the notion of contextuality tomeasurements and outcomes described by spaces rather than discrete sets.In <cit.>, topological techniques are first introduced to describe proofs of contextuality for (1) parity proofs based on algebraic relations among the observables and (2) symmetry-based proofs that rely on a transformation group acting on the observables. Moreover, the two kinds of proofs can be related using the theory of chain complexes. This paper extends this analysis to simplicial distributions. Our main contribution is the notion of equivariant contextuality with respect to a symmetry group and the construction of cohomology classes, which can detect this type of contextuality. These cohomology classes are related to those introduced in <cit.>, which play an important role in quantum computational schemes such as measurement-based quantum computation <cit.> and quantum computation with magic states <cit.> for quantifying computational advantage. As a future direction, the authors plan to investigate the computational significance of the cohomology classes introduced in the equivariant setting.The framework of simplicial distributions is based on simplicial sets, combinatorial models of spaces more expressive than simplicial complexes. A simplicial distribution on a pair of simplicial sets X and Y, which represent the measurements and outcomes, is given by a simplicial set mapp:X→ D_R(Y) The simplicial set D_R(Y) has simplices given by R-valued distributions on the set of simplices of Y, where R is a semiring. Any simplicial set map s:X→ Y can be turned into a simplicial distribution δ^s=δ_Y∘ s,called a deterministic distribution, by composing with δ_Y:Y→ D_R(Y) that sends a simplex of the outcome space to the delta-distribution peaked at that simplex. A simplicial distribution is called contextual if it cannot be expressed as a probabilistic mixture of deterministic distributions.Given a group G and a pair of simplicial G-sets (X,Y), we introduce the notion of G-equivariant contextuality. The basic idea is to require equivariance with respect to G in the definitions of simplicial and deterministic distributions. 
In quantum theory, commutation relation among operators can be used to define simplicial sets that are variations of the well-known nerve construction in algebraic topology.Such simplicial sets are called partial groups, a notion introduced in <cit.>. A typical example is defined for the unitary group U() of a finite dimensional Hilbert spaceand consists of the n-simplices given byN(_d,U()) = (A_1,A_2,⋯,A_n):A_i^d=,A_iA_j=A_jA_i ∀ i,j .This space and its variants have been studied in<cit.>. Identifying unitary operators that differ by an element in e^2π i/d gives a central partial group extension in the sense of <cit.>. We develop our theory for a central partial group extension of the formNAE M where A is a finite abelian group and NA is thenerve space. This extension is classified by a cohomology class [β]∈ H^2(M,A). A simplicial distribution p:E→ D_R(NA) is said to be relative to i:NA→ E if the following diagram commutes[column sep=huge,row sep=large] NA [d,"i"] [rd,"δ_Y"]X[r,"p"] D_R(NA) We show that if [β]≠ 0, then every such simplicial distribution is contextual (Corollary <ref>). In the equivariant setting, we consider a group G that acts on E by partial group automorphisms fixing NA. Our main tool in the equivariant setting is the Borel construction, which sends a simplicial G-set X to the simplicial set given by X G = EG×_G X. Applying this construction to the partial group extension gives another partial group extensionNA → E G → M G Let [β_G]∈ H^2(M G,A) denote the cohomology classclassifying thisextension. On the other hand, we can think of the Borel construction as the diagonal of the bisimplicial set S_G(X) whose (p,q)-simplices are given by (EG)_p ×_G X_q. The total complex of the associated double complex of cochains on this bisimplicial set provides an alternative way of studying the cohomology class [β_G]. In the total complex, we have two important cocycles:First of all, β can be regarded as a (0,2)-cocycle. Given a pseudo-section η:M→ Eof π (similar to a simplicial set map except that it fails to be compatible with the d_0 face), we have a (1,1)-cocycle Φ:G× M_1→ A defined by Φ_g(x) = (g·η(g^-1· x)) ·η(x)^-1.Our main result is the following (Theorem <ref> and Corollary <ref>):Let NAEM be a central partial group extension and G be a group acting on E by partial group automorphisms that fix NA. The following are equivalent: * The class [β_G] is zero.* There exists s:M_1→ A such that d^vs=β and d^hs=Φ.* The map π admits a G-equivariant section. Moreover, if [β_G]≠ 0 then every G-equivariant distribution p:E→ D_R(NA) relative to NA is G-equivariantly contextual. There is a counterpart to the partial group approach, which uses cofiber sequences of spaces. We develop this approach in Section <ref> and prove a result analogous to the theorem above (Theorem <ref>). Equivariant distributions naturally arise in quantum theory. The nerve space N(_d,U()) can be regarded as the total space of a partial group extension and a quantum state (density operator) can be used to obtain a simplicial distribution relative to the fiber, which is N_d. The paper is organized as follows. In Section <ref>, we introduce the notion of equivariant distributions and equivariant contextuality. We give a simple example, torus with the involution action, to discuss these basic notions (Section <ref>). We also introduce distributions on the simplicial circle. Section <ref> is about the relative version of contextuality and mainly focuses on introducing the cohomological notions. 
The total complex of the Borel construction is studied in this section. We end the section by elaborating on the torus example. Quantum distributions are discussed in Section <ref>. We introduce partial group extensions and group actions on them. Our main theorem stated above is proved in this section. The counterpart to the torus example is the nerve of the dihedral group, also discussed in this section. We show how quantum theory naturally gives simplicial distributions relative to a subspace. We finish with the Mermin star construction, a well-known example of a contextual scenario introduced in <cit.>, analyzed in the equivariant setting. Our methods extend the earlier topological study presented in <cit.> by demonstrating that this construction produces equivariantly contextual simplicial distributions. Acknowledgments. This work is supported by the US Air Force Office of Scientific Research under award number FA9550-21-1-0002 and the Digital Horizon Europe project FoQaCiA, GA no. 101070558. § EQUIVARIANT DISTRIBUTIONS Simplicial distributions are first introduced in <cit.> as a framework to study contextuality. In this section we introduce equivariant simplicial distributions and equivariant contextuality. §.§ Simplicial distributions Let R be a commutative zero-sum-free semiring, i.e., a+b=0 implies a=b=0 for all a,b∈ R. The distribution monad D_R: Set→ Set is defined as follows: * The set D_R(U) of R-distributions on a set U consists of functions p:U→ R of finite support, i.e., |{u∈ U:p(u)≠ 0}|<∞, such that ∑_u∈ U p(u)=1. * Given a function f:U→ V the function D_R(f):D_R(U)→ D_R(V) is defined by p ↦( v↦∑_u∈ f^-1(v) p(u) ). Let Δ denote the simplex category with objects [n]={0,1,⋯,n}, where n≥ 0, and morphisms given by ordinal maps θ:[m]→ [n]. This category is generated by morphisms of the form * coface maps d^i:[n-1]→ [n] that skip i in the target, and * codegeneracy maps s^j:[n+1]→ [n] that have a double preimage at j. A simplicial set is a functor X:Δ^op→ Set. A simplicial set map f:X→ Y between two simplicial sets is a natural transformation. We will denote the category of simplicial sets by sSet. Alternatively, a simplicial set is a sequence of sets X_n, where n≥ 0, together with the face maps d_i and the degeneracy maps s_j dual to the coface d^i and the codegeneracy s^j maps. In this description a simplicial set map consists of a family of functions {f_n:X_n→ Y_n}_n≥ 0 compatible with the face and degeneracy maps. Given σ∈ X_n we will write f_σ∈ Y_n for the value of the function f on σ. The standard d-simplex Δ[d] is the simplicial set with n-simplices given by the set of ordinal maps [n]→ [d]. In other words, this is the functor represented by [d]. We will write σ^01⋯ d for the d-simplex in Δ[d] corresponding to the identity map id:[d]→ [d]. Observe that every other n-simplex can be obtained from σ^01⋯ n by an application of face and degeneracy maps. The functor D_R extends to a monad on simplicial sets D_R: sSet→ sSet sending a simplicial set X:Δ^op→ Set to the composite functor D_R(X):Δ^op→ Set; see <cit.>. We will write δ_X:X→ D_R(X) for the unit of this monad that sends x to the delta-distribution with a peak at x. Let X and Y be simplicial sets. A simplicial distribution on (X,Y) is a simplicial set map p:X→ D_R(Y). A deterministic distribution is a simplicial distribution of the form δ^s = δ_Y∘ s: X→ D_R(Y), where s:X→ Y is a simplicial set map. We write sDist(X,Y) and dDist(X,Y) for the sets of simplicial and deterministic distributions, respectively.
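To illustrate the definition of the distribution monad on finite sets, the following Python sketch implements D_R(U) and the pushforward D_R(f), taking R to be the semiring of nonnegative rationals; the function names are ours, and the example only mirrors the kind of marginalization that appears below when face maps are applied to distributions.

# A small sketch of the distribution functor D_R on finite sets, with R the
# semiring of nonnegative rationals; a distribution is a dict with values
# summing to 1, and D_R(f) pushes a distribution forward along f.

from fractions import Fraction

def is_distribution(p):
    return all(v >= 0 for v in p.values()) and sum(p.values()) == 1

def delta(u):
    """Unit of the monad: the delta-distribution peaked at u."""
    return {u: Fraction(1)}

def pushforward(f, p):
    """D_R(f): send p to v |-> sum of p(u) over u in f^{-1}(v)."""
    q = {}
    for u, weight in p.items():
        q[f(u)] = q.get(f(u), Fraction(0)) + weight
    return q

if __name__ == "__main__":
    # A distribution on Z_2 x Z_2 and its marginal along (a, b) |-> a + b mod 2,
    # mirroring the face maps of a 2-simplex used later in the text.
    p = {(0, 0): Fraction(1, 2), (1, 1): Fraction(1, 2)}
    print(is_distribution(p))                      # True
    print(pushforward(lambda ab: sum(ab) % 2, p))  # {0: Fraction(1, 1)}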
There is a comparison mapΘ: D_R((X,Y)) →(X,Y)that sends a distribution d=∑_r d(r) δ^rto the simplicial distribution Θ(d):X→ D_RYdefined by Θ(d)_σ: θ↦∑_r : r_σ=θ d(r) where σ∈ X_n, θ∈ Y_n and r:X→ Y runs over simplicial set mapssuch that r_σ=θ. A simplicial distribution is called non-contextual if it lies in the image of Θ. Otherwise, it is called contextual. Let us consider simplicial distributions of the form p:Δ[2]→ D_R(N_2). Such a simplicial set map is determined by a distribution p_σ∈ D_R(_2^2) where σ=σ^012. Writing p^ab for the values p_σ(a,b) this distribution is specified by a tuple (p^00, p^01, p^10, p^11)∈ R^4 such that ∑_a,bp^ab=1. A deterministic distribution δ^s for a simplicial set map s:Δ[2]→ N_2 will be determined by a delta-distribution at s_σ=(a,b)∈_2^2. We write δ^ab for this simplicial distribution. Then any simplicial distribution p can be written asp = ∑_a,b p^abδ^ab,and hence is non-contextual. More complicated simplicial distributions can be constructed by gluing 2-simplices. In this case additional relations will be introduced as the faces. The distribution p_d_iσ associated to a face of σ will be an element of D_R(_2). We will write (p_d_iσ)^a to denote its value as a∈_2. We have (p_d_iσ)^0+(p_d_iσ)^1=1. Then(p_d_iσ)^0 = {[ p^00+p^10 i=0; p^00+p^11 i=1; p^00+p^01i=2. ].When two 2-simplices are glued at a common face, say the i-th face of σ_1 and the j-th face of σ_2, then we haved_ip_σ_1 = p_d_iσ_1 = p_d_jσ_2 = d_jp_σ_2.This way we can construct contextual simplicial distributions. See <cit.> and <cit.> for examples.§.§ Partial groups In this section we recall the notion of partial monoids and groups from <cit.>.We need two ordinal maps: * The i-th edge map e^i:[1]→ [n] that sends 0,1 to i-1,i.* The n-th multiplication map Π^n:[1]→ [n]that sends 0,1 to 0,n.A partial monoid is a non-empty simplicial set M such that * M is reduced, i.e., M_0=∗,* The n-th spine map _n=(e_1,⋯,e_n): M_n→M_1×⋯× M_1_nis injective for all n≥ 1, where e_i:M_n→ M_1 is induced by the ordinal map e^i. A homomorphism of partial monoids is a simplicial set map. A simplex x∈ M_n can be identified with the tuple _n(x)=(x_1,⋯,x_n). Let Π_n:M_n→ M_1 denote the map induced by the ordinal map Π^n. The simplex Π_n(x)∈ M_1 will also be denoted by x_1· x_2⋯x_n to emphasize that this element represents the product.The degenerate simplex s_0(∗)∈ M_1 will be denoted by 1 since it serves as the identity of the product. Let X be a simplicial set, and M, N be a partial monoids. * A simplicial set map f:X→ M is determined by f_1:X_1→ M_1.* A simplicial homotopy F:N×Δ[1] → M from g to f is determined by θ=F_1(1,σ^01)∈ M_1 that satisfies θ· g_1(x) = f_1(x)·θ for all x∈ N_1.Part (1) of this lemma justifies saying that f:X→ Mis defined by f(x)=m on 1-simplices. Note that we also drop the subscript from f_1. We write f g for the homotopy from f to g. Every monoid can be regarded as a partial monoid by the nerve construction.For a monoid M the nerve space N(M)is the simplicial set whose set of n-simplices is given by M^n with the following simplicial structure:d_i(m_1,m_2,⋯,m_n)= {[ (m_2,m_3,⋯,m_n) i=0; (m_1,⋯,m_i· m_i+1 ,⋯,m_n) 0<i<n; (m_1,m_2,⋯,m_n-1) i=n ]. s_j(m_1,m_2,⋯,m_n)= (m_1,⋯,m_j, 1_M,m_j+1,⋯,m_n)0≤ j≤ n, where 1_M is the identity element.A simplicial set map f:X→ NM is given by a function f_1:X_1→ M such that f_1(d_1σ) = f_1(d_2σ)· f_1(d_0σ)for every σ∈ X_2. This can be obtained from Lemma <ref> by working out the simplicial structure. 
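In concrete terms the condition above is a finite check: an edge labeling with values in an abelian group defines a simplicial set map into the nerve precisely when it is additive on every 2-simplex. The following Python sketch performs this check for A=ℤ_2; the toy 2-simplices are chosen to mimic the gluing of the torus example appearing later in the text, and the encoding is ours.

# Sketch: a labeling of the edges of a simplicial set by elements of Z_2
# extends to a simplicial set map into N(Z_2) precisely when
# f(d_1 sigma) = f(d_2 sigma) + f(d_0 sigma) holds on every 2-simplex.

def is_nerve_map(f_on_edges, triangles):
    """`triangles` lists each 2-simplex as its faces (d_0, d_1, d_2) by edge name."""
    return all(
        f_on_edges[d1] == (f_on_edges[d2] + f_on_edges[d0]) % 2
        for (d0, d1, d2) in triangles
    )

if __name__ == "__main__":
    # Toy example with two triangles sharing all their edges, in the pattern of
    # the torus gluing discussed later (labels are illustrative only).
    triangles = [("x1", "x", "x0"), ("x0", "x", "x1")]
    print(is_nerve_map({"x0": 1, "x1": 0, "x": 1}, triangles))  # True
    print(is_nerve_map({"x0": 1, "x1": 1, "x": 1}, triangles))  # False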
A more direct approach is to use the fact that the nerve of a category is 2-coskeletal <cit.>.An inversion in a partial monoid M is a simplicial set map ν:M→ M^ such that for every x∈ M_n with n≥ 1 there is a simplex (ν(x),x)∈ M_2n such that Π_2n(ν(x),x)=1. A partial monoid with an inversion is called a partial group.A group G can be regarded as a partial group by the nerve construction.§.§ Simplicial G-sets By a simplicial G-set we mean a simplicial object in the category of G-sets.We define a simplicial G-set EG whose n-simplices are given by(EG)_n=G^n+1 and the simplicial structure maps are as follows:d_i(g_0,g_1,⋯,g_n) = {[ (g_0,⋯,g_ig_i+1,⋯,g_n) 0≤ i < n;(g_0,g_1,⋯,g_n-1)i=n ].and s_j(g_0,g_1,⋯,g_n) = (g_0,⋯,g_j,1,g_j+1,⋯,g_n) where 0 ≤ j≤ n.The group action is multiplication from the left on the first coordinate in each level. The quotient map gives the universal principal G-bundle:G→ EG→ NG Let X be a simplicial G-set.The Borel construction of X is the simplicial set X G, also denoted by EG×_G X, whose n-simplices are given by EG_n×_G X_n with the diagonal simplicial structure maps. Let f:X→ Y be a simplicial G-map.There is an associated map between the Borel constructions that makes the following diagram commute:[column sep =huge, row sep=large] EG× X [r,"× f"] [d] EG× Y [d] X G [r,"f_G"] Y Gwhere the vertical maps are the quotients under the G-action. When G acts trivially on X, the Borel construction can be identified with X G = NG× X. Assume that the action of G on Y is trivial. We will consider the composition f̃_G: XGYG =NG × YYApplying the Borel construction to X→∗ gives a fibration sequenceXX G → NG where ∗ G is identified with NG. The first map in the sequence is the canonical inclusion that sends x∈ X_n to the simplex [(1,⋯,1),x]. Note that this is a simplicial G-map, where the target has the trivial action. §.§ Equivariant distributions Let U be a G-set. We equip D_R(U) with the following G-action:g· p(u) = p(g^-1· u).Here the action is induced by the standard G-action on the set of all functions U→ R, where the action on the target is trivial.This extends to simplicial G-sets. That is, if Y is simplicial G-set then D_R(Y) is a simplicial G-set with the action above in each degree.Let X and Y be simplicial G-sets. A G-equivariant simplicial distribution on (X,Y) is a simplicial G-mapp: X→ D_R(Y)A G-equivariant deterministic distribution is a simplicial distribution of the form δ^s: X YD_R(Y) where s:X→ Y is a simplicial G-map. We will write _G(X,Y) and _G(X,Y)for the sets of G-equivariant simplicial and deterministic distributions, respectively. A G-equivariant simplicial distribution p is calledG-equivariantly contextual if it does not lie in the image ofΘ_G: D_R(_G(X,Y)) →_G(X,Y)Otherwise, it is called G-equivariantly non-contextual.We will be interested in the case where the G-action on Y is trivial. In this case we can apply the Borel construction, as described in (<ref>), to a G-equivariant distribution p:X→ D_R(Y) and obtain a simplicial distributionp̃_G: X G → D_R(Y)This construction can be applied to a simplicialG-map s:X→ Y and the corresponding equivariant distribution δ^s:X→ D_R(Y).We have δ_G^s = δ^s̃_G. We want to show that δ̃_G^s factors as δ∘s̃_G. 
By construction we haveδ̃_G^s[(1,g_1,⋯,g_n),x] = δ^s(x) = δ^s̃[(1,g_1,⋯,g_n),x].The construction in (<ref>) gives a commutative diagram[column sep=huge,row sep =large] D_R(_G(X,Y)) [d] [r,"Θ_G"]_G(X,Y) [d] D_R((X G,Y)) [r,"Θ"] [u,bend left,dashed](X G, Y)[u,bend right,dashed]where the left vertical map sends δ^s to δ^s̃_G, which is also equal to δ_G^s by Lemma <ref>, and the right vertical map is defined by p↦p̃_G. The dashed arrows are simply obtained by restricting along the inclusion map X→ X G. The composition of the vertical downward map with the upward map is the identity. A G-equivariant simplicial distribution p is G-equivariantly contextual if and only if p̃_G is contextual. Assume that p is G-equivariantly non-contextual. Then by the commutativity of Diagram (<ref>), with the downward vertical arrows, we see that p̃_G is non-contextual. Conversely, if p̃_G is non-contextual then the same diagram with the upward vertical arrows this time shows that p is non-contextual.§.§ Distributions on the circleThe set of n-simplices of the simplicial interval Δ[1] is given by Δ[1]_n = θ^i|0≤ i≤ n+1where θ^i:[n]→ [1] is the ordinal map such that |(θ^i)^-1(0)|=i. The simplicial structure maps are given byd_j(θ^i) = {[ θ^i-1 j < i; θ^ii≤ j ]. and s_j(θ^i) = {[θ^i+1j<i;θ^i i≤j. ]. The boundary ∂Δ[1] consists of the simplicial subset given by θ^0,θ^n+1 in each level. The simplicial structure maps are just the identity maps. The simplicial circle S^1 is defined to be the simplicial set obtained from Δ[1] by collapsing its boundary ∂Δ[1]. So θ^0 and θ^n+1 are identified to a single point, which we denote by ⋆. Therefore the set of n-simplices is given by(S^1)_n = {[⋆n=0; ⋆, θ^1,⋯,θ^nn≥ 1. ]. The simplicial structure maps on θ^i are given byd_j(θ^i) = {[ θ^i-1j< i and 1<i; θ^i i≤ jand i<n; ⋆ otherwise ]. and s_j(θ^i) = {[ θ^i+1j< i; θ^i i≤ j.; ]. Any simplicial structure map sends ⋆ to itself. A zero-sum-free semiring R admits a partial order: For a,b∈ R we write a≤ b if there exists c∈ R such that a+c=b. In the next result we use this partial order. There is an isomorphism of simplicial setsD_R(S^1) → N_≤ 1(R)where N_≤ 1(R) is the simplicial subset of the nerve space N(R) of (R,+), consisting of tuples whose sum is at most 1. An R-distribution on (S^1)_n is specified by a tuple (p_1,p_2,⋯,p_n) where p_i∈ R and ∑_i p_i ≤ 1. The value assigned to θ^i is given by p_i and the value assigned to ⋆ is determined by this tuple, which is given by 1-∑_i p_i. The simplicial structure of S^1can be used to show thatd_j(p_1,p_2,⋯,p_n) = {[(p_2,⋯,p_n)j=0; (p_2,⋯,p_j+p_j+1,⋯, p_n)0<j<n;(p_2,⋯,p_n-1)j=n ].ands_j(p_1,p_2,⋯,p_n) = (p_1,⋯,p_j,0,p_j+1,⋯,p_n). Note that N_≤ 1(R) is a partial monoid contained in NR therefore combining with Corollary <ref> we obtain: Let i_2:X^(2)→ X denote the inclusion of the 2-skeleton. Then the following induced map is an isomorphism(i_2)^*:(X,S^1) →(X^(2),S^1)The inclusions i_2 and D_RS^1≅ N_≤ 1R→ NR induce a commutative diagram[column sep=huge,row sep =large] (X,D_RS^1) [d,hook] [r,"(i_2)_*"](X^(2),D_R S^1) [d,hook](X,NR) [r,"≅"](X^(2),NR)Then (i_2)_* is injective by commutativity of the diagram. For surjectivity we observe that in the commutative diagram[column sep=huge,row sep =large] X^(2)[dr] [r,"i_2",hook] [d,"p"] X[d,dashed,"p̃"] D_RS^1 [r,hook] NRthe lift p̃ of the diagonal map has image contained in D_RS^1 since it is determined by the restriction to the 1-skeleton of X by Lemma <ref>. 
In particular, for R= we obtain that S^1 = D_(S^1)N_≤ 1()Therefore we also have(i_2)^*:(X,S^1) →(X^(2),S^1)§.§ Example: Torus with involution We will take X=S^1× S^1 and Y=S^1. We will write σ_0 and σ_1 for the non-degenerate 2-simplices of X. The face maps for c∈{0,1} are given byd_i σ_c = {[ x_c+1 i=0; x i=1; x_ci=2. ].where the summation in the first formula is taken modulo 2.The unique vertex of X will be denoted by v. We will regard the circle as a simplicial subset of the nerve spaceS^1 → N_2defined by sending σ^01↦ 1 in degree 1. Using the notation of Example <ref> a simplicial distribution p:Δ[2]→ D(S^1) is specified by a tuple (p^00,p^01,p^10,p^11) satisfying ∑_a,bp^ab=1 and p^11=0. Note that (S_1)_2 is in bijective correspondence with (0,0),(0,1),(1,0)⊂_2^2 hence a distribution p in D((S^1)_2) regarded as a distribution in D(_2^2) will have p^11=0. Then simplicial distributions on (X,Y) are given as follows(X,Y) =(t_1,t_2)∈ [0,1]^2:1-(t_1+t_2)≥ 0;see Figure (<ref>).Let us take G=_2 and endow both spaces with a G-action. On X the action is given by the swapping of the coordinates and on Y the action is trivial. A G-equivariant distribution would be one in which t_1=t_2. Then_G(X,Y) can be identified with the interval [0,1/2]. When t_1=0 we obtain the unique deterministic distribution p_σ_i=δ^00, and for t_1=1/2 we obtain p^ab_σ_i = {[1/2a+b=1;0 otherwise. ].Therefore p is G-equivariantly contextual if and only if t_1>0.Now, let us consider the Borel construction X G. By Corollary <ref> it suffices to considerthe 2-skeleton W = (X G)^(2).We will use the following notation for the simplices:v= [(1),v]x_b(a) = [(1,a),x_b] σ_c(ab) = [(1,a,b),σ_c].The face maps of the 2-simplices are given byd_i σ_c(ab) = {[ x_a+c+1(b)i=0; x(a+b)i=1; x_c(a) i=2. ]. The non-degenerate 2-simplices can be grouped into three parts each of which is a closed surface as in Figure (<ref>).Let us describe simplicial distributions on (W,S^1). Using the closedsurface in Figure (<ref>) we havep_σ_0(11)^01 = p_σ_0(11)^10 = t_1p_σ_1(11)^01 = p_σ_1(11)^10 = t_2where t_1,t_2∈ [0,1/2]. On the other hand, the middle surface forces t_1=t_2. Considering thesurface in Figure (<ref>) as well and using Corollary <ref> we obtain _G(X,Y) = (t,t):t∈ [0,1/2]⊂(X,Y)and_G(X,Y) (X G,Y)A similar isomorphism holds between thedeterministic distributions. Hence we verified Proposition<ref>.§ DISTRIBUTIONS RELATIVE TO A SUBSPACEIn this section X and Y denote simplicial G-sets where the G-action on Y is trivial. We consider simplicial distributions relative to a subspace and introduce cohomology classes that can detect contextuality for such distributions. Let i:Y→ X be a G-equivariant inclusion.A G-equivariant distribution on (X,Y) relative to Y is a simplicial G-map p:X→ D_R(Y) such that the following diagram commutes[column sep = huge, row sep = large] Y [dr,"δ_Y"] [d,"i"']X [r,"p"] D_R(Y)Note that the non-equivariant definition can be obtained by taking G as the trivial group.If p:X→ D(Y) is a G-equivariant distribution relative to Y then p̃_G:X G → D_R(Y) in (<ref>) is a simplicial distribution relative to Y. Follows from the following commutative diagram[column sep=huge,row sep=large,ampersand replacement=&] Y[d,"i"][dr,"δ"]&X[r,"p"][d,"ι_X"]&D(Y)X G[ur,"p̃_G"']&where the top triangle commutes by assumption on p and the lower triangle commutes by the construction of p̃_G. 
We will consider the diagram of cofiber sequences[column sep=huge,row sep =large] Y [d,"i"] [r,equal] Y [d,"j"] [r] Y G [d,"i_G"] X [d,"q"] [r,"ι"] X G [d] [r,equal] X G [d]X̅[r,"ι̅"]X G[r,"c"] X̅ GThe following are equivalent. * The inclusion i:Y→ X has a G-equivariant section.* The inclusion i_G:Y G→ X G has a section.* The inclusion j:Y→ Y G X G has a section. First we prove the equivalence of (1) and (3). Let s:X→ Y be a G-equivariant section of i. The associated map s̃_G:X G→ Y gives a section of j. Conversely, let r:X G→ Y be a section of j. Then precomposing with the canonical (G-equivariant) map X→ X G gives a G-equivariant section of i. Next we prove the equivalence of (2) and (3). Let t be a section of i_G. Then projecting onto the second factor gives a section of j. Given a section r of j, composing this with the canonical inclusion Y→ Y G gives a section of i_G.§.§ Cohomology Our default space of choice for Y will be the nerve space NA of some finite abelian group A. In this case we can interpret deterministic distributions as cohomology classes.Let Z be a simplicial subset of X.Consider the cofiber sequence ZX X̅We have a commutative diagram[column sep=huge,row sep =large] (X̅,NA) [d,"≅"] [r,"q^*"](X,NA) [d,"≅"] [r,"i^*"](Z,NA) [d,"≅"]H^1(X̅,A) [r,"q^*"] H^1(X,A) [r,"i^*"] H^1(Z,A) [r,"ζ"] H^2(X̅,A)where the vertical maps send a deterministic distribution δ^s to the cohomology class [s]. The bottom sequence is exact; see <cit.> for details. Let Z be a simplicial subset of X.A simplicial set map r:Z→ NA extends to X if and only if ζ([r])=0.From the exactness of the bottom row in Diagram (<ref>) we see that ζ([r])=0 if and only if there exists [r̃]∈ H^2(X,A) such that i^*([r̃])=[r]. Vertical isomorphisms imply that this latter condition is equivalent to the existence of an extension. The cohomology class ζ([r]) is represented by the cocycle ζ(r) defined as follows: * Let r̃:X:→ A denote the extension of r by setting its value equal to zero on X_1-Z_1.* The coboundary dr̃ :X_2→ A restricts to a cocycle on X̅_2 which we denote by ζ(r).Let us specialize to Z=Y=NA. We take r to be the identity map :NA→ NA. Note that in this case we haveH^1(NA,A) ≅(A,A).Then :NA→ NA can be identified with the identity homomorphism :A→ A. We will writeγ=ζ(). The inclusion i:NA→ X has a section if and only if [γ]=0. Moreover, if [γ]≠ 0 thenevery simplicial distribution p:X→D_R(NA) relative to NA iscontextual.The first statement follows from Lemma <ref>. For the second statement, assume that p is non-contextual, that is, there exists d=∑_r d(r)δ^r with Θ(d)=p. Then there exists s such that d(s)>0 and for every σ∈X_n we havep_σ(s_σ) = ∑_r: r_σ=s_σ d(r) > d(s)>0.Taking σ∈ (NA)_n we havep_σ = (p∘ i)_σ = δ^σwhere δ^σ is the delta-distribution peaked at σ. Combining the two equations weobtain δ^σ(s_σ)>0 which implies that s_σ=σ. This shows that s:X→ NA is a section of i. Now, we turn to the equivariant case.Let i:NA→ X be a G-equivariant inclusion. We apply the cohomology discussion to the cofiber sequences in Diagram (<ref>). For the middle cofibration sequence we introduce the following cocycle: Using the connecting homomorphism ζ:H^1(NA,A)→ H^2(X G,A) we defineγ_G=ζ(). By definition we have ι̅^*(γ_G) =γ.The inclusion j:NA → X G has a section if and only if [γ_G]=0. Moreover, if [γ_G]≠ 0 then every simplicial distribution p:X G → D_R(NA) relative to NA is contextual.Follows from Proposition <ref> applied to j. 
If [γ_G]≠ 0 then every G-equivariant distribution p:X→ D_R(NA) relative to NA is G-equivarianly contextual.By Proposition <ref>, [γ_G]≠ 0 implies that p̃_G is contextual. This is equivalent to p being G-equivariantly contextual by Proposition <ref>.We can slightly generalize one direction of Proposition <ref> to the case where Y is a simplicial subset of NA: For Y a simplicial subset of NA we have that if the inclusion i:Y→ X has a G-equivariant section then [γ_G]=0. The converse of this statement may not be true since the extension obtained using Lemma <ref> gives a map with target NA, not necessarily the simplicial subset Y. We are interested in the case where Y is the simplicial subset given by the circle S^1_(a)→ NAwhere the unique non-degenerate 1-simplex is mapped to a non-trivial element a∈ A. We have a similar observation for the rightmost cofibration in Diagram (<ref>).Using the connecting homomorphism ζ:H^1(NA,A)→ H^2(X̅ G,A) and the projection map π_1:NA G→ NA we defineγ̃_G=ζ(π_1). We havec^*(γ̃_G) = γ_G.The inclusion i_G:NAG → X G has a section if and only if [γ̃_G]=0.Moreover, we have [γ_G]=0 if and only if [γ̃_G]=0. Proof of the first statement follows from Lemma <ref> applied to π_1:NA G → NA. The second statement follows from combining Proposition <ref> and <ref> with Lemma <ref>. Combined with Proposition <ref> [γ̃_G] can also be used to detect contextuality.§.§ Total complex of the Borel construction For a simplicial G-set X, we define a bisimplicial setS_G(X)_p,q =(EG)_p ×_GX_q. The diagonal (S_G(X)) of this bisimplicial set is the Borel construction X G. In this section we recall the Eilenberg–Zilberg theorem that relates the chains on the diagonal of a bisimplicial abelian group and the total complex of the associated double complex. We apply this result to the bisimplicial set S_G(X).Let L be a bisimplicial abelian group. There are two chain complexes associated to this object: * The chain complex C((L)) of the diagonal.* The total complex (C(L)) of the double complex C(L).By the Eilenberg–Zilber theorem these two chain complexes are chain homotopy equivalent <cit.>. This equivalence is given by the Alexander–Whitney mapΔ: C((L)) →(C(L)) and the Eilenberg–Zilber map∇: (C(L)) → C((L))We are interested in degree 2 of these maps: * For a∈ L_2,2 we haveΔ_2(a) = ( (d_0^v)^2(a) , d_2^h d_0^v(a) , d_1^hd_2^h(a) ). * For (a,b,c)∈ L_2,0⊕ L_1,1⊕ L_0,2 we have∇_2(a,b,c) = s_1^vs_0^v(a) + ( s_1^hs_0^v(b) - s_0^h s_1^v(b) ) + s_1^h s_0^h(c).We will apply this to the bisimplicial abelian groupL_p,q = [S_p,q ] where S=S_G(X) is the bisimplicial set given in Equation (<ref>).Then the maps Δ and ∇ in degree 2 are given byΔ_2[(1,g_1,g_2),x] = ( [(1,g_1,g_2), (d_0)^2(x)] , [(1,g_1),d_0(x)], [(1),x] )and∇_2( [(1,g_1,g_2),x] , [(1,g_1') , x'],[ (1) , x” ] ) = [(1,g_1,g_2),s_1s_0(x)] +[(1,g_1',1) , s_0(x') ] -[(1,1,g_1') , s_1(x') ]+ [(1,1,1) , x” ] . By applying (-,A) to the chain complexes (C(L)) and C((L)) we obtain the dual maps between the associated cochain complexes with coefficients in A. The dual map Δ^*: C^*( (C(L)) ,A) → C^*( (L),A) sends a 2-cochain (α,α',α”) to θ[(1,g_1,g_2),x] = α[(1,g_1,g_2),(d_0)^2(x)] + α'[(1,g_1),d_0(x)] + α”[(1),x].On the other hand, the dual map ∇^2: C^*( (L),A)→ C^*((C(L)),A) sends a 2-cochain θ to (α,α',α”) whereα[(1,g_1,g_2),x]= θ[(1,g_1,g_2),s_1s_0(x)]α'[(1,g_1'),x']= θ[(1,g_1',1),s_0(x')]- θ[(1,1,g_1'),s_1(x')] α”[(1),x”]= θ[(1,1,1),x”]. Next, we apply the formulas in Equation (<ref>) to the middle and the rightmost cofibrations in Diagram (<ref>). 
The Borel construction X̅ G can be described as the diagonal of the bisimplicial set S̅_p,q = S_G(X̅)_p,q= (EG)_p×_G X̅_q. Similarly we can describe X G as the diagonal of the following bisimplicial setT_p,q =(EG)_p×_G X_q /∗× Y_q. The Eilenberg–Zilber map isnatural with respect to the map T→S̅ of bisimplicial sets.Under theEilenberg–Zilber map the cocycle γ_G is represented by the triple (0,0,γ) in the total complex.Let π_2 NA G → NA denote the projection map, recall that NA G =NG× NA. Since the Eilenberg–Zilber map ∇ is natural with respect to maps of bisimplicial sets we can first compute ∇^2(γ̃_G) and then pull-back along the corresponding map between the total complexes. Applying the formula in Equation (<ref>) to S̅ we obtain that∇^2(γ̃_G) = (0,0,γ). Pulling back this cochain along T→S̅ gives the desired result.Toobtain Equation (<ref>) we first consider how the cocycle γ̃_G is defined. In the cohomology long exact sequence we begin with π_1 regarded as a 1-cochain π_1:A× G→ A, again given by projection, and define π̃: X_1×_G G^2 → A by lifting π_1 on the rest of the 1-simplices by setting it equal to zero. Then applying the coboundary in X G we obtain γ̃_G. Looking at the description of d(π̃_1) and the formula in Equation (<ref>) we obtain Equation (<ref>). In more details, the map ∇^2(γ̃_G) consists of three factors. Forx∈ X_0, x'∈ X_1 and x”∈ X_2, we have * γ̃_G((1,g_1,g_2),s_1s_0(x))=0 since π̃ restricted to a degenerate simplex of the form s_1s_0(x) is zero.* γ̃_G((1,g_1',1),s_0(x'))-γ̃_G((1,1,g_1'),s_1(x'))=0: We have γ̃_G((1,g_1',1),s_0x')-γ̃_G((1,1,g_1'),s_1x')= π̃((g_1',1),x')-π̃((1,g_1'),x') +π̃((1,g_1'),s_0d_1x')-π̃((1,g_1'),s_0d_0x')+π̃((1,g_1'),x')-π̃((1,1),x')=π̃((g_1',1),x')+π̃((1,g_1'),s_0d_1x')-π̃((1,g_1'),s_0d_0x')-π̃((1,1),x').In the sum above, the second and third component are 0, as the second factor in π̃ is a degeneracy of the vertex of X. Thus we are left with the sumπ̃((g_1',1),x')-π̃((1,1),x').However, this sum also becomes zero since if x'∈ (NA)_1, then G acts trivially on x' and we have that ((g_1',1),x')=((1,1),x') in X G. Otherwise, if x'∉(NA)_1, then also g_1'· x∉(NA)_1, giving that both components of the sum are zero. * γ̃_G((1,1,1),x”)=γ(x”) since γ̃_G pulls-back to γ along c∘ι̅ in Diagram (<ref>).Assume that [γ]=0 in H^2(X̅,A). Consider a 1-cochain s:X̅_1 → A such that d^vs=γ in the total complex of S̅ (see Equation (<ref>)). We introduce a (1,1)-cochain ϕ̃:G×X̅_1 → Ain(C([S̅])) byϕ̃= - d^hs, where we identify G^2×_GX̅_1 with G×X̅_1. Then wedefine ϕ = c^*(ϕ̃),a (1,1)-cochain in the total complex of T (see Equation (<ref>)).Note that one important case where ϕ can be defined is when [γ_G]=0. Since γ_G pulls-back to [γ] under ι̅ we have [γ]=0 in this case and thus we can define ϕ. Let X be a simplicial G-setand i:NA→ X be an inclusion of simplicial G-sets where the action on NA is trivial.Assume that the class[γ_G] defined in Equation (<ref>) is zero, so that ϕ is defined (see Definition <ref>). Then there exists r:X̅_1→ A such that d^vr=0 and d^hr=ϕ.By Lemma <ref> [γ_G]=0 if and only if [(0,0,γ)] is zero in the total complex. Since [β]=0 we can trivialize it and define ϕ. That is, we have [(0,0,γ)]=[(0,ϕ,0)] in the total complex. Then this class is zero if and only if there exists r:X̅_1→ A with the desired property.By the proof of Theorem <ref> we can show that [γ_G]≠ 0 by working in the total complex.We have [γ_G]=0 if and only if * there exists s:X̅_1→ A such that d^vs=γ, and* there exists r:X̅_1→ A such that d^vr=0 and d^hr = ϕ (where ϕ=-d^hs). 
Note that s̃=(0,r+s) satisfies ds̃=(0,0,γ). We can apply this observation to the case when [γ]=0. Then s as specified in (1) exists. Thus [γ_G]=0 if and only if 2 as specified in (2) exists. Combined with Proposition <ref> we can use this observation to detect contextuality. §.§ Example: Torus relative to the diagonal We revisit the example X=S^1× S^1 and Y=S^1 from Section <ref>. Let i:Y→ X be the diagonal map sending the non-degenerate 1-simplex of Y to x in Figure (<ref>). A simplicial distribution p:X→ D(Y) is relative to Y if and only if p_x=δ^1.Recall thatsimplicial distributions on (X,Y) are given by (t_1,t_2)∈ [0,1]^2 such that 1-(t_1+t_2)≥ 0. The relativity condition then implies that t_1+t_2=1. In the G-equivariant case, i.e., when t_1=t_2∈ [0,1/2], this forces t_1=1/2. Therefore there exists a unique G-equivariant simplicial distribution relative to Y in this case.When we take the Borel construction and then restrict to the 2-skeleton (this gives W=(X G)^(2)) the inclusion map j:Y→ W sends the unique non-degenerate 1-simplex of Y to the edge x(0) (the diagonal edge in the space in Figure (<ref>)). Again there is a unique simplicial distribution relative to Y given by t_1=1/2 by the isomorphism in (<ref>).Now, we consider the cofiber sequenceYW →WLet [ γ_G] denote the cohomology class obtained as the image of [s_1]∈ H^1(Y,_2) under the connecting homomorphism, where s_1:Y→ N_2 sends the unique non-degenerate 1-simplex to the edge labeled by 1.We can compute the cochain γ_G using the construction of the connecting homomorphism. More explicitly, we have γ_G(σ)=1 for σ∈σ_0(00), σ_1(00), σ_0(11), σ_1(11) , and otherwise zero. We see that γ_G is non-zero on the upper triangle of the surface in Figure (<ref>),which is a closed surface in W, therefore [γ_G]≠ 0. Then Proposition <ref> implies that the simplicial distribution with t_1=1/2 is contextual.Using Lemma <ref> we conclude that every G-equivariant simplicial distribution p:X→ D(Y) relative to Y is G-equivariantly contextual.Next we want to understand the representation of γ_G in the total complex.Note that here we regard γ_Gas a cocycle on (X G), the cofiber of Y→ X G,rather than the subspace W.Since [γ]=0 there exists s:X̅_1→_2 such that d^vs=γ. Forconcreteness we will chooses(x_0)=0 ands(x_1)=1.In the total complex we have(0,0,γ)-d(0,s) =(0,0-d^hs,γ-d^vs) = (0,-d^hs,0).The cocycle ϕ=-d^hs, which can be regarded as a function _2×X̅_1→_2, is given byϕ(g,x_i) = s(g· x_i) +s(x_i) = {[1g=1;0 otherwise. ].§ QUANTUM DISTRIBUTIONSEquivariant simplicial distributions arise naturally in quantum theory. In this section we provide an approach based on partial groups, which is complementary to the cofibration approach of the previous sections. Then we apply our theory to equivariant simplicial distributions that are obtained from quantum theory. In Section <ref> we consider an important example, known as the Mermin star construction,which also has applications tomeasurement-based quantum computing<cit.>. We show that this construction produces equivariantly contextual simplicial distributions and this kind of contextuality can be detected by the cohomology classes we construct in the theory.§.§ Extensions of partial groups We recall the theory of extensions of partial groups from <cit.>. There are two important groups associated to a partial group: * The normalizer of the partial group M is defined byN(M) = θ∈ M_1: ∃ _Mf_θ. 
* The center of M is the subgroup Z(M)⊂ N(M) consisting of those θ∈ M_1 such that _M _M.We are interested in the case where M=NA. We have Z(NA)=N(NA)=A. Extensions of partial groups are described by fiber bundles.Let N be another partial group. The extension of M by N is defined to be a fiber bundleN→ E→ M In the theory of partial groups the total space E can be described by a twisted product.An M-twisting pair for N is a pair of functions (t,β) wheret:M_1 →(N) and β:M_2→ N(N)satisfying * β(x,y) determines a homotopy t(x· y) → t(x)· t(y),* t(1)= and β(x,1)=1=β(1,x) for all x∈ M_1,* for all (x,y,z)∈ M_3,t(x)(β(y,z)) ·β(x,y· z) = β(x,y) ·β(x· y,z). The pair (t,β) can be used to define a twisting function τ and the total space can be described as the twisted product E=N×_τ M; see <cit.>. We will not go into the description of τ in general. We will be more explicit when N=NA, the case of interest for us. In this paper we will needtwo extreme cases of extensions: * β is trivial: In this case the twisted product is simply denoted by N⋉ M. Our main example will come from an action of a group G on M. We will consider NG⋉ M. * t is trivial: Such extensions are called central. We will consider the case where N=NA. Note that in this case β:M_2→ A is precisely a 2-cocycle. We will write NA×_β M for the twisted product. Central extensions of partial groups generalize central extensions of groups. Let K be a group A⊂ K be a central subgroup. We have a central group extension0→ A → K K̅→ 1Consider a set-theoretic section η:K̅→ K of ϵ satisfying η(1_K̅)=1_K. Then we can define a cocyleβ(k̅_1,k̅_2) = η(k̅_1) η(k̅_2) η(k̅_1 k̅_2)^-1.The associated cohomology class [β] classifies the group extension.We can also regard this extension as an extension of partial groups.Let A×_βK̅ denote the group whose elements consists of pairs (a,k̅) together with the multiplication rule twisted by β:(a_1,k̅_1)· (a_2,k̅_2) = (a_1+a_2+β(k̅_1,k̅_2),k̅_1 k̅_2). Then there is a commutative diagram[column sep=huge, row sep=large] NA [r,equal] [d,"i"] NA [d] NA×_β NK̅[d,"π"] [r,"≅"] N(A×_βK̅) [d]NK̅[r,equal] NK̅The section η also defines a pseudo-section of the left-hand partial group extension, which by a slight abuse of notation, we still denote by η:NK̅→ N_d×_β NK̅. (Pseudo-section means that the map is compatible with the simplicial structure maps except d_0.)The isomorphism is given by ((a_1,⋯,a_n),(k̅_1,⋯,k̅_n)) ↦ ((a_1,k̅_1),⋯,(a_n,k̅_n))For an arbitrary central partial group extension we will allow the base space to be a partial group: NA→ NA ×_β M → M where M is a partial group. Equation (<ref>) also holds for the 1-simplices of the total space as the Π_2 map coincides with the d_1-face of the twisted product.For a central partial group extensionNANA ×_β MMthe following are equivalent: * [β]=0 in H^2(M,A).* The map i splits.* The map π splits.The class [β] is zero if and only ifthe twisted product is isomorphic to the trivial one E≅ NA× M; see for example <cit.>. Both (2) and (3) are equivalent to this latter condition.§.§ Group action on partial groupsLet G be a group. Consider the central partial group extensionNA → EM where E=NA×_β M. Our group will act on the total space in such a way that its restriction to the fiber is trivial and moreover the action is compatible with the NA-action on the total space. Note that this fiber bundle is in fact a principal NA-bundle: the fiber is not just a partial group, it is a simplicial abelian group. 
More precisely, the action is given bya homomorphism φ:G→(NA×_β M), which we simply denote by g· x = φ(g)(x), such that* g· a = a for all a∈ (NA)_n, and* g·(a· x) = a· (g· x). Thus G acts on E by partial group automorphisms fixing NA. Here NA acts freely on the total space by left multiplication on the first coordinate of the twisted product. Then we haveg· (a,m) = (a+Φ_g(g· m),g· m).for some function Φ_g:M_1→ A.We will think of this as a function Φ:G× M_1→ A. Our approach will be to regard this as a cochain in a double complex.First we begin with the partial group extension associated to the group action:NA → NG⋉ ENG⋉ Mwhere π̃ NG⋉ E→ NG⋉ M is defined on 1-simplices byπ̃((g,x))=(g,π(x)). The associated twisting pair (t,β) is given by t=φ and β=0.Our next goal is to identify the partial group extension associated to the group action with the Borel construction.There is an isomorphism of simplicial setsMG≅ NG⋉ Mgiven byS: M G → NG⋉ M defined by[(g_0,g_1,…,g_n),(m_1,⋯,m_n)] ↦ ((g_0^-1m_1,g_1),(g_1^-1g_0^-1m_2,g_2),⋯,((g_n-1g_n-2⋯ g_2g_1)^-1g_0^-1m_n,g_n)]and T: NG⋉ M → M G defined by((m_1,g_1),⋯,(m_n,g_n)) ↦ [(1,g_1,⋯,g_n),(m_1,g_1m_2,⋯,g_n-1⋯ g_2g_1m_n)]. Follows from direct verification. We note that the isomorphism given in the lemma above can be extended to extensions of partial groups. There is a commutative diagram of partial group extensions[column sep=huge,row sep =large] NA [r] [d,equal] E G [d,"≅"] [r,"p"] M G [d,"≅"] NA [r] NG⋉ E [r, "π̃"] NG⋉ MWe need only to prove that the right-hand square indeed commutes. Note that the vertical isomorphisms are given by S in Lemma <ref>. Let [(1,g_1,⋯,g_n),(x_1,⋯,x_n)]∈ (E G)_n. Then we have thatS∘ p([(1,g_1,⋯,g_n),(x_1,⋯,x_n)]) = S((1,g_1,…,g_n),π(x))=((g_1,π(x_1)),(g_2,g_1^-1π(x_2)),⋯,(g_n, (g_n-1g_n-2⋯ g_1)^-1π(x))).On the other hand,π̃∘ S([(1,g_1,⋯,g_n),(x_1,⋯,x_n)]) =π̃((g_1,x_1),(g_2,g_1^-1)x_2,⋯,(g_n, (g_n-1g_n-2⋯ g_1)^-1x))=((g_1,π(x_1)),(g_2,g_1^-1π(x_2)),⋯,(g_n, ((g_n-1g_n-2⋯ g_1)^-1π(x))).Thus the claim follows. Recall that M G is the diagonal of the bisimplicial set S_G(M)_p,q=(EG)_p×_G M_q defined in Equation (<ref>). We regard Φ as a (1,1)-cochain in the double complex C^*( [S_G(M)] ,A). Similarly we can regard β:M_2→ A as a (0,2)-cochain in the same complex.The function Φ satisfies:* d^vΦ = d^hβ,* d^hΦ=0. Automorphisms of E=NA×_β M are determined by their restriction to 1-simplices (part (1), Lemma <ref>).We have a commutative diagram[column sep=huge,row sep =large] (NA×_β M)_2 [d,"g·"] [r,"d_2× d_0"] (NA×_β M)_1 × (NA×_β M)_1 [d,"g· "] (NA×_β M)_2 [r,"d_2× d_0"] (NA×_β M)_1 × (NA×_β M)_1 By commutativity of the diagram we obtaing· ((0,0),σ) = ( (Φ_g(g· d_2σ), Φ_g(g· d_0σ)+β(σ)-β(g·σ) ), g·σ).Now, g· is a simplicial set map and since the twisted product is determined by a 2-cocycle it suffices to require compatibility with the face maps d_i from 2-simplices.Compatibility with d_1 requires thatd_1(g· ((0,0),σ)) = ( Φ_g(g· d_2σ)+ Φ_g(g· d_0σ)+β(σ)-β(g·σ) , d_1(g·σ))is equal tog·(d_1 ((0,0),σ) ) = (Φ_g(g· d_1σ), g· (d_1σ) )which gives d^vΦ = d^hβ. A similar computation shows that g· is compatible with d_0 and d_2. Finally d^hΦ=0 follows from the requirement that g· (h· ((0,0),σ)) = gh· ((0,0),σ). Indeed, using Equation (<ref>) we have thatg· (h· ((0,0),σ))=g· (Φ_h(h·σ),h·σ)=(Φ_g(gh·σ)+Φ_h(h·σ), gh·σ) = (Φ_gh(gh·σ), gh·σ)= gh· ((0,0),σ).This gives the identity Φ_g(gh·σ)+Φ_h(h·σ)=Φ_gh(gh·σ).from which the result follows:d^hΦ(g,h,σ)=Φ_g(gh·σ)+Φ_h(h·σ)-Φ_gh(gh·σ)=0. Let η:M→ NA×_β M denote the pseudo-section defined by η(x)=(0,x). 
Then we haveη(x) ·η(y) ·η(x· y)^-1 =(0,x)· (0,y) · (0,x· y)^-1=(β(x,y),x· y) · (-β(x· y,x^-1· y^-1),y^-1· x^-1)=(β(x,y) ,1)and(g·η(g^-1· x))·η(x)^-1= (g· (0,g^-1· x)) · (0,x)^-1=(Φ_g(x),x) · (-β(x,x^-1),x^-1) = (Φ_g(x),1).There exists s:M_1→ A such that d^hs=Φ if and only if π admits a G-equivariant pseudo-section.We firstly note that the cohomology class of Φ does not depend on the choice of the pseudo-section η. Indeed, let η_1 and η_2 be two pseudo-sections and let Φ^(1) and Φ^(2) be the corresponding functions defined using Equation (<ref>). Since for every x∈ M_1 we have that π(η_1(x))=π(η_2(x)), there exists a function α M_1→ A such that η_2(x)=α(x)η_1(x). Recall that we assume that the action of G on A is trivial. Therefore we have that for any g∈ G, x∈ M_1Φ^(2)(g,x) =g·η_2(g^-1x)η_2(x)^-1= α(g^-1x)g·η_1(g^-1x)α(x)^-1η_1(x)^-1=α(g^-1x)-α(x)+Φ^(1)(g,x). We obtain that the difference of the cocycles Φ^(1) and Φ^(2) is a coboundary of the cochain α. So the class of Φ does not depend on the choice of the pseudo-section.From Equation (<ref>) it follows directly that if η is G-equivariant, then the cocycle Φ become zero. By the observation above we see that if there is any equivariant pseudo-section, then the cocycle Φ differs from zero by a coboundary, thus its cohomology class is zero.Assume now that [Φ]=0, that is, there exists a function a M_1→ A such thatfor every g∈ G and x∈ M_1 we have thata(g^-1x)-a(x)=g·η(g^-1x)η(x)^-1.Now define the function λ M_1→ A by λ(x)=η(x)a(x)^-1. From the equation above and the fact that π maps A to the trivial partial subgroup of M it follows that λ is an equivariant pseudo-section. Thus the claim follows. The pseudo-section η_G:M G → E G defined by η_G([(1,g_1,⋯,g_n),x)]) =[(1,g_1,⋯,g_n),η(x)]gives a 2-cocycle β_G: (M G)_2→ A defined by a formula similar to Equation (<ref>)[(1,1),β_G(σ,x,y)] =η_G(x) ·η_G(y) ·η_G(x· y)^-1,where [σ,(x,y)]∈ (EG)_2×_G M_2, such that the principal NA-bundle in(<ref>) is classified by [β_G].Using Lemma <ref> we can identifyM G ≅ NG⋉ M.Thus for σ=(1,g_1,g_2)∈ (EG)_2the element [(σ,(x_1,x_2))] is represented by ((g_1,x_1),(g_2,g_1^-1x_2))∈ (NG⋉ E)_2.The function η_G NG⋉ M→ NG⋉ E is then given by η_G((g_1,x_1),⋯,(g_n,x_n))=((g_1,η(x_1)),⋯,(g_n,η(x_n))). Using this we obtain (1,β_G((g_1,x_1),(g_2,x_2)) =η_G(g_1,x_1)·η_G(g_2,x_2)·η_G(g_1g_2, x_1·(g_1x_2))^-1= (g_1,η(x_1))· (g_2,η(x_2))· (g_1g_2, η(x_1·(g_1x_2)))^-1= (g_1g_2,η(x_1)·(g_1η(x_2)))·((g_1g_2)^-1,(g_1g_2)^-1·η(x_1·(g_1x_2))^-1)= (1, η(x_1)·(g_1η(x_2))·η(x_1·(g_1x_2))^-1). The cocycle β_G is represented by the triple (0,Φ,β) in the total complex.We will use the identification in Remark <ref> and Equation (<ref>). The claim follows from the formulas given in Equation (<ref>). For the first factor, recall that M is a reduced simplicial set (see Definition <ref>) and for the unique vertex ∗∈ M_0 we have s_0(∗)=1∈ M_1. Thus for σ=(1,g_1,g_2)∈ EG_2 we compute thatα(σ,∗)) =β_G(σ,s_1s_0(∗))=β_G((g_1,1),(g_2,1))= (1,η(1)·(g_1η(1))·η(1· (g_1· 1))^-1)= (1,0). We obtain that α(σ,∗)=0. The second factor comes from the following computation for g∈ G and x∈ M_1:α'((1,g),x) =β_G((1,g,1),s_0 x)-β_G((1,1,g),s_1x)=β_G((g,1),(1,g^-1x)))-β_G((1,x),(g,1))=(1,η(1)·(g_1η(g^-1x))·η(1)^-1)-(1,η(x)·η(1)·η(x)^-1)= (1,η(1)·(g_1η(g^-1x))·η(1)^-1)=(1,Φ_g(x)).Therefore α'((1,g),x)=Φ_g(x). Finally the third factor can be computed asα”[(1),(x_1,x_2)]= β_G((1,1,1),(x_1,x_2)) = β_G((1,x_1),(1,x_2)) = (1,η(x_1)·η(x_2)·η(x_1x_2)^-1)= (1,β(x_1,x_2)). 
Let NAEM be a central partial group extension and G be a group acting on E partial group automorphisms that fix NA. The following are equivalent: * The class [β_G] is zero.* There exists s:M_1→ A such that d^vs=β and d^hs=Φ.* The map π admits a G-equivariant section. We begin by showing the equivalence of (1) and (2). By Lemma <ref> [β_G]=0 if and only if [(0,Φ,β)]=0 in the total complex. This condition is equivalent to the existence of s:M_1→ A and r:G→ A such that d(r,s)=(0,Φ,β). Unraveling the coboundary in the total complex givesd(r,s) = (d^hr, d^hs,d^vs)where we used d^vr=0. Thus [(0,Φ,β)]=0 if and only if d^hr=0, d^hs=Φ and d^vs=β. We obtain the desired result since we can take r=0.Equivalence of (2) and (3) follows from Lemma <ref> and Lemma <ref>. In practice we will not carry over the trivial part in the Equations (<ref>-<ref>) and simply writeβ(x,y) = η(x)·η(y) ·η(x· y)^-1 Φ_g(x) = (g·η(g^-1· x)) ·η(x)^-1 β_G(x,y) = η_G(x)·η_G(y) ·η_G(x· y)^-1 .§.§ Comparison to cofibration There is a canonical comparison diagram[column sep=huge,row sep =large] NA [r,equal] [d,"i"] NA [d,"i"] E [r,equal] [d,"q"] E [d,"π"]E̅[r,"α"]Mwhere the left sequence is a cofiber sequence and right sequence is a central extension of partial groups. We have α^*([β])=[γ]. Consider the Serre spectral sequence for the right vertical sequence in Diagram (<ref>) with coefficients in A.Let 𝕀 be the identity homomorpshim of A. Then we have that [𝕀]∈ H^1(A,A) and that d_2([𝕀])=[β]; see <cit.>. This differential is the transgression homomorphism and it is defined by the following diagram (see <cit.>):[ampersand replacement = &,column sep=huge,row sep =large] 0& H^2(M,A)["α^∗"]& H^2(M,A)["π^∗"]& 0H^1(A,A)["ζ"]& H^2(E̅,A)& H^2(E,A)&This diagram comes from the map of cofibrations:[ampersand replacement=&,column sep=huge,row sep =large] NA [r] [d,"i"] &∗[d] E [r,,"π"] [d] & M [d,equal]E̅[r,"α"] &MBy the definition of the transgression we obtain that α^∗([β])=ζ([𝕀])=[γ]. The following are equivalent.* i:NA→ E has a section.* [γ]=0 in H^2(E̅,A).* [β]=0 in H^2(M,A).Equivalence of (1) and (2) follows from Proposition <ref> and the equivalence of (1) and (3) follows from Lemma <ref>.If [β]≠ 0 then every simplicial distribution p:E→ D_R(NA) relative to NA is contextual.Follows from Proposition <ref> and Proposition <ref>. We can also apply the discussion of G-equivariant contextuality. In this case the comparison diagram is given as follows[column sep=huge,row sep =large] NA [r] [d] NA G[d] [r] NA [d] E G [r,equal] [d] E G [r,equal] [d] E G [d]E G[r,"c"] E̅ G [r,"α"] M GWe haveα^*([β_G])=γ̃_G andc^*(γ̃_G)=γ_G. We haveα^*[(0,Φ,β)] = [(0,ϕ̃,γ) ].Follows from the first equation above and the corresponding descriptions in the total complexes.The following are equivalent. * i:NA→ E has a G-equivariant section.* i:NA→ E G has a section.* [β_G] ∈ H^2(M G,A) is zero.* [γ_G] ∈ H^2(E G,A) is zero.Equivalence of (1), (2) and (4) is by Lemma <ref> and Proposition<ref>. Applying Proposition <ref> to the leftmost cofibration and the rightmost fibration, which is a partial group extension by Proposition <ref>, we obtain the equivalence of (3) and (4).If [β_G]≠ 0 then everyG-equivariantdistribution p:E→ D_R(NA) relative to NA is G-equivariantly contextual. 
Follows from Corollary <ref> and Proposition <ref>.By Theorem <ref> we can show that [β_G]≠ 0 by working in the total complex.Part (2) of this corollary can be broken into two steps: We have [β_G]=0 if and only if * there exists s':M_1→ A such that d^vs'=β, and* there exists s”:M_1→ A such that d^vs”=0 and d^hs” = Φ-d^hs'. Note that s=(0,s”+s') satisfies ds=(0,Φ,β). We can apply this observation to the case when [β]=0. Then s' as specified in (1) exists. Thus [β_G]=0 if and only if s” as specified in (2) exists.§.§ Example: Dihedral group We will compare the cofibration involving the torus in Sections<ref> and <ref> to a partial group extension obtained from the trivial group extension:[column sep=huge,row sep =large] S^1 [d] [r] N_2 [d] S^1× S^1 [r] [d] N(_2×_2) [d]S^1× S^1[r,"α"] N_2The map α sends x_i↦ 1 where i=0,1. We consider the G=_2 action on the middle spaces by swapping the coordinates. Let us begin with a pseudo-section η:N_2→ N(_2×_2) that sends a↦ (0,a) in degree 1.In fact, this is a section and henceβ=0. In Section <ref> we have seen that [γ]=0. Therefore α^*([β])=[γ] (thus verifying Lemma <ref>). We compute Φ as follows:Φ_a(b)= a·η(-a· b) ·η(b)^-1= {[0a=0;b a=1. ].Then we haveα^*(Φ)(a,x_i)= Φ(a,α(x_i)) = Φ(a,1) = {[ 0 a=0; 1 a=1 ]. = ϕ(a,1). where ϕ is given by Equation (<ref>). Using s in Equation (<ref>) we observe that α^*(0,Φ,0) = (0,ϕ,0) = d(0,s) +(0,ϕ,γ),and hence α^*[(0,Φ,0)]=[(0,ϕ,γ)] (this verifies Lemma <ref>).We observe that (_2×_2)⋊_2, where the action is the swap, is isomorphic to the dihedral group D_8. The action on the base space N_2 is trivial. Then the Borel construction of Diagram (<ref>) gives [column sep=huge,row sep =large] S^1 [d] [r] N_2 [d] (S^1× S^1)G [r] [d] ND_8 [d](S^1× S^1) G[r,"α_G ∘c"] N(_2×_2)We can compute the cocycle representing the extension class from the pseudo-section η_G:N(_2×_2)→ ND_8 defined by η_G(c,d)=((0,c),d):β_G((c,d),(c',d'))= η(c)· (dη(c))·η(c+(dc'))^-1= (0,c)+d·(0,c')+(0,c+c')= {[ c'd=1;0 d=0. ].Note that we use that the action on the base space is trivial.Using Equation (<ref>) we obtain∇_2(β_G) = (0,Φ,0).When computing this we also used the isomorphism N(_2×_2)≅ N_2_2 that sends ((c,d),(c,d))↦ [(0,d,d'),(c,c')] in degree 2. To illustrate this we compute the middle term in Equation (<ref>):α'[(0,d),c]= β_G[(0,d,0),(0,c)] - β_G[(0,0,d),(c,0)] = β_G[(0,d,0),(0,c)]= {[ c d=1; 0 d=0 ]. = Φ_d(c).§.§ Quantum distributionsWe recall some basic constructions from <cit.> to study simplicial distributions that come from quantum mechanics. Letdenote a finite dimensional complex Hilbert space. We will write () and () for the set of positive operators and the subset of projectors.We define a functor P_:→: * For a set U the set P_(U) of projective measurements on U consists of functionsΠ:U→() with finite support, i.e., |u∈ U: Π(u)=|<∞, such that ∑_u∈ UΠ(u)=_.* Given a function f:U→ V the function P_(f):P_(U)→ P_(V) is defined by Π↦( v↦∑_u∈ f^-1(v)Π(u) ). There is an analogous functor Q_ involving () instead of the projectors. Note that for = this functor coincides with the distribution monad D introduced in Section <ref>.In this paper we will focus on the projective version. Given a simplicial set X:^→ we define P_(X):X.A simplicial projective measurement on (X,Y) is a simplicial set map Π:X→ P_(Y). 
We will write (X,Y) for the set of simplicial projective measurements on (X,Y).Let ρ be a density operator, i.e., ρ∈() and (ρ)=1.By sending a simplicial measurement Π:X→ P_(Y) to the simplicial distribution ρ_*(p):X→ D(Y) defined by σ↦( θ↦(ρΠ_σ(θ)))we obtain a simplicial set mapρ_*:P_(Y) → D(Y)The trace (ρΠ_σ(θ)) is interpretedas the probability of obtaining the outcome θ and in physics literature this is known as the Born rule.The simplicial set map in (<ref>) will be referred to as the simplicial Born rule. Next we introduce a nerve space that has been successful in studying contextuality in the context of quantum theory; see <cit.> for applications to operator solutions of linear systems. Let d≥ 2 be an integer and K be a group. We define a partial group N(_d,K), a simplicial subset of the nerve space NK, whose n-simplices are given byN(_d,K)_n= (k_1,k_2,⋯,k_n):k_i^d=1_K, k_ik_j=k_jk_i ∀ 1≤ i,j≤ n . The property k^d=1_K will be referred to as the d-torsion property.Let U() denote the group of unitary operators.(<cit.>) Sending an n-tuple (A_1,A_2,⋯,A_n) of pairwise commuting d-torsion unitary operators to the projective measurement Π_A:_d^n→() obtained by simultaneously diagonalizing the operators gives an isomorphism of simplicial sets:N(_d,U()) → P_(N_d)We will identify these two simplicial sets. After this identification the simplicial Born rule becomesρ_*:N(_d,U()) → D(N_d) Now, consider a group Kwith a central element J or order d. We have a central group extension1 →J→ K K̅→ 1 We write g̅ for the image of g under the quotient map.Let d≥ 2 be an integer and K be a group with a central element J of order d. We define a partial group N̅(_d,K), a simplicial subset of the nerve space NK̅, whose n-simplices are given byN̅(_d,K)_n= (k̅_1,k̅_2,⋯,k̅_n)∈K̅^n:k_i^d=1_K, k_ik_j=k_jk_i ∀ 1≤ i,j≤ n . The central extension in (<ref>) gives a diagram as in (<ref>). Puling back this diagramalong the inclusion N̅(_d,K)→ NK̅ we obtain the following result. We will still write β for the pull-back of the cocycle β along this inclusion.We have a central extension of partial groupsN_dN(_d,K) N̅(_d,K) where the total space can be described as the twisted product N_d ×_βN̅(_d,K).There is a way to obtain a simplicial distribution on the total space relative to the fiber that uses representation theory ofgroups. Consider a unitary representation ψ:K→ U(), i.e., a group homomorphism, such that ψ(J) = e^2π i/d.Then given a quantum state ρ∈() we can construct a commutative diagram[column sep=huge,row sep =large] N_d [d] [r,equal] N_d [d] [r,equal]N_d [d,"δ"]N(_d,K) [d] [r,"ψ_*"] N(_d,U()) [d] [r,"ρ_*"] D(N_d)N̅(_d,K) [r]N̅(_d,U())The composite ρ_*^ψ= ρ_*∘ψ_* is a simplicial distribution relative to N_d.A quantum state ρ∈() is called (non-)contextual with respect to ψ:K→ U() if the simplicial distribution ρ_*^ψ: N(_d,K) → D(_d)is (non-)contextual. If [β]≠ 0 then ρ_*^ψ is contextual for allρ∈(). Follows from Corollary <ref> since the simplicial distribution ρ_*^ψ is relative to NA.In practice we can assume that ψ is injective so that K can be identified with a subgroup of U(). Note that we can always replace K with the image of ψ.K is called contextual if ρ_*^ψ is contextual for all ρ∈(). This notion of contextuality is usually referred to as state-independent contextuality and a topological perspective is first developed in <cit.>. In this case we say ρ is contextual with respect to K instead of the homomorphism ψ.Next, we turn to the equivariant case. 
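Before doing so, the identification of commuting d-torsion unitaries with projective measurements and the simplicial Born rule can be made concrete in a small numerical example. In the numpy sketch below, the commuting pair Z⊗Z, X⊗X and the Bell state are chosen purely for illustration; the joint projectors Π(t_1,t_2) are obtained by simultaneous diagonalization, and the Born rule assigns to each outcome (t_1,t_2) ∈ Z_2×Z_2 the probability tr(ρ Π(t_1,t_2)).

```python
import numpy as np
from itertools import product

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# A 2-simplex of N(Z_2, U(H)): two commuting 2-torsion unitaries on two qubits.
A1, A2 = np.kron(Z, Z), np.kron(X, X)
assert np.allclose(A1 @ A2, A2 @ A1) and np.allclose(A1 @ A1, np.eye(4))

# Simultaneous diagonalization: joint projector onto eigenvalues (-1)^t1, (-1)^t2.
def proj(A, t):
    return (np.eye(4) + (-1) ** t * A) / 2

phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # Bell state, for illustration
rho = np.outer(phi, phi.conj())

# Simplicial Born rule: outcome (t1, t2) in Z_2 x Z_2 has probability tr(rho Pi(t1, t2)).
for t1, t2 in product((0, 1), repeat=2):
    p = np.trace(rho @ proj(A1, t1) @ proj(A2, t2)).real
    print((t1, t2), round(p, 6))        # deterministic: all weight on the outcome (0, 0)
```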
Let us assume that ψ:K→ U() is injective so that we can identify K as a subgroup of the unitary group. Under this assumption we drop ψ from notation, e.g., we simply write ρ_* for the map in (<ref>). We will consider partial group actions on N(_d,K) in the sense of Section <ref>. In practice such actions arise from a subgroup G of the normalizer N(K) of K in the unitary group U(). For the rest of this section we will restrict to actions arising in this way. A quantum state ρ is called G-equivariant if Uρ U^† =ρ for all U∈ G.A G-equivariant quantum state is G-equivariantly (non-)contextual if ρ_* is G-equivariantly (non-)contextual. If [β_G]≠ 0 thenρ_* is G-equivariantly contextual for all G-equivariant quantum states ρ.Follows from Corollary <ref> since the simplicial distribution ρ_*^ψ is G-equivariant and relative to NA. In practice, we will use Remark <ref> to show that [β_G]≠ 0.The situation in Corollary <ref> is referred to as state-independent contextuality in the physics literature <cit.>. §.§ Example: Mermin starIn this section we will construct an example that is of practical interest in quantum foundations and computing. Our example is a topological representation of the well-known Mermin star construction introduced in <cit.>. We improve on the earlier topological approach of <cit.> by help of the simplicial approach developed in <cit.>. Letdenote the n-fold tensor product (^2)^⊗ n.The Pauli X and Z matrices are given byX = ( 0 1 1 0 )Z = ( 1 0 0 -1 ).The Pauli group P_n is the subgroup of U() generated by operators of the form T_a = i^a_z· a_x Z(a_z) X(a_x)where i is the imaginary unit, a=(a_z,a_x)∈_2^n×_2^n, a_z· a_x=∑_j (a_z)_j(a_x)_j, and Z(a_z) = Z^(a_z)_1⊗⋯⊗ Z^(a_z)_n X(a_x) = X^(a_x)_1⊗⋯⊗ X^(a_x)_n.Note that T_a^2= for all a∈_2^2n. The Pauli group fits in a central extension0→_4P_n _2^2n→ 0A set-theoretic section of ϵ can be defined byη(a) =T_a.We will study the partial group extensionN_2 → N(_2,P_n) N̅(_2,P_n)Now, the set-theoretic section in Equation (<ref>) can be used to define a pseudo-section of π, which we still denote by η. Using η and Equation (<ref>) we can compute(-1)^β(a,b) = η(a)·η(b)·η(a+b)^-1= T_a T_b (T_a+b)^-1= (i^a_z· a_x Z(a_z)X(a_x))(i^b_z· b_x Z(b_z)X(b_x))(i^(a+b)_z· (a+b)_x Z((a+b)_z)X((a+b)_x)) = i^a_z· a_x+b_z· b_x+(a+b)_z· (a+b)_x (-1)^b_z· a_x+(a+b)_z· (a+b)_x= i^b_z· a_x - a_z· b_x= (-1)^(b_z· a_x - a_z· b_x)/2where we used the fact that b_z· a_x - a_z· b_x is divisible by 2 since ω(a,b)=0. Therefore β(a,b)=(b_z· a_x - a_z· b_x)/2.Similarly we can compute Φ using Equation (<ref>). We begin by constructing a smaller partial group extension and a cofibration that comes with a comparison map. For the partial group extension first we consider the 2-skeleton N_2^(2) and the partial monoid M given by the space in Figure (<ref>). Let Π̅:M→N̅(_2,P_n) denote the inclusion described by Figure (<ref>) and E denote the 2-skeleton of the pull-back of π along this maps.The space E consists of the non-degenerate 2-simplices given by σ_ai(cd) and σ_bi(cd) where c,d∈_2, i.e., there are four copies of triangles above a given triangle of M. Instead of providing a picture of E we will consider a simplicial subset X⊂ E which is described in Figure (<ref>). This portion focuses on the restriction of E over the two triangles σ_1b and σ_4b of M. 
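To see concretely the central sign that these cocycles keep track of, and why it matters for contextuality, the defining algebra of the Mermin star can be checked numerically. The short numpy sketch below (an illustrative aside) verifies that the four composite observables X⊗X⊗X, X⊗Y⊗Y, Y⊗X⊗Y, Y⊗Y⊗X are pairwise commuting 2-torsion operators whose product is minus the identity, and that a noncontextual assignment of ±1 values to the single-qubit observables can never reproduce that sign.

```python
import numpy as np
from functools import reduce
from itertools import product

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def kron(*ops):
    return reduce(np.kron, ops)

# The context of the Mermin star consisting of the four composite observables.
ops = [kron(X, X, X), kron(X, Y, Y), kron(Y, X, Y), kron(Y, Y, X)]
for A in ops:
    assert np.allclose(A @ A, np.eye(8))            # 2-torsion
for A in ops:
    for B in ops:
        assert np.allclose(A @ B, B @ A)            # pairwise commuting

# Their product is minus the identity: a nontrivial central sign (the 2-torsion
# element of the Z_4 centre of P_3), of the kind the cocycles above record.
assert np.allclose(reduce(np.matmul, ops), -np.eye(8))

# A noncontextual assignment of +-1 values to X_i and Y_i multiplies to +1 for this
# context (every single-qubit factor appears twice), so it can never reproduce the -1.
vals = {(x1*x2*x3) * (x1*y2*y3) * (y1*x2*y3) * (y1*y2*x3)
        for x1, y1, x2, y2, x3, y3 in product((1, -1), repeat=6)}
print(vals)   # {1}
```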
Using all these spaces we obtain a commutative diagram[column sep=huge,row sep =large] S^1 [d,"i"] [r,hook] N_2^(2)[d] [r,hook] N_2 [d]X [d,"q"] [r,"α",hook] E [d,"π"] [r,hook,"Π"] N(_2,P_3) [d]X̅[r,"α̅"] M [r,hook,"Π̅"]N̅(_2,P_3) where Π|_X = Π∘α is described in Figure (<ref>). Under i the circle S^1 maps to the common edge x between σ_2 and σ_3. Next we describe the symmetry group G. This example is a modified version of the one discussed in the context of measurement-based quantum computing; see <cit.>.Let us define the following unitary operatorA = X + Y/√(2). Note that A^2= and we haveA X A = Y, AYA = X,AZA = -Z. Therefore A is in the normalizer of P_1⊂ U(^2). Consider the symmetry groupG = A⊗ A ⊗ Y, A ⊗ Y⊗ A, Y⊗ A ⊗ A. Note that G acts on N(_2,P_3) and furthermore this action restricts to an action on E, which is regarded as a simplicial subset via the inclusion Π. Consider the following subgroupH= V⊂ G.where V= A⊗ A ⊗ Y. We haveV (X⊗ X⊗) V= Y⊗ Y⊗V (X⊗ X⊗ X) V= -Y⊗ Y⊗ X V (⊗⊗ X) V= -⊗⊗ X.HenceH acts on the smaller simplicial subset X (embedded via Π|_X).Finally we consider a G-equivariant quantum state ρ∈() given by ρ = 1/4( + ⊗ Z⊗ Z+ Z⊗⊗ Z+Z⊗ Z⊗+ X⊗ X⊗ X - X⊗ Y⊗ Y - Y⊗ X⊗ Y - Y⊗ Y⊗ X).Note that ρ can be written as the outer product vv^† where v=(e_0⊗ e_0⊗ e_0 + e_1⊗ e_1⊗ e_1)/√(2) and e_0,e_1 is the canonical basis of ^2.In physics literature the state v is called the Greenberger–Horn–Zeilinger (GHZ) state <cit.>. Observe thatU ρ U^† = ρfor all U∈ G. Therefore we obtain a G-equivariant distribution using the simplicial Born rule (see (<ref>))ρ_*: N(_2,P_3) → D(N_2)which we can restrict to E and X to obtain the G-equivariant distributions ρ_*|_E and ρ_*|_X, respectively.First we begin with X and the H-equivariant distribution p_X=ρ_*|_X. It turns out that X does not admit any H-equivariant deterministic distribution. To see this note that such a distribution d is determined by the assignment d_d_2σ”_1b =δ^a and d_d_0σ”_1b=δ^b. Then the 2-simplex σ_2 forces that d_d_2σ'_1b=δ^1-a since one of the edges is given by δ^1. In addition, H-equivariance implies that d_d_0σ'_1b=δ^b. Then we obtain a contradictionδ^a+b = d_d_1σ”_1b = d_d_1σ'_1b = δ^1+a+b.By Corollary <ref> this implies that [γ_G]≠ 0. On the other hand, we observe that [γ]=0 since the cocycle γ is non-zero on two of the non-degenerate 2-simplices σ_2 and σ_3. Let us defines(x') = {[ 1 x' ∈x_i:i=0,1,2; 0otherwise. ].Then using ϕ=-d^hs we obtainϕ(g,x') = {[ 1 g=1,x' ∈x_i,x_i': i=0,1,2; 0otherwise. ]. Let us consider the partial group extension E. We can compute β, as we did for the general case; see Equation (<ref>). Since in each triangle the product of operators on the boundary isthe cocycle β is zero. Let us compute Φ for the group element V:(-1)^Φ_g(x') = g·η(g· x')η(x') = {[ -1 g=V,x' ∈x_i,x_i':i=0,1,2;1 otherwise. ]. Therefore ϕ = α̅^*(Φ). ieeetr | http://arxiv.org/abs/2310.18135v1 | {
"authors": [
"Cihan Okay",
"Igor Sikora"
],
"categories": [
"quant-ph",
"math.AT"
],
"primary_category": "quant-ph",
"published": "20231027132546",
"title": "Equivariant simplicial distributions and quantum contextuality"
} |
Practical application of quantum neural network to materials informatics: prediction of the melting points of metal oxides Hirotoshi Hiraie-mail: [email protected] Central R&D Labs., Inc.,41-1, Yokomichi, Nagakute, Aichi 480-1192, Japan============================================================================================================================================= Quantum neural network (QNN) models have received increasing attention owing to their strong expressibility and resistance to overfitting. It is particularly useful when the size of the training data is small, making it a good fit for materials informatics (MI) problems. However, there are only a few examples of the application of QNN to multivariate regression models, and little is known about how these models are constructed. This study aims to construct a QNN model to predict the melting points of metal oxides as an example of a multivariate regression task for the MI problem. Different architectures (encoding methods and entangler arrangements) are explored to create an effective QNN model. Shallow-depth ansatzs could achieve sufficient expressibility using sufficiently entangled circuits. The “linear” entangler was adequate for providing the necessary entanglement. The expressibility of the QNN model could be further improved by increasing the circuit width. The generalization performance could also be improved, outperforming the classical NN model. No overfitting was observed in the QNN models with a well-designed encoder. These findings suggest that QNN can be a useful tool for MI.Practical application of quantum neural network to materials informatics: prediction of the melting points of metal oxides Hirotoshi Hiraie-mail: [email protected] Central R&D Labs., Inc.,41-1, Yokomichi, Nagakute, Aichi 480-1192, Japan=============================================================================================================================================§ INTRODUCTION The application of machine learning (ML) to the development of materials is becoming increasingly important <cit.>. Materials informatics (MI) is a field of information science used to develop materials <cit.>. It involves constructing a predictive model of physical properties from a limited amount of data obtained from experiments or simulations and then screening materials with the desired performance from a large group of materials. The challenge with MI is that the data are often limited and prone to noise owing to errors in the experimental data, making it difficult to construct a model with a good generalization performance (prediction performance for unknown materials) <cit.>.Recently, a quantum neural network (QNN) <cit.>, also referred to as quantum circuit learning <cit.>, has been developed as an ML algorithm for quantum computers <cit.>. It is a quantum-classical hybrid algorithm based on the variational quantum algorithm <cit.>, which has been developed to work with noisy intermediate-scale quantum (NISQ) devices <cit.>. A QNN model is built by minimizing the discrepancy between the output of the quantum circuit and labeled data by adjusting the circuit parameters to their optimal values. The advantage of QNN is that it can use high-dimensional quantum states as trial functions that are hard to generate on a classical computer <cit.>. Another advantage of a QNN is that the unitarity of quantum circuits serves as regularization to prevent overfitting <cit.>. 
In a classical neural network (NN) model, a regularization term is incorporated into the cost function to constrain the norm of the learning parameters and to reduce the model's expressibility to prevent overfitting <cit.>. In contrast, the norm of parameters is automatically limited to one due to unitarity in a QNN model, i.e., the regularization function is inherently provided. QNNs have also been reported to afford predictive models with excellent generalization performance even when only a small amount of training data is available <cit.>. It has also been reported that the smaller the data size of the problem, the greater the advantage of the generalization performance of QNNs over classical NNs <cit.>.These characteristics of QNNs may be particularly useful in MI. The atomic configuration can be used to predict the properties of materials because the Hamiltonian can be determined from the atomic configuration and the Schrödinger equation can be solved (in principle) using the Hamiltonian to obtain the properties of the material.ML models can be used instead of solving the Schrödinger equation because solving the many-body Schrödinger equation is extremely difficult <cit.>. Such concepts have been considered in the MI <cit.> and QSAR (Quantitative Structure-Affinity Relationship) <cit.> fields. The construction of an ML model that bypasses the Schrödinger equation is expected to be naturally aided by a QNN model with quantum architectures.In this study, we attempted to construct a successful QNN model to predict the melting points of metal oxides. Calculating thermodynamic properties such as melting points is difficult with first-principles calculations because of the high computational cost and lack of accuracy <cit.>. Therefore, it is important to develop a practical melting point prediction model to identify functional materials <cit.>. However, because QNNs are an emerging field, there is still a lack of understanding of how to construct effective QNN models. We considered various architectures (ansatz and encoding methods) to create an effective QNN model for the practical task of predicting melting points. § METHODS§.§ Data setThis study addresses the issue of predicting the melting points of metal oxides. The melting point data for metal oxides listed in <cit.> were expanded to 70 metal oxides by adding data from other references <cit.>. Each material was identified in the Materials Project database <cit.>, and the following five explanatory variables were obtained (some variables were calculated from structural data in the database in <cit.>). * formation_energy_per_atom: Formation energy per atom* band_gap: Band gap energy* density: Mass density* cati_anio_ratio: Ratio of the number of cations and anions* dist_from_o: Minimum distance from the oxygen atom to cationThe constructed dataset is available in the Supporting Information. These explanatory variables were normalized to have a mean of 0 and a variance of 1 for the training data and further scaled to have a maximum value of 1 and a minimum value of -1. The objective variable (melting point temperature in °C) was divided by 3500 and scaled such that the maximum value was approximately 1 (the highest melting point of metal oxides treated in this study was 3390 °C).The k-fold cross-validation method <cit.> was used to evaluate the accuracy of the constructed regression models. 
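Concretely, the preprocessing and evaluation protocol can be sketched as follows. The arrays below are random placeholders standing in for the data tabulated in the Supporting Information, the two-step feature scaling is one natural reading of the description above, the five-fold split described next is used, and the model-fitting step is left as a stub.

```python
import numpy as np
from sklearn.model_selection import KFold

# Placeholder arrays standing in for the Supporting Information data:
# X holds the five descriptors for the 70 oxides, y the melting points in deg C.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(70, 5)), rng.uniform(400.0, 3400.0, size=70)

rmses = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    # standardize with training-set statistics, then rescale so the training data lie in [-1, 1]
    mu, sigma = X[train].mean(axis=0), X[train].std(axis=0)
    Xs = (X - mu) / sigma
    Xs = Xs / np.abs(Xs[train]).max(axis=0)
    ys = y / 3500.0                              # target scaled so its maximum is close to 1
    # ... fit the QNN (or classical NN) on (Xs[train], ys[train]) here ...
    pred = np.full(len(test), ys[train].mean())  # trivial mean predictor as a stand-in
    rmses.append(np.sqrt(np.mean((pred - ys[test]) ** 2)))
print(np.mean(rmses))                            # averaged over the five folds
```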
In this study, the 70 dataset entries were divided into five groups; one group was used as the test data, while the other groups were used as the training data. This procedure was performed for all five combinations, and the average accuracy of the five models was used as the final accuracy. The root mean square error (RMSE) was used as a measure of accuracy.§.§ QNN modelsThe QNN model is composed of three components: an encoder that transforms explanatory variables into a quantum state, an ansatz which is a quantum circuit with learning parameters, and a decoder that converts the quantum state into an output value. Each component is described in detail in the following sections. In this study, QNN models were implemented using Pytket <cit.>, a Python module for quantum computing, and quantum circuit calculations were performed using state vector calculations with the Qulacs <cit.> backend, a quantum computing emulator. The mean squared error (MSE) between the labeled data and model predictions was used as a cost function. The Powell method <cit.> was used to optimize the learning parameters.§.§.§ EncoderIn this study, Ry rotation gates <cit.> were used as encoders. We used two different methods to transform each scaled explanatory variable x into the rotation angle θ: θ = π x and θ = arctan(x)+π/2. The arctangent allows the scaled explanatory variable to be uniquely converted to a rotation angle even if the value is outside the scale range (-1,1) when the scaler is used for the test data. We constructed a 5-qubit QNN model with each explanatory variable encoded in one qubit and a 10-qubit QNN model with each explanatory variable encoded in two qubits, as shown in Fig. <ref> (a) and (b), respectively.In the 10-qubit model, two different encoding methods were tested: one with redundant imputation of the explanatory variable x and the other with imputation as x and x^2, as indicated by the parentheses in Fig. <ref> (b).§.§.§ AnsatzIn this study, as the ansatz part of the QNN, we examined ansatzs with the quantum circuits shown in Fig. <ref> as the depth 1-block.In these ansatzs, an entangler (a group of 2-qubit operations) was placed after the Ry rotation gate. Although Fig. <ref> shows CNOT (CX) gates as 2-qubit gates, and we also examine the case using controlled-Z (CZ) gates. circular2 (c) and circular4 (d) contain 2-qubit operations up to the second and fourth nearest-neighbor qubits, respectively. Each Ry gate has an independent learning parameter θ. Because there are five (10) Ry gates in the depth 1-block of the 5-qubit (10-qubit) model, the number of parameters for the QNN model with depth d is 5d (10d). In this study, d values of 1 to 7 were considered. §.§.§ DecoderThe QNN decoder takes the expectation value of an observable quantum state generated by the encoder-ansatz quantum circuit as the output of the regression model. For the 5-qubit QNN models, the expectation value of σ^4_z (the Z-axis projection of the lower-end qubit) was used as the decoder (note that the number on the label begins with zero). For the 10-qubit QNN models, the expected value of σ^4_z + σ^9_z was used.§.§ Circuit analysisThe higher the expressibility of the ansatz, the better the regression accuracy. Therefore, the quantitative evaluation of the expressibility of an ansatz plays an important role in the construction of a QNN model. In this study, Kullback-Leibler (KL) divergence <cit.> and entanglement entropy <cit.> were used as ansatz evaluation tools. 
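Before describing these metrics in detail, the model just specified can be written down compactly. The sketch below uses PennyLane and SciPy purely for brevity of presentation (the study itself was implemented with Pytket on the Qulacs state-vector backend): one descriptor is encoded per qubit as Ry(arctan(x)+π/2), d ansatz blocks of parametrized Ry rotations are each followed by a “linear” CNOT entangler, the decoder is the expectation value of σ_z on the last qubit, and the MSE cost is minimized with the gradient-free Powell method, giving 5d trainable parameters as in the text.

```python
import numpy as np
import pennylane as qml
from scipy.optimize import minimize

n_qubits, depth = 5, 3                      # depth d gives 5*d trainable parameters
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnn(x, thetas):
    # encoder: one descriptor per qubit, angle arctan(x) + pi/2
    for i in range(n_qubits):
        qml.RY(np.arctan(x[i]) + np.pi / 2, wires=i)
    # ansatz: depth-1 block = one RY rotation per qubit + a "linear" CNOT entangler
    for d in range(depth):
        for i in range(n_qubits):
            qml.RY(thetas[d * n_qubits + i], wires=i)
        for i in range(n_qubits - 1):
            qml.CNOT(wires=[i, i + 1])
    # decoder: expectation value of sigma_z on the last qubit
    return qml.expval(qml.PauliZ(n_qubits - 1))

def cost(thetas, X, y):
    preds = np.array([qnn(x, thetas) for x in X])
    return np.mean((preds - y) ** 2)        # MSE between labels and circuit outputs

# With training data (X_train, y_train) prepared as above, the parameters are found with
# the gradient-free Powell method, e.g.
# minimize(cost, x0=np.zeros(n_qubits * depth), args=(X_train, y_train), method="Powell")
```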
In the KL divergence metric, the KL divergence between the fidelity distribution of quantum states obtained from an ansatz with random parameters and the fidelity distribution for Haar measurements is used to quantify expressibility <cit.>.In the entanglement entropy, the entanglement entropy between one qubit and another was calculated for an ansatz with random parameters and the statistical mean obtained. This calculation was performed for different qubits as subsystems, and the average of the qubits was used to quantify the entanglement strength of the ansatz. §.§ Classical NN modelsA conventional neural network (NN) model was constructed for comparison. To vary the number of learning parameters in the NN regression model, models 5-5-1(36), 5-3-1(22), 5-2-1(15), 5-1(6) were prepared, where the numbers indicate the number of neurons in the fully connected layers, “-” indicates “between layers”, and the numbers in parentheses represent the number of training parameters.A sigmoid function was used as the activation function. PyTorch <cit.> is used to construct and train the NN model. The Adam optimizer <cit.>, an extended version of the stochastic gradient descent, was used with a learning rate of 0.02 over 10000 epochs.L2 regularization was applied to prevent overfitting. The weight parameter for L2 regularization (a hyperparameter set by the user) was used to minimize the RMSE for the test data (average of five groups). We tested the parameters of 10^-n with n = 2, 3, 4, and 5, and found that n = 4 gave the best performance for all models. § RESULTS AND DISCUSSIONS§.§ EncoderFirst, we present the results of the analysis of the effects of different methods on transforming the explanatory variable x into the rotation angle θ during Ry(θ) encoding. The RMSE of the QNN models with Ry(π x) and Ry(arctan(x)+π/2) are shown in Fig. <ref>.Here, the number of qubits was fixed at five, and the entangler was fixed in a linear arrangement (Fig. <ref> (a)). The number of parameters in the model increased with the depth of the ansatz. For comparison, Fig. <ref> also shows the results for the classical NNs with and without regularization as “NN reg.” and “NN”, respectively. It can be confirmed that NN models without regularization induce overfitting. That is, the RMSE of the test data increases as the number of parameters increases. When Ry(π x) was used as the encoder, QNN models with a small number of parameters (shallow ansatzs) exhibited significantly poorer regression performance. The reasons for this are as follows. Here, the explanatory variable x(-1,1) is converted into a rotation angle θ(-π,π), which results in a round trip around the Bloch sphere, and the Z-axis projection after encoding is not unique. In extreme cases, x=-1 and x=1 are encoded in the same quantum state. As the number of parameters increases (the ansatz is deepened), the RMSE becomes smaller for the training data. This is thought to be because the data are fully trained by brute force with a large number of parameters.However, for the test data, overfitting was observed for the models with deep ansatzs. However, in the QNN model using Ry(arctan(x)+π/2) as the encoder, the RMSE was small, even for a model with a small number of parameters (shallow ansatzs). It can also be confirmed that overfitting does not occur even in models with a large number of parameters (deep ansatzs). 
In this case, the RMSE values for the test and training data showed approximately the same dependence on the number of parameters as the classical NN with regularization, confirming that the automatic regularization function of the QNN was effective. In the following discussion, Ry(arctan(x)+π/2) was used as the encoder. §.§ AnsatzNext, we analyzed the impact of ansatz differences on the regression performance of the QNN. The differences between the CX and CZ gates is shown in Fig. <ref>, where the number of qubits is fixed to five and the entangler is fixed to the “linear” arrangement.From the comparison of the ansatzs with the CX and CZ gates, the QNN models with CZ have lower expressibility. Because the observation axis is set to the Z axis (σ_z is used for the decoder), the phase inversion by the CZ gate does not directly change the projection of the Z axis (the Pauli gate based on the basis axis does not change the state, except for the phase). As a result, QNN models with CZ gates are considered to have lower expressibility, particularly when the number of parameters is small. In the following discussion, only CX gates were used as entanglers.Fig. <ref> shows the impact of different entangler structures (Fig. <ref>) on QNN performance.The QNN models with ansatz “linear”, “circular”, and “circular2” show similar performances, while the QNN model with ansatz “circular4” performs significantly worse for a small number of parameters (shallower depths). To investigate the factors contributing to these results, the KL divergences and entanglement entropies of these ansatzs were examined, as shown in Fig. <ref> and Fig. <ref>, respectively. These figures also show the results for the “full” arrangement shown in Fig. <ref> (a).These results indicate a correlation between KL divergence and entanglement entropy, with a larger entanglement entropy indicating a smaller KL divergence. Therefore, an ansatz with larger entanglement has greater expressibility. It can be expected that the entanglement becomes stronger as the number of CXs increases, such as “linear”, “circular” and “circular2”, but it is noticeably weaker for the “full” and “circular4” entanglers. This can be understood based on the following facts: It is known that a “full” entangler has a reduced circuit and is equivalent to an inverse “linear” entangler <cit.> (Fig. <ref> (a)). This implies that entanglement cannot be enhanced by blindly including a large number of CXs, provided that a simple equivalent circuit (reduced circuit) exists. However, it is difficult to determine whether a circuit has a reduced equivalent circuit. Therefore, we optimized each entangler using the circuit optimization function in tket <cit.> and explored a reduced equivalent circuit. The results are summarized in Fig. <ref>. There is a significantly reduced equivalent circuit for “circular4”. In the reduced “circular4” entangler, each qubit has only a CX gate with the bottom qubit, so the entanglement is weak, as can also be seen from the entanglement entropy. In contrast, “circular2” is not significantly simplified, and the entanglement is not notably weak.Figures <ref> and <ref> show the KL divergence and the entanglement entropy for the “linear CZ” entangler, respectively, and indicate that the QNN model with this entangler has less expressibility. 
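The circuit-reduction claim for the “full” entangler can also be checked directly. CNOT-only circuits permute computational basis states without introducing phases, so two such entanglers implement the same unitary exactly when they agree on every bitstring. The sketch below assumes the 2-qubit gates of the “full” arrangement are applied in lexicographic order of qubit pairs (one natural reading of the arrangement in <ref>) and confirms that this entangler coincides with the “linear” chain applied in reverse order.

```python
import numpy as np
from itertools import combinations, product

n = 5

def apply_cnots(bits, cnots):
    # CNOT circuits map basis states to basis states with no phases,
    # so tracking the bitstring through each (control, target) pair suffices.
    b = list(bits)
    for c, t in cnots:
        b[t] ^= b[c]
    return tuple(b)

full = list(combinations(range(n), 2))                  # (0,1), (0,2), ..., (3,4)
linear_reversed = [(i, i + 1) for i in reversed(range(n - 1))]

same = all(apply_cnots(bits, full) == apply_cnots(bits, linear_reversed)
           for bits in product((0, 1), repeat=n))
print(same)                                             # True
```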
These results indicate that KL divergence and entanglement entropy may be able to screen out ansatz with poor expressibility.In this study, there were no large differences in QNN performance among ansatzs with entanglement greater than the “linear” entangler, and therefore, the “linear” entangler was found to provide sufficient entanglement for the QNN model for this problem. This implies that a model with satisfactory performance can be constructed using only 2-qubit operations between neighboring qubits, suggesting that it may be feasible to operate the QNN model on superconducting quantum computers, which are widely used today, in the near future. §.§ Circuit widthThe effect of the number of qubits (circuit width) on the performance of the QNN model is illustrated in Fig. <ref>.Here, the entangler is fixed to the “linear” arrangement. When comparing the RMSE for the training data, the model with twice the number of qubits (w2) had a smaller error than the original model, indicating that its expressibility was improved by increasing the basis dimension. The generalization performance (accuracy for the test data) was also improved by increasing the circuit width and outperformed the classical NN model. Comparing the model with redundant inputs of the explanatory variable (x-x) and the model with redundant inputs (x-x^2), the latter appears to perform slightly better. This is because it prevents basis duplication and efficiently handles a large number of basis functions.§ CONCLUSIONIn this study, we constructed QNN models to predict the melting point of metal oxides by exploring various architectures (encoding methods and entangler arrangements). The explanatory variables should be uniquely converted into rotation angles to obtain good QNN models and avoid overfitting. It was also found that even shallow-depth ansatzs could achieve sufficient expressibility for the present task using sufficiently entangled circuits. It is insufficient to place a large number of CX gates without consideration; it is necessary to set up an entangler that produces entangles in real terms. In this case, KL divergence and entanglement entropy proved to be good indicators. The “linear” entangler was adequate for providing the necessary entanglement for the QNN model for this particular problem. This result indicates that a model with satisfactory performance can be created using only 2-qubit operations between adjacent qubits. The expressibility of a QNN model can be improved by increasing the circuit width (number of qubits). This also improved the generalization performance, outperforming the classical NN model. Most importantly, no overfitting was observed in QNN models with well-designed encoders. A QNN can achieve high generalization performance without hyperparameter tuning and is considered an excellent tool for regression tasks. § CONFLICTS OF INTERESTThe authors declare no conflict of interest.§ SUPPORTING INFORMATIONThe melting point data for the metal oxides and the explanatory variables used in this study are listed in the following. | http://arxiv.org/abs/2310.17935v1 | {
"authors": [
"Hirotoshi Hirai"
],
"categories": [
"quant-ph",
"cond-mat.stat-mech"
],
"primary_category": "quant-ph",
"published": "20231027072136",
"title": "Practical application of quantum neural network to materials informatics: prediction of the melting points of metal oxides"
} |
These two authors contributed [email protected]@xanadu.ai Xanadu, Toronto, ON, M5G2C8, CanadaThese two authors contributed [email protected]@xanadu.ai Xanadu, Toronto, ON, M5G2C8, CanadaXanadu, Toronto, ON, M5G2C8, CanadaVolkswagen AG, Berliner Ring 2, 38440 Wolfsburg, Germany TUM School of Natural Sciences, Technical University of Munich, Garching, GermanyXanadu, Toronto, ON, M5G2C8, CanadaICFO - Institut de Ciéncies Fotòniques, Castelldefels, SpainXanadu, Toronto, ON, M5G2C8, CanadaXanadu, Toronto, ON, M5G2C8, CanadaXanadu, Toronto, ON, M5G2C8, Canada Department of Chemistry, Sungkyunkwan University, Suwon 16419, Republic of KoreaVolkswagen AG, Berliner Ring 2, 38440 Wolfsburg, GermanyVolkswagen AG, Berliner Ring 2, 38440 Wolfsburg, GermanyXanadu, Toronto, ON, M5G2C8, CanadaQuantum algorithms for ground-state energy estimation of chemical systems require a high-quality initial state.However, initial state preparation is commonly either neglected entirely, or assumed to be solved by a simple product state like Hartree-Fock. Even if a nontrivial state is prepared, strong correlations render ground state overlap inadequate for quality assessment.In this work, we address the initial state preparation problem with an end-to-end algorithm that prepares and quantifies the quality of initial states, accomplishing the latter with a new metric – the energy distribution. To be able to prepare more complicated initial states, we introduce an implementation technique for states in the form of a sum of Slater determinants that exhibits significantly better scaling than all prior approaches. We also propose low-precision quantum phase estimation (QPE) for further state quality refinement.The complete algorithm is capable of generating high-quality states for energy estimation, and is shown in select cases to lower the overall estimation cost by several orders of magnitude when compared with the best single product state ansatz.More broadly, the energy distribution picture suggests that the goal of QPE should be reinterpreted as generating improvements compared to the energy of the initial state and other classical estimates, which can still be achieved even if QPE does not project directly onto the ground state. Finally, we show how the energy distribution can help in identifying potential quantum advantage.Initial state preparation for quantum chemistry on quantum computers Juan Miguel Arrazola January 14, 2024 ==================================================================== § INTRODUCTION One of the main contenders for useful applications of quantum computers is the simulation of many-body physics, in particular for quantum chemistry and materials science. Of special interest is the determination of ground-state energies, which have broad application <cit.>. Many different quantum methods for ground-state energy determination have been proposed, ranging from quantum phase estimation (QPE) and its variants <cit.>, to more recent developments <cit.>. We refer to these methods as quantum energy estimation algorithms. Each of these methods requires a high quality initial state, where quality is traditionally understood in terms of the overlap with the ground state. The quality of the initial state directly impacts the performance and runtime of any energy estimation algorithm, making it crucial to develop advanced methods for initial state preparation. 
While decisive for the success of quantum algorithms, initial state preparation is often treated as separate from the energy estimation algorithm, and has not received as much attention as other aspects of quantum algorithms in the literature.A common approach to preparing an initial state is to take an approximate wavefunction from a traditional quantum chemistry method and encode it on a quantum computer.The Hartree-Fock state is the simplest and computationally cheapest choice.Even though it has been found to have high overlap with the ground state in small molecules <cit.>, it is seriously lacking for strongly-correlated systems <cit.>, such as molecules with stretched bonds <cit.> and more complex molecules with transition-metal centers <cit.>.Beyond the Hartree-Fock state, a variety of methods have been proposed to encode sums of Slater determinants (SOS) <cit.> or matrix-product states (MPS) <cit.>. Ground-state energy estimation was explored using SOS states obtained from configuration interaction singles and doubles <cit.>, active space methods <cit.>, and selective configuration interaction methods <cit.>. However, the performance of these approaches has mostly not been evaluated or compared beyond small, uncorrelated molecules. Other approaches that can be categorized as quantum heuristic methods have also been considered widely. While adiabatic state preparation is likely the most well-known <cit.>, other heuristics include variational methods <cit.> and quantum imaginary time evolution <cit.>.While promising, to date most of these methods have been demonstrated only for small molecules <cit.>, and suffer from various shortcomings such as long runtimes, expensive classical optimization, or costly state tomography. More broadly, the absence of any guarantee of their success in state preparation is problematic.The variety of state preparation approaches raises the question of which method is best suited to which situations. Furthermore, it is not even clear how one should compare different possible state preparation schemes for actual problems of interest. For example, in general the overlap with the ground state cannot serve as a practical metric for comparison or quality assessment since we typically do not know what the ground state is. These issues also make it difficult to quantify the total runtime of quantum algorithms and to understand their actual potential to outperform classical methods. Overall, there is a need for a framework that encompasses the most powerful methods for initial state preparation, provides tools to evaluate their quality, and allows us to make informed statements about the prospects for quantum advantage.In this work, we present a complete algorithm for preparing high-quality states for quantum energy estimation. Our state preparation algorithm begins with using quantum chemistry methods to obtain classical descriptions of approximate ground-state wavefunctions, either in SOS or MPS form. We then introduce a novel quantum algorithm for preparing SOS states on a quantum computer, with a better cost compared to all previous methods. This is complemented with resource estimation formulas quantifying the number of qubits and gates needed for implementation, both for our new SOS algorithm and for previously developed techniques for implementing MPS states <cit.>. To assess and compare the quality of the several candidate states in hand, we develop a methodology that works based on their associated energy distributions. 
These are projections of the candidate wavefunction on the eigenspectrum of the system Hamiltonian: while obtaining them exactly is more difficult than computing the overlap with the ground state, they can be approximated – a task for which we propose new classical and quantum methods. Once the state quality is assessed and the chosen state is implemented, it can also be further refined with the use of a quantum filtering algorithm. On the basis of our analyses, we find that coarse QPE with post-selection – that is, QPE performed with low precision – generally outperforms other filtering methods. The concept of the energy distribution has utility beyond state quality assessment. First, it suggests an alternative interpretation of quantum energy estimation algorithms – not as a means of projecting onto the ground state, but as a way to improve classical estimates of the ground-state energy. Second, when QPE is performed, the energy distribution can help address the problem of the contribution of higher energy states towards low outcome values – what we call the leakage problem in QPE. We show that this problem can be diagnosed when the energy distribution is at hand, and can furthermore be mitigated through quantum refining mentioned above. Finally, the energy distribution picture can be a guide towards potential quantum advantage: the amount of low-energy support of the initial state below a classical target energy estimate can be a proxy for the likelihood that quantum energy estimation algorithms can obtain lower energy estimates than the classical reference. With this in mind, we introduce the concept of Goldilocks problems: energy estimation tasks where the initial state is neither too good (where classical methods are sufficient) nor too bad (where even quantum algorithms fail). Our energy distribution techniques can be used to search for such problems, which are candidates for quantum advantage, as we illustrate with numerical examples.All of the subroutines discussed above combine to give an initial state preparation algorithm, which can be applied for quantum energy estimation in any quantum chemical system. The complete algorithm, illustrated in <ref> consists of the following steps: * Classical computation of a candidate initial state * Converting the candidate initial state to either SOS or MPS form* Assessing the quality of different candidates through the energy distribution* Implementing the resulting state into the quantum computer* Quantum refining of the state with an energy filtering method* With the implemented state, execute QPE or any other quantum energy estimation algorithm.Using our algorithm, we find that for nontrivial problems of interest in quantum chemistry such as estimating ground-state energies of iron-sulfur complexes, by improving state quality we can reduce the total algorithm cost by several orders of magnitude, compared to using a single product state. The rest of the paper is structured as follows. In <ref>, we briefly review the traditional quantum chemistry methods we use for calculating candidate initial states. In the following <ref>, we describe our state-of-the-art algorithm for implementing such initial states in their SOS form, as well as a separate way of implementing a state in the MPS form, providing resource estimations for both approaches. To assess and compare our candidate states, in <ref> we introduce the concept of the energy distribution, pioneering new methods to estimate the quality of candidate states. 
As a case study, we apply our new techniques to address the QPE leakage problem. Once a state is prepared, assessed and implemented, we can use a quantum algorithm to refine it by filtering out high-energy contributions, as we discuss in <ref>. Having described each step of our algorithm individually, in <ref> we look at the entire pipeline, and showcase numerical experiments that demonstrate the state preparation algorithm for different molecules.§ OBTAINING CLASSICAL DESCRIPTIONS OF INITIAL STATESThe initial state preparation algorithm starts from executing a traditional quantum chemistry method to generate a candidate wavefunction. It is likely that the choice of which method to run is highly situation-dependent, so we consider a wide variety of techniques, which we now briefly describe. We also review the concept of electronic correlation and its implications for choosing a quantum chemistry method for a given molecule. Experts are welcome to skip this section, but concepts from it will be employed throughout our paper.Throughout this paper, we work in the second quantized language and represent states with Fock occupation number vectors. By a Slater determinant we mean a many-particle product state built by distributing the available n_e electrons over 2N single-particle spin-orbitals. We will use either spatial orbital occupation numbers n_i = 0, α, β, 2 or spin-orbital occupation numbers n_i = 0, 1, typically ordered according to their Hartree-Fock energy (increasing from left to right): the choice of whether orbital or spin-orbital occupations are used will be clear from context. For example, a generic S_z = 0 Hartree-Fock state is written as |222...200...⟩ in the first scheme and |1111...100...⟩ in the second scheme, respectively. More broadly, a generic Slater determinant reads|ψ_Slater⟩ = | n_1, n_2, …, n_N⟩ or | n_1, n_2, …, n_2N⟩.§.§ Quantum chemistry methods for obtaining approximate ground states The strategies we consider can be split into two groups. The first includes methods in the configuration interaction family, building the wavefunction as a superposition of Slater determinants (SOS). The second includes just one approach: representing the wavefunction using the matrix product state (MPS) ansatz, and variationally optimizing it over a series of sweeps using the density matrix renormalization group algorithm (DMRG) <cit.>. We focus first on methods based directly on the basis of Slater determinants. The configuration interaction with single and double excitations (CISD) aims to prepare a wavefunction of the form|ψ_CISD⟩ = |HF⟩ + ∑_i c_i |S_i⟩ + ∑_i d_i |D_i⟩,with singly and doubly excited determinants S_i, D_i that parametrized by CI coefficients c_i, d_i. The coupled cluster with singles and doubles (CCSD) technique instead builds a wavefunction of the form|ψ_CCSD⟩ = e^T̂_1 + T̂_2|HF⟩,where the excitation operators T̂_1, T̂_2 are single and double excitation operators parametrized by amplitudes t^(1,2) that only connect occupied orbitals to virtual ones. Both methods are built on top of the Hartree-Fock reference state |HF⟩ and are hence termed single-reference methods.On the other hand, instead of restricting the many-body Hilbert space in terms of excitations, we can do this at the level of the single-particle basis: this is the nature of the complete active space configuration interaction (CASCI) method. With N spatial orbitals and 1 ≤ l < L ≤ N, we fix the first l-1 orbitals to be fully occupied and set the final N-L-1 orbitals to be unoccupied. 
Then the wavefunction takes the form|ψ_CASCI⟩ =∑_ n_l, n_l+1, ... n_L c_n_l, n_l+1, ..., n_L| 2, ... 2, n_l, n_l+1, ..., n_L, 0, ..., 0 ⟩,parametrized by c_n_l, n_l+1, ..., n_L. CASCI executes exact diagonalization within the active space of orbitals {l, l+1, ..., L}. While this lets CASCI produce complex multireference wavefunctions that can have many determinants with similar weights, there are two limitations. First, the choice of active orbitals is widely acknowledged to be challenging <cit.> (although automated approaches <cit.> are gaining popularity). Second, exact diagonalization scales too prohibitively to be useful in nontrivial molecules. The impact of frozen orbitals on the CASCI wavefunction could be partially taken into account with multireference perturbation theory (MRPT), which is applied on top of a CASCI calculation. The perturbative correction modifies the coefficients c_n_l, ... n_L of the CASCI wavefunction, typically using standard second-order (Moller-Plesset) perturbation theory. In practice, MRPT is usually much better at improving the energy estimate than at improving the wavefunction: largely used for recovering dynamic correlation energy, it is not capable of adding multireference character from the non-active space states. Selective configuration interaction (SCI) methods are inspired by the idea that for many wavefunctions of interest, written in the full basis as|ψ_SCI⟩ = ∑_n_1, n_2, ... n_N c_n_1, n_2, ..., n_N| n_1, n_2, ..., n_N ⟩,most coefficients c_n_1, n_2, ..., n_N vanish. The goal is to identify an efficient way of searching for these non-zero coefficients <cit.>. We focus on the recently developed semistochastic heat-bath configuration interaction (SHCI) <cit.>, which employs the relatively simple criterion max_i(H_ki c_i) > ϵ_1: here H_ki is the Hamiltonian matrix element between determinants in the variational basis c_i and a candidate external determinant c_k, and ϵ_1 a user-chosen cutoff.Finally, we turn to the second group and summarize the DMRG approach for obtaining wavefuctions in MPS form. DMRG has proven to be a reliable, robust, and efficient method for constructing approximate ground states for a wide variety of molecules <cit.>. An MPS can be seen as an efficient way of factorizing the general N-tensor coefficient c_n_1, ..., n_N of a Slater determinant series into a product of matrices, whose internal dimension is limited by the bond dimension χ <cit.>. The MPS wavefunction can be written as|ψ_MPS⟩ = ∑_α_1,…,α_N-1n_1, …, n_N A^n_1_1;α_1A^ n_2_2;α_1 α_2… A^ n_N_N;α_N-1|n_1 , n_2 , … , n_N⟩,n_i ∈{0, α, β, 2 }, α_i ≤χ.This factorization scales polynomially with system size for a fixed bond dimension <cit.>.Combined with the DMRG algorithm, it gives a wavefunction-based variational approach that provably converges to the exact solution in the limit χ→∞.To apply the DMRG method to molecules, which exhibit inherently nonlocal interactions between molecular orbitals and do not resemble the spin chains that DMRG was originally developed for, orbitals must be arranged along a one-dimensional chain. Ideally, the arrangement is such that it minimizes the amount of long-range nonzero molecular integrals. The standard choice is to arrange the molecular orbitals according to their Hartree-Fock energy; more sophisticated reordering schemes are also considered <cit.>. The diversity of wavefunction forms resulting from different methods means that in practice it is difficult to compare and evaluate them. 
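To make the factorization underlying the MPS ansatz concrete, the following numpy sketch compresses a toy CI coefficient tensor (randomly generated, with local dimension 4 for the occupations {0, α, β, 2}) into MPS form by a left-to-right sweep of singular value decompositions truncated at bond dimension χ. A random tensor is, of course, far less compressible than a physical ground state, so the reconstruction overlap printed at the end is well below one.

```python
import numpy as np

# Toy CI coefficient tensor c[n_1, ..., n_N] with local dimension 4 (occupations 0, a, b, 2),
# compressed into MPS form by a left-to-right sweep of truncated SVDs.
N, d, chi = 6, 4, 8
rng = np.random.default_rng(1)
c = rng.normal(size=(d,) * N)
c /= np.linalg.norm(c)

tensors, left = [], 1
mat = c.reshape(left * d, -1)
for site in range(N - 1):
    u, s, vh = np.linalg.svd(mat, full_matrices=False)
    keep = min(chi, s.size)                       # truncate to bond dimension chi
    tensors.append(u[:, :keep].reshape(left, d, keep))
    mat = (np.diag(s[:keep]) @ vh[:keep]).reshape(keep * d, -1)
    left = keep
tensors.append(mat.reshape(left, d, 1))
print([A.shape for A in tensors])                 # (left bond, physical, right bond)

# Contract the MPS back and compare with the original tensor: a random tensor is poorly
# compressible, so this overlap is well below 1; physical ground states fare much better.
psi = tensors[0]
for A in tensors[1:]:
    psi = np.tensordot(psi, A, axes=([psi.ndim - 1], [0]))
print(abs(np.vdot(psi.reshape(-1), c.reshape(-1))))
```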
On the other hand, this variety suggests that there are trade-offs that could be exploited depending on the molecule being considered. The strengths and weaknesses of different approaches are largely determined by the amount and type of correlation present in the system, as discuss next. §.§ Electronic correlationsCorrelation energy is the portion of total system energy that is not accounted for by the Hartree-Fock ansatz. It is due to Coulomb interactions between electrons.Strong correlation necessitates the use of additional determinants in the many-body wavefunction for an accurate description. Thus, depending on the amount of correlation and its type, different methods might be preferable.Correlation energy is usually partitioned into static and dynamic types.Dynamic correlation is a consequence of electrons avoiding each other due to Coulomb repulsion. When electrons approach each other in real space – as when sharing a spatial orbital – their wavefunctions acquire non-analytic cusps due to the Coulomb potential divergence. Resolving these cusps requires a large basis set and many determinants. Dynamic correlation is thus associated with the many-body wavefunction being described by one dominant determinant with a large weight, together with many small-weight contributions. While the weights are small, the determinants are numerous, resulting in large energy errors. Fast single-reference methods such as CISD and CCSD are usually capable of recovering dynamic correlation energy, as they are able to work with a larger basis set and their single-reference nature is conducive to the task.By contrast, scale limitations of CASCI, DMRG and SHCI make them worse at recovering this type of correlation. By contrast, static correlation arises in the presence of nearly-degenerate eigenstates: multiple determinants with roughly similar weights are needed for an accurate description of the ground state wavefunction.Typical situations where static correlation arises are nonequilibrium geometries, low-spin states of open-shell molecules (spin state degeneracy), excited states and molecules containing transition metal atoms (due to high degeneracy of d-type orbitals).In such situations a single reference, such as the Hartree-Fock state or CISD and CCSD, will not be a good starting point – a multireference method such as CASCI, DMRG or SHCI is needed.In practice, CASCI is strongly limited by the active space size; and while MPRT improves the CASCI energy appreciably through adding dynamic correlation, it provides only minor improvements to the wavefunction itself, which is the object of interest in quantum algorithms. This leaves DMRG and SHCI as the leading contenders. DMRG can boast polynomially efficient representation of even strongly multireference states and relatively straightforward convergence with bond dimension; by contrast, SHCI's lower computational demands make it easier to run calculations for larger spaces, without sacrificing accuracy even for correlated systems such as the chromium dimer <cit.>. § EFFICIENT ANSATZ IMPLEMENTATION ON QUANTUM COMPUTERS In <ref>, we detailed how approximate ground states can be found using classical computational methods and expressed in two standardized forms: SOS and MPS. In this section, we detail how SOS and MPS states can be implemented on a quantum computer and estimate the number of qubits and gates required for these tasks. 
Typically the cost of encoding classical states is lower than the cost of the energy estimation algorithm, even for sophisticated states with many Slater determinants or large bond dimension. This results in considerable runtime reductions of the full algorithm by lowering the number of repetitions needed to achieve a target accuracy, while incurring only small increases in the cost of each independent run.

§.§ Sum of Slater determinants (SOS)

We employ the formalism of second quantization, but the implementation method is general and can also be used for algorithms employing a first-quantization representation <cit.>. The goal is to prepare the normalized state

|ψ⟩ = ∑_i=1^D α_i |ν_i⟩,

where α_i are the given amplitudes and |ν_i⟩ = |n_i,1 n_i,2 ⋯ n_i,2N⟩ are states of 2N qubits. The bits n_i,j denote the occupation number of spin-orbital j in the i-th Slater determinant |ν_i⟩. We use N to denote the total number of spatial orbitals, each supporting one spin-up and one spin-down particle. We are interested in cases where the number of Slater determinants D is much smaller than 2^2N, which is the case in practice. Therefore, the problem is to prepare a superposition of relatively few basis states picked from a very large Hilbert space. The gate cost of the algorithm will be measured in terms of the number of non-Clifford Toffoli gates, which is a standard complexity measure used in the literature.

There has been previous work in this direction. Ref. <cit.> proposes an iterative generation of the state using (2N-1)(D-1) Toffoli gates and 2N-1 ancilla qubits (when D>1). Other algorithms include <cit.>, but have Toffoli complexity even higher than O(ND), or potentially exponential in N <cit.>. Instead, we present an algorithm with an asymptotic Toffoli cost of O(D log D), where log is in base two throughout this manuscript. This is a considerable improvement since the number of Slater determinants D is at most the dimension of the full Hilbert space, and so it is often the case that log D ≪ N: <ref> makes this comparison explicit for a few different model systems and molecules studied in this paper and elsewhere. Notice that the advantage is even more explicit for larger systems; for example, with 2N ∼ 400 spin-orbitals, our algorithm is an order of magnitude more efficient in Toffoli cost as long as D < 2^40 ∼ 10^12.

We aim to prepare the state in <ref>, where |ν_i⟩ represents the occupation bitstrings of length 2N mentioned above. Our main technical result is a mapping from the bitstrings ν_i of 2N bits that identify each Slater determinant to more compact and unique bitstrings b_i of only O(log D) bits. The following lemma formalizes the compression scheme. We will assume that D is a power of two for convenience; otherwise, log D should be replaced with its ceiling. Given as input a set {ν_i} of bitstrings representing unique Slater determinants, there is a classical algorithm with complexity O(tD^2), where t ≤ min(2N,D) - 2log D + 1, that outputs substrings ν̃_i of ν_i and 2log D-1 bitstrings u_k of length O(D), such that the bitstrings b_i := (u_1 · ν̃_i, …, u_2log D-1 · ν̃_i), presented as column vectors in

[ u_1^T; ⋮; u_2log D-1^T ] [ ν̃_1 ⋯ ν̃_D ] = [ b_1 ⋯ b_D ],

are mutually distinct, i.e., b_i ≠ b_j for all i ≠ j.

The proof can be found in <ref>. The algorithm to implement the SOS state in <ref> is described below.
We employ three registers: the system register, where we wish to prepare the desired state, and two ancilla registers: an enumeration register with log D qubits and an identification register with 2log D-1 qubits.

Quantum algorithm for encoding SOS states

* Prepare the state
∑_i=1^D α_i |0⟩|i⟩|0⟩,
in the enumeration register using the Quantum Read-Only Memory (QROM) state preparation method in <cit.>.

* Use a QROM oracle O of Toffoli cost D, as in Ref. <cit.>, that implements the transformation O |0⟩|i⟩|0⟩ = |ν_i⟩|i⟩|0⟩. This results in the state
∑_i=1^D α_i |ν_i⟩|i⟩|0⟩.

* Using the output bitstrings u_k from <ref>, do the following. If the j-th bit of u_1 is equal to 1, apply a CNOT gate between the system register and the identification register, controlled on the j-th qubit of |ν_i⟩. This results in the state
∑_i=1^D α_i |ν_i⟩|i⟩(|u_1·ν̃_i⟩|0⟩) = ∑_i=1^D α_i |ν_i⟩|i⟩(|b_i,1⟩|0⟩).
Notice that the bits in u_1 are matched with the bits of the substring ν̃_i obtained from <ref>.

* Repeat the above step for all u_k. This results in the state
∑_i=1^D α_i |ν_i⟩|i⟩|b_i⟩,
which now contains the unique compact identifier b_i for the Slater determinant |ν_i⟩.

* Using multi-controlled operations, apply the transformation |i⟩→|0⟩ to the first ancilla register conditioned on each unique b_i. This results in the state
∑_i=1^D α_i |ν_i⟩|0⟩|b_i⟩.

* The final step is to uncompute the sequence of CNOTs used to prepare the state |b_i⟩ in steps 3 and 4. This leads to the final output
∑_i=1^D α_i |ν_i⟩|0⟩|0⟩,
which contains the target state in the system register, disentangled from all other registers, as desired.

Important steps of the algorithm are illustrated in <ref>. Taking into account the two usages of QROM, for preparing ∑_i=1^D α_i |i⟩ and in <ref>, the overall Toffoli cost is dominated by (2log D-2)D + 2^log D+1 + D < (2log D+3)D = O(D log D). The overall additional qubit cost due to the use of ancillas is 4log D-3+log D = 5log D-3 = O(log D).

One could trade off Toffolis with qubits, within the same volume cost of O((log D)^2 D). In most cases, this means using the variant of QROM <cit.>, also called QROAM in <cit.>. Importantly, this variant allows using uninitialized qubits for this trade-off. The trade-off leads to a Toffoli cost of min(2√(32ND),D) + (7log D+2√(32log D))√(D), a clean qubit cost of (2log D-1)√(D), and an additional uninitialized qubit cost of √(32ND). We explain this in detail in <ref>, along with comments on how one could lower the expected Toffoli cost by combining our strategy with <cit.>.

§.§ Matrix-product state (MPS)

Here we discuss how to implement initial states expressed in the MPS form. While it is always possible to transform an MPS into an SOS formulation, direct implementation of an MPS can be beneficial in certain cases. We consider mainly the method introduced in Ref. <cit.> and estimate its total Toffoli cost. We note that there are many variations of this technique in the literature <cit.>, as well as newer versions requiring lower-depth circuits <cit.> for short-range-correlated MPSs. First, we give a quick review of the method in Ref. <cit.>. We start with the MPS form shown in <ref> and use standard graphical notation for representing the MPS.
We denote its tensors as A^n_j_j;α_j-1α_j =[baseline=1ex] [black, fill=red!10,thick] (0,0) rectangle (0.5,0.5) node[pos=.5] A_j; [thick] (0.,0.25) – (-0.25,0.25) ;(-0.65,0.25) node α_j-1; [thick] (0.25,0.5) – (0.25,0.75) ;(0.5,0.8) node n_j; [thick] (0.5,0.25) – (0.75,0.25) ;(1.,0.25) node α_j;.The physical index n_j runs over d values, where d is the local Hilbert space dimension. The auxiliary indices α_j run over χ_j values, where χ_j is called the bond dimension, which generally may be different for each index j. For the implementation, the MPS is turned into its left-canonical form:[black, fill=red!10,thick] (-.75,0) rectangle (-.25,0.5) node[pos=.5] A_1; [thick] (-0.5,0.5) – (-0.5,0.75) ;(-0.25,0.8) node n_1; [thick] (0.,0.25) – (-0.25,0.25) ;[black, fill=red!10,thick] (0,0) rectangle (0.5,0.5) node[pos=.5] A_2; [thick] (0.25,0.5) – (0.25,0.75) ;(0.5,0.8) node n_2; [thick] (0.5,0.25) – (0.75,0.25) ;[black, fill=red!10,thick] (0.75,0) rectangle (1.25,0.5) node[pos=.5] A_3; [thick] (1.,0.5) – (1.,0.75) ;(1.25,0.8) node n_3; [thick] (1.25,0.25) – (1.5,0.25) ; (2.15,.25) node …; [thick] (2.75,0.25) – (3.,0.25) ;[black, fill=red!10,thick] (3.,0) rectangle (3.5,0.5) node[pos=.5] A_N; [thick] (3.25,0.5) – (3.25,0.75) ;(3.5,0.8) node n_N; .It means that for all j>1, we have:∑_α_j,n_j A_j;α_j-1α_j^n_j(A_j;α_j-1'α_j^n_j)^* = δ_α_j-1α_j-1',or diagrammatically:[baseline=4ex] [thick] (0.5,0.25) – (0.75,0.25) ; [black, fill=red!10,thick] (0.75,0) rectangle (1.25,0.5) node[pos=.5] A_j; [thick] (1.,0.5) – (1.,1) ; [thick] (1.25,0.25) – (1.5,0.25) ;[thick] (0.5,1.25) – (0.75,1.25) ; [black, fill=red!10,thick] (0.75,1.) rectangle (1.25,1.5) node[pos=.5] A_j; [thick] (1.25,1.25) – (1.5,1.25) ; [thick] (1.5,0.25) – (1.5,1.25) ; = [baseline=4ex] [thick] (1.25,0.25) – (1.5,0.25) ; [thick] (1.25,1.25) – (1.5,1.25) ; [thick] (1.5,0.25) – (1.5,1.25) ;.This is in general possible using singular value decomposition of tensors <cit.>. A note regarding notation: in the above equations and everywhere else, the summation over left and right auxiliary indices of the leftmost and the rightmost tensors, respectively, can simply be dropped. This is equivalent to setting χ_0,χ_N+1=0.The implementation works by first observing that the above tensors of the MPS, owing to the left-canonical form, can be directly used to define unitaries, which we denote as G, that are used in a quantum circuit for preparing the MPS:G[j]_α_jn_j , α_j-1 0 = A_j;α_j-1α_j^n_j,[baseline=1ex] [black, fill=red!10,thick] (0,0) rectangle (0.5,0.5) node[pos=.5] A_j; [thick] (0.,0.25) – (-0.25,0.25) ;(-0.65,0.25) node α_j-1; [thick] (0.25,0.5) – (0.25,0.75) ;(0.5,0.8) node n_j; [thick] (0.5,0.25) – (0.75,0.25) ;(1.,0.25) node α_j; →[baseline=1ex] [black, fill=red!10,thick] (0,0) rectangle (0.8,0.8) node[pos=.5] G[j]; [thick] (-0.25,0.2) – (0.,0.2) ;(-0.7,0.1) node α_j-1; [thick] (-0.25,0.6) – (0.,0.6) ;(-0.55,.6) node 0; [thick] (0.8,0.2) – (1.05,0.2) ;(1.3,0.1) node α_j; [thick] (0.8,0.6) – (1.05,0.6) ;(1.3,0.6) node n_j;.Each unitary acts on a d-level system composed of ⌈log(d)⌉ qubits as well as ⌈logχ⌉ ancillae, where we remind the reader that log is in base two throughout this manuscript. For example, for fermionic systems, d=4 and two qubits are required for each spatial orbital of the system. Furthermore, the incoming physical index for the unitary is set to be equal to 0 and thus the above relation does not specify all the elements of G[j]. This is fine as long as the rest of the elements are chosen so that G[j] remains unitary. 
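To make this step concrete, a minimal numerical sketch of how a left-canonical tensor can be completed into such a unitary is given below (our own illustrative code, assuming for simplicity a uniform bond dimension χ so that G[j] is a square dχ × dχ matrix; any orthonormal completion of the unused columns is acceptable):

```python
import numpy as np
from scipy.linalg import null_space

def tensor_to_unitary(A):
    """Embed a left-canonical MPS tensor A[n, a_prev, a_next] into a unitary G[j].

    Assumes a uniform bond dimension chi = A.shape[1] = A.shape[2], so that G is
    a square (d*chi) x (d*chi) matrix.  The columns labelled by the incoming
    physical index n_in = 0 reproduce the tensor; the remaining columns are an
    arbitrary orthonormal completion, never reached when the physical qubits
    enter in |0>.
    """
    d, chi, chi_out = A.shape
    V = A.transpose(0, 2, 1).reshape(d * chi_out, chi)   # isometry: V^dag V = 1
    completion = null_space(V.conj().T)                  # orthonormal complement of the columns of V
    G = np.hstack([V, completion])
    assert np.allclose(G.conj().T @ G, np.eye(d * chi_out))
    return G
```

For fermionic systems with d = 4, each such unitary then acts on the two qubits of a spatial orbital together with the ⌈log χ⌉ ancilla qubits.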
A quantum circuit that implements the desired MPS and works with these unitaries is shown in <ref>. Note that an auxiliary register of a size, which we have denoted collectively as ⌈logχ⌉, is required to reconstruct the MPS and that we are schematically moving it around to act with unitaries on this register and different qudits in the system.Also, note that for the first and the last unitaries the input and output auxiliary registers also have value 0. For the above circuit, we need to synthesize the unitaries G[j], for which we use the method in Ref. <cit.>.Details of how the synthesis can be performed is discussed in <ref>. There, it is shown that each G[j] imposes a Toffoli cost of:χ_j-1[ 8χ_j d + (b+1) log(χ_j d) ],where b is the number of digits for storing the angle for implementing single qubit rotations required in the synthesis process. Assuming a number N of qudits in the physical system, the total cost will be the sum of the above over all j; asymptotically and with using χ selectively for all bond dimensions, the dominant Toffoli cost can be written as O(N χ^2). Using trade-off schemes of <cit.>, we can show that it is also possible to use O(N χ^3/2) Toffolis for implementation of the MPS. §.§ Discussion of the implementation methods As with many questions concerning initial state preparation, the choice of SOS vs MPS is highly context-dependent. The polynomial scaling (with system size) of the number of parameters in an MPS may provide it with a decisive advantage for strongly multireference systems; while in single-reference molecules (or ones not being highly multireference), the reduced cost of implementing the SOS form might be preferable. A concrete comparison between these two methods is done in <ref>. Either way, these two forms are inter-convertible, and we have outlined advanced quantum algorithms for implementing states of either form. This includes a novel method for SOS states with lower asymptotic cost than the previous state of the art. Crucially, the cost of implementing a classical state on a quantum computer is typically a lot lower than that of the full energy estimation algorithm. This means there is a large budget available for implementing sophisticated states with many Slater determinants or large enough bond dimension that better approximate the true ground state compared to simpler approaches like the Hartree-Fock state. This can lead to considerable runtime reductions for the entire quantum algorithm. § ENERGY DISTRIBUTION OF THE INITIAL STATE Using the overlap with the ground state as a way to assess the quality of an initial state is a challenging task, especially in strongly correlated systems. This is largely because the true ground state is generally not known: after all, this is the problem we are attempting to solve.Instead, we propose a new way to assess state quality through the use of the state's associated energy distribution. In this section, we first define energy distributions in precise terms. We then discuss how the energy distribution picture can change our view of performing quantum phase estimation (QPE). Our main technical contribution is a description of methods for approximating energy distributions, as well as formalizing how the energy distribution picture can be used to predict statistics of QPE outcomes. 
Finally, we detail how the QPE leakage problem can be seen through the energy distribution of the initial state.We define the energy distribution of the state |ψ⟩ with respect to the Hamiltonian H as:P(E) = ∑_n |⟨ E_n| ψ⟩|^2 f_η(E_n-E),where E_n,|E_n⟩ are respectively eigenvalues and eigenstates of H, and f_η is a kernel, for example Gaussian or Lorentzian, with width η, a copy of which is centered at each of the eigenvalues. The limit of η→ 0 corresponds to a discrete distribution — the actual distribution of the state's overlaps with the Hamiltonian spectrum. With η≠ 0, each energy level is broadened and the result is a continuous distribution. In practice, it is hard to access the true η=0 distribution, but much can be inferred about state quality from the broadened distribution. We use the energy distribution in a number of applications. Most importantly, we formulate a simple criterion based on the energy distribution for assessing the quality of initial states. Suppose we are given a number of candidate states with similar energies, i.e., with similar expectation values of H. We can compare the quality of the states by focusing on the left-side tails of the states' energy distributions. Intuitively, whichever state has more weight extended to lower energies is a better candidate, as it provides higher probability for obtaining a low-energy estimate. We will make this statement more precise in <ref>.§.§ Quantum phase estimation through the lens of energy distributionsIn many of the quantum routines for quantum energy estimation, the energy distribution has a very close relation with the distribution of outcomes: for QPE, for example, the outcomes are roughly sampled from the energy distribution of the initial state. More precisely, in a QPE measurement with k phase digits, the probability of an integer outcome x_m (that can be interpreted as an estimated energy of 2^-k x_m) in the phase register reads <cit.>:∑_n |⟨ E_n|ψ⟩|^21/2^2k(sin^2(π 2^k E_n)/sin^2(π [E_n - x_m/2^k ])).Notice that in the above, we have assumed a normalization of the Hamiltonian such that 0 ≤ E_n ≤ 1 for all n. We similarly assume that the integer outcome x_m satisfies 0 ≤ x_m < 2^k. The discrete QPE kernel 1/2^2k(sin^2(π 2^k E_n)/sin^2(π [E_n - x_m/2^k ])) which appears in the above relation, broadens each energy level and is maximized when the integer x_m is closest to 2^k E_n for each level; thus, performing QPE can be thought of as sampling from a discrete distribution, which is obtained by spreading the weight of each energy level by the discrete QPE kernel. As the number of digits k increases, the sampling gets closer to sampling from the actual underlying distribution. The traditional viewpoint for QPE is that the algorithm must be repeated enough times so that there is a high probability of sampling the ground-state energy (up to the allowed precision). This perspective tacitly implies that all samples other than the ground-state energy should be discarded as useless.Instead, we recognize that the goal of any algorithm, whether classical or quantum, is to provide the best possible energy estimates. Ideally this is precisely the ground-state energy, but that may be too ambitious in practice, especially for large systems with strong electronic correlations. The energy distribution picture presents an alternative where QPE is viewed as a method to improve the energy estimate associated with the initial state. 
Intuitively, since the average of the energy distribution is precisely the classical estimate, QPE can equally sample energies that are higher or lower; even a few repetitions can thus lead to better estimates. Generating a larger number of samples increases the probability of observing more dramatic improvements, with the ultimate goal of obtaining precisely the ground-state energy. Employing quantum algorithms is advantageous whenever there is a sizeable probability of obtaining an energy estimate that is lower than that of any classical method, including more powerful ones than those used for the initial state. This concept is illustrated in <ref>.

The energy distribution picture is useful beyond helping re-interpret QPE. The QPE kernel discussed above has long algebraic tails on the two sides of each energy level: as a result, it is in general possible to obtain an outcome indicating a false low energy that actually originates from the long tail of much higher energy levels. Note that this is contrary to an outcome arising from the actual weight in its vicinity; we call this phenomenon the QPE leakage problem. In fact, it is in general possible for the observed outcome to lie below the ground-state energy of the Hamiltonian, rendering it unphysical. We will discuss the problem in detail, along with ways it can be diagnosed and avoided by employing the energy distribution, in <ref>.

§.§ Approximating energy distributions

We provide three different methods for approximating the energy distribution of an initial state |ψ⟩ with respect to a Hamiltonian H. Two of them are classical methods and one is quantum.

§.§.§ Series expansion

This method employs moments of energy (expectation values of powers of H) to obtain a series expansion for the energy distribution. We consider the Edgeworth series and the Gram-Charlier series, which both approximate a distribution as a Gaussian multiplied by different orders of the Hermite polynomials. The coefficients in the series can be written in terms of the moments of the distribution, i.e., the expectation values of powers of the Hamiltonian ⟨ψ| H^n |ψ⟩ := ⟨E^n⟩. The lowest-order approximation is a Gaussian distribution with a variance proportional to that of the initial state, namely ⟨E^2⟩ - ⟨E⟩^2. This method works best for distributions that are nearly Gaussian. The Edgeworth and Gram-Charlier series have identical terms: the only difference is that the terms in an Edgeworth series are arranged in a way that makes the series a true asymptotic series <cit.>.

The Gram-Charlier series expansion for the energy distribution P(E) can be written as

P̃(E) = exp(-E^2/2)/√(2π) [1 + ∑_n=3^∞ (-1)^n c_n He_n(E)],

where He_n(E) is the Hermite polynomial in the probabilist's convention, defined as

He_n(E) = (-1)^n exp(E^2/2) d^n/dE^n exp(-E^2/2).

The expansion in the form in <ref> applies to a distribution function with zero mean and unit variance; any distribution can be cast in this form by translating and rescaling. The coefficients of the Gram-Charlier expansion are given by

c_n = (-1)^n/n! ∫ dE P(E) He_n(E).

The coefficients can be written in terms of the moments of the distribution μ_n = ∫ dE P(E) E^n = ⟨E^n⟩, once the Hermite polynomials are expanded. A list of the coefficients is given in <ref>. The Edgeworth series is obtained by regrouping the same terms of the Gram-Charlier series:

P̃(E) = exp(-E^2/2)/√(2π) [1 + κ_3/6 He_3(E) + (κ_4/24 He_4(E) + κ_3^2/72 He_6(E)) + …],

where κ_n is the n-th cumulant of the distribution.
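As a simple illustration of how such a series can be assembled in practice, the sketch below standardizes a toy spectrum and builds the Gram-Charlier density from its low-order moments (the function and variable names are our own; in a realistic calculation only the moments ⟨E^n⟩, e.g. from MPO contractions, would be available):

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval  # probabilists' Hermite He_n

def gram_charlier(energies, weights, orders=(3, 4, 5, 6)):
    """Gram-Charlier approximation of an energy distribution.

    energies, weights: toy eigenvalues E_n and overlaps |<E_n|psi>|^2, used here
    only to generate the moments.  Returns a callable density on the original
    (unstandardized) energy axis.
    """
    mu = np.sum(weights * energies)                          # <E>
    sigma = np.sqrt(np.sum(weights * (energies - mu) ** 2))  # sqrt(<E^2> - <E>^2)
    x_n = (energies - mu) / sigma                            # standardized levels
    # c_n = <He_n(x)> / n!  (the (-1)^n factors of the main text cancel)
    c = {n: np.sum(weights * hermeval(x_n, [0] * n + [1])) / factorial(n)
         for n in orders}

    def density(E):
        x = (E - mu) / sigma
        series = 1.0 + sum(cn * hermeval(x, [0] * n + [1]) for n, cn in c.items())
        return np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi) * series / sigma

    return density
```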
For the general prescription for obtaining the terms and also explicit forms for more terms, see <ref>.In practice, the above series expansions may exhibit negative distribution values or artificial rapid oscillations. This happens mostly in cases where the energy distribution is far from Gaussian – for example, when large gaps are present in the spectrum. Generally, the series will converge if the approximated distribution function falls faster than exp(-E^2/4) <cit.>.The convergence condition is satisfied for a bounded energy spectrum, but the discreteness of the true energy distribution, from which the moments are calculated, can cause the series approximation to show rapid oscillations at higher orders. Moreover, we have seen in our numerics that Gram-Charlier series is generally more well-behaved than Edgeworth series.Therefore, to obtain a series approximation for the energy distribution, it suffices to calculate the moments ⟨ E^n ⟩. This can be done directly from knowledge of the classical wavefunction |ψ⟩ and the system Hamiltonian H, although it can become computationally intensive. In this work, we mainly employ an MPS representation of the states and a matrix product operator (MPO) expression for the Hamiltonian to compute moments. We find that this does not impose a large computational overhead, as acting with the Hamiltonian MPO on the solution MPS even multiple times is not a prohibitive computational task. §.§.§ The resolvent method The energy distribution as defined in <ref> can be thought of as the imaginary part of a particular Green's function, P(E,η) = - ImG(E,η), with the Green's function defined asG(E,η) = 1/π⟨ψ| 1 /H-E + iη|ψ⟩.Transforming to the Lehmann representation by inserting the resolution of the identity, we findP(E,η) = -1/π∑_n |⟨ E_n| ψ⟩|^2Im(1 /E_n - E + iη),and using the fact that Im 1/E_n-E+iη =- η/ (E_n-E)^2 + η^2 ,we can see that computing - ImG(E,η) is equivalent to <ref> for a Lorentzian kernel with broadening η. Thus any method that can calculate G(E, η) can be used to find the approximate energy distribution. We refer to calculating the energy distribution through its associated Green's function as the resolvent method.We solve for the above Green's function through a DMRG-like variational method that was introduced in Ref. <cit.> and then improved in Ref. <cit.>. The method uses the MPS wavefunction form, and performs DMRG-like sweeps to evaluate the Green's function. This means that if we want to use the resolvent method to assess the quality of a candidate wavefunction, the state must be transformed into MPS form. The method works as follows: we define the state |φ⟩ that satisfies:|φ⟩ = 1/π1/ (H-E+iη)|ψ⟩, ⇒ π (H-E+iη) |φ⟩ = |ψ⟩.The Green's function can be written as the overlap G(E,η) = ⟨ψ | φ⟩.Now, defining |Y⟩ as <cit.>:[ (H-E)^2 + η^2 ] |Y⟩ = -η/π|ψ⟩.From this equation, we see that ImG = ⟨ψ|Y⟩. Finding the overlap of the above equation with |Y⟩, a DMRG-like algorithm is used to minimize the resulting functional <cit.>:⟨Y|[ (H-E)^2 + η^2 ] |Y⟩ + η/π ⟨ Y |ψ⟩.We rely on the package Block2 <cit.> to carry out the calculation of the Green's function in <ref>. 
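For orientation, the defining relation between the resolvent and the Lorentzian-broadened distribution can be checked directly on a small toy Hamiltonian, where the linear system is solved exactly instead of by DMRG-like sweeps (an illustrative sketch with arbitrary toy inputs of our own choosing):

```python
import numpy as np

# Toy check that -Im G(E, eta) equals the Lorentzian-broadened distribution.
rng = np.random.default_rng(1)
dim = 50
H = rng.standard_normal((dim, dim)); H = (H + H.T) / 2          # toy Hamiltonian
psi = rng.standard_normal(dim); psi /= np.linalg.norm(psi)      # toy initial state
evals, evecs = np.linalg.eigh(H)
overlaps = (evecs.T @ psi) ** 2                                  # |<E_n|psi>|^2

eta = 0.1
for E in np.linspace(evals[0], evals[-1], 5):
    # G(E, eta) = (1/pi) <psi| (H - E + i eta)^(-1) |psi>, here by a direct solve
    G = psi @ np.linalg.solve(H - (E - 1j * eta) * np.eye(dim), psi) / np.pi
    lorentzian = np.sum(overlaps * (eta / np.pi) / ((evals - E) ** 2 + eta ** 2))
    assert np.isclose(-G.imag, lorentzian)
```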
Even though the method is implemented in the MPS language, it is general because we have methods for transforming other forms of states into MPS.

§.§.§ Coarse QPE

Finally, we describe how low-precision QPE, which we refer to as coarse QPE, can be used to build an approximate energy distribution; this is based on using the QPE kernel

f(E) = 1/2^2k sin^2(π 2^k E)/sin^2(π E),

in <ref>, for some integer k that is the number of digits in the coarse QPE measurement. One possible way to obtain an energy distribution is to perform QPE multiple times, collect the statistics of the outcomes, and then reconstruct the energy distribution from them. As a refinement of this process, and to diminish bias, for each QPE round we add a random constant to the Hamiltonian. The constant c is chosen to lie in the interval [0,2^-k). After the measurement is performed and an integer x_m is observed, we take the measured energy to have the value 2^-k x_m - c. This ensures that all real values can be sampled, making it possible to approximate the energy distribution using smoothing methods such as kernel density estimation <cit.>. As is well known, in kernel density estimation with a suitable choice of the broadening factor, the error scales as 1/M^4/5, where M is the number of samples (see <ref> for more details).

§.§.§ Numerical example

Here, we consider a hydrogen chain with 6 hydrogen atoms at a bond length of 5 a_0, where a_0 is the Bohr radius (see <ref> for more details on this class of systems). We calculate the exact spectrum and also find the energy distribution of an initial state of our choice using the above methods. The state used is the following sum of Slater determinants:

|ψ_0⟩ = 0.86|2,2,2,0,0,0⟩ - 0.36(|β,2,α,α,0,β⟩ + |α,2,β,β,0,α⟩),

in the basis of Hartree-Fock orbitals. The coefficients are taken directly from the corresponding terms in the exact ground state, with the state subsequently normalized. The three methods above are used, and the resulting energy distributions are shown in <ref> (see the caption for the details of the implementation of the methods). For this particular example, all three methods provide useful information, but among the two classical methods the resolvent method is more reliable, especially because the series expansions can show uncontrolled oscillations at larger orders.

§.§ Using the energy distribution for estimating lowest-energy outcomes

Assuming we have access to the approximate energy distribution of the initial state, we can use it to approximate the distribution of QPE outcomes. For a number of digits k and an integer outcome x_m, it can be calculated as follows:

P̃(x_m) = ∫ dE P(E) 1/2^2k (sin^2(π 2^k E)/sin^2(π [E - x_m/2^k ])).

Note that this is a discrete distribution function. We define the variable E_o = 2^-k x_m with the distribution function P(E_o) = P̃(x_m)/2^-k. In the limit k → ∞, this distribution approaches the underlying energy distribution, and henceforth we denote it by P(E) for notational simplicity.

We study the statistics of the best energy achievable through repeated QPE measurements. Referring to K QPE outcomes as E^(1),…,E^(K), we focus on the distribution of the smallest observed energy, E_min,K = min(E^(1),…,E^(K)). It is straightforward to calculate the cumulative distribution function (CDF) of this variable at energy E as:

C_min,K(E) = 1-(1 - p_<(E))^K.

This is the probability that at least one outcome from K rounds of QPE lies below E.
Here we have defined p_<(E) = ∫_-∞^E dE' P(E'), which is the probability of a single outcome lying below E. Upon differentiating the CDF with respect to E, we obtain the probability distribution function of E_min,K, which reads:

P_K(E) = K P(E) (1 - p_<(E))^K-1.

One simple measure of state quality is the mean value of this distribution: ∫ dE P_K(E) E.

§ QUANTUM REFINING

After the classically optimized ansatz state is implemented on the quantum computer, there is the possibility of further quality improvement by using a quantum algorithm to filter out some of the remaining high-energy weight. This is beneficial only if a cheap quantum refining procedure is possible; we show in this section that this is indeed the case. We also show that another subtlety with QPE, namely the leakage problem, can be addressed through the quantum refining process. We focus on two main methods: coarse QPE <cit.> and quantum eigenvalue transformation of unitary matrices with real polynomials (QETU) <cit.>. The latter is chosen as a representative of the polynomial-based algorithms <cit.>.

§.§ Coarse QPE with postselection

Here we consider an implementation of QPE with fewer digits of precision than in the final scheme, which for example may be targeting chemical accuracy. In each coarse QPE measurement, if the outcome lies outside a set of predetermined low values, the state is discarded and the algorithm is restarted. The assessment of which QPE outcomes are considered small can be based on the energy distribution of the implemented state. To see to what extent large energies are suppressed after postselecting on low-energy QPE outcomes, consider a setting in which we postselect an outcome x_m when QPE with k digits is performed on a state |ψ⟩ = ∑_n c_n |E_n⟩. The (unnormalized) weight of each component E_n in the resulting state will be

P(n) = |c_n|^2 1/2^2k (sin^2(π 2^k E_n)/sin^2(π [E_n - x_m/2^k ])).

If the standard deviation of the initial-state energy distribution is small compared to the span of the spectrum of the Hamiltonian, we can approximate the denominator in the above factor by a Taylor expansion. The weight after the measurement is then suppressed as the inverse square distance from the measurement outcome: ∼ |2^k E - x_m|^-2. This shows that if the precision of the coarse QPE and the postselection values are chosen appropriately, the high-energy weight of the distribution can be well suppressed.

§.§ QETU filtering

QETU <cit.> and other polynomial-based methods <cit.> aim to implement a function of the Hamiltonian that retains low energies and filters out high energies. In short, the method consists of a quantum signal processing circuit <cit.> that implements a unitary matrix block-encoding a function f(H) = P(cos(H/2)), where H is the Hamiltonian of interest and P is an even polynomial of degree d_P. The function f(H) must be designed so that low energies are retained and high energies are filtered, for instance using an approximate step function. Details of how this can be done are discussed in the appendix. The cost of implementation is directly given by the degree of P; more precisely, the number of times that one queries the unitary U = e^-iH is exactly d_P. In order for the filtering to be successful, this degree should scale as O(Γ^-1 log ϵ^-1), where ϵ is the (sufficiently small) error of the polynomial approximation and Γ is the energy scale over which the transition in the function f(H) needs to occur.
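For intuition about the effect of such a filter on an energy distribution, the toy sketch below simply reweights a discrete spectrum by a smoothed step with transition scale Γ; it does not construct the QETU polynomial or circuit, and the cutoff, spectrum, and weights are arbitrary choices of ours:

```python
import numpy as np
from scipy.special import erf

# Toy illustration of spectral filtering: keep energies below mu, suppress the
# rest over a transition window of width Gamma, then renormalize the weights.
rng = np.random.default_rng(2)
E = np.sort(rng.uniform(0.0, 1.0, 200))        # toy spectrum
w = rng.dirichlet(np.ones(200))                # toy weights |<E_n|psi>|^2
mu, Gamma = 0.3, 0.05
f = 0.5 * (1.0 - erf((E - mu) / Gamma))        # smoothed step filter f(E_n)
w_filtered = f ** 2 * w                        # amplitudes pick up a factor f(E_n)
w_filtered /= w_filtered.sum()
print("mean energy before filtering:", np.sum(w * E))
print("mean energy after filtering: ", np.sum(w_filtered * E))
```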
Apart from the above asymptotic scaling, in practice we choose the degree by examining the quality of the filtering function that is achieved.

§.§ Cost of implementing quantum refining methods

A simple analysis of the asymptotic cost of the above methods follows from one consideration: to suppress most of the weight at unwanted high energies while keeping the low-energy weight mostly intact, we need to distinguish energies separated by values of the order of the standard deviation σ_E of the energy distribution of the initial state. This means that in a coarse QPE setting we need the resolution 2^-k to be of the order of σ_E; in a QETU setting, likewise, we need a function f(H) that discerns energies of the order of σ_E, i.e., Γ ∼ σ_E. Thus, both methods require a number of queries to e^-iH (or a related unitary) that scales as O(1/σ_E). We expect the target precision ϵ to be considerably smaller than σ_E, so the cost of filtering should in principle be much lower than that of the final energy estimation algorithm, which scales as O(1/ϵ). This shows that quantum refining for further improvement of the state quality is viable through these methods.

While the asymptotic costs are the same, by examining simple concrete examples we find that coarse QPE works appreciably more efficiently. In particular, we consider a Gaussian energy distribution for the initial state and study the effect of quantum refining via coarse QPE and QETU on it; although this is an artificial construction, it captures the essential factors in comparing polynomial-based algorithms with coarse QPE. Plots of the energy distribution after quantum refining are presented in <ref>, with coarse QPE on the left panel and QETU on the right. We can see that the final state obtained through coarse QPE has a much higher quality than the one obtained through QETU. More details on the two procedures are presented in <ref>. There, we show that this performance of coarse QPE is achieved even at a lower cost than QETU.

Given this example and other similar constructions, we conclude that coarse QPE generally has a lower cost in practice. This is mostly because constructing polynomials with sharp jumps for the QETU algorithm requires high degrees, resulting in high costs. Thus we pick coarse QPE as our method of choice for performing the quantum refining stage of the algorithm.

§.§ Case study: The QPE leakage problem

Here, we consider a known problem that is usually discussed in the literature as changing the cost scaling of QPE to a 1/p_0^2 behavior instead of 1/p_0, where p_0 is the square of the overlap of the initial state with the ground state (see e.g. Section I.A of <cit.> and Appendix A of <cit.>). This scaling can occur when the long tails of QPE kernels placed at higher energies contribute outcomes at low energies, potentially even below the ground-state value. It is argued in Ref. <cit.> that, in order to prevent this from happening, longer evolution times that scale as 1/p_0 should be used in each QPE round. Since a total of 1/p_0 rounds are required to obtain precisely the ground state, the overall cost in this argument scales as 1/p_0^2.
We analyze this problem, which we call the QPE leakage problem, based on our energy distribution approach and discuss how it can be diagnosed.We also show that the problem can be circumvented using quantum refining of the initial state as discussed above, without a need to resort to large evolution time. We first consider the problem in the conventional setting; starting with an initial state |ψ⟩ = ∑_n c_n |E_n⟩, we would like to perform QPE and estimate the ground state energy E_0 with a toleratederror ϵ. The question is how many phase digits are required for this task. One requirement is to ensure that leakage is absent, meaning we need to estimate the probability of contributing an outcome below E_0-ϵ from all of the energy levels except the ground state:p_leak = ∑_n ≠ 0∑_x_j<x_upper1/2^2k(sin^2 (πδ_n)/sin^2 (π/2^k [x_n+δ_n - x_j])),where x_upper=⌈ 2^k (E_0-ϵ) ⌉, 2^k E_n = x_n + δ_n with x_n an integer satisfying 0 ≤ x_n < 2^k, and 0 ≤δ_n < 1.For the leakage to be improbable, p_leak should be small enough when compared with probability of the ground state in the initial state composition p_0=|c_0|^2.Note that for the ground state itself to not leak beyond the threshold one can add an O(1) number of more qubits to the phase register and discard their outcome <cit.>.Up to an additive error of O(max[2^-2k,(x_n-x_upper)^-2] ) , the single level leakage probability in <ref> can be written as:p_leak (E_n) = sin^2(πδ_n)/π^21/x_n-x_upper+δ_n.See <ref> for more details. Having access to the energy distribution of the initial state, the total leakage probability in <ref> can be approximated as:p_leak = 1/π^2 2^k ∫_E_0+ϵ dE P(E)sin^2(π 2^k E)/E -x_upper/ 2^k . A simple criterion for leakage, based on the above integral, can be derived for multimodal distributions; we focus on a unimodal distribution and the multimodal case is similar. Assuming an O(1) probability is concentrated close to the main peak of the distribution, and that this peak is located at E_p, from the above integral the probability of leakage beyond the ground state can be approximated as 1/2π^2 2^k1/E_p-E_0. We need this probability to be smaller than the probability of the ground state; this means that the number of phase digits should be chosen large enough so that 2^k = O ( [p_0 ( E_p - E_0 )]^-1). In general, we need to take the tolerated error ϵ in QPE also into account for k, and as a result we have: 2^k = O ( max( [p_0 ( E_p - E_0 )]^-1, ϵ^-1) ).Apart from the above setting, where higher energy states can contribute to QPE outcomes below the ground-state energy value, there is also a possibility for leakage when we are not aiming to necessarily obtain the ground state energy but striving to obtain better energy estimates using QPE. In such a setting, if a small QPE outcome is obtained in an energy region where there is actually not an appreciable weight, it is more probable for the outcome to be invalid as it likely happened due to leakage from higher energies. Such outcome should not be accepted as an estimate of the energy since it can in general be smaller than all the eigenvalues of H with which the initial state has nonnegligible overlap; it can actually be below the ground-state energy of the system resulting in incorrect estimates. To quantify such a possibility, we first note that the distribution of the lowest outcomes of QPE was considered in <ref> through the use of the energy CDF; here, we use the CDF of QPE outcomes to study the possibility of leakage too. 
In particular, at an energy of interest, we can compare the CDFs of energy and QPE outcomes (with the desired number of digits k). If the QPE outcomes CDF is considerably larger than the energy CDF, this signifies a high probability of leakage contamination of results around or below that energy value. This is especially important for the region in which the energy CDF is of the order 1/N for a QPE measurement with N repetitions as this is where the smallest outcome is expected to appear. An ultra-precise QPE will result in an energy close to this region but if the QPE outcomes CDF is large, lower precision QPE can contribute smaller outcomes, and those can only be due to unwanted leakage from higher energies and thus should be avoided.Given these two treatments of the possibility of leakage, we see that knowledge of the energy distribution function enables us to identify situations in which QPE leakage is not insignificant. If leakage is present, in principle in both of the cases, with choosing QPE precision high enough, the leakage probability is managed; however this induces extra cost for each QPE round, as with increasing the number of QPE digits k, the cost rises exponentially 2^k. In the following we show that quantum refining of the initial state can be used to manage the leakage possibility without a need to use higher precision. §.§.§ Mitigating the leakage probabilityThe above analysis shows that if the spectral weight in some region, that is responsible for the leakage, is suppressed through some means, the leakage probability could also be suppressed with it. We have seen in <ref> that for a small cost compared with that of the most precise QPE measurement, high energy weights can be filtered with quantum refining. Thus, QPE leakage can in principle be mitigated through quantum refining, but it is a matter to be studied on a case-by-case basis based on the details of the energy distribution at hand. This will be very helpful as it removes the need to perform time evolution for times of the order p_0^-1 for the ultimate QPE measurement. As an example, consider an energy distribution having a multimodal structure, one can identify the peaks – accumulating O(1) spectral weight in their vicinities – which are responsible for leakage through analyses discussed above; one can then perform a quantum refining in the form of coarse QPE, so that those peaks are close to discarded outcomes, and thus will lose a substantial weight after the process. This can substantially decrease the leakage probability. Such procedure is illustrated in a concrete example in <ref> (last paragraph).Note that this means the quantum refining step of our state preparation algorithm is capable of reducing the cost of the whole algorithm, not only by lowering the number of required repetitions of the most precise QPE measurement, but also by decreasing the cost of each single round by mitigating the leakage problem when it is present.§ NUMERICAL DEMONSTRATIONS In this section, we showcase our complete initial state preparation algorithm for a variety of molecules. Through the numerical examples, we explore how viable it is to prepare good-enough initial states for complex molecules to be studied with a quantum algorithm. Of particular interest are situations where, on the one hand, classical methods struggle to give a good energy estimate, but at the same time one can still prepare a good-enough initial state for quantum energy estimation. 
We call such problems Goldilocks problems, a concept formalized through the energy distribution.

§.§ Goldilocks molecules

As we argued in <ref>, the energy distribution picture can be used to estimate whether a quantum algorithm can improve a given initial state's classical energy estimate. Using this concept, we can categorize energy estimation problems based on the hardness of preparing a good-enough initial state for performing quantum energy estimation routines (e.g. QPE). Such a classification is of course tied to the available budget for initial state preparation; here we take the budget to be unspecified for the sake of generality. With a given budget for state preparation, one can have easy, intermediate, and hard problems: an easy problem is one in which it is possible to prepare on a quantum computer a very high-quality initial state with large accumulated weight over the low-energy part of the spectrum of the Hamiltonian; a hard problem, on the other hand, is one in which, within the given budget, it is only possible to prepare a poor-quality state with negligible weight over the low energies of the Hamiltonian. In between, there ought to be problems of intermediate hardness, where there is some non-negligible, but also not too large, weight over the low-energy part of the spectrum.

We argue that, in all likelihood, it is only possible to obtain quantum advantage in ground-state energy estimation over classical computational methods for intermediate problems. This is because for hard problems, by definition, one cannot perform quantum routines effectively; and for easy problems, a good classical energy estimate is likely already available and is highly challenging to beat using quantum algorithms. The question of whether there is quantum advantage in an intermediate problem remains open: the energy distribution allows this question to be explored computationally. To make this more concrete, on an energy axis let us mark the best classically achieved energy by E_T. Assuming we have access to the energy distribution of our initial state, we can calculate its accumulated weight for energies below E_T and call it p_<E_T. If p_<E_T is large enough – that is, large enough that performing an accurate QPE measurement O(p_<E_T^-1) times is within the available QPE budget – there is a possibility of quantum advantage. We call such a situation a Goldilocks problem – one in which the classical ground-state energy estimate can potentially be improved using QPE. Given our numerical results for the Cr_2 and Fe_4S_4 molecules in the following sections, we expect that complexes including several transition metal centers, if studied within an appropriate active space, could indeed present such Goldilocks problems.

In the remainder of this section, we illustrate these ideas concretely through numerical experiments on a variety of chemical systems.

§.§ Hydrogen chains

We begin by studying the hydrogen chain model system in the minimal STO-6G basis (<ref>), varying the bond length to increase correlations and evaluating the overlaps and energies of different methods relative to the exact solution from full configuration interaction (FCI). All methods, including DMRG, are executed in the S_z symmetry mode, i.e., conserving the spin projection on the z-axis.
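For reference, a minimal PySCF-style setup for such a chain might look as follows (an illustrative sketch only; the geometries, convergence settings, and post-processing behind the results reported below may differ):

```python
from pyscf import gto, scf, ci, cc, fci

def hydrogen_chain_energies(n_atoms=6, bond_length=2.0):
    """Build an H_n chain (bond length in Bohr) in the STO-6G basis and return
    the HF, CISD, CCSD, and FCI total energies in Hartree."""
    mol = gto.M(
        atom=[("H", (0.0, 0.0, i * bond_length)) for i in range(n_atoms)],
        basis="sto-6g", unit="Bohr", spin=0,
    )
    mf = scf.RHF(mol).run()
    e_cisd = ci.CISD(mf).run().e_tot
    e_ccsd = cc.CCSD(mf).run().e_tot
    e_fci = fci.FCI(mf).kernel()[0]
    return mf.e_tot, e_cisd, e_ccsd, e_fci

# Stretching the bonds increases static correlation:
print(hydrogen_chain_energies(6, bond_length=2.0))
print(hydrogen_chain_energies(6, bond_length=5.0))
```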
Here and in later figures, the overlap ⟨ψ | ψ_FCI⟩ is computed by first bringing the output of all methods to the SOS form: in all cases (especially DMRG), we make sure that the SOS form of the wavefunction includes enough determinants to capture above 99% of the weight of the original state. Near equilibrium, as in panel (a), all methods perform equally well: but as the bonds get stretched and static correlation increases (panels (b)-(c)), SHCI and DMRG clearly emerge as leaders in terms of overlap, while the Hartree-Fock state performs poorly. While in practice CCSD is able to recover a substantial portion of the dynamic correlation energy of the system, its energy estimates are not directly linked to the quality of the wavefunction because the energy CCSD computes is non-variational. To place the CCSD energy estimates on equal footing with the other methods, we instead plot the variational energy of the associated CCSD ansatz that we aim to implement on a quantum computer: this amounts to expanding the CCSD wavefunction as a SOS and truncating it to second order. We find that such a truncated CCSD wavefunction is only marginally improved compared to CISD. This is expected: since CCSD's internal optimization routine is geared towards minimizing a non-variational energy, this can easily lead towards inaccurate wavefunctions. Hydrogen chains are also convenient for comparing the SOS and MPS forms of the initial state in terms of their quality versus implementation Toffoli cost curves. For this comparison, we obtain the system ground state with DMRG, then process it in two ways: for the SOS form, we do reconstruction and then repeatedly truncate the number of Slater determinants; for the MPS form, we compress the MPS to smaller and smaller bond dimensions while continuously computing the overlap to the original state. For implementation cost, we use the expressions derived in <ref>: the cost is mainly set by the number of determinants D for SOS and by the bond dimension χ for MPS. The results for hydrogen chains of varying sizes are shown in <ref>. The MPS form appears to be more expensive relative to the SOS form for the cases shown even though the system is one dimensional, which means that MPSs should perform very well in representing the ground state. This signifies the efficiency of the SOS method developed in this work. Notice that DMRG can still be used as the method of choice for the classical ground-state search in such cases, however, it might beneficial to transform the result into an SOS form and then implement on a quantum computer. Furthermore, we should also note that the SOS cost is seen to increase exponentially with wavefunction quality, especially in larger chains, as more and more determinants are needed to accurately represent the ground state. In fact, the reason why the SOS curve does not reach perfect wavefunction overlap for larger chains is because of the extreme memory requirements for storing all the determinants arising in reconstruction beyond a certain cutoff. Thus there could be advantages to implementing the state in the MPS form in large, strongly-correlated systems. This needs to be explored more in future works.We next study energy distributions for all of the states studied in <ref>. We compute them using the resolvent method: for all non-MPS-based states, we first convert them to MPS form. Near equilibrium, all energy distributions are sharply peaked around the ground state (not shown). 
However, when bonds are stretched, the DMRG and SHCI states have significantly more weight in the lower-energy portion of the spectrum, as seen in <ref>. The classical energies of the states are shown with vertical dashed lines of the same colour.Given that we are implementing the SOS forms of the states, we report the associated SOS state energy, not the energy reported by the classical computational method. This applies most importantly to CCSD: because of this consideration, the CCSD energy shown is actually higher than the CISD energy. While the CCSD state redistributes some of the weight towards lower-energy states relative to CISD, this is marginal. At the same time, the Hartree-Fock state has very little weight in the low-energy parts of the spectrum, including the ground state peak. Overall, we see that even though these energy distributions are approximate, by direct visual inspection the high-quality states can be identified – without any need for a reference state, as with the overlap metric. Beyond this, we also get a much richer picture of how the weight is distributed across the energy range. §.§ The N_2 molecule Next we turn to molecules, starting directly with a system with an intermediate degree of correlations — the N_2 molecule with stretched bonds (r = 2.25 r_N_2) in the cc-pVDZ basis with the effective core potential ccECP <cit.>, which is used to reduce the number of electrons in the problem. Since this system of 26 orbitals and 10 electrons is now beyond our capability for FCI, we use a highly-converged DMRG wavefunction (χ = 1000) as the reference state, because it returns the lowest energy for this system (a few mHa below the SHCI solution). The CASCI and MRPT methods are carried out in an active space of CAS(10e, 12o). The energy-overlap bar chart in <ref> shows that most methods, while they struggle to generate an excellent wavefunction, still provide reasonably good overlap to the reference state. Notice that while MRPT significantly improves the energy estimate, it does not improve the CASCI wavefunction quality. Another curiosity is that the SHCI wavefunction, while recovering nearly 100% of the correlation energy, has an appreciably reduced overlap with the DMRG reference state. All of these observations reinforce the idea that energy is not a very reliable proxy for wavefunction quality.For the energy distributions for this molecule, we choose to compare the states obtained from DMRG (see <ref>): namely the Hartree-Fock state (χ = 1), as well as two intermediate-quality states with bond dimensions χ = 50 and χ = 250, obtained through an increasing bond dimension schedule without compression (our best classical result is obtained with χ = 1000). Once again, at a glance we notice that the Hartree-Fock state is of significantly worse quality than the other two states. This shows that the energy distribution method allows us to characterize the quality of initial states for physical molecules. §.§ The Cr_2 dimer We next turn to the Cr_2 molecule, which is an example of a strongly multireference system. To increase correlations even further, we stretch the dimer bond length by a factor of 1.8 relative to equilibrium. To study such a many-electron molecule with a limited computational budget, we focus on active spaces built around a limited set of orbitals, the 3d-orbitals in Cr_2. 
We employ the atomic valence active space (AVAS) approach <cit.>, where molecular orbitals with the largest d-orbital overlap above a given threshold are selected for the active space.The active space for Cr_2 focused on 3d orbitals can be as small as 10 active electrons in 10 active orbitals, written CAS(10e,10o). In an active space this small, a reference FCI solution can be obtained. As seen in <ref>, once again the DMRG method recovers the entirety of the correlation energy and, more importantly, produces a wavefunction with perfect ground-state overlap, whereas both CISD and CCSD clearly struggle in this multireference situation. Note that the DMRG calculations are now being carried out in SU(2) mode, conserving total spin S, instead of only the projection S_z. As before, CCSD appears to recover most of the correlation energy whilst having a worse overlap than the CISD solution. Notice that the SHCI solution gives nearly the correct energy but an incorrect wavefunction: in our calculations, we found it particularly challenging to obtain an SHCI solution with the correct total spin, i.e. total spin of zero, the chosen FCI reference solution – hence the vanishing overlap. For the energy distributions, we again focus on the DMRG exact solution and its compressed versions with lower bond dimensions. The energy distribution gives an excellent account of the quality of the state during compression: not only are the different quality states easily distinguishable, it is also clear that even the poor-quality, highly compressed χ_cps = 4 state still has a wide range of energies covered in its distribution, showing weight well below its mean (classical) energy. These energy distributions are an illustration of the Goldilocks state concept described in more detail in <ref>: while the χ = 4 state is clearly too poor to do QPE with, and the χ = 250 state already has a good classical energy, the intermediate quality states with χ = 50-100 would allow QPE to improve on their associated best classical estimate with only a small number of iterations. Notice also that while the quality of the χ = 100 state is greater than that of the χ = 50 state, their energies are nearly equal: this shows that it is possible to find initial states with improved quality without this corresponding to an improvement in the classical energy estimate. §.§ [Fe_4S_4] core Finally, we consider the situation in the Fe_4S_4 molecule core – the 8-atom center extracted from the associated molecular iron-sulfur complex. The active space for this system is focused on the Fe 3d and S 3p orbitals: we follow the procedure outlined in Refs. <cit.>. Instead of Pipek-Mezey, we use the Cholesky method to split-localize the α molecular orbitals from a high-spin (S_z = 5/2 per Fe atom) restricted open-shell Hartree-Fock calculation in the cc-pVDZ basis, then select orbitals with Fe 3d and S 3p character by visual inspection.For this system, we consider four different states obtained with DMRG: three states obtained by converging DMRG at bond dimensions χ = 20, 50, 100 respectively, and another obtained by converging a calculation at a high bond dimension of χ = 1000 and then compressing that wavefunction down to χ_cps = 7. While the highly compressed state is the most high-quality of the four, it has the worst classical energy estimate – higher-energy admixtures balance out the weight at the lower energies. 
On the other hand, while the χ = 50 state has a significantly improved energy relative to χ = 20, its energy distribution is mostly unchanged, and its quality not significantly improved. Finally, the states χ_cps = 7 and χ = 20 have nearly the same energies, but the former has much more weight in the low energy part of the spectrum, making it much higher-quality. At the same time, all these states for the Fe_4S_4 core are built up from tens or hundreds of Slater determinants, with the coefficients of the largest contributing determinants being on the order of 10^-2. This strongly suggests a single product state would be an exceedingly poor initial state in this situation. More specifically, the χ_cps = 7 state has an overlap of 0.69 with the ground state (obtained at χ = 1000), while the largest contributing determinant in the χ = 1000 state has weight 0.04. Then in terms of probabilities to project on the low-energy χ = 1000 state, the χ_cps = 7 state gives 0.47 while the best single product state gives 0.002. Thus at least 300 times fewer iterations of QPE will be needed to project on this state with the only marginally more complicated initial state – which translates into direct cost and runtime savings of more than two orders of magnitude. At the same time, the compression to χ_cps = 7 ensures that the cost of implementing the improved initial state continues to be negligible compared to the main costs of energy estimation <cit.>.The main conclusion to be drawn from <ref> is that even though simple product states have low overlap with the low-energy subspace, it is generally possible to prepare a relatively cheap, relatively high-quality initial state (e.g. the χ_cps = 7 state) for this system with only moderate additional effort. While this particular state is generated from an expensive classical calculation, it retains a considerable weight in the low-energy sectors post-compression, further suggesting that an expensive classical calculation followed by compression could be a good method to obtain a cheap-to-implement high-quality state. These last two example molecules together suggest that preparing an inexpensive, good-quality state is possible for molecules with transition metal centers. At the same time, such systems are known to be challenging cases for classical computational methods <cit.>. The combination of these facts implies that these systems are good candidates for Goldilocks systems, and motivates their further study for quantum computing applications. This conclusion is uniquely enabled by the energy distribution picture and data: the overlap metric would not give much insight into the relative quality of the states we considered here. § CONCLUSIONSWe have introduced a complete workflow for preparing initial states for quantum chemistry. Our results target a critical component of quantum algorithms for simulating chemical systems, which is essential to elucidate the potential for quantum advantage. 
Key technical contributions of this work include a state-of-the-art quantum algorithm for preparing states expressed as sums of Slater determinants, methods to construct approximate energy distributions for assessing state quality, and identification of coarse quantum phase estimation (QPE) as a leading technique for refining initial states and addressing the leakage problem.We demonstrate the applicability and usefulness of our initial state preparation procedure with several numerical experiments on challenging molecules.Our work indicates that it is worthwhile to employ advanced techniques for state preparation beyond simplistic approximations like the Hartree-Fock state. Quantum energy estimation algorithms such as QPE already incur a considerable cost, leaving a large budget available for spending computational resources to prepare better initial states. This budget should be utilized since improved initial states lead to higher probabilities of observing low-energy estimates, resulting in fewer repetitions of the energy estimation algorithm and an overall reduced cost. Our optimized technique for implementing sums of Slater determinants was designed precisely to enable the use of sophisticated approximate ground states such as those obtained from semistochastic heat-bath configuration interaction (SHCI) and the density matrix renormalization group (DMRG) methods, which we identify as leading strategies for initial state preparation.The energy distribution approach that we propose suggests a rethinking of initial state preparation for quantum chemistry. It provides a computationally-tractable method for assessing and comparing the quality of initial states in a reference-free way; this is out of reach when computing overlaps with the true ground state, which is typically unknown. Energy distributions also help to shape our understanding of the prospects for quantum advantage: since the goal of QPE is to improve the energy estimates associated to the initial state, we can use approximate energy distributions to reason about the extent to which this is possible. We employ this perspective to propose the concept of Goldilocks systems: molecules where the quality of the initial state is neither too high nor too low. This means that two conditions are met: (i) the difference between the best classical estimate and the true ground-state energy is large enough to leave room for improvements, and (ii) the quality of the initial state is sufficiently high to support a considerable probability of observing such improvements.Numerical experiments support these findings. We observe that it is possible to use energy distributions to infer quality of different initial states, for example in cases where the expectation values of the energy are very similar. This is evidence that energy can be a problematic proxy for state quality. Our studies also suggest that molecules with transition metals in non-equilibrium geometries are potentially Goldilocks systems, and therefore a quantum advantage in ground-state energy estimation could be possible.Future work may focus on further optimizing quantum algorithms for implementing classical wavefunctions, and more generally, on further improving the proposed workflow. Of particular interest are quantum algorithms for refining initial states obtained from classical methods, which have not received much direct attention. It is possible that better methods than coarse QPE, equipped also with performance guarantees, could be discovered. 
Another direction that can be pursued is to extend our methodology to periodic systems. This is needed for simulating materials, which have many industrial use cases. Finally, it is important to understand how to prepare initial states in circumstances where quantum hardware places restrictions in terms of available qubits and circuit depth, in preparation for the emergence of early fault-tolerant quantum computers.We thank Huanchen Zhai, Yu Tong and Soran Jahangiri for fruitful discussions. We gratefully acknowledge the support and computational resources provided by the BC DRI Group through the Cedar supercomputing cluster and the Digital Research Alliance of Canada. We are also grateful for the use of computational resources of the LISA computational cluster at the Stewart Blusson Quantum Matter Institute. S.F. acknowledges support by Mitacs through the Mitacs Accelerate Program. J.F. acknowledges support from ERC AdG NOQIA; Ministro de Ciencia y Innovacion AEI (Plan Nacional FIDEUA PID2019-106901GB-I00, Plan Nacional STAMEENA) and Fundació Cellex. § CONVERSION TO SUM OF SLATER FORMATS FOR ALL WAVEFUNCTION METHODS Converting all wavefunction-based methods explored in this paper to a sum of Slater format requires a number of specialized steps particular to each method.The CISD wavefunction already comes in the sum of Slater format, so no conversion is required.The CCSD ansatz is more challenging to convert to the unified sum of Slaters format due to the fact that in principle excitations to all orders are being generated. However, since these decay quickly, in practice going up to second or fourth order in excitations already captures most of the CCSD wavefunction. These can be obtained by Taylor expanding the exponential to the appropriate order and collecting like terms for excitations: in this way, coupled cluster amplitudes combine in various ways to become CI coefficients. CASCI wavefunctions merely need to be padded to the full space with the occupied orbitals, which makes the conversion of these wavefunctions to the sum of Slaters format almost immediate. The same applies to MRPT wavefunctions.Being one of our standard formats, the MPS does not require form conversion. However, for the purpose of comparison with the other methods, it is also possible to start from an MPS and compute the equivalent Slater determinant representation of the wavefunction up to a specified tolerance – a process called reconstruction. A deterministic approach to this involves partial re-summation of the matrix products: the details can be found in Ref. <cit.>. On top of that, to switch to chemist convention of keeping all spin up operators on the left, we evaluate the required parity conversion factor for each determinant.The SHCI method naturally returns the wavefunction as a sum of Slaters, so little conversion is required beyond post-processing the results of the particular package we are employing.§ SOS ↔ MPS TRANSFORMATIONIn this appendix, we discuss how the two standardized formats, i.e. SOS and MPS are transformed to each other. MPS to SOS: The goal is to calculate the largest coefficients c(n_1,…,n_N) = ∑_α_1 …α_N-1 A^n_1_1;α_1… A^n_N_N;α_N-1 in a SOS expansion. Based on Appendix A of Ref. <cit.>, we start from a left canonical form and set a threshhold for keeping terms in the SOS. 
Partial coefficients such as c^(p)_α_p(n_1,…,n_p)=∑_α_1 …α_p-1 A^n_1_1;α_1… A^n_p_p;α_p-1α_p are formed, and whenever the norm of the partial coefficient ∑_α_p |c^(p)_α_p(n_1,…,n_p)|^2 goes below a threshold, all Slater determinants with the prefix (n_1,…,n_p), i.e. of the form |n_1,…,n_p,…⟩, are dropped from the SOS. This way, owing to the left canonical form of the MPS, it is ensured that all the terms with coefficients above the threshold are recovered in the SOS.

SOS to MPS: For this task, we start with a bond dimension 1 MPS that corresponds to the largest-coefficient Slater determinant in the SOS (which may or may not be the Hartree-Fock state); we also make an auxiliary copy of it. Using MPOs consisting of a number of c^† and c operators, the auxiliary bond dimension 1 MPS is transformed into the Slater determinant with the second largest coefficient. The new auxiliary MPS is added to the main MPS and the procedure goes on until all coefficients are added. Note that the auxiliary MPS remains bond dimension 1 while the bond dimension of the main MPS grows; one can compress the main MPS as more and more terms are added to it.

§ PROOF OF LEMMA <REF>

We prove <ref> by induction. To avoid cluttering, we shall replace ν̃_i with ν_i. From here onwards, the vectors ν_i have length r. We recall the statement and notations of the lemma: in <ref>, one needs to prove the existence of 2log D-1 vectors u_j of length r, forming a matrix called U, that helps to distinguish the D distinct vectors ν_i of length r by mapping them to vectors b_i of length 2log D-1. Notice U is supposed to be found offline on the classical computer. We can interpret U as a linear map acting on each ν_i. We need to find a U : 𝔽_2^r → 𝔽_2^2log D-1 such that U(ν_i) ≠ U(ν_j) ↔ ν_i - ν_j ∉ ker U , ∀ i ≠ j . We will efficiently construct such a linear map with the additional property that ν_i ∉ ker U, ∀ i unless ν_i = 0; this additional property will help us in proving the next inductive step from the induction hypothesis. In summary, U has to satisfy the following properties: ν_i - ν_j ∉ ker U , ∀ i ≠ j, and ν_i ∉ ker U, ∀ i, unless ν_i = 0. Since |𝔽_2^log D-1| = 2^log D-1 < D, the D distinct vectors cannot all lie in a subspace of dimension log D-1, so there are at least log D many linearly independent vectors among the ν_i's. Therefore r ≥ log D. First assume log D ≤ r ≤ 2log D-1. Without loss of generality, assume ν_1, … , ν_r are linearly independent and generate all the ν_i's. Note that finding these linearly independent generators is an efficient classical algorithm in linear algebra. In this case, we can distinguish the ν_i's using r ≤ 2log D-1 many u_j's; we simply choose U to be the r× r identity matrix. This choice yields b_i = ν_i, with length r ≤ 2log D-1, fulfilling the same purpose. When r > 2log D-1, we perform induction on t ∈ ℕ where r = 2log D-1+t. According to the rank theorem, dim Im U + dim ker U = dim 𝔽_2^r = r = 2log D-1+t. We find U by first constructing a subspace 𝒦 of dimension t, satisfying the same properties in <ref>, followed by building a U with kernel equal to 𝒦. More precisely, we will find t many linearly independent vectors w_1,…,w_t ∈ 𝒱_r := span⟨ν_1,…,ν_r⟩ that would define such a 𝒦. Then, by basic linear algebra, there is an efficient classical algorithm that finds linearly independent vectors u_1,…,u_2log D-1 ∈ 𝒱_r that satisfy u_j · w_i = 0, ∀ i,j. Because of their linear independence, a matrix U defined by such u_j's would have rank 2log D-1, so dim Im U = 2log D-1 and dim ker U = t. Finally, since ∀ i: w_i ∈ ker U and dim ker U = dim 𝒦, it follows that ker U = 𝒦. Therefore U satisfies <ref>, as desired.

Note.
Going forward, as operations are over the field 𝔽_2, we may play loose with subtraction and addition, as ν_i-ν_j = ν_i+ν_j. Let us start by proving the base of induction t=1, i.e. r = 2log D. We need to find a single vector w_1 such that ν_i ≠ w and ν_i - ν_j ≠ w for all i,j. The number of distinct vectors in the set {ν_i, ν_i-ν_j}_i,j is at most D + D(D-1)/2 = (D^2+D)/2 ≤ 2^2log D-1+2^log D-1 < |𝔽_2^r| = 2^r = 2^2log D. Therefore there exists w ∈ 𝔽_2^r - {ν_i, ν_i-ν_j}_i,j, and this vector can be found after a search over (D^2+D)/2+1 vectors picked from 𝔽_2^r. Thus, w_1 for the base of induction can be found efficiently.

For the induction step, without loss of generality assume ν_1, …, ν_2log D-1+t are all linearly independent and generate the rest of the ν_i's (we note again that finding these generators can be done efficiently). By the induction hypothesis, w_1, … , w_t-1 ∈ 𝒱_r-1 = span⟨ν_1, … , ν_r-1⟩ form a desired subspace 𝒦_r-1 for the previous induction step. Note that clearly 𝒱_r-1 ⊂ 𝒱_r = span⟨ν_1, … , ν_r⟩. We can partition the set of all vectors {ν_i}_i=1^D into three subsets: (1) ℳ := {ν_i | ν_i ∈ 𝒱_r-1}, whose elements will be referred to as m_i, (2) the single element subset {ν_r}, and (3) 𝒩 := {ν_i | ν_i ∈ 𝒱_r - 𝒱_r-1}. The latter will have vectors that look like ν_i = m_i'+ν_r where 0 ≠ m_i' ∈ 𝒱_r-1. Because of the partitioning, |ℳ| + 1 + |𝒩| = D. Note any future use of m_i, m_i' will refer to a ν_i inside ℳ, 𝒩 respectively. We emphasize that ℳ, 𝒩 are sets and not necessarily linear subspaces. We would like to invent a new set of D vectors with rank r-1, so that we can apply the induction hypothesis. To do so, let us replace ν_r with some l ∈ 𝒱_r-1, and similarly substitute every ν_r in the linear expansion of any ν_i = m_i' + ν_r ∈ 𝒩, meaning ν_i becomes l+m_i'. This vector l needs to satisfy some properties: l ≠ 0, m_i'+l ≠ 0, l ≠ m_j, m_i'+l ≠ m_j. The first two conditions ensure that after replacement, we do not obtain any zero vector. The last two conditions ensure that we do not obtain any repeated vector. All these conditions amount to l ∉ {0, m_i', m_j, m_j+m_i'}, the size of this set being (at most) 1 + |𝒩| + |ℳ| + |ℳ|·|𝒩|. We recall 1 + |ℳ| + |𝒩| = D, so the size is ≤ D + (D-1-|𝒩|)|𝒩| ≤ D + D^2/4 ≤ 2^log D + 2^2log D-2 < 2^r-1 = 2^2log D+t-2 as t>1. Hence, there exists l ∈ 𝒱_r-1 that satisfies <ref>. Now the induction hypothesis for the new set of D vectors applies, since the rank has clearly been decreased by one to r-1 = 2log D-1+(t-1). Therefore, there exists a subspace 𝒦 = span⟨w_1,…,w_t-1⟩ ⊂ 𝒱_r-1 satisfying <ref> for this new set of vectors. Now let us bring back ν_r by undoing the replacement by l. After this change, we need to verify that 𝒦 still satisfies <ref>, and then, in order to finish the proof, extend 𝒦 by a vector w_t while satisfying said properties. We verify the properties as follows:
* We first check that ν_i, ν_i+ν_j ∉ 𝒦. Note that ν_i ∉ 𝒦 needs to be checked only for ν_r and ν_i ∈ 𝒩 (as they are the only ones impacted by bringing back ν_r). For both, this is in fact obvious as ({ν_r}∪𝒩) ⊂ 𝒱_r - 𝒱_r-1, i.e. ({ν_r}∪𝒩) ∩ 𝒱_r-1 = ∅, while 𝒦 ⊂ 𝒱_r-1.
* For the property ν_i+ν_j ∉ 𝒦, this needs to be checked only when at least one of ν_i, ν_j is inside ({ν_r}∪𝒩).
* Assume that ν_i ∈ ({ν_r}∪𝒩) and ν_j = m_j ∈ ℳ. If ν_i=ν_r, we need to show ν_r+m_j ∉ 𝒦, and for ν_i ∈ 𝒩, we need to prove m_i'+ν_r+m_j ∉ 𝒦. Both these vectors, due to ν_r, lie outside of 𝒱_r-1, while 𝒦 ⊂ 𝒱_r-1.
* Assume that ν_i, ν_j ∈ ({ν_r}∪𝒩). Then we need to check ν_r + m_j' + ν_r = m_j' ∉ 𝒦, and also m_i' + ν_r + m_j' + ν_r = m_i'+m_j' ∉ 𝒦.
However, by the induction hypothesis we already know that l + m_j' + l = m_j' ∉ 𝒦 and l+m_i'+l+m_j' = m_i'+m_j' ∉ 𝒦, so this is also guaranteed. Finally, we need to find a new w_t to add to 𝒦 while preserving its properties. Let us define w_t = ν_r + l and let 𝒦' = 𝒦 ⊕ span⟨w_t⟩ ⊂ 𝒱_r = 𝒱_r-1 ⊕ span⟨ν_r⟩. To prove that 𝒦' satisfies <ref>, one needs to verify that ν_i ∉ 𝒦' - 𝒦 or ν_i-ν_j ∉ 𝒦' - 𝒦, as 𝒦 has already been shown to satisfy said properties. If ν_i ∈ 𝒦' - 𝒦 or ν_i-ν_j ∈ 𝒦' - 𝒦 then w_t must be `involved':
* For ν_i ∈ 𝒦' - 𝒦, we must have ν_i = w_t+w = ν_r+l + w for some w ∈ 𝒦, in which case ν_i-l = ν_r + w. However, the latter is inside 𝒱_r - 𝒱_r-1. Therefore, since l ∈ 𝒱_r-1, we have ν_i ∈ 𝒱_r - 𝒱_r-1. So ν_i ∈ ({ν_r}∪𝒩). If ν_i = ν_r then l = w ∈ 𝒦, which violates our induction hypothesis. If ν_i = m_i' + ν_r ∈ 𝒩 then m_i'+l = w ∈ 𝒦, which again violates the construction of 𝒦.
* For ν_i-ν_j = ν_r+l + w for some w ∈ 𝒦, exactly one of ν_i or ν_j must be inside ({ν_r}∪𝒩). Without loss of generality assume ν_i = ν_r or ν_i = m_i'+ν_r. Then this simplifies to l-ν_j = w ∈ 𝒦 or m_i'+l-ν_j = w ∈ 𝒦, both violating the induction hypothesis for 𝒦.
This shows 𝒦' satisfies <ref> and finishes the induction. The significant cost in each inductive step is the search to find l, taking O(D^2/2+D) steps.

Resource estimation. The total complexity is found by applying this for each induction step, thus O(tD^2). Note that t ≤ min(2N,D)-2log D+1, with equality when all the vectors ν_i are linearly independent; so the total cost of the classical algorithm used to find U is at most O(D^2(min(2N,D)-2log D+1)). It should be noted that the search process can be fully parallelized, using all cores on an available machine, and given the nature of this search, the expected runtime could be much less.

§ TRADING OFF TOFFOLIS WITH QUBITS IN THE SOS ALGORITHM

As explained in the main text, trading off Toffolis with qubits can be done by using an alternative version of QROM. This variant <cit.> has a parameter λ that allows for trading off qubits with Toffolis. For a QROM loading L many data points |x_i⟩, indexed by i = 1,…,L and of size c, the trade-off λ ∈ [1,L] can be applied to change the Toffoli complexity from O(L) to O(L/λ + λc) while increasing the uninitialized (so-called dirty) qubit cost to O(λc). Notice that the volume cost stays as O(Lc), although technically, as we traded gates with dirty qubits, this volume is not a clean volume, so it is an overall improvement. To keep our discussion focused on the novelties and following the convention in previous resource estimations <cit.>, we will select λ = √(L/c) in our applications. This strikes a balance in the trade off, using `equally' many (O(√(Lc))) Toffolis as dirty qubits. The first expensive QROM is employed when outputting the system register in <ref>. Using the variant with Toffoli cost 2D/λ + 8(2N)λ with λ = √(2D/16N), this QROM Toffoli cost can be lowered to 2√(32ND) while also using √(32ND) dirty qubits. We also could have chosen to use QROM to flip the register |i⟩ using |b_i⟩ in ∑_i=1^D α_i|i⟩|ν_i⟩_v|b_i⟩_b, where we have denoted the registers by subscripts. We need to employ the inverse O^† of the operator O|b_i⟩_b|0⟩^⊗log D = |b_i⟩_b|i⟩. The naive implementation of O would read 2log D-1 qubits of b_i, therefore its optimized Toffoli cost would scale as O(√(D^2 ·log D)) = O(D√(log D)). However, we know that we only have D ∼ 2^log D many b_i's and we would like to exploit this fact, achieving a Toffoli cost that is sublinear in D.
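Before turning to that construction, a quick numerical sanity check of the balanced trade-off quoted above may be useful; this is only a sketch, and the sizes below are illustrative placeholders rather than values taken from any system studied in this work:

import math

D, N = 1000, 50                           # hypothetical register sizes, for illustration only
lam = math.sqrt(2 * D / (16 * N))         # the balancing choice lambda = sqrt(2D/16N) quoted above
cost = 2 * D / lam + 8 * (2 * N) * lam    # Toffoli cost 2D/lambda + 8(2N)lambda
print(cost, 2 * math.sqrt(32 * N * D))    # both evaluate to ~2530, i.e. the balanced value 2*sqrt(32*N*D)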
First we note that we have knowledge of the value of b_i as we computed them classically. Let λ= ⌈√(D)⌉. Let us now order b_l_1 < ⋯ < b_l_D. We compute:|b_j⟩|0⟩^⊗⌈log D/2⌉→|b_j⟩|f(j)⟩where f(j) is the unique index such that b_l_λ (f(j)-1) < b_j < b_l_λ f(j), where for λ f(j)>D, we set i_λ f(j) = D. This computation requires comparing b_j to⌈ D/λ⌉ many other b_i's. This means a Toffoli cost of (2log D-1+2log D-1) (the comparator cost of two 2log D-1 bits integers) multiplied by the ⌈D/λ⌉ comparisons that we have to make. More precisely, starting from b_qλ for q=1, we compare b_j with b_qλ stored in an auxiliary register, and if b_j > b_qλ, we store |1⟩ in another auxiliary register and otherwise |0⟩. Then we add the value of that register to another register holding |q⟩ which will eventually become |f(j)⟩.This summation itself costs ⌈log D/2⌉+1 Toffolis as the size of the register holding q is ⌈log D/2⌉ (the qubit cost is effectively zero, as any ancilla used is immediately liberated without Toffolis). As this summation must be performed for all q ∈{1, …,⌈D/λ⌉}, we have an additional ⌈log D/2⌉·⌈D/λ⌉ to account for. In total, the cost for this computation is (4log D-2+⌈log D/2⌉)⌈D/λ⌉∼9log D√(D)/2, and we note that its (later) uncomputation can be done with Clifford gates. After this, our state becomes ∑_i=1^D α_i|i⟩|ν_i⟩_v|b_i⟩_b|f(i)⟩Next, we read the register f(i) using a Select QROM with cost 2^log D/2 and output the λ-block of b_j's in which we know b_i lives in, i.e. : |b_l_λ (f(j)-1)+1, …, b_l_λ f(j)⟩.∑_i=1^D α_i|i⟩|ν_i⟩_v|b_i⟩_b|f(i)⟩|b_l_λ (f(i)-1)+1, …, b_l_λ f(i)⟩Notice the significant clean qubit cost (2log D-1)λ∼ (2log D-1)√(D). The uncomputation of this step is later done with Clifford gates. CNOT-ing the register b_i into the block yields:∑_i=1^Dα_i|i⟩|ν_i⟩_v|b_i⟩_b|f(i)⟩⊗|b_l_λ (f(i)-1)+1⊕ b_i, …, b_l_λ f(i)⊕ b_i⟩By using λ many (2log D-1)-MCCNOTs, with total Toffoli cost (2log D-1)λ, we compute the index 1≤ g(i) ≤λ for which b_l_λ(f(i)-1)+g(i) = b_i. More precisely, we hold in an auxiliary register the index |k⟩ at which we are implementing the MCCNOT, and using Toffolis, bit by bit for every ⌈log D/2⌉ bits, add that to the register designed for holding g(i) controlled on the result of the (2log D-1)-MCCNOT. This requires an additional ⌈log D/2 ⌉ Toffolis and ensures that only the true k=g(i) is added to the register designed for holding g(i). Therefore, the total Toffoli cost of this step is (2log D+⌈log D/2 ⌉ -1)λ∼5log D√(D)/2. We now have the following state∑_i=1^Dα_i|i⟩|ν_i⟩_v|b_i⟩_b|f(i)⟩⊗|b_l_λ (f(i)-1)+1⊕ b_i, …, b_l_λ f(i)⊕ b_i⟩|g(i)⟩Notice the register |f(i),g(i)⟩ has 2⌈log D/2⌉∼log D qubits, and determines i uniquely. Tracing this back to our original goal of distinguishing the D many ν_i's, this is the ideal we can hope for, as we have computed D many log(D)-bits integers |f(i),g(i)⟩ that are distinct. The rest can be done using the inverse O^† of a QROM which computes O|f(i),g(i)⟩|0⟩^⊗log D= |f(i),g(i)⟩|i⟩Notice that we have access to where b_i is in the list b_l_1<… <b_l_D, and therefore can classically compute the values f(i),g(i). Hence the QROM has the required classical lookup data. This QROM has Toffolicost 2√(16log(D)2^log D) < 2√(32log(D)D) with dirty qubit cost <√(32log(D)D). Note that these dirty qubits are already available using the λ-block which has (2log D-1)√(D) qubits.Resource estimation. Overall, the clean qubit cost increases to (2log D-1)√(D), and an additional dirty qubit cost √(32ND). 
The clean qubit cost may be improved if one can find a way to compute g(i) without outputting the entire λ-block. While the dirty qubit cost can be avoided if we do not apply QROM in <ref>. In practice, the dirty qubit cost, though significantly larger than the clean (5log D) qubit cost in our previous method, may be actually available if a future computation (such as qubitization) to simulate the evolution of the Hamiltonian requires that much qubit. The Toffoli cost changes to min(2√(32ND),D) + 9log D√(D)/2+5log D√(D)/2+2^log D/2+2√(32log(D)D)∼min(2√(32ND),D) + (7log D+2√(32log D))√(D). In case D is chosen from the minimum, the dirty qubit cost √(32ND) is lifted.Is the algorithm optimal? Our algorithm is general in the sense that it simply assumes a given set of amplitudes {α_i} and bitstrings {ν_i}, with no assumption on the nature of either of the two sets. Furthermore, a dimensionality argument can show that the compression in <ref> of ν_i to 2log D -1 bits is very likely to be tight, and we conjecture that the volume cost can not be asymptotically lower than Õ(D), and more strongly, O((log D)^2D). We remark that this conjecture is for creating the superposition ∑_i=1^D α_i |ν_i⟩, without any junk register, i.e. ∑_i=1^D α_i |ν_i⟩|junk_i⟩ is not acceptable. The only `approximation' allowed is in the amplitudes, and the distance to the state should be on the order of chemical accuracy. Both These restrictions are necessary as this preparation problem concerns the system register, and not, say, a PREP state in a qubitization protocol derived from an LCU, which can include a junk register, and hence why preparation methods such as coherent alias sampling <cit.> can be employed in that instance.One wonders if the approach in <cit.> in iteratively generating the superposition can be combined with ours. This would involve ordering the states ν_i and compressing ν_1, …, ν_l for each 1≤ l ≤ D. Assuming that for each l, a compression of the bitstrings ν_1, …, ν_l to a length k_l is possible (and we know k_l ≤ 2⌈log l⌉ -1), then the Toffoli cost of generating the superposition is ∑_l=1^D (k_l-1). This is upper bounded by ∑_i=1^⌈log D⌉ -2 (2i-1)(2^i-1) +(2⌈log D⌉ -1)(D-2^⌈log D ⌉ -1) and can be seen to be smaller than our own cost (2⌈log D ⌉+3)D.However, this method also involves a rotation at each step for 1≤ l≤ D, and even assuming access to a gradient state, the overall cost of these rotations is (c-3)D Toffolis, where c is the required bit precision. Note that we also have to take into account the accuracy c for our rotations in generating the superposition in <ref>, however the associated cost is simply (c-3)log D, and according to Lemma E.1 in <cit.>, c must satisfy c > log ( log(D) π / ϵ), where ϵ is chemical accuracy. Crucially, c is double logarithmic in terms of D, because there are only log D rotations performed in generating <ref>.This is not the case when using the approach in <cit.>, and for the overall error to be under chemical accuracy, we need c > log(Dπ/ϵ), therefore making c logarithmic in D. Still, one could see the total cost ∑_i=1^⌈log D⌉ -2 (2i-1)(2^i-1) +(2⌈log D⌉ -1)(D-2^⌈log D ⌉ -1) + (log(Dπ/ϵ)-3)D to be competitive within a constant factor of two with (2⌈log D ⌉+3)D, but clearly, it does not lead to a significant constant factor cost reduction. Nevertheless, this is a worst case complexity analysis, and if k_l's are significantly smaller, one might see benefits of this combination of the two methods. 
This could potentially be the case for a particular range of D and bitstrings ν_i's which have some common structure, such as being an excitation or two away from a reference state, and we leave that for future works.§ DETAILS OF RESOURCE ESTIMATION FOR MPS IMPLEMENTATIONIn this appendix we discuss the details of the MPS cost estimation presented in <ref>. In particular we focus on the ancilla qubit and Toffoli costs of implementing G[j] defined in <ref>.We represent the G operations as:G[j] = ∑_α_j-1( |u_α_j-1⟩⟨α_j-1,0|) + …,with|u_α_j-1⟩ = ∑_α_j,n_j A^n_j_j;α_j-1α_j|α_j, n_j⟩.In the states |α_j, n_j⟩ the first and second arguments show the ancillae and system qudit indices respectively.Note that the form of <ref> follows from the defintion of G in <ref>. Each unitary is synthesized using a series of Householder reflections with the addition of a single ancilla qubit as follows:|0⟩⟨1|⊗ G[j] + |1⟩⟨0|⊗ G[j]^† = ∏_α_j-1=1^χ_j-1 (1-2 P_α_j-1)where a total of χ_j-1 reflections are used, whose projectors are defined through an auxiliary state |w_α_j-1⟩:P_α_j-1 = |w_α_j-1⟩⟨w_α_j-1|,with|w_α_j-1⟩ = |1⟩⊗|α_j-1,0⟩ - |0⟩⊗|u_α_j-1⟩= W_α_j-1|0⟩⊗|0,0⟩.On the second row, we have defined the operator W_α_j-1 that prepares |w_α_j-1⟩, also in the state |0⟩⊗|0,0⟩ the 0's correspond to the reflection ancilla, MPS ancillae and the system qudit respectively.With the operator W_α_j-1, each of the reflections in <ref> can be written as:1-2P_α_j-1 = W_α_j-1 [ 1-2 |0,0,0⟩⟨0,0,0|]W_α_j-1^†,and thus the cost of implementing each reflection is twice the cost of W_α_j-1 plus the cost of the simple reflection [ 1-2 |0,0,0⟩⟨0,0,0|]. As a result, we need to also evaluate the cost of W_α_j-1, which we synthesize as follows: we first define an operator that prepares the state |u_α_j-1⟩:|u_α_j-1⟩ = V_α_j-1|0,0⟩.It can be seen that W_α_j-1 as below can serve to satisfy the definition of |w_α_j-1⟩ in <ref>:W_α_j-1 = (C̅V_α_j-1) ( C_α_j-1) ( (ZH) ⊗𝕀),where ZH (a Pauli Z and a Hadamard) acts on the reflection ancilla, C_α_j-1 is a product of CNOTs controlled on the reflection ancilla to prepare the state 1/√(2)[|0,0,0⟩- |1,α_j,0⟩]. With C̅V_α_j-1 controlled negatively on the reflection ancilla, one can check that |w_α_j-1⟩ is prepared up to a phase.We are interested in the Toffoli cost of implementation and list all sources of Toffoli cost below: * Simple reflections [ 1-2 |0,0,0⟩⟨0,0,0|]: a number of χ_j-1 of them is required for G[j].* Operations C̅V_α_j-1: these are required for creating W_α_j-1 and are the only source of non-Clifford gates in their synthesis. A total of 2 χ_j-1 of such operators is required for G[j]. First we note that the reflection [ 1-2 |0⟩^⊗ν⟨0|^⊗ν] can essentially be thought of as a multi-controlled Z operation, and thus can be implemented using a circuit such as the one shown in figure 4.10 of <cit.>. However, the second half of the circuit, i.e. the uncomputing part can be done without Toffolis and in fact using measurements and Clifford gates as shown in figure 3 of Ref. <cit.>. This makes the total number of Toffoli gates and ancillae equal to ν-1. Next, we discuss how C̅V_α_j-1 can be implemented and estimate the required resources. For this we first discuss the implementation of V_α_j-1. V_α_j-1 prepares the state:|u_α_j-1⟩ = V_α_j-1|0,0⟩,which is defined as:|u_α_j-1⟩ = ∑_α_j,n_j A^n_j_j;α_j-1α_j|α_j, n_j⟩.First, we take the above subspace of interest to consist of ν qubits. The preparation of a generic state as |u_α_j-1⟩ can be done using the methods discussed in Ref. 
<cit.>; the state is carved qubit by qubit in ν steps: in each step a single-qubit rotation on one qubit, controlled on all the previous entries, is performed, and with consecutive application of this procedure all the correct probabilities for bitstrings are reproduced at the end; one last multicontrolled single-qubit rotation is required to recover the complex phases corresponding to the components of the state in question (see page 3 of <cit.> for details). Thus a total of ν+1 such single-qubit rotations are required for reproducing the state. The rotation will be performed with the method given in <cit.>, where access to a phase gradient state 2^-b/2∑_k=0^2^b-1 e^-2π i k / 2^b|k⟩ is assumed. Here b=log(1/δ_r) is the number of digits in the binary representation of the rotation angle and thus δ_r is the error in rotation. The Toffoli cost of each single-qubit rotation is given by b+O(1) <cit.>; note that this also means we need log(1/δ_r) additional qubits to store the phase of each single-qubit rotation.

Considering first the Select variant of implementation in <cit.>, we now discuss the cost of control operations for the above single-qubit rotations. Since multicontrolling over a sequence of 0,1,2,…,ν qubits is required to store the respective rotation angles, we will respectively have a sequence of Toffoli costs of 2^0-1, 2^1-1, 2^2-1, …, 2^ν-1 according to <cit.> (see e.g. figure 7 therein). Note that we are interested in implementing C̅V_α_j-1, and <cit.> also considers a controlled operation for the above cost. After each single-qubit rotation, the qubits storing the rotation angle should be uncomputed and this adds a multiplicative factor of 2. As a result, for the Select variant we have a total Toffoli cost equal to 2^ν+2 + ν b, where we have dropped an additive -ν term in the sum as it is subdominant. In our particular case of interest, i.e. the synthesis of G[j], we have a number of reflections shown in <ref>, equal to χ_j-1. On the other hand, the Hilbert space over which each of the reflections acts is χ_j d and, as a result, N in the above treatments should be 2^ν = χ_j d. This means that in the Select variant, the total Toffoli cost reads: χ_j-1[ 8χ_j d + b log(χ_j d) + log(χ_j d) ].

Next, turning to the other variant, SelSwapDirty of Ref. <cit.>, which is capable of reducing the Toffoli cost if dirty qubits are available: we saw above that a number b of qubits is required for storing the rotation angles; however, with the addition of λ b dirty qubits, we can use this variant of the algorithm. A total of ν+b extra clean qubits are also required (excluding the ancillae required for performing single-qubit rotations, like the phase gradient state); note that this is the same number as for the Select variant. Moreover, for SelSwapDirty, one also needs to perform swaps, which add to the total Toffoli cost. It is straightforward to see that the Toffoli gate cost in this case reads: 2·2^ν+2/λ + 4·2 λν b + ν b. The first term corresponds to multi-qubit controls, the second term to swaps and the third term to single-qubit rotations. The factors 2 and 4 in the first and the second terms appear because Select and Swap need to be done twice and four times in SelSwapDirty (see figure 1d of <cit.>). The factor of 2 in the second term comes from uncomputing the rotation angles. As is discussed in <cit.>, it is best for the Toffoli gate count to have λ = O(√(2^ν)), but we will keep it unspecified for the rest of the discussion.
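As a small numerical sketch (not part of the derivation itself), the two per-unitary cost expressions above can be tabulated directly; ν, b and λ below are free parameters of the sketch, and the λ minimising the SelSwapDirty expression indeed scales as O(√(2^ν)) up to a factor 1/√(ν b):

import math

def select_cost(nu, b):
    # Select variant: 2^(nu+2) + nu*b Toffolis (the subdominant additive -nu term is dropped, as above)
    return 2**(nu + 2) + nu * b

def selswapdirty_cost(nu, b, lam):
    # SelSwapDirty variant: 2*2^(nu+2)/lambda + 4*2*lambda*nu*b + nu*b Toffolis
    return 2 * 2**(nu + 2) / lam + 4 * 2 * lam * nu * b + nu * b

nu, b = 12, 20                                   # illustrative values only: 2^nu = chi_j*d, b rotation-angle bits
lam_min = math.sqrt(2**(nu + 3) / (8 * nu * b))  # minimiser of the SelSwapDirty expression
print(select_cost(nu, b), selswapdirty_cost(nu, b, lam_min), lam_min)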
Gathering all the above costs together for synthesizing G[j] in <ref>, we see that the Toffoli cost using the SelSwapDirty variant reads: χ_j-1[ 8χ_j d/λ + 8 λ b log(χ_j d) + b log(χ_j d) + log(χ_j d) ]. In total, assuming a number N of qudits in the physical system, the total cost will be the sum of the above; asymptotically, and using a single bond dimension χ for all bonds, the dominant Toffoli cost can be written as O(N χ^3/2).

§ EDGEWORTH SERIES TERMS

In general, the Edgeworth series terms can be written as <cit.>: p_E(x) = e^-x^2/2/√(2π)[ 1 + ∑_s=1^∞∑_{k_m} He_s+2r(x) ∏_m=1^s 1/k_m!( κ_m+2/(m+2)!)^k_m], where He_n are the (probabilists') Hermite polynomials, the summation over {k_m} in the above denotes summation over all non-negative integer solutions of the Diophantine equation k_1 + 2k_2 + … + sk_s = s, and r is the sum of these integers for each solution: r = ∑ k_m. The explicit forms for a few of the orders of the Edgeworth expansion can be found in <ref>.

§ KERNEL DENSITY APPROXIMATION

Here, we give a quick overview of the kernel density approximation method. Supposing we have access to a finite number of samples drawn from a distribution function, the goal is to approximate the distribution function. To this end, a broadening kernel is placed at the position of each of the outcomes and a normalized sum approximates the underlying distribution: p̂(x) = 1/Mh∑_i=1^M K((x-X_i)/h), where K is a kernel (e.g. Gaussian, Lorentzian, etc.) with mean 0 and variance 1, X_i, i=1,…,M are the outcomes of sampling and h is the broadening factor. The analysis of the error in reconstructing the above QPE-kernel energy distribution with kernel density estimation follows a standard approach <cit.>. First, the error is quantified by the mean integrated square error (MISE): MISE = 𝔼( ∫ dx (p̂(x) - p(x))^2 ), where p̂(x) is the approximated distribution for the underlying distribution p(x). When a sample of size M is used, it is well known that with an appropriate choice of h, i.e. h_opt ∼ 1/M^1/5, the error shows the behavior MISE ∼ 1/M^4/5.

§ DETAILS OF THE QUANTUM EIGENVALUE TRANSFORMATION OF UNITARY MATRICES METHOD

The method consists of a quantum signal processing circuit <cit.> that implements a unitary matrix that block encodes a polynomial function f(H) = P(cos(H/2)), where H is the Hamiltonian of interest and P is an even polynomial of degree d. A schematic of the quantum circuit is shown in <ref>. The circuit works by implementing U=e^-iH and its Hermitian conjugate controlled on a single ancilla, a total number of d times. The parameters φ_0,φ_1,…,φ_d/2 are determined based on the polynomial of interest. Upon measuring the ancilla qubit at the end and obtaining the outcome 0, the implementation has been successful; the probability of success is given by ‖P(cos(H/2))|ψ⟩‖^2. The polynomial that needs to be implemented for our energy filtering task should be a symmetric function that retains low energies and filters high energies. We take the spectrum of the Hamiltonian to lie within the interval [-π+η, -η]; if necessary this can be done by adding a constant to and/or rescaling the Hamiltonian before performing QETU.
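For instance, assuming known spectral bounds E_min ≤ H ≤ E_max (an illustration; any rigorous bounds will do), the affine map H → (π-2η)/(E_max-E_min) (H - E_min) - π + η sends E_min to -π+η and E_max to -η, placing the spectrum in the required interval.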
Note that this is contrary to the original setting of Ref. <cit.> (where the spectrum is contained in [η, π-η]) and to the coarse QPE mentioned above; the reason for this change is better performance. We need a polynomial P which, when expressed as P(cos(H/2)), can filter high energies; it is straightforward to see that using the following combination of error functions, which we will try to imitate using the polynomial P, it is possible to filter high energies: ξ_k,μ(x) = 1/2[ erf( -k(x-μ) ) + erf( k(x+μ) ) ], with 0<k and 0<μ<1 determining the steepness and position of the transitions, i.e. the position of energy filtering in the function. We use the prescription in Appendix A of Ref. <cit.> to reconstruct the error function erf(kx) in terms of Chebyshev polynomials as follows: p_erf,k,n = 2k e^-k^2/2/√(π)( I_0(k^2/2) x + ∑_j=1^(n-1)/2 (-1)^j I_j(k^2/2)[ T_2j+1(x)/(2j+1) - T_2j-1(x)/(2j-1)]), where T_j is the degree j Chebyshev polynomial and I_j is the modified Bessel function of the first kind. Note that p_erf,k,n is an odd polynomial of degree n; it is the degree n that controls the error in approximating erf(kx), and thus ensures that low energies are retained and high energies filtered, and therefore it should be chosen large enough (see below). An example of constructing polynomials like this is shown in <ref>. Applying a successful round of QETU filtering to a state |ψ⟩ = ∑_E φ_E |E⟩, we end up with the following unnormalized state: ∑_E φ_E P(cos(E/2)) |E⟩|0⟩. This shows that, supposing we want to keep energies below E_l and filter energies above E_u, we can choose a filtering function in <ref> (to be approximated by P) with μ = (cos(E_u/2)+cos(E_l/2))/2 and 1/k = ζ (cos(E_u/2)-cos(E_l/2))/2. The factor ζ is added so that it is possible to control the intensity of filtering while keeping the degree of the polynomial and the cost down. The degree of the polynomial that needs to be used for this task will have a scaling O(Γ^-1 log ϵ^-1), where ϵ is the error in the polynomial approximation and Γ is the energy scale over which the transition in the error functions in <ref> happens, and thus should scale as 1/k. Apart from the above asymptotic scaling, in practice we choose n by examining how good an approximation is achieved for degree n.

§ SIMPLIFIED NUMERICAL EXAMPLE FOR THE QUANTUM REFINING STEP

Here, as a simple concrete model, we consider a Gaussian energy distribution for our initial state. This Gaussian distribution can be characterized by a mean value E̅ and a width σ_E: A(E) = 1/(√(2π)σ_E) e^-(E-E̅)^2/(2σ_E^2). Even though the energy distribution of an initial state might not actually be close to Gaussian in general, we expect at least some variational states to show qualitatively similar behavior. We work with the following concrete example of a Gaussian distribution: E̅ = 0.06, σ_E = 0.02. We would like to estimate the resources required to obtain a result close to 0 using QPE. With the above choice of parameters for the energy distribution, the accumulated weight below 0 is p_<(0) = 0.0013. This means that we need roughly 1/p_<(0) measurements to obtain a value around 0. We can perform quantum refining to decrease the number of times the most expensive quantum energy estimation routine is performed. We take this most precise routine to be a QPE with k=10 digits for this example; however, we tolerate an error of 2^-8, discarding the last two digits in any QPE outcome. The number of digits k furthermore determines the total evolution time required as T ∼ 2^k.
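These numbers are easy to reproduce; the following minimal sketch (assuming only the Gaussian model defined above) evaluates the weight below zero with the Gaussian cumulative distribution function:

import math

E_mean, sigma = 0.06, 0.02   # the Gaussian model parameters used in this example
# weight of A(E) below E = 0, via the Gaussian CDF
p_below = 0.5 * (1.0 + math.erf((0.0 - E_mean) / (sigma * math.sqrt(2.0))))
print(p_below)               # ~0.0013, as quoted above
print(round(1.0 / p_below))  # ~740 repetitions of the precise QPE, on average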
We characterize the cost of different operations by the number of queries they require to the unitary e^-iH, which for a k-digit QPE measurement becomes 2^k. This means that each round of the ultimate QPE measurement brings in a cost of 2^10. We first consider coarse QPE for energy filtering in this setting. Concretely, we do a coarse QPE measurement with 4 digits and only keep the results that show outcome 0 in the measured phase register; in this way we can filter out some part of the weight, as shown in <ref> (left). This outcome happens with a probability of W_k=4 = 0.10. In this new energy distribution the total weight below zero now reads p'_QPE,<(0) = 0.012. This means that, after such a measurement, roughly ten times fewer rounds of the precision QPE will be required compared to the initial state. One can repeat this procedure with a coarse QPE with 5 digits, postselecting on the outcome 0 again; the resulting weight distribution can be seen in <ref> (left). The probability of such an outcome (given the previous outcome of 0 with 4 digits) is now W_k=4;5 = 0.13 (this means that the probability of obtaining 0 in the 4-digit measurement and then also 0 in the 5-digit measurement is W_k=4 W_k=4;5 = 0.013). Remarkably, with this measurement the total weight below zero becomes p”_QPE,<(0) = 0.083; this results in close to two orders of magnitude decrease in the number of precision QPE measurements required for obtaining outcomes close to 0. This is achieved for a cost of 2^4+2^5, which is an insignificant overhead compared with the cost of the most precise QPE measurement.

For QETU, we shift the energies so that low energies are located close to -π, as discussed in <ref>. We use a degree 200 polynomial to approximate a step function as shown in <ref>. The normalized distribution after QETU has been performed moves to the left and thus some of the higher energies are filtered. The probability of success in this case is W_QETU = 0.21 and the total weight below 0 after the procedure can also be calculated as p'_QETU,< = 0.0056. This means decreasing the number of repetitions roughly by a factor of 4. As the polynomial that is used is of order 200, the number of required queries to e^-iH is also 200. We see that both the coarse QPE and QETU refining methods can be helpful for a cost that is an insignificant fraction of the ultimate QPE cost, but coarse QPE acts considerably better for a lower cost. As creating steep polynomials like the one used here is generally a hard task, we believe this result should hold generically, even though we tested it here for a simple model.

§.§.§ Mitigating the leakage

Another thing which can be studied in this simple model is the probability of leakage before and after the refining is performed. We only consider coarse QPE for this. Before any of the measurements are performed, the total probability of leakage is equal to p_leak = 0.00097, which is close to p_<(0) = 0.0013; this can be problematic, as leakage contributes outcomes below the actual energy levels of the system. Upon performing the 4-digit and 5-digit QPE measurements discussed above, the probability of leakage becomes p'_leak = 0.0019 and p”_leak = 0.0036, respectively. These two values, when compared with p'_QPE,<(0) = 0.012 and p”_QPE,<(0) = 0.083, show that the probability of leakage has decreased substantially relative to the probability of obtaining results of interest, so that its occurrence has become comparatively improbable; thus quantum refining has also suppressed the impact of leakage.
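The coarse-QPE numbers above can be reproduced with a short numerical sketch; it assumes the convention that the energy E directly plays the role of the measured phase (so a k-digit QPE has grid spacing 2^-k) and uses the QPE kernel written out in the appendix below:

import math

E_mean, sigma = 0.06, 0.02
def gauss(E):
    return math.exp(-(E - E_mean)**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

def p_outcome0(E, k):
    # probability of coarse k-digit QPE outcome 0 for true energy E (phase convention assumed above)
    y = (2**k) * E
    den = (2**(2 * k)) * math.sin(math.pi * y / 2**k)**2
    return 1.0 if den == 0.0 else math.sin(math.pi * y)**2 / den

dE, grid = 1e-4, range(-2000, 4001)       # integrate E over [-0.2, 0.4]
W   = sum(gauss(i * dE) * p_outcome0(i * dE, 4) for i in grid) * dE
low = sum(gauss(i * dE) * p_outcome0(i * dE, 4) for i in grid if i < 0) * dE
print(W, low / W)   # roughly 0.10 and 0.01, in line with W_k=4 and p'_QPE,<(0) above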
§ ERROR ANALYSIS OF THE QPE LEAKAGE APPROXIMATE FORM

In this appendix we analyze <ref> and in particular how the approximation in <ref> can be performed. Starting from p_leak(E_n) = ∑_x_j<x_upper 1/2^2k (sin^2(πδ_n)/sin^2(π/2^k [x_n+δ_n - x_j])), we take the lower bound in the summation over x_j to be -2^(k-1), and we are using the periodicity of the QPE results. A lower bound and an upper bound for the above sum can be found by using the following integral form: I(x_0) = sin^2(πδ_n)/2^2k ∫_-2^(k-1)^x_0 dx/sin^2(π/2^k (x_n+δ_n-x)) = sin^2(πδ_n)/2^2k [ 2^k/π cot(π/2^k (x_n+δ_n-x)) ]|^x_0_-2^(k-1). It is easy to check that I(x_upper-1) ≤ p_leak(E_n) ≤ I(x_upper). This readily results in <ref>, and the error can also be shown to have the form O(max[2^-2k, (x_n-x_upper)^-2]) by evaluating I(x_upper) - I(x_upper-1). | http://arxiv.org/abs/2310.18410v1 | {
"authors": [
"Stepan Fomichev",
"Kasra Hejazi",
"Modjtaba Shokrian Zini",
"Matthew Kiser",
"Joana Fraxanet Morales",
"Pablo Antonio Moreno Casares",
"Alain Delgado",
"Joonsuk Huh",
"Arne-Christian Voigt",
"Jonathan E. Mueller",
"Juan Miguel Arrazola"
],
"categories": [
"quant-ph",
"cond-mat.str-el",
"physics.chem-ph"
],
"primary_category": "quant-ph",
"published": "20231027180059",
"title": "Initial state preparation for quantum chemistry on quantum computers"
} |
[email protected] Department of Physics and State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871, China
[Corresponding author: ][email protected] School of Physics, Beijing Institute of Technology, 100081 Beijing, China
[email protected] Department of Physics and State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871, China; Center for High Energy Physics, Peking University, 100871 Beijing, China; Collaborative Innovation Center of Quantum Matter, Beijing 100871, China
[email protected] Institut für Theoretische Physik, Universität Heidelberg, Philosophenweg 16, 69120 Heidelberg, Germany; ExtreMe Matter Institute EMMI, GSI, Planckstr. 1, 64291 Darmstadt, Germany

We study the QCD equation of state and other thermodynamic observables including the isentropic trajectories and the speed of sound. These observables are of eminent importance for the understanding of experimental results in heavy-ion collisions and also provide a QCD input for studies of the timeline of heavy-ion collisions with hydrodynamical simulations. They can be derived from the quark propagator, whose gap equation is solved within a minimal approximation to the Dyson-Schwinger equations of QCD at finite temperature and density. This minimal approximation aims at a combination of computational efficiency and simplification of the truncation scheme while maintaining quantitative precision. This minimal DSE scheme is confronted and benchmarked with results for correlation functions and observables from first-principles lattice QCD at vanishing density and from quantitative functional approaches at finite density.

QCD equation of state and thermodynamic observables from computationally minimal Dyson-Schwinger Equations
Jan M. Pawlowski

§ INTRODUCTION

The thermodynamic properties of strong interaction matter are of both experimental and theoretical interest. The phase structure of strongly interacting matter is explored in currently running and planned heavy-ion-collision facilities such as the BNL Relativistic Heavy Ion Collider (RHIC), the GSI Facility for Antiproton and Ion Research (FAIR), the JINR Nuclotron-based Ion Collider facility (NICA) and the High Intensity heavy ion Accelerator Facility (HIAF). Its thermodynamic properties in the phase structure are governed by the QCD equation of state (EoS), i.e. thermodynamic functions such as pressure, entropy density, energy density, etc., at finite temperature T and quark chemical potential μ_q <cit.>. Specifically, for hydrodynamic simulations of heavy-ion collisions, the QCD EoS is a crucial input, as are further transport coefficients, see e.g. <cit.>. Moreover, at large densities and small temperatures, the QCD EoS is required for explaining the physics of compact stars such as neutron stars, see e.g. <cit.>. Accordingly, obtaining the EoS and other thermodynamic observables from first principles QCD is of utmost importance for the physics phenomena discussed above. At finite chemical potential, and in particular for μ_B/T ≳ 3, these results can only be obtained with functional QCD approaches such as Dyson-Schwinger equations (DSE) and the functional renormalisation group (fRG) approach, as lattice simulations at finite chemical potential to date suffer from the sign problem.
Investigations of the phase structure of QCD with functional QCD approaches have made significant progress over the past decade, see in particular <cit.>, and the reviews <cit.> (DSE) and <cit.> (fRG). In turn, at vanishing μ_B, first principles QCD computations on the lattice provide benchmark results for the chiral phase transition temperature, thermodynamic observables and fluctuations of conserved charges in QCD, see e.g. <cit.>, which also can be used for extrapolations to finite chemical potential <cit.>. By now the results for the chiral phase structure from functional approaches are converging quantitatively at finite density with the increasing order of the truncations used. Moreover, these up-to-date results meet the lattice benchmark results at vanishing (and low) chemical potential, see <cit.>. The convergent results include an estimate for an onset regime of new physics, potentially a CEP, at about (T, μ_B) ∼ (110, 600) MeV. This location lies beyond the quantitative convergence regime μ_B/T ≲ 4 of the current best approximations, and hence it is only an estimate and not a fully quantitative prediction. Still, this is exciting news, and furthermore there is an ongoing quest for even more elaborate truncations that aim for full apparent convergence. However, the present approximations already allow for quantitative computations in the regime μ_B/T ≲ 4 and for estimates in the regime μ_B/T ≳ 4. This opens the path towards a comprehensive analysis of the equation of state, further thermodynamic observables, fluctuations of conserved charges as well as timelike observables such as transport coefficients within functional approaches. In the present work we contribute twofold to this endeavour:

(i) We want to make quantitative functional QCD computations accessible to a wider audience beyond the technical experts. To that end we set up a minimal computational scheme for DSE computations: such a scheme aims at the technically most simple approximation at finite temperature and density that still reproduces the phase structure results of the state-of-the-art approximation scheme in <cit.> and hence allows a relatively simple access to many observables beyond the phase structure itself.

(ii) We compute the equation of state and other thermodynamic observables in a wide range of T and μ_B within this scheme. This allows us to study further thermodynamic observables such as the isentropic trajectories and the speed of sound, highly relevant for hydrodynamic simulations at finite density.

This work is organised as follows: In <Ref>, we present the framework of the minimal scheme and its agreement with other studies in vacuum. In <Ref>, we apply the framework in the plane of temperature and chemical potential and obtain the chiral phase transition. Then in <Ref>, we present the results for the EoS in the (μ, T) plane and also the isentropic trajectories. In <Ref>, we summarise the main results and give a further discussion and outlook.

§ THE MINIMAL DSE SCHEME

In this section we develop the minimal truncation scheme for the DSE approach at finite temperature and density, that is minimal for quantitative and semi-quantitative results with minimal computational effort (miniDSE). Key to this approach is the quantitative solution of the quark gap equation or quark DSE for the full quark propagator S(p), [S(p)]^-1 = [S_0(p)]^-1 + Σ(p), where S_0 is the classical propagator, S_0(p) = 1/(i p̸ + m), where m is the matrix of current quark masses with entries m_f for all flavours f=1,...,N_f.
Σ is the renormalised self energy, which satisfies the DSE Σ(p) ≃ 4/3 g_s ∫_q γ_μ S(q) Γ_ν(q,p) D_μν(k), where we have dropped the renormalisation details. The diagrammatic depiction of eq:QuarkDSE is provided in <Ref> and the momentum arguments in the quark-gluon vertex are the incoming quark and antiquark momenta. <Ref> is computed within the MOM^2 scheme developed in <cit.>, also used implicitly in fRG computations. We refer to <cit.> for a detailed analysis of this RG-scheme in the vacuum. In eq:QuarkDSE2, the gluon momentum k of the full gluon propagator D_μν is given by k = q - p, and the classical quark propagator S_0(p) in eq:Sf0 is flavour-diagonal. The full quark-gluon vertex Γ_ν is also taken flavour-diagonal, and hence eq:QuarkDSE2 constitutes equations for the self-energies Σ^f for a given flavour f, which only depend on the classical and full quark propagators S^f_0, S^f, or rather Σ^f, of the same flavour f and on Γ_ν^f. Hence, the quark gap equation is flavour-diagonal; however, the gluon propagator depends on all flavours.

In the current work we restrict ourselves to 2+1 flavour QCD with f=u,d,s. In the vacuum, the full quark propagator is parameterised with a flavour-diagonal Dirac dressing A and a scalar dressing B, to wit, S^-1(p) = i p̸ A(p) + B(p). The vacuum gluon propagator is transverse in the Landau gauge used in the current work and has the transverse dressing Z, D_μν(k) = G_A(k) Π^⊥_μν(k), Π^⊥_μν(k) = δ_μν - k_μ k_ν/k^2, with the transverse projection operator Π^⊥ and the scalar propagator part G_A(k) = Z(k)/k^2. The vacuum gluon propagator eq:gluon is not computed in the present work, as by now there are very accurate and consistent results for the 2+1 flavour gluon propagator from lattice QCD simulations and functional computations <cit.>. Therefore we use the parametrised formula for the 2+1 flavour gluon propagator put forward in <cit.>. The last ingredient is the quark-gluon vertex. In the vacuum it can be built from eight transverse tensor structures {T^(i)} with i=1,..,8 and 4 longitudinal ones, see e.g. <cit.>. For the development of our minimal truncation scheme at finite density and temperature we can build on many functional results obtained for the quark-gluon vertex in the vacuum, see e.g. <cit.>.

§.§ Minimal Scheme for functional approaches

Here we put forward a minimal scheme in functional approaches (miniDSE or miniFRG) that allows for quantitative results with a small systematic error. It builds on previous developments in <cit.> and rests on two pillars:

(i) Minimal fluctuations: it is an advantageous property of functional approaches such as the DSE and the fRG that QCD correlation functions such as continuum-extrapolated lattice results or quantitative functional QCD results can be implemented straightforwardly. Moreover, functional loop equations of QCD for given external parameters such as temperature T, baryon chemical potential μ_B, and number of quark flavours N_f can be expanded about QCD for different external parameters, for more details see <cit.>. This minimises the amount of quantum, thermal and density fluctuations carried by the functional equations themselves. The benefits of such a procedure are twofold: Firstly, assuming a negligible or small systematic error of the input, it minimises the systematic error, as the latter only concerns the fluctuations carried by the functional equations. This allows us to reduce the intricacy of the approximation within the DSE or fRG considerably without a significant loss of the quantitative nature of the result.
Secondly, it minimises the need of renormalising the functional equations. In the DSE the latter is highly non-trivial within non-perturbative approximations, while the gain is the fRG is the qualitative reduction of UV-relevant running with positive powers of the cutoff scale.We exemplify the procedure within the quark gap equation, Σ_v = Σ_v' +ΔΣ , with the difference of the self energies ΔΣ_v,v'= Σ_v - Σ_v' , and v collects the external parameters, e.g. v=(T,μ_B,N_f,m_f). Equating <ref> with the difference of the DSEs constitutes a closed gap equation for ΔΣ with the input Σ_v' and the quark-gluon vertices Γ_μv and Γ_μ;_v'. Evidently, the closer v is to v', the less non-trivial physics is implemented by the loop itself. Moreover, for (N_f,m_f)=(N_f',m_f') the difference DSE is finite and does not require renormalisation. (ii) Minimal correlation functions: Complete n-point correlation functions Γ^(n)(p_1,...,p_n) carry a rapidly increasing number of tensor structures. Their respective scalar dressing functions, which depend on the momenta p_1,...,p_n-1, the remaining momentum p_n is fixed by momentum conservation. However, only few of these dressings have a sizable impact within the system of functional equations and higher order vertices are typically suppressed due to space-time and momentum locality of the vertices in gauge-fixed QCD, for more details see <cit.>. In our example of the difference gap equation for eq:ExpandS a respective evaluation concerns only the quark-gluon vertex Γ_μ or rather its difference Γ_μ;v = Γ_μ;v' +ΔΓ_μ;v,v' , ΔΓ_μ;v,v'= Γ_μ;v - Γ_μ;v' , which is the main external ingredient in the gap equation. In a first application of the minimal scheme or miniDSE we will construct a reduced minimal truncation of the quark-gluon vertex with only two tensor structures in <Ref>. In general such a construction uses thespace-time and momentum locality of the vertices as well as benchmark results within full functional computations and lattice simulations.In summary, the above minimal scheme allows us to obtain quantitatively reliable results for observables with a significant reduction of the numerical costs and a sizable improvement of the stability of the convergence of the numerics. In combinations this can lead to a reduction of the computation time by orders of magnitude. Moreover, some of these reduced truncations in the miniDSE are easily accessible technically also for non-experts. §.§ Quark-gluon vertex in the miniDSE schemeThe quark-gluon vertex in the vacuum has a complete basis of twelve tensor structures, and its transverse part can be expanded in eight transverse projections of these tensors.At finite temperature and density these transverse projections all come with a thermal split.The following suggestion for a simplified four-quark vertex in the miniDSE scheme builds on results of the in-detail analysis of theimportance ordering of the vertices in <cit.> in the vacuum. Moreover we also work in the information from DSE results at finite temperature and density obtained in the precursor of the present minimal scheme in <cit.>, and its comparison with the full computation in <cit.>. 
This combined analysis showed that five of the eight tensor structures are completely irrelevant and we only have to consider the transverse projections of the remaining three, T_μ^(1)(q,p) =-γ_μ , 𝒯_μ^(4)(q,p) =-σ_μν k^ν,σ_μν = /2[γ_μ,γ_ν] ,𝒯_μ^(7)(q,p) =i/3{σ_αβγ_μ + σ_βμγ_α + σ_μαγ_β} l^α k^β , each coming with a momentum dependent dressing functionλ^(i)(q,-p) with the incoming quark and antiquark momenta q and -prespectively, and the gluon momentum k_μ and the weighted sum of the quark and antiquark momenta l_μ, k=q-p ,l=1/2( p+q) . Then, the miniDSE quark-gluon vertex takes the form Γ_μ(q,p) = ∑_1,4,7 T_μ^(i)(q,p) λ^(i)(q,p) . The terms in eq:miniDSEquarkgluon have the following relevance ordering <cit.>: the by far dominant component of the vertex is that with the classical (chiral) tensor structure, T_1 λ_1, and the dressing is constrained by the Slavnov-Taylor identities (STIs). This is followed by the chiral symmetry breaking part T_4 λ_4. The smallest contribution originates in the second chirally symmetric part T_7 λ_7. The Dirac structures of quark-gluon vertexare adopted from<cit.>, except 𝒯_7 which has less overlap with the other components and avoids kinematic singularities due to its symmetric form, see <cit.>.Then, the fully quantitative miniDSE scheme would utilize the splits eq:ExpandS and eq:DeltaGammamu with v'=(N_f',m_f',T',μ_B') = (N_f,m_f,0,0) or even with T'=T as well as the quantitative data from <cit.> or finite temperature results. Moreover, at finite temperature and density the dressings λ_1,4,7 with and without thermal split would be approximated by combinations of the dressings of the quark propagator as done in <cit.>. The latter step further reduces the numerical costs significantly. The quantitative nature of this approximation has already been confirmed in <cit.>. This concludes our discussion of the quantitative miniDSE scheme for applications to the phase diagram of QCD.In the present work we will further simplify the scheme by approximating the vertex dressings also at T=0 with combinations of the propagator dressings. Moreover, we shall drop the least important part T_7 λ_7, even though it accounts for an about20% decrease of the mass function. We accommodate for this decrease of the mass function by decreasing the coupling constant with roughly 3% compared with the full QCD coupling in <cit.>. We emphasize that this is based on a self-consistency check of the quantitative nature of the procedure, checked with the full results also at finite temperature and chemical potentials relevant for the chiral phase structure and thermodynamic observables studied here.In summary this leads us to a computationally minimal scheme only in terms of the quark dressings with the quark gluon vertex Γ_μ(q,p) = T^(1)_μ(q,p)λ^(1)(q,p)+ 𝒯^(4)_μ(q,p) λ^(4)(q,p) , where the dressing of the classical tensor structure is constrained by the STIs for the quark-gluon vertex. We shall use λ^(1)(q,p) = g_sF(k^2) Σ_A(q,p) , with the ghost dressing function F(k^2)=k^2 G_c(k), where G_c(k) δ^ab is the ghost propagator. The other factor Σ is the sum of the quark dressings A defined in eq:Sp, Σ_A(q,p) = A(p)+A(q)/2 . Several studies suggest that λ^(4) is proportional to differences of the scalar quark dressing function <cit.>, Δ_B(q,p) = B(p)-B(q)/p^2-q^2 . The scalar dressing of the quark propagator carries the RG-scaling of the quark and anti-quark leg of the quark-gluon vertex. 
The RG-scaling of any vertex dressing λ^i also has to accommodate the RG-scaling of the gluon leg ∝ 1/Z^1/2(k), with the gluon dressing defined in eq:gluon. It has been shown in <cit.>, by comparison to the full vertex computed in <cit.> (DSE) and <cit.> (fRG) in the MOM^2 scheme, that this factor indeed not only carries the appropriate RG-scaling but also the correct momentum dependence of λ_4 in the vacuum. Hence, in the vacuum we choose λ^(4)(q,p) = g_s/Z^1/2(k) Δ_B(q,p) , with Z(k) the gluon dressing function introduced in eq:gluon, see <cit.>. <Ref> introduces a kinematic singularity into the vertex that is absent in the direct computation. Note, however, that in our computations the vertex is always attached to a gluon propagator with momentum k and the factor 1/Z^1/2(k) is cancelled. Moreover, the loop integration introduces a further k^2 at finite temperature and k^3 in the vacuum, which leads to a very efficient suppression of this regime. This is checked with a comparison to the results from computations with full vertices, which allows for a systematic error estimate. As a part of this evaluation we first argue that the kinematic singularity can be avoided by the following upgrade of the present procedure: instead of using eq:gluon and its finite temperature and chemical potential analogues for the definition of the gluon wave function, one can use a parameterisation for the scalar propagator part G_A(k) in eq:gluon that takes into account the mass gap of QCD explicitly. In the vacuum this reads G_A(k) = 1/[Z_A,scr(k) (k^2 + m_scr^2)] , where m_scr is the spatial screening mass. This mass is defined via the exponential decay of the large-distance limit of the spatial Fourier transform of the gluon propagator, G̃_A(k_0,r) = ∫ d^3 k/(2π)^3 G_A(k_0,k) e^i k·x , with the spatial momentum k and the spatial position or distance x and r=|x|. The large-distance limit r→∞ can be parametrised with lim_r→∞ G̃_A(k_0=0,r) → R(r) e^-m_scr r , where R(r) is a polynomial or at most a rational function of r. The spatial screening mass m_scr is the inverse screening length and is defined as the strength of the exponential decay. A similar definition holds true for the temporal screening mass, which is obtained from the asymptotic time-dependence of the Schwinger function. In the vacuum these two masses agree due to Lorentz invariance, and from the functional and lattice 2+1 flavour gluon data in <cit.> we get m_scr ≈ 850 MeV . The overall error of eq:mscr and the respective ones for N_f=2 flavour QCD and Yang-Mills theory is about 20 MeV, which can be reduced significantly by producing dedicated data for the task of determining the screening mass. <Ref> can be considered as a physics definition of the gluon mass gap, and can be compared with m_scr ≈ 830 MeV for the two-flavour data from <cit.> that underlie the 2+1 flavour computations in <cit.>, and m_scr ≈ 760 MeV in Yang-Mills theory from the gluon data in <cit.>, compatible with the T→0 extrapolation of the finite temperature screening mass computed in <cit.>. A schematic numerical extraction of m_scr along these lines is sketched below.
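The screening mass can be extracted numerically by Fourier-transforming the zero-frequency propagator to position space and fitting the exponential decay of its large-distance tail. The sketch below assumes a simple massive model propagator solely to make the script self-contained; in practice G_A(k_0=0, k) would be the numerical gluon propagator data, and the input mass of 0.85 GeV is an assumption used only to check the extraction.

import numpy as np
from scipy.integrate import quad

def G_A(k):
    """Model scalar gluon propagator at vanishing frequency (illustrative only)."""
    m2 = 0.85 ** 2                      # assumed mass gap in GeV^2
    return 1.0 / (k ** 2 + m2)

def G_tilde(r):
    """Spatial Fourier transform of a radially symmetric propagator:
    G~(r) = 1/(2 pi^2 r) * Int_0^inf dk k sin(k r) G_A(k)."""
    val, _ = quad(lambda k: k * G_A(k), 0.0, np.inf, weight="sin", wvar=r)
    return val / (2.0 * np.pi ** 2 * r)

# Fit the exponential decay of r * G~(r); dividing out the leading 1/r
# prefactor (part of the rational function R(r)) isolates the slope -m_scr.
r = np.linspace(1.5, 4.0, 30)           # distances in GeV^-1
y = np.log([ri * G_tilde(ri) for ri in r])
slope, _ = np.polyfit(r, y, 1)
print("extracted m_scr ~", -slope, "GeV   (input model value: 0.85 GeV)")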
The physical nature of this definition is corroborated by the quantitative agreement of the screening mass with the Debye screening mass in thermal perturbation theory for temperatures T ≳ 2 T_c, where T_c is the critical temperature of the confinement-deconfinement phase transition.The spatial and temporal screening masses differ at finite temperature and chemical potential, and a more quantitative vertex construction at finite temperature and chemical potential takes into account both screening masses. For a respective discussion and computation in finite temperature Yang-Mills theory see <cit.>, and the notation in eq:GAZA is close to that used there and in further fRG works such as <cit.> and the DSE works <cit.>.We emphasise that the spatial and temporal screening masses reflect the physical gluon mass gap in QCD even in the present gauge-fixed settings and constitute a relevant physics input in phenomenological considerations in the phase structure of QCD. This is already evident for its importance for the confinement-deconfinement phase transition in Yang-Mills theory, see<cit.>. Importantly, with the substitution Z^1/2(k) →1/Z_A,scr^1/2(k) , in eq:lambda_4 as well as other dressings, kinematic singularities are avoided and the respective dressings reflect the decoupling of the dynamics below the (gluon) mass gap of QCD. This as well as their phenomenological importance will be considered elsewhere.For the present purposes we find that the simplified vertex construction eq:lambda_4 serves well and the kinematic singularity has no impact on the physics considered here. We proceed with the systematic error estimate with a comparison to results with the full vertex. First we note, that the negligible impact of this kinematic singularity has been discussed in detail in <cit.>, based on the explicit vacuum results in <cit.>. Importantly, this analysishas also been extended to finite T and μ_B in <cit.>. Below we briefly discuss these different checks:In <cit.>, it has been shown, that eq:lambda_4 describes the full vertex in the vacuum very well down to momenta k≈ 1 GeV, using also vertex data from <cit.>. This has later been corroborated with vertex data from the quantitative DSE vacuum computation in <cit.>. In turn, for k≲ 500 MeV, the vertex eq:lambda_4 shows a kinematic singularity which is not present in the full vertex that monotonously rises and approaches a constant for k=0. Thekinematic singularity in eq:lambda_4 is in a regime which is suppressed by the mass gap of QCD, and hence it has no impact. This has been checked and confirmed in several ways: Its reliability for computations in the phase structure has been benchmarked with the good agreement of the results with that from <cit.> up to baryon chemical potentials μ_B ≲ 600 MeV, and this has been corroborated by the phase structure results with the full quark-gluon vertex in the DSE computation in <cit.>. In the present work we check the irrelevance of the kinematic singularity by freezing Z(k) in eq:lambda_4 for small momenta with a freezing scalein the regime k_freeze≈ 0.4 - 1.7 GeV , which is roughly 1/2 m_scr≲ k_freeze≲ 2 m_scr with the 2+1 flavour screening mass in eq:mscr. This emulates the effect of eq:Z-Zscr as Z_A,scr indeed freezes for small momenta. Moreover, it covers efficiently the difference to the full vertex: while the kinematic singularity leads to an enhancement of the vertex, the freezing leads to a lowering of the vertex in comparison to the full vertex. 
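A minimal sketch of this freezing prescription is given below: the gluon dressing entering 1/Z^1/2(k) in eq:lambda_4 is simply held constant below k_freeze, which emulates the flattening of Z_A,scr and removes the kinematic enhancement. The model form of Z(k) and the value of k_freeze are illustrative assumptions; the actual dressing is the numerical solution discussed in the text.

import numpy as np

def Z_gluon(k2):
    """Illustrative infrared-suppressed gluon dressing (not the numerical data)."""
    m2 = 0.85 ** 2
    return k2 / (k2 + m2)

def Z_frozen(k2, k_freeze=0.8):
    """Freeze the dressing below k_freeze (in GeV) to avoid the kinematic
    singularity of 1/Z^(1/2)(k) in the vertex dressing lambda^(4)."""
    k2_eff = np.maximum(k2, k_freeze ** 2)
    return Z_gluon(k2_eff)

def lambda4_prefactor(k2, k_freeze=0.8):
    return 1.0 / np.sqrt(Z_frozen(k2, k_freeze))

k = np.array([0.05, 0.2, 0.4, 0.8, 1.6, 3.2])   # momenta in GeV
print("1/Z^(1/2), frozen:", lambda4_prefactor(k ** 2))
print("1/Z^(1/2), naive :", 1.0 / np.sqrt(Z_gluon(k ** 2)))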
The results do not change by more than 3%, which is well within the systematic error estimate of our computation and hence supports our procedure. This concludes the discussion of the simplified version of the miniDSE scheme in the quark sector used in the present paper. The price to pay for these last simplification steps eq:Minimalqgvertex,eq:lambda_4 already at T,μ_B=0 is a loss of quantitative reliability for baryon chemical potentials with μ_B/T≳ 3. This loss of quantitative reliability manifests itself, e.g., in an increasing deviation of the chiral crossover line from that in full QCD in <Ref> for these chemical potentials, including an approximately 10% reduction of the temperature and chemical potential values of the location of the critical end point relative to the estimated regime in fully quantitative functional QCD. The vertex dressings in eq:Minimalqgvertex,eq:lambda_4 are also based on dressings from the ghost-gluon sector. The ghost propagator is almost independent of temperature and density, and we use the vacuum fRG data in two-flavour QCD <cit.>. In turn, the gluon dressings are computed from a difference DSE analogously to that of the quark discussed around eq:DeltaS. The respective difference DSEs have been discussed in detail in <cit.>. This procedure accommodates further intricacies that arise from the need for a numerically optimal treatment of differences of frequency integrals and Matsubara sums, and hence we defer its description to the next section, <Ref>, where the setup at finite T,μ_B is described, see eq:glqusplit,eq:DeltaPiAqu,eq:PiAqu,eq:glDSE2. With this input and simplification of the miniDSE scheme, the quark propagators are computed in the isospin-symmetric approximation with m_u=m_d=m_l, with the coupling parameters α_s=g_s^2/(4π), m_l, m_s being fixed at an RG-scale μ=15 GeV. This is significantly lower than the perturbative RG-scale μ=40 GeV used in <cit.> for precision computations in the vacuum, but suffices for the present accuracy goals. We use α_s= 0.235 , m_l = 3.0 MeV , m_s = 27 m_l = 81 MeV , at μ=15 GeV, which is compatible with the coupling parameters in <cit.> within the same RG scheme, the MOM^2-scheme. As a benchmark result we show the light quark mass function M_l(p^2) = B_l(p^2)/A_l(p^2) in <Ref> in comparison to the quantitative fRG-DSE results in <cit.> and the lattice results from <cit.>. From this quark propagator we compute the reduced quark condensate Δ_l,s = ⟨q̅q⟩_l - (m_l/m_s)⟨q̅q⟩_s . For the comparison with the lattice and functional results for the reduced condensate we have to map our present results to the respective RG-scales. This has been described in detail in <cit.>, where the precision results for the quark condensates have been compared to the lattice results at the lattice RG-scale μ_lat = 2 GeV. Hence we simply map the present result to the lattice RG-scale and compare it with the lattice and functional results. We are led to Δ_l,s(μ_lat) = -(277.6 MeV)^3 . In <cit.> the light chiral condensate has been computed instead of the reduced condensate; for it we find Δ_l(μ_lat) = -(274.5 MeV)^3 , in comparison to the functional precision result Δ_l(μ_lat) = (272.0 MeV)^3 in <cit.>. Another and even more direct benchmark is provided with the light quark condensate in the chiral limit: it relates to the quark mass function <cit.>, and we obtain Δ_l,χ(μ_lat) = -(273.9(8) MeV)^3 , in comparison with the functional precision result in the vacuum, (269.3(7) MeV)^3 <cit.>, and the lattice result Δ_l = (272(5) MeV)^3 (FLAG <cit.>).
Moreover, using the Pagels-Stokar formula <cit.> (PS) we obtain an estimate for the pion decay constant of f_π = 94.7 MeV. Given the expected 10% accuracy of the PS estimate relative to the full result, this agrees well with f_π ≈ 93 MeV. In addition, the Gell-Mann–Oakes–Renner relation yields a pion mass of m_π = 140.4 MeV. In summary, despite its relative simplicity, the quark propagator and the derived observables in the vacuum obtained from the present approximation show an already impressive agreement with the precision functional results and those from lattice simulations. Finally, we note that the truncation scheme is free from any phenomenological parameter, which will also be the case when applied at finite temperature and chemical potential in the following Sections. § QCD PHASE STRUCTURE In this section we discuss the remaining details of the miniDSE scheme at finite temperature and density. This concerns in particular the thermal split and the treatment of the gluon sector. Then the phase structure of QCD is computed and confronted with results from lattice simulations and functional approaches at vanishing density, and with functional approaches at finite density. The latter results offer a quantitative benchmark up to densities μ_B/T≲ 4. §.§ miniDSE scheme at finite T and μ_B The full quark and gluon propagators S(p) and D_μν(p) at finite temperature and density are parametrised as follows, S^-1(p̃) = iγ_4 ω̃_n C(p̃) + iγ·p A(p̃) + B(p̃) , p^2 D_μν(p) = Π_μν^E(p) Z_E(p) + Π_μν^M(p) Z_M(p) , with ω̃_n = ω_n + iμ_q , p̃=p+iμ_q , p=(p,ω_n) , and the quark Matsubara frequencies ω_n = (2n+1) π T and the gluon Matsubara frequencies ω_n = 2n π T. <Ref> also depends on the electric and magnetic gluon projection operators Π_μν^E,M, Π_μν^M(p) = (1-δ_μ 4)(1-δ_ν 4) ( δ_μν - p_μ p_ν/p^2) , Π_μν^E(p) = δ_μν - p_μ p_ν/p^2 - Π_μν^M . The quark DSE at finite (T,μ_B) is of the form eq:QuarkDSE with a spatial momentum integral and a thermal sum over Matsubara frequencies, ∫_q = T∑_n∈ℤ ∫ d^3 q/(2π)^3 . The DSE of the gluon propagator at finite T and quark chemical potentials (μ_u,μ_d,μ_s) is computed along the lines suggested in <cit.>. A diagrammatic depiction of the gluon DSE is provided in <Ref>. We first use the difference DSE for the gluon propagator as in eq:DeltaS,eq:DeltaGammamu in an expansion about the gluon propagator in the vacuum, D_v^-1(k) = D_v'^-1(k) + ΔΠ_A;v,v'(k) , with v=(N_f,m_f,T,μ_B) , v'=(N_f,m_f,0,0) . In eq:DeltaGluon, Π_A,μν is the vacuum polarisation of the gluon that comprises all quantum, thermal and density fluctuations in terms of the diagrams in the DSE. In a further step we split the diagrams in the thermal and density difference DSE into the gluonic part ΔΠ_A^gl(k), whose classical three- or four-gluon vertex comes from the Yang-Mills sector, and the quark part ΔΠ_A^qu(k), which is proportional to the classical quark-gluon vertex. The latter part is one-loop exact while the former one also contains two-loop diagrams, D_v^-1(k) = D_v'^-1(k) + ΔΠ_A;v,v'^gl(k) + ΔΠ_A;v,v'^qu(k) . The quark loop contribution ΔΠ_A^qu in eq:glqusplit reads ΔΠ_A^qu(k) = ∑_f[ Π_A;v^f(k) - Π_A;v'^f(k) ], with Π_A^f(k) = -1/2 Z_1F^f g_s^2 ∫_q tr[ γ_μ S^f(p̃) Γ_ν^f(k;p̃,q̃) S^f(q̃) ] for each flavour. The trace in eq:PiAqu sums over Dirac indices and gauge group indices in the fundamental representation. The contribution is flavour diagonal, as already assumed in the quark gap equation. The pure gauge theory part can be evaluated analogously; a schematic implementation of the thermal sum-integral ∫_q is sketched below.
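For orientation, the Matsubara sum-integral ∫_q = T ∑_n ∫ d³q/(2π)³ entering the gap equation and the quark loop can be organised as in the following sketch. The fermionic frequencies carry the shift ω̃_n = ω_n + iμ_q; the kernel used here is a trivial placeholder and merely illustrates the bookkeeping (frequency cutoff, radial grid), not the actual integrand of eq:PiAqu.

import numpy as np

def sum_integral(kernel, T, mu_q, n_max=64, q_max=10.0, n_q=400):
    """Evaluate T * sum_n Int d^3q/(2pi)^3 kernel(|q|, omega_n + i mu_q)
    for a rotationally invariant kernel (radial integral only)."""
    q = np.linspace(1e-3, q_max, n_q)                    # |q| grid in GeV
    measure = q ** 2 / (2.0 * np.pi ** 2)                # d^3q/(2pi)^3 -> q^2 dq / (2 pi^2)
    total = 0.0
    for n in range(-n_max, n_max):
        omega_t = (2 * n + 1) * np.pi * T + 1j * mu_q    # shifted fermionic frequency
        total += np.trapz(measure * kernel(q, omega_t), q)
    return T * total

# Placeholder kernel: a free-quark-like denominator, purely illustrative
# (the grid and frequency cutoffs regularise it).
def kernel(q, omega_t):
    M = 0.35                                             # constituent-like mass in GeV
    return (1.0 / (omega_t ** 2 + q ** 2 + M ** 2)).real

print(sum_integral(kernel, T=0.155, mu_q=0.2, n_max=64))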
While the difference does not require renormalisation, the numerical implementation of this property requires some care and for this purpose a numerically stable scheme has been set up and successfully used in <cit.>. In the present work we resort to a further simplifying approximation suggested in <cit.> and expand the gauge loop contribution in eq:glqusplit about the lattice data of the Yang-Mills gluon propagator. We obtain ΔΠ_A^gl(k) =[ D_T^YM(k) ]^-1 - [ D_T=0^YM(k) ]^-1 , where we have used that YM theory is only sensitive to the temperature and not the rest of the parameters in v and v'. The systematic error of this approximation for physical quark masses has been evaluated in detail in <cit.> and does not add significantly to the total systematic error for the T,μ_B regime considered here. In a forthcoming work this approximation is also resolved with the numerically stable scheme from <cit.>.Finally, we have to consider thermal and density splits in the vertices and especially in the quark-gluon vertex. The miniDSE approximation of the latter with two tensor structures has beenintroduced in <Ref> in the vacuum, see eq:Minimalqgvertex. At finite T,μ_B wehave to take into account the thermal or density split of tensor structures as the heat bath or medium singles out a rest frame. To begin with, the classical tensor structure in eq:Minimalqgvertex is split as γ_μΣ_A(q,p) →γ_μ[ δ_μ 4 Σ_C(q̃,p̃) + (1-δ_μ 4)Σ_A(q̃,p̃) ] , where q̃,p̃ contain complex frequencies eq:omega+mu. The vertex part with the second tensor structure T^(4) in eq:Minimalqgvertex is split as follows, T^(4)(q,p)λ^(4)(q,p) →T^(4)(p,q)[Π^E(k)λ^(4)_E(q,p)+Π^M(k) λ^(4)_M(q,p) ], with the miniDSE approximation for the electric and magnetic dressing functions λ_4^E,M(k;q̃,p̃) = g_sZ_E,M^-1/2(k^2)Δ_B (q̃,p̃). This concludes the discussion of the simplified version of the miniDSE scheme used in the present work: we have reduced the task of solving the gap equations and vertex DSEs to that of solving the gap equations, where each approximation step has been benchmarked and controlled by functional results obtained within more sophisticated approximations as well as lattice results. We proceed by solving this set of difference DSEs for the quark and gluon dressings with the coupled quark and gluon DSEs eq:QuarkDSE,eq:DeltaGluon. §.§ Chiral phase structureWe now present results for the chiral phase structure of QCD obtained in the isospin-symmetricapproximation and with a vanishing strange quark chemical potential, (μ_u,μ_d,μ_s) = (1/3μ_B, 1/3μ_B, 0), which give the net-baryon number density n_B = 2/3n_u,d and the vanishing strange quark density n_s = 0. This matches the scenario of heavy-ion collision with a vanishing net strangeness. We define the pseudo-critical temperature of the chiral phase transition T_c(μ_B) the peak temperature of the thermal susceptibility of the reduced condensate Δ_l,s defined in eq:cond, χ^ _T(T,μ_B) = -∂ _T ( Δ_l,s(T,μ_B)/Δ_l,s(0,0)) . Numerical results of χ_T at several chemical potentials are shown in <Ref>. At zero μ_B, we obtain T_c(0) = 156.5 MeVin agreement with results from lattice QCD <cit.> and functional approaches <cit.>.A further benchmark result is provided with the curvature coefficients of the pseudo-critical temperature at μ_B=0. Its Taylor at μ_B=0 is given by T_c(μ_B)/T_c(0) = 1 - κ_2 (μ_B/T_c(0))^2 - κ_4 (μ_B/T_c(0))^4 + ⋯ , and the present simplified version of the miniDSE scheme yields κ_2 = 0.0169(6) . 
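The curvature coefficients are obtained by fitting the computed crossover temperatures T_c(μ_B) to the Taylor expansion above. A minimal sketch of such a fit is given below; the (μ_B, T_c) values are invented placeholders and only illustrate the extraction of κ_2 and κ_4 from a set of data points, not the results of this work.

import numpy as np

# Placeholder pseudo-data (mu_B, T_c) in MeV; illustrative only.
mu_B = np.array([0.0, 100.0, 200.0, 300.0, 400.0])
T_c  = np.array([156.5, 155.4, 152.1, 146.6, 139.0])

T_c0 = T_c[0]
x = (mu_B / T_c0) ** 2                       # expansion variable (mu_B / T_c(0))^2
y = T_c / T_c0

# T_c(mu_B)/T_c(0) = 1 - kappa_2 x - kappa_4 x^2 + ...
coeffs = np.polyfit(x, y - 1.0, 2)           # quadratic fit in x; constant term should be ~0
kappa_4, kappa_2 = -coeffs[0], -coeffs[1]
print(f"kappa_2 = {kappa_2:.4f}, kappa_4 = {kappa_4:.2e}")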
This result for κ_2 is slightly larger than, but compatible with, lattice QCD <cit.> and fRG/fRG-DSE <cit.> predictions of κ_2 ≈ 0.015 (0.0142(2) in <cit.>, 0.0147(5) in <cit.>). On the other hand, we find κ_4 ≈ 5× 10^-4, which is also larger than, but of the same magnitude as, the functional results κ_4 ≈ 3× 10^-4 in quantitative approximations <cit.>. These slight deviations grow larger at finite chemical potential. In <Ref> we depict the obtained phase transition line in comparison to other functional and lattice studies. Our result agrees well with the previous functional QCD results within more sophisticated truncations up to μ_B ≈ 400 MeV or μ_B/T ≈ 3. For μ_B/T ≳ 3 the deviations become sizable, which also manifests itself in the location of the critical end point (CEP) with (T^CEP,μ_B^CEP) = (108.5,567) MeV . This location has to be contrasted with the quantitative estimate (T^CEP,μ_B^CEP) ≈ (100-110, 600-650) MeV from the results in <cit.>. Note that eq:CEPestimate singles out a line and not an area. In short, eq:CEPminiDSE shows a ∼ 10% deviation with respect to the estimate eq:CEPestimate, and this deviation provides a systematic error estimate for the simplified miniDSE scheme used in the present work. In summary, this analysis entails that the simplified miniDSE scheme provides semi-quantitative results for a large range of chemical potentials. Hence, we can use it for the computation of thermodynamic quantities which are directly related to measurements. We close this section with a brief discussion of the twofold origin of the deviations, which are responsible for a successive loss of fully quantitative reliability of the present results for μ_B/T≳ 3. To begin with, we already know from the comparison with the phase structure computation in <cit.> that the use of full vacuum dressings for the quark-gluon vertex corrects the curvature coefficient κ. Moreover, the deviation at larger chemical potential is also caused by the use of Δ_B, eq:DeltaB, in the dressing λ^(4), eq:lambda_4: in comparison to the dressing computed in <cit.>, Δ_B carries a singular momentum dependence. This can be compensated for with the introduction of higher order corrections from the scattering kernel together with the imaginary part of the propagator induced by the chemical potential. An upgrade of the present simplified miniDSE scheme based on two-point dressings is work in progress, and we hope to report on the respective results soon. Another interesting aspect is the negligible contribution of the thermal chemical potential splits. For example, we find that the difference of the chiral crossover temperature for the O(4)-symmetric vertex without split and the vertex with thermal split is less than 1 MeV, and the curvature is barely changed. This result is also corroborated by a DSE computation with full vertices <cit.>, as well as by many fRG tests, see e.g. <cit.>. In conclusion, the split affects mainly the quark and gluon propagators, and the O(4)-symmetric approximation for the quark-gluon vertex gives agreeing results for μ_B/T≳ 3 as discussed above. Note, however, that the explicit results here are obtained within the thermal split. § EQUATION OF STATE OF QCD The miniDSE scheme allows for a numerically cheap complete scan of the EoS and other observables in the phase diagram of QCD. The quark number densities n_q^f are directly obtained from the quark propagators, n_q^f(T,μ_B) ≃ -N_c Z_2^f T∑_n∫ d^3 p/(2π)^3 tr_D[γ_4 S^f(p)] , where we use μ_B= 3 μ_l with μ_u=μ_d=μ_l and μ_s=0.
In the present work we simply use the momentum-dependent propagators in the T-μ_B plane on the right hand side of eq:nq and leave a more detailed analysis to future work: Firstly, it is well-known that eq:nq has to be evaluated in the non-vanishing background ⟨ A_0⟩ that solves the equations of motion, see <cit.>. This is tantamount to implementing the non-trivial expectation value of the Polyakov loop away from unity. Only with such a background the change from quark-gluon degrees of freedom to hadronic ones is described accurately. This is well illustrated with the kurtosis whose asymptotic temperature values is 1/9 in the quark-gluon phase for large temperatures and unity in the hadronic phase for vanishing temperature, capturing the change of the degrees of freedom from asymptotically free quarks to weakly interacting baryons. Without the A_0 background the degrees of freedom in the low temperature phase resemble the quarks and the kurtosis is far smaller than unity,for a detailed discussion see <cit.>. In short, with ⟨ A_0⟩ =0 the qualitative behaviour around the crossover line with its change of the dynamical degrees of freedom is captured, for the quantitative or even semi-quantitative behaviour the A_0-background is required. Respective results and formal developments in functional approaches can be found in <cit.>.Secondly, the density eq:nq requires renormalisation and is subject to a non-trivial normalisation, reflecting its UV degree of divergence. This intricacy worsens at large temperatures but can be resolved by representing the density in terms of a (multiple) chemical potential integration of density fluctuations with a lower or absent UV degree of divergence, e.g. the kurtosis. Indeed, the thermodynamic relation between pressure and quark number density discussed below is precisely of this type as the quark number density has a lower UV degree of divergence.Both issues will be addressed in a forthcoming work and we proceed with the present qualitative approximation. The EoS follows from n_q^f in the (T,μ_B) plane with the thermodynamic relation between the pressure and quark number densities, P(T,μ_B) = P(T,0) + 1/3∫_0^μ_Bdμn_l(T,μ), where n_l=n_q^u+n_q^d. The standard thermodynamic relation eq:prs is of the same structural form as our difference DSE: the integral in eq:prs is simply Δ P(T,μ_B) = P(T,μ) -P(T,0) and follows from the quark propagators. In turn, the pressure at vanishing chemical potential can be determined from the QCD trace anomaly I(T) I(T) = (ϵ-3P)/T^4 , with P(T,0)/T^4 = ∫_0^TdT' (I(T')/T') . For I(T) we use 2+1 flavor QCD lattice data <cit.>. Moreover, the integral over the quark number density expresses the density part of the pressure in terms of a less-divergent operator which stabilises the numerical computation and lowers the systematic error.In summary, with the lattice input for the trace anomaly at μ_B=0 and the relations eq:nq,eq:prs, we can compute the QCD pressure P(T,μ_B), the energy density ϵ and theentropy density s and in the T-μ_B plane, ϵ(T,μ_B) = Ts(T,μ_B) + μ_Bn_B(T,μ_B) - P(T,μ_B) ,s(T,μ_B) = ∂ P(T,μ_B) / ∂ T . The respective numerical results for the pressure P/P_SB and the light quark number density n_u,d/T^3 are shown in <Ref> and provide us with the EoS. Further thermodynamic observables, namely the entropy density s/s_SB, the energy density ϵ/ϵ_SB and pressure to energy density ratio P/ϵ are shown in <Ref>. 
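The construction of the EoS from eq:prs and eq:TA amounts to two one-dimensional integrations plus numerical T-derivatives. The sketch below illustrates this bookkeeping with placeholder functions for the trace anomaly I(T) and the light-quark density n_l(T,μ); in the actual computation these are the lattice input and the miniDSE density, respectively, and the functional forms below are assumptions made only to keep the snippet self-contained.

import numpy as np

def trace_anomaly(T):                 # placeholder I(T) = (eps - 3P)/T^4
    return 4.0 * np.exp(-((T - 0.19) / 0.08) ** 2)

def n_l(T, mu):                       # placeholder light-quark density in GeV^3
    return 0.6 * T ** 2 * mu + 0.3 * mu ** 3 / np.pi ** 2

def P0(T, T_min=0.01, n=400):
    """eq:TA: P(T,0)/T^4 = Int_0^T dT' I(T')/T', multiplied back by T^4."""
    Ts = np.linspace(T_min, T, n)
    return np.trapz(trace_anomaly(Ts) / Ts, Ts) * T ** 4

def pressure(T, mu_B, n=200):
    """eq:prs: P(T,mu_B) = P(T,0) + (1/3) Int_0^{mu_B} dmu n_l(T,mu)."""
    mus = np.linspace(0.0, mu_B, n)
    return P0(T) + np.trapz(n_l(T, mus), mus) / 3.0

def entropy(T, mu_B, dT=1e-3):
    return (pressure(T + dT, mu_B) - pressure(T - dT, mu_B)) / (2.0 * dT)

def energy_density(T, mu_B):
    n_B = n_l(T, mu_B) / 3.0          # net-baryon density for mu_s = 0
    return T * entropy(T, mu_B) + mu_B * n_B - pressure(T, mu_B)

T, mu_B = 0.155, 0.3                  # GeV
print("P =", pressure(T, mu_B), " s =", entropy(T, mu_B), " eps =", energy_density(T, mu_B))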
We have normalised the pressure and energy density with the free Stefan-Boltzmann counterparts in three flavour QCD at zero chemical potential, P_SB = (19/36) π^2 T^4 , s_SB = (19/9) π^2 T^3 , ϵ_SB = (19/12) π^2 T^4 . In the vicinity of the CEP, the entropy s and the energy density ϵ experience rapid changes close to the chiral crossover line T_c(μ_B). This rapid change indicates the increasingly rapid change of the degrees of freedom from hadrons to quarks in the vicinity of the crossover. Moreover, the successively sharper and deeper minimum of P/ϵ is related to the peak of the trace anomaly in eq:TA as well as the minimum of the speed of sound, and leaves a strong imprint on the EoS. The latter allows us to estimate the location of the CEP even relatively far away from it. We have also investigated the isentropic trajectories, i.e. the trajectories satisfying s/n_B=const. in the (T,μ_B) plane, which are related to the cooling of the hot QGP matter produced in heavy-ion collision experiments. The isentropic trajectories calculated from our EoS at these s/n_B values are shown in <Ref>, together with the chiral phase transition line and the CEP. We also compare the obtained phase diagram and the trajectories to the freeze-out data, which are marked with the same labels as in <Ref>. In the vicinity of the phase transition line, our calculated trajectories are in good agreement with those obtained from the state-of-the-art equation of state NEoS in <cit.>. In particular, our trajectories for s/n_B = 420, 144, 51 and 30, values chosen in previous studies for the corresponding collision energies in heavy-ion collision experiments, also precisely meet the freeze-out points at √(s_NN) = 200, 62.4, 19.6 and 11.5 GeV, respectively. At high temperatures, our results deviate from the trajectories from lattice QCD simulations, and we can trace this back to the normalisation intricacy of the quark number density discussed below eq:nq. In turn, below the crossover line the background ⟨ A_0 ⟩ <cit.> has not been incorporated in the present computations of the density or other thermodynamic quantities and has a significant impact. A fully quantitative computation is beyond the scope of the present paper and will be presented elsewhere. In addition to the s/n_B-values obtained from the extrapolation of lattice data at vanishing density, we have also investigated a smaller value, s/n_B = 23, with the present EoS. By comparing the result with the STAR freeze-out points <cit.>, we estimate that s/n_B = 23 corresponds to √(s_NN) ≳ 7.7 GeV. This estimate should be taken with a grain of salt, as the curve is located at the border of (and beyond) the quantitative reliability regime of the present simplified miniDSE scheme, and we have neither tackled the A_0-background nor the normalisation issue. With this caveat we note that this trajectory still does not cross the CEP, and it may require a smaller collision energy for approaching it. Finally, we report results for the speed of sound c_s in the simplified miniDSE scheme. We have computed c_s^2 in the vicinity of the phase transition line. In order to investigate the experimental scenario of adiabatic cooling, the speed of sound is evaluated along the isentropic trajectories, using the following formula <cit.>, c_s^2 = [n_B^2 ∂_T^2 P - 2 s n_B ∂_T∂_μ_B P + s^2 ∂_μ_B^2 P] / {(ϵ+P)[∂_T^2 P ∂_μ_B^2 P - (∂_T∂_μ_B P)^2 ]}. The temperature T is chosen as the control parameter for each trajectory, and the results are shown in <Ref>. A schematic numerical evaluation of eq:cs2 is sketched below.
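Given a pressure function P(T,μ_B), eq:cs2 can be evaluated by finite differences, as in the following sketch. The quadratic placeholder for P is an assumption made only to keep the snippet self-contained and has no physical significance.

import numpy as np

def P(T, mu):                       # placeholder pressure surface (illustrative only)
    return 1.2 * T ** 4 + 0.05 * T ** 2 * mu ** 2 + 0.002 * mu ** 4

def d2(f, x, y, dx, dy):
    """Second partial derivatives of f(x, y) by central finite differences."""
    f_xx = (f(x + dx, y) - 2 * f(x, y) + f(x - dx, y)) / dx ** 2
    f_yy = (f(x, y + dy) - 2 * f(x, y) + f(x, y - dy)) / dy ** 2
    f_xy = (f(x + dx, y + dy) - f(x + dx, y - dy)
            - f(x - dx, y + dy) + f(x - dx, y - dy)) / (4 * dx * dy)
    return f_xx, f_yy, f_xy

def cs2(T, mu_B, h=1e-3):
    """Speed of sound squared along isentropes, eq:cs2."""
    P_TT, P_mm, P_Tm = d2(P, T, mu_B, h, h)
    s   = (P(T + h, mu_B) - P(T - h, mu_B)) / (2 * h)        # s  = dP/dT
    n_B = (P(T, mu_B + h) - P(T, mu_B - h)) / (2 * h)        # n_B = dP/dmu_B
    eps = T * s + mu_B * n_B - P(T, mu_B)
    num = n_B ** 2 * P_TT - 2 * s * n_B * P_Tm + s ** 2 * P_mm
    den = (eps + P(T, mu_B)) * (P_TT * P_mm - P_Tm ** 2)
    return num / den

print("c_s^2 ~", cs2(0.155, 0.3))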
The minimum of c_s^2(T) agrees with the chiral phase transition point for each trajectory. The value of the speed of sound at the minimum does not change too much in the current energy range, as c_s^2 ∼ 0.13, but the minimum becomes shaper as s/n_B decreases.The speed of sound is computed from the second and fourth order T,μ_B-derivatives of QCD pressure, see eq:cs2, including for example the mixed μ_B,T derivative, the thermal susceptibility of the baryon number as well as its derivative. Its minimum may be regarded as a criterion for the crossover temperature of the confinement-deconfinement phase transition. This crossover can also be measured more directly in terms of fluctuations of baryonic charges, see <cit.> for recent functional results. We observe that the crossover temperature is a bit lower as the chiral crossover temperature defined by the peak of the thermal susceptibility of the chiral condensate, eq:ThermalSusceptDelta, even though this difference does not exceed the respective error bars and the widths of these transitions. With increasing μ_B the transition regime gets sharper as the region around the minimum of c_s^2 is getting steeper. Hence, both the chiral and confinement-deconfinement phase transitions get steeper towards the critical end point as expected.Note, that we do not observe critical scaling, for a more detailed analysis see <cit.>. However, it is precisely the smallness of the critical regime, observed by now for both the O(4)-scaling regime in the chiral limit, <cit.> and around the critical end point <cit.>, that allows for a precision estimate of the location of the latter: the extrapolation of suitable non-universal observables towards higher chemical potentials provides a quantitative estimate of the location of the CEP, if the data are sufficiently accurate. Such an endeavour requires a theoretical search for and quantitative computation of optimal observables in the phase structure together with their extraction from high precision experimental data. A respective programme has been advocated and started in <cit.> with the theoretical computation and the comparison to experimental data of fluctuations of observed charges.In the present work we contribute to this programme by comparing the estimates of the location of the critical end point from several thermodynamic functions with the computed location in the present simplified miniDSE scheme, see <Ref>. To that end we consider the thermal width Δ T for both thermal susceptibilities χ_T and ∂ n_B / ∂ T, which is defined as the width of the 90% value of the peak heights of the respective susceptibility. In case ofP/ϵ the width Δ T is defined as the width of 110% value of the minimum. These thermal widths monotonously decrease for larger chemical potential and vanish at the CEP. Hence, an extrapolation of the widths towards zero provides us with an estimate of the location of the CEP. A fully conclusive analysis will be presented elsewhere and will answer the question about the required precision and wealth of the experimental data for such a quantitative estimate in dependence of the distance to the CEP in terms of chemical potential or collision energy √(s).Here we proceed by simply elucidating this task with a limited amount of data points, see <Ref>. We perform cubic polynomial fits for the Δ T data within several μ_B regions and then extrapolate towards larger μ_B. 
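The extrapolation described here can be set up as in the following sketch: the thermal widths Δ T(μ_B) are fitted with a cubic polynomial and the CEP estimate is read off from the zero of the fit. The width data in the snippet are invented placeholders; only the fitting and root-finding logic is of interest.

import numpy as np

# Placeholder width data Delta_T(mu_B) in MeV; illustrative only.
mu_B    = np.array([0.0, 100.0, 200.0, 300.0, 400.0, 450.0])
delta_T = np.array([22.0, 20.5, 17.8, 13.9, 8.7, 5.6])

coeffs = np.polyfit(mu_B, delta_T, 3)          # cubic fit of Delta_T(mu_B)
roots = np.roots(coeffs)                       # zeros of the fitted cubic

# CEP estimate: smallest real root beyond the fitted mu_B range, where the width vanishes.
real = roots[np.abs(roots.imag) < 1e-8].real
candidates = real[real > mu_B.max()]
mu_cep = candidates.min() if candidates.size else np.nan
print("extrapolated mu_B^CEP ~", mu_cep, "MeV")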
For the current Δ T data, adding higher order polynomial terms only changes the extrapolated CEP position by about 5%, and thus a cubic fit is sufficient for convergence. In the present case this originates in the sparseness of the data and not in a lack of precision. We find that with successively larger μ_B included in the fit regime, the estimates for the location of the CEP get closer to its actual location. However, even with the present sparse data one does not have to zoom into the neighbourhood of the CEP. Moreover, the comparison shows that the chiral condensate, or rather its susceptibility, is better suited for such an extrapolation. In summary, it is very suggestive that a global combination of experimental precision data is best suited for such a task. This calls for such data, which can be obtained from a combination of STAR data and in particular future high-precision CBM data, based on the orders of magnitude larger luminosity of the latter. § SUMMARY In the present work we have computed thermodynamic quantities such as the chiral phase structure, the QCD equation of state (EoS), the isentropic trajectories and the speed of sound within first-principles functional QCD. At low densities the results are benchmarked against lattice results, while at larger densities the current approach offers qualitative predictions. The EoS was obtained from integrating the quark number density from vanishing to finite chemical potential, while using lattice results for the trace anomaly at zero chemical potential as an input. Apart from the above-mentioned observables we have also computed the pressure, entropy density and energy density in a wide range of temperature and chemical potential. In particular, we also discussed the implications of our results for the adiabatic speed of sound on the search for novel phases and the location of the critical end point in the strong-interaction matter produced in collider experiments. Our thermodynamic results are obtained within a minimal computational scheme for functional approaches, developed in the present work for quantitative and semi-quantitative computations, see <Ref>. This scheme also builds on previous developments in <cit.>, both in the DSE approach as well as in the fRG approach. Here we have applied its DSE version, the miniDSE scheme, to computations of the quark propagator at finite temperature and density. Additional truncations reduced the regime of quantitative reliability to μ_B/T ≲ 3, where the current results for the phase structure agree very well with those in state-of-the-art quantitative truncations <cit.>. Still, the results in the regime μ_B/T ≳ 3 also provide semi-quantitative and qualitative estimates. For example, the current estimate of the location of the critical end point differs by only approximately 10% from that given in the quantitative studies. This leads us to the suggestion to finally determine its location through a combination of theoretical constraints and predictions for both the phase structure and experimental observables, together with the respective experimental precision measurements. While the current application has been tuned to minimal computational costs and further truncations have been made, aiming at a computation in terms of two-point functions alone, the fully quantitative miniDSE scheme is set up as well. Moreover, the miniDSE scheme can also be readily applied to the low temperature and finite chemical potential regime, i.e. cold dense quark matter and the equation of state of neutron stars.
Furthermore, it provides simple and quantitative access to the exploration of the QCD phase structure in the (T,m_l,m_s) space, the Columbia plot, which is work nearing completion. We hope to report soon on the respective results in the Columbia plot and for cold dense matter, and in particular on precision predictions for experimentally accessible observables in the regime 2 GeV ≲ √(s) ≲ 15 GeV. This regime includes the location of the critical end point or, more generally, the onset regime of new phases: theoretical predictions accompanied by an analysis of the μ_B or √(s)-dependence, and a combination of STAR data and future high-precision CBM data in this regime should allow us to finally pin down the location of the CEP or the onset regime of new phases as well as the associated physics. We thank G. Eichmann, C. S. Fischer, W.-j. Fu, M. Q. Huber, J. Papavassiliou, F. Rennecke, B.-J. Schaefer, N. Wink and Hui-Wen Zheng for discussions. This work is done within the fQCD collaboration <cit.>, and we thank the members of the collaboration for discussions and collaboration on related subjects. This work is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC 2181/1 - 390900948 (the Heidelberg STRUCTURES Excellence Cluster) and the Collaborative Research Centre SFB 1225 - 273811115 (ISOQUANT). YL and YXL are supported by the National Science Foundation of China under Grants No. 12175007 and No. 12247107. FG is supported by the National Science Foundation of China under Grant No. 12305134. | http://arxiv.org/abs/2310.18383v1 | {
"authors": [
"Yi Lu",
"Fei Gao",
"Yu-Xin Liu",
"Jan M. Pawlowski"
],
"categories": [
"hep-ph",
"nucl-ex",
"nucl-th"
],
"primary_category": "hep-ph",
"published": "20231027062329",
"title": "QCD equation of state and thermodynamic observables from computationally minimal Dyson-Schwinger Equations"
} |
Fully Relativistic Entanglement Harvesting Eduardo Martín-Martínez January 14, 2024 ==========================================empty empty Currently, High-Definition (HD) maps are a prerequisite for the stable operation of autonomous vehicles. Such maps contain information about all static road objects for the vehicle to consider during navigation, such as road edges, road lanes, crosswalks, and etc. To generate such an HD map, current approaches need to process pre-recorded environment data obtained from onboard sensors. However, recording such a dataset often requires a lot of time and effort. In addition, every time actual road environments are changed, a new dataset should be recorded to generate a relevant HD map.This paper addresses a novel approach that allows to continuously generate or update the HD map using onboard sensor data. When there is no need to pre-record the dataset, updating the HD map can be run in parallel with the main autonomous vehicle navigation pipeline.The proposed approach utilizes the VectorMapNet framework to generate vector road object instances from a sensor data scan. The PolyMerge technique is aimed to merge new instances into previous ones, mitigating detection errors and, therefore, generating or updating the HD map.The performance of the algorithm was confirmed by comparison with ground truth on the NuScenes dataset. Experimental results showed that the mean error for different levels of environment complexity was comparable to the VectorMapNet single instance error.§ INTRODUCTION §.§ Motivation In recent years, the development of autonomous vehicles has garnered significant attention as a transformative technology poised to revolutionize transportation. Central to the successful deployment of self-driving cars is the availability of highly accurate and detailed maps for safe operation. These maps, often called High-Definition (HD) maps <cit.>, serve as a critical foundation, enabling vehicles to process surrounding environment and to make informed decisions in real-time. While traditional mapping methods have provided a starting point, the complexity and precision required for autonomous driving necessitate the creation of HD maps using automated techniques. Traditional manual map creation has been replaced by Simultaneous Localization and Mapping (SLAM) algorithms, where a human first manually drives the vehicle and records data from onboard sensors. Data, collected by survey cars or crowdsourcing, is sent to the data center for merging into a 3D map. However, building HD maps faces challenges such as costly and time-consuming data collection and annotation, maintenance difficulty, and large data sizes. Maps can become invalid due to environmental changes, such as snow drifts, road crossings, or road network modifications. Recent research, such as Tesla’s Full Self-Driving feature, aims to allow point-to-point navigation without requiring HD map created a priori but it still requires tremendous amounts of training data. Other methods, such as the VectorMapNet <cit.>, creates a map that can be generated continuously as the vehicle drives through the environment. The VectorMapNet algorithm uses a Convolutional Neural Network (CNN) to extract features from the input image and a Recurrent Neural Network (RNN) to generate the vector map. 
The CNN is used to learn the patterns of the input image, while the RNN is used to generate the vector map based on the learned features.The main advantage of the VectorMapNet, which is the focus of this paper, is using polylines as a representation for HD maps.Polylines provide a geometrically simple and efficient way to represent road shapes and connectivity, allowing for compact storage, streamlined processing, and easier analysis of map data. They offer smooth and continuous representations of road geometry, enabling precise modeling of curved roads, roundabouts, and complex intersections, which is crucial for autonomous vehicles. Additionally, polylines offer flexibility and scalability, allowing for adjustments in HD map detail, accommodating road network changes, and scaling to dynamic environments. Their compatibility and seamless integration with existing map data formats in cartography and GIS make them a versatile choice for various mapping applications.However, obtained local vector map instances still need to be merged manually, modifying the polyline points, to obtain the global HD map for the autonomous vehicle to navigate.§.§ Problem statementThis paper addresses the problem of continuous automatic update of HD maps by directly updating the road element polylines, aiming to eliminate the need for manual post-processing. Each time step, autonomous vehicles utilize data from onboard sensors, including LiDAR, cameras, and Global Positioning System (GPS), to generate local HD maps using techniques, such as the VectorMapNet. However, due to imperfections in these techniques, the resulting maps may have intersecting parts that do not align seamlessly with previous maps, as depicted in Fig. <ref> and Fig. <ref>. Consequently, multiple polylines with slight shape variations representing the same road element emerge, necessitating the reduction of these polylines into a single merged polyline that encompasses all relevant features.§.§ Related works Most recent approaches require a pre-recorded dataset to generate an HD map for an autonomous vehicle. K.-W. Chiang et al. <cit.> propose an automated modeling of road networks for HD maps in OpenDRIVE format using point clod data from LiDAR sensors. The algorithm is divided into three phases: lane lines’ extraction from point clouds, modelling lane lines with attributes, and building an OpenDRIVE file. M. Elhousni et al. <cit.> propose a deep learning based method capable of generating labelled HD maps from raw LiDAR and camera data, pre-collected from a test vehicle. Y. Zhou et al. <cit.> propose an approach based on semantic-particle filter to tackle the automatic lane-level mapping problem. It performs semantic segmentation on 2D front-view images from ego vehicles and explores the lane semantics on a birds-eye-view domain with true topographical projection.Multiple studies <cit.>, <cit.>, <cit.> address HD map updates. K. Jo et al.<cit.> propose an approach that incrementally adds new feature layers during driving and optimizes them using GraphSLAM <cit.>. The optimized layer is uploaded to a map cloud, where multiple vehicles' layers are combined through data association algorithms. The map cloud updates the integrated layer using a Recursive Least Square (RLS) algorithm whenever new layers are uploaded <cit.>. C. Kim et al. <cit.> propose a crowd-sourced mapping process of the new feature layer for the HD map. 
Multiple intelligent vehicles are used to acquire new features in the environment to build feature layers for each vehicle using the HD map-based GraphSLAM approach <cit.>. New feature layers are conveyed to a map cloud through a mobile network system. Finally, crowd-sourced new feature layers are integrated into a new feature layer in a map cloud. C. Kim et al. <cit.> propose a crowd-sourcing framework to update point cloud maps from environment changes continuously using LiDAR and vehicle communication. While having an initial point cloud map, each vehicle is localized inside it using a hierarchical SLAM approach. The estimated pose is used to detect the differences between the point cloud map and environments, which are defined as map changes that are eventually merged into the point cloud map. However, none of these studies directly update the final polylines.Another track is updating the final polylines map by merging the resulting polylines with the main map. While several GIS tools, such as ArcGIS <cit.>, QGIS <cit.>, Global Mapper <cit.>, and GRASS GIS <cit.>, offer functions or capabilities for merging polylines, they still require manual modification of individual points to achieve the desired merging. Moreover, these tools do not address the precise problem of handling multiple lines that overlap with each other or share overlapping sections. Their primary focus lies in merging endpoints through trimming or extending polylines to intersection points or joining already connected lines into a single polyline.§.§ ContributionTo overcome disadvantages of existing map updating techniques and polyline joining tools, this paper introduces the PolyMerge, an automated technique for merging HD maps that specifically focuses on merging polylines, see Fig. <ref>. The PolyMerge identifies chains of polylines that require merging in a given map instance, taking into account their corresponding labels representing road element types. The technique extends the primary polylines while modifying their overlapping sections with other secondary polylines within the chain. By automating the merging process, the PolyMerge aims to achieve accurate and efficient HD map merging without the necessity of manual point modifications. § METHODS§.§ Local HD map construction We employed the VectorMapNet to generate the local HD maps due to its utilization of a polyline-based representation rather than a dense collection of semantic pixels. The HD map generation pipeline of the VectorMapNet consists of three essential components as shown in Fig. <ref>:* A BEV feature extractor for mapping sensor data to a canonical BEV representation;* A scene-level map element detector that identifies and classifies all map elements by predicting element key- points and their class labels;* An object-level polyline generator that generates a sequence of polyline vertices for each detected map element. For training and evaluating the VectorMapNet model, we utilized the NuScenes full dataset (v.1.0)<cit.>. The NuScenes dataset is a large-scale public dataset for autonomous driving, featuring annotations of 23 object classes with accurate 3D bounding boxes, object-level attributes, and a significant number of camera images, LiDAR sweeps, RADAR sweeps, and object bounding boxes. In addition, to obtain ground truth data of roads, crosswalks, and etc., the NuScenes map expansion pack with 11 semantic layers (crosswalk, side-walk, traffic lights, stop lines, lanes) was implemented. 
Image data was used for the training and validation of the results. Three types of road elements were considered: road borders, road dividers, and pedestrian crossings. §.§ PolyMerge technique A vector map ℳ is represented by a sparse set of N_m vectorized primitives, specifically polylines 𝒱^poly = {V_1^poly,...,V_N_m^poly} in this context, which serve as representations of the map elements, and their class labels ℒ = { L_1,...,L_N_m}. Each individual polyline V_i^poly = {ν_i,n ∈ ℝ^2 | n = 1,...,N_ν_i} consists of a series of N_ν_i sequentially arranged vertices ν_i,n. Method overview. First, we transform the input map tokens from the bird's-eye view (BEV) representation of the ego frame ℱ_ego to the global world frame ℱ_W. Then, we create a network graph G = (𝒱^poly,E) of similar polylines to be merged, where E ⊆ {{V_x^poly, V_y^poly} | V_x,V_y ∈ 𝒱^poly} is a set of edges connecting the similar polylines. Finally, similar polylines are merged together under the correct label. Correspondingly, the PolyMerge technique employs three parts to produce the merged map, as shown in Fig. <ref>: an ego-to-world frame transformer; a network generator that identifies similar polylines; and a polyline merging tool that combines the similar polylines into one merged polyline. §.§.§ Ego to World transformer We use simple frame transformations to rotate and then translate each polyline's vertices ν_i,n,ego given in the ego frame ℱ_ego to a global frame ℱ_W: ν_i,n,W = q ν_i,n,ego q^-1 + T_ego , where q is the rotation quaternion from ℱ_ego to ℱ_W, qν q^-1 is a Hamilton product <cit.>, and T_ego is the translation vector of the ego frame to the world frame. It has to be noted that a main map ℳ_m has to be defined as the one into which the other N_s secondary maps ℳ_s = {ℳ_s,1,...,ℳ_s,N_s} are merged. Then, a new map ℳ_conc is created by concatenating the transformed maps while adding another label ℒ^m = { L^m_1,...,L^m_N_m} that distinguishes main-map polylines, where L^m_i = 1 if V_i^poly ∈ ℳ_m and L^m_i = 0 otherwise. §.§.§ Network Generator Given the concatenated map ℳ_conc containing all the maps' polylines in the world frame, the goal of the network generator is to determine the subsets of polylines that need to be merged together. The first step involves iterating through all the polylines belonging to the secondary maps ℳ_S, after which each polyline is examined for its proximity to all other polylines in the concatenated map ℳ_conc. To decide if two polylines are close or not, Euclidean distances are calculated between all points in one polyline and their corresponding projections onto the other polyline. If any of the points from both polylines fall within an acceptable distance from the other polyline (below a defined threshold Th_prox), indicating close proximity, the polylines are considered suitable for merging if they possess similar labels. The chosen acceptable distance prevents the merging of road elements separated by a road in between and accounts for a margin of error in both elements. It can be adjusted for various road configurations. Finally, using polylines makes it easier to distinguish map element types even if they overlap. A crucial aspect of this part is determining the projection of a given point onto a polyline. This is accomplished by identifying the nearest line segment of the polyline to the point, which is determined by calculating the minimum distance between the point and its in-line projection. If a direct projection onto the line is not possible, the closest edge serves as the projection, as depicted in Fig. <ref>.
The in-line projection D of point A onto line segment BC, is calculated as following:D = B + max(0,min(1,BA·BC/BC·BC)) * BC These nearest line segment edges are subsequently utilized to position the point within the merged polyline. The algorithm of Network Generator is shown in Alg. <ref>.The function “polyline_merge_check” compares the distance between the polylines' points and their projections to the given threshold as explained, then verifies the labels assigned to the two polylines. If the distance criterion is satisfied and the labels are compatible, the function returns a boolean value of true, indicating that the polylines can be merged. Distance computation details are provided in Section <ref>. §.§.§ Polyline MergingFollowing the computation of the network graph G = (𝒱^poly,E), the PolyMerge proceeds by iterating through the connected graph edges that represent the polylines to be merged. First, it defines the main map polyline V_i,_ℳ, which serves as the base onto which the remaining polylines will be merged. Then, one by one, it merges the remaining polylines onto the base polyline. To merge polyline V_A^poly onto V_B^poly, we deal with V_A^poly point by point. We could narrow the possible scenarios for merging point A ∈ V_A^poly, having its in-line projection point B onto V_B^poly, into the following four scenarios: * First scenario: Point B falls inside a line segment CD⊂ V_B^poly. In this case, a mid point P_m = A+B/2 is inserted onto V_B^poly with an index between points C and D indices.* Second scenario: B equals either point C or point D. In this case, P_m is also calculated but replaces C or D in V_B^poly, depending if it equals C or D.* Third scenario: B equals C and C is an edge point of V_B^poly. In this case, B is appended into V_B^poly before C, as the new starting edge.* Fourth scenario: B equals D and D is an edge point of V_B^poly. In this case, B is appended into V_B^poly after D, as the new ending edge. The four scenarios are explained in Fig. <ref>. V_B^poly is then updated with every iteration. However, specific road elements such as pedestrian crossings, always come in quadrilateral closed shapes, and thus we used a better method for their merge. First, similar quadrilaterals are rasterized onto a grid, and an empty rectangle is created around their union shape. The cells within the rectangle are filled based on the number of quadrilaterals covering them. This rectangle represents the probability of polygon coverage, ranging from 0.0 to 1.0. A modified Gaussian blur function is applied using full convolution to avoid edge distortions. Finally, the average result is extracted by selecting a quantile threshold Th_cov, such as 0.1 for larger coverage or 0.95 for smaller coverage. A quantile threshold of 0.5 represents the median, or an “average” coverage area. Finally, we use a rotated minimum bounding rectangle to represent the pedestrian crossing. Fig. <ref> explains the described merging method and compares results with the case of merging them in the same way as other road elements.§ EXPERIMENTAL RESULTS Different polyline compositions have been tested for merging individually as shown in Fig. <ref>. It shows how the PolyMerge technique effectively captures and averages distances between two polylines, considering the selection of points at the start and end.Demonstration of merging multiple map instances by the Polymerge technique is provided in Fig. <ref>. 
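A compact sketch of the geometric core of the method (the in-line projection, the proximity check against Th_prox, and the midpoint insertion of the first two merge scenarios) is given below. It is a simplified illustration written for this description and not the authors' implementation; edge-point handling (scenarios three and four), label bookkeeping, and the pedestrian-crossing rasterization are omitted for brevity.

import numpy as np

def project_point_to_segment(a, b, c):
    """In-line projection D of point A onto segment BC, clamped to the segment."""
    a, b, c = map(np.asarray, (a, b, c))
    t = np.clip(np.dot(a - b, c - b) / np.dot(c - b, c - b), 0.0, 1.0)
    return b + t * (c - b)

def project_point_to_polyline(p, poly):
    """Closest projection of p onto a polyline, its distance, and the segment index."""
    best, best_d, best_i = None, np.inf, -1
    for i in range(len(poly) - 1):
        proj = project_point_to_segment(p, poly[i], poly[i + 1])
        dist = np.linalg.norm(np.asarray(p) - proj)
        if dist < best_d:
            best, best_d, best_i = proj, dist, i
    return best, best_d, best_i

def polyline_merge_check(poly_a, poly_b, th_prox=1.0):
    """True if any vertex of either polyline lies within th_prox of the other."""
    return any(project_point_to_polyline(p, poly_b)[1] < th_prox for p in poly_a) or \
           any(project_point_to_polyline(p, poly_a)[1] < th_prox for p in poly_b)

def merge_point(p, poly):
    """Scenarios 1-2: insert or replace with the midpoint of p and its projection."""
    proj, _, i = project_point_to_polyline(p, poly)
    mid = 0.5 * (np.asarray(p) + proj)
    out = [np.asarray(v, dtype=float) for v in poly]
    if np.allclose(proj, out[i]) or np.allclose(proj, out[i + 1]):
        j = i if np.allclose(proj, out[i]) else i + 1
        out[j] = mid                       # scenario 2: replace the coinciding vertex
    else:
        out.insert(i + 1, mid)             # scenario 1: insert between the segment ends
    return out

base  = [(0.0, 0.0), (5.0, 0.2), (10.0, 0.0)]   # main-map polyline (illustrative)
other = [(1.0, 0.6), (6.0, 0.5)]                # secondary polyline (illustrative)
if polyline_merge_check(other, base):
    merged = base
    for p in other:
        merged = merge_point(p, merged)
    print(np.round(merged, 2))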
It is shown that polylines effectively encode detailed geometries and direction information of map elements, reflecting real-world structures and explicit directions. The merging algorithm is able to successfully preserve detailed geometry while considering all registered map instances. However, some parts (e.g. road border at lower right corner) were not accurately estimated by the VectorMapNet method, which resulted in chaotic merged polyline.In order to validate the performance of the developed PolyMerge technique, we prepared three experimental scenarios representing data from the NuScenes dataset of different road sections. For each scenario, 15 local HD map instances were obtained by the VectorMapNet (VMN) and then processed by the PolyMerge technique to generate the merged HD map, see Fig. <ref>. Both VMN instances and the merged map were compared with corresponding Ground Truth data using the “polyline_merge_check” method for detecting corresponding polylines from Alg.<ref>. We used the Partial Curve Method (PCM) <cit.> to measure the difference between each pair of corresponding polylines. PCM normalizes vectors and matches the area of a subset between the two curves, allowing identification of the optimal section of the ground truth poly that corresponds to a short estimated poly. Additionally, we calculated the discrete Frechet distance (DF) <cit.>, which measures the similarity between curves while considering the location and ordering of points along the curve. DF represents the length of the shortest leash that allows traversing both curves, similar to a man walking a dog on a leash without backtracking. For the experiment, a proximity threshold Th_prox of 1 m, a quantile threshold Th_cov of 0.5, and the VMN model prediction confidence of 0.8 were applied.Experimental results are provided in Table <ref>, where a comparison of average metric values measured on polylines from 15 VMN local HD map instances with metric values of the corresponding merged HD maps is provided. As shown, the PolyMerge generally preserves mean DF and PCM values with slight variations compared to VMN instances. DF distances show closer alignment to the ground truth for pedestrian crossings and boundaries, reducing maximum error. However, dividers show increased errors due to VMN inaccuracies and unaccounted extra dividers merging with overlapping instances. Higher PCM values suggest a need for additional smoothing step due to increased zigzag movements. To mitigate these issues, improving VMN accuracy is essential. Currently reliant on camera images, future enhancements may include integrating lidar scans for more precise road element detection and expanding class categories beyond pedestrian crossings, dividers, and boundaries. § CONCLUSIONWe introduce the PolyMerge, a novel technique for dynamic HD map updates. In contrast to existing methods, the PolyMerge directly merges polylines of similar road elements using the local vector map generated. By constructing a network graph of similar polylines and projecting them onto each other, an average representation is obtained keeping the offset with ground truth mostly equal to the used map instances with a mean of 1.06 m DF distance for pedestrian crossings, 0.60 m for road dividers, and 1.22 for road boundaries compared to 0.98 m, 0.48, and 1.72 m in the used VMN map instances. Our experiments demonstrate the ease of implementation and effectiveness of the PolyMerge, resulting in a comprehensive map that closely resembles the ground truth. 
rong2020lgsvl G. Rong, B. H. Shin, H. Tabatabaee, Q. Lu, S. Lemke, M. Možeiko, E. Boise, G. Uhm, M. Gerow, S. Mehta et al., “Lgsvl simulator: A high fidelity simulator for autonomous driving,” in 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC). IEEE, 2020, pp. 1–6. liu2022vectormapnet Y. Liu, Y. Wang, Y. Wang, and H. Zhao, “Vectormapnet: End-to-end vectorized hd map learning,” arXiv preprint arXiv:2206.08920, 2022. chiang2022automated K.-W. Chiang, H.-Y. Pai, J.-C. Zeng, M.-L. Tsai, and N. El-Sheimy, “Automated modeling of road networks for high-definition maps in opendrive format using mobile mapping measurements,” Geomatics, vol. 2, no. 2, pp. 221–235, 2022. elhousni2020automatic M. Elhousni, Y. Lyu, Z. Zhang, and X. Huang, “Automatic building and labeling of hd maps with deep learning,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 08, 2020, pp. 13255–13260. zhou2021automatic Y. Zhou, Y. Takeda, M. Tomizuka, and W. Zhan, “Automatic construction of lane-level hd maps for urban scenes,” in 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2021, pp. 6649–6656. jo2018simultaneous K. Jo, C. Kim, and M. Sunwoo, “Simultaneous localization and map change update for the high definition map-based autonomous driving car,” Sensors, vol. 18, no. 9, p. 3145, 2018. kim2018crowd C. Kim, S. Cho, M. Sunwoo, and K. Jo, “Crowd-sourced mapping of new feature layer for high-definition map,” Sensors, vol. 18, no. 12, p. 4172, 2018. kim2021updating C. Kim, S. Cho, M. Sunwoo, P. Resende, B. Bradaï, and K. Jo, “Updating point cloud layer of high definition (hd) map based on crowd-sourcing of multiple vehicles installed lidar,” IEEE Access, vol. 9, pp. 8028–8046, 2021. grisetti2010tutorial G. Grisetti, R. Kümmerle, C. Stachniss, and W. Burgard, “A tutorial on graph-based slam,” IEEE Intelligent Transportation Systems Magazine, vol. 2, no. 4, pp. 31–43, 2010. simon2006optimal D. Simon, Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches. John Wiley & Sons, 2006. ArcGIS “Arcgis online,” 2023. [Online]. Available: <https://www.arcgis.com/index.html> QGIS “Qgis: A free and open source geographic information system,” 2023. [Online]. Available: <https://qgis.org/> GlobalMapper “Global mapper gis software,” 2023. [Online]. Available: <https://www.bluemarblegeo.com/global-mapper/> GRASSGIS “Grass gis: A free geographic information system (gis) software,” 2023. [Online]. Available: <https://grass.osgeo.org/> caesar2020nuscenes H. Caesar, V. Bankiti, A. H. Lang, S. Vora, V. E. Liong, Q. Xu, A. Krishnan, Y. Pan, G. Baldan, and O. Beijbom, “nuscenes: A multimodal dataset for autonomous driving,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 11621–11631. goldman2011understanding R. Goldman, “Understanding quaternions,” Graphical Models, vol. 73, no. 2, pp. 21–49, 2011. witowski2012parameter K. Witowski and N. Stander, “Parameter identification of hysteretic models using partial curve mapping,” in 12th AIAA Aviation Technology, Integration, and Operations (ATIO) Conference and 14th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, 2012, p. 5580. eiter1994computing T. Eiter and H. Mannila, “Computing discrete Fréchet distance,” 1994. | http://arxiv.org/abs/2310.18416v2 | {
"authors": [
"Mohamed Sayed",
"Stepan Perminov",
"Dzmitry Tsetserukou"
],
"categories": [
"cs.RO"
],
"primary_category": "cs.RO",
"published": "20231027181652",
"title": "PolyMerge: A Novel Technique aimed at Dynamic HD Map Updates Leveraging Polylines"
} |
M. K. Grzeszczyk et al.Sano Centre for Computational Medicine, Cracow, Poland [email protected] Informatics Institute, University of Amsterdam, Amsterdam, The Netherlands Amsterdam University Medical Center, Amsterdam, The Netherlands The Medical Centre of Postgraduate Education, Warsaw, Poland Medical University of Warsaw, Warsaw, Poland Warsaw University of Technology, Warsaw, Poland IDEAS NCBR, Warsaw, Poland Tooploox, Wroclaw, Poland Massachusetts General Hospital, Harvard Medical School, Boston, MA, USATabAttention: Learning Attention Conditionally on Tabular Data Michal K. Grzeszczyk1 Szymon Płotka1, 2, 3 Beata Rebizant4 Katarzyna Kosińska-Kaczyńska4 Michał Lipa5 Robert Brawura-Biskupski-Samaha4 Przemysław Korzeniowski1 Tomasz Trzciński 6, 7, 8 Arkadiusz Sitek 9 January 14, 2024 ============================================================================================================================================================================================================== Medical data analysis often combines both imaging and tabular data processing using machine learning algorithms. While previous studies have investigated the impact of attention mechanisms on deep learning models, few have explored integrating attention modules and tabular data. In this paper, we introduce TabAttention, a novel module that enhances the performance of Convolutional Neural Networks (CNNs) with an attention mechanism that is trained conditionally on tabular data. Specifically, we extend the Convolutional Block Attention Module to 3D by adding a Temporal Attention Module that uses multi-head self-attention to learn attention maps. Furthermore, we enhance all attention modules by integrating tabular data embeddings. Our approach is demonstrated on the fetal birth weight (FBW) estimation task, using 92 fetal abdominal ultrasound video scans and fetal biometry measurements. Our results indicate that TabAttention outperforms clinicians and existing methods that rely on tabular and/or imaging data for FBW prediction. This novel approach has the potential to improve computer-aided diagnosis in various clinical workflows where imaging and tabular data are combined. We provide a source code for integrating TabAttention in CNNs at <https://github.com/SanoScience/Tab-Attention>. § INTRODUCTION Many clinical procedures involve collecting data samples in the form of imaging and tabular data.New deep learning (DL) architectures fusing image and non-image data are being developed to extract knowledge from both sources of information and improve predictive capabilities <cit.>. While concatenation of tabular and imaging features in final layers is widely used <cit.>, this approach limits the interaction between them. To facilitate better knowledge transfer between these modalities more advanced techniques have been proposed. Duanmu et al. <cit.> presented the Interactive network in which tabular features are passed through a separate branch and channel-wise multiplied with imaging features at different stages of Convolutional Neural Network (CNN). Pölsterl et al. <cit.> proposed a Dynamic Affine Feature Map Transform (DAFT) to shift and scale feature maps conditionally on tabular data. In <cit.>, Guan et al. presented a method for transforming tabular data and processing them together with 3D feature maps via VisText self-attention module. The importance of the attention mechanism on DL models' performance has been extensively studied <cit.>. 
Convolutional Block Attention Module (CBAM) <cit.> has been shown to improve the performance of DL models on high dimensional data <cit.>. Despite these advances, few studies have explored the potential of incorporating attention maps with imaging and tabular data simultaneously.We develop such a solution and as an example of application, we use fetal birth weight (FBW) prediction from ultrasound (US) data. It is a challenging task requiring clinicians to collect US videos of fetal body parts and fetal biometry measurements. Currently, abdominal circumference (AC), head circumference (HC), biparietal diameter (BPD), and femur length (FL) are used to estimate FBW with heuristic formulae <cit.>. The predicted weight is the indicator of perinatal health prognosis or complications in pregnancy and has an impact on the method of delivery (vaginal or Cesarean) <cit.>. Unfortunately, the current approach to FBW estimation is often imprecise and can lead to a mean absolute percentage error (MAPE) of 10%, even if performed by experienced sonographers <cit.>. An ensemble of Machine Learning algorithms was proposed by Lu et al. <cit.> for solving this task. CNNs are applied for fetal biometry measurements estimation from US standard planes <cit.> or US videos <cit.>. Tao et al. <cit.> approach this problem with a recurrent network utilizing temporal features of fetal weight changes over weeks concatenated with fetal parameters. Płotka et al. <cit.> developed BabyNet, a hybrid CNN with Transformer layers to estimate FBW directly from US videos. Recent studies show that there is a strong correlation between the image features of the abdominal plane and the estimated fetal weight, indicating that it can serve as a dependable indicator for evaluating fetal growth <cit.>. We utilize the US videos of the abdomen (imaging data) and biometry measurements with other numerical values (tabular data) during our experiments. In this work, we introduce TabAttention, a novel module designed to enhance the performance of CNNs by incorporating tabular data. TabAttention extends the CBAM to the temporal dimension by adding a Temporal Attention Module (TAM) that leverages Multi-Head Self-Attention (MHSA) <cit.>. Our method utilizes pooled information from imaging feature maps and tabular data (represented as tabular embeddings) to generate attention maps through Channel Attention Module (CAM), Spatial Attention Module (SAM), and TAM. By incorporating tabular data, TabAttention enables the network to better identify what, where, and when to focus on, thereby improving performance. We evaluate our method on the task of estimating FBW from abdominal US videos and demonstrate that TabAttention is at least on par with existing methods, including those based on tabular and/or imaging data, as well as clinicians. The main contributions of our work are: 1) the introduction of TabAttention, a module for conditional attention learning with tabular data, 2) the extension of CBAM to the temporal dimension via the TAM module, and 3) the validation of our method on the FBW estimation task, where we demonstrate that it is competitive with state-of-the-art methods.§ METHOD In this section, we introduce the fundamental components of the TabAttention module. We detail the development of CBAM augmented with a Temporal Attention Module. 
Then, we elaborate on how TabAttention leverages tabular embeddings to modulate the creation of attention maps and outline how the module can be seamlessly incorporated into the residual block of ResNet.Fig. <ref> presents the overview of the TabAttention module. Given US video sequence S ∈^T_0× 1 × H_0× W_0 of height H_0, width W_0 and frame number T_0 as the input, 3D CNN produces intermediate temporal feature maps S' ∈^T × C × H × W where C is the number of channels. In our setting, the CBAM block generates T 1D channel attention maps M_c∈^T × C × 1 × 1 and T 2D spatial attention maps M_s∈^T × 1 × H × W. We create attention maps separately for every temporal feature map as the information of what is meaningful and where it is important to focus on might change along the temporal dimension. To account for the temporal changes and focus on when is the informative part we add TAM which infers temporal attention map M_t∈^T × 1 × 1 × 1. Intermediate temporal feature maps S' are refined with attention maps in the following way:S”=M_c(S') ⊗ S'S”' = M_s(S”) ⊗ S”O' = M_t(S”') ⊗ S”'Here O' denotes the output of the module and ⊗ is an element-wise multiplication during which attention maps are broadcasted along all unitary dimensions. In general, attention maps are computed based on information aggregated by average- and max-pooling along specified dimensions which are then passed through shared layers for refinement (Fig. <ref>). Then, these refined descriptors are passed through the sigmoid function to create final attention maps. To account for the tabular information during attention maps computing, we embed the input tabular data Tab ∈^D, where D is the number of numerical features, with two linear layers and Rectified Linear Unit (ReLU) activation in between. The tabular data is embedded to the size of pooled feature maps. The embedding is passed through shared layers in the same way as pooled feature maps. Therefore, the attention maps are computed conditionally on tabular data. Thus, the output of TabAttention O'_t is computed as follows:S”_t = M_c(S', Tab) ⊗ S'S”'_t = M_s(S”_t, Tab) ⊗ S”_tO'_t = M_t(S”'_t, Tab) ⊗ S”'_t Channel Attention Module. We follow the design of the original CBAM <cit.>. We split temporal feature maps into T feature maps F_i where i∈ 1, ..., T so that each of them is passed through CAM separately. To compute the channel attention (M_c), we aggregate the spatial information through average- and max-pooling to produce descriptors (F^c_avg_i, F^c_max_i∈^C × 1 × 1). We pass the tabular data through a multi-layer perceptron (MLP_emb_c) with one hidden layer (of size ^C/z, where z is the reduction ratio set to 16) and ReLU activation to embed it into the same dimension as spatial descriptors. Then, both descriptors, with tabular embedding are passed through the shared network which is MLP with a hidden activation size of ^C/z and one ReLU activation. After the MLP is applied, the output vectors are element-wise summed to produce the attention map. We concatenate attention maps of all feature maps to produce M_c: M_c(S', Tab) = [M^f_c(F_i, Tab)]_i=1, ..., TM^f_c(F_i, Tab) = σ(MLP(F^c_max_i) + MLP(F^c_avg_i) + MLP(MLP_emb_c(Tab))) Spatial Attention Module. After splitting the temporal feature maps, we average- and max-pool them along channel dimension to produce feature descriptors (F^s_avg_i, F^s_max_i∈^1 × H × W). 
We pass the tabular data through MLP_emb_s with one hidden layer of size ^H × W/2 and ReLU activation to embed it into the same dimension as spatial descriptors. We reshape this embedding to the size of feature descriptors and concatenate it with them. We pass the following representation through a 2D convolution layer and the sigmoid activation: M_s(S”, Tab) = [M^f_s(F_i, Tab)]_i=1, ..., TM^f_s(F_i, Tab) = σ(Conv([F^s_max_i, F^s_avg_i, Reshape(MLP_emb_s(Tab))])) Temporal Attention Module. We create temporal descriptors by average- and max-pooling temporal feature maps along all non-temporal dimensions (F^t_avg_i, F^t_max_i∈^T × 1 × 1 × 1). We embed tabular data with MLP_emb_t with one hidden layer of size ^T/2 into the same dimension. We concatenatecreated vectors and treat them as the embedding of the US sequence which we pass to the MHSA layer (with 2 heads). We create the query (Q), key (K) and value (V) with linear layers and an output size of d (4). We add relative positional encodings <cit.> r to K. After passing through MHSA, we squash the refined representation with one MLP layer and sigmoid function to create a temporal attention map M_t: MHSA(S_emb) = MLP([softmax(Q_j(K_j+r)^T/√(d))V_j]_j=1, 2) M_t(S”', Tab) = σ(MHSA([F^t_max, F^t_avg, MLP_emb_t(Tab)]))TabAttention can be integrated within any 3D CNN (or 2D CNN in case TAM is omitted). As illustrated in Fig. <ref>, we add TabAttention between the first ReLU and the second convolution in the residual block to integrate our module with 3D ResNet-18.§ EXPERIMENTS AND RESULTSThis section describes the dataset used and provides implementation details of our proposed method. We benchmark the performance of TabAttention against several state-of-the-art methods and compare them to results obtained by clinicians. Additionally, we conduct an ablation study to demonstrate the significance of each key component utilized in our approach. Dataset. This study was approved by the Ethics Committee of the Medical University of Warsaw (Reference KB.195/2021) and informed consent was obtained for all subjects. The multi-site dataset was acquired using international standards approved by <cit.>. The dataset consists of 92 2D fetal US video scans captured in the standard abdominal plane view. These scans were collected from 92 pregnant women (31.89 ± 4.76 years), across three medical centers, and obtained as part of routine US examination done less than 24 hours before delivery. This allowed us to obtain the real ground truth which was baby weight soon after birth.Five experienced sonographers (14.2 ± 4.02 years of experience) acquired the data using a single manufacturer device (General Electric) and several models (GE Voluson E6, S8, P8, E10, and S10). The abdominal fetal US videos (5-10 seconds, 13-37 frames per second) were saved in the DICOM file format. We resized the pixel spacing to 0.2 mm × 0.2 mm for all video clips. As tabular data, we used six numerical features: AC (34.51 ± 2.35 cm), HC (33.56 ± 1.41 cm), BPD (9.40 ± 0.46 cm), FL (7.28 ± 0.33 cm), GA (38.29 ± 1.47 weeks), and mother's age. The examples of how the measurements were obtained are presented in Fig. <ref>.The actual birth weight of the fetus obtained right post-delivery (3495 ± 507 grams) was used as the target of the prediction. Implementation details. We use 3D ResNet-18 as our base model. 
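For illustration, a minimal PyTorch-style sketch of the channel attention module conditioned on tabular data, as described in the Method section, could look as follows; the tensor layout, layer sizes, and the way the attention is broadcast over frames are simplifying assumptions rather than the released implementation.

import torch
import torch.nn as nn

class TabChannelAttention(nn.Module):
    # Sketch of M_c: channel attention computed conditionally on tabular data.
    def __init__(self, channels, tab_dim, reduction=16):
        super().__init__()
        hidden = max(channels // reduction, 1)
        # Shared MLP applied to the pooled descriptors and to the tabular embedding
        self.shared_mlp = nn.Sequential(
            nn.Linear(channels, hidden), nn.ReLU(), nn.Linear(hidden, channels))
        # MLP_emb_c: embeds the tabular vector to the channel dimension
        self.tab_embed = nn.Sequential(
            nn.Linear(tab_dim, hidden), nn.ReLU(), nn.Linear(hidden, channels))

    def forward(self, feat, tab):
        # feat: (B, T, C, H, W) temporal feature maps, tab: (B, D) tabular features
        b, t, c, h, w = feat.shape
        x = feat.reshape(b * t, c, h, w)
        avg = x.mean(dim=(2, 3))                     # average-pooled descriptor
        mx = x.amax(dim=(2, 3))                      # max-pooled descriptor
        tab_emb = self.tab_embed(tab).repeat_interleave(t, dim=0)
        attn = torch.sigmoid(self.shared_mlp(avg) + self.shared_mlp(mx)
                             + self.shared_mlp(tab_emb))
        out = x * attn.view(b * t, c, 1, 1)
        return out.reshape(b, t, c, h, w)

# Toy usage: 16-frame clips with 64-channel feature maps and six tabular features
cam = TabChannelAttention(channels=64, tab_dim=6)
out = cam(torch.randn(2, 16, 64, 8, 8), torch.randn(2, 6))

The spatial and temporal attention modules described above can be sketched analogously, by pooling along the channel dimension or along all non-temporal dimensions and concatenating the corresponding tabular embedding before the convolution or the multi-head self-attention layer.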
We implement all experiments with PyTorch and train the networks on an NVIDIA A100 80GB GPU for 250 epochs with a batch size of 16 and an initial learning rate chosen by grid search from the set {1 × 10^-2, 1 × 10^-3, 1 × 10^-4}. To minimize the Mean Squared Error loss function, we employ the Adam <cit.> optimizer with L2 regularization of 1 × 10^-4 and a cosine annealing learning rate scheduler. To evaluate the reliability of the regression algorithm, we conduct five-fold cross-validation (CV) and ensure that each patient's data is present in only one fold. To ensure a similar birth weight distribution in all folds, we stratify them based on the assignment of data samples into three bins: < 3000 g, > 4000 g, and in-between. The input frames are of size 128 × 128 pixels. Following the approach presented in <cit.>, we set the number of input frames to 16 and average the per-patient predictions of all 16-frame segments from a single video. Throughout the training process, we apply various data augmentation techniques to every batch, such as rotation, random adjustments to brightness and contrast, the addition of Gaussian noise, horizontal flipping, image compression, and motion blurring. We standardize all numerical features to a mean of 0 and a standard deviation of 1. We use Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and MAPE to evaluate the regression performance. Comparison with state-of-the-art methods. We compare TabAttention with several methods utilizing tabular data only (Linear Regression <cit.>, XGBoost <cit.>), imaging data only (3D ResNet-18 <cit.>, BabyNet <cit.>), both types of data (Interactive <cit.>, DAFT <cit.>), and Clinicians. The predictions of Clinicians were obtained using the Hadlock III <cit.> formula and the AC, HC, BPD, and FL measurements. The comparison of results from the five-fold CV is presented in Table <ref>. TabAttention achieves the lowest MAE, RMSE, and MAPE (170 ± 26, 225 ± 37, and 5.0 ± 0.8, respectively) among all tested methods. Our approach outperforms the clinically utilized heuristic formulae, machine learning, and image-only DL methods (two-tailed paired t-test p-value < 0.05). TabAttention also achieves the best results among the DL models utilizing both tabular and imaging modalities; however, the difference does not reach statistical significance, with a p-value around 0.11. Ablation study. We conduct ablation experiments to validate the effectiveness of the key components of our proposed method (Table <ref>). We employ 3D ResNet-18 as the baseline model. The integration of TAM or of CBAM with attention maps learned conditionally on tabular data into the 3D ResNet-18 architecture improves the predictive performance of the network. Subsequently, the incorporation of the full TabAttention further enhances its capabilities. § DISCUSSION AND CONCLUSIONS In this work, we present a novel method, TabAttention, that can effectively compete with current state-of-the-art image- and/or tabular-based approaches for estimating FBW. We found that it outperformed Clinicians, achieving a MAPE of 5.0% vs. 5.9% (p-value < 0.05). A key advantage of our approach is that it does not require any additional effort from clinicians, since the necessary data is already collected as part of standard procedures. This makes TabAttention an alternative to the heuristic formulas that are currently used in clinical practice.
We should note that while TabAttention achieved the lowest metrics among the DL models we evaluated, the differences between our approach and other DL methods using tabular data were not statistically significant, partly due to the small performance change. This small difference in the performance is likely caused by the fact that the tabular features used in TabAttention are mainly derived from the same modality (i.e. US scans), so they do not carry additional information, but instead can be considered as refined features already present in the scans.To develop TabAttention, we used tabular data as a hint for the network to learn attention maps and gain additional knowledge about essential aspects presented in the scans. This approach significantly improved the performance of baseline methods and demonstrated its practical applicability.Accurate estimation of FBW is crucial in determining the appropriate delivery method, whether vaginal or Cesarean. Low birth weight (less than 2500 g) is a major risk factor for neonatal death, while macrosomia (greater than 4000 g) can lead to delivery traumas and maternal complications, such as birth canal injuries, as reported by Benacerraf et al. <cit.>. Thus, precise prediction of FBW is vital for very low and high weights. Notably, in this respect, our method is robust to outliers with high or low FBW since there is no correlation between true FBW and absolute prediction error (Pearson correlation coefficient of -0.029).This study has limitations. Firstly, a relatively small study cohort was used, which may affect the accuracy and generalization of the results. To address this, future work will include a larger sample size by using additional datasets. Secondly, our dataset is limited to only Caucasian women and may not be representative of other ethnicities. It is important to investigate the performance of our method with datasets from different ethnic groups and US devices to obtain more robust and generalizable results. Lastly, our method relies on fetal biometry measurements that are subject to inter- and intra-observer variabilities. This variability could potentially affect the network's performance and influence the measurements' quality. Future studies should consider strategies to reduce measurement variabilities, such as standardized protocols or automated measurements, to improve the accuracy of the method.To summarize, we have introduced TabAttention, a new module that enables the conditional learning of attention on tabular data and can be integrated with any CNN. Our method has many potential applications, including serving as a computer-aided diagnosis tool for various clinical workflows. We have demonstrated the effectiveness of TabAttention on the FBW prediction task, utilizing both US and tabular data, and have shown that it outperforms other methods, including clinically used ones. In the future, we plan to test the method in different clinical applications where imaging and tabular data are used together. § ACKNOWLEDGEMENTS This work is supported by the European Union’s Horizon 2020 research and innovation programme under grant agreement Sano No 857533 and the International Research Agendas programme of the Foundation for Polish Science, co-financed by the European Union under the European Regional Development Fund.splncs04 | http://arxiv.org/abs/2310.18129v1 | {
"authors": [
"Michal K. Grzeszczyk",
"Szymon Płotka",
"Beata Rebizant",
"Katarzyna Kosińska-Kaczyńska",
"Michał Lipa",
"Robert Brawura-Biskupski-Samaha",
"Przemysław Korzeniowski",
"Tomasz Trzciński",
"Arkadiusz Sitek"
],
"categories": [
"eess.IV"
],
"primary_category": "eess.IV",
"published": "20231027132137",
"title": "TabAttention: Learning Attention Conditionally on Tabular Data"
} |
Department of Physics #1, Graduate School of Science, Kyoto University, Kyoto 606-8502, JapanBased on a variational expression for the steady-state entropy production rate in overdamped Langevin dynamics, we derive concrete upper bounds on the entropy production rate in various physical settings. For particles in a thermal environment and driven by non-conservative forces, we show that the entropy production rate can be upper bounded by considering only the statistics of the driven particles. We use this finding to argue that the presence of non-driven, passive degrees of freedom generally leads to decreased dissipation. Another upper bound can be obtained only in terms of the variance of the non-conservative force, which leads to a universal upper bound for particles that are driven by a constant force that is applied in a certain region of space. Extending our results to systems attached to multiple heat baths or with spatially varying temperature and/or mobility, we show that the temperature difference between the heat baths or the gradient of the temperature can be used to upper bound the entropy production rate. We show that most of these results extend in a straightforward way to underdamped Langevin dynamics and demonstrate them in three concrete examples. Upper bounds on entropy production in diffusive dynamics Andreas Dechant 2023-10-25 ========================================================§ INTRODUCTIONFrom a thermodynamic point of view, the distinction between equilibrium and out-of-equilibrium systems is the irreversibility of the latter. This irreversibility is quantified by entropy production, which measures the inevitable increase in the entropy of the universe during any out-of-equilibrium process. In practice, entropy production leads to dissipation, that is, a loss of energy into the environment of the system, and therefore, one frequently seeks to minimize the entropy production involved in a given process. Conversely, far-from-equilibrium systems can exhibit a rich variety of non-trivial behaviors, which are not possible in or near equilibrium, such as non-monotonic or even negative transport coefficients <cit.>, persistent oscillations <cit.> or spontaneous emergence of patterns <cit.>.A recent trend in non-equilibrium statistical physics is to investigate lower bounds on the entropy production in terms of observable quantities, which can be used to estimate entropy production from measured data <cit.>. From a more fundamental perspective, such lower bounds also quantify how much entropy production is strictly necessary to observe a given non-equilibrium phenomenon. Given the recent success of this approach, it seems natural to ask the opposite question: Given our knowledge about a particular system, can we derive an upper bound on the entropy production? Clearly, this question cannot be answered from measured data alone: If there is some non-equilibrium process occurring in the system that is not reflected in the measurement, then we have no way of estimating its contribution to the entropy production.In many cases, a physical system is driven out of equilibrium by an externally controlled operation, such as applying a bias force or a temperature gradient. In such cases, we know the thermodynamic forces driving the system out of equilibrium.We show that this additional information about the driving forces can be used, potentially together with measured data, to obtain upper bounds on the entropy production. 
In particular, for Brownian particles in contact with a thermal environment (characterized by temperature T and friction coefficient γ), that are driven into a non-equilibrium steady state by a non-conservative force-field F^nc, we obtain a simple upper bound on the rate of entropy production,σ_st≤1/γ T‖F^nc‖^2 ,where … denotes an ensemble average and ‖…‖ is the vector norm. This bound, which holds for both over- and underdamped motion, places an upper limit on the dissipation incurred due the non-conservative force, which only depends on the average magnitude of the driving force and the properties of the thermal environment. As we will discuss in the following, the bound can be tightened using additional information and can be extended to cases where the thermodynamic force arises due to several heat baths or a temperature gradient.The structure of the paper is as follows: In Section <ref> we introduce the basic setup of overdamped Langevin dynamics and discuss a variational formula for the entropy production rate. We apply this variational formula to systems of particles driven by non-conservative forces in Section <ref> to obtain various upper bounds on the entropy production rate for specific situations. In Sections <ref> and <ref> we extend the bounds to systems in contact with several heat baths or in the presence of temperature gradients, respectively. Section <ref> discusses the necessary modifications of the bounds when dealing with system with periodic boundary conditions. While most of the derivations are done with overdamped Langevin dynamics in mind, we show in Section <ref> that almost all derived upper bounds also apply to underdamped systems with no or little modification. Finally, in Section <ref>, we compare the upper bounds to the exact value of the entropy production for interacting driven Brownian particles, a temperature ratchet, and underdamped motion in periodic potentials. § SETUP AND VARIATIONAL PRINCIPLEWe consider a system of N overdamped degrees of freedom, whose motion is described by the overdamped Langevin equations <cit.>ẋ(t) = a(x(t)) + G(x(t)) ·ξ(t ) .Here a(x) is a drift vector, G(x) is a rank N coupling matrix describing the coupling to the N Gaussian white noises ξ(t) and · denotes the Ito-product. We assume that the drift vector and coupling matrix are such that the system reaches a steady state at long time. The steady-state probability density p_st(x) is determined from the Fokker-Planck equation <cit.>0= - ( ν_st(x) p_st(x) ) with ν_st(x)= a(x) - B(x) - B(x) ln p_st(x).Here B(x) = 2 G(x) G(x)^T is the positive definite diffusion matrix and B(x) denotes the vector with entries ∂_x_j B_ij(x), where the sum over repeated indices is implied. The quantity ν_st(x) is called the local mean velocity, it characterizes the flows in the steady state of the system, which is thus generally a non-equilibrium steady state with entropy production rate <cit.>σ_st = ν_stB^-1ν_st,where … denotes the average with respect to the steady-state probability density. In the following, we will focus on systems under natural or reflecting boundary conditions, such that the probability current j_st(x) = ν_st(x) p_st(x) vanishes at the boundaries or as ‖x‖→∞. We will remark on systems with periodic boundary conditions later. 
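As a concrete illustration of this setup, the following Python sketch integrates the overdamped Langevin equation with an Euler–Maruyama scheme for a single particle in two dimensions, subject to a harmonic potential and a rotational non-conservative force, and evaluates the simple upper bound ⟨‖F^nc‖²⟩/(γT) from the sampled trajectory; the force field and the parameter values are illustrative choices, not part of the general formalism.

import numpy as np

rng = np.random.default_rng(0)
gamma, T, dt, n_steps = 1.0, 1.0, 1e-3, 200_000
k, omega = 1.0, 0.5                      # trap stiffness, strength of the rotational drive

def F_nc(x):                             # non-conservative (rotational) force
    return omega * np.array([-x[1], x[0]])

x = np.zeros(2)
traj = np.empty((n_steps, 2))
for i in range(n_steps):
    F = -k * x + F_nc(x)                 # total force: -grad U + F_nc
    x = x + (F / gamma) * dt + np.sqrt(2 * T * dt / gamma) * rng.standard_normal(2)
    traj[i] = x

# Empirical evaluation of the upper bound  sigma_st <= <|F_nc|^2> / (gamma T)
fnc = omega * np.stack([-traj[:, 1], traj[:, 0]], axis=1)
bound = np.mean(np.sum(fnc**2, axis=1)) / (gamma * T)
print("upper bound on the entropy production rate:", bound)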
Due to the steady state condition (<ref>), we have for arbitrary gradient fields ψ(x),ψν_st = ∫ dx p_st(x) ν_st(x) ψ(x)= - ∫ dx ψ(x) ( ν_st(x) p_st(x) ) = 0.That is, the local mean velocity is orthogonal to all gradient fields with respect to the inner product over the space of vector fields u,v = uv. Next, let us consider the functionalΨ[ψ] = ( ν_st + Bψ) B^-1( ν_st + Bψ) .Using (<ref>), we can write this asΨ[ψ] = σ_st + ψBψ .Since the second term is positive and vanishes for any constant function ψ(x) = 0, we have at the identity,σ_st = inf_ψΨ[ψ].While at this level, the variational principle is almost trivial, we will see in the following that it is useful for deriving upper bounds on the entropy production rate in terms of computable quantities. At this point, we note that (<ref>) gives an upper bound on the entropy production rate for any choice of ψ(x),σ_st≤( ν_st + Bψ(x) ) B^-1( ν_st + Bψ(x) ) .We remark that a similar variational formula has been derived for the housekeeping entropy in the Maes-Netočnỳ decomposition <cit.>. In Ref. <cit.>, it was shown that we haveσ_hk = inf_ψ( ν_t + B_t ψ) B_t^-1( ν_t + B_t ψ(x) )_t.Here ν_t(x) is the time-dependent local mean velocity,∂_t p_t(x)= - ( ν_t(x) p_t(x) ) with ν_t(x)= a_t(x) - B_t(x) - B_t(x) ln p_t(x),and the average is taken with respect to the time-dependent probability density p_t(x). The similarity between (<ref>) and (<ref>) implies that all results that are derived from (<ref>) for the steady state entropy in the following also apply without change to the housekeeping entropy in arbitrary time-dependent dynamics.§ THERMAL SYSTEMS §.§ General non-conservative forcesAn important class of physical systems described by (<ref>) are Brownian particles in contact with an equilibrium environment with temperature T. The interactions between the particles and the effect of external forces result in a total force F(x), while the interaction between the particles and the environment is described by a positive definite mobility matrix μ. The equations of motion are thenẋ = μF(x) + √(2 μ T)ξ(t) .The matrix √(μ) is defined as the unique positive definite matrix with √(μ)√(μ) = μ. In this case, the diffusion matrix is μ T and the steady-state local mean velocity is given byν_st(x)= μ( F(x) - T ln p_st(x) ) .If all forces in the system are conservative, we can write F(x) = - U(x) and the steady-state is given by the Boltzmann-Gibbs equilibrium,p_st(x) = Z^-1 e^-U(x)/Twith Z = ∫ dx e^-U(x)/T ,and the local mean velocity and entropy production rate vanish. Thus, for systems in contact with a thermal environment, conservative forces are equivalent to relaxation to thermal equilibrium. In the presence of non-conservative forces that cannot be written as the gradient of a potential U(x), the system will relax to a non-equilibrium steady state with a non-zero rate of entropy production, which in this case is equal to the rate of heat dissipation into the environment,σ_st = Q̇_diss/T .Let us now apply (<ref>) to this situation. 
We haveσ_st = 1/Tinf_ψ⟨( F - T ln p_st + T ψ) ×μ( F - T ln p_st + T ψ) ⟩ .Since the infimum is taken over all gradient fields, we can absorb the terms involving the steady-state probability density into ψ(x),σ_st = 1/Tinf_ψ⟨( F + T ψ) μ( F + T ψ) ⟩ .Next, we split the forces into a conservative and non-conservative part as F(x) = - U(x) + F^nc(x), again absorbing the gradient term into ψ(x),σ_st = 1/Tinf_ψ⟨( F^nc + T ψ) μ( F^nc + T ψ) ⟩ .Defining V(x) = T ψ(x), we then arrive atσ_st = 1/Tinf_V⟨( F^nc +V ) μ( F^nc +V ) ⟩ .This corresponds to a minimization over conservative forces, that is, finding the conservative force - V(x) that is most similar to the non-conservative force F^nc(x). Note that the expression (<ref>) does not explicitly involve the potential forces, which only appear via their influence on the steady-state probability with respect to which the average is taken. In many physical situations, the non-conservative force corresponds to an externally applied force, and therefore, its functional form is often known. By contrast, the conservative force generally also include interactions between the particles, as well as interactions with e. g. obstacles in the environment, whose precise form is often not known. (<ref>) allows us to calculate the entropy production rate from the knowledge of only the non-conservative forces by computing ensemble averages. Rescaling V(x) →α V(x) and minimizing with respect to α, we find the equivalent expression,σ_st = 1/Tinf_V( F^ncμF^nc - F^ncμ V^2/ V μ V ),which generally provides a tighter bound than (<ref>) for an arbitrary choice of V(x), since we already optimized with respect to the overall magnitude of V(x). Choosing V(x) = 0 in (<ref>), we obtain our first upper bound on the entropy production rate,σ_st≤1/TF^ncμF^nc .The right-hand side corresponds to the entropy production that would be obtained if the entire non-conservative force was converted into a current, ν_st(x) = μF^nc(x). This bound ascertains that the entropy production rate cannot exceed the overall magnitude of the non-conservative force. A tighter bound, which likewise is expressed only in terms of the non-conservative force can be obtained from (<ref>) by choosing V = -xF^nc_st,σ_st≤1/T⟨( F^nc - F^nc) μ( F^nc - F^nc) ⟩ .This implies that the entropy production is upper bounded by the deviation of the non-conservative force from its average value. This bound takes an even simpler shape if the non-conservative force is a constant force F^nc(x) = F_0 χ_Ω(x) that is acting over some finite region Ω of the configuration space, where χ_Ω(x) = 1 if x∈Ω and χ_Ω(x) = 0 otherwise. In this case, we haveσ_st≤F_0 μF_0/T P(Ω) (1-P(Ω) ) ,where P(Ω) denotes the probability of finding the system in Ω. Obviously, this expression vanishes if P(Ω) = 0, when the system is never driven by the non-conservative force, or if P(Ω) = 1, since a constant force acting over the entire configuration space is conservative F_0 = -(xF_0). The expression on the right-hand side is maximal if P(Ω) = 1/2 and we obtain the global upper boundσ_st≤F_0 μF_0/4 T.This gives an upper bound on the entropy production of a system driven by a constant magnitude non-conservative force, which depends only on the magnitude of the force, the mobility and the temperature.We can also compute the entropy production rate from the work done by the non-conservative force <cit.>,𝒲 = ∫_0^τ dtF^nc(x(t)) ∘ẋ(t).Here, ẋ(t) is the velocity given by (<ref>) and ∘ denotes the Stratonovich product. 
In the steady state, the average work is related to the entropy production rate by σ_st = 𝒲/(T τ). Just like (<ref>), this can be used to calculate the entropy production rate from the knowledge of the non-conservative force. At first glance, (<ref>) appears to be simpler, since it does not involve any variational expression. However, it requires the velocity ẋ(t), which can only be measured accurately if the time resolution is sufficient to resolve all the timescales in the dynamics; this may be challenging in particular in strongly interacting systems. By contrast, (<ref>) relies on sampling statistics from the steady-state probability density, which does not place any restrictions on the time resolution. Therefore, (<ref>) may yield a more accurate result in situations where the velocity cannot be resolved well, at the cost that we have to perform a minimization with respect to V(x). Expanding V(x) into a set of Q basis functions, V(x) = ∑_q=1^Q c_q η_q(x), (<ref>) requires computing the (Q+2)(Q+1)/2 coefficients C_00 = ⟨F^nc μF^nc⟩, C_0q = ⟨F^nc μ∇η_q⟩, C_qr = ⟨∇η_q μ∇η_r⟩, which can be done by evaluating the functions for the measured values of x(t) and averaging. In terms of the coefficient matrix C, the minimizer of (<ref>), and thus the resulting estimate on the entropy production rate, is given by σ̂_st = 1/T ( C_00 - ∑_q,r = 1^Q C_0q(C^-1)_qr C_0r ). For a finite set of basis functions (and ignoring other sources of errors), this is an upper estimate on the entropy production rate, σ_st≤σ̂_st, which will converge to the true value as the number of basis functions is increased, provided that they form a complete basis of the configuration space. §.§ Driven and passive particles We next consider a system of M+K Brownian particles in n dimensions in contact with a heat bath at temperature T, whose positions at time t we label by x(t) = (y(t),z(t)) = (y_1(t),…,y_M(t),z_1(t),…,z_K(t)) with y_1(t) = (y_1,1(t),…,y_1,n(t)), that is, N = (M+K)n degrees of freedom. Conservative external forces and interactions between the particles are encoded in the potential U(x) = U(y,z), which may depend on both sets of particles. In addition, there is a non-conservative force F_d^nc(y) that acts on M of the particles, which we refer to as the driven particles, while the remaining K particles, which are only subject to potential forces, are called passive. We further assume that the mobility matrix is block-diagonal, with its first M × M block equal to the mobility matrix μ_d of the driven particles, and the remaining K × K block equal to the mobility matrix μ_p of the passive particles. This means that the environment does not induce any interactions between the driven and non-driven particles. In (<ref>), we now choose V(x) = V_d(y), that is, we allow the potential to only depend on the driven degrees of freedom. Restricting the domain of the minimization in that way results in an upper bound, σ_st≤1/Tinf_V_d⟨( F_d^nc + ∇_y V_d) μ( F_d^nc + ∇_y V_d) ⟩ . Since the mobility matrix is block-diagonal and the vector F_d^nc(y) + ∇_y V_d(y) is only non-zero in the entries corresponding to the driven particles and only depends on their coordinates, we can simplify this to σ_st ≤σ_st,d= 1/Tinf_V_d⟨( F_d^nc + ∇_y V_d) μ_d( F_d^nc + ∇_y V_d) ⟩_d . Here …_d denotes an average with respect to the marginal probability density of the driven particles, p_st,d(y) = ∫ dz p_st(y,z).
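A minimal Python sketch of the basis-function estimator above is given below for a single particle in two dimensions with scalar mobility; the choice of polynomial basis functions and the rotational driving force are illustrative assumptions, and in practice the position samples would be taken from measured trajectory data.

import numpy as np

def variational_epr_estimate(X, F_nc, grad_basis, mu=1.0, T=1.0):
    # Upper estimate sigma_hat = (C_00 - C_0^T C^{-1} C_0) / T from position samples.
    # X          : (N, d) array of sampled steady-state positions
    # F_nc       : callable returning the non-conservative force, shape (N, d)
    # grad_basis : list of callables returning the gradients of the basis functions
    F = F_nc(X)                                        # (N, d)
    G = np.stack([g(X) for g in grad_basis], axis=0)   # (Q, N, d)
    C00 = mu * np.mean(np.sum(F * F, axis=1))
    C0 = mu * np.mean(np.sum(F[None] * G, axis=2), axis=1)        # (Q,)
    C = mu * np.mean(np.einsum('qnd,rnd->qrn', G, G), axis=2)     # (Q, Q)
    return (C00 - C0 @ np.linalg.solve(C, C0)) / T

# Gradients of the basis functions eta in {x1, x2, x1*x2, x1**2, x2**2}
grad_basis = [
    lambda X: np.stack([np.ones(len(X)), np.zeros(len(X))], axis=1),
    lambda X: np.stack([np.zeros(len(X)), np.ones(len(X))], axis=1),
    lambda X: np.stack([X[:, 1], X[:, 0]], axis=1),
    lambda X: np.stack([2 * X[:, 0], np.zeros(len(X))], axis=1),
    lambda X: np.stack([np.zeros(len(X)), 2 * X[:, 1]], axis=1),
]
F_nc = lambda X: 0.5 * np.stack([-X[:, 1], X[:, 0]], axis=1)      # rotational drive
X = np.random.default_rng(1).standard_normal((50_000, 2))         # stand-in for measured positions
print(variational_epr_estimate(X, F_nc, grad_basis))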
Crucially, (<ref>) only depends on quantities involving the driven degrees of freedom: The non-conservative force F_d^nc(y), the mobility matrix μ_d and the statistics of the driven particles encoded in p_st,d(y). This implies that we can obtain an upper bound on the entropy production rate by only observing the driven degrees of freedom. Moreover, the right-hand side of (<ref>) corresponds precisely to the entropy production rate of the M driven particles with steady state p_st,d(y) in the absence of interactions with the passive particles. This implies that any interaction between the driven and the passive particles necessarily decreases the entropy production rate compared to the case where there is no interaction. At first glance, this appears to be counter-intuitive, since the interaction between the driven and the passive particles generally also induces currents among the passive particles, thereby increasing the dissipation. However, the same interaction also reduces the response of the active particles to the driving force and therefore the corresponding current. (<ref>) implies that the second effect always outweighs the first, that is, the reduction in the currents of the driven particles is always larger than the currents induced in the passive particles.The bound (<ref>) is also useful from a practical point of view: In many cases, there are only one or a few driven (or probe) particles, whose properties, including the non-conservative force driving them, are well-known. By contrast, there may be many different species of passive particles, with complicated interactions among each other and with the driven particles. The upper bound (<ref>) is independent of these details, instead relying only on the properties and a measurement of the driven particles. We stress that p_st,d(y) is the probability density of the driven particles in the actual physical system (including interactions between driven and passive particles), and therefore, the average in (<ref>) can be evaluated from observed trajectory data of the driven particles. In complete analogy to (<ref>) we also haveσ_st≤1/T⟨( F_d^nc - F_d^nc_d) μ_d( F_d^nc - F_d^nc_d) ⟩_d,where, likewise, all quantities on the right-hand side only depend on the driven particles. We also remark that, while in the above we referred to the driven and passive degrees of freedom as particles, (<ref>) is not restricted to this case. For example, the driven degrees of freedom may correspond to the spatial directions in which the non-conservative force acts, while the passive degrees of freedom are the spatial directions unaffected by the non-conservative force.Even if many (or all) particles are driven, it is still possible to use (<ref>) obtain a useful upper bound on the entropy production by exploiting symmetries in the system. For example, consider the case where the non-conservative force acting on each particles is the same and only depends on the coordinate of the respective particle, that is, F^nc(x) = ∑_k F^nc_k(x_i). Then we can choose a sum of one-body potentials V(x) = ∑_k V_k(x_k), which yields the boundσ_st≤1/T∑_kinf_V_k⟨(F_k^nc +V_k ) μ_k (F_k^nc +V_k )⟩_k,where μ_k is the mobility matrix of particle k. The right-hand side is now written as a sum of one-particle expressions, which can be evaluated by using the one-particle probabilities p_st,k(x_k). 
In particular, if all particles are identical, then the above simplifies toσ_st≤K/Tinf_V⟨(F_1^nc +V ) μ_1 (F_1^nc +V )⟩_1.This implies that, for systems of identical, driven particles, the total entropy production is less than sum of one-particle entropy productions. Similar to the case of driven and passive particles, dissipation is reduced by the interactions between the particles and the resulting correlations. It is worth emphasizing again that p_1(x_1) is the one-particle density in the interacting system, and therefore the average in (<ref>) can be directly evaluated from the trajectory data of the latter.We remark that the finding that interactions reduce dissipation relies on a particular identification of the non-interacting system. Let us write F(x) = ∑_i F_i(x_i) -U^int(x), with the interaction potential U^int(x) and one-body forces F_i(x_i) = - _i U_i(x_i) + F_i^nc(x_i). We refer to this interacting out of equilibrium system as S. In most cases, we would identify the non-interacting system S_1 as the one with U^int(x) = 0, that is, a system of independent particles moving under the influence of only the one-body force. This has the advantage that the solution can usually be obtained much easier than for the interacting system. However, there is no definite general relation between the corresponding one-body entropy production σ_st,1 of S_1 and the entropy production σ_st of the interacting system S, so the former cannot be used to draw any conclusions about the latter. This approach further has the downside that the statistics of the non-interacting system S_1 cannot be obtained from a measurement of S. Since there is generally no way of just turning off the interactions, the non-interacting system S_1 may not be accessible in practice. By contrast, in the bound (<ref>), we identify the non-interacting system S̅_1 as the one with the same one-particle probability density p_st,1(x_1) as the interacting system. This has the obvious downside that we do not know the corresponding one-particle potential explicitly; the latter is defined by requiring that it has to reproduce the probability density p_st,1(x_1) in the presence of the one-body non-conservative force F_1^nc(x). The advantage of interpreting S̅_1 as the non-interacting system is, however, that its one-body entropy production σ̅_st,1 always yields an upper bound on the entropy production on the entropy production of S, σ_st≤ K σ̅_st,1. Further, σ̅_st,1 can be computed from the one-body probability density in the interacting system S—and thus from measured trajectory data of the latter—using the variational expression in (<ref>). In conclusion, if we treat S_1 as the non-interacting system, then adding interactions between the particles may increase the dissipation, since doing so also changes the one-body density. However, when comparing the interacting system S with the non-interacting system S̅_1 with the same one-body density, the interacting system always has reduced dissipation. We note that S_1 and S̅_1 coincide if the interactions do not change the one-body density. This also implies that a necessary condition for interactions to increase dissipation is that they significantly change the one-body density compared to the non-interacting system S_1. It would be interesting to see if this statement can be made more quantitative, that is, whether a direct relation between a potential increase in dissipation and the change in the one-body density can be obtained. 
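To illustrate how the one-body bound for identical driven particles can be evaluated from data of the interacting system S, the short sketch below pools the recorded positions of all K particles into a single one-particle sample and multiplies the resulting one-particle estimate by K; it assumes the variational_epr_estimate helper from the earlier sketch is in scope, and the trajectory array is a stand-in for measured data.

import numpy as np

def one_body_upper_bound(traj, F_nc_single, grad_basis, mu=1.0, T=1.0):
    # traj: (n_frames, K, d) positions of the K identical driven particles recorded
    # in the interacting system; only their one-body statistics enter the bound.
    n_frames, K, d = traj.shape
    pooled = traj.reshape(n_frames * K, d)          # samples of the one-particle density
    sigma_1 = variational_epr_estimate(pooled, F_nc_single, grad_basis, mu=mu, T=T)
    return K * sigma_1                              # upper bound on the total entropy production rate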
§ MULTIPLE HEAT BATHS In the previous Section, we focused on the case where the entire system is in contact with a single equilibrium environment; in that case, any non-equilibrium effects stem from non-conservative forces. By contrast, if a system is in contact with multiple heat baths at different temperatures, this generally induces heat flows and therefore drives the system out of equilibrium even if all forces are conservative. For concreteness, let us consider M particles in n dimensions, x = (x_1,…,x_M) with x_1 = (x_1,1,…,x_1,n). We allow each particle x_i to be in contact with a heat bath at a different temperature T_i. We assume that the mobility matrix is block-diagonal, μ = (μ_1,…,μ_M), where μ_i is the n × n mobility matrix of particle x_i. Since we are mainly interested in the effect of different heat baths on the dissipation, we restrict the discussion to the case of conservative forces. We write the potential as U(x) = U^int(x) + ∑_i U_i(x_i), that is, we distinguish between the interaction potential U^int(x) which may depend on all coordinates, and the one-body potentials U_i(x_i), which can be different for each particle. From this and the preceding discussion, it is clear that the system is in equilibrium if all the temperatures are the same (in that case, we just have a conservative system in contact with an equilibrium environment) or if the interaction potential vanishes (in that case, we have M independent equilibrium systems in contact with different equilibrium environments). So, any possible entropy production stems from the interplay between interactions and the temperature differences between the heat baths. The diffusion matrix for the current setup is the block-diagonal matrix B with entries μ_i T_i, and the local mean velocity has componentsν_st,i(x) = - μ_i ( _i U^int(x) + _i U_i(x_i) + T_i _i ln p_st(x) ),where _i denotes the gradient with respect to x_i. In this case, (<ref>) is written asσ_st = sup_ψ∑_i ⟨( _i U^int + _i U_i + T_i _i ln p_st - T_i _i ψ)×μ_i/T_i( _i U^int + _i U_i + T_i _i ln p_st - T_i _i ψ) ⟩ .We can absorb the gradient of the probability density and the one-body potentials into ψ(x),σ_st = sup_ψ∑_i ⟨( _i U^int - T_i _i ψ) ×μ_i/T_i( _i U^int - T_i _i ψ) ⟩ .Rescaling ψ(x) by a global constant α and minimizing with respect to α yields the equivalent expressionσ_st = inf_ψ(U^intμT^-1 U^int -U^intμψ^2/ψμTψ) .Where we defined the diagonal matrix T that has entries T_i on the diagonal elements corresponding to x_i. We now choose ψ(x) = U^int(x), which results in the upper boundσ_st ≤(U^intμT^-1 U^int -U^intμ U^int^2/ U^intμT U^int) .Writing this explicitly in terms of μ_i and T_i, we obtainσ_st ≤∑_i,j(T_i-T_j)^2/T_i T_j𝒰_i 𝒰_j /2 ∑_i T_i 𝒰_i with𝒰_i= _i U^intμ_i _i U^int .The constant 𝒰_i quantifies the magnitude of the interaction force acting on particle i. Note that each term in the numerator is positive and vanishes if either the temperature difference between two particles vanishes (pair of particles is at thermal equilibrium with respect to each other) or the interaction force on one of the particles vanishes identically (particle is independent from the remaining particles). The bound (<ref>) allows us to obtain an upper estimate on the entropy production rate in heat conduction systems, which depends only on the temperature differences and the interaction forces—it only contains contributions from pairs of particles between which a heat flow can occur in principle. 
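As an illustration, the following Python sketch evaluates the right-hand side of this bound for two particles coupled by a harmonic interaction and attached to baths at different temperatures, estimating the interaction-force magnitudes 𝒰_i from sampled configurations; the model, the Gaussian stand-in samples, and the parameter values are illustrative assumptions.

import numpy as np

def heat_bath_bound(grad_U_int, mu, T):
    # Evaluate the multi-bath upper bound from sampled interaction forces.
    # grad_U_int : (N, M, n) array, grad_i U^int at N sampled configurations
    # mu, T      : length-M arrays of (scalar) mobilities and bath temperatures
    U = mu * np.mean(np.sum(grad_U_int**2, axis=2), axis=0)   # magnitudes U_i
    num = sum((T[i] - T[j])**2 / (T[i] * T[j]) * U[i] * U[j]
              for i in range(len(T)) for j in range(len(T)))
    return num / (2.0 * np.sum(T * U))

# Example: two particles coupled by U^int = k (x1 - x2)^2 / 2 in one dimension,
# with configurations sampled from a simulation or a measurement.
k = 2.0
rng = np.random.default_rng(3)
x = rng.standard_normal((100_000, 2))                  # stand-in for sampled positions
g1 = k * (x[:, 0] - x[:, 1])                           # grad_1 U^int
grad_U_int = np.stack([g1, -g1], axis=1)[..., None]    # shape (N, 2, 1)
print(heat_bath_bound(grad_U_int, mu=np.array([1.0, 1.0]), T=np.array([1.0, 2.0])))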
However, we remark that, for example for a linear chain of M particles, the bound (<ref>) becomes trivial in the continuum limit of vanishing temperature differences between neighboring particles and increasing particle number:The right-hand side is proportional to M, whereas the left-hand side is expected to be of order 1. The reason is that we allowed for arbitrary interactions, which means that particles at opposite ends of the chain, where the temperature difference is not small, could in principle interact. It may be possible to obtain a more informed bound that explicitly takes into account the spatial structure of the interactions, such as nearest neighbor or finite-range interactions. We leave this problem for future research, noting that (<ref>) is expected to be most useful for systems with few degrees of freedom.§ SPATIAL TEMPERATURE VARIATIONS Finally, we also allow the temperature and/or mobility to depend on the coordinates x themselves. For simplicity, we restrict the discussion to the case where all degrees of freedom are in contact with the same, spatially inhomogeneous heat bath at temperature T(x). As has been shown in Ref. <cit.>, the appropriate form of the local mean velocity is in this caseν_st(x) = μ(x) ( -U(x) -T(x) - T(x) ln p_st(x) ) .From (<ref>) we then haveσ_st = inf_ψ⟨(U +T + T ln p_st - T ψ) ×μ/T(U +T + T ln p_st - T ψ) ⟩ .As before, we can absorb the term involving the gradient of the probability density into ψ(x),σ_st = inf_ψ⟨(U +T - T ψ) ×μ/T(U +T - T ψ) ⟩ .We write ψ(x) = (U(x) - U_0)/T(x) + ln T(x) + ϕ(x) with an arbitrary constant U_0,σ_st = inf_ϕ⟨( U - U_0/T T - T ϕ) ×μ/T( U - U_0/T T- T ϕ) ⟩ .Again rescaling ϕ(x) by a constant factor α and minimizing with respect to α, we obtainσ_st = inf_ϕ( ( U - U_0/T)^2T μ/T T-( U - U_0/T)T μϕ^2/ϕμ T ϕ).Finally, choosing ϕ(x) = ln T(x), we obtain the upper boundσ_st ≤( U - U_0/T)^2T μ/T T-( U - U_0/T)T μ/T T^2/ T μ/T T .The right-hand side vanishes if either the temperature is homogeneous, T(x) = 0, or if the potential and temperature profile are linearly related, U(x) = c T(x) + U_0. In general, the upper bound depends on the relative magnitude of the temperature gradient 𝒯(x) =T(x) μ T(x)/T(x) and the ratio ℛ(x) = (U(x) - U_0)/T(x) between the potential energy and the temperature. In terms of these quantities we have,σ_st ≤𝒯( ℛ^2 𝒯/𝒯 - ( ℛ𝒯/𝒯)^2 ).We note that the second factor is equal to the variance of ℛ(x) with respect to the probability density q(x) = 𝒯(x) p_st(x)/𝒯. If the ratio ℛ(x) between potential energy and temperature is bounded, ℛ^min≤ℛ(x) ≤ℛ^max with Δℛ = ℛ^max - ℛ^min, then we can further bound this variance from above,σ_st ≤Δℛ^2/4𝒯 .If the magnitude of the temperature gradient is also bounded, 𝒯(x) ≤𝒯^max, then we further haveσ_st ≤Δℛ^2/4𝒯^max .Note that the above still depends on the global shift U_0 of the potential, which is in principle arbitrary, but should be chosen such as to minimize the range Δℛ in order to obtain a tight bound. One heuristic way of doing so is to plot U(x) as a function of T(x) and then perform a linear regression on the data. If a linear relationship of the form U(x) = c T(x) + U_0 exists, then the parameter U_0 obtained from the fit will lead to a constant ratio ℛ(x) and thus Δℛ = 0. In other cases, we obtain a linear approximation of the relation between U(x) and T(x) and the parameter U_0 obtained from the fit will generally result in a tighter bound than simply choosing U_0 = 0. 
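A minimal Python sketch of this regression heuristic is given below for a one-dimensional potential and temperature profile with scalar mobility; it fits U(x) ≈ c T(x) + U_0, uses the fitted U_0 to evaluate Δℛ, and combines it with the maximal value of 𝒯(x) to obtain the explicit bound Δℛ² 𝒯^max / 4. The profiles are illustrative assumptions.

import numpy as np

# Illustrative 1D profiles on a grid (assumptions for the example)
x = np.linspace(-3.0, 3.0, 2001)
mu = 1.0
U = 0.5 * x**2 + 0.2 * np.sin(2 * x)          # potential U(x)
T = 1.0 + 0.3 * np.tanh(x)                    # temperature profile T(x)

# Linear regression U(x) ~ c * T(x) + U0 to pick the global shift U0
c, U0 = np.polyfit(T, U, 1)

# Ratio R(x) = (U(x) - U0) / T(x) and temperature-gradient magnitude
R = (U - U0) / T
dTdx = np.gradient(T, x)
calT = mu * dTdx**2 / T                       # scalar version of grad T . mu grad T / T

delta_R = R.max() - R.min()
bound = delta_R**2 / 4.0 * calT.max()         # sigma_st <= (Delta R)^2 T^max / 4
print("upper bound on the entropy production rate:", bound)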
We stress that this procedure only requires knowledge of the spatial dependence of T(x) and U(x) and the bound can be computed without solving the dynamics. Thus, (<ref>) provides a simple and explicit upper bound on the entropy production rate in terms of the parameters of the model.§ PERIODIC BOUNDARY CONDITIONS When introducing the orthogonality relation (<ref>), we assumed natural or reflecting boundary conditions, which, in particular, implies that the probability current vanishes at the boundaries and we can thus neglect the boundary terms when integrating by parts. For systems with periodic boundary conditions, on the other hand, there is generally a finite probability current across the boundary. Let us consider the expressionψν_st = ∫_Ω dx p_st(x) ν_st(x) ψ(x) = p_st(x) ν_st(x) ψ(x) |_∂Ω- ∫_Ω dx ψ(x) ( p_st(x) ν_st(x) ),where Ω denotes the configuration space and ∂Ω its boundary. The second term vanishes due to (<ref>). However, the first term only vanishes if the function ψ(x) has the same periodicity as the probability current ν_st(x) p_st(x), which is a periodic function in the directions where the boundary conditions are periodic. Thus, the orthogonality relation (<ref>) only holds for gradients of periodic functions ψ(x), and the same is true for all gradient fields used in subsequent expressions. As an example, suppose that the system is periodic with periodicity L_1 in direction x_1 and satisfies natural boundary conditions in directions x_2 and x_3. This is the case for a particle moving in linear channel in three dimensions, where x_1 is the direction along the channel and the particle is assumed to be confined in the transverse directions. In this case, we require the function ψ(x_1,x_2,x_3) to be periodic in direction x_1, ψ(x_1+L,x_2,x_3) = ψ(x_1,x_2,x_3), whereas in directions x_2 and x_3 the only requirement is that the function does not grow too fast such that ψ(x) ν_st(x) p_st(x) vanishes as |x_2|,|x_3| →∞ and the integral over these directions is finite. For the variational expression (<ref>) and results derived from it, we thus have to replace the minimization over all gradient fields with the minimization over gradients of functions with the appropriate periodicity. This also implies that any specific choice of ψ(x) that is used to obtain an upper bound should satisfy the same periodicity constraints. Since the potential and temperature profile have to be periodic functions to ensure the periodicity of the steady state density, (<ref>) and (<ref>) also hold for systems with periodic boundary conditions. However, the choice ψ(x) = xF^nc used to obtain (<ref>) is not a periodic function and therefore not admissible, and (<ref>) does not hold for periodic systems. A simple counterexample is a particle in a one-dimensional periodic potential U(x+L) = U(x), driven by a constant bias F^nc(x) = F_0. Here, the variance of the bias force is zero, yet the system is out of equilibrium and has a finite entropy production rate. Similarly, (<ref>) does not hold, because in this case, the effect of the bias force is largest if it acts over the entire length of the potential. We may use ψ(x) = x_npF^nc, where the vector x_np is equal to x in the non-periodic directions and equal to 0 (and thus trivially periodic) in the periodic directions. 
Then, we obtain the boundσ_st ≤1/T( F^nc_pμF^nc_p+ (F^nc_np - F^nc_np) μ(F^nc_np - F^nc_np)).Here, F^nc_p(x) is the non-conservative force acting in the periodic directions, while F^nc_np(x) is the non-conservative force acting in the non-periodic directions.§ UNDERDAMPED DYNAMICS So far, we always assumed that the system is described by an overdamped Langevin dynamics of the type (<ref>). This assumption is well-justified for small particles diffusing in a relatively dense viscous environment, whose thermal relaxation time, τ_th = γ/m with γ the friction coefficient and m the particle mass, is typically orders of magnitude shorter than the characteristic timescales of the motion in the force field. However, for large particles or dilute environments, inertia and thermal relaxation can no longer be neglected and we need to describe the dynamics using the underdamped Langevin equationsẋ(t)= v(t)mv̇(t)= F(x(t)) - γv(t) + √(2 γ T)ξ(t).Here v are the velocities of the particles, m is a diagonal matrix containing the masses of the individual particles and γ is the matrix of friction coefficients, which we assume to be block-diagonal γ = (γ_1,…,γ_K), where γ_k is the symmetric and positive definite friction coefficient matrix of the k-th particle. This form allows for different (and possibly anisotropic) friction forces for the different particles, however, we assume that the friction force does not induce any coupling between different particles. Note that since we assume the matrix m to have the same entry m_k on the diagonal elements corresponding to particle k, the matrices γ and m commute. If the force is conservative, F(x) = -U(x), then the steady state of the system is in equilibrium described by the Boltzmann-Gibbs density,p_st(x,v)= exp( - U(x)/T)/∫ dx exp( - U(x)/T)×1/(2 π T)^N/2√((m))exp( -vm^-1v/2 T) .Here N is the total number of degrees of freedoms (N = 3K for K particles in three dimensions) anddenotes the determinant. By contrast, for non-conservative forces F(x) = -U(x) + F^nc(x), the system is in a non-equilibrium steady state and the probability density no longer factorizes into a position and velocity-dependent part. Instead, we have to solve the steady-state Klein-Kramers-Fokker-Planck equation,0 = ( - v_x -1/m_v ( F(x)- γv - γ T/m_v ) ) p_st(x,v).The entropy production rate in the non-equilibrium steady state is given by the magnitude of the irreversible probability currents asσ_st = 1/T(v + T/m_v ln p_st) γ(v + T/m_v ln p_st)_st .Here, we take a division by the matrix m to mean a multiplication by m^-1. In order to proceed, we define the steady-state position density and local mean velocityp_st^x(x)= ∫ dv p_st(x,v)ν_st(x)= ∫ dv v p_st(x, v)/p_st^x(x) .Integrating (<ref>) over v, we see that these two quantities satisfy a continuity equation0 = - _x ( ν_st(x) p_st^x(x) ),in complete analogy to the overdamped case (<ref>). Since the local mean velocity is the average velocity conditioned on the position x, we introduce the fluctuations of the velocity v around its local average,ω(x,v) = v - ν_st(x) .Using this, we can write (<ref>) asσ_st = σ_st^lm + σ_st^vfwith σ_st^lm = 1/Tν_stγν_st_standσ_st^vf = 1/T(ω + T/m_v ln p_st) γ(ω + T/m_v ln p_st)_st .The first term is formally similar to the overdamped entropy production rate (<ref>) in that it measures the dissipation caused by the local mean flows. It is positive whenever the local mean velocity is non-zero, and we define it as the local mean (lm) contribution σ_st^lm. 
The second term, on the other hand, measures the non-thermal velocity fluctuations around the local mean. It vanishes only when the velocity fluctuations are given byp_st(v|x) = 1/(2 π T)^N/2√((m))exp( -ωm^-1ω/2 T) ,where p_st(v|x) = p_st(x,v)/p_st^x(x) is the conditional velocity density. We call this contribution the velocity fluctuation (vf) contribution σ_st^vf. We note that the latter can equivalently be written asσ_st^vf = 1/T( ωγω_st - T tr(m^-1) ),where tr denotes the trace of a matrix. Just as in the overdamped case, we can also express the entropy production in terms of the work done by the non-conservative force,σ_st = 1/TF^ncv_st = 1/TF^ncν_st_st,where we used that F^nc(x) only depends on the position, so we can replace the velocity by the local mean value under the average. Due to (<ref>), we can add an arbitrary gradient field in the above expression, since the average of the latter vanishes,σ_st = 1/T(F^nc + _x V) ν_st_st .We now use the Cauchy-Schwarz inequality and identify the local mean entropy production rate,σ_st^2≤1/T^2ν_stγν_st(F^nc + _x V) γ^-1(F^nc + _x V)_st = σ_st^lm1/T(F^nc + _x V) γ^-1(F^nc + _x V)_st .Since the entropy production rate is given by the sum of two positive terms, see (<ref>), we have σ_st^lm≤σ_st and therefore,σ_st≤inf_V 1/T(F^nc + _x V) γ^-1(F^nc + _x V)_st .The right-hand side is precisely the same as in the overdamped variational expression (<ref>), with the mobility matrix μ = γ^-1. While in the overdamped case, the minimization with respect to the potential V(x) results in the exact entropy production rate, the corresponding expression in the underdamped case is an upper bound on the entropy production rate. Crucially, however, any upper bound obtained using a specific choice of V(x) is an upper bound in both cases. This implies that we can use (<ref>) to obtain upper bounds on the entropy production rate, irrespective of whether the dynamics are over- or underdamped. In particular, the findings of Section <ref> with regards to bounds on the entropy production in terms of one-particle statistics remain true for the underdamped case, and we can therefore apply them even if we are not sure whether the effects of inertia can be neglected for the particular system of interest. Let us define the right-hand side of (<ref>) as σ̂_st, that is, the upper estimate on σ_st obtained by solving the minimization problem. Intriguingly, this can be used together with the actual value of the entropy production rate to estimate the velocity-fluctuation contribution from (<ref>),σ_st^vf/σ_st≤ 1 - σ_st/σ̂_st .We can estimate the contribution of the non-thermal velocity fluctuations to the overall entropy production by how much the upper bound exceeds the actual value of the latter. In the overdamped limit, the right-hand side vanishes and thus the velocity fluctuations become thermal.The above generalizes in a straightforward way to the case of multiple heat baths discussed in Section <ref>. Introducing the diagonal matrix T, which contains the temperatures of the heat baths to which the individual particles are coupled, we haveσ_st = σ_st^lm + σ_st^vfwith σ_st^lm = ν_stγT^-1ν_st_standσ_st^vf = (ω + T/m_v ln p_st) γT^-1(ω + T/m_v ln p_st)_st ,so (<ref>) generalizes in a straightforward way to the cases of several heat baths. Here we assumed that either, the temperature associated with each spatial direction of a single particle is the same, or, that the friction matrix γ is diagonal, such that γ and T commute. 
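To make the decomposition concrete, the following minimal sketch estimates the two contributions from sampled positions and velocities of a single degree of freedom; the spatial binning used to approximate the local mean velocity, and the reduction of the velocity-fluctuation term to (γ/T)(⟨ω²⟩ - T/m) in this scalar case, are simplifying assumptions.

```python
import numpy as np

def entropy_production_split(xs, vs, gamma, mass, T, nbins=50):
    """Estimate the local-mean and velocity-fluctuation contributions from trajectory samples.

    xs, vs : arrays of sampled positions and velocities (one degree of freedom)
    gamma, mass, T : friction coefficient, particle mass and bath temperature
    """
    edges = np.linspace(xs.min(), xs.max(), nbins + 1)
    idx = np.clip(np.digitize(xs, edges) - 1, 0, nbins - 1)

    # Local mean velocity nu(x): average of v conditioned on the position bin.
    nu_bin = np.array([vs[idx == b].mean() if np.any(idx == b) else 0.0
                       for b in range(nbins)])
    nu = nu_bin[idx]                  # local mean velocity evaluated at each sample
    omega = vs - nu                   # fluctuation around the local mean

    sigma_lm = gamma * np.mean(nu**2) / T                  # dissipation by local mean flows
    sigma_vf = gamma * (np.mean(omega**2) - T / mass) / T  # excess over thermal fluctuations
    return sigma_lm, sigma_vf
```

For a thermal (Maxwellian) conditional velocity distribution, ⟨ω²⟩ approaches T/m and the second estimate vanishes, as it should.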
Again, we can equivalently express the entropy production rate in terms of the work done by the total force,σ_st = -_x U T^-1ν_st_st ,where U(x) contains both the one-body and interaction potentials. In this case, we can add a term of the form T_x ψ(x) to the force without changing the result,σ_st = -(_x U - T_x ψ) T^-1ν_st_st ,since we still have ψν_st_st = 0 from (<ref>). As discussed in Section <ref>, the one-body potentials can be absorbed into the gradient term, and we therefore can write the upper bound asσ_st≤inf_ψ⟨(_x U^int- T_x ψ) (γT)^-1(_x U^int - T_x ψ) ⟩_st .This reproduces (<ref>), except that the minimization now generally yields an upper bound instead of the exact entropy production rate. Nevertheless, the upper bound (<ref>) remains valid for underdamped dynamics.Finally, for a spatially inhomogeneous temperature T(x) and friction coefficient γ(x) (for simplicity we assume T(x) to be scalar-valued in the following), we haveσ_st = σ_st^lm + σ_st^vfwith σ_st^lm = 1/Tν_stγν_st_standσ_st^vf = 1/T(ω + T/m_v ln p_st) γ(ω + T/m_v ln p_st)_st .Compared to (<ref>), the only difference is that the temperature now has to be taken inside the average. We can again obtain an equivalent expression in terms of the force, which now, however, acquires an additional term proportional to the temperature gradient,σ_st = - 1/T_x U ν_st_st - vmv/2 Tv_x ln T_st .The presence of the second term prohibits us from directly extending the results of Section <ref> to the underdamped case. It has been previously found that underdamped systems in the presence of a temperature gradient exhibit a type of hidden entropy production <cit.>, which stems from the continuous thermalization of the velocity degrees of freedom as the particle moves through the temperature gradient. This contribution to the dissipation does not vanish in the overdamped limit and cannot be estimated from the statistics of the position alone. Therefore, a bound of the form (<ref>), which only relies on a measurement of the position, cannot be expected to reproduce this contribution. § DEMONSTRATION §.§ Interacting particles on a ring In order to investigate the tightness and usefulness of the bounds derived in Section <ref>, we study a system of N Brownian particles with coordinates x = (x_1,…,x_N), where x_i = (x_i,1,x_i,2) in two dimensions. The particles are trapped inside the radially symmetric one-body potentialU_1(r_1) = U_0 ( 1/4r_1^4/L^4 - α/2r_1^2/L^2),where r_1 = ‖x_1 ‖ is the distance of a particle from the origin. For α > 0, this potential has a maximum at r_1 = 0 and a minimum at r_1 = L √(α), resulting in a ring-shaped groove (see Fig. <ref>). The particles interact via the truncated and shifted Lennard-Jones-type two-body potential,U_12(r_12) = {[ U^LJ(r_12) - U^LJ(r_c)) forr_12 < r_c; 0forr_12≥ r_c ]. with U^LJ(r) = U_I( ( λ/r)^2 δ - 2 ( λ/r)^δ) ) .Here, λ is the characteristic length scale of the interactions, the exponent δ > 0 determines the steepness of the potential and r_c is a cutoff length. r_12 = ‖x_1 - x_2 ‖ is the distance between two particles. Note that the potential is repulsive for r < λ and attractive for λ < r < r_c; for r_c = λ, the potential is purely repulsive. The total potential is given by U(x) = ∑_i U_1(r_i) + ∑_i ≠ j U_12(r_ij). The environment is at temperature T and we take the mobility to be the same constant μ for each particle. 
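The following minimal sketch collects these potentials as plain functions, using the parameter values quoted for the simulations below; it is meant only to make the model definition explicit and assumes r > 0.

```python
import numpy as np

# Parameter values used in the simulations described below (here simply as an example set).
U0, L, alpha = 10.0, 10.0, 1.0                 # trapping potential
U_I, lam, delta, r_c = 4.0, 2.0, 2.0, 4.0      # interaction potential (r_c = 2 gives purely repulsive)

def U_trap(r):
    """Radially symmetric one-body potential with a ring-shaped minimum at r = L*sqrt(alpha)."""
    return U0 * (0.25 * (r / L)**4 - 0.5 * alpha * (r / L)**2)

def U_LJ(r):
    """Lennard-Jones-type pair potential: repulsive for r < lam, attractive for lam < r < r_c."""
    return U_I * ((lam / r)**(2.0 * delta) - 2.0 * (lam / r)**delta)

def U_pair(r):
    """Truncated and shifted pair potential; identically zero beyond the cutoff r_c."""
    return np.where(r < r_c, U_LJ(r) - U_LJ(r_c), 0.0)
```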
In addition, we introduce the non-conservative one-body force, F_1^nc(x_1) = (ℱ/‖x_1‖) (x_1,2, -x_1,1)^⊤, which corresponds to a constant-magnitude force ‖F_1^nc‖ = ℱ driving a particle in a circle around the origin. Depending on the situation, this non-conservative force acts on only one particle (F^nc(x) = F_1^nc(x_1)), some particles, or all particles (F^nc(x) = ∑_i F_1^nc(x_i)). In either case, since the system is radially symmetric, we have ⟨F^nc⟩ = 0, and so the bound (<ref>) simplifies to σ_st ≤ μ K ℱ^2/T = σ_st,1d, where K is the number of driven particles. This bound only depends on a small number of parameters of the system, namely, the mobility, the temperature, the strength of the driving and the number of driven particles. In particular, the bound is independent of the precise nature of the interaction or trapping potentials, but only depends on the properties of the driving. Note that the same bound is obtained if the driven and passive particles have different physical properties, e.g., different mobilities. In this case, the bound is already the optimal one that can be obtained using the one-particle statistics of the driven particles, see (<ref>), σ_st,1d = (K/T) inf_V ⟨(F_1^nc + ∇V) μ_1 (F_1^nc + ∇V)⟩_1. To see this, we note that the entire problem is radially symmetric, and so the same has to be true for the optimal one-body potential V(x_k), where k denotes one of the driven particles. Then, the gradient of V(x_k) is along the radial direction and, therefore, perpendicular to the non-conservative force F_1^nc(x_k). Consequently, the above expression is minimized for ∇_k V(x_k) = 0, which yields (<ref>), noting that the magnitude of the driving force is constant. We now compare the upper bound (<ref>) to the actual entropy production. The latter we compute from numerical simulations of the system outlined above. Concretely, since, as discussed in Section <ref>, the bounds also apply to underdamped dynamics, we consider particles with mass m = 0.5 in contact with a heat bath at temperature T = 1.0 and with friction coefficient γ = 1 (i.e., μ = 1/γ = 1). These particles evolve according to (<ref>) with the force given by F(x) = -∑_i=1^N ∇_i U_1(r_i) - (1/2)∑_i≠j ∇_i U_12(r_ij) - ∑_i=1^K F_1^nc(x_i). We set the parameters of the trapping potential to U_0 = 10, L = 10 and α = 1, those of the interaction potential to U_I = 4, λ = 2 and δ = 2, and the strength of the driving force to ℱ = 1. Note that for these parameters, the depth of both the groove of the trapping potential (2.5 T) and the minimum of the interaction potential (1.125 T) is comparable to the temperature. We further introduce a short-range cutoff r_m = λ/3 and replace the interaction force by its maximal value at r_ij = r_m for shorter distances. This allows us to choose a more reasonable time step in the simulations, since it prevents the interaction force from becoming arbitrarily large. The simulations are performed using the stochastic velocity Verlet algorithm introduced in Ref. <cit.>, using a time step of 1.25 · 10^-4 and a total time τ = 10^3, after initial equilibration for τ_eq = 10^2 and averaging over 100 repetitions. We consider two different values for the cutoff length of the interaction potential, r_c = 4 (both repulsive and attractive interactions) and r_c = 2 (only repulsive interactions), and vary the total number of particles N and the number of driven particles K between 1 and 20 in each case. The entropy production is computed using (<ref>); the results are shown in Fig. <ref>.
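A heavily simplified version of such a simulation is sketched below. It uses a plain Euler-Maruyama step instead of the stochastic velocity Verlet scheme cited above, reuses the trap parameters L and alpha from the previous sketch, and assumes a helper total_force(x) that returns the conservative forces derived from the trapping and interaction potentials (including the short-range regularization); it is meant only to illustrate how the measured entropy production ⟨F^nc · v⟩/T is compared against the bound K ℱ²/(γ T).

```python
import numpy as np

def run_driven_ring(N, K, steps, dt=1.25e-4, m=0.5, gamma=1.0, T=1.0, F_mag=1.0, seed=0):
    """Crude Euler-Maruyama integration of the underdamped dynamics; the first K particles are driven."""
    rng = np.random.default_rng(seed)
    # Start the particles spread around the ring-shaped minimum of the trapping potential.
    theta = 2.0 * np.pi * np.arange(N) / N
    x = L * np.sqrt(alpha) * np.column_stack((np.cos(theta), np.sin(theta)))
    x = x + rng.normal(0.0, 0.1, (N, 2))
    v = rng.normal(0.0, np.sqrt(T / m), (N, 2))
    work = 0.0
    for _ in range(steps):
        F = total_force(x)                                  # conservative forces (assumed helper)
        Fnc = np.zeros_like(x)
        r = np.linalg.norm(x[:K], axis=1, keepdims=True)
        Fnc[:K] = F_mag * np.column_stack((x[:K, 1], -x[:K, 0])) / r   # tangential driving
        v += dt / m * (F + Fnc - gamma * v) \
             + np.sqrt(2.0 * gamma * T * dt) / m * rng.normal(size=(N, 2))
        x += dt * v
        work += dt * np.sum(Fnc * v)                        # work done by the driving force
    sigma_est = work / (steps * dt) / T                     # entropy production rate estimate
    sigma_bound = K * F_mag**2 / (gamma * T)                # single-particle bound, mu = 1/gamma
    return sigma_est, sigma_bound
```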
We see that for a single particle, the variational expression using one-particle statistics exactly reproduces the entropy production rate, as expected. Interestingly, the same is observed for the interacting case when all particles are driven; since all particles move coherently with the same velocity, the interactions do not reduce the current. Conversely, when only some particles are driven, the presence of the passive particles hinders their motion and the current is reduced compared to independent particles. As a consequence, the entropy production is also reduced and the inequality in (<ref>) becomes strict. This effect is considerably more pronounced in the presence of attractive interactions, which cause the driven particle to transiently form pairs with passive particles, reducing the mobility of the compound particle and therefore its ability to move in response to the driving force. In the extreme case of one driven particle in the presence of 19 passive particles, the entropy production is reduced to around half the single-particle bound, compared to a reduction of around 20% in the purely repulsive case. As illustrated in Fig. <ref>, this corresponds to a situation of almost permanent interaction between the particles. In conclusion, we find that the simple bound (<ref>) can give a useful estimate of the entropy production even in cases with strong interactions and considerable inertial effects. §.§ Büttiker-Landauer ratchet A simple model that exploits a non-equilibrium steady state in a spatially varying temperature profile to generate motion is the ratchet model introduced by Büttiker <cit.> and Landauer <cit.>. Here, an overdamped Brownian particle moves in a one-dimensional periodic potential U(x+L) = U(x) with a likewise periodic spatial temperature profile T(x+L) = T(x). If the spatial dependence of U(x) and T(x) is chosen appropriately, then this system exhibits a spatially periodic steady state with a non-zero drift velocity v_st. Due to the one-dimensional nature of this system, all involved quantities can be calculated explicitly, allowing us to compare the bound (<ref>) to the exact value of the entropy production rate. As shown in Ref. <cit.>, we obtain for the entropy production rateσ_st = μ(1 - e^ψ(L))^2 ∫_0^L dxe^ψ(x)[ ∫_x^x+L dye^ψ(y)]^-1/∫_0^L dxe^ψ(x)∫_x-L^x dye^-ψ(y)/T(y),withψ(x) = ∫_0^x dyU'(y)/T(y) .As a concrete example, we consider the potential and temperature profileU(x)= U_0 sin( k x ) T(x)= T_0 ( 1 + α/2( sin( k x + ϕ ) + 1 ) ).Here k = 2π/L, U_0 and T_0 > 0 determine the magnitude of the potential and the overall temperature, respectively, α≥ 0 determines the magnitude of the spatial temperature variation and 0 ≤ϕ < 2 π is the phase-difference between the potential and temperature profile. For α = 0, the temperature is constant and equal to T_0, while for α = 1, the temperature varies between T_0 and 2 T_0. As show in Fig. <ref>, the upper bound correctly captures the qualitative behavior of the entropy production rate: It vanishes for a spatially constant temperature (α = 0) and at ϕ = 0 or ϕ = π, where the potential and the temperature are linearly related (see the discussion in Section <ref>). By contrast, both the entropy production and the upper bound are maximal for ϕ = π/2 and increase as the spatial dependence of the temperature becomes more pronounced. Quantitatively, the actual value of the entropy production rate is around 20 % of the upper bound throughout the range of parameters. 
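To give an impression of the effort involved, the sketch below evaluates the quoted closed-form expression for σ_st by straightforward numerical quadrature for the sinusoidal profiles introduced above; the grids and quadrature resolutions are arbitrary choices, and the transcription follows the expression exactly as quoted.

```python
import numpy as np

# Sinusoidal potential and temperature profiles of the example (parameters chosen for illustration).
U0, T0, a, phi, Lp, mu = 1.0, 1.0, 1.0, np.pi / 2.0, 2.0 * np.pi, 1.0
k = 2.0 * np.pi / Lp
dU = lambda z: U0 * k * np.cos(k * z)
Tp = lambda z: T0 * (1.0 + 0.5 * a * (np.sin(k * z + phi) + 1.0))

# psi(x) = int_0^x U'(y)/T(y) dy, tabulated on a grid covering [-Lp, 2 Lp] (psi is not periodic).
yg = np.linspace(-Lp, 2.0 * Lp, 12001)
integrand = dU(yg) / Tp(yg)
psi_tab = np.concatenate(([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(yg))))
psi_tab -= np.interp(0.0, yg, psi_tab)                 # enforce psi(0) = 0
psi = lambda z: np.interp(z, yg, psi_tab)

def seg(lo, hi, n=801):
    return np.linspace(lo, hi, n)

x = np.linspace(0.0, Lp, 801)
inner_num = np.array([np.trapz(np.exp(psi(seg(xi, xi + Lp))), seg(xi, xi + Lp)) for xi in x])
inner_den = np.array([np.trapz(np.exp(-psi(seg(xi - Lp, xi))) / Tp(seg(xi - Lp, xi)),
                               seg(xi - Lp, xi)) for xi in x])
num = np.trapz(np.exp(psi(x)) / inner_num, x)
den = np.trapz(np.exp(psi(x)) * inner_den, x)
sigma_exact = mu * (1.0 - np.exp(psi(Lp)))**2 * num / den
print(f"sigma_st = {sigma_exact:.4g}")
```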
However, we stress that, in contrast to the explicit expression (<ref>), computing which requires evaluating the triple integrals in the numerator and denominator, the bound (<ref>) is easily evaluated from the known expressions (<ref>). Moreover, the different reasons for the vanishing entropy production as α→ 0 and as ϕ→ 0,π are clearly apparent in the structure of the bound; in the former case, the magnitude 𝒯 of the temperature gradient vanishes, while in the latter case Δℛ vanishes as the relation between temperature and potential becomes linear.§.§ Underdamped motion in a periodic potential In Section <ref>, we found that the upper bounds on the entropy production rate apply to both over- and underdamped dynamics. In particular, (<ref>) suggests that the difference between the upper bound and the actual value of the entropy production may be used to estimate the contribution of non-thermal velocity fluctuations to the dissipation. We study a particle with mass m and friction coefficient γ moving in a one-dimensional periodic potential U(x+L) = U(x) and driven by a constant bias force ℱ, so that the Langevin equation (<ref>) readsẋ(t)= v(t) m v̇(t)= - U'(x(t)) + ℱ - γ v(t) + √(2 γ T)ξ(t).We focus on the periodic steady state p_st(x,v) = p_st(x+L,v) of the system. In contrast to the overdamped case, no closed-form expressions for the steady-state density, drift velocity or entropy production rate are available, so we have to rely on numerical simulations of (<ref>). From the simulations, we can measure the drift velocity v_st, which allows us to determine the total entropy production rate σ_st = v_stℱ/T. Next, we note that the optimization problem in (<ref>) can be solved explicitly in one dimension. For a constant driving force, the Euler-Lagrange equation reads,∂_x ( p_st(x) ( ℱ + ∂_x V(x) ) ) = 0,with general solutionV(x) = c_1 + c_2 ∫_0^x dy1/p_st(y) - F x .The additive constant c_1 does not enter (<ref>), while c_2 can be obtained by imposing periodic boundary conditions on V(x). The resulting expression for the upper bound is thenσ̂_st = (ℱ L )^2/γ T1/∫_0^L dx1/p_st(x) .We introduce the quantityχ = L^2/∫_0^L dx1/p_st(x)≤ 1 ,where the inequality follows from the Jensen inequality and equality is attained for a uniform probability density p_st(x) = 1/L. In terms of this, we haveσ̂_st = ℱ^2/γ Tχ≤ℱ^2/γ T,where the latter expression is obtained by choosing V(x) = 0 instead of the actual minimizer. We can also determine the local mean velocity ν_st(x) in terms of p_st(x). In one dimension, the solution to the continuity equation (<ref>) isν_st(x) = v_st/L p_st(x),since the product of p_st(x) and ν_st(x) has to be constant, and its integral over one period equal to v_st. We then obtain for the local mean entropy production rate defined in (<ref>),σ_st^lm = γ/T∫_0^L dxν_st(x)^2 p_st(x) = γ v_st^2/T χ .From (<ref>), we therefore obtain the relationv_stℱ = γ/χ v_st^2 + T σ_st^vf .In the overdamped case, the last term vanishes, and v_st = χℱ/γ, i. e., the effective mobility μ_eff = v_st/ℱ is reduced by a factor χ compared to the free-space mobility 1/γ. In the underdamped case, by contrast, we haveσ_st^vf = 1/T( v_stℱ - γ/χ v_st^2 ) ,which allows us to calculate the contribution of the non-thermal velocity fluctuations to the entropy production, using the measured values of v_st and p_st(x). This also implies that v_st≤χℱ/γ, i. e., the drift velocity in the underdamped case is always less than what we would expect from an overdamped system with the same parameters and position density. 
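In practice, all quantities entering these relations are accessible from measured or simulated data; the following minimal sketch, which assumes sampled positions folded back into one period and an independently measured drift velocity, estimates χ, the optimized bound σ̂_st and the velocity-fluctuation contribution σ_st^vf.

```python
import numpy as np

def vf_entropy_from_data(x_samples, v_drift, F, gamma, T, Lp, nbins=100):
    """Estimate chi, the optimized bound and sigma^vf from positions and drift velocity.

    x_samples : sampled positions (folded back into one period [0, Lp))
    v_drift   : measured steady-state drift velocity
    F, gamma, T, Lp : bias force, friction coefficient, temperature and spatial period
    """
    xs = np.mod(x_samples, Lp)
    p, _ = np.histogram(xs, bins=nbins, range=(0.0, Lp), density=True)
    p = np.maximum(p, 1e-12)                      # histogram estimate of p_st(x), kept positive
    dx = Lp / nbins
    chi = Lp**2 / np.sum(dx / p)                  # chi = L^2 / int_0^L dx / p_st(x), <= 1

    sigma_hat = F**2 * chi / (gamma * T)          # optimized upper bound
    sigma_lm = gamma * v_drift**2 / (T * chi)     # local-mean contribution
    sigma_vf = F * v_drift / T - sigma_lm         # velocity-fluctuation contribution
    return chi, sigma_hat, sigma_vf
```

For overdamped input data the estimated σ_st^vf should vanish within statistical errors, so the same routine can also serve as a consistency check on the overdamped approximation.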
On the other hand, evaluating the right-hand side of (<ref>) using the result (<ref>) for σ̂_st, we find precisely the same result for σ_st^vf, so that the upper bound turns into an equality in the one-dimensional case. This suggests that (<ref>) can provide a quantitatively useful estimate of the velocity-fluctuation contribution to the entropy production, even without measuring the velocity fluctuations directly. Evaluating the factor χ requires determining the position density from the simulation data; (<ref>) also provides an upper bound on the velocity-fluctuation contribution in terms of more easily accessible system parameters and the drift velocity,σ_st^vf≤v_st/T(ℱ- γ v_st) .The values of σ_st^vf as well as this upper bound are shown in Fig. <ref>. We see that at both small and large driving force, the entropy production due to non-thermal velocity fluctuations vanishes. For weak driving, the system is close to equilibrium, and thus the velocity distribution becomes thermal. For strong driving, on the other hand, the system behaves like biased free diffusion, which also exhibits thermal velocity fluctuations relative to the drift velocity. At intermediate driving, the velocity-fluctuation entropy production exhibits a peak. Both the driving force corresponding to the peak, as well as the peak value increase with decreasing mass. The first effect can be understood by noting that, at larger mass, the inertia of the particle leads to so-called running solutions, where the particle essentially continues sliding down the potential for a significant amount of time before becoming trapped in one of the potential wells again. Thus, the force that is required to approach the free diffusion limit decreases with increasing mass. The increasing peak value of σ_st^vf with decreasing mass, on the other hand, is at first glance counterintuitive, since we expect the velocity distribution to become thermal in the overdamped limit. However, the validity of the overdamped limit requires that the thermalization of the velocity distribution should be smaller than all other timescales associated with the particle's motion. For increasing force, the time required for the particle to traverse the potential, and, therefore, the characteristic timescale for a change in the force, also decrease, and we have to consider ever smaller masses for the overdamped limit to be valid as we drive the system further from equilibrium. This is confirmed when considering σ_st^vf as a function of the mass (bottom panel in Fig. <ref>). At a fixed value of the driving force, the velocity fluctuations do eventually become thermal as we decrease the mass. However, the eventual decay of σ_st^vf with decreasing mass occurs at smaller masses for larger forces, leading to a significant deviations from thermal fluctuations even at relatively small mass. Finally, we note that the upper bound (<ref>) on the velocity-fluctuation entropy production reproduces this qualitative behavior very well, allowing us to infer the degree of non-thermal velocity fluctuations using only the parameters of the system and the observed steady-state velocity. § DISCUSSION Let us first remark on two recent, related results. First, in Ref. <cit.>, an upper bound on the entropy production in terms of the entropy flow has been obtained. 
For the class of systems considered here (steady-state systems in contact with thermal environments), however, the entropy flow is already equal to the entropy production, and so the latter bound is only meaningful in more general situations, where the environments are not necessarily thermal. Second, Ref. <cit.> discusses an upper bound on the entropy production in jump processes on a discrete state space, which is expressed in terms of the maximum thermodynamic force in the system. This result, while obtained in a different physical setting, is closely related to the bounds derived in the present work. Indeed, from (<ref>) it is straightforward to see that the maximal magnitude of the nonconservative force also yields an upper bound. Using the variational characterization of entropy production for jump processes derived in Ref. <cit.>, it should be possible to obtain more refined versions of the bound in Ref. <cit.>.A fundamental consequence of the upper bound in terms of the one-particle density (<ref>) is that the presence of additional, passive degrees of freedom in the system generally leads to a reduction in the dissipation. As discussed in Section <ref>, this statement is strictly true only if the interactions do not substantially modify the one-particle density.However, it still appears to be in contradiction to the established notion of hidden entropy production <cit.>, where neglecting degrees of freedom through coarse-graining causes one to underestimate the dissipation in the system. This apparent contradiction is resolved by noting that the one-particle bound still includes all the information about the non-conservative force as the source of the dissipation.Therefore, (<ref>) does not correspond to a coarse-grained description of the system, even though less information (one-particle density vs. many-particle density) is required to compute it compared to the exact entropy production.In experimental settings, an upper bound on the entropy production rate could be combined with known lower bounds to narrow the range of possible values of the entropy production. Provided that the thermodynamic forces driving the system out of equilibrium are known, the bounds that we derived here can be computed using various degrees of experimentally obtained information. If the probability density of the full system is known, then the variational expression (<ref>) reproduces the exact entropy production rate. On the other hand, if only one-particle statistics are available, we can use (<ref>) to derive an upper bound that, as the example in Section <ref> shows, can be relatively tight even in strongly interacting systems.The upper bounds derived here are also particularly suited for theoretical studies of non-equilibrium phenomena, where we often start from a particular model and want to investigate its behavior and entropy production. In this case, it can be useful to know the amount of entropy production that the model can exhibit in principle, without solving the entire dynamics. An upper bound then tells us how far from equilibrium it can potentially be for certain parameters. This knowledge could then be combined with existing lower bounds to predict the maximal magnitude of non-equilibrium phenomena. For example, the thermodynamic uncertainty relation <cit.> states that the entropy production bounds the precision of stochastic currents. 
Thus, an upper bound on the entropy production also implies a maximal precision of any current in the system, without having to compute the entropy production explicitly. One interesting detail in this context is that, while it is known that the thermodynamic uncertainty relation can be violated in underdamped systems <cit.>, the upper bounds on the entropy production hold for both over- and underdamped systems.Finally, we remark on some possible extensions of the present results. As mentioned in Section <ref>, most bounds obtained here also apply to the housekeeping entropy production in the Maes-Netočnỳ decomposition <cit.> for time-dependent driven systems. It would be interesting to see whether similar upper bounds can also be obtained for the excess part, for example using its known relation with Wasserstein distance <cit.>, yielding an upper bound on the total entropy production for time-dependent systems. Extending on the results of Section <ref>, it may be possible to derive more useful bounds for heat conduction systems, taking into account the structure of the interactions. Such bounds could be useful to estimate the heat flow, and thus thermal conductance, from the model parameters. Finally, as remarked in Section <ref>, deriving an upper bound for the entropy production in underdamped systems with temperature gradients remains an open yet interesting problem, as such a bound could also provide an estimate on the hidden entropy production that is neglected in the overdamped description. A. D. is supported by JSPS KAKENHI (Grant No. 19H05795, and 22K13974).37 fxundefined [1]ifx#1fnum [1]#1firstoftwosecondoftwo fx [1]#1firstoftwosecondoftwonoop [0]secondoftworef[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0]rl [1]href #1 @bib@innerbibempty[Eichhorn et al.(2002)Eichhorn, Reimann, and Hänggi]Eic02 author author R. Eichhorn, author P. Reimann,and author P. Hänggi,title title Brownian motion exhibiting absolute negative mobility, @noopjournal journal Phys. Rev. Lett. volume 88, pages 190601 (year 2002)NoStop [Ros et al.(2005)Ros, Eichhorn, Regtmeier, Duong, Reimann, and Anselmetti]Ros05 author author A. Ros, author R. Eichhorn, author J. Regtmeier, author T. T. Duong, author P. Reimann, and author D. Anselmetti, title title Absolute negative particle mobility, @noopjournal journal Nature volume 436, pages 928 (year 2005)NoStop [Oberreiter et al.(2022)Oberreiter, Seifert, and Barato]Obe22 author author L. Oberreiter, author U. Seifert, and author A. C. Barato, title title Universal minimal cost of coherent biochemical oscillations, @noopjournal journal Phys. Rev. E volume 106,pages 014106 (year 2022)NoStop [Cross and Hohenberg(1993)]Cro93 author author M. C. Cross and author P. C. Hohenberg, title title Pattern formation outside of equilibrium, https://doi.org/10.1103/RevModPhys.65.851 journal journal Rev. Mod. Phys. volume 65, pages 851 (year 1993)NoStop [Gollub and Langer(1999)]Gol99 author author J. P. Gollub and author J. S. Langer, title title Pattern formation in nonequilibrium physics, https://doi.org/10.1103/RevModPhys.71.S396 journal journal Rev. Mod. Phys. volume 71, pages S396 (year 1999)NoStop [Barato and Seifert(2015)]Bar15 author author A. C. Barato and author U. Seifert, title title Thermodynamic uncertainty relation for biomolecular processes, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.114.158101 journal journal Phys. Rev. Lett. 
volume 114, pages 158101 (year 2015)NoStop [Dechant and Sasa(2018)]Dec17 author author A. Dechant and author S.-i. Sasa, title title Current fluctuations and transport efficiency for general Langevin systems, http://stacks.iop.org/1742-5468/2018/i=6/a=063209 journal journal J. Stat. Mech. Theory E. volume 2018, pages 063209 (year 2018)NoStop [Pietzonka et al.(2017)Pietzonka, Ritort, and Seifert]Pie17 author author P. Pietzonka, author F. Ritort,and author U. Seifert,title title Finite-time generalization of the thermodynamic uncertainty relation, https://doi.org/10.1103/PhysRevE.96.012101 journal journal Phys. Rev. E volume 96, pages 012101 (year 2017)NoStop [Li et al.(2019)Li, Horowitz, Gingrich, and Fakhri]Li19 author author J. Li, author J. M. Horowitz, author T. R. Gingrich, andauthor N. Fakhri, title title Quantifying dissipation using fluctuating currents, @noopjournal journal Nature Comm. volume 10, pages 1 (year 2019)NoStop [Seifert(2019)]Sei19 author author U. Seifert, title title From stochastic thermodynamics to thermodynamic inference, @noopjournal journal Annu. Rev. Cond. Matt. Phys. volume 10, pages 171 (year 2019)NoStop [Horowitz and Gingrich(2020)]Hor20 author author J. M. Horowitz and author T. R. Gingrich, title title Thermodynamic uncertainty relations constrain non-equilibrium fluctuations, https://doi.org/10.1038/s41567-019-0702-6 journal journal Nature Phys. volume 16, pages 15 (year 2020)NoStop [Skinner and Dunkel(2021)]Ski21 author author D. J. Skinner and author J. Dunkel, title title Estimating entropy production from waiting time distributions, https://doi.org/10.1103/PhysRevLett.127.198101 journal journal Phys. Rev. Lett. volume 127,pages 198101 (year 2021)NoStop [Dechant and Sasa(2021)]Dec21c author author A. Dechant and author S.-i. Sasa, title title Improving thermodynamic bounds using correlations, https://doi.org/10.1103/PhysRevX.11.041061 journal journal Phys. Rev. X volume 11, pages 041061 (year 2021)NoStop [van der Meer et al.(2022)van der Meer, Ertel, and Seifert]Mee22 author author J. van der Meer, author B. Ertel, and author U. Seifert, title title Thermodynamic inference in partially accessible markov networks: A unifying perspective from transition-based waiting time distributions, https://doi.org/10.1103/PhysRevX.12.031025 journal journal Phys. Rev. X volume 12, pages 031025 (year 2022)NoStop [Harunari et al.(2022)Harunari, Dutta, Polettini, andRoldán]Har22 author author P. E. Harunari, author A. Dutta, author M. Polettini, andauthor E. Roldán, title title What to learn from a few visible transitions' statistics?, https://doi.org/10.1103/PhysRevX.12.041026 journal journal Phys. Rev. X volume 12, pages 041026 (year 2022)NoStop [Coffey and Kalmykov(2017)]Cof17 author author W. M. Coffey and author Y. P. Kalmykov, @nooptitle The Langevin equation: with applications to stochastic problems in physics, chemistry, and electrical engineering (publisher World Scientific,year 2017)NoStop [Risken(1986)]Ris86 author author H. Risken, @nooptitle The Fokker-Planck Equation (publisher Springer Berlin, year 1986)NoStop [Sekimoto(2010)]Sek10 author author K. Sekimoto, https://books.google.de/books?id=8Fq7BQAAQBAJ title Stochastic Energetics, Lecture Notes in Physics(publisher Springer Berlin Heidelberg, year 2010)NoStop [Seifert(2012)]Sei12 author author U. Seifert, title title Stochastic thermodynamics, fluctuation theorems and molecular machines, http://stacks.iop.org/0034-4885/75/i=12/a=126001 journal journal Rep. Prog. Phys. 
volume 75,pages 126001 (year 2012)NoStop [Maes and Netočnỳ(2014)]Mae14 author author C. Maes and author K. Netočnỳ, title title A nonequilibrium extension of the Clausius heat theorem, @noopjournal journal J. Stat. Phys. volume 154, pages 188 (year 2014)NoStop [Dechant et al.(2022)Dechant, Sasa, and Ito]Dec22 author author A. Dechant, author S.-i. Sasa,and author S. Ito, title title Geometric decomposition of entropy production in out-of-equilibrium systems, @noopjournal journal Phys. Rev. Research volume 4, pages L012034 (year 2022)NoStop [Jayannavar and Mahato(1995)]Jay95 author author A. M. Jayannavar and author M. C. Mahato, title title Macroscopic equation of motion in inhomogeneous media: a microscopic treatment, @noopjournal journal Pramana volume 45, pages 369 (year 1995)NoStop [Celani et al.(2012)Celani, Bo, Eichhorn, and Aurell]Cel12 author author A. Celani, author S. Bo, author R. Eichhorn, and author E. Aurell, title title Anomalous thermodynamics at the microscale, https://doi.org/10.1103/PhysRevLett.109.260603 journal journal Phys. Rev. Lett. volume 109,pages 260603 (year 2012)NoStop [Bussi and Parrinello(2007)]Bus07 author author G. Bussi and author M. Parrinello, title title Accurate sampling using Langevin dynamics, https://doi.org/10.1103/PhysRevE.75.056707 journal journal Phys. Rev. E volume 75, pages 056707 (year 2007)NoStop [Büttiker(1987)]Bue87 author author M. Büttiker, title title Transport as a consequence of state-dependent diffusion, https://doi.org/10.1007/BF01304221 journal journal Z. Phys. B volume 68, pages 161 (year 1987)NoStop [Landauer(1988)]Lan88 author author P. Landauer, title title Motion out of noisy states, https://doi.org/10.1007/BF01011555 journal journal J. Stat. Phys. volume 53,pages 233 (year 1988)NoStop [Salazar(2022)]Sal22 author author D. S. Salazar, title title Upper bound for quantum entropy production from entropy flux, @noopjournal journal Phys. Rev. E volume 105,pages L042101 (year 2022)NoStop [Nishiyama and Hasegawa(2023)]Nis23 author author T. Nishiyama and author Y. Hasegawa, title title Upper bound for entropy production in markov processes, @noopjournal journal arXiv preprint arXiv:2306.15251(year 2023)NoStop [Yoshimura et al.(2023)Yoshimura, Kolchinsky, Dechant, andIto]Yos23 author author K. Yoshimura, author A. Kolchinsky, author A. Dechant, and author S. Ito,title title Housekeeping and excess entropy production for general nonlinear dynamics, @noopjournal journal Phys. Rev. Research volume 5, pages 013017 (year 2023)NoStop [Kawaguchi and Nakayama(2013)]Kaw13 author author K. Kawaguchi and author Y. Nakayama, title title Fluctuation theorem for hidden entropy production, @noopjournal journal Phys. Rev. E volume 88, pages 022147 (year 2013)NoStop [Chun and Noh(2015)]Chu15 author author H.-M. Chun and author J. D. Noh,title title Hidden entropy production by fast variables, @noopjournal journal Phys. Rev. E volume 91, pages 052128 (year 2015)NoStop [Wang et al.(2016)Wang, Kawaguchi, Sasa, and Tang]Wan16 author author S.-W. Wang, author K. Kawaguchi, author S.-i. Sasa, andauthor L.-H. Tang, title title Entropy production of nanosystems with time scale separation, @noopjournal journal Phys. Rev. Lett. volume 117, pages 070601 (year 2016)NoStop [Nakayama et al.(2018)Nakayama, Kawaguchi, and Nakagawa]Nak18 author author Y. Nakayama, author K. Kawaguchi, and author N. 
Nakagawa, title title Unattainability of Carnot efficiency in thermal motors: Coarse graining and entropy production of Feynman-Smoluchowski ratchets, @noopjournal journal Phys. Rev. E volume 98,pages 022102 (year 2018)NoStop [Gingrich et al.(2016)Gingrich, Horowitz, Perunov, andEngland]Gin16 author author T. R. Gingrich, author J. M. Horowitz, author N. Perunov,and author J. L. England,title title Dissipation bounds all steady-state current fluctuations, https://doi.org/10.1103/PhysRevLett.116.120601 journal journal Phys. Rev. Lett. volume 116,pages 120601 (year 2016)NoStop [Pietzonka(2022)]Pie22 author author P. Pietzonka, title title Classical pendulum clocks break the thermodynamic uncertainty relation, @noopjournal journal Phys. Rev. Lett. volume 128, pages 130606 (year 2022)NoStop [Dechant(2022)]Dec22b author author A. Dechant, @nooptitle Bounds on the precision of currents in underdamped langevin dynamics (year 2022),https://arxiv.org/abs/2202.10696 arXiv:2202.10696 [cond-mat.stat-mech] NoStop [Nakazato and Ito(2021)]Nak21 author author M. Nakazato and author S. Ito,title title Geometrical aspects of entropy production in stochastic thermodynamics based on Wasserstein distance,@noopjournal journal Phys. Rev. Research volume 3, pages 043093 (year 2021)NoStop | http://arxiv.org/abs/2310.17929v1 | {
"authors": [
"Andreas Dechant"
],
"categories": [
"cond-mat.stat-mech"
],
"primary_category": "cond-mat.stat-mech",
"published": "20231027070114",
"title": "Upper bounds on entropy production in diffusive dynamics"
} |
Direct numerical simulation of turbulent open channel flow: Streamwise turbulence intensity scaling and its relation to large-scale coherent motions
Christian Bauer^1, Yoshiyuki Sakai^2 and Markus Uhlmann^3
^1 Institute of Aerodynamics and Flow Technology, German Aerospace Center, Germany, [email protected]
^2 TUM School of Engineering and Design, Technical University of Munich, Germany
^3 Institute for Hydromechanics, Karlsruhe Institute of Technology, Germany
2023-10-25
We conducted direct numerical simulations of turbulent open channel flow (OCF) and closed channel flow (CCF) at friction Reynolds numbers up to Re_τ≈ 900 in large computational domains up to L_x× L_z=12π h × 4π h to analyse the Reynolds number scaling of turbulence intensities. Unlike in CCF, our data suggest that the streamwise turbulence intensity in OCF scales with the bulk velocity for Re_τ≳ 400. The additional streamwise kinetic energy in OCF with respect to CCF is provided by larger and more intense very-large-scale motions in the former type of flow. Therefore, compared to CCF, larger computational domains of L_x× L_z=12π h× 4π h are required to faithfully capture very-large-scale motions in OCF and to observe the reported scaling. OCF and CCF turbulence statistics data sets are available at <https://doi.org/10.4121/88678f02-2a34-4452-8534-6361fc34d06b>.
§ INTRODUCTION
Plane Poiseuille flow, also known as closed channel flow (CCF), is one of the most studied canonical flows by means of direct numerical simulations (DNSs, <cit.>). The numerical domain is defined by doubly-periodic boundary conditions in the stream- and spanwise directions, and impermeable no-slip walls at the bottom and the top. Conversely, less attention has been paid to open channel flow (OCF), where one of the no-slip walls is replaced by a free-slip plane, despite its even more direct relevance to environmental flows. Most of the early OCF DNSs were restricted to small friction Reynolds numbers Re_τ = u_τ h/ν, based on the friction velocity u_τ, the channel height h and the kinematic viscosity ν, and/or to small computational domains. Recently, Yao et al. <cit.> compared CCF and OCF by means of DNS data in computational domains of L_x × L_z=8π h × 4π h for Reynolds numbers up to Re_τ = 2000. In agreement with experiments <cit.> and DNSs <cit.>, they reported that so-called very-large-scale motions (VLSMs), which are a common feature of wall-bounded turbulent flows, are more energetic and larger in OCF than in CCF. Generally, VLSMs become more energetic with increasing Reynolds number <cit.>, leading to the failure of wall scaling of the streamwise intensity in CCF <cit.>. While for CCF the streamwise turbulence intensity scales neither with the friction velocity nor with the bulk velocity, Afzal et al. <cit.> suggested, based on an experimental data set, that the streamwise turbulence intensity in OCF scales with the bulk velocity. Other experimental studies, such as Duan et al. <cit.>, on the contrary, show the (relatively scattered) streamwise turbulence intensity scaled in wall units together with universal scaling functions proposed by Nezu and Rodi <cit.>. Thus, additional highly accurate OCF data sets are required to answer the question whether the streamwise turbulence intensity in OCF scales in bulk units.
Since the streamwise length of VLSMs is an order of magnitude larger than the channel half width, exceptionally large domains are required to faithfully capture VLSMs in DNS <cit.>. Moreover, Pinelli et al. <cit.> found that the small scales—responsible for the mass transfer across the free surface in OCF—are spatially organised by VLSMs, which requires an additional grid refinement towards the free surface.In the present study, we investigate the scaling of the streamwise turbulence intensity in OCF by means of DNSs. For this purpose, we generate new OCF and CCF DNS datasets at comparable Reynolds numbers up toRe_τ=900 in identical computational domains of L_x × L_z = 12π h × 4π h, with grid refinements towards no-slip and free-slip walls. By removing Reynolds number or domain size discrepancies between OCF and CCF, we are able to quantify the impact of the free-slip boundary condition on turbulence one-point statistics in the cleanest possible manner. § METHODOLOGYWe solve the incompressible Navier-Stokes equations in their wall-normal velocity/vorticity formulation with a pseudo-spectral representation of the flow variables <cit.>.We apply periodic boundary conditions in the stream- and the spanwise directions, the smooth no-slip wall at the bottom and the free-slip boundary condition at the top of the channel. Although the numerical method has shown its validity in numerous CCF simulations <cit.>, the introduction of a free-slip surface has to be regarded separately. A Chebyshev-Gauss-Lobatto grid, which is adopted in the surface-normal direction provides a grid refinement both towards the wall and the free surface.We set up six OCF and four CCF cases, where the Reynolds number and the computational domain length are varied (cf. table <ref>). Hereinafter, the velocity fluctuation in i-direction is defined as u^'_i = u_i - ⟨ u_i ⟩, where angular brackets denote averaging over both homogeneous directions and time. Normalisation in viscous or “wall units” is indicated by the + superscript. Corresponding scales are u_τ and the viscous length scale δ_ν=ν/u_τ. Characteristic scales in “bulk units”, on the other hand, are the bulk velocity u_b and the channel height h, which forms the bulk Reynolds number Re_b = u_b h / ν.§ COMPUTATIONAL DOMAIN SIZEIn the following, the effect of the constraints upon large-scale coherent motions in the vicinity of the free surface due to the finite size of the computational domain is investigated.Figure <ref> displays instantaneous realisations of the streamwise fluctuating velocity for the three open channel simulation cases with different computational domain size O900L4, O900L8, and O900L12 in the vicinity of the free-slip boundary (y/h≈0.95), together with closed channel data for comparison.A coherent structure with a streamwise extent comparable to the domain lengthl_x ≈ 12π h and a spanwise extent ofl_z ≈ 2h is clearly visible in figure <ref>(c) for O900L12 at z/h≈ 9, but not for the smaller boxes (a,b), where the flow domain interferes with the natural extensions of such a structure. Moreover, the structures in the vicinity of the free-slip boundary in OCF appear to be more intense and larger than the structures near the centreline of CCF (figure <ref>d). 
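The spectra discussed next are computed from wall-parallel planes of such snapshots; a minimal sketch of the corresponding post-processing, assuming a uniform periodic grid in the streamwise direction and leaving the exact normalisation convention open, is given below.

```python
import numpy as np

def premultiplied_streamwise_spectrum(u_plane, Lx):
    """Pre-multiplied one-dimensional streamwise spectrum k_x E_uu(k_x) of a fluctuation field.

    u_plane : 2D array u'(x, z) of streamwise velocity fluctuations in a wall-parallel plane
    Lx      : streamwise domain length (grid assumed uniform and periodic in x)
    """
    nx = u_plane.shape[0]
    u_hat = np.fft.rfft(u_plane - u_plane.mean(), axis=0) / nx   # streamwise Fourier transform
    E_uu = 2.0 * np.mean(np.abs(u_hat)**2, axis=1)               # average over the spanwise direction
    kx = 2.0 * np.pi * np.arange(E_uu.size) / Lx                 # streamwise wavenumbers
    lam_x = np.divide(2.0 * np.pi, kx, out=np.full_like(kx, np.inf), where=kx > 0)
    return lam_x, kx * E_uu                                      # wavelength lambda_x and k_x E_uu
```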
Thus, larger computational domains than in CCF are required to fully capture these scales.Figure <ref> displays one-dimensional pre-multiplied energy spectra of the streamwise velocity fluctuations for Re_τ≈900 and different computational domain length as a function of the wavelength and the wall distance.Although at first glance both the streamwise and the spanwise pre-multiplied energy spectra decay at large wavelengths for O900L4 (figure <ref>a,b), in fact larger box sizes are required to physically capture the largest scales that only appear in the larger domains. This can be seen in the streamwise spectra for O900L8 (figure <ref>c), where a spectral pile-up at large wavelength occurs. For O900L12 (figure <ref>e,f), on the other hand, both the streamwise and the spanwise spectra decay at large wavelengths again. Thus, a domain size of L_x × L_z =12π h × 4π h appears to be sufficiently large to capture a relevant amount of VLSMs. Moreover, a comparison of the spanwise spectra (figure <ref>b,d,f) indicates an artificially large accumulation of energy at spanwise wavelengths around λ_z/h≈ 2 in the smallest box (L_x=4π h, L_z=2π h). In summary, as in CCF, VLSMs appear in the bulk flow, penetrating into the buffer layer and therefore through these motions the whole flow domain is influenced by the free-surface layer. Due to their large spatial extent, larger computational domains than in CCF are needed to fully capture these scales. A domain size of L_x=12π h was found to be sufficient at Re_τ=900. § TURBULENCE INTENSITY SCALINGLet us now focus on the variation with Reynolds number of the turbulence intensities normalised with the bulk velocity u_b. Both open and closed channel turbulence intensities obtained from DNSs with the largest computational domain (L_x × L_z=12π h× 4 π h) are presented in figure <ref>.While the spanwise and wall-normal contributions to the turbulence intensity scale well with the friction velocity u_τ in the bulk region of the flow for Re_τ≥ 400for both flow configurations (figure omitted), the streamwise contribution differs. As figure <ref>(a) indicates, the streamwise turbulence intensity in OCF appears to scale with the bulk velocity u_b for Re_τ≥ 400 and y/h ≳ 0.3, whereas in the closed channel case it neither scales with u_τ (figure omitted) nor with u_b (figure <ref>b). This scaling property of OCF is in qualitative agreement with the experimental study of OCF by <cit.>, who observed the scaling of the outer flow streamwise turbulence intensity with u_b for 1260 ≤ Re_τ≤ 5280.Besides, the streamwise turbulence intensity profile obtained from recent OCF measurements by <cit.> at Re_τ=2407 normalised with u_b collapses well with our data (figure <ref>a inset). The above observation differs significantly from CCF, where the spectral signature of VLSMs first appears at Re_τ≈ 600 (figure omitted) and the contribution of VLSMs to the streamwise turbulence intensity is much lower than in OCF <cit.>. Even at Re_τ = 10049, which is currently the highest Reynolds number ever been achieved by a CCF DNS <cit.>, the bulk scaling of the streamwise turbulence intensity has not been observed. Hence, it is remarkable that OCF streamwise turbulence intensity scales with u_b at a Reynolds number as low as Re_τ=400, if a sufficiently large numerical domain is used.§ CONCLUSIONWe performed DNSs of OCF and CCF in computational domains up to L_x× L_z=12π h × 4π h and Reynolds numbers up toRe_τ=900. 
The enhancement of VLSMs in OCF leads to more stringent requirements on the domain size than in the closed channel counterpart. Since VLSMs appear at lower Re in OCF, a domain size L_x ≥ 12π h is already required at the moderate value Re_τ=400. Unlike CCF, the streamwise turbulence intensity in OCF scales with the bulk velocity u_b for Re_τ≥ 400 and y/h ≳ 0.3, which is attributed to the larger and more intense VLSMs in OCF. Although it was suggested by the experiment of Afzal et al. <cit.>, this is the first time the bulk scaling of the streamwise turbulence intensity has been observed in DNS data. In addition, only with L_x = 12π h the bulk-scaling profile appears in the Re_τ=400 case (figure omitted), which indicates that VLSMs in OCF already play a significant role at this Reynolds number. Thus, it is of critical importance to use sufficiently large numerical domains. Finally, both OCF and CCF turbulence statistics data sets are available online <cit.>. Many fruitful discussions with Genta Kawahara throughout this work are gratefully acknowledged.The simulations were carried out at UC2 of SCC Karlsruhe and at DLR's CARA cluster. The computer resources and assistance provided by these centres are gratefully acknowledged. YS and MU acknowledge funding by DFG through grant UH 242/3-1.1BauerData2023 Bauer C., Sakai Y., and Uhlmann M. (2023), "Data underlying the publication: Direct numerical simulation of turbulent open channel flow", 4TU.ResearchData, https://doi.org/10.4121/88678F02-2A34-4452-8534-6361FC34D06B.V1doi: 10.4121/88678F02-2A34-4452-8534-6361FC34D06B.V1Kim1987Kim J., Moin P. and Moser R.D. (1987),J. Fluid Mech., vol. 177, 133–166Moser1999Moser R.D., Kim J., and Mansour N.N. (1999),Phys. Fluids, vol. 11, 943–945Hoyas2006Hoyas S. and Jiménez J. (2006),Phys. Fluids, vol. 18, 11702Lee2015Lee M. and Moser R. D. (2015),J. Fluid Mech., vol. 774, 395–415Oberlack2022 Oberlack M., Hoyas S., Kraheberger S. V., Alcántara-Ávila F. and Laux J. (2022),Phys. Rev. Lett., vol. 128, 024502Yao2022 Yao J., Chen X. and Hussain F. (2022),J. Fluid Mech., vol. 953, A19 Duan2020Duan Y., Chen Q., Li D. and Zhong Q. (2020),J. Fluid Mech., vol. 892, A3 Duan2021Duan Y., Zhong Q., Wang G., Zhang P. and Li D. (2021), J. Fluid Mech., vol. 918, A40 Bauer2015Bauer C. (2015), Master's thesis, KIT, Karlsruhe Afzal2009 Afzal B., Faruque M. A. and Balachandar, R. (2009), J. Hydraul. Res., vol. 47(1), 66–81,Nezu1986Nezu I. and Rodi W. (1986),J. Hydraul. Eng., vol. 112,335–355, Lozano2014Lozano-Durán A. and Jiménez J. (2014),Phys. Fluids, vol. 26, 011702, Feldmann2018Feldmann D., Bauer C. and Wagner C. (2018),J. Turbul., vol. 19(3), 274–295 Pinelli2022Pinelli M., Herlina H., Wissink J.G. and Uhlmann, M. (2022),J. Fluid Mech., vol. 933, A49DelAlamo2003 del Álamo J. C. and Jiménez J. (2003)Phys. Fluids, vol. 15(6), L41–L44 | http://arxiv.org/abs/2310.17948v1 | {
"authors": [
"Christian Bauer",
"Yoshiyuki Sakai",
"Markus Uhlmann"
],
"categories": [
"physics.flu-dyn"
],
"primary_category": "physics.flu-dyn",
"published": "20231027074412",
"title": "Direct numerical simulation of turbulent open channel flow: Streamwise turbulence intensity scaling and its relation to large-scale coherent motions"
} |
[ Disentangled Representation Learning with Large Language Models for Text-Attributed Graphs Yijian Qints Xin Wangts Ziwei Zhangts Wenwu Zhuts tsDepartment of Computer Science and Technology, Tsinghua UniversityWenwu [email protected] Xin [email protected] Learning, ICML0.3in ]Text-attributed graphs (TAGs) are prevalent on the web and research over TAGs such as citation networks, e-commerce networks and social networks has attracted considerable attention in the web community. Recently, large language models (LLMs) have demonstrated exceptional capabilities across a wide range of tasks. However, the existing works focus on harnessing the potential of LLMs solely rely on prompts to convey graph structure information to LLMs, thus suffering from insufficient understanding of the complex structural relationships within TAGs. To address this problem, in this paper we present the Disentangled Graph-Text Learner (DGTL) model, which is able to enhance the reasoning and predicting capabilities of LLMs for TAGs. Our proposed DGTL model incorporates graph structure information through tailored disentangled graph neural network layers, enabling LLMs to capture the intricate relationships hidden in text-attributed graphs from multiple structural factors. Furthermore, DGTL operates with frozen pre-trained LLMs, reducing computational costs and allowing much more flexibility in combining with different LLM models. Experimental evaluations demonstrate the effectiveness of the proposed DGTL model on achieving superior or comparable performance over state-of-the-art baselines. Additionally, we also demonstrate that our DGTL model can offer natural language explanations for predictions, thereby significantly enhancing model interpretability. § INTRODUCTIONText-attributed graphs (TAGs) are employed to represent a group of structured data where textual entities are connected by graph relations. TAGs are ubiquitous on the web, such as citation networks, e-commerce networks, social media, recommendation systems, web pages etc. As such, representation learning on TAGs has become an important research problem recently in the web community, where the TAGs are explored to capture rich semantic relationships and dependencies among connected textual elements, providing valuable contexts for better understanding and reasoning in various downstream tasks. Classical text-attributed graph representation approaches normally utilize graph neural networks (GNNs) to capture structural information, transforming textual attributes into shallow or hand-crafted representations such as bag-of-words or skip-gram features, which will then be used for prediction tasks in TAG <cit.>. Some works also use natural language processing models to enhance GNN classifiers by augmenting the node features and capture the rich semantic information <cit.>. Recent advancements in machine learning and artificial intelligence have witnessed the emergence of large language models (LLMs) that exhibit unprecedented capabilities in various tasks <cit.>. These models have demonstrated remarkable proficiency in natural language processing related tasks including language generation, machine translation and sentiment analysis, as well as other fields such as recommendation system <cit.>, social network analysis <cit.>, code analysis <cit.>, bioinformatics <cit.> and many more.Therefore, the advent of LLMs has promoted the direct exploration of LLMs to solve prediction problems in TAG without GNN classifiers. 
These existing approaches solely rely on prompts to convey graph structure information to LLMs, suffering from insufficient understanding of the complex structural relationships within TAGs. For example, Graph-LLM <cit.> uses neighbor summary to generate prompts with structural information, while InstructGLM <cit.> directly describes all the neighbors in the prompt. Nevertheless, only utilizing prompts to pass information about graph structure over LLMs hinders the model's ability to capture and utilize the intricate relationships and dependencies encoded in the graph structures of TAGs, resulting in limited capability of exploiting the potential power of LLMs in TAG tasks.To tackle this problem, in this paper we go beyond the vanilla prompt-based methods and effectively integrate graph structure information into LLMs, in order to enable the holistic utilization of LLMs' exceptional powers in TAG tasks. However, achieving this goal poses the following technical challenges. First, TAGs usually contain rich yet entangled structural information, i.e., not all the information is relevant or helpful for downstream tasks. LLMs need to effectively filter out and extract the pertinent information while disregarding the irrelevant details carried in the graph structure. Second, adapting LLMs to a specific TAG task is challenging, given that LLMs typically require extensive training and learning of task-specific knowledge. The process of fine-tuning LLMs for particular tasks involves striking the balance between maintaining the pre-trained knowledge and acquiring new knowledge specific to the target tasks. To solve these challenges, we propose the Disentangled Graph Text Learner (DGTL) model in this paper, which can boost LLMs in deeply exploiting structural information to enhance their reasoning and predicting capabilities for TAG tasks. The proposed DGTL model first encodes raw text information carried in TAGs using a pre-trained LLM. Then, a group of tailored disentangled GNN layers is developed to capture graph neighborhood information in TAGs from multiple structural factors. By injecting the learned features with disentangled factors into the LLM predictor, we enable the model to comprehend complex graph structure information in TAGs. Moreover, DGTL allows the pre-trained LLMs to remain frozen, thereby reducing computation costs and mitigating the risk of catastrophic forgetting in LLMs. This flexibility enables DGTL to be compatible with different LLM models, ensuring its practical applicability. Overall, DGTL is able to serve as a general framework for combining text-attributed graph learning and natural language modeling to improve the explainability and predictive performance of LLMs for TAG tasks.To demonstrate the effectiveness of our proposed method, we compare it with state-of-the-art approaches on various TAG benchmarks. Our method achieves superior or comparable results with baseline methods. Additionally, we demonstrate that DGTL can offer human-understandable explanations for the model predictions in natural language.In summary, we make the following contributions: * We propose Disentangled Graph Text Learner (DGTL), a novel model which deeply exploits graph structure information to enhance the reasoning and predicting capabilities of LLMs for TAG tasks. DGTL also serves as a general framework for integrating structural analysis abilities of GNNs with the powerful language modeling capabilities of LLMs. 
* We propose tailored disentangled GNN layers to capture graph neighborhood information from multiple structural factors, enabling the LLMs to comprehend complex graph structure information in TAGs. * Our proposed DGTL enables pre-trained LLMs to remain frozen, benefiting from reduced computation costs as well as mitigating the risk of catastrophic forgetting.* We conduct extensive experiments on various TAG benchmarks and compare DGTL with state-of-the-art baselines. The results demonstrate that DGTL is able to achieve superior or comparable performance, as well as provide users with human understandable natural language explanations for the model's predictions.§ RELATED WORKS §.§ Text-attributed Graphs Text-attributed graphs (TAGs) have gained significant attention in the field of graph machine learning in recent years <cit.>. A TAG is a type of graphs where each node is associated with a text attribute. This representation captures the rich semantic relationships and dependencies among textual elements, making TAGs valuable for understanding and reasoning tasks. Commonly used TAG benchmark datasets include Cora, CiteSeer, PubMed <cit.>, and OGBN-arXiv <cit.>, where nodes represent papers and edges represent reference relationships.Message-passing GNNs <cit.> have been proposed as an effective framework for graph machine learning following the neighborhood aggregation scheme. At each layer, nodes learn representations by aggregating their neighbors' representations <cit.>. GNNs have also made significant progress in TAG tasks by considering both node attributes and graph structures. Classical GNN pipelines typically handle text attributes by converting them into shallow or hand-crafted features such as bag-of-words <cit.> or skip-gram <cit.> representations. However, with the advancements in natural language processing, there has been a shift towards utilizing language models to generate more comprehensive node features based on the text attribute. This approach allows for a deeper understanding and representation of learning on TAGs.Despite the progress made by LLMs and GNNs in capturing textual or structural information for representation learning on TAGs, there are still large rooms for improvement. The integration of these models can lead to enhanced performance and more effective utilization of the rich information contained within TAGs. §.§ LLMs for Graph Tasks Recent researches have also delved into the exploration of leveraging LLMs directly for addressing graph-related tasks <cit.>. The fundamental concept behind this approach is to convert graph data, including both structural components and features, as well as graph tasks, into natural language representations. By treating graph problems as conventional NLP problems, researchers have unlocked the potential of utilizing LLMs for graph-related tasks. In the subsequent sections, we present a comprehensive overview of these recent advancements in the field.Pioneer researches begin with explorations on synthetic graph tasks. NLGraph <cit.> reorganizes graphs as natural language description and conducts a systematic evaluation of LLMs on eight graph reasoning tasks in natural language, including connectivity, shortest path, maximum flow, simulating GNNs, etc. Meanwhile, GPT4Graph <cit.> also conducts extensive experiments by converting graphs into specific code formats. They evaluate the graph understanding capabilities of LLMs across ten distinct tasks, including structure understanding tasks and semantic understanding tasks. 
LLMtoGraph <cit.> also tests GPT-3.5 and GPT-4 on various graph tasks via natural language descriptions and makes some interesting observations. More recently, several works carry out explorations on TAGs with LLMs. Graph-LLM <cit.> systematically investigates two strategies on TAGs: LLMs-as-Enhancers and LLMs-as-Predictors. The former strategy uses LLMs to enhance the representations of the text attributes of nodes before passing them to GNNs, while the latter directly employs LLMs as TAG task predictors with natural language prompts. They also explore using ego-graph descriptions and neighbor summaries to incorporate structural information through prompts. InstructGLM <cit.> expands the vocabulary by creating new tokens for every node in the TAG, which enables tuning LLMs to handle various TAG tasks in a generative manner. Nevertheless, the existing methods use prompts to convey neighborhood information to downstream LLMs, which faces several challenges, such as excessive neighboring information leading to lengthy prompt texts. In our method, we employ a disentangled graph learning approach to compress the neighboring information on the TAG into a small number of tokens. This enables LLMs to learn and utilize the rich knowledge contained within these compressed tokens for downstream inference tasks. Besides, our method follows a delta-tuning scheme where the LLM is kept frozen and only a small number of parameters are tuned <cit.>, making our method easy to integrate with off-the-shelf LLMs. § PROBLEM FORMULATION AND PRELIMINARIES §.§ Text-attributed Graphs A text-attributed graph can be formulated as 𝒢=(𝒱,𝐀,𝐲), where 𝒱 denotes the set of nodes, 𝐀 denotes the adjacency matrix, and 𝐲 denotes the labels of the nodes. In TAGs, each node is associated with a text description, e.g., the abstract of a paper in citation graphs. Before the message-passing procedure in GNNs, we need to process the raw texts into real-valued features, i.e., text embeddings. In this paper, we focus on node classification, one of the most typical tasks on TAGs. We adopt the semi-supervised setting, where the text information of all nodes in 𝒱 and the adjacency matrix 𝐀 are given during training, together with a part of the node labels {𝐲_u|u∈𝒱_tr}, where 𝒱_tr is the training node set. The task aims at predicting the labels {𝐲_u|u∈𝒱_te} of the testing node set 𝒱_te. §.§ Graph Neural Network GNNs are state-of-the-art models for graph machine learning, which typically follow a message-passing scheme where nodes aggregate information from their neighbors in each layer, formulated as: 𝐦_i^(l) = Agg(𝐡_j^(l)|j∈𝒩_i), 𝐡_i^(l+1) = Update(𝐦_i^(l)), where 𝐡_i^(l) is the representation of node i at the l-th layer, 𝒩_i denotes the neighbors of node i derived from the adjacency matrix, Agg(·) is the aggregation function, and Update(·) is an update function on the node representations. §.§ Large Language Models Large language models (LLMs) have revolutionized natural language processing tasks by demonstrating remarkable capabilities in understanding and generating human-like text. One of the key architectural advancements that underpins the success of LLMs is the transformer model, which has become the de facto standard for modeling sequential data and is widely used in computer vision. The transformer architecture <cit.> is based on the principle of self-attention. The attention mechanism enables the model to weigh the importance of different parts of the input sequence when making predictions.
The attention mechanism calculates attention scores between each pair of positions in the input sequence, allowing the model to focus on relevant information while disregarding irrelevant or redundant elements. This attention mechanism is crucial for capturing long-range dependencies and contextual relationships in text. The self-attention in current decoder-based LLMs can be formulated as follows:Attention(𝐇) = softmax(f_q(𝐇)f_k(𝐇)/√(d))f_v(𝐇).Here, 𝐇 is the hidden features, f_q(·), f_k(·), and f_v(·) are the learnable projection functions to calculate the query, key, and values.In a transformer, the attention mechanism operates through multiple self-attention layers. Each layer consists of multiple attention heads, which learn different representations and capture diverse aspects of the input sequence. By employing self-attention across multiple layers, transformers can effectively model both local and global dependencies in the input sequence. Furthermore, LLMs are typically pre-trained on massive amounts of text data using unsupervised learning objectives. The most commonly used task is language modeling, which is formulated as:ℒ=-log p(s_i|s_1:i-1),where s_1:i is a natural language sequence. This pre-training phase enables the models to acquire a rich understanding of language patterns and structures, and commonsense knowledge. The pre-trained LLMs can then be fine-tuned on specific downstream tasks with task-specific supervised learning, allowing them to adapt their knowledge to specific tasks and domains.§ METHOD§.§ Overall FrameworkWe aim to leverage LLMs to provide predictions for TAG tasks while enabling interpretability. The main challenge lies in efficiently equipping the LLM with neighborhood knowledge of the target nodes in the TAG. To address this, our framework incorporates disentangled graph learning, which allows us to learn rich semantic information from the neighborhood and inject it into the downstream LLM. An overall framework of our method is shown in Figure <ref>. Next, we elaborate our proposed method. * In Step 1, we generate text embeddings by computing the average of the features at the last layer in the upstream LLM. This process captures the overall context and semantics of the text associated with each node in the TAG, which form the foundation for subsequent steps. * In Step 2, we employ disentangled graph learning to learn embeddings that incorporate diverse neighborhood information. The disentangled graph learning approach allows us to capture varied information from the neighborhood, facilitating a comprehensive understanding of the TAG.* In Step 3, we inject the learned features with neighborhood information into the downstream LLM. This integration of the LLM and the disentangled GNN facilitates more accurate and informed predictions for the downstream TAG tasks.The combination of disentangled graph learning and information injection in the downstream LLM forms the core of our framework. This approach allows us to harness the strengths of both graph representation learning and language modeling, enabling effective utilization of structural information and enhancing the interpretability of our predictions. Moreover, we adopt a scheme where the pre-trained LLMs remain frozen in our approach. We only focus on tuning the disentangled GNN to generate structural information for LLMs. This scheme benefits us from several advantages. 
First, we keep the computation cost of fine-tuning very low, since the disentangled GNN covers only a small fraction of the parameters. Even if the LLM is updated to a new version, we can quickly adapt to the new one with a minimal tuning procedure. Second, our approach maximizes the utilization of LLMs' pre-existing knowledge and mitigates the risk of catastrophic forgetting. §.§ Text Embedding Generation The first step in our method is to generate an effective text embedding that encapsulates the semantics and contextual information of the input text. Traditionally, previous works <cit.> have primarily focused on utilizing the embedding of the End-Of-Sentence (EOS) token at the last layer as the text embedding. However, we propose that taking the average of the hidden states in the last layer provides a more comprehensive representation of the entire input text. Considering the average of the hidden states enables us to capture the collective information and contextual understanding of the text across all positions in the input sequence. This approach allows us to incorporate a broader range of linguistic cues and dependencies into the text embedding, resulting in a more robust and informative representation. §.§ Disentangled Graph Learning Our next step is learning hidden features that capture relevant neighborhood information for the target task. By injecting these features into the downstream LLM, we enable the LLM to effectively utilize the complex neighborhood information of the nodes in the TAG to assist in predicting the downstream task. To achieve this, we employ disentangled graph learning, which allows us to capture and represent the diverse neighborhood information present in the TAG. Specifically, we adopt multiple parallel 2-layer GNNs to learn the features, and disentangle them by assigning diverse graph structures. We use a parameter-efficient and differentiable manner to generate the edge weights for the graph structures, which can be formulated as follows: 𝐀_i,(u,v) = δ𝐀_(u,v)+(1-δ)𝐀_(u,v)·σ((𝐒_i^u 𝐡_u)^⊤(𝐒_i^v 𝐡_v)), where 𝐀_i,(u,v) is the weight of the edge between nodes u and v in the adjacency matrix of the i-th GNN, 𝐀_(u,v) indicates whether there is an edge between nodes u and v in the original graph, 𝐒_i^u and 𝐒_i^v are learnable parameters that generate the edge weights of the i-th GNN, δ is a hyper-parameter, and σ(·) denotes the sigmoid function. Consequently, we generate diverse graph structures by assigning different edge weights in a continuous space, and we can conveniently use gradient-based methods to optimize the parameters. Our disentangled GNN architecture incorporates diverse graph structures to ensure the learning of varied information from the neighborhood. These graph structures highlight different aspects of the TAG's topology and semantics, enabling the following LLM to leverage specific types of neighborhood information during downstream task prediction. §.§ Neighborhood Information Injection To take full advantage of the LLMs' ability to understand and model complex patterns and semantics in TAGs, we inject the learned disentangled embeddings directly into the downstream LLM. Specifically, we reserve a set of token positions for placing these disentangled embeddings in the prompt input to the downstream LLM. Although these embeddings are not in the form of natural language, the form of our input still allows the LLM to treat them as if they were aligned with the natural semantic space that humans can understand.
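Before turning to how the injected embeddings are handled layer by layer, the edge re-weighting of the disentangled graph learning step above can be made concrete. The following is a minimal PyTorch-style sketch of one disentangled channel; the class and variable names are ours rather than from any released implementation, the weighted-mean aggregation is only one simple instantiation of a 2-layer message-passing GNN, and the hyper-parameter values (hidden dimension 32, δ=0.8, 16 channels) mirror those reported later in the implementation details.

```python
import torch
import torch.nn as nn

class DisentangledChannel(nn.Module):
    """One disentangled channel: learns its own edge weights
    A_i,(u,v) = delta * A_(u,v) + (1 - delta) * A_(u,v) * sigmoid((S_i^u h_u)^T (S_i^v h_v))
    and runs a 2-layer GNN over the re-weighted graph."""
    def __init__(self, in_dim, hid_dim=32, delta=0.8):
        super().__init__()
        self.S_u = nn.Linear(in_dim, hid_dim, bias=False)  # S_i^u
        self.S_v = nn.Linear(in_dim, hid_dim, bias=False)  # S_i^v
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, hid_dim)
        self.delta = delta

    def edge_weights(self, h, edge_index):
        # Only edges of the original graph are passed in, so A_(u,v) = 1 for every pair here.
        u, v = edge_index
        score = (self.S_u(h[u]) * self.S_v(h[v])).sum(-1)   # (S_i^u h_u)^T (S_i^v h_v), one score per edge
        return self.delta + (1 - self.delta) * torch.sigmoid(score)

    def aggregate(self, x, edge_index, w):
        u, v = edge_index
        msg = torch.zeros_like(x).index_add_(0, v, w.unsqueeze(-1) * x[u])   # weighted sum of neighbors
        deg = torch.zeros(x.size(0), device=x.device).index_add_(0, v, w)
        return msg / deg.clamp(min=1e-6).unsqueeze(-1)                       # weighted mean

    def forward(self, h, edge_index):
        # h: [N, in_dim] mean-pooled last-layer text embeddings; edge_index: [2, E], both directions
        w = self.edge_weights(h, edge_index)
        x = torch.relu(self.lin1(self.aggregate(h, edge_index, w)))
        return self.lin2(self.aggregate(x, edge_index, w))

# Parallel channels yield multiple disentangled embeddings per node, e.g. 16 for the citation datasets.
channels = nn.ModuleList([DisentangledChannel(in_dim=5120) for _ in range(16)])  # 5120: illustrative LLM hidden size
```

Every channel sees the same node features but learns its own edge weights, so the channels are pushed toward emphasizing different structural factors of the neighborhood.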
In this way, we can optimize our disentangled GNNs by gradient descent methods during fine-tuning. However, back-propagating gradients through the entire LLM architecture can make the optimization process extremely difficult during the fine-tuning procedure. Therefore, we take a step further by performing the embedding injection in all layers of the downstream LLM. Specifically, we add the disentangled embedding to the text embedding at the reserved positions in the key and query projection functions as follows: f_{q,k}(𝐱_i) = 𝐖_{q,k}(𝐱_i+𝐩_{q,k}^(i)+𝐡_u^(i)), where 𝐡_u^(i) is the disentangled embedding of node u placed at position i, 𝐖_{q,k} is the projection matrix, 𝐱_i is the corresponding feature from the last layer in the LLM, and 𝐩_{q,k}^(i) indicates the absolute or relative position embedding of position i. As an alternative, we use the following function for rotary position embeddings: f_{q,k}(𝐱_i) = 𝐑_i𝐖_{q,k}(𝐱_i+𝐡_u^(i)), where 𝐑_i is the rotary matrix of the position embedding. Due to the varying position encoding of the injected features, our disentangled GNNs are encouraged to learn diverse knowledge, further enriching the semantic knowledge learned by different parts of the GNNs. In addition, we also use the disentangled embedding in the value calculation in a similar way: f_v(𝐱_i) = 𝐖_v(𝐱_i+𝐡_u^(i)). As such, our method incorporates neighborhood information from the GNNs into each layer of the LLM, enabling the LLM to benefit from a comprehensive understanding of the graph structure throughout the entire model. This injection of information in all layers facilitates a direct gradient flow to the GNNs, resulting in more accurate and informative gradient updates. This integration of language modeling and graph representation learning allows our model to leverage the contextual information captured by the LLM and the structural patterns learned by the GNNs, leading to effective learning and improved performance. §.§ Fine-tuning Procedure We summarize the fine-tuning procedure in Algorithm <ref>. After generating node embeddings with the LLM for the input graph at the beginning of the algorithm, the algorithm iteratively updates the parameters until convergence. Each iteration involves sampling a mini-batch from the training set and performing forward propagation through the disentangled GNN layers to capture both the graph structure and the text attributes. We follow the classic auto-regressive scheme for fine-tuning. For each node in the mini-batch, we construct its prompt (refer to Section <ref> for more details) as the input. Since we are only concerned with the LLM's prediction of node categories, we design a response template (some examples are shown in Table <ref>) as part of the prompt. Consequently, the LLM can directly predict the category at the first token of the generation procedure. Then we can conveniently calculate the loss function with respect to the category prediction, which can be formulated as follows: ℒ=-log p(s_lab|s_pro+s_res), where s_lab indicates the label token, and s_pro and s_res indicate the sequences of the prompt and the response template. We find that calculating the loss function of only the first token works well in practice. As such, we can fine-tune the model in a more targeted manner, enabling it to learn the most useful information for downstream classification tasks.
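The injection into the frozen attention projections and the first-token objective can be sketched as follows. This is a simplified, single-projection illustration under our own naming: the small adapter that maps the GNN output to the LLM hidden size is our assumption (the paper simply adds the disentangled embedding to the token state), position embeddings are omitted, and a full implementation would wrap the query, key, and value projections of every layer, using the rotary-embedding form where applicable.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InjectedProjection(nn.Module):
    """Wraps a frozen q/k/v projection W so that, at the reserved token positions,
    the disentangled node embedding h_u^(i) is added to the token state:
    f(x_i) = W (x_i + h_u^(i))."""
    def __init__(self, frozen_proj: nn.Linear, gnn_dim: int):
        super().__init__()
        self.proj = frozen_proj
        for p in self.proj.parameters():
            p.requires_grad_(False)                       # the LLM stays frozen
        self.adapt = nn.Linear(gnn_dim, frozen_proj.in_features, bias=False)  # assumed dimension adapter

    def forward(self, x, inject, mask):
        # x: [B, T, d_llm] token states; inject: [B, T, d_gnn] disentangled embeddings (zeros elsewhere)
        # mask: [B, T, 1] with 1 at the reserved positions and 0 everywhere else
        return self.proj(x + mask.to(x.dtype) * self.adapt(inject))

def first_token_loss(logits, label_token_id):
    """L = -log p(s_lab | s_pro + s_res): cross-entropy on the logits of the position
    that generates the first token after the prompt plus response template."""
    # logits: [B, T, vocab]; label_token_id: [B] long tensor of label-token ids
    return F.cross_entropy(logits[:, -1, :], label_token_id)
```

Under this scheme only the disentangled GNN and the adapter receive gradients, which is what keeps the tuning cost low while still letting structural information reach every layer of the LLM.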
Back-propagation is performed after each node classification, and the parameters θ are updated through a gradient descent method once all nodes in the mini-batch have been predicted. Through this iterative process, DGTL learns a text generation model that leverages the rich information present in text-attributed graphs. § EXPERIMENTS §.§ Experimental Setup §.§.§ Datasets We conducted experiments on benchmark TAGs to assess the effectiveness of our framework. These TAGs encompass both citation networks and e-commerce networks, providing a diverse range of domains for evaluation. For the citation networks, we utilize the widely recognized Cora and PubMed datasets <cit.>. The Cora dataset focuses on computer science research papers. PubMed, on the other hand, is specific to biomedical literature, making it particularly relevant for medical-related tasks. These datasets consist of academic papers as nodes and their citation relationships as edges. Each paper has its title and abstract as text information. The task on these citation networks is to determine the category of the papers. In addition, we also incorporated e-commerce graphs into our evaluation. Specifically, we employ the Book-History dataset <cit.>, which contains a rich variety of historical books. In the dataset, nodes represent books, while the edges indicate that the connected books are frequently purchased together. Each book also has its description as the text information. The task on the e-commerce network is to determine the category of the books. The statistics of the datasets are shown in Table <ref>. For all datasets, we follow the classical semi-supervised learning setting, i.e., we randomly select 20 nodes of each category as the training set, and randomly select 1,000 nodes from the remaining nodes as the test set. §.§.§ Implementations In our experiments, we employed Llama-2-13B-chat <cit.> as the backbone LLM. Llama2 is a representative LLM known for its impressive language modeling capabilities and extensive pre-training on a vast corpus of text data. By utilizing Llama2 as our backbone LLM, we aimed to validate the effectiveness of our approach when combined with LLMs. To tailor our framework to different datasets and tasks, we designed specific prompts for different types of datasets. The prompts serve as initial instructions or cues provided to the LLM to guide our fine-tuning for the target task. The details of the designed prompts are summarized in Table <ref>. The table outlines the specific prompt structure, including any additional instructions or formatting applied, for each dataset used in our experiments. By customizing the prompts, we ensured that the LLM could effectively adapt its pre-trained knowledge to the specific requirements and characteristics of each dataset. For the other hyper-parameters, we set the number of disentangled channels to 16 for the citation datasets and 8 for the e-commerce datasets. The hidden dimension of each disentangled GNN channel is 32. The hyper-parameter δ that controls the disentangled structure is set to 0.8. For optimization, we use the Adam optimizer with a learning rate of 0.001. §.§.§ Baselines We compare our model with baselines from the following two categories. * GNN Predictors. Following Chen et al. <cit.>, we consider different language models to enhance the node features of the TAG, including Deberta <cit.>, Llama <cit.>, Sentence-BERT <cit.>, and e5 <cit.>. Shallow embeddings TF-IDF and Word2vec are also included. We include GLEM <cit.> as a baseline as well.
For all these feature-enhancer methods, three backbones, GCN <cit.>, GAT <cit.>, and MLP, are adopted. For these baselines, we directly use the classification accuracy as the evaluation metric. * LLM predictors. We also consider baselines that use an LLM (Llama2-13B-chat) as the predictor, including prompts without neighborhood information (0-hop prompt) and neighborhood summarization <cit.>. For these methods, we use the exact match score <cit.>, i.e., the percentage of predictions that exactly match any one of the ground-truth answers, as the metric. §.§ Experimental Results The result comparison with the GNN predictor baselines is shown in Table <ref>. From the table, we can find that the GNN backbones are generally better than the MLP backbone, indicating that structural information is essential for solving TAG tasks. Our method achieves comparable performance to SOTA baselines on Cora and outperforms all baselines on PubMed. Besides, our method offers a distinct advantage in terms of interpretability. Unlike the baselines, our method can provide natural language explanations for the predictions, enhancing the transparency and comprehensibility of the model's decision-making process. This interpretability aspect is particularly valuable in scenarios where understanding the reasoning behind the predictions is crucial. The result comparison with the LLM predictor baselines is shown in Table <ref>. Although the neighbor-summary prompt achieves better performance than the 0-hop prompt by considering neighborhood information, our method outperforms these baselines by a large margin, indicating that our method enables LLMs to effectively learn and utilize task-relevant knowledge and benefits downstream task prediction. §.§ Interpretability with Structural Information We showcase some examples to illustrate the interpretation of our method. The results, as well as the comparison with two LLM predictor baselines, are shown in Table <ref>. The results indicate that our proposed method is more capable of predicting the correct label of the target node while the other methods fail. Moreover, our method also generates explanations related to neighborhood information after giving the prediction. For example, in the first case, our method predicts that the paper belongs to "rule learning", and one piece of supporting evidence for this prediction is "the paper also cites several references related to rule-based machine learning." This supporting evidence is exactly derived from the neighborhood information on the TAG, which demonstrates that our method can effectively capture the semantic information on the graph to make more accurate predictions. In case 2, our method predicts that the book belongs to military history. In addition to making predictions based on the content of the book, it also utilizes the neighborhood information in the TAG to assist in the prediction process, as it states "there are also other books that are frequently purchased together, which are related to military history, such as 'The Second World War' and 'The Cold War'". The shown cases demonstrate that our method generates interpretations for classification predictions that are accurate and structurally relevant. By contrast, the baselines produce incorrect predictions without using structural information for interpretation. Our method effectively integrates the graph's structural information in LLM generation, providing meaningful insights and justifications for the classification results.
The experimental results highlight the importance of incorporating structural information in achieving accurate and interpretable predictions in TAG tasks. More importantly, through our approach, humans can harness intuitive interpretations based on graph structures when using LLMs to tackle TAG problems. This greatly enhances the utilization of knowledge embedded in TAG and unleashes the potential of LLMs on graphs. §.§ Ablation Study In this section, we evaluate the effectiveness of the disentanglement component of our method through an ablation study. The results are also shown in Table <ref>. The results demonstrate that the disentanglement is greatly beneficial and crucial for our method to learn better structural information and enables the downstream LLM to give more accurate predictions accordingly.§ CONCLUSIONIn conclusion, this paper addresses the challenge of effectively integrating structural information into large language models (LLMs) for text-attributed graph (TAG) tasks. We propose the Disentangled Graph Text Learner (DGTL) model, which leverages tailored disentangled graph neural network layers to capture complex graph structure information in TAGs. Our method enhances the reasoning and prediction capabilities of LLMs while providing natural language explanations for model predictions, which is crucial for interpretability in TAG tasks. Through extensive experiments on various TAG benchmarks, we demonstrate that DGTL achieves competitive performance compared to state-of-the-art baselines while offering human-understandable explanations for model predictions. Our work contributes to advancing the field of TAG analysis by harnessing the power of LLMs and improving their interpretability for real-world applications. icml2023 | http://arxiv.org/abs/2310.18152v2 | {
"authors": [
"Yijian Qin",
"Xin Wang",
"Ziwei Zhang",
"Wenwu Zhu"
],
"categories": [
"cs.CL",
"cs.LG"
],
"primary_category": "cs.CL",
"published": "20231027140004",
"title": "Disentangled Representation Learning with Large Language Models for Text-Attributed Graphs"
} |
Shape-centered Representation Learning for Visible-Infrared Person Re-identification Shuang Li, Jiaxu Leng, Ji Gan, Mengjingcheng Mo, and Xinbo Gao, Senior Member, IEEE. This work was supported in part by the National Natural Science Foundation of China under Grants No. 62102057, No. U22A2096, and No. 62221005, in part by the Natural Science Foundation of Chongqing under Grant No. CSTB2022NSCQ-MSX1024, in part by the Science and Technology Research Program of Chongqing Municipal Education Commission under Grant No. KJZD-K202300604, in part by the Chongqing Postdoctoral Innovative Talent Plan under Grant No. CQBX202217, in part by the Postdoctoral Science Foundation of China under Grant No. 2022M720548, in part by the Chongqing Institute for Brain and Intelligence, and in part by the Chongqing Excellent Scientist Project under Grant No. cstc2021ycjh-bgzxm0339. (Corresponding author: X. Gao.) S. Li, J. Leng, J. Gan, M. Mo, and X. Gao are with the Chongqing University of Posts and Telecommunications, Chongqing, China (E-mail: [email protected], [email protected], [email protected], [email protected], [email protected]). January 14, 2024 ============================================================== Current Visible-Infrared Person Re-Identification (VI-ReID) methods prioritize extracting distinguishing appearance features, ignoring the natural resistance of body shape against modality changes. Initially, we gauged the discriminative potential of shapes by a straightforward concatenation of shape and appearance features. However, two unresolved issues persist in the utilization of shape features. One pertains to the dependence on auxiliary models for shape feature extraction in the inference phase, along with the errors in generated infrared shapes due to the intrinsic modality disparity. The other issue involves the inadequately explored correlation between shape and appearance features. To tackle the aforementioned challenges, we propose the Shape-centered Representation Learning framework (ScRL), which focuses on learning shape features and appearance features associated with shapes. Specifically, we devise the Shape Feature Propagation (SFP), facilitating direct extraction of shape features from original images with minimal complexity costs during inference.
To restitute inaccuracies in infrared body shapes at the feature level, we present the Infrared Shape Restitution (ISR).Furthermore, to acquire appearance features related to shape, we design the Appearance Feature Enhancement (AFE), which accentuates identity-related features while suppressing identity-unrelated features guided by shape features.Extensive experiments are conducted to validate the effectiveness of the proposed ScRL. Achieving remarkable results, the Rank-1 (mAP) accuracy attains 76.1%, 71.2%, 92.4% (72.6%, 52.9%, 86.7%) on the SYSU-MM01, HITSZ-VCM, RegDB datasets respectively, outperforming existing state-of-the-art methods.person re-identification, logical relation inference, multi-view information interaction, domain adaptation.§ INTRODUCTIONPerson Re-Identification (ReID) serves the purpose of identifying specific individuals across non-overlapping camera views, carrying substantial significance in the domain of intelligent surveillance systems. Consequently, it has garnered considerable attention from a multitude of researchers, undergoing rapid advancements in recent times <cit.>. However, the majority of existing methods are confined to scenarios where pedestrians are solely present during daylight hours<cit.>. Consequently, the efficacy of these approaches hinges on visible appearances. This limitation has resulted in notable performance degradation when attempting to match pedestrians captured by both visible (VIS) and infrared (IR) cameras. To effectively tackle this issue, the Visible-Infrared Person Re-Identification (VI-ReID) task <cit.> was introduced. This task aims to enable the retrieval of pedestrians across the distinct spectra of IR and VIS <cit.>.In contrast to the extensively explored realm of ReID within the visible spectrum, the domain of VI-ReID presents notably greater challenges. This is primarily attributed to substantial intra- and inter-modality variations that exist between images captured in the visible (VIS) and infrared (IR) spectra.A prevailing trend among existing VI-ReID methods is the zealous pursuit of mining modality-shared appearance cues to enhance the discriminability of features but ignoring the learning of body shape clues. Nevertheless, three compelling reasons underscore the importance of not disregarding body shape cues. 1) The natural resistance of body shape to modality changes stands as the first reason. As illustrated in Fig. <ref>, no modality discrepancy in the body shape corresponding to infrared and visible images. 2) The second rationale stems from the identity-discriminative nature of body shape. As depicted in Fig. <ref>, one can observe that pedestrian A slightly fatter than pedestrian B in terms of their global body shapes, with distinctions extending to local characteristics such as facial shape, hair shape, limb shape, etc. Thus, the identification of pedestrians becomes feasible through body shape analysis, even when changes in modality render color texture features unreliable. 3) Body shape estimation can be achieved through pre-trained human parsing models, obviating the necessity for human annotation. Moreover, single-modality ReID methods have demonstrated successful instances in this regard.Nevertheless, when attempting to apply body shape estimates to the VI-ReID task, as depicted in Fig. <ref>, certain inaccuracies become evident in the infrared body shapes associated with infrared images. 
These inaccuracies manifest as discrepancies such as missing or inaccurately represented local shapes, with a predominant focus on limbs. This occurrence stems from the skin color of pedestrians bearing a striking resemblance to the background color present in infrared images. Consequently, the human parsing model erroneously categorizes exposed arms and legs as the background. Although the identity-related information carried by body shapes indeed falls within the realm of modality-shared cues, the presence of incorrect infrared body shapes limits the effective utilization of these cues. Furthermore, we acknowledge that the discriminative cues offered by body shape possess limitations. Depending solely on body shape for the identification of all pedestrians proves inadequate. Compared to the body shape, VIS (IR) pedestrian images contain richer identity clues, including clothing style, hairstyle, and more <cit.>. However, the direct extraction of the aforementioned appearance features from images presents challenges due to the interference caused by modality-specific information (color features in visible images,temperature in infrared images) and intricate backgrounds. Based on our observations, these appearance features exhibit a strong correlation with body shape, while the modality-specific information and background are unrelated to the body shape.So effectively utilizing body shape not only requires extracting discriminant shape features from the visible (infrared) images but also requires effectively exploring the correlation between shape features and pedestrian appearance features.Within the realm of VI-ReID, two methods closely aligned with body shape considerations are CMMTL <cit.> and SEFL <cit.>. Illustrated in Fig. <ref>, CMMTL undertakes the implicit acquisition of shape features, leveraging human parsing as an auxiliary task. However, it falls short in addressing the potential adverse effects of incorrect infrared shape representation and fails to elucidate the manner in which shape features contribute to enhancement, thus lacking interpretability. On the other hand, SEFL adopts the stance that body shape cues are unreliable and subsequently seeks to acquire diverse modality-shared features by disentangling and discarding potentially unreliable shape features. In contrast, our approach diverges from SEFL's viewpoint. We contend that body shape features possess intrinsic discriminative qualities and are adept at withstanding modality variations. Consequently, our focus centers on devising effective strategies for acquiring robust body shape features and delving into the intricate relationship between these features and appearance features. Based on the aforementioned insights and analysis, we introduce the Shape-centered Representation Learning (ScRL) framework, which facilitates the direct acquisition of modality-shared shape features and shape-associated appearance features from visible (infrared) images. Illustrated in Fig. <ref>, this proposed framework comprises three key components: Infrared Shape Restitution (ISR), Shape Feature Propagation (SFP), and Appearance Feature Enhancement (AFE). Specifically, ISR is designed to restitute IR shapes by capturing missing infrared shape features from appearance features during the training phase. 
SFP is designed to enable the model to directly extract shape features from pedestrian images, without the participation of additional auxiliary networks during the inference phase, by transferring the shape feature extraction ability to the appearance feature extraction network in the training phase. AFE is aimed at mining appearance features associated with body shape by introducing a two-stage cascade attention mechanism that directly and indirectly emphasizes identity-related features and suppresses identity-unrelated features. Our main contributions are summarized as follows:* We propose a novel deep-learning framework that prioritizes learning shape-centered features without the necessity of auxiliary networks during the inference phase. Notably, we are the first to effectively incorporate explicit shape features into VI-ReID. * We design the Infrared Shape Restitution (ISR), which effectively restitutes erroneous infrared body shapes generated by human parsing networks and improves the discriminability of shape features. * We design the Appearance Feature Enhancement (AFE) to elevate appearance features by harnessing the inherent relationship between shape and appearance features.* Extensive experimental results on the SYSU-MM01, RegDB, and HITSZ-VCM datasets show that the proposed ScRL achieves a new state-of-the-art performance. The rest of this paper is organized as follows. Section 2 introduces related work; Section 3 elaborates the proposed method; Section 4 analyzes the comparative experimental results; and Section 5 concludes this paper. § RELATED WORK §.§ Visible Person Re-Identification Visible Person Re-Identification (ReID) aims to match visible pedestrian images with the same identity under non-overlapping cameras. With the introduction of large-scale datasets <cit.>,<cit.>,<cit.>, visible person ReID based on deep learning has developed rapidly. In order to directly conduct end-to-end training in the expected embedding space, <cit.> improved the hard sample mining of the classic triplet loss, which improved the discriminability of pedestrian features. In order to obtain fine-grained discriminative features, PCB<cit.> proposes to horizontally divide pedestrian feature maps into 6 parts to learn part-level features. MGN<cit.> proposes an end-to-end multi-branch deep network to learn pedestrian feature representations of different granularity. To overcome the information loss caused by convolution and pooling operations, TransReid<cit.> proposes a transformer-based ReID framework. In addition, changes in illumination, pose, and perspective also pose challenges for extracting discriminative pedestrian features. In response to the issue of illumination changes, IID<cit.> proposes to eliminate the adverse effects of illumination changes by decoupling illumination features and identity features. In order to align with the standard posture of pedestrians, PIE<cit.> introduces the PoseBox structure to obtain pose-invariant embedded features. In response to the adverse impact of camera style changes on pedestrian matching, Camstyle<cit.> proposed using CycleGAN<cit.> to achieve transfer between different camera styles, smoothing out camera style differences at the data level. §.§ Visible-Infrared Person Re-Identification In order to match pedestrians from different modalities, researchers have contributed a lot of excellent work. VI-ReID methods can be roughly divided into two main categories: feature-level modality alignment and image-level modality alignment methods.
The feature-level modality alignment methods aim to learn modality-shared features by aligning IR and VIS features at the feature level. HSME <cit.> maps features of different modalities to a unified hypersphere. MBCE <cit.> proposes a memory-based prototype feature learning method to suppress the modality discrepancy. MRCN <cit.> reduces the modality discrepancy by decoupling the modality-relevant and modality-irrelevant features. To more effectively mine diverse cross-modality clues,MPANet <cit.> be designed to discover the nuanced modality-shared features. DEEN <cit.> enhances the embedding representation in the embedding space by generating diverse embeddings.The image level modality alignment methods alleviate modality differences by generating images with the target or intermediate modality styles. D^2RL <cit.> transfers the style of IR (VIS) images to VIS (IR) images through the GAN network, compensating for the missing modality information. XIV <cit.> transform IR and VIS images into auxiliary X-modality images respectively, and perform X-IR-VIS three-mode learning. SMCL <cit.> generates syncretic modality images that contain information from both modalities to steer modality-invariant feature learning. However, the modality-shared clues (like shape-centered clues) have not been fully explored, which limits the discriminability of features. §.§ Semantic Parsing for Person Re-IdentificationSPReID <cit.> first introduced human semantic parsing into the ReID to obtain refined local features. MGCAM <cit.> introduces human semantic parsing for the first time to construct RGB mask pairs to eliminate the adverse effects of background noise. EaNet <cit.> guides the ReID model through human parsing as labels for multitasking learning. In addition to the human parts captured by human parsing results, there are also some other useful clues (such as personal belongings.). P2Net <cit.> and ISP <cit.> explore more comprehensive features through accurate human parts and non-human parts. Considering that widely studied methods overly rely on color appearance and fail to apply to cloth changing ReID, FSAM <cit.> transfers knowledge from shape stream to appearance stream by interactive mutual learning. In the VI-ReID task, CMMTL <cit.> and SEFL <cit.> involve human semantic parsing. CMMTL uses the body shape as the semantic label guidance model to perform both VI-ReID and human semantic segmentation tasks. SEFL <cit.> discards the shape feature and focuses on learning identity-related modality-shared features through disentanglement learning. Compared to their work, our method aims to learn shape-centered feature representation guided by body shape, while considering the unreliable situation of IR human semantic parsing. § THE PROPOSED METHOD§.§ Preliminaries and BaselinePreliminaries. Let X={(x^vis_i,x^ir_i,x^vis_s,i,x^ir_s,i, y_i)|_i=1^N} represent the training dataset, where N represents the total number of pedestrian images, y_i∈{1,2, ⋯, K} represents its corresponding identity label, K represents the total number of identities. x^vis_i(x^ir_i) represents the i-th VIS(IR) image, x^vis_s,i(x^ir_s,i) represents the body shape corresponding to x^vis_i(x^ir_i), which is obtained by feeding x^vis_i(x^ir_i) into the pre-trained Self-Correction Human Parsing (SCHP) <cit.> model.Baseline: Appearance Feature Learning. Referring to the ReID framework named AGW <cit.>, we adopted a dual stream network that removed the Non-local Attention as the Baseline. 
Specifically, a dual stream network consists of two parallel convolutional layers with unshared parameters, four blocks with shared parameters, and a Generalized-mean (GeM) Pooling layer, named F_a. Two parallel convolutional layers independently process VIS and IR images to extract low-level features, then the features are fed to the subsequent four blocks to extract high-level features f_i^m.The above process can be formalized as: f_i^m = F_a(x^m_i)m={vis,ir}.To ensure the identity discriminability of f_i^m, Cross Entropy (CE) loss L_id and Weighted Regularization Triplet (WRT)<cit.> loss L_wrt was adopt to constrain it as follows:L_id =-1/n_b∑_i=1^n_bq_ilog( W_id( f_i^m)) ,where n_b represents the batch size, W_id represents the shared identity classifier for IR and VIS pedestrian features, q_i∈ℝ^K × 1 is a one-hot vector, and only the element at y_i is 1.L_wrt= 1/n_b∑_i=1^n_blog(1+exp(∑_i,j w_i,j^p d_i,j^p-∑_i,k w_i,k^n d_i,k^n),w_i,j^p = exp( d_i,j^p)/∑_ d_i,j∈P_iexp( d_i,j^p),w_i,k^n = exp(- d_i,k^n)/∑_ d_i,k∈N_iexp(- d_i,k^n),where j,k (P_i,N_i) represents the index of the positive and negative samples(set) corresponding to the anchor sample x_i within a batch size, respectively. And d_i,j represents the euclidean distance between two features: d_i,j= f_i^m- f_j^m_2. §.§ Shape Feature LearningAlthough the design of the dual stream network is aimed at extracting modality-shared features, it focuses on learning pedestrian appearance features and lacks the ability to learn pedestrian body shape features. Compared to appearance features, the body shape features of pedestrians are more robust against modality changes. Therefore, in this section, we aim to learn the body shape features by the SCHP and a shape feature learning network F_s. In order to obtain discriminative body shape features, we feed the body shape x^vis_s,i and x^ir_s,igenerated by the SCHP to F_s to extract the shape features f_s,i= F_s(x_s,i). Then, f_s,i is into the shape identity classifier W_s for identity classification.Infrared Shape Restitution: as discussed in the introduction and shown in Fig.<ref>, inherent errors are present within the IR body shapes, which not only leads to the inability of F_s to derive adequately discriminative shape features from input IR shapes but also limit the ability of F_s to extract VIS shape features due to pulling the distance between incorrect IR shape features and VIS shape features.To address this issue, we propose to obtain absent shape details of the IR body shape from the corresponding IR image to restitute the IR shape at the feature level, as shown in Fig. <ref>. This module is motivated by the fact that, despite certain errors in the infrared shape generated by SCHP, the original infrared pedestrian image still contains the complete shape information. Specifically, let the IR shape feature map and IR feature map output from the j-th block of F_s and F_a be represented as f^ir,j_s,i and f^ir,j_i, respectively.In order to search for features related to incorrect IR shape in IR feature map f^ir,j_i, we design a cross-attention consisting of query Q, key K, value V, inspired by self-attention<cit.>. 
A noteworthy distinction from self-attention is that our query is obtained by adding the IR shape feature map f^ir,j_i and IR feature map f^ir,j_s,i:Q= W_q ( f^ir,j_i+ f^ir,j_s,i),Where W_q represents the 2D convolution layer with kernel sizes of 1 × 1.The query generated by this operation contains the information required to restitute the body shape, so it can be used to search for valid information in the value. And the key and value are represented as:K= W_k( f^ir,j_i),V= W_v( f^ir,j_i),Where W_k and W_v represent the 2D convolution layer with kernel sizes of 1 × 1, respectively. Then we obtain the required feature from V and the corrected IR shape feature map f̂^ir,j_s,i can be obtained as follows:f̂^ir,j_s,i = W_v2(BN (Norm( QK^T)V)) +f^ir,j_s,i,Where W_v2 represents the 2D convolution layer with kernel sizes of 1 × 1, BN, Norm and (·)^T represents the batch normalization layer, the normalization operation and the transpose operation.Following the above steps, we incorporated the infrared shape restitution attention following the first and second blocks of F_s. In order to guide the training of F_s and ISR, we introduce CE loss and WRT loss to constrain the shape features f̂_s,i output from F_swith GeM pooling layer: L^s_id =-1/n_b∑_i=1^n_bq_ilog( W_id^s(f̂_s,i)) ,where W_id^s represents the shared identity classifier for IR and VIS pedestrian shape features.L^s_wrt= 1/n_b∑_i=1^n_blog(1+exp(∑_i,j w_i,j^s,p d_i,j^s,p-∑_i,k w_i,k^s,n d_i,k^s,n),w_i,j^s,p = exp( d_i,j^s,p)/∑_ d_i,j∈P_i^sexp( d_i,j^s,p),w_i,k^s,n = exp(- d_i,k^s,n)/∑_ d_i,k∈N_i^sexp(- d_i,k^s,n),where j,k (P_i^s,N_i^s) represents the index of the positive and negative samples(set) corresponding to the anchor shape feature f̂_s,i within a batch size, respectively. And d_i,j^s represents the euclidean distance between two shape features: d_i,j^s=f̂_s,i-f̂_s,j_2. Importantly, it should be highlighted that the VIS shape features do not require restitution and we mix them with corrected IR shape features in batch size to participate in the loss calculation. Therefore, the VIS shape features can guide the learning of the ISR at the loss level.Shape Feature Propagation: we can obtain shape features through the collaboration of appearance feature extraction network F_a, shape feature learning network F_s, and ISR. However, this poses challenges to model deployment due to the increased parameters and computational complexity. Therefore, it is crucial to transfer the ability of extracting shape features to the appearance stream network, so that the testing phase does not require the participation of the shape stream network.Towards this objective, we replicate the fourth block of F_a as the shape subnetwork F_s̃ with GeM pooling layer and apply it to the output features of the third block of F_a to obtain the shape features f̃_s,i under the guidance of the shape featuresf̂_s,i output from the F_s at instance (Eq.9)and prototype level (Eq.10).L_kd = 1/n_b∑_i=1^n_b| |f̃_s,i- f̂_s,i| |_2,where n_b represents the batch size, and || ·||_2 represents l_2-norm.L_kd_ce =-1/n_b∑_i=1^n_b q_ilog(f̃_s,iΘ^T),where Θ∈ℝ^C × K represents the class prototype from the classifier W_id^s. §.§ Appearance Feature EnhancementIn addition to shape features, pedestrian appearance contains essential identity cues like clothing style, hairstyle and etc. Extracting these features from the original image is complex due to modality(background) changes. It is worth noting that these clues are tied to body shape—hairstyle features relate to head shape. 
Therefore, we expect to mine these appearance clues mentioned above and suppress identity-unrelated modality-specific features, such as color, temperature, etc. Due to the limited appearance features directly associated with shape features, it is necessary to first mine the appearance features directly associated with shape and then use these appearance features to further explore the appearance features indirectly associated with body shape. So we devised a cascading two-stage attention mechanism that systematically extracts appearance features related to body shape—both directly and indirectly, as shown in Fig.<ref>. The mechanism operates as follows: in the first stage, let the shape feature map f_s,i output by F_s̃ serve as query, and the appearance feature map f_i extracted by F_a serve as key and value:Q= W_q ( f_s,i),K= W_k ( f_i), V= W_v ( f_i),Then, similar to ISR, the correlation score between Q and K is used to search features directly related to shape in V, and fused with shape feature map f_s,i to obtain f̃^fuse_i.f̃^fuse_i = W_v2(BN (Norm( QK^T)V)) +f_s,i,Considering f̃^fuse_i contains shape discriminative information and appearance discriminative information directly related to body shape, f̃^fuse_i can be effectively employed as a query during the second stage of attention, which facilitates the acquisition of appearance features that are both directly and indirectly associated with the body shape. As the query feature of the second-stage attention, f̃^fuse_i plays a pivotal role in determining whether modality-invariant identity-discriminative features, both directly and indirectly associated with shape, can be effectively extracted from the appearance features. To ensure the discriminability of the query f̃^fuse_i, we also employ CE loss and WRT loss to jointly constrain GeM(f̃^fuse_i), GeM represents the GeM pooling layer. L^q_id =-1/n_b∑_i=1^n_bq_ilog( W_id^q(GeM(f̃^fuse_i))) ,where W_id^q represents the shared identity classifier for infrared and visible pedestrian query features.L^q_wrt= 1/n_b∑_i=1^n_blog(1+exp(∑_i,j w_i,j^q,p d_i,j^q,p-∑_i,k w_i,k^q,n d_i,k^q,n),w_i,j^q,p = exp( d_i,j^q,p)/∑_ d_i,j∈P_i^qexp( d_i,j^q,p),w_i,k^q,n = exp(-d_i,k^q,n)/∑_ d_i,k∈N_i^qexp(-d_i,k^q,n),where j,k (P_i^q,N_i^q) represents the index of the positive and negative samples(set) corresponding to the anchor shape feature f̂_s,i within a batch size, respectively. And d_i,j^q represents the euclidean distance between two query features: d_i,j^q=(GeM(f̃^fuse_i)-(GeM(f̃^fuse_j)_2.In the second stage, we employed the output feature f̃^fuse_i of the attention of the first stage as a query to emphasize the appearance features directly and indirectly related to the body shape in V as follows:f̃_i = W_v2(BN (Norm( W_q(f̃^fuse_i)K^T)V)) +f_i, Similar to the baseline method, we also utilized a combination of CE loss and WRT loss to jointly constrain appearance features GeM(f̃_i) that are closely linked to shape. This constraint serves to enhance their identity discrimination and modality invariance. As follows: L^a_id =-1/n_b∑_i=1^n_bq_ilog( W_id(GeM(f̃_i))) , L^a_wrt= 1/n_b∑_i=1^n_blog(1+exp(∑_i,j w_i,j^a,p d_i,j^a,p-∑_i,k w_i,k^a,n d_i,k^a,n),w_i,j^a,p = exp( d_i,j^a,p)/∑_ d_i,j^a∈P_iexp( d_i,j^a,p),w_i,k^a,n = exp(-d_i,k^a,n)/∑_ d_i,k∈N_iexp(-d_i,k^a,n),where j,k (P_i^a,N_i^a) represents the index of the positive and negative samples(set) corresponding to the anchor shape feature f̃_i within a batch size, respectively. 
And d_i,j^a represents the euclidean distance between two features: d_i,j^a=GeM(f̃_i)-GeM(f̃_j)_2.§.§ Training and InferenceIn the training process, we employ the appearance flow network F_a for extracting appearance features and the shape flow network F_s for extracting shape features. To further enhance this process, we introduce the ISR module to restitute inaccuracies in infrared shape features. Additionally, the SFP module is integrated to impart the shape feature extraction capabilities of the shape stream network F_s to the appearance stream network F_a. Lastly, the AFE module is introduced to mine appearance features that have both direct and indirect associations with the shape features. The complete training process is executed in an end-to-end fashion, as illustrated in Algorithm <ref>.During the testing process, we solely utilize the appearance stream network F_a, F_s̃ and the AFE module, without involving the shape stream network F_s and the human parsing network SCHP.§ EXPERIMENTS §.§ Datasets SYSU-MM01<cit.> is a large-scale dataset with complex environments. The training set consists of 11909 IR (22258 VIS) images of 395 identities captured across 2 IR (4 VIS) cameras.For the testing set, there are 96 pedestrians, with a total of 3,803 IR pedestrian images and 301 randomly selected VIS images. RegDB<cit.> is a small dataset consisting of 8420 images of 421 identities captured by a single VIS(IR) camera. Each pedestrian has 10 VIS(IR) images. We followed BDTR<cit.> and randomly divided the dataset into training and testing sets for training and evaluation.HITSZ-VCM<cit.> is a video-based VI-ReID dataset that contains 251452 VIS images and 211807 IR images of 927 identities, with each track containing 24 consecutive images. The training(testing) set encompasses 11061(10802) tracks of 500(427) identities.Evaluation Metrics We employ Cumulative Matching Characteristics (CMC), mean Average Precision (mAP), and mean Inverse Negative Penalty (mINP)<cit.>as evaluation metrics to assess the performance of the ScRL and the methods compared in this paper. §.§ Implementation DetailsSimilar to DEEN <cit.>, we adopted ResNet50 pre-trained on imagenet as the backbone and replaced the average pooling layer with the GEM pooling layer, with all input image sizes resized to 384×144. In the training phase, we adopted Random Crop, Random Horizontal Flip, Channel Random Erasing and Channel AdapGray <cit.> to enhance the IR(VIS) images, and we adopt Random Crop and Random Horizontal Flip for the shape image. We adopt the Adam optimizer for optimization, the learning rate of the classifier W_s, W_a was set to 0.0007, and the learning rate of other networks was set to 0.00035. The model trained a total of 120 epochs, in the first 10 epochs, the learning rate is dynamically adjusted through the warmup strategy, in the 40th and 60th epochs, the learning rate decreases by 10%. At every batch size, we randomly sample 64 images from 8 identities, with 4 VIS (IR) images for each identity and we follow the sampling settings of MITML for HITSZ-VCM<cit.>. §.§ Comparison with State-of-the-Art Methods In this section, we present a comprehensive comparison between the proposed ScRL and other state-of-the-art methods across the SYSU-MM01, RegDB, and HITSZ-VCM datasets. 
§.§ Comparison with State-of-the-Art MethodsIn this section, we present a comprehensive comparison between the proposed ScRL and other state-of-the-art methods across the SYSU-MM01, RegDB, and HITSZ-VCM datasets. Specifically, the state-of-the-art methods we compare include Zero-Pad<cit.>, HCML<cit.>, HSME<cit.>, D^2RL<cit.>, eBDTR<cit.>, X-Modal<cit.>, Hi-CMD<cit.>, DDAG<cit.>, HAT<cit.>, MCLNet<cit.>, CM-NAS<cit.>, MPANet<cit.>, CAJ<cit.>, AGW<cit.>, FMCNet<cit.>, CMMTL<cit.>, PMT<cit.>, DSCNet<cit.>, MRCN<cit.>, DEEN<cit.>, and SEFL<cit.>. SYSU-MM01. As shown in Tab.<ref>, the proposed method exhibits strong competitiveness on SYSU-MM01, particularly in terms of Rank-1, mAP, and mINP. Specifically, in the “all search” mode, the proposed method attains an accuracy of 76.1%, 72.6%, and 59.8% for the Rank-1, mAP, and mINP indicators, respectively. Similarly, in the “indoor search” mode, the proposed method delivers an accuracy of 82.4%, 85.4%, and 82.2% for the Rank-1, mAP, and mINP indicators, respectively. Furthermore, our method outperforms the second-best method by 0.9% (2.1%), 0.8% (2.1%), and 7.9% (9.5%) on Rank-1, mAP, and mINP in the “all search” (“indoor search”) mode, respectively. The results suggest that the proposed method, which focuses on pedestrian features with shape as the central component, effectively mitigates modality changes and enhances the accuracy of cross-modality pedestrian matching. RegDB. As illustrated in Tab. <ref>, the proposed approach showcases commendable performance even on the limited-scale dataset RegDB. Notably, within the “IR to VIS” mode, our method achieves 91.8% and 85.3% accuracy in the Rank-1 and mAP indicators, respectively, and demonstrates superiority over the second-best SEFL approach by 0.7% (0.1%) in terms of Rank-1 (mAP). Similarly, in the “VIS to IR” mode, our method attains an accuracy of 92.4% in Rank-1 and 86.7% in mAP, and it outperforms SEFL in both the Rank-1 and mAP metrics. These experimental results demonstrate that our method also achieves significant effectiveness on small-scale datasets.HITSZ-VCM. To verify the scalability of the proposed method, we conducted experiments on video-based VI-ReID. Specifically, similar to SEFL, we obtain sequence-level features by applying an average pooling layer to the frame-level features. As showcased in Tab. <ref>, our method exhibits superior performance compared to all methods grounded in video (image) strategies and achieves 71.2% (73.3%) and 52.9% (53.0%) accuracy in Rank-1 and mAP in the “VIS to IR” (“IR to VIS”) mode, respectively.In comparison to the video-based approaches, our method demonstrates superior performance by exceeding the second-best SADSTRM approach by 5.9% and 3.4% in terms of Rank-1 and mAP, respectively, within the “IR to VIS” mode. Additionally, our method also outperforms the second-best IBAN approach by 3.7% and 2.0% in terms of Rank-1 and mAP in the “VIS to IR” mode. Furthermore, when compared to the image-based approaches, our method achieves remarkable results in the “VIS to IR” (“IR to VIS”) mode, surpassing the second-best SEFL approach by 3.5% (3.1%) in Rank-1 and 0.6% (0.5%) in mAP. The experimental results suggest that our method can extract shape-centered features that remain insensitive to modality variations at the frame level, ensuring robustness in pedestrian feature representation at the sequence level. As a result, it achieves the best performance.The comparative experimental results confirm the effectiveness and advancement of our method for image-based VI-ReID tasks on both the large-scale dataset SYSU-MM01 and the small-scale dataset RegDB.
Moreover, our method has demonstrated its effectiveness and advancements in video-based VI-Reid tasks on the large-scale video dataset MITML. These findings underscore the significance of learning shape-centered features, offering tangible benefits for cross-modality pedestrian retrieval.§.§ Ablation Studies In this subsection, we analyze the contribution of different components including SFP (Shape Feature Propagation), ISR (Infrared Shape Restitution), AFE (Appearance Feature Enhancement) based on the SYSU-MM01 dataset. Effectiveness of SFP.To verify the effectiveness of SFP, we integrated the SFP into the Baseline, resulting in “B+S”.This augmentation empowers the model to autonomously acquire shape features from the original image during inference, eliminating the need for shape feature learning and human parsing networks. Specifically, as shown in Tab. <ref>, in “All search”(“Indoor search”) mode, “B+S” has demonstrated improvements of 3.4%(1.4%) in Rank-1, 3.1%(1.7%) in mAP, and 3.6%(2.0%) in mINP when contrasted with the Baseline.These improvements are attributed to the fact that shape features, which remain unaffected by modality changes, complement appearance features, thereby enhancing the overall performance. Furthermore, we conducted comprehensive experiments on both model and computational complexity. As shown in Tab. <ref>, we validated the complementarity of shape features to appearance features through experiments in Setting 2, specifically, we utilized two independent networks to extract shape and appearance features and then concatenated these two features for inference. Compared to the Baseline, the Rank-1 (mAP) of Setting 2 increased by 1.7% (0.6%) by increasing the model parameter (computational complexity) by 90.1M (83.4G).In contrast, “B+S" (Setting 3) achieved an even greater improvement of 3.4% in Rank-1 (and 3.1% in mAP) with a relatively modest increment of 15M parameters and 3.2G in computational complexity. This suggests that propagating shape features to the appearance stream network can enhance the acquisition of shape-related features, thus improving the model's performance. In comparison to Setting 2, The introduction of SFP has also reduced the computational complexity of the inference process, as it eliminates the need for the involvement of shape stream networks and human parsing networks.Effectiveness of ISR. Incorporating SFP extends the shape stream's capabilities to the appearance stream. However, the incorrect IR shape restricts the potential of the shape stream, subsequently curbing the efficacy of the appearance stream. To tackle this challenge, ISR was introduced to the “B+S", yielding“B+S+I". As highlighted in Tab. <ref>, the Rank-1, mAP, and mINP metrics have collectively improved by 1.0%(0.4%), 1.0%(0.4%), and 1.2%(0.4%) in “All search”(“Indoor search”) mode respectively.The performance improvement suggests the feasibility of extracting pertinent information for repairing infrared shape features from the intermediate features of the appearance stream network and underscores ISR's prowess in restituting inaccurate IR shapes.Effectiveness of AFE.In order to accentuate appearance features linked with shape, we incorporated the AFE into the “B+S+I". The corresponding results, as depicted in Tab. 
<ref>, reflect an enhancement in Rank-1, mAP, and mINP, in “All search" mode, these metrics increased from 74.6%, 71.0%, and 57.7% to 76.1%, 72.6%, and 59.8%, respectively, in “Indoor search" mode, the metrics improved from 79.4%, 83.2%, and 79.8% to 82.4%, 85.4%, and 82.2%, respectively.This suggests that in contrast to typical appearance features, appearance features centered on shape are better at emphasizing identity-discriminative information related to individuals while minimizing the influence of background and other irrelevant noise features. Furthermore, as delineated in Tab. <ref>, the computational overhead imposed by AFE is negligibly minimal after rounding, rendering it inconsequential during the inference phase. This ensures that the introduction of AFE hardly imposes any burden on the model's inference process. AFE effectively learns appearance features that have both direct and indirect connections to shape. To confirm its ability to capture features directly associated with shape, we replaced the original query feature of Stage 2 (S2) with shape features. The results in Table <ref> show that adding S2 to “B+S+I" to create “B+S+I+S2" resulted in improvements across Rank-1, mAP, and mINP. Specifically, when compared to “B+S+I," there were increases of 0.7%(1.0%), 0.9%(0.8%), and 1.2%(0.9%) in “All search”(“Indoor search”) mode, respectively. Furthermore, when incorporating S1 into the “B+S+I+S2" configuration to create “B+S+I+S2+S1," the metrics exhibited further improvements. In the “All search" mode, Rank-1, mAP, and mINP increased from 75.3%, 71.9%, and 58.9% to 76.1%, 72.6%, and 59.8%, respectively. Similarly, in the Indoor search" mode, the metrics improved from 80.4%, 84.0%, and 80.7% to 82.4%, 85.4%, and 82.2%, respectively. The experimental results affirm that the strategy of initially learning appearance features directly associated with shape and subsequently those indirectly linked is effective. This is due to the fact that appearance features, whether directly or indirectly connected to shape, are modality-insensitive discriminative features pertaining to human subjects.§.§ Visualization This section encompasses visualization experiments on retrieval results and model interest.Retrieval result. In order to further analyze the effectiveness of different ablation settings, as shown in Fig. <ref>, we visualized the retrieval results of different ablation settings. Moving left to right, the degree of similarity with the query image progressively decreases. Compared to the Baseline, it is clear that the addition of each module yields improvements in retrieval performance.Model interest. The proposed ScRL primarily learns modality-invariant pedestrian feature representations with a focus on shape. To illustrate the advantages of our method more vividly, we have visualized the heatmap, as displayed in Fig. <ref>. It is evident that, in comparison to the Baseline method, the proposed ScRL excels in emphasizing more discriminative information about the human body area. Furthermore, it effectively suppresses interference stemming from background noise. This is evident in Fig. <ref>, where the baseline method concentrates on background objects (the trash cans in columns 5 of Fig. 
<ref>, as well as the patterns on the wall in column 6, etc), whereas our proposed method exclusively emphasizes the body area.§ CONCLUSION AND FUTURE WORKSThis paper introduces the innovative ScRL framework designed to enhance the effective utilization of body shape in VI-ReID.Firstly, we introduce Shape Feature Propagation (SFP), which directly extracts shape features from visible(infrared) pedestrian images. The distinctive characteristic of SFP lies in its ability to avoid the need for the auxiliary model, resulting in a significant reduction in computational complexity. Additionally, the proposed Infrared Shape Restitution (ISR) restitutes errors in the infrared shape at the feature level. This correction enhances the discriminative capacity of infrared shape features.Furthermore, we present the Appearance Feature Enhancement (AFE), which focuses on learning shape-centered appearance features while concurrently suppressing irrelevant identity-unrelated information. The synergy of these techniques culminates in the ScRL framework. Extensive experiments have validated the superiority of ScRL on the SYSU-MM01, RegDB image datasets, and HITSZ-VCM video datasets.Nonetheless, our proposed method has not taken into account the significant variations in shape features arising from different camera perspectives. This oversight may result in inadequate discrimination of shape features, subsequently impacting the learning of appearance features related to shape, especially when dealing with modality changes. Therefore, in future research, we will delve deeper into the exploration of methods for learning camera-invariant shape features, with the ultimate goal of enhancing the learning of appearance features related to shape. | http://arxiv.org/abs/2310.17952v2 | {
"authors": [
"Shuang Li",
"Jiaxu Leng",
"Ji Gan",
"Mengjingcheng Mo",
"Xinbo Gao"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20231027075724",
"title": "Shape-centered Representation Learning for Visible-Infrared Person Re-identification"
} |
Reality3DSketch: Rapid 3D Modeling of Objects from Single Freehand Sketches
January 14, 2024
=================================================================================
[Figure: Given a collection of RGB images captured by mobile phones, Reality3DSketch generates a 3D object based on the input of the hand-drawn sketch and places it at the user's desired location. We first obtain a reconstructed 3D environment (first column). Users can then draw a sketch from a specific view (second column), and our algorithm reconstructs an in situ 3D object placed at the user's desired location (third and fourth columns; added objects are highlighted in green for visual clarity).]
Tianrun Chen is with the College of Computer Science and Technology, Zhejiang University, China, 310027 and KOKONI, Moxin (Huzhou) Technology Co., LTD. (email: [email protected]) Zejian Li is with the School of Software Technology, Zhejiang University, China, 310027 (email: [email protected]). Chaotao Ding and Ying Zang are with the School of Information Engineering, Huzhou University, China, 313000 (e-mail: [email protected], [email protected]). Yiyi Liao is with the College of Information Science and Electronic Engineering, Zhejiang University, China, 310027 (email: [email protected]). Lanyun Zhu is with the Information Systems Technology and Design Pillar, Singapore University of Technology and Design, Singapore 487372 (e-mail: [email protected]). Lingyun Sun is with the College of Computer Science and Technology, Zhejiang University, China, 310027. (email: [email protected]) This work is an extended version of the conference report of Chen, Tianrun, et al., "Deep3DSketch: 3D Modeling from Free-hand Sketches with View- and Structural-Aware Adversarial Training", at the 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2023. *Corresponding Author
The emerging trend of AR/VR places great demands on 3D content. However, most existing software requires expertise and is difficult for novice users to use. In this paper, we aim to create sketch-based modeling tools for user-friendly 3D modeling. We introduce Reality3DSketch with a novel application of an immersive 3D modeling experience, in which a user can capture the surrounding scene using a monocular RGB camera and can draw a single sketch of an object in the real-time reconstructed 3D scene. A 3D object is generated and placed in the desired location, enabled by our novel neural network with the input of a single sketch. Our neural network can predict the pose of a drawing and can turn a single sketch into a 3D model with view and structural awareness, which addresses the challenge of sparse sketch input and view ambiguity. We conducted extensive experiments on synthetic and real-world datasets and achieved state-of-the-art (SOTA) results in both sketch view estimation and 3D modeling performance. According to our user study, our method of performing 3D modeling in a scene is >5x faster than conventional methods. Users are also more satisfied with the generated 3D model than with the results of existing methods. Index Terms: Sketch, 3D Modeling, 3D Reconstruction, Computer/Human Interaction.
§ INTRODUCTION
The rapid development of portable displays and the emerging trend of metaverse applications, including AR/VR, bring new possibilities in the digital era – people can now view massive digital content and even interact with it in a virtual 3D world.
This emerging trend calls for a large quantity of versatile and customizable 3D content in the virtual world <cit.>. Much effort has been made to design 3D modeling tools to be simpler and encourage creativity <cit.>.However, most existing 3D modeling tools cannot meet this demand in the metaverse era. First, they are not friendly to novice users aiming to create customized 3D models. Widely used computer-aided design (CAD) software requires knowledge that involves both sophisticated CAD software commands and strategies, which is required to parse a shape into sequential commands <cit.>; additionally, it is a labor-intensive and time-consuming process <cit.>. Second, most 3D modeling tools isolate the creation process from existing 3D content, and the created model may lack context information and require extra effort to fit it to the VR/AR context <cit.>. A promising solution to the abovementioned limitations is to utilize sketch-based 3D modeling tools as an alternative to the conventional CAD software suite. As sketching is a natural form of expression for human beings, using sketches as the input to produce 3D models can free users from mastering 3D modeling. In particular, there are 3D sketching tools in the context of VR/AR <cit.>. They allow users to immersively and freely draw 3D curves directly in the air, ensuring that the generated 3D model fits in a 3D world. However, special devices and sufficient expertise are still required to use these tools – most of the tools require MoCap systems or a motion-tracking stylus to provide precise localization of 3D stokes, and the modeling process is still designed for users with reasonably good drawing skills and not for novice users <cit.>. The requirement of sufficient expertise and much practice in using existing 3D sketching tools also comes from the depth-perception issue <cit.> – users have difficulties in localizing the desired drawing position in 3D space, especially in settings where users draw 3D sketches by viewing only a 2D display panel on tablets or mobile phones without depth perception <cit.>.To address this challenge, we offer a novel solution, Reality3DSketch, to provide a new paradigm for novice users to create new customized 3D models in a given 3D world. Reality3DSketch is an AI-enabled 3D modeling tool inspired by the recent success of AI-enabled content creation. We first propose a novel generative network that can use only a single-view freehand 2D sketch (in a single plane) as the input to produce a high-fidelity 3D model. This eliminates the need for users to have specialized skills or to provide multiple consistent views of the object, thereby minimizing their effort. Instead of using multiple precise line drawings that require drawing expertise or a step-by-step workflow that requires strategic knowledge <cit.>, Reality3DSketch allows users to draw a single-view sketch from an arbitrary viewpoint. The neural network processes the rest, and it is trained to estimate the user-intended viewpoint. Viewpoint estimation can constrain the model generation process to resolve the view ambiguity of the single sketch. The estimated viewpoint can also be used to guide the positioning of the 3D model in a real scene. Since the generated 3D model is aligned in a particular viewpoint as in the dataset, having the viewpoint information can enable the object to be rotated to match the user-desired view angle. We design novel modules to enhance the performance of the sketch-to-model process. 
We disentangle the learning of 3D shapes and viewpoints by random pose sampling (RPS) of the object silhouette, and we input the randomly sampled silhouette to an effective progressive shape discriminator that is aware of the objects' geometric structure via cross-view silhouettes of the 3D model. The network is designed to have both view- and structure-aware properties, aiming to provide high-fidelity 3D modeling that can accurately reflect users' intentions. Furthermore, to offer an immersive creative environment with depth perception, we exploratively propose to have users draw a single-view sketch in a 3D reconstruction of the surrounding environment obtained from images captured by a mobile phone instead of drawing by viewing the raw scenes captured by a monocular RGB camera with a limited field of view. Our experimental results demonstrate that contextual geometric information can be beneficial for users in terms of user experience. With the 3D reconstructed mesh, users can rotate and view the environment from different angles as they sketch. This gives the user a better sense of depth and spatial relationships than with a single flat image. This can be especially useful when designing new components that need to fit into the environment in a specific way. Additionally, the resulting object can be viewed directly from different angles and distances, allowing for easy checking or adjustment immediately after the sketch-to-shape generation process, which is also time-saving for users.We conducted extensive experiments using both a synthetic dataset and a user-drawn real dataset. The qualitative and quantitative results show the effectiveness of our novel sketch-to-model approach with state-of-the-art (SOTA) performance. As Reality3DSketch is a new 3D interaction paradigm, in which the sketch can precisely define the 6D pose and position of the generated object, we also performed a user study that compared our 3D modeling approach and the conventional manual approach of 3D interaction. The results show that our method is >5 times faster than the baseline approach in performing 3D modeling and interaction within a scene. We further performed a user study that compared Reality3DSketch with RGB context input sketch-based 3D modeling. The results showed that involving geometric information led to significantly higher user experience ratings and fewer redo actions. Another user study shows that users were more satisfied with the 3D model generated with our approach, demonstrating the practicality and effectiveness of our novel 3D modeling pipeline. Specifically, our contributions are as follows: * We propose a novel paradigm, Reality3DSketch, for intuitive and immersive 3D modeling. Table I shows the differences between Reality3DSketch and other pipelines. We consider a case in which users can use their phones to capture the surrounding environment to obtain an accurate 3D reconstruction and draw a single-view sketch at the desired location. The 3D object is generated and put into the virtual environment. Both the 3D reconstruction and the sketch-to-model generation are processed in real time. Our user study shows that our immersive 3D modeling approach is >5 times faster than separately modeling and manually positioning the object. * The sketch-to-model process is realized by a novel neural network we propose. 
The network is designed to have random pose sampling (RPS) and a progressive shape discriminator (SD) so that it is both view- and structure-aware to ensure high-fidelity model generation.* State-of-the-art (SOTA) performance is achieved with both our sketch view estimation result and our 3D modeling result in both synthetic and real datasets. In the user study, users are also more satisfied with the generated 3D model than previous methods. § RELATED WORKS§.§ 2D Sketch-Based 3D ModelingSketch-based 3D modeling has been studied by researchers for decades. Early works mainly focused on drawing 2D sketches on paper or a touch-screen panel, and 3D models were obtained accordingly. Bonnici et al. <cit.> and Olsen et al. <cit.> comprehensively reviewed the existing sketch-based 3D modeling approaches. Existing sketch-based 3D modeling methods using 2D sketches as the input can be divided into end-to-end and interactive approaches. The interactive approach requires users with strategic knowledge for sequential step decomposition or specific drawing gestures or annotations <cit.>.For the end-to-end approach, works that use template primitives or retrieval-based approaches <cit.> can producesome satisfactory results, but they lack customizability. Some very recent work directly reconstructed a 3D model using deep neural networks and recognized the sketch-based 3D modeling as single-view 3D reconstruction <cit.>. However, these methods using an autoencoder structure similar to a single-view 3D reconstruction pipeline can only obtain coarse predictions of the 3D model <cit.> due to the sparsity and ambiguity of sketches. Specifically, sketches are sparse because they have only a single view, are mostly abstract, lack fine boundary information when drawn by humans, and more critically, lack texture information for depth estimation. This brings considerable uncertainty when learning 3D shapes. In this work, we incorporate the advantage of using 2D sketches, which are intuitive and convenient, and we propose a novel network design to alleviate the sparsity and ambiguity of a single 2D sketch to produce high-fidelity 3D modeling that reflects users' ideas. §.§ Immersive 3D SketchingWith the emergence of AR/VR, 3D sketching tools were developed. An early attempt at 3D sketching in the context of AR/VR was Holosketch <cit.>, which supports creating primitives and freeform tubes and wire geometries in 3D. Later works expanded the possibilities with advances in hardware development <cit.>.There are even commercial tools (e.g., Tilt Brush, GravitySketch, and Quill) available for users to directly draw 3D objects in a virtual environment. With various curve- and surface-fitting techniques, even 3D CAD models can be created <cit.>. However, despite the freedom of painting with 3D strokes, 3D sketching and potentially forming 3D models is not a trivial task due to two significant challenges. The first challenge is the depth perception issue. Distance underestimation <cit.> and disparities in targeting accuracy between lateral and depth motions are frequently found in 3D sketching systems <cit.>. Specifically, in a user study of a mobile-based 3D sketching system creating content in a scene captured by RGB cameras, all users reported difficulty in depth estimation when creating the content <cit.>. Another challenge is the high cognitive and sensorimotor demands of drawing in 3D.Wiese et al. 
<cit.> discovered that 3D drawing requires more manual effort andhigher cognitive and sensorimotor demands than 2D drawing, which is due to the requirement for users to control more degrees of freedom (DOFs) during movement (3/6 DOFs instead of 2 DOFs). Arora et al. <cit.> reported that in pure 3D interactive settings without a physical surface, users are forced to rely solely on eye-hand coordination to control stroke position, which introduces extra challenges for creators. In contrast, this work demonstrates an application that combines 2D sketching on a physical surface (mobile device) and 3D scene reconstruction (using a regular mobile device) for the first time. There are no longer heavy cognitive and sensorimotor demands, but the generated 3D model can still be fitted in a real scene for AR/VR applications. § METHOD §.§ OverviewThe overall pipeline of Reality3DSketch is illustrated in Figure 2. We separated the generation of the surrounding environment and sketch-based object 3D modeling into two steps with separate neural networks. Users use their phones to capture the surrounding environment. The obtained posed images are fed into a real-time reconstruction network that directly reconstructs local surfaces, represented as sparse TSDF volumes. The mesh is extracted and rendered on the user's screen. The user then draws one sketch at a single viewpoint within the scene. The single-view sketch is fed into a sketch-to-model network to obtain a complete 3D model. The sketch-to-model network has a view-prediction network to obtain the predicted viewpoint information. Because the object generated from the sketch-to-model network is aligned as in the dataset, a pose transformation (rotation) is performed to translate the object in canonical space to global coordinates, and the sketch-derived 3D model is placed in the reconstructed scene at the users' desired position and angle. §.§ PreliminariesFor the 3D reconstruction network, we are given a set of images { I_t } with the corresponding camera poses {ξ_t∈𝕊𝔼( 3 ) } to obtain a dense 3D mesh reconstruction M_t. For the sketch-to-model network, the input is a binary sketch I_s∈{ 0,1 }^W× H. We let I_s[ i,j] = 0 if it is marked by a pen stroke, and I_s [ i,j] = 1 otherwise. The goal of the sketch-to-model network G is to obtain a mesh M_Θ =(V_Θ,F_Θ), in which V_Θ and F_Θ represent the mesh vertices and faces and the silhouette S_Θ :ℝ^3 →{ 0, 1 } ^W× H of M_Θ best matches the information from the input sketch I_s. Compared to NeRF or other 3D representations <cit.>, the mesh representation of generated shapes offers a seamless integration into reconstructed scenes <cit.>. §.§ View Ambiguity and Sketch View PredictionIn the sketch-to-model network, we first explicitly learn the viewpoint of the model, which is a fundamental element in positioning the generated 3D model in a scene. We use an encoder E to produce latent code z_l and input it to the viewpoint prediction module, which consists of two fully connected layers D_v to produce the viewpoint estimation ξ_pred, represented by an Euler angle. 
The viewpoint prediction module is optimized in a fully supervised manner with the input of the ground-truth viewpoint ξ_gt, supervised by a viewpoint prediction loss ℒ_v, which adopts the MSE loss between the predicted and ground-truth poses, defined as:ℒ_v=‖ξ_gt-ξ_pred‖_2=‖ξ_gt-D_v(z_l)‖_2 We also integrate view prediction into the sketch-to-model process, as previous works argue that view ambiguity is a critical issue of sketch-based 3D modeling<cit.>, which will be illustrated in the subsequent section.§.§ 3D Model Generation with View AwarenessWe take a commonly used encoder-decoder structure as the backbone of our sketch-to-model process, as it is a cross-domain prediction task. We use an encoder E to obtain a compressed shape code z_s and a decoder D to manipulate z_s to calculate the vertex offsets of the template mesh and deform it to obtain the output mesh M_Θ = D(z_s). The silhouette S_Θ :ℝ^3 →{ 0, 1 } ^W× H of M_Θ should match the input sketch I_s. Therefore, we render the silhouette S_1 of M_Θ. Specifically, the output viewpoint prediction ξ_pred is fed into a differentiable renderer to render a silhouette at the given viewpoint for supervision. We use the mIoU loss ℒ_iou to measure the similarity between the rendered silhouette S_1 and the silhouette of the input sketch S_2:ℒ_iou(S_1, S_2)=1-‖ S_1⊗ S_2‖_1/‖ S_1⊕ S_2-S_1⊗ S_2‖_1 For computational efficiency, we progressively increase the resolutions of the silhouettes, obtaining the multiscale mIoU loss ℒ_sp, which is represented as:ℒ_sp=∑_i=1^Nλ_siℒ_iou^i The predicted viewpoint ξ_pred is also used to guide the generation process. We feed the viewpoint into two other fully connected layers D_v to produce a view-aware vector representation z_v and input both z_v and z_s to the decoder D to produce M_Θ. A common degradation can occur in which M_Θ is generated directly from z_s and z_v is completely ignored if the model is trained without any other constraints. To further condition the generation process on the viewpoint constraint, we add a random-view mesh synthesis branch, in which a random viewpoint ξ_random is sampled and a mesh M_Θ r is generated in the same manner as the mesh generation with ξ_pred. We use a differentiable renderer to render the silhouettes S_Θ from the mesh M_Θ and the silhouettes S_r from the mesh M_Θ r. The generated silhouettes S_r are regarded as out-of-distribution fake samples, while the generated silhouettes S_Θ are regarded as real samples. A shape discriminator SD is introduced to take the real and fake samples as inputs and force the neural network to generate meshes under the view constraint.§.§ 3D Model Generation with Structural AwarenessAt this point, the supervision of the mesh generation fidelity is performed with a single rendered silhouette of a generated mesh at a given viewpoint. We find that such 2D input alone cannot meet the demand for obtaining complete 3D shapes with fine-grained structural information, since a single sketch and the corresponding silhouette only represent the information at that given viewpoint and lack the information from other viewpoints. Therefore, we propose a random pose sampling (RPS) strategy, which uses multiple random-view silhouettes to supervise the sketch-to-model process. Random pose sampling aims to give the network the capability to generate reasonable, fine-structured 3D shapes independent of the viewpoints.
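A minimal sketch of the silhouette-based supervision described above is given below: the soft IoU loss, its multiscale version, and random pose sampling of silhouettes. The renderer interface, the resolution schedule and weights, and the pose sampling ranges are illustrative assumptions; in practice a differentiable renderer such as SoftRas would supply the silhouettes, possibly rendered directly at each resolution rather than resized.

```python
import torch
import torch.nn.functional as F

def iou_loss(s_pred, s_gt, eps=1e-6):
    """Soft silhouette IoU loss between (B, H, W) silhouettes with values in [0, 1]."""
    inter = (s_pred * s_gt).sum(dim=(1, 2))                  # elementwise product ~ intersection
    union = (s_pred + s_gt - s_pred * s_gt).sum(dim=(1, 2))  # elementwise sum minus product ~ union
    return (1.0 - inter / (union + eps)).mean()

def multiscale_iou_loss(s_pred, s_gt, scales=(64, 128, 256), weights=(0.2, 0.3, 0.5)):
    """Weighted sum of IoU losses over progressively increasing silhouette resolutions."""
    loss = 0.0
    for s, w in zip(scales, weights):
        sp = F.interpolate(s_pred.unsqueeze(1), size=(s, s), mode="bilinear").squeeze(1)
        sg = F.interpolate(s_gt.unsqueeze(1), size=(s, s), mode="bilinear").squeeze(1)
        loss = loss + w * iou_loss(sp, sg)
    return loss

def random_view_silhouettes(renderer, mesh, n_views=3):
    """Render silhouettes of a mesh from randomly sampled camera poses (RPS).
    The renderer(mesh, elevation=..., azimuth=...) call is a hypothetical interface."""
    sils = []
    for _ in range(n_views):
        elev = torch.rand(1) * 60.0 - 30.0   # illustrative elevation range
        azim = torch.rand(1) * 360.0         # full azimuth range
        sils.append(renderer(mesh, elevation=elev, azimuth=azim))
    return torch.stack(sils)
```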
As many previous works have investigated in the realm of shape-from-silhouette, the proposed multiview silhouettes contain valuable geometric information about the 3D object <cit.> and thus can serve as effective clues in the 3D model generation process. In addition, during the training process, the Sketch View Prediction Module may encounter degradation, resulting in 3D shapes being generated directly from shape code Z_s and disregarding the significance of viewpoints, consequently impairing its viewpoint awareness. To address this challenge, we introduce the shape discriminator SD, which undergoes joint training with the encoder and decoder using an adversarial approach. The integration of random view augmentation during training and the shape discriminator serves to strike a balance between view perception and shape quality. This training strategy mitigates the common degradation issue to a certain extent and enhances the model's viewpoint awareness.In practice, we randomly sample N_ξ camera poses ξ_1...N_ξ from camera pose distribution p_ξ. We use a differentiable renderer to render the silhouettes S_Θ{1...N_ξ} from the mesh M_Θ and render the silhouettes S_r{1...N_ξ} from the mesh M_Θ r. The extrasampled silhouettes of the real mesh and the fake mesh are fed into the discriminator. By introducing S_r{1...N_ξ}, the network can use the geometric structure of the objects in cross-view silhouettes while producing the 3D objects, and the discriminator helps to resolve the challenge due to the sparsity of sketches by offering more visual clues. The disentanglement process is very similar to disentangling the “where" and “what" principles in generative models <cit.>, which has proven to be effective in our tasks. Moreover, the shape discriminator is also carefully designed to fully capture the structural information of the rendered silhouettes. We apply a progressive shape convolutional discriminator SD. Following <cit.>, our discriminator is trained with increasing image resolution and incrementally adds new layers to handle higher resolutions and discriminate fine details. We have found that such a convolutional discriminator design is more effective in capturing local and global structural information to facilitate the generation of high-fidelity 3D shapes compared to the MLP-enabled discriminator for 3D objects. In training, nonsaturating GAN loss with R1 regularization is used <cit.> for better convergence: ℒ_sd =𝐄_𝐳_𝐯∼ p_z_v, ξ∼ p_ξ[f(SD_θ_D(R(M_Θ, ξ)))] +𝐄_𝐳_𝐯𝐫∼ p_z_vr, ξ∼ p_ξ[f(-SD_θ_D(R(M_Θ r, ξ)))]wheref(u)=-log (1+exp (-u))§.§ 3D Reconstruction and In-Situ 3D ModelingWe next apply the sketch-to-model process in a real environment, which is enabled by a state-of-the-art real-time indoor 3D reconstruction algorithm <cit.> and our customized acquisition application. Specifically, the reconstruction is performed incrementally, with input from the RGB camera and poses. The network directly optimizes the 3D volume represented by a volumetric truncated signed distance function (TSDF) from the inputs, and the mesh is obtained by marching cubes <cit.>. Accurate, coherent, and real-time reconstruction can be achieved and displayed via our customized app.After the surrounding environment is reconstructed, considering a user viewing the mesh of the 3D scene at a specified view in the world coordinates, they can sketch the desired object in that scene immersively. 
The object belongs to a user-defined class, and the system selects the corresponding weight of the sketch-to-model network based on the class. The sketch is preprocessed and input into the sketch-to-model network. A view estimation of the sketch in the canonical view and a 3D model at that particular view are produced via the sketch-to-model network. A relative position and pose (rotation) can be calculated to place the generated model in the scene at the desired location. Specifically, the rotation is derived from the viewpoint estimation result from the sketch-to-model process, and the translation is derived based on the relative position of the central point within the reconstructed mesh. Algorithm 1 is the pseudocode summarizing the method. § EXPERIMENT §.§ DatasetTraining the model requires large-scale sketch data with the corresponding 3D models, which are rarely available from publicly accessible sources. Following Zhang et al. <cit.>, we used the synthetic data ShapeNet-Synthetic for training and testing and the real-world data ShapeNet-Sketch to evaluate the method in the wild. ShapeNet-Synthetic is the edge map extracted by aCanny edge detector from rendered images provided by Kar et al. <cit.>. It contains 13 categories of 3D objects from ShapeNet. ShapeNet-Sketch is a dataset collected from real human drawings. Volunteers with varied drawing skills were asked to draw objects based on the rendered images of 3D objects from Kar's dataset <cit.>, and there are a total of 1300 sketches and their corresponding 3D shapes.The training of the indoor 3D reconstruction network is based on the commonly used ScanNet-V2 dataset <cit.>. This dataset is a large-scale resource for indoor 3D scene understanding, containing RGB images, depth images, 3D point cloud data, and semantic and instance annotations from indoor environments.§.§ Implementation DetailsFor the sketch-to-model process, we utilize ResNet-18 <cit.> as the encoder for image feature extraction. The extracted 512-dim feature is processed through two linear layers with L2-normalization, yielding a 512-dim shape code z s and a 512-dim view code z v. The rendering module is SoftRas <cit.>, and the number of views is N=3. Each 3D object is positioned in the canonical view with a set distance from the camera, 0 elevation,and 0 azimuth angle. We utilize the Adam optimizer with an initial learning rate of 1e-4 that is multiplied by 0.3 every 800 epochs. Beta values are set as 0.9 to 0.999. The total number of training epochs is 2000. The loss function ℒ for the sketch-to-model process is calculated as the weighted sum of five components:ℒ =ℒ_sp + ℒ_r+ λ_v ℒ_v+ λ_sdℒ_sd + λ_ddℒ_ddℒ_r denotes the flattening loss and Laplacian smoothing loss as in <cit.>, which is used to make the meshes more realistic with higher visual quality. ℒ_dd is the loss for domain adaptation, as in <cit.>. The lack of a large amount of ground-truth 3D models and the corresponding 2D sketches leads us to use synthetic data for training and testing on real-world data – a domain gap exists in the synthetic data and the real-world data. ℒ_dd is thus introduced to make our network generalizable to real hand-drawn datasets. We use domain adaptation on 7 of the classes, which have a sufficient number of sketches in the Sketchy dataset <cit.> and Tu-Berlin dataset <cit.>. 
Domain adaptation is performed by concatenating the average pooling and max pooling results of the image feature map as input, as in <cit.>.λ_sd and λ_dd in Equation <ref> equal 0.1, and λ_v equals 10.For the 3D reconstruction process, the network was trained following the settings in <cit.>. To apply the trained network, we wrote a custom Android application that captures videos using the onboard RGB camera of the phone. Along with the captured video, the extrinsic camera information, including the real-time pose, was obtained through the ARCore API. Using the camera poses and the video clips, a key-frame set was selected following the method in <cit.> as the input to the 3D reconstruction network to obtain the predicted mesh of the surrounding environment. §.§ Experimental Results for Sketch-View PredictionWe evaluated the performance of view prediction, which was jointly trained with the sketch-to-model process. We tested the mean absolute error (MAE) of the predicted viewpoint and the ground-truth viewpoint in the ShapeNet-Synthetic dataset, measured in degrees. The result is shown in Table I. Our method achieves state-of-the-art (SOTA) sketch-view prediction performance in elevation and azimuth angles. Note that the azimuth angle has larger errors in some categories (bench, cabinet, display, lamp, loudspeaker, table, telephone), as in these categories, objects have multiple symmetry planes.§.§ Experimental Results for Sketch-to-Model Generation The ShapeNet-Synthetic DatasetWe first evaluated the performance of the dataset with the ground-truth 3D model. Following <cit.>, we compared our method with a naive autoencoder network, model retrieval with features from a pretrained sketch classification network, and Sketch2Model <cit.> as the current state-of-the-art (SOTA) model. We first assessed the model's performance using the training/test sets of the ShapeNet-Synthetic dataset, which offered precise ground-truth 3D models for training and evaluation purposes. Meshes with the predicted viewpoint (Pred Pos) and the ground-truth viewpoint (GT Pos) were trained and evaluated. We applied a commonly used 3D reconstruction metric – voxel IoU – to measure the fidelity of the generated mesh. The results are shown in Table II. The qualitative results demonstrate the effectiveness of our approach with state-of-the-art (SOTA) performance in every category evaluated.To verify the statistical significance of this superior performance, we conducted t tests comparing our approach to prior methods. The results confirm that our approach outperforms existing methods with p < 0.05, indicating that the improvements are statistically significant. The quantitative evaluation of our method compared with existing state-of-the-art methods further demonstrated the effectiveness of our approach in reconstructing models with higher structural fidelity, as shown in Figure 4.The ShapeNet-Sketch Dataset We further evaluated the performance on real-world human drawings through the ShapeNet-Sketch dataset. We trained the model on the ShapeNet-Synthetic dataset and used the ShapeNet-Sketch dataset for evaluation. As shown in Table III, our model outperforms the existing state-of-the-art methods in most categories, demonstrating the effectiveness of our approach. In some categories, our method outperforms the existing methods even without domain adaptation (DA). 
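As a side note on evaluation, the angular MAE reported above has to respect the periodicity of the azimuth (e.g., 359° and 1° differ by 2°, not 358°). A minimal sketch, with illustrative function and variable names, is:

```python
import numpy as np

def angular_mae(pred_deg, gt_deg, period=360.0):
    """Mean absolute error between angles in degrees, taking wrap-around into account."""
    diff = np.abs(np.asarray(pred_deg) - np.asarray(gt_deg)) % period
    return np.minimum(diff, period - diff).mean()

# Hypothetical usage: evaluate elevation and azimuth errors separately.
# mae_elev = angular_mae(pred_elev, gt_elev)
# mae_azim = angular_mae(pred_azim, gt_azim)
```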
The introduction of DA can further boost the performance in some categories by reducing the gap between real and synthetic data.After adequately training the network, we tested the neural network on a computer with a graphics card (NVIDIA Tesla V100). Our approach had a generation speed of 123 frames per second (FPS). We also conducted a CPU-only performance test (Intel Xeon E5-2650 V3), and the results showed a 6% speed boost over Sketch2Model <cit.> under the same test settings (0.0328 s), with a rate of 30 FPS, which is sufficient for natural computer-human interaction.§.§ User Study of theImmersive 3D Modeling Process Our immersive 3D modeling experience offers creators the ability to design 3D models that fit the context quickly and efficiently. To validate the effectiveness of our approach, we conducted a user study where we compared the time costs for designers creating models using our approach with sketches drawn over a 3D scanned mesh and a baseline method where designers manually placed a model after designing it separately in a blank drawing pad without context information. The study involved 12 designers with 3D design expertise who drew sketches of chairs on a blank drawing pad (Fig. <ref> (a)) and obtained the generated 3D model file. The participants in the study were instructed to use a mobile phone to design a piece of furniture in an office setting. They were given the freedom to adjust the camera angle to find the optimal position for beginning their design. The user interface employed in the study is presented in Fig. <ref> and was a custom-designed mobile app that allowed users to draw, place, and view a 3D model of the designed object in situ within the environment. For comparison, we asked the designers to use the mobile app to manually place the generated 3D chair model in the 3D scanned mesh with the built-in "translate," "scale," and "rotate" features (touch-based interaction <cit.>), as shown in Fig. <ref> (b-d). The total time for designing the 3D models and manually placing them in the context was recorded, and the average time spent using our approach was compared to the average recorded time of the baseline method. The volunteers were asked to perform 3D modeling in each setting 3 times, for a total of 6 times. The results, shown in Table <ref>, indicate that our method can be more than 5x faster than the baseline method, demonstrating the effectiveness of our approach in enabling rapid and efficient 3D modeling within a scanned context.§.§ User Study of Geometric Context Information We used further experiments to verify the necessity of introducing geometric context information. Not only could the model learn a new digital environment 3D file with added objects, but this information was also critical for the user's creation process. Specifically, we recruited 12 volunteers and let them use a redesigned user interface, which allowed the users to draw sketches in a captured 2D image. The 3D scene was still reconstructed as the users moved their phones so that the obtained 3D model could remain in a specific 3D position. However, users could only see RGB images, not 3D meshes of the context. We ran a total of 48 sessions. In each session, volunteers were asked to perform the same task (e.g., designing and placing a table next to a sofa) with the two approaches. After completing each modeling task, we asked the volunteers to rotate the camera and view the designed objects from different angles. 
They were then asked to determine whether a "redo" operation was required due to inaccurate or unrealistic reconstruction or collision issues. We collected the number of "redo" calls for each approach. At the end of all the sessions, we asked the volunteers to evaluate the controllability and usefulness of each approach, which are commonly used criteria for evaluating user interface usability and user experience <cit.>. We followed the settings in a prior study <cit.>, using a 7-point Likert scale that ranged from “highly disagree” to “highly agree”. The result in Table <ref> shows a higher level of user experience ratings when using the geometric context than the RGB image context.§.§ Evaluating the Runtime for 3D Modeling After adequately training the neural network, we tested it on a computer with an NVIDIA Tesla V100 graphics card. Our approach had a generation speed of 123 FPS. We also conducted a CPU-only performance test (Intel Xeon E5-2650 V3), and the results showed a 6% speed boost over Sketch2Model <cit.> under the same test settings (0.0349 s), with a rate of 30 FPS, which is sufficient to be used for natural computer-human interaction. §.§ User Study of 3D Modeling ResultsTo further validate the effectiveness of our sketch-to-model algorithm, we conducted a user study following the settings of <cit.> and used the metric of the widely used mean option score (MOS) ranging from 1-5 <cit.> for two factors: Q1: How well does the output 3D model match the input sketch? (Fidelity); Q2: What do you think of the quality of the output 3D model? (Quality). We recruited 12 designers who were familiar with 3D content and presented them with 36 3D modeling results generated by our algorithm. Prior to the experiment, we gave each participant a brief and one-to-one introduction to the concepts of fidelity and quality. We recorded the rating results and averaged the scores. The results are shown in Table <ref>. As perceived by users, our method outperforms existing state-of-the-art methods in the user subject ratings.§.§ Ablation Study To show the effectiveness of our proposed method, we conducted an ablation study that removes random pose sampling (RPS) for view awareness. We also removed the progressive shape convolutional discriminator (SD) and used an MLP-based discriminator as in <cit.>. Our quantitative results (Table <ref>) and qualitative example (Figure <ref>) show that removing the RPS and SD is detrimental to the performance. Specifically, in RPS, we sampled multiview silhouettes for supervision to generate high-fidelity 3D models. We further performed a sensitivity analysis to determine how the number of sampled views affects the performance of the network. We changed the number of views and trained the neural network again, keeping all the settings and the network structures unchanged. The results are shown in Table VII. From the experimental results, we find that sampling three views brings slightly higher performance than using only two views, which means that multiview images are used to guide the network to facilitate optimization toward higher-fidelity models.§ CONCLUSION In this study, we provide a novel solution, Reality3DSketch, for 3D modeling. Unlike conventional CAD software, we take advantage of deep neural networks for intuitive and immersive 3D modeling. We demonstrate that users can use their phones to capture the surrounding environment and draw a single-view sketch on the screen. 
The algorithm reconstructs the 3D mesh of the surrounding environment in real time and produces a 3D object according to the user-drawn sketch in situ. We introduce a novel neural network to perform sketch view prediction and 3D modeling with the input of a single sketch. The network is designed to be view- and structure-aware, enabled by random pose sampling (RPS) and a progressive shape discriminator (SD) to produce high-fidelity models. Extensive experiments on both synthetic and real-world datasets demonstrate the effectiveness of our approach. We achieved state-of-the-art (SOTA) performance in both sketch view prediction and 3D modeling. Our user study shows that our method yields >5 times faster 3D modeling in a scene compared to separately modeling an object and manually placing it in a scene. Users are also more satisfied with the generated 3D model compared to existing methods. We believe that our work forges a new path and will have great potential to enable creators to perform 3D modeling in the future.§ LIMITATIONS AND FUTURE WORKSOur current system uses single-view sketches, which inherently lack comprehensive information. Due to this limited input, our 3D shape generation method struggles to produce high-fidelity results when there is heavy occlusion or missing information. With such incomplete input, it is difficult for the network to reliably determine the complete 3D geometry. Future work on incorporating other forms of context could help address these challenges. Currently, the generated 3D scene is only used to assist users in sketching from a single perspective. While this benefits users, the scene information has not yet been utilized to optimize pose estimation or shape generation. Future work could explore leveraging the scene geometry for these purposes. Additionally, since the sketch is in camera coordinates while the context is in world coordinates, investigating how world-space features could inform the model in the camera or canonical space represents another interesting research direction. Overall, our work provides an initial proof of concept, and we believe future research can build on this foundation to enable further applications.§ ACKNOWLEDGMENTSThis paper is supported by the National Key R&D Program of China (2022YFB3303301), National Natural Science Foundation of China (NSFC) (Grant No. 62006208, 62202418), and the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG2-PhD-2021-08-006). Tianrun Chen acknowledges funding from KOKONI, Moxin (Huzhou) Technology Co., LTD and Moxin Technology (HK). The author thanks Papa Mao and Xin Xu for discussion. IEEEtran [ < g r a p h i c s > ]Tianrun Chen received a bachelor's degree from the College of Information Science and Electronic Engineering, Zhejiang University, and is pursuing a Ph.D. degree at the College of Computer Science and Technology, Zhejiang University. He is the founder and technical director of Moxin (Huzhou) Technology Co., LTD. His research interests include computer vision and its enabling applications. [ < g r a p h i c s > ]Chaotao Dingis currently studying for a master's degree in electronic information at Huzhou University, focusing on 3D reconstruction and computer vision. He is a student member of CCF, and he has published articles in several computer vision-related journals.[ < g r a p h i c s > ]Lanyun Zhu received his B.E. degree from Beihang University, Beijing, China in 2020. He is currently pursuing a Ph.D. 
degree with the Information Systems Technology and Design (ISTD) pillar, Singapore University of Technology and Design. His research interests are mainly focused on deep learning and computer vision. He is the reviewer of multiple top journals and conferences, including IEEE T-IP, ICML and NeurIPS.[ < g r a p h i c s > ]Ying Zang received her B.S. degree in computer science and technology from Liaoning University, China, in 2004; her M.S. degree in computer science and technology from Dalian Maritime University, China, in 2010; and her Ph.D. degree in computer application technology from Chinese Academy of Sciences University, China, in 2022. She is an AI engineer at the School of Information Engineering of Huzhou University. She is currently working on research on 3D vision, object detection and semantic segmentation.[ < g r a p h i c s > ]Yiyi Liao received her Ph.D. degree from the College of Control Science and Engineering, Zhejiang University, China, in 2018. She is currently an assistant professor at the College of Information Science and Electronic Engineering, Zhejiang University. Her research interests include 3D vision and scene understanding.[ < g r a p h i c s > ]Zejian Li is an assistant researcher at the School of Software Technology, Zhejiang University. He obtained a Ph.D. degree from Zhejiang University. His research interests include generative models, interpretable image generation and intelligent design.[ < g r a p h i c s > ]Lingyun Sun is a professor at the School of Computer Science and Technology, Zhejiang University. He obtained a Ph.D. degree from Zhejiang University. His research revolves around AI and design, aiming to equip the design industry with AI capabilities and to enhance design tools and methodologies in the AI era. He has developed image and video generation platforms that can create visual content, short videos, and other digital materials. | http://arxiv.org/abs/2310.18148v1 | {
"authors": [
"Tianrun Chen",
"Chaotao Ding",
"Lanyun Zhu",
"Ying Zang",
"Yiyi Liao",
"Zejian Li",
"Lingyun Sun"
],
"categories": [
"cs.HC"
],
"primary_category": "cs.HC",
"published": "20231027135436",
"title": "Reality3DSketch: Rapid 3D Modeling of Objects from Single Freehand Sketches"
} |
Network Science Institute, Northeastern University, Boston, Massachusetts 02115, USA Department of Physics and Astronomy, Northwestern University, Evanston, Illinois 60208, USASchool of Mathematical Sciences, Jiangsu University, Zhenjiang, Jiangsu 212013, ChinaNordita, KTH Royal Institute of Technology and Stockholm University,SE-106 91 Stockholm, SwedenSchool of Mathematical Sciences, Jiangsu University, Zhenjiang, Jiangsu 212013, ChinaMathematical Institute, University of Oxford,Oxford OX2 6GG, UK Turing Institute, London NW1 2DB, UKDepartment of Computer Science, Rensselaer Polytechnic Institute, Troy, New York 12180, USA Network Science and Technology Center, Rensselaer Polytechnic Institute, Troy, New York 12180, [email protected] Department of Physics, Bar-Ilan University, Ramat Gan 52900, IsraelQuantum networks have experienced rapid advancements in both theoretical and experimental domains over the last decade, making it increasingly important to understand their large-scale features from the viewpoint of statistical physics. This review paper discusses a fundamental question: how can entanglement be effectively and indirectly (e.g., through intermediate nodes) distributed between distant nodes in an imperfect quantum network, where the connections are only partially entangled and subject to quantum noise? We survey recent studies addressing this issue by drawing exact or approximate mappings to percolation theory, a branch of statistical physics centered on network connectivity. Notably, we show that the classical percolation frameworks do not uniquely define the network's indirect connectivity. This realization leads to the emergence of an alternative theory called “concurrence percolation,” which uncovers a previously unrecognized quantum advantage that emerges at large scales, suggesting that quantum networks are more resilient than initially assumed within classical percolation contexts, offering refreshing insights into future quantum network design. Percolation Theories for Quantum Networks Shlomo Havlin January 14, 2024 =========================================Keywords: percolation, quantum network, entanglement distribution, critical phenomena, networks of networks, hypergraph§ INTRODUCTION Quantum information <cit.> is a fast-developing field that has transcended its roots originally in quantum mechanics and information theory to other areas like condensed matter physics <cit.>, statistical physics <cit.>, and network science <cit.>. At the core of quantum information lies the quantum bit, or qubit, the basic quantum information carrier. Two qubits can be designed into a relationship, called entanglement, which is an essential quantum resource <cit.> for quantum computing. Yet, entanglement is notoriously fragile, especially when qubits are spatially distant. Fortunately, by path routing and adding in-between sites for replaying, entanglement between remote qubits may eventually be established in an indirect way. Such an action, called entanglement distribution <cit.>, is a fundamental benefit of quantum networks (QN) <cit.>. In general, a QN is a network representation of different parties (nodes) that share entanglement (links) as connections. A significant part of our interest lies in distributing entanglement between two arbitrary nodes in the network, a process we refer to as “entanglement transmission.”Entanglement across different parties is essentially transmitted through quantum communication protocols. 
Successful demonstrations of quantum communication protocols have already been made on small-scale QN using diamond nitrogen-vacancy centers <cit.> and ion traps <cit.>. However, the big question that looms is how to scale this to much larger networks. A large-scale, practical QN would offer significant advantages for many industrial and scientific applications. For example, financial institutions and governments would benefit from quantum cybersecurity, providing an unprecedented level of secure communication. Researchers could also use networked quantum computers to dramatically increase the simulation speed of the physical and chemical processes of many interacting particles. Yet, if the individual channels (links) along the routed path are too noisy, the entanglement transmission may fail. Studying how this “indirect” transmission ability depends on the noise level of individual links requires tools from statistical physics and complex network theory. One such theory that has proven useful is percolation theory <cit.>. Percolation theory offers a mathematical framework for understanding how networks behave when subjected to random processes (which can be treated as a form of noise), such as how water percolates through soil or how diseases spread through populations. In the context of QN, percolation could provide valuable insights into the robustness and efficiency of entanglement distribution. By applying percolation theory, we can model and analyze the network structure directly and identify the most effective ways to maintain and distribute quantum entanglement across it. This lays the groundwork for examining QNs through the lens of statistical physics and opens up new avenues for understanding the upper limits of entanglement distribution in these networks. In this work, we will explore and summarize the developments of the QN framework and how a mapping to percolation offers unique tools for dissecting the problem of entanglement transmission. Specifically, we will show that the mapping to percolation theories—and the definition of how pairwise edges combine into indirect connectivity—are, indeed, not unique. A new, alternative percolation-like theory termed concurrence percolation <cit.> emerges, and it underlies an unexpected “quantum advantage,” revealing that QNs are more robust than we initially thought within the classical percolation framework. Moreover, the finding is scalable with network size and adaptable to different network topologies, suggesting a macroscopic improvement over classical considerations from a statistical physics perspective. This paper focuses on the comparison between classical percolation and concurrence percolation as mapped from the QN. It is structured as follows: In Section <ref>, we give a definition of the QN theoretical framework as well as its possible generalizations to other QN-based structures (e.g., hypergraphs). In Section <ref>, we briefly review the concept and definition of percolation theory and, in particular, how it relates to network connectivity at large scales. In Sections <ref> and <ref>, we will focus on the discovery that the new concurrence percolation theory surpasses the traditional percolation theory (which we refer to as “classical percolation” for comparison). In Section <ref>, we will delve into the algorithms developed for calculating concurrence percolation.
Finally, in Section <ref>, we will discuss the open questions and practical implications of the findings, both theoretically and for real-world communications. § QUANTUM NETWORKS (QN) As in traditional network theories, a QN resembles a topological graph or a graph-like structure, comprising nodes and links. This paper primarily focuses on a pure-state version of QN (Fig. <ref>) <cit.>. The QN is defined based on the following three principles:
* Each node (purple) comprises a collection of qubits (gray dots) that are entangled with qubits belonging to other nodes.
* Each link (gray line) represents a bipartite entangled pure state |ϕ⟩ connecting the two qubits at its endpoints.
* A weight θ is assigned to each link to characterize the degree of the link's entanglement.
Using the Dirac notation, a link, which corresponds to a bipartite entangled pure state connecting two nodes (e.g., Alice and Bob), can be written as |ϕ⟩=cosθ|00⟩+sinθ|11⟩. Here, w.l.o.g., the weight parameter θ is constrained within the range 0≤θ≤π/4, ensuring that cosθ≥sinθ is satisfied. In this notation, the first “0” in |00⟩ and the first “1” in |11⟩ represent the two possible states of Alice's qubit, |0⟩_Alice and |1⟩_Alice, respectively. Similarly, the second “0” and “1” represent the two possible states of Bob's qubit, |0⟩_Bob and |1⟩_Bob. The entanglement between Alice's and Bob's qubits is evident from the presence of only two terms, |00⟩ and |11⟩, in |ϕ⟩, while |01⟩ and |10⟩ are absent. This implies that upon measuring the state |ϕ⟩ in the |0⟩, |1⟩ basis from either Alice's or Bob's side, the state will randomly collapse to either |00⟩ or |11⟩. Consequently, if Alice's (or Bob's) measurement yields “0,” it guarantees that the other party's measurement result will also be “0.” This highlights a correlation feature that can be harnessed for communication in the quantum realm. Similar to how correlation in classical communication is measured using mutual information, we can quantify this quantum correlation using quantum mutual information <cit.>, which is given by -2cos^2θ ln(cos^2θ)-2sin^2θ ln(sin^2θ). The quantum mutual information reaches its maximum value when θ=π/4, which corresponds to a maximally entangled state, |ϕ_⊥⟩=√(1/2)|00⟩+√(1/2)|11⟩, commonly referred to as a Bell state or a singlet <cit.>. The entire QN, comprising many links, can be regarded as a huge pure state |QN⟩—the tensor product of all the individual bipartite pure states associated with each link. Consequently, the QN solely focuses on “quantum noise,” which comes from the fact that when θ<π/4, the link exhibits only partial entanglement. This partial entanglement, when employed in quantum communication tasks such as quantum teleportation, leads to errors in the teleported qubits, affecting the QN's overall communication capacity. Yet, as a pure state, the QN does not involve any classical noise (i.e., mixed states). This makes |QN⟩ an excellent medium for examining quantum phenomena without the interference of classical noise. Thus, this “minimalist” construction of the QN can serve as an ideal framework for investigating quantum theories and concepts on large scales. At present, the choice to define nodes as collections of qubits rather than individual qubits may appear arbitrary. What is the physical meaning of a node as a collection of qubits? And what about the qubits that belong to the same node—are they also entangled? To answer these questions, it is crucial to comprehend the concept of locality.
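Before turning to locality, note that the link-level quantities just introduced are straightforward to evaluate numerically. The following is a minimal sketch (in Python; the function name is ours, and the singlet conversion probability p=2sin^2θ and the concurrence c=sin2θ quoted in the comments are quantities that reappear later in this review):

import numpy as np

def link_quantities(theta):
    """Quantities for a link cos(theta)|00> + sin(theta)|11>, with 0 < theta <= pi/4."""
    c2, s2 = np.cos(theta) ** 2, np.sin(theta) ** 2
    mutual_info = -2 * c2 * np.log(c2) - 2 * s2 * np.log(s2)  # quantum mutual information
    p = 2 * s2              # singlet conversion probability (used later, in the CEP section)
    c = np.sin(2 * theta)   # concurrence of the link (used later, in concurrence percolation)
    return mutual_info, p, c

for theta in np.array([1 / 8, 1 / 4, 1 / 2, 1.0]) * np.pi / 4:
    print(round(theta / (np.pi / 4), 3), link_quantities(theta))
# All three quantities are maximized at theta = pi/4, i.e., for a Bell state (singlet).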
While the theoretical framework of quantum mechanics is inherently nonlocal, practical implementations of quantum information technologies often necessitate considering a “distant lab” paradigm <cit.>. In such scenarios, when a quantum system is distributed among multiple spatially distant parties or laboratories, it becomes unrealistic to assume the feasibility of executing global quantum operations. Instead, the parties are typically constrained to apply quantum operations exclusively to their respective subsystems (in their own “labs”), rather than collectively to the global system. This subset of quantum operations is known as local operations (LO). For example, given the entangled state |ϕ⟩=cosθ|00⟩+sinθ|11⟩ between Alice and Bob, Alice may apply a local unitary transformation on her qubit (e.g., a rotation {|0⟩,|1⟩}→{(|0⟩+|1⟩)/√(2), (|0⟩-|1⟩)/√(2)}), and Bob may apply the same transformation as well. This yields a new state |ϕ⟩→|ϕ'⟩=cosθ(|0⟩+|1⟩)(|0⟩+|1⟩)/2+sinθ(|0⟩-|1⟩)(|0⟩-|1⟩)/2. Furthermore, LO also allows Alice or Bob to locally measure their qubits, resulting in the random collapse of |ϕ⟩ to one of its eigenstates. However, Alice and Bob cannot transform their state globally and obtain a singlet, |ϕ⟩→|ϕ_⊥⟩=√(1/2)|00⟩+√(1/2)|11⟩. This is not counted as LO. On top of LO, Alice and Bob are also free to communicate classical information (CC), sharing their results of quantum measurements. Together, this set of operations is called the local operations and classical communication (LOCC). The LOCC defines a set of strategies to share and manipulate quantum information under the locality constraint. One of the most powerful theorems in quantum information states that the average entanglement between two parties can never be increased if only LOCC is allowed for the two parties. This establishes the role of entanglement as a quantum resource, given that LOCC constitutes the “free operations” of the system <cit.>. Therefore, qubits belonging to the same node are not constrained by LOCC. They can be freely entangled or disentangled as needed, but the entanglement is not viewed as a resource for communication. Only the entanglement of qubits belonging to different nodes matters. In other words, quantum networks are an effective representation of the fundamental constraints of locality, manifested by assigning qubits to different local compartments and entanglement to inter-compartment connections. §.§ Qudit-based quantum networks A natural extension of QN is to use more general d-dimensional “qudits” (qutrits, ququarts, etc.) instead of qubits (Fig. <ref>). Each link, as a bipartite pure state of qudits, can be written as |λ⟩=∑_j=1^d√(λ_j)|jj⟩. Here, λ_1≥λ_2≥⋯≥λ_d≥0 and ∑_j=1^dλ_j= 1. In this generalization, the weight of each link is no longer a single number but a list of non-negative numbers known as Schmidt numbers, denoted as λ=(λ_1, λ_2, ⋯, λ_d). When d=2, the bipartite pure state reduces to qubits, where λ_1=cos^2θ and λ_2=sin^2θ. The consideration of qudit-based QN offers both theoretical and practical advantages. Theoretically, a d-dimensional qudit inherently carries log_2 d times as much information as a qubit. Therefore, as the value of d increases, a single carrier can transmit more information, increasing the bandwidth. This enhanced capability is also evident in the robustness of entanglement for entangled states of qudits.
Indeed, even when some coefficients are erased (λ_j→ 0) from λ in the presence of noise, the pure state |λ⟩ can still remain entangled, as long as the two largest Schmidt numbers λ_1 and λ_2 remain positive. In experiments, qubit systems are commonly realized using two-level atoms or superconducting states. However, isolating these two levels from other nearby levels can be challenging. By including nearby levels and increasing the potential dimension d, the experimental design may become more feasible. In fact, several experiments have employed qudits to achieve better performance, including applications in quantum scrambling <cit.> and superdense coding <cit.>. §.§ Quantum networks are the basis of tensor networks There is also an interesting and deep connection between QN and tensor networks (Fig. <ref>), the latter being a familiar and powerful tool in condensed matter physics, mostly used for the purpose of facilitating computations and simulations in quantum physics and materials science. To be specific, tensor networks are designed to efficiently represent many-body quantum states <cit.>. These quantum states, which are essentially large, high-dimensional tensors in mathematical terms, can be factorized into smaller tensors using tensor networks. In particular, tensor networks are useful for representing the ground state of quantum systems, which typically exhibit strong ordering compared to excited states. This strong ordering often means that entanglement does not grow very fast with the length scale, which, in turn, allows for easier and more efficient factorization of the corresponding ground state. To delve deeper into the concept, note that a general N-body quantum state reads |Ψ⟩= ∑^D_x_1,x_2,⋯,x_N=1 T^x_1 x_2 ⋯ x_N|x_1⟩|x_2⟩⋯|x_N⟩, which lives in a D^N-dimensional Hilbert space that is the tensor product of N “single-body” Hilbert spaces (i=1,2,⋯,N). Each space is spanned by a basis of D vectors, |x_i⟩, where x_i=1,2,⋯, D. The complex tensor T that stores the coefficients is exponentially large (∼ D^N), effectively preventing direct computations of the quantum state |Ψ⟩'s characteristics. However, there may be a significant level of redundancy in the coefficients stored in the tensor. Consider an example where every entry in T can be fully factorized, such that T=a⊗ b⊗ c⊗⋯ where a,b,c,⋯ are D-dimensional vectors. In this case, it becomes unnecessary to store the entire tensor or perform calculations on it. Rather, it suffices to simply store the vectors a,b,c,⋯, of which the total size is DN. This, indeed, is how a tensor network works—by leveraging different ways of factorization that can be depicted through different graphical network structures <cit.>. Among various tensor networks, the matrix product state (MPS) is one of the most researched <cit.>. In computer science, it is often called the tensor-train network <cit.>. The MPS is commonly utilized to represent one-dimensional many-body quantum states. When extended to higher dimensions, this becomes what is known as the projected entangled pair state (PEPS) <cit.>. More involved tensor network structures, such as the multiscale entanglement renormalization ansatz (MERA) <cit.>, are also routinely used to study critical quantum systems. For example, an MPS representation of Eq.
(<ref>) can be written as |Ψ⟩= ∑^D_x_1,x_2,⋯,x_N tr{A_1^x_1A_2^x_2⋯ A_N^x_N}|x_1⟩|x_2⟩⋯|x_N⟩, where for each single-body i, a set of D different matrices, A_i^1,A_i^2,⋯,A_i^D, are introduced. Each matrix is of size d× d, where d is called the bond dimension. Thus, the total number of parameters is NDd^2, which is linear in N. For a sufficiently large d, the MPS has enough degrees of freedom to exactly represent any tensor T. However, it is frequently observed that a small d can approximately, if not perfectly, reproduce T. This occurs when the information stored in T scales linearly with N, a condition often found in the ground states of one-dimensional noncritical quantum systems. Intriguingly, the MPS offers a new physical perspective—the valence-bond picture <cit.>. To be specific, we map each single-body Hilbert space (spanned by |x_i⟩) to a physical site and assume that there are two d-dimensional qudits located at each site. For every two neighboring sites (1↔ 2, 2↔3,⋯, N↔ 1), two qudits, one from each site, are fully entangled, forming a “valence bond” that can be written as an unnormalized maximally entangled state, |ψ⟩= ∑^d_j=1|j⟩|j⟩↣ψ=[ 1 0 ⋯; 0 1 ⋯; ⋮ ⋮ ⋱ ]. Here, the state is also represented (matricized) into the matrix form ψ. Combining this with Eq. (<ref>), we obtain |Ψ⟩ = ∑^D_x_1,x_2,⋯,x_N tr{A_1^x_1ψ A_2^x_2ψ⋯ A_N^x_Nψ}|x_1⟩|x_2⟩⋯|x_N⟩= (𝒯_1 ⊗𝒯_2 ⊗⋯⊗𝒯_N ) |ψ⟩^⊗ N, where 𝒯_i=∑_x_i=1^D∑_j,j'=1^d(A_i^x_i)_j j'|x_i ⟩⟨ j,j'| represents a linear transformation acting on the two qudits (labeled by j and j') on-site i. The valence-bond picture is now evident: In this picture, the many-body state is not the primary entity. Instead, it is built upon something more fundamental—a network of qudits and “valence bonds.” The linear transformations 𝒯_i are then employed on top of it to form the tensor network. Note that this fundamental network |ψ⟩^⊗ N is a one-dimensional (periodic) quantum network consisting of maximally entangled states, making it remarkably suitable to be generalized to partially entangled states [Eq. (<ref>)]. This can be achieved by replacing ψ in Eq. (<ref>) by ψ=[ λ_1 0 ⋯; 0 λ_2 ⋯; ⋮ ⋮ ⋱ ]. The physical meaning of inserting such a partially entangled state is that since LO cannot increase the entanglement, the entanglement between neighboring sites will be upper bounded by the amount of entanglement in ψ. The valence-bond picture is not limited to MPS but can be generalized to arbitrary tensor networks. Indeed, suppose A_i at site i does not denote a set of matrices but a set of tensors, A_i^1,A_i^2,⋯,A_i^D, each having entries (A^x_i_i)_j j' j''⋯ j^(k) labeled by k subscripts j,j',j'',⋯,j^(k). Each subscript denotes a qudit on-site i. The site has k qudits in total, indicating that the corresponding node i has degree k (i.e., k incident links) in the QN. The linear transformation then becomes 𝒯_i=∑_x_i=1^D∑_j,j',⋯,j^(k)=1^d(A^x_i_i)_j j' ⋯ j^(k)|x_i ⟩⟨ j,j',⋯,j^(k)|, and the many-body state is expressed as |Ψ⟩ = (𝒯_1 ⊗𝒯_2 ⊗⋯⊗𝒯_N ) |QN⟩, where |QN⟩ represents the entire QN considered as a huge pure state (Fig. <ref>). §.§ Multipartite quantum networks So far, our attention has been restricted to bipartite entanglement. However, a complete QN framework should take multipartite entangled states into account. This is because multipartite entangled states have a specialized and unique role in certain quantum communication applications, such as secret sharing <cit.>.
Although multipartite entanglement has been widely explored, we still lack a unified, clear method to precisely detect, measure, and define it. For example, even with an entangled state of just three qubits, there exist two non-equivalent forms of genuine tripartite entanglement. The first is known as the GHZ class, characterized by five real parameters, α, β, γ, δ, and θ, and can be expressed as <cit.> |ϕ_GHZ⟩∝cosδ|000 ⟩+sinδ e^i θ(cosα|0 ⟩+sinα|1 ⟩)(cosβ|0 ⟩+sinβ|1 ⟩)(cosγ|0 ⟩+sinγ|1 ⟩). The second form, called the W class, has the general representation <cit.> |ϕ_W⟩ =√(a)|001 ⟩+√(b)|010 ⟩+√(c)|100 ⟩+√(d)|000 ⟩, with the real parameters a,b,c>0 and d=1-a-b-c≥ 0. Both the GHZ and W classes represent a level of correlation that goes beyond just pairwise interactions, meaning that a measurement on any single qubit among the three will instantaneously affect the outcomes of the other two. Despite this, states within one class cannot be converted to those in the other class using LOCC. As a result, we cannot directly compare the degree of entanglement of states belonging to different classes. This represents a fundamentally challenging quantum “three-body problem” that complicates the practical applications of multipartite entanglement. For example, a W state may perform better in certain applications, while in others, a GHZ state may be more effective. Note that states belonging to the W class are characterized by only three real d.o.f., whereas the GHZ class requires five. Hence, a generic tripartite state typically belongs to the GHZ class. Traditionally, each link in a network is also “bipartite,” connecting exactly two nodes. As a result, to study a QN consisting of multipartite entangled states, it is essential to go beyond “bipartite” network theory and consider multipartite entangled states as higher-order interactions <cit.>. These can be mathematically represented as “hyperedges” of hypergraphs <cit.>. Here, Fig. <ref> shows an example of a hypergraph-based QN consisting of hyperedges in the form of |ϕ_GHZ⟩ =cosδ|000⋯⟩+sinδ|111⋯⟩, which represents a special case of the GHZ class [Eq. (<ref>)]. Of course, this is only one specific form of multipartite entangled states, characterized by the sole parameter δ. Yet it illustrates the necessity of representing these as hypergraphs and studying them through higher-order network theories, such as higher-order percolation theories (see Section <ref> for a brief review). § PERCOLATION OF COMPLEX NETWORK Percolation theory, serving as a foundational model for investigating disordered systems <cit.>, is mainly concerned with understanding the geometric connectivity of random media. Constructing a percolation model is straightforward: Take, for example, a square lattice (or a lattice of any shape) in which each link is randomly either present with a probability p or absent with a probability 1-p. In a real-world application, one could consider the present links as electrical conductors and the absent ones as insulators <cit.>. The electrical current would then flow solely through the conductor links. When p is small, almost no paths exist that connect the lattice's two distant boundaries (e.g., the left and right boundaries in the square lattice). However, as p grows, various conduction pathways begin to emerge. A phase transition <cit.> is eventually triggered when p crosses a critical threshold, labeled as p_th, effectively changing the composite material from an insulating to a conducting state.
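This insulator-to-conductor transition is easy to observe numerically. The following is a minimal sketch of bond percolation on a square lattice (in Python, using the networkx package; the lattice size, the number of realizations, and the left-to-right crossing criterion are our illustrative choices, not taken from the works reviewed here):

import networkx as nx
import numpy as np

def crosses(L, p, rng):
    """One realization of bond percolation on an L x L square lattice.
    Returns True if an open path connects the left and right boundaries."""
    G = nx.grid_2d_graph(L, L)
    kept = [e for e in G.edges if rng.random() < p]   # keep each bond with probability p
    H = nx.Graph()
    H.add_nodes_from(G)
    H.add_edges_from(kept)
    comp = {n: i for i, c in enumerate(nx.connected_components(H)) for n in c}
    left = [(0, y) for y in range(L)]
    right = [(L - 1, y) for y in range(L)]
    return any(comp[a] == comp[b] for a in left for b in right)

rng = np.random.default_rng(0)
for p in [0.4, 0.5, 0.6]:
    freq = np.mean([crosses(40, p, rng) for _ in range(100)])
    print(p, freq)   # the crossing frequency rises sharply around p ~ 0.5

Near p = 1/2 the crossing frequency changes rapidly with p, a finite-size signature of the bond-percolation threshold of the two-dimensional square lattice.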
At this point, the probability of a path connecting the two distant boundaries becomes greater than zero. (This specific probability of connecting distant boundaries is termed the “sponge-crossing probability,” which we will discuss in more detail in Section <ref>). This phenomenon of a phase transition between two phases of different connectivity is prevalent in real-life scenarios. An illustrative example from biology is the spread of epidemics <cit.>. In its most basic manifestation, an epidemic commences with an infected individual who, with a probability denoted as p, can transmit the infection to their nearest neighbors over time until it propagates extensively. A comparable methodology can be applied to model forest fires, where the probability of a burning tree igniting its nearest neighbor tree in the subsequent time step replaces the infection probability <cit.>. Another notable application of this concept can be found in polymerization processes within chemistry, where the activation of bonds between small branched molecules leads to the formation of larger molecules <cit.>. This transformation is known as a gelation transition. An illustrative example of this gelation process can be observed when boiling eggs. Percolation theory has a wide range of other applications as well, spanning fields such as quantum systems <cit.>, materials science <cit.>, geophysics <cit.>, social dynamics <cit.>, and infrastructures <cit.>. §.§ Percolation of single-layer networks Percolation theory is closely associated with a wide range of concepts of critical phenomena, including scaling laws, fractals, self-organized criticality, and renormalization, holding significance across diverse statistical physics disciplines <cit.>. The traditional characterization of the phase transition in percolation hinges on the statistical properties of clusters near p_th. For p < p_th, only finite clusters exist. For p > p_th, a unique, infinite cluster emerges. A crucial parameter is P_∞, signifying the relative size of the infinite cluster, which exhibits a power law near p_th <cit.>: P_∞∼|p-p_th|^β. The parameter P_∞ serves as a measure of order within the percolation system and can be identified as the order parameter. If we exclude the infinite cluster (if it exists), then the rest of the finite clusters follow a distribution: n_s ∼ s^{-τ} e^{-s/s^*}. Here, s is the cluster size, and n_s is the number of clusters of size s. At criticality, the characteristic size s^* diverges: s^* ∼|p-p_th|^{-1/σ}. Consequently, the tail of the distribution n_s becomes a power law, n_s ∼ s^{-τ}. The mean cluster size, i.e., how large a finite cluster is on average, also diverges: ⟨ s⟩∼∑_s s^2 n_s ∼|p-p_th|^{-γ}, with the same exponent γ above and below p_th. Finally, the correlation length ξ, defined as the average distance between two sites on the same finite cluster, also diverges: ξ∼|p-p_th|^{-ν}, again, with the same exponent ν above and below p_th. These exponents, namely β, τ, σ, γ, and ν, encapsulate the critical behavior of key quantities associated with the percolation transition and are collectively referred to as the critical exponents. Notably, they satisfy the scaling relations: β=(τ-2)/σ and γ=(3-τ)/σ. It is worth emphasizing that these exponents exhibit universality, meaning they remain invariant irrespective of the specific structural attributes of the lattice (e.g., square or triangular) or the type of percolation (site or bond).
Instead, their values are solely determined by the dimensionality of the lattice. Besides, at the critical point, ξ and s^* also follow a relation, s^* ∼ξ^{d_f}. The exponent d_f is often called the fractal dimension <cit.>, characterizing the structure of the infinite cluster at the critical point. Assuming the dimension of the system is d, there is another relation between critical exponents, called the hyperscaling relation, d_f=d-β/ν. Thus, the fractal dimension of the infinite cluster at p_th is not a new independent exponent but depends on β, ν, and d. Finally, for complex network structures, similar critical exponents following Eqs. (<ref>)-(<ref>) can also be identified. For example, in scale-free networks <cit.>, which are characterized by a power-law distribution P(k)∼ k^{-λ} of their degree k, the values of critical exponents depend on the power-law exponent λ, as outlined in Table <ref>. As an essential process inherently associated with the notion of connectivity in networked systems, percolation has been generalized to models that go beyond undirected networks, with studies dedicated to directed networks <cit.>, temporal networks <cit.>, and, as we discuss in more detail in the next sections, networks of networks and hypergraphs. §.§ Percolation of networks of networks In many real-world systems, an individual network is one component within a much larger complex network of interdependent networks <cit.>. In interdependent networks, the failure of nodes in one network leads to the failure of dependent nodes in other networks, which may cause further damage to the first network, leading to cascading failures and possibly catastrophic consequences. In 2010, Buldyrev et al. studied the percolation of two fully interdependent networks subject to cascading failures based on a generating function formalism. They found a surprising first-order discontinuous phase transition, dramatically different from the second-order continuous phase transition in single-layer networks <cit.>, as shown in Fig. <ref>. Later, Parshani et al. studied two partially interdependent networks and found that the percolation transition changes from first to second order as the coupling strength decreases <cit.>. Considering a malicious attack, Huang et al. developed a mathematical framework for understanding the percolation of two interdependent networks under targeted attack, which was later extended to targeted attacks on partially interdependent networks <cit.>. Each node in one network may depend on multiple nodes in another network. Therefore, Shao et al. proposed a theoretical framework for understanding the percolation of interdependent networks with various support and dependence relationships <cit.>. The study of interdependence between networks also led researchers to realize that other types of interactions are important. One example closely related to interdependence is antagonistic interactions <cit.>. Here, for a node to be active, the antagonistic node in another network has to be inactive, as can happen if each pair of nodes competes for some limited resource. Considering that more than two networks may depend on one another, Gao et al. developed an analytical framework to study the percolation of a network formed by n interdependent networks <cit.>, which was later extended to the study of targeted attacks on high-degree nodes <cit.>. Baxter et al. studied the percolation of multiplex networks, which can be considered as the percolation of a tree-like network of networks, in Ref. <cit.>. Liu et al.
developed a theoretical framework based on generating functions and percolation theory to understand the percolation of interdependent directed networks <cit.>. In the past decade, we have witnessed fruitful results and discoveries related to the percolation of networks of networks <cit.>, as well as multilayer networks and interconnected networks. In general, the percolation of networks of networks extends that of single-layer networks. For example, when n interdependent Erdős–Rényi (ER) networks form a tree-like topology and have the same average degree k̅, i.e., k̅_i=k̅ (i=1,2,...,n), the giant connected component in each layer, P_∞, as a function of k̅, p, and n follows <cit.>: P_∞=p[1-exp(-k̅ P_∞)]^n. Note that for Eq. (<ref>), the particular case n=1 is the known ER second-order percolation law for a single-layer network <cit.>. When n≥ 2, the system shows a first-order phase transition. Using the generating function, we obtain that p_th and P_∞|_{p→ p_th^+} satisfy p_th = -(w/k̅)[1+1/(nw)]^{n-1} and P_∞|_{p→ p_th^+} = -(w+1/n)/k̅, where w is given by w = W_-(-(1/n)exp(-1/n)), and W_-(x) is the smaller of the two real roots of the Lambert equation exp(W_-)W_-=x. For n=1 we obtain the known ER results p_th=1/k̅ and P_∞|_{p→ p_th^+}=0 at p=p_th. Substituting n=2 in Eqs. (<ref>) and (<ref>), we obtain the exact results derived by Buldyrev et al. <cit.>. §.§ Percolation of hypergraphs Hypergraphs generalize graphs by allowing interactions, the hyperedges, to connect an arbitrary number of vertices <cit.>. Hypergraphs, and, to a certain extent, simplicial complexes, offer more flexibility to model interacting systems, and they have become popular models of many real-world networks over recent years <cit.>. For example, more than two molecules can participate in some reactions <cit.>, and group interactions also frequently occur in collaborations on scientific papers <cit.>. It has been shown that higher-order interactions may significantly change the physical properties of dynamical processes from those on ordinary networks with only pairwise connections <cit.>. However, there are only a few works exploring the robustness or the percolation of hypergraphs <cit.>. Specifically, Coutinho et al. introduced two generalizations of core percolation to hypergraphs, and offered analytical solutions to certain types of random hypergraphs accordingly <cit.>. Sun and Bianconi later proposed a general framework for assessing hypergraph robustness, and further characterized the critical properties of simple and higher-order percolation processes <cit.>. Sun et al. also considered a paradigmatic type of higher-order interactions, triadic interactions, where a node regulates the interaction between two other nodes, and provided a general theory, accurately predicting the full phase diagram on random graphs <cit.>. More recently, Bianconi and Dorogovtsev have further developed a theory for hyperedge and node percolation on hypergraphs, and showed that, in contrast to ordinary networks, the node and hyperedge percolation problems for hypergraphs strongly differ from each other <cit.>. § CLASSICAL PERCOLATION IN QUANTUM NETWORKS Why is percolation theory useful in the study of QN? The roots of this interest can be traced back to a 2007 paper <cit.>. In this seminal work, the authors first proposed a mapping between percolation theory and a particular entanglement transmission scheme, which they discovered and accordingly termed the classical entanglement percolation (CEP) scheme.
Within this context, an entanglement transmission scheme refers to a (possibly infinite) series of quantum communication protocols that may be applied collectively to a QN for distributing entanglement between two nodes. This pioneering discovery has opened up a new approach to studying QN from a statistical physics perspective, with a focus on understanding the large-scale, collective characteristics of the entanglement transmission task and how they are influenced by the topology of the QN. §.§ Classical entanglement percolation (CEP) As previously noted, LOCC cannot increase the average entanglement. However, this does not mean that one cannot use LOCC as a form of “gambling”—to enhance the entanglement with a certain probability p, even though it might reduce the entanglement with probability 1-p. This principle forms the foundation of the CEP scheme. To be specific, the CEP scheme involves two steps <cit.>. First, we “gamble” to enhance the entanglement of each link, aiming to obtain a singlet (maximally entangled state) with a probability of p. The optimal probability for this is referred to as the singlet conversion probability, given by p=2sin^2θ. Second, if all links along a path connecting the source (s) and target (t) have been converted to singlets, then a specific protocol known as entanglement swapping can be applied. This protocol converts every two singlet links sharing a common node (Relay, R) into a single singlet linking the two end nodes. For example, if there is a singlet between Alice and the Relay, and another between the Relay and Bob, the entanglement swapping protocol can merge the two into one singlet between Alice and Bob. By applying this protocol recursively along the singlet path connecting s and t, we arrive at a final singlet between s and t, fulfilling the transmission task. Equipped with these concepts, the mapping between CEP and (classical) percolation theory is straightforward. The singlet conversion probability p=2sin^2θ plays the role of the probability that a link is present (and 1-p that it is absent). The CEP scheme succeeds if s and t are connected after the random percolation process is applied. Furthermore, this connection implies a nontrivial critical threshold for the CEP scheme on infinitely large QN. Specifically, when 2sin^2θ falls below the percolation threshold p_th, s and t are almost certainly disconnected if they are infinitely apart. Hence, p_th, which solely depends on the network topology, serves as a metric of the overall capacity of the QN in the context of CEP. The CEP scheme represents a great simplification of the QN entanglement transmission task to a pure percolation problem. Nevertheless, the CEP is not necessarily the optimal scheme. Indeed, even when 2sin^2θ≤ p_th, there might still be other schemes that can fulfill the transmission task, as we will explore in the following sections. §.§ Quantum entanglement percolation (QEP) It is expensive to obtain a singlet from a partially entangled state given its “gambling” nature. Even worse, the swapping protocol spends all the singlets along a path and converts them into just one singlet. This process leads to a waste of singlets and causes the inefficiency of the CEP scheme. Naturally, this leads to the question: Is it necessary to convert every link into a singlet?
As we will see, the answer is negative, paving the way for the QEP scheme <cit.>. The QEP scheme is based on the discovery that given two partially entangled states between three parties, Alice–Relay–Bob, there exists a LOCC protocol that can yield a higher probability of obtaining a singlet between Alice and Bob. This probability is higher than that of obtaining two singlets (Alice–Relay, Relay–Bob) individually and then applying a swapping protocol. Indeed, the optimal probability is found to be min{2sin^2θ_AR,2sin^2θ_RB} <cit.>, outperforming the probability (2sin^2θ_AR)(2sin^2θ_RB) achieved by CEP. What about three partially entangled states between four parties (Alice–Relay1–Relay2–Bob)? Unfortunately, the optimal conversion probability does not intuitively simplify to min{2sin^2θ_AR_1,2sin^2θ_R_1R_2, 2sin^2θ_R_2B}, but takes on a much more complicated form. This prevents us from generalizing the optimal result to larger scales. For readers who wish to delve deeper, further details can be found in Ref. <cit.>. Even though we cannot generalize the optimal result, we can still extend the improvement by bypassing one relay at every other step. This gives rise to the QEP scheme, which avoids the need to create singlets for every link, bypassing (half of) the Relays. But this approach is not without its trade-offs, especially on a large scale. Since the Relays are bypassed, the QN misses out on the potential connectivity to other paths through the Relays. Thus, it still remains a question whether QEP can achieve a lower critical threshold than CEP, fulfilling the entanglement transmission task at infinite scales. The authors of the 2007 paper <cit.> demonstrated that this is indeed achievable for specific topologies, such as a “double”-honeycomb topology, where there are two links between every two adjacent nodes on a hexagonal network. The QEP scheme is equivalent to adding a preprocessing step, modifying the network into a triangular structure and thereby reducing the percolation threshold. Note that despite being referred to as QEP, the mapping of the scheme is still aligned with classical percolation theory from a statistical physics point of view. The quantum aspect of this process is confined primarily to the preprocessing step, which is executed only at the local scale. Additionally, the QEP does not yield the optimal result either <cit.>, which leaves open the question of whether a more effective entanglement transmission scheme might exist. It would be intriguing if this new scheme were guided by a fundamentally different statistical physics theory distinct from classical percolation. We will show that such a theory does exist. § CONCURRENCE PERCOLATION IN QUANTUM NETWORKS §.§ No need to establish singlets The mapping of CEP/QEP to classical percolation essentially rests on the necessity that two nodes must be connected by at least one path of singlets. However, we have seen that in the QEP scheme, it is not mandatory for all links along the path to be converted to singlets. By bypassing some of the links, a more efficient scheme might be realized. This observation leads us to a natural question: why not give up establishing singlets altogether? In other words, we would bypass not just some, but all links, resulting in a final state between the source s and target t that remains only partially, rather than maximally, entangled. This scenario is, of course, attainable, considering that one can always “downgrade” a singlet to a partially entangled state with no cost <cit.>.
What we truly seek, however, is a trade-off where we can achieve a much higher probability of obtaining such a partially entangled state instead of a singlet. By carefully weighing the compromise (having only partial entanglement) against the benefit (a significantly higher conversion probability), we might discover a more advantageous scheme for entanglement transmission overall. This revised approach challenges the conventional thinking in terms of classical percolation and could open up new opportunities for developing schemes on QN. §.§ Deterministic entanglement transmission (DET) Based on the above ideas, a new scheme named the deterministic entanglement transmission (DET) scheme is introduced <cit.>. The DET approaches the entanglement transmission task from a completely distinct perspective: the scheme demands that the conversion probability throughout the process always equals one. In other words, rather than “gambling” to increase the links' entanglement, we operate directly on partially entangled states in a deterministic fashion. The aim of DET is to maximize the final (partial) entanglement under the constraint of determinacy, contrasting with CEP/QEP's objective of always acquiring a singlet with high (but not unit) probability. The DET involves two quantum communication protocols: The first is an extension of the swapping protocol <cit.>. However, here the swapping protocol operates directly on partially entangled states. It can be shown that given entanglements θ_AR and θ_RB in the A–R–B configuration, one can tune the swapping protocol such that it deterministically yields a final state between A and B, having a new entanglement θ_AB that satisfies sin 2θ_AB = sin 2θ_AR sin 2θ_RB <cit.>. The second protocol is the entanglement concentration protocol <cit.>. This protocol takes two links between A and B (with entanglement θ_1, θ_2, respectively) as input. At the expense of the two links, a new link that has a higher entanglement θ is produced between A and B, where cosθ = cosθ_1 cosθ_2 or √(1/2), whichever is larger. The DET scheme is founded on the generalization of these two protocols to global scales. This is possible since the swapping and concentration protocols are fully equivalent to the series/parallel rules, respectively, as often employed in circuit network analysis <cit.>. Therefore, the DET scheme becomes applicable if the network topology between the source s and target t is series-parallel <cit.>. In other words, the network can be fully reduced to a link between s and t using only series and parallel reductions. Note that there are already fast network algorithms designed specifically for detecting series-parallel networks and determining their feasible reductions, which will be explored further in the next section. One of the most intriguing characteristics of the DET scheme is that, when applied to infinitely large series-parallel networks, a threshold similar to the CEP threshold is observed <cit.>, below which the DET can never produce nonzero entanglement between s and t. This threshold, however, is lower than the CEP threshold, demonstrating a “quantum advantage” over the classical scheme. The existence of such a threshold suggests that the DET may be globally governed by a statistical physics theory.
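Both protocols translate into simple algebraic update rules for the link entanglement, which makes small series-parallel reductions easy to carry out numerically. The following is a minimal sketch (in Python; the helper names and the two-path example topology are our illustrative choices):

import numpy as np

def det_series(theta1, theta2):
    """Swapping (series) rule: sin(2*theta) = sin(2*theta1) * sin(2*theta2)."""
    return 0.5 * np.arcsin(np.sin(2 * theta1) * np.sin(2 * theta2))

def det_parallel(theta1, theta2):
    """Concentration (parallel) rule: cos(theta) = max(sqrt(1/2), cos(theta1)*cos(theta2))."""
    return np.arccos(max(np.sqrt(0.5), np.cos(theta1) * np.cos(theta2)))

# Example: two parallel two-link paths between s and t, each link with theta = 0.8 * pi/4.
theta = 0.8 * np.pi / 4
path = det_series(theta, theta)      # reduce each two-link path to one effective link
final = det_parallel(path, path)     # combine the two paths in parallel
print(final / (np.pi / 4))           # final deterministic entanglement, in units of pi/4

Applying the two rules repeatedly in this way reduces any series-parallel topology between s and t to a single effective link, whose weight θ is the entanglement that DET delivers deterministically.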
In subsequent sections, we will explore this statistical theory in detail. §.§ Concurrence percolation theory To establish the statistical theory, recall that in classical percolation theory, given a regular lattice, one can define a “sponge-crossing” quantity, P_SC, as the probability that there is an open path connecting two far-apart boundaries <cit.>. When the lattice becomes infinitely large (the number of nodes N→∞), it is known that a minimum value of p exists, below which P_SC becomes zero in the thermodynamic limit: p_th = inf{p ∈ [0, 1] | lim_{N →∞} P_SC > 0}. This minimum value coincides with the traditionally defined percolation threshold p_th in Section <ref>, which is based on the size of the infinite cluster. As a result, Eq. (<ref>) offers an alternative definition of p_th. In the special case of two-dimensional square lattices, Kesten proved that the “sponge-crossing” probability P_SC of connecting the left and right boundaries remains strictly zero until p > 1/2. Thus, p_th=1/2 <cit.>. Moreover, the existence of such a critical threshold is not limited to regular lattices but can also be observed for complex network topologies. All we need to do is to generalize P_SC from being defined between two apparent boundaries to two arbitrary sets of nodes, denoted S and T. It is reasonable to believe that there still exists a nontrivial threshold p_th for this generalized P_SC, as long as the minimum length of all paths connecting S and T increases with the network size N. We contract the two sets S and T into two “mega” nodes, which amounts to erasing the internal network topologies of S and T, and then calculate the “sponge-crossing” probability P_SC between them. This provides us with a definitive way of calculating P_SC for arbitrary network topology and inferring p_th from Eq. (<ref>). How to derive p_th? First, we need to know how P_SC manifests as a function of p. The exact expression of P_SC can be calculated by basic addition and multiplication rules of probability measures <cit.>. In general, P_SC takes the form of a ratio of two large polynomials in p (i.e., a rational function of p), which quickly becomes complex when the number of links becomes large. Nevertheless, when the network topology between S and T is series-parallel, then P_SC can be decomposed into the iteration of two connectivity rules, namely, the series and parallel rules (Table <ref>). These rules give the probability that at least one path connects the two ends. When the network topology between S and T is not series-parallel (such as a bridge circuit <cit.>), higher-order connectivity rules are needed. Unlike the series/parallel rules, these higher-order rules do not follow a general form. That being said, there is a way to approximate (see Section <ref> for a detailed discussion about the nature of this approximation) these higher-order rules using only the series/parallel rules. This technique is known as the star-mesh transform, originating from the study of circuit analysis. We will include more details on this technique in the Algorithms section (Section <ref>). It becomes clear that the series/parallel rules play a very special role in defining percolation connectivity. Given that the DET scheme is also founded on series/parallel rules, a natural and intriguing question arises: can we define a new statistical theory in reverse, starting directly from the exact forms of the series/parallel rules?
Such a definition may not be complete, since we do not know the exact forms for higher-order rules (which may not even have a closed mathematical form). Yet, using the star-mesh transform technique, it may be possible to approximate these higher-order rules using the series/parallel rules that we already know. This is the motivation for concurrence percolation theory. Recall that the DET series/parallel rules are given by sin 2θ = sin 2θ_1 sin 2θ_2 and cosθ = max{√(1/2), cosθ_1 cosθ_2}, respectively. Under a change of variable c≡sin 2θ, the rules can be rewritten in the form presented in Table <ref>. Note that the new series rule in terms of c bears the same nominal form as the classical series rule in terms of p. This variable c, indeed, has a physical meaning, known as the concurrence, a specific measure of bipartite entanglement <cit.>. This inspires the new theory to be termed “concurrence percolation.” In comparison to classical percolation, where the variable of interest is the probability p, in concurrence percolation the appropriate variable of interest is c. After fixing all connectivity rules (series + parallel + star-mesh), an analogous quantity, C_SC, referred to as the sponge-crossing concurrence, can be defined by combining all paths in terms of this new weight c <cit.>. It is believed that a nontrivial threshold on c also exists: c_th = inf{c ∈ [0, 1] | lim_{N →∞} C_SC > 0}, such that c_th is the minimum value of the concurrence c per link, below which C_SC becomes zero when S and T become infinitely distant. §.§ Results DET outperforms CEP.—Utilizing the framework of concurrence percolation, we can derive an essential and powerful result: the DET scheme always outperforms the CEP scheme on any series-parallel QN. To rigorously demonstrate this comparative superiority, we rewrite both the classical and concurrence series/parallel rules in terms of the entanglement variable θ (Table <ref>). These rules correspond to the entanglement transmission rules for CEP and DET, respectively (as illustrated in Fig. <ref>). Now, for the series rule, we have c^2 = ∏_i (sin 2θ_i)^2 = ∏_i (2 sin^2 θ_i)(2-2 sin^2 θ_i) ≥ [∏_i (2 sin^2 θ_i)][2-∏_i (2 sin^2 θ_i)] = p(2-p), where the inequality is supported by the subadditivity of f(x)=ln(2-e^{-x}), namely, f(x_1+x_2+⋯)≤ f(x_1)+f(x_2)+⋯ for x=-ln(2sin^2θ) ≥0. This leads to 1-c^2 ≤ 1-p(2-p)=(1-p)^2. This final inequality underscores that the θ obtained from the CEP series rule (under a change of variable p=2sin^2θ) is never greater than the θ obtained from the DET series rule (under a change of variable c=sin 2θ). For the parallel rule, similarly we have 1/2+(1/2)√(1-c^2) = ∏_i cos^2θ_i = ∏_i [1/2+(1-2sin^2θ_i)/2] ≤ 1/2+(1/2)∏_i(1-2sin^2θ_i) = 1-p/2, where the inequality is supported by the subadditivity of f(x)=-ln(1/2+e^{-x}/2) for x=-ln(1-2sin^2θ) ≥0. This further leads to √(1-c^2)/2 ≤ (1-p/2)-1/2 = (1-p)/2, which, again, underscores that the θ obtained from the CEP parallel rule is never greater than the θ obtained from the DET parallel rule. Together, it can be established that the DET rules consistently yield superior results to those of the CEP rules, both in series and parallel configurations. This underlines the potential of DET as a valuable tool in the ongoing development of large-scale QN and adds a new dimension to our understanding of quantum connectivity.
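The two inequalities above are also easy to check numerically for arbitrary link entanglements. A minimal sketch (in Python; the random sampling of θ values is our illustrative choice):

import numpy as np

rng = np.random.default_rng(1)

def cep_series(thetas):
    """CEP series rule: multiply singlet conversion probabilities p_i = 2 sin^2(theta_i)."""
    return np.prod(2 * np.sin(thetas) ** 2)

def cep_parallel(thetas):
    """CEP parallel rule: at least one of the parallel links succeeds."""
    return 1 - np.prod(1 - 2 * np.sin(thetas) ** 2)

def det_series(thetas):
    """Concurrence (DET) series rule: multiply concurrences c_i = sin(2 theta_i)."""
    return np.prod(np.sin(2 * thetas))

def det_parallel(thetas):
    """Concurrence (DET) parallel rule: (1+sqrt(1-c^2))/2 = max(prod cos^2(theta_i), 1/2)."""
    f = max(np.prod(np.cos(thetas) ** 2), 0.5)
    return 2 * np.sqrt(f - f ** 2)

for _ in range(1000):
    thetas = rng.uniform(0, np.pi / 4, size=5)
    # theta recovered from p (CEP) never exceeds theta recovered from c (DET)
    assert np.arcsin(np.sqrt(cep_series(thetas) / 2)) <= np.arcsin(det_series(thetas)) / 2 + 1e-12
    assert np.arcsin(np.sqrt(cep_parallel(thetas) / 2)) <= np.arcsin(det_parallel(thetas)) / 2 + 1e-12
print("DET dominates CEP for both the series and the parallel rule")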
Concurrence percolation threshold.—In infinite-size QNs, both a classical percolation threshold p_th and a concurrence percolation threshold c_th exist. This leads us to the second insightful finding within the realm of concurrence percolation: the prediction of a lower threshold compared to what was known from earlier classical-percolation-theory-based schemes, including CEP and its variants (such as QEP). What makes this result particularly interesting is that the improvement exists across various network topologies. Table <ref> shows these findings, detailing tests conducted on different network topologies, including the Bethe lattice (Fig. <ref>) as well as other regular lattices such as the square, honeycomb, and triangular lattices (Fig. <ref>). This consistency across multiple configurations underscores the robustness of the concurrence percolation method, demonstrating its potential to redefine our understanding of entanglement transmission within large-scale QNs. On series-parallel QN, this predicted concurrence percolation threshold is readily achievable using the DET scheme. On general network topologies, however, it is unknown if the higher-order connectivity rules produced by the star-mesh transform are realizable by LOCC. They are only approximations of the true LOCC-allowed rules. The study of the higher-order rules of concurrence percolation remains a difficult task that could be handled by multipartite strategies <cit.>, QN routing <cit.>, or QN coding <cit.>. Saturation.—Concurrence percolation also differs from classical percolation in the existence of a saturation point c_sat. Whenever c≥ c_sat, the sponge-crossing concurrence C_SC consistently equals one [Fig. <ref>]. For example, basic calculations show that the exact value of the saturation point for a Bethe lattice of degree k is given by <cit.> c_sat=√((1/2)^{1/k}-(1/4)^{1/k})/√((1/2)^{(k-1)/k}-(1/4)^{(k-1)/k}). In contrast, in classical percolation, P_SC equals one if and only if p=1. This phenomenon originates from the anomaly of the parallel rule (Table <ref>) not being a smooth function. The presence of this saturation point unveils a new “quantum advantage” originating only from concurrence percolation: one can deterministically establish a singlet as long as the entanglement in each link surpasses the saturation point. This advantage stands in contrast to schemes based on classical percolation, where a singlet can only be established with certainty if each link is perfectly entangled. Critical exponents.—Lastly, similar to classical percolation, concurrence percolation also shows critical phenomena, marked by a set of dependent or independent critical exponents <cit.>. However, it is important to note that concurrence percolation is defined based on connectivity rules (Table <ref>), not clusters. As a result, one cannot simply deduce a traditional cluster-based order parameter from the variable c used in these rules. In fact, an effective cluster could be defined using c, c^2, or any other power of c. Altering the definition in this way essentially results in a variable change in the connectivity rules but does not change the underlying physics. In the absence of a cluster definition, the sole critical exponent that can be determined is the dynamic thermal exponent, ν_dyn=z ν. This exponent is tied solely to length dimensions, reflecting how the system's correlation length diverges as c approaches c_th. Note that the length in this context refers to chemical length, not the conventional Euclidean length.
The two length scales are related by the dynamic critical exponent z <cit.>. This is why zν, rather than ν, is used in this case. Importantly, the dynamic thermal exponent zν can be retrieved from finite-size analysis <cit.>. Here, the idea is that the correlation length can be replaced by the system's finite length scale l when |c-c_th|→ 0. Therefore, near the critical threshold, all dependence on ξ can be deduced using l. The finite-size analysis results for both classical and concurrence percolation for the Bethe lattice are shown in Fig. <ref>. For concurrence percolation, it is found that C_SC follows a power law with an exponential cutoff as a function of the number of layers l, C_SC ∼ l^{-1/2} exp(-l/l^*). Here, l^* diverges as a power law as c approaches c_th [Fig. <ref>]. The numerical value zν=1.082(95) is obtained by fitting near |c-c_th|∼10^{-5} [Fig. <ref>]. Alternatively, zν can also be determined by looking at the finite-size critical threshold c_th(l), which is defined as the turning point of C_SC, c_th(l)=c|_{∂^2C_SC/∂c^2=0}, which deviates from c_th as a power law with respect to l [Fig. <ref>]. Again, the numerical value 1/(zν)=0.99(5) is obtained near l∼10^4 [Fig. <ref>]. For general k, different c_th and c_sat are also presented [Fig. <ref>], revealing two universal (i.e., independent of k) power laws of C_SC near c→ c_th and c→ c_sat, respectively, supported by numerical results (dots) on a finite Bethe lattice of l=500. In particular, the power-law relation C_SC∼|c-c_th|^{1/2} is reminiscent of the critical exponent β in classical percolation, which follows P_∞∼|p-p_th|^β [Eq. (<ref>)]. Yet, as previously discussed, C_SC cannot be uniquely equated to a “cluster-based” order parameter. Thus, it would be premature to assert that β=1/2 without accounting for certain nuances. Note that the above results also have their counterparts in classical percolation [Figs. <ref>-<ref>] except for the saturation point c_sat. It is found that the critical exponent zν is the same for both classical and concurrence percolation theories on the Bethe lattice. It thus remains unknown whether the classical and concurrence percolation theories belong to the same universality class or not. In conclusion, the concept of concurrence percolation establishes a new theory that governs the behavior of entanglement transmission across large-scale QN. This novel theory brings in several unique characteristics that distinguish it from classical percolation, thereby providing a refreshing and rich perspective on QN. We believe that the theoretical framework set by concurrence percolation may also open doors to practical applications, such as more efficient entanglement transmission schemes or novel protocols for quantum communication and computation. In essence, concurrence percolation not only enriches our comprehension of the inherent complexity of QN but also signifies a leap towards a more refined and versatile understanding of the statistical physics that governs QN. § ALGORITHMS This section is dedicated to a comprehensive exploration of the fundamental algorithms that have played a pivotal role in our investigation of concurrence percolation theory.
Not only are these algorithms instrumental in the analysis and understanding of QN, but they also serve as essential tools for modeling and simulating the complex behaviors within the network. §.§ Identification of series-parallel networks Series-parallel networks were introduced by Duffin <cit.> as a mathematical model of electrical networks, and a general version was introduced later by Lawler <cit.> and Monma and Sidney <cit.> as a model for scheduling problems. The classification of a network as series-parallel depends on the choice of two specific nodes of interest <cit.>. Given two source and target nodes S and T, the network topology can be grouped into different categories (Fig. <ref>). All topologies between S and T given in Figs. <ref>–<ref> are considered series-parallel, except Fig. <ref>, which contains an additional “bridge” link compared with Fig. <ref>. Importantly, many realistic complex networks can be approximated as series-parallel. This is because, in infinite-dimensional systems, cycles can typically be ignored through the Bethe approximation <cit.>. It is known that when a “decomposition tree” (Fig. <ref>) for a series-parallel graph is given, many problems, including those that are NP-hard for arbitrary graphs <cit.>, can be solved in linear time. While series-parallel networks continue to play an important role in various applications, they have been extensively studied in their own right as well as in relation to other optimization problems (cf. <cit.>). We also refer to <cit.> for more results on series-parallel graphs. Series-parallel networks enjoy nice algorithmic properties. There is a fast algorithm that determines whether any given network is series-parallel and, if it is, also returns the decomposition tree suitable for use in the following applications <cit.>. Following this work, researchers have further developed parallelized algorithms to recognize the important class of series-parallel networks <cit.>. §.§ Star-mesh transform The star-mesh transform (also known as the Kron reduction <cit.>) was originally developed as a circuit analysis technique for calculating the effective resistance in resistor networks. The star-mesh transform replaces a local star network topology by a mesh topology (a complete graph). Importantly, the equivalent resistance between each pair of nodes remains consistent before and after the transformation. Here, we generalize this idea to offer an approximation method for percolation on networks. This approach bears similarity to the real-space renormalization group (RG) methods used in percolation theory. However, the star-mesh transform is more versatile, applicable to various types of networks beyond regular lattices. A star-mesh transform <cit.> can be built upon only series and parallel rules (but not higher-order rules) to map a star graph onto a complete graph among its leaf vertices, establishing a local equivalence (in terms of connectivity) between the two graphs. Mathematically, we denote by 𝒢(N) a star graph with one root vertex and N leaf vertices, where the weights of the N edges are θ_1 to θ_N. The N-vertex complete graph transformed from 𝒢(N) is denoted 𝒢'(N), and the weights of its N(N-1)/2 edges are (θ_12,θ_13,⋯,θ_1 N,⋯,θ_N-1,N).
The equivalence between 𝒢(N) and 𝒢'(N) is expressed by N(N-1)/2 independent equations: seri(θ_1,θ_2) = c(1,2;𝒢'(N)), seri(θ_1,θ_3) = c(1,3;𝒢'(N)), ⋯, seri(θ_1,θ_N) = c(1,N;𝒢'(N)), ⋯, seri(θ_N-1,θ_N) = c(N-1,N;𝒢'(N)), where seri(θ_i,θ_j) is the series-sum of θ_i and θ_j based on the series rule, and c(i,j;𝒢'(N)) is the net weight between vertices i and j of the complete graph 𝒢'(N). To calculate the sponge-crossing percolation between the source S and target T in a given network, we approximate the equivalent simplified network by consecutively applying the star-mesh transform, removing one node per application. Specifically, we arbitrarily choose a vertex of 𝒢'(N) (w.l.o.g. the last one, N) to be the new root of a sub-star-graph of 𝒢'(N) constructed from the N-1 edges that connect this root to the other N-1 vertices. Next, we transform this sub-star-graph (sub𝒢')(N-1) into an (N-1)-complete graph, denoted by (sub𝒢')'(N-1), and combine it with the untransformed remainder, 𝒢'(N)∖(sub𝒢')(N-1). The combined graph, denoted Comb(𝒢_α,𝒢_β), is obtained by setting each edge weight to θ_ij=para(α_ij,β_ij), the parallel-sum of α_ij∈𝒢_α and β_ij∈𝒢_β based on the parallel rule. Note that, because concurrence percolation lacks a closed-form solution, we use Broyden's root-finding algorithm to solve numerically for the N(N-1)/2 weights θ_ij that satisfy Eq. (<ref>). In short, we can calculate c(i,j;𝒢'(N)) by first solving an (N-1)-complete graph, c(i,j;𝒢'(N)) = c(i,j;Comb((sub𝒢')'(N-1), 𝒢'(N)∖(sub𝒢')(N-1))). By applying Eq. (<ref>) one after another to all but the boundary nodes, the network is eventually reduced to two nodes and a single link between them (Table <ref>), whose final weight θ approximates the percolation behaviour of the initial network. For demonstrative purposes, here we show how the star-mesh transform works for both classical percolation (Fig. <ref>) and concurrence percolation (Fig. <ref>) on a small square lattice. Since c(i,j;𝒢'(N)) is calculable through recursions and Eq. (<ref>) involves an (N-1)-level star-mesh transform, the entire procedure is a double recursion whose cost grows faster than exponentially. In practice, by carrying out the recursive computation with symbolic expressions in Mathematica and other numerical techniques, the solutions can be found within a sufficiently small error range <cit.>. Note that the star-mesh transform is an approximation rather than an exact representation of higher-order connectivity rules. To see this, consider the example of classical percolation given in Fig. <ref>. Under the change of variable p ≡ 2sin^2θ, the actual higher-order connectivity, i.e., the probability that at least one path connects nodes 1 and 6, can be expressed as p_34[1-(1-p_35p_56)(1-p_46)][1-(1-p_12p_24)(1-p_13)] + (1-p_34)[1-(1-p_13p_35p_56)(1-p_12p_24p_46)] ≈ 0.0799, where p_ij ≡ 2sin^2θ_ij ≈ 0.304 represents the singlet conversion probability for each link i↔j. The final probability (≈0.0799) translates to a final entanglement θ ≈ 0.256 π/4, which is very close to the star-mesh approximation result of θ ≈ 0.25 π/4 (Fig. <ref>). The closeness of these values supports our confidence that the star-mesh transform offers a reasonably accurate approximation. Also note that the sequence in which a network is reduced by star-mesh transforms is not unique. For example, in the fourth step of Fig. <ref>, one could instead transform the star graph (3 ↔ 1, 3 ↔ 4, 3 ↔ 6). Different reduction sequences might lead to slightly different approximate results, but they tend to stay close to each other and to the exact value <cit.>.
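The agreement quoted above is easy to verify numerically. The following minimal Python sketch (ours, not part of any published code) recomputes the exact higher-order connectivity between nodes 1 and 6 by conditioning on the state of the "bridge" link 3↔4, and converts the result back to an entanglement angle; the uniform value p_ij ≈ 0.304 is taken from the example above, while the helper names are merely illustrative:

import numpy as np

p = 0.304                                    # p_ij = 2 sin^2(theta_ij), as quoted above

def series(*ps):                             # classical series rule: every link must connect
    return float(np.prod(ps))

def parallel(*ps):                           # classical parallel rule: at least one connects
    return 1.0 - float(np.prod([1.0 - q for q in ps]))

# condition on the state of the bridge link 3<->4, as in the expression above
p_bridge_on  = parallel(series(p, p), p) * parallel(series(p, p), p)
p_bridge_off = parallel(series(p, p, p), series(p, p, p))
P16 = p * p_bridge_on + (1.0 - p) * p_bridge_off
theta = np.arcsin(np.sqrt(P16 / 2.0))        # invert P = 2 sin^2(theta)
# prints roughly 0.0798 and 0.256, consistent with the ~0.0799 and 0.256 pi/4 quoted above
# (the small offset comes from p itself being a rounded value)
print(round(P16, 4), round(theta / (np.pi / 4), 3))
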
§.§ Fast numerical approximation for concurrence percolation The heuristic approximation (star-mesh transform) used for the higher-order connectivity rules can be quite demanding in terms of computational resources. To address this challenge, in this section we discuss a more efficient approach to calculate the sponge-crossing concurrence C_SC <cit.>. This acceleration is achieved through two key simplifying approximations: the parallel approximation and the S_m approximation (Fig. <ref>). §.§.§ Parallel approximation First, we introduce the parallel approximation: we treat all paths connecting the nodes of interest as parallel, i.e., as if they had no overlap. For an arbitrary network with N nodes and uniform concurrence c per link, the parallel approximation C_SC' of the true sponge-crossing concurrence C_SC between two sets of nodes, S and T, is given by (1 + √(1 - C_SC'^2))/2 = max{∏_l ((1 + √(1 - c^(2l)))/2)^(N_l), 1/2}, where N_l is the total number of self-avoiding paths of length l that connect the source/target nodes s and t for all s∈S and t∈T, respectively. Equation (<ref>) is the mathematical statement of the parallel approximation, indicating that we take each of the N_l paths to be parallel and non-overlapping (Fig. <ref>). Now, we show that on series-parallel networks <cit.> the concurrence calculated under the parallel approximation forms an upper bound to the true concurrence. First, we consider the case where the network is essentially parallel, i.e., it can be expressed as the parallel combination of k subnetworks, each with concurrence c_i. In this case, the parallel approximation is exact: C_SC' = C_SC = para(c_1, c_2, …, c_k). The more interesting case is that of an essentially series network, i.e., a network that can be decomposed as a combination of subnetworks in series. We consider an exemplary network that splits into k branches with concurrences c_p_1, …, c_p_k; the concurrence of the segment before the branching is c_s. Following the series and parallel rules (Table <ref>), the sponge-crossing concurrence from the left of this network segment to the right is C_SC = c_s · 2√(f(c_p_1,…,c_p_k) - f(c_p_1,…,c_p_k)^2) if f(c_p_1,…,c_p_k) > 1/2, and C_SC = c_s if f(c_p_1,…,c_p_k) ≤ 1/2, where f(c_p_1,…,c_p_k) = ∏_i=1^k g(c_p_i) = ∏_i=1^k (1 + √(1 - c_p_i^2))/2. Under the parallel approximation, the network is transformed such that the concurrence of the segment is given by C_SC' = 2√(f(c_s c_p_1,…,c_s c_p_k) - f(c_s c_p_1,…,c_s c_p_k)^2) if f(c_s c_p_1,…,c_s c_p_k) > 1/2, and C_SC' = 1 if f(c_s c_p_1,…,c_s c_p_k) ≤ 1/2. Since c_s c_p_i ≤ c_p_i, it follows that g(c_s c_p_i) ≥ g(c_p_i) and thus f(c_s c_p_1,…,c_s c_p_k) ≥ f(c_p_1,…,c_p_k). After some calculations <cit.> one can show that C_SC' ≥ C_SC. Taken together, since every series-parallel network can be decomposed into essentially series or parallel configurations, we have shown that C_SC' is an upper bound for C_SC on series-parallel networks. Interestingly, as we will see, this upper bound seemingly becomes tighter as the network becomes larger. We hence expect that a new concurrence threshold on C_SC' can emerge, which should numerically approach the true c_th from below and match c_th in the thermodynamic limit N→∞.
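Because the parallel approximation only requires the path-length census {N_l}, it reduces to a simple product. The sketch below (ours, not published code) inverts the defining relation above for C_SC', given a dictionary of path counts and a uniform link concurrence c; the toy path census in the example is arbitrary:

import numpy as np

def parallel_approximation(path_counts, c):
    """C'_SC under the parallel approximation: all s-t paths treated as non-overlapping.
    path_counts maps a path length l to the number N_l of self-avoiding s-t paths."""
    F = 1.0
    for l, n_l in path_counts.items():
        F *= ((1.0 + np.sqrt(1.0 - c ** (2 * l))) / 2.0) ** n_l
    F = max(F, 0.5)                                 # the max{., 1/2} in the defining equation
    return np.sqrt(1.0 - (2.0 * F - 1.0) ** 2)      # invert (1 + sqrt(1 - C'^2))/2 = F

# toy census: two paths of length 4 and three paths of length 6;
# C'_SC rises monotonically with c and saturates at 1 once the product drops below 1/2
for c in (0.5, 0.8, 0.95):
    print(c, parallel_approximation({4: 2, 6: 3}, c))
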
§.§.§ S_m approximation For most regular lattices and complex networks, however, it is a nontrivial task to determine the number of paths N_l of length l. When we look at arbitrary networks, the calculation for the sponge-crossing concurrence is essentially a path-counting problem which may require approximation as well. Although the literature of path counting on graphs is rich and well studied, we are not aware of any closed-form solutions for enumeration of self-avoiding walks of arbitrary length for even the simplest network (like 2D lattices) <cit.>. While approximate path enumerations exist for both 2D lattices <cit.> and random networks <cit.>, we find them impractical, since the concurrence calculation is very sensitive to N_l for small l. Based on this, if we define S_m as the set which contains up to the m-th shortest paths (i.e., the shortest paths, the 2nd shortest paths, and so on up to the m-th shortest paths) between s and t for all s∈ S and t∈ T, then it is possible to approximate the sponge-crossing concurrence between S and T using only these paths. §.§.§ Results In this section, using the S_m and parallel approximations, we present numerical results for calculating C_SC in different networks of large size N. From that, we can numerically estimate the finite-size concurrence percolation threshold c_th ≡ sin 2θ_th, determining its position on the critical curve by matching the corresponding sponge-crossing concurrence at the half point, namely, c_th ≡ sin 2θ_th ≈ c|_C_SC=1/2. Next, we show how to apply our approach to the Bethe lattice and the 2D square lattice. Bethe lattice.—Given a finite Bethe lattice (i.e., a Cayley tree) [Fig. <ref>] with L layers and coordination number k <cit.>, all paths from the root node to any one of the boundary nodes have the same length, L. Since only one path exists from the root node to any node on the boundary, the number of paths of length L is N_L = k(k-1)^(L-1). There is no need to employ the S_m approximation since all paths are exactly known. Only the parallel approximation C_SC' of the sponge-crossing concurrence C_SC is to be calculated, which is given by [following Eq. (<ref>)] (1 + √(1 - C_SC'^2))/2 = max{((1 + √(1 - c^(2L)))/2)^(N_L), 1/2}. To solve for c_th, near C_SC'=0 we let ((1 + √(1 - c_th^(2L)))/2)^(N_L) = 1-ϵ given an arbitrarily small positive ϵ. This gives rise to c_th^(2L) = 1 - [2(1-ϵ)^(1/N_L) - 1]^2 ≃ -4 N_L^(-1) ln(1-ϵ) + O(N_L^(-2)), and thus c_th ≃ (4ϵ/k)^(1/(2L)) (1/(k-1))^((L-1)/(2L)) ≃ 1/√(k-1) in the limit of large L. The result is identical to the exact concurrence percolation threshold for Bethe lattices (Table <ref>). For validation purposes, numerical results of the parallel approximation C_SC' versus the exact values C_SC are shown in Fig. <ref>. We see that as L increases, the threshold c_th approaches 1/√(k-1) from below, consistent with our theoretical result. Hence, it is highly suggested that the parallel approximation can correctly estimate the true concurrence percolation threshold c_th in the limit N →∞. It is known that a saturation point c_sat<1 also exists in the Bethe lattice <cit.>, namely, before c reaches unity, C_SC will already reach unity at c=c_sat. It is obvious that c_sat ≥ c_th, given the monotonicity of the series and parallel rules (Table <ref>). To see if we can recover c_sat using the parallel approximation too, let ((1 + √(1 - c_sat^(2L)))/2)^(N_L) = 1/2, set by C_SC' = 1. This yields c_sat^(2L) = 1 - [2(1/2)^(1/N_L) - 1]^2 ≃ 4 N_L^(-1) ln 2 + O(N_L^(-2)), and thus c_sat ≃ (4 ln 2/k)^(1/(2L)) (1/(k-1))^((L-1)/(2L)) ≃ 1/√(k-1). We see that the saturation point calculated using the parallel approximation is equal to c_th but different from the exact value [Fig.
<ref>].Two-dimensional square lattice.—In a 2D square lattice of N nodes (√(N)∈ℤ), the length of the mth shortestself-avoiding path, between source and target nodes of coordinates s=(x_s, y_s) and t=(x_t, y_t) (1≤ x_s,x_t≤√(N) and 1≤ y_s,y_t≤√(N)), is simply l_m = |x_s - x_t| + |y_s - y_t|+ 2(m-1).Now, let S and T denote the left (x_s=1) and right (x_t=√(N)) boundaries. Let s=(1,y_s)∈ S and t=(√(N),y_t)∈ T. Under the S_m approximation, the total number of self-avoiding paths of length l between S and T is given byN_l≈∑_y_s=1^√(N)∑_y_t=1^√(N)δ_l_1 lN_l_1(s→ t) +δ_l_2 l N_l_2(s→ t) + ⋯ + δ_l_m l N_l_m(s→ t),where δ_ij is the Kronecker delta. This approximation of N_l is then substituted into the parallel approximation [Eq. (<ref>)] to calculate C_SC between S and T. For m≤2, it is possible to directly enumerate the 1st and 2nd shortest self-avoiding paths between every pair of s and t. For m > 2, however, it becomes difficult to write down a closed-form combinatorial expression for N_l_m(s→ t). A path enumeration algorithm is thus needed. We treat paths of length l_m with m > 2 as deviations from the 1st and 2nd shortest paths. For a given m, these deviations can only take a finite number of shapes. Once we have identified these primitive deviations, we must next identify positions in the lattice where these deviations can be placed. Finally, we count the total number of paths by counting the number of shortest paths between deviations <cit.>.For example, given every pair of source and target nodes s and t, all 3rd-shortest paths (m = 3) have either two single-step deviations or one double-step deviation from the shortest path (m = 1). For the case where we have two single-step deviations, we first identify two sets of points, D_1 and D_2, where the first and second deviations can happen, respectively. Then we calculate N_s, D_1 (the number of shortest paths from s to every point in D_1), N_D_1, D_2 (the number of shortest paths from every point in D_1 to every point in D_2), and N_D_2, t (the number of shortest paths from every point in D_2 to t). The total number of 3rd-shortest pathsis then given by N_l_2 (s → t) = N_s, D_1N_D_1, D_2N_D_2, t.The final numerical results of the sponge-crossing concurrenceC_SC are shown in Fig. <ref>. We see that the transition in the value of C_SC becomes sharper as the network size N increases. Moreover, for higher-order approximation S_m and/or larger N, the numerical threshold θ_th levels out at constant values that are very close to those calculated using the star-mesh transform. For example, for N = 8^2, the numerical approximation yields θ_th≈ 0.4, closely mirroring the θ_th≈ 0.416 result from the star-mesh transform technique. This evidence underscores the viability of the new approach for approximating the concurrence percolation threshold accurately.One major benefit of this path-counting approach is its speed. As shown in Fig. <ref>, the algorithm is over a hundred times faster than the heuristic star-mesh transform. This substantial increase in computational speed facilitates the extension of concurrence percolation threshold calculations to more complex network topologies, such as random networks.Random networks.—For random networks, the sponge-crossing concurrence is simply defined as the concurrence between S={s} and T={t}, each set containing only one source (target) node s (t), picked such that the shortest path between s and t is equal to the diameter of the network. 
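As an illustration of this construction (not the production code behind the quoted results), the following sketch picks a diameter-spanning pair in a single Erdős–Rényi realization and evaluates the parallel approximation using only the shortest paths, i.e., an S_1-level approximation; parallel_approximation is the helper sketched earlier, and the network size and concurrence values are illustrative only:

import networkx as nx

G = nx.erdos_renyi_graph(n=300, p=0.015, seed=1)
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()   # keep the giant component

ecc = nx.eccentricity(G)
s = max(ecc, key=ecc.get)                         # a node realising the graph diameter
dist = nx.single_source_shortest_path_length(G, s)
t = max(dist, key=dist.get)                       # farthest node from s
d = dist[t]                                       # shortest s-t distance = diameter

n_shortest = sum(1 for _ in nx.all_shortest_paths(G, s, t))       # S_1: shortest paths only
for c in (0.3, 0.6, 0.9):
    print(c, parallel_approximation({d: n_shortest}, c))

In practice, this per-realization estimate is then averaged over many random networks, as described next.
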
By randomly generating and averaging 10^2 network realizations of certain sizes and degree distributions, the concurrence percolation threshold θ_th is obtained.The outcomes of this numerical approach, applied across different topologies including the Erdős–Rényi (ER) <cit.> and the Barabási–Albert (BA) <cit.> random networks at large scales (N∼ 10^4), are summarized in Table <ref>. These findings shed light on the inherent capacities of large-scale complex QNs, opening new avenues for exploration. § DISCUSSION AND CONCLUSIONSDistributing entanglement throughout a quantum network (QN) is a critical and complex problem at the heart of quantum communications that has attracted significant attention and studies. This field has been further enriched from a statistical physics point of view, by the discovery of two percolation theories (classical versus concurrence) that, at first glance, appear to be remotely related but are, in fact, fundamentally distinct (Fig. <ref>). These theories have not only deepened our understanding but also raised many new questions for further exploration and potentially groundbreaking research. In the following, we will outline and discuss some of the open questions that have been brought to light by these developments:* Optimality. Does there exist an optimal scheme for entanglement transmission?In the context of classical percolation, both classical entanglement percolation (CEP) and quantum entanglement percolation (QEP) fall short of yielding the optimal singlet conversion probability, especially as network size scales up <cit.>. The deterministic entanglement transmission (DET) <cit.>, on the other hand,focuses on improving not the singlet conversion probability but the entanglement that can be deterministically obtained.It has been found that the DET optimizes the average concurrence on either series [Fig. <ref>] or parallel topologies [Fig. <ref>], meaning that even when we relax the requirement of determinacy and consider the average entanglement of general probabilistic outcomes, the DET results remain the optimal on series or parallel topologies <cit.>. However, this result does not generalize to general series-parallel topologies [Fig. <ref>], where DET may not always offer the optimal average concurrence.This prompts us to ask how effective the DET actually is across various QN topologies. Answering this question could substantially deepen our comprehension of the maximum entanglement capacity of QN.* Universality. As a statistical physics theory, the concurrence percolation also exhibits a second-order phase transition near the threshold c_th, similar to classical percolation near p_th.So, what can be said about the universality of this phase transition?It has been found that the thermal critical exponent ν remains the same on different 2D lattices (square versus honeycomb versus triangular), suggesting that universality is likely at play near the critical threshold. Yet, the current definition of percolation based on connectivity rules (Table <ref>) does not clearly define an order parameter <cit.>.This lack of an order parameter hints at a missing degree of freedom in the connectivity-based model. This omission makes it challenging to determine other critical exponents besides ν (or its dynamic counterpart, zν).Additionally, the existing data on ν (or zν) do not allow us to distinguish between the universality classes of concurrence percolation and classical percolation <cit.>. 
Thus, an open question remains: are concurrence percolation and classical percolation simply two facets of a single underlying statistical theory if we overlook short-range details, or are they genuinely distinct theories at a macroscopic level?* Experimental implementation. One of the greatest challenges of quantum information experiments is to achieve high-efficiency multi-body operations.For instance, two-body quantum gates like CNOT are considerably more challenging and less efficient to implement compared to single-body gates such as the rotation gates RX, RY, and RZ. In fact, the number of two-body gates often serves as a benchmark for gauging the computational complexity of quantum algorithms. Interestingly, a practical QN offers an easier path to scalability compared to universal quantum computers. This is since in a QN, only local operations are allowed on qubits across different nodes. This eliminates the need for complex gates like CNOT between qubits from different nodes. This design constraint substantially simplifies QN implementation and boosts its scalability. Recently, experimental feasibility of the series/parallel rules of the DET scheme (Table <ref>) has also been demonstrated on IBM's quantum computation platform Qiskit <cit.>. The series and parallel rules typically perform with fidelity rates of 92.4% and 78.2%, respectively <cit.>.These rates are expected to improve, given ongoing advancements in two-qubit gate fidelity <cit.>. Compared to the CEP/QEP schemes, the DET scheme has its advantages and drawbacks. On the upside, the DET inputs/outputs are only partially entangled, which generally makes them easier to produce and results in higher fidelity. On the flip side, circuit parameters are input-dependent, requiring precise initial state estimations through techniques like heralding <cit.> or tomography <cit.> before deployment. This brings us to a crucial question: to what extent can we experimentally scale the DET scheme for larger QNs? More importantly, given that the current CEP/QEP/DET schemes focus solely on pure states, there is an urgent need to extend these results to mixed states that are affected by noise—a vital step for the practical implementation of QNs.* Other network-based tasks enhanced by entanglement. The feasibility of establishing entanglement over network structures also opens up further new possibilities of using entanglement to enhance some more general, nontrivial network-based tasks. For example, in Refs. <cit.>, researchers studied the application of entanglement to enhance quantum games on both regular lattices and complex network structures, demonstrating that entanglement is a crucial resource for achieving favorable outcomes in the realm of quantum game theory. Additionally, similar improvements have been noted, such as in a quantum adaptation of the card game, bridge, as highlighted in Ref. <cit.>.The discussion on networks of networks in Section <ref> also provides an alternative perspective regarding entanglement. Rather than regarding it solely as a resource, we can view entanglement as a control parameter that regulates the interdependency between multiple network layers, potentially giving rise to novel critical or multicritical behaviors. Indeed, recent theoretical advancements in quantum phase transitions <cit.> have suggested that long-range entanglement among quantum spins at near absolute zero temperatures could trigger a shift from a second-order to a first-order phase transition. 
We hypothesize that this long-range entanglement may operate similarly to the introduction of interdependency among nodes across multiple layers, akin to classical networks-of-networks models. Yet, it is worth noting that the underlying physics governing this interdependency stems from entirely distinct principles within the quantum realm. We have explored the far-reaching implications of entanglement transmission in large-scale quantum networks, all through the lens of the percolation frameworks. The presence of two distinct types of percolation—classical versus concurrence—clearly suggests that the statistical landscape of quantum networks is rather complex. As we look forward, we are enthusiastic that both theoretical and experimental progress in the field of quantum networks will enrich our understanding of this captivating area of study. | http://arxiv.org/abs/2310.18420v2 | {
"authors": [
"Xiangyi Meng",
"Xinqi Hu",
"Yu Tian",
"Gaogao Dong",
"Renaud Lambiotte",
"Jianxi Gao",
"Shlomo Havlin"
],
"categories": [
"quant-ph",
"physics.comp-ph"
],
"primary_category": "quant-ph",
"published": "20231027182458",
"title": "Percolation Theories for Quantum Networks"
} |
Institute of Astronomy, KU Leuven, Celestijnenlaan 200D, 3001, Leuven, Belgium [email protected] Department of Astrophysics, IMAPP, Radboud University Nijmegen, PO Box 9010, 6500 GL Nijmegen, The Netherlands Max Planck Institut für Astronomie, Königstuhl 17, 69117 Heidelberg, Germany Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA), Northwestern University, 2145 Sheridan Road, Evanston, IL 60208, USA UBC 1 is an open cluster discovered in Gaia data and located near the edge of the Transiting Exoplanet Survey Satellite's (TESS) continuous viewing zone. We aim to provide age constraints for this poorly studied open cluster from the combination of gravity-mode (g-mode) asteroseismology, gyrochronology, and isochrone fitting. We established the members of UBC 1 from a spatial-kinematic filtering and estimate the cluster age and its parameters. Firstly, we fitted rotating isochrones to the single star cluster sequence. Secondly, using TESS time-series photometry, we explored the variability of the upper main sequence members and identified potential g-mode pulsators. For one star, we found a clear period spacing pattern that we used to deduce the buoyancy travel time, the near-core rotation rate, and an asteroseismic age. For a third independent age estimate, we employed the rotation periods of low-mass members of UBC 1. Based on isochrone fitting, we find log t = 8.1±0.4, where the large uncertainty occurs because UBC 1 does not host evolved stars. From asteroseismology of one g-mode pulsator, we find a constrained age of log t= 8.24^+0.43_-0.14. From gyrochronology based on 17 cool star cluster members, we estimate log t = 8.35^+0.16_-0.25. Combined, all three methods lead to a consistent age in the range of 150-300 Myr. Our results show that even a single cluster member with identified g modes can improve age-dating of young open clusters. Combining gyrochronology of low-mass members with asteroseismology of intermediate-mass members is a powerful tool for young open cluster modelling, including high-precision age-dating. Age-dating the open cluster UBC 1 with asteroseismology, gyrochronology, and isochrones Fritzewski et al. Age-dating the young open cluster UBC 1 with g-mode asteroseismology, gyrochronology, and isochrone fitting The full Table 1 is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/ D. J. Fritzewski1 T. Van Reeth1 C. Aerts1,2,3 J. Van Beeck1 S. Gossage4 G. Li1 ==========================================================================================================================================================================================================================================================================================§ INTRODUCTION Asteroseismology is a relatively recent method to perform stellar modelling, offering precise global parameters, as well as estimates of the internal physics of stars <cit.>. It is based on observed and identified stellar pulsation modes and their frequencies. Yet, most asteroseismic properties of stellar interiors are inferred from stellar models, which have to be calibrated with stars of well-known properties. Among the best known calibrators in stellar astrophysics are stars in open clusters because their common formation history and large mass range provides tight (initial birth) constraints on any model input physics. 
Here, we are concerned with young open clusters to assess the internal physics of their members in the core-hydrogen-burning phase of evolution. Such main-sequence stars in young open clusters have been the target of asteroseismic studies from the ground for a long time <cit.>. However, the observational restrictions limited the yield and most progress in ground-based asteroseismic modelling of main-sequence stars was achieved for bright field stars <cit.>. Both asteroseismology and open cluster astrophysics gained new momentum with the space missions Kepler <cit.>, the Transiting Exoplanet Survey Satellite (TESS, ), and Gaia <cit.>. We refer to the reviews by <cit.> and <cit.>, for extensive discussions of space asteroseismology of field stars and space astrometry of clusters, respectively. While Kepler and TESS provide unprecedented continuous photometric time-series that allow for high-precision frequency determination of oscillating stars, Gaia delivers precise astrometric, photometric, and spectroscopic parameters for the majority of stars in our Galactic neighbourhood. By combining the data from Gaia and Kepler or TESS, we are now in the position to test stellar models, including those of main-sequence pulsators, with open cluster members more firmly than ever before and thus providing passageways towards improving these models. For this work, we are mainly concerned with gravity-mode (g-mode) pulsators in the main-sequence stage of their evolution. Specifically, we work with γ Doradus (γDor) pulsators, which are intermediate-mass dwarfs. The Kepler and Gaia missions led to the discovery <cit.>, description <cit.>, and detailed asteroseismic modelling <cit.> of these stars, while enabling the development of a broader theoretical underpinning on angular momentum and chemical element transport <cit.>. However, most of this work was carried out on field stars. Given the ages and distances of the few open clusters in the Kepler field, open cluster asteroseismology has mainly focussed on red giants with solar-like oscillations <cit.>. (Pre-)main sequence pulsators in open clusters were observed with K2 <cit.> and lately with TESS <cit.>. These studies showed that asteroseismology can constrain the ages of open clusters. However, the studies mostly focussed on pressure mode (p-mode) pulsations in δ Sct-type stars located in the classical instability strip. In this work, we exploit the potential of g-mode asteroseismology to age-date a barely studied open cluster and cross-calibrate the asteroseismic age with the ages derived from other methods. One reason why most asteroseismic studies on open cluster stars with TESS focus on δ Sct-type pulsators is its observing mode. Although it provides time-series photometry with a nearly all-sky coverage, the 27 d-coverage by its sectors hinders deep asteroseismic exploration of the data when the beating patterns of multi-periodic oscillations are longer than this time base. Moreover, due to the short sector baseline, individual frequencies in the g-mode regime (0.5≲ f_puls≲ 5 d^-1) are not resolved because the frequency resolution is inversely proportional to that baseline. Fortunately, the TESS observations include a continuous viewing zone (CVZ) in each hemisphere which is monitored for 352 d, hence providing long-baseline time-series photometry that enables precise frequency determination for multi-periodic low-frequency pulsations. One of the few open clusters with a large number of observed TESS sectors (i.e. 
in or on the edge of the CVZ) is the recently discovered UBC 1 (others include the tidal tails of NGC 2516 which are treated in a separate parallel study, Li et al., in prep.). <cit.> discovered this open cluster in Gaia DR2 data with an unsupervised clustering algorithm. As <cit.> note the position of this open cluster matches the previously described RSG 4 <cit.>, yet the mean proper motion and the distance of the clusters do not agree. The Milky Way Star Cluster catalogue <cit.> also includes an entry for an open cluster near the position of UBC 1. However, MWSC 5373 is located at a distance of 1.6 kpc (compared to 320 pc for UBC 1) and can therefore not be considered the same cluster. UBC 1 has not been analysed in a dedicated study, yet it is included in other large-scale, Gaia-based open cluster studies. <cit.> include UBC 1 in their list of stellar clusters and streams as Theia 520 with an age of 182 Myr and <cit.> list it as UPK 134 with similar astrometric parameters albeit with an estimated age of 525 Myr[We note that in <cit.>, UBC 1 is confused with MWSC 5373 based on the sky position despite their different distances.]. <cit.> revisit the collection of UBC clusters, update the cluster parameters slightly and estimate the age to be 70 Myr with a reddening of A_V=0.35 mag. We note that the estimated age of RSG 4 is 350 Myr <cit.>. We aim to explore the (g-mode) pulsators in UBC 1 to obtain an asteroseismic age estimate, along with independent age-dating from isochrone fitting based on rotating stellar models and from gyrochronology. After establishing the membership in Sect. <ref>, we find the isochronal age based on Gaia photometry in Sect. <ref>. Further, we describe the TESS observations and our data reduction, and identify promising cluster pulsators (Sect. <ref>). Using this information, we estimate the asteroseismic age of UBC 1 in Sect. <ref>. To refine our age estimate and to exploit the full potential of TESS, we also estimate the gyrochronal age based on cool star rotation periods in Sect. <ref>. Finally, we briefly discuss the different age estimates and come to conclusions in Sect. <ref>.§ OPEN CLUSTER MEMBERSHIP§.§ Membership The membership list of UBC 1 presented in <cit.> and subsequently in <cit.> includes 47 members, while <cit.> list 94 members. In contrast, <cit.> found 397 members based on unsupervised clustering. However, many of the listed members may not be true members of the open cluster because the velocity dispersion of the groups found by <cit.> is often larger than typically found in an open cluster <cit.>. In the following, we reanalyse the stars in the field of UBC 1 to establish a comprehensive, inclusive, yet clean membership. For our analysis, we followed the kinematic and spatial filtering approach outlined in <cit.> and <cit.>. This method is rather conservative and traditional, yet thanks to the precision of the Gaia data also very accurate. In our implementation, the filtering is a three step process based on (1) proper motions, (2) 3D motion, and (3) spatial distribution. Hence, as in <cit.>, we did not use a photometric criterion. We applied these steps to a generous selection of Gaia DR3 <cit.> sources centred on the known position of UBC 1 (see Appendix <ref> for the Gaia archive ADQL query). 
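For reference, the initial selection can be reproduced with a single archive query; the Python sketch below is ours, and the search centre, radius, and parallax cut are placeholder values rather than those of the actual appendix query:

from astroquery.gaia import Gaia

# Placeholder field centre and radius in degrees; substitute the UBC 1 values here.
RA0, DEC0, RADIUS = 0.0, 0.0, 5.0

query = f"""
SELECT source_id, ra, dec, parallax, pmra, pmdec, radial_velocity,
       ruwe, phot_g_mean_mag, bp_rp
FROM gaiadr3.gaia_source
WHERE 1 = CONTAINS(POINT('ICRS', ra, dec),
                   CIRCLE('ICRS', {RA0}, {DEC0}, {RADIUS}))
  AND parallax > 2.0            -- generous distance cut (UBC 1 lies at roughly 320 pc)
  AND phot_g_mean_mag < 18.0    -- faint limit adopted for the membership analysis
"""
field_stars = Gaia.launch_job_async(query).get_results()

The kinematic and spatial filters described in the following paragraphs are then applied to this table.
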
A prerequisite of calculating the velocity dispersion is knowledge of the bulk motion of the open cluster which we took from<cit.>.Following <cit.>, we first calculated the tangential velocities of each source in km s^-1 and subsequently the velocity dispersion in the sky plane, Δ v_2D. For the members presented in <cit.>, we found a velocity dispersion of 1.6 km s^-1. To include all these sources as members, we selected a threshold of 1.7 km s^-1 and considered all stars with Δ v_2D < 1.7 km s^-1 as proper motion members. For the brighter stars in the field, radial velocity measurements are available from Gaia DR3 <cit.>. Yet their uncertainties are up to 9 km s^-1. We only used these data to exclude stars with too divergent radial velocities, defined as Δ v_3D > 10 km s^-1. In this way, we found three stars in the membership list of <cit.> that exceed this value and were therefore removed. Although the kinematic filtering was very efficient, we still found many co-moving sources over a large volume. In order to reduce the number of random matches, we applied a spatial density filtering similar to <cit.>. For each star, we calculated the distance in pc to the three nearest kinematic members and selected only stars with all neighbours within 10 pc. This value effectively suppressed all random aggregates of a few stars that could be found in the data but also allowed us to retain members on the outskirts of the open cluster. In total, we find 132 members of UBC 1 ranging from spectral type A0 to mid-M (Table <ref>). Fig. <ref> shows the colour-magnitude diagram[The spectral type axis in this and all subsequent figures is based on <cit.> (<http://www.pas.rochester.edu/ emamajek/EEM_dwarf_UBVIJHK_colors_Teff.txt)>] (CMD) of the cluster members. The single star cluster sequence is very clean. Among the lower-mass stars, we observe a spread in the sequence due to less accurate Gaia measurements. The photometric binary sequence is sparsely populated and hosts only cool stars. We note that we did not apply any photometric criteria other than the maximum magnitude of G=18, showing again the precision with which co-moving structures can be extracted from the Gaia data. For completeness, we reiterated the membership determination, but this time with the proper motion for RSG 4 as reported in <cit.>. We did not find evidence of a cluster at the given position in velocity space. Hence, we conclude that the detection of RSG 4 was illusive and we call the open cluster UBC 1 in this work. From our analysis, we find a distance of 320 pc to the cluster core with a mean proper motion of μ^*_α=-2.53184 mas yr^-1, μ_δ=3.73449 mas yr^-1 and a mean radial velocity v_r=-23.2 km s^-1. Based on the reddening provided by Gaia <cit.>, we expect the cluster to be mostly extinction free. Yet, the numerical value (the average of all members with T_eff>4600 K) is not well constrained with E(G_BP-G_RP)=0.035±0.034 mag (corresponding to E(B-V)=0.026 mag, ). The metallicity estimate from Gaia <cit.> is similarly uncertain with [Fe/H]=-0.2±0.3. On the other hand, the metallicity determined from APOGEE spectra <cit.> of five members is consistent with the solar value ([Fe/H]=0.02±0.03). The stars in common between APOGEE and Gaia are offset by 0.15 dex <cit.>. From the available data and the knowledge that most open clusters in the solar vicinity are also of solar metallicity, we conclude that UBC 1 is not an outlier. §.§ Binarity Binarity (multiplicity in general) can influence stellar evolution in many aspects. 
It can change the star's structure, its chemical mixing, and angular momentum evolution <cit.>. All these properties influence stellar pulsations. Hence, knowing potential binaries in our sample is important in the subsequent analyses. Multiple binarity indicators are available for stars in open clusters. Primarily, we used the information provided in Gaia DR3, including the reduced unit weight error (RUWE), spectroscopic binarity indicators, and photometric data. The latter was supplemented by infra-red photometry from WISE <cit.>. We flagged every star with RUWE>1.2 as a potential binary <cit.>. An increased RUWE value reflects larger uncertainties in the Gaia astrometric solution and can indicate an unresolved binary. Gaia DR3 also provides spectroscopic orbits <cit.> for two upper main sequence members of UBC 1, which we included in our list of cluster binaries. Unresolved binaries can also be identified from photometry, because they are elevated above the single star cluster main sequence in the colour-magnitude diagram (CMD). As seen from Fig. <ref>, photometric binaries can mostly be found among UBC 1's low-mass members. However in a CMD based on a redder colour (G-W1, with W1 from WISE), we find additional outliers redwards of the main sequence (Appendix <ref>). These unequal-mass binaries have a redder component that contributes more to the combined flux in the redder passband than in the optical Gaia passband. The redder component could also be a debris disc as observed around other intermediate mass stars <cit.>. Nevertheless, we treated these stars as candidate binaries as a precaution. We note that photometric binaries identified in the optical are also found in the infrared. We therefore flagged all stars found at redder colours than the cluster main sequence in any of the CMDs as photometric binaries. Gaia DR3 also provides a radial velocity amplitude for some sources. Radial velocity variability can be induced by a companion. However, stellar pulsations also cause spectral line-profile variations for dwarfs, leading to radial-velocity changes of tens of km s^-1 depending on the kind of star and the nature of the modes <cit.>. As our targets potentially include such stars, the radial velocity variability cannot be used as a reliable binarity indicator. With future Gaia data releases, the epoch radial velocity data will become available and enable the distinction between sources of radial velocity variability based on the observed periodicity and all the combined time-series data. In total, we find 35 out of 132 members of UBC 1 (27 %) to show signs of potential multiplicity. This percentage is in agreement with studies of other open clusters based on Gaia data <cit.>. We mark all binaries with distinct symbols in the figures.§ ISOCHRONAL AGE OF UBC 1Our membership list includes only main sequence stars, which makes it challenging to estimate the age of this open cluster by means of isochrone fitting. Nevertheless, we employed this method to obtain a first handle on the cluster age and to assess the parameter space for the asteroseismic modelling discussed in Sect. 5. Since the asteroseismic models in that Section are constructed with the Modules for Experiments in Stellar Astrophysics software (MESA, ), we relied on the MESA Isochrones and Stellar Tracks (MIST) isochrones as a natural choice <cit.>. The standard MIST isochrones are provided as non-rotating models and as models rotating at 40 % of the critical rate, v/v_crit=0.4 (seefor the definition of v_crit). 
As intermediate-mass stars are typically fast rotators and rotation is known to affect main-sequence turn-off morphologies of young clusters <cit.>, we employed the generalised MIST isochrones computed by <cit.> (see also ) to perform the isochrone fitting. These cover rotation rates ranging from 0 ≤ v/v_crit ≤ 0.9 in steps of 0.1. Low-mass stars are treated as non-rotating in every MIST model given their small rotational velocities <cit.>. §.§ Mathematical fitting model The accurate Gaia parallax measurements and precise photometry allowed us to fit isochrones to the `measured' absolute magnitudes, obtained with the distance modulus of the individual stars. Using the prior knowledge from the available spectroscopy discussed in Sect. 2.1, we restricted the isochrone fitting to solar metallicity. All uncertainties from the photometry were propagated during the model evaluation. We used a Markov Chain Monte Carlo (MCMC) approach in a Bayesian setting (following e.g. ) and estimated the posterior probability for the cluster age from its single-star population, because binaries can distort the fitting as they do not necessarily occur on the cluster sequence. Due to a known mismatch between the colours of the isochrones and the observed colours of low-mass stars (see and references therein), we limited the fitting to G≤13 (M_G≤ 5.5, corresponding to masses ≳ 0.9 M_⊙). The posterior distributions for the age t and extinction A_V were estimated according to Bayes' theorem, p(t, A_V | M(ϖ)) = p(M(ϖ) | t, A_V) p(t) p(A_V) / ∫ p(M(ϖ) | t, A_V) p(t) p(A_V) dt dA_V, with M the combined absolute magnitude measurements of all considered cluster members. The log-likelihood function was given by ℒ_cluster = log p(M(ϖ) | t, A_V) = -0.5 ∑_i w_i (M_i - I(M_G, t, A_V))^2 / σ_M,i^2, where I contains the isochronal magnitudes given the input parameters and σ_M,i the uncertainties associated with the observations M. Each star was weighted with w_i depending on its proximity to the turn-off (in our case the brightest star in the cluster); stars close to the turn-off were given the highest weights in the fit. For each observed M_G, we interpolated the isochrone linearly and evaluated its M_BP and M_RP magnitudes. These values were reddened according to the probed A_V by using the median extinction coefficients from <cit.>[These values are strictly speaking only valid up to 7000 K, yet at the low extinction the effect of over- or underestimating the extinction coefficient can be neglected.]. The prior, p(t)p(A_V), was chosen to be flat and very broad in age (7 ≤ log t ≤ 10.0) and extinction (0 mag ≤ A_V ≤ 1 mag). Both parameters were thus essentially unconstrained, allowing us to probe the complete parameter space. The MCMC calculations were carried out using an open-source package <cit.>. We applied this procedure to each set of isochrones with different rotational velocities. §.§ Isochronal age Given the absence of evolved cluster members, the age cannot be tightly constrained from the isochrones and spans a wide range independent of the chosen stellar rotation rate. However, the best age (i.e., the age corresponding to the maximum likelihood of the posterior) is strongly correlated with the chosen rotation rate and we find younger ages for faster rotating models (cf. Fig. <ref> in the Appendix for the age posterior distributions). This effect is not unexpected because UBC 1 does not host a main-sequence turn-off and colour shifts due to rotation have to be compensated by age differences (and additional extinction to some extent).
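For concreteness, the sampling scheme of the previous subsection amounts to only a few lines when written with an ensemble sampler such as emcee; in the sketch below (ours, not published code) interpolate_isochrone stands in for the reddened, linearly interpolated MIST models, and the input arrays stand in for the single-star members with G ≤ 13, so all data and function names are placeholders:

import numpy as np
import emcee

# Placeholder inputs: absolute G magnitudes, observed magnitudes in two bands, errors, weights.
M_G = np.linspace(0.5, 5.5, 40)
mag_obs, mag_err = np.zeros((2, 40)), np.full((2, 40), 0.02)
weights = np.linspace(1.0, 0.1, 40)              # stars near the turn-off get the largest weight

def interpolate_isochrone(log_t, A_V, M_G):
    """Stand-in for the reddened MIST isochrone interpolation (returns model M_BP, M_RP)."""
    return np.zeros((2, len(M_G)))

def log_prior(theta):
    log_t, A_V = theta
    return 0.0 if (7.0 <= log_t <= 10.0 and 0.0 <= A_V <= 1.0) else -np.inf   # flat priors

def log_posterior(theta):
    lp = log_prior(theta)
    if not np.isfinite(lp):
        return -np.inf
    model = interpolate_isochrone(theta[0], theta[1], M_G)
    return lp - 0.5 * np.sum(weights * (mag_obs - model) ** 2 / mag_err ** 2)

nwalkers, ndim = 32, 2
p0 = np.array([8.0, 0.1]) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior)
sampler.run_mcmc(p0, 2000, progress=False)
age_samples = sampler.get_chain(discard=500, flat=True)[:, 0]    # posterior samples in log t
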
Considering all rotating models would lead to a large age uncertainty of 1 Gyr in the range log t = 7.6 - 8.6. While the maximum of the posterior distribution changes appreciably depending on the rotation rate, the median age is in the range log t = 7.9 - 8.2. In Fig. <ref>, we show isochrones for the different rotating models based on the median posterior value (left panel) and on the maximum likelihood of the parameters (right panel). It is immediately visible that some isochrones do not match the observations even for their best-fitting parameters (see Table <ref> for their values). In particular, the faster rotating models (redder shades in Fig. <ref>) are not able to describe the observations as they overestimate the brightness for many stars. Hence, they are not considered further. Similarly, the slowly rotating models (dark blue shades) have troubles to match the observations. Here, best-fitting values do not match the highest mass stars and provide too old solutions. For these models the joint fit with the cool stars provides additional conditions. Since low-mass stars are considered non-rotating, independent of the higher-mass stars' rotation rate, extinction is the constraining factor for their isochrone position. The extinction added to the slowest rotating models is the cause of the spread observed among the lower-mass stars. Based on both panels of Fig. <ref>, we find the isochrone with v/v_crit=0.5 to represents the measurements best and to pass through most of the data points. For these models, the maximum likelihood estimator of the age is closest to the median age distribution, which itself is nearly symmetric for this rotation rate (Fig. <ref>) while it is heavily skewed for other rotation rates (Fig. <ref>). Hence, we can safely assume that the selected model and its uncertainty encompasses most of the data. Reliable rotational velocities (vsin i) are available in the literature for only two stars <cit.>. These two stars are slow rotators with vsin i/v_crit≈0.1. The lack of observational data (from surveys and dedicated studies) for other cluster members does not allow us to draw conclusions on the overall rotational distribution. The posterior distributions of the most physical model with v/v_vcrit=0.5 is shown in Fig. <ref> and leads to the maximum likelihood age log(t)=8.1±0.4 with an associated extinction of A_V=0.065± 0.035 mag (equivalent to E(B-V)=0.021±0.01 mag, assuming R(V)=3.1). The uncertainty levels are given by the 16th and 84th percentile of the posterior distribution. The interstellar extinction value is in good agreement with the rough estimate from Gaia. We made several tests to ensure that neither the age nor the extinction depend on the initial value of the MCMC chain or on the prior. Finally, we assess the best-fitting model not only based on the quantitative posterior distributions but also on how it visually fits the data to get a sense of the parameter range and to assess the physical implications of the fit. In fact, the age of open clusters has often been estimated by visual comparison to isochrones, rather than rigorous mathematical modelling. In Fig. <ref>, we show the colour-absolute magnitude diagram of UBC 1 and over-plot a randomly drawn sample from the posterior distribution, as well as the best fitting model. This re-affirms that the rotating isochrone with the chosen parameters describes the stars in the fitted range very well. 
Among the randomly drawn models, some stray far from the actual data near the turn-off, providing visual confirmation for the age near 125 Myr. To summarize, we find UBC 1 to be log(t)=8.1±0.4 (125 Myr) old. The reddening towards the open cluster is small with E(B-V)=0.021±0.01 mag. We show that it is important to include rotation in the isochrones for CMD fitting based on intermediate-mass stars, even in the case of open clusters without an extended main-sequence turn-off.§ TESS OBSERVATIONS, REDUCTION, AND CLUSTER PULSATORS§.§ Observations and data reduction We chose UBC 1 as our target because it is situated near the edge of the TESS Northern Continuous Viewing Zone (CVZ-N), delivering uninterrupted time-series photometry up to 352 d. As seen in Fig. <ref> not all stars are in the CVZ-N and the coverage fraction is varying. Our targets could potentially be observed in sectors 14-26 of the primary mission and sectors 40, 41, and 47-60 of the extended missions (28 sectors in total).To download and reduce the TESS data, we employed the asteroseismic reduction pipeline developed by <cit.>. In short, we gathered all available TESS data for the intermediate-mass cluster members from the Mikulski Archive for Space Telescopes (MAST) using theAPI <cit.>. We chose a cut-out size of 25 px x 25 px, allowing for the identification of neighbouring, contaminating stars and a sufficient background area for subtraction. For each target, we extracted the flux using custom apertures in thepackage<cit.>. The apertures were constructed with the threshold method in(threshold 4.5σ). In order to avoid light of the neighbouring stars in our aperture, we identified relevant neighbours (Δ m_T ≤ 4.5 mag) in the cut-out region. We fitted a background planar model to the image and modelled each source as a Gaussian at the background position. The true TESS point spread function is much more complicated but for our purpose a Gaussian estimate is sufficient. With the models for the background and neighbours, we were in a position to estimate the flux contribution of all stars to the pixels within our mask. Pixels in which the contamination fraction is larger than 10^-4 were removed from the aperture mask. Other pixels were kept to avoid too small masks even though it led to irregular mask shapes. As an additional check for contamination, we selected all Gaia sources with G<16 within 2 around each A or F-type cluster member and calculated their absolute magnitude. Stars with close neighbours that could lead to potential flux contamination were selected and we checked whether the contaminating star is of spectral type F or earlier (based on their absolute magnitude). We found only one instance in which this is the case. However, this star is a Gaia-resolved equal-mass binary cluster member (UBC1-23 and UBC1-24). After the extraction, we de-trended each light curve sector-by-sector with a background subtraction and principal component analysis (up to seven components). In the asteroseismic analysis, we are interested in relatively short-term variability, hence removing all long-term trends (longer than one sector) in the light curves is appropriate. It also allowed us to stitch together all observed TESS sectors into a single light curve. Starting with the extended mission, the cadence of the TESS observations was shortened from 30 min to 10 min and with the second extended mission to 200 s. To create a light curve with a common cadence, we binned all observations of the later sectors to a 30 min cadence. 
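The extraction just described can be emulated with the public Lightkurve tools; the Python sketch below is a simplified stand-in for the actual pipeline (threshold aperture, PCA background regressors, stitching, and binning to a common 30-min cadence), shown here for UBC1-106 (TIC 421334449):

import lightkurve as lk
import astropy.units as u

search = lk.search_tesscut("TIC 421334449")           # all available TESS sectors for UBC1-106
tpfs = search.download_all(cutout_size=25)            # 25 px x 25 px cut-outs

light_curves = []
for tpf in tpfs:
    mask = tpf.create_threshold_mask(threshold=4.5)   # aperture from the threshold method
    raw = tpf.to_lightcurve(aperture_mask=mask)
    # background/systematics regressors from the pixels outside the aperture (PCA, 7 terms)
    dm = lk.DesignMatrix(tpf.flux[:, ~mask], name="regressors").pca(7).append_constant()
    corrected = lk.RegressionCorrector(raw).correct(dm)
    light_curves.append(corrected.normalize())

lc = lk.LightCurveCollection(light_curves).stitch()   # one light curve across all sectors
lc = lc.bin(time_bin_size=30 * u.min)                 # common 30-min cadence
pg = lc.to_periodogram(maximum_frequency=65 / u.day)  # spectrum up to 65 d^-1

Note that this sketch omits the neighbour-contamination modelling described above, which requires the additional Gaussian source fits.
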
Due to the location at the edge of the CVZ-N, most star were not observed in all sectors within a cycle, imposing complicated window functions. Yet, the great majority of our stars were completely covered by Sectors 56 through 60, hence we can use these sectors with a shorter cadence to distinguish between effects of the window function and true signals in the frequency analysis. Our membership list contains eleven stars that could potentially be intermediate-mass pulsators (with spectral types F3 or earlier, (G_BP-G_RP)_0<0.5). We extracted ten light curves. The remaining star (UBC1-73) was too close to a neighbouring star to exclude contamination. §.§ Population of pulsators in UBC 1 <cit.> searched the TESS data of the whole CVZ-N and provide a list of periodically variable A and F stars, including a classification. Three hybrid γ Dor-δ Sct pulsators and five stars classified as generally variable are in our membership list. In addition, <cit.> also classify UBC1-3 as a γ Dor pulsator. Based on its position in the CMD this star is a K dwarf.We find that rotational modulation of two nearby periods in the periodogram of this star (maybe caused by latitudinal differential rotation) generates a pattern that resembles a period spacing pattern at first look. Three stars were not classified but are included in <cit.>. Their list of known pulsators serves as our initial list which we aim to expand and analyse in detail. Our initial classification of pulsators is carried out with Lomb-Scargle periodograms <cit.> generated by the asteroseismic pipeline <cit.>. We find the majority of stars earlier than F5 ((G_BP-G_RP)_0<0.6) to show at least some level of periodic variability as they reveal peaks in their periodogram, which occur clearly above the noise level. These stars are highlighted in Fig. <ref> and colour-coded by their perceived richness of the periodogram from an asteroseismic perspective. Three stars (UBC1-46, UBC1-61, UBC1-106) show very clear pulsation signals. These three stars are already classified as hybrid pulsators by <cit.> and are discussed in detail below. The star UBC1-107 shows only a few periodic components in the frequency domain. Yet, we interpret these as pulsation frequencies rather than rotational signals as it concerns values in the range of a few cycles per day. This member is described as variable by <cit.>. In Fig. <ref>, we also mark the theoretical instability strip for g-mode pulsations from <cit.> and the observed red edge of the p-mode strip deduced by <cit.>[We note that transforming instability strip edges from log L/L_ and T_eff to Gaia magnitudes involves empirical transformations which might introduce biases. They are only meant as a guidance.]. These four stars are located in the γ Dor instability strip and are thus expected to pulsate in g modes. Moreover, the three richest pulsators show both g- and p-mode pulsations and are located in the narrow area that is shared by both instability regions. The remaining variable stars (UBC1-16, UBC1-40, UBC1-67, UBC1-68, UBC1-86, UBC1-97) show maximally one or two significant high-frequency components. Among them UBC1-16, UBC1-67, UBC1-73, and UBC1-86 are described as variables by <cit.>. All of these stars are more massive than typical γ Dor stars and occur above the instability region while their variability has frequencies in 1<f<10 d^-1 only. We do not find high-frequency p-modes typical of young δ Sct-type pulsators for these stars, despite their position within the classical instability strip. 
In the following, we give more details on the variable stars focussing on the three multi-periodic g-mode pulsators. We show the amplitude spectra in the range [0,65] d^-1 as we found that none of the stars reveal periodic variability beyond this frequency.§.§.§ UBC1-46 UBC1-46 (TIC 406930461) is a genuine δ Sct star with some low-amplitude frequencies in the γ Dor g-mode regime. The frequencies do not lead to mode identification, hampering asteroseismic inferences. In particular, we cannot identify a period spacing pattern from the low-amplitude g modes. Both periodograms in Fig. <ref> also show a single peak at 1.6 d^-1, which might be the star's surface rotation frequency or a (sub-)multiple thereof. §.§.§ UBC1-61 Figure <ref> shows the amplitude spectra of the richest hybrid pulsator in the cluster. This star (UBC1-61, TIC 416402521) does not only show a wealth of higher-frequency pulsations but also many peaks from the low-frequency domain of g-mode pulsations all the way to the p-mode regime. Despite its abundance in pulsation frequencies, we are not able to identify a period spacing pattern. We suspect that multiple overlapping patterns with missing peaks occur. We also do not find obvious combination frequencies neither in the g- nor in the p-mode regime. §.§.§ UBC1-106 Although UBC1-106 (TIC 421334449) is very similar to both stars discussed above in terms of its position in the CMD and hence mass, its periodogram is very different. This star is also a hybrid pulsator but only a few prominent peaks in the g-mode frequency regime occur (Fig. <ref>). However, these exhibit a clear, comb-like signature of a period spacing pattern. Unlike UBC1-46 and UBC-61, the highest amplitude peak in the amplitude spectrum of UBC1-106 occurs among the lower frequencies. We identify and interpret the period spacing pattern in Sect. <ref>.§.§.§ Other variable cluster members Among the remaining variables, we are not able to identify stars with clear pulsation signals in their periodograms. These cluster members are periodic variables and exhibit one or multiple periodic components in the g-mode regime. However, their individual detected modes are currently not suited for asteroseismic inferences as long as they cannot be identified in terms of degree and azimuthal order. High-resolution spectroscopy may remedy this and still lead to asteroseismic modelling <cit.>. Here, we only discuss the targets and show their periodograms in Appendix <ref>. The two most interesting pulsating cluster members of this class are UBC1-67 (TIC 416403139) and UBC1-107 (TIC 421335009). The former might be a “hump-and-spike” star <cit.>, in which the observed feature could originate from Rossby modes <cit.>. If this is the case, we estimate the rotation frequency of UBC1-67 to be f_rot≈ 1.6 d^-1. However, the frequency peak could also be a solitary pulsation or surface rotation frequency. We note that UBC1-67 is the brightest and hence highest mass star in the open cluster. For UBC1-107, we find several frequencies below f=3 d^-1, which could be part of a period spacing pattern. However, they are very close to the noise level or even insignificant, hence we are not able to construct a reliable period spacing pattern as there would be missing modes of consecutive radial order. In case these features in frequency space originate from pulsations, the two stars might be promising targets to revisit once additional TESS observations are available. 
A longer time baseline would provide higher frequency resolution of nearby pulsation peaks and might elevate additional pulsation frequencies above the noise level. Further, we note that UBC1-40 (TIC 406925885) shows low-level stochastic variability with f≤ 2 d^-1. The frequency spectrum of UBC1-86 (TIC 243275988) has an isolated frequency around f≈5.1 d^-1, while its spectrum is otherwise featureless for f≳1 d^-1.§ ASTEROSEISMIC AGE-DATING OF UBC 1 The main aim of this work is to provide an asteroseismic age constraint for UBC 1 based on identified g modes in its cluster members and to confront this asteroseismic age with the isochronal and gyrochronal ages. In order to derive an asteroseismic age, we attempt to estimate the buoyancy travel time (Π_0), which is an internal structure quantity characterizing the size of the mode cavity where the detected g modes propagate. Π_0 can be used as an age indicator for intermediate-mass stars because it probes the near-core g-mode cavity, which changes during the main-sequence evolution. Generally speaking, the shrinking convective core leaves behind a chemical gradient which increases the stability against convection <cit.>, and hence the buoyancy frequency (Brunt-Väisälä frequency) increases. Thus, the inversely related buoyancy travel time decreases with age. Observationally, the buoyancy travel time is delivered by period spacing patterns of g modes <cit.>. In order to infer Π_0 from observations, an estimate of the near-core rotation frequency, f_rot, is needed, because f_rot is determined by the slope of the period spacing pattern and is slightly correlated with Π_0 <cit.>. To estimate f_rot, the identification of the spherical wave numbers (l,m,n) of the modes involved in the period spacing pattern is required. We use the methodology initially developed by <cit.> and improved by <cit.> to achieve mode identification and estimation of Π_0 and f_rot. As a first step in the application of this method, we hunt for period spacing patterns of g modes for the cluster pulsators. §.§ Pre-whitening We analysed the three pulsators with clear g modes in detail to obtain asteroseismic parameter estimates. For that purpose, we extracted all their significant frequencies from the periodograms through iterative pre-whitening. For this work, we used the frequency analysis routines developed by <cit.>. We used the option to deduce all frequencies above a signal-to-noise ratio (S/N) threshold of S/N≥3. The S/N was calculated in a window of 1 d^-1 around the extracted frequency. We chose this low cut-off S/N value as the stopping criterion for extracting the significant frequencies, compared to the often cited value of 5.6 for space photometry <cit.>, because we possess light curves with different cadences, which allow us to cross-check whether peaks are significant. With the lists of pre-whitened mode periods, we proceeded to search for g-mode period spacing patterns. To facilitate the identification, we used the open-source package FLOSSY [<https://github.com/IvS-KULeuven/FLOSSY>] <cit.>. It allows the user to visually fit the observed periods by displaying a period spacing plot and an échelle diagram. When a matching pattern is found, FLOSSY calculates the χ^2 value to check whether the selected pattern is locally the optimal solution. With this visual approach, FLOSSY allows the user to easily select which mode periods belong to the pattern and investigate whether additional modes with frequencies below the selection threshold might be present in the periodogram.
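The frequency-extraction routines used in this work are the cited ones; purely as an illustration of the iterative pre-whitening loop with the S/N ≥ 3 stopping criterion described above, a simplified version could look as follows. The oversampling factor, the noise window handling, and the sinusoid refit are generic choices, not the exact implementation.

```python
import numpy as np
from astropy.timeseries import LombScargle
from scipy.optimize import curve_fit

def prewhiten(time, flux, fmax=65.0, snr_stop=3.0, window=1.0, max_modes=50):
    """Iteratively extract sinusoids until the strongest residual peak drops below S/N = snr_stop.
    S/N is measured against the mean residual amplitude in a +/- window/2 d^-1 box around the peak."""
    residual = flux - np.mean(flux)
    span = np.ptp(time)
    freq_grid = np.arange(1.0 / span, fmax, 0.1 / span)        # ~10x oversampled grid
    extracted = []
    for _ in range(max_modes):
        power = LombScargle(time, residual, normalization="psd").power(freq_grid)
        amp = 2.0 * np.sqrt(power / time.size)
        k = np.argmax(amp)
        box = np.abs(freq_grid - freq_grid[k]) < window / 2.0
        snr = amp[k] / np.mean(amp[box])                        # crude noise estimate (includes the peak)
        if snr < snr_stop:
            break
        # Refine frequency, amplitude and phase with a non-linear sinusoid fit, then subtract it.
        model = lambda t, f, a, ph: a * np.sin(2 * np.pi * (f * t + ph))
        popt, _ = curve_fit(model, time, residual, p0=[freq_grid[k], amp[k], 0.0])
        residual = residual - model(time, *popt)
        extracted.append({"freq": popt[0], "amp": abs(popt[1]), "snr": snr})
    return extracted
```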
§.§ Period spacing pattern for UBC1-106 As outlined above, the only successfully identified period spacing pattern belongs to UBC1-106. Using , we find a pattern of five modes with consecutive radial order. The period spacing pattern of these five dominant modes is very regular, as illustrated in Fig. <ref>. We used the smooth pattern to perform mode identification and estimate the buoyancy travel time (Π_0) and near-core rotation frequency f_rot from the formalism derived in <cit.> and updated in <cit.>. This procedure was already successfully applied to TESS g-mode field pulsators by <cit.>. For UBC1-106 this led to the identification of five prograde dipole modes of consecutive radial order n∈[10, 14]. The corresponding buoyancy travel time was estimated to be Π_0=4549±24 s and we deduced a relatively low near-core rotation frequency of f_rot = 0.544±0.009 d^-1. Assuming rigid rotation, the near-core rotation rate corresponds to v_surf = 46 km s^-1 which is in agreement with the spectroscopically measured surface rotation vsin i =22.6 km s^-1 from APOGEE <cit.> and points to an inclination angle of about 30. Dips due to the occurrence of mode trapping caused by a chemical gradient are typically observed in evolved stars <cit.>. The smooth pattern thus hints towards a young pulsator, in agreement with the isochrone fitting. In addition to the smooth pattern centred around mode periods of 0.4 d, we find three additional modes with longer periods that could belong to the pattern as well. These mode periods are seemingly separated from the pattern. Should they belong to it, they must have higher and non-consecutive radial orders. Without the certainty of these modes belonging to a long pattern, we can neither rule out nor confirm that the observed modes belong to the same series as the modes with periods around 0.4 d. Hence, we continue with the asteroseismic parameters determined from the five consecutive radial order modes. §.§ Asteroseismic age of UBC1-106 In order to estimate an asteroseismic age for UBC1-106, additional astrophysical observables, aside from Π_0, are beneficial, provided that they are of high precision <cit.>. We first considered the effective temperature (T_eff) and surface gravity (log g) from Gaia DR3 <cit.>. However, these spectroscopic values have unrealistically small errors and their values in the supplementary astrophysical parameter tables also disagree. For this reason, we resorted to the stellar luminosity as an additional independent observable from Gaia DR3. Given the absolute magnitude of UBC1-106 and the bolometric correction <cit.>, we obtained log L/L_=0.82±0.02. While the bolometric correction is model dependent and also depends on the effective temperature, we checked that it is insensitive to T_eff in the considered temperature range (see also Fig. 8 in ). Our result for the luminosity was therefore essentially unaffected by the poor uncertainty estimate for the effective temperature from Gaia DR3 <cit.>. To estimate UBC1-106's age from Π_0 and log L/L_, we developed an MCMC grid search similar to the isochrone fitting in Sect. <ref>, but this time relying on the dedicated MESA () stellar structure and evolution grid of models for γDor stars computed by <cit.>. It was calculated for stars within the mass range 1.2 ≤ M_⋆/M_≤ 2. 
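As a quick consistency check of the rotation numbers quoted above, the rigid-rotation surface velocity and the implied inclination follow from simple arithmetic once a stellar radius is adopted. The radius of ~1.67 R_⊙ used below is an assumption (typical for a ~1.5 M_⊙ main-sequence star) and is not quoted in the text.

```python
import numpy as np

R_SUN_KM = 6.957e5
DAY_S = 86_400.0

f_rot = 0.544      # near-core rotation frequency [d^-1] from the pattern fit
radius = 1.67      # assumed stellar radius [R_sun]; not given in the text
vsini = 22.6       # spectroscopic v sin i [km/s] from APOGEE

v_surf = 2.0 * np.pi * radius * R_SUN_KM * f_rot / DAY_S      # rigid rotation
incl = np.degrees(np.arcsin(vsini / v_surf))
print(f"v_surf ~ {v_surf:.0f} km/s, inclination ~ {incl:.0f} deg")   # ~46 km/s, ~29-30 deg
```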
For each stellar mass various combinations of the initial chemical composition, convective core overshoot values for both an exponentially decaying and a step overshoot description, and diffusive envelope mixing levels are available. We refer the reader to <cit.> for details of the input physics omitted here and point out that it is different from the one in the MIST isochrones (c.f. ). Following the cluster's general properties, we limited the grid search to models with solar metallicity (Z=0.014, X_ini=0.71). For the measured Π_0=4549±24 s, the application of the grid search delivered the posterior distributions for the stellar mass and age shown in Fig. <ref>. We find a well defined lower boundary in age. The mean value (coinciding with the maximum likelihood) is log t= 8.24^+0.43_-0.14 (t = 175^+280_-50 Myr) for a 1.51±0.08 M_ star with an exponential overshoot f_ov=0.0075 (in agreement with <cit.>; the results do not change for the best model with a step overshoot). The envelope mixing is not well constrained, as it is also the case for the even more precise Kepler γDor asteroseismology <cit.>. Its value does not affect the mass and age results. Hence, we fixed it at D_mix=1 cm^2 s^-1. The asteroseismic age estimate for UBC1-106 is well in agreement with the isochronal age of the cluster (Sect. <ref>). As it is common for asteroseismology of stars with a convective core and overshooting, we find a correlation between mass and age <cit.>, explaining the larger upper than lower age uncertainty. Despite the seemingly good agreement, we keep in mind that systematic uncertainties due to the choice of input physics in stellar models occur but are still largely unknown. Detailed asteroseismic analyses of open clusters will eventually help in reducing these systematic uncertainties. An observationally motivated approach of determining the age of UBC 1 is to place the observed properties onto isochrones in the mass-Π_0 plane. Fig. <ref> shows these isochrones for different ages and constructed from the MESA grid of <cit.>. In subsequent studies, we will populate this diagram with more γ Dor stars in open clusters to empirically calibrate the asteroseismic models and probe systematic uncertainties in the determination of Π_0. Our observed star has uncertainties that cross the borders of the instability strip of <cit.>. It is well known that g-mode pulsators can be found beyond that region <cit.>. The γDor stars have two different types of mode excitation mechanisms active in them <cit.> and, moreover, the instability strips are computed for just one choice of input physics, usually ignoring internal rotation. However, Π_0 carries solely information on the stellar structure and is independent of the g-mode excitation mechanism. For completeness, we point out that we also ran a grid search with just the Gaia observables log L/L_, T_ eff, and log g instead of relying on Π_0 and log L/L_. For this to give meaningful results, we had to inflate the errors of T_ eff, and log g arbitrarily <cit.>. This led to more uncertain and quite broad posterior age and mass distributions (right panel of Fig. <ref>), which is not surprising given UBC1-106 is a main sequence star. Hence, the inclusion of the asteroseismic information contained in Π_0 strongly constrains the posterior and allows to establish much more precise stellar parameters. In fact, using only the asteroseismic observable Π_0 also leads to a well-peaked posterior distribution for the mass and age. 
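The actual fit above relies on the dedicated MESA γ Dor grid, which is not reproduced here. The following toy sketch only illustrates the mechanics of such a grid-based posterior: a synthetic (and physically crude) grid is scored with a Gaussian likelihood in Π_0 and log L/L_⊙, and weighted quantiles of mass and age are reported. The grid relations, the flat priors, and the numbers it prints are placeholders, not the paper's results.

```python
import numpy as np

# Toy stand-in for a stellar model grid; the real fit uses the dedicated MESA gamma Dor grid.
rng = np.random.default_rng(0)
mass = rng.uniform(1.2, 2.0, 50_000)                          # M_sun
age = rng.uniform(0.02, 2.0, 50_000)                          # Gyr
frac = age / (3.0 * (mass / 1.5) ** -2.5)                     # crude main-sequence fraction
grid = {
    "mass": mass,
    "age": age,
    "Pi0": 4650.0 - 1700.0 * frac + 200.0 * (mass - 1.5),     # s, shrinks as the star evolves
    "logL": 0.80 + 4.0 * np.log10(mass / 1.5) + 0.30 * frac,  # brightens with mass and age
}

# Gaussian likelihood over the grid for the two adopted observables of UBC1-106.
obs = {"Pi0": (4549.0, 24.0), "logL": (0.82, 0.02)}
chi2 = sum(((grid[k] - mu) / sig) ** 2 for k, (mu, sig) in obs.items())
weight = np.exp(-0.5 * chi2)

for name in ("mass", "age"):
    order = np.argsort(grid[name])
    cdf = np.cumsum(weight[order]) / weight.sum()
    lo, med, hi = np.interp([0.16, 0.50, 0.84], cdf, grid[name][order])
    print(f"{name}: {med:.2f} (+{hi - med:.2f}/-{med - lo:.2f})")
```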
§.§ Large frequency separation of p-modes Similar to the period spacing pattern in the g-mode regime, p-modes in δ Sct stars can show a regular frequency spacing in the absence of fast rotation. In that case, mode identification from such patterns is sometimes possible, particularly for young stars <cit.>. If mode degrees can be identified, this pattern may give rise to the so-called large frequency separation, Δν, which is the difference in frequency between modes of the same degree and consecutive radial order. This separation can be used in a similar manner as Π_0 to constrain the stellar age <cit.>. The p-modes detected in the three δ Sct pulsators of UBC 1 occur between ∼15 d^-1 and ∼40 d^-1 which is not enough to construct a pattern because potential patterns are too short hindering an unambiguous identification of the modes. Hence, we cannot give an age estimate for UBC 1 based on the p-mode pulsations. § GYROCHRONAL AGE OF UBC 1As an open cluster, UBC 1 contains not only intermediate-mass stars but hosts also a large number of low-mass members. These stars with outer convection zones shed angular momentum through their magnetized stellar wind <cit.>, which effectively spins them down on the (pre-)main sequence. The interconnection between rotation, the stellar magnetic field and the field creating dynamo leads to a feedback mechanism that reduces the rotation periods of low-mass stars to a function of mass and age <cit.>. (Metallicity is an additional parameter but basically all open cluster accessible to time-series photometry of low-mass stars have a near solar metallicity.) This property makes cool star rotation periods a powerful tool to determine stellar ages and ages of coeval populations in particular <cit.>. With our frequency analysis workflow detailed above, we can derive rotation periods for a number of cool star members of UBC 1. At the anticipated age of this open cluster, we expect most observable stars to rotate with P_rot≲10 d, which requires no changes to our photometric procedure adopted above. Hence, we applied the same pipeline to all low-mass members with G<16 mag. Even with TESS space photometry, the rotation periods can be prone to aliasing, in particular with a factor of two. Hence, we manually verified the detected periods in each light curve and made sure that each detected period is a good representation of the rotational variability in the light curve (see Appendix <ref> for the light curves of the final selection of stars). Stars for which we were not able to confirm that the periodicity found in the Fourier transform corresponds with the behaviour in the time domain were rejected. This leaves us with 17 rotational period measurements (Table <ref>) among 62 potential low-mass cluster stars within the adopted brightness limit of 16 mag. This relatively low yield is a consequence of our rather strict rejection of members whose TESS light curve are potentially contaminated by close-by sources. This criterion has a bigger influence on the aperture masks of fainter cluster stars. Since our aim is not to provide a detailed rotational analysis of the open cluster but rather a confrontation between gyrochronology and asteroseismic age-dating, this modest yield is acceptable. Cool star rotation periods are best discussed in a mass-dependent way for which typically an observational proxy of the mass is used. We show the colour-period diagram of the 17 low-mass members with a rotation period estimate in Fig. <ref> (left panel) using Gaia (G_BP-G_RP)_0. 
The rotation periods exhibit the expected shape, with fast rotation for the highest-mass stars and slower rotation with decreasing mass. While the majority of stars follow the expected sequence, one obvious outlier can be found above it. This star (UBC1-5) was identified as a potential binary, and we might have picked up the rotational signal from its lower-mass companion or a period connected with the orbit. This scenario would also explain why we observe the rotational modulation only in a few sectors[The position angle of the TESS field influences the observability of rotation periods with TESS in crowded fields including close binaries.]. The power of gyrochronology in age determination comes from relative comparisons with populations of known age. Based on the isochronal and asteroseismic ages determined above, we selected the two southern open clusters NGC 2516 <cit.> and NGC 3532 <cit.> as comparison clusters. As seen from Fig. <ref> (right panel), both clusters have low-mass stars with rotation periods bracketing those of the stars in UBC 1. Within the limits imposed by the scatter in both sequences, we find that G and K-type stars in UBC 1 rotate more slowly than those in NGC 2516, and similarly to or faster than those in NGC 3532. Hence, UBC 1 has a rotational age, determined from its low-mass rotational variables, between 150 and 300 Myr (t=230±70 Myr).§ DISCUSSION AND CONCLUSIONS In this work, we provide age constraints from g-mode asteroseismology, gyrochronology, and isochrone fitting for the recently discovered open cluster UBC 1. Its position near the edge of the TESS Northern continuous viewing zone makes it a prime target to attempt g-mode asteroseismology, which is dependent on long time series of high-precision photometric observations to resolve individual pulsation frequencies. Based on a spatial-kinematic filtering of Gaia data, we find 132 members in UBC 1. After establishing the cluster membership, we estimate an isochronal age for UBC 1. A rotating isochrone with v/v_crit=0.5 describes the Gaia data best, and we estimate the age of UBC 1 to be log t = 8.1±0.4 (t = 126^+190_-76 Myr). We also find a very small reddening towards the open cluster with E(B-V)=0.021±0.01 mag. The TESS observations of the intermediate-mass stars in UBC 1 provide a wealth of variable stars. Three stars are hybrid pulsators located in the small overlap region of the γ Dor and δ Sct instability strips. Despite the plethora of observed variability, including many pulsations, we can only identify the pulsation modes of one star thanks to our successful construction of its g-mode period spacing pattern. We use UBC1-106's identified dipole prograde modes to deduce its buoyancy travel time (Π_0) and near-core rotation frequency (f_rot). Asteroseismic modelling of this single cluster member delivers an age of log t= 8.24^+0.37_-0.12 (t = 175^+232_-43 Myr). We stress that we can constrain the asteroseismic age of UBC 1 with only the measured Π_0 from TESS and the Gaia luminosity of a single cluster member, without the need for precise spectroscopic information. We use the TESS light curves of the low-mass members of UBC 1 to estimate a rotational age for the cluster. Through a comparison of the rotation period distribution with other open clusters (NGC 2516 and NGC 3532), we find the low-mass members of UBC 1 to be 230±70 Myr old (log t = 8.35^+0.16_-0.25).
The high-precision TESS space photometry allows us to exploit not only one but two age estimators for open clusters, namely asteroseismology and gyrochronology. These two methods are particularly valuable independent age estimators because many of the young clusters and stellar associations discovered with Gaia do not host evolved stars, which limits the capacity of isochrone fitting. All three age estimates for UBC 1 are similar: the range of 150 Myr to 300 Myr is simultaneously compatible with isochrone fitting, g-mode asteroseismology and gyrochronology. Despite the good agreement of the age-dating estimates, the overall systematic uncertainty remains large due to the choice of different input physics for the isochrones and due to the limited yield of asteroseismology. For the asteroseismic age-dating, we could use only one measurement of Π_0 for a single cluster member. It would be of great interest to probe whether all pulsators in the cluster have a compatible asteroseismic age and to achieve a proper age spread from ensemble asteroseismic modelling. Our work is a successful proof-of-concept study showing that even one pulsator with only a few identified g modes allows for a drastically improved age estimate compared to the use of its surface quantities log L/L_,T_ eff, and log g measured by Gaia. Currently, the gyrochronal age estimate is the most precise for UBC 1 because it is the least model dependent. However, with more and more open clusters becoming available for g-mode asteroseismology thanks to TESS, we can start building a cluster modelling methodology that will eventually lead to an empirical age-ranked sequence based on Π_0 and f_ rot estimates (and possibly other asteroseismic observables). Indeed, with the increasing amount of space photometry from the ongoing TESS mission and the future PLATO <cit.> mission, we enter a golden era of g-mode open-cluster asteroseismology. An age-ranked sequence of (g-mode) asteroseismic observables would enable true calibrations of asteroseismic models to leverage their full potential.We are grateful to Aaron Dotter for the suggestion of using rotating isochrones and for further discussions. We thank the anonymous referee for the very useful review. The research leading to these results has received funding from the KU Leuven Research Council (grant C16/18/005: PARADISE). CA also acknowledges financial support from the Research Foundation Flanders (FWO) under grant K802922N (Sabbatical leave). This research has made use of NASA's Astrophysics Data System Bibliographic Services and of the SIMBAD database and the VizieR catalogue access tool, operated at CDS, Strasbourg, France. This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, and NEOWISE, which is a project of the Jet Propulsion Laboratory/California Institute of Technology. WISE and NEOWISE are funded by the National Aeronautics and Space Administration. This work has made use of data from the European Space Agency (ESA) mission Gaia (<https://www.cosmos.esa.int/gaia>), processed by the Gaia Data Processing and Analysis Consortium (DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. 
This paper includes data collected by the TESS mission, which are publicly available from the Mikulski Archive for Space Telescopes (MAST). Software: This research made use of , a community-developed corepackage for Astronomy <cit.> and , apackage for Kepler and TESS data analysis <cit.>.This work made use of<cit.>. This work made use of<http://www.ColorBrewer2.org>. This research made use of the followingpackages:<cit.>;<cit.>;<cit.>;<cit.>;<cit.>;<cit.>; <cit.>;<cit.>aa§ ADQL QUERY For the membership analysis, we used the following ADQL query to download the Gaia DR3 data from ESA Gaia archive at <https://gea.esac.esa.int/archive/>.§ PHOTOMETRIC BINARIES Our analysis of binarity includes photometric binaries that we identified through multi-colour photometry. The CMDs in Fig. <ref> show the cluster members in Gaia G against three different colours based on Gaia and WISE observations. In each panel, we highlight the photometric binaries in this colour. We note that the photometry was not dereddened for this analysis because the reddening towards UBC 1 is very small.§ POSTERIOR DISTRIBUTIONSThe posterior distributions of the isochrone fitting in Sect. <ref> have a strong trend in age with rotation rate. To illustrate this trend and to provide additional information to Fig. <ref>, we show these distributions for all ten considered rotation rates in Fig. <ref>. The extinction is better constrained and depends only slightly on the rotation rate. Its posterior distribution is therefore not shown. We provide the numerical values for the median and maximum likelihood of both parameters in Table <ref>.§ POWER SPECTRA OF CLUSTER MEMBERSIn this section, we display the frequency spectra of variable intermediate mass stars, that do not show a clear pulsation characteristic but are still variable at f>1 d^-1. Their characteristics are briefly discussed in the main text. We omit the frequency spectra for UBC1-68 and UBC1-97 as they are featureless for f>1 d^-1. For the same reason, we only show the frequency range 0≤ f≤10 d^-1 in the figures.§ LIGHT CURVES OF COOL STAR ROTATORS In Fig. <ref>, we show light curves of the 17 identified cool star rotators. For each star, we display only the first available TESS Sector and indicate the rotation period at the top. We note that these rotation periods are derived from the full light curves. | http://arxiv.org/abs/2310.18426v1 | {
"authors": [
"D. J. Fritzewski",
"T. Van Reeth",
"C. Aerts",
"J. Van Beeck",
"S. Gossage",
"G. Li"
],
"categories": [
"astro-ph.SR",
"astro-ph.EP",
"astro-ph.GA"
],
"primary_category": "astro-ph.SR",
"published": "20231027185226",
"title": "Age-dating the young open cluster UBC 1 with g-mode asteroseismology, gyrochronology, and isochrone fitting"
} |
A Novel Application of Polynomial Solvers in mmWave Analog Radio Beamforming Snehal Bhayani^†, Praneeth Susarla^†, S.S. Krishna Chaitanya Bulusu^, Olli Silven^†, Markku Juntti^, and Janne Heikkila^† † Center for Machine Vision and Signal Analysis (CMVS), Centre for Wireless Communications (CWC), University of Oulu, 90570, Finland. ========================================================================================================================§ INTRODUCTION Beamforming is a signal processing technique where an array of antenna elements can be steered to transmit and receive radio signals in a specific direction. The usage of millimeter wave (mmWave) frequencies and multiple input multiple output (MIMO) beamforming are considered to be the key innovations of 5^th Generation (5G) and beyond communication systems. The mmWave radio waves enable high-capacity and directive communication, but suffer from many challenges such as rapid channel variation, blockage effects, atmospheric attenuations, etc. The technique initially performs a beam alignment procedure, followed by data transfer in the aligned directions between the transmitter and the receiver <cit.>. Traditionally, beam alignment involves periodic and exhaustive beam sweeping at both the transmitter and the receiver, which is a slow process causing extra communication overhead with MIMO and massive MIMO radio units. In applications such as beam tracking, angular velocity estimation, beam steering, etc. <cit.>, the beam alignment procedure is optimized by estimating the beam directions using first-order polynomial approximations. Recent learning-based SOTA strategies <cit.> for fast mmWave beam alignment also require exploration over exhaustive beam pairs during the training procedure, causing overhead to learning strategies for higher antenna configurations. Therefore, our goal is to optimize beam alignment cost functions, e.g., the data rate, to reduce the beam sweeping overhead by applying polynomial approximations of their partial derivatives, which can then be solved as a system of polynomial equations. Specifically, we aim to reduce the beam search space by estimating approximate beam directions using polynomial solvers. Here, we assume both the transmitter (TX) and receiver (RX) to be equipped with a uniform linear array (ULA) configuration, each having only one degree of freedom (d.o.f.) with N_t and N_r antennas, respectively.§ PROBLEM FORMULATION Let R=log_2(1+α_1 α_2/α_3 |𝐰_rx^H H𝐰_tx|^2) denote the communication data rate of the mmWave received signal, where α_1, α_2, α_3 ∈ℝ are known constants, and H∈ℂ^N_r × N_t is a matrix of random complex channel values. The matrix H can be written as H = H_r + j H_i, where j^2 = -1, and H_r, H_i are real N_r × N_t matrices with known entries. The beamforming vectors 𝐰_rx and 𝐰_tx are functions of the receiver and transmitter beam angles, rx and tx. Altogether, R is considered a function of rx and tx. We formulate the beam alignment problem as estimating rx, tx by maximizing R, given as rx^*, tx^* = argmax_rx,tx∈ℝ R. Exploiting the fact that rx, tx∈[0, 2π], one approach is to subdivide the interval into fixed sub-intervals and search for the maxima of R among the sub-intervals <cit.>. However, all the iterative methods require a good starting point.
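To make the objective concrete, the sketch below evaluates R under a uniform linear array model and performs the exhaustive sub-interval sweep just described. It is only an illustration: the half-wavelength-spaced, unit-norm steering vectors, the α values, and the random channel draw are assumptions, since the text specifies only the generic form of R.

```python
import numpy as np

def steering(theta, n):
    """Unit-norm ULA steering vector, half-wavelength element spacing (assumed)."""
    k = np.arange(n)
    return np.exp(1j * np.pi * k * np.sin(theta)) / np.sqrt(n)

def rate(theta_rx, theta_tx, H, a1=1.0, a2=1.0, a3=0.1):
    """Data rate R = log2(1 + (a1*a2/a3) * |w_rx^H H w_tx|^2) for given beam angles."""
    w_rx = steering(theta_rx, H.shape[0])
    w_tx = steering(theta_tx, H.shape[1])
    gain = np.abs(w_rx.conj() @ H @ w_tx) ** 2
    return np.log2(1.0 + (a1 * a2 / a3) * gain)

rng = np.random.default_rng(3)
N_r, N_t = 2, 2
H = (rng.standard_normal((N_r, N_t)) + 1j * rng.standard_normal((N_r, N_t))) / np.sqrt(2)

# Exhaustive beam sweep over fixed sub-intervals of [0, 2*pi) -- the slow baseline.
angles = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)
R = np.array([[rate(a_rx, a_tx, H) for a_tx in angles] for a_rx in angles])
i, j = np.unravel_index(R.argmax(), R.shape)
print(f"best pair: theta_rx={angles[i]:.3f}, theta_tx={angles[j]:.3f}, R={R[i, j]:.3f} bits/s/Hz")
```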
Moreover, it is not possible to know the total number of stationary points for a given function, thus leading to local maxima or saddle points.Instead, we draw inspiration from computer vision problems, where algebraic methods have gained popularity in recent times. The problems of estimating camera geometry lead to finite systems of polynomial equations which have been successfully solved using the concepts based on the and the <cit.>. In this work, we have adopted the -based approach <cit.>. §.§ Algebraic approach for optimizationThe optimization problem in Eq. (<ref>) can be solved by estimating those points, say rx^*,tx^* ∈ℝ, where the first order partial derivatives of R w.r.t. rx and tx, i.e., R_rx = ∂ R/∂rx and R_tx = ∂ R/∂tx, vanish. However, R_rx (=f_1) and R_tx (=f_2) are not polynomials, and in order to facilitate an algebraic approach, we approximate f_1 and f_2 as bivariate polynomials. Suppose, x = [ rx tx ]^⊤. Then, the Taylor series expansions <cit.> of f_1 and f_2 can be expressed as f_1 = (x^α,c) ∈𝒯_1∑c x^α, f_2 = (x^α,c) ∈𝒯_2∑ c x^α, where 𝒯_1 and 𝒯_2 denote the sets of coefficient and monomial pairs, occurring in the Taylor series expansion of f_1 and f_2, respectively.Suppose ℬ_1 and ℬ_2 respectively denote the sets of monomials (x^α) in 𝒯_1 and 𝒯_2, and 𝒞_1 and 𝒞_2 respectively denote the sets of coefficients (c) in 𝒯_1 and 𝒯_2. In order to approximate f_1 and f_2 in Eq. (<ref>) as polynomials, we truncate the monomial sets ℬ_1 and ℬ_2 as finite subsets, ℬ_1⊂ℬ_1 and ℬ_2⊂ℬ_2. The corresponding truncated set of coefficients, are C_1 = { c ∈ C_1 | (x^α, c) ∈𝒯_1, x^α∈ℬ_1} and C_2 = { c ∈ C_2 | (x^α, c) ∈𝒯_2, x^α∈ℬ_2}. Thus, we have approximated f_1 and f_2 respectively with the polynomials p_1 = c ∈C_1,α∈ℬ_1∑c x^α and p_2 =c ∈C_2, α∈ℬ_2∑c x^α. The common roots of p_1 and p_2 represent the approximate solutions to f_1=f_2=0. Let the exponent sets of the monomials in ℬ_1 and ℬ_2 be denoted as B_1∈ℤ^2_≥ 0 and B_2∈ℤ^2_≥ 0, respectively. Therefore, the functions, f_1 and f_2, and the truncated polynomials, p_1 and p_2, can be expressed asf_1 = (x^α, c) ∈𝒯_1α∈B_1∑c x^α, f_2 = (x^α, c) ∈𝒯_2α∈B_2∑c x^α,p_1 = (x^α, c) ∈𝒯_1α∈B_1∑c x^α,p_2 = (x^α, c) ∈𝒯_2α∈B_2∑c x^α.Here, one of the common roots of p_1 and p_2 should be as close to a global maxima of R as possible. Number of common roots of p_1 and p_2:TheBernstein–Kushnirenko theorem <cit.> provides the upper bound on the number of common roots of p_1 and p_2,denoted as η, in the complex field ℂ^2. Let, P_i denote the convex hull of B_i, and Vol_2 (P_i) denote the euclidean volume (area) of P_i, for i=1,2.Then, η can be considered as a function of the exponent sets, B_1 and B_2. Observe that η is the upper bound on the size of the matrix undergoing eigenvalue decomposition in the -based polynomial solvers, which in turn affects the speed of the application. We also need to exhaustively iterate through all of the computed roots over the communication channel values (see Sec. <ref>),and pick the root that corresponds to the largest R. Thus, in the interest of application speed, we require η be as small as possible. Approximation error: Lowering η comes at the cost of accuracy of the solution to the optimization problem in Eq. (<ref>), obtained via the roots of polynomial approximations. Let rx^*, tx^* denote the true maxima of R in Eq. (<ref>). Hence, f_1 and f_2 vanish at rx^*, tx^*. Also, let rx^† , tx^† be one of the estimated roots of the polynomials p_1 and p_2. 
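The construction just described — replace the exact partial derivatives by truncated Taylor polynomials and solve the resulting bivariate system — can be prototyped with a computer-algebra system. The sketch below does this for a toy objective; the objective itself, the expansion point, the total-degree truncation, and the use of sympy's generic solver (rather than the eigenvalue-based solver adopted in this work) are all assumptions made for illustration.

```python
import sympy as sp
from math import factorial

x, y = sp.symbols("theta_rx theta_tx", real=True)

# Toy stand-in for the rate objective R(theta_rx, theta_tx); not the paper's R.
R = sp.log(1 + 3 * (sp.cos(x - 0.4) * sp.cos(y + 0.2) + sp.Rational(3, 10) * sp.sin(x + y)) ** 2, 2)

def taylor_poly(f, x0, y0, deg):
    """Truncated multivariate Taylor polynomial of total degree <= deg around (x0, y0),
    with rationalised coefficients so the truncated system can be solved exactly."""
    poly = sp.Integer(0)
    for i in range(deg + 1):
        for j in range(deg + 1 - i):
            d = f
            if i:
                d = sp.diff(d, x, i)
            if j:
                d = sp.diff(d, y, j)
            c = sp.nsimplify(d.subs({x: x0, y: y0}).evalf(), rational=True)
            poly += c / (factorial(i) * factorial(j)) * (x - x0) ** i * (y - y0) ** j
    return sp.expand(poly)

# p1, p2 approximate f1 = dR/d(theta_rx) and f2 = dR/d(theta_tx) near the expansion point.
x0, y0, deg = sp.Rational(1, 2), sp.Rational(-1, 10), 2
p1 = taylor_poly(sp.diff(R, x), x0, y0, deg)
p2 = taylor_poly(sp.diff(R, y), x0, y0, deg)

# Common roots of the truncated system approximate the stationary points of R.
candidates = []
for root in sp.solve([p1, p2], [x, y]):
    vals = [complex(v.evalf()) for v in root]
    if all(abs(v.imag) < 1e-9 for v in vals):
        xr, yr = (v.real for v in vals)
        candidates.append((float(R.subs({x: xr, y: yr}).evalf()), xr, yr))

for r_val, xr, yr in sorted(candidates, reverse=True):
    print(f"R = {r_val:.4f} at theta_rx = {xr:.4f}, theta_tx = {yr:.4f}")
```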
Then, our objective here is to find p_1 and p_2, s.t., δ = ||rx^†- rx^* ||_2^2 + ||tx^†- tx^* ||_2^2 is as close to zero as possible[Note, that || * ||_2 denotes the 2-norm in ℝ^2.].One of the ways to investigate the relationship of δ with p_1 and p_2, is by observing the terms we need to drop to obtain p_1 and p_2 respectively from f_1 and f_2 in Eq. (<ref>). Let f|_x^*,y^* denote the evaluation of the bivariate function f by assigning the values x^* and y^* to its two variables x and y, and let δ_rx=rx^†- rx^* and δ_tx=tx^†- tx^*. Then, (f_1-p_1)|_rx^† , tx^† and (f_2-p_2)|_rx^† , tx^† both can be expressed as infinite sums of terms, each term consisting of monomials in δ_rx and δ_txas variables. Observe that, δ = 0 implies that (f_1-p_1)|_rx^† , tx^† and (f_2-p_2)|_rx^† , tx^† both should vanish, for all possible roots rx^† , tx^† of p_1 and p_2. One way to achieve this is by ensuring that the coefficients of the terms in (f_1-p_1)|_rx^† , tx^† and (f_2-p_2)|_rx^† , tx^† to be as small as possible. In other words, we need to minimize the magnitude of the dropped terms from f_1 and f_2 in Eq. (<ref>). The dropped terms are infinitely many. Instead, we aim to maximize the magnitude of the selected terms in p_1 and p_2. Thus, we can loosely redefine the approximation error δ to be the inverse of the sum of the magnitudes of all the coefficients of the terms off_1 and f_2, selected to obtain the approximations p_1 and p_2. Specifically, δ =1c ∈C_1∑| c | + c ∈C_2∑| c |. Our ongoing work seeks to jointly minimize η and δ w.r.t. themonomial susbets, B_1 and B_2. We define this minimization problem as 𝒪 := B_1, B_2argmin η + δ. We then formulate the polynomial approximations p_1 and p_2 from the solution of the optimization problem 𝒪.We note that analytical expressions for η and δ as functions of B_1 and B_2 are yet to be determined and are part of our future work. Hence, in this work, we adopted a simple strategy to select B_1 and B_2, which minimize δ while keeping η reasonably low. The strategy is normalize the coefficients in C_1 w.r.t. the largest observed magnitude, and choose ℬ_1 corresponding to those coefficients whose normalized values are larger than a certain threshold, ϵ_1. We perform the same steps for choosing ℬ_2 using C_2, via some threshold, ϵ_2.§ PROPOSED APPROACH AND CONCLUSIONIn this work, we used a setup of 2 × 2 antenna array grid, i.e., N_t=N_r=2, and assigned random values to the matrix H,to demonstrate our approach. We studied some threshold pairs, [ϵ_1; ϵ_2] (see the Figure <ref> (Right)), and for each pair, we computed the monomial sets B_1 and B_2, and solved the corresponding polynomial approximations p_1 and p_2 using the -based solver <cit.>. For each threshold pair, we also measured the difference in the estimated data rate and the known data rate based on the beam search, and also measured (η + δ) in the optimization problem 𝒪. Both these quantities are depicted in Figure <ref> w.r.t. the threshold pair. We observe, that the best data-rate estimation (and hence the minimal value of δ) happens when the threshold pair is[0.7; 0.7]. However, the value of the objective function is not minimized ([0.7; 0.75] leads to lower η but higher δ), indicating that it came at the expense of η. Thus, our future work will focus on jointly minimizing, (η + δ).IEEEtran | http://arxiv.org/abs/2310.18103v1 | {
"authors": [
"Snehal Bhayani",
"Praneeth Susarla",
"S. S. Krishna Chaitanya Bulusu",
"Olli Silven",
"Markku Juntti",
"Janne Heikkila"
],
"categories": [
"cs.SC"
],
"primary_category": "cs.SC",
"published": "20231027124141",
"title": "A Novel Application of Polynomial Solvers in mmWave Analog Radio Beamforming"
} |
Birmingham City University, Birmingham, United Kingdom University of the West of England (UWE), Bristol, United Kingdom Institute of Accounting and Administration, University of Aveiro, Portugal NUS High School of Math and Science, Singapore Qatar University, Doha, Qatar Information Technology University, Punjab, Pakistan Bristol Heart Institute, University of Bristol, Bristol, United Kingdom^*Corresponding author: [email protected]; [email protected] Bilal et al.Constraining the growth rate on linear scales by combining SKAO and DESI surveys Muhammad Bilal1^,2^,* Dinis Martinho3 Reiner Sim4 Adnan Qayyum5^,6 Hunaid Vohra7 Massimo Caputo7 Taofeek Akinosho2 Sofiat Abioye2 Zaheer Khan2 Waleed Niaz2 Junaid Qadir5 January 14, 2024 =============================================================================================================================================================================== Coronary angiography analysis is a common clinical task performed by cardiologists to diagnose coronary artery disease (CAD) through an assessment of atherosclerotic plaque's accumulation. This study introduces an end-to-end machine learning solution developed as part of our solution for the MICCAI 2023 Automatic Region-based Coronary Artery Disease diagnostics using x-ray angiography imagEs (ARCADE) challenge, which aims to benchmark solutions for multivessel coronary artery segmentation and potential stenotic lesion localisation from X-ray coronary angiograms. We adopted a robust baseline model training strategy to progressively improve performance, comprising five successive stages of binary class pretraining, multivessel segmentation, fine-tuning using class frequency weighted dataloaders, fine-tuning using F1-based curriculum learning strategy (F1-CLS), and finally multi-target angiogram view classifier-based collective adaptation.Unlike many other medical imaging procedures, this task exhibits a notable degree of interobserver variability.Our ensemble model combines the outputs from six baseline models using the weighted ensembling approach, which our analysis shows is found to double the predictive accuracy of the proposed solution. The final prediction was further refined, targeting the correction of misclassified blobs. Our solution achieved a mean F1 score of 37.69% for coronary artery segmentation, and 39.41% for stenosis localisation, positioning our team in the 5th position on both leaderboards. This work demonstrates the potential of automated tools to aid CAD diagnosis, guide interventions, and improve the accuracy of stent injections in clinical settings.Keywords: Coronary Artery Segmentation, Stenosis Localisation, Deep Ensemble Learning, Weighted Data Loaders, and Curriculum Learning. § INTRODUCTIONCoronary artery disease (CAD) is a major global health concern and a leading cause of death worldwide. The gold standard procedure for CAD diagnosis is invasive coronary angiography, which uses contrast materials and X-ray technology to visualise arterial lesions and assess blood flow to heart muscle in real-time. Such information plays a pivotal role in guiding intraventricular interventions, stent placements, and the planning of revascularisation procedures, relying on precise calculations of occlusions and affected artery segments (AS). However, X-ray coronary angiogram (XCA) like other imaging modalities faces a challenge in the form of interobserver variability, where angiographers often disagree on the severity of artery blockages<cit.>. 
To address this issue, there's a pressing need for digital tools to improve the reliability of XCA analysis. These tools hold the potential to significantly bolster the effectiveness of XCA for CAD detection and treatment strategies, addressing a critical gap in cardiac care.From an algorithmic standpoint, addressing the challenge of robust multivessel coronary artery segmentation and stenosis localisation in XCA images presents several complexities. These complexities arise due to factors such as patient-specific anatomical variations, the often unfavorable noise-to-signal ratio inherent to XCA images, and the presence of confounding background structures within the imaging scene. Various researchers have endeavored to tackle this issue. For instance, Gao et al. <cit.> proposed an ensemble framework that combines gradient boosting decision trees and a deep forest classifier for binary segmentation of coronary arteries. In a similar vein, Danilov et al. <cit.> conducted a comparative study of modern neural network architectures, assessing their suitability for real-time stenosis localisation during procedures. However, a significant portion of existing research is hindered by limitations, including small datasets, impractical binary segmentation methods for clinical settings, and insufficient support for clinicians in their routine XCA analysis tasks.The MICCAI 2023 Automatic Region-based Coronary Artery Disease diagnostics using x-ray angiography imagEs (ARCADE) challenge enabled our research endeavour by providing the essential framework for benchmarking our solutions in coronary artery segmentation and stenosis localisation. This competition has attracted biomedical AI researchers from around the world, all striving to develop cutting-edge solutions for this problem. To facilitate the participants' efforts, the challenge organisers have generously provided a comprehensive dataset consisting of 1,500 meticulously annotated images of coronary AS and an additional 1,500 images segmenting the locations of stenotic plaques. The labels for both multivessel instance segmentation and stenosis localisation tasks adhere to the SYNTAX Score[<https://syntaxscore.org/index.php/tutorial/definitions/14-appendix-i-segment-definitions>] definitions, wherein each region of the vessel tree is assigned a name and an ordinal number based on its specific location. This dataset is divided into three subsets: a training set containing 1,000 images, a phase 1 validation set comprising 200 images, and a final phase validation set with 300 images. The evaluation of a participant's model is carried out using the F1 score, calculated for each individual image. The overall F1 score for a team is then determined as the average F1 score across all images. When multiple teams achieve the same F1 score, inference time serves as the tiebreaker.Our proposed solution comprises a multifaceted approach aimed at enhancing the accuracy of coronary angiogram analysis. We employ image transformations specifically tailored to mitigate noise in XCA images. Our solution leverages six baseline models selected from state-of-the-art (SOTA) deep learning (DL) architectures. The training unfolds in five stages: binary class segmentation pretraining, multilabel vessel segmentation training, two subsequent fine-tuning stages using class frequency dataloaders (to learn minority classes), and F1-based curriculum learning (to address difficult classes). 
Finally, collaborative classifier-based fine-tuning accurately predicts AS based on the given angiogram view. Weighted ensembling significantly improved the output quality, followed by morphology operations for small segment removal and gap filling, further refining the results. Our solution exhibits novelty in several areas. Firstly, we utilise inter-class difficulty-aware stratified sampling, enabling our baseline models to maximise the utility of the limited dataset. Secondly, our systematic baseline model training process incorporates various modelling strategies in a meaningful order, including a unique collaborative multi-target training stage. This facilitates the learning of view-specific AS by considering angiogram acquisition angles. This training process worked on all baseline models and progressively improved their accuracy, regardless of their underlying models' architectures.§ RELATED WORK Deep Ensemble Learning (DEL) has emerged as a potent technique in medical image analysis, notably excelling in multivessel coronary artery segmentation, disease localisation, and its broader applications in radiology and pathology. DEL's strength lies in aggregating multiple weak models to improve segmentation precision and accuracy. This section provides a concise review of pertinent literature, serving to contextualise our research within the existing landscape of image analysis. By summarising key findings and contributions from related papers, we position our work as a progressive addition to the ongoing dialogue in this crucial domain.Nobre et al. <cit.> proposed an AI solution to segment XCA images from four medical centres encompassing patients who underwent CAG, PCI, or invasive assessments. Their approach showcased a notable increase in multiple segmentation metrics, including overlap accuracy, sensitivity, and Dice Score. Furthermore, the Global Segmentation Score (GSS) exhibited alignment with previous results, thus serving as a validation of AI segmentation effectiveness in the realm of XCA analytics. The authors highlight critical clinical applications concerning coronary artery revascularisation. Tao et al. <cit.> proposed a Bottleneck Residual U-Net (BRU-Net), a lightweight model for XCA segmentation. Differing from the traditional U-Net, BRU-Net incorporates bottleneck residual blocks to enhance computational efficiency. The CLAHE pre-processing technique is also used to improve performance. Nevertheless, BRU-Net occasionally mislabels background as vessels and vice versa, impacting accuracy. Data annotation challenges result in unmarked thin vessels being segmented, causing discrepancies. The authors emphasise the need for larger, improved coronary angiography datasets to bolster coronary artery disease diagnosis.Additionally, Gao et al. <cit.> proposed an innovative method for delineating coronary blood vessels by leveraging a combination of DL and filter-based features, all integrated into an ensemble framework that utilises Gradient Boosting Decision Tree (GBDT) and Deep Forest classifiers. Compared to traditional deep neural networks, this ensemble approach demonstrates superior performance across various metrics, including precision, sensitivity, specificity, F1 score, AUROC (Area Under the Receiver Operating Characteristic curve), and IoU (Intersection over Union) scores. Tmenova et al. <cit.> proposed enhancing the realism of vascular images generated from a cardiorespiratory simulator. They employed CycleGAN to mimic real angiograms. 
The evaluation focused on the consistency and vessel preservation of the enhanced images, resulting in an average structural similarity (SSIM) score of 0.948. These findings suggest that CycleGAN serves as a potent tool for synthetic XCA data generation. This approach is particularly relevant due to the need to replicate the complex physiology and patterns observed in X-ray angiography, as well as the scarcity of data mentioned before.§ PROPOSED METHOD§.§ Data DescriptionThe ARCADE challenge comprises two distinct datasets, each consisting of 1,500 XCA images, purposefully curated for the intricate task of coronary artery segmentation and stenosis localisation. The training set included 1,000 images whereas two sets of validation XCA images were held out by the competition organisers, including the phase 1 validation set involving 200 images, and the final phase set comprised an additional 300 images. To gain insights into the dataset and to make informed modelling decisions, we conducted exploratory mask analysis. Table <ref> provides key summary statistics for Arcade datasets. We identified a total of 6,180 coronary artery segmentation masks across 25 segmentation classes, adhering to the SYNTAX Score definition. However, it's important to note that the background pixel (class 0) accounts for a staggering 97.08% of the entire dataset, leaving less than 3% of pixels available to inform the learning of visual features for multivessel segmentation. Among the non-background classes, 13 classes (6, 2, 1, 3, 7, 5, 4, 8, 13, 11, 9, 16, 12a) contained over 100 segmentation masks, while 12 classes (14, 9a, 12b, 14b, 12, 14a, 16b, 15, 16a, 10, 16c, 10a) had fewer than 100 masks in the training data. Notably, some segments, such as class 10a, were represented by only a single image in the dataset. This highlights a significant class imbalance, primarily within the background class (at pixel level) but also among other AS, with varying ratios (at sample level). This class imbalance introduces several challenges, including a propensity for biased learning and difficulty in learning minority classes. Furthermore, the dataset exhibits intraclass segment variability, with segment sizes within the same class varying substantially, ranging from single pixels to several thousand pixels forming a segmentation mask. In Fig. <ref> we highlight this diversity by calculating the mean of each artery segment in the dataset to get an average of their representation across the dataset. This variation in segment sizes poses inherent challenges, as classes with significant segment size disparities may be difficult to learn and also require intelligent mechanisms to ensure similar difficulty is reflected by the training and validation sets of 1,000 images. An important limitation of this dataset is the issue of overlapping segments, particularly at the borders where two segments interleave. While the original annotations were provided in JSON files to allow multiple classes to represent a single pixel location in a mask, this information was lost when we serialised and saved masks as PNG files. Perhaps the most formidable challenge presented by the dataset lies in the fact that not all coronary arteries visible in the XCA view are segmented. Instead, only a small set of arteries that were the focus at the given time in XCA procedure are segmented. 
This characteristic renders the modelling task exceptionally challenging, as it necessitates the ability to discern and segment relevant arteries amidst a complex and cluttered background. §.§ Proposed SolutionOur proposed methodology for joint segmentation of artery segmentation and stenosis localisation consists of four major steps that include: (1) Input Preprocessing; (2) Baseline Model Development; (3) Ensemble Model Development; and (4) Post processing. The pipeline of our proposed solution is shown in Fig. <ref> and described next. Input Preprocessing: Data preprocessing plays a crucial role in the ML pipeline, addressing issues within the input XCA data. Our preprocessing steps included converting JSON-formatted COCO channel-encoded masks into 2D class-encoded masks, although some information was lost due to overlaps of artery segments around their boundaries. For binary class pretraining of the multivessel segmentation stage, as well as stenosis localisation model training, we transformed mask contents into binary images, designating 0 for the background and 255 for the foreground by replacing pixels greater than 0 with 255. We further improved input image quality by applying various techniques, such as contrast-limited histogram equalisation (CLAHE), Gabor filtering, random gamma adjustments, and tone curve enhancements. To match the difficulty of the validation set, we also introduced multiplicative noise and a slight blur. Baseline Model(s) Development: In this section, we describe the development of our baseline models. We considered training SOTA segmentation models, including DeepLabV3+ <cit.>, FPN <cit.>, PAN <cit.>, and UNet++ <cit.>. These models are integrated with different encoders, encompassing DeepLabV3+ with InceptionV4, FPN with MiT B5, PAN with EfficientNet-B5, FPN with InceptionV4, PAN with RegNetx080, and UNet++ with an EfficientNetB5 backbone. Our approach to training baseline models adheres to a systematic strategy, which we will elaborate on further in this section. Additionally, we will elucidate the key design choices that played a pivotal role in our model development process, including a novel data-splitting strategy designed to create a challenging validation set and the implementation of a combo loss function aimed at facilitating the learning of robust imaging features from a dataset of such complexity.Intraclass Difficulty-aware Stratified Sampling: We employed a novel sampling strategy to balance class distribution in both the training and validation sets. Let C denote the total number of classes, N represent the total number of samples, S_i denote the size of segment i (where i ranges from 1 to N), and n_i represent the number of samples belonging to class i. We partitioned the dataset into training and validation sets, ensuring class balance while considering segment size. To allocate V samples to the validation set, we considered the desired number of segments to assign to the validation set, defined as S_i < S_t, where S_t represents a specified threshold value. We aimed to distribute smaller segments to the validation set while maintaining class balance, using class proportions P_i, calculated as P_i = N/n_i. We followed these steps: (1) Calculated the number of segments (V_i) with S_i < S_t for each class; (2) Assigned V_i segments from each class i to the validation set; and (3) Utilised class proportions P_i to determine how many segments V_i from each class should be assigned to the validation set, ensuring class equilibrium. 
We compute V_i as follows: V_i = P_i × V/∑_j=1^C P_j Here, ∑_j=1^C P_j represents the sum of class proportions for all classes. This approach balances class distribution and effectively allocates smaller segments to the validation set, guiding the model's learning process to accommodate varying difficulty levels, ultimately enhancing experimental effectiveness.Combo Loss Function: We combined Focal Loss and Tversky losses to address distinct challenges in the ARCADE dataset. Focal Loss is chosen for its effectiveness in handling imbalanced datasets, specifically to enhance the baseline models' performance on minority classes. Likewise, Tversky Loss was introduced to tackle difficult classes with significant segment size variability. We found a remarkable synergy between these loss components by weighting both losses with a balance parameter, represented as α (set to 0.5), achieving a harmonious fusion of their respective strengths. We explored several gamma (γ) values (ranging from 2 to 4) in both loss components, adapting the model's focus to harder examples as training progressed. Let ℒ_F denote the Focal Loss, ℒ_T represents the Tversky Loss, and α be the balance parameter. The combo loss of function ℒ_C can be mathematically expressed as ℒ_C = α×ℒ_F + (1 - α) ×ℒ_T. Where, α regulates the weightage assigned to each loss, allowing for a dynamic adjustment of models to focus on class imbalance and learning challenging segments. This flexible combination not only harmonises the strengths of these loss functions but also enables adaptability during training by considering various values of α and γ.Systematic Baseline Model Training Approach: Our solution relies on baseline models for coronary artery segmentation and stenosis localization. We selected six top-performing architectures and encoder combinations from a list of pretrained models, as described above. The subsequent subsections provide a detailed explanation of each of these stages.Stage 1: Binary Class Pretraining: We initiated the training of our baseline models through transfer learning. Notably, our initial experiments revealed the inadequacy of using pretrained IMAGENET models. This limitation stemmed from the fundamental differences between the objects in the IMAGENET dataset and the anatomical structures commonly found in coronary artery angiograms. Subsequently, we trained a binary class segmentation model to learn basic spatial patterns of coronary arteries, considering them as the foreground class and non-arterial pixels as the background class. This domain-specific binary class pretraining offered several advantages. By simplifying the initial segmentation task, the baseline models converged quickly during training. Additionally, this pretraining endowed the subsequent multivessel segmentation models with fundamental spatial awareness of the coronary artery tree. As a result, the baseline models significantly improved their performance on segmentation tasks, even after just the first epoch of training. The mean F1 score of the binary class segmentation models after 80 epochs of training was 85.34%.Stage 2: Multivessel Artery Segmentation Training: This stage marks the beginning of training our baseline multi-vessel segmentation models. We initiated it using binary class pretrained models as a starting point. Notably, the baseline models quickly began to discern coronary artery segments, resulting in a performance boost of 10 points compared to pretraining with IMAGENET models. 
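For concreteness, the combo loss ℒ_C defined above could be written along the following lines in PyTorch. This is a sketch combining the standard multi-class focal and Tversky formulations with the balance weight α; the Tversky false-positive/false-negative weights and the reductions are assumptions, since the text only specifies α and the range of γ.

```python
import torch
import torch.nn.functional as F

def combo_loss(logits, target, alpha=0.5, gamma=2.0, tv_alpha=0.3, tv_beta=0.7, eps=1e-6):
    """L_C = alpha * Focal + (1 - alpha) * Tversky for multi-class segmentation.
    logits: (B, C, H, W) raw scores; target: (B, H, W) integer class labels (long tensor).
    gamma follows the range quoted in the text; tv_alpha/tv_beta are assumed values."""
    num_classes = logits.shape[1]
    prob = logits.softmax(dim=1)
    onehot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()

    # Focal term: cross-entropy damped on easy pixels, emphasising rare / hard classes.
    ce = F.cross_entropy(logits, target, reduction="none")            # (B, H, W)
    pt = (prob * onehot).sum(dim=1).clamp_min(eps)                    # prob of the true class
    focal = ((1.0 - pt) ** gamma * ce).mean()

    # Tversky term: per-class overlap with asymmetric FP / FN penalties, averaged over classes.
    dims = (0, 2, 3)
    tp = (prob * onehot).sum(dims)
    fp = (prob * (1.0 - onehot)).sum(dims)
    fn = ((1.0 - prob) * onehot).sum(dims)
    tversky = 1.0 - ((tp + eps) / (tp + tv_alpha * fp + tv_beta * fn + eps)).mean()

    return alpha * focal + (1.0 - alpha) * tversky
```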
On the training set, we achieved an F1 score exceeding 50%, a notable improvement from the previous performance plateau of around 40%. The training of the multivessel segmentation models spanned 80 epochs, with an initial learning rate of 1e-3 for pretraining, followed by 50 epochs of fine-tuning. During the fine-tuning phase, we unfroze the entire model architecture and applied much smaller learning rates, ranging from 1e-3/400 to 1e-3/4. The mean F1 score of the baseline models consistently hovered around 15% across both evaluation phases.Stage 3: Class Frequency-based Weighting for Learning Minority Classes: Our initial evaluation revealed that baseline models struggled to predict minority classes, as evidenced by their lower F1 scores. To address this challenge, we introduced a novel scoring mechanism designed to assign higher probabilities to rare class XCA images using weighted dataloaders, thereby enhancing their representation in mini-batches creation during training. We calculate the class frequency, denoted as F_c, for each class, which is defined as the proportion of segments belonging to a specific class within the entire dataset: F_c = |j|/|𝒟|, where |j| is the number of segments in class j, and |𝒟| is the total number of class segments in the dataset. To convert these frequencies into scores for each class, we take the square root of the class frequency, defined as S_c = √(F_c).To compute a single score for an XCA image (x), since most samples had several class segments, we computed the reciprocals of S_c values for all unique class segments present in the XCA mask, denoted as U_c(x), and selected the lowest class score as the weight for the XCA image to guide mini-batch sampling, denoted as S_x: S_x = min_x ∈ U_c(x)(1/S_c) This scoring approach reduces the probability of selecting frequent classes, allowing dataloaders to encounter minority classes more frequently and enabling the models to learn to distinguish rare segments. While this resulted in a slight reduction in the overall performance, the baseline models exhibit improved F1 scores for minority classes.Stage 4: F1-Loss based Curriculum Learning for Learning Difficult Images: In this stage, we introduce a novel approach called "F1-loss-based Curriculum Learning Strategy (F1-CLS)" to further enhance the learning capabilities of our models to address the difficulty induced by images with significant segment size variability. By utilising the F1 score as a loss function, our aim is to employ self-directed curriculum learning to guide the model's learning process toward these challenging samples. We define the F1 score (F_i) for each XCA image (F_i(x)) as F_i = 2 × P_i × R_i/P_i + R_i, where P_i represents the precision of the model's predictions for F_i(x), and R_i represents the recall of the model's predictions for F_i(x). The precision and recall values are computed based on the true positive (TP), false positive (FP), and false negative (FN) predictions for each class in F_i(x).We implemented curriculum learning to gradually introduce difficult images into the training process. The training loop starts with a subset of the dataset that contains images with less segment size variability. As the model's performance improves, we progressively increase the frequency of complex images in mini-batches, specifically those on which the model performs poorly at the beginning to initiate learning from them. 
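Both the Stage-3 image scores S_x and the Stage-4 curriculum pacing ultimately act on training through the sampling probabilities of the dataloader. A sketch of how such per-image weights could drive mini-batch composition with PyTorch's WeightedRandomSampler is shown below; the S_x computation follows the definition above, while the curriculum weighting function and the way the two weights are blended are illustrative assumptions.

```python
import numpy as np
import torch
from torch.utils.data import WeightedRandomSampler, DataLoader

def stage3_weights(image_classes, class_segment_counts):
    """image_classes: list (one entry per image) of the unique SYNTAX classes in its mask.
    class_segment_counts: dict class -> number of segments of that class in the training set."""
    total = sum(class_segment_counts.values())
    s_c = {c: np.sqrt(n / total) for c, n in class_segment_counts.items()}   # S_c = sqrt(F_c)
    return np.array([min(1.0 / s_c[c] for c in classes) for classes in image_classes])  # S_x

def curriculum_weights(per_image_f1, epoch, warmup=20):
    """Stage-4 style pacing (illustrative): start near-uniform, then progressively
    up-weight images with low per-image F1 as training advances."""
    difficulty = 1.0 - np.asarray(per_image_f1)
    mix = min(1.0, epoch / warmup)
    return (1.0 - mix) + mix * (0.1 + difficulty)

# Hypothetical usage (dataset, scores and epoch counter come from the surrounding training loop):
# weights = stage3_weights(image_classes, segment_counts)            # Stage 3
# weights = weights * curriculum_weights(per_image_f1, epoch)        # Stage 4 (assumed blending)
# sampler = WeightedRandomSampler(torch.as_tensor(weights, dtype=torch.double),
#                                 num_samples=len(weights), replacement=True)
# loader = DataLoader(train_dataset, batch_size=8, sampler=sampler)
```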
This process allows the model to learn from easier samples initially and gradually tackle the more challenging ones, resulting in a more robust and effective learning process. F1-CLS proved instrumental in improving the model's performance, particularly on difficult classes with varying segment sizes. Through this approach, we enable our models to better capture the intricacies of coronary artery segmentation, even in the presence of challenging samples, leading the models to achieve a mean F1 score of around 18% on both validation phases. Stage 5: Multi-Target Classifier Finetuning for Plane-Based Segmentation: In the preceding training stages, our baseline models improved their segmentation accuracy on minority class samples and difficult images. However, a new challenge emerged: these models sometimes predicted arteries where they should not be present in certain angiogram planes. This stage introduces an approach to incorporate angiogram plane knowledge using multi-target training to enhance the performance of our baseline models. Our research identified 11 acquisition planes for XCA images, each associated with specific classes of the coronary artery tree. Collaborating closely with our clinical partners, we annotated the ARCADE training images with their respective acquisition planes. Subsequently, we developed a classifier capable of predicting the acquisition plane for a given image with 75% accuracy. We then used this classifier to further refine our baseline segmentation models through a multi-target model. This model leverages both the view classifier and the baseline segmentation model, training them further using a multi-target loss function that combines cross-entropy and our proposed combo loss ℒ_C discussed earlier. This collaborative training significantly boosted the performance of both models. The view classifier achieved an accuracy of 84%, while the performance of the baseline models improved by an average of three points. Our evaluation revealed that these models now predict the correct combinations of artery segments for the specific angiogram plane, effectively eliminating irrelevant predictions. This final stage of training marks a crucial milestone toward accurate and clinically relevant coronary artery segmentation. Ensemble Model Development: To maximise the predictive performance of our solution, we turned to ensemble learning. Ensemble learning involves integrating multiple weak models to create a single, potentially much more robust model. We harnessed the capabilities of six diverse baseline models, each trained with distinct deep learning architectures and encoders, with the aim of leveraging the inherent variance in their outputs. While our baseline models produced only modest predictions, the ensemble approach exploited the complementary strengths of the individual models, culminating in a considerably stronger combined model. The predicted outputs f̂_1(x), …, f̂_M(x), one from each of the M baseline models, correspond to their predicted masks for the XCA image x. To assign appropriate importance to each model's prediction, we introduced a weight vector based on the models' performance on the training dataset to quantify their influence within the ensemble. Mathematically, our ensemble output is computed as f̂_ens(x) = (1/M) ∑_i=1^M f̂_i(x), i.e., the ensembled output is obtained by averaging the individual mask predictions of the M baseline models.
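A minimal sketch of this fusion of mask predictions is shown below; the uniform default corresponds to the plain average above, and the per-model weights — like all names here — are illustrative.

```python
import torch

def ensemble_predict(models, image, weights=None):
    # models: list of M baseline segmentation models; image: (1, 1, H, W) input tensor
    probs = [torch.softmax(m(image), dim=1) for m in models]   # per-model class probability maps
    if weights is None:
        weights = [1.0 / len(models)] * len(models)            # uniform weights -> f_ens = (1/M) * sum_i f_i
    fused = sum(w * p for w, p in zip(weights, probs))
    return fused.argmax(dim=1)                                  # final multivessel mask
```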
Ensemble learning emerged as a highly effective strategy for enhancing predictive performance. It empowered our solution to exploit diverse DL architectures and encoders, resulting in an improvement of over 50% compared to individual models. To determine the optimal combination of models within the ensemble, we devised an algorithm for the systematic exploration of various model combinations. By objectively selecting the best-performing combinations, we further refined our ensemble, ultimately achieving superior results. Post Processing for Error Correction: Finally, we focused on rectifying errors within the predicted segmentation masks to enhance the overall output quality. As depicted in Fig. <ref>, our ensemble predictions exhibited various errors, appearing in diverse forms such as noisy pixels along borders or at a distance from the primary artery segments (AS), segments with unrealistic sizes (either excessively large or small), gaps within AS, sporadic voids within segments, and instances where one artery segment encompassed another. To address these challenges, we devised a comprehensive post-processing pipeline. This pipeline included a two-step open-close morphology operation, followed by the identification and removal of unrealistic AS (a simplified sketch of these operations is given below). Often underestimated, post-processing played a pivotal role in enhancing the quality of our segmentation masks. As a result, our solution achieved a noteworthy improvement of 3.74% and 3.20% in performance on the phase 1 and final phase validation sets, respectively, over the raw ensemble outputs. § RESULTS AND DISCUSSION §.§ Main Results (using Phase 1 and 2 Validation Datasets) In this section, we present the results of our proposed solution for training coronary artery segmentation and stenosis localisation models. Table <ref> presents our results on both validation sets (i.e., phases 1 and 2 provided by the ARCADE challenge). From the table, it is evident that our proposed systematic training methodology produced improved models, which were subsequently enhanced by ensemble techniques and further refined through post-processing in both evaluation phases. This highlights the effectiveness of our meticulously designed multi-stage training approach, which systematically enhanced the performance of models across different architectural families. Below, we delve into the results obtained in each stage of our systematic strategy, emphasising the stark contrast between our ensemble model and the baseline models. Additionally, we highlight the efficacy of the proposed post-processing steps in improving the ensemble output. §.§ Effect of Systematic Training Strategy In Stage 1, the binary class segmentation pretraining, the performance of the baseline models was as expected, with the ability to correctly identify approximately 10% of artery pixels without task-specific training. Even the ensemble of binary class models achieved a modest F1 score of 11.69%. Subsequent post-processing techniques were applied, resulting in a noticeable improvement, with the mean F1 score reaching 13.20%. These results underscore the inherent difficulty of the task, as individual models struggled to achieve satisfactory accuracy. In Stage 2, we introduced the multilabel adaptation of the binary class model; some of these models, such as FPN Mix ViT B5, started to learn artery pixels, surpassing a mean F1 score of 19%. The ensemble models demonstrated a remarkable leap, achieving an F1 score of 33.97%.
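To make the post-processing pipeline described in the previous section concrete, a simplified sketch using OpenCV is given below; the kernel size and minimum-area threshold are illustrative placeholders, not the tuned values used in our submission.

```python
import cv2
import numpy as np

def clean_mask(mask, kernel_size=5, min_area=50):
    # mask: (H, W) uint8 binary mask for a single artery-segment class
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)     # remove isolated noisy pixels
    closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)  # fill small gaps and voids
    # drop connected components with unrealistically small sizes
    n, labels, stats, _ = cv2.connectedComponentsWithStats(closed, connectivity=8)
    cleaned = np.zeros_like(closed)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            cleaned[labels == i] = 255
    return cleaned
```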
Post-processing further enhanced the score to 37.87%. This considerable increase in accuracy highlighted the limitations of individual models in comprehending the intricate coronary artery structures.In Stage 3, we introduced class frequency fine-tuning, emphasising the importance of accurately segmenting minor classes. While baseline models' accuracy slightly decreased compared to the previous stage, the ensemble model exhibited superior performance, achieving an F1 score of 35.55%, which increased to 39.52% with post-processing. This stage underscored the value of addressing minor classes, a task where baseline models often faltered.Stage 4 introduced F1-CLS, focusing on challenging classes. It marked a turning point, as the ensemble model's F1 score surged to 36.53%. Post-processing further refined it to 39.74%. In contrast, the baseline models continued to struggle with challenging classes, reaffirming the limitations of individual models.Finally, in Stage 5, multi-target fine-tuning was employed to refine the segmentation model using angiography view predictions. The ensemble model achieved an F1 score of 35.52%, with post-processing increasing it to 39.39%. In contrast, the baseline models could not adapt to the clinical relevance aspect of the task, further highlighting their limitations. Note the results discussed above are based on phase 1 validation data (as presented in the top half of Table <ref>). However, we observed a slight drop in the overall performance of different segmentation models and the techniques employed, when models were evaluated using phase 2 validation data (see bottom half of Table <ref>). §.§ Effect of Ensemble Learning and Post Processing In summary, our ensemble methodology coupled with robust post-processing, consistently outperformed the baseline models at every training stage. Notably, the baseline models struggled to surpass specific performance thresholds (barely exceeding mean F1 score of 20%), highlighting their inherent limitations in handling the intricate tasks of coronary artery segmentation and stenosis localisation. Our weighted ensembling approach, capitalising on the strengths of multiple models, achieved substantial performance enhancements, surpassing the average performance of baseline models by more than twofold in both evaluation phases. The solution's performance is further boosted by an additional fold through post-processing refinements. To provide a visual demonstration, Fig. <ref> showcases example predictions from the baseline models, the ensemble, and the images after post-processing. It is evident that the segmentation quality significantly improves from the baseline models to the ensemble and post-processed outputs. This study underscores the efficacy of ensemble learning followed by robust post-processing as a pivotal approach in medical AI applications, establishing it as a superior and innovative methodology. §.§ Unsuccessful AttemptsWe made several attempts to enhance our solution's performance, but these efforts did not yield the expected results. Some of the unsuccessful approaches we explored include:* Data Preprocessing: We attempted to enhance contrast in XCA imagery using CLAHE during data preprocessing, but this approach did not produce the desired improvements. * Geometric Augmentations: We experimented with geometric augmentations such as Shift Scale rotation, Perspective Warp, and Grid Distortion, but they did not lead to improved results for either of the tasks. 
* Synthetic Data Generation: We considered generating synthetic data, particularly for rare classes like class 10a, using CycleGAN. Unfortunately, we were unable to generate realistic synthetic images. * Loss Functions: We explored the use of Dice Loss in combination with Tversky Loss, but our models did not converge as efficiently as they did with Focal Loss and Tversky Loss. * Fine-Tuning with Phase 1 Validation Data: Fine-tuning our models with the phase 1 validation set, after its release, did not significantly improve our training. * Ensembling Strategies: We experimented with using U-Net with an attention module for ensembling, but this approach did not yield the desired results. * Conditional Generative Adversarial Networks (CGAN): We explored the use of CGAN for predicting defect correction, but these efforts did not lead to successful outcomes. These attempts highlight the challenges and complexities involved in addressing the tasks of coronary artery segmentation and stenosis localization in XCA imagery, and despite our best efforts, certain approaches did not prove effective in achieving our goals.§ LIMITATIONS AND FUTURE WORK While our solution represents a significant advancement, it has certain limitations that require attention for practical applications. Improvements in input preprocessing are needed to balance the noise-to-image ratio, enabling baseline models to acquire robust imaging features and thereby enhancing overall performance. Additionally, while we employed state-of-the-art computer vision architectures and encoders, we acknowledge the potential of generative AI models for medical image segmentation. The iterative refinement offered by diffusion models holds promise for revolutionizing similar tasks, and we plan to explore their integration into our solution.Furthermore, we envision future research directions in XCA analytics, leveraging emerging visual computing techniques. One exciting prospect involves translating the 2D angiogram representation of the coronary artery tree into 3D, enabling immersive mixed reality applications. This innovation could facilitate collaborative XCA analytics in an immersive environment, improving assessment quality through remote peer guidance during preoperative cardiac surgery planning sessions. Such advancements hold the potential to greatly benefit cardiac care and enhance patient outcomes.§ CONCLUSIONSIn this research, we have proposed a solution for multivessel coronary artery segmentation and stenosis localisation from XCA images. Our solution is based on baseline models, complemented by ensemble techniques for result consolidation, and innovative post-processing to create a robust end-to-end prediction system. While the baseline models successfully learned to segment various aspects of coronary arteries, their individual performance could not surpass certain performance thresholds. This underscores their inherent weaknesses in handling the complexities of coronary artery segmentation and stenosis localisation. The proposed ensemble strategy, with its ability to leverage the strengths of multiple models, yielded significantly improved results, establishing itself as a superior and innovative approach for XCA analytics. Our solution was developed for the ARCADE challenge, where we achieved the 5th position on both leaderboards. We obtained a mean F1 score of 37.69% for coronary artery segmentation and 39.41% for stenosis localisation, showcasing the potential of intelligent tools in assisting CAD diagnosis. 
These tools not only enhance the reliability of XCA analysis but also hold promise in guiding interventions and improving the accuracy of stent injections in clinical settings. Our work contributes to advancing the field, offering valuable insights and methodologies for future research in this critical area of cardiovascular healthcare.splncs0410 chen2018encoder Chen, L.C., Zhu, Y., Papandreou, G., Schroff, F., Adam, H.: Encoder-decoder with atrous separable convolution for semantic image segmentation. In: Proceedings of the European conference on computer vision (ECCV). pp. 801–818 (2018)danilov2021real Danilov, V.V., Klyshnikov, K.Y., Gerget, O.M., Kutikhin, A.G., Ganyukov, V.I., Frangi, A.F., Ovcharenko, E.A.: Real-time coronary artery stenosis detection based on modern neural networks. Scientific reports11(1), 7582 (2021)gao2022vessel Gao, Z., Wang, L., Soroushmehr, R., Wood, A., Gryak, J., Nallamothu, B., Najarian, K.: Vessel segmentation for x-ray coronary angiography using ensemble methods with deep learning and filter-based features. BMC Medical Imaging22(1), 10 (2022)kirillov2017unified Kirillov, A., He, K., Girshick, R., Dollár, P.: A unified architecture for instance and semantic segmentation. In: CVPR (2017)li2018pyramid Li, H., Xiong, P., An, J., Wang, L.: Pyramid attention network for semantic segmentation. arXiv preprint arXiv:1805.10180(2018)nobre2023coronary Nobre Menezes, M., Silva, J.L., Silva, B., Rodrigues, T., Guerreiro, C., Guedes, J.P., Santos, M.O., Oliveira, A.L., Pinto, F.J.: Coronary x-ray angiography segmentation using artificial intelligence: a multicentric validation study of a deep learning model. The international journal of cardiovascular imaging pp. 1–12 (2023)tao2022lightweight Tao, X., Dang, H., Zhou, X., Xu, X., Xiong, D.: A lightweight network for accurate coronary artery segmentation using x-ray angiograms. Frontiers in Public Health10,892418 (2022)tmenova2019cyclegan Tmenova, O., Martin, R., Duong, L.: Cyclegan for style transfer in x-ray angiography. International journal of computer assisted radiology and surgery14,1785–1794 (2019)zhou2018unet++ Zhou, Z., Rahman Siddiquee, M.M., Tajbakhsh, N., Liang, J.: Unet++: A nested u-net architecture for medical image segmentation. In: Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: 4th International Workshop, DLMIA 2018, and 8th International Workshop, ML-CDS 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 20, 2018, Proceedings 4. pp. 3–11. Springer (2018)zir1976interobserver Zir, L.M., Miller, S.W., Dinsmore, R.E., Gilbert, J., Harthorne, J.: Interobserver variability in coronary angiography. Circulation 53(4),627–632 (1976) | http://arxiv.org/abs/2310.17954v1 | {
"authors": [
"Muhammad Bilal",
"Dinis Martinho",
"Reiner Sim",
"Adnan Qayyum",
"Hunaid Vohra",
"Massimo Caputo",
"Taofeek Akinosho",
"Sofiat Abioye",
"Zaheer Khan",
"Waleed Niaz",
"Junaid Qadir"
],
"categories": [
"eess.IV",
"cs.CV"
],
"primary_category": "eess.IV",
"published": "20231027080312",
"title": "Multivessel Coronary Artery Segmentation and Stenosis Localisation using Ensemble Learning"
} |
Eight percent of global carbon dioxide emissions can be attributed to the production of cement, the main component of concrete, which is also the dominant source of CO_2 emissions in the construction of data centers. The discovery of lower-carbon concrete formulae is therefore of high significance for sustainability. However, experimenting with new concrete formulae is time consuming and labor intensive, as one usually has to wait to record the concrete's 28-day compressive strength, a quantity whose measurement cannot, by definition, be accelerated. This provides an opportunity for experimental design methodologies like Bayesian Optimization (BO) to accelerate the search for strong and sustainable concrete formulae. Herein, we 1) propose modeling steps that make concrete strength amenable to accurate prediction by a Gaussian process model with relatively few measurements, 2) formulate the search for sustainable concrete as a multi-objective optimization problem, and 3) leverage the proposed model to carry out multi-objective BO with real-world strength measurements of the algorithmically proposed mixes. Our experimental results show improved trade-offs between the mixtures' global warming potential (GWP) and their associated compressive strengths, compared to mixes based on current industry practices. Our methods are open-sourced at https://github.com/facebookresearch/SustainableConcrete. § INTRODUCTION Eight percent of global carbon dioxide emissions can be attributed to the production of cement <cit.>, the main reactive component of concrete, contributing significantly to anthropogenic climate change <cit.>. By comparison, the global annual emission from commercial aviation was estimated at 2.4% in 2019 <cit.>. Concrete is also the leading source of CO_2 emissions in data center construction, accounting for 20-30% of the associated emissions, making the reduction of the carbon footprint of concrete necessary to de-carbonize the operations of modern technology companies. Further, concrete mixtures that simultaneously exhibit a small carbon footprint and safe strength levels could become a critical piece in achieving societal de-carbonization goals and the mitigation of climate change. However, conventional concrete is mainly optimized for cost, availability, and compressive strength at the 28-day mark. To meet construction and sustainability goals, concrete needs to be optimized for additional, often opposing objectives: curing speed and low environmental impact, where the latter is commonly expressed as the global warming potential (GWP), typically in kilograms of CO_2 per cubic meter. The optimization of these opposing objectives is the primary goal of this work, and is part of a program to develop low-carbon concrete for data center construction which includes model development, lab testing, pilot projects, and at-scale application at Meta's data centers <cit.>. Herein, we give an overview of our methodology and validated experimental results that enable reliable strength predictions and the optimization of the trade-offs that are inherent to the design of low-carbon concrete.
In particular, we 1) propose a probabilistic model that maps concrete formulae to compressive strength curves, 2) formulate the search for sustainable concrete as a multi-objective optimization problem, and 3) employ Bayesian optimization (BO) in conjunction with the proposed model to accelerate the optimization of the strength-GWP trade-offs using real-world compressive strength measurements. § BACKGROUND §.§ Gaussian Processes Gaussian Processes (GPs) constitute a general class of probabilistic models that permit exact posterior inference using linear algebraic computations alone <cit.>, which includes the quantification of the uncertainty of the model predictions. For this reason, most BO approaches use GPs as a model for the objective function f that is to be optimized and presumed to be too expensive to evaluate frequently. GPs can be defined as a class of distributions over functions whose finite-dimensional marginals are multi-variate Normal distributions. Formally, a real-valued f is distributed according to a GP if, for any set of points X = [x_1, …, x_n] in the domain 𝒳, f(X) = [f(x_1), …, f(x_n)] ∼ 𝒩(μ(X), k(X, X)), where μ: 𝒳 → ℝ is called the mean function and k: 𝒳 × 𝒳 → ℝ the covariance function, or simply "kernel", and we overloaded the notation of f, μ and k applied to the set of inputs X with a "broadcasting" over the elements of X: μ(X) = [μ(x_1), …, μ(x_n)], and k(X, X')_ij = k(x_i, x'_j) is a matrix which is positive semi-definite if X = X'. §.§ Multi-Objective Bayesian Optimization Multi-objective optimization (MOO) problems generally exhibit trade-offs between m > 1 objectives f = [f_1, ⋯, f_m] that make it impossible to find a single input that jointly maximizes all objectives. Instead, one is usually interested in finding a set of optimal trade-offs, also called the Pareto frontier (PF), between multiple competing objectives, usually under the constraint that each objective is above a minimum acceptable value f_i(x) > r_i. Collectively, the set of lower bounds r = [r_1, ⋯, r_m] is referred to as the reference point. The hypervolume of a discrete Pareto frontier 𝒫 = {y_i}_i bounded by a reference point r is a common measure of its quality; formally, HV(𝒫, r) := λ(⋃_y_i ∈ 𝒫 [r, y_i]), where [r, y_i] denotes the hyper-rectangle bounded by vertices r and y_i, and λ is the Lebesgue measure. Thus, a natural acquisition function for MOO problems is the expected hypervolume improvement EHVI(X) = 𝔼[ [HV(𝒫 ∪ f(X), r) - HV(𝒫, r)]_+ ], from obtaining a set of q new observations f(X) := [f(x_1), ⋯, f(x_q)]. If q=1 and the objectives are modeled with independent GPs, the EHVI can be expressed in closed form <cit.>; otherwise Monte Carlo approximations are necessary <cit.>. Notably, <cit.> proposed a simulation-optimization framework for low-carbon concrete which jointly optimizes deterministic models of 28-day strength, cost, and embodied carbon using an evolutionary algorithm (EA). The approach is promising but unlikely to be competitive for real-world experimentation without modifications, as EAs tend to be significantly less sample efficient than BO. § A PROBABILISTIC MODEL OF COMPRESSIVE STRENGTH In prior work, <cit.> proposed analytical forms for the evolution of concrete's compressive strength as a function of time, <cit.> proposed a non-temporal, non-probabilistic kernel regressor to predict concrete's lateral strength, and <cit.> used conditional variational autoencoders to predict the compressive strength at discrete t-day intervals. Herein, we propose a probabilistic model f(x, t) that jointly models dependencies on composition x and time t.
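Anticipating the model components developed in the following paragraphs (a log-time transform and an additive, composition-independent time kernel), the snippet below is a minimal GPyTorch-style sketch of such a strength model. The class name, the log1p handling of zero-day observations, and all hyper-parameter choices are illustrative assumptions rather than our exact implementation.

```python
import gpytorch

class StrengthGP(gpytorch.models.ExactGP):
    # train_x: (n, d) tensor whose last column is log(1 + t); the remaining columns are the composition x
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        d = train_x.shape[-1]
        self.mean_module = gpytorch.means.ConstantMean()
        # composition-independent kernel acting on the log-time dimension only (k_time)
        time_kernel = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.RBFKernel(active_dims=(d - 1,)))
        # joint Matern-5/2 kernel with ARD over composition and log-time (k_joint)
        joint_kernel = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.MaternKernel(nu=2.5, ard_num_dims=d))
        self.covar_module = time_kernel + joint_kernel   # alpha, beta correspond to the two outputscales

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x))

# likelihood = gpytorch.likelihoods.GaussianLikelihood()
# model = StrengthGP(train_x, train_y, likelihood)  # hyper-parameters fit by marginal likelihood optimization
```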
The proposed model is accurate even in the low data regime due to data transformations, augmentations, and a customized kernel function. Zero-day zero-strength conditioning. A simple but key characteristic of concrete strength is that it is zero at the time of pouring. As no actual measurement is made upon pouring, strength data sets generally do not contain records of the zero-day behavior. Direct applications of generic machine learning models to these data sets can consequently predict non-physical, nonsensical values close to time zero. We propose a simple data augmentation that generalizes to any model type to fix this: for any concrete mixture in a training data set, we add an artificial observation at time zero with corresponding strength zero. We add additional artificial observations at time zero for a randomly chosen subset of compositions to encourage the model to conform to this behavior for compositions that are dissimilar to any observed ones. Non-stationarity in time. The evolution of compressive strength is non-stationary in time, as the strength increases quickly and markedly early in the curing process, but converges monotonically to a terminal strength value. Therefore, we transform the time dimension to be logarithmic before passing the inputs to the model: t → log t. This transformation enables the application of stationary covariance kernels on the log-time dimension, while leading to good empirical predictive performance. Kernel design. Aside from data augmentation and transformation, a GP's kernel chiefly determines the behavior of the model and permits the incorporation of problem-specific, physical structure <cit.>. Notably, the f(x, ·) curves generally share a similar shape for any composition x. For this reason, we introduce an additive composition-independent, time-dependent component k_time alongside a generic kernel over all variables k_joint: k((x, t), (x', t')) = α k_time(log t, log t') + β k_joint((x, log t), (x', log t')), where α, β > 0 are variance parameters which are inferred with the other kernel hyper-parameters using marginal likelihood optimization <cit.>. For our experiments, we chose k_time to be an exponentiated quadratic kernel and k_joint to be a Matérn-5/2 kernel with automatic relevance determination (ARD). This structure allows the model to infer the general smooth shape of strength curves independent of the particular composition x, leading the model's predictions to be physically sensible and accurate over long time horizons. Model Evaluation. Figure <ref> shows the predicted strength curves f(x, ·) for two fixed compositions x, ranging over time t, of our model (left) and a naïve GP (right). Both models were trained on the UCI concrete strength dataset <cit.>, which we used to develop our model before starting our own experiments. While a naïve application of a GP to the data (dots) is unsuccessful, our model is accurate and well calibrated. § SUSTAINABLE CONCRETE AS A MULTI-OBJECTIVE OPTIMIZATION PROBLEM Increasing sustainability is the motivating factor of this work, though it is simultaneously critical to maintain concrete's compressive strength above application-specific thresholds. Specifically, our primary objective is making concrete sustainable, a multi-faceted notion that includes, but is not limited to, the carbon impact of production. Here, we focus on the global warming potential (GWP) <cit.> to quantify concrete mixtures' sustainability, but the methods are general and can be extended to metrics that quantify complementary aspects of sustainability.
Importantly, any decrease in GWP would be rendered meaningless by the inability of the associated concrete formula to be used for construction projects, which usually face tight deadlines. We therefore add compressive strength at short 1-day and long 28-day curing durations to our list of objectives. The strength objectives are a-priori unknown and are estimated using the model proposed in Section <ref> by evaluating f(x, 1) and f(x, 28). We model GWP as a deterministic linear function GWP(x) = α^⊤ x, where each α_i quantifies the GWP for each unit of the i-th mixture ingredient. A more precise quantification of GWP would also depend on location and transportation <cit.>, but the linear model is a reasonable approximation for our purposes. Further, given an accurate strength model, we can infer Pareto-optimal mixes for a post-hoc change in the GWP model, similar to the post-hoc changes in the constraints explored in Sec. <ref>. Formally, the associated MOO problem is max_x ∈ 𝒳 (f(x, 1), f(x, 28), -GWP(x)). We then employ BO with <cit.>'s qLogNEHVI acquisition function – the LogEI variant of <cit.>'s qNEHVI – to design batches of compositions, optimizing for the PF of the MOO problem of Eq. (<ref>) using real-world compressive strength experiments. § EXPERIMENTS §.§ Real-World Experimental Setup All experimental testing of mortar specimens was performed in accordance with ASTM C109, as outlined below. Two-inch mortar cubes were mixed and cured at 22 °C, first mixing fine aggregate with half the water and then adding all cementitious material with the remaining water. Superplasticizer was added during the second stage of mixing, as needed. After mixing and tamping, plastic lining was applied to each mold to prevent significant moisture loss. After 24 hours of curing inside the molds, mortar cube specimens were removed and submerged in a lime-saturated water bath at room temperature (22 °C). Three specimens each were prepared for curing ages of one, three, five, and twenty-eight (1, 3, 5, and 28) days. All specimens were subjected to a compressive load at a constant loading rate of 400 lb/s using a Forney compressive testing machine. §.§ Empirical Optimization Results Figure <ref> shows the experimentally achieved trade-offs between 1-day (left) and 28-day (right) compressive strength, and GWP. The proposed mixtures quickly improve on both a random initial set (Batch 1, human) and a set of mixtures inspired by industry practices (Batch 2, human). The AI-proposed batches push the empirical Pareto frontier outward, providing an increasingly fine set of trade-offs between sustainability and strength. Fortunately, the highest-GWP Pareto-optimal composition shown in Figure <ref> dominates the GWP-strength trade-off of a pure-cement mix (not shown in Fig. <ref>), implying that a GWP reduction can – to a non-negligible extent – be achieved without sacrificing strength, though greater GWP reductions do require such trade-offs. §.§ Inferred Pareto Frontier In addition to using the proposed concrete strength model for BO, we can use it to compute inferred PFs conditioned on application-specific constraints.
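A schematic sketch of this computation is shown below: score a set of candidate compositions with the fitted strength model and the linear GWP model, apply any application-specific constraints, and keep the non-dominated points. All function and variable names are illustrative.

```python
import numpy as np

def non_dominated(Y):
    # Y: (n, m) objective values, larger is better; returns a boolean mask of Pareto-optimal rows
    keep = np.ones(len(Y), dtype=bool)
    for i, y in enumerate(Y):
        if keep[i]:
            dominated = np.all(Y <= y, axis=1) & np.any(Y < y, axis=1)
            keep &= ~dominated
    return keep

def inferred_pareto_front(candidates, strength_model, gwp_coeffs, feasible):
    # candidates: (n, d) candidate compositions; feasible: boolean mask from w/b or ingredient constraints
    X = candidates[feasible]
    Y = np.column_stack([
        strength_model(X, day=1),    # predicted 1-day strength (model interface is hypothetical)
        strength_model(X, day=28),   # predicted 28-day strength
        -(X @ gwp_coeffs),           # negated GWP so that larger is better for all objectives
    ])
    mask = non_dominated(Y)
    return X[mask], Y[mask]
```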
Querying the model in this way is useful both to gain scientific insight and because ingredients like fly ash or slag – waste products of coal and steel plants, respectively – might only be effectively sourced in specific regions. By computing the inferred PF based on location-specific constraints, we can generate composition recommendations that are customized to specific construction projects. Figure <ref> shows a numerical approximation of the inferred PF conditioned on different water-to-binder (w/b) ratios, which affect the workability of the concrete, with higher w/b usually being more workable, as well as constraining the mixtures not to include fly ash (orange) or slag (green). The inferred PFs yield several insights: 1) the water-to-binder ratio has a significant effect on compressive strength, a phenomenon that has been remarked upon in the literature on concrete <cit.>, 2) removing ash from the composition space has a negligible effect on the achievable trade-offs up to 28 days, and 3) removing slag has a significant negative effect. We stress that these results are based on predictions, but qualitatively match expert consensus on the variables' effects. § CONCLUSION We introduced a probabilistic model for the temporal evolution of concrete's compressive strength as a function of its composition, formalized the problem of finding strong yet sustainable concrete formulae as a multi-objective optimization problem, and leveraged BO to propose new concrete mixtures for real-world testing. We seek to accelerate the development of sustainable concrete by open-sourcing our methods at https://github.com/facebookresearch/SustainableConcrete . The work has the potential to decrease the carbon footprint of data center construction and the construction industry at large, with possibly global impact. § BACKGROUND Additional Information on Gaussian Processes. Given a set of input and output pairs (X, y) and a Gaussian likelihood with zero mean and variance σ^2, the posterior distribution of a GP is also a GP, defined by the posterior mean μ_p and kernel Σ_p: μ_p(X^*) = μ(X^*) + Σ(X^*, X)(Σ(X, X) + σ^2 I)^-1 (y - μ(X)), Σ_p(X^*, X'^*) = Σ(X^*, X'^*) - Σ(X^*, X)(Σ(X, X) + σ^2 I)^-1 Σ(X, X'^*). These equations show that, without a-priori knowledge about the specific values of the target function, the kernel function Σ is the primary factor in determining the behavior of the model. In the literature, Σ is usually also a function of hyper-parameters like length-scales, which control how quickly the function is expected to vary with respect to specific input dimensions. In this work, we use well-established marginal likelihood optimization to optimize our model's hyper-parameters. In the single-outcome (M=1) setting, f(X) ∼ 𝒩(μ(X), Σ(X)) with μ: 𝒳^q → ℝ^q and Σ: 𝒳^q → 𝒮^q_+. In the sequential (q=1) case, this further reduces to a univariate Normal distribution: f(x) ∼ 𝒩(μ(x), σ^2(x)) with μ: 𝒳 → ℝ and σ: 𝒳 → ℝ_+. § PROBABILISTIC COMPRESSIVE STRENGTH MODEL Strength Model Cross-Validation. Figure <ref> shows cross-validation results across all day-1 strength values in our data set, highlighting generally good predictive accuracies and well-calibrated uncertainties. On monotonicity. Another characteristic of concrete strength is its monotonic increase to a terminal value. While techniques for including this constraint on the derivatives of the GP have been proposed, they introduce additional complexities that we have – so far – not found to be worth the potential increase in model performance.
The model's predictive mean is already monotone in our empirical observations, due to the other model components, like the log-time transform, the additive time-dependent component, and a range of different measurement times in the training data. Including this constraint could still lead to an increase in trust and uptake of the model by practitioners in the long term.
"authors": [
"Sebastian Ament",
"Andrew Witte",
"Nishant Garg",
"Julius Kusuma"
],
"categories": [
"cs.LG",
"physics.soc-ph"
],
"primary_category": "cs.LG",
"published": "20231027172512",
"title": "Sustainable Concrete via Bayesian Optimization"
} |
Knowledge-based in silico models and dataset for the comparative evaluation of mammography AI for a range of breast characteristics, lesion conspicuities and doses E. Sizikova, N. Saharkhiz, D. Sharma, M. Lago, B. Sahiner, J. G. Delfino, A. BadanoOffice of Science and Engineering Laboratories Center for Devices and Radiological Health U.S. Food and Drug Administration Silver Spring, MD 20993 USA ========================================================================================================================================================================================================================================================= To generate evidence regarding the safety and efficacy of artificial intelligence (AI) enabled medical devices, AI models need to be evaluated on a diverse population of patient cases, some of which may not be readily available. We propose an evaluation approach for testing medical imaging AI models that relies on in silico imaging pipelines in which stochastic digital models of human anatomy (in object space) with and without pathology are imaged using a digital replica imaging acquisition system to generate realistic synthetic image datasets. Here, we release M-SYNTH[Code and data links available at: <https://github.com/DIDSR/msynth-release/>], a dataset of cohorts with four breast fibroglandular density distributions imaged at different exposure levels using Monte Carlo x-ray simulations with the publicly available Virtual Imaging Clinical Trial for Regulatory Evaluation (VICTRE) toolkit. We utilize the synthetic dataset to analyze AI model performance and find that model performance decreases with increasing breast density and increases with higher mass density, as expected. As exposure levels decrease, AI model performance drops with the highest performance achieved at exposure levels lower than the nominal recommended dose for the breast type.§ INTRODUCTION The goal of this work is to demonstrate that AI models for medical imaging can be evaluated using simulations, specifically, using an in silico (also known as synthetic) imaging pipeline equipped with a stochastic model for human anatomy and disease <cit.>. We show that in silico methods can constitute rich sources of data with realistic physical variability for performing comparative analysis of AI device performance. To date, computational models have been applied to some extent for the analysis of nearly all medical imaging modalities and for a wide variety of clinical tasks <cit.>. Since it is critical to ensure patient safety and system effectiveness in healthcare applications, rigorous and thorough testing procedures must be performed in order to study performance in the intended population including subpopulations of interest. To prevent estimates that might be biased by overfitting, model testing is typically performed on a previously unseen dataset. However, datasets consisting of patient images may present a limited distribution of the variability in human anatomy and may not always capture rare, but life-critical cases, and may be biased towards specific populations or parameters of image acquisition devices dominant at specific clinical sites. In addition, patient data and associated health records may not be available due to patient privacy, cost, or additional risk associated with additional imaging procedures. 
Precise mass location and extent (e.g., mass boundaries) are typically not available in the patient's records, and it is burdensome, error-prone, and sometimes impossible to collect this information retrospectively. In many medical imaging applications, these limitations pose a significant barrier to development and evaluation of novel computational techniques in medical imaging products.We propose evaluating AI models using physics-based simulations. We create realistic test cases by imaging digital objects using digital image acquisition systems. Our in silico testing pipeline offers the ability to control both object and acquisition parameters, and generate highly realistic test cases (see Figure <ref>). We show that digital objects and computer simulated replicas of image acquisition devices offer a rich source of realistic data capturing a variety of patient and imaging conditions for evaluation purposes. In particular, our approach (and associated dataset) allows for performing comparative analysis of AI performance across physical breast properties (e.g., mass size) and imaging characteristics (e.g., radiation dose). Such testing typically cannot be performed with patient data, as the data may be too costly to collect or unsafe to acquire (e.g., one cannot ethically re-image the same patient multiple times using ionizing radiation). Our contributions in this work can be summarized as follows: * We demonstrate that, using this approach, we can detect differences in AI model performance based on selected image acquisition device or physical object model parameters. Specifically, we evaluate the effect of image acquisition (radiation dose) and object model (breast and mass densities, mass size) parameters on the performance of the AI model.* We release a dataset, M-SYNTH, to facilitate testing with pre-computed data using the proposed pipeline. The dataset consists of 1,200 stochastic knowledge-based models and their associated digital mammography (DM) images with varying physical (breast density, mass size and density) and imaging (dose) characteristics. § BACKGROUND First, we introduce the concepts of knowledge-based models and physics-based imaging simulation that form the in silico imaging pipeline, the foundation of our work.Object Models. Knowledge-based (KB) models incorporate information about the physical world into the data generation process to create realistic virtual representations of human parts or organs <cit.>. As discussed in <cit.>, large cohorts of digital stochastic human models can be represented by:{𝐟_s}_s=1^S = ∑_n θ_n^s ϕ_n(r) ,where s denotes a particular state or random realization of a digital human in a cohort of size S, r denotes a spatial variable, ϕ_n denote expansion (basis) functions, and θ_n denote expansion coefficients. Knowledge-based models specifically are constructed by sampling a set of θ_n in Eq. <ref> from distributions representing the relevant model characteristics, given a specific ϕ_n based on the application. The characteristics of the distributions are often derived from physical or biological measurements. In the case of breast, knowledge-based models allow us to vary physical patient characteristics including breast size, breast shape, mass size and mass density (see Figures <ref>, <ref> and <ref>). Specifically, the object (breast) is a model D, parameterized by a vector x characterizing a fixed, user-defined set of physiological properties (e.g., breast density, mass presence, mass size, glandularity). 
Given a sample x_s, we can generate a realistic, high-resolution object f_s = D(x_s). We rely on Graff's breast model <cit.> as the KB model for this project and describe its properties in Section <ref>. Digital Mammography (DM) image generation. Once created, KB models are imaged using simulations of x-ray transport through the materials present in each KB model. The image acquisition device I is a parametric model that receives the object d_i as well as user-defined choices for control parameters y (e.g., detector type, radiation dose) and outputs an image r_i,j=I(d_i,y_j) given a sample choice of parameters y_j and an input object d_i. Parameters of such a system (e.g., geometry, source characteristics, detector technology, anti-scatter grid, etc.) can emulate system geometries and x-ray acquisition parameters found in commerciallyavailable imaging device (e.g., mammography) specifications. In our work, we used MC-GPU <cit.>, a Monte Carlo x-ray simulation software implemented on GPUs that generates mammography images. Additional details for this component of the pipeline can be found in Section <ref>. Related work in generative image models. The in silico imaging pipeline described above is highly related to medical imaging generation using generative models. One popular type of generative model is a generative adversarial network (GAN) <cit.>, which learns a mapping from a low-dimensional representation to images at resolution. Generative models have been applied to a variety of medical image generation tasks <cit.>. For example, Guan <cit.> showed that GAN-generated synthetic images can be used to augment a smaller patientbreast image dataset for breast image classification. <cit.> introduced image-based GAN to generate high resolution images conditioned on pixel-level mask constraints. GANs may not correctly capture the link between input parameters and outputs, and thus, are prone to generating unrealistic examples <cit.>. A number of alternative types of generative models <cit.> have been developed that address its limitations, such as training instabilities and unrealistic output images. A key advantage of generative models is that their run time can be faster than fully-detailed, object-space simulations, and it remains important to explore and compare both techniques. Their key limitation is that they require large training datasets and typically learn noise and artifacts from the imaging system <cit.>. In particular, all image acquisition systems have a null space, i.e., the set of object-space details that are not observed in the acquired images due to imaging system limitations (e.g., finite spatial and temporal resolution). Null space constraints limit the ability of generative models to describe certain components of patient anatomy and pathology. Simulation-based testing has been proposed in other fields, such as autonomous vehicle navigation <cit.>, and is related to the concept of generating adversarial perturbations in the image <cit.> and the physical property space <cit.>. For example, <cit.> introduced 3DB, a photo-realistic simulation framework to debug and improve computer vision models. Inspired by these works, we propose to evaluate medical imaging AI using images generated using KB models and physics simulations and release a dataset to facilitate such exploration. § DATASET GENERATIONThe use of in silico imaging allows for the generation of large object and image datasets without the need of human clinical trials. 
Here, we take advantage of the benefits of the in silico approach to perform comparative analysis of AI model performance across different physical properties of the case population of breast models. We rely on the VICTRE pipeline [See https://github.com/DIDSR/VICTRE_PIPELINEVICTRE Github Page and https://www.fda.gov/medical-devices/science-and-research-medical-devices/catalog-regulatory-science-tools-help-assess-new-medical-devicesFDA Regulatory Science Tools (RST) Catalog.] for generating breast models and their corresponding DM images. Previous work <cit.> has shown that the VICTRE pipeline replicated the results of a clinical study comparing DM and digital breast tomosynthesis (DBT) involving hundreds of enrolled women. An overview of the data generation process can be seen inFigure <ref>. Breast Model Synthesis.In silico breast models <cit.> (also known as breast imaging phantoms) were generated using a procedural analytic model which allows for adjusting various patient characteristics including breast shape, size and glandular density. The models are compressed in the craniocaudal direction using FeBio <cit.>, an open source finite-element software. We simplified the breast materials in non-glandular (as fat) or glandular tissue with Young's modulus and Poisson ratio of E=5 Pa, ν=0.49 and E=15 Pa, ν=0.49, respectively. Lesions were inserted in a subset to create the signal-present cohort. These models were then imaged using a state-of-the-art Monte Carlo x-ray transport code (MC-GPU) <cit.>. We studied breast densities of extremely dense (referred to as “dense”), heterogeneously dense (referred to as “hetero”), scattered, and fatty, matching the distributions from <cit.>. For each breast density, a different breast size is used to correspond with population statistics. Therefore, the dense breast is the smallest, followed by heterogeneously dense, then scattered, and then fatty. Each breast model was compressed to 3.5 cm, 4.5 cm, 5.5 cm, and 6.0 cm for each respective density, mimicking the organ compression during the imaging. Random spiculated breast masses were generated using the de Sisternes model <cit.> with three different sizes (5 mm, 7 mm and 9 mm radii) and mass density was set to be a factor of glandular tissue density (1.0, 1.06 and 1.1 times). Note that for dense and hetero breasts, we only used mass sizes of 5 and 7 mm, since 9 mm masses do not fit within the breast region. No micro-calcification clusters were inserted. To create the signal-present cohort, a single spiculated mass was inserted in half of the cases at randomly chosen locations chosen from a list of candidate sites determined by the position of the terminal duct lobular units.The resulting in silico dataset comprises of 1,200 digital breast models, corresponding to 300 patients per breast size/density. Compared to the original VICTRE trial <cit.>, we introduce variations in mass size and density. Samples of model realizations are shown in Figures <ref>, <ref> and <ref>. Note that the bounding boxes are only to make the masses more conspicuous for visualization purposes only.Digital Mammography (DM) Generation.To simulate the x-ray imaging process, we used MC-GPU <cit.>, a Monte Carlo x-ray simulation software implemented on GPUs that generates DM images. The detector model relies on system geometries and x-ray acquisition parameters inspired by the currently available Siemens Mammomat Inspiration DM system. 
The dosimetric and x-ray acquisition parameters were selected based on publicly available device specifications and clinical recommendations for each compressed breast thickness and glandularity. We applied 20-100% of the clinically recommended dose for each breast density. See Badal et al. <cit.> for the exact parameter values and doses delivered to each breast and Sengupta et al. <cit.> for additional details. X-ray photons arriving at the detector are tracked until first photoelectric interaction incorporating fluorescence effectsby generating and tracking a secondary x-ray based on the fluorescence yield in a uniformly random direction. Electronic noise is added to the pixel variance. The focal spot blurring in the source was modeled as a 3D Gaussian probability distribution with a full-width-at-half-maximum of 300 μm. A tungsten anode filtered with 50 μm rhodium was used with a peak voltage of 28 kV for fatty and scattered breasts and 30 kV for dense and heterogeneously dense breasts. The same analytical anti-scatter grid was also included for generating the DM images. (5:1 ratio, 31 line pairs/mm), see <cit.>. The resulting detector model (known as DIR in <cit.>) is representative of a solid-state amorphous selenium transducer in a direct detector configuration. Visualizations of generated images and masses can be seen in Figure <ref>. A summary of complete parameters used to generate data points in the presented dataset is described in Table <ref>. In Figure <ref>, we report statistics of dose levels corresponding to the dataset. § RELATED DATASETSTo date, a number of datasets for mammographic image analysis have been collected (see Table <ref>). The majority of datasets are created from patient data collected from DM <cit.> or digital breast tomosynthesis (DBT) <cit.> scans from various clinical sites. The DREAM Challenge <cit.> offered datasets for development of AI-based mammography analysis techniques. Patient datasets vary widely in the types of labels available, and the data may be biased toward the demographic characteristics of patients at the source site. While there exist datasets, such as the EMory BrEast imaging Dataset (EMBED) <cit.>, that specifically focus on equal representation (in this case, equal representation of African American and White patients), collecting a truly balanced dataset across all possible characteristics may not be possible with patient cases.We found only two in silico datasets for mammography analysis. The first dataset, published by Sarno <cit.>, consists of150 patient-derived digital breast models with uncompressed computational breast phantoms derived from 3D breast images acquired with an in-house dedicated breast computed tomography (CT) scanner. The models were processed by a voxel classification algorithm into four materials (air, adipose tissue, fibroglandular tissue, and skin).The second dataset is the VICTRE <cit.> collection that consists of about 3,000 digital patients with breast sizes and densities representative of a screening population. Digital microcalcification clusters and spiculated masses were inserted in the voxelized phantoms to create the positive cohort. The phantoms were imaged in silico to produce digital mammogram projections and digital breast tomosynthesis volumes. 
In comparison to both of these datasets, our work contains more significant variability in breast and mass characteristics, as well as a range of applied dose levels for image acquisition, in order to facilitate comparative evaluations of AI across characteristic changes. § RESULTS AND ANALYSIS In this section, we present an approach to using our M-SYNTH dataset to evaluate an AI device. Formally, an image processing AI model F takes as input an image r and predicts a specific property of interest F(r) about the image. For example, such a model can predict the presence or absence of a mass. Typically for AI models, F is a neural network and is trained on a dataset of images and their labels T_train={(r_1,l_1),(r_2,l_2),… (r_n,l_n)}, and then evaluated on a held-out dataset T_test. When using patient images, evaluation is limited to the variability contained in the samples and in the annotations present across examples in the fixed test set T_test. Instead, we propose to generate T_train and T_test dynamically using D and I described above in order to test F across variations in model x and acquisition parameters y.§.§ Implementation Details Evaluation Metrics We evaluate performance using the area under curve (AUC) metric for a mass detection task. Specifically, we treat evaluation as a multiple reader multiple case study, where an AI model is a single reader. Multiple readers are obtained by re-training the model with different random seeds. We rely on the iMRMC software <cit.> to identify associated confidence intervals. Network TrainingWe represent the AI-enabled device as a neural network with an efficientnet_b0 architecture, receiving an image with one channel and dimensions of 224 by 224, and outputting a binary mass presence label. The network is trained with batch size 64 using binary cross entropy loss (BCE) and optimized using RMSProp optimizer (with learning rate 0.0001). We rely on the timm library <cit.> and fine-tune the model pre-trained with ImageNet <cit.>. We also compared performance with alternative architectures (vit_small_patch16_224 and vgg_16), but results were very similar (see supplementary material).For each specific breast density, radiation dose level, and mass size and density, the 300 images in the M-SYNTH dataset were divided into 200 for training, 50 for validation, and 50 for test. For comparison, we also train the AI device on 410 patient DM images from the INBreast dataset <cit.>, where images were obtained using MammoNovation Siemens full-field digital mammography system with a solid-state amorphous selenium detector. We use the same pre-processing and training regimes on this dataset and learn a network to predict mass presence. The trained models on the real patient dataset were then tested on 50 examples of M-SYNTH dataset for each specific breast density, dose level, and mass size and density. The full experimental setup is implemented in Python and C over a cluster with 50 Tesla V100-SXM2 GPUs.§.§ Experimental Results We identify two tasks that can be performed using our method. In the subgroup analysis task, we train and test an AI model using the released synthetic (M-SYNTH) dataset to identify performance changes on specified subgroups. In the patient data evaluation task, we study how an AI model trained on patient data (InBreast) performs on the proposed M-SYNTH dataset. This task can help identify where the trained model may show variable performance for different subgroups belonging to the target population.Subgroup Analysis. 
In Figures <ref> and <ref>, we report the results of the AI model performance at detecting masses, when the model is trained and tested on the our dataset (see Section <ref> for details of splits). We find that masses with larger sizes or higher densities (Figures <ref>a-b) are more easily detected. Although models trained on all sizes or mass densities have the highest performance, when the models are trained on smaller masses or lower densities, theygeneralize better to other masses (more difficult cases).The performance of the models are highest when they are tested and trained on the same breast density and decrease as the density of the test breast phantom differs from the train phantom (Figures <ref>c). The dose levels applied in this study have minimal impact on the performance of the models and resulted in similar AUC values (Figures <ref>d). Evaluation of the performance change across all the breast densities (Figures <ref>a-b) reveals that the AUC improves with larger mass density and mass size, yet is impacted by the breast density, where mass detection performance is lowest in high-density breasts (dense) and highest in low-density breasts (fatty) in most of the cases, consistent with findings from clinical practice. Patient Data Evaluation. In Figure <ref>, we report experiments where the AI model is trained on INBreast data and evaluated on the M-SYNTH data. Although the performance results for all experiments are lower in general, we find a similar set of trends as when the model is trained on M-SYNTH data. Note that we have made no attempt to match the radiation dose levels or the image acquisition parameters for these comparisons using patient images. Even though the simulated pipeline is designed to replicate a specific DM system with a particular detector technology and technique factors, the comparison suggests similarity between the datasets. The images are qualitatively different but overall have similar glandular patterns which is an important consideration for the realism of the task of detecting masses in a noisy background. We also assessed similarity between INBreast and M-SYNTH datasets in terms of low-level pixel distributions using first five statistical moments: mean, variance, skewness, kurtosis, and hyperskewness. We found that there is a reasonably good alignment in terms of moments, especially when the synthetic images were included at all four breast densities (see supplementary material). Future work should develop a more detailed comparison including radiomics features for the training and testing datasets used in the study to complement the validation of our approach.Limitations.There are a number of limitations to our work. First, simulations may require long runtimes and demand large computational resources, thus somewhat limiting the amounts of data that can be generated. This limitation needs to be considered with respect to the difficulty of obtaining large patient image datasets with known mass locations. In addition, data can be pre-generated offline (as we do with the M-SYNTH dataset), therefore, removing the large runtime limit and computational burden off the user. Second, testing with simulations is constrained to the variability captured by the parameter space of the object models for anatomy and pathology and the acquisition system. Thus, the complexity of the object model and acquisition system may need to be adjusted depending on the complexity of the questions to be investigated with simulated testing. 
In particular, a potential risk of testing using simulated data is missing the variability observed in patient populations. Finally, there is a risk of misjudging model performance due to a domain gap between real and synthetic examples. However, the realism and sophistication of object-based modeling of the imaging pipeline are improving rapidly and may soon compete with other approaches, making approaches based on synthetic data useful and practical for regulatory evaluation of AI-enabled medical devices.

§ CONCLUSION AND FUTURE WORK
We introduce and discuss an approach for validating AI models using physics-based simulations of digital humans from the object space to the image data, specifically for the task of breast cancer mass detection. The simulated images are highly realistic and offer a challenging test case for AI model evaluation. Our findings are consistent with expected performance and show that AI model performance increases with mass size and mass density, as expected. Finally, we show that our approach can be used to validate a model trained on independent patient data. This finding suggests that the proposed simulation setup can be used as a framework for more general evaluation of medical AI devices. The goal of this study is to demonstrate, as a proof of concept, the feasibility of using simulated data to evaluate the comparative performance of AI models. In future work, it would be important to assess the evaluation approach for additional parameters in terms of the distribution of the population of digital humans in the object space, and for a range of image acquisition systems (e.g., by considering alternative simulators). By imaging a more diverse population of breast models, we hope to identify additional insights regarding AI evaluation. Finally, it is important to note that the testing is limited to the variability captured in the digital representations and may not fully indicate absolute real-world performance or trends. This study illustrates that physics-based simulation of mammography images can represent a less burdensome and cost-efficient approach for the evaluation of AI model performance across a wide range of scenarios, including a variety of image acquisition parameters and diverse populations that may not be available or are hard to obtain from human studies. Moreover, this approach offers a complementary evaluation paradigm that does not depend on the availability of patient data.

§ ACKNOWLEDGEMENTS
We thank Andreu Badal (OSEL/CDRH/FDA) and anonymous reviewers for helpful suggestions, Kenny Cha, Mike Mikailov and the OpenHPC team (OSEL/CDRH/FDA) for providing help with experiments, Mohammad Akhonda (OSEL/CDRH/FDA) for help with data release, and Andrea Kim (OSEL/CDRH/FDA) for rendering visualizations of the 3D breast model. This is a contribution of the US Food and Drug Administration and is not subject to copyright. The mention of commercial products herein is not to be construed as either an actual or implied endorsement of such products by the Department of Health and Human Services.

§ SUPPLEMENTARY MATERIAL
§.§ Data Availability
M-SYNTH and code for processing can be found at <https://github.com/DIDSR/msynth-release>. Please follow the instructions on GitHub to download files from Huggingface. M-SYNTH is organized into a directory structure that indicates the parameters. The folder contains image files imaged with the specified parameters.
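A minimal sketch of loading one released projection, described in more detail next (the reader libraries and the file name below are our own choices and placeholders, not part of the release):

import SimpleITK as sitk
import pydicom
import numpy as np

def read_mhd(path_mhd):
    img = sitk.ReadImage(path_mhd)        # .mhd header plus the companion .raw file
    return sitk.GetArrayFromImage(img)    # numpy array of pixel values

def read_dcm(path_dcm):
    return pydicom.dcmread(path_dcm).pixel_array

arr = read_mhd("example_projection.mhd")  # placeholder file name
print(arr.shape, arr.dtype, np.percentile(arr, [1, 99]))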
Each folder contains mammogram datathat can be read from .raw format (.mhd contains supporting data), or DICOM (.dcm) format. Note that only examples with oddcontain lesions, others do not. Coordinates of lesions can be found in .loc files. For instance:contains a lesion-present breast example with mass size (radius) of 5.0 mm (approximate, as the mass is not perfectly spherical), mass density 1.0, dose (# histories) 1.02×10^10, and heterogeneously dense breast density. Code and dataset is released with the Creative Commons 1.0 Universal License (CC0).§.§ Timing AnalysisWe now review the timing required to perform mass insertion and imaging. Timings were computed on a Tesla V100-PCIE GPU card with 32 GB RAM. In Table <ref>, we review the mean timing (in minutes) for mass insertion by breast density and mass size across each category of examples. We find that larger mass size requires a slight increase in time. However, breast density significantly affects timing because the reading and writing times are proportional to the number of voxels in the volume. In particular, lower density breasts, which are larger in size on the average, need more insertion time, with fatty breasts requiring nearly 3.5 as much time than dense breasts. Note that mass density is set during projection, therefore, it does not affect insertion time. In Table <ref>, we review the imaging time required for each breast density. The time varies from 2.84 min for most dense to 13.46 min to least dense breasts. Note that total time for creating of each DM image is either the imaging time (no mass inserted) or imaging + mass insertion times. Given our high performance cluster with access to multiple GPUs (where each example requires access to one GPU), we were able to generate the complete dataset in about two weeks. §.§ Rendering of Breast PhantomsAdditional renderings of the breast phantoms generated for the study are shown in Figure <ref>, demonstrating a high level of detail and anatomical variability within and among models.§.§ Real and Synthetic Image Similarity Assessment In order to investigate the similarity in terms of low-level pixel distributions between the real patient (INBreast) and synthetic (M-SYNTH) datasets, we estimated the first five statistical moments (mean, variance, skewness, kurtosis, and hyperskewness). Although there is a differences between synthetic and real examples, the distributions and ranges are reasonably aligned. §.§ Additional Subgroup Analysis§.§.§ Mass Size and Density EffectsWe further study the impact of generalization of the training dataset on the performance of mass detection. In Figure <ref>a, we train the models on individual mass sizes, as well as on all the sizes. The training mass density of 1.06 and relative radiation dose of 100% are kept constant. Each model is trained and tested on the same breast density that is given on top of each figure, with the test mass size and mass density as shown. We find that the models trained on all sizes (dashed lines) have an equal or better performance on small masses (i.e., 5 mm) than the models trained on a specific mass radius (solid lines) (except for scattered breast density). However the models trained on all sizes generalize worse to the larger masses, compared to the models trained and tested on the same mass size. Similarly, in Figure <ref>b, we train the models on individual mass densities, as well as on all the mass densities. The training mass size of 7 mm and relative radiation dose of 100% are kept constant. 
Each model is trained and tested on the same breast density that is given on top of each figure, with the test mass density and mass size as shown. We find that in most of the cases, the models trained on all the mass densities (dashed lines) result in worse performance than the models trained on a specific mass density (solid lines), specially as the test mass size increases. Thus, these models are not able to generalize well to masses with different densities on the testing dataset.§.§.§ Network Architecture EffectsIn order to evaluate the effect of the AI enabled device, we repeat the experiments with additional model architectures of vit_small_patch16_224 and vgg_16. As shown in Figures <ref> and <ref>, using different models results in similar results and has minimal impact of the outcome of the experiments. | http://arxiv.org/abs/2310.18494v1 | {
"authors": [
"Elena Sizikova",
"Niloufar Saharkhiz",
"Diksha Sharma",
"Miguel Lago",
"Berkman Sahiner",
"Jana G. Delfino",
"Aldo Badano"
],
"categories": [
"eess.IV",
"cs.CV"
],
"primary_category": "eess.IV",
"published": "20231027211430",
"title": "Knowledge-based in silico models and dataset for the comparative evaluation of mammography AI for a range of breast characteristics, lesion conspicuities and doses"
} |
Quantum chemical methods dealing with challenging systems while retaining low computational costs have attracted attention. In particular, many efforts have been devoted to developing new methods based on second-order perturbation theory, which may be the simplest correlated method beyond Hartree-Fock. We have recently developed a self-consistent perturbation theory named one-body Møller-Plesset second-order perturbation theory (OBMP2) and shown that it can resolve issues caused by the non-iterative nature of standard perturbation theory. In the present work, we extend the method by introducing the spin-opposite scaling to the double-excitation amplitudes, resulting in the O2BMP2 method. We assess the O2BMP2 performance on the triple-bond N_2 dissociation, singlet-triplet gaps, and ionization potentials. O2BMP2 performs much better than standard MP2 and reaches the accuracy of coupled-cluster methods in all cases considered in this work.

Second-order Møller-Plesset perturbation theory (MP2) on Hartree-Fock (HF) orbitals<cit.> is the simplest correlated wave-function method. Its accuracy depends on the quality of reference wave functions, in particular for open-shell systems<cit.>. To bypass the issue of poor references, many research groups have actively developed orbital-optimized MP2 (OOMP2) and its spin-scaled variants<cit.>. In these methods, orbitals are optimized by minimizing the Hylleraas functional. OOMP2 and its variants have outperformed standard MP2 calculations for numerous properties. Apart from wave-function methods, double-hybrid density functional (DHF) theory, in which a scaled perturbative correction is performed on top of hybrid density functional calculations, has attracted significant attention. These functionals are considered the fifth rung of the DFT Jacob's ladder and have been shown to outperform conventional functionals in many cases<cit.>. It is well-known that perturbation theory is inadequate for multi-reference systems, and the perturbative correlation energy diverges due to small orbital-energy gaps. To eliminate these issues, several regularization schemes that modify the MP2 amplitude with a function damping any divergent or excessively large correlations have been recently developed <cit.>. It has been shown that regularized (orbital-optimized) MP2<cit.> can outperform standard MP2 across relevant chemical problems. In the meantime, numerous efforts have been devoted to developing alternative approaches to resolve the abovementioned issues. These methods include Brillouin-Wigner perturbation theory (BWPT) and its size-consistent variant <cit.>, and retaining the excitation degree MP2 (REMP2) and its orbital-optimized variant <cit.>. Empirical spin-scaled methods, such as spin-component scaling (SCS) and spin-opposite scaling (SOS), have also been widely used to improve the performance of perturbation theory <cit.>. Noticeably, SOS-MP2 not only often improves the accuracy of MP2, but it is also less costly (N^4) than standard MP2 (N^5). In general, developing new methods based on low-cost perturbation theory that are able to deal with challenging systems is still highly desirable. Recently, we have developed a new self-consistent perturbation theory named one-body MP2 (OBMP2)<cit.>.
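For orientation, the conventional closed-shell MP2 energy and its spin-opposite-scaled variant discussed above can be evaluated in a few lines with PySCF and NumPy. The sketch below follows the textbook spin-component decomposition (it is not the authors' OBMP2/O2BMP2 implementation), with an illustrative N_2 geometry and basis set:

import numpy as np
from pyscf import gto, scf, mp, ao2mo

mol = gto.M(atom="N 0 0 0; N 0 0 1.098", basis="cc-pvdz")   # near-equilibrium N2
mf = scf.RHF(mol).run()

nocc = mol.nelectron // 2
e, C = mf.mo_energy, mf.mo_coeff
Co, Cv = C[:, :nocc], C[:, nocc:]
nvir = Cv.shape[1]

# (ia|jb) two-electron integrals over occupied/virtual molecular orbitals
ovov = ao2mo.general(mol, (Co, Cv, Co, Cv), compact=False)
ovov = ovov.reshape(nocc, nvir, nocc, nvir)

# denominators eps_i + eps_j - eps_a - eps_b entering the MP2 amplitude below
eo, ev = e[:nocc], e[nocc:]
d = (eo[:, None, None, None] - ev[None, :, None, None]
     + eo[None, None, :, None] - ev[None, None, None, :])
t = ovov / d

e_os = np.einsum("iajb,iajb->", t, ovov)                        # opposite-spin part
e_ss = np.einsum("iajb,iajb->", t, ovov - ovov.swapaxes(1, 3))  # same-spin part

print("MP2 correlation energy :", e_os + e_ss)
print("check against PySCF    :", mp.MP2(mf).run().e_corr)
print("SOS-MP2 (c_os = 1.3)   :", 1.3 * e_os)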
The key idea of OBMP2 is the use of canonical transformation<cit.> followed by the cumulant approximation<cit.> to derive an effective one-body Hamiltonian. The resulting OBMP2 Hamiltonian is a sum of the standard Fock operator and a one-body correlation MP2 potential. Molecular orbitals and orbital energies are relaxed in the presence of correlation by diagonalizing correlated Fock matrix. The double-excitation MP2 amplitudes are then updated using those new molecular orbitals and orbital energies, resulting in a self-consistency. We have shown that the self-consistency of OBMP2 can resolve issues caused by the non-iterative nature of standard MP2 calculations for open-shell systems <cit.>. It is also surprising that OBMP2 does not suddenly break down in bond stretching <cit.>. In this work, we present the extension of OBMP2 by introducing SOS into the double-excitation amplitudes, denoted as the O2BMP2 method. We assess the O2BMP2 performance on the triple-bond N_2 dissociation curve, singlet-triplet (ST) gaps of various sets of molecules, and ionization potentials (IPs) obtained from the Koopmans' approximation. We found that O2BMP2 can dramatically outperform standard MP2 and reach the accuracy of coupled-cluster methods in all cases considered in this work. Also, O2BMP2performs better than OBMP2 in most cases.Details of OBMP2 theory are presented in Refs. , and it is implemented in a local version of PySCF<cit.>. The OBMP2 Hamiltonian is derived through the canonical transformation <cit.>: Ĥ̅̂ = e^Â^†Ĥ e^Â, with the molecular Hamiltonian asĤ =∑_pqh^p_qâ_p^q + 12∑_pqrsg^p r_q sâ_p r^q s. Here, {p, q, r, …} indices referring to general (all) spin orbitals. One- and two-body second-quantized operators â_p^q and â_pq^rs are given by â_p^q = â^†_pâ_q and â_pq^rs = â^†_pâ^†_qâ_sâ_r. h_pq and v_pq^rs are one- and two-electron integrals, respectively. In OBMP2, the anti-Hermitian excited operator  includes only double excitations. = Â_D = 12∑_ij^occ∑_ab^vir T_ij^ab(â_ab^ij - â_ij^ab), with the MP2 amplitudeT_i j^a b =g_i j^a b/ϵ_i + ϵ_j - ϵ_a - ϵ_b , where {i, j, k, …} indices refer to occupied (occ) spin orbitals and {a, b, c, …} indices refer to virtual (vir) spin orbitals; ϵ_i is the orbital energy of the spin-orbital i. The OBMP2 Hamiltonian is defined as Ĥ_OBMP2 = Ĥ_HF + [Ĥ,Â_D]_1 + 12[[F̂,Â_D],Â_D]_1 = Ĥ_HF + v̂_OBMP2. In Eq. <ref>, commutators with the subscription 1, […]_1, involve one-body operators and constants that are reduced from many-body operators using the cumulant approximation<cit.>. Ĥ_HF is standard HF Hamiltonian and v̂_OBMP2 is a correlated potential composing of one-body operators with the working expression given by v̂_OBMP2 =T_i j^a b[ f_a^i Ω̂( â_j^b)+ g_a b^i p Ω̂( â_j^p) - g^a q_i j Ω̂( â^b_q) ] - 2 T_i j^a bg^i j_a b +f_a^iT_i j^a bT_j k^b c Ω̂(â_c^k) +f_c^aT_i j^a bT_i l^c b Ω̂(â^l_j) + f_c^aT_i j^a bT_k j^c b Ω̂(â^k_i) -f^k_iT_i j^a bT_k l^a b Ω̂(â_l^j)-f^p_iT_i j^a bT_k j^a b Ω̂(â^p_k)+f^k_i T_i j^a bT_k j^a d Ω̂(â_b^d) +f_k^iT_i j^a bT_k j^c b Ω̂(â_a^c) -f_c^aT_i j^a bT_i j^c d Ω̂(â^b_d)- f_p^aT_i j^a bT_i j^c b Ω̂(â^p_c)- 2f_a^cT_i j^a bT_i j^c b +2f_i^kT_i j^a bT_k j^a b. with T_ij^ab = T_ij^ab - T_ji^ab, the symmetrization operator Ω̂( â^p_q) = â^p_q+ â^q_p, and the Fock matrix f_p^q = h_p^q + ∑_i^occ(g^p i_q i - g^p i_i q). We rewrite Ĥ_OBMP2 (Eq.<ref>) in a similar form to standard HF as follows:Ĥ_OBMP2 =F̂̅̂ + C̅,where the constant C̅ is a sum of terms without excitation operators. 
F̂̅̂ is the correlated Fock operator, F̂̅̂ =f̅^p_qâ_p^q, with correlated Fock matrix f̅^p_q written asf̅^p_q = f^p_q + v^p_q. v^p_q serves as the correlation potential altering the uncorrelated HF picture. The MO coefficients and energies then correspond to eigenvectors and eigenvalues of f̅^p_q.Grimme<cit.> found that the MP2 performance can be dramatically improved by separating and scalingsame-spin (SS) and opposite-spin (OS) contributions to the correlation energy. Later, Jung et al. <cit.> extended Grimme's method by only considering the opposite-scaling component, SOS-MP2. Lochan and Head-Gordon <cit.> further developed the optimized second-order opposite-spin (O2) method by optimizing orbitals with the SOS-MP2 energy. Kossmann and Neese <cit.> introduced spin-component scaling to the OO-MP2 method by scaling the SS and OS contributions to the MP2 amplitude. All these studies showed that SOS can significantly improve the performance of conventional counterparts. In the present work, we extend the OBMP2 method by incorporating the spin-opposite scaling c_osinto the double-excitation amplitude (Eq. <ref>) T_i j^a b =c_osg_i j^a b/ϵ_i + ϵ_j - ϵ_a - ϵ_b .The optimal value of c_os for SOS-MP2 were found to be 1.3 <cit.>. In the current work, we will assess three values c_os = 1.1, 1.2, and1.3 to find the best scaling for O2BMP2.In Ref. , we showed that the self-consistency of OBMP2 helps it avoid the divergence in energy curves present in standard MP2 for H_2 and LiH. We now consider a more challenging system, N_2. We use NEVPT2 with an active space of (8e,8o) as the reference. Energies relative to the equilibrium energies of each method are presented in Figure <ref>. Unsurprisingly, standard MP2 quickly breaks down, whereas OBMP2 yields a better energy curve. However, beyond the equilibrium bond length, the OBMP2 curve is far below the NEVPT2 reference. O2BMP2, with all the scaling factors considered here, can improve the energy curve upon OBMP2 and make curves close to NEVPT2. Among these factors, 1.2 may perform best, particularly at long distances. Let us now assess the performance of our methods on the prediction of singlet-triplet (ST) gaps. We start with a test set including 38 small molecules. We first examine the spin contamination presented in Figure <ref> in Supporting Information (SI). In the upper panel, we present some molecules for that HF severely suffers from spin contamination. We can see that while MP2 cannot eliminate the spin contamination in these cases, OBMP2 yields negligible spin contamination. In the lower panel, we plot the change in spin contamination with respect to OBMP2 iterations for two molecules CO and CO_2. The spin contamination at the first iteration is large and significantly reduced when the loop converges, implying the importance of self-consistency in eliminating the spin contamination. In Figure <ref>, we plot mean absolute deviations (MADs) relative to CCSD(T) reference of ST gaps from different methods, including MP2, SOS-MP2 with c_OS = 1.2, OBMP2, and O2BMP2 with varying values of c_OS. ST gaps of different methods are given in SI. We can see that MP2 and SOS-MP2 give MADs larger than 0.3 eV, whereas OBMP2 and O2BMP2 with three scaling factors yield MADs smaller than 0.15 eV and comparable to CCSD. For this set of small molecules, O2BMP2 is only marginally better than OBMP2.We now consider some medium-size organic radicals adopted from Ref.. All results are shown in Table <ref>. 
We compare our results to experimental values and other calculated results, including MP2, SOS-MP2(c_OS = 1.2), CC2, and CCSD. MP2 and SOS-MP2 yield significant errors relative to the experiment. While OBMP2 can dramatically improve MP2 ST gaps, its errors are still quite large. Interestingly, O2BMP2 with c_os = 1.2 performs better than OBMP2 with a smaller MAD (0.19 eV). The next set consists of 10aryl carbenes adopted from Ref. . Determining the ST gap of carbenes is a difficult task for both experiment and theory. Among classes of carbenes, aryl carbenes have attracted extensive attention due to the accessibility of the triplet state. It has been evident that HF theory fails to accurately reproduce ST gaps of carbenes, whereas DFT cannot guarantee consistently accurate predictions. One of the reasons for the failure of HF and DFT in the ST gap prediction of carbenes may be the large spin contamination. As shown in Figure <ref>, both HF and MP2 severely suffer from spin contamination. Thanks to the self-consistency, OBMP2 can significantly reduce the spin contamination. To further see the importance of self-consistency, we plot in Figure <ref> spin densities of three aryl carbenes 1, 4, and 9. We can see that MP2 predicts spin densities spreading over whole molecules, which may lead to large spin contamination. On the other hand, OBMP2 predicts spin densities localizing on the aryl group, which is consistent with CCSD prediction. The ST gaps of aryl carbenes predicted by MP2, OBMP2, CCSD, and CCSD(T) are presented in Figure <ref>. We use the scaling c_os = 1.2 for O2BMP2. Unsurprisingly, CCSD results are close to the CCSD(T) reference. On the other hand, while HF underestimates the ST gaps, MP2 significantly overestimates them. Our methods yield results very close to higher-cost methods, CCSD and CCSD(T).The last set we used to test the OBMP2 and O2BMP2 prediction of ST gap is polyaromatic hydrocarbons (PAHs). The prediction of accurate ST gaps of polyaromatic hydrocarbons has been challenging for computational methods<cit.>. While the ST gaps of linear PAHs have shown an exponential decay with system size, those of the non-linear PAHs are marginally sensitive to system size<cit.>. Unfortunately, the latter has not been observed by single-reference methods like DFT <cit.>. Dey and Gosh have attributed the failure of DFT to the multi-reference nature of each state of non-linear PAHs <cit.>. In the current work, we consider polyacenes (linear PAHs) and helicene (non-linear PAHs) with geometries taken from Ref. . ST gaps from OBMP2 and MP2 compared to the density matrix renormalization group (DMRG) are shown in Figure <ref>. For both cases,while MP2 errors relative to DMRG are significant, OBMP2 and O2BMP2/1.2 can dramatically improve ST gaps of PAHs. Our methods predict ST gaps close to DMRG for polyacenes, whereas their errors are still quite significant for helicene. It could be because of the stronger multi-reference nature present in helicene. It is worth stressing that OBMP2 and O2BMP2 can reproduceDMRG prediction on the less dependence of ST gaps on the system size for helicene, which has not been observed by single-reference methods like DFT<cit.>. In Figure <ref>, we plot spin densities of helicene[3] and helicene[4]. While MP2 spin densities are delocalized over the structures, OBMP2 ones are localized along the preferentially stable double bonds, entirely consistent with DMRG prediction <cit.>. 
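As a worked illustration of how such gaps and the spin-contamination diagnostic are obtained at a conventional level of theory, the following PySCF sketch evaluates a vertical singlet-triplet gap at UHF/UMP2 together with the triplet's expectation value of S^2; the CH_2 geometry is only indicative, and the workflow is ours, not the OBMP2/O2BMP2 code used for the results above:

from pyscf import gto, scf, mp

HARTREE2EV = 27.2114
geom = "C 0 0 0; H 0 0.99 0.59; H 0 -0.99 0.59"   # rough CH2 geometry, illustrative

energies = {}
for label, spin in (("singlet", 0), ("triplet", 2)):
    mol = gto.M(atom=geom, basis="cc-pvdz", spin=spin)
    mf = scf.UHF(mol).run()
    s2, mult = mf.spin_square()
    print(f"{label}: <S^2> = {s2:.3f}")            # spin-contamination check
    energies[label] = mp.UMP2(mf).run().e_tot

gap = (energies["triplet"] - energies["singlet"]) * HARTREE2EV
print(f"UMP2 vertical E(T) - E(S): {gap:.2f} eV")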
We now move to assess the performance of our methods on the prediction of molecular IPs. Other previous studies showed that Koopmans' approximation with MP2 cannot give satisfactory accuracy in the prediction of IPs<cit.>. It is interesting to check whether our methods can achieve accurate IPs in the framework of Koopmans' approximation. We previously derived the formula of OBMP2 IPs within Koopmans' approximation in Ref. . In the current work, we implement it in the spin-unrestricted OBMP2 version, removing only one electron instead of a pair of electrons in the restricted version. We first consider a test set of 21 small molecules with 58 valence IPs. We report IP-EOM-CCSD and G0W0 with HF and DFT (PBE) references for comparison. For O2BMP2, we have tested three scaling factors c_os = 1.1, 1.2, and 1.3. Calculated and experimental IPs are given in SI. We show in Figure <ref> mean absolute deviations (MAD) and maximum absolute deviations (MAX) relative to experimental values. We can see that both G0W0 yield large MAD and MAX, whereas IP-EOM-CCSD can significantly reduce MAD to 0.23 eV. Although OBMP2 performs better than G0W0, its errors are still large. Regarding O2BMP2, unlike ST gaps,IPs are sensitive to the scaling factor c_os. Among the three values, 1.1 gives the smallest errors with MDA comparable to that of IP-EOM-CCSD and MAX even smaller than IP-EOM-CCSD.We finally evaluate the IPs of 10 organic acceptor molecules with medium size adopted fromRef. . The above assessment for small molecules shows that O2BMP2/1.1 performs best. We thus report its results in comparison with IP-EOM-CCSD and G0W0@HF. All results are summarized in Table <ref>. G0W0@HF vastly overestimates IPs of acceptor molecules with MAD up to 0.46 eV, consistent with the error found in Ref. . IP-EOM-CCSD yields results close to experimental values with MAD of 0.17 eV. Surprisingly, O2BMP2/1.1 can reach an accuracy similar to EOM-CCSD with MAD of 0.16 eV. The maximum error of O2BMP2/1.1 is 0.3 eV for mDCNB and benzonitrile molecules that may have a strong multi-reference nature. In summary, we have extended our recently developed method, OBMP2, by introducing the spin-opposite scaling to the double-excitation amplitudes, termed O2BMP2. We assess the O2BMP2 performance on the triple-bond N_2 dissociation, ST gaps, and IPs of medium-size organic compounds. O2BMP2 performs much better than standard MP2 and reaches the accuracy of coupled-cluster methods in all cases considered in this work. Our method is then expected to help tackle realistic, challenging systems with large sizes. Working on further reducing computational costs of OBMP2 and O2BMP2 is in progress. § SUPPORTING INFORMATION TO:REACHING HIGH ACCURACY FOR ENERGETIC PROPERTIES AT SECOND-ORDER PERTURBATION COST BY MERGING SELF-CONSISTENCY AND SPIN-OPPOSITE SCALING Nhan Tri Tran University of Science, Vietnam National University, Ho Chi Minh City, VietnamHoang Thanh Nguyen Institute of Applied Mechanics and Informatics, Vietnam Academy of Science and Technology, Ho Chi Minh City, VietnamLan Nguyen Tran* Email: [email protected] Department of Physics, International University, Ho Chi Minh City, Vietnam Vietnam National University, Ho Chi Minh City, Vietnam File supporting-information.xls includes: * Singlet-triplet gaps (in eV) of 39 small molecules* Valence IPs (in eV) of 21 small molecules. | http://arxiv.org/abs/2310.18154v1 | {
"authors": [
"Nhan Tri Tran",
"Hoang Thanh Nguyen",
"Lan Nguyen Tran"
],
"categories": [
"physics.chem-ph"
],
"primary_category": "physics.chem-ph",
"published": "20231027140226",
"title": "Reaching high accuracy for energetic properties at second-order perturbation cost by merging self-consistency and spin-opposite scaling"
} |
| http://arxiv.org/abs/2310.18227v1 | {
"authors": [
"Molly Gibbins",
"Arash Jafarizadeh",
"Adam Smith",
"Bruno Bertini"
],
"categories": [
"quant-ph",
"cond-mat.stat-mech",
"cond-mat.str-el",
"nlin.SI"
],
"primary_category": "quant-ph",
"published": "20231027160219",
"title": "Quench dynamics in lattices above one dimension: the free fermionic case"
} |
Resource Allocation for Near-Field Communications: Fundamentals, Tools, and Outlooks Bokai Xu, Jiayi Zhang, Senior Member, IEEE, Hongyang Du, Zhe Wang, Yuanwei Liu, Senior Member, IEEE, Dusit Niyato, Fellow, IEEE, Bo Ai, Fellow, IEEE, and Khaled B. Letaief, Fellow, IEEE B. Xu, J. Zhang, Z. Wang and B. Ai are with Beijing Jiaotong University; H. Du, and D. Niyato are with Nanyang Technological University; Y. Liu is with Queen Mary University of London; K. B. Letaief is with Hong Kong University of Science and Technology.January 14, 2024 =================================================================================================================================================================================================================================================================================================================================================================================================================================================================Extremely large-scale multiple-input-multiple-output (XL-MIMO) is a promising technology to achieve high spectral efficiency (SE) and energy efficiency (EE) in future wireless systems. The larger array aperture of XL-MIMO makes communication scenarios closer to the near-field region. Therefore, near-field resource allocation is essential in realizing the above key performance indicators (KPIs). Moreover, the overall performance of XL-MIMO systems heavily depends on the channel characteristics of the selected users, eliminating interference between users through beamforming, power control, etc. The above resource allocation issue constitutes a complex joint multi-objective optimization problem since many variables and parameters must be optimized, including the spatial degree of freedom, rate, power allocation, and transmission technique. In this article, we review the basic properties of near-field communications and focus on the corresponding “resource allocation” problems. First, we identify available resources in near-field communication systems and highlight their distinctions from far-field communications. Then, we summarize optimization tools, such as numerical techniques and machine learning methods, for addressing near-field resource allocation, emphasizing their strengths and limitations. Finally, several important research directions of near-field communications are pointed out for further investigation. Near-field communications, XL-MIMO, resource allocation, optimization algorithm, machine learning.§ INTRODUCTIONWith the advent of next-generation wireless networks beyond fifth generation (B5G) and sixth generation (6G), performance requirements will become increasingly stringent, including unprecedented data rates, high reliability, global coverage, and ultradense connectivity. Various physical antennas such as holographic multiple-input-multiple-output (MIMO), intelligent reflecting surfaces (IRS), and very large scale antennas, as well as millimeter wave, ultra-dense networks, and other new technologies, can make the above possible. Among numerous 6G technologies, the extremely large-scale MIMO (XL-MIMO) can provide high spectral efficiency (SE), high energy efficiency (EE), and reliable massive access <cit.>. 
Moreover, it has evolved far-field communications into near-field communications.To further optimize system performance, improve user experience, reduce energy consumption, and enhance anti-interference capability, radio resource allocation has become a core component in wireless communication networks. Additionally, due to the diversity and complexity of application scenarios and wireless environment, near-field communications resource allocation is also facing significant challenges.In contrast to traditional far-field assumptions, 6G will have a technical trend toward higher frequency migration and active/passive antenna deployment, which is expected to have fundamentally different electromagnetic propagation. Specifically, large aperture arrays may result in a Rayleigh distance exceeding the transmission distance <cit.>. Increasing the aperture of the antenna results in scatterers and users being located in the near-filed region of the antenna array. In general, near-field channels have the following typical characteristics:∙ Near-field Spherical Wave Properties: Spherical wavefronts are more accurate at describing phase differences in the near-field. Furthermore, due to the beam focusing capabilities of near-field transmissions, we can reliably communicate with multiple users in the same angular direction at different distances, which is impossible with conventional far-field beam steering <cit.>. ∙ Near-field Non-stationary Properties: Additionally, the large aperture has the non-stationarity property of only receiving signals from a relatively small area known as the visibility region (VR) in different parts of the array <cit.>. ∙ Near-field Electromagnetic Properties: Compared with traditional far-field massive MIMO communications, electromagnetic waves will gain new polarization flexibility and degree of freedom, and the mutual coupling effect between antennas will be intensified. Based on the above characteristics of the near-field channel, there are many resources to be allocated, including power control, beamforming, antenna selection, user scheduling, and channel estimation, mechanisms of which are different from far-field. The SE, EE, bit error rates (BER), user fairness, and quality-of-service (QoS) are common matrics for assessing far-field communications performance. Near-field communications not only improve performance in the above metrics but also bring new metrics such as mutual information, effective degrees of freedom (EDoF), and beam depth (BD), etc. Near-field communications have different optimization objectives and constraints due to differences in resource allocation issues and evaluation metrics. Consequently, corresponding optimization tools and methods need to be considered. In Fig. <ref>, we illustrate some typical applications of near-field communications for improving the performance of wireless systems and compare resources in near-field and far-field.Due to the new characteristics of near-field communications, specifically for XL-MIMO, traditional far-field resource allocation schemes are no longer applicable. At the same time, many new antenna structures and signal processing architectures were introduced, so the optimization algorithms needed to be redesigned. To fully utilize near-field resources, joint optimization designs are often required, making the optimization solution extremely difficult and often unable to obtain a closed form solution. 
Therefore, a typical approach for high-dimensional joint resource design problems is to transform non-convex optimization problems into convex optimization problems and then introduce algorithms such as alternating optimization, deep reinforcement learning (DRL), and other optimization tools to solve them.With the focus on near-field resource allocation, the main contributions of this paper can be summarized as follows:∙ We discuss the channel characteristics of near-field communications. We also elucidate the disparities between near-field and far-field resource allocation. Furthermore, we explore the challenges associated with resource allocation in near-field communications and propose possible solutions.∙ We review approaches for near-field resource allocation. The major optimization tools include traditional numerical optimization, heuristic optimization, and machine learning (ML) optimization. We then present the process of solving general near-field resource allocation problems, highlighting the strengths and limitations between traditional numerical optimization algorithms and ML algorithms. Moreover, the characteristics of the algorithm and the types of problems that each optimization tool can handle are revealed.∙ We simulate and compare traditional optimization algorithms with ML algorithms.Our investigated solutions can serve as a novel framework for near-field modeling and performance optimization. Finally, important research directions are highlighted for further studies. § CHARACTERISTICS OF NEAR-FIELD COMMUNICATIONS In this section, we present new properties of near-field communications, encompassing non-stationarity, spherical beam focusing, and electromagnetic characteristics. We then introduce the near-field resource allocation framework. Furthermore, we propose the main challenges and possible solutions to improve the performance of near-field resource allocation utilizing near-field characteristics.§.§ Near-field Non-stationary PropertiesXL-MIMO near-field communications exhibit a characteristic of non-stationarity. Different parts of the array can observe distinct terminals when the array size becomes very large since the energy of every terminal is focused on a specific area of the array. Due to the limited channel power of outside antennas, the VR-based channel model allows radio frequency (RF) chains to assign VR to each user dynamically. Consequently, a dynamic architecture with low complexity could provide a similar level of performance as a fully digital system, but at a lower hardware cost. This makes it possible to reduce the number of RF chains and signal processing units to reduce power consumption and improve SE by scheduling users in different VRs. A major challenge for near-field resource allocation caused by the non-stationary properties is how to carry out user scheduling and antenna selection to improve SE and EE in non-stationary channels?§.§ Near-field Spherical Wave Properties As the electromagnetic wave propagation model changes from far-field plane wave to near-field spherical wave, multiple propagation angles are available between the transceiver array, which can support parallel transmission of multi-stream data and significantly improve the gain of degrees of freedom. As shown in Fig. <ref>, the spherical wave propagation in near-field enables beam focusing in the polar-domain. By exploiting this property, the beam can be focused at a specific angle and at a specific distance. 
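The following NumPy sketch (all parameters illustrative) makes this concrete for a uniform linear array: a matched filter built from the spherical-wavefront response focuses energy around a chosen distance, whereas the far-field (plane-wave) steering vector pointed at the same angle has no range selectivity inside the Rayleigh distance.

import numpy as np

c, fc = 3e8, 30e9                         # illustrative 30 GHz carrier
lam = c / fc
N = 512                                   # number of antennas
d = lam / 2
y = (np.arange(N) - (N - 1) / 2) * d      # element coordinates along the array
D = (N - 1) * d
print(f"aperture D = {D:.2f} m, Rayleigh distance 2D^2/lambda = {2 * D**2 / lam:.0f} m")

def a_near(r, theta):
    # exact spherical-wave response: phase follows the true element distances
    rn = np.sqrt(r**2 + y**2 - 2 * r * y * np.sin(theta))
    return np.exp(-1j * 2 * np.pi * (rn - r) / lam) / np.sqrt(N)

def a_far(theta):
    # plane-wave approximation: phase linear in the element coordinate
    return np.exp(-1j * 2 * np.pi * y * np.sin(theta) / lam) / np.sqrt(N)

theta0, r0 = np.deg2rad(20), 10.0         # focus at 20 degrees and 10 m
w_near, w_far = a_near(r0, theta0), a_far(theta0)

for r in (5, 10, 20, 50, 200):
    g_near = abs(np.vdot(w_near, a_near(r, theta0)))**2
    g_far = abs(np.vdot(w_far, a_near(r, theta0)))**2
    print(f"r = {r:4d} m: focusing gain {g_near:.2f}, far-field steering gain {g_far:.2f}")

With these numbers the Rayleigh distance is on the order of a kilometre, so all listed ranges lie in the radiative near field: the focusing beamformer peaks around r_0, while the plane-wave beam loses most of its gain.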
Moreover, the far-field channel is sparse in the angle-domain, but suffers energy leakage in the polar-domain. The near-field channel has better sparsity in the polar-domain than in the angle-domain. Regarding the near-field spherical wave property, the challenge lies in how can beamforming improve SE, EE, and EDoF under the near-field spherical wave? §.§ Near-field Electromagnetic PropertiesAnother new feature of near-field communications is the introduction of electromagnetic properties due to the change in the aperture of the antenna array. When the aperture of a linear array tends to be spatially continuous, i.e., holographic MIMO, the propagation mode, polarization, and spatial freedom of the electromagnetic wave in space will bring about great differences. In contrast to far-field communications, the influence of antennas on the wireless channel propagation environment is further amplified. Consequently, it becomes imperative to incorporate microwave network theory, impedance matching networks, and other methodologies to jointly model antennas and propagation channels. In addition, the compact antenna array has more sampling points in space and can capture more wavenumber domain information. By utilizing this mechanism, spatial electromagnetic waves can be exploited to achieve extreme spatial resolution and high SE. Therefore, another major challenge of near-field communications arising from these unique electromagnetic properties is how to construct channel models for the free-space propagation environment (LoS) and the arbitrary scattered propagation environment (NLoS) scenarios? In addition, we need to design corresponding beamforming techniques to reach the fundamental limits with an acceptable complexity based on the above channel models. In Fig. <ref>, we summarize the framework for near-field resource allocation and propose three challenges and possible solutions.§ OPTIMIZATION TOOLS FOR RESOURCE ALLOCATION IN NEAR-FIELD COMMUNICATIONSIn this section, we delve into optimization tools for near-field resource allocation and classify them into three distinct categories: numerical optimization based methods, heuristic optimization based methods, and ML based methods. Subsequently, we explore the optimization problems to which these algorithms can be applied, examine resource allocation scenarios, and analyze the key characteristics of these algorithms. In Fig. <ref>, we summarize the near-field resource allocation problems and corresponding suitable optimization methods.§.§ Numerical Optimization based ApproachDue to its reliability, versatility, and solid mathematical foundation, traditional numerical optimization algorithms become valuable tools for solving resource allocation problems and have found widespread applications in telecommunications <cit.>. Due to the unique nature of resource allocation problems and the diverse structures of antenna arrays, new optimization problems have to be formulated. Depending on the specific forms of the objectives and constraints, one can select appropriate numerical optimization tools to address them. 1) Alternating Optimization: One type of alternating optimization is the Alternating Minimization (AM) algorithm, commonly used in collaborative filtering and matrix factorization problems. In contrast to the traditional far-field beam steering that can only point in a specific direction, the near-field spherical wave propagation brings the possibility of focusing the beam at a particular position. 
Furthermore, near-field beamforming can be transformed into a combinatorial optimization problem and solved using the AM algorithm <cit.>. According to <cit.>, the authors proposed an alternating design algorithm to jointly optimize the dynamic metasurface antennas (DMAs) configuration and beamforming, i.e., configurable weights of the DMAs are optimized first, and digital precoding vectors are later. Additionally, beam focusing based on the proposed algorithms can achieve notable gains in SE compared to designs assuming conventional far-field operation.More significantly, it is shown that the SE of DMAs is comparable to that of fully digital architectures.2) Riemannian manifold: Optimization over a Riemannian manifold is locally analogous to optimization over the Euclidean space with smooth constraints. Therefore, a well-developed conjugate gradient algorithm in Euclidean spaces can also be applied to Riemannian manifolds. Compared with far-field beam training, near-field beam training needs to search for angle parameters and additional distance-related factors to accurately focus the beam, which leads to excessive training overhead and computational complexity. The authors in <cit.> modeled it as a combinatorial optimization problem with non-convex constraints in constant mode and applied the manifold optimization method to solve it. Despite slight beamforming performance degradation, training overhead can be reduced by over 99% compared with the bottom-layer overall codewords exhaustive searching. 3) Search Algorithm:Near-field resource allocation search schemes include greedy search, graph search, graph matching, etc. For instance, the authors in <cit.> designed an efficient greedy algorithm to solve mixed integer non-convex optimization problems in near-field user grouping and antenna selection. Although this algorithm has high complexity, it can significantly improve system spectral efficiency when users are densely distributed. Moreover, an adjustable RF chain beamforming structure was proposed in <cit.> to fully utilize the additional spatial degrees of freedom introduced by the near-field spherical wave to enhance the capacity of a single-user system. In addition, they proposed a low-complexity algorithm to optimize the selection matrix, which approximates exhaustive searching.4) Other Approaches: We illustrate additional optimization algorithms employed in near-field resource allocation. Near-field communications use a high degree of freedom and spherical wave introduced by the very large scale antenna to meet various QoS requirements of users and maintain their fairness. The problem is usually modeled as a non-convex joint high-dimensional resource optimization problem, which cannot be solved by traditional convex optimization methods. However, the problem can be transformed into a convex optimization problem by stochastic successive convex approximation theory, and then solved by using existing convex optimization tools. Additionally, there are combinatorial optimization problems related to near-field resource allocation, such as quadratically constrained quadratic program (QCQP) problems. Due to the large antenna array size, traditional semidefinite relaxation (SDR) optimization methods require an enormous computation burden, which is no longer applicable. Therefore, this necessitates low-complexity strategies, such as the constrained least square (LS) method or ML optimization tools. 
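As a concrete illustration of the manifold machinery mentioned under 2), the sketch below runs a plain Riemannian gradient ascent on the unit-modulus (per-antenna constant-envelope) constraint set for an analog beamformer; the objective, the random stand-in response vectors, and the step size are our own illustrative choices and do not reproduce any of the cited algorithms.

import numpy as np

rng = np.random.default_rng(0)
N, K = 256, 4
# columns of A play the role of array response vectors of K users (stand-ins here)
A = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2 * N)

def objective(w):
    return np.linalg.norm(A.conj().T @ w) ** 2     # sum of beamforming gains

w = np.exp(1j * rng.uniform(0, 2 * np.pi, N))      # feasible unit-modulus start
print("objective at start:", objective(w))

mu = 0.5
for _ in range(200):
    g = A @ (A.conj().T @ w)                       # Euclidean (Wirtinger) gradient
    rgrad = g - np.real(g * w.conj()) * w          # project onto the tangent space
    w = w + mu * rgrad
    w = w / np.abs(w)                              # retraction back to |w_n| = 1

print("objective after ascent:", objective(w))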
§.§ Heuristic Optimization based ApproachHeuristic optimization-based methods cover a variety of algorithms for solving complex non-convex and combinatorial optimization problems.One widely adopted metaheuristic approach for solving the resource allocation problem in antenna selection is the Genetic Algorithms (GA). This technique incorporates multiple search phases to effectively explore the feasible solution space and exploit the favorable properties of candidate solutions, aiming to identify promising regions within the feasible subspaces. Unlike exact optimization methods, GA does not require convex objective functions or constraints. Moreover, the computational complexity can be adjusted to match the available computational resources by tuning the input parameters and the number of iterations. However, it should be noted that, similar to other meta-heuristics, GA does not guarantee to find the optimal solution <cit.>. GA is also not good with problems with constraints, and it may converge very slowly. §.§ Machine Learning based ApproachML has emerged as one of the most dynamic areas in contemporary signal processing. In contrast to the traditional iterative methods, where an algorithm is executed over multiple iterations, the concept of deep unfolding represents these iterations as a sequence of identical processing layers.∙ Deep learning based approach: Compared to far-field channels, the near-field channel does not exhibit sparsity in the angle domain but exhibits sparsity in the polarization domain. Therefore, codebook design must consider both the angle and distance between the transmitter and the receiver, which differs from the far-field domain. The authors in <cit.> proposed a deep learning-based beam training technique using two neural networks to estimate the optimal angle and distance for near-field beams. Moreover, the improved scheme can increase effective achievable rates and reduce the pilot overhead by approximately 95% compared to the beam sweeping technique.∙ Deep reinforcement learning based approach: For high-dimensional resource allocation problems such as time, space, frequency, and degree of freedom in near-field, traditional numerical optimization algorithms often cannot solve this kind of joint optimization design problem. The joint design method based on reinforcement learning algorithm can dynamically adjust its own decision-making strategy through the interaction between the agent and the environment, and obtain the optimal expected reward.The authors in <cit.> proposed a framework to optimize the codebook beam patterns based on the environment. Furthermore, the proposed solution produces beams with similar SNR and approaches ideal beam shapes. For near-field antenna selection and power control problems, single-agent reinforcement learning is no longer suitable for such complex scenarios when the user has mobility or is served by multiple base stations. In contrast, multi-agent reinforcement learning performs better in this combinatorial optimization problem <cit.>. Moreover, it can reduce CSI interaction, reduce complexity and improve energy efficiency by using part of information to exchange and share information among users. To sum up, we present a list of main resource allocation problems and optimization tools in Table <ref> for ease of reference. § USE CASESThis section evaluates and demonstrates near-field resource allocation designs and optimization. 
We consider two scenarios for near-field resource allocation: XL-MIMO and cell-free communication systems.Scenario 1: We consider a fully-digital and hybrid fully-connected precoded architecture, the number of RF chains and the number of receivers are 4. For performance comparison of beamforming schemes, we consider the three traditional numerical optimization algorithms: Optimal beamforming <cit.>, Riemannian manifold optimization, alternating optimization <cit.> and reinforcement learning. Moreover, we compare a generative AI-based method, called AI-generated optimization, whose key technology is generative diffusion models <cit.>. Through a process known as forward diffusion, the generative diffusion model gradually adds Gaussian noise over time, generating targets forthe training process of a denoising neural network. From Fig. <ref>, we can observe a trade-off between the performance and complexity of the algorithms: although traditional Riemannian optimization and alternating optimization can both approach the optimal solution, these two algorithms require more iterations. Furthermore, the above near-field beamforming schemes can reliably communicate with multiple users in the same angular direction but with different ranges, which is impossible in far-field beam steering. In addition, if the receivers have high mobility, i.e., the channel is constantly changing, the two methods require recalculation at every moment, which is extremely time-consuming. Alternately, AI-generated optimization is more suitable for complex and ever-changing scenarios. However, performance is slightly reduced. Moreover, AI-generated optimization is superior to the reinforcement learning algorithm, e.g., Soft Actor Criticism (SAC), which has faster convergence and better performance.Scenario 2: As shown in Fig. <ref>, we consider the near-field power control scenario of multiple base stations. It is observed that the multi-agent reinforcement learning (MADDPG) <cit.> is better than the traditional power control scheme. Due to the high dimensionality of the near-field array and the mobility of users, traditional power allocation schemes applied in the far-field cannot be applied. On the contrary, using MADDPG to utilize decisions between multiple agents can better solve complex optimization problems in the near-field. Moreover, the training time of MADDPG and Improved-MADDPG algorithms is 0.554 s and 0.610 s respectively, and the convergence time is 178 s and 75 s respectively. Furthermore, the Improved-MADDPG algorithm can significantly improve the convergence speed of the MADDPG problem, and compared to traditional optimization algorithms, multi-agent deep reinforcement learning algorithms can significantly reduce interference between antennas and achieve better power control performance. § CONCLUSIONS AND FUTURE DIRECTIONSIn this article, we provided a comprehensive overview of near-field resource allocation for XL-MIMO systems. Specifically, we first presented near-field channel characteristics. For near-field optimization problems, we presented a solution framework and optimization tools. Additionally, we explained which kinds of resource allocation problems that are suitable for numerical, heuristics, and ML. Some other directions for future research in near-field communications are outlined as follows. 
Electromagnetic information theory:Electromagnetic information theory is a promising direction, based on extremely large antenna array (ELAA) technology and holographic massive MIMO technology, to consider how to surpass the classic Massive MIMO problem. Nevertheless, this introduces high-dimensional channels, space, degrees of freedom, and other resources in the near-field and makes channel modelling, network deployment, and scheduling users more challenging. In future electromagnetic information theory research, how to allocate the resources above in the near-field will be crucial. Semantic Communications:Semantic communication is considered a breakthrough beyond the Shannon paradigm, to transmit the semantic information the source conveys successfully. Near-field communications that use semantic communication compress or encode the original data instead of sending it, thus reducing the amount of data needed to be transmitted while requiring less bandwidth and energy. Furthermore, near-field resource allocation enables a greater spatial degree of freedom and SE, which can accommodate new wireless communication applications requiring larger capacity. IEEEtran | http://arxiv.org/abs/2310.17868v1 | {
"authors": [
"Bokai Xu",
"Jiayi Zhang",
"Hongyang Du",
"Zhe Wang",
"Yuanwei Liu",
"Dusit Niyato",
"Bo Ai",
"Khaled B. Letaief"
],
"categories": [
"cs.IT",
"eess.SP",
"math.IT"
],
"primary_category": "cs.IT",
"published": "20231027030656",
"title": "Resource Allocation for Near-Field Communications: Fundamentals, Tools, and Outlooks"
} |
[][email protected] Department of Physics, Stockholm University, AlbaNova University Center, 10691 Stockholm, Sweden [][email protected] Department of Physics, Stockholm University, AlbaNova University Center, 10691 Stockholm, SwedenWe investigate anomalous localization phenomena in non-Hermitian systems by solving a class of generalized Su-Schrieffer-Heeger/Rice-Mele models and by relating their provenance to fundamental notions of topology, symmetry-breaking and biorthogonality. We find two flavours of bound states in the continuum, both stable even in the absence of chiral symmetry. The first being skin bulk states which are protected by the spectral winding number. The second flavour is constituted by boundary modes associated with a quantized biorthogonal polarization. Furthermore, we find the extended state stemming from the boundary state that delocalizes while remaining in the gap at bulk critical points. This state may also delocalize within a continuum of localized (skin) states. These results clarify fundamental aspects of topology, and symmetry in the light of different approaches to the anomalous non-Hermitan bulk-boundary correspondence – and are of direct experimental relevance for mechanical, electrical and photonic systems. Non-Hermitian extended midgap states and bound states in the continuum Emil J. Bergholtz January 14, 2024 ====================================================================== Introduction.It is standard textbook knowledge that states within the energy continuum are generally extended while states in the gap are spatially localized. However, already almost a century ago, von Neumann and Wigner challenged this paradigm <cit.>, by proposing a bounded state with an energy embedded within the extended states, hence providing an example of an anomalous localization phenomenon. This work was later extended in 1970s by Stillinger and Herrick <cit.>, and briefly addressed in the following years <cit.>. However, only during the past decade there has been a renewed interest in bound states in the continuum (BICs) across different physics communities<cit.>, featuring several potential applications in electronics <cit.>, mechanical metamaterials <cit.>, plasmonics <cit.> and, saliently, in photonics <cit.>.In Hermitian systems, BICs were realized in optical set-ups using the trapped light <cit.>. Topological properties of BICs in the form of vortex centres in the polarization have been proposed <cit.>, experimentally realised <cit.>, and used for unidirectional guided resonances <cit.>. BICs can be also induced by moving flat bands <cit.> or topological edge states <cit.>. In non-Hermitian systems, impurity-induced BICs were studied <cit.> or recently, a generalized framework to study BICs in nanophotonics lattices with engineered dissipation was introduced <cit.>. However, in the context of non-Hermitian topological phenomena <cit.>, localization phenomena go beyond the single bounded state in the continuum. In particular, the non-Hermitian skin effect (NHSE) <cit.> and biorthogonal bulk boundary correspondence <cit.> viewed as localization phenomena provide a natural framework to study the interplay of localized and extended states, giving rise to several new anomalous localization phenomena with no known counterpart in the Hermitian systems. In addition to a single bounded state, a continuum of bounded states was also proposed <cit.>. 
In contrast to localized states, an extended mid-gap state (EGS) was realized <cit.> and analysed <cit.> in metamaterials, and proposed using acoustoelectric effect <cit.>. An antipode of the boundary states in the continuum, an extended state lying inside the continuum of bound states (ELC), was also proposed in Ref. Wang2022B. Anomalous localization phenomena is also know to lead to a remarkable spectral sensitivity <cit.> that may be harnessed in sensing devices <cit.>.The aforementioned anomalous (de)localization phenomena have been dispersed throughout various models and platforms. In this letter, we (i) unify all these (de)localization phenomena by demonstrating their emergence in a single general one-dimensional non-Hermitian model realizable in several platforms; show that (ii) biorthogonal polarization 𝒫 predicts the gap closing and emergence of bounded states in the continuum and remains relevant in the absence of chiral symmetry; (iii) put these results in the context of non-Hermitian topology and the anomalous bulk-boundary correspondence of non-Hermitian systems. Exactly solvable model. We consider open non-Hermitian Su–Schrieffer–Heeger chain (SSH) with nonreciprocal γ term in the intra-cell hopping, and placed in the staggering potential Δ breaking the chiral symmetry turning it into a non-Hermitian Rice-Mele model,H =∑_n=1^N_ cell-1 [ (t_1 - γ) a^†_B,n a_A,n+(t + γ) a^†_A,n a_B,n + t_2 (a^†_B,na_A,n+1+a^†_A,n+1a_B,n) +Δ (n_A,n - n_B,n)]+ t_2 (a^†_B,N_ cell-1 a_A,N_ cell+a^†_A,N_ cell a_B,N_ cell-1) ,where N_ cell is the number of cells, n_α,n=a_α,n^† a_α,n is the number of particles, and a^†_α,n (a) creates (annihilates) a bosonic particle in a sublattice α in a unit cell n. The open boundaries are applied such that the last unit cell is broken, and the model has 2N_ cell-1 sites in total. Diagonalizing Eq. (<ref>), generalizing results in <cit.>, we obtain the complete eigensystem with 2N_ cell-2 bulk modes and one edge state <cit.>. The energy spectrum reads { E_-(k), Δ, E_+(k) },where the bulk energy spectrum is derived from the Hamiltonian with periodic boundaries by shifting the momentum E (k) = E^ PBC (k - ilog ( r ) ), where r=√(t_1-γ)/√(t_1+γ)<cit.>. Hence, we find thatE_±(k) = ±√( t_1^2+t_2^2 + Δ^2 - γ^2 + 2 t_2√(t_1-γ)√(t_1+γ)cosk)where k=π m/N_ cell, with m =1,2,...(N_ cell-1). Fig. <ref> shows the energy spectrum as a function of intra-cell hopping t_1 for all k values, where the color of the energy point represents the average position < s> of the particle described by the right eigenstate Ψ_R, Bulk(k,n) corresponding to the eigenvalue E(k) evaluated at t_1, hence < s> = ∑_n,α=A,B^N_ cell s_n,α |Ψ_R, Bulk(k,n)|^2,where s_n,α yields the site label of the sublattice α in the unit cell n, s_n,α=2n-δ_A, α. While the periodic Bloch energies of our model lacks spectral inversion symmetry, the open boundary energies (Eq. (<ref>)) have the symmetry E(k)=E(-k) which makes it possible to derive all eigenstates generalizing Refs. Edvardson2020, KunstMiertBerg2019. 
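Before quoting the analytic eigenvectors, we note that the statements above are easy to check numerically. The following NumPy sketch (parameters illustrative) builds the open chain with the site ordering A_1, B_1, ..., A_{N_cell}, verifies that the spectrum coincides with the bulk bands E_±(k) together with the Δ-mode, and evaluates the average position ⟨s⟩ used as the skin-effect diagnostic:

import numpy as np

Ncell, t1, t2, gam, Delta = 40, 1.0, 1.0, 0.5, 0.3
Ns = 2 * Ncell - 1                       # last unit cell keeps only its A site

H = np.zeros((Ns, Ns), dtype=complex)
for n in range(Ncell - 1):
    A, B, A2 = 2 * n, 2 * n + 1, 2 * n + 2
    H[B, A] = t1 - gam                   # nonreciprocal intra-cell hopping
    H[A, B] = t1 + gam
    H[B, A2] = H[A2, B] = t2             # reciprocal inter-cell hopping
H += Delta * np.diag([1 if s % 2 == 0 else -1 for s in range(Ns)])  # staggering

E, VR = np.linalg.eig(H)                 # right eigenvectors (non-Hermitian)

# mean position <s> of each right eigenstate
pos = np.arange(1, Ns + 1)
mean_s = np.array([pos @ (np.abs(VR[:, i])**2 / np.linalg.norm(VR[:, i])**2)
                   for i in range(Ns)])
print("average <s> over all states:", mean_s.mean().round(2),
      "(chain centre is", (Ns + 1) / 2, ")")

edge = np.argmin(np.abs(E - Delta))      # the boundary mode pinned at E = Delta
print("edge-mode energy:", E[edge].real.round(6), " <s> =", mean_s[edge].round(2))

# analytic bulk bands (spectrum is real for these parameters)
k = np.pi * np.arange(1, Ncell) / Ncell
Eb = np.sqrt(t1**2 + t2**2 + Delta**2 - gam**2
             + 2 * t2 * np.sqrt(t1 - gam) * np.sqrt(t1 + gam) * np.cos(k) + 0j)
analytic = np.sort(np.concatenate([Eb, -Eb, [Delta]]).real)
print("max |E_numerical - E_analytic| =",
      np.max(np.abs(np.sort(E.real) - analytic)).round(10))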
The right bulk modes are Ψ_R,Bulk,±(k) readΨ_R,Bulk,±(k) = 𝒩[ Ψ_R,Bulk,±,A(k,1); Ψ_R,Bulk,±,B(k,1); Ψ_R,Bulk,±,A(k,2); ⋮; Ψ_R,Bulk,±,A(k,n); Ψ_R,Bulk,±,B(k,n); ⋮; Ψ_R,Bulk,±,B(k,N_ cell-1); Ψ_R,Bulk,±,A(k,N_ cell); ],where n labels the unit cell, and the amplitudes read Ψ_R,Bulk,±, A(k,n) = r^n[ (t_1 +γ) sin(k n) +t_2/rsin(k [n- 1])],Ψ_R,Bulk,±, B(k,n) = r^n[ E_±(k)-Δ ] sin(k n),where r=√(t_1-γ/t_1+γ).From these expressions, we see that states are exponentially localized to one of the boundaries, the phenomenon known as the non-hermitian skin effect (NHSE) <cit.>, where the value of the spectral winding number <cit.> predicts the boundary to which states pile-up. In Fig. <ref>, the average position< s > shows the localization to the right (pink) boundary for t_1<0 and to the left (blue) boundary for t_1>0. Comparing the top and bottom panels with (Δ = 0) and without (Δ≠ 0) chiral symmetry, respectively, we see that the NHSE is carried over to the broken chiral symmetry phase.The remaining eigenstate, the edge mode, with energy Δ is an exponentially localized boundary mode|ψ_R/L⟩ = 𝒩_R/L∑_n= 1^N_ cell r_R/L^n a_n,A^†|0⟩,where 𝒩_R/L is the normalization, r_R=-(t_1-γ)/t_2, and r_L^*=-(t_1+γ)/t_2 which dictate the behaviour of the right (R) and left (L) boundary modes <cit.>. (Note that right/left here refers to the eigenvectors fulfilling HΨ_R=EΨ_R andΨ_LH=EΨ_L, respectively, rather than where the state may be localized.)While the emergence of NHSE is related to a spectral winding number <cit.>, the behaviour of boundary Δ-mode can be associated with the biorthogonal polarization P<cit.>𝒫 = lim_N_ cell→∞1/N_ cell∑_s⟨ψ_L| s Π̂_s|ψ_R|⟩/⟨ψ_L|ψ_R|⟩,where |ψ_R/L⟩ are the boundary modes in Eq. (<ref>), Π̂_s = |e_s⟩⟨e_s| is the projection operator on each unit site s and |e_s⟩ is the basis vector in the position basis. Note that in the reference <cit.> the biorthogonal polarization is defined as 1-𝒫.Remarkably, the change in P implies the gap closings <cit.>. To see this, we evaluate the limit in Eq. (<ref>) and find that the biorthogonal polarization 𝒫 changes when |r_L^* r_R|=1 <cit.>, i.e., when t_1=±√(γ^2 + t_2^2),t_1=±√(γ^2 - t_2^2).Evaluating Eq. (<ref>) for lim_N_ cell→∞ at t_1 in Eq. (<ref>) we findE(k=N_ cell/2)=Δ, i.e., the gap between the bulk modes and the Δ-mode closes as predicted by the change in the polarization number P. This is also visible in Fig. <ref>, where the colored dashed lines highlight the polarization P accompanied by the gap closings, and discrepancies are due to the finite system size. Bound states in the continuum. Biorthogonal polarization 𝒫 also marks the emergence of the bound state in the continuum (BIC). Once the gap is closed, the gap can open again, or remain closed. This is visible in the middle panels of Fig. <ref>, where the boundary mode at E=Δ touches the bulk (yellow lines) or enters the continuum (green lines) at the points marked by the change of P, which remains to be predictive quantity also in the phase with broken chiral symmetry phase Δ≠ 0 (bottom panels). The localization boundary of the BIC is determined by the amplitude in Eq. <ref> and can be the same or opposite to the localization boundary of the skin states, see Fig. <ref> (b) and (c) for comparison.In Hermitian systems, bounded states appear with quantized energies. In comparison, here, the bulk skin states can form a continuum whenlim_N_ cell→∞, see Eq. (<ref>), leading to a continuum of bound skin states. This is visible in the middle panels in Fig. 
<ref>, where the bulk energies start to form the continuum. A similar phenomenon was proposed using imaginary momentum and Landau-type vector potential <cit.>, while here we demonstrate them using a simple non-Hermitian SSH model.Another anomalous localization composition surfaces from the combination of the boundary mode, Eq. (<ref>), and the bulk skin states, Eq. (<ref>), where the former localizes in the latter giving rise to bounded state in the continuum made of bounded states (BICB), see (b) in Fig. <ref>. Therefore, in our model, the BICs can be BICB. Anomalous extended states.We now turn to anomalous delocalization phenomena. FromEq. (<ref>), we see that the edge mode can transition from being localized to the left (right) boundary for |r_R|<1 (|r_R|>1) to the extended state at |r_R|=1, i.e. when t_1 ±=γ± t_2.At these points, the otherwise localized boundary state is now extended throughout the lattice while all the bulk skin states are pilled to the boundary, see (d) and (e) in Fig <ref>. The energy of these extended states can lie inside or outside of the continuum.When it lies in the continuum, anextended state in the localized continuum (ELC) is formed, see (d) inFig. <ref>, where the localized continuum is formed by the bulk skin states. An analogue of ELC was realized in a mechanical two-dimensional system <cit.> while we show that it can also emerge in a one-dimensional model. The extended state lying outside of the continuum creates extended mid-gap state (EGS), see middle pannels in Fig. <ref> and (e) in Fig. <ref>, which is localized at zero for Δ=0, while for Δ≠ 0 it is shifted by Δ. A similar EGS was realised in metamaterials <cit.>, the transition to EGS was also analysed <cit.>, and recently a similar phenomenon was addressed in solitons <cit.>.In the biorthogonal basis, the emergence of the extended state is predicted by the change in 𝒫. In particular, for the chain with the odd number of sites (the last unit cell broken), the biorthogonal-bulk boundary<cit.> correspondence implies that atP changing points, Eq. <ref>, the biorthogonal boundary mode <cit.> defined as ⟨ψ_L| Π_n|ψ_R|$⟩, becomes extended. This delocalization is different from the extended state in the right eigenvector basis, Eq. (<ref>) at|r_R|=1. Discussion. In this work we have fully solved a non-Hermitian Rice-Mele model and showcased how the localization of bulk and boundary states defy usual expectations. Notably, this highlights bound states in the continuum (BICs) and extended states in the gap (EGSs) as well as in a localized continuum (ELCs). In fact, the localization of non-Hermitian bulk and boundary states can be tuned essentially independently. We have also shown how a jump in the polarization marks the emergence (or disappearance) of BICs. This highlights a fundamental difference between the biorthogonal polarization <cit.>, which carries physical information also when the chiral symmetry is broken (Δ≠0), and the generalized Brillouin zone winding number <cit.> that instead relies on the aforementioned symmetry. This is notable since their predictions coincide in the SSH limitΔ=0. 
The difference reflects that the (eigenvector) winding number relates to symmetry-protected topology, while the polarization reflects a general feature of diverging (biorthogonal) localization lengths at phase transitions. The simplicity of our model makes it relevant for many experimental setups, in particular in photonics, which naturally features a high degree of tunability and where non-Hermiticity is a ubiquitous feature. The authors are supported by the Swedish Research Council (VR, grant 2018-00313), the Wallenberg Academy Fellows program (2018.0460) and the Göran Gustafsson Foundation for Research in Natural Sciences and Medicine. | http://arxiv.org/abs/2310.18270v1 | {
"authors": [
"Maria Zelenayova",
"Emil J. Bergholtz"
],
"categories": [
"physics.optics",
"cond-mat.mes-hall",
"quant-ph"
],
"primary_category": "physics.optics",
"published": "20231027165804",
"title": "Non-Hermitian extended midgap states and bound states in the continuum"
} |
| http://arxiv.org/abs/2310.17741v1 | {
"authors": [
"Daohan Wang",
"Jin-Hwan Cho",
"Jinheung Kim",
"Soojin Lee",
"Prasenjit Sanyal",
"Jeonghyeon Song"
],
"categories": [
"hep-ph",
"hep-ex"
],
"primary_category": "hep-ph",
"published": "20231026192753",
"title": "Probing Light Fermiophobic Higgs Boson via diphoton jets at the HL-LHC"
} |
[][email protected] Solid State and Structural Chemistry Unit, Indian Institute of Science, Bangalore 560012, India.Solid State and Structural Chemistry Unit, Indian Institute of Science, Bangalore 560012, India.[][email protected] Solid State and Structural Chemistry Unit, Indian Institute of Science, Bangalore 560012, India.[^These authors contributed equally to this work.]0.25cm We propose an exact analytical decimation transformation scheme to explore the fascinating coexistence of flat bands and Dirac fermions in three-dimensional coupled kagome systems. Our method allows coarse-graining of the parameter space that maps the original system to an equivalent low-level lattice. The decimated system enables defining a quantity in the tight-binding parameter space that predominantly controls the emergence of a flat band (FB) and provides a specific criterion for absolute flatness. Likewise, in terms of atomic separations, we develop a quantity that primarily controls the FB width in real materials and thus can be helpful in predicting new systems hosting FB as well as in tuning the FB width. Our predictions on the emergence of the flat band and Dirac fermions are confirmed for M_3X (M= Ni, Mn, Co, Fe; X= Al, Ga, In, Sn, Cr,...) family of materials, leveraging materials databases and first-principles calculations. Our work provides an analytical formalism that enables accurate predictions of FBs in real materials. Origin of flat bands and non-trivial topology in coupled kagome lattices Awadhesh Narayan January 14, 2024 ========================================================================Flat bands (FBs), i.e., Bloch bands with vanishingly small dispersion, have spurred tremendous research interest in recent years due to their possibility to provide an ideal platform to study exotic many-body physics <cit.>, such as ferromagnetism <cit.>, anomalous Landau levels <cit.>, unconventional superconductivity <cit.>, non-Fermi liquid behavior <cit.>, and high temperature fractional quantum Hall effect <cit.>. In solid state systems, a FB with quenched kinetic energy can emerge due to two reasons – isolated localized states without coupling to other states <cit.> or states resulting from a destructive quantum interference. Lattices with geometric frustration, such as kagome, Lieb, dice, and pyrochlore, fall into the latter category and generate localized eigenstates giving rise to FBs <cit.>.In a kagome lattice, the single orbital nearest-neighbor hopping model provides real space eigenfunctions with opposite phases at neighboring sites <cit.>. The destructive interference of these opposite phases prohibits any hopping to the adjacent cells, causing the confinement of the electronic state in the kagome hexagon. This, in turn, results in dispersionless bands in the momentum space. Furthermore, kagome lattices feature the existence of a Dirac point at the Brillouin zone (BZ) corner (K point) and a saddle point at the zone boundary (M point). However, the exact realization of these features in real materials has become challenging due to several other controlling factors, such as multiple orbital contributions, significant hopping beyond the nearest neighbor, and complex interlayer hopping. Recent studies have focused on the prediction as well as the experimental realization of FBs in the k_z= 0 plane, along with suppressed dispersion in other directions in the three-dimensional (3D) BZ in binary kagome metals TX (T: 3d transition metal, X: Ge, Sn) <cit.>. 
Other related systems such as Pd_3P_2S_8 <cit.> and YMn_6Sn_6 <cit.> exhibit spatially decoupled quasi-two-dimensional kagome planes. However, the impact of interlayer hopping in a 3D coupled kagome lattice is yet to be fully explored. To be more specific, we want to address the pertinent question: under what conditions do FBs and Dirac fermions coexist in these systems, and what are their energetic positions?In this work, we propose an exact analytical framework based on the real space decimation strategy to answer the above mentioned questions. In particular, the decimation scheme downfolds the original Hamiltonian without losing any information and provides the necessary conditions for a FB at k_z=0 plane and a Dirac point at K. Notably, the electronic spectra of the system in the k_z=π plane resemble the monolayer kagome and, strikingly, follow the Su–Schrieffer–Heeger model <cit.>. Incorporating spin-orbit coupling (SOC) in the model invariably lifts the band degeneracies, which can give rise to a nontrivial band topology. Our predictions have been confirmed by performing first-principles calculations on realistic candidate materials M_3X (M= Ni, Mn, Co, Fe; X= Al, Ga, In, Sn, Cr,...) obtained by leveraging materials database screening <cit.>. Our scheme allows in predictions of FBs in a new material solely from its crystallographic information. This general recipe of flat band optimization also facilitates the engineering of a given FB by means of external perturbations, for example, pressure and strain. Our powerful approach, therefore, enables the exploration of the rich physics of such kagome systems.Real space decimation method for coupled kagome system. Our coupled kagome model consists of stacked kagome layers coupled by an inter-layer hopping t_v, along with two distinct intra-layer hopping parameters – t (intra-unit cell) and t^' (inter-unit cell) as depicted in Fig. <ref>(a). One of the most intriguing properties of monolayer kagome lattice is the coexistence of Dirac fermions and localized states generated by the destructive interference of atomic orbitals [Fig. <ref>(b)]. However, the intra-layer hopping anisotropy and inter-layer coupling in coupled kagome lattices will drastically modify such an electronic spectrum. We have considered an effective quasi-one-dimensional (Q1D) model that precisely mimics the band structure of the original lattice in the k_z=0 plane, particularly along the high symmetry path Γ-K-M-K^'-Γ [Fig. <ref>(c)]. We note that the choice of an equivalent Q1D lattice model is not very stringent in this case. For instance, Fig. <ref>(a) (top panel) depicts a typical example of a Q1D network for which the original tight-binding (TB) Hamiltonian Ĥ = ∑_i ε_i | i ⟩⟨ i | + ∑_i ≠ j t_ij | i ⟩⟨ j | + H_SO remains invariant under the constraint k_y=k_z=0. In the above expression, ε_i and t_ij represent the onsite potential and hopping integral for the orbitals |i ⟩ and |j ⟩, respectively. The unit cell of the system contains six atomic sites that can be divided into the top and bottom layers denoted by site numbers {1, 2, 5} and {3, 4, 6}, respectively. Similar to the original lattice, there are two distinct intra-layer (t and t^') and one inter-layer (t_v) hopping parameters in this Q1D model. 
The SOC has been introduced by the term H_SO = i λ_SO∑_⟨⟨ i,j⟩⟩ν_ij c_i^† s^z c_j, where s^z stands for the z component of the Pauli matrix corresponding to spin, and ν_ij = ± 1 depends on the rotation direction while going from j to the next-nearest neighbour site i <cit.>. In the momentum space, the Hamiltonian can be written in the orbital basis {| 1 ⟩ , ... ,| 6 ⟩}, as follows H= [ ε Γ Δ 0 Θ 0; Γ^* ε 0 0 Λ Ω; Δ^* 0 ε Θ^* 0 Γ^*; 0 0 Θ ε Ξ Λ; Θ^* Λ^* 0 Ξ^* ε 0; 0 Ω^* Γ Λ^* 0 ε ]. In the above Eq. <ref>, we have considered a uniform onsite potential ε_i = ε(∀ i ∈{1, ..., 6}) while the other parameters are of the form, Γ = t + t^' exp[i(a_2-a_1)k_x], Δ = 2 t_v ( exp[ia_2k_x] + exp[i(a_2-a_1)k_x] ), Θ = t + t^' exp[ia_2k_x], Λ = t + t^' exp[ia_1k_x], Ξ = 2 t_v ( exp[ia_1k_x] + exp[ia_2k_x]), and Ω = 2 t_v ( exp[ia_1k_x] + exp[i(a_1-a_2)k_x] ). The superscript * denotes complex conjugation.We further have a_1 = 2 a_2=a (|a|=|b⃗|=a). The transformation of this lattice model will be instrumental in unveiling the fundamental electronic structure of these coupled kagome lattices. For this purpose, we write down the TB analogue of the Schrödinger equation <cit.> as (E-ε_i)ψ_i - ∑_i ≠ j t_ijψ_j = 0. The parameters, E, ε_i, t_ij and ψ_i/j stand for eigenenergy, onsite potential, hopping parameter, and wavefunction, respectively, while i and j are the site indices. This set of difference equations can be used to decimate a preferred subset of degrees of freedom or lattice sites – this procedure maps the original system to its equivalent reduced version. In other words, a coarse-grained parameter space of TB Hamiltonian can be achieved using this exact formalism that does not lose any information about the original system. It is worth mentioning that the decimation process in various forms is the backbone of the real space renormalization group scheme <cit.>. In the present case, we intend to absorb all the information of the lattice sites {5, 6} encoded in their probability amplitudes {ψ_5, ψ_6} into the remaining sites of the Q1D system. This decimation process can be performed using the following set of difference equationsE^'ψ^n_1 = t ψ^n_2 + tψ^n_5+t^'ψ^n-1_2 +t^'ψ^n+1_5+t_vψ^n-1_3+ t_vψ^n+1_3, E^'ψ^n_2 = t ψ^n_1 + tψ^n_5+t^'ψ^n+1_1 +t^'ψ^n+2_5 + t_vψ^n+1_6 + t_vψ^n+2_6, E^'ψ^n_5= t ψ^n_1 + tψ^n_2+t^'ψ^n-2_2 +t^'ψ^n-1_1 + t_vψ^n-2_4 + t_vψ^n-1_4.In a similar vein, the difference equations for the bottom layer sites {3, 4, 6} can be written by considering the following transformations ψ_1 ↔ψ_3, ψ_2 ↔ψ_4, and ψ_5 ↔ψ_6. In Eq. <ref>, the parameter E^' represents the potential energy contribution E-ε, and n is the unit cell index. The decimation of the sites {5, 6} essentially downfolds the system to an exactly equivalent four-level lattice, shown in Fig. <ref>(a) (bottom panel). We note that this reduced system is obtained at the expense of allowing new energy-dependent onsite potentials and hopping terms that keep the characteristic equation invariant. In particular, the TB parameters of the renormalized lattice can be written in terms of the original ones as ε^' = ε + (t^2 + t^'^2)/(E-ε), ε^'' = ε + (t^2+t^'^2+2t_v^2)/(E-ε), h_1=t^'+t^'^2/(E-ε), h_1^'=t t^'/(E-ε), h_2 = t_v^2/(E-ε), and d=-t^'/(E-ε). 
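As a numerical companion to the Hamiltonian and parameter definitions above, the minimal Python/NumPy sketch below (illustrative hoppings, with t' = t/√2 as also used for the band plots in this work, and λ_SO = 0) builds the 6×6 Bloch matrix H(k_x) in the k_y = k_z = 0 plane and scans the inter-layer hopping t_v, reporting how far the flattest level deviates from being dispersionless; the value of t_v/t' at which one eigenvalue becomes k_x-independent can then be compared with the flatness criterion discussed next.

import numpy as np

def h_k(kx, t, tp, tv, eps=0.0, a=1.0):
    """6x6 Bloch Hamiltonian of the Q1D model above (basis |1>,...,|6>, k_y = k_z = 0)."""
    a1, a2 = a, a / 2.0
    G  = t + tp * np.exp(1j * (a2 - a1) * kx)
    D  = 2 * tv * (np.exp(1j * a2 * kx) + np.exp(1j * (a2 - a1) * kx))
    Th = t + tp * np.exp(1j * a2 * kx)
    L  = t + tp * np.exp(1j * a1 * kx)
    Xi = 2 * tv * (np.exp(1j * a1 * kx) + np.exp(1j * a2 * kx))
    Om = 2 * tv * (np.exp(1j * a1 * kx) + np.exp(1j * (a1 - a2) * kx))
    Gc, Dc, Thc, Lc, Xic, Omc = (np.conj(x) for x in (G, D, Th, L, Xi, Om))
    return np.array([[eps, G,   D,   0,   Th,  0  ],
                     [Gc,  eps, 0,   0,   L,   Om ],
                     [Dc,  0,   eps, Thc, 0,   Gc ],
                     [0,   0,   Th,  eps, Xi,  L  ],
                     [Thc, Lc,  0,   Xic, eps, 0  ],
                     [0,   Omc, G,   Lc,  0,   eps]])

def flattest_level(t, tp, tv, ks):
    """Smallest k_x-variation of any eigenvalue: ~0 if a dispersionless band exists."""
    ref = np.linalg.eigvalsh(h_k(ks[0], t, tp, tv))
    dev = np.zeros_like(ref)
    for k in ks[1:]:
        e = np.linalg.eigvalsh(h_k(k, t, tp, tv))
        dev = np.maximum(dev, np.abs(e[None, :] - ref[:, None]).min(axis=1))
    return dev.min()

t, tp = 1.0, 1.0 / np.sqrt(2.0)          # illustrative hoppings (t' = t/sqrt(2))
ks = np.linspace(-np.pi, np.pi, 201)
for x in np.linspace(-1.0, 0.0, 21):      # scan the inter-layer hopping t_v in units of t'
    print(f"t_v/t' = {x:+.2f}   flattest-level variation = {flattest_level(t, tp, x * tp, ks):.2e}")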
The decimation process has redrawn the lattice problem to an equivalent depiction where two completely identical blocks made of lattice sites {1, 2} and {3, 4} with potential matrix Σ (E,k_x) which are coupled with each other by the overlap matrix T (E,k_x), which read Σ = [ε^' + h_1^' x_p,n h + h_1 x_n + h_1^' x_p,nn; h + h_1 x_p + h_1^' x_n,pp ε^'' + h_1^' x_pp,nn + h_2 x_p,n ], T= [t_v x_p,n d+d_1 x_nn + d^' x_n; d+d_1 x_pp + d^' x_p 2d + d^' x_p,n + d_1 x_pp,nn ]. Here x_α,β=x_α+x_β, where x_α and x_β stand for the phase terms of the Bloch wavefunction. Further, p and pp represent the nearest neighbour and next-nearest neighbour inter-unit cell hopping contributions, respectively, in the positive x direction, while the n and nn are the corresponding complex-conjugate terms. The Hermitian matrix T (E,k_x) signifies that the dispersion relations of the decimated four level system will be identical to the eigenvalues of two 2 × 2 matrices given by Σ (E,k_x) ± T (E,k_x) [see, Supplementary information (SI) for detail expressions <cit.>]. Now, we focus on the above mentioned eigenvalues to explore the criteria for manifesting a non-dispersive band in the k_z=0 plane. It is fascinating that the condition t_v = -t^'/2 invariably eliminates all the dispersive terms from a particular eigenvalue, which gives rise to a FB in the k_z=0 plane at the energy value E=ε + 2t <cit.>. Here, it is worth mentioning that the presence of other interlayer hopping parameters viz. between neighbouring (t_v^') and opposite vertices (t_v^'') of the hexagon inside the unit cell [Fig. <ref>(a)] do not alter the condition for obtaining the FB. However, these parameters essentially shift the energetic position of the FB to E=ε + 2 (t+ t_v^') + t_v^''.In addition, two bands touch each other at a Dirac point and sandwich another dispersive band at the K point below the Fermi level for t^'=t/√(2), as depicted in Fig. <ref>(c) (left panel). The inclusion of SOC (λ_so≠ 0) in the above case invariably lifts the band degeneracy (see SI <cit.>) with a nontrivial topological index, i.e., ℤ_2=1. Therefore, our lattice model not only reveals exciting FB physics but also exhibits the possibility of obtaining non-trivial topological phases. Furthermore, it can be shown from Eq. <ref> that the Hamiltonian for k_z=π plane transforms into that of the decoupled kagome lattices stacked along c-axis. Consequently, the band spectra resemble that of an isolated kagome lattice for t^'=t/√(2), that possesses a FB away from the Fermi level at E= ε -(t+t^') [Fig. <ref>(c) (right panel)]. In the t=t^' case, two bands meet each other in the form of Dirac cones as typically obtained for pristine kagome lattice (see SI <cit.>). The set of difference equations given in Eq. <ref> reveal that the above case is essentially the critical phase of the Su–Schrieffer–Heeger model <cit.> and that the other two gapped phases (t^' < t and t^' > t) are topologically distinct (see, SI for further details <cit.>). Nevertheless, quadratic band touchings are obtained between the flat and dispersive bands for all the cases in the k_z=π plane, which are gapped upon inclusion of SOC, confirming its nontrivial topological nature <cit.>. Density functional calculations for M_3X systems. The above theory sets up a general formalism to reveal the underlying physics for FB formation in 3D coupled kagome lattices. 
Furthermore, we have traversed through the open-source materials databases – Materials Project <cit.> and Topological Materials Database <cit.> – to corroborate our analytical findings for real materials. For that purpose, we have performed density functional theory (DFT) calculations to obtain the band structure of the screened systems (see SI for computational details <cit.>). Here, for a systematic analysis, we restrict our discussions to the materials belonging to the crystal class M_3X [Space group P6_3/mmc (No. 194)] with easily recognizable coupled kagome geometry [Fig. <ref>(a)] hosting a FB in the k_z=0 plane within the energy window of ± 1.2 eV around the Fermi level. Moreover, the FB width has been calculated for all those systems as depicted in Fig. <ref>(b). The lattice geometries of the candidate materials, their band structures, and the corresponding FB widths are presented in SI <cit.>.Our calculations reveal that the band structure of the Ni_3In system, shown in Fig. <ref>(c), exhibits the lowest FB width (∼64.1 meV) among this class of materials. We note that Ni_3Inhas recently been experimentally studied by Ye et al. <cit.>. A careful analysis indicates that despite the narrowest bandwidth, the FB in Ni_3In is slightly dispersive in nature with a maximum at the Γ point and a minimum at the K point. Dispersions of other systems also exhibit similar features. We have found similar FB dispersion in our TB model for the hopping relation |t_v/t^'|<0.5, as illustrated in SI <cit.>. Orbital analysis of Ni_3In reveals that the FBs at both the (k_z=0 and π) planes are mainly contributed by Ni d orbitals – namely d_xz/d_yz orbitals (see SI <cit.>). The destructive quantum interference of Ni d_xz/d_yz orbitals, as shown schematically in Fig. <ref>(b), results in the localization of electrons in a hexagonal region <cit.>. This electronic localization in the real space causes the Bloch wave functions to have a reduced dispersion in the momentum space, giving rise to the FB in this material.It is worth mentioning that, in the TB model, we have used an effective single orbital basis per atom with different hopping, which is an idealized case and very difficult to realize in real systems. To give a more straightforward but equivalent flavor for the hopping parameters, we employ Harrison's universal inverse square scaling law <cit.>, which helps us to map the hopping parameters {t, t^', t_v} to geometric parameters of the candidate systems, i.e., the separation between the atoms {d, d^', d_v} [Fig. <ref>(a)]. In particular, we have found that, (d^'/d_v)^2, which is now proportional to t_v/t^', plays the crucial role in controlling the FB width of these systems. We have plotted the FB width for the candidate materials with different (d^'/d_v)^2 ratio, which exhibits a linear regression with a slope ≈ -1.29 eV and an intercept on the horizontal axis, x_0^DFT≈ 0.61, as shown in Fig. <ref>(d). The negative slope implies that the FB width decreases with increasing value of (d^'/d_v)^2, while x_0 represents the condition for absolute flatness. Beyond this x_0 value of (d^'/d_v)^2, the FB curvature is expected to change with the minimum and maximum at the Γ and K point, respectively, as supported by the TB model (see SI <cit.>). The variation of the FB width with t_v/t^' obtained from the TB model also shows a similar linear variation with slope ≈ -1.29 eV but x_0^TB=0.5, for t=0.305 eV and t^'=1/√(2) (see SI <cit.>). 
Comparison between the x_0 values obtained from DFT and the TB model allows us to estimate the proportionality constant for (d^'/d_v)^2 ∝ t_v/t^' to be 1.22. Moreover, the above value of the proportionality constant has been confirmed by comparing (d^'/d_v)^2 ≈ 0.5528 with the hopping ratio obtained by fitting the DFT results with the TB model (t_v/t^')=0.45 for Ni_3In. The above analysis reveals that the formation of FB in the coupled kagome systems is predominately controlled by the ratio between the intra- and inter-layer atomic spacing, i.e., d^'/d_v, which can help us in predicting the FB width of a new material in this class.To establish the above-mentioned general recipe for optimizing the flatness, we have engineered the FB width with changing d^'/d_v ratio, by applying strain. We have employed both compressive and tensile uniaxial strain along the c-axis for the Co_3Sn system, which essentially tunes the d^'/d_v ratio towards and away from x_0^DFT, respectively. Thus, the compressive (tensile) strain, in turn, reduces (increases) the flatness as depicted in Fig. <ref>(e).Non-trivial topology of FB. Next, we investigate the topological nature of the FB residing in the k_z = 0 plane. We consider Ni_3In with a FB residing in the vicinity of the Fermi level. For clarity, we denote the FB with a band index α, and neighbouring bands accordingly. First, we concentrate on the band structure in the absence of SOC. On both the sides, FB is touched by dispersive bands [the (α-1)-th and (α+1)-th bands] contributing to the Dirac points. The positions of these Dirac points are slightly away from the high symmetry directions on the k_z = 0 plane and are shown in the inset of Fig. <ref>(b). With the inclusion of SOC, these gapless points become gapped. Thus, we obtain a continuous direct band gap in the k_z = 0 plane between α and (α+1) bands. Utilizing the inversion symmetry of the system, we use the Fu-Kane parity-based formulation <cit.> to calculate the ℤ_2 invariant in the k_z = 0 plane. For Ni_3In, we obtain ℤ_2=1, confirming the non-triviality of the gap between the α-th and (α+1)-th bands in the k_z =0 plane. We have also found a similar non-trivial behaviour in other materials of this family – Ni_3Al and Ni_3Ga. As we discussed earlier, our TB analysis with the hopping relations obtained from the decimation method supports the non-trivial topological phase of these 3D coupled kagome systems in presence of SOC. In a nutshell, we present a decimation transformation technique to unravel the reappearance of the FB and Dirac cones in 3D coupled kagome lattices. This method essentially downfolds the tight-binding Hamiltonian by decimating some preferred lattice sites at the expense of energy-dependent renormalized onsite and hopping parameters. The equivalent low-level system, in turn, reveals that the interlayer hopping is crucial in generating a FB in the k_z=0 plane. Furthermore, incorporating SOC in the model invariably triggers a nontrivial topological phase characterized by ℤ_2 = 1. Our predictions on the emergence of the FB and Dirac fermions are confirmed by first-principles calculations for a class of kagome materials. We find our approach helpful in predicting the possibility of hosting a FB in coupled kagome lattice by simply analysing the lattice geometry. Our theory also suggests the engineering of FB width by applying external perturbation, e.g., strain and pressure. 
The present work allows exploration of the underlying physics of the intricate electronic properties of the coupled kagome lattices and can further assist in designing other lattice systems with nontrivial topological FBs and Dirac fermions.§.§ Acknowledgements:A. Bose acknowledges Prime Minister's Research Fellowship. A. Bandyopadhyay thanks the Indian Institute of Science IoE postdoctoral fellowship for financial support. A.N. acknowledges support from the start-up grant (SG/MHRD-19-0001) of the Indian Institute of Science.apsrev§ SUPPLEMENTARY INFORMATION § BAND DISPERSION RELATIONS FOR DECIMATED MODEL The band dispersion relations of the four-level system obtained through the decimation scheme given in Fig. 2 of the main manuscript along the path k_z=0 plane can be written as follows. λ_1 = 1/2( (ξ_1 + ξ_2) + √((ξ_1 + ξ_2)^2 - 4 (ζ_1 + ζ_2))) λ_1 = 1/2( (ξ_1 + ξ_2) - √((ξ_1 + ξ_2)^2 - 4 (ζ_1 + ζ_2)))λ_3 = 1/2( (ξ_1 - ξ_2) + √((ξ_1 - ξ_2)^2 - 4 (ζ_1 - ζ_2)))λ_4 = 1/2( (ξ_1 - ξ_2) - √((ξ_1 - ξ_2)^2 - 4 (ζ_1 - ζ_2))) In the above eq. <ref>, the parameters depend on the eigenenergy and momentum that make the characteristic equation invariant upon decimation. The parameters can be explicitly written down as given below.ξ_1 = ε^' + ε^'' + h_1^' x_n + h_2 x_n + h_1^'x_nn + h_1^' x_p+ h_2 x_p+ h_1^'x_pp,ξ_2 = 2 d + dx_n + d_1 x_n + t_v x_n + d_1x_nn + dx_p+ d_1 x_p+ t_v x_p+ d_1 x_p,ζ_1 = ε^' ε^'' - h^2 + ε^''h_1^' x_n + ε^' h_2 x_n - h_1 h_1^' x_n^2 + h_1^'h_2 x_n^2 + d_1t_v x_n^2 + ε^'h_1^'x_nn + d_1t_v x_n x_nn + ε^''h_1^' x_p+ ε^'h_2 x_p- d_1^2x_n x_p- h_1^2x_n x_p- h_1^'^2x_n x_p+ 2 h_1^'h_2 x_n x_p+ 2d_1t_v x_n x_p- d_1^2 x_nn x_p- h_1h_1^'x_nn x_p+ h_1^'^2 x_nn x_p+ d_1t_vx_nn x_p- h_1h_1^' x_p^2 + h_1^'h_2x_p^2 + d_1t_v x_p^2 - d^2 (1 + x_n) (1 + x_p) + d t_v (x_n + x_p) (2 + x_n + x_p) + (ε^'h_1^' - d_1^2 (x_n + x_nn) +h_1^' (-h_1 x_n + h_1^' x_n - h_1^'x_nn) + d_1t_v (x_n + x_p)) x_pp - h (h_1 (x_n + x_p) + h_1^' (x_n + x_nn +x_p+ x_pp)) - d d_1 (x_nn +x_p+ x_nn x_p+ x_pp + x_n (1 + 2x_p+ x_pp)),ζ_2=-d (h_1 x_n - h_1^' x_n + h_1^'x_nn + h_1 x_p- h_1^' x_p+ 2 h_1 x_n x_p-2 h_1^' x_n x_p+ h_1^'x_nn x_p- ε^' (2 + x_n + x_p) + h (2 + x_n + x_p) +h_1^' (1 + x_n) x_pp) + t_v (x_n + x_p) (ε^'' + h_2 (x_n + x_p) + h_1^' (x_nn + x_pp)) + d_1 (-2 h_1 x_n x_p+ 2 h_1^' x_n x_p- h_1x_nn x_p- h_1 x_nx_pp -2 h_1^'x_nnx_pp + ε^' (x_n + x_nn +x_p+ x_pp) -h (x_n + x_nn +x_p+ x_pp)).In the above expressions, we have considered x_p = exp[i k_x a/2], x_n = exp[-i k_x a/2], x_pp = exp[i k_x a], and x_nn = exp[i k_x a]. We now focus on the eigenvalue λ_1 to explore the criteria for manifesting a non-dispersive band in the k_z=0 plane. It is fascinating that the condition t_v = -t^'/2 substantially simplifies the expressions of of (ξ_1 + ξ_2) and (ζ_1 + ζ_2) as 2 ε^' + 4 t_v (k/2) and ε^'^2 - (t^' + 2 d_1)^2 - (d+h)^2 + 4 t_v (ε - t) (k/2), respectively. Substituting these values in the eigenvalue mentioned above, remarkably, eliminates all the dispersive terms giving rise to a flat band in the k_z=0 plane at the energy value E=ε + 2t.§ METHODSWe have performed the electronic structure calculations by using density functional theory, as encoded in the QUANTUM ESPRESSO code <cit.>. We have employed generalized gradient approximation based on Perdew-BurkeErnzerhof parametrization <cit.> within projector augmented wave basis <cit.>. Band structures were calculated both with and without the inclusion of spin-orbit interaction (SOI) using scalar and fully relativistic pseudopotentials, respectively. 
A cutoff value of 75 Ry was chosen for plane waves. A 12 × 12 × 15 Monkhorst-Pack grid <cit.> is used for the self-consistent calculation. After the self-consistent calculation, maximally localized Wannier functions were constructed using the WANNIER90 package <cit.> considering Ni 3d and In 3p orbitals as the basis. The positions of the Dirac points on the k_z = 0 plane were calculated using the Nelder and Mead’s Downhill Simplex Method <cit.>, as implemented in WannierTools <cit.>, which takes a uniform k-mesh in the three-dimensional Brillouin zone as a set of starting points for the calculation. The 2D hexagonal plane (k_z=0) of the 3D BZ has four time-reversal invariant momenta (TRIM) points — Γ and three M points. Under the inversion symmetry of the system, we use the Fu-Kane parity-based formulation to calculate ℤ_2 invariant <cit.>. The ℤ_2 invariant for the k_z=0 plane ν_k_z=0 can be defined using (-1)^ν_k_z=0=Π_i=1^αΠ_Γ_jχ_2i(Γ_j), where `i' runs over the occupied bands and Γ_j represents a TRIM point on the k_z=0 plane. χ_2i (Γ_j) denotes the parity eigenvalue for the `2i' th band at TRIM Γ_j, and can take value ±1. § CRYSTAL STRUCTURE AND ELECTRONIC PROPERTIES OF M_3X SYSTEMS§.§ Crystal structure:M_3X (M = Ni, Mn, Co; X=Al, Ga, In, W, Sn, Nb, Si, Ge, Mo) crystallizes in the layered hexagonal lattice with non-symmorphic space group (SG) P63/mmc (No. 194). The unit cell consists of two layers of atoms. Each layer contains four atoms – three `M' atoms and one `X' atom. Atoms of the two such layers stacked along the `c' direction are connected by inversion symmetry. Fig. <ref> presents the (a) top view and (b) side view of a coupled kagome lattice. Three `M' atoms in a layer form a Kagome-type lattice – with triangles and hexagons, and the remaining `X' atom is positioned at the center of the hexagon.In this work, we have highlighted twenty-three systems of the coupled kagome family with the same space group, P63/mmc, obtained through materials search <cit.>. The lattice geometry in terms of the atomic separations – d, d^', and d_v [see, main manuscript: Fig 3(a)] of these systems have been enlisted in Table. <ref>. §.§ Band dispersion for M_3X systems:In the main manuscript, we have discussed the band dispersion for the Ni_3In system [see, main manuscript: Fig 3(c)]. Here in Fig. <ref> and <ref>, we present the electronic band dispersions in presence of spin orbit interaction for the rest of the systems. All these systems host a flat band in the k_z = 0 plane. The corresponding width of the flat bands (presented in Fig. 3(b) of the main text) have been consolidated in Table. <ref>.§.§ Orbital projected bands for the Ni_3In system:In Fig. <ref> we present the Ni-d orbital projected bands for the Ni_3In system system. We observe that the flat bands at both the k_z = 0 and k_z = π planes are majorly contributed by d_xz and d_yz orbitals. § EFFECTS OF SPIN ORBIT COUPLING: In the main manuscript, we have presented band structures calculated from the TB model [see, main manuscript: Fig. 1(c)] and DFT computations [for Ni_3In, see, main manuscript: Fig. 3(c)] without incorporating SOC. Here, we calculate the dispersions in the presence of the spin-orbit coupling (SOC). In Fig. <ref> (a) and (b), we have depicted the band dispersions obtained from DFT calculations and TB model, respectively. We note that, for Ni_3In, the intrinsic SOC value is small. Our TB model considers the SOC strength to be λ_so = 0.06t. 
In both cases, some band degeneracies, including the Dirac point at K point are lifted upon incorporating SOC, which can readily be seen in FIG. <ref>. § HOPPING PARAMETER DEPENDENT BAND DISPERSIONS: §.§ For k_z = 0 planeIn the maintext, we have explained the condition to achieve the absolute flatness from the TB model, which reads t_v/t^' = -0.5. In Fig. <ref>, we try to find the band dispersions for a variation for the above-mentioned ratio. The middle panel presents the dispersion for t_v/t^' = -0.5, with an absolute flat band at the Fermi energy and a Dirac point at K. We observe a slight variation of t_v/t^' from -0.5 makes the flat band slightly dispersive. Also, the bands forming Dirac point at K now become gapped. The left panel depicts the bandstructure t_v/t^' = -0.4. The slightly dispersive band has a minimum at K and a maximum at Γ. Flat band for t_v/t^' = -0.6 features the opposite nature with a maximum at K and a minimum at Γ, as presented in the right panel. It is worth mentioning that for all the systems we have studied, a flat-like band follows the curvature for that of the |t_v/t^'| < 0.5 case. The dispersive bands are also gapped exactly at K point. Although they could be present slightly away from K. The exact position for the Dirac point in Ni_3In system was found to be (0,0,0.07), i.e., slightly shifted along the k_z direction from the Γ point. §.§ For k_z = π plane As discussed in the main manuscript, for k_z = π plane, the two layers in the coupled kagome lattice become decoupled, i.e., all the terms associated with inter-layer hopping parameter t_v become zero. In other words, this means that for the k_z = π plane, the dispersion follows the 2D kagome lattice. Here, we discuss the variation of band dispersion as a function of intra-layer hopping parameters t and t^'. In Fig. <ref>, we present the band dispersions for (left) t > t^', (middle) t = t^' and (right) t < t^'. The system is accompanied by a flat band irrespective of the values of t and t^'. t = t^' case resembles the dispersion for the ideal kagome system with a Dirac point at L. It can be observed that both for t < t^' andt > t^' cases, bands forming Dirac point at L become gapped. However, it can be noted that despite having similar bandstructure, these are not identical. The band inversion for t < t^' makes this phase topologically non-trivial, which is distinct from the trivial phase appeared for t > t^'. In addition to the orbital exchange, as shown in Fig. <ref>, the non-triviality has further been confirmed by the Zak-phase calculation. | http://arxiv.org/abs/2310.18276v2 | {
"authors": [
"Anumita Bose",
"Arka Bandyopadhyay",
"Awadhesh Narayan"
],
"categories": [
"cond-mat.mtrl-sci"
],
"primary_category": "cond-mat.mtrl-sci",
"published": "20231027170535",
"title": "Origin of flat bands and non-trivial topology in coupled kagome lattices"
} |
A Chebyshev Confidence Guided Source-Free Domain Adaptation Framework for Medical Image Segmentation Jiesi Hu, Yanwu Yang, Xutao Guo, Jinghua Wang*, Ting Ma* This work was supported in part by grants from the National Natural Science Foundation of P.R. China (62276081, 62106113), Innovation Team and Talents Cultivation Program of National Administration of Traditional Chinese Medicine (NO:ZYYCXTD-C-202004), Basic Research Foundation of Shenzhen Science and Technology Stable Support Program (GXWD20201230155427003-20200822115709001) and The Major Key Project of PCL (PCL2021A06). (Corresponding author: Jinghua Wang, Ting Ma.) Jiesi Hu , Yanwu Yang, and Xutao Guo are with School of Electronics and Information Engineering, Harbin Institute of Technology at Shenzhen, and The Peng Cheng Laboratory.(e-mail: [email protected], [email protected], [email protected]) Jinghua Wang is with School of Computer Science and Technology, Harbin Institute of Technology at Shenzhen. (e-mail: [email protected]) Ting Ma is with School of Electronics and Information Engineering, Harbin Institute of Technology at Shenzhen, The Peng Cheng Laboratory, Guangdong Provincial Key Laboratory of Aerospace Communication and Networking Technology, Harbin Institute of Technology, Shenzhen, and International Research Institute for Artifcial Intelligence, Harbin Institute of Technology, Shenzhen. (e-mail: [email protected])Received XXX; accepted YYY ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================ Event-by-event particle ratio fluctuations for simulated data sets of three different models named UrQMD, AMPT, and Pythia are studied using the fluctuation variable ν_dyn. The simulated data sets produced in pp collisions at four different LHC energies √(s) = 2.76, 5.02, 7 and 13 TeV are generated and considered for this analysis. The variation of fluctuation parameter ν_dyn for accepted pair of meson and baryon combination i.e.[π, K], [π, p] and [p, K] with the increasing value of the mean multiplicity of charged particles (⟨ N_ch⟩) are investigated. It has been observed that the correlation between the particle pair [π, K] is more than that of the two other particle pairs [π, p] and [p, K]. 
However, the energy-wise inspection of the fluctuation variable ν_dyn for 0-10% centrality data shows the increase in the correlation between the particles in each pair for all three models considered.§ INTRODUCTIONThe experiments involving heavy ions are crafted with the purpose of investigating nuclear material in exceptionally challenging conditions, marked by elevated levels of temperature, density, or both <cit.>. Within these experiments, there's an intention to generate a novel form of matter termed quark-gluon plasma (QGP). This theoretical state of matter is believed to have influenced the geometry of the cosmic background in the immediate aftermath of the Big Bang, taking shape within a mere few microseconds <cit.>. This concept finds its roots in the field of quantum chromodynamics (QCD) <cit.>, and it's theorized to emerge during collisions of heavy ions.One notable focus is the transition between quarks and hadrons, elemental particles composed of quarks held together by the strong force. This phase transition carries distinctive traits that can be identified through various experimental indicators <cit.>. Among these indicators, the fluctuations in net charge exhibited by the particles produced in these collisions stand out <cit.>. Correlations and event-by-event (ebe) fluctuations that arise from dynamic processes are believed to be linked to critical phase transition phenomena. Investigating these aspects can reveal local and global distinctions between events that are generated from similar initial conditions <cit.>.Researchers have explored fluctuations in ebe data in both hadronic and heavy-ion collisions across a wide range of energies, utilizing various methodologies. These include multifractals <cit.>, normalized factorial moments <cit.>, k-order rapidity spacing <cit.>, erraticity <cit.>, and intensive quantities such as those involving multiplicity and transverse momentum <cit.>. Additionally, fluctuations in conserved quantities like strangeness, baryon number, and electric charge have emerged as valuable tools to gauge the level of equilibration and criticality in measured systems <cit.>.Dynamical fluctuations in net charge have been subject to investigation by experiments like the STAR and the ALICE, employing a parameter denoted as ν_dyn <cit.>. This parameter serves as an effective probe due to its resistance to detector efficiency losses <cit.>. Alternative measures for net charge fluctuations, such as the variance of charge (V(Q)), variance of charge ratio (V(R)), and the D-measure, are more susceptible to measurement conditions <cit.>. However, there has been a notable observation that emphasizes significant systematic uncertainties within such measurements <cit.>. These uncertainties often stem from variations in the volume of the system due to changes in impact parameters. Conversely, fluctuations in multiplicity ratios seem to exhibit greater sensitivity to changes in density rather than shifts in volume <cit.>. Hence, the parameter ν_dyn has been employed as a method to explore the properties of QGP. Unlike conventional definitions based on combinations of similar or dissimilar charges, ν_dyn is determined by considering pairs of particle species <cit.>. 
The growth and possible divergence of fluctuations, if a phase transition occurs, should leave an imprint on ebe observables; fluctuations in baryon number, variations in strangeness, and baryon-strangeness correlations, probed through particle pairs such as [π, K], [π, p] and [p, K], are suitable examples <cit.>.§ GOAL OF THE STUDY Researchers have conducted several studies to examine fluctuations in particle ratios within AA (nucleus-nucleus) collisions. The NA49 experiment studied Pb-Pb collisions in a range of beam energies from 20 to 158 A GeV <cit.>. The STAR experiment, on the other hand, investigated Au-Au collisions across a center-of-mass (c.m.) energy range from 7.7 to 200 GeV <cit.>. Additionally, the STAR experiment explored Cu-Cu collisions at c.m. energies √(s)= 22.4, 62.4, and 200 GeV <cit.>. These experiments were conducted alongside several others, as referenced <cit.>. Particle ratio fluctuations at higher LHC energies have been examined specifically by the ALICE experiment, focusing on collisions at a c.m. energy of 2.76 TeV <cit.>. Particle ratio fluctuations are more pronounced in heavy-ion collisions due to the formation of a quark-gluon plasma and large-scale collective effects. In proton-proton (pp) collisions at LHC energies, these fluctuations are expected to be smaller due to the lack of such collective effects. In pp collisions, the particle production process is governed by simpler mechanisms compared to heavy-ion collisions. While there can still be fluctuations in particle yields due to the inherent probabilistic nature of particle production, these fluctuations are expected to be smaller and less significant than in heavy-ion collisions. Studying particle ratio fluctuations in pp collisions at LHC energies can nevertheless provide valuable insights into the underlying particle production processes and can serve as a baseline for comparison with heavy-ion collisions. However, due to the limitations posed by the smaller system size and the absence of QGP formation, any observed fluctuations in pp collisions would likely be of a different nature and magnitude compared to those seen in heavy-ion collisions. In the present work, we have studied the correlations and fluctuations among the meson-baryon particle pairs [π, K], [π, p] and [p, K] using data sets generated with the UrQMD, AMPT and Pythia models at the c.m. energies √(s) = 2.76, 5.02, 7 and 13 TeV. The main aims of this investigation are: (i) to establish the existence of correlation among the three particle pairs [π, K], [π, p] and [p, K] using the charged-particle ratio fluctuation parameter ν_dyn at the mentioned LHC energies, (ii) to study the variation of the strength of correlation between the particles in each pair with the centrality-dependent quantity ⟨ N_ch⟩, i.e., the mean charged-particle multiplicity, (iii) to find the energy dependence of the values of ν_dyn for all three particle pairs in the most central data set (0-10%), and (iv) to set a baseline comparison of the data sets generated by the UrQMD, AMPT and Pythia models.§ SIMULATED EVENTS DESCRIPTION The Ultra relativistic Quantum Molecular Dynamics (UrQMD), A Multi-Phase Transport (AMPT) and Pythia models are used to simulate 10^5 MB events for pp collisions at √(s) = 2.76, 5.02, 7 and 13 TeV for the present investigation <cit.>. The generated data sets are then filtered to the pseudorapidity and transverse momentum ranges |η| < 1.0 and 0.2 < p_T < 5.0 GeV/c.
The ranges of η and p_T are selected to work for the region of particle distribution with maximum probability of fluctuations among the produced multiplicities in regime of the ALICE experiment. To demonstrate the feasibility of the model-based study, the p_T spectra of π, K and p produced from UrQMD, AMPT, and Pythia simulated pp interactions with the ALICE experimental data are shown in Fig. 1, Fig. 2 and Fig. 3 respectively. In case of the ALICE data in Fig. 1, Fig. 2 and Fig. 3, data of all the V0 classesare merged together and their weighted average is said as ALICE minimum bias experimental data in the respective figures <cit.>. §.§ UrQMD v3.4To explore the known and speculative mechanisms responsible for generating multiplicities of charged particles, the researchers have employed the UrQMD-3.4 model for simulation purposes <cit.>. This microscopic transport model, UrQMD, is focused on the covariant propagation of color strings and incorporates degrees of freedom related to mesons and baryons. Within this framework, the model accounts for the creation of hadronic resonances, particle resonances, and the fragmentation of color strings. Notably, these characteristics are commonly observed in experiments conducted at RHIC and LHC energy levels. For the lower energy range, approximately 5.0 GeV, the UrQMD model provides an understanding of collisions between atomic nuclei (referred to as AB collisions), contingent upon the influence of hadronic degrees of freedom. Collisions between two particles become apparent when the impact parameter b<√(σ_t/π) <cit.>, here σ_t denotes the total cross section. The UrQMD model encompasses the AB interaction, which is an accumulation of nucleon-nucleon (NN) collisions. This property renders the UrQMD model suitable for researchers to investigate AB interactions within the energy range relevant to their requirements. However, it's important to note that certain significant phenomena, such as the transition between quark-gluon plasma (QGP) and hadron gas (HG) phases, are not explicitly incorporated within the UrQMD model. Instead, the model adheres to Hagedron-type dynamics <cit.>. It describes the intermediate fireball using a local thermal and chemical equilibrium framework. The presence of Bose-Einstein correlations is likely to influence fluctuations, which are taken into account for measurement purposes. This accounts for the existence of short-range correlations among the multiplicities of charged particles in high-energy interactions <cit.>. In the UrQMD-3.4 version, a more accurate iso-energy density particlization hypersurface has been integrated <cit.>. §.§ AMPT v2.26The AMPT (A Multi-Phase Transport) model is composed of four essential components: initial conditions, partonic interactions, the transition from partonic to ha-dronic matter, and hadronic interactions <cit.>. The initial conditions are derived from the HIJING model and encompass the spatial and momentum distributions of minijet partons and soft string excitations. Zhang's parton cascade (ZPC) is responsible for simulating parton scatterings, focusing exclusively on two-body scatterings with cross sections obtained from perturbative QCD (pQCD) incorporating screening masses.In the default AMPT model, partons are combined with their parent strings once their interactions cease, and these resulting strings are then fragmented into hadrons using the Lund string fragmentation model <cit.>. 
However, in the AMPT model employing the string melting condition, quark coalescence is employed instead of recombining partons into hadrons. Subsequently, a hadronic cascade, initially based on the ART model, is expanded to include additional reaction pathways that become significant at higher energies. This cascade process is employed to describe the dynamics of the subsequent hadronic phase. These channels include the creation of baryons and antibaryons from mesons, along with the reverse reactions of annihilation, as well as the formation and decay of K^* resonance and antibaryon resonances. The AMPT model incorporates the concept of string melting. In this mechanism, excited strings that are not part of the incoming projectiles and have not directly interacted with nucleons are converted into partons based on the flavors and spin orientations of their valence quarks. When a meson transforms, specifically into a quark and an anti-quark, or when a baryon transforms, it follows a two-step process. Initially, a baryon is transformed into a quark and a diquark. This transformation is guided by weights determined from relationships found in the SU(6) quark model. Subsequently, the diquark splits into two individual quarks. It is assumed that the quark and diquark masses are the same as in the Pythia programme , i.e. m_u = 5.6 MeV/c^2, m_d = 9.9 MeV/c^2, and m_s = 199 MeV/c^2. The production period of the partons is given by t_f = E_H /m^2 _T,H , where E_H and m _T,H are the energy and transverse mass of the parent hadron respectively. The process of two-body decomposition described above occurs uniformly in the rest frame of the parent hadron or diquark. When the AMPT model incorporates string melting, its results align with those of the HIJING model when there are no interactions between partons and hadrons. This is because, in the absence of interactions, the partons would naturally approach each other as the closest partners at the same freeze-out time, effectively recombining into the original hadrons.In the AMPT string melting model, interactions between quarks and antiquarks of various types are taken into account. In the context of the string melting scenario, the straightforward quark coalescence model is employed to simulate the hadronization of partons once their interactions have ceased. In this model, the two closest partons are merged to form a meson, while the three nearest quarks (or antiquarks) are combined to create a baryon (or antibaryon). When partons combine to form a hadron, preserving a well-defined 4-momentum becomes challenging due to the fact that the invariant mass creates a continuous spectrum rather than a discrete one. The three-momentum is now preserved throughout coalescence, and the flavour and invariant mass of coalescing partons are used to identify the hadron species. The produced mesonhas a mass that is closer to the invariant mass of the coalescing quark and antiquark pair for pseudo-scalar and vector mesons with the same flavour composition. The aforementioned quark coalescence model incorporates mesons and baryons which are enlisted in the HIJING program, excluding η, Ξ^* and Σ^* which are not included in hadronic transport model. The process of hadronization leads to the emergence of a phase in which both partons and hadrons coexist. This coexistence arises because partons dynamically freeze out at different stages within the parton cascade. Consequently, the creation of hadrons through their coalescence occurs at various points in time. 
Moreover, the interactions between these particles account for the inclusion of numerous higher resonances, which act as intermediate states. Within the AMPT model, the progression of hadrons is described using the ART (A Relativistic Transport) model. Originally designed for heavy ion collisions at AGS (Alternating Gradient Synchrotron) energy levels, the ART model forms the basis of the hadron cascade in the AMPT framework <cit.>.§.§ Pythia v8.3Pythia addresses a wide array of phenomenological challenges in particle physics, as well as related topics in astro-particle, nuclear, and neutrino physics. Its foundation lies in the Lund string model of hadronization. This model is most suitable when dealing with hadronizing systems with invariant masses equal to or exceeding 10 GeV, but its reliability diminishes for systems with lower masses. For situations involving lower-mass systems, typically integrated into larger events, such as those involving color reconnections, hadronic rescattering, or heavy-flavour decays, the model is still used. For the very lowest-mass systems emitting only a couple of hadrons, a simplified cluster-style model called "ministrings" is employed, while standard string fragmentation is applied for other cases.However, Pythia offers more than just string hadron-ization. It encompasses advanced simulations for various particle-physics events. The structure of the Pythia v8.3 Monte-Carlo event generator is designed to accommodate the diverse physics descriptions and models necessary to create entirely exclusive final states as observed in collider experiments. The layout of Pythia v8.3 is divided into three key segments: process level, parton level, and hadron level. These divisions collectively enable the simulation of comprehensive and distinct final states, mirroring what is observed in real collider experiments <cit.>. The hard-scattering process, which produces transient resonances, is represented at the process level.In most cases, the hard process is represented perturbatively with a small amount of particles, often at high-energy ranges <cit.>. At the parton level in Pythia, there are several shower models available that account for initial- and final-state radiation. This stage includes the consideration of multiparton interactions, processing of beam remnants, and the probability of color-reconnection phenomena. The outcome of this parton-level development displays the actual partonic structure, including jets and a description of the underlying event. The confinement of partons according to QCD principles into color-singlet systems takes place at the hadron level.In Pythia v8.3, hadronization is depicted as QCD strings breaking apart to form hadrons. Unstable hadrons subsequently decay, and the rescattering of hadrons is accounted for at the hadron level. Since hadronization physics models often involve non-perturbative aspects, they require modeling and parameter adjustment. The output at the hadron level represents an event as it would be detected in an actual experiment. The interface between parton showers and the process level employs a matching and merging mechanism. Parton distribution functions play a crucial role at both the process level and during Initial State Radiation (ISR). The "Info" object is utilized to store and retrieve essential data across all levels of the simulation.In the analysis of heavy-ion collisions using Pythia v8.3, various parton-level objects can be employed to represent independent sub-collisions. 
These sub-collisions can then be merged to produce the final hadronization outcome. § METHOD OF ANALYSIS The net-charge fluctuations of the produced particles in high energy nucleus-nucleus collisions are one of the signatures through which the phase transition and the formation of QGP can be studied. The ν_dyn parameter is commonly accepted for investigating particle ratio fluctuations in terms of the ratio of the yields of particle species A and B <cit.>. The standard deviation which describes the fluctuations in the A/B ratio is given by the following equation <cit.> σ^2_A/B = ⟨ N^2_A⟩ - ⟨ N_A⟩^2/⟨ N_A ⟩^2 + ⟨ N^2_B⟩ - ⟨ N_B⟩^2/⟨ N_B ⟩^2-2 ⟨ N_A N_B ⟩ - ⟨ N_A ⟩⟨ N_B ⟩/⟨ N_A ⟩⟨ N_B ⟩ where N_A and N_B, respectively, represent the event multiplicities of particle types A and B within the given kinematical limits, while the quantities within ⟨ ... ⟩ represent their mean values. It is worthwhile to mention that the particle types A and B include both the particle and its anti-particle. The covariance measures how much two random variables change with each other; for identical variables, the variance is the special case of the covariance. The quantity ⟨ N_A N_B⟩ is the event average of the product of the multiplicities and thus reflects the simultaneous, correlated production of A and B type particles. Equation (<ref>) can be rewritten as σ^2_A/B = ⟨ N_A (N_A - 1) ⟩/⟨ N_A ⟩ ^2 + ⟨ N_B (N_B - 1) ⟩/⟨ N_B ⟩ ^2- 2 ⟨ N_A N_B⟩/⟨ N_A ⟩⟨ N_B ⟩+ 1/⟨ N_A ⟩ + 1/⟨ N_B ⟩ The last two terms in equation (<ref>) represent uncorrelated particle production, which is statistical in nature in the Poisson limit. The charge dependence of the dynamical net-charge fluctuations for the A/B ratio is given by ν_dyn[A,B], which measures the deviation of the fluctuations in the multiplicities of particle species A and B from that expected from Poissonian statistics <cit.>. The first three terms in equation (<ref>) are better described by σ^2_dyn. The ν_dyn[A,B] variable does not involve particle ratios directly but is related to σ_dyn as σ^2_dyn≈ν_dyn[A,B] <cit.>. Thus, ν_dyn[A,B] is given as <cit.> ν_dyn[A,B]= ⟨ N_A (N_A - 1) ⟩/⟨ N_A ⟩ ^2 + ⟨ N_B (N_B - 1) ⟩/⟨ N_B ⟩ ^2- 2 ⟨ N_A N_B⟩/⟨ N_A ⟩⟨ N_B ⟩ If particles A and B are produced in a statistically independent way, then ν_dyn[A,B] should be zero <cit.>. However, as the particles produced in the collision process are partially correlated through the production of resonances, string fragmentation, jet fragmentation, and (or) other mechanisms, the value of ν_dyn[A,B] is expected to be nonzero <cit.>. Dominance of the first two terms in equation (<ref>) gives a positive value of ν_dyn[A,B], which indicates the presence of anticorrelation, whereas a negative value of ν_dyn[A,B] is due to the dominance of the third term in equation (<ref>), which indicates a correlation between A and B type particles. In this paper we have considered three pairs of particle combinations, namely [π, K], [π, p] and [p, K], for the indices A, B. The subsample method is used to determine the statistical errors associated with ν_dyn <cit.>.
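As a minimal illustration of how ν_dyn[A,B] is computed in practice, the following Python sketch evaluates it directly from per-event multiplicities. This is an illustrative sketch only (function and variable names are ours, not the analysis code used for this work); the toy Poisson input merely checks that statistically independent production gives ν_dyn ≈ 0.

import numpy as np

def nu_dyn(n_a, n_b):
    # nu_dyn[A,B] from per-event multiplicities; each entry counts a particle
    # species together with its anti-particle, as in the definition above
    n_a = np.asarray(n_a, dtype=float)
    n_b = np.asarray(n_b, dtype=float)
    mean_a, mean_b = n_a.mean(), n_b.mean()
    term_a = np.mean(n_a * (n_a - 1.0)) / mean_a**2
    term_b = np.mean(n_b * (n_b - 1.0)) / mean_b**2
    cross = 2.0 * np.mean(n_a * n_b) / (mean_a * mean_b)
    return term_a + term_b - cross

# toy check: statistically independent (Poissonian) production gives nu_dyn ~ 0
rng = np.random.default_rng(1)
pions = rng.poisson(12.0, size=200_000)
kaons = rng.poisson(1.5, size=200_000)
print(nu_dyn(pions, kaons))

The subsample error estimate described next amounts to applying the same function to disjoint subsets of the events and taking the dispersion of the resulting values.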
Subsamples made out of the data set are used to calculate the values of ν_dyn, which are then used to estimate the mean and dispersion as: < ν_dyn[A,B] > = 1/nΣν_dyn[A,B]_i, σ_dyn = √(Σ (ν_dyn[A,B]_i - <ν_dyn[A,B]>)^2/(n-1)) Therefore, the associated statistical error is calculated as (Error)_stat = σ_dyn/√(n) § RESULT AND DISCUSSIONS In this investigation of event-by-event particle ratio fluctuations using the ν_dyn variable, simulated minimum bias (MB) events are generated using Pythia v8.3, AMPT v2.26 and UrQMD v3.4 for pp collisions at √(s)= 2.76, 5.02, 7 and 13 TeV. The number of events generated for each of these models is 10^5. Charged particles having pseudorapidity (η) and transverse momentum (pT) in the range |η| < 1.0 and 0.2 < pT < 5.0 GeV/c, in the regime of the ALICE experiment, are selected from the MB data set for this analysis. Unlike the centrality estimation of the ALICE experiment, for the purpose of simplicity in this analysis, six centrality classes ranging from 0-60% in 10% intervals are defined according to multiplicity, with the highest-multiplicity events forming the most central data set (0-10%) and the successive data sets following accordingly. The variation of ν_dyn with the mean multiplicity of charged particles (⟨ N_ch⟩) calculated from each centrality data set is represented in figs. 4-15 for the accepted meson and baryon pair combinations, i.e. [π, K], [π, p] and [p, K]. Evidently, quark number fluctuations and correlations influence the variance and two-hadron correlations and therefore ν_dyn[π, K] <cit.>. The first four figures, figs. 4-7, show the π-K fluctuations ν_dyn[π, K] at four different LHC energies, √(s)= 2.76, 5.02, 7 and 13 TeV respectively, where π refers to π^+ + π^- and K refers to K^+ + K^-. It may be noted that the value of ν_dyn is maximum for the smallest value of ⟨ N_ch⟩, i.e. for peripheral collisions, and it decays quickly for larger values of ⟨ N_ch⟩ at the respective energies for all the three data sets of Pythia, AMPT and UrQMD. The positive values of ν_dyn indicate the presence of a rather stronger anticorrelation in peripheral collisions with low ⟨ N_ch⟩. All the three data sets of UrQMD, AMPT and Pythia show a similar falling variation, but the UrQMD data represent a higher value of ν_dyn, i.e. the fluctuation between the produced π & K is larger in the UrQMD data in comparison to the Pythia and AMPT data. The AMPT data have the least fluctuations measured between π & K at all the four energies concerned in this discussion. It is also to be noted that the larger values of ⟨ N_ch⟩ show that the production of charged particles in Pythia is much higher than in the other two models, UrQMD and AMPT. The value of ⟨ N_ch⟩ corresponding to each centrality data set increases with the increase in the LHC energy from √(s)= 2.76 TeV to √(s)= 13 TeV, as an increase in centre-of-mass (c.m.) energy gives rise to more particle production in all the three models studied. The particle ratio fluctuation between π and p with ⟨ N_ch⟩ is studied and shown in figs. 8-11 for √(s)= 2.76, 5.02, 7 and 13 TeV respectively, where π refers to π^+ + π^- and p refers to p + p̅. The variation of all the data sets in this case is similar to the variation of ν_dyn[π, K] with ⟨ N_ch⟩, i.e. the value of ν_dyn[π, p] is maximum for lower values of ⟨ N_ch⟩ and then decreases as ⟨ N_ch⟩ increases towards the most central collisions.
It is very important to note that the π and p fluctuation is a little more pronounced than the fluctuation between π and K for all the generated data sets of Pythia, AMPT and UrQMD at the mentioned energies. At a certain energy, the inspection of π and p fluctuations with ⟨ N_ch⟩ shows that the UrQMD-generated data set has the maximum π and p fluctuations and AMPT has the least fluctuations between π and p among all the data sets generated using Pythia, AMPT and UrQMD. Figs. 8-11 also show that the fluctuations between π and p in the AMPT data, especially at √(s)= 2.76, 5.02 and 7 TeV, have a disturbance in the falling trend of ν_dyn[π, p] with increasing ⟨ N_ch⟩, particularly at lower values of ⟨ N_ch⟩; perhaps statistical fluctuations are the reason behind this. In figs. 12-15, the fluctuation between p (p+p̅) and K (K^+ + K^-) with ⟨ N_ch⟩ is plotted for the energies √(s)= 2.76, 5.02, 7 and 13 TeV respectively. It is clear that the value of ν_dyn[p,K] decreases non-linearly as ⟨ N_ch⟩ increases for the generated data sets of Pythia, AMPT and UrQMD. At a certain energy, the value of ν_dyn[p, K] is found to be more prominent for the UrQMD data and least for the AMPT data among all the data sets of Pythia, AMPT and UrQMD. As the centre-of-mass energy increases from √(s)= 2.76 TeV to √(s)= 13 TeV, for a given generated data set, the fluctuation in the p and K ratio decreases in magnitude for the corresponding value of ⟨ N_ch⟩. Figs. 15-18 represent the energy-wise variation of ν_dyn of the 0-10% centrality data for the particle pairs [π, K], [π, p] and [p, K] respectively. It may be of interest to note that, at a certain energy for the Pythia and UrQMD data, the magnitude of ν_dyn is least for the particle pair [π, K] and appreciably larger for the particle pairs [π, p] & [p, K] among all the three particle pairs discussed, which is perhaps due to the presence of a strong correlation between π and K produced in these models compared with the weaker correlations of the [π, p] & [p, K] combinations. The AMPT data, however, show that the value of ν_dyn is lower for the particle pairs [π, K] & [π, p] than the value of ν_dyn[p, K], which indicates that the correlations of the [π, K] & [π, p] combinations are stronger than that of the [p, K] combination. It is also interesting to note that, for a certain particle pair, the fluctuation in that particle pair ratio gradually decreases with increasing LHC energy for all the three models Pythia, AMPT and UrQMD. However, the AMPT data always show the least value of ν_dyn for the particle pairs considered, whereas the UrQMD model sometimes fails to reproduce a better result in the discussion of particle ratio fluctuations at LHC energies. Pythia follows a consistent trend in producing a generalised pattern for the variation of ν_dyn either with ⟨ N_ch⟩ or with √(s) for pp collisions at all the discussed LHC energies. § CONCLUSION The present work is to find the charged particle ratio fluctuations among the conserved charged mesons and baryons like π, K and p produced in pp collisions at four different LHC energies, √(s)= 2.76, 5.02, 7 and 13 TeV, using three different simulation models: Pythia, AMPT, and UrQMD. Our effort to study the ν_dyn variable in terms of the centrality-dependent quantity ⟨ N_ch⟩ and √(s) for three different particle pairs [π, K], [π, p] & [p, K] has led to the following conclusions: 1.
It is observed that for all the three pairs of particle species, at a certain LHC energy, the magnitude of ν_dyn increases on moving from central to peripheral collisions for all the data sets generated by Pythia, AMPT, and UrQMD. The value of ν_dyn for all three pairs of particle species is found to be positive, which indicates that the correlation between the two particles in a pair is weak, or the anti-correlation between them is strong. 2. In the case of the Pythia and UrQMD data, at a certain energy for the most central collisions (0-10%), the comparatively low value of ν_dyn[π, K] among the values of ν_dyn of the particle pairs [π, K], [π, p] & [p, K] is perhaps due to a strong correlation between π & K in the generated data of the models and/or to the multiplicity distributions of π & K being broader, whereas the higher value of ν_dyn found for the [π, p] & [p, K] combinations may be due to weaker π-p and p-K correlations present in the Pythia and UrQMD models. For the AMPT model, at a certain energy for the most central collisions (0-10%), the value of ν_dyn of [π, K] and [π, p] is much lower than in the other two models Pythia and UrQMD, and the value of ν_dyn[p, K] is somewhat larger in comparison but still remains well below that of the other two models, which indicates that the correlations between π - K and π - p are stronger than the correlation between p - K. This is perhaps due to the resonance decay processes involved in the AMPT model. 3. The value of ν_dyn for all the three pairs of particle species generated by Pythia, AMPT, and UrQMD is observed to decrease with an increase in LHC energy from √(s) = 2.76 TeV to √(s) = 13 TeV for pp collisions, indicating an increase in the correlation between the particles considered in a pair. § ACKNOWLEDGEMENT The author S. Paul expresses gratitude for the financial support granted under G.O. No. 52-Edn(B)/5B-15/2017 dt. 7.6.2017 read with 65-Edn(B)/5-15/2017 dt. 11.7.2017 for the Swami Vivekananda Merit-cum-Means Scholarship, Government of West Bengal, India. Similarly, the author T. Biswas extends appreciation for the Inspire fellowship (No. DST/INSPIRE Felloship/2022/IF220173) from the Department of Science and Technology, Govt. of India. § CONFLICT OF INTERESTS The authors declare that there is no conflict of interests regarding the publication of this paper. § DATA AVAILABILITY STATEMENT In this paper, the simulated data have been generated by using the UrQMD, AMPT & Pythia simulation models. All data generated or analysed during this study are included in this paper. 1 N. Cabibbo and G. Parisi, Physics Letters B, 59(1) (1975) 67–69. 2 M. Gyulassy and L. McLerran, Nuclear Physics A, 750(1) (2005) 30–63. 3 A. Tawfik, Physical Review D, 71(5) (2005), Article ID 054502. 4 T. J. Tarnowsky, Acta Physica Polonica B, 5 (2012) 515. 5 B. I. Abelev, M. Aggarwal, Z. Ahammed et al., Physical Review Letters, 103(9) (2009), Article ID 092301. 6 J. Adams, M. M. Aggarwal, Z. Ahammed et al., Physical Review C, 68 (2003), Article ID 044905. 7 S. Jeon and V. Koch, Physical Review Letters, 83 (1999), Article ID 5435. 8 S. Jeon and V. Koch, Physical Review Letters, 85(10) (2000) 2076–2079. 9 C. Pruneau, S. Gavin, and S. Voloshin, Physical Review C, 66(4) (2002), Article ID 044904. 10 D. Bower and S. Gavin, Physical Review C: Nuclear Physics, 64(5) (2001) 519021–519025. 11 C. A. Pruneau and The STAR Collaboration, Heavy Ion Physics, 21(2) (2004) 261–266. 12 B. Abelev, J. Adam, D. Adamová et al., Physical Review Letters, 110(15) (2013), Article ID 152301. 13 M.
Weber, Nuclear Physics A, 904-905 (2013), 467c–470c. 14 E. A. De Wolf, I. M. Dremin, and W. Kittel, Physics Reports, 270 (1996) 1–141. 15 R. C. Hwa, Physical Review D, 41(5) (1990) 1456–1462. 16 S. Khan and S. Ahmad, International Journal of Modern Physics E: Nuclear Physics, 27(1) (2018), Article ID 1850004. 16a N Subba et al.,The European Physical Journal Plus, 136(8) (2021) 1-13. 16b D Ghosh et al.,Physica Scripta, 82(4) (2010) 045201. 17 R. C. Hwa and C. B. Yang, Acta Physica Polonica B, 48(1) (2017) 23. 18 Y. Xie, G. Chen, J. Wang, Z. H. Liu, and M. J. Wang, Nuclear Physics A, 920 (2013) 33–44. 19 B. Ali, S. Khan, and S. Ahmad, Advances in High Energy Physics, 2019 (2019), Article ID 6034981. 20 S. Ahmad, A. R. Khan, M. Zafar, and M. Irfan, Chaos, Solitons & Fractals, 42(1) (2009) 538–547. 21 M. L. Cherry, A. Dabrowska, P. Deines-Jones, P. Holynski, and B. S. Nilsen, Acta Physica Polonica B, 29 (1998) 2129–2146. 22 K. Fialkowski and R. Wit, Acta Physica Polonica B, 30 (1999) 2759. 23 S. Ahmad, A. Chandra, A. Kumar et al., EPL (Europhysics Letters), 112(4) (2015) 42001. 24 R. C. Hwa, Acta Physica Polonica B, 27 (1996) 1789. 25 S. Ahmad, M. M. Khan, N. Ahmad, and A. Ahmad, Journal of Physics. G, Nuclear and Particle Physics: An Institute of Physics Journal, 30(9) (2004) 1145–1152. 26 S. Ahmad, M. M. Khan et al., (Acta Physica Hungarica A) Heavy Ion Physics, 25(1) (2006) 105–115. 27 M. Gazdzicki, M. I. Gorenstein, and M. Mackowiak-Pawlowska, Physical Review C, 88(2) (2013), Article ID 024907. 28 M. I. Gorenstein and K. Grebieszkow, Physical Review C, 89(3) (2014), Article ID 034903. 29 K. Grebieszkow and NA61/SHINE Collaboration, <https://arxiv.org/abs/1904.03165>. a B. Sharma, M. M. Aggarwal, N. R. Sahoo, and T. K. Nayak, Physical Review C, 91(2) (2015), Article ID024909. c B. I. Abelev, M. M. Aggarwal, Z. Ahammed et al., Physical Review C, 79(2) (2009), Article ID 024906. d D. K. Mishra, P. Garg, P. K. Netrakanti, L. M. Pant, and A. K. Mohanty, Advances in High Energy Physics, 2017 (2017), Article ID 1453045. e V. K. Singh, D. K. Mishra, and Z. Ahammed, Physical Review C, 101(1) (2020), Article ID 014903. f P. Christiansen, E. Haslum, and E. Stenlund,Physical Review C , 80(3) (2009), Article ID 034903. i H. Wang and STAR Collaboration, Central European Journal of Physics, 10 (2012) 1282. j V. Koch, A. Majumder, and J. Randrup, Physical Review Letters, 95(18) (2005) 182301. k T. Anticic, B. Baatar, J. Bartke et al., Physical Review C, 89(5) (2014) 054902. l L. Adamczyk, J. K. Adkins, G. Agakishiev et al., Physical Review C, 92(2) (2015), Article ID 021901. m T. Tarnowsky, Physics of Atomic Nuclei, 75(5) (2012) 546–549. n A. N. Tawfik, I. I. Abou-Salem, A. G. Shalaby, and M. Hanafy, Advances in High Energy Physics, 2016 (2016), Article ID 2475916. o ALICE Collaboration, S. Acharya, D. Adamová et al.,European Physical Journal C: Particles and Fields , 79(3) (2019) 236. p M. Arslandok and ALICE Collaboration,Nuclear Physics A, 956 (2016) 870. q H. Ma, K. Lin, W. L. Qian, and B. Wang,Advances in High Energy Physics, 2020 (2020), Article ID 6504290. 31 ALICE Collaboration (S. Acharya et al.), arXiv:2112.00610v2, (2022). 32 ALICE Collaboration (B. Abelev et al.), Eur. Phys. J. C, 74 (2014) 2974. 33 ALICE Collaboration (S. Acharya et al.),Physical Review C, 97 (2018). 34 S. Bhattacharjeeet.al., International Journal of Modern Physics E, 31(8) (2022) 2250079. ALICE2020 ALICE collaboration, Eur. Phys. J. C, 80 (2020) 693. 35 S.A. Bass et al., Prog. Part. Nucl. Phys., 41 (1998) 255. 36 P. 
Maliet.al., Modern Physics Letters A, 32(8) (2017) 1750024. 37 M. Bleicher et al. , J. Phys. G, 25 1859 (1999). 38 A. Ahmedet.al., Eur. Phys. J. A, 57 (2021)322. 39 O.V. Utyuzh et al., Phys. Lett. B, 522(2001) 273. 40 M. Belkacem et al.,Phys. Rev. C, 58 (1998) 1727. 41 P. Huovinen and H. Petersen, Eur. Phys. J. A, 48 (2012)171 [arXiv:1206.3371 [nucl-th]]. 42 Yuncun He, Zi-Wei Lin, Eur. Phys. J. A, 56 (2020) 123. 43 B. Zhang et al.,Physical Review C, 62 (2000) 054905. 44 Z. W. Lin et al., Nucl. Phys. A, 698 (2002) 375. 45 S. Pal et al., Nuclear Physics A, 730 (2004) 143. 46 Lin Z W et al.,Physical Review C, 72 (2005) 064901. 47 Lin Z W et al.,Physical Review C, 64 (2001) 011902. 48 B. Zhang et al.,Physical Review C, 61 (2000) 067901. 49 Zi-Wei Lin, Liang Zheng, Nucl Sci Tech, 32 (2021) 113. 50 U. Egede et al., Eur. Phys. J. C, 82 (2022) 773. 51 T. Sjöstrand et al., An introduction to Pythia 8.2, Comput. Phys. Commun., 191(2015) 159. 63 C. Alt, T. Anticic, B. Baatar et al.,Physical Review C, 79(4) (2009), Article ID 044910. 64 T. Anticic, B. Baatar, D. Barna et al.,Physical Review C, 83 (2011), Article ID 061902(R). 65 S. Jeon and V. Koch,Quark Gluon Plasma 3 , Edited by R. C. Hwa and X.-N. Wang, World Scientific, Singapore, 2004. 73 Hai-hing Li et al., Chinese Phys. C, 42(1) (2018) 014102. 74 S. Khan et al., Advances in High Energy Physics, 2021 (2021), Article ID 6663846. | http://arxiv.org/abs/2310.18039v2 | {
"authors": [
"Subhadeep Paul",
"Tumpa Biswas",
"Dipak Ghosh",
"Mehedi Kalam",
"Prabir Kr. Halder"
],
"categories": [
"hep-ph"
],
"primary_category": "hep-ph",
"published": "20231027103737",
"title": "Net-charge particle ratio fluctuations in $pp$ collisions at several LHC energies"
} |
Inadmissibility and Transience
Kōsaku Takanashi (RIKEN, Center for Advanced Intelligence Project) and Kenichiro McAlinn (Department of Statistics, Operations, and Data Science, Fox School of Business, Temple University)
January 14, 2024
==============================
We discuss the relation between the statistical question of inadmissibility and the probabilistic question of transience. <cit.> proved the mathematical link between the admissibility of the mean of a Gaussian distribution and the recurrence of a Brownian motion, which holds for ℝ^2 but not for ℝ^3 in Euclidean space. We extend this result to symmetric, non-Gaussian distributions, without assuming the existence of moments. As an application, we prove that the relation between the inadmissibility of the predictive density of a Cauchy distribution under a uniform prior and the transience of the Cauchy process differs from dimensions ℝ^1 to ℝ^2. We also show that there exists an extreme model that is inadmissible in ℝ^1.
MSC classification: Primary 62C15; secondary 60J46, 62C10, 62C20. Keywords: Inadmissibility; Dirichlet form; Cauchy distribution; Stable distribution; Bayesian predictive distribution.
§ INTRODUCTION Consider the mathematical connection between the admissibility of an estimator and the recurrence of a stochastic process. On the former, we have the classical statistical decision problem of the admissibility of an estimator. Since the seminal work of <cit.>, the admissibility, or inadmissibility, of an estimator with regard to its dimension has received considerable interest. In the case of the Gaussian distribution, <cit.> proved that the best equivariant estimator is admissible if the dimension of the multivariate Gaussian, d, satisfies d≤2, and inadmissible if d≥3. On the latter, we have the probabilistic problem of the recurrence of a stochastic process. In an analog to the estimation of the Gaussian mean, a Brownian motion is known to be recurrent if d=1,2 and transient if d≥3. Connecting the two problems, and establishing a mathematical link between admissibility and recurrence, has received much focus. For the Gaussian case, <cit.> proved that the estimator for the mean is admissible if and only if the corresponding diffusion process is recurrent. Specifically, the transience of the diffusion process implies inadmissibility, without any regularity conditions, while the other direction is rigorously established when the estimator has bounded risk. For non-Gaussian cases, <cit.> provided the Poisson counterpart to <cit.> by proving that the admissibility of an estimator for the mean corresponds to the recurrence of a unique reversible birth and death process. The key idea in both papers is to reformulate the question of admissibility as a calculus-of-variations minimization problem. Specifically, it is known that the diffusion is recurrent if and only if the exterior Dirichlet problem is insoluble. Exploiting the connection to the statistical problem, these papers show that the statistical estimator is admissible, equivalently, if and only if the exterior Dirichlet problem is insoluble.
More generally, <cit.> proved that, for general distributions, the sufficient condition of admissibility is for the associated stochastic process to be recurrent.While <cit.> and <cit.> proved the equivalence connection between the admissibility/inadmissibility of an estimator and the recurrence/transience for the Gaussian and Poisson processes, respectively, whether this equivalence holds for general distributions has remained an open question. One specific problem of interest is the Cauchy distribution. Just as recurrence/transience is determined by the dimension at ℝ^2 and ℝ^3 for the Brownian motion, it is also known that the recurrence/transience of a Cauchy process differs at ℝ^1 and ℝ^2. Similar to the case in <cit.>, one can expect the prediction of the associated distribution– the Cauchy– to also be admissible at ℝ^1 and inadmissible at ℝ^2. As a more extreme case, consider the estimation of a stable process with index α in ℝ^1. It is known that this is transient when α<1. Extending the concept of <cit.> to a stable process, one can expect that the predictive distribution using the MLE as the location estimate is inadmissible even at ℝ^1.Our contribution is to generalize the result in <cit.> to the relation between the symmetric infinite divisible distribution and the Lévy process. We prove that the statistical problem of the inadmissibility of estimating the predictive density of a Cauchy at ℝ^2 is equivalent to the probabilistic problem of the transience of a Cauchy process at ℝ^2. A critical departure from previous research is that we cannot assume the existence of moments, since we cannot define the squared error loss of the mean for the Cauchy. For this paper, we evaluate the Kullback-Leibler loss. This is suitable, since the evaluation is of the Bayes predictive distribution and MLE plug-in distribution, which can be interpreted as minimizing the Kullback-Leibler loss. §.§ Notation We use upper case (X) for random variables and lower case (x) for its realizations. Random variables with subscripts, s,t, denote stochastic processes (e.g. X_t). Xd=Y denotes that X,Y are identically distributed. ⟨·,·⟩ ,‖·‖ are each the inner product and norm under ℝ^d, respectively. L^2(ℝ^d;m) denotes the square integrable function space regarding measure m on ℝ^d. L^1(ℝ^d;m),L_+^1(ℝ^d;m) are each the integrable function space and non-negative integrable function space regarding measure m on ℝ^d. When written as ⟨·,·⟩ _m, it is the inner product on L^2(ℝ^d;m). 𝗆 is the Lebesgue measure on ℝ^d.§ MAIN RESULTS§.§ Bayes risk difference and Blyth's method Consider a symmetric d-dimensional location-scale distribution, p_c(x|θ.), with location, θ, and scale, cI (I is a d× d identity matrix). Here, symmetric means that the stochastic variable, X∼ p_c(x|θ.), is equivalent around the location, θ; i.e. -(X-θ)d=X-θ. Assumptions about the moments, including their existence, are not made. Based on observing a datum, X=x, we consider the problem of obtaining a predictive density, p̂(y|x.), for Y that is close to p_c(y|θ.). We measure this closeness by the Kullback-Leibler (KL) loss, L_𝖪𝖫(θ,p̂(·|x.))=𝖪𝖫(p_c|p̂.)=∫_ℝ^dlogp_c(y|θ.)/p̂(y|x.)p_c(y|θ.)dy,and evaluate p̂ by its expected loss or risk function, R_𝖪𝖫(θ,p̂)=∫_ℝ^dL_𝖪𝖫(θ,p̂(·|x.))p_c(x|θ.)dx.For the comparison of two procedures, we say that p̂^1 dominates p̂^2 if R_𝖪𝖫(θ,p̂^1)≦ R_𝖪𝖫(θ,p̂^2) for all θ and with strict inequality for some θ. 
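To make these definitions concrete, the following sketch estimates the KL risk for the one-dimensional Cauchy model treated later in the paper, comparing the plug-in density 𝒞(x,c) with the uniform-prior Bayes predictive density 𝒞(x,2c) derived in the examples below. This is an illustrative computation only (the quadrature grid, sample size and the choice c=1 are arbitrary); each KL integral is evaluated by quadrature after the substitution y=θ+c·tan(u), which maps the real line onto a finite interval.

import numpy as np

def cauchy_pdf(y, loc, scale):
    return scale / (np.pi * (scale**2 + (y - loc)**2))

def kl_cauchy(loc_p, scale_p, loc_q, scale_q, n=20001):
    # KL(p || q) for two Cauchy densities, by quadrature with y = loc_p + scale_p * tan(u)
    u = np.linspace(-np.pi / 2 + 1e-6, np.pi / 2 - 1e-6, n)
    y = loc_p + scale_p * np.tan(u)
    p = cauchy_pdf(y, loc_p, scale_p)
    q = cauchy_pdf(y, loc_q, scale_q)
    jac = scale_p / np.cos(u) ** 2
    du = u[1] - u[0]
    return float(np.sum(p * np.log(p / q) * jac) * du)

theta, c = 0.0, 1.0
rng = np.random.default_rng(0)
x = theta + c * rng.standard_cauchy(2000)                            # one datum per replication
risk_plugin = np.mean([kl_cauchy(theta, c, xi, c) for xi in x])      # plug-in  C(x, c)
risk_bayes = np.mean([kl_cauchy(theta, c, xi, 2 * c) for xi in x])   # Bayes    C(x, 2c)
print(risk_plugin, risk_bayes)   # roughly 0.81 vs 0.69: the Bayes density has the smaller KL risk

By translation invariance the same comparison holds at every θ, which is the sense in which the uniform-prior Bayes predictive density dominates the plug-in density in this model.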
The Bayes risk B_𝖪𝖫(π,p̂)=∫_ℝ^dR_𝖪𝖫(θ,p̂)π(θ)dθis minimized by the predictive distribution p̂^π(y|x.)=∫_ℝ^dp_c(y|θ.)π(θ|x.)dθ.Unless π is a trivial point prior, we have p̂^π(y|x.)∉{ p_c(y|θ.):θ∈ℝ^p} ,where p̂^π will not correspond to a ”plug-in“ estimate, p̂^π. In other words, the predictive distribution dominates any plug-in estimate <cit.>. The best equivariant predictive density, with respect to the location group, is the Bayes predictive density under the uniform prior, π_U(θ)≡1, which has constant risk <cit.>. More precisely, one might refer to p̂^π_U as a generalized Bayes solution because π_U is improper. <cit.> showed that p̂^π_U(y|x.) dominates the plug-in predictive density p_c(y|θ̂_MLE.), which simply substitutes the MLE θ̂_MLE=x for θ. This is why the evaluation must be done on the predictive density under KL loss.Again, the KL risk and KL Bayes risk are R_𝖪𝖫(θ,p̂^π) =∫_ℝ^d{ ∫_ℝ^dlogp_c(y|θ.)/p̂^π(y|x.)p_c(y|θ.)dy} p_c(x|θ.)dxB_𝖪𝖫(π,p̂^π) =∫_ℝ^d∫_ℝ^d{ ∫_ℝ^dlogp_c(y|θ.)/p̂^π(y|x.)p_c(y|θ.)dy} p_c(x|θ.)dx·π(θ)dθ.The Bayes risk difference is B_𝖪𝖫(π,p̂^π_U)-B_𝖪𝖫(π,p̂^π) =∫_ℝ^d∫_ℝ^d{ ∫_ℝ^dlogp̂^π(y|x.)/p̂^π_U(y|x.)p_c(y|θ.)dy} p_c(x|θ.)dx·π(θ)dθ =∫_ℝ^d∫_ℝ^dlogp̂^π(y|x.)/p̂^π_U(y|x.)p̂^π(y|x.)M^π(x;c)dxdy.Here, M^π(x;c) is the marginal distribution under π: M^π(x;c)=∫_ℝ^dp_c(x|θ.)π(θ)dθ.Further, since we assume that the scale parameter, cI, is equivalent for X,Y, we have ∫_ℝ^dp̂^π(y|x.)M^π(x;c)dx =∫_ℝ^d{ ∫_ℝ^dp_c(y|θ.)π(θ)p_c(θ|x.)/M^π(x;c)dθ} M^π(x;c)dx =∫_ℝ^dp_c(y|θ.){ ∫_ℝ^dp_c(θ|x.)dx} π(θ)dθ,which implies that M^π(x;c) is the invariant distribution of p̂^π(y|x.): ∫_ℝ^dp̂^π(y|x.)M^π(y;c)dy=M^π(x;c).From this, we have B_𝖪𝖫(π,p̂^π_U)-B_𝖪𝖫(π,p̂^π)=∫_ℝ^d𝖪𝖫(p̂^π|p̂^π_U.)M^π(x;c)dx.To examine the admissibility of p̂^π_U under KL risk, Blyth's method is an effective strategy, as with <cit.> Lemma 1. Therefore, it is sufficient to ask whether there exists a measure sequence of a proper prior, {π_n}, such that B_𝖪𝖫(π,p̂^π_U)-B_𝖪𝖫(π,p̂^π_n) (the Bayes risk difference using {π_n} as a proper prior for the proper predictive distribution, p̂^π_n) converges to zero. §.§ Equivalence between Bayes risk difference and Dirichlet formOur main contribution is to show that the Bayes risk difference Eq. (<ref>) in the above setup satisfies the following equality:Let 𝗆 be a Lebesgue measure on ℝ^d. Further, let M^π be the Radon-Nikodym derivative regarding the Lebesgue measure, 𝗆, of the invariant measure, M^π(x;c)dx, of p̂^π. The Bayes risk difference Eq. (<ref>) satisfies ∫_ℝ^d𝖪𝖫(p̂^π|p̂^π_U.)M^π(x;c)dx=ℰ(√(M^π),√(M^π)).Here, ℰ(f,f) and f∈ℱ are the values of the Dirichlet form of a Markov process with p̂^π_U as its transition probability and the domain of its Dirichlet form. Details of the Dirichlet form are defined below.Here, ℰ(·,·) is the Dirichlet form with transition probability, p̂^π_U, and is equal to the quadratic variation of a Markov process, { X_t}, with transition probability, p̂^π_U, under the transformation of variable, √(M^π)(details regarding the connection between the predictive distribution and the continuous-time Markov process are given in Section <ref>). Intuitively, this is equal to the instantaneous variance: lim_ t→01/ t𝔼_p̂^π_U[(√(M^π(X_t+ t))-√(M^π(X_t)))^2],where 𝔼_p̂^π_U represents expectation with respect to the transition probability p̂^π_U. Eq. (<ref>) is a generalization of <cit.> Eq. (1.3.4) by extending the quadratic loss to KL loss, and normal distribution to a symmetric location model.For the case of KL loss with normal distributions, <cit.> Corollary 1. 
shows the equivalence between the Bayes risk difference and Dirichlet form. The result states that when X∼ N(μ,v_xI) and Y∼ N(μ,v_yI), we have the following equality:B_𝖪𝖫(μ,p̂_π_U)-B_𝖪𝖫(μ,p̂_π)=1/2∫_v_w^v_x1/v^2[B_Q^v(μ,μ̂_MLE)-B_Q^v(μ,μ̂_π)]dvwhere, B_Q^v(μ,μ̂) =∫_ℝ^d𝔼_N_p(μ,vI)[‖μ̂-μ‖^2]π(μ)dμ, μ̂_π=∫_ℝ^dμπ(.μ|x)dμ, v_w=v_xv_y/v_x+v_y.The 1/v^2[B_Q^v(μ,μ̂_MLE)-B_Q^v(μ,μ̂_π)] inside the integral on the right-hand side in Eq. (<ref>) is four times the Dirichlet form of the standard Brownian motion <cit.>:1/v^2[B_Q^v(π,μ̂_MLE)-B_Q^v(π,μ̂_π)] =4∫_ℝ^d‖∇_z√(m^π(z;v))‖^2dz.Here, m^π(z;v) is the marginal likelihood of p(.z|μ)=N(μ,vI) under the prior, π(μ). Finally, we have B_𝖪𝖫(μ,p̂_π_U)-B_𝖪𝖫(μ,p̂_π)=2∫_v_w^v_x[∫_ℝ^d‖∇_z√(m^π(z;v))‖ ^2dz]dv. Although Eqs. (<ref>) and (<ref>) do not, initially, look equivalent, when v_x=v_y, we have v_w=1/2v_x, making them equivalent. When X∼ N(μ,v_xI), Y∼ N(μ,v_yI), and v_x=v_y, we have v_w=1/2v_x, and the following holds: 1/2∫_v_w^v_x1/v^2[B_Q^v(π,μ̂_MLE)-B_Q^v(π,μ̂_π)]dv=ℰ_BM^2v_x(√(m_v_x^π),√(m_v_x^π)),where m_v_x^π=m^π(x;v_x). The Dirichlet form, ℰ_BM^2v_x(·,·), corresponds to a Brownian motion where the transition probability is the predictive distribution, p̂_π_U. From this, we can see that the results for normal distributions in <cit.> also hold within the framework of this paper. The converse is also true, in that our results hold under the setting in <cit.> when v_x=v_y.Now, for <cit.> Theorem 1., they evaluate a stronger KL risk, R_𝖪𝖫(μ,p̂_π_U)-R_𝖪𝖫(μ,p̂_π)=1/2∫_v_w^v_x1/v^2[R_Q^v(μ,μ̂_MLE)-R_Q^v(μ,μ̂_π)]dvcompared to Eq. (<ref>). Because of this, they not only derive the condition for admissibility and inadmissibility, they also derive the sufficient condition to dominate p̂_π_U. Unfortunately, we were not able to derive the sufficient condition to construct an estimator or predictive distribution that is superior when inadmissible. As the main purpose of this paper is to reduce the Bayes risk difference to the analytic Dirichlet form, we consider the derivation of a dominating uniform prior predictive density, p̂_π_U, to be future research.From equality Eq. (<ref>), we can connect the statistical decision problem and the Markov process. Whether the Markov process is recurrent or transient can be discerned by the Dirichlet form, and is dependent on the existence of a sequence of functions where the Dirichlet form becomes zero <cit.>. Thus, we have* The necessary and sufficient condition for the recurrence of a Markov process with transition probability p̂_t^π_U is the existence of a function sequence, { f_n}, that satisfies { f_n}⊂ℱ,lim_n→∞f_n=1(𝗆-a.e.),lim_n→∞ℰ(f_n,f_n)=0. * The necessary and sufficient condition for the transience of a Markov process with transition probability p̂_t^π_U is that there exists an 𝗆-integrable function g that is bounded on ℝ^d with g>0, 𝗆-a.e. and satisfies0<∫_ℝ^d|f|gd𝗆≦ℰ(√(f),√(f)), ∀√(f)∈ℱ. With the above theorems, we can correspond Blyth's method and the discernment of recurrence and transience of a Markov process through its Dirichlet form. This is why the admissibility or inadmissibility of p̂^π_U under KL risk has a corresponding relationship with the recurrence or transience of a Markov process with transition probability p̂^π_U. 
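Before turning to the Cauchy case, the Gaussian identity quoted above can be checked directly in a fully tractable case. In the sketch below (an illustrative consistency check, not taken from the cited work; the prior N(0,τI) and the numerical values are arbitrary choices), X,Y∼N(μ,vI) with v_x=v_y=v, so that v_w=v/2, and the prior is N(0,τI); then the marginal m^π(·;u) is N(0,(u+τ)I) and ∫‖∇_z√(m^π(z;u))‖²dz=d/(4(u+τ)), so both sides of the identity can be evaluated numerically.

import numpy as np

def lhs(d, v, tau):
    # B_KL(pi, p_uniform) - B_KL(pi, p_pi) = E_x KL( N(s*x, s1 I) || N(x, 2v I) ),
    # with x ~ N(0, (v+tau) I), shrinkage s = tau/(tau+v), s1 = v(v+2tau)/(v+tau)
    s1 = v * (v + 2.0 * tau) / (v + tau)
    s2 = 2.0 * v
    mean_shift_sq = (v / (v + tau)) ** 2 * d * (v + tau)   # E || s*x - x ||^2
    return 0.5 * d * (np.log(s2 / s1) + s1 / s2 - 1.0) + mean_shift_sq / (2.0 * s2)

def rhs(d, v, tau, n=2_000_000):
    # 2 * int_{v/2}^{v} d / (4 (u + tau)) du, evaluated by a simple Riemann sum
    u = np.linspace(0.5 * v, v, n, endpoint=False)
    return 2.0 * np.sum(d / (4.0 * (u + tau))) * (0.5 * v / n)

for d, v, tau in [(1, 1.0, 1.0), (3, 0.5, 2.0), (5, 2.0, 0.3)]:
    print(d, v, tau, round(lhs(d, v, tau), 6), round(rhs(d, v, tau), 6))   # last two columns agree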
For the d-dimensional Cauchy process, it is known that the recurrence and transience switches between d=1 and d≧2 <cit.>, which leads to the following analogue of <cit.> for the Cauchy distribution:Consider a d-dimensional Cauchy distribution, 𝒞(θ,cI), with unknown location, θ, and known scale, c. For the problem of estimating the predictive distribution under KL risk, upon observing X=x,* When d=1, the MLE, θ̂=x, plug-in predictive distribution and the uniform prior Bayes predictive distribution is admissible. * When d≧2, the MLE, θ̂=x, plug-in predictive distribution and the uniform prior Bayes predictive distribution is inadmissible. Further, for the 1-dimensional stable distribution, Stable(α,γ,c), if the index, α, is less than one, it is inadmissible.Consider a 1-dimensional stable distribution, Stable(α,γ,c), with unknown location, γ, and known scale, c. For the problem of estimating the predictive distribution under KL risk, upon observing X=x,* When α=1, Stable(α,γ,c) is a Cauchy distribution and thus the MLE, θ̂=x, plug-in predictive distribution and the uniform prior Bayes predictive distribution is admissible. * When 0<α<1, the MLE, θ̂=x, plug-in predictive distribution and the uniform prior Bayes predictive distribution is inadmissible. While <cit.> also considers the relationship between a Markov process and the problem of estimating the mean parameter, our approach is notably different in several ways. First, the loss function in <cit.> is quadratic, which assumes the second moment of the estimator. Since this is problematic for the Cauchy, we use the KL loss and do not assume the existence of moments. Second, the goal of <cit.> is to derive the sufficient condition for admissibility by bounding the Bayes risk difference from above using the Dirichlet form. We, on the other hand, derive the exact relation between the Bayes risk difference and Dirichlet form to provide the necessary and sufficient condition to discern admissibility and inadmissibility, similar to <cit.>. §.§ Sufficient conditions for admissibility In this section, we will show that the ℝ^1-Cauchy distribution is admissible. Specifically, we show the following: the group invariant prior is a Lebesgue measure for the location parameter, but whether an estimator is inadmissible can be known from Eq. (<ref>) when the predictive distribution based on the Lebesgue prior is transient. Conversely, when the predictive distribution is recurrent, there exists a way to construct a prior sequence that approximates a uniform prior to make the estimator admissible. The ℝ^1-Cauchy distribution in Corollary <ref> and ℝ^1,2-normal distribution are examples of this construction.Let (ℰ,ℱ) be the Dirichlet form that follows a Markov process with transition probability, p̂_t^π_U. Then, we have √(M^π)∈ℱ, and for ℰ(√(M^π),√(M^π)) we have the following inequality, ℰ(√(M^π),√(M^π))≦ℰ(√(π),√(π)). See Supplementary Material Appendix <ref>. This lemma provides a guide to constructing a prior distribution that makes the estimator admissible. From this Lemma, to show admissibility, we need to show that a sequence of proper priors, {π_n}, such that ℰ(√(π_n),√(π_n))→0, can be constructed. However, {π_n} is a functional sequence in the L^2(ℝ^d,𝗆) space. Therefore, it is a functional sequence of a proper prior, and we construct a functional sequence such that it is a uniform prior when n→∞. When (ℰ,ℱ) is recurrent, the functional sequence can be constructed in ℱ, but cannot when it is transient. 
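The dimension dependence behind these corollaries can be made concrete with the Fourier-side criterion used later: a symmetric stable process with exponent ψ(z)=c‖z‖^α is transient exactly when ∫_{‖z‖<1}‖z‖^{-α}dz<∞, i.e. when α<d. The sketch below is purely illustrative (the cutoff values are arbitrary); it evaluates the radial part of this integral with a shrinking lower cutoff, which diverges for the Cauchy case d=1 but stabilises for d≥2 and for α<1 in d=1.

import numpy as np

def radial_part(d, alpha, eps):
    # int_eps^1 r^(d-1-alpha) dr : the radial factor of int_{||z||<1} ||z||^(-alpha) dz
    p = d - 1.0 - alpha
    return -np.log(eps) if np.isclose(p, -1.0) else (1.0 - eps ** (p + 1.0)) / (p + 1.0)

cutoffs = (1e-2, 1e-4, 1e-8)
for d in (1, 2, 3):
    print("Cauchy (alpha=1), d =", d, [round(radial_part(d, 1.0, e), 3) for e in cutoffs])
print("stable alpha=0.5, d=1:", [round(radial_part(1, 0.5, e), 3) for e in cutoffs])
# d = 1, alpha = 1: values grow without bound as the cutoff shrinks (recurrence);
# d >= 2, or alpha < 1 in d = 1: values stabilise (transience)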
This is the distinction between admissibility/inadmissibility for the ℝ^1,ℝ^d(d≧2) Cauchy distribution (<ref>), and ℝ^1,2,ℝ^d(d≧3) normal distribution.We now construct such a sequence. Assume that the Dirichlet form, (ℰ,ℱ), of the Markovian semigroup, { T_t}, that corresponds to the transition probability, p̂^π_U, and p̂^π_U is recurrent. We transform the Markovian semigroup with transition probability, p̂^π_U, as { T_t^η}. Choose a function, η, that is η∈ L^1(ℝ^d;𝗆)∩ L^∞(ℝ^d;𝗆), η>0 𝗆-a.e., and define it as T_t^ηf(x)≜∫_ℝ^dexp(-tη(y))p̂_t^π_U(y|x.)f(y)dy, f∈ L^2(ℝ^d;𝗆).The corresponding Dirichlet form is ℰ^η(f,g)=ℰ(f,g)+⟨ f,g⟩ _η·𝗆, f,g∈ℱ.Although, ⟨ f,g⟩ _η·𝗆=∫_ℝ^df(x)g(x)η(x)𝗆(dx).Here, { T_t^η} is a Markov process semigroup that is generated by a particle following a Markov process corresponding to { T_t} that has a survival probability ∫_0^texp(-sη(y))ds, at t.Denote the transition semigroup of this transformed transition probability as { T_t^η}, resolvent as G_α^η, and consider the function, G_α^ηη. Therefore, G_α^ηη(x) =∫_0^∞e^-αsT_sη(x)ds. =∫_0^∞e^-αs∫_ℝ^dexp(-sη(y))p̂_s^π_U(y|x.)η(y)𝗆(dy)ds.If we consider G_1/n^ηη as √(π_n) this is the desired sequence of proper priors. In other words, we have the following theorem:Assume that the Dirichlet form, (ℰ,ℱ), is recurrent. G_1/n^ηη is a function defined on ℝ^d. Then, G_1/n^ηη∈ℱ⊂ L^2(ℝ^d;𝗆), and is square-integrable, proper, 0≦ G_1/n^ηη(x)↑1, 𝗆-a.e., and asymptotically uniform, non-negative. From this, we have, lim_n→∞ℰ(G_1/n^ηη,G_1/n^ηη)=0. See Supplementary Material Appendix <ref>.Thus, Corollary <ref> is proven.To show the admissibility of the Cauchy distribution when d=1, we need to show the existence of a sequence of p̂^π that converges the Bayes risk difference to zero. This is equivalent to the existence of a sequence, M^π, that converges the Dirichlet form to zero. From, Lemma <ref>, we only need to consider the prior sequence. Since the Dirichlet form of the Cauchy process is recurrent when d=1 (from Example <ref>), if we input the functional sequence of a proper prior, { G_1/n^ηη}, from Theorem <ref>, we can show that the Bayes risk difference converges to zero. This is because the prior sequence is a measure sequence that approximates the Lebesgue measure by the L^2(ℝ^d;𝗆) function. Therefore, for a Cauchy distribution with d=1, the uniform Bayes predictive distribution, p̂^π_U, is admissible from Blyth's method <cit.>.Under a Gaussian distribution, G_1/n^ηη is equivalent to the Stein prior, which is a Green function. We can consider G_1/n^ηη to be an extension of the Stein prior to infinitely divisible distributions. However, as noted in Section <ref>, we are not evaluating the risk difference (not the Bayes risk), so for the statement ”under a Gaussian distribution, the Bayes predictive distribution with a Stein prior, which is a Green function, dominates a uniform prior distribution and is also admissible," we are only extending the ”admissible" part. The rest is for future research.§ BAYESIAN PREDICTIVE DISTRIBUTIONS AS MARKOV TRANSITION PROBABILITIES In this section, we correspond the predictive distribution, p̂^π_U(y|x.), and the continuous-time Markov process, { X_t} _t≧0. Thus, we will derive the continuous-time Markov process from the predictive distribution, p̂^π_U(y|x.), and connect the known results from Markov processes to the statistical decision problem. 
Specifically, we define the Dirichlet form that corresponds to the predictive distribution, p̂^π_U(y|x.), and the concept of transience/recurrence of Markov processes. For a more detailed explanation regarding Markov processes, see <cit.>.Let the transition probability of a continuous-time Markov process, { X_t} _t≧0, be p_t(.y|x). This can be interpreted as the density function of the probability that a Markov process, { X_t}, with initial value, X_0=x, takes the value, X_t=y, at time, t.Let us reinterpret the likelihood function, p_c(x|θ.), of the symmetric d-dimensional Location-Scale family, p_c(x|θ.), as the transition density function of a stochastic process, { X_t} _t≧0, with initial value, X_0=θ, and takes value, X_c=x, at time, c, and the random variable, X∼ p_c(x|θ.), as a stochastic process that takes value, X_c, at time, t=c. From symmetry, we have p_c(x|θ.)=p_c(θ|x.), thus p_c(θ|x.) can be interpreted as a transition density function of a stochastic process with initial value, X_0=x, and takes value, X_c=θ, at time, c. Then, from the symmetry of the distribution, p_c(x|θ.), the predictive distribution, p̂^π_U(y|x.), can be interpreted as a convolution distribution of two random variables, Y∼ p_c(y|θ.), θ∼ p_c(x|θ.)=p_c(θ|x.):p̂^π_U(y|x.)=p_2c(.y|x)=∫_ℝ^dp_c(y|θ.)p_c(θ|x.)𝗆(dθ).Here, 𝗆(dθ) is a Lebesgue measure in ℝ^d. p̂^π_U(y|x.) can be seen as the transition density function of the stochastic process, { X_t} _t≧0, with initial value, X_0=x, that takes value, X_2c=y, at time, 2c.In corresponding the transition density function, p_t(.y|x), of the continuous-time stochastic process, { X_t} _t≧0, to the predictive distribution, p̂^π_U(y|x.), we assumed that the transition density function, p_2c(.y|x), at time, t=2c, is equal to the predictive distribution, p̂^π_U(y|x.). The transition density function, p_t(.y|x), can take any positive real value including t=(0,c,2c), at time, t. However, we assume that the scale parameter of the symmetric d-dimensional Location-Scale family is equal, thus the transition density function, p_t(.y|x), does not depend on t. For example, if we set t=0.31415, p_t(.y|x) is a symmetric d-dimensional Location-Scale family with location, θ=x, and scale parameter, c=0.31415. Therefore, the uniform Bayes predictive distribution, p̂^π_U(y|x.), is the transition probability of the convolution semigroup, and is to simply consider the transition probability, p_t, to be the predictive distribution with scale parameter, c and time, t. Thus, constructing a continuous-time transition probability function from the uniform Bayes predictive distribution is straightforward.Next, using the transition probability function, p_t(.y|x), we define the notion of transience and recurrence of a Markov process, Define the potential operator as Rf(x) =∫_0^∞{∫_ℝ^dp_s(.y|x)f(y)𝗆(dy)} ds.Intuitively, ∫_ℝ^dp_s(.y|x)1_B(y)𝗆(dy) is the probability that a stochastic process, with initial value, x, arrives at B after time, s. To integrate this probability over time, [0,∞), means to obtain the number of times the stochastic process arrives at B after a long time. 
Then, we have the following definition:The transition probability function, { p_t} _t≥0, is transient if there exists positive integrable function, g∈ L_+^1(ℝ^d;𝗆), satisfying 𝗆(x|g(x)=0.)=0, such that Rg<∞, 𝗆-a.e.The transition probability function, { p_t} _t≥0, is recurrent if there exists positive integrable function, g∈ L_+^1(ℝ^d;𝗆), such that Rg =∞ or 0, 𝗆-a.e.Therefore, whether the integral value of the transition probability with regard to time is finite or infinite determines if it is transient or recurrent. As we will see later, whether the integral value of the transition value with regard to time, Rg, is finite or infinite determines whether Eq. (<ref>) is non-zero or zero.Lastly, we will briefly explain the Dirichlet form. For a more rigorous explanation, see <cit.>. Let L^2(ℝ^d;𝗆) be the entire square-integrable function regarding the Lebesgue measure, 𝗆, on ℝ^d.First, there exists a measure, m(·), that is an invariant measure m(dy)=∫_ℝ^dp_t(.y|x)m(dx), (t>0)with regard to the transition probability, p_t(.y|x). The Lebesgue measure 𝗆 is always an invariant measure, though there exists p_t(.y|x), such that the invariant measure is a finite measure, as Eq. (<ref>). However, as we will see later, for the uniform prior predictive distribution, p̂^π_U(y|x.), that we consider, the invariant distribution of the transition probability, p_t(.y|x), is only a Lebesgue measure.Next, if we differential the transition probability, p_t(.y|x) regarding time, t, we obtain the parabolic partial differential equation: d/dtp_t(.y|x)=𝒜· p_t(.y|x)regarding the operator, 𝒜, that corresponds to p_t. The operator, 𝒜, differs according to the transition probability, p_t(.y|x). Note that 𝒜· p_t(.y|x) operates on the variable, x. The function space that is the domain for the operator, 𝒜, is the linear subspace, ℱ⊂ L^2(ℝ^d;𝗆), of L^2(ℝ^d;𝗆), and represents all functions, f, such that 𝒜f∈ L^2(ℝ^d;𝗆). The size of this function space, ℱ, depends on 𝒜. This invariant measure, 𝗆, and the domain, ℱ, determined by the operator, 𝒜, is what determines the transience/recurrence of a Markov process with transition probability, p_t(.y|x), and the admissibility/inadmissibility of the predictive distribution, p̂^π_U(y|x.).Similar to considering the eigenvalue problem for matrices, we consider the following quadratic form: ⟨ -𝒜f,g⟩ _𝗆=∫_ℝ^d(-𝒜f(x))g(x)𝗆(dx), f,g∈ℱ.Regarding the operator, 𝒜, of the transition probability, p_t(.y|x), obtained from convolution, the quadratic form is symmetric: ⟨ -𝒜f,g⟩ _𝗆=⟨ f,-𝒜g⟩ _𝗆. This symmetric quadratic form is denoted as ℰ(f,g), and, with the domain, ℱ, of the operator, 𝒜, we have the Dirichlet form, (ℰ,ℱ).Using (ℰ,ℱ), we can extract from the transition probability, p_t(.y|x), the analytic information regarding 𝗆,𝒜,ℱ, and the related information on the Markov process <cit.>. In this paper, we directly connect the Bayes risk difference in Theorem <ref> and (ℰ,ℱ). By doing this, we can apply known properties of Dirichlet form and Markov processes to the statistical decision problem. (Normal distribution and Brownian motion). Let X,Y be the d-dimensional normal random variable with mean, θ, and covariance, A: X∼𝒩(θ,A)=p_A(x|θ.), Y∼𝒩(θ,A)=p_A(y|θ.).When observing a datum, X=x, the MLE is θ̂=x, and the Bayes predictive distribution under the uniform prior, π_U(θ)=1, is p̂^π_U(y|x.)=1/√((2π)^d)1/√((2A))exp(-1/2⟨ y-x,(2A)^-1(y-x)⟩),and is a normal distribution with location x: 𝒩(x,2A). However, the scale is 2A and is not the same as the plug-in predictive distribution. 
Setting the corresponding transition probability of the Markov process, p_t(.y|x), as p_t(y|x.)=𝒩(x,tA),this is the transition probability function of a Brownian motion with initial value, x, at time, t=0. When t=2, this is equivalent to the Bayes predictive distribution. A d-Brownian motion is recurrent when d=1,2, and is transient otherwise. In fact, let the transition function { p_t(.x|0);t>0} be p_t(.x|0) =1/(2πt)^d/2exp(-‖x‖^2/2t)=∏_i=1^d1/√(2πt)exp(-x_i^2/2t).Integrating p_t(.x|0) with respect to t provides ∫_0^∞p_t(.x|0)dt = ∞, d=1,2,1/21/π^d/2Γ(d/2-1)‖x‖^2-d, d≥3. Since ∫_ℝ^dp_t(y|x.)𝗆(dx)=1, the Lebesgue measure is an invariant measure. If we differentiate, p_t, with regard to, t, we have d/dtp_t(y|x.)=1/2∑_p,q=1^da_pq∂/∂ x_p∂/∂ x_qp_t(y|x.).Here, we have A=(a_pq) (matrix A, where the p,q elements are a_pq), and since A is a symmetric matrix, we have a_pq=a_qp. From this, the quadratic form is symmetric,⟨ -1/2∑_p,q=1^da_pq∂/∂ x_p∂/∂ x_qf,g⟩ _𝗆=⟨ f,-1/2∑_p,q=1^da_pq∂/∂ x_p∂/∂ x_qg⟩ _𝗆.Next, regarding the domain, ℱ, of the operator, 1/2∑_p,q=1^da_pq∂/∂ x_p∂/∂ x_q, it is immediate that C_0^∞(ℝ^d) (an infinitely continuous differentiable, compact support, continuous function space) satisfies the conditions. However, it also seems that a larger function space, e.g., a Sobolev space, H^1(ℝ^d) when A is an identity matrix. The question of what is the largest space for ℱ is a difficult one, though it is not necessary for the main results of this paper, so we leave ℱ unspecified. The Dirichlet form is ℰ(f,g)=⟨ -1/2∑_p,q=1^da_pq∂/∂ x_p∂/∂ x_qf,g⟩ _𝗆, ℱ=C_0^∞(ℝ^d). (Cauchy distribution and Cauchy processes). Let X∼𝒞(θ,cI)=p_c(x|θ.), Y∼𝒞(θ,cI)=p_c(y|θ.)be independent d-dimensional multivariate Cauchy vectors with common unknown location θ, and let p_c(x|θ.) and p_c(y|θ.) denote the conditional densities of X and Y. The scale parameter cI is known. When observing a datum, X=x, the MLE is θ̂=x, and there are no moments. The Bayes predictive distribution under a uniform prior, π_U(θ), is p̂^π_U(y|x.)=1/√(π^d+1)Γ(d+1/2)c+c/√((‖ y-x‖ ^2+(c+c)^2)^d+1)with its characteristic function φ(z)=exp(-2c‖ z‖ +i⟨ x,z⟩), which is a Cauchy distribution with location, x, and scale 2c, the latter making it different from the plug-in predictive distribution:p̂^π_U(y|x.)=𝒞(x,2cI). The potential density of { p_t(.x|θ);t>0}, which has a Cauchy distribution, 𝒞(θ,t), as its transition probability is ∫_0^∞p_t(x|θ)dt =1/π^(d+1)/2((d+1)/2)∫_0^∞t/(‖ x-θ‖ ^2+t^2)^(d+1)/2dt =∞ (d=1), 1/21/π^(d+1)/2((d-1)/2)‖ x‖ ^1-d (d≥2).Hence, the Cauchy process is recurrent for d=1 and transient for d≥2.The generator of the Cauchy process, 𝒜, cannot be represented using differential operators like the Brownian motion can. For example, the generator for the 1-dimensional Cauchy process can be expressed with integral operators, 𝒜f(x)= 1/π∫_-∞^∞{ f(x+y)-f(x)} 1/y^2dy.Here, f(x) is taken from the subspace of L^2(ℝ^d;𝗆), such that 𝒜f∈ L^2(ℝ^d;𝗆). Using Perseval's identity of the Fourier transform and the characteristic function, the Dirichlet form can be written as, ℱ={.f∈ L^2(ℝ^d;𝗆)|∫_ℝ^d|f̂(z)|^2‖ z‖ dz<∞} ,ℰ(f,g)=∫_ℝ^df̂(z)ĝ(z)‖ z‖ dz, f,g∈ℱ,where f̂,ĝ are the Fourier transform of f,g and ‖ z‖ is the Euclidean norm of ℝ^d. For further details, see Appendix A. (1-dimensional stable distribution and stable processes). Let θ=γ, and consider an additional parameter α. 
We consider the characteristic function of a 1-dimensional symmetric stable distribution, Stable(α,γ,c), with parameters, (α,γ,c): 𝔼[e^izX]=exp(-c‖ z‖ ^α+iγ z), (0<α<2).We particularly consider when 0<α≦1. The statistical problem is in estimating γ, and consider α,c to be known. The random variable with this characteristic function is symmetric around γ. Further, when α≦1, 𝔼[X]=∞ does not have any moments. When α=1, the density can be expressed, using an elementary function, with a Cauchy distribution, 𝒞(γ,c), though a density function that can be expressed using an elementary function is not known when 0<α<1. We observe a datum, x=X∼Stable(α,γ,c)and construct a Bayes predictive distribution, p^π_U(y|x.), with uniform prior, π(γ)=1. The characteristic function, 𝔼[e^izY]=exp(-2c‖ z‖ ^α+ixz),is then, p̂^π_U(y|x.)=Stable(α,x,2c).Consider the potential operator of a Markov process with a 1-dimensional stable distribution, Stable(α,0,ct) (without loss of generality, we set the location as γ=0), as its transition probability. If g(x)∈ L^1(ℝ^d;𝗆) and its Fourier transform, ĝ(x), are both integrable and g(x) is continuous (therefore bounded), we have the expression <cit.>, Rg(x)=1/(2π)^d∫_ℝ^de^i⟨ z,x⟩1/-log(𝔼[e^izX])ĝ(-z)dz.Here, the real part of the characteristic function is, ℜ𝔢(-log(𝔼[e^izX]))=c‖ z‖ ^αthus, when 0<α<1, we have ∫_ℝ^de^i⟨ z,x⟩1/c‖ z‖ ^αdz<∞,which is transient.Using Perseval's identity for the Fourier transform and the characteristic function, as with the Cauchy process, the Dirichlet form can be written as, ℱ={.f∈ L^2(ℝ^d;𝗆)|∫_ℝ^d|f̂(z)|^2‖ z‖ ^αdz<∞} ,ℰ(f,g)=∫_ℝ^df̂(z)ĝ(z)‖ z‖ ^αdz, f,g∈ℱ.For further details, see Appendix A.1. The Bayes predictive distribution, p̂^π(y|x.), with a proper prior, derives a continuous-time Markov chain with initial value, x, and takes the value, y, at time, t=1. However, the analytic expression of the continuous-time, Markovian semigroup is not as easy as p̂^π_U, and the expression of the cylindrical measure is difficult to obtain as well. Although, it is known that the stationary probability measure is M^π(x;c) and induces a stationary Markov process. Here, the initial value is x, and the stationary distribution follows the marginal likelihood, M^π(x;c). Denote the cylindrical measure as ℚ_x. Because making M^π(x;c) a stationary distribution is not sufficient to uniquely determine the stationary Markov process, there are multiple choices regarding the cylindrical measure, ℚ_x. The least we can say is that the Bayes predictive distribution, p̂^π(y|x.), with a proper prior, is a marginal distribution of a cylindrical measure, ℚ_x, at time, t={ 0,1}.The necessary and sufficient condition for admissibility, as in Theorem <ref>, is equivalent to the condition for recurrence of the Dirichlet form. Conversely, inadmissibility is in an equivalent relation to the transience of the Dirichlet form (Theorem <ref>).§ VARIATIONAL FORMULA OF KULLBACK-LEIBLER This section will present the main result of this paper. In this section, we show Eq. (<ref>): ∫_ℝ^d𝖪𝖫(p̂^π|p̂^π_U.)M^π(x;c)dx=ℰ(√(M^π),√(M^π)). <cit.> also shows, under quadratic loss, through direct transformation of the Bayes risk difference, the Dirichlet form. <cit.> derives an inequality that bounds the Bayes risk difference by the Dirichlet form of the Markov process. For their derivations, the loss function and Dirichlet form are directly connected, making it elementary. 
This is because the Dirichlet form is in quadratic form, and within quadratic loss, <cit.> derives the generator of the Brownian motion from Stein's equality, and <cit.> directly derives the Dirichlet form by looking at the Markov transition probability as the posterior distribution. On the other hand, this paper's derivation is by corresponding the predictive distribution and transition probability, where the correspondence between the KL risk and Dirichlet form is done by analyzing the Markovian semigroup.The proof strategy is to:* Show that the Bayes risk difference of the KL loss, ∫_ℝ^d𝖪𝖫(p̂^π|p̂^π_U.)M^π(x;c)dx, and the rate function, I(M^π) (defined below), is equal; * Show that the rate function, I(M^π), and the Dirichlet form, ℰ(√(M^π),√(M^π)), is equal. Now, denote the Markovian semigroup under p̂^π_U or ℙ_x as { T_t}, its infinitesimal generator, 𝒜, and the functional space that defines 𝒜 as 𝒟(𝒜). For g∈𝒟(𝒜), the functional, φ, is φ(h,g,ε)=∫_ℝ^dlogg(x)+ε/(T_hg)(x)+εM^π(x;c)dx.This functional, φ, represents the expected log likelihood ratio between the initial distribution, g(x), and the Markov process after infinitesimal time, h. Here, g is interpreted as the parameter in the likelihood ratio, and the domain of definition is g∈𝒟(𝒜)⊂ L^2(ℝ^d,𝗆) (g is not necessarily a probability density function).Then,Denote p_h=p̂_h^π_U(y|x.). Then, sup_g∈𝒟(𝒜),ε>0φ(h,g,ε)=∫_ℝ^d𝖪𝖫(p̂^π|p_h.)M^π(x;c)dxholds. Note that h need not be infinitesimal time for this lemma to hold.See Supplementary Material Appendix <ref>.Simply, this lemma states that if the expectation is a stationary distribution, M^π(x;c), then the maximum (expected) log likelihood ratio is equivalent to the KL risk.Next, we will define the rate functional. Let ℬ_b^+(ℝ^d) be the set of non-negative, Borel measurable functions u:ℝ^d↦ℝ_+ on ℝ^d, and set u_ε=u+ε for ε>0. The functional I(M^π) is defined as I(M^π)≜sup_u∈ℬ_b^+(ℝ^d),ε>0∫_ℝ^d(-𝒜u_ε)(x)/u_ε(x)M^π(x;c)dx.𝒜 is the infinitesimal generator of the Markovian semigroup, { T_t}, under p̂^π_U or ℙ_x. This I is referred to as the rate function in the large deviation literature. Then, we havelim_h↓01/hsup_g∈𝒟(𝒜),ε>0φ(h,g,ε)=I(M^π). See Supplementary Material Appendix <ref>.If we integrate both sides of Eq. (<ref>) with regard to h from 0 to 1, from Eq. (<ref>), we have, ∫_ℝ^d𝖪𝖫(p̂^π|p̂^π_U.)M^π(x;c)dx=sup_g∈𝒟(𝒜),ε>0φ(1,g,ε)=I(M^π).Additionally, for the rate function, I(M^π), the following variational formula holds.Let (ℰ,ℱ) be the Dirichlet form with measure ℙ_x, i.e., a Dirichlet form that follows a Markov process with transition probability, p̂_t^π_U. Then, we have √(M^π)∈ℱ and I(M^π)=ℰ(√(M^π),√(M^π))holds.See Supplementary Material Appendix <ref>.From this, we have shown Theorem <ref>.Finally, we show Corollary <ref> and Corollary <ref>. For the case when d=1 for Corollary <ref>, we have already shown in <ref>.We first show for the 1-dimensional symmetric stable distribution, Stable(α,γ,c), with the exponent α,(0<α<1). We can also show the inadmissibility of the Cauchy distribution for d≧2 in the same manner. The Bayes risk difference is B_𝖪𝖫(θ,p̂^π_U)-B_𝖪𝖫(θ,p̂^π)=ℰ(√(M^π),√(M^π)).Therefore, for any sequence of proper priors, {π_n}, if ℰ(√(M^π),√(M^π)) is bounded away from zero, p̂^π_U is inadmissible. The root √(M^π) of the marginal likelihood satisfies ∫_ℝ^d(√(M^π(x)))^2dx=∫_ℝ^d∫_ℝ^dp_c(x|θ.)π(θ)dθ dx=1,therefore we have √(M_t^π)∈ L^2(ℝ^d;𝗆). 
The reference function g is defined as g(x)=√(M^π(x))/max{∫_0^∞T_s√(M^π(x))ds,1}.Since { T_t} _t≧0 is transient from Example <ref>, we have ∫_0^∞T_s√(M^π(x))ds<∞. Thus, from Eq. (<ref>), we have 0<∫_ℝ^dg(x)√(M^π(x))dx≦ℰ(√(M^π),√(M^π)),which completes the proof.§ DISCUSSION§.§ Generalization to the infinitely divisible distribution We consider the infinitely divisible distribution as a generalization of distributions, such as the Normal, Poisson, Cauchy/stable, negative binomial, exponential, and . For a random variable, X∈ℝ^d, to follow an infinitely divisible distribution, is to say that the characteristic function can be written as the standard form of the Lévy distribution: 𝔼[e^izX]= exp[-1/2⟨z,Az⟩+∫_ℝ^d(e^i⟨z,x⟩-1-i⟨z,x⟩/1+|x|^2)ν(dx)+i⟨γ,z⟩].Here, A is a non-negative symmetric matrix, ν is measurable on ℝ^d and satisfies ν{ 0} =0 and ∫(|x|^21)ν(dx)<∞, and γ∈ℝ^d. The trio, (A,ν,γ), is called the generating element of μ. ν is called the Lévy measure of μ. γ is equivalent to the location parameter. For example, the ℝ^d Gaussian distribution is when ν=0, the compound Poisson distribution, we have ν=cσ and A=0,γ=0, and the Poisson distribution on ℝ is when A=0,γ_0=0,ν=cδ_1.Let the data, X be given as X=θ+ξwith the estimand being θ(∈ℝ^d). Here, ξ is a random variable that follows a symmetrized infinitely divisible distribution, given as ξ=ξ^'-ξ^'', where ξ^',ξ^'' is independently generated from the same infinitely divisible distribution. X is symmetric regarding θ: (X-θ)d=-(X-θ). We only consider the MLE when the predictive distribution is a plug-in estimator. For the Bayes predictive distribution, even if the prior is improper, it does not preserve the distributional shape <cit.>. Therefore it is difficult to make claims about families of distributions in a holistic manner.Let (<ref>) be the characteristic function of the infinitely divisible function from which ξ follows. Consider out of (A,ν,γ), A,ν to be known, and γ=0. When there is one observation, X=x, if we plug-in θ̂=x as the estimate of θ, the characteristic function of the predictive distribution p̂(y|x.) based on the MLE θ̂=x is 𝔼[e^izY]= exp[-1/2⟨z,Az⟩+∫_ℝ^d(e^i⟨z,y⟩-1-i⟨z,y⟩/1+|y|^2)ν(dy)+i⟨x,z⟩].The Dirichlet for of p̂(y|x.) can be made explicit using the characteristic function of the symmetric convolution semigroup.The Dirichlet form associated with the predictive distribution, p̂(y|x.), is ℱ={ f∈ L^2(ℝ^d;𝗆):∫_ℝ^d‖f̂(z)‖ ^2ψ(z)dz<∞} ,ℰ(f,g)=∫_ℝ^df̂(z)ĝ(z)ψ(z)dz, f,g∈ℱ,where ψ(z) is the logarithm of characteristic function -𝔼[e^izY] and f̂(z),ĝ(z) are the Fourier transformation of the integrable function, f,g on ℝ^d, respectively. See Supplementary Material Appendix <ref>. Now, the following theorem is known to test for recurrence/transience of the Lévy process <cit.>:When ψ(z)=0, then ℜ𝔢(1/-ψ(z))=∞, 1/[ℜ𝔢(-ψ(z))]=∞. We then set the bounded open neighborhood of B regarding 0. If { X_t} is ∫_Bℜ𝔢(1/-ψ(z))dz=∞it is recurrent, and if ∫_Bℜ𝔢(1/-ψ(z))dz<∞it is transient. Note that ℜ𝔢(·) is the real part of the complex number. Therefore, regarding the construction of the predictive distribution of the location, θ, of the symmetric infinitely divisible distribution, its in/admissibility can be determined by the integral value of the logarithm of the characteristic function, ψ(z), around the origin, 0∈ℝ^d. 
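As a small illustration of how the generating element (A,ν,γ) enters this test, the sketch below builds the characteristic exponent ψ(z) of a symmetrized infinitely divisible law with a Gaussian part A=σ²I and a symmetrized compound Poisson part with Lévy measure ν=cΣ_i(δ_{e_i}+δ_{-e_i}). The choice of generating element and all numerical values are illustrative assumptions, not taken from the paper. Near the origin ψ(z) behaves like a constant times ‖z‖², so the integral over a small ball behaves like ∫_ε^1 r^{d-3}dr, which is finite only for d≥3, anticipating the implication drawn next.

import numpy as np

def psi(z, sig2=0.5, c=1.0):
    # characteristic exponent psi(z) = -log E[exp(i<z, xi>)] for the symmetrized example above
    z = np.atleast_2d(z)
    return 0.5 * sig2 * np.sum(z**2, axis=1) + 2.0 * c * np.sum(1.0 - np.cos(z), axis=1)

d = 3
t = np.logspace(-6, -1, 6)
z = np.outer(t, np.ones(d) / np.sqrt(d))   # points approaching 0 along a fixed direction
print(psi(z) / t**2)                        # approaches a constant: psi(z) ~ kappa * ||z||^2 near 0

def origin_integral(d, eps):
    # radial part int_eps^1 r^(d-1) / r^2 dr of the recurrence/transience test
    return -np.log(eps) if d == 2 else (1.0 - eps ** (d - 2.0)) / (d - 2.0)

for d in (1, 2, 3):
    print("d =", d, [round(origin_integral(d, e), 3) for e in (1e-2, 1e-4, 1e-8)])
# d = 1, 2: grows without bound (recurrent); d = 3: stabilises (transient)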
From this condition, it is known that a Lévy process in dimension d≧3 is transient, and as a statistical implication, we have the following result: If d≧3, then the plug-in predictive distribution based on the MLE of the location θ of a symmetric infinitely divisible distribution on ℝ^d is always inadmissible under 𝖪𝖫 loss.§.§ Domination of p̂^π over p̂^π_U In this paper, we investigated the Bayes risk difference B_𝖪𝖫(π,p̂^π_U)-B_𝖪𝖫(π,p̂^π) between the Bayesian predictive density with uniform prior, p̂^π_U(y|x.), and the Bayesian predictive density with a proper prior, p̂^π(y|x.), and found that, depending on the dimension, p, of the parameter, θ(∈ℝ^p), the difference cannot be zero. Since p̂^π(y|x.) is an admissible predictive distribution, by Blyth's method, whether the Bayes risk difference is zero or not determines whether p̂^π_U(y|x.) is admissible or inadmissible. From both Corollary <ref> and Corollary <ref>, when p>1 and p≧1, respectively, p̂^π_U(y|x.) is inadmissible, meaning that there exists a predictive distribution that dominates p̂^π_U(y|x.). A predictive density p̂_1 dominates p̂_2 if R_𝖪𝖫(θ,p̂_1)≦ R_𝖪𝖫(θ,p̂_2) for all θ, with strict inequality for some θ. Thus, to construct a predictive distribution that dominates p̂^π_U(y|x.), we have to evaluate the risk itself, rather than the Bayes risk. This paper only evaluated the Bayes risk, and has not considered the pointwise risk that is not integrated over the prior.One direction of future research is to construct an estimator that dominates p̂^π_U(y|x.). The (non-Bayes) risk difference under KL loss isR_𝖪𝖫(θ,p̂^π_U)-R_𝖪𝖫(θ,p̂^π)= ∫_ℝ^p{ ∫_ℝ^plogp̂^π(y|x.)/p̂^π_U(y|x.)p_c(y|θ.)dy} p_c(x|θ.)dx.Let the scale parameter be known, with c=c_x=c_y and p_c(y|θ.)=p_c(x|θ.). Following Lemma 2 of <cit.>, one approach is to analyze the density ratio, p̂^π(y|x.)/p̂^π_U(y|x.). This is because, for a normal model, N(θ,cI), we have p̂^π(y|x.)/p̂^π_U(y|x.)=m^π(x;1/2c)/m^π(x;c). However, outside the normal model it is difficult to derive such an analytical expression. One viable method is to consider, as we do in this paper, two cylindrical measures, ℙ_x and ℚ_x, of continuous-time Markov processes { X_t;X_0=x} _t∈[0,2c] obeying transition probabilities, p̂^π_U(y|x.) and p̂^π(y|x.), respectively, and define the density ratio process, dℚ_x/dℙ_x. The cylindrical measures, ℙ_x and ℚ_x, are the joint probability measures of the continuous-time Markov process, { X_t} _t∈[0,2c], and if we let 0<t_1<⋯<t_n<2c be an arbitrary finite partition of [0,2c], the density ratio process, dℚ_x/dℙ_x, isdℚ(X_2c,X_t_n,⋯,X_t_1,X_0=x)/dℙ(X_2c,X_t_n,⋯,X_t_1,X_0=x).The density ratio, p̂^π(y|x.)/p̂^π_U(y|x.), can be interpreted as p̂^π(y|x.)/p̂^π_U(y|x.)=ℚ(X_2c=y,X_t_n∈ℝ^p,⋯,X_t_1∈ℝ^p,X_0=x)/ℙ(X_2c=y,X_t_n∈ℝ^p,⋯,X_t_1∈ℝ^p,X_0=x).Here, dℚ_x/dℙ_x is a change of measure from the probability measure ℙ_x to ℚ_x, i.e., a transformation of drift from a Markov process { X_t} following ℙ_x to a Markov process following ℚ_x. If we use martingale theory, it may be possible to obtain an analytical expression for dℚ_x/dℙ_x, similar to Girsanov's formula. This will be left for future work. Supplementary Material for Inadmissibility and Transience § PRELIMINARIES In this section, we will explain continuous-time Markov processes, Dirichlet forms, transition probability functions, cylindrical measures (measures of continuous-time stochastic processes), and how to discern the recurrence and transience of Markov processes through the Dirichlet form.
Then, we will explain the location estimation problem and its correspondence to the Bayes predictive distribution. §.§ Markov process and Dirichlet form Let ω be a right continuous left limits path in the path space, 𝕎, which takes values on the state space, ℝ^d, and defined by T. T is either the interval, [0,∞), or { 0,1,2,⋯}. Define the projection, k_t, θ_t, for 𝕎↦𝕎, as θ_t(ω)(s) =ω(s+t)k_t(ω)(s) =ω(t∧s).𝕎 is closed under the projection, k_t,θ_t, and ℱ=ℬ(𝕎) is the σ-algebra generated from the set ℱ={.ω∈𝕎|ω(t)∈ A} (t>0,^∀A∈ℬ(ℝ^d)).Henceforth, ω(t) is denoted as X_t(ω). Thus, for each, t∈ T, X_t is the measurable projection from the measurable space, (𝕎,ℱ), to the measurable space, (ℝ^d,ℬ(ℝ^d)); X_t∈ℱ/ℬ(ℝ^d).. Further, the filtration, ℱ_t, is a partial σ-algebra of ℱ, written as ℱ_t={.ω∈𝕎|k_t(ω)∈ B} (^∀B∈ℱ).This is a set of all paths up until t in ℱ.Consider a family of cylindrical measure, {ℙ_x} _x∈ℝ^d, on (𝕎,ℱ), with initial value, x, ℙ_x(X_0(ω)=x)=1, ℙ_x(B) is ℬ(ℝ^d)-measurable for all x∈ℝ^d, and satisfies the Markov property: i.e., for ℙ_x, measure 1, ℙ_x(.θ_t(ω)∈B|ℱ_t) =ℙ_X_t(ω)(B), B∈ℱ.If we take any point, 0<s_1<s_2<⋯<s_n, within T, for each x, ℙ_x(.X_s_n+t∈A|X_s_1,⋯,X_s_n) =ℙ_x(.X_t(θ_s_n)∈A|X_s_n)=ℙ_X_s_n(X_t∈A)holds with ℙ_x measure 1. This shows that { X_t} _t≥0 is a Markov process on ℝ^d with an initial degenerate distribution, δ_x, for each x, and transition probability,p_t(.A|x)=ℙ_x(X_t∈ A). Note that ℙ_x(X_t∈ A) is a cylindrical measure, which is to say, for any point, 0<s_1<s_2<⋯<s_n<t in T, we have ℙ_x(X_s_1∈ℝ^d,X_s_2∈ℝ^d,⋯,X_s_n∈ℝ^d,X_t∈ A). The semigroup of the positive linear operator, { T_t|t≧0.}, on the space of all bounded measurable functions, B(ℝ^d), on ℝ^d, is derived by T_tf(x)=∫_ℝ^df(y)p_t(.dy|x), f∈ B(ℝ^d)from the transition probability, p_t(.A|x). Here, { T_t} satisfies T_s+t=T_sT_t and (s≧0) on B(ℝ^d). If we represent 𝔼_x^ℙ[·] with regard to the expectation of the cylindrical measure, ℙ_x, we have T_tf(x) =𝔼_x^ℙ[f(X_t)],from which T_s+t=T_sT_t follows. Let 𝗆 be a positive Lebesgue measure on ℝ^d. For a Markov process, when the corresponding semigroup, { T_t}, satisfies∫_ST_tf(x)g(x)𝗆(dx)=∫_Sf(x)T_tg(x)𝗆(dx)for any t>0 and any non-negative measurable function, f,g, it is said to be 𝗆-symmetric. A symmetric Markov process is also called a reversible Markov process. Here, { T_t} is uniquely realized as a strongly continuous contraction semigroup on the real L^2(ℝ^d;𝗆) space. L^2(ℝ^d;𝗆) is the entire square integrable function regarding the Lebesgue measure, 𝗆, on ℝ^d. Let 𝒜 be the generator of { T_t}, then 𝒜 is a negative semidefinite self-adjoint operator, which defines the symmetric form, (ℰ,ℱ), on L^2(ℝ^d;𝗆): ℰ(f,g)=⟨√(-𝒜)f,√(-𝒜)g⟩ _𝗆, ℱ=𝒟(√(-𝒜)).⟨ ,⟩ is the internal product of L^2(ℝ^d;𝗆), and 𝒟(𝒜) is the functional space that defines 𝒜. (ℰ,ℱ) is called the Dirichlet form of the 𝗆-symmetric process.A d-dimensional Brownian motion is symmetric with regard to the Lebesgue measure, and the Dirichlet form is given as ℰ(f,g)=1/2∫_ℝ^N∇ f(x)·∇ g(x)𝗆(dx), ℱ={ f∈ L^2(ℝ^d;𝗆)|∂ f/∂ x_i∈ L^2(ℝ^d;𝗆),1≦ i≦ d.} ,though all differential are with regard to generalized functions. ℱ is a Soblev space. This Dirichlet form was what was used in <cit.>. The Dirichlet form of the Cauchy (stable) process on ℝ^d can be specifically shown using the characteristic function of a symmetric convolutional semigroup. 
First, define the Fourier transformation of the integrable function, f, on ℝ^d as f̂(z)=1/(2π)^d/2∫_ℝ^de^i⟨ z,y⟩f(y)dy, z∈ℝ^d.Denote the probability measure of the Cauchy (stable) process on ℝ^d as μ and the probability measure of the Cauchy (stable) process at time, t, as μ_t, then μ_t can be represented as the t-times convolution of μ; μ^t*. Then the characteristic function can be written as μ̂_t=μ̂^t. The transition probability is defined as p_t(.B|x)=μ^t*(B-x)and the semigroup, T_tf, is T_tf(x)=∫_ℝ^df(x+y)μ_t(dy), t>0,x∈ℝ^d,f∈ L^2(ℝ^d;𝗆),which is a convolution semigroup. Given the characteristic function, μ̂_t, denote the logarithm, -ψ, as μ̂_t(z)=e^-tψ(z). Using Perseval's theorem, ⟨ f,g⟩ _𝗆=⟨f̂,ĝ⟩ _𝗆, f,g∈ L^2(ℝ^d;𝗆),we have T_tf=μ̂_t·f̂, therefore ℰ^(t)(f,f)= 1/t⟨f-T_tf,f⟩_𝗆= 1/t∫_ℝ^d(f̂(z)-μ̂_t(z)f̂(z))f̅̂̅(z)dz= ∫_ℝ^d|f̂(z)|^21-exp(-tψ(z))/tdz.When t↓0, the last integral is monotonically increasing and converges to ∫_ℝ^d|f̂(z)|^2ψ(z)dz. Thus, ℱ={.f∈ L^2(ℝ^d;𝗆)|∫_ℝ^d|f̂(z)|^2ψ(z)dz<∞} ,ℰ(f,g)=∫_ℝ^df̂(z)ĝ(z)ψ(z)dz, f,g∈ℱ.is the Dirichlet form of the Cauchy process.The characteristic function of the predictive distribution of a d-dimensional Cauchy distribution, p̂_t^π_U(y|x.), is μ̂(z)=exp(-2c‖ z‖ +i⟨ x,z⟩)and the characteristic function of the predictive distribution of a stable function with d=1, 0<α<2, exponential α, p̂_t^π_U(y|x.), is μ̂(z)=exp(-2c|z|^α+ixz).Thus, substituting ψ to ℰ(f,g) for each characteristic function gives us the Dirichlet form.§.§ The recurrence and transience of Markov processes and its distinction using the Dirichlet form In this section we introduce the notion of transience and recurrence of the Markovian semigroup. The recurrence and transience of a Markovian semigroup can be characterized in terms of its associated Dirichlet form and extended Dirichlet space.A σ-finite measure 𝗆 on ℝ^d is called an invariant measure if ∫_ℝ^dp_t(.A|x)𝗆(dx) =𝗆(A),∀t>0,∀A∈ℬ(ℝ^d)holds. A spatially homogeneous Cauchy process has its invariant measure as a Lebesgue measure. Define the potential operator as Rf(x) =∫_0^∞T_sf(x)ds.It holds that Rf(x)=lim_α→0G_αf(x).The Markovian semigroup { T_t} _t≥0 is transient if there exists a positive integrable function g∈ L_+^1(ℝ^d;𝗆), satisfying 𝗆(x|g(x)=0.)=0, such that Rg<∞, 𝗆-a.e.The Markovian semigroup { T_t} _t≥0 is recurrent if there exists a positive integrable function f∈ L_+^1(ℝ^d;𝗆), such that Rg =∞ or 0, 𝗆-a.e.Next, we consider the characterization of recurrence and transience using the Dirichlet form. First, for transience, the necessary and sufficient condition is for the Dirichlet form, ℰ, to be bounded away from 0. (Transience of Dirichlet form). Let { T_t} _t>0 be a strongly continuous Markovian semigroup on L^2(ℝ^d;𝗆) and (ℰ,ℱ) be the associated Dirichlet space relative to L^2(ℝ^d;𝗆). Then, { T_t} _t>0 is transient if and only if there exists a bounded, 𝗆-integrable, strictly positive function, g, such that g>0, 𝗆-a.e. on ℝ^d satisfying ∫_ℝ^d|f|gd𝗆≦ℰ⟨f,f⟩, ^∀f∈ℱ.Finding the function, g, can be done as follows. First, there exists a g∈ L^1(ℝ^d;𝗆)∩ L^2(ℝ^d;𝗆) that satisfies the following lemma. For any non-negative, integrable function, g∈ L^1(ℝ^d;𝗆)∩ L^2(ℝ^d;𝗆),sup_u∈ℱ⟨|u|,g⟩ _𝗆/√(ℰ(u,u))=√(∫_ℝ^dg· Ggd𝗆),holds. Any strongly continuous transient Markovian semigroup { T_t} _t>0 admits a strictly positive bounded 𝗆-integrable function g, such that ∫_ℝ^dg· Ggd𝗆≤1. Such g is called a reference function of the transient semigroup { T_t} _t>0. 
The reference function g can be constructed by taking a strictly positive bounded measurable function f∈ L^1(ℝ^d;𝗆) with ∫_ℝ^d|f(x)|𝗆(dx)=1 and letting g=f/(Gf∨1),where g is dominated by f and g>0, 𝗆-a.e. This g provides ∫_ℝ^dg· Ggd𝗆≤∫_ℝ^df· Ggd𝗆≤∫_ℝ^dGf·(f/Gf)d𝗆=∫_ℝ^dfd𝗆=1.Therefore, we have, ⟨|u|,g⟩ _𝗆/√(ℰ(u,u))√(ℰ(u,u))≦sup_u∈ℱ⟨|u|,g⟩ _𝗆/√(ℰ(u,u))√(ℰ(u,u))≦√(ℰ(u,u)),where √(ℰ(u,u)) is bounded from below by a positive number, ⟨|u|,g⟩, if { T_t} _t>0 is transient.Next, we consider recurrence. This depends on whether there exists a functional sequence within ℱ that converges the Dirichlet form, ℰ, to 0. (Recurrence of Dirichlet form). For each Dirichlet form (ℰ,ℱ) on L^2(ℝ^d;𝗆), the following is equivalent: (i) { T_t} _t>0 is recurrent. (ii) There exists a sequence { f_n} satisfying{ f_n}⊂ℱ,lim_n→∞f_n=1(𝗆-a.e.),lim_n→∞ℰ(f_n,f_n)=0.(iii) 1∈ℱ_e,ℰ(1,1)=0, where ℱ_e is extended Dirichlet form. In the case where 𝗆(supp(p_t(·)))<∞,1∈ℱ, ℰ(1,1)=0.§ PROOF OF LEMMA <REF>Extending and modifying the derivation of the Dirichlet form in <cit.> to the case of X∼ N(μ,vI), we have1/2∫_v_w^v_x1/v^2[B_Q^v(π,μ̂_MLE)-B_Q^v(π,μ̂_π)]dv =1/2∫_v_w^v_x1/v^2[∫_μ[R_Q^v(μ,μ̂_MLE)-R_Q^v(μ,μ̂_π)]π(μ)dμ]dv =1/2∫_v_w^v_x1/v^2[B_Q^v(μ,μ̂_MLE)-B_Q^v(μ,μ̂_π)]dv =1/2∫_v_w^v_x1/v^2[∫_ℝ^p‖v∇m^𝗆(z_v;v)/m^𝗆(z_v;v)-v∇m^π(z_v;v)/m^π(z_v;v)‖^2m_π(z_v;v)dz_v]dv =1/2∫_v_w^v_x[∫_ℝ^p‖∇m^𝗆(z_v;v)/m^𝗆(z_v;v)-∇m^π(z_v;v)/m^π(z_v;v)‖^2m_π(z_v;v)dz_v]dv =1/2∫_v_w^v_x[∫_ℝ^p‖∇m^π(z_v;v)‖^2/m^π(z_v;v)dz_v]dv =1/2∫_v_w^v_x[∫_ℝ^p‖∇_z2√(m^π(z_v;v))‖^2dz_v]dv =2∫_v_w^v_xℰ_BM(√(m_v^π),√(m_v^π))dvand obtain the Dirichlet form for the Brownian motion. Here, Z is dependent on v, thus denoted as Z_v: Z_v_w =W=(v_y+0v_x)X+1v_xY/v_x+v_y∼𝒩_p(μ,v_wI),Z_v_x =X=(v_y+v_x)X+0v_xY/v_x+v_y∼𝒩_p(μ,v_wI). Further, the following holds: ∫_v_w^v_xℰ_BM(√(m_v^π),√(m_v^π))dv=v_xℰ_BM(√(m_v_x^π),√(m_v_x^π)).Here, the Dirichlet form, ℰ_BM(·,·), is the Dirichlet form of the standard Brownian motion. For this, we use the self-similarity of the Gaussian kernel, 1/(2π v)^p/2exp{ -‖ z‖ ^2/2v} =λ^p1/(2π vλ^2)^p/2exp{ -‖λ z‖ ^2/2vλ^2} .This implies, { Z_v}d={1/λZ_vλ^2} for the Brownian motion. Since the Brownian motion with initial value, Z_0=μ, is { Z_v-μ}d={1/λZ_vλ^2-μ}, we can write 1/(2π v)^p/2exp{ -‖ z-μ‖ ^2/2v} =λ^p1/(2π vλ^2)^p/2exp{ -‖λ z-λμ‖ ^2/2vλ^2} .Then the marginal likelihood, m_π(z_v;v), and its derivative function, ∇_zm_π(z_v;v), is μ^'=√(v_x/v)μ, then m_π(z_v;v) =∫_ℝ^p(√(v_x/v))^p/(2πvv_x/v)^p/2exp{ -‖√(v_x/v)z-μ^'‖^2/2vv_x/v} π(μ^')(√(v/v_x))^p𝗆(dμ^') =m_π(√(v_x/v)z,v_x)∇_zm_π(z_v;v) =∇_zm_π(√(v_x/v)z,v_x) =∫_ℝ^p1/(2πv_x)^p/2exp{ -‖√(v_x/v)z-μ^'‖^2/2v_x} { -√(v_x/v)‖√(v_x/v)z-μ^'‖/v_x} π(μ^')𝗆(dμ^') =√(v_x/v)∫_ℝ^p1/(2πv_x)^p/2exp{ -‖√(v_x/v)z-μ^'‖^2/2v_x} { -‖√(v_x/v)z-μ^'‖/v_x} π(μ^')𝗆(dμ^') =√(v_x/v)∂m_π/∂z(√(v_x/v)z,v_x).Here, let ∂ m_π/∂ z(√(v_x/v)z,v_x) be the derivative function, ∇_zm_π(z_v;v), where (z_v,v)=(√(v_x/v)z,v_x) is substituted for the variable. 
For the integral, ∫_ℝ^p‖∇m^π(z_v;v)‖^2/m^π(z_v;v)𝗆(dz_v) =∫_ℝ^p‖∇_zm_π(√(v_x/v)z,v_x)‖^2/m_π(√(v_x/v)z,v_x)(√(v_x/v))^p𝗆(dz) =(√(v_x/v))^p∫_ℝ^pv_x/v‖∂m_π/∂z(√(v_x/v)z,v_x)‖^2/m_π(√(v_x/v)z,v_x)𝗆(dz)if we variable transform as x=√(v_x/v)z, we have ∂ z/∂ x=√(v/v_x) and ∂ m_π/∂ z(√(v_x/v)z,v_x)=√(v_x/v)∂ m_π/∂ x(x;v_x), thus we have (√(v_x/v))^p∫_ℝ^pv_x/v‖∂m_π/∂z(√(v_x/v)z,v_x)‖^2/m_π(√(v_x/v)z,v_x)𝗆(dz) =(√(v_x/v))^p∫_ℝ^p(v_x/v)^2‖∂m_π/∂x(x;v_x)‖^2/m_π(x;v_x)(√(v/v_x))^p𝗆(dx) =∫_ℝ^p(v_x/v)^2‖∂m_π/∂x(x;v_x)‖^2/m_π(x;v_x)𝗆(dx).Therefore, 1/2∫_v_w^v_x[∫_ℝ^p‖∇m_π(Z_v;v)‖^2/m_π(Z_v;v)𝗆(dz_v)]dv =1/2∫_v_w^v_x∫_ℝ^p(v_x/v)^2‖∂m_π/∂x(x;v_x)‖^2/m_π(x;v_x)𝗆(dx)dv =1/2v_x∫_ℝ^p‖∂m_π/∂x(x;v_x)‖^2/m_π(x;v_x)𝗆(dx) =2v_x∫_ℝ^p‖∂/∂x√(m_π(x;v_x))‖^2𝗆(dx). 2v_xℰ_BM(√(m_v_x^π),√(m_v_x^π)) is 2v_xℰ_BM(√(m_v_x^π),√(m_v_x^π)) =2v_x∫_ℝ^d(-∇^2√(m_π(x;v_x)))√(m_π(x;v_x))𝗆(dx) =∫_ℝ^d(-2v_x∇^2√(m_π(x;v_x)))√(m_π(x;v_x))𝗆(dx) =ℰ_BM^2v_x(√(m_v_x^π),√(m_v_x^π)),so the generator is the Dirichlet form of 2v_x∇^2. This is equivalent to the Dirichlet form of the Markov process when the transition probability is p̂_π_U. § PROOF OF LEMMA <REF> Let (ℰ,ℱ) be the Dirichlet form with measure ℙ_x, i.e., a Dirichlet form that follows a Markov process with transition probability, p̂_t^π_U. Then, √(M^π)∈ℱ, and ℰ(√(M^π),√(M^π)) has the following inequality, ℰ(√(M^π),√(M^π))≦1/t{⟨√(π),√(π)⟩ _𝗆-⟨√(M^π),√(M^π)⟩ _𝗆}≦ℰ(√(π),√(π)). Given, 0≦λ we integrate the inequality,λ√(e^-2tλ)≦1/t(1-√(e^-2tλ))≦λ,with regard to the measure, d⟨ E_λ√(π),√(π)⟩, of the spectral family, { E_λ} _λ≧0, that corresponds to ℰ. Then, we have, ℰ(√(T_tπ),√(T_tπ)) =∫_0^∞λe^-λtd⟨E_λ√(π),√(π)⟩ ≦∫_0^∞1/t(1-e^-tλ)d⟨E_λ√(π),√(π)⟩ ≦∫_0^∞λd⟨E_λ√(π),√(π)⟩=ℰ(√(π),√(π)).Here, we have ∫_0^∞1/t(1-e^-tλ)d⟨E_λ√(π),√(π)⟩=1/t{ ⟨√(π),√(π)⟩_𝗆-⟨√(T_tπ),√(T_tπ)⟩_𝗆} ,and √(T_1π(x))=√(∫_p_c(θ|x.)π(dθ))=√(M^π(x;c)). § PROOF OF THEOREM <REF>First, in general, for f∈ L^2(ℝ^d;𝗆), g∈ℱ, α>0, if we set f_n=G_1/n^ηη,we have ,f_n∈ℱ and 0≦ f_n(x)↑1,[m]. This is because, ℰ_α(G_α^ηf,g)=ℰ_α^η(G_α^ηf,g)-⟨ G_α^ηf,g⟩ _η·𝗆=⟨ f-η G_α^ηf,g⟩ _𝗆and G_α^ηf=G_α(f-η G_α^ηf), α>0.On the other hand, we have 0≦ G^ηη≦1, 𝗆-a.e.. If we substitute f with η in G_α^ηf=G_α(f-η G_α^ηf), and α↓0, then, Gη(1-G^ηη)=G(η-η G^ηη)=G^ηη≦1, 𝗆-a.e.Using recurrence, we have G^ηη=1 from, ^∃g∈ L_+^1(ℝ^d;𝗆), Gg=∞, 𝗆-a.e.This is because, since E={ x∈ E|Gg(x)=∞.}, 𝗆-a.e. and Gη(1-G^ηη)≦1, 𝗆-a.e., when C={ x∈ E|Gη<∞.}, we have Gη=0, and η=0,𝗆-a.e. on C. However, this contradicts η>0, 𝗆-a.e.. The rest is when Gη=∞, however, Gη(1-G^ηη)≦1 only holds when G^ηη=1. If we letf_n=G_1/n^ηη, we have 0≦ f_n↑1 and n→∞, then ℰ(f_n,f_n)≦ℰ_1/n(f_n,f_n)=⟨η-η f_n,f_n⟩ _𝗆≦∫_ℝ^d(η-η f_n)d𝗆→0.§ PROOF OF LEMMA <REF> First, with regard to 1., for 𝖪𝖫(p̂^π|p̂^π_U.), we show the following variational equality. (Donsker-Varadhan variational formula for the relative entropy). Denote the cylindrical measure under p̂^π,p̂^π_U as ℚ_x,ℙ_x, and denote the Markov process as { X_t^ℚ} ,{ X_t^ℙ}. Then, the KL divergence, 𝖪𝖫(p̂^π|p̂^π_U.), is 𝖪𝖫(p̂^π|p̂^π_U.)=sup_g∈ℬ_b(ℝ^d){𝔼_x^ℚ[g(X_1^ℚ)]-log𝔼_x^ℙ[exp(g(X_1^ℙ))]} .Here, ℬ_b(ℝ^d) is a set of bounded measurable function on ℝ^d. Further, the above can be written as 𝖪𝖫(p̂^π|p̂^π_U.)=sup_g∈ C_b(ℝ^d){𝔼_x^ℚ[g(X_1^ℚ)]-log𝔼_x^ℙ[exp(g(X_1^ℙ))]} .Here, C_b(ℝ^d) is a set of bounded measurable function on ℝ^d.The necessary and sufficient condition for 𝖪𝖫(p̂^π|p̂^π_U.) to be bounded is p̂^π≪p̂^π_U and flog f∈ L^1(ℝ^d;p̂^π_U) when p̂^π/p̂^π_U=f. 
When these conditions are satisfied, we have, 𝖪𝖫(p̂^π|p̂^π_U.)=∫_ℝ^dlogp̂^π/p̂^π_Up̂^πdy=∫_ℝ^dflog fp̂^π_Udy.This theorem is a transformation of the variational formula for the cross entropy in <cit.>. For the proof, see <cit.> Lemma1.4.3. C2. Let g_ε=g+ε. Then, sup_g∈𝒟(𝒜),ε>0φ(h,g,ε) is sup_g∈𝒟(𝒜),ε>0∫_Xlogg_ε(x)/(T_hg_ε)(x)M^π(dx) =sup_g∈𝒟(𝒜),ε>0{ 𝔼^M^π[logg_ε]-𝔼^M^π[logT_hg_ε]}=sup_g∈𝒟(𝒜),ε>0{ 𝔼^M^π[logg_ε]-𝔼^M^π[log𝔼^p_h[loge^g_ε]]}=sup_∈𝒟_+(𝒜){ 𝔼^M^π[]-𝔼^M^π[log𝔼^p_h[e^]]} .From Theorem <ref>, we have∫_ℝ^d𝖪𝖫(p̂^π|p_h.)M^π(x;c)dx= ∫_ℝ^dsup_g∈ℬ_b(ℝ^d){ 𝔼^p̂^π[g]-log𝔼^p_h[e^g]} M^π(x;c)dx.Because p̂^π is an x conditional probability of M^π,𝔼^M^π[] =𝔼^M^π[𝔼^p̂^π[]] ≦𝔼^M^π[sup_∈ℬ_b(ℝ^d){ 𝔼^p̂^π[]-log𝔼^p_h[e^]} ]+𝔼^M^π[log𝔼^p_h[e^]] =𝔼^M^π[𝖪𝖫(p̂^π|p_h.)]+𝔼^M^π[log𝔼_x^p_h[e^]]therefore, sup_∈𝒟_+(𝒜){𝔼^M^π[]-𝔼^M^π[log𝔼^p_h[e^]]}≦𝔼^M^π[𝖪𝖫(p̂^π|p_h.)]. Next, we show the following inequality. From Jensen's inequality,{𝔼^M^π[]-𝔼^M^π[log𝔼^p_h[e^]]}≧{𝔼^M^π[]-log𝔼^M^π𝔼^p_h[e^]} .If we set p_M(y)=∫ p(h,x,y)M^π(x;c)dx, we can rewrite the above as {𝔼^M^π[]-𝔼^M^π[log𝔼^p_h[e^]]}≧sup_∈ℬ_b(ℝ^d){𝔼^M^π[]-log𝔼^p_M[e^]} =𝖪𝖫(M^π|p_M.).Then, from the following Lemma <ref>, we have 𝖪𝖫(M^π|p_M.)=𝔼^M^π[𝖪𝖫(M^π(·|x.)|p_M(·|x.).)].Here, M^π(·|x.),p_M(·|x.) is the transition probability of a stationary Markov process with the staring point, x, and the stationary distributions are each M^π(·),p_M(·).Further, since 𝔼^M^π[𝖪𝖫(M^π(·|x.)|p_M(·|x.).)]=𝔼^M^π[𝖪𝖫(p̂^π|p_h.)],Therefore, with Eq. (<ref>), we show the following inequality sup_∈ℬ_b(ℝ^d){𝔼^M^π[]-𝔼^M^π[log𝔼^p_h[e^]]}≧𝔼^M^π[𝖪𝖫(p̂^π|p_h.)].Since C_b(ℝ^n)⊂𝒟_+(𝒜)⊂ℬ_b(ℝ^d), from Theorem <ref>, sup is the same under 𝒟_+(𝒜) or ℬ_b(ℝ^d).𝖪𝖫(M^π|p_M.) =𝔼^M^π[𝖪𝖫(M^π(·|x.)|p_M(·|x.).)]. By definition, p̂^π is the x conditional probability of M^π. Therefore, M^π(y)=∫p̂^π(y|x.)M^π(x)dxand, similarly for p_M, the x conditional distribution isp_M(y)=∫p̂_h(y|x.)M^π(x)dx. (ℝ^d,ℬ(ℝ^d)) is a Polish space. Let ℬ_1⊂ℬ(ℝ^d) be the sub σ-algebra of ℬ(ℝ^d). The domain of definition of g in <ref> was the measurable space, (ℝ^d,ℬ(ℝ^d)), but even if we restrict the measurable space to (ℝ^d,ℬ_1), we can define 𝖪𝖫. This restricted 𝖪𝖫 is denoted as 𝖪𝖫_ℬ_1. Then, we will show that 𝖪𝖫(M^π|p_M.) =𝖪𝖫_ℬ_1(M^π|p_M.)+𝔼^M^π[𝖪𝖫(M^π(·|ℬ_1.)|p_M(·|ℬ_1.).)]holds.When 𝖪𝖫_ℬ_1(M^π|p_M.)=∞, we have, 𝖪𝖫≧𝖪𝖫_ℬ_1, therefore 𝖪𝖫(M^π|p_M.)=∞.When 𝖪𝖫_ℬ_1(M^π|p_M.)<∞, 𝖪𝖫(M^π|p_M.)<∞ does not necessarily hold, though if we show 𝖪𝖫(M^π|p_M.)≦𝔼^M^π[𝖪𝖫(p̂^π|p_h.)]+𝖪𝖫_ℬ_1(M^π|p_M.)when 𝖪𝖫(M^π|p_M.)<∞ (given 𝖪𝖫≧𝖪𝖫_ℬ_1 it follows automatically that 𝖪𝖫_ℬ_1(M^π|p_M.)<∞). Then, if we show 𝖪𝖫(M^π|p_M.)=𝔼^M^π[𝖪𝖫(p̂^π|p_h.)]+𝖪𝖫_ℬ_1(M^π|p_M.)the proof is complete.Since we have, 𝔼^M^π[] =𝔼^M^π[𝔼^p̂^π[]] ≦𝔼^M^π[sup_∈ℬ_b(ℝ^d){ 𝔼^p̂^π[]-log𝔼^p_h[e^]} ]+𝔼^M^π[log𝔼^p_h[e^]] =𝔼^M^π[𝖪𝖫(p̂^π|p_h.)]+𝔼^M^π[log𝔼^p_h[e^]].If we set, ψ=log𝔼^p_h[e^], we have ψ∈ℬ({ x}). Then, 𝔼^M^π[𝖪𝖫(p̂^π|p_h.)]+𝔼^M^π[ψ]≦𝔼^M^π[𝖪𝖫(p̂^π|p_h.)]+𝖪𝖫_ℬ_1(p̂^π|p_h.)+log𝔼^M^π[𝔼^p_h[e^]]= 𝔼^M^π[𝖪𝖫(p̂^π|p_h.)]+𝖪𝖫_ℬ_1(p̂^π|p_h.)+log𝔼^p_M[e^].From this, we have, 𝔼^M^π[]-log𝔼^p_M[e^]≦𝔼^M^π[𝖪𝖫(p̂^π|p_h.)]+𝖪𝖫_ℬ_1(p̂^π|p_h.),therefore, sup_{𝔼^M^π[]-log𝔼^p_M[e^]} =𝖪𝖫(M^π|p_M.)≦𝔼^M^π[𝖪𝖫(p̂^π|p_h.)]+𝖪𝖫_ℬ_1(p̂^π|p_h.). Now we show the other direction of the inequality. If 𝖪𝖫(M^π|p_M.)<∞, then M^π≪ p_M, therefore we set g=dM^π/dp_M. Similarly, regarding the x conditional probability is p̂^π≪ p_h, therefore g(ω)=.dM^π/dp_M|_ℬ_1(ω)dp̂^π(·|ℬ_1.)/dp_h(·|ℬ_1.)(ω)holds. Here, .dM^π/dp_M|_ℬ_1 is the likelihood ratio, dM^π/dp_M, where the domain of definition is restricted to ℬ_1. 
Then, 𝖪𝖫(M^π|p_M.)=𝔼^M^π[logg] =𝔼^M^π[log.dM^π/dp_M|_ℬ_1]+𝔼^M^π[logdp̂^π(·|ℬ_1.)/dp_h(·|ℬ_1.)] =𝔼^M^π[log.dM^π/dp_M|_ℬ_1]+𝔼^M^π[𝔼^p_h[logdp̂^π(·|ℬ_1.)/dp_h(·|ℬ_1.)]] =𝔼^M^π[log.dM^π/dp_M|_ℬ_1]+𝔼^M^π[𝖪𝖫(p̂^π(·|ℬ_1.)|p_h(·|ℬ_1.).)] =𝖪𝖫_ℬ_1(M^π|p_M.)+𝔼^M^π[𝖪𝖫(p̂^π(·|ℬ_1.)|p_h(·|ℬ_1.).)].Therefor, we have shown (<ref>).For (<ref>), if we set ℬ_1={ x}, we have our final result.§ PROOF OF THEOREM <REF><cit.> Let μ be a probability measure on ℝ^d, we assume μ≪𝗆, h>0, u_ε=u+ε,(ε>0), and u∈ℬ_b^+(ℝ^d). Then, inf_v∈ D_0,ε>0∫logp_hv_ε/v_εdμ=inf_v∈ D_1,ε>0∫logp_hv_ε/v_εdμ=inf_u∈ℬ_b^+(ℝ^d),ε>0∫logp_hu_ε/u_εdμ.Where the function spaces are defined as follows: D ={ u∈ℬ_b^+(ℝ^d)|∫_ℝ^dud𝗆<∞.} D_0 ={ v|v=1/t∫_0^tp_suds, for some u∈D, some t>0.} D_1 ={ p_hv|v∈D_0,h≧0.} . <cit.>When μ≪𝗆, lim_h↓01/h{ -inf_u∈ℬ_b^+(ℝ^d),ε>0∫logp_hu_ε/u_εdμ} =I(μ). If v∈ D_1, then there exists c>0, such that1/h|p_hv-v|≦ c,for ^∀h>0. From this if we set v_ε=v+ε, we have log p_hv_ε=log v_ε+(p_hv_ε-v_ε)·1/v_ε+O(h^2).Here, O(h^2) is only dependent on ε. When h→0, we have 1/h(p_hv-v)→𝒜v in terms of L^2(ℝ^d;𝗆), therefore it is bounded in terms of measure, μ-a.s.. For ^∀v∈ D_1, we have lim sup_h→01/hinf_v∈ D_1,ε>0∫logp_hv_ε/v_εdμ≦∫𝒜v/v_εdμ,thus, from Lemma <ref>, we have lim inf_h→01/h{ -inf_u∈ℬ_b^+(ℝ^d),ε>0∫logp_hu_ε/u_εdμ}≧ I(μ). Next, we show the reverse inequality. For v∈ D_0, we set φ(h)=∫logp_hv_ε/v_εdμ.Thus, v_ε=v+ε. When v∈𝒟(𝒜), we have dφ/dh=∫𝒜p_hv/p_hv_εdμ≧inf_v∈ D_1,ε>0∫𝒜v/v_εdμ=-I(μ).Regarding h, if we integrate from 0 to h, if we use φ(0)=0, we have φ(h)=∫_0^hdφ/dhdh≧=-∫_0^hI(μ)dh=-hI(μ),for ^∀v∈ D_1,ε>0,h>0. From <ref> and Lemma <ref>, Q.E.D. § PROOF OF THEOREM <REF> Define I_ℰ(μ) as I_ℰ(μ)=ℰ(√(f),√(f)),μ=f·𝗆, √(f)∈ℱ ∞,the Donsker-Varadhan I-function as I(μ)=-inf_u∈𝒟_+(𝒜),ε>0∫_ℝ^d𝒜u/u+εdμ,the function I_α,α>0 as I_α(μ)=-inf_u∈ bℬ_+(ℝ^d),ε>0∫_ℝ^dlog(α R_αu+ε/u+ε)dμ. Let μ∈𝒫, thenI_α(μ) ≦I(μ)/α.-inf_u∈bℬ_+(ℝ^d),ε>0∫_ℝ^dlog(αR_αu+ε/u+ε)dμ≦-1/αinf_u∈𝒟_+(𝒜),ε>0∫_ℝ^d𝒜u/u+εdμFor u=R_αf∈𝒟_+(𝒜) and ε>0, let ϕ(α)=-∫_ℝ^dlog(α R_αu+ε/u+ε)dμ.From the resolvent equation, we have lim_β→αR_αu-R_βu/α-β=-lim_β→αR_αR_βu=-R_α^2u,and dϕ(a)/dα=-∫_ℝ^dR_αu-α R_α^2u/α R_αu+εdμ=∫_ℝ^d𝒜R_α^2u/α R_αu+εdμ.Here, if we note that (αR_α^2u-R_αu)(α^2R_α^2u+ε)-(αR_α^2u-R_αu)(αR_α^2u+ε)=α(αR_α^2u-R_αu)^2 ≧0the following inequality, α R_α^2u-R_αu/α R_αu+ε≧α R_α^2u-R_αu/α R_α^2u+εholds. Therefore, we derive ∫_ℝ^d𝒜R_α^2u/αR_αu+εdμ≧∫_ℝ^d𝒜R_α^2u/α^2R_α^2u+εdμ=-1/α^2(-∫_ℝ^d𝒜R_α^2u/R_α^2u+ε/α^2dμ) ≧-1/α^2inf_u∈𝒟_+(𝒜),ε>0∫_ℝ^d𝒜u/u+εdμ_I(μ).From this, given lim_α→∞α R_αu(x)=u(x), noting that lim_α→∞ϕ(α)=0, we have -ϕ(α)=∫_α^∞ϕ^'(β)dβ≧-∫_α^∞1/β^2I(μ)dβ=-1/αI(μ)and ϕ(∞)-ϕ(α)=∫_ℝ^dlog(α R_αu+ε/u+ε)dμ≧1/αinf_u∈𝒟_+(𝒜),ε>0∫_ℝ^d𝒜u/u+εdμ,to show -inf_u∈𝒟_+(𝒜),ε>0∫_ℝ^dlog(α R_αu+ε/u+ε)dμ≦-1/αinf_u∈𝒟_+(𝒜),ε>0∫_ℝ^d𝒜u/u+εdμ_I(μ)/α. Next, we show inf_u∈𝒟_+(𝒜)∫_ℝ^dlog(α R_αu+ε/u+ε)dμ=inf_u∈ bℬ_+(ℝ^d)∫_ℝ^dlog(α R_αu+ε/u+ε)dμ.For g∈ bC_+(ℝ^d), we have ‖β R_βf‖ _∞≦‖ f‖ _∞,0≦β R_βf(x)→ f(x),(β→∞)thus ∫_ℝ^dlog(α R_α(β R_βf)+ε/β R_βf+ε)dμβ→∞→∫_ℝ^dlog(α R_αf+ε/f+ε)dμholds. We define the measure, μ_α, as μ_α(A)=∫_ℝ^dα R_α(x,A)dμ(x), A∈ℬ(ℝ^d).For v∈ bℬ_+(ℝ^d), we consider a sequence of functions, { g_n} _n=1^∞⊂ bC_+(ℝ^d)∩ L^2(ℝ^d;𝗆), that satisfies, ∫_ℝ^d|v-g_n|d(μ_α+μ)→0, n→∞.Then, when n→∞, we have ∫_ℝ^d|α R_αv-α R_αg_n|dμ≦∫_ℝ^dα R_α(|v-g_n|)dμ=∫_ℝ^d|v-g_n|dμ_α→0,thus the following holds: ∫_ℝ^dlog(α R_αg_n+ε/g_n+ε)dμn→∞→∫_ℝ^dlog(α R_αv+ε/v+ε)dμ.From Eq. (<ref>) and Eq. (<ref>), we have inf_u∈𝒟_+(𝒜)∫_ℝ^dlog(α R_αu+ε/u+ε)dμ=inf_u∈ bℬ_+(ℝ^d)∫_ℝ^dlog(α R_αu+ε/u+ε)dμ. If I(μ)<∞, then μ≪ m. 
If we take a non-negative function that is monotonically increasing, f_n∈ C_0(ℝ^d), that point-wise converges to f∈ bC_+(ℝ^d), we have, ∫_ℝ^df-α R_αf/R_αf+εdμ=lim_n→∞∫_ℝ^df_n-α R_αf_n/R_αf_n+εdμ, α>0.Therefore, defining the function, 𝒟_+(𝒜̂), as 𝒟_+(𝒜̂)={ R_αf|α>0, f∈ bC_+(ℝ^d), f≡0.} ,if we set 𝒜̂ϕ=α R_αf-f with regard to ϕ=R_αf∈𝒟_+(𝒜̂), we have the following result.For f∈ℱ,ℰ(f,f)=sup_u∈𝒟_+(𝒜̂),ε>0∫_ℝ^d-𝒜̂u/u+εf^2d𝗆.Proof of Theorem <ref>.We first show I(μ)≧ I_ℰ(μ). Assume I(μ)<∞. Since, μ≪ m, we have, f=dμ/d𝗆 and letf^n=√(f)∧ n. From log(1-x)≦-x and -∞<f^n-α R_αf^n/f^n+ε<1, (-∞<x<1), we have ∫_ℝ^dlog(α R_αf^n+ε/f^n+ε)fd𝗆=∫_ℝ^dlog(1-f^n-α R_αf^n/f^n+ε)fd𝗆≦-∫_ℝ^df^n-α R_αf^n/f^n+εfd𝗆.Thus, ∫_ℝ^df^n-αR_αf^n/f^n+εfd𝗆 ≦-∫_ℝ^dlog(αR_αf^n+ε/f^n+ε)fd𝗆 ≦-inf_u∈bℬ_+(ℝ^d),ε>0∫_ℝ^dlog(αR_αu+ε/u+ε)dμ_I_α(f·𝗆). For the euqlity, f^n-αR_αf^n/f^n+εf =f^n-αR_αf^n/f^n+εf1_{ √(f)≦n} +f^n-αR_αf^n/f^n+εf1_{ √(f)>n}=√(f)-αR_αf^n/√(f)+εf1_{ √(f)≦n} _(a)+n-αR_αf^n/n+εf{ √(f)>n} _(b),the absolute value of (a) and (b) is evaluated from above by (√(f)+α R_αf^n)√(f)∈ L^1(ℝ^d;𝗆) and n/n+εf<f∈ L^1(ℝ^d;𝗆), respectively. Therefore, from the bounded convergence theorem, we have lim_n→∞∫f^n-α R_αf^n/f^n+εfd𝗆=∫_ℝ^d√(f)-α R_α√(f)/√(f)+εfd𝗆.When ε→0, from Eq. (<ref>), we have ∫_ℝ^d√(f)(√(f)-α R_α√(f))d𝗆≦ I_α(f·𝗆).Therefore, from Lemma (<ref>), we can shown α⟨√(f),√(f)-α R_α√(f)⟩ _𝗆≦ I(f·𝗆)<∞,and from Lemma <ref>, this implies √(f)∈ℱ and ℰ(√(f),√(f))≦ I(f·𝗆).Next, we show I(μ)≦ I_ℰ(μ). Let ϕ∈𝒟_+(𝒜) and define the semigroup, P_t^ϕ, as P_t^ϕf(x)=𝔼_x[(ϕ(X_t)+ε/ϕ(X_0)+ε)exp(-∫_0^t𝒜ϕ/ϕ+ε(X_s)ds)f(X_t)].Here, P_t^ϕ is (ϕ+ε)^2m-symmetric and satisfies P_t^ϕ1≦1. For the probability measure, μ=fm∈𝒫, such that √(f)∈ℱ, letS_t^ϕ√(f)(x)=𝔼_x[exp(-∫_0^t𝒜ϕ/ϕ+ε(X_s)ds)√(f)(X_t)].Then, the following holds: ∫_ℝ^d(S_t^ϕ√(f))^2d𝗆 =∫_ℝ^d(ϕ+ε)^2(P_t^ϕ(√(f)/ϕ+ε))^2d𝗆 ≦∫_ℝ^d(ϕ+ε)^2P_t^ϕ((√(f)/ϕ+ε)^2)d𝗆 ≦∫_ℝ^d(ϕ+ε)^2(√(f)/ϕ+ε)^2d𝗆 =∫_ℝ^dfd𝗆.Therefore, we have 0≦lim_t→01/t⟨√(f)-S_t^ϕ√(f),√(f)⟩ _𝗆=ℰ(√(f),√(f))+∫_ℝ^d𝒜ϕ/ϕ+εfd𝗆and ℰ(√(f),√(f))≧ I(f·𝗆) has been shown. § PROOF OF <REF> Define the transition semigroup, T_t, of the Markov process, { X_t}, as a convolution semigroup, T_tf(x)=∫_ℝ^df(x+y)ν_t(dy), t>0,x∈ℝ^d,f∈ℬ_+(ℝ^d).Here, the characteristic function for ν_t(·) is given as 𝔼[e^itzY]= exp[t{ -1/2⟨z,Az⟩+∫_ℝ^d(e^i⟨z,y⟩-1-i⟨z,y⟩/1+|y|^2)ν(dy)+i⟨x,z⟩} ].Additionally, X_1d=X. Define ψ(z) as ψ(z)=-1/tlog𝔼[e^itzY].For the Fourier transformation of the integrable function, f, on ℝ^d, f̂(z)=1/(2π)^d/2∫_ℝ^de^i⟨ z,y⟩f(y)dy, z∈ℝ^d,the Parseval formula, ⟨ f,g⟩ =⟨f̂,ĝ⟩ , f,g∈ L^2(ℝ^d;𝗆),holds. Here, because T_tf=ν̂_t·f̂, we have ℰ^(t)(f,f)= 1/t⟨f-T_tf,f⟩= 1/t∫_ℝ^d(f̂(z)-ν̂_t(z)f̂(z))f̅̂̅(z)dz= ∫_ℝ^d‖f̂(z)‖^21-exp(-tψ(z))/tdz.When t↓0, the last integral is monotonically increasing and converges to ∫_ℝ^d|f̂(z)|^2ψ(z)dz. Therefore, the Dirichlet form associated with the predictive distribution, p̂(y|x.), is ℱ=f∈ L^2(ℝ^d;𝗆):∫_ℝ^d‖f̂(z)‖ ^2ψ(z)dz<∞,ℰ(f,g)=∫_ℝ^df̂(z)ĝ(z)ψ(z)dz, f,g∈ℱ. | http://arxiv.org/abs/2310.17891v1 | {
"authors": [
"Kosaku Takanashi",
"Kenichiro McAlinn"
],
"categories": [
"math.ST",
"math.PR",
"stat.TH",
"62C15"
],
"primary_category": "math.ST",
"published": "20231027044940",
"title": "Inadmissibility and Transience"
} |
P3H-23-080 KA-TP-22-2023 [email protected]@pd.infn.it Dipartimento di Fisica e Astronomia “G. Galilei", Università di Padova, Italy, and Istituto Nazionale di Fisica Nucleare, Sezione di Padova, I-35131 Padova, Italy [email protected]@kit.edu Institute for Theoretical Physics, Karlsruhe Institute of Technology (KIT), D-76131 Karlsruhe, [email protected] Dipartimento di Fisica e Astronomia “G. Galilei", Università di Padova, Italy, and Istituto Nazionale di Fisica Nucleare, Sezione di Padova, I-35131 Padova, Italy Institute for Theoretical Particle Physics, Karlsruhe Institute of Technology (KIT), D-76131 Karlsruhe, Germany Institute for Astroparticle Physics, Karlsruhe Institute of Technology (KIT), D-76344 Eggenstein-Leopoldshafen, Germany We calculate the four-top quark operator contributions to Higgs production via gluon fusion in the Standard Model Effective Field Theory. The four-top operators enter for the first time via two-loop diagrams. Due to their chiral structure they contain γ_5, so special care needs to be taken when using dimensional regularisation for the loop integrals. We use two different schemes for the continuation of γ_5 to D space-time dimensions in our calculations and present a mapping for the parameters in the two schemes. This generically leads to an interplay of different operators, such as four-top operators, chromomagnetic operators or Yukawa-type operators at the loop level. We validate our results by examples of matching onto UV models. On γ_5 schemes and the interplay of SMEFT operators in the Higgs-gluon coupling Marco Vitti January 14, 2024 ================================================================================ § INTRODUCTION With the increasing precision in the measurement of the Higgs boson couplings, the Higgs sector has become a probe of physics beyond the Standard Model (SM). In the absence of a clear signal of new physics, potential deviations from the SM can be described as model-independently as possible by means of an effective field theory (EFT). Under the assumption that the Higgs field transforms as an SU(2)_L doublet as in the SM, heavy new physics can be described by the SM effective field theory (SMEFT) <cit.>. In this theory, new physics effects are described by higher-dimensional operators suppressed by some large mass scale Λ. In this paper we consider a subset of the possible dimension-six operators, namely the four-top quark operators, and comment on their connection to other SMEFT operators. Four-top operators are generically difficult to probe experimentally, as direct probes require the production of four top quarks. Limited by the large phase space required, four-top quark production remains a rather rare process, with a SM cross section of only about 12 fb including next-to-leading (NLO) QCD and NLO electroweak (EW) corrections for √(s)=13 TeV <cit.>. Current limits on four-top operators are hence typically rather weak, in particular mostly stemming from 𝒪(1/Λ^4) contributions in the matrix element squared <cit.>. For this very reason, potentially better bounds on the four-top operators can be obtained indirectly, hence by considering loop effects on other observables. Furthermore, Ref. <cit.> showed that in the presence of four-top operators possible limits on the trilinear Higgs self-coupling derived from electroweak corrections to single Higgs production <cit.> can become more restrictive.
First efforts to constrain the trilinear Higgs self-coupling via single Higgs production have already been started by the experimental collaborations <cit.>. We are going to reconsider the gg→ h computation from Ref. <cit.>, which included effects from four-top operators within the SMEFT, using two different schemes for the continuation of γ_5 to D=4-2ϵ space-time dimensions.While the leading poles of loop integrals are scheme-independent, cancellations of these poles with scheme-dependent ϵ terms, resulting from the Dirac algebra in dimensional regularisation, will lead to scheme-dependent finite parts. It should be stressed that, in this context, the finite terms can be of the same order as the logarithmically enhanced ones (as shown in <cit.>), thus they are phenomenologically relevant.Since four-top operators contribute to gg→ h via two-loop diagrams, the finite terms are expected to be scheme-dependent. Moreover, we find a divergence which depends on the scheme, signaling a scheme-dependent anomalous dimension. We describe in detail how such divergence can be traced back to a finite term (that is expected to be scheme-dependent) in one of the one-loop subamplitudes entering the computation. We also review the results in naive dimensional regularisation<cit.> with respect to the ones obtained in Ref. <cit.> and we discuss various subtleties that arise in the comparison with the Breitenlohner-Maison-'t Hooft-Veltman scheme <cit.> for the treatment of γ_5. Furthermore, we point out that building the SMEFT expansion on the counting of the canonical dimension alone can lead to inconsistencies, as has been explained in Ref. <cit.>. In a counting scheme that in addition takes into acccount whether an operator is potentially loop-generated, the four-top operators and the chromomagnetic operator enter the Higgs-gluon coupling at the same order <cit.> and therefore should not be considered in isolation. Our paper is structured as follows: in Sec. <ref> we introduce the operators considered in our analysis and we fix our notation. In Sec. <ref> we discuss different schemes for the D-dimensional continuation of γ_5.Section <ref> is devoted to the computation of one-loop subamplitudes required to obtain the result for the gg→ h amplitude including the operators given in Sec. <ref>. The two different schemes are then used for the computation of the gg→ h rate presented in Sec. <ref>. We also discuss how the scheme-dependence of the parameters of the theory compensates for the scheme-dependence of the matrix elements, providing a scheme-independent physical result. In Sec. <ref> we validate our approach by means of a matching with two simple models. In Sec. <ref> we briefly show that a non trivial interplay exists not only in the case of four-top operators, as detailed in this work, but also when other operators containing chiral vertices are involved. In App. <ref> we show the result we obtain for Γ( h →b̅b ) as a side-product of our analysis, commenting also in this case about the scheme-independence of the result. In App. <ref> we discuss the relation between the counterterms and the anomalous dimension matrix, highlighting some subleties that arise when dimensional regularisation is used. In App. <ref> we report the scheme-independent part of the gg→ h amplitude and in App. <ref> we give the Feynman rules needed for our computation. §SETUPIf the new physics scale Λ is assumed to be much larger than the electroweak scale, new physics can be described in terms of an EFT. 
In this paper we use the SMEFT, where all SM fields transform under the SM symmetries, including the scalar field ϕ which contains the Higgs boson.At dimension-five level there is only the lepton-number violating “Weinberg” operator responsible for Majorana mass generation of neutrinos <cit.>, so the dominant new physics effects relevant in collider physics are described by dimension-six operators:ℒ_𝒟=6=ℒ_SM+1/Λ^2∑_ii𝒪_i ,where 𝒪_i denotes every possible non-redundant combination of SM fields with mass dimension six that preserves the symmetries of the SM.A complete basis of dimension-six operators was presented for the first time in Ref. <cit.>, the so-called Warsaw basis, that we will adopt in the following. In the Warsaw basis redundant operators are eliminated making use of field redefinitions, integration-by-part identities and Fierz identities.We are mostly interested in the effect of the four-top operators on Higgs production via gluon fusion (as well as the Higgs decay to gluons). The operators that lead to four-top interactions are given byℒ_4t = QQ(1)/Λ^2(Q̅_L γ_μ Q_L)(Q̅_L γ^μ Q_L ) +QQ(3)/Λ^2(Q̅_L τ^I γ_μ Q_L ) (Q̅_L τ^I γ^μ Q_L )+ Qt(1)/Λ^2(Q̅_L γ_μ Q_L ) (t̅_R γ^μ t_R ) +Qt(8)/Λ^2(Q̅_L T^Aγ_μ Q_L ) (t̅_R T^A γ^μ t_R ) + tt/Λ^2(t̅_Rγ_μ t_R ) (t̅_R γ^μ t_R ) .The field Q_L stands here for the SU(2)_L doublet of the third quark generation, t_R for the right-handed top quark field. The SU(3)_c generators are denoted as T^A while τ^I are the Pauli matrices. We assume all the Wilson coefficients to be real, since we are not interested in CP-violating effects.The operators in Eq. (<ref>) contribute to the gg → h amplitude via two-loop diagrams. At one-loop and tree-level, respectively, the following operators contribute to the (CP-even) Higgs-gluon couplingℒ_2t=[tϕ/Λ^2 (Q̅_L ϕ̃ t_R )ϕ^†ϕ+tG/Λ^2Q̅_L σ^μν T^A t_R ϕ̃G_μν^A+ ],ℒ_s=ϕ G/Λ^2ϕ^†ϕ G_μνG^μν ,where G_μν is the gluon field strength tensor, ϕ̃=iτ^2 ϕ^* and σ_μν=i/2[γ_μ,γ_ν]. To summarise our EFT setup, our Lagrangian reads: ℒ_𝒟=6=ℒ_SM+ℒ_4t+ℒ_2t+ℒ_s.We follow Ref. <cit.> for what concerns the conventions in ℒ_SM,ℒ_SM = -1/4 G_μν^A G^Aμν -1/4 W_μν^I W^Iμν -1/4 B_μνB^μν+ ∑_ψψ̅ i Dψ + (D_μϕ)^† (D^μϕ) - λ(ϕ^†ϕ - 1/2v^2 )^2 -uϕ̃^†u̅_RQ_L +.When spontaneous symmetry breaking occurs (ϕ=(1/√(2))( 0 ,(v+h))^T in the unitary gauge) one has:ℒ_𝒟=6⊃ - m_t t̅ t - g_h t̅t h t̅ t,where the top mass and the ht̅t coupling are modified according to m_t= v/√(2)( t -v^2/2t ϕ/Λ^2), g_h t̅t = 1/√(2)( t -3 v^2/2t ϕ/Λ^2)=m_t/v - v^2/√(2)t ϕ/Λ^2.This establishes a connection between m_t,g_h t̅t (broken phase) and t,t ϕ/Λ^2 (unbroken phase). §CONTINUATION SCHEMES FOR Γ_5 TO D DIMENSIONSDue to the presence of four-fermion operators with different chiralities, γ_5 matrices will be present in our loop computations. As well known, the treatment of γ_5 in dimensional regularisation is highly non-trivial, as γ_5 is an intrinsically four-dimensional object <cit.>. In this paper, we will consider two different schemes for the γ_5 matrix in dimensional regularisation with D=4-2ϵ: naive dimensional regularisation (NDR) <cit.> and the Breitenlohner-Maison-t'Hooft-Veltman scheme (BMHV) <cit.>. §.§ Naive Dimensional RegularisationThe NDR scheme assumes that the usual anti-commutation relations valid in four dimensions hold also in D dimensions{γ_μ, γ_ν}=2 g_μν , {γ_μ, γ_5 }=0 , γ_5^2=1 .This is inconsistent with the cyclicity of the trace. 
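In strictly four dimensions these defining relations, together with the trace identity quoted below, can of course be verified with an explicit representation of the Dirac matrices. The short numpy sketch below does this in the Dirac representation; the representation and the sign convention ε_{0123}=+1 are choices of this illustration, and the check says nothing about the D-dimensional continuation, where the subtleties discussed in the following arise.

```python
import numpy as np

I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
sig = [np.array(m, dtype=complex)
       for m in ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]

# gamma^0 .. gamma^3 in the Dirac representation, plus gamma_5 and the metric
g = [np.block([[I2, Z2], [Z2, -I2]])] + [np.block([[Z2, s], [-s, Z2]]) for s in sig]
g5 = 1j * g[0] @ g[1] @ g[2] @ g[3]
metric = np.diag([1.0, -1.0, -1.0, -1.0])

for mu in range(4):
    for nu in range(4):
        acom = g[mu] @ g[nu] + g[nu] @ g[mu]
        assert np.allclose(acom, 2.0 * metric[mu, nu] * np.eye(4))   # {g_mu, g_nu} = 2 g_{mu nu}
    assert np.allclose(g[mu] @ g5 + g5 @ g[mu], 0.0)                 # {g_mu, g_5} = 0
assert np.allclose(g5 @ g5, np.eye(4))                               # g_5^2 = 1

# trace identity, with the convention eps_{0123} = +1 assumed in this sketch
print(np.trace(g[0] @ g[1] @ g[2] @ g[3] @ g5))                      # -> -4j, i.e. -4i
```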
Assuming that the usual four-dimensional relationTr[γ_μγ_νγ_ργ_σγ_5]=-4iϵ_μνρσholds, leads to Tr[γ_μ_1γ_μ_2..γ_μ_2nγ_5]=Tr[γ_μ_2..γ_μ_2nγ_5γ_μ_1]+𝒪(ϵ),for n≥ 3.The cyclicity is hence no longer preserved and the computation of a Feynman diagram depends on the starting point of reading in a fermion trace. As was shown in Refs. <cit.>, the NDR scheme in presence of Dirac traces with an odd number of γ_5 matrices and at least six γ-matrices only leads to consistent results if the reading point is fixed univocally for all Feynman diagrams.[It was shown recently in Ref. <cit.> that in a computation of the singlet axial-current operator at 𝒪(α_s^3) between two gluons and the vacuum a revised version of the scheme of Refs. <cit.> becomes necessary.]§.§ Breitenlohner-Maison-'t Hooft-Veltman SchemeThe BMHV scheme divides the algebra in a four-dimensional part and a (D-4)-dimensional one by definingγ_μ^(D) =γ_μ^(4)+γ_μ^(D-4), {γ_μ^(4), γ_5 } =0,[γ_μ^(D-4), γ_5 ]=0. For the vertices involving chiral projectors we use the following rule, valid in the BMHV scheme:γ_μ^(4)(1 ∓γ^5) →1/2 (1 ±γ^5) γ_μ^(D)(1 ∓γ^5),which is the most symmetric choice and preserves chirality of the external fields in D dimensions (see e.g. Refs. <cit.>). § SCHEME-DEPENDENT FINITE MIXING AT ONE-LOOP ORDER In this section we comment on the interplay between the four-top operators and other operators entering Eq. (<ref>). This interplay will be important in the discussion of single Higgs production in the next section. In particular, we want to highlight two points. The first one is that there is a finite mixing between the four-top and other operators, coming already from one-loop diagrams, as shown below. This fact implies that it would be inconsistent to study the contribution coming from four-top operators in isolation. The second point is that the above mixing, being finite, depends on the γ_5 scheme employed. When combining the one-loop subamplitudes in two-loop diagrams, in principle this could lead to divergent terms that are scheme-dependent. However, provided that both schemes are used consistently, the physical result for the complete two-loop amplitude is expected to be scheme-independent.Direct evaluation of the contribution of the four-top operators to the g →t̅t amplitude gives a contribution proportional to an insertion of the chromomagnetic operator.Pictorially, this can be represented as follows[baseline=(4F)] [small] (g1)g; (gtt1) [dot, scale=, right= 30 pt of g1] ; (4F) [square dot,scale=,right = 20 pt of gtt1, color = red] ; (t1) [above right= 25 pt of 4F] t;(t2) [below right= 25 pt of 4F] t;*(g1)– [gluon] (gtt1), (gtt1) – [anti fermion, half right] (4F)– [anti fermion,half right] (gtt1), (t2) – [fermion](4F) – [fermion] (t1) ;= Qt(1)-1/6Qt(8)/tG K_tG×[baseline=(4F)] [small] (g1)g; (gtt1) [square dot, scale=, right= 30 pt of g1, color=blue] ; (t1) [above right= 25 pt of gtt1] t;(t2) [below right= 25 pt of gtt1] t;*(g1)– [gluon] (gtt1), (t2) – [fermion] (gtt1) – [fermion] (t1),; ,where the red and blue square dots denote an insertion of four-top and chromomagnetic operators, respectively. The value of K_tG in Eq. (<ref>) depends on the γ_5 scheme. We findK_tG = √(2) m_t g_s/16 π^2v (NDR)0(BMHV).We note that Eq. (<ref>) holds only when the gluon is on shell. In this case, only one of the two possible contractions of the fermion lines, namely the one in Fig. <ref>, gives a non-vanishing contribution. We stress that the difference between the two schemes in Eq. 
(<ref>) does not arise from a trace in Dirac space and therefore cannot be related to trace ambiguities <cit.>. When we consider other one-loop amplitudes with four-top operator insertions, which will enter as subamplitudes in the gg → h computation, we find again that the finite contributions are scheme-dependent, whereas the divergent parts are equal in the two schemes. In particular, the diagrammatic relation concerning the four-top contribution to the Higgs-top coupling is[baseline=(4F)] [small] (g1)h; (gtt1) [dot, scale=, right= 25 pt of g1] ; (4F) [square dot,scale=,right = 20 pt of gtt1, color = red] ; (t1) [above right= 25 pt of 4F] t;(t2) [below right= 25 pt of 4F] t;*(g1)– [scalar] (gtt1), (gtt1) – [anti fermion, half right] (4F)– [anti fermion,half right] (gtt1), (t2) – [fermion](4F) – [fermion] (t1) ;|_FIN = 1/Λ^2 ( Qt(1) + 4/3Qt(8)) × (B_ht̅t+K_ht̅t) ×[baseline=(4F)] [small] (g1)h; (gtt1) [dot, scale=, right= 25 pt of g1, color=black] ; (t1) [above right= 25 pt of gtt1] t;(t2) [below right= 25 pt of gtt1] t;*(g1)– [scalar] (gtt1), (t2) – [fermion] (gtt1) – [fermion] (t1),; ,where we findK_ht̅t = (m_h^2-6m_t^2)/16 π^2 (NDR)0(BMHV),and where B_ht̅t is scheme-independent and can be expressed asB_ht̅t = m_t^2/4 π ^2 τ×(-2 β ^3 log(β -1/β +1)+(3 τ -2) log(μ̃^2 /m_t^2)+5 τ -4 ),with β= √(1- τ) , τ =4 m_t^2/m_h^2and with μ̃^2 = 4 πμ^2 e^-γ_E. We note that B_h t̅t and the analogous B terms in this paper are scheme-independent once a convention to identify K_h t̅t is defined. For example, in this section we choose the B terms such that the K-terms vanish in BMHV. However, this definition is totally arbitrary and does not affect the final results. What is relevant for our purpose is the difference between K-terms in different schemes, which is insensitive to the convention chosen.Regarding the corrections to the top quark propagator we find that only the mass term gets corrected. Diagrammatically, we have [baseline=(t1)] [small] (t1) t; (4F) [square dot, scale=, right = 25 pt of t1,color=red] ;(t2) [right= 25 pt of 4F] t; (inv) [scale = 0.01,above = 20 pt of 4F] t; *(t1)– [fermion] (4F) – [fermion] (t2), (inv)– [fermion, half right] (4F) – [fermion,half right] (inv), ; |_FIN =1/Λ^2( Qt(1) + 4/3Qt(8)) × (B_m_t+K_m_t) ×[baseline=(t1)] [small] (t1) [] t; (4F) [dot, scale=0.01, right = 25 pt of t1,color=black] ;(t2) [right= 25 pt of 4F] t; [shape=star,star points=4,star point ratio = 15,fill=black, draw,scale = 0.05, rotate=45] at (4F) ; *(t1)– [fermion] (4F) – [fermion] (t2),;,K_m_t = - m_t^2/8π^2 (NDR)0(BMHV).Also in this case, B_m_t is scheme-independentB_m_t = m_t^2×log(μ̃^2 /m_t^2)+1/4 π ^2. The results in Eqs. (<ref>, <ref>, <ref>) deserve some discussions. Equation (<ref>) shows that the chromomagnetic and four-top operators are closely linked and contribute at the same order in the EFT expansion, even though the latter operators come with an explicit loop diagram. This can be understood from the fact that, under the assumption that the UV-complete theory is renormalisable and that the SM fields are weakly coupled to the unknown fields, there are operators which cannot be generated at tree-level.This means that their Wilson coefficients are expected to contain a loop suppression factor 1/(4π)^2 <cit.>.The power counting can be formalised conveniently via the chiral dimension d_χ, supplementing the canonical dimension counting in 1/Λ. 
As a result, the tree-level diagram associated with the (loop-generated) operator 𝒪_ϕ G enters the gg→ h amplitude at the same power as the (tree-generated) operator 𝒪_tϕ inserted into a SM-like loop diagram, which is 1/(4π)^2 1/Λ^2. Similarly, 𝒪_t G inserted into aone-loop diagram for gg→ h (see Fig. <ref>) and the two-loop diagram stemming from the insertion of the four-top operators into the gg→ h matrix element (Fig. <ref>) are of the same power, which is 1/(4π)^4 1/Λ^2. In the former case a loop-generated operator is inserted into a one-loop diagram, while in the latter case a tree-generated operator is contained in an explicit two-loop diagram.Therefore, in Eq. (<ref>), tG contains a loop suppression factor 1/(4π)^2 relative to tϕ, the same holds for ϕ G. Equation (<ref>) shows that g_ht̅t and the four-top operators are also linked, however this relation comes with a relative suppression factor 1/Λ^2 × 1/(4π)^2.§CALCULATION OF THE HIGGS-GLUON COUPLINGIn this section, we compute the four-top operator contribution at two-loop order to the Higgs-gluon coupling in the two different γ_5 schemes introduced in Sec. <ref>. In the previous section we have shown that this contribution cannot be separated from that of the operators of ℒ_2t in Eq. (<ref>). In the case of gg → h, we express the renormalised amplitude as followsℳ_EFT = 1/Λ^2{ 4tℳ_4t+ tGℳ_tG + ϕ Gℳ_ϕ G + t ϕℳ_t ϕ+ ℳ_C.T.},where the inclusion of 𝒪_ϕ G is required in order to cancel the divergent part coming from ℳ_tG. The total matrix element is given by ℳ_TOT = ℳ_SM+ℳ_EFT.The contribution from 𝒪_t ϕ manifests itself as a modification of g_h t̅t and m_t (see Eq. (<ref>)) entering ℳ_SM, so its effect is understood to be included in ℳ_SM. The four-top contribution to ℳ_EFT can be split according to the different topologies of the associated Feynman diagrams. In Fig. <ref> we show a sample of the 12 diagrams that need to be computed. The first topology is related to a correction to the Higgs-top-quark coupling (<ref>), the second one to a correction to the top quark propagator (<ref>) and the third one to a correction to the gluon-top vertex (<ref>). We generated the diagrams with qgraf-3.6.5 <cit.> and performed the algebra with FeynCalc <cit.>. Following the above classification, we express the four-top contribution as4tℳ_4t =𝒜_g_h t̅t+m_t( Qt(1) + 4/3Qt(8)) 1/Λ^2+ 𝒜_gt̅t( Qt(1) - 1/6Qt(8)) 1/Λ^2.The two different combinations of the Wilson coefficients in Eq. (<ref>) arise from the colour algebra. We find that the result of 𝒜_gt̅t can be expressed in terms of the contribution to the amplitude due to an insertion of the chromomagnetic operator𝒜_gt̅t =[ 1/2 K_tGℳ_tG|_DIV + K_tGℳ_tG|_FIN],where K_tG is the same as in Eq. (<ref>). The divergent and finite parts of ℳ_tG are given, respectively, by (A_1,A_2 being the colour indices of the gluons)ℳ_tG|_DIV = - g_s m_t 1/ϵ√(2)/ 2 π^2 L^μ_1 μ_2ϵ_μ_1(p_1) ϵ_μ_2 (p_2) δ^A_1A_2, ℳ_tG|_FIN = - g_s m_t √(2)/4 π^2 L^μ_1 μ_2ϵ_μ_1(p_1) ϵ_μ_2 (p_2) δ^A_1A_2×(1/4τlog^2 (β-1/β + 1) + βlog(β-1/β + 1) + 2 log(μ̃^2/m_t^2) + 1 ),withL^μ_1 μ_2 = (m_h^2/2 g^μ_1 μ_2- p_1^μ_2p_2^μ_1 ).We point out that the fact that K_tG factorises in Eq. (<ref>) does not depend on the scheme. The value of K_tG depends on the scheme, and in particular K_tG=0 in BMHV. Remarkably, this implies that the structure of the divergences is different between the two schemes. This happens because of the combination of a scheme-independent pole of a loop integral with the scheme-dependent finite terms in Eq. (<ref>). 
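For orientation, the scheme-independent functions B_ht̅t and B_m_t and the scheme-dependent constants K_tG, K_ht̅t and K_m_t introduced above can be evaluated numerically. The Python sketch below does so for illustrative input values of m_t, m_h, v, g_s and μ chosen by us; below the t̅t threshold β is purely imaginary, and taking the principal branch of the complex logarithm together with the real part of the result is an assumption of this sketch (numerically it corresponds to the continuation log[(β-1)/(β+1)] → 2i arcsin(1/√(τ))).

```python
from math import pi, exp
from cmath import sqrt, log

# Illustrative inputs (GeV where dimensionful); these numbers are assumptions of this sketch.
mt, mh, v, gs, mu = 172.5, 125.1, 246.2, 1.2, 172.5

gamma_E = 0.5772156649015329
L_mu = log(4.0 * pi * mu**2 * exp(-gamma_E) / mt**2).real   # log(mu~^2 / m_t^2)

tau  = 4.0 * mt**2 / mh**2
beta = sqrt(1.0 - tau)                    # purely imaginary below the t-tbar threshold
l_b  = log((beta - 1.0) / (beta + 1.0))   # principal branch (assumption of this sketch)

# scheme-independent functions; the prefactor grouping m_t^2/(4 pi^2 tau) is how we read the formula
B_htt = (mt**2 / (4.0 * pi**2 * tau)
         * (-2.0 * beta**3 * l_b + (3.0 * tau - 2.0) * L_mu + 5.0 * tau - 4.0)).real
B_mt  = mt**2 * (L_mu + 1.0) / (4.0 * pi**2)

# scheme-dependent constants in NDR (all of them vanish in BMHV)
K_tG  = 2.0**0.5 * mt * gs / (16.0 * pi**2 * v)
K_htt = (mh**2 - 6.0 * mt**2) / (16.0 * pi**2)
K_mt  = -mt**2 / (8.0 * pi**2)

for name, val in [("B_htt [GeV^2]", B_htt), ("B_mt  [GeV^2]", B_mt),
                  ("K_tG  (NDR)", K_tG), ("K_htt (NDR) [GeV^2]", K_htt),
                  ("K_mt  (NDR) [GeV^2]", K_mt)]:
    print(f"{name:>20s} = {val: .4e}")
```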
On the other hand, we find that thedivergent terms in 𝒜_g_h t̅t+m_t are scheme-independent. §.§ RenormalisationWe use the minimal subtraction (MS) renormalisation prescription for all the parameters in the theory. Schematically, the counterterms needed to renormalise the amplitude are given byℳ_C.T.= [baseline=(htt)](g1)g; (gtt1) [dot,scale=,right= 25 pt of g1] ;(htt) [dot,scale=,below right = 20 pt of gtt1] ; (gtt2) [dot,scale=,below left = 20 pt of htt] ; (g2) [left= 25 pt of gtt2]g;(h) [right = 20 pt of htt] h;[shape=star,star points=5,star point ratio = 2,fill=black, draw,scale = 0.4] at (htt) ; *(g1)– [gluon] (gtt1), (g2) – [gluon] (gtt2), (h)– [scalar] (htt),(gtt1) – [fermion] (htt) – [fermion] (gtt2) – [fermion] (gtt1) ; + [baseline=(htt)](g1)g; (gtt1) [dot,scale=,right= 25 pt of g1] ; (ct) [dot,scale=,below right = 10. pt of gtt1] ; (htt) [dot,scale=,below right = 20 pt of gtt1] ; (gtt2) [dot,scale=,below left = 20 pt of htt] ; (g2) [left= 25 pt of gtt2]g;(h) [right = 20 pt of htt] h;[shape=star,star points=5,star point ratio = 2,fill=black, draw,scale = 0.4] at (ct) ; *(g1)– [gluon] (gtt1), (g2) – [gluon] (gtt2), (h)– [scalar] (htt),(gtt1) – [fermion] (htt) – [fermion] (gtt2) – [fermion] (gtt1) ; + [baseline=(hgg)](g1)g;(hgg) [dot,scale=,below right = 27 pt of g1] ; (g2) [below left= 27 pt of hgg]g;(h) [right = 20 pt of hgg] h;[shape=star,star points=5,star point ratio = 2,fill=black, draw,scale = 0.4] at (hgg) ; *(g1)– [gluon] (hgg), (g2) – [gluon] (hgg), (h)– [scalar] (hgg), ; .For the top quark mass we havem_t^MS=m_t^(0)+δ m_t,withδ m_t =m_t^3/4 π^2 Λ^2ϵ( Qt(1)+ 4/3Qt(8)). We note that typically in the computation of gg→ h the top quark mass is renormalised in the on-shell scheme. In order to simplify our point (as we find the same MS counterterm in NDR and BMHV) we restrict the discussion here to a pure MS renormalisation.In addition, the Wilson coefficienttϕ, which mixes with the four-top operators via renormalisation group equation (RGE) running, needs to be renormalised. The coefficient of the operator is renormalised according totϕMS=tϕ(0) +δt ϕ with δt ϕ = -1/2 ϵ1/16 π^2γ_tϕ,jj ,where γ denotes the one-loop anomalous dimension of the SMEFT. The entries relevant for our discussion can be obtained from Refs. <cit.>. The equation correlating δt ϕ and the anomalous dimension matrix in Eq. (<ref>) is discussed in detail in App. <ref>. The only four-top Wilson coefficients contributing to γ_tϕ,jj are Qt(1,8). The operator𝒪_t ϕ modifies the Higgs couplings to top quarks as discussed previously, see Eq. (<ref>). In analogy to m_t, we have:g_h t̅ t^MS=g_h t̅ t^(0)+δ g_h t̅ t,withδ g_h t̅ t=g_h t̅t( 6 m_t ^2 - m_h^2 )/8 π^2 Λ^2ϵ( Qt(1)+4/3Qt(8)).From now on we will drop the superscript MS, leaving understood that all the parameters are renormalised in the MS scheme. We recall that the divergent parts of the diagrams in Figs. <ref> and <ref> are equal in the NDR and BMHV schemes, and they are fully removed by one-loop diagrams with an insertion of the one-loop counterterms in Eqs. (<ref>), (<ref>).The insertion of the chromomagnetic operator (see Fig. <ref>) gives a divergent contribution to the Higgs-gluon coupling at one loop <cit.>. We find this contribution to be scheme-independent. To remove all the divergences we need to choose (see Eq. (<ref>))δ_ϕ G =g_ht̅t g_s/Λ^2 ϵ4 √(2)π^2( tG + K_tG/2(Qt(1) - 1/6Qt(8)) ).This entails an important consequence: the anomalous dimension is scheme-dependent, as it contains the scheme-dependent K_tG. 
From d ϕ G(0)/ d μ=0, we obtain16 π^2 μd ϕ G/d μ = -4 √(2) g_h t̅t g_s ( tG + K_tG(Qt(1) - 1/6Qt(8)) ).Notice that there is a relative factor of 2between the contributions from Qt(1,8) in Eq. (<ref>) and Eq. (<ref>). This is a consequence of the contribution proportional to tG being g_h t̅t g_s and the contribution proportional to Qt(1,8) being g_h t̅t^2 g_s^2.[Using g_h t̅t = m_t/v + 1/Λ^2.] This (merely algebraic) fact will have important consequences, as we will show in the following. The details can be found in App. <ref>. We stress that the form of the RGE in Eq. (<ref>) shows that the contributions of tG, Qt(1,8) enter at different loop orders (being K_tG = 1/(4 π)^2). However, when the loop counting from Ref. <cit.> is considered, they enter at the same order, as explained in Sec. <ref>. The differences in NDR and BMHV originating from the finite mixing of the four-fermion operators with chiral structure (L̅L)(R̅R) into the chromomagnetic operator are well known, in particular in the context of flavour physics. This effect can induce a scheme-dependent anomalous dimension matrix at leading order <cit.>. Using the strategy proposed in <cit.>, we can perform a finite renormalisation of the chromomagnetic operator and write tG→tG + K_tG(Qt(1) - 1/6Qt(8)).This choice ensures a scheme-independent anomalous dimension matrix. §.§ Renormalised amplitudeIn the previous section we discussed how to obtain the same anomalous dimension matrix in both schemes. This is achieved via the inclusion of the effects of a scheme-dependent finite mixing in the Wilson coefficients. These effects are related to one-loop subdiagrams as in Eq. (<ref>). One may wonder if redefinitions similar to Eq. (<ref>) are enough to obtain the same result for the finite part of the amplitude in both schemes. In other words, we want to check if the scheme-dependence of the two-loop amplitude can be accounted for simply by computing one-loop subdiagrams. The only scheme-dependent terms in the amplitudes are the ones stemming from a two-loop insertion of the four-top operators and they are parametrised by K_tG, K_g_ht̅t and K_m_t.We express the renormalised contribution from the diagrams in Figs. <ref>, <ref> as 𝒜_g_h t̅t + m_t^Ren = ℳ_g_h t̅t + m_t^S.I. + K_g_h t̅tℳ^SM + K_m_t∂ℳ^SM/∂ m_t × m_t,where ℳ^SM, ℳ_g_h t̅t + m_t^S.I. are scheme-independent and they can be found in App. <ref>. Putting together Eqs. (<ref>),(<ref>) and (<ref>) we have the following expression for the renormalised matrix elementℳ_TOT^Ren = ( Qt(1) + 4/3Qt(8)) 1/Λ^2ℳ_g_h t̅t + m_t^S.I.+ [tG +( Qt(1) - 1/6Qt(8))K_tG]1/Λ^2ℳ_tG|_FIN + [ 1+ ( Qt(1) + 4/3Qt(8)) 1/Λ^2 K_h t̅t]ℳ_SM + ( Qt(1) + 4/3Qt(8)) 1/Λ^2 K_m_t∂ℳ_SM/∂ m_t× m_t +ϕ Gℳ_ϕ G1/Λ^2 .We note that ℳ_TOT^Ren represents a physical on-shell scattering amplitude, which must be scheme-independent.[This can be best understood from a top-down perspective.] Therefore, the scheme-dependence of the K-terms has to be compensated by a scheme-dependence of the parameters. 
To make this more evident, we define the following set of parameters identified by a tilde𝒞̃_tG =tG +( Qt(1) - 1/6Qt(8))K_tG,g̃_h t̅t =g_h t̅t[1+( Qt(1) + 4/3Qt(8)) 1/Λ^2 K_h t̅t], m̃_t =m_t[1+( Qt(1) + 4/3Qt(8)) 1/Λ^2 K_m].Noting that, under a redefinition of the top mass m_t → m_t + Δ m_t, one has ℳ_SM→ℳ_SM +Δ m_t ∂ℳ_SM/ ∂ m_t, we can write the total matrix element in a more compact form (at 1/Λ^2):ℳ_TOT^Ren = ( Qt(1) + 4/3Qt(8)) 1/Λ^2ℳ_g_h t̅t + m_t^S.I.+ 𝒞̃_tG/Λ^2ℳ_tG|_FIN+ ℳ_SM( g̃_h t̅t, m̃_t) +ϕ G/Λ^2ℳ_ϕ G .In the previous expression, ℳ_SM( g̃_h t̅t,m̃_t) is given by Eq. (<ref>) where g_h t̅t,m_t are replaced by g̃_h t̅t, m̃_t. From the amplitudes ℳ_g_h t̅t + m_t^S.I.,ℳ_tG,ℳ_SM,ℳ_ϕ G being scheme-independent, it follows that thecombinations in Eqs. (<ref>-<ref>) must be scheme-independent. It should be stressed that Eq. (<ref>) is the same relation we obtained in the previous section, namely Eq. (<ref>): the same finite shift makes both the anomalous dimension matrix and the renormalised amplitude scheme-independent.We also remark that, at the order we are working, g_h t̅t and m_t can be used interchangeably with g̃_h t̅t and m̃_t in ℳ_tG,ϕ G,ℳ_g_h t̅t + m_t^S.I. because their contribution to ℳ_TOT is already suppressed by 1/Λ^2. §.§ Summary of the computationWe can now summarize the differences between the two schemes. From Eqs. (<ref>-<ref>) it is evident that there exists a difference between the parameters in the two schemes which is proportional to K_X^NDR-K_X^BMHV. This quantity does not depend on the prescription used to identify the K-terms. In BMHV all the K-terms are vanishing, so the previous redefinitions are trivial. The scheme-independence condition X̃_i^NDR=X̃_i^BMHVallows us to write at 1/Λ^2[ If we had included the loop factor 1/(4 π)^2 explicitly in the C_tG-term in the Lagrangian Eq. (<ref>), it would be manifest that the chromomagnetic and the four-top operators contribute at the same order in the chiral counting, because in this case the factor 1/(4 π)^2 in (<ref>) would be absent. ]tGNDR =tGBMHV - ( Qt(1) - 1/6Qt(8)) √(2)g_ht̅t g_s/16 π^2 ,g_h t̅t^NDR =g_ht̅t^BMHV- g_h t̅t( Qt(1) + 4/3Qt(8))(m_h^2-6m_t^2)/16 π^2 Λ^2, m_t^NDR =m_t^BMHV + ( Qt(1) + 4/3Qt(8))m_t^3/8π^2 Λ^2.The map described by Eqs. (<ref>-<ref>), establishes a connection between the two schemes. When such relations are considered, the two schemes give the same anomalous dimension matrix and the same renormalised amplitude. §MATCHING WITH UV-MODELSAs discussed in the previous section, the differences in the finite terms of the amplitude when using the NDR and the BMHV scheme can be absorbed by different definitions of the parameters tG, g_h t̅t, and m_t.In this section we perform the matching with concrete UV completions of the SM, in order to validate our EFT approach from a top-down point of view.The matching is performed in the unbroken phase (following the notation used in Ref. <cit.>), in which g_h t̅t and m_t can be traded more conveniently in favour of t ϕ and t. In the remainder of the section we will use a thicker fermion line to denote the iso-doublet Q_L and a thinner fermion line to denote the iso-singlet t_R in the Feynman diagrams.§.§ New scalar: Φ∼ (8,2)_1/2We consider, in addition to the SM, a new heavy scalar with a mass M_Φ≫ v and quantum numbers Φ∼ (8,2)_1/2. 
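Before working through this matching explicitly, it is convenient to encode the NDR-BMHV translation map given above in a small numerical sketch, which serves as a cross-check for the matching examples: for the colour-octet scalar considered in this subsection one finds Qt(1)+4/3Qt(8)=0, so that only tG is shifted between the schemes, whereas for the colour-singlet scalar of the next subsection Qt(1)-1/6Qt(8)=0 and only g_ht̅t and m_t are shifted. The function name and the illustrative input values in the Python sketch below are ours, not part of the matching discussion.

```python
from math import pi, sqrt

def ndr_from_bmhv(C_tG, g_htt, m_t, m_h, C_Qt1, C_Qt8, g_s, Lam):
    """Translate (C_tG, g_htt, m_t) from the BMHV to the NDR scheme at O(1/Lam^2).
    Wilson coefficients are dimensionless; masses and Lam must share the same units."""
    loop = 1.0 / (16.0 * pi**2)
    c_tg  = C_tG  - (C_Qt1 - C_Qt8 / 6.0) * sqrt(2.0) * g_htt * g_s * loop
    g_new = g_htt - g_htt * (C_Qt1 + 4.0 * C_Qt8 / 3.0) * (m_h**2 - 6.0 * m_t**2) * loop / Lam**2
    m_new = m_t   + (C_Qt1 + 4.0 * C_Qt8 / 3.0) * m_t**3 / (8.0 * pi**2 * Lam**2)
    return c_tg, g_new, m_new

# example call with illustrative inputs (all values are assumptions, in GeV where dimensionful)
print(ndr_from_bmhv(C_tG=0.0, g_htt=0.7, m_t=172.5, m_h=125.1,
                    C_Qt1=1.0, C_Qt8=1.0, g_s=1.2, Lam=1000.0))
```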
The Lagrangian in this case can be written asℒ_Φ = (D_μΦ)^†D^μΦ - M_Φ^2 Φ^†Φ- Y_Φ( Φ^A,†εQ̅_L ^TT^A t_R + ),where ε is the Levi-Civita pseudotensor in the isospin space and T refers to the transposition in isospin space only.The tree-level matching yields ℒ=Y_Φ^2/ M_Φ^2 (Q̅_L T^A t_R) (t̅_R T^A Q_L).This operator does not appear in the Warsaw basis since it is considered redundant in D=4 dimensions. In the following it will be referred to as ℛ_Qt^(8). Using the Fierz identities, one can recast this result in terms of operators in the Warsaw basis <cit.>:Qt(1)/Λ^2 = -2/9Y_Φ^2/ M_Φ^2, Qt(8)/Λ^2 = 1/6Y_Φ^2/ M_Φ^2.Now we compute the matching at one-loop level to the chromomagnetic operator. The relevant diagrams are given in Fig. <ref>, while diagrams with t-channel exchange within the loop are forbidden due to the conservation of hypercharge.Evaluating the diagrams in Fig. <ref> gives zero in both NDR and BMHV, in contrast with our previous observations.However, the Fierz identity we used for the matching of the four-fermion operators is broken by ϵ terms when dimensional regularisation is used (D=4 -2 ϵ), as noted in Ref. <cit.>. Following this reference, we define the evanescent operator as ℰ=ℛ_Qt^(8) - (-2/9𝒪_Qt^(1)+1/6𝒪_Qt^(8))and we compute its insertion (in both schemes). We find that in NDR the evanescent operator contributes to the matching to the chromomagnetic operator: 𝒞_Qt^(8), Rℛ_Qt^(8) =-2/9Y_Φ^2/M_Φ^2^𝒞_Qt^(1)/Λ^2𝒪_Qt^(1)+1/6Y_Φ^2/M_Φ^2^𝒞_Qt^(8)/Λ^2𝒪_Qt^(8)+1/16 π^2Φ^2/ M_Φ^2g_s t/4_𝒞_tG/Λ^2𝒪_tG +.This result reproduces the term proportional to the chromomagnetic operator presented in <cit.>.[This reference uses a different convention for the covariant derivative with respect to the one used in Ref. <cit.>, which we follow in the Feynman rules. This leads to a relative minus sign in terms with an odd power of g_s. In addition, the different normalisation of the quartic Higgs self-coupling in Ref. <cit.> requires the replacements λ/2 →λ, μ^2 →λ v^2to convert their result into our conventions.] In BMHV we obtain𝒞_Qt^(8), Rℛ_Qt^(8) =-2/9Y_Φ^2/M_Φ^2^𝒞_Qt^(1)/Λ^2𝒪_Qt^(1)+1/6Y_Φ^2/M_Φ^2^𝒞_Qt^(8)/Λ^2𝒪_Qt^(8) .We conclude that the difference between the NDR scheme and BMHV scheme (using Eq. (<ref>) and √(2)m_t = tv +1/Λ^2) is exactly the one described by Eq. (<ref>). Furthermore, we need to compute the matching to the top Yukawa coupling as well as to t ϕ. Doing so we find in both schemes zero, by colour. This is in trivial agreement with Eqs. (<ref>), (<ref>) since, within this model, Qt(1)+4/3Qt(8)=0. In order to test Eqs. (<ref>), (<ref>) we hence need to consider a different model, namely replacing the colour octet Φ with a colour singlet φ.§.§ New scalar: φ∼ (1,2)_1/2We consider, in addition to the SM, a new heavy scalar with a mass M_φ≫ v and quantum numbers φ∼ (1,2)_1/2. The Lagrangian in this case can be written asℒ_φ = (D_μφ)^†D^μφ - M_φ^2 φ^†φ- Y_φ( φ^†εQ̅_L ^T t_R + ).The tree-level matching yields ℒ=Y_φ^2/ M_φ^2 (Q̅_L t_R) (t̅_RQ_L).As in the previous case, this operator does not appear in the Warsaw basis being redundant in D=4 dimensions. In the following it will be referred to as ℛ_Qt^(1). We findQt(1)/Λ^2 = -1/6Y_φ^2/ M_φ^2, Qt(8)/Λ^2 = - Y_φ^2/ M_φ^2.Due to colour structure, there are no contributions to the chromomagnetic operator. The tree-level matching implies Qt(1)-1/6Qt(8)=0, in agreement with Eq. (<ref>) since tGNDR=tGBMHV=0 within this model.Following the procedure outlined in the previous section, we compute the diagrams in Fig. 
<ref> to compute the contributions to t and tϕ in both schemes. The matching condition for t (tϕ) is obtained subtracting from the diagram in Fig. <ref> (<ref>) the one-loop amplitude for Q̅_L t_R →ϕ^† (Q̅_L t_R →ϕ^†ϕϕ^†) with an insertion of four-top operators. In other words, we are interested in computing the insertion of the evanescent operator:ℰ=ℛ_Qt^(1) - ( -1/6𝒪_Qt^(1)-𝒪_Qt^(8)).In NDR we find:𝒞_Qt^(1), Rℛ_Qt^(1) =-1/6Y_φ^2/M_φ^2^𝒞_Qt^(1)/Λ^2𝒪_Qt^(1)-Y_φ^2/M_φ^2^𝒞_Qt^(8)/Λ^2𝒪_Qt^(8)+1/16 π^2φ^2/ M_φ^2(3 t^3 - 3 λ)_𝒞_tϕ/Λ^2𝒪_tϕ + -1/16 π^2φ^2/ M_φ^2 3/2λ v^2 _Δt (Q̅_L ϕ̃ t_R )+,confirming once again the results obtained in <cit.>.In this notation, Δt represents the contribution to the top Yukawa coupling from the matching, while t represents the coefficient of the four-dimensional Yukawa operator (Q̅_L ϕ̃ t_R ).In BMHV we find:𝒞_Qt^(1), Rℛ_Qt^(1) =-1/6Y_φ^2/M_φ^2^𝒞_Qt^(1)/Λ^2𝒪_Qt^(1)-Y_φ^2/M_φ^2^𝒞_Qt^(8)/Λ^2𝒪_Qt^(8) .Using the well known relations Eq. (<ref>)we can compute m_t,g_h tt̅ and confirm Eqs. (<ref>), (<ref>). § INTERPLAY BETWEEN MORE OPERATORS IN THE SMEFT The primary focus of this paper is the demonstration of γ_5 scheme differences in the treatment of four-top operators, since they provide a convenient playground for investigation due to the factorization of loop integrals. However, considering a complete operator basis in SMEFT, there are other classes of operators that share similar features regarding the treatment of γ_5.Analogous to Sec. <ref> (but more schematically) we demonstrate in the following that there is also a scheme-dependent finite mixing at one-loop order for operators in the class of ψ^2ϕ^2D of Ref. <cit.>.For the purpose of this discussion, we consider the two operators L_2t2ϕ= ϕ Q(1)/Λ^2Q̅_Lγ_μ Q_L(ϕ^† iD^μϕ)+ ϕ t/Λ^2t̅_Rγ_μ t_R(ϕ^† iD^μϕ) , where we introduced the short-hand notation iD^μ=iD^μ-iD^μ . Similar to the four-top operators in Eq. (<ref>), the operators in Eq. (<ref>) are composed of current-current interactions including chiral vector currents.These current-current operators can be generated by integrating out a new heavy vector particle at tree-level that couples to the SM currents. A concrete and comparably easy realization is given e.g. by the Third Family Hypercharge Model <cit.>. We restrict the direct evaluation of one-loop contributions of the operators in Eq. (<ref>) to the gaugeless limit of the SM[ In the gaugeless limit, the SM gauge bosons are completely decoupled from the rest of the theory, taking the limit g_1→ 0 and g_2→ 0.The Goldstone fields of the SM Higgs doublet are therefore massless physical degrees of freedom. The explicit analytic results in this section are equivalent to the pure Goldstone contribution in Landau gauge. 
] and only investigate the contribution to the chromomagnetic form factor, since this is sufficient to point out the necessity of a more exhaustive study in future work. An explicit evaluation of the one-loop correction to g→t̅t in the broken phase leads to

[sum of the two one-loop vertex diagrams in which a neutral Goldstone G^0 is exchanged between the top legs and the ψ^2ϕ^2D operator insertion (square vertex) sits on either of the two Goldstone–top vertices]|_FIN = (ϕ Q(1) - ϕ t)/tG × K_tG^2t2ϕ × [tree-level insertion of the chromomagnetic vertex 𝒪_tG] + …,

where the gluon and top quarks are taken on-shell[Even if this choice is not kinematically allowed, it simplifies the extraction of the chromomagnetic contribution.] and the Gordon identity for on-shell fermions is applied to arrive at this result. The (…) in Eq. (<ref>) represent contributions to vector and axial form factors that are completely removed using on-shell renormalisation of the external top fields. For the scheme-dependent value of K_tG^2t2ϕ we find

K_tG^2t2ϕ = g_s m_t/(16√(2) v π^2) × { 1 (NDR); 2/3 (BMHV) }.

A mapping of tG from one scheme to the other in the presence of the operators of Eq. (<ref>) is therefore achieved considering the difference

Δ K_tG^2t2ϕ = K_tG^2t2ϕ, NDR − K_tG^2t2ϕ, BMHV = g_s m_t/(48√(2) v π^2),

similarly as in Eq. (<ref>). The same difference is obtained in the unbroken phase, evaluating diagrams of the form of Fig. <ref> for both operators. This provides a solid cross check of the scheme-dependent nature, which even holds when the SM gauge bosons are part of the theory, since they cannot contribute to the chromomagnetic operator at one-loop order. The result of Eq. (<ref>) (and the analogous calculation in the unbroken phase) illustrates well that we observe a scheme-dependent finite mixing at one loop between the operators of Eq. (<ref>) and other operators, just like in the case of four-top operators. Similarly to Sec. <ref>, a map of finite scheme-dependent shifts in the Wilson coefficients could be verified by an explicit on-shell one-loop matching with an adequate toy model. Regarding the contribution of those operators to the Higgs-gluon coupling, we refrain from performing the complete calculation as in Sec. <ref> in our current work. Even in the simplified scenario of the gaugeless limit, the contributions of the operators would lead to genuine two-loop Feynman integrals, which is beyond the scope of what we would like to demonstrate here. With the observed scheme dependence at one-loop, we already expect a γ_5 scheme dependence for the single pole in gg→ h and for the RGE of ϕ G.
As in the case of four-top operators, it should be resolved considering the map of finite shifts in the Wilson coefficients derived at one-loop. However, it is not guaranteed that the renormalised amplitude of the gg→ h would have a scheme-independent form once such shifts are considered. On the contrary, it may be necessary to identify finite scheme-dependent shifts appearing at the two-loop level. § CONCLUSIONSWe have computed the contribution of four-top operators to the Higgs-gluon coupling at two-loop level in the SMEFT. We have discussed in detail, for the first time for this process, the differences between the two schemes for the continuation of γ_5 to D space-time dimensions considered in this paper, namely NDR and BMHV. This process is an interesting show-case for the topic of scheme-dependence, because it shows some key features of two-loop computations without adding too many difficulties with respect to a one-loop computation.Although the results at two-loop level in the two γ_5 schemes have a different form, this difference can be accounted for by allowing that the parameters have different values in the two schemes.Given this, we determined in Eqs. (<ref>-<ref>) a mapping between the parameters in the two schemes that makes both the anomalous dimension matrix and the finite result scheme-independent.This extends the approach presented in Ref. <cit.>, where the scheme-independence of the anomalous dimension matrix only is discussed.We validated the relations between the parameters in the different schemes using some UV models, as detailed in Sec. <ref>. These simplified UV models support the expectation that the physical result does not depend on the scheme used for γ_5, if such scheme is used consistently. However, we remark that this holds for a top-down approach, in which the EFT (in this case, the SMEFT) is used as an intermediate step.In the context of the SMEFT with a new physics scale Λ∼ 1 TeV, the finite terms in the matrix element can be of the same size as the logarithmically enhanced contributions, and thus can be phenomenologically relevant <cit.>. For this reason, deriving a connection between the two schemes is very desirable in the perspective of a global fit, where the observables may be computed in differentschemes. To this aim, Eqs. (<ref>-<ref>) represent a first effort in the direction of a comprehensive map between the two schemes. We remark that the continuation scheme for γ_5 is only one of the calculational choices that could affect the intepretation of SMEFT fits from a bottom-up point of view (see e.g. Refs. <cit.>).Lastly, we have observed that the interplay of four-top and other SMEFT operators cannot be fully understood in terms of the canonical SMEFT power counting, as in some cases operators that are expected to contribute to different orders based on this counting cannot be treated independently. When the canonical power counting is supplemented by a loop counting like the one discussed in Ref. <cit.>, the observed interplay is more naturally accounted for, under the generic assumption of weakly-coupled and renormalisable UV theories. Furthermore, when the loop counting is considered, the shifts we have presented can be of the same order of magnitude as the Wilson coefficients themselves (see Eq. (<ref>)). 
As a consequence, experimental constraints on the determination of Wilson coefficients of loop-generated operators (like tG in this paper) could be interpreted as suffering from large uncertainties, if scheme-dependent contributions from tree-level-generated chiral operators entering at higher explicit loop orders are omitted (in our case, four-top and ψ^2 ϕ^2 D operators).This points to the necessity of selecting operators contributing to a physical process such that loop counting and canonical-dimension counting are combined, even though it implies assumptions on the UV completion. In any case, a detailed documentation of continuation and renormalisation scheme choices used in EFT calculations and fits of Wilson coefficients is highly recommended. § ACKNOWLEDGMENTSWe are indebted to Luca Silvestrini, whose comments and suggestions were crucial during the early stages of this project. We would like to thank various people for discussion: Jorge de Blas, Gerhard Buchalla, Hesham El Faham, Ulrich Haisch, Paride Paradisi, Luca Vecchi and Eleni Vryonidou. We also thank Lina Alasfar for providing assistance in automatising parts of the computation. The Feynman diagrams shown in this work were drawn with(<cit.>). This project has received funding from the European Union’s Horizon Europe research and innovation programme under the Marie Skłodowska-Curie Staff Exchangegrant agreement No 101086085 – ASYMMETRY.The research of GH and JL was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant 396021762 - TRR 257. RG and MV acknowledge support from a departmental research grant under the project “Machine Learning approach to Effective Field Theories in Higgs Physics”. This work is supported in part by the Italian MUR Departments of Excellence grant 2023-2027 "Quantum Frontiers”. SDN also thanks the Lawrence Berkeley National Laboratory, Berkeley Center for Theoretical Physics and the Institute for Theoretical Physics at KIT for hospitality.§ THE H→ BB̅ RATE We would like to shortly discuss the computation of the four-quark operators to the h→ bb̅ rate both in the NDR and BMHV scheme, which we obtain as a side product of our analysis. The operators relevant for our discussion areℒ_b = Qb(1)/Λ^2(Q̅_L γ_μ Q_L ) (b̅_R γ^μ b_R ) +Qb(8)/Λ^2(Q̅_L T^Aγ_μ Q_L ) (b̅_R T^A γ^μ b_R ) + [ QtQb(1)/Λ^2(Q̅_L t_R ) i τ_2 (Q̅_L^Tb_R )+ ] + [ QtQb(8)/Λ^2(Q̅_LT^At_R ) i τ_2 (Q̅_L^T T^Ab_R )+ ] + [ bϕ/Λ^2(ϕ^†ϕ) Q̅_L ϕ b_R+ ] .We consider also scalar operators 𝒪_QbQt^(1,8) which are neglected in the gg → h computation since they are suppressed by a factor of m_b / m_t.Including the above operators at NLO, the Higgs decay to bottom quarks is given by <cit.>[In this reference, the on-shell renormalisation scheme is employed. For this reason, we perform the check with the bare amplitude, Eqs. (4.13), (4.14).] Γ_h→ bb̅^NDR/Γ_h→ bb̅^SM = 1 - m_t/m_bm_h^2 /32π ^2 Λ ^2(7 QtQb(1)+4/3QtQb(8))×(2 β ^3 log(β-1/β + 1)-5 β ^2 +(1-3 β ^2 ) log(μ̃^2/m_t^2)+1 )- m_h^2/16 π ^2 Λ ^2( Qb(1)+ 4/3Qb(8)) (4 β_b^3 log(β_b-1/β_b+1) +7 β_b^2+(6 β_b^2-2) log(μ̃^2/m_b^2)-1) + 1/Λ^4,and β defined in Eq. (<ref>) and β_b is obtained from β by replacing m_t with m_b. The correct branch of the logarithm can be obtained by m_h^2→ m_h^2+i 0. In the BMHV scheme instead the result of the scalar operators does not change with respect to the NDR scheme, but we obtain a different result for the operators Qb(1) and Qb(8). 
We findΓ_h→ bb̅^NDR-Γ_h→ bb̅^BMHV/Γ_h→ bb̅^SM=Qb(1)+ 4/3Qb(8)/8π^2Λ^2(m_h^2-6 m_b^2) + 1/Λ^4 .At tree-level one has Γ_h→ bb̅^X,TL∝ (g_h b̅b^X)^2, being X=NDR,BMHV, where g_h b̅b contains corrections from the operator 𝒪_b ϕ, as can be seen from Eq. (<ref>) (replacing t with b).Keeping into account the different value of such coupling in the two regularisation schemes, namely Eq. (<ref>), we can write Γ_h→ bb̅^NDR,TL-Γ_h→ bb̅^BMHV,TL/Γ_h→ bb̅^SM =Qb(1)+ 4/3Qb(8)/8π^2Λ^2(6 m_b^2-m_h^2)+ 1/Λ^4 . If one consistently accounts for the orders in the loop expansion and the 1/Λ^2 expansion, one is then able to obtain a scheme-independent result for this process. § RENORMALISATION GROUP EQUATIONS AND COUNTERTERMSThe anomalous dimension matrix of a theory is strictly connected to the structure of the divergences of the theory itself. In this appendix we analyse in detail this relation, deriving a general formula which can be used to determine the one-loop counterterms associated to SMEFT operators by simply reading the corresponding entry of the renormalisation group equation, given for example in <cit.> (or viceversa).We present here a general argument where a generic SMEFT operator 𝒪_2 renormalises a different operator 𝒪_1. We fix, coherently with the rest of the paper,1MS(μ) = 1(0) + δ1(μ), δ1(μ) = A/ϵ (μ)^N_λ(μ) ^N_λ g(μ) ^N_g2(μ).In the previous expression, μ is the renormalisation scale (on which the MS parameters depend) and ,λ,g denote, respectively, a Yukawa coupling, the Higgs quartic coupling and a gauge coupling and A is a number that does not depend on the renormalisation scale (nor implicitly or explicitly).When dimensional regularisation is used, it is customary to rescale the parameters in such a way they maintain their physical dimension: X →μ^κ_X ϵ X. A typical example is given by gauge couplings, for which κ_g = 1 is chosen to keep them dimensionless (g →μ^ϵ g). This operation should be done also for the coefficients of the SMEFT operators, whose mass dimension in D space-time dimensions is different from -2.[Within the notation used in this paper, the coefficients are written as i/Λ^2, being i a dimensionless quantity.] Remarkably, SMEFT operators may have a different dimension depending on their field content, even if in the limit D → 4 they all have dimension six. Since the product i𝒪_i must have dimension D one has, in principle, 8 different rescaling factors κ_i, one for each of the operator classes defined in <cit.>. As we will see at the end of this section, keeping this aspect into account is crucial in order to find the correct relation between counterterms and anomalous dimension entries. The renormalisation group equation for 1 can be obtained from (dropping the superscript MS for a better readability)0 = μd 1(0)(μ)/d μ =μd/d μ( μ^κ_1 ϵ (1(μ) - δ1(μ)) ).Since in the end we will take D → 4, we need the first term of the expansion in the β-function for each of the parameters contained in the counterterm, namelyμd X(μ)/ d μ≡β_X = - κ_X ϵ + 1.Performing the algebra in Eq. (<ref>) and using Eq. (<ref>) we obtain μd 1(μ)/ d μ =A × (κ_1 - κ_2 - N_ - N_g - 2 N_λ) ×(μ)^N_λ(μ) ^N_λ g(μ) ^N_g2(μ) .If we normalise the anomalous dimension matrix as μd 1(μ)/ d μ = 1/16 π^2γ_12(μ) 2(μ), we can write (comparing this expression with Eq. (<ref>))δ1 (μ) = 1/16 π^2 ϵγ_12 (μ)2(μ) 1/κ_1 - κ_2 - N_ - N_g - 2 N_λ.A practical example of this formula is Eq. (<ref>). 
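Before the paper's own worked example, which follows immediately below, a small numerical sketch (ours, not from the reference) of the prefactor appearing in this relation may be useful. For the mixing of the four-top operators into 𝒪_t ϕ one has κ_t ϕ = 3 and κ_Qt = 2, as worked out just below, and both coupling structures give the same denominator.

# Toy evaluation of the denominator in
#   delta C_1 = gamma_12 * C_2 / (16 pi^2 eps) * 1/(kappa_1 - kappa_2 - N_y - N_g - 2*N_lambda)
# for the four-top -> O_tphi example (kappa_tphi = 3, kappa_Qt = 2).
def denominator(kappa_1, kappa_2, n_yukawa, n_gauge, n_lambda):
    return kappa_1 - kappa_2 - n_yukawa - n_gauge - 2 * n_lambda

print(denominator(3, 2, 1, 0, 1))   # Yukawa*lambda term: (N_y, N_g, N_lambda) = (1, 0, 1) -> -2
print(denominator(3, 2, 3, 0, 0))   # Yukawa^3 term:      (N_y, N_g, N_lambda) = (3, 0, 0) -> -2

Both structures therefore map the corresponding anomalous-dimension entries onto counterterms with the same overall factor of 1/(-2), which is why a single relation covers both terms in the example below.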
Four-top operators 𝒪_Qt^(1,8) renormalise 𝒪_t ϕ at order t λ <cit.> and at order t^3 <cit.>. This means (N_y, N_g, N_λ) = (1,0,1) ((3,0,0)) for the former (latter) case. In D = 4−2ϵ space-time dimensions one has [Qt(1,8)] = 2ϵ, [t ϕ] = 3ϵ, which implies κ_Qt = 2, κ_t ϕ = 3. Plugging these numbers in Eq. (<ref>) gives Eq. (<ref>) (for both terms of t λ, t^3).

§ ADDITIONAL RESULTS

We present in this appendix 𝒜_g_h t̅t+m_t introduced in Eq. (<ref>):

𝒜_g_h t̅t+m_t = − g_h t̅t g_s^2 m_t/(64 π^4 m_h^4) L^μ_1μ_2 ϵ_μ_1(p_1) ϵ_μ_2(p_2) δ^A_1A_2 × [ −4 (log(μ̃^2/m_t^2)+2) m_h^4 − 4 β m_h^2 log((β−1)/(β+1)) × (2 (log(μ̃^2/m_t^2)−1) m_t^2 + m_h^2) + 16 (2 log(μ̃^2/m_t^2)+3) m_h^2 m_t^2 + log^2((β−1)/(β+1)) ( (log(μ̃^2/m_t^2)+2) m_h^4 − 4 (3 log(μ̃^2/m_t^2)+5) m_h^2 m_t^2 + 16 (3 log(μ̃^2/m_t^2)+4) m_t^4 ) + β log^3((β−1)/(β+1)) (m_h^2 − 4 m_t^2)^2 ].

L^μ_1μ_2 has been defined in Eq. (<ref>), β in Eq. (<ref>), and A_1, A_2 are the colour indices of the gluons. We also report here the result for the SM amplitude for gg → h at one-loop level:

ℳ_SM = g_h t̅t g_s^2/(32 π^2 m_t) τ L^μ_1μ_2 ϵ_μ_1(p_1) ϵ_μ_2(p_2) δ^A_1A_2 × (β^2 log^2((β−1)/(β+1)) − 4).

§ FEYNMAN RULES

We follow <cit.> for what concerns the Feynman rules. For the sake of completeness, we report here the Feynman rules we used in Sec. <ref>:

[gluon–top–top vertex with an insertion of the chromomagnetic operator (square vertex), incoming gluon momentum p] = −tG/Λ^2 √(2) v T^A σ^μν p_ν,

[Higgs–top–top vertex] = −i g_h t̅t,

[mass insertion on the top-quark line, marked by a cross] = −i m_t.

We stress that in Eq. (<ref>) there is not a direct proportionality to the propagator structure p̸ − 1 m_t, but only to 1 m_t. For this reason, we added a cross in the fermion line. | http://arxiv.org/abs/2310.18221v2 | {
"authors": [
"Stefano Di Noi",
"Ramona Gröber",
"Gudrun Heinrich",
"Jannis Lang",
"Marco Vitti"
],
"categories": [
"hep-ph"
],
"primary_category": "hep-ph",
"published": "20231027155031",
"title": "On $γ_5$ schemes and the interplay of SMEFT operators in the Higgs-gluon coupling"
} |
Inadmissibility and Transience [ January 14, 2024 ============================== Unsupervised semantic segmentation is a challenging task that segments images into semantic groups without manual annotation. Prior works have primarily focused on leveraging prior knowledge of semantic consistency or priori concepts from self-supervised learning methods, which often overlook the coherence property of image segments. In this paper, we demonstrate that the smoothness prior, asserting that close features in a metric space share the same semantics, can significantly simplify segmentation by casting unsupervised semantic segmentation as an energy minimization problem.Under this paradigm, we propose a novel approach called SmooSeg that harnesses self-supervised learning methods to model the closeness relationships among observations as smoothness signals. To effectively discover coherent semantic segments, we introduce a novel smoothness loss that promotes piecewise smoothness within segments while preserving discontinuities across different segments. Additionally, to further enhance segmentation quality, we design an asymmetric teacher-student style predictor that generates smoothly updated pseudo labels, facilitating an optimal fit between observations and labeling outputs. Thanks to the rich supervision cues of the smoothness prior, our SmooSeg significantly outperforms STEGO in terms of pixel accuracy on three datasets: COCOStuff (+14.9%), Cityscapes (+13.0%), and Potsdam-3 (+5.7%). § INTRODUCTION Semantic segmentation is a crucial task in computer vision that allows for a better understanding of the visual content and has numerous applications, including autonomous driving <cit.> and remote sensing imagery <cit.>.Despite advancements in the field, most traditional semantic segmentation models heavily rely on vast amounts of annotated data, which can be both arduous and costly to acquire. Consequently, unsupervised semantic segmentation <cit.> has emerged as a promising alternative. Prior knowledge is fundamental to the success of unsupervised semantic segmentation models. One key prior knowledge is the principle of semantic consistency, which stipulates that an object's semantic label should remain consistent despite photometric or geometric transformations.Recent advances <cit.> use contrastive learning to achieve consistent features or class assignments. Another essential prior knowledge is the priori concepts implicitly provided by self-supervised learning techniques, e.g., DINO <cit.>and precedent arts <cit.> whose learned features can be employed to partition each image into different segments. Despite their effectiveness, these methods often overlook the coherence property of image segments, resulting in predicted segments that are incomplete and lacking in coherence, as shown in Fig. <ref>. Real-world images often demonstrate a natural tendency towards piecewise coherence regarding semantics, texture, or color.Observations close to each other, either in the form of adjacent pixels in the coordinate space or close features in a metric space, are expected to share similar semantic labels, and vice versa.This essential property, known as the smoothness prior, plays a crucial role in various computer vision tasks <cit.>. Surprisingly, it is still under-explored in the field of unsupervised semantic segmentation.In this paper, we attempt to tackle unsupervised semantic segmentation from the perspective of smoothness prior. 
As a dense prediction task, semantic segmentation aims at finding a labeling f ∈ℱ that assigns each observation (pixel, patch, features) p∈𝒫 a semantic category f(p), which could be formulated within an energy minimization framework <cit.>: E(f) = E_smooth(f) + E_data(f). Here E_smooth is a pairwise smoothness term that promotes the coherence between observations, and E_data represents a pointwise data term that measures how well f(p) fits the observation p.

However, applying the energy minimization framework to unsupervised semantic segmentation presents several obstacles. Firstly, due to the large intra-class variations in appearances within an image, it is difficult to define a well-suited similarity (dissimilarity) relationship among low-level observations. This makes it challenging to discover groups of complex observations as coherent segments, as the relationship between observations is unclear. Secondly, E_smooth can lead to a trivial and meaningless solution where f becomes smooth everywhere, a phenomenon known as model collapse. This is because the pairwise prior alone may provide an overly simplified segmentation that fails to capture the complexity of the visual content. Finally, optimizing E_data without any annotated data, especially from scratch, is challenging: it is difficult to ensure a good fit between the observation p and its label f(p).

In this study, we propose a novel approach called SmooSeg for unsupervised semantic segmentation to address the aforementioned challenges. By leveraging the advantages of self-supervised representation learning in generating dense discriminative representations for images, we propose to model the closeness relationships among observations by using high-level features extracted from a frozen pre-trained model. This helps capture the underlying smoothness signals among observations. Furthermore, we implement a novel pairwise smoothness loss that encourages piecewise smoothness within segments while preserving discontinuities across image segments to effectively discover various semantic groups. Finally, we design an asymmetric teacher-student style predictor, where the teacher predictor generates smooth pseudo labels to optimize the data term, facilitating a good fit between the observations and labeling outputs.

Specifically, our model comprises a frozen feature extractor, a lightweight projector, and a predictor. The projector serves to project the high-dimensional features onto a more compact, low-dimensional embedding space, and the predictor employs two sets of learnable prototypes to generate the final segmentation results. We optimize our model using a novel energy minimization objective function.
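As an illustration only, a minimal PyTorch-style skeleton of this three-component design could look as follows. The sizes (ViT-S/8 feature dimension 384, embedding dimension D = 64, 27 classes) follow the experimental setup reported later in the paper, while the single-linear projector, the class name, and all variable names are our own simplification rather than the released implementation; the paper's projector additionally contains a two-layer SiLU MLP branch.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SmooSegSketch(nn.Module):
    """Projector + prototype predictor on top of frozen patch features (sketch)."""
    def __init__(self, feat_dim=384, embed_dim=64, num_classes=27):
        super().__init__()
        # lightweight projector: C-dim frozen features -> compact D-dim embeddings
        self.projector = nn.Linear(feat_dim, embed_dim)
        # student / teacher prototypes (class centers); the teacher copy is EMA-updated
        self.proto_s = nn.Parameter(torch.randn(num_classes, embed_dim))
        self.register_buffer("proto_t", self.proto_s.detach().clone())

    def forward(self, feats):
        # feats: (B, N, C) patch features produced by a frozen extractor such as DINO
        z = F.normalize(self.projector(feats), dim=-1)
        scores = z @ F.normalize(self.proto_t, dim=-1).t()   # cosine scores, (B, N, K)
        return scores.argmax(dim=-1)                          # per-patch label map from the teacher branch

The smoothness and data terms that actually train the projector and the prototypes are detailed in the Method section below.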
Despite its simplicity, our method has demonstrated remarkable improvements over state-of-the-art approaches.In particular, our method significantly outperforms STEGO <cit.> in terms of pixel accuracy on three widely used segmentation benchmarks: COCOStuff (+14.9%), Cityscapes (+13.0%), and Potsdam-3 (+5.7%). Our contributions can be summarized as follows: * We propose SmooSeg, a simple yet effective unsupervised semantic segmentation approach that delves into the potential of the smoothness prior, emphasizing the spatial coherence property of image segments. * We introduce a novel pairwise smoothness term that could effectively discover various semantic groups. * We design an asymmetric teacher-student style predictor to facilitate a good fit between the observations and labeling output. * We demonstrate the effectiveness of our approach by significantly outperforming state-of-the-art methods on three widely used segmentation benchmarks. § RELATED WORK Unsupervised semantic segmentation has gained increasing attention for automatically partitioning images into semantically meaningful regions without any annotated data. Early CRF models <cit.> incorporate smoothness terms that maximize label agreement between similar pixels. They define adjacency for a given pixel in the coordinate space, e.g., using 4-connected or 8 connected grid, which relies heavily on the low-level appearance information and falls short in capturing high-level semantic information in images. Recently, many methods <cit.> have attempted to learn semantic relationships at the pixel level with semantic consistency as a supervision signal. For example, IIC <cit.> is a clustering method that discovers clusters by maximizing mutual information between the class assignments of each pair of images. PiCIE <cit.> enforces semantic consistency between an image and its photometric and geometric augmented versions. HSG <cit.> achieves semantic and spatial consistency of grouping among multiple views of an image and from multiple levels of granularity. Recent advances <cit.> have benefited from self-supervised learning techniques, which provide priori concepts as supervision cues. For instance, InfoSeg <cit.> segments images by maximizing the mutual information between local pixel features and high-level class features obtained from a self-supervised learning model. The work in <cit.> directly employs spectral clustering on an affinity matrix constructed from the pre-trained features. TransFGU <cit.> generates pixel-wise pseudo labels by leveraging high-level semantic concepts discovered from DINO <cit.>.Additionally, STEGO <cit.> utilizes knowledge distillation to learn a compact representation from the features extracted from DINO based on a correspondence distillation loss, which also implies a smoothness regularization through the dimension reduction process.However, the utilization of smoothness prior in STEGO is implicit and entails separate post-process, such as min-batch K-Means, for the final semantic clustering. Besides, MaskContrast <cit.> and FreeSOLO <cit.> leverage mask priors and primarily focus on foreground object segmentation. In contrast, we propose to leverage the smoothness prior as a supervision cue to directly optimize the generated semantic map, achieving more coherent and accurate segmentation results. 
Self-supervised representation learning (SSL) aims to learn general representations for imageswithout additional labels, which has offered significant benefits to various downstream tasks, including detection and segmentation <cit.>.One main paradigm of SSL is based on contrastive learning <cit.>, which seeks to maximize the feature similarity between an image and its augmented pairs while minimizing similarity between negative pairs. For example,MoCo <cit.> trains a contrastive model by using a memory bank that stores and updates negative samples in a queue-based fashion. SimCLR <cit.> proposes to learn a nonlinear transformation, i.e., a projection head, before the contrastive loss, to improve performance.Notably, DINO <cit.>, built upon Vision Transformer (ViT) <cit.>, has a nice property of focusing on the semantic structure of images, such as scene layout and object boundaries.Features extracted by DINO exhibit strong semantic consistency and have demonstrated significant benefits for downstream tasks <cit.>. Another mainstream belongs to the generative learning approach <cit.>. MAE <cit.> and SimMIM <cit.> propose to predict the raw masked patches, while MaskFeat <cit.> proposes to predict the masked features of images.Our work also leverages recent progress in SSL for unsupervised semantic segmentation.§ METHOD Problem setting. Given a set of unannotated images I=[I_1, …, I_B] ∈ℝ^B× 3× H× W, where B denotes the number of images, and 3, H, W represent the channel, height, and width dimensions respectively, the objective of unsupervised semantic segmentation is to learn a labeling function f∈ℱ that predicts the semantic label for each pixel in each image.We represent the predicted semantic maps as Y=[Y_1, …, Y_B] ∈{1,⋯, K}^B× H× W, where K refers to the number of predefined categories. Architecture.To achieve this goal, we introduce the SmooSeg approach, which capitalizes on self-supervised representation learning and smoothness prior within an energy minimization framework, as illustrated in Fig. <ref>. SmooSeg comprises three primary components: a feature extractor f_θ, a projector h_θ, and a predictor g_θ. Initially, for each image I_i, we employ a pre-trained backbone network, such as a frozen version of DINO, to acquire feature representations X_i=f_θ(I_i) ∈ℝ^C × N, where C and N denote the number of feature channels and image patches, respectively. Subsequently, the projector h_θ maps these features onto a low-dimensional embedding space, resulting in a set of compact features Z_i = h_θ(X_i) ∈ℝ^D × N, where D denotes the reduced feature dimensionality. Finally, the predictor g_θ generates the label assignments A^{s,t}_i ∈ℝ^K× N by computing the similarity scores between the compact features Z_i and the prototypes P^{s,t}. Here, P^s and P^t represent student and teacher prototypes, respectively. The semantic map Y_i for image I_i can be obtained by reshaping the output Y_i^t of the teacher branch. §.§ Smoothness PriorReal-world images typically exhibit inherent continuity and coherence in terms of semantics, texture, and color.Within a single object, semantic labels tend to demonstrate smoothness and consistency, ensuring a cohesive representation of the object.In contrast, labels between distinct objects manifest discontinuity and divergence, facilitating the separation of different object instances. 
This essential property, known as the smoothness prior, is expected to play a critical role in guiding unsupervised semantic segmentation tasks toward more accurate and meaningful segmentation results. We therefore consider the following pairwise smoothness term:

E_smooth = ∑_i=1^B ∑_p,q=1^N W_pq^ii · δ(Y_i,p, Y_i,q),

where W^ii ∈ ℝ^N×N is the closeness matrix of image I_i, and δ(Y_i,p, Y_i,q) is the penalty that takes the value of 1 if Y_i,p ≠ Y_i,q, and 0 otherwise. By minimizing this smoothness term, two close patches with different labels will be penalized. In other words, the segmentation model is encouraged to assign similar labels to close patches, thereby promoting the coherence within objects.

Closeness matrix. It is worth noting that the large intra-class variation in appearances within the raw pixel space renders the discovery of well-suited closeness relationships among low-level observations challenging. We therefore propose to model the closeness relationships by the cosine distance in the high-level feature space. Specifically, W^ii can be calculated by:

W_pq^ii = (X_i,p · X_i,q) / (‖X_i,p‖ ‖X_i,q‖),

where X_i,p and X_i,q represent the feature vectors for patches p and q of image I_i, respectively. Theoretically, a large element value in the closeness matrix, i.e., a high cosine similarity, suggests a high possibility of a close patch pair, and vice versa. We apply a zero-mean normalization to this matrix: W̄_p^ii = W_p^ii − (1/N)∑_q W_pq^ii. This normalization balances the negative and positive forces during optimization, which prevents excessive influence from either the negative or positive components of the closeness matrix and ensures that the optimization process is more stable.

Label penalty. Directly minimizing Eq. <ref> to optimize our segmentation model is not feasible due to the non-differentiable property of δ(·,·) and the hard label assignment Y. As a result, we have to resort to another form of penalty cost. Suppose we have the soft label assignment A_i^t ∈ ℝ^K×N of image I_i (which will be introduced later), by which we can redefine the penalty cost function as:

δ(A_i,p^t, A_i,q^t) = 1 − (A_i,p^t · A_i,q^t) / (‖A_i,p^t‖ ‖A_i,q^t‖).

Because of the non-negative property of the softmax output, i.e., 0 ≤ A^t, 0 ≤ δ(·,·) ≤ 1 always holds. A larger value of δ(·,·) denotes a greater dissimilarity between two labels, thereby indicating a higher penalty cost, and vice versa.

Smoothness prior within and across images. To prevent the model from converging to a trivial solution where the labeling function becomes smooth everywhere, we also apply the smoothness prior across images, acting as a strong negative force, by introducing another image I_i' that is randomly selected from the current batch. We then obtain the final smoothness term:

E_smooth = E_smooth^within + E_smooth^across = ∑_i=1^B ∑_p,q=1^N { (W̄_pq^ii − b_1) · δ(A_i,p^t, A_i,q^t) + (W̄_pq^ii' − b_2) · δ(A_i,p^t, A_i',q^t) },

where W^ii' denotes the closeness matrix computed between the features of I_i and I_i'. Here, we introduce a scalar b_1 to adjust the threshold for applying the penalty. That is, when W̄_pq^ii − b_1 > 0, indicating that two patches p, q with a high closeness degree are nearby patches in the embedding space, patches p, q with different labels will be penalized, encouraging the piecewise smoothness within segments; otherwise, they are rewarded to assign different labels, leading to the discontinuities across segments. By doing so, SmooSeg is capable of finding globally coherent semantic segmentation maps.
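A hedged PyTorch-style sketch of this term (our own simplification, handling a single image pair rather than a full batch, with variable names of our choosing) is given below. The thresholds b_1 and b_2 follow the values listed in the appendix for COCOStuff and Cityscapes, and the closeness matrices are built from the frozen backbone features as described above.

import torch
import torch.nn.functional as F

def closeness(x_a, x_b):
    # cosine similarity between two sets of patch features, zero-mean per row
    w = F.normalize(x_a, dim=-1) @ F.normalize(x_b, dim=-1).t()
    return w - w.mean(dim=1, keepdim=True)

def label_penalty(a_p, a_q):
    # delta = 1 - cosine similarity of the teacher's soft assignments, in [0, 1]
    return 1.0 - F.normalize(a_p, dim=-1) @ F.normalize(a_q, dim=-1).t()

def smoothness_loss(x_i, x_j, a_i, a_j, b1=0.5, b2=-0.02):
    # within-image term: close patches are pushed towards equal labels
    e_within = ((closeness(x_i, x_i) - b1) * label_penalty(a_i, a_i)).sum()
    # across-image term: negative force that discourages a globally smooth solution
    e_across = ((closeness(x_i, x_j) - b2) * label_penalty(a_i, a_j)).sum()
    return e_within + e_across

Here x_i, x_j are the (N, C) frozen features of the two images and a_i, a_j the corresponding (N, K) teacher assignments; the full objective sums this quantity over the batch.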
Discussion with CRF and STEGO. CRF methods <cit.> model the closeness relationship of pixels using their spatial coordinates, emphasizing the local smoothness within each image. On the contrary, our SmooSeg encodes the global closeness relationship of image patches based on the cosine distance in the feature space, which can discover the high-level semantic groups of images. Our smoothness term appears to be similar to the correlation loss in STEGO, ℒ_corr = −∑ (F − b) max(S, 0), but essentially the two losses model different things. In STEGO, S denotes the feature correlation, by which STEGO aims to learn low-dimensional compact representations for images through a learnable projection head. A separate clustering algorithm, e.g., k-means, is required to obtain the final segmentation maps. However, even with the learned compact representations, the coherence of image segments is not guaranteed in STEGO, as slight differences in features may lead to inconsistent labels in the clustering stage. In contrast, our SmooSeg aims to directly learn a labeling function (projector + predictor) based on the smoothness prior, which encourages piecewise smoothness within segments and preserves disparities across segments, leading to more coherent and semantically meaningful segmentation maps. Additionally, the negative part of S contradicts the learning intention of STEGO and therefore requires a 0-clamp via max(S, 0); that negative part, however, represents discontinuities between image patches and should be preserved. In contrast, our label penalty 0 ≤ δ(·,·) ≤ 1 has a desirable property compared to S.

§.§ Asymmetric Predictor

A desirable labeling function learnt through energy minimization should on the one hand produce piecewise smooth results, and on the other hand fit well between the observations and labeling outputs. For semantic segmentation, we expect the labeling output of an image to align well with its semantic map. In other words, the labeling output should accurately predict a category for each individual pixel with high confidence or low entropy. However, this goal is particularly challenging in unsupervised semantic segmentation as there is no observed semantic map. Self-training <cit.> emerges as a promising solution for tasks involving unlabeled data. To address the above challenge, we design an asymmetric student-teacher style predictor to learn the labeling function through a stable self-training strategy. The student branch employs a set of K learnable prototypes (class centers) P^s = [p^s_1, ⋯, p^s_K] ∈ ℝ^K×D to predict the semantic maps of images. The teacher branch holds the same number of prototypes P^t as the student, and P^t is updated as an exponential moving average of P^s. We then compute the soft assignment A^{s,t}_i of the embeddings Z_i with the prototypes P^{s,t} by computing their cosine similarity. With ℓ_2-normalized embeddings Z̄_i = Z_i/‖Z_i‖ and prototypes P̄^{s,t} = P^{s,t}/‖P^{s,t}‖, we have

A_i^s = softmax(P̄^s · sg(Z̄_i)), A_i^t = softmax(sg(P̄^t) · Z̄_i / τ) ∈ ℝ^K×N,

where sg(·) denotes the stop-gradient operation and the temperature parameter τ > 0 controls the sharpness of the output distribution of the teacher branch. The teacher branch is responsible for generating smoothly updated pseudo labels to supervise the student prototypes' learning. By using a patch-wise cross-entropy loss, we have the data term as

E_data = −∑_i=1^B ∑_p=1^N ∑_k=1^K I_{Y^t_i,p = k} log A_i,p,k^s,

where I_· is an indicator that outputs 1 if the argument is true, and 0 otherwise, and Y_i,p^t = argmax_k A^t_i,p,k is the hard pseudo label for patch p of image I_i.
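The following sketch (again ours, with placeholder names and the stop-gradient placement assumed from the training description) summarizes the asymmetric predictor: the teacher's sharpened assignments provide hard pseudo labels for a patch-wise cross-entropy on the student side, and the teacher prototypes are refreshed by the exponential moving average update used during training.

import torch
import torch.nn.functional as F

def assignments(z, proto_s, proto_t, tau=0.1):
    # z: (N, D) projected embeddings; proto_*: (K, D) prototypes
    z, ps, pt = (F.normalize(t, dim=-1) for t in (z, proto_s, proto_t))
    a_s = torch.softmax(z.detach() @ ps.t(), dim=-1)           # student assignment, (N, K)
    a_t = torch.softmax((z @ pt.detach().t()) / tau, dim=-1)   # teacher assignment, sharpened by tau
    return a_s, a_t

def data_term(a_s, a_t):
    pseudo = a_t.argmax(dim=-1)                     # hard pseudo labels from the teacher
    logp = torch.log(a_s.clamp_min(1e-8))
    return F.nll_loss(logp, pseudo)                 # patch-wise cross-entropy on the student

@torch.no_grad()
def ema_update(proto_t, proto_s, alpha=0.998):
    # teacher prototypes follow the student prototypes with momentum alpha
    proto_t.mul_(alpha).add_(proto_s, alpha=1 - alpha)

In this sketch the detach calls mirror the optimization split stated in the paper: the data loss updates only the student prototypes, while gradients of the smoothness loss reach the projector through the teacher assignments.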
By minimizing E_, the segmentation model is expected to generate label assignments for each patch with high confidence, thus ensuring a better fit between the observations and their predicted labels.§.§ Overall Optimization ObjectiveOur final optimization objective function for training SmooSeg is obtained by incorporating the smoothness term and the data term as follows:ℒ =∑_i=1^B∑_p, q=1^N {(W̅_pq^ii-b_1) ·δ(A_i,p^t, A_i,q^t) + (W̅_pq^ii^'-b_2) ·δ(A_i,p^t, A_i^',q^t)}-∑_i=1^B∑_p=1^N∑_k=1^KI_Y^t_i,p=klog A_i,p,k^s.In practice, ℒ could be approximately minimized using Stochastic Gradient Descent (SGD). During each training iteration, the projector is optimized using gradients from the smoothness loss, while the student prototypes are optimized using gradients from the data loss.The teacher prototypes are updated as an exponential moving average of the student prototypes: P^t = α P^t + (1- α)P^s, with α denoting the momentum value. After training, we use the output from the teacher branch as the segmentation results. The overall procedure in pytorch-like pseudocode of SmooSeg is summarized in Algorithm <ref>.§ EXPERIMENTS §.§ Experimental Setup Datasets. Our experimental setup mainly follows that in previous works <cit.> in datasets and evaluation protocols. We test on three datasets. COCOStuff <cit.> is a scene-centric dataset with a total of 80 things and 91 stuff categories.Classes are merged into 27 categories for evaluation, including 15 stuff and 12 things. Cityscapes <cit.>is a collection of street scene images from 50 cities, with classes merged into 27 classes by excluding the "void" class. Potsdam-3 <cit.> is a remote sensing dataset with 8550 images belonging to 3 classes, in which 4545 images are used for training and 855 for testing.Evaluation metrics. For all models, we utilize the Hungarian matching algorithm to align the prediction and the ground-truth semantic map for all images.We also use a CRF <cit.> as the post-processing to refine the predicted semantic maps. Two quality metrics including mean Intersection over Union (mIoU) and Accuracy (Acc) over all the semantic categories are used in the evaluation.Implementation details. Our experiments were conducted using PyTorch <cit.> on an RTX 3090 GPU.To ensure a fair comparison with previous works <cit.>, we use DINO <cit.> with a ViT-small 8× 8 backbone pre-trained on ImageNet as our default feature extractor, which is frozen during model training. Our projector consists of a linear layer and a two-layer SiLU MLP whose outputs are summed together. The predictor contains two sets of prototypes with the same initialization. The exponential moving average (EMA) hyper-parameter is set to α = 0.998.The dimension of the embedding space is D = 64. The temperature is set to τ=0.1. We use the Adam optimizer <cit.> with a learning rate of 1×10^-4 and 5×10^-4 for the projector and predictor, respectively.r7cm 0.48 tablePerformance on the COCOStuff dataset (27 classes). 6pt Methods backbone Acc. mIoU ResNet50 <cit.> ResNet50 24.6 8.9 IIC <cit.> R18+FPN 21.8 6.7 MDC <cit.> R18+FPN 32.2 9.8 PiCIE <cit.> R18+FPN 48.1 13.8 PiCIE+H <cit.> R18+FPN 50.0 14.4 SlotCon <cit.> ResNet50 42.4 18.3 MoCoV2 <cit.> ResNet5025.2 10.4 + STEGO <cit.> ResNet50 43.1 19.6 darkgray + SmooSeg ResNet50 52.4 18.8 DINO <cit.> ViT-S/829.6 10.8 + TransFGU <cit.> ViT-S/852.7 17.5 + STEGO <cit.> ViT-S/8 48.324.5 darkgray + SmooSeg ViT-S/8 63.2 26.7 0.48 tablePerformance on the Cityscapes Dataset (27 Classes).6pt Methods backbone Acc. 
mIoU IIC <cit.> R18+FPN 47.9 6.4 MDC <cit.> R18+FPN 40.7 7.1 PiCIE <cit.> R18+FPN 65.5 12.3 DINO <cit.> ViT-S/840.5 13.7 + TransFGU <cit.> ViT-S/8 77.9 16.8 + STEGO <cit.> ViT-S/8 69.817.6 darkgray + SmooSeg ViT-S/8 82.8 18.4 We set a batch size of 32 for all datasets.For Cityscapes and COCOStuff datasets, we employ a five-crop technique to augment the training set size. We train our model with a total of 3000 iterations for Cityscapes and Potsdam-3 datasets, and 8000 iterations for the COCOStuff dataset.§.§ Comparison with State-of-the-Arts Quantitative results. We summarise the quantitative results on three datasets in Tables <ref>, <ref> and <ref>, respectively. Results of baselines, ResNet50<cit.>, MoCoV2<cit.> and DINO<cit.> are directly cited from the paper <cit.>, while the results of DINOV2 <cit.> (Table <ref>) are obtained by our implementation.For these baselines, we first extracted dense features for all images, then utilized a minibatch k-means algorithm to perform patches grouping, which resulted in the final segmentation maps. Our SmooSeg significantly outperforms all the state-of-the-art methods in terms of both pixel accuracy and mIoU on all datasets. In particular, on the COCOStuff dataset in Table <ref>,with DINO ViT-S/8 as backbone, SmooSeg gains a 14.9% improvement in pixel accuracy and a 2.2% improvement in mIoU over the best-performing baseline STEGO.We observe that TransFGU outperforms STEGO in terms of accuracy,but is inferior in mIoU on both COCOStuff and Cityscapes. This is due to the fact that TransFGU adopts a pixel-wise cross-entropy loss, which focuses more on the overall accuracy of pixels, while STEGO achieves better class-balanced segmentation results through mini-batch k-means. Our SmooSeg significantly outperforms both TransFGU and STEGO in both accuracy and mIoU, We attribute this superiority to our energy minimization loss, which optimizes both the smoothness term and the data term simultaneously.Same conclusions can be drawn on the Postdam-3 dataset,as shown in Table <ref>.We can see that SmooSeg, with DINO ViT-B/8 as the backbone, significantly outperforms STEGO, with gains of 5.7% in accuracy and 7.7% in mIoU. The improvement is particularly significant in terms of mIoU. This is not surprising as Potsdam-3 is a remote sensing image dataset that contains only 3 classes, so segments on the Potsdam-3 are often relatively large.In such a scenario, the smoothness prior becomes even more important in ensuring coherent segmentation maps. Qualitative results.We present qualitative examples of SmooSeg, STEGO and TransFGU on three datasets in Figs. <ref> and <ref>.Additional qualitative results, along with color maps, can be found in Appendix <ref>. As shown in Fig. <ref>, SmooSeg produces high-quality fine-grained segmentation maps that outperform those obtained by STEGO and TransFGU. Though STEGO uses a feature correspondence loss to encourage features to form compact clusters, its segmentation maps still suffer from incoherence problems. It can be observed that, slight differences in features can result in inconsistent labels in STEGO. The second image from the left column of Fig. <ref> shows such an example: the slight variations in light on the sea surface results in differences in features, leading to an incomplete and incoherent water segment. Similar phenomenon can be observed in the segmentation maps on Cityscapes and Potsdam-3 too. 
Besides, although TransFGU is a top-down approach, it still overlooks the relationship between image patches in its top-down approach, and therefore achieves much worse segmentation results. In contrast, SmooSeg with the aim of generating smooth label assignments within segments while preserving differences across different segments by leveraging the smoothness prior, the semantic maps produced by SmooSeg show more coherent and semantically meaningful results. In Fig. <ref>, we can see that SmooSeg outperforms the other methods in terms of accurate boundaries. §.§ Analyses Visualization. Feature visualizations of DINO, STEGO and SmooSeg are illustrated in Fig. <ref>.We can see that the feature distribution of DINO with ViT-base/8 as the backbone exhibits some semantic consistency, with compact clusters within each image but disperse across images. The embeddings of STEGO, which are distilled from DINO features using feature correspondence loss, show higher semantic consistency than DINO, with more compact clusters across images, such as the yellow markers, and improved performance. However, STEGO still suffers from the label incoherence problem due to the large intra-class variation of embeddings, indicating that feature distillation alone is insufficient to capture the high-level semantic coherence of segments. Our SmooSeg leverages the smoothness prior to encourage smooth label assignments, measured by the cosine distance between patch embeddings and prototypes (centers), and achieves remarkable improvement in the semantic consistency of feature embeddings.As shown in the right part of Fig. <ref>, SmooSeg produces highly semantically compact and coherent clusters with clear class boundaries for all images,and the performance, at 87.4% Acc and 77.8% mIoU, significantly higher than STEGO. These results further prove the effectiveness of our SmooSeg in using smoothness prior for unsupervised semantic segmentation. Objective function. To assess the effectiveness of our energy minimization objective function, we conduct an ablation study on the COCOStuff dataset by comparing SmooSeg with four variants of the objective function, each with a different term removed. For the variant of w/o E_, we only keep the prototypes in the teacher branch to generate the label map for the smoothness term. The results are shown in Table <ref>, where E_^ denotes the smoothness term across different images.We can see that the performance drops by 10.2% on Acc and 1.2% on mIoU when removing the data term E_, which highlights the importance of the data term in promoting the fitting of the labeling function. Besides, removing E_^ results in a much larger drop in performance, with a decrease of 27.1% on Acc and 16.3% on mIoU. Moreover, our segmentation model fails when removing the entire E_. The smoothness term utilizes a closeness matrix constructed from high-level features of a pre-trained model, acting as strong supervision signals to guide the label learning for all image patches.Therefore, it is reasonable to see that E_ contributes significantly to the overall performance.On the contrary, the data term operates in a self-training fashion with pseudo labels derived from the teacher branch, which alone cannot generate accurate segmentation maps. These findings demonstrate the crucial role of both the data and smoothness terms for optimal performance of SmooSeg in unsupervised semantic segmentation. Temperature parameter τ. 
We investigate the effect of the temperature parameter τ on the performance of SmooSeg on the COCOStuff dataset, and report the results in Fig. <ref>. Theoretically, a smaller τ sharpens the softmax output, providing greater gradients and supervision signals for model training. Fig. <ref> shows that τ plays a critical factor in the success of SmooSeg. Specifically, SmooSeg achieves good results when τ≤ 0.1, while performance drops considerably when τ≥ 0.2 because the softmax output tends to become uniformly distributed. Momentum parameter α. We also study the impact of the α on SmooSeg.α controls the smoothness of the update of the teacher predictor from the student predictor.We plot the performance on the COCOStuff dataset as α changes from 0.1 to 1 in Fig. <ref>.The performance of SmooSeg gradually improves as α increases, and reaches stable when 0.99≤α. Limitation. Setting hyper-parameters without cross-validation is always a challenge for unsupervised learning methods. The main limitation of our method is that it involves two dataset-specific hyper-parameters in the smoothness term. We present a feasible strategy in Appendix <ref> to alleviate this issue.§ CONCLUSIONS In this paper, we propose SmooSeg, a simple yet effective unsupervised semantic segmentation approach that delves into the potential of the smoothness prior, emphasizing the coherence property of image segments.In particular, we implement a pairwise smoothness loss to effectively discover semantically meaningful groups. We also design an asymmetric teacher-student style predictor to generate high-quality segmentation maps. SmooSeg comprises a frozen extractor, as well as a lightweight projector and a predictor which could be optimized using our energy minimization objective function. Experimental results show that SmooSeg outperforms state-of-the-art approaches on three widely used segmentation benchmarks by large margins.Acknowledgement. This research is supported under the RIE2020 Industry Alignment Fund – Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s), by the National Research Foundation, Singapore under its Industry Alignment Fund – Pre-positioning (IAF-PP) Funding Initiative, and by the Ministry of Education, Singapore under its MOE Academic Research Fund Tier 2 (STEM RIE2025 Award MOE-T2EP20220-0006).Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore, and the Ministry of Education, Singapore. § APPENDIXtocsectionAppendix§ SETTING HYPER-PARAMETERSTuning hyperparameters without cross-validation on labels is particularly challenging for unsupervised learning methods. For example, STEGO <cit.> contains six parameters that are both dataset- and network-specific. Inspired by their hyperparameter tuning methods, we introduce a similar strategy for manual hyperparameter tuning.Recall that our SmooSeg introduces two hyperparameters b_1 and b_2 in the smoothness loss, which are used to adjust the threshold for applying the penalty. Ideally, a segmentation model should promote smoothness within segments while maintaining discontinuity between different segments.To achieve this, we can monitor the distribution of δ, referred to as the smoothness degree, during the model training (see Fig. <ref>). 
In a balanced setting, the smoothness degree distribution should exhibit bimodality, as demonstrated in the first column of Fig. <ref>. Otherwise, the segmentation model will generate semantic maps that are either too smooth or too discontinuous. Specifically, the hyperparameters used in SmooSeg with DINO as the backbone are summarized in Table <ref>.

Table: Hyperparameters used in SmooSeg.
Parameter    COCOStuff    Cityscapes    Potsdam-3
b_1          0.5          0.5           0.5
b_2          -0.02        -0.02         0.1

§ THE IMPACT OF CRF

CRF postprocessing is a common practice in both supervised and unsupervised semantic segmentation <cit.>, and the use of a CRF does not overshadow the contribution of the smoothness term in our work. The smoothness prior in this work operates on the high-level feature maps and mainly contributes to semantic smoothness, while the CRF operates on pixels to refine fine details and remedy the resolution loss caused by the final upsampling operation that exists in most semantic segmentation models (a typical upsampling rate is 8x8). Therefore, the application of a CRF serves as a supplement to our smoothness prior to further refine low-level smoothness.

Table: Experimental results of the impact of CRF on SmooSeg and STEGO.
                   COCOStuff      Cityscapes     Potsdam-3      Avg.
                   Acc    mIoU    Acc    mIoU    Acc    mIoU    Acc           mIoU
STEGO w/o CRF      46.5   22.4    63.5   16.8    74.1   58.9    61.4          32.7
STEGO w/ CRF       48.3   24.5    69.8   17.6    77.0   62.6    65.0 (+3.6)   34.9 (+2.2)
SmooSeg w/o CRF    60.6   25.2    79.8   18.0    81.4   68.4    73.9          37.2
SmooSeg w/ CRF     63.2   26.7    82.8   18.4    82.7   70.3    76.2 (+2.3)   38.5 (+1.3)

Table <ref> demonstrates that SmooSeg still achieves state-of-the-art results without CRF. Overall, the performance degradation in STEGO is notably more pronounced compared to SmooSeg. Importantly, the application of a CRF serves as an effective supplement to our smoothness prior. Moreover, we also present qualitative visualizations with and without CRF in Figures <ref> and <ref>. It is found that CRF is able to refine the quality of fine details for both STEGO and SmooSeg. However, SmooSeg is consistently more semantically coherent than STEGO either with or without CRF.

§ ADDITIONAL QUALITATIVE RESULTS

To provide further evaluation, we have included some difficult samples from the COCOStuff dataset that were predicted by our SmooSeg and STEGO in Figure <ref>. In these cases, SmooSeg and STEGO tend to generate inaccurate semantic maps. However, it is noteworthy that even in challenging scenarios, SmooSeg consistently generates more semantically coherent segmentation maps compared to STEGO. This observation underscores the advantages of incorporating our smoothness prior in semantic segmentation tasks. We provide the visualization of more results in Fig. <ref> for the COCOStuff dataset, Fig. <ref> for the Cityscapes dataset, and Figs. <ref> and <ref> for the Potsdam-3 dataset. Overall, SmooSeg consistently produces more coherent segmentation maps when compared to both STEGO and TransFGU. These visual results provide further evidence of the efficacy of the smoothness prior in enhancing the label coherence and the overall segmentation quality in unsupervised settings. | http://arxiv.org/abs/2310.17874v1 | {
"authors": [
"Mengcheng Lan",
"Xinjiang Wang",
"Yiping Ke",
"Jiaxing Xu",
"Litong Feng",
"Wayne Zhang"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20231027032925",
"title": "SmooSeg: Smoothness Prior for Unsupervised Semantic Segmentation"
} |
0009-0009-6149-8707 College of Information Science and Electronic Engineering, Zhejiang University Hangzhou China [email protected] 0000-0002-8206-3747 Corresponding Author. College of Information Science and Electronic Engineering, Zhejiang University Hangzhou China [email protected] 0009-0005-4596-1600 College of Information Science and Electronic Engineering, Zhejiang University Hangzhou China [email protected] 0009-0004-4524-1155 College of Information Science and Electronic Engineering, Zhejiang University Hangzhou China [email protected] 0000-0002-6224-2748 College of Information Science and Electronic Engineering, Zhejiang University Hangzhou China [email protected] 0009-0007-3246-0166 College of Information Science and Electronic Engineering, Zhejiang University Hangzhou China [email protected] Cloth-Changing Person Re-Identification (CC-ReID) is a common and realistic problem since fashion constantly changes over time and people's aesthetic preferences are not set in stone. While most existing cloth-changing ReID methods focus on learning cloth-agnostic identity representations from coarse semantic cues (e.g. silhouettes and part segmentation maps), they neglect the continuous shape distributions at the pixel level. In this paper, we propose Continuous Surface Correspondence Learning (CSCL), a new shape embedding paradigm for cloth-changing ReID. CSCL establishes continuous correspondences between a 2D image plane and a canonical 3D body surface via pixel-to-vertex classification, which naturally aligns a person image to the surface of a 3D human model and simultaneously obtains pixel-wise surface embeddings. We further extract fine-grained shape features from the learned surface embeddings and then integrate them with global RGB features via a carefully designed cross-modality fusion module. The shape embedding paradigm based on 2D-3D correspondences remarkably enhances the model's global understanding of human body shape. To promote the study of ReID under clothing change, we construct 3D Dense Persons (DP3D), which is the first large-scale cloth-changing ReID dataset that provides densely annotated 2D-3D correspondences and a precise 3D mesh for each person image, while containing diverse cloth-changing cases over all four seasons. Experiments on both cloth-changing and cloth-consistent ReID benchmarks validate the effectiveness of our method. Our project page is located at <https://CSCL-CC.github.io>. [500]Computing methodologies Object identification [500]Computing methodologies Object recognition Exploring Shape Embedding for Cloth-Changing Person Re-Identification via 2D-3D Correspondences Yichong Lu January 14, 2024 =============================================================================================== § INTRODUCTION Person Re-Identification (Re-ID) aims to re-identify a specific person across disjoint cameras <cit.>. Most existing works <cit.> presuppose that the appearances of people remain consistent over time. In reality, people tend to change their outfits over a long duration and different people may share the same dressing sense.
Methods that rely excessively on clothing appearance fail to generalize to this long-term cloth-changing scenario. In recent years, plenty of efforts <cit.> have been made to handle the cloth-changing issue by learning discriminative cloth-agnostic identity representations.A small proportion of methods <cit.> attempt to decouple cloth-agnostic features directly from RGB images without multi-modal auxiliary information, which inevitably leads to the loss of crucial information in global features and results in a heavy reliance on the domain. The mainstream methods <cit.> typically adopt human parsing models to obtain coarse semantic cues to guide the extraction of biometric features, such as shape features. However, as shown in Figure <ref>(b), coarse semantic cues are insufficient to obtain detailed shape information of a specific person, as it only enables the estimation of body part labels but fails to model pixel-wise shape distributions within the parts. Several recent works <cit.> leverage dense pose estimation <cit.> to align the texture of body parts based on UV mapping. However, they do not further explore reliable shape representations for the ReID task. Additionally, these methods have a major defect in that they require partitioning the 3D model into charts, and the resulting discretized UV spaces prevent them from learning continuous correspondences over the entire body surface. As shown in Figure <ref>(c), the use of independent UV coordinate systems for each body part results in noticeable part seams in the estimated IUV maps. There are also some methods <cit.> directly estimating SMPL <cit.> shape parameters as 3D shape features. However, the SMPL shape parameter space is highly incompatible with the image feature space, making it challenging to effectively integrate features from these two modalities. In this paper, we propose a Continuous Surface Correspondence Learning (CSCL) framework, which represents a new shape embedding paradigm for cloth-changing ReID. CSCL pixel-wisely maps a person image to a continuous embedding space of the SMPL mesh surface through vertex classification. Essentially, learning continuous 2D-3D correspondences aligns a person image to the entire surface of a 3D human model, and simultaneously obtains a pixel-level continuous distribution of body shape on the canonical 3D surface. Even for different persons wearing the same clothes, there can be significant differences in their body shape distributions. Therefore, we further extract fine-grained discriminative shape features from the established correspondences, and integrate them with global RGB features via an optimized cross-modality fusion module based on the transformer <cit.>, which greatly compensates for the lost shape details in global RGB features. We incorporate a novel Latent Convolutional Projection (LCP) layer for feature projection. The LCP layer enhances the sharing and correlation among tokens via adding an additional latent embedding, which is the latent vector of an auto-encoder designed to reconstruct the token map. It is also noteworthy that the proposed framework generalizes well to the cloth-consistent cases, indicating the reliability of the learned shape features.However, there is currently no publicly available cloth-changing ReID dataset with ground-truth dense 2D-3D correspondences. 
To facilitate the research, we construct a large-scale cloth-changing ReID dataset named 3D Dense Persons (DP3D), which contains 39,100 person images of 413 different persons captured by 15 cameras over all four seasons. We annotated dense 2D-3D correspondences for each person image via a carefully designed annotation system, ensuring 80 to 125 annotations for each image.The main contributions of this work are summarized as follows: * We propose a new shape embedding paradigm for cloth-changing ReID that establishes pixel-wise and continuous correspondences between a 2D image plane and a canonical 3D human body surface. To the best of our knowledge, this is also the first work to explore global shape representations for cloth-changing ReID via 2D-3D correspondences.* We develop an optimized cross-modality fusion module to adaptively integrate shape features with global RGB features, where a novel Latent Convolutional Projection (LCP) layer is designed to perform feature projection.* We construct 3D Dense Persons (DP3D), which is the first large-scale cloth-changing ReID dataset with densely annotated 2D-3D correspondences and a corresponding 3D mesh for each person image, while containing highly diverse cloth-changing cases in real-world scenarios. * We demonstrate our proposed method is applicable to both cloth-changing and cloth-consistent situations, as shown by extensive results on four cloth-changing ReID datasets including DP3D and two general ReID datasets.§ RELATED WORKS In this section, we first review the literature on cloth-changing person re-identification and corresponding datasets, then introducing the research related to continuous surface embeddings in the context of 3D shape analysis. §.§ Cloth-Changing Person ReIDExisting cloth-changing ReID methods can be categorized into decoupling-based methods and auxiliary modality-based methods. Decoupling-based methods <cit.> aim to decouple cloth-agnostic features directly from RGB images without multi-modal auxiliary information. AFD-Net <cit.> disentangled identity and clothing features via generative adversarial learning. CAL <cit.> proposed to penalize the predictive power of the ReID model with respect to clothes via a clothes-based adversarial loss, while UCAD <cit.> enforced the identity and clothing features to be linearly independent in the feature space via an orthogonal loss.Auxiliary modality-based methods <cit.> are considered more robust since visual texture features can be filtered under the supervision of human semantics. FSAM <cit.> proposed to complement 2D shape representations obtained from human silhouettes for global features. MVSE <cit.> embedded multigranular visual semantic information into the model. Pixel Sampling <cit.> leveraged a human parsing model to recognize upper clothes and pants, and then randomly changed them by sampling pixels from other people, enforcing the model to automatically learn cloth-agnostic cues. DSA-ReID<cit.> and ASAG-Net<cit.> proposed to use dense human semantics to generate semantics-aligned images in the discretized DensePose UV space, while 3DSL <cit.> considered the low-dimensional SMPL shape parameters as 3D shape features, and directly fused them to global features. 
None of these methods consider establishing pixel-wise and continuous 2D-3D correspondences between image pixels and the entire 3D body surface, which effectively bridges the gap between 2D and 3D shape space.§.§ Cloth-Changing ReID DatasetsGeneral person ReID datasets<cit.> assume that the appearance of the same individual is consistent, which is often not the case in real-world scenarios. Models trained on these datasets rely excessively on clothing appearance, making it difficult for them to generalize well to long-term cloth-changing scenarios. In recent years, a few datasets were collected specifically for the cloth-changing setting. Celebrities <cit.> were obtained from the Internet, which consists of street snapshots of celebrities. PRCC <cit.> provides indoor cloth-changing person images with their corresponding contour sketches. COCAS <cit.> is a large-scale dataset that provides a variety of clothes templates for cloth-changing person ReID. LTCC <cit.> assumes that different people wear different clothes and assigns a unique clothing label to each person image in the dataset. VC-Clothes <cit.> is a large realistic synthetic dataset rendered by the GTA5 game engine. CSCC <cit.> considers different degrees of cloth-changing. NKUP <cit.> contains both indoor and outdoor person images with complex illumination conditions, while NKUP+ <cit.> has more diverse scenarios, perspectives, and appearances.§.§ Continuous Surface EmbeddingsContinuous Surface Embeddings (CSE) target at pixel-wisely learning an embedding of the corresponding 3D vertex from an RGB image <cit.>, which demonstrates strong human body representation capabilities. HumanGPS <cit.> employs contrastive learning to enhance CSE representations. BodyMap <cit.> introduced a coarse-to-fine learning scheme, establishing high-definition full-body continuous correspondences by refining coarse correspondences. SurfEmb <cit.> applied Continuous Surface Embeddings to the field of object pose estimation and learned correspondence distributions in a self-supervised fashion. § THE 3D DENSE PERSONS DATASETObtaining ground-truth 3D structure information for pedestrians is of substantial importance as it can address potential geometric ambiguities that may arise from relying solely on RGB modality. In this section, we introduce the 3D Dense Persons (DP3D), a large-scale cloth-changing ReID dataset that provides densely annotated 2D-3D correspondences and a corresponding 3D mesh for each person image, filling the gap in the field.§.§ Data Collection The raw videos we collected have high resolutions and cover a time span of one year. We selected a total of 15 cameras, with 5 of them having a resolution of 4K, 2 having a resolution of 2K, and the remainder being set to a resolution of 1080P. The use of high-resolution cameras ensures the recorded pedestrians to be as clear as possible, which is advantageous for the ReID task under clothing change. The shooting scenes encompass various outdoor locations, such as street scenes, park landscapes, construction sites, and parking lots. All pedestrians were captured by at least 2 cameras, with the majority being captured by 3 or more. 
We adopted the Mask R-CNN <cit.> framework to detect the bounding box of each person after framing.§.§ Annotation SystemDue to the dramatic variations in people's clothing styles over the course of a year, we first identified the volunteers and conducted a manual inspection to avoid misidentification, while assigning a camera ID label, a person ID label, and a clothing ID label to each person image. Then, as shown in Figure <ref>, we annotated dense correspondences via a carefully designed pipeline. In the first stage, we ran the universal model of Graphonomy <cit.> with 20 part labels to segment the images, then uniformly sampling 40 pixels across the entire human body region. We also utilized k-means clustering to obtain 5 to 10 centroid pixels for each part based on its size. Compared to DensePose <cit.>, our sampling method avoids seams between body parts and ensures a sufficient number of sampling points for smaller parts. However, since people may wear loose clothes, we manually filtered out those sampling pixels that did not fall within the human body regions underneath the clothes. For each pair of images belonging to the same person, we additionally selected 10 corresponding pixels for consistency learning, which correspond to the same 10 mesh vertices. In the second stage, as shown in Figure <ref> (e), we projected the SMPL mean template mesh from 6 predefined viewpoints to generate full-body images. When annotating a specific pixel, it was only necessary to choose the most suitable projected image, and its 2D coordinates were used to localize the corresponding 3D vertex. In cases certain pixels were challenging to determine from the projected images, we directly annotated the correspondences on the 3D mesh surface through rotation. It is worth noting that we did not annotate in a part-by-part manner, but rather adopted a global approach using full-body projected images for annotation, which ensured accurate annotations at the junctions of body parts. In the last stage, to obtain accurate SMPL parameters, we employed a modified SMPLity-X <cit.> to fit the SMPL model to the person images under the guidance of densely annotated correspondences. §.§ Statistics and ComparisonThe proposed DP3D dataset is characterized by its diverse scenes, multiple perspectives, large number of individuals, and long time span. It comprises 39,100 person images belonging to 413 different persons, which were captured over the course of a year (during four distinct seasons). Depending on its resolution, each person image has approximately 80 to 125 annotated correspondences, where 10 correspondences have mesh vertices shared among all images of the same person. We divided the images into a training set and a testing set, with each set containing approximately equal numbers of identities. For same-appearance images of a specific person, we randomly select one image per viewpoint to construct the query set, while the remaining images in the testing set form the gallery set. We present in Table <ref> a comparison between DP3D and existing cloth-changing ReID datasets. § METHODOLOGY In this section, we first provide an overview of our proposed framework in Section <ref>. Next, in Section <ref> and <ref>, we elaborate the learning scheme of continuous 2D-3D correspondences, as well as the design principles of the cross-modality fusion module, respectively. 
Subsequently, we provide a comprehensive description of the training losses in Section <ref>.§.§ Overview As shown in Figure <ref> (a), person images are input separately into the ResNet-50 <cit.> backbone and CNN embedding layers to extract global RGB features and continuous surface embeddings. For each foreground pixel, CSCL maps it to a continuous embedding space of the SMPL mesh surface under the supervision of geodesic distances. Subsequently, a shape extraction network with a ResNet-50 architecture is further employed to extract fine-grained shape features from the learned surface embeddings, while simultaneously mapping them to the same size as global RGB features. Following that, we adaptively integrate shape features with global RGB features via an improved cross-modality fusion module, where a novel Latent Convolutional Projection (LCP) layer is designed to perform feature projection. Cross-attention mechanism is then applied to aggregate features from the two distinct modalities, which are then added to the original features. After the fusion, we conduct Global Average Pooling (GAP), followed by two separate fully-connected classifiers, to obtain the final global RGB features and shape features. We also introduce a learnable class token for each of the two modalities, which exhibits strong cross-modality compatibility and also contributes to the ID loss. In the inference stage, the two class tokens are concatenated with global RGB features and shape features to construct the final identity feature.§.§ Establishing Continuous Correspondences Considering the huge domain gap between 2D person images and the 3D space perceived by human eyes, we believe that establishing continuous correspondences between image pixels and the entire 3D human body is of substantial importance, which bridges the gap between the 2D and 3D shape space and therefore benefit the understanding of global body shape.Given a person image I∈ℝ^H× W× 3 of height H and width W, we first extract the segmentation mask M of the foreground person. Then, the CNN embedding layers map the person image into continuous surface embeddings E∈ℝ^H× W× D, while preserving the spatial resolution of the image. For pixels within the foreground mask M, we employ geodesic distances on the 3D surface to supervise the learning of surface embeddings. More concretely, we scale the cross-entropy loss of pixel-to-vertex classification on the mesh surface using geodesic distances. This constraint is reasonable as it quantifies the deviation of vertex prediction on the 3D surface. Furthermore, as illustrated in Figure <ref> (b), we also conduct consistency learning for corresponding pixels in images that belong to the same person. Suppose we have two distinct images of the same person, donated as I_1, I_2, where foreground pixels p_1 and p_2 belong to image I_1, and pixel q belongs to image I_2. Both p_1 and q correspond to the same vertex v_1 on the mesh surface, while p_2 corresponds to vertex v_2. We first compute the cosine distance in the embedding space to measure the similarity between p_1 and q:d(p_1, q) = 1 - cos(E_1(p_1), E_2(q))where E_1 and E_2 denote surface embeddings of images I_1 and I_2. By minimizing the cosine distance d(p_1, q), the embedding vectors of two corresponding pixels are brought closer. However, during training, only considering the consistency of corresponding pixels may lead to all embeddings mapping to similar values. 
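For illustration, the cross-view correspondence term defined by the cosine distance above could be evaluated as in the following sketch (the tensor layout and all names are assumptions, not our implementation); the relative-affinity constraint that prevents the collapse just mentioned is introduced next.

# Illustrative sketch: surface embeddings E1, E2 have shape (D, H, W);
# pix1, pix2 are (N, 2) integer (row, col) coordinates of annotated pixels
# such that pix1[i] in image 1 and pix2[i] in image 2 share a mesh vertex.
import torch
import torch.nn.functional as F

def cosine_distance(e1, e2):
    # d(p, q) = 1 - cos(E1(p), E2(q)) as in the equation above.
    return 1.0 - F.cosine_similarity(e1, e2, dim=-1)

def correspondence_term(E1, E2, pix1, pix2):
    e1 = E1[:, pix1[:, 0], pix1[:, 1]].T          # (N, D) embeddings, view 1
    e2 = E2[:, pix2[:, 0], pix2[:, 1]].T          # (N, D) embeddings, view 2
    return cosine_distance(e1, e2).mean()

# Minimising this term alone can collapse all embeddings to similar values;
# the geodesic relative-affinity term described next counteracts that.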
Therefore, for different pixels p_1 and p_2 in the same person image, we keep their relative affinity by enforcing embedding distances to follow geodesic distances, i.e. minimizing | d(p_1, p_2) - s(g(v_1, v_2))|, where g(·, ·) calculates the geodesic distance between two mesh vertices and s(·) scales it to match the range of the cosine distance d(·, ·).Establishing 2D-3D correspondences allows for learning the continuous shape distributions on the 3D surface at the pixel level, i.e. Pr(v|I, p, p∈ M), where I denotes the person image, and M denotes the foreground mask. To further extract fine-grained shape features, we feed the learned embeddings into the shape extraction network with a ResNet-50 architecture, while mapping them to the same size as global RGB features. Note that the extracted shape features are insensitive to clothing appearance as texture features are already filtered out in the correspondence learning process.§.§ Cross-Modality Feature Fusion To adaptively integrate the shape features extracted from the established continuous correspondences with global RGB features, a cross-modality fusion module is designed. As discussed in CVT <cit.>, convolutional layers are renowned for their remarkable ability to capture intricate local spatial token structures, which allows the removal of positional embeddings from the transformer <cit.> framework. However, the utilization of fixed-size convolutional kernels hampers the effectiveness of capturing global positional correlations between non-adjacent tokens. To mitigate this issue, we propose a novel Latent Convolutional Projection (LCP) layer. It adds the same latent embedding to each token in the token map, which is the latent vector of a pretrained auto-encoder designed to reconstruct the token map. During the training of CSCL, only the encoder of the auto-encoder is preserved and fixed to ensure the universal nature of the latent embedding, whereas the decoder is disregarded. This design not only greatly enhances the correlation and sharing among different tokens, but also enables better adaptation to images with diverse backgrounds. The projection of an LCP layer can be formulated as follows:Q/K/V = Flatten(Conv2d(Reshape2D(F) + l))where Q/K/V represents the projected queries, keys, and values, F is the input token map,l represents the latent embedding, and Reshape2D denotes the operation to reshape the feature map F to a 2D token map. After separately passing global RGB features F^g∈ℝ^h× w× c and shape features F^s∈ℝ^h× w× c through two distinct LCP layers, the cross-attention mechanism is applied to adaptively integrate features from different modalities. We first take global RGB features as queries and shape features as keys/values, reshape the fused feature to match the size of F^g, and finally add it to F^g:F^g = F^g + Reshape3D(MHA(Q_g, K_s, V_s))where Reshape3D denotes the operation of reshaping a 2D token map to match the size of F^g, and MHA represents the multi-head attention. We also take shape features as queries and global RGB features as keys/values for identity modeling of shape features. F^s = F^s + Reshape3D(MHA(Q_s, K_g, V_g))In other words, we enable bidirectional access between global RGB features and shape features, which allows the model not only complements fine-grained cloth-agnostic shape knowledge for global RGB features F^g, but alsointegrates essential identity-related characteristics for shape features F^s to assist identity modeling. 
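To make the structure of this module concrete, the following condensed sketch shows one possible realisation of the LCP projection and the bidirectional cross-attention described above; tensor shapes, the kernel size, and the head count are assumptions rather than the exact configuration used in our experiments.

# Condensed sketch of the cross-modality fusion under stated assumptions.
import torch
import torch.nn as nn

class LCPLayer(nn.Module):
    """Latent Convolutional Projection: add a shared latent embedding to every
    token of the 2D token map, then convolve before flattening to a sequence."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Conv2d(dim, dim, kernel_size=3, padding=1)

    def forward(self, feat, latent):
        # feat: (B, C, h, w) token map; latent: (B, C) from the frozen encoder.
        x = feat + latent[:, :, None, None]        # broadcast latent to all tokens
        x = self.proj(x)
        return x.flatten(2).transpose(1, 2)        # (B, h*w, C) token sequence

class CrossModalityFusion(nn.Module):
    def __init__(self, dim, heads=8):
        super().__init__()
        self.lcp_g, self.lcp_s = LCPLayer(dim), LCPLayer(dim)
        self.attn_g = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_s = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, Fg, Fs, latent_g, latent_s):
        B, C, h, w = Fg.shape
        tg, ts = self.lcp_g(Fg, latent_g), self.lcp_s(Fs, latent_s)
        # RGB tokens query shape tokens, and vice versa, with residual adds.
        g2s, _ = self.attn_g(tg, ts, ts)
        s2g, _ = self.attn_s(ts, tg, tg)
        Fg = Fg + g2s.transpose(1, 2).reshape(B, C, h, w)
        Fs = Fs + s2g.transpose(1, 2).reshape(B, C, h, w)
        return Fg, Fs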
Additionally, we introduce learnable class tokens for each of the two modalities, which are also utilized to compute the ID loss. §.§ Loss Function CSE Losses. As discussed in Section <ref>, to mask out the background pixels, the foreground silhouette for each person image is retrieved, and a binary cross-entropy loss ℒ_sil is employed to penalize unsatisfactory silhouette predictions. Furthermore, we employ geodesic distances on the mesh surface to scale the per-pixel vertex classification loss, which penalizes the misclassified pixels based on the degree of deviation on the surface. The geodesic loss can be formulated as follows: ℒ_geo = -1/N ∑_p∈ I g(v_p, v̂_p) · log(p(v̂_p)), where N indicates the number of pixels with ground-truth annotations in image I, v_p and v̂_p represent the ground-truth and predicted mesh vertices corresponding to pixel p, and g(·, ·) calculates geodesic distances between two mesh vertices. For consistency learning of continuous surface embeddings, we design the following consistency loss ℒ_cst: ℒ_cst = 1/N_1 ∑_p∈ I_1, q∈ I_2 log(1+exp(d(p, q))) + 1/N_2 ∑_p_1, p_2∈ I log(1+exp(|d(p_1, p_2) - s(g(v_1, v_2))|)), where N_1 and N_2 indicate the number of annotated pairs, p and q are corresponding pixels in cross-view images, p_1 and p_2 stand for different pixels in the same image, d(·, ·) and g(·, ·) respectively denote the cosine distance in the embedding space and the geodesic distance on the surface, and s(·) represents the scale function. The first term of ℒ_cst ensures consistency between embeddings of cross-view corresponding pixels, while the second term enforces embedding distances to follow geodesic distances for different pixels in the same image, thus pushing apart their embeddings and avoiding the degradation cases that may occur during training. ReID Losses. The ReID losses employed in our framework consist of a cross-entropy loss (ID loss) for classification and a triplet loss <cit.> for similarity learning in the feature space. The final global RGB feature f^g, shape feature f^s, and two class tokens all contribute to the ID loss: ℒ_id = ℒ_id^g + ℒ_id^s + ℒ_id^cls, where ℒ_id^cls represents the summation of ID losses of the two class tokens. We introduce separate triplet losses for global RGB features and shape features to enhance their discriminative capability, which are combined to obtain the final triplet loss: ℒ_tri = ℒ_tri^g + ℒ_tri^s. Final Loss. The overall objective function of our proposed Continuous Surface Correspondence Learning (CSCL) framework comprises the aforementioned CSE losses and ReID losses, which can be formulated as follows: ℒ = ℒ_sil + λ_1(ℒ_geo + αℒ_cst) + λ_2ℒ_id + λ_3ℒ_tri, where λ_1, α, λ_2 and λ_3 are weights for balancing each term. § EXPERIMENTS §.§ Datasets and Protocols We conduct experiments on four existing cloth-changing ReID datasets (i.e. LTCC <cit.>, PRCC <cit.>, VC-Clothes <cit.> and DP3D). Furthermore, three different settings are involved in our experiment: (1) Standard Setting: the test set includes both same-appearance and cross-appearance samples; (2) Cloth-Changing Setting: the test set only includes cross-appearance samples; (3) Same-Clothes Setting: the test set only includes same-appearance samples. For LTCC and DP3D, we provide experimental results in the standard setting and cloth-changing setting, while for PRCC and VC-Clothes, results in the same-clothes setting and cloth-changing setting are reported. We additionally validate our method on two general ReID datasets (i.e.
Market-1501 <cit.> and DukeMTMC <cit.>), following their evaluation metrics. For evaluation, we adopt the mean average precision (mAP) and rank-1 accuracy to evaluate the effectiveness of ReID methods. We also utilize Geodesic Point Similarity (GPS) <cit.> scores to measure the quality of the established correspondences: GPS_I = 1/N ∑_p∈ I exp(-g(v_p, v̂_p)^2 / (2σ^2)), where I indicates a person image, N is the number of ground-truth correspondences, v_p and v̂_p denote the ground-truth vertex and the estimated vertex, g(·, ·) represents geodesic distances, and σ is a normalizing factor set to 0.255. When GPS scores exceed a certain threshold, the correspondences are considered as correct. Therefore, following the metric of BodyMap <cit.>, we report Average Precision (AP) and Average Recall (AR) based on GPS scores. §.§ Implementation Details For datasets without ground-truth dense correspondences, we fit the SMPL body model to the person images under the guidance of OpenPose <cit.> keypoint detections and foreground silhouettes. For each SMPL mesh vertex, there is a reprojected point on the 2D image plane, and the pixel closest to this point is utilized to establish the correspondence. If different vertices correspond to the same pixel, only the vertex closest to the camera is recorded. Based on the image resolution, we uniformly sampled 80 to 125 pseudo correspondences within the entire body region. All input images are resized to 256×128. A skip-connecting UNet <cit.> architecture pretrained on the DensePose-COCO dataset <cit.> is employed as embedding layers, while two distinct ResNet-50 backbones pretrained on ImageNet <cit.> with the last downsampling layer discarded are employed to extract global RGB features and shape features, respectively. In the training stage, the Adam optimizer <cit.> was utilized for optimization. We first trained the embedding layers for 50 epochs with a learning rate of 5×10^-5, and then fixed them to train the rest of the network for 100 epochs with a linear warm-up phase. The learning rate was increased from 1×10^-5 to 1×10^-4 in the first 5 epochs. Finally, we trained the network in an end-to-end manner for 40 epochs with a fixed learning rate of 1×10^-5. The embedding dimension D is set to 64. The values of λ_1, α, λ_2, λ_3 in Eq. <ref> are set to 0.3, 5.0, 1.0, and 0.8, respectively, and the margin parameter for the triplet loss is set to 0.3. §.§ Comparison with State-of-the-arts As shown in Table <ref>, we compare our proposed CSCL with seven SOTA cloth-changing methods (i.e. SE+CSED <cit.>, PSAM <cit.>, 3DSL <cit.>, UCAD <cit.>, MVSE <cit.>, M2NET <cit.> and CAL <cit.>) on LTCC, PRCC, VC-Clothes, and DP3D. To assess the feasibility of CSCL in cases without clothing change, we also choose four SOTA short-term methods (i.e. PCB <cit.>, HACNN <cit.>, MGN <cit.>, and Trans-ReID <cit.>) as competitors. The comparative results on Market-1501 and DukeMTMC are presented in Table <ref>. Based on the results in Table <ref> and Table <ref>, we have the following key observations: (1) In the cloth-changing setting, CSCL exceeds other competitors on PRCC, VC-Clothes, and DP3D by a large margin, achieving a rank-1 improvement of 4.9%/3.5%/9.6% and a mAP improvement of 6.8%/3.5%/10.9%. This is attributed to the powerful shape representation capability of the continuous correspondences. However, there is still a limitation to CSCL. Due to the poor quality of person images, the generated pseudo correspondences on LTCC are not reliable enough.
Despite this limitation, CSCL still achieves comparable results with the SOTA method MVSE on LTCC, indicating a certain tolerance for vertex position errors. (2) CSCL generalizes well to the general ReID datasets where appearance features dominate, achieving comparable performance with the SOTA short-term methods. This is because the distribution of global RGB features is well preserved in the fusion stage. §.§ Ablation StudiesIn this section, we carry out comprehensive experiments on PRCC and DP3D to validate: (1) the effectiveness of continuous surface embeddings, the cross-modality fusion module, and latent convolutional projection, which are abbreviated as CSE, CMF, and LCP respectively; (2) the influence of consistency loss on correspondence learning;(3) the impact of using different features for inference.Effectiveness of CSE, CMF, and LCP.From Table <ref>, we observe that introducing continuous surface embeddings to the model with a proper shape extraction network (Baseline→Model3) remarkably improves the performance of the baseline model, with a rank-1/mAP improvement of 20.1%/18.3% on PRCC, and a rank-1/mAP improvement of 11.5%/10.5% on DP3D. This demonstrates that establishing pixel-wise and continuous correspondences complement rich and essential identity-related shape features for global RGB features. However, there is no significant improvement when directly downsampling the learned correspondences without a shape extraction network, and we will further analyze this issue in Section <ref>. Moreover, the cross-modality fusion module also brings significant improvement, which indicates that features of the two modalities become more compatible via cross-modality fusion. Furthermore, by comparing different feature projection methods for generating Q/K/V, we observe that LCP shows a certain degree of improvement over linear projection and convolutional projection. This is attributed to the inclusion of latent embeddings, which greatly facilitates the sharing among tokens. Additionally, we evaluate the quality of established correspondences on different ReID datasets in Table <ref>. By combining the results from Table <ref> and Table <ref>, we can clearly observe a robust positive correlation between the quality of correspondences and the magnitude of performance improvement.Influence of consistency loss. As shown in Table <ref>, the removal of consistency loss ℒ_cst from the correspondence learning process leads to a 5% decrease in vertex classification accuracy on DP3D, which indicates that performing consistency learning is beneficial for establishing reliable correspondences. From Table <ref>, we also observe that removing ℒ_cst results in a decline in the overall performance of ReID, verifying the importance of consistency learning for CSE.Impact of using different features for inference. During inference, we select the model corresponding to Model 6 in Table <ref> to verify the effectiveness of different features. As shown in Table <ref>, while relying solely on shape features is not reliable enough, the shape features can enhance the performance of other features. Concatenating global RGB features, shape features, and two class tokens results in the best performance at inference time. §.§ Further Analysis Visualization of Continuous Surface Embeddings. We employ PCA to reduce the dimension of continuous surface embeddings from H× W× D to H× W × 3, where H and W denote the height and width of person images, D represents the embedding dimension. 
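For concreteness, this reduction can be sketched as follows (a minimal illustration; array and argument names are assumptions):

# Minimal sketch of the visualisation step: project per-pixel surface
# embeddings (H, W, D) onto their first three principal components and rescale
# to [0, 1] so that embedding distances map to colour differences.
import numpy as np
from sklearn.decomposition import PCA

def embeddings_to_rgb(embeddings, mask=None):
    H, W, D = embeddings.shape
    flat = embeddings.reshape(-1, D)
    rgb = PCA(n_components=3).fit_transform(flat)        # (H*W, 3)
    lo, hi = rgb.min(0), rgb.max(0)
    rgb = ((rgb - lo) / (hi - lo + 1e-8)).reshape(H, W, 3)
    if mask is not None:                                  # blank out background
        rgb[~mask] = 1.0
    return rgb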
Visualization results on DP3D are presented in Figure <ref>. Since the color differences reflect the feature distances in the embedding space, we can clearly observe that the established 2D-3D correspondences between image pixels and the entire body surface are relatively smooth. Different from discretized UV mappings such as DensePose, the smooth and continuous 2D-3D correspondences can provide richer and more reliable global knowledge of human shape for cloth-changing ReID. Identity modeling for shape features. Multi-modal auxiliary information itself is not sufficiently discriminative for the ReID task, making it necessary to conduct identity modeling. However, some existing CC-ReID methods, such as 3DSL, directly regulate multi-modal auxiliary features via ReID losses, which disrupts the distribution of the shape space. As shown in Table <ref>, directly using downsampling operations without a proper shape extraction network (Model3→Model2) leads to significant performance degradation. We believe that multi-modal auxiliary features should first be mapped to an intermediary feature space before identity modeling to alleviate the incompatibility between feature spaces of different tasks, which is beneficial for the fusion of shape and global RGB features. Future works. Current 3D shape-based ReID methods suffer from a huge domain gap between the RGB image space and the 3D shape space. Our work essentially aims to bridge the gap between these two spaces. Therefore, future work can consider transforming the surface embeddings into different forms of 3D shape features and assessing their potential benefits for CC-ReID. § CONCLUSION We have proposed a new shape embedding paradigm that establishes pixel-wise and continuous surface correspondences to mine fine-grained shape features for cloth-changing ReID. Moreover, an optimized cross-modality fusion module is designed to adaptively integrate shape features with global RGB features. To facilitate the research, we have constructed 3D Dense Persons (DP3D), which is the first cloth-changing ReID dataset with densely annotated 2D-3D correspondences and corresponding 3D meshes. Experiments on both cloth-changing and cloth-consistent ReID benchmarks demonstrate the robustness and superiority of our method. This work was supported in part by the Research Project of ZJU-League Research & Development Center, Zhejiang Lab under Grant 2019KD0AB01. | http://arxiv.org/abs/2310.18438v1 | {
"authors": [
"Yubin Wang",
"Huimin Yu",
"Yuming Yan",
"Shuyi Song",
"Biyang Liu",
"Yichong Lu"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20231027192630",
"title": "Exploring Shape Embedding for Cloth-Changing Person Re-Identification via 2D-3D Correspondences"
} |
Leiden Observatory, Leiden University, P.O. Box 9513, 2300 RA Leiden, the Netherlands Institute for Astronomy, University of Edinburgh, Royal Observatory, Blackford Hill, Edinburgh EH9 3HJ, UK Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK Kavli Institute for Cosmology, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK European Southern Observatory, Karl-Schwarzschild-Strasse 2, 85748 Garching bei München, Germany The Gaia mission has provided us with full astrometric solutions for over 1.5B sources. However, only the brightest 34M of those have radial velocity measurements. As a proof of concept, this paper aims to close that gap by obtaining radial velocity estimates from the low-resolution BP/RP spectra that Gaia now provides. These spectra are currently published for about 220M sources, with this number increasing to the full ∼2B sources with Data Release 4. To obtain the radial velocity measurements, we fit BP/RP spectra with models based on a grid of synthetic spectra, with which we obtain the posterior probability on the radial velocity for each object. Our measured velocities show systematic biases that depend mainly on the colours and magnitudes of stars. We correct for these effects by using external catalogues of radial velocity measurements. We present in this work a catalogue of about 6.4M sources with our most reliable radial velocity measurements and uncertainties <300 km s^-1 obtained from the BP/RP spectra. About 23% of these have no previous radial velocity measurement in RVS. Furthermore, we provide an extended catalogue containing all 125M sources for which we were able to obtain radial velocity measurements. The latter catalogue, however, also contains a fraction of measurements for which the reported radial velocities and uncertainties are inaccurate. Although typical uncertainties in the catalogue are significantly higher compared to those obtained with precision spectroscopy instruments, the number of potential sources to which this method can be applied is orders of magnitude higher than in any previous radial velocity catalogue. Further development of the analysis could therefore prove extremely valuable for our understanding of Galactic dynamics. Radial velocities from Gaia BP/RP spectra Sill Verberne 1Corresponding author: Sill [email protected] Sergey E. Koposov2,3,4 Elena Maria Rossi1 Tommaso Marchetti5 Konrad Kuijken1 Zephyr Penoyre1 Received XXX; accepted YYY ================================================================================================================================================================================================================================================================= § INTRODUCTION The Gaia mission <cit.> has been collecting data since 2014, with the primary scientific data products being the positions, proper motions and parallaxes of about 1.5B objects. In the most recent data release <cit.>, low-resolution spectra have additionally been published for ∼220M objects. These spectra are obtained from two low-resolution prism spectrographs; BP observing in the wavelength range of 330–680 nm and RP in the 640–1050 nm range, collectively referred to as XP spectra. Their primary purpose is to provide source classification and astrophysical information on the astrometric sources observed, e.g. stellar metallicity and line-of-sight extinction <cit.>. In order to measure stellar parameters, such as radial velocities and elemental abundances, Gaia is equipped with the Radial Velocity Spectrometer <cit.>.
However, RVS spectra will not be available for all sources, being limited to G_RVS≤16 in DR4 <cit.>. XP spectra, on the other hand, will be published in DR4 for all sources appearing in the astrometric catalogue, with a limiting magnitude of G≈20.7 <cit.>. The current magnitude limit of XP spectra in DR3 is G=17.65 <cit.>. XP spectra have been recognised as a rich source of astrophysical information, with efforts to measure [M/H] and [α/M], other stellar atmospheric parameters, and line-of-sight extinction, among others <cit.>. This work focuses on obtaining radial velocity measurements for the first time from the low-resolution XP spectra. Although the precision of any radial velocity measurement from XP spectra is expected to be lower than that of conventional spectroscopic surveys, the scientific content would still be very significant due to the number of objects (220M currently and ∼2B in DR4). This would constitute a factor ∼6.5 increase in the total number of sources with radial velocity measurements compared to the currently largest radial velocity catalogue: Gaia DR3, with ∼34M measurements. Additionally, the recently launched Euclid space telescope <cit.> also includes a low-resolution slitless spectrograph. Although the Euclid instrument operates in the near infrared, the spectral resolution is significantly higher[<https://sci.esa.int/web/euclid/-/euclid-nisp-instrument>] compared to XP spectra, though still considered low-resolution. The method presented in this paper could in principle also be applicable to those data. This work should be seen as a proof of concept, focusing on demonstrating our ability to obtain radial velocity information from XP spectra, rather than obtaining the most accurate and precise measurements possible for all sub-types of objects appearing in DR3 XP spectra. We therefore advise readers to carefully consider if our measurements are appropriate for their specific use case. This paper is structured as follows: Section <ref> discusses the properties and format of XP spectra, Section <ref> describes the analysis of the XP spectra to obtain radial velocity measurements, Section <ref> provides the post-calibration we apply to our radial velocity measurements using reference radial velocities, Section <ref> presents the random forest classifier we train for data quality assurance, Section <ref> provides validation of our calibrated results, Section <ref> describes our main and extended catalogues, which we publish together with this paper, in Section <ref> we discuss the science case of hypervelocity stars for our catalogue, in Section <ref> we discuss our results and provide prospects for DR4, and lastly in Section <ref> we give closing remarks. § BP/RP SPECTRA In the following section we discuss a number of important points on the calibration and representation of XP spectra in DR3. We do not attempt to be exhaustive, but rather refer to the literature where relevant. We will only consider XP spectra from sources with G<17.65, which is the main XP catalogue from Gaia, consisting of ∼219M sources. The few hundred thousand sources fainter than this limit mainly consist of white dwarfs and QSOs <cit.>. §.§ XP calibration The Gaia mission relies on self-calibration where possible. In the case of spectra, this is only possible to a limited degree. The calibration of multiple single measurements, taken possibly years apart using different CCDs and fields-of-view, into a single mean spectrum is described by <cit.> and is done using self-calibration.
The calibration onto a physical wavelength and flux scale is described by <cit.> and is performed using external measurements. At wavelengths below 400 nm and above 900 nm, the wavelength calibration is less accurate <cit.>. In addition, there is a systematic offset in the RP spectra <cit.>, which would lead to a systematic offset in radial velocity if not corrected for. In terms of flux calibration, the uncertainties are typically underestimated <cit.>. This underestimation is more pronounced for bright sources, but present in the majority of spectra and wavelength dependent. §.§ Basis function representation Gaia observes the same sources multiple times over a time span of years, using two different fields-of-view and an array of CCDs <cit.>. Small differences in the dispersion, wavelength coverage, instrument degradation, etc. between observations give the opportunity to extract more spectral information (i.e. higher resolution spectra) from the sources than would be possible given a single observation. Representing this information in flux-wavelength space (henceforth referred to as sampled spectra) would be highly inefficient due to the small size of the variations compared to the spectral resolution. For this reason, the Gaia consortium instead chose to represent the spectra as a series of coefficients for basis functions that describe the spectra. The BP and RP spectra are represented by 55 such spectral coefficients each, making for a total of 110 coefficients that describe every source. The first few coefficients contain most of the spectral information, since they are optimised for representing 'typical' sources <cit.>. Alongside the spectral coefficients, Gaia has published their uncertainties and correlation coefficients, allowing us to construct the full covariance matrix for the coefficients of each source. While providing more information than a sampled spectrum with the same number of samples, the representation in spectral coefficients also introduces challenges. Due to the individual basis functions being continuous functions over the entire wavelength range, the uncertainties on all spectral coefficients are correlated. This means that when converting the basis function representation into sampled flux-wavelength space, all data points are correlated. Random noise in the initial observations, in particular, causes random wiggles in the sampled XP spectra that could be mistaken for physical spectral features. § SPECTRAL ANALYSIS Now that we have discussed some of the important features of the XP spectra, we will first describe our spectral analysis of these data to obtain radial velocity measurements. In this paper we choose to convert the spectral coefficients to sampled spectra using GaiaXPy[<https://gaia-dpci.github.io/GaiaXPy-website/> <https://dx.doi.org/10.5281/zenodo.7566303>]. The conversion is performed through the design matrix as s = A·b, with s the mean sampled spectrum, A the design matrix provided by the Gaia consortium, and b the spectral coefficients. This gives us two spectra for each source, BP and RP, which we choose to sample on a grid of Δλ=2 nm. However, we do not use the entire spectra for our analysis: we select the wavelength range 400–500 nm for the BP spectra, while for the RP spectra we select 640–900 nm. Two example sampled XP spectra are shown in Fig. <ref>.
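Schematically, this conversion and the propagation of the coefficient covariance can be written as follows (an illustrative numpy sketch, not the GaiaXPy implementation; the array names and the wavelength masking helper are assumptions):

# Schematic sketch of s = A @ b and of propagating the coefficient covariance
# to the sampled spectrum; A is the (n_wavelengths x 55) design matrix of one
# instrument (BP or RP), b the 55 coefficients, and C_b their covariance built
# from the published uncertainties and correlation coefficients.
import numpy as np

def sample_spectrum(A, b, C_b):
    s = A @ b                      # mean sampled spectrum
    C_s = A @ C_b @ A.T            # covariance of the sampled fluxes
    return s, C_s

def restrict_range(wavelength, s, C_s, lo, hi):
    """Keep only the wavelength window used in the fit (e.g. 400-500 nm for BP)."""
    m = (wavelength >= lo) & (wavelength <= hi)
    return wavelength[m], s[m], C_s[np.ix_(m, m)]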
The BP-range is chosen to include the prominent Balmer lines, but exclude the region below 400 nm, where the wavelength calibration might be problematic, and the region above 500 nm, where for many stars the continuum would dominate the fit. The RP-range is much wider and includes most of the RP spectral range, except the region above 900 nm, where again the wavelength calibration might be problematic (see Sect. <ref>). Now that we have discussed how we handle the data, we describe in the following how we produce model spectra. We create the model M(T_eff, log g, [Fe/H], v_r, E(B-V)), where T_eff, log g, and [Fe/H] correspond to the effective temperature, surface gravity, and metallicity from the PHOENIX spectral library respectively <cit.>, v_r is the radial velocity, and E(B-V) the extinction along the line of sight. For the PHOENIX models, we only consider atmospheres with [α/H]=0 for computational reasons. We shift each of these models by a radial velocity with a step size of 30 km s^-1. The parameter ranges for radial velocity, T_eff, log g, and [Fe/H] are displayed in Table <ref>. The resulting models are convolved with the resolution of the externally calibrated XP spectra. This is done by interpolating the values from Table 1 in <cit.>. These interpolated values are then used at each wavelength in the sampled data to spread the flux in wavelength space using a Gaussian with standard deviation σ = FWHM/(2√(2ln2)). Lastly, we apply extinction on a source-to-source basis using the 2D extinction map from <cit.> and the re-calibration from <cit.>, where we assume all sources to be behind the extinction layer. The extinction law we use is from <cit.>. Having described how we prepare the XP spectra and create model spectra, we will now describe how we fit the spectral models to the XP spectra. Given an XP spectrum from Gaia, we can determine the likelihood of the data given a model, P(𝐃|𝐌), from P(𝐃|𝐌) = 1/√((2π)^k|𝐂|) exp(-1/2 [𝐃 - 𝐌]^⊤ 𝐂^-1 [𝐃 - 𝐌]), where D is the data, M the model, k the number of dimensions, and C the covariance matrix of the data <cit.>. This holds in the case where the uncertainties are Gaussian with correctly estimated variances, which is not strictly true in our case. Because we over-sample our sampled spectra with respect to the orthogonal bases, our covariance matrix in sampled space does not have full rank. To allow inversion of the covariance matrix in sampled space, we only consider the diagonal elements and thus discard correlation information. We calculate this likelihood for all models, after which we marginalise over the nuisance parameters T_eff, log g, and [Fe/H]. We use flat priors on our parameters between the extrema of the parameter grid shown in Table <ref>. T_eff is the exception; for computational reasons we instead only consider models differing by no more than 500 K from an initial guess on T_eff that we make based on the colour of a source and the extinction. The initial guess on T_eff is described in Appendix <ref>. During analysis of the results, we noticed that the performance of this initial guess is poor for E(B-V)≳0.5. For this reason we only report results for sources with E(B-V)<0.5, for which the method works well. In order to determine the radial velocity and corresponding uncertainty we assume a Gaussian posterior probability on the radial velocity. We fit a parabola to the log-posterior probability by selecting all radial velocity points with a log-posterior probability no less than 10 below the maximum log-posterior probability.
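A schematic sketch of this step is given below (variable names are illustrative; the adaptive threshold and failure handling described next are omitted):

# Given the marginalised log-posterior logp evaluated on the radial velocity
# grid v_grid (30 km/s steps), fit a parabola near the peak and read off the
# Gaussian mean and width.
import numpy as np

def fit_velocity(v_grid, logp, delta=10.0):
    sel = logp >= logp.max() - delta           # points within 10 of the maximum
    c2, c1, c0 = np.polyfit(v_grid[sel], logp[sel], deg=2)
    v_best = -c1 / (2.0 * c2)                  # vertex of the parabola
    sigma_v = np.sqrt(-1.0 / (2.0 * c2))       # curvature -> Gaussian width
    return v_best, sigma_v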
If fewer than 5 points in radial velocity space meet this requirement, we reduce the threshold by increments of 10 until we have more than 5 points. If the resulting fit peaks outside our radial velocity range of ±3000 km s^-1, we consider the fit to have failed and report no radial velocity. Now that we have laid down the foundation of our method, we will describe the skewness and goodness-of-fit measurements we use to evaluate the reliability of the radial velocity (uncertainty) measurements in Sect. <ref>. By fitting a parabola to the log-posterior probability, we are assuming symmetric uncertainties. To evaluate if this is a reasonable assumption, we determine the skewness of the posterior probability distribution using g_1 = [∑_i=1^n P_i (v_r_i - v̄_r)^3] / [∑_i=1^n P_i (v_r_i - v̄_r)^2]^(3/2), where P_i is the posterior probability per radial velocity bin, v_r_i the corresponding radial velocity, and v̄_r the mean radial velocity given by v̄_r = ∑_i=1^n v_r_i · P_i. This allows us to identify cases in which the posterior probability distribution is asymmetric and for which the symmetric uncertainties might not be reliable. In addition, we calculate the reduced χ^2 of our best fit, approximating the number of degrees of freedom as the number of data points we have (i.e. the number of flux versus wavelength points) minus the number of parameters we fit (4). §.§ Results of spectral analysis Here we will discuss the results from the spectral analysis presented above. As mentioned, the analysis was applied to all ∼219M XP sources with G<17.65. There are generally three outcomes possible for our spectral analysis. The first outcome is that we obtain a measurement for the radial velocity and corresponding uncertainty of a particular source. It is also possible that a fit failed, because the best fit radial velocity was outside our parameter range of ±3000 km s^-1, or because a column in the Gaia catalogue, such as the BP colour, required by our processing was not measured or was unavailable. The third outcome is that the initial guess for T_eff was outside our model range, in which case we do not perform a fit (see Table <ref>). We summarise the relevant numbers in Table <ref>.
Importantly, APOGEE and LAMOST contain sources fainter than the RVS magnitude cut, which is needed to calibrate and validate our results for faint sources. A summary of the relevant statistics from these catalogues is included in Table <ref>. The RVS radial velocities are measured with a different instrument and technique and can therefore be considered fully independent of the measurements we provide here. Cross-referencing LAMOST and APOGEE was done using the Gaia DR3 source identifiers provided for each measurement by both LAMOST and APOGEE. When more than a single measurement was available for either LAMOST or APOGEE, we took the median of all measurements. This provides us with a total number of 23 900 765 sources for which we have both an XP and a reference radial velocity measurement. We consider reference radial velocity measurements to be 'ground truth' and do not consider uncertainties in them. The reason is that our measurements will have uncertainties much larger than the typical uncertainties in any of the reference catalogues. In our calibration we only consider sources that have no neighbours in Gaia within 2 arcseconds. The reason for this is that these sources tend to have blended spectra, due to the size of the spectral extraction window for XP spectra. Our models are not set up to account for blending, which means that the radial velocity uncertainty and offset will be different for many of these sources. This further reduces the total number of sources used in calibration to 22 397 143. §.§ Calibration model To characterise the systematics in our radial velocity measurements we adopt a Gaussian mixture model with a likelihood given by ℒ ∝ ∏_i=1^N [ (1-f)/√(2πσ_i^2) · exp(-[v_r_ref - v_r_xp - b]^2/(2σ_i^2)) + f/√(2πσ_out^2) · exp(-[v_r_xp - y]^2/(2σ_out^2)) ], with f the outlier fraction, σ_i the radial velocity uncertainty, v_r_ref the reference radial velocity measurement, v_r_xp the XP radial velocity, b the systematic offset between the reference and XP radial velocities, σ_out the standard deviation of the outlier population, and y the offset of the outlier population. The uncertainties on radial velocity measurements (σ_i) are described by σ_i = √((aσ_m)^2 + c^2), with a the underestimation factor on the uncertainties, σ_m the uncertainty determined from the posterior probability, and c the noise floor parameter. We use bins in colour, apparent G magnitude, and extinction to fit for the free parameters f, a, b, c, σ_out, and y. We use 20 equally spaced bins in the range 5≤ G ≤ 17.65 and 40 bins in the colour range -0.3≤ BP-RP ≤5. For extinction we use bins with a width of ΔE(B-V)=0.1. Additionally, we use two bins in log g for sources with BP-RP ≥1.6875, with a divide at log g=3.5. We use the log g measurements from <cit.> for this purpose. The split in log g is used because we observe a high degree of systematic offset in the radial velocities we measure between dwarfs and giants at these colours. A description and justification for this split in log g is given in Appendix <ref>. We require at least 64 sources in a particular bin for fitting, with a maximum of 100 000, above which we select 100 000 sources from the sample at random for computational efficiency. We run the same calibration procedure using 10 equally spaced bins in the range 5≤ G ≤ 17.65 and 20 bins in the range -0.3≤ BP-RP ≤5, i.e. using bins twice the default size. This makes sure that we have calibration for most sources, even in sparsely populated areas of the colour-magnitude space. If there are still not enough sources in the colour-magnitude bin for a particular source, we do not apply calibration.
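For reference, the per-bin log-likelihood of this mixture model can be evaluated as in the sketch below (a simplified illustration with assumed variable names; in practice a log-sum-exp formulation is preferable for numerical stability):

# theta = (f, a, b, c, sigma_out, y); v_ref and v_xp are the reference and XP
# velocities in one colour-magnitude-extinction bin, and sig_m the raw
# posterior-based uncertainties.
import numpy as np

def log_likelihood(theta, v_ref, v_xp, sig_m):
    f, a, b, c, sigma_out, y = theta
    sig = np.sqrt((a * sig_m) ** 2 + c ** 2)                  # calibrated sigma_i
    good = (1.0 - f) / np.sqrt(2 * np.pi * sig ** 2) * \
        np.exp(-0.5 * ((v_ref - v_xp - b) / sig) ** 2)
    outl = f / np.sqrt(2 * np.pi * sigma_out ** 2) * \
        np.exp(-0.5 * ((v_xp - y) / sigma_out) ** 2)
    return np.sum(np.log(good + outl))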
§.§ Fitting of the model To estimate the parameters in our calibration model we use the Markov Chain Monte Carlo (MCMC) implementation in <cit.>. We use flat priors throughout, except for σ_out, for which we use a log-uniform prior. Our MCMC approach is as follows: we initialise 64 walkers that we first propagate for 1000 steps to explore the parameter space. To avoid walkers getting stuck in local minima, we reject walkers that finished with a log-likelihood more than 8.4 below the maximum log-likelihood over all walkers. The value 8.4 ensures that 99% of the walkers would remain if they traced a 6D Gaussian distribution. The next 1000 steps are performed with another 64 walkers, drawn randomly from the last 100 steps of the walkers that remained from the previous run. The last 800 steps of this run are used to compute the medians of the free parameters. §.§ Results of radial velocity calibration The number of calibrated radial velocities is 123 837 735 out of a total of 125 148 208. This means that calibration was performed for ∼99% of our radial velocity measurements from Sect. <ref>. Here we present the radial velocity calibration results for low extinction (<0.1) sources. Results for higher extinction sources are similar, unless specified otherwise. We show the calibrated uncertainties (see Eq. <ref>) on the radial velocities measured from the XP spectra in Fig. <ref>. In the region BP-RP≥1.6875, where we use two bins in , we take the source-number average for each bin between the giants and dwarfs. In general we can see that the lowest uncertainties are obtained for blue and red sources. We observe higher uncertainties for 1≲ ≲2, and the uncertainty generally increases for faint sources. In addition, the figure shows that uncertainties down to ∼100 are possible for red and blue sources. The reason why uncertainties are relatively high for 1≲ ≲2 is that there are few spectral features in the XP spectra of those sources. Without strong spectral features such as the Balmer lines and molecular absorption bands (see Fig. <ref>), fitting for a radial velocity becomes less precise. In Fig. <ref> we show the outlier fraction as a function of colour and magnitude. The outlier fraction tends to be low (smaller than 0.1), with a few regions containing notably more outliers. For higher extinction, the outlier fraction increases substantially, which we show in Fig. <ref> in the Appendix. Calibration has been applied to all sources when available. A histogram of the calibrated radial velocity uncertainties along with their cumulative distribution is included in Fig. <ref>. The median uncertainty on our calibrated radial velocity measurements is about 772 , but individual uncertainties extend all the way down to below 100 . Sources with low radial velocity uncertainty tend to be either blue (≲0.7) or red (≳2). § RANDOM FOREST CLASSIFIER Although we have a general indication of the reliability of individual measurements from the outlier fraction parameter determined from our calibration model, a quality parameter determined on a source-to-source basis is important to avoid unreliable measurements in the final catalogue. We do this by making use of a Random Forest Classifier (RFC). The definition of a bad measurement we use is Δv_r/σ_vr>3, with Δv_r being the difference between our calibrated measurement and the reference measurement and σ_vr being the corresponding calibrated uncertainty.
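As a brief aside on the log-likelihood cut of 8.4 used in the MCMC fitting above: for walkers tracing a 6-dimensional Gaussian, the log-likelihood drop corresponding to the 99th percentile of a chi-squared distribution with 6 degrees of freedom is half of that percentile, which can be verified with a one-line sketch (scipy assumed).

from scipy.stats import chi2

# 99% of walkers sampling a 6D Gaussian stay within this log-likelihood drop
delta_lnL = 0.5 * chi2.ppf(0.99, df=6)   # ~8.4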
For sources where we have access to reference radial velocities, we make use of 10-fold cross validation to predict the bad measurement probability. This makes sure the source for which we predict the bad measurement probability is never part of the training set. For the remaining sources, we train the RFC on all sources with reference radial velocity measurements. We take care to avoid information leaking from the training parameters to the radial velocities by excluding parameters like the sky coordinates and absorption. We use the RFC with 100 estimators <cit.> and the following parameters for training: * Reduced χ^2 of our best fit model* , , and of the best fit model for radial velocity* Extinction corrected colour of the source* Skewness of the radial velocity posterior (see Eq. <ref>). In addition, we use the following columns provided by the archive (see the documentation[<https://gea.esac.esa.int/archive/documentation/GDR3/Gaia_archive/chap_datamodel/sec_dm_main_source_catalogue/ssec_dm_gaia_source.html> <https://gea.esac.esa.int/archive/documentation/GDR3/Gaia_archive/chap_datamodel/sec_dm_spectroscopic_tables/ssec_dm_xp_summary.html#xp_summary->] for column descriptions): * * * * * * * * , where "xp" denotes that we use the corresponding column of both BP and RP. § VALIDATION We have already discussed the results from our spectral analysis and calibration in Sects. <ref> and <ref>. In addition, we have access to the quality parameter described in Sect. <ref>. Using these earlier results, we focus in this section on validating that our measurements indeed measure the radial velocity and on evaluating the reliability of our reported uncertainties. To demonstrate that we are indeed measuring radial velocities from the XP spectra, we include Fig. <ref>, in which we bin the XP radial velocities based on their reference measurements. The uncertainties on the individual bins are calculated as σ = 1/N√(π/2)( |v_r - v̄_r|^2 )^1/2, where the overline indicates that the mean is taken, v_r is the radial velocity, and N is the number of measurements in the bin. The measurements clearly follow the one-to-one relation with the reference radial velocity measurements, demonstrating that we indeed measure stellar radial velocities. To ensure we are not simply seeing a correlation between radial velocity and position in the colour-magnitude diagram picked up by our calibration model, we perform this analysis also for each colour-magnitude bin separately in Fig. <ref> in the appendix. To further demonstrate our ability to constrain radial velocities, we plot in Fig. <ref> the sky projection of the median XP radial velocities compared to RVS. The quality cuts we apply to our catalogue are * <300 * <0.2 * <0.1. In addition, we use the catalogue from <cit.> and select sources with ≤-1 to mainly select halo stars and ≤8. This allows us to see the dipole caused by the Solar motion in both the XP and RVS maps. The dipole disappears at the Galactic plane due to the sample being dominated by non-halo stars in that region. To evaluate whether our radial velocity uncertainties are accurate, we provide the histogram of the radial velocity difference of our measurements compared to the reference measurements over the uncertainty in Fig. <ref>. We can determine the standard deviation of this distribution by σ = 1.4826·median(|Δ v_r/σ_vr - median(Δ v_r/σ_vr)|), with Δ v_r/σ_vr the difference between our radial velocity and the reference one over the uncertainty.
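A minimal sketch of this robust scatter estimate, assuming dv is an array holding Δ v_r/σ_vr for the cross-matched sources:

import numpy as np

def robust_sigma(dv):
    # 1.4826 * MAD: robust estimate of the standard deviation of the
    # normalised residuals (equal to 1 for well-calibrated Gaussian errors)
    return 1.4826 * np.median(np.abs(dv - np.median(dv)))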
The standard deviation comes out to about 1.03, which means that our reported uncertainties are typically accurate to a few percent. § CATALOGUESWe publish two catalogues along with this paper. The Main catalogue is the catalogue we recommend for the general user. It includes only relatively precise measurements with low chance of being erroneous. For completion we also publish the Extended catalogue, which includes all measurements we have obtained. The catalogues are available through an online table at <https://doi.org/10.5281/zenodo.10043238>. The columns included are described in Table <ref>.§.§ Main catalogueTo ensure we only publish relatively high quality measurements in our Main catalogue, we apply the following selections: * < 0.5*< 300*<0.2*<0.1.The Main catalogue contains 6 367 355 sources that pass the quality cuts. About 23% of these sources have no previous measurement in RVS, by far the biggest catalogue in our magnitude range. This means the Main catalogue contains relatively accurate and precise radial velocity measurements for about 1.5M sources that have no previous measurement available. In Fig. <ref> we show the colour-magnitude density of our Main catalogue.§.§ Extended catalogueFor completion we also provide our entire catalogue, without any quality cuts, which we refer to as our Extended catalogue. The only exception is that we still only publish sources with <0.5, since we deem most higher extinction measurements to be unreliable. To assist the user in making use of this catalogue, we provide additional parameters for all sources alongside those provided for the main catalogue, which is a subset of the Extended catalogue. In the case of a star occupying a point in parameter space with insufficient reference measurements to perform calibration, we still report our XP radial velocity measurement, only without any calibration performed. In those cases we report the , , , andparameters as NaN values. In Fig. <ref> we show the colour-magnitude diagram of the sources appearing in our Extended catalogue.Since there are many more caveats with this data set compared to the Main catalogue, we provide the user with theparameter that is provided as a bitmask. If one of the following conditions is met, the corresponding bit is set to 1. * No calibration applied (0001)* Neighbour in within 2 arcsec (0010)*> 0.2 (0100)*> 0.1 (1000)§ FINDING HYPERVELOCITY STARS Having presented our Main catalogue, we will now apply it to the science case of hypervelocity stars (HVSs). HVSs can travel with velocities well in excess of 1000 <cit.>, making them much faster than stars belonging to other populations. These stars are ejected from the Galactic Centre following a dynamical encounter with our central massive black hole, Sgr A* <cit.>. Their identification has proven difficult with only a few dozen promising candidates <cit.> and a single star that can be unambiguously traced back to the centre of our Galaxy <cit.>. Our new catalogue of radial velocities can facilitate blind searches for additional HVSs, helping to unravel the dynamics and properties of stars in the centre of our Galaxy as well as providing valuable information about the Galactic potential <cit.>. For the purpose of searching for HVSs it is of interest to determine if we can still obtain reliable radial velocity measurements for extremely high velocity stars. 
To date, S5-HVS1 is the fastest unbound star known in our Galaxy, with a total velocity in the Galactic frame of 1755±50 and a heliocentric radial velocity of 1017±2.7 <cit.>. The calibrated radial velocity we measure is 799± 273 , which is consistent with the reference radial velocity measurement of S5-HVS1 within ∼0.8σ. Theparameter from the RFC is 0.02 for S5-HVS1, indicating a reliable measurement. This establishes that our results are still accurate for extremely high radial velocity sources. To evaluate the general effectiveness of the selection of high radial velocity star candidates from our Main catalogue, we produce Fig. <ref>.In the figure we only select those stars from our Main catalogue whose 3σ lower limit on v_r is at least 300 , i.e., v_r>300+3·σ_vr and plot those for which reference radial velocity measurements are available. Although the distribution of our selection still peaks at 0 , we can see a very significant over-density of high radial velocity sources. Most of the sources in this selection do not have reference radial velocity measurements and, as Fig. <ref> shows, the majority of them will not have high radial velocities. However, the selection of HVS candidates for follow-up radial velocity surveys can be viable. Only 3175 sources out of our Main catalogue of 6.4M sources pass the selection of v_r>300+3·σ_vr . The number of candidates could be further reduced by using e.g. astrometric information to constrain the orbits. Follow-up observations will be proposed for the most promising of these HVS candidates to precisely measure their radial velocities. § DISCUSSION Despite the challenges, we have shown that radial velocities can be obtained from XP measurements to a precision of better than ∼300 for stars as faint as G=17.65. Sect. <ref> discusses possible improvements to the methods presented in this paper, in Sect. <ref> we provide prospects for DR4 and the improvements that we might expect with its release regarding radial velocities from XP spectra, and lastly in Sect. <ref> we discuss science cases for XP radial velocities in DR4. §.§ Improvements to the method Our current approach is only viable for low to intermediate extinction sources, due to our implementation of an initial guess on . This approximation breaks down for high extinction sources as mentioned in Sect. <ref>. Practically, this means that our results are not reliable for sources with ≳ 0.5 and we do not report our results for those sources. The issue can be mitigated by e.g., using a larger range in during the fitting procedure for high extinction sources, or by fitting every model for all sources. Additionally, fitting for extinction rather than relying on a 2D extinction map would allow for more accurate measurements, because for individual sources the 2D extinction map is only an estimate of the actual line-of-sight extinction. Including extinction in the fitting procedure is possible, but is also computationally very expensive, which is why we opted to use the 2D map instead. The analysis could be further improved by choosing the fitting wavelength range on a source-to-source basis: practically one would chose for each source the wavelength regions that hold the most spectral information. Doing this would improve the precision of the radial velocity measurements. Here, we used instead the same wavelength ranges throughout.Alternatively to the modelling presented in this work, one could forward model the spectral coefficients directly. 
Provided that the design matrix and the model of the instrument are accurate, this should give more precise results. Improving upon the method is required if the goal is to obtain reliable radial velocity measurements for an unprecedentedly large set of sources. Even though we started out with XP spectra for about 220M sources, the Main catalogue only includes around 6.4M radial velocities (or about 3% of XP sources), with an additional ∼119M in the Extended catalogue. Understanding and correcting for systematic effects remains the most challenging aspect. §.§ Data Release 4 In DR4, XP spectra will be published for about 2B sources, in addition to individual epoch spectra of said sources. Since this is orders of magnitude more than any current radial velocity catalogue, the potential scientific return on a well optimised method of radial velocity analysis would be very high. It is unknown how large the improvement in radial velocity accuracy and precision will be from DR3 to DR4 using the methods presented here. The reason is that systematics are a very large factor in the radial velocity uncertainty. We might consider the noise floor in Eq. <ref> to be the intrinsic systematic uncertainty caused by imperfect calibration of XP spectra in DR3. If we assume that these systematics are resolved in DR4, we can provide an outlook for the performance of our method when applied to DR4. When we ignore the noise floor, the number of sources with radial velocity uncertainties <300 in DR3 approximately doubles to ∼17M. In addition, there would be about 1M sources with radial velocity uncertainties <100 . The smallest uncertainties that might be achieved are expected to be on the order of 50 . Although DR4 will mostly include fainter sources than the current limit of G<17.65, the S/N for a given magnitude will also improve. DR4 will provide XP spectra for about 9 times as many sources as DR3. A rough approximation of the final number of sources with a particular quality in DR4 is thus 9 times the number in DR3. This would imply that the XP spectra in DR4 could provide us with ∼153M and ∼8M measurements with uncertainties better than 300 and 100 respectively. Without systematic uncertainty due to the noise floor, these stars would be mainly red (≳2) and blue (≲0.7) in colour. §.§ Science cases in Data Release 4 Having discussed improvements to both the methods and data with the next data release of , we will now look at prospects for two specific science cases in DR4: dark companions and HVSs. Dark companions refer to binary systems in which one of the components emits little to no light in the photometric band used to observe them. These dark companions, such as black holes, can be identified from low-resolution spectra if enough epochs are available over a sufficient time span. The photocentre of BH1, for instance, has a radial velocity amplitude of 136 <cit.>, far larger than the typical uncertainty in RVS of only a few <cit.>. With the release of epoch XP spectra in DR4, searches for dark companions will become possible in the full catalogue of ∼2B sources. Compared to the astrometric time-series, radial velocities have the advantage of being distance independent, thus allowing for a larger search volume. Also, in comparison to RVS, the XP radial velocities have the advantage of being deeper and therefore covering a larger volume. RVS will have a limiting magnitude of G_RVS∼16 in DR4, compared to the limiting magnitude of G∼20.7 for the XP spectra.
We assume the two photometric bands to be similar[<https://www.cosmos.esa.int/web/gaia/dr3-passbands>] and approximate the magnitude difference as 4.7. From the magnitude difference we can calculate the volume ratio as V_XP/V_RVS = 10^0.6·Δ m≈ 660, with V_XP the volume covered by XP spectra, V_RVS the volume covered by RVS radial velocities, and Δ m the difference in limiting magnitude. The effective volume covered by XP spectra is thus about 660 times as large as that covered by RVS radial velocities. Depending on the final precision and accuracy that can be achieved, dedicated higher resolution observations might be required to confirm systems with possible dark companions identified from XP radial velocities. In addition to finding dark companions, DR4 XP radial velocities could support the search for HVSs. Because of the high intrinsic velocities of these stars, large uncertainties are less problematic. As demonstrated in Sect. <ref>, the contamination of a selection of extremely high XP radial velocity sources is substantial in our Main catalogue. With an improved analysis as suggested in Sect. <ref>, in combination with a reduction in systematics that we expect in DR4, the contamination will decrease. This will allow for more effective follow-up campaigns to identify new HVSs. As mentioned above, the advantage of using XP spectra is that the effective volume is much larger than that of RVS. For both dark companions and HVSs, XP radial velocities will be most effective at identifying them among red (≳2) and blue (≲0.7) sources, since these sources have the lowest uncertainties in XP radial velocity. This is not expected to change from DR3 to DR4, since it is inherent to the radial velocity information contained within the XP spectra. § CONCLUSION As a proof of concept, we have clearly demonstrated that XP spectra can be used to measure radial velocities. In this work we publish the Main catalogue containing reliable and precise radial velocity measurements for about 6.4M sources, 23% of which have no previous radial velocity measurements in . In addition, we publish the Extended catalogue containing all ∼125M sources for which we have obtained a radial velocity measurement. This constitutes ∼84% of sources with XP spectra and <0.5. The Extended catalogue, however, contains a significant number of unreliable measurements and should therefore only be used with caution. In general, sources with ≳2 and ≲0.7 tend to give the most precise radial velocity measurements in our catalogue, down to uncertainties of ∼100 . In the future, we expect the most precise radial velocity measurements from XP spectra to have uncertainties on the order of 50 . Critically, this work has demonstrated the potential of measuring radial velocities for over 10^9 sources in DR4 using XP spectra. This would constitute an orders-of-magnitude increase compared to the largest current catalogue. Before then, the methods presented here should be further improved to fully exploit the scientific content available to us. The authors would like to thank the attendees at the XPloration workshop for their input and enthusiasm. Special thanks go to Francesca De Angeli, Anthony Brown, and Vasily Belokurov for their support, helpful insight, and discussions. We would also like to thank Anthony Brown for his feedback on a first draft of this manuscript. EMR acknowledges support from European Research Council (ERC) grant number: 101002511/project acronym: VEGA_P. TM acknowledges a European Southern Observatory (ESO) fellowship.
This work has made use of data from the European Space Agency (ESA) mission Gaia (<https://www.cosmos.esa.int/gaia>), processed by the Gaia Data Processing and Analysis Consortium (DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. This project was developed in part at the 2023 XPloration, hosted by the Institute of Astronomy, Cambridge University. This paper made use of the Whole Sky Database (wsdb) created and maintained by Sergey Koposov at the Institute of Astronomy, Cambridge with financial support from the Science & Technology Facilities Council (STFC) and the European Research Council (ERC). This work was performed using the ALICE compute resources provided by Leiden University. This research or product makes use of public auxiliary data provided by ESA/Gaia/DPAC/CU5 and prepared by Carine Babusiaux. Software:<cit.>,<cit.>,<cit.>,<cit.>,<cit.>,<cit.>,<cit.>, ,<cit.>, [<https://extinction.readthedocs.io/en/latest/>],<cit.>,<cit.>,<cit.>. § INITIAL GUESS For computational efficiency, we make an initial guess on for each source and only consider models within 500K of this initial guess, as described in Sect. <ref>. We base the initial guess on the colour and the 2D extinction along the line of sight. First we correct the colour of the source for extinction using the extinction law provided by [<https://www.cosmos.esa.int/web/gaia/edr3-extinction-law>]. This is only an approximation, since we do not know the intrinsic colour of a source affected by extinction and use the observed colour instead. By analysing test sources over the entire grid range, we fit an empirical exponential to the relation between and . Only for colours ≲0 did we find a turn-off from the exponential relation, for which we fit a linear function. The relation is given by (K) = 6721·exp(-0.95· c) + 2617 if c > 0.01, and -18796· c + 9431 if c ≤ 0.01, where c denotes the extinction corrected colour. If the initial guess for is outside the temperature range of our models (see Table <ref>) we do not attempt to fit the source and no radial velocity is obtained. § SYSTEMATICS For red sources we observe strong systematics in the radial velocity we measure, based on the surface gravity of the stars. We demonstrate this in Fig. <ref>, where we use the surface gravity measurements from <cit.> to separate dwarfs from giants. To investigate the origin of this systematic offset we used and measurements from APOGEE in comparison to our best-fit values. In Fig. <ref> we show how the radial velocity offset relates to errors in the parameter estimation for dwarfs. We can see that dwarfs tend to get assigned radial velocities that are too high. This also varies as a function of Δ and Δ . In particular, if the of our best fit model labels a dwarf as a giant, a positive offset is introduced, which can be seen from the colour gradient towards negative Δ. To make this figure and have sufficient sources, we used a much larger range of colour and magnitude than we do for single bins in the calibration. The spread in offsets for individual bins is smaller and therefore less problematic. Fig. <ref> shows the same, but now only for giants. In general we can see that the offset tends to be much lower for giants. It is possible that it is caused by the basis-function representation in , which undergoes optimisation and might lead to systematic differences in the translation of giant and dwarf spectra.
Fortunately, we can effectively mitigate the effects of this bias, whatever its origin. To evaluate our treatment of the observed offset described in Sect. <ref> we recreate Fig. <ref> for our calibrated sample, which we show in Fig. <ref>.The figure clearly demonstrates that the offset between dwarfs and giants for very red sources has been effectively removed. Colour dependent systematics are still visible in the offset, however, particularly around ∼1.5. This effect is explained in Sect. <ref> and is related to the .§ FURTHER VALIDATIONTo demonstrate that the calibration we perform is not introducing an apparent radial velocity sensitivity, we include Fig. <ref>.This figure gives the median XP radial velocity for bins in reference radial velocity within the Main catalogue, where each curve is constructed using stars within one colour-magnitude diagram bin. The one-to-one slope demonstrates that within individual colour-magnitude bins we are sensitive to radial velocity. This excludes the possibility that a correlation between colour-magnitude and radial velocity is introducing an apparent radial velocity sensitivity. § MARKOV CHAIN MONTE CARLO EXAMPLES AND RESULTS In Fig. <ref> we show the outlier fraction of the model described in Sect. <ref>, Eq. <ref> as a function of colour and magnitude for the higher bins, not shown in the main text.To create more insight into what our calibration model does, we show here the MCMC results for a number of bins. We choose to show the 11.9575<G<12.59 magnitude bin for three different colour bins indicated in Fig. <ref>. Not for all bins does the method work equally well, which affects the calibration. If we look at the middle and right panels, we see that the overall distribution is skewed. Correcting for the offset in that sample will still result in a median that is significantly higher or lower than 0, because of the skew. This is the residual effect we see in Fig. <ref>. Part of this can be explained by effects that are still present in this colour region. We chose not to attempt to correct for these effects, because for these intermediate colour sources, there is no clear split in offset between dwarfs and giants. Instead a more elaborate strategy would have to be employed to correct for the remaining systematics. | http://arxiv.org/abs/2310.18101v1 | {
"authors": [
"Sill Verberne",
"Sergey E. Koposov",
"Elena M. Rossi",
"Tommaso Marchetti",
"Konrad Kuijken",
"Zephyr Penoyre"
],
"categories": [
"astro-ph.GA",
"astro-ph.IM",
"astro-ph.SR"
],
"primary_category": "astro-ph.GA",
"published": "20231027123712",
"title": "Radial velocities from Gaia BP/RP spectra"
} |
The knowledge gradient (KG) algorithm is a popular policy for the best arm identification (BAI) problem. It is built on the simple idea of always choosing the measurement that yields the greatest expected one-step improvement in the estimate of the best mean of the arms. In this research, we show that this policy has limitations that render the algorithm not asymptotically optimal. We then provide a remedy for it, by following the manner of one-step look ahead of KG, but instead choosing the measurement that yields the greatest one-step improvement in the probability of selecting the best arm. The new policy is called improved knowledge gradient (iKG). iKG can be shown to be asymptotically optimal. In addition, we show that compared to KG, it is easier to extend iKG to variant problems of BAI, with the ϵ-good arm identification and feasible arm identification as two examples. The superior performance of iKG on these problems is further demonstrated using numerical examples. § INTRODUCTION The best arm identification (BAI) problem is a sequential decision problem where in each stage, the agent pulls one out of k given arms and observes a noisy sample of the chosen arm. At the end of the sampling stage, the agent needs to select the arm that is believed to be the best according to the samples. In this research, we let the best arm be the one with the largest mean. BAI is a useful abstraction of issues faced in many practical settings <cit.> and has been widely studied in the machine learning community <cit.>. Since in practical problems the target arm(s) to be identified are not necessarily the best arm, some variant models of BAI have also been proposed in the literature, e.g., top-m arm identification <cit.>, Pareto front identification <cit.>, ϵ-good arm identification <cit.>, feasible arm identification <cit.>, etc. In this research, we focus on the fixed-budget BAI, in which the total number of samples (budget) is fixed and known by the agent. The goal is to correctly identify the best arm when the budget is used up. To solve this problem, many methods have been proposed, e.g., successive rejects (SR) <cit.>, expected improvement (EI) <cit.>, top-two sampling <cit.>, knowledge gradient (KG) <cit.>, optimal computing budget allocation (OCBA) <cit.>, etc. Among these methods, KG has been prevalent. It was first proposed in <cit.> and further analyzed in <cit.>. It is built on the simple idea of always pulling the arm that yields the greatest expected one-step improvement in the estimate of the best mean of the arms. This improvement measure is analytical, making the algorithm easy to implement. KG often offers reasonable empirical performance and has been successfully applied in a number of real applications <cit.>. However, we observe that this definition of KG has limitations that cause the algorithm to not be asymptotically optimal. Here, by not being asymptotically optimal, we mean that the KG algorithm is not rate optimal, in the sense that the probability of the best arm being falsely selected based on the posterior means of the k arms does not converge to zero at the fastest possible rate. This results from KG allocating too few samples to the best arm and excessive samples to the remaining arms. Note that Frazier et al.
<cit.> claimed that KG is “asymptotically optimal”, but in their context, “asymptotically optimal” is consistent, i.e., all the arms will be infinitely sampled as the round n→∞, so that the best arm will be correctly selected eventually. This is a relatively weak result for BAI algorithms (the simple equal allocation is also consistent). In this paper, asymptotically optimal refers to rate optimal.Contributions. We propose a new policy that can overcome this limitation of KG. The new policy follows the manner of one-step look ahead of KG, but pulls the arm that yields the greatest one-step improvement in the probability of selecting the best arm. We call it improved knowledge gradient (iKG) and show that it is asymptotically optimal. This policy is originated from the thought of looking at whether the best arm has been selected at the end of sampling, instead of looking at the extent that the mean of the selected arm has been maximized. Although both ways can identify the best arm, it turns out that the algorithms developed from them are significantly different in the rates of posterior convergence. Another advantage of iKG over KG is that iKG is more general and can be more easily extended to variant problems of BAI. We use ϵ-good arm identification and feasible arm identification as examples, develop algorithms for them using the idea of iKG and establish asymptotic optimality for the algorithms.This paper is conceptually similar to <cit.> which improves the EI algorithm for BAI. However, for EI, sampling ratios of any two arms in the non-best set are already asymptotically optimal. One only needs to introduce a parameter β to balance the probabilities of sampling the best arm and the non-best set without changing the sampling policy within the non-best set to further improve EI. For KG, sampling ratios are not asymptotically optimal for any two out of the k arms. It requires a fundamental change on the sampling policy that influences the sampling rates of all the arms to improve KG. Moreover, the improved rate of posterior convergence of EI in <cit.> still depends on β which is not necessarily optimal, while we can show that this rate of iKG is optimal.§ KNOWLEDGE GRADIENT AND ITS LIMITATIONSIn this section, we review KG and discuss its limitations. Suppose there are k arms in BAI. In each round t, the agent chooses any arm i to pull and obtains a noisy sample X_t+1,i. After n rounds, the agent needs to select an arm that he/she believes to be the best. Under the framework of the KG algorithm, X_t+1,i's are assumed to be independent across different rounds t and arms i and following the normal distribution 𝒩(μ_i,σ_i^2) with unknown means μ_i and known variances σ_i^2. The best arm is assumed to be unique. Without loss of generality, let μ_⟨ 1 ⟩>μ_⟨ 2 ⟩≥…≥μ_⟨ k ⟩, where ⟨ i ⟩ indicates the arm with i-th largest mean.The KG algorithm can be derived from a dynamic programming (DP) formulation of BAI. The state space 𝕊 consists of all the possible posterior means and variances of the arms, denoted as 𝕊≜ℝ^k×(0,∞)^k. State S_t in round t can be written as S_t=(μ_t,1,μ_t,2,…,μ_t,k,σ_t,1^2,σ_t,2^2,…,σ_t,k^2)^⊤. In the Bayesian model, the unknown mean μ_i is treated as random and let θ_i be the random variable following its posterior distribution. We adopt normal distribution priors 𝒩(μ_0,i,σ_0,i^2).With samples of the arms, we can compute their posterior distributions, which are still normal 𝒩(μ_t,i, σ_t,i^2) in round t by conjugacy. 
The posterior mean and variance of arm i areμ_t+1,i={ σ_t,i^-2μ_t,i+σ_i^-2X_t+1,i/σ_t,i^-2+σ_i^-2 I_t=i,μ_t,i I_t≠ i, . σ_t+1,i^2={ 1/σ_t,i^-2+σ_i^-2 I_t=i,σ_t,i^2 I_t≠ i. .In this paper, we adopt a non-informative prior for each arm i∈𝔸, i.e., μ_0,i=0 and σ_0,i=∞. Denote the action space as 𝔸≜{1,2,…,k} and transition function as 𝒯≜𝕊×𝔸×𝕊→𝕊. Suppose θ_t,i is a random variable following the posterior distribution 𝒩(μ_t,i,σ_i^2) of arm i. Then, the state transition can be written as S_t+1=𝒯(S_t, i, θ_t,i). Let π be the sampling policy that guides the agent to pull arm I_t in round t and Π be the set of sampling policies π=(I_0,I_1,…,I_n-1) adapted to the filtration I_0, X_1,I_0, …, I_t-1, X_t,I_t-1. After n rounds, the estimated best arm I_n^* is selected and a terminal reward v_n(S_n) is received. We can write our objective assup_π∈Π𝔼_πv_n(S_n).The DP principle implies that the value function in round 0≤ t< n can be computed recursively byv_t(S)≜max_i∈𝔸𝔼[v_t+1(𝒯(S,i,θ_t,i))], S∈𝕊.We define the Q-factors asQ_t(S,i)≜𝔼[v_t+1(𝒯(S,i,θ_t,i))], S∈𝕊,and the DP principle tells us that any policy satisfyingI_t(S)∈_i∈𝔸Q_t(S,i), S∈𝕊is optimal. However, the optimal policy is basically intractable unless for problems with very small scales, known as the “curse of dimensionality”.On the other hand, note that except the terminal reward v_n(S_n), this problem has no rewards in the other rounds, so we can restructure v_n(S_n) as a telescoping sequencev_n(S_n)=[v_n(S_n)-v_n(S_n-1)]+…+[v_n(S_t+1)-v_n(S_t)]+v_n(S_t).Thus, v_n(S_n) can be treated as the cumulation of multiple one-step improvements v_n(S_l)-v_n(S_l-1), l=t+1,…, n. A class of one-step look ahead algorithms iteratively pull the arm that maximizes the expectation of the one-step improvement on the value function𝔼[v_n(𝒯(S_t,i,θ_t,i))-v_n(S_t)].These algorithms are not optimal in general unless there is only one round left, i.e., n=t+1.The KG algorithm falls in this class. It sets the terminal reward as v_n(S_n)=μ_I_n^*. With this reward, the one-step improvement in (<ref>) becomesKG_t,i=𝔼[max{𝒯(μ_t,i,i,θ_t,i), max_i'≠ iμ_t,i'}-max_i∈𝔸μ_t,i],and in each round, the KG algorithm pulls the arm I_t(S_t)∈_i∈𝔸KG_t,i.We next characterize for the KG algorithm the rate of posterior convergence of 1-ℙ{I_n^*=I^*}, the probability that the best arm is falsely selected. Let c_⟨ i ⟩=(μ_⟨ 1 ⟩-μ_⟨ i ⟩)/σ_⟨ i ⟩/(μ_⟨ 1 ⟩-μ_⟨ 2 ⟩)/σ_⟨ 2 ⟩, i=2,...,k. For the KG algorithm, lim_n→∞-1/nlog(1-ℙ{I_n^*=I^*})=Γ^KG, where Γ^KG=min_i≠ 1((μ_⟨ i ⟩-μ_⟨ 1 ⟩)^2/2((∑_i≠ 1σ_⟨ 2 ⟩/c_⟨ i ⟩+σ_⟨ 1 ⟩)σ_⟨ 1 ⟩+c_⟨ i ⟩σ_⟨ i ⟩^2(∑_i≠ 11/c_⟨ i ⟩+σ_⟨ 1 ⟩/σ_⟨ 2 ⟩))).We observe that Γ^KG is not optimal. To make this point, Proposition <ref> gives an example that Γ^KG is no better than this rate of the TTEI algorithm <cit.> when the parameter β (probability of sampling the best arm) of TTEI is set to some suboptimal value. For the TTEI algorithm <cit.>, the rate of posterior convergence of 1-ℙ{I_n^*=I^*} exists and is denoted as Γ^TTEI. Let its probability of sampling the best arm β=(σ_⟨ 2 ⟩/σ_⟨ 1 ⟩∑_i≠ 11/c_⟨ i ⟩+1)^-1. We have Γ^KG≤Γ^TTEI. According to the proof of Proposition <ref>, there are configurations of the BAI problem leading to Γ^KG< Γ^TTEI, i.e., Γ^KG is not optimal. In fact, with β=(σ_⟨ 2 ⟩/σ_⟨ 1 ⟩∑_i≠ 11/c_⟨ i ⟩+1)^-1, Γ^KG=Γ^TTEI is achieved only in some special cases, e.g., when k=2. § IMPROVED KNOWLEDGE GRADIENTIn this section, we propose an improved knowledge gradient (iKG) algorithm. 
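For concreteness, before presenting iKG, note that the expectation defining KG_t,i above can be estimated by straightforward Monte Carlo; the sketch below (Python/numpy, assuming known sampling variances sigma2 and the current posterior parameters mu_t, sigma2_t passed in as arrays) is only illustrative, since in practice the analytical form of this improvement measure is used.

import numpy as np

def kg_value(i, mu_t, sigma2_t, sigma2, n_mc=100000, rng=None):
    # Monte Carlo estimate of KG_{t,i} from its defining expectation
    rng = np.random.default_rng() if rng is None else rng
    best_now = np.max(mu_t)                 # max_i mu_{t,i}
    others = np.max(np.delete(mu_t, i))     # max_{i' != i} mu_{t,i'}
    # draw the next sample of arm i, as in the transition used in the text
    x = rng.normal(mu_t[i], np.sqrt(sigma2[i]), size=n_mc)
    # posterior mean/variance update of arm i given above, after observing x
    sigma2_new = 1.0 / (1.0 / sigma2_t[i] + 1.0 / sigma2[i])
    mu_new = sigma2_new * (mu_t[i] / sigma2_t[i] + x / sigma2[i])
    return np.mean(np.maximum(mu_new, others)) - best_now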
We still follow the manner of one-step look ahead of KG, but set the terminal reward of problem (<ref>) as v_n(S_n)=1{I_n^*=I^*}. That is, for the goal of identifying the best arm, we reward the selected arm by a 0-1 quantity showing whether this arm is the best arm, instead of the mean of this arm (as in KG).In this case, 𝔼[v_n(S_n)]=ℙ{I_n^*=I^*}, whereℙ{I_n^*=I^*} = ℙ{⋂_i≠ I_n^*(θ_I_n^*>θ_i)}=1- ℙ{⋃_i≠ I_n^*(θ_i>θ_I_n^*)}.However, the probability ℙ{⋃_i≠ I_n^*(θ_i>θ_I_n^*)} in (<ref>) does not have an analytical expression. To facilitate the algorithm implementation and analysis, we adopt an approximation to it using the Bonferroni inequality <cit.>:ℙ{⋃_i≠ I_n^*(θ_i>θ_I_n^*)}≤∑_i≠ I_n^*ℙ(θ_i>θ_I_n^*),and 𝔼[v_n(S_n)] can be approximately computed as𝔼[v_n(S_n)] ≈ 1- ∑_i≠ I_n^*ℙ(θ_i>θ_I_n^*)= 1-∑_i≠ I_n^*exp(-(μ_n,i-μ_n,I_n^*)^2/2(σ_n,i^2+σ_n,I_n^*^2)).Note that the Bonferroni inequality has been adopted as an approximation of the probability of correct selection in the literature for development of BAI algorithms <cit.>. For our purpose, we can show that the use of this approximation still makes the resulting algorithm asymptotically optimal and empirically superior.Let iKG_t,i be the one-step improvement in (<ref>) with I_t^* treated as unchanged after one more sample and 𝔼[v_n(S_n)] approximated by (<ref>). We have the following proposition to compute iKG_t,i. The iKG algorithm pulls the arm with the largest iKG_t,i in each round. With the definition of iKG_t,i above, we have iKG_t,i={ exp(-(μ_t,i-μ_t,I_t^*)^2/2(σ_t,i^2+σ_t,I_t^*^2))-exp(-(μ_t,i-μ_t,I_t^*)^2/2(σ_t+1,i^2+σ_t,I_t^*^2+σ_i^2(σ_t+1,i^2/σ_i^2)^2)), i≠ I_t^*,∑_i'≠ I_t^*exp(-(μ_t,i'-μ_t,I_t^*)^2/2(σ_t,i'^2+σ_t,I_t^*^2))-∑_i'≠ I_t^*exp(-(μ_t,i'-μ_t,I_t^*)^2/2(σ_t,i'^2+σ_t+1,I_t^*^2+σ_I_t^*^2(σ_t+1,I_t^*^2/σ_I_t^*^2)^2)), i= I_t^*. .Both KG and iKG are greedy algorithms that look at the improvement only one-step ahead. The essential difference between them is on the reward they use for the event of best arm identification. For KG, it is the mean of the arm selected, while for iKG, it is a 0-1 quantity showing whether the best arm is selected. It is interesting to note that the choice between these two rewards has been discussed in the control community for optimization of complex systems, known as cardinal optimization (similar to KG) vs. ordinal optimization (similar to iKG) <cit.>, with the discussion result in line with this research, indicating that ordinal optimization has advantages over cardinal optimization in the convergence rates of the optimization algorithms <cit.>. For the iKG algorithm, lim_n→∞-1/nlog(1-ℙ{I_n^*=I^*})= Γ^iKG, where Γ^iKG=(μ_⟨ i ⟩-μ_⟨ 1 ⟩)^2/2(σ_⟨ i ⟩^2/w_⟨ i ⟩+σ_⟨ 1 ⟩^2/w_⟨ 1 ⟩),and w_i is the sampling rate of arm i satisfying ∑_i=1^kw_i=1, w_⟨ 1 ⟩^2/σ_⟨ 1 ⟩^2=∑_i=2^kw_⟨ i ⟩^2/σ_⟨ i ⟩^2(μ_⟨ i ⟩-μ_⟨ 1 ⟩)^2/2(σ_⟨ i ⟩^2/w_⟨ i ⟩+σ_⟨ 1 ⟩^2/w_⟨ 1 ⟩)=(μ_⟨ i' ⟩-μ_⟨ 1 ⟩)^2/2(σ_⟨ i' ⟩^2/w_⟨ i' ⟩+σ_⟨ 1 ⟩^2/w_⟨ 1 ⟩), i≠ i'≠ 1.In addition, for any BAI algorithms, lim sup_n→∞-1/nlog(1-ℙ{I_n^*=I^*})≤Γ^iKG. Theorem <ref> shows that the rate of posterior convergence Γ^iKG of the iKG algorithm is the fastest possible. We still use TTEI as an example. This theorem indicates that Γ^TTEI≤Γ^iKG for any β∈(0,1) and the equality holds only when β is set to β^*, where β^* is the optimal value of β and is typically unknown. § VARIANT PROBLEMS OF BAI Another advantage of iKG over KG is that iKG is more general, in the sense that it can be easily extended to solve variant problems of BAI. 
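Before turning to the variants, a minimal sketch of the iKG score from the proposition above may help; it is written in Python/numpy with the posterior means mu_t, posterior variances sigma2_t and known sampling variances sigma2 passed in as arrays (the function and variable names are illustrative only).

import numpy as np

def ikg_scores(mu_t, sigma2_t, sigma2):
    # iKG_{t,i} for every arm, following the two cases of the proposition above
    k = len(mu_t)
    best = int(np.argmax(mu_t))                          # current estimated best arm I_t^*
    sigma2_next = 1.0 / (1.0 / sigma2_t + 1.0 / sigma2)  # sigma_{t+1,i}^2 if arm i is pulled
    extra = sigma2 * (sigma2_next / sigma2) ** 2         # sigma_i^2 (sigma_{t+1,i}^2 / sigma_i^2)^2
    gap2 = (mu_t - mu_t[best]) ** 2
    others = [j for j in range(k) if j != best]
    scores = np.empty(k)
    for i in range(k):
        if i != best:
            before = np.exp(-gap2[i] / (2.0 * (sigma2_t[i] + sigma2_t[best])))
            after = np.exp(-gap2[i] / (2.0 * (sigma2_next[i] + sigma2_t[best] + extra[i])))
            scores[i] = before - after
        else:
            before = np.exp(-gap2[others] / (2.0 * (sigma2_t[others] + sigma2_t[best])))
            after = np.exp(-gap2[others] / (2.0 * (sigma2_t[others] + sigma2_next[best] + extra[best])))
            scores[i] = np.sum(before - after)
    return scores  # the iKG algorithm pulls argmax(scores) in round t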
In the variants, the target arms to be identified are not the single best arm, but no matter how the target arms are defined, one can always look at the event that whether these arms are correctly identified at the end of sampling and investigate the probability of this event to develop iKG and the algorithm. In contrast, it is difficult to extend KG to identify arms that cannot be found through optimizing means of these (and/or other) arms. In this section, we extend iKG to two BAI variants: ϵ-good arm identification <cit.> and feasible arm identification <cit.>. We develop algorithms for them and establish their asymptotic optimality. Note that in these two variant problems, the target arms need to be found by comparing their means with some fixed values. In such cases, the idea of KG is not straightforward. §.§ ϵ-Good Arm Identification We follow the notation in Sections <ref> and <ref>. For the k arms, suppose μ_⟨ 1 ⟩≥μ_⟨ 2 ⟩≥…≥μ_⟨ k ⟩. Given ϵ>0, the ϵ-good arm identification problem aims to find all the arms i with μ_⟨ i ⟩>μ_⟨ 1 ⟩-ϵ, i.e., all the arms whose means are close enough to the best (ϵ-good). Assume that no arms have means lying on μ_⟨ 1 ⟩-ϵ. Denote the set of ϵ-good arms as G^ϵ and the estimated set of ϵ-good arms after n rounds as G_n^ϵ. We set the terminal reward v_n(S_n)=1{G_n^ϵ=G^ϵ}, i.e., whether the set G^ϵ is correctly selected. Then, 𝔼[v_n(S_n)]=ℙ{G_n^ϵ=G^ϵ}, whereℙ{G_n^ϵ=G^ϵ} = ℙ{⋂_i∈ G_n^ϵ(θ_i>max_i^'∈𝔸θ_i^'-ϵ)∩⋂_i∈𝔸∖ G_n^ϵ(θ_i<max_i^'∈𝔸θ_i^'-ϵ)}=1-ℙ{⋃_i∈ G_n^ϵ(θ_i<max_i^'∈𝔸θ_i^'-ϵ)∪⋃_i∈𝔸∖ G_n^ϵ(θ_i>max_i^'∈𝔸θ_i^'-ϵ)}. Again, applying the Bonferroni inequality,ℙ{G_n^ϵ=G^ϵ}≥ 1- ∑_i∈ G_n^ϵℙ(θ_i<max_i^'∈𝔸θ_i^'-ϵ)-∑_i∈𝔸∖ G_n^ϵℙ(θ_i>max_i^'∈𝔸θ_i^'-ϵ). Let iKG_t,i^ϵ be the one-step improvement in (<ref>) with I_t^* treated as unchanged after one more sample and 𝔼[v_n(S_n)] approximated by the right-hand side of (<ref>). We have the following proposition to compute iKG_t,i^ϵ . With the definition of iKG_t,i^ϵ above, we have iKG_t,i^ϵ={ exp(-(μ_t,i-μ_t,I_t^*+ϵ)^2/2(σ_t,i^2+σ_t,I_t^*^2))-exp(-(μ_t,i-μ_t,I_t^*+ϵ)^2/2(σ_t+1,i^2+σ_t,I_t^*^2+σ_i^2(σ_t+1,i^2/σ_i^2)^2)), i≠ I_t^*,∑_i'≠ I_t^*exp(-(μ_t,i'-μ_t,I_t^*+ϵ)^2/2(σ_t,i'^2+σ_t,I_t^*^2))-∑_i'≠ I_t^*exp(-(μ_t,i'-μ_t,I_t^*+ϵ)^2/2(σ_t,i'^2+σ_t+1,I_t^*^2+σ_I_t^*^2(σ_t+1,I_t^*^2/σ_I_t^*^2)^2)), i= I_t^*. . To identify the ϵ-good arms, the iKG-ϵ algorithm pulls the arm with the largest iKG_t,i^ϵ in each round. For this algorithm, we can show that the rate of posterior convergence of 1-ℙ{G_n^ϵ=G^ϵ} is the fastest possible. For the iKG-ϵ algorithm, lim_n→∞-1/nlog(1-ℙ{G_n^ϵ=G^ϵ})= Γ^ϵ, where Γ^ϵ=(μ_⟨ i ⟩-μ_⟨ 1 ⟩+ϵ)^2/2(σ_⟨ i ⟩^2/w_⟨ i ⟩+σ_⟨ 1 ⟩^2/w_⟨ 1 ⟩), and w_i is the sampling rate of arm i satisfying ∑_i=1^kw_i=1, w_⟨ 1 ⟩^2/σ_⟨ 1 ⟩^2=∑_i=2^kw_⟨ i ⟩^2/σ_⟨ i ⟩^2(μ_⟨ i ⟩-μ_⟨ 1 ⟩+ϵ)^2/2(σ_⟨ i ⟩^2/w_⟨ i ⟩+σ_⟨ 1 ⟩^2/w_⟨ 1 ⟩)=(μ_⟨ i' ⟩-μ_⟨ 1 ⟩+ϵ)^2/2(σ_⟨ i' ⟩^2/w_⟨ i' ⟩+σ_⟨ 1 ⟩^2/w_⟨ 1 ⟩), i≠ i'≠ 1. In addition, for any ϵ-good arm identification algorithms, lim sup_n→∞-1/nlog(1-ℙ{G_n^ϵ=G^ϵ})≤Γ^ϵ.§.§ Feasible Arm Identification In the feasible arm identification, samples from pulling arms i are m-dimensional vectors X_t+1,i=[X_t+1,i1,…, X_t+1,im] instead of scalars, where each dimension of the vector corresponds to some measure of the system performance and X_t+1,ij is the observation associated with arm i and measure j. Suppose X_t+1,ij's follow the normal distribution with unknown means μ_ij and known variances σ_ij^2. We impose constraints μ_ij≤γ_j on arms i=1,2,…,k and measures j=1,2,…,m. 
The goal of this problem is to find the set of feasible arms 𝒮^1. Let the estimated set of feasible arms after n rounds be 𝒮_n^1 and 𝒮^2=𝔸∖𝒮^1. We assume that X_t+1,ij's are independent across different rounds t and measures j, and μ_ij's do not lie on the constraint limits γ_j. To facilitate the analysis, we also define for round t the set of measures ℰ_t,i^1≜{j: μ_t,ij≤γ_j} satisfied by arm i and the set of measures ℰ_t, i^2≜{j: μ_t,ij>γ_j} violated by arm i.Set the terminal reward v_n(S_n)=1{𝒮_n^1=𝒮^1}, i.e., whether the set 𝒮^1 is correctly selected. Then, 𝔼[v_n(S_n)]=ℙ{𝒮_n^1=𝒮^1}, whereℙ{𝒮_n^1=𝒮^1} = ℙ{⋂_i∈𝒮_n^1(⋂_j=1^m(θ_ij≤γ_j))∩⋂_i∈𝒮_n^2(⋃_j=1^m(θ_ij>γ_j))}=1-ℙ{⋃_i∈𝒮_n^1(⋃_j=1^m(θ_ij>γ_j))∪⋃_i∈𝒮_n^2(⋂_j=1^m(θ_ij≤γ_j))}.Applying the Bonferroni inequality,ℙ{𝒮_n^1=𝒮^1}≥ 1-∑_i∈𝒮_n^1∑_j=1^mℙ(θ_ij>γ_j)-∑_i∈𝒮_n^2∏_j∈ℰ_t,i^2ℙ(θ_ij≤γ_j).The inequality holds because 0<∏_j∈ℰ_n,i^1ℙ(θ_ij≤γ_j)≤ 1.Let iKG_t,i^F be the one-step improvement in (<ref>) with 𝒮_t^1, 𝒮_t^2 and ℰ_t,i^2 treated as unchanged after one more sample and 𝔼[v_n(S_n)] approximated by the right-hand side of (<ref>). We have the following proposition to compute iKG_t,i^F . With the definition of iKG_t,i^F above, we have iKG_t,i^F= ∑_j=1^m(exp(-(γ_j-μ_t,ij)^2/2σ_t,ij^21{i∈𝒮_t^1})-exp(-(γ_j-μ_t,ij)^2/2(σ_t+1,ij^2+σ_ij^2(σ_t+1,ij^2/σ_ij^2)^2)1{i∈𝒮_t^1}))+exp(-∑_j∈ℰ_t,i^2(γ_j-μ_t,ij)^2/2σ_t,ij^21{i∈𝒮_t^2}) -exp(-∑_j∈ℰ_t,i^2(γ_j-μ_t,ij)^2/2(σ_t+1,ij^2+σ_ij^2(σ_t+1,ij^2/σ_ij^2)^2)1{i∈𝒮_t^2}).To identify the feasible arms, the iKG-F algorithm pulls the arm with the largest iKG_t,i^F in each round. For this algorithm, we can show that the rate of posterior convergence of 1-ℙ{𝒮_n^1=𝒮^1} is also the fastest possible. For the iKG-F algorithm, lim_n→∞-1/nlog(1-ℙ{𝒮_n^1=𝒮^1})= Γ^F, where Γ^F=w_imin_j∈ℰ_i^1(γ_j-μ_ij)^2/2σ_ij^21{i∈𝒮^1}+w_i∑_j∈ℰ_i^2(γ_j-μ_ij)^2/2σ_ij^21{i∈𝒮^2}, and w_i is the sampling rate of arm i satisfying ∑_i=1^kw_i=1, w_imin_j∈ℰ_i^1(γ_j-μ_ij)^2/2σ_ij^21{i∈𝒮^1}+w_i∑_j∈ℰ_i^2(γ_j-μ_ij)^2/2σ_ij^21{i∈𝒮^2}=w_i'min_j∈ℰ_i'^1(γ_j-μ_i'j)^2/2σ_i'j^21{i'∈𝒮^1}+w_i'∑_j∈ℰ_i'^2(γ_j-μ_i'j)^2/2σ_i'j^21{i'∈𝒮^2}, i≠ i'.In addition, for any feasible arm identification algorithms lim sup_n→∞-1/nlog(1-ℙ{𝒮_n^1=𝒮^1})≤Γ^F.§ NUMERICAL EXPERIMENTS In this section, we show empirical performances of the iKG, iKG-ϵ and iKG-F algorithms on synthetic and real-world examples. For the best arm identification problem, we compare iKG with the following algorithms. * Expected Improvement (EI) <cit.>. This is another common strategy for BAI. In each round, it pulls the arm offering the maximal expected improvement over the current estimate of the best mean of the arms.* Top-Two Expected Improvement (TTEI) <cit.>. This is a modification of the EI algorithm by introducing a parameter β to control the probabilities of sampling the best arm and the non-best set. We set the parameter β in TTEI as its default value 1/2. * Knowledge Gradient. This is the algorithm under study in this research. For the ϵ-good arm identification problem, we compare iKG-ϵ with the following algorithms. * APT Algorithm <cit.>. It is a fixed-budget algorithm for identifying the arms whose means are above a given threshold. We set the input tolerance parameter as 0.0001 and the threshold as the posterior mean of the estimated best arm minus ϵ. * ()^2 Algorithm <cit.>. It is a fixed-confidence algorithm for ϵ-good arm identification. It pulls three arms in each round, the estimated best arm, one arm above the threshold and one arm below the threshold. 
We set the input tolerance parameter as 0.0001 and γ=0. For the feasible arm identification problem, we compare iKG-F with the following algorithms. * MD-UCBE Algorithm <cit.>. This is a fixed-budget algorithm for feasible arm identification based on the upper confidence bound. We set the input tolerance parameter as 0.0001 and hyperparameter a=25/36n-k/H, where H is a constant that can be computed. Katz-Samuels and Scott <cit.> showed that with a=25/36n-k/H, the performance of MD-UCBE is nearly optimal. * MD-SAR Algorithm <cit.>. This is a fixed-budget algorithm for feasible arm identification based on successive accepts and rejects. We set the input tolerance parameter as 0.0001. In addition, iKG, iKG-ϵ and iKG-F will be compared with the equal allocation, where each arm is simply played with the same number of rounds. It is a naive method and is often used as a benchmark against which improvements might be measured.The examples for testing include three synthetic examples, called Examples 1-3, and three real examples, namely the Dose-Finding Problem, Drug Selection Problem, and Caption Selection Problem. For Example 1-3 and the Dose-Finding problem, samples of the arms are two-dimensional. We call the measures of them measures 1 and 2. When the examples are tested for the best arm identification and ϵ-good identification, only measure 1 will be used for identifying good/best arms. When the examples are tested for the feasible arm identification, both measures will be used for feasibility detection. For the Drug Selection and Caption Selection problems, samples of the arms are one-dimensional. They are tested for the best arm identification, ϵ-good identification and feasible arm identification.Synthetic Datasets. We consider three examples, all containing ten arms.Example 1. The means in measure 1 of the ten arms are 0.1927, 0.6438, 3.0594, 3.0220, 1.3753, 1.4215, 0.9108, 1.0126, 0.1119 and 1.8808, and the means in measure 2 of the ten arms are 0.4350, 0.7240, 1.1566, 0.8560, 3.4712, 0.8248, 3.8797,1.9819, 3.2431 and 1.4315, all of which are uniformly generated in (0,4). Samples of the arms are corrupted by normal noises 𝒩(0,1). The best arm is arm 3 and 0.1-good arms are arms 3 and 4. For the feasible arm identification, we choose arms with means in both measures less than 2. Then the feasible arms are arms 1, 2, 6, 8 and 10.Example 2. We keep the setting of Example 1. Distributions of the noises for arms 1-5 are changed to 𝒩(0,4).Example 3. Consider functions y_1(x)=-0.05x^2, y_2(x)=-0.06(7-x) and y_3(x)=0.06(x-6). The means in measure 1 of the ten arms are y_1(x) with x=1,2,…,10. The means in measure 2 of the ten arms are y_2(x) with x=1,…,6 and y_3(x) with x=7,…,10. Noises follow the normal distribution 𝒩(0,1). The best arm is arm 1 and 0.5-good arms are arms 1-3. For the feasible arm identification, we choose arms with means in measure 1 greater than -0.5 and means in measure 2 less than 0. The feasible arms are arms 1-3.Dose-Finding Problem. We use the data in <cit.> (see ACR50 in week 16) for treating rheumatoid arthritis by the drug secukinumab. There are four dosage levels, 25mg, 75mg, 150mg, and 300mg, and a placebo, which are treated as five arms. We develop a simulation model based on the dataset. Each arm is associated with two performance measures: the probability of the drug being effective and the probability of the drug causing infections. The means of the five arms are μ_1=(0.151,0.259), μ_2=(0.184,0.184), μ_3=(0.209,0.209), μ_4=(0.171,0.293) and μ_5=(0.06,0.16). 
Samples of each arm are corrupted by normal noises 𝒩(0,0.25). The best arm is arm 3 and the 0.03-good arms are arms 2 and 3. For the feasible arm identification, we find the arms whose probability of being effective is larger than 0.18 and the probability of causing infections is less than 0.25. The feasible arms are arms 2 and 3.Drug Selection Problem. We consider five contraceptive alternatives based on the Drug Review Dataset (https://doi.org/10.24432/C5SK5S): Ethinyl estradiol / levonorgest, Ethinyl estradiol / norethindro, Ethinyl estradiol / norgestimat, Etonogestrel and Nexplanon, which can be treated as five arms. The dataset provides user reviews on the five drugs along with related conditions and ratings reflecting overall user satisfaction. We set the means of the five arms as μ_1=5.8676, μ_2=5.6469, μ_3=5.8765, μ_4=5.8298 and μ_5=5.6332, and the variances of the five arms as σ_1^2=3.2756, σ_2^2=3.4171, σ_3^2=3.2727, σ_4^2=3.3198 and σ_5^2=3.3251, all calculated by the data. When this example is used for the best arm identification and ϵ-good arm identification, the best arm (with the highest user satisfaction) and 0.003-good arm are both arm 3 (Ethinyl estradiol / norgestimat). When this example is used for feasible arm identification, we will select the drugs whose ratings are over 5.6, and the feasible arms are arm 1 (Ethinyl estradiol / levonorgest), arm 2 (Ethinyl estradiol / norethindro), arm 3 (Ethinyl estradiol / norgestimat), arm 4 (Etonogestrel) and arm 5 (Nexplanon).Caption Selection Problem. We aim to select good captions based on the New Yorker Cartoon Caption Contest Dataset (https://nextml.github.io/caption-contest-data/). In the contests, each caption can be treated as an arm. The dataset provides the mean and variance of each arm, which can be used to set up our experiments. We will test contests 853 (Caption 853) and 854 (Caption 854).In Caption 853, we randomly select ten captions as arms. We set the means of the ten arms as μ_1=1.1400, μ_2=1.0779, μ_3=1.4160, μ_4=1.0779, μ_5=1.1081, μ_6=1.1467, μ_7=1.1333, μ_8=1.1075, μ_9=1.1026 and μ_10=1.4900, and the variances of the arms as σ_1^2=0.1418, σ_2^2=0.0991, σ_3^2=0.4871, σ_4^2=0.0728, σ_5^2=0.0977, σ_6^2=0.1809, σ_7^2=0.1843, σ_8^2=0.0970, σ_9^2=0.0932 and σ_10^2=0.4843, which are all calculated by the data. When this example is used for the best arm identification, the best arm (with the highest funniness score) is arm 10. When this example is used for ϵ-good arm identification, the 0.1-good arms are arms 3 and 10. When this example is used for feasible arm identification, we will select the captions whose funniness scores are over 1.4, and the feasible arms are arms 3 and 10.In Caption 854, we also randomly select ten captions as arms. We set the means of the ten arms as μ_1=1.1986, μ_2=1.1890, μ_3=1.1400, μ_4=1.2621, μ_5=1.1544, μ_6=1.0339, μ_7=1.1349, μ_8=1.2786, μ_9=1.1765 and μ_10=1.1367, and the variances of the arms as σ_1^2=0.1879, σ_2^2=0.2279, σ_3^2=0.1346, σ_4^2=0.3186, σ_5^2=0.1314, σ_6^2=0.0330, σ_7^2=0.1337, σ_8^2=0.3167, σ_9^2=0.1858 and σ_10^2=0.1478, all calculated by the data. When this example is used for the best arm identification, the best arm is arm 8. When this example is used for ϵ-good arm identification, the 0.05-good arms are arms 4 and 8. 
When this example is used for feasible arm identification, we will select the captions whose funniness scores are over 1.25, and the feasible arms are arms 4 and 8. For the tested algorithms, probabilities of false selection (PFS) are obtained based on the average of 100 macro-replications. Tables 1-3 show the PFS of the algorithms under some fixed sample sizes (additional numerical results about the PFS and sampling rates of the tested algorithms are provided in the Supplement). The proposed iKG, iKG-ϵ and iKG-F perform the best. For the best arm identification, EI tends to allocate too many samples to the estimated best arm, leading to insufficient exploration of the remaining arms, while KG tends to allocate too few samples to the estimated best arm, leading to excessive exploration of the remaining arms. TTEI always allocates approximately one-half of the budget to the estimated best arm when β=1/2, leading to the budget not being best utilized. For the ϵ-good identification, APT and ()^2 are inferior because the former insufficiently pulls the estimated best arm, leading to inaccurate estimates of the threshold, while the latter falls in the fixed-confidence regime that focuses on making guarantees on the probability of false selection instead of minimizing it. For the feasible arm identification, both MD-UCBE and MD-SAR allocate too many samples to the arms near the constraint limits. For the three problems, equal allocation performs the worst in general, because it does not have any efficient sampling mechanism for identifying the target arms in these problems. § CONCLUSION This paper studies the knowledge gradient (KG), a popular policy for the best arm identification (BAI). We observe that the KG algorithm is not asymptotically optimal, and then propose a remedy for it. The new policy follows KG's manner of one-step look ahead, but utilizes different evidence to identify the best arm. We call it improved knowledge gradient (iKG) and show that it is asymptotically optimal. Another advantage of iKG is that it can be easily extended to variant problems of BAI. We use ϵ-good arm identification and feasible arm identification as two examples for algorithm development and analysis. The superior performance of iKG on BAI and the two variants is further demonstrated using numerical examples. § PROOF OF PROPOSITION 1 To facilitate the analysis, we make the following definition. For two real-valued sequences {a_n} and {b_n}, if lim_n→∞1/nlog(a_n/b_n)=0, we call them logarithmically equivalent, denoted by a_n=̇b_n. We first analyze 1-ℙ{I_n^*=I^*}. Note that 1-ℙ{I_n^*=I^*}= ℙ{⋃_i≠ I_n^*(θ_i>θ_I_n^*)} and we have max_i≠ I_n^*ℙ(θ_i>θ_I_n^*)≤ℙ{⋃_i≠ I_n^*(θ_i>θ_I_n^*)}≤ (k-1)max_i≠ I_n^*ℙ(θ_i>θ_I_n^*). Then 1-ℙ{I_n^*=I^*}=̇max_i≠ I_n^*ℙ(θ_i>θ_I_n^*). In round n, θ_i-θ_I_n^* follows 𝒩(μ_n,i-μ_n,I_n^*, σ_n,i^2+σ_n,I_n^*^2). Let Φ(·) and ϕ(·) be the cumulative distribution function and probability density function of the standard normal distribution, respectively. We have ℙ(θ_i>θ_I_n^*)=1-Φ(μ_n,I_n^*-μ_n,i/√(σ_n,i^2+σ_n,I_n^*^2)). Let z=(μ_n,I_n^*-μ_n,i)/√(σ_n,i^2+σ_n,I_n^*^2) and note that z>0. By the following property of the standard normal distribution, z/(z^2+1)ϕ(z)<1-Φ(z)<1/zϕ(z), and μ_n,I_n^*-μ_n,i>0, we have ℙ(θ_i>θ_I_n^*)=̇ϕ(μ_n,i-μ_n,I_n^*/√(σ_n,i^2+σ_n,I_n^*^2)) =̇exp(-(μ_n,i-μ_n,I_n^*)^2/2(σ_n,i^2+σ_n,I_n^*^2)).
Denote T_n,i as the number of samples for arm i before round t, i.e., T_t,i≜∑_l=0^t-11{I_l=i}. By (1) in the main text, we haveσ_n,i^2=1/(σ_i^2/T_n,i)^-1+σ_0,i^-2. Then 1-ℙ{I_n^*=I^*} =̇max_i≠ I_n^*(exp(-(μ_n,i-μ_n,I_n^*)^2/2(σ_n,i^2+σ_t,I_n^*^2)))=̇exp(-nmin_i≠ I_n^*(μ_n,i-μ_n,I_n^*)^2/2(σ_n,i^2+σ_t,I_n^*^2)).Hence Γ^KG=lim_n→∞-1/nlog(1-ℙ{I_n^*=I^*})=min_i≠ I^*(μ_i-μ_I^*)^2/2(σ_i^2/w_i+σ_I^*^2/w_I^*).Notice that the sampling rate w_i of each arm i of the KG algorithm has been characterized in <cit.>, with w_⟨ 1 ⟩/w_⟨ 2 ⟩=σ_⟨ 1 ⟩/σ_⟨ 2 ⟩w_⟨ i ⟩/w_⟨ i' ⟩=(μ_⟨ 1 ⟩-μ_⟨ i' ⟩)/σ_⟨ i' ⟩/(μ_⟨ 1 ⟩-μ_⟨ i ⟩)/σ_⟨ i ⟩, i,i'=2,3,…,ki≠ i'.Together with ∑_i=1^kw_i=1, we have w_⟨ 1 ⟩=(σ_⟨ 2 ⟩/σ_⟨ 1 ⟩∑_i≠ 11/c_⟨ i ⟩+1)^-1andw_⟨ i ⟩=(c_⟨ i ⟩(∑_i≠ 11/c_⟨ i ⟩+σ_⟨ 1 ⟩/σ_⟨ 2 ⟩))^-1,i=2,3,…,k.Plugging into (<ref>),Γ^KG=min_i≠ 1((μ_⟨ i ⟩-μ_⟨ 1 ⟩)^2/2((∑_i≠ 1σ_⟨ 2 ⟩/c_⟨ i ⟩+σ_⟨ 1 ⟩)σ_⟨ 1 ⟩+c_⟨ i ⟩σ_⟨ i ⟩^2(∑_i≠ 11/c_⟨ i ⟩+σ_⟨ 1 ⟩/σ_⟨ 2 ⟩))). § PROOF OF PROPOSITION 2Similar to the proof of Proposition 1, we haveΓ^TTEI=lim_n→∞-1/nlog(1-ℙ{I_n^*=I^*})=min_i≠ I^*(μ_i-μ_I^*)^2/2(σ_i^2/w_i+σ_I^*^2/w_I^*).Since for the TTEI algorithm, (μ_i-μ_I^*)^2/2(σ_i^2/w_i+σ_I^*^2/w_I^*)=(μ_i'-μ_I^*)^2/2(σ_i'^2/w_i'+σ_I^*^2/w_I^*), ∀ i≠ i'≠ I^*,we haveΓ^TTEI=(μ_i-μ_I^*)^2/2(σ_i^2/w_i+σ_I^*^2/w_I^*)∀ i≠ I^*.According to (<ref>), for the KG algorithm, w_⟨ 1 ⟩=(σ_⟨ 2 ⟩/σ_⟨ 1 ⟩∑_i≠ I^*1/c_⟨ i ⟩+1)^-1. Now by setting β of the TTEI algorithm to the same value, the sampling rates of the best arm from these two algorithms will be the same. According to Theorem 2 of <cit.>,among algorithms allocating the same proportion of the samples to the best arm, Γ^TTEI of the TTEI algorithm is optimal, i.e., Γ^KG≤Γ^TTEI.§ PROOF OF PROPOSITIONS 3, 4 AND 5Propositions 3, 4 and 5 give the expressions of iKG_t,i, iKG_t,i^ϵ and iKG_t,i^F. Below we introduce a lemma first, which will be used in the proofs of the three propositions. If arm i is sampled from 𝒩(μ_t,i, σ_i^2) in round t, θ_i and θ_i' follow 𝒩(μ_t,i, σ_t,i^2) and 𝒩(μ_t,i', σ_t,i'^2) respectively. Then, 𝔼[ℙ(θ_i>θ_i')]=exp(-(μ_t,i-μ_t,i')^2/2(σ_t+1,i^2+σ_t,i'^2+σ_i^2(σ_t+1,i^2/σ_i^2)^2)). Proof of Lemma <ref>:We know that θ_t,i follows 𝒩(μ_t,i, σ_i^2). Then by (1) of the main text, we haveμ_t+1,i={ σ_t,i^-2μ_t,i+σ_i^-2θ_t,i/σ_t,i^-2+σ_i^-2 I_t=i,μ_t,i I_t≠ i, . σ_t+1,i^2={ 1/σ_t,i^-2+σ_i^-2 I_t=i,σ_t,i^2 I_t≠ i. .Recall thatℙ(θ_i>θ_I_t^*)=̇exp(-(μ_t,i-μ_t,I_t^*)^2/2(σ_t,i^2+σ_t,I_t^*^2)).Then𝔼[ℙ(θ_i>θ_i')] =𝔼[exp(-(μ_t+1,i-μ_t,i')^2/2(σ_t+1,i^2+σ_t,i'^2))]=1/√(2πσ_i)∫_-∞^∞exp(-(σ_t,i^-2μ_t,i+σ_i^-2θ_t,i/σ_t+1,i^-2-μ_t,i')^2/2(σ_t+1,i^2+σ_t,i'^2))exp(-(θ_t,i-μ_t,i')^2/2σ_i^2)dθ_t,i=̇exp(-(μ_t,i-μ_t,i')^2/2(σ_t+1,i^2+σ_t,i'^2+σ_i^2(σ_t+1,i^2/σ_i^2)^2)). Proof of Proposition 3:For the best arm identification problem, if i≠ I_t^*,iKG_t,i=𝔼[v_n(𝒯(S_t,i,θ_t,i))-v_n(S_t)] = 1-∑_i'≠ i≠ I_t^*exp(-(μ_t,i'-μ_t,I_t^*)^2/2(σ_t,i'^2+σ_t,I_t^*^2))-exp(-(μ_t,i-μ_t,I_t^*)^2/2(σ_t+1,i^2+σ_t,I_t^*^2+σ_i^2(σ_t+1,i^2/σ_i^2)^2))-(1- ∑_i'≠ I_t^*exp(-(μ_t,i'-μ_t,I_t^*)^2/2(σ_t,i'^2+σ_t,I_t^*^2))) = exp(-(μ_t,i-μ_t,I_t^*)^2/2(σ_t,i^2+σ_t,I_t^*^2))-exp(-(μ_t,i-μ_t,I_t^*)^2/2(σ_t+1,i^2+σ_t,I_t^*^2+σ_i^2(σ_t+1,i^2/σ_i^2)^2)).If i= I_t^*,iKG_t,i=𝔼[v_n(𝒯(S_t,i,θ_t,i))-v_n(S_t)] = 1-∑_i'≠ I_t^*exp(-(μ_t,i'-μ_t,I_t^*)^2/2(σ_t,i'^2+σ_t+1,I_t^*^2+σ_I_t^*^2(σ_t+1,I_t^*^2/σ_I_t^*^2)^2)) -(1- ∑_i'≠ I_t^*exp(-(μ_t,i'-μ_t,I_t^*)^2/2(σ_t,i'^2+σ_t,I_t^*^2))) = ∑_i'≠ I_t^*exp(-(μ_t,i'-μ_t,I_t^*)^2/2(σ_t,i'^2+σ_t,I_t^*^2))-∑_i'≠ I_t^*exp(-(μ_t,i'-μ_t,I_t^*)^2/2(σ_t,i'^2+σ_t+1,I_t^*^2+σ_I_t^*^2(σ_t+1,I_t^*^2/σ_I_t^*^2)^2)). 
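To make the acquisition rule of Proposition 3 concrete, a minimal NumPy sketch is given below: it evaluates iKG_t,i for every arm from the current posterior means and variances together with the (known) sampling variances, and the algorithm then pulls the arm with the largest score. The toy numbers and all variable names are illustrative assumptions, not values from the paper.

import numpy as np

def ikg_scores(mu, s2, var):
    # mu, s2: posterior means/variances; var: sampling variances (Proposition 3)
    best = int(np.argmax(mu))
    s2_next = 1.0 / (1.0 / s2 + 1.0 / var)          # posterior variance after one more pull
    infl = s2_next + var * (s2_next / var) ** 2     # sigma_{t+1,i}^2 + sigma_i^2 (sigma_{t+1,i}^2/sigma_i^2)^2
    gap2 = (mu - mu[best]) ** 2
    scores = np.empty_like(mu)
    for i in range(len(mu)):
        if i == best:
            before = np.exp(-gap2 / (2.0 * (s2 + s2[best])))
            after = np.exp(-gap2 / (2.0 * (s2 + infl[best])))
            scores[i] = np.sum(np.delete(before - after, best))
        else:
            scores[i] = (np.exp(-gap2[i] / (2.0 * (s2[i] + s2[best])))
                         - np.exp(-gap2[i] / (2.0 * (infl[i] + s2[best]))))
    return scores

mu  = np.array([1.14, 1.08, 1.42, 1.49])    # toy posterior means
s2  = np.array([0.05, 0.04, 0.06, 0.05])    # toy posterior variances
var = np.array([0.14, 0.10, 0.49, 0.48])    # toy sampling variances
scores = ikg_scores(mu, s2, var)
print(np.round(scores, 4), "-> pull arm", int(np.argmax(scores)))

The iKG_t,i^ϵ and iKG_t,i^F scores of Propositions 4 and 5 below follow the same pattern, with the gaps shifted by ϵ or measured against the constraint limits γ_j, respectively.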
Proof of Proposition 4:We explore the expression of 𝔼[v_n(S_n)] in the ϵ-good arm identification problem first. We know that𝔼[v_n(S_n)]= 1- ∑_i∈ G_n^ϵℙ(θ_i<max_i^'∈𝔸θ_i^'-ϵ)-∑_i∈𝔸∖ G_n^ϵℙ(θ_i>max_i^'∈𝔸θ_i^'-ϵ).Note that in round n, θ_i-θ_I_n^*+ϵ follows 𝒩(μ_n,i-μ_n,I_n^*+ϵ, σ_n,i^2+σ_n,I_n^*^2). Similarly as in the proof of Proposition 1, we can know that if i∈ G_n^ϵℙ(θ_i<max_i^'∈𝔸θ_i^'-ϵ)=̇exp(-(μ_n,i-μ_n,I_n^*+ϵ)^2/2(σ_n,i^2+σ_n,I_n^*^2)),and if i∈𝔸∖ G_n^ϵℙ(θ_i>max_i^'∈𝔸θ_i^'-ϵ)=̇exp(-(μ_n,i-μ_n,I_n^*+ϵ)^2/2(σ_n,i^2+σ_n,I_n^*^2)).Then 𝔼[v_n(S_n)]=1-∑_i≠ I_n^*exp(-(μ_n,i-μ_n,I_n^*+ϵ)^2/2(σ_n,i^2+σ_n,I_n^*^2)).For the ϵ-good arm identification problem, if i≠ I_t^*,iKG_t,i^ϵ=𝔼[v_n(𝒯(S_t,i,θ_t,i))-v_n(S_t)] = 1-∑_i'≠ i≠ I_t^*exp(-(μ_t,i'-μ_t,I_t^*+ϵ)^2/2(σ_t,i'^2+σ_t,I_t^*^2))-exp(-(μ_t,i-μ_t,I_t^*+ϵ)^2/2(σ_t+1,i^2+σ_t,I_t^*^2+σ_i^2(σ_t+1,i^2/σ_i^2)^2))-(1- ∑_i'≠ I_t^*exp(-(μ_t,i'-μ_t,I_t^*+ϵ)^2/2(σ_t,i'^2+σ_t,I_t^*^2))) = exp(-(μ_t,i-μ_t,I_t^*+ϵ)^2/2(σ_t,i^2+σ_t,I_t^*^2))-exp(-(μ_t,i-μ_t,I_t^*+ϵ)^2/2(σ_t+1,i^2+σ_t,I_t^*^2+σ_i^2(σ_t+1,i^2/σ_i^2)^2)).If i= I_t^*,iKG_t,i^ϵ=𝔼[v_n(𝒯(S_t,i,θ_t,i))-v_n(S_t)] = 1-∑_i'≠ I_t^*exp(-(μ_t,i'-μ_t,I_t^*+ϵ)^2/2(σ_t,i'^2+σ_t+1,I_t^*^2+σ_I_t^*^2(σ_t+1,I_t^*^2/σ_I_t^*^2)^2)) -(1- ∑_i'≠ I_t^*exp(-(μ_t,i'-μ_t,I_t^*+ϵ)^2/2(σ_t,i'^2+σ_t,I_t^*^2))) = ∑_i'≠ I_t^*exp(-(μ_t,i'-μ_t,I_t^*+ϵ)^2/2(σ_t,i'^2+σ_t,I_t^*^2))-∑_i'≠ I_t^*exp(-(μ_t,i'-μ_t,I_t^*+ϵ)^2/2(σ_t,i'^2+σ_t+1,I_t^*^2+σ_I_t^*^2(σ_t+1,I_t^*^2/σ_I_t^*^2)^2)). Proof of Proposition 5:We explore the expression of 𝔼[v_n(S_n)] in the feasible arm identification problem first. We know that𝔼[v_n(S_n)]= 1-∑_i∈𝒮_n^1∑_j=1^mℙ(θ_ij>γ_j)-∑_i∈𝒮_n^2∏_j∈ℰ_t,i^2ℙ(θ_ij≤γ_j).Note that in round n, θ_ij-γ_j follows 𝒩(μ_n,i-γ_j, σ_n,ij^2). Similarly as in the proof of Proposition 1, we can know that if i∈𝒮_n^1 and measure j∈{1,2,…,m},ℙ(θ_ij>γ_j)=̇exp(-(γ_j-μ_n,ij)^2/2σ_n,ij^2),and if i∈𝒮_n^2 and measure j∈ℰ_n,i^2,ℙ(θ_ij≤γ_j)=̇exp(-(γ_j-μ_n,ij)^2/2σ_n,ij^2).Then 𝔼[v_n(S_n)]= 1-∑_i∈𝒮_n^1∑_j=1^mexp(-(γ_j-μ_n,ij)^2/2σ_n,ij^2)-∑_i∈𝒮_n^2exp(-∑_j∈ℰ_n,i^2(γ_j-μ_n,ij)^2/2σ_n,ij^2).For the feasible arm identification problem,iKG_t,i^F=𝔼[v_n(𝒯(S_t,i,θ_t,i))-v_n(S_t)] = 1-∑_i'≠ i∈𝒮_t^1∑_j=1^mexp(-(γ_j-μ_t,i'j)^2/2σ_t,i'j^2)-∑_i'≠ i∈𝒮_t^2exp(-∑_j∈ℰ_t,i'^2(γ_j-μ_t,i'j)^2/2σ_t,i'j^2)-∑_j=1^mexp(-(γ_j-μ_t,ij)^2/2(σ_t+1,ij^2+σ_ij^2(σ_t+1,ij^2/σ_ij^2)^2)1{i∈𝒮_t^1})-exp(-∑_j∈ℰ_t,i^2(γ_j-μ_t,ij)^2/2(σ_t+1,ij^2+σ_ij^2(σ_t+1,ij^2/σ_ij^2)^2)1{i∈𝒮_t^2})-(1-∑_i∈𝒮_t^1∑_j=1^mexp(-(γ_j-μ_t,ij)^2/2σ_t,ij^2)-∑_i∈𝒮_t^2exp(-∑_j∈ℰ_t,i^2(γ_j-μ_t,ij)^2/2σ_t,ij^2)) = ∑_j=1^m(exp(-(γ_j-μ_t,ij)^2/2σ_t,ij^21{i∈𝒮_t^1})-exp(-(γ_j-μ_t,ij)^2/2(σ_t+1,ij^2+σ_ij^2(σ_t+1,ij^2/σ_ij^2)^2)1{i∈𝒮_t^1}))+exp(-∑_j∈ℰ_t,i^2(γ_j-μ_t,ij)^2/2σ_t,ij^21{i∈𝒮_t^2}) -exp(-∑_j∈ℰ_t,i^2(γ_j-μ_t,ij)^2/2(σ_t+1,ij^2+σ_ij^2(σ_t+1,ij^2/σ_ij^2)^2)1{i∈𝒮_t^2}). § PROOF OF THEOREM 1Our proof of Theorem 1 will be divided into the analysis of the consistency, sampling rates and asymptotic optimality of the iKG algorithm.We first show the consistency, i.e., each arm will be pulled infinitely by the algorithm as the round n goes to infinity. Since iKG_t,i={ exp(-(μ_t,i-μ_t,I_t^*)^2/2(σ_i^2/T_t,i+σ_I_t^*^2/T_t,I_t^*))-exp(-(μ_t,i-μ_t,I_t^*)^2/2((T_t,i+2)σ_i^2/(T_t,i+1)^2+σ_I_t^*^2/T_t,I_t^*)), i≠ I_t^*,∑_i'≠ I_t^*exp(-(μ_t,i'-μ_t,I_t^*)^2/2(σ_i'^2/T_t,i'+σ_I_t^*^2/T_t,I_t^*))-∑_i'≠ I_t^*exp(-(μ_t,i'-μ_t,I_t^*)^2/2(σ_i'^2/T_t,i'+(T_t,I_t^*+2)σ_I_t^*^2/(T_t,I_t^*+1)^2)), i= I_t^*, .it is obvious that iKG_t,i>0 for t>0. To prove the consistency, we define a set V≜{i∈𝔸: ∑_l≥ 01{I_l=i}<∞}. 
It suffices to prove that V=∅, and then the claim is straightforward based on the Strong Law of Large Numbers. For any δ_1>0 and arm i∉ V, there exists N_1 such that when n>N_1, |μ_n,i-μ_i|<δ_1, because arms not in V will be infinitely pulled. Since the exp(·) is a continuous function and σ_i^2/T_t,i-σ_i^2(T_t,i+2)/(T_t,i+1)^2=σ_i^2/((T_t,i+1)^2T_t,i)→0 holds for arm i∉ V, then for any δ_2>0, there exists N_2 such that when n>N_2, iKG_t,i<δ_2.Arms i'∈ V are pulled for only a finite number of rounds. Then max_i'∈ VT_t,i' exists and we have σ_i'^2/((T_t,i'+1)^2T_t,i')>min_i'≠ I_t^*σ_i'^2/max_i'∈ V(T_t,i'+2)/(T_t,i'+1)^2. According to the continuity of the function exp(·), there exists δ_3>0 such that iKG_t,i'>δ_3. Since δ_2 is arbitrary, let δ_2<δ_3, and then iKG_t,i'>iKG_t,i holds, which implies I_t∈ V. As the total number of rounds tend to infinity, V will become an empty set eventually. In other words, all the arms will be pulled infinitely and I_n^*=I^*=⟨ 1 ⟩ holds with probability 1.We next analyze the sampling rate of each arm by the iKG algorithm. Let δ_4=2δ_2>0, we know that when n is large, iKG_n,i<δ_2=δ_4/2 for all i∈𝔸. Then |iKG_n,i-iKG_n,i'|<iKG_n,i+iKG_n,i'<δ_4/2+δ_4/2=δ_4, where i≠ i'. For any i,i'∈𝔸 and i≠ i'≠⟨ 1 ⟩,|iKG_n,i-iKG_n,i'| = |exp(-(μ_n,i-μ_n,⟨ 1 ⟩)^2/2(σ_i^2/T_n,i+σ_⟨ 1 ⟩^2/T_n,⟨ 1 ⟩))- exp(-(μ_n,i'-μ_n,⟨ 1 ⟩)^2/2(σ_i'^2/T_n,i'+σ_⟨ 1 ⟩^2/T_n,⟨ 1 ⟩))+exp(-(μ_n,i'-μ_n,⟨ 1 ⟩)^2/2((T_n,i'+2)σ_i'^2/(T_n,i'+1)^2+σ_⟨ 1 ⟩^2/T_n,⟨ 1 ⟩))-exp(-(μ_n,i-μ_n,⟨ 1 ⟩)^2/2((T_n,i+2)σ_i^2/(T_n,i+1)^2+σ_⟨ 1 ⟩^2/T_n,⟨ 1 ⟩))| ≤2|exp(-(μ_n,i-μ_n,⟨ 1 ⟩)^2/2(σ_i^2/T_n,i+σ_⟨ 1 ⟩^2/T_n,⟨ 1 ⟩))- exp(-(μ_n,i'-μ_n,⟨ 1 ⟩)^2/2(σ_i'^2/T_n,i'+σ_⟨ 1 ⟩^2/T_n,⟨ 1 ⟩))| = 2|exp(-n(μ_n,i-μ_n,⟨ 1 ⟩)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/w_⟨ 1 ⟩))- exp(-n(μ_n,i'-μ_n,⟨ 1 ⟩)^2/2(σ_i'^2/w_i'+σ_⟨ 1 ⟩^2/w_⟨ 1 ⟩))|,where w_i=T_n,i/n is the sampling rate of arm i. For any δ_5=δ_1^2>0, we have |iKG_n,i-iKG_n,i'|<δ_4 if and only if |(μ_i-μ_⟨ 1 ⟩)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/w_⟨ 1 ⟩)-(μ_i'-μ_⟨ 1 ⟩)^2/2(σ_i'^2/w_i'+σ_⟨ 1 ⟩^2/w_⟨ 1 ⟩)|<δ_5by the continuity of the function exp(·) and |μ_n,i-μ_i|<δ_1. For arms i≠⟨ 1 ⟩,|iKG_n,i-iKG_n,⟨ 1 ⟩| = |exp(-(μ_n,i-μ_n,⟨ 1 ⟩)^2/2(σ_i^2/T_n,i+σ_⟨ 1 ⟩^2/T_n,⟨ 1 ⟩))- ∑_i'≠ 1exp(-(μ_n,i'-μ_n,⟨ 1 ⟩)^2/2(σ_i'^2/T_n,i'+σ_⟨ 1 ⟩^2/T_n,⟨ 1 ⟩))+∑_i'≠⟨ 1 ⟩exp(-(μ_n,i'-μ_n,⟨ 1 ⟩)^2/2(σ_i'^2/T_n,i'+(T_n,⟨ 1 ⟩+2)σ_⟨ 1 ⟩^2/(T_n,⟨ 1 ⟩+1)^2))-exp(-(μ_n,i-μ_n,⟨ 1 ⟩)^2/2((T_n,i+2)σ_i^2/(T_n,i+1)^2+σ_⟨ 1 ⟩^2/T_n,⟨ 1 ⟩))|.Notice that (T_n,i+2)σ_i^2/(T_n,i+1)^2=σ_i^2/(T_n,i+1/(T_n,i+2)). When n is large enough, 1/(T_n,i+2) is sufficiently small according to the consistency of the algorithm. Thenlim_n→∞exp(-(μ_n,i-μ_n,⟨ 1 ⟩)^2/2(σ_i^2/T_n,i+σ_⟨ 1 ⟩^2/T_n,⟨ 1 ⟩))-exp(-(μ_n,i-μ_n,⟨ 1 ⟩)^2/2((T_n,i+2)σ_i^2/(T_n,i+1)^2+σ_⟨ 1 ⟩^2/T_n,⟨ 1 ⟩)) = ∂(exp(-(μ_n,i-μ_n,⟨ 1 ⟩)^2/2(σ_i^2/T_n,i+σ_⟨ 1 ⟩^2/T_n,⟨ 1 ⟩)))/∂ T_n,i =∂(exp(-n(μ_n,i-μ_n,⟨ 1 ⟩)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/w_⟨ 1 ⟩)))/∂ w_i.Since |iKG_n,i-iKG_n,⟨ 1 ⟩|<δ_4 for i≠⟨ 1 ⟩, given δ_6>0, we have1-δ_6<|∑_i≠⟨ 1 ⟩∂(exp(-n(μ_n,i-μ_n,⟨ 1 ⟩)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/w_⟨ 1 ⟩)))/∂ w_⟨ 1 ⟩/∂(exp(-n(μ_n,i-μ_n,⟨ 1 ⟩)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/w_⟨ 1 ⟩)))/∂ w_i|<1+δ_6.By (<ref>), we have∂(exp(-n(μ_n,i-μ_n,⟨ 1 ⟩)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/w_⟨ 1 ⟩)))/∂ w_i=∂(exp(-n(μ_n,i'-μ_n,⟨ 1 ⟩)^2/2(σ_i'^2/w_i'+σ_⟨ 1 ⟩^2/w_⟨ 1 ⟩)))/∂ w_i'.Then 1-δ_6<∑_i≠⟨ 1 ⟩|∂(exp(-n(μ_n,i-μ_n,⟨ 1 ⟩)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/w_⟨ 1 ⟩)))/∂ w_⟨ 1 ⟩/∂(exp(-n(μ_n,i-μ_n,⟨ 1 ⟩)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/w_⟨ 1 ⟩)))/∂ w_i|<1+δ_6.Hence|w_⟨ 1 ⟩^2/σ_⟨ 1 ⟩^2-∑_i≠⟨ 1 ⟩w_i^2/σ_i^2|<δ_6. 
Since δ_6 can be arbitarily small, w_⟨ 1 ⟩^2/σ_⟨ 1 ⟩^2→∑_i≠⟨ 1 ⟩w_i^2/σ_i^2.We have shown that1-ℙ{I_n^*=⟨ 1 ⟩} =̇exp(-nmin_i≠⟨ 1 ⟩(μ_n,i-μ_n,⟨ 1 ⟩)^2/2(σ_i^2n/T_n,i+σ_⟨ 1 ⟩^2n/T_n,⟨ 1 ⟩)).Then Γ^iKG=lim_n→∞-1/nlog(1-ℙ{I_n^*=⟨ 1 ⟩})=min_i≠⟨ 1 ⟩(μ_i-μ_⟨ 1 ⟩)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/w_⟨ 1 ⟩).By (<ref>),Γ^iKG=(μ_i-μ_⟨ 1 ⟩)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/w_⟨ 1 ⟩),∀ i≠⟨ 1 ⟩,where w_i in (<ref>) and (<ref>) is the solution of (8) in the main text.Next, we will show that for any BAI algorithms, lim_n→∞-1/nlog(1-ℙ{I_n^*=⟨ 1 ⟩})≤Γ^iKG. Let W≜{w=(w_1,…,w_k): ∑_i=1^kw_i=1 w_i≥ 0, ∀ i∈𝔸} be set of the feasible sampling rates of the k arms. The proof of this claim is divided into two stages. First, suppose that w_⟨ 1 ⟩=α is fixed for some 0<α<1. We will show that max_w∈ W, w_⟨ 1 ⟩=αmin_i≠⟨ 1 ⟩(μ_i-μ_⟨ 1 ⟩)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/α) is achieved when ∑_i≠⟨ 1 ⟩w_i=1-α, (μ_i-μ_⟨ 1 ⟩)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/α)=(μ_i'-μ_⟨ 1 ⟩)^2/2(σ_i'^2/w_i'+σ_⟨ 1 ⟩^2/α), i≠ i'≠⟨ 1 ⟩.In other words, in this stage, we will prove the first and third equations in (8) of the main text. We prove it by contradiction. Suppose there exists a policy with sampling rates w'=(w'_1,w'_2,…,w'_k) of the k arms such that min_i≠⟨ 1 ⟩(μ_i-μ_⟨ 1 ⟩)^2/2(σ_i^2/w'_i+σ_⟨ 1 ⟩^2/α)=max_w∈ W, w_⟨ 1 ⟩=αmin_i≠⟨ 1 ⟩(μ_i-μ_⟨ 1 ⟩)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/α). Since the solution of (<ref>) is unique, there exists an arm i' satisfying(μ_i'-μ_⟨ 1 ⟩)^2/2(σ_i'^2/w'_i'+σ_⟨ 1 ⟩^2/α)> min_i≠⟨ 1 ⟩(μ_i-μ_⟨ 1 ⟩)^2/2(σ_i^2/w'_i+σ_⟨ 1 ⟩^2/α). We consider a new policy. There exists δ_7>0 such that w̃_i'=w'_i'-δ_7∈(0,1) and w̃_i=w'_i+δ_7/(k-2)∈(0,1) for i≠ i'≠⟨ 1 ⟩. Thenmin_i≠⟨ 1 ⟩(μ_i-μ_⟨ 1 ⟩)^2/2(σ_i^2/w̃_i+σ_⟨ 1 ⟩^2/α)>min_i≠⟨ 1 ⟩(μ_i-μ_⟨ 1 ⟩)^2/2(σ_i^2/w'_i+σ_⟨ 1 ⟩^2/α)=max_w∈ W, w_⟨ 1 ⟩=αmin_i≠⟨ 1 ⟩(μ_i-μ_⟨ 1 ⟩)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/α),which yields a contradiction. Therefore, the first and third equations in (8) of the main text hold.In the second stage, we will prove the second equation in (8) of the main text. Consider the following optimization problem z(μ_i-μ_⟨ 1 ⟩)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/α)=(μ_i'-μ_⟨ 1 ⟩)^2/2(σ_i'^2/w_i'+σ_⟨ 1 ⟩^2/α) i,i'≠⟨ 1 ⟩ i≠ i',(μ_i-μ_⟨ 1 ⟩)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/α)≥ z, i≠⟨ 1 ⟩,∑_i≠⟨ 1 ⟩w_i=1-α.The Lagrangian function of (<ref>) is L(α, λ_i)=z+∑_i≠⟨ 1 ⟩λ_i((μ_i-μ_⟨ 1 ⟩)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/α)-z)+λ_1(∑_i≠⟨ 1 ⟩w_i-1+α),where λ_i's are the Lagrange multipliers. By the KKT conditions, we have λ_i∂((μ_i-μ_⟨ 1 ⟩)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/α))/∂ w_i+λ_1=0 for all i≠⟨ 1 ⟩ and ∑_i≠⟨ 1 ⟩λ_i∂((μ_i-μ_⟨ 1 ⟩)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/α))/∂ w_⟨ 1 ⟩+λ_1=0. Then∑_i≠⟨ 1 ⟩∂((μ_i-μ_⟨ 1 ⟩)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/α))/∂ w_⟨ 1 ⟩/∂((μ_i-μ_⟨ 1 ⟩)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/α))/∂ w_i=1,i.e., w_⟨ 1 ⟩^2/σ_⟨ 1 ⟩^2=∑_i≠⟨ 1 ⟩w_i^2/σ_i^2.Remark: The conditions in (8) of the main text coincide with the optimality conditions developed in <cit.> using the OCBA method under normal sampling distributions.Notice that the posterior convergence rate is related to the sampling rate of each arm. We consider to find the optimal sampling rate of BAI and formulate it as the following optimization problemmax x{ (μ_i-μ_1)^2/2(σ_i^2/w_i+σ_1^2/w_1)≥ x∑_i=1^kw_i=1. .Applying KKT conditions to Problem (<ref>), we haveΓ^iKG=max_w∈ Wmin_i≠ 1 (μ_i-μ_1)^2/2(σ_i^2/w_i+σ_1^2/w_1),which implies the result. § PROOF OF THEOREM 2Our proof of Theorem 2 will be divided into the analysis of the consistency, sampling rates and asymptotic optimality of the iKG-ϵ algorithm.We first show consistency, i.e., each arm will be pulled infinitely by the algorithm as the round n goes to infinity. 
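(As an aside that applies equally to the optimality conditions (8) characterized above and to the analogous conditions (12) derived below: the limiting allocation is given by rate equalization across the non-best arms together with the variance-balance equation, and it can be computed numerically. A rough sketch on an assumed small Gaussian instance using SciPy is given here; a production solver would treat positivity and scaling more carefully.)

import numpy as np
from scipy.optimize import least_squares

mu  = np.array([1.0, 0.8, 0.6, 0.4])     # assumed means, arm 0 is the best arm
sig = np.array([1.0, 1.0, 1.0, 1.0])     # assumed standard deviations

def rates(w):
    # pairwise rate (mu_i - mu_1)^2 / (2 (sigma_i^2/w_i + sigma_1^2/w_1)) for i != 1
    return (mu[1:] - mu[0]) ** 2 / (2.0 * (sig[1:] ** 2 / w[1:] + sig[0] ** 2 / w[0]))

def conditions(w):
    r = rates(w)
    return np.concatenate([r[1:] - r[0],                                                   # equal rates
                           [w[0] ** 2 / sig[0] ** 2 - np.sum(w[1:] ** 2 / sig[1:] ** 2)],  # balance
                           [np.sum(w) - 1.0]])                                             # simplex

sol = least_squares(conditions, x0=np.array([0.4, 0.3, 0.2, 0.1]), bounds=(1e-6, 1.0))
print("allocation:", np.round(sol.x, 3), " rate:", np.round(rates(sol.x).min(), 5))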
Since iKG_t,i^ϵ={ exp(-(μ_t,i-μ_t,I_t^*+ϵ)^2/2(σ_i^2/T_t,i+σ_I_t^*^2/T_t,I_t^*))-exp(-(μ_t,i-μ_t,I_t^*+ϵ)^2/2((T_t,i+2)σ_i^2/(T_t,i+1)^2+σ_I_t^*^2/T_t,I_t^*), i≠ I_t^*,∑_i'≠ I_t^*exp(-(μ_t,i'-μ_t,I_t^*+ϵ)^2/2(σ_i'^2/T_t,i'+σ_I_t^*^2/T_t,I_t^*))-∑_i'≠ I_t^*exp(-(μ_t,i'-μ_t,I_t^*+ϵ)^2/2(σ_i'^2/T_t,i'+(T_t,I_t^*+2)σ_I_t^*^2/(T_t,I_t^*+1)^2), i= I_t^*, .it is obvious that iKG_t,i^ϵ>0 for t>0. To prove the consistency, it suffices to prove that V=∅, and then the claim is straightforward based on the Strong Law of Large Numbers. For any δ_8>0 and arm i∉ V, there exists N_3 such that when n>N_3, |μ_n,i-μ_i|<δ_8, because arms not in V will be infinitely pulled. Since the exp(·) is a continuous function and σ_i^2/T_t,i-σ_i^2(T_t,i+2)/(T_t,i+1)^2=σ_i^2/((T_t,i+1)^2T_t,i)→0 holds for arm i∉ V, then for any δ_9>0, there exists N_4 such that when n>N_4, iKG_t,i^ϵ<δ_9.Arms i'∈ V are pulled for only a finite number of rounds. Then max_i'∈ VT_t,i' exists and we have σ_i'^2/((T_t,i'+1)^2T_t,i')>min_i'≠ I_t^*σ_i'^2/max_i'∈ V(T_t,i'+2)/(T_t,i'+1)^2. According to the continuity of the function exp(·), there exists δ_10>0 such that iKG_t,i'^ϵ>δ_10. Since δ_9 is arbitrary, let δ_9<δ_10, and then iKG_t,i'^ϵ>iKG_t,i^ϵ holds, which implies I_t∈ V. As the total number of rounds tend to infinity, V will become an empty set eventually. In other words, all the arms will be pulled infinitely and I_n^*=I^*=⟨ 1 ⟩ holds with probability 1.We next analyze the sampling rate each arm by the iKG-ϵ algorithm. Let δ_11=2δ_9>0, we know that when n is large, iKG_n,i^ϵ<δ_9=δ_11/2 holds for i∈𝔸. Then |iKG_n,i^ϵ-iKG_n,i'^ϵ|<iKG_n,i^ϵ+iKG_n,i'^ϵ<δ_11/2+δ_11/2=δ_11, where i≠ i'. For any i,i'∈𝔸 and i≠ i'≠⟨ 1 ⟩,|iKG_n,i^ϵ-iKG_n,i'^ϵ| = |exp(-(μ_n,i-μ_n,⟨ 1 ⟩+ϵ)^2/2(σ_i^2/T_n,i+σ_⟨ 1 ⟩^2/T_n,⟨ 1 ⟩))- exp(-(μ_n,i'-μ_n,⟨ 1 ⟩+ϵ)^2/2(σ_i'^2/T_n,i'+σ_⟨ 1 ⟩^2/T_n,⟨ 1 ⟩))+exp(-(μ_n,i'-μ_n,⟨ 1 ⟩+ϵ)^2/2((T_n,i'+2)σ_i'^2/(T_n,i'+1)^2+σ_⟨ 1 ⟩^2/T_n,⟨ 1 ⟩))-exp(-(μ_n,i-μ_n,⟨ 1 ⟩+ϵ)^2/2((T_n,i+2)σ_i^2/(T_n,i+1)^2+σ_⟨ 1 ⟩^2/T_n,⟨ 1 ⟩))| ≤2|exp(-(μ_n,i-μ_n,⟨ 1 ⟩+ϵ)^2/2(σ_i^2/T_n,i+σ_⟨ 1 ⟩^2/T_n,⟨ 1 ⟩))- exp(-(μ_n,i'-μ_n,⟨ 1 ⟩+ϵ)^2/2(σ_i'^2/T_n,i'+σ_⟨ 1 ⟩^2/T_n,⟨ 1 ⟩))| = 2|exp(-n(μ_n,i-μ_n,⟨ 1 ⟩+ϵ)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/w_⟨ 1 ⟩))- exp(-n(μ_n,i'-μ_n,⟨ 1 ⟩+ϵ)^2/2(σ_i'^2/w_i'+σ_⟨ 1 ⟩^2/w_⟨ 1 ⟩))|,where w_i=T_n,i/n is the sampling rate of arm i. For any δ_12=δ_8^2>0, we have |iKG_n,i^ϵ-iKG_n,i'^ϵ|<δ_11 if and only if |(μ_i-μ_⟨ 1 ⟩+ϵ)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/w_⟨ 1 ⟩)-(μ_i'-μ_⟨ 1 ⟩+ϵ)^2/2(σ_i'^2/w_i'+σ_⟨ 1 ⟩^2/w_⟨ 1 ⟩)|<δ_12by the continuity of the function exp(·) and |μ_n,i-μ_i|<δ_8. For arms i≠⟨ 1 ⟩,|iKG_n,i^ϵ-iKG_n,⟨ 1 ⟩^ϵ| = |exp(-(μ_n,i-μ_n,⟨ 1 ⟩+ϵ)^2/2(σ_i^2/T_n,i+σ_⟨ 1 ⟩^2/T_n,⟨ 1 ⟩))- ∑_i'≠⟨ 1 ⟩exp(-(μ_n,i'-μ_n,⟨ 1 ⟩+ϵ)^2/2(σ_i'^2/T_n,i'+σ_⟨ 1 ⟩^2/T_n,⟨ 1 ⟩))+∑_i'≠⟨ 1 ⟩exp(-(μ_n,i'-μ_n,⟨ 1 ⟩+ϵ)^2/2(σ_i'^2/T_n,i'+(T_n,⟨ 1 ⟩+2)σ_⟨ 1 ⟩^2/(T_n,⟨ 1 ⟩+1)^2))-exp(-(μ_n,i-μ_n,⟨ 1 ⟩+ϵ)^2/2((T_n,i+2)σ_i^2/(T_n,i+1)^2+σ_⟨ 1 ⟩^2/T_n,⟨ 1 ⟩))|.Notice that (T_n,i+2)σ_i^2/(T_n,i+1)^2=σ_i^2/(T_n,i+1/(T_n,i+2)). When n is large enough, 1/(T_n,i+2) is sufficiently small according to the consistency of the algorithm. 
Thenlim_n→∞exp(-(μ_n,i-μ_n,⟨ 1 ⟩+ϵ)^2/2(σ_i^2/T_n,i+σ_⟨ 1 ⟩^2/T_n,⟨ 1 ⟩))-exp(-(μ_n,i-μ_n,⟨ 1 ⟩+ϵ)^2/2((T_n,i+2)σ_i^2/(T_n,i+1)^2+σ_⟨ 1 ⟩^2/T_n,⟨ 1 ⟩)) = ∂(exp(-(μ_n,i-μ_n,⟨ 1 ⟩+ϵ)^2/2(σ_i^2/T_n,i+σ_⟨ 1 ⟩^2/T_n,⟨ 1 ⟩)))/∂ T_n,i =∂(exp(-n(μ_n,i-μ_n,⟨ 1 ⟩+ϵ)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/w_⟨ 1 ⟩)))/∂ w_i.Since |iKG_n,i^ϵ-iKG_n,⟨ 1 ⟩^ϵ|<δ_11 for i≠⟨ 1 ⟩, given δ_13>0, we have1-δ_13<|∑_i≠⟨ 1 ⟩∂(exp(-n(μ_n,i-μ_n,⟨ 1 ⟩+ϵ)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/w_⟨ 1 ⟩)))/∂ w_⟨ 1 ⟩/∂(exp(-n(μ_n,i-μ_n,⟨ 1 ⟩+ϵ)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/w_⟨ 1 ⟩)))/∂ w_i|<1+δ_13.By (<ref>), we have∂(exp(-n(μ_n,i-μ_n,⟨ 1 ⟩+ϵ)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/w_⟨ 1 ⟩)))/∂ w_i=∂(exp(-n(μ_n,i'-μ_n,⟨ 1 ⟩+ϵ)^2/2(σ_i'^2/w_i'+σ_⟨ 1 ⟩^2/w_⟨ 1 ⟩)))/∂ w_i'.Then 1-δ_13<∑_i≠⟨ 1 ⟩|∂(exp(-n(μ_n,i-μ_n,⟨ 1 ⟩+ϵ)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/w_⟨ 1 ⟩)))/∂ w_⟨ 1 ⟩/∂(exp(-n(μ_n,i-μ_n,⟨ 1 ⟩+ϵ)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/w_⟨ 1 ⟩)))/∂ w_i|<1+δ_13.Hence|w_⟨ 1 ⟩^2/σ_⟨ 1 ⟩^2-∑_i≠⟨ 1 ⟩w_i^2/σ_i^2|<δ_13. Since δ_13 can be arbitarily small, w_⟨ 1 ⟩^2/σ_⟨ 1 ⟩^2→∑_i≠⟨ 1 ⟩w_i^2/σ_i^2.We know that 1-ℙ{G_n^ϵ=G^ϵ}= ℙ{⋃_i∈ G_n^ϵ(θ_i<θ_⟨ 1 ⟩-ϵ)∪⋃_i∈𝔸∖ G_n^ϵ(θ_i>θ_⟨ 1 ⟩-ϵ)},andmax(max_i∈ G_n^ϵℙ(θ_i<θ_⟨ 1 ⟩-ϵ), max_i∈𝔸∖ G_n^ϵℙ(θ_i>θ_⟨ 1 ⟩-ϵ)) ≤ ℙ{⋃_i∈ G_n^ϵ(θ_i<θ_⟨ 1 ⟩-ϵ)∪⋃_i∈𝔸∖ G_n^ϵ(θ_i>θ_⟨ 1 ⟩-ϵ)} ≤ kmax(max_i∈ G_n^ϵℙ(θ_i<θ_⟨ 1 ⟩-ϵ), max_i∈𝔸∖ G_n^ϵℙ(θ_i>θ_⟨ 1 ⟩-ϵ)).Then1-ℙ{G_n^ϵ=G^ϵ}=̇exp(-(μ_n,i-μ_n,⟨ 1 ⟩+ϵ)^2/2(σ_n,i^2+σ_n,⟨ 1 ⟩^2)).We haveΓ^ϵ=lim_n→∞-1/nlog(1-ℙ{G_n^ϵ=G^ϵ})=min_i≠⟨ 1 ⟩(μ_i-μ_⟨ 1 ⟩+ϵ)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/w_⟨ 1 ⟩).By (<ref>),Γ^ϵ=(μ_i-μ_⟨ 1 ⟩+ϵ)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/w_⟨ 1 ⟩),∀ i≠⟨ 1 ⟩,where w_i in (<ref>) and (<ref>) is the solution of (12) in the main text.Next, we will show that for any ϵ-good arm identification algorithms, lim_n→∞-1/nlog(1-ℙ{G_n^ϵ=G^ϵ})≤Γ^ϵ. Let W≜{w=(w_1,…,w_k): ∑_i=1^kw_i=1 w_i≥ 0, ∀ i∈𝔸} be set of the feasible sampling rates of the k arms. The proof of this claim is divided into two stages. First, suppose that w_⟨ 1 ⟩=α is fixed for some 0<α<1. We will show that max_w∈ W, w_⟨ 1 ⟩=αmin_i≠⟨ 1 ⟩(μ_i-μ_⟨ 1 ⟩+ϵ)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/α) is achieved when ∑_i≠⟨ 1 ⟩w_i=1-α, (μ_i-μ_⟨ 1 ⟩+ϵ)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/α)=(μ_i'-μ_⟨ 1 ⟩+ϵ)^2/2(σ_i'^2/w_i'+σ_⟨ 1 ⟩^2/α), i≠ i'≠⟨ 1 ⟩.In other words, in this stage, we will prove the first and third equations in (12) of the main text. We prove it by contradiction. Suppose there exists a policy with sampling rates w'=(w'_1,w'_2,…,w'_k) of the k arms such that min_i≠⟨ 1 ⟩(μ_i-μ_⟨ 1 ⟩+ϵ)^2/2(σ_i^2/w'_i+σ_⟨ 1 ⟩^2/α)=max_w∈ W, w_⟨ 1 ⟩=αmin_i≠⟨ 1 ⟩(μ_i-μ_⟨ 1 ⟩+ϵ)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/α). Since the solution of (<ref>) is unique, there exists an arm i' satisfying (μ_i'-μ_⟨ 1 ⟩+ϵ)^2/2(σ_i'^2/w'_i'+σ_⟨ 1 ⟩^2/α)> min_i≠⟨ 1 ⟩(μ_i-μ_⟨ 1 ⟩+ϵ)^2/2(σ_i^2/w'_i+σ_⟨ 1 ⟩^2/α). We consider a new policy. There exists δ_14>0 such that w̃_i'=w'_i'-δ_14∈(0,1) and w̃_i=w'_i+δ_14/(k-2)∈(0,1) for i≠ i'≠⟨ 1 ⟩. Thenmin_i≠⟨ 1 ⟩(μ_i-μ_⟨ 1 ⟩+ϵ)^2/2(σ_i^2/w̃_i+σ_⟨ 1 ⟩^2/α)>min_i≠⟨ 1 ⟩(μ_i-μ_⟨ 1 ⟩+ϵ)^2/2(σ_i^2/w'_i+σ_⟨ 1 ⟩^2/α)=max_w∈ W, w_⟨ 1 ⟩=αmin_i≠⟨ 1 ⟩(μ_i-μ_⟨ 1 ⟩+ϵ)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/α),which yields a contradiction. Therefore, the first and third equations in (12) of the main text hold.In the second stage, we will prove the second equation in (12) of the main text. 
Consider the following optimization problem z(μ_i-μ_⟨ 1 ⟩+ϵ)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/α)=(μ_i'-μ_⟨ 1 ⟩+ϵ)^2/2(σ_i'^2/w_i'+σ_⟨ 1 ⟩^2/α), i, i'≠⟨ 1 ⟩ i≠ i',(μ_i-μ_⟨ 1 ⟩+ϵ)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/α)≥ z,i≠⟨ 1 ⟩,∑_i≠⟨ 1 ⟩w_i=1-α.The Lagrangian function of (<ref>) is L(α, λ_i)=z+∑_i≠⟨ 1 ⟩λ_i((μ_i-μ_⟨ 1 ⟩+ϵ)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/α)-z)+λ_1(∑_i≠⟨ 1 ⟩w_i-1+α),where λ_i's are the Lagrange multipliers. By the KKT conditions, we have λ_i∂((μ_i-μ_⟨ 1 ⟩+ϵ)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/α))/∂ w_i+λ_1=0 for all i≠⟨ 1 ⟩ and ∑_i≠⟨ 1 ⟩λ_i∂((μ_i-μ_⟨ 1 ⟩+ϵ)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/α))/∂ w_⟨ 1 ⟩+λ_1=0. Then∑_i≠⟨ 1 ⟩∂((μ_i-μ_⟨ 1 ⟩+ϵ)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/α))/∂ w_⟨ 1 ⟩/∂((μ_i-μ_⟨ 1 ⟩+ϵ)^2/2(σ_i^2/w_i+σ_⟨ 1 ⟩^2/α))/∂ w_i=1,i.e., w_⟨ 1 ⟩^2/σ_⟨ 1 ⟩^2=∑_i≠⟨ 1 ⟩w_i^2/σ_i^2.Let W≜{w=(w_1,…,w_k): ∑_i=1^kw_i=1 w_i≥ 0, ∀ i∈𝔸} be the feasible sampling rates of the k arms. Notice that the posterior convergence rate is related to the sampling rate of each arm. We consider to find the optimal sampling rate of BAI and formulate it as the following optimization problemmax x{ (μ_i-μ_1+ϵ)^2/2(σ_i^2/w_i+σ_1^2/w_1)≥ x∑_i=1^kw_i=1. .Applying KKT conditions to Problem (<ref>), we haveΓ^ϵ=max_w∈ Wmin_i≠ 1 (μ_i-μ_1+ϵ)^2/2(σ_i^2/w_i+σ_1^2/w_1),which implies the result. § PROOF OF THEOREM 3Our proof of Theorem 3 will be divided into the analysis of the consistency, sampling rates and asymptotic optimality of the iKG-F algorithm.We first show consistency, i.e., each arm will be pulled infinitely by the algorithm as the round n goes to infinity. Since iKG_t,i^F= ∑_j=1^m(exp(-(γ_j-μ_t,ij)^2/2σ_ij^2/T_t,i1{i∈𝒮_t^1})-exp(-(γ_j-μ_t,ij)^2/2(T_t,i+2)σ_ij^2/(T_t,i+1)^21{i∈𝒮_t^1}))+exp(-∑_j∈ℰ_t,i^2(γ_j-μ_t,ij)^2/2σ_ij^2/T_t,i1{i∈𝒮_t^2}) -exp(-∑_j∈ℰ_t,i^2(γ_j-μ_t,ij)^2/2(T_t,i+2)σ_ij^2/(T_t,i+1)^21{i∈𝒮_t^2}). It is obvious that iKG_t,i^F>0 for t>0. To prove the consistency, it suffices to prove that V=∅ and then the claim is straightforward based on the Strong Law of Large Numbers. For any δ_15>0 and i∉ V, there exists N_5 such that when n>N_5, |μ_n,i-μ_i|<δ_15, because arms not in V will be infinitely pulled. Since the exp(·) is a continuous function and σ_i^2/T_t,i-σ_i^2(T_t,i+2)/(T_t,i+1)^2=σ_i^2/((T_t,i+1)^2T_t,i)→0 holds for arm i∉ V, then for any δ_16>0, there exists N_6 such that when n>N_6, iKG_t,i^F<δ_16.Arms i'∈ V are pulled for only a finite number of rounds. Then max_i'∈ VT_t,i' exists and we have σ_i'^2/((T_t,i'+1)^2T_t,i')>min_i'∈𝔸σ_i'^2/max_i'∈ V(T_t,i'+2)/(T_t,i'+1)^2. According to the continuity of the function exp(·), there exists δ_17>0 such that iKG_t,i'^F>δ_17. Since δ_16 is arbitrary, let δ_16<δ_17 and then iKG_t,i'^F>iKG_t,i^F holds, which implies I_t∈ V. As the total number of rounds tend to infinity, V will become an empty set eventually. In other words, all the arms will be pulled infinitely.We next analyze the sampling rate each arm by the iKG-F algorithm. Let δ_18=2δ_16>0, we know that when n is large, iKG_n,i^F<δ_16=δ_18/2 holds for i∈𝔸. Then |iKG_n,i^F-iKG_n,i'^F|<iKG_n,i^F+iKG_n,i'^F<δ_18/2+δ_18/2=δ_18, where i≠ i'. 
For any i, i'∈𝔸,|iKG_n,i^F-iKG_n,i'^F| = |∑_j=1^mexp(-(γ_j-μ_n,ij)^2/2σ_ij^2/T_n,i1{i∈𝒮_n^1})-∑_j=1^mexp(-(γ_j-μ_n,i'j)^2/2σ_i'j^2/T_n,i'1{i'∈𝒮_n^1})+exp(-∑_j∈ℰ_n,i^2(γ_j-μ_n,ij)^2/2σ_ij^2/T_n,i1{i∈𝒮_n^2}) -exp(-∑_j∈ℰ_n,i'^2(γ_j-μ_n,i'j)^2/2σ_i'j^2/T_n,i'1{i'∈𝒮_n^2})+exp(-(γ_j-μ_n,i'j)^2/2(T_n,i'+2)σ_i'j^2/(T_n,i'+1)^21{i'∈𝒮_n^1})-exp(-(γ_j-μ_n,ij)^2/2(T_n,i+2)σ_ij^2/(T_n,i+1)^21{i∈𝒮_n^1})+exp(-∑_j∈ℰ_n,i'^2(γ_j-μ_n,i'j)^2/2(T_n,i'+2)σ_i'j^2/(T_n,i'+1)^21{i'∈𝒮_n^2})-exp(-∑_j∈ℰ_n,i^2(γ_j-μ_n,ij)^2/2(T_n,i+2)σ_ij^2/(T_n,i+1)^21{i∈𝒮_n^2}) | ≤2|∑_j=1^mexp(-(γ_j-μ_n,ij)^2/2σ_ij^2/T_n,i1{i∈𝒮_n^1})-∑_j=1^mexp(-(γ_j-μ_n,i'j)^2/2σ_i'j^2/T_n,i'1{i'∈𝒮_n^1})+exp(-∑_j∈ℰ_n,i^2(γ_j-μ_n,ij)^2/2σ_ij^2/T_n,i1{i∈𝒮_n^2}) -exp(-∑_j∈ℰ_n,i'^2(γ_j-μ_n,i'j)^2/2σ_i'j^2/T_n,i'1{i'∈𝒮_n^2})| ≤2|mmax_j∈ℰ_n,i^1exp(-(γ_j-μ_n,ij)^2/2σ_ij^2/T_n,i1{i∈𝒮_n^1})-mmax_j∈ℰ_n,i^1exp(-(γ_j-μ_n,i'j)^2/2σ_i'j^2/T_n,i'1{i'∈𝒮_n^1})+exp(-∑_j∈ℰ_n,i^2(γ_j-μ_n,ij)^2/2σ_ij^2/T_n,i1{i∈𝒮_n^2}) -exp(-∑_j∈ℰ_n,i'^2(γ_j-μ_n,i'j)^2/2σ_i'j^2/T_n,i'1{i'∈𝒮_n^2})| = 2|mexp(-w_imin_j∈ℰ_n,i^1(γ_j-μ_n,ij)^2/2σ_ij^21{i∈𝒮_n^1})-mexp(-w_i'min_j∈ℰ_n,i^1(γ_j-μ_n,i'j)^2/2σ_i'j^21{i'∈𝒮_n^1})+exp(-w_i∑_j∈ℰ_n,i^2(γ_j-μ_n,ij)^2/2σ_ij^21{i∈𝒮_n^2}) -exp(-w_i'∑_j∈ℰ_n,i'^2(γ_j-μ_n,i'j)^2/2σ_i'j^21{i'∈𝒮_n^2})|,where w_i=T_n,i/n is the sampling rate of arm i. We have shown that |μ_n,i-μ_i|<δ_15 for any δ_15>0 and i=1,2,…,k. We can find a sufficiently large positive integer n' such that when n>n', S_n^1=S^1, S_n^2=S^2, ℰ_n,i^1=ℰ_i^1 and ℰ_n,i^2=ℰ_i^2, where i=1,2,…,k and j=1,2,…,m. Note that S_n^1∩S_2^2=∅ and S^1∩S^2=∅. If i∈S_n^1=S^1, |iKG_n,i^F-iKG_n,i'^F|≤2m|exp(-w_imin_j∈ℰ_n,i^1(γ_j-μ_n,ij)^2/2σ_ij^2})-exp(-w_i'min_j∈ℰ_n,i'^1(γ_j-μ_n,i'j)^2/2σ_i'j^2)| ≤ 2m|exp(-w_imin_j∈ℰ_i^1(γ_j-μ_ij-δ_15)^2/2σ_ij^2})-exp(-w_i'min_j∈ℰ_i'^1(γ_j-μ_i'j+δ_15)^2/2σ_i'j^2)|. We have shownthat |iKG_n,i^F-iKG_n,i'^F|<δ_18. Hence |w_imin_j∈ℰ_i^1(γ_j-μ_ij)^2/2σ_ij^2-w_i'min_j∈ℰ_i'^1(γ_j-μ_i'j)^2/2σ_i'j^2|<δ_19 for any δ_19=δ_15^2>0 by the continuity of the function exp(·). We can get similar result when i∈S_n^2=S^2. Hence, |iKG_n,i^F-iKG_n,i'^F|<δ_18 if and only if |w_imin_j∈ℰ_i^1(γ_j-μ_ij)^2/2σ_ij^21{i∈𝒮^1}+w_i∑_j∈ℰ_i^2(γ_j-μ_ij)^2/2σ_ij^21{i∈𝒮^2}-w_i'min_j∈ℰ_i'^1(γ_j-μ_i'j)^2/2σ_i'j^21{i'∈𝒮^1}+w_i'∑_j∈ℰ_i'^2(γ_j-μ_i'j)^2/2σ_i'j^21{i'∈𝒮^2}|<δ_19by the continuity of the function exp(·) and |μ_n,i-μ_i|<δ_15.We have known that1-ℙ{𝒮_n^1=𝒮^1}= ℙ{⋃_i∈𝒮_n^1(⋃_j=1^m(θ_ij>γ_j))∪⋃_i∈𝒮_n^2(⋂_j=1^m(θ_ij≤γ_j))}.We havemax(max_i∈𝒮_n^1ℙ(⋃_j=1^m(θ_ij>γ_j)), max_i∈𝒮_n^2ℙ(⋂_j=1^m(θ_ij≤γ_j) )) ≤ ℙ{⋃_i∈𝒮_n^1(⋃_j=1^m(θ_ij>γ_j))∪⋃_i∈𝒮_n^2(⋂_j=1^m(θ_ij≤γ_j))} ≤ kmax(max_i∈𝒮_n^1ℙ(⋃_j=1^m(θ_ij>γ_j)), max_i∈𝒮_n^2ℙ(⋂_j=1^m(θ_ij≤γ_j) )).Then1-ℙ{𝒮_n^1=𝒮^1}=̇max(max_i∈𝒮_n^1ℙ(⋃_j=1^m(θ_ij>γ_j)), max_i∈𝒮_n^2ℙ(⋂_j=1^m(θ_ij≤γ_j) )).For arm i∈𝒮_n^1,max_j∈ℰ_n,i^1ℙ(θ_ij>γ_j)≤ℙ(⋃_j=1^m(θ_ij>γ_j))≤ mmax_j∈ℰ_n,i^1ℙ(θ_ij>γ_j).For arm i∈𝒮_n^2,ℙ(⋂_j=1^m(θ_ij≤γ_j))→ℙ(⋂_j∈ℰ_n,i^2(θ_ij≤γ_j)),because lim_n→∞ℙ(⋂_j∈ℰ_n,i^1(θ_ij≤γ_j))→1. Hence1-ℙ{𝒮_n^1=𝒮^1}=̇exp(-min_i∈𝒮_n^1(γ_j-μ_n,ij)^2/2σ_ij^2/T_n,i1{i∈𝒮_n^1} +exp(-∑_j∈ℰ_n,i^2(γ_j-μ_n,ij)^2/2σ_ij^2/T_n,i1{i∈𝒮_n^2}).We haveΓ^F=lim_n→∞-1/nlog(1-ℙ{𝒮_n^1=𝒮^1})=min_i∈𝔸w_imin_j∈ℰ_i^1(γ_j-μ_ij)^2/2σ_ij^21{i∈𝒮^1}+w_i∑_j∈ℰ_i^2(γ_j-μ_ij)^2/2σ_ij^21{i∈𝒮^2}.By (<ref>),Γ^F=w_imin_j∈ℰ_i^1(γ_j-μ_ij)^2/2σ_ij^21{i∈𝒮^1}+w_i∑_j∈ℰ_i^2(γ_j-μ_ij)^2/2σ_ij^21{i∈𝒮^2},∀ i∈𝔸,where w_i in (<ref>) and (<ref>) is the solution of (16) in the main text.Next, we will show that for any feasible arm identification algorithms, lim_n→∞-1/nlog(1-ℙ{𝒮_n^1=𝒮^1})≤Γ^F. 
Let W≜{w=(w_1,…,w_k): ∑_i=1^kw_i=1 w_i≥ 0, ∀ i∈𝔸} be set of the feasible sampling rates of the k arms. We prove it by contradiction. Suppose there exists a policy with sampling rates w'=(w'_1,w'_2,…,w'_k) of the k arms such thatw'_imin_j∈ℰ_i^1(γ_j-μ_ij)^2/2σ_ij^21{i∈𝒮^1}+w'_i∑_j∈ℰ_i^2(γ_j-μ_ij)^2/2σ_ij^21{i∈𝒮^2} = max_w∈ Wmin_i∈𝔸w_imin_j∈ℰ_i^1(γ_j-μ_ij)^2/2σ_ij^21{i∈𝒮^1}+w_i∑_j∈ℰ_i^2(γ_j-μ_ij)^2/2σ_ij^21{i∈𝒮^2}.We will show that max_w∈ Wmin_i∈𝔸w_imin_j∈ℰ_i^1(γ_j-μ_ij)^2/2σ_ij^21{i∈𝒮^1}+w_i∑_j∈ℰ_i^2(γ_j-μ_ij)^2/2σ_ij^21{i∈𝒮^2} is achieved whenw_imin_j∈ℰ_i^1(γ_j-μ_ij)^2/2σ_ij^21{i∈𝒮^1}+w_i∑_j∈ℰ_i^2(γ_j-μ_ij)^2/2σ_ij^21{i∈𝒮^2}=w_i'min_j∈ℰ_i'^1(γ_j-μ_i'j)^2/2σ_i'j^21{i'∈𝒮^1}+w_i'∑_j∈ℰ_i'^2(γ_j-μ_i'j)^2/2σ_i'j^21{i'∈𝒮^2}, i≠ i'.Since the solution of (<ref>) is unique, there exists an arm i' satisfyingw'_i'min_j∈ℰ_i'^1(γ_j-μ_i'j)^2/2σ_i'j^21{i'∈𝒮^1}+w'_i'∑_j∈ℰ_i'^2(γ_j-μ_i'j)^2/2σ_i'j^21{i'∈𝒮^2} ≥ min_i∈𝔸w'_imin_j∈ℰ_i^1(γ_j-μ_ij)^2/2σ_ij^21{i∈𝒮^1}+w'_i∑_j∈ℰ_i^2(γ_j-μ_ij)^2/2σ_ij^21{i∈𝒮^2}.We consider a new policy. There exists δ_20>0 such that w̃_i'=w'_i'-δ_20∈(0,1) and w̃_i=w'_i+δ_20/(k-2)∈(0,1) for i≠ i'. Thenmin_i∈𝔸w̃_imin_j∈ℰ_i^1(γ_j-μ_ij)^2/2σ_ij^21{i∈𝒮^1}+w̃_i∑_j∈ℰ_i^2(γ_j-μ_ij)^2/2σ_ij^21{i∈𝒮^2} > min_i∈𝔸w'_imin_j∈ℰ_i^1(γ_j-μ_ij)^2/2σ_ij^21{i∈𝒮^1}+w'_i∑_j∈ℰ_i^2(γ_j-μ_ij)^2/2σ_ij^21{i∈𝒮^2} = max_w∈ Wmin_i∈𝔸w_imin_j∈ℰ_i^1(γ_j-μ_ij)^2/2σ_ij^21{i∈𝒮^1}+w_i∑_j∈ℰ_i^2(γ_j-μ_ij)^2/2σ_ij^21{i∈𝒮^2},which yields a contradiction. Therefore, the equations in (16) of the main text hold.Let W≜{w=(w_1,…,w_k): ∑_i=1^kw_i=1 w_i≥ 0, ∀ i∈𝔸} be the feasible sampling rates of the k arms. Notice that the posterior convergence rate is related to the sampling rate of each arm. We consider to find the optimal sampling rate of BAI and formulate it as the following optimization problemmax x{ w_imin_j∈ℰ_i^1(γ_j-μ_ij)^2/2σ_ij^21{i∈𝒮^1}+w_i∑_j∈ℰ_i^2(γ_j-μ_ij)^2/2σ_ij^21{i∈𝒮^2}≥ x∑_i=1^kw_i=1. .Applying KKT conditions to Problem (<ref>), we haveΓ^F=max_w∈ Ww_imin_j∈ℰ_i^1(γ_j-μ_ij)^2/2σ_ij^21{i∈𝒮^1}+w_i∑_j∈ℰ_i^2(γ_j-μ_ij)^2/2σ_ij^21{i∈𝒮^2},which implies the result. § ADDITIONAL NUMERICAL RESULTS In this section, we provide additional numerical results for the experiments conducted in Section 5 of the main text. Figures 1(a)-7(a) show how the probabilities of false selection of the compared algorithms change with the sample sizes for the best arm identification problem, and Figures 1(b)-7(b) show the sampling rates of the algorithms on some selected arms. It can be observed in Figures 1(a)-7(a) that the proposed iKG algorithm performs the best, followed by TTEI, KG, EI and the equal allocation. On the log scale, the probability of false selection (PFS) values of the iKG algorithm demonstrate linear patterns, indicating the potentially exponential convergence rates. For both EI and KG, the rates of the posterior convergence are not optimal, which might influence their empirical performances. The equal allocation performs the worst in general. In Figures 1(b)-7(b), we can see that TTEI always allocates half samples to the best arm when β=0.5, EI allocates too many samples to the best arm while KG allocates too few samples to the best arm.Figures 8(a)-14(a) show how the probabilities of false selection of the compared algorithms change with the sample sizes for the ϵ-good arm identification, and Figures 8(b)-14(b) show the sampling rates of the algorithms on some selected arms. It can be observed in Figures 8(a)-14(a) the proposed iKG-ϵ algorithm performs the best and demonstrates a linear pattern on the log scale. 
(ST)^2 and APT are inferior, and the equal allocation performs the worst. In Figures 8(b)-14(b), we can see that APT allocates too few samples to the best arm and too many samples to the arms near the threshold. (ST)^2 also allocates too few samples to the best arm, which influences the accuracy of the threshold estimate.

Figures 15(a)-21(a) show how the probabilities of false selection of the compared algorithms change with the sample sizes for the feasible arm identification, and Figures 15(b)-21(b) show the sampling rates of the algorithms on some selected arms. The results in Figures 15(a)-21(a) are similar to those in Figures 1(a)-14(a). The proposed iKG-F algorithm has the best performance, followed by MD-UCBE, equal allocation and MD-SAR. In Figures 15(b)-21(b), we can see that MD-UCBE and MD-SAR allocate too many samples to the arms near the constraint limits. | http://arxiv.org/abs/2310.17901v1 | {
"authors": [
"Yang Le",
"Gao Siyang",
"Ho Chin Pang"
],
"categories": [
"cs.LG",
"stat.ML"
],
"primary_category": "cs.LG",
"published": "20231027052502",
"title": "Improving the Knowledge Gradient Algorithm"
} |
Pouring by Feel: An Analysis of Tactile and Proprioceptive Sensing for Accurate Pouring
Pedro Piacenza, Daewon Lee, Volkan Isler
=======================================================================================

As service robots begin to be deployed to assist humans, it is important for them to be able to perform a skill as ubiquitous as pouring. Specifically, we focus on the task of pouring an exact amount of water without any environmental instrumentation, that is, using only the robot's own sensors to perform this task in a general way robustly. In our approach we use a simple PID controller which uses the measured change in weight of the held container to supervise the pour. Unlike previous methods which use specialized force-torque sensors at the robot wrist, we use our robot joint torque sensors and investigate the added benefit of tactile sensors at the fingertips. We train three estimators from data which regress the poured weight out of the source container and show that we can accurately pour within 10 ml of the target on average while being robust enough to pour at novel locations and with different grasps on the source container. Video: <https://youtu.be/fQw6i8FpENI>

§ INTRODUCTION
Pouring is a common daily task, especially in the context of food preparation, housekeeping, bartending and hospitality services. As service robots begin to be deployed to assist humans in various environments, it is important for general purpose robots to be able to accurately pour a specific amount of liquid into a container.

Robotic pouring has been widely demonstrated using different modalities such as vision <cit.>, audio <cit.> and haptics <cit.>, involving varying levels of setup instrumentation and structure. Inspired by human pouring, vision seems to be the most relevant sensor modality. However, the perception task of determining the current liquid level in the receiving container is far from trivial, and most work in this area requires controlled conditions. Real-world scenarios, where containers may be opaque, liquid color varies, background and lighting are not controlled, and the camera point of view might be less than ideal, complicate this method even further.

Another common approach is the use of force sensors at the robot wrist or underneath the receiving container to measure the weight of the poured liquid. In some contexts, such as a kitchen robot, it may be acceptable to designate a specialized area equipped with a force sensor to assist in the pouring task. On the other hand, a mobile service robot would be expected to perform this task without such instrumentation (for example, for a human holding the glass). For this reason we are interested in investigating how accurately we can pour liquids without any environmental instrumentation such as fixed cameras or force sensors underneath receiving containers. We envision a robot that can approach a table and pour drinks accurately and reliably using only its own sensors. More specifically, we focus on the ability to measure how much weight has been poured out of the held container by using the robot arm joint torque sensors supplemented with tactile sensation at the fingertips. Unlike previous work, we intentionally decide against the use of a dedicated force-torque sensor at the wrist to avoid the extra payload and wiring in its integration into the manipulator. Instead, we focus on using the robot arm's built-in joint torque sensors, as this capability is becoming more popular and widely available in commercial robot arms.
This makes the task more challenging, as the measurements provided by joint torque sensors are noisier and less accurate compared to those of a dedicated force-torque sensor. Moreover, joint torques do not provide a direct measurement of forces at the robot end-effector. To bridge this potential gap in performance, we introduce the use of tactile sensors at the fingertips, which relay information about the container weight through skin deformation caused by shear forces. Human studies have shown that skin deformation at the fingertips is a big contributor to how we perceive the weight of objects, especially for light ones <cit.>. Our hypothesis is that we can take advantage of tactile signals proportional to skin deformation to compensate for the lack of detail in our joint torque signals, and effectively determine the change in weight at the source container.To study the use of torque and tactile sensors in the context of pouring, we start by training three estimators: one using mainly tactile information, another using the joint torques information and the last one combining the two sensing modalities. These estimators can regress the weight poured out of the held container at any given moment. To train them, we collect pouring trials using a simple PID controller which regulates the pouring rate by adjusting the wrist velocity. The controller's feedback is provided by a force sensor placed underneath the receiving container which is only needed at training time. Assuming all weight lost at the held container arrives later at the receiving container with no spillage, we can adjust the ground truth signal by shifting it in time to represent the weight as it leaves the held container and relate it to our sensing modalities of interest. We evaluate our estimators performance in two ways: first, we compute their mean squared error against ground truth on our test set, and second, we measure their pouring accuracy in ml across different trials when deployed in real time on the robot, including pours using novel grasps and at novel locations not seen in training. We will show that our estimator allows for very accurate pouring without any sensors external to our robot. To summarize our contributions, this paper is the first to (1) study the use of both joint torque sensors and tactile sensors for pouring tasks; (2) make use of the Jacobian transpose formulation as an approximation of end effector forces during pouring tasks; and (3) demonstrate accurate pouring robust to grasp positions and pouring locations.§ RELATED WORK At the heart of the robotic pouring task lies the problem of generating a trajectory that regulates the pouring rate to avoid overflow, spillage and, in some cases, achieve a certain target volume at the receiving container. A common approach is to learn these trajectories from expert demonstrations<cit.>. Rozo et al.<cit.> showed that they can teach a robot to pour drinks from human demonstrations using only force based feedback obtained from a force-torque sensor installed at the end effector of a robot arm. As opposed to our approach, they always pour the same amount (100 ml) and report only binary results as success or failure without evaluating the accuracy of the poured amount. Huang et al.<cit.> uses a motion tracker to collect data from human demonstrations, and a force sensor underneath the receiving container, to develop a model which results in pouring errors between 4 and 12ml. 
Others have tried to leverage fluid simulations to learn pouring trajectories <cit.> from synthetic data using deep learning. While these methods circumvent the need to collect data or require human demonstration, they are computationally expensive and may struggle to match the pouring accuracy of closed-loop systems which focus on directly sensing the liquid level or weight. Since humans heavily rely on vision to perform pouring tasks, researchers attempted to use different computer vision techniques for robotic pouring <cit.> as well as incorporated auxiliary sensing modalities such as audio <cit.>. Kennedy et al.<cit.> demonstrates an extremely accurate system capable of pouring from unknown containers with an error of under 5 ml. However, because their system is meant for wet-labs, they use a highly instrumented and controlled setting for estimating the volume at the receiving container from combined visual feedback and weight measurements. Zhang et al. <cit.> uses only visual feedback to decompose the pouring task into multiple hierarchies and build a logical graph which controls the decision process. This work only reports binary pouring results as success or failure without tackling the case of pouring a specific amount. Schenck et al.<cit.> aims to solve the problem of pouring a specific amount in a general setting using only visual feedback. Similarly to our work, they also use a simple PID controller, but they use a deep network to classify pixels as liquid and then estimate the volume based on these detections. With this approach they demonstrate an average pouring error of 38ml.In conclusion, to the best of our knowledge, pouring a specific amount of liquid using only proprioceptive sensors or tactile feedback has not been demonstrated so far. § METHODOLOGY§.§ Experimental SetupOur experimental setup consists of a Kinova Gen3 robot arm with 7 degrees of freedom with a Robotiq 2F-85 gripper as our manipulator (see Figure <ref>). We replace the original fingers of the gripper with custom designed mounts to hold two BioTac tactile sensors <cit.>. Because we are interested in the skin deformation at the fingertips, we use the 19 impedance signals delivered by the BioTacs and discard the sensor's AC and DC pressure measurements.The robot arm is mounted on a tabletop where we perform our experiments. A force-torque sensor (ATI Axia80-M8) is fixed to the table in front of the robot and serves as a base where we place the receiving container. To one side, we also place a 3D printed fixture which allows the robot to find the source container repeatably in the same position. A pouring trial begins with the robot in its default resting position. Using pre-programmed waypoints and grasp position, the robot proceeds to grasp the source container, lifts it, and moves it to the pouring position above the receiving container. At this point a pouring controller takes over, and the pouring motion begins using only the rotation of the robot wrist, while the rest of the arm holds position. Once the pouring controller determines that the target weight has been achieved, the wrist rotates back to the initial position with the source container perfectly upright. Finally, the source container is returned to its fixture for the next trial. During the pouring trials, our objective is to continuously sense how much weight has left the source container. 
While collecting data, the force sensor provides this value which we consider our ground truth after adjusting for the time delay that exists between liquid leaving the source container and arriving to the receiving container. The models we train will later replace the force sensor, and the controller will run based on the weight estimates that the model outputs. Note that we do not intend to estimate the absolute weight of the held container, but instead we are interested in the weight change experienced at the end effector. We can compute the forces at the end effector in robot base coordinates from joint torques using the Jacobian transpose formulation λ = J^T τ where τ are the forces applied at the end effector and λ represent the joint torque values. Therefore, we can solve for the forces applied at the end effector (“EE Forces”) using the Moore-Penrose pseudo inverse:τ = J^+Tλ. Note that we obtain a 6 dimensional vector composed of forces and torques across all axes in robot coordinate frame: τ = (Fx, Fy, Fz, Tx, Ty, Tz)This analytical formulation does not require training data, and we can extract the force component Fz to represent the change in weight at the end effector. We will refer to this method as “Analytical Fz” and use it as a baseline to compare our data-driven estimators. Note that we do not want to use the raw joint torques for training, since the learned model would not generalize to new pouring locations. However, because this formulation assumes the manipulator is in static equilibrium, we can only use it as an approximation. In the next subsections we describe our pouring controller implementation, the data collection process, and finally the implementation of our poured weight estimators. §.§ Pouring Controller We design our pouring controller as a finite state machine with three states (Figure <ref>). The control frequency is determined by the slowest of our sensor feeds, which is the tactile signal from the BioTacs at 100Hz. During the first state, a constant velocity is commanded to the robot wrist to begin the pouring. As the wrist rotates, we monitor the signal from the force sensor to determine when the pouring actually begins. To achieve this, we store a window of readings and compute the first derivative of these historic measurements. We declare that the pouring has started once the average value computed over the first derivative exceeds a given threshold. This method provides robustness to the small drift that may occur on the force sensor and the noisy output of our estimators when deployed on the robot. Both the window size and the threshold are tunable parameters. When we detect the pouring has begun, we build a smooth trapezoidal trajectory to be used as reference by the PID controller. This curve is parametrized by a maximum acceleration and velocity, and the initial value corresponds to the latest force measurement. The maximum velocity determines the desired pouring rate in units of N/s. With this trajectory built, we transition to the second state.During the second state, the PID controller is enabled and becomes the only actor controlling the wrist velocity. At every control iteration, every 10ms, we check whether the target weight has been achieved within a small tolerance value. 
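Before describing the final state, a compact Python sketch of the building blocks of this control loop may help; the gains, thresholds and trapezoidal-reference details below are illustrative assumptions rather than the values used on the robot:

import numpy as np

DT = 0.01                         # 100 Hz loop, limited by the BioTac stream
KP, KI, KD = 0.4, 0.0, 0.02       # assumed PID gains (the paper samples gains per trial)

def pour_started(weights, slope=0.05, dt=DT):
    # Declare pouring started when the mean first derivative of a window of
    # weight readings exceeds a threshold (robust to sensor drift and noise).
    return np.mean(np.diff(weights) / dt) > slope

def trapezoidal_reference(w0, target, rate, accel, dt=DT):
    # Reference weight profile from the latest measurement w0 up to the target,
    # limited by a maximum pouring rate (N/s) and acceleration.
    ref, w, v = [w0], w0, 0.0
    while target - w > 1e-3:
        v = min(rate, v + accel * dt, np.sqrt(2.0 * accel * (target - w)))
        w += v * dt
        ref.append(w)
    return np.array(ref)

def pid_step(error, prev_error, integral, dt=DT):
    # PID on the weight-tracking error; the output is the commanded wrist velocity.
    integral += error * dt
    wrist_velocity = KP * error + KI * integral + KD * (error - prev_error) / dt
    return wrist_velocity, integral

ref = trapezoidal_reference(2.0, 3.0, rate=0.4, accel=0.5)   # assumed 2 N -> 3 N pour
print(len(ref), "reference samples")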
If so, we transition to the third and final state, where a new trapezoidal trajectory is generated to rotate the wrist back to its upright position and finalize the pour.§.§ Data Collection To train our network, we collected 250 pouring trials using the setup described earlier. The same source container and grasp position is used on all these trials. For each trial, we sample each of the following parameters according to a uniform distribution from the interval specified in parenthesis: P gain (0.15 to 0.8), D gain (0 to 0.04), pouring speed (0.1 to 0.8 N/s), weight at source container (200 to 350ml or equivalently 1.96 to 3.92N), and target weight (101 to 306 ml or equivalently 1 to 3N). To ensure pouring trials never results in simply dumping all contents of the source container, the target weight is capped, when necessary, to 75% of the source container weight.Before each trial begins, a trial configuration is sampled, and a human operator fills the source container with the appropriate amount of water using a scale which measures the contents with ± 1 gram error (we use the known density of water to convert ml to grams). The operator then places the source container in its fixture, and executes the pour sequence, in which the robot arm proceeds to grasp the container, performs the pour, places the container back in its fixture and retracts to start the next trial. During each trial, we record training data after the robot has positioned the source container above the receiving container and the actual pouring is about to begin. At this point, we collect a baseline for both the BioTac signals and our computed forces at the end effector. The baseline measurements are simple averages over a 1 second window to further filter out noise, and are subtracted from all subsequent measurements. After recording these baseline measurements we begin logging data for 2 seconds before the controller starts tilting the container and the pouring task begins. The logging of data finishes 4 seconds after the controller deems that we have reached the target weight. Data is logged at 100Hz, which is the sampling frequency of the BioTacs, the slowest of all of our sensors. Joint positions, velocities and torques from the robot arm are published at 1 Khz. We configure the force sensor to publish at 200Hz. In the case of the force sensor, we log only the most recent value, which is filtered by the sensor internally with a low pass filter with a cutoff frequency of 1.3Hz. The robot arm data and the tactile data, on the other hand, are stored on a sliding window which represents all measurements over the last 100ms (equivalent to the latest 100 and 10 measurements for the robot arm and BioTacs respectively). To reduce noise in both joint torque sensors and tactile sensors we log the average value computed over this window. For joint positions and velocities we log the most recent values. §.§ Poured Weight Estimators Our objective is to use our robot arm proprioception (position, velocity and joint torque sensors) and tactile signals to estimate the weight that has left the source container during a pouring task. Using the data collected as described in the previous subsection, we train several neural networks to regress this quantity.We will benchmark three different estimators to evaluate the contribution of different sensing modalities. The first estimator will use both tactile and end-effector (EE) forces information. We call it Multimodal estimator. 
The second and third estimators will use tactile and EE forces only respectively and we will refer to them as Tactile estimator and Proprioceptive estimator. Note that all three estimators make use of wrist position and velocity.For both the Multimodal and Tactile estimators, the first stage of our network is a pair of identical encoders to process the impedance signals from each BioTac sensor. These encoders consist of a single linear layer with a ReLU activation function which takes all 19 signals from the BioTac and outputs a vector of 4 dimensions. Note that even though both encoders have the same architecture, they do not share weights and are trained independently. The second stage of our network is a simple MLP which processes different feature vectors depending on the estimator. For the Multimodal estimator, the encoded tactile signals are concatenated with the wrist position, velocity, and the forces at the end effector computed with the Jacobian transpose method described in section <ref>. The Tactile estimator is identical minus the EE forces. Finally the Proprioceptive estimator forgoes the tactile information, and the feature vector is composed only of the wrist position and velocity plus the EE forces (see Figure <ref>)This results in feature vectors of dimensionality 16, 10 and 8 for the Multimodal, Tactile and Proprioceptive estimators. They become the input to a fully connected network with a single hidden layer of 8, 4 and 4 neurons for each estimator respectively. We use the ReLU activation function except on the output layer which uses a linear transfer function. The loss function used to train these networks is the mean squared error (MSE) between the predicted value and ground truth. To compute this ground truth we need to adjust the force sensor signal to represent how much weight has left the source container, instead of how much weight has arrived at the receiving container. From the geometry of the source container, the grasp position and the joint angles, we can compute the exact distance from the spout to the receiving container at all given times. We then compute the time it takes for the liquid to travel this distance using a simple free fall model with zero initial velocity. Finally, we shift the recorded data in time to obtain our ground truth signal. This method also circumvents the difficulty of tuning the gains for a system with time-delayed feedback <cit.> when we deploy our estimators to provide feedback in our controller.To train these estimators we split our data at the trial level, i.e. the data selected for training, validation and testing is comprised of full trials, not by individual datapoints. We randomly chose 210, 20 and 20 trials to be used for training, validation and testing respectively. The training dataset is comprised of 500K datapoints. We train the network for 2000 epochs with a batch size of 5000. We use the Adam optimizer with a fixed step decaying learning rate (initial value of 0.005 which decays with a ratio of 0.7 every 125 epochs). § EVALUATION AND RESULTSOur objective is to replace the force sensor used during training with a regressor which outputs the current weight poured out of the source container. In the previous section we described three different estimators to be trained using either only tactile, only end effector forces or both modalities. We also propose the use of an analytical formulation where we take the z component of the computed forces at the end effector in robot coordinates as a baseline comparison. 
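Before turning to the evaluation, a minimal sketch of the two ingredients just described may be useful: the Multimodal estimator (two 19→4 BioTac encoders, a 16-dimensional feature vector, one hidden layer of 8 units) and the Analytical Fz baseline obtained from joint torques. The sketch uses PyTorch and NumPy; the Jacobian would come from the arm's kinematic model, and all names are assumptions rather than the paper's actual implementation.

import numpy as np
import torch
import torch.nn as nn

class MultimodalEstimator(nn.Module):
    """Two BioTac encoders (19 -> 4) + wrist position/velocity + 6D EE wrench -> poured weight."""
    def __init__(self):
        super().__init__()
        self.enc_left = nn.Sequential(nn.Linear(19, 4), nn.ReLU())
        self.enc_right = nn.Sequential(nn.Linear(19, 4), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 1))

    def forward(self, tac_l, tac_r, wrist_pos, wrist_vel, ee_wrench):
        feat = torch.cat([self.enc_left(tac_l), self.enc_right(tac_r),
                          wrist_pos, wrist_vel, ee_wrench], dim=-1)
        return self.head(feat)

def ee_wrench_from_torques(jacobian, joint_torques):
    # Analytical end-effector wrench (Fx, Fy, Fz, Tx, Ty, Tz) via the
    # Moore-Penrose pseudo-inverse of the Jacobian transpose.
    return np.linalg.pinv(jacobian.T) @ joint_torques

model = MultimodalEstimator()
out = model(torch.zeros(1, 19), torch.zeros(1, 19),
            torch.zeros(1, 1), torch.zeros(1, 1), torch.zeros(1, 6))
print(out.shape)   # torch.Size([1, 1]): predicted change in weight

The Tactile and Proprioceptive variants simply drop the EE wrench or the tactile encoders, giving the 10- and 8-dimensional feature vectors and 4-unit hidden layers mentioned above.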
Our first evaluation of these methods consists in comparing their individual predictions against ground truth on our collected test set, composed of 20 randomly selected trials. The main metric for comparison is the mean square error (MSE) between the estimator output and the ground truth signal aggregated over all test trials.The baseline method of computing the change in Fz through the Jacobian transpose yields a MSE of 8.06 mN. The MSE for the Tactile, Proprioceptive and Multimodal estimator were 7.81 mN, 0.62 mN and 0.52 mN respectively. Figure <ref> shows one of the test trials with traces for each one of our estimators and baseline against ground truth with MSE values for that particular trial alone. The Analytical Fz method presents some noise during the initial container tilt stage before the pouring starts but then follows the ground truth curve reasonably well. The Tactile estimator correctly predicts the beginning of the pour but shows larger error during the actual pouring stage and on the final predicted value. The Proprioceptive estimator makes good use of the extra information of all forces felt at the end-effector (including torques) plus wrist position and velocity to improve on the performance of the Analytical Fz. The Multimodal estimator shows a marginal improvement with the addition of tactile information. These trends are visible across most test trials. Our second evaluation consists of deploying each of our estimators, plus our analytical baseline, on our robot arm and performing a set of representative pouring trials, both within and outside the training data distribution. First we tackle the performance of our models within the distribution that they have been trained on. This means that the testing parameters are within those used to collect our training data. Table <ref> describes 12 instances of pouring trials with various parameters that sweep through different pouring speeds, liquid volume at the source container and target volume to achieve on the receiving container. For each one of our estimators and baseline, we perform these pouring trials on our robot and record the pouring accuracy as the error between the actual quantity poured during the trial versus the target as defined by the trial parameters on Table <ref>. After each trial we weight the receiving container and using the known density of water we report the error in milliliters (a positive error value means we poured more liquid than our target). Results are shown in Table <ref>.The first thing to note is that our Tactile estimator performance diverges notably from the results obtained with our off-line evaluation, to the point of failure to perform the task. We investigated the cause of this divergence and found that our tactile signals present very high sensitivity to small grasp perturbations (1-2mm off-center), yielding fairly different patterns and signal amplitudes as a result. In our experimental setup, the source container fixture tolerance is 1mm which stacks on top of our robot arm repeatability, which the manufacturer does not guarantee for the quick-connect version of the arm <cit.>, but can be considered to be 1mm at best (metric they provide for the fixed-base version of the arm). 
Figure <ref> depicts the difference in tactile signals for an open loop pour (same trajectory) when we perturb the grasp to be off-center by 1.5mm.

While casual inspection might suggest that the Multimodal estimator outperforms the Proprioceptive because of its lower root mean squared error (RMSE), there is a clear bias on the latter towards underfilling the receiving container. This manifests as a systematic error of around -11 ml. If we "calibrate" for this offset, this estimator is incredibly consistent. The Analytical Fz estimator presents a similar behaviour, but with a larger error spread, resulting in a standard deviation five times larger than that of the Proprioceptive estimator. The addition of tactile information is likely reducing the performance of the Multimodal estimator compared to the Proprioceptive estimator, increasing the spread of the prediction error, due to the issues previously discussed with our tactile signals.

We would also like to evaluate how robust these estimators might be when operating outside the training distribution. We are particularly interested in two cases: pouring at a different location, and pouring using a different grasp on the source container. Two novel locations and two novel grasps are introduced as shown in Figure <ref>. For this test we select trials 1, 6, 8 and 12 and we perform these at each of the 2 novel locations and then using the 2 novel grasps (at the trained-on location). The pouring accuracy results are shown in Table <ref>. Using novel grasps results in a small performance drop across the board, although similar trends to those observed before remain: the Analytical Fz and Proprioceptive estimators still present a constant bias that results in underfilling the container (individual trial results not shown). For the case of novel locations, both estimators show a larger error spread to go along with their inherent bias, which is to be expected as these are locations never seen in training. Once again, the Tactile estimator fails to perform the task and the addition of tactile signals does not improve performance for the Multimodal estimator.

§ CONCLUSIONS
In this paper we focused on the problem of pouring a specific amount of liquid into a receiving container using only the robot's own tactile and proprioceptive sensors, without any environmental instrumentation. We avoid the use of cameras because we are mostly interested in a pouring solution that can be deployed in a general scenario, without strict controls on lighting or background conditions.

During this study we have demonstrated that we can leverage a robot arm's joint torque sensors to effectively sense the weight change of a held container during a pouring task. Our trained Proprioceptive model can pour within ≈ 10 ml of the target on average, a result equivalent to other state-of-the-art methods which rely on external sensors. Our approach was shown to be robust enough to pour at novel locations and with different grasps on the source container. On the flip side, we have debunked our initial hypothesis regarding the use of tactile sensors for pouring tasks. Our results show that the variability of the signals of our BioTac sensors with respect to minor grasp perturbations makes it very challenging to train a regressor that provides stable and reliable predictions over time.
While we believed that tactile sensors would bridge the performance gap between computing forces at the end-effector through joint torques and using a dedicated force-torque sensor, it turns out we underestimated the accuracy of joint torque sensors and overestimated the usefulness of the tactile modality.

Our approach is not without its limitations. We assume prior knowledge of the state of the receiving container in terms of its capacity and current liquid level, since we do not directly sense these parameters to avoid spillage. In our trials, the receiving container is always assumed to be both empty and large enough to avoid overfilling. In a similar fashion, we also do not sense or model the amount of liquid in the source container, which results in our trials starting with an open-loop tilt motion until we detect that pouring has started. To avoid an initial large pouring rate, we require this initial rotation of the source container to happen at a low speed, which in turn increases the total time needed to complete the full pouring task. Our training data was collected only with water, hence it is not clear how well this method might generalize to granular media or high-viscosity liquids. Finally, we did not investigate the use of different source containers and their effect on pouring performance.

We believe the results presented in this paper are relevant because, to our knowledge, there have not been any previous studies that use joint torque measurements or tactile signals at the fingertips to demonstrate pouring skills. While a similar framework could be proposed with a force-torque sensor at the wrist, as has been done in the literature, we believe it is valuable to study the feasibility of replacing these specialized sensors when joint torques are available, to facilitate integration, free up available payload on the robot arm, eliminate external wiring, and reduce cost. While the tactile modality did not provide significant additional utility in our experiments, it is possible that other tactile sensors or processing methods could extract further performance to enable even more precise and robust pouring. | http://arxiv.org/abs/2310.18473v1 | {
"authors": [
"Pedro Piacenza",
"Daewon Lee",
"Volkan Isler"
],
"categories": [
"cs.RO"
],
"primary_category": "cs.RO",
"published": "20231027203227",
"title": "Pouring by Feel: An Analysis of Tactile and Proprioceptive Sensing for Accurate Pouring"
} |
Mohammadreza Doostmohammadian, Faculty of Mechanical Engineering, Semnan University, Semnan, Iran, [email protected]
Alireza Aghasi, Department of Electrical Engineering and Computer Science, Oregon State University, USA, [email protected]
Maria Vrakopoulou, University of Melbourne, Melbourne, Australia, [email protected]
Hamid R. Rabiee, Department of Computer Engineering, Sharif University of Technology, Tehran, Iran, [email protected]
Usman A. Khan, Electrical and Computer Engineering Department, Tufts University, MA, USA, [email protected]
Themistoklis Charalambous, School of Electrical Engineering, Aalto University, Espoo, Finland, and School of Electrical Engineering, University of Cyprus, Nicosia, Cyprus, [email protected]

This paper proposes two nonlinear dynamics to solve the constrained distributed optimization problem of resource allocation over a multi-agent network. In this setup, the coupling constraint refers to the resource-demand balance, which is preserved at all times. The proposed solutions can address various model nonlinearities, for example due to quantization and/or saturation. Further, they allow for faster convergence or for robustifying the solution against impulsive noise or uncertainties. We prove convergence over weakly connected networks using convex analysis and Lyapunov theory. Our findings show that convergence can be reached for general sign-preserving odd nonlinearities. We further propose delay-tolerant mechanisms to handle general bounded, heterogeneous, time-varying delays over the communication network of agents while preserving all-time feasibility. This work finds application in CPU scheduling and coverage control, among others. This paper advances the state-of-the-art by addressing (i) possible nonlinearity at the agents/links, while simultaneously handling (ii) resource-demand feasibility at all times, (iii) uniform connectivity instead of all-time connectivity, and (iv) possible heterogeneous and time-varying delays. To the best of our knowledge, no existing work addresses contributions (i)-(iv) altogether. Simulations and comparative analysis are provided to corroborate our contributions.

* Distributed all-time feasible strategies that solve resource allocation.
* Algorithms tolerating heterogeneous bounded delays over the communication network.
* Convergence over uniformly-connected networks, even when connectivity is lost at some times.
* Addressing possible nonlinearity at the nodes or links for different applications.

Keywords: constrained distributed resource allocation, graph theory, convex analysis, time-varying delays, uniform connectivity

§ INTRODUCTION

§.§ Background

Distributed algorithms have gained considerable attention because of recent advances in the Internet of Things (IoT), cloud computing, and parallel processing. Distributed optimization over multi-agent networks, in particular, has emerged as an effective solution in large-scale applications ranging from machine learning for large-scale data mining <cit.> to power flow optimization over the smart grid <cit.>. Distributed/decentralized algorithms outperform centralized ones in many aspects: no single point of failure, scalability, speed/efficiency, etc. Distributed or decentralized optimization allows for scaling the problem-solving process to handle larger and more complex optimization problems by distributing the computational load across multiple nodes.
If one of the nodes fails, the others can still solve the remaining objective function, which makes distributed optimization inherently more resilient to node failures. Distributed resource allocation can also use parallel processing to speed up the optimization: by dividing the problem into sub-problems and solving them concurrently, the overall solution can be obtained faster and more efficiently than with a centralized approach. In this work, we propose a distributed setup for resource allocation over general multi-agent systems with constraints on the communication network or the agents.

§.§ Challenges

In real-world applications, many model nonlinearities exist, some of which stem from inherent physical constraints, e.g., actuator saturation or quantized data exchange <cit.>, and some are purposely added, e.g., to improve the convergence rate <cit.> or the robustness to impulsive noise and uncertainties <cit.>. For example, in the automatic generation control setup for power systems, there exist ramp-rate-limits (RRLs) due to the limited rate at which real-world generators can change their power output. In other words, power generation cannot be increased or decreased at an arbitrary rate. Most existing results in the literature cannot address this limit, and therefore the generators cannot follow the rates assigned by these solutions. Another example is quantized information exchange among the agents: most existing linear solutions do not address quantization and, in this respect, are not realistic. Such unmodeled nonlinearities make networked optimization (both constrained and unconstrained) more challenging in terms of computation, accuracy, feasibility, optimality, and convergence. Further, the network itself might be subject to time delays or asynchronous data transmission <cit.>. The network may also lose connectivity over some intermittent time intervals (e.g., due to packet loss or link failure). The notion of constraint feasibility is another challenging issue: the equality constraint ensures the resource-demand balance, and its violation may cause service disruption. This paper aims to provide general nonlinear solutions while handling network latency, uniform network connectivity, and all-time feasibility. The objective function is also nonlinear, and this nonlinearity might be due to addressing the box constraints as penalty terms.

§.§ Literature Review

The literature on distributed optimization, both constrained and unconstrained, mainly assumes linear models and no delay for data exchange over the network. There exist a few nonlinear reinforcement-learning-based models for resource allocation and economic dispatch <cit.>. These works find the optimal allocation with no prior knowledge of the mathematical formulation of the actual generation costs. In <cit.> a neural network is used to learn the relationship between the demand and the optimal output of each generation unit, while <cit.> combines state-action-value function approximation with a distributed optimization based on multiplier splitting. Some existing works focus on one specific nonlinearity, e.g., quantized allocation/information-sharing <cit.> or saturated dynamics <cit.>. Some others are devoted to addressing sign-based nonlinearities to improve the convergence rate, e.g., to reach the optimal value in finite time <cit.> or fixed time <cit.>.
Such fixed/finite-time dynamics are also prevalent in the consensus literature <cit.>, which also allows for robust and noise-resilient designs. However, these solutions are designed for one specific case and cannot handle the composition of two or more nonlinearities. Moreover, in many applications, the network is dynamic (time-varying) due to, e.g., agents' mobility, and may even get disconnected over some time intervals. Therefore, it is more practical to assume uniform connectivity, in contrast to the all-time connectivity assumed in many existing solutions <cit.>. Latency is another networking issue that may cause the optimization algorithm to diverge. Only a few works in this literature address possible homogeneous delays at all links or asynchronous communication <cit.>, with no consideration of model nonlinearities.

In sum-preserving constrained optimization, another challenge (other than stability) is to preserve all-time feasibility <cit.> (in contrast to asymptotic feasibility <cit.>). All-time feasibility implies that, as the solution evolves, the constraint on the states always holds. For example, in the economic dispatch problem (EDP), at any termination point of the algorithm, the sum of the power states must be equal to the load demand. Otherwise, it causes disruption, power delivery issues, and even system breakdown <cit.>. Such all-time feasibility conditions cannot be addressed by dual-based solutions, e.g., alternating-direction-method-of-multipliers (ADMM) methods <cit.>. These works claim to reach feasibility fast enough within the running interval of the algorithm. Besides, the mentioned nonlinearities, network variation, uniform connectivity, and latency have not been addressed by the existing ADMM solutions <cit.>. Some of the existing algorithms, on the other hand, solve specific quadratic-form problems, for example, consensus-based solutions for CPU scheduling <cit.> or economic dispatch <cit.>. In general, however, the objective could be non-quadratic, e.g., because of additional penalty terms and barrier functions to handle the so-called box constraints.

§.§ Motivations

What is missing in the existing literature is a general method to address “model nonlinearity” when solving (general) non-quadratic objectives while considering uniform connectivity, latency, and sum-preserving all-time feasibility altogether. The nonlinearity of the node dynamics might be due to, for example, the ramp-rate-limit (RRL) of the generators in automatic generation control (AGC) <cit.>. Another common issue is quantized information exchange in real-world communication networks, which motivates the use of algorithms that can handle such a nonlinearity. In this work, we keep the application general and focus on multi-agent systems over communication networks. Communication networks often introduce delays due to packet processing, transmission time, and network congestion. Additionally, agents are subject to some processing time before state-update or message-sharing. Therefore, the information sent from one agent/node may reach the receiving agent with a certain time-delay. These delays may cause divergence of the solution and/or feasibility and optimality gaps. This motivates the need for a delay-tolerant solution for real-world applications. Furthermore, the network itself might be subject to changes and lose connectivity over intermittent time intervals due to, for example, agents' mobility. This motivates the use of algorithms that handle uniform connectivity instead of all-time network connectivity.
Another motivation is to address all-time feasibility of the equality constraint to ensure resource-demand balance at all times. This guarantees that there is no service disruption if the algorithm is terminated at any time. This work finds many applications, e.g., in CPU scheduling <cit.>, coverage control <cit.>, and plug-in electric vehicle (PEV) charging coordination <cit.>, among others.

§.§ Main Contributions

In this work, we propose two general nonlinear solutions for distributed constrained convex optimization: one addressing nonlinearity at the nodes and one at the links. Sufficient conditions on the nonlinearity and the network connectivity to ensure convergence are discussed. The Lyapunov-type proof analysis is independent of the specific type of nonlinearity. Therefore, the algorithm can address certain inherent physical model nonlinearities (e.g., quantization and/or saturation) or purposely added nonlinearities (e.g., signum-based) designed for fast-convergent or robust-to-noise solutions. Our model can handle the composition of more than one nonlinear mapping. In power networks, for example, the saturated generator dynamics due to RRLs are not addressed by the existing methods <cit.> in the AGC setup. Our proposed general multi-agent network can be adapted to communicate and transmit quantized information, address actuator saturation, reach a tunable (or predefined) rate of convergence, embrace resiliency and impulsive-noise tolerance, or any composition of such nonlinear models. We prove coupling-constraint feasibility and convergence subject to general upper/lower sector-bound nonlinearities. In other words, sufficient conditions on the model's nonlinearity are derived so as not to violate all-time sum-preserving feasibility (resource-demand balance) and convergence to the optimal value; see the examples in Section <ref>. We discuss both continuous-time (CT) and discrete-time (DT) solutions. In the DT case, we design delay-tolerant solutions to handle finite, arbitrary, and heterogeneous time delays over the network. We prove constraint feasibility and convergence/stability under bounded step sizes and certain assumptions on the time delays. Two approaches to handle network latency are given: i) Case I, updating over a longer time scale after receiving all delayed information, and ii) Case II, updating with all the information received, at the same time scale as the DT dynamics. The proposed delay-tolerant solutions lead to no feasibility gap over switching and uniformly connected networks (instead of all-time connectivity) and in the presence of heterogeneous delays and model nonlinearity. This is in contrast to, e.g., the solution by <cit.>, which admits some feasibility gap. Uniform connectivity is motivated by applications in mobile sensor networks, where the links may come and go as the mobile sensors (or robots) move into and out of line-of-sight (or broadcast range) of each other. The network may sometimes even lose connectivity due to link failure or packet loss, while it maintains uniform connectivity over some finite time intervals. Finally, we apply our proposed solutions in a distributed setup with (i) quadratic costs and (ii) non-quadratic costs with logarithmically quantized values. We summarize our contributions as follows:

* Possible inherent and additive nonlinearities in the model dynamics can be addressed by our proposed resource allocation model, with general non-quadratic objective functions allowing for penalty/barrier functions.
* Our proposed delay-tolerant solution can handle possible heterogeneous and arbitrary delays on the links over general uniformly-connected networks. The delay values may differ across the links and change over time, while the network may lose connectivity over some intermittent time intervals.

* Our proposed distributed resource allocation is all-time feasible, implying that the resource-demand balance holds at all times, even in the presence of time delays and loss of network connectivity. This is in contrast to dual formulation methods, in which the feasibility gap reduces only asymptotically over time. The all-time resource-demand feasibility prevents service disruption at any termination/assignment time of the algorithm.

To the best of our knowledge, no previous work in the literature addresses contributions (i)-(iii) altogether.

§.§ Paper Organization

We introduce the preliminary notions and some useful lemmas to set up the problem in Section <ref>. The CT solutions are proposed in Section <ref> and their convergence is discussed in Section <ref>. The DT counterparts subject to time delays are discussed in Sections <ref> and <ref>. Applications and further simulations on sparse dynamic networks are presented in Sections <ref> and <ref>. Finally, Section <ref> concludes the paper.

§.§ General Notation

In this paper, column vectors are denoted by boldface lowercase letters, scalars by lowercase letters, and matrices by capital letters. For notational simplicity, ∂ f_i and ∂^2 f_i denote df_i(x_i)/dx_i and d^2f_i(x_i)/dx_i^2, respectively. See the full list of notations in Table <ref>. The communication network of agents (also referred to as the multi-agent network) considered in this paper is modelled by a (possibly) time-varying graph G(t)={V,E(t)} with a set of time-dependent links E(t) and a time-invariant set of nodes V={1,…,n}. A link (i,j) ∈E(t) represents possible information exchange (communication) from agent i to agent j, and further implies that agent i is in the neighbouring set of agent j, defined as N_j(t)={i|(i,j)∈E(t)}. The link (j,i) ∈E(t) is weighted by W_ij>0, and W(t)=[W_ij(t)] ∈ℝ^n × n_≥0 represents the weight matrix of the network G(t). Clearly, W(t) follows the structure (zero-nonzero pattern) of the adjacency matrix associated with G(t). In G(t), we define a spanning tree as a sub-graph of size n (covering all nodes) in which there is only one path between every two nodes <cit.>. Such a spanning tree is known to include the minimum possible number of links for connectivity <cit.>. Define the Laplacian matrix L(t)=[L_ij(t)] as
L_ij(t) = ∑_j ∈N_i(t) W_ij(t) for i = j, and L_ij(t) = -W_ij(t) for i ≠ j.

§ PROBLEM SETUP

This paper considers equality-constrained optimization problems in the primal nonlinear formulation over a multi-agent network. The objective is to minimize the cost function while satisfying the (weighted) resource-demand balance constraint. This balance equality constraint ensures that the weighted sum of resources meets the demand by the user; otherwise, it may cause service disruption in the application under consideration. The mathematical formulation is as follows:
min_z F(z) := ∑_i=1^n f_i(z_i) s.t. z^⊤a = b,
where the element z_i ∈ℝ represents the state variable at agent i, the column vector z = [z_1;…;z_n] ∈ℝ^n is the collective state vector[In its most general form, the problem can be extended to z_i ∈ℝ^m with m>1 (as in <cit.>).
Considering m=1 is for the sake of simplifying the proof analysis in Sections <ref> and <ref>.],and column vector a=[a_1;…;a_n] ∈ℝ^n and b ∈ℝ are the constraint parameters. f_i: ℝ↦ℝ is the local cost (or loss) function at agent i, and the overall cost function is F: ℝ^n↦ℝ.The problem can be also extended to consider box constraints on the states, i.e., z_i ≤ z_i ≤z_i. In this case, one can eliminate these extra constraints by adding proper exact (nonlinear) penalty functions to every local objective; for example, changing the local objectives to f_i^σ(z_i) = f_i(z_i) + c([z_i - z_i]^+ + [z_i - z_i ]^+), with [u]^+=max{u, 0}^σ, σ∈ℕ, and c ∈ℝ^+.Some other example penalties and barrier functions are discussed in <cit.>.In general, such penalty (or barrier) functions are non-quadratic and nonlinear, which makes the objective function non-quadratic and nonlinear; see examples in <cit.>. Feasible initialization algorithms under such local constraints are discussed, for example, in <cit.>.The problem is generally stated in the following standard sum-preserving formmin_x F(x) = ∑_i=1^n f_i(x_i)s.t. ∑_i=1^n x_i = bwhich can be obtained from (<ref>) by simple change of variables z_i a_i =: x_i, with box constraints transformed into a_i z_i ≤x_i ≤ a_iz_i for a_i>0 and reversed otherwise.We aim to provide general nonlinear dynamics to solve problem (<ref>) over a multi-agent network for different applications. The proposed solutions need to be distributed, implying that the information available at each agent i includes its own information (for example, its state and local objective) and the data received from agents j ∈N_i(its direct neighbours). This work addresses the case that agents/links are constrained with some nonlinearity. Moreover, the proposed distributed solution remains feasible (i.e.,∑_i=1^n x_i(t) = b, ∀ t≥0) and delay-tolerant (under heterogeneous time-delays). Further, it is possible that the communication network changes over time and loses connectivity at some bounded time-intervals. The assumptions on the objective function convexity (to include possible penalty terms), the network connectivity, feasibility, and the time-delay model are essential in the problem setup, as discussed next.§.§ Useful Lemmas and Definitions onConvexityLet h: ℝ↦ℝ represent a nonlinear mapping. Function h(y) is called Lipschitz continuous if there exists a real constant K_h such that for any y_1, y_2 ∈ℝ,|h(y_1)-h(y_2)| ≤ K_h |y_1-y_2|.A function f:ℝ↦ℝ is strictly convex if ∀ y_1,y_2 ∈ℝ, ∀κ∈ (0,1),f(κ y_1+(1-κ)y_2)<κ f(y_1)+(1-κ)f(y_2).It is known that for a smooth strictly convex function, ∂^2 f(y)> 0 for y ∈ℝ <cit.>.Thelocal cost/objective functions f_i(x_i):ℝ↦ℝ, i ∈{1,…,n} are smooth andstrictly convex, i.e., ∂^2 f_i(x_i) > 0.Note that the penalized objective function f_i^σ is also Lipschitz since [u]^+ is Lipschitz for σ∈ℕ_≥ 2. Recalling the Taylor series expansion, the following holds. <cit.> Given a continuous strictly-convex function f(y), two points y_1, y_2, and Δ y =: y_1-y_2, there existy := κ y_1 + (1-κ)y_2, 0<κ<1 such that,f(y_1) = f(y_2) + ∇ F(y_2)^⊤Δ y +1/2Δ y^⊤∂^2 f(y)Δ y.<cit.>For cost function f_i:ℝ↦ℝ, assume 2 v < ∂^2 f_i < 2 u with 0<v ≤ u <∞[The condition 2 v< ∂^2 f_i(x_i) implies that the cost functionf_i is strongly convex, see Assumption <ref> in Section <ref>.]. Then,for two points x_1, x_2 ∈ℝ, and Δ x := x_1 - x_2, the following statements hold: F(x_1)< F(x_2) + ∇ F(x_1) Δ x + uΔ x Δ x. F(x_1)> F(x_2) + ∇ F(x_1) Δ x + vΔ x Δ x.Let Assumption <ref> hold. 
The unique optimal solution x^*=[x^*_1;…;x^*_n] to problem (<ref>) is in the form,∇F(x^*) = φ^*1_n,with 1_n as the column vector of 1s, ∂ f_i ∈ℝ as the gradient of the localfunction f_i(·), ∇ F(x^*) = [∂ f_1(x^*_1); …; ∂ f_n(x^*_n)] ∈ℝ^n, and φ^* ∈ℝ. The proof follows from the KKT condition and the Lagrange multipliers method <cit.>.Similarly, the unique solution of problem (<ref>) is in the form ∇F(z^*) = φ^* a. Note that, in the presence of the box constraints, the above lemma assumes that z^* meets those constraints, i.e., z_i ≤ z^*_i ≤z_i for all i.Throughout the paper, we refer to the feasibility condition as described below. Define the feasible sets S_b = {z∈ℝ^n|z^⊤a = b} in (<ref>) or S_b = {x∈ℝ^n|x^⊤1_n = b} in (<ref>). A feasible solution then is defined as x∈S_b or z∈S_b.Let Assumption <ref> hold. Initializing from any feasible set S_b there is only one unique point z^* ∈S_bsuch that ∇F(z^*) = φ^* a forφ^* ∈ℝ. Similarly, there is uniquex^* ∈S_bsuch that ∇ F(x^*) = φ^* 1_n.The proof follows from the strict convexity of the function F(x) (and F(x)) based on Assumption <ref>. For detailed proof based on level-set analysis see <cit.>.§.§ Recall on Graph TheoryThe following assumptions hold[In this paper, we generally assume that the network is undirected for the delay-tolerant case. Extension to balanced directed graphs is discussed later in Remark <ref>.]. * Every link inG(t) is bidirectional with the same weight on both sides at all times. This implies that the weight matrix W(t) is symmetric and balanced for t≥0. * G(t) is uniformly connected over time-window B>0 (or B-connected), implying that there existsB>0 such that the (edge) union graph G_B(t)={V,E_B(t)} includes a spanningtree for everyt ≥ 0 where, E_B(t) = ⋃_t^t+BE(t). The bidirectional condition in Assumption <ref> holds, for example, when agents/nodes have similar broadcasting levels and their communication range is the same. Therefore, if i ∈N_j(t) then j ∈N_i(t) while the assigned weights are the same W_ij(t)=W_ji(t) at all time. Later in the paper, this assumption is extended to weight-balanced directed networks with ∑_i=1^n W_ij(t)= ∑_j=1^n W_ij(t) for some particular cases. This B-connectivity condition in Assumption <ref> is considerably weaker than all-time connectivity in many works (e.g., the ADMM solutions <cit.>). In particular, this allows G to lose connectivity over some time intervals. In other words, the connectivity requirement needs to be satisfied over longer time intervals in the case of links arbitrarily coming and going over the dynamic network. Such an assumption is known to be the least-connectivity assumption in consensus and distributed optimization literature <cit.>. An example contradicting this condition is the case when G(t) contains two separate sub-graphs G_1(t) and G_2(t) with no path between them for t>t_0, implying that no consensus can be achieved between the two. Recall that the sparse connectivity in Assumption <ref>(b) is strong enough for our algorithm to ensure convergence over the network (as shown later). The network conditions in Assumption <ref> are less restrictive than the existing literature. In particular, the weight-balanced and symmetric condition (a) relaxes the weight-stochasticity in <cit.>, while uniform-connectivity (b) relaxes all-time connectivity in <cit.>.These further motivate analysis of dynamic networks under link removal and packet loss, with no need to re-adjust (update) the link weights for ensuring stochasticity <cit.>. 
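As a quick numerical illustration of the B-connectivity condition in Assumption <ref>(b), the following minimal NumPy sketch (not part of the paper's method; the snapshots, weights, and tolerance are assumed purely for illustration) forms the entry-wise union of the weight matrices over a window and tests whether the union graph is connected, using the fact that an undirected weighted graph is connected exactly when the second-smallest eigenvalue of its Laplacian is strictly positive.

```python
import numpy as np

def union_weight_matrix(W_seq):
    """Entry-wise union of the weight matrices W(t) over a window of
    length B: a link appears in the union if it is present at any time."""
    return np.maximum.reduce([np.asarray(W, dtype=float) for W in W_seq])

def is_connected(W, tol=1e-9):
    """Connectivity test via the Laplacian L = diag(W 1) - W: the graph is
    connected iff the second-smallest eigenvalue of L is strictly positive."""
    L = np.diag(W.sum(axis=1)) - W
    eigvals = np.linalg.eigvalsh(L)   # returned in ascending order
    return eigvals[1] > tol

# Two snapshots, each disconnected on its own, whose union over the window is connected
W1 = np.array([[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
W2 = np.array([[0, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0]], dtype=float)
print(is_connected(W1), is_connected(W2), is_connected(union_weight_matrix([W1, W2])))
# expected output: False False True
```

This is exactly the situation allowed by Assumption <ref>(b): individual snapshots of G(t) may be disconnected, as long as their union over every window of length B is connected.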
The next lemma relates the dispersion of the entries of x to the eigen-spectrum of L. It mainly follows from the Courant-Fischer theorem and, for example, gives an estimate of the disagreement value in consensus algorithms <cit.>. Consider a symmetric Laplacian matrix L (of a graph G) and a vector x∈ℝ^n. Define the so-called dispersion state vector as x̃ := x - (1_n^⊤x/n)1_n. The following statements hold:
x^⊤ L x = x̃^⊤ L x̃,
λ_2 ‖x̃‖_2^2 ≤x̃^⊤ L x̃≤λ_n ‖x̃‖_2^2,
with λ_n and λ_2 as the largest and the smallest non-zero eigenvalue[Recall that in graph theory, for (undirected) connected graphs with link weights equal to 1, λ_2 is the second-smallest eigenvalue of the associated Laplacian matrix L and is a real positive number. It is also known as the Fiedler value or algebraic connectivity <cit.>.] of L, respectively. The results also hold for a weight-balanced (WB) directed graph (digraph) G by substituting L_s = (L+L^⊤)/2 in (<ref>)-(<ref>), with L as the Laplacian of the digraph. Following the definition of the matrix L, the vector 1_n is in the null space of L. Using this, the proof of (<ref>) is straightforward since 1_n^⊤x̃ = 0. The proof of (<ref>) follows from (<ref>) and the positive semi-definiteness of L. See more details in <cit.>. The results of Lemma <ref> can be extended to two variables x̃ := x - (1_n^⊤x/n)1_n and ỹ := y - (1_n^⊤y/n)1_n as follows:
x^⊤ L y = x̃^⊤ L ỹ,
λ_2 x̃^⊤ỹ≤x̃^⊤ L ỹ≤λ_n x̃^⊤ỹ.
Note that, for an (edge) union graph G_B, λ_n ≤λ_n^B and λ_2 ≤λ_2^B, with λ_2^B, λ_n^B denoting the corresponding eigenvalues of G_B. Link addition may increase the algebraic connectivity of the network <cit.>. Therefore, given G= G_1 ∪G_2, we have λ_2(G) ≥λ_2(G_1) and λ_2(G) ≥λ_2(G_2). This can be extended, following Assumption <ref>, to show λ_2(G_B(t)) ≥λ_2(G(t)). One can relate this to the fact that the algebraic connectivity (for 0-1 adjacency matrices) satisfies λ_2(G) ≥ 1/(n d_g) (with d_g as the network diameter) <cit.>.

§.§ Time-Delay Model for the DT Agents

In general, multi-agent systems rely on communication networks to exchange information and coordinate actions. These communication networks may introduce delays due to packet processing, transmission time, and network congestion. Such delays are typically quantized in integer multiples of the communication time step. Additionally, each node might have its own processing time before sending a message or updating its state. These factors introduce transmission time delays between the agents.

Here, we define the time-delay model, with k denoting the discrete time index. Following the notation in <cit.>, τ_ij(k) denotes the delay on the transmission link from agent i to agent j at DT step k=⌊ t/T⌋ +1, with T as the sampling step and ⌊·⌋ as the floor function. If a message sent from agent j at time t_j reaches agent i before time t_i>t_j, then τ_ij =⌈ t_i/T⌉ - ⌊ t_j/T⌋ - 1 (with ⌈·⌉ as the ceiling function). Therefore, for t_i - t_j < T there is no delay over the link (j,i). However, delays are typically defined in discrete time: a message sent at time-step k and received before time-step k + τ_ij+1 corresponds to a delay equal to τ_ij. The following hold for the (integer) time-delay τ_ij(k) on link (j,i) ∈E(k):
* τ_ij(k) ≤τ, where 1 ≤τ < ∞ is an integer representing the maximum possible delay on the transmission links (τ = 0 means no delay). The upper bound on τ guarantees no lost information and implies that the data from agent i at time k is eventually available to agent j by step k+τ+1 (in a finite number of time-steps).
* τ_ij(k) may change at different time-steps k and is heterogeneous for different links, i.e., it may differ for agents and on different links. The time-delays are upper-bounded by some τ. * The transmitted packets over every link are time-stamped and every agent i knows the time t_j at which agent j sent the information over the link (j,i), i.e., the delay τ_ij(k) is known. * For a shared mutual link between i and j, we consider the same delay for both sides, i.e., τ_ij(k)=τ_ji(k). * At every time k, at least one packet is delivered over the network (possibly with some delay), i.e., at any time step k we have τ_ij(k) ≠ 0for at least one pair (i,j) (and subsequently τ_ji(k) ≠ 0 due to (iv)). Assumption <ref> is less restrictive than many existing literature in distributed sensor networks. To clarify, part (i) only implies no packet loss and data dropout over the network. Part (ii) generalizes the existing literature with fixed delays at all links <cit.> by considering general heterogeneous delays as in <cit.>. In other words, the delays may differ at different links. Part (iii) is a typical assumption in consensus literature <cit.> and data transmission networks for example for clock synchronization <cit.>. Part (iv) implies that both agents i,j process their shared information simultaneously[The assumption thatτ_ij,τ_ji are known to both agents i,j is well-justified in information-theoretic perspective. This follows from Assumption <ref>(iii) and the assumption that when the packet data leaves the buffer it reaches the receiver with a fixed delay <cit.>. One may also consider the upper-bound on these delays as a more conservative approach.].It can be relaxed for asymmetric delays τ_ij≠τ_ji, by considering max{τ_ij,τ_ji} at both sides. This is to fulfil the feasibility condition (as discussed later in Section <ref>). Part (v) is only to ensure that at every time-step k (at least) two nodes update their states.Every agent needs to record its previous information at the last τ time-steps to match them with the received information from agent j ∈N_i at the next-coming time-steps. Further, define I_k-r,ij as the indicator function capturing the delay τ_ij(k) ≤τ on the link (j,i) as follows,I_k,ij(r) = {[1, if τ_ij(k) = r,;0,otherwise. ].Define the temp graph G^τ(t) = {V,E^τ(t)}as the temporary graph representing the neighbourhood of agents at time t (and time-step k) based on the delays τ_ij. At timekT-T<t ≤ kTif agent i receives a (possibly delayed) packet from j, then (j,i) ∈E^τ(t) for kT ≤ t < kT+T, otherwise (j,i) ∉E^τ(t).For switching networks with B>0 in Assumption <ref>(b),(τ+1)T needs to be less than switching period of G(t). This implies that in case of losing a link (i,j) due to a change in G(t), (at least) one delayed packet from agent i reaches neighbouring agent j before the link (i,j) disappears. Therefore, for any (i,j) ∈E(t) we have (i,j) ∈E^τ(t), and thus, ⋃_t=k_1 T ^k_1 T+ BG^τ(t) = ⋃_t=k_1 T ^k_1 T+ BG(t). In case of larger τ, (if possible)uniform connectivity of G_B^τ^τ(t) over a time-window B^τ > B might be considered in Assumption <ref>.§ THE PROPOSED CONTINUOUS-TIME SOLUTION§.§ Proposed Continuous-Time DynamicsIn this section, we provide two CT 1st-order dynamics to solve problem (<ref>); the solution for problem (<ref>) similarly follows. Following the auxiliary results in the previous section, it is clear that any x^* for which ∂ f_i^*=∂ f_j^* for all i,j must be invariant under the proposed dynamics. 
To account for nonlinearities, two general models can be considered: (i) link-based nonlinearities associated with every link/edge, and (ii) node-based nonlinearities associated with every agent/node. The proposed continuous-time dynamics are based on the local information available at each agent i and received from its neighbours N_i.

Node-based Nonlinear Solution:
ẋ_i = -∑_j ∈N_i W_ij g(∂ f_i -∂ f_j),

Link-based Nonlinear Solution:
ẋ_i = -∑_j ∈N_i W_ij(g(∂ f_i) -g(∂ f_j)),

where W_ij and N_i are time-dependent in general (Assumption <ref>). In the rest of this paper, we discuss the properties of (<ref>)-(<ref>) as solutions of problem (<ref>). For the sake of notational simplicity, we drop the dependence of W_ij, N_i, and ∂ f_i on t unless needed. For g(x)=x, (<ref>) and (<ref>) represent the classic linear solution given in <cit.>. However, unlike <cit.> (and many other papers in the literature), W is not necessarily bi-stochastic but only symmetric with positive entries. As compared to many existing linear dynamics <cit.> or ADMM-based solutions <cit.> proposed in the literature, this work addresses a nonlinearity g(·) at the nodes or the links. This nonlinearity might be due to an inherent property of the system, e.g., quantized information exchange among the nodes (see Section <ref>) or the RRLs in the automatic-generation-control setup (see Section <ref>). In this respect, the existing simplified linear solutions do not work properly and may result in an optimality gap. On the other hand, our nonlinear model allows for improving the convergence rate and for finite/fixed-time convergence by adding sign-based functions as the nonlinearity g(·). This cannot be addressed by the existing linear solutions <cit.> or dual-based solutions <cit.>. We make the following assumption on the nonlinear mapping g(·). Function g: ℝ↦ℝ is a nonlinear odd mapping with dg/dx≠ 0 at x = 0 and x g(x)> 0 for x ≠ 0, i.e., g(x) is strongly sign-preserving. Further, there exist K_g, ε > 0 such that K_g |x| ≥ |g(x)| ≥ε|x|, referred to as the sector-bound conditions[The condition ε|x| ≤|g(x)| is needed for exact convergence (sign-preserving versus “strongly” sign-preserving) and |g(x)| ≤ K_g |x| is only needed for the DT case.]. Many existing nonlinearities satisfy the above assumption. For example, a monotonically increasing Lipschitz odd function g(·) satisfies the conditions in Assumption <ref>. Some other examples are discussed in the rest of this section. Even though Eqs. (<ref>) and (<ref>) represent separate nonlinearities at the nodes and the links, the results of this work hold for their combination (with both node and link nonlinearities satisfying Assumption <ref>). An example is given in Section <ref>.

§.§ Examples of Practical Nonlinearities in Applications

In this section, we provide some practical nonlinear functions g(·). First, define y := ∂ f_i -∂ f_j and the odd function sgn^ν(y) as
sgn^ν(y) := y|y|^(ν-1),
where |·| denotes the absolute value and ν≥ 0. For ν = 0, (<ref>) gives the well-known signum function, denoted by sgn(y) for simplicity. The following applications distinguish this work from much of the existing literature.

Application I: Quantization
g_l(y):= sgn(y) exp(g_u(log(|y|))),
with g_u(y) := δ[ y/δ] and δ>0 as the quantization level. In order to allow quantized information processing at the agents and transmission links <cit.>, one can substitute the logarithmic quantizer g_l(·) into (<ref>) and (<ref>).
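For concreteness, a minimal NumPy sketch of the mappings defined above is given below (illustrative only; it assumes that [·] in g_u denotes rounding to the nearest integer and that g_l(0)=0 by convention).

```python
import numpy as np

def sgn_nu(y, nu):
    """sgn^nu(y) = y * |y|^(nu-1) = sign(y) * |y|^nu; for nu = 0 this is the signum function."""
    y = np.asarray(y, dtype=float)
    return np.sign(y) * np.abs(y) ** nu

def g_u(y, delta):
    """Uniform quantizer g_u(y) = delta * [y/delta], with [.] taken as rounding."""
    return delta * np.round(np.asarray(y, dtype=float) / delta)

def g_l(y, delta):
    """Logarithmic quantizer g_l(y) = sgn(y) * exp(g_u(log|y|)): an odd, sign-preserving map."""
    y = np.asarray(y, dtype=float)
    out = np.zeros_like(y)
    nz = y != 0
    out[nz] = np.sign(y[nz]) * np.exp(g_u(np.log(np.abs(y[nz])), delta))
    return out
```

Both mappings are odd, which is the property the sum-preserving (feasibility) arguments below rely on.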
Note that the uniform quantizer g_u(·) is not “strongly” sign-preserving, but it is sign-preserving[Such uniformly quantized dynamics may result in a (steady-state) residual and bias from the equilibrium, where the bias (residual) scales with the quantization level δ and gets arbitrarily close to zero for sufficiently small δ. See <cit.> for some discussions on quantized discrete-time consensus.].

Application II: Saturation
g_κ(y) := κ sgn(y) for |y| > κ, and g_κ(y) := y for |y| ≤κ,
with κ>0 as the saturation (clipping) level. To account for the limited range of sensors/actuators (saturation) and restrictions on the transfer of analog/digital signals (signal clipping) <cit.>, the nonlinear function g_κ(·) can be substituted in (<ref>) and (<ref>). This might be due to physical restrictions that limit the rate of increase or decrease of the actuation input ẋ_i. An example in power grid applications is given in Section <ref>.

Application III: Finite/Fixed-time Convergence
g_f(y) := sgn^ν_1(y) + sgn^ν_2(y),
where 0<ν_1<1, 1<ν_2 (fixed-time) or 0<ν_2<1 (finite-time). Using this type of nonlinear optimization model, some existing works on consensus and optimization <cit.> show that convergence can be achieved in finite/fixed time. One may even adopt time-varying ν_1,ν_2, as in the consensus algorithms <cit.>, to reach convergence over a prescribed time irrespective of the initialization and system parameters.

Application IV: Robustification
g_p(y) := (1-ϵ)/ϵ · d · sgn(y) for |y| > d, and g_p(y) := 0 for |y| ≤ d,
with 0<ϵ<1, d>0. In order to design protocols robust to high-intensity outliers, for example in the case of communication channels corrupted with impulsive noise <cit.>, one can substitute the sign-preserving function g_p(·) in (<ref>). A similar model can be considered to suppress impulsive actuation nonlinearities in (<ref>). However, since Eq. (<ref>) is only sign-preserving (not “strongly”), it may result in a steady-state bias from the optimizer, i.e., convergence ensures that x reaches a neighbourhood of x^*. Note that the sign-based nonlinear functions in Applications III-IV are mostly used in CT. The applications of our proposed solutions are not limited to these nonlinear models; any g(·) satisfying Assumption <ref> might be adopted. For example, the composition of the mentioned nonlinear mappings (<ref>)-(<ref>) is also a valid choice for g(·) in (<ref>) and (<ref>), as are many other sector-bound nonlinearities satisfying Assumption <ref>.

§ CONVERGENCE ANALYSIS IN CONTINUOUS-TIME

In this section, we analyze the convergence of the CT dynamics (<ref>) and (<ref>) to the optimal value of the constrained optimization (<ref>). We first check the feasibility and uniqueness of the solutions under the given dynamics and then prove the convergence. Suppose that Assumptions <ref> and <ref> hold. Initializing from any x_0 ∈S_b, the states of the agents remain feasible under the CT dynamics (<ref>) and (<ref>), i.e., x(t) ∈S_b for all t>0. The proof of feasibility for the CT dynamics (<ref>) is given in <cit.>. For (<ref>) we similarly have
d/dt(x^⊤1_n) = -∑_i=1^n∑_j ∈N_i W_ij(g(∂ f_i) -g(∂ f_j)).
From Assumptions <ref> and <ref> we have
W_ij(g(∂ f_i) -g(∂ f_j)) = -W_ji(g(∂ f_j) -g(∂ f_i)),
which implies that d/dt(x^⊤1_n)=0 in (<ref>) and that x^⊤1_n remains time-invariant. Therefore, any feasible initialization x_0^⊤1_n=b gives feasibility over time, x(t)^⊤1_n=b and x(t) ∈S_b for all t>0.
This means that at any termination time of the algorithm the solution preserves feasibility, while in the existing ADMM solutions <cit.> there might be a feasibility gap that only converges to zero asymptotically. This implies that the solution by <cit.> must be fast enough to gain feasibility before the termination of the algorithm. Note that for any x satisfying ∇ F(x) = φ1_n, we have ẋ_i = 0 at every agent i. Therefore, such an x is an equilibrium (invariant state) of the solution dynamics (<ref>) and (<ref>). In the next theorem, we show that such an x^* is unique for both proposed dynamics. Note that we assume x^* satisfies the local box constraints (if there are any). Suppose that Assumptions <ref>, <ref> and <ref> hold. Let x^* denote the equilibrium under the CT dynamics (<ref>) and (<ref>). Then, ∇ F(x^*) = φ^* 1_n with φ^* ∈ℝ. By contradiction, assume ∇ F(x^*) = (Λ^*_1;…;Λ^*_n) where Λ^* ≠φ^* 1_n and, for (at least) two agents i,j, Λ^*_i ≠Λ^*_j ⟺∂ f_i^*≠∂ f_j^*. Define two agents α, β as
α = argmax_q∈{1,…,n}Λ^*_q, β = argmin_q ∈{1,…,n}Λ^*_q.
From Assumption <ref>, the union graph G_B(t) is connected for every t≥ 0, and therefore there is a path in G_B(t) from agent (node) α to every other agent (node), including β. Along this path, one can find (at least) two nodes α and β with neighbour sets N_α and N_β, respectively, such that Λ^*_α≥Λ^*_N_α and Λ^*_β≤Λ^*_N_β, where the strict inequality holds for (at least) one node in N_α and one node in N_β. Therefore, from Assumption <ref>, ẋ^*_α < 0 and ẋ^*_β > 0 over any time-window of length B (for both proposed dynamics). This contradicts the equilibrium assumption ẋ^* = 0. By virtue of Lemmas <ref>, <ref>, and Theorem <ref>, for a feasible initial state x_0 ∈S_b there exists only one equilibrium x^* satisfying ∇ F(x^*) = φ^* 1_n under dynamics (<ref>) and (<ref>). In order to prove convergence to x^*, the following lemma is needed. <cit.> For g(·) and W satisfying Assumptions <ref>, <ref> and ψ_i,ψ_j ∈ℝ,
∑_i =1^n ψ_i ∑_j =1^n W_ij g(ψ_j-ψ_i) = ∑_i,j =1^n W_ij/2 (ψ_j-ψ_i)g(ψ_j-ψ_i).
Similarly,
∑_i =1^n ψ_i ∑_j =1^n W_ij(g(ψ_j)-g(ψ_i)) = ∑_i,j =1^n W_ij/2 (ψ_j-ψ_i)(g(ψ_j)-g(ψ_i)).
Suppose that Assumptions <ref>, <ref>, <ref> hold and x_0 ∈S_b. The proposed CT dynamics (<ref>) and (<ref>) converge to the optimal value of (<ref>), denoted by x^*, for which ∇ F(x^*) ∈ span{1_n}, or simply, ∃φ^* ∈ℝ such that ∂ f_i^* =∂ f_j^* = φ^*. The proof for dynamics (<ref>) is given in the proof of <cit.>. A similar proof holds for dynamics (<ref>) with the Lyapunov function F̃(x) := F(x)-F(x^*), recalling from Lemma <ref> that (∂ f_i -∂ f_j) (g(∂ f_i)-g(∂ f_j))> 0. It is worth mentioning that the existing literature <cit.> mostly works over all-time connected networks, while, from Theorems <ref> and <ref>, our proposed solution works over uniformly-connected networks that may lose connectivity over some time intervals. Note that, from Theorem <ref>, to prove the unique equilibrium of the solution we only need the union network G_B(t) to be connected over some interval B. This makes our solution applicable to mobile multi-agent systems where the network is dynamic and connectivity might be lost temporarily. This is in contrast to many existing solutions <cit.> where the network is static and/or all-time connected.

§ THE PROPOSED SOLUTION IN DISCRETE-TIME

In this section, we first provide the discrete-time version of (<ref>)-(<ref>), respectively, as
x_i(k+1)= x_i(k) -T∑_j ∈N_i W_ij g(∂ f_i(k) -∂ f_j(k)),
x_i(k+1)= x_i(k) -T∑_j ∈N_i W_ij(g(∂ f_i(k))-g(∂ f_j(k))),
with k ≥ 1 and T as the time-step.
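To make the discrete-time protocols concrete, the following minimal NumPy sketch (illustrative only: the quadratic costs, link weights, step size, and saturation level are assumed here and are not the paper's simulation values) implements one iteration of each update and checks that the sum-preserving constraint is kept.

```python
import numpy as np

def dt_step_node(x, grad, W, T, g):
    """Node-based protocol: x_i <- x_i - T * sum_j W_ij * g(df_i - df_j)."""
    gf = grad(x)                                  # local gradients df_i(x_i)
    D = gf[:, None] - gf[None, :]                 # D_ij = df_i - df_j (anti-symmetric)
    return x - T * np.sum(W * g(D), axis=1)

def dt_step_link(x, grad, W, T, g):
    """Link-based protocol: x_i <- x_i - T * sum_j W_ij * (g(df_i) - g(df_j)),
    i.e., x <- x - T * L g(grad F), with L the Laplacian built from W."""
    gf = g(grad(x))
    L = np.diag(W.sum(axis=1)) - W
    return x - T * (L @ gf)

# Toy quadratic costs f_i(x_i) = gamma_i x_i^2 + beta_i x_i (assumed values)
gamma = np.array([1.0, 2.0, 0.5, 1.5])
beta = np.array([0.1, -0.2, 0.3, 0.0])
grad = lambda x: 2.0 * gamma * x + beta
g_sat = lambda y, kappa=0.5: np.clip(y, -kappa, kappa)   # saturation mapping g_kappa

W = 0.1 * (np.ones((4, 4)) - np.eye(4))   # complete graph with symmetric weights
x = np.full(4, 2.5)                       # feasible start: sum(x) = b = 10
for _ in range(2000):
    x = dt_step_node(x, grad, W, T=0.05, g=g_sat)
print(x.sum())                            # stays (numerically) equal to b = 10
```

Viewing T as an Euler integration step, the same two functions can also be used to simulate the continuous-time dynamics (<ref>)-(<ref>).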
In the rest of the paper, we use a simplifying notation. Define the anti-symmetric matrices D(k)=[D_ij(k)] and D^g(k)=[D^g_ij(k)], as the weighted difference of the gradients over all links, i.e., D_ij(k)= ∂ f_i(k) -∂ f_j(k), D^g_ij(k)= g(∂ f_i(k)) -g(∂ f_j(k)). The following theorem gives the proof of convergence assuming sufficiently small T (as defined later in Section <ref>). Under Assumptions <ref>, <ref>, <ref> and initializing froma feasible point x_0 ∈S_b, protocols (<ref>) and (<ref>) converge to the feasible unique equilibrium point x^*in the form ∇ F(x^*) = φ^* 1_n (with φ^* ∈ℝ) for sufficiently small (sampling) step T<T_λ (T_λ is defined later in (<ref>)). Following (<ref>) and (<ref>), we get ∑_i=1^n x_i(k+1)= ∑_i=1^n x_i(k) -∑_i=1^n T ∑_j ∈N_i W_ij g(D_ij(k)).∑_i=1^n x_i(k+1)= ∑_i=1^n x_i(k) -∑_i=1^n T ∑_j ∈N_i W_ijD^g_ij(k). For any link (i,j) and (j,i) in G(k), under the given assumptions we have W_ij=W_ji, D^g_ij(k)=-D^g_ji(k), and g(D_ij(k))=-g(D_ji(k)), which implies that, ∑_i=1^n ∑_j ∈N_i W_ij g(D_ij(k)) =0, ∑_i=1^n ∑_j ∈N_i W_ijD^g_ij(k) =0, and therefore, ∑_i=1^n x_i(k+1) = ∑_i=1^n x_i(k) for all k > 0. Therefore, initializing from x_0 ∈S_b we have x(k) ∈S_b, and the feasibility for both DT protocols (<ref>) and (<ref>) follows. It should be mentioned that the feasibility condition for the link-based nonlinear solution holds for general weight-balanced directed networks. The proof of uniqueness follows similar procedure as in Theorem <ref>, where instead of having ẋ^*_α < 0 and ẋ^*_β > 0 in the CT case, we have x^*_α(k+1)- x^*_α(k)< 0 and x^*_β(k+1)-x^*_β(k) > 0 over any time-window of length B. The proof of convergence and the bound on T is discussed later in Section <ref>. For the link-based protocol (<ref>), one can extend the solution to weight-balanced (WB) directed networks. This is because in the proof of Theorem <ref>, one can restate Eq. (<ref>) for ∑_i=1^n W_ij= ∑_j=1^n W_ji over WB digraphs that proves feasibility and convergence as Lemma <ref> also holds for WB digraphs. Note that proof of uniqueness is irrespective of the weights and only depends on the network connectivity. Similar reasoning proves feasibility in Lemma <ref> and convergence for CT dynamics (<ref>) over WB digraphs. Distributed (weight) balancing algorithms can be adopted to design such directed networks <cit.>. §.§ Rate of Convergence Next, to determine the rate of convergence of the proposed (delay-free) protocols, we further make the following assumption. For the local cost functions f_i(x_i):ℝ↦ℝ, * there exists u < ∞ such that ∂^2 f_i(x_i) < 2u. This implies that ∂ f_i(x_i) are Lipschitz continuous. * (Strong-convexity) there exists v > 0 such that ∂^2 f_i(x_i) > 2 v. To incorporate a non-smooth penalty term or barrier function f_i^σ for the box constraints <cit.>, e.g.,[u]^+=max{u,0}^σ for σ = 1, one can replace it with smooth equivalent L(u,μ)=1/μlog (1+exp(μ u)). This is a typical reformulation in machine learning literature <cit.>. It is known that L (z,μ)-max{z,0}≤1/μ and the two functions can become arbitrarily close by sufficiently large μ <cit.>. [u]^+ with σ≥ 2is another alternative smooth option <cit.>. Assumption <ref> helps to explicitly derive a sufficient bound on the sampling step to ensure convergence. Based on this assumption, we reformulate Lemma <ref> as follows. Assume the function f_i:ℝ↦ℝ to be strongly convex and 2 v <∂^2 f_i(x_i) < 2 u (Assumption <ref>). 
Then,for two feasible points x(k+1), x(k) ∈S_b, and Δx =: x(k+1)-x(k), F(x(k+1))> F(x(k)) +∇F_1^⊤Δx + vΔx^⊤Δx, F(x(k+1))> F(x(k)) - 1/4v∇F_1^⊤∇F_1, and similarly, F(x(k+1))< F(x(k)) + ∇F_1^⊤Δx + uΔx ^⊤Δx, F(x(k+1))< F(x(k)) - 1/4u∇F_1^⊤∇F_1, with gradient dispersion defined as ∇F_1 := ∇ F (x(k+1))- 1/n(1_n^⊤∇F (x(k+1)))1_n. The proof follows from Lemma <ref>. See details in <cit.>. As a direct result of Lemma <ref>, substituting x_2 = x^* in (<ref>) and (<ref>), for any feasible x∈S_b and F(k) := F(k)-F^*, 1/4u∇F^⊤∇F≤F(x) ≤1/4v∇F^⊤∇F. The above corollary follows from the strong convexity of F and holds for general allocation dynamics; for example, a similar statement is given for the linear solution in <cit.>.Using Assumption <ref> and Lemma <ref>, one can find thelimit on the convergence rate under the proposed CT and DT protocols (assuming no latency, i.e., τ =0). Recall that, for the CT case,Ḟ =∇ F^⊤ẋ.In the following, we state the results for (<ref>) and the solution for (<ref>) similarly follows.Using the definition of Laplacian in (<ref>), one can rewrite (<ref>) (with some abuse of notation) as ẋ = - L g(∇ F). Substituting this for ẋ and from Assumption <ref> and Corollary <ref>, - K_g∇ F^⊤ L ∇F ≤Ḟ(x) ≤- ε∇F^⊤ L ∇F,where we used the following inequalities from Assumptions <ref>,ε |(∂ f_i(k))|< |g(∂ f_i(k))| < K_g |(∂ f_i(k))|, ε|D_ij(k)|<|g(D_ij(k))| < K_g |D_ij(k)|.For now, assume B=0 and later we extend it to B>0. Using (<ref>) with λ_2 described in Lemma <ref> and recalling the notation ∇F in Lemma <ref>, we have-K_gλ_n∇F^⊤∇F≤Ḟ≤-ελ_2∇F^⊤∇F,and from (<ref>)-4u K_gλ_n F(x) ≤Ḟ≤- 4v ελ_2 F(x), which is consistent with the linear case (ε = K_g = 1) in <cit.>. Recall that for F=0 we also haveḞ=0.Next, for the DT case, followingEq. (<ref>),F(k+1)≤F(k) + ∇ F(k)^⊤Δx+ uΔx^⊤Δx,with Δx :=x(k+1) - x(k). To satisfy F(k+1) ≤F(k), we need,∇ F(k)^⊤Δx+ u Δx^⊤Δx≤ 0. From Assumption <ref> and following a similar line of reasoning to get Eq. (<ref>), the above is satisfied if,-ε T λ_2 ∇F^⊤∇F + u K_g^2 T^2∇F^⊤ L^⊤ L ∇F≤ 0,where we substituted Δx = L ∇ F = L ∇F and used Lemma <ref> and Corollary <ref>.From the same Lemma <ref> we have ∇F^⊤ L^⊤ L ∇F≤λ_n^2 ∇F^⊤∇F. Therefore, the sufficient condition for convergence is,(u K_g^2 T λ_n^2- ελ_2)∇F^⊤∇F≤ 0,which gives the sufficient bound on T as, T ≤ελ_2/u K_g^2 λ_n^2 =: T_λ. Then, the upper bound on the convergence rate of the residual follows from Lemma <ref> and (<ref>) as,F(k+1)/F(k) ≤1+4 v(u K_g^2 T^2 λ_n^2-T λ_2 ε). The above gives an estimate on the (linear) convergence rate of F for sufficiently small T satisfying (<ref>) and also proves the convergence in Theorem <ref>. For the quadratic cost functions (as in the economic dispatch problem), we have u=v, and the above equations can be more simplified. Further, in case of B-connectivity instead of all-time connectivity (B=1), one can derive Eq. (<ref>) for F(k+B)/F(k) over union graph G_B and modify (<ref>) as T B ≤ T_λ with λ_2, λ_n as the eigenvalues of G_B. Recall that, from Lemma <ref> forWB directed graphs, values λ_2,λ_n in (<ref>) (and in the subsequent equations) denote the eigenvalues of the symmetric matrix L_s instead of L. § NETWORKS WITH TIME-DELAYS Next, we extend the solutions to the time-delayed case. We consider two approaches to overcome network latency according to the following remark. Following Assumption <ref>, the knowledge of every τ_ij is key to satisfy the feasibility condition in the delayed case, see (<ref>) and (<ref>). 
Note that D(k-r) and D^g(k-r) (with r as the delay) needs to be anti-symmetric. This follows from Assumption <ref> and Remark <ref>. In other words, every agent i knows the delay r to match the received g(∂ f_j(k-r)) with its own previous information g(∂ f_i(k-r)) to find D_ij^g(k-r), and the same holds for agent j to find D_ji^g(k-r).§.§ Case I: information-update over a longer time-scale A straightforward solution in the delayed case is to have the agents wait for τ steps (with τ as the maximum possible delay) until they collect (at least) one delayed packet from the neighbouring agents (before processing for the next iteration) and then update their state[Similar solution to handle delays in consensus protocols is discussed in <cit.>.]. This alternative approach requires knowledge of τ (or an upper bound). Obviously, due to the agents' slower update rate, this approach's convergence rate is low. In this case, we define a new time-scale k=⌊k-1/τ+1⌋+1(see Fig. <ref>)and update the state of each agent i as follows,x_i(k+1)= x_i(k)-T ∑_j ∈N_i W_ij g(D_ij(k)), x_i(k+1)= x_i(k) -T ∑_j ∈N_i W_ijD^g_ij(k).Over this τ+1 time-steps (i.e., two consecutive update steps k_1 and k_2) on every link (i,j), every agent sends 1 message at step k and receives the messages from j∈N_i by step k+1[In an equivalent setup, agents may send their messages per scale k where, at the update scale k, receive at least 1 and at most τ messages. Then, the step size T needs to be down-scaled accordingly to satisfy the convergence criteria.], see Algorithm <ref>.This is used in the proof of convergence. Under Assumptions <ref>-<ref> and initializing froma feasible point x_0 ∈S_b, protocols (<ref>) and (<ref>) converge to the feasible unique equilibrium point x^*in the form ∇ F(x^*) = φ^* ⊗1_nfor sufficiently small T satisfying T<T_λ (with T_λ given as (<ref>)). The proof of feasibility and uniqueness follows a similar procedure as in the proof of Theorem <ref> considering the longer time-scale k. Since at every update step k only 1 message is received from every agent in N_i, following similar expressions to derive Eq. (<ref>), the same criteria T_λ ensures convergence.§.§ Case II: using all delayed data at the same time-scale In the most general time-delayed version of the DT protocols (<ref>) and (<ref>), agent i updates its state based on all available (received) information as,x_i(k+ 1) = x_i(k) -T ∑_j ∈N_i∑_r=0^τ W_ij g(D_ij(k-r)) I_k-r,ij(r), x_i(k+ 1) = x_i(k). -T ∑_j ∈N_i∑_r=0^τ W_ijD^g_ij(k-r) I_k-r,ij(r),Note that in (<ref>), D_ij(k-r)can be easily calculated as agent i records all its gradients ∂ f_i(k) at the last τ steps and knows the time delay r of the received time-stamped message ∂ f_j(k-r). In fact, the second ∑ sums all the gradient differences based on all received data from N_i at step k, i.e.,for all pairs of D_ij(s) and s satisfying[Recall that the processed information at the time-step k are received in the time-period (kT-T,T]. Therefore, (<ref>) can be rewritten as {k -τ≤ s ≤ k, kT-T< sT+τ_ij(s)T ≤ kT, j ∈N_i} in the asynchronous case, for example, the delays could be positive real values (instead of integers).],{k -τ≤ s ≤ k, s+τ_ij(s)=k, j ∈N_i}.or equivalently,{I_k-s,ij=s, j ∈N_i}.Similar arguments hold forD^g_ij(k-r).Further, for Lipschitz g(·)with constant K_g<∞ (and smooth f_i(·)), from Definition <ref>, we have D^g_ij(k-r) ≤ K_g D_ij(k-r). Following Remark <ref>, for switching network G(k), the switching period needs to be longer than (τ+1)T (or τ+1 time-steps). 
This assumption implies that agent i receives at least one (possibly) delayed packet from every neighbour in N_i before losing its link due to network variation (ensuring Assumption <ref>(v)). Therefore, following the B-connectivity in Assumption <ref>, we need (τ+1)T<B.Note that for the update at every step k, Assumption <ref>(v) says that at least one package over the network is delivered. Otherwise, x_i(k)=x_i(k-1) for all i and no update occurs. The solution is summarized in Algorithm <ref>. The feasibility/uniqueness under (<ref>) and (<ref>) follows similar to the delay-free case. Let Assumptions <ref>-<ref> hold. Initializing from feasible states x_0 ∈S_b, under DT protocols(<ref>) and (<ref>), x(k) ∈S_b. Further, the unique equilibrium point x^* satisfies ∇ F(x^*) = φ^* 1_n. Following (<ref>) and (<ref>), we get ∑_i=1^nx_i(k+1) = ∑_i=1^n x_i(k) -∑_i=1^n T ∑_j ∈N_i∑_r=0^τ W_ij g(D_ij(k-r)) I_k-r,ij(r).∑_i=1^nx_i(k+1) = ∑_i=1^n x_i(k) -∑_i=1^n T ∑_j ∈N_i∑_r=0^τ W_ijD^g_ij(k-r) I_k-r,ij(r). For any link (i,j) and (j,i) in G(k), from Assumptions <ref>, <ref>, and <ref> we have W_ij=W_ji, D^g_ij(k-r)=-D^g_ji(k-r),g(D_ij(k-r))=-g(D_ji(k-r)), andI_k-r,ij(r)=I_k-r,ji(r) for 0 ≤ r ≤τ. This implies that, ∑_i=1^n T ∑_j ∈N_i∑_r=0^τ W_ij g(D_ij(k-r)) I_k-r,ij(r)=0, ∑_i=1^n T ∑_j ∈N_i∑_r=0^τ W_ijD^g_ij(k-r) I_k-r,ij(r)=0, and therefore, ∑_i=1^n x_i(k+1) = ∑_i=1^n x_i(k) for all k ≥ 1. Therefore, initializing from x_0 ∈S_b we have x(k) ∈S_b under both (<ref>) and (<ref>). This proves the feasibility. The proof of uniqueness follows similar reasoning as in the proof of Theorem <ref> and <ref> over the uniformly connected network G_B(t) (or G_B^τ^τ(t) in Remark <ref>). Case I is more practical than Case II in applications with low capacity/buffer at the computing nodes. This is because at time k=k node i sends (and receives) one message to (and from) the nodes N_i. Note that, for large delays (and the same T), this solution converges (although with a low rate) while Case II may not necessarily converge. Further, for unknown (or large) τ, Case II requires a high-capacity memory/buffer at the nodes to record the previous information (on the gradients). In case of limited (m-slot) memory/buffer, the nodes may record and use a portion (last m) of the previous states instead, and discard the rest before k-m (since they are time-stamped). This simply implies losing some links over G_B^τ [Convergence over lossy networks, following from Assumption <ref> and remark <ref>, is another promising direction of our future research.] and follows from Remark <ref> and <ref> and Assumption <ref>. Under Assumptions <ref>-<ref>, and initializing from feasible state x_0 ∈S_b,the proposed protocols (<ref>)-(<ref>) converge to the optimal solution of (<ref>) for T(τ+1)<T_λ. First, consider homogeneous delays τ_ij=τ, where agents' states at any time-step k get updated next at k+τ+1 and every τ+1 steps afterwards (see Fig. <ref>). For this case, following Theorem (<ref>), T<T_λ ensures the convergence. For general heterogeneous time-varying delays, Δx needs to be scaled by τ+1, and accordingly, T_λ is down-scaled by τ+1 to guarantee convergence. Consider two cases: (i) for time-invariant delays ∑_r=0^τI_k-r,ij(r)=1 in (<ref>), implying that only one delayed packet is received from every j ∈N_i. This implies the same bound as T<T_λ. For time-varying delays, from (<ref>), 0 <∑_r=0^τI_k-r,ij(r) ≤ (τ+1). This implies that Δx is scaled by τ+1 and accordingly, from (<ref>), T needs to be down-scaled by τ+1, i.e., T(τ+1) < T_λ. 
The proof for dynamics (<ref>) similarly follows.For time-varying but equi-probable delays r = 0,1,…,τ in (<ref>), one can claim that (in average) over τ+1 trials node i receivesone message from j ∈N_i (at every k) and the bound is T < T_λ.Communication time-delay over the network is not addressed by the existing solutions <cit.>. However, latency is a common issue in many multi-agent systems including the distributed resource allocation setup. Therefore, most existing works in the literature may not work properly in the presence of time-delays in the data-transmission network. The mentioned literature may lose resource-demand feasibility and/or result in some optimality gap in the presence of time-delays. In this aspect, our proposed delay-tolerant model advancesthe state-of-the-art and provides solutions that can be implemented in practice. § SIMULATION IN DISTRIBUTED SCHEDULING SETUPIn this section, we consider resource allocation over the power generation networks and the smart grid, known as the economic dispatch problem (EDP) <cit.>. The objective is to allocate optimal power outputs to the electricity generators to supply the load demand D (in MW) and to minimize the operating costs. In <cit.> this cost function is given as, F(x) =∑_i=1^n γ_i x_i^2+ β_i x_i + α_i, ∑_i=1^n x_i = D, m_i ≤x_i ≤ M_i,where x_i ∈ℝ is the output power at generator i. To include the box constraints, penalty terms are considered <cit.>.The parameters in the cost function (<ref>) are defined based on the type of the power generators (coal-fired, oil-fired, nuclear, etc.), for example, see Table <ref>. The other parameter values are: α_i = 0 and m_i = 20. The min and max RRL values given in <cit.> are equal to 1 and 3 MW/min (for oil/coal-type generators).In a more complicated setup, such a quadratic cost model is defined to adjust the power-demand mismatch over the grid (e.g., due to generator outage), known as the Automatic Generation Control (AGC) problem <cit.>. Given a known power mismatch, the idea is to allocate enough power to the generators to compensate for it while minimizing the power deviation cost. In this setup, the generators are subject to an extra physical constraint, known as ramp-rate-limit (RRL). This implies that the rate of increase or decrease of the produced power is constrained within certain limits and the generators cannot freely speed up/down their power generation.Such nonlinear constraints are a determining factor on the stability of the grid <cit.>. The solutions of the existing linear methods <cit.> may assign any rate of change ẋ_i to the power generators. Thus, they cannot address RRLs,which, in reality, the solutions cannot be followed by the generators. This may fail the feasibility and lead to sub-optimality. In contrast, considering the nonlinearity g(·) as saturation function (<ref>), the proposed nonlinear solutions in this work can address RRL constraints where the limits can be tuned by the parameter κ. A comparing example is given next. §.§ Allocation with No DelayAssume n=50 generators with the supply-demand constraint D=3200 MW. Initially, it is considered that x_i=D/n=64 MW at every generator i. The parameters in (<ref>) are randomly set from Table <ref> using MATLABfunction. For power allocation, as an academic example, we consider protocol (<ref>) with saturated nonlinearityg_κ(·) with level κ=1/60 on the node dynamics and protocol (<ref>) with g_f(·) as the sign-based function (<ref>) (with ν_1=0.4, ν_2=1.6) on the links. 
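To make the simulated protocol concrete, the following is a minimal NumPy sketch (ours, not the authors' MATLAB implementation) of a saturated Laplacian-gradient allocation update on a random symmetric network. The cost coefficients and graph weights are illustrative placeholders; the saturation g_κ(·) is assumed to be entrywise clipping to [-κ, κ] applied to the gradient differences, and the sign-based link nonlinearity g_f(·) is omitted for brevity.

    import numpy as np

    def grad_f(x, gamma, beta):
        # Gradient of the quadratic dispatch cost f_i(x_i) = gamma_i*x_i^2 + beta_i*x_i + alpha_i.
        return 2.0 * gamma * x + beta

    def g_sat(z, kappa):
        # Assumed saturation nonlinearity g_kappa: entrywise clipping to [-kappa, kappa].
        return np.clip(z, -kappa, kappa)

    def allocate(x0, W, gamma, beta, T=1.0, kappa=1.0 / 60, steps=20000):
        # Anytime-feasible Laplacian-gradient update (delay-free case):
        #   x_i(k+1) = x_i(k) - T * sum_j W_ij * g(grad_f_i(k) - grad_f_j(k)).
        # The delayed Case II variant would additionally sum buffered terms g(D_ij(k-r)).
        x = x0.copy()
        for _ in range(steps):
            df = grad_f(x, gamma, beta)
            D = df[:, None] - df[None, :]          # D_ij = grad_f_i - grad_f_j
            x = x - T * np.sum(W * g_sat(D, kappa), axis=1)
        return x

    rng = np.random.default_rng(0)
    n, demand = 50, 3200.0
    gamma = rng.uniform(0.01, 0.04, n)             # placeholder cost coefficients
    beta = rng.uniform(1.0, 3.0, n)
    U = np.triu((rng.random((n, n)) < 0.2).astype(float), 1)
    W = U * rng.uniform(0.005, 0.025, (n, n))
    W = W + W.T                                    # symmetric weighted Erdos-Renyi graph
    x0 = np.full(n, demand / n)                    # feasible start: sum(x0) = demand
    x = allocate(x0, W, gamma, beta)
    print(abs(x.sum() - demand))                   # ~0 up to floating-point error

Because W is symmetric and the clipped gradient differences are antisymmetric, the total generation sum_i x_i remains equal to the demand at every iteration (anytime feasibility), while the clipping level κ bounds the per-step change of each x_i, which is how the RRL constraint is enforced in this sketch.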
The network is considered as a random Erdos-Renyi graph (with link probability p=0.2) with random symmetric link weights 0.005 ≤ W_ij≤ 0.025. We provide a comparative simulation analysis to support our results. Both solutions are compared with some state-of-the-art solutions in the literature in Fig. <ref> for T=1; namely: linear <cit.>, accelerated linear<cit.>, finite-time<cit.>, and single-bit <cit.> solutions. For the single-bit protocol <cit.> we decreased the link weights by 80% to reduce the chattering effect in Fig. <ref>. Note that the RRL constraint is only met by our saturated solution as shown in Fig. <ref>. In other words, in a real scenario the generators cannot follow the iterative solution by <cit.> since their power generation rates at some intervals violate the RRLs. This may either cause feasibility gap or optimality gap in real world. But our saturated-based solution admits the RRLs. The min RRL value equal to 1 MW/min (or 1/60 MW/sec) is considered that meets the requirement by all the generators[Even though we used the same min RRL in this simulation, this parameter can be further tuned for different generators in dynamics (<ref>)-(<ref>) via the link weights and the node degrees.]. Box constraints m_i ≤ x_i ≤ M_ion the generators are considered by adding smooth penalty terms with σ=2. We modify the objective as f_i^σ = f_i(x_i) + c([x_i - M_i]^+)^2 + c([m_i- x_i]^+)^2 with c = 1. The evolution of power states under the sign-based dynamics is shown in Fig. <ref>, where both feasibility and power limit constraints are met. Note that our given assumptions ensure that at every time step the deviated power at nodes on two sides of every link is balanced such that the feasibility constraint is satisfied (i.e., the generated power equals the demand) at all times.To compare the computational complexity of the algorithms, theelapsed times (for one iteration) are given in Table <ref>. The simulation is done with MATLAB R2021b Intel Core i5 @ 2.4GHz processor RAM 8GB. In Table <ref>, we compared the number of iterations to reach (a predefined) cost residual F = 1. For example, within this range around the optimal value the solution is considered good enough. Note that our proposed sign-based solution, as compared to other solutions, although having more computational complexity converges very fast and within fewer number of iterations.Next, we study the network density on the number of iterations to reach a certain residual (as the termination criteria of the algorithm). In Fig. <ref>, this termination point is F = 0.01 and the ER link probability is changed from 15% to 45%. The link weights W_ij are chosen randomly in the range [0.02 0.12]. The simulation is averaged over 5 Monte-Carlo (MC) trials with T=0.05.The convergence time/iterations to reach F = 10^-2 versus network size n is shown in Fig. <ref>. The parameters are as follows: p=30% as the ER link probability, W_ij∈ [0.05 0.2], and 5 MC trials.Two different scenarios are considered for T: diminishing by size as T=0.05(1-n-20/100) and fixed step size T=0.025.In Fig. <ref>, for the same termination criteria F = 10^-2, the number of iterations for different step sizes T and link densities p are given. As we see in the figure, the convergence iteration multiplied by the step size is almost constant irrespective of the network connectivity (and eigen-spectrum) and the step size T. Note that the T values are chosen small enough for convergence, and for large values of T violating Eq. 

(<ref>) the solution may not necessarily converge. §.§ Allocation with Delayed Information-ExchangeNext, we consider a cyclic network of n=5 generators under RRL limitsκ=1/60 MW/sec in the presence of time delays. The data in Table <ref> is used which resembles IEEE 14-Bus test system <cit.>. The time-evolution of the generated powers and the Lyapunov function F (the residual) are simulated for random parameters. For the quadratic cost (<ref>), u=max{γ_i} = 0.04. We have ε = 0.0166, K_g = 1, λ_2 = 1.38, T=1 and λ_n = 3.61.Using Eq. (<ref>), for any T<0.045 the solution converges in the absence of delays. This is a sufficient bound and to some extent not very tight. Case I: In this case, using (<ref>), we update the generator states at every τ steps as in Section <ref>.Following Remark <ref>, we consider information exchange over the longer time-scale k.The time-evolution of the Lyapunov function F (representing the residual) for some values of τ is given in Fig. <ref>. Clearly, the convergence rate decreases with the increase in the time delays.Case II:In this case, we update the generator states at every step based on all the received (delayed and non-delayed) information using (<ref>) in Section <ref>.From Theorem <ref>, sufficient condition for convergence is T(τ+1) < 0.045. For this simulation, however, we see that solution convergesτ≤ 3. We perform the simulation for both heterogeneous time-varying and time-invariant (fixed) delays as shown in Fig. <ref>. § SIMULATION OF NON-SMOOTH QUANTIZED OPTIMIZATION OVER UNIFORMLY CONNECTED NETWORKSIn this section, we run the simulations over a randomdynamic network of n=100 agents. The network is not connected at any time, while it is B-connected over every B=100 iterations (Assumption <ref>), i.e.,⋃_k^k+100G(k) is connected. We consider strongly-convex logarithmic functions f_i(·) defined as <cit.>,f_i(x_i) = 1/2α_i(x_i-γ_i)^2 + ζlog(1+exp(β_i(x_i-η_i))),withthecoefficients randomly chosen as 0<α_i<0.2, -0.2<β_i<0.2, -0.3<γ_i<0.3, 0<η_i<0.6, ζ = 0.2, and random constraint parameters in problem (<ref>) chosen as -2<a_i<2 and b=10. To optimize this objective we apply (<ref>) (with T=0.1) under logarithmic quantization g_l(·) in Eq. (<ref>). This locally Lipschitz function satisfies (1-δ/2)z ≤g_l(z)/z≤(1+δ/2)z and, thus, Assumption <ref> holds. We considered a composition of dynamics (<ref>)-(<ref>) via g_κ(g_l(∂ f_i) -g_l(∂ f_j)) with κ = 1. The time-evolution of the residual F for different quantization levels δ is given in Fig. <ref>. We further compare the performance under different time-delay models by assigning random heterogeneous delays to the links.For Case II, two scenarios are given (i) time-varying, and (ii) time-invariant (fixed) delays. For Case I, from Remark <ref>, we consider updating over time-scale k using composition of dynamics (<ref>)-(<ref>).Simulations are shown in Fig. <ref>with parameters: δ = 0.125, T=0.05, τ = 2,6. As shown in Fig. <ref>, the solution by Case II does not necessarily converge while the solution by Case I converges. This simulation shows that for small τ (satisfying Theorem <ref>) Case II leads to faster convergence. On the other hand, for larger τ Case I is a better delay-tolerant mechanism.Note that the notion of information quantization and time-delays although prevalent in real-world networked systems cannot be addressed by the existing primal-based <cit.> and dual-based solutions <cit.>. 
In other words, this existing literature assumes ideal network conditions, and there is no guarantee that its solutions converge under quantized information and/or data-transmission delays.
§ CONCLUSION AND FUTURE DIRECTIONS This paper proposes anytime-feasible (Laplacian-gradient) solutions subject to model nonlinearities to solve distributed sum-preserving resource allocation and coupling-constraint optimization over uniformly-connected networks (not necessarily connected at all times). The convergence to the optimal value is proved for general strongly sign-preserving nonlinearities. In addition, two scenarios are proposed to overcome heterogeneous delays over the network. For large delays and low buffers, we proposed to update the agents' states over a longer time scale after receiving (at least) one delayed packet over every link. On the other hand, faster convergence can be achieved, for smaller time delays, by updating at the same time scale as the communication and using all the received (possibly) delayed packets at each iteration. The results are given for undirected and balanced graphs with bounded and link-symmetric delays. Allocation strategies over lossy networks with link failure or packet drop are another direction of our current research. As future research directions, applications to asynchronous scheduling under quantized dynamics <cit.>, robust (and noise-resilient) sign-based <cit.>, or single-bit <cit.> dynamics are of interest. Recall that for non-“strongly” sign-preserving solutions, the ε-accuracy needs to be addressed to give an estimate of the optimality gap. Another future research direction is to extend the results to non-convex problems, which is a bottleneck and of particular interest to industry. | http://arxiv.org/abs/2310.18225v1 | {
"authors": [
"Mohammadreza Doostmohammadian",
"Alireza Aghasi",
"Maria Vrakopoulou",
"Hamid R. Rabiee",
"Usman A. Khan",
"Themistoklis Charalambou"
],
"categories": [
"eess.SY",
"cs.MA",
"cs.SY",
"math.OC"
],
"primary_category": "eess.SY",
"published": "20231027155202",
"title": "Distributed Delay-Tolerant Strategies for Equality-Constraint Sum-Preserving Resource Allocation"
} |
arabicCooperative quantum tunneling of the magnetization in Fe-doped Li_3N A. Jesche January 14, 2024 ====================================================================We introduce MELEP, which stands for Muti-label Expected Log of Empirical Predictions, a novel measure to estimate how effective it is to transfer knowledge from a pre-trained model to a downstream task in a multi-label settings. The measure is generic to work with new target data having a different label set from source data. It is also computationally efficient, only requires forward passing the downstream dataset through the pre-trained model once. To the best of our knowledge, we are the first to develop such a transferability metric for multi-label ECG classification problems. Our experiments show that MELEP can predict the performance of pre-trained convolutional and recurrent deep neural networks, on small and imbalanced ECG data. Specifically, strong correlation coefficients, with absolute values exceeding 0.6 in most cases, were observed between MELEP and the actual average F1 scores of the fine-tuned models.Transfer learning, electrocardiography, multi-label data, pre-transfer evaluation, decision support systems.§ INTRODUCTIONAutomatic ECG interpretation has gained significant popularity and witnessed remarkable progress in recent years. This advancement can be attributed to the wide-scale digitization of ECG data and the evolution of deep learning techniques. Notably, deep neural networks (DNN) have achieved classification performance on par with cardiologists, as demonstrated by Hannun et al. <cit.>, and Ribeiro et al. <cit.>. These outstanding achievements have partly been due to the availability of extensive human-labeled datasets, consisting of 91,232 and 2,322,513 ECG recordings, respectively.However, ECG datasets used in practice are often much smaller, due to the expensive and time-consuming data collection and annotation process. Consequently, it becomes challenging to achieve desirable results when training DNNs from scratch. Transfer learning is often useful in such scenarios, resulting in improved performance <cit.> and faster convergence <cit.>. Fortunately, there exists some large, publicly available ECG datasets, which enable DNNs to learn important latent features, then transfer the learned knowledge to our main task, typically with much less annotated data. There are two most commonly used transfer learning techniques: head retraining <cit.> and fine-tuning <cit.>. Both replace the top classification layer to match the number of target task's outputs; however, while the former freezes all feature extractor layers and only updates the top layer's parameters during training on the target dataset, the latter does not have such a constraint and makes all layers trainable. Research suggested that fine-tuning leads to better performance <cit.>, thus it has been accepted as a de facto standard.Given the effectiveness of fine-tuning, a new problem arises: how do we select the best pre-trained checkpoints among a large candidate pool? A checkpoint is a model pre-trained on a source dataset, with a specific set of hyperparameter settings. It is straightforward to actually do the fine-tuning and then select the top ones; however, this method is obviously expensive and difficult to scale. Transferability estimation <cit.> aims to address the above bottleneck by developing a metric that indicates how effectively transfer learning can apply to the target task, ideally with minimal interaction with it. 
Good estimation is likely to facilitate the checkpoint selection process. In the domain of computer vision, several transferability measures were developed. Tran et al. <cit.> introduced negative conditional entropy between the source and target label sets. Bao et al. <cit.> proposed a transferability measure called H-score, which was based on solving a Maximal HGR Correlation problem <cit.>. Nguyen et al. <cit.> developed LEEP, an efficient estimate with no expensive training on target tasks. However, those measures only apply to multi-class classification problems, thus cannot be directly applicable to multi-label tasks such as ECG diagnosis, in which a patient may suffer from more than one cardiovascular disease. Key Contributions:* We propose MELEP, a transferability measure that can directly apply to multi-label classification problems in automatic ECG interpretation.To the best of our knowledge, we are the first to develop such a measure to estimate the effectiveness of transfer learning in the ECG domain. * We conducted the first extensive experiment of transfer learning for 12-lead ECG data. We focused on small downstream datasets and covered a wide range of source checkpoints, which were produced from multiple source datasets and representatives of two most popular DNN architectures for time-series analysis: convolutional and recurrent neural networks.Our article is structured as follows: first, we provide the mathematical foundation behind MELEP and describe the intuition and its properties.Then four 12-lead ECG datasets and two DNN architectures are introduced, which build the backbone of our experiments. We evaluate the ability of MELEP to predict the fine-tuning performance of a convolutional neural network by conducting extensive experiments with multiple checkpoints produced from pretraining the model on different source datasets. To show the versatility of MELEP, we replicate the experiment with a recurrent neural network, affirming that its capability is not tied to a specific model architecture. Next, we demonstrate the effectiveness of MELEP in a real-world scenario, which is selecting the best checkpoints among a group of pre-trained candidates. Finally, we discuss some notable properties, extensions, and applications of MELEP and suggest promising directions for future study.§ MATERIALS & METHODS §.§ Multi-Label Expected Log of Empirical Predictions (MELEP)Consider transfer learning from one multi-label classification task to another. Let: * Θ be the pre-trained model on the source task.* ℒ_s = {0, 1, ..., 𝒵 -1} be the source label set of size |ℒ_s| = 𝒵.* ℒ_t = {0, 1, ..., 𝒴 -1 } be the target label set of size |ℒ_t| = 𝒴.* 𝒟 = { (x_1, 𝐲_1), ..., (x_n, 𝐲_𝐧) } be the target dataset of size n. 𝐲_𝐢 is a label vector of size 𝒴.* (y, z) ∈ℒ_t ×ℒ_s be a pair of target-source labels taken from the two sets.* (t, s) be the values of (y, z). In the ECG classification context, the label values are binary, so (t,s) ∈{0, 1}×{0, 1}. then MELEP is computed as follows:* Step 1: Compute the dummy label distributions of the target data over the source label set, denoted by a vector ŷ_i= Θ(x_i) of size 𝒵, by forward passing each data point to the pre-trained model.* Step 2: Consider each pair of target-source labels (y, z). Let θ_iz denote the value of ŷ_i at the z^th column, i.e. 
the predicted probability that the sample x_i belongs to label z.* Compute its 2 × 2 empirical joint distribution matrix 𝐏̂_𝐲𝐳(t,s), with value at row t column s is: P̂_yz(t, s) = 1/n∑_i: y_iz=t(θ_iz)_swhere ∑_i: y_iz=t means we select all samples x_i with the z^th ground-truth label y_iz equal to t. With corresponding values of s, (θ_iz)_1 and (θ_iz)_0 are the probabilities that the label z can and cannot be assigned to the sample x_i, respectively.* Compute the empirical marginal distribution vector (of size 2) with respect to the source label z:P̂_z(s)= 1/n∑_i=1^n (θ_iz)_s = P̂_yz(0, s) + P̂_yz(1, s)* Compute the 2 × 2 empirical conditional distribution matrix 𝐏̂_𝐲|𝐳(t,s) of the target label y given the source label z, with value at row t column s is: P̂_y|z(t|s) = P̂_yz(t, s)/P̂_z(s) For any input x_i, consider a binary classifier that predicts whether x_i belongs to label y by first randomly drawing 𝒵 dummy labels from Θ(x_i), then averaging the likelihood of y based on 𝒵 empirical conditional distributions 𝐏̂_𝐲|𝐳. This process is repeated for all 𝒴 target labels. The set of binary classifiers is called the Empirical Predictor (EP). MELEP is defined as the average negative log-likelihood of the EP across all target labels, as follows:* Step 3: Compute the Expected Logarithm of Empirical Prediction with respect to the label pair (y, z): ϕ (Θ, 𝒟, y, z) = - 1/n∑_i=1^n log(∑_s=0^1 P̂_y|z(y_iz|s) (θ_iz)_s)* Step 4: Compute MELEP by taking the weighted average of ϕ (θ, 𝒟, y, z) over all target-source label pairs:Φ (Θ, 𝒟) = 1/𝒴∑_y w_y ×1/𝒵∑_zϕ (Θ, 𝒟, y, z) where w_y are the weights of the target label y in the target dataset, i.e. the ratio of the number of positive samples to the number of negative samples of y. Note that we do not take the source weights into consideration, because in practice, it makes sense to assume that we do not know the source label distribution prior to fine-tuning. From its definition, MELEP is always positive, and smaller values indicate superior transferability. Intuitively, MELEP can be regarded as a distance metric, indicating how "close" the pre-trained model Θ and the target dataset 𝒟 are. The closer the distance, the easier the transfer.The measure is generic, meaning that it can be applied to all types of checkpoints, and works without any prior knowledge of the pre-training process, such as data distribution, hyperparameter settings, optimizer, loss functions, etc. Furthermore, the computation of MELEP is efficient, which renders it practically useful.This lightweight property is inherited from the original LEEP <cit.>, with the calculation involving only a single forward pass through the target dataset 𝒟, requiring no training on the downstream task.§.§ DatasetsWe used publicly available 12-lead ECG datasets in this work.The first source was the public training dataset from the China Physiological Signal Challenge 2018 (CPSC2018) <cit.>. 
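As a concrete illustration of Steps 1–4 above, the following is a short NumPy sketch of the MELEP computation (ours, not the authors' released code): theta holds the dummy source-label distributions obtained from a single forward pass of the pre-trained checkpoint, and Y is the binary ground-truth matrix of the target task.

    import numpy as np

    def melep(theta, Y):
        # theta: (n, Z) predicted probabilities over the Z source labels (one forward pass).
        # Y:     (n, Yt) binary ground-truth matrix over the Yt target labels.
        n, Z = theta.shape
        Yt = Y.shape[1]
        p1, p0 = theta, 1.0 - theta                      # (theta_iz)_s for s = 1 and s = 0
        score = 0.0
        for y in range(Yt):
            pos = Y[:, y] == 1
            w_y = pos.sum() / max((~pos).sum(), 1)       # positive/negative ratio weight w_y
            t_i = Y[:, y].astype(int)                    # ground-truth value t for each sample
            phi_sum = 0.0
            for z in range(Z):
                # 2x2 empirical joint distribution P(t, s): rows t in {0,1}, columns s in {0,1}
                P = np.array([[p0[~pos, z].sum(), p1[~pos, z].sum()],
                              [p0[pos, z].sum(),  p1[pos, z].sum()]]) / n
                Pz = np.maximum(P.sum(axis=0), 1e-12)    # empirical marginal over s
                Pcond = P / Pz                           # empirical conditional P(t | s)
                # expected log of the empirical predictor for the label pair (y, z)
                lik = Pcond[t_i, 0] * p0[:, z] + Pcond[t_i, 1] * p1[:, z]
                phi_sum += -np.mean(np.log(lik + 1e-12))
            score += w_y * phi_sum / Z
        return score / Yt

Consistent with the definition, smaller scores indicate better expected transferability.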
This dataset comprises 6,877 ECG records, each associated with at most nine diagnostic categories: NORM (representing normal ECG patterns), AF (Atrial Fibrillation), I-AVB (First-degree atrioventricular block), LBBB (Left Bundle Branch Block), RBBB (Right Bundle Branch Block), PAC (Premature Atrial Contraction), PVC (Premature ventricular contraction), STD (ST-segment Depression), and STE (ST-segment Elevated).The second dataset was PTB-XL <cit.>, containing 21,837 records from 18885 patients, and a total of 44 diagnostics statements.The dataset's authors organized these diagnostic labels into a hierarchical structure <cit.>, categorizing the 44 labels into five broader superclasses, namely: NORM (normal ECG), MI (Myocardial Infarction), STTC (ST/T-Changes), HYP (Hypertrophy), and CD (Conduction Disturbance).We followed this structure and focused on these five superclasses when conducting experiments with the PTB-XL dataset.Our third dataset, known as the Georgia dataset <cit.>, consists of 10,344 ECGs that reflect the demographic characteristics of the Southeastern United States.The data covers a diverse range of 67 unique diagnoses. However, for our research, we focused on a subset of 10 specific labels, which had the most substantial number of samples:NORM, AF, I-AVB, PAC, SB (Sinus Bradycardia), LAD (left axis deviation), STach (Sinus Tachycardia), TAb (T-wave Abnormal), TInv (T-wave Inversion), and LQT (Prolonged QT interval). The last source was the Chapman University, Shaoxing People's Hospital, and Ningbo First Hospital database <cit.>, which we will refer to as the CSN dataset for brevity.This dataset contains 45,152 12-lead ECG records, each lasting for 10 seconds and sampled at 500 Hz. There are a total of 94 unique labels, among which we focused on 20 labels with more than 1,000 records for our experiments. These 20 labels are SB, NORM, STach, TAb, TInv, AF, STD, LAD, PAC, I-AVB, PVC, AFL (Atrial Flutter), LVH (Left Ventricular Hypertrophy), STC (S-T changes), SA (Sinus Arrhythmia), LQRSV (Low QRS Voltages), PR (pacing rhythm), NSTTA (Nonspecific ST-T Abnormality), CRBBB (complete Right Bundle Branch Block), QAb (Q-wave Abnormal).Table <ref> summarizes key statistics of the four data sources.In terms of data preprocessing, we applied the following procedures:* Downsampling: we reduced the sampling frequency of all ECG records from 500 Hz to 100 Hz. This helps reduce computational load while retaining essential information.* Cropping: for ECG records longer than the desired duration (ten seconds), we cropped them to meet this target. This step ensures that all records have consistent lengths for training. It is worth noting that only a tiny fraction of records have durations shorter than ten seconds: six out of 6,877 in the CPSC2018 dataset, 52 out of 10,334 in the Georgia dataset, and none in the PTB-XL and CSN datasets. Therefore, instead of padding these records to meet the desired duration, which could potentially introduce unwanted noise or artifacts into the signals, they were simply omitted from our experiments. We used the CSN and PTB-XL datasets for fine-tuning due to their relatively large amount of records. When fine-tuning models on the former, we pre-trained our models using three source datasets: CPSC2018, PTB-XL, and Georgia. When fine-tuning on the latter, we only used two source datasets: CPSC2018 and Georgia. For pretraining, we partitioned each of the source datasets into training and test sets as follows. 
For PTB-XL, we followed the recommended split in <cit.>, pretraining our models on the first eight folds, and testing on the tenth fold. For the CPSC2018 and Georgia datasets, we kept 33% the amount of data in the test set and allocated the remaining for pretraining.§.§ Deep Learning Models We investigated two widely used deep learning architectures for time-series analysis:* Convolutional Neural Network (CNN): we utilized ResNet1d101, which is a 1D variant of ResNet101 <cit.>. The architecture of the ResNet1d101 model is illustrated in Figure <ref>.* Recurrent Neural Network (RNN): the Bidirectional Long Short Term Memory (Bi-LSTM) architecture <cit.> was used. The structure of the Bi-LSTM model is visually presented in Figure <ref>.Since the source datasets have varying numbers of labels, the last fully-connected layer of the models was adjusted to align with the respective number of outputs. During pretraining, each model was trained on a source training set for 50 epochs, using Adam optimizer <cit.> with a learning rate of 0.01. At the end of each epoch, we recorded the average F1 score on the test set, which served as an early stopping criterion. We observed that Bi-LSTM experienced overfitting when training beyond the early stopping point, whereas ResNet1d101 mostly converged.§ EXPERIMENTS & RESULTSIn this section, we show the potential of MELEP in predicting the performance of fine-tuning a pre-trained model on a target dataset.In practice, transfer learning is often used when dealing with limited human-annotated data. Therefore, we focused on investigating MELEP in the context of small target datasets. §.§ MELEP vs Average F1 of CNN fine-tuned on CSN We first experimented with the convolutional model ResNet1d101. This model was pre-trained on three different source datasets: PTB-XL, CPSC2018 and Georgia, as described in Section <ref>, resulting in three respective source checkpoints. Each source checkpoint was then undergone an experiment with a wide range of target tasks sampled from the CSN dataset. To construct these tasks, we started with the set of 20 labels in the CSN dataset with at least 1000 positive samples, as in Section <ref>. N labels were then randomly sampled without replacement from the set, where N varied from 2 to 10. This step ensured that the target tasks would cover a diverse set of target labels. Records with no positive values for the N selected labels were filtered out to avoid creating a sparse dataset, and to guarantee that every sample left contained at least one positive label. We then randomly select 1000 records among the remaining to form a data fold. The process was repeated 100 times to generate a total of 100 data folds for our experiment.For each fold, we further split it into training and test subsets with a 7:3 ratio, i.e. 700 training records and 300 test records. Subsequently, we compute MELEP using the pre-trained checkpoint and the training subset only, following the algorithm described in Section <ref>. Prior to fine-tuning the model, we replaced the top fully connected layer of the checkpoint, adjusting the number of output neurons to match the target number of labels N. The entire modified model was then fine-tuned on the training subset for 50 epochs with early stopping, using Adam optimizer <cit.> . 
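The fold-construction and evaluation protocol just described can be summarized in the sketch below. It reuses the melep() function sketched earlier; predict_fn and finetune_fn are placeholders for the checkpoint's forward pass and for the actual 50-epoch fine-tuning run, and the Pearson correlation is used here as an example of the correlation analysis.

    import numpy as np
    from scipy.stats import pearsonr

    def make_fold(Y_all, rng, label_pool, n_min=2, n_max=10, fold_size=1000):
        # Sample N target labels, drop records with no positive among them, then sample 1000 records.
        n_labels = rng.integers(n_min, n_max, endpoint=True)
        labels = rng.choice(label_pool, size=n_labels, replace=False)
        keep = np.flatnonzero(Y_all[:, labels].sum(axis=1) > 0)
        rows = rng.permutation(keep)[:fold_size]
        return rows, labels

    def correlation_study(X_all, Y_all, label_pool, predict_fn, finetune_fn,
                          n_folds=100, seed=0):
        # predict_fn(X) -> (n, Z) source-label probabilities of the pre-trained checkpoint.
        # finetune_fn(train, test) -> weighted average F1 after fine-tuning (placeholder).
        rng = np.random.default_rng(seed)
        meleps, f1s = [], []
        for _ in range(n_folds):
            rows, labels = make_fold(Y_all, rng, label_pool)
            split = int(0.7 * len(rows))                         # 7:3 train/test split
            tr, te = rows[:split], rows[split:]
            meleps.append(melep(predict_fn(X_all[tr]), Y_all[tr][:, labels]))
            f1s.append(finetune_fn((X_all[tr], Y_all[tr][:, labels]),
                                   (X_all[te], Y_all[te][:, labels])))
        return pearsonr(meleps, f1s)                             # correlation coefficient, p-value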
To evaluate the fine-tuning performance, we recorded the weighted average F1-score across the N labels of the given fold.F1-score was chosen as the evaluation metric due to its robustness in handling class imbalances <cit.>, a common feature of ECG data, compared to accuracy. Ultimately, we gathered 100 points of (MELEP, average F1) representing the correlation between MELEP and the fine-tuning performance of the source checkpoint across a wide range of target tasks. We performed a correlation analysis between MELEP and target performance, following asimilar approach used in assessing transferability on multi-class computer vision tasks <cit.>. The first three rows in Table <ref> show the results of the three ResNet1d101 checkpoints in this experiment, revealing strong negative correlations between MELEP and average F1 scores, all of which are below -0.6. To visualize this relationship, Figure <ref> classifies the MELEP values into four distinct distance levels. Within each level, we calculated the mean of average F1 scores from all the folds with MELEP falling into that level. The lower the MELEP, the closer the distance, implying easier transferability. §.§ MELEP vs Average F1 of RNN fine-tuned on CSNTo illustrate the applicability of MELEP to RNN, we repeated the experiment in Section <ref> with Bi-LSTM as the source model.Similar to ResNet1d101, the Bi-LSTM model was pre-trained on three source datasets: PTB-XL, CPSC2018, and Georgia. We leveraged the same set of 100 CSN data folds which were previously constructed for the CNN experiment, and applied the identical fine-tuning procedure. In Table <ref> (specifically, the first three rows of the Bi-LSTM section), we observe a robust correlation, even stronger than that observed with ResNet1d101, between MELEP and average F1 scores. This correlation is visually depicted in Figure <ref>, where MELEP is categorized into the same distance levels as described in Section <ref>. The trend remains consistent: the closer the distance, the better the transfer.§.§ MELEP vs Average F1 of Models fine-tuned on PTB-XLIn this experiment, we explored the use of MELEP on a different target dataset, specifically PTB-XL, chosen for its relatively large amount of records. We followed the same procedure outlined in Section <ref> to construct 100 target data folds, with the only difference being the number of labels N. These label sets ranged from two to five and were derived from the five superclasses covering the whole PTB-XL dataset, as described in Section <ref>. We considered four different checkpoints: ResNet1d101 and Bi-LSTM models pre-trained on the CPSC2018 and Georgia datasets.The results in Table <ref> indicate a moderate correlation between MELEP and transfer performance, with most correlation coefficients below -0.5. These correlations, while still significant, are slightly weaker than what was observed in the experiment with the CSN dataset (Section <ref> and <ref>), as shown in Figure <ref>, where the predictive trend of MELEP is disrupted, with an increase instead of a decrease at one distance level (the 2^nd level for ResNet1d101 pre-trained on CPSC2018 and the 3^rd level for other checkpoints). §.§ MELEP for Checkpoint Selection This experiment demonstrates the use of MELEP in practice to effectively estimate fine-tuning performance in a multi-label classification task, before the actual fine-tuning process takes place. 
Consider a checkpoint selection problem, where the goal is to choose the best candidate from a set of given source checkpoints for a target task. In this scenario, we had six candidate checkpoints: ResNet1d101-PTBXL, ResNet1d101-CPSC, ResNet1d101-Georgia, BiLSTM-PTBXL, BiLSTM-CPSC, BiLSTM-Georgia. To simulate the context of fine-tuning of a small target dataset, the target task was generated following the random process outlined in Section <ref>, with the target fold being 1000 records sampled from the CSN dataset, belonging to 8 labels: PAC, PVC, TAB, PR, TInv, IAVB, AFL and CRBBB. Subsequently, we divided this fold into training and test subsets with a 7:3 ratio. The training subset was used for computing MELEP and fine-tuning, while the test set was reserved for performance evaluation. In Figure <ref>, MELEP values and their corresponding average F1 scores for all six source checkpoints are displayed, along with the reference best-fit line.The graph clearly illustrates the effectiveness of MELEP in predicting the performance of the given checkpoints on the target task. Notably, the checkpoint with the lowest MELEP, ResNet1d101 pre-trained on the CPSC2018 dataset, achieved the highest average F1 score. The pattern remains consistent for most checkpoints, with the only outlier of ResNet1d101-Georgia, which had higher MELEP but performed better than two of the Bi-LSTM checkpoints. § DISCUSSIONS & CONCLUSIONWe introduced MELEP, a novel transferability measure that is directly applicable to multi-label ECG diagnosis. The measure is built upon the foundation of LEEP <cit.>, adapting from single-label multi-class problems in computer vision to multi-label binary-class ones in the ECG domain.We conducted extensive experiments to empirically illustrate the effectiveness of MELEP in predicting the performance of transfer learning in various ECG classification tasks. In this section, we discuss some notable properties, extensions, and applications of MELEP alongside promising directions for future study.Source model dependence: MELEP computation is based on a source checkpoint, which is a source model pre-trained on a source task. Additionally, MELEP indirectly considers the input features when computing the dummy label distributions. Consequently, the score is inherently influenced by the architectural choice, set of hyperparameters, training configurations (such as learning rate, optimizer, dropout rate, etc.), and the performance of the source model. Source data dependence: in addition, MELEP is also dependent on the source dataset.Equations <ref> and <ref> show that the cardinality of the source label set contributes to MELEP score.Although MELEP calculation does not require specific label values, it is reasonable to assume that if there is a significant overlap between the source and target label sets, the checkpoint pre-trained on the source task is likely to perform well on the target task.Considerable extensions: as mentioned in Section <ref>, Equation <ref>, we do not consider source label weights in the MELEP formula. This exclusion is based on the assumption that we lack prior knowledge of the source label distribution used in pretraining. However, in situations where this information is known, it is more sensible to take the source weights into account. Additionally, there is another variant that deserves consideration for its practical versatility. 
Instead of aggregating the weighted average of ϕ(Θ, 𝒟, y, z) into a single value as in Equation <ref>, we can output a vector of size 𝒴, indicating the transferability measure for each target label. Such an approach is well-suited to scenarios where the performance on certain labels holds more significance than on others. Potential applications: apart from the source checkpoint selection use case demonstrated in Section <ref>, MELEP can be useful for continual learning algorithms that are based on neural architecture changes or on the selection of data points in replay buffers <cit.>, facilitating the decision-making process. Furthermore, multi-task learning <cit.>, which often depends on the selection of deep parameter-sharing networks and a combination of task labels, can also benefit from MELEP. Finally, MELEP holds the potential to assist in the selection of hyperparameters for Bayesian optimization <cit.>. We leave these directions for future work. | http://arxiv.org/abs/2311.04224v1 | {
"authors": [
"Cuong V. Nguyen",
"Hieu Minh Duong",
"Cuong D. Do"
],
"categories": [
"eess.SP",
"cs.CV",
"cs.LG"
],
"primary_category": "eess.SP",
"published": "20231027145710",
"title": "MELEP: A Novel Predictive Measure of Transferability in Multi-Label ECG Analysis"
} |
Constraining the growth rate on linear scales by combining SKAO and DESI surveys Muhammad Bilal1^,2^,* Dinis Martinho3 Reiner Sim4 Adnan Qayyum5^,6 Hunaid Vohra7 Massimo Caputo7 Taofeek Akinosho2 Sofiat Abioye2 Zaheer Khan2 Waleed Niaz2 Junaid Qadir5 January 14, 2024 ===============================================================================================================================================================================Simultaneous sequence generation is a pivotal task for real-time scenarios, such as streaming speech recognition, simultaneous machine translation and simultaneous speech translation, where the target sequence is generated while receiving the source sequence. The crux of achieving high-quality generation with low latency lies in identifying the optimal moments for generating, accomplished by learning a mapping between the source and target sequences. However, existing methods often rely on task-specific heuristics for different sequence types, limiting the model's capacity to adaptively learn the source-target mapping and hindering the exploration of multi-task learning for various simultaneous tasks. In this paper, we propose a unified segment-to-segment framework (Seg2Seg) for simultaneous sequence generation, which learns the mapping in an adaptive and unified manner. During the process of simultaneous generation, the model alternates between waiting for a source segment and generating a target segment, making the segment serve as the natural bridge between the source and target. To accomplish this, Seg2Seg introduces a latent segment as the pivot between source to target and explores all potential source-target mappings via the proposed expectation training, thereby learning the optimal moments for generating. Experiments on multiple simultaneous generation tasks demonstrate that Seg2Seg achieves state-of-the-art performance and exhibits better generality across various tasks[Code is available at: <https://github.com/ictnlp/Seg2Seg>.].§ INTRODUCTIONRecently, there has been a growing interest in simultaneous sequence generation tasks <cit.> due to the rise of real-time scenarios, such as international conferences, live broadcasts and online subtitles. Unlike conventional sequence generation <cit.>, simultaneous sequence generation receives a streaming source sequence and generates the target sequence simultaneously, in order to provide low-latency feedback <cit.>. To achieve high-quality generation under such low-latency conditions, simultaneous models must learn to establish a mapping between the target sequence and the source sequence <cit.> and thereby identify the optimal moments for generating <cit.>.Directly mapping source and target sequences is non-trivial due to inherent differences between the two sequences, such as modalities or languages, resulting in significant representational and structural gaps. For instance, in streaming automatic speech recognition (Streaming ASR) <cit.>, speech needs to be mapped to text, while simultaneous machine translation (SimulMT) <cit.> requires the mapping from a source language to a target language (i.e., cross-lingual alignment <cit.>). Simultaneous speech translation (SimulST) <cit.> encounters challenges that encompass both cross-modal and cross-lingual aspects. Therefore, developing an approach to bridge source and target is critical to simultaneous sequence generation.Existing methods for simultaneous generation often rely on task-specific heuristics to bridge the source and target sequences. 
For example, streaming ASR methods assume a strong correlation between the target token and local speech, employing a fixed-width window to directly predict the corresponding word <cit.>. SimulMT methods consider that the source and target sequences have similar lengths, employing fixed wait-k policies <cit.> or attention mechanisms <cit.> to establish a token-to-token mapping. Such assumptions of similar length limit their ability to handle sequences with significant length differences <cit.>. SimulST methods divide the speech into multiple segments to overcome length differences <cit.>, and then apply the fixed wait-k policy <cit.>. These task-specific heuristics not only hinder the adaptive learning of the source-target mapping but also impede the integration of various simultaneous tasks into a unified framework, restricting the potential of utilizing multi-task learning <cit.> in simultaneous generation tasks.Under these grounds, we aim to bridge the source and target sequences in an adaptive and unified manner without any task-specific assumptions. In simultaneous generation process, the model necessarily waits for a source segment and outputs a target segment alternately, with each segment comprising one or more tokens. As such, the source sequence and target sequence should correspond in terms of the segment and ideally agree on the segment representation <cit.>, enabling the segment to serve as a natural bridge between source and target.In this paper, we propose a unified segment-to-segment framework (Seg2Seg) for simultaneous sequence generation, which introduces latent segments as pivots between source and target. As illustrated in Figure <ref>, given a streaming source sequence, Seg2Seg determines whether the received source tokens can be aggregated into a latent segment. Once aggregated, the latent segment starts emitting the target tokens until the latent segment can no longer emit any further target tokens. Seg2Seg repeats the above steps until finishing generating. To learn when to aggregate and emit, Seg2Seg employs expectation training to explore all possible source-target mappings and find the optimal moments for generating. Experiments on multiple simultaneous generation tasks, including streaming ASR, SimulMT and SimulST, demonstrate that Seg2Seg achieves state-of-the-art performance and exhibits better generality across various simultaneous tasks.§ RELATED WORKStreaming ASRRecent streaming ASR methods primarily rely on two approaches: transducer and local attention. Transducer involves a speech encoder and a label predictor, which are aligned via a joiner to determine whether to generate <cit.>. Local attention approach utilizes monotonic attention to determine whether to generate based on the speech within a local window <cit.>. Moreover, various methods have been proposed to reduce the latency based on these two approaches by optimizing the alignment process <cit.>.SimulMTRecent SimulMT methods are mainly based on pre-defined rules or alignment mechanisms. For pre-defined rules, <cit.> proposed wait-k policy, which waits for k source tokens before alternately waiting/generating one token. Some methods were proposed to improve the flexibility of fixed rules through training <cit.>, simultaneous framework <cit.>, the ensemble of wait-k <cit.> or post-evaluation <cit.>. 
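For reference, the test-time wait-k schedule discussed above can be written in a few lines; generate_next is a stand-in for one decoding step of whichever underlying model is used and is not part of the policy itself.

    def wait_k_policy(k, src_stream, generate_next, eos="</s>"):
        # Wait for k source tokens, then alternately WRITE one target token and
        # READ one source token; once the source ends, generate the remaining output.
        src, tgt = [], []
        for token in src_stream:
            src.append(token)                 # READ
            if len(src) < k:
                continue
            y = generate_next(src, tgt)       # WRITE one token from the current prefix
            if y == eos:
                return tgt
            tgt.append(y)
        while True:                           # source exhausted: finish the hypothesis
            y = generate_next(src, tgt)
            if y == eos:
                return tgt
            tgt.append(y)

Multipath and MoE variants keep this same schedule but vary or mix k during training, while the adaptive and alignment-based methods below replace the fixed schedule with learned decisions.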
For alignment mechanisms, previous works employ monotonic attention <cit.>, Gaussian attention <cit.>, binary search <cit.>, non-autoregressive structure <cit.> or hidden Markov models <cit.> to learn the alignments between the source and target token-to-token, and make waiting or generating decisions accordingly.SimulSTRecent SimulST methods focus on the segmentation of speech <cit.>. <cit.> proposed fixed pre-decision to divide speech into equal-length segments, and applied SimulMT methods such as wait-k <cit.> and MMA <cit.>. Some other methods first use CTC results <cit.>, ASR results <cit.> or integrate-and-firing <cit.> to detect the number of words in speech, and then apply the wait-k policy. Further, <cit.> proposed ITST, which judges whether the received information is sufficient for translation. <cit.> proposed MU-ST, which constructs segmentation labels based on meaning units and uses them to train a segmentation model. <cit.> proposed differentiable segmentation (DiSeg) to directly learn segmentation from the underlying translation model via an unsupervised manner.Previous methods for simultaneous generation often involve task-specific heuristics, which hinder adaptive learning and limit their applicability to other tasks. The proposed Seg2Seg utilizes the latent segments as pivots to achieve fully adaptive learning of source-target mapping. Furthermore, Seg2Seg serves as a unified framework for various simultaneous generation tasks, making multi-task learning in simultaneous generation feasible.§ METHODIn this paper, we propose a segment-to-segment framework (Seg2Seg) to map the source sequence to the target sequence, using latent segments as the pivots. With the latent segments, Seg2Seg can adaptively learn to map source to target, enabling it to find the optimal moments to generate the target tokens during the simultaneous generation process. Details are introduced in the following sections. §.§ Mapping with Latent Segmentsr0.4 +1mm< g r a p h i c s >Diagram of source-target mapping with latent segments. The arrows in color and gray represent the mapping in inference and training, respectively. Seg2Seg leverages the Transformer (encoder-decoder) <cit.> as the backbone, and further converts the sequence-to-sequence framework to the segment-to-segment framework by introducing latent segments. Formally, we denote the source sequence as 𝐱={x_1,⋯ ,x_J} with length J, and the target sequence as 𝐲={y_1,⋯ ,y_I} with length I. In Seg2Seg, the source tokens are first aggregated into several latent segments (source tokens⇒latent segment), and then the latent segment emits the target tokens (latent segment⇒target tokens), as shown in Figure <ref>.Source tokens ⇒ Latent segmentFor aggregation, Seg2Seg produces a Bernoulli variable a_j for each source token x_j to determine whether the currently received source tokens can be aggregated into a segment. An aggregation probability α_j is predicted as the parameter for the variable a_j, calculated as:α_j=sigmoid ( FFN ( Rep ( x_j )) ), a_j∼Bernoulli ( α_j ),where FFN (·) is a feed-forward network, Rep (x_j) is the representation of x_j, and α_j is aggregation probability at x_j. As shown in Figure <ref>, if a_j=0, Seg2Seg waits for the next input, otherwise, it aggregates the tokens received after the previous segment into a new segment. Once a latent segment is aggregated, we calculate its representation by summing the representations of all the source tokens it contains. 
Specifically, the representation of the k^th latent segment is denoted as seg_k, calculated as:seg_k=𝐖^src→seg∑_x_j∈seg_kRep ( x_j ),where 𝐖^src→seg is the learnable projection from source to latent segment space.Latent segment ⇒ Target tokensGiven latent segment representation, Seg2Seg judges whether seg_k can emit y_i by producing a Bernoulli variable b_ik with the emission probability β_ik as a parameter. Specifically, b_ik is calculated as a dot-product form:β_ik=sigmoid ( 𝐖^tgt→segRep ( y_i-1 )·seg_k^⊤/√(d) ) ,b_ik∼Bernoulli ( β_ik ),where 𝐖^tgt→seg is the learnable projection from target to latent segment space, and d is the input dimension. During emission, if b_ik=1, Seg2Seg generates y_i based on the current received source tokens, otherwise it stops emitting and waits for the next input. Take Figure <ref> as an example, y_3 will not be emitted from seg_1 as b_31=0. After aggregating seg_2, y_3 is emitted from seg_2 as b_32=1.Overall, Seg2Seg alternates between waiting for enough source tokens to aggregate a latent segment (i.e., wait until a_j=1), and outputting the target tokens until the current latent segment can no longer emit any further tokens (i.e., output until b_ik=0). Take the mapping in Figure <ref> for instance, Seg2Seg waits for 2 source tokens and generates 2 target tokens, then waits for 3 and generates 2 tokens, then waits for 1 and generates 1 token. Figure <ref> gives the corresponding mapping in matrix form, where the final matrix indicates whether the model receives x_j when generating y_i. §.§ TrainingDuring training, Seg2Seg tends to learn the aggregation and emission in an adaptive manner. However, a significant challenge arises from the use of Bernoulli variables a_j and b_ik for aggregation and emission, which prevents the back-propagation <cit.> to the aggregation probability α_j and emission probability β_ik. To address this issue, we propose expectation training that employs α_j and β_ik instead of Bernoulli variables to calculate the expected mapping, which can be jointly trained with the underlying Transformer model. As illustrated in Figure <ref>, in expectation training, the source tokens and target tokens are no longer forced to be associated with a single latent segment, but rather can belong to multiple latent segments by probability. For the aggregation process from source tokens to latent segment, we introduce p ( x_j∈seg_k ) to represent the probability that x_j belongs to the latent segment seg_k. Since the aggregation process is monotonic with the streaming source sequence, i.e., which segment x_j belongs to is only related to x_j-1, p ( x_j∈seg_k ) can be calculated via dynamic programming:p ( x_j∈seg_k )=p ( x_j-1∈seg_k-1 )×α_j-1 + p ( x_j-1∈seg_k )× (1- α_j-1 ).We consider all possible latent segments in the expectation training, so k ranges from 1 to J (i.e., aggregate at most J segments with one token in each segment), even if the source tokens may belong to the later latent segment with a small probability, as shown in Figure <ref>. With p ( x_j∈seg_k ), we calculate the expected representation of latent segment by weighting all source tokens:seg_k=𝐖^src→seg∑_j=1^Jp ( x_j∈seg_k )×Rep ( x_j ). For the emission process from latent segment to target tokens, we introduce p ( y_j∈seg_k ) to represent the probability that y_j can be emitted from latent segment seg_k. 
Since the emission process is monotonic with the simultaneous generation, p ( y_j∈seg_k ) can be calculated via dynamic programming: p(y_i∈seg_k)=β_i,k∑_l=1^k ( p(y_i-1∈seg_l)∏_m=l^k-1 ( 1-β_i,m )).We give a detailed introduction to the dynamic programming algorithm in Appendix <ref>.Learning MappingTo adaptively learn α and β, we jointly train p ( x_j∈seg_k ) and p ( y_i∈seg_k ) with Transformer via the cross-entropy loss ℒ_ce. During inference, each target token in Seg2Seg no longer focuses on all source tokens, but can only pay attention to the source token within the same latent segment or the previous segments (i.e., the current received tokens), as shown in Figure <ref>. So in training, we calculate the probability that y_i can pay attention to x_j, denoted as ℳ_ij:ℳ_ij= ∑_kp(y_i∈seg_k)× p(x_j∈{seg_1,⋯,seg_k})=∑_kp(y_i∈seg_k)×∑_l=1^kp(x_j∈seg_l).Then, we multiply the mapping ℳ_ij with the original cross-attention <cit.> and normalize it to get the final attention distribution, which is used to calculate the expected target representation. By jointly training mapping and generation via the cross-entropy loss ℒ_ce, Seg2Seg will assign higher ℳ_ij between those related source and target tokens, thereby learning a reasonable mapping.Learning LatencyBesides learning mapping for high-quality generation, we also introduce a latency loss ℒ_latency to encourage low latency. We utilize two commonly used latency metrics, consecutive wait (CW) <cit.> and average lagging (AL) <cit.>, to calculate the expected latency of Seg2Seg, where CW measures the number of latent segments (i.e., streaming degree <cit.>), and AL measures the lagging of target token (i.e., lagging degree). Therefore, ℒ_latency is calculated as:ℒ_latency=𝒞_CW ( α )+𝒞_AL ( ℳ ), where𝒞_CW ( α )=∑_j=1^|𝐱|α_j- λ| 𝐲|_2+∑MaxPool(α_i, ⌊|𝐱|/λ| 𝐲|⌋)-λ| 𝐲| _2, 𝒞_AL ( ℳ )=1/| 𝐲|∑_i=1^| 𝐲|∑_j=1^|𝐱|ℳ_ij.For the number of latent segments 𝒞_CW ( α ), following <cit.>, we constrain Seg2Seg via the expected segment number ∑_j=1^| 𝐲|α_j and the uniformity of aggregation, where MaxPool(·) is the max polling operation with kernel size of ⌊|𝐱|/λ| 𝐲|⌋. For the expected lagging 𝒞_AL ( ℳ ), we constrain the expected lagging ∑_j=1^| 𝐲|ℳ_ij of target token y_i. λ is a hyperparameter that controls the overall latency of Seg2Seg. A larger λ encourages Seg2Seg to aggregate more latent segments, thereby achieving low latency. When λ→0, the number of latent segments decreases and latency becomes higher, finally degenerating into a sequence-to-sequence framework when λ=0.Overall, the total training objective of Seg2Seg is the trade-off between ℒ_ce for generation quality and ℒ_latency for generation latency, calculated as:ℒ=ℒ_ce+ℒ_latency.§.§ InferenceIn inference, we set a_j=1 when α_j≥0.5 and b_ik=1 when β_ik≥0.5 without sampling <cit.>. Algorithm <ref> illustrates the specific inference process of Seg2Seg. Given a streaming source sequence, Seg2Seg continuously repeats the process of aggregating source tokens into a latent segment (lines 2-6) when a_j=1 and then emitting target tokens from the latent segment (lines 8-12) while b_ik=1, until the generation process is completed. Owing to generating the target sequence in units of segment, it is natural for Seg2Seg to use beam search inside each target segment. 
Therefore, in the following experiments, we set the size of the beam search for each segment to 5.§ EXPERIMENTS§.§ DatasetsWe conduct experiments on the most common benchmarks of three representative simultaneous generation tasks, including streaming ASR, SimulMT and SimulST.Streaming ASRWe apply LibriSpeech[<https://www.openslr.org/12>] benchmark <cit.>, which consists of 960 hours English audio. We use(5.4 hours) and(5.3 hours) as validation sets, and(5.4 hours) and(5.1 hours) as test sets, whereset contains more noisy audio. For speech, we use the raw 16-bit 16kHz mono-channel audio wave. For text, we use SentencePiece <cit.> to generate a unigram vocabulary of size 10000.SimulMTWe apply WMT15[<https://www.statmt.org/wmt15/>] German→English (De→En) benchmark, including 4.5M sentence pairs for training. We useas validation set (3000 pairs), andas test set (2169 pairs). 32K BPE <cit.> is applied and vocabulary is shared across languages.SimulSTWe apply MuST-C[<https://ict.fbk.eu/must-c>] English→German (En→De) (408 hours, 234K pairs) and English → Spanish (En→Es) (504 hours, 270K pairs) benchmarks <cit.>. We useas validation set (1423 pairs for En→De, 1316 pairs for En→Es) andas test set (2641 pairs for En→De, 2502 pairs for En→Es), respectively. The pre-processing is the same as streaming ASR tasks. §.§ Systems SettingsWe conducted experiments on several strong baselines for all three tasks, described as follows.Offline <cit.> model waits for the complete source sequence before generating the target sequence. Offline model is decoded with beam 5.# Streaming Automatic Speech Recognition (Streaming ASR) T-T <cit.> uses Transformer Transducer to determine waiting/generating via alignments from the joiner between the speech encoder and text predictor. Some methods, including ConstAlign <cit.>, FastEmit <cit.> and SelfAlign <cit.> are proposed to further reduce the latency of the Transducer. MoChA <cit.> applies monotonic chunkwise attention to generate the target token based on the speech within a local window. Various training methods, such as DeCoT <cit.>, MinLT <cit.> and CTC <cit.>, are proposed to further constrain the latency of MoChA.# Simultaneous Machine Translation (SimulMT)Wait-k <cit.> first waits for k source tokens, and then alternately generates and waits for one token. Multipath Wait-k <cit.> trains a wait-k model via randomly sampling different k between batches. MoE Wait-k <cit.> applies mixture-of-experts (MoE) to learn multiple wait-k policies during training. Adaptive Wait-k <cit.> trains a set of wait-k models (e.g., from wait-1 to wait-13), and heuristically composites these models based on their outputs during inference.MMA <cit.> applies monotonic multi-head attention and predicts a variable to indicate waiting or generating, which are trained through monotonic attention <cit.>. GMA <cit.> introduces Gaussian multi-head attention and uses a Gaussian prior to learn the alignments via attention. With alignments, GMA decides when to start translating based on the aligned positions. GSiMT <cit.> generates waiting/generating decisions, and considers all possible decisions in training. 
HMT <cit.> proposes Hidden Markov Transformer, which uses an HMM to associate translating moments with the target tokens, thereby learning the optimal translating moments for generating.

# Simultaneous Speech Translation (SimulST) Wait-k and MMA <cit.> for SimulMT can be applied to SimulST by making a fixed pre-decision to split the speech into 280ms durations, where each duration corresponds to one word. Wait-k-Stride-n <cit.> generates n tokens every n×280ms to address the issue of length differences between speech and text. We set n=2 following their best result. SimulSpeech <cit.> divides the speech based on a CTC word detector, and then applies the wait-k policy. SH <cit.> uses the shortest hypothesis in ASR results as the word number, and then applies the wait-k policy. RealTrans <cit.> detects the word number in the streaming speech by counting the blanks in CTC results, and then applies the wait-k-stride-n policy. MMA-CMDR <cit.> incorporates cross-modal decision regularization into MMA, which leverages the transcription of speech to improve the decision of MMA. MoSST <cit.> uses the integrate-and-fire method <cit.> to segment the speech based on the cumulative acoustic information, and then applies the wait-k policy. ITST <cit.> quantifies the transported information from source to target, and subsequently determines whether to generate output based on the accumulated received information. MU-ST <cit.> trains an external segmentation model based on the constructed data to detect the meaning unit, and uses it to decide whether to generate. DiSeg <cit.> jointly learns the speech segmentation with the underlying translation model via the proposed differentiable segmentation in an unsupervised manner.

All implementations are adapted from the Fairseq library <cit.>. In Seg2Seg, we use the standard Transformer-Base (6 encoder and 6 decoder layers) <cit.> for SimulMT. For streaming ASR and SimulST, we replace the word embedding layer in Transformer-Base with a pre-trained Wav2Vec2.0[<dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_small.pt>] <cit.> to extract the acoustic embedding, and the rest remains the same as SimulMT.

Evaluation We use SimulEval[<https://github.com/facebookresearch/SimulEval>] <cit.> to evaluate the quality and latency of simultaneous generation. For streaming ASR, following <cit.>, we use word error rate (WER) for the quality and the mean alignment delay for the latency, which considers the average difference between generating moments and ground-truth alignments. For SimulMT, following <cit.> and <cit.>, we use BLEU <cit.> for the generation quality and average lagging (AL) <cit.> for latency, which measures the average offsets that outputs lag behind inputs (using tokens as the unit of AL). For SimulST, following <cit.>, we use sacreBLEU <cit.> for the generation quality and average lagging (AL) for latency (using milliseconds (ms) as the unit of AL). Refer to Appendix <ref> for the detailed calculation of the latency metrics.

§.§ Main Results

Table: Streaming ASR results.

Systems        WER(↓) clean   WER(↓) other   Latency(↓)
Offline        3.51           8.49           -
T-T            3.40           9.50           610
 +ConstAlign   4.00           11.10          328
 +FastEmit     4.00           10.40          195
 +SelfAlign    4.00           10.70          145
MoChA          4.80           14.20          320
 +CTC          4.00           11.20          240
 +DeCoT        3.90           11.60          240
 +MinLT        4.50           11.70          320
Seg2Seg        3.55           8.73           324

Results on Streaming ASR Table <ref> reports the results on streaming ASR, where Seg2Seg exhibits a better trade-off between generation quality and latency.
Compared with T-T using the transducer to align speech and text token-to-token <cit.>, or MoChA generating text based on local speech <cit.>, Seg2Seg adaptively learns the source-target mapping through latent segments, thereby performing better.

Results on SimulMT Consistent with the previous methods <cit.>, we adjust the value of λ (refer to Eq.(<ref>)) to show the performance of Seg2Seg under varying latency. Figure <ref> shows that Seg2Seg outperforms previous SimulMT methods at all latency levels. Compared to methods based on pre-defined rules, such as wait-k and MoE wait-k, Seg2Seg is more flexible in making generating/waiting decisions, achieving significant advantages. Other methods, such as MMA, GMA and HMT, align the target and source token-to-token. However, since the alignment between the two languages may not be one-to-one <cit.>, some local reordering and multi-word structures can affect the performance <cit.>. By mapping source to target at the segment level, Seg2Seg is more in line with the simultaneous generation process and mitigates these issues, ultimately achieving state-of-the-art performance.

Results on SimulST For the most challenging SimulST in Figure <ref>, Seg2Seg achieves state-of-the-art performance, especially at low latency. Most of the previous SimulST methods either segment the speech into fixed lengths <cit.> or detect the number of words in the speech <cit.> and then apply a wait-k policy, where both the non-differentiable word detection and the wait-k policy hinder the adaptive learning of the model. Owing to the proposed expectation training, Seg2Seg is completely differentiable and able to jointly learn the mapping from the source to the latent segment and from the latent segment to the target, thereby finding the optimal moments for generating.

§.§ Superiority of Unified Framework on Multi-task Learning

In the sequence-to-sequence framework, multi-task learning composed of ASR, MT and ST is shown to improve the performance on difficult tasks (e.g. speech translation) by sharing knowledge among different tasks <cit.>. However, in previous simultaneous generation methods, different tasks often involve different architectures and heuristics, leaving no room for multi-task learning. Since it does not involve any task-related heuristics, the proposed unified segment-to-segment framework makes it possible to apply multi-task learning in simultaneous generation. In Seg2Seg, multi-task learning can include streaming ASR, SimulMT and SimulST, and these three tasks share all parameters, except that SimulMT has a text embedding and streaming ASR/SimulST have a shared speech embedding.

[Figure: SimulST results on MuST-C En→De with multi-task learning.]

Figure <ref> demonstrates the improvements brought by multi-task learning on the most challenging SimulST task. By employing multi-task learning in a unified framework, Seg2Seg can achieve further improvements. Specifically, jointly training with streaming ASR yields more discernible improvements, which is mainly because the monotonic properties between speech and text inherent in streaming ASR assist SimulST in learning the source-target mapping <cit.>. Therefore, the unified Seg2Seg facilitates the sharing of knowledge among various simultaneous tasks through multi-task learning and is helpful for difficult tasks such as SimulST.

§ ANALYSIS

We conducted extensive analyses to investigate the specific improvements of Seg2Seg.
Unless otherwise specified, all the results are reported on SimulST with MuST-C En→De test set, which is more difficult simultaneous generation task. Refer to Appendix <ref> for more extended analyses. §.§ Improvements of Adaptive Learningr0.6 +1mm[w/o adaptive aggregation.]< g r a p h i c s >[w/o adaptive emission.]< g r a p h i c s > Improvements brought by adaptive learning. Seg2Seg learns the mappings from source to segment and from segment to target in an adaptive manner, without any task-specific assumptions. To verify the effect of adaptive learning, we respectively replace the source-to-segment and segment-to-target mappings with heuristic rules, such as fixed-length segment (i.e., fixed-seg) <cit.> and wait-k/wait-k-stride-n policy (i.e., fixed-emit) <cit.>, and show the SimulST En→De results in Figure <ref>. The results show that adaptive learning significantly outperforms heuristic rules. Compared with dividing the source into fixed lengths of 200/280/360ms, Seg2Seg can adaptively determine whether the received source token can be aggregated into a latent segment, bringing about 1 BLEU improvement. Compared with the rule-based wait-k policy, Seg2Seg judges whether to emit the target token based on the latent segment, thus finding more reasonable generating moments <cit.>. §.§ Quality of Aggregation and EmissionSeg2Seg learns aggregation and emission adaptively, so we further explore the quality of aggregation and emission, respectively. We apply streaming ASR and SimulMT tasks for evaluation. The detailed calculation of the metrics for aggregation and emission quality are shown in Appendix <ref>. r6.6cm Segmentation accuracy of Seg2Seg.+1mm Systems P(↑) R(↑) R-value(↑) ES K-Means <cit.>30.7 18.039.7 BES GMM <cit.> 31.7 13.837.9 VQ-CPC <cit.>18.2 54.1-86.5VQ-VAE <cit.> 16.4 56.8-126.5 DSegKNN <cit.>30.9 32.040.7Fixed(280ms)28.1 16.338.4 Seg2Seg41.1 18.141.2Aggregation QualityTo verify whether Seg2Seg can aggregate the source tokens into a latent segment at the appropriate moments in streaming ASR, following <cit.>, we conduct experiments on the Buckeye dataset[<https://buckeyecorpus.osu.edu>] <cit.>, which is a speech segmentation benchmark with the annotated word boundaries. Table <ref> shows the segmentation quality of Seg2Seg with some segmentation baselines, and the metrics include precision (P), recall (R) and R-value (comprehensive score) <cit.>. Seg2Seg achieves better segmentation precision and higher comprehensive score R-value, showing that Seg2Seg can perform aggregation and segment the speech at reasonable moments (i.e., token boundaries instead of breaking the continuous speech of a word <cit.>).r0.34 +1mm< g r a p h i c s >Emission accuracy of Seg2Seg in SimulMT. Emission QualityTo verify whether the model emits at reasonable moments, we follow <cit.> to evaluate the emission quality in SimulMT based on alignments. We apply RWTH[<https://www-i6.informatik.rwth-aachen.de/goldAlignment/>] De→En alignment dataset, and calculated the proportion of the model emitting the target token after receiving its aligned source token, used as the emission accuracy. Figure <ref> shows that Seg2Seg can receive more aligned source tokens before emitting under the same latency, meaning that Seg2Seg finds more favorable moments for generating and achieves high-quality generation. 
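As a concrete illustration of the aggregation-quality evaluation above, the following is a rough sketch of how boundary precision and recall against gold word boundaries might be computed; the 20 ms matching tolerance and the helper names are our assumptions, not the exact evaluation script.

```python
# Sketch: match predicted segment boundaries (from aggregation decisions) to gold word
# boundaries within a tolerance window, then compute precision and recall.

def boundary_precision_recall(pred_bounds, gold_bounds, tol=0.02):
    """pred_bounds / gold_bounds: sorted lists of boundary times in seconds."""
    def count_hits(candidates, references):
        used = set()
        hits = 0
        for b in candidates:
            # greedily match each candidate to the closest unused reference within tol
            best, best_dist = None, tol
            for i, r in enumerate(references):
                if i in used:
                    continue
                d = abs(b - r)
                if d <= best_dist:
                    best, best_dist = i, d
            if best is not None:
                used.add(best)
                hits += 1
        return hits

    precision = count_hits(pred_bounds, gold_bounds) / max(len(pred_bounds), 1)
    recall = count_hits(gold_bounds, pred_bounds) / max(len(gold_bounds), 1)
    return precision, recall
```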
§.§ Effect of Latent Segments

Table: Representational similarity with the latent segment.

                      Similarity
source ⇔ target       0.53 %
source ⇔ segment      20.01 %
segment ⇔ target      14.66 %

To investigate whether the latent segments can act as a bridge between the source and target, we calculate the representations of the source tokens (∑_x_j∈seg_k x_j), target tokens (∑_y_i∈seg_k y_i), and latent segment seg_k within each segment during SimulST on En→De. We then apply the t-SNE dimensionality reduction algorithm to project these representations into a 2D space. By doing so, we obtain a bivariate kernel density estimation of the representation distribution of the source segment, target segment and latent segments, depicted in Figure <ref>. The visualization clearly demonstrates that the latent segment is located between the source and target sequences in the representation space, effectively serving as a bridge connecting the source and target.

[Figure: Bivariate kernel density estimation visualization of the representations of source, target and latent segment.]

Furthermore, we calculate the cosine similarity between the representations of the source, target and latent segments, as shown in Table <ref>. It is evident that the similarity between the source and target representations is low, posing a challenge for the model to directly map the source to the target. Conversely, the similarity between the latent segment and the source, as well as the latent segment and the target, is significantly higher. Hence, by introducing the latent segment as a pivot, the model can more easily learn the mapping from the source to the latent segment, and subsequently from the latent segment to the target, thereby finding the optimal moments for generating and achieving better performance.

§ CONCLUSION

In this paper, we propose a unified segment-to-segment framework for simultaneous sequence generation, which bridges the source and target sequences using latent segments as pivots. The unified Seg2Seg enables the handling of multiple simultaneous generation tasks and facilitates multi-task learning. Experiments and analyses show the superiority of Seg2Seg in performance and generalization.

§ LIMITATIONS

The proposed Seg2Seg employs the encoder-decoder architecture as its backbone, and exhibits better generality across multiple simultaneous generation tasks. In addition to its primary application on generation tasks, the encoder (aggregation process) or decoder (emission process) of Seg2Seg can also be separately used for some real-time tasks based on encoder-only or decoder-only architectures, such as streaming tagging and online parsing. We leave this for further exploration in future work.

§ ACKNOWLEDGEMENTS

We thank all the anonymous reviewers for their insightful and valuable comments.

§ DYNAMIC PROGRAMMING OF MAPPING

In Sec.<ref>, we propose expectation training for Seg2Seg to learn when to aggregate and emit through p ( x_j∈seg_k ) and p ( y_i∈seg_k ). Given the monotonic property of simultaneous sequence generation tasks, we can calculate p ( x_j∈seg_k ) and p ( y_i∈seg_k ) using a dynamic programming algorithm. In the following sections, we will provide a detailed explanation of the dynamic programming approach.

§.§ Source Tokens to Latent Segment

Given the streaming source sequence, Seg2Seg predicts the aggregation probability α_j for x_j to represent the probability of aggregating the received source tokens into a latent segment at x_j.
Given the aggregation probability α_j, we calculate p ( x_j∈seg_k ) via dynamic programming. The whole aggregation process is monotonic with the inputs, which means that the (k+1)^th latent segment can only be aggregated once the k^th segment has already been aggregated. Additionally, each latent segment must be aggregated from at least one source token, otherwise a latent segment without representation would be meaningless. As a result, whether x_j belongs to latent segment seg_k depends on which segment x_j-1 is located in, consisting of 3 situations:

* If x_j-1∈seg_k-1: As illustrated by the red line in Figure <ref>, x_j belongs to latent segment seg_k when Seg2Seg aggregates at x_j-1, with probability α_j-1;
* If x_j-1∈seg_k: As illustrated by the blue line in Figure <ref>, x_j belongs to latent segment seg_k (i.e., the same latent segment as x_j-1) when Seg2Seg does not aggregate at x_j-1, with probability 1-α_j-1;
* Otherwise: x_j cannot belong to seg_k anyway, i.e., with probability 0.

By combining these situations, p ( x_j∈seg_k ) is calculated as: p ( x_j∈seg_k )=p ( x_j-1∈seg_k-1 )×α_j-1 + p ( x_j-1∈seg_k )× (1- α_j-1 ), where the initialization is p ( x_1∈seg_k )= 1 if k=1, 0 if k≠1, because the first source token inevitably belongs to the first segment. With the above dynamic programming algorithm, we can calculate p ( x_j∈seg_k ), for j=1,⋯,J and k=1,⋯,J.

§.§ Latent Segment to Target Tokens

After getting the latent segments, Seg2Seg predicts the emission probability β_i,k, which indicates the probability of emitting the target token y_i from the latent segment seg_k. With the emission probability β_i,k, we compute p ( y_i∈seg_k ) using dynamic programming as well. Note that there is one difference between the aggregation process and the emission process when employing dynamic programming. In the aggregation process, each latent segment must be aggregated from at least one source token. However, in the emission process, the latent segment has the option to not generate any target token, as not all source tokens have corresponding target tokens. Whether y_i belongs to latent segment seg_k depends on which segment y_i-1 is emitted from, consisting of 3 situations:

* If y_i-1∈seg_k: y_i is emitted from latent segment seg_k with probability β_i,k;
* If y_i-1∈seg_l for l=1,⋯,k-1: y_i is emitted from latent segment seg_k when y_i is not emitted from seg_l to seg_k-1, and is then emitted from seg_k. Taking Figure <ref> as an example, if y_2∈seg_3, the premise of y_3∈seg_5 is that y_3 is not emitted from seg_3 and seg_4, and is emitted from seg_5. Formally, the probability is calculated as: β_i,k×∏_m=l^k-1 ( 1-β_i,m ), where ∏_m=l^k-1 ( 1-β_i,m ) is the probability that y_i is not emitted from seg_l to seg_k-1;
* If y_i-1∈seg_l for l=k+1,⋯: y_i cannot be emitted from seg_k anyway, as the emission process is monotonic, i.e., with probability 0.

By combining these situations, p(y_i∈seg_k) is calculated as: p(y_i∈seg_k)=β_i,k∑_l=1^k ( p(y_i-1∈seg_l)∏_m=l^k-1 ( 1-β_i,m )), where the initialization is p(y_1∈seg_k)=β_1,k∏_m=1^k-1 ( 1-β_1,m ). With the above dynamic programming algorithm, we can calculate p ( y_i∈seg_k ), for i=1,⋯,I and k=1,⋯,J.
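The two recursions above can be written compactly. The following NumPy sketch (array shapes and names are ours) computes p(x_j∈seg_k), p(y_i∈seg_k) and the attention mapping ℳ_ij used in expectation training; the O(I·J^2) loops are kept explicit for readability, whereas an actual implementation would batch and vectorize them.

```python
import numpy as np

def aggregation_dp(alpha):
    """p[j, k] = p(x_{j+1} in seg_{k+1}) given aggregation probabilities alpha[j] (0-indexed)."""
    J = len(alpha)
    p = np.zeros((J, J))
    p[0, 0] = 1.0                                   # the first token always lies in the first segment
    for j in range(1, J):
        for k in range(J):
            stay = p[j - 1, k] * (1.0 - alpha[j - 1])                 # x_{j-1} did not close its segment
            move = p[j - 1, k - 1] * alpha[j - 1] if k > 0 else 0.0   # x_{j-1} closed segment k-1
            p[j, k] = stay + move
    return p

def emission_dp(beta):
    """q[i, k] = p(y_{i+1} in seg_{k+1}) given emission probabilities beta[i, k]."""
    I, J = beta.shape
    q = np.zeros((I, J))
    for k in range(J):                               # initialization for the first target token
        q[0, k] = beta[0, k] * np.prod(1.0 - beta[0, :k])
    for i in range(1, I):
        for k in range(J):
            total = 0.0
            for l in range(k + 1):                   # y_{i-1} was emitted from some segment l <= k
                total += q[i - 1, l] * np.prod(1.0 - beta[i, l:k])
            q[i, k] = beta[i, k] * total
    return q

def expected_mapping(p_x, q_y):
    """M[i, j] = sum_k q_y[i, k] * sum_{l<=k} p_x[j, l]: probability that y_i may attend to x_j."""
    cum_px = np.cumsum(p_x, axis=1)                  # sum_{l<=k} p(x_j in seg_l)
    return q_y @ cum_px.T
```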
§ EXTENDED ANALYSES

§.§ Detailed Calculation of Aggregation and Emission Quality

In Sec.<ref>, we evaluate the aggregation and emission quality of Seg2Seg. Here, we give the detailed calculations of aggregation and emission quality.

Aggregation Quality We verify whether Seg2Seg can aggregate (segment) the source speech sequence at the appropriate moments on the speech segmentation task <cit.>. For our evaluation, we utilize the Buckeye dataset, where the ground-truth segmentation is based on word units. Evaluation metrics consist of precision (P), recall (R), and the comprehensive score R-value. Note that the ground-truth segmentation in Buckeye is in units of words, whereas the aggregation of Seg2Seg is in units of segments, so the number of segments produced by Seg2Seg may be smaller than the ground-truth number; precision therefore better reflects whether the aggregation moments of Seg2Seg are reasonable. R-value <cit.> is a more robust comprehensive metric for the speech segmentation task, calculated as: R-value= 1-(|r_1|+|r_2|)/2, where r_1= √( ( 1-R)^2+(R/P-1)^2), r_2= (-(R/P-1)+R-1)/√(2). A larger R-value indicates better segmentation quality, where R-value=100% if and only if P=100% and R=100%.

Emission Quality We verify whether Seg2Seg can emit the target token at appropriate moments in SimulMT. In simultaneous machine translation, it is crucial for the model to emit the corresponding target token after receiving its aligned source token <cit.>, so the alignments can be used as the basis for judging whether the emitting moments are reasonable. Following <cit.>, <cit.> and <cit.>, we calculate the proportion of the ground-truth aligned source tokens received before emitting as the emission quality. We apply the RWTH[<https://www-i6.informatik.rwth-aachen.de/goldAlignment/>] De→En alignment dataset, denote the ground-truth aligned source position of y_i as a_i, and use t_i to record the emitting moment of y_i. Then, the emission quality is calculated as: Score= 1/ | 𝐲 | ∑_i=1^ | 𝐲 | 1_a_i≤ t_i, where 1_a_i≤ t_i=1 if a_i≤ t_i, and 0 if a_i> t_i.
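For reference, both metrics can be computed directly from these definitions; the helpers below are a small sketch with our own variable names (P is assumed to be non-zero).

```python
import numpy as np

def r_value(P, R):
    """R-value from precision P and recall R (both in [0, 1]), following the formula above."""
    os = R / P - 1.0                        # over-segmentation term R/P - 1
    r1 = np.sqrt((1.0 - R) ** 2 + os ** 2)
    r2 = (-os + R - 1.0) / np.sqrt(2.0)
    return 1.0 - (abs(r1) + abs(r2)) / 2.0

def emission_accuracy(aligned_pos, emit_time):
    """Score = fraction of target tokens emitted no earlier than their aligned source position a_i."""
    hits = sum(1 for a, t in zip(aligned_pos, emit_time) if a <= t)
    return hits / len(aligned_pos)
```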
§.§ Case StudyFigure <ref>, <ref> and <ref> visualize the simultaneous generation process of Seg2Seg on the cases from streaming ASR, SimulMT and SimulST. In Streaming ASR and SimulST, for a clear illustration, we use an external offline speech-text alignment tool[<https://pytorch.org/audio/main/tutorials/forced_alignment_tutorial.html>] to align the transcription with the speech sequence, and the aligned transcription is displayed above the speech waveform.Case of Streaming ASRAs shown in Figure <ref>, the target sequence and source sequence in streaming ASR often exhibit a predominantly monotonic relationship, and Seg2Seg can handle this monotonic mapping well. Seg2Seg can segment and aggregate source sequences at speech boundaries, and then emit the corresponding target sequences accurately.Case of SimulMTFigure <ref> shows a case on SimulMT. Seg2Seg can generate the target token after receiving the corresponding source tokens, e.g., generating `Lot@@ to' after receiving `Lot@@ to', generating `player' after receiving `spieler' and generating `Har@@ vey' after receiving `Har@@ vey', which effectively ensures the generation quality of SimulMT. Besides, for some related target tokens, especially the subwords after the bpe operation, Seg2Seg can emit them together from the same latent segment, thereby achieving lower latency.Case of SimulSTFigure <ref> shows a case on SimulST, which is more challenging as the source and target sequences involve different modalities and languages. Despite these evident differences, Seg2Seg demonstrates its capability to find the reasonable generating moments, such as generating `herauszufinden' after receiving `figure out' in the speech, and generating `Bedeutung' after receiving `meaning' in the speech. This is mainly attributed to expectation training, which explores all possible mappings in training, allowing Seg2Seg to learn to aggregate and emit at reasonable moments. As seen, Seg2Seg aggregates the related speech frame into the same latent segment and will not break the acoustic integrity of the speech. For emission, Seg2Seg can accurately determine whether a latent segment can emit the target token, where almost all emitted target outputs are correspond to the source speech contained in the latent segment.§ LATENCY METRICS For the latency evaluation of simultaneous generation task, we use mean alignment delay for streaming ASR and average lagging for SimulMT and SimulST.Mean Alignment Delay <cit.> is defined as the average word time difference between the ground-truth alignments (speech and transcription) and generating moments:D_mean =1/|𝐲|∑_i=1^|𝐲|(t̂_i-t_i),where t_i is the ground-truth alignment of y_i, and t̂_i is the generating moment of y_i.Average Lagging (AL) <cit.> evaluates the average number of tokens (for SimulMT) or speech duration (for SimulST) that target outputs lag behind the source inputs. We use t_i to denote the generating moments of y_i, and AL is calculated as:AL= 1/τ∑_i=1^τt_i-i-1/ | 𝐲 |/ | 𝐱 |,whereτ = iargmin ( t_i=| 𝐱 | ).In addition to average lagging, we also use some other latency metrics for SimulMT and SimulST, described as follow.Consecutive Wait (CW) <cit.> evaluates the average number of source tokens waited between two target tokens, i.e., the number of segments:CW= | 𝐱 |/∑_i=1^ | 𝐲 |1_t_i-t_i-1>0,where 1_t_i-t_i-1>0 counts the number of t_i-t_i-1>0, i.e., the number of segments. 
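A small sketch of how AL and CW might be computed from recorded emission moments t_i is given below; this is illustrative, not the SimulEval implementation, and emit_times[i] is assumed to be the amount of source read (tokens for text, or milliseconds for speech) when the i-th target token is emitted.

```python
def average_lagging(emit_times, src_len, tgt_len):
    """AL over the prefix up to the first target token emitted after the full source is read."""
    gamma = tgt_len / src_len
    tau = next((i + 1 for i, t in enumerate(emit_times) if t >= src_len), len(emit_times))
    return sum(emit_times[i] - i / gamma for i in range(tau)) / tau

def consecutive_wait(emit_times, src_len):
    """CW = source length divided by the number of strictly increasing emission moments (segments)."""
    prev = 0
    num_segments = 0
    for t in emit_times:
        if t - prev > 0:
            num_segments += 1
        prev = t
    return src_len / max(num_segments, 1)
```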
It is worth mentioning that the latency loss ℒ_latency in training employs the denominator part of the CW metric, as the numerator is a constant.

Average Proportion (AP) <cit.> evaluates the proportion between the number of received source tokens and the total number of source tokens, calculated as: AP=1/ ( | 𝐱 || 𝐲 | ) ∑_i=1^ | 𝐲 | t_i.

Differentiable Average Lagging (DAL) <cit.> is a differentiable version of average lagging, calculated as: DAL= 1/ | 𝐲 | ∑_i=1^ | 𝐲 | ( t^'_i- (i-1)/ ( | 𝐲 |/ | 𝐱 | ) ), where t^'_i = t_i if i=1, and t^'_i = max ( t_i, t^'_i-1+| 𝐱 |/ | 𝐲 | ) if i>1.

§ NUMERICAL RESULTS

Table <ref>, <ref> and <ref> report the numerical results of Seg2Seg on SimulMT and SimulST, where λ is the hyperparameter that controls the overall latency of Seg2Seg (refer to Eq.(<ref>)). | http://arxiv.org/abs/2310.17940v4 | {
"authors": [
"Shaolei Zhang",
"Yang Feng"
],
"categories": [
"cs.CL",
"cs.AI",
"cs.SD",
"eess.AS"
],
"primary_category": "cs.CL",
"published": "20231027073451",
"title": "Unified Segment-to-Segment Framework for Simultaneous Sequence Generation"
} |
APS/123-QED
This work is dedicated to the memory of Prof. Roman Jackiw and Prof. Trilochan
[email protected]
Centre for Theoretical Physics and Natural Philosophy, Mahidol University, Nakhonsawan Campus, Phayuha Khiri, Nakhonsawan 60130, Thailand
https://na.mahidol.ac.th/nas2020/kumar-abhinav-page/

An Abelian 2-form theory that maintains gauge invariance despite having a mass is considered in 3+1 dimensions. Though this mass owes to a non-local term, the corresponding classical equations of motion are completely local and, subjected to proper gauge fixing, yield 3 massive degrees of freedom. Consistent Dirac brackets could be constructed for this system, in agreement with the quantization subsequently obtained under the non-covariant gauge condition. The covariant gauge quantization reproduced the same spectrum following the dissociation of a spurious massless mode from the physical vector space. Moreover, in a path-integral treatment of the theory the ghosts decouple and proper BRST transformations are obtained. This 2-form field mediates a screened interaction of Yukawa type that mimics the Meissner effect when topologically coupled to a fermion current.

A Gauge-Invariant Massive 2-form Model and its Quantization
Kumar Abhinav
January 14, 2024
===========================================================

§ INTRODUCTION

The interplay between gauge invariance and mass is very fundamental to physics. In most cases, the absence of mass is synonymous with gauge invariance <cit.>, although it has long been established that mass and gauge invariance can coexist when the gauge field interacts with conserved currents <cit.>. The observation by Proca <cit.> that a direct introduction of mass reduces gauge invariance to a dynamical constraint led to the Stückelberg model <cit.>, wherein the presence of mass is compensated by an extension of the gauge invariance over multiple fields.

*As for the generation of mass, spontaneous symmetry breaking <cit.> of an interacting scalar field has been the most established gauge-invariant mechanism <cit.>, as per the standard model, following the discovery of the Higgs particle <cit.>. Alternatively, the gauge mass can also be generated through self-interaction of the gauge field A_μ in 2+1 dimensions <cit.> due to the topological Chern-Simons term <cit.>. Therein, the self-current term ϵ^μνρF_νρ is conserved by construction as the field strength F_μν=∂_μ A_ν-∂_ν A_μ is anti-symmetric, which is due to the gauge invariance itself. Such topological mass generation finds widespread application in various planar condensed matter systems including the Hall effect and topological insulators <cit.>. Subsequent extension of this topological mechanism of mass generation to 3+1 dimensions <cit.> requires the 1-form A_μ to interact with a 2-form B_μν through the B∧ F term <cit.> instead of self-interaction. The B∧ F theory is both unitary and renormalizable and generates gauge mass without spontaneous symmetry breaking <cit.>.

*The question remains whether it is possible to construct a pure gauge theory, i.e. one containing a single type of field, in 3+1 dimensions which is massive. Such a model was initially proposed by considering massive dispersion of the dual 1-form field strength ^*F_μν=1/2ϵ^μναβF_μν and was subsequently quantized <cit.>. In the Lorentz gauge ∂_μ A^μ=0 this model reduces to the Proca theory. At a more fundamental level, a gauge-invariant mass term was directly introduced to the free 1-form Lagrangian as <cit.>,L_A=-1/4F_μν(1+m^2/∂^2)F^μν.
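As an illustrative step (not part of the original derivation), one may check, integrating by parts and up to total derivatives, that -1/4F_μν(m^2/∂^2)F^μν= m^2/2 A_μ(η^μν-∂^μ∂^ν/∂^2)A_ν, which reduces to m^2/2 A_μ A^μ once the Lorentz gauge ∂· A=0 is imposed.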
The apparent non-locality due to the inverse d'Alambertian ∂^-2, responsible for a mass term m^2/2A_μ A^μ, does not hamper locality and causality of the dynamics. Upon quantization, this theory possesses a unitary vector space with a non-negative norm following suitable gauge fixing <cit.>. *Whether such a non-local mass insertion will work beyond the 1-form is a question worth asking considering the general possibility of massive gauge-invariant dynamics without external interactions. In this work, the answer is found to be affirmative in the case of 2-form fields as a corresponding non-local Lagrangian of the form,L_B=1/12H_μνρ(1+M^2/∂^2)H^μνρ, H_μνρ=∂_μ B_νρ+∂_ν B_ρμ+∂_ρ B_μν, serves this purpose. The corresponding field strength H_μνρ is invariant under the gauge transformation B_μν→ B_μν+∂_μΛ_ν-∂_νΛ_μ, wherein the gauge parameter further embodies a redundancy of its own: Λ_μ→Λ_μ+∂_μσ. The 2-form enjoys widespread utility across multiple areas of physics. Introduced by Kalb and Ramond for explaining classical string dynamics <cit.>, this tensor field explains topological defects in superstrings <cit.>, cosmic string interactions <cit.>, Abelian superfluid vortices <cit.> pertaining to type-II superconductivity <cit.> and axionic charge allocation to black holes <cit.>. Few recent developments in cosmology also employ 2-form models that explain anisotropic inflation <cit.>, primordial gravitational waves with dynamical dark energy <cit.> and inflatory physics <cit.>. A gauge-invariant mass term, therefore, can have a considerably widespread application in physics by providing a mass scale to the corresponding interaction mediated by the 2-form.*Despite the non-locality, in this work, a consistent Dirac bracket construction is obtained for this massive 2-form model. This construction further complements the canonical quantization of the theory, obtained under both covariant and non-covariant gauges. The physical mode invariably corresponds to three massive degrees of freedom, two transverse and one longitudinal, which gains further support from a path-integral treatment. As expected of a massive gauge field <cit.> the 2-form is found to mediate a Yukawa-type interaction when suitably coupled to charged fermions. Thus the present theory may also model resistance-less transport, as was observed in the case of the B∧ F model <cit.>. *The present work is organized as follows. Section <ref> reviews the 2-form formulation and the non-local 1-form model of Ref. <cit.>, with possible origins of such dynamics. The classical treatment of the non-local 2-form model is carried out in Section <ref>. The correct degrees of freedom are obtained from multiple approaches, consistent with a subsequent Dirac bracket analysis that identifies them despite the non-locality. Section <ref> contains covariant canonical quantization of the system in both covariant and non-covariant gauges, followed by a path-integral treatment. Conclusions and discussions regarding possible further aspects make up Section <ref>.§ MASSIVE GAUGE DYNAMICS IN 3+1 DIMENSIONSThe free Kalb-Ramond 2-form field is described by the Lagrangian, L^ free_ B=1/12H_μνρH^μνρ, which is invariant under the gauge transformation of B_μν. This totally anti-symmetric field strength satisfies the Bianchi identity, ∂_μ H_αβγ -∂_α H_βγμ+∂_β H_γμα-∂_γ H_μαβ=0,by construction. The equation of motion for this system reads as, ∂_μ H^μνρ=0. Owing to the gauge redundancy, the free 2-form field contains only one dynamical degree of freedom <cit.>. 
The simplest way to see this is by re-expressing the equation of motion in terms of the dual field-strength ^*H^μ=1/6ϵ^μνρσH_νρσ as, ϵ^μνρσ∂_ρ^*H_σ=0. Then the obvious identification ^*H_μ=∂_μφ reduces L^ free_ B to the free Klein-Gordon action for φ.*Clearly, introducing a mass term of the usual form B_μνB^μν to the free 2-form Lagrangian would destroy the gauge invariance of the theory. Instead, by virtue of interaction with a 1-form, a gauge-invariant 2-form mass appeared in the topological B∧ F model <cit.>,L_ AB=-1/4F_μνF^μν+1/12H_μνρH^μνρ+g/2ϵ^μνρσB_μνF_ρσ. The interaction term above is an exterior product g/2B∧ F. It is topological in the sense that it does not depend on the metric and therefore does not contribute to the stress-energy tensor. This term is independently gauge-invariant for both 1- and 2-forms modulo total derivatives, leaving the action invariant. The respective dynamics follow from the equations of motion, ∂_μ F^μν=-2g^*H^ν and∂_α H^αμν=2g^* F^μν. These equations can be decoupled into two copies of the massive dispersion, (∂^2+4g^2) F=0,where F stands for both the field strengths F_μν and H_μνρ.It was shown in Ref. <cit.> that this theory has three massive degrees of freedom and thus the B∧ F model represents a massive spin-1 field. Also, this mass generation is distinct from the Higgs mechanism as there is no symmetry breaking associated, and the theory is renormalizable. *Both Higgs and B∧ F models, in addition to the well-known Stückelberg mechanism, relied on interactions for generating gauge-invariant mass. The Higgs mechanism already enjoys physical validation following the discovery of the Higgs particle. As for the B∧ F term, it was shown to appear via radiative induction from the interaction of the gauge fields with massive fermions <cit.> and long-range spin currents <cit.>. In that sense, the topologically massive gauge theory in 3+1 dimensions serves as an effective description, similar to the radiatively induced Chern-Simons mass in 2+1 dimensions <cit.>. Therefore it makes sense to expect the massive 1- and 2-form models of Eq.s <ref> and <ref> can be obtained as effective descriptions. Indeed, the non-local 1-form model is obtained from the Stückelberg theory,L_ ASt = -1/4F_μνF^μν+m^2/2A_μ A^μ +A·∂ D+1/2m^2∂ D·∂ D. by integrating out the Stückelberg scalar 1/mD. It is to be noted that the Stückelberg theory forms a part of the Higgs Lagrangian after the spontaneous symmetry breaking is implemented <cit.>. Integrating out the 2-form field in the B∧ F theory also leads to the same result. In 2+1 dimensions, interaction with a confined pseudo-vector fermion current in a graphitic system can yield the same non-local 1-form mass <cit.>. In 1+1 dimensions, when the fermion field is integrated out in the Schwinger model <cit.> and in the Ployakov's model of gravity <cit.>, the same effective description emerges.*On the other hand, if the 1-form is integrated out instead of the 2-form, the topological B∧ F Lagrangian reduces to the 2-form Lagrangian L_ B. Independently, the same non-local 2-form model appears as the effective description of a tensorial Stückelberg generalization,L_ BSt = 1/12H_μνρH^μνρ-g^2B_μνB^μν +1/2B_μν∂^[μD^ν]-1/16g^2∂_[μD_ν]∂^[μD^ν]. The non-locality (isolated from the 2-form mass) is now compensated by the `Stückelberg 1-form' 1/2gD_μ. This model serves as a non-topological origin of the 2-form system L_ B. 
Thus, both 1- and 2-form non-local models effectively describe multiple known models each, representing the long-wavelength behavior of the parent theory. Following Schwinger's general result <cit.>, this makes sense that a gauge-invariant mass should come through interactions <cit.>. Therefore, a pure massive gauge theory should be an effective description <cit.>. Effective models have wide-spread use in multiple areas of physics <cit.> and especially in condensed matter <cit.> including superconductivity <cit.>.*It makes sense to expect that these non-local gauge models will display unitarity and local dynamics as their underlying theories do <cit.>. The massive 1-form sector has already been shown to possess these properties <cit.>. The present work analyzes the dynamics of and subsequently quantizes the non-local 2-form sector. The quantized version of the massless 2-form has long been known <cit.>, so is that of the B∧ F model <cit.>. In the latter case, the mass-endowing topological B∧ F term allows the 1- and 2-form sectors to retain the respective two transverse and single longitudinal degrees of freedom from the free scenario despite their equations of motion being coupled, yielding a massive spin-1 scenario as a whole. Alternatively, the effective non-local 1- or 2-form descriptions are also expected to carry three massive degrees of freedom each. This is because the gauge-dependent part of the 1- and 2-form free propagators, Δ_μν = 1/∂^2[η_μν-(1-ξ)∂_μ∂_ν/∂^2]and Δ_μν,αβ = -1/∂^2[η_μ[αη_β]ν-(1-ζ)η_[μ[α∂_β]∂_ν]/∂^2],do not contribute to the effective description owing to the absolute anti-symmetry of the field tensors and thus no gauge-fixing is required in the process of constructing the effective theories. *The expected spin-1 massive dynamics of the non-local 2-form model should be relatively straightforward to verify with suitable quantum brackets since there are no interactions with external fields. We shall see that the non-local mass term does not obstruct the physical outcomes of this theory. It serves to represent the nontrivial dynamics entirely, which is otherwise attributed to interactions. This is directly demonstrated by shifting the gauge fields as either, A_μ→ A_μ-2g/∂^2^*H_μ, B_μν→ B_μν,or A_μ→ A_μ,B_μν→ B_μν+2g/∂^2^*F_μν.Such shifts respectively transform the B∧ F Lagrangian to either,L_ AB→ -1/4F_μνF^μν+1/12H_μνρ(1+4g^2/∂^2)H^μνρorL_ AB→-1/4F_μν(1+4g^2/∂^2)F^μν+1/12H_μνρH^μνρ. The above decoupling of 1- and 2-form sectors directly shows that the non-local term entirely represents the topological interaction. This is analogous to themixed Chern-Simons case in 2+1 dimensions, L_aA=-1/4F_μνF^μν-1/4f_μνf^μν+κϵ^μνρA_μ∂_ν a_ρ. with upper and lower case tensors representing field strengths of respective one-forms. Similar re-definitions of fields decouple the two 1-form sectors as,L_ aA→-1/4F_1(1-κ^2/∂^2)F_1-1/4F_2F_2, with F_1,2 standing for either of the field strengths. However, the gauge-invariant mass term here has the wrong sign which is unique to the 2+1 dimensions. In the 2-form case in 3+1 dimensions, the equations of motion display local dynamics and it is possible to construct both Dirac and quantum brackets consistently. The path-integral treatment, however, requires the non-locality to be compensated by an auxiliary 1-form as in Eq. <ref>.§.§ Quantized non-local 1-form theoryBefore taking up the non-local 2-form model, let us summarize the canonical quantization of the non-local massive 1-form model obtained previously <cit.>. 
The non-local 1-form model is governed by the effective Lagrangian, L_A=-1/4(1+m^2/∂^2)F_μνF^μν+B∂_μ A^μ, with the mass term identified as m=2g. The Nakanishi-Lautrop auxiliary field B was introduced to impose the covariant gauge condition ∂· A=0 leading to the 1-form equation of motion,(1+m^2/∂^2)∂_μ F^μν=∂^ν B.As the above dynamics imply massless scalar dispersion of the auxiliary field ∂^2B=0, in turn, the 1-form satisfies the following equation of motion, (∂^2+m^2)∂^2A_μ=0. Apart from the imposed gauge condition, there are no more constraints coming out of the above equations owing to the mass term. This leaves out 3 massive degrees of freedom. The massive mode V_μ=∂^2A_μ has a propagator, K_μν=1/∂^2+m^2[η_μν-(1-α)∂_μ∂_ν/∂^2],with a transverse gauge-independent part: ∂^μ K_μν(α=0)=0. This is despite the presence of mass, judiciously appearing as the real momentum-space pole of the propagator, just like in the pure Chern-Simons propagator in 2+1 dimensions. Since the respective pole of the mixed Chern-Simons theory of Eq. <ref> is imaginary, the non-local 1-form model (and so the B∧ F model) is analogous only to the pure C-S case of 2+1 dimensions. Subsequently, as expected of a massive mediating field, interaction with a conserved current J^μ made of localized charge distribution J_0(x)=q_1δ^3(x-x_1)+q_2δ^3(x-x_2) constitute a Yukawa-type potential:V(x_1,x_2)= 1/2∫ d^3x J_0(x)1/-∇^2+m^2J_0(x)= q_1q_2/4πe^-m|x_1-x_2|/|x_1-x_2|. In general, such a screened interaction may support localized structures. *There is also a massless mode S_μ=(∂^2+4g^2)A_μ in this model, characterized by the Maxwell-like propagator, K'_μν=1/∂^2[η_μν-(1-α')∂_μ∂_ν/∂^2]. However, this mode conveniently decouples from the theory having an unphysical norm and only the massive mode V_μ defines the physical sub-space <cit.>. The physical subspace corresponds to the non-equal-time quantum bracket, [A_μ(x),A_ν(y)]=-iη_μνΔ(x-y)+i∂_μ^x∂_ν^xΔ'(x-y). This construction follows the method of covariant canonical quantization <cit.> implemented through the invariant commutator function obeying, ∂_x,y^2Δ'(x-y)=Δ(x-y), Δ'(x)=-1/m^2Δ(x)+1/m^2Δ(x, m=0),with, Δ(x)=-i/(2π)^3∫ d^4p sgn(p_0)δ(p^2-m^2) e^-ip.x. Both the equations of motion and the gauge-fixing condition conform to the bracket and subsequently, the massive and mass-less modes get decoupled,[V_μ, S_μ]=0. Further, S_μ corresponds to either zero or negative norm states whereas the massive mode V_μ represents semi-positive-definite states. Subjected to the single gauge condition the theory describes a spin-1 massive particle with three degrees of freedom. *In the non-covariant temporal gauge: A_0=0, without introducing any auxiliary fields, the ν=0 component of the EOM Eq. <ref> implies, (1+m^2/∂^2)∇·A=0.Subsequently, the remaining components of the equation of motion describe three massive degrees of freedom: (∂^2+m^2)A=0.*As had already been mentioned, introducing a dynamical auxiliary field to remove the 1-form non-local term leads to the Stückelberg system of Eq. <ref>. Therein the original gauge redundancy now gets extended to the well-known shared one, A_μ→ A_μ+∂_μΛ, D→ D-m^2Λ, which is consistent with the corresponding equations of motion: (∂^2+m^2)A_μ-∂_μ∂· A=-∂_μ D, m^2∂· A=-∂^2D. As the second equation can be obtained from the first, eliminating D yields a massive vector Z_μ=A_μ+1/m^2∂_μ D=A_μ-1/∂^2∂_μ∂· A with vanishing four-divergence by construction. As for gauge fixing, the choice ∂· A-D=0 provides massive dynamics for both A_μ and D having non-negative energy. 
The required choice of gauge parameter Λ invariably leads to ∂· A=0 and D=0 independently, subjected to the dynamical equations. Therefore it is equivalent to a direct gauge choice of ∂· A=0 that also leads to D=0. In any case, the number of degrees of freedom is 3 which are all massive.*The quantum versions of both Stückelberg <cit.> and non-local 1-form model <cit.> lead to the same physical states. Thus the non-local term does not hamper the physics subjected to consistent quantization schemes, an aspect that can be quite useful in other non-local gauge models. Following the same logic, the non-local 2-form theory of Eq. <ref> is also expected to display similar well-behaved dynamics despite the non-local term. In the following this indeed is shown to be the case as physical modes are identified in a canonical set-up, followed by consistent quantization schemes.§ CANONICAL STRUCTURE OF NON-LOCAL 2-FORM MODELA straightforward demonstration of the massive dynamics in the non-local 2-form case of Eq. <ref> is to introduce a couple of totally anti-symmetric auxiliary fields J_μαβ and λ_μαβ to the Lagrangian as, L_B = 1/12H_μνρH^μνρ+g^2∂_μ B_αβJ^μαβ +λ_μαβ(H^μαβ-∂^2J^μαβ), with the identification M=2g understood. Elimination of the auxiliary fields by using respective equations of motion leads to the 2-form dynamical equation, (1+4g^2/∂^2)∂_μ H^μνρ=0. When expressed in terms of the dual field ^*H_μ, it yields 3 massive modes: (∂^2+4g^2)^*H_μ=0 subjected to the built-in transverse-ness ∂·^*H=0, with the additional condition, (1+4g^2/∂^2)ϵ^μναβ∂_α^*H_β=0. Unlike in Eq. <ref>, the non-local mass term prevents identification with a scalar field. Though one can still construct a massless scalar mode as (1+4g^2/∂^2)^*H_μ=∂_μφ, it will correspond to the aforementioned spurious one that dissociates from the physical space upon proper quantization. The ∂^-2 operator can always be attributed to an interaction with a massless boson field and thus is carried forward in the analysis. We will get to see that this will not obstruct local dynamics. The 2-form generalization to the Stückelberg mechanism in Eq. <ref> justifies this aspect wherein the `Stückelberg 1-form' 1/2gD^μ is physical in the sense that it compensates for the non-local term and contributes to the dynamics as well.This extended but local system is invariant under a shared gauge transformation, B_μν→ B_μν+∂_μΛ_ν-∂_νΛ_μ, D_μ→ D_μ+4g^2Λ_μ, as well as under anindependent gauge transformation of the D^μ sector, D_μ→ D_μ+∂_μλ. Both these transformations are further consistent with the dynamical equations: (∂^2+4g^2)B_μν-∂_[μ(∂· B)_ν]=∂_[μD_ν], ∂^2D_μ-∂_μ∂· D=4g^2(∂· B)_μ,where(∂· B)_μ=∂^ν B_νμ.Just like the 1-form case, the last equation above is an outcome of the first. On eliminating the auxiliary field the dynamics becomes that of a massive gauge-invariant yet divergence-less combined 2-form Z_μν=B_μν-1/4g^2∂_[μD_ν]=B_μν-1/∂^2∂_[μ(∂· B)_ν]. The natural choice for gauge in the shared gauge sector (∂· B)_μ+D_μ=0 yields massive dispersion for both B_μν and D_μ with semi-positive definite energy, along-with ∂· D=0. The corresponding class of vector gauge parameter Λ_μ is necessarily transverse ∂·Λ=0. Subsequently, the gauge parameter λ of the independent 1-form sector is free to implement ∂· D=0. Therefore it is equivalent to choosing (∂· B)_μ=0 and ∂· D=0 independently, which implies a massive B_μν and on-shell vanishing D_μ subjected to suitable boundary data. 
In this case, the equivalent class of Λ_μ is also massive whereas that of λ is massless. Now considering the constraint ∂·(∂· B)=0 on the 2-form gauge condition itself, there are 3 massive degrees of freedom left. *As claimed before, there is no non-locality in the dynamical equations of the massive 2-form model. Thus it is analytically efficient to stick with the non-local version of the theory. In the following, no auxiliary fields will be introduced to obtain the canonical quantization consistently. Instead, the non-locality will be removed once a judicious gauge condition is imposed. Though spurious massless modes appear as a result in some cases, they decouple conveniently from the physical Hilbert space. This may serve as a template to handle non-local terms in quantizing a gauge model.§.§ Dynamical modesTo carry out a canonical analysis, it is useful to manifest the gauge theory in a non-covariant form consistent with the Hamiltonian formalism <cit.>. Accordingly, the massive 2-form model of Eq. <ref> can be re-cast as, L_ B=1/2(Π_kΠ_k-ΦΦ), =1+4g^2/∂^2, Π=∂_0B-∇×B,Φ=∇·B, B_i=1/2ϵ_ijkB_jk,B_i=B_0i, (i,j,k,⋯)=1,2,3. The equations of motion of this system can be resolved in terms of the 3-vectors B and B as,∂_μ H^μνρ=0⇒(∂_0Π-∇Φ)=0, ∇×Π=0.The Bianchi identity of Eq. <ref>, that owes only to the inherent anti-symmetry of H_μνρ, now takes the form,∂_0Φ-∇·Π=0. .A massive dispersion can now be obtained simply by substituting one field in terms of the other:∂^2𝔊=(∂^2+4g^2)𝔊=0𝔊=(Π, Φ), similar to the massive 1-form case before. The massive fields (Π, Φ) represent three degrees of freedom following the scalar constraint of Eq. <ref>. At this level, there are no spurious massless modes since (Π, Φ) are gauge-invariant quantities. A hint of the spurious massless modes appears if 𝔊 are taken as field variables. Following Bianchi's identity, this mode also contains three degrees of freedom, and from the second of Eq. <ref> it is also irrotational.*The presence of an antisymmetric source field J_μν=-J_νμ, L_B=1/12H_μνρH^μνρ+1/2B_μνJ^μν, modifies the equations of motion to,∂_μ H^μνρ=J^νρ⇒(∂_0Π-∇Φ)=J, ∇×Π=J, J_i=1/2ϵ_ijk J_jk, J_i= J_0i,with the Bianchi identity prevailing. The current conservation yields, ∂_μ J^μν=0⇒∂_0J-∇×J=0, ∇·J=0,wherein the second equation is actually a subsidiary of the first[We would get ∂_0∇·J=0 which leads to ∇·J=f(x). The inhomogeneous part f(x) can safely be set equal to zero as its form is not going to affect the dynamics.]. So there are three independent current components that resolve the massive dynamics,(∂^2+4g^2)Π=∂_0J+∇×J,(∂^2+4g^2)Φ=∇·J, having 3 degrees of freedom.§.§ The gauge redundancyThe gauge dependence comes into effect in terms of the dynamical fields B and B, the latter being redundant (a Lagrange multiplyer) as its conjugate momentum is absent in the Hamiltonian. The equations of motions <ref> then take the forms,∂_0^2B=∇(∇·B)+∂_0∇×Band ∂_0∇×B=∇×∇×B, whereas the Bianchi identity vanishes identically as usual. The gauge transformations of the 2-form are now expressed as,B→B+∇×Γ,B→B+∂_0Γ+∇Γ. with gauge parameter Γ_μ=(Γ, -Γ). Particular gauge fixing is now required to identify the degrees of freedom. On adopting the standard covariant gauge,∂_μ B^μν=0⇒∂_0B+∇×B=0, ∇·B=0, one obtains the massive dispersion,(∂^2+4g^2)𝔅=0,𝔅=B, B.Among the four gauge conditions of Eq. <ref>, the last one is a subsidiary of the other three owing to the inherent anti-symmetry of B^μν, and it makes B transverse. 
Given non-zero mass, this transverse field vanishes in the rest frame, leaving out 3 massive degrees of freedom represented by B. In contrast, the massless case has the transverse components of B equated to the redundant field B, leaving out a single longitudinal degree of freedom. Starting from arbitrary fields, the explicit gauge choices that lead to the covariant gauge,Γ=-1/∂^2(∂_0B+∇×B),Γ=1/∂^2∇·B.are themselves invariably constrained as ∂·Γ=0 and thus only 3 independent constraint conditions can be imposed. Since these gauge parameters are further redundant as Γ_μ→Γ_μ+∂_μγ, one subsequently gets ∂^2γ=0. Non-trivial choices for γ are consistent with the chosen covariant gauge. *Sacrificing manifest covariance makes it simpler to resolve the dynamical modes <cit.> as redundant fields can directly be eliminated as a gauge choice: B=0[It is equivalent to the temporal gauge of one-form gauge theory.]. This is achieved by choosing three components of Γ_μ left out after fixing γ. Then the second of the equations of motion in <ref> eludes to,∇×B=f(x),Since f(x) is arbitrary and independent of time, a choice of f(x)=0 does not hamper the system dynamics. As a consequence, the other remaining equation of motion reads as,(∂^2+4g^2)B=0, with 3 free physical components. Crucial to this result, due to the presence of mass, the condition ∇×B=0 does not constrain the components of B. This `temporal' gauge is equivalent to the covariant gauge in the rest frame with p^μ=(2g, 0), a correspondence that is absent in the massless case. In the latter scenario, instead, the condition ∇×B=0 negates two more degrees of freedom.*Physical degrees of freedom equivalently get resolved in the light-cone gauge. The equations of motion in the momentum space,_p(p^2B^μν-p^μ B^ν+p^ν B^μ)=0,where _p=1-4g^2/p^2, B^μ=p_α B^αμ, are now subjected to the light-cone coordinates p^μ=(p^+,p^-,p^2,p^3) with p^+≠ 0<cit.>. The secondary gauge parameter γ is chosen to fix the primary parameter component Γ^+=0. Then the remaining components Γ^-,l, with l=2,3, can be chosen to fix the light-cone gauge B^+μ=0, which essentially is equivalent to the `temporal gauge' above. As a result one obtains, B^+=0, B^-=p^lB^-l, B^l=p^+B^-l-p^mB^ml. Different components of the equation of motion then culminate to,(μ=+, ν=-):_pp^lB^-l=0, (μ=+, ν=l):_pB^-l=p^m/p^+_pB^ml, (μ=-, ν=l):(p^2-4g^2)B^-l=0. Two massive degrees of freedom can now be identified for l=2,3 and a third one B^ml≡ B^23 is a result of substituting from Eq. <ref>[In the massless case B^-l∝ B^ml in Eq. <ref> leaving out a single massless degree of freedom.].§.§ The Hamiltonian analysisThe present theory's canonical treatment requires adequately handling the constraints to identify physical variables so that correct canonical brackets can be constructed. In terms of the dynamical 3-vectors, the non-local 2-form Lagrangian is expressed as,L_B=1/2[(∂_0B-∇×B)_i(∂_0B-∇×B)_i -∇·B∇·B]. The corresponding conjugate momentum components are, Π_i=∂ L_B/∂∂_0B_i=(∂_0B-∇×B)_iand Π_i=∂ L_B/∂∂_0B_i=0, wherein the second equation depicts constraints. The non-local operatoris maintained in the definition of momentum as it gets negated by gauge-fixing without hampering the physical outcome. As there are constraints involved, it is customary to employ the method of Dirac brackets <cit.> to construct the canonical brackets of this system. For this purpose, the canonical Hamiltonian is obtained through the Legendre transformation as,H_ Can = Π_i ∂_0B_i- L_B= 1/2(Π_i1/Π_i+ΦΦ)+B·(∇×Π). 
The primary constraints of the system are labeled as, Θ_i=Π_i=0,which are needed to be included in theprimary Hamiltonian,H_ P =H_ Can+λ_iΘ_i= 1/2(Π_i1/Π_i+ΦΦ)+B·(∇×Π) +λ_iΘ_i, through Lagrangian multipliers λ_i. Then the equal-time Poisson brackets pertaining to this system are defined as, {𝔄(x,t), 𝔅(y,t)}_ PB =∫ d^3z [∂𝔄(x,t)/∂B_i(z,t)∂𝔅(y,t)/∂Π_i(z,t)-∂𝔄(x,t)/∂Π_i(z,t)∂𝔅(y,t)/∂B_i(z,t) +∂𝔄(x,t)/∂B_i(z,t)∂𝔅(y,t)/∂Π_i(z,t)-∂𝔄(x,t)/∂Π_i(z,t)∂𝔅(y,t)/∂B_i(z,t)], over the complete phase-space {(B_i, B_i, Π_i, Π_i)}. The non-trivial fundamental Poisson brackets in the complete phase-space are, {B_i(x,t), Π_j(y,t)}_ PB=δ_ijδ^3(x-y), {B_i(x,t), Π_j(y,t)}_ PB=δ_ijδ^3(x-y), with Π_i=0 depicting the constraint hyper-surface. Any variable ℭ(x,t) that vanishes on this hyper-surface is termed as weakly vanishing: ℭ(x,t)≈0. *The constraints evolve with time according to the primary Hamiltonian as, ∂_0Θ_i = {Θ_i(x,t), ∫ d^3yH_ P(y,t)}_ PB≡ ∫ d^3y {Π_i(x,t), B(y,t)·(∇_ y×Π(y,t))}_ PB= -(∇_ x×Π(x,t))_i. Since the constraint itself vanishes weakly it is weakly stationary too, yielding the secondary constraints, Θ_i(x,t):=-(∇×Π(x,t))_i≈ 0. Thesesecondary constraints are found to be stationary over the whole phase space, ∂_0Θ_i(x,t) ≡∫ d^3y {(∇_ x×Π(x,t))_i, 1/2Φ(y,t)Φ(y,t)}_ PB =0, and thus the chain terminates. Now we are left with two sets of first-class constraints Θ_i and Θ_i since they commute among themselves following the fundamental Poisson brackets. As the presence of first-class constraints marks gauge redundancy <cit.> an equal number of gauge fixing conditions are needed. By the `temporal' gauge before we choose these conditions to be,Ξ_i=B_i≈ 0, andΞ_i=(∇×B)_i≈ 0.The non-vanishing equal-time Poisson brackets of the constraints and gauge fixing conditions are,{Θ_i(x,t), Ξ_j(y,t)}_ PB=δ_ijδ^3(x-y), {Θ_i(x,t), Ξ_j(y,t)}_ PB= P_ijδ^3(x-y),P_ij=δ_ij∇^2-∂_i∂_j, designating the constraints to be second class. Thus the Lagrange multipliers can be obtained for Ξ, Ξ being stationary which leads to λ_i=0. On defining a 4-component variable Ψ_A=(Θ, Θ, Ξ, Ξ) with A=1,2,3,4 the matrix of the Poisson brackets among the components of Ψ_A is obtained as,ℭ=[ 0 0-𝕀 0; 0 0 0 - P; 𝕀 0 0 0; 0 P 0 0 ] δ^3(x-y). In this matrix, each element is a 3× 3 matrix. It is the inverse of the matrix ℭ:ℭ^-1=[ 0 0 𝕀 0; 0 0 0 P^-1^-1;-𝕀 0 0 0; 0 - P^-1^-1 0 0 ] δ^3(x-y), that appears in the Dirac brackets. Since the operator ^-1 is local, the Dirac brackets will not suffer from the concerned ambiguity, and the initial decision to adopt a non-local canonical momentum is thus validated. Another subtlety here is that P is transverse and thus P^-1 does not exist, a consequence of the secondary gauge invariance Γ_μ→Γ_μ+∂_μγ. However, on adopting the regularization (gauge-fixing): P^-1_ij=∇^-2[δ_ij-(1-ξ)∇^-2∂_i∂_j], analogous to the 1-form case, it is easily seen that the final results stay independent of the parameter ξ[Actually, the only non-trivial bracket turns to be longitudinal.]. On defining the Dirac bracket as usual: {𝔄(x,t), 𝔅(y,t)}_ D = {𝔄(x,t), 𝔅(y,t)}_ PB -∫ d^3z d^3w {𝔄(x,t), Ψ_A(z,t)}_ PBℭ^-1_AB(z,w,t){Ψ_A(w,t), 𝔅(y,t)}_ PB, the dynamical variables now satisfy,{B_i(x,t), Π_j(y,t)}_ D=0={B_i(x,t), Π_j(y,t)}_ D, {B_i(x,t), Π_j(y,t)}_ D=0, and {B_i(x,t), Π_j(y,t)}_ D=∂_i∂_j/∇^2δ^3(x-y). Therefore, the dynamics is rightly confined to the field B which is subjected to the `non-restrictive condition' Ξ=(∇×B)≈ 0. 
The primary Hamiltonian now simplifies to, H_ P= H_ Can+λ_iΘ_i=1/2(Π_i1/Π_i+ΦΦ), subjected to the constraint Ξ≈ 0 and the result λ_i=0. In the framework of the Dirac bracket, the constraint Ξ does not make individual components of B interdependent. In fact, on imposing Ξ=0 along with Ξ=0 essentially transforms the Hamiltonian to,H_ P=1/2[(∂_0B)^2+(∂_iB)^2+4g^2(B)^2], along-with alocal momentum Π=∂_0B. This is a massive 3-vector, consistent with the only non-trivial Dirac bracket of Eq. <ref> that manifests a longitudinal projection. Noticeably, the gauge choice removes the non-local operator. Quantization can follow as usual by upgrading the Dirac brackets to quantum ones, subject to the constraints which will then be operator-valued.§ QUANTIZATION OF THE 2-FORM MODELResponsible for the gauge-invariant 2-form mass, the non-local term persists in the canonical momenta at the classical level, although we have identified the correct physical modes. Choosing a particular gauge cures this problem, but that is not explicit in the construction of Dirac brackets as the constraints are implemented only weakly. More importantly, we want to maintain manifest Lorentz covariance as much as possible while quantizing, leading to the physical spectrum expected to be gauge-invariant.*These goals are most efficiently met by introducing covariant, non-equal time fundamental quantum brackets <cit.> that demonstrate clear equivalence between covariant and non-covariant gauge choices. The covariant Lorentz gauge removes the non-locality at the expense of spurious massless modes that dissociate from the physical space. Only the physical mode survives on implementing the temporal gauge, though its agreement with covariance needs validation. *On the other hand, since the explicit non-locality is retained unless the Lagrangian (or the Hamiltonian) is directly gauge-fixed, a path-integral quantization scheme will work only when auxiliary fields are introduced to compensate for the same, more so as there are no equations of motion involved. This manifestly covariant approach necessarily utilizes the tensorial Stückelberg extension of Eq. <ref> to arrive at correct degrees of freedom with the gauge redundancy compensated by the duly decoupled ghost sector. In the following, each of these treatments is carried out in detail. §.§ Quantization in the R_ξ gaugeThe covariant gauge condition (∂· B)_μ=0 is implemented in the massive 2-form Lagrangian through the R_ξ prescription, L_ B=1/12H_μνρ(1+4g^2/∂^2)H^μνρ-ξ/2(∂_μ B^μν)^2.Doing so invariably removes manifest gauge invariance and thus necessitates a ghost sector that conveniently decouples in the Abelian case. However, gauge-invariance can still be tracked through dependence on the gauge parameter ξ. The corresponding equations of motion have the form, (1+4g^2/∂^2)∂_μ H^μαβ=ξ∂^[α(∂· B)^β], which can be resolved in two parts, ∂^2(∂^2+4g^2)B_μν=0, ∂^2(∂· B)_μ=0. The second equation above imposes the gauge condition (∂· B)_μ=0 given the boundary values are chosen judiciously. As a direct consequence, Eq. <ref> reduces to a free massive dispersion: (∂^2+4g^2)B_μν=0, and thus the additional massless mode in the first of Eq.s <ref> is removed. *The gauge condition cannot be implemented directly at the quantum level as the fields become operators <cit.>. Before going into that, the gauge invariance of the outcomes of this theory can be checked at the Classical level. 
The R_ξ gauge propagator has the form,⟨[B_μν(x),B_αβ(y)]⟩=K_μν,αβδ(x-y), K_μν,αβ=1/∂^2+4g^2[η_μ[αη_β]ν-(1+ξ)η_[μ[α∂_β]∂_ν]/∂^2].Naturally, it respects the anti-symmetry of the constituent fields but does not conform to the covariant gauge condition owing to the ξ-dependent part. Still, the physical processes do respect the gauge condition since for a conserved external current J_αβ,J^αβ(y)∂_x^μ K_μν,αβδ(x-y)=0. This is because the gauge-dependent contribution to the term J^αβ(y)K_μν,αβδ(x-y) is a total derivative for J_αβ being conserved. Given the current is further anti-symmetric and time-independent the massive mode leads to the potential energy,V(x_1,x_2) =1/2∫ d^3x J^μν(x)K_μν,αβJ^αβ =2∫ d^3x(J_i1/-∇^2+4g^2J_i-J_i1/-∇^2+4g^2J_i).In 3+1-dimension the requirement for the current tensor can be fulfilled as J_μν=ϵ_μναβ∂^α j^β where j_μ can represent charged fermions <cit.>. This current is topologically conserved since its 4-divergence vanishes irrespective of the system dynamics <cit.>. Therefore the interaction potential, V(x_1,x_2) =2∫ d^3x(j_01/-∇^2+4g^2j_0-j_i1/-∇^2+4g^2j_i),can lead to the Yukawa-like interaction of Eq. <ref> between localized charges and currents in a gauge-invariant manner, akin to the Meisner-like effect inferred to in the case of the B∧ F theory <cit.>. *The manifestly covariant quantization of the present model is achieved through the procedure of Klein-Gordon Divisor <cit.> through postulating the covariant fundamental quantum bracket[The non-locality here is conveniently confined to the gauge-dependent term. Further, the overall sign is important for correctly obtaining the physical modes.], [B_μν(x),B_αβ(y)]=iη_μ[αη_β]νΔ(x-y)-i/ξ(1+4g^2/∂_x^2+ξ)η_[μ[α∂^x_β]∂^x_ν]Δ'(x-y), at unequal times, following the same definitions in Eq.s <ref> with the substitution m→ 2g. This bracket is consistent with the dynamics of both Eq.s <ref> and <ref> as well as with the exact gauge condition (∂· B)_μ=0. However the same is not true for individual massive and massless dynamics. Thus the fundamental bracket overlooks the system's constraints in the sense that the physical mode is not entirely isolated, though the gauge condition is respected. The latter owes to the R_ξ procedure of incorporating the gauge condition directly into the action. For an externally imposed gauge condition, as is the case for the non-covariant (`temporal') gauge or even the covariant gauge chosen explicitly, the bracket need not be consistent with the gauge condition even if it is consistent with the equations of motion and thus again over-counts the degrees of freedom <cit.>. In any case, only an additional condition like the Gupta-Bleuler theorem on the physical subspace: ∂_μ B^μν (+)| Phys.⟩=0, can isolate the physical mode(s), since (∂· B)_μ behaves like a massless field.*One can also introduce a Nakanishi-Lautrup field C_μ to modify the gauge-fixing term as,-ξ/2(∂_μ B^μν)^2→-C_ν∂_μ B^μν+1/2ξC_μ C^μ.It has the operational advantage that the gauge parameter does not appear in the 2-form sector and a manifestly gauge-independent treatment can be performed without having to `fix' ξ. The C_μ sector now imbibes the gauge condition: C_μ=ξ∂^α B_αμ whereas the 2-form equation reduces to,(1+4g^2/∂^2)∂_μ H^μαβ=∂^[αC^β].Subsequently, the fields separate as,∂^2(∂^2+4g^2)B^αβ=0, with, ∂^2C_μ=0, since ∂· C=0. The massless mode persists and owes to the covariant gauge condition, implemented by the second equation above. 
This is apparent from the dual 1-form (^*H_μ) theory that hosts the massive mode only given its explicit gauge independence[For a massive case (∂^2+m^2)ϕ=0 one can always construct a massless mode (1+m^2/∂^2)ϕ. But the non-locality here cannot be removed, unlike the present case.]. This massless mode will be shown to represent states with non-positive norms that lie outside the physical Fock space. Following the fundamental bracket, the auxiliary field C_μ commutes both with the 2-form field and itself:[C_μ(x),B_αβ(y)]=0=[C_μ(x),C_ν(y)]. As a result, it dissociates from the dynamical sector as expected and corresponds to zero-norm states. *For the convenience of notations, the massive and massless modes are now respectively represented as,V_μν(x)=∂^2B_μν(x)and S_μν(x)=(∂^2+4g^2)B_μν(x). These fields individually satisfy the covariant gauge-fixing conditions ∂_μ V^μν=0=∂_μ S^μν and their corresponding quantum brackets are, [V_μν(x),V_αβ(y)] = i4g^2(4g^2η_μ[αη_β]ν+η_[μ[α∂^x_β]∂^x_ν])Δ(x-y), [S_μν(x),S_αβ(y)] = -i/ξ4g^2(1+4g^2/∂_x^2+ξ)η_[μ[α∂^x_β]∂^x_ν]Δ(x-y, g=0), [V_μν(x),S_αβ(y)] = 0. The massive and massless modes conveniently dissociate. The spuriousness of the latter is strongly suggested by the exclusive presence of ξ in its defining commutator[A simpler calculation corresponds to the Landau gauge: ξ→∞, where the commutators are no longer non-local and the massless sector becomes exactly zero-normed.]. It will be seen that they represent states with non-positive norms. Further assurance comes from the commutator of the dual 1-form, [^*H_μ(x),^*H_ν(y)]=i/2(4g^2η_μν-∂_μ∂_ν)Δ(x-y), which is consistent with Eq. <ref> and the default transverse-ness ∂·^*H=0. This bracket is relatively more complicated than what naïvely follows from the relevant equation of motion, as there is no gauge redundancy in the dual sector. The corresponding physical states have positive norms as required. *Taking the equal-time limit of the commutators disrupts manifest covariance by separating spatial and temporal components, but it leads to the physical modes more clearly. On going to the momentum space, b_μν( q)=i/√(2(2π)^2E_q)∫ d^3x e^iq.x∂_0B_μν(x), E_q^2= q^2+4g^2, the fundamental bracket corresponds to,[b_μν( q),b^†_αβ( q')]=η_μ[αη_β]νδ^3( q- q'), among positive and negative frequency components. On resolving the vector components,[b_i( q),b^†_j( q')]=δ_ijδ^3( q- q')=-[b_i( q),b^†_j( q')], [b_i( q),b^†_j( q')]=0. the previously identified mode b_i( q)(B_i(x)) describes physical states with positive norm,⟨ i,q|i,q'⟩=⟨ 0|b_i( q)b^†_i( q')| 0⟩=δ^3( q- q').The states corresponding to b_i( q)(B_i(x)) will have negative norms, which dissociate from the physical ones. To specify the physical degrees of freedom, the covariant gauge condition in Eq. <ref> implies that, b_ T( q)| Phys.⟩=1/E_qq×b_ T( q)| Phys.⟩, b_ L( q)| Phys.⟩=0. Here, the suffixes L,T stand for longitudinal and transverse components respectively. The `unphysical' b_i( q) components get eliminated, leaving behind 3 massive components of b( q)[It is to be kept in mind that the R_ξ-gauge quantum brackets are not consistent with individual massive and massless dynamics.]. To arrive at the physical spectrum one can introduce a polarization basis, b_μν(q)=∑_λ,λ'=0^3 β_λλ'(q)ε_μν(q; λ,λ').Given the anti-symmetry of b_μν the polarization tensors can be chosen as ε_μν(q; λ,λ')=ε_[μ(q; λ)ε_ν](q; λ') making the operators b_λ,λ'(q) anti-symmetric as well. 
A natural choice for the set of basis vectors ε_μ(q; λ) is that for a Proca field <cit.>, ε_μ(q; 0)=q_μ/2g,ε_μ(q; n)=(0,-ε(q; n)), ε_μ(q; 3)=(|q|/2g,-q_0/2gq̂), n=1,2, which complete an orthonormal basis about the momentum vector as[These relations further ensure that the operators b_λ,λ'(q) also satisfy the commutation relations prescribed for the operators b_μν(q).], ε(q; λ)·ε(q; λ')=η_λ,λ', ∑_λ=0^3η_λλε_μ(q; λ)ε_ν(q; λ)=η_μν, q·ε(q; λ)=2gη_0λ.The longitudinal (ε_μ(q; 3)) and temporal (ε_μ(q; 0)) components exist due to the non-zero mass and unlike the Proca theory, the absence of any built-in constraints allows all four basis vectors. Thus these basis vectors are equally applicable for the dual 1-form theory, thereby reaffirming 3 massive degrees of freedom subjected to a gauge condition. Presently, the covariant gauge condition translates to, ∑_λ=1^3β̅_λ(q)ε^μ_λ(q; λ')| Phys.⟩=0, β̅_λ=β_0λ, λ=1,2,3, which implies β̅_λ(q)| Phys.⟩=0 for λ=1,2,3 that follows from the explicit basis vectors. Therefore the covariant gauge Hamiltonian, H^C=1/2∫ d^3x[B·B+∂_iB·∂_iB+4g^2B·B-B·B-∂_iB·∂_iB-4g^2B·B], ends up having the normal-ordered expectation value in the physical subspace as, ⟨ Phys.| H^C| Phys.⟩=⟨ Phys.|∫ d^3q E_q∑_λ=1^3β^†_λβ_λ| Phys.⟩, β_λ=1/2ϵ_λαββ_αβ, λ,α,β=1,2,3, with 3 massive components. As is well-known for the massless limit, vectors ε_μ(q; 0/3) are undefined as the reference vector q_μ cannot be normalized to 1. Subjected to the covariant gauge, the longitudinal component of β vanishes whereas its transverse components exactly cancel out with those of β, leaving behind only the massless longitudinal component of β. *An equivalent picture emerges on considering the momentum-space commutators of massive and massless modes, [v_μν( q),v^†_αβ( q')]=4g^2(4g^2η_μ[αη_β]ν-η_[μ[αq_β]q_ν])δ^3( q- q'), [s_μν( q),s^†_αβ( q')]=4g^2η_[μ[αq_β]q_ν]δ^3( q- q'),[v_μν( q),s^†_αβ( q')]=0. Clearly, these two sectors dissociate. On resolving to non-covariant components (v( q),v( q)) and (s( q),s( q)), they also dissociate among themselves. Both s( q) and s( q) turn out to have negative norm states: [s_i( q),s^†_j( q')] = 4g^2(q_iq_j- q^2δ_ij)δ^3( q- q')= [s_i( q),s^†_j( q')],reconfirming the spuriousness of this massless mode. That the physical subspace of Eq. <ref> is free of these negative norm states is evident as Eq. <ref> leads to,S^μν=(1+4g^2/∂^2+ξ)∂^[μ(∂· B)^ν], wherein the non-local factor disappears in the Landau gauge. *As for the massive mode, however, both v( q) and v( q) correspond to semi-positive norms: [v_i( q),v^†_j( q')]=4g^2[( q^2+4g^2)δ_ij-q_iq_j]δ^3( q- q') [v_i( q),v^†_j( q')]=4g^2[ q^2δ_ij-q_iq_j]δ^3( q- q').The longitudinal component of v( q) has a vanishing norm whereas its transverse components can be expressed in terms of those of v( q) as per Eq. <ref> in the covariant gauge.In general, all the components of ∗( q) can never be replaced by those of ∗( q). Unlike s_μν( q), the states corresponding to v_μν( q) are a part of the physical Fock space and it serves our purpose that they have positive norms.§.§ Quantization in non-covariant gaugeSince the field B acts as a Lagrange multiplier, the non-covariant gauge choice B=0 resolves the physical mode with relative ease, but at the expense of manifest covariance. As we have seen from Eq.s <ref> this constrains the remaining field as ∇×B=0 imbibing it with massive dynamics. In this way, the residual gauge redundancy B_ij(x)→ B_ij(x)+∂_iΛ_j(x)-∂_jΛ_i(x) after setting B=0 has also been taken care off. 
Subsequently, the Lagrangian simplifies to, L_ B = 1/2(∂_0B·∂_0B-∇·B∇·B)→ -1/2B·[∂^2_0B-∇(∇·B)]= -1/2B·(∂^2+4g^2)B,wherein in the middle step we removed total derivatives. There are no spurious massless modes in this gauge. As for the quantization, the physical sub-space is now defined as B^(+)| Phys.⟩=0 that houses the fundamental commutator,[B_i(x), B_j(y)]=iδ_ijΔ(x-y). It is consistent with the massive vector dynamics and reduces to the equal-time commutator, [B_i( x,t), B_j( y,t)]=iδ_ijδ^3( x- y). Although trivially consistent with the gauge condition, however, this bracket is not consistent with the residual gauge condition ∇×B=0 since, [(∇×B)_i( x,t), B_j( y,t)] =-i(1-4g^2/∇^2_x)ϵ_ijk∂_k^xδ^3( x- y)≠ 0.The complete consistency is achieved by postulating a `longitudinal' quantum bracket instead, [B_i( x,t), B_j( y,t)]=i∂^x_i∂^x_j/∇_x^2δ^3( x- y).The inherent validation of this construction comes from the Dirac bracket analysis we had carried out, particularly from the last one of Eq.s <ref>, as in the temporal gauge the canonical conjugate of B_i is Π_i=B_i. Again, the gauge fixing cures the non-locality imposed by the operator . *In the momentum space the fundamental bracket takes the form, [b_i( q), b^†_j( q')]=q_iq_j/q^2δ^3( q- q'),leading to non-negative norms as required. However, owing to the on-shell restriction, the residual gauge condition becomes an identity in the momentum space. Therefore one can still work with the fundamental commutator identical to the covariant case in Eq. <ref>. This suggests that the respective quantization processes are equivalent; in the sense that the commutator of the gauge-independent fields,[Π_i( x,t), Φ( y,t)]=i∂^x_iδ^3( x- y),is unaffected by the choice of gauge[The non-equal-time commutators satisfy the micro-causality condition as Δ(x-y) (and thus Δ'(x-y)) vanishes for space-like separations. The equal-time condition serves as a special case of that and thus represents a causally definite condition for comparing quantization in the two gauges.]. Formally, the inherent covariance of the non-covariant gauge quantization is assured through the Dirac-Schwinger covariance condition <cit.>,[T^T_00( x,t), T^T_00( y,t)] =i(T^T_0i( x,t)+T^T_0i( y,t))∂^x_iδ^3( x- y),linking Hamiltonian and momentum densities that follow from the energy-momentum tensor,T^T_μν=∂_μB·∂_νB-1/2η_μν(∂_ρB·∂^ρB-4g^2B·B).in this particular gauge. As for the energy spectrum, the situation is much simpler now. Since Eq. <ref> is free from B the same mode distribution of energy in Eq. <ref> is simply reproduced. Expectantly, the energy of the system is gauge-independent.§.§ Path integral QuantizationFollowing the canonical quantization of the current model that led to physically well-behaved modes, the question persists as to whether such an explicit non-local model will behave well when quantum corrections are included. It finds assurance since the B∧ F model is renormalizable <cit.> and we can expect that to carry over to an effective description. A somewhat equivalent approach is to quantize the theory in the path integral procedure <cit.>. Since it is implemented at the level of generating function, the off-shell path integral quantization is valid up to all orders <cit.>. The present Abelian fields, upon gauge fixing, are supposed to decouple from the ghost sector having negative norm states. Subsequently, the gauge redundancy is replaced by the BRST transformations <cit.>. 
*Given the path integral approach is off-shell, the removal of the non-local term in the dynamical equation through gauge-fixing is not possible. In fact, given the standard gauge conditions, the ghost sector would be insensitive to the non-locality. Thus it becomes necessary to introduce auxiliary fields to compensate for the non-locality so that the actual degrees of freedom can be identified. The physics remains unchanged in doing so since the integration of the auxiliary fields is included in the generating function. In the following, the extended Lagrangian of Eq. <ref> is used for the path integral treatment, preceded by that of the 1-form counterpart (Eq. <ref>) for the sake of completion. §.§.§ Massive 1-form caseThe Stückelberg Lagrangian is invariant under the combined gauge transformations of Eq. <ref>. By choosing the shared gauge parameter Λ we may fix a gauge condition, say, the covariant one ∂· A=D. Such a gauge is known to yield positive-definite energies <cit.>. Subsequently, the Lagrangian is extended by the gauge fixing and the corresponding ghost term as,L_A→ L_A+ L_ GF+ L_ Ghost, where,L_ GF=-χ/2(∂· A-D)^2,L_ Ghost=∂_μc∂_μ c-m^2cc. The complete partition function now looks like this,Z[J_μ]= N_0∫ D[A_μ,D, c, c]exp(i∫ d^4x L_A+ L_ GF+ L_ Ghost+A_μ J^μ+DJ). Herein, the external sources (J^μ, J) are introduced for generating n-point functions and N_0 is the overall normalization factor. (c, c) are the anti-commuting (anti-)ghost fields that carry a negative degree of freedom each. They decouple from the rest of the action in this Abelian case, leaving the physical dynamics to the A_μ sector. The gauge invariance of the original Stückelberg action is now traded for invariance under the nilpotent BRST transformations <cit.>, δ A_μ=∂_μ c,δ D=-m^2c, δc=-χ(∂· A-D),δ c=0. *The scalar field D is physical since it represents the non-locality already present in the original Lagrangian, unlike the Nakanishi-Lautrup fields introduced to incorporate a gauge. The total number of degrees of freedom then counts as 4+1-2× 1=3 which are already identified as massive. As discussed before, a gauge choice of ∂· A=0 instead would have worked equally well, only the ghosts would have been massless. Since a single scalar gauge parameter Λ exists, the treatments corresponding to different gauge choices are almost identical. It will not be the case for the 2-form analog which has two distinct gauge sectors. §.§.§ Massive 2-form caseAs has been explained before, the path integral quantization of the massive 2-form model starts with the extended Lagrangian in Eq, <ref> wherein the nonlinearity is compensated by an auxiliary 1-form. This model is invariant under two sets of gauge transformations. The one in Eq. <ref> is shared by 2- and 1-forms whereas the other given in Eq. <ref> is confined to the 1-form sector. The covariant gauge f_μ:=(∂· B)_μ=0 choice in the shared gauge sector endows the 2-form with massive. According to the Faddeev-Popov prescription <cit.> the gauge-fixing affects a ghost action of the form,L^B_ Ghost=1/2∂_[μc_ν]∂^[μc^ν],in addition to the gauge-fixing term,L^B_ GF=-ξ/2(∂_μ B^μν)^2. The ghost action itself is redundant as the vector ghosts themselves behave as 1-form, c_μ→c_μ+∂_μκ, c_μ→ c_μ+∂_μκ, which mandates an additionalghost-of-ghost Lagrangian,L^c,c_ Ghost=∂_μd∂^μ d+∂_μe∂^μ e. Although (d,d,e,e) are Grassman fields they obey statistics opposite to that of (c_μ, c_μ), and thus representpositive degrees of freedom <cit.>. 
The ghost sector further needs to be gauge-fixed through the term,L^c,c_ GF=-η/2(∂·c)(∂· c). On the other hand, the gauge-fixing condition f_μ itself is a divergence-less term. This restriction needs to be implemented in terms of another ghost term as,L^f_ Ghost=1/2∂_μ f∂^μ f. The bosonic ghost field f represents a positive degree of freedom. Finally, the individual gauge redundancy of the D_μ sector is handled through the ghost action,L^D_ Ghost=∂_μg∂^μ g, with usual Grassman scalars and a gauge-fixing part,L^D_ GF=-ε/2(∂· D)^2. All the individual parts now are collected to form the complete Lagrangian,L_B^PI =L_B+ L^B_ Ghost+ L^B_ GF+ L^c,c_ Ghost+ L^c,c_ GF+ L^f_ Ghost+ L^D_ Ghost+ L^D_ GF+B_μνJ^μν= 1/12H_μνρH^μνρ-g^2B_μνB^μν+1/2B_μν∂^[μD^ν]-1/16g^2∂_[μD_ν]∂^[μD^ν]+1/2∂_[μc_ν]∂^[μc^ν]-ξ/2(∂_μ B^μν)^2 +∂_μd∂^μ d+∂_μe∂^μ e-η/2(∂·c)(∂· c)+1/2∂_μ f∂^μ f+∂_μg∂^μ g-ε/2(∂· D)^2+B_μνJ^μν+D_μ j^μ, with proper sources (J_μν, j) for the respective fields. Correspondingly, the generating function now takes the form,Z[J_μν, j_μ]= N_0∫ D[B_μν,D_μ,c_μ,c_μ,d, d, e, e, f, g, g]exp(i∫ d^4x L_B^PI). The anti-symmetric tensor field B_μν has 6 degrees of freedom whereas the 1-form D_μ has 4. The (anti-)ghost fields (c_μ,c_μ,f, g, g) posses -4-4-1-1-1=-11 degrees of freedom in total, whereas that number for (anti-)ghost of (anti-)ghost fields (d, d, e, e) is 1+1+1+1=4. Therefore the total degrees of freedom of the system is 6+4-11+4=3 as required. We identify all of them as massive since D_μ can dynamically be replaced by B_μν which has a massive dispersion in the implemented covariant gauge. The gauge-fixing terms can further be extended by introducing suitable Nakanishi-Lautrup fields like before and the gauge parameters (ξ, η, ε) can be fixed as required. *The alternate gauge fixing choice (∂· B)_μ+D_μ=0 might feel appealing since it leads to massive dynamics for both B_μν and D_μ fields with positive energies. The respective physical subspace is suitably defined as, [(∂· B)_μ+D_μ]| Phys⟩=0.However, the path integral quantization is off-shell and subjected to the discussion following Eq. <ref> this situation is equivalent to the gauge condition (∂· B)_μ=0. To appreciate that, let us consider the general gauge transformation, (∂· B)_μ+D_μ→(∂· B)_μ+D_μ+∂^2Λ_μ-∂_μ∂·Λ+4g^2Λ_μ+∂_μλ=0, that achieves the desired gauge condition involvingboth the gauge parameters. Taking a divergence reduces the condition to, ∂· D→∂· D+4g^2∂·Λ+∂^2λ=0. On substituting for λ in Eq. <ref> one gets, (∂· B)_μ+D_μ→(∂· B)_μ+D_μ-1/∂^2∂_μ∂· D+(∂^2+4g^2)(Λ_μ-1/∂^2∂_μ∂·Λ)=0. Effectively, we have a transverse gauge-fixing function f_μ=(∂· B)_μ+D_μ-1/∂^2∂_μ∂· D, that requires a bosonic ghost of the form in Eq. <ref> limited to the Λ_μ sector. The gauge fixing itself will now implement the massive yet gauge-invariant fermionic vector ghost action,L^B_ Ghost=-1/2c_μ(1+4g^2/∂^2)(∂^2η^μν-∂^μ∂^ν)c_ν, that replaces Eq. <ref>. Interestingly, the non-local nature of the initial Lagrangian appears in the ghost sector, although the auxiliary 1-form had been introduced. The vector ghost sector still harbors the redundancy of Eq. <ref> requiring two sets of ghost-of-ghost fields as before. Further, the gauge parameter in Eq. <ref> has the form, Λ_μ=1/∂^2+4g^2[(∂· B)_μ+D_μ-1/∂^2∂_μ∂· D], which is local yet transverse: ∂·Λ=0. Then from Eq. <ref> the auxiliary 1-form sector becomes dissociated as, ∂· D→∂· D+∂^2λ=0. Therefore the gauge condition d=∂· D=0 can still be implemented independently, requiring scalar (anti-)ghosts of Eq. <ref>. 
Thus the number of (anti-)ghost components remains the same as that corresponding to the gauge choice of (∂· B)_μ=0, leaving out the correct number of physical degrees of freedom.*The lack of gauge symmetry in the final Lagrangian is again compensated by an extensive nilpotent (anti-)BRST symmetry <cit.>. We list here the set of BRST transformations that leave the action in Eq. <ref> invariant, δ B_μν=∂_μ c_ν-∂_ν c_μ,δ D_μ=4g^2c_μ+∂_μ g, δc_μ=-ξ(∂· B)_μ+∂_μ d,δ c_μ=∂_μ e, δd=-η/2∂·c,δ d=0, δe=-η/2∂· c,δ e=0, δg=-ε∂· D,δ g=0, where δ^2=0 due to nilpotency. A similar set of complementing anti-BRST transformations also exists. As mentioned before, invariance under these transformations ensures quantization up to all orders <cit.>. Since all the (anti-)ghosts decouple from the dynamical fields in this Abelian case, the physically relevant dynamics no longer contain any negative-norm states. Subsequently, the 2-form quantum propagator can directly be obtained through functional derivatives as, K^μν,αβ(x,y)=-i/ Z[J_μν]δ^2 Z[J_μν]/δ J_μν(x)δ J_αβ(y)|_J_μν=0,with all the properties intact, including the gauge conditions obeyed by the propagator in Eq. <ref>.§ SUMMARY AND CONCLUSIONSThe present work analyzed the non-local 2-form model with gauge-invariant massive dispersion and its quantization. The non-locality does not hamper the dynamics of the system as the classical equations of motion turn out to be well-behaved depicting 3 massive degrees of freedom. Given the full gauge, invariance is intact due to the non-local mass term, the latter can be carried through the canonical analysis as proper gauge fixing leads to local dynamics and a sensible Dirac bracket structure. On adopting the covariant quantization approach, though the system housed a spurious mass-less mode under the covariant gauge, it dissociates from the physical vector space. Under the non-covariant gauge condition, only the massive mode appeared which is in agreement with the Dirac brackets. The gauge invariance of this scheme led to the same physical spectrum for both gauge choices. Finally, the path-integral quantization required the removal of the non-locality in terms of an auxiliary 1-form given the off-shell nature of the approach. Subsequently, the ghosts dissociated properly in the covariant gauge, leaving out the correct physical mode. A set of BRST transformations is given that replaced the gauge redundancy.*The present model is interesting as it is dual to the non-local 1-form model which in turn leads to the Stückelberg system. This is a generalization of the duality between free 2-form and free scalar field in 3+1 dimensions, which serves as the massless limit of the present case. On the same footing, the generalized Stückelberg model of Eq. <ref> is dual to the standard Stückelberg theory of Eq. <ref>. Usually, such dualities enhance the understanding of an applicable system already studied under one of the dual theories. In particular, a dual theory having a larger redundancy (since the number of degrees of freedom is the same) may lead to interesting possibilities based on how the redundancy is fixed. Given the Stückelberg model is well-understood <cit.>, these `extended duals' are also highly expected to be sensible theories. In particular, we hope to study the renormalization of the non-local 2-form model in the near future. 
This model deserves further study in the presence of fermion interactions given the B∧ F model yields the London equations <cit.> by mediating the short-range interaction.*The non-local 1-form model has recently been used to demonstrate early anisotropic deceleration and late time acceleration of the Universe <cit.>. This belongs to a larger class of cosmological models that are generalizations of Proca theory <cit.> and beyond <cit.>. Such models provide a larger range of parameters than that in scalar cosmology, with the ghosts decoupling. The present model is a valid candidate for such applications since it corresponds to Proca-like dynamics. Apart from having the same 3 massive degrees of freedom, a direct way to see this correspondence is to extend the Lagrangian of Eq. <ref> as, L_B→1/2∂_μφ(1+M^2/∂^2)^-1∂^μφ→1/2∂_μφ∂^μφ-Mφ∂· K+1/2K_μ(∂^2+M^2)K^μ, where M^2=4g^2. The on-shell elimination of φ now makes K_μ obey, (∂^2+M^2)K^μ=0, ∂· K=0, which is essentially the Proca dynamics. This is distinct from the usual massive Kalb-Ramond field, which is a combination of axion and Proca fields <cit.>. The current model may serve as a fruitful dark matter candidate, in addition to possibly extending the already known role of the Kalb-Ramond field during inflation <cit.>, given the short-range gauge-invariant interaction. We plan to study these aspects in the near future. The author greatly appreciates valuable input from Dr. Vivek M. Vyas during the inception of this work. This work enjoys the support of Mahidol University. | http://arxiv.org/abs/2310.18272v1 | {
"authors": [
"Kumar Abhinav"
],
"categories": [
"hep-th"
],
"primary_category": "hep-th",
"published": "20231027165854",
"title": "A Gauge-Invariant Massive 2-form Model and its Quantization"
} |
This paper has been accepted for publication in the proceedings of the IEEE International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD). © 2023 IEEE. Personal use of this material is permitted. However, permission from IEEE must be obtained for all other uses, whether in current or future media. This includes reprinting/republishing this material for advertising or promotional purposes, creating new collective works, resale or redistribution to servers or lists, or the reuse of any copyrighted component of this work in other works.
Edge AI Inference in Heterogeneous Constrained Computing: Feasibility and Opportunities
Roberto Morabito University of Helsinki [email protected] Mallik Tatipamula Ericsson [email protected] Sasu Tarkoma University of Helsinki [email protected] Mung Chiang Purdue University [email protected]
The network edge's role in Artificial Intelligence (AI) inference processing is rapidly expanding, driven by a plethora of applications seeking computational advantages. These applications strive for data-driven efficiency, leveraging robust AI capabilities and prioritizing real-time responsiveness. However, as demand grows, so does system complexity. The proliferation of AI inference accelerators showcases innovation but also underscores challenges, particularly the varied software and hardware configurations of these devices. This diversity, while advantageous for certain tasks, introduces hurdles in device integration and coordination. In this paper, our objectives are three-fold. Firstly, we outline the requirements and components of a framework that accommodates hardware diversity. Next, we assess the impact of device heterogeneity on AI inference performance, identifying strategies to optimize outcomes without compromising service quality. Lastly, we shed light on the prevailing challenges and opportunities in this domain, offering insights for both the research community and industry stakeholders. § INTRODUCTION Over the past decade, Artificial Intelligence (AI) has deeply transformed the landscape of computer technologies, influencing several business verticals, including industrial automation, automotive industry, eXtended Reality (XR), unmanned aerial vehicles (UAVs), and healthcare <cit.>. Integrating AI capabilities into these segments often involves sophisticated machine learning (ML) algorithms, like deep learning. Such algorithms might necessitate dedicated hardware platforms for both training and inference tasks. The efficient execution of AI inference is crucial for applications demanding strict real-time AI processing and maintaining a specified quality of service (QoS) <cit.>. The convergence of an increasing demand for high performance and the prominence of Edge AI has catalyzed the advancement of cutting-edge AI-ready hardware and software solutions <cit.>. This momentum is evident in the escalated efforts of chipset manufacturers and software developers, who now prioritize customized solutions to harness the full potential of Edge AI. For instance, there has been a marked rise in the development of AI accelerator chipsets—such as GPUs, VPUs, and TPUs—explicitly crafted for Edge AI systems and inference tasks <cit.>.
This growing interest in the industry has naturally spurred the research community to delve deeper, resulting in a surge of research contributions to the domain and making the literature on Edge AI inference expand rapidly.Numerous studies have probed the implications of hardware heterogeneity in single-device systems <cit.>. Others have explored scenarios where AI models are split, and individual AI inference tasks are performed across distributed edge nodes in a multi-node collaborative approach <cit.>. Our work centers on single-node inference. Although both single-node and multi-node collaborative inferences possess their unique advantages and challenges, our preference for the former largely stems from the complications it avoids compared to the model splitting option. These complications encompass unsupported neural network operators due to inconsistencies in hardware and software across different nodes, added overhead in communication and data exchange, and pressing security concerns. Moreover, while many existing solutions lean on theoretical frameworks promising robust optimality <cit.>, the practicalities and resource limitations of edge nodes in real deployment, exemplified by sophisticated networked systems like vehicular networks <cit.>, drive our emphasis on single-node inference, ensuring its viability across varied deployment contexts.In this respect, as AI continues to integrate deeper into evolved distributed edge systems, we can anticipate a diverse range of AI accelerator chipsets (e.g., GPU, TPU, VPU) jointly powering future services (Fig. <ref>). This variety imposes additional challenges, encompassing coordination mechanisms between devices <cit.> and ensuring that Edge AI inference services are uniformly provisioned and managed across clusters of devices with varied capabilities. Within such deployments, an end-device seeking Edge AI inference (e.g., applying an object detection algorithm to a video stream) should effortlessly find the desired algorithm within the edge cluster, regardless of the AI accelerator that will execute it. Overcoming this challenge necessitates hardware-agnostic AI service discovery mechanisms. Moreover, the current absence of standardized interoperability among different AI-enabled edge devices, attributed to diverse software tools and APIs, accentuates the need for a unified API to facilitate smooth inter-device communication. Lastly, it is paramount to ensure optimal resource management in the cluster and the fulfillment of QoS requirements. In this context, QoS entails the entire duration required for an AI request from an end-device to be processed by the Edge AI cluster. This is a demand posed by AI applications. Consequently, offering multiple AI services necessitates strategies for task allocation and orchestration. These strategies should account for computational limits of edge devices, variations in network latency, and specific QoS timings. 
Additionally, they must consider the unique aspects of the AI application being utilized and how hardware diversity influences AI performance.Given the complex requirements of these scenarios, the remainder of this paper aims to achieve three primary objectives: (i) to introduce the foundational components necessary for a framework to comply with such requirements, (ii) to assess the impact of device heterogeneity on AI inference task outcomes, and (iii) to spotlight key research and developmental challenges, thereby charting a path forward for both academia and industry.§ AN HARDWARE-AGNOSTIC AI INFERENCE PROVISIONINGIn this section, we first present the hardware resources utilized for developing our proof of concept, and then the main components of the software framework developed to satisfy the requirements mentioned in Section <ref>. §.§ Edge AI Cluster: HardwareIn our experimental testbed setup (Fig. <ref>), we introduce edge heterogeneity by using three distinct AI accelerators: (i) Intel Movidius Myriad X VPU, (ii) Google Edge TPU, and (iii) NVIDIA 128-core Maxwell GPU. The three AI accelerator chipsets are respectively hosted in three different Single-Board Computers (SBCs): (i) UP Squared AI Edge X, (ii) Coral Dev Board, and (iii) Jetson Nano. The three edge SBCs have heterogeneous features also from CPU, RAM, and storage perspective. In the role of end-devices, we use four Raspberry Pi 3 Model B.In the context of our experiments, the devices communicate with each other through a controlled wireless network. Since all of our devices in the Edge AI testbed are placed in the same location, we need to emulate a realistic edge system deployment and so the network latency between devices.To determine which distribution to use, we measure latency between a 5G device and a edge server over a 5G commercial network and find the best-fitting distributions to experimental latency values. We observe the best fit of a Stable distribution with parameters of shape 1.6878 and scale 0.0980 to communication latency. The average network latency was found to be 13.405, with a standard deviation of 16.065, indicating the variability in latency measurements. §.§ Edge AI Cluster: Software The software architecture of each node in the Edge AI Cluster (Fig. <ref>) is characterized by three main components and several sub-modules. It is designed to meet the requirements of building an abstraction layer that: (i) enables interoperability between different AI-enabled devices, (ii) allows platform-agnostic service discovery and provisioning of AI inference services, and (iii) supports seamless service orchestration and execution migration capabilities.Consul Agent. Consul <cit.> is a versatile service networking solution that ensures secure network connectivity among services deployed in various runtimes. It offers multiple control plane solutions, such as service configuration, service discovery, and service mesh. A service mesh is typically composed of a control plane and a data plane. The control plane is responsible for maintaining a central registry that tracks all services and their corresponding network locations. In our case, the control plane enables communication between the AI Model Management and AI Inference Management services distributed among the edge devices, and enforces rules and other operational aspects in the service mesh interactions. The control plane uses a consensus protocol for consistency and a gossip protocol to manage membership and broadcast messages to the cluster <cit.>. 
The data plane oversees communication between services according to their specific design (e.g., through REST API). Upon considering an extremely high-frequency gossip interval (0.015ms), we have empirically estimated that the maximum bandwidth required for each Edge AI node to perform all the necessary coordination mechanisms is 7291.7 kbps per node. The Consul Agent is the core process of Consul and, in our deployment, runs on every cluster node and end-device. End-devices use Consul for AI Inference service discovery. By performing DNS queries, end-devices can easily discover the AI inference services provided by the Edge AI Cluster. For instance, an end-device seeking an object detection (objd) service can use the DNS server directly via name lookups (i.e., ). The query automatically searches for nodes offering the objd service and returns a list of healthy nodes capable of providing that service at that moment.AI Model Management. This component is responsible for streamlining the process of downloading, converting, and storing various AI models on individual edge nodes. Due to the diverse AI chipsets in our testbed, each hardware unit possesses a unique instruction set architecture, necessitating a platform-specific model compiler to leverage AI-optimized hardware. In simple terms, executing AI inference tasks on these devices requires a distinct AI model format for each platform, which can only be produced after a platform-specific conversion. Transitioning from a model suitable for cloud computing to one compatible with these devices is often a complex and challenging process, as most AI models are not readily available in these newer and less common formats. Additionally, the conversion process must consider the model's data size, hardware processing needs, and overall size and complexity due to the computing limitations of the devices in use. In our scenario, utilizing a particular model across our cluster necessitates converting the model for each target platform <cit.>. The Model Converter was created to simplify this conversion process by automating various steps and intelligently determining specific conversion parameters. Our software implementation is fully compatible with commonly used models.AI Inference Management. This component leverages not only the specific features of Consul but also incorporates custom implementations developed in the context of this work. It comprises the AI Services Registry and the Provisioning Agent, which include several sub-components.The AI Services Registry maintains a list of locally available AI inference services on each edge node. With Consul, services can be defined in a configuration file or added at runtime through dedicated HTTP APIs. In our setup, AI inference services are exposed to the AI Service Registry via the Provisioning Agent, which uses an underlying micro-web framework and a REST API for managing the AI inference services lifecycle tasks (e.g., activation and deactivation of the service). The AI Service Controller serves as the primary interface for the underlying AI Accelerator, such as TPU, GPU, or VPU. It oversees the entire lifecycle of an inference process, tailored to the specifics and requirements of both the underlying hardware and software platforms. This encompasses preparing AI models and data for inference, initiating and executing the inference process on the AI accelerator, processing the results as necessary, addressing hardware-specific issues, and deallocating resources after the inference is complete. 
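To make this lifecycle concrete, the minimal Python sketch below shows one way such a controller could be hidden behind a common interface; the class and method names are illustrative assumptions rather than the framework's actual code, and every accelerator-specific step is left as a stub to be filled in per platform.

```python
# Hypothetical, simplified interface for a platform-specific AI Service Controller.
# Names and structure are illustrative; they do not reproduce the authors' code.
from abc import ABC, abstractmethod
import time


class AIServiceController(ABC):
    """Wraps the lifecycle of one inference service on a given AI accelerator."""

    def __init__(self, model_path: str):
        self.model_path = model_path

    @abstractmethod
    def load_model(self):
        """Prepare the platform-specific model (done once, before serving requests)."""

    @abstractmethod
    def preprocess(self, frame):
        """CPU-side preparation of incoming data into the model's input format."""

    @abstractmethod
    def infer(self, blob):
        """Run the inference on the underlying accelerator (GPU, TPU, or VPU)."""

    @abstractmethod
    def postprocess(self, raw_output):
        """CPU-side decoding of the raw results (e.g., boxes, labels, scores)."""

    @abstractmethod
    def release(self):
        """Deallocate accelerator resources once the service is stopped."""

    def run_once(self, frame):
        """Serve one request and report its latency to the Performance Profiler."""
        start = time.monotonic()
        detections = self.postprocess(self.infer(self.preprocess(frame)))
        latency_ms = (time.monotonic() - start) * 1000.0
        return detections, latency_ms
```

A device-specific subclass (one each for the Jetson Nano, Coral Dev Board, and UP Squared) would fill in these stubs with the corresponding runtime calls, while the Provisioning Agent only ever interacts with the common interface.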
The implementation of the AI Service Controller is contingent upon the particular AI Accelerator embedded in each edge device. This means, for instance, that the Jetson Nano implementation would vary from those of the Coral Dev Board and UP Squared AI Edge X. Optimal resource allocation is crucial for successfully fulfilling the QoS demands of AI-processed applications in most use cases mentioned in the introduction. Given the hardware constraints of the platforms, it is essential to: (i) profile the performance of individual AI inference instances, (ii) monitor the overall system performance of each edge platform, and (iii) monitor the network performance (i.e., latency) of each edge platform concerning the end-devices to be served and the other edge nodes. The Performance Profiler and Health Checks serve as the foundation for allocating AI inference execution to the healthiest cluster node or offloading it to a healthier node if QoS can no longer be satisfied. These two components address the limited resources of edge devices heuristically by monitoring key metrics, logs, and traces to provide an overview of each device's health status and its running services. Defining Health Checks allows for determining when nodes and services are considered healthy or not, and specific thresholds can be set for the monitored metrics increasing system resiliency. In our deployment, we defined multiple health checks to outline real-time device status concerning AI inference services, edge platform, and network. Platforms and services health can be set to three possible states: pass, warn, and critical. When a node's system-level health check is set to critical, the device is temporarily considered incapable of processing additional AI inference instances. Similarly, if an AI inference service's health check is set to critical, it indicates that the executing device is so loaded that it cannot satisfy the service's inference latency requirement. In such cases, the Orchestration Engine improves device resource utilization and performance of instances with violated QoS by offloading the execution of AI inference services to a healthier device in near real-time. The engine relies on local information provided by the Performance Profiler and utilizes the AI Service Controller for orchestrating and offloading operations. Platform and application health monitoring parameters vary in granularity depending on underlying hardware capabilities. As platform performance metrics, such as CPU utilization (%) and RAM utilization (%), may impact inference application quantitative performance (e.g., latency, root-mean-square error), it is crucial to empirically understand how the heterogeneous structure of our edge cluster directly affects inference performance. In the upcoming section, we first provide additional details on how edge nodes are selected for AI Inference provisioning and offloading. Subsequently, in Section <ref>, we explore this aspect in greater detail through a series of experiments and describe how the Provisioning Agent modules use these empirical insights to design tasks' offloading orchestration rules.§ NODE SELECTION FOR AI INFERENCE PROVISIONING AND OFFLOADINGThe Performance Profiler of each device computes in real-time the inference latency of each object detection (OD) running instance, denoted as InfLat_i, and the average inference latency, AvgInfLat, over all OD instances. 
Mathematically, InfLat_i represents the time taken to process the i^th OD instance, and AvgInfLat is given by: AvgInfLat = (∑_i=1^n InfLat_i)/n. It also keeps track of the QoS demanded by each AI application and transfers all this information to the Health Checks component. This component then uses the following criteria to evaluate the state of each application and the overall system based on their respective overall latency: State(L) = { pass, if L < 75%QoS; warning, if 75%QoS < L < 90%QoS; critical, if L > 90%QoS }, where L is InfLat_i for individual applications and AvgInfLat for the overall system. A percentage of QoS represents the proportion of the maximum allowable time taken to process an AI inference request. For instance, 75%QoS means the process is completed in 75% of the maximum allowable time. The Orchestration Engine of each device processing AI inference tasks operates according to the inputs provided by the Performance Profiler and the status of the Health Checks, with the aim of promptly acting in case of QoS requirements violation. By following this heuristic approach, we can ensure that as soon as the QoS requirements cannot be met—as a result of overload caused by concurrent processing tasks or increased network latency—either the application or system health check will be set to critical. While the entire node is in a critical state, it is temporarily unreachable for processing new incoming requests and the execution of the OD instance with the highest InfLat is offloaded to a healthier node of the cluster. If it is the QoS of a single application that is violated, then that particular application will be offloaded to a healthier node of the cluster. The seamless migration between nodes is performed by the Orchestration Engine, which transfers specific metadata that includes information on the AI model to be used and the video streaming source. The choice of the node to which the task execution is offloaded takes into account the network latency between edge devices and between end-devices and edge devices. Each edge device keeps track of the network latency status among all the different nodes of the system through a network latency matrix (NLM). The ranking of the network latency status between two nodes (e.g., edge node X ↔ end-device Y or edge node X ↔ edge node Z) is set to pass, warning, or critical considering not only the latest instantaneous value of the latency, but also its most recent time-series evolution. In particular, in order to rank nodes based on network latency while taking into account both recent and longer-term performance, a methodology employing Exponential Moving Averages (EMAs) for 1-minute, 5-minute, and 15-minute intervals is utilized.
Nodes are then ranked according to their composite scores, ensuring efficient resource allocation and optimal system performance in varying network conditions. It is worth noting that while there are alternative methods to EMA for time-series analysis <cit.>, we opted for EMA at this stage due to its straightforward implementation and minimal resource overhead, ensuring the methodology does not significantly add to the resource demand on each node. Fig. <ref> illustrates the process of discovery, provisioning, and orchestration offloading of an Edge AI-based object detection service in a real-deployment environment. The system allocates the service execution to the healthiest edge device of the cluster (in the example, the UP Squared). As soon as the edge device can no longer guarantee the requested QoS, the AI Inference execution is seamlessly migrated to another edge node (regardless of the underlying AI chipset in use). Algorithm <ref> provides a step-by-step overview of the process, highlighting how edge nodes are initially selected for provisioning and how the execution offload is performed. Please note that the text in blue refers to the mechanism of node selection for AI Inference provisioning, while the text in green to the AI Inference execution offload. § HETEROGENEITY IMPACT ON THE AI INFERENCE PERFORMANCE For understanding how hardware heterogeneity impacts the performance of AI inference services, we should first describe what is the typical lifecycle of this kind of instances. Regardless of the edge platform in which the task is executed and the algorithm in use, the overall AI inference processing sequentially occurs between CPU and AI Accelerator (i.e., GPU, TPU, VPU) in sequence as shown in Fig. <ref>.The AI model loading is typically a resource-demanding task (especially for devices with limited CPU and RAM resources), therefore it is a good practice to design AI inference applications in a way that Step 0 is executed only once. Taking this for granted (our implementation complies with this requirement), it is possible to observe how input data is inferenced only after a sequence of processing stages that are, in most of the cases, bounced between the CPU and the AI accelerator chipset in use. There is however the exception of the case in which also the inference-related steps are demanded to the CPU. However, this scenario is for now out of the scope of this study.In our experiments, the AI application executed by the edge devices encompasses the use of the MobileNetV1 algorithm performed over a video content streamed by each of the end-devices. The model input is a blob that consists of a single image of (1, 300, 300, 3), where the tuple represents batch size, image width, image height, color channels. The choice of a specific image width and height is extremely important. When the base model is first trained with such parameters and then converted for being executed in each targeted platform, the images feeding the model's input at runtime must comply with the choice of such parameters. 
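As an illustration of the CPU-side pre-processing stage that produces the model's input blob, the hedged sketch below shows how an incoming frame could be reshaped into the (1, 300, 300, 3) tensor expected by the converted model; the OpenCV/NumPy pipeline and the uint8 output type are assumptions, since the exact normalization and data type depend on how the model was converted for each accelerator.

```python
# Illustrative pre-processing sketch; assumes an OpenCV/NumPy pipeline.
import cv2
import numpy as np

MODEL_W, MODEL_H = 300, 300  # input size the base model was trained with


def frame_to_blob(frame_bgr: np.ndarray) -> np.ndarray:
    """Turn an incoming video frame into a (1, 300, 300, 3) blob for the model."""
    resized = cv2.resize(frame_bgr, (MODEL_W, MODEL_H))
    rgb = cv2.cvtColor(resized, cv2.COLOR_BGR2RGB)  # assume the model expects RGB order
    blob = np.expand_dims(rgb, axis=0)              # add the batch dimension
    return blob.astype(np.uint8)                    # or float32 with scaling, if the converted model requires it
```

An analogous post-processing routine would then map the raw output tensors back to bounding boxes, class labels, and confidence scores on the CPU.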
The scope of our empirical analysis is threefold:
* We aim to assess the scaling capabilities of the different AI inference accelerators when processing multiple instances in parallel.
* We want to estimate the impact of pre-processing (Step 1) and post-processing (Step 5) tasks on the overall inference latency.
* We want to understand to what extent changing the input parameters of the AI-processed application would affect the inference latency performance.
To accomplish this, we defined two additional experimental scenario requirements. First, we assume the edge devices to serve a growing number of OD instances. Second, we provision the AI model with images whose input frame size is larger than the frame size used for training the model (i.e., 300x300). This assumption is made to comply with the unpredictability of the application characteristics (video streaming in this specific case) that must be handled. As for the choice of the QoS requirement, we selected an arbitrary value of 150ms, although this can vary from case to case and be therefore more or less stringent <cit.>. In this study, we overlook the qualitative performance (e.g., mean average precision) of the OD, defined as the ratio between the number of correctly recognized objects and that of total objects in a video frame <cit.>. In the context of our experiments, Fig. <ref> displays the breakdown of AI inference latency over two distinct components. Specifically, the graphs illustrate the component executed by the CPU, encompassing all frame processing-related tasks (Steps 1 and 5 in Fig. <ref>). We will refer to this as the CPU component. In contrast, the AI Accelerator component encompasses AI inference processing-related computations (Steps 2, 3, and 4 in Fig. <ref>). For our analysis, we evaluated performance metrics for each device under test. These metrics are based on varying the number of OD instances running simultaneously, alongside two different input frame sizes (600x600 and 1200x1200) of the content streamed by the end-devices. Upon observation, it is evident that for the same input frame size and number of OD instances, different boards yield varied AI Accelerator and CPU performance metrics. As a specific example, with an input frame size of 600x600 and a single OD instance, the UP Squared's AI Accelerator processes the task in roughly 62.65 ms. In comparison, the Jetson Nano requires about 79.59 ms, and the Coral Dev Board finishes in approximately 71.81 ms. There is also a discernible variance in the performance of the CPU component across these boards. An increase in the number of OD instances predictably augments the processing duration for both the AI Accelerator and CPU components across all devices. Moreover, enlarging the input frame size amplifies the processing duration. Notably, this augmentation appears more pronounced for the CPU component, suggesting that the CPU, relative to the AI accelerator, is more susceptible to escalations in input frame size. In terms of scalability under increasing computational demands, the system's overall latency inflates by about 45-60% on average for all three scenarios when the frame size is doubled. Analyzing individual devices, the UP Squared consistently exhibits a latency growth trend comparable to the general increment, with an average escalation of 40-50% as the frame size doubles. The Jetson Nano device, on the other hand, displays varied responsiveness to frame size increases. The rise in latency ranges between 35-65%.
The most substantial increase is observed during the three concurrent running instances (#3 OD Instance). Lastly, the Coral Dev Board is notably influenced by enlargements in frame size. Its latency inflates on average by 50-70%. This device, similarly to the Jetson Nano, is particularly sensitive during the execution of three concurrent instances. These findings underscore the intricate interplay between the input parameters of AI-processed applications and the resultant AI inference latency. The evident heterogeneity among devices further emphasizes this point, as each device responds uniquely to identical workloads, underscoring divergent performance dynamics in both the AI Accelerator and CPU components. The disparities observed accentuate the necessity of introducing nuanced node allocation strategies tailored to these heterogeneous device behaviors. For example, we envision the need for node allocation strategies that take into account the different performance characteristics of CPU (pre- and post-processing capabilities) and AI accelerators (inference capabilities) in the Edge AI nodes. This could enable a more fine-grained allocation of tasks, considering the unique performance features of different devices (such as GPUs, VPUs, and TPUs) and their ability to handle multiple AI inference tasks concurrently. To follow this approach, the Performance Profiler may assign weights to each Edge AI node: W_cpu for the CPU Weight, representing the node's CPU capabilities for handling pre- and post-processing tasks (Steps 1 and 5); W_ai for the AI Accelerator Weight, representing the node's AI accelerator capabilities for handling inference tasks (Steps 2-4); and W_nl for the Network Latency Weight, factored in when allocating tasks following the NLM approach.The combined weight, W_combined, for each Edge AI node is given by: W_combined = α× W_cpu + β× W_ai + γ× W_nlwhere α, β, and γ are assigned factors for the respective weights. The W_combined is updated whenever the execution of an application is scaled up or down, reflecting the current workload and resource availability of each device. Tasks (video streams) are assigned to nodes based on their combined weights and application requirements. For new upcoming tasks to be processed, the framework parses the workload features of the incoming request (e.g., resolution for the video streaming and OD case) and allocates the execution of the new task based on the pre- and post-processing requirements, as well as inference requirements needed to handle the request. The system dynamically adapts to changes in the workload by updating the W_combined of each Edge AI node, ensuring that the most suitable nodes are selected for the current system state.Addressing these intricate challenges and effectively modeling realistic Edge AI environments is undoubtedly non-trivial. We delve deeper into this complexity and discuss potential solutions in Section <ref>.§ OPPORTUNITIES AND CHALLENGESThe potential to deploy systems that enable interoperability between heterogeneous AI-enabled devices offers significant advantages, considering the growing prevalence of such devices and the requirements of future networks. Numerous research opportunities and challenges still need to be addressed. This section aims to introduce a non-exhaustive set of the most prominent R&D efforts required to further enhance the technological landscape in this area.Our focus will be on system architecture and deployment aspects. 
Readers may also notice that the identified challenges share a common theme, fitting within the realm of orchestration and management of distributed Edge AI inference services. Balancing Load and Network Latency in AI-Enabled Heterogeneous Edge Networks. The computing environments analyzed in this work face the challenge of balancing load and network latency while adapting to the heterogeneous AI hardware landscape. Different tasks have varying computational requirements, and some tasks are better suited for specific hardware types. For instance, GPUs excel at handling tasks with a high degree of parallelism, while TPUs are more efficient for matrix operations. In different scenarios, the priority of minimizing latency or maximizing resource utilization might vary. Real-time applications, like video analytics for public safety, prioritize low latency to ensure timely responses. In contrast, batch-processing applications, such as analyzing historical data, prioritize resource utilization for efficiency. To address these varying requirements and hardware capabilities, a tunable parameter could be introduced in the system that allows adjusting the balance between load and network latency. This parameter could influence the weights assigned to the Edge AI nodes, making the system prioritize either minimizing latency or maximizing resource utilization based on the specific use case. For instance, if low latency is crucial, the tunable parameter could increase the importance of the Network Latency Weight in the combined weight calculation, while decreasing the significance of CPU and AI Accelerator Weights. Conversely, if resource utilization is more important, the parameter could increase the importance of CPU and AI Accelerator Weights, while decreasing the significance of the Network Latency Weight. In future work, exploring the impact of this tunable parameter on the performance of different applications and investigating methods for automatically adjusting it based on the specific use case, performance metrics, or hardware requirements is crucial. Lightweight techniques such as ensemble learning, incremental learning, and real-time or near real-time Pareto-based optimization could be employed to make these adjustments while considering the resource constraints of edge devices considered in this scenario <cit.>. This would enable the system to adapt to various requirements and hardware types, providing more flexibility and better performance across a range of scenarios in the context of AI-enabled heterogeneous edge networks. Leveraging Intent-Based Networking for Heterogeneous Edge AI Systems. Hardware and software heterogeneity, along with the presence of diverse AI-enabled services with different requirements, pose significant challenges to current deployments. These factors underscore the necessity for innovative mechanisms to streamline the management of such complex systems, moving away from rigid hard-coded policies towards a more adaptable and flexible approach. Intent-based networking (IBN) offers promising avenues of research for addressing these challenges and enhancing the capabilities of edge computing systems <cit.>. Using high-level intents defined with specification languages like the Autonomic System Specification Language (ASSL <cit.>) and NEtwork MOdeling (NEMO <cit.>), IBN can craft computing- and networking-aware orchestration mechanisms that dynamically adapt to system conditions, network latency, and the unique demands of AI-enabled services. 
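As a hedged illustration of how such a high-level intent could be mapped onto the tunable weighting scheme discussed above, the sketch below biases the combined-weight factors towards either low latency or high resource utilization and ranks candidate nodes accordingly; the intent names, the preset (α, β, γ) values, and the node fields are illustrative assumptions, and higher weights are taken to denote better capability (or better observed network conditions).

```python
# Hypothetical intent-to-weights mapping and node ranking; all values are illustrative.

INTENT_PRESETS = {
    # Emphasize the network-latency weight for latency-critical services.
    "minimize_latency":     {"alpha": 0.2, "beta": 0.2, "gamma": 0.6},
    # Emphasize CPU and accelerator weights for throughput/batch workloads.
    "maximize_utilization": {"alpha": 0.4, "beta": 0.5, "gamma": 0.1},
}


def combined_weight(node: dict, alpha: float, beta: float, gamma: float) -> float:
    """W_combined = alpha*W_cpu + beta*W_ai + gamma*W_nl (higher taken as better)."""
    return alpha * node["w_cpu"] + beta * node["w_ai"] + gamma * node["w_nl"]


def rank_nodes(nodes: list, intent: str) -> list:
    """Order healthy candidate nodes by their combined weight under a given intent."""
    w = INTENT_PRESETS[intent]
    healthy = [n for n in nodes if n["state"] != "critical"]
    return sorted(healthy,
                  key=lambda n: combined_weight(n, w["alpha"], w["beta"], w["gamma"]),
                  reverse=True)


# Usage example with made-up scores exposed by each node's Performance Profiler.
nodes = [
    {"name": "jetson-nano", "w_cpu": 0.5, "w_ai": 0.7, "w_nl": 0.9, "state": "pass"},
    {"name": "coral-dev",   "w_cpu": 0.4, "w_ai": 0.8, "w_nl": 0.6, "state": "warn"},
    {"name": "up-squared",  "w_cpu": 0.7, "w_ai": 0.6, "w_nl": 0.4, "state": "pass"},
]
best = rank_nodes(nodes, "minimize_latency")[0]["name"]  # -> "jetson-nano"
```

An intent-based control layer would then only need to emit, or adjust, such presets, leaving the per-node weight estimation to the Performance Profiler.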
With the rise of Large Language Models (LLMs) <cit.>, there lies potential in exploiting these advanced AI models to interpret and even generate IBN intents. Such models can understand the intricacies of application requirements and network conditions, enabling a more refined, contextual, and adaptive orchestration of services. Given their capacity to process vast amounts of information and identify patterns, LLMs can be used to optimize the translation of high-level intents into actionable network configurations, reducing manual interventions and enhancing system adaptability. Furthermore, real-time deployments in edge environments often exhibit a dynamic nature with nodes handling diverse applications over varying time spans. IBN's adaptability, enriched by LLMs, can ensure the seamless updating of configurations and policies based on evolving application types and demands. This recursive adaptability ensures sustained optimal performance, even amidst application and service heterogeneity. By synergizing technologies like NEMO, ASSL, and LLMs, IBN can effectively confront the challenges posed by hardware and software heterogeneity, variable network latency, and the diverse necessities of AI-enabled services, heralding a new era of responsive and intelligent edge computing. Dynamic Code Generation in Heterogeneous Edge AI Systems with Large Language Models. In Edge AI environments characterized by a mosaic of hardware platforms such as TPUs, GPUs, and VPUs, there arises an intrinsic challenge: how to seamlessly integrate new applications tailored for diverse underlying platforms. Large Language Models (LLMs) present not just a powerful capability in natural language processing but also a promise in dynamic and automatic code generation <cit.>. By harnessing this unique capability, we can construct custom solutions in real-time that cater to the ever-changing needs of edge applications. However, the current feasibility of such solutions is heavily predicated on the assumption of using LLMs hosted on cloud services through dedicated APIs. The computational demands of LLMs are significant. While there are attempts to run LLMs on more constrained devices such as smartphones, we are still far from a level that could ensure performing code generation locally on constrained edge nodes. In a standalone edge device cluster, the inference time of these models could indeed be prohibitively long, making real-time code generation challenging if not unfeasible. This underpins the essential role of cloud support in actualizing the benefits of LLMs in edge scenarios. In scenarios where application requirements and system needs constantly evolve, LLMs, supported by these dedicated cloud-based APIs, could potentially generate code segments or configuration scripts that bridge software-hardware gaps, optimize ongoing processes, or even manifest novel functionalities tailored for each type of hardware. Such an approach radically redefines flexibility. Instead of relying on pre-defined solutions or configurations that might not fully align with emergent requirements, the system can generate its own new solutions, tailored perfectly for the context through tailored code sandboxing and trustable SOTA (Software over the air) update mechanisms. This potential goes beyond mere reactionary fixes; it encompasses the proactive, intelligent generation of solutions. These capabilities are not just about meeting immediate needs but also about forecasting future challenges based on observed patterns. 
Coupled with the ability of LLMs to interpret and act on high-level intents in IBN, this offers a unique opportunity to elevate the efficiency, adaptability, and intelligence of heterogeneous Edge AI systems. However, this novel paradigm also introduces its set of challenges. Ensuring the reliability of dynamically generated code <cit.>, maintaining security and integrity during real-time adaptations, and verifying the consistency and correctness of such on-the-fly solutions emerge as critical research avenues. § CONCLUSIONDesigning edge computing systems that can effectively integrate and manage heterogeneous AI-enabled nodes is crucial for unlocking the full potential of AI Inference capabilities across a diverse set of applications and scenarios. By considering the requirements and challenges outlined in this paper, researchers and practitioners can contribute to the ongoing development of innovative solutions that address the complexities of interoperability, performance optimization, and resource management in these systems. This, in turn, will help pave the way for the widespread adoption and successful implementation of AI-enabled edge computing solutions in various industries and use cases.IEEEtran | http://arxiv.org/abs/2311.03375v1 | {
"authors": [
"Roberto Morabito",
"Mallik Tatipamula",
"Sasu Tarkoma",
"Mung Chiang"
],
"categories": [
"cs.AR",
"cs.AI",
"cs.DC",
"cs.NI"
],
"primary_category": "cs.AR",
"published": "20231027164659",
"title": "Edge AI Inference in Heterogeneous Constrained Computing: Feasibility and Opportunities"
} |
Department of Physics, Indian Institute of Technology Roorkee, Roorkee 247667, India Corresponding author: [email protected] Department of Physics, Indian Institute of Technology Roorkee, Roorkee 247667, IndiaWe have done a systematic no-core shell-model study of ^20-23Na isotopes. The low-energy spectra of these sodium isotopes consisting of natural and un-natural parity states were reported, considering three realistic interactions: inside nonlocal outside Yukawa (INOY), charge-dependent Bonn 2000 (CDB2K), and the chiral next-to-next-to-next-to-leading order (N^3LO). We also present the mirror energy differences in the low-energy spectra of |T_z| = 1/2 mirror pair (^21Na - ^21Ne). Apart from the energy spectra, we have also reported the electromagnetic transition strengths and moments. Finally, considering all three realistic interactions, we report the point-proton radii and neutron skin thicknesses.21.60.Cs, 21.30.Fe, 21.10.Dr, 27.20.+nNuclear structure study of ^20-23Na isotopes with ab initio no-core shell-modelPraveen C. Srivastava January 14, 2024 ================================================================================§ INTRODUCTION For the last two decades, the no-core shell-model (NCSM) <cit.> became a major ab initio approach to solve nuclear many-body problems starting from realistic NN and 3N interactions. The NCSM approach is very successful in explaining nuclear structural properties of p-shell nuclei <cit.>, although it is less explored for the sd-shell nuclei mainly due to the significant increase in computational resources.Some approximations become necessary,in terms of small model spaces considered, to tackle lower sd-shell nuclei within the NCSM formalism. Like any other many-body methods, approximations employed will cause deficiencies in reproducing the correct results of some of the nuclear observables. However, understanding the nature of such deficiencies is important for improving the nuclear Hamiltonians taken into account to study nuclei of a particular region of the nuclear chart.The information of low-lying energy states together with electromagnetic properties can reveal the internal structures of atomic nuclei. And, the nuclei around the N = Z work as perfect testing grounds for different many-body methods of nuclear structure.Earlier, we explored neon chain from A = 18 to A = 24 within the NCSM formalism <cit.>. The calculated results for energy-spectra, B(M1) and magnetic moment are in good agreement with the experimental data, however, operator that depends on the long-range part of nuclear wavefunctions such as B(E2) and quadrupole-moment are far away from the experimental data. Recently, the E2 strengths between the ground and the first excited state of ^23Na and its mirror nucleus ^23Mg are measured using Coulomb excitation measurement <cit.>. The experimental E2 strengths of both nuclei are compared to the ab initio valence space in-medium similarity group (VS-IMSRG) results and standard shell model results within the sd space. A similar kind of theoretical investigation is reported in Ref. <cit.> for a range of mirror nuclei pairs across the sd-space, including |T_z| = 1/2 mirror pairs (^21Na-^21Ne) and (^23Na-^23Mg). This study shows that missing E2 strengths for VS-IMSRG are mostly isoscalar in nature. In Ref. 
<cit.>, the electric quadrupole and magnetic dipole moments of ^20-31Na isotopes are studied along with other nuclei across the sd-space using valence-space Hamiltonian constructed from ab initio VS-IMSRG and coupled cluster effective interaction (CCEI). Recently, the isospin-symmetry breaking in the mirror-energy difference (MED) of sd- and pf-shell nuclei are investigated in Ref. <cit.> within the ab initioVS-IMSRG. This study shows that while the calculated MEDs of ^20Na (1^+_2) and ^21Na (1/2^+_1) are close to the experimental MEDs, the calculated MED of ^19Na (1/2^+_1) is small compared to the experimental one.The charge radius is another fundamental property of atomic nuclei apart from low-energy spectra and electromagnetic observables, that can reveal the structure of nuclei to a great extent.In recent years, several experiments have been performed to measure the charge radii of different isotopic chains. On the other hand, the calculated charge radii with different many-body theories having error quantification procedures facilitate the comparison of theoretical and experimental results directly. Recently, the charge radii of Na isotopes have been extracted from accurate atomic calculations in Ref. <cit.>. Suzuki et. al.in Ref. <cit.> studied the development of neutron skin along the Na isotopic chain. Using shell-model, the variation of charge and matter radius along the sodium isotopic chain is reported in Ref. <cit.>. In this work, we have performed nuclear structure study of ^20-23Na isotopes within the formalism of ab initio no-core shell model using three realistic interactions, namely inside nonlocal outside Yukawa (INOY) <cit.>, charge-dependent Bonn 2000 (CDB2K) <cit.>, and the chiral next-to-next-to-next-to-leading order (N^3LO) <cit.>. We have calculated the g. s. energies, low-energy spectra comprising of both natural and un-natural parity states, electromagnetic transition strengths and moments, and also the point-proton radii of these sodium isotopes. The maximum basis space we have reached in this work is N_max = 4 for all four sodium isotopes. This work is important to check the consistency of our previous works on the neon chain <cit.> as well as other works on the lower mass sd-shell nuclei <cit.> using the NCSM formalism. This paper is organized as follows: In <ref>, the basic formalism of the NCSM approach is given. The NN interactions used in this work are briefly described in <ref>. In <ref>, the NCSM results for energy spectra, electromagnetic properties, and point-proton radii are presented. Finally, we summarize our work in <ref>.§ FORMALISM OF NO-CORE SHELL-MODELIn this approach, nucleons are considered to be non-relativistic point particles interacting via realistic NN and NN + 3N interactions. In the present work, we are using only the NN interactions for which the Hamiltonian can be written as follows <cit.>:H_A= T_rel + V = 1/A∑_i< j^A(p⃗_i - p⃗_j)^2/2m +∑_i<j^A V^NN_ij,where, T_rel is the relative kinetic energy and V^NN_ij is the NN interaction containing nuclear part as well as Coulomb part.In the NCSM approach, the solution of a non-relativistic Schrödinger equation for an A-nucleon system is found by performing a large-scale matrix diagonalization in a many-body Harmonic oscillator basis. The many-body basis is constructed by taking the Slater determinant (M-scheme) of the single-particle Harmonic oscillator orbitals having a truncation parameter N_max. 
Here, N_max denotes the number of major shells included above the minimum configuration allowed by the Pauli exclusion principle. The use of the M-scheme many-body basis along with the N_max truncation allows the smooth separation of the center of mass (CM) coordinates from the relative coordinates. Large N_max calculations are necessary to obtain converged results when dealing with hard-core potentials having strong short-range correlations. However, large N_max calculations are computationally challenging, so a renormalization technique is needed to soften such potentials. Two such renormalization procedures popularly used in NCSM calculations are the Okubo-Lee-Suzuki (OLS) <cit.> transformation and the similarity renormalization group (SRG) <cit.>. Both procedures soften the hard-core part of nuclear interactions so as to obtain converged results within a computationally tractable model space controlled by the N_max parameter. In this work, we use the OLS technique to obtain an effective Hamiltonian. To facilitate the derivation of the OLS effective Hamiltonian, the center-of-mass (c.m.) Hamiltonian, H_c.m., is added to the original Hamiltonian <ref>: H^Ω_A = H_A + H_c.m. = ∑_i = 1^A [p_i^2/2m + 1/2m Ω^2 r_i^2] + ∑_i < j^A [V_NN, ij - m Ω^2/2 A (r_i - r_j)^2]. Here, H_c.m. = T_c.m. + U_c.m., where T_c.m. and U_c.m. are the kinetic and potential terms for the center-of-mass coordinate. Since H_A is translationally invariant, its intrinsic properties do not change once H_c.m. is added to it. In order to develop effective interactions, the infinite HO basis is first separated into P- and Q (=1-P) spaces by using projection operators. The P-space contains all the HO orbitals allowed by the truncation parameter N_max, and the Q-space contains all the excluded HO orbitals. The effective Hamiltonian is then constructed by applying the OLS unitary transformation to H^Ω_A given in <ref>. The OLS transformation induces up to A-body terms even if we start with the NN Hamiltonian given in <ref>. In this work, we consider only up to 2-body terms, as their contributions are expected to dominate over the other many-body terms. In the final step, H_c.m. is subtracted, and the Lawson projection term, β (H_c.m. - 3/2ħΩ) <cit.>, is added to the Hamiltonian. In this work, the value of β is taken to be 10, and the Lawson projection term shifts the energy levels arising from excitations of the CM. So, the final form of the Hamiltonian can be written as: H_A,eff^Ω = P{∑_i<j^A[ (p⃗_i - p⃗_j)^2/2mA + m Ω^2/2A(r⃗_i - r⃗_j)^2 ] + ∑_i<j^A[ V^NN_ij - m Ω^2/2A(r⃗_i - r⃗_j)^2]_eff + β(H_c.m. - 3/2ħΩ)}P. The effective Hamiltonian of <ref> depends on the number of nucleons (A), the HO frequency (Ω), and the number of HO orbitals in the P-space controlled by the truncation parameter N_max. In order to minimize the effect of the neglected many-body terms in developing the effective Hamiltonian, a large model space is needed in the NCSM. In the limit N_max→∞, the effective Hamiltonian of <ref> reduces to the bare Hamiltonian of <ref>, and the NCSM results with the effective Hamiltonian reduce to the exact solution. § REALISTIC NN INTERACTIONS In this work, we have performed NCSM calculations using three realistic NN interactions, namely the inside nonlocal outside Yukawa (INOY) <cit.>, the charge-dependent Bonn NN interaction (CDB2K) <cit.>, and the chiral N^3LO <cit.>.
The inside nonlocal outside Yukawa (INOY) <cit.> is a phenomenological potential having a nonlocal part at a short distance (≤ 1 fm) and a Yukawa tail similar to local Argonne v18 <cit.> potential at a large distance (≥ 3 fm). A smooth cutoff in the transition region (1-3 fm) connects the local to the nonlocal part, and the range of locality and nonlocality can be controlled explicitly by adjusting the parameters. It can represented as follows:V^full_ll'(r,r')= W_ll'(r,r') + δ (r-r') F^cut_ll'(r) V^Yukawa_ll'(r)where the cut-off function is defined asF^cut_ll'(r) =1- e^-[α_ll'(r-R_ll')]^2for r≥ R_ll' ,0 for r≤ R_ll' , and W_ll^'(r,r^') and V_ll^'^Yukawa(r) are the phenomenological nonlocal part and the local part of the interaction, respectively. The parameters α_ll' and R_ll' are considered to be independent of the angular momenta having values 1.0 fm^-1 and 1.0 fm, respectively. The other parameters of W_ll'(r,r') are determined by fitting NN data and the binding energy of ^3He. This interaction model breaks charge independence and charge symmetry as it is essential to reproduce all low-energy experimental parameters, including np, pp, and nn scattering lengths, to high precision. To describe a few-nucleon system reasonably well, additional 3N forces are necessary along with local NN potentials, but INOY NN interaction does not need 3N force to provide the binding energies of 3N systems. The effects of three-nucleon forces are partly included in the non-local part of this interaction. The non-local part of the NN interaction is mainly due to the internal structure of nucleons.The second interaction we employed in this work is the charge-dependent Bonn 2000 (CDB2K)<cit.>. It is a one-boson exchange NN potential that includes all the mesons π, η, ρ, and ω, the masses of which are less than the nucleon mass. However, the η-meson is dropped from the potential as it has a vanishing coupling with the nucleon. Additionally, two partial-wave dependent scalar-isoscalar σ bosons are also introduced. This charge-dependent potential reproduces accurately charge symmetry breaking (CSB) and charge independence breaking (CIB) in all partial waves with J≤4 as predicted by the Bonn full model <cit.>. In this model, three NN interactions are constructed: a proton-proton, a neutron-neutron, and a neutron-proton potential, and the differences between them are measured by CSB and CIB. These potentials fit the world proton-proton data available in the year 2000 with χ^2/datum = 1.01 and the corresponding neutron-proton data with χ^2/datum = 1.02 below 350 MeV. The CDB2K potential uses the original form of the covariant Feynman amplitudes of meson exchange without local approximation. So, the off-shell behavior of this NN potential is different from other local NN potentials.The third interaction used in our current study is a chiral NN interaction at next-to-next-to-next-to-leading order (N^3LO) <cit.> derived from the quantum chromodynamics using chiral perturbation theory (χPT). The χPT is a systematic expansion of the nuclear potential in terms of (Q/Λ_χ)^ν, where Q is a low-momentum scale or pion mass, Λ_χ ≈ 1 GeV is the chiral symmetry breaking scale and ν ≥ 0 <cit.>. For a definite order ν, there is a finite number of unique and calculable contributing terms. The long-ranged part of the nuclear interaction is associated with the exchange of pions, whereas the short-ranged part is defined in terms of contact terms. 
The charge dependence of this interaction is important for a good fit of the low-energy pp and np data. There are, in total, 29 parameters of the N^3LO potential. Three of them are low-energy constants (LECs), c_2, c_3, and c_4 that appear in the π N Lagrangians. The most important fit parameters are 24 contact terms that dominate the partial waves with L ≤ 2, and the remaining two parameters are two charge-dependent contact terms. The NN interaction at this order can be compared to high-precision phenomenological potential AV18 <cit.> in terms of the accuracy in reproducing the NN data below 290 MeV lab energy. The NCSM calculations have been performed with the pAntoine code <cit.>. § RESULTS AND DISCUSSIONS In this section, we have reported NCSM results of different nuclear observables. The NCSM calculations are computationally challenging for medium mass nuclei mainly due to the huge dimensions of the Hamiltonian matrix. In <ref>, the M-dimensions of the Hamiltonian matrices are shown for ^20-23Na isotopes corresponding to the model space from N_max = 0 to 6. We are able to reach basis size up to N_max = 4 for all four sodium isotopes (shown in blue color in <ref>)with the available computational resources. The maximum dimension we are able to reach is 1.1 × 10^9 for ^23Na corresponding to N_max = 4 model space.The first step of the NCSM calculation is to decide an optimum frequency for which different observables are calculated later on.The optimum frequency for a particular interaction is decided by plotting the ground state energies for a particular interaction corresponding to different N_max.Subsequently, the frequency corresponding to the minimum g.s. energy calculated at the highest N_max is taken as the optimum frequency for a particular interaction. In <ref>, the g.s. energies of ^21, 22Na isotopes are shown with realistic NN interactions: INOY, CDB2K and N^3LO for N_max = 2 and 4 model spaces. The optimum frequencies for INOY, CDB2K, and N^3LO for both isotopes are 20-, 16-, and 14-MeV, respectively. Similarly, the optimum frequencies for other Na-isotopes are calculated. After deciding the optimum frequency for a particular interaction, the low-energy spectra and other nuclear observables are calculated corresponding to the optimal frequency. In order to compare the results of different interactions for the low-energy spectra of a particular nucleus, we use root mean square deviation (rms) of energy spectra defined as:E_rms = √(1/N∑_i^N(E_exp^i - E_th^i)^2) Here, E_exp^i and E_th^i are, respectively, the experimental and the calculated energy for the i^th state of a particular nucleus. The N is the number of states considered for calculating the rms value, and the E_rms provides the quality of theoretical results compared to the experimental data.§.§ Natural-parity low-lying states of ^20-23Na:The top panel of <ref> shows the positive-parity low-lying spectra of ^20Na. As shown in the figure, 2^+ is the experimental ground state, which is well reproduced by all three interactions, namely INOY, CDB2K, and N^3LO. However, the calculated ground state energies are only a few keV lower than the energies of the first excited states for all three interactions. The INOY and N^3LO interactions correctly reproduce the first excited state of ^20Na though the excitation energies are 58- and 7- keV, respectively, compared to the experimental excitation energy of 596 keV. 
None of the three realistic interactions are able to reproduce the excitation spectra above the first excited state.However, the excitation energies of 4_1^+ state obtained for INOY and 3_3^+ state obtained for CDB2K are only 150 keV away from the experimental data. The rms deviation in energies for the eight lowest states are 1.146-, 0.720-, and 0.941- MeV, for INOY, CDB2K, and N^3LO NN interactions, respectively.The low-lying spectra of ^21Na for INOY, CDB2K, and N^3LO interactions are shown along with the experimental data in the second panel of <ref>. The figure shows that the CDB2K and N^3LO reproduce the correct ordering of low energy states up to the fifth excited state (5/2^+). On the other hand, the INOY interaction is able to reproduce correct ordering only up to the second excited state (7/2^+). A significant deviation of 2.907 MeV is observed between the calculated excitation energy with INOY and the experimental data for the 1/2_1^+state.The root-mean-square deviations corresponding to nine low-lying states between the experimental and calculated results for the INOY, CDB2K, and N^3LO are 1.645, 0.375, and 0.439 MeV. However, the rms deviations are 0.336, 0.388, and 0.396 MeV corresponding to INOY, CDB2K, and N^3LO for the yrast band states (3/2^+-5/2^+-7/2^+-9/2^+-11/2^+). For the case, ^22Na, the ground state (3^+) is well reproduced by all the three interactions considered in this work as shown in the third panel of <ref>. While the CDB2K interaction is able to reproduce low-energy spectra up to the second excited state (0^+), the N^3LO interaction is able to reproduce only the ground state and the first excited state in the correct order. The calculated excitation energies of the 4^+ state with CDB2K and N^3LO interactions are only 56 and 73 keV away from the experimental data.In <ref>, we have shown the low-energy spectra of ^23Na. While the INOY interaction is able to reproduce the correct ordering of low-energy spectra up to the second excited state (7/2^+), the other two interactions, namely CDB2K and N^3LO, are able to reproduce only the g. s. (3/2^+) and first excited state (5/2^+) correctly. §.§ Un-natural parity low-lying states of ^20-23Na: In this section, we are discussing the low-lying unnatural parity states of ^20-23Na. The ground states of all four isotopes have positive parity, and the unnatural negative parity states are shown in <ref> and <ref>. The first panel of <ref> shows the negative parity states of ^20Na. As can be seen from the figure, all three interactions are able to reproduce 2^- as the correct lowest negative parity state. While CDB2K and N^3LO are able to reproduce the correct sequence of states up to 3^-_1, the ordering of 2^-_2 and 3^-_1 are seen to be reversed for INOY interaction. The 2^-_4 state obtained for N^3LO interaction is in good agreement with the un-confirmed (2)^-_4 state, so the 4.150 MeV state of ^20Na negative parity spectra could be 2^- as predicted from our NCSM calculation.The second panel of <ref> shows the un-natural parity states of ^21Na. From the figure, we can see that the correct sequence of states is correctly reproduced by INOY interaction. While the calculated excitation energies of 3/2^-_1 (5/2^-_1) states are less by 240 keV (337 keV) compared to the experimental states, they are higher by 576 keV (1008 keV) for 3/2^-_2 (1/2^-_1) states compared to the experimental data. 
The other two interactions fail to reproduce the correct ordering of low-energy spectra including the g.s.The negative parity spectra of ^22Na are shown in the third panel of <ref>. While the INOY interaction is able to reproduce the correct ordering up to 3^-_1 state, the other two interactions fail to reproduce the correct spectra including the g. s. The calculated 2^-_1 state with INOY is in good agreement with the experimental data. The low-energy negative parity states of ^23Na are shown in <ref>. The CDB2K and N^3LO interactions are able to reproduce the g.s. correctly, while the calculated results with INOY show 7/2^-_1 as the lowest negative parity state. The calculated spectra using CDB2K match the experimental ordering of states. The excitation energy of the 5/2^-_1 state for CDB2K is only 50 keV higher than the experimental data; however, the calculated excitation energies of 3/2^-_1, 3/2^-_2 and 7/2^-_1 statesfor the same interaction are lower than the experimental data.In <ref>, the lowest positive and negative parity states of ^20-23Na isotopes are compared for the model space going from 0ħΩ to 4ħΩ for the case of N^3LO interactions. Calculated results corresponding to CDB2K and INOY are shown only for 3ħΩ and 4ħΩ model spaces. We observed that the excitation energies of the unnatural states improve by increasing the model space size for all four Na isotopes. Further improvements in excitation energies of unnatural parity states can be expected for even larger model space. Of the three interactions considered in this work, INOY shows more excitation energies of the corresponding negative parity states, even for the highest model spaces considered. The deviation with INOY interaction is very largecorresponding to 0ħΩ to 1ħΩand 0ħΩ to 1ħΩmodel space. Because of this reason, we have shown N^3LO results corresponding to the above two model spaces in <ref>.§.§ Mirror energy difference in the low-energy spectra of A = 21 mirror pairs: The isospin symmetry breaking can be studied by considering pairs of mirror nuclei where the number of protons and neutrons are interchanged. It is responsible for the difference in excited spectra of mirror pairs. Though the Coulomb interaction among the protons is the major source of isospin symmetry breaking, some contributions also come from the nuclear part. The mirror energy difference (MED) between the analogous states of mirror pairs are defined as:MED_J = E_J(T_z = -T) - E_J(T_z = +T).Here, E_J are the excitation energies of analogous states with angular momentum J in a mirror pair with T_z = ± T. The MED provides an estimation of the isospin symmetry breaking of a particular interaction. In this work, we have shown a comparison between the low-energy states of ^21Na and ^21Ne which form a |T_z| = 1/2 mirror doublet in <ref>. The spectra of ^21Ne are taken from Ref. <cit.>. All results are for N_max = 4 at the corresponding optimum frequencies of each interaction. The figure shows that the yrast band states: 3/2^+_1-5/2^+_1-7/2^+_1-9/2^+_1-11/2^+_1 are slightly different in energy (less than 50 keV) between the analogous states of two mirror pairs. However, some other low-energy states show a large MED. In <ref>, we show the comparison between the experimental and calculated MEDs of some of the low-lying states, which show a large MED. For comparison, we also include MED of 5/2^+_1, which is a yrast band state.Experimentally, a small MED (18 keV) is seen for 5/2^+_1, while the 1/2^+_1 shows a large MED of 370 keV. 
The calculated MEDs for this state (1/2^+_1) are 122, 100, and 78 keV with INOY, CDB2K and N^3LO, respectively. The other three states, 3/2^+_2, 5/2^+_2, and 5/2^+_3 show MEDs ∼ 200 keV. The calculated MEDs with all three realistic interactions underestimate corresponding to the experimental data. It is observed that out of the three realistic interactions, INOY produces the highest MEDs between analogous states of the mirror pairs considered in this work.A comparison of the neutron and proton occupancies of different HO orbitals is shown in <ref> for the 1/2^+_1 state of (^21Na - ^21Ne) mirror pair. As shown in the figure, a significant contribution comes from the HO orbitals up to 0d_3/2. Among the HO orbitals that contribute most to the 1/2^+_1 wavefunction, the occupancies of proton and neutron 1s_1/2 orbitals are significantly different for all three interactions. While the proton (neutron) occupancies of ^21Na(1/2^+_1) state are comparable to the neutron (proton) occupancies of ^21Ne(1/2^+_1), the calculated MEDs for INOY, CDB2K, and N^3LO are 122, 100 and 78 keV, respectively. This might arise from the isospin non-conserving two-body matrix elements involving 1s_1/2 orbital along with the contributions from the Coulomb part of the Hamiltonian.§.§ Ground state energies and electromagnetic properties In this section, we are going to discuss the binding energies of the g. s. and the electromagnetic properties of all four sodium isotopes considered in this work. All the results presented in <ref> are with respect to the optimal frequencies of each interaction corresponding to the highest N_max calculation. The experimental data included in the table are taken from Refs. <cit.>. We first discuss the binding energies of the g.s. As shown in <ref>, the binding energy of ^20Na g. s. is -145.960 MeV and the calculated result with INOY interaction overbind the g. s. by only 281 keV. While the other two interactions significantly underbind the g. s of ^20Na. In <ref>, we have plotted the g. s. binding energies of sodium isotopes obtained from all three interactions considered in this work. From the figure, it is seen that the INOY interaction overbinds the g. s. of all four Na-isotopes, while the other two interactions underbind corresponding g. s. significantly. On average, while the INOY interaction overbinds the g. s. by 5.140 MeV, the CDB2K and N^3LO underbind by 13 and 11 MeV, respectively.In <ref>, we have shown the probability distributions over different model spaces from N_max = 0 to 4 for the NCSM calculation of ^23Na g. s. for N_max = 4 calculation. From the figure, we see that a maximum contribution of 18.7 % comes from N_max = 4 space for the N^3LO interaction among the three interactions. The neutron and proton occupancies up to 0f_7/2 orbital are shown in <ref> for the g. s. of ^20-23Na isotopes corresponding to three different interactions. From the figure, a significant difference is seen for the occupancies of ν (1s_1/2) and π (1s_1/2) orbitals for INOY and N^3LO interactions. These differences in occupancies might have caused the differences in the binding energies of the g. s. of ^20-23Na isotopes for INOY and N^3LO. Less occupancies of 1s_1/2 orbitals for INOY can be correlated to more binding of the g. s. compared to CDB2K and N^3LO interactions.Apart from the binding energies of the g. s., <ref> shows the calculated quadrupole and magnetic moments of the g. s. obtained from all three interactions. 
In <ref> and <ref>, we have plotted the calculated quadrupole and magnetic moments of the g. s. of sodium isotopes, respectively, and compared them with the experimental data. The overall trends of g. s. moments are followed by all three interactions, though there are some deviations between the calculated and the experimental values. The electromagnetic transition strengths, more specifically calculated B(E2) values, are significantly less than the experimental values, as can be seen from <ref>. The prime reason for this is that B(E2) is a long-range operator. While the OLS transformation renormalizes the short-range part of nuclear interaction and short-range operators, it weakly renormalizes the long-range operators like B(E2).Thus better results for B(E2) can be obtained once we perform calculations with higher N_max. For comparison, we also included the valence-space in-medium similarity renormalization group (VS-IMSRG) results of B(E2; 5/2^+_1 → 3/2^+_1) transitions for ^21Na and ^23Na isotopes in <ref>. For these two cases, the B(E2) strengths obtained for VS-IMSRG are better than the NCSM results for all three interactions considered in this work. However, the calculated values of E2 transition strengths from VS-IMSRG and NCSM are still far from the experimental data. Out of the three interactions considered in our work, N^3LO reproduces better results for B(E2) transition strengths.On the other hand, the B(M1) transition strengths depend on the spin and isospin coordinates only for which converged results can be achieved easily unlike B(E2) transitions. §.§ Point-proton radius and neutron skinWe have also investigated the point-proton radii of sodium isotopes in addition to the low-energy spectra and electromagnetic properties. The point-proton radius is a long-range operator like the B(E2) operator. Both of these two operators are sensitive to the long-range part of the nuclear wavefunctions. So, to obtain converged results for these observables, NCSM calculations with large N_max model space are required. The point-proton radius is extracted from the experimental charge radius by using the following formula:⟨ r^2 ⟩_p = ⟨ r^2 ⟩_c - ⟨ R_p^2 ⟩ - N/Z⟨ R_n^2 ⟩ - 3/4 m_p^2 Here, ⟨ R_p^2 ⟩ and ⟨ R_n^2 ⟩ are, respectively the squared charge radius of proton and neutron. The last term of the equation is the Darwin-Foldy term related to relativistic correction in natural units. These values are taken to be ⟨ R_p^2 ⟩^1/2 = 0.8783(86) fm, ⟨ R_n^2 ⟩ = -0.1149(27) fm^2 <cit.> and 3/(4m_p^2) = 0.033 fm^2 <cit.>. The squared point-proton radius relative to the center of mass of all nucleons is evaluated with r_p^2= 1/Z∑_i=1^Z|r⃗_⃗i⃗ - R⃗_CM|^2.The operator in the above equation is a two-body operator. It is reduced to a more suitable form involving one-body and two-body operators to evaluate two-body matrix elements for this operator. After finding the two-body matrix elements, the expectation value of the r_p operator is similar to the calculations of the ground-state energies. A similar method is applied to calculate atomic nuclei's point-neutron radii (r_n); however, experimental measurement of r_n is challenging, unlike r_p measurement. In <ref>, the calculated r_p and r_n of ^21, 22Na are shown as function of ħΩ for N_max = 2 and 4 model spaces. The figure shows that at a lower HO frequency, the calculated r_p and r_n decrease with increasing N_max, while at a high HO frequency, they increase with increasing N_max. 
So, a region of calculated r_p and r_n around the crossing points is observed, which is independent of N_max. That crossover can be taken as a true converged point-proton radius <cit.> and point-neutron radius. This work takes the intersection points of two r_p and r_n curves corresponding to N_max = 2 and 4 as the converged radii. For example, the first panel of <ref> shows that the r_p curves of ^21Na for INOY interaction crosses each other almost at 2.30 fm. Similarly, the crossing point for r_n curves is at 2.68 fm. So, the point-proton and point-neutron radii of ^21Na corresponding to INOY interaction are 2.30 and 2.68 fm, respectively. Similarly, the converged r_p and r_n are calculated for all four sodium isotopes corresponding to the three realistic interactions considered. The calculated r_p from the NCSM method are reported in <ref> and compared with the experimental data which are taken from <cit.>. From <ref>, we see that the converged r_p are underpredicted by NCSM results with all three interactions. While the N^3LO interaction shows the correct trend of experimental r_p, the INOY and CDB2K show odd-odd sodium isotopes ^20, 22Na to have large r_p compared to odd-even sodium isotopes ^21, 23Na. Out of the three realistic interactions, the N^3LO reproduces slightly better results compared to the other two. This is due to large occupancies of the π (1s_1/2) orbital for N^3LO interaction compared to INOY and CDB2K interactions as can be seen from the right side figure of <ref>. From the knowledge of both r_p and r_n, the neutron skin thickness, r_np can be calculated using r_np = r_n - r_p. In <ref>, the neutron skin thicknesses of the sodium isotope ground states are shown. The experimental data for the figure is taken from <cit.>. From the figure, we see that the calculated r_np of ^20,23Na g. s. are within the experimental error for all three interactions. While r_np of ^22Na is in good agreement with the experimental data for INOY, for the other two interactions, they are slightly away from the experimental data. A significant mismatch is observed for ^21Na r_np corresponding to INOY and CDB2K interactions. However, the N^3LO result for the same is close to the experimental value. § CONCLUSIONS In this work, we have investigated the low-lying nuclear structure properties of sodium isotopes ^20-23Na within the ab initio NCSM formalism using three realistic interactions, namely INOY, CDB2K, and chiral N^3LO. We studied the low-energy spectra of Na-isotopes, including both natural and unnatural parity states, electromagnetic properties, point-proton radii, and neutron skin thicknesses. We observed a good agreement of the g.s. binding energy of ^20Na for INOY interaction with the experimental data. However, for other Na-isotopes, INOY interaction overbinds the corresponding g.s. The CDB2K and N^3LO underbind the g.s. of all four sodium isotopes. The INOY interaction results for the g.s. energies are better thanCDB2KandN^3LO interactions, this is because three-body force effects are absorbed in the nonlocal part of the INOY interaction.Among the electromagnetic properties, the quadrupole and magnetic moments of the g.s. follow the same trend as in the experimental data. The B(M1) transition being independent of spatial coordinates, converged results close to experimental data can be achieved at a smaller basis space. However, the situation is different for the B(E2) transition that depends on the long-range part of nuclear wavefunctions. 
Among the three realistic interactions, the N^3LO reproduces better results for E2 transition strength for all four Na-isotopes. However, only one-third of the experimental transition strengths are obtained for N^3LO interactions. Comparing with B(E2) results of another ab initio method VS-IMSRG, the B(E2; 5/2_1^+ → 3/2_1^+) for ^21Na and ^23Na are 56.1 and 56.9 e^2 fm^4 for VS-IMSRG, the NCSM calculation with N^3LO provide 33.12 and 30.56 e^2 fm^4. Compared to the experimental results, the ab initio results including VS-IMSRG are significantly less.The point-proton radii (r_p) is also a long-range observable, just like the B(E2). In order to obtain the converged r_p, we employed the “crossing-point” method, and the converged r_p for three different interactions are compared to the available experimental data. We observed that the g.s. r_p obtained from NCSM calculation are less than the experimental data. However, among the three interactions, N^3LO reproduces slightly better results compared to the other two interactions and follows the same experimental trend. Similarly, we also calculated the converged root mean square radius of the neutron distribution (r_n) and the neutron skin thickness (r_np) of the g. s. of Na-isotopes. Except for the case of ^21Na, the calculated r_np are in good agreement with the experimental data.§ ACKNOWLEDGMENTS We acknowledge financial support from SERB (India), CRG/2019/000556. We would like to thank Petr Navrátil for providing us his NN effective interaction codeand Christian Forssén for making available the pAntoine. utphys 44 natexlab#1#1bibnamefont#1#1bibfnamefont#1#1citenamefont#1#1url<#>1urlprefixURL NCSM_r2 B. R. Barrett, P. Navrátil, and J. P. Vary, Ab initio no core shell model,https://doi.org/10.1016/j.ppnp.2012.10.003 Prog. Part. Nucl. Phys. 69, 131 (2013).NCSM_r1P. Navrátil, S. Quaglioni, I. Stetcu and B. R. Barrett,Recent developments in no-core shell-model calculations, https://doi.org/10.1088/0954-3899/36/8/083101J. Phys. G: Nucl. Part. Phys. 36, 083101 (2009).MVS2009 P. Maris, J. P. Vary, and A. M. Shirokov, Ab initio no-core full configuration calculations of light nuclei, https://journals.aps.org/prc/abstract/10.1103/PhysRevC.79.014308 Phys. Rev. C 79, 014308 (2019).A=10 E. Caurier, P. Navrátil, W. E. Ormand, and J. P. Vary, Ab initio shell model for A = 10 nuclei,https://journals.aps.org/prc/abstract/10.1103/PhysRevC.66.024314Phys. Rev. C 66, 024314 (2002).C.Forssen C. Forssén, R. Roth, and P. Navrátil, Systematics of 2^+ states in C isotope from the no-core shell model, https://doi.org/10.1088/0954-3899/40/5/055105 J. Phys. G: Nucl. Part. Phys. 40, 055105 (2013). stetcu1 I. Stetcu, B. R. Barrett, P. Navrátil, and J. P. Vary, Long- and short-range correlations in the ab-initio no-core shell model, https://doi.org/10.1103/PhysRevC.73.037307 Phys. Rev. C 73, 037307 (2006).stetcu2 I. Stetcu, B. R. Barrett, P. Navrátil, and J. P. Vary, Effective operators within the ab initio no-core shell model, https://link.aps.org/doi/10.1103/PhysRevC.71.044325 Phys. Rev. C 71, 044325 (2005). Phys.Rev.C573119(1998)P. Navrátil and B. R. Barrett, “Large-basis shell-model calculations for p-shell nuclei”, https://doi.org/10.1103/PhysRevC.57.3119Phys. Rev. C 57, 3119 (1998). Phys.Rev.C502841(1994)D. C. Zheng, J. P. Vary and B. R. Barrett, “Large-space shell-model calculations for light nuclei”, https://doi.org/10.1103/physrevc.50.2841 Phys. Rev. C 50, 2841 (1994).PRL84 P. Navrátil, J. P. Vary and B. R. 
Barrett, “Properties of ^12C in the ab initio nuclear shell model”,https://doi.org/10.1103/PhysRevLett.84.5728Phys. Rev. Lett. 84, 5728 (2000). PRC62 P. Navrátil, J. P. Vary and B. R. Barrett, “Large-basis ab initio no-core shell model and its application to ^12C”, https://doi.org/10.1103/PhysRevC.62.054311Phys. Rev. C 62, 054311 (2000). Choudhary1P. Choudhary, P. C. Srivastava and P. Navrátil, Ab initio no-core shell model study of ^10-14B isotopes with realistic NN interactions, https://doi.org/10.1103/PhysRevC.102.044309 Phys. Rev. C 102, 044309 (2020). Choudhary2 P. Choudhary, P. C. Srivastava, M. Gennari and P. Navrátil, Ab initio no-core shell-model description of ^10-14C isotopes, https://doi.org/10.1103/PhysRevC.107.014309 Phys. Rev. C 107, 014309 (2023). Choudhary3 P. Choudhary and P. C. Srivastava, Ab initio no-core shell model study of neutron-rich ^18,19,20C isotopes, https://doi.org/10.1016/j.nuclphysa.2022.122565 Nucl. Phys. A 1029, 122565 (2023).arch1 A. Saxena and P. C. Srivastava, Ab initio no-core shell model study of neutron-rich nitrogen isotopes, https://doi.org/10.1093/ptep/ptz073 Prog. Theor. Exp. Phys. 2019, 073D02 (2019).Phys.Rev.C54001(2021) P. Maris et al., Light nuclei with semilocal momentum-space regularized chiral interactions up to third order,https://doi.org/10.1093/ptep/ptz073 Phys. Rev. C 103, 054001 (2021).reviewJPGP. Navrátil, S. Quaglioni, I. Stetcu and B. R. Barrett,“Recent developments in no-core shell-model calculations”, https://doi.org/10.1088/0954-3899/36/8/083101J. Phys. G: Nucl. Part. Phys. 36, 083101 (2009).NCSM_p8 P. Maris, J. P. Vary, and A. M. Shirokov,Ab initio no-core full configuration calculations of light nuclei, http://dx.doi.org/10.1103/PhysRevC.79.014308Phys. Rev. C 79, 014308 (2009). chandan C. Sarma and P. C. Srivastava, Ab initio no-core shell-model study of ^18-24Ne isotopes, https://doi.org/10.1088/1361-6471/acb962 J. Phys. G: Nucl. Part. Phys. 50, 045105 (2023).vs_imsrg1 J. Henderson , G. Hackman, P. Ruotsalainen, et. al., Coulomb excitation of the |T_z|= 1/2 , A = 23 mirror pair, https://doi.org/10.1103/PhysRevC.105.034332Phys. Rev C 105, 034332 (2022).vs_imsrg2 S. R. Stroberg, J. Henderson, G. Hackman, P. Ruotsalainen, G. Hagen, and J. D. Holt, Systematics of E2 strength in the sd shell with the valence-space in-medium similarity renormalization group, https://doi.org/10.1103/PhysRevC.105.034333Phys. Rev. C 105, 034333 (2022).arch3 A. Saxena and P. C. Srivastava, First-principles results for electromagnetic properties of sd shell nuclei, https://doi.org/10.1103/PhysRevC.96.024316Phys. Rev. C 96, 024316 (2017) MED H. H. Li, Q. Yuan, J. G. Li, et. al., Investigation of isospin-symmetry breaking in mirror energy difference and nuclear mass with ab initio calculations, https://doi.org/10.1103/PhysRevC.107.014302Phys. Rev. C 107 014302 (2023)Na_rc B. Ohayon, R.F. Garcia Ruiz, Z.H. Sun, G. Hagen, T. Papenbrock, B.K. Sahoo, Nuclear charge radii of Na isotopes: interplay of atomic and nuclear theory, https://doi.org/10.1103/PhysRevC.105.L031305Phys. Rev. C 105 (2022) L031305.Na_skin T. Suzuki, et al., Neutron skin of Na isotopes studied via their interaction cross sections, https://doi.org/10.1103/PhysRevLett.75.3241Phys. Rev. Lett. 75 (1995) 3241.subhrajit S. Sahoo, P. C. Srivastava, and T. Suzuki, Study of structure and radii for ^20-31Na isotopes using microscopic interactions, https://doi.org/10.1016/j.nuclphysa.2023.122618Nuclear Physics A 1032 (2023) 122618. INOY P. Doleschall and I. 
Borbély, Properties of the nonlocal NN interactions required for the correct triton binding energy, https://doi.org/10.1103/PhysRevC.62.054004 Phys. Rev. C 62, 054004 (2000). INOY2 P. Doleschall, I. Borbély, Z. Papp, and W. Plessas, Nonlocality in the nucleon-nucleon interaction and three-nucleon bound states, https://doi.org/10.1103/PhysRevC.67.064005 Phys. Rev. C 67, 064005 (2003). INOY3 P. Doleschall, Influence of the short range nonlocal nucleon-nucleon interaction on the elastic nd scattering: Below 300.3em0exMeV, https://doi.org/10.1103/PhysRevC.69.054001 Phys. Rev. C 69, 054001 (2004).CDB2K1 R. Machleidt, K. Holinde, and C. Elster, The Bonn meson-exchange model for the nucleon-nucleon interaction, https://doi.org/10.1016/S0370-1573(87)80002-9 Phys. Rep. 149, 1 (1987).CDB2K2 R. Machleidt, High-precision, charge-dependent Bonn nucleon-nucleon potential, https://doi.org/10.1103/PhysRevC.63.024001 Phys. Rev. C 63, 024001 (2001). QCD R. Machleidt, D. R. Entem, “Chiral effective field theory and nuclear forces, Phys. Rep. 503 (2011) 1-75.N3LO D. R. Entem and R. Machleidt, Accurate charge-dependent nucleon-nucleon potential at fourth order of chiral perturbation theory, https://doi.org/10.1103/PhysRevC.68.041001 Phys. Rev. C 68, 041001(R) (2003). arch2 A. Saxena and P. C. Srivastava, Ab initio no-core shell model study of ^18-23O and ^18-24F isotopes, https://doi.org/10.1088/1361-6471/ab6f1dJ. Phys. G: Nucl. Part. Phys. 47, 055113 (2020). OLS1 S. Ôkubo,Diagonalization of Hamiltonian and Tamm-Dancoff equation, https://doi.org/10.1143/PTP.12.603 Prog. Theor. Phys. 12, 603 (1954). OLS2 K. Suzuki and S. Y. Lee, Convergent theory for effective interaction in nuclei, https://doi.org/10.1143/PTP.64.2091 Prog. Theor. Phys. 64, 2091 (1980). OLS3 K. Suzuki, Construction of Hermitian effective interaction in nuclei: General relation between Hermitian and non-Hermitian forms, https://doi.org/10.1143/PTP.68.246 Prog. Theor. Phys. 68, 246 (1982). OLS4 K. Suzuki and R. Okamoto, Effective interaction theory and unitary-model-operator approach to nuclear saturation problem, https://doi.org/10.1143/ptp/92.6.1045 Prog. Theor. Phys. 92, 1045 (1994). SRG1 S. K. Bogner, R. J. Furnstahl, and R. J. Perry, Similarity renormalization group for nucleon-nucleon interactions, https://doi.org/10.1103/PhysRevC.75.061001 Phys. Rev. C 75, 061001 (2007). SRG2 E. D. Jurgenson, P. Navrátil, and R. J. Furnstahl, Evolving nuclear many-body forces with the similarity renormalization group, https://doi.org/10.1103/PhysRevC.83.034301 Phys. Rev. C 83, 034301 (2011). Lawson D. H. Gloeckner and R. D. Lawson, Spurious center-of-mass motion, https://doi.org/10.1016/0370-2693(74)90390-6 Phys. Lett. B 53, 313 (1974). av18R. B. Wiringa, V. G. J. Stoks, and R. Schiavilla, Accurate nucleon-nucleon potential with charge-independence breaking, https://doi.org/10.1103/PhysRevC.51.38 Phys. Rev. C 51,38 (1995). pAntoine1E. Caurier and F. Nowacki, Present status of shell model techniques, https://www.actaphys.uj.edu.pl/R/30/3/705/pdf Acta Phys. Pol. B 30, 705 (1999). pAntoine2 E. Caurier, G. Martínez-Pinedo, F. Nowacki, A. Poves, and A. P. Zuker, The shell model as a unified view of nuclear structure, https://doi.org/10.1103/RevModPhys.77.427 Rev. Mod. Phys. 77, 427 (2005). pAntoine3C. Forssén, B. D. Carlsson, H. T. Johansson, D. Sääf, A. Bansal, G. Hagen, and T. Papenbrock, Large-scale exact diagonalizations reveal low-momentum scales of nuclei, https://doi.org/10.1103/PhysRevC.97.034328 Phys. Rev. C 97, 034328 (2018). 
NNDCData extracted using the NNDC World Wide Web site from the ENSDF,https://www.nndc.bnl.gov/ensdf/.https://www.nndc.bnl.gov/ensdf/. QandmagIAEA,https://www-nds.iaea.org/nuclearmoments/. https://www-nds.iaea.org/nuclearmoments/.charge_radii I. Angeli et al., Table of experimental nuclear g.s charge radii: An update, https://doi.org/10.1016/j.adt.2011.12.006At. Data Nucl. Data Tables, 99, 69 (2013).C.Forseen A. Ekström, G. R. Jansen, K. A. Wendt, G. Hagen, T. Papenbrock, B. D. Carlsson,C. Forssén , M. Hjorth-Jensen, P. Navrátil, and W. Nazarewicz, Accurate nuclear radii and binding energies from a chiral interaction, http://dx.doi.org/10.1103/PhysRevC.91.051301Phys. Rev. C 91, 051301(R) (2015).rp_1 A. M. Shirokov, I. J. Shin, Y. Kim, M. Sosonkina, P. Maris and J. P. Vary, N^3LO NN interaction adjusted to light nuclei in ab exitu approach”, https://doi.org/10.1016/j.physletb.2016.08.006 Phys. Lett. B 761, 87 (2016).rp_2M. A. Caprio, P. Maris and J. P. Vary, “Halo nuclei ^6He and ^8He with the Coulomb-Sturmian basis”, https://doi.org/10.1103/PhysRevC.90.034305 Phys. Rev. C 90, 034305 (2014). | http://arxiv.org/abs/2310.17893v1 | {
"authors": [
"Chandan Sarma",
"Praveen C. Srivastava"
],
"categories": [
"nucl-th",
"nucl-ex"
],
"primary_category": "nucl-th",
"published": "20231027045807",
"title": "Nuclear structure study of $^{20-23}$Na isotopes with ab initio no-core shell-model"
} |
A Chebyshev Confidence Guided Source-Free Domain Adaptation Framework for Medical Image Segmentation Jiesi Hu, Yanwu Yang, Xutao Guo, Jinghua Wang*, Ting Ma* This work was supported in part by grants from the National Natural Science Foundation of P.R. China (62276081, 62106113), Innovation Team and Talents Cultivation Program of National Administration of Traditional Chinese Medicine (NO:ZYYCXTD-C-202004), Basic Research Foundation of Shenzhen Science and Technology Stable Support Program (GXWD20201230155427003-20200822115709001) and The Major Key Project of PCL (PCL2021A06). (Corresponding author: Jinghua Wang, Ting Ma.) Jiesi Hu , Yanwu Yang, and Xutao Guo are with School of Electronics and Information Engineering, Harbin Institute of Technology at Shenzhen, and The Peng Cheng Laboratory.(e-mail: [email protected], [email protected], [email protected]) Jinghua Wang is with School of Computer Science and Technology, Harbin Institute of Technology at Shenzhen. (e-mail: [email protected]) Ting Ma is with School of Electronics and Information Engineering, Harbin Institute of Technology at Shenzhen, The Peng Cheng Laboratory, Guangdong Provincial Key Laboratory of Aerospace Communication and Networking Technology, Harbin Institute of Technology, Shenzhen, and International Research Institute for Artifcial Intelligence, Harbin Institute of Technology, Shenzhen. (e-mail: [email protected])Received XXX; accepted YYY ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================ Recent works in open-domain question answering (QA) have explored generating context passages from large language models (LLMs), replacing the traditional retrieval step in the QA pipeline. However, it is not well understood why generated passages can be more effective than retrieved ones. This study revisits the conventional formulation of QA and introduces the concept of knowledge corpus error. This error arises when the knowledge corpus used for retrieval is only a subset of the entire string space, potentially excluding more helpful passages that exist outside the corpus. LLMs may mitigate this shortcoming by generating passages in a larger space. 
We design an experiment of paraphrasing human-annotated gold context using LLMs to observe knowledge corpus error empirically. Our results across three QA benchmarks reveal an increased performance (10% - 13%) when using paraphrased passages, indicating a signal for the existence of knowledge corpus error.[Our code is available at <https://github.com/xfactlab/emnlp2023-knowledge-corpus-error>] § INTRODUCTION Large language models (LLMs) generate surprisingly fluent and informative texts. This has led to many works utilizing the text data generated by these models for purposes such as instruction tuning <cit.> and improving reasoning capability <cit.>. Open-domain question answering (QA) <cit.> is a task where retrieving relevant passages from a corpus of factual information such as Wikipedia is standard practice. Recent works have attempted to generate such passages from LLMs, replacing the retrieval step of the traditional pipeline <cit.>. Despite their success, it is not well understood why these generated passages can be more effective than retrieved passages. These recent advancements also lack robust links to prior research in QA, posing a challenge to a holistic understanding. By revisiting the formulation of answer generation with retrieved passages <cit.>, we identify a previously overlooked gap, which has become significant now due to the advance of LLMs <cit.>. Our discussion starts with the observation that the knowledge corpus from which the passages are retrieved is only a subset of the possible string space. More helpful passages for the reader may exist outside the knowledge corpus. Unfortunately, retrieval, by definition, cannot utilize passages outside the knowledge corpus, potentially causing a shortfall. We refer to this as knowledge corpus error. In contrast, LLMs can generate passages from the entire string space, which may mitigate the inherent limits of retrieval. We empirically demonstrate the presence of knowledge corpus error, where a passage from outside of Wikipedia outperforms the human-annotated gold passage inside Wikipedia in question answering. We design an experiment of paraphrasing human-annotated gold context with LLMs. Experiments with four QA benchmarks, NQ <cit.>, HotPotQA <cit.>, StrategyQA <cit.>, and QASC <cit.>, result in a 10% - 13% gain in reader performance across three benchmarks when using paraphrased passages. This gain supports our hypothesis that more helpful passages than the gold passage exist outside the knowledge corpus. § RELATED WORK §.§ Leveraging LLM-generated text As the quality of text generated from LLMs has improved through larger models <cit.> and instruction tuning <cit.>, many works have sought to use these models as data sources in other NLP tasks. Text generated from LLMs has been used for generating datasets for instruction finetuning <cit.>, improving reasoning <cit.>, and many other purposes <cit.>. Recently, there has been growing attention towards open-source LLMs <cit.> finetuned on instructions generated from proprietary LLMs, such as Alpaca <cit.>, Koala <cit.>, and Vicuna <cit.>. Text generated from these models purportedly matches the quality of text from proprietary LLMs <cit.>, but this assertion remains disputed <cit.>. Understanding the role of LLM-generated text will serve as an important aspect of this discourse. §.§ Knowledge-intensive NLP and retrieval Knowledge-intensive NLP, such as open-domain QA <cit.> and fact verification <cit.>, requires substantial factual knowledge that may change over time.
Therefore, these tasks were originally envisioned to incorporate the retrieval of relevant passages from the knowledge corpus <cit.>. In the typical retrieve-then-read pipeline (, inter alia), a pipeline of models, first selects k passages from a retrieval function which are then used to condition answer generation from reader (, inter alia).Meanwhile, the success of pre-trained language models <cit.> and the associative memory properties learned during training <cit.> has allowed researchers to revisit closed-book QA, in which models answer the questions without being provided a passage. Closed-book QA was demonstrated to be effective both in in-context learning <cit.> and supervised learning <cit.>. Recent works on chain-of-thought prompting has shown that generating intermediate steps before giving an answer improves the reasoning capability of LLMs <cit.>. Inspired by this, recent works prompt LLMs to generate the intermediate step of QA, which is the passages <cit.>. These passages are subsequently fed into the reader, either supervised FiD <cit.> or in-context learning LLMs <cit.>. Despite their success, these methods require a very large scale and risk generated passages containing stale or non-factual information. Moreover, it is not fully explained why generating passages may have advantages over retrieval.§ ANALYTIC DISCUSSION Our task formulation follows retrieval augmented models for QA <cit.>. These works view contexts as a latent variable for the QA model <cit.>. §.§ SetupLet V^* be the infinite set of all possible strings over vocabulary tokens in V, including the empty string.An instance of a QA dataset consists of a triple (q,a,c): question q, answer a, and context c, where q, a, c ∈ V^*. Typically, the context c is retrieved from the knowledge corpus 𝒵, such as Wikipedia, where 𝒵⊂ V^*. §.§ QA Task FormulationThe goal of QA is to learn a distribution p(a|q), where models decode a string a that acts as an abstractive answer to the query <cit.>. One can directly prompt a language model to obtain an answer a, given question q (where context c is implicitly the empty string), relying only on model parameters in closed-book QA <cit.>. â = arg max_a ∈ V^* p(a|q) However, direct prompting is often difficult to learn and barely discloses its inner working. Therefore, a popular approach is to marginalize p(a|q) over contexts in the knowledge corpus <cit.>. As it is intractable to calculate the probability for all the contexts in the knowledge corpus, p(a|q) is approximated to the sum of probabilities for top k contexts from 𝒵. Topk(𝒵, q) denotes the set of resulting top k passages after the retrieval with a query q. p(a|q) ≈∑_c∈ Topk(𝒵, q)p(a|q,c)p(c|q) The gap in this formulation is that relevant context c may exist outside of the knowledge corpus 𝒵. This makes the sum of marginal probabilities over 𝒵 only an approximation. The true probability would require the summation of marginal probabilities over the infinite string space V^*. p(a|q)=∑_c∈ S p(a|q,c)p(c|q) ≈∑_c∈𝒵p(a|q,c)p(c|q) ≈∑_c∈ Topk(𝒵, q)p(a|q,c)p(c|q) §.§ Knowledge corpus errorEquation <ref> details two steps of approximation, which results in two sources of potential error in QA using contexts. The first source of error is introduced when the entire knowledge corpus 𝒵 is approximated to top k retrieved contexts, Topk(𝒵,q). This error, which we denote retrieval error, can be mitigated by better retrieval methods or increasing k, the number of contexts. 
On the other hand, the second source of error is introduced when the entire string space V^* is approximated to the knowledge corpus 𝒵. This error is rooted in the use of the knowledge corpus itself, hence we denote it as knowledge corpus error. To elaborate, for some c̃∉𝒵, p(c̃|q) > p(c ∈𝒵|q), but p_retriever(c̃|q)=0 whereas p_LLM(c̃|q) > 0. For a query q, p(c|q) is sufficiently small for most contexts c. This allows these terms to be ignored by setting p(c|q) to zero. For instance, top-k retrieval is essentially setting p(c|q) to zero for c ∉ Topk(𝒵, q). For contexts outside the knowledge corpus, c̃∉𝒵, applying Bayes' rule, p(c̃|q) ∝ p(q|c̃)p(c̃), where the retrieval-based task formulation is setting the prior p(c̃) =0. Knowledge corpus error may explain why reader models can benefit from generated contexts <cit.>, as LLMs can generate strings from the set V^* ⊃𝒵. § EMPIRICAL OBSERVATION To observe knowledge corpus error, we study the effect of paraphrasing human-annotated gold contexts from QA datasets. Gold context c_gold∈𝒵 is what humans annotated as the supporting passage for a given question-answer pair. While human annotation may be imperfect, we assume that this c_gold acts as the best available passage from the knowledge corpus 𝒵, i.e., there is no retrieval error. In our experiment, c_gold is paraphrased into c_paraph by prompting LLMs with c_gold and q. Then, c_gold and c_paraph are separately fed into the reader to compare the performance. As c_gold is the best available context, any gains from paraphrasing should be attributed to reduced knowledge corpus error. §.§ Experimental setup For a single instance of a QA dataset (q,c_gold,a) and a paraphrased context c_paraph = Paraph(c_gold, q), we compare model performance in two settings, without and with paraphrasing: Read(q, c_gold) and Read(q, c_paraph). Both Paraph() and Read() are function calls to LLMs, GPT-3.5 (gpt-3.5-turbo[https://api.openai.com/v1/chat/completions]) and Claude (claude-1[https://api.anthropic.com/v1/complete]). Experiments were conducted in June 2023. §.§ Benchmarks For benchmarks, we used NQ <cit.>, HotPotQA <cit.>, StrategyQA <cit.>, and QASC <cit.>. NQ consists of factual questions which can be answered with a single passage. Unlike NQ, HotPotQA consists of questions that require reasoning across multiple passages, known as multi-hop QA. StrategyQA and QASC further extend this multi-hop setting by requiring more implicit reasoning. For gold context c_gold, we use the paragraph(s) from Wikipedia, which are part of the annotations in NQ, HotPotQA, and StrategyQA. For QASC, where such a paragraph does not exist, we treat the seed facts that were used to create questions as gold context. In multi-hop QA, we concatenate all the contexts into a single context. See Appendix <ref> for details. §.§ Results We report the results in Table <ref>. Paraphrased context outperforms the original gold context for most cases in NQ, HotpotQA, and StrategyQA. This means that paraphrased passages were more helpful than the gold passages, implying the existence of knowledge corpus error. Moreover, using the context paraphrased by a different model did not cause any performance degradation, indicating some level of universality in the helpfulness of the passages. We provide further analysis of this finding in Appendix <ref>. QASC. We attribute the degradation in QASC to two reasons. First, the seed facts, which we considered as gold contexts in QASC, are not from a raw corpus.
The seed facts are manually selected from cleaned knowledge sources like WorldTree corpus. This is problematic as the gold contexts we are using represent the best-case scenario in retrieval, thereby eliminating any retrieval error. Second, distractor options in multiple-choice question confuses the model to generate a passage relevant to those options. This results in a passage containing distracting information for answering the question. Examples in Table <ref> illustrate these two points well. §.§ Qualitative Analysis After manually examining a sample of results from the empirical study, we identify three common factors contributing to knowledge corpus error. 1. Increased focus on the question Gold passages are a very small subset of facts from Wikipedia, with the communicative intent to generally inform about the subject. Therefore, gold passages inevitably include information that is irrelevant to the question. LLMs can only filter the helpful information from the gold passage during the paraphrasing. In fact, it has been shown that when both retrieved and generated passages contain the correct answer, the FiD reader can produce more correct answers when reading the generated passages <cit.>. And furthermore, models are sensitive to related but irrelevant information in a phenomena called damaging retrieval <cit.>.Query-focused paraphrasing acts as an information filter mitigating damaging retrieval.2. Chain-of-thought Some questions require a composition of facts to answer the question. In such case, we observe that paraphrasing is acting in a manner similar to chain-of-thought <cit.>. This also highlights the inherent limit of corpus such as Wikipedia, where the explicit composition of information is seldom given.In the second example of Table <ref>, the question requires combining two distinct facts, one about the military unit (VMAQT-1) and another about Irish mythology (Banshee). The paraphrased context acts somewhat akin to chain-of-thought, resulting in a more helpful context. 3. Incorporation of commonsense knowledge Commonsense knowledge plays a crucial role in understanding the world <cit.>, but not often explicitly stated, especially in a corpus such as Wikipedia. Language models are known to possess a degree of tacit knowledge <cit.>, which can be utilized by knowledge generation <cit.>. We observe that during paraphrasing, commonsense knowledge is elicited, aiding the reader.The third example of Table <ref> illustrates how commonsense knowledge — someone who dropped out of college will probably not attend graduation ceremony — is induced during paraphrasing. § CONCLUSIONIn this work, we demonstrate that generated contexts may be more helpful than retrieved contexts in open-domain question answering. By revisiting the formulation of question answering, we identify a gap where retriever inevitably ignores potentially helpful contexts outside of the corpus. We call this knowledge corpus error, and design an experiment in order to observe knowledge corpus error empirically. Paraphrasing the human-annotated gold contexts with LLMs led to increased reader performance in 3 out of 4 QA benchmarks, implying the existence of knowledge corpus error. 
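As a concrete companion to the experimental setup above, the following minimal Python sketch illustrates the Paraph()/Read() comparison on a single (q, c_gold, a) instance. The call_llm wrapper, the prompt strings, and the normalization-based exact match are illustrative assumptions of ours rather than the authors' exact implementation (their code is linked from the abstract); the generation budgets mirror the NQ settings reported in the appendix.

import re, string

def call_llm(prompt: str, max_tokens: int, temperature: float) -> str:
    """Hypothetical wrapper around a chat LLM (e.g., GPT-3.5 or Claude); not a real API call."""
    raise NotImplementedError

def paraphrase(context: str, question: str) -> str:
    # Paraph(c_gold, q): query-focused paraphrase of the gold context.
    prompt = ("Paraphrase the passage so that it is helpful for answering the question.\n"
              f"Question: {question}\nPassage: {context}\nParaphrase:")
    return call_llm(prompt, max_tokens=500, temperature=0.8)

def read(question: str, context: str) -> str:
    # Read(q, c): answer the question conditioned on a single context passage.
    prompt = f"Context: {context}\nQuestion: {question}\nAnswer:"
    return call_llm(prompt, max_tokens=25, temperature=0.4)

def normalize(text: str) -> str:
    text = "".join(ch for ch in text.lower() if ch not in string.punctuation)
    return re.sub(r"\s+", " ", text).strip()

def exact_match(prediction: str, answers: list[str]) -> bool:
    return normalize(prediction) in {normalize(a) for a in answers}

def compare(question: str, gold_context: str, answers: list[str]) -> tuple[bool, bool]:
    """EM with the gold context vs. EM with its query-focused paraphrase."""
    paraphrased = paraphrase(gold_context, question)
    return (exact_match(read(question, gold_context), answers),
            exact_match(read(question, paraphrased), answers))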
§ ACKNOWLEDGMENTS This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2019-0-00075, Artificial Intelligence Graduate School Program (KAIST)) and Artificial intelligence industrial convergence cluster development project funded by the Ministry of Science and ICT (MSIT, Korea) & Gwangju Metropolitan City. § LIMITATIONS The first limitation of this work is that it was not carried out in an actual retrieval setting. We used a gold context, which we assume is the best-case scenario in retrieval. However, retrieved contexts in a real retrieval setting <cit.>, i.e., contexts in the knowledge corpus other than the gold context, may deviate significantly from the gold context. Therefore, it is hard to discuss the effect of paraphrasing and the degree of knowledge corpus error within retrieval. The second limitation of this work is that it did not address a practical way to marry retrieval and generation via LLMs. Regardless of the seeming benefit of context generation, this approach suffers from issues such as information staleness and hallucination. Contemporaneous works explore various methods to leverage the benefits of both retrieval and generation <cit.>. This work is primarily concerned with the analytic understanding of how generation may have advantages over retrieval. We believe our work can inspire future contributions on empirical methods of incorporating retrieval and generation. The third limitation of this work is that its scope was limited to question answering. Conditioning generation on retrieved context is a well-studied approach in language modeling <cit.>. It will be worth exploring how knowledge corpus error manifests within language modeling. § DATASET NQ: We use the KILT <cit.> version of NQ[https://github.com/facebookresearch/KILT]. We use the dev split after excluding the instances where the context does not include answers, which results in 2532 samples. HotPotQA: We use the dataset from its original source[https://hotpotqa.github.io/]. We use the dev split, which includes 7405 samples. To reduce the inference cost, we only use the subset of the first 1531 samples. StrategyQA: We use the dataset from its original source [https://allenai.org/data/strategyqa]. We use the training split, which includes 2290 samples. We use the training split because the dev split contained too few (490) examples. QASC: We use the dataset from its original source [https://allenai.org/data/qasc]. We use the dev split, which includes 926 samples. § DETAILED EXPERIMENTAL SETUP §.§ Selection of gold passage NQ: NQ contains a set of provenances for possible answer contexts. For the experiments, we select the gold passages from the provenances that include at least one of the candidate answers. When there are multiple good passages, we employ the very first one. HotPotQA: HotPotQA contains 2 gold paragraphs from Wikipedia for each question. A gold passage is simply the concatenation of these two. Note that we do not utilize the fine-grained sentence-level annotation in the 2 paragraphs. StrategyQA: StrategyQA contains decomposition steps to solve the question. Each of these steps may be attached with a supporting paragraph from Wikipedia. A gold passage is the concatenation of all these paragraphs throughout the whole steps. Among three different annotated decomposition steps in the dataset, we use the first one. QASC: QASC contains two facts that are combined to create a question.
These facts are selected from a cleaned knowledge source. A gold passage is simply the concatenation of these two facts.The title of the passage is prepended to the passage in cases where titles are available (NQ, HotPotQA, and StrategyQA). §.§ Details on generation NQ: During reading, we used 3-shot prompting, where the 3-shot demonstrations are sampled from GPT-3.5 with questions from the dev split of NQ. Note that these questions are excluded from the experiment. Max tokens to generate was set to 500 in paraphrase and 25 in read.HotPotQA: Max tokens to generate was set to 300 in paraphrase and 10 in read. StrategyQA: Max tokens to generate was set to 300 in paraphrase and 10 in read. QASC: Max tokens to generate was set to 100 in paraphrase and 10 in read. Temperature during generation was set to 0.8 in paraphrase and 0.4 in read.We used 3-shot prompting for reading in NQ but otherwise used zero-shot prompting. Other generation keyword arguments are set to default if not specified. For the prompts used, see Table <ref>.Hyperparameters related to generation are decided mainly through trial-and-error. For example, max tokens was adjusted according to few preliminary samples. We tried to tweak temperature for QASC after observing deviant result, but only had minor impact. 3-shot setup was chosen for NQ because the performance was too low in zero-shot.§ DETAILS ON EVALUATION Evaluating exact match or accuracy may be non-trivial in a generative setting. Hence, we follow the previous works <cit.>. NQ (Exact Match): Following , we measure exact match of the output string after normalization. HotPotQA (Exact Match): Similarly as , we measure exact match of the output string after normalization. StrategyQA (Accuracy): Following , we measure accuracy by picking up the first "yes" or "no" encountered in the text after removing unnecessary letters. QASC (Accuracy): Similarly as , we measure accuracy by picking up the first large letter out of A to H encountered in the text.§ INFERENCE COST We used OpenAI and Anthropic's API to use their LLMs. The cost for OpenAI's API is estimated to be around $40 to $50. Anthropic's API has not cost any as we were on the free version.§ ACCORDANCE BETWEEN HETEROGENEOUS READER MODELSFor a majority of the examples, two readers accord with each other, i.e., both are correct or wrong, and this ratio is even higher in paraphrased contexts.§ EXAMPLES For examples of paraphrased context, see Table <ref> through <ref>. | http://arxiv.org/abs/2310.18076v1 | {
"authors": [
"Yejoon Lee",
"Philhoon Oh",
"James Thorne"
],
"categories": [
"cs.CL",
"cs.AI"
],
"primary_category": "cs.CL",
"published": "20231027114406",
"title": "Knowledge Corpus Error in Question Answering"
} |
: Hardware-Enhanced Robust Optimized Post-Training Quantization Framework for W8A8 Transformers Zhewei Yao, Reza Yazdani Aminabadi, Stephen Youn, Xiaoxia Wu, Elton Zheng, Yuxiong He Microsoft

Relation extraction (RE) has achieved remarkable progress with the help of pre-trained language models. However, existing RE models are usually incapable of handling two situations: implicit expressions and long-tail relation classes, caused by language complexity and data sparsity. Further, these approaches and models are largely inaccessible to users who don’t have direct access to large language models (LLMs) and/or infrastructure for supervised training or fine-tuning. Rule-based systems also struggle with implicit expressions. Apart from this, real-world financial documents such as various 10-X reports (including 10-K, 10-Q, etc.) of publicly traded companies pose another challenge to rule-based systems in terms of longer and complex sentences. In this paper, we introduce a simple approach that consults training relations at test time through a nearest-neighbor search over dense vectors of lexico-syntactic patterns and provides a simple yet effective means to tackle the above issues. We evaluate our approach on REFinD and show that our method achieves state-of-the-art performance. We further show that it can provide a good start for a human-in-the-loop setup when a small number of annotations are available and that it is also beneficial when domain experts can provide high-quality patterns. Our code is available at [https://github.com/pawan2411/PAN-DL_Refind]. *Equal Contribution § INTRODUCTION Relation extraction (RE) from text is a fundamental problem in NLP and information retrieval, which facilitates various tasks like knowledge graph construction, question answering and semantic search. Recent studies <cit.> in supervised RE take advantage of pre-trained language models (PLMs) and achieve SOTA performances by fine-tuning PLMs with a relation classifier. However, <cit.> observes that existing RE models are usually incapable of handling two RE-specific situations: implicit expressions and long-tail relation types. Implicit expression refers to the situation where a relation is expressed as an underlying message that is not explicitly stated or shown. In Figure 1, the relation "acquired_by(organization, organization)" occurs implicitly. Such underlying messages can easily confuse the relation classifier. The other problem of long-tail relation classes is caused by data sparsity in training. For example, 45.5% of the instances in the REFinD dataset <cit.> are no_relation. The most frequent class in the dataset, “per:title:title”, has 4,468 training examples, while over 14 out of 22 classes have less than 500 examples. The majority class can easily dominate model predictions and lead to low performance on long-tail classes. Recently, an ICL (In-Context Learning) based approach <cit.> has been utilized for RE tasks. The approach achieves improvements not only over existing GPT-3 baselines, but also over fully-supervised baselines, even with only a limited number of demonstrations provided in the prompt. Specifically, it achieves SOTA performances on the Semeval and SciERC datasets, and competitive performances on the TACRED and <cit.> ACE05 datasets. <cit.> utilized GPT-4 under the ICL framework on REFinD and achieved 3rd rank in the shared task.
However, the retrieval of demonstration examples is a key factor in the overall performance of these pipelines. Finding effective demonstrations often relies on learning-based retrieval <cit.>. These learning-based retrievers use annotated data and an LLM. This type of retrieval strategy comes with increased cost (API, infrastructure, etc.) and time, as more experiments are required because most LLMs are black boxes, and it also needs special expertise. Apart from the implicit expression challenge mentioned above, REFinD poses another challenge to rule-based systems in terms of longer and complex sentences. For example, <cit.> cites that the average sentence length in the REFinD dataset is 53.7 while the average sentence length in the TACRED dataset <cit.> is 36.2. Further, as per <cit.>, REFinD includes more complex sentences than TACRED, with an average entity-pair distance of 11, compared to 8 in TACRED. Because of this, writing rules at the surface text level is a challenge. Hence, rules at the lexico-syntactic level are the need of the hour. However, strict matching of these rules can yield high-precision but low-recall results due to the accuracy of syntactic parsing. Hence, a robust fuzzy pattern matching system is required. Inspired by recent studies <cit.> using k-Nearest Neighbor to retrieve diverse expressions for language generation tasks, we introduce a simple but effective approach that consults training relations at test time through a nearest-neighbor search over dense vectors of lexico-syntactic patterns and provides a simple yet effective means to tackle the above issues. Our method achieves an improvement of 1.18% over the baseline (F1-score 0.7516). We achieved our results using commodity hardware within a day. That’s why our approach is easier to deploy, lightweight and fast. We further show that our approach can provide a good start (F1-score of 0.5122) for a human-in-the-loop setup when a small number of annotations (approx. 10% of training data) are available and that it is also beneficial (F1-score of 0.6939 with approx. 10% of training data) when domain experts can provide high-quality patterns. § PRELIMINARY BACKGROUND §.§ Task Definition Let C denote the input context and e1 in C, e2 in C denote the pair of entities. Given a set of predefined relation classes R, relation extraction aims to predict the relation y in R between the pair of entities (e1, e2) within the context C, or, if there is no predefined relation between them, predict y="no relation". §.§ Data The REFinD dataset <cit.> is the largest relation extraction dataset for financial documents to date. Overall, REFinD contains around 29K instances and 22 relations among 8 types of entity pairs. REFinD is created using raw text from various 10-X reports (including 10-K, 10-Q, etc., broadly known as 10-X) of publicly traded companies obtained from the US Securities and Exchange Commission. § NEAREST NEIGHBOR SEARCH OVER VECTORIZED LEXICO-SYNTACTIC PATTERNS §.§ Generating Lexico-Syntactic Patterns We replaced words representing entities of interest with their entity types given in the dataset. Instead of conducting nearest neighbor search on a complete sentence, we applied the Spacy Dependency Parser[https://spacy.io/] and considered the shortest dependency path (henceforth SDP) between two entities to deal with long and complex sentences, with the intuition that considering all sentence words can do more harm in search. The SDP is essential for relationship identification in most cases.
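As an illustration of this step, the SDP between two entity mentions can be computed from the spaCy parse roughly as follows; the use of networkx for the path search, the naive mention matching, and all names are our own choices for the sketch, not necessarily the authors' implementation.

import spacy
import networkx as nx

nlp = spacy.load("en_core_web_sm")  # any spaCy pipeline with a dependency parser works

def sdp_patterns(sentence: str, e1: str, e2: str):
    """Return the plain SDP pattern and the SDP-DEP pattern between two entity mentions.
    Mentions are located by a naive match on their last token, for illustration only."""
    doc = nlp(sentence)
    # Undirected graph over token indices, one edge per dependency arc.
    graph = nx.Graph([(tok.i, child.i) for tok in doc for child in tok.children])

    def locate(mention: str) -> int:
        last = mention.split()[-1]
        return next(tok.i for tok in doc if tok.text == last)

    path = nx.shortest_path(graph, source=locate(e1), target=locate(e2))
    sdp = " ".join(doc[i].text for i in path)                          # SDP words only
    sdp_dep = " ".join(f"{doc[i].text}/{doc[i].dep_}" for i in path)   # words enriched with dependency labels
    return sdp, sdp_dep

Entity-type substitution (for the SDP-NER and SDP-DEP-NER variants) can be applied to the same path before joining the tokens.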
We apply Spacy NER on REFinD sentences and replace actual named entities with their types to create higher-level patterns. We also enriched all SDP words with their Dependency Labels to utilize structure information in our search.For each sentence, we create 4 patterns: 1. SDP words only (SDP) 2. SDP words with named entities replaced with their types (SDP-NER) 3. SDP words enriched with their Dependency Labels (SDP-DEP) 4. SDP words with named entities replaced with their types and also enriched with their Dependency Labels (SDP-DEP-NER). Example patterns are shown in Figure 2.§.§ Generating Dense Vectors for Lexico-Syntactic PatternsWe converted all 4 types of Lexico-Syntactic Patterns into Dense Vectors as it performs better than Sparse Vectors. To create a vector, we employed an all-mpnet-base-v2[https://huggingface.co/sentence-transformers/all-mpnet-base-v2] sentence encoder. We also created vectors for original sentences using the encoder. §.§ Creating Class Specific IndicesFor each pattern type mentioned above, we created 21 dense vector indices each representing a relationship class except 'no_relation' class. We split 'no_relation' training data instances into 8 splits as per entity-type pairs such as "Person-Organization", "Organization-Organization" etc. and created indices for each split. In this way, there are 29 indices in total for each pattern type. Each element of the index represents a vectorized lexico-syntactic pattern for each training example. For around 11.89% of the training sentences, we faced issues in generating dependency tree and/or SDP. To deal with this, we also created another 29 indices containing dense vectors for original sentences. §.§ Conducting Nearest Neighbor SearchAfter configuring lexico-syntactic pattern type and value of K, Given a test sentence and an entity-type pair, we first create a vector representing its lexico-syntactic pattern obtained using steps described above. With the entity-type pair, appropriate relation class indices are selected for search. The pattern vector is searched in every appropriate class index using cosine similarity and top K vectors from each class index are obtained. The similarity scores of each of these top K vectors are averaged and the class having the highest similarity score is selected. 
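A compact sketch of this scoring step (encode the pattern, score it against each admissible class index by cosine similarity, average the top-K similarities per class) is given below; index construction, the entity-type filtering of candidate classes, and the sentence-vector fallback described next are omitted, and the data layout is an assumption of ours.

import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-mpnet-base-v2")

def build_index(patterns: list[str]) -> np.ndarray:
    """One dense index per relation class: a matrix of normalized pattern vectors."""
    return np.asarray(encoder.encode(patterns, normalize_embeddings=True))

def classify(pattern: str, class_indices: dict[str, np.ndarray], k: int) -> str:
    """Return the class whose top-K cosine similarities to the test pattern have the highest mean."""
    query = encoder.encode(pattern, normalize_embeddings=True)
    best_label, best_score = None, float("-inf")
    for label, index in class_indices.items():
        sims = index @ query              # cosine similarity, since vectors are normalized
        score = float(np.sort(sims)[-k:].mean())
        if score > best_score:
            best_label, best_score = label, score
    return best_label

With 21 relation indices plus the entity-type-specific no_relation splits, class_indices would hold the 29 indices described above for a given pattern type.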
In the case of syntactic parsing failures, as a fallback strategy, a vector of the original sentence is created and is used against the class-specific sentence indices in the search, the same way as mentioned above. [Figure 3: Sensitivity Analysis. F1-score of the SDP+DEP+NER pattern as a function of K (1 to 20); the marked point is K=14 with F1 = 0.7634.] § EXPERIMENT SETTINGS §.§ Dataset The REFinD dataset <cit.> released with the shared task is a part of the "Knowledge Discovery from Unstructured Data in Financial Services" (KDF) workshop, which is collocated with SIGIR 2023. There are 20070, 4306 and 4300 instances of training data, development data and public test data respectively. The organizers have released training data, development data and public test data with gold labels but haven't released private test data with gold labels. Because of that, we are not able to benchmark our system against the winners of the shared task. Since the leaderboard [https://codalab.lisn.upsaclay.fr/competitions/11770] and gold labels for the development data are available, we have benchmarked our approach against the leaders on the development data. We have used training data and public test data to create class-specific indices to perform nearest neighbor search for development data sentences. §.§ Hardware Resources We have used a laptop with 16GB RAM and an Intel® Core™ i7-7500U CPU @ 2.70GHz × 4 to produce these results. §.§ Efforts Given the dataset, all setup and experiments were conducted within a day. § RESULTS We conducted experiments with 4 different pattern types. To find the best value of K, we created a 10% split from the training data and experimented with different values of K (1 to 20). During evaluation, we faced issues in generating a dependency tree and/or SDP for around 8.7% of instances, and for those instances, indices containing sentence vectors were used as a fallback strategy. The results in Table 1 show that our best F1-score is 0.7634 for the SDP-DEP-NER pattern and K=14 (Top K vectors), and our method shows an improvement of 1.18% over the baseline. Figure 3 shows how sensitive this approach is with respect to different values of K. This confirms our intuition that there is value in utilizing vectorized lexico-syntactic patterns to deal with long and complex sentences and implicit expressions. Further, splitting instances as per the class and performing lazy classification over these splits can help in dealing with a dataset with long-tail relation classes. To explore the effectiveness of our approach in a human-in-the-loop situation, we conducted a few experiments as shown in Figure 4. We randomly selected N patterns per class from the training data and built indices with those patterns only. We tried different values of N. With N=100 and K=1 (derived from the dev split), we achieved an F1-score of 0.5122 with around 10% of the original training data.
It shows that vectorized lexico-syntactic patterns and the cosine similarity based scoring can be a good start to label similar instances when the number of annotations is small. This method can be used in a human-in-the-loop setup to either filter out similar instances (explore) or to find similar instances (exploit) for further human review/annotation. To explore the effectiveness of our approach when domain experts are available and can provide high-quality patterns, especially for a task like this, which is restricted to a particular domain, types of documents, types of entities and a handful of relations, we conducted a few experiments as shown in Figure 4. To approximate this experiment, we selected N training patterns from each class that occur frequently in the Top K search when development patterns are classified correctly. We call these patterns the Most-Frequent Patterns. We built indices with those patterns only. As shown in Figure 4, with N=100 and K=4 (derived from the dev split), we achieved an F1-score of 0.6939 (6.95% less than the best result) with around 10% of the original training data. It shows that the approach can bridge gaps quickly with a small amount of high-quality patterns. [Figure 4: Training Patterns Selection. F1-score against the number of patterns per class (25 to 100), comparing randomly selected patterns (K=1) with Most-Frequent patterns (K=4).] § CONCLUSION Our approach consults training relations at test time through a nearest-neighbor search over dense vectors of lexico-syntactic patterns. We evaluated our approach on REFinD and show that our method achieves state-of-the-art performance without any direct access to large language models (LLMs), supervised training or fine-tuning, or any handcrafted rules. We achieved our results using commodity hardware within a day. That’s why our approach is easier to deploy, lightweight and fast. We further show that our approach can provide a good start for a human-in-the-loop setup when a small number of annotations are available and that it can also be beneficial when domain experts can provide high-quality patterns. § LIMITATIONS Since our method is based on nearest-neighbor search, it's sensitive to the value of K. Furthermore, our method is also very sensitive to syntactic parsing and NER. Our vectors are not optimal representations because our syntactic patterns are not a natural fit for the sentence encoder. | http://arxiv.org/abs/2310.17714v1 | {
"authors": [
"Pawan Kumar Rajpoot",
"Ankur Parikh"
],
"categories": [
"cs.CL",
"cs.CE"
],
"primary_category": "cs.CL",
"published": "20231026181956",
"title": "Nearest Neighbor Search over Vectorized Lexico-Syntactic Patterns for Relation Extraction from Financial Documents"
} |
0000-0003-4624-9752 Dipartimento di Informatica, Università di Pisa, Largo B. Pontecorvo 3, 56127 Pisa, Italy [email protected] Dipartimento di Informatica, Università di Pisa, Largo B. Pontecorvo 3, 56127 Pisa, Italy [email protected] Dipartimento di Informatica, Università di Pisa, Largo B. Pontecorvo 3, 56127 Pisa, Italy [email protected] Platforms, Inc., USA [email protected] over-approximation methods have been proved effective for guaranteeing the absence of errors, but inevitably they produce false alarms that can hamper programmers. Conversely, under-approximation methods are aimed at bug finding and are free from false alarms. We introduce Sufficient Incorrectness Logic (SIL), a new under-approximating, triple-based program logic to reason about program errors. SIL is designed to set apart the initial states leading to errors. We prove that SIL is correct and complete for a minimal set of rules, and we study additional rules that can facilitate program analyses. We formally compare SIL to existing triple-based program logics. Incorrectness Logic and SIL both perform under-approximations, but while the former exposes only true errors, the latter locates the set of initial states that lead to such errors, as Outcome Logic can do too. Hoare Logic performs over-approximations and as such cannot capture the set of initial states leading to errors in nondeterministic programs – for deterministic and terminating programs, Hoare Logic and SIL coincide. Finally, we instantiate SIL with Separation Logic formulae (Separation SIL) to handle pointers and dynamic allocation, and we prove its correctness. We argue that in some cases Separation SIL can yield more succinct postconditions and provide stronger guarantees than Incorrectness Separation Logic and can support effective backward reasoning. [500]Theory of computation Logic and verification [300]Theory of computation Proof theory [300]Theory of computation Hoare logic [500]Theory of computation Separation logic [300]Theory of computation Programming logic

Sufficient Incorrectness Logic: SIL and Separation SIL Francesco Logozzo January 14, 2024

§ INTRODUCTION Formal methods aim to provide tools for reasoning and establishing program guarantees. Historically, research in formal reasoning progressed from manual correctness proofs to effective, automatic methods that improve program reliability and security. In the late 60s, <cit.> and <cit.> independently introduced formal systems to reason about programs. In the 70s/early 80s, the focus was on mechanization.
<cit.> introduced predicate transformers,<cit.> introduced Abstract Interpretation as a foundation for the systematic development of correct static analyses,<cit.> introduced Model Checking as a technique to automatically prove that a finite state system satisfies a given temporal property,<cit.> introduced automatic type inference in mainstream languages, and<cit.> put the bases for mechanized program proofs. Those seminal works in conjunction with the development of automatic, and semi-automatic theorem provers and solvers (e.g., <cit.>) brought impressive wins in proving program correctness of real-world applications.For instance, the Astrée abstract interpreter automatically proves the absence of runtime errors in millions of lines of safety-critical C <cit.>, the SLAM model checker was used to check Windows drivers <cit.>, CompCert is a certified C compiler developed in Coq <cit.>, and VCC uses the calculus of weakest precondition to verify safety properties of annotated Concurrent C programs <cit.>.Despite the aforementioned successes, effective program correctness methods struggle to reach mainstream adoption. As program correctness is undecidable, all those methods over-approximate programs behaviours. Over-approximation guarantees soundness (if the program is proved to be correct, no error will arise) but it may introduce many false positives. False positives are seen as a distraction by professional programmers, who are skeptical of the adoption of tools with a high false positive ratio. Tools such as Microsoft Prefix <cit.> or Microsoft CodeContracts <cit.> aim at reducing the number of false positives by using human-provided annotations.Nevertheless, in some scenarios, the annotation effort may be over-killing so that even those methods struggle to enjoy universal adoption.To address the aforementioned issues, researchers have recently invested more efforts in effective under-approximations. The overall goal is to develop principled techniques to assist programmers with timely feedback on their code about the presence of true errors, with few or zero false alarms. For instance, Microsoft Sage combines SMT solving with program executions (which are an under-approximation of the program semantics) to find security bugs <cit.> and the Bounded Model Checking technique unrolls recursion up to a certain depth to find property violations <cit.>.Recently, <cit.> introduced Incorrectness Logic (IL). Intuitively, IL uses underapproximations to reduce the problem complexity by dropping disjunctions during the state space exploration. IL is the theoretical foundation of many successful bug-catching tools <cit.>. IL triples resemble Floyd-Hoare logic ones, but they have a different meaning: any error state that satisfies the postcondition can be produced by some input states that satisfy the precondition—since in this paper we deal with different kinds of program logics, a compact legenda of the different notation used in the literature for the various triples is reported in Fig. <ref>. It is worth noting that in IL nothing is said about which input states are responsible for a given error. This is possibly rooted in the forward flavour of the under-approximation, which follows the ordinary direction of code execution. 
§.§ Backward under-approximation and Lisbon TriplesAccording to the official origins of IL, as first narrated in <cit.> and further discussed in <cit.>, it was at POPL'19 in Lisbon where Derek Dreyer and Ralf Jung suggested Peter O'Hearn to look at bug-finding in terms of a logic for proving presence of faults. However, the proposed model of triples did not fit well with a key feature of Pulse, a bug-catching tool developed mainly by Jules Villard at Facebook, namely its ability to drop the analysis of some program paths, for which IL provides a sound logical foundation instead. The idea of such `Lisbon' triples is that for any initial state satisfying the precondition, there exists some trace of execution leading to a final state satisfying the postcondition and it can be dated back to Hoare’s calculus of possible correctness <cit.>, even if no form of approximation was considered there. This condition aims at finding input states causing bugs: if the postcondition denotes an error, any initial state satisfying the precondition exhibits at least one unwanted behaviour. Lisbon triples were then briefly accounted for in <cit.> and <cit.>, under the name backwards under-approximate triples. Later, the idea of finding sources of error was one of the motivations leading to the more general Outcome Logic (OL) <cit.>, which is able to reason about multiple executions at the same time via the outcome conjunction ⊕. However, since OL has been designed to provide a single unified theory that can be used for both correctness and incorrectness reasoning, OL triples are multiobjectives, in the sense that they can also exploit over-approximation to generalize HL, and are complex enough to be applicable to probabilistic programs as well. In particular: (i) OL assertions are no longer predicates over program states, but rather over a suitable outcome monoid, (ii) Lisbon triples are exactly OL triples matching the pattern PQ⊕⊤, (iii) besides Lisbon triples, OL can prove many more triples, such as those à la HL for partial correctness.Thus, so far, the theory of Lisbon triples is missing a dedicated, complete program logic tailored to catch the source states of errors by exploiting backward underapproximation reasoning. Note that in this setting the requirement of completeness is not negligible as it ensures the identification of all sources of an error. In particular, given an incorrectness specification, a complete proof system is able to infer weakest preconditions that need to be reported to the programmer for discerning all states that lead to bugs. §.§ Sufficient Incorrectness LogicIn this paper, we introduce Sufficient Incorrectness Logic (SIL). Intuitively, SIL is an underapproximation logic that focuses on finding the sources of incorrectness rather than just highlighting the presence of bugs. SIL formalizes precisely Lisbon triples: a SIL triplePQcan be interpreted as “all input states that satisfy P have at least one execution of the programleading to a state that satisfies Q". In the absence of nondeterminism, SIL guarantees that a state satisfying the precondition always leads to an error. In the presence of nondeterminism, SIL guarantees that there exists an execution that leads to an error. Sufficient incorrectness preconditions are extremely valuable to programmers, because, by pointing out the sources of errors, they serve as starting point to scope down debugging, fuzzing, and testing in general, as was already observed in <cit.>. 
SIL is to some extent orthogonal to OL because its focus is on: (i) exhibiting a minimal set of rules that are complete and correct for backward under-approximation; (ii) defining a default deduction mechanism that starts from some specification of erroneous outcomes and traces the computation back to some initial states that can produce such errors; and (iii) spelling out, a posteriori, a formalization and generalization of the backward analysis step performed by industrial grade analysis tools for security developed at Meta, such as Zoncolan <cit.>, Mariana Trench <cit.>, and Pysa <cit.>. §.§ Why Sufficient Incorrectness Logic?SIL characterizes the initial states leading to an error. However, one may ask if we do need to develop a new logic and whether existing ones suffice. This is a legitimate question, and we answer it by comparing SIL with existing program logics.§.§.§ SIL and Incorrectness LogicLet us consider the program 42 in Figure <ref>, also used by <cit.> to show the essence of IL triples. We assume that Q_42 (z = 42) denotes the set of erroneous states. Any SIL triple provides a sufficient condition for the input state that causes the error.For instance, with the deduction system introduced in this paper (but not with IL <cit.>) we can prove the triple(y) ∧(x) 42Q_42stating that any state where x is even and y is odd will lead to the error (i.e., z=42). In IL, but not in SIL, one can prove that some errors can be reached even when we start in a safe state. For instance, the following triple holds in IL:z=11 42Q_42∧(y) ∧(x).However, that triple is not valid in SIL, because it is not true that for any state where z=11 the program will reach an error, e.g., when we start from the state z↦ 11,x↦ 0,y↦ 0.§.§.§ SIL and Hoare logicIf a programis deterministic and non-divergent, then SIL triples are equivalent to Hoare triples—we only interpret the postcondition in the latter as the error states. For instance, in the example of Figure <ref>, the program 42 is trivially deterministic and non-divergent, so the Hoare triple (y) ∧(x) 42Q_42is valid (where Q_42 (z=42) as before). However, this is no longer true in the presence of divergence and nondeterminism. To illustrate this, consider the program 𝗋42𝗇𝖽 in Figure <ref>, which introduces nondeterminism in the program 42 of Figure <ref>. The SIL triple (y) ∧(x)𝗋42𝗇𝖽Q_42is still valid: any state satisfying the precondition can lead to the error when an even value is assigned to x. However, the Hoare triple (y) ∧(x)𝗋42𝗇𝖽Q_42 is no longer valid, because starting from an initial state where 𝚣≠ 42, 𝚡 may be assigned an odd value, hence never entering the conditional. Otherwise stated, in Hoare triples, the postcondition must over-approximate all the possible outcomes, which in the presence of nondeterminism (or divergence) means that it cannot single out the error states, as under-approximating logics can do.§.§.§ SIL and Outcome logic Outcome Logic (OL) <cit.> extends HL by allowing to express both correctness and incorrectness properties. Although OL has been designed as a general framework that is parametric in a monoid of outcomes, the instance we compare with SIL is based on the powerset monoid, called the nondeterministic instance of OL ().Even though Lisbon triples can be expressed in OL, we show a simple example of a Lisbon triple whose proof in SIL is straightforward but which we were not able to derive in . 
To understand the example, it is worth noting that inthere are three different forms of disjunctive predicates that coexist with different meanings. To see this, take the two atomic assertions P_x(x = 0) and P_y(y = 0). A set of states m satisfies their union P_x ∪ P_y if ∀σ∈ m. (σ(x)=0 ∨σ(y)=0).Instead, m satisfies the disjunction P_x ∨ P_y if (∀σ∈ m. σ(x)=0) ∨ (∀σ∈ m. σ(y)=0). Lastly, m satisfies the outcome composition P_x ⊕ P_y if m can be decomposed as the union of two non-empty sets m_x and m_y that satisfy P_x and P_y respectively. The difference between P_x ∪ P_y and P_x ∨ P_y should be clear. If we take m {σ|σ(x)=0 ∧σ(y)=1}, then m satisfies both P_x ∪ P_y and P_x ∨ P_y, but not P_x ⊕ P_y. Now take the nondeterministic code_xy((y=0)?; x := 0)((x=0)?; y := 0)and let Q (x=0 ∩ y=0) be an incorrectness specification. By straightforward application of SIL rules we can derive the Lisbon triple:P_x∪ P_y_xyQThe correspondingtriple isP_x∪ P_y_xyQ⊕⊤but, by using the rules available in <cit.>, to our best efforts, we were only able to prove thetriple:P_x⊕ P_y_xyQ⊕⊤ . Notably, the simpler Lisbon triples P_x_xyQ and P_y_xyQ can be derived in both SIL andproof systems, but the single triple P_x∪ P_y_xyQ exposes the weakest SIL precondition for the specification Q, which is useful to determine the least general assumptions that may lead to an error. Thus the example suggests that weakest preconditions are not necessarily derivable in . §.§.§ SIL and Necessary Preconditions<cit.> introduced Necessary Conditions (NC) as a principled foundation for automatic precondition inference. Intuitively a necessary precondition is such that it removes entry states that inevitably will lead to errors without removing any good execution. Therefore, whereas sufficient conditions (e.g., weakest liberal preconditions) require the caller of a function to supply parameters that will never cause an error, necessary conditions only prevent the invocation of the function with arguments that will definitely lead to some error. Whilst Necessary Conditions and SIL have overlaps (e.g., they focus on providing conditions for errors) they express two different concepts.Let us illustrate it with the example of Figure <ref> with again error states Q_42 (z=42). The correctness specification is thus Q_42, that is z ≠ 42. A necessary precondition for correctness is also z ≠ 42 because, no matter which nondeterministic value gets assigned to 𝚡, if z = 42 on entry, the program will reach an error state. We denote it with the NC triple z≠ 42𝗋42𝗇𝖽 Q_42.Please note that the precondition of the SIL triple (<ref>) has a non-empty intersection with the necessary condition in (<ref>), i.e., the formula z≠ 42 ∧(y) ∧(x) is satisfiable. This should not come as a surprise: due to nondeterminism, a state satisfying that formula admits both correct and erroneous executions. §.§ A Taxonomy of Triple-Based Program LogicsOur initial motivation for the work in this paper emerged from the attempt to characterize the validity conditions for existing triple-based program logics as under- or over-approximations of forward or backward semantics.This led to the taxonomy of HL, IL and NC depicted in Figure <ref>. In this formalization process, we realized there was a missing combination, the one that originated SIL.The taxonomy is obtained as follows. Given a (possibly nondeterministic) programlet us denote by(resp. ) its forward (resp. backward) collecting semantics. 
The forward semantics P denotes the set of all possible output states ofwhen execution starts from a state in P (andterminates). Vice versa, the backward semantics Q denotes the largest set of input states that can lead to a state in Q.We define the validity of the various logics in terms of forward and backward semantics.For any programand sets of states P,Q we let the following: HL triples: PQ is valid if P ⊆ Q; IL triples: PQ is valid if P ⊇ Q; NC triples: PQ is valid if Q ⊆ P; SIL triples: PQ is valid if Q ⊇ P. By using the validity conditions in Definition <ref>, we can characterize HL, IL, NC, and SIL according to (i) whether the condition is expressed in terms of forward or backward semantics, and (ii) whether it is an over- or an under-approximation (Figure <ref>). Indeed, in HL and NC it is safe to enlarge the respective target set (i.e., Q and P) so we do categorize them as over-approximations. Similarly, in IL and SIL it is safe to shrink their target sets, so we do classify them as under-approximations. Furthermore, from the validity conditions, we can immediately derive the consequence rules for each logic: for HL and SIL we are allowed to weaken Q and strengthen P, while for IL and NC we can do the opposite, i.e. the direction of the consequence rules are determined by the diagonals in Figure <ref>.Notwithstanding the fact that NC and IL share the same consequence rule, we will show that NC and IL are not related, whereas NC is tightly connected to HL.We obtain that an NC triple PQ is valid if and only if the HL triple ¬ P¬ Q is valid.By duality, one might expect a similar connection to hold between IL and SIL, but we show this is not the case.Interestingly, IL and SIL share the possibility to drop disjuncts, which was in fact one of the leading motivations for the introduction of IL to increase scalability and make the methods effective: the difference is that IL drops disjuncts in the postconditions, while SIL in the preconditions, a feature that is illustrated in Section <ref>. §.§ Separation Sufficient Incorrectness LogicWe instantiate SIL to the case of Separation Logic, showing how we derive a new logic to identify the causes of memory errors. We call that new logic Separation SIL. Intuitively, Separation SIL borrows the ability to deal with pointers from Separation Logic and combines it with the backward under-approximation principles of SIL.We exemplify Separation SIL using the motivating example of <cit.> for Incorrectness Separation Logic.Consider the program in Figure <ref>. The authors derive the incorrectness triplev ↦ zz ↦ -𝗋𝖼𝗅𝗂𝖾𝗇𝗍v ↦ yy ↦ -xUsing Separation SIL we can derive the triplev ↦ zz ↦ - 𝗋𝖼𝗅𝗂𝖾𝗇𝗍x The triple (<ref>) proves the existence of a faulty execution starting from at least one state in the precondition. However, the Separation SIL triple (<ref>) has both a more succinct postcondition capturing the error and a stronger guarantee: every state in the precondition reaches the error, giving (many) actual witnesses for testing and debugging the code. In Section <ref> we show the derivation and in Section <ref> comment on how SIL principles apply in it. §.§ ContributionsIn summary, we make the following three main contributions: * We introduce SIL, a novel logic for sufficient incorrectness triples. SIL enhances program analysis frameworks with the ability to identify the source of incorrectness. * We prove that SIL is correct and complete for a minimal set of rules. * We study extensions of those rules for program analysis. 
* We investigate the analogies and differences between SIL and OL. * We characterize SIL by formally constructing a taxonomy of triple-based program logics. * We prove that while both IL and SIL perform under-approximations, they capture similar but different properties: the existence vs. the causes of errors. * We show that, for deterministic and terminating programs, SIL and HL do coincide. However, this is not true with nondeterminism, hence the need for a new logic. * We prove that HL and NC, while operating over-approximations in different directions (forward vs. backward) are isomorphic. * We instantiate SIL to Separation Logic formulae, deriving a new logic (Separation SIL) for memory errors. * We prove the correctness of Separation SIL. * We show an example where Separation SIL triples are more convenient than Incorrectness Separation Logic ones and where their backward inference process is more natural than using Outcome Logic. §.§ Structure of the paperIn Section <ref> we compare our work with well-established results. In Section <ref> we recall some basic notions and introduce the notation used in the rest of the paper. Section <ref> defines the new logic SIL and proves its main properties. Section <ref> gives some insights into the relations among SIL, HL, IL and NC. In Section <ref> we combine SIL and Separation Logic to design Separation SIL. We conclude in Section <ref> pointing out future work. Appendix <ref> contains proofs and minor technical results.§ RELATED WORKThe origins of triples for highlighting sufficient preconditions for incorrectness have been discussed already in Section <ref>. In the literature, Lisbon triples (there called backwards underapproximation) were mentioned in <cit.>, but neither of them fully developed a corresponding program logic. Instead, we develop such a correct and complete proof system for the first time, study the properties of the deductive system, compare it with existing program logics, and instantiate it for Separation Logic. Moreover, <cit.> introduce the notion of manifest errors in the context of IL, but mention that backwards under-approximation is able to characterize them directly. Intuitively, an error is manifest if it happens regardless of the context. They can be easily captured in SIL: thanks to the completeness result, a postcondition Q defines a manifest error if and only if the corresponding SIL triple with a true precondition, namely Q, is provable.The idea of tracking the sources of errors was one of the motivations that led to Outcome Logic <cit.>, which is able to express both SIL/Lisbon triples and HL triples. Manifest errors can also be characterized with OL triples of the form Q ⊕⊤. Some differences between OL and SIL have been already sketched in Section <ref> and we refer to Section <ref> for a more technical comparison. In particular, we argue that SIL proof rules are simpler to use for inferring the source of errors starting from an incorrectness specification because they are tailored to this aim. It also seems to us that, contrary to SIL, weakest sufficient preconditions are not necessarily derivable in .Exact Separation Logic (ESL) <cit.> combines Separation Logic and Incorrectness Separation Logic to recover the exact semantics of a program and produce function summaries that can be used for both correctness and incorrectness. In the schema in Figure <ref>, ESL would be placed between HL and IL. This suggests an extension of the schema with logics also in between the four table cells. 
Following this idea, we could position the calculus of possible correctness <cit.> between NC and SIL, and OL on the diagonal between HL and SIL.<cit.> propose a calculus for quantitative weakest pre and strongest postconditions, that subsume the Boolean case. Their Boolean weakest (resp. strongest) precondition is equivalent to our backward (resp. forward) semantics. Using these, they propose a classification reminiscent of Definition <ref>. They devise notions of total/partial in/correctness that correspond to the four logics, and draw the same correspondence between HL and NC. However, they neither develop program logics for over/under approximations, nor compare in detail SIL and IL with other logics.Algebraic Hoare Logic (AHL) <cit.> is another generalization of HL aiming at formalizing Design by Contracts <cit.> and program refactoring. We take inspiration from AHL to model pre/post-conditions in triple-based logic as sets of states—and not assertions in some formal language. Our development is different because we want to characterize the set of input states that bring to an error, not to prove that the transformed program is semantically equivalent to (or better than) the original one.Dynamic Logic (DL) <cit.> is an extension of modal logic which is able to describe program properties. It is well-known that HL and IL can be encoded in DL <cit.> using forward box and backward diamond operator, respectively. Moreover, both SIL and NC can be encoded using forward diamond (the latter via Proposition <ref> and the HL encoding). Thus, all four conditions in Definition <ref> can be encoded in DL, which can then be used to understand their connections and possibly get new insights. § BACKGROUND §.§ Regular Commands.Like <cit.>, we consider a simple regular command language with an explicit nondeterministic operator . HL can be translated in this framework as in <cit.>. As usual, we define arithmetic expressions a∈ and Boolean expressions b∈ as follows:∋a ::= n | x |a♢a∋b ::=|¬b|bb|a≍a where n ∈ is a natural number, x is a (integer) variable, ♢∈{ +, -, ·, …} encodes standard arithmetic operations, and ≍∈{ =, ≠, ≤, ≤, …} standard comparison operators.The syntax of regular commands ∈ is:∋ ::=skip|x := a|b?∋ ::=|;||^ Regular commands provide a general template which can be instantiated differently by changing the setof atomic commands . The setdetermines the kind of operations allowed in the language, and we assume it to contain deterministic assignments and boolean guards.The command skip does nothing, while x := a is the standard deterministic assignment. The semantics of b? is that of an “assume" statement: if its input satisfies b it does nothing, otherwise it diverges. At times, we also use nondet() to describe either a nondeterministic assignment (x := nondet()) or boolean expression (nondet()?). Sequential composition is written ; and ⊕ is the nondeterministic choice. The Kleene star ^ denotes a nondeterministic iteration, wherecan be executed any number of time (possibly none) before exiting. This formulation can accommodate for a standard imperative programming language <cit.> by defining conditionals and while-statements as below:if (b) {r_1 } else {r_2 } (b?; r_1)((¬b)?; r_2)while (b) {r} (b?; r)^; (¬b)? Given our instantiation of , we consider a finite set of variablesand the set of stores Σ→ that are (total) functions σ fromto integers. We tacitly assume that 0 is the default value of uninitialised variables. 
Given a store σ∈Σ, store update σ[ x ↦ v ] is defined as usual for x ∈ and v ∈. We consider an inductively defined semantics · for arithmetic and boolean expressions such that aσ∈ and bσ∈𝔹 for all a∈, b∈ and σ∈Σ. The semantics of atomic commands ∈ and S ∈℘(Σ) is defined below:skip SS x := a S {σ[x ↦aσ] σ∈ S }b? S {σ∈ S bσ = } The collecting (forward) semantics of regular commands · : →℘(Σ) →℘(Σ) is defined by structural induction as follows:SS _1 ; _2S _2_1 S _1 _2 S _1 S ∪_2S ^ S ⋃_n ≥ 0^n S Roughly, S is the set of output states reachable from the set of input states S. We remark that in generalis nondeterministic and possibly non-terminating, so even when S is a singleton, S may contain none, one or many states. To shorten the notation, we write σ instead of {σ}.We define the backward semantics as the opposite relation of the forward semantics, that isσ' {σσ' ∈σ}or, equivalently,σ∈σ' σ' ∈σ.We lift the definition of backward semantics to set of states by union as usual.The backward semantics can also be characterized compositionally, similarly to the forward one: For any commands ,_1,_2∈ all of the following equalities hold: _1; _2 = _1∘_2_1 _2 = _1∪_2^ = ⋃_n ≥ 0^n §.§ Assertion Language The properties of triple-based program logics depend on the expressiveness of the formulae (the assertion language) allowed in pre and postconditions <cit.>. As an example consider HL. When the assertion language is first-order logic, HL is correct but not complete, because first-order logic is not able to represent all the properties needed to prove completeness, notably loop invariants <cit.>. If the assertion language is not close under conjunctions and disjunctions HL may be incorrect <cit.>.To overcome the aforementioned problems, following <cit.>, we assume P and Q to be set of states instead of logic formulas — we therefore write, e.g., σ∈ P to say that the state σ satisfies the precondition P.§.§ Hoare Logic Hoare logic (HL) <cit.> is a well known triple-based program logic which is able to prove properties about programs. The HL triple PQ means that, whenever the execution ofbegins in a state σ satisfying P and it ends in a state σ', then σ' satisfies Q.When Q is a correctness specification, then any HL triple PQ provides a sufficient condition P for the so called partial correctness of the program . Formally, this is described by the over-approximation property of postconditionsP ⊆ Q HL, so we say that a triple satisfying this inclusion is valid in HL. In its original formulation, HL was given for a deterministic while-language and assuming P and Q were formulas in a given logic. Subsequent work generalized it to many other settings, such as nondeterminism <cit.> and regular commands <cit.>, resulting in the rules in Figure <ref>.This is just a minimal core of rules which is both correct and complete, and there are many other valid rules. A HL triple is provable if and only if it is valid.Although the validity condition (HL) is expressed in terms of the forward semantics and thus classifies HL as a forward over-approximation (according to our terminology), the reader should not be misled to think the adjective forward refers to direction of the deduction process, which is of course totally independent from our classification. For example, it is well known that HL is related to Dijkstra's weakest liberal precondition <cit.> for a given postcondition Q: a triple PQ is valid if and only if P ⊆[](Q). 
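As a quick sanity check (this small example is ours, not taken from the paper): for the assignment x := x + 1 and the correctness specification Q = (x ≤ 10), the weakest liberal precondition is (x ≤ 9). Hence the HL triple {x ≤ 9} x := x + 1 {x ≤ 10} is valid, and so is {x = 0} x := x + 1 {x ≤ 10}, obtained by strengthening the precondition, whereas {x ≥ 0} x := x + 1 {x ≤ 10} is not valid, since the input x = 10 ends in x = 11, which is outside Q.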
In this case, given the correctness specification Q, HL triples can be used for backward program analysis to find the weakest precondition for program correctness.§.§ Incorrectness Logic Dually to HL, Incorrectness Logic <cit.> was introduced as a formalism for underapproximation with the idea of finding true bugs in the code. The IL triple PQ means that all the states in Q are reachable from states in P. This is described by the property P ⊇ Q, IL which characterises valid IL triples. In other words, any valid IL triple PQ states that for any state σ' ∈ Q there is a state σ∈ P such that σ' is reachable from σ. This means that if σ'∈ Q is an “error" state, the triple guarantees it is a true alarm of the analysis. The rules, inspired by previous work on reverse Hoare logic <cit.> and here simplified to not separate correct and erroneous termination states, are shown in Figure <ref>. Just like the HL rules in Figure <ref>, this is a minimal set of correct and complete rules for IL. An IL triple is provable if and only if it is valid. HL and IL aim at addressing different properties. Even the simple, deterministic, terminating program 42 of Figure <ref> can be used to show their difference. Recall thatQ_42 (z = 42) denotes the set of erroneous states, i.e., Q_42 is an incorrectness specification. The valid HL triple (y) ∧(x) 42Q_42 identifies input states that will surely end up in anerror state, while the triple Q_42∧((x) ∨(y)) 42 Q_42 characterize input states that will not produce any error. On the other hand, the valid IL triple 42Q_42∧(y) ∧(x) expresses the fact that error states in Q_42 are reachable by some initial state. Similarly, also the IL triple 42 Q_42∧((x)∨(y)) is valid since the postcondition Q_42 can be reached only when the path conditions to reach the assignment are not satisfied. §.§ Necessary ConditionsFor contract inference, <cit.> introduced the notion of Necessary Conditions (NC). The goal was to relax the burden on programmers: while sufficient conditions require the caller of a function to supply parameters that will never cause an error, NC only prevents the invocation of the function with arguments that will inevitably lead to some error. Intuitively, for Q the set of good final states, the NC triple PQ means that any state σ∈ P admits at least one non-erroneous execution of the program . Recently, the same concept has been applied to the context of security <cit.>.Following the original formulation <cit.>, we can partition the traces of a nondeterministic execution starting from a memory σ in three different sets: 𝒯(σ), those without errors, ℰ(σ), those with an error, and ℐ(σ), those which do not terminate. A sufficient precondition P is such that (σP)(ℰ(σ) = ∅), that is P excludes all error traces. Instead, a necessary precondition P is a formula such that (𝒯(σ) ≠∅ℐ(σ) ≠∅)(σP), which is equivalent to(σP)(𝒯(σ) ∪ℐ(σ) = ∅) . In other words, a necessary precondition rules out no good run: when it is violated by the input state, the program has only erroneous executions.Note that we consider infinite traces as good traces. We do this by analogy with sufficient preconditions, where bad traces are only those which end in a bad state. Consider again the nondeterministic program 𝗋42𝗇𝖽 of Figure <ref>, and the correctness specification Q_42 = (z ≠ 42). Then 𝒯(σ) ℰ(σ) σ(z)≠ 42 ≠∅ ≠∅ σ(z)=42 ∅ ≠∅ The weakest sufficient precondition for this program is P = (z≠ 42 ∧(y)) because no input state σ that violates P is such that ℰ(σ) = ∅. 
On the contrary, we have, e.g., that P= (z≠ 42) is a necessary precondition, while (z > 42) is not, because it excludes some good runs. § SUFFICIENT INCORRECTNESS LOGICIn this section, we explain the rationale for SIL, give a minimal sound and complete set of rules and present some additional rules that can speed up program analysis. §.§ Rationale As outlined in the Introduction, the main motivation for studying sufficient incorrectness conditions is to enhance program analysis frameworks with the ability to identify the source of incorrectness and provide programmers with evidence of the inputs that can cause the errors. This is different from IL, which can be used to determine the presence of bugs, but cannot precisely identify their sources. SIL is also different in spirit to HL and NC, that aim to prove the absence of bugs or prevent them. Roughly, the SIL triple PQ requires that for all σ∈ P, there exists at least one σ' ∈ Q such that σ∈σ' or, equivalently, σ' ∈σ. In other words, any SIL triple PQ asserts that “all states in P have at least one execution leading to a state in Q".More concisely, this property amounts to the validity conditionQ ⊇ P. SILPlease note that, in the presence of nondeterminism, states in P are required to have one execution leading to Q, not necessarily all of them. We make this equivalence formal by means of the following proposition. For any ∈, P, Q ⊆Σ we have Q ⊇ P ∀σ∈ P ∃σ' ∈ Q σ' ∈σ A convenient way to exploit SIL is to assume that the analysis takes as input the incorrectness specification Q, which represents the set of erroneous final states. Then, any valid SIL triple PQ yields a precondition which surely captures erroneous executions. In this sense, P can be considered a sufficient condition for incorrectness. This is dual to the interpretation of IL, where, for a given precondition P, any IL triple PQ yields a set Q of final states which are for sure reachable, so that any error state in Q is a true bug reachable from some input in P. §.§ Proof SystemWe present the proof rules for SIL in Figure <ref>. This set of rules is minimal, correct and complete (Section <ref>); additional valid rules are discussed in Section <ref>.For the skip atomic command, all the states in Q will reach Q itself. For assignments, the sufficient precondition is given by the backward semantics applied to Q (cf. (<ref>)). For Boolean guards, the set of initial states which can lead to Q are all those states in Q that also satisfies the guard. These three cases are summarized in the rule atom.If we know that all states in R have an execution of _2 ending in Q and all states in P have an execution of _1 ending in R, we can deduce that all states in P have an execution of _1; _2 ending in Q. This is captured by rule seq.If all states in P_1 have an execution of _1 ending in Q, they also have an execution of _1 _2 ending in Q since its semantics is a superset of that of _1 (cf. (<ref>)). The reasoning for P_2 is analogous, thus yielding the rule choice. This rule is reminiscent of the equation for conditionals in the calculus of possible correctness <cit.>.For iterations, if the commandis never executed the precondition is trivially the postcondition Q_0. If the command is executed n > 0 times, then the precondition is the union of the preconditions of every iteration step. This is formalized by rule iter. We remark that this rule first appeared in <cit.>.If all the states in P' have an execution leading to a state in Q', then any subset P of P' will lead to Q' as well. 
Similarly, every superset Q of Q' is reachable by executions starting in P. Those two observations lead to cons, which is the key rule of SIL as it enables the under-approximation of the precondition. §.§ Correctness and CompletenessSIL is both correct and complete. Correctness can be proved by induction on the derivation tree of a triple. Intuitively, if the premises of a rule are valid, then its consequence is valid as well, as we briefly observed in the previous section. To prove completeness, we rely on the fact that rules other than cons are exact, that is, if their premises satisfy the equality Q = P, their conclusion does as well. Using this, we prove the triple QQ for anyand Q. Then we conclude using cons to get a proof of PQ for any P ⊆Q. This is formalized by: A SIL triple is provable if and only if it is valid.§.§ Additional Rules for Program AnalysisSIL proof system in Figure <ref> is deliberately minimal: if we remove any rule it is no longer complete. However, there are many other valid rules which can be useful for program analysis and cannot be derived from the five we presented. Some of them are in Figure <ref>.Rule empty is used to drop paths backward, just like IL can drop them forward (and analogous axiom P∅ is valid for IL). Particularly, this allows to ignore one of the two branches with choice, or to stop the backward iteration of iter without covering all the infinite iterations. An example of such an application is the derived rule iter0: it can be proved from rules iter and empty by taking Q_0 = Q and Q_n = ∅ for n ≥ 1. This rule corresponds to not entering the iteration at all. It subsumes HL's rule iter, which is based on loop invariants: those are a correct but not complete reasoning tool for under-approximation <cit.>. To show the use of iter0, we consider the program 𝗋𝗌𝗁𝗈𝗋𝗍𝗅𝗈𝗈𝗉0 in Figure <ref>. It is a slight variation of the program presented as “loop0" in <cit.>, where the final error states were Q_2M (x = 2 000 000). We can write this program in the syntax of regular commands as 𝗋𝗌𝗁𝗈𝗋𝗍𝗅𝗈𝗈𝗉0n := nondet(); ((n > 0)?; x := x + n; n := nondet())^; (n <= 0)? To prove a SIL triple for 𝗋𝗌𝗁𝗈𝗋𝗍𝗅𝗈𝗈𝗉0, we let _w (n > 0)?; x := x + n; n := nondet() be the body of the loop and R_2M (x = 2000000n ≤ 0) = (Q_2M n ≤ 0). We perform the following derivation: [seq] Q_2M𝗋𝗌𝗁𝗈𝗋𝗍𝗅𝗈𝗈𝗉0Q_2M[atom]Q_2Mn := nondet()R_2M [seq]R_2M(_w)^; (n <= 0)?Q_2M[iter0]R_2M(_w)^R_2M [atom]R_2M(n <= 0)?Q_2M It is worth noticing that we use the rule iter0 to bypass the iteration: this is possible because we do not need to enter the loop to have a path reaching the entry point of the program. Rule unroll allows to unroll a loop once. Subsequent applications of this rule allows to simulate (backward) a finite number of iterations, and then rule iter0 can be used to ignore the remaining ones. This is on par with IL ability to unroll a loop a finite number of times to find some postcondition. Please note that rules iter0 and unroll are analogous to rules iter0 and unroll (there called Iterate non-zero) of IL <cit.>.Rule disj allows to split the analysis and join the results, just like HL and IL. However, while a corresponding rule conj which perform intersection is sound for HL, it is unsound for both IL and SIL (see Example <ref>).The last rule unrollsplit is derived from unroll and disj. The intuition behind this rule is to split the postcondition Q in two parts, Q_1 which goes through the unrolled loop and Q_2 which instead skips the loop entirely; the results are then joined. 
This rule is analogous to rule Induction of OL <cit.>. To illustrate the use of unroll, let us consider the full version of “loop0" from <cit.>, which is reported in Figure <ref>. We can translate it in the language of regular commands by letting 𝗋𝗅𝗈𝗈𝗉0x := 0;𝗋𝗌𝗁𝗈𝗋𝗍𝗅𝗈𝗈𝗉0 where 𝗋𝗌𝗁𝗈𝗋𝗍𝗅𝗈𝗈𝗉0 is the one defined in Example <ref>. Final error states are those in Q_2M. To prove a triple for 𝗋𝗅𝗈𝗈𝗉0, we can no longer ignore the loop with iter0 because the initial assignment x := 0 would yield the preconditionon Q_2M if we wanted to extend the proof of Q_2M𝗋𝗌𝗁𝗈𝗋𝗍𝗅𝗈𝗈𝗉0Q_2M from the previous example. In this case, we need to perform at least one iteration, and we do so using unroll. We let _w and R_2M be as in Example <ref>, and we define T_2M (x + n = 2000000n ≥ 0). We show the derivation in Figure <ref>, where we omit the proof of T_2M_wR_2M since it is a straightforward application of seq and atom. Please note that the pattern we used to prove T_2M_w^R_2M given the proof of T_2M_wR_2M is general and does not depend on our specific proof. It corresponds to a single loop unrolling: if we know that one execution of the loop body satisfies some given pre and postconditions, the same holds for its Kleene iteration, because of the under-approximation. This is not a new result: it was already observed for IL. It is worth noticing that a version of the logic where unroll and disj replace iter is not complete. The reason is that unroll is not able to perform an infinite amount of iterations with a finite derivation tree, thus preventing it from proving executions which require an infinite amount of iterations to reach the fixpoint. §.§ SIL and Outcome LogicAs discussed in the Introduction, OL <cit.> already recognized the importance of locating the source of errors, and showed that any SIL triple PQ is valid if and only if the corresponding(the nondeterministic instance of OL, see Section <ref>) triple PQ ⊕⊤ is valid. However, the two approaches differ in the proof systems: in fact, many SIL rules cannot be derived fromrules.The first thing to observe is thatassertions use a higher-level language, which describes sets of subsets of Σ, while SIL assertions are just subsets of Σ. In theassertion language, subsets of Σ (i.e., SIL assertions) are called atomic assertions, which are then combined with higher level operators to denote sets of sets of states.To see this, take the two atomic assertions P_x(x = 0) and P_y(y = 0).In SIL, their conjunction P_x ∩ P_y is satisfied by any state σ such that σ(x)=σ(y)=0, while inthe compound expression P_x ∧ P_y is satisfied by any set of states m such that∀σ∈ m. σ(x)=σ(y)=0.This means that the correspondence between SIL andtriples can only be drawn when the latter is in the specific form PQ ⊕⊤ with P, Q atomic assertions, while in general a proof in OL involves assertions not in that form. Thus, there is no simple way to translate a proof into a proof in SIL. As discussed in Section <ref>, disjunction further distinguishes the assertion languages: the union P_x ∪ P_y is the only kind of disjunction available in SIL, whilehas alsoand ⊕.Some of the rules of SIL can be derived from OL rules. For instance, atom is a combination of OL rules One, Assign and Assume, rule seq can be derived from Seq and rule unrollsplit is very similar to Induction. However, we could not derive three key rules of SIL, namely choice, disj and iter. All these rules depend on the union of atomic assertions. 
The general OL cannot include specialized rules for union because it may not exists for some instantiations of the outcome monoid. Thus, the only rule which can introduce such connective is Consequence, but this is not always sufficient to derive SIL rules because both PQ and P ⊕ Q imply P ∪ Q. Take for instance disj: its natural derivation from OL rules would be to use Split followed by Consequence. However, this does not work. The premises of disj, translated to the equivalent OL triples, becomes P_1Q_1 ⊕⊤ and P_2Q_2 ⊕⊤. Applying Split to these we get P_1 ⊕ P_2Q_1 ⊕ Q_2 ⊕⊤. However, using Consequence we cannot derive P_1 ∪ P_2(Q_1 ∪ Q_2) ⊕⊤: it is (P_1 ⊕ P_2)(P_1 ∪ P_2), not the converse. For rule choice, the technical difference is similar, but there is also a conceptual difference. On the one hand, OL rule Plus sums the postconditions. On the other hand, SIL rule choice takes the union of the preconditions. Roughly, this means thatproof system is better at tracking outcomes from inputs, while SIL is more oriented to backward reasoning, with the proof system helping the inference of preconditions for a given postcondition. Lastly, rule iter is infinitary (i.e., it has infinitely many premises), while all OL rules are finitary. Moreover, the only rule involving the Kleene star operatoris Induction, which is roughly equivalent to unrollsplit, which in turn is strictly less powerful than iter.These argument are a formal consequence of the same difference already shown in the example in Section <ref>.We recall that P_x(x = 0), P_y(y = 0), Q (x=0 ∩ y=0) and _xy is the code in (<ref>). The SIL derivation for P_x∪ P_y_xyQ involves choice to join the two nondeterministic branches: ![choice] P_x ∪ P_y_xyQ[seq]P_y(y=0)?; x := 0Q[atom]y = 0(y=0)?y = 0[atom]y = 0x := 0Q [seq]P_x(x=0)?; y := 0Q[atom]x = 0(x=0)?x = 0[atom]x = 0y := 0Q Instead, usingone can derive both P_x_xyQ ⊕⊤ and P_y_xyQ ⊕⊤ and join them using Split to obtain P_x ⊕ P_y_xyQ ⊕⊤, but from here we cannot derive the precondition P_x ∪ P_y. Please note that P_x ∪ P_y is exactly the weakest SIL precondition for Q: any valid triple P_xyQ satisfies P ⊆ P_x ∪ P_y. From an error reporting perspective, inferring a precondition as weak as possible is interesting because it allows a tool to report to programmers many errors at once, instead of fragmenting the warnings. § RELATIONS AMONG LOGICSWe follow the two-dimensional scheme in Figure <ref> to carry out an exhaustive comparison among the four validity conditions. While the duality between HL and IL was crystal clear from the introduction of IL in <cit.>, as far as we know their relations with NC has not been explored yet. Pursuing such formal comparison leads to some surprising results: * Although NC and IL share some similarities, not only we show that they are not comparable, but we prove a bijective correspondence between NC and HL that exploits double negation. * While the classification in Figure <ref> and the previous item suggests that the proof strategy used to define the bijective correspondence between NC and HL can be extended to relate IL and SIL, we show that this is not the case: IL and SIL are not comparable. * More in general, each one of the conditions (and the corresponding logics) focus on different aspects of programs: none of them is better than the other; instead, each one has its precise application. 
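Before moving to the detailed comparison, the following small Python sketch (our own illustration; the function names are made up) may help to keep the four validity conditions apart. It models a program simply as a finite input/output relation, so it abstracts away divergence, and checks each condition by direct enumeration.

def post(rel, P):                      # forward collecting semantics of the relation rel
    return {s2 for (s1, s2) in rel if s1 in P}

def pre(rel, Q):                       # backward collecting semantics
    return {s1 for (s1, s2) in rel if s2 in Q}

def hl(rel, P, Q):  return post(rel, P) <= Q     # forward  over-approximation  (HL)
def il(rel, P, Q):  return Q <= post(rel, P)     # forward  under-approximation (IL)
def sil(rel, P, Q): return P <= pre(rel, Q)      # backward under-approximation (SIL)
def nc(rel, P, Q):  return pre(rel, Q) <= P      # backward over-approximation  (NC)

# A nondeterministic step mapping state 0 to either 1 or 2:
r = {(0, 1), (0, 2)}
assert hl(r, {0}, {1, 2}) and il(r, {0}, {1}) and nc(r, {0}, {1})
assert sil(r, {0}, {1}) and not hl(r, {0}, {1})  # SIL holds where HL fails

The last assertion is a small instance of the fact, recalled among the contributions, that SIL and HL can differ in the presence of nondeterminism.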
Next, we discuss the classification scheme, we deepen the comparison between NC and other logics, we show how SIL and IL differ, and we extend the discussion along other axes of comparison. §.§ Taxonomy The validity of HL and IL triples correspond to the constraints P ⊆ Q and P ⊇ Q, respectively. Therefore, their validity is expressed using the forward semantics. The difference between the two inequalities resides in the direction of the inclusion: the postcondition is an over-approximation of the forward semantics in HL, while it is an under-approximation in IL. Thus we can classify HL and IL along the first column of Figure <ref>. Dually, the validity condition for SIL triples, namely Q ⊇ P, can be placed in the second column, since it is expressed in terms of backward semantics, and in the second row, because the precondition is an under-approximation of the backward semantics.The fourth inequality Q ⊆ P defines over-approximations of backward semantics, and is dual to both SIL and HL. It turns out that Q ⊆ P is the validity condition for NC. Indeed, a precondition P is necessary for Q if every state which can reach a state in Q is in P. If Q describes good final states, then Q defines all states σ which can reach a good state, so every necessary precondition P must contain at least all states in Q. More formally, given a final state σ' ∈ Q, all traces starting in σ and ending in σ' are in 𝒯(σ). This means that every state σ∈σ' has a trace in 𝒯(σ) (the one ending in σ'), so it must belong to a necessary precondition P as well. This is captured by the following: Given a correctness postcondition Q for the program , any possible necessary precondition P for Q satisfies:Q ⊆PNC.An analogous characterization appeared in <cit.> in terms of weakest precondition. We will use (NC) to understand how NC relates to IL (Section <ref>) and HL (Section <ref>). §.§ Necessary Preconditions and Incorrectness LogicSufficient preconditions are properties implying the weakest liberal precondition: P is sufficient for a postcondition Q if and only if P[](Q), which in turn is equivalent to validity of the HL triple PQ. Necessary and sufficient preconditions are dual, and so are IL and HL. Moreover, NC and IL enjoys the same consequence rule: both can strengthen the postcondition and weaken the precondition. This suggest that there may be a relation between NC and IL. However, the following example shows this is not the case. Let us consider once again the simple deterministic program 42 of Figure <ref>, with Q_42 (z=42). Let Q'_42 (Q_42∧(y) ∧(x)) and P_11 (z = 11). Then IL triple (<ref>) is exactly P_11 42Q'_42, so we know this is valid. However, we observe that the NC triple P_11 42Q'_42 is not valid because the state σ such that σ(x) = 0, σ(y) = 1, σ(z) = 10 has an execution leading to Q'_42 but is not in P_11. Moreover, take for instance P = ((y) (x)), which makes the NC triple P 42Q'_42 valid (any state not satisfying P has either an even y or and odd x, and those variables are not changed by the program). Then it is clear that P_11P. This shows that not only IL triples do not yield NC triples, but also that in general there are NC preconditions which are not implied by IL preconditions.Conversely, consider Q_42 = (z ≠ 42). The NC triple 42 Q_42 is clearly valid, but the IL triple 42 Q_42 is not: for instance, the final state σ' such that σ'(x) = σ'(y) = σ'(z) = 11 is not reachable from any initial state. It follows that the IL triple P 42 Q_42 is not valid for any P. 
Given a valid NC triple PQ and a valid IL triple PQ, we are guaranteed that P ∩P≠∅, that is there are states which satisfy both P and P. However, in general neither P ⊆P nor P⊆ P hold. The difference between NC and IL is striking when we look at their quantified definitions:∀σ' ∈ Q ∀σ∈σ' σ∈ PNC^∀ ∀σ' ∈ Q ∃σ∈σ' σ∈ PIL^∃ While (NC^∀) universally quantifies on initial states (all initial states with a good run must satisfy the precondition), (IL^∃) existentially quantifies on them. §.§ Necessary Preconditions and Hoare LogicIt turns out that NC is strongly connected to weakest liberal preconditions and thus to HL. In fact, let Q be a postcondition: a finite trace is in 𝒯(σ) if its final state satisfies Q and in ℰ(σ) otherwise. In general, a necessary precondition has no relationship with [](Q). However, if we consider ¬ Q instead of Q, we observe that “erroneous" executions becomes those in 𝒯(σ) and “correct" ones those in ℰ(σ). This means that(𝒯(σ) = ∅) σ∈[](¬ Q),from which we derive ¬P[](¬ Q) or, equivalently,¬[](¬ Q) P .Building on Example <ref>, the correctness specification for 𝗋42𝗇𝖽 is Q_42 = (z≠ 42). We have that [𝗋42𝗇𝖽](¬¬ Q_42) = [𝗋42𝗇𝖽](Q_42) = Q_42, because if initially z ≠ 42 then there is the possibility that x is assigned an odd value and z is not updated. Therefore, a condition P is implied by ¬[](¬¬ Q_42) = ¬ Q_42 if and only if it is necessary. For instance, (z≠ 42 ∨(y)) is necessary, while (z > 42) is not. The next bijection establishes the connection between NC and HL: For any∈ and P,Q⊆Σ we have: P ⊆ Q(¬ Q) ⊆¬ P .The proposition means that a necessary precondition is just the negation of a sufficient precondition for the negated post. This was also observed using weakest (liberal) preconditions in <cit.>. §.§ Sufficient Incorrectness Logic and Hoare LogicIn general, HL and SIL are different logics, but they coincide whenever the programis deterministic and terminates for every input (cf. Section <ref>). This is formally captured by: For any ∈ and P, Q ⊆Σ we have: * ifis deterministic, Q ⊇ PP ⊆ Q * ifis terminating, P ⊆ QQ ⊇ P §.§ Sufficient Incorrectness Logic and Incorrectness Logic In Figure <ref>, we highlight the fact that (HL) and (NC) are isomorphic (Proposition <ref>). It is natural to ask if there is a similar connection between (IL) and (SIL). The next example answers negatively. Since IL and SIL enjoy different consequence rules, neither of the two can imply the other with the same P and Q. For negated P and Q, consider the simple program below𝗋1 x := 1 and the two sets of states P_≥ 0 (x ≥ 0) and Q_1 (x = 1). Both the SIL triple P_≥ 0𝗋1Q_1 and the IL triple P_≥ 0𝗋1Q_1 are valid. However, neither ¬ P_≥ 0𝗋1¬ Q_1 nor ¬ P_≥ 0𝗋1¬ Q_1 are valid. So neither (IL) implies negated (SIL) nor the other way around. To gain some insights on why (SIL) and (IL) are not equivalent, given a regular command , we define the set of states that only diverges D_ and the set of unreachable states U_:D_{σσ= ∅} U_{σ' σ' ∉Σ} ={σ' σ' =∅}.In a sense, U_ is the set of states which “diverge" going backward. For any regular command ∈ and sets of states P,Q⊆Σ it holds that: * P ⊇ P ∖ D_; * Q ⊇ Q ∖ U_. Lemma <ref> highlights the asymmetry between over and under-approximation: the composition of a function with its inverse is increasing (but for non-terminating states). 
This explains why (HL) and (NC) are related while (IL) and (SIL) are not: on the over-approximating side, P ∖ D_⊆ P can be further exploited if we know P ⊆ Q via (HL), but it cannot when P ⊇ Q via (IL).§.§ Comparison Among the Rules We recall the rules of SIL, HL and IL in Figure <ref>, so to emphasize the similarities and differences among them. The 𝖺𝗍𝗈𝗆 and 𝗂𝗍𝖾𝗋 rules show that HL and IL exploit the forward semantics, while SIL the backward one. Furthermore, the rule iter of HL says that any invariant is acceptable, not necessarily the minimal one, so that HL relies on over-approximation. This is confirmed by the row for rules 𝖼𝗈𝗇𝗌 and 𝖾𝗆𝗉𝗍𝗒, where on the contrary IL and SIL are shown to rely on under-approximation. The consequence rule is the key rule of all the logics because it allows to generalize a proof by weakening/strengthening the two conditions P and Q involved. The direction of rules cons of SIL and cons of HL is the same and it is exactly the opposite of the direction of rule cons of IL and NC, which coincides. So the different consequence rules follow the diagonals of Figure <ref>. The row for rules 𝗌𝖾𝗊 and 𝖽𝗂𝗌𝗃 show that in all cases triples can be composed sequentially and additively. Rules 𝗂𝗍𝖾𝗋0, 𝗎𝗇𝗋𝗈𝗅𝗅 and 𝗎𝗇𝗋𝗈𝗅𝗅𝗌𝗉𝗅𝗂𝗍 are a prerogative of under-approximation: they are the same for SIL and IL, but they are unsound for HL. §.§ Weakest/Strongest ConditionsDepending on the way in which program analysis is conducted, one can be interested in deriving either the most general or most specific hypotheses under which the reasoning can take place. For instance, given a correctness program specification Q one is typically interested in finding the weakest liberal preconditions that make Q satisfied, i.e., to impose the minimal constraint on the input that guarantee program correctness. Conversely, to infer necessary conditions we can be interested in devising the strongest hypotheses under which some correct run is possible. To investigate the existence of weakest/strongest pre and postconditions, it is convenient to take into account the consequence rules of the four kinds of triples. The consequence rule of each logic explains how pre and postconditions can be weakened/strengthened. The concrete semantics is trivially a strongest (HL and NC) or weakest (IL and SIL) condition for the target property (P computing backward and Q forward).It turns out that having a strongest/weakest condition on the “source" property is a prerogative of over-approximation, i.e., that over and under-approximation are not fully dual theories. For any regular command ∈: * given Q, there exists a weakest P such that P ⊆ Q (HL); * given P, there exists a weakest Q such that Q ⊆ P (NC). For any regular command : * for some Q, there is no strongest P such that P ⊇ Q (IL); * for some P, there is no strongest Q such that Q ⊇ P (SIL). The reason why strongest conditions may not exist for IL and SIL is that the collecting semantics (both forward and backward) is additive but not co-additive. In other words, rule disj is sound for all triples, while a dual rule for conjunction such as[{𝖼𝗈𝗇𝗃}] P_1 ∩ P_2Q_1 ∩ Q_2P_1Q_1 P_2Q_2is valid for HL and NC but neither for IL nor SIL. So, for instance, given Q and two HL preconditions P_1 and P_2 (P_1Q and P_2Q) also their union is a precondition for Q, ie. P_1 ∪ P_2Q, which can be proved using disj. However, given two IL triples P_1Q and P_2Q, in general (P_1 ∩ P_2) ⊉ Q in which case P_1 ∩ P_2Q is not valid. Consider again the program 𝗋1 of Example <ref>. 
We can prove the two IL triples x = 0𝗋1x = 1 and x = 10𝗋1x = 1, but their intersection is ∅𝗋1x = 1, which is not a valid IL triple. For SIL, consider the program 𝗋𝗇𝖽x := nondet() For precondition P_1(x = 1) we can prove both P_1𝗋𝗇𝖽x = 0 and P_1𝗋𝗇𝖽x = 10, that are incomparable, and again are both minimal because ∅ is not a valid postcondition. §.§ Termination and Reachability Termination and reachability are two sides of the same coin when switching from forward to backward reasoning, and over- and under-approximation behave differently with respect to this notion.For HL, given the definition of collecting semantics, we can only distinguish a precondition which always causes divergence: if Q is empty, all states in the precondition must always diverge. However, if just one state in P has one terminating computation, its final state must be in Q, so we do not know any more whether states in P diverge or not. Moreover, because of the over-approximation, a non empty Q does not mean there truly are finite executions, as those may be introduced by the approximation. Dually, NC cannot say much about reachability of Q unless P is empty, in which case Q is unreachable.On the contrary, under-approximation has much stronger guarantees on divergence/reachability. Any IL triple PQ ensures that all states in Q are reachable from states in P, which means in particular that every state in Q is reachable. Dually, a SIL triple PQ means that all states in P have a convergent computation (which ends in a state in Q). This observation motivates the choice of a forward (resp. backward) rule for iteration in IL (resp. SIL): a backward (resp. forward) rule would need to prove reachability of all points in the postcondition (resp. precondition). Instead, the forward rule of IL (resp. backward rule of SIL) ensures reachability (resp. termination) by construction, as it build Q (resp. P) only with points which are known to be reachable (resp. terminating) by executing the loop. § SEPARATION SUFFICIENT INCORRECTNESS LOGICWe instantiate SIL to handle pointers and dynamic memory allocation, introducing Separation SIL. The goal of such a program logic is to identify the causes of memory errors: it takes the backward under-approximation principles of SIL and combines it with the ability to deal with pointers from Separation Logic (SL) <cit.>. §.§ Heap Regular CommandsWe denote bythe set of all heap regular commands obtained by plugging the following definition of heap atomic commands in (<ref>) (in blue the new primitives):∋ ::=skip|x := a|b?|x := alloc()|free(x)|x := [y]|[x] := y The primitive alloc() allocates a new memory location containing a nondeterministic value, while free deallocates memory.The dereferencing operator is written [·] and we restrict its use: the syntax only allows to allocate, free and dereference (both for reading and writing) only single variables. Particularly, arithmetic a∈ and Boolean expressions b∈ cannot dereference a variable: to use a value from the heap, the value must be loaded in a variable beforehand.Given a heap command ∈, we let () ⊆ as the set of (free) variables appearing in . We also define the set () ⊆ of variables modified byinductively by(skip) = ∅(x := a) = {x}(b?) = ∅(x := alloc()) = {x}(free(x)) = ∅(x := [y]) = {x}([x] := y) = ∅(_1; _2) = (_1) ∪(_2) (_1 _2) = (_1) ∪(_2) (^) = () Please note that free(x) and [x] := y do not modify x: this is because they only modify the value pointed by x, not the actual value of x (the memory address itself). 
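As a small aid (our own transcription, over a made-up tagged representation of heap regular commands), the definition of the set of modified variables can be rendered directly as a recursive function; note in particular that free(x) and [x] := y do not contribute x.

def mod(r):
    tag = r[0]
    if tag in ("skip", "assert", "free", "store"):   # skip, b?, free(x), [x] := y
        return set()
    if tag in ("assign", "alloc", "load"):           # x := a, x := alloc(), x := [y]
        return {r[1]}                                # the assigned variable x
    if tag in ("seq", "choice"):                     # r1; r2  and  r1 (+) r2
        return mod(r[1]) | mod(r[2])
    if tag == "star":                                # r*
        return mod(r[1])
    raise ValueError(f"unknown command: {tag}")

# e.g. mod(("seq", ("load", "x", "y"), ("store", "v", "y"))) == {"x"}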
§.§ Assertion LanguageOur assertion language for pre and postconditions is derived from both SL and Incorrectness Separation Logic (ISL):∋ p, q, t ::= |¬ p | pq |∃ x . p | a ≍ a || x ↦a| x | pqIn the above productions, ≍∈{ =, ≠, ≤, ≤, …} encodes standard comparison operators, x ∈ is a generic variable and a∈ is an arithmetic expression. The first five constructs describe standard first order logic. The others describe heaps and come from Separation Logic, with the exception of x, which was introduced by <cit.>. The constantdenotes an empty heap. The assertion x ↦ a stands for an heap with a single memory cell pointed by x and whose content is a, while x describes that x points to a memory cell that was previously deallocated.The separating conjunction pq describes an heap which can be divided in two disjoint sub-heaps, one satisfying p and the other q. We let x ↦ - ∃ v. x ↦ v describe that x is allocated but we do not care about its exact value. Given a formula p ∈, we call (p) ⊆ the set of its free variables.§.§ Proof System We present the rules of Separation SIL in Figure <ref>. We define the capture-avoiding substitution as usual: q[a / x] is the formula obtained replacing all free occurrences of x in q with the expression a. Unlike ISL, we do not distinguish between correct and erroneous termination – the goal of SIL is to trace back the causes of errors, not to follow the flow of a program after an error has occurred.We categorize the rules of Separation SIL in four groups (Figure <ref>). The first group are new rules of Separation SIL. The second one are borrowed from SL. The third and fourth ones are from SIL.The first group gives the rules for atomic commands ∈, hence replacing the SIL rule atom. The rule skip doesn't specify anything about its pre and postconditions, because whatever is true before and after the skip can be added with frame. The rule assign is the backward variant of HL rule for assignment. The rule assert conjoins the assertion b to the postcondition, because only states satisfying the Boolean guard have an execution. The rule alloc allocates a new memory location for x. The premise requires x = x' in order to allow a frame to refer to the previous content of x: if a frame p has a free occurrence of x, we can use the equivalence px = x'p[x' / x] and then apply alloc. The rule free requires x to be allocated before freeing it. The rule load is similar to rule assign, with the addition of the (disjoint) frame y ↦ a to make sure that y is allocated. The rule store requires that x is allocated, and updates the value it points to. All these rules are local, in that they only specify pre and postconditions for the modified part of the heap, thanks to rule frame.The rule exists allows to “hide" local variables. The rule frame is typical of separation logics <cit.>: it allows to add a frame around a derivation, plugging the proof for a small portion of a program inside a larger heap. In the third group, we collected the core set presented in Figure <ref>. The only notable difference is in rule iter, where Separation SIL uses a predicate q(n) parametrized by the natural number n ∈ and the precondition ∃ n. q(n) in the conclusion of the rule. This is a logical replacement for the infinite union used in SIL rule iter. In the fourth group, we instantiated the additional rules of Figure <ref>. §.§ SoundnessTo prove soundness of Separation SIL, we first define a denotational semantics for heap regular commands. 
We consider a finite set of variablesand an infinite setof memory locations; we define the set of values ⊎ where ⊎ is disjoint union. Stores s ∈ are (total) functions from variables to values (so either integers or memory locations). A heap h ∈ is a partial function h: ⇀⊎{}. If h(l) = v ∈, location l is allocated and holds value v, if l ∉(h) then it is not allocated. The special valuedescribes a deallocated memory location: if h(l) =, that location was previously allocated and then deallocated. As notation, we use s[x ↦ v] for function update. For heaps specifically, [] is the empty heap, and [l ↦ v] is a shortcut for [][l ↦ v], that is the heap defined only on l and associating to it value v.We consider as states σ pairs of a store and a heap, plus the special staterepresenting the occurrence of an error: letting Σ = ×, states are taken from Σ_e = Σ⊎{}. The denotational semantics of atomic commands ·: →℘(Σ_e) →℘(Σ_e) is in Figure <ref>. To simplify the presentation, we define it as ·: →Σ→℘(Σ_e), we let = {}, and we lift it to set of states by union. Please note that, since arithmetic expressions a and Boolean expressions b cannot contain any dereferencing, their evaluation only depends on the store and not on the heap.We define the forward collecting semantics of heap commands, ·: →℘(Σ_e) →℘(Σ_e), similarly to (<ref>) using the different semantics of atomic commands for ∈.The semantics · of a formula p ∈ is a set of states in Σ. As notation, we write h_1 ⊥ h_2 when (h_1) ∩(h_2) = ∅, and we say the two heaps are disjoint. For two disjoint heaps h_1 ⊥ h_2, we define the ∙ operation as the merge of the two: h_1 ∙ h_2 is defined on (h_1) ∪(h_2), and its value on l is either h_1(l) or h_2(l) (the only one defined). The full definition of · is given in Figure <ref>.Just like SIL, we define validity of a Separation SIL triple pq by the condition q⊇p. To prove soundness of Separation SIL, we rely on a stronger lemma, whose proof is by induction on the derivation tree. Let p, q, t ∈ and ∈. If pq is provable and (t) ∩() = ∅ then: qt⊇pt .Then we can prove soundness of the proof system by taking t = and using pp. If a Separation SIL triple is provable then it is valid. Please note that our proof system is not complete because for Separation SIL we use logic formulae as assertions instead of set of states (cf. Section <ref>). §.§ Example of SIL DerivationWe now discuss in full detail the example in <cit.> (cf. Figure <ref>) to show how Separation SIL can infer preconditions ensuring that a provided error can happen.Our syntax does not support functions, so we assume push_back to be an inlined macro. As we cannot free and allocate *v directly (just like ISL), we introduce an intermediate variable y. Thus we rewrite the program in Figure <ref> as𝗋𝖼𝗅𝗂𝖾𝗇𝗍 x := [v]; (_bskip) _b y := [v]; free(y); y := alloc(); [v] := y To find the cause of errors, we do not include the last assignment *x := 1 in 𝗋𝖼𝗅𝗂𝖾𝗇𝗍: we know that whenever the postcondition x is satisfied, an error occurs after 𝗋𝖼𝗅𝗂𝖾𝗇𝗍, and that's everything we need to find its source. Any valid Separation SIL triple for x identifies a precondition such that any state satisfying it has a faulty execution.<cit.> derives the Incorrectness Separation Logic triple below, which proves the existence of a faulty execution starting from at least one state in the precondition.v ↦ zz ↦ -𝗋𝖼𝗅𝗂𝖾𝗇𝗍v ↦ yy ↦ -x . 
Separation SIL can do more: it can prove the triplev ↦ zz ↦ - 𝗋𝖼𝗅𝗂𝖾𝗇𝗍xwhich has both a more succinct postcondition capturing the error and a stronger guarantee: every state in the precondition reaches the error, hence it gives (many) actual witnesses for testing and debugging purposes. Moreover, Separation SIL proof system guides the crafting of the precondition if the proof is done from the error postcondition (e.g., the pointer deallocated right before its dereference) backward.Let us fix the following assertions:p:(v ↦ zz ↦ - ), q: (x ), t: (v ↦ zz ↦ -(x = zx ) ). To prove the Separation SIL triple p𝗋𝖼𝗅𝗂𝖾𝗇𝗍q, we first prove t_bq. The derivation of this triple is in Figure <ref>. This derivation is better understood if read from bottom to top, since we start from the postcondition and look for a suitable precondition to apply the rule for each one of the four atomic commands. In all cases, we start with a postcondition, then strengthen it to be able to apply the right rule: this usually means adding some constraint on the shape of the heap. In particular, to apply the rule free we need y to be deallocated, and this can happen in two different ways: either if y = x, since x is deallocated; or if y is a new name. This is captured by the disjunction x = yx. We also implicitly drop the disjunct y = y' in the postcondition of free by applying exists as y' is a free variable.Using the derivation in Figure <ref>, we complete the proof as shown in Figure <ref>. We can apply load to prove the triple px := [v]t because p is equivalent to (v ↦ zz ↦ -(z = zz ) ): z ↦ -z is not satisfiable, so we can remove that disjunct.The same example was used in <cit.> to illustrate the effectiveness of outcome-based separation logic for bug-finding. Even though the OL derivation shown in <cit.> proves essentially the same triple as the SIL one in Fig. <ref>, the deduction processes are quite different. In fact, OL reasoning is forward oriented, as witnessed by the presence of the implication that concludes the proof and by the triple for the skip branch, whereas SIL is naturally backward oriented, to infer the preconditions that lead to the error.§.§ Observations on SIL PrinciplesIn Section <ref>, we could have used cons to drop the disjunct as well: already for y := alloc() in _b, we could have taken as precondition the simpler (v ↦ -yx = y), effectively dropping the disjunct. This is analogous to the IL ability to drop disjuncts in the postcondition, but with respect to the backward direction. Furthermore, we use the postcondition x. The reader might be wondering why we had to include the ( ): isn't it possible to just frame it in when we plug the proof in a larger program? The issue is that in final reachable states x is not the only variable allocated (there are also v and y), so the final heap should talk about them as well. Adding ( ) is just a convenient way to focus only on the part of the heap that describes the error, that is x, and just leave everything else unspecified since we do not care about it.Finally, we conjecture that Separation SIL has the same expressiveness even if we remove thepredicate from the assertion language, unlike ISL <cit.>. In fact, sincedescribes a heap where nothing is allocated, it could be used for this goal. However, this would prevent us to add the ( ) in our example to focus only on the error part: writingwould not work, because x can be allocated in a heap satisfying thepart of the separating conjunction. 
Therefore, to useinstead of x we should describe exactly the final heap, i.e., v ↦ yy ↦ -. In this formula,is replacing x and the exact description v ↦ yy ↦ - of the rest of the heap is replacing the generic . Instead, the explicit x takes space in separating conjunctions, so that in x thepart cannot contain an allocation for x. Hence, this formula is better suited to specify the error: it describes a state where x is deallocated (which causes the error), without any other constraint. § CONCLUSION AND FUTURE WORK We have introduced SIL as a correct and complete program logic aimed to locate the causes of errors. Unlike IL, that was designed to expose erroneous outputs, SIL provides sufficient conditions that explain why such errors can occur. SIL can be characterized as a logic based on backward under-approximation, which helped us to compare it against HL, IL and NC. This is captured in the taxonomy of Figure <ref> and in the rule-by-rule comparison of Figure <ref>, which we used to clarify the analogies and differences between the possible approaches. We obtained some surprising connections: although NC and IL share the same consequence rules, they are not comparable; NC triples are isomorphic to HL ones, but such a correspondence cannot be extended to relate IL and SIL; and we pointed out the main reasons why duality arguments cannot apply in this case. The following list addresses the overall connections between judgements in the different logics: HL vs NC: there is an isomorphism given by PQ iff P Q.HL vs IL: in general there is no relation between forward over- and underapproximation triples: the only triples common to HL and IL are the exact ones.HL vs SIL: for deterministic and terminating programs HL and SIL judgements do coincide. More precisely, we have that for terminating programs PQ implies PQ and for deterministic programs PQ implies PQ.NC vs IL: there is no relation.NC vs SIL:in general there is no relation between backward over- and underapproximation triples: the only triples common to NC and SIL are the exact ones.IL vs SIL: there is no relation. Finally, we instantiated it to handle memory errors and we exemplified how it may infer more concise error postconditions than ISL. §.§ Future Work As for future work, IL can find reachable (erroneous) states from a set of initial states. SIL can find states which can reach a given set of final states. Because of these two interpretations, it seems interesting to explore a combination of the two: a forward, IL-based analysis determines, at every program point, a set of truly reachable states, which can then be used to narrow down a subsequent backward, SIL-based analysis to find input states which truly generate the errors.Finally, we plan to extend the taxonomy of HL, NC, IL and SIL by incorporating some other program logics discussed in this paper, such as Exact Separation Logic <cit.>, Outcome Logic <cit.> and the calculus of possible correctness <cit.>. Interestingly, while ESL unifies forward over- and under-approximations in a single framework, it seems that the calculus of possible correctness mixes backward over- and under-approximations in an exact logic. This work is supported by the 1Italian Ministero dell'Università e della Ricercahttps://prin.mur.gov.it/ under Grant No. 1P2022HXNSC, PRIN 2022 PNRR – Resource Awareness in Programming: Algebra, Rewriting, and Analysis. ACM-Reference-Format § PROOFS§.§ Proofs of Section <ref> In the proof, we assume Q to be any set of states, and σ' ∈ Q to be any of its elements. 
_1; _2 By (<ref>), σ∈_1; _2σ' if and only if σ' ∈_1; _2σ. _1; _2σ = _2 (_1σ) = ⋃_σ”∈_1σ_2σ” so σ' ∈_1; _2σ if and only if there exists a σ”∈_1σ such that σ' ∈_2σ”. Again by (<ref>), these are equivalent to σ∈_1σ” and σ”∈_2σ', respectively. Hence σ' ∈_1; _2σ∃σ”∈_2σ' σ∈_1σ” Since · is defined on sets by union _1 (_2σ') = ⋃_σ”∈_2σ'_1σ” which means ∃σ”∈_2σ' σ∈_1σ” if and only if σ∈_1 (_2σ'). Putting everything together, we get σ∈_1; _2σ' if and only if σ∈_1 (_2σ'), so the two are the same set. The thesis follows easily lifting the equality by union on σ' ∈ Q and by the arbitrariness of Q. _1 _2 By (<ref>), σ∈_1 _2σ' if and only if σ' ∈_1 _2σ. _1 _2σ = _1σ∪_2σ so σ' ∈_1 _2σ if and only if ∃ i ∈{ 1, 2 } such that σ' ∈_iσ. This is again equivalent to σ∈_iσ', and ∃ i ∈{ 1, 2 }σ∈_iσ' σ∈_1σ' ∪_2σ' Putting everything together, we get σ∈_1 _2σ' if and only if σ∈_1σ' ∪_2σ', which implies the thesis as in point 1. ^ To prove this last equality, we first prove by induction on n that ^n = ^n. For n = 1 we have ^1 = ^1. If we assume it holds for n we have ^n+1 = ^n;[def. of ^n] = ^n∘ [pt. 1 of this lemma] = ^n ∘ [inductive hp] = ^n+1 We then observe that ^σ'= {σσ' ∈^σ} [def. of ·] = {σσ' ∈⋃_n ≥ 0^n σ} [def. of ^] = ⋃_n ≥ 0{σσ' ∈^n σ}= ⋃_n ≥ 0{σσ' ∈^nσ}= ⋃_n ≥ 0{σσ∈^nσ' } [(<ref>)]= ⋃_n ≥ 0^nσ' = ⋃_n ≥ 0^n σ'[shown above] As in the cases above, the thesis follows.§.§ Proofs of Section <ref> We prove the left-to-right implication, so assume P ⊆ Q. Take a state σ' ∈¬ Q. This means σ' ∉ Q, that implies σ' ∉ P. So, for any state σ∈ P, we have σ' ∉σ, which is equivalent to σ∉σ' by (<ref>). This being true for all σ∈ P means P ∩σ' = ∅, that is equivalent to σ' ⊆¬ P. Since this holds for all states σ' ∈¬ Q, we have (¬ Q) ⊆¬ P. The other implication is analogous. To prove the first point, assume Q ⊇ P and take σ' ∈ P. Then there exists σ∈ P such that σ' ∈σ. Sinceis deterministic, σ can contain at most one element, hence σ = {σ' }. Moreover, since σ∈ P ⊆ Q there must exists a σ”∈ Q such that σ”∈σ = {σ' }, which means σ' ∈ Q. Again, by arbitrariness of σ' ∈ P, this implies P ⊆ Q. To prove the second point, assume P ⊆ Q and take a state σ∈ P. Sinceis terminating, σ is not empty, hence we can take σ' ∈σ. The hypothesis P ⊆ Q implies that σ' ∈ Q. Then, by (<ref>), σ∈σ' ⊆ Q. By arbitrariness of σ∈ P, this implies P ⊆ Q. We first prove that P ⊇ P ∖ D_. Take a σ∈ P ∖ D_. Because σ∉ D_, σ≠∅, so take σ' ∈σ. Since σ∈ P we have σ' ∈ P. Moreover, by (<ref>), we get σ∈σ' ⊆ P. By arbitrariness of σ∈ P ∖ D_ we have the thesis. The proof for Q ⊇ Q ∖ U_ is analogous. By definition,is additive. Take all P such that P ⊆ Q. By additivity of , their union satisfies the same inequality, hence it is the weakest such P. By definition,is additive. Analogously, take all Q such that Q ⊆ P. By additivity of , their union is the weakest Q satisfying that inequality. The proof is given by the countrexamples in Example <ref>. For IL, the example shows that for Q_1 (x = 1) there is no strongest P such that 𝗋1 P ⊇ Q: x = 0 and x = 10 are incomparable and are both minimal, as ∅ is not a valid precondition. The argument for SIL is analogous using 𝗋𝗇𝖽 and precondition P_1 (x = 1).§.§ SIL Soundness and Completeness We split the proof between soundness and completeness. Any provable SIL triple is valid. The proof is by structural induction on the derivation tree. Any valid SIL triple is provable. First we show that, for any Q, the triple QQ is provable by induction on the structure of . = We can prove QQ using atom. 
= _1; _2 We can prove Q_1; _2Q with [seq] _1_2Q_1; _2Q_1_2Q_1_2Q _2Q_2Q where the two premises can be proved by inductive hypothesis, and _1; _2Q = _1_2Q by Lemma <ref>. = _1 _2 We can prove Q_1; _2Q with [choice] _1 Q ∪_2Q_1 _2Q∀ i ∈{ 1, 2 } _iQ_iQ where the two premises can be proved by inductive hypothesis, and _1 _2 Q = _1 Q ∪_2Q by Lemma <ref>. = ^ We can prove ^Q^Q with [iter] ⋃_n ≥ 0^n QQ∀ n ≥ 0 ^n+1 Q^n Q where the premises can be proved by inductive hypothesis since ^n+1 Q = ^n Q, and Q = ⋃_n ≥ 0^n Q by Lemma <ref>. To conclude the proof, take a triple PQ such that Q ⊇ P. Then we can first prove the triple QQ, and then using rule cons we derive PQ. The proof of Theorem <ref> is a corollary of Proposition <ref> and <ref>. §.§ Other Proofs about SIL By definition of · we have Q = ⋃_σ' ∈ Q{σσ∈σ' } = {σ∃σ' ∈ Q σ' ∈σ} Using this, P ⊆ Q ∀σ∈ P σ∈{σ∃σ' ∈ Q σ' ∈σ}∀σ∈ P ∃σ' ∈ Q σ' ∈σ The rules in Figure <ref> are correct, that is triples provable in SIL extended with those rules are valid. The proof is by structural induction on the derivation tree, and extends that of Proposition <ref> with inductive cases for the new rules.§.§ Proofs about Separation SILGiven two stores s, s' ∈ and a heap command ∈, we use the notation s _ s' to indicate that they coincide on all variables not modified by : ∀ x ∉()s(x) = s'(x). Please note that _ is an equivalence relation. Let (s, h) ∈Σ, ∈. If (s', h') ∈(s, h) then s _ s'. The proof is by induction on the syntax of . We prove here only some relevant cases. x := a (s', h') ∈x := a(s, h) means that s' = s[x ↦a s]. Particularly, this means that for all variables y ≠ x, s'(y) = s(y), which is the thesis because (x := a) = { x }. free(x) (s', h') ∈free(x)(s, h) means that s' = s, which is the thesis because (free(x)) = ∅. _1; _2 (s', h') ∈_1; _2(s, h) means that there exists (s”, h”) ∈_1 (s, h) such that (s', h') ∈_2(s”, h”). By inductive hypothesis, since (_1) ⊆(_1; _2), we have s”__1; _2 s. Analogously, (_2) ⊆(_1; _2) implies s' __1; _2 s”. By combining the two, we get s' __1; _2 s.The following technical proposition states some semantic properties of the assertion language that are exploited in the proof of Lemma <ref>. Let p ∈, s, s' ∈, h ∈ and a ∈AExp. * If ∀ x ∈(p)s(x) = s'(x) and (s, h) ∈p then (s', h) ∈p. * If (s, h) ∈p[a / x] then (s[x ↦a s], h) ∈p. The proof is by structural induction on the syntax of assertions. First, we observe that Proposition <ref> holds for separation SIL as well because it doesn't depend on the specific definition of ·. Thanks to this, we prove the thesis through the equivalent condition ∀ (s, h) ∈pt∃ (s', h') ∈qt (s', h') ∈ (s, h) The proof is by induction on the derivation tree of the provable triple pq. We prove here only some relevant cases. assign Take (s, h) ∈q[a / x]t. Then we can split h = h_p ∙ h_t such that (s, h_p) ∈q[a / x] and (s, h_t) ∈t. Let s' = s[x ↦a s], so that (s', h) ∈x := aq[a / x]t. Since (t) ∩() = ∅, x ∉(t). Thus, by Proposition <ref>.1, (s', h_t) ∈t. Moreover, (s', h_p) ∈q by Proposition <ref>.2. Hence, (s', h_p ∙ h_t) = (s', h) ∈qt. alloc Take (s, h) ∈x = x't. Take a location l ∉(h) and a value v ∈, and let s' = s[x ↦ l], h' = h[l ↦ v], so that (s', h') ∈x := alloc() (s, h). We can split h' = [l ↦ v] ∙ h because l ∉(h). Since (t) ∩() = ∅, x ∉(t). Thus, by Proposition <ref>.1, (s', h) ∈t. Moreover, (s', [l ↦ v]) = (s', [s'(x) ↦ v]), which satisfies, given a variable z ∈, (s'[z ↦ v], [s'(x) ↦ v]) ∈x ↦ z. Thus (s', [l ↦ v]) ∈∃ z. x ↦ z = x ↦ -. Hence (s', h') ∈x ↦ -t. load Take (s, h) ∈y ↦ aq[a /x]t. 
Then we know x ∉(t) and h = [s(y) ↦a s] ∙ h_p ∙ h_t, (s, h_p) ∈q[a / x], (s, h_t) ∈t. Let s' = s[x ↦ h(s(y))] = s[x ↦a s]. By Proposition <ref>.1, (s', h_t) ∈t. By Proposition <ref>.2, (s', h_p) ∈q. Lastly, (s', [s(y) ↦a s]) ∈y ↦ a. Combining these, (s', h) = (s', [s(y) ↦a s] ∙ h_p ∙ h_t) ∈y ↦ aqt. The thesis follows observing that (s', h) ∈x := [y] (s, h). store Take (s, h) ∈x ↦ -t. Then x ∉(t) and exists v ∈ such that h = [s(x) ↦ v] ∙ h_t, (s, h_t) ∈t. Let h' = h[s(x) ↦ s(y)]. Clearly h' = [s(x) ↦ s(y)] ∙ h_t and (s, [s(x) ↦ s(y)]) ∈x ↦ y. Hence (s, h') ∈x ↦ yt and (s, h') ∈[x] := y (s, h), which is the thesis. exists Take (s, h) ∈(∃ x . p)t. Then there exists a value v ∈ and decomposition h = h_p ∙ h_t such that (s[x ↦ v], h_p) ∈p and (s, h_t) ∈t. Without loss of generality, we can assume x ∉(t); otherwise, just rename it using a fresh name neither in t nor in . Hence, by Proposition <ref>.1, (s[x ↦ v], h_t) ∈t. So (s[x ↦ v], h) ∈pt. By inductive hypothesis on the provable triple pq and formula t, there is (s', h') ∈qt such that (s', h') ∈ (s[x ↦ v], h). Because x ∉(), we also have (s', h') ∈ (s, h), and clearly (s', h') ∈(∃ x . q)t, that is the thesis. frame Take (s, h) ∈pt't. By hypothesis, ((t't)) ∩() = ((t') ∪(t)) ∩() = ∅. Then, applying the inductive hypothesis on the provable triple pq and the formula t't (which satisfies the hypothesis of the theorem) we get exactly the thesis. seq Because of name clashes, here we assume the hypotheses of rule seq to be p_1p' and p'_2q. Since (r_1) ∪(r_2) = (r_1; r_2), we know that (t) ∩(r_1) = (t) ∩(r_2) = ∅. Take (s, h) ∈pt. The by inductive hypothesis on provable triple p_1p' and formula t we get that there exists (s”, h”) ∈p't such that (s”, h”) ∈_1 (s, h). Then, by inductive hypothesis on the provable triple p'_2q and formula t again, we get (s', h') ∈qt such that (s', h') ∈_2 (s”, h”). The thesis follows since (s', h') ∈_2 (s”, h”) ⊆_2 (_1 (s, h)) = _1; _2 (s, h). | http://arxiv.org/abs/2310.18156v2 | {
"authors": [
"Flavio Ascari",
"Roberto Bruni",
"Roberta Gori",
"Francesco Logozzo"
],
"categories": [
"cs.LO",
"F.3.1"
],
"primary_category": "cs.LO",
"published": "20231027140341",
"title": "Sufficient Incorrectness Logic: SIL and Separation SIL"
} |
Semi-Synthetic Dataset Augmentation for Application-Specific Gaze Estimation C. Leblond-Menard, G. Picard-Krashevski and S. Achiche============================================================================ Although the number of gaze estimation datasets is growing, the application of appearance-based gaze estimation methods is mostly limited to estimating the point of gaze on a screen. This is in part because most datasets are generated in a similar fashion, where the gaze target is on a screen close to camera's origin. In other applications such as assistive robotics or marketing research, the 3D point of gaze might not be close to the camera's origin, meaning models trained on current datasets do not generalize well to these tasks. We therefore suggest generating a textured tridimensional mesh of the face and rendering the training images from a virtual camera at a specific position and orientation related to the application as a mean of augmenting the existing datasets. In our tests, this lead to an average 47% decrease in gaze estimation angular error. § INTRODUCTION In recent years the number of gaze estimation datasets is growing <cit.>. However, the application of appearance-based gaze estimation methods remains challenging due to the range of the gaze angles and head poses within those. While some datasets have a very large distribution of gaze angles and head poses, gaze estimation models trained on those still tend to be much less accurate when compared to datasets with a much limited distribution <cit.>.To begin with, the gaze angles are the pitch and yaw angles of the gaze direction vector, which is the normalized vector between the gaze origin, generally defined as either an eye or the middle point between the eyes, and the point of gaze of a user. As for the head pose, it is generally defined as a 3D rotation matrix which describes the rotation of the head of the user with respect to the camera's reference frame.As such, as the range of gaze angles and head poses of the dataset's samples increases, the problem complexity also increases and a learning-based model for gaze estimation will have to learn a more complex distribution, which generally leads to lower accuracy.In certain applications, a large gaze angle and head pose range might not be relevant, such as is the case for assistive robotics. Indeed, in these cases, the gaze distribution might be limited to a specific forward facing region with respect to the user. For example, the allowed gaze direction for object selection in an assistive robotics task such as automated grasping might be limited to a box region in front of the user which corresponds to the robot reach <cit.>.Although the gaze distribution might have a similar range width or variance in this application as most available gaze estimation datasets, the center or mean of the region of gaze might be very different.In fact, most datasets currently available were created from images taken from a webcam-type camera, generally fixed on a screen, laptop, tablet or phone, while the user is looking at a target shown on the screen or device <cit.>. As can be seen from the distribution presented in Fig. <ref>, this limits the distribution of gaze angles to regions close to or below the camera. 
While this is an accurate scenario for gaze estimation models where the goal is to estimate the point of gaze on a screen, it does not translate well to other applications where the user might not be looking directly toward the camera in 3D space.On the other hand, other datasets <cit.> have tried to expand the limited distribution of the gaze angles in datasets generated from point on the screen by placing multiple cameras at several positions around the user <cit.> or by placing the user at random positions <cit.>. While this might solve the issue of the limited distribution, it remains challenging to train on and can lead to poor accuracy due to the limited number of samples spread over a large range of gaze angles and head poses <cit.>.To mitigate this issue, we suggest generating a mesh of the detected faces of the users in the dataset and then training a gaze estimation model by using pose transformed version of the face meshes and corresponding gaze directions. This is done through methods similar to what has already been done to augment the span of the gaze angles range <cit.>, but these methods are adapted to instead improve the application-specific gaze estimation accuracy, e.g., assistive robotics tasks such as automated grasping. We also provide a simple way of generating the face mesh without having to use multiple viewpoints of the same sample through using multiple cameras as done in <cit.>.As later demonstrated in this paper, this allows to reach a much higher application-specific gaze estimation accuracy. We will use an assistive robotic arm controlled via gaze estimation as an example application in this paper. See <cit.> for more details about the application and refer to Fig <ref> for a visualization and explanation. § METHODOLOGYThe methodology used in this paper follows 8 steps: * Align the face of each dataset sample to a face model.* Solve for the pose of the face under a perspective projection using the obtained aligned facial landmarks.* Generate the textured mesh of the face.* Apply the inverse pose transform to the gaze direction to make it correspond with the generated centered face mesh.* Compute the sampling distribution given the expected pose of the head and its variance for the application.* Apply pose transforms of the head to the samples by sampling from the head pose distribution.* Project the face mesh to virtual normalized camera plane for each eye.* Use the transformed samples as input for training a gaze estimation model. Refer to Figure <ref> for a visualization of the workflow with respect to the modification made to the original image. The following sections will delve into the details of the previously described steps. §.§ Facial AlignmentUsing any facial alignment method <cit.> for which the base face model includes 3D coordinates, align each face of the dataset from the corresponding sample image. In our case, we used the model developed by <cit.>, but any accurate model can do.After running the face alignment model, a set of 2D points in pixels is obtained. These points correspond to facial features or areas as learned by the facial alignment model. We will call the set of obtained 2D points in screen (pixels) coordinates 𝐗_𝐬. §.§ Pose Under Perspective Projection Because the previously obtained face points 𝐗_𝐬 are in 2D screen coordinates, they cannot be directly used to find the pose of the head between the base pose given by the face model and the actual pose in the computed sample. 
Indeed, algorithms developed to solve the orthogonal Procrustes problem cannot be used as is to solve the problem under projective transformation. For such a case, several methods have been proposed and the most common ones have been implemented under OpenCV'sfunction <cit.>. This function finds the best SE(3) transformation (simultaneous rotation and translation in 3D) which, when projected according to the sample's camera matrix and distortion coefficient, minimizes the error between the input face 2D points and the computed face model 3D points under the pose and projection transforms. Simply put, we find the transformation ^𝐛𝐓_𝐚 which maps the base face model 3D points to a new set of 3D points corresponding to the actual pose of the face in the sample: ^𝐛𝐓_𝐚 = 𝐓∈SE(3) || 𝐗̂_𝐬 - 𝐗_𝐬 ||^2 = 𝐓∈SE(3) || 𝐂 𝐓 𝐗_𝐦 - 𝐗_𝐬 ||^2 𝐗̂_𝐬 corresponds to the face points obtained by applying the estimated transformation 𝐓 to the base face model points 𝐗_𝐦 and projecting those using the camera matrix 𝐂. It is assumed that the screen-space coordinates 𝐗_𝐬 and 𝐗̂_𝐬 and the face model 3D coordinates 𝐗_𝐦 are expressed in homogenous coordinates, namely [u, v, 1] and [X, Y, Z, 1], and as such have been normalized as to have their last component normalized to 1. §.§ Textured Face Mesh GenerationTo generate the textured face mesh, we need: * A set of 3D vertices.* A set of mesh faces defined by 3 or more of the previously defined vertices.* A texture image.* A (u,v) coordinate on the texture image for each 3D vertex. It is worth noting that the mesh faces is the name given to the polygonal faces of a mesh, not a human face. These mesh faces are generally triangles.The set of 3D vertices can be defined as the model face points 𝐗_𝐦. The texture image can simply be the input face image used in the alignment step (step 1). When creating the mesh's texture using the face image, the (u,v) texture coordinates to associate to each vertex are 𝐗_𝐬, the facial landmark positions in screen space as obtained during the alignment.The main difficulty here is to generate the faces from the set of 3D vertices if the base face model does not contain any mesh face definitions, which is not the case for the methodwe used <cit.>. If the mesh faces definitions are not included, we suggest using surface reconstruction methods such as the Ball-Pivoting Algorithm <cit.> to generate the set of mesh faces associated with the vertices, which are themselves the facial landmarks. §.§ Inverse Pose Transform Correction to Gaze Direction As the gaze annotations from the datasets are given with respect to the actual pose of the face and not the centered, base face model pose, we need to correct the gaze direction vector using the previously computed pose. Given the actual gaze direction as per the dataset annotation g⃗_a, the centered base gaze direction g⃗_b can be defined as: g⃗_b = ^𝐛𝐓^-1_𝐚 g⃗_a§.§ Head Pose DistributionGiven the application that the gaze estimation model will be used for, we need to define the expected pose of the user's head. In the case of our example case, we know the camera is mounted in front of and below the user, aiming 30 degrees up. We should thus expect the user's head to be centered on the image, but with a head pose pitch of 30 degrees up. 
In other words, the user is looking forward while the camera, positioned in front of the user but below them, looks up at the user's face with an angle of 30 degrees.As this is the perfect case and not actually reflective of the reality, we will assume a variance of 10 degrees in both the head pitch and yaw. We thus define a bivariate normal distribution giving the head pitch p and yaw y as: [ y; p ]∼ N([μ_y = 0; μ_p = 30 ],[ σ_y^2 = 100;0 σ_p^2 = 10 ]) The μ parameters are the means whereas the σ^2 parameters are the variances. This distribution will be sampled when generating the augmented dataset to define the augmented head pose for each sample.It should be noted in this case that the roll angle is not computed, as the normalization procedure that is generally used in gaze estimation models removes the roll component <cit.>. As such, there is no need to compute a roll just to remove it when normalizing the images. §.§ Sample Pose Transform By sampling the yaw and pitch from the previous distribution for each sample, we can calculate a rotation matrix to apply to the face mesh and gaze direction annotation. This computated rotation matrix will be the new augmented head pose, and thus we rotate the base face model and gaze direction using this rotation matrix.We define the rotation 𝐑(y, p) around the pitch and yaw as: 𝐑(y, p) = 𝐑_𝐩𝐢𝐭𝐜𝐡(p)𝐑_𝐲𝐚𝐰(y) =[ 1 0 0; 0cos(p) -sin(p); 0sin(p)cos(p) ][cos(y) 0sin(y); 0 1 0; -sin(y) 0cos(y) ] We then apply this rotation to the face mesh points 𝐗_𝐦 and the gaze direction g⃗ to generate our augmented sample from the dataset. 𝐗_𝐦' = 𝐑(y, p)𝐗_𝐦g⃗' = 𝐑(y, p)g⃗§.§ Mesh Projection and NormalizationUsing any rendering method, we can render the face mesh as an image, but to do so requires knowing the camera position and parameters, i.e., the field of view or focal length and the image pixel size. To simplify the workflow, we merge the normalization procedure often done as a preprocess step in gaze estimation <cit.> with the rendering of the face mesh.Indeed, because we want our eye patch images to be normalized in distance, focal length and image pixel size, we use the normalized parameters to create a virtual camera viewpoint. For a given eye (left or right) and assuming a normalized distance d_n and a normalized focal length f_n, we create a virtual camera situated at a z distance of d_n from the eye position on the transformed face mesh e' = [e_x', e_y', e_z'] with focal length f_n (see Figure <ref> for and example geometric construction). As such, the pose of the camera would point toward the position z axis and its origin o_c would be situated at: o_c = [e_x', e_y', e_z' - d_n] Since the rendering scene is now composed of the transformed face mesh 𝐗_𝐦' and the virtual camera with focal length f_n, origin o_c and aimed toward the positive z axis, we can generate an eye patch image with the normalized image size required by our gaze estimation model. This can be done through any rendering tool, but we used Pyrender <cit.> to do so.This rendering part should be done on the left eye and right eye if both are required as input for the gaze estimation model. §.§ Gaze Estimation Model TrainingOnce every sample in the dataset has been augmented using the previous steps, we can use the samples as input to a gaze estimation model. In our case, we use the eye image patches (left and right) and the head pose 𝐑(y, p) as input to our model and we use the transform gaze direction g⃗' to compute the loss. 
When evaluating the model or using it for inference, we simply use the normalization method described in <cit.> to generate the preprocessed input data and we denormalize the output gaze using the method described in the same paper. This has to be done because we do not generate a face mesh at evaluation time, but instead use the actual image from the camera. Given that the training was done in a normalized space with the camera at a distance d_n (in our case 600 mm) and a normalized focal length f_n (in our case 650 px/mm), we must normalize the images to that space before using them as input to our trained model.
§ RESULTS AND DISCUSSION
To analyse the performance of our dataset augmentation method, we trained a gaze estimation model based on a MobileNet architecture <cit.> on UTMultiview <cit.>, MPIIFaceGaze <cit.> and GazeCapture <cit.> with and without our augmentation method. Training for all datasets is done with the same hyperparameters, which were tuned for the best accuracy before augmentation. This should ensure that the improvement comes from the augmentation and not from the choice of hyperparameters. After training our model, we then evaluate it on a series of 69 samples from 3 different users taken with the setup described in the example application (see Fig. <ref>). The results are detailed in Table <ref>. The error values e are given by computing the angle between the true gaze direction g⃗ and the estimated gaze direction g⃗_e: e = cos^-1( |g⃗·g⃗_e| / (||g⃗|| ||g⃗_e||) ). As can be seen from Table <ref>, the angular error decreases by 47% on average. This is a significant increase in accuracy and leads to more consistent results in our sample application. This increase in accuracy does not significantly increase training time, since the augmentation can be preprocessed once and the resulting data trained on multiple times afterward without having to go through the augmentation steps again. It does not increase the development time significantly either, since there are only a few parameters to tune, namely the bivariate normal distribution's means and variances, and these can easily be inferred from the application rather than found through a costly hyperparameter search. The case of the UTMultiview <cit.> dataset should also be noted, since the error reduction is lower for this dataset. Indeed, because the dataset does not include the full face in the images, it is not possible to generate the face mesh with our facial alignment method, which is also used in the evaluation step. Therefore, the base face pose used as the "zero" pose is not the same for both models. This leads to a discrepancy between the training data's annotation distribution and the evaluation distribution. Also, as can be seen from Fig. <ref>, the distribution of the gaze angles for this dataset includes the user looking above the camera, which is not the case for MPIIFaceGaze <cit.> and GazeCapture <cit.>. While the mean of the distribution might not coincide with the average gaze direction of the evaluation in our example application, it still includes more samples relevant to our application, and thus it is to be expected that the UTMultiview-trained model will perform similarly with and without augmentation.
§ CONCLUSION
In conclusion, the developed augmentation method can be applied quickly and leads to a significant (47%) decrease in gaze estimation error in assistive robotics tests, where three different users were asked to look at an object randomly placed in front of them over 69 samples.
It does not require a costly hyperparameter search and can be used on existing datasets to adapt them to application-specific uses.
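For reference, the augmentation pipeline described in the methodology can be summarized in a short implementation. The following Python sketch (NumPy and OpenCV) illustrates the core steps: pose recovery with solvePnP (step 2), mapping the annotated gaze into the base-model frame (step 4), sampling an application-specific head pose and rotating the mesh and gaze (steps 5 and 6), placing the virtual normalized camera (step 7), and the angular error metric used in the evaluation. Function names and the eye-vertex stand-in are illustrative, the texturing and Pyrender rendering of the mesh are omitted, and the default mean, covariance and d_n values simply restate the example-application numbers given in the paper.

```python
import numpy as np
import cv2


def head_pose_from_landmarks(model_pts_3d, image_pts_2d, camera_matrix, dist_coeffs):
    """Step 2: recover the face pose under perspective projection with OpenCV's solvePnP.
    Returns a 3x3 rotation R and translation t mapping base-model points into camera coordinates."""
    ok, rvec, tvec = cv2.solvePnP(np.asarray(model_pts_3d, dtype=np.float64),
                                  np.asarray(image_pts_2d, dtype=np.float64),
                                  camera_matrix, dist_coeffs)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec.reshape(3)


def gaze_to_base_pose(gaze_actual, R):
    """Step 4: express the annotated gaze in the centered base-model frame, g_b = R^-1 g_a."""
    return R.T @ np.asarray(gaze_actual, dtype=float)


def rotation_yaw_pitch(yaw_deg, pitch_deg):
    """R(y, p) = R_pitch(p) @ R_yaw(y), with the matrices defined in the paper (angles in degrees)."""
    y, p = np.deg2rad(yaw_deg), np.deg2rad(pitch_deg)
    r_yaw = np.array([[np.cos(y), 0.0, np.sin(y)],
                      [0.0, 1.0, 0.0],
                      [-np.sin(y), 0.0, np.cos(y)]])
    r_pitch = np.array([[1.0, 0.0, 0.0],
                        [0.0, np.cos(p), -np.sin(p)],
                        [0.0, np.sin(p), np.cos(p)]])
    return r_pitch @ r_yaw


def augment_sample(mesh_pts, gaze_base, rng,
                   mean=(0.0, 30.0), cov=((100.0, 0.0), (0.0, 10.0)), d_n=600.0):
    """Steps 5-7: sample a head pose (yaw, pitch) from the bivariate normal, rotate the
    mesh and gaze, and place a virtual normalized camera d_n (mm) in front of the eye."""
    yaw, pitch = rng.multivariate_normal(np.asarray(mean), np.asarray(cov))
    R = rotation_yaw_pitch(yaw, pitch)
    mesh_rot = np.asarray(mesh_pts) @ R.T          # X'_m = R(y, p) X_m, applied row-wise
    gaze_rot = R @ np.asarray(gaze_base)           # g' = R(y, p) g
    eye = mesh_rot.mean(axis=0)                    # stand-in for the mesh's eye vertex e'
    cam_origin = eye - np.array([0.0, 0.0, d_n])   # o_c = [e'_x, e'_y, e'_z - d_n]
    return mesh_rot, gaze_rot, R, cam_origin


def angular_error_deg(g_true, g_est):
    """Evaluation metric: e = arccos(|g . g_e| / (||g|| ||g_e||)), in degrees."""
    g_true, g_est = np.asarray(g_true, float), np.asarray(g_est, float)
    c = abs(g_true @ g_est) / (np.linalg.norm(g_true) * np.linalg.norm(g_est))
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mesh = rng.normal(size=(68, 3)) * 30.0   # synthetic 68-landmark "face mesh" in mm
    gaze = np.array([0.0, 0.0, -1.0])        # base-frame gaze pointing toward the virtual camera
    mesh_rot, gaze_rot, R_aug, cam_o = augment_sample(mesh, gaze, rng)
    print("augmented gaze:", gaze_rot, " virtual camera origin:", cam_o)
    print("rotation applied to the gaze (deg):", angular_error_deg(gaze, gaze_rot))
```

In a full pipeline, the rotated mesh would then be textured with the original face image and rendered from the virtual camera to produce the normalized eye patches fed to the gaze estimation model.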
"authors": [
"Cedric Leblond-Menard",
"Gabriel Picard-Krashevski",
"Sofiane Achiche"
],
"categories": [
"cs.CV",
"cs.AI",
"cs.HC",
"cs.LG"
],
"primary_category": "cs.CV",
"published": "20231027202722",
"title": "Semi-Synthetic Dataset Augmentation for Application-Specific Gaze Estimation"
} |
0.0cm 0.2cm 16cm21cm 1.0cm sciabstract 24ptSocially Cognizant Roboticsfor a Technology Enhanced Society Kristin J. DanaElectrical and Computer Engineering Dept., Clinton AndrewsBloustein School of Planning and Policy,Kostas BekrisComputer Science Dept., Jacob FeldmanPsychology Dept.,Matthew Stone[3],Pernille Hemmer[4], Aaron MazzeoMechanical and Aerospace Engineering, Hal Salzman[2],Jingang Yi[5] Rutgers University======================================================================================================================================================================================================================================================================================================================================Emerging applications of robotics, andconcerns about their impact, requirethe research community to put human-centric objectives front-and-center.To meet this challenge, we advocate an interdisciplinary approach, socially cognizant robotics, which synthesizes technical and social science methods. We argue that this approach follows from the need to empower stakeholder participation (from synchronous human feedback to asynchronous societal assessment) in shaping AI-driven robot behavior at all levels, and leads to a range of novel research perspectives and problems both for improving robots' interactions with individuals and impacts on society. Drawing on these arguments, we develop best practices for socially cognizant robot design that balance traditional technology-based metrics (e.g. efficiency, precision and accuracy) with critically important, albeit challenging to measure, human and society-based metrics.§ INTRODUCTION Applications of robotics (such astelepresence, transportation, elder-care, remote health care, cleaning, warehouse logistics, and delivery) are bringing significant changes in individuals’ lives and are having profound social impact. Despite the envisioned potential of robotics, the goal of ubiquitous robot assistants augmenting quality of life (and quality of work life) has not yet been realized.Key challenges lie in the complexities of four overarching human-centric objectives that such systems must aim for: 1) improving quality of life of people, especially marginalized communities; 2) anticipating and mitigating unintended negative consequences of technological development; 3) enabling robots to adapt to the desires and needs of human counterparts;4) respecting the need for human autonomy and agency. Pursuing these objectives requires an integrated cohort of technologists, behavioral scientists and social scientists with a shared vision to pursue a deep, multidisciplinary understanding of how robots interact with individuals and society.We introduce a new term, socially cognizant robotics, to describe this multi-faceted interdisciplinary branch of technology. The emerging practitioner, the socially cognizant roboticist, represents the convergence of socially aware technologists, who can develop intelligent devices that adapt to human and social behavior; and technology-aware social scientists and policymakers, who can translate studies of robotics’ social effects into actionable and technically-viable principles and policies. 
A primary element of socially cognizant robotics is a deliberate “invitation to the table” for social scientists, who bring analytical perspectives and methods that are not typically present in robotics.These perspectives cover two levels of human-technology interaction that we view asessential: the human-robot dyad(Section <ref>) and the robot-society dyad (Section <ref>).Figure <ref> illustrates how these levels might operate in the context of the workplace and everyday life. These considerations lead to formulating best practices in socially cognizant robot design (Section <ref>) that emphasizes the role of feedback (from individual and societal stakeholders) so that humans can affect and tune robot behavior.§ HUMAN-ROBOT DYADHow can robots effectively interact with human partners? Technologists are making great strides in supporting intelligent interaction in fields such as robot embodiment, control and planning <cit.>, visual learning <cit.>, and language processing <cit.>. But excitement due to the technological progress is tempered by an appreciation of the limitations and dangers of current methods <cit.>, especially for data-driven approaches that control action in the physical world in the presence of people without accompanying safety guarantees.Findings and methods from psychology and cognitive science suggest how some of these limitations may be addressed. For example, the ability to understand and anticipate human needs and desires is often referred to in psychology as “theory of mind” (ToM) <cit.>; ToM provides a psychologically-informed framework for designing robotic behavior that mirrors human–human interaction. Such ideas might be realized through developing and employing human-in-the-loop approaches to construct human informed training sets, as well as to guide robot activity in real time. Additionally, cognitive models may identify and meet the needs of the individual under real-world circumstances that are out-of-distribution during lab testing. §.§ From Vision and Language to Robot ActionProgress in computer vision (CV) and natural language processing (NLP) has made it possible for robots to perceive and represent their environment, including the humans within it. Both CV and NLP are faced with the challenge of extracting meaningful inferences and communication from raw environmental stimuli—the same challenge solved by the human brain.Computationally, rawsignals must be converted to suitable intermediate representations in order to extract knowledge. For example, simple image distance metrics are not usually useful, but can be the basis of more structured and meaningful measurements of similarity <cit.>.This challenge ofrepresentation learning is fundamental in both CV and NLP research.In CV, CNNs have become a popular architecture due to their robust performance <cit.>. More recently the transformer architecture in NLP has greatly improved language translation and generation <cit.>. The transformer architecture has been repurposed for visual recognition and now largely outperforms CNNs. Recent demonstrations of transformer-based models generating text (e.g., ChatGPT<cit.>) and images (e.g., Dall-E) <cit.>)have led to a surge of interest in these frameworks.Current CV and NLP frameworks have uncovered computational representations, useful for understanding and generating both images and language, in a manner that is reminiscent of human ability (although with significant limitations), and the implications for robotics is significant. 
These frameworks from CV and NLP are inspiring recent similar models for robot action or control <cit.>including new approaches of reinforcement learning <cit.>. The promise of these methods are tempered by inherent dangers: <cit.>: mistakes in actions models can do more immediate harm, making a socially cognizant approach to robotics particularly timely and urgent. §.§ Human-in-the-Loop Robot TrainingIn socially cognizant robotics, humans are not merely part of the robot's environment (e.g. to consider for collision avoidance). Rather, the human-in-the-loop meaningfully participates in shaping robot behavior and robot training.Socially cognizant robotics empowers AI stakeholder participation at multiple levels(such as individuals interacting with robots, expert dataset annotators, user feedback, andgroup assessment surveys) both synchronously and asynchronously, as shown in Figure <ref>, with the goal of improving the outcomes of robot interactions. A common and well-developed method for humans training robots is through supervised machine learning where datasets are annotated by humans(including our research <cit.>). This asynchronous human interaction takes the form of large labeled datasets where humans have provided the knowledge label (e.g. labeling the contents of an image or the location of an object in the image) for training computer or robot vision perception tasks such as image recognition, semantic segmentation, object detection, instance segmentation and object counting.For robot control, this asynchronous human input includes video observations of task demonstration that can be used to learn human behavior <cit.>. Problems associated with using datasets <cit.>include the cost and human effort of obtaining new datasets, ethical considerations for human annotators, problematic domain shifts, and a hyperfocus of research on obtaining results on existing datasets.Anticipating and mitigating these negative consequences are an important part of the early design stage as discussed in Section <ref>.A trend that is emerging from the need for human feedback is synchronous or online human-in-the-loop robot training. This is a paradigm of humans improving robots by interacting and intervening, i.e. humans teaching robots.Recent work in human-in-the-loop reinforcement learning provides examples of this paradigm <cit.>.Human-in-the-Loop RL typically works by modifying the reward function, e.g. by indicating preferences for a particular robot behavior. A related concept isimitation learning and learning by demonstration, which<cit.>. Recent hybrid methods use both asynchronous demonstration and intervention <cit.>. Because such methods can be adjusted by humans during the operation,humans can affect the design iterations and this paradigm is well aligned with the overarching goal of respecting human agency.Additionally, human-in-the-loop methods may offer efficient performance improvements with less reliance on datasets. §.§Robot Theory of Mind One striking difference between conventional robots and human agents is that only the latter possess theory of mind, the ability to understand other agents' mental states <cit.>. People attribute mental attributes—beliefs, intentions, and knowledge—to other agents almost reflexively <cit.>, utilizing a neural pathway apparently specialized for social relations <cit.>. 
Indeed mental state attributions are essential for normal social interaction, especially when language is involved <cit.>: it is scarcely possible to interpret an instruction, a question, or even a single word without comprehending the speaker's likely state of knowledge. Joint action by multiple humans is similarly facilitated by people's ability to understand each other's mental processes <cit.>. Indeed many of our inferences, actions, and reactions are predicated on an intuitive sense of what other people are thinking, what they want, and how they will likely perceive our own actions <cit.>.It has long been speculated that making robots engage in human-like social interaction would require endowing them with ToM (e.g. <cit.>). But progress on this problem has been slowed by our limited understanding of how human “theory of mind” actually works. Current models from our research and others <cit.> are based on Bayesian estimation of other-agent mental states, allowing the model to guess the goals and desires of another agent in order to fully understand their actions and anticipate their needs. However, these models only work in very limited contexts, based on a small set of predetermined possible goals. In order to interact effectively with a person, a robot must be able to decide among hundreds or thousands of potential human desires, corresponding to the myriad goals a human user might have in interacting with an automated device <cit.>. An important example comes from navigation. People navigate around other people in part by using ToM to understand and anticipate the likely movements of others <cit.>. For a robot to navigate among humans in a similar manner would require a similar degree of social awareness <cit.>. Computational models of human ToM have been proposed<cit.> and introduced into robotics <cit.>.Recent computational models for ToMuse deep learning <cit.> and inverse reinforcement learning <cit.>. we expect computational ToM models to benefit from recent machine learning paradigms and architectures, such as transformers and human-in-the-loop RL. To achieve truly socially cognizant robotics,such models need to include mechanisms for humans to synchronously and/or asynchronously affect and tune computational ToMs in order to affect robot behavior.§.§ Robot Embodiment for Human Interaction Robots have thrived in highly structured, accurately known, and safely enclosed industrial settings. Typically, these platforms consist of rigid elements connected by a few degrees of freedom (DoF) that can be controlled directly. This allows strict enforcement of motion for high speed, accuracy, and consistency of repeated tasks. In contrast, modern robotics aspires to more general and ambitious operational goals involving highly unstructured environments, arbitrary terrains, unknown objects, or more importantly in the context of Socially Cognizant Robotics, the company of humans. These altered priorities lead to the key challenge of robots with versatile physical traits, such as deformation and heightened articulation <cit.>, which can be achieved by using soft components or a high number of compliant DoF <cit.>. These features allow for adaptive contact geometry and storage or dissipation of energy for purposes, such as efficient mobility on rough terrain and manipulation of arbitrary objects. Examples include soft <cit.> and adaptive hands <cit.>, grippers <cit.>, manipulators <cit.>, locomotors [<cit.>], skin-like sensors <cit.>, and hybrid soft-rigid tensegrity robots <cit.>. 
Pioneering early work from our research and others in soft robotics <cit.> seeded exponential growth in the field in recent years <cit.>. Robots with versatile physical traits also include soft robotic exoskeletons to provide strength augmentation for purposes of mobility assistance, rehabilitation and worker protection during repetitive lifting <cit.> as illustrated in Figure <ref>.Socially cognizant robotics considers how technology, such as soft robotic material, can change the perceived empathy, competence, and safety of robots. This requiresintegrated treatment of human intent, perception, and behavior in interaction with embodied and intelligent robots.§.§ Life-Long and Continual Learning with Memory Robots of the future maybe trained by a combination of cognitive models, pre-trained large networks, human interaction, synthetic environments, and self-supervision from videos.Robot devices, like individuals, are a unique combination of their training experiences, combining data, models and interactions. The goal of life-long learning for intelligent agents <cit.> necessitates addressing memory, deciding how and what to remember, so that training based on past experience can to be efficient.A significant shortcoming of artificial neural networks is that they experience catastrophic forgetting. While they can be efficiently trainedon a single task across a wide variety of domains, when trained on a new task they often fail to retain previously learned tasks. This is a fundamental challenge in life-long learning. In contrast, humans, despite the very complex cognitive processes involved, efficiently encode and retrieve memories across the lifespan, without forgetting previously learned experiences. Human episodic memory is a reconstructive process, and one can think of episodic memory as a problem of extracting and storing information from the noisy signals presented to our senses, with the goal of efficiently storing and retrieving relevant information. Because the encoded memory traces are inherently noisy and incomplete reconstruction at recall requires input from at least two memory structures: one based on specific prior episodes, and one that is an abstraction of relevant prior knowledge and expectations (aka semantic memory). In this way, reconstructive memory can exploit environmental regularities to ‘clean up’ the noise in our memory system and improve average recall performance. (Importantly, this also prevents catastrophic forgetting.)If an episodic trace is too noisy or inaccessible, without semantic memory the best the system can do is guess randomly and at worst have catastrophic forgetting. However, the optimal behavior is a trade-off between the strength of the evidence in episodic memory and the likelihood of the event based on prior knowledge and expectations. 
Using prior knowledge at multiple levels of abstraction is an efficient strategy that allows generalization over experiences and correction of noisy memories <cit.>.A similar type of knowledge transfer has long been argued to be essential in robot learning (e.g., <cit.>).Lifelong learning for intelligent agents, and the parallel mechanism in humans, is a significant open-issue and the subject on on-going research <cit.>.§ ROBOT-SOCIETY DYADScaling robotic solutions at a societal level will fundamentally change the technology used by society including for transportation (smart cars), health care (medical robots), infrastructure (smart buildings), restaurant industry (food prep and food delivery robots), waste management (trash and recycling robots), and warehouse logistics (sorting, picking and packing robots).The paradigm of human feedback affecting robot behavior scales to societal participantsaffecting robot behavior. Feedback takes on a more formal structure with open issues such as: who can engage with and affect the robot systems? (i.e. what is their organizational role?) and when are they empowered to do so? (i.e. what timeline is appropriate given societal goals and constraints)The framework of stakeholder participation includes both real-time synchronous human-in-the-loop training where robots and humans learn from and adapt to one another, as well as asynchronous engagement, where social, economic and political forces can more easily affect robot design. Asynchronous societal engagement includes discussions with stakeholders, survey research, ethnographic observations, ethical evaluations, market uptake of innovations, development of guidelines/policies by organizations and institutions, legal proceedings, the development of governmental regulations, incentives, and information <cit.>. A key conjecture is that all robot systems will have unintended consequences that may only be fully discovered and quantified after initial deployment. It is critical to build with the expectation of iteration andmechanisms must exist tocollect feedback, address issues, and re-deploy. §.§ Societal Theory of Mind The concept of Theory of Mind (ToM) where robots and/or humans predict the intent of collaborators and other entities, can be extended to a societal Theory of Interaction with Social Systems (TISS) to enable robotic systems to anticipate the needs and intentions of human groups, organizations, and institutions. TISS models are different from traditional or machine ToM models because the fundamentals of human-robot interaction change with scale. Robot systems interacting with a community have different properties and necessitate different evaluation than the individual human-robot framework. For example, robot systems impact the culture, economy and infrastructure of a society. Predicting the mental state of humans in TISS encompasses the prediction of human groups, crowds, and assemblies. Similarly robots may act in groups or swarms and agents working with such groups need cues to predict their intent to facilitate appropriate interaction.Computational methods for robot vision, language and action provide can provide such cues by sensing people over space and time drawing conclusions on the needs of groups. To date, both human and machine ToM models are open research issues and the problem of TISS has received sparse attention. 
When scaling up from interactions between individual robots and humans to those at the societal level, it becomes important to acknowledge that social, economic, and political systems exert forces on individuals and vice versa. There is a classic debate in the social sciences about whether structure constrains the agency of individuals, or whether agency determines structure. Contrast “class shapes everything about our lives” (Karl Marx), “we are constrained by social norms and values” (Emile Durkheim), and “bureaucracies can confine us” (Max Weber); with “individual behavior as a shaping force” (Norbert Elias), society as “the free-playing, interacting interdependence of individuals” (Georg Simmel), and “individual adaptation, goal attainment, integration and latency form the basic characteristics of social action” (Talcott Parsons).The message for socially cognizant roboticists is that social structures are both persistent and malleable. In the short run, social practices offer resistance to change, and humans will enforce contextualized behavioral norms on robots and one another (e.g. sidewalk delivery robots should stay navigate in a manner similar to that of humans <cit.>).§.§ Robots and the Built Environment Performance improvements in robot technologywill lead likely lead toubiquitousdevicesin our streets and homes.As industry an agencies seek to deploy robots for a myriad of tasks including food delivery, landscsaping, surveillance, advertisement and entertainment, the question will soon arise:What will our future world look like? How many robots are too many?We can look to the past and see the transition from nature to the built environment and learn mistakes and successes. In particular, though the field of neuroaesthetics <cit.> primarily focuses on architecture, it is highly relevant to the proliferation of robotics in society. This field examines the intersection of neuroscience and architectural design so that by understanding human response to the built environment (e.g. stress vs. well-being), architects can account for this response in their design. The relevance of neuroaesthetics to socially cognizant robotics is clear; the appearance of robotic systems can foster or inhibit human and societal trust of robotics. The near-term future may bring a proliferation of robots, drones and related infrastructure such as cell towers, charging stations, and storage.Headless dog-robots, snake robots, spider robots and other embodiments may cause stress in humans, much like a heavily industrialized urban environments do when compared to nature <cit.>. Intuitively, the effect may be amplified as the number of robots increases. The relevance of neuroaesthetics to socially cognizant robotics is clear: the appearance of robotic systems can foster or inhibit human and societal trust of robotics.§.§ Unintended Consequences and Policy DevelopmentInnovations sometimes bring unintended adverse consequences for members of society, such as birth defects in children whose mothers used the sedative Thalidomide, formation of a hole in the stratospheric ozone layer from fugitive emissions of chlorofluorocarbon refrigerants, and political violence encouraged by Facebook’s news feed algorithm. Markets can avoid obvious adverse consequences for buyers because innovations often get adopted slowly, over time, on a voluntary basis, in a bottom-up manner that allows quick feedback and decentralized responses to bad news. 
Political systems often react more slowly, relying on tort liability to redress harm to individuals and the (often lengthy) development of public policies to remedy wider societal harms. Unintended consequences have been the object of much social inquiry, for exampleidentification of key sources <cit.> (ignorance, error, willful ignorance, paradoxical values, and self-defeating predictions); relational thinking about what is known and unknown to us and others <cit.>;andassertations that both outcomes and likelihoods may be problematic <cit.>, thus contrasting risk, uncertainty, ambiguity, and ignorance. Of particular importance in robotics are the unanticipated consequences of deployment at scale, where consequences may be economic (displaced workers), social (lonely seniors), or political (projection of military power). It would be useful to anticipate unintended consequences of widespread deployment of robotics technologies beforehand instead of waiting for the harms to become evident. Technology assessment efforts attempt to do this by looking forward into an uncertain future using a variety of tools. These include reasoning from historical analogies, modeling socio-economic systems in order to project future impacts, fostering discussion and debate about the wisdom of deploying the innovation, reflecting on professional practices, learning from small experiments, and others <cit.>. Public policies can encourage either risk-taking to spur innovation or precaution to reduce potential harms. The first may yield unintended adverse consequences and the second may stifle innovation.Advocating humility in design can encourage the expectation of unintended consequences and the planning of design iterations. Public policymakers strive to encourage innovation while avoiding harm by iteratively acting and learning, a process known as “muddling through” or mutual incremental adjustment <cit.>. It stands in contrast to a planning or optimization approach in which decision makers assume that they already have all of the needed knowledge to make correct decisions. The incrementalist, iterative approach works best when lots of small experiments take place and there is a systematic effort to learn from the experiments.There is a strong movement for policy and regulation in AI (especially in generative AI), there are well known problems and cases of unregulated AI leading to unexpected consequences. There has been recent attention to regulating generative AI <cit.>, but less attention to robotics, although robotics can be much more directly dangerous. Self-driving cars are regulated <cit.>, but these regulations are for a narrow application area. Future regulation and policies for robotics can extendto multiple application contexts, global collaborations <cit.>, and can draw from core human-centric objectives as we have done for formulating best practices for socially cognizant robot design.§ SOCIALLY COGNIZANT ROBOT DESIGN: BEST PRACTICES Standard technical design optimizes measurable metrics such as accuracy, precision and speed. Socially cognizant design is a more complex process because humanity-based metrics are not easily quantifiable but are critically important. Guided by four core human-centric objectives described in Section <ref>, we present a list of 10 questions that should be considered during a socially cognizant design.This proposed question set and design framework comprise best practices in socially cognizant design. 
As robotics advancesand the quest for socially cognizant robotics grows, we expect the enumeration of best practices to evolve further.Ten Questions to Address for Socially Cognizant Robotics Design * Does the design improve quality of life?* Does this technology address a critical societal need? The designer should evaluate the significance given competitive technologies and consider that the technology uses limited economic and environmental resources.* Have discussions occurred with a diverse set of stakeholders prior to deployment? Consult and ask stakeholders (e.g., customers, casual users, bystanders, managing organizations, etc.) to predict the impact. * What are the potential unintended consequences? Predict unintended consequences and enumerate them before iterating over the technology. Discuss with other designers that have previous experience of robot deployment or previous users of related/competing technology. * Are there safety issues in the application domain and have the appropriate permissions and inspections been completed? Are existing requirements sufficient to ensure human safety?* What is the schedule for iteration? There is tension between speed of development and the goal of socially cognizant design. Managing this tension is related to the need for iteration. * How will the technology adapt to human desires and needs of people once deployed?* Is there a mechanism for users to provide feedback and is there a plan to act on that feedback (asynchronous engagement)?* What are the ethical considerations?Discuss with ethicists and groups that may be impacted by the technology. * Does the design respect human agency and autonomy? Complete autonomy may be desirable in some setups but may also be less adaptive to human needs in others.What are the specific mechanisms for synchronous engagement, i.e. for the human to be in the loop, to bypass automated decisions or to improve automated decisions?Is there sufficient recognition of the limitations of automated decisions? In real-world design, while a socially cognizant solution may be desirable, budget and time constraints lead roboticists to first develop atechnology-only solution.A meaningful step toward socially cognizant robotics is to have a deliberate design step of AI stakeholder participation to affect AI-driven robot behavior. AI stakeholders in this context are the individuals, groups,businesses, and communities that are involved with and impacted by robotics. This participation may be asynchronous (e.g., stakeholder meetingsto understand the value of a design, user surveys to assess impact) or synchronous (e.g., robot training affected by individual or societal interaction). Iterations of building/deploying/evaluating after multiple rounds of stakeholder assessment is crucial for meeting human-centric objectives. § CONCLUSION Traditionally, robotics (e.g. in the context of manufacturing) has aimed primarily for speed, accuracy, and efficiency.As robots are deployed in a wider variety of domains, it becomes important to consider human-centered objectives such as safety, adaptability, privacy, bias, and ethical considerations. At the same time, traditional social sciences have usually studied the effects of technology on society only after deployment. Given the potential impact of robotics, we cannot afford to evaluate the societal and human impact of this technology a posteriori. 
The limited exposure of robotics professionals to behavioral and social sciences makes them less than fully prepared to predict the impact of the technology they develop on society and public discourse. Roboticists should be empowered to identify critical societal needs that robotics technology can realistically address in order to catalyze and guide meaningful convergence research. Societal needs include providing inexpensive and effective public services, efficient transportation, and environmental protection. Meeting these needs with robotic systems can transform our society by augmenting human abilities. Socially cognizant robotics differs from human-centered robotics by going beyond human-robot interaction to consider robot-society interactions. Furthermore, socially cognizant robotics emphasizes the importance of empowering humans to affect robot behavior both synchronously and asynchronously, recognizing the need for human agency. It includes the objective of ethical development of robotics with the goal of enhancing life for both the individual and society. As technology increasingly infiltrates daily life, humans have become less trustful of pervasive applications, cameras, and automation. While convenience and ease of communication have been extolled, critical failures are arising, such as biased algorithms, fatal accidents with automated vehicles, and ambiguous accountability. Socially cognizant roboticists should adhere to and contribute to the evolving ethics guidelines of automation and train scientists who consider this component at the earliest stage of robot design as well as after robot deployment.
§ ACKNOWLEDGEMENTS
This work has been supported by the National Science Foundation (NSF) National Research Traineeship (NRT) entitled "NRT-FW-HTF: Socially Cognizant Robotics for a Technology Enhanced Society (SOCRATES)" Grant No. 2021628.
"authors": [
"Kristin J. Dana",
"Clinton Andrews",
"Kostas Bekris",
"Jacob Feldman",
"Matthew Stone",
"Pernille Hemmer",
"Aaron Mazzeo",
"Hal Salzman",
"Jingang Yi"
],
"categories": [
"cs.RO",
"cs.AI"
],
"primary_category": "cs.RO",
"published": "20231027175302",
"title": "Socially Cognizant Robotics for a Technology Enhanced Society"
} |
Vision-Based Reconfigurable Intelligent Surface Beam Tracking for mmWave CommunicationsSpecial thanks to the Sony Research Center in Lund for providing their reconfigurable intelligent surface for testing and research.This work has been funded by the Horizon Europe EU Framework Programme under the Marie Skłodowska-Curie grant agreement No. 101059091, the Horizon 2020 EU Framework Programme under Grant Agreement No. 861222, the Swedish Research Council (Grant No. 2022-04691), the strategic research area ELLIIT, Excellence Center at Linköping – Lund in Information Technology, and Ericsson.Juan Sanchez, Xuesong Cai, and Fredrik Tufvesson Department of Electrical and Information Technology, Lund University, Lund, Sweden {juan.sanchez, xuesong.cai, fredrik.tufvesson}@eit.lth.se2023-10-25 ======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== Large Language Models (LLMs) have introduced a new era of proficiency in comprehending complex healthcare and biomedical topics.However, there is a noticeable lack of models in languages other than English and models that can interpret multi-modal input, which is crucial for global healthcare accessibility. In response, this study introduces Qilin-Med-VL[Materirals of this study are available at <https://github.com/williamliujl/Qilin-Med-VL>], the first Chinese large vision-language model designed to integrate the analysis of textual and visual data. Qilin-Med-VL combines a pre-trained Vision Transformer (ViT) with a foundational LLM. It undergoes a thorough two-stage curriculum training process that includes feature alignment and instruction tuning.This method enhances the model's ability to generate medical captions and answer complex medical queries. We also release ChiMed-VL, a dataset consisting of more than 1M image-text pairs. This dataset has been carefully curated to enable detailed and comprehensive interpretation of medical data using various types of images. § INTRODUCTION The introduction of Large Language Models (LLMs) into the field of healthcare and biomedicine has brought significant advancements. GPT-4's notable achievement on the United States Medical Licensing Examination (USMLE) demonstrates its proficiency in complex biomedical concepts and its potential as a tool for healthcare professionals <cit.>. This milestone reflects the model's extensive training and knowledge, as well as the potential of medical LLMs.However, the practical implementation and support for making decisions in healthcare and biomedicine require the use of multi-modal techniques due to the intricate nature of medical diagnosis and patient care <cit.>. Determinative information often goes beyond written content to encompass visual indicators of how illnesses manifest. Disease patterns, clinical diagnoses, and many other aspects often rely on the analysis of visual cues. 
This includes patterns of skin lesions for dermatological conditions <cit.> or the interpretation of electrocardiograms and brain scans for cardiac and neurological issues <cit.>. Chronic conditions like diabetes require analysis of visual data such as retinal scans <cit.>, while cancer treatment benefits from detailed imaging from CT scans or MRIs <cit.>. This highlights the limitations of relying solely on textual data and emphasizes the demand for integrated methods that combine visual data analysis with human-like conversation. In the English-speaking world, <cit.> has undertaken a pioneering endeavor to develop LLaVA-Med, an LLM that combines advanced visual-textual data analysis in the field of biomedicine through a process of multistaged multi-modal instruction tailoring. However, it is crucial to recognize that language barriers persist as a significant impediment to the advancement of global health <cit.>. A shortsighted focus on English-centric models could exacerbate inequalities in healthcare accessibility. Given the current absence of large vision-language models for Chinese medical fields, we reduce this inequality by developing Chinese healthcare and biomedical models, recognizing the significant impact that linguistic inclusion has on improving global health standards. Expanding on this foundation, our research introduces pivotal contributions:
* Qilin-Med-VL, the first large Chinese medical vision-language model, proficient in multiple critical medical imaging disciplines.
* The first large-scale Chinese Vision-Language dataset for general healthcare, Chinese Medicine - Vision Language (ChiMed-VL), designed to facilitate multistage training. This dataset has two subsets: vision-language feature alignment and instruction tuning.
Models like Qilin-Med-VL are intended to help healthcare professionals make better decisions by providing them with more insights. Ultimately, our goal is to improve healthcare worldwide. We believe that our work represents a new frontier in research, where technology and medical knowledge come together to create a brighter and more equitable future for healthcare.
§ RELATED WORK
§.§ Multi-modal LLMs
The advent of LLMs has transformed the multi-modal LLM field, which now has a branch that focuses on the adaptability of LLMs to incorporate various modalities. For example, AnyMal <cit.> generates textual responses from input signals including text, image, video, audio, and IMU motion sensor data. NExT-GPT <cit.> accomplishes universal multi-modal comprehension and diverse input/output modalities by integrating an LLM with multi-modal adaptors and diffusion decoders. A typical focus of this field is the integration of visual elements, treating vision as a 'foreign language' to be incorporated into the models, an approach that can thus be easily adapted to other modalities. These models are typically referred to as large vision-language models. Pioneering research, such as Flamingo <cit.>, highlights the effectiveness of these models in synthesizing visual and textual information, resulting in nuanced, unrestricted content. Noteworthy developments like the Q-Former from BLIP-2 <cit.> contribute to harmonizing pre-trained vision models with LLMs, driving forward the capabilities of these systems. Models like MiniGPT-4 <cit.> and LLaVA <cit.> leveraged GPT-4 to create conversational visual instruction datasets. These datasets enhance the models' proficiency in correlating visual traits with linguistic elements.
LLaVA-1.5 <cit.> has advanced through strategic enhancements and high-performance standards in multi-modal LLM evaluations. It outperformed many open-source models, demonstrating significant improvements. Meanwhile, VisCPM <cit.>, InternLM-XComposer <cit.>, and Qwen-VL <cit.>, have excelled in interpreting and executing instructions in Chinese, reflecting the global applicability and adaptability of these advanced systems. These achievements not only showcase the models' versatility in processing language-specific tasks but also highlight their capability to handle intricate instructions across various domains and applications. §.§ Large Medical Vision-Language ModelsResearch in large medical vision-language models has been encouraging, with significant efforts put into establishing foundational models. Noteworthy initiatives include LLaVA-Med <cit.> and MedVInT <cit.>, which utilize image captions from PubMed Central <cit.> for fine-tuning medical visual instruction sets. Medical visual question answering (VQA) has received extensive attention and research due to its substantial practical uses. Pushing the boundaries of medical VQA capabilities, Med-Flamingo <cit.> emerged with capabilities for few-shot generative medical VQA on interleaved medical image-text data.Additionally, MedBLIP <cit.> narrows its focus to a specialized segment of 3D imaging, primarily MRI. Beyond medical VQA, Med-PaLM M <cit.>, adopted an innovative approach, proposed a generalist biomedical AI system that can perform medical image classification, medical VQA, radiology report generation and summarization, and more with the same set of model weights. In radiological diagnostics, models like RadFM <cit.> and Radiology-Llama2 <cit.> demonstrated promising performance in enhancing diagnostic precision through visual aid.Despite these advances, a research gap persists concerning Chinese medical LLMs tailored for multi-modal inputs. Existing models, such as Huatuo <cit.>, Qilin-Med <cit.>, and CMExam <cit.> only allow textual inputs.Bridging this gap is crucial, considering the potential impact on healthcare accessibility, where linguistic barriers can restrict critical information and services. This concern is especially pronounced for non-mainstream language speakers currently underserved by prevalent NLP technologies <cit.>. Prioritizing such research is imperative to mitigate systemic disparities and democratize access to crucial healthcare advancements. In this work, we harness these advancements to develop a specialized Chinese medical vision-language model, characterized by its efficiency and efficacy in operation.§ DATASET CONSTRUCTIONAddressing the scarcity of Chinese medical multi-modal data for training instruction-following models, we introduce a pioneering dataset, ChiMed-VL. ChiMed-VL was established by leveraging several open-source English medical multi-modal datasets. We translated these datasets into Chinese with GPT-3.5 and conducted expert quality control. The dataset contains two components: concept alignment and instruction-following, each critical during distinct training phases. §.§ The Concept Alignment SubsetTo enable model support for a multitude of medical image types, we leveraged two comprehensive open-source multi-modal medical datasets: PMC-OA <cit.>and PMC-CaseReport <cit.>. These datasets collectively cover an extensive range of diagnostic modalities, such as X-ray, MRI, CT, Radioisotope, Mitotic, and several others, examples of which are depicted in Fig.<ref>. 
Recognizing the disparity induced by the scarcity of Chinese-centric data, we used GPT-3.5 to translate the dataset into Chinese. The breakdown of this translation process is elaborated in Tab.<ref> and Fig.<ref>(a).ChiMed-VL-Alignment consists of 580,014 image-text couplings, each pair falling into one of two categories: context information of an image or descriptions of an image. The context category contains 167M tokens, presenting a median text length of 435 (Q1: 211, Q3: 757). Conversely, descriptions, more concise and image-specific, contain inline descriptions and captions. They comprise 63M tokens, with median lengths settling at 59 (Q1: 45, Q3: 83). §.§ The Instruction-Tuning Subset In the second phase, we constructed the ChiMed-VL-Instruction subset for refining the model's interpreting and instruction following capabilities. We extracted data from two open-source compilations: PMC-Report and PMC-VQA <cit.>. These datasets contain a diverse collection of data, including X-rays, CT scans, Echography, and Ultrasonography, enriching the model's familiarity with varied medical scenarios. We again used GPT-3.5 to translate the English questions and their corresponding answers into Chinese. Tab.<ref> and Fig.<ref>(b) details the statistics of this process.ChiMed-VL-Instruction comprises 469,441 question-answer pairs. Within this subset, the questions section contains 10M tokens with a median length of 20 (Q1: 16, Q3: 25), posing a concise inquiry reflective of medical queries. The answers consist of 13M tokens with a median length slightly longer at 22 (Q1: 12, Q3: 34), providing clear, direct, and informative responses. §.§ Data Pre-processing A significant challenge addressed during the compilation of ChiMed-VL involved the management of input images. Datasets from PMC-OA and PMC-CaseReport contain multiple images corresponding to single text snippets. To enhance medical visual concept alignment and mitigate potential misalignments, images related to the same text were concatenated into single composite images, forming unified image-text pairs. This method necessitated the exclusion of samples with more than four images per text to avoid low-resolution outputs post-concatenation. We strategically chose horizontal or vertical image concatenation based on the original image sets' dimensions, preventing extreme aspect ratios in the combined image. Furthermore, we discarded samples with overly brief textual descriptions or those impractical for translation. The final training data format emulates a conversation between an assistant and an individual providing visual instructions, structured via task-specific Chinese prompts. Approximately 20 unique prompt templates were designed for each task, ensuring a diverse training schema. For each sample, a template was randomly selected from the task-specific set, and the data was reformulated into a dialog format, making it a robust resource for training purposes. 
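To make the pre-processing described above concrete, the following is a minimal Python sketch of the image-concatenation and dialog-templating steps. It is only an illustration under stated assumptions: the function names, the aspect-ratio threshold, the example templates, and the record layout are ours, not the released ChiMed-VL pipeline.

```python
import random
from PIL import Image

# Illustrative caption-style prompts; the actual set uses ~20 Chinese templates per task.
CAPTION_TEMPLATES = [
    "请描述这张医学影像。",          # "Please describe this medical image."
    "请为下面的医学图像生成说明。",  # "Please generate a caption for the medical image below."
]

def concat_images(images):
    """Merge the (<=4) images of one sample into a single composite image."""
    if len(images) > 4:                 # samples with more than four images are discarded
        return None
    if len(images) == 1:
        return images[0]
    total_w = sum(im.width for im in images)
    max_h = max(im.height for im in images)
    if total_w / max_h > 3:             # assumed threshold to avoid extreme aspect ratios
        # stack vertically
        canvas = Image.new("RGB", (max(im.width for im in images),
                                   sum(im.height for im in images)), "white")
        offset = 0
        for im in images:
            canvas.paste(im, (0, offset))
            offset += im.height
    else:
        # stack horizontally
        canvas = Image.new("RGB", (total_w, max_h), "white")
        offset = 0
        for im in images:
            canvas.paste(im, (offset, 0))
            offset += im.width
    return canvas

def to_dialog(sample):
    """Reformulate an image-caption pair as a single-turn assistant dialog record."""
    composite = concat_images(sample["images"])
    if composite is None or not sample["caption"].strip():
        return None                     # oversized or overly brief samples are dropped
    prompt = random.choice(CAPTION_TEMPLATES)
    return {
        "image": composite,
        "conversations": [
            {"from": "human", "value": f"<image>\n{prompt}"},
            {"from": "gpt", "value": sample["caption"]},
        ],
    }
```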
§ METHODOLOGY §.§ Overall Architecture The Qilin-Med-VL architecture comprises three key components: * Foundation LLM: Qilin-Med-VL employs the renowned Chinese LLM, Chinese-LLaMA2-13B-Chat, to comprehend linguistic content and generate appropriate responses. * Pretrained Image Encoder: To process medical images, Qilin-Med-VL leverages the Vision Transformer (ViT) <cit.> to obtain visual features from each image. * Vision-Language Feature Adapter: This component aims to align visual features with linguistic features, creating a shared feature space that effectively captures complementary information from different modalities. For efficiency, a simple linear projection layer is used as the feature adapter. In the future, we plan to investigate more effective and sophisticated adapters. §.§ Two-stage Curriculum Training Scheme As shown in Fig. <ref>, the training procedure of Qilin-Med-VL is divided into two stages: vision-language feature alignment and instruction-tuning. This two-stage training scheme is inspired by curriculum learning, which progressively enhances the medical proficiency of VL models. §.§.§ Feature Alignment In this first stage, Qilin-Med-VL is trained on an image description task, where the model is asked to predict a caption for each input medical image. For each pair of medical images and text in the dataset, we instructed the model to generate a caption for the image (prompts summarized in Appendix <ref>). We used the actual captions as the correct answers during training. Importantly, we fix the parameters of the pre-trained image encoder and language model (LLM) and train only the vision-language feature adapter, so that visual and language features representing the same medical concepts align well. This alignment helps the model better understand medical information across different forms (visual and text) and improves the consistency of its medical concept understanding. §.§.§ Instruction-Tuning In the second stage, we further improved Qilin-Med-VL's capability of following medical instructions. We used a dataset specifically designed for this purpose, as discussed in Sec. <ref>. In this stage, each training example consisted of a medical image and a related question. The model's task was to answer the question using the information in the image. We froze the pre-trained image encoder and fine-tuned the language model and the vision-language feature adapter. This way, Qilin-Med-VL becomes more proficient at understanding various medical instructions and can carry out medical tasks, like answering medical questions based on images, in a conversational manner. § EXPERIMENTS §.§ Baselines To investigate Qilin-Med-VL's ability in medical vision-language understanding and instruction following, we conduct a comparative analysis with the following baseline LMMs: * GPT-4V <cit.>, a large multi-modal model that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks. * Qwen-VL <cit.>, an open-sourced general large vision-language model based on Qwen-7B <cit.> and ViT <cit.> that can handle various vision-language tasks, including image description, question-answering, grounding, and text-reading. * VisCPM-Chat <cit.>, trained using CPM-Bee[https://github.com/OpenBMB/CPM-Bee/tree/main] with 10B parameters, fusing the visual encoder Muffin <cit.> and the visual decoder Diffusion-UNet <cit.> to support visual inputs and outputs. * LLaVA-1.5 <cit.>, an open-sourced end-to-end trained LMM based on Vicuna-13B <cit.> and ViT <cit.>.
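Before moving to the implementation details, the snippet below gives a minimal PyTorch-style sketch of the wiring and per-stage freezing described in the Methodology section. It is a sketch under assumptions: the class, the default dimensions, and the `set_stage` helper are illustrative and not the released training code, and the LLM is assumed to accept pre-computed input embeddings.

```python
import torch
import torch.nn as nn

class QilinMedVLSketch(nn.Module):
    """Illustrative wiring: frozen ViT -> linear projector -> foundation LLM."""

    def __init__(self, vision_encoder, llm, vision_dim=1024, llm_dim=5120):
        super().__init__()
        self.vision_encoder = vision_encoder   # e.g. a CLIP ViT-L/14 image encoder
        self.llm = llm                         # e.g. a 13B Chinese LLaMA-2 style decoder
        # vision-language feature adapter: a single linear projection layer
        self.projector = nn.Linear(vision_dim, llm_dim)

    def forward(self, pixel_values, text_embeds):
        patch_feats = self.vision_encoder(pixel_values)      # (B, N, vision_dim)
        visual_tokens = self.projector(patch_feats)          # (B, N, llm_dim)
        # prepend projected visual tokens to the text embeddings
        inputs = torch.cat([visual_tokens, text_embeds], dim=1)
        return self.llm(inputs_embeds=inputs)

def set_stage(model: QilinMedVLSketch, stage: int):
    """Stage 1 trains only the projector; stage 2 also tunes the LLM. The ViT stays frozen."""
    for p in model.vision_encoder.parameters():
        p.requires_grad = False
    for p in model.projector.parameters():
        p.requires_grad = True
    for p in model.llm.parameters():
        p.requires_grad = (stage == 2)
```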
§.§ Implementation Details We used Chinese-LLaMA2-13B-Chat as the foundation LLM and Clip-ViT-large-patch14-336 as the pre-trained image encoder for Qilin-Med-VL. Chinese-LLaMA2-13B-Chat is an open-source Transformer-based LLM with 13 billion parameters, further trained on Chinese-LLaMA2-13B and optimized for conversation scenarios. Clip-ViT-large-patch14-336 is a pre-trained CLIP vision encoder trained by OpenAI. We performed two-stage curriculum training using 8 × A100 80G GPUs. Specifically, we used the following settings during feature alignment: batch size = 32 per GPU, 1 epoch, learning rate = 1e-3, warmup ratio = 0.03, and max length = 2048. For the vision-language instruction-tuning stage, we used the following settings: batch size = 16 per GPU, 1 epoch, learning rate = 2e-5, warmup ratio = 0.03, and max length = 2048. §.§ Results and Discussion Fig. <ref> shows some results of Qilin-Med-VL and various baselines on the PMC-VQA test set. We display cases with different types of images, including ultrasound, X-ray, MRI, etc. For the first case, even though the prompt clearly states that the image is related to the testis, LLaVA still determined it to be an embryo in the uterus. Qwen-VL predicted it to be a varicocele inside the testicle. VisCPM made a fundamental mistake, predicting that there was a fetus inside the testicle and describing the specific situation. GPT-4V's answer was relatively accurate, suggesting the possibility of a cystic or solid lesion. In contrast, Qilin-Med-VL accurately predicted that there was a tumor in the region. For the second case, both LLaVA and VisCPM suggested abnormalities in the lungs, while Qwen-VL suggested there was a rib fracture. GPT-4V did not give a clear judgment. However, Qilin-Med-VL predicted the abnormality to be an enlarged heart. For the third evaluative task, we provided the models with clinical information indicating the presence of a pathological condition and challenged them to ascertain the tumor's anatomical location based on the imaging data. LLaVA, Qwen-VL, and VisCPM misidentified the site of the lesion. GPT-4V rendered a non-specific interpretation, suggesting the tumor's presence within an organ in the abdominal region, yet without precise localization. Conversely, Qilin-Med-VL demonstrated precision by accurately pinpointing the tumor to the right renal region. We sought the expertise of a medical specialist, who conducted a meticulous analysis based on the image data. The specialist observed that the liver was located in the upper left quadrant of the image, while the kidneys were bilaterally aligned adjacent to the spinal column. This evaluation, considering both the tumor's position and morphology, led the specialist to conclude that the tumor was localized within the renal region. § LIMITATIONS We acknowledge that this study, as a pioneering effort in deploying large vision-language models in the Chinese healthcare sector, has a few limitations that need to be addressed in future research. A critical limitation is the study's dependence on machine-translated data, which could inadvertently introduce biases or inaccuracies, affecting the model's reliability. This limitation also underscores the importance of linguistic and cultural sensitivity in healthcare applications and the need for rigorous validation methods to ensure the accuracy of generated and translated content.
Additionally, the absence of multi-turn conversation data in the current dataset limits the model's ability to handle complex, multi-round interactions effectively. § ETHICS AND SOCIETAL IMPACTS The development and deployment of LLMs and large vision-language models in healthcare present various ethical considerations and potential societal impacts. A primary concern is that these models lack comprehensive clinical validation and are intended only for academic and research purposes. As such, Qilin-Med-VL should not be employed for medical advice or healthcare decisions at this stage, as misuse could lead to incorrect or harmful outcomes. In navigating the intersection of artificial intelligence and healthcare, upholding ethical principles and prioritizing patient safety, data privacy, and equitable technology access are essential. Qilin-Med-VL represents a promising advancement but is just one step toward universally accessible healthcare AI solutions. Its ethical responsibility and clinical validation for real-world applications remain to be demonstrated. § CONCLUSION & FUTURE WORK The development of Qilin-Med-VL represents a pioneering step in integrating advanced large vision-language models for Chinese healthcare. This research underscores the importance of linguistic inclusion and the need for specialized models in non-English-speaking communities. We anticipate that future research will continue to refine this field, with the ultimate goal of democratizing healthcare access and elevating global health standards with the help of medical AI. § APPENDIX §.§ Prompts for medical feature alignment and instruction tuning [Figure: Prompts for medical feature alignment and instruction tuning.] | http://arxiv.org/abs/2310.17956v2 | {
"authors": [
"Junling Liu",
"Ziming Wang",
"Qichen Ye",
"Dading Chong",
"Peilin Zhou",
"Yining Hua"
],
"categories": [
"cs.CV",
"cs.AI",
"cs.CL"
],
"primary_category": "cs.CV",
"published": "20231027080521",
"title": "Qilin-Med-VL: Towards Chinese Large Vision-Language Model for General Healthcare"
} |
UTF8gbsn APS/123-QED [email protected] Department of Physics & Astronomy, Texas A&M University, College Station, TX 77843, USA Cyclotron Institute, Texas A&M University, College Station, TX 77843, USACyclotron Institute, Texas A&M University, College Station, TX 77843, USAPresent address: University of Birmingham, Edgbaston, B15 2TT, UK Cyclotron Institute, Texas A&M University, College Station, TX 77843, USACyclotron Institute, Texas A&M University, College Station, TX 77843, USACyclotron Institute, Texas A&M University, College Station, TX 77843, USADepartment of Physics & Astronomy, Texas A&M University, College Station, TX 77843, USA Cyclotron Institute, Texas A&M University, College Station, TX 77843, USACyclotron Institute, Texas A&M University, College Station, TX 77843, USACyclotron Institute, Texas A&M University, College Station, TX 77843, USADepartment of Physics & Astronomy, Texas A&M University, College Station, TX 77843, USA Cyclotron Institute, Texas A&M University, College Station, TX 77843, USACyclotron Institute, Texas A&M University, College Station, TX 77843, USADepartment of Physics & Astronomy, Texas A&M University, College Station, TX 77843, USA Cyclotron Institute, Texas A&M University, College Station, TX 77843, USA[Corresponding author: ][email protected] Department of Physics & Astronomy, Texas A&M University, College Station, TX 77843, USA Cyclotron Institute, Texas A&M University, College Station, TX 77843, USA Nuclear Solutions Institute, Texas A&M University, College Station, TX 77843, USA Background: The triple-alpha process is a vital reaction in nuclear astrophysics, characterized by two consecutive reactions (2α⇆^8Be(α,γ)^12C) that drive carbon formation. The second reaction occurs through the Hoyle state, a 7.65 MeV excited state in ^12C with J^π=0^+.The rate of the process depends on the radiative width, which can be determined by measuring the branching ratio for electromagnetic decay. Recent measurements by Kibédi, et al. conflicted with the adopted value and resulted in a significant increase of nearly 50% in this branching ratio, directly affecting the triple-alpha reaction. Purpose: This work aims to utilize charged-particle spectroscopy with magnetic selection as a means to accurately measure the total radiative branching ratio (Γ_rad/Γ) of the Hoyle state in ^12 C. Methods: The Hoyle state in ^12 C was populated via ^12C(α, α')^12C^* inelastic scattering. The scattered α-particles were detected using a ΔE-E telescope, while the recoiled ^12 C ions were identified in a magnetic spectrometer. Results: A radiative branching ratio value of Γ_rad/Γ×10^4=4.0±0.3( stat.)±0.16( syst.) was obtained. Conclusions: The radiative branching ratio for the Hoyle state obtained in this work is in agreement with the original adopted value. Our result suggests that the proton-γ-γ spectroscopy result reported by Kibédi et al. may be excluded. Valid PACS appear hereRadiative decay branching ratio of the Hoyle state G.V. Rogachev==================================================§ INTRODUCTIONThe triple-alpha process is a crucial reaction in the field of nuclear astrophysics, consisting of two consecutive reactions: a) α+α→^8Be(g.s.), and b) ^8Be +α→γ+^12C, ultimately leading to the formation of carbon. The second reaction occurs via an excited 0^+ state at an excitation energy of 7.65 MeV in ^12 C, known as the Hoyle state <cit.>. This state predominantly decays by α-emission, but a small branch of electromagnetic decay ultimately forms the ^12 C(g.s.). 
The rate of the triple-alpha process is determined by the product of the α-decay width (Γ_α) and the radiative width (Γ_rad) divided by their sum (Γ_α+Γ_rad). As shown in Eq. <ref>, the experimental method of determining the value of Γ_rad involves measuring the branching ratio for electromagnetic decay (Γ_rad/Γ) and utilizing the established partial width Γ_π(E0) for electron-positron pair production <cit.>. Γ_rad = (Γ_rad/Γ) × (Γ/Γ_π(E0)) × Γ_π(E0) The radiative branching ratio (Γ_rad/Γ), the first factor on the right-hand side of Eq. <ref>, has garnered significant attention over the years. Previous studies conducted during the 20th century <cit.> contributed to an adopted value of Γ_rad/Γ=4.16(11)×10^-4 <cit.>. However, a recent study by Kibédi (2020) <cit.> unveiled a substantial deviation, exceeding 3σ, from the adopted value, introducing considerable uncertainties in the determination of the triple-alpha reaction rate. In contrast, the measurement conducted by Tsumura (2021) <cit.> agrees with previous findings, although it still carries a relatively large uncertainty. Figure <ref> provides an overview of these measurements. The observed discrepancies emphasize the importance of further investigating the radiative branching ratio of the Hoyle state, as it has a profound impact on astrophysical model calculations, shaping our understanding of stellar nucleosynthesis and the formation of carbon in the universe. This paper introduces a new measurement of Γ_rad/Γ using the charged-particle spectroscopy method with magnetic selection, resolving the ambiguity associated with the radiative decay branching ratio of the Hoyle state in ^12 C. § EXPERIMENT The experiment was conducted at the Cyclotron Institute at Texas A&M University, using the K150 Cyclotron. Figure <ref> illustrates the experimental setup employed in this study. The excited states in ^12C were populated by inelastic scattering of 40 MeV α-particles on a highly enriched ^12C target provided by Argonne National Lab. The target contains less than 0.17% of ^13C (verified experimentally, see discussion in Sec. <ref>) and has a thickness of 159 μg/cm^2. The scattered α-particles were detected and identified using a ΔE-E silicon telescope, consisting of a 32-μm-thick silicon detector and a 500-μm-thick double-sided silicon strip detector (DSSD) manufactured by Micron Semiconductor Ltd <cit.>. Both silicon detectors have an active area of 49.5 mm × 49.5 mm. The front and rear sides of the DSSD were divided into 16 vertical strips and 16 horizontal strips, respectively. Positioned at an angle of 81.15^∘ relative to the beam axis and located 14.2 cm away from the target, the silicon telescope covered from 72.8^∘ to 89.5^∘ in the reaction plane. This setup facilitated the determination of the momentum vector of the α-particles. To detect the ^12C(g.s.) ions resulting from the electromagnetic decay of the ^12C(0_2^+) recoil, we employed the Multipole-Dipole-Multipole (MDM) spectrometer <cit.>. Positioned at an angle of 35.3^∘ in the lab frame, the MDM spectrometer covered 4^∘ in both the vertical and horizontal directions, providing a large acceptance for the ^12 C recoil ions. The detection and identification of the recoil ions, filtered by the MDM spectrometer, were facilitated by implementing the Texas Parallel-Plate Avalanche Counter System (TexPPACS) at the end of MDM.
The TexPPACS featured a 2.5 μm Mylar entrance window, operated with 4 Torr pentane gas, with its two counters spaced 42 cm apart, allowing for the efficient detection of heavy ions with energies around 1 MeV/nucleon. The time between the DSSD and each of the two PPACs in the TexPPACS detector system was recorded. The effective timing resolution of the TexPPACS was determined to be 1.5 ns. A dual-trigger mode was implemented to accommodate the high counting rate. The trigger output from the shaper of the DSSD's front strips was divided into two channels. One channel was used for coincidences with the first PPAC detector, while the other channel was connected to a pre-scaler that generated one output signal for every 100 input signals. The coincidence and the scaler output triggered the data acquisition system independently. This dual-trigger mode significantly reduced the counting rate for single events by a factor of 100 while ensuring the capture of all α+^12 C coincidence events. The branching ratio can be calculated using the following expression: Γ_rad/Γ = N_Coinc / (100 × N_Scaled × F_5^+ × ϵ), where N_Coinc, N_Scaled, F_5^+, and ϵ represent the yield of coincidence events, the yield of scaled single events, the charge-state fraction of ^12 C^5+ (see discussion in Sec. <ref>), and the efficiency that accounts for various factors such as the MDM-TexPPACS efficiency and target thickness, respectively. § ANALYSIS In this section, we present an analysis detailing the steps taken to determine the radiative decay branching ratio for the Hoyle state. We begin by discussing the charge-state fraction distribution of ^12 C, which serves as the starting point of our analysis. §.§ Charge state fraction distribution The ^12 C ions emitted from the target exhibit charge states ranging from 1^+ to 6^+. In order to ascertain the distribution of ^12 C charge states after the target, the magnetic rigidity of the MDM was adjusted individually for ^12 C in each charge state. We used the first excited state of ^12 C, the 2^+ at 4.44 MeV, to record the coincidence between the respective α-particles in the DSSD and the ^12 C in the TexPPACS. The resulting distribution is shown in Fig. <ref>. It can be observed that ^12 C^5+ holds the highest fraction, with F_5+ = 0.495±0.026. Therefore, we selected the magnetic rigidity of the MDM spectrometer for the ^12 C^5+ charge state. This also eliminates most of the α-particles originating from the 3α-decay of the Hoyle state. §.§ Efficiency The total efficiency of the MDM-TexPPACS system was estimated using Monte Carlo simulation with Geant4 <cit.> + RAYTRACE <cit.>. This simulation considered various experimental factors, such as beam emittance, target thickness, transmission efficiency of the MDM spectrometer, and geometrical efficiency for coincidence selection. The accuracy of the simulation was validated by comparing the simulated efficiencies with the experimentally determined efficiencies of the setup for measuring the radiative branching ratio of ^12 C(2^+_1), which is known to be 100%. To investigate the impact of beam emittance, which is approximately 24 mm-rad for the K150 cyclotron, we conducted a series of simulations based on different emittance distributions. The average of the simulated results was taken as the efficiency value, while the largest difference observed among them was taken as a source of systematic uncertainty. The resulting efficiency value for the setup targeting ^12 C(0^+_2) was determined to be ϵ = 0.95±0.03 (syst.).
§.§ Particle identification The magnetic rigidity of the MDM spectrometer was optimized to select the heavy recoils of interest - the ^12 C ions resulting from the population of the Hoyle state in α-particle inelastic scattering on ^12 C. Other recoil ions with similar magnetic rigidity, produced in the interaction of the α-particle beam with the target (^13 C, ^16 O, and α-particles), can pass through the spectrometer as well. The time-of-flight (ToF) difference (T_1-T_Si), where T_Si is the ToF of the ions from the target to the silicon detector and T_1 is the ToF of the ions from the target to the first PPAC, offers good particle identification. Figure <ref> shows (T_1-T_Si) against the excitation energy in ^12 C calculated from the angle and energy of the α-particles in the DSSD. The observed groups of events in this 2D scatter plot predominantly correspond to ^12 C, which constitutes the primary component of the target. Groups (a), (b), and (c) in Fig. <ref> correspond to ^12 C ions originating from ^12 C+α elastic scattering and inelastic scattering populating the ^12 C(4.44) and ^12 C(7.65) states, respectively. Groups (d), (e), and (f) represent the α-particles resulting from the α-decay of the ^12 C(7.65), ^12 C(9.64), and ^12 C(10.847) states, respectively, as they exhibit shorter ToF compared to ^12 C ions. Groups (g) and (h) in Fig. <ref> indicate the presence of ^16 O contaminants in the target. By examining the energy of the associated α-particles, we have confirmed that groups (g) and (h) correspond to ^16 O ions resulting from ^16 O+α inelastic scattering populating the ^16 O(6.05, 6.13) and ^16 O(6.92, 7.12) states, respectively. Two-body kinematics calculations reveal that the kinetic energies of the ^16 O ions for these two groups are approximately 0.90 and 0.88 MeV/nucleon, respectively. These energies are insufficient for ^16 O(g.s.) ions to reach the second PPAC. The inelastic scattering of α-particles on the ^16 O contaminant also resulted in the population of ^16 O in excited states above the α-decay threshold. Some of these states, with appropriate spin-parity, are open to α-decay. Consequently, the production of ^12 C(g.s.) ions can occur through the reaction ^16 O+α→ ^12 C+α+α. This process introduces a minor enhancement in the counts near the Hoyle state. However, as discussed in Sec. <ref>, this contribution can be effectively eliminated through proper fitting. While we used an isotopically enriched ^12 C target, the ^13 C isotope was still present. To assess the influence of ^13 C, we conducted additional measurements using a 1 mg/cm^2 ^13 C target with the same experimental setup and magnetic rigidity as the measurements for the Hoyle state. Figure <ref> displays the 2D timing spectrum obtained from the measurement with the ^13 C target, while Fig. <ref> presents the excitation-energy spectrum obtained by projecting the data in Fig. <ref> onto the X-axis. The ^13 C target contains ^12 C at the level of a few percent, resulting in a prominent peak originating from the ^12 C(4.44) state. The group in Fig. <ref> characterized by an excitation energy E_x=2.4 MeV and T_2-T_Si=380 ns corresponds to ^13 C(g.s.) ions generated through ^13 C+α inelastic scattering, specifically populating the ^13 C(3.684) state. The group (i) observed in Fig. <ref> is attributed to the same origin.
By comparing the ratio of counts in group (i) to the counts in group (b) obtained from the ^13 C target measurements with those from the Hoyle state measurements, we were able to make an estimation of the fraction of ^13 C in the enriched ^12 C target. Assuming that the ^13 C target is composed entirely of ^13 C, our analysis yielded a fraction of 0.17% for the ^13 C contaminant in the ^12 C-enriched target. However, it is only an upper limit because the enrichment in the ^13 C target is not 100%.Furthermore, the observed groups with E_x=5.4 and 6.3 MeV correspond to ^12 C ions originating from ^13 C+α→ ^12 C+n+α reactions. Initially, ^13 C+α inelastic scattering populates the ^13 C(6.864), ^13 C(7.55), and ^13 C(7.67) states, which subsequently decay to a ^12 C(g.s.) ion and a neutron. Figure <ref> illustrates the energy spectrum, where we observe some counts in the energy region of the Hoyle state. These additional counts could potentially enhance the signal for the Hoyle state. However, we can eliminate this contribution through a proper scaling procedure, which will be discussed in detail in Sec. <ref>. §.§ Excitation energy spectraFigure <ref> displays the ToF measurements for particles traveling from the first to the second PPAC of the TexPPACS. This 2D ToF spectrum offers improved separation between ^12 C and α-particles compared to Fig. <ref>. By applying a polygon cut for ^12 C as indicated in this figure and projecting the data onto the X-axis, we obtained the excitation-energy spectrum for the coincidence events.Figure <ref> displays the excitation-energy spectra of (a) singles and (b) coincidence with ^12 C recoil events. The dashed vertical lines identify the location of the Hoyle state. It is clearly visible in the singles and the α+^12 C coincidence spectra. However, it is important to note that there are three sources of contamination affecting the Hoyle state peak: a “shoulder” on the left side of the peak, as well as enhancements originating from ^16 O and ^13 C. The subsequent discussion will address the methods employed to eliminate their influences.The broad feature observed to the left of the Hoyle peak in the coincidence spectrum is attributed to the edge effect of the DSSD silicon detector. Each strip on the DSSD has a width of 3000 μm, with two 100-μm-wide dead regions on either side. The probability of inelastically scattered α-particles hitting the overlapping area between the front and back dead regions is (100/3000)^2 ≈ 0.1%. Consequently, the charges induced by these α-particles were not fully collected. As a result, slightly higher excitation energies in ^12 C were obtained, leading to the appearance of the “shoulder” feature. This hypothesis was verified by modifying the energy matching conditions for the front and back strips of the DSSD. We observed a significant drop in the “shoulder” feature when we restricted the condition on the energy difference between the front and back strips, while the intensities of other peaks remained less changed. In our analysis, we used an energy matching condition of 0.5%, which is consistent with the energy resolution of the DSSD, i.e., (E_ front-E_ back)/E_ front<0.5%.To determine the yields of singles and coincidence events, both spectra were fitted using Gaussian functions for the Hoyle and 3^-_1 states, as well as other peaks, while a smooth function was employed for the continuum. The fitted results are shown in Fig. <ref>. 
The centroids and widths of the Gaussian functions were adjusted to reproduce the singles spectrum, and the same parameters were utilized for the coincidence spectrum. The continuum was fitted by an exponential function, while two other functions were also tested to estimate the systematic uncertainty: a semi-phenomenological function obtained from Ref. <cit.> with an added constant offset, and a linear function. The measured spectra were then subtracted by the fit functions for the continuum, and the remaining spectra were integrated to obtain the yields of the Hoyle state. This approach was employed to mitigate errors resulting from discrepancies between the Gaussian fit function and the actual measured peak shape.The ^16 O+α inelastic scattering can enhance the counts in the region of the Hoyle state. Specifically, the 1_3^- state in ^16 O with excitation energy of 12.44 MeV was observed in the coincidence spectrum shown in Fig. <ref>. To obtain an accurate count of the Hoyle state, this peak was fitted using a Gaussian function with the same width parameter as the Hoyle state (determined by the experimental energy resolution).The coincident yields within the energy range of the Hoyle state from the ^13 C target measurement and the ^12 C-enriched target measurement were normalized by scaling them with the ratio of the counts of the ^13 C(3.684) peaks observed in both measurements. This normalization procedure allows us to determine the additional counts in the region of the Hoyle state originating from the ^13 C contaminants. The contribution from the ^13 C contaminants in the E_x=7.65 MeV peak is estimated to be 3%, and it was subtracted from the total counts of the Hoyle state peak.The number of actual counts under the Hoyle state peaks in the singles events and coincidence events were determined to be N_Scaled=(1.570±0.037)×10^4 and N_Coinc=291.2±20.7, respectively. According to Eq. <ref>, the radiative branching ratio was determined to be (Γ_rad/Γ)_ expo×10^4=3.94±0.36( stat.)±0.13( syst.). Here the systematic uncertainty arose from the uncertainty of the MDM-TexPPACS efficiency. The semi-phenomenological continuum gave (Γ_rad/Γ)_ semi×10^4=3.92±0.34( stat.)±0.13( syst.). We also fitted the background using a linear function from E_x=6.9 - 8.4 MeV as the extreme assumption, and this gave (Γ_rad/Γ)_ line×10^4=3.95±0.37( stat.)±0.13( syst.). The largest difference between the three yields was considered as an addition to the systematic uncertainty arising from the ambiguity of the continuum function. Consequently, the present radiative branching ratio of the Hoyle state in ^12 C was determined to be Γ_rad/Γ×10^4=3.94±0.36( stat.)±0.16( syst.).Another measurement was conducted with an increase of 0.5% in the magnetic field strength. The analysis procedure followed the same methodology as described above. The obtained radiative branching ratio was found to be Γ_rad/Γ×10^4=4.07±0.51( stat.)±0.16( syst.), which is in agreement with the previous result within uncertainty. This additional measurement at slightly different magnetic rigidity serves as a confirmation of the reliability of the analysis procedures and the independence of the result to a specific choice of magnetic rigidity within the allowed range. The second measurement was combined with the first to reduce the statistical uncertainty.Table <ref> summarizes the quantities used to evaluate radiative branching ratio from the two measurements. 
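As a numerical cross-check of the values quoted above, the short script below evaluates the branching-ratio expression from the Experiment section with the first measurement's yields, charge-state fraction, and efficiency, and then forms the inverse-variance weighted mean of the two measurements. The error treatment is an assumption on our part (relative uncertainties added in quadrature, with the F_5+ uncertainty counted as statistical and the efficiency uncertainty as systematic), and the additional systematic from the continuum-shape ambiguity is not included.

```python
from math import sqrt

def branching_ratio(n_coinc, n_scaled, f5, eps):
    """Gamma_rad/Gamma = N_Coinc / (100 * N_Scaled * F_5+ * epsilon)."""
    return n_coinc / (100.0 * n_scaled * f5 * eps)

# First measurement (values quoted in the text)
N_c, dN_c = 291.2, 20.7            # coincidence yield
N_s, dN_s = 1.570e4, 0.037e4       # scaled singles yield
F5, dF5 = 0.495, 0.026             # 12C(5+) charge-state fraction
eps, deps = 0.95, 0.03             # MDM-TexPPACS efficiency (systematic)

br = branching_ratio(N_c, N_s, F5, eps)
stat = br * sqrt((dN_c / N_c) ** 2 + (dN_s / N_s) ** 2 + (dF5 / F5) ** 2)
syst = br * deps / eps
print(f"Gamma_rad/Gamma = ({br*1e4:.2f} ± {stat*1e4:.2f} stat ± {syst*1e4:.2f} syst) x 1e-4")
# -> approximately 3.94 ± 0.36 (stat) ± 0.13 (syst), matching the quoted result

# Inverse-variance weighted mean of the two measurements (statistical errors only)
vals, errs = [3.94, 4.07], [0.36, 0.51]
weights = [1.0 / e ** 2 for e in errs]
mean = sum(v * w for v, w in zip(vals, weights)) / sum(weights)
err = 1.0 / sqrt(sum(weights))
print(f"combined: {mean:.2f} ± {err:.2f}")   # -> about 3.98 ± 0.29, i.e. 4.0 ± 0.3
```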
Considering both measurements, we report the radiative branching ratio of the Hoyle state in ^12 C to be Γ_rad/Γ×10^4=4.0±0.3( stat.)±0.16( syst.). This result is the statistical uncertainty-weighted mean value obtained from the two measurements.§ CONCLUSIONTable <ref> presents a comparison of the result obtained in this study with other recent measurements and the adopted value. The present result is in good agreement with the previous measurements, except for the Kibédi <cit.> measurement which deviates significantly. The adopted value was determined based on the weighted mean of the measurements conducted before 1976. Tsumura <cit.> performed a measurement with magnetic selection, but their result exhibits a large relative statistical uncertainty of approximately 18.6%. Our present work shows a substantial improvement, with a more than doubled reduction in relative statistical uncertainty, achieving approximately 7.5%. Considering both the older and recent results, it may be appropriate to exclude the γ-particle spectroscopy result reported by Kibédi <cit.> from further consideration. The authors thank the operation team at Cyclotron Institute, Texas A&M University, for the reliable operation of the facilities. The authors also thank Dr. Heshani Jayatissa and Dr. Takahiro Kawabata for their helpful discussions. This work was supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Science under award no. DE-FG02-93ER40773, and by the National Nuclear Security Administration through the Center for Excellence in Nuclear Training and University-Based Research (CENTAUR) under grant number DE-NA0003841. | http://arxiv.org/abs/2310.18475v1 | {
"authors": [
"Zifeng Luo",
"M. Barbui",
"J. Bishop",
"G. Chubarian",
"V. Z. Goldberg",
"E. Harris",
"E. Koshchiy",
"C. E. Parker",
"M. Roosa",
"A. Saastamoinen",
"D. P. Scriven",
"G. V. Rogachev"
],
"categories": [
"nucl-ex",
"astro-ph.SR"
],
"primary_category": "nucl-ex",
"published": "20231027204225",
"title": "Radiative decay branching ratio of the Hoyle state"
} |
Catalan Institute of Nanoscience and Nanotechnology (ICN2), CSIC and BIST, Campus UAB, Bellaterra, 08193 Barcelona, Spain Department of Physics, Universitat Autònoma de Barcelona (UAB), Campus UAB, Bellaterra, 08193 Barcelona, SpainCatalan Institute of Nanoscience and Nanotechnology (ICN2), CSIC and BIST, Campus UAB, Bellaterra, 08193 Barcelona, SpainCatalan Institute of Nanoscience and Nanotechnology (ICN2), CSIC and BIST, Campus UAB, Bellaterra, 08193 Barcelona, Spain ICREA–Institució Catalana de Recerca i Estudis Avançats, 08010 Barcelona, Spain Topological quantum matter exhibits novel transport phenomena driven by entanglement between internal degrees of freedom, as for instance generated by spin-orbit coupling effects. Here we report on a direct connection between the mechanism driving spin relaxation and the intertwined dynamics between spin and sublattice degrees of freedom in disordered graphene. Beyond having a direct observable consequence, such intraparticle entanglement is shown to be resilient to disorder, pointing towards a novel resource for quantum information processing. Resilient Intraparticle Entanglement and its Manifestation in Spin Dynamics of Disordered Dirac Matter Stephan Roche January 14, 2024 ======================================================================================================== The study of quantum transport in topological quantum matter and quantum materials is a central topic of modern condensed matter, given its connection to an emerging class of nontrivial phenomena and materials including topologically protected edge states, exotic Moiré physics, twisted multilayers of 2D materials, and strongly correlated systems <cit.>. Particularly relevant is the existence of the variety of internal degrees of freedom which arise from lattice symmetry and internal degeneracies, such as A/B sublattice pseudospin or 𝐊_+ / 𝐊_- valley isospin in graphene, or layer pseudospin in multilayer systems assembled by stacking (and twist angle) engineering. The contributions of spin-orbit coupling, magnetic exchange fields and Coulomb interactions further yield nontrivial modifications of intraparticle and interparticle entanglement properties as well as the formation of strongly correlated and topological states, which result in a wealth of puzzling and exotic phenomena including superconductivity and orbital magnetism <cit.>.In this wide context, exploration of the dynamics of internal degrees of freedom and their intertwined evolution is particularly relevant when a connection can be made with some directly accessible experimental observable. The case of graphene has been particularly scrutinized given the longstanding debate concerning the nature of the spin transport and spin relaxation mechanisms at play <cit.>, while entanglement effects on weak localization <cit.> or spin-orbit torque phenomena <cit.> have also been discussed recently. The peculiar dynamics of intraparticle entanglement between spin and sublattice in massless Dirac fermions, generated by spin-orbit coupling, have been argued to explain the energy dependence of spin lifetime in graphene <cit.>, and may offer a novel resource of quantum information given its magnitude and resilience to state preparation in the ultraclean limit <cit.>, as well as complement other possible entanglement mechanisms <cit.>. However, to date the dynamics and robustness of such intraparticle entanglement in the presence of disorder remains to be explored. 
Additionally, a direct connection between entanglement and the spin lifetime in graphene has yet to be elucidated. In this Letter, we use numerical simulations to study the dynamics of spin and intraparticle entanglement in disordered graphene. We consider two types of disorder – charge impurities and magnetic impurities – and in both cases we find that the entanglement saturates to a finite value, independent of the initial state and the type or strength of disorder. This spin-sublattice entanglement thus appears to be an equilibrium property of graphene, with a magnitude that is robust to charge and spin scattering. For future proposals to use such intraparticle entanglement as a quantum resource, these results suggest that disorder may not be an inherently limiting factor. Additionally, we derive an expression that directly relates the intraparticle entanglement to the Elliott-Yafet mechanism of spin relaxation in graphene. In the absence of magnetic impurities, this mechanism is dominant at low doping, yielding a minimum in the spin lifetime at the charge neutrality point. Our results therefore offer a direct, experimentally observable consequence of the innate spin-sublattice entanglement in graphene, and suggest that this connection could be used to monitor quantum information, as well as be generalized to other entanglement phenomena in similar materials. Electronic model and measure of entanglement — To study its entanglement properties, we consider a continuum model of graphene near the 𝐊_± points with Rashba spin-orbit coupling (SOC) induced by a perpendicular electric field or a substrate <cit.>. The Hamiltonian is Ĥ(𝐤) = ħ v_F (ν σ̂_x k_x + σ̂_y k_y) ⊗ ŝ_0 + λ_R (ν σ̂_x ⊗ ŝ_y - σ̂_y ⊗ ŝ_x), where v_F is the Fermi velocity, ν = ±1 is the valley index, 𝐤 is the crystalline momentum with respect to the 𝐊_± points, λ_R is the Rashba SOC strength, and ŝ_i (σ̂_i) are the Pauli matrices in the spin (sublattice) space. In this work, without loss of generality, we restrict ourselves to one valley, ν = +1. The eigenenergies of the system are ε_±^ξ(𝐤) = ξ(±λ_R + √(λ_R^2 + ε_0^2)), where ξ = +1 (-1) for electrons (holes) and ε_0 = ħ v_F|𝐤| is the energy dispersion in the absence of SOC. The eigenstates of the Hamiltonian, in the sublattice-spin basis {A_↑, A_↓, B_↑, B_↓}, are given by |ε^ξ_±(𝐤)⟩ = (1/√(N_±)) [ e^-iθ, ±i γ_±, ξ γ_±, ±ξ i e^iθ ]^T, where θ = arctan(k_y/k_x) is the direction of momentum in the graphene plane, γ_± = ε_±/ε_0, and N_± = 2(1+γ_±^2). The spin-sublattice entanglement of an arbitrary state |ψ⟩ can be quantified using the concurrence C_ψ <cit.>. For pure states this is computed as C_ψ = |⟨ψ̃|ψ⟩| = |⟨ψ| (σ̂_Y ⊗ σ̂_Y) |ψ^*⟩|, where |ψ̃⟩ = (σ̂_Y ⊗ σ̂_Y)|ψ^*⟩ denotes the spin-flip transformation of |ψ⟩ and σ̂_Y = [ 0 -i; i 0 ] in spin space. The concurrence is a so-called entanglement monotone, equal to 0 for separable states and 1 for maximally entangled states. Applying its definition to Eq. (<ref>), we find that the concurrence of the eigenstates is C_ε = λ_R / √(ε_0^2 + λ_R^2). Therefore, the eigenstates of graphene in the presence of Rashba SOC are maximally entangled at the charge neutrality point (ε_0 = 0) and become separable for higher doping, with C_ε decaying as ∼1/ε_0. On the other hand, increasing the Rashba coupling will increase the entanglement for a given Fermi energy (E_F). In general, the concurrence is a monotonically-increasing function of x = λ_R / E_F, given by C_ε = x / √(1+x^2), where we have let E_F ≈ ε_0. The purple solid line in Fig.
<ref> shows precisely this behavior. Entanglement dynamics in the presence of disorder — In a previous work, we examined the dynamics of intraparticle spin-sublattice entanglement in perfectly clean graphene <cit.>. Here we examine the dynamics and robustness of such intraparticle entanglement in the presence of disorder. To do so, we have implemented a single-particle Monte Carlo method that tracks the concurrence and spin of a quantum state as it undergoes free flight and scattering in the presence of different types of disorder (schematically shown in Fig. <ref>, right inset). Our approach follows the general procedure for semiclassical Monte Carlo simulation of semiconductors <cit.> (details in the Supplemental Material <cit.>). We consider the time evolution of a state at a given E_F either in the presence of charge impurities, which randomize the momentum direction at every scattering event, or magnetic impurities, which randomize the spin orientation. The impurity density is related to the scattering time, τ, which is a free parameter in our simulations. Finally, we consider a large number of trajectories to calculate the ensemble average of the spin and concurrence as they evolve in time. Some examples of the ensemble concurrence dynamics in the presence of charge impurities are shown in the right panel of Fig. <ref>. We have considered two different initial states: one fully separable, |ψ_sep⟩ = 1/√(2) [1 0 1 0]^T = 1/√(2) (|A⟩+|B⟩)⊗|↑⟩, and one highly entangled Bell-type state, |ψ_entan⟩ = 1/√(2) [1 0 0 1]^T = 1/√(2) (|A⟩⊗|↑⟩ + |B⟩⊗|↓⟩). In both cases, the concurrence converges to a finite and universal value at long times. This behavior is independent of the type of disorder and the initial state. In the left panel of Fig. <ref>, we show the converged value of the concurrence as a function of x = λ_R / E_F. The solid blue lines correspond to charge impurity scattering with τ = 25 and 50 fs. Meanwhile, the blue dashed line shows the case for magnetic impurities with τ = 50 fs. Notably, all three curves fall on top of each other, indicating that the intraparticle entanglement appears to be robust and universal in the presence of disorder in graphene. Fig. <ref> (left panel) shows that the converged value of concurrence grows monotonically with λ_R / E_F. This scaling behavior follows that of the eigenstates themselves, whose concurrence is shown by the solid purple line. The final concurrence arises from a linear combination of both bands at E_F, and thus will generally be smaller than the concurrence of each individual eigenstate. Nonetheless, these results demonstrate a general picture of entanglement dynamics, with the initial nonequilibrium entanglement relaxing to an equilibrium value determined by the concurrence of the eigenstates. This is analogous to the spin in a magnetic system relaxing to an equilibrium value determined by the net magnetization <cit.>. Spin relaxation time — Next we examine the features of the spin relaxation time of out-of-plane spins and directly relate it to the concurrence of the graphene eigenstates. The spin relaxation time, τ_s, is obtained from the Monte Carlo simulations by fitting the average value of the out-of-plane spin polarization to either an exponential decay, ⟨ŝ_z⟩ = exp(-t/τ_s), or an oscillating exponential, ⟨ŝ_z⟩ = exp(-t/τ_s) cos(2π t/T_Ω). We use the former when the spin precession time, T_Ω = πħ/λ_R, is longer than the scattering time and the latter when T_Ω < τ. The solid green curve in the inset of Fig.
<ref> shows an example of the spin dynamics from the Monte Carlo simulations when T_Ω ≫ τ, and the dashed green line is the fit to an exponential decay. In the main panel of Fig. <ref>, we plot the spin relaxation time as a function of the Fermi energy for different values of the scattering time. As in Fig. <ref>, the solid curves are for charge impurity scattering and the dashed curve is for magnetic impurities. The scattering times are given in the legend, and the Rashba SOC strength is λ_R = 500 μeV. The spin precession time is then T_Ω = πħ/λ_R = 4.1 ps, much longer than all values of τ that we consider. For charge impurity scattering, at low energies τ_s scales quadratically with E_F and is proportional to τ. This is consistent with the Elliott-Yafet (EY) mechanism of spin relaxation, which in graphene was predicted to scale as τ_s^EY ∝ (E_F/λ_R)^2 τ <cit.>. At higher energies, when τ_s^EY becomes large, the spin dynamics are dominated by the D'yakonov-Perel (DP) mechanism, and the spin lifetime scales inversely with the scattering time, τ_s^DP = (ħ/2λ_R)^2 / τ <cit.>. This crossover between the EY and the DP regime happens at E_F ≈ ħ/τ. In the case of scattering by magnetic impurities, τ_s = τ for all energies, as shown by the dashed line in Fig. <ref>. This arises from the fact that magnetic impurities directly randomize the spin at each scattering event. As seen in Fig. <ref>, at low energies the spin lifetime is dominated by EY relaxation, where τ_s ∝ (E_F/λ_R)^2. Meanwhile, above we noted that the concurrence of the eigenstates scales approximately inversely with energy, C_ε ≈ (E_F/λ_R)^-1. In the following we demonstrate that there is a direct link between spin-sublattice entanglement and EY spin relaxation in graphene. To derive an expression for EY spin relaxation in graphene, we start with the general interpretation of the mechanism – that at every momentum scattering event, the spin polarization is reduced due to mixing of spin up and spin down states <cit.>. We define a spin loss coefficient that describes the relative amount of spin lost during the nth scattering event, 𝒮^n,n-1 ≡ ⟨ŝ_z^n⟩ / ⟨ŝ_z^n-1⟩, where ⟨ŝ_z^n-1⟩ is the ensemble spin polarization just before the nth scattering event and ⟨ŝ_z^n⟩ is the ensemble spin polarization just after. After a time t, the spin polarization will then be given by ⟨ŝ_z(t)⟩ = 𝒮^N,N-1 ... 𝒮^2,1 𝒮^1,0 ⟨ŝ_z^0⟩, where t = Nτ with τ the average scattering interval. Assuming that every spin loss coefficient is equal, 𝒮^n,n-1 ≈ 𝒮, the net spin lost during an average interval τ is ⟨ŝ_z(t+τ)⟩ - ⟨ŝ_z(t)⟩ = -(1 - 𝒮) ⟨ŝ_z(t)⟩, giving d⟨ŝ_z(t)⟩/dt ≈ -[(1-𝒮)/τ] ⟨ŝ_z(t)⟩. This yields an exponential decay of the spin polarization, ⟨ŝ_z(t)⟩ = ⟨ŝ_z^0⟩ exp(-t/τ_s), with spin lifetime τ_s^EY = τ/(1-𝒮). To explicitly calculate 𝒮, we start by considering the spin lost by a single particle during a single scattering event. Before a scattering event, the state of a particle with momentum 𝐤 can be written as a linear combination of the graphene+Rashba eigenstates, |ψ(𝐤)⟩ = ∑_ξν β_ξν |ε_ν^ξ(𝐤)⟩. For E_F > 0 (E_F < 0), this projection is over the conduction (valence) band eigenstates. The spin polarization of this state is ⟨ŝ_z^ψ(𝐤)⟩ ≡ ⟨ψ(𝐤)| ŝ_z |ψ(𝐤)⟩. A charge impurity scattering event changes 𝐤 → 𝐤', and thus the scattered state can be written as a projection onto the eigenstates at 𝐤', |ψ(𝐤')⟩ = ∑_ξν β'_ξν |ε_ν^ξ(𝐤')⟩, where β'_ξν = ⟨ε_ν^ξ(𝐤')|ψ(𝐤)⟩, with proper normalization.
The spin polarization of this scattered state is ⟨ŝ_z^ψ(𝐤')⟩ = ⟨ψ(𝐤')| ŝ_z |ψ(𝐤')⟩, and the spin loss in this particular scattering event is thus 𝒮_ψ(𝐤→𝐤') = ⟨ŝ_z^ψ(𝐤')⟩ / ⟨ŝ_z^ψ(𝐤)⟩. Finally, the average spin loss per scattering event, 𝒮, is the average of 𝒮_ψ(𝐤→𝐤') over all 𝐤→𝐤', accounting for the form of |ψ⟩ after a large number of scattering events. By calculating 𝒮 in this way, we then obtain the EY spin relaxation time by applying Eq. (<ref>). Following this procedure (see SM for details <cit.>), we arrive at the following expression for EY spin relaxation in graphene with Rashba SOC, τ_s^EY = (3 - C_ε^4)/(5C_ε^2 - 3C_ε^4) τ. This expression, with explicit dependence on the concurrence of the graphene eigenstates, can be broken down into two regimes. At high energies, when C_ε ≈ λ_R/E_F ≪ 1, Eq. (<ref>) reduces to τ_s^EY ≈ (3/5)(1/C_ε)^2 τ ≈ (3/5)(E_F/λ_R)^2 τ. This is the usual EY relation in graphene <cit.>. On the other hand, for low energies when E_F ≲ λ_R and thus C_ε ≈ 1, the EY relation becomes τ_s^EY ≈ τ. In this regime, when there is only one band at E_F, every scattering event changes the spin. The full scaling of τ_s^EY with Fermi energy is shown in Fig. <ref> (right axis). The black circles are EY spin relaxation times extracted from the Monte Carlo simulations, while the solid black line corresponds to Eq. (<ref>), showing perfect agreement. On the left axis, we show the concurrence of the eigenstates, highlighting the inverse correlation between the spin-sublattice entanglement and EY spin relaxation in graphene. Conclusions — Using Monte Carlo simulations, we have revealed the nature of spin-sublattice entanglement dynamics in disordered graphene. We find that this intraparticle entanglement evolves to a universal value that is determined by the eigenstates of graphene and is independent of the initial state, and of the type and strength of disorder. These results suggest that potential applications of this intraparticle entanglement as a resource may not be inherently limited by disorder. Next, we have derived an explicit relation between Elliott-Yafet spin relaxation and the spin-sublattice entanglement in graphene. When spin relaxation is driven by spin-orbit coupling, the EY mechanism is dominant at low doping, yielding a minimum of spin lifetime at the charge neutrality point. Thus, the behavior of spin transport at low doping in graphene may be a direct experimental signature of the quantum entanglement properties of the charge carriers. Beyond spin-sublattice entanglement, similar experimental features may arise from interparticle sublattice entanglement between electrons and holes in single-layer graphene <cit.>, or sublattice-layer entanglement in bilayer graphene <cit.>. Looking ahead, the complex interplay between intraparticle and interparticle entanglement remains to be explored, with this work laying the groundwork for understanding such dynamics. Beyond the quantification of entanglement through experimental data, such studies may be used to envision novel schemes of data processing and storage that utilize the multiple internal degrees of freedom of quantum matter. ICN2 is funded by the CERCA programme / Generalitat de Catalunya, and is supported by the Severo Ochoa Centres of Excellence programme, Grant CEX2021-001214-S, funded by MCIN / AEI / 10.13039.501100011033. This work is also supported by MICIN with European funds‐NextGenerationEU (PRTR‐C17.I1) and by Generalitat de Catalunya.
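As a quick numerical illustration of the closed-form result above, the snippet below checks its two limiting regimes. It is only a sketch: the Monte Carlo averaging itself is not reproduced, and the parameter values simply follow those used in the text.

```python
import numpy as np

def concurrence(E_F, lam_R):
    """Eigenstate concurrence C = x / sqrt(1 + x^2) with x = lam_R / E_F."""
    x = lam_R / E_F
    return x / np.sqrt(1.0 + x ** 2)

def tau_s_EY(E_F, lam_R, tau):
    """Elliott-Yafet lifetime tau_s = (3 - C^4) / (5 C^2 - 3 C^4) * tau."""
    C = concurrence(E_F, lam_R)
    return (3.0 - C ** 4) / (5.0 * C ** 2 - 3.0 * C ** 4) * tau

lam_R = 500e-6        # Rashba coupling, 500 micro-eV (in eV)
tau = 50e-15          # scattering time, 50 fs (in s)

# Low-doping limit E_F << lam_R: C -> 1, so tau_s -> tau
print(tau_s_EY(1e-6, lam_R, tau) / tau)                                   # ~ 1.0

# High-doping limit E_F >> lam_R: tau_s -> (3/5) (E_F / lam_R)^2 tau
E_F = 0.1             # 100 meV
print(tau_s_EY(E_F, lam_R, tau) / ((3 / 5) * (E_F / lam_R) ** 2 * tau))   # ~ 1.0
```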
[1] F. Giustino et al., J. Phys. Mater. 3, 042006 (2021).
[2] E. Y. Andrei, D. K. Efetov, P. Jarillo-Herrero, A. H. MacDonald, K. F. Mak, T. Senthil, E. Tutuc, A. Yazdani, and A. F. Young, Nat. Rev. Mater. 6, 201 (2021).
[3] H. Tian, X. Gao, Y. Zhang, S. Che, T. Xu, P. Cheung, K. Watanabe, T. Taniguchi, M. Randeria, F. Zhang, C. N. Lau, and M. W. Bockrath, Nature 614, 440 (2023).
[4] C. Ertler, S. Konschuh, M. Gmitra, and J. Fabian, Phys. Rev. B 80, 041405 (2009).
[5] D. Huertas-Hernando, F. Guinea, and A. Brataas, Phys. Rev. Lett. 103, 146801 (2009).
[6] H. Ochoa, A. H. Castro Neto, and F. Guinea, Phys. Rev. Lett. 108, 206808 (2012).
[7] P. Zhang and M. W. Wu, New J. Phys. 14, 033015 (2012).
[8] K. Pi, W. Han, K. M. McCreary, A. G. Swartz, Y. Li, and R. K. Kawakami, Phys. Rev. Lett. 104, 187201 (2010).
[9] W. Han and R. K. Kawakami, Phys. Rev. Lett. 107, 047207 (2011).
[10] K. M. McCreary, A. G. Swartz, W. Han, J. Fabian, and R. K. Kawakami, Phys. Rev. Lett. 109, 186604 (2012).
[11] P. J. Zomer, M. H. D. Guimarães, N. Tombros, and B. J. van Wees, Phys. Rev. B 86, 161416 (2012).
[12] D. Kochan, M. Gmitra, and J. Fabian, Phys. Rev. Lett. 112, 116602 (2014).
[13] D. Soriano, D. V. Tuan, S. M.-M. Dubois, M. Gmitra, A. W. Cummings, D. Kochan, F. Ortmann, J.-C. Charlier, J. Fabian, and S. Roche, 2D Mater. 2, 022002 (2015).
[14] M. Drögeler, C. Franzen, F. Volmer, T. Pohlmann, L. Banszerus, M. Wolter, K. Watanabe, T. Taniguchi, C. Stampfer, and B. Beschoten, Nano Lett. 16, 3533 (2016).
[15] M. R. Thomsen, M. M. Ervasti, A. Harju, and T. G. Pedersen, Phys. Rev. B 92, 195408 (2015).
[16] D. Van Tuan, F. Ortmann, A. W. Cummings, D. Soriano, and S. Roche, Sci. Rep. 6, 21046 (2016).
[17] A. W. Cummings and S. Roche, Phys. Rev. Lett. 116, 086602 (2016).
[18] Z. M. Gebeyehu, S. Parui, J. F. Sierra, M. Timmermans, M. J. Esplandiu, S. Brems, C. Huyghebaert, K. Garello, M. V. Costache, and S. O. Valenzuela, 2D Mater. 6, 034003 (2019).
[19] F. Sousa, D. T. S. Perkins, and A. Ferreira, Commun. Phys. 5, 291 (2022).
[20] J. M. Dueñas, J. H. García, and S. Roche, arXiv:2310.06447.
[21] D. V. Tuan, F. Ortmann, D. Soriano, S. O. Valenzuela, and S. Roche, Nat. Phys. 10, 857 (2014).
[22] B. G. de Moraes, A. W. Cummings, and S. Roche, Phys. Rev. B 102, 041403 (2020).
[23] M. Kindermann, Phys. Rev. B 79, 115444 (2009).
[24] V. A. S. V. Bittencourt and A. E. Bernardini, Phys. Rev. B 95, 195145 (2017).
[25] E. I. Rashba, Phys. Rev. B 79, 161409 (2009).
[26] J. F. Sierra, J. Fabian, R. K. Kawakami, S. Roche, and S. O. Valenzuela, Nat. Nanotechnol. 16, 856 (2021).
[27] W. K. Wootters, Phys. Rev. Lett. 80, 2245 (1998).
[28] C. Jacoboni and L. Reggiani, Rev. Mod. Phys. 55, 645 (1983).
[29] See Supplemental Material at [URL will be inserted by publisher] for a detailed description of the Monte Carlo simulations and the analytical derivation of the EY spin relaxation time.
[30] C. Kittel, Introduction to Solid State Physics, 8th ed. (Wiley, 2004).
[31] I. Žutić, J. Fabian, and S. Das Sarma, Rev. Mod. Phys. 76, 323 (2004).
| http://arxiv.org/abs/2310.17950v1 | {
"authors": [
"Jorge Martinez Romeral",
"Aron W. Cummings",
"Stephan Roche"
],
"categories": [
"cond-mat.mes-hall",
"quant-ph"
],
"primary_category": "cond-mat.mes-hall",
"published": "20231027074455",
"title": "Resilient Intraparticle Entanglement and its Manifestation in Spin Dynamics of Disordered Dirac Matter"
} |
A class of fractional differential equations via power non-local and non-singular kernels: existence, uniqueness and numerical approximations. This is a preprint of a paper whose final form is published in Physica D: Nonlinear Phenomena (ISSN 0167-2789). Submitted 19-Jan-2023; revised 15-May-2023; accepted for publication 11-Oct-2023. Hanaa Zitane, Delfim F. M. Torres (Corresponding author). Center for Research and Development in Mathematics and Applications (CIDMA), Department of Mathematics, University of Aveiro, 3810-193 Aveiro, Portugal
================================================================================

Let ℬ(H) be the bounded, linear operators on a separable Hilbert space equipped with the norm topology. A property is called typical if the set of operators fulfilling the property is co-meager. We show that having non-empty continuous spectrum is a typical property and that the operators with empty point spectrum form a nowhere dense set. In addition we show that the set of operators with empty continuous spectrum is dense and characterize the closure of the set of those operators for which the spectrum and the point spectrum coincide.

§ INTRODUCTION

2020 Mathematics Subject Classification. 47A11. Key words and phrases. G_δ-sets, co-meager, point spectrum, continuous spectrum, (semi-)Fredholm operators, polar decomposition, Banach-Mazur game, Similarity Theorem. The author was partially supported by the Emmy Noether Program of the German Research Foundation (DFG Grant 466012782).

G_δ-sets appear at many different places. For example, the continuity points of a real-valued function or the extreme points of any metrizable convex compact set in a locally convex topological vector space form a G_δ-set. A subspace of a Baire space X is called co-meager if it contains a dense G_δ-set, and a property Φ is called typical if {x∈ X : x has Φ} is co-meager. In <cit.> T. Eisner and T. Mátrai investigated some typical properties in ℬ(H). It turns out that typical properties greatly depend on the underlying topology. To be a unitary operator on an infinite-dimensional Hilbert space is a typical property with respect to the weak operator topology (see <cit.>), while the set of unitary operators is closed and has empty interior with respect to the operator norm topology.

In this paper we will answer the two questions given in <cit.>. Both concern ℬ(H) with the norm topology and properties that give a restriction on the spectrum. We start with basic definitions of the point and continuous spectrum of an operator in <Ref>. For the proofs of the main theorems we need some well-known results concerning semi-Fredholm operators. Most of these will also be given in this section together with some helpful lemmas.

In <Ref>, we look at the first question in <cit.>: Are the operators with non-empty point spectrum co-meager in ℬ(H) with respect to the norm topology? We answer this question in the affirmative, but we get an even better result.

Let H be a separable Hilbert space. The set {T∈ℬ(H); σ_p(T)=∅} is nowhere dense with respect to the norm topology.

In <Ref> we move on to the second question of <cit.>, which asks if the operators with empty continuous spectrum are co-meager.
Unlike the previous question, the answer is no and we will prove the following theorem.

Let H be a separable infinite-dimensional Hilbert space. The set {T∈ℬ(H); σ_c(T)≠∅} is co-meager with respect to the norm topology.

The proof of this theorem uses the Banach-Mazur game and will be somewhat technical. However, even with the knowledge that a given set is meager, it is not clear if the set is dense in the associated space. With the help of the Similarity Orbit Theorem of C. Apostol, L. A. Fialkow, D. A. Herrero and D. Voiculescu <cit.> we obtain the following result in <Ref>.

Let H be a separable Hilbert space. The set {T∈ℬ(H); σ_c(T)=∅} is dense in ℬ(H) with respect to the norm topology.

§ ACKNOWLEDGEMENT

I wish to thank Michael Hartz for many helpful discussions and John McCarthy for pointing me towards the Similarity Orbit Theorem.

§ DEFINITIONS

Throughout the paper H will be a separable Hilbert space and ℬ(H) the set of all linear, bounded operators from H to H. For T∈ℬ(H), the spectrum σ(T) of T is the set of all points λ∈ℂ such that T-λ id_H is not invertible. Write 𝒦(H) for all compact operators on H. The essential spectrum σ_e(T) of T is the spectrum of T+𝒦(H) seen as an element of the Calkin algebra ℬ(H)/𝒦(H). Fundamental and important facts about both the spectrum and the essential spectrum are that they are bounded, closed and non-empty (if dim(H)=∞) subsets of ℂ. Of course the essential spectrum is also contained in the spectrum.

For the definition of the following spectra, we have to take a closer look at when an operator is not invertible. It is obvious that an operator is not invertible when the kernel ker(T) is not equal to {0}. The set of λ∈ℂ such that ker(T-λ id_H)≠{0} is called the point spectrum σ_p(T). Another term for a point in σ_p(T) is eigenvalue. If dim(H) is finite then σ(T)=σ_p(T). This equality can fail if dim(H)=∞ and, even worse, the point spectrum can be empty. Let T^* be the adjoint of T. Now the spectrum of T can be decomposed as σ(T)=σ_p(T)∪σ_p(T^*)∪σ_c(T), where the last set is called the continuous spectrum of T and is defined as all points λ∈σ(T) such that λ∉σ_p(T)∪σ_p(T^*). With the equality ker(T^*-λ̅ id_H)=ran(T-λ id_H)^⊥ one sees that a point λ∈ℂ is in σ_c(T) if and only if
* ker(T-λ id_H)=ker(T^*-λ̅ id_H)={0} and
* the image of T-λ id_H is not closed.

Throughout the paper we will use (semi-)Fredholm operators. For that reason we now give the definition and all the fundamental results we need. An operator T∈ℬ(H) is called (semi-)Fredholm if
* the image of T is closed,
* dim(ker(T))<∞ and (or) dim(ker(T^*))<∞.
If T is a semi-Fredholm operator, then ind(T)=dim(ker(T))-dim(ker(T^*))∈ℤ∪{-∞,∞} is called the index of T. The following lemmas contain well-known properties of semi-Fredholm operators.

Let c∈ℤ∪{-∞,∞}. Then the set {T∈ℬ(H); T is semi-Fredholm and ind(T)=c} is open with respect to the operator norm.

Proof: See for example <cit.>.

Let T∈ℬ(H) and λ∈∂σ_e(T), then T-λ id_H is not semi-Fredholm. In particular, if dim(H)=∞ there is always λ∈ℂ such that T-λ id_H is not semi-Fredholm.

Proof: The first statement is <cit.>. The second claim follows from the fact that σ_e(T) is compact and non-empty if dim(H)=∞.

The last technique we need is the polar decomposition of an operator. If T∈ℬ(H) is an operator, there is a partial isometry P∈ℬ(H) and a positive operator A∈ℬ(H) with ran(A)^⊥=ker(P) such that T=PA. See <cit.>. The next lemma and the following corollary are the starting point of the first two main theorems.

Let T∈ℬ(H) be not semi-Fredholm and ‖T‖>ϵ>0. Write T=PA for the polar decomposition of T and ν for the spectral measure of A.
ThenS=P∫_ϵ^Tt dν(t)∈ℬ(H)has the following properties: * the image of S is closed and more precisely Sx≥ϵx for all x∈(S)^⊥,* ((S))=((S)^⊥)=∞,* T-S≤ϵ. Proof: Let T, S, P, A, ν be as above. Since P is a partial isometry and ν([ϵ,T])(H)⊂ν({0})(H)^⊥=(A)^⊥=(A)=(P)^⊥, we have that∫_ϵ^Tt dν(t)x=Sxfor every x∈ν([ϵ,T])(H). SoSx^2=∫_ϵ^Tt dν(t)x^2=⟨∫_ϵ^Tt^2dν(t)x,x⟩=∫_ϵ^Tt^2 dν_x,x(t)≥ϵ^2x^2,where ν_x,x(·) stands for the measure ⟨ν(·)x,x⟩. But since (S)^⊥⊂ν([ϵ,T])(H), we have proven (i).For clarity of the proof of part (ii) we denote by [T] the equivalence class +𝒦(H). Since T is not semi-Fredholm, [T] is not left-invertible by <cit.> and so [T^*T]=[A]^2 is not invertible. Thus [A] is not invertible and in particular A is not Fredholm. Now we can write A asA=∫_0^T(ϵ,t)dν(t)-∫_0^ϵ(ϵ-t)dν(t).If we assume that (ν([0,ϵ))(H))<∞, then the operator ∫_0^ϵ(ϵ-t)dν(t) has finite rank. But ∫_0^T(ϵ,t)dν(t) is invertible, so we get the contradiction that A is Fredholm. Hence (ν([0,ϵ))(H))=∞. Withν([0,ϵ))(H)⊂(S)Pν([0,ϵ))(H)⊂(S)^⊥where the second inclusion follows from ν([0,ϵ))(H)⊂(P)^⊥, we obtain ((S))=((S)^⊥))=∞.The last part (iii) follows from (T-S)x^2=∫ tχ_[0,ϵ)(t) dν(t)x^2=∫ t^2χ_[0,ϵ)(t)dν_x,x(t)≤ϵ^2x^2for all x∈ H.□ Let T∈ℬ(H) and ϵ>0. There are λ∈ℂ and S∈ B_ϵ(T-λ id_H) such that* the image of S is closed,* ((S))=((S)^⊥)=∞. Proof: First apply <Ref> and then <Ref>.□§ NON-EMPTY POINT SPECTRUM Let T be in ℬ(H). If (H)<∞ then every point in σ(T) is an eigenvalue and so the point spectrum of T is never empty. This is not true if (H)=∞. One of the best known examples is the bilateral shift on ℓ^2(ℤ). Now the question arises how big the set of all operators with empty point spectrum is. Problem 8.4 in <cit.> asks if the set of operators with non-empty point spectrum is co-meager with respect to the operator norm. The definition of co-meager set will be given in the next chapter. We will answer the question in the affirmative and even better, we will show that the complement of the set is nowhere dense. The set {T∈ℬ(H);σ_p(T)=∅}is nowhere dense with respect to the operator norm. The proof follows immediately from the next lemma.Let T∈ℬ(H) and ϵ>0. There exist δ>0 and T̃∈ℬ(H) such that * T-T̃<ϵ,* B_δ(T̃)⊂{A∈ℬ(H); σ_p(A)≠∅}.Proof: We can assume that (H)=∞ or else T=T̃ fulfills the desired properties. Apply <Ref> to T, ϵ/2 and obtain S∈ℬ(H), λ∈ℂ and δ>0 such that* the image of S is closed,* ((S))=((S^*))=∞,* S-(T-λ id_H)≤ϵ/2.Since ((S))=((S^*))=∞, there is an isometric operatorj:(S^*)→(S)such that ((S)⊖(j))<∞ and j is not surjective. Let I be the extension of j to H by setting I(x)=0 for x∈(S^*)^⊥=(S). Then the operatorS̃=S+ϵ/3I^*is Fredholm with (S̃)=c<0. This follows from (S̃)=(S)⊕(I^*)=H and (S̃)=(j^*)≠{0}. However by <Ref> the set of Fredholm operators with index c is open so there is a δ>0 withB_δ(S̃)⊂{A∈ℬ(H); (A)<0}⊂{A∈ℬ(H); σ_p(A)≠∅}.The operator T̃=S̃+λ id fulfills the desired properties sinceT-T̃=T-λ id-S̃≤T-λ id-S+S-S̃<ϵ.Likewise we have for A∈ B_δ(T̃) that A-λ id∈ B_δ(S̃) and hence ∅≠σ_p(A-λ id), so σ_p(A)≠∅. □§ NON-EMPTY CONTINUOUS SPECTRUM Let T∈ℬ(H). If (H)<∞ then the continuous spectrum of T is always empty. However if (H)=∞ there are plenty of examples for operators with non-empty continuous spectrum. For example 0∈σ_c(T) whenever T is compact and has trivial kernel. Like in the previous chapter the question is how big is the set of operators with non-empty continuous spectrum. Problem 8.4 in <cit.> asks if this set is meager. In the following chapter we will show that the opposite is true. 
Let (X,τ) be a topological space. A subset U⊂ X is called co-meager if there are sets (U_n)_n∈ℕ in X with * (X∖ U_n)=∅,* U=⋂_n∈ℕU_n.U is meager if X∖ U is co-meager. So the main theorem of the section is the following. The set {T∈ℬ(H);σ_c(T)≠∅}is co-meager with respect to the operator norm.For the proof we need some technical lemmas and the Banach-Mazur game. We start with the definition of the Banach-Mazur game. Assume player I and player II play a game where they take turns playing sets of a topological space (X,τ)I:U_1U_3…II:U_2U_4 …with the condition that ∅≠ U_n+1⊂ U_n and U_n∈τ for all n∈ℕ. Player II wins the game for a set A⊂ X if ⋂_n∈ℕU_2n⊂ A.Note that by choice of the U_n we automatically have ⋂_n∈ℕU_2n-1=⋂_n∈ℕU_2n.The above game is called Banach-Mazur game and the relation to co-meager sets is given by the next theorem. A proof can be found in <cit.>. Let (X,d) be a metric space and ∅≠ A⊂ X. In the Banach-Mazur game, player II has a winning strategy if and only if the set A is co-meager. If one has to show that Player II has a winning strategy, it is enough to assume that Player I only plays open balls of the form B_ϵ(T). This follows from the fact that the sets played are open and Player II has to find a winning strategy for all possible choice Player I can make.Now we come to the technical lemmas needed to prove <Ref>. Let T∈ℬ(H) be not semi-Fredholm. Then for every ϵ>0 there is S∈ B_ϵ(T) such that* σ_e(S)=σ_e(T),* (S)=(S^*)={0}. Proof: The idea is to choose S as a compact perturbation of T since compact perturbations do not change the essential spectrum. Assume that there is a compact operator K_1 such that K_1≤ϵ/2 and ((T-K_1))=((T^*-K_1^*)). Then we can find an injective, compact operator K_2:(T-K_1)→(T^*-K_1^*) with dense image and K_2<ϵ/2. Extend K_2 to H by 0 on (T-K_1)^⊥. The operator S=T-K_1+K_2 now fulfills the conditions (i) and (ii). Thus it remains to show that such an operator K_1 exists.By <cit.> there is a compact operator K∈ℬ(H) such that ((T-K))=∞ and K<ϵ/4. We are done if ((T^*-K^*))=∞. So assume that ((T^*-K^*))<∞. Since T^*-K^* is not semi-Fredholm, (T^*-K^*) not closed. So we can apply the same theorem again to the operatorH→(T-K)^⊥,x↦ (T^*-K^*)x.This gives another compact operator K̃:H→(T-K)^⊥ such that ((T^*-K^*-K̃))=∞ and K̃<ϵ/4. However we can consider K̃ as an operator from H to H. Finally the operator K_1=K+K̃^*∈ℬ(H) is compact as sum of two compact operators and fulfills K_1<ϵ/2 as well as ((T-K_1))=((T^*-K_1^*))=∞. □ Let T∈ℬ(H) be not semi-Fredholm with (T)={0} and T=PA the polar decomposition with spectral measure v. Thenv([δ,||T||])→_SOTid_Hδ→0.In particular for every final dimensional subspace F⊂ H we have||(v([δ,||T||])-id_H)|_F||→0for δ→0. Proof:v((0,T])=id_H since v((0,||T||]) is the projection onto (A)^⊥. Let x∈ H. It holdslim_δ→0(id_H-v([δ,T))x^2=lim_δ→0v([0,δ))x^2=lim_δ→0v_x,x([0,δ))=v_x,x({0})=0,where v_x,x(·) stands for the measure ⟨ v(·)x,x⟩. Thus v([δ,||T||]) converges in the SOT to id_H for δ→0. The additional note follows from a characterization of the SOT in <cit.>.□ A simple application of the triangle inequality shows that a sequence of norm bounded operators that converges pointwise on a dense subspace to an operator already converges pointwise on the whole space. This proves the next lemma. Let (v_n)_n∈ℕ be a sequence of spectral measures and (r_n)_n∈ℕ in [0,∞) such that v_n((0,r_n])=id_H. 
Also assume that there is a increasing sequence F_n of finite dimensional subspaces inH whose union is dense in Hand a null sequence (δ_n)_n∈ℕ of positive numbers such that||(v_n([δ_n,r_n])-id_H)|_F_n||<1/n.Then v_n([δ_n,r_n])x→ xfor every x∈ H. Let ϵ>0, T∈ℬ(H), λ∈∂σ_e(T). Then there are S∈ B_ϵ(T), δ>0 with B_ϵ(λ)∩∂σ_e(A)≠∅for every A∈ B_δ(S). Proof:Let ϵ>0, T∈ℬ(H) and λ∈∂σ_e(T). Since λ is in the boundary of the essential spectrum T-λ id_H is not semi-Fredholm by <Ref> and in addition there is a λ̃∈ B_ϵ(λ) such that T-λ̃id_H is Fredholm, denote (T-λ̃id_H)=c. By <Ref> the set of Fredholm operators with index c is open so there is a δ̃>0 with(A-λ̃id_H)=c∀ A∈ B_δ̃(T).Without loss of generality let δ̃<ϵ. Applying <Ref> to T-λ id_H, δ̃/2 yields S̃∈ℬ(H) with the properties * (S̃) is closed,* ((S̃))=((S̃)^⊥)=∞,* ||T-λ id_H-S̃||≤δ̃/2.Take an isometric operator j:(S̃)→(S̃)^⊥ such that ((j))-((j^*))=c̃∈ℤ∖{c}. Extend j on H by defining j(x)=0 for x∈(S̃)^⊥. Now the operator S=S̃+λ id_H+δ̃/3 j fulfills * ||T-S||≤||T-λ id_H-S̃||+δ̃/3||j||<δ̃,* S-λ id_H is Fredholm with (S-λ id_H)=c̃,* S-λ̃id_H is semi-Fredholm with (S-λ̃id_H)=c,where part (c) follows from S∈ B_δ̃(T) and <Ref>.By <Ref> there is a δ>0 such that(A-λ id_H)=c̃ ∀ A∈ B_δ(S),(A-λ̃id_H)=c∀ A∈ B_δ(S).It remains to show that B_ϵ(λ)∩∂σ_e(A)≠∅ for every A∈ B_δ(S). For this purpose definet_0={t∈[0,1],A-λ id_H+t(λ-λ̃)id_H }and λ_0=λ-t_0(λ-λ̃). The existence of t_0 is guaranteed because ( A-λ id_H)≠(A-λ̃id_h). By definition of t_0 is A-λ_0 id_H not semi-Fredholm, thus λ_0∈σ_e(A) and since every neighborhood of λ_0 contains a point λ_1 such that A-λ_1 id_H is Fredholm we obtain λ_0∈∂σ_e(A). The last thing to show is that λ_0 is in B_ϵ(λ) but that follows from|λ-λ_0|≤|λ-λ̃|<ϵ. We are finally able to prove <Ref>. For that fix an ONB (e_n)_n∈ℕ of H and define F_n=⟨ e_1,…,e_n⟩.Proof of <Ref>:Recall that we want to use the Banach-Mazur game. By <Ref> we can assume that player I plays the set B_ϵ_1(T_1)⊂ℬ(H). Let λ_1∈∂σ_e(T_1). By <Ref> we can assume that (T_1-λ_1id_H)=(T_1^*-λ̅_1id_H)={0}.Let T_1-λ_1 id_H=P_1A_1, T_1^*-λ̅_1 id_H=Q_1B_1 be the polar decomposition and ν_1, μ_1 be the spectral measures of A_1, B_1. By <Ref> there is a 1>δ̃_̃1̃>0 with||(ν_1([δ̃_̃1̃,||A_1||])-id_H)|_F_1||<1,||(μ_1([δ̃_̃1̃,||B_1||])-id_H)|_F_1||<1.By <Ref> there is a δ_1>0, S_1∈ B_δ̃_1/4(T_1) such thatB_δ̃_1/4(λ_1)∩∂σ_e(A)≠∅for every A∈ B_δ_1(S_1). Without loss of generality we can assume that δ_1<1. Player II plays B_δ_1(S_1). Let n>1. We are going to construct the sets of Player II inductively. So assume that the sets B_ϵ_i(T_i), B_δ_i(S_i), i=1,…,n-1 have been played. Set λ_0=0, δ̃_0=∞. The induction requirements are :λ_i∈∂σ_e(T_i)∩ B_δ̃_i-1/4(λ_i-1) and polar decompositions T_i-λ_i id_H=P_iA_i,T_i^*-λ̅_i id_H=Q_iB_iwith spectral measures ν_i, μ_i such that||(ν_i([δ̃_̃ĩ,||A_i||])-id_H)|_F_i||<1/i||(μ_i([δ̃_̃ĩ,||B_i||])-id_H)|_F_i||<1/ifor 1/i>δ̃_i>0 with B_δ̃_i/4(λ_i)⊂ B_δ̃_i-1/4(λ_i-1) . Also δ_i<min{1/i,δ̃_i/4} andB_δ̃_i/4(λ_i)∩∂σ_e(A)≠∅for every A∈ B_δ_i(S_i). Assume Player I plays B_ϵ_n(T_n). By construction there is a λ_n∈∂σ_e(T_n)∩ B_δ̃_n-1/4(λ_n-1) and by <Ref> we can assume that (T_n-λ_n)=(T_n^*-λ̅_n)={0}. Again let T_n-λ_n id_H=P_nA_n, T_n^*-λ̅_n id_H=Q_nB_n be the polar decomposition and v_n, μ_n the spectral measures of A_n, B_n. 
By <Ref> there is a 1/n>δ̃_n>0 such that||(v_n([δ̃_n,||A_n||])-id_H)|_F_n||<1/n,||(μ_n([δ̃_n,||B_n||])-id_H)|_F_n||<1/n.Without loss of generality we can assume that δ̃_n is so small that B_δ̃_n/4(λ_n)⊂ B_δ̃_n-1/4(λ_n-1).By <Ref> there is a δ_n>0, S_n∈ B_δ̃_n/4(T_n) withB_δ̃_n/4(λ_n)∩∂σ_e(A)≠∅for every A∈ B_δ_n(S_n). Without loss of generality let δ_n<min{δ̃_n/4,1/n}.Player II plays B_δ_n(S_n). It remains to show that T∈⋂_n∈ℕB_δ_n(S_n)=⋂_n∈ℕB_ϵ_n(T_n) fulfills σ_c(T)≠∅. Let λ_n, δ̃_n, A_n, v_n be as above. Since B_δ̃_i/4(λ_i)⊂ B_δ̃_i-1/4(λ_i-1) for every i∈ℕ, the sequence (λ_n)_n∈ℕ converges to a λ∈ℂ with |λ-λ_n|≤δ̃_n/4. On the other hand T_n converges to T since ||T_n-T||<ϵ_n≤δ_n-1<1/(n-1). Thus T-λ id_H is not semi-Fredholm because it is the limit of non semi-Fredholm operators.It remains to show that (T-λ id_H)=(T^*-λ̅id_H)={0}. For that let x∈(T-λ id_H) and x_n=v_n([δ̃_n,||A_n||])x. We haveδ̃_n(||x_n||-||x-x_n||) ≤||(T_n-λ_n id_H)x|||=||(T_n-S_n)x+(S_n-T)x-(λ_n-λ)x||≤δ_n||x||+δ_n||x||+δ̃_n/4||x||≤δ̃_n/4||x||+δ̃_n/4||x||+δ̃_n/4||x||=3/4δ̃_n||x||.Here the first inequality follows from ||(T_n-λ_n id_H)x_n||≥δ̃_n||x_n|| and ||(T_n-λ_n id_H)(x-x_n)||≤δ̃_n||x-x_n||.But x_n converges to x by <Ref>. Hence the above inequaltiy shows that x has to be 0. One can do the same for x∈(T^*), x_n=μ_n([δ̃_n,A_n])x and obtains in total that (T-λ id_H)=(T^*-λ̅id_H)={0}. Thus λ∈σ_c(T). □§ DENSITY OF OPERATORS WITH EMPTY CONTINUOUS SPECTRUMEven with the knowledge that operators with non-empty continuous spectrum are co-meager, it is still not clear how big the closure of the complement is. For example ℚ is meager in ℝ but its closure is ℝ. In this section we show that the closure of operators with empty continuous spectrum is ℬ(H).We have{ T∈ℬ(H); σ_c(T)=∅}=ℬ(H)with respect to the norm topology.The main idea of the proof is to show that every operator is in the closure of the similarity orbit of an operator with empty continuous spectrum. To achieve this we are going to use the restricted Similarity Orbit Theorem from <cit.>.For an operator T∈ℬ(H) we call (T)={S^-1TS; S∈ℬ(H) }the similarity orbit of T. In <cit.>, the closure of (T) was characterized in terms of properties of the spectrum of T. We need the following definitions to formulate the restricted version of the Similarity Orbit Theorem.The normal spectrum σ_0(T) are the isolated points in σ(T) which are not in the essential spectrum σ_e(T).We denote the points λ∈ℂ such that T-λ id_H is semi-Fredholm by ρ_sF(T). By <Ref> this set is open.Let λ∈σ_e(T) be isolated and Φ be a faithful unital *-representation of the Calkin algebra ℬ(H)/𝒦(H). Let f(z) be the function that is equal to z-λ on a neighborhood of λ and 0 on a neighborhood of σ_e(T)∖{λ}. We can apply the holomorphic functional calculus to the element Φ(T+𝒦(H)) to obtain a quasinilpontent element Q_λ . For λ∈ℂ we definek(λ;T)=0 λ∉σ_e(T),n λσ_e(T)Q_λn, ∞ .It is not immediately clear that the above definition of k(λ;T) is independent of the representation Φ. However, this can be obtained by taking the function f(z) that is equal to z-λ in a neighborhood of λ and 0 on a neighborhood of σ_e(T)∖λ and applying it to the element T+𝒦(H). The element f(T+𝒦(H)) is nilpotent of order n if and only if k(λ;T)=n. (see also <cit.> and <cit.>)The reason we gave the definition with a representation of the Calkin algebra is that we use it to proof the next lemma.Let A∈ℬ(H) and B∈ℬ(H̃) be operators on Hilbert spaces H, H̃. 
If λ∈σ_e(A⊕ B) is isolated, thenk(λ;A⊕ B)=(k(λ;A),k(λ;B)).Proof: Let λ∈σ_e(A⊕ B) be isolated. For clarity of the proof we will denote the equivalence class of an operator T in the Calkin algebra by [T]. Define the projection P byP:H⊕H̃→ H⊕H̃, x⊕ y↦ x⊕ 0.Let Φ:ℬ(H⊕H̃)/𝒦(H⊕H̃)→ℬ(K) be a faithful unital *-representation. Since Φ is a *-homomorphism, Φ([P]) is a projection again. Denote this projection by P_1 and the Hilbert space P_1(K) by H_1. Further we denote the projection id_K-P_1 by P_2 and P_2(K)=H_2. This yields a decomposition of K into H_1⊕ H_2. Since P_1 commutes with Φ([A⊕0]) and P_2 with Φ([0⊕ B]), we see that H_1 and H_2 reduce Φ([A⊕ B]). Write A_0 for Φ([A⊕ B])|_H_1 and B_0 for the restriction on H_2. NowΦ_1:ℬ(H)/𝒦(H)→ℬ(H_1), [T]↦Φ([T⊕ 0])|_H_1is well defined since P_1 commutes with Φ([T⊕ 0]) for all T∈ℬ(H) and also a faithful unital *-representation. Thus we have Φ_1([A])=A_0 and σ([A])=σ(A_0). Likewise forΦ_2:ℬ(H)/𝒦(H)→ℬ(H_2), [T]↦Φ([0⊕ T])|_H_2we obtain Φ_2([B])=B_0 and σ([B])=σ(B_0). Let f(z) be a function that is the equal to z-λ on a neighborhood of λ and 0 on a neighborhood of σ_e(A⊕ B)∖{λ}. We can apply f to A_0 and B_0 since σ_e(A⊕ B)=σ_e(A)∪σ_e(B). Hence we obtainf(Φ([A⊕ B]))=f(A_0⊕ B_0)=f(A_0)⊕ f(B_0).But Φ_1 and Φ_2 are faithful unital *-representations and since k(λ;·) is independent of the representation, we obtain that f(A_0) is nilpotent of order n if and and only if k(λ;A)=n. Likewise f(B_0) is nilpotent of order n if and only if k(λ;B)=n. Thus f(Φ([A⊕ B])) is nilpotent of order n if and only if n=(k(λ;A),k(λ;B)). If f(Φ([A⊕ B])) is not nilpotent then either k(λ;A)=∞ of k(λ;B)=∞ and we have proven the lemma. □ Let A be an operator in ℬ(H). We say that A has property (S), (F) or (A) with respect to T if(S)σ_0(A)⊂σ_0(T)ℂ∖ρ_sF(A) intersects σ_e(T), (F) ρ_sF(A)⊂ρ_sF(T), (λ id_H-A)=(λ id_H-T) andmin(λ id_H-T)^k≤min(λ id_H-A)^k for all λ∈ρ_sF(T) and k≥1, (A) ((A-λ id_H))=((T-λ id_H)) for all λ∈σ_0(A).Here min (x) stands formin{((x)),((x^*))}.Now we are ready to formulate the restricted Similarity Orbit Theorem <cit.>.[restricted Similarity Orbit Theorem] If T is in ℬ(H) and k(λ;T)=∞ for every isolated point λ∈σ_e(T), then(T)={X∈ℬ(H); X(S), (F)(A)T}. To get rid of the points λ∈σ_e(A) with k(λ;A)∈ℕ we need the following special operator. Let T_0 be a quasinilpotent operator in ℬ(H) such that k(0;T_0)=∞ anda=(λ_n)_n∈ℕ be a bounded sequence of isolated points in ℂ. Then the operatorT_a:⊕_n∈ℕ H→⊕_n∈ℕ H, (x_n)_n↦ ((λ_n id_H-T_0)(x_n))_n is in ℬ(⊕_n∈ℕH) since the sequence a is bounded. Furthermore if λ∉{λ_n, n∈ℕ}, then by continuity of the inverse map, sup_n∈ℕ((λ-λ_n)id_H-T_0)^-1<∞ and hence λ∉σ(T_a). We obtain that {λ_n; n∈ℕ}=σ(T_a)=σ_e(T_a) and each λ_n is a isolated point in σ_e(T_a). Fix m∈ℕ. We can write T_a as a direct sum of λ_m id_H-T_0 and ⊕_n∈ℕ∖{m}H→⊕_n∈ℕ∖{m}H, (x_n)_n↦ ((λ_n id_H-T_0)(x_n))_n.By assumption on T_0, we have that k(λ_m;λ_m id_H-T_0)=∞. But λ_m is isolated in σ_e(T_a) and so we conclude that k(λ_m;T_a)=∞ by <Ref> . Since m was arbitrary we see that k(λ_n;T_a)=∞ for all n∈ℕ. Let H̃ be a Hilbert space and A∈ℬ(H), B∈ℬ(H̃). Assume that σ(B)=σ_p(B)=ℂ∖ρ_sF(A). 
Then * σ(A⊕ B)=σ(A),* σ_c(A⊕ B)=∅,* σ_e(A⊕ B)=σ_e(A),* σ_0(A⊕ B)=σ_0(A) and ((A⊕ B-λ id_H⊕H̃))=((A-λ id_H)) for all λ∈σ_0(A),* ρ_sF(A⊕ B)=ρ_sF(A),* (A⊕ B-λ id_H⊕H̃)=(A-λ id_H) for all λ∈ρ_sF(A⊕ B) ,* min(A⊕ B-λ id_H⊕H̃)^k=min(A-λ id_H)^k for all λ∈ρ_sF(A⊕ B) and k≥1.If U is a linear, invertible operator from H⊕H̃→ H, then A has property (S), (A) and (F) with respect to U(A⊕ B)U^-1.Proof: (i) follows from σ(A⊕ B)=σ(A)∪σ(B) and σ(B)=ℂ∖ρ_sF(A)⊂σ(A).Let λ∈σ_c(A⊕ B). Since λ has to be in σ(A) but not in ρ_sF(A), we get λ∈σ(B). But σ(B)=σ_p(B)⊂σ_p(A⊕ B). This is a contradiction to λ∈σ_c(A⊕ B) and we obtain (ii).For (iii) observe that σ_e(A⊕ B)=σ_e(A)∪σ_e(B) and σ_e(B)⊂σ(B)⊂σ_e(A).From (i) and (iii) we get that σ_0(A⊕ B)=σ_0(A). In addition we have that ((A⊕ B-λ id_H⊕H̃))=((A-λ id_H))+((B-λ id_H̃))for all λ∈ℂ. But ((B-λ id_H̃))=0 for all λ∈σ_0(A)⊂ℂ∖σ(B) and (iv) follows.(v) and (vi) essentially follow from σ(B)⊂ℂ∖ρ_sF(A) and (B-λ id_H̃)=0 for all λ∈ρ_sF(A). To be more precise, (A⊕ B-λ id_H⊕H̃) is closed if and only if the image of A-λ id_H and B-λ id_H̃ is closed. And with <Ref> we see that if A-λ id_H is semi-Fredholm and B-λ id_H̃ is Fredholm, then A⊕ B-λ id_H⊕H̃ is semi-Fredholm with (A⊕ B-λ id_H⊕H̃)=(A-λ id_H)+(B-λ id_H̃)for all λ∈ρ_sF(A⊕ B). But since B-λ id_H̃ is invertible for all λ∈ρ_sF(A) we get that (B-λ id_H̃)=0 and ρ_sF(A)⊂ρ_sF(A⊕ B). However if A-λ id_H is not semi-Fredholm, then A⊕ B-λ id_H⊕H̃ is not semi-Fredholm and we obtain the other inclusion ρ_sF(A⊕ B)⊂ρ_sF(A).Part (vii) follows from the above observation that (A⊕ B-λ id_H⊕H̃)=(A-λ id_H)⊕(B-λ id_H̃), (A⊕ B-λ id_H⊕H̃)^k=(A-λ id_H)^k⊕(B-λ id_H̃)^k and ((B-λ id_H̃)^*)^k=(B-λ id_H̃)^k={0} for all λ∈ρ_sF(A)=ρ_sF(A⊕ B).The additional remark follows from (i) to (viii) and the observation that each of the sets σ(·), σ_e(·), σ_c(·), ρ_sF(·), σ_0(·) and each number ((·)), (·), min(·) is invariant under similarity.□ Proof of <Ref>:We already know that the theorem is true for (H)<∞. For the rest of the proof we assume that (H)=∞. Let A∈ℬ(H) and {λ_n; n∈ℕ} be the set of isolated points in σ_e(A). If the set is finite we just repeat one λ infinitely often. Let T_a be the operator from <Ref> with respect to the sequence (λ_n)_n∈ℕ. Recall that σ(T_a)={λ_n; n∈ℕ} and k(λ_n;T_a)=∞ for all n∈ℕ. Since the set ℂ∖ρ_sF(A) is compact and non-empty, by <cit.> there is an operator T_p∈ℬ(H) such that σ(T_p)=σ_p(T_p)=ℂ∖ρ_sF(A).Now define the operator B∈ℬ(H⊕⊕_n∈ℕH) as B=T_p⊕ T_a and T̃=A⊕ B. Fix a linear, invertible operator U:H⊕ H⊕⊕_n∈ℕ H→ H and define T=UT̃U^-1.The set {λ∈ρ_sF(A); (A-λ id_H)=∞(A-λ id_H)=-∞}is open by <Ref> and equal to σ_e(A)∩ρ_sF(A). Hence no isolated point in σ_e(A) can belong to ρ_sF(A). Thus {λ_n, n∈ℕ} is contained in ℂ∖ρ_sF(A) and so σ(B)=σ(T_a)∪σ(T_p)=ℂ∖ρ_sF(A) and σ_p(B)=σ_p(T_a)∪σ_p(T_p)=σ(B). Now we can use <Ref> and see that A has property (A), (S) and (F) with respect to T. In addition we have that k(λ_n;T)=k(λ_n;T_a)=∞ for all n∈ℕ since k(λ;·) is invariant under similarity and by <Ref>. Thus the requirements for the restricted Similarity Orbit Theorem <ref> are met and we obtain that A∈(T). But since T has empty continuous spectrum, we see that A∈(T)⊂{ X∈ℬ(H); σ_c(X)=∅}. A slight modification of the above proof leads to the next theorem. We have{T∈ℬ(H); σ_p(T)=σ(T)}={T∈ℬ(H); ((T-λ id_H))≠0λ∈ρ_sF(T)∩σ(T) }. Proof: Let T∈ℬ(H) and assume that there is a λ∈ρ_sF(T)∩σ(T) such that ((T-λ id_H))=0. Since the operator T-λ id_H is semi-Fredholm and has trivial kernel, it is bounded below by some constant c>0. 
By <Ref> there is a ϵ<c/2 such that (S)=(T-λ id_H)≠0 for all S∈ B_ϵ(T-λ id_H). Now each operator in S∈ B_ϵ(T) fulfills that S-λ id_H is bounded below and λ∈σ(S). Hence B_ϵ(T)⊂ℬ(H)∖{X∈ℬ(H); σ_p(X)=σ(X)}and in particular we have that T∉{X∈ℬ(H); σ_p(X)=σ(X)}.On the other hand if A∈ℬ(H) and ((A-λ id_H))≠0 for all λ∈ρ_sF(A)∩σ(A), then we only have to check that the operator T from the proof of <Ref> lies in {X∈ℬ(H); σ_p(X)=σ(X)}. However by construction of T, we have that ℂ∖ρ_sF(T)⊂σ_p(T), σ(T)=σ(A), ρ_sF(T)=ρ_sF(A) and by the assumption on the operator A, we have that ρ_sF(A)∩σ(A)⊂σ_p(A). Since σ_p(A)⊂σ_p(T), we obtain σ_p(T)=σ(T) and the proof is complete. □ plain Fachrichtung Mathematik, Universität des Saarlandes, 66123 Saarbrücken, GermanyEmail address: [email protected] | http://arxiv.org/abs/2310.18490v1 | {
"authors": [
"Marcel Scherer"
],
"categories": [
"math.FA",
"math.SP",
"47A11 (Primary). 47A53 (Secondary)"
],
"primary_category": "math.FA",
"published": "20231027210814",
"title": "Spectra of typical Hilbert space operators"
} |
[footnoteinfo]This work was supported via projects PID2021-124137OB-I00 and TED2021-130224B-I00 funded by MCIN/AEI/10.13039/501100011033, by ERDF A way of making Europe and by the European Union NextGenerationEU/PRTR, by the Gobierno de Aragón under Project DGA T45-20R, by the Universidad de Zaragoza and Banco Santander, by the Consejo Nacional de Ciencia y Tecnología (CONACYT-México) grant 739841, and by Spanish grant FPU20/03134.2023 IFAC. This work has been accepted to IFAC for publication under a Creative Commons Licence CC-BY-NC-ND. Accepted for presentation at the 22nd IFAC World Congress 2023.First]Irene Perez-SalesaFirst]Rodrigo Aldana-LopezFirst]Carlos Sagues[First]Departamento de Informática e Ingeniería de Sistemas (DIIS) and Instituto de Investigación en Ingeniería de Aragón (I3A), Universidad de Zaragoza, María de Luna 1, 50018 Zaragoza, Spain Distributed sensor networks have gained interest thanks to the developments in processing power and communications. Event-triggering mechanisms can be useful in reducing communication between the nodes of the network, while still ensuring an adequate behaviour of the system. However, very little attention has been given to continuous-time systems in this context. In this work, we propose a strategy for distributed state estimation in sensor networks, based on average dynamic consensus of the continuous measurements. While communication between nodes is discrete and heavily reduced due to the event-triggering mechanism, our method ensures that the nodes are still able to produce a continuous estimate of the global average measurement and the state of the plant, within some tuneable error bounds.Estimation and filtering, sensor networks, distributed estimation, dynamic consensus, event-triggered communication, information and sensor fusion, stochastic systems. § INTRODUCTION Distributed sensor networks have gained interest thanks to the developments in processing power and communications. In distributed state estimation problems, several sensors collectively observe a dynamical system. Each of them has access to a local measurement as well asinformation from neighboring sensors, which can be used to reconstruct the full state of the plant. The collaboration of several sensing agents has some benefits. First, as it is pointed out by <cit.>, the redundancy of having several sensors measuring the same variables results in less risk of single-point failure, as well as a decrease in uncertainty of the resulting estimate. Additionally, since the nodes share information with their neighbors, it is no longer needed that each sensing node can estimate the full state of the plant using exclusively its own local measurement. Thus, several works tackle the problem of distributed state estimation, proposing distributed implementations of well-known filters <cit.>.Distributed estimation comes at the cost of having communication between the elements of the sensor network. This communication can be constrained by aspects such as bandwidth, a shared transmission network that needs to be available to several elements, or power limitations of the nodes. For this reason, event-triggered strategies are of interest for distributed applications, such as in networked systems <cit.> or wireless sensor networks <cit.>.In this context, event-triggering mechanisms need to be chosen to reduce transmissions of information, while ensuring that the quality of the resulting estimates is not heavily degraded. 
Several approaches have been taken to achieve distributed state estimation with event-triggered communication. Due to the availablity of a local estimate in each sensing node, some works design their triggering condition based on their local estimate of the state <cit.>. Another option is to monitor the behaviour of the measured signal, applying an absolute threshold to the difference between the current and last transmitted value <cit.>, or to base the trigger on the innovation of the measurements, i.e. the difference between the real and predicted measurement <cit.>. To fuse the information, adequate state estimators are designed according to the triggering mechanism, or algorithms to reach consensus on the state estimates are used <cit.>. While a variety of works exist on this topic, all of the aforementioned ones focus on discrete-time systems. Very little attention has been given to continuous-time systems in this context. Moreover, works such as <cit.>, which do feature continuous-time systems, do not consider stochastic noise.Motivated by this discussion, we contribute a novel setup to achieve distributed state estimation in continuous-time systems, based on dynamic average consensus of the measurements with event-triggered communication between nodes. Contrary to other works, in which consensus is done on the state estimates, each node has a triggering condition based on its local estimate of the global average measurement. Thus, events are triggered according to the evolution of the measured variables. While nodes communicate their local consensus estimates to their neighbors in a discrete manner due to the event-triggering mechanism, our method ensures that each node is still able to estimate the continuous global measurement within an error threshold that can be tuned by the user. To the best of our knowledge, this is the first time event-triggered communication has been used for distributed estimation of continuous time plants and measurements with stochastic noise. We show the effectiveness of our proposal in reducing transmissions compared to an ideal case with continuous communication between nodes, while still achieving a comparable estimation performance. §.§ NotationWe denote the n × n identity matrix as I_n. Moreover, let 1=[1,…,1]^⊤ for an appropriate size. The Euclidean norm is represented by ∙. Let ⊗ denote the Kronecker product.§ PROBLEM STATEMENT Consider an unknown input dynamical system of the formẋ(t) = Ax(t) + Bw(t), t ≥ 0where A∈ℝ^n × n, B∈ℝ^n×n_w and w(t)∈ℝ^n_w is the unknown input, which can also contain disturbances or other non-modeled dynamics. In order to apply optimal filtering techniques such as a Kalman filter to (<ref>), w(t) is usually modelled as an n_w-dimensional Wiener process with {w(s),w(r)} = Wmin(s,r) <cit.>. In this case, (<ref>) is better understood as a Stochastic Differential Equation (SDE), with x(t) following a Gaussian distribution where the mean x_0 and covariance matrix P_0 for the initial condition x(0) are assumed to be known.The state of the plant is monitored by a sensor network composed of N sensors. The communication network topology is modeled by a connected undirected graph 𝒢, with node set 𝒱 = {1,…,N} for the sensors and an edge set ℰ⊆𝒱×𝒱 for the communication links between neighboring nodes. The adjacency matrix𝐀_𝒢=[a_ij]∈{0,1}^N× N has elements a_ij = 1 if (i,j) ∈ℰ and a_ij = 0 otherwise. 
The set of neighbors of node i is denoted by 𝒩_i = {j ∈𝒱 : (i,j) ∈ℰ}. Each node i has access to its own measurements y_i(t) = C_i x(t) + v_i(t), ∀ t ≥ 0, where C_i ∈ℝ^n_y× n is a constant matrix. Moreover, v_i(t) is modeled to follow a Gaussian distribution with zero mean and covariance matrix R_i. Assume that the pair (A, [C_1^⊤,…,C_N^⊤]^⊤) is observable. Communication between sensors is triggered at each node according to a condition based on the node's local information. The goal for each node of the system is to compute an estimate of the full state of the plant, by fusing its own local information and the transmitted information from neighboring nodes.

§ DISTRIBUTED ESTIMATION UNDER EVENT-TRIGGERED COMMUNICATION

We propose a solution to distributed state estimation in sensor networks with communication constraints via dynamic consensus of the measurements. Each node uses its local information and that of its neighbors to compute an average measurement through a consensus algorithm. In order to reduce communication between nodes, we equip each sensor with an event-triggering mechanism to decide when to broadcast its local information to the neighbors. Even though communication between nodes is performed at discrete event instants, the sensors have access to a local continuous measurement and the consensus algorithm is also run in a continuous fashion. Thus, each node computes a continuous average measurement, which is fed to a Kalman-Bucy filter to produce an estimate of the state in each node. Our proposal is summarized in Figure <ref>.

§.§ Event-Triggered Consensus of Measurements

First, define the informational form of the measurement, z_i(t) := C_i^⊤ R_i^-1 y_i(t), with information matrix 𝒵_i := C_i^⊤ R_i^-1 C_i, and the following consensus quantities:

z̅(t) := 1/N ∑_i=1^N z_i(t),    Z̅ := 1/N ∑_i=1^N 𝒵_i

Each node has access to its local values of z_i(t) and 𝒵_i, and computes the estimates ẑ_i(t) and Ẑ_i(t) for the consensus quantities z̅(t) and Z̅ using only local communication. We consider that each node i communicates with its neighbors only at some event instants t ∈{τ_k^i}_k=0^∞. These events are constructed by the following absolute-threshold triggering condition, applied to the local estimate ẑ_i(t):

τ_k+1^i = inf{ t-τ_k^i>τ | ‖ẑ_i(t) - ẑ_i(τ_k^i)‖ ≥ δ_i }

where τ>0 is included as time regularization in order to guarantee a minimum inter-event time and δ_i > 0 is a design parameter. Note that ẑ_i(t) evolves according to the measurements, with the goal of reaching a consensus on the average measurement of the plant. Thus, setting the triggering condition on this variable means that communication can be greatly reduced when the measured signals do not suffer significant changes. When an event is triggered at node i, the node broadcasts its value of ẑ_i(t) to its neighbors. Thus, node i has knowledge of its own measurement z_i(t) and local estimate ẑ_i(t), as well as the last transmitted value from its neighbors, ẑ_j(τ^j_t) ∀ j ∈𝒩_i, where τ^j_t=max{τ_k^j≤ t} is the last triggering instant at node j prior to time t. The proposed consensus algorithm to compute ẑ_i(t) at each node can be expressed as:

ṗ_i(t) = - κ_1 p_i(t) + κ_2 ∑_j ∈𝒩_i ( ẑ_i(t) - ẑ_j(τ^j_t) )
ẑ_i(t) = z_i(t) - p_i(t)

where p_i(t) is an auxiliary local variable and κ_1, κ_2 are design parameters; a brief simulation sketch of this node-level update is included below. In order to compute Z̅ from each node, we propose a scheme of discrete updates at event instants. First, let Ẑ_i(t) be the estimate of Z̅ at node i for any t≥ 0 which satisfies Ẑ_i(0) = 𝒵_i.
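The following minimal Python sketch illustrates the event-triggered consensus law above for scalar measurements; it is illustrative only — the graph, signals, gains, threshold, and step size are placeholder choices and not the paper's setup, and the discrete Ẑ_i updates and the Kalman-Bucy filter discussed next are omitted.

```python
import numpy as np

# Minimal illustration (values are NOT from the paper): 4 nodes on a ring graph,
# scalar measurements, explicit Euler integration of the consensus filter.
N, dt, T = 4, 1e-3, 10.0
Adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]])
deg = Adj.sum(axis=1)
kappa1, kappa2 = 0.5, 20.0          # consensus gains
delta, tau_min = 0.05, 10 * dt      # trigger threshold and minimum inter-event time

def z_local(i, t):
    """Local measurement of node i: a common signal plus a node-dependent bias."""
    return np.sin(0.5 * t) + 0.1 * i

p = np.zeros(N)                                              # auxiliary states p_i(t)
z_hat_last = np.array([z_local(i, 0.0) for i in range(N)])   # last broadcast values (initial event at t=0)
t_last = np.zeros(N)
events = 0

for step in range(int(T / dt)):
    t = step * dt
    z = np.array([z_local(i, t) for i in range(N)])
    z_hat = z - p                                            # \hat z_i(t) = z_i(t) - p_i(t)
    for i in range(N):                                       # absolute-threshold trigger
        if t - t_last[i] > tau_min and abs(z_hat[i] - z_hat_last[i]) >= delta:
            z_hat_last[i], t_last[i] = z_hat[i], t           # broadcast; neighbors hold this value
            events += 1
    # Euler step of \dot p_i = -k1*p_i + k2 * sum_{j in N_i} (\hat z_i(t) - \hat z_j(tau_t^j))
    p += dt * (-kappa1 * p + kappa2 * (deg * z_hat - Adj @ z_hat_last))

z_bar = np.mean([z_local(i, T) for i in range(N)])           # true average at final time
z_hat_final = np.array([z_local(i, T) for i in range(N)]) - p
print("true average z_bar(T):", z_bar)
print("node estimates       :", z_hat_final)  # should stay within a bounded error of z_bar
print("broadcasts per node  :", events / N)
```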
The estimate Ẑ_i(t) is piece-wise constant and changes its value only when node i or its neighbors j∈𝒩_i trigger an event, according to the following rules. When an event t=τ_k^i is triggered in node i due to (<ref>), the node asks its neighbors j ∈𝒩_i for their last updated estimates Ẑ_j(τ_k-^i) := lim_t→(τ_k^i)^-Ẑ_j(t). Then, node i updates its value as Ẑ_i(τ_k^i) = Ẑ_i(τ_k-^i) + ∑_j∈𝒩_iẐ_j(τ_k-^i)/J_i + 1where J_i:=∑_j=1^N a_ij denotes the number of neighbors of node i. Node i broadcasts Ẑ_i(τ_k^i) at t=τ_k^i to its neighbors, which update their estimate as:Ẑ_j(τ_k^i)= Ẑ_i(τ_k^i) ∀ j ∈𝒩_iMoreover, Ẑ_i(t) remains constant between event instants. §.§ Distributed State Estimation The resulting continuous consensus quantities computed at each node can be incorporated to a Kalman-Bucy filter in order to obtain an estimate of the state of the plant. The distributed implementation of the filter, taking into account the consensus quantities in (<ref>), can be expressed as <cit.>:ẋ̂̇_i(t) = Ax̂_i(t) + NP_i(t)ẑ_i(t) - NP_i(t)Ẑ_i(t)x̂_i(t)Ṗ_i(t)= AP_i(t) + P_i(t)A^⊤ + BWB^⊤ - N P_i(t)Ẑ_i(t)P_i(t)where the network size N is assumed to be known either by construction, or obtained through well-known distributed leaderless methods as in <cit.>. Note that, since we are using an event-triggered implementation of the consensus algorithm, an additional error is introduced in the consensus phase with respect to an ideal case with continuous communication between nodes. Feeding the consensus signal directly to the Kalman-Bucy filter is an approximation, which does not take the additional event-triggered error into account. However, as we show in the following section, the error due to events can be made arbitrarily small by tuning the triggering thresholds δ_i, the constants in (<ref>) or by improving the connectivity of the network.§ MAIN RESULTSIn order to show that the consensus filter in (<ref>) works, we use the following assumption.Given κ_1 in (<ref>), there exists a bound L such that ż̅̇(t)-ż_i(t) - κ_1(z̅(t)-z_i(t))≤ L, for all i∈𝒱, ∀ t≥ 0.Assumption <ref> is reasonable in practice and similar assumptions have been made previously in the literature <cit.>.Let 𝒢 be a connected graph with algebraic connectivity λ_2(𝒢), Assumption <ref> hold, the event-triggering rule (<ref>) and the consensus algorithm in (<ref>) with parameters {δ_i}_i=1^N,κ_1,κ_2. Hence, there exist constants T, K>0 such thatz̅(t)-ẑ_i(t)≤ K , ∀ t≥ T,where K can be made arbitrarily small by decreasing max{δ_i}_i=1^N and increasing κ_1, κ_2, κ_1/κ_2. Define u_j(t) = ẑ_j(t) - ẑ_j(τ_t^j) with arbitrary j∈𝒩_i. Then, the consensus algorithm in (<ref>) can be expressed asṗ_i(t) = -κ_1 p_i(t) + κ_2 ∑_j ∈𝒩_i(ẑ_i(t) - ẑ_j(t)) + κ_2∑_j∈𝒩_iu_j(t)Now, let: p(t) = [ p_1(t);⋮; p_N(t) ],ẑ(t) = [ ẑ_1(t);⋮; ẑ_N(t) ]z(t) = [ z_1(t);⋮; z_N(t) ],u(t) = [ u_1(t);⋮; u_N(t) ] used in order to write (<ref>) in matrix form asṗ(t) = -κ_1 p(t) + κ_2( 𝐐_𝒢⊗I_n)ẑ(t) + κ_2 (𝐀_𝒢⊗I_n)u(t)where 𝐐_𝒢 and 𝐀_𝒢 denote the Laplacian and adjacency matrices of the graph 𝒢 <cit.>. Let s̅(t) = (1^⊤/N ⊗I_n)ẑ(t) and s̃(t) = (H⊗I_n)ẑ(t) be the consensus component and consensus error respectively with H=I_N - (1/N)11^⊤. Note that ẑ(t) = (1⊗I_n)s̅(t) + s̃(t). 
Moreover, the dynamics of s̃(t) comply:ṡ̃̇ = (H⊗I_n)ẑ̇ = (H⊗I_n)(ż - ṗ) = (H⊗I_n)(ż + κ_1 (z - ẑ) - κ_2 (Q_𝒢⊗I_n)ẑ- κ_2 (A_𝒢⊗I_n) u) = (H⊗I_n)(ż + κ_1 z) - κ_1(H⊗I_n)ẑ- κ_2(H⊗I_n)(Q_𝒢⊗I_n)ẑ - κ_2 (H⊗I_n)(A_𝒢⊗I_n) u=z̃ - κ_1s̃ - κ_2 (Q_𝒢⊗I_n)s̃ - κ_2 (HA_𝒢⊗I_n)uomitting time dependency for brevity and defining z̃(t) = (H⊗I_n)(ż(t) + κ_1 z(t)) which complies z̃(t)≤ L' for some L'>0 due to Assumption <ref>. Define the Lyapunov function candidate V(s̃(t)) = s̃(t)^⊤s̃(t). Then, we haveV̇(s̃) = 2 s̃^⊤ṡ̃̇= 2 s̃^⊤z̃ - 2κ_1 s̃^⊤s̃ - 2κ_2 s̃^⊤ (Q_𝒢⊗I_n)s̃ - 2 κ_2 s̃^⊤ (HA_𝒢⊗I_n)u≤ 2 s̃z̃ - 2κ_1 s̃^2 - 2 κ_2 λ_2(𝒢) s̃^2 + 2 κ_2 s̃(HA_𝒢⊗I_n)u≤ 2 s̃ ( z̃ - κ_2 λ_2(𝒢) s̃ + κ_2 (HA_𝒢⊗I_n)u)with λ_2(𝒢) being the connectivity of the network, i.e. the minimum nonzero eigenvalue of Q_𝒢. Note that the eigenvalues of Q_𝒢⊗I_n coincide with those of Q_𝒢 with additional multiplicity by n. This yields that V̇(s̃(t))<0 when s̃ > z̃ + κ_2(HA_𝒢⊗I_n) u/κ_2 λ_2(𝒢)Hence, for any initial condition there exists T>0 such that s̃(t) converges to the region in which s̃(t)≤K̃, ∀ t≥ T whereK̃ = L' + κ_2σ_max(HA_𝒢⊗I_n)√(N)max{δ_i}_i=1^N/κ_2 λ_2(𝒢)with σ_max(∙) denoting the maximum singular value and where we used (HA_𝒢⊗I_n) u(t)≤σ_max(HA_𝒢⊗I_n)u(t) with u_i(t)≤δ_i and z̃≤ L'.Finally, it remains to check that s̅(t) converges to a neighborhood of z̅(t). From the dynamics of s̅(t), we have the following:ṡ̅̇ = ( 1^⊤/N ⊗I_n )ẑ̇= (1^⊤/N⊗I_n )(ż + κ_1 (z - ẑ) - κ_2 (Q_𝒢⊗I_n)ẑ - κ_2 (A_𝒢⊗I_n) u)= ż̅̇ + κ_1 (z̅ - s̅) - κ_2 (1^⊤A_𝒢/N ⊗I_n) u Defining the error e(t) = s̅(t) - z̅(t) and disturbance u̅(t) = - κ_2 (1^⊤A_𝒢/N ⊗I_n) u(t), it follows that ė(t) = -κ_1 e(t) + u̅(t), which has the following explicit solution:e(t) = e(0)e^-κ_1 t + e^-κ_1 t∫_0^te^κ_1 τu̅(τ) τ̣Hence, for any t≥ T we have thate(t)≤ e^-κ_1 Te(0) + e^-κ_1 T∫_0^T e^κ_1 ττ̣(sup_τ≥ 0u̅(τ)) ≤ e^-κ_1 Te(0) + κ_2/κ_1σ_max(1^⊤A_𝒢/N ⊗I_n)√(N)max{δ}_i=1^N =: K̅Hence, for t≥ T the consensus error s̃(t) is bounded by K̃ which decreases with respect to κ_1,κ_2,λ_2(𝒢),max{δ}_i=1^N as pointed out in the theorem statement. Similarly, the consensus component error e(t)=s̅(t) - z̅(t) is bounded by K̅ which decreases in a similar fashion. Hence, z̅(t)-ẑ_i(t), ∀ t≥ T is bounded by a constant K which takes into account the effect of K̃, K̅.Let 𝒢 be a connected graph and consider the event-triggering mechanism in (<ref>) along with the consensus algorithm in (<ref>) and (<ref>). Then, Ẑ_i(t) asymptotically converges to Z̅=∑_i=1^N 𝒵_i/N as events occur. Consider the global sequence of events in all nodes as the overlapping sequence {τ_k}_k=1^∞ = ⋃_i=1^N{τ_k^i}_k=1^∞. Without loss of generality, we assume that any τ_k corresponds to the event from a single node. Note that (<ref>) shows that Ẑ_i(τ_k+1) is computed via a convex combination of Ẑ_i(τ_k) and Ẑ_j(τ_k) ∀ j ∈𝒩_i with equal weights λ = 1/(J_i +1), complying λ + ∑_j∈𝒩_iλ = 1. Choose an arbitrary component s_i(τ_k) of the matrix Ẑ_i(τ_k) and let s(τ_k) = [s_1(τ_k), …, s_N(τ_k)]^⊤. Then, define the Lyapunov function candidate V(s) = (s) - (s) and note that V(s) = 0 only if (s) = (s), i.e. when s_i = s_j,∀ i,j ∈𝒱.To show convergence of the elements of s(τ_k), the Lyapunov function must be non-increasing. We haveV(s(τ_k+1)) - V(s(τ_k)) = (s(τ_k+1)) - (s(τ_k)) - ((s(τ_k+1)) - (s(τ_k)))Note that, since the elements of s(τ_k+1) are computed through a convex combination of elements of s(τ_k), they are contained in their convex hull. 
Hence, it follows that(s(τ_k)) ≤(s(τ_k+1)) ≤(s(τ_k+1)) ≤(s(τ_k))which shows that V(s(τ_k+1)) - V(s(τ_k))≤ 0 holds and the component s_i(τ_k) of the matrix Ẑ_i(τ_k) for all nodes asymptotically converge to the same value i.e., lim_k→∞s_i(τ_k)=α, ∀ i∈𝒱for some α≥ 0.Moreover, note that:∑_ℓ=1^N s_ℓ(τ_k+1) = ∑_ℓ≠ i,j∈𝒩_is_ℓ(τ_k+1) + s_i(τ_k+1) + ∑_j∈𝒩_is_j(τ_k+1)=∑_ℓ≠ i,j∈𝒩_is_ℓ(τ_k) + (J_i+1)s_i(τ_k+1) =∑_ℓ≠ i,j∈𝒩_is_ℓ(τ_k) + (s_i(τ_k) + ∑_ℓ∈𝒩_is_ℓ(τ_k))=∑_ℓ=1^N s_ℓ(τ_k)where the updates_i(τ_k+1)=s_j(τ_k+1)=s_i(τ_k)+∑_j∈𝒩_is_j(τ_k)/J_i+1 from (<ref>) and (<ref>) was used. Hence, ∑_i=1^N s_i(τ_k) = ∑_i=1^N s_i(0) remains invariant ∀ k≥ 0. Moreover, the consensus result implies lim_k→∞∑_i=1^N s_i(τ_k) = α N. Therefore, it must be the case that α = ∑_i=1^N s_i(0)/N. Thus, all nodes converge to the average of the initial conditions. This reasoning can be extended to all elements of the matrix Ẑ_i(τ_k), showing that all nodes reach the global average value Z̅ = ∑_i=1^N Ẑ_i(0)/N=∑_i=1^N 𝒵_i/N. § SIMULATION EXPERIMENTS Consider a 2-D object tracking problem. Let the state vector x(t) = [x(t), y(t), v_x(t), v_y(t)] of the object where (x(t), y(t)) represent Cartesian coordinates and the corresponding velocity components for both axis are represented by (v_x(t), v_y(t)). The object moves in the following trajectory, which is unknown to the observers:x(t) = [(0.5t); 3.5(0.8t); 0.5(0.5t); 2.8(0.8t); ]For this experiment to be realistic, the trajectory in (<ref>) is not a stochastic process. However, in absence of knowledge of the unknown input, nodes model (<ref>) conservatively by the SDE in (<ref>), withA = [ 0 0 1 0; 0 0 0 1; 0 0 0 0; 0 0 0 0; ],B = [ 0 0; 0 0; 1 0; 0 1; ]and W = (1, 1). At t=0, the state is modeled by the sensors with a Gaussian distribution with mean x_0 = [0,0,0.5,2.8]^⊤ and covariance P_0 = I_n. The system is observed by a sensor network consisting of N=5 nodes, as shown in Figure <ref>. Each of them can access a local measurement y_i(t) = C_ix(t) + v_i(t), withC_1 = C_5 =[ 1 0 0 0; ],C_3 = [ 0 1 0 0; ]C_2 = C_4 =[ 1 0 0 0; 0 1 0 0 ]and noise covariances R_1 = 0.02,R_3 = 0.01,R_5 = 0.015, R_2 = R_4 = (0.01, 0.01). For the event-triggered simulations, we have set the same triggering threshold for all nodes. The constants in the consensus algorithm (<ref>) have been set to κ_1 = 0.5,κ_2 = 20. The simulation time has been set to T_f = 10 with a step of h=1· 10^-4. Figure <ref> shows the estimation results in the nodes for the ideal continuous communication case, i.e. when each node i has ẑ_j(t),Ẑ_j(t) available at any time. This simulation has been computed as a baseline to compare to the event-triggered case.Figures <ref> and <ref> show the results for the event-triggered setup for δ_i = 25 and δ_i = 50. It can be observed that the estimates are similar to the continuous communication case, increasing the estimation error with the triggering threshold δ_i. Lower values of δ_i provide smaller errors at the cost of an increase in frequency of communication between nodes, as is generally expected in event-triggered systems <cit.>. This trade-off is shown in Figure <ref>, which depicts the estimation error against the frequency of communication. Note that we are able to greatly reduce the frequency of communication between nodes without a significant increase in estimation error. To obtain these results, we have run simulations with the same plant as described above and different values of δ_i in a range of [0, 80]. 
Due to the stochastic nature of the problem, S=20 simulations have been executed for every δ_i. The average estimation error and average frequency of communication of the nodes for each simulation have been computed as ℰ_s = 1/N T_f∑_i=1^N∫_0^T_fx̂_i(t) - x(t)ṭ,ℱ_s = ∑_i=1^N e_i/NT_fwhere T_f is the total time for the experiment and e_i represents the number of events triggered in node i. Then, the values are averaged to obtain ℰ and ℱ for each δ_i:ℰ = ∑_s=1^S ℰ_s/S ,ℱ = ∑_s=1^S ℱ_s/SMoreover, the frequency of communication ℱ is shown normalized in Figure <ref>, so that 1 means continuous communication between nodes (an event is triggered at every simulation step) and 0 means no communication.§ CONCLUSIONSWe have presented an approach to distributed state estimation over sensor networks for continuous-time systems, via dynamic consensus of measurements under event-triggered communication. Our method uses discrete communication between nodes, due to the triggering mechanism, but still obtains a continuous estimate.We have shown that applying an event-triggering mechanism to decide when each node broadcasts its local information allows to reduce communication between nodes without significantly increasing the estimation error with respect to the ideal case with continuous communication. Moreover, we have shown that the consensus error is bounded, and that the error tolerance can be tuned according to the desired performance of communication rate and estimation error. | http://arxiv.org/abs/2310.18150v1 | {
"authors": [
"Irene Perez-Salesa",
"Rodrigo Aldana-Lopez",
"Carlos Sagues"
],
"categories": [
"eess.SY",
"cs.SY"
],
"primary_category": "eess.SY",
"published": "20231027135622",
"title": "Event-Triggered Consensus for Continuous-Time Distributed Estimation"
} |
[email protected] Institute for Theoretical Physics, University of Regensburg, 93040 Regensburg, Germany Department of Materials, Imperial College London, South Kensington Campus, London SW7 2AZ, United Kingdom Department of Physics and Astronomy, University of Delaware, Newark, DE 19716, USA Institute for Theoretical Physics, University of Regensburg, 93040 Regensburg, GermanyProximity-induced phenomena in van der Waals heterostructures have emerged as a platform to tailor the electronic, spin, optical, and topological properties in two dimensional materials. A crucial degree of freedom, which has only recently been recognized, is the relative twist angle between the monolayers.While partial results exist in the literature, we present here a comprehensive first-principles based investigation of the twist-angle dependent proximity spin-orbit coupling (SOC) in graphene in contact with, or encapsulated by, monolayer transition metal dichalcogenides (TMDCs) MoS_2, MoSe_2, WS_2, and WSe_2. Crucially, our commensurate supercells comprise monolayers with strains of less than 2.5%, minimizing band-offset artifacts. We confirm earlier DFT results that for Mo-based TMDCs the proximity valley-Zeeman SOC exhibits a maximumat around 15–20^∘, and vanishes at 30^∘ for symmetry reasons. Although such a maximum was also predicted by tight-binding simulationsfor W-based TMDCs, we find an almost linear decrease of proximity valley-Zeeman SOC in graphene/WSe_2 and graphene/WS_2 when twisting from 0^∘ to 30^∘. We also refine previous DFT simulations and show that the induced Rashba SOCis rather insensitive to twisting, while acquiring a nonzero Rashba phase angle φ which measures the deviation of the electron spin from in-plane transverse direction to the momentum, for twist angles different from 0^∘ and 30^∘. The Rashba phase angle varϕ varies from -20^∘ to 40^∘, with the largest variation (40^∘) found for MoS_2 at a twist angle of 20^∘.This finding contradicts earlier tight-binding predictions that the Rashba angle can be 90^∘ in the studied systems.In addition, we study the influence of a transverse electric field, vertical and lateral shifts, and TMDC encapsulation on the proximity SOC for selected twist angles. Within our investigated electric field limits of ± 2 V/nm, mainly the Rashba SOC can be tuned by about 50%. The interlayer distance provides a giant tunability, since the proximity-induced SOC can be increased by a factor of 2–3, when reducing the distance by only about 10%. When encapsulating graphene between two TMDCs, both twist angles are important to control the interference of the individual proximity-induced SOCs, allowing to precisely tailor the proximity-induced valley-Zeeman SOC in graphene, while the Rashba SOC becomes suppressed. Finally, based on our effective Hamiltonians with fitted parameters to low-energy ab initio band structures, we calculate experimentally measurable quantities such as spin lifetime anisotropy and charge-to-spin conversion efficiencies.The spin lifetime anisotropy—being the ratio between out-of-plane and in-plane spin lifetimes—can become giant (up to 100), depending on the TMDC, twist angle, transverse electric field, and the interlayer distance.The charge-to-spin conversion can be divided into three components which are due to spin-Hall and Rashba-Edelstein effects with non-equilibrium spin-density polarizations that are perpendicular and parallel to the applied charge current. 
All conversion efficiencies are highly tunable by the twist angle and the Fermi level. Twist- and gate-tunable proximity spin-orbit coupling, spin relaxation anisotropy, and charge-to-spin conversion in heterostructures of graphene and transition-metal dichalcogenides Jaroslav Fabian====================================================================================================================================================================================== § INTRODUCTIONVan der Waals (vdW) heterostructures based on two-dimensional (2D) materials are emerging as an important platform for investigating novel solid state phenomena <cit.>. While 2D materials exhibit extraordinary physical properties on the atomic scale, we can combine different monolayers to form artificial vdW crystals with customized electronic, optical, magnetic, or topological properties <cit.>. The prime examples are heterostructures based on monolayer graphene, where proximity interactions, such as spin-orbit coupling (SOC) <cit.>, exchange coupling <cit.>, and superconductivity <cit.> can be induced via neighboring layers. Importantly, the proximity-induced interactions can be controlled by gating, doping, straining, lateral stacking, and twisting. Particularly interesting for spintronics <cit.> are graphene/transition-metal dichalcogenide (TMDC) bilayers <cit.>. First-principles calculations <cit.> and experiments <cit.> on graphene/TMDC structures have already demonstrated that proximity SOC can be tuned by the application of a transverse electric field. Recent DFT simulations show a potential tunability via controlled alloying of the TMDC <cit.>; this should be experimentally realizable given the impressive progress in TMDC growth techniques <cit.>. Since proximity effects are short-ranged and originate from the wavefunction overlap of different layers, also the vdW distance plays an important role. Recent experiments have shown that external pressure, which reduces the interlayer distance, can significantly boost proximity interactions <cit.>. The proximity coupling of graphene with TMDCs has already led to fascinating experimental findings, such as optical spin injection <cit.>, gate tunable charge-to-spin conversion <cit.>, giant spin relaxation anisotropy <cit.>, and field-effect spin transistor operation <cit.>. Recently, the relative twist angle between the monolayers has emerged as another important control knob. In general, vdW heterostructures composed of twisted monolayers <cit.> promise great tunability of electronic, optical, and magnetic properties. For example, magic-angle twisted bilayer graphene exhibits magnetism and superconductivity due to strong correlations <cit.>. In twisted TMDCs, a strong trapping potential for excitons can arise due to the emerging moiré pattern <cit.>. In graphene/Cr_2Ge_2Te_6 bilayers, twisting allows to reverse the proximity-induced exchange splitting of the Dirac bands <cit.>. Finally, gating and twisting are two efficient control knobs to tune the valley splitting in TMDC/CrI_3 heterostructures <cit.>. 
All the above demonstrates that the twist angle has a highly non-trivial influence on physical observables. There have already been theoretical <cit.> and experimental <cit.> studies investigating the impact of twisting on the electronic properties and proximity-induced SOC in graphene/TMDC heterostructures <cit.>. Tight-binding studies have predicted that the relative rotation of the monolayers can greatly enhance the proximity SOC, with an expected maximum at around 15–20^∘, for graphene in contact with MoS_2, MoSe_2, WS_2, and WSe_2 <cit.>. However, tight-binding calculations have to rely on some input parameters. For example, the position of the Dirac point within the TMDC band gap seems rather crucial for predicting twist-angle dependent proximity SOC <cit.>. In a systematic DFT investigation, Naimer et al. <cit.> showed that strain (the study used up to 10% of strain in graphene) in twisted graphene/TMDC supercells affects the proximity effects due to strain-induced band offsets, prompting the application of a transverse displacement field to remove these artifacts. This ad hoc procedure has produced qualitatively similar results as the aforementioned tight-binding studies for Mo-based TMDCs, but has found that the valley-Zeeman proximity coupling for W-based TMDCs decreases with increasing twist angle from 0^∘ to 30^∘, not exhibiting a global maximum. This DFT study <cit.> also found specific values for the Rashba phase angles, predicted on symmetry grounds to be different from zero (the reference angle at which the in-plane spin is perpendicular to the momentum) away from 0^∘ and 30^∘ <cit.>. Also Pezo et al. <cit.> considered large-scale supercells of graphene on strained (up to 3.5%) MoTe_2 and WSe_2, employing twist angles around 0^∘, 15^∘, and 30^∘, predicting strong variations of the proximity SOC, although the limited set of twist angles was insufficient to uncover systematic trends. Finally, Lee et al. <cit.> performed DFT investigations of twisted graphene/WSe_2 heterostructures with small strain (less than 2%), finding a nearly constant valley-Zeeman SOC up to about 18^∘, followed by a linear decrease to 30^∘; the Rashba SOC was found to be nearly constant for all the investigated twist angles. There is already evidence from weak antilocalization experiments <cit.> on twisted graphene/WSe_2 structures showing small (∼0.05 meV) valley-Zeeman and finite (∼ 0.5 meV) Rashba SOC at 30^∘, in agreement with theory. In contrast, samples with 15^∘ twist angle show larger SOC values, with Rashba ∼1.5 meV and valley-Zeeman ∼0.4 meV. In this paper, we aim to provide a comprehensive DFT-based picture of proximity SOC in twisted graphene/TMDC heterostructures by considering only small-strain supercells (less than 2.5% of strain in graphene and zero strain in TMDCs) for all four semiconducting TMDC monolayers MoS_2, MoSe_2, WS_2, and WSe_2. In addition to providing systematic dependencies of the valley-Zeeman and Rashba SOC on the twist angles, we also address the effects of a transverse electric field, encapsulation, and lateral and vertical shifts. We confirm earlier DFT studies that upon twisting from 0^∘ to 30^∘, the induced valley-Zeeman SOC decreases almost linearly to zero for W-based TMDCs, while for Mo-based TMDCs it exhibits a maximum at around 15–20^∘. The induced Rashba SOC stays rather constant upon twisting, and acquires a phase angle φ≠ 0, due to symmetry breaking, for twist angles different from 0^∘ and 30^∘. For WSe_2, our results also agree with the findings of Ref. 
<cit.>, but we additionally cover the twist angle behavior for graphene on MoS_2, MoSe_2, and WS_2. Within our investigated electric field limits of ± 2 V/nm, mainly the Rashba SOC can be tuned by about 50%. The interlayer distance, correlating to external pressure in experiments <cit.>, provides a giant tunability, since the proximity-induced SOC can be increased by a factor of 2–3, when reducing the distance by only about 10%. When encapsulating graphene between two TMDCs, both twist angles are important to control the interference of the individual proximity-induced SOCs, allowing to precisely tailor the valley-Zeeman SOC, while the Rashba SOC becomes suppressed. More precisely, when the twist angles of the encapsulating TMDC layers are equal, say both are 0^∘, the induced valley-Zeeman SOC is roughly doubled, since the layer-resolved proximity effect is additive on the graphene sublattices. In contrast, when the twist angles differ by 60^∘, the sublattices are effectively exchanged and the effective valley-Zeeman SOC becomes suppressed. The Rashba SOC is always suppressed due to the nearly restored z-mirror symmetry in encapsulated structures. Finally, combining the first-principles calculations, low energy model Hamiltonian, fitted parameters, and real-space transport calculations, we make specific predictions for experimentally measurable quantities such as spin lifetime anisotropy and charge-to-spin conversion efficiency. We find that the spin lifetime anisotropy—the ratio between out-of-plane and in-plane spin lifetimes—can become giant, up to 100, especially in graphene on MoS_2 and WS_2 as the valley-Zeeman dominates over the Rashba SOC, pinning the spin to the out-of-plane direction. Our calculated anisotropies are in agreement with experiments <cit.> and further tunability is provided by twisting, an external electric field, and the interlayer distance. The real-space transport calculations reveal that twisted heterostructures provide a tunable charge-to-spin conversion via spin-Hall and Rashba-Edelstein effects. With gating and twisting, it is possible to tailor not only the magnitude but also the direction of the non-equilibrium spin-density, making graphene/TMDC heterostructures a versatile platform for creating and detecting spin polarized currents without the need of conventional ferromagnets. The manuscript is organized as follows. In Sec. <ref>, we first address the structural setup and summarize the calculation details for obtaining the electronic structures of the twisted graphene/TMDC bilayers. In Sec. <ref>, we introduce the model Hamiltonian that captures the proximitized Dirac bands, which is used to fit the first-principles results. In Sec. <ref>, we show and discuss exemplary calculated electronic structures, along with the model Hamiltonian fits. We also address the influence of the twist-angle, transverse electric field, and the interlayer distance on the proximity SOC. In Sec. <ref>, we briefly discuss TMDC-encapsulated graphene structures, where proximity SOC can be enhanced or suppressed due to interference of the encapsulating layers. In Sec. <ref>, we address some open questions and discuss the origin of our findings in more detail. In Sec. <ref> and Sec. <ref> we analyze experimentally relevant quantities, which are the twist-angle and gate tunability of the spin-lifetime anisotropy and charge-to-spin conversion efficiencies. Finally, in Sec. <ref> we conclude the manuscript. 
§ GEOMETRY SETUP AND COMPUTATIONAL DETAILSThe graphene/TMDC heterostructures, for which we consider several twist angles between the two monolayers, are set up with the atomic simulation environment (ASE) <cit.> and the CellMatch code <cit.>, implementing the coincidence lattice method <cit.>. Within this method, a graphene/TMDC heterostructure contains an (n,m) graphene supercell and an (n',m') TMDC supercell, where the integers n, m, n', and m' define the corresponding supercell lattice vectors. Monolayers of graphene and TMDCs are based on hexagonal unit cells, with experimental lattice constants <cit.> of a = 2.46 Å (graphene), a = 3.288 Å (MoSe_2), a = 3.282 Å (WSe_2), a = 3.15 Å (MoS_2), and a = 3.153 Å (WS_2), which additionally need to be strained in the twisted heterostructures, in order to form commensurate supercells for periodic density functional theory (DFT) calculations. Since MoSe_2 and WSe_2 have nearly the same lattice constant, we set them to 3.28 Å in the following. We do the same for MoS_2 and WS_2, where we use 3.15 Å. In Table S1 and Table S2 we summarize the main structural information for the twist angles we consider. In total, we investigate 12 different angles between 0^∘ and 30^∘, for each graphene/TMDC heterostructure. These angles in particular are suitable for DFT calculations, since the strain applied to the monolayers is below 2.5%. We already know that biaxial strain strongly influences the band gap of monolayer TMDCs <cit.> and therefore we leave them nearly unstrained in the heterostructures. The residual strain is applied to the graphene lattice, which mainly influences the Fermi velocity of Dirac states <cit.>. In addition, the number of atoms is kept below 250. Otherwise, also other angles could be investigated, but beyond reasonable strain limits and above a computationally feasible number of atoms in the structure. The electronic structure calculations and structural relaxations of the graphene/TMDC heterostructures are performed by DFT <cit.> with Quantum ESPRESSO <cit.>. Self-consistent calculations are carried out with a k-point sampling of n_k× n_k× 1. The number n_k is listed in Table S1 and Table S2 for all twist angles and depends on the number of atoms in the heterostructure. In addition, n_k is limited by our computational power. Nevertheless, for large supercells the heterostructure Brillouin zone is small and only a few k-points are necessary to get converged results. We use an energy cutoff for the charge density of 560 Ry and a kinetic energy cutoff for the wavefunctions of 70 Ry, employing fully relativistic pseudopotentials with the projector augmented wave method <cit.> and the Perdew-Burke-Ernzerhof exchange correlation functional <cit.>. Spin-orbit coupling (SOC) is included in the calculations. For the relaxation of the heterostructures, we add DFT-D2 vdW corrections <cit.> and use a quasi-Newton algorithm based on a trust-radius procedure. Dipole corrections <cit.> are also included to get correct band offsets and internal electric fields. In order to simulate quasi-2D systems, we add a vacuum of about 20 Å to avoid interactions between periodic images in our slab geometry. To get proper interlayer distances and to capture possible moiré reconstructions, we allow all atoms to move freely within the heterostructure geometry during relaxation. Relaxation is performed until every component of each force is reduced below 5×10^-4 [Ry/a_0], where a_0 is the Bohr radius. 
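Returning to the supercell construction described at the beginning of this section, the following short Python sketch illustrates the coincidence-lattice bookkeeping for hexagonal monolayers: it returns the relative twist angle and the biaxial strain that graphene must absorb so that an (n,m) graphene supercell coincides with an unstrained (n',m') TMDC supercell. The index pair shown is only an illustrative example (the actual supercells are those listed in Tables S1 and S2), and the lattice constants are the rounded values quoted above.

```python
import numpy as np

def supercell_vec(n, m, a):
    # supercell lattice vector n*a1 + m*a2 of a hexagonal lattice with constant a,
    # using a1 = a*(1, 0) and a2 = a*(1/2, sqrt(3)/2); returns its length and angle
    v = a * np.array([n + 0.5 * m, 0.5 * np.sqrt(3) * m])
    return np.linalg.norm(v), np.degrees(np.arctan2(v[1], v[0]))

def coincidence(n, m, n2, m2, a_gr=2.46, a_tmdc=3.28):
    # twist angle (deg) and biaxial strain (%) that graphene has to absorb so that
    # its (n,m) supercell matches the unstrained (n2,m2) TMDC supercell
    L_gr, phi_gr = supercell_vec(n, m, a_gr)
    L_tmdc, phi_tmdc = supercell_vec(n2, m2, a_tmdc)
    twist = phi_gr - phi_tmdc
    strain = 100.0 * (L_tmdc - L_gr) / L_gr
    return twist, strain

# 4x4 graphene on 3x3 MoSe2/WSe2: the untwisted (0 degree) cell, strain-free by construction
print(coincidence(4, 0, 3, 0))
```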
After relaxation of the graphene/TMDC heterostructures, we calculate the mean interlayer distances, d_int, and the standard deviations, Δ z_grp, from the z coordinates of the C atoms of graphene. The standard deviations represent the amount of rippling of graphene. The results are summarized in Table S1 and Table S2. The interlayer distances are nearly independent of the twist angle and range from about 3.3 to 3.4 Å. The graphene itself stays nearly flat, as the rippling stays below about 3 pm. In Fig. <ref>, we show the general structural setup of our graphene/TMDC heterostructures, where the graphene resides above the TMDC. When we apply the transverse electric field (modeled by a zigzag potential), a positive field points along the z direction from the TMDC towards graphene. § MODEL HAMILTONIANFrom our first-principles calculations we obtain the low energy Dirac band structure of the spin-orbit proximitized graphene. We then extract realistic parameters for an effective Hamiltonian describing graphene's low energy Dirac bands. The Hamiltonian together with the fitted parameters provides an effective description of the low-energy physics, which is relevant for studying transport <cit.>, topology <cit.>, or spin relaxation <cit.>. Due to the short-range nature of the proximity effects in van der Waals heterostructures, the effective model parameters are transferable and can be employed for bilayer and trilayer graphene heterostructures <cit.>. The band structure of spin-orbit proximitized graphene can be modeled by symmetry-derived Hamiltonians <cit.>. For graphene in heterostructures with C_3 symmetry, the effective low energy Hamiltonian is ℋ = ℋ_0 + ℋ_Δ + ℋ_I + ℋ_R + E_D, with ℋ_0 = ħ v_F(τ k_x σ_x - k_y σ_y)⊗ s_0, ℋ_Δ = Δ σ_z ⊗ s_0, ℋ_I = τ(λ_I^A σ_+ + λ_I^B σ_-)⊗ s_z, and ℋ_R = -λ_R e^-iφ s_z/2(τ σ_x ⊗ s_y + σ_y ⊗ s_x) e^iφ s_z/2. Here v_F is the Fermi velocity and the in-plane wave vector components k_x and k_y are measured from ±K, corresponding to the valley index τ = ± 1. The Pauli spin matrices are s_i, acting on spin space (↑, ↓), and σ_i are pseudospin matrices, acting on sublattice space (C_A, C_B), with i = { 0,x,y,z } and σ_± = 1/2(σ_z ±σ_0). The staggered potential gap is Δ, arising from sublattice asymmetry. The parameters λ_I^A and λ_I^B describe the sublattice-resolved intrinsic SOC and λ_R stands for the Rashba SOC. In addition, a phase angle φ can be present in the usual Rashba term, which leads to a rotation of the spin-orbit field around the z-axis <cit.>. When the intrinsic SOC parameters satisfy λ_I^A = -λ_I^B, it is also called valley-Zeeman or Ising type SOC, while in the case of λ_I^A = λ_I^B, it is called Kane-Mele type SOC <cit.>. Charge transfer between the monolayers in the DFT calculation is captured by the Dirac point energy, E_D, which adjusts the Dirac point with respect to the Fermi level. The basis states are |Ψ_A, ↑⟩, |Ψ_A, ↓⟩, |Ψ_B, ↑⟩, and |Ψ_B, ↓⟩, resulting in four eigenvalues ε_1/2^CB/VB. For each considered heterostructure, we calculate the proximitized low energy Dirac bands in the vicinity of the K point. To extract the fit parameters from the first-principles data, we employ a least-squares routine <cit.>, taking into account band energies, splittings, and spin expectation values. § FIRST-PRINCIPLES RESULTS AND DISCUSSION §.§ Twist angle dependence of proximity SOCIn Fig. <ref>(a), we show the calculated global band structure of the graphene/MoSe_2 heterostructure for a twist angle of 0^∘, as an exemplary case. 
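As a brief aside on how the model of the previous section enters the fitting, the following minimal numpy sketch assembles the 4×4 Hamiltonian ℋ = ℋ_0 + ℋ_Δ + ℋ_I + ℋ_R + E_D in the basis |Ψ_A,↑⟩, |Ψ_A,↓⟩, |Ψ_B,↑⟩, |Ψ_B,↓⟩ and diagonalizes it close to the K point; in practice, the least-squares routine varies the parameters until band energies, splittings, and spin expectation values match the DFT data. The numerical values used below are placeholders of a realistic order of magnitude, not the fitted parameters listed in the parameter table.

```python
import numpy as np

# Pauli matrices: sigma acts on sublattice (A, B), s acts on spin (up, down)
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H_dirac(kx, ky, tau=1, hvF=6.6e3, Delta=0.5, lIA=1.2, lIB=-1.2, lR=0.5,
            phi=0.3, ED=0.0):
    """Proximitized Dirac Hamiltonian (energies in meV, k in 1/Angstrom) in the basis
    |A up>, |A down>, |B up>, |B down>; parameter values are placeholders only."""
    sp, sm = 0.5 * (sz + s0), 0.5 * (sz - s0)            # sigma_+ and sigma_-
    H0 = hvF * (tau * kx * np.kron(sx, s0) - ky * np.kron(sy, s0))
    HD = Delta * np.kron(sz, s0)
    HI = tau * (lIA * np.kron(sp, sz) + lIB * np.kron(sm, sz))
    U = np.kron(s0, np.diag([np.exp(-0.5j * phi), np.exp(0.5j * phi)]))  # e^{-i phi s_z/2}
    HR = -lR * U @ (tau * np.kron(sx, sy) + np.kron(sy, sx)) @ U.conj().T
    return H0 + HD + HI + HR + ED * np.eye(4)

# four proximitized bands and their s_z expectation values slightly away from K
E, V = np.linalg.eigh(H_dirac(kx=1e-3, ky=0.0))
sz_expect = [float(np.real(V[:, i].conj() @ np.kron(s0, sz) @ V[:, i])) for i in range(4)]
print(np.round(E, 3), np.round(sz_expect, 3))
```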
The Dirac states of graphene are nicely preserved within the band gap of the TMDC, and are located about 0.61 eV (-0.85 eV) above (below) the relevant K point valence (conduction) band edge of the TMDC, see Table S3. Actually in Fig. <ref>(a), the conduction band edge of the TMDC is located close to the M point. However, we note that we use a lattice constant of 3.28 Å for MoSe_2, and not the exact experimental one of 3.288 Å. Already at such small tensile strain, MoSe_2 becomes an indirect band gap semiconductor, with the conduction band edge at the Q side valley <cit.>. In addition, the relevant K points of TMDC band edges are backfolded to the Γ point due to the 3× 3 MoSe_2 supercell we use for the 0^∘ case. In Figs. <ref>(b)-(g), we summarize the low-energy band properties of the graphene Dirac states near the Fermi level. Due to proximity-induced SOC, the Dirac bands split into four states, ε_1/2^CB/VB. The magnitude of the splitting is on the order of 0.7 meV. By fitting the low-energy Dirac dispersion to our model Hamiltonian, we find that proximity-induced intrinsic SOCs are of valley-Zeeman type, λ_I^A≈ -λ_I^B≈ 0.23 meV. In addition, a Rashba SOC is present, λ_R≈ 0.25 meV, being of the same magnitude. The obtained SOC parameters are giant compared to the intrinsic SOC of pristine graphene, being about 20–40 μeV <cit.>. In addition, Dirac states display an orbital gap, which results from the potential asymmetry of the sublattices (connected to the rippling of graphene), characterized by the parameter Δ. The Dirac states, band splittings, and spin expectation values are perfectly reproduced by our model Hamiltonian employing the parameters in Table <ref>. The results for 0^∘ are in good agreement with earlier calculations of proximity SOC in graphene/TMDC heterostructures <cit.>. Before we show and discuss the twist-angle dependence of proximity SOC, we first want to address how strain affects the dispersion. Since the lattice constant of the TMDC is fixed for all twist angles, the main changes are in the graphene Dirac states and band offsets. From literature, we know that the Dirac states of graphene are quite robust against biaxial strain <cit.>, apart from a renormalization of the Fermi velocity. From recent studies <cit.>, we already know that band offsets are tunable by strain. In Fig. <ref>, we plot the position of the Dirac point with respect to the TMDC valence (conduction) band edge, E_D-E_V (E_D-E_C), as defined in Fig. <ref>(a), as a function of the strain applied to graphene. The different twist angles provide different strain, and the plotted information is summarized in Tables S1, S2, and S3. We find a linear dependence of the band offsets with respect to the graphene strain as in a previous study <cit.>. In experiment, one can expect that both graphene and the TMDCs are nearly unstrained due to weak vdW bonding and only the zero strain band offsets are relevant. For our exemplary case of MoSe_2, we find the Dirac cone roughly in the middle of the TMDC band gap. From Fig. <ref> we can extract the zero strain band offsets and the rates γ at which the band offsets change via straining, by fitting the data with a linear dependence. The extrapolated values are summarized in Table <ref>. We find that for lighter (heavier) elements in the TMDC, the Dirac cone is located closer to the conduction (valence) band edge, as is the case for MoS_2 (WSe_2). 
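The extraction of the zero-strain band offsets and the slopes γ is a simple linear least-squares fit; a schematic example is given below. The strain and offset values used here are invented solely to illustrate the procedure and only mimic the qualitative trend; the actual data points are those collected in Tables S1–S3.

```python
import numpy as np

# strain (%) applied to graphene in the different supercells and the corresponding
# band offset E_D - E_V (meV); these numbers are invented for illustration only --
# the actual values are those of Tables S1-S3
strain = np.array([-2.1, -1.4, -0.6, 0.3, 1.1, 1.9, 2.4])
offset = np.array([930.0, 880.0, 815.0, 740.0, 680.0, 615.0, 570.0])

gamma, offset0 = np.polyfit(strain, offset, deg=1)   # slope (meV/%) and zero-strain intercept
print(f"zero-strain offset E_D - E_V = {offset0:.0f} meV, gamma = {gamma:.1f} meV/%")
```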
Especially the zero strain band offsets should be also useful for tight-binding models of graphene/TMDC bilayers <cit.>, where the position of the Dirac point within the TMDC band gap enters as an unknown parameter.In addition, despite the strain in graphene is kept below ±2.5% in our heterostructure calculations, we observe variations in the band offsets of several hundreds of meV. The reason is that the rates γ≈ -80 meV/% are quite large, but similar for all TMDCs, and band offsets can be massively tuned by straining.In particular, tensile (compressive) strain will shift the Dirac states closer to the TMDC valence (conduction) band edge.Our calculated zero strain band offsets show that the Dirac cone is clearly located within the TMDC band gap, which is in agreement to experiments <cit.>. The tunability of the band offset with straining graphene is expected, since the individual workfunctions of the layers determine the band alignment, and the workfunction of graphene shows a significant strain dependence within our strain limits <cit.>. In particular, the workfunction of graphene increases (decreases) with positive (negative) strain <cit.>, shifting the Dirac point towards more negative (positive) energy, which is consistent with our observations in Fig. <ref>.In contrast to Ref. <cit.>, our heterostructures have smaller strain so we do not compensate the strain-related band offsets with an electric field. Also, we perform structural relaxation at eachtwist angle which leads to rippling and twist-dependent interlayer distance. As we show, both effects influence the proximity induced SOC, so that electric-field compensation would not necessarily make the results more representative. We demonstrate this by comparing 0^∘ graphene/MoSe_2 and graphene/WSe_2 heterostructures with different strains and setup conditions [1]. We believe that the field correction as in Ref. <cit.> makes sense to be applied only in the scenario of a flat graphene layer and fixed interlayer distance, to extract the bare twist-angle dependence while disregarding other effects. Otherwise all these effects: band offset, rippling, and interlayer distance, whichare in some way connected to strain and which affect proximity SOC, would be difficult to disentangle. Now we turn to the most important result, which is the twist-angle dependence of proximity-induced SOC. In Fig. <ref>, we show the calculated low energy Dirac states for the graphene/MoSe_2 heterostructure for three different twist angles, 0^∘, 19.1^∘, and 30^∘, as exemplary cases. As already mentioned, the Dirac states are split due to proximity SOC. In the case of 0^∘, the splitting is moderate, caused by nearly equal valley-Zeeman and Rashba SOC (λ_I^A≈ -λ_I^B≈ 0.23 meV, λ_R≈ 0.25 meV).This can be also seen in the calculated spin-orbit field of one of the Dirac bands. Overall, spins have an out-of-plane component due to intrinsic SOCs, while Rashba SOC is responsible for the vortex-like in-plane components. Both components are nearly equal away from the K point, see also Fig. <ref>.For 19.1^∘, the splitting is maximized, a band inversion can be obtained, and valley-Zeeman SOC dominates over the Rashba one (λ_I^A≈ -λ_I^B≈ 0.57 meV, λ_R≈ 0.33 meV). The band inversion is due to the fact that the sublattice potential asymmetry Δ is small compared to the magnitude of the intrinsic SOCs. 
The spin-orbit field shows almost only an out-of-plane component, while in-plane components are suppressed.For 30^∘, the splitting is minimal, valley-Zeeman SOC vanishes and Rashba SOC dominates (λ_I^A≈ -λ_I^B≈ 0 meV, λ_R≈ 0.24 meV).In fact, the valley-Zeeman SOC should completely vanish at 30^∘, due to a mirror plane symmetry, restoring the sublattice symmetry <cit.>. However, due to the small rippling in graphene from structural relaxations, this symmetry is not fully restored and small, but finite, intrinsic SOCs arise even at 30^∘.The spin-orbit field almost solely shows vortex-like in-plane components, while an out-of-plane component is only present right at the K point. Such a twist-angle tunability of SOC and the corresponding spin-orbit fields will have a huge impact on spin transport and relaxation <cit.>, as we will discuss later.For all the investigated twist angles and the different TMDCs, our model Hamiltonian can faithfully describe the low-energy Dirac states, with the fit parameters summarized in Table <ref>. For structures from Tables S1 and S2, which satisfy n-m = 3· l, l ∈ℤ, the Dirac states of graphene from both K and K^' fold back to the Γ point. Consequently, we cannot apply our fitting routine employing the model Hamiltonian, Eq. (<ref>), for some twist angles, which are then absent in Table <ref>.Note that, when graphene sublattices (C_A and C_B) are interchanged in the geometry, the parameter Δ changes sign, while parameters λ_I^A and λ_I^B are interchanged as well. Such an exchange of sublattices corresponds to an additional60^∘ twist applied to graphene above the TMDC.Therefore twist angles ϑ and ϑ+60^∘ cannot be distinguished from the geometries.In Table <ref>, the fit parameters show such a sign change for the investigated twist angles. This is connected to the setup of the heterostructure supercells for different angles, since 1) the starting point stacking of the non-rotated layers is arbitrary, 2) the origin of the rotation axis can be chosen randomly, 3) the lattice vectors, defining the periodic heterostructure supercell, can be imposed differently on the moiré structure from the twisted layers.Consequently, one would have to consider several structures for each twist angle to obtain well justified results (in terms of value and sign). Considering subsequent lateral shifts (see below) is particularly helpful to see how the proximity SOC changes for different atomic registries.However, it is enough to consider only angles between 0^∘ and 30^∘, since the parameters for the other angles can be obtained by symmetry considerations <cit.>.From the experimental point of view, e. g., in spin transport or spin-charge conversion experiments, that consider twisted graphene/TMDC heterostructures, only the magnitude and type of proximity SOC plays a role, since a well-defined manufacturing process with atomically precise control of stacking and twisting of two different monolayers is not yet possible.Due to this and the mentioned sign issue from the DFT results, in Fig. <ref> we plot the absolute values of valley-Zeeman and Rashba SOC as function of the twist angle for all TMDs, as summarized in Table <ref>.Note that the valley-Zeeman SOC is defined as λ_VZ = (λ_I^A-λ_I^B)/2.We find a clear and strong twist-angle dependence of the proximity-induced SOC.The heavier the elements in the TMDC, the larger is the proximity SOC. For untwisted structures (0^∘), both valley-Zeeman and Rashba SOC are finite. 
At 30^∘, the valley-Zeeman SOC vanishes and Rashba SOC dominates, independent of the TMDC. While the Rashba SOC stays rather constant upon twisting, the valley-Zeeman SOC shows a marked twist-angle dependence, different for Mo- and W-based TMDCs. For WS_2 and WSe_2, the valley-Zeeman SOC gradually decreases when twisting from 0^∘ to 30^∘. This finding is consistent with Ref. <cit.>. In contrast, for MoS_2 and MoSe_2, the valley-Zeeman SOC exhibits a maximum at around 15^∘ to 20^∘. §.§ Influence of vertical and lateral shiftsHow sensitive is the proximity-induced SOC with respect to the atomic registry (stacking) and the interlayer distance? Recent experiments have shown that one can tune proximity SOC by external pressure, thereby reducing the interlayer distance between graphene and the TMDC <cit.>. In particular, applying an external pressure of about 1.8 GPa to a graphene/WSe_2 heterostructure, which diminishes the interlayer distance by about 9%, leads to a 2-fold enhancement of the proximity-induced Rashba SOC, as found by magnetotransport experiments <cit.>. In this section, we study how variations of the interlayer distance influence proximity SOC. For selected twist angles we vary d_int in steps of 0.1 Å, starting from the relaxed equilibrium distances listed in Tables S1 and S2, keeping the rest of the geometry (rippling of graphene and the TMDC) fixed. In addition, we study how lateral shifts, which essentially change the exact stacking of graphene above the TMDC, influence proximity SOC. For the lateral shifts, we use crystal coordinate notation, i. e., we shift graphene above the TMDC by fractions x and y of the supercell lattice vectors. We perform structural relaxations in the case of lateral shifts before we calculate the proximitized low energy Dirac bands, since the stacking may influence the graphene rippling and the interlayer distance. Since Mo- and W-based TMDCs produce different trends in the twist-angle dependence of proximity SOC, we focus on MoSe_2 and WSe_2 only. In addition, we consider only three selected twist angles, namely 0^∘, 19.1^∘ and 30^∘. In Table S4 and Table S5 we summarize the fit results, when tuning the interlayer distance or changing the stacking. By reducing the interlayer distance, we find that Dirac states are pushed towards the TMDC valence band edge. In addition, the sublattice asymmetry, represented by the staggered potential Δ, increases when decreasing the distance. Most importantly, the induced valley-Zeeman and Rashba SOC depend strongly on the distance, as summarized in Fig. <ref>. By reducing the interlayer distance, the SOC can be heavily increased, in agreement with experiments <cit.>. In particular, the proximity-induced SOC can be increased by a factor of 2–3, when reducing the distance by only about 10%. The only exception is the valley-Zeeman SOC for the 30^∘ structures, which is absent (or at least very small in our case due to rippling) due to symmetry. In contrast, the precise atomic registry (stacking) has negligible influence on the magnitude of proximity SOC in graphene/TMDC heterostructures. This probably results from the fact that the considered heterostructure supercells are large compared to the monolayer unit cells, such that an averaging effect takes place. §.§ Gate tunability of proximity SOCIn experiment, gating is a tool to further control and tailor the proximity SOC in graphene-based heterostructures <cit.>. For example, in Ref. 
<cit.> it has been shown that a gate voltage can be employed to control the spin-charge conversion efficiency in graphene/MoTe_2 heterostructures. We wish to answer the question: How does a transverse electric field affect proximity SOC for different twist angles? Again, we focus only on MoSe_2 and WSe_2 and twist angles of 0^∘, 19.1^∘ and 30^∘. The positive field direction is indicated in Fig. <ref>. The fit results are summarized in Tab. S6 for graphene/MoSe_2 and Tab. S7 for graphene/WSe_2 bilayers. In general, the electric field simply shifts the Dirac cone up or down in energy within the TMDC band gap, as can be seen from the band offsets. The tunability is about 100 meV per V/nm of applied field. Since the band offsets change, also the interlayer coupling along with proximity SOC changes. In Fig. <ref> we show how the valley-Zeeman and Rashba SOC are affected by the external transverse electric field. We find that for MoSe_2, the field barely influences the valley-Zeeman SOC, while the Rashba one can be tuned in a linear fashion, similar for all the different twist angles we consider. More precisely, within our field limits of ± 2 V/nm, the Rashba SOC can be tuned by about 50%. In particular, recalling that the ratio between valley-Zeeman and Rashba SOC determines the spin relaxation anisotropy <cit.>, the electric field will lead to an enormous tunability of the latter. In the case of WSe_2, the behaviour is rather similar but the 19.1^∘ twist angle is an exception. For this angle, also the valley-Zeeman SOC is highly tunable by the field. Moreover, we find that the valley-Zeeman SOC increases, while the Rashba one decreases for positive field amplitudes and vice versa for negative fields. § ENCAPSULATED GEOMETRIESMaximizing the proximity SOC in graphene is advantageous for example in spin-charge conversion experiments <cit.>. We have already seen that proximity-induced SOC is maximized for WSe_2 at 0^∘ and for MoSe_2 at 19.1^∘. Can we further enhance proximity SOC by encapsulating graphene between two TMDC monolayers? We consider the graphene/WSe_2 heterostructure with 0^∘ twist angle and place another WSe_2 monolayer on top. The top WSe_2 layer is considered to have a relative twist angle of 0^∘ and 0+60^∘ with respect to the subjacent graphene/WSe_2 bilayer, see Fig. <ref>. Similarly, we consider the graphene/MoSe_2 heterostructure with 19.1^∘ twist angle and place another MoSe_2 monolayer on top, with a relative twist angle of 19.1^∘ and 19.1+60^∘. We also perform a structural relaxation on the encapsulated structures, as above, before we proceed to calculate the proximitized Dirac dispersion. The structural information for the encapsulated structures is summarized in Table <ref>. The relaxed top and bottom graphene/TMDC interlayer distances are nearly identical for the different cases we consider, and coincide with the non-encapsulated geometries. In addition, the intrinsic dipole of the trilayer structure is strongly diminished, but still finite due to a small asymmetry in the interlayer distances. The rippling of the graphene layer is small (large) for symmetric (asymmetric) encapsulation when twist angles are the same for top and bottom monolayers (when the top TMDC monolayer has an additional 60^∘ twist). The calculated band offsets are also nearly identical to the non-encapsulated structures. We expect that symmetric encapsulation will boost proximity SOC in graphene, while for asymmetric encapsulation the proximity SOC in graphene will nearly vanish. 
The reason is the valley-Zeeman type of SOC combined with the interchange of the graphene sublattices upon 60^∘ rotation.For example, the induced SOC from the bottom WSe_2 is λ_I^A≈ - λ_I^B≈ 1.2 meV in the case of 0^∘ twist angle. If the top WSe_2 layer has the same alignment to graphene as the bottom WSe_2 layer, the induced SOC will be the same and we can expect a doubling of valley-Zeeman SOC. However, if the top WSe_2 layer is rotated by 60^∘ with respect to the underlying graphene/WSe_2 bilayer, the graphene sublattices are effectively interchanged with respect to the top WSe_2 layer. Hence, bottom and top TMDC layers induce opposite valley-Zeeman SOC, which in total leads to a cancellation. In Table <ref>, we summarize the fit results for the TMDC encapsulated geometries, while in Fig. <ref>, we explicitly show the results for WSe_2-encapsulated graphene and the different twist angle scenarios. Indeed, symmetric encapsulation strongly enhances and roughly doubles the proximity-induced intrinsic SOC parameters, compared to non-encapsulated geometries. In contrast, the Rashba SOC is drastically reduced, since TMDC encapsulation nearly restores the z-mirror symmetry. Also the dipole (intrinsic electric field) of the structures is almost zero. For asymmetric encapsulation, the proximity-induced intrinsic and Rashba SOC is strongly reduced, as expected. Actually, for perfectly symmetric encapsulation, the Rashba SOC should exactly vanish. Also the valley-Zeeman SOC should vanish in encapsulated structures where inversion symmetry is restored. However, our heterostructures still show a finite structural asymmetry after atomic relaxation, leading to finite values of proximity SOC. In conclusion, TMDC encapsulation will only boost proximity SOC in graphene, if both TMDC layersoffer the valley-Zeeman SOC in an additive way. In other words, both twist angles are important control knobs to tailor the interference of the individual proximity effects, as also discussed in Ref. <cit.>. § PHYSICS BEHIND THE SPIN-ORBIT PROXIMITY EFFECTThere are several open questions related to the presented DFT and simulation results that we wish to address: Why is the proximity-induced SOC of valley-Zeeman (sublattice-odd) and not Kane-Mele (sublattice-even) type?What is the exact origin of the proximity-induced SOC? Why is the twist-angle dependence so different for different TMDCs and not as universal as predicted by recent tight-binding studies <cit.>? Which atomic type (transition-metal or chalcogen) contributes most to the proximity-induced SOC? Why is the electric field tunability of valley-Zeeman SOC so pronounced for WSe_2 and a twist angle of 19.1^∘?We start by addressing the question about which atomic type contributes most to proximity SOC. We already know that the different transition-metal and chalcogen atoms provide very different contributions to the TMDC spin splittings <cit.>, which should also influence proximity effects.Therefore, we have turned off SOC on different atoms by employing non-relativistic pseudopotentials, and recalculated the proximitized Dirac bands for different TMDCs and twist angles. The fit results are summarized in the SM [1]. We find, as expected, that the heavier the element (Mo or W, S or Se), the larger the contribution to the proximity-induced SOC. In particular, the contribution of W, Mo, Se, and S atoms to the proximity-induced valley-Zeeman SOC is roughly 1.2, 0.3, 0.1, and 0.01 meV for small (0 to 8^∘) twist angles. 
Remarkably, this can be drastically different for other twist angles. For example, at 19.1^∘ the contribution of Se atoms to the valley-Zeeman SOC is roughly twice as large as the one from W or Mo atoms. The reason is that the graphene Dirac cone couples to different k-points within the TMDC Brillouin zone for different twist angles. At different k-points, the TMDC bands have a different atomic and orbital decomposition <cit.>. Therefore, for different twist angles different atomic contributions and orbitals are involved.Why is the proximity SOC of valley-Zeeman type? The graphene Dirac states at K are split as if an external magnetic field would be present, see Fig. <ref>. In particular, for 0^∘, spin down states are shifted to lower energies compared to spin up, see Fig. <ref>(a), hence a Zeeman-like band splitting. Due to time-reversal symmetry the Dirac states at K^' are energetically the same, but have the opposite spin. Hence, the charge carriers effectively experience the opposite magnetic field, i. e., a valley-dependent Zeeman-like spin splitting arises. What causes this splitting in the first place? As we find from the projected band structures for different twist angles, the Dirac states predominantly couple to high-energy TMDC bands, seefor example Fig. <ref>(a) and SM [1]. Considering a particular twist angle, the Dirac states at K couple differently to the spin up and spin down TMDC band manifolds. For simplicity, imagine that the coupling of Dirac states is only to TMDC conduction band states and the coupling to the spin down manifold is stronger than to the spin up one. According to second order perturbation theory, coupled energy levels repel. When the coupling to spin down is stronger, the spin down Dirac states would be pushed to lower energies compared to spin up, explaining the Zeeman-like splitting for a given valley. Due to time-reversal symmetry, the other valley shows the opposite behavior. Of course, in our heterostructures the coupling is also to TMDC valence bands and there is a delicate balance to the coupling to spin up and spin down manifolds, where one outweighs the other.This is similar to recent considerations in twisted graphene/Cr_2Ge_2Te_6 heterostructures <cit.>. In particular for 30^∘ twist angle, the Dirac states of graphene are folded to the Γ-M high-symmetry line of the TMDC Brillouin zone, see Fig. <ref>, where TMDC bands are spin degenerate, and proximity-induced valley-Zeeman SOC vanishes [1].Regarding the electric field tunability of valley-Zeeman SOC for WSe_2 and a twist angle of 19.1^∘, we first have to consider the location in the TMDC Brillouin zone, where the Dirac cone folds back, see Fig. <ref>(b) and SM [1]. In particular, the graphene K point folds near the WSe_2 Q side-valley, see Fig. <ref>(f), where the spin splitting of the first TMDC conduction band is very large (∼ 200 meV).Moreover, the electric field results in Table S7 show that the closer the Dirac point shifts towards the TMDC conduction band, the larger is the proximity-induced valley-Zeeman SOC. Considering a coupling of Dirac states to the energetically closest TMDC bands, for this particular twist angle, we come to the conclusion that mainly the first conduction band is responsible for the spin splitting of Dirac states. 
The contributions from the first two WSe_2 valence bands seem to cancel each other, due to opposite spin splittings.Another supporting factor is that at the Q valley, the TMDC conduction band wave function is strongly delocalized across the TMDC layer, see Fig. <ref>(e), allowing for a more efficient wavefunction overlap between the layers and an enhanced transfer of the SOC to the graphene layer.Therefore, a coupling to the Dirac states should be enhanced, once the energy difference is reduced by applying an external electric field.In contrast, for MoSe_2 the spin splittings of the relevant bands at the Q valley are very different in magnitude compared to WSe_2, see Fig. <ref>(d), and therefore the electric field dependence is not as pronounced for the same twist angle. This also relates to the question, why our twist angle results are not universal for all the TMDCs, as the tight-binding studies suggest <cit.>. Even though the individual TMDCs are very similar, there are profound differences such as atomic and orbital decompositions of bands, leading to different spin splittings across the Brillouin zone. On top of that, ourDFT calculations capture the full picture, including monolayer dispersions, spin-orbit effects, and interlayer interactions. In contrast, the tight-binding description of the heterostructure <cit.> employs assumptions for the interlayer interactions and a specific parametrization of the TMDC monolayer dispersion based on first-principles results <cit.>, which does not perfectly reproduce band energies nor spin splittings.Anyway, both DFT and the tight-binding descriptions have advantages and drawbacks, but help to gain insights on the physics of proximity-induced SOC in graphene/TMDC heterostructures.§ SPIN RELAXATION ANISOTROPYAn experimentally verifiable fingerprint of the proximity-induced SOC in graphene/TMDC heterostructures is the anisotropy of the spin lifetimes <cit.>. The intrinsic SOC parameters provide a spin-orbit field that points out of the monolayer plane, while the Rashba SOC creates, in the simplest case, a vortex-like in-plane spin-orbit field.Depending on the interplay of both SOCs, spins pointing in different directions relax on different timescales, creating a spin lifetime anisotropy. The spin relaxation anisotropy, ξ, which is defined as the ratio between the out-of-plane (τ_s,z) and in-plane (τ_s,x) spin relaxation times, can be easily calculated from the fitted parameters via <cit.>ξ= τ_s,z/τ_s,x = (λ_VZ/λ_R)^2(τ_iv/τ_p)+1/2.A similar expression has been derived in Ref. <cit.>. Here, the ratio between the valley-Zeeman and the Rashba SOC strength predominantly determines the anisotropy, but also the ratio between intervalley (τ_iv) and momentum (τ_p) scattering times play a role.In the following, we assume τ_iv/τ_p = 5, as in Ref. <cit.>. In Fig. <ref>, we summarize the calculated anisotropies as function of the 1) twist angle, 2) the applied electric field, and 3) the interlayer distance, employing the results from above. The anisotropy is extraordinarily large for WS_2 and MoS_2 at 0^∘, since the valley-Zeeman SOC is giant compared to the Rashba one, pinning the spins to the out-of-plane direction. At 30^∘, the anisotropy reduces to 1/2, i. e., the Rashba limit, since the valley-Zeeman SOC vanishes independent of the TMDC. In general, the twist angle is an experimental knob to tailor the spin relaxation anisotropy. 
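Evaluating the anisotropy formula above for a given set of fitted SOC parameters is straightforward; a minimal sketch, assuming τ_iv/τ_p = 5 as stated and using SOC values of a typical magnitude rather than the actual entries of the parameter table, reads:

```python
def spin_lifetime_anisotropy(lam_VZ, lam_R, tau_iv_over_tau_p=5.0):
    # xi = tau_{s,z} / tau_{s,x} = (lambda_VZ / lambda_R)^2 * (tau_iv / tau_p) + 1/2
    return (lam_VZ / lam_R) ** 2 * tau_iv_over_tau_p + 0.5

# SOC parameters in meV; illustrative magnitudes, not the fitted values of the table
print(spin_lifetime_anisotropy(1.2, 0.5))    # valley-Zeeman dominated, xi of order 30
print(spin_lifetime_anisotropy(0.0, 0.25))   # 30 degrees: Rashba limit, xi = 1/2
```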
Once a twist angle is fixed, the proximity SOC can be further tuned by a transverse electric field or pressure engineering of the interlayer distance. Tuning the electric field from -2 to 2 V/nm essentially decreases the Rashba SOC and consequently increases the anisotropy. A strong tunability can be especially observed in WSe_2 for 0^∘ and for MoSe_2 for 19.1^∘, where the anisotropies can be increased by a factor of 2–3. In contrast, when reducing the interlayer distance, both valley-Zeeman and Rashba SOC increase, but at different rates, and the anisotropies decrease. A particularly strong anisotropy can be expected in TMDC-encapsulated graphene, as the Rashba SOC can be suppressed compared to the valley-Zeeman SOC, see Table <ref>. In particular, considering the WSe_2-encapsulated case, and both twist angles to be 0^∘, the calculated anisotropy would be gigantic, ξ≈ 3×10^4. § SPIN-CHARGE CONVERSIONAnother experimentally verifiable fingerprint of proximity-induced SOC is the possibility to convert between charge and spin currents in proximitized graphene without the need of conventional ferromagnetic electrodes, which is highly desirable for all-2D spintronic devices <cit.>. Recent theoretical calculations <cit.> have already considered the twist angle dependence of the charge-to-spin conversion in graphene/TMDC heterostructures. Remarkably, not only the conventional spin-Hall effect (SHE) and Rashba-Edelstein effect (REE) occur, but also an unconventional REE (UREE) can arise. While for SHE and REE the current-induced non-equilibrium spin density has a polarization perpendicular to the charge current <cit.>, for the UREE the spin density polarization is collinear to the applied electric current. A similar unconventional charge-to-spin conversion has already been experimentally detected in the semimetals WTe_2 <cit.> and MoTe_2 <cit.>, and can be attributed to reduced symmetries <cit.>. Recent experiments on graphene/NbSe_2 <cit.>, graphene/WTe_2 <cit.>, and graphene/MoTe_2 <cit.> heterostructures have demonstrated the spin-to-charge conversion of spins oriented in all three directions. However, in these structures NbSe_2, WTe_2, and MoTe_2 are metallic, contributing directly to the conversion process, along with the proximitized graphene. The figure of merit for charge-to-spin conversion for comparing 3D and 2D systems is given by αλ_SF, where α is the conversion efficiency and λ_SF is the spin diffusion length <cit.>. Especially λ_SF can be giant in proximitized graphene (∼ μm) <cit.>, much larger than in conventional 3D bulk heavy metals such as Pt or W (∼ nm) <cit.>. Therefore, 2D material heterostructures can outperform 3D systems, even though the conversion efficiencies of, e. g., Pt (7%) <cit.> or W (20%) <cit.> are sizable. The reason for the UREE in graphene/semiconductor-TMDC heterostructures <cit.> is the Rashba phase angle φ of the proximitized Dirac bands. When φ = 0, no radial in-plane spin-orbit field components arise. In other words, the in-plane spins are always perpendicular to momentum, see for example Fig. <ref>(f), and consequently the generated spin density polarization will also be perpendicular to the applied current direction. However, when φ≠ 0, also radial spin-orbit field components arise, see for example Fig. S11, meaning that a current-induced spin density can have a polarization component parallel to the current. Consequently, the UREE will be maximized when φ = 90^∘. In Fig. 
<ref>, we summarize the twist-angle dependence of the Rashba phase angle for our investigated graphene/TMDC structures. For our exemplary case of MoSe_2, we therefore expect that UREE will be maximized for a twist angle of ϑ≈ 23^∘, where the Rashba phase angle has a maximum of φ≈ 30^∘. In Fig. <ref>, we schematically sketch the different conversion processes in an experimental setup. A charge current along the x direction generates a spin current along y with spins polarized along z due to SHE. Similarly, a non-equilibrium spin density δ s is generated, which is in-plane polarized, due to combined REE and UREE. In order to get the conversion efficiencies, we have performed real-space quantum transport calculations <cit.>, employing the honeycomb tight-binding version <cit.> of the Hamiltonian ℋ, Eq. (<ref>). The conversion efficiencies Θ_SHE, α_REE, and α_UREE are evaluated as Θ_SHE = (2/ħ) J_y^z/J_x, α_REE = (2ev_F/ħ) δ s_y/J_x, and α_UREE = (2ev_F/ħ) δ s_x/J_x, where J_x is the charge current along the direction of the applied bias voltage V_b and δ s_x (δ s_y) is the current-induced nonequilibrium spin density along the x (y) axis. Analogously, J_y^z = (e/2){s_z,v_y} is the Hermitian operator <cit.> of the spin current along the y-axis, which carries spins oriented along the z-axis. The local spin and charge currents <cit.>, as well as the nonequilibrium spin density <cit.>, were calculated using the nonequilibrium Green's function formalism (NEGF) <cit.> applied to a Landauer geometry <cit.>, where the central region of finite length is an armchair nanoribbon that is attached to two semi-infinite leads terminating into macroscopic source (S) and drain (D) reservoirs at infinity. The difference of their electrochemical potentials defines the bias voltage, μ_S-μ_D=eV_b. Such a clean (i.e., without any impurities) system is then periodically repeated in the transverse direction, which requires careful checking of the convergence of the k_y-point sampling <cit.>. Note that this procedure effectively models an infinite plane, while guaranteeing a continuous energy spectrum of the system Hamiltonian, which is essential <cit.> for properly introducing dissipation effects when calculating nonequilibrium expectation values in quantum statistical mechanics. The NEGF formalism provides the nonequilibrium density matrix for steady-state transport, ρ̂(k_y), from which the expectation value of the relevant operator Ô is obtained via O(k_y) = ⟨Ô⟩ = Tr[ρ̂(k_y)Ô] at a single value of k_y, while its total is an integral over the first Brillouin zone (BZ), O = (W/2π)∫ dk_y O(k_y), where W is the width of the nanoribbon. In Fig. <ref>, we show the calculated SHE, REE, and UREE efficiencies, Θ_SHE, α_REE, and α_UREE, as a function of the twist angle and Fermi level for the different graphene/TMDC heterostructures, employing the model Hamiltonian parameters from Table <ref>. We find that graphene/WSe_2 has in general both the largest range and highest values of spin conversion efficiencies, due to the highest values and variations of proximity SOC upon twisting. In addition, the large tunability of the Rashba phase angle is responsible for a pronounced UREE for WSe_2, which changes sign at a twist angle of around 20^∘. In all cases, the UREE follows the REE according to α_UREE = α_REE tan(φ), i. e., a modulation by the Rashba phase angle. Fig. <ref> shows the REE and UREE efficiencies for a set of twist angles, as a function of the Fermi energy, for graphene/WSe_2. 
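To make the post-processing explicit, the sketch below evaluates the three conversion efficiencies from k_y-resolved NEGF expectation values; the per-k_y data here are toy numbers standing in for the output of the transport code, constructed such that the relation α_UREE = α_REE tan(φ) holds for an assumed Rashba phase angle of 30°.

```python
import numpy as np

def bz_average(O_ky, ky, W):
    # O = (W / 2 pi) * integral over the transverse BZ of O(k_y); crude Riemann sum
    return W / (2.0 * np.pi) * np.sum(O_ky) * (ky[1] - ky[0])

def efficiencies(Jx, Jyz, ds_x, ds_y, hbar=1.0, e=1.0, vF=1.0):
    # Theta_SHE, alpha_REE, alpha_UREE as defined above; hbar, e, vF set to 1 since
    # only ratios between different twist angles / Fermi levels are compared here
    return ((2.0 / hbar) * Jyz / Jx,
            (2.0 * e * vF / hbar) * ds_y / Jx,
            (2.0 * e * vF / hbar) * ds_x / Jx)

W, ky = 1.0, np.linspace(-np.pi, np.pi, 201)   # nanoribbon width and transverse BZ grid
phi = np.radians(30.0)                         # assumed Rashba phase angle
Jx_ky  = 1.0 + 0.1 * np.cos(ky)                # mock charge current J_x(k_y)
Jyz_ky = 0.02 * np.ones_like(ky)               # mock spin current J_y^z(k_y)
dsy_ky = 0.05 * np.ones_like(ky)               # mock spin density along y (REE component)
dsx_ky = dsy_ky * np.tan(phi)                  # mock spin density along x (UREE component)

Jx, Jyz, dsx, dsy = (bz_average(o, ky, W) for o in (Jx_ky, Jyz_ky, dsx_ky, dsy_ky))
theta_SHE, alpha_REE, alpha_UREE = efficiencies(Jx, Jyz, dsx, dsy)
print(theta_SHE, alpha_REE, alpha_UREE, np.isclose(alpha_UREE, alpha_REE * np.tan(phi)))
```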
The overall behaviour of these curves can simply be understood via the band structure of the corresponding twisted heterostructure. Below the band gap, no states contribute to transport, but as the Fermi energy increases, different cases need to be considered. In the first case, there is no Mexican hat in the band structure and only Rashba-type SOC present, see for example Fig. <ref>(c) for a twist angle of 30^∘. Once the Fermi energy crosses the first spin-split subband, which is characterized by spin-momentum locking, a plateau in REE emerges <cit.>. The plateau is maintained within the Rashba pseudo-gap, followed by an algebraic decay, once the second subband is reached, which contributes with opposite spin-momentum locking. In the second case, when there is additionally a valley-Zeeman SOC present, as is the case for example in Fig. <ref>(a) for a twist angle of 0^∘, the REE and UREE efficiencies spike before reaching the plateau. In the third case, a Mexican hat develops, see for example Fig. <ref>(b), due to proximity SOC that is larger than the pseudospin-asymmetry gap (inverted band structure) <cit.>. Instead of directly reaching the plateau or a spike as the Fermi energy increases, the REE and UREE efficiencies now ramp up slowly but still reach a plateau once the Mexican hat is overcome. The analysis from this point is identical to before. § CONCLUSIONS In conclusion, we have performed extensive first-principles calculations to reveal the twist-angle and gate dependence of proximity-induced SOC in graphene/TMDC heterostructures. By employing a symmetry-based Hamiltonian, we have extracted orbital and spin-orbit parameters that capture the proximitized low energy Dirac bands. Our results show that the magnitude and the interplay of valley-Zeeman and Rashba SOC can be tuned via twisting, gating, encapsulation, and the interlayer distance. In particular, when twisting from 0^∘ to 30^∘, the induced valley-Zeeman SOC decreases almost linearly to zero for W-based TMDCs,while for Mo-based TMDCs it exhibits a maximum at around 15–20^∘ before going to zero. The induced Rashba SOC stays rather constant upon twisting, and acquires a phase angle φ≠ 0, due to symmetry breaking, for twist angles different from 0^∘ and 30^∘.Within our investigated electric field limits of ± 2 V/nm, mainly the Rashba SOC can be tuned by about 50%. The interlayer distance provides a giant tunability, since the proximity-induced SOC can be increased by a factor of 2–3, when reducing the distance by only about 10%. In TMDC-encapsulated graphene, both twist angles are important to control the interference of the individual proximity-induced SOCs, allowing to precisely tailor the valley-Zeeman SOC, while the Rashba SOC becomes suppressed. Based on our effective Hamiltonian with fitted parameters, we made specific predictions for experimentally measurable quantities such as spin lifetime anisotropy and charge-to-spin conversion efficiencies. The spin lifetime anisotropy, as well as the charge-to-spin conversion efficiencies are highly tunable by our investigated control knobs and serve as guidance for experimental measurements.Our results highlight the important impact of the twist angle, gating, interlayer distance, and encapsulation when employing van der Waals heterostructures in experiments.K. Z. and J. F.were supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) SFB 1277 (Project No. 314695032), SPP 2244 (Project No. 
443416183), the European Union Horizon 2020 Research and Innovation Program under contract number 881603 (Graphene Flagship) and FLAGERA project 2DSOTECH. B. K. N. was supported by the US National Science Foundation through the University of Delaware Materials Research Science and Engineering Center, DMR-2011824. The authors thank T. Naimer, E. Icking, and A. Ferreira for fruitful discussions. [1]See Supplemental Material, including Refs. <cit.> where we summarize structural information and fit results in tabular form for the investigated heterostructures. We also analyze the origin of proximity SOC and give details on the real space transport calculations. | http://arxiv.org/abs/2310.17907v1 | {
"authors": [
"Klaus Zollner",
"Simão M. João",
"Branislav K. Nikolić",
"Jaroslav Fabian"
],
"categories": [
"cond-mat.mes-hall"
],
"primary_category": "cond-mat.mes-hall",
"published": "20231027055326",
"title": "Twist- and gate-tunable proximity spin-orbit coupling, spin relaxation anisotropy, and charge-to-spin conversion in heterostructures of graphene and transition-metal dichalcogenides"
} |
Ionospheric response during Tropical Cyclones-a brief review on Amphan and Nisarga [ January 14, 2024 ================================================================================== We study covering numbers of subsets of the symmetric group S_n that exhibit closure under conjugation, known as normal sets. We show that for any ϵ>0, there exists n_0 such that if n>n_0 and A is a normal subset of the symmetric group S_n of density ≥ e^-n^2/5 - ϵ, then A^2 ⊇ A_n. This improves upon a seminal result of Larsen and Shalev (Inventiones Math., 2008), with our 2/5 in the double exponent replacing their 1/4.Our proof strategy combines two types of techniques. The first is `traditional' techniques rooted in character bounds and asymptotics for the Witten zeta function, drawing from the foundational works of Liebeck–Shalev, Larsen–Shalev, and more recently, Larsen–Tiep. The second is a sharp hypercontractivity theorem in the symmetric group, which was recently obtained by Keevash and Lifshitz. This synthesis of algebraic and analytic methodologies not only allows us to attain our improved bounds but also provides new insights into the behavior of general independent sets in normal Cayley graphs over symmetric groups. § INTRODUCTIONThis paper employs tools from analysis of Boolean functions to address problems studied independently by group theorists and combinatorialists. The problems we study are those which can be reformulated as investigations about independent sets in Cayley graphs over symmetric groups. §.§ Covering numbers of subsets of symmetric groupsThe covering number of a generating set A in a group G is the minimal ℓ such that A^ℓ = G. The problem of determining the covering numbers of conjugacy classes and their unions is fundamental in group theory, with highlights including the breakthroughs of Guralnick, Larsen, Liebeck, Shalev and Tiep <cit.>.A particular question that has been studied extensively is characterizing sets A such that A^2=G. A well known open problem in this area is Thompson's conjecture which asserts that every finite simple group G contains a conjugacy class whose square is G.Much of the research on characterizing sets whose square is the entire group has focused on the symmetric group, where this study goes back to Gleason, who showed in 1962 that for any n-cycle σ∈ S_n, the conjugacy class σ^S_n satisfies (σ^S_n)^2 = A_n (see <cit.>). For many years, results of this kind were achieved only for very restricted families of conjugacy classes, like the case where σ consists of two cycles (see, e.g., <cit.>). In a breakthrough paper from 2007, Larsen and Shalev <cit.> showed that for a sufficiently large n, if σ∈ S_n has at most n^1/128 cycles then (σ^S_n)^2 = A_n. As a random permutation σ∈ S_n has O(log n) cycles a.a.s., this shows that asymptotically, (σ^S_n)^2 = A_n holds for almost all permutations. In another breakthrough which followed shortly after, Larsen and Shalev <cit.> proved the same assertion for any σ∈ S_n that has at most n^1/4-ϵ cycles. Namely, they showed:For any ϵ>0, there exists an integer n_0, such that for any n>n_0 and for any σ∈ S_n that has at most n^1/4-ϵ cycles, we have (σ^S_n)^2 = A_n. The number of cycles of a permutation is closely related to the density of its conjugacy class. (Throughout the paper, for finite sets A,B the density of A inside B is μ_B(A) = |A∩ B|/|B|, and when B is clear from the context, we shorten the notation to μ(A)). 
Theorem <ref> can be easily seen to be equivalent to the following:For any ϵ>0, there exists an integer n_0, such that for any n>n_0 and for any normal subset A ⊂ S_n with μ(A)≥ e^-n^1/4 - ϵ, we have A^2 ⊇ A_n. Determining the minimal density α(n) such that for any normal subset of S_n with density ≥α(n) we have A^2⊇ A_n, remains a very challenging open problem, and the results of <cit.> remained the `state of the art' in the last 15 years (see, e.g., <cit.>). §.§.§ Our results We show that the assertions of Theorems <ref> and <ref> hold under a significantly weaker assumption on the set A. For any ϵ>0, there exists an integer n_0, such that for any n>n_0 and for any σ∈ S_n that has at most n^2/5-ϵ cycles, we have (σ^S_n)^2 = A_n. For any ϵ>0, there exists an integer n_0, such that for any n>n_0 and for any normal subset A ⊂ S_n with μ(A)≥ e^-n^2/5 - ϵ, we have A^2 ⊇ A_n. We also prove a similar strengthening of the corresponding result for subsets of A_n that was recently proved by Larsen and Tiep <cit.>. For any ϵ>0, there exists an integer n_0, such that for any n>n_0 and for any normal subset A ⊂ A_n with μ(A)≥ e^-n^2/5 - ϵ, we have A^2 ⊇ A_n ∖{1}. Theorem <ref> significantly improves over a recent result of Lifshitz and Marmor <cit.>, which achieves the weaker conclusion A^3=A_n under the stronger assumption μ(A)≥ e^-n^1/3 - ϵ.In terms of techniques, Larsen and Shalev <cit.> obtained their results by establishing upper bounds for the values of irreducible characters. Those character bounds have grown out to be fundamental to the study of covering numbers and have found various applications in other areas of mathematics. Our new results demonstrate the surprising role of a very different new tool – the recent result of Keevash and Lifshitz <cit.> on hypercontractivity for global functions over symmetric groups. Regarding tightness of our results, we believe that the minimal density of A which guarantees A^2 ⊇ A_n is significantly smaller than e^-n^2/5 - ϵ. In this context, it is worth noting that Garonzi and Maróti <cit.> conjectured that there exists an absolute constant c>0, such that if A,B,C are normal subsets of an alternating group G=A_n of density ≥ |G|^-c, then ABC=G. They achieved an essentially best possible result for four sets by showing that for any ϵ>0 there exists n_0 = n_0(ϵ), such that if n>n_0 and A,B,C,D are normal sets of density ≥ |G|^-1/2 + ϵ, then ABCD = G. Lifshitz and Marmor <cit.> speculated that a far-reaching generalization of Theorem <ref> holds: If A is a normal subset of S_n of density ≥ (n!)^-c, then A^2 ⊇ A_n. §.§ Independent sets in normal Cayley graphs Theorem <ref> can be restated in a graph theoretic terminology. Recall that a subset of the vertices of a graph is independent if it does not contain any edges. The largest size of an independent set in a graph is called its independence number.A Cayley graph Cay(G,A) is said to be normal if the set A is normal. For a set I⊆ S_n and for τ∈ S_n, it is easy to see that τ∉ I^-1I if and only if I is an independent set in the Cayley graph Cay(S_n, τ^S_n). 
Since for a normal set I⊆ S_n, we have I^-1I=I^2, it is clear that the following theorem is a restatement of Theorem <ref>.For any ϵ>0, there exists an integer n_0, such that for any n>n_0 and for any τ∈ A_n ∖{1}, the largest normal independent set in the Cayley graph Cay(A_n,τ^S_n) has size at most e^-n^1/4-ϵ.The size of the largest normal independent set in Cay(A_n,τ^S_n) is clearly bounded by the independence number of Cay(A_n,τ^S_n).A subfield of extremal combinatorics known as Erdős–Ko–Rado type theorems (see the book <cit.> and the thesis <cit.>) is mostly devoted to the study of the independence numbers of graphs that that have a large group of symmetries. One breakthrough in this direction is the work of Ellis, Friedgut, and Pilpel <cit.> concerning the independence number of the Cayley graph Cay(S_n, A), where A is the set of permutations with at most t-1 fixed points. Independent sets I in Cay(A_n, A) are called t-intersecting, as in such a set I, any two permutations agree on at least t coordinates. Ellis, Friedgut, and Pilpel showed that for any n>n_0(t), the largest t-intersecting sets in S_n are the t-umvirates, which are cosets of the subgroup of all permutations that fix a given set of size t. The minimal possible value of n_0(t) was improved by Ellis and Lifshitz <cit.>, then by Kupavskii and Zakharov <cit.>, and finally by Keller, Lifshitz, Minzer, and Sheinfeld <cit.> who showed that n_0(t) can be taken to be linear in t. Furthermore, the authors of <cit.> showed that the results of Ellis, Friedgut, and Pilpel extend to the sparser Cayley graph Cay(G,A'), where A' consists only of the permutations that have exactly t-1 fixed points (though, starting at a larger value of n_0(t). The latter setting is known as the `forbidding one intersection' problem, see <cit.>. When removing edges from a Cayley graph, its family of independent sets widens, making it increasingly challenging to establish effective upper bounds on the independence number. We prove the following result regarding the independence number of significantly sparser Cayley graphs, in which the generating set is a single conjugacy class. [We note that in the specific case of the Cayley graph Cay(G,B), where B consists of all permutations that have a single cycle of length >1 and arbitrarily many fixed points (which is a union of n-1 conjugacy classes), significantly stronger bounds on the independence number were obtained in <cit.>. These results, which have important applications to coding theory, are incomparable with our results.] In order to avoid sign issues, we restrict our attention to the alternating group A_n.For any ϵ>0 there exist δ,n_0, such that the following holds for any t∈ℕ and n>n_0+t. Let σ∈ A_n be a permutation with t fixed points. Then the largest independent set in the Cayley graph Cay(A_n,σ^S_n) has density of at most max(e^-(n-t)^1/3-ϵ, (n-t)^-δ t).For t<n^1/3 -ϵ, Theorem <ref> implies that the independence number of the Cayley graph Cay(A_n,σ^S_n) is n^-Θ(t), as in this range, the assertion matches the trivial lower bound implied by the t-umvirates. Thus, the theorem shows that in terms of the order of magnitude, the results of <cit.> for the `forbidding one intersection' problem extend to the much sparser setting where only intersection inside a single conjugacy class is forbidden. For larger values of t, our bound improves upon the bound of Larsen and Shalev in two ways. Firstly, our bound holds for all independent sets, while their bound applies only to normal independent sets. 
Moreover, even in the broader context of arbitrary independent sets in normal Cayley graphs we improve the 1/4 in the double exponent to 1/3.Our main tool, which is interesting for its own sake, is the following stability result which says that a mild lower bound on the density of an independent set suffices to imply that it is heavily correlated with a t-umvirate. Given a set A we write μ_A for the uniform measure on A.For any ϵ>0 there exist δ,n_0, such that the following holds for any t∈ℕ and n>n_0+t. Let σ∈ A_n be a permutation with t fixed points. Suppose that I is an independent set in the Cayley graph Cay(A_n,σ^S_n) of density ≥ e^-n^1/2 - log_n t/2 - ϵ. Then there exists ℓ>0 and an ℓ-umvirate U, such thatμ_U(I) ≥ n^δℓμ_A_n(I) . §.§ Our methods: Hypercontractivity and bounds for the isotypic projections Our proof combines character bounds with a recent tool known as `sharp hypercontractivity in the symmetric group' due to Keevash and Lifshitz <cit.>, which improves upon the earlier work of Filmus, Kindler, Lifshitz and Minzer <cit.>. The covering results of Larsen and Shalev are based upon character bounds. These can be used to show that conjugacy classes behave (in some senses) like random sets of the same density. Hypercontractivity serves a similar role to the character bounds for functions that are not necessarily class functions. We make use of this by applying it to study the restrictions of the conjugacy classes to the ℓ-umvirates (for various values of ℓ). These restrictions satisfy the following spreadness notion (see <cit.>), which is also known as globalness or quasiregularity in the literature (see <cit.>). Let δ >0. We say that a set A⊆ S_n is δ-spread if for each ℓ≥ 1and for each ℓ-umvirate U,μ_U(A)≤ n^δℓμ_S_n(A).In words, this means that no restriction to an ℓ-umvirate increases the density of A significantly.Theorem <ref> can be restated as an upper bound on the size of δ-spread independent sets in normal Cayley graphs. It lies in the heart of the paper and the rest of our theorems are reduced to it by combinatorial arguments. §.§.§ Sketch of proof for Theorem <ref>For functions f,g on a finite group G, we write f*g(y) = 𝔼_x∼ G[f(x)g(x^-1y)], where x∼ A denotes that x is chosen uniformly out of A. Denote by Ĝ the set of irreducible characters on G. For χ∈Ĝ, we write f^=χ = χ(1) f * χ. It is well known that f can be orthogonally decomposed as f = ∑_χ∈Ĝ f^= χ. We denote the space of functions of the form f^=χ by W_χ.Fix σ∈ A_n and write f = 1_(σ^S_n)/μ_A_n(σ^S_n). It was known already to Frobenius that since f is a class function, for any χ∈Ĝ, the space W_χ is an eigenspace of the convolution operator g ↦ f * g, which corresponds to the eigenvalue χ(σ)/χ(1). Let g=1_I/μ_A_n(I) be the normalized indicator of an independent set I in the Cayley graph Cay(A_n,σ^S_n). Then one can decompose 0 = ⟨ f*g, g⟩ = ∑_χ∈A_nχ(σ)/χ(1)g^=χ_2^2. The `main term' of the above sum comes from the trivial representation χ = 1, which contributes ⟨ g, 1⟩ = 𝔼[g] = 1 to the sum. We proceed by showing that if σ has t fixed points and I is `large' (as a function of t) and δ-spread, then the other terms are negligible compared to the main term, leading to a contradiction. Our proof is divided into two parts – upper bounding the terms |χ(σ)/χ(1)| and upper bounding the terms g^=χ_2^2, for all χ∈A_n∖{1}. To upper bound the terms |χ(σ)/χ(1)|, we use the character bounds of Larsen–Shalev <cit.> and Larsen–Tiep <cit.> that take the form χ(σ)≤χ(1)^β, where β depends only on σ and not on χ. 
The main novel tool that we introduce in this paper is the following proposition which allows upper bounding the terms g^=χ_2^2. For any ϵ>0 there exist δ,n_0>0, such that the following holds for all n>n_0. Let α<1-ϵ and let A⊆ S_n be a δ-spread set of density ≥ e^-n^α. Write g = 1_A/μ(A). Then g^=χ_2^2≤χ(1)^α+ϵ for any χ∈Ĝ.We prove Proposition <ref> by appealing to the hypercontractivity theorem of Keevash and Lifshitz <cit.>.Combining the Larsen–Shalev and Larsen–Tiep bounds with ours, while choosing α appropriately, we obtain that |⟨ f*g,g ⟩- 1| ≤∑_χ∈A_n∖ 1χ(1)^-s for an absolute constant s>0. At this point we apply the Witten zeta function estimates of Liebeck and Shalev. For a finite group G, the Witten zeta function is given by ζ_G(s) = ∑_χ∈Ĝχ(1)^-s. Liebeck and Shalev <cit.> showed that ζ_A_n(s) = 1+o(1) for any fixed s>0, as n tends to infinity. This estimate yields |⟨ f*g,g ⟩- 1| =o(1) in contradiction to Equation (<ref>), thus completing the proof.[We note that the Witten zeta function originates in the representation theory of compact Lie groups, where ζ_SU(2) is the Riemann zeta function. We define it here only for finite groups for simplicity.] We deduce Theorems <ref>, <ref>, and <ref> from Proposition <ref> by proving that certain restricted conjugacy classes are δ-spread for an absolute constant δ >0, and then following a similar route to the above sketch.§.§ Structure of the paperIn Section <ref> we present results from works of Larsen–Shalev <cit.>, Larsen–Tiep <cit.> and Liebeck–Shalev <cit.> that will be used in the sequel.Namely, we recall the character bounds of Larsen–Shalev that take the form χ(σ)≤χ(1)^E(σ) +ϵ for the irreducible characters of S_n. We also recall its recent variant for A_n due to Larsen and Tiep, and provide simple estimates for E(σ). Finally, we recall some properties of the Witten zeta function for S_n and A_n, which are due to Liebeck and Shalev.In Section <ref> we prove Proposition <ref>. In Section <ref> we prove a key theorem (Theorem <ref>) and deduce from it Theorems <ref> and <ref>. In Section <ref> we prove that certain restricted conjugacy classes admit some form of spreadness. In Section <ref> we prove Theorems <ref>, <ref>, and <ref>.§.§ AcknowledgementThis work was done while N. L., and O. S. were visiting the Simons Institute for the Theory of Computing. § OUTLINE OF THE PROOF OF THEOREM <REF>Our proof idea is showing that for each set I⊆ A_n of density at leaste^-n^2/5 -ϵ and each conjugacy class A of A_n the set I is not an independent set inside the normal Cayley graph Cay(A_n,A).Each edge of this graph in I would correspond to σ_1,σ_2∈ I, a∈ A with aσ_1 = σ_2. Since the conjugacy classes are close under taking inverses we have a=σ_2^-1σ_1 ∈ I^2 and as I^2 is closed under conjugation we would have A⊆ I^2. Establishing that for all the conjugacy classes A would complete the proof. We make crucial use of the following deep theorems of Larsen and Shalev. The first one settles the case where I is fixed point free and does not have more than n^1-ϵ cycles.For each ϵ there exists n_0, such that the following holds.Let n>n_0 and suppose that I is a conjugacy class of S_n with at most (1/4-ϵ)n cycles, without fixed points and at most n^1-ϵ 2-cycles. Then I^2 = A_n (Theorem 1.10)The second theorem that they proved that we make use of is the following: For each ϵ>0 there exists n_0>0, such that the following holds. Let I_1,I_2,A be conjugacy classes withμ(I_1),μ(I_2)> e^-n^1/2 - ϵ. 
Suppose additionally that A contains no cycles of length smaller than 2/ϵ. Then I_1I_2⊇ A. Finally we make use of the following result of theirs. Let n be sufficiently large. Then for all m ≥ 4 that divides n we have (m^n/m)^2= A_n. We also make use of the following result due to Vishne. The set (2^n/2)^2 consists of the permutations that have an even number of i-cycles for each i. § PRELIMINARIES FROM THE WORKS OF LARSEN–SHALEV, LARSEN–TIEP, AND LIEBECK–SHALEV §.§ Character bounds using the parameter E(σ) Recall that given a finite group G, we write Ĝ for the set of its irreducible complex characters. Larsen and Shalev <cit.> introduced the parameter E(σ), defined as follows. For σ∈ S_n, let f_σ(i) be the number of i-cycles in its cycle decomposition. Define the orbit growth sequence e_1,e_2,…,e_n via the equality e_1 + ⋯ + e_k := max(log(∑_i=1^k i · f_σ(i))/log n,0), for each 1 ≤ k ≤ n. The function E(σ) is defined by E(σ):= ∑_i=1^n e_i/i. The main result of Larsen and Shalev <cit.> is the following character bound. For any ϵ>0, there exists n_0 ∈ℕ, such that the following holds. Let n>n_0, let χ be an irreducible character of S_n, and let σ∈ S_n. Then |χ(σ)| ≤χ(1)^E(σ) + ϵ. We also make use of the following character bound of Larsen and Tiep <cit.>. For any ϵ>0, there exists n_0 ∈ℕ such that the following holds. Let n>n_0 and suppose that σ∈ A_n satisfies σ^A_n ≠ σ^S_n. Then for every character χ of A_n we have |χ(σ)| ≤χ(1)^ϵ. These bounds combine to yield the following variant of Theorem <ref> for A_n. For any ϵ>0, there exists n_0 ∈ℕ, such that the following holds. Let n>n_0, let χ be an irreducible character of A_n, and let σ∈ A_n. Then |χ(σ)| ≤χ(1)^E(σ) + ϵ. Recall from the representation theory of S_n and A_n that every irreducible character χ of S_n is either irreducible when restricted to A_n or is the sum of two irreducible characters χ_1,χ_2, such that χ_2(σ) = χ_1((12)σ (12)) for all σ∈ A_n. Moreover, any irreducible character of A_n can be obtained from an irreducible character of S_n in one of these two ways. It follows that whenever σ^A_n= σ^S_n, we have χ_1(σ) = χ_2(σ) = χ(σ)/2, and the assertion follows from Theorem <ref>. Otherwise, by Theorem <ref> we have |χ(σ)| ≤χ(1)^ϵ, which implies the assertion. §.§ Upper bounds for E(σ) We now give several simple estimates for E(σ). First we treat the case where σ has t fixed points. For any ϵ>0, there exists n_0 ∈ℕ such that the following holds for all n>n_0. Suppose that σ∈ S_n has t fixed points. Then E(σ)≤ (1 + log_n t)/2. Let e_i be as in the definition of E(σ). We have E(σ) = ∑_i=1^n e_i/i = e_1 + ∑_i=2^n e_i/i ≤ e_1 + ∑_i=2^n e_i/2 = e_1 + (1-e_1)/2 = (1+e_1)/2 = (1 + log_n t)/2. We now treat the case where σ has n^o(1) i-cycles for each `small' i. For any ϵ>0 and any m ∈ℕ, there exist δ>0 and n_0 ∈ℕ such that the following holds. Let n>n_0 and suppose that σ∈ S_n has at most n^δ i-cycles for each i < m. Then E(σ) ≤ 1/m +ϵ. We have E(σ) = ∑_i=1^n e_i/i = ∑_i=1^m-1 e_i/i + ∑_i=m^n e_i/i ≤ ∑_i=1^m-1 (δ + i/log n)/i + ∑_i=m^n e_i/m ≤ δ· 2log(m) + m/log n + 1/m ≤ 1/m + ϵ. Another estimate for E(σ) that we need is the following. For any ϵ>0, there exists n_0 ∈ℕ, such that the following holds for all n>n_0. Let 0<α<1. Suppose that σ∈ S_n has no fixed points and has at most n^α cycles of length at most ⌈ 2/ϵ⌉. Then E(σ)≤α/2+ϵ/2.
We haveE(σ) = ∑_i=1^n e_i/i = ∑_i=2^⌈ 2/ϵ⌉ e_i/i + ∑_i=⌈ 2/ϵ⌉^n e_i/i ≤∑_i=2^⌈ 2/ϵ⌉ e_i/2 + ∑_i=⌈ 2/ϵ⌉ +1^n e_i/2/ϵ≤α/2 + ϵ/2,where the last inequality holds since e_2+...+e_⌈ 2/ϵ⌉≤α. A similar proof yields the following lemma whose easy proof we omit.Let α>0 and suppose that σ∈ S_n has at most n^α cycles. Then E(σ) ≤α.We also use the following lemma, due to Larsen and Shalev.For any ϵ>0, there exists n_0 ∈ℕ such that the following holds for all n>n_0. Let W⊆ S_n be a subset of density ≥ e^-n^α. Then W contains a permutation σ with E(σ) ≤α + ϵ §.§ Squares of conjugacy classes We use several results on squares of conjugacy classes in A_n and in S_n, of Larsen and Shalev <cit.> and of Larsen and Tiep <cit.>. There existsn_0∈ℕ, such that the following holds for all n>n_0. Suppose that σ∈ A_n satisfies σ^A_nσ^S_n. Then (σ^A_n)^2⊇A_n∖{1}. For any ϵ>0, there exists n_0 ∈ℕ such that the following holds for all n>n_0. Suppose that for some σ∈ S_n,τ∈ A_n we have 2E(σ) + E(τ) < 1-ϵ.Then τ∈ (σ^S_n)^2. For any ϵ>0, there exists n_0 ∈ℕ such that the following holds for all n>n_0. Suppose that σ∈ S_n has no fixed points, at most n^1-ϵ 2-cycles and at most (1/4 - ϵ)n cycles overall. Then (σ^S_n)^2=A_n. For integers n,m such that m|n, we denote by (m^n/m) the conjugacy class of all σ∈ S_n that consist of n/m m-cycles.There exists n_0 ∈ℕ such that for any n>n_0 and for any m ≥ 4 that divides n,we have (m^n/m)^2= A_n. We also make use of the following result due to Vishne <cit.>. For any even n ∈ℕ, the set (2^n/2)^2 consists of thepermutations that have an even number of i-cycles for each i.Finally, we shall need the following simple lemma of Larsen and Shalev.For any n∈ℕ, the probability that a random permutation σ∼ S_n has at least r i-cycles is at most 1/r!i^r.§.§ The Witten zeta functionAs was described in the introduction, we apply a result of Liebeck and Shalev <cit.> concerning the Witten zeta function.Recall that the Witten zeta function for a finite group G is defined by ζ(s) = ζ_G(s) = ∑_χ∈Ĝχ(1)^-s. For any ϵ,s>0 there exists n_0 such that for any n>n_0, we have 2-ϵ≤∑_χ∈S_nχ(1)^-s≤ 2+ϵ,1-ϵ≤∑_χ∈A_nχ(1)^-s≤ 1+ϵ.§ FROM THE LEVEL D-INEQUALITY TO BOUNDS FOR THE FINER ISOTYPIC DECOMPOSITIONIn this section we prove Proposition <ref>. Let us recall its statement.Proposition <ref>. For any ϵ>0 there exist δ,n_0>0, such that the following holds for all n>n_0. Let α<1-ϵ and let A⊆ S_n be a δ-spread set of density ≥ e^-n^α. Write g = 1_A/μ(A). Then g^=χ_2^2≤χ(1)^α+ϵ for any χ∈Ŝ_̂n̂.In the proof, we use the level-d inequality of Keevash and Lifshitz for global functions over symmetric groups, as well as standard estimates for the dimensions of the characters.§.§ The level-d inequality of Keevash and Lifshitz Level-d inequalities bound the L_2 norm of certain `chunks' of the orthogonal decomposition of a function, using hypercontractivity. The first level-d inequality was obtained in 1988 by Kahn, Kalai, and Linial <cit.>, for Boolean functions over the discrete cube {-1,1}^n endowed with the uniform measure. It asserts that for any f:{-1,1}^n →{0,1} with 𝔼[f]=α and for any d ≤ 2ln(1/α), the coefficients of the Fourier-Walsh expansion of f (namely, f=∑_S ⊂ [n]f̂(S)χ_S) satisfy ||∑_|S|=df̂(S)χ_S||_2^2 ≤ (2e/d)^d α^2 ln(1/α)^d (see <cit.>). 
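Both quantities recalled in this section are easy to experiment with numerically. The following Python sketch (an illustration of ours, not taken from the paper) computes E(σ) directly from the orbit growth sequence for a few cycle types, and evaluates ζ_{S_n}(s) from the hook length formula for small n; for any fixed s>0 the values drift towards 2, as in the Liebeck–Shalev estimate.

```python
from math import log, factorial

def E(cycle_counts, n):
    """E(sigma) computed from the orbit growth sequence e_1,...,e_n.
    cycle_counts[i] = number of i-cycles of sigma."""
    prefix_prev, total, value = 0.0, 0, 0.0
    for k in range(1, n + 1):
        total += k * cycle_counts.get(k, 0)
        prefix = max(log(total) / log(n), 0.0) if total > 0 else 0.0
        value += (prefix - prefix_prev) / k      # e_k / k
        prefix_prev = prefix
    return value

n = 10**4
print(E({1: n}, n))       # identity: E = 1
print(E({n: 1}, n))       # a single n-cycle: E = 1/n
print(E({2: n // 2}, n))  # fixed-point-free involution: E = 1/2

def partitions(n, max_part=None):
    """Yield all partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def degree(lam):
    """Dimension of the S_n-irreducible labelled by the partition lam (hook length formula)."""
    n = sum(lam)
    conj = [sum(1 for part in lam if part > j) for j in range(lam[0])]
    hooks = 1
    for i, row in enumerate(lam):
        for j in range(row):
            hooks *= (row - j) + (conj[j] - i) - 1
    return factorial(n) // hooks

def witten_zeta_Sn(n, s):
    return sum(degree(lam) ** (-s) for lam in partitions(n))

for n in (6, 10, 14, 18):
    print(n, witten_zeta_Sn(n, 1.0))   # tends to 2 as n grows, for any fixed s > 0
```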
Level-d inequalities turned out to be very useful, and have diverse applications.In <cit.>, Keevash, Lifshitz, Long, and Minzer (see also Khot, Minzer, and Safra <cit.>) showed that level-d inequalities can be obtained in much more general settings under the additional assumption that the function is `global' – i.e., that no restriction of O(1) coordinates can increase its L_2-norm significantly. Filmus, Kindler, Lifshitz, and Minzer <cit.> were the first to use the technique of Keevash et al. to obtain a level-d inequality for global functions over symmetric groups. Here, we use a sharp level-d inequality which was recently proved by Keevash and Lifshitz <cit.>, building upon a sharp version of the inequality of Keevash et al. that was obtained by Keller, Lifshitz, and Marcus <cit.>.In order to state the level-d inequality due to Keevash and Lifshitz <cit.> we need the following terminology, which follows <cit.> in providing a degree decomposition for the symmetric group, which corresponds to the decomposition of the Fourier-Walsh expansion over the discrete cube into `degree levels' f^=d=∑_|S|=df̂(S)χ_S that appears in the original level-d inequality. A dictator U_i→ j is the set of permutations that send i to j. The intersection of d distinct dictators is called a d-umvirate if it is nonempty. The d-umvirates correspond to pairs of d-tuples I,J and we denote by U_I→ J the set of permutations sending the tuple I to the tuple J. The restriction of a function f to a d-umvirate U_I → J is denoted by f_I→ J and is called a d-restriction. We write f_I→ J_p for the L_p-norm of f with respect to the uniform measure on the d-umvirate U_I→ J.A function f is said to be r-global if f_I→ J_2 ≤ r^|I|f_2 for all d-restrictions f_I→ J, for all values of d. A set A is r-global if its indicator function is r-global.Note that a set A is δ-spread if and only if it is n^δ-global.For a partition λ=(λ_1,λ_2,…,λ_t) ⊢ n,the strict level of a representation V_λ that corresponds to λ is n-λ_1. The level of V_λ is the minimum between the strict levels of V_λ and V_λ', where λ' is the conjugate partition of n.The space of matrix coefficients of an irreducible representation V is the space spanned by the functions f_v,φ G→ℂ indexed by v∈ V,φ∈ V^* that are given byf_v,φ(g) = φ g(v). We write W_d for the sum of the spaces of matrix coefficients for all representations of level d, and denote by f^≈ d the projection of f onto W_d.Keevash and Lifshitz proved the following:There exists C>0, such that for any n ∈ℕ and for any r>1, if AS_n is r-global and d≤min(18log(1/μ(A)), 10^-5n), then 1_A^≈ d_2^2≤μ(A)^2 ( C r^4 d^-1log (1/μ(A)) )^d. §.§ Proof of Proposition <ref>Recall that any character χ is the trace of a unique representation ρ, and that we have χ(1)=dim(ρ). We say that the level of a character χ is the level of the unique representation that corresponds to it.In the proof we use the following lower bounds on the dimensions of low level irreducible representations.There exists n_0 ∈ℕ, such that the following holds for all n>n_0. Let d≤ n/200 and let χ be an irreducible character of S_n of level ≥ d. Then χ(1) ≥(n/ed)^d. For any α',ϵ'>0, there exists n_0 ∈ℕ, such that the following holds for all n>n_0. Let d=n/α' and let χ be an irreducible character of S_n such that all rows and columns in the Ferrer diagram of the corresponding representation are of size ≤ d. Then χ(1) ≥ (α'-ϵ')^n.Let A⊆ S_n be a δ-spread set of density ≥ e^-n^α, let g = 1_A/μ(A), and let χ be a character of level d. We may assume w.l.o.g. 
that μ(A)=e^-n^α. We consider two cases: * d ≥min(10^-5n, 1/8n^α). In this case, we may upper bound g^=χ_2 ≤g_2 ≤ e^n^α≤ e^n^1-ϵ, while by Lemma <ref>, for a sufficiently large n we have χ(1) ≥min((n/ed)^d , (200/e)^n/200). This implies that the statement of the proposition holds provided that n_0 is sufficiently large. * d ≤min(10^-5n, 1/8n^α). In this case, we may apply Theorem <ref> to obtain g^= χ_2^2≤μ(A)^2 ( C n^4δ d^-1log (1/μ(A)) )^d ≤( Cd^-1 n^α + 4δ)^d, and by Lemma <ref> the right hand side is smaller than χ(1)^α+ϵ, provided that δ is sufficiently small and n_0 is sufficiently large. This completes the proof. §.§ Bounds for the finer isotypic decomposition over A_n We shall also make use of the following variant of Proposition <ref> for A_n. For any ϵ>0 there exist δ,n_0>0, such that the following holds for all n>n_0. Let α<1-ϵ and let A⊆ A_n be a δ-spread set of density μ_A_n(A)≥ e^-n^α. Write g = 1_A/μ_A_n(A). Then g^=χ_2^2≤χ(1)^α+ϵ for any χ∈A_n. For a partition λ, let us write λ' for the conjugate partition obtained by replacing the roles of the rows and the columns in its Young diagram. Recall that the corresponding characters satisfy χ_λ = χ_λ'·sign. It is well known that all irreducible characters of A_n are obtained from characters of S_n, in one of two possible ways: * Characters that correspond to partitions λ with λ≠λ': In this case, the characters χ_λ and χ_λ' restrict to the same irreducible character of A_n. * Characters that correspond to partitions λ with λ= λ'. In this case, the restriction of χ_λ to A_n splits into the sum of two irreducible characters, which we denote by χ_λ_1 and χ_λ_2, that have the same dimension. To handle the characters of the second type, we note that for any such λ, the level of χ_λ is necessarily ≥ n/2 - 1. Therefore, by Lemma <ref> we have χ_λ_1(1) = χ_λ_2(1)≥ (1/2)(200/e)^n/200, which implies that g^=χ_2^2 ≤g_2^2≤ e^n^α≤χ(1)^α +ϵ, provided that n is sufficiently large with respect to ϵ. We now handle the characters of the first type. Let h be the extension of g to S_n whose value on the odd permutations is 0. Write h= ∑_λ⊢ n h^=χ_λ. Let λ≠λ' and let χ be the restriction of χ_λ to A_n. Then g^=χ = χ(1) g*χ. We would like to write this convolution in terms of convolutions over S_n to which we will be able to apply Proposition <ref>. Let χ̃ = χ_λ + χ_λ'. Then we have χ̃(σ)=2χ(σ) for all σ∈ A_n and χ̃(σ)=0 for all σ∈ S_n ∖ A_n. Therefore, the functions g*χ and h * χ̃ agree on A_n (note that the first convolution takes place in A_n and the second takes place in S_n). Hence, g^=χ = χ(1)g*χ= χ(1) (h* χ_λ + h * χ_λ')|_A_n = (h^=χ_λ +h^=χ_λ')|_A_n. The desired upper bound on g^=χ now follows from the triangle inequality, when applying Proposition <ref> to h. § UPPER BOUNDING SPREAD INDEPENDENT SETS IN NORMAL CAYLEY GRAPHS In this section we prove Theorem <ref>, as well as several related results.
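Before the formal arguments, a toy computation may help to visualize the spreadness condition defined in the introduction (this example is ours, purely illustrative). The Python sketch below computes, for two conjugacy classes of S_6, the largest density increase μ_U(A)/μ(A) over 1- and 2-umvirates: the class of 6-cycles is almost perfectly spread, while the class of transpositions, whose members have many fixed points, concentrates on umvirates by large factors. Controlling exactly this kind of concentration is what the restriction arguments of the following sections are for.

```python
from itertools import permutations, combinations

def cycle_type(p):
    seen, lengths = set(), []
    for i in range(len(p)):
        if i not in seen:
            j, L = i, 0
            while j not in seen:
                seen.add(j); j = p[j]; L += 1
            lengths.append(L)
    return tuple(sorted(lengths, reverse=True))

n = 6
Sn = list(permutations(range(n)))

def max_umvirate_ratio(A, d):
    """Largest mu_U(A) / mu_{S_n}(A) over all d-umvirates U_{I -> J}."""
    mu = len(A) / len(Sn)
    best = 0.0
    for I in combinations(range(n), d):
        countA, countS = {}, {}
        for p in A:
            J = tuple(p[i] for i in I)
            countA[J] = countA.get(J, 0) + 1
        for p in Sn:
            J = tuple(p[i] for i in I)
            countS[J] = countS.get(J, 0) + 1
        for J, cA in countA.items():
            best = max(best, (cA / countS[J]) / mu)
    return best

six_cycles = [p for p in Sn if cycle_type(p) == (6,)]
transpositions = [p for p in Sn if cycle_type(p) == (2, 1, 1, 1, 1)]
for name, A in (("6-cycles", six_cycles), ("transpositions", transpositions)):
    print(name, [round(max_umvirate_ratio(A, d), 2) for d in (1, 2)])
```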
We begin with a proposition that explains how to combine character bounds with hypercontractivity to upper bound the size of spread independent sets. Recall that any class function h:A_n →ℂ can be uniquely represented as a linear combination of irreducible characters: h=∑_χ∈A_n h_χχ. The coefficient of χ in this expansion is denoted by ĥ(χ). Note that if A=σ^S_n for some σ∈ A_n and h=1_A, then for any χ∈A_n, we have ĥ(χ) = μ_A_n(A)χ(σ). For any ϵ >0, there exist n_0 ∈ℕ and δ>0 such that the following holds for all n>n_0 and all 0<α<1-ϵ. Suppose that I,I' are δ-spread subsets of A_n, such that μ_A_n(I),μ_A_n(I') >e^-n^1 - α -ϵ. Suppose additionally that A⊆ A_n is a normal set with 1_A(χ)/μ_A_n(A) < χ(1)^α for every irreducible character χ of A_n. Then the sets I,I' span at least one edge in the Cayley graph Cay(A_n,A). Write h = 1_A/μ_A_n(A). Let T_A be the operator associated with the Cayley graph generated by A, i.e., T_A g = h * g. Let W_χ: = span{gχ}_g∈ A_n be the isotypic component of χ. The operator T_A commutes with the action of A_n× A_n from both sides and since each W_χ is an irreducible A_n× A_n representation appearing in L^2(A_n) exactly once, it follows from Schur's lemma that the restriction of T_A to W_χ is multiplication by a scalar. To compute the scalar it is sufficient to compute T_Aχ. By Frobenius, we therefore obtain that the eigenvalue corresponding to W_χ is given by ĥ(χ)/χ(1). Write f= 1_I/μ_G(I) and g=1_I'/μ_G(I'). We have ⟨ T_A f , g ⟩ = ∑_χ ĥ(χ)/χ(1)⟨f^=χ , g^=χ⟩. By Proposition <ref>, applied with ϵ/2 in place of ϵ, and Theorem <ref>, we therefore have | ⟨ T_A f , g⟩ - 1| ≤∑_χ∈A_n∖{1}χ(1)^α - 1f^=χ_2 g^=χ_2≤∑_χ∈A_n∖{1}χ(1)^-ϵ/2 = o(1). Hence, we have ⟨ T_A f , g ⟩ ≠ 0, provided that n is sufficiently large, which implies that there exists an edge between I and I'. The following theorem follows by combining Proposition <ref> with the results of Larsen–Shalev <cit.> and Larsen–Tiep <cit.> presented in Section <ref>. For any ϵ>0 there exist n_0 ∈ℕ and δ>0, such that the following holds for all n>n_0. Let σ∈ A_n and write E(σ) = α. Then every δ-spread independent set in the Cayley graph Cay(A_n,σ^S_n) has density ≤ e^-n^1-α -ϵ. Let A = σ^S_n. As E(σ)=α, we may apply Theorem <ref> to deduce that for any character χ of A_n, 1_A(χ)/μ_A_n(A)=χ(σ) ≤χ(1)^α+ϵ/2. The assertion now follows from Proposition <ref>, when substituting α+ϵ/2 in place of α and ϵ/2 in place of ϵ. § UPPER BOUNDING THE INDEPENDENCE NUMBER OF CAYLEY GRAPHS WITH MANY FIXED POINTS Now we are ready to present the proofs of Theorems <ref> and <ref>. The theorem follows immediately by combining Theorem <ref> with Lemma <ref>. Let ϵ>0, let δ,n_0 (depending on ϵ) be determined below, and let I be an independent set in Cay(A_n,σ^S_n), where n ≥ n_0+t and σ∈ A_n is a permutation with t fixed points. Assume on the contrary that μ_A_n(I)>max(e^-(n-t)^1/3-ϵ,(n-t)^-δ t). We obtain a contradiction in a three-step argument. Step 1: Reducing to the case t ≤ n^1/3. We use the following observation. Let 1 ≤ℓ≤ t and let σ' be obtained from σ by deleting ℓ of its fixed points. For each ℓ-umvirate τ U, with U the subgroup of all permutations that fix a given set of size ℓ, the set I' = τ^-1I∩ U is independent in the Cayley graph Cay(U,(σ')^U), which is isomorphic to Cay(A_n-ℓ,(σ')^S_n-ℓ).
If t>n^1/3, we may reduce the number of fixed points by applying this process with ℓ = ⌈ t - (n-t)^1/3⌉, choosing an ℓ-umvirate τ U such that μ(I ∩τ U)/μ(U)≥μ_A_n(I). The resulting set I' is independent in the Cayley graph Cay(A_n',(σ')^A_n') with n'=n-ℓ, where the number t'=⌊ (n-t)^1/3⌋ of fixed points of σ' satisfies (n')^1/3-ϵ/2≤ t'≤ (n')^1/3, provided that n_0 is sufficiently large as a function of ϵ. To see that μ_A_n'(I')>max(e^-(n'-t')^1/3-ϵ,(n'-t')^-δ t'), note that for any n and for any t≫ n^1/3-ϵ we have e^-(n-t)^1/3-ϵ≫ (n-t)^-δ t, provided that n_0 is sufficiently large. Hence, we have μ_A_n'(I')≥μ_A_n(I)>max(e^-(n-t)^1/3-ϵ,(n-t)^-δ t) = e^-(n-t)^1/3-ϵ = e^-(n'-t')^1/3-ϵ=max(e^-(n'-t')^1/3-ϵ,(n'-t')^-δ t'), where the last equality holds since t' ≥ (n')^1/3-ϵ/2. Therefore, I' satisfies the `contrary assumption' for (n',t') in place of (n,t). This shows that we may assume w.l.o.g. that t≤ n^1/3. Step 2: Reducing to the case where I is δ-spread. Similarly to the first step, we may also assume that I is δ-spread, as otherwise we may iteratively find ℓ-umvirates in which the density of A is ≥μ(A)n^δℓ until we are stuck. The set I” we obtain at the end of this process is an independent δ-spread set in the Cayley graph Cay(A_n”,(σ”)^A_n”), where σ” has t” fixed points and n”=n-(t-t”). Its measure satisfies μ_A_n”(I”) ≥max(e^-(n”-t”)^1/3-ϵ,(n”-t”)^-δ t”), as in the transition from (n,t) to (n”,t”), the left term remains unchanged and the increase of the right term is less than the density increase by a factor of n^δℓ which we obtain in each ℓ-restriction. This shows that we may assume w.l.o.g. that I is δ-spread. Step 3: Applying Theorem <ref>. Assuming that t ≤ n^1/3 and that I is δ-spread, we can apply Theorem <ref>, with the same value of ϵ, to deduce that μ(I)<e^-n^1/2-log_n t/2-ϵ≤ e^-n^1/3-ϵ, which contradicts the assumption μ_A_n(I)>max(e^-(n-t)^1/3-ϵ,(n-t)^-δ t). This completes the proof (with δ being the same as in Theorem <ref> and n_0 being sufficiently large). § THE GLOBALNESS OF CONJUGACY CLASSES AND THEIR RESTRICTIONS In this section we prove that `large' conjugacy classes of permutations with not-too-many short cycles are global, and that the same holds for their restrictions inside umvirates (under certain additional conditions). In order to state our goal more precisely, we introduce some more terminology. A d-restriction of a function is its restriction to a d-umvirate U_I→ J with |I|=d. A k-chain is a restriction of the form i_1→ i_2→…→ i_k+1 (i.e., i_1 → i_2, i_2 → i_3, …, i_k → i_k+1), where i_1,...,i_k+1 are all different. In other words, a k-chain is the restriction to the k-umvirate U_I→ J where I=(i_1,...,i_k), and J=(i_2,...,i_k+1). We say that a k_1-chain (i_1→ i_2→...→ i_k_1+1) and a k_2-chain (j_1→ j_2→...→ j_k_2+1) are disjoint if all the coordinates i_1,…,i_k_1+1,j_1,…,j_k_2+1 are different. We say that the length of a k_1-chain is k_1. A k-restriction is a k-cycle if it takes the form i_1→ i_2→…→ i_k→ i_1. Every d-restriction can be decomposed into disjoint cycles and k-chains that we call the parts of the restriction. We prove the following lemma, as well as a variant of it (Lemma <ref> below) that will be used in the sequel. There exists n_0>0 such that the following holds for all n>n_0. Let r≥ 25, and let A⊆ S_n be a conjugacy class of density at least e^-n, such that all the permutations in A have at most (r/2)^ℓ ℓ-cycles for each ℓ. Then A is r-global.
In order to prove the lemma we first prove the following claim which calculates the measure of restrictions without cycles.Let A be a normal set. Let d>0 and let A_I→ J be a d-restriction that consists of t chains. Denote the chain lengths of A_I→ J by i_1 - 1,… i_t - 1. Let P be the probability that for a random permutation τ∼ A, and for all 1 ≤ℓ≤ t, the length of the cycle containing ℓ in τ is at least i_ℓ. Then μ(A_I→ J) = μ(A) · P ·[(1-t/n)(1-t/n-1)⋯(1 - t/n+1-|I|)]^-1. Decompose the d-restriction A_I→ J into its chain partsa_11→ a_12→…→ a_1i_1, a_21→…→ a_2i_2,⋮a_t1→…→ a_ti_t.Consider the family of d-umvirates U_σ(I)→σ (J) = σ U_I→ Jσ^-1, for all permutations σ∈ S_n that fix each of a_11,…, a_t1. It is clear that each two non-equal d-umvirates of this form are pairwise disjoint. Moreover, since A is normal, the measure of A inside each such d-umvirate is the same.Without loss of generality, we may assume a_11 = 1,…, a_t1 = t. Observe that A∩ (⋃_σ∈ U_(1,…, t)→ (1,…, t)U_σ(I)→σ(J)) consists of all the permutations in A for which for all 1 ≤ℓ≤ t, the length of the cycle that contains ℓ is ≥ i_ℓ. Hence, we have μ(A)P =μ(A_I→ J)μ(U_I→ J) #{U_σ(I)→σ(J): σ∈ U_(1,…, t)→ (1,…, t)} Therefore, in order to prove the claim, all that remains is computing the orbit of U_I→ J with respect to the action of the group U_(1,…, t)→ (1,… ,t) on S_n by conjugation. By the orbit stabilizer theorem, its size is(n-t)!/(n- i_1 - i_2 -… -i_t)!. As μ(U_I→ J) = [n(n-1)·…· (n-i_1-i_2-…-i_t+t+1)]^-1, the claim follows by rearranging.Given a restriction A_I→ J of A, we view it as a composition of two restrictions, denoted by I_1→ J_1 and I_2→ J_2, where the restriction I_1→ J_1 consists of all the cycle parts of I→ J, and the restriction I_2→ J_2 consists of the chain parts. Let us consider each restriction separately.Density increase in a restriction consisting of cycles. By the orbit stabilizer theorem, if σ has f_σ(i) cycles of length i for each i, then the density of its conjugacy class σ^S_n in S_n is 1/∏(i^f_σ(i)· f_σ(i)!). Therefore, when removing a cycle of size ℓ from σ, the measure of the corresponding conjugacy class increases by a factor of ℓ· f_σ(ℓ). By assumption, we have f_σ(ℓ) ≤ (r/2)^ℓ, and hence, when deleting an ℓ-cycle from σ, the density of the corresponding conjugacy class increases by a factor of ≤ (ℓ^1/ℓ r/2)^ℓ. Set A' = A_I_1→ J_1, and write k= |I_1|, n' = n-k. We obtain that μ(A'_I_1→ J_1)≤ r^kμ(A), by sequentially removing cycles from σ and taking into account the measure increment at each step. Density increase in a restriction consisting of chains. Denote the lengths of the chains in the restriction by i_1-1,i_2-1,…,i_t-1. Note that we may assume that (i_1-1) + … (i_t-1) < n/3, for otherwise the lemma holds trivially. Hence, we have 1 - t/n+1-|I|>1/2, and consequently,[(1-t/n)(1-t/n-1)⋯(1 - t/n+1-|I|)]^-1≤ 2^n.Therefore, the upper bound μ(A_I→ J) = μ(A'_I_2→ J_2) ≤ 2^|I_2|μ(A') ≤ r^|I|μ(A) follows immediately from Claim <ref>, applied with A' in place of A.There exist n_0 ∈ℕ and C>0, such that the following holds for all n>n_0 and all r≥ 20. Let σ∈ S_n be a permutation that has at most r/20 fixed points and 2-cycles, and at most (r/20)^ℓ/3 ℓ-cycles for each ℓ≥ 3. Suppose in addition that A = σ^S_n has density ≥ e^-√(n/C). Let d ≤√(n)/rC and suppose that A_I→ J is a d-restriction of A whose parts consist only of 1-chains and 2-chains. Then A_I→ J is r-global. Let A be a conjugacy class that satisfies the assumptions of the lemma, and let A_I→ J be a d-restriction of A. 
Let A_I'→ J' be an ℓ-restriction of A_I→ J. Our goal is to show that μ(A_I'→ J') ≤ r^ℓμ(A_I→ J).Similarly to the proof of Lemma <ref>, we view the restriction I' → J' as a composition of two restrictions, denoted by I_1→ J_1 and I_2→ J_2, where the restriction I_1→ J_1 consists of all the cycle parts, and the restriction I_2→ J_2 consists of the chain parts. We will show that μ(A_I' → J')/μ(A_I → J)= μ(A_I_1 → J_1)/μ(A_(I ∩ I_1) → (J ∩ J_1))·μ(A_I' → J')/μ(A_I_1 → J_1)/μ(A_I → J)/μ(A_(I ∩ I_1) → (J ∩ J_1))≤ r^ℓ,by considering each restriction separately.Density increase in the restriction I_1→ J_1 consisting of cycles.Here, we have to bound the density increase μ(A_I_1 → J_1)/μ(A_(I ∩ I_1) → (J ∩ J_1)). It will be more convenient for us to bound the density increase μ(A_I_1 → J_1)/μ(A) instead. To see that this is sufficient, note that by Claim <ref>, we have μ(A_(I ∩ I_1) → (J ∩ J_1))≥μ(A)/2. Indeed, denoting the lengths of the chains in the restriction (A_(I ∩ I_1) → (J ∩ J_1)) by i_1-1,i_2-1,…,i_s-1, the claim implies that μ(A_(I ∩ I_1) → (J ∩ J_1))≥μ(A) · P, where P is the probability that for a randomly chosen τ∼ A, for all j=1,…,s, the length of the cycle containing j in τ is at least i_j. By assumption, i_j ≤ 2 for all j, the permutations in A have at most 3r/20 elements in cycles of length ≤ 2, and we have s ≤ d ≤n/rC. Hence, P ≥(1-3r/20n)^s ≥(1-3r/20n)^√(n)/rC>1/2,provided that C is sufficiently large.In order to bound μ(A_I_1 → J_1)/μ(A), we observe that as the `old' restriction I → J consists only of 1-chains and 2-chains, each cycle of length l ≥ 3 in I_1 → J_1 contains at least l/3 elements from the `new' restriction I_1 ∖ I → J_1 ∖ J. Similarly, each cycle of length 1 or 2 in I_1 → J_1 contains at least one element from the restriction I_1 ∖ I → J_1 ∖ J. As was shown in the proof of Lemma <ref>, the density increase when removing a single cycle of length l from the conjugacy class of σ∈ S_n is at most l · f_σ(l). By assumption, we have f_σ(l) ≤ (r/20)^l/3 for all l ≥ 3, and also f_σ(l) ≤ (r/20)^l/2 for l=2 and f_σ(l) ≤ (r/20)^l for l=1. It follows that the density increase when removing a cycle that contains l' `new' coordinates is at most 3l' · (r/20)^l'≤ (r/8)^l'. Since the number of `new' coordinates is |I_1 ∖ I| ≤ℓ, by sequentially removing cycles from σ and taking into account the measure increment at each step, we obtain μ(A_I_1 → J_1)/μ(A_(I ∩ I_1) → (J ∩ J_1))≤ 2 ·μ(A_I_1 → J_1)/μ(A)≤ 2(r/8)^ℓ≤ r^ℓ/4.Density increase in the restriction I_2 → J_2 consisting of chains. Here, we have to bound the ratio between the density increases of the restrictions A_I_1 → J_1→ A_I_1 ∪ I_2 → J_1 ∪ J_2 andA_(I ∩ I_1) → (J ∩ J_1)→ A_I → J. As these restrictions consist only of chains, we can estimate and compare their density increases using Claim <ref>. As the restriction I→ J consists only of 1 chains and 2 chains, the value P that corresponds to it in Claim <ref> is at least 1/2. Therefore, we may assume that ℓ=|I'∖ I|≤ 4√(n)/C as otherwise we have μ(A_I'→ J')≤ 1 ≤ 4^ℓ1/2/μ(A) ≤ 4^ℓμ(A_I→ J). We also have |I|≤ 4√(n)/C by hypothesis. Denote the chain lengths of the restrictions I_2 → J_2 and (I ∩ I_2) → (J ∩ J_2) by i'_1-1,…,i'_s'-1 and i”_1-1,…,i”_s”-1, respectively, where i'_j ≥ i”_j for any 1 ≤ j ≤ s” and s' ≥ s”. Note that s”≤ |I ∩ I_2| ≤4√(n)/C and that s'-s”≤ |I_2 ∖ I| ≤ℓ≤4√(n)/C. 
Let n'=n-|I_1|≥ n-d-ℓ and n”=n-|I∩ I_1| ≥ n-d. As the corresponding value of P for the restriction I∩ I_2→ J∩ J_2 is also ≥1/2, we may apply Claim <ref> to obtain that μ(A_I' → J')/μ(A_I_1 → J_1)/μ(A_I → J)/μ(A_(I ∩ I_1) → (J ∩ J_1))≤ 2(1-s”/n”)(1-s”/n”-1) ⋯(1-s”/n”+1-|I ∩ I_2|)/(1-s'/n')(1-s'/n'-1)⋯(1-s'/n'+1-|I_2|)≤ 4, provided that C is sufficiently large. Combining this with (<ref>) and (<ref>) completes the proof of the lemma. Claim <ref> also implies that an arbitrary ℓ-restriction without cycles can increase the measure by at most a factor of 10^ℓ. Let A_I”→ J” be the restriction obtained by removing all the chains from A_I'→ J'. Let t_1 be the number of 1-chains of A_I'→ J' closed by A_I”→ J”, let t_2 be the number of 2-chains closed in it and ℓ' = |I”|-t_1-2t_2. Then it is sufficient to prove that μ(A_I”→ J”) ≤(r/2)^ℓ'μ(A). As in the previous lemma, each removal of an ℓ-cycle increases the measure of a conjugacy class σ^S_n by a factor of ℓ· f_σ(ℓ). The lemma now follows by sequentially removing cycles from σ according to the restriction A_I”→ J” while keeping track of the density increment at each step. § PROOF OF THEOREMS <REF>, <REF>, AND <REF> In this section we prove Theorem <ref>, which states that for a sufficiently large n, for any σ∈ S_n with less than n^2/5-ϵ cycles we have (σ^S_n)^2=A_n. Then, we deduce from it Theorems <ref> and <ref>. The proof of Theorem <ref> proceeds in two stages. First, we show that we may strengthen the hypothesis of the theorem by adding the assumption that σ has only a few short cycles, and at the same time weaken the assertion to claiming that (σ^S_n)^2 contains any fixed-point free τ∈ A_n. Afterwards, we prove the `reduced' statement. Formally, in Sections <ref> and <ref> we show that it is sufficient to prove the following lemma. For any ϵ >0 there exists n_0 ∈ℕ such that the following holds for all n>n_0. Let τ∈ A_n be a permutation with no fixed points. Suppose that σ∈ S_n has at most n^2/5 -ϵ cycles overall and less than 10 cycles of length ℓ for each 2 ≤ℓ≤log n. Then τ∈(σ^S_n)^2. The proof of Lemma <ref> is presented in Section <ref>. The proofs of Theorems <ref> and <ref> are presented in Section <ref>. §.§ Explicit computations For σ∈ S_m and τ∈ S_n-m, we write σ⊕τ for the element in S_n obtained by letting σ act on the first m elements and τ act on the last n-m elements. For a conjugacy class C_1 of S_m and a conjugacy class C_2 of S_n-m, we write C_1⊕ C_2 for the conjugacy class obtained by concatenating their cycle decompositions. Let C_1,C'_1,C_1” be conjugacy classes of S_m with C_1”⊆ C_1 · C_1', and let C_2,C_2',C_2” be conjugacy classes of S_n-m with C_2”⊆ C_2 · C_2'.
Then the set (C_1⊕ C_2)· (C_1'⊕ C_2') contains C_1”⊕ C_2”. Let π_1 ∈ C_1” and write π_1 = σ_1τ_1 for σ_1∈ C_1,τ_1 ∈ C_1'. Let π_2∈ C_2” and let σ_2,τ_2 be defined similarly. We have (σ_1⊕σ_2)(τ_1⊕τ_2) = (π_1 ⊕π_2). There exists n_0 ∈ℕ such that the following holds for any n>n_0. Let r,m be integers dividing n, with r > 1 and n/m even. Then (r^n/r)^2⊇ (m^n/m). Let B=(r^n/r). For r ≥ 4, by Theorem <ref> we have B^2 = A_n. For r=2, we may apply Theorem <ref> which says that B^2 contains all the permutations that have an even number of cycles of each length. As n/m is even by hypothesis, this proves the claim. It now remains to treat the case r=3. For all m ≥ 4 we may apply Theorem <ref> to prove our assertion, as E(m^n/m)=1/m by definition. The case m=3 is straightforward, as when squaring a permutation of cycle type (3^n/3), we obtain a permutation of the same cycle type. Finally, when m=2 and r=3, we use the fact that in S_12 we have (2,7,5)(3,8,6)(1,9,4)(12,11,10) · (1,2,3)(7,4,11)(8,5,12)(9,6,10) = (1,7)(2,8)(3,9)(4,10)(5,11)(6,12). We now write (3^n/3) = C_1⊕⋯⊕ C_n/12, where each C_i is the conjugacy class of (1,2,3)(4,5,6)(7,8,9)(10,11,12), and write (2^n/2) = D_1⊕⋯⊕ D_n/12, where each D_i is the conjugacy class of (1,2)(3,4)(5,6)(7,8)(9,10)(11,12). (Note that in this case, n is indeed divisible by 12 since by assumption, r=3 divides n and n/m=n/2 is even). Lemma <ref> now completes the proof. §.§ Reducing to Lemma <ref> In this subsection we reduce the statement of Theorem <ref> to the statement of Lemma <ref>. Our goal is to show that for a sufficiently large n, for any σ∈ S_n with at most n^2/5 - ϵ cycles, the conjugacy class I=σ^S_n satisfies I^2 = A_n. Equivalently, we have to show that I^2∩τ^S_n ≠ ∅ for each τ∈ A_n. Fix such a τ and write A=τ^S_n. We split the proof into two cases: * Case 1: The number of fixed points of σ is at most the number of fixed points of τ. * Case 2: The number of fixed points of σ is larger than the number of fixed points of τ. Case 1: σ has no more fixed points than τ. Write t for the number of fixed points in σ. We may restrict both I and A to the t-umvirate U = U_[t]→ [t] to obtain the conjugacy classes I', A' obtained by removing t fixed points from σ,τ. It is clearly sufficient to show that (I')^2⊇ A'. We may therefore assume that σ is fixed-point free. As σ has less than n^2/5 cycles overall, the assertion now follows from Theorem <ref>. Case 2: σ has more fixed points than τ. By the same argument as in Case 1, we may assume without loss of generality that τ has no fixed points. By Lemma <ref>, we have E(σ)≤ 2/5 - ϵ. Theorem <ref> now completes the proof if E(τ)<1/5. Suppose on the contrary that E(τ) ≥ 1/5. By Lemma <ref>, applied with m=6 and ϵ=1/60, this implies that for some i ≤ 5, τ has at least n^δ i-cycles, for some explicit δ>0. (Actually, this holds for all n>n_0, but we may absorb this into our assumption that n is sufficiently large). Suppose that σ has at least 10 ℓ-cycles for some 2 ≤ℓ≤log n, as otherwise we are done by Lemma <ref>. We may write σ = σ_1 ⊕σ_1' and τ = τ_1 ⊕τ_1', where τ_1' consists of 2ℓ i-cycles and σ_1' consists of 2i ℓ-cycles.
(Note that as i ≤ 5, σ contains at least 10 ≥ 2i ℓ-cycles, and as ℓ≤log n, τ contains at least n^δ≥ 2ℓ i-cycles, assuming n is sufficiently large. Hence, the decomposition is possible). By Lemma <ref>, we have ((σ'_1)^S_n)^2⊇ (τ'_1)^S_n. Hence, by Lemma <ref>, it is sufficient to show that (σ_1^S_n)^2⊇τ_1^S_n. We can repeat the deletion process to obtain a sequence of restrictions σ_1,…, σ_j and τ_1,…, τ_j, until either E(τ_j)<1/5 or σ_j has less than 10 ℓ-cycles for all 2 ≤ℓ≤log n. (Note that as σ contains at most n^2/5 cycles, the process terminates when σ_j,τ_j are permutations on at least n-10n^2/5log n coordinates, and thus for all 1 ≤ l ≤ j we have E(σ_l)<2/5-ϵ/2, provided that n is sufficiently large). In the former case, we are done by Theorem <ref>. In the latter case, we are done by Lemma <ref>. This completes the proof. §.§ Proving Lemma <ref> The proof consists of four steps. Step 1: Reducing to the case where τ has many short cycles. If τ has at most n^2/5 - ϵ/3 cycles of length less than 10/ϵ, then by Lemma <ref> we have E(τ) ≤ 1/5 - ϵ/3 (provided that n is sufficiently large). As E(σ) ≤ 2/5 -ϵ by Lemma <ref>, Theorem <ref> implies that τ∈(σ^S_n)^2, completing the proof. Hence, we may assume that τ has more than n^2/5 - ϵ/3 cycles of length less than 10/ϵ. In particular, there exists m≤10/ϵ, such that τ has at least (ϵ/10)· n^2/5 - ϵ/3 > n^2/5-ϵ/2 m-cycles. We fix such an m and proceed with it. We note that this step, which allows us to assume that τ has more short cycles of a fixed length than the total number of fixed points of σ, is the only step where we crucially use the bound n^2/5-ϵ on the number of cycles in σ. The other steps can be adapted to work with up to n^1/2-ϵ cycles in σ. Step 2: Removing almost all fixed points of σ by restrictions. We perform a sequence of 2m-restrictions intended for removing almost all fixed points of σ, in exchange for removing m-cycles of τ. The restrictions are of the form I_S'→ T', I_S'→ W', and A_T'→ W', for appropriately chosen sets S',T',W'. The way in which these restrictions are used is explained in the next step. Assume for simplicity that m is even. Each 2m-restriction involves 4m coordinates denoted by x_1,x_2,…,x_m,y_1,…,y_m,x'_1,…,x'_m,y'_1,…,y'_m, where x_i=y_i and x'_i=y'_i for all odd i, and except for this, all the coordinates are pairwise distinct.
We define the restrictions by setting S'=(x_1,x_2,…,x_m,x'_1,x'_2,…,x'_m), T'=(y_1,y_2,…,y_m,y'_m,y'_1,…,y'_m-1), W'=(y_m,y_1,…,y_m-1,y'_1,y'_2,…,y'_m). As a result, each of the restrictions I_S'→ T', I_S' → W' consists of m/2 1-cycles, m/2 1-chains and m/2 2-chains, while the restriction A_T' → W' consists of the two m-cycles (y_1,y_2,…,y_m,y_1) and (y'_1,y'_2,…,y'_m,y'_1). We perform s=⌊2f_σ/m⌋ such 2m-restrictions, where f_σ is the number of fixed points of σ. As a result, all fixed points of σ, except for at most m/2-1, are removed. (Note that we do not `get stuck' on the side of A, since the number of m-cycles in τ is much larger than the number of fixed points of σ, bounded by n^2/5-ϵ). We let I_S→ T, I_S→ W, and A_T→ W be the sets obtained at the end of the process. Step 3: Reducing to edges between vertex sets in a Cayley graph. First, we perform a simple shifting procedure which allows us to `get rid' of the coordinates in S,T, and W. We let π_1 be the permutation that fixes the set of coordinates not appearing in W, and sends the tuple W to the tuple T. The permutation π_1 consists of 2s m-cycles on the elements appearing in W. Let π_2 be an arbitrary permutation that sends S to W. Consider the sets B_1 = π_2 I π_1, B_2 = π_2 I, B_3= A π_1. As (B_2)^-1 B_1 = I^-1I π_1 and I^-1I=I^2, it is sufficient to prove that (B_2)^-1 B_1 has a non-empty intersection with B_3 = Aπ_1. In fact, we show that (B_2^-1)_W→ W(B_1)_W→ W has a nonempty intersection with (B_3)_W→ W. Assume without loss of generality that W = (n-2sm+1,…, n) and identify S_n-2ms with the set of permutations in S_n fixing W. Then (Aπ_1)_W→ W is the conjugacy class (τ')^S_n-2ms of S_n-2ms, obtained by deleting 2s m-cycles from τ. Our goal is to show that the sets (B_2)_W→ W, (B_1)_W→ W span an edge in the Cayley graph Cay(S_n-2ms, (τ')^S_n-2ms). Furthermore, we can reduce the problem to A_n-2ms (i.e., assume w.l.o.g. that B_2,B_3 are contained in A_n-2ms by multiplying all odd permutations in B_2 by some fixed permutation and multiplying all odd permutations in B_3 by its inverse). Step 4: Completing the proof using Proposition <ref>. The sets (B_1)_W→ W, (B_2)_W→ W are shifts of the sets I_S→ T, I_S→ W, and therefore inherit their spreadness.
In order to apply Proposition <ref>, we establish the δ-spreadness of I_S→ T and I_S→ W. We may view the restrictions I_S → T and I_S → W as a composition of two restrictions – a restriction that removes all fixed points except for at most m/2-1, and a restriction that consists only of 1-chains and 2-chains. By the assumption on σ, this allows us to apply Lemma <ref>, with a constant r>max(10m,200), to deduce that the sets B_2 and B_3 are r-global. It follows that B_2,B_3 are δ-spread for an arbitrarily small δ>0, provided that n is sufficiently large. As follows from Claim <ref>, a restriction that consists of 1-cycles, 1-chains and 2-chains cannot decrease the measure of a conjugacy class by more than a factor of 2, and hence, we have μ(I'),μ(I”) ≥ e^-n^2/5-2ϵ. Furthermore, τ' is fixed-point free, and hence, by Lemma <ref> we have E(τ')≤ 1/2. Consequently, by Theorem <ref> we have 1_A'(χ)/μ(A')=χ(τ') < χ(1)^1/2+ϵ, for any χ∈A_n-2ms, provided that n is sufficiently large. Hence, Proposition <ref>, applied with α=1/2+ϵ, implies that the sets I',I” span an edge in the Cayley graph Cay(A_n-2ms, A'), completing the proof. §.§ Proving Theorems <ref> and <ref> §.§.§ Pseudo-conjugacy classes We say that a set W is a pseudo-conjugacy class if every σ,τ∈ W have the same number of ℓ-cycles for each ℓ≤log n. Let A be a normal set with μ(A)≥ e^-n^2/5 -ϵ. Then A can be decomposed as the union of at most n^log n pseudo-conjugacy classes. Consequently, it contains a pseudo-conjugacy class A' of density ≥μ(A)/n^log n. Provided that n is sufficiently large, we have μ(A') ≥ e^-n^2/5 -ϵ/2. Hence, it is sufficient to verify that the argument for proving Theorem <ref> carries through smoothly when conjugacy classes are replaced by pseudo-conjugacy classes, which are a union of (possibly many) conjugacy classes. The adaptations for Section <ref> are straightforward, as pseudo-conjugacy classes W can also be decomposed as σ⊕ W', provided that σ does not contain a cycle of length ≥log n. The argument of Section <ref> (namely, reducing to Lemma <ref>) consists of two parts. The first part handles the case where σ has no more fixed points than τ and applies Theorem <ref>. This part can be adapted by looking at a single conjugacy class I=σ^S_n⊂ A, such that σ has at most (1/4-ϵ)n cycles overall. (Such a σ exists in A, since the total measure of all permutations that have more than (1/4-ϵ)n cycles is much smaller than μ(A)). Theorem <ref> implies A^2 ⊇ (σ^S_n)^2 ⊇ A_n, as required. The second part handles the case where σ has more fixed points than τ and uses a combination of Lemma <ref> and Theorem <ref>, along with a restriction process. The restriction process can be applied to A without change (as it only concerns cycles of size ≤log n). Lemma <ref> can be replaced by Lemma <ref>, which implies that A contains a permutation σ with E(σ)≤ 2/5-ϵ/2, allowing us to apply Theorem <ref> to σ^S_n⊆ A. Finally, in the proof of Lemma <ref>, two steps need adaptation. The first step of the proof, which handles the case when τ has only a few short cycles and applies Theorem <ref>, can be adapted by using Lemma <ref>, just like above. The last step of the proof applies Lemma <ref>, which is stated for a single conjugacy class, but it generalizes straightforwardly to our setting, as the density increment when closing a cycle can be computed inside each conjugacy class included in A separately. For cycles of length ≤log n the argument stays as is.
When closing a cycle of length ≥log n, the resulting density increment is of a factor ≤ n^2, and therefore it does not contradict r-globalness when r is a sufficiently large constant. The rest of the proof translates to our setting almost verbatim. To demonstrate that the analogue of the proof of Lemma <ref> carries through to pseudo-conjugacy classes. We note that if A is a pseudo conjugacy class of measure ≥ e^-n^2/5-ϵ, then all the elements of A have ≤ n^2/5 - ϵ/2 fixed points by Lemma <ref>. Moreover, τ∈ A^2 for every τ with E(τ)≤ 1/5-ϵ/3 by an application of Lemma <ref> with ϵ/6 instead of ϵ in conjuction with Theorem <ref>.In the proof of Theorem <ref> we use the following standard fact regarding the cycle structure of random permutations. For σ∈ S_n, denote by C(σ) the total number of cycles in σ. For any n ∈ℕ and any 0 ≤ m ≤ n, we have _σ∼ S_n[C(σ)=m]≤(2log(n))^m-1/(m-1)!.Let A be a normal set with μ(A)≥ e^-n^2/5 -ϵ. We claim that for a sufficiently large n, A contains a conjugacy class C=σ^S_n with μ(C)≥ e^-n^2/5 -ϵ/3. Once we show this, the assertion of the theorem follows by applying Theorem <ref> to C.It is clearly sufficient to show that for a sufficiently large n, the union of all conjugacy classes C'=(σ')^S_n with μ(C')<e^-n^2/5-ϵ/3 has measure < e^-n^2/5-ϵ. Recall that μ(C')=[∏_i=1^n (i^f_σ'(i)· f_σ'(i)!)]^-1. Hence, by taking logarithms, the assumption μ(C')<e^-n^2/5-ϵ/3 implies ∑_i=1^n f_σ'(i)log(i)+ f_σ'(i)log(f_σ'(i)) ≥ n^2/5-ϵ/3,and subsequently, C(σ')=∑_i=1^n f_σ'(i) ≥ n^2/5-2ϵ/3, provided that n is sufficiently large. By Proposition <ref>, the probability that a random σ' satisfies this condition is less than e^-n^2/5-ϵ, provided that n is sufficiently large. The assertion follows. when a conjugacy class is of measure smaller then e^-n^α-ϵ that means e^n^α-ϵ≤∏(i^f_σ(i)· f_σ(i)!). We would like to bound the size of the union of all such conjugacy classes. After taking log we get∑ (f_i)log(i) ≥ n^α-ϵ or ∑ f_ilog(f_i) ≥ n^α-ϵ, which both actually mean ∑ (f_i) ≥ n^α-ϵ'. We get by Theorem <ref> that the probability that a permutation is in that range is smaller then e^-n^α-ϵ", which means that the union of all the conjugacy classes of measure at moste^-n^α-ϵ, is of measure at moste^-n^α-ϵ". That means that Theorems <ref> and <ref> are equivalent. Let A be a normal subset of A_n of density ≥ e^-n^2/5 -ϵ. If A is a normal subset of S_n as well, then the statement follows from Theorem <ref>. Otherwise, A contains a permutation σ with σ^A_nσ^S_n, in which case the statement follows from Theorem <ref>. plain | http://arxiv.org/abs/2310.18107v1 | {
"authors": [
"Nathan Keller",
"Noam Lifshitz",
"Ohad Sheinfeld"
],
"categories": [
"math.GR",
"math.CO",
"math.RT"
],
"primary_category": "math.GR",
"published": "20231027124746",
"title": "Improved covering results for conjugacy classes of symmetric groups via hypercontractivity"
} |
Addressing GAN Training Instabilities via Tunable Classification Losses Monica Welfert*, Gowtham R. Kurri*, Kyle Otstot, Lalitha Sankar * Equal contribution This work is supported in part by NSF grants CIF-1901243, CIF-1815361, CIF-2007688, DMS-2134256, and SCH-2205080. M. Welfert, K. Otstot and L. Sankar are with the School of Electrical, Computer, and Energy Engineering, Arizona State University, Tempe, AZ 85281 USA (email: {mwelfert, lsankar, kotstot}@asu.edu). Gowtham R. Kurri was with the School of Electrical, Computer and Energy Engineering at Arizona State University at the time the work was done. He is now with the Signal Processing and Communications Research Centre at International Institute of Information Technology, Hyderabad, India (e-mail: [email protected]). January 14, 2024
Generative adversarial networks (GANs), modeled as a zero-sum game between a generator (G) and a discriminator (D), allow generating synthetic data with formal guarantees. Noting that D is a classifier, we begin by reformulating the GAN value function using class probability estimation (CPE) losses. We prove a two-way correspondence between CPE loss GANs and f-GANs which minimize f-divergences. We also show that all symmetric f-divergences are equivalent in convergence. In the finite sample and model capacity setting, we define and obtain bounds on estimation and generalization errors. We specialize these results to α-GANs, defined using α-loss, a tunable CPE loss family parametrized by α∈(0,∞]. We next introduce a class of dual-objective GANs to address training instabilities of GANs by modeling each player's objective using α-loss to obtain (α_D,α_G)-GANs. We show that the resulting non-zero sum game simplifies to minimizing an f-divergence under appropriate conditions on (α_D,α_G). Generalizing this dual-objective formulation using CPE losses, we define and obtain upper bounds on an appropriately defined estimation error. Finally, we highlight the value of tuning (α_D,α_G) in alleviating training instabilities for the synthetic 2D Gaussian mixture ring as well as the large publicly available Celeb-A and LSUN Classroom image datasets. generative adversarial networks, CPE loss formulation, estimation error, training instabilities, dual objectives. § INTRODUCTION Generative adversarial networks (GANs) have become a crucial data-driven tool for generating synthetic data. GANs are generative models trained to produce samples from an unknown (real) distribution using a finite number of training data samples.
They consist of two modules, a generator G and a discriminator D, parameterized by vectors θ∈Θ⊂ℝ^n_g and ω∈Ω⊂ℝ^n_d, respectively, which play an adversarial game with each other. The generator G_θ maps noise Z∼ P_Z to a data sample in 𝒳 via the mapping z↦ G_θ(z) and aims to mimic data from the real distribution P_r. The discriminator D_ω takes as input x∈𝒳 and classifies it as real or generated by computing a score D_ω(x)∈[0,1] which reflects the probability that x comes from P_r (real) as opposed to P_G_θ (synthetic). For a chosen value function V(θ,ω), the adversarial game between G and D can be formulated as a zero-sum min-max problem given by=1muinf_θ∈Θsup_ω∈Ω V(θ,ω).Goodfellow et al. <cit.> introduce the vanilla GAN for which=2mu =0mu V_VG(θ,ω)=𝔼_X∼ P_r[logD_ω(X)]+𝔼_X∼ P_G_θ[log(1-D_ω(X))].For this V_VG, they show that when the discriminator class {D_ω}_ω∈Ω is rich enough, (<ref>) simplifies to minimizingthe Jensen-Shannon divergence <cit.> between P_r and P_G_θ. Various other GANs have been studied in the literature using different value functions, includingf-divergence based GANs called f-GANs <cit.>, IPM based GANs <cit.>, etc.Observing that the discriminator is a classifier, recently, in <cit.>, we show that the value function in (<ref>) can be written using a class probability estimation (CPE) loss ℓ(y,ŷ) whose inputs are the true label y∈{0,1} and predictor ŷ∈[0,1] (soft prediction of y) as V(θ,ω) =𝔼_X∼ P_r[-ℓ(1,D_ω(X))]+𝔼_X∼ P_G_θ[-ℓ(0,D_ω(X))]. We further introduce α-GAN in <cit.> using the tunable CPE loss α-loss <cit.>, defined for α∈(0,∞] asℓ_α(y,ŷ)α/α-1(1-yŷ^α-1/α-(1-y)(1-ŷ)^α-1/α),and show that this α-GAN formulation recovers various f-divergence based GANs including the Hellinger GAN <cit.> (α=1/2), the vanilla GAN <cit.> (α=1), and the Total Variation (TV) GAN <cit.> (α=∞). Further, for a large enough discriminator class, we also show that the min-max optimization for α-GAN in (<ref>) simplifies to minimizing the Arimoto divergence <cit.>. In <cit.>, we also show that the resulting Arimoto divergences are equivalent in convergence.While each of the abovementioned GANs have distinct advantages, they continue tosuffer from one or more types of training instabilities, including vanishing/exploding gradients, mode collapse, and sensitivity to hyperparameter tuning. In <cit.>, Goodfellow et al. note that the generator's objective in the vanilla GAN can saturate early in training (due to the use of the sigmoid activation) when D can easily distinguish between the real and synthetic samples, i.e., when the output of D is near zero for all synthetic samples, leading to vanishing gradients. Further, a confident D induces a steep gradient at samples close to the real data, thereby preventing G from learning such samples due to exploding gradients.To alleviate these, <cit.> proposes a non-saturating (NS) generator objective:V_VG^NS(θ,ω)=𝔼_X∼ P_G_θ[-logD_ω(X)]. This NS version of the vanilla GANmay be viewed as involving different objective functions for the two players (in fact, with two versions of the α=1 CPE loss, i.e., log-loss, for D and G). However, it continues to suffer from mode collapse <cit.> due to failure to converge and sensitivity to hyperparameter initialization (e.g. learning rate) because of large gradients. While other dual-objective GANs have also been proposed(e.g., Least Squares GAN (LSGAN) <cit.>, RényiGAN <cit.>, NS f-GAN <cit.>, hybrid f-GAN <cit.>), few have successfully addressed the landscape of training instabilities. 
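As a concrete illustration of the CPE-loss value function and the α-loss family introduced above, the following numpy sketch (illustrative only; the toy discriminator scores and the function names are assumptions, not code from the cited works) evaluates ℓ_α(y,ŷ), forms a sample estimate of V_α(θ,ω), and checks the α→1 (log-loss, i.e., vanilla GAN) and α=∞ (soft 0-1 loss) endpoints.

import numpy as np

def alpha_loss(y, y_hat, alpha):
    # alpha-loss l_alpha(y, y_hat) for binary labels y in {0,1} and soft prediction y_hat in (0,1);
    # continuous extensions at alpha = 1 (log-loss) and alpha = inf (soft 0-1 loss).
    p = np.where(y == 1, y_hat, 1.0 - y_hat)   # probability assigned to the true label
    if alpha == 1:
        return -np.log(p)
    if np.isinf(alpha):
        return 1.0 - p
    return alpha / (alpha - 1.0) * (1.0 - p ** ((alpha - 1.0) / alpha))

def value_fn_estimate(d_real, d_fake, alpha):
    # Sample estimate of V_alpha(theta, omega) = E_Pr[-l_alpha(1, D)] + E_PG[-l_alpha(0, D)].
    return -np.mean(alpha_loss(1, d_real, alpha)) - np.mean(alpha_loss(0, d_fake, alpha))

rng = np.random.default_rng(0)
d_real = rng.uniform(0.55, 0.95, size=1000)   # hypothetical discriminator scores on real samples
d_fake = rng.uniform(0.05, 0.45, size=1000)   # hypothetical scores on generated samples

print(value_fn_estimate(d_real, d_fake, alpha=1.0))      # vanilla GAN value (log-loss)
print(value_fn_estimate(d_real, d_fake, alpha=1.0001))   # close to the alpha -> 1 limit
print(value_fn_estimate(d_real, d_fake, alpha=np.inf))   # difference of mean scores minus 1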
Recent results have shown that α-loss demonstrates desirable gradient behaviors for different α values <cit.>. These results also assure learning of robust classifiers that can reduce the confidence of D (a classifier); this, in turn, can allow G to learn without gradient issues. More broadly, by using different loss-based value functions for D and G, we can fully exploit this varying gradient behavior. To this end, in <cit.> we introduce a different α-loss objective[Throughout the paper, we use the terms objective and value function interchangeably.] for each player and propose a tunable dual-objective (α_D,α_G)-GAN, where the value functions of D and G are written in terms of α-loss with parameters α_D∈(0,∞] and α_G∈(0,∞], respectively. This paper ties together and significantly enhances our prior results investigating single-objective CPE loss-based GANs including α-GAN <cit.> and dual-objective GANs including (α_D,α_G)-GANs <cit.>. We list below all our contributions (while highlighting novelty relative to <cit.>) for both single- and dual-objective GANs. §.§ Our Contributions Single-objective GANs: * We review CPE loss GANs and include a two-way correspondence between CPE loss GANs and f-divergences (Theorem <ref>) previously published in <cit.>. We note that we include a more comprehensive proof of this result here. We review α-GANs, originally proposed in <cit.>, and present the optimal strategies for G and D, provided they have sufficiently large capacity and infinite samples (Theorem <ref>). We also include a result from <cit.> showing that α-GAN interpolates between various f-GANs including vanilla GAN (α=1), Hellinger GAN <cit.> (α=1/2), and Total Variation GAN <cit.> (α=∞) by tuning α (Theorem <ref>). * A novel contribution of this work is proving an equivalence between a CPE loss GAN and a corresponding f-GAN (Theorem <ref>). We specialize this for α-GANs and f_α-GANs to show that one can go between the two formulations using a bijective activation function (Theorem <ref> and Corollary <ref>). * We study convergence properties of CPE loss GANs in the presence of a sufficiently large number of samples and discriminator capacity. We show that all symmetric f-divergences are equivalent in convergence (Theorem <ref>), generalizing an equivalence proven in our prior work <cit.> for Arimoto divergences. We remark that the proof techniques used here give rise to a conceptually simpler proof of equivalence between the Jensen-Shannon divergence and the total variation distance proved earlier by Arjovsky et al. <cit.>. * In the setting of finite training samples and limited capacity for the generator and discriminator models, we extend the definition of generalization, first introduced by Arora et al. <cit.>, to CPE loss GANs. We do so by introducing a refined neural net divergence and prove that it indeed generalizes with an increasing number of training samples (Theorem <ref>). * To conclude our results on single-objective GANs, we review the definition of estimation error for CPE loss GANs introduced in <cit.>, present an upper bound on the error originally proven in <cit.> (Theorem <ref>), and present a matching lower bound under additional assumptions for α-GANs previously proven in <cit.> (Theorem <ref>). Dual-objective GANs: * We begin by reviewing (α_D,α_G)-GANs, originally proposed in <cit.>, and the corresponding optimal strategies for D and G for appropriate (α_D,α_G) values (Theorem <ref>).
We also review the non-saturating version of (α_D,α_G)-GANs, also proposed in <cit.>, and present its Nash equilibrium strategies for D and G (Theorem <ref>).* A novel contribution of this work is a gradient analysis highlighting the effect of tuning (α_D,α_G) on the magnitude of the gradient of the generator's loss for both the saturating and non-saturating versions of the (α_D,α_G)-GAN formulation (Theorem <ref>).* We introduce a dual-objective CPE loss GAN formulation generalizing our dual-objective (α_D,α_G)-GAN formulation in <cit.>. For this non-zero sum game, we present the optimal strategies for D and G and prove that for the optimal D_ω^*, G minimizes an f-divergence under certain conditions (Proposition <ref>).* We generalize the definition of estimation error we introduced in <cit.> for (α_D,α_G)-GANs to dual-objective CPE loss GANs. We present an upper bound on the error (Theorem <ref>), and show that this result subsumes that for (α_D,α_G)-GANs in <cit.>. * Focusing on (α_D,α_G)-GANs, we demonstrate empirically that tuning α_D and α_G significantly reduces vanishing and exploding gradients and alleviates mode collapse on a synthetic 2D-ring dataset (originally published in <cit.>). For the high-dimensional Celeb-A and LSUN Classroom datasets, we show that our tunable approach is more robust in terms of the Fréchet Inception Distance (FID) to the choice of GAN hyperparameters, including number of training epochs and learning rate, relative to both vanilla GAN and LSGAN.* Finally, throughout the paper, we illustrate the effect of tuning (α_D,α_G) on training instabilities including vanishing and exploding gradients, as well as model oscillation and mode collapse. §.§ Related Work GANs face several challenges that threaten their training stability <cit.>, such as vanishing/exploding gradients, mode collapse, sensitivity to hyperparameter initialization, and model oscillation, which occurs when the generated data oscillates around modes in real data due to large gradients. Many GAN variants have been proposed to stabilize training by changing the objective optimized <cit.> or the architecture design <cit.>. Since we focus on tuning the objective, we restrict discussions and comparisons to similar approaches.Approaches modifying the objective can be categorized as single-objective or dual-objective variants.For the single objective setting, arguing that vanishing gradients are due to the sensitivity of f-divergences to mismatch in distribution supports, Arjovsky et al. <cit.> proposed Wasserstein GAN (WGAN) using a “weaker" Euclidean distance between distributions. However, this formulation requires a Lipschitz constraint on D, which in practice is achieved either via clipping model weightsor using a computationally expensive gradient penalty method <cit.>.More generally, a broader class of GANs based on integral probability metric (IPM) distances have been proposed, including MMD GANs <cit.>, Sobolev GANs <cit.>, (surveyed in <cit.>), and total variation GANs <cit.>. Our work focuses on classifier based GANs, and does not require clipping or penalty methods, thus limiting meaningful comparisons with IPM-based GANs. Finally, for single-objective GANs, many theoretical approaches to GANs assume that a particular divergence is minimized and study the role of regularization methods <cit.>. Our work goes beyond these approaches by explicitly analyzing the value function optimizations of both D and G, thereby enabling understanding and addressing training instabilities. 
Noting the benefit of using different objectives for the D and G, various dual-objective GANs, beyond the NS vanilla GAN, have been proposed. Mao et al. <cit.> proposed Least Squares GAN (LSGAN) where the objectives for D and G use different linear combinations of squared loss-based measures.LSGANs can be viewed as state of the art in highlighting the effect of objective in GAN performance; therefore, in addition to vanilla GAN, we contrast our results to this work, as it allows for a fair comparison when choosing the same hyperparameters including model architecture, learning rate, initialization, optimization methodology, etc. for both approaches.Dual objective variants including RényiGAN <cit.>, least kth-order GANs <cit.>, NS f-GAN <cit.>, and hybrid f-GAN <cit.> have also been proposed. Recently, <cit.> attempts to unify a variety of divergence-based GANs (including special cases of both our (α_D,α_G)-GANs and LSGANs) via ℒ_α-GANs. However, our work is distinct in highlighting the role of GAN objectives in reducing training instabilities. Finally, it is worth mentioning that dual objectives have been shown to be essentialin the context of learning models robust to adversarial attacks <cit.>. Generalization for single-objective GANs was first introduced by Arora et al. <cit.>. Our work is the first to extend the definition of generalization to incorporate CPE losses. There is a growing interest in studying and constructing bounds on the estimation error in training GANs <cit.>. Estimation error evaluates the performance of a limited fixed capacity generator (e.g., a class of neural networks) learned with finite samples relative to the best generator. The results in <cit.> study estimation error using a specific formulation that does not take into account the loss used and also define estimation error only in the single-objective setting. In this work, we study the impact of the loss used as well as the dual-objective formulation on the estimation error guarantees. To the best of our knowledge, this is the first result of this kind for dual-objective GANs.The remainder of the paper is organized as follows. We review various GANs in the literature, classification loss functions, particularly α-loss, and GAN training instabilities in Section <ref>. In Section <ref>, we present and analyze the loss function perspective of GANs and introduce tunable α-GANs. In Section <ref>, we propose and analyze dual-objective (α_D,α_G)-GANs and introduce a dual-objective CPE-loss GAN formulation. Finally, in Section <ref>, we highlight the value of tuning (α_D,α_G) for (α_D,α_G)-GANs on several datasets. All proofs and additional experimental results can be found in the accompanying supplementary material (Appendices A-Q). § PRELIMINARIES: OVERVIEW OF GANS AND LOSS FUNCTIONS FOR CLASSIFICATION §.§ Background on GANsWe begin by presenting an overview of GANs in the literature. Let P_r be a probability distribution over 𝒳⊂ℝ^d, which the generator wants to learn implicitly by producing samples by playing a competitive game with a discriminator in an adversarial manner.We parameterize the generator G and the discriminator D by vectors θ∈Θ⊂ℝ^n_g and ω∈Ω⊂ℝ^n_d, respectively, and write G_θ and D_ω (θ and ω are typically the weights of neural network models for the generator and the discriminator, respectively). The generator G_θ takes as input a d^'(≪ d)-dimensional latent noise Z∼ P_Z and maps it to a data point in 𝒳 via the mapping z↦ G_θ(z). 
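A minimal PyTorch sketch of this parameterization is given below (an illustration only; the latent dimension, layer widths, and activations are arbitrary choices rather than the architectures used in the experiments reported later). It instantiates G_θ mapping latent noise in ℝ^{d'} to ℝ^d and the discriminator D_ω mapping ℝ^d to [0,1] described next.

import torch
import torch.nn as nn

d_prime, d = 16, 2   # latent and data dimensions (illustrative choices)

# Generator G_theta: z in R^{d'} -> x in R^d
G = nn.Sequential(
    nn.Linear(d_prime, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, d),
)

# Discriminator D_omega: x in R^d -> [0,1]
D = nn.Sequential(
    nn.Linear(d, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)

z = torch.randn(128, d_prime)        # latent noise Z ~ P_Z (standard Gaussian here)
x_fake = G(z)                        # generated samples G_theta(z)
scores = D(x_fake)                   # D_omega(x) in [0,1], one score per sample
print(x_fake.shape, scores.shape)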
For an input x∈𝒳, the discriminator outputs D_ω(x)∈[0,1], the probability that x comes from P_r (real) as opposed to P_G_θ (synthetic). The generator and the discriminator play a two-player min-max game with a value function V(θ,ω), resulting in a saddle-point optimization problem given byinf_θ∈Θsup_ω∈Ω V(θ,ω).Goodfellow et al. <cit.> introduced the vanilla GAN usingV_VG(θ,ω)=𝔼_X∼ P_r[logD_ω(X)]+𝔼_Z∼ P_Z[log(1-D_ω(G_θ(Z)))]=𝔼_X∼ P_r[logD_ω(X)]+𝔼_X∼ P_G_θ[log(1-D_ω(X))], for which they showed that when the discriminator class {D_ω}, parametrized by ω, is rich enough, (<ref>) simplifies to finding inf_θ∈Θ 2D_JS(P_r||P_G_θ)-log4, where D_JS(P_r||P_G_θ) is the Jensen-Shannon divergence <cit.> between P_r and P_G_θ. This simplification is achieved, for any G_θ, by choosing the optimal discriminatorD_ω^*(x)=p_r(x)/p_r(x)+p_G_θ(x),x ∈𝒳,where p_r and p_G_θ are the corresponding densities of the distributions P_r and P_G_θ, respectively, with respect to a base measure dx (e.g., Lebesgue measure).Generalizing this by leveraging the variational characterization of f-divergences <cit.>, Nowozin et al. <cit.> introduced f-GANs via the value functionV_f(θ,ω)=𝔼_X∼ P_r[D_ω(X)]+𝔼_X∼ P_G_θ[-f^*(D_ω(X))],where[This is a slight abuse of notation in that D_ω is not a probability here. However, we chose this for consistency innotation of discriminator across various GANs.] D_ω:𝒳→ℝ and f^*(t)sup_u{ut-f(u)} is the Fenchel conjugate of a convex lower semicontinuous function f defining an f-divergence D_f(P_r||P_G_θ)∫_𝒳p_G_θ(x)f(p_r(x)/p_G_θ(x))dx <cit.>. In particular, sup_ω∈Ω V_f(θ,ω)=D_f(P_r||P_G_θ) when there exists ω^*∈Ω such that D_ω^*(x)=f^'(p_r(x)/p_G_θ(x)). In order to respect the domain dom(f^*) of the conjugate f^*, Nowozin et al. further decomposed (<ref>) by assuming the discriminator D_ω can be represented in the form D_ω(x) = g_f(Q_ω(x)), yielding the value functionV_f(θ,ω)=𝔼_X∼ P_r[g_f(Q_ω(x))]+𝔼_X∼ P_G_θ[-f^*(g_f(Q_ω(x)))],where Q_ω:𝒳→ℝ and g_f:ℝ→dom(f^*) is an output activation function specific to the f-divergence used.Highlighting the problems with the continuity of various f-divergences (e.g., Jensen-Shannon, KL, reverse KL, total variation) over the parameter space Θ <cit.>, Arjovsky et al. <cit.> proposed Wasserstein-GAN (WGAN) using the following Earth Mover's (also called Wasserstein-1) distance:W(P_r,P_G_θ) =inf_Γ_X_1X_2∈Π(P_r,P_G_θ)𝔼_(X_1,X_2)∼Γ_X_1X_2‖X_1-X_2‖_2,where Π(P_r,P_G_θ) is the set of all joint distributions Γ_X_1X_2 with marginals P_r and P_G_θ. WGAN employs the Kantorovich-Rubinstein duality <cit.> using the value functionV_WGAN(θ,ω)=𝔼_X∼ P_r[D_ω(X)]-𝔼_X∼ P_G_θ[D_ω(X)],where the functions D_ω:𝒳→ℝ are all 1-Lipschitz, to simplify sup_ω∈ΩV_WGAN(θ,ω) to W(P_r,P_G_θ) when the class Ω is rich enough. Although various GANs have been proposed in the literature, each of them exhibits their own strengths and weaknesses in terms of convergence, vanishing/exploding gradients, mode collapse, computational complexity, etc., leaving the problem of addresing GAN training instabilities unresolved <cit.>. §.§ Background on Loss Functions for Classification The ideal loss function for classification is the Bayes loss, also known as the 0-1 loss. However, the complexity of implementing such a non-convex loss has led to much interest in seeking surrogate loss functions for classification. Several surrogate losses with desirable properties have been proposed to train classifiers; the most oft-used and popular among them is log-loss, also referred to as cross-entropy loss. 
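The reduction of the vanilla GAN objective to a Jensen-Shannon divergence can be verified numerically on a small discrete example; the sketch below (illustrative only, with arbitrarily chosen distributions) plugs the optimal discriminator p_r/(p_r+p_G_θ) into V_VG and compares the result with 2D_JS(P_r||P_G_θ)-log4.

import numpy as np

p_r = np.array([0.5, 0.3, 0.2])   # toy real distribution on 3 points (arbitrary choice)
p_g = np.array([0.2, 0.2, 0.6])   # toy generated distribution (arbitrary choice)

d_star = p_r / (p_r + p_g)         # optimal discriminator for the vanilla GAN

# Value function at the optimal discriminator: E_{P_r}[log D*] + E_{P_G}[log(1 - D*)]
v_at_opt = np.sum(p_r * np.log(d_star)) + np.sum(p_g * np.log(1.0 - d_star))

def kl(p, q):
    return np.sum(p * np.log(p / q))

m = 0.5 * (p_r + p_g)
jsd = 0.5 * kl(p_r, m) + 0.5 * kl(p_g, m)

print(v_at_opt, 2.0 * jsd - np.log(4.0))   # the two printed numbers agree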
However, enhancing robustness of classifier has broadened the search for better surrogate losses or families of losses; one such family is the class probability estimator (CPE) losses that operate on a soft probability or risk estimate. Recently, it has been shown that a large class of known CPE losses can be captured by a tunable loss family called α-loss,which includes the well-studied exponential loss (α=1/2), log-loss (α=1), and soft 0-1 loss, i.e., the probability of error (α=∞). Formally, α-loss is defined as follows.For a set of distributions 𝒫(𝒴) over 𝒴, α-loss ℓ_α:𝒴×𝒫(𝒴) →ℝ_+ for α∈ (0,1) ∪ (1,∞) is defined asℓ_α(y,P̂) = α/α - 1(1 - P̂(y)^α-1/α). By continuous extension, ℓ_1(y,P̂) = -logP̂(y), ℓ_∞(y,P̂) = 1 - P̂(y), and ℓ_0(y,P̂)=∞.Note that ℓ_1/2(y,P̂) = P̂(y)^-1 - 1, which is related to the exponential loss, particularly in the margin-based form <cit.>. Also, α-loss is convex in the probability term P̂(y). Regarding the history of (<ref>), Arimoto first studied α-loss in finite-parameter estimation problems <cit.>, and later Liao et al. <cit.> independently introduced and used α-loss to model the inferential capacity of an adversary to obtain private attributes. Most recently, Sypherd et al. <cit.> studied α-loss extensively in the classification setting, which is an impetus for this work. §.§ Background on GAN Training InstabilitiesGANs face several challenges during training. Imbalanced performance between the generator and discriminator often coincides with the presence of exploding and vanishing gradients. When updating the generator weights during the backward pass of the network G_θ∘ D_ω, the gradients are computed by propagating the gradient of the value function from the output layer of D_ω to the input layer of G_θ, following the chain rule of derivatives. Each layer contributes to the gradient update by multiplying the incoming gradient with the local gradient of its activation function, and passing it to the preceding layer. When the gradients become large, the successive multiplication of these gradients across the layers can result in an exponential growth, known as exploding gradients. Conversely, small gradients can lead to an exponential decay, referred to as vanishing gradients. In both cases, networks with multiple hidden layers are particularly susceptible to unstable weight updates, causing extremely large or small values that may overflow or underflow the numerical range of computations, respectively.In the context of the vanilla GAN, exploding gradients can occur when the generator successfully produces samples that are severely misclassified (close to 1) by the discriminator. During training, the generator is updated using the loss function log(1 - D_ω(x)), which diverges to -∞ as the discriminator output D_ω(x) approaches 1. Consequently, the gradients for the generator weights fail to converge to non-zero values, leading to the generated data potentially overshooting the real data in any direction. This is illustrated in Fig. <ref>(b), relative to an initial starting point in Fig. <ref>(a). In severe cases of exploding gradients, the weight update can push the generated data towards a region far from the real data. As a result, the discriminator can easily assign probabilities close to zero to the generated data and close to one to the real data. As the discriminator output approaches zero, the generator's loss function saturates, causing the gradients of the generator weights to gradually vanish. This is shown in Fig. <ref>(c). 
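These two failure modes can be read off from the derivatives of the generator's loss; the short sketch below (an illustrative check, not taken from the paper, with arbitrarily sampled discriminator outputs) evaluates the gradient of log(1-D) with respect to the output D, which blows up as D→1, and with respect to the logit feeding the final sigmoid, which vanishes as D→0, alongside the non-saturating alternative -log D.

import numpy as np

# Generator-loss gradients as a function of the discriminator output D = sigma(a),
# where a is the discriminator's logit (illustrative values of D below).
d = np.array([1e-3, 0.1, 0.5, 0.9, 1.0 - 1e-3])

# Saturating loss log(1 - D): gradient wrt the output D explodes as D -> 1,
# while the gradient wrt the logit a, which equals -D, vanishes as D -> 0.
sat_wrt_output = -1.0 / (1.0 - d)
sat_wrt_logit = -d

# Non-saturating loss -log D: gradient wrt the logit is D - 1, which stays
# near -1 as D -> 0, so the generator keeps receiving a useful learning signal.
ns_wrt_logit = d - 1.0

for di, g1, g2, g3 in zip(d, sat_wrt_output, sat_wrt_logit, ns_wrt_logit):
    print(f"D={di:.3f}  dlog(1-D)/dD={g1:10.1f}  dlog(1-D)/da={g2:7.3f}  d(-logD)/da={g3:7.3f}")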
The conflation of these two phenomena can prevent the generator from effectively correcting itself and improving its performance over time.To alleviate the issues of exploding and vanishing gradients, Goodfellow et al. <cit.> proposed a non-saturating (NS) generator objective:V_VG^NS(θ,ω)=𝔼_X∼ P_G_θ[-logD_ω(X)].The use of this non-saturating objective provides a more intuitive optimization trajectory that allows the generated distribution P_G_θ to converge to the real distribution P_r. As the discriminator output D_ω(x) for a sample x approaches 1, the generator loss -log D_ω(x) approaches zero, indicating that the generated data is closer to the real distribution Additionally, with a high-performing discriminator, the generator receives steep gradients (as opposed to vanishing gradients) during the update process; this occurs because the generator loss diverges to +∞ as the discriminator output approaches zero (see Fig. <ref>). As we show in the sequel, using α-loss based value functions allow modulating the magnitude of the gradient (and therefore, how steeply it rises), thereby improving over the vanilla GAN performance.While the non-saturating vanilla GAN (an industry standard) incorporates two different objective functions for the generator and discriminator in order to combat vanishing and exploding gradients, it can still suffer from mode collapse and oscillations <cit.>. These issues often arise due to the sensitivity of the GAN to hyperparameter initialization.The problem of mode collapse occurs when the generator produces samples that closely resemble only a limited subset of the real data. In such cases, the generator lacks the incentive to capture the remaining modes since the discriminator struggles to effectively differentiate between the real and generated samples. One possible explanation for this phenomenon, as depicted in Fig. <ref>, is that the generator and/or discriminator become trapped in a local minimum, impeding the necessary adjustments to mitigate mode collapse. In Fig. <ref>(a), the generated distribution approaches a single mode of the real distribution, which causes the optimal discriminator to have uniform predicted probabilities in this region; as a result, when the discriminator landscape is sufficiently flat in the mode neighborhood, the generator will get stuck and won't move out of the mode. We note that an extreme case of complete mode collapse is captured in Fig. <ref>(c) where the generator is stuck in a non-mode region. As we show in the sequel, α-loss based dual objective GANs can resolve such mode collapse issues which result from vanishing and exploding gradients. Yet another potential cause of mode collapse is model oscillation. This occurs when a generator training with the non-saturating value function V_VG^NS fails to converge due to the influence of a generated outlier data sample, as illustrated inFig. <ref>. In Fig. <ref>(a), most of the generated data is situated at a real data mode, while some are outliers and are situated very far from the real distribution. The discriminator very confidently classifies such outlier data as fake but is less sure about the generated data that is close to the real data. As shown in Fig. <ref>(b), the outlier data consequently receive gradients of very large magnitude while the generated data closer to the real data receive gradients of much smaller magnitude. 
The generator then prioritizes directing the outlier data toward the real data over keeping the data close to the real data in place; as a result, the generator update reflects a compromise in Fig. <ref>(c), where the outliers are resolved at the expense of moving the other data away from the real data mode. Although the generator succeeds at bringing down the average loss by eliminating these outliers, the discriminator is now able to confidently distinguish between the distributions, leading to near-zero probabilities assigned to the generated data. In turn, as shown in Fig. <ref>(d), the generated samples all receive very large gradients which may result in oscillations around the real data. For this setting as well, in the sequel, we show that choosing value functions that allow modulating the role of the outliers such as via α-loss, can be very beneficial in addressing mode oscillation. We begin our analysis by first introducing a loss function perspective of GANs. § LOSS FUNCTION PERSPECTIVE ON GANS Noting that a GAN involves a classifier (i.e., discriminator), it is well known that the value function V_VG(θ,ω) in (<ref>) considered by Goodfellow et al. <cit.> is related to binary cross-entropy loss. We first formalize this loss function perspective of GANs. In <cit.>, Arora et al. observed that the log function in (<ref>) can be replaced by any (monotonically increasing) concave function ϕ(x) (e.g., ϕ(x)=x for WGANs). In the context of using classification-based losses, we show that one can write V(θ,ω) in terms of any class probability estimation (CPE) loss ℓ(y,ŷ) whose inputs are the true label y∈{0,1} and predictor ŷ∈[0,1] (soft prediction of y). For a GAN, we have (X|y=1)∼ P_r, (X|y=0)∼ P_G_θ, and ŷ=D_ω(x). With this, we define a value function V(θ,ω) =𝔼_X|y=1[-ℓ(y,D_ω(X))]+𝔼_X|y=0[-ℓ(y,D_ω(X))]=𝔼_X∼ P_r[-ℓ(1,D_ω(X))]+𝔼_X∼ P_G_θ[-ℓ(0,D_ω(X))].For binary cross-entropy loss, i.e., ℓ_CE(y,ŷ) -ylogŷ-(1-y)log(1-ŷ), notice that the expression in (<ref>) is equal to V_VG in (<ref>). For the value function in (<ref>), we consider a GAN given by the min-max optimization problem:inf_θ∈Θsup_ω∈ΩV(θ,ω).Let ϕ(·)-ℓ(1,·) and ψ(·)-ℓ(0,·) in the sequel. The functions ϕ and ψ are assumed to be monotonically increasing and decreasing functions, respectively, so as to retain the intuitive interpretation of the vanilla GAN (that the discriminator should output high values to real samples and low values to the generated samples). These functions should also satisfy the constraintϕ(t)+ψ(t)≤ϕ(1/2)+ψ(1/2), for all t∈[0,1],so that the optimal discriminator guesses uniformly at random (i.e., outputs a constant value 1/2 irrespective of the input) when P_r=P_G_θ. A loss function ℓ(y,ŷ) is said to be symmetric <cit.> if ψ(t)=ϕ(1-t), for all t∈[0,1]. Notice that the value function considered by Arora et al. <cit.> is a special case of (<ref>), i.e., (<ref>) recovers the value function in <cit.> when the loss function ℓ(y,ŷ) is symmetric. For symmetric losses, concavity of the function ϕ is a sufficient condition for satisfying (<ref>), but not a necessary condition. §.§ CPE Loss GANs and f-divergences We now establish a precise correspondence between the family of GANs based on CPE loss functions and a family of f-divergences. We do this by building upon a relationship between margin-based loss functions <cit.> and f-divergences first demonstrated by Nguyen et al. <cit.> and leveraging our CPE loss function perspective of GANs given in (<ref>).This complements the connection establishedby Nowozin et al. 
<cit.> between the variational estimation approach of f-divergences <cit.> and f-divergence based GANs.We call a CPE loss function ℓ(y,ŷ) symmetric <cit.> if ℓ(1,ŷ)=ℓ(0,1-ŷ) and an f-divergence D_f(··) symmetric <cit.> if D_f(PQ)=D_f(QP).We assume GANs with sufficiently large number of samples and ample discriminator capacity.For any symmetric CPE loss GAN with a value functionin (<ref>), the min-max optimization in (<ref>) reduces to minimizing an f-divergence. Conversely, for any GAN designed to minimize a symmetric f-divergence, there exists a (symmetric) CPE loss GAN minimizing the same f-divergence. Let ℓ be the symmetric CPE loss of a given CPE loss GAN; note that ℓ has a bivariate input (y,ŷ) (e.g., in (<ref>)), where y ∈{0,1} and ŷ∈ [0,1]. We define an associated margin-based loss function ℓ̃ using a bijective link function (satisfying a mild regularity condition); note that a margin-based loss function has a univariate input z ∈ℝ (e.g., the logistic loss l̃^log(z) = log(1+e^-z)) and the bijective link function maps z →ŷ (see <cit.> for more details).We show after some manipulations that the inner optimization of the CPE loss GAN reduces to an f-divergence withf(u):=-inf_t∈ℝ(ℓ̃(-t)+uℓ̃(t)).For the converse, given a symmetric f-divergence, using <cit.>, note that there exists a margin-based lossℓ̃ such that (<ref>) holds. The rest of the argument follows from defining a symmetric CPE loss ℓ from this margin-based loss ℓ̃ via the inverse of the same link function. See Appendix <ref>for the detailed proof. A consequence of Theorem <ref> is that it offers an interpretable way to design GANs and connect a desired measure of divergence to a corresponding loss function, where the latter is easier to implement in practice. Moreover, CPE loss based GANs inherit the intuitive and compelling interpretation of vanilla GANs that the discriminator should assign higher likelihood values to real samples and lower ones to generated samples. We now specialize the loss function perspective of GANs to the GAN obtained by plugging in α-loss. We first write α-loss in (<ref>) in the form of a binary classification loss to obtainℓ_α(y,ŷ):=α/α-1(1-yŷ^α-1/α-(1-y)(1-ŷ)^α-1/α), for α∈(0,1)∪ (1,∞). Note that (<ref>) recovers ℓ_CE as α→ 1. Now consider a tunable α-GAN with the value function V_α(θ,ω)=𝔼_X∼ P_r[-ℓ_α(1,D_ω(X))]+𝔼_X∼ P_G_θ[-ℓ_α(0,D_ω(X))]=α/α-1(𝔼_X∼ P_r[D_ω(X)^α-1/α]+𝔼_X∼ P_G_θ[(1-D_ω(X))^α-1/α]-2).We can verify that lim_α→ 1V_α(θ,ω)=V_VG(θ,ω), recovering the value function of the vanilla GAN. Also, notice that lim_α→∞V_α(θ,ω)=𝔼_X∼ P_r[D_ω(x)]-𝔼_X∼ P_G_θ[D_ω(x)]-1is the value function (modulo a constant) used in Integral Probability Metric (IPM) based GANs[Note that IPMs do not restrict the function D_ω to be a probability.], e.g., WGAN, McGan <cit.>, Fisher GAN <cit.>, and Sobolev GAN <cit.>.The resulting min-max game in α-GAN is given byinf_θ∈Θsup_ω∈ΩV_α(θ,ω). The following theorem provides the min-max solution, i.e., Nash equilibrium, to the two-player game in (<ref>) for the non-parametric setting, i.e., when the discriminator set Ω is large enough. For α∈(0,1)∪ (1,∞) and a generator G_θ, the discriminator D_ω^* optimizing the sup in (<ref>) is D_ω^*(x)=p_r(x)^α/p_r(x)^α+p_G_θ(x)^α,x ∈𝒳,where p_r and p_G_θ are the corresponding densities of the distributions P_r and P_G_θ, respectively, with respect to a base measure dx (e.g., Lebesgue measure). 
For this D_ω^*, (<ref>) simplifies to minimizing a non-negative symmetric f_α-divergence D_f_α(·||·) to obtaininf_θ∈Θ D_f_α(P_r||P_G_θ)+α/α-1(2^1/α-2),wheref_α(u)=α/α-1((1+u^α)^1/α-(1+u)-2^1/α+2),for u≥ 0 and[We note that the divergence D_f_α has been referred to as Arimoto divergence in the literature <cit.>.]D_f_α(P||Q)=α/α-1(∫_𝒳(p(x)^α+q(x)^α)^1/α dx-2^1/α),which is minimized iff P_G_θ=P_r. A detailed proof of Theorem <ref>is in Appendix <ref>.As α→ 0, note that (<ref>) implies a more cautious discriminator, i.e., if p_G_θ(x) ≥ p_r(x), then D_ω^*(x) decays more slowly from 1/2, and if p_G_θ(x) ≤ p_r(x), D_ω^*(x) increases more slowly from 1/2.Conversely, as α→∞, (<ref>) simplifies to D_ω^*(x)=1{p_r(x)>p_G_θ(x)}+1/21{p_r(x)=p_G_θ(x)}, where the discriminator implements the Maximum Likelihood (ML) decision rule, i.e., a hard decision whenever p_r(x)≠ p_G_θ(x). In other words, (<ref>) for α→∞ induces a very confident discriminator.Regarding the generator's perspective, (<ref>) implies that the generator seeks to minimize the discrepancy between P_r and P_G_θ according to the geometry induced by D_f_α. Thus, the optimization trajectory traversed by the generator during training is strongly dependent on the practitioner's choice of α∈ (0,∞).Please refer to Fig. <ref> in Appendix <ref> for an illustration of this observation. Figure <ref> illustrates this effect of tuning α on the optimal D and the corresponding loss of the generator for a toy example.Note that the divergence D_f_α(·||·) (in (<ref>)) that naturally emerges from the analysis of α-GAN was first proposed by Österriecher <cit.> in a statistical context of measures and was later referred to as the Arimoto divergence by Liese and Vajda <cit.>. Next, we show that α-GAN recovers various well known f-GANs. α-GAN recovers vanilla GAN, Hellinger GAN (H-GAN) <cit.>, and Total Variation GAN (TV-GAN) <cit.> as α→ 1, α=1/2, and α→∞, respectively.We show the following: (i) as α→ 1, (<ref>) equals inf_θ∈Θ2D_JS(P_r||P_G_θ)-log4 recovering the vanilla GAN; (ii) for α=1/2, (<ref>) gives 2inf_θ∈ΘD_H^2(P_r||P_G_θ)-2 recovering Hellinger GAN (up to a constant); and (iii) as α→∞, (<ref>) equals inf_θ∈ΘD_TV(P_r||P_G_θ)-1 recovering TV-GAN (modulo a constant).A detailed proof is in Appendix <ref>. Next, we present an equivalence between f_α-GAN defined using the value function in (<ref>) and α-GAN. Define ℝ = ℝ∪{±∞}. We first prove that there exists a mapping between the terms involved in the optimization of both GAN formulations in the following theorem.For any α∈ (0,1)∪ (1,∞), let f_α be a slightly modified version of (<ref>) defined asf_α(u)=α/α-1((1+u^α)^1/α-(1+u)),u ≥ 0,with continuous extensions at α=1 and α=∞. Let f^*_α be the convex conjugate of f_α given byf_α^* (t) = α/α-1(1- (1-s(t))^α-1/α),wheres(t)=(1+α-1/α t)^α/α-1.Let g_f_α : ℝ→dom( f^*_α) be a bijective output activation function. * Given v ∈ℝ, there exists d ∈ [0,1] such thatg_f_α(v) = -ℓ_α(1,d) andf^*_α(g_f_α(v)) = ℓ_α(0,d).* Conversely, given d ∈ [0,1], there exists v ∈ℝ such that (<ref>) holds for the same function g_f_α.The result follows from comparing the corresponding terms in the f-GAN value function in (<ref>) (specifically for f=f_α) and the α-GAN value function in (<ref>).A detailed proof is in Appendix <ref>. 
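The limiting cases above can be checked numerically; the following sketch (illustrative only, with arbitrarily chosen discrete distributions) computes the tilted optimal discriminator and the Arimoto divergence D_f_α, and verifies that α=1/2 gives twice the squared Hellinger distance (with the convention H^2 = 1 - Σ√(pq)), α→1 gives twice the Jensen-Shannon divergence, and large α approaches the total variation distance.

import numpy as np

def arimoto_divergence(p, q, alpha):
    # D_f_alpha(P||Q) = alpha/(alpha-1) * ( sum_x (p^alpha + q^alpha)^(1/alpha) - 2^(1/alpha) )
    return alpha / (alpha - 1.0) * (np.sum((p**alpha + q**alpha) ** (1.0 / alpha)) - 2.0 ** (1.0 / alpha))

def optimal_discriminator(p_r, p_g, alpha):
    # D*(x) = p_r(x)^alpha / (p_r(x)^alpha + p_g(x)^alpha): softer than the likelihood-ratio
    # rule for small alpha, nearly hard ML-style decisions for large alpha.
    return p_r**alpha / (p_r**alpha + p_g**alpha)

p_r = np.array([0.5, 0.3, 0.2])   # toy real distribution (arbitrary choice)
p_g = np.array([0.2, 0.2, 0.6])   # toy generated distribution (arbitrary choice)

m = 0.5 * (p_r + p_g)
jsd = 0.5 * np.sum(p_r * np.log(p_r / m)) + 0.5 * np.sum(p_g * np.log(p_g / m))
hellinger_sq = 1.0 - np.sum(np.sqrt(p_r * p_g))
tv = 0.5 * np.sum(np.abs(p_r - p_g))

print(arimoto_divergence(p_r, p_g, 0.5), 2 * hellinger_sq)   # alpha = 1/2: twice squared Hellinger
print(arimoto_divergence(p_r, p_g, 1.001), 2 * jsd)          # alpha -> 1: twice the JSD
print(arimoto_divergence(p_r, p_g, 200.0), tv)               # large alpha: approx. total variation
print(optimal_discriminator(p_r, p_g, 0.5))                  # cautious discriminator
print(optimal_discriminator(p_r, p_g, 20.0))                 # nearly hard ML decisions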
Taking a closer look at the first equality in (<ref>) and recalling that a margin-based loss is often obtained by composing a classification function (such as α-loss) and the logistic sigmoid function, we can derive an example of such a g_f_α using the margin-based α-loss <cit.> asg_f_α(v) = α/α-1((1+e^-v)^-α-1/α-1 ),for v∈ℝ and α 1, where g_f_1(v) = lim_α→ 1 g_f_α(v) = -log(1+e^-v) for v ∈ℝ. The function g_f_α is monotonically increasing for any α, with range exactly matching dom(f_α^*), and is therefore bijective.The following corollary establishes the equivalence between f_α-GAN and α-GAN.Two optimization problems sup_v∈ Ag(v) and sup_t∈ Bh(t) are said to be equivalent <cit.> if there exists a bijective function k:A→ B such thatg(v)=h(k(v)) and h(t)=g(k^-1(t)),for allv∈ A, t∈ B.In other words, two optimization problems are equivalent if a change of variable via the function k can transform one into the other. For any α∈ (0,∞] and corresponding f_α defined in (<ref>), the optimization problems involved in f_α-GAN (using (<ref>) with f=f_α) and α-GAN (using (<ref>)) are equivalent for the choice g(Q_w)=𝔼_X ∼ P_r[g_f_α(Q_ω(X))] + 𝔼_X ∼ P_G_θ[-f_α^*(g_f_α(Q_ω(X)))]with A = {Q_ω:𝒳→ℝ} andh(D_ω)=𝔼_X ∼ P_r[-ℓ_α(1,D_ω(X))] +𝔼_X ∼ P_G_θ[-ℓ_α(0,D_ω(X))]with B = {D_ω:𝒳→ [0,1] } using k:A→ B defined byk(v) = s(g_f_α(v))= (1+(α-1/α) g_f_α(v))^α/α-1 , where s is defined in (<ref>) and g_f_α is a bijective output activation function mapping from ℝ to dom(f_α^*).The proof of Corollary <ref> follows from (<ref>) and Theorem <ref>. The following theorem generalizes the equivalence demonstrated above between f_α-GAN and α-GAN to an equivalence between f-GANs (using the original value function in (<ref>)) and CPE loss based GANs. For any given symmetric f-divergence, the optimization problems involved in f-GAN and the CPE loss based GAN minimizing the same f-divergence are equivalent under the following regularity conditions on f: * there exists a strictly convex and differentiable CPE (partial) loss function ℓ such thatf(u)=sup_t∈[0,1] -uℓ(t)-ℓ(1-t)(note that this condition without the requirement of strict convexity of ℓ is indeed guaranteed by <cit.> for any convex function f resulting in a symmetric divergence) and -uℓ(t)-ℓ(1-t) has a local maximum in t for every u∈ℝ_+, and * the function mapping u∈ℝ_+ to unique optimizer in (<ref>) is bijective. Observing that the inner optimization problem in the CPE loss GAN formulation reduces to the pointwise optimization (<ref>) and that of the f-GAN formulation reduces to the pointwise optimizationf(u)=sup_v∈domf^* uv-f^*(v),it suffices to show that the variational forms of f in (<ref>) and (<ref>) are equivalent. We do this by showing that (<ref>) is equivalent to the optimization problemf(u)=sup_v∈ℝ_+uf^'(v)-[vf^'(v)-f(v)],which has been shown to be equivalent to (<ref>) <cit.>. A detailed proof is in Appendix <ref>.Since α-loss, ℓ_α(p)=α/α-1(1-p^α-1/α), p∈[0,1], is strictly convex for α∈(0,∞), and the function mapping u∈ℝ_+ to unique optimizer in (<ref>) with α-loss, i.e., u^α/1+u^α, is bijective, Theorem <ref> implies that α-GAN is equivalent to f_α-GAN with f_α defined in (<ref>). Though the CPE loss GAN and f-GAN formulations are equivalent, the following aspects differentiate the two: * The f-GAN formulation focuses on the generator minimizing an f-divergence with no explicit emphasis on the role of the discriminator as a binary classifier in relation to the function f. 
With the CPE loss GAN formulation, we bring into the foreground the connection between the binary classification performed by the discriminator and the f-divergence minimization done by the generator.* More importantly, the CPE loss function perspective of GANs allows us to prove convergence properties (Theorem <ref>), generalization error bounds (Theorem <ref>), and estimation error bounds (Theorem <ref>) as detailed in the following sections.§.§ Convergence Guarantees for CPE Loss GANs Building on the above one-to-one correspondence,we now present convergence results for CPE loss GANs, including α-GAN, thereby providing a unified perspective on the convergence of a variety of f-divergences that arise when optimizing GANs. Here again, we assume a sufficiently large number of samples and ample discriminator capacity. In <cit.>, Liu et al. address the following question in the context of convergence analysis of any GAN: For a sequence of generated distributions (P_n), does convergence of a divergence between the generated distribution P_n and a fixed real distribution P to the global minimum lead to some standard notion of distributional convergence of P_n to P? They answer this question in the affirmative provided the sample space 𝒳 is a compact metric space.Liu et al. <cit.> formally define any divergence that results from the inner optimization of a general GAN in (<ref>) as an adversarial divergence <cit.>, thus broadly capturing the divergences used by a number of existing GANs, including vanilla GAN <cit.>, f-GAN <cit.>, WGAN <cit.>, and MMD-GAN <cit.>. Indeed, the divergence that results from the inner optimization of a CPE loss GAN (including α-GAN) in (<ref>) is also an adversarial divergence. For strict adversarial divergences (a subclass of the adversarial divergences where the minimizer of the divergence is uniquely the real distribution),Liu et al. <cit.> show that convergence of the divergence to its global minimum implies weak convergence of the generated distribution to the real distribution. Interestingly, this also leads to a structural result on the class of strict adversarial divergences <cit.> based on a notion of relative strength between adversarial divergences.We note that the Arimoto divergence D_f_α in (<ref>) is a strict adversarial divergence.We briefly summarize the following terminology from Liu et al. <cit.> to present our results on convergence properties of CPE loss GANs. Let 𝒫(𝒳) be the probability simplex of distributions over 𝒳. A strict adversarial divergence τ_1 is said to be stronger than another strict adversarial divergence τ_2 (or τ_2 is said to be weaker than τ_1) if for any sequence of probability distributions (P_n) and target distribution P (both in 𝒫(𝒳)), τ_1(PP_n)→ 0 as n→∞ implies τ_2(PP_n)→ 0 as n→∞. We say τ_1 is equivalent to τ_2 if τ_1 is both stronger and weaker than τ_2. Arjovsky et al. <cit.> proved that the Jensen-Shannon divergence (JSD) is equivalent to the total variation distance (TVD).Later, Liu et al. showed that the squared Hellinger distance is equivalent to both of these divergences, meaning that all three divergences belong to the same equivalence class (see <cit.>). Noticing that the squared Hellinger distance, JSD, and TVD correspond to Arimoto divergences D_f_α(·||·) for α=1/2, α=1, and α=∞, respectively, it is natural to ask the question: Are Arimoto divergences for all α>0 equivalent? We answer this question in the affirmative in Theorem <ref>. 
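As a quick numerical sanity check of this claim (the formal statement and proof follow), the sketch below (illustrative only; the perturbation family is an arbitrary choice) tracks D_f_α(P_n||P) for several α together with the total variation distance as P_n approaches P.

import numpy as np

def arimoto_divergence(p, q, alpha):
    return alpha / (alpha - 1.0) * (np.sum((p**alpha + q**alpha) ** (1.0 / alpha)) - 2.0 ** (1.0 / alpha))

p = np.array([0.4, 0.35, 0.25])   # target distribution (arbitrary choice)

for n in [1, 10, 100, 1000]:
    # P_n: a perturbation of P that shrinks as n grows, so P_n -> P
    delta = np.array([1.0, -0.5, -0.5]) / (4.0 * n)
    p_n = p + delta
    tv = 0.5 * np.sum(np.abs(p_n - p))
    divs = [arimoto_divergence(p_n, p, a) for a in (0.5, 1.001, 2.0, 50.0)]
    print(n, tv, [f"{d:.2e}" for d in divs])

# All columns shrink to zero together, consistent with equivalence in convergence.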
In fact, we prove that all symmetric f-divergences, including D_f_α, are equivalent in convergence.Let f_i:[0,∞)→ℝ be a convex function which is continuous at 0 and strictly convex at 1 such that f_i(1)=0, uf_i(1/u)=f_i(u), and f_i(0)<∞, for i∈{1,2}. Then for a sequence of probability distributions (P_n)_n∈ℕ∈𝒫(𝒳) and a fixed distribution P ∈𝒫(𝒳), we have D_f_1(P_n||P)→ 0 as n→∞ if and only if D_f_2(P_n||P)→ 0 as n→∞. Note that it suffices to show that D_f(··) is equivalent to D_TV(··) for any function f satisfying the conditions in the theorem. To show this, we employ an elegant result by Feldman and Österreicher <cit.> which gives lower and upper bounds on the Arimoto divergence in terms of TVD asγ_f(D_TV(P||Q))≤ D_f(P||Q)≤γ_f(1)D_TV(P||Q),for an appropriately defined well-behaved (continuous, invertible, and bounded) function γ_α:[0,1]→ [0,∞). We use the lower and upper bounds in (<ref>) to show that D_f(··) is stronger than D_TV(··), and D_f(··) is weaker than D_TV(··), respectively. Proof details are inAppendix <ref>. We note that the proof techniques used in proving Theorem <ref> give rise to a conceptually simpler proof of equivalence between JSD (α = 1) and TVD (α = ∞) proved earlier by Arjovsky et al. <cit.>, where measure-theoretic analysis was used. In particular, our proof of equivalence relies on the fact that TVD upper bounds JSD <cit.>. SeeAppendix <ref> for details. Theorems <ref> through <ref> hold in the ideal setting of sufficient samples and discriminator capacity. In practice, however, GAN training is limited by both the number of training samples as well as the choice of G_θ and D_ω. In fact, recent results by Arora et al. <cit.> show that under such limitations, convergence in divergence does not imply convergence in distribution, and have led to new metrics for evaluating GANs. To address these limitations, we consider two measures to evaluate the performance of GANs, namely generation and estimation errors, as detailed below.§.§ Generalization and Estimation Error Bounds for CPE Loss GANsArora et al. <cit.> defined generalization in GANs as the scenario when the divergence between the real distribution and the generated distribution is well-captured by the divergence between their empirical versions. In particular, a divergence or distance[For consistency with other works on generalization and estimation error, we refer to a semi-metric as a distance.] d(·,·) between distributions generalizes with m training samples and error ϵ>0 if, for the learned distribution P_G_θ, the following holds with high probability:|d(P_r,P_G_θ)-d(P̂_r,P̂_G_θ) |≤ϵ,where P̂_r and P̂_G_θ are the empirical versions of the real(with m samples) and the generated (with a polynomial number of samples) distributions, respectively. Arora et al. <cit.> show that the Jensen-Shannon divergence and Wasserstein distance do not generalize with any polynomial number of samples. However, they show that generalization can be achieved for a new notion of divergence, the neural net divergence, with a moderate number of training examples <cit.>.To this end, they considerthe following optimization probleminf_θ∈Θd_ℱ(P_r,P_G_θ),where d_ℱ(P_r,P_G_θ) is the neural net divergence defined as d_ℱ(P_r,P_G_θ)=sup_ω∈Ω(𝔼_X∼ P_r[ϕ(D_ω(X))]+𝔼_X∼ P_G_θ[ϕ(1-D_ω(X))]) -2ϕ(1/2)such that the class of discriminators ℱ={D_ω:ω∈Ω} is L-Lipschitz with respect to the parameters ω, i.e., for every x∈𝒳, |D_ω_1(x)-D_ω_2(x)|≤ L||ω_1-ω_2||, for all ω_1,ω_2∈Ω, and the function ϕ takes values in [-Δ,Δ] and is L_ϕ-Lipschitz. 
Let p be the discriminator capacity (i.e., number of parameters) and ϵ>0. For these assumptions, in <cit.>, Arora et al. prove that (<ref>) generalizes. We summarize their result as follows: for the empirical versions P̂_r and P̂_G_θ of two distributions P_r and P_G_θ, respectively, with at least m random samples each, there exists a universal constant c such that when m≥cpΔ^2log(LL_ϕp/ϵ)/ϵ^2, with probability at least 1-exp(-p) (over the randomness of samples),|d_ℱ(P_r,P_G_θ)-d_ℱ(P̂_r,P̂_G_θ)|≤ϵ.Our first contribution is to show that we can generalize (<ref>) and <cit.> to incorporate any partial losses ϕ and ψ (not just those that are symmetric). To this end, we first define the refined neural net divergence as d̃_ℱ(P_r,P_G_θ)=sup_ω∈Ω(𝔼_X∼ P_r[ϕ(D_ω(X))]+𝔼_X∼ P_G_θ[ψ(D_ω(X))])-ϕ(1/2)-ψ(1/2),where the discriminator class is same as the above and the functions ϕ and ψ take values in [-Δ,Δ] and are L_ϕ- and L_ψ-Lipschitz, respectively. Note that the functions ϕ and ψ should also satisfy(<ref>) so as to respect the optimality of the uniformly random discriminator when P_r=P_G_θ. The following theorem shows that the refined neural net divergence generalizes with a moderate number of training examples, thus extending <cit.>. Let P̂_r and P̂_G_θ be empirical versions of two distributions P_r and P_G_θ, respectively, with at least m random samples each. For Δ, p, L, L_ϕ,L_ψ,ϵ>0 defined above, there exists a universal constant c such that when m≥cpΔ^2log(Lmax{L_ϕ,L_ψ}p/ϵ)/ϵ^2, we have that with probability at least 1-exp(-p) (over the randomness of samples),|d̃_ℱ(P_r,P_G_θ)-d̃_ℱ(P̂_r,P̂_G_θ)|≤ϵ.When ϕ(t)=t and D_ω=f_ω can take values in ℝ (not just in [0,1]), (<ref>) yields the so-called neural net (nn) distance[This term was first introduced in <cit.> but with a focus on a discriminator D_ω taking values in [0,1]. Ji et al. <cit.> generalized it to D_ω = f_ω taking values in ℝ.] <cit.> given by d_ℱ_nn(P_r,P_G_θ) =sup_ω∈Ω (𝔼_X∼ P_r[f_ω(X)]-𝔼_X∼ P_G_θ[f_ω(X)] ), where the discriminator[In <cit.>, f_ω indicates a discriminator function that takes values in ℝ.] and generator f_ω(·) and G_θ(·), respectively, are neural networks. Using (<ref>), Ji et al. <cit.> defined and studied the notion of estimation error, which quantifies the effectiveness of the generator (for a corresponding optimal discriminator model) in learning the real distribution with limited samples. In order to define estimation error for CPE-loss GANs (including α-GAN), we first introduce a loss-inclusive neural net divergence[We refer to this measure as a divergence since it may not be a semi-metric for all choices of the loss ℓ.] d^(ℓ)_ℱ_nn to highlight the effect of the loss on the error. For training samples S_x={X_1,…,X_n} and S_z={Z_1,…,Z_m} from P_r and P_Z, respectively, we begin with the following minimization for GAN training: inf_θ∈Θd^(ℓ)_ℱ_nn(P̂_r,P̂_G_θ),where P̂_r and P̂_G_θ are the empirical real and generated distributions estimated from S_x and S_z, respectively, andd^(ℓ)_ℱ_nn(P̂_r,P̂_G_θ)=sup_ω∈Ω(𝔼_X∼P̂_r[ϕ(D_ω(X)] )+𝔼_X∼P̂_G_θ[ψ(D_ω(X)] )) -ϕ(1/2)-ψ(1/2), where for brevity we henceforth use ϕ(·) -ℓ(1,·) and ψ(·) -ℓ(0,·). As proven in Theorem <ref>, for ℓ=ℓ_α and α=∞, (<ref>) reduces to the neural net total variation distance.As a step towards obtaining bounds on the estimation error, we consider the following setup, analogous to that in <cit.>. 
For x∈𝒳{x∈ℝ^d:||x||_2≤ B_x} andz∈𝒵{z∈ℝ^p:||z||_2≤ B_z}, we considerdiscriminators and generators as neural network models of the form:D_ω :x↦σ(𝐰_k^𝖳r_k-1(𝐖_d-1r_k-2(… r_1(𝐖_1(x)))) G_θ :z↦𝐕_ls_l-1(𝐕_l-1s_l-2(… s_1(𝐕_1z))),where 𝐰_k is a parameter vector of the output layer; for i∈[1:k-1] and j∈[1:l], 𝐖_i and 𝐕_j are parameter matrices; r_i(·) and s_j(·) are entry-wise activation functions of layers i and j, i.e., for 𝐚∈ℝ^t, r_i(𝐚)=[r_i(a_1),…,r_i(a_t)] and s_i(𝐚)=[s_i(a_1),…,s_i(a_t)]; and σ(·) is the sigmoid function given by σ(p)=1/(1+e^-p) (note that σ does not appear in the discriminator in <cit.> as the discriminator considered in the neural net distance is not a soft classifier mapping to [0,1]). We assume that each r_i(·) and s_j(·) are R_i- and S_j-Lipschitz, respectively, and also that they are positive homogeneous, i.e., r_i(λ p)=λ r_i(p) and s_j(λ p)=λ s_j(p), for any λ≥ 0 and p∈ℝ. Finally, as modelled in <cit.>, we assume that the Frobenius norms of the parameter matrices are bounded, i.e., ||𝐖_i||_F≤ M_i, i∈[1:k-1], ||𝐰_k||_2≤ M_k, and ||𝐕_j||_F≤ N_j, j∈[1:l].We define the estimation error for a CPE loss GAN as d^(ℓ)_ℱ_nn(P_r,P_G_θ̂^*)-inf_θ∈Θ d^(ℓ)_ℱ_nn(P_r,P_G_θ),where θ̂^* is the minimizer of (<ref>) and present the following upper bound on the error. We also specialize these bounds for α-GANs, relying on the Rademacher complexity of this loss class to do so. For the setting described above, additionally assume that the functions ϕ(·) and ψ(·) are L_ϕ- and L_ψ-Lipschitz, respectively. Then, with probability at least 1-2δ over the randomness of training samples S_x={X_i}_i=1^n and S_z={Z_j}_j=1^m, we haved^(ℓ)_ℱ_nn(P_r,P̂_G_θ̂^*)-inf_θ∈Θ d^(ℓ)_ℱ_nn(P_r,P_G_θ) ≤ L_ϕ B_xU_ω√(3k)/√(n)+L_ψ U_ω U_θ B_z√(3(k+l-1))/√(m)+U_ω√(log1/δ)(L_ϕ B_x/√(2n)+L_ψ B_zU_θ/√(2m)), where U_ω M_k∏_i=1^k-1(M_iR_i) and U_θ N_l∏_j=1^l-1(N_jS_j).In particular, when this bound is specialized to the case of α-GAN by letting ϕ(p)=ψ(1-p)=α/α-1(1-p^α-1/α), the resulting bound is nearly identical to the terms in the RHS of (<ref>), except for substitutions L_ϕ← 4C_Q_x(α) and L_ψ← 4C_Q_z(α), where Q_x U_ω B_x, Q_z U_ω U_θ B_z, andC_h(α)σ(h)σ(-h)^α-1/α, α∈(0,1] (α-1/2α-1)^α-1/αα/2α-1,α∈(1,∞). Our proof involves the following steps: * Building upon the proof techniques of Ji et al. <cit.>, we bound the estimation error in terms of Rademacher complexities of compositional function classes involving the CPE loss function. * We then upper bound these Rademacher complexities leveraging a contraction lemma for Lipschitz loss functions <cit.>. We remark that this differs considerably from the way the bounds on Rademacher complexities in <cit.> are obtained because of the explicit role of the loss function in our setting.* For the case of α-GAN, we extend a result by Sypherd et al. <cit.> where they showed that α-loss is Lipschitz for a logistic model with (<ref>). Noting that similar to the logistic model, we also have a sigmoid in the outer layer of the discriminator, we generalize the preceding observation by proving that α-loss is Lipschitz when the input is equal to a sigmoid function acting on a neural network model. This is the reason behind the dependence of the Lipschitz constant on the neural network model parameters (in terms of Q_x and Q_z). Note that (<ref>) is monotonically decreasing in α, indicating the bound saturates. However, one is not able to make definitive statements regarding the estimation bounds for relative values of α because the LHS in (<ref>) is also a function of α. 
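The behavior of the constant C_h(α) appearing in the specialized bound can be inspected directly; the sketch below (illustrative only; Q is a placeholder for Q_x or Q_z) evaluates C_h(α) over a range of α and confirms the monotone decrease noted above.

import numpy as np

def sigmoid(h):
    return 1.0 / (1.0 + np.exp(-h))

def C(h, alpha):
    # C_h(alpha) from the alpha-GAN specialization of the estimation-error upper bound
    if alpha <= 1.0:
        return sigmoid(h) * sigmoid(-h) ** ((alpha - 1.0) / alpha)
    return ((alpha - 1.0) / (2.0 * alpha - 1.0)) ** ((alpha - 1.0) / alpha) * alpha / (2.0 * alpha - 1.0)

Q = 3.0   # placeholder for Q_x = U_omega * B_x (a product of the norm bounds in the setup)
for a in [0.2, 0.5, 1.0, 1.5, 2.0, 5.0, 50.0]:
    print(a, C(Q, a))
# The printed values decrease in alpha, so the Lipschitz factors 4*C_Q(alpha) in the bound
# shrink, although the divergence on the left-hand side also changes with alpha.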
Proof details are in Appendix <ref>. We now focus on developing lower bounds on the estimation error. Because the oft-used techniques for obtaining min-max lower bounds on the quality of an estimator (e.g., Le Cam's and Fano's methods) require a semi-metric distance measure, we restrict our attention to a particular α-GAN, namely that for α=∞, to derive a matching lower bound on the estimation error. We consider the loss-inclusive neural net divergence in (<ref>) with ℓ=ℓ_α for α=∞, which, for brevity, we henceforth denote as d^ℓ_∞_ℱ_nn(·,·). As in <cit.>, suppose the generator's class {G_θ}_θ∈Θ is rich enough that the generator G_θ can learn the real distribution P_r and that the number m of training samples in S_z scales faster than the number n of samples in S_x[Since the noise distribution P_Z is known, one can generate an arbitrarily large number m of noise samples.]. Then inf_θ∈Θ d^ℓ_∞_ℱ_nn(P_r,P_G_θ) = 0, so the estimation error simplifies to the single term d^ℓ_∞_ℱ_nn(P_r,P_G_θ̂^*). Furthermore, the upper bound in (<ref>) reduces to O(c/√(n)) for some constant c (note that, in (<ref>), C_h(∞)=1/4). In addition to the above assumptions, also assume that the activation functions r_i for i ∈ [1:k-1] are either strictly increasing or ReLU. For the above setting, we derive a matching min-max lower bound (up to a constant multiple) on the estimation error. For the setting above, let P̂_n be an estimator of P_r learned using the training samples S_x={X_i}_i=1^n. Then, inf_P̂_nsup_P_r ∈𝒫(𝒳) ℙ{d^ℓ_∞_ℱ_nn(P̂_n,P_r) ≥C(𝒫(𝒳))/√(n)} > 0.24,where the constant C(𝒫(𝒳)) is given by C(𝒫(𝒳)) = log(2)/20[σ(M_k r_k-1(… r_1(M_1 B_x)))-σ(M_k r_k-1(… r_1(-M_1 B_x)))]. To obtain min-max lower bounds, we first prove that d^ℓ_∞_ℱ_nn is a semi-metric. The remainder of the proof is similar to that of <cit.>, replacing d_ℱ_nn with d^ℓ_∞_ℱ_nn. Finally, we note that the additional sigmoid activation function after the last layer in D satisfies the monotonicity assumption, as detailed in Appendix <ref>. A challenge that remains to be addressed is to verify whether d^ℓ_α_ℱ_nn is a semi-metric for α<∞.§ DUAL-OBJECTIVE GANS As illustrated in Fig. <ref>, tuning α<1 provides more gradient for the generator to learn early in training, when the discriminator more confidently classifies the generated data as fake, alleviating vanishing gradients; it also creates a smooth landscape for the generated data to descend towards the real data, alleviating exploding gradients. However, tuning α < 1 may provide overly large gradients for the generator when the generated samples approach the real samples, which can result in too much movement of the generated data, potentially repelling it from the real data. The following question therefore arises: Can we combine a less confident discriminator with a more stable generator loss? We show that we can do so by using different objectives for the discriminator and generator, resulting in (α_D,α_G)-GANs. §.§ (α_D,α_G)-GANs We propose a dual-objective (α_D,α_G)-GAN with different objective functions for the generator and discriminator, in which the discriminator maximizes V_α_D(θ,ω) while the generator minimizes V_α_G(θ,ω), where V_α(θ,ω)=𝔼_X∼ P_r[-ℓ_α(1,D_ω(X))]+𝔼_X∼ P_G_θ[-ℓ_α(0,D_ω(X))],for α=α_D,α_G ∈ (0,∞]. We recover the α-GAN <cit.> value function when α_D=α_G=α.
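To make the objectives concrete, the sketch below (ours, not the training code used in the experiments) spells out the α-loss and a Monte-Carlo estimate of V_α(θ,ω) from samples, with D(·) any model mapping into (0,1):

import numpy as np

def alpha_loss(y, p, alpha):
    # l_alpha(y, p) for a label y in {0,1} and a soft prediction p in (0,1);
    # alpha -> 1 recovers log-loss, and alpha = inf gives the linear loss behind the TV reduction.
    q = p if y == 1 else 1.0 - p
    if alpha == 1.0:
        return -np.log(q)
    if np.isinf(alpha):
        return 1.0 - q
    return (alpha / (alpha - 1.0)) * (1.0 - q ** ((alpha - 1.0) / alpha))

def value_function(x_real, x_gen, D, alpha):
    # Monte-Carlo estimate of V_alpha = E_{P_r}[-l_alpha(1, D(X))] + E_{P_G}[-l_alpha(0, D(X))].
    real_term = -np.mean([alpha_loss(1, D(x), alpha) for x in x_real])
    gen_term = -np.mean([alpha_loss(0, D(x), alpha) for x in x_gen])
    return real_term + gen_term

In an (α_D,α_G)-GAN, the discriminator performs ascent on value_function(x_real, x_gen, D, alpha_D) while the generator performs descent on value_function(x_real, x_gen, D, alpha_G); the non-saturating variant introduced below instead has the generator descend the mean of alpha_loss(1, D(G(z)), alpha_G) over generated samples.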
The resulting (α_D,α_G)-GAN is given by sup_ω∈ΩV_α_D(θ,ω) inf_θ∈Θ V_α_G(θ,ω) .We maintain the same ordering as the original min-max GAN formulation for this non-zero sum game, wherein for a set of chosen parameters for both players, the discriminator plays first, followed by the generator. The following theorem presents the conditions under which the optimal generator learns the real distribution P_r when the discriminator set Ω is large enough. For the game in (<ref>) with (α_D,α_G)∈ (0,∞]^2, given a generator G_θ, the discriminator optimizing (<ref>) is D_ω^*(x)=p_r(x)^α_D/p_r(x)^α_D+p_G_θ(x)^α_D,x ∈𝒳. For this D_ω^* and the function f_α_D,α_G:ℝ_+ →ℝ defined asf_α_D,α_G(u)=α_G/α_G-1(u^α_D(1-1/α_G)+1+1/(u^α_D+1)^1-1/α_G-2^1/α_G),(<ref>) simplifies to minimizing a non-negative symmetric f_α_D,α_G-divergence D_f_α_D,α_G(·||·) asinf_θ∈Θ D_f_α_D,α_G(P_r||P_G_θ)+α_G/α_G-1(2^1/α_G-2),which is minimized iff P_G_θ=P_r for (α_D,α_G) such that(α_D ≤ 1, α_G > α_D/α_D+1 ) or ( α_D > 1, α_D/2< α_G ≤α_D). We substitute the optimal discriminator of (<ref>) into the objective function of (<ref>) and write the resulting expression in the form∫_𝒳 p_G_θ(x)f_α_D,α_G(p_r(x)/p_G_θ(x)) dx + α_G/α_G-1(2^1/α_G-2).We then find the conditions on α_D and α_G for f_α_D,α_G to be strictly convex so that the first term in (<ref>) is an f-divergence. Figure <ref>(a)in Appendix <ref> illustrates the feasible (α_D,α_G)-region. A detailed proof can be found in Appendix <ref>. See Fig. <ref> for a toy example illustrating the value of tuning α_D<1 and α_G≥1. Noting that α-GAN recovers various well-known GANs, including the vanilla GAN, which is prone to saturation, the (α_D,α_G)-GAN formulation using the generator objective function in (<ref>) can similarly saturate early in training, potentially causing vanishing gradients. We propose the following NS alternative to the generator's objective in (<ref>): V^NS_α_G(θ,ω)= 𝔼_X∼ P_G_θ[ℓ_α_G(1,D_ω(X))],thereby replacing (<ref>) withinf_θ∈Θ V^NS_α_G(θ,ω). Comparing (<ref>) and (<ref>), note that the additional expectation term over P_r in (<ref>) results in (<ref>) simplifying to a symmetric divergence for D_ω^* in (<ref>), whereas the single term in (<ref>) will result in (<ref>) simplifying to an asymmetric divergence. The optimal discriminator for this NS game remains the same as in (<ref>). The following theorem provides the solution to (<ref>) under the assumption that the optimal discriminator can be attained. For the same D_ω^* in (<ref>) and the function f_α_D,α_G^NS:ℝ_+ →ℝ defined asf^NS_α_D,α_G(u)=α_G/α_G-1(2^1/α_G-1-u^α_D(1-1/α_G)/(u^α_D+1)^1-1/α_G),(<ref>) simplifies to minimizing a non-negative asymmetric f^NS_α_D,α_G-divergence D_f^NS_α_D,α_G(·||·) asinf_θ∈Θ D_f^NS_α_D,α_G(P_r||P_G_θ)+α_G/α_G-1(1-2^1/α_G-1),which is minimized iff P_G_θ=P_r for (α_D,α_G) ∈ (0,∞]^2 such that α_D + α_G > α_Gα_D. The proof mimics that of Theorem <ref> and is detailed in Appendix <ref>.Figure <ref>(b) in Appendix <ref> illustrates the feasible (α_D,α_G)-region; in contrast to the saturating setting of Theorem <ref>, the NS setting constrains α≤ 2 whenα_D=α_G=α. See Figure <ref>(c) for a toy example illustrating how tuning α_D<1 and α_G≥1 can also alleviate training instabilities in the NS setting.We note that the input to the discriminator is a random variable X which can be viewed as being sampled from a mixture distribution, i.e., X∼δ P_r + (1-δ)P_G_θ where δ∈ (0,1). Without loss of generality, we assume δ=1/2 but the analysis that follows can be generalized for arbitrary δ. 
We use the Bernoulli random variable Y ∈{0,1} to indicate that X=x is from the real(Y=1) or generated (Y=0) distributions. Therefore, the marginal probabilities of the two classes are P_Y(1)=1-P_Y(0)=δ=1/2.Thus, one can then compute the true posterior P_Y|X(1|x) and its tilted version P^(α_D)_Y|X(1|x)as follows: P_Y|X(1|x)= p_r(x)/ p_r(x)+ p_G_θ(x) and P^(α_D)_Y|X(1|x) = p_r(x)^α_D/ p_r(x)^α_D+p_G_θ(x)^α_D,where both expressions simplify to the optimal discriminator of the vanilla GAN in (<ref>) for α_D=1. We now present a theorem to quantify precisely the effect of tuning α_D and α_G. To this end, we begin by first taking a closer look at the gradients induced by the generator's loss during training. To simplify our analysis, we assume that at every step of training, the discriminator can achieve its optimum, D_ω^*[We note that a related gradient analysis was considered by Shannon <cit.> for f-GANs assuming an optimal discriminator.]. For any sample x=G_θ(z) generated by G, we can write the gradient of the generator's loss for an (α_D,α_G)-GAN w.r.t. its weight vector θ as-∂ℓ_α_G(0, D_ω^*(x))/∂θ =-∂ℓ_α_G(0, D_ω^*(x))/∂ x×∂ x/∂θ=- ∂ℓ_α_G(0, D_ω^*(x))/∂ D_ω^*(x)×∂ D_ω^*(x)/∂ x×∂ x/∂θ. We note that while we cannot explicitly analyze the term ∂ x/∂θ in (<ref>), we assume that by using models satisfying boundedness and Lipschitz assumptions[These assumptions match practical settings.], this term will not be unbounded. We thus focus on the first two terms on the right side of(<ref>) for any α_G.For α_D=1, from (<ref>), we see that in regions densely populated by the generated but not the real data, D_ω^*(x)→ 0. Further, the first term in (<ref>) is bounded thus causing the gradient in (<ref>) to vanish. On the other hand, when α_D<1, D_ω^* increases (resp. decreases) in areas denser in generated (resp. real) data, thereby providing more gradients for G. This is clearly illustrated in Fig. <ref>(a) and <ref>(b) and reveals how strongly dependent the optimization trajectory traversed by G during training is on the practitioner’s choice of (α_D, α_G) ∈ (0,∞]^2. In fact, this holds irrespective of the saturating or the NS (α_D, α_G)-GAN. In the following theorem, we offer deeper insights into how such an optimization trajectory is influenced by tuning α_D and α_G.For a given P_r and P_G_θ, let x be a sample generated according to P_G_θ, and D_ω^* be optimal with respect to V_α_D(θ, ω). Then(a) the saturating and non-saturating gradients, -∂ℓ_α_G(0,D_ω^*(x)) / ∂ x and ∂ℓ_α_G(1, D_ω^*(x)) / ∂ x, respectively, demonstrate the following behavior: -∂ℓ_α_G(0, D_ω^*(x))/∂ x= C_x,α_D,α_G(1/p_G_θ(x)∂ p_G_θ/∂ x - 1/p_r(x)∂ p_r/∂ x) ∂ℓ_α_G(1, D_ω^*(x))/∂ x= C^NS_x,α_D,α_G(1/p_G_θ(x)∂ p_G_θ/∂ x - 1/p_r(x)∂ p_r/∂ x),where using the tilted probabilityP^(α_D)_Y|X(1|x) as written in (<ref>), C_x,α_D, α_G α_D P^(α_D)_Y|X(1|x) (1 - P^(α_D)_Y|X(1|x) )^1 - 1/α_G, C^NS_x,α_D, α_G α_D(1 - P^(α_D)_Y|X(1|x)) P^(α_D)_Y|X(1|x)^1 - 1/α_G, and (b) the gradients in both (<ref>) and (<ref>) have directions that are independent of α_D and α_G. One can view the results in Theorem <ref> above as a one-shot (in any iteration) analysis of the gradients of the generator's loss, and thus, we fix P_G_θ. Doing so allows us to ignore the implicit dependence on (α_D,α_G) of the P_G_θ learned up to this iteration, thus allowing us to obtain tractable expressions for any iteration. A detailed proof of Theorem <ref> can be found in Appendix <ref>. Focusing first on saturating (α_D, α_G)-GANs, in Fig. 
<ref>(a) in Appendix <ref>, we plot C_x,α_D, α_G as a function of the true probability that X ∼1/2P_r + 1/2 P_G_θ is real, namely P_Y|X(1|x), for five (α_D, α_G) combinations. In the (1,1) case (i.e., vanilla GAN), C_x,1, 1≈ 0 for generated samples far away from the real data (where P_Y|X(1|x) ≈ 0).As discussed earlier, this optimization strategy is troublesome when the real and generated data are fully separable, since the sample gradients are essentially zeroed out by the scalar, leading to vanishing gradients. To address this issue, Fig. <ref>(a) shows that tuning α_D below 1 (e.g., 0.6) ensures that samples most likely to be “generated” (P_Y|X(1|x) ≈ 0) receive sufficient gradient for updates that direct them closer to the real distribution.The vanilla GAN also suffers from convergence issues since generated samples close to the real data (when P_Y|X(1|x) ≈ 1) receive gradients large in magnitude (C_x,1,1≈ 1). Ideally, these generated samples should not be instructed to move since they convincingly pass as real to D_ω^*. As explained in Section <ref>, an excessive gradient can push the generated data away from the real data, which ultimately separates the distributions and forces the GAN to restart training. Although the (0.6,1)-GAN in Fig. <ref>(a) appears to decrease C_x,α_D, α_G for samples close to the real data (P_Y|X(1|x) ≈ 1), tuning α_G>1 allows this gradient to converge to zero as desired (see Fig. <ref>(b)). Although tuning the saturating (α_D, α_G)-GAN formulation away from vanilla GAN promotes a more favorable optimization trajectory for G, this approach continues to suffer from the problem of providing small gradients for generated samples far from P_r. This suggests looking at the behavior of the NS (α_D,α_G)-GAN formulation.Figure <ref>(b) in Appendix <ref> illustrates the relationship between the gradient scalar C^NS_x,α_D,α_G and the probability that a sample X ∼1/2P_r + 1/2P_G_θ is real, namely P_Y|X(1|x), for several values of (α_D,α_G). In the vanilla (1,1)-GAN case, we observe a negative linear relationship, i.e., the samples least likely to be real (P_Y|X(1|x) ≈ 0) receive large gradients (C^NS_x,1,1≈ 1) while the samples most likely to be real receive minimal gradients (C^NS_x,1,1≈ 0). While this seems desirable, unfortunately, the vanilla GAN's optimization strategy often renders it vulnerable to model oscillation, a common GAN failure detailed in Section <ref>, as a result of such large gradients of the outlier (far from real) samples causing the generated data to oscillate around the real data modes.By tuning α_D below 1, as shown in Fig. <ref>(b), one can slightly increase (resp. decrease) C^NS_x,α_D, α_G for the generated samples close to (resp. far from) the real modes.As a result, the generated samples are more robust to outliers and therefore more likely to converge to the real modes. Finally, tuning α_G above 1 can further improve this robustness.A caveat here is the fact that C^NS_x,α_D, α_G≈ 0 when P_Y|X(1|x)≈ 0 can potentially be problematic since the near-zero gradients may immobilize generated data far from the real distribution. This is borne out in our results for several large image datasets in Section <ref> where choosing α_G=1 yields the best results. The cumulative effects of tuning (α_D,α_G) are further illustrated in Fig. <ref>(c). §.§ CPE Loss Based Dual-objective GANsSimilarly to the single-objective loss function perspective in Section <ref>, we can generalize the (α_D,α_G)-GAN formulation to incorporate general CPE losses. 
To this end, we introduce a dual-objective loss function perspective of GANs in which the discriminator maximizes V_ℓ_D(θ,ω) while the generator minimizes V_ℓ_G(θ,ω), where V_ℓ(θ,ω)=𝔼_X∼ P_r[-ℓ(1,D_ω(X))]+𝔼_X∼ P_G_θ[-ℓ(0,D_ω(X))],for any CPE losses ℓ=ℓ_D,ℓ_G. The resulting CPE loss dual-objective GAN is given by sup_ω∈ΩV_ℓ_D(θ,ω) inf_θ∈Θ V_ℓ_G(θ,ω) .The CPE losses ℓ_D and ℓ_G can be completely different losses, the same loss but with different parameter values, or the same loss with the same parameter values, in which case the above formulation reduces to the single-objective formulation in (<ref>). For example, choosing ℓ_D = ℓ_G = ℓ_α, we recover the α-GAN formulation in (<ref>); choosing ℓ_D = ℓ_α_D and ℓ_G = ℓ_α_G, we obtain the (α_D,α_G)-GAN formulation in (<ref>). Note that ℓ_D should satisfy the constraint in (<ref>) so that the optimal discriminator outputs 1/2 for any input when P_r=P_G_θ. We once again maintain the same ordering as the original min-max GAN formulation and, in the following proposition, present the conditions under which the optimal generator minimizes a symmetric f-divergence when the discriminator set Ω is large enough. Let ℓ_D and ℓ_G be symmetric CPE loss functions with ℓ_D(1,·) also differentiable with derivative ℓ_D^'(1,·) and strictly convex. Then the optimal discriminator D_ω^* optimizing (<ref>) satisfies the implicit equation, provided it has a solution, ℓ_D^'(1,1-D_ω^*(x)) = p_r(x)/p_G_θ(x)ℓ_D^'(1,D_ω^*(x)),x ∈𝒳.If (<ref>) does not have a solution for a particular x∈𝒳, then D_ω^*(x)=0 or D_ω^*(x)=1. Let A(p_r(x)/p_G_θ(x))≜ D_ω^*(x). For this D_ω^*, (<ref>) simplifies to minimizing a symmetric f-divergence D_f(P_r||P_G_θ) if the function f:ℝ_+ →ℝ is convex, where f is defined as f(u) = -uℓ_G(1,A(u))-ℓ_G(1,1-A(u))+2ℓ_G(1,1/2) .The proof involves a straightforward application of KKT conditions when optimizing (<ref>) and substituting in (<ref>). A detailed proof can be found in Appendix <ref>. As it is difficult to come up with conditions without having the explicit forms of the losses ℓ_D and ℓ_G, Proposition <ref> provides a broad outline of what the optimal strategies will look like. The assumption of the losses being symmetric can be relaxed, in which case the resulting f-divergence will no longer be guaranteed to be symmetric. Theorem <ref> is a special case of Proposition <ref> for ℓ_D=ℓ_α_D and ℓ_G=ℓ_α_G. As another example, consider the following square-loss-based CPE losses[Note that these losses were considered in <cit.> and were shown to result in a special case of a shifted LSGAN minimizing a certain Jensen-f-divergence.]: ℓ_D(y,ŷ) =1/2[y(ŷ-1)^2+(1-y)ŷ^2] ℓ_G(y,ŷ) =-1/2[y(ŷ^2-1)+(1-y)((1-ŷ)^2-1)] .Note that (<ref>) and (<ref>) are both symmetric and ℓ_D(1,·) is both convex (and therefore ℓ_D satisfies (<ref>)) and differentiable with ℓ_D^'(1,ŷ)=ŷ-1. The implicit equation in (<ref>) then becomes (1-D_ω^*(x))-1=u(D_ω^*(x)-1) with u=p_r(x)/p_G_θ(x), which gives D_ω^*(x) = u/(u+1)=p_r(x)/(p_r(x) + p_G_θ(x)). The corresponding f in (<ref>) is f(u)=[3(1-u)]/[4(u+1)],which is convex. Therefore, the dual-objective CPE loss GAN using (<ref>) and (<ref>) minimizes a symmetric f-divergence.§.§ Estimation Error for CPE Loss Dual-objective GANs In order to analyze what occurs in practice, where both the number of training samples and the model capacity are limited, we now consider the same setting as in Section <ref> with finite training samples S_x={X_1,…,X_n} and S_z={Z_1,…,Z_m} from P_r and P_Z, respectively, and with neural networks chosen as the discriminator and generator models.
The sets of samples S_x and S_z induce the empirical real and generated distributions P̂_r and P̂_G_θ, respectively. A useful quantity to evaluate the performance of GANs in this setting is again that of the estimation error. In Section <ref>, we define estimation error for CPE loss GANs. However, such a definition requires a common value function for both discriminator and generator, and therefore does not directly apply to the dual-objective setting we consider here. Our definition relies on the observation that estimation error inherently captures the effectiveness of the generator (for a corresponding optimal discriminator model) in learning with limited samples. We formalize this intuition below. Since CPE loss dual-objective GANs use different objective functions for the discriminator and generator, we start by defining the optimal discriminator ω^* for a generator model G_θ as ω^*(P_r,P_G_θ)≜argmax_ω∈Ω V_ℓ_D(θ,ω)|_P_r,P_G_θ,where the notation |_·,· allows us to make explicit the distributions used in the value function. In keeping with the literature, where the value function being minimized is referred to as the neural net (NN) distance (since D and G are modeled as neural networks) <cit.>, we define the generator's NN distance d_ω^*(P_r,P_G_θ) as d_ω^*(P_r,P_G_θ)(P_r,P_G_θ)≜ V_ℓ_G(θ,ω^*(P_r,P_G_θ))|_P_r,P_G_θ.The resulting minimization for training the CPE-loss dual-objective GAN using finite samples is inf_θ∈Θ d_ω^*(P̂_r,P̂_G_θ)(P̂_r,P̂_G_θ).Denoting θ̂^* as the minimizer of (<ref>), we define the estimation error for CPE loss dual-objective GANs as d_ω^*(P_r,P_G_θ̂^*)(P_r,P_G_θ̂^*)-inf_θ∈Θ d_ω^*(P_r,P_G_θ)(P_r,P_G_θ) .We use the same notation as in Section <ref>, detailed again in the following for easy reference. For x∈𝒳≜{x∈ℝ^d:||x||_2≤ B_x} and z∈𝒵≜{z∈ℝ^p:||z||_2≤ B_z}, we model the discriminator and generator as k- and l-layer neural networks, respectively, such that D_ω and G_θ can be written as: D_ω :x↦σ(𝐰_k^𝖳r_k-1(𝐖_k-1r_k-2(… r_1(𝐖_1x)))) G_θ :z↦𝐕_ls_l-1(𝐕_l-1s_l-2(… s_1(𝐕_1z))),where (i) 𝐰_k is a parameter vector of the output layer; (ii) for i∈[1:k-1] and j∈[1:l], 𝐖_i and 𝐕_j are parameter matrices; (iii) r_i(·) and s_j(·) are entry-wise activation functions of layers i and j, respectively, i.e., for 𝐚∈ℝ^t, r_i(𝐚)=[r_i(a_1),…,r_i(a_t)] and s_j(𝐚)=[s_j(a_1),…,s_j(a_t)]; and (iv) σ(·) is the sigmoid function given by σ(p)=1/(1+e^-p). We assume that r_i(·) and s_j(·) are R_i- and S_j-Lipschitz, respectively, and also that they are positive homogeneous, i.e., r_i(λ p)=λ r_i(p) and s_j(λ p)=λ s_j(p), for any λ≥ 0 and p∈ℝ. Finally, as is common in such analysis <cit.>, we assume that the Frobenius norms of the parameter matrices are bounded, i.e., ||𝐖_i||_F≤ M_i, i∈[1:k-1], ||𝐰_k||_2≤ M_k, and ||𝐕_j||_F≤ N_j, j∈[1:l]. We now present an upper bound on (<ref>) in the following theorem. For the setting described above, additionally assume that the functions ϕ(·)≜ -ℓ_G(1,·) and ψ(·)≜ -ℓ_G(0,·) are L_ϕ- and L_ψ-Lipschitz, respectively.
Then, with probability at least 1-2δ over the randomness of training samples S_x={X_i}_i=1^n and S_z={Z_j}_j=1^m, we have d_ω^*(P_r,P_G_θ̂^*)(P_r,P_G_θ̂^*)-inf_θ∈Θ d_ω^*(P_r,P_G_θ)(P_r,P_G_θ) ≤ L_ϕ B_xU_ω√(3k)/√(n)+L_ψ U_ω U_θ B_z√(3(k+l-1))/√(m)+U_ω√(log1/δ)(L_ϕ B_x/√(2n)+L_ψ B_zU_θ/√(2m)), where U_ω≜ M_k∏_i=1^k-1(M_iR_i) and U_θ≜ N_l∏_j=1^l-1(N_jS_j). In particular, when specialized to the case of (α_D,α_G)-GANs by letting ϕ(p)=ψ(1-p)=α_G/α_G-1(1-p^α_G-1/α_G), the resulting bound is nearly identical to the terms in the RHS of (<ref>), except for the substitutions L_ϕ← 4C_Q_x(α_G) and L_ψ← 4C_Q_z(α_G), where Q_x≜ U_ω B_x, Q_z≜ U_ω U_θ B_z, and C_h(α)≜σ(h)σ(-h)^α-1/α for α∈(0,1] and C_h(α)≜(α-1/2α-1)^α-1/α·α/2α-1 for α∈(1,∞). The proof is similar to that of Theorem <ref> (and also <cit.>). We observe that (<ref>) does not depend on ℓ_D, an artifact of the proof techniques used, and is therefore most likely not the tightest bound possible. See Appendix <ref> for proof details. § ILLUSTRATION OF RESULTS We illustrate the value of the (α_D, α_G)-GAN as compared to the vanilla GAN (i.e., the (1,1)-GAN). Focusing on DCGAN architectures <cit.>, we compare against LSGANs <cit.>, the current state-of-the-art (SOTA) dual-objective approach. While WGANs <cit.> have also been proposed to address the training instabilities, their training methodology is distinctly different and uses a different optimizer (RMSprop), requires gradient clipping or penalty, and does not leverage batch normalization, all of which make meaningful comparisons difficult. We evaluate our approach on three datasets: (i) a synthetic dataset generated by a two-dimensional, ring-shaped Gaussian mixture distribution (2D-ring) <cit.>; (ii) the 64 × 64 Celeb-A image dataset <cit.>; and (iii) the 112 × 112 LSUN Classroom dataset <cit.>. For each dataset and pair of GAN objectives, we report several metrics that encapsulate the stability of GAN training over hundreds of random seeds. This allows us to clearly showcase the potential for tuning (α_D, α_G) to obtain stable and robust solutions for image generation.§.§ 2D Gaussian Mixture Ring The 2D-ring is an oft-used synthetic dataset for evaluating GANs. We draw samples from a mixture of 8 equal-prior Gaussian distributions, indexed i ∈{1,2,…,8}, with a mean of (cos(2π i / 8), sin(2π i / 8)) and variance 10^-4. We generate 50,000 training and 25,000 testing samples and the same number of 2D latent Gaussian noise vectors, where each entry is a standard Gaussian. Both the D and G networks have 4 fully-connected layers with 200 and 400 units, respectively. We train for 400 epochs with a batch size of 128, and optimize with Adam <cit.> and a learning rate of 10^-4 for both models. We consider three distinct settings that differ in their objective functions: (i) the (α_D, α_G)-GAN in (<ref>); (ii) the NS (α_D, α_G)-GAN in (<ref>) and (<ref>); and (iii) the LSGAN with the 0-1 binary coding scheme (see Appendix <ref> for details). For every setting listed above, we train our models on the 2D-ring dataset for 200 random seeds, where each seed yields different weight initializations for D and G. Ideally, a stable method will reflect similar performance across randomized initializations and also over training epochs; thus, we explore how GAN training performance for each setting varies across seeds and epochs. Our primary performance metric is mode coverage, defined as the number of Gaussians (0-8) that contain a generated sample within 3 standard deviations of its mean.
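A minimal sketch of this synthetic dataset and metric is shown below (our own illustration, not the exact evaluation code used for the reported numbers; the 3-standard-deviation criterion is interpreted as a Euclidean distance threshold for the isotropic components):

import numpy as np

def sample_2d_ring(n, var=1e-4, seed=0):
    # Draw n samples from the 8-component 2D-ring mixture described above.
    rng = np.random.default_rng(seed)
    idx = rng.integers(1, 9, size=n)  # equal-prior component index i in {1, ..., 8}
    means = np.stack([np.cos(2 * np.pi * idx / 8), np.sin(2 * np.pi * idx / 8)], axis=1)
    return means + np.sqrt(var) * rng.standard_normal((n, 2))

def mode_coverage(x_gen, var=1e-4):
    # Number of the 8 Gaussians (0-8) containing at least one generated sample
    # within 3 standard deviations of its mean.
    i = np.arange(1, 9)
    means = np.stack([np.cos(2 * np.pi * i / 8), np.sin(2 * np.pi * i / 8)], axis=1)
    dists = np.linalg.norm(x_gen[:, None, :] - means[None, :, :], axis=-1)  # shape (n, 8)
    return int(np.sum(dists.min(axis=0) <= 3 * np.sqrt(var)))

For instance, mode_coverage(sample_2d_ring(25000)) returns 8, while a collapsed generator producing samples near a single mode scores 1.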
A score of 8 conveys successful training, while a score of 0 conveys a significant GAN failure; on the other hand, a score in between 0 and 8 may be indicative of common GAN issues, such as mode collapse or failure to converge. For the saturating setting, the improvement in stability of the (0.2,1)-GAN relative to the vanilla GAN is illustrated in Figure <ref> as detailed in the caption.Vanilla GAN fails to converge to the true distribution 30% of the time while succeeding only 46% of the time. In contrast, the (α_D, α_G)-GAN with α_D < 1 learns a more stable G due to a less confident D (see also Figure <ref>(a)). For example, the (0.3,1)-GAN success and failure rates improve to 87% and 2%, respectively.For the NS setting in Figure <ref>, we find that tuning α_D and α_G yields more consistently stable outcomes than vanilla and LSGANs. Mode coverage rates over 200 seeds for saturating (Tables <ref> and <ref>) and NS (Table <ref>) are in Appendix <ref>.§.§ Celeb-A & LSUN Classroom The Celeb-A dataset <cit.> is a widely recognized large-scale collection of over 200,000 celebrity headshots, encompassing images with diverse aspect ratios, camera angles, backgrounds, lighting conditions, and other variations. Similarly, the LSUN Classroom dataset <cit.> is a subset of the comprehensive Large-scale Scene Understanding (LSUN) dataset; it contains over 150,000 classroom images captured under diverse conditions and with varying aspect ratios. To ensure consistent input for the discriminator, we follow the standard practice of resizing the images to 64 × 64 for Celeb-A and 112 × 112 for LSUN Classroom. For both experiments, we randomly select 80% of the images for training and leave the remaining 20% for validation (evaluation of goodness metrics). Finally, for the generator, for each dataset, we generate a similar 80%-20% training-validation split of 100-dimensional latent Gaussian noise vectors, where each entry is a standard Gaussian, for a total matching the size of the true data. For training, we employ the DCGAN architecture <cit.> that leverages deep convolutional neural networks (CNNs) for both D and G. In Appendix <ref>, detailed descriptions of the D and G architectures can be found in Tables <ref> and <ref> for the Celeb-A and LSUN Classroom datasets, respectively. Following SOTA methods, we focus on the non-saturating setting, utilizing appropriate objectives for vanilla GAN, (α_D, α_G)-GAN, and LSGAN. We consider a variety of learning rates, ranging from 10^-4 to 10^-3, for Adam optimization. We evaluate our models every 10 epochs up to a total of 100 epochs and report the Fréchet Inception Distance (FID), an unsupervised similarity metric between the real and generated feature distributions extracted by InceptionNet-V3 <cit.>. For both datasets, we train each combination of objective function, number of epochs, and learning rate for 50 seeds. In the following subsections, we empirically demonstrate the dependence of the FID on learning rate and number of epochs for the vanilla GAN, (α_D, α_G)-GAN, and LSGAN. Achieving robustness to hyperparameter initialization is especially desirable in the unsupervised GAN setting as the choices that facilitate steady model convergence are not easily determined a priori. §.§.§ Celeb-A Results In Figure <ref>(a), we examine the relationship between learning rate and FID for each GAN trained for 100 epochs on the Celeb-A dataset. When using learning rates of 1 × 10^-4 and 2 × 10^-4, all GANs consistently perform well. 
However, when the learning rate increases,the vanilla (1,1)-GAN begins to exhibit instability across the 50 seeds. As the learning rate surpasses 5 × 10^-4, the performance of the vanilla GAN becomes even more erratic, underscoring the importance of GANs being robust to the choice of learning rate. Figure <ref>(a) also demonstrates that the GANs with α_D < 1 perform on par with, if not better than, the SOTA LSGAN. For instance, the (0.6,1)-GAN consistently achieves low FIDs across all tested learning rates. In Figure <ref>(a), for different learning rates, we compare the dependence on the number of training epochs (hyperparameter)of the vanilla (1,1)-GAN, (0.6,1)-GAN, and LSGAN by plotting their FIDs every 10 epochs, up to 100 epochs, for two similar learning rates: 5 × 10^-4 and 6 × 10^-4. We discover that the vanilla (1,1)-GAN performs significantly worse for the higher learning rate and deteriorates over time for both learning rates. Conversely, both the (0.6,1)-GAN and LSGAN consistently exhibit favorable FID performance for both learning rates. However, the (0.6,1)-GAN converges to a low FID, while the FID of the LSGAN slightly increases as training approaches 100 epochs. Finally, Fig. <ref>(b) displays a grid of generated Celeb-A faces, randomly sampled over 8 seeds for three GANs trained for 100 epochs with a learning rate of 5 × 10^-4. Here, we observe that the faces generated by the (0.6,1)-GAN and LSGAN exhibit a comparable level of quality to the rightmost column images, which are randomly sampled from the real Celeb-A dataset. On the other hand, the vanilla (1,1)-GAN shows clear signs of performance instability, as some seeds yield high-quality images while others do not.§.§.§ LSUN Classroom Results In Figure <ref>(b), we illustrate the relationship between learning rate and FID for GANs trained on the LSUN dataset for 100 epochs. In fact, when all GANs are trained with a learning rate of 1 × 10^-4, they consistently deliver satisfactory performance.However, increasing it to 2 × 10^-4 leads to instability in the vanilla (1,1)-GAN across 50 seeds. On the other hand, we observe that α_D<1 contributes to stabilizing the FID across the 50 seeds even when trained with slightly higher learning rates. In Figure <ref>(b), we see that as α_D is tuned down to 0.6, the mean FIDs consistently decrease across all tested learning rates. These lower FIDs can be attributed to the increased stability of the network. Despite the gains in GAN stability achieved by tuning down α_D, Figure <ref> demonstrates a noticeable disparity between the best (α_D, α_G)-GAN and the SOTA LSGAN. This suggests that there is still room for improvement in generating high-dimensional images with (α_D, α_G)-GANs. In Appendix <ref>, Figure <ref>(a), we illustrate the average FID throughout the training process for three GANs: (1,1)-GAN, (0.6,1)-GAN, and LSGAN, using two different learning rates: 1 × 10^-4 and 2 × 10^-4. These findings validate that the vanilla (1,1)-GAN performs well when trained with the lower learning rate, but struggles significantly with the higher learning rate. In contrast, the (0.6,1)-GAN exhibits less sensitivity to learning rate, while the LSGAN achieves nearly identical scores for both learning rates. In Figure <ref>(b), we showcase the image quality generated by each GAN at epoch 100 with the higher learning rate. 
This plot highlights that the vanilla (1,1)-GAN frequently fails during training, whereas the (0.6,1)-GAN and LSGAN produce images that are more consistent in mimicking the real distribution. Finally, we present the FID vs. learning rate results for both datasets in Table <ref> in Appendix <ref>. This allows yet another way to evaluate performance by comparing the percentage (out of 50 seeds) of FID scores below a desired threshold for each dataset, as detailed in the appendix.§ CONCLUSIONBuilding on our prior work introducing CPE loss GANs and α-GANs, we have introduced new results on the equivalence of CPE loss GANs and f-GANs, convergence properties of the symmetric f-divergences induced by CPE loss GANs under certain conditions, and the generalization and estimation error for CPE loss GANs including α-GANs. We have introduced a dual-objective GAN formulation, focusing in particular on using α-loss with potentially different α values for both players' objectives. GANs offer an alternative to diffusion models in being faster to train but training instabilities stymie such advantages. In this context, our results are very promising and highlight how tuning α can not only alleviate training instabilities but also enhance robustness to learning rates and training epochs, hyperparameters whose optimal values are generally not known a priori. A natural extension to our work is to define and study generalization of dual-objective GANs. An equally important problem is to evaluate if our observations hold more broadly, including, when the training data is noisy <cit.>. While different f-divergence based GANs have been introduced, no principled reasons have been proposed thus far for choosing a specific f-divergence measure and corresponding loss functions to optimize. Even in the more practical finite sample and model capacity settings, different choices of objectives, as shown earlier, lead to different neural network divergence measures.Using tunable losses, our work has the advantage of motivating the choice of appropriate loss functions and the resulting f-divergence/neural network divergence from the crucial viewpoint of avoiding training instabilities. This connection between loss functions and divergences to identify the appropriate measure of goodness can be of broader interest both to the IT and ML communities. IEEEtran § PROOF OF THEOREM <REF>Consider a symmetric CPE loss ℓ(y,ŷ), i.e., ℓ(1,ŷ)=ℓ(0,1-ŷ). We may define an associated margin-based loss using an increasing bijective link function l:ℝ→ [0,1] asℓ̃(t):=ℓ(1,l(t)),where the link l satisfies the following mild regularity conditions: l(-t)=1-l(t),l(0)=1/2,l^-1(t)+l^-1(1-t)=0(e.g., sigmoid function, σ(t)=1/(1+e^-t) satisfies this condition). Consider the inner optimization problem in (<ref>) with the value function in (<ref>) for this CPE loss ℓ.sup_ω∫_𝒳(-p_r(x)ℓ(1,D_ω(x))-p_G_θ(x)ℓ(0,D_ω(x))) dx= ∫_𝒳sup_p_x∈[0,1](-p_r(x)ℓ(1,p_x)-p_G_θ(x)ℓ(0,p_x)) dx=∫_𝒳sup_p_x∈[0,1](-p_r(x)ℓ(1,p_x)-p_G_θ(x)ℓ(1,1-p_x)) dx=∫_𝒳sup_t_x∈ℝ(-p_r(x)ℓ(1,l(t_x))-p_G_θ(x)ℓ(1,1-l(t_x)))dx=∫_𝒳sup_t_x∈ℝ(-p_r(x)ℓ(1,l(t_x))-p_G_θ(x)ℓ(1,l(-t_x))) dx=∫_𝒳sup_t_x∈ℝ(-p_r(x)ℓ̃(t_x)-p_G_θ(x)ℓ̃(-t_x) dx =∫_𝒳p_G_θ(x)(-inf_t_x∈ℝ(l̃(-t_x)+p_r(x)/p_G_θ(x)l̃(t_x))) dxwhere (<ref>) follows because the CPE loss ℓ(y,ŷ) is symmetric, (<ref>) follows from (<ref>), and (<ref>) follows from the definition of the margin-based loss ℓ̃ in (<ref>). 
Now note that the function f defined asf(u)=-inf_t∈ℝ(ℓ̃(-t)+uℓ̃(t)),u ≥ 0is convex since the infimum of affine functions is concave (observed earlier in <cit.> in a correspondence between margin-based loss functions and f-divergences). So, from (<ref>), we getsup_ω∫_𝒳 (-p_r(x)ℓ(1,D_ω(x))-p_G_θ(x)ℓ(0,D_ω(x))) dx=∫_𝒳p_G_θ(x)f(p_r(x)/p_G_θ(x)) dx=D_f(P_rP_G_θ).Thus, the resulting min-max optimization in (<ref>) reduces to minimizing the f-divergence, D_f(P_rP_G_θ) with f as given in (<ref>).For the converse statement, first note that given a symmetric f-divergence, it follows from <cit.> that there exists a decreasing and convex margin-based loss function ℓ̃ such that f can be expressed in the form (<ref>). We may define an associated symmetric CPE loss ℓ(y,ŷ) withℓ(1,ŷ):=ℓ̃(l^-1(ŷ)),where l^-1 is the inverse of the same link function. Now repeating the steps as in (<ref>)-(<ref>), it is clear that the GAN based on this (symmetric) CPE loss results in minimizing the same symmetric f-divergence. It remains to verify that the symmetric CPE loss defined in (<ref>) is such that ℓ(1,ŷ) is decreasing so that the intuitive interpretation of vanilla GAN is retained and that it satisfies (<ref>) so that the optimal discriminator guesses uniformly at random when P_r=P_G_θ. Note that ℓ^'(1,ŷ)=ℓ̃^'(l^-1(ŷ))(l^-1)^'(ŷ)≤ 0 since the margin-based loss ℓ̃ is decreasing and the link function l (and hence its inverse) is increasing. So, ℓ(1,ŷ) is decreasing. Observe that the loss function ℓ(1,ŷ)=ℓ̃(l^-1(ŷ)) may not be convex in y even though the margin-based loss function ℓ̃(·) is convex. However, we show that the symmetric CPE loss associated with (<ref>) indeed satisfies (<ref>).-ℓ(1,t)-ℓ(0,t) =-ℓ(1,t)-ℓ(1,1-t)=-ℓ̃(l^-1(t))-ℓ̃(l^-1(1-t))≤ -2ℓ̃(1/2l^-1(t)+1/2l^-1(1-t))=-2ℓ̃(0)=-2ℓ̃(l^-1(1/2))=-ℓ(1,1/2)-ℓ(0,1/2),where (<ref>) follows since the margin-based loss ℓ̃(·) is convex, and (<ref>) and (<ref>) follow from (<ref>) and (<ref>), respectively.§ PROOF OF THEOREM <REF>For a fixed generator, G_θ, we first solve the optimization problemsup_ω∈Ω∫_𝒳α/α-1(p_r(x)D_ω(x)^α-1/α+p_G_θ(x)(1-D_ω(x))^α-1/α)dx.Consider the functiong(y)=α/α-1(ay^α-1/α+b(1-y)^α-1/α),for a,b>0 and y∈[0,1]. To show that the optimal discriminator is given by the expression in (<ref>), it suffices to show that g(y) achieves its maximum in [0,1] at y^*=a^α/a^α+b^α. Notice that for α>1, y^α-1/α is a concave function of y, meaning the function g is concave. For 0<α<1, y^α-1/α is a convex function of y, but since α/α-1 is negative, the overall function g is again concave. Consider the derivativeg^'(y^*)=0, which gives usy^*=a^α/a^α+b^α.This gives (<ref>). With this, the optimization problem in (<ref>) can be written as inf_θ∈ΘC(G_θ), whereC(G_θ) =α/α-1[∫_𝒳(p_r(x)D_ω^*(x)^α-1/α+p_G_θ(x)(1-D_ω^*(x))^α-1/α)dx-2]=α/α-1[∫_𝒳(p_r(x)( p_r(x)^α/p_r(x)^α+p_G_θ(x)^α)^α-1/α+p_G_θ(x)( p_r(x)^α/p_r(x)^α+p_G_θ(x)^α)^α-1/α)dx-2]=α/α-1(∫_𝒳(p_r(x)^α+p_G_θ(x)^α)^1/αdx-2)=D_f_α(P_r||P_G_θ)+α/α-1(2^1/α-2), where for the convex function f_α in (<ref>),D_f_α(P_r||P_G_θ)=∫_𝒳 p_G_θ(x)f_α(p_r(x)/p_G_θ(x)) dx=α/α-1(∫_𝒳(p_r(x)^α+p_G_θ(x)^α)^1/αdx-2^1/α).This gives us (<ref>). Since D_f_α(P_r||P_G_θ)≥ 0 with equality if and only if P_r=P_G_θ, we have C(G_θ)≥α/α-1(2^1/α-2) with equality if and only if P_r=P_G_θ. 
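As a numerical sanity check of this step (ours, with arbitrary illustrative values for a=p_r(x), b=p_G_θ(x), and α), a grid search confirms that g is maximized at y^*=a^α/(a^α+b^α):

import numpy as np

def g(y, a, b, alpha):
    # Pointwise objective from the proof: (alpha/(alpha-1)) * (a*y^((alpha-1)/alpha) + b*(1-y)^((alpha-1)/alpha)).
    e = (alpha - 1.0) / alpha
    return (alpha / (alpha - 1.0)) * (a * y ** e + b * (1.0 - y) ** e)

a, b, alpha = 0.7, 0.3, 2.5  # illustrative densities at a point x and an illustrative alpha
ys = np.linspace(1e-6, 1.0 - 1e-6, 100001)
y_grid = ys[np.argmax(g(ys, a, b, alpha))]
y_star = a ** alpha / (a ** alpha + b ** alpha)
print(abs(y_grid - y_star) < 1e-3)  # True

Repeating the check for α∈(0,1) gives the same agreement, consistent with the concavity argument above.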
§ PROOF OF THEOREM <REF>First, using L'Hôpital's rule we can verify that, for a,b>0,lim_α→ 1α/α-1((a^α+b^α)^1/α-2^1/α-1(a+b)) =alog(a/a+b/2)+blog(b/a+b/2).Using this, we haveD_f_1(P_r||P_G_θ) lim_α→ 1D_f_α(P_r||P_G_θ)=lim_α→ 1α/α-1(∫_𝒳(p_r(x)^α+p_G_θ(x)^α)^1/αdx-2^1/α)=lim_α→ 1[α/α-1∫_𝒳((p_r(x)^α+p_G_θ(x)^α)^1/α-2^1/α-1(p_r(x)+p_G_θ(x)))dx]=∫_𝒳p_r(x)logp_r(x)/(p_r(x)+p_G_θ(x)/2)dx+∫_𝒳p_G_θ(x)logp_G_θ(x)/(p_r(x)+p_G_θ(x)/2)dx=:2D_JS(P_r||P_G_θ), where (<ref>) follows by interchanging the limit and the integral by invoking the dominated convergence theorem because of the boundedness of f_α <cit.> andD_JS(·||·) in (<ref>) is the Jensen-Shannon divergence. Now, as α→ 1, (<ref>) equals inf_θ∈Θ2D_JS(P_r||P_G_θ)-log4 recovering the vanilla GAN.Substituting α=1/2 in (<ref>), we getD_f_1/2(P_r||P_G_θ) =-∫_𝒳(√(p_r(x))+√(p_G_θ(x)))^2dx+4=∫_𝒳(√(p_r(x))-√(p_G_θ(x)))^2dx=:2D_H^2(P_r||P_G_θ),where D_H^2(P_r||P_G_θ) is the squared Hellinger distance. For α=1/2, (<ref>) gives 2inf_θ∈ΘD_H^2(P_r||P_G_θ)-2 recovering Hellinger GAN (up to a constant).Noticing that, for a,b>0, lim_α→∞(a^α+b^α)^1/α=max{a,b} and defining 𝒜:={x∈𝒳:p_r(x)≥ p_G_θ(x)}, we haveD_f_∞(P_r||P_G_θ) lim_α→∞D_f_α(P_r||P_G_θ)=lim_α→∞α/α-1(∫_𝒳(p_r(x)^α+p_G_θ(x)^α)^1/αdx-2^1/α)=∫_𝒳max{p_r(x),p_G_θ(x)} dx-1=∫_𝒳max{p_r(x)-p_G_θ(x),0} dx=∫_𝒜(p_r(x)-p_G_θ(x)) dx=∫_𝒜p_r(x)-p_G_θ(x)/2 dx+∫_𝒜^cp_G_θ(x)-p_r(x)/2 dx=1/2∫_𝒳|p_r(x)-p_G_θ(x)| dx=:D_TV(P_r||P_G_θ),where (<ref>) follows by interchanging the limit and the integral by invoking the dominated convergence theorem because of the boundedness of f_α <cit.> and D_TV(P_r||P_G_θ) in (<ref>) is the total variation distance between P_r and P_G_θ. Thus, as α→∞, (<ref>) equals inf_θ∈ΘD_TV(P_r||P_G_θ)-1 recovering TV-GAN (modulo a constant).See Fig. <ref> for an illustration of the behavior of D_f_α for different values of α. § PROOF OF THEOREM <REF> We first derive the Fenchel conjugate f^*_α of f_α as follows:f_α^* (t) = usup (ut -f_α(u)) = α/α-1 usup (1+(1+α-1/α t)u - (1+u^α)^1/α).The optimum u_* is obtained by setting the derivative of ut -f_α(u) to zero, yielding1+α-1/α t = u^α-1_*(1+u^α_*)^1/α-1 = (u_*^α/1+u_*^α)^α-1/α,i.e.,u_*=u_*(t) = (s(t)/1-s(t))^1/αwiths(t)=(1+α-1/α t)^α/α-1.The verification that u_* is a global maximizer over u≥0 follows from(ut -f_α(u))^'' = -(u^α/1+u^α)^α-1(1+u^α)^-2α u^α-1<0for all u > 0. The relations (<ref>) and (<ref>) then lead tof_α^* (t) = u_*(t)t -f_α(u_*(t))= α/α-1(1+(1+α-1/α t)u_*(t) - (1+u_*(t)^α)^1/α) = α/α-1(1- (1+u_*(t)^α)^1/α-1)= α/α-1(1- (1-s(t))^α-1/α),where s is given by (<ref>). The domain dom(f_α^*) consists of values t such that 1+α-1/α t ≥ 0 and s(t) ≤ 1, i.e., t ∈ [-α/α-1,0] for α>1 and t ≤ 0 for α∈(0,1). Also note that f_1^*(t)=lim_α→ 1f_α^*(t)=lim_α→ 1α/α-1(1- (1-s(t))^α-1/α)= -log(1-e^t)for t≤0, where s is again given by (<ref>).In the following we consider α 1 with results also valid for α =1 by continuity. Let v ∈ℝ and considerd = s(g_f_α(v)) = (1+α-1/α g_f_α(v))^α/α-1.We first show that d ∈ [0,1] and then show that (<ref>) is satisfied. If α >1, then g_f_α(v) ∈ [-α/α-1,0]=dom(f^*_α). Therefore, d ∈ [0,1]. If α∈ (0,1), then g_f_α(v) ∈ [-∞,0]=dom(f^*_α). Therefore, (1+α-1/α g_f_α(v)) ∈ [1,∞], and hence d ∈ [0,1].Using (<ref>),ℓ_α(1,d) = α/α-1(1-d^α-1/α) = α/α-1(1-s(g_f_α(v))^α-1/α) =-g_f_α(v),andℓ_α(0,d) =α/α-1(1-(1-d)^α-1/α) = α/α-1(1-(1-s(g_f_α(v)))^α-1/α) =f^*_α(g_f_α(v)).Conversely, let d ∈ [0,1] and considerv = g_f_α^-1(-ℓ_α(1,d)) =g_f_α^-1(α/α-1(d^α-1/α - 1)).We first show that v ∈ℝ and then show that (<ref>) is satisfied. 
If α >1, then -ℓ_α(1,d) ∈ [-α/α-1,0]=dom(f^*_α). Therefore, v ∈ℝ. If α∈ (0,1), then d^α-1/α∈ [0,∞] and -ℓ_α(1,d) ∈ [-∞,0]=dom(f^*_α). Hence, v ∈ℝ.Using (<ref>),g_f_α(v) = -ℓ_α(1,d),ands(g_f_α(v)) = (1+α-1/α g_f_α(v) )^α/α-1=(1+α-1/α(α/α-1(d^α-1/α -1) ) )^α/α-1 = d,so thatf^*_α(g_f_α(v)) = α/α-1(1-(1-s(g_f_α(v)))^α-1/α) = α/α-1(1-(1-d)^α-1/α) = ℓ_α(0,d). § PROOF OF COROLLARY <REF>For Q_ω∈ A define D_ω∈ B such that d=D_ω(x) is obtained from (<ref>) with v = Q_ω(x) for all x ∈𝒳. By Theorem <ref>, g(Q_ω)=h(D_ω). Conversely, for D_ω∈ B define Q_ω∈ A such that v=Q_ω(x) is obtained from (<ref>) with d=D_ω(x) for all x ∈𝒳. Again by Theorem <ref>, h(D_ω)=g(V_ω).To show that k is bijective, we first show that s:dom(f_α^*)→ [-∞,1] defined in (<ref>) is bijective. Let the function s^-1:[-∞,1]→dom(f_α^*) be defined by s^-1(u)=α/α-1(u^α-1/α-1). Let t ∈dom(f_α^*). Thens^-1(s(t)) = α/α-1[((1+α-1/α t)^α/α-1)^α-1/α -1] = t.Now, let u ∈ [-∞,1]. Thens(s^-1(u)) = (1+α-1/α(α/α-1(u^α-1/α-1) ) )^α/α-1 = u.Therefore, s^-1 is the inverse of s, and hence s is bijective. As the composition of two bijective functions, k is also bijective.§ PROOF OF THEOREM <REF>As noted in the proof of Theorem <ref>, given a symmetric f-divergence, it follows from <cit.> that there exists a CPE (partial) loss ℓ such that f(u)=sup_t∈[0,1]-uℓ(t)-ℓ(1-t).We assume that the loss l is strictly convex as mentioned in the theorem statement. Note that f(u)=sup_v∈domf^* uv-f^*(v).Noticing that the inner optimization problems in the CPE loss GANand f-GAN formulations reduce to pointwise optimizations (<ref>) and (<ref>), respectively, it suffices to show that the variational forms of f in (<ref>) and (<ref>) are equivalent. To this end, we show that (<ref>) is equivalent to the optimization problemf(u)=sup_v∈ℝ_+uf^'(v)-[vf^'(v)-f(v)]which is known to be equivalent to (<ref>) <cit.>. Let k:ℝ_+→ [0,1] denote the bijective mapping from u∈ℝ_+ to the optimizer in (<ref>). So, k(u) satisfies-uℓ^'(k(u))+ℓ^'(1-k(u))=0.Note that it follows from implicit function theorem that k(u) is also differentiable.Fix a v∈ℝ_+. With this, we have f(v)=-vℓ(k(v))-ℓ(1-k(v)). On differentiating both sides of (<ref>) with respect to v, we getf^'(v) =-ℓ(k(v))+k^'(v)(vℓ^'(-k(v))+ℓ^'(1-k(v)))=-ℓ(k(v)),where (<ref>) follows from (<ref>) by replacing u with v.Considervf^'(v)-f(v) =-vℓ(k(v))+vℓ(k(v))+ℓ(1-k(v))=ℓ(1-k(v)),where (<ref>) follows from (<ref>) and (<ref>). Thus, with the change of variable t=k(v), the objective function in (<ref>) is equal to that of (<ref>). Since the function k is invertible, for a fixed t∈[0,1], we can also show that the change of variable v=k^-1(t) in the objective function of (<ref>) gives the objective function of (<ref>).§ PROOF OF THEOREM <REF>Without loss of generality we take the functions f_1 and f_2 to be non-negative using the fact that D_f(··)=D_f^'(··) whenever f^'(x)=f(x)+c(x-1), for some c∈ℝ (see <cit.>). Note that it suffices to show that any symmetric f-divergence D_f(··) is equivalent to D_TV(··), i.e., D_f(P_n||P)→ 0 as n→∞ if and only if D_TV(P_n||P)→ 0 as n→∞. To this end, we employ a property of any symmetric f-divergence which gives lower and upper bounds on it in terms of the total variation distance, D_TV. 
In particular, Feldman and Österreicher <cit.> proved that for any symmetric f-divergence D_f, probability distributions P and Q, we haveγ_f(D_TV(P||Q))≤ D_f(P||Q)≤γ_f(1)D_TV(P||Q), where the function γ_α:[0,1]→ [0,∞) defined by γ_f(x)=(1+x)f(1-x/1+x) is convex, strictly increasing and continuous on [0,1] such that γ_f(0)=0 and γ_f(1)=2f(0). We first prove the `only if' part, i.e., D_f(P_n||P)→ 0 as n→∞ implies D_TV(P_n||P)→ 0 as n→∞. Suppose D_f(P_n||P)→ 0. From the lower bound in (<ref>), it follows that γ_f(D_TV(P_n||P))≤ D_f(P_n||P), for each n∈N. This implies that γ_f(D_TV(P_n||P))→ 0 as n→∞. We show below that γ_f is invertible and γ_f^-1 is continuous. Then it would follow that γ_f^-1γ_f(D_TV(P_n||P))=D_TV(P_n||P)→γ_f^-1(0)=0 as n→∞ proving that Arimoto divergence is stronger than the total variation distance. It remains to show that γ_f is invertible and γ_f^-1 is continuous. Invertibility follows directly from the fact that γ_f is strictly increasing function. For the continuity of γ_α^-1, it suffices to show that γ_f(C) is closed for a closed set C⊆ [0,1]. The closed set C is compact since a closed subset of a compact set ([0,1] in this case) is also compact. Now since γ_f is continuous, γ_f(C) is compact because a continuous function of a compact set is compact. By Heine-Borel theorem, this gives that γ_f(C) is closed (and bounded) as desired.We prove the `if part' now, i.e., D_TV(P_n||P)→ 0 as n→∞ implies D_f(P_n||P)→ 0. It follows from the upper bound in (<ref>) that D_f(P_n||P)≤ D_TV(P_n||P), for each n∈N. This implies that D_f(P_n||P)→ 0 as n→∞ which completes the proof. § EQUIVALENCE OF THE JENSEN-SHANNON DIVERGENCE AND THE TOTAL VARIATION DISTANCEWe first show that the total variation distance is stronger than the Jensen-Shannon divergence, i.e., D_TV(P_nP)→ 0 as n→∞ implies D_JS(P_nP)→ 0 as n→∞. Suppose D_TV(P_n||P)→ 0 as n→∞. Using the fact that the total variation distance upper bounds the Jensen-Shannon divergence <cit.>, we have D_JS(P_n||P)≤ (log_e2) D_TV(P_n||P), for each n∈N. This implies that D_JS(P_n||P)→ 0 as n→∞ since D_TV(P_n||P)→ 0 as n→∞. The proof for the other direction, i.e., the Jensen-Shannon divergence is stronger than the total variation distance, is exactly along the same lines as that of <cit.> using triangle and Pinsker's inequalities. § PROOF OF THEOREM <REF>The proof is along similar lines as that of <cit.>. Below we argue that, with high probability, for every discriminator D_ω,|𝔼_X∼ P_r[ϕ(D_ω(X))]-𝔼_X∼ P_G_θ[ϕ(D_ω(X))]|≤ϵ/2, |𝔼_X∼ P_r[ψ(D_ω(X))]-𝔼_X∼ P_G_θ[ψ(D_ω(X))]|≤ϵ/2.Assuming ω^* to be an optimizer attaining d̃_ℱ(P_r,P_G_θ), it would then follow thatd̃_ℱ(P̂_r,P̂_G_θ)=sup_ω∈Ω|𝔼_X∼P̂_r[ϕ(D_ω(X))]+𝔼_X∼P̂_G_θ[ψ(D_ω(X))]|≥|𝔼_X∼P̂_r[ϕ(D_ω^*(X))]+𝔼_X∼P̂_G_θ[ψ(D_ω^*(X))]|≥|𝔼_X∼P_r[ϕ(D_ω^*(X))]+𝔼_X∼P_G_θ[ψ(D_ω^*(X))]|- |𝔼_X∼ P_r[ϕ(D_ω(X))]-𝔼_X∼P̂_r[ϕ(D_ω(X))]|-|𝔼_X∼ P_G_θ[ψ(D_ω(X))]-𝔼_X∼P̂_G_θ[ψ(D_ω(X))]|≥d̃_ℱ(P_r,P_G)-ϵ, where (<ref>) follows from the triangle inequality, (<ref>) follows from (<ref>) and (<ref>). Similarly, we can prove the other direction, i.e., d̃_ℱ(P̂_r,P̂_G)≤d̃_ℱ(P_r,P_G)+ϵ, which implies (<ref>).It remains to argue for the concentration bounds (<ref>) and (<ref>). Recall that the concentration bound in(<ref>) was proved in <cit.> by considering a ϵ/8LL_ϕ-net in Ω and leveraging the Lipschitzianity of the discriminator class ℱand the function ϕ. Using the exact same analysis, the concentration bound in (<ref>) can be proved separately by considering a ϵ/8LL_ψ-net. 
For both the bounds to hold simultaneously, it suffices to consider a ϵ/8Lmax{L_ϕ,L_ψ}-net along the same lines as the last part of <cit.>, thus completing the proof.§ PROOF OF THEOREM <REF>We upper bound the estimation error in terms of the Rademacher complexities of appropriately defined compositional classes building upon the proof techniques of <cit.>. We then bound these Rademacher complexities using a contraction lemma <cit.>. Details are in order.We first review the notion of Rademacher complexity. Let 𝒢_Ω:={g_ω: 𝒳→ℝ|ω∈Ω} and S={X_1.…,X_n} be a set of random samples in 𝒳 drawn independent and identically distributed (i.i.d.) from a distribution P_X. Then, the Rademacher complexity of 𝒢_Ω is defined asℛ_S(𝒢_Ω)=𝔼_X,ϵsup_ω∈Ω|1/n∑_i=1^nϵ_ig_ω(x_i)|where ϵ_1,…,ϵ_n are independent random variables uniformly distributed on {-1,+1}. We write our discriminator model in (<ref>) in the formD_ω(x)=σ(f_ω(x)),where f_ω is exactly the same discriminator model defined in <cit.>. Now by following the similar steps as in <cit.> by replacing f_ω(·) in the first and second expectation terms in the definition of d_ℱ_nn(·,·) by ϕ(D_ω(·)) and -ψ(D_ω(·)), respectively, we getd^(ℓ)_ℱ_nn(P_r,P̂_G_θ̂^*)-inf_θ∈Θ d^(ℓ)_ℱ_nn(P_r,P_G_θ)≤ 2sup_ω|𝔼_X∼ P_rϕ(D_ω(X))-1/n∑_i=1^nϕ(D_ω(X_i))|+2sup_ω,θ|𝔼_Z∼ P_Zψ(D_ω(g_θ(Z)))-1/m∑_j=1^mψ(D_ω(g_θ(Z_j)))| Let us denote the supremums in the first and second terms in (<ref>) by F^(ϕ)(X_1,…,X_n) and G^(ψ)(Z_1,…,Z_m), respectively. We next bound G^(ψ)(Z_1,…,Z_m). Note that ψ(σ(·)) is L_ψ/4-Lipschitz since it is a composition of two Lipschitz functions ψ(·) and σ(·) which are L_ψ- and 1/4-Lipschitz respectively. For any z_1,…,z_j,…,z_m,z_j^', using sup_r|h_1(r)|-sup_r|h_2(r)|≤sup_r |h_1(r)-h_2(r)|, we have G^(ψ)(z_1,…,z_j,…,z_m)-G^(ψ)(z_1,…,z_j^',…,z_m) ≤sup_ω,θ1/m|ψ(D_ω(g_θ(z_j)))-ψ(D_ω(g_θ(z_j^')))|≤sup_ω,θ1/m|ψ(σ(f_ω(g_θ(z_j))))-ψ(σ(f_ω(g_θ(z_j^'))))|≤L_ψ/4sup_ω,θ1/m|σ(f_ω(g_θ(z_j)))-σ(f_ω(g_θ(z_j^')))|≤L_ψ/42/m(M_k∏_i=1^k-1(M_iR_i))(N_l∏_j=1^l-1(N_jS_j))B_z=L_ψ Q_z/2m,where (<ref>) follows from (<ref>), (<ref>) follows because ψ(σ(·)) is L_ψ/4-Lipschitz, (<ref>) follows by using the Cauchy-Schwarz inequality and the fact that ||Ax||_2≤ ||A||_F|||x||_2 (as observed in <cit.>), and (<ref>) follows by definingQ_z(M_k∏_i=1^k-1(M_iR_i))(N_l∏_j=1^l-1(N_jS_j))B_z.Using (<ref>), the McDiarmid's inequality <cit.> implies that, with probability at least 1-δ,G^(ψ)(Z_1,…,Z_j,…,Z_m)≤𝔼_ZG^(ψ)(Z_1,…,Z_j,…,Z_m)+L_ψ Q_z/2√(log1/δ/(2m)). Following the standard steps similar to <cit.>, the expectation term in (<ref>) can be upper bounded as𝔼_ZG^(ψ)(Z_1,…,Z_j,…,Z_m) ≤ 2𝔼_Z,ϵsup_ω,θ|1/m∑_j=1^mϵ_jψ(D_ω(g_θ(Z_j)))| =:2ℛ_S_z(ℋ^(ψ)_Ω×Θ). So, we have, with probability at least 1-δ,G^(ψ)(Z_1,…,Z_j,…,Z_m) ≤ 2ℛ_S_z(ℋ^(ψ)_Ω×Θ)+√(log1/δ)L_ψ Q_z/2√(2m).Using a similar approach, we have, with probability at least 1-δ,F^(ϕ)(X_1,…,X_n)≤ 2ℛ_S_x(ℱ_Ω^(ϕ))+√(log1/δ)L_ϕ Q_x/2√(2n),where ℛ_S_x(ℱ_Ω^(ϕ)):=𝔼_X,ϵsup_ω|1/n∑_i=1^nϵ_iϕ(D_ω(X_i))|.Combining (<ref>), (<ref>), and (<ref>) using a union bound, we get, with probability at least 1-2δ,d^(ℓ)_ℱ_nn(P_r,P_G_θ̂^*)-inf_θ∈Θ d^(ℓ)_ℱ_nn(P_r,P_G_θ) ≤ 4ℛ_S_x(ℱ_Ω^(ϕ))+4ℛ_S_z(ℋ^(ψ)_Ω×Θ)+√(log1/δ)(L_ϕ Q_x/√(2n)+L_ψ Q_z/√(2m)). Now we bound the Rademacher complexities in the RHS of (<ref>). We present the contraction lemma on Rademacher complexity required to obtain these bounds. For A⊂ℝ^n, let ℛ(A):=𝔼_ϵ[sup_a∈ A|1/n∑_i=1^nϵ_ia_i|].For each i∈{1,…,n}, let γ_i:ℝ→ℝ be a ρ-Lipschitz function. 
Then, for A⊂ℝ^n,ℛ(γ∘ A)≤ρℛ(A),where γ∘ A:={(γ_1(a_1),…,γ_n(a_n)):a∈ A}.Note that ϕ(σ(·)) is L_ϕ/4-Lipschitz since it is a composition of two Lipschitz functions ϕ(·) and σ(·) which are L_ϕ- and 1/4-Lipschitz respectively. Consider ℛ_S_x(ℱ_Ω^(ϕ)) = 𝔼_X[ℛ({(ϕ(D_ω(X_1)),…,ϕ(D_ω(X_n))):ω∈Ω})]= 𝔼_X[ℛ({(ϕ(σ(f_ω(X_1))),…,ϕ(σ(f_ω(X_n)))):ω∈Ω})]≤L_ϕ/4𝔼_X[ℛ({(f_ω(X_1),…,(f_ω(X_n)):ω∈Ω})]≤L_ϕ Q_x√(3k)/4√(n)where (<ref>) follows from (<ref>), (<ref>) follows from Lemma <ref> by substituting γ(·)=ϕ(σ(·)), and (<ref>) follows from <cit.>. Using a similar approach,we obtainℛ_S_z(ℋ^(ψ)_Ω×Θ)≤L_ψ Q_z√(3(k+l-1))/4√(m).Substituting (<ref>) and (<ref>) into (<ref>) gives (<ref>). §.§ Specialization to α-GANLet ϕ_α(p)=ψ_α(1-p)=α/α-1(1-p^α-1/α). It is shown in <cit.> that ϕ_α(σ(·)) isC_h(α)-Lipschitz in [-h,h], for h>0, with C_h(α) as given in (<ref>). Now using the Cauchy-Schwarz inequality and the fact that ||Ax||_2≤ ||A||_F||x||_2, it follows that |f_ω(·)|≤ Q_x,|f_ω(g_θ(·))|≤ Q_z,where Q_x:=M_k∏_i=1^k-1(M_iR_i)B_x and with Q_z as in (<ref>). So, we have f_ω(·)∈[-Q_x,Q_x] and f_ω(g_θ(·))∈[-Q_z,Q_z]. Thus, we have that ψ_α(σ(·)) and ϕ_α(σ(·)) are C_Q_z(α)- and C_Q_x(α)-Lipschitz, respectively. Now specializing the steps (<ref>) and (<ref>) with these Lipschitz constants, we get the following bound with the substitutions L_ϕ/4← C_Q_x(α) and L_ψ/4← 4C_Q_z(α) in (<ref>):d^(ℓ_α)_ℱ_nn(P_r,P̂_G_θ̂^*)-inf_θ∈Θ d^(ℓ_α)_ℱ_nn(P_r,P_G_θ)≤4C_Q_x(α) Q_x√(3k)/√(n)+4C_Q_z(α) Q_z√(3(k+l-1))/√(m)+2√(2log1/δ)(C_Q_x(α)Q_x/√(n)+C_Q_z(α)Q_z/√(m)).§ PROOF OF THEOREM <REF> Let ϕ(·)=-ℓ_α(1,·) and consider the following modified version of d^ℓ_α_ℱ_nn(·,·) (defined in <cit.>):d^ℓ_α_ℱ_nn(P,Q) = sup_ω∈Ω (𝔼_X∼ P[ϕ(D_ω(X))]+𝔼_X∼ Q[ϕ(1-D_ω(X))] ) -2ϕ(1/2),whereD_ω(x)= σ(𝐰_k^𝖳r_k-1(𝐖_d-1r_k-2(… r_1(𝐖_1(x))))σ(f_ω(x)).Taking α→∞, we obtaind^ℓ_∞_ℱ_nn(P,Q) =sup_ω∈Ω (𝔼_X∼ P[D_ω(X)]-𝔼_X∼ Q[D_ω(X)] ).We first prove that d^ℓ_∞_ℱ_nn is a semi-metric.Claim 1: For any distribution pair (P,Q), d^ℓ_∞_ℱ_nn(P,Q)≥0. Consider a discriminator which always outputs 1/2, i.e., D_ω(x)=1/2 for all x. Note that such a neural network discriminator exists, as setting 𝐰_k=0 results in D_ω(x)=σ(0)=0. For this discriminator, the objective function in (<ref>) evaluates to 1/2-1/2=0. Since d^ℓ_∞_ℱ_nn is a supremum over all discriminators, we have d^ℓ_∞_ℱ_nn(P,Q)≥0.Claim 2: For any distribution pair (P,Q), d^ℓ_∞_ℱ_nn(P,Q)=d^ℓ_∞_ℱ_nn(Q,P).d^ℓ_∞_ℱ_nn(P,Q)=sup_ω∈Ω (𝔼_X∼ P[D_ω(X)]-𝔼_X∼ Q[D_ω(X)] ) =sup_𝐖_1,…,𝐰_k (𝔼_X∼ P[D_ω(X)]-𝔼_X∼ Q[D_ω(X)] ) (i)=sup_𝐖_1,…,-𝐰_k (𝔼_X∼ P[σ(-f_ω(x))]-𝔼_X∼ Q[σ(-f_ω(x))] ) (ii)=sup_𝐖_1,…,𝐰_k (𝔼_X∼ P[1-σ(f_ω(x))]-𝔼_X∼ Q[1-σ(f_ω(x))] )=sup_𝐖_1,…,𝐰_k (𝔼_X∼ Q[σ(f_ω(x))]-𝔼_X∼ P[σ(f_ω(x))] )= d^ℓ_∞_ℱ_nn(Q,P),where (i) follows from replacing 𝐰_k with -𝐰_k and (ii) follows from the sigmoid property σ(-x)=1-σ(x) for all x.Claim 3: For any distribution P, d^ℓ_∞_ℱ_nn(P,P)=0.d^ℓ_∞_ℱ_nn(P,P)=sup_ω∈Ω (𝔼_X∼ P[D_ω(X)]-𝔼_X∼ P[D_ω(X)] )=0. Claim 4: For any distributions P,Q,R, d^ℓ_∞_ℱ_nn(P,Q)≤ d^ℓ_∞_ℱ_nn(P,R)+d^ℓ_∞_ℱ_nn(R,Q).d^ℓ_∞_ℱ_nn(P,Q)=sup_ω∈Ω (𝔼_X∼ P[D_ω(X)]-𝔼_X∼ Q[D_ω(X)] ) =sup_ω∈Ω (𝔼_X∼ P[D_ω(X)] - 𝔼_X∼ R[D_ω(X)] + 𝔼_X∼ R[D_ω(X)]-𝔼_X∼ Q[D_ω(X)] )≤sup_ω∈Ω (𝔼_X∼ P[D_ω(X)] - 𝔼_X∼ R[D_ω(X)] ) +sup_ω∈Ω (𝔼_X∼ R[D_ω(X)]-𝔼_X∼ Q[D_ω(X)] ) = d^ℓ_∞_ℱ_nn(P,R)+d^ℓ_∞_ℱ_nn(R,Q). Thus, d^ℓ_∞_ℱ_nn is a semi-metric. 
The remaining part of the proof of the lower bound follows along the same lines as that of <cit.> by an application of Fano's inequality <cit.> (that requires the involved divergence measure to be a semi-metric), replacing d_ℱ_nn with d^ℓ_∞_ℱ_nn and noting that the additional sigmoid activation function after the last layer in the discriminator satisfies the monotonicity assumption so that C(𝒫(𝒳))>0 (for C(𝒫(𝒳)) defined in (<ref>)).§ PROOF OF THEOREM <REF> The proof to obtain (<ref>) is the same as that for Theorem <ref>, where α=α_D. The generator's optimization problem in (<ref>) with the optimal discriminator in (<ref>) can be written as inf_θ∈ΘV_α_G(θ,ω^*), whereV_α_G(θ,ω^*) =α_G/α_G-1[∫_𝒳(p_r(x)D_ω^*(x)^α_G-1/α_G+p_G_θ(x)(1-D_ω^*(x))^α_G-1/α_G)dx-2]=α_G/α_G-1[∫_𝒳(p_r(x)( p_r(x)^α_D/p_r(x)^α_D+p_G_θ(x)^α_D)^α_G-1/α_G + p_G_θ(x)( p_G_θ(x)^α_D/p_r(x)^α_D+p_G_θ(x)^α_D)^α_G-1/α_G)dx-2]=α_G/α_G-1[∫_𝒳p_G_θ(x)((p_r(x)/p_G_θ(x))^α_D(1-1/α_G)+1+1/((p_r(x)/p_G_θ(x))^α_D+1)^1-1/α_G)dx-2] = ∫_𝒳 p_G_θ(x)f_α_D,α_G(p_r(x)/p_G_θ(x)) dx + α_G/α_G-1(2^1/α_G-2 ), where f_α_D,α_G is as defined in (<ref>). Observe that if f_α_D,α_G is strictly convex, the first term in the last equality above equals an f-divergence which is minimized if and only if P_r = P_G_θ. We note that continuous extensions of D_f_α_D,α_G(P_r||P_G_θ) for α_D,α_G ∈{1,∞} exist and can be computed by interchanging the limit and integral following the dominated convergence theorem. In particular, as (α_D,α_G) → (1,1), D_f_α_D,α_G(P_r || P_G_θ) recovers D_JS(P_r || P_G_θ), and as (α_D,α_G) → (∞,∞), D_f_α_D,α_G(P_r || P_G_θ) recovers D_TV(P_r || P_G_θ). We also note that since theα-loss functions for both D and G have continuous extensions at 1 and ∞, we can obtain the same simplifications noted above by using the optimal discriminator strategies for the limiting points and the corresponding divergences for the generator's objective.Define the regions R_1 and R_2 as follows:R_1 {(α_D,α_G) ∈ (0,∞]^2 |α_D ≤ 1,α_G > α_D/α_D+1}and R_2 {(α_D,α_G) ∈ (0,∞]^2 |α_D > 1,α_D/2< α_G ≤α_D}. In order to prove that f_α_D,α_G is strictly convex for (α_D,α_G)∈ R_1∪ R_2, we take its second derivative, which yieldsf^''_α_D,α_G(u) = A_α_D,α_G(u) [(α_G+α_Dα_G-α_D)(u+u^α_D+α_D/α_G) +(α_G-α_Dα_G)(u^α_D/α_G+u^α_D+1)],where A_α_D,α_G(u)=α_D/α_Gu^α_D-α_D/α_G-2(1+u^α_D)^1/α_G-3.Note that A_α_D,α_G(u)> 0 for all u > 0 and α_D,α_G∈(0,∞]. Therefore, in order to ensure f^''_α_D,α_G(u)>0 for all u>0 it is sufficient to haveα_G+α_Dα_G-α_D > α_G(α_D-1)B_α_D,α_G(u),whereB_α_D,α_G(u) = u^α_D/α_G+u^α_D+1/u+u^α_D+α_D/α_Gfor u > 0. Since B_α_D,α_G(u) > 0 for all u> 0, the sign of the RHS of (<ref>) is determined by whether α_D ≤ 1 or α_D > 1. We look further into these two cases in the following:Case 1: α_D ≤ 1. Then α_G(α_D-1)B_α_D,α_G(u) ≤ 0 for all u > 0 and (α_D,α_G)∈(0,∞]^2. Therefore, we needα_G(1+α_D)-α_D > 0 ⇔α_G > α_D/α_D+1.Case 2: α_D > 1. Then α_G(α_D-1)B_α_D,α_G(u) > 0 for all u > 0 and (α_D,α_G)∈(0,∞]^2. In order to obtain conditions on α_D and α_G, we determine the monotonicity of B_α_D,α_G by finding its first derivative as follows: B^'_α_D,α_G(u) = (α_G-α_D)(u^2α_D-1)+α_Dα_G(u^α_D-α_D/α_G+1-u^α_D+α_D/α_G-1)/α_G u^-α_D/α_G(u+u^α_D+α_D/α_G)^2.Since the denominator of B^'_α_D,α_G is positive for all u>0 and (α_D,α_G)∈ (0,∞]^2, we just need to check the sign of the numerator.Case 2a: α_D>α_G. For u ∈ (0,1), u^2α_D-1 < 0 and u^α_D-α_D/α_G+1-u^α_D+α_D/α_G-1 >0,so B^'_α_D,α_G(u) > 0. For u > 1, u^2α_D-1 > 0 and u^α_D-α_D/α_G+1-u^α_D+α_D/α_G-1 < 0, so B^'_α_D,α_G(u) < 0. 
For u=1, B^'_α_D,α_G(u) = 0.Hence, B^'_α_D,α_G is strictly increasing for u∈ (0,1) and strictly decreasing for u ≥ 1. Therefore, B_α_D,α_G attains a maximum value of 1 at u=1. This means B_α_D,α_G is bounded, i.e. B_α_D,α_G∈ (0,1] for all u>0. Thus, in order for (<ref>) to hold, it suffices to ensure thatα_G+α_Dα_G-α_D > α_G(α_D-1) ⇔α_G > α_D/2.Case 2b: α_D<α_G. For u ∈ (0,1), u^2α_D-1 < 0 and u^α_D-α_D/α_G+1-u^α_D+α_D/α_G-1 < 0, so B^'_α_D,α_G(u) < 0. For u > 1, u^2α_D-1 > 0 and u^α_D-α_D/α_G+1-u^α_D+α_D/α_G-1 > 0, so B^'_α_D,α_G(u) > 0. Hence, B^'_α_D,α_G is strictly decreasing for u∈ (0,1) and strictly increasing for u ≥ 1. Therefore, B_α_D,α_G attains a minimum value of 1 at u=1. This means that B_α_D,α_G is not bounded above, so it is not possible to satisfy (<ref>) without restricting the domain of B_α_D,α_G. Thus, for (α_D,α_G)∈ R_1 ∪ R_2,V_α_G(θ,ω^*) =D_f_α_D,α_G(P_r||P_G_θ)+α_G/α_G-1(2^1/α_G-2).This yields (<ref>). Figure <ref>(a) illustrates the feasible (α_D,α_G)-region R_1∪ R_2. Note that D_f_α_D,α_G(P||Q) is symmetric since D_f_α_D,α_G(Q||P)= ∫_𝒳 p(x)f_α_D,α_G(q(x)/p(x)) dx =α_G/α_G-1[∫_𝒳p(x)((p(x)/q(x))^-α_D(1-1/α_G)-1+1/((p(x)/q(x))^-α_D+1)^1-1/α_G)dx-2^1/α_G] =α_G/α_G-1[∫_𝒳p(x)(q(x)/p(x)+(p(x)/q(x))^α_D(1-1/α_G)/(1+(p(x)/q(x))^α_D)^1-1/α_G)dx-2^1/α_G]=α_G/α_G-1[∫_𝒳q(x)(1+(p(x)/q(x))^α_D(1-1/α_G)/(1+(p(x)/q(x))^α_D)^1-1/α_G)dx-2^1/α_G] = D_f_α_D,α_G(P||Q).Since f_α_D,α_G is strictly convex and f_α_D,α_G(1)=0, D_f_α_D,α_G(P_r||P_G_θ)≥ 0 with equality if and only if P_r=P_G_θ. Thus, we have V_α_G(θ,ω^*)≥α_G/α_G-1(2^1/α_G-2) with equality if and only if P_r=P_G_θ. § PROOF OF THEOREM <REF> The generator's optimization problem in (<ref>) with the optimal discriminator in (<ref>) can be written as inf_θ∈ΘV^NS_α_G(θ,ω^*), whereV^NS_α_G(θ,ω^*) =α_G/α_G-1[1-∫_𝒳(p_G_θ(x)D_ω^*(x)^α_G-1/α_G)dx]=α_G/α_G-1[1-∫_𝒳p_G_θ(x)( p_r(x)^α_D/p_r(x)^α_D+p_G_θ(x)^α_D)^α_G-1/α_Gdx]=α_G/α_G-1[1-∫_𝒳p_G_θ(x) (p_r(x)/p_G_θ(x))^α_D(1-1/α_G)/((p_r(x)/p_G_θ(x))^α_D+1)^1-1/α_Gdx]= ∫_𝒳 p_G_θ(x)f^NS_α_D,α_G(p_r(x)/p_G_θ(x)) dx + α_G/α_G-1(1-2^1/α_G-1), where f^NS_α_D,α_G is as defined in (<ref>). Continuous extensions of D_f^NS_α_D,α_G(P_r||P_G_θ) for α_D,α_G ∈{1,∞} exist and can be computed by interchanging the limit and integral following the dominated convergence theorem. In order to prove that f^NS_α_D,α_G is strictly convex for (α_D,α_G)∈ R_NS= {(α_D,α_G) ∈ (0,∞]^2 |α_D > α_G(α_D-1)}, we take its second derivative, which yields f^''_α_D,α_G(u) = A_α_D,α_G(u) [(α_G-α_Dα_G+α_D) +α_G(1+α_D)u^α_D],where A_α_D,α_G is defined as in (<ref>). Since A_α_D,α_G(u)> 0 for all u > 0 and (α_D,α_G)∈(0,∞]^2, to ensure f^''_α_D,α_G(u)>0 for all u>0 it suffices to haveα_G-α_Dα_G+α_D/α_G(1+α_D) > -u^α_Dfor all u > 0. This is equivalent toα_G-α_Dα_G+α_D/α_G(1+α_D) > 0,which results in the conditionα_D > α_G(α_D-1)for (α_D,α_G)∈(0,∞]^2. Thus, for (α_D,α_G)∈ R_NS,V^NS_α_G(θ,ω^*) =D_f^NS_α_D,α_G(P_r||P_G_θ)+α_G/α_G-1(1-2^1/α_G-1).This yields (<ref>). Figure <ref>(b) illustrates the feasible (α_D,α_G)-region R_NS. Note that D_f^NS_α_D,α_G(P||Q) is not symmetric since D_f^NS_α_D,α_G(P||Q) ≠ D_f^NS_α_D,α_G(Q||P). Since f^NS_α_D,α_G is strictly convex and f^NS_α_D,α_G(1)=0, D_f^NS_α_D,α_G(P_r||P_G_θ)≥ 0 with equality if and only if P_r=P_G_θ. Thus, we have V^NS_α_G(θ,ω^*)≥α_G/α_G-1(1-2^1/α_G-1) with equality if and only if P_r=P_G_θ.
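As a numerical sanity check of the convexity analysis above (not part of the original proofs), the strict convexity of f_α_D,α_G on R_1∪ R_2 and of f^NS_α_D,α_G on R_NS can also be verified with finite differences. The closed forms used below are reconstructed from the last equalities of the two derivations above; the Python snippet and the sample (α_D,α_G) points are our own illustrative choices.

import numpy as np

def f_sat(u, aD, aG):
    # f_{alpha_D,alpha_G}(u), read off from the last equality of the saturating derivation:
    # (aG/(aG-1)) * [ (u^{aD(1-1/aG)+1} + 1) / (u^{aD} + 1)^{1-1/aG} - 2^{1/aG} ], so f_sat(1) = 0.
    c = aG / (aG - 1.0)
    num = u ** (aD * (1.0 - 1.0 / aG) + 1.0) + 1.0
    den = (u ** aD + 1.0) ** (1.0 - 1.0 / aG)
    return c * (num / den - 2.0 ** (1.0 / aG))

def f_ns(u, aD, aG):
    # f^NS_{alpha_D,alpha_G}(u), read off from the non-saturating derivation:
    # (aG/(aG-1)) * [ 2^{1/aG-1} - u^{aD(1-1/aG)} / (u^{aD} + 1)^{1-1/aG} ], so f_ns(1) = 0.
    c = aG / (aG - 1.0)
    ratio = u ** (aD * (1.0 - 1.0 / aG)) / (u ** aD + 1.0) ** (1.0 - 1.0 / aG)
    return c * (2.0 ** (1.0 / aG - 1.0) - ratio)

def min_second_derivative(f, aD, aG, grid, h=1e-4):
    # Central finite-difference estimate of f'' over a grid of u > 0; a positive minimum
    # is numerical evidence of strict convexity on that grid.
    return min((f(u - h, aD, aG) - 2.0 * f(u, aD, aG) + f(u + h, aD, aG)) / h ** 2 for u in grid)

if __name__ == "__main__":
    grid = np.linspace(0.05, 20.0, 400)
    for aD, aG in [(0.5, 1.2), (3.0, 2.0)]:   # one point in R_1, one in R_2
        print("saturating", (aD, aG), min_second_derivative(f_sat, aD, aG, grid))
    for aD, aG in [(0.5, 1.2), (2.0, 1.5)]:   # points satisfying alpha_D > alpha_G (alpha_D - 1)
        print("non-saturating", (aD, aG), min_second_derivative(f_ns, aD, aG, grid))

Both printed minima should come out positive for the chosen points, matching the analytical conditions derived above.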
§ PROOF OF THEOREM <REF> Saturating (α_D,α_G)-GANs:For the optimal discriminator D_ω^* defined in (<ref>), we firstderive ∂ D_ω^* / ∂ x using the quotient rule as∂ D_ω^*/∂ x= (p_r(x)^α_D + p_G_θ(x)^α_D)^-2[ (p_r(x)^α_D + p_G_θ(x)^α_D)(α_Dp_r(x)^α_D-1∂ p_r/∂ x)- p_r(x)^α_D(α_Dp_r(x)^α_D-1∂ p_r/∂ x + α_Dp_G_θ(x)^α_D-1∂ p_G_θ/∂ x) ]= (p_r(x)^α_D + p_G_θ(x)^α_D)^-2[ α_Dp_r(x)^2α_D-1∂ p_r/∂ x + α_Dp_r(x)^α_D-1p_G_θ(x)^α_D∂ p_r/∂ x - α_Dp_r(x)^2α_D-1∂ p_r/∂ x - α_Dp_r(x)^α_Dp_G_θ(x)^α_D-1∂ p_G_θ/∂ x]= (p_r(x)^α_D + p_G_θ(x)^α_D)^-2(α_Dp_r(x)^α_D-1p_G_θ(x)^α_D∂ p_r/∂ x - α_Dp_r(x)^α_Dp_G_θ(x)^α_D-1∂ p_G_θ/∂ x)= α_Dp_r(x)^α_Dp_G_θ(x)^α_D(p_r(x)^α_D + p_G_θ(x)^α_D)^-2(1/p_r(x)∂ p_r/∂ x - 1/p_G_θ(x)∂ p_G_θ/∂ x)= α_D D_ω^*(x)(1 - D_ω^*(x)) (1/p_r(x)∂ p_r/∂ x - 1/p_G_θ(x)∂ p_G_θ/∂ x).Next, we set μ = D_ω^*(x) and derive -∂ℓ_α_G(0, μ)/∂μ as follows-∂ℓ_α_G(0,μ)/∂μ= ∂/∂μ[ -α_G/α_G-1(1 - (1 - μ)^1 - 1/α_G)]= -(1 - μ)^-1/α_G.Lastly, to find the gradient -∂ℓ_α_G(0, D_ω^*(x))/∂ x, we apply the chain rule and substitute (<ref>) and (<ref>):-∂ℓ_α_G(0, D_ω^*(x))/∂ x= -∂ℓ_α_G(0, D_ω^*(x))/∂ D_ω^*×∂ D_ω^*/∂ x = C_x,α_D,α_G(1/p_G_θ(x)∂ p_G_θ/∂ x - 1/p_r(x)∂ p_r/∂ x),whereC_x,α_D,α_G = α_DD_ω^*(x)(1 - D_ω^*(x))^1 - 1/α_G,or equivalently,C_x,α_D, α_G = α_D P^(α_D)_Y|X(1|x) (1 - P^(α_D)_Y|X(1|x) )^1 - 1/α_G.Since the scalar C_x,α_D, α_G is positive and the only term reliant on α_D and α_G for a fixed P_G_θ, we conclude that the direction of -∂ℓ_α_G(0, D_ω^*(x))/∂ x is independent of these parameters. See Fig. <ref>(a) for a plot of C_x,α_D, α_G as a function of P_Y|X(1|x) for five (α_D, α_G) combinations. Non-saturating (α_D,α_G)-GANs:The proof for NS (α_D,α_G)-GANs follows similarly to that for saturating (α_D,α_G)-GANs. First, we set μ = D_ω^*(x) and derive ∂ℓ_α_G(1, μ) /∂μ as follows:∂ℓ_α_G(1, μ)/∂μ= ∂/∂μ[α_G/α_G - 1(1 - μ^1 - 1/α_G)]= -μ^-1/α_G.Then we derive the gradient ∂ℓ_α_G(1, D_ω^*(x))/∂ x using the chain rule and substituting (<ref>) and (<ref>):∂ℓ_α_G(1, D_ω^*(x))/∂ x= ∂ℓ_α_G(1, D_ω^*(x))/∂ D_ω^*×∂ D_ω^*/∂ x = C^NS_x,α_D,α_G(1/p_G_θ(x)∂ p_G_θ/∂ x - 1/p_r(x)∂ p_r/∂ x),whereC^NS_x,α_D,α_G = α_D(1 - D_ω^*(x))D_ω^*(x)^1 - 1/α_G,or equivalently,C^NS_x,α_D, α_G = α_D(1 - P^(α_D)_Y|X(1|x)) P^(α_D)_Y|X(1|x)^1 - 1/α_G.Since the scalar C^NS_x,α_D, α_G is positive and the only term reliant on α_D and α_G for a fixed P_G_θ, we conclude that the direction of ∂ℓ_α_G(1, D_ω^*(x))/∂ x is independent of these parameters. See Fig. <ref>(b) for a plot of C^NS_x,α_D, α_G as a function of P_Y|X(1|x) for five (α_D, α_G) combinations.§ PROOF OF PROPOSITION <REF> As noted in the proof of Theorems <ref> and <ref>, since ℓ_D is a symmetric CPE loss, the discriminator's optimization problem in (<ref>) reduces to solving the pointwise optimizationsup_t∈[0,1]-uℓ_D(1,t)-ℓ_D(1,1-t),u ≥ 0,For a fixed u ≥ 0, consider the functiong(t)=-uℓ_D(1,t)-ℓ_D(1,1-t),t ∈ [0,1].To show that the optimal discriminator D_ω^* satisfies the implicit equation in (<ref>), we find when g is maximized over [0,1]. 
Since ℓ_D(1,·) is strictly convex and therefore g is strictly concave, there exists a unique global maximum attained either at t^*∈{0,1} or at t^*∈ (0,1), in which case it occurs when g^'(t^*) = 0, which yields (<ref>) as follows:uℓ_D^'(1,1-t^*)-ℓ_D^'(t^*) = 0 ℓ_D^'(t^*) = uℓ_D^'(1-t^*).The generator's optimization problem in (<ref>) with the optimal discriminator D_ω^* satisfying (<ref>) can be written as inf_θ∈ΘV_ℓ_G(θ,ω^*), whereV_ℓ_G(θ,ω^*) =∫_𝒳(-p_r(x)ℓ_G(1,D_ω^*(x))-p_G_θ(x)ℓ_G(1,1-D_ω^*(x)))dx= ∫_𝒳 p_G_θ(x)(-p_r(x)/p_G_θ(x)ℓ_G(1,D_ω^*(x)) -ℓ_G(1,1-D_ω^*(x))) dx.Let A(p_r(x)/p_G_θ(x))D_ω^*(x) for x ∈𝒳 andf(u) = -uℓ_G(1,A(u)) -ℓ_G(1,1-A(u)) + 2ℓ_G(1,1/2),u ≥ 0.Note that the additional term 2ℓ_G(1,1/2) in <ref> is required to satisfy f(1)=0, where the 1/2 comes from the fact that D_ω^*(x)=1/2 for any x∈𝒳 such that p_r(x)=p_G_θ(x). Then f is convex by assumption, so (<ref>) becomes D_f(P_r||P_G_θ) -2ℓ_G(1,1/2), where D_f(P_r||P_G_θ) is the f-divergence with f as given in (<ref>).§ PROOF OF THEOREM <REF> By adding and subtracting relevant terms, we obtain d_ω^*(P_r,P_G_θ̂^*)(P_r,P_G_θ̂^*) -inf_θ∈Θ d_ω^*(P_r,P_G_θ)(P_r,P_G_θ)= d_ω^*(P_r,P_G_θ̂^*)(P_r,P_G_θ̂^*) - d_ω^*(P_r,P_G_θ̂^*)(P̂_r,P_G_θ̂^*)+ inf_θ∈Θ d_ω^*(P_r,P_G_θ)(P̂_r,P_G_θ) - inf_θ∈Θ d_ω^*(P_r,P_G_θ)(P_r,P_G_θ)+ d_ω^*(P_r,P_G_θ̂^*)(P̂_r,P_G_θ̂^*) - inf_θ∈Θ d_ω^*(P_r,P_G_θ)(P̂_r,P_G_θ). We upper-bound (<ref>) in the following three steps. Let ϕ(·) = -ℓ_G(1,·) and ψ(·) = -ℓ_G(0,·). We first upper-bound (<ref>). Let ω^*(θ̂^*)=ω^*(P_r,P_G_θ̂^*). Using (<ref>) yieldsd_ω^*(P_r,P_G_θ̂^*)(P_r,P_G_θ̂^*)- d_ω^*(P_r,P_G_θ̂^*)(P̂_r,P_G_θ̂^*)= 𝔼_X∼ P_r[ϕ(D_ω^*(θ̂^*)(X))] +𝔼_X∼ P_G_θ̂^*[ψ(D_ω^*(θ̂^*)(X))]- (𝔼_X∼P̂_r[ϕ(D_ω^*(θ̂^*)(X))] + 𝔼_X∼ P_G_θ̂^*[ψ(D_ω^*(θ̂^*)(X))] ) ≤| 𝔼_X∼ P_r[ϕ(D_ω^*(θ̂^*)(X))] - 𝔼_X∼P̂_r[ϕ(D_ω^*(θ̂^*)(X))] | ≤sup_ω∈Ω| 𝔼_X∼ P_r[ϕ(D_ω(X))] - 𝔼_X∼P̂_r[ϕ(D_ω(X))] |.Next, we upper-bound (<ref>). Let θ^* = min_θ∈Θ d_ω^*(P_r,P_G_θ)(P_r,P_G_θ) and ω^*(θ^*)=ω^*(P_r,P_G_θ^*). Theninf_θ∈Θ d_ω^*(P_r,P_G_θ)(P̂_r,P_G_θ)- inf_θ∈Θ d_ω^*(P_r,P_G_θ)(P_r,P_G_θ) ≤ d_ω^*(θ^*)(P̂_r,P_G_θ^*) - d_ω^*(θ^*)(P_r,P_G_θ^*)= 𝔼_X∼P̂_r[ϕ(D_ω^*(θ^*)(X))] +𝔼_X∼ P_G_θ^*[ψ(D_ω^*(θ^*)(X))]- (𝔼_X∼P_r[ϕ(D_ω^*(θ^*)(X))] + 𝔼_X∼ P_G_θ^*[ψ(D_ω^*(θ^*)(X))] )= 𝔼_X∼P̂_r[ϕ(D_ω^*(θ^*)(X))] - 𝔼_X∼P_r[ϕ(D_ω^*(θ^*)(X))] ≤sup_ω∈Ω| 𝔼_X∼ P_r[ϕ(D_ω(X))] - 𝔼_X∼P̂_r[ϕ(D_ω(X))] |. Lastly, we upper-bound (<ref>). Let θ = min_θ∈Θ d_ω^*(P_r,P_G_θ)(P̂_r,P_G_θ) and ω^*(θ)=ω^*(P_r,P_G_θ). 
Thend_ω^*(P_r,P_G_θ̂^*)(P̂_r,P_G_θ̂^*)- inf_θ∈Θ d_ω^*(P_r,P_G_θ)(P̂_r,P_G_θ)= d_ω^*(θ̂^*)(P̂_r,P_G_θ̂^*) - d_ω^*(θ)(P̂_r,P̂_G_θ) + d_ω^*(θ)(P̂_r,P̂_G_θ) - d_ω^*(θ)(P̂_r,P_G_θ) ≤ d_ω^*(θ̂^*)(P̂_r,P_G_θ̂^*) - d_ω^*(θ̂^*)(P̂_r,P̂_G_θ̂^*) + d_ω^*(θ)(P̂_r,P̂_G_θ) - d_ω^*(θ)(P̂_r,P_G_θ)= 𝔼_X∼P̂_r[ϕ(D_ω^*(θ̂^*)(X))] +𝔼_X∼ P_G_θ̂^*[ψ(D_ω^*(θ̂^*)(X))]- (𝔼_X∼P̂_r[ϕ(D_ω^*(θ̂^*)(X))] + 𝔼_X∼P̂_G_θ̂^*[ψ(D_ω^*(θ̂^*)(X))] )+ 𝔼_X∼P̂_r[ϕ(D_ω^*(θ)(X))] +𝔼_X∼P̂_G_θ[ψ(D_ω^*(θ)(X))]- (𝔼_X∼P̂_r[ϕ(D_ω^*(θ)(X))] + 𝔼_X∼P_G_θ[ψ(D_ω^*(θ)(X))] )= 𝔼_X∼ P_G_θ̂^*[ψ(D_ω^*(θ̂^*)(X))] - 𝔼_X∼P̂_G_θ̂^*[ψ(D_ω^*(θ̂^*)(X))]+ 𝔼_X∼P̂_G_θ[ψ(D_ω^*(θ)(X))] - 𝔼_X∼P_G_θ[ψ(D_ω^*(θ)(X))] ≤ 2sup_ω∈Ω,θ∈Θ| 𝔼_X∼ P_G_θ[ψ(D_ω(X))] - 𝔼_X∼P̂_G_θ[ψ(D_ω(X))] |.Combining (<ref>)-(<ref>), we obtain the following bound for (<ref>):d_ω^*(P_r,P_G_θ̂^*)(P_r,P_G_θ̂^*) -inf_θ∈Θ d_ω^*(P_r,P_G_θ)(P_r,P_G_θ) ≤ 2sup_ω∈Ω| 𝔼_X∼ P_r[ϕ(D_ω(X))] - 𝔼_X∼P̂_r[ϕ(D_ω(X))] |+ 2sup_ω∈Ω,θ∈Θ| 𝔼_X∼ P_G_θ[ψ(D_ω(X))] - 𝔼_X∼P̂_G_θ[ψ(D_ω(X))] |= 2sup_ω∈Ω| 𝔼_X∼ P_r[ϕ(D_ω(X))] - 1/n∑_i=1^n ϕ(D_ω(X_i)) |+ 2sup_ω∈Ω,θ∈Θ| 𝔼_X∼ P_G_θ[ψ(D_ω(X))] - 1/m∑_j=1^m ψ(D_ω(X_j)) |.Note that (<ref>) is exactly the same bound as that in (<ref>). Hence, the remainder of the proof follows from the proof of Theorem <ref>, where ϕ(·)-ℓ_G(1,·) and ψ(·)-ℓ_G(0,·). The specialization to (α_D,α_D)-GANs follows from setting ℓ_D=ℓ_α_D and ℓ_G=ℓ_α_G.§ ADDITIONAL EXPERIMENTAL RESULTS §.§ Brief Overview of LSGAN The Least Squares GAN (LSGAN) is a dual-objective min-max game introduced in <cit.>. The LSGAN objective functions, as the name suggests, involve squared loss functions for D and G which are written asinf_ω∈Ω 1/2(𝔼_X∼ P_r[(D_ω(X)-b)^2]+𝔼_X∼ P_G_θ[(D_ω(X)-a)^2])inf_θ∈Θ 1/2(𝔼_X∼ P_r[(D_ω(X)-c)^2]+𝔼_X∼ P_G_θ[(D_ω(X)-c)^2]).For appropriately chosen values of the parameters a, b, and c,(<ref>) reduces to minimizing the Pearson χ^2-divergence between P_r+P_G_θ and 2P_G_θ. As done in the original paper <cit.>, we use a=0, b=1 and c=1 for our experiments to make fair comparisons. The authors refer to this choice of parameters as the 0-1 binary coding scheme.§.§ 2D Gaussian Mixture RingIn Tables <ref> and <ref>, we report the success (8/8 mode coverage) and failure (0/8 mode coverage) rates over 200 seeds for a grid of (α_D, α_G) combinations for thesaturating setting. Compared to the vanilla GAN performance, we find that tuning α_D below 1 leads to a greater success rate and lower failure rate. However, in this saturating loss setting, we find that tuning α_G away from 1 has no significant impact on GAN performance.In Table <ref>, we detail the success rates for the NS setting. We note that for this dataset, no failures, and therefore, no vanishing/exploding gradients, occurred in the NS setting. In particular, we find that the (0.5,1.2)-GAN doubles the success rate of the vanilla (1,1)-GAN, which is more susceptible to mode collapse as illustrated in Figure <ref>. We also find that LSGAN achieves a success rate of 32.5%, which is greater than vanilla GAN but less than the best-performing (α_D, α_G)-GAN. §.§ Celeb-A & LSUN Classroom The discriminator and generator architectures used for the Celeb-A and LSUN Classroom datasets are described in Tables <ref> and <ref> respectively. Each architecture consists of four CNN layers, with parameters such as kernel size (i.e., size of the filter, denoted as “Kernel"), stride (the amount by which the filter moves), and the activation functions applied to the layer outputs. Zero padding is also assumed. 
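For readers who want a concrete picture of such a four-layer CNN discriminator and generator, a minimal PyTorch sketch is given below. The channel widths, kernel sizes, strides, and the 64×64 resolution are placeholder assumptions chosen only for illustration; the exact values are those listed in the tables and are not repeated in the text.

import torch
import torch.nn as nn

class Discriminator(nn.Module):
    # Four convolutional layers with batch normalization ("BN") and a sigmoid output head.
    def __init__(self, channels=3, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, kernel_size=4, stride=2, padding=1),      # 64x64 -> 32x32
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(width, width * 2, kernel_size=4, stride=2, padding=1),     # 32x32 -> 16x16
            nn.BatchNorm2d(width * 2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(width * 2, width * 4, kernel_size=4, stride=2, padding=1), # 16x16 -> 8x8
            nn.BatchNorm2d(width * 4),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(width * 4, 1, kernel_size=8, stride=1, padding=0),         # 8x8 -> 1x1
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x).view(x.size(0))

class Generator(nn.Module):
    # Four transposed-convolution layers with BN, mapping a latent vector to a 64x64 image.
    def __init__(self, latent_dim=100, channels=3, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, width * 4, kernel_size=8, stride=1, padding=0),  # 1x1 -> 8x8
            nn.BatchNorm2d(width * 4),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(width * 4, width * 2, kernel_size=4, stride=2, padding=1),   # 8x8 -> 16x16
            nn.BatchNorm2d(width * 2),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(width * 2, width, kernel_size=4, stride=2, padding=1),       # 16x16 -> 32x32
            nn.BatchNorm2d(width),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(width, channels, kernel_size=4, stride=2, padding=1),        # 32x32 -> 64x64
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))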
In both tables, “BN" represents batch normalization, a technique that normalizes the inputs to each layer using a batch of samples during model training. Batch normalization is commonly employed in deep learning to prevent cumulative floating point errors and overflows, and to ensure that all features remain within a similar range. This technique serves as a computational tool to address vanishing and/or exploding gradients. In Table <ref>, we collate the FID results for both datasets as a function of the learning rates. This table captures the percentage (out of 50 seeds) of FID scores below a desired threshold, which is 80 for the CELEB-A dataset and 800 for the LSUN Classroom dataset. We first focus on the CELEB-A dataset: Table <ref> demonstrates that for a learning rate of 1 × 10^-4, all GANs (vanilla, different (α_D,α_G)-GANs, and LSGANs) achieve an FID score below 80 at least 93% of the time. However, the instability of vanilla GAN is also evident in Table <ref>, where for a slightly higher learning rate of 6 × 10^-4, the (1,1)-GAN achieves an FID score below 80 only 60% of the time whereas at least one (α_D,α_G=1)-GAN consistently performs better than 76% over all chosen learning rates. We observe that tuning α_D below 1 contributes to stabilizing the FID scores over the 50 seeds while maintaining relatively low scores on average. This stability is emphasized in Table <ref>, in particular for the (0.7,1)-GAN, as it achieves an FID score below 80 at least 80% of the time for 7 out of the 10 the learning rates.Table <ref> also illustrates similar results for the LSUN Classroom dataset. However, increasing it to 2 × 10^-4 leads to instability in the vanilla (1,1)-GAN across 50 seeds. | http://arxiv.org/abs/2310.18291v1 | {
"authors": [
"Monica Welfert",
"Gowtham R. Kurri",
"Kyle Otstot",
"Lalitha Sankar"
],
"categories": [
"cs.LG",
"cs.IT",
"math.IT",
"stat.ML"
],
"primary_category": "cs.LG",
"published": "20231027172907",
"title": "Addressing GAN Training Instabilities via Tunable Classification Losses"
} |
A Framework for Automated Measurement of Responsible AI Harms in Generative AI Applications Ahmed Magooda*, Alec Helyar*, Kyle Jackson*, David Sullivan, Chad Atalla, Emily Sheng, Dan Vann, Richard Edgar, Hamid Palangi, Roman Lutz, Hongliang Kong, Vincent Yun, Eslam Kamal, Federico Zarfati, Hanna Wallach, Sarah Bird, Mei Chen January 14, 2024 ==============================================================================================================================================================================================================================================Stylistic headline generation is the task of generate a headline that not only summarizes the content of a news, but also reflects a desired style that attracts users. As style-specific news-headline pairs are scarce, previous research has focused on unsupervised approaches using a standard headline generation dataset and mono-style corpora. In this work, we follow this line and propose StyleBART, an unsupervised approach for stylistic headline generation. Our method decorates the pretrained BART model with adapters that are responsible for different styles and allows the generation of headlines with diverse styles by simply switching the adapters. Different from previous works, StyleBART separates the task of style learning and headline generation, making it possible to freely combine the base model and the style adapters during inference. We further propose an inverse paraphrasing task to enhance the style adapters. Extensive automatic and human evaluations show that StyleBART achieves new state-of-the-art performance in the unsupervised stylistic headline generation task, producing high-quality headlines with the desired style. Code is available at https://github.com/sufenlp/StyleBARThttps://github.com/sufenlp/StyleBART.§ INTRODUCTIONThe sequence-to-sequence-based neural headline generation (HG) model <cit.> has demonstrated its ability to generate factual, concise, and fluent headlines <cit.>. Yet, the headlines are also expected to have stylistic attributes to draw more attention of the audiences. To address the problem, <cit.> propose the task of Stylistic Headline Generation (SHG), which aims to generate a headline with a specified style such as humorous, romantic and clickbaity. However, acquiring enough parallel data for SHG is almost impossible, as the creation of headlines with specific styles demands creativity and often consumes significant effort. Hence, researchers <cit.> in turn explore unsupervised approaches which only require a standard headline generation dataset and non-parallel stylistic text corpora. Some existing works propose a pipeline method <cit.> which firstly generates a plain headline and then introduces the pre-specified style with a style transfer model. However, this pipeline method require two models during inference, which brings additional latency and storage cost.Some other works <cit.> jointly learn a plain headline generation model on news-headline pairs, and a denoising autoencoder on the stylistic text corpus. However, these approaches require to carefully design the scheduling and parameter sharing mechanisms between tasks. Moreover, as the task of headline generation and style learning are entangled, the entire model has to be rebuilt when facing a new style or task.In this paper, we propose StyleBART, an unsupervised SHG model based on the BART model <cit.>.Instead of using a multitask learning framework, StyleBART disentangles the style and the specific task (e.g. 
headline generation) by the design of model architecture and training strategy. Intuitively, this enables StyleBART to separately train the style and task modules, then combine them as required during inference (Figure <ref>). Hence, StyleBART is more flexible and training-efficient compared with baselines <cit.> which require training models for all style and task combinations. Specifically, StyleBART utilizes style adapters as plug-and-play modules for the style control. The style adapters are learned at the pretraining stage through the inverse paraphrasing task on the style corpus, while the base HG model is learned at the fine-tuning stage on news-headline pairs. During inference, we achieve unsupervised SHG by switching to the target style adapter. In this way, StyleBART has the same high decoding efficiency as <cit.>, while overcoming its problems in task scheduling and inefficient training. We evaluate StyleBART using the same three SHG tasks described in <cit.>. Both automatic and human evaluations demonstrate the superiority of our method over baselines. § APPROACH §.§ Problem Formulation We denote the parallel news-headline dataset as D={⟨ x,y⟩}, where each pair ⟨ x,y⟩ consists of a news article x and its corresponding plain headline y. The corpus for the i^th style s_i is denoted as T^s_i={t^s_i}. Our goal is to generate stylistic headline y^s_i with style s_i given the news article x. Note that this is an unsupervised setup, as no news article and stylistic headline pairs are available.§.§ StyleBART Architecture As shown in Figure <ref>, our model consists of two parts: the BART model and the adapter modules. We use the BART as the base model for StyleBART. Adapters <cit.> are light-weight bottleneck layers inserted into a base model. It is designed as a parameter-efficient method to fine-tune the base model for a new task <cit.>, language <cit.> or domain <cit.>. In StyleBART, we instead use the adapters to control the style of the model output. Formally, following <cit.>, an adapter module A is composed of layer normalization (LN) of the input z∈R^h, down-projection (W_down∈R^h× b) with the bottleneck dimension b, non-linear function (ReLU), up-projection (W_up∈R^b× h), and a residual connection with the input z:A(z) = W^T_upReLU(W_down^TLN(z))+z.We insert the adapters into each decoder layer of the base model. For each style, we have a set of style adapters. In the following, we use θ^b to denote the set of parameters of the BART model, and θ^s_i to denote that of the style adapters for style s_i. §.§ Training and Inference We divide the training process of StyleBART into three steps (Figure <ref>): (1) Style adapter pretraining, which learns the style adapters to control the style of the model output by pretraining on the style dataset T^s_i; (2) Headline generation fine-tuning, which optimizes the base model to generate plain headline on the headline dataset D; (3) Stylistic headline generation, which generates stylistic headline by switching to the target style adapters during inference. Step 1: Style Adapter Pretraining. The style adapters can be trained with the style dataset T^s_i by the denoising auto-encoding task, which reconstructs the sentence t^s_i from g_n(t^s_i). Here g_n is a noise function that generates a perturbed version of the input, such as token masking and token deletion used during BART pretraining <cit.> and our baselines <cit.>. However, this training method is suboptimal. 
Following <cit.>, the style can be loosely defined as the common patterns of lexical choice and syntactic constructions that are distinct from the content of a sentence. Considering how the noise function g_n works, we argue that the corrupted text g_n(t^s_i) still contains the stylistic information. As a result, the model may learn to guide the style of its output with the style of its input, which deviates from our goal to control the output style with style adapters. We call this undesirable spurious correlation <cit.>. Instead, we propose to address this issue with inverse paraphrasing method. Inspired by <cit.>, we replace g_n(t^s_i) with g_p(t^s_i), which is generated by feeding t^s_i to a paraphrase model[We use the STRAP model <cit.>.] trained to maximize diversity. Table <ref> displays examples of stylistic sentences and their perturbed versions with the noise function g_n and the paraphrase model g_p. Intuitively, the paraphrase model can better strip away information from t^s_i that is predictive of its original style. As a result, our model has to rely on the style adapters instead of the input sentence to control its output style, enhancing the adapters' ability for style control. Specifically, to train the adapters for style s_i, we first feed each sentence t^s_i to the paraphrase model and get the perturbed sentence g_p(t^s_i). Then we train the corresponding style adapters while freezing all other parameters with inverse paraphrasing:θ̂^s_i = max_θ^s_i∑_t^s_i∈ T^s_ilog p(t^s_i| g_p(t^s_i);θ^s_i,θ^b).Since the input is style-agnostic and the other parameters are freezing, the model has to rely on the style adapters to control its output style. We also learn a style-less adapter θ^s_0 in the same way except replacing T^s_i with a dataset T^s_0, which consists of plain text from BART pretraining.[We choose plain text from BART pretraining instead of the headline generation dataset in order to separate the style and downstream task at the data level.] Step 2: Headline Generation Fine-tuning. In this step, StyleBART learns headline generation on the headline dataset D={⟨ x,y⟩}. Since the headline is style-less, we insert the style-less adapters into StyleBART and finetune it on the headline generation task. This step is required to force the model to learn the task of headline generation. During finetuning, we only update the parameters of the encoder while freezing the style-less adapters and all other model parameters:θ̂^b_enc = max_θ^b_enc∑_<x,y> ∈Dlog p(y | x; θ̂^s_0, θ^b).In this way, we limit the computational cost, and more importantly, mitigate the catastrophic forgetting problem in style control, thus facilitating the switching of different style adapters in Step 3.Step 3: Stylistic Headline Generating. To generate a headline in a given style, we can achieve this by replacing the style-less adapters of StyleBART with the corresponding style-specific adapters. StyleBART has already learned the task of headline generation. Therefore, by switching the style-less adapters to the style-specific adapters, we can obtain a style-specific headline generation model. We use θ̂^b, θ̂^s_0 and θ̂^s_i to denote the model parameters of the base model, the style-less adapters and the style adapters after Step 2. Then given a news article, StyleBART outputs its style-specific headline y^s_i withy^s_i = max_y p(y | x;θ̂^s_i, θ̂^b). 
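A compact sketch of these three steps is given below. It is our simplified illustration rather than the released StyleBART code: the Adapter module implements A(z) = W_up^T ReLU(W_down^T LN(z)) + z with bottleneck dimension b, and the helper functions (whose names and the model attributes they touch are hypothetical) show how the style-less adapters are plugged in for encoder-only fine-tuning (Step 2) and swapped for the target style adapters at inference time (Step 3).

import torch
import torch.nn as nn

class Adapter(nn.Module):
    # Bottleneck adapter A(z) = W_up^T ReLU(W_down^T LN(z)) + z, with bottleneck dimension b.
    def __init__(self, hidden_dim, bottleneck_dim=64):
        super().__init__()
        self.layer_norm = nn.LayerNorm(hidden_dim)
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)

    def forward(self, z):
        return self.up(torch.relu(self.down(self.layer_norm(z)))) + z

def set_adapters(decoder_layers, adapters):
    # One adapter per decoder layer; swapping this list switches the output style.
    for layer, adapter in zip(decoder_layers, adapters):
        layer.adapter = adapter

def configure_step2(model, styleless_adapters):
    # Step 2: plug in the style-less adapters and fine-tune on news-headline pairs,
    # updating only the encoder while the decoder and all adapters stay frozen.
    set_adapters(model.decoder_layers, styleless_adapters)
    for p in model.parameters():
        p.requires_grad = False
    for p in model.encoder.parameters():
        p.requires_grad = True

def generate_with_style(model, style_adapters, news_article):
    # Step 3: switch the style-less adapters for the target style adapters and decode.
    set_adapters(model.decoder_layers, style_adapters)
    return model.generate(news_article)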
§ EXPERIMENTS§.§ Datasets Following <cit.>, the experiment datasets consist of a headline generation dataset CNN-NYT, and three stylistic text datasets for humorous, romantic and clickbaity. The CNN-NYT dataset consists of 146K news-headline pairs from two sources: the New York Times <cit.> and the CNN dataset <cit.>. We use the same dataset split as <cit.>. Each style dataset contains 500K style-specific sentences. The humor and romance datasets are collected from the novels in the corresponding genres of BookCorpus <cit.>. The clickbait dataset is obtained from The Examiner-SpamClickBait News dataset.[https://www.kaggle.com/datasets/therohk/examine-the-examiner]To train the style-less adapters, we also build a style-less dataset, which consists of 500K sentences randomly sampled from BookCorpus, one of the datasets used in BART pretraining. For style datasets, we use the same split as <cit.>. For the style-less dataset, we randomly sample 3,000 sentences for both the validation and test set, leaving the rest as the training set.§.§ BaselinesWe compare StyleBART with the following baselines:∙ Neural Headline Generation (NHG): Finetuning all parameters of BART on the plain headline generation dataset.∙ Two-Step Decoding(TSD): A two-step decoding method that first generates a plain headline and then injects style with an unsupervised style transfer model <cit.>.∙ Multitask: A multitask framework that jointly trains on plain headline generation and stylistic text denoising with the BART model. ∙ TitleStylist <cit.>: An approach similar to Multitask, but with carefully designed parameter sharing and switching between tasks.∙ S-SHG <cit.>[Our reimplemented S-SHG performs slightly worse than the original paper. However, the paper provides no open-source code or implementation details.]: An approach that first constructs the stem and syntax of the headline similarly to TitleStylist, and then populates that with substantive context.§.§ Evaluation MetricsAutomatic Evaluation. We use automatic evaluation to measure the quality and style of the generated headline. For the quality, we use ROUGE <cit.> and BERTScore <cit.> to measure the relevance of the generated headlines to the reference headline and PPL to evaluate its language fluency. The PPL is calculated with OpenAI GPT2 <cit.> finetuned on the plain headlines following <cit.>. To evaluate the style, we use PPL-S, which computes the PPL score of the generated headline using GPT2 finetuned on the corresponding style corpus. Human Evaluation.We conducted a human evaluation to more comprehensively assess our model. We randomly sampled 50 news articles from each test set and asked three judges to rate the outputs from TitleStylist, S-SHG and our StyleBART.[We do not include TSD, Multitask as they are much worse according to previous work and our automatic evaluation.] To assess generation quality, we ask the judges to rate from 1 to 10 (integer values) from three aspects: 1) Relevance—how semantically relevant the headline is to the news article. 2) Attractiveness—how appealing they feel the headline is. 3) Fluency—how comprehensive and easy-to-read the headline is. We then report the average score for each aspect across all test samples and judges. For the style evaluation, we ask the judges to choose the best headline of each style from an entire set of TitleStylist, S-SHG and StyleBART's outputs. 
We then report the percentage of the times each model is chosen as the best.§.§ Model ConfigurationWe implemented StyleBART with the pretrained BART model using Hugging Face. We set the bottleneck dimension b of the adapters to be 64. To pretrain the adapters, we use the AdamW optimizer <cit.> with β_1=0.9 and β_2=0.999. The learning rate is 5e-5. For fine-tuning on the headline generation dataset, we use the same AdamW optimizer and learning rate. For all steps of training, we use a batch size of 8. At inference time, we use beam search with a beam size of 4. All experiments are run on a single NVIDIA RTX 2080ti GPU, except that LLaMA2-InsTuning is run on a single NVIDIA A100 40G GPU§ RESULTS AND DISCUSSION §.§ Automatic Evaluation ResultsTable <ref> shows the automatic evaluation results of our proposed StyleBART and all baselines. Content Relevance. We use ROUGE (R1,R2, and RL) and BERTScore (BERT) to measure the relevance of the generated headlines to the reference headline. As can be seen, NHG, StyleBART-N, and MultiTask achieve the highest relevance score. This can be explained by that more stylistic headlines would lose some relevance as 1) the reference headlines are style-less; 2) the stylistic headline may use more words outside the news body for improved creativity <cit.>. For stylistic headline generation, TSD has the worst relevance score. StyleBART performs slightly worse than TitleStylist (-0.6 averaged RL and -0.1 averaged BERT) and better than S-SHG (+1.3 averaged RL and +0.0 averaged BERT), validating StyleBART's ability in this aspect.Language Perplexity. We use PPL on the GPT2 fine-tuned on plain headlines to measure the fluency of the generated headline. As can be seen, StyleBART surpasses all baselines by a significant margin except that it is slightly worse than TitleStylist. This may be because the headlines from StyleBART have stronger style and more stylistic headlines would also lose some PPL. Style Strength. We use PPL-S to measure the style strength of the generated headline. As can be seen, StyleBART generates headlines with the strongest style across all stylistic headline generation tasks. Compared with the baseline TitleStylist (resp. S-SHG), StyleBART obtains 119.5 (resp. 114.2) lower averaged PPL-S score. The baseline MultiTask performs the worst in style control. Moreover, StyleBART disentangles style learning and headline generation learning, thus only trains once on the headline generation dataset to support all three styles. In contrast, all baselines except TSD require training three times on the headline generation dataset for the three stylistic generation tasks.[The baselines can also support multiple styles within one model by jointly training on a headline generation dataset and all style corpora. However, the performance will decrease according to <cit.>.]§.§ Human Evaluation ResultsTable <ref> and Table <ref> present the human evaluation results. Generation Quality. We assess generation quality in relevance, attraction, and fluency, as shown in Table <ref>. StyleBART performs the best in all three aspects compared to baselines. The results are slightly inconsistent with automatic quality evaluation. This may be explained by that automatic quality evaluation metrics favor less stylistic headlines, while humans do not have such bias.Style Strength. We measure the style strength in Table <ref>. As can be seen, StyleBART has the highest average selection percentage, followed by the S-SHG model. 
This is consistent with what we find in the automatic measurement. §.§ Comparison with LLMs In this part, we compare StyleBART with methods using Large Language Models (LLMs), as presented in Table <ref>. For GPT3.5-prompting, we perform few-shot prompting with the gpt-3.5-turbo[https://platform.openai.com/docs/models/gpt-3-5] API. We provide stylistic sentences and news-plain headline pairs in the prompt and query the model to generate stylistic headlines for the input news. For LLaMA2-InsTuning, we conduct instruction tuning with LoRA on LLaMA2 <cit.> for both the inverse paraphrasing task and the news headline generation task. Then we perform stylistic headline generation during testing. Appendix <ref> provides more details. We find that StyleBART overall generates headlines with the best content relevance and the strongest style, demonstrating the superiority of StyleBART even in the era of large language models.[Note that StyleBART and these two LLM-based methods are not strictly comparable due to the difference in model size and training data. We include the comparison as a reference.] §.§ Ablation StudyDesign Choices of Style Adapter Pretraining. Table <ref> shows the effect of different design choices at the style adapter pretraining step. StyleBART-para represents training the adapters with the denoising task instead of the inverse paraphrasing task. StyleBART-s_0 adapters fine-tune the BART model without style-less adapters at step 2, thus without pretraining the style-less adapters. As can be seen, all methods perform similarly in style-less headline generation. When it comes to stylistic headline generation, StyleBART-s_0 adapters decrease dramatically in both the summarization quality and the style control. The inverse paraphrasing task is crucial for style control, while slightly decreasing the relevance of the generated headline to the reference headline. This may be because the reference headlines are style-less, while the headlines produced by using the inverse paraphrasing task have a stronger style. Trainable Parameters of Headline Generation Fine-tuning. We compare different choices of trainable parameters at the headline generation fine-tuning step, as illustrated in Table <ref>. As can be seen, all fine-tuning methods get similar scores when generating style-less headlines, indicating that fine-tuning only partial parameters is enough for learning the headline generation task. When comparing on the SHG task, only updating the encoder can best control the style of the generated headlines among all fine-tuning methods. As the style adapters are inserted at the decoder, freezing the decoder and cross-attention can better maintain their style control ability obtained during pretraining. §.§ Case StudyIn this section, we present examples of generated stylistic headlines, as shown in Table <ref>. Again, weconcentrate on StyleBART, TitleStylist, and S-SHG, as they outperform the other methods.From this example as well as others, we find that all three methods generate relevant and fluent headlines. However, StyleBART is better at style control. For example, the phrase "won't last forever" used in the romantic headline by StyleBART is a common expression in romantic contexts. StyleBART uses questions to raise the reader's curiosity in clickbaity headline. 
We also observe that when generating headlines of different styles, StyleBART produces more diverse results, while TitleStylist and S-SHG are more likely to change only a few words or expressions.§.§ Extension to Stylistic Story TellingTo test whether StyleBART can flexibly combine tasks and styles, we conduct experiments on stylistic story telling. Specifically, giving the first sentence of a story, this task generates story follow-ups with a desired style. We use the ROCStories dataset <cit.> as the standard story telling dataset which contains around 100,000 stories. We randomly select 4080 stories for both validation and test set and leave the rest as the training set. With the dataset, we fine-tune our base model following Step 2 (Section <ref>) and combine the fine-tuned base model with existing humor/romance/clickbait style adapters for inference. In this way, we efficiently build a stylistic story telling model which supports three styles while only requiring fine-tuning once. Table <ref> shows the automatic evaluation results. As can be seen, when switching to the style adapters, the model generates more stylistic stories (PPL-S), while the generated stories become less relevant to the reference story follow-ups (BERT). This is consistent with stylistic headline generation, demonstrating that our style adapters can be combined with different downstream tasks to control the style of the generated text. § RELATED WORK §.§ Headline GenerationHeadline generation is the task of generating relevant and concise headlines for given news. It has various application scenarios, such as automated news writing <cit.> and product advertising <cit.>. Traditional approaches on headline generation rely on linguistic features and handcrafted rules <cit.>. With the advancement of neural networks, neural headline generation shows its capacity to generate high quality headlines <cit.>. However, controlling the style of these headlines remains challenging. <cit.> propose the first unsupervised stylistic headline generation model which relies only a standard headline generation dataset and mono-style corpora. They design a multitask learning framework to jointly learn both plain headline generation and stylistic text denoising with carefully designed parameter sharing and switching strategy. <cit.> further extend by decomposing the headline into style and content. They define stem and syntax as the style and generate the style first using a similar multitask framework as <cit.>. After that, substantive context is populated into that style with the conditional masked language modelling task. However, these works jointly learn style and headline generation, thus cannot support freely combination of styles and generation tasks at inference time. §.§ Unsupervised Text Style TransferText style transfer is to generate a sentence consistent with desired style while preserving the content of the source sentence <cit.>. Due to the scarcity of style parallel corpora, unsupervised text style transfer is widely explored in previous works. <cit.> regard text style transfer as a cloze task to accomplish sentiment style conversion of sentences. <cit.> incorporates the sentiment style information through a reconstruction task. <cit.> redefine the style transfer problem as a paraphrase generation problem and propose a method based on reverse paraphrasing to generate the desired style. 
<cit.> utilize large pretrained models such as BART and GPT-2, using BLEU scores and style classification scores as rewards, to achieve significant improvements in the transformation of formal and informal text. These models can be combined with plain headline generation model to achieve stylistic headline generation. However, they require two-steps decoding at inference time, while StyleBART generates stylistic headlines in one decoding step.§ CONCLUSIONWe propose StyleBART, an unsupervised stylistic generation method, which enables to freely combine the downstream tasks and styles by disentangling style learning and downstream task learning. Experimental results show that our model can generate content-relevant and style-intensive headlines, and can be extended to other stylistic generation tasks.§ LIMITATIONSThis work mainly explores the stylistic headline generation task. We leave the exploration of more combinations of tasks and styles such as stylistic machine translation, stylistic document summarization as future work. At the same time, our current method achieves stylistic generation by simply switching the adapters, which cannot provide fine-grained control of the style. This hinders StyleBART from meeting the diverse user demands related to style control.§ ETHICS STATEMENT We present a training-efficient approach to build an unsupervised stylistic headline generation model which disentangles the headline generation learning and style learning. Despite the strong performance of style control, StyleBART inherits the societal impacts including some negative ones of the original BART model, such as societal biases <cit.> and misuse of language models <cit.>. The implicit biases are expected to be removed by debiasing either the dataset or the model <cit.>. StyleBART makes it possible to generate text in various (e.g. clickbait) style which can be used to propagate the malicious or offensive content <cit.>. Future explorations are needed to mitigate the misuse of StyleBART model.§ ACKNOWLEDGEMENTSThis project was supported by National Natural Science Foundation of China (No. 62106138, No. 62306132) and Shanghai Sailing Program (No. 21YF1412100). We thank the anonymous reviewers for their insightful feedbacks on this work. acl_natbib§ APPENDIX§.§ GPT3.5-prompting Details We perform few-shot prompting with the gpt-3.5-turbo API. We follow the unsupervised setup of StyleBART which assumes paired news-stylistic headlines are unavailable. As a results, we use three non-parallel stylistic sentences, five news-plain headline pairs in our data template. Table <ref> shows the details. We set the the temperature to 0.5.§.§ LLaMA2-InsTuning Details We sample 10,000 data points per task[Our training loss shows no significant reductions after 150 steps when using a batch size of 80. Therefore, we do not use the full dataset as larger dataset requires more computation and only brings marginal performance improvement.] and construct the corresponding instruction data. Table <ref> shows our data template in training and inference stages. We utilize the 40,000 instruction data jointly to fine-tune LLaMA2-7B with LoRA. The hyperparameters for the LoRA module are set as follows: lora_rank = 256, lora_alpha = 256, and lora_dropout=0.05. The training process employs the AdamW optimizer with parameters β_1=0.9, β_2=0.999. We adopt a learning rate of 2e-5. The training is executed with a batch size of 80 for one epoch, amounting to a total of 500 steps. 
After our training stage, we directly use the inference template in Table <ref> to evaluate the fine-tuned model.§.§ Comparison with TitleStylist-BARTTitleStylist is initialized using the MASS model, a pretrained model with an encoder-decoder structure similar to BART. We adopt the methodology employed in TitleStylist and apply it to the BART model. We execute both TitleStylist-MASS (with their open-source code) and TitleStylist-BART (with our reimplemented code). As shown in Table <ref>, we find that these two variations yield similar ROUGE and BERTScore results while TitleStylist-MASS gets better PPL and PPL-S scores. Therefore, we present the results for TitleStylist-MASS in this paper, as it demonstrates better performance and is consistent with the original paper. | http://arxiv.org/abs/2310.17743v2 | {
"authors": [
"Hanqing Wang",
"Yajing Luo",
"Boya Xiong",
"Guanhua Chen",
"Yun Chen"
],
"categories": [
"cs.CL"
],
"primary_category": "cs.CL",
"published": "20231026193122",
"title": "StyleBART: Decorate Pretrained Model with Style Adapters for Unsupervised Stylistic Headline Generation"
} |
Origin of flat bands and non-trivial topology in coupled kagome lattices Awadhesh Narayan January 14, 2024 ======================================================================== Recent years have seen growing interest in developing and applying perceptual similarity metrics. Research has shown the superiority of perceptual metrics over pixel-wise metrics in aligning with human perception and serving as a proxy for the human visual system. On the other hand, as perceptual metrics rely on neural networks, there is a growing concern regarding their resilience, given the established vulnerability of neural networks to adversarial attacks. It is indeed logical to infer that perceptual metrics may inherit both the strengths and shortcomings of neural networks. In this work, we demonstrate the vulnerability of state-of-the-art perceptual similarity metrics based on an ensemble of ViT-based feature extractors to adversarial attacks. We then propose a framework to train a robust perceptual similarity metric called LipSim (Lipschitz Similarity Metric) with provable guarantees.By leveraging 1-Lipschitz neural networks as the backbone, LipSim provides guarded areas around each data point and certificates for all perturbations within an ℓ_2 ball.Finally, a comprehensive set of experiments shows the performance of LipSim in terms of natural and certified scores and on the image retrieval application. The code is available at <https://github.com/SaraGhazanfari/LipSim>.§ INTRODUCTION^* Correspondence to Sara Ghazanfari: Comparing data items and having a notion of similarity has long been a fundamental problem in computer science. For many years ℓ_p norms and other mathematically well-defined distance metrics have been used for comparing data items. However, these metrics fail to measure the semantic similarity between more complex data like images and are more focused on pixel-wise comparison. To address this problem, perceptual distance metrics <cit.> have been proposed that employ deep neural networks as a backbone to first compute embeddings, then apply traditional distance metrics on the embeddings of the data in the new space. It is well-established that neural networks are susceptible to adversarial attacks <cit.>. That is, imperceptible variations of natural examples can be crafted to deliberately mislead models. Although perceptual metrics provide rich semantic interpretations compared to traditional metrics, they inherit the properties of neural networks and therefore their susceptibility to adversarial attacks <cit.>. Recent works have tried to address this problem by training robust perceptual metrics <cit.>. However, these works rely on heuristic defenses and do not provide provable guarantees. Recent research has focused on designing and training neural networks with prescribed Lipschitz constants <cit.>, aiming to improve and guarantee robustness against adversarial attacks. Promising techniques, like the SDP-based Lipschitz Layer (SLL) <cit.>, have emerged and allow the design of non-trivial yet efficient neural networks with a pre-defined Lipschitz constant.Constraining the Lipschitz constant of neural networks has been known to induce properties such as stability in training <cit.>, robustness <cit.>, and generalization <cit.>.Recently, the DreamSim metric <cit.> has been established as the state-of-the-art perceptual similarity metric. This metric consists of a concatenation of fine-tuned versions of ViT-based embeddings, namely, DINO <cit.>, CLIP <cit.>, and Open CLIP <cit.>.
To compute the distance between two images, DreamSim measures the cosine similarity distance between these ViT-based embeddings. In this work,we initially demonstrate with a series of experiments that the DreamSim metric is not robust to adversarial examples. Consequently, it could be easy for an attacker to bypass important filtering schemes based on perceptual hash, copy detection, etc. Then, to tackle this problem, we propose LipSim, the first perceptual similarity metric with provable guarantees.Building on the DreamSim metric and recent advances in 1-Lipschitz neural networks, we propose a novel student-teacher approach with a Lipschitz-constrained student model. Specifically, we train a 1-Lipschitz feature extractor (student network) based on the state-of-the-art SLL architecture. The student network is trained to mimic the outputs of the embedding of the DreamSim metric, thus distilling the intricate knowledge captured by DreamSim into the 1-Lipschitz student model. After training the 1-Lipschitz feature extractor on the ImageNet-1k dataset, we fine-tune it on the NIGHT dataset. By combining the capabilities of DreamSim with the provable guarantees of a Lipschitz network, our approach paves the way for a certifiably robust perceptual similarity metric.Finally, we demonstrate good natural accuracy and state-of-the-art certified robustness on two alternative forced choice (2AFC) datasets that seek to encode human perceptions of image similarity.Our contributions can be summarized as follows: * We investigate the vulnerabilities of state-of-the-art ViT-based perceptual distance including DINO, CLIP, OpenCLIP, and DreamSim Ensemble. The vulnerabilities are highlighted using AutoAttack <cit.> on the 2AFC score which is an index for human alignment and PGD attack against the distance metric and calculating the distance between an original image and its perturbed version. * We propose a framework to train the first certifiably robust distance metric, LipSim, which leverages a pipeline composed of 1-Lipschitz feature extractor, projection to the unit ℓ_2 ball and cosine distance to provide certified bounds for the perturbations applied on the reference image.* We show by a comprehensive set of experiments that not only LipSim provides certified accuracy for a specified perturbation budget, but also demonstrates good performance in terms of natural 2AFC score and accuracy on image retrieval which is a serious application for perceptual metrics. § RELATED WORKS Similarity Metrics. Low-level metrics including ℓ_p norms, PSNR as point-wise metrics, SSIM <cit.> and FSIM <cit.> as patch-wise metrics fail to capture the high-level structure and the semantic concept of more complicated data points like images. In order to overcome this challenge the perceptual distance metrics were proposed. In the context of perceptual distance metrics, neural networks are used as feature extractors, and the low-level metrics are employed in the embeddings of images in the new space. The feature extractors used in recent work include a convolutional neural network as proposed by <cit.> for the LPIPS metric, or an ensemble of ViT-based models <cit.> as proposed by <cit.> for DreamSim. As shown by experiments the perceptual similarity metrics have better alignment with human perception and are considered a good proxy for human vision.Adversarial Attacks & Defenses. 
Initially demonstrated by <cit.>,neural networks are vulnerable to adversarial attacks,, carefully crafted small perturbations that can fool the model into predicting wrong answers. Since then a large body of research has been focused on generating stronger attacks <cit.> and providing more robust defenses <cit.>. To break this pattern, certified adversarial robustness methods were proposed. By providing mathematical guarantees, the model is theoretically robust against the worst-case attack for perturbations smaller than a specific perturbation budget. Certified defense methods fall into two categories. Randomized Smoothing <cit.> turns an arbitrary classifier into a smoother classifier, then based on the Neyman-Pearson lemma, the smooth classifier obtains some theoretical robustness against a specific ℓ_p norm. Despite the impressive results achieved by randomized smoothing in terms of natural and certified accuracy <cit.>, the high computational cost of inference and the probabilistic nature of the certificate makes it difficult to deploy in real-time applications. Another direction of research has been to leverage the Lipschitz property of neural networks <cit.> to better control the stability of the model and robustness of the model. <cit.> highlighted the connection between the certified radius of the network with its Lipschitz constant and margin. As calculating the Lipschitz constant of a neural network is computationally expensive, a body of work has focused on designing 1-Lipschitz networks by constraining each layer with its spectral norm <cit.>, replacing the normalized weight matrix by an orthogonal ones <cit.> or designing 1-Lipschitz layer from dynamical systems <cit.> or control theory arguments <cit.>.Vulnerabilities and Robustness of Perceptual Metrics. Investigating the vulnerabilities of perceptual metrics has been overlooked for years since the first perceptual metric was proposed. As shown in <cit.> perceptual similarity metrics (LPIPS <cit.>) are vulnerable to adversarial attacks. <cit.> presents a qualitative analysis of deep perceptual similarity metrics resilience to image distortions including color inversion, translation, rotation, and color stain. Finally <cit.> proposes a new way to generate attacks to similarity metrics by reducing the similarity between the adversarial example and its original while increasing the similarity between the adversarial example and its most dissimilar one in the minibatch. To introduce robust perceptual metrics, <cit.> proposes e-lpips which uses an ensemble of random transformations of the input image and demonstrates the empirical robustness using qualitative experiments. <cit.> employs some modules including anti-aliasing filters to provide robustness to the vulnerability of LPIPS to a one-pixel shift. More recently <cit.> proposes R-LPIPS which is a robust perceptual metric achieved by adversarial training <cit.> over LPIPS and evaluates R-LPIPS using extensive qualitative and quantitative experiments on BAPPS <cit.> dataset. Besides the aforementioned methods that show empirical robustness, <cit.> propose methods to achieve certified robustness on perceptual metrics based on randomized smoothing. 
For example, <cit.> proposed center smoothing which is an approach that provides certified robustness for structured outputs.More precisely, the center of the ball enclosing at least half of the perturbed points in output space is considered as the output of the smoothed function and is proved to be robust to input perturbations bounded by an ℓ_2-size budget. The proof requires the distance metric to satisfy the symmetry property and the triangle inequality. As perceptual metrics generally do not satisfy the triangle inequality, the triangle inequality approximation is used which makes the bound excessively loose. In <cit.>, the same enclosing ball is employed; however, the problem is mapped to a binary classification setting to leverage the certified bound as in the randomized smoothing paper (by assigning one to the points that are in the enclosing ball and zero otherwise). Besides their loose bound, these methods are computationally very expensive due to the Monte Carlo sampling for each data point.§ BACKGROUND Lipschitz Networks. After the discovery of the vulnerability of neural networks to adversarial attacks, one major direction of research has focused on improving the robustness of neural networks to small input perturbations by leveraging Lipschitz continuity.This goal can be mathematically achieved by using a Lipschitz function. Let f be a Lipschitz function with L_f Lipschitz constant in terms of ℓ_2 norm, then we can bound the output of the function by f(x) - f(x+δ)_2 ≤ L_f δ_2. To achieve stability using the Lipschitz property, different approaches have been taken. One efficient way is to design a network with 1-Lipschitz layers which leads to a 1-Lipschitz network <cit.>.State of the Art Perceptual Similarity Metric. DreamSim is a recently proposed perceptual distance metric <cit.> that employs cosine distance on the concatenation of feature vectors generated by an ensemble of ViT-based representation learning methods. More precisely, DreamSim is a concatenation of embeddings generated by DINO <cit.>, CLIP <cit.>, and Open CLIP <cit.>.Let f be the feature extractor function; the DreamSim distance metric d(x_1, x_2) is defined as:d(x_1, x_2) = 1 - S_c(f(x_1), f(x_2))where S_c(x_1, x_2) is the cosine similarity metric. To fine-tune the DreamSim distance metric, the NIGHT dataset is used which provides two variations, x_0 and x_1 for a reference image x, and a label y that is based on human judgments about which distortion is more similar to the reference image x (some instances of the NIGHT dataset are shown in Figure <ref> of the Appendix). Supplemented with this dataset, the authors of DreamSim turn the setting into a binary classification problem. More concretely, given a triplet (x, x_0, x_1), and a feature extractor f, they define the following classifier:h(x) =1, d(x, x_1) ≤ d(x, x_0) 0, d(x, x_1) > d(x, x_0)Finally, to better align DreamSim with human judgment, given the triplet (x, x_0, x_1), they optimize a hinge loss based on the difference between the perceptual distances d(x, x_0) and d(x, x_1) with a margin parameter. Note that the classifier h has a dependency on f, d and each input x comes as triplet (x, x_0, x_1) but to simplify the notation we omit all these dependencies.§ LIPSIM: LIPSCHITZ SIMILARITY METRICIn this section, we present the theoretical guarantees of LipSim along with the technical details of LipSim architecture and training. §.§ A Perceptual Metric with Theoretical GuaranteesGeneral Robustness for Perceptual Metric.
A perceptual similarity metric can have a lot of important use cases, , image retrieval, copy detection, etc. In order to make a robust perceptual metric we need to ensure that when a small perturbation is added to the input image, the output distance should not change a lot.In the following, we demonstrate a general robustness property when the feature extractor is 1-Lipschitz and the embeddings lie on the unit ℓ_2 ball, , f(x)_2 = 1.propositionpropdistanceLet f: →^k be a 1-Lipschitz feature extractor and f(x)_2 = 1, let d be a distance metric defined as in Equation <ref> and let δ∈ and ε∈^+ such that δ_2 ≤ε. Then, we have, | d(x_1, x_2) - d(x_1+δ, x_2) | ≤δ_2 The proof is deferred to Appendix <ref>. This proposition implies that when the feature extractor is 1-Lipschitz and its output is projected on the unit ℓ_2 ball then the composition of the distance metric d and the feature extractor, , d ∘ f, is also 1-Lipschitz with respect to its first argument.This result provides some general stability results and guarantees that the distance metric cannot change more than the norm of the perturbation. Certified Robustness for 2AFC datasets. We aim to go even further and provide certified robustness for perceptual similarity metrics with 2AFC datasets, , in a classification setting.In the following, we show that with the same assumptions as in Proposition <ref>, the classifier h can obtain certified accuracy.First, let us define a soft classifier H:→^2 with respect to some feature extractor f as follows:H(x) = [ d(x, x_1), d(x, x_0) ]It is clear that h(x) = _i ∈{0,1}H_i(x) where H_i represent the i-th value of the output of H.The classifier h is said to be certifiably robust at radius ϵ≥ 0 at point x if for all δ_2 ≤ϵ we have:h(x + δ) = h(x)Equivalently, one can look at the margin of the soft classifier: M_H, x := H_y(x) - H_1-y(x) and provide a provable guarantee that:M_H, x+δ > 0[Certified Accuracy for Perceptual Distance Metric]theoremmainresultLet H:→^2 be the soft classifier as defined in Equation <ref>. Let δ∈ and ε∈^+ such that δ_2 ≤ε. Assume that the feature extractor f:→^k is 1-Lipschitz and that for all x, f(x)_2 = 1, then we have the following result:M_H,x≥εf(x_0) - f(x_1)_2 ⟹M_H,x+δ≥ 0 The proof is deferred to Appendix <ref>. Based on Theorem <ref>, and assuming x_1 ≠ x_0, the certified radius for the classier h at point x can be computed as follows:R(h, x) = M_H,x/f(x_0) - f(x_1)_2 Theorem <ref> provides the necessary condition for a provable perceptual distance metric without changing the underlying distance on the embeddings (, cosine similarity).This result has two key advantages. First, as in <cit.>, computing the certificate at each point only requires efficient computation of the classifier margin H.Leveraging Lipschitz continuity enables efficient certificate computation, unlike the randomized smoothing approach of <cit.> which requires Monte Carlo sampling for each point.Second, the bound obtained on the margin to guarantee the robustness is in fact tighter than the one provided by <cit.>. Recall <cit.> result states that for a L-Lipschitz classifier H, we have:M_H,x≥ε√(2) L ⟹M_H,x+δ≥ 0Given that the Lipschitz constant of H[Recall the Lipschitz of the concatenation. Let f and g be L_f and L_g-Lipschitz, then the function x ↦ [f(x), g(x)] can be upper bounded by √(L_f^2 + L_g^2)-Lipschitz] is √(2), this lead to the following bound:M_H,x≥ 2 ε≥εf(x_0) - f(x_1)_2simply based on the triangle inequality and the fact that f(x)_2=1. 
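To make the certificate concrete, the short NumPy sketch below evaluates the margin M_H,x and the certified radius R(h, x) = M_H,x/f(x_0) - f(x_1)_2 from unit-norm embeddings. It is an illustrative implementation written for this presentation: the random vectors are placeholders standing in for the outputs of a 1-Lipschitz feature extractor already projected onto the unit ℓ_2 ball.

import numpy as np

def cosine_distance(u, v):
    # d(x1, x2) = 1 - S_c(f(x1), f(x2)); with unit-norm embeddings S_c is just a dot product.
    return 1.0 - float(np.dot(u, v))

def certified_radius(f_x, f_x0, f_x1, y):
    # Soft classifier H(x) = [d(x, x1), d(x, x0)], margin M_{H,x} = H_y(x) - H_{1-y}(x).
    # The certificate R(h, x) = M_{H,x} / ||f(x0) - f(x1)||_2 holds when the margin is positive,
    # assuming the feature extractor is 1-Lipschitz with outputs on the unit l2 ball.
    H = np.array([cosine_distance(f_x, f_x1), cosine_distance(f_x, f_x0)])
    margin = H[y] - H[1 - y]
    denom = np.linalg.norm(f_x0 - f_x1)
    return margin / denom if margin > 0 and denom > 0 else 0.0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Placeholder unit-norm embeddings standing in for f(x), f(x0), f(x1).
    f_x, f_x0, f_x1 = (v / np.linalg.norm(v) for v in rng.normal(size=(3, 128)))
    # Use the model's own prediction as the label so the margin is non-negative in this demo.
    y = 1 if cosine_distance(f_x, f_x1) <= cosine_distance(f_x, f_x0) else 0
    print("certified l2 radius:", certified_radius(f_x, f_x0, f_x1, y))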
§.§ LipSim Architecture & Training

To design a feature extractor that respects the assumptions of Proposition <ref> and Theorem <ref>, we combine a 1-Lipschitz neural network architecture with a Euclidean projection. Let f be defined as f(x) = π_{B_2(0, 1)} ∘ ϕ^(l) ∘ ⋯ ∘ ϕ^(1)(x), where l is the number of layers, π_{B_2(0, 1)} is the projection on the unit ℓ_2 ball, i.e., π_{B_2(0, 1)}(x) = argmin_{z ∈ B_2(0, 1)} ‖ x - z‖_2, and the layers ϕ are the SDP-based Lipschitz Layers (SLL) proposed by <cit.>: ϕ(x) = x - 2W diag( ∑_{j=1}^n |W^⊤ W|_{ij} q_j/q_i )^{-1} σ(W^⊤ x + b), where W is a parameter matrix, being either dense or a convolution, {q_i} forms a diagonal scaling matrix, and σ is the ReLU nonlinear activation. The neural network f described in Equation <ref> is 1-Lipschitz, and for all x we have ‖ f(x)‖_2 = 1. The proof is a straightforward application of Theorem 3 of <cit.> and Corollary 2.2.3 of <cit.>.

Two-step Process for Training LipSim. LipSim aims to provide good image embeddings that are less sensitive to adversarial perturbations. We train LipSim in two steps, similar to the DreamSim approach. Recall that DreamSim first concatenates the embeddings of three ViT-based models and then fine-tunes the result on the NIGHT dataset. However, to obtain theoretical guarantees, we cannot use the embeddings of three ViT-based models because they are not generated by a 1-Lipschitz feature extractor. To address this issue and avoid self-supervised schemes for training the feature extractor, we leverage a distillation scheme on the ImageNet dataset, where DreamSim acts as the teacher model and a 1-Lipschitz neural network (without the ℓ_2 unit ball projection) acts as the student model. This first step is described on the left of Figure <ref>. In the second step, we fine-tune the 1-Lipschitz neural network with projection on the NIGHT dataset using a hinge loss to increase margins and therefore robustness, as in <cit.>. This second step is described on the right of Figure <ref>.

§ EXPERIMENTS In this section, we present a comprehensive set of experiments to first highlight the vulnerabilities of DreamSim, which is the state-of-the-art perceptual distance metric, and second to demonstrate the certified and empirical robustness of LipSim to adversarial attacks.

§.§ Vulnerabilities of Perceptual Similarity Metrics To investigate the vulnerabilities of DreamSim to adversarial attacks, we aim to answer two questions in this section. Can adversarial attacks against SOTA metrics cause: (1) misalignment with human perception? (2) large changes in distance between perturbed and original images?

Q1 – Alignment of SOTA Metric with Human Judgments after Attack. In this part we focus on the binary classification setting and the NIGHT dataset with triplet inputs. The goal is to generate adversarial attacks and evaluate the resilience of state-of-the-art distance metrics. For this purpose, we use AutoAttack <cit.>, which is one of the most powerful attack algorithms. During optimization, we maximize the cross-entropy loss; the perturbation δ is crafted only on the reference image and the two distortions stay untouched: max_{δ : ‖δ‖_2 ≤ ε} ℒ_ce(y, ŷ) = ℒ_ce([d(x + δ, x_1), d(x+δ, x_0)], y), where y ∈ {0,1} and ŷ = [d(x + δ, x_1), d(x + δ, x_0)] is considered as the logits generated by the model. The natural and adversarial 2AFC scores of DreamSim are reported in Table <ref>. The natural accuracy drops to half its value for a tiny perturbation of size ϵ=0.5 and decreases to zero for ϵ=2.0.
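The objective above can be optimized with any ℓ_2-bounded attack. The following is a simplified PGD-style sketch of the same objective, not the AutoAttack procedure used in the experiments; all function names are illustrative.

```python
import torch
import torch.nn.functional as F

def pgd_2afc(f, x, x0, x1, y, eps=0.5, steps=20):
    """Simplified l2-PGD on the reference image of a NIGHT triplet.

    f returns unit-norm embeddings, y holds labels in {0, 1}. This is a plain
    PGD sketch of the objective above, not the AutoAttack implementation.
    """
    with torch.no_grad():
        e0, e1 = f(x0), f(x1)                    # distortion embeddings stay fixed
    delta = torch.zeros_like(x, requires_grad=True)
    alpha = 2.0 * eps / steps
    for _ in range(steps):
        e = f(x + delta)
        # Logits: [d(x+delta, x_1), d(x+delta, x_0)] with cosine distances.
        logits = torch.stack([1 - (e * e1).sum(-1), 1 - (e * e0).sum(-1)], dim=-1)
        loss = F.cross_entropy(logits, y)        # the attack maximizes this loss
        grad, = torch.autograd.grad(loss, delta)
        grad = grad / (grad.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12)
        delta = delta.detach() + alpha * grad
        # Project the perturbation back onto the l2 ball of radius eps.
        norms = delta.flatten(1).norm(dim=1).clamp(min=1e-12)
        delta = (delta * (eps / norms).clamp(max=1.0).view(-1, 1, 1, 1)).requires_grad_(True)
    return (x + delta).detach()
```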
In order to visualize the effect of the attack on the reliability of the distances provided by DreamSim, original and adversarial images (generated by ℓ_2-AA and causing misclassification) are shown in Figure <ref>, with the distances reported underneath the images. To give a sense of the DreamSim distances between the original and perturbed images, a third row is added such that the original images have (approximately) the same distance to the perturbed images as to the perceptually different images in the third row. The takeaway from this experiment is that tiny perturbations can fool the distance metric into producing large values for perceptually identical images.

Q2 – Specialized Attack for Semantic Metric. In this part, we perform a direct attack against the feature extractor model, which is the source of the vulnerability of perceptual metrics, by employing the ℓ_2-PGD <cit.> attack (ϵ=1.0) with the following MSE loss during optimization: max_{δ : ‖δ‖_2 ≤ ε} ℒ_MSE[ f(x + δ), f(x) ]. The attack is performed on 500 randomly selected samples from the ImageNet-1k test set and against the DreamSim Ensemble feature extractor. After optimizing δ, the DreamSim distance is calculated between the original image and the perturbed image, d(x, x+δ). The distribution of distances is shown in Figure <ref>. We can observe a shift in the mean of the distance from 0 to 0.6, which can be considered a large value for the DreamSim distance, as shown in Figure <ref>.

§.§ LipSim Results In this section, we aim to leverage the framework introduced in the paper and evaluate the LipSim perceptual metric. In the first step (i.e., left of Figure <ref>), we train a 1-Lipschitz network for the backbone of the LipSim metric and use the SLL architecture, which has 20 layers of Conv-SLL and 7 layers of Linear-SLL. For training the 1-Lipschitz feature extractor, the ImageNet-1k dataset is used (without labels) and the knowledge distillation approach is applied to utilize the state-of-the-art feature extractors, including DINO, OpenCLIP, and DreamSim, which is an ensemble of ViT-based models. To enhance the effectiveness of LipSim, we incorporate two parallel augmentation pipelines: standard and jittered. The standard version passes through the feature extractor and the teacher model, while the jittered one only passes through the feature extractor. Then, the RMSE loss is applied to enforce similarity between the embeddings of the jittered and standard images. This enables LipSim to focus more on the semantics of the image, rather than its colors. After training the 1-Lipschitz backbone of LipSim, we further fine-tune our model on the NIGHT dataset (i.e., step 2, see right of Figure <ref>). During the fine-tuning process, the embeddings are produced and projected to the unit ℓ_2 ball. In order to maintain the margin between logits, the hinge loss is employed, similarly to DreamSim. However, while DreamSim uses a margin parameter of 0.05, we use a margin parameter of 0.5 for fine-tuning LipSim in order to boost the robustness of the metric. Remarkably, LipSim achieves strong robustness using a 1-Lipschitz pipeline composed of a 1-Lipschitz feature extractor and a projection to the unit ℓ_2 ball that guarantees the 1-Lipschitzness of the cosine distance. To evaluate the performance of LipSim and compare it against other perceptual metrics, we report empirical and certified results of LipSim for different settings.
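For concreteness, the fine-tuning objective described above can be sketched as follows. This is a minimal sketch assuming a cosine-distance hinge loss with the margin value quoted in the text; the exact sign conventions and implementation details of the released code may differ.

```python
import torch

def project_unit_ball(z):
    # Euclidean projection onto the closed unit l2 ball (identity inside the ball).
    norms = z.norm(dim=-1, keepdim=True).clamp(min=1e-12)
    return z * (1.0 / norms).clamp(max=1.0)

def hinge_finetune_loss(f, x, x0, x1, y, margin=0.5):
    """Hinge loss on the difference of perceptual distances (margin 0.5 as in the text).

    y in {0, 1} indicates which distortion humans judged closer to the reference x.
    """
    e, e0, e1 = (project_unit_ball(f(t)) for t in (x, x0, x1))
    d0 = 1.0 - (e * e0).sum(-1)          # d(x, x_0)
    d1 = 1.0 - (e * e1).sum(-1)          # d(x, x_1)
    sign = 2.0 * y.float() - 1.0         # +1 if x_1 should be closer, -1 otherwise
    diff = sign * (d0 - d1)              # positive when the metric agrees with y
    return torch.clamp(margin - diff, min=0.0).mean()
```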
Table: Alignment on the NIGHT dataset for original and perturbed images using AutoAttack. In this experiment, the perturbation is only applied to the reference images. While DreamSim employs an ensemble of three ViT-based models as the feature extractor, the LipSim backbone consists of a 1-Lipschitz network (composed of CNN and linear layers) and is trained from scratch using the knowledge distillation approach.

Metric / Embedding | Natural Score | ℓ_2-AA ε=0.5 | ℓ_2-AA ε=1.0 | ℓ_2-AA ε=2.0 | ℓ_2-AA ε=3.0
CLIP               | 93.91         | 29.93        | 8.44         | 1.20         | 0.27
OpenCLIP           | 95.45         | 72.31        | 42.32        | 11.84        | 3.28
DINO               | 94.52         | 81.91        | 59.04        | 19.29        | 6.35
DreamSim           | 96.16         | 46.27        | 16.66        | 0.93         | 0.93
LipSim (ours)      | 85.09         | 81.58        | 76.92        | 65.62        | 53.07

Empirical Robustness Evaluation. We provide the empirical results of LipSim against ℓ_2-AutoAttack in Table <ref>. Although the natural score of LipSim is lower than that of DreamSim, there is a large gap between the adversarial scores. We can observe that LipSim outperforms all state-of-the-art metrics. The results of a more comprehensive comparison between LipSim, state-of-the-art perceptual metrics, previously proposed perceptual metrics, and pixel-wise metrics are presented in Figure <ref>. The pre-trained and fine-tuned natural accuracies are comparable with the state-of-the-art metrics and even higher in comparison to CLIP. In terms of empirical robustness, LipSim demonstrates great resilience. More comparisons have been performed in this sense; the empirical results for ℓ_∞-AutoAttack and ℓ_∞-PGD are also reported in Table <ref> in Appendix <ref>, which align with the ℓ_2 results and show the strong empirical robustness of LipSim. In order to evaluate the robustness of LipSim outside the classification setting, we have performed an ℓ_2-PGD attack (ϵ=1.0) using the MSE loss defined in Equation <ref>, and the distribution of d(x, x+δ) is shown in Figure <ref>. The values of d(x, x+δ) are close to zero, which illustrates the general robustness of LipSim as discussed in Proposition <ref>. To evaluate the general robustness of DreamSim, we leverage ℓ_2-AA with perturbation budget ϵ=0.5 and provide, in Figure <ref>, original and adversarial instances that have a distance of 0.5 in input space but a larger distance in embedding space, which violates the general robustness property for DreamSim and shows that LipSim satisfies it.

Certified Robustness Evaluation. In order to find the certified radius for data points, the margin between logits is computed and divided by the ℓ_2 distance between the embeddings of the distorted images (‖ f(x_0) - f(x_1)‖_2). The results for certified 2AFC scores for different settings of LipSim are reported in Table <ref>, which demonstrates the robustness of LipSim along with a high natural score. The value of the margin parameter of the hinge loss used during fine-tuning is indicated in the table, which clearly shows the trade-off between robustness and accuracy: a larger margin parameter leads to more robustness and therefore higher certified scores, but lower natural scores.

§.§ Image Retrieval. After demonstrating the robustness of LipSim in terms of certified and empirical scores, the focus of this section is on one of the real-world applications of a distance metric, which is image retrieval. We employ the image retrieval dataset[<https://github.com/ssundaram21/dreamsim/releases/download/v0.1.0/retrieval_images.zip>] proposed by <cit.>, which has 500 images randomly selected from the COCO dataset.
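The retrieval itself amounts to a nearest-neighbor search in embedding space; a minimal sketch is given below (function names are illustrative, and batching details of the actual experiments may differ).

```python
import torch

def top1_neighbors(f, queries, gallery, batch_size=32):
    """Index of the closest gallery image for every query under the metric d.

    f maps image batches to unit-norm embeddings; with unit-norm embeddings the
    smallest cosine distance corresponds to the largest dot product. If the
    queries are themselves part of the gallery, mask out the trivial match first.
    """
    def embed(images):
        with torch.no_grad():
            return torch.cat([f(images[i:i + batch_size])
                              for i in range(0, len(images), batch_size)])
    sim = embed(queries) @ embed(gallery).T     # cosine-similarity matrix
    return sim.argmax(dim=1)                    # top-1 neighbor per query
```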
The top-1 closest neighbors to an image with respect to the LipSim and DreamSim distance metrics are shown in the rows of Figure <ref>. In order to investigate the impact of adversarial attacks on the performance of LipSim and DreamSim for the image retrieval application, we have performed an ℓ_2-PGD attack (ϵ=2.0) with the same MSE loss defined in Equation <ref>, separately for the two metrics, and the results are depicted in the rows of Figure <ref>. In the adversarial rows, LipSim sometimes returns a different closest image, which is nevertheless semantically similar to the closest image retrieved for the original query.

§ CONCLUSION In this paper, we first showed the vulnerabilities of SOTA perceptual metrics, including DreamSim, to adversarial attacks and, more importantly, presented a framework for training a certifiably robust distance metric called LipSim, which leverages a 1-Lipschitz network as its backbone together with a 1-Lipschitz cosine similarity and demonstrates non-trivial certified and empirical robustness. Moreover, LipSim was employed for an image retrieval task and exhibited good performance in gathering semantically close images for original and adversarial image queries. For future work, it will be interesting to investigate the certified robustness of LipSim for other 2AFC datasets and extend the performance of LipSim to other applications, including copy detection and feature inversion.

§.§.§ Acknowledgments This paper is supported in part by the Army Research Office under grant number W911NF-21-1-0155 and by the New York University Abu Dhabi (NYUAD) Center for Artificial Intelligence and Robotics, funded by Tamkeen under the NYUAD Research Institute Award CG010.

§ PROOFS

Proof of Proposition <ref>. We have the following:
| d(x_1, x_2) - d(x_1+δ, x_2) | = | ⟨ f(x_1+δ), f(x_2) ⟩ / (‖ f(x_1+δ)‖ ‖ f(x_2)‖) - ⟨ f(x_1), f(x_2) ⟩ / (‖ f(x_1)‖ ‖ f(x_2)‖) |
(1) = | ⟨ f(x_1+δ), f(x_2) ⟩ - ⟨ f(x_1), f(x_2) ⟩ | = | ⟨ f(x_1+δ) - f(x_1), f(x_2) ⟩ | ≤ ‖ f(x_1+δ) - f(x_1)‖ ‖ f(x_2)‖ (2) ≤ ‖δ‖,
where (1) and (2) are due to ‖ f(x)‖ = 1 for all x and the fact that f is 1-Lipschitz, which concludes the proof.

Proof of Theorem <ref>. First, let us recall the soft classifier H with respect to some feature extractor f: H(x) = [ d(x, x_1), d(x, x_0) ], where d is defined as d(x, y) = 1 - ⟨ f(x), f(y) ⟩ / (‖ f(x)‖_2 ‖ f(y)‖_2). Let us denote by H_0 and H_1 the first and second logits of the soft classifier. For a tuple (x, x_0, x_1) and a target label y, we say that h correctly classifies if h(x) = y. Note that we omit the dependency on x_0 and x_1 in the notation. Let us assume the target class is y = 1; the case y = 0 is exactly symmetric. Let us define the margin of the soft classifier H as M_H,x := H_1(x) - H_0(x). We have the following:
M_H,x+δ = H_1(x+δ) - H_0(x+δ)
= H_1(x+δ) - H_1(x) - H_0(x+δ) + H_0(x) + (H_1(x) - H_0(x))
= d(x+δ, x_0) - d(x, x_0) - d(x+δ, x_1) + d(x, x_1) + (H_1(x) - H_0(x))
= (1 - ⟨ f(x+δ), f(x_0) ⟩/(‖ f(x+δ)‖_2 ‖ f(x_0)‖_2)) - (1 - ⟨ f(x), f(x_0) ⟩/(‖ f(x)‖_2 ‖ f(x_0)‖_2)) - (1 - ⟨ f(x+δ), f(x_1) ⟩/(‖ f(x+δ)‖_2 ‖ f(x_1)‖_2)) + (1 - ⟨ f(x), f(x_1) ⟩/(‖ f(x)‖_2 ‖ f(x_1)‖_2)) + M_H,x
(1) = -⟨ f(x+δ), f(x_0) ⟩ + ⟨ f(x), f(x_0) ⟩ + ⟨ f(x+δ), f(x_1) ⟩ - ⟨ f(x), f(x_1) ⟩ + M_H,x
= ⟨ f(x+δ), f(x_1) - f(x_0) ⟩ + ⟨ f(x), f(x_0) - f(x_1) ⟩ + M_H,x
= ⟨ f(x+δ) - f(x), f(x_1) - f(x_0) ⟩ + M_H,x
≥ - ‖ f(x) - f(x+δ)‖ ‖ f(x_0) - f(x_1)‖ + M_H,x
≥ - ε ‖ f(x_0) - f(x_1)‖ + M_H,x,
where (1) is due to the fact that ‖ f(x)‖ = 1 for all x.
Therefore, M_H,x+δ ≥ 0 whenever M_H,x ≥ ε ‖ f(x_0) - f(x_1)‖_2, which concludes the proof.

§ ADDITIONAL FIGURES AND EXPERIMENTAL RESULTS In this section, we first present some details of the NIGHT dataset and then show the additional experiments that we performed on LipSim.

§.§ Dataset Details In order to train a perceptual distance metric, datasets with perceptual judgments are used. The perceptual judgments are of two types: the two-alternative forced choice (2AFC) test, which asks which of two variations of a reference image is more similar to it, and, to validate the 2AFC results, a second test, the just noticeable difference (JND) test. In the JND test, subjects are asked whether the reference image and one of the variations are the same or different. BAPPS <cit.> and NIGHT <cit.> are two datasets organized with 2AFC and JND judgments. The JND section of the NIGHT dataset has not been released yet; therefore, we performed our evaluations only based on the 2AFC score. In Figure <ref> we show some instances from the NIGHT dataset; the reference is located in the middle and the two variations are on the left and right. The reference images are sampled from well-known datasets, including ImageNet-1k.

§.§ Empirical Robustness Results In this part, the empirical robustness of LipSim is evaluated using ℓ_∞-AutoAttack and the ℓ_∞-PGD attack, and the cross-entropy loss as defined in Equation <ref> is used for the optimization. In the case of ℓ_∞-AutoAttack, LipSim outperforms all metrics for the entire set of perturbation budgets. For ℓ_∞-PGD, the performance of DreamSim is better at ϵ=0.01; however, LipSim preserves a high, stable score under PGD attacks with different perturbation sizes, which demonstrates its stability.

[Figure: Distribution of d(x, x+δ), where δ is generated using the ℓ_2-PGD attack with ϵ=3.0.]

§.§ General Robustness of LipSim The general robustness of perceptual metrics was defined in Equation <ref>; it is an attribute that all perceptual metrics should satisfy and states that, given an image and a perturbed version of it, the predicted distance should be smaller than the perturbation norm. To evaluate this property for DreamSim, we generated ℓ_2-AA perturbations with ϵ=0.5 on the test set of the ImageNet-1k dataset. Figure <ref> shows samples for which DreamSim generates distance values larger than the perturbation norm, an obvious violation of the general robustness property. To further show the empirical robustness of LipSim in comparison with DreamSim, we optimize the MSE loss defined in Equation <ref> employing the ℓ_2-PGD attack for LipSim and DreamSim separately. The distribution of d(x, x+δ) is shown in Figure <ref>. The difference between this figure and the histogram in Figure <ref> is that we have chosen a larger perturbation budget, ϵ=3, to demonstrate that even for larger perturbations LipSim shows strong robustness.

§.§ Detailed Results for Natural Score As LipSim is trained on ImageNet-1k and ImageNet constitutes a subset of the NIGHT dataset, we follow the setting of the DreamSim <cit.> paper and report the scores for the ImageNet part of the NIGHT dataset and for the other part separately. For the pre-trained results, LipSim is better than CLIP and very close to OpenCLIP. In the case of the fine-tuned results, LipSim shows decent results in comparison with the SOTA methods and outperforms the pixel-wise and prior learned perceptual metrics.
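For reference, the 2AFC scores reported throughout are plain classification accuracies of the induced classifier h over labeled triplets. The following minimal sketch assumes unit-norm embeddings and a data loader yielding triplets; it is illustrative, not the evaluation script of the paper.

```python
import torch

def two_afc_score(f, loader):
    """2AFC score: percentage of triplets where the metric agrees with the human label.

    loader yields (x, x0, x1, y) with y in {0, 1}; f returns unit-norm embeddings.
    """
    correct, total = 0, 0
    with torch.no_grad():
        for x, x0, x1, y in loader:
            e, e0, e1 = f(x), f(x0), f(x1)
            d0 = 1.0 - (e * e0).sum(-1)          # d(x, x_0)
            d1 = 1.0 - (e * e1).sum(-1)          # d(x, x_1)
            pred = (d1 <= d0).long()             # 1 if x_1 is judged closer
            correct += (pred == y).sum().item()
            total += y.numel()
    return 100.0 * correct / total
```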
"authors": [
"Sara Ghazanfari",
"Alexandre Araujo",
"Prashanth Krishnamurthy",
"Farshad Khorrami",
"Siddharth Garg"
],
"categories": [
"cs.CV",
"cs.LG"
],
"primary_category": "cs.CV",
"published": "20231027165951",
"title": "LipSim: A Provably Robust Perceptual Similarity Metric"
} |
Towards optimal multimode fiber imaging by leveraging input polarization and conditional generative adversarial networks Jawaria Maqbool,^1, Syed Talal Hassan,^2 and M. Imran Cheema^1* Department of Electrical Engineering, Syed Babar Ali School of Science and Engineering,Lahore University of Management Science,Sector U, D.H.A Lahore, Cantt, Punjab 54792, Pakistan^1Department of Computer Science, Syed Babar Ali School of Science and Engineering,Lahore University of Management Science,Sector U, D.H.A Lahore, Cantt, Punjab 54792, Pakistan^2^*[email protected] § ABSTRACTDeep learning techniques provide a plausible route towards achieving practical imaging through multimode fibers. However, the results produced by these methods are often influenced by physical factors like temperature, fiber length, external perturbations, and polarization state of the input light. The impact of other factors, except input light polarization, has been discussed in the literature for imaging applications. The input polarization has been considered by researchers while looking at the characterization and control of polarization in multimode fibers. Here, we show experimentally that the state of polarization of light, being injected at multimode fiber input, affects the fidelity of reconstructed images from speckle patterns. Certain polarization states produce high-quality images at fiber output, while some yield degraded results. We have designed a conditional generative adversarial network (CGAN) for image regeneration at various degrees of input light polarization. We demonstrate that in the case of multimode fibers that are held fixed, optimal imaging can be achieved by leveraging our CGAN model with the input light polarization state, where the fidelity of images is maximum.Our work exhibits high average structural similarity index values exceeding 0.9, surpassing the previously reported value of 0.8772. We also show that the model can be generalized to image adequately for all input light polarization states when the fiber has bends or twists. We anticipate our work will be a stepping stone toward developing high-resolution and less invasive multimode fiber endoscopes.§ INTRODUCTIONMultimode fibers (MMFs) can lead to practical endoscopes because they are thinner and less invasive than single-mode fiber bundles <cit.>. The presence of numerous spatial modes in MMFs can be harnessed for transmitting images. However, light waves propagating through different fiber modes interfere with each other to form speckles or random patterns at the fiber’s distal end. Hence, its properties resemble scattering or disordered media like fog, diffusers, and biological tissues that scramble the information to produce a speckle phenomenon. Extraction of data from speckle patterns is a challenging task. Primarily, three different strategies are generally employed for image reconstruction from speckles: optical phase conjugation <cit.>, computation of transmission matrix <cit.>, and deep learning <cit.>. Phase conjugation incorporates a complex interferometric method for measuring phase, and precise alignment is required between the camera and spatial light modulator. The transmission matrix (TM), on the other hand, aptly describes the relationship between an MMF input and output. It helps to assimilate the information about light absorption, reflection, and transmission through the medium. The TM measurement requires both amplitude and phase information. 
The accurate phase computation needs a stable reference arm and nontrivial interference setup. Moreover, phase values are very sensitive to external perturbations. Hence, one TM can only be used for the transmission state in which it is calculated <cit.>. Recent research indicates that the challenges mentioned above can be addressed by applying deep learning techniques, leading to a more effective imaging process using MMFs <cit.>.Previous deep learning works have shown various ways to improve MMF imaging in terms of accuracy, generalizability, and data requirements <cit.>. However, the effect of input polarization state changes on the reconstruction of images from speckles of multimode fibers has yet to be discussed thoroughly. Prior research has been done in the past on the characterization, statistics, and control of the polarization of light in multimode fibers. Due to random mode interference in multimode fibers, polarization mixing also occurs, which results in depolarized or partially polarized output <cit.>. On the other hand, it has been shown in <cit.> that the field distribution of some modes does not change during propagation through the fiber. Moreover, complete control of output polarization can be achieved using the eigenvectors and eigenvalues of the multimode fiber TM with orthogonal polarizations as a basis <cit.>. Considering these previous works, we hypothesize that input polarizations can affect the reconstruction of images at multimode fiber output and should be utilized to improve the MMF imaging process.Here, we devise an experimental and computational way to quantify input polarization impact on multimode fiber imaging. We acquire output data of speckles for nine input polarization states at multiple MMF positions. We reconstruct original images from the acquired datasets using our designed conditional generative adversarial network (CGAN). Our CGAN model is fast (training time: 1 hour, inference time: 5.4 ms), stable, and gives better reconstruction results. By varying the input polarizations and the fiber positions, we show that our system can produce average structural similarity index (SSIM) values above 0.9, which is higher than the previous value of 0.8772 <cit.>.We find that our model trained for one polarization state at a particular fiber position gives poor reconstruction results for another polarization state. To improve the generalizability of our deep learning model to reconstruct images for all polarization states at a specific fiber position, we merge an equal percentage of data from all nine polarization datasets to form one combined dataset. The model is trained on this dataset and tested on unseen data of each polarization state. This procedure is carried out separately for two fiber positions. Furthermore, we integrate subsets of eighteen datasets for both fiber positions. After training on this superset, our CGAN model can accurately reconstruct images for unseen data of all polarization states of two fiber positions under consideration. Hence, our work highlights that the input light polarization state affects the accuracy of reconstructed images from speckles at the multimode fiber output and it can be harnessed in two ways: 1) For a fixed MMF orientation, input polarization can be set to a degree where we get optimal imaging results,2) But in scenarios, where fiber position can change, we must train our model on the data measured while constantly changing fiber position and input light polarization state. 
In this way, we can get satisfactory reconstruction results for any input polarization state. We now describe the rest of the paper. Section <ref> details the experimental setup for data collection, followed by Section <ref>, in which we describe the data acquisition procedure. Section <ref> is dedicated to our deep learning framework, where we introduce its architecture, training processes, and integration with the data gathered in the previous sections. Section <ref> presents our methodology for evaluating the input polarization impact on the reconstructed images' quality and offers insights into the system's sensitivity to polarization variations. Section <ref> explains our model's generalization ability to diverse input polarizations. Finally, Section <ref> summarizes our findings and highlights potential avenues for future research.

§ EXPERIMENTAL SETUP The experimental setup schematic is illustrated in Fig. <ref>. We utilize a 633 nm continuous-wave laser diode (Eagleyard GC-02940) operated via a Thorlabs CLD1015 controller. After reflection by mirrors, the laser light is collimated through a telescopic system comprising two lenses with focal lengths of 500 mm and 100 mm. A polarizer is placed after the telescopic system to achieve horizontal polarization for optimal phase modulation with the HOLOEYE Pluto 2.0 spatial light modulator (SLM). The polarized laser beam is then directed onto a 50/50 beam splitter (BS). Half of the beam is transmitted towards the SLM, while a beam blocker blocks the remaining half. Once reflected by the SLM, the phase-modulated light passes through the BS and is imaged by lens 3 onto collimator 2, which in turn focuses the image of the phase-modulated light onto the input of a multimode fiber. The multimode fiber has core and cladding diameters of 50 μm and 125 μm, respectively, with a length of 1 m and a numerical aperture (NA) of 0.22. Before the fiber input, a half-wave plate (HWP) and a quarter-wave plate (QWP) are positioned to attain any desired state of polarization (SOP). The multimode fiber converts all the information the laser light carries into a speckle pattern. The speckle pattern emerging from collimator 3 is imaged by lens 3 onto a Thorlabs DCC1545M CMOS camera with a resolution of 1280×1024 pixels.

§ DATA ACQUISITION FOR DIFFERENT POLARIZATION STATES Initially, we place the multimode fiber in position 1, as shown in Fig. <ref>, and its orientation remains fixed for all measurement sets. Fixing the position is essential, as speckle patterns change with a change in the orientation of the fiber <cit.>. A state of polarization (SOP) at the fiber input is set with the HWP and QWP while observing the SOP on a Thorlabs PAX1000IR1/M polarimeter. We choose nine different SOPs comprising linear, circular, and elliptical polarizations, as shown in Fig. <ref>. The Stokes parameters indicated in Fig. <ref> are measured using the polarimeter. For each input SOP, a computer sends the Modified National Institute of Standards and Technology (MNIST) data of 60,000 handwritten digits to the SLM. The images are of size 28×28 pixels and are up-sampled to 64×64 pixels before being sent to the SLM. The light reflected from the SLM now contains images of handwritten digits. After passing through the MMF, this light produces speckle patterns recorded by the computer connected to the camera. The speckle patterns saved on the system are cropped to dimensions of 256×256 pixels. The process of speckle data collection for 60,000 images takes approximately 24 hours.
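The software-side preparation of each speckle-label pair follows the sizes quoted above; a minimal sketch is shown below. The SLM upload and camera read-out are hardware-specific and omitted, and the centered crop location and interpolation mode are assumptions.

```python
import numpy as np
from PIL import Image

def prepare_pair(digit_28x28, camera_frame):
    """Prepare one (label, speckle) pair with the sizes quoted in the text.

    digit_28x28: a 28x28 MNIST image to be displayed on the SLM.
    camera_frame: the raw 1280x1024 CMOS frame recorded for that digit.
    """
    # Up-sample the MNIST digit to 64x64 before it is sent to the SLM.
    label = np.asarray(Image.fromarray(digit_28x28).resize((64, 64), Image.NEAREST))

    # Crop a 256x256 region of the recorded speckle pattern (center crop assumed).
    h, w = camera_frame.shape
    top, left = (h - 256) // 2, (w - 256) // 2
    speckle = camera_frame[top:top + 256, left:left + 256]

    # The CGAN later down-samples the speckle to 64x64x1 at its input.
    return label, speckle
```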
We perform this procedure for nine input polarization states, resulting in the formation of nine different datasets. To further gauge the effect of input polarization on the accuracy of imaging through the MMF, we change the fiber position and repeat the methodology of acquiring nine datasets for nine different polarization states.

§ DEEP LEARNING FRAMEWORK FOR IMAGE RECONSTRUCTION After the formation of data sets at various distinct input polarization states, the next step is reconstructing the original images from the speckle patterns. For this, we design a pix2pix model based on a CGAN, as shown in detail in Fig. <ref>. The generator is a U-Net-type architecture with an encoder, decoder, and skip connections. We first down-sample the speckle patterns to size 64×64×1 and apply them as the input to the generator. The generator is enabled with robust feature extraction capabilities due to several convolution and deconvolution layers. In addition, the skip connections allow weight sharing and preserve feature information across different network layers. The output has the same resolution as the input. The discriminator is composed of five convolution layers and one flattened layer. The generator's output, concatenated with the true label, is employed as input to the discriminator, which classifies a patch in an image as real or fake. CGANs have been used previously for reconstructing images from speckle patterns produced by multimode fibers <cit.>. The highest average SSIM reported previously is 0.8772 <cit.>. In contrast to previous works, we use the binary cross-entropy (BCE) loss for the discriminator and an amalgam of the mean squared error (MSE) and mean absolute error (MAE) for the generator. The MSE loss function minimizes the difference between real and generated data. It also overcomes the problem of vanishing gradients, resulting in stable training and high-fidelity results <cit.>. The MAE loss aids in the regeneration of low-frequency details. The hybrid loss function for a CGAN is the weighted sum of the generative and discriminative losses (Eq. (<ref>)). We define the discriminator and generator losses for our designed model in Eq. (<ref>) and Eq. (<ref>), respectively:

ℒ_CGAN = ℒ_Gen + ℒ_Disc,
ℒ_Disc = λ_1 l_1(D(y,x), 1) + λ_1 l_1(D(G(x),x), 0),
ℒ_Gen = l_2(D(G(x),x), 1) + λ_2 l_3(G(x), y),

where G() and D() are the generator and discriminator functions, respectively. The speckle pattern inputs are represented by x, while the true labels are denoted by y. We use l_1 for the BCE, l_2 for the MSE, and l_3 for the MAE. To optimize the model's performance, we incorporate weighting factors, λ_1 and λ_2, set to values of 100 and 0.5, respectively, to effectively balance the BCE and MAE losses. Out of the 60000 pairs of speckle patterns and MNIST digits in each polarization dataset, we reserve 5000 pairs for testing. Of the remaining 55000 pairs, 85% are kept for training and the remaining 15% are used for validation. For each dataset, the CGAN model takes 1 hour to train for 80 epochs, and the inference time for each reconstructed image is 4.6 ms. We realize the data collection using a Python 3.10.12 environment. Furthermore, we utilize the PyTorch framework for building, training, and testing the model. The whole mechanism of deep learning is accelerated by an NVIDIA Tesla V100 Tensor Core GPU. We train and evaluate our CGAN model for all compiled polarization data sets at fiber positions 1 and 2. We use the SSIM and peak signal-to-noise ratio (PSNR) as evaluation metrics for our restored images.
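Before turning to these metrics, the hybrid loss of the equations above can be written compactly. The following is a minimal PyTorch-style sketch, assuming the generator and discriminator networks are given and that the discriminator ends in a sigmoid; the weighting factors follow the values quoted in the text.

```python
import torch
import torch.nn as nn

bce, mse, mae = nn.BCELoss(), nn.MSELoss(), nn.L1Loss()
lambda_1, lambda_2 = 100.0, 0.5      # weighting factors quoted in the text

def discriminator_loss(D, G, x, y):
    # x: speckle pattern (input), y: true label image (MNIST digit).
    real = D(y, x)                            # patch scores for (true label, speckle)
    fake = D(G(x).detach(), x)                # patch scores for (generated, speckle)
    return lambda_1 * (bce(real, torch.ones_like(real)) +
                       bce(fake, torch.zeros_like(fake)))

def generator_loss(D, G, x, y):
    g_out = G(x)
    fake = D(g_out, x)
    adv = mse(fake, torch.ones_like(fake))    # l_2 term: push D(G(x), x) towards 1
    rec = mae(g_out, y)                       # l_3 term: MAE reconstruction
    return adv + lambda_2 * rec
```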
SSIM compares the similarity between reconstructed digits and true labels based on their luminance, contrast, and structure. Its value varies between 0 and 1: an SSIM value around zero means no similarity between two images, while a value closer to 1 denotes that the images are almost identical. Its expression is given by SSIM = (2μ_xμ_y + C_1)(2σ_xy + C_2) / [(μ_x^2 + μ_y^2 + C_1)(σ_x^2 + σ_y^2 + C_2)], where μ_x and μ_y refer to the mean values over a window in images x and y, respectively, σ_x and σ_y are the standard deviations over a window of x and y, σ_xy is the covariance over a window between image x and image y, and C_1 and C_2 are constants. Another metric that we have used is the PSNR, which is the ratio between the maximum pixel value of the ground-truth image (I_max) and the root mean squared error (RMSE), and is given by PSNR = 20 log_10(I_max/RMSE). The RMSE is determined between the pixel values of the original and the predicted images. The higher the value of the PSNR, the better the quality of the reconstructed images. Some of the reconstruction results from our designed CGAN are given in Fig. <ref> and Fig. <ref> for fiber positions 1 and 2, respectively. For brevity, the regenerated images for each fiber position are displayed for only two polarization states: one where SSIM and PSNR attain their respective maximum values and the other where they reach their minimum values. This choice is made to show a clear difference in image fidelity between these polarization states. As can be observed in Fig. <ref>, at polarization P9 (elliptical), the images are closer to the true labels (have higher SSIM and PSNR) than at polarization P4 (45-degree). P4 has relatively poor image regeneration results, especially for digits 2 and 3. The same can be apprehended from the results of Fig. <ref>, where the average PSNR and SSIM attain their highest values at P1 (nearly horizontal) and lowest at P7 (elliptical).

§ ASSESSING THE INPUT POLARIZATION EFFECT We record eighteen different data sets for two distinct positions of the multimode fiber at the nine varying input polarization states shown in Fig. <ref>. We then train our designed CGAN for these data sets, followed by model evaluation for 5000 unseen test images. The obtained average SSIM and PSNR for every data set are shown in Figs. <ref> and <ref>. The varying magnitudes of these bar graphs illustrate that SSIM and PSNR change with the input polarization state. For position 1, the maximum average SSIM of 0.9010 and PSNR of 22.89 are attained at elliptical polarization, while the minimum SSIM of 0.8430 and PSNR of 20.182 are obtained for linearly polarized light at 45 degrees. The percentage difference between the smallest and largest SSIM is 6.65%, while for the PSNR this variation is 12.5%. For position 2, the highest average SSIM of 0.9046 and PSNR of 23.202 are achieved for nearly horizontally polarized light. The lowest SSIM of 0.8225 and PSNR of 19.458 are obtained when the input light is vertically polarized. In this case, the percentage difference between the two extremities is 9.5% for the SSIM and 17.55% for the PSNR. We find that the deviation between the evaluation metrics' values is larger for some polarization states than for others. It can also be inferred from the plots that the effect of different polarization states changes with a change in the fiber position. For example, at P1 and position 2, the SSIM of 0.9046 and PSNR of 23.202 are high, but for position 1 and the same polarization state, these metrics are reduced to 0.8632 and 21.302, respectively.
This is because modal distribution changes with the bending or twisting of the fiber.We repeat the data collection, model training, and testing procedure twice at each of the nine polarization states and for individual fiber positions. This is done to ensure the capture of the persistent impact of different polarization states on the fidelity of reconstructed images. When evaluated across various input polarization states, we observe that the percentage difference in SSIM and PSNR values for reconstructed images exhibit consistent results with only 1-2% marginal fluctuations. This means that if SSIM and PSNR are minimum at P4 (45-degree linear) compared to other polarization states, it will always be minimal, no matter how many times we repeat this process while keeping the position fixed.Physically, the relationship of reconstruction of images with input polarization states can be elaborated in the following way. When light with a specific polarization state is launched to any fiber mode, it spreads to other modes. Due to modal coupling, polarization scrambling also occurs, resulting in different polarization states at the inputs and outputs of all modes. Moreover, higher-order modes suffer from higher attenuation than lower-order modes. For an arbitrary polarized (p) input |ϕ⟩ the output field is |ψ⟩ = t_p|ϕ⟩, where t_p is the transmission matrix for p polarized input. The total intensity of this polarization state is ⟨ψ|ψ⟩ =⟨ϕ|t_p^† t_p|ϕ⟩. The transmission range achieved in this state is defined by the eigenvalues of t_p^† t_p. The maximum energy that can be maintained in the same state of polarization is given by the largest eigenvalue, while the maximum energy retained in orthogonal SOP is defined by the smallest eigenvalue <cit.>. The larger eigenvalues and their associated eigenvectors get their contribution from lower-order modes, leading to maximum transmission. The eigenvectors corresponding to smaller eigenvalues are influenced by higher-order modes, resulting in reduced transmission. Also, input wavefronts change due to data sent on SLM. For some states of input polarization, the eigenvectors of most of the wavefronts from SLM coincide with greater eigenvalues of t_p^† t_p, contributing to the maximum transmission of these wavefronts. This eventually improves the fidelity of reconstructed images as most of the input information is retained while propagating through fiber. On the contrary, for certain input SOPs, eigenvectors of input wavefronts correspond to smaller eigenvalues, causing the attenuation of input information. SSIM and PSNR will be low in these cases.Also, due to variations in mode and polarization coupling, as well as changes in the transmission matrix and its eigenvalues with respect to the fiber's position, the influence of input polarization differs between the two fiber positions.§ GENERALIZATION FOR VARYING INPUT POLARIZATION STATESThe input polarization effect can be harnessed in two ways: (a) The input polarization is set at a degree where we get the optimal imaging results for endoscopic applications where multimode fiber length is small and is not bent or twisted while imaging <cit.>. (b) In the case of long-length endoscopes inserted deeply in the body, a dynamically perturbed multimode fiber should also be trained or calibrated for a diverse range of input polarization degrees. This approach ensures consistently satisfactory imaging results regardless of the input polarization degree. 
To start with the generalization mechanism, we first use the weights obtained for one input polarization state to reconstruct the test images of another input polarization dataset. Not surprisingly, we get poor results, as the average SSIM and PSNR remain below 0.2 and 8, respectively. We then combine 15% of the training data of each of the nine input polarization states to form one dataset with 74250 speckle-label pairs, for fiber positions 1 and 2 separately. We train our designed CGAN model on this collective set of images and test it on 5000 unseen images of each polarization state and fiber position. The average SSIM and PSNR of the nine polarization states tested on the weights of the combined datasets are plotted in Fig. <ref> and Fig. <ref> for positions 1 and 2, respectively. The SSIM and PSNR values are reasonable in this case: the SSIM is not less than 0.7 and the PSNR is greater than 17 for position 1. Likewise, for the position-2 combined data, SSIM and PSNR remain above 0.8 and 18, respectively. As a final step towards the generalization process, where an MMF can image well for any input polarization state (P1-P9) and position (1 or 2), we pick 10% of the training data from the 18 datasets and integrate them to form one superset of 133650 images. After training on this set, we test our designed CGAN model on unseen images of all polarization states at both fiber positions. In this case, the average SSIM and PSNR are shown in Fig. <ref>. It is noticeable from this bar graph that the SSIM does not fall below 0.75, while the PSNR is greater than 17. These reasonable metric values signify successful image reconstruction for any input polarization state. This process indicates that, for better generalization, the CGAN model should be trained on a dataset obtained while dynamically changing the input polarization state and fiber position. Table <ref> encapsulates the mean and standard deviation of the evaluation metrics when training is done separately on each polarization dataset and for combinations of different polarization and position datasets. As can be seen, the mean values are higher after training on individual polarization datasets. However, as discussed previously, the model trained for one polarization does not reconstruct well for another polarization state. The mean metric values are relatively low for combinations of datasets, but the generalizability is improved. The small standard deviation values in these cases indicate that the model can image well for various polarization states.

§ CONCLUSION We demonstrate experimentally the influence of input light polarization on the accuracy of image reconstruction from speckle patterns at the multimode fiber output. Specifically, we have established a clear correlation between the input polarization states and the variation in the average structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) across a set of 5000 unseen images. Furthermore, we have exhibited this polarization impact for two distinct multimode fiber positions. The high SSIM values exceeding 0.9 achieved at both fiber positions surpass the previously reported value of 0.8772. We conclude that, toward achieving optimal MMF imaging, the polarization state for which the SSIM is maximum should be used. Moreover, we have generalized our model for the cases where the input polarization and the position of the fiber are changing. By training on combined data from all polarization states and fiber positions, we show that imaging through the MMF can be done satisfactorily for any polarization state and fiber position.
Our work can be extended to explore the influence of input polarization on multimode fiber-optic communication systems. Furthermore, this research is transferable to imaging applications through challenging media such as fog and biological tissues. We believe that our work can significantly contribute to developing compact and high-resolution endoscopes that do not require traditional lenses.
"authors": [
"Jawaria Maqbool",
"Syed Talal Hassan",
"M. Imran Cheema"
],
"categories": [
"physics.optics",
"eess.IV"
],
"primary_category": "physics.optics",
"published": "20231027043923",
"title": "Towards optimal multimode fiber imaging by leveraging input polarization and conditional generative adversarial networks"
} |
CELL: a Python package for cluster expansion with a focus on complex alloys

Santiago Rigamonti[[email protected]], Maria Troppenz, Martin Kuban, Axel Hübner, and Claudia Draxl
Humboldt-Universität zu Berlin, Institut für Physik and IRIS Adlershof, 12489 Berlin, Germany
27 October 2023

We present the Python package CELL, which provides a modular approach to the cluster expansion (CE) method. CELL can treat a wide variety of substitutional systems, including one-, two-, and three-dimensional alloys, in a general multi-component and multi-sublattice framework. It is capable of dealing with complex materials comprising several atoms in their parent lattice. CELL uses state-of-the-art techniques for the construction of training data sets, model selection, and finite-temperature simulations. The user interface consists of well-documented Python classes and modules (http://sol.physik.hu-berlin.de/cell/). CELL also provides visualization utilities and can be interfaced with virtually any ab initio package, total-energy codes based on interatomic potentials, and more. The usage and capabilities of CELL are illustrated by a number of examples, comprising a Cu-Pt surface alloy with oxygen adsorption, featuring two coupled binary sublattices, and the thermodynamic analysis of its order-disorder transition; the demixing transition and lattice-constant bowing of the Si-Ge alloy; and an iterative CE approach for a complex clathrate compound with a parent lattice consisting of 54 atoms.

§ INTRODUCTION Typical quests in materials science, as for instance finding stable compositions of an alloy and its properties, or determining the conditions for molecular adsorption on a surface, involve computations of a large number of atomic configurations on a well-defined lattice. Ideally, one would perform these computations using first-principles methods, e.g., density-functional theory (DFT); however, due to the combinatorial explosion of the number of configurations with system size, a direct ab initio approach is in many cases out of reach. In this context, the cluster expansion (CE) method <cit.> can be used to achieve a physical description of materials with essentially ab initio accuracy at reasonable computational cost. In this method, the configuration-dependent properties of the material are computed by means of generalized Ising-like models parametrized with ab initio data. Thus, calculations of a huge number of atomic configurations with large supercells become feasible, bridging length scales and enabling a statistical-thermodynamics description. Despite these capabilities, systems of technological interest are often still too complex. Dealing with complexity requires adequate code, able to account for multi-component settings, multiple sublattices, large parent lattices, surfaces and interfaces, and more. Moreover, CE modeling also entails tasks like learning from data, including the creation of data sets for model training and the evaluation and selection of models. In this context, the application of modern artificial intelligence (AI) techniques to CE model construction is possible and desirable. In this work, we present the CE Python <cit.> package CELL <cit.>, which provides a modular approach to cluster expansion, fulfilling all the needs mentioned above. CELL allows the set-up of systems with an arbitrary number of substituent species types and an arbitrary number of sublattices. Here, a sublattice is a subset of crystal sites whose composition can differ from other sites.
The size of the parent lattice, i.e., the number of atoms comprising the primitive unit cell of the pristine (non-substituted) material, can be arbitrarily large, so that CELL can readily deal with complex alloys having several atoms in the primitive unit cell. One, two, and three dimensions can be handled, which allows, for instance, the study of surface alloys. The construction of CE models involves fitting to, e.g., ab initio data sets. Since these data are often very costly to compute, CELL implements a number of structure-selection strategies aimed at rationalizing data sets for optimal model training. These include special quasirandom structures <cit.> and variance-reduction schemes <cit.>. CELL is written in Python <cit.> and provides a common interface to the various estimators from the machine-learning Python library scikit-learn <cit.>, as well as to native estimators, i.e., those coded inside the CELL package. Its modularity allows for the ad hoc design and implementation of AI strategies. At the core of the representation of crystal structures, the Atoms object of the Atomic Simulation Environment (ASE) <cit.> is employed, giving access to all the advantages of ASE, as for instance the construction of structure databases and the interface to a large number of ab initio DFT codes. CELL provides state-of-the-art tools for performing thermodynamic analysis of materials. To this end, a module for the calculation of the configurational density of states with the Wang-Landau <cit.> method is available. This module can be executed in parallel, which paves the way for performing simulations of very large supercells. The paper is organized as follows: Sec. <ref> gives a general introduction to the CE method; Sec. <ref> and Sec. <ref> present the structure of the code for model construction and thermodynamic analysis, respectively, along with an illustration of its most important features through a complex surface system consisting of a Pt/Cu(111) surface alloy with atomic O adsorption. Finally, the application to SiGe and the complex thermoelectric clathrate compound Ba_8Al_xSi_46-x, with 54 atoms in the primitive cell, are presented in Sec. <ref>. Conclusions are given in Sec. <ref>.

§ CLUSTER EXPANSION METHOD In this section, we present an overview of the cluster expansion formalism for the multi-component and multi-sublattice case <cit.>. Additionally, formal aspects related to the application of AI techniques to CE model construction are introduced.

§.§ Simple CE of a binary alloy Figure <ref> shows a schema of a binary substitutional system where every site of a square lattice can be occupied by one of two species, indicated by blue and red color, respectively. The goal is to find the relation between the configuration of the substituent species (red) and the physical properties of the system, for instance the total energy. This question could be answered by performing numerically costly first-principles calculations for every atomic arrangement of the lattice. Though conceptually straightforward, this simple strategy turns out to be impractical, since the number of configurations scales as 2^N, with N being the number of lattice sites (for the system of Fig. <ref>, the number of configurations would be of the order of 7×10^4, without considering symmetries). However, by making a few reasonable assumptions, we can calculate the energy of an arbitrary configuration as in the left side of Fig. <ref> in a very efficient way. The only input needed is the energies of a small number of well-chosen configurations.
We start by considering effective n-body interactions and assume that (i) the 1-body interactions are larger than the 2-body interactions, these being larger than the 3-body interactions, and so on; (ii) the interactions decrease with distance; and (iii) the total energy can be written as a linear superposition of n-body interactions. Next, we evaluate the ab initio energies E_0 to E_5 of the configurations depicted on the right side of Fig. <ref>. According to the assumptions made above, these contain important interactions that are labeled with J_0 to J_5: J_0 represents the energy E_0 of the pristine structure (all circles blue). J_1 accounts for the change in energy of the system when a "blue" species is replaced by a "red" one, and can be obtained by subtracting J_0 from the computed E_1. J_2 to J_4 represent 2-body interactions of increasing distance. They can be determined by making use of assumption (iii). For instance, the calculated value of E_2 is the addition of the energy of the pristine structure (J_0), plus two "blue"-to-"red" substitutions (2J_1), plus an additional term (J_2) that embodies a 2-body interaction and can be estimated by the difference E_2-(J_0+2J_1). Finally, J_5 represents a 3-body interaction that can be estimated analogously to the 2-body interactions. With these so-called effective cluster interactions J_i=0-5, we can now predict the energy of the configuration on the left side of Fig. <ref> as Ê=J_0+6J_1+2J_2+4J_3+4J_4+1J_5. If assumptions (i) to (iii) are true, then the predicted energy value Ê will be close to the ab initio energy E (henceforth predicted values will be denoted by a "hat" symbol). The integer numbers multiplying the interactions J_i tell how many times the corresponding pattern or cluster is present in a given structure. For instance, it is 4 for the interaction J_4, as indicated by the four arrows in Fig. <ref>. The naive approach just described has two important shortcomings. First, assumptions (i) and (ii) cannot be expected to be true for all material properties. Second, it cannot be easily adapted to more complex situations as, e.g., shown in Fig. <ref>. In this case, we have a parent lattice (top left) with different sublattice types assigned to the lattice points. These are indicated by white, striped, and gridded patterns. Different numbers of substitutions and species types can occupy each sublattice, as indicated by the colors (top right). Large supercells can be constructed by periodic repetitions of the parent lattice (bottom left). These allow for the construction of configurations compatible with the parent lattice, such as fully disordered, ordered, or partially (dis)ordered structures, as shown in the bottom right. Analogous situations are frequently encountered, e.g., in complex bulk or surface alloys. In the following, we will explain how to treat this general case. The formulation does not make use of assumptions (i) and (ii) but allows for the application of AI methods to identify property-specific interactions.

§.§ General CE formalism We consider a crystal lattice with atomic positions 𝐑_i, i=1,...,N. Every position 𝐑_i can host any of M_i different atomic species. An arbitrary arrangement of species in the lattice can be represented by the vector σ=(σ_1, σ_2, ..., σ_N), with σ_i being integer numbers between 0 and M_i-1, indicating the species at position 𝐑_i. The pristine crystal is defined as the configuration with σ_i=0 for all i. A physical property that depends on the configuration can be represented by a function P(σ).
It can be shown <cit.> that, in the discrete space spanned by the vectors σ, there exist complete and orthonormal sets of basis functions Γ_α(σ), called cluster functions, in terms of which P(σ) can be expanded: P(σ) = ∑_α J_α Γ_α(σ). The real numbers J_α are the expansion coefficients, and α stands for a vector with components α_i ∈ {0,1,...,M_i-1}, i=1,...,N. The cluster functions fulfil the orthonormality condition ⟨Γ_α,Γ_β⟩=δ_αβ. Here, δ_αβ=∏_i=1^N δ_α_iβ_i, with δ_α_iβ_i being the Kronecker delta, and the inner product is defined by ⟨ f,g⟩=∑_σ f(σ)g(σ)/∏_i=1^N M_i, with f and g arbitrary functions in configuration space and the sum running over all possible configurations σ of the system. The cluster functions can be constructed as follows <cit.>: Γ_α(σ) = ∏_i=1^N γ_M_i,α_i(σ_i), with the functions γ_M_i,α_i(σ_i) forming a real, orthonormal basis in the discrete (and finite) domain σ_i=0, 1,..., (M_i-1). Usual choices for this basis include discrete Chebyshev polynomials and trigonometric functions <cit.>. Without loss of generality, one can choose the constant function γ_M,0(σ)=1 for α=0. Thus, the product in Eq. (<ref>) can be restricted to the indices α_i ≠ 0. Accordingly, a cluster function is defined by indicating the set { (i,α_i) | α_i ≠ 0}. The special case where all α_i are 0 corresponds to the empty cluster function, Γ_∅(σ)=1. There are M_i basis functions γ at a site i. Obviously, if two clusters α and β are related by a symmetry operation of the parent lattice, then their expansion coefficients are equal, J_α=J_β. It should be noted that the sublattice types also determine the symmetry. Hence, the sum in Eq. (<ref>) can be split into a sum over symmetrically inequivalent (s.i.) clusters and a sum over symmetrically equivalent ones: P(σ) = ∑_α^s.i. M_α J_α X_α(σ). Here, the cluster correlation function X_α(σ) = (1/M_α) ∑_β∈O(α) Γ_β(σ) is the average of the cluster functions in the set of clusters symmetrically equivalent to α. This set, of size M_α, is called the orbit of cluster α, and we denote it by O(α). Models are often constructed for an intensive property, as for instance the energy per unit cell. In such a case, M_α in Eq. (<ref>) can be replaced by m_α = M_α V_pc/V_sc, with V_sc and V_pc being the super- and the parent-cell volume, respectively. The integers m_α are also called cluster multiplicities. In the simple binary case of Fig. <ref>, we have M_i=2. If we choose the basis γ_2,0(σ)=1, γ_2,1(σ)=σ, σ=0,1 [This basis is not orthonormal; however, this is not problematic for the present example.], it is easy to verify that, for the structure represented on the left side of Fig. <ref>, the cluster correlations for clusters 1 to 5 on the right are, respectively, X=6/16, 2/32, 4/32, 4/64, and 1/64, with the denominators being the corresponding values of M. Thus, the values of MX=6,2,4,4,1 agree with the coefficients for J_1 to J_5 in Eq. (<ref>). The number of clusters in the expansion of Eq. (<ref>) is in principle infinite, and the expansion coefficients J_α (called effective cluster interactions, ECIs in short) are still undetermined. For practical applications, though, it is necessary to cut off the cluster basis, keeping only the most relevant clusters in Eq. (<ref>). One also has to determine the ECIs that lead to accurate predictions of the property of interest. These are the main tasks to be addressed in the construction of CE models.
Below, we briefly explain how this problem is tackled by using AI techniques. §.§ CE as a data-analytics problem To build a CE model of a material property of interest, we need (i) a set of structures S={σ_1, σ_2,...,σ_N_s}, (ii) the corresponding calculated properties P^⊤=(P_1, P_2, ..., P_N_s) with P_i=P(σ_i), and (iii) a set of clusters C={α_1, α_2, ...,α_N_c}. Then, we build a matrix X of cluster correlations, with elements X_ij=X_α_j(σ_i), such that a column j in the matrix represents a cluster, while a row i represents a structure. By using Eq. (<ref>) with the sum on clusters limited to the set C, we can write, for an intensive property, P̂=X J, where J is a column vector with elements J_i=m_α_iJ_α_i, i=1,...,N_c. Optimal cluster interactions J can be found by minimizing a cost function: J = argmin_J^*[‖X J^*-P‖_2^2+ϕ( J^*)]. Here, we use the standard definition of the ℓ_p-norm of a vector x=(x_1,...,x_n), namely ‖x‖_p=(∑_i=1^n |x_i|^p)^1/p. (In the particular case p=0, ‖x‖_0 is defined as the number of non-zero components of x.) The first term inside the square brackets is the mean squared error (MSE) of the predictions, and the second term, ϕ( J^*), is a penalization or regularization term. The latter can be used for different purposes. For instance, if there are no linearly dependent columns in X and N_c≤ N_s, i.e., the number of clusters does not exceed the number of structures in the training set S, optimal cluster interactions J can be found by directly minimizing the MSE of the predictions (ϕ=0), with the solution J = (X^⊤X)^-1X^⊤P. In typical applications, data are scarce, and one wants to find the relevant interactions out of a large set of clusters, i.e., to solve an underdetermined problem with N_c> N_s. In this case, the Gram matrix X^⊤X cannot be inverted. Thus, the optimization problem posed by Eq. (<ref>) must be regularized. The easiest way to achieve this is by choosing ϕ(J)=λ‖ J‖_2^2, with the real hyperparameter λ. In this case, the solution is J = (X^⊤X+λI)^-1X^⊤P <cit.>. Besides making the problem solvable, this selection penalizes large ECI values, leading to solutions with increasingly small interactions for increasing λ. To obtain sparse models, in which the relevant interactions are represented by a small number of clusters, compressed-sensing techniques <cit.> may be employed. In compressed sensing, one seeks penalization terms in Eq. (<ref>) that promote sparsity, giving solutions with many interactions J_i being exactly zero. Common penalization terms fulfilling this requirement are ϕ=λ‖ J‖_0 or ϕ=λ‖ J‖_1. The first choice penalizes solutions with a large number of non-zero parameters by using the ℓ_0-norm, thus producing sparse models. The associated minimization problem is NP-hard <cit.>, i.e., its solution cannot be found in polynomial time, and exact solutions can only be computed for rather small cluster pools. Using the second choice, the ℓ_1-norm (also called the Manhattan norm), leads to the least-absolute-shrinkage-and-selection-operator (LASSO) approach <cit.>, which represents a convex optimization problem, and efficient algorithms exist for its solution. Under certain conditions <cit.>, the solutions found with LASSO may approximate well the solutions found using the ℓ_0 norm. Once the optimal J is found by solving Eq. (<ref>), one can use Eq.
(<ref>) to perform property predictions for arbitrary configurations σ.§ CLUSTER EXPANSION WITHTo demonstrate the construction of CE models with , we build one for the energy of formation of a complex surface system, consisting of a Pt/Cu(111) surface alloy with atomic O adsorption, as shown in Fig. <ref>. In this example, both O adsorption and Pt-Cu alloying phenomena are simultaneously accounted for. Platinum and copper form a two-dimensional (2D) alloy in the top-most atomic layer of Cu(111), creating disordered as well as ordered 2D surface patterns (p(2×2) and √(3)×√(3) R 30^∘ reconstructions) <cit.>. Atomic oxygen adsorbs preferentially on hollow fcc sites, both on the pristine Cu(111) <cit.> and Pt(111) <cit.> surfaces. These facts define the parent lattice of our system.To avoid the generation of numerically costly DFT data sets, the total energies in the present example are calculated with the effective medium theory (EMT) calculator from ASE <cit.>. While the EMT potentials for Pt and Cu are quite realistic <cit.>, this is not the case for oxygen. Since, however, the main purpose of this section is to showcase the construction of a CE model and its use and not to capture the actual physical details of the system, we proceed in this way. The use of toy total-energy models is a common practice to test CE methods (see Refs. <cit.>).All the code listings shown in this section are written in Python 3 <cit.> and can be run interactively as Jupyter notebooks <cit.> (see Sec. <ref>).§.§ Generation of structures The basic building block for the generation of structures for the CE is the parent lattice. It comprises the definition of the primitive cell of the pristine crystal and the species that can possibly occupy any crystal site. In , a parent lattice is embodied by theclass. It admits the definition of a multi-sublattice and multi-composition framework, as demonstrated in Listing <ref>. In lines 1 to 5 of the listing, the Atomic Simulation Environment (ASE) <cit.> is used to create anobject representing the primitive cell of an fcc (111) Cu slab with three atomic layers and vacancy sites on hollow fcc positions (vacancies are indicated with the character ). Thisobject is assigned to the variable , as it represents the pristine primitive cell, hosting no substituents. In line 8, theclass ofis loaded [Note that thepackage is called and imported in Python code with the name .], and aobject is initialized in line 11 and assigned to the variable . The initialization takes two arguments: the primitive cell (variable ), and a list of possible species that every crystal site can host. The latter is defined in the listin line 10 of the listing: The first two layers may only contain Cu (thus behaving as spectator atoms <cit.>,they determine the symmetry but are not part of any cluster), while the Cu atoms on the top layer may be substituted by Pt, and the remaining vacancy sites (X) can be substituted by oxygen. Note that the order of this list must correspond to the atomic arrangement in theobject . The latter could be inquired with the method . 
[caption=Creation of a two-dimensional multi-composition multi-sublatticeobject., label=lst:p-lat] from ase.build import fcc111, add_adsorbatepristine = fcc111('Cu', a=3.59, size=(1,1,3)) # 3-atomic-layer Cu(111) slab add_adsorbate(pristine,'X',1.7,position='fcc') # Hollow fcc vacancy site pristine.center(vacuum=10.0, axis=2) # add vacuum along z-axis# Note: CELL is imported with the name "clusterx" from clusterx.parent_lattice import ParentLattice symbols = [['Cu'],['Cu'],['Cu','Pt'],['X','O']] p_lat = ParentLattice(pristine, symbols=symbols) p_lat.print_sublattice_types()Upon execution of this code, the output, as shown in Fig. <ref>, is displayed, indicating that three sublattices were created: two different binary sublattices (assigned sublattice types 0 and 2) and one spectator sublattice (sublattice type 1). In this way, a full multi-component multi-sublattice framework can be generated.does not restrict the number of n-ary (unary, binary, ternary, ...) sublattices that can be defined in this way.We can now build a supercell and visualize it, as shown in Listing <ref>. Inline 2 of the listing, we use theclass ofto create a object, based on the parent lattice created in Listing <ref>. The lattice coordinates of the supercell vectors are (4,0) and (-2,4), referring to the hexagonal unit vectors of the Cu(111) surface (indicated by arrows in the left panel of Fig. <ref>). The call to themethod ('s plotting interface for Jupyter notebooks) in line 5 produces the image shown in Fig. <ref>. In this graphical representation of the , the first image on the left depicts the pristine, non-substituted crystal, while the images on the right, represent the results of substituting one of the species as allowed in the definition of the parent lattice. [caption=Creation and visualization of aobject., label=lst:scell] from clusterx.super_cell import SuperCell scell = SuperCell(p_lat,[[4,0],[-2,4]])from clusterx.visualization import juview juview(scell)A supercell like the one depicted in Fig. <ref>, serves as a blueprint for the generation of structures. In Listing <ref>, we use the supercell to construct random structures and gather them in aobject, which is created in line 2 of the listing. Objects of this class act as structure containers which have a database attached. They can be serialized, in the form of JSON database files fully compatible with ASE's JSON databases. [caption=Creation of aobject containing 50 random structures., label=lst:sset] from clusterx.structures_set import StructuresSet sset = StructuresSet(p_lat) for i in range(50): rnd_str = scell.gen_random_structure() sset.add_structure(rnd_str) juview(sset,n=4)In line 4, structure creation takes place by calling themethod of theclass. This returns aobject. In ,objects consist of a supercell augmented with a decoration array, which is a list indicating which species occupy the individual sites of the supercell. Thus, all relevant information like, sublattice types, is provided. Less memory-consuming representations consisting of only the decoration array, are available for tasks such as Monte Carlo simulations (see Sec. <ref>). The 50 generated structures are added toin line 5 of Listing <ref> by using themethod of theclass. Finally, four of the generated structures are displayed by calling themethod in line 6, with the result as shown in Fig. <ref>. In summary, the construction and collection of structures intakes place in four classes, three of them related by inheritance as shown in Fig. 
<ref>: Theclass inherits from ASE'sclass and is supplied with a multi-composition-multi-sublattice framework; ais an enlarged ; ais adecorated with a specific configuration of substituent species. These classes are equipped with a lot of useful functionality, either through methods inherited from ASE'sclass, or from methods native to . The latter are documented in Ref. <cit.>.objects allow, for the union of sets through the "+" operator, serialization, aggregation of properties, and more.§.§ Calculation of configuration-dependent properties Now that we have a set of structures, the next step in a standard CE workflow is to perform ab initio computations of the properties that we want to predict with a CE model. A common property to be modeled is the total internal energy of the alloy, E(σ). With , its computation can be easily achieved through the methodof theclass. All ab initio codes supported by the calculators of ASE can be employed. This is exemplified in the following listing by using the effective-medium-theory () calculator: [caption=Calculation of the total energy of every structure in aobject with a calculator of ASE., label=lst:calcprop1] from ase.calculators.emt import EMTsset.set_calculator(EMT()) # Assign EMT() calculator to StructuresSet.# The total energy of every structure in sset is evaluated. sset.calculate_property(prop_name="e_tot") Here, in line 1, thecalculator is loaded and an instance of it is attached to theobject in line 2. Finally, in line 5 the calculator is used to compute the total energy of every structure in the set. By using theargument, we assign the nameto this property. This acts, one the one hand, as a label to later retrieve the property values and, on the other hand, for storage upon serialization in, a JSON file, with themethod of .For a surface system as the one we study here, a physically meaningful quantity is the adsorption energy, E_ads, defined as: E_ads(σ)=1/N[E(σ)-n_O1/2E_O_2-n_Pt(E_Pt,bulk-E_Cu,bulk)-E_Cu,surf.] Here, N is the number of Cu-Pt sites in the top-most layer of the supercell, n_O (n_Pt) the number of O (Pt) atoms, E_O_2 the energy of an O_2 molecule, E_Pt (Cu),bulk the energy per atom of fcc Pt (Cu), and E_Cu,surf. the total energy of the pristine Cu slab. Since structural relaxation can notably affect the relative stability of structures, it is important to build the CE model with energies E(σ) corresponding to optimized structures. The required steps are implemented in a custom python function,(see Sec. <ref>), which gets aobject as argument and returns E_ads. Finally, the methodis called as follows: [caption=Calculation of the absorption energy of every structure in aobject by means of the custom function ., label=lst:e_ads] sset.calculate_property(prop_name="e_ads", prop_func=ads_energy) In more detail,performs a structure optimization using the BFGS method. Three constraints are applied here: (i) The bottom-most Cu layer is fixed, (ii) the top-most Pt-Cu layer is allowed to relax in the (x,y) plane only, (iii) the O positions may relax in the perpendicular direction, z only. With the latter two constraints, we avoid reconstructions that may become significant at large Pt concentrations (a Pt atom moving out from the Pt-Cu layer, or triads of O atoms sitting at bridge positions). We do so here for the sake of simplicity, although the degree of complexity brought about by these reconstructions could still be accounted for withby adding the corresponding sites in the parent-lattice definition. 
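To make this step more concrete, a simplified version of such a property function is sketched below. It is only an illustration of the procedure described above and not the actual notebook implementation: the reference energies are placeholders (in practice they would be computed once with the same EMT calculator for the O_2 molecule, bulk fcc Pt and Cu, and the pristine slab), the function operates directly on an ASE Atoms object, and details such as the layer detection are deliberately crude.
[caption=Illustrative sketch of a custom adsorption-energy function with placeholder reference energies., label=lst:ads-sketch]
from ase.calculators.emt import EMT
from ase.constraints import FixAtoms, FixedLine, FixedPlane
from ase.optimize import BFGS

# Placeholder reference energies (eV) entering the adsorption-energy expression above.
E_O2, E_PT_BULK, E_CU_BULK, E_CU_SURF = -9.0, -5.0, -3.5, -120.0

def ads_energy(atoms_in):
    atoms = atoms_in.copy()          # ASE Atoms object representing the decorated slab
    atoms.calc = EMT()

    metal = [a.index for a in atoms if a.symbol in ("Cu", "Pt")]
    oxy = [a.index for a in atoms if a.symbol == "O"]
    z = atoms.positions[:, 2]
    z_top, z_bot = z[metal].max(), z[metal].min()

    # constraints (i)-(iii): fix the bottom layer, relax the top Cu-Pt layer in (x,y) only,
    # and relax the O atoms along z only
    constraints = [FixAtoms(indices=[i for i in metal if z[i] < z_bot + 0.5])]
    constraints += [FixedPlane(i, (0, 0, 1)) for i in metal if z[i] > z_top - 0.5]
    constraints += [FixedLine(i, (0, 0, 1)) for i in oxy]
    atoms.set_constraint(constraints)

    BFGS(atoms, logfile=None).run(fmax=0.02)   # structure optimization

    n_sites = sum(1 for i in metal if z[i] > z_top - 0.5)   # Cu-Pt sites in the top layer
    n_o = len(oxy)
    n_pt = sum(1 for a in atoms if a.symbol == "Pt")
    e_tot = atoms.get_potential_energy()
    return (e_tot - 0.5 * n_o * E_O2 - n_pt * (E_PT_BULK - E_CU_BULK) - E_CU_SURF) / n_sites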
After evaluating the energies, we can plot the result by using themethod from 's visualization package: [caption=Code for generating a plot of the ab initio adsorption energy as a function of the Pt concentration.,label=lst:visua1] from clusterx.visualization import plot_property_vs_concentration plot_property_vs_concentration(sset, site_type=2, property_name ="e_ads")Here, we instructto plot the adsorption energy calculated before (labeled ) versus the concentration of the type-2 sublattice (the Pt concentration in the Cu-Pt surface alloy). The result is shown in Fig. <ref>.In this case, we have exemplified the calculation of properties by either using ASE's calculators (Listing <ref>) or custom functions provided by the user (Listing <ref>). An approach consisting of the creation of folders containing input files for ab initio packages is also possible <cit.>. §.§ Clusters pool Having the training data constructed, we have to define which cluster functions will constitute our basis set. This is done with theclass of , as shown in the following listing: [caption=Generation of a pool of clusters.,label=lst:codepool] from clusterx.clusters.clusters_pool import ClustersPool cpool = ClustersPool(p_lat, npoints=[1,2,3,4], radii=[0,-1,-1,4.3], super_cell=scell) cpool.serialize(db_name="cpool.json") cpool.print_info()In line 1, we start by importing theclass of . Then, in lines 2-5, an instance ofis created and assigned to the variable . Clusters with 1 to 4 points are created, as indicated with the argumentin line 3. In line 4, the radius of every cluster, the maximum distance between any of its sites, is specified. This is obviously 0 for the 1-point cluster. For clusters with 2 and 3 points, a negative radius (-1) is assigned. A negative number is used to indicate that all unique clusters compatible with the periodic boundary conditions of the supercell , specified in line 5 by the argument , are generated. For clusters with 4 points, we indicate a small radius of 4.3 Å. This selection is motivated by the notion that clusters with many points should mainly capture short-range-order effects. No general rule exists, nevertheless, and one should try different parameters for other properties or systems. The execution of line 7 serializes theto a JSON file. The created pool of clusters can then be conveniently visualized, for instance through the graphical user interface of ASE. Also using thefunction ofis possible, as explained before for the visualization of the(see Listing <ref> and Fig. <ref>). In Fig. <ref>, all the generated 4-point clusters are shown. The execution of the last line generates the output shown in Fig. <ref>, listing index, number of points, radius, and multiplicity of every generated cluster.§.§ Building CE models The essential elements for the construction of a cluster expansion model are now available, namely, the training data contained in theobjectand theobject . With them, we can obtain the vector P and the input matrix X entering Eq. (<ref>), as demonstrated in Listing <ref>.Here, we start by creating aobject in line 2. With this object, which is assigned to the variable , we can calculate the cluster correlations of Eq. (<ref>) for any structure based on the parent latticeand for all clusters in . In this case, the calculator uses basis functions γ_M,α(σ) based on trigonometric functions, as indicated by the argument . The full matrix X and the vector Pare created in lines 5 and 8, respectively.We now build a CE model by using theclass of . 
This class acts as an interface to all the estimators of themachine-learning Python library <cit.> and to 's native estimators. In line 5 of Listing <ref>, a ridge-regression estimator fromis created. This estimator solves Eq. (<ref>) with an ℓ_2 regularization. The valueindicates a small regularization parameterλ=10^-8. Finally, a CE model is created and assigned to the variable(line 7), and the errors are reported (line 8).The output, presented in Fig. <ref>, shows that the root mean square error (), the mean absolute error (), and the maximum absolute error () of the fit are all zero. This is so because the number of clusters (122) is larger than the number of structures (50), thus, a perfect fit is possible. Conversely, the corresponding cross validation (CV) scores are different from zero. They express the ability of the built CE model to predict the properties of data not included in the fit <cit.>. The reason for a high CV score is either underfitting, or, what is the case now, overfitting. This can be avoided by searching on all possible subsets of clusters until finding the one for which the CV score is minimal. Such cluster-selection procedure, equivalent to regularizing with the ℓ_0-norm (see Sec. <ref>), is very demanding, since the number ofsubsets increases exponentially, making the problem numerically tractable only for small clusters pools. Nonetheless, very good approximations to the optimal solution exist. Both the ℓ_0 solution as well as approximations to it are available through theclass of . Although this class can be used independently, CE models can be easily built through a helper class called . It encapsulates both the cluster selection and the estimator construction. In this example, we use the class to find optimized models using two strategies. In the first one, the optimal solution is searched among sets of clusters of increasing radius and increasing number of points: For a given cluster to be present in a set, all other clusters with smaller radii and smaller numbers of points must be present as well. The other strategy makes use of the least absolute shrinkage operator (LASSO), which is a good approximation to the ℓ_0 solution under certain conditions (see Sec. <ref>). In the following listing, the code for an optimization with the first strategy is shown: [caption=Creation of a CE model with the helper class ., label=lst:modelbuilder] from clusterx.model import ModelBuilder mb = ModelBuilder( selector_type="subsets_cv", selector_opts="clusters_sets":"size", estimator_type="skl_Ridge", estimator_opts="alpha":1e-8, "fit_intercept":True)ce_model = mb.build(sset, cpool, "e_ads") ce_model.report_errors(sset)Here, theclass is imported (line 1) and instantiated in lines 2-6. Its initialization requires to specify (i) what strategy to employ to select the optimal model, and (ii) what estimator to use once the optimal set of clusters is determined. For (i), the argumentsandare set to and , respectively, indicating that the optimal solution will be searched among sets of clusters of increasing size, as explained above. For (ii), the values assigned to the argumentsandhave the same meaning as in Listing <ref>. The keywordis set to(line 6), which amounts to add the empty cluster in the expansion (see Sec. <ref>). Using this setup, the CE model is built in line 8, and the errors are reported in line 9. The output is shown in Fig. <ref>.As compared to the previous CE model (Fig. 
<ref>), here the fitting errors are not zero, however the generalization error (see column ) is smaller, indicating that this model yields better predictions on new configurations, configurations not contained in the training setused to build the model.Figure <ref> shows the predicted energies, both for the fit (black points) and CV (red points) as a function of Pt concentration. The figure is created as in Listing <ref>, by adding the argumentto the function call in line 2.It is interesting to consider in more detail the cluster optimization performed by theclass in Listing <ref>. Figure <ref> shows the RMSEs for both fit and CV for all sets of clusters considered, with the respective cardinality given in the abscissa. The optimal set, indicated with a red circle, contains 12 clusters. The LASSO selector can be used by replacing lines 3 and 4 in Listing <ref> with[caption=Creation of a CE model with LASSO using theclass., label=lst:lasso] selector_type="lasso_cv",selector_opts='sparsity_max': 1e-2,'sparsity_min': 1e-6,In this case, the size of the cluster sets is controlled by the sparsity parameter λ in the ℓ_1 penalization term ϕ=λ‖ J‖_1 (see Sec. <ref>): Larger values of λ yield sparser models with smaller numbers of clusters and, conversely, smaller values of λ yield larger cluster pools and eventually overfitted models. The interplay between sparsity and predictive power is shown in Fig. <ref>. As expected, the RMSE of the fit decreases monotonously for decreasing sparsity, the larger the cluster pool, the better the fit. Table <ref> shows the values of the RMSE errors for the three employed strategies for model construction.§ THERMODYNAMICS The stability of multi-component alloys at finite temperatures is determined by the free energy, F=U-TS, where U is the internal energy and S the entropy. S consists of at least three terms, namely, electronic, vibrational, and configurational contributions. The latter is usually approximated with the simple formula S=-k_B∑_i n_ix_ilog(x_i), with n_i and x_i being, respectively, the number of sites and fractional concentration of sublattice i, and k_B the Boltzmann constant. Although being practical, it neglects interactions between substituent species, which can be crucial in determining the properties of complex alloys <cit.>. In , instead, the configurational entropy, including the effects of interactions, is accurately accounted for in the calculation of thermodynamic properties. This is achieved by employing different sampling methods, including the Metropolis Monte Carlo sampling and the Wang-Landau sampling, together with a CE model for the internal energy of the alloy. In this section, the application of these methods is demonstrated on the Pt/Cu(111) surface alloy using the CE model of Sec. <ref>. As input, the sampling procedures require a CE model that predicts the energy of newly proposed structures during the sampling. The simulation cell and the thermodynamic ensemble must also be specified. In the canonical ensemble, for instance, it is necessary to provide the substituents' concentrations in the sublattices. A detailed documentation is provided in Ref. <cit.>. Listing <ref> shows the initialization of theobject in line 3, corresponding to the adsorption energy, E_ ads, for the binary surface alloy (Eq. (<ref>) with n_O=0). 
The simulation cell is created in line 7 by instantiating the SuperCell class, which takes as arguments the parent lattice and a transformation matrix defining a rectangular simulation cell with 16 substitutional surface sites. In the examples demonstrated below, we perform samplings in the canonical ensemble, with a fixed Pt concentration as specified in line 10. (Note that the dictionary key refers to the sub-lattice corresponding to the top-most atomic layer of the Cu(111) surface.) Having 4 Pt atoms in 16 sites leads to the stoichiometry of Cu_3Pt at the surface. With the CE model, the simulation cell, and the concentration already set up, we can start a canonical sampling. The Metropolis Monte-Carlo method is explained in Sec. <ref>, and the Wang-Landau method in Sec. <ref>. §.§ Metropolis Monte Carlo sampling A widely used method to study finite-temperature properties in alloys is the Metropolis Monte-Carlo method <cit.>. In this sampling method, trial moves, made by swapping two atoms at randomly chosen crystal sites, are accepted with the probability P(E_0 → E_1) = min[exp(-(E_1-E_0)/(k_B T)), 1], where E_0 is the energy of the initial structure, E_1 the energy after swapping the atoms, and exp(-E/(k_B T)) the Boltzmann probability distribution at temperature T. Listing <ref> demonstrates how to perform a Metropolis Monte-Carlo (MC) sampling with CELL and how to compute the specific heat at a given temperature after the sampling. First, the number of sampling steps is specified in line 2. A set of temperatures is given in line 5. Here, the units of the temperature and of k_B have to be consistent with the energy units from the CE model, which are eV per substitutional site in our case. Thus, it is convenient to use eV/K for k_B, and K for the temperature. The MC sampling requires a sampling object, which is created in line 15 by instantiating the corresponding class of CELL. For the initialization, we indicate the CE model, the simulation cell, the ensemble type, and the concentration (see Listing <ref>). Using this instance, an MC sampling is executed in line 19 by calling the corresponding method for each temperature in the temperature list. The number of sampling steps is passed as an argument. The product Π of the elements in the second argument enters the exponent of the acceptance probability as exp(-ΔÊ/Π), with ΔÊ being the energy change predicted by the CE model. Since our CE model predicts energies per site, and the Boltzmann factor must be computed with total-energy changes, we include the corresponding conversion factor in this list. After successful completion of the sampling, a trajectory object is returned. This object contains detailed information on the MC sampling trajectory, such as the energies of the visited structures and their configurations. Thus, any information of the MC trajectory can be read after the sampling procedure. For instance, by calling the corresponding method of this object in line 20 and passing the appropriate argument, we compute the specific heat C_p for each sampled temperature. The result, shown in the top panel of Fig. <ref> (dark red dots connected by dashed lines), exhibits a maximum at ∼ 450 K. We can repeat the MC simulation with a larger simulation cell by passing a correspondingly larger transformation matrix in the initialization of the SuperCell object (line 7 of Listing <ref>), which produces a rectangular simulation cell of 64 sites. The resulting specific heat is shown in the top panel of Fig. <ref> (light red dots connected by dashed lines).
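The acceptance rule above can also be illustrated independently of the package with a minimal canonical sampling of a toy model, in which a simple nearest-neighbour pair interaction on a one-dimensional binary lattice takes the place of the CE prediction; the interaction strength and all sampling parameters below are arbitrary choices for demonstration.
[caption=Illustrative sketch (independent of the package): canonical Metropolis sampling of a toy lattice model and specific heat from the energy fluctuations., label=lst:metropolis-sketch]
import numpy as np

rng = np.random.default_rng(1)
kB = 8.617e-5          # eV/K
J_NN = 0.05            # eV; toy nearest-neighbour interaction replacing the CE model

def energy(s):
    # periodic 1D chain: J_NN times the number of adjacent substituent pairs
    return J_NN * np.sum(s * np.roll(s, 1))

def metropolis_cp(T, n_sites=64, n_subst=16, nsteps=50000):
    s = np.zeros(n_sites, dtype=int)
    s[:n_subst] = 1
    rng.shuffle(s)                       # fixed concentration (canonical ensemble)
    e = energy(s)
    samples = []
    for step in range(nsteps):
        i, j = rng.integers(n_sites, size=2)
        if s[i] == s[j]:
            continue
        s[i], s[j] = s[j], s[i]          # trial move: swap two atoms
        e_new = energy(s)
        if e_new <= e or rng.random() < np.exp(-(e_new - e) / (kB * T)):
            e = e_new                    # accept with probability min[exp(-dE/kT), 1]
        else:
            s[i], s[j] = s[j], s[i]      # reject: undo the swap
        if step > nsteps // 2:
            samples.append(e)            # measure after equilibration
    samples = np.array(samples)
    return samples.var() / (kB * T**2)   # C_p = (<E^2> - <E>^2) / (kB T^2)

for T in (200, 400, 800, 1600):
    print(T, metropolis_cp(T))
In the actual simulations, the energy differences are of course provided by the CE model, and the trial moves respect the sublattice structure of the parent lattice.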
As compared to the previous result with a smaller simulation cell, the peak becomes higher and narrower, and shifts to a lower temperature of around 350 K. This behavior signals an order-disorder phase transition. Indeed, from the MC trajectories, we see that the dominant configuration at low temperatures is the ground-state structure depicted in the left panel of Fig. <ref>. It reveals a p(2x2) ordering of the Pt atoms, as expected from Ref. <cit.>. For comparison, on the right side of Fig. <ref>, a snapshot of the trajectory at 1500 K is depicted, which looks rather disordered. §.§ Wang-Landau samplingThe Wang-Landau (WL) method <cit.> aims at reducing the simulation effort by obtaining the configurational density of states, g(E) –a temperature-independent quantity– directly within the sampling procedure. Once g(E) is known, the thermodynamic properties can be easily obtained at any temperature. This represents an enormous advantage as compared to MC simulations, particularly for quantities that require thermodynamic integration for its evaluation, like the free energy <cit.>. In the WL approach, the latter is readily available, since the partition function Z is directly obtained from ∫ dE g(E)exp(-E/k_BT). The WL algorithm is described in detail in Refs. <cit.>, and applications of this method can be found in Refs. <cit.>. In the following, its workflow is briefly explained, and its implementation inis exemplified by the Pt/Cu(111) surface alloy. In the WL sampling, the energy space is sampled with the probability proportional to 1/g(E) which, for the exact g(E), should produce a flat histogram H(E) of the visited energies. At the start, however, g(E) is unknown. Thus, the energy space is discretized into energy bins E_i, where each bin is assigned a priori a uniform density of states g(E_i)=1. A newly proposed structure with energy E_new and density of states g(E_new) (equal to 1 at the start) is accepted with the probability P ( E_old→ E_new) = min[ g(E_old)/g(E_new) , 1 ] , where E_old and g(E_old) are the energy and density of states of the initial structure, respectively. If the trial structure is accepted (rejected), both g(E) and H(E) for the energy bin containing E_new (E_old), are updated according to the following rule: g(E)→ f g(E) and H(E)→ H(E)+1. With the multiplication factor f kept fixed, the sampling is continued until the histogram satisfies a predetermined flatness condition. Then, the modification factor f is reduced and the histogram restarted. The complete sampling procedure consists of a nested loop, with the inner loop generating the flat histogram and the outer loop reducing the modification factor. The accuracy of the final g(E) is determined by the final flatness condition and modification factor. The WL algorithm is implemented withinin theobject and can be used as shown in Listing <ref>. Aobject is initialized in line 2 with the CE model of E_ads, the simulation cell, and the given concentration, similar to the initialization of theobject in Listing <ref>. The WL sampling is then executed in line 4 by the method . Here,and define the energy window and bin width of the histogram, respectively. The modification factor f and the flatness condition are predefined by default values that can be changed by the user. The initial modification factor f=e, is reduced by f →√(f) in each iteration of the outer loop, until the final modification factor is f=exp(10^-3) (see argumentin line 4). 
The condition for determining the flatness of the histogram is min[ H(E) ] > cH(E), where the overline indicates the mean value, and c is a real number. Starting with c=0.5 in the first iterations, it is then increased once every few iterations of the outer loop, until reaching c=0.9. More details and ways to control the default behavior can be found in Ref. <cit.>.The algorithm returns aobject, assigned to the variablein line 4 of Listing <ref>. Using this object, several thermodynamic quantities can be computed directly for an arbitrary number of temperatures. For instance, line 8 tells to compute the specific heat for the temperatures given in line 7. The result is shown in the top panel of Fig. <ref> (dark red solid line), together with that of the larger simulation cell (light red solid line) in comparison with the MC simulations. Since the evaluation of C_p from g(E) is computationally less demanding than the Metropolis Monte-Carlo method presented in the previous section, we can determine the transition temperature more accurately. In the lower panels of Fig. <ref>, the internal energy, U, the free energy, F, and the entropy, S, are presented. The entropy clearly increases with increasing temperature, indicative of an order-disorder phase transition. Note that the results presented here, can only show trends, but are not fully quantitative, since the CE model of the absorption energies is trained with model energies (see Sec. <ref>.)§ APPLICATIONS In this section, we demonstrate the application ofto the binary alloy Si-Ge and to the clathrate Ba_8Al_xSi_46-x as an example of a complex intermetallic alloy. §.§ Si-Ge alloyThe binary alloy Si_xGe_1-x exhibits a miscibility gap, without forming ordered structures <cit.>, it has a strong tendency to separate into almost pure Si and Ge phases. This tendency decreases with increasing temperature until a critical temperature is reached beyond which the fully disordered phase is stable at any Si concentration. The theoretical description of this demixing transition has typically been done using Metropolis Monte Carlo simulations in the grand canonical ensemble <cit.>. Here, we approach it using the Wang-Landau method in the canonical ensemble. This approach is interesting because it allows access to the phase-separation region, which is inaccessible in the grand canonical ensemble. We also describe the bowing of the lattice constant at different concentrations and temperatures. These results are compared with available experimental data.Using a 16-atom supercell with lattice vectors (a,a,0), (a,0,a), and (0,a,a), a being the lattice constant of the parent diamond crystal, we generate a set of 43 structures with random configurations containing n_Ge=0,1,2,...,16 Ge substituents. The lattice constants are optimized, and the atomic positions relaxed until the forces are smaller than 5× 10^-3eV/Å. The ab initio energies are calculated by density-functional theory (DFT) <cit.> with the all-electron, full-potential electronic-structure code FHI-aims <cit.>. Exchange and correlation effects are treated within the generalized gradient approximation, employing PBEsol <cit.>. The basis set is determined by "tight" settings. A 10×10×10 k-point grid is used for integrations in the supercell Brillouin zone (BZ). The data generated in this way serve as the training set for CE models of the energy of mixing per atom, E_mix, and of the lattice parameter, a_0. 
The former is defined by E_mix(x)=E(x)-[(1-x)E_Si+x E_Ge] with E(x) being the total energy per atom of the compound with Ge concentration x, and E_Si and E_Ge the energies of the pristine Si and Ge solids, respectively. The CE models are built using a pool of clusters as shown in the right panel of Fig. <ref>, containing all clusters up to three points contained in the supercell: Besides the empty and a single one-point cluster, there are four 2-point clusters and six 3-point clusters. The cluster selection is performed by a combinatorial search on all possible subsets of clusters, with the condition that the first three (the empty, the one-point, and the first-neighbor 2-point cluster) are included. The learning curves for the two CE models are shown in the left panels of Fig. <ref>. Here, the model which minimizes the RMSE-CV, marked by the red diamond, is selected. Every dot (plus sign) indicates the RMSE-fit (RMSE-CV) of a single trial model. The optimal model for E_mix contains 8 clusters (blue background in the right panel), while for a_0 it contains 11 clusters. The solid orange line joins the models with optimal RMSE-CV as a function of the number of clusters, and the green solid line indicates the corresponding RMSE-fit for these models. While the latter decreases monotonously, the former may have a minimum as in the case of a for 11 clusters, signaling overfitting for models with more clusters. The ECIs of the final CE model for E_mix are displayed in the middle panel of Fig. <ref>. The cluster radii are given in units of the nearest-neighbor distance R_nn, the shortest distance between two Si atoms in the parent lattice. Most ECIs are negative, in accordance with the well known tendency of SiGe to phase-separate at low temperatures.The accuracy of these models becomes more clear in Fig. <ref> where excellent agreement between the DFT data (red circles) and the CE predictions (black dots) is observed. With these models, property predictions are made for all possible derivative structures of up to 16 atoms. The generation of the derivative structures is done withusing the algorithm of Ref. <cit.>. The resulting predictions are shown with blue dots. The fact that for all structures E_mix≥0 means that at zero temperature, the system would favor a demixed state at all concentrations, without forming ordered structures. A negative bowing of the lattice constant is observed in the lower panel of Fig. <ref>, which indicates the departure from Vegard's law, which reads Δ a(x) = a(x)-[(1-x)a_Si+xa_Ge], with a(x) being the lattice constant of a structure with Ge concentration x and a_Si and a_Ge those of the pristine solids. For the perfectly random alloy, the cluster correlations can be computed analytically with , so that the CE model can also be employed to compute the lattice parameter in this limiting case. This is shown by the solid red line, which yields Δ a values very close to the random structures used for training. The qualitative behavior is also similar to the experimental data, shown in the figure by red dots with error bars. Still, the experimental values are visibly smaller in magnitude than the prediction for the random alloy. This difference could in part be due to temperature-dependent configurational effects, as will be discussed below. Using these models, we study the temperature dependence of the demixing transition at 50% Ge concentration for a cubic SiGe supercell containing 2744 atoms. 
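Before turning to the results, the Wang-Landau procedure of Sec. <ref> can be illustrated on a drastically simplified model. The sketch below is independent of the package: it reuses the toy one-dimensional lattice model of the Metropolis sketch above in place of the CE model, but follows the same update rules (acceptance with min[g(E_old)/g(E_new), 1], histogram flatness check, and f → √(f) refinement) and finally obtains the specific heat from the resulting g(E); all numerical parameters are arbitrary demonstration values.
[caption=Illustrative sketch (independent of the package): Wang-Landau estimate of g(E) for a toy lattice model and canonical averages derived from it., label=lst:wl-sketch]
import numpy as np

rng = np.random.default_rng(2)
kB = 8.617e-5     # eV/K
J_NN = 0.05       # eV; toy nearest-neighbour interaction replacing the CE model

def n_pairs(s):
    # number of adjacent substituent pairs on the periodic chain; E = n_pairs * J_NN
    return int(np.sum(s * np.roll(s, 1)))

def wang_landau(n_sites=16, n_subst=8, ln_f_final=1e-3, flatness=0.5):
    s = np.zeros(n_sites, dtype=int)
    s[:n_subst] = 1
    rng.shuffle(s)
    n_bins = n_subst                   # discrete energies E_k = k*J_NN, k = 0 ... n_subst-1
    log_g = np.zeros(n_bins)
    hist = np.zeros(n_bins)
    k = n_pairs(s)
    ln_f, step = 1.0, 0                # initial modification factor f = e
    while ln_f > ln_f_final:
        i, j = rng.integers(n_sites, size=2)
        if s[i] != s[j]:
            s[i], s[j] = s[j], s[i]    # trial swap (canonical ensemble)
            k_new = n_pairs(s)
            if log_g[k] >= log_g[k_new] or rng.random() < np.exp(log_g[k] - log_g[k_new]):
                k = k_new              # accept with probability min[g_old/g_new, 1]
            else:
                s[i], s[j] = s[j], s[i]
        log_g[k] += ln_f
        hist[k] += 1
        step += 1
        if step % 10000 == 0 and hist.min() > flatness * hist.mean():
            hist[:] = 0
            ln_f /= 2.0                # flat histogram reached: f -> sqrt(f)
    return log_g

log_g = wang_landau()
E = J_NN * np.arange(len(log_g))
for T in (200.0, 400.0, 800.0):
    w = np.exp(log_g - log_g.max() - E / (kB * T))   # proportional to g(E) exp(-E/kT)
    Z = w.sum()
    e1, e2 = (E * w).sum() / Z, (E**2 * w).sum() / Z
    print(T, (e2 - e1**2) / (kB * T**2))             # specific heat from g(E)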
Figure <ref> shows the results of a Wang-Landau sampling performed in parallel using 32 CPUs. Each process performs sampling in different overlapping energy ranges, as indicated by different colors in the first three panels of the figure. The first panel shows the converged WL histograms per process converged up to a flatness condition of c=0.995. Each histogram contains 22 bins with a bin width of 51 meV. In the last WL iteration, about 2·10^8 structures per energy range are visited. The normalized histograms, indicated by black dots, merge into a global flat histogram for the entire energy range considered. The latter is determined at the start of the simulation by considering the predicted energies of a fully demixed and a fully disordered structure, giving the minimum and maximum energies in the energy range, respectively. The logarithm of the configurational density of states, log(g), is evaluated for all processes and converged up to a final modification factor f satisfying ln(f)=2^-21. Upon reaching convergence, each process i yields a configurational density of states C_i g(E), with an arbitrary normalization C_i. This means that the log(g) of contiguous processes i and i+1 are shifted by an additive constant log(C_i/C_i+1). Therefore, to extract g(E) with a common normalization factor, one has to shift the parts of log(g) by appropriate additive constants. As described in Ref. <cit.>, these are determined by finding the energies at which the microcanonical temperature T_mc=(k_Bd ln(g)/dE)^-1 from adjacent energy ranges overlap best. The third panel of Fig. <ref> shows T_mc for each process. Here, the red arrow in the inset illustrates the point of best overlap of T_mc's for two contiguous ranges. The resulting log(g(E)) is shown by the black line in the second panel. It is normalized such that thelowest-energy configuration has a degeneracy of one. It is to be noted though, that the results presented below do not depend on the chosen normalization.From log(g), one can easily evaluate the canonical probability distribution P(E,T) at any temperature. This is useful for computing thermodynamic averages of different quantities, as will be shown below. In the fourth panel of Fig. <ref>, log(P)-log(P_max) is shown for various temperatures. As expected, the energy at which P is maximum increases monotonically with temperature. At large temperatures, the maxima approach the limit of the fully random structure. This is why the maxima for T=300K and T=350K, which are above the demixing transition temperature, are close to each other. Since, importantly, log(P) always shows a single maximum, in agreement with Ref. <cit.>, the transition does not seem to be of first order, at least for the supercell sizes considered here. The demixing transition is more closely analyzed in Fig. <ref>. Here, the specific heat at constant pressure, C_p(T)=(⟨ E^2 ⟩-⟨ E ⟩^2)/k_BT^2, with ⟨ E^n ⟩=∫ dEE^n P(E,T), is computed using P(E,T) from Fig. <ref>. The calculation is performed for three supercell sizes, containing 1000, 1728, and 2744 atoms, respectively. The specific heat peaks at a temperature that increases for increasing system size. For the largest system, the maximum is at about T=196K. At T∼135K, C_p displays a shoulder and at T>196K it decays monotonously to zero. It is interesting to explore what kind of structures are present at different temperatures, which are indicated by thin vertical dashed lines. 
These are obtained by taking a sample from a microcanonical sampling at a narrow energy range centered at the corresponding maxima of P(E,T) (see bottom panel of Fig. <ref>) and are shown in the right panel of Fig. <ref>. At about 135K, the system is essentially separated into a pure Si and a pure Ge phase, with very little dissolution of one species into the bulk of the other. At 170K and 196K –the maximum of C_p– this dissolution increases, indicating the proximity of the mixed state. Just above the maximum, at T=230K and in the region of small C_p around T=300K, the structures appear fully mixed and increasingly disordered. These observations suggest a transition temperature of about T_c∼200K. This value lies in between various values from previous simulations in the literature, such as 360K in Ref. <cit.>, 170K in Refs. <cit.>, 320K in Ref. <cit.>, 247K in Ref. <cit.>, or 325K in Ref. <cit.>. For each temperature shown in the upper left panel of Fig. <ref>, Δ a(T) is estimated in a single-shot procedure. It consists of taking a single sample from a microcanonical sampling at an energy that maximizes P(E,T) (as shown in the right panel of the figure) and then using the CE model for a to predict Δ a.Δ a decreases monotonously with temperature as shown in the lower panel of the figure (solid red line). At around the lowest computed temperature (T=135K),it approaches zero, which is the value predicted for the perfectly phase-separated SiGe alloy. Above T_c, at T=300K and 350K, Δ a is very close to the value predicted for the perfect disordered solid, as expected. At T=196K, our result is close to the experimental value reported in Ref. <cit.>. These findings imply a negative expansion coefficient in the transition region, purely due to configurational effects. This region is typically inaccessible to experiments, which usually deal with the homogeneous disordered phase above the transition temperature. In this case, the configuration being in the disordered state, does not change with temperature and due to anharmonicities of the crystal, the measured expansion coefficient is positive and estimated to be about 5.4 × 10^-6 K^-1 at a concentration of 51.3 % Ge <cit.>. Note that our calculations neglect anharmonic effects. These would presumably reduce the magnitude of the predicted Δ a for increasing temperatures. §.§ Si-based clathrate compoundsWe now demonstrate the application ofto the study of a complex alloy, the intermetallic clathrate Ba_8Al_xSi_46-x. Intermetallic clathrates belong to the class of inclusion compounds in which the host lattice forms cages that can enclose guest species. Their low thermal conductivity together with their highly tunable electronic properties make them ideal candidates for thermoelectric applications <cit.>. The unit cell of Ba_8Al_xSi_46-x is shown in Fig. <ref>. It contains 54 atoms, 46 of which are tetrahedrally bonded and form the host lattice of Si and Al atoms (Wyckoff sites 24k, 16i, and 6c). The 8 Ba atoms occupy the cages in Wyckoff sites 2a and 6d. The material's properties, such as electronic structure and energy of formation, depend strongly on both the Al concentration x and the crystal sites that they occupy, the configuration <cit.>. Therefore, it is of paramount importance to find the configuration of the ground-state structures (GSS), in order to understand the material's physical properties. This is, however, a formidable task. 
Due to the enormous number of possible configurations in which the Al atoms can be arranged in the unit cell (∼10^11 for x=16), approaches relying on a full enumeration of structures are infeasible. Therefore, in Ref. <cit.>, some of us have devised a special iterative approach to find the GSS and build an accurate CE model. The workflow, implemented in CELL, is depicted in Fig. <ref>. It is particularly useful for alloys with complex parent lattices, where a full structural enumeration is out of reach. First, the ab initio energies of an initial set of random structures (left white box) are calculated (left yellow box), and an initial CE model is built with this data set (left orange box). Next, a configurational sampling is performed (upper gray box); the predicted structures with the lowest energies that are non-degenerate to those already present in the data set (labeled "LND structures" in the white box on the right) are added to the data set; and an improved CE model is determined (right orange box). The performance of the CE model is then evaluated through cross validation. If the CV score is larger than the desired target precision, the CE model may be further improved by performing a new Metropolis sampling, as shown by the arrows. This is repeated until the CV score is smaller than the desired tolerance, and property predictions can be made. Figure <ref> shows the realization of this iterative cluster expansion for the clathrate alloy. The convergence of the CE model and the determination of the GSS are achieved in only 4 iterations. Here, the target accuracy of the CE model is 1meV/atom or less, since this allows one to correctly identify the GSS <cit.>. In the first iteration (top left), a set of 11 random structures with Al content in the range x=6-16 is created, and the energy of mixing of each structure is computed ab initio with the PBEsol functional, using a full-potential all-electron DFT package implementing the linearized augmented planewave + local-orbital method <cit.>. The energy of mixing per atom is defined by E_mix(σ) = E(σ)-[Ê_0(1-c)+Ê_46c]. Here, E(σ) is the total energy per atom of the relaxed structure with atomic configuration σ, Ê_0 (Ê_46) is the predicted energy per atom of the hypothetical structure Ba_8Si_46 (Ba_8Al_46), and c=x/46. Computational details for the calculation of E(σ) are given in Ref. <cit.>. The ab initio computed energies are indicated by black circles in the figure. Following the workflow, an initial CE is constructed with these data. The predictions made with the corresponding CE model are represented by the black dots. Their RMSE-CV is 4.9 meV/atom. With this initial CE, a Metropolis sampling is performed at 1000K, with 5×10^5 steps per composition (gray dots), and the visited structures of lowest energy are identified (red dots). This step corresponds to the box "LND structures" of the workflow in Fig. <ref>. For these structures, ab initio calculations are carried out (red circles). For iteration 1, the disagreement between the predictions for the LNDs and their ab initio values for x<12 is large. In iteration 2 (upper right panel), the newly computed data from the previous iteration are added to the training set, which now consists of 22 data points shown as black circles. A new CE model is fitted to these data. Its RMSE-CV of 4.4meV/atom is slightly smaller than that of the first iteration but still well above the desired accuracy of 1meV/atom.
Thus, according to the workflow, a new Metropolis sampling is performed (gray dots), LND structures are identified (red dots), and ab initio calculations are performed for them (red circles). There is still a significant disagreement between predicted and ab initio energies for the new LND structures, which increases with increasing x. In iteration 3 (lower left panel) again the data from the previous iteration are added, so that the training set (black circles) now consists of 33 structures. A new CE model trained with these data yields an RMSE-CV of 0.9 meV/atom, which is slightly below the desired accuracy threshold. Nonetheless, a new sampling is performed. The previously found GSS for x<16 are confirmed, and three new LNDs are added for x>12. For x=16, a new ground-state structure is found. These four data points (red circles) are added to the training set in the fourth and final iteration (lower right panel); the resulting model has an RMSE-CV of 0.8 meV/atom. A new sampling confirms the previously found GSS, except for two new ones found for x=8 and x=14. The latter are quasi-degenerate to the previously found ones. The 3 new data points from iteration 4 are added to the training set and a final CE model is fitted.A closer look at the model performance for the different iterations is provided in Fig. <ref>. Focusing first on the fitting errors (left panel), the RMSE is above the accuracy threshold in the first two iterations and then decreases to a value of around 0.6meV/atom, below the desired accuracy. The median of the absolute errors follows a similar trend, but is considerably smaller in magnitude than the RMSE. The reason is that the latter is more sensitive to outliers. In the final model, a single outlier with an absolute error larger than 1meV/atom remains. For the generalization errors (right panel), a similar trend is observed. For the first two iterations, the RMSE-CV lies well above the accuracy threshold, but it stabilizes at a value of around 0.8meV/atom for the remaining models. There is good agreement between the RMSE-CV and the RMSE-Test, indicating that the estimates of the generalization error given by the RMSE-CV are good. The RMSE-Test is obtained from the red dots and red circles in each iteration shown in Fig. <ref>. Thus, it represents the error on totally unseen data, not even used for CV. The found CE model is able to make accurate predictions not only for GSS, but also for higher-energy structures. This allows a finite-temperature analysis using the thermodynamics modules of . Such a study was performed in Ref.<cit.>, where for the charge-balanced composition x=16, a temperature-driven semiconductor-to-metal transition was found. The latter is accompanied by a partial order-disorder structural transition. § CONCLUSIONSWe have described in detail the Python packagefor cluster expansion and for statistical thermodynamics.provides a modular framework that allows for customized CE model building and integration into workflows. This paves the way to address a wide range of problems, as illustrated by various applications. The first one, has demonstrated the ability ofto create the CE model of a complex surface system, O-Pt/Cu(111). This system is characterized by the interplay of two binary sublattices, one describing Pt/Cu surface alloying and the other the oxygen surface adsorption. Such systems are of great interest, for instance, for applications in catalysis. 
The characterization of the structure at finite temperature revealed a temperature-driven order-disorder transition. The ordered low-temperature phase found in this way is in agreement with experimental findings. The second example showed the application of CELL to the binary semiconducting alloy Si-Ge. Using a combinatorial approach to model selection, accurate CE models for the mixing energy and lattice parameters were generated. The prediction of the energy of mixing for all derivative structures of up to 16 atoms confirmed the tendency of Si-Ge to separate into pure Si and Ge phases. Consideration of the fully random alloy together with the CE model for the lattice constant revealed a negative bowing of the lattice parameter, in agreement with experiments. We have characterized the demixing transition of Si-Ge in terms of the configurational density of states, the microcanonical temperature, the canonical probability distribution, the specific heat, and the thermal expansion. Our analysis, performed in the canonical ensemble, both contrasts with and nicely complements previous studies in the literature using the grand canonical ensemble. The latter yields homogeneous phases but prevents access to the phase-separated state, which is accessible in our study. In this example, we have showcased the parallel execution of CELL's Wang-Landau sampling with supercells containing thousands of atoms. Inspection of structures from microcanonical sampling yielded a demixing transition temperature of around 200K, which is in the range of values reported in the literature. The third and last example concerned the construction of a CE model for the energy of the complex clathrate alloy Ba_8Al_xSi_46-x. This material, with its 54 atoms in the unit cell, posed a formidable challenge to the tasks of building accurate CE models and finding the lowest-energy structures. We have addressed this problem by creating an iterative workflow that efficiently solves both problems simultaneously. The workflow uses continuously improved CE models to perform configurational sampling. These are used to identify low-energy structures that are iteratively added to the training set. With only four iterations, requiring the ab initio calculation of just 40 structures, all ground states in the range x=6-16 are found. Analysis of the model's performance in terms of fitting and generalization errors revealed that convergence is already achieved by the third iteration. In summary, CELL provides a comprehensive approach to cluster expansion, covering all aspects of model construction and thermodynamic analysis. It offers its users an efficient way to build CE models using machine-learning techniques and leveraging parallelization, seamlessly integrating with Python code and facilitating the interaction with ab initio packages. § DATA AND SOFTWARE AVAILABILITY CELL is available at the Python Package Index (PyPI) repository <https://pypi.org/project/clusterX/>. Documentation can be found at <https://sol.physik.hu-berlin.de/cell>. It includes installation instructions, the API, and tutorials. Jupyter notebooks and Python scripts for reproducing the CE of O-Pt/Cu(111) of Sec. <ref> are available on Github (<[email protected]:srigamonti/optcu.git>). The ab initio data for Si-Ge (Sec. <ref>) and Ba_8Al_xSi_46-x (Sec.
<ref>) are available in NOMAD <cit.>, DOI <https://dx.doi.org/10.17172/NOMAD/2023.10.24-3> and DOI <https://dx.doi.org/10.17172/NOMAD/2023.10.24-1>, respectively. § ACKNOWLEDGEMENTS We thank Luca Ghiringhelli for fruitful discussions on model selection and sampling methods and Matthias Scheffler for drawing our attention to the Wang-Landau method. This work received partial funding from the German Research Foundation (DFG) through the CRC 1404 (FONDA), project 414984028, and the NFDI consortium FAIRmat, project 460197019; the Max Planck Research Network BiGmax; and the European Union's Horizon 2020 research and innovation program under the grant agreement N^∘ 951786 (NOMAD CoE). | http://arxiv.org/abs/2310.18223v1 | {
"authors": [
"Santiago Rigamonti",
"Maria Troppenz",
"Martin Kuban",
"Axel Hübner",
"Claudia Draxl"
],
"categories": [
"cond-mat.mtrl-sci"
],
"primary_category": "cond-mat.mtrl-sci",
"published": "20231027155134",
"title": "CELL: a Python package for cluster expansion with a focus on complex alloys"
} |
Casimir effect in axion electrodynamics with lattice regularizations Kei Suzuki January 14, 2024 ==================================================================== Misinformation and disinformation are growing threats in the digital age, spreading rapidly across languages and borders. This paper investigates the prevalence and dynamics of multilingual misinformation through an analysis of over 250,000 unique fact-checks spanning 95 languages. First, we find that while the majority of misinformation claims are only fact-checked once, 11.7%, corresponding to more than 21,000 claims, are checked multiple times. Using fact-checks as a proxy for the spread of misinformation, we find 33% of repeated claims cross linguistic boundaries, suggesting that some misinformation permeates language barriers. However, spreading patterns exhibit strong homophily, with misinformation more likely to spread within the same language.To study the evolution of claims over time and mutations across languages, we represent fact-checks with multilingual sentence embeddings and cluster semantically similar claims. We analyze the connected components and shortest paths connecting different versions of a claim finding that claims gradually drift over time and undergo greater alteration when traversing languages. Overall, this novel investigation of multilingual misinformation provides key insights. It quantifies redundant fact-checking efforts, establishes that some claims diffuse across languages, measures linguistic homophily, and models the temporal and cross-lingual evolution of claims. The findings advocate for expanded information sharing between fact-checkers globally while underscoring the importance of localized verification.KEYWORDS misinformation, fact-checking, multilingual NLP, information diffusion, social media§ INTRODUCTIONMisinformation is a global challenge responded to in myriad local ways. The International Fact-Checking Network (IFCN) currently has 112 verified active member organizations across 75 countries[<https://ifcncodeofprinciples.poynter.org/signatories>]. There have been experiments with collaborative fact-checking across countries such as #CoronavirusFacts led by the IFCN and #UkraineFacts led by Spanish fact-checker Malditas as well as collaborations within countries (e.g., #FactsFirstPH,[<https://factsfirst.ph/>], EKTA[<https://ekta-facts.com/>], and Confirma 2022 in Brazil[<https://www.poynter.org/fact-checking/2023/dubawa-cek-fakta-brazil-globalfact-10-awards/>]). To date, however, there is no centralized repository of global fact-checks as there is for child abuse imagery (the Internet Watch Foundation) or extremist content (the Global Internet Forum to Counter Terrorism).On the one hand, greater collaboration between fact-checking organizations could help meet the increasing demand for fact-checks. Increasing use of social media—and soon generative AI—has resulted in the volume of misinformation far exceeding the capacity of human fact-checkers <cit.>. Furthermore, fact-checking capacity is unequally distributed with more fact-checkers working in English than in other languages. If a large proportion of misinformation is shared across languages, centralization of fact-checks and cross-language claim matching <cit.> could help identify misinformation even in less-resourced languages. On the other hand, it's not clear how often the same misinformation is spread across languages or countries. 
For example, there are large differences in general content across language editions of Wikipedia <cit.> and only a small percentage of users author content in multiple languages online <cit.>.In this article, we investigate the extent to which misinformation claims are fact-checked by multiple fact-checking organizations (RQ1) as well as how often similar misinformation is fact-checked across different languages (RQ2). While answering these questions, we also examine the differences between content fact-checked by one vs. multiple organizations and in one vs. multiple languages. Finally, we analyze how much misinformation claims change over time and explore what is most likely to be fact-checked more than once or across languages (RQ3).This paper presents an investigation into the prevalence and dynamics of multilingual misinformation through analysis of over 250,000 fact-checks in 95 languages. We find that 11.7% of claims, corresponding to more than 21,000 claims in our dataset, are checked multiple times, highlighting opportunities for greater collaboration between fact-checkers. A third of repeatedly checked claims are found in multiple languages, establishing some diffusion across languages, but there is strong language homophily. Our analysis reveals a gradual drift in claims over time and greater changes between languages. § RELATED WORK§.§ Fact-checking effortsMisinformation predates the World Wide Web <cit.>, and probably existed throughout human history <cit.>. Nonetheless, misinformation and disinformation appear in scholarship about the World Wide Web in 1995, only two years after the release of the first graphical web browser: <cit.> defines disinformation as a `deliberate attempt to deceive or mislead' and misinformation as `an honest mistake' (p. 134). As intention is difficult to reliably infer, we follow recent scholars in using misinformation as an umbrella term for any false or misleading content regardless of the user's intention <cit.>With the spread of user-generated content and social media platforms, there has been a marked increase in scholarship about misinformation online <cit.>. Over the same period there has been an increase in fact-checking organizations: teams of journalists aiming to fact-check or debunk misinformation <cit.>. Meta's third-party fact-checking program pays organizations to write fact-checks about content on its Facebook and Instagram platforms <cit.> or to host `tiplines' on WhatsApp where users can search fact-checks <cit.>. Google also pays fact-checkers to provide it copies of fact-checks in ClaimReview markup for use in its news and search tools <cit.>. Much of the data Google collects is freely available and is one of our data sources for this study.Many, if not most, fact-checking organizations are signatories to the IFCN Code of Principles [<https://ifcncodeofprinciples.poynter.org/know-more/the-commitments-of-the-code-of-principles>]. Indeed, being a signatory to the IFCN code of principles is required to participate in Meta's third-party fact-checking program.Fact-checking resources are unevenly distributed across the globe. North America, Europe, and Australia have more fact-checking organizations than other regions of the world, although the difference is decreasing <cit.>. English remains the most-resourced language: 47.51% of fact-checking organizations in the IFCN use English.[We counted all 181 verified fact-checking organizations in IFCN, and 86 of them are using English. 
The second and third most-used languages are Spanish and French with only 11 and 10 organizations respectively.] This stands in stark contrast to the global distribution of people and Internet users in the world <cit.>. §.§ (Mis)information across languages Most research on misinformation diffusion has not explicitly considered cross-language spread, but more general research has found geography, language, and culture to be general impediments to the spread of information <cit.>. According to the cultural proximity theory, people tend to prefer content that is most proximate to their location, language and culture <cit.>. Language stands as a salient explanation for the cultural proximity in information consumption, and scholars have identified several major online content consumption clusters <cit.>. Rather than people consuming content from the entirety of the World Wide Web, audiences tend to consume information in their preferred languages <cit.>. However, the development of social media technology and the convergence of global events (e.g., the pandemic and climate change) may weaken language barriers, facilitating a higher percentage of cross-language diffusion than before <cit.>. Nonetheless, there may now be a higher percentage of common misinformation across languages as shared social media platforms, translation technology, and bilingual users make it possible to consume more diverse content <cit.>. Global events (e.g., COVID-19, climate change, and Russia's war in Ukraine) also attract attention across languages. As noted, COVID-19 and the Russian invasion of Ukraine have formed the basis of new fact-checker collaborations (#CoronaVirusFacts and #UkraineFacts). We're unaware of any study examining the spread of misinformation across languages globally, but <cit.> found misinformation in their study of Chinese and English messages on social media and <cit.> found COVID-19 misinformation in multiple languages. § DATA The dataset used in this study is a combination of data from the Google Fact-Check Explorer <cit.> and data directly crawled from the websites of verified signatories of the International Fact-Checking Network (IFCN) code of principles. We structure crawled pages as ClaimReview, a markup that fact-checkers use to standardize their work and the format in which the Google data is provided.[<https://developers.google.com/search/docs/appearance/structured-data/factcheck>] The markup consists of multiple data fields. First, each fact-check has an associated author and date. Additionally, it contains the Claim—that is, the statement of misinformation being fact-checked—a Review, and a Rating. The Google Fact-Check Explorer data contains 54,150 unique fact-checks. We noticed, however, that many IFCN-certified fact-checking organizations were not included; so, we directly crawled the websites of IFCN-certified fact-checking organizations, creating a dataset of 262,439 unique observations. Deduplication and data cleaning steps are outlined in Section <ref>. We combined these two datasets to generate the final dataset. We limit our analysis to the period from March 2020 to March 2022 as there is minimal data before this period. §.§ Data Preparation Not all fact-checkers adhere completely to the ClaimReview format. Fact-checks on websites are particularly problematic as they might not include ClaimReview or leave some fields empty. When thefield is missing or empty, we consider theandfields.
We employ a heuristic to determine which part of the fact-check contains the claim: if the fact-checking entry contains any string in thefield, we return this unaltered. Otherwise, we check whether either the length of theor thefields is within two standard deviations of the average length of aentry. If this is the case, we return either one (with preference for the former). If both are longer, we remove the fact-check. Similarly, not all fact-checks contain the name of the organization posting the claim. To remedy this, we extract the domain name of the final redirect of each URL associated with a fact-check and use this as our primary entity identifier. As another pre-processing step to ensure that only the claim is contained in the final dataset, we manually inspect tri- to six-grams within each fact-checking domain that appear suspiciously often. We used the LaBSE tokenizer to split each fact-check and aggregated token counts by the domain of the fact-checker. Examples of tokens unrelated to the actual claim are "WHATSAPP - CHECK," "Verificamos," and "Fact-Check:". After manually reviewing the most repeated substrings, we found 84 that are not part of the claims being fact-checked and removed these. In addition, we remove exact duplicates of fact-checks (retaining the earlier one) and fact-checks that, after pre-processing, did not contain a valid claim according to the aforementioned heuristics. Lastly, the initial data contained a significant number of fact-checks that appear to be only editorial mistakes. These fact-checks differ only in punctuation or slight editorial fixes and were posted extremely close in time. To remove them, we check for duplicates after removing any punctuation and non-alphanumeric characters and ignoring case. Additionally, we remove any fact-checks that have a cosine similarity exceeding 0.95 measured with LaBSE and are posted by the same domain or author. §.§ Final Dataset Our final dataset consists of 251,590 unique fact-checks. Each fact-check has an associated claim, verdict (also known as a rating), date, and author. The language of each claim was determined using the Google Translate API: we found a total of 95 unique languages, showcasing the diversity of fact-checking organizations dedicated to improving the information environment worldwide. Figure <ref> shows the IFCN signatories contained in our dataset and their respective countries of origin. The top pane highlights all countries based on the number of fact-checking organizations, while the bottom pane shows the number of fact-checks by IFCN signatories contained in our dataset. The map demonstrates that fact-checking is a global activity, with numerous organizations active in countries that primarily speak low-resource languages. The breadth of the linguistic and geographic diversity of fact-checking organizations highlights the need for research into multilingual misinformation, extending beyond a focus on European languages and Western cultural and political contexts. The dataset spans from 2018 to 2023 and covers a broad array of topics. Figure <ref> displays the total number of fact-checks per month in the dataset on the left y-axis in red. We observe that the number of fact-checks per month is relatively consistent between March 2020 and March 2022, at around seven thousand fact-checks per month. Before this period, we see around four thousand fact-checks per month and after it around one thousand.
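A minimal sketch of the duplicate-removal step described above is given below. It assumes fact-check records sorted by date with "claim" and "domain" fields; the field names and helper functions are illustrative assumptions, not the authors' actual pipeline.

```python
# Sketch of the deduplication step, assuming records sorted by date.
import re
import numpy as np
from sentence_transformers import SentenceTransformer

labse = SentenceTransformer("sentence-transformers/LaBSE")

def normalize(text):
    # Drop punctuation and non-alphanumeric characters, ignore case.
    return re.sub(r"[^0-9a-z\s]", "", text.lower()).strip()

def deduplicate(fact_checks):
    seen, kept = set(), []
    for fc in fact_checks:                      # earliest fact-checks first
        key = normalize(fc["claim"])
        if key and key not in seen:
            seen.add(key)
            kept.append(fc)
    # Drop near-duplicates (cosine similarity > 0.95) posted by the same domain.
    emb = labse.encode([fc["claim"] for fc in kept], normalize_embeddings=True)
    keep = np.ones(len(kept), dtype=bool)
    for i in range(len(kept)):
        for j in range(i + 1, len(kept)):
            if keep[i] and keep[j] and kept[i]["domain"] == kept[j]["domain"] \
                    and float(emb[i] @ emb[j]) > 0.95:
                keep[j] = False                 # retain the earlier fact-check
    return [fc for fc, k in zip(kept, keep) if k]
```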
This difference in monthly volume stems from the joining of the two datasets, which span different time periods. In blue, we show the percentage of the claims that are "unique." A thorough discussion of the definition of a unique claim will follow in Section <ref>. In the periods before and after the observation period (March 2020–March 2022), the percentage of unique fact-checks is significantly distorted due to the lower number of available fact-checks. We therefore restrict ourselves to this time period. Fact-checks have several notable limitations that constrain their ability to fully capture misinformation dynamics. First, they rely on the subjective judgments of fact-checking organizations in deciding what claims to investigate and assessing their accuracy. There is inherent subjectivity in these choices, driven by the varying mission statements of fact-checking organizations, different funding incentives, and the fact that different types of claims require different levels of attention and scrutiny <cit.>. Second, fact-checks tend to focus only on high-profile viral claims that gain widespread traction, meaning they likely overlook more subtle or less visible misinformation. Additionally, the availability of fact-checks varies by region based on where fact-checking initiatives exist. Importantly, fact-checks often lag behind the initial viral spread of misinformation. By the time a claim is investigated, initial spread and damage may have already occurred. However, despite these limitations, fact-checks remain a useful proxy for studying global misinformation by documenting the details of specific dubious claims. Fact-checks, by their definition, offer insights into the spread, subject matter, and timeline of misinformation. Furthermore, our diverse dataset of 251,590 unique fact-checks in 95 languages underscores the global and multilingual endeavors in fact-checking. This is a testament to the universal scope of the problem of inaccurate information, crossing different geographical, linguistic, and cultural boundaries. Unlike focusing on a single social media site, which may overlook misinformation in countries where that platform is not dominant, the study of fact-checks transcends these boundaries. The number of fact-checks, as depicted in Figure <ref>, and the worldwide distribution of fact-checking organizations shown in Figure <ref> further solidify the case for using fact-checks as a proxy to study the undercurrents of misinformation across various social media platforms. § METHODS To compare misinformation spread across languages, we embedded all fact-checks with Language-agnostic BERT Sentence Embedding (LaBSE) <cit.>. LaBSE combines Masked Language Modeling (MLM), where a representation is learned by randomly masking tokens in one language and letting the model predict the tokens, and Translation Language Modeling (TLM), where bi- or multilingual sentences are concatenated and words are masked in both sentences. The model then predicts the masked words, encouraging it to learn cross-lingual representations. In contrast to other multilingual sentence embedding models, LaBSE supports at least 109 languages, enabling us to include a larger number of fact-checks, even in languages typically considered low-resource. LaBSE is therefore ideal for retrieving similar sentences across languages.
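To illustrate why LaBSE is convenient here, the short sketch below scores an invented English claim against an invented Spanish paraphrase; the sentences are purely illustrative and are not drawn from our dataset.

```python
# Illustrative only: cross-lingual similarity of two versions of one claim.
from sentence_transformers import SentenceTransformer, util

labse = SentenceTransformer("sentence-transformers/LaBSE")
claims = [
    "Drinking hot water cures the virus.",      # invented English claim
    "Beber agua caliente cura el virus.",       # invented Spanish paraphrase
]
emb = labse.encode(claims, normalize_embeddings=True)
print(float(util.cos_sim(emb[0], emb[1])))      # high despite different languages
```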
Nevertheless, as a robustness check we embedded all covered fact-checks with distiluse-base-multilingual-cased-v2, paraphrase-multilingual-MiniLM-L12-v2, and paraphrase-multilingual-mpnet-base-v2 <cit.>.The results were qualitatively similar regardless of the embedding model used. We proceed with LaBSE as its 109 languages cover 99.3% of our data.To cluster the fact-checking claims we utilize the LaBSE embeddings and retrieve other fact-checks with a cosine similarity exceeding a given threshold. To retrieve the approximately nearest neighbors we employed Locality Sensitive Hashing (LSH), specifically Spotify's ANNOY library <cit.>, to reduce the number of computations. We used 100 hyperplanes to retrieve the nearest neighbors. To gather the approximately nearest neighbors, we started by retrieving 10 nodes, and doubled the number of retrieved claims until the last element of the retrieved neighbors fell below the cosine similarity threshold. We then performed a binary search within the last batch of returned nodes to determine the last element to be included. The resulting data-structure can be modeled as an extremely sparse graph. We then extracted all connected components to yield the final set ofclusters. Our methodology to embed fact-checks, retrieving the approximate nearest neighbors with a cosine similarity surpassing a given threshold, and subsequently extracting connected components, has proven particularly potent when grappling with the high-dimensional LaBSE embeddings and the sparse nature of the resulting graph. Contrasting this with other clustering methods, k-Nearest Neighbours (kNN) exhibits difficulties in high-dimensional spaces due to the “curse of dimensionality” <cit.>. Density-based approaches such as HDBScan <cit.> or DBSCAN <cit.> offer advantages in terms of identifying clusters of various shapes and densities and have robust noise-handling capabilities. However, they are computationally intensive and struggle to define density in high-dimensional, sparse spaces, making them less ideal for sentence embeddings. Centroid-based clustering methods, like k-Means <cit.>, though widely adopted, can be adversely affected by noise and outliers, and assume clusters to be convex-shaped, which may not hold true in our context. By comparison, our proposed strategy balances computational efficiency and robustness to noise. The use of a preset cosine similarity threshold facilitates control over cluster granularity, and the extraction of connected components naturally separates noise and outliers into distinct clusters. This combination of strategies results in an efficient, interpretable, and scalable solution to the problem of clustering multilingual sentence embeddings.Figure <ref> displays an example cluster of ten fact-checks authored in six different languages. The embeddings of the claims are projected on two dimensions with Uniform Manifold Approximation and Projection <cit.>. The two most dissimilar nodes are circled in black. The shortest path between the two nodes is highlighted in yellow.§.§ Clustering EvaluationTo evaluate the robustness, accuracy, and consistency of the clustering, we went through four rounds of human validation to test the performance of different clustering thresholds. We randomly sampled 100 clusters each time and asked the expert coders to qualitatively code the most dissimilar pair as indicated by the cosine similarity and evaluated whether they were talking about the same misinformation claim. The coding scheme follows the work of <cit.>. 
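A condensed sketch of the retrieval-and-clustering pipeline described earlier in this section follows. It assumes L2-normalized LaBSE embeddings in a NumPy array, uses 100 ANNOY trees as an approximation of the "100 hyperplanes" mentioned above, and replaces the final binary search with a simple similarity filter for brevity; it is an approximation of the approach, not the exact implementation.

```python
import numpy as np
from annoy import AnnoyIndex
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def cluster_claims(embeddings, threshold=0.875, n_trees=100):
    n, dim = embeddings.shape
    index = AnnoyIndex(dim, "angular")
    for i, vec in enumerate(embeddings):
        index.add_item(i, vec)
    index.build(n_trees)

    rows, cols = [], []
    for i in range(n):
        k = 10
        while True:                                # double k until the tail falls below the threshold
            ids, dists = index.get_nns_by_item(i, k, include_distances=True)
            sims = 1 - np.asarray(dists) ** 2 / 2  # angular distance -> cosine similarity
            if len(ids) < k or sims[-1] < threshold:
                break
            k *= 2
        for j, sim in zip(ids, sims):
            if j != i and sim >= threshold:
                rows.append(i)
                cols.append(j)

    graph = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
    n_components, labels = connected_components(graph, directed=False)
    sizes = np.bincount(labels)
    repeated_share = float((sizes[labels] > 1).mean())  # share of repeatedly checked claims
    return labels, repeated_share
```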
The first three rounds of human evaluation identified the need for better pre-processing to remove strings from claims such as publishers' names, special characters, and formatted fact-checking claims. We implemented this pre-processing (see Section <ref>) before our final evaluation round.In addition to the qualitative analysis, we examine three measures of the goodness of fit of the clustering. First, we analyzed the variance of cosine distance within clusters (intra-cluster variance). Better clusters should lead to a reduced variation for higher thresholds. Additionally, we looked at the between-cluster (or inter-cluster) cosine distances. Here we selected the centroid of a cluster and sampled up to 10,000 additional centroids for which we then calculated the average cosine distance. The meanintra-cluster variance and inter-cluster distance are shown in Figure <ref> for different thresholds.The mean intra-cluster variance consistently falls with higher thresholds (Figure <ref> top). This is a positive result because it signifies that our clusters are becoming more cohesive—the elements within each cluster are more alike. This increased cohesiveness is crucial for our clustering methodology as we aim to gather similar fact-checks together to facilitate their analysis. The highlighted bar shows the cosine similarity threshold for which the intra-cluster variance is minimized. Conversely, the inter-cluster distance is the average distance of two randomly chosen clusters for each threshold. An increasing inter-cluster distance signifies that—as we increase the cosine similarity threshold—the clusters are becoming more distinct and there is better separation between clusters. The highlighted bar in the bottom plot of Figure <ref> shows inter-cluster distance is maximized at 0.875.While intra-cluster variance is minimized with a cosine similarity threshold of 0.95—the maximum similarity we tested—the inter-cluster distance is maximized with a threshold of 0.875. Increasing the cosine similarity beyond this point leads to clusters that are closely linked, to be split into two components, thereby reducing the inter-cluster distance. We also find larger thresholds yield smaller sized clusters. At a cosine similarity threshold of 0.875, the average non-singleton cluster size is 2.52. This contrasts with average cluster sizes of 6.12 and 2.1 at thresholds of 0.75 and 0.95, respectively.Lastly, another heuristic measurement available to use is the rating associated with each fact-check (e.g., `True,' `False,' `Misleading,' etc.). As different media organizations employ different rating schemes, we first removed potentially confounding spellings, white-spaces, punctuation, and case from each verdict. We then selected any verdict that appeared at least 50 times in the dataset and mapped these to “false," “mostly-false," “mostly-true," and “true." For this evaluation of clusters, we ignored the 30% of fact-checks with verdicts not in this mapping.Figure <ref> show two measures of the clustering accuracy. First, top plot displays the weighted average of the percentage of nodes that have the modal (i.e., most common) verdict in each cluster.We can see that for any cosine-similarity higher than 0.825 we find that over 90% of fact-checks in the same cluster have the same associated rating. This holds both when we map ratings to two labels (true, false) and when we map them to four labels (true, mostly-true, mostly-false, false). 
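A sketch of two of the fit measures discussed above, the intra-cluster variance of cosine distances and the weighted share of fact-checks carrying their cluster's modal verdict, is shown below; the array layouts and the verdict mapping are assumptions for illustration.

```python
import numpy as np
from collections import Counter

def intra_cluster_variance(embeddings, labels):
    # Mean variance of pairwise cosine distances inside non-singleton clusters;
    # embeddings are assumed to be L2-normalized.
    variances = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        if len(idx) < 2:
            continue
        sims = embeddings[idx] @ embeddings[idx].T
        dists = 1 - sims[np.triu_indices(len(idx), k=1)]
        variances.append(dists.var())
    return float(np.mean(variances))

def modal_verdict_share(labels, verdicts):
    # Weighted share of fact-checks whose verdict matches the cluster's modal verdict.
    hits, total = 0, 0
    for c in np.unique(labels):
        cluster = [verdicts[i] for i in np.where(labels == c)[0] if verdicts[i] is not None]
        if len(cluster) < 2:
            continue
        hits += Counter(cluster).most_common(1)[0][1]
        total += len(cluster)
    return hits / total
```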
These results indicate that our clustering methodology effectively groups fact-checks that are not only semantically similar but also share similar ratings of truthfulness. That fact-checks in the same cluster generally share the same rating is an important signal of the cohesiveness of our clusters. Since most fact-checked items are rated as false or mostly-false, meeting this criterion alone is not proof of a good clustering, but if our clustering did not satisfy this criterion it would suggest the clusters were likely too broad. We finally settled on a cosine-similarity threshold for edges of 0.875 after analyzing all available measures in conjunction with our qualitative evaluation. This threshold is in line with previous research <cit.>. Where possible, we compute measures across a range of thresholds and find the same general patterns. With a threshold of 0.875, our qualitative analysis showed that 7 of the 100 most dissimilar pairs we sampled from clusters did not refer to the same claim. Overall, we have high confidence in the precision of our clustering methods from the qualitative analysis, the consistency of ratings, and the high intra-cluster similarity. Similarly, the low inter-cluster similarity suggests recall is generally good; nonetheless, there could be instances where we fail to identify two claims as similar. § RESULTS §.§ Research Question 1: To what extent are misinformation claims fact-checked by multiple fact-checking organizations? To answer RQ1 we extract the connected components of our sparse graph. Any component (or cluster) with two or more nodes contains a claim that has been fact-checked at least twice. The remaining nodes are singletons and have not been matched with any other fact-check. The proportion of singleton nodes ranges from 67.6% for a cosine similarity threshold of 0.8 to 92.9% for a threshold of 0.9. For our chosen 0.875 threshold we find that 88.2% of nodes are singletons—or conversely—that 11.7% of fact-checks investigate a claim that has previously been fact-checked. In total, at this threshold we find more than 21,000 claims that have been fact-checked multiple times. This represents a significant amount of repeated work across fact-checking organizations investigating the same claims. §.§ Research Question 2: What percentage of non-unique fact-checks spreads across languages? To address our second research question, we quantitatively investigate the spread of non-unique fact-checks across languages. Of the claims that are fact-checked more than once, we find that approximately 33.79% are fact-checked in multiple languages, suggesting the original misinformation claim was present in multiple languages. This finding helps situate previous studies showing that misinformation does not exist in isolation within language-specific silos, but rather has the potential to traverse language barriers <cit.>. Despite this, our research also indicates a pronounced inclination for misinformation to disseminate predominantly within the same language. To substantiate this, we compare our observed data against a null model, which assumes no language-based preferences for misinformation spread. In this comparative analysis, the null model's expectations are calculated by randomly sampling languages from the overall language distribution while keeping all edges of the graph unchanged. To clarify the concept of randomness in our null model, we assume that the spread of misinformation is not influenced by language barriers, behaving as if language preference does not exist; a minimal simulation of this null model is sketched below.
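The following sketch simulates the null model, assuming an array of cluster labels and a parallel list of claim languages; the function and variable names are illustrative.

```python
import numpy as np

def lingual_profile(labels, languages):
    # Count repeatedly checked clusters with 1, 2, 3, and 4+ distinct languages.
    counts = np.zeros(4, dtype=int)
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        if len(idx) < 2:
            continue
        n_langs = len({languages[i] for i in idx})
        counts[min(n_langs, 4) - 1] += 1
    return counts

def null_model_profile(labels, languages, n_draws=1000, seed=0):
    # Keep cluster memberships (graph edges) fixed, resample languages from the
    # corpus-level distribution, and average the resulting profiles.
    rng = np.random.default_rng(seed)
    languages = np.asarray(languages)
    expected = np.zeros(4)
    for _ in range(n_draws):
        shuffled = rng.choice(languages, size=len(languages), replace=True)
        expected += lingual_profile(labels, shuffled)
    return expected / n_draws   # compare against lingual_profile(labels, languages)
```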
As the languages associated with misinformation claims are chosen by random sampling from the overall language distribution, we nullify any inherent language-based patterns or preferences. Comparing to this null model as a baseline allows us to understand how language could specifically impact the dissemination of misinformation. We depict this analysis in Figure <ref>, where we plot the expected frequencies from the null model against the empirically observed frequencies of mono-, bi-, tri-, and four-or-more- lingual clusters. Each point corresponds to a distinct cosine similarity threshold.If there were no language-specific effects in the clustering, we'd expect the points to lie on the 45-degree line y=x as we'd observe the value just as often as is expected. The empirical data deviates significantly (α = 0.01) confirming the influence of language on our clustering of misinformation claims. We mapped each language to a language family based on data by Ethnologue, a catalog of languages <cit.>. Of the 33.79% claims found in multiple languages, 80.7% of these were found within languages belonging to the same language family.§.§ Research Question 3: Evolution of Misinformation claimsUnderstanding the temporal evolution of misinformation claims offers valuable insights into their dynamics. This line of inquiry helps us track how claims change over time, revealing whether they become more or less similar as they spread. By analyzing the factors that influence this evolution, we can start to gain a deeper understanding of the mechanisms that drive the spread of misinformation. This serves as a natural extension to our previous analyses, bridging the gap between static properties and temporal behaviors of misinformation claims.While we intentionally chose not to restrict clustering based on time—due to the multiple peaks in the temporal diffusion pattern of misinformation, as revealed by <cit.>—we found that the majority of edges within each cluster are still closely linked in time, indicating that the corresponding fact-checks were often created near each other temporally.Figure <ref> shows the propensity of fact-checks to be closely linked in time. It displays the cumulative percentage of time differences lower than or equal to x days for unconnected nodes within one cluster (i.e., fact-checks checking the same claim). For our chosen cosine similarity threshold of 0.875, 56.26% of edges have a time difference less than or equal to a week. 68.18% of edges have a time differences less than or equal to three weeks. Even within each cluster, the time difference of connected nodes is significantly higher than unconnected nodes for all cosine-similarity thresholds (α = 0.01).While directly connected edges always have a cosine-similarity exceeding the pre-set threshold, nodes within the same cluster that are unconnected have lower cosine similarities. We can therefore, inspect how the similarity between unconnected nodes within the same cluster changes over time. A monotonically decreasing cosine similarity would indicate a gradual evolution of the misinformation claim over time. The top plot of Figure <ref> illustrates the average cosine similarity between all unconnected pairs of nodes within clusters, specifically for our selected cosine-similarity threshold of 0.875. This average is taken across all clusters and plotted as a function of the time difference between the nodes. The lower plot encompasses all lower cosine-similarity thresholds. 
Importantly, we only report the average distance for unconnected nodes, as nodes that are directly connected inherently have a similarity of at least the threshold value. The shading in the figure represents the standard error. The similarity of unconnected nodes continues to decrease for the first month. For all cases, we see a strong indication of a monotonically decreasing cosine-similarity, showing how misinformation claims change over time. The same effect is also observed for longer time periods. The effect is consistent for all clustering levels below or equal to our chosen similarity of 0.875. The average similarity between disconnected nodes within the same cluster drops by around 10% within one year from 0.8 to 0.73 (t=6.41, p < 0.01). Similarly, we can inspect what factors influence the evolution of misinformation claims within clusters. To do that, we determine the two most dissimilar nodes per cluster. Subsequently, we determine the shortest path connecting these dissimilar nodes. Figure <ref> shows an example multilingual cluster. The color of each node refers to the language of the fact-check. The two most dissimilar nodes, F & A, are circled in black. The shortest path between these two nodes is highlighted in yellow. This way of looking at each cluster allows us to investigate how misinformation claims within the same cluster evolve. Figure <ref> shows the average cosine similarity of the two most dissimilar claims in each cluster plotted against the number of language switches and the length of the shortest path. The shaded area is the 95% confidence interval. We only plot observations with at least 100 valid pairs.We find that the average cosine similarity of the two most dissimilar nodes varies considerably with the length of the shortest path (i.e., tracing the most probable path of evolution). Both the number of unique languages and the number of language switches are highly significant predictors of change in the cosine similarity.To test whether the effect remains significant when controlling for the length of the shortest path, we performed a regression analysis (Table <ref>). The effects of both remain consistently negative and significant, even when controlling for the length of the shortest path between the two most dissimilar nodes. To investigate which claim topics are most likely to be fact-checked multiple times and spread across languages, we first machine translate all claims to English using Google Translate. We then extract and lemmatize all noun tokens from the claims field. We then calculated the relative frequency of each token under two conditions. First, whether the claim is a singleton (cluster size = 1) or not (cluster size > 1). Secondly, for non-singleton claims, whether the claim cluster is monolingual or multilingual.[We look at the original dataset before machine translation to determine whether a cluster is mono- or multilingual.] Before calculating the relative frequencies, we filtered both lists to tokens present in both conditions. When calculating the relative frequencies, we only included tokens that appeared at least 50 times. Table <ref> presents tokens that are most and least likely to appear in multilingual clusters (left column) and in clusters containing multiple fact-checks (right column). For instance, the token most indicative of multilingual clusters is “Pfizer”, emphasizing that Covid-19 is a globally discussed topic. Other top multilingual tokens like “viral” potentially relate to the pandemic. 
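A sketch of the path analysis described in this section is given below: within one cluster, it finds the two most dissimilar fact-checks and the shortest path connecting them, counting language switches along the way. Graph construction details and variable names are assumptions.

```python
import numpy as np
import networkx as nx

def evolution_path(embeddings, languages, threshold=0.875):
    # embeddings: normalized LaBSE vectors of one cluster; languages: parallel list.
    n = len(embeddings)
    sims = embeddings @ embeddings.T
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for a in range(n):
        for b in range(a + 1, n):
            if sims[a, b] >= threshold:
                G.add_edge(a, b, weight=1 - sims[a, b])
    # Most dissimilar pair in the cluster and the shortest path between them.
    pairs = [(sims[a, b], a, b) for a in range(n) for b in range(a + 1, n)]
    _, a, b = min(pairs)
    path = nx.shortest_path(G, a, b, weight="weight")
    switches = sum(languages[u] != languages[v] for u, v in zip(path, path[1:]))
    return path, switches
```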
Returning to the token analysis: conversely, tokens like "school," "law," and "government" are primarily confined to single-language discourse. Similarly, tokens such as "Ivermectin," "Soros," and "Greta Thunberg" are often found in claims checked multiple times. In summary, this analysis identifies words that tend to appear in claims that either spread widely across languages or remain localized. § DISCUSSION AND CONCLUSIONS This paper investigates fact-checking in the online multilingual space. Above all, we find that while most misinformation claims are only fact-checked once, 11.7% of misinformation claims are fact-checked multiple times. This observation highlights the existence of a recurring subset of claims that undergo continual scrutiny. It suggests a persistent pattern where certain claims, due to their nature, importance, or controversy, command repeated attention from fact-checking organizations. Our token analysis further illuminates this phenomenon. We find some words that are more likely to appear in claims found in multiple languages and in claims that are fact-checked multiple times. Such words include "Pfizer," "Ivermectin," and words related to conspiracy theories. These recurring, high-scrutiny topics present an opportunity for more efficient allocation of fact-checking resources to minimize repeated work. Next, we find that 33.79% of misinformation claims that are fact-checked more than once are checked in multiple languages. Nevertheless, misinformation still diffuses predominantly within the same language. This highlights the importance of global cooperation on fact-checking, as the majority of misinformation claims stay in their own language. This echoes the cultural proximity theory that culture and language are still the most influential factors deciding online users' information consumption. Though technologies such as machine translation and social media make cross-language communication easier, they do not necessarily function that way in everyday use. In other words, our research highlights the importance of local fact-checkers and the cooperation of global fact-checking communities, as most misinformation claims stay local and in single-language communities. Moreover, we show that misinformation claims do change over time and that changes are especially common when a claim is found in multiple languages. Our data, however, only contains the misinformation claims as reported by fact-checkers. There is some degree of editorial voice in how fact-checkers write or phrase the misinformation; so, future work should seek to analyze a global dataset of the original misinformation posts on social media. Such posts are often removed, which makes this a challenge and will require closer collaboration between academics and fact-checkers. Most misinformation is fact-checked closely in time, which mirrors other work on the diffusion of information more generally; however, the time scales are slower than those observed directly on social media <cit.>. While our results show claims change over time, the specific mechanisms and consequences remain unclear. Future research should delve more deeply into how claims change and the role of cross-lingual spread in this process, which may help fact-checkers better anticipate what claims will enter their languages from elsewhere. § ACKNOWLEDGMENTS This work was funded in part by a grant from The Alan Turing Institute. | http://arxiv.org/abs/2310.18089v1 | {
"authors": [
"Dorian Quelle",
"Calvin Cheng",
"Alexandre Bovet",
"Scott A. Hale"
],
"categories": [
"cs.CL",
"cs.CY",
"cs.SI"
],
"primary_category": "cs.CL",
"published": "20231027122155",
"title": "Lost in Translation -- Multilingual Misinformation and its Evolution"
} |
IEEE/ACM TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING Shell Zhao et al.: Multi-grained Evidence Inference for Multi-choice Reading Comprehension Multi-grained Evidence Inference for Multi-choice Reading Comprehension Yilin Zhao, Hai Zhao, Sufeng DuanThis paper was partially supported by Joint Research Project of Yangtze River Delta Science and Technology Innovation Community (No. 2022CSJGG1400). Yilin Zhao, Hai Zhao and Sufeng Duan are with the Department of Computer Science and Engineering, Shanghai Jiao Tong University, and also with Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering, Shanghai Jiao Tong University. Yilin Zhao and Sufeng Duan contributed equally to this work. Corresponding author: Hai Zhao. E-mail: [email protected], [email protected], [email protected]. January 14, 2024 ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= Multi-choice Machine Reading Comprehension (MRC) is a major and challenging task for machines to answer questions according to provided options. Answers in multi-choice MRC cannot be directly extracted in the given passages, and essentially require machines capable of reasoning from accurate extracted evidence.However, the critical evidence may be as simple as just one word or phrase, while it is hidden in the given redundant, noisy passage with multiple linguistic hierarchies from phrase, fragment, sentence until the entire passage.We thus propose a novel general-purpose model enhancement which integrates multi-grained evidence comprehensively, named Multi-grained evidence inferencer (Mugen), to make up for the inability. Mugen extracts three different granularities of evidence: coarse-, middle- and fine-grained evidence, and integrates evidence with the original passages, achieving significant and consistent performance improvement on four multi-choice MRC benchmarks. Natural Language Processing, Multi-choice Reading Comprehension, Multi-grained Thought, Reference Extraction and Integration. § INTRODUCTIONAs a fundamental and challenging task of natural language understanding (NLU), Machine Reading Comprehension (MRC) requires machines to answer questions according to the given passages <cit.>. According to the differences in expectant answers, MRC tasks can be divided into three common formats <cit.>: 1) extractive task, which searches for the most proper snippet from the passage as answer <cit.>;2) generative task, which needs model to summarize the passage and generate answer <cit.>; 3) multi-choice task, the focus of this work, which provides several options and aims to select the most suitable one <cit.>.Though multi-choice MRC seems not so challenging that the answers have been shown among candidate options, the real difficulty is, the answers together with their supported evidence may not appear explicitly in the given passages at all. 
Thus, to perform multi-choice MRC satisfactorily, models must be capable of inference based on accurate and abundant evidence. However, questions in multi-choice MRC may be accompanied by lengthy, noisy passages, which hide critical evidence at different levels: 1) Evidence may appear at a quite refined level, and in some cases one phrase or even one word can determine the prediction of the question; 2) Evidence may hide in units of quite diverse granularities inside the passage, which requires inference over the information in phrases, fragments, and sentences, up to the entire passage. One example from RACE <cit.> is shown in Figure <ref>. To answer the given question, extraction and integration of the complete golden evidence chain (marked in red) distributed across different linguistic levels are necessary. Relying on any single level may lead to an incomplete explanation and inference (for fine-grained evidence), or introduce interfering information that leads to incorrect predictions (marked in blue, mostly for coarse-grained evidence). Though well-extracted evidence, rather than the entire passage, is essential for solving such MRC tasks effectively, most existing studies only obtain single-grained evidence in a rough way <cit.> and fail to process multi-grained evidence flexibly and comprehensively, leading to marginal improvements. Inspired by emerging studies with "coarse-to-fine" and "multi-grained" thoughts <cit.>, we propose a concise model which pays attention to evidence in multiple granularities, called Multi-grained evidence inferencer (Mugen). As Figure <ref> shows, Mugen first extracts middle-grained evidence at the fragment level, then finds the sentences containing it as coarse-grained evidence, and extracts a set of critical phrases as fine-grained evidence. With the integration of the original passage and three different granularities of evidence, Mugen produces the evidence-enhanced prediction. The effectiveness of Mugen is verified on four multi-choice MRC benchmarks: RACE, DREAM, Cosmos QA and MCTest, where it obtains substantial performance improvement over strong baselines and passes MRC significance tests <cit.>. § RELATED STUDIES In recent years, more challenging MRC tasks in various forms have been proposed <cit.>. To solve MRC tasks, researchers train powerful pre-trained models and obtain significant improvements <cit.>. With the rising encoding ability of pre-trained contextual encoders, some researchers incorporate external commonsense <cit.> or train on additional beneficial datasets <cit.> to enhance their models in an outer way. In the meantime, more researchers attempt to strengthen models in an inner way without external information. Some studies improve the interaction embedding between input sequences based on attention networks <cit.>, while other studies focus on human reading strategy simulation <cit.>. Among the inductive strategies, evidence extraction plays an important role <cit.>. However, most existing studies are limited to evidence in one single granularity, which may reduce the attention on critical phrase information (for coarse-grained evidence, like <cit.>) or lack complete contextual explanation (for fine-grained evidence, like <cit.>). Emerging studies with "coarse-to-fine" or "multi-grained" thoughts for non-MRC tasks provide a possible solution to the above limitations.
For open-domain QA, Zhong <cit.> and Zheng <cit.> utilize multi-grained co-attentions to encode documents and score answers. And for long document extractive QA, Choi <cit.> use coarse-to-fine reading strategies for single-grained evidence evaluation. However, no previous work applies the above “multi-grained” thought to MRC field especially challenging multi-choice MRC.Inspired by the previous works of evidence enhancement and multi-grained strategy, this paper proposes Mugen to make the first attempt to integrate multi-grained evidence comprehensively for inference enhancement in the MRC field, and achieves inspiring results with concise design, highlighting the effectiveness of hierarchical evidence extraction and integration.§ OUR MODELWe focus on multi-choice MRC in this work, which can be represented as a triple <P, Q, O>, where P is a passage, Q is a question over P, O={O_1, O_2, ... O_U} is a set of options for Q, and U is the number of options. Among the options, the most appropriate option O_gold has been chosen as the ground truth answer, and the goal of our model is to pick up the answer O_gold. Thus we let the model learn subject to:ℙ(O_gold| P, Q, O)≥ℙ(O_i| P, Q, O),i ∈{1,2,...,U},where ℙ represents probability. §.§ Multi-grained EvidenceIn this work, three different grains of evidence are proposed for multi-grained evidence integration and modeling enhancement, where: * As coarse-grained evidence, Sentence Evidence (Set) is one single sentence (or a set of sentences) that contains the critical evidence in the lengthy passage, with appropriate rich contextual information.* As middle-grained evidence, Fragment Evidence is the shortest sub-sentence fragment with complete linguistic structures[As the middle-grained flexible granularity, the typical case of Fragment Evidence is a clause sentence, but it can convert from several phrases to nearly the entire sentence.]. Fragment Evidence is used to extract the most concise and explicit text segment with complete semantics as evidence, for answer prediction in the subsequent processes. Thus in most cases, we can find Fragment Evidence possesses good interpretability, like the examples in Table <ref>.* As fine-grained evidence, Phrase Evidence Set is a set of “feature” phrases in the middle-grained evidence. Different from Fragment Evidence, most phrases in the Phrase Evidence only have adequate complete meanings, rather than complete linguistic structures. Therefore, the main function of Phrase Evidence is to further highlight critical words or phrases, rather than serve as interpretable evidence texts directly. Table <ref> shows several samples of multi-grained evidence in both textual and conversational corpora, where the evidence in each granularity has a relatively complete semantic and syntactic structure, and provides critical information for answer prediction. §.§ Overall FrameworkThe overall framework of Mugen is shown in Figure <ref>. With the help of Evidence Extractor, Mugen filters out Sentence Evidence, Fragment Evidence and Phrase Evidence respectively, as the coarse-, middle- and fine-grained evidence. If the evidence set in a certain granularity contains more than one textual piece, Mugen will splice these pieces by space.Then Mugen encodes the above evidence with the question and options respectively, and executes a weighted integration of them for prediction. 
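As a concrete illustration of the weighted integration just outlined (and formalized in the next paragraph), the PyTorch sketch below fuses the [CLS] vectors of the passage and the three evidence granularities and scores the options. Tensor shapes, the uniform initialization of the coefficients, and the module layout are our assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class EvidenceIntegration(nn.Module):
    """Weighted fusion of passage and multi-grained evidence [CLS] vectors."""
    def __init__(self, hidden_size, dropout=0.1):
        super().__init__()
        # alpha, beta, gamma, sigma: learnable weights for passage / sentence /
        # fragment / phrase representations (uniform initialization assumed).
        self.coef = nn.Parameter(torch.full((4,), 0.25))
        self.dropout = nn.Dropout(dropout)
        self.scorer = nn.Linear(hidden_size, 1)

    def forward(self, e_pas, e_sen, e_fra, e_phr, labels=None):
        # Each input: (batch, num_options, hidden) [CLS] embeddings of
        # "Question + option" encoded with the corresponding text.
        a, b, g, s = self.coef
        fused = self.dropout(a * e_pas + b * e_sen + g * e_fra + s * e_phr)
        logits = self.scorer(fused).squeeze(-1)          # (batch, num_options)
        loss = None
        if labels is not None:                           # gold option indices
            loss = nn.functional.cross_entropy(logits, labels)
        return logits, loss
```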
In detail, Mugen uses its baseline as the Encoder (a single parameter-sharing ALBERT <cit.> in this work) to encode the textual content of the evidence in each granularity, as well as the complete contextual information of the passage. In the separate encoding process, as the granularity of the encoded evidence becomes finer, the input sequence of the encoder contains less contextual information. As a result, contextual information takes up a smaller proportion of the embedding representation of the finer-grained evidence, while the textual content of the critical evidence takes up more. The subsequent integration process can be formulated as: E^i = dropout(α e_pas^i + β e_sen^i + γ e_fra^i + σ e_phr^i) ∈ℝ^H, where H is the hidden size of the Encoder, and α, β, γ, σ are learnable parameters. E^i denotes the final evidence-enhanced representation, while e_pas^i, e_sen^i, e_fra^i and e_phr^i denote the [CLS] embedding vectors from the last hidden layer obtained by encoding "Question + i-th Option" with the original passage, the sentence evidence, the fragment evidence and the phrase evidence, respectively. In the above process, with the integration of the passage embedding (e_pas) and the evidence embeddings (e_sen/e_fra/e_phr), Mugen combines the contextual representation of the entire passage with the enhanced textual representation of each single-grained piece of evidence. Thus, among the information embedded in the evidence-enhanced embedding E, critical evidence occupies a greater proportion, leading to a more accurate answer prediction. In Mugen, a softmax layer serving as the Classifier is employed to calculate scores for the options, and the total loss is the standard Cross Entropy Loss between the integrated prediction and the golden answer: ℒ=-1/U∑_i=1^U (Bool(O_i=O_gold)· log(p_i)), p_i=exp(w_i^TE_i+b_i)/∑_j=1^U exp(w_j^TE_j+b_j), where w_i ∈ℝ^H, b_i ∈ℝ^1 are learnable parameters. §.§ Evidence Extractor As Figure <ref> shows, there are two sub-extractors in the Evidence Extractor: the Sentence Evidence Extractor (the upper one) and the Phrase Evidence Extractor (the lower one). ∙ Sentence Evidence Extractor Mugen uses the Sentence Evidence Extractor to extract both Sentence and Fragment Evidence. To ensure the Sentence Evidence Extractor can extract precise evidence, we implement a contextual encoder (we employ ALBERT_base <cit.> in Mugen) which is individually pre-trained on SQuAD 2.0 <cit.> to extract a non-null answer span[We eliminate the possibility of extracting null spans by drastically increasing the threshold τ in the above encoder. According to <cit.>, when S · T_0+E · T_0 > max_i≤ j S · T_i+E · T_j + τ, the encoder will extract a null span, where T_i ∈ℝ^H is the embedding of the i-th input token, and S/E∈ℝ^H is the introduced start/end vector.]. The extracted span is then defined as the Fragment Evidence for Mugen. Benefiting from the pre-training on SQuAD 2.0, the Sentence Evidence Extractor can ensure the segmenting correctness and linguistic integrity of the Fragment Evidence to a large extent. In addition, though the Sentence Evidence Extractor can be modified to extract several pieces of Fragment Evidence, we only retain the one piece with the highest confidence score, because the benchmarks we focus on do not have obvious multi-hop features like MultiRC <cit.>. Multiple weakly-relevant fragments may reduce the proportion of critical information in the entire Fragment Evidence, causing inference deviation in the further extraction and integration. In the next step, Mugen obtains the Sentence Evidence Set based on the Fragment Evidence.
If Fragment Evidence locates in one single sentence S, then S is the only element in Sentence Evidence Set; and if Fragment Evidence spans several consequent sentences {S_1, ..., S_k}, then sentences {S_1, ..., S_k} are added into the Sentence Evidence Set[In most cases, Fragment Evidence is the subsection of one single sentence.]. ∙ Phrase Evidence ExtractorBased on Fragment Evidence, Phrase Evidence Extractor extracts Phrase Evidence as fine-grained evidence, shown in Figure <ref>.In Divider, the Fragment Evidence is divided into n phrases: {Phrase_1,...,Phrase_n} based on stopwords (including prepositions, pronouns, conjunctions and interjections)[Some prepositions (like “from”) are retained because they can express specific meanings in some specific phrases (such as “come from”).] and punctuation. Mugen removes the above words and punctuation, and splits the fragments before and after them into independent phrases. With minor computational cost, the above rule-based phrase segmentation method highlights critical words and phrases in the Fragment Evidence, and makes the phrases get appropriate segmentation in most cases.After that, Mugen splices the question with all given options with spaces, encodes the above question and phrases by an ALBERT_base Encoder, and calculates the correlation scores of their embedding vectors:s_i=p_i^Tq, i∈ (1,...,n),where s_i and p_i are respectively the correlation score and embedding of Phrase_i, q is the embedding of the question with options.In Phrase Filter, Mugen retains all Phrase_isatisfying: s_i > θ× s_max, where s_max is the maximum correlation score among all phrases, and θ is the evidence threshold. Finally, all retained phrases form the Phrase Evidence Set. By splicing these phrases with spaces, Mugen generates the ultimate Phrase Evidence. §.§ Simplified Version of MugenIn the above Mugen, multiple runs of the baseline encoder are required to integrate and determine an appropriate proportion of multi-grained evidence, which may ask for higher computational cost. To control such extra computational cost, we simplify the multi-grained evidence integration method in Mugen, providing Mugen_simp. There is only one input sequence of the processed passage in Mugen_simp, therefore it only requires a single run of encoding, without additional computational cost beyond the baseline model. In the processed passage text, there are 6 special tags (<sos><eos><sof><eof><sop><eop>) around the evidence in 3 different granularities, and each granularity has 2 tags labeling its start and end positions. For example, Table <ref> shows the input sample of the example passage in Figure <ref>, pre-processed by Mugen_simp. § EXPERIMENTS §.§ SetupWe run the experiments on 8 NVIDIA Tesla V100 GPUs. The implementation of Mugen is based on the PyTorch <cit.> implementation of ALBERT_xxlarge, and the hyper-parameters of Mugen are shown in Table <ref>.As a supplement, the warmup rate is 0.1 for all datasets, and we set θ=0.8 for Phrase Evidence Extractor[With θ changing to 0.7, 0.9 and 1.0, the average score of Mugen based on ALBERT_base on DREAM got 0.08%, 0.29% and 0.46% reduction respectively.]. For the length of passage and evidence in different granularities, we set 512 for passage, 128 for Sentence Evidence, and 32 for Fragment Evidence and Phrase Evidence. §.§ DatasetWe evaluate Mugen on four multi-choice MRC benchmarks: RACE <cit.>, DREAM <cit.>, Cosmos QA <cit.> and MCTest <cit.>. 
The detailed descriptions are shown as following:RACE is a large-scale MRC task collected from English examinations, which contains nearly 100,000 questions. Its passages are in the form of articles, and most questions need contextual reasoning. In RACE, the average word length of the passages is 313, and the domains of passages are diversified. DREAM is a conversation-based multi-choice MRC task, containing more than 10,000 questions, where the average word length of the conversations is 147. The challenge of the dataset is that more than 80% of the questions are non-extractive and require reasoning from multi-turn conversations.Cosmos QA is a large-scale MRC task, which has about 35,600 questions and the passages are collected from people’s daily narratives. The questions are about the causes or effects of events, which can benefit from commonsense injection as well as evidence extraction. The passages in Cosmos QA have an average word length of 71.MCTest is a multi-choice MRC task, whose passages are from fictional stories, with an average word length of 240. One of the challenges is that most questions require evidence dispersing in different parts of the passage, which can benefit well from our model. §.§ ResultsTaking accuracy(%) as the evaluation criteria, with 5 random seeds, our average results are shown in Tables <ref>–<ref>[Due to the test set of Cosmos QA is not available for free evaluations with different random seeds, we report the results with one single seed.]. As a supplement, the average standard deviations of the development and test results of Mugen on ALBERT_xxlarge are respectively 0.55, 0.23, 1.14 and 0.77 on DREAM, RACE, MCTest 160 and MCTest 500, which shows Mugen has satisfactory stability of answer prediction.For the performance, Mugen outperforms the strong baselines and other powerful models on the leaderboards without any external information or additional neural networks with numerous parameters like DUMA <cit.> (shown in Table <ref>). Even so, Mugen achieves state-of-the-art (SOTA) performance on both sub-dataset of MCTest beyond the previous SOTA model <cit.>; and SOTA performance on Cosmos QA[<https://leaderboard.allenai.org/cosmosqa/submissions/public>] among models with moderate contextual encoders except for two models with huge T5, due to our limited computing resources. Besides, Mugen passes McNemar’s significance test[In a statistical sense, if a model passes McNemar's significance test, we can conclude the performances of the evaluated model and its baseline model have a statistically significant difference. Following the settings in previous works <cit.>, we define “whether the answer of baseline/proposed model is correct” as the pair in McNemar's test. For example, if the answer of the proposed model is correct and the baseline is wrong, the pair is 0-1.] <cit.> with p < 0.01 for all the above datasets as Zhang <cit.> suggested. It indicates that, compared to the baseline model, the performance gains from Mugen are statistically significant. From another point of view, existing powerful pre-trained models can gain further substantial improvements from the integration of multi-grained evidence. As for the proportions, with five random seeds, the final learned results are α=0.46, β=0.19, γ=0.28 and σ=0.07 on average. In this work, Mugen is a generalized representation enhancement method for diverse tasks and baselines without advanced auxiliary tech on specific datasets <cit.>, and we verify Mugen in a standardized setting. 
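The McNemar's significance test mentioned above can be computed as sketched here, pairing the per-question correctness of the baseline and of the evaluated model; the boolean arrays are placeholders for actual predictions.

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

def mcnemar_pvalue(baseline_correct, model_correct):
    # Each argument: boolean array with one entry per test question.
    b = np.asarray(baseline_correct, dtype=bool)
    m = np.asarray(model_correct, dtype=bool)
    table = [[int(np.sum(b & m)), int(np.sum(b & ~m))],    # 1-1 and 1-0 pairs
             [int(np.sum(~b & m)), int(np.sum(~b & ~m))]]  # 0-1 and 0-0 pairs
    return mcnemar(table, exact=False, correction=True).pvalue
```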
Even under this standardized setting, Mugen obtains consistent and statistically significant improvement over strong baselines and achieves SOTA performance on two benchmarks, pointing to the prospect of deeper exploration and integration of the information in the given datasets. In terms of parameter scale, Mugen has no additional parameters beyond its baselines during the training process, as Table <ref> shows. In terms of computational cost, with almost no additional computation, Mugen_simp still obtains acceptable improvement over strong baselines, reiterating that the improvement of Mugen comes from the integration of multi-grained evidence. We record the training time of base/xxlarge-size models on RACE: Mugen costs 44/738 minutes for one training epoch while Mugen_simp costs 29/366 minutes, saving 41.9% of the training cost on average. Thus, we recommend Mugen_simp to researchers who pursue lower computational cost. § ANALYSIS We evaluate Mugen on ALBERT_base on DREAM for further analysis; experiments on other datasets like RACE show a similar quantitative tendency. §.§ Ablation Studies To briefly analyze the extracted evidence in the three different granularities, we execute a series of ablation studies in which each integrating weight coefficient of evidence is fixed to 0 in turn, keeping the other coefficients learnable. Results in Table <ref> suggest that Fragment Evidence, as the middle-grained evidence, plays the most important role among all granularities of evidence, while Phrase Evidence contributes the least. Further, the quantitative tendency is the same from base to xxlarge model magnitude. Besides, because the Sentence Evidence Extractor in Mugen relies on a pre-trained contextual encoder with additional computational cost, we further design an Ensemble Baseline to explore the source of the gains from Mugen. The Ensemble Baseline combines the [CLS] embedding vectors of the baseline and of the contextual encoder in the Sentence Evidence Extractor. In detail, the two above embeddings are spliced into an integrated embedding of size ℝ^2H, and a linear feedforward layer is employed to reduce the dimension of the integrated embedding to ℝ^H. In general, this baseline can be regarded as an enhanced baseline with almost all the additional parameters and pre-trained data of the Evidence Extractor. As shown in Table <ref>, compared to the improvements of the other experimental models, the actual gains from the additional neural architectures and pre-trained data in the Ensemble Baseline are marginal. This indicates that most performance gains of Mugen come from the extraction and integration of multi-grained evidence. We also implement Mugen based on other encoder baselines and achieve significant improvements. The performance of Mugen implemented on ELECTRA <cit.> is shown in Table <ref>. The consistent and significant improvements over various baselines verify the universal effectiveness of Mugen. §.§ The Roles of Multi-grained Evidence To make a comprehensive analysis of multi-grained evidence, we set and adjust the weight coefficients manually and draw the performance curves in Figure <ref>. To make the figure more intuitive, we set σ=0 to mask the Phrase Evidence due to its minor contribution; a specialized analysis of Phrase Evidence will be given later. The figure depicts that, by allocating a slightly larger proportion to Fragment Evidence than to Sentence Evidence, Mugen can achieve the best performance.
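A sketch of the manual coefficient sweep behind these curves is given below: the learnable integration weights are replaced by fixed values on a grid and the model is re-evaluated for each setting. It assumes a model exposing the weights as a 4-element `coef` parameter (as in the integration sketch earlier) and an `evaluate` callback returning development accuracy; both names are placeholders.

```python
import numpy as np
import torch

def sweep_coefficients(model, evaluate, step=0.1, sigma=0.0):
    # Grid over (alpha, beta), with gamma = 1 - alpha - beta - sigma and sigma fixed.
    results = {}
    for alpha in np.arange(0.0, 1.0 + 1e-9, step):
        for beta in np.arange(0.0, 1.0 - alpha + 1e-9, step):
            gamma = 1.0 - alpha - beta - sigma
            if gamma < -1e-9:
                continue
            with torch.no_grad():
                model.coef.copy_(torch.tensor([alpha, beta, gamma, sigma]))
            results[(round(alpha, 2), round(beta, 2), round(gamma, 2))] = evaluate(model)
    best = max(results, key=results.get)
    return best, results
```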
It indicates Fragment Evidence plays a dominant role to provide guidance information, as well as Sentence Evidence plays an important supporting role, which is consistent with the results in ablation studies. This finding reveals a reading strategy that, one should refer to the surrounding context (sentencess) to get a comprehensive explanation and to know how to utilize critical evidence fragments.In addition, paying attention to multi-grained evidence and the original passage in a balanced way (50% v.s. 50%) seems to lead to better performance. The original passage provides a protective measure to reduce the negative impact of inaccurate evidence extraction, and that is one reason we retain the original passage for information integration in Mugen.To study whether fine-grained evidence in phrase level deserves more attention, we fix σ to 0.1 and retain α, β, γ learnable, leading to a 0.39% drop in average score. It indicates that, models should not overly depend on fine-grained evidence, because evidence at the phrase level may be spliced directly[We also try to splice them with some punctuation like “,”, but it does not matter.] and lack complete linguistic structure.However, combined with the positive effect in ablation studies, fine-grained evidence can deliver some detailed information in the form of “holes” just like the example in Figure <ref>. A continuous evidence fragment may bring noisy information to the model like “... each focusing on specific topics teaching Hoosiers how to ...” in the given example, since there exists critical information located at its front and back. With fine-grained evidence, Mugen can extract the critical information effectively and dig out the useless information in middle-grained evidence in the “holes”.Finally, to evaluate whether the above analysis is consistent with the characteristics of the multi-choice MRC datasets, we randomly extract 100 evidence-requiring cases in RACE, DREAM and CosmosQA respectively. According to the evidence type that provides the most comprehensive information with the least redundant text, we find coarse-, middle- and fine-grained evidence accounts for 27%/54%/19% in RACE, 24%/58%/18% in DREAM, and 32%/55%/13% in Cosmos QA. The evidence-type distributions are consistent with the above conclusions, showing the effectiveness of the extraction of finer-grained evidence and the integration of multi-grained evidence, as well as justifying the design of the proposed model. §.§ Integration and Interaction of EvidenceThe aforementioned experimental results show that, for multi-choice MRC tasks, models can obtain statistically significant performance gains from the simple integration of multi-grained evidence. Based on the above conclusion, a further question is that, whether the performance gains can be further amplified by elaborate designs that focus on the features of multi-grained evidence.According to previous researches, typical information enhancement methods mainly include the design of integration strategies <cit.> and the modeling of interaction mechanisms <cit.>. For the multi-grained evidence enhancement in this work, the design of integration strategies aims to make evidence in each granularity have the most appropriate contribution to the model prediction; while evidence interaction mechanisms utilize special neural networks to enrich evidence embedding vectors by the fusion of the evidence in other granularities. 
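To make the discussion of the weight coefficients concrete, here is a minimal sketch of a weighted integration of the passage and the three evidence embeddings. Treating the combination as a convex sum with learnable α, β, γ, σ (kept positive and summing to one via a softmax), and identifying them with the passage, sentence, fragment, and phrase weights respectively, are our assumptions for illustration; the exact combination used by Mugen is the one defined in its model section and may differ from this sketch.

import torch
import torch.nn as nn

class WeightedEvidenceIntegration(nn.Module):
    """Combine passage/sentence/fragment/phrase embeddings with learnable proportions."""

    def __init__(self):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(4))  # one logit per granularity

    def forward(self, e_pas, e_sen, e_fra, e_phr):
        alpha, beta, gamma, sigma = torch.softmax(self.logits, dim=0)
        return alpha * e_pas + beta * e_sen + gamma * e_fra + sigma * e_phr

integrate = WeightedEvidenceIntegration()
e_pas, e_sen, e_fra, e_phr = (torch.randn(4, 768) for _ in range(4))
print(integrate(e_pas, e_sen, e_fra, e_phr).shape)  # torch.Size([4, 768])

The alternative integration strategies and interaction mechanisms examined next replace or augment this simple combination.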
In this section, we implement several integration strategies and interaction mechanisms for multi-grained evidence, to explore possible further gains as well as determine the most effective designs for Mugen.1) Voting Integration Strategy.In this strategy, four embeddings pass the classifier respectively and Mugen uses a majority vote of their predictions. Based on weight coefficients of evidence, this strategy can be divided into equal voting and weighted voting: ℒ_equal=∑ CELoss(O_i,O_gold), ℒ_weighted=∑θ_i × CELoss(O_i,O_gold),where i∈{pas, sen, fra, phr}, θ_i is a learnable weight coefficient, and O_i is the predicted option.2) BiGRU Interaction Mechanism.In MRC field, numerous works utilize GRU (Gate Recurrent Unit) or BiGRU to obtain enhanced contextual representation <cit.>. Inspired by the above studies, we employ a series of BiGRU modules to execute the interaction of the evidence in each granularity:h_phr = GRU(e_phr, 0), h_fra = GRU(e_fra, h_phr), h_sen = GRU(e_sen, h_fra), h_pas = GRU(e_pas, h_sen),where h is the last hidden representation of GRU. Take Phrase Evidence as an example, h_phr can be obtained similarly, and the interacted representation E_phr is generated as:E_phr=feedforward(h_phr⊕h_phr) ∈ℝ^H,where ⊕ is the vector connection operation. In this mechanism, the original evidence representations will be replaced by above interacted representations for integration.3) Attention Interaction Mechanism.We also employ attention-based modules to produce more precise interacted representations. Take Phrase Evidence as an example, the calculation process is shown as:Att(e_phr, e_*)=softmax(e_phr· e_*^T/√(d_k))e_*, E_phr=⊕{Att(e_phr, e_*)}W_phr,where * ∈{fra, sen, pas}, d_k denotes the dimension of Key vector, ⊕{} denotes the vector connection operation and W_phr is a learnable matrix.Results of various evidence integration strategies and interaction mechanisms of Mugen are shown in Table <ref>, which illustrate that:1) Among the above evidence integration strategies, the direct embedding integration strategy in the original Mugen is better than the voting strategy, regardless of loss function types. Answer prediction relying on only single-grained evidence is inaccurate[For example, ALBERT_base with only Fragment Evidence gets an average score of 58.98 on DREAM.], and vote strategy may be heavily hindered by low-accurate models.2) Among the above evidence interaction mechanisms, to our surprise, BiGRU Interaction Mechanism performs worse than the original Mugen, which has no interaction mechanism. It indicates that, improper evidence interaction mechanisms may bring negative impacts on the model. On the contrary, despite Attention Interaction Mechanism brings marginal improvement, the increase of parameters causes disproportionate computational cost like <cit.>.According to the above empirical studies, we conclude that: compared to the original Mugen, the further gains by the proposed integration strategies and interaction mechanisms are marginal. Thus, we retain the original design of Mugen, due to its lite scale and adequate improvement. §.§ Studies on the Evidence ExtractorAs we state in Section III.C, benefit from the pre-trained encoders, Evidence Extractor ensures the quality of the segmentation and extraction of Fragment and Phrase Evidence. In this section, we attempt to explore the sensitivity of Mugen to the Evidence Extractor, where the extractor lacks sufficient pre-training or fine-tuning, and the extracted evidence is at a relatively low quality. 
In detail, we design three comparative baselines to study the sensitivity to Sentence Evidence Extractor, one baseline to study the Phrase Evidence Extractor, and three other baselines to explore the accuracy of the design of the proposed multi-grained evidence:1) Weakened Mugen.In this baseline, we tune the contextual encoder of Sentence Evidence Extractor on SQuAD 2.0 with only one training epoch to make the fine-tuning process inadequate[As a result, the performance of Exact Match (EM) on SQuAD 2.0 drops from 79.21 (2 training epochs, for the original contextual encoder) to 73.17 (1 training epoch).], keeping other processes and settings unchanged, to make the fine-tuning process inadequate.2) Attention Mugen.In this baseline, we remove the fine-tuning process of the encoder in Sentence Evidence Extractor, by using attention calculation for evidence extraction. Figure <ref> shows the architecture of its Sentence Evidence Extractor.In general, the above Sentence Evidence Extractor is similar to the Phrase Evidence Extractor in the original Mugen, and the Encoder is the same as the one in Phrase Evidence Extractor. In detail, the Sentence Filter in Attention Mugen computes sentence correlation scores for the embeddings of all input sentences and the given question; while the Fragment Filter computes fragment correlation scores for the embeddings of all input fragments and spliced options, and retains the fragment with the highest score[In this module, we set the maximum element number to 1 for Sentence Evidence and Fragment Evidence, which is the same in most cases of the original Mugen.]. To divide the extracted sentences into fragments, we analyze extracted fragments in the original Mugen, finding a large proportion of them are clauses divided by pause punctuation (like “,”, “-” in the example in Figure <ref>). Thus, the Divider in this module divides the sentence into fragments based on all pause punctuation.3) TF-IDF Mugen.In this baseline, we remove the encoder in Sentence Evidence Extractor directly, where a heuristic TF-IDF method is employed to extract Fragment Evidence with similarity calculation of the context and question-option pairs. The process of TF-IDF Mugen is similar to Attention Mugen, except that TF-IDF Mugen calculates similarity scores by the TF-IDF method <cit.> instead of dot production of vectors.4) TF-IDF Phrase Mugen.Referring to TF-IDF Mugen, this baseline removes the encoder in Phrase Evidence Extractor and provides low-quality Phrase Evidence with the TF-IDF similarity calculation method, but its Sentence Evidence Extractor is the same as the original one.5) Sliding Window Mugen.We make this design to explore the impact of the accuracy of multi-grained evidence on model performance. In this design, we employ sliding windows to extract multi-grained evidence with fixed lengths. The computational method for length-fixed evidence is similar to the Phrase Evidence Extractor in the original Mugen, where a length-fixed sliding window traverses the entire valid contextual text, and calculates the correlation score of each extracted text segment and the question-option pair in turn, according to the formula we state in Section III.C. Ultimately, the text segment with the highest correlation score is defined as the extracted evidence[When the length of the traversable text is less than the sliding window, the entire traversable text is defined as the evidence.]. 
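As an illustration of the heuristic extractor in TF-IDF Mugen, the sketch below scores clause-level candidate spans against a concatenated question-option pair by TF-IDF cosine similarity and keeps the top-scoring one; the tokenization, the candidate segmentation, and the example strings are our own simplifications rather than the exact pipeline.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def extract_by_tfidf(candidates, question, option):
    """Return the candidate span most similar (TF-IDF cosine) to the question-option pair."""
    query = question + " " + option
    vectorizer = TfidfVectorizer().fit(candidates + [query])
    scores = cosine_similarity(vectorizer.transform([query]),
                               vectorizer.transform(candidates)).ravel()
    return candidates[int(scores.argmax())]

# Hypothetical clause-level candidates from one passage sentence.
candidates = [
    "the program airs weekly on public television",
    "each focusing on specific topics teaching Hoosiers how to garden",
    "it was first broadcast in the early nineties",
]
print(extract_by_tfidf(candidates, "What does the program teach?", "how to garden"))

A length-fixed sliding-window variant only changes how the candidate spans are generated; the window lengths used for that comparison are listed next.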
For the lengths of sliding windows, we design two different combinations:* Tri-Gram-Bi-Gram-Word: the lengths of the three sliding windows are respectively 3, 2 and 1 word(s);* Average Length: lengths of the three sliding windows are the respective average lengths of the multi-grained evidence in the original Mugen. For the DREAM dataset, the lengths of sliding windows are 11, 6 and 4 respectively. 6) Damaged Mugen.To study the accuracy of multi-grained evidence, in this baseline, the original extracted evidence at each granularity is randomly damaged by adding or deleting several words (1 for Phrase Evidence and 2 for others) on its front and back textual boundaries. We additionally set the evidence at each granularity to have at least one word to avoid excessive damage, and the operations beyond the valid contextual text are filtered.We evaluate these baselines on DREAM based on ALBERT_base, as Table <ref> shows. The results indicate three main conclusions:1) The performances of the experimental baselines with low-quality evidence are still significantly better than Ensemble Baseline (stated in Section V.A), which further proves the performance gains of Mugen are mainly from the integration of multi-grained evidence;2) The proposed Mugen has satisfactory robustness to the quality of evidence (or the design of Evidence Extractor), and the performance of Mugen increases with the more powerful encoding ability of its Evidence Extractor and more accurate multi-grained evidence.3) Length-fixed textual evidence without grammatical structure and semantic integrity brings significant damage to the model performance, which proves the gains of Mugen are mainly from the accurate and linguistic multi-grained evidence design. §.§ Transferability StudiesTo further verify the generalizability and robustness of Mugen, we implement a series of transfer experiments. We train Mugen and its baseline on RACE and evaluate them on DREAM, Cosmos QA and MCTest respectively, obtaining transfer results on the development set as Table <ref> shows.In Table <ref>, transferred Mugen obtains more consistent performance improvements than its baseline on various out-of-domain datasets, proving the generalizability and robustness of Mugen. Furthermore, transferred Mugen even performs better than the original in-domain trained Mugen on DREAM and MCTest, indicating models may benefit from larger out-of-domain training datasets like RACE. On the contrary, transfer results on Cosmos QA drop significantly, due to its disparate data collection sources and question-type proportion. §.§ Error Case AnalysisTo explore potential further improvement, on DREAM, RACE and Cosmos QA three datasets, we randomly extract 50 examples respectively in 1) the original dataset; 2) error cases predicted by ALBERT_xxlarge; and 3) error cases predicted by Mugen on ALBERT_xxlarge[We do not perform the above operations on MCTest due to its minor dataset scale.]. We divide them into different types according to “the most decisive information for correct prediction” and draw the analysis donut chart as Figure <ref>.The above chart depicts that Mugen has an excellent ability to solve questions requiring evidence integration. Take RACE as an example, compared to its baseline model, Mugen benefits from the middle- and fine-grained evidence, and the proportion of the error cases requiring continuous phrase evidence receives an additional 6% reduction. 
In the same way, the integration of multi-grained evidence helps to solve an additional 12% of the cases requiring discontinuous dispersed evidence, like the example in Figure <ref>, as well as the following conversation in Table <ref>.In detail, the underline context is the extracted Fragment Evidence, the sentence containing it is the Sentence Evidence, and the Phrase Evidence is marked in bold. As a typical instance of “discontinuous dispersed evidence for answer prediction” in Figure <ref>, the integration of multi-grained evidence helps to infer out “the item not mentioned”, while relying on one single-grained evidence may not predict the golden answer ultimately (coarse-grained evidence may overemphasize interference information while fine-grained evidence may lack contextual explanation).This chart also reveals the challenging cases for Mugen. Take RACE as an example, compared to its baseline, the proportion of error cases caused by the lack of external commonsense increases by 6%, indicating Mugen can benefit from explicit commonsense injecting. Another challenge is the questions requiring logical inference or calculation, where the typical types are attribute sorting, location description and numerical calculation, instead of the inference based on evidence chains.Furthermore, the statistics of error cases on DREAM and Cosmos QA has a similar tendency to RACE, emphasizing the effectiveness of Mugen to questions with continuous or discontinuous evidence. Surprisingly, in Cosmos QA, the proportion of questions requiring external commonsense does not increase as significantly as RACE and DREAM, indicating the commonsense in Cosmos QA may be inferred by multi-grained evidence integration within the passages.In addition, we also make statistics on the error-type distribution of Mugen in base and xxlarge two parameter magnitudes. As Figure <ref> and Table <ref> show, compared to Mugen on ALBERT_xxlarge, Mugen on ALBERT_base reduces the evidence-requiring error cases with a larger proportion and obtains more significant performance improvement. One main reason is, due to the limited parameter magnitude, the baseline model ALBERT_base does not have strong abilities of text encoding and information integration like ALBERT_xxlarge, and can gain more benefits from the multi-grained evidence integration in Mugen.§ CONCLUSIONIn this work, we propose a general-purpose model enhancement design that integrates multi-grained evidence comprehensively, called Multi-grained evidence inferencer (Mugen), to make up for the inability to deliver evidence in different granularities in existing studies. With integration and inference, Mugen achieves substantial improvement on four multi-choice MRC benchmarks: RACE, DREAM, Cosmos QA and MCTest with all passing significance tests, which indicates the superiority of multi-grained evidence integration and points out a promising research direction. IEEEtran | http://arxiv.org/abs/2310.18070v1 | {
"authors": [
"Yilin Zhao",
"Hai Zhao",
"Sufeng Duan"
],
"categories": [
"cs.CL"
],
"primary_category": "cs.CL",
"published": "20231027113618",
"title": "Multi-grained Evidence Inference for Multi-choice Reading Comprehension"
} |
^Princeton Center for Theoretical Science, Princeton University, Princeton, NJ 08544, USA ^Jefferson Physical Laboratory, Harvard University, Cambridge, MA 02138 USA [email protected], [email protected], [email protected]
We construct the symplectic form on the covariant phase space of the open string field theory on a ZZ-brane in c=1 string theory, and determine the energy of the rolling tachyon solution, confirming Sen's earlier proposal based on boundary conformal field theory and closed string considerations.
§ INTRODUCTION
The decay process of an unstable D-brane in string theory, known as the rolling tachyon <cit.>, is one of the simplest time-dependent backgrounds of string theory and serves as a basic example of open/closed duality. It admits an open string description, either as a time-dependent solution to the open string field theory (OSFT) on the D-brane, or as a boundary conformal field theory (BCFT) on the string worldsheet. Alternatively, it is also expected to admit a dual closed string description that amounts to the closed string radiation resulting from the D-brane decay. In the literature, the rolling tachyon has been primarily investigated from the BCFT perspective <cit.>. In bosonic string theory, starting from an unstable D-brane, the rolling tachyon BCFT is obtained through the boundary deformation
ΔS = λ∫_∂Σ cosh(X^0-u),
where X^0 is a timelike free boson, and cosh(X^0-u) is a marginal boundary vertex operator defined by the usual boundary normal ordering. Despite its apparent simplicity, the BCFT formulation of the rolling tachyon is subject to a number of subtleties that are ubiquitous in worldsheet string theory in time-dependent backgrounds. First of all, the BCFT is a priori constructed with a Euclidean target space, namely starting from a noncompact free boson X parameterizing the imaginary time coordinate, and then deforming by the marginal boundary coupling λcos X. A suitable analytic continuation X^0-u=-i X is then performed at the level of CFT correlators, which in particular involves a contour choice in the target space of X^0. A first-principle formulation of physical observables, as well as the interpretation of string amplitudes in this setting, remains to be clarified.
In this paper, we revisit the rolling tachyon in the OSFT formalism, which is technically more complex but offers a number of conceptual clarifications. To begin with, the rolling tachyon is constructed as a (time-dependent) solution Ψ to the open string field equations. Here the open string field Ψ takes values in the boundary Hilbert space of the undeformed BCFT of the original D-brane. There are several known strategies for constructing such solutions <cit.>. For our purpose, it will suffice to construct Ψ as a perturbative expansion
Ψ= ∑_n=1^∞λ^n Ψ_n,
where the expansion parameter λ is related to, but not equal to, the boundary coupling in (<ref>). Both the BCFT and the OSFT solution of the rolling tachyon will be reviewed in section <ref>.
We will focus on the OSFT of a single ZZ-brane in c=1 string theory <cit.>, which is given by Witten's cubic bosonic OSFT <cit.> where the matter BCFT consists of a free boson X^0 and the vacuum module of a c=25 boundary Virasoro algebra. In this setting, the closed string background is consistent at the quantum level, and one may anticipate the OSFT to make sense at the quantum level non-perturbatively, even though the consideration of this paper is restricted to the classical OSFT.
In the dual matrix quantum mechanics of c=1 string theory <cit.>, the ZZ-brane with rolling tachyon is expected to be described by a single eigenvalue/fermion bouncing off the potential and hovering above the fermi sea <cit.>. In this case, one expects the rolling tachyon solutions to cover (at least a domain of) the (covariant) phase space of the OSFT, which is 2-dimensional (parameterized by λ and u).A key step in constructing the phase space of the OSFT, as parameterized by its (gauge-inequivalent) solutions, is to determine the symplectic form on the phase space. We adopt a slightly modified version of Witten's proposal <cit.>Ω= 12g_o^2 ∫(ℙ[Q_B, Θ(X^0)]) (δΨ*δΨ), where Θ(X^0) is the Heaviside step function in X^0 evaluated at the midpoint of the open string, and ℙ stands for a projector of a local operator onto its weight (0,0) component (see Appendix <ref> for OSFT conventions). We will see in section <ref> that such a symplectic form is gauge invariant, is invariant under time translation, and moreover evades a potential singularity that would be encountered in the proposal of <cit.> (as was pointed out in <cit.>).We will then evaluate (<ref>) on the space of rolling tachyon open string field solutions. The result can be expressed in the formΩ=δuδE(λ), where E(λ) is the energy of the rolling tachyon. As the main result of this paper, in section <ref> we will give an analytic argument that E(λ) agrees with a proposal of Sen <cit.> based on the BCFT and the conserved charges for the open string fields induced by the rigid gauge transformations of closed string fields in open-closed string field theory (OCSFT). Our analytic result is further supported by a highly nontrivial level truncation computation at order λ^4 in section <ref>.More broadly, we anticipate the construction of the symplectic form and the Hamiltonian on the covariant phase space to serve as a step toward defining the quantum OSFT, both at the level of time-dependent perturbation theory (e.g. around the rolling tachyon background), and at the non-perturbative level. Some future perspectives are discussed in section <ref>. § ROLLING TACHYONIn this section, we review the construction of the rolling tachyon solution in classical OSFT and the corresponding BCFT description, following <cit.>. Witten's cubic form of the bosonic OSFT is defined through the action <cit.> S[Ψ] = -1g_o^2∫(12 Ψ*Q_BΨ+13Ψ*Ψ*Ψ), where the (classical) open string field Ψ is a ghost number 1 state of the worldsheet BCFT, Q_B is the BRST operator. The star product * and the convolution ∫ on the open string fields are defined in Appendix <ref>. Variation of (<ref>) with respect to Ψ gives the classical equation of motionQ_BΨ+Ψ*Ψ= 0. Specifically, we will consider the OSFT on a single type (1,1) ZZ-brane in c=1 string theory <cit.>. The worldsheet (bulk) CFT consists of a timelike free boson X^0, the c=25 Liouville CFT, and the c=-26 bc-ghost system. The ZZ-brane is described by a BCFT on the worldsheet that is Neumann with respect to X^0, and the type (1,1) ZZ boundary condition in the Liouville sector <cit.>. The latter may be characterized by its boundary state, or equivalently the disc 1-point functions (following the convention of <cit.>) ⟨V_P ⟩^D^2_ZZ=2^54√(π)sinh(2πP), where V_P is a bulk primary of the Liouville CFT that carries Liouville momentum P, and corresponding holomorphic and anti-holomorphic weights h=h = 1+P^2. 
The boundary operator spectrum of the ZZ-boundary Liouville theory is extremely simple: namely, it consists of only the identity operator and its Virasoro descendants. This identity primary of the boundary Liouville theory gives rise to the open string tachyon on the ZZ-brane, whose corresponding vertex operator takes the form c e^iω X^0 (⊗ 1). The rolling tachyon on the ZZ-brane is represented by a time-dependent solution Ψ to the OSFT equation (<ref>), which may be constructed perturbatively in the form (<ref>) where the Ψ_n's are solved successively at each order in λ,λ^1: Q_BΨ_1=0, λ^2: Q_BΨ_2+Ψ_1*Ψ_1=0, λ^3: Q_BΨ_3+Ψ_1*Ψ_2+Ψ_2*Ψ_1=0, ... At the linearized order, Ψ_1 takes the form of an on-shell tachyon vertex operator, which we will take to be[Here and henceforth the boundary normal ordering is understood in all expressions for boundary vertex operators in the X^0 CFT. We have also adopted the convention '=1.]Ψ_1 =ccosh(X^0-u). Such a solution represents a tachyon field that starts rolling at time x^0=u with magnitude ∼λ.The higher order terms of the open string field are solved in Siegel gauge viaΨ_n =-b_0L_0∑_k=1^n-1Ψ_k*Ψ_n-k, n≥2. In this procedure, a priori one may encounter obstructions when the star product Ψ_k*Ψ_n-k contains weight zero states, on which 1 L_0 is ill-defined. However, the exact marginality of cosh(X^0-u) (viewed as a BCFT deforrmation) guarantees that such obstructions are absent to all orders in λ. Indeed, the perturbative solution (<ref>), (<ref>) is valid to all orders in λ, even though the radius of convergence with respect to λ is believed to be finite (see <cit.>, chapter 6).The rolling tachyon may alternatively be described as an exactly marginal deformation of the worldsheet BCFT of the form (<ref>). As usual in conformal perturbation theory, the precise definition of (<ref>) as a deformation depends on a choice of regularization scheme. We will adopt the scheme in which a correlator on the upper half plane (UHP) subject to the deformed boundary condition is computed as ⟨⋯⟩_deformed^UHP = ∑_n=0^∞λ̃^nn! ⟨⋯∏_k=1^n ∫_W_k dt_k2πcosh(X^0(t_k)-u)⟩^doubling_undeformed, where ⟨ ... ⟩^ doubling stands for the correlator computed using the doubling trick for X^0 CFT (i.e. replacing the free boson on the UHP with a chiral boson on the entire complex plane), and the integration contours W_k of the boundary operators are taken to be ℝ+i_k in the complex plane obtained by the doubling trick, for distinct small real parameters ϵ_1,⋯, ϵ_n.The exact boundary state in the X^0 CFT is known <cit.>, and takes the form[This boundary state was determined in <cit.> for the λ̃cosX deformation of the Neumann boundary condition of a spacelike free boson X, and then analytically continued to that of the timelike free boson X^0 in <cit.>.]|B⟩_X^0=[f(X^0)+(cos(2πλ̃)+1-f(X^0))_-1_-1+ ⋯] |0⟩, where f(X^0)=11+e^X^0sin(πλ̃)+11+e^-X^0sin(πλ̃)-1 and ⋯ stands for terms at higher oscillator levels. Note that λ̃ is not the same as the parameter λ appearing in (<ref>), even though they agree to first order; their precise relation λ̃=λ̃(λ) will be discussed in section ??. In the full BCFT of the string worldsheet, the ZZ boundary condition of the c=25 Liouville theory as well as that of the bc ghosts are undeformed; the full resulting boundary state will be denoted by | B_λ̃, u⟩. The background independence of OSFT <cit.> is such that every exactly marginal deformation of the BCFT can be equivalently represented as a solution to the original OSFT equation defined in the undeformed BCFT. 
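As a quick numerical illustration of the boundary state profile f(X^0) quoted above, the short script below evaluates it directly from the formula; it checks that f decays at large |X^0| and that it vanishes identically at λ̃ = 1/2, where sin(πλ̃) = 1 and the two terms sum to one. The sample parameter values are for illustration only.

import numpy as np

def f_profile(x0, lam_tilde):
    """f(X^0) entering the rolling tachyon boundary state, as written above."""
    s = np.sin(np.pi * lam_tilde)
    return 1.0 / (1.0 + np.exp(x0) * s) + 1.0 / (1.0 + np.exp(-x0) * s) - 1.0

x0 = np.linspace(-20.0, 20.0, 9)
print(np.round(f_profile(x0, 0.25), 6))   # decays to zero as |X^0| grows
print(np.abs(f_profile(x0, 0.5)).max())   # identically zero (up to rounding) at lam_tilde = 1/2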
In the original OSFT on the ZZ-brane, an exact solution Ψ(λ, u) to the equation of motion (<ref>) that is physically equivalent to the deformed BCFT has been constructed in <cit.>. Furthermore, the on-shell components of the boundary state | B_λ̃, u⟩ can be directly related to the string field solution through the so-called Ellwood invariant W <cit.>, viai4π(⟨B_λ̃, u|(c_0-c̃_0)|V⟩-⟨B_λ̃=0|(c_0-c̃_0)|V⟩) ≡W(Ψ(λ̃, u),V) = ⟨|V(i)|Ψ(λ̃, u)⟩. Here V is an arbitrary bulk vertex operator which is Q_B-closed, and is the “identity string field" defined as the identity element with respect to the *-algebra <cit.>.[Precisely speaking, <cit.> showed that (<ref>) holds true assuming that V takes the form cc̃V_m, where V_m is a matter primary of weight (1,1). Nonetheless, (<ref>) is expected to be true for any Q_B-closed V, based on the background independence of string field theory <cit.>. When V is not a conformal primary of weight (0,0), the RHS of(<ref>) may be singular in the cubic OSFT, but a more general open-closed string vertices will provide a suitably regularized result.] The matrix element on the RHS of (<ref>) is understood in radial quantization on the UHP, with the string field Ψ(λ, u) inserted at the origin, the identity string fieldinserted at infinity, and the vertex operator V inserted at i. An equivalent expression for the Ellwood invariant isW(Ψ(λ̃, u),V)=⟨V(i)f∘Ψ(λ̃, u)(w=0) ⟩^UHP, where f∘Ψ(λ̃, u) is the conformal transformation of Ψ(λ̃, u) under the map z=f(w)≡2w1-w^2 which takes the half unit disc to the UHP.As was explained in <cit.>, the boundary state | B_λ̃,u⟩ captures the spacetime conserved charges associated with the solution Ψ(λ̃,u), as can be seen from the rigid gauge transformations of the closed string fields which induce symmetry transformations of the open string fields in open+closed SFT <cit.>. For instance, consider a gauge variation of the closed string field Φ of the form δΦ=Q_BΛ(ω), with Λ(ω)=(c∂ X^0-c̃∂̅X^0)e^iω X^0. The “rigid” part Λ(0) is Q_B-closed and represents an isometry of the closed string background Φ=0, which in this case amounts to time translation symmetry. It further induces time translation on the open string fields, through the closed string tadpole terms in open+closed SFT action which are linear in the boundary state | B_λ̃,u⟩. The corresponding conserved charge E, which has the interpretation of energy, is then proportional to∫dp2πe^px^0⟨B_λ̃,u|(c_0-c̃_0)|ϕ(p)⟩, where x^0 is the zero mode of X^0, and ϕ(p) is defined by Q_BΛ(p)=pϕ(p) and thus given by|ϕ(p)⟩∼[2c_1c̃_1α_-1_-1+c_-1c_1-c̃_-1c̃_1+⋯]|p⟩, where ⋯ are terms linear in p. The integration over p in (<ref>) actually localizes to p=0 and the final result isE(λ(λ̃))=14π^2g_o^2(1+cos(2πλ̃)). In particular, the energy E(λ(λ̃)) is determined by the coefficients of the zero momentum dilaton vertex operator V_D=cc̃∂X^0∂̅X^0, and that of the zero momentum ghost-dilaton vertex operator c∂^2c-c̃∂̅^2c̃, in the boundary state | B_λ̃,u⟩. The λ̃ dependence of E(λ(λ̃)) comes purely from V_D, whose Ellwood invariant is given by [For generic OSFT backgrounds whose worldsheet theory includes X^0-CFT, this particular Ellwood invaraint, when Ψ(λ̃,u) is replaced by any static solution Ψ_s of the OSFT EOM (<ref>) satisfying some regularity conditions, was shown to compute the energy of the solution Ψ_s in the sense that it agrees with the OSFT action (<ref>) evaluated on Ψ=Ψ_s <cit.>.]W(Ψ(λ̃,u),V_D)=V_X^04πicos(2πλ̃) where V_X^0 is the volume of X^0. 
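For orientation, the energy formula above is simple to tabulate; the lines below (with g_o set to 1, an arbitrary choice for illustration) check that it equals 1/(2π^2 g_o^2) at λ̃ = 0 and vanishes at λ̃ = 1/2.

import math

g_o = 1.0  # open string coupling, set to 1 for illustration only

def energy(lam_tilde):
    """E = (1 + cos(2 pi lam_tilde)) / (4 pi^2 g_o^2), as in the text."""
    return (1.0 + math.cos(2.0 * math.pi * lam_tilde)) / (4.0 * math.pi ** 2 * g_o ** 2)

print(energy(0.0), 1.0 / (2.0 * math.pi ** 2))  # the undeformed value
print(energy(0.5))                               # vanishes (up to rounding) at lam_tilde = 1/2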
In section <ref>, we will obtain the energy expression (<ref>) by showing how W(Ψ(λ̃,u),V_D) arises from the symplectic form of the classical OSFT. § A SYMPLECTIC FORM FOR OSFTA symplectic form on the phase space of the bosonic OSFT was proposed by Witten <cit.>. In this section, we review the construction of <cit.> and propose a slightly modified regularized version of this symplectic form. We begin by briefly reviewing the covariant phase space formalism.[See e.g. <cit.> for an introduction to the subject.] For our purpose, it suffices to restrict to the case where the spatial manifold has no boundary. Given a Lagrangian field theory with the action S=∫_M L, where M is the spacetime manifold and L is a function of the fields ϕ^a and their derivatives, the classical pre-phase space is the space of solutions to the equations of motions e_a=0 obtained by varying the LagrangianδL = e_a ϕ^a+dU. Note that the total derivative term dU is linear in the variation of ϕ^a. U is known as the pre-symplectic potential. The pre-symplectic form is defined asΩ= ∫_ΣδU,where Σ is a Cauchy slice and δ, which originally denoted the field variation, is now interpreted as the exterior derivative on the pre-phase space.[δ will henceforth be regarded as a Grassmann-odd operation. ] In particular, Ω is a closed 2-form on the pre-phase space. In a gauge theory, the phase space is obtained as the quotient of the pre-phase space of solutions by the group of gauge transformations, and is equipped with the symplectic form Ω induced from the pre-symplectic form Ω. The contraction of Ω with a Hamiltonian vector field produces the differential of the corresponding conserved charge. In the case of the rolling tachyon solution Ψ(λ̃, u), the Hamiltonian vector field u generates time translation, and the corresponding energy E is related by Ω( u,·)= E. §.§ Witten's construction of the symplectic form Following the general covariant phase space prescription outlined above, we consider the variation of the OSFT action (<ref>) S[Ψ] = -1g_o^2∫[ (Q_BΨ+Ψ*Ψ)*Ψ-12Q_B(Ψ*Ψ) ]. Indeed, as in (<ref>), the total derivative term 1 2g_o^2∫ Q_B(Ψ*Ψ) gives rise to the pre-symplectic potential U which will then be used to construct the pre-symplectic form Ω.Recall that Q_B is a second order derivative operator with respect to the (space-)time coordinate (as the BRST current takes the form j_B∼ c∂ X^0∂ X^0+⋯), while the pre-symplectic potential U appears in(<ref>) through its first order derivative. Thus in extracting U from the expression δ L∼ Q_B(Ψ*Ψ), one needs to “strip off” one derivative from Q_B.Witten's prescription <cit.> amounts to constructing U by replacing Q_B in 1 2g_o^2∫ Q_B(Ψ*Ψ) with the commutator of Q_B with the Heaviside step function Θ(Σ) with respect to the open string midpoint. Namely, Θ(Σ) is defined to be 1 if the open string midpoint is to the future of the Cauchy slice Σ and 0 otherwise. The resulting pre-symplectic form isΩ_W=(12g_o^2∫[Q_B,Θ(Σ)](Ψ*Ψ) )=12g_o^2∫[Q_B,Θ(Σ)](Ψ*Ψ), which may equivalently be written in the form of a correlator on the UHPΩ_W=12g_o^2⟨[Q_B,Θ(Σ)](i) f_1∘Ψ(w_1=0) f_2∘Ψ(w_2=0)⟩^UHP. Here f∘δΨ stands for the conformal transformation of the string field δΨ with respect to the map z=f(w), with the choice f_1(w)=1+w1-w, f_2(w)=-1-w1+w. In <cit.>, Ω_W was argued to be independent of the choice of Σ and gauge invariant. 
However, the definition of Ω_W via the star product implicitly involves a conformal transformation that acts singularly on the operator [Q_B,Θ(Σ)] which is inserted at the midpoint of the string. For generic choices of Σ, such as constant time slicing X^0=const, [Q_B,Θ(Σ)] would not be a primary of weight (0,0), whose singular conformal transformation results in Ω_W being ill-defined.[In a string background where there is at least one spatial coordinate Y and a lightlike isometry, the lightcone time slicing X^+=X^0+Y=const provides a primary of weight (0,0) at the midpoint since e^ik_+X^+ is a primary of weight (0,0) for any k_+. We thank Ted Erler for bringing this point and the relevant work <cit.> to our attention.] §.§ A regularized symplectic form for the OSFT A simple fix for the singularity in the definition of Ω_W is to replace it with Ω =12g_o^2∫(ℙ[Q_B,Θ(Σ)])(Ψ*Ψ)=12g_o^2⟨(ℙ[Q_B,Θ(Σ)])(i) f_1∘Ψ(w_1=0) f_2∘Ψ(w_2=0)⟩^UHP, where ℙ stands for the projector that takes a local operator to its weight (0,0) component.[More precisely, ℙ can be defined by orthogonal projection to the subspace of a definite weight in the matter CFT Hilbert space, when restricted to a given oscillator level with respect to the bc ghost system.] Note that we should be cautious in commuting ℙ with Q_B, as ℙ is not well-defined when acting directly on Θ(Σ), e.g. for a constant time slicing X^0=v, Θ(Σ_X^0=v)=12πi∫_-∞^∞ 1p-i e^ip (X^0-v)dp.Nonetheless, it makes sense to consider the ℙ-projection of [Q_B,Θ(Σ_X^0=v)],D≡ℙ[Q_B,Θ(Σ_X^0=v)] =ℙ12π∫_-∞^∞(c∂X^0+c̃∂̅X^0+ ⋯)e^ip (X^0-v)dp=1V_X^0∫_-∞^∞ δ(p)(c∂X^0+c̃∂̅X^0+⋯)e^ip (X^0-v)dp=1V_X^0(c∂X^0+c̃∂̅X^0), where the omitted terms in the first and second lines are linear in p and drop out after the projection, and V_X^0 stands for the time volume which will end up canceling against a similar volume factor from the CFT correlator.More generally, ℙ is well defined on an operator of the form f(X^0) provided that the function f has compact support.In particular, given two Cauchy slices Σ_1 and Σ_2, we can write ℙ[Q_B,Θ(Σ_1)-Θ(Σ_2)]=[Q_B,ℙ(Θ(Σ_1)-Θ(Σ_2))]. We now proceed to prove several properties of Ω (<ref>), including its gauge invariance. The latter allows for defining the symplectic form simply by restricting the pre-symplectic form to a set of gauge-fixed solutions to the OSFT equation of motion. With this understanding, we will henceforth not distinguish between the notations for the symplectic form and the pre-symplectic form. §.§.§ Independence of Σ The a priori definition of Ω (<ref>) necessitates a choice of Σ. Nevertheless, it proves to be independent of this choice, as follows. The difference between Ω defined with respect to two slices Σ_1 and Σ_2 can be written as ∫ℙ[Q_B,Θ(Σ_1)-Θ(Σ_2)](δΨ*δΨ)=∫[Q_B,ℙ(Θ(Σ_1)-Θ(Σ_2))](δΨ*δΨ)=∫Q_B((ℙ(Θ(Σ_1)-Θ(Σ_2)))(δΨ*δΨ))-∫ℙ(Θ(Σ_1)-Θ(Σ_2))Q_B(δΨ*δΨ). The first term on the second line of (<ref>) is the integral of a total derivative that vanishes as (ℙ(Θ(Σ_1)-Θ(Σ_2)))(δΨ*δΨ) has no support at the boundaries of spacetime.The second term on the second line of (<ref>) also vanishes, as follows from the equation of motion Q_BΨ+Ψ*Ψ=0 and the cyclicity of the convolution with respect to the star product, in the presence of a bulk insertion at the string midpoint:∫ℙ(Θ(Σ_1)-Θ(Σ_2))Q_B(δΨ*δΨ)= - 2∫ℙ(Θ(Σ_1)-Θ(Σ_2))(δ(Q_BΨ)*δΨ)= 2∫ℙ(Θ(Σ_1)-Θ(Σ_2))(δΨ*Ψ*δΨ- Ψ*δΨ*δΨ)=0. 
Note that in these manipulations we have used the Grassmann-oddness of the exterior differential δ as well as the string field Ψ itself.§.§.§ Invariance with respect to the gauge orbit The classical OSFT action is invariant under the gauge transformation Ψ↦Ψ+Q_B+*Ψ-Ψ*, where the gauge parameteris a string field of ghost number zero. When Ψ solves the equation of motion and thereby corresponds to a point in the pre-phase space,generates the gauge orbit of this solution, and we expect the pre-symplectic form Ω to be invariant along the gauge orbit. To see this, note that the differential δΨ on the pre-phase space transforms underby δΨ↦δΨ+*δΨ-δΨ*, and thusΩ↦Ω+2∫ℙ[Q_B,Θ(Σ)](*δΨ*δΨ-δΨ**δΨ)=Ω, due to the aforementioned cyclicity property.§.§.§ Decoupling of null tangent vectors At each point Ψ in the pre-phase space, we defineQ_Ψ≡Q_B+[Ψ,·}_* where [·,·}_* stands for the graded commutator with respect to the star product. It follows from the equation of motion that Q_Ψ is nilpotent. Furthermore, tangent vectors of the pre-phase space at Ψ are in correspondence with Q_Ψ-closed string fields, whereas Q_Ψ-exact string fields amount to gauge redundancies, or “null tangent vectors”.We expect the pre-symplectic form Ω to vanish upon contraction with a null tangent vector. In other words, if we replace one of δΨ's in Ω with Q_Ψ = Q_B+Ψ*-*Ψ, for a ghost number zero string field , the result should vanish. Indeed, ∫ℙ[Q_B,Θ(Σ)](δΨ* Q_Ψ)=∫ℙ[Q_B,Θ(Σ)](δ(Q_BΨ)*+δΨ*[Ψ,}_*)=∫ℙ[Q_B,Θ(Σ)]((-δΨ*Ψ+Ψ*δΨ)*+δΨ*[Ψ,}_*) =0, where we have used Q_B^2=0 in the first equality, Q_BΨ+Ψ*Ψ=0 in the second quality, and cyclicity in the third equality.In summary, we have shown that Ω as defined in (<ref>) is independent of the choice of the Cauchy slice Σ, is invariant along the gauge orbit of a string field solution, and vanishes upon contraction with a null tangent vector on the pre-phase space. It follows that Ω induces a well-defined symplectic form on the phase space, parameterized by gauge-equivalence classes of solutions to the OSFT equation of motion. § PERTURBATIVE EVALUATION OF THE SYMPLECTIC FORMIn this section, we present an explicit perturbative evaluation of the symplectic form Ω (<ref>), (<ref>) on the phase space of the rolling tachyon on the ZZ-brane, based on level truncation approximation.Recall from section <ref> that the rolling tachyon solutions of the OSFT on the ZZ-brane are parameterized by (u, λ) where λ is the natural expansion parameter of the perturbative string field solution, or (u, λ̃) where λ̃ is the natural deformation parameter in the BCFT description. The corresponding two-dimensional phase space is equipped with a symplectic form Ω of the form (<ref>), where E(λ) is the energy of the solution up to a constant shift. Note that Ω is invariant under shift of u.When evaluated in terms of the perturbative rolling tachyon solution (<ref>), Ω takes the form of a power series expansion in λ, Ω=∑_n=0^∞Ω^(n) ≡∑_n=0^∞n E_n λ^n-1 δuδλ, with the energy expanded as E(λ) = ∑_n=0^∞ E_n λ^n. In particular, the solution at λ=0 corresponds to the unperturbed ZZ-brane, whose energy is E_0=12π^2g_o^2. §.§ Leading order in λ We begin with the leading nontrivial order in λ, namely Ω^(2) or equivalently the energy coefficient E_2. The relevant string field solution, to first order in λ, is given by Ψ=λ c cosh(X^0-u)+ O(λ^2). 
Substituting its variation δΨ=δλ c cosh(X^0-u)-δ u λ c sinh(X^0-u)+ O(λ^2) into(<ref>) leads toΩ^(2) =λuλ2g_o^2⟨ D(i) (c sinh(X^0-u)(z=1) c cosh(X^0-u)(z=-1)- c cosh(X^0-u)(z=1) c sinh(X^0-u)(z=-1))⟩^UHP, where D is defined as in (<ref>). Using the following correlators of the ghost and matter CFTs on the UHP,[We have adopted an overall normalization convention that is compatible with the unitarity of open string amplitudes.] ⟨c(z_1)c(z_2)c(z_3)⟩= |(z_1-z_2)(z_1-z_3)(z_2-z_3)|,⟨∏_n e^k_n X^0(y_n)⟩=2π(∑_n k_n)∏_n>m|y_n-y_m|^2k_nk_m,we obtain Ω^(2) =-λuλg_o^2=δuδ(-λ^22g_o^2). It follows that the energy of the perturbative solution to order λ^2 is E(λ)=12π^2g_o^2(1-π^2λ^2+O(λ^4)). With the identification λ=λ̃+ O(λ̃^3), this result is in agreement with Sen's expression (<ref>).§.§ Subleading order in λThe next order term in the λ-expansion of the symplectic form is Ω^(4),[Note that Ω^(k)=0 for all odd k, due to time reversal invariance of the perturbative solution.] which determines the energy coefficient E_4. In this subsection we will aim to evaluate the quantity r ≡E_4E_2numerically with the level truncation method, which has been successfully applied in various aspects of SFT in the past <cit.>. Namely, we will truncate the space of open string fields to a finite dimensional subspace graded by the worldsheet oscillator level, evaluate the symplectic form in this approximation, and numerically extrapolate the result to the limit where the truncation level is taken to infinity.We begin by specifying a basis of the BCFT Hilbert space as follows. The open string field Ψ lies in the ghost number 1 subspace of boundary operators in the BCFT Hilbert space. For the (1,1) ZZ-brane in question, the BCFT Hilbert space is the tensor product of the c=25 identity Virasoro module in the Liouville sector, the Hilbert space of Neumann boundary operators in the X^0 free boson sector, and that of the bc ghost system subject to the standard (Neumann type) boundary condition. We will work with the basis statesL^(25)_-K |1⟩_Liouville ⊗α_-M |f(X^0)⟩_X^0 ⊗b_-N c_-P |↓⟩_bc, where K={k_1, ⋯, k_n_K} is a set of non-negative integers with k_1 ≥ ... ≥ k_n_K, and L^(25)_-K stands for the product of a sequence of operators L^(25)_-k_1⋯ L^(25)_-k_n_K (similarly for α_-M,b_-N, and c_-P). L^(25)_-k are the c=25 Virasoro raising operators in the Liouville sector (note that L^(25)_-1 is not needed as it annihilates the identity boundary primary), and _-m, b_-n, c_-p are the usual oscillators in the free boson and the ghost BCFT. The state |f(X^0)⟩_X^0 corresponds to the boundary normal ordered operator f(X^0). |↓⟩_bc = c_1 |1⟩_bc is the bc ghost ground state annihilated by b_m for m≥0, and we will impose Siegel gauge condition by omitting c_0 from the oscillators appearing in (<ref>). The level of such a basis state is defined as|K|+|M|+|N|+|P|,where |K| ≡ k_1 + ⋯ + k_n_K, and similarly for |M|, |N|, |P|.The star product of a pair of string fields of the form (<ref>) is conveniently evaluated using a set of Ward identities, known as the “cubic vertex conservation laws” in the cubic OSFT <cit.>, which replace a raising operator acting on one of the three string fields in a cubic vertex with a sum of lowering operators acting on all three string fields. Iterating the latter procedure, we can reduce any cubic vertex of string fields of the form (<ref>) to that of (boundary) conformal primaries. 
Given three primaries ϕ_i with conformal weights h_i, i=1,2,3, their cubic vertex evaluates to simply{ ϕ_1, ϕ_2, ϕ_3 } = C_123 K^-(h_1+h_2+h_3),where C_123 is the boundary structure constant, and the power of K = 3√(3)/4 appears due to the conformal mapping involved in the definition of Witten's cubic vertex. The level truncation approximation to (<ref>) proceeds by restricting the string fields to the subspace spanned by the basis states (<ref>) up to a cutoff level L, as well as truncating the star products of such string fields up to the same level, with the expectation that observables computed with the level-truncated string fields converge to their exact values in the L→∞ limit. For instance, consider the first step of (<ref>), Ψ_2 = -b_0/L_0 Ψ_2, Ψ_2≡Ψ_1 * Ψ_1. Let |v, f⟩ be a Siegel gauge (0 ∉P) ghost-number 1 basis state of the form (<ref>), labeled by v≡ (K,M,N,P) and the function f(X^0). Its level is denoted |v| ≡ |K|+|M|+|N|+|P|. We define a set of dual basis states ⟨ v^c, f| by the property v^c, f|c_0|v',f' = δ_v,v' (f,f'), where the pairing (f,f') is given by integration with respect to X^0, and |v^c,f⟩ the BPZ conjugate of ⟨v^c,f|. Ψ_2 is a ghost-number 2 string field of the form |Ψ_2⟩ = ∑_v,f,|v| ≤L A_2,v,f c_0 |v,f⟩ + ⋯, where ⋯ include higher level states, which are omitted in the level truncation approximation, as well as ghost-number 2 states that are annihilated by b_0 and therefore do not contribute to Ψ_2 (<ref>). The coefficients A_2,v,f can be calculated using the cubic vertex conservation laws via∑_f' A_2,v,f' (f,f') = v^c,f | Ψ_2 = {|v^c,f⟩, Ψ_1, Ψ_1 }.For Ψ_1 = c cosh(X^0-u), it follows from the free boson OPE that the only non-vanishing coefficients A_2,v,f appear with the function f of the form f(X^0) ∈{ 1, cosh(2(X^0 - u)), sinh(2(X^0 - u))}. Note that Ψ_2 is also parity-even, and so A_2,v,f is nonzero for even n_M (the total number of α oscillators) in the cases f=1, cosh(2(X^0 - u)), and for odd n_M in the case f= sinh(2(X^0 - u)).To calculate Ω^(4), we also need Ψ_3 = -b_0/L_0 Ψ_3, Ψ_3 ≡Ψ_1*Ψ_2 + Ψ_2 * Ψ_1 ,where Ψ_3 takes the form |Ψ_3⟩ =∑_v,f, |v| ≤L A_3,v,f c_0 |v,f⟩ + ⋯.The non-vanishing A_3,v,f appears with f(X^0) ∈{cosh(X^0 - u), sinh(X^0 - u), cosh(3(X^0 - u)), sinh(3(X^0 - u))}. Note that importantly, as mentioned in section <ref>, the exact marginality of the corresponding BCFT deformation implies that Ψ̃_3 cannot contain weight zero states, as the latter would otherwise obstruct the perturbative solution to the string field equation. In particular, a term proportional to c_0 |Ψ_1⟩=|1⟩_Liouville⊗|cosh(X^0-u)⟩_X^0⊗ c_0 |↓⟩_bc cannot appear in (<ref>).[At finite truncation level L however, a term proportional to c_0 |Ψ_1⟩ may appear in (<ref>) with a small coefficient, but will nonetheless be omitted “by hand” in our truncation scheme.]Having calculated Ψ_2 and Ψ_3, we can express Ω^(4) via (<ref>),(<ref>) asΩ^(4) = λ^3 δu δλ/2g_o^2 ⟨D(i) [ f_1 ∘Ψ_1 f_2 ∘∂_u Ψ_3 - 3 f_1 ∘∂_u Ψ_1 f_2 ∘Ψ_3+ 3 f_1 ∘Ψ_3 f_2 ∘∂_u Ψ_1 -f_1 ∘∂_u Ψ_3 f_2 ∘Ψ_1 + 2 f_1 ∘Ψ_2 f_2 ∘∂_u Ψ_2 - 2 f_1 ∘∂_u Ψ_2 f_2 ∘Ψ_2] ⟩^UHP . As Ψ_2 and Ψ_3 are not primaries, their conformal transformations with respect to the maps f_1, f_2 are not simple. It is convenient to evaluate these correlators using Ward identities similar to those of the cubic vertex conservation laws, which are described in detail in appendix <ref>. 
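For readers unfamiliar with the bookkeeping, the following sketch counts the Siegel-gauge basis states of the oscillator basis introduced above at a fixed total level, for one fixed matter function f(X^0). The constraints implemented (parts k_i ≥ 2 for the c=25 Virasoro modes since L^(25)_{-1} is omitted, m_i ≥ 1 for the α modes, distinct b and c mode numbers with no c_0, and equally many b and c modes so that the total ghost number stays one) are our reading of the basis described above; the script is a counting aid only, not part of the level-truncation pipeline.

def count_partitions(n, min_part):
    """Number of partitions of n into (possibly repeated) parts >= min_part."""
    if n == 0:
        return 1
    return sum(count_partitions(n - k, k) for k in range(min_part, n + 1))

def distinct_sets(n, start=1):
    """Sets of distinct integers >= start summing to n, as sorted tuples."""
    if n == 0:
        yield ()
        return
    for k in range(start, n + 1):
        for rest in distinct_sets(n - k, k + 1):
            yield (k,) + rest

def count_ghost_states(level):
    """Pairs (N, P) of distinct b- and c-mode levels (all >= 1, so no c_0),
    with len(N) == len(P) to keep ghost number one, and |N| + |P| = level."""
    total = 0
    for lb in range(level + 1):
        for bs in distinct_sets(lb):
            for cs in distinct_sets(level - lb):
                if len(bs) == len(cs):
                    total += 1
    return total

def count_siegel_states(level):
    """Siegel-gauge ghost-number-one basis states at the given oscillator level."""
    total = 0
    for lK in range(level + 1):            # c=25 Virasoro raising modes, parts >= 2
        for lM in range(level - lK + 1):   # alpha_{-m} modes, parts >= 1
            lg = level - lK - lM           # remaining level for the bc sector
            total += (count_partitions(lK, 2)
                      * count_partitions(lM, 1)
                      * count_ghost_states(lg))
    return total

print([count_siegel_states(L) for L in range(6)])  # e.g. 1, 1, 4 states at levels 0, 1, 2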
In the end, a correlator of the form D(i) f_1 ∘ A(w_1=0) f_2 ∘ B(w_2=0)^UHP can be reduced to a sum of correlators of the same form but with A and B primaries, which are then easily evaluated (cf. <cit.>).In applying the conservation laws (<ref> - <ref>) to (<ref>), we encounter several simplifications. Firstly, using (<ref>), (<ref>), we can reduce every correlator involved to those with just three c-ghost modes: (i) one each at z= i,1,-1, (ii) two modes at z=1 and one at z=-1, (iii) one at z=1 and two modes at z=-1. The latter two cases evaluate to zero, because c_-n c_1 acting on the insertion at z=1 can be turned to c_n c_1 acting on the insertion at z=-1. Only the case (i) can contribute to the symplectic form. Secondly, L^(25) modes act trivially on D(i), and so the L^(25) modes at z=±1 produce the usual Gram matrix elements. Finally,Ψ_1 is itself a conformal primary, and so any creation mode moved onto Ψ_1 will annihilate it. Consequently, a correlator involving Ψ_1 can contribute only the Ψ_1 is paired with terms in Ψ_3 that come with a single α_-m mode, which is then moved to act on the insertion at z=i rather than on Ψ_1 at z= ± 1. In other words, the only states in |Ψ_3⟩ that contribute to the symplectic form are of the form |1⟩_Liouville⊗α_-m|sinh(X^0-u)⟩_X^0⊗|↓⟩_bc. After applying these relations, the level-truncated Ω^(4) reduces to a linear combination of the following correlators:C_0,cc(k)≡1/V_X^0 c(±i ) f_1 ∘c cosh(kX^0) f_2 ∘c cosh(kX^0) ^doubling= 1/2 ,C_0,ss(k) ≡1/V_X^0c(±i ) f_1 ∘c sinh(kX^0) f_2 ∘c sinh(kX^0) ^doubling = -1/2 , C_1,cs(k)≡1/V_X^0c ∂X^0_L(±i ) f_1 ∘c cosh(kX^0) f_2 ∘c sinh(kX^0) ^doubling =k/2 , C_1,sc(k) ≡1/V_X^0c ∂X^0_L(±i ) f_1 ∘c sinh(kX^0) f_2 ∘c cosh(kX^0) ^doubling= -k/2 ,where k=1,2. As a sample of the output of the level truncation computation, the lowest level results for the symplectic form are[The level truncation results are identical between L = 2n and L = 2n+1 with n=0,1,2,, and so we only present the even L cases.] Ω^(4)|_L=0≈λ^3 δuδλ/2g_o^2[0.02312(2C_1,sc(2)) - 0.02312(2C_1,cs(2)) ] ≈δuδ( (-0.09249)/8g_o^2 λ^4 ),Ω^(4)|_L=2≈λ^3 δuδλ/2g_o^2 [ 0.02986(2C_1,sc(2))- 0.02986(2C_1,cs(2)) + 0.01233 (2C_0,cc(2))- 0.01233 (2C_0,ss(2)) - 0.66680 (2C_0,cc(1))+ 2.0004 (2C_0,ss(1)) ] ≈δuδ( (0.63077)/8g_o^2 λ^4). We numerically evaluated Ω^(4) using level truncation up to L=20. The results, shown in Figure <ref>, are then extrapolated L →∞ by fitting a function of L of the form c + a/( L-d)^b++ a_1/( L-d)^b+1 to give an estimated valuer ≡E_4E_2 ≈-1.126 (from symplectic form.)§.§ Comparison to Sen's proposal In order to compare the result (<ref>) with Sen's expression for the energy of the rolling tachyon based on the BCFT (<ref>), E(λ(λ̃)) = 1/2π^2 g_o^2 (1 - π^2 λ̃^2 +π^4/3 λ̃^4 + ), we still need to know the precise relation between the BCFT parameter λ̃ and the OSFT parameter λ. This relation is determined through (<ref>), where the Ellwood invariant of the OSFT solution can be evaluated perturbatively in λ via (<ref>). Previously, for the analogous exactly marginal BCFT deformation of a compact free boson at the self-dual radius, the Ellwood invariant(<ref>) for the perturbative OSFT solution generated by |Ψ_1⟩=|1⟩_Liouville⊗|cos X⟩_X⊗ c_0 |↓⟩_bc was calculated using level truncation in <cit.> (section 6). Via Wick rotation, an essentially identical calculation applies to the rolling tachyon of present interest. 
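The L → ∞ extrapolation quoted above is a standard nonlinear fit; the sketch below applies scipy's curve_fit to synthetic data generated from the leading term c + a/(L-d)^b of the fitting ansatz. The data, parameter values, and bounds are illustrative assumptions; this is not the actual level-truncation output, only the extrapolation procedure.

import numpy as np
from scipy.optimize import curve_fit

def fit_form(L, c, a, b, d):
    """Leading term of the ansatz used for the L -> infinity extrapolation."""
    return c + a / (L - d) ** b

# Synthetic stand-in data at even truncation levels L = 4, 6, ..., 20.
levels = np.arange(4, 21, 2, dtype=float)
data = fit_form(levels, -1.126, 0.9, 1.3, 1.5)
data += 1e-4 * np.random.default_rng(0).standard_normal(levels.size)

popt, _ = curve_fit(fit_form, levels, data, p0=(-1.0, 1.0, 1.0, 0.0),
                    bounds=((-np.inf, -np.inf, 0.1, -10.0), (np.inf, np.inf, 5.0, 3.5)))
print("extrapolated value:", popt[0])  # close to the input constant c = -1.126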
A small technical issue is that <cit.> had adopted a slightly different level truncation scheme, where the cutoff is imposed on the total weight L_v^K = |K|+|M|+|N|+|P|+h_f = L_v + h_fof the state |v,f⟩ (defined below (<ref>)), rather than the oscillator level L_v. Here h_f is the weight of the boundary-normal-ordered operator f(X). This is sensible in the compact boson case, where the non-negative-weight operators e^ikX give rise to a complete basis. Such a basis is not suitable, a priori, in the case of the noncompact, timelike free boson of consideration here. Furthermore, a cutoff that involves the weight of f(X^0) appearing in the string field would be incompatible with locality in time. For these conceptual reasons, we have re-evaluated the Ellwood invariant appearing in (<ref>) by truncating with respect to the oscillator level L_v, and verified that at sufficiently high truncation levels the deviation from the numerical results of<cit.> is negligible.Writing 4πiV_X^0W(Ψ(λ̃,u),V_D)=cos(2πλ̃) =1 + D_1^(2) λ^2 + D_1^(4) λ^4 + ⋯, where the exact value of D_1^(2) is -2π^2, we have calculated D_1^(2) and D_1^(4) up to truncation level L=16. Numerical extrapolation of r=D_1^(4)/D_1^(2) to L = ∞ using a fitting of the form c + a/( L-d)^b (Figure <ref>) gives the result r ≡D_1^(4)D_1^(2) ≈-1.119 (from BCFT/Ellwood invariant.)The result (<ref>), derived from the symplectic form of the OSFT, and (<ref>) motivated by consideration of closed strings <cit.> and calculated through the BCFT, are in good agreement. The small discrepancy between (<ref>) and (<ref>) can be attributed to the numerical extrapolation to L = ∞ as well as errors in the level truncation approximation itself. Nonetheless, we believe that these level truncation results constitute a sufficiently nontrivial piece of evidence for the agreement between the two conceptually different prescriptions of the energy of the rolling tachyon, and furthermore provides a consistency test of all-order analytic arguments of section <ref>. § AN ANALYTIC EVALUATION OF THE SYMPLECTIC FORMIn this section, we will present an analytic argument that the energy of the rolling tachyon as determined from the symplectic form Ω (<ref>) agrees with the Ellwood invariant(<ref>) (modulo the volume factor) of the string field solution Ψ(λ̃,u) derived from the deformed BCFT. To begin with, we observe that the time derivative of the string field solution inserted at a point y on the boundary (real axis) of the UHP can be expressed asuΨ(λ̃,u)(y) =-X^0Ψ(λ̃,u)(y)=-∫_C_y(dz2πi∂X^0(z)-dz̅2πi∂̅X^0(z̅))·Ψ(λ̃,u)(y)≡-A_C_y·Ψ(λ̃,u)(y). Here C_y is a counter-clockwise semi-circle contour enclosing y, that can be freely deformed on the UHP as long as it does not pass through other insertions of zero modes of X^0. In particular, we may move this contour cross D(i) (<ref>) appearing in the definition of Ω (<ref>), and hence write the symplectic form asΩ =uλ̃2g_o^2⟨D(i) (λ̃f_1∘Ψ(λ̃,u)(w_1=0) uf_2∘Ψ(λ̃,u)(w_2=0) -uf_1∘Ψ(λ̃,u)(w_1=0) λ̃f_2∘Ψ(λ̃,u)(w_2=0))⟩^UHP=-uλ̃2g_o^2λ̃⟨A_C_0 D(i) f_1∘Ψ(λ̃,u)(w_1=0) f_2∘Ψ(λ̃,u)(w_2=0)⟩^UHP, where C_0 is a contour that extends along the entire positive imaginary axis. Via the relation Ω = δ uδλ̃δ Eδλ̃, we can re-express this in terms of the variation of the energy E(λ̃). 
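Under Sen's identification of the energy with the dilaton Ellwood invariant, the agreement of the two ratios is automatic for any reparameterization λ̃(λ) = λ + O(λ^3): expanding E = (1+cos(2πλ̃))/(4π^2 g_o^2) in λ gives E_2 = D_1^(2)/(4π^2 g_o^2) and E_4 = D_1^(4)/(4π^2 g_o^2), so E_4/E_2 = D_1^(4)/D_1^(2) exactly. The short symbolic check below (with a generic, hypothetical cubic coefficient c3 in the reparameterization) verifies this; it is why comparing the two independently computed numbers above is a meaningful test of that identification.

import sympy as sp

lam, c3, g_o = sp.symbols('lambda c3 g_o', positive=True)

# Generic reparameterization lambda_tilde = lambda + c3*lambda^3 + O(lambda^5)
lam_tilde = lam + c3 * lam**3

cos_series = sp.expand(sp.series(sp.cos(2 * sp.pi * lam_tilde), lam, 0, 6).removeO())
D2 = cos_series.coeff(lam, 2)   # D_1^(2); equals -2*pi**2 for any c3
D4 = cos_series.coeff(lam, 4)   # D_1^(4)

E = sp.expand((1 + cos_series) / (4 * sp.pi**2 * g_o**2))
E2, E4 = E.coeff(lam, 2), E.coeff(lam, 4)

print(sp.simplify(D2))                  # -2*pi**2
print(sp.simplify(E4 / E2 - D4 / D2))   # 0: the two ratios agree identically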
Next, by folding the contour C_0 in half, and employing the doubling trick, we haveEλ̃=-1g_o^2λ̃⟨A_C_h D(i) f_1∘Ψ(λ̃,u)(w_1=0) f_2∘Ψ(λ̃,u)(w_2=0)⟩^UHP=-1g_o^2 V_X^0λ̃⟨Ã_C_f (c∂X_L^0(i)+c∂X_L^0(-i)) f_1∘Ψ(λ̃,u)(w_1=0) f_2∘Ψ(λ̃,u)(w_2=0)⟩^doubling, where C_h is the contour extended from the origin to i along the imaginary axis, and C_f is the contour extended from from -i to i related by doubling trick, withÃ_C_f ≡∫_C_fdz2πi∂X^0_L(z). The integration along C_f can be performed explicitly, with the result expressed in terms of X_L insertions at the end points z=± i, givingEλ̃ =i2πg_o^2V_X^0λ̃⟨(cX_L^0∂X_L^0(i)-cX_L^0∂X_L^0(-i)+X_L^0(i)c∂X_L^0(-i)-X_L^0(-i)c∂X_L^0(i)) f_1∘Ψ(λ̃,u)(w_1=0) f_2∘Ψ(λ̃,u)(w_2=0)⟩^doubling.In fact, the part of the correlator on the RHS of (<ref>) with insertion of cX_L^0∂ X_L^0 at z=i vanishes. To see this, we first replace the insertion of the two string fields Ψ at z=± 1 with their star product Ψ*Ψ inserted at the origin z=0, transformed with the conformal map (<ref>). Next, we use the equation of motion obeyed by Ψ, Ψ*Ψ=-Q_BΨ, and deform the BRST contour to enclose the operator inserted at z=i. This results in the identity⟨cX_L^0∂X_L^0(i) f_1∘Ψ(λ̃,u)(w_1=0) f_2∘Ψ(λ̃,u)(w_2=0)⟩^doubling=⟨(cX_L^0∂X_L^0)(i) f∘(Ψ(λ̃,u)*Ψ(λ̃,u))(w=0)⟩^doubling=⟨Q_B·(cX_L^0∂X_L^0)(i) f∘Ψ(λ̃,u)(w=0)⟩^doubling=⟨(-14c∂^2c)(i) f∘Ψ(λ̃,u)(w=0)⟩^UHP.The correlator appearing in the last line is also known as the Ellwood invariant W(Ψ(λ̃,u),-14c∂^2c), as the bulk operator c∂^2c is Q_B-closed. In this case, the bulk insertion is pure ghost whereas the string field solution Ψ amounts to an exactly marginal deformation of the matter X^0 CFT, by (<ref>) the result vanishes identity. A similar argument applies to the part ofthe correlator on the RHS of (<ref>) with c:X_L^0∂ X_L^0: inserted at z=-i. We are thus left withEλ̃ =i2πg_o^2V_X^0λ̃⟨(X_L^0(i)c∂X_L^0(-i)-X_L^0(-i)c∂X_L^0(i)) f_1∘Ψ(λ̃,u)(w_1=0) f_2∘Ψ(λ̃,u)(w_2=0)⟩^doubling. Let us compare (<ref>) with the λ̃-variation of the Ellwood invariant (<ref>) associated with the bulk zero momentum dilaton operator V_D. UsingV_D=-12Q_B·(R+14∂c-14∂̅c̃ ), R≡X^0(c∂X^0-c̃∂̅X^0 ), we can write λ̃ W(Ψ(λ̃,u),V_D)=-12λ̃(W(Ψ(λ̃,u),Q_B·R)+14W(Ψ(λ̃,u),Q_B·∂c)-14W(Ψ(λ̃,u),Q_B·∂̅c̃) ). Via the doubling trick, we can replace R inserted at z=i on the UHP withR^doubling = cX_L^0∂X_L^0(i)-cX_L^0∂X_L^0(-i)-X_L^0(i)c∂X_L^0(-i)+X_L^0(-i)c∂X_L^0(i). Further using cX_L^0∂ X_L^0=12Q_B(X_L^0)^2-14∂ c, the contribution from the first two terms in R^ doubling to (<ref>) cancels against that of the last two terms in (<ref>). Now deforming the BRST contour and using -Q_BΨ=Ψ*Ψ, we arrive at iπg_o^2 V_X^0λ̃ W(Ψ(λ̃,u),V_D)=i2πg_o^2V_X^0λ̃⟨(X_L^0(i)c∂X_L^0(-i)-X_L^0(-i)c∂X_L^0(i)) f_1∘Ψ(λ̃,u)(w_1=0) f_2∘Ψ(λ̃,u)(w_2=0)⟩^doubling. The agreement of this expression with Eλ̃ in (<ref>) hence establishes the relation between the energy E(λ(λ̃)) and the Ellwood invariant W(Ψ(λ̃,u),V_D), as claimed.§ DISCUSSION To summarize, we have shown that the energy of the rolling tachyon on the ZZ-brane in c=1 string theory, determined from the regularized symplectic form (<ref>) of the OSFT up to an overall constant shift, is equal to the Ellwood invariant of the zero momentum dilaton operator V_D, which in turn is the same as the coefficient of V_D in the deformed rolling tachyon boundary state. The latter was previously given the interpretation of energy by Sen <cit.>, based on considerations of closed string backreaction<cit.>. 
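Spelling out the final step: with the overall normalization as we read it from the relations above, i/(π g_o^2 V_X^0), and with the initial condition E(λ̃=0) = 1/(2π^2 g_o^2) for the undeformed ZZ-brane, one can integrate in λ̃:

∂E/∂λ̃ = (i/(π g_o^2 V_X^0)) ∂_λ̃ W(Ψ(λ̃,u),V_D) = (i/(π g_o^2 V_X^0)) ∂_λ̃ [ (V_X^0/(4πi)) cos(2πλ̃) ] = (1/(4π^2 g_o^2)) ∂_λ̃ cos(2πλ̃),

so that

E(λ̃) = 1/(4π^2 g_o^2) + cos(2πλ̃)/(4π^2 g_o^2) = (1 + cos(2πλ̃))/(4π^2 g_o^2),

reproducing (<ref>).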
In contrast, the derivation presented in this paper amounts to a direct construction of the OSFT Hamiltonian or equivalently its corresponding vector field on the phase space without appealing to coupling to closed strings.Thus far, we have worked under the assumption that the relevant domain of the phase space of OSFT on the ZZ-brane is two-dimensional and consists entirely of the rolling tachyon solutions which can be constructed perturbatively. While this is suggested by the identification of the ZZ-brane with a fermion/eigenvalue of the dual matrix quantum mechanics <cit.>, in addition to regions of the phase space that correspond to open string tachyon rolling to the “wrong side” (presumably described by similar OSFT solutions with negative or imaginary λ), there may be additional components of the phase space that captures multiple ZZ-branes or higher ZZ-branes defined by (m,n) ZZ-boundary conditions <cit.>. While the higher ZZ-boundary conditions appear to play a role in D-instantons of c=1 string theory <cit.>, the existence of the corresponding physical branes (that support open string tachyons) and their dual interpretation in the matrix quantum mechanics remain to be clarified. We hope the present work serves as a first step towards exploring the full phase space of the OSFT of ZZ-branes, and its quantization that may lead to the construction of a quantum non-perturbative OSFT (cf. <cit.>). § ACKNOWLEDGEMENTS We would like thank Carlo Maccaferri, Martin Schnabl, Ashoke Sen, and Charles Wang for discussions. We are especially grateful to Ted Erler for explaining to us subtleties in defining the symplectic form of OSFT <cit.>, and Matej Kudrna for sharing with us the details of level truncation data involved in <cit.>. We thank Pedro Pascual Center of Science, Benasque, Spain where an early version of this work was presented. MC is supported by the Sam B. Treiman fellowship at the Princeton Center for Theoretical Science. BM and XY are supported by a Simons Investigator Award from the Simons Foundation, by the Simons Collaboration Grant on the Non-Perturbative Bootstrap, and by DOE grant DE-SC0007870. The numerical computations in this work are performed on the FAS Research Computing cluster at Harvard University.§ OSFT CONVENTIONSIn this Appendix, we summarize the OSFT conventions used in this paper, primarily based on those of <cit.>. In the classical OSFT, the open string field |Ψ⟩ is a ghost number 1 state in the BCFT Hilbert space. The SL(2,ℝ)-invariant vacuum |0⟩ has ghost number zero, b has ghost number -1 and c has ghost number +1. Under the state-operator correspondence, |Ψ⟩ maps to an operator Ψ(w=0) inserted at the origin of the upper half unit disc (UHD). Propagated to the unit semicircle, the string has its midpoint at w=i, its left half at {w=e^iθ| θ∈ [0,π] } and right half at {w=e^iθ| θ∈ [π,2π] }.The star product between a pair of string fields, Ψ_1*Ψ_2, is defined by identifying the right half of Ψ_1 with the left half of Ψ_2. * is associative, namely (Ψ_1 * Ψ_2) * Ψ_3 = Ψ_1 * (Ψ_2 * Ψ_3), but not commutative nor anti-commutative. The BRST charge Q_B is a nilpotent, Grassmann-odd derivation on this algebra, that satisfies the Leibnitz rule Q_B (A * B) = (Q_B A) * B + (-1)^AA * (Q_B B), where A= Ψ or A= δΨ with δ defined in section <ref>. (-1)^A = 1 for Grassmann even A and -1 for Grassmann odd A (we adopt the convention that δ itself is Grassmann odd). 
The BRST charge Q_B is given by Q_B= ∫_C dz/2πi j_B - d/2 πi j̃_B, j_B= cT^matter+ bc ∂c + 3/2 ∂^2 c , where C is a counter-clockwise (CCW) semicircle contour enclosing the origin.The convolution ∫ is a linear functional on the space of string fields. It satisfies ∫ Q_B Ψ = 0 up to possible contributions from the boundaries of spacetime. It also satisfies the graded cyclicity property ∫A * B = (-1)^AB ∫B * A, and by extension ∫A_1 * A_2 * ⋯* A_n = (-1)^(A_2++A_n)A_1 ∫A_2 * ⋯* A_n * A_1. Such an n-linear functional on string fields is referred to as an n-vertex. The action of Witten's cubic OSFT (<ref>) consists of 2- and 3-vertices, and the Ellwood invariant will be defined through a 1-vertex.The n-vertex is calculated by a CFT correlator of n-string fields on the unit disc or the UHP, with specific choices of local charts containing each string field insertion. For instance, the 2-vertex is equivalent to the BPZ inner product ∫Ψ_1 * Ψ_2 = Ψ_1 | Ψ_2. The cubic vertex is defined by the correlator ∫Ψ_1 * Ψ_2 * Ψ_3 = Ψ_1,Ψ_2, Ψ_3 =g_1 ∘Ψ_1(w_1=0) g_2 ∘Ψ_2(w_2=0) g_3 ∘Ψ_3(w_3=0)^unit disc whereg_1(w_1)= e^2πi /3( 1+iw_1/1-iw_1 )^2/3g_2(w_2)=( 1+iw_2/1-iw_2 )^2/3g_3(w_3)= e^-2πi /3( 1+iw_3/1-iw_3 )^2/3each maps the w_i-UHD to a wedge spanning angle 2π/3 in the unit disc parameterized by z' (with |z'|≤1). The string field insertion at w_i=0 is mapped to one at the midpoint on the boundary arc of each wedge, and the midpoint of each string at w_i = i is mapped to the center of the disc z'=0. The semicircle{|w_i|=1, {w_i}≥ 0}, on the other hand, is mapped to a pair of radii on the boundary of the wedge. In this formulation, the cyclicity of the cubic vertex is manifest.The identity string field 𝕀 is defined by the property𝕀 * Ψ= Ψ* 𝕀 = Ψfor all string fields Ψ. It follows that a 2-vertex involving 𝕀 reduces to a 1-vertex, ∫Ψ=𝕀 | Ψ. To express this 1-vertex as a CFT correlator on the unit disc or the UHP, we need a conformal transformation that maps the w-UHD (with Ψ inserted at w=0) to the whole unit z'-disc in such a way that the semicircle{|w|=1, {w}≥ 0} maps to a radius of the z'-disc, while the midpoint w=i maps to the center of the disc z'=0. Note that the left and right halves of the w-semicircle are glued together on the z'-disc. The z'-disc can also be further mapped to the z-UHP, with the w-semicircle now folded into the semi-infinite line {z=iy| y ∈ [1,∞) }. That is, ∫Ψ= F_I ∘Ψ(w=0)^unit disk = f_I ∘Ψ(w=0)^UHP, wherez'= F_I(w) = ( 1+iw/1-iw )^2(unit disc) z= f_I(w) = 2w/1-w^2(UHP)Equivalently, one can express the identity string field as ⟨𝕀|=⟨0|U_f_I where U_f_I is the operator implementing the conformal transformation z=f_I(w). The Ellwood invariant <cit.> can be defined as the 1-vertex with the extra insertion of an on-shell closed stringvertex operator V(i,-i) (at the string midpoint) that is a primary of weight (0,0). We denote itW(Ψ, V) ≡ 𝕀 | V(i,-i) | Ψ = V(i,-i) f_I ∘Ψ(w=0)^UHP Note that as the string field carries ghost number 1, the closed string vertex operator V must carry ghost number 2 to give a nonzero result.§ CONSERVATION LAWS FOR VERTICESThe conservation laws used for evaluating the symplectic form (<ref>) in level truncation are derived as follows. The correlators in question are of the formV(i,-i) f_1 ∘A(w_1=0) f_2 ∘B(w_2=0)^UHP. We take z to be the coordinate on the UHP, and w_1, w_2 the local coordinates on the UHD containing the open string fields A, B respectively. 
They are related by the coordinate transformationsz=f_1(w_1) = 1+w_1/1-w_1 z=f_2(w_2)= - 1-w_2/1+w_2, mapping the pair of w_i-UHDs to the right and left half of the z-UHP, respectively. Note that the origins of the w_1- and w_2-UHD are mapped to z=1 and z=-1. The w_i-semicircles are mapped to the imaginary z-axis.In our application, V is a conformal primary while A and B are not primaries in general. We use Ward identities to replace raising operators in A or B with linear combinations of lowering operators and constants acting on the remaining vertex operators, eventually reducing the correlator to that of primaries. We begin with a correlator of the formV(i,-i) f_1 ∘(ϕ_-mA)(w_1=0) f_2 ∘B(w_2=0)^UHP, wherethe field ϕ is either i√(2)∂ X^0, T^(25), b, or c.We can express the mode ϕ_-m as ϕ_-m= ∫_C [ dw/2 πi ϕ(w)/w^m-h+1 -dw̅/2 πi ϕ̃(w̅)/w̅^m-h+1 ], where h is the weight of ϕ and C is a CCW semicircular contour enclosing w_1=0. Deforming the contour C away to infinity in the UHP, we must take into account the coordinate transformation between the w_1-chart and the z-UHP, as well as the residue contributions when the contour moves past the V and B insertions.Using dz/dw_1=2/(1-w_1)^2=(z+1)^2/2, (<ref>) is written in the z-coordinate as ∫_C_1 dz/2 πi 1/2^h-1 (z+1)^m+h-1 (z-1)^-m+h-1 V(i,-i) ϕ(z) f_1 ∘A f_2 ∘B^UHP + (anti-holomorphic), where C_1 is now a CCW semicircle in the UHP enclosing z=1.The residue contribution from ϕ colliding with V can be determined from the OPE ϕ(z) V(i,-i)∼∑_k C_k V_k(i,-i)/(z-i)^h+h_V-h_k,ϕ̃() V(i,-i)∼∑_k̅ C̃_k̅ Ṽ_k̅(i,-i)/(+i)^h+h_V-h_k̅, where h_V, h_k,h_k̅ are the weights of V, V_k,V_k̅ respectively.The residue contribution from ϕ colliding with B is a correlator involving ϕ_m B. The result isV(i,-i) f_1 ∘(ϕ_-mA)(w_1=0) f_2 ∘B(w_2=0)^UHP= (-1)^m+hV(i,-i) f_1 ∘A(w_1=0) f_2 ∘(ϕ_m B)(w_2=0) ^UHP -∑_kz→iRes ( (z+1)^m+h-1(z-1)^-m+h-1/2^h-1(z-i)^h+h_V-h_k ) C_k V_k(i,-i) f_1 ∘A f_2 ∘B ^UHP - ∑_k̅→-iRes ( (+1)^m+h-1(-1)^-m+h-1/2^h-1(+i)^h+h_V-h_k̅) C̃_k̅ Ṽ_k̅(i,-i) f_1 ∘A f_2 ∘B ^UHP. Specializing to the case V = D = 1 V_X^0(c ∂ X^0 + c̃∂̅ X^0 ) of our interest, the replacement rules areD(i,-i) f_1 ∘(α_-mA) f_2 ∘B^UHP = (-1)^m+1D(i,-i) f_1 ∘A f_2 ∘(α_m B)^UHP-i (-i)^m /√(2)mc(i)/ V_X^0 f_1 ∘A f_2 ∘B ^UHP -i (i)^m /√(2)mc̃(-i) / V_X^0 f_1 ∘A f_2 ∘B ^UHP,D(i,-i) f_1 ∘(c_-mA) f_2 ∘B^UHP = (-1)^m-1D(i,-i) f_1 ∘A f_2 ∘(c_m B)^UHP,D(i,-i) f_1 ∘(b_-mA) f_2 ∘B^UHP = (-1)^m+2D(i,-i) f_1 ∘A f_2 ∘(b_m B)^UHP+ (-i)^m∂X^0(i)/ V_X^0 f_1 ∘A f_2 ∘B ^UHP + (i)^m∂̅ X^0 (-i)/ V_X^0 f_1 ∘A f_2 ∘B ^UHP,D(i,-i) f_1 ∘(L^(25)_-mA) f_2 ∘B^UHP = (-1)^m+2D(i,-i) f_1 ∘A f_2 ∘(L^(25)_m B)^UHP.§ NUMERICAL IMPLEMENTATION OF LEVEL TRUNCATIONHere we describe some details of the numerical implementation of the level truncation computations of section <ref>. Our approach is similar to that of <cit.> (section 3), with one difference being that we impose a cutoff on the total oscillator level rather than the total weight (cf. (<ref>)).The main hurdle is to calculate the cubic vertices for all triplets of basis states <ref>, the symplectic form correlators D(i) f_1 ∘ A(w_1=0) f_2 ∘ B(w_2=0)^UHP for (A,B) all pairs of basis states, and the Ellwood invariantW(A,V), defined in <ref> for any individual basis state A. The cubic vertices can then be used to calculate the perturbative solution Ψ_i as described in <ref>. 
Once Ψ_i are obtained, the symplectic form and Ellwood invariant for the perturbative string field solution can be calculated using the symplectic form correlators and Ellwood invariants of basis states. One convenience is that all three of the ingredients (cubic vertices, symplectic form correlators, and Ellwood invariants) decouple between the c=25 Liouville, timelike boson, and ghost sectors, so they can be calculated separately for all three and combined as needed by multiplication.

To implement this on a computer, it is convenient to define a canonical ordering for the basis states in each sector, in order of increasing oscillator level. For example, 1 → 1, L^(25)_-2 → 2, L^(25)_-3 → 3, L^(25)_-4 → 4, L^(25)_-2L^(25)_-2 → 5, etc., in order of increasing level. For the boson sector, where parity is an important consideration, we numbered the α_-M with an even number of oscillators with increasing positive integers and the α_-M with an odd number of oscillators with decreasing negative integers: 1 → 1, α_-1α_-1 → 2, α_-2α_-1 → 3, α_-3α_-1 → 4, α_-2α_-2 → 5, etc., and α_-1 → -1, α_-2 → -2, α_-3 → -3, α_-1α_-1α_-1 → -4, α_-4 → -5, etc. (A small enumeration sketch reproducing this ordering is given after the algorithm outline below.)

The algorithm proceeds as follows:

1. Calculate the ingredients: cubic vertices, symplectic form correlators, and Ellwood invariants for all the necessary triplets, pairs, and single basis states. This can be done separately in the Liouville, timelike boson, and ghost sectors. Use the conservation laws described in <cit.> and <ref> to reduce the calculation of an ingredient at a given level to a linear combination of the same ingredient at lower levels. It is worth noting that for the cubic vertex of three basis states at levels (L_v1, L_v2, L_v3), the cubic vertices that appear when applying the conservation laws have levels (L_v1' ≤ L_v1, L_v2' ≤ L_v2, L_v3' ≤ L_v3), while the total level must strictly decrease, L_v1' + L_v2' + L_v3' < L_v1 + L_v2 + L_v3. Similarly for the symplectic form correlators, except with just two basis states. Because the Liouville, boson, and ghost sectors are calculated separately, the level L_v here always refers to the level of oscillators in that particular sector only.

One can set up a loop over all increasing values of L_v1, L_v2, L_v3, each from 0 to L. Because the recursive conservation laws will always decrease the total level and never raise the levels of the first, second, or third basis states individually, the recursion will never have to go deeper than one step if one calculates all the cubic vertices at each step of this loop. This is important when running a parallel version of the algorithm, because it reduces how often two threads will be repeating the same calculation.

In practice, we do not actually need to calculate all the cubic vertices for the boson sector at each level. In this case the recursive calls may go deeper than one step, but this is not an issue. The boson sector also carries the additional complication of having an f_i associated with each basis state. This will be treated below.

2. Calculate the coefficients A_i,v,f appearing in Ψ_i using the cubic vertices. This does not require any recursive calls, so one does not have to be careful about the order in which they are calculated. The only requirement is that Ψ_2 be calculated before Ψ_3, before Ψ_4, etc. In the parallel implementation, we set up a loop over increasing i = 2,3,4. Within each run of the loop, the task of calculating the A_i,v,f was split up evenly between all the threads, since for a fixed i, no A_i,v,f needs the value of any other.

3. Calculate the symplectic form or Ellwood invariant using the symplectic form correlators or Ellwood invariants for basis states from step 1 and the coefficients A_i,v,f from step 2. Like in step 2, all the recursive work has been done, and when implementing this in parallel, it is easiest to divide the work evenly among all the threads without worrying about the order.
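The canonical level-ordered enumeration of basis states mentioned above can be generated mechanically: within each sector, the states at a given oscillator level correspond to partitions of that level into allowed mode numbers. The following Python sketch is purely illustrative (it is not the Mathematica/C++ code used for the actual computation); the within-level order is one consistent choice that reproduces the examples listed above, and the ghost sector, which involves separate b and c oscillators, is not covered.

def partitions(n, min_part=1, max_part=None):
    # Non-increasing partitions of n with parts between min_part and max_part.
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), min_part - 1, -1):
        for rest in partitions(n - first, min_part, first):
            yield (first,) + rest

def index_states(L, min_part, signed_by_parity=False):
    # Canonical indices for oscillator basis states up to total level L.
    # A state is recorded as the tuple of mode numbers of its negative modes,
    # e.g. (2, 2) stands for L_{-2} L_{-2} or alpha_{-2} alpha_{-2}.  States are
    # listed in order of increasing level.  With signed_by_parity=True (the
    # timelike boson sector), states with an even number of oscillators receive
    # indices 1, 2, 3, ... and states with an odd number receive -1, -2, -3, ...
    table, pos, neg = {}, 0, 0
    for level in range(L + 1):
        for p in partitions(level, min_part):
            if signed_by_parity and len(p) % 2 == 1:
                neg += 1
                table[p] = -neg
            else:
                pos += 1
                table[p] = pos
    return table

# c = 25 Liouville-type sector: Virasoro modes L_{-2}, L_{-3}, ...  (min_part = 2)
print(index_states(5, min_part=2))
# timelike boson sector: alpha_{-1}, alpha_{-2}, ... with the parity-signed convention
print(index_states(4, min_part=1, signed_by_parity=True))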
We implemented this algorithm using Mathematica and C++, using the WSTP library to communicate between the two. As the level cutoff L is increased, the number of basis states grows, and the number of cubic vertices grows quite quickly. This is the main reason why a parallelized implementation of the algorithm is necessary.

The approximate storage costs at each level are:

Level L:  8      10     12      14      16     18     20
Cost:     5 MB   40 MB  288 MB  1.8 GB  10 GB  54 GB  261 GB

The most space-consuming part is the timelike boson cubic vertices (where the basis states carry an extra label f). There is also the memory cost of running one Wolfram kernel on each thread, which is not estimated here. One node on the Harvard FASRC cluster has 184 GB RAM, so level 18 is the limit without writing and reading from hard disk/SSD memory or using multiple nodes together.

We ran our program on a single node of the Harvard FASRC cluster, which has 184 GB RAM and 48 processor cores. Running on the full 48 cores, the runs took the following times to execute:

Level L:  8       10       12      14      16
Time:     20 sec  4.5 min  19 min  2.5 hr  21.5 hr

A simple experiment at level 12 shows that the speedup due to multiple cores is not exactly linear, but is not far off.

Number of CPUs:        48      24      12      6        3
Time to run level 12:  19 min  34 min  64 min  120 min  237 min

These data fit the curve Time = (625.15 min)(CPUs)^-0.91, so the program runs 1.88 times faster when the number of CPUs is doubled. This is to be expected, since the recursive part of the algorithm involves threads waiting for each other, and there is also an associated overhead time cost to managing multiple threads.
JHEP | http://arxiv.org/abs/2310.17895v1 | {
"authors": [
"Minjae Cho",
"Ben Mazel",
"Xi Yin"
],
"categories": [
"hep-th"
],
"primary_category": "hep-th",
"published": "20231027050342",
"title": "Rolling tachyon and the Phase Space of Open String Field Theory"
} |
Institute for Mathematics, Astrophysics and Particle Physics, Radboud University, 6525 AJ Nijmegen, Netherlands
The Bethe Center for Theoretical Physics, Bonn University, 53115 Bonn, Germany

Multi–lepton Signatures from U(1)_L_μ-L_τ at the LHC

Jochem Kip[[email protected]], Zhongyi Zhang[[email protected]]

The U(1)_L_μ-L_τ extended Standard Model (SM) is anomaly free and contains a massive Z^' boson. The associated Higgs, which generates the Z''s mass via spontaneous symmetry breaking (SSB), can mix with the SM Higgs. The new parameters relating to the extra Higgs cannot be probed at the LHC with final states containing no more than 4 leptons. Therefore, we use signatures with at least 6 leptons to probe the parameter space through LHC experiments. Since the SM cross section for a final state with at least 6 leptons is below what is currently visible at the LHC, the background is negligible, thereby making this channel an extremely sensitive probe. We find that in a limited region of the parameter space this channel can strongly constrain the associated U(1)_L_μ-L_τ coupling constant, even more so than a final state with 4 leptons or fewer.

§ INTRODUCTION Currently one of the biggest questions in particle physics is the exact nature of Dark Matter (DM). One model that allows for the incorporation of DM is an extension of the Standard Model with a local U(1)_L_μ-L_τ gauge symmetry <cit.>, where a DM candidate can take the form of a complex scalar <cit.> or Dirac fermion <cit.>. While the inclusion of an additional U(1) gauge symmetry in the SM will in general induce anomalies that must be dealt with by choosing the fermionic charges appropriately <cit.>, the U(1)_L_μ-L_τ extension is automatically anomaly free <cit.>. Additionally, since the newly introduced gauge boson does not couple (directly) to electrons and positrons, strong constraints from e^+e^- colliders such as LEP are avoided, which still allows for a much larger parameter space and correspondingly a lower mass spectrum for the newly introduced particles.

One of the best experiments to currently probe the potential existence of such a model and its properties is the LHC. Previous analyses have already investigated large regions of the parameter space that are in reach of the LHC and previous colliders for the U(1)_L_μ-L_τ model <cit.>. Here we propose and investigate a new channel, namely that of a 6-8 lepton final state, whose cross section is too small in the SM to be currently observable, thereby providing a very low SM-produced background. This channel thus allows for the detection of comparatively small cross sections due to the cleanness of the signal.

This paper is structured as follows. First we discuss the relevant theoretical aspects of the model and the details of the 6-8 lepton final state in Section <ref>. Subsequently, we show our results for the cross section and validate that this region of the parameter space is not already excluded by Higgs decay in Section <ref>. We finish with our conclusions in Section <ref>.

§ LAGRANGIAN AND SIGNATURES By extending the SM with a local U(1)_L_μ-L_τ gauge symmetry a new gauge boson Z' is introduced.
The resulting field strength tensor and covariant derivative are defined in the usual way:

Z^'_μν = ∂_μ Z^'_ν - ∂_ν Z^'_μ ,
D_μ = ∂_μ - i g_μτ q Z^'_μ ,

where g_μτ is the associated coupling constant of U(1)_L_μ - L_τ and q the L_μ - L_τ charge of the particle on which the covariant derivative acts.

Naturally this new gauge boson needs to eat a Goldstone boson to acquire mass. In order to do so a new complex Higgs singlet with non-zero L_μ-L_τ charge is introduced, which provides mass to the Z^' boson. Here we set the L_μ - L_τ charge of the new Higgs singlet to be 1. Setting ϕ_h to be a SM-like Higgs doublet and ϕ_H the new complex Higgs singlet, the additional Higgs terms as compared to the SM are then

ℒ_H = (D_μϕ_H)^*(D^μϕ_H) - V(ϕ_h, ϕ_H) ,
V(ϕ_h, ϕ_H) = μ^2_H ϕ_H^*ϕ_H + λ_H(ϕ_H^*ϕ_H)^2 + λ_hH(ϕ_h^†ϕ_h)(ϕ_H^*ϕ_H) .

Here μ_H^2 and λ_H are the mass parameter and self-interaction coupling of ϕ_H, with μ_H^2 < 0. After spontaneous symmetry breaking the Higgs doublet and singlet take the following form in the unitary gauge:

ϕ_h = ( 0 , (v+h)/√(2) )^T ,  ϕ_H = (v_μτ + H)/√(2) ,

where v_μτ is the vacuum expectation value of ϕ_H. Due to the presence of λ_hH a mixing is introduced between h and H, resulting in two mass eigenstates h_1 and h_2, of which one must be the 125 GeV SM-like Higgs. We define h_1 to be the SM-like Higgs. In order to parameterize the mixing of the Higgs states h and H into h_1 and h_2 we use a mixing angle α (tan(2α) = v v_μτ λ_hH/(λ_h v^2 - λ_H v_μτ^2)), which is defined such that h_1 = h and h_2 = H for α = 0. The phenomenological implications of α are more straightforward than those of λ_hH; the size of α dictates the strength of the coupling of massive SM particles to h_2, which would result in noticeable effects in SM observations for a large α. Therefore, α always needs to be kept sufficiently small, a constraint that we implement in this study by only considering α = 0.01 and 0.1.

Two relevant new vertices are introduced by the U(1)_L_μ-L_τ extension: the h_1 h_2 h_2 vertex, with coupling

-sin(2α) g_μτ (M_h_1^2 - M_h_2^2)/(4 M_Z^') ,

and the h_2 Z^' Z^' vertex, with coupling

g_μτ M_Z^' .

With these two new vertices included, all the particles in the model acquire their masses through spontaneous symmetry breaking via the Higgs mechanism, rather than through a mere simplified model. Simplified models only consider renormalizable terms with testable sectors and neglect the potential origin of these terms. In analyses focusing on final states with up to 4 leptons, these two vertices are not testable.

This model can also accommodate a Dark Matter particle as a complex singlet and provide a mechanism for neutrino masses in a type-I seesaw manner. However, in this study we focus on probing the mixing angle α of the physical Higgs states at the LHC; we shall thus refrain from discussing the Dark Matter and neutrino sectors further.

In order to probe α at the LHC we consider a 6 or 8 lepton final state. Currently this process is invisible at the LHC for the SM (the SM cross sections for final states with 6 or more leptons are smaller than 10^-5 pb, which is below the current detection limit of observing at least 3 events with 139 fb^-1). This is therefore a clean low-background channel, making detection very efficient. The dominant diagram that is introduced by the U(1)_L_μ-L_τ extension of the SM is gluon-fusion production of h_1 followed by the cascade h_1 → h_2 h_2 → Z^' Z^' Z^' Z^', where the ggh_1 vertex is of course an effective vertex induced by quark loops, most notably the top quark <cit.>.
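For orientation, the mixing angle and the two new couplings defined above are simple closed-form expressions and can be evaluated directly. The short Python sketch below is purely illustrative; the benchmark numbers at the end are arbitrary values chosen only for demonstration and are not taken from the paper.

import math

def mixing_angle(lam_h, lam_H, lam_hH, v, v_mutau):
    # tan(2*alpha) = v*v_mutau*lam_hH / (lam_h*v^2 - lam_H*v_mutau^2), as given above.
    # atan2 fixes one branch of the inverse tangent; other conventions are possible.
    return 0.5 * math.atan2(v * v_mutau * lam_hH, lam_h * v**2 - lam_H * v_mutau**2)

def g_h1h2h2(alpha, g_mutau, m_h1, m_h2, m_zp):
    # h1-h2-h2 coupling: -sin(2*alpha) * g_mutau * (M_h1^2 - M_h2^2) / (4 * M_Z')
    return -math.sin(2 * alpha) * g_mutau * (m_h1**2 - m_h2**2) / (4 * m_zp)

def g_h2zpzp(g_mutau, m_zp):
    # h2-Z'-Z' coupling: g_mutau * M_Z'
    return g_mutau * m_zp

# Illustrative benchmark point (values chosen for demonstration only)
alpha, g_mutau = 0.01, 0.01
m_h1, m_h2, m_zp = 125.0, 50.0, 20.0   # GeV
print(g_h1h2h2(alpha, g_mutau, m_h1, m_h2, m_zp))   # GeV
print(g_h2zpzp(g_mutau, m_zp))                      # GeV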
Furthermore, here any μ^+μ^-/τ^+τ^- pair can become a ν_μν_μ/ν_τν_τ pair, thereby making this process a 0, 2, 4, 6, or 8 observable lepton final state. We of course focus only on the 6/8 μ/τ final state. From equation (<ref>) it can be seen that the h_1h_2h_2 coupling depends on α, which is what makes this channel sensitive to the Higgs mixing angle. It should of course also be noted that the cross section from this diagram scales roughly as g_μτ^6 due to the 3 vertices involving either h_2 or Z^'. Naturally there are many more diagrams contributing to a 6/8 lepton final state, but all of these are subleading with respect to the aforementioned diagram. The reason is that this diagram contains 3 BSM vertices, which are collectively proportional to g_μτ^3,[The g_μτ^4 factor of the 4 Z'-lepton-lepton vertices does not contribute here, since the Z' can only decay into leptons.] and 1 ggh vertex. In contrast, other leading order diagrams have at least 3 BSM vertices and 2 electroweak vertices. Additionally, contributions arising from leptons radiating off a Z' are suppressed by a factor of 1/M_Z'^2 per Z' pair as compared to the h_2-mediated case. Furthermore, electroweak bosons do not contribute significantly, as the Z' boson does not mix with any of the SM vector bosons.

In order to simulate events we use MadGraph <cit.>, for which the model files for the U(1)_L_μ-L_τ model are custom made and can be obtained from the authors. Notably, these model files include an explicit ggh_1 coupling in order to include the loop-induced gluon fusion to Higgs at tree level. The inclusion of this vertex is vital as all computations performed here are at tree level, and Higgs production is foundational to probing both α and the dominant 6/8 lepton final state channel. Furthermore, the number of diagrams scales approximately as n! for n external particles <cit.>, thus we limit the number of simulated diagrams in order to address the otherwise exceedingly high computation time. We simulate pp → Z^' Z^' Z^' Z^' in which either every Z^' goes to a μ^+ μ^-/τ^+ τ^- pair, or one Z^' goes to a ν_μν_μ/ν_τν_τ pair with all others going to μ^+μ^-/τ^+τ^-, thereby ensuring either an 8 or 6 charged lepton final state.

§ BOUNDS FROM LHC EXPERIMENTS In order to set the value of the effective gluon-gluon-Higgs vertex, the process gg → h is simulated in MadGraph and the coupling is tuned until the cross section is 48.5 ± 0.1 pb <cit.>. This results in g_ggh = 0.356.

In order to find the values of g_μτ that provide exclusion based on the 6/8 lepton channel we consider cross sections larger than 2.2· 10^-5 pb to be excluded. Figure <ref> shows the exclusion lines for all combinations of α = 0.1, 0.01 and M_h_2 = 30, 50, and 70 GeV. The exclusion boundary of the 2/3/4 lepton search from <cit.> is provided for comparison. From figure <ref> it can clearly be seen that the 6/8 lepton search outperforms the 2/3/4 lepton search in the regions where M_Z'≤ 1/2 M_h_2 and M_h_2≤ 1/2 M_h. This is in line with expectations, since the on-shell decay h_2→ Z' Z' is not possible if 2 M_Z' > M_h_2, and similarly h_1 → h_2 h_2 cannot occur on shell if 2 M_h_2 > M_h_1. As expected, the exclusion boundary for α = 0.1 is higher than that for α = 0.01 if all other parameters are equal, differing roughly by one order of magnitude if M_h_2 < 1/2 M_h_1.
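As a quick arithmetic check, the exclusion threshold of 2.2·10^-5 pb quoted above matches the detection criterion of at least 3 expected events at 139 fb^-1 mentioned earlier. The tiny sketch below assumes, as a rough simplification, unit acceptance and efficiency.

# Minimal visible cross section for N_min expected events at integrated
# luminosity L_int (assuming 100% acceptance and efficiency).
N_min = 3          # minimum number of observed events
L_int_fb = 139.0   # integrated luminosity in fb^-1

sigma_min_fb = N_min / L_int_fb          # in fb
sigma_min_pb = sigma_min_fb * 1.0e-3     # 1 pb = 1000 fb
print(f"sigma_min = {sigma_min_pb:.2e} pb")   # roughly 2.2e-5 pb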
Naturally, since for M_h_2 > 1/2 M_h_1 h_1→ h_2 h_2 can never be on-shell the sharp rise of the exclusion boundary for M_h_2 = 30 GeV at M_Z' = 15 GeV and M_h_2 = 50 GeV at M_Z' = 25 GeV is absent for M_h_2 = 70 GeV. Naturally the region of interest allows for the decay of the SM Higgs boson into non-SM particles, i.e. h_2 and Z'. In order to verify that the decay of the SM Higgs boson does not a priori exclude the parameter region of interest for the 6/8 lepton search we have computed the branching ratio of h_1→ BSM Particles and demand that it is not more than 5%. The results for the M_Z'-M_h_2 plane can be seen in figure <ref>. Notably for g_μτ = 0.1 and α = 0.1 the branching ratios are fairly significant, however lowering either g_μτ or α by one order of magnitude shows that the branching ratio is only relevant for low values of M_Z'. While the dependence of the branching ratio of the SM Higgs boson on M_Z' may appear to be counter intuitive, as its decay is dominated by h_1→ h_2 h_2, the coupling constant for this vertex contains a 1/M_Z' term as seen in equation (<ref>). When inspecting both figures <ref> and <ref>, it is clear that the decay of the SM Higgs boson provides no danger for the exclusion of these processes, as for low M_Z' masses the exclusion line is well below g_μτ = 0.1.In figure <ref> the cross sections of the 6/8 lepton searches are shown with a cut at 2.2· 10^-5 pb. Again the region of interest, namely M_Z'≤ 1/2 M_h_2≤ 1/4 M_h, is clearly visible, in addition to small values of M_Z' for M_h_2 > 1/2 M_h_1. For M_h_2 < 60 at g_μτ = 0.01 and α = 0.01 only small values of M_Z' are allowed. When inspecting the exclusion line corresponding to M_h_2 = 50 GeV and α = 0.01 from figure <ref> it can be seen that the line lies just below g_μτ = 0.01, thus when extrapolating to M_h_2 = 60 GeV the process is then indeed expected to be excluded for g_μτ = 0.01. Additionally, when comparing the plots of the cross sections for g_μτ=0.1 &α = 0.01 and g_μτ=0.01 &α = 0.1 it can be seen that they are fairly similar for multiple values of M_h_2. This is again as expected when inspecting figure <ref>, where the exclusion line for identical values of M_h_2 but for α = 0.1 and α =0.01 differ approximately one order of magnitude in g_μτ. § CONCLUSION In this paper we have shown that the 6/8 lepton search is a powerful probe for small values of both the U(1)_L_μ - L_τ gauge coupling g_μτ and the Higgs mixing parameter α in the parameter region M_Z'≤ 1/2 M_h_2≤ 1/4 M_h_1, significantly outperforming the 2, 3 and 4 lepton channels. The effectiveness of this probe is compounded by the fact a small Higgs mixing parameter is able to be probed effectively in regions of the parameter space where no deviations in the SM Higgs sector are expected to be induced. However, this channel is of course limited by the size of the parameter space that is able to be probed effectively; requiring both h_2 and Z' being able to be produced on-shell for an effective probe. Future runs of the LHC are expected to provide more stringent bounds, seeing as exclusion is directly tied to the integrated luminosity due to the clean signal provided by this channel.unsrt | http://arxiv.org/abs/2310.17708v2 | {
"authors": [
"Jochem Kip",
"Zhongyi Zhang"
],
"categories": [
"hep-ph"
],
"primary_category": "hep-ph",
"published": "20231026180829",
"title": "Multi--lepton Signatures from $U(1)_{L_μ-L_τ}$ at the LHC"
} |
Polarization vs. magnetic field: competing eigenbases in laser-driven atoms Christian T. Schmiegelow January 14, 2024 ===========================================================================A non-crossing spanning tree of a set of points in the plane is a spanning tree whose edges pairwise do not cross. Avis and Fukuda in 1996 proved that there always exists a flip sequence of length at most 2n-4 between any pair of non-crossing spanning trees (where n denotes the number of points). Hernando et al. proved that the length of a minimal flip sequence can be of length at least 3/2 n. Two recent results of Aichholzer et al. and Bousquet et al. improved the Avis and Fukuda upper bound by proving that there always exists a flip sequence of length respectively at most 2n - log n and 2n - √(n).We improve the upper bound by a linear factor for the first time in 25 years by proving that there always exists a flip sequence between any pair of non-crossing spanning trees T_1,T_2 of length at most c n where c ≈ 1.95.Our result is actually stronger sincewe prove that, for any two trees T_1,T_2, there exists a flip sequence from T_1 to T_2 of length at most c |T_1 ∖ T_2|.We also improve the best lower bound in terms of the symmetric difference by proving that there exists a pair of trees T_1,T_2 such that a minimal flip sequence has length 5/3 |T_1 ∖ T_2|, improving the lower bound of Hernando et al. by considering the symmetric difference instead of the number of vertices.We generalize this lower bound construction to non-crossing flips (where we close the gap between upper and lower bounds) and rotations. § INTRODUCTIONLet C be a set of n points in the plane in convex position. A spanning tree T on the set of points C is a subset of edges that forms a connected acyclic graph on C. A spanning tree T on C is non-crossing if every pair of edges of T (represented by the straight line interval between their endpoints) are pairwise non-crossing.Let us denote by 𝒮(C) the set of all non-crossing spanning trees on the point set C. Let T∈𝒮(C). A flip on T consists of removing an edge e from T and adding another edge f so that the resulting graph (T ∪ f) ∖ e is also a spanning tree. A flip sequence is a sequence of non-crossing spanning trees such that consecutive spanning trees in the sequence differ by exactly one flip.Equivalently, one can define the configuration graph on the vertex set 𝒮(C) where two trees T,T' are adjacent if they differ in exactly one edge (that is |T ∖ T'|=|T'∖ T|=1). A (minimal) flip sequence is a (shortest) path in the configuration graph. §.§ Flips between non-crossing spanning trees. Avis and Fukuda <cit.> proved that there always exists a flip sequence between any pair of non-crossing spanning trees of length at most 2n-4 by showing that there is a star[A star is a spanning tree with at most one vertex of degree at least 2.] S on C such that T_1 and T_2 can be turned into S with at most n-2 flips. In fact, they showed that this flip sequence exists even if the point set C is in generalposition.Given two spanning trees T_1,T_2, the symmetric difference between T_1 and T_2 is denoted by Δ(T_1,T_2)= (T_1 ∖ T_2) ∪ (T_2 ∖ T_1). We denote by δ(T_1,T_2)=|Δ(T_1,T_2)|/2 the number of edges in T_1 and not in T_2, which is a trivial lower bound on the length of a flip sequence from T_1 to T_2.It is well-known that the set of spanning trees of a graph G forms a matroid. 
In particular, for any possible pair of spanning trees T_1,T_2, there is a (non geometric) flip sequence that transforms T_1 into T_2 in exactly δ(T_1,T_2) flips. So if we do not care about geometric properties of the representation of the spanning trees, it is always possible to transform a spanning tree T_1 into T_2 using at most n-1 flips. One can wonder if the same holds if we want to keep non-crossing spanning trees all along the flip sequence. Hernando et al. <cit.> answered this question in the negative by providing, for every n, two non-crossing spanning trees T_1,T_2 on a convex set of n points whose minimal flip sequence needs 3/2n -5 flips (we give their example in Figure <ref>). During 25 years, no improvement of the lower or upper bound has been obtained until a recent result of Aichholzer et al. <cit.>. They showed that the upper bound of Avis and Fukuda can be improved when points are in convex position by proving that there exists a flip sequence between any pair of non-crossing spanning trees of length at most 2n-Ω(log n). Their result has been further improved by Bousquet et al. <cit.> who proved that 2n-Ω(√(n)) flips are enough. However, until now, there does not exist any general proof that there always exists a flip sequence of length at most (2-ϵ)n for some ϵ >0.In both papers, the authors prove as well the existence of shorter flip sequences when one (or both) of the trees has a special shape. Aichholzer et al <cit.> proved that when the points are in convex position and T_1 is a path then there exists a flip sequence of length at most 3/2n - 2 - |T_1 ∩ T_2| = n + |Δ(T_1,T_2)|/2 - 1. Bousquet et al. <cit.> proved that there exists a flip sequence of length at most 3/2 n when the points are in convex position and T_1 is a path or a nice caterpillar[A caterpillar is a tree such that the set of nodes that are not leaves induces a path. Without giving the exact definition, a nice caterpillar is a caterpillar such that every chord cuts in a nice way the geometric representation.].Bousquet et al. <cit.> conjectured that the lower bound of Hernando et al. <cit.> is essentially tight:Let C be a set of n points in convex position. There exists a flip sequence between any pair of non-crossing spanning trees of length at most 3/2 n.One can easily prove that there exists a flip sequence of length at most 2 δ(T_1,T_2) between any pair of non-crossing spanning trees in convex position. The improvement of Aichholzer et al. <cit.> also holds in that setting. Since in the example of Hernando et al. the intersection is reduced to two edges, one can wonder if Conjecture <ref> can be extended to the symmetric difference, namely:Let C be a set of n points in convex position. There exists a flip sequence between any pair of non-crossing spanning trees T_1,T_2 of length at most 3/2δ(T_1,T_2).Our main results, discussed in more details in the next paragraphs, first consist in (i) improving the best known upper bound by breaking the linear factor 2 of the threshold on the length of a minimal flip sequence (even in terms of the symmetric difference) and (ii) disproving Conjecture <ref> by proving that the best upper bound factor we can hope for is 5/3. We complete these results by providing improved upper and lower bounds on the length of transformations in the non-crossing and rotation models defined later. In particular, we close the gap between upper and lower bounds in the case of non-crossing flips. 
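The objects manipulated throughout the paper (points in convex position, non-crossing spanning trees, flips, and the quantity δ) are easy to experiment with on a computer. The following Python sketch is purely illustrative and not part of the paper: points are labelled 0, …, n-1 in their cyclic order on the convex hull, two chords cross exactly when their endpoints interleave along the circle, and δ(T_1,T_2) counts the edges of T_1 that are not in T_2.

from itertools import combinations

def crosses(e, f, n):
    # Do segments e = {a, b} and f = {c, d} of a convex n-gon cross?
    a, b = e
    c, d = f
    if {a, b} & {c, d}:
        return False  # segments sharing an endpoint do not cross

    def strictly_between(x, lo, hi):
        # True iff x lies strictly between lo and hi going counter-clockwise.
        return 0 < (x - lo) % n < (hi - lo) % n

    # With all four endpoints distinct, the segments cross iff exactly one
    # endpoint of f lies strictly between the endpoints of e along the circle.
    return strictly_between(c, a, b) != strictly_between(d, a, b)

def is_non_crossing(tree, n):
    return not any(crosses(e, f, n) for e, f in combinations(tree, 2))

def delta(t1, t2):
    # delta(T1, T2) = number of edges of T1 not in T2.
    s1 = {frozenset(e) for e in t1}
    s2 = {frozenset(e) for e in t2}
    return len(s1 - s2)

# Example on 4 points in convex position: the path 0-1-2-3 versus the star at 0.
T1 = [(0, 1), (1, 2), (2, 3)]
T2 = [(0, 1), (0, 2), (0, 3)]
n = 4
print(is_non_crossing(T1, n), is_non_crossing(T2, n), delta(T1, T2))  # True True 2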
Improved upper bound.The first main result of this paper is to improve the best upper bound of <cit.> by a linear factor by proving that the following holds:theoremmainthmLet C be a set of n points in convex position. There exists a flip sequence between any pair of non-crossing spanning trees T_1 and T_2 of length at most c ·δ(T_1,T_2) with c = 1/12(22 + √(2))≈ 1.95.In particular, there exists a flip sequence of length at most cn ≈ 1.95 n between any pair of non-crossing spanning trees. One can note that our result is expressed in terms of the symmetric difference, which is also the case for the upper bound of <cit.>[Some of the partial results obtained in <cit.> depend on both n and Δ.]Our proof technique is completely different from the previous approaches of <cit.>. Even if these two proofs are very different, they share a common point: their goal is to transform at least one spanning tree into a very rigid structure that does not really take into account the specific structure of both trees.On the contrary, our approach depends on the local structure of both trees, which works along the following lines: If two non-crossing trees T,T' contain a common chord, we divide our problem into two sub-problems[This approach is safe since our upper bound depends on the size of the symmetric difference and not of n.]: the “left" and the “right" problem where the common chord becomes an edge of the border in both cases. In particular, if we can create a common chord with few modifications, we can apply induction. Unfortunately, this cannot work in general since we may have to modify a lot of edges until we can create a common edge (see e.g. the example of Figure <ref>). We prove that we can find a chord e in T and one side of that chord (say “left") such that T' has not too many endpoints in that side. The difference with the argument above is that the “not too many" here is not a universal constant but depends on the size of the side. We then prove that, by only modifying a small linear fraction of these edges, we can transform “left" into what we call a very good side. Informally speaking, “left" is a very good side if (i) no edge of T' has both endpoint in “left" (in other words, all the edges with one endpoint in left have the other endpoint in the right part) and, (ii) the number of such edges in T' is equal to the size of “left". We then prove that, in that case, we can perform flips in order to be sure that both trees agree on the left of e in at most 5/3 times the number of vertices at the left of e[Actually, the size of a side will be defined as the number of non-edges of the border and not simply of vertices in the side which explains why we obtain a bound in terms of the symmetric difference.].Our proof is self-contained and is algorithmic. So a flip sequence of length at most cn can be obtained in polynomial time. Moreover it is robust since it can be adapted to improve the best upper bounds for rotations for instance. Note that we did not try to optimize the constant c to keep the proof as simple as possible.Lower bound in terms of the symmetric difference.Our second set of results consists in proving stronger lower bounds in terms of the symmetric difference. 
In particular, we disprove Conjecture <ref>:theoremlbflips For every k=0 mod 3, there exist two trees T_k and T_k' such that δ(T_k,T_k')=k and every flip sequence between T_k and T_k' has length at least 5/3 k.The proof of Theorem <ref> consists in first providing two spanning trees T_1,T_1' on 8 vertices for which |T_1 ∖ T_1'|=3 and such that the minimal flip sequence between T_1 and T_2 needs 5 flips (see Figure <ref>). One of the reasons of the hardness comes from the fact that, for every common edge e of the border, the endpoint of e that is used to connect this edge to the rest of the tree is different in both trees. This allows us to increase the number of crossings between the trees, and then the length of the flip sequence.Note that our example is not a counter-example to Conjecture <ref> since the pair of trees contains a lot of common edges. We then prove that if we glue many instances of (T_1,T_1') appropriately, we can obtain a similar example with arbitrarily large value of k.The idea is as follows. If we assume that there always exists a minimal flip sequence that does not break common edges, the conclusion immediately follows. Unfortunately, this statement, known as the Happy Edge conjecture <cit.>, is only known to be true for common edges of the border but not for chords. So we have to prove that it is never interesting to break a common edge which we succeed to do in this particular case (in other words, the Happy Edge Conjecture holds in this case).We have not found any example for trees T_1,T_2 for which a flip sequence of length more than 5/3δ(T_1,T_2) is necessary. We therefore leave the following as an open problem: Let C be a set of points in convex position and T_1,T_2 two non-crossing spanning trees on C. Does there always exist a flip sequence between T_1 and T_2 of length at most 5/3δ(T_1,T_2)? Improved lower bounds for the other models.Several other types of flips have been introduced in the literature (see e.g. <cit.> for an overview of the results in the different models). We proved that we can strengthen the best lower bounds in terms of the symmetric difference for two other types of flips: non-crossing flips and rotations. Let T be a spanning tree, e be an edge of T and f be an edge such that (T ∪ f) ∖ e is non-crossing. We say that the flip is non-crossing if T ∪ f does not contain any crossings. In other words, we restrict to flips where the new edge does not cross the edge that is deleted. We say that the flip is a rotation if e and f share an endpoint. In other words, every flip must rotate an edge around a point in that case. Upper and lower bounds in terms of n for the longest minimal possible transformation have already been studied (see <cit.>). Note that the best known lower bounds for all the models are the same and are given by the construction of Hernando et al. which gives a lower bound (in terms of n of size 3/2 n). We improved the lower bounds in terms of the symmetric difference for both non-crossing flips and rotations.For non-crossing flips, one can easily remark that, by projecting edges on the border, we can always find a non-crossing flip sequence between any pair of trees of length at most 2δ(T_1,T_2) (see Lemma <ref> for a formal proof). We prove that this bound is tight by giving a pair of trees that reach this bound, which completely closes the gap between lower and upper bounds for non-crossing spanning trees in terms of symmetric difference. 
Namely we prove that the following holds:theoremlbncflips For every k, there exist two trees T_k and T_k' such that δ(T_k,T_k')=k and the length of a minimal non-crossing flip sequence between T_k and T_k' has length at least 2k. We finally consider the rotation model. One can easily remark that there is always a rotation sequence between T_1 and T_2 of length at most 4δ(T_1,T_2) using projections on the border. Actually one can prove that this 4 can be improved into a 3 with a simple clever analysis. A careful reading of the proof of Theorem <ref> with a slight adaptation actually permits to improve the factor into (1+c) ≈ 2.95. Our last result consists in improving the best lower bound for rotation by showing that the following holds: theoremlbrotations For every k=0 mod 3, there exist two trees T_k and T_k' such that δ(T_k,T_k')=k and every rotation sequence between T_k and T_k' has length at least 7/3 k. While the family of trees reaching that bound is similar to the family constructed for flips, the analysis that this family works is much more involved. We end this part with a last open problem: Let C be a set of points in convex position and T_1,T_2 two non-crossing spanning trees on C. Does there always exist a rotation sequence from T_1 to T_2 of length at most 7/3δ(T_1,T_2)? §.§ Related workFlip distance between geometric structures.Flips between combinatorial structures have been widely studied in computational geometry and combinatorics. One of the most studied problem, known as the Flip Distance problem, aims at computing the minimum number of flips needed to transform one triangulation into another[A flip in that case consists in replacing one diagonal of a quadrilateral into the other.] The problem has been proven to be NP-complete when considering n points in non-convex position <cit.>, and in that case, the flip graph of triangulations of a point set may have diameter Θ(n^2) <cit.>. When the n points are in convex position, the maximum flip distance between triangulations is linear and equal to 2n-10 when n ≥ 9. A first proof for n large was found using hyperbolic geometry <cit.>, while a combinatorial proof for all n≥ 9 was only given decades later <cit.>. However, the complexity of the Flip Distance problem is, as far as we know, still an open problem in that case.Flip graphs and their diameter for other geometric objects havebeen studied, such as non-crossing perfect matchings or rectangulations. For both of these objects there are several natural notions of flips, yielding various flip graphs. A natural way of defining a flip for perfect matchings is by allowing two edges to be removed and two other edges to be added such that the resulting matching is still non-crossing. When the n points are in convex position and n is even, Hernando, Hurtado and Noy <cit.> showed that the flip graph of non-crossing perfect matchings has diameter n/2-1. Houle et al. <cit.> gave a result on general point sets when using the notion of flip where M_1 is connected to M_2 in the flip graph where the symmetric difference of M_1 and M_2 contains a single non-crossing cycle. They showed that there is a transformation of linear length between any pair of non-crossing matchings, whereas Aichholzer et al. <cit.> showed that, if multiple non-crossing cycles are allowed in a flip, then any minimal transformation has length at most O(log n).Ackerman et al. 
<cit.> considered flips of rectangulations with two elementary flip operations, where one flip changes a horizontal line to a vertical line and vice versa and the other flip is a rotation around a point (by splitting the line segment into two parts). They showed that the maximum flip sequence over all n points is of the order Θ(n log n). A natural point set for rectangulations to consider is a diagonal point set, for which Ackerman et al. showed that the flip graph has diameter at most 11n.Combinatorial Reconfiguration. In the last decade, an important line of work has consisted in finding transformations between solutions of a problem such as graph colorings or independent sets (see e.g. <cit.> for a recent survey).Amongst all these works, some of them studied transformations between restricted spanning trees. While we focus in this work on a restriction to the geometric representation of the spanning trees (non-crossing), these works focus on combinatorial properties of the spanning trees such as their maximum degree <cit.> or their number of leaves <cit.>. In these cases, the existence of a transformation is not guaranteed and the goal is to design efficient algorithms determining, given a pair of spanning trees, whether one can transform one into the other. These works focus on the token jumping model which essentially corresponds to flips and very few is known on the token sliding model (which is an analogue of rotations).As a final remark, spanning trees are, as we already mentioned, a particular case of matroids (called graphic matroids). Other reconfiguration results related to generalizations of matroids have also been studied in the litterature, see e.g. <cit.>.Organization of the paper. After giving some definitions and simple observations in Section <ref>, we prove Theorem <ref> in Section <ref>. In Section <ref> we prove Theorems <ref>, <ref> and <ref>. § BASIC DEFINITIONS AND OBSERVATIONSLet C be a set of points in convex position and T be a non-crossing spanning tree on C. We say two points of a convex set C are consecutive if they appear consecutively on the convex hull of C. We say we perform e ⇝ e' in T if we perform the flip consisting in removing e and adding e' in T.Let A ⊆ C. We denote by T[A] the induced subgraph of T on A, that is the subforest of T with vertex set A where uv is an edge if and only if uv is an edge of T. Note that T[A] is a non-crossing forest.A border edge (for T) is an edge between consecutive points. An edge of T which is not a border edge is called a chord. A hole of T is a pair of consecutive points that is not a border edge. We will say that we fill a hole when we apply a flip where the created edge joins thepair of points of the hole.One can remark that, for each chord e of T, the line containing e splits the convex hull of C in two non-trivial parts. A side of a chord e is the subset of points of C contained in one of the two closed half-planes defined by the line containing the two endpoints of e (see Figure <ref> for an illustration).A side of T is a side of a chord e for some e∈ T. We say an edge (or a hole) is in a side A if both its endpoints are in A. In the following, for every side A of a chord, we will denote by k_A the number of holes in A, which is also the number of chords of T in A. Since T is acyclic, we also have k_A>0. Note that each chord e of T defines two non-trivial sides[A side is trivial if it only contains either two points or all the points.] A and B whose intersection is exactly the endpoints of e. 
Moreover, T has exactly k_A + k_B holes. Let e be a chord of T and A be a side of e. For every chord e' in A, the side of e' (w.r.t. A) is the side of e' that is contained in A. Note that for every pair of chords e_1,e_2 in A, the sides of e_1 and e_2 (w.r.t. A) are either disjoint or contained in each other. The chord e_1 is inclusion-wise minimal if no side of a chord e' in A is included in the side of e_1 w.r.t. A. By connectivity, we can easily note the following.Let e be a chord of T and A be a side of e. Let A' be the side of an inclusion-wise minimal chord e' in A. Then k_A'=1.Let A be a side of a chord e of T.We say that a point v is inside A for a pair vw if v ∈ A and either v is not an endpoint of e or both v and w are in A (see Figure <ref> for an illustration). Note that the fact that v is inside A depends on the pair vw. Note moreover that a point can be inside A for several pairs of points, and a point can be inside A for some pairs but not inside A for other pairs.We define the degree of a side A in a tree T' as the number of endpoints (counted with multiplicity) that are inside A for some chord of T'.intersectionsThe following lemma appeared in <cit.>. We give the proof for completeness. Let T be a tree and e be a border edge. Then there exists a non-crossing flip that adds e in T without removing any border edge of T (except if T only contains border edges). Adding e to T does not create any crossing, since e is a border edge.Moreover, the unique cycle in T ∪{ e } must contain at least one chord e', since otherwise T ∪{ e } is precisely the convex hull of C. Adding e and removing e' is a non-crossing flip as claimed.§ UPPER BOUNDSThis section aims at proving Theorem <ref>. *We say that two trees T_I and T_F on a convex point set C form a minimal counterexampleif the pair (T_I,T_F) is a counterexample to Theorem <ref>, and for every pair of trees T'_I and T'_F, which are either defined on the same set of points and δ(T'_I, T'_F) < δ(T_I, T_F) or on a smaller set of points, Theorem <ref> holds.Before giving all the details of the proof, let us first explain the main steps of the proof (see Figure <ref> for an illustration). First, we prove in Section <ref> that a minimal counterexample T_I and T_F is non-trivially reducible, i.e. T_I and T_F have the same border edges and no common chord.In Section <ref>, we will observe that, in some sides (later called very good), we can match the k_A chords in the side using at most 5/3k_A flips in total. However, a very good side does not necessarily exist in a minimal counterexample.Thus, we will give tools in Section <ref> to obtain a very good side from another special type of side (which we can also find in a minimal counterexample) without using too many flips. §.§ Basic properties of a minimal counterexample First, we prove the base case. Let T_I and T_F be a minimal counterexample. Then both T_I and T_F have a chord. Assume by contradiction that T_I contains only border edges. By Lemma <ref>, we can iteratively add border edges to T_F by decreasing the symmetric difference at each step until the two trees are the same. So there is a flip sequence between T_I and T_F of length at most δ(T_I,T_F), a contradiction. The other case holds by symmetry. Let T_I and T_F be a minimal counterexample. Then T_I and T_F do not have common chords. Assume by contradiction that T_I and T_F have a common chord e. Let A and B be the two sides of e. Since e is a chord, |A| and |B| are smaller than |C|. 
Since e belongs to both T_I and T_F, T_I[A] and T_F[A] is a pair of non-crossing spanning trees of A.By minimality of the counterexample, there exists a flip sequence between T_I[A] and T_F[A] of length at most c ·δ(T_I[A], T_F[A]).Note that this sequence of flips can be performed for whole set of points C starting from T_I while keeping connectivity. Let T_I' be the tree obtained from T_I after applying this flip sequence.Note that T_I' and T_F agree on A. The same argument on T_I'[B] and T_F[B] ensures that there also exists a flip sequence between T_I'[B] and T_F[B] of length at most c ·δ(T_I'[B], T_F[B]).Hence, there exists a flip sequence sequence between T_I and T_F of length at most c ·δ(T_I[A], T_F[A]) +c ·δ(T_I'[B], T_F[B]) = c ·δ(T_I, T_F) (the equality holds since e is an edge of all the trees T_I[A], T_F[A], T_I[B] and T_F[B] and then does not belong to the symmetric difference of any pair), a contradiction. Let T_I and T_F be a minimal counterexample. Every border edge of T_I is a border edge of T_F.Assume by contradiction that there is a border edge e of T_I that is not an edge of T_F.Since T_F has at least one chord by Lemma <ref>, we can perform e^* ⇝ e in T_F, where e^* is a chord of T_F. By Lemma <ref>, e^* is not contained in T_I. Let T_F^* be the non-crossing spanning tree obtained from T_F after flipping e^* into e.By minimality of T_I and T_F, there exists a flip sequence between T_I and T_F^* of length at most c ·δ(T_I, T^*_F) = c(δ(T_I, T_F) - 1). Thus, there exists a flip sequence between T_I and T_F of length at most c ·δ(T_I, T^*_F) + 1 < c ·δ(T_I, T_F), a contradiction. We say that two trees T and T' form a nice pair of trees if the two trees have no common chord and have the same border edges.Note that for a nice pair of trees, every pair of consecutive points is either a common hole or a common border edge.Thus, for a nice pair of trees (T_1, T_2), we will refer to a hole of T_1 or T_2 simply as a hole. Lemmas <ref> and <ref> ensure that the following holds: A minimal counterexample is a nice pair of trees. A face f of a tree T is a face, different from the outer face, of the plane graph obtained by filling the holes of T with edges.Note that, since T is connected, every face contains exactly one hole on its boundary and every hole is on the boundary of exactly one face.Thus, there is a bijection between holes and faces of a tree.The face containing a hole h of a tree T is the face f such that h belongs to the boundary of f. We say that the hole h is contained in the face f in T.For every hole h, we can fill h in T by flipping any chord on the boundary of the face of T containing h. §.§ Very good sidesLet T and T' be a pair of trees. A good side A of T with respect to T' is a side of T containing no chord of T' (see Figure <ref> for an illustration).A good side A is very good (w.r.t. T') if the degree of A in T' is at most k_A. (Recall that k_A denotes the number of holes in A). Remark that, for a side A of an edge e of T, if A is a good side of T w.r.t. a tree T', the degree of A in T' is equal to the number of chords of T' crossing e.The goal of Section <ref> is to prove that the following holds:lemmaverygood Let T_1 and T_2 be a nice pair of trees, e be a chord of T_1, and A be a very good side of e (w.r.t. T_2). Then, we can match k_A pairs of chords of T_1 and T_2 using at most 5/3 k_A flips in total. 
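Before the proof, the notions entering Lemma <ref> (the side of a chord, holes, the degree of a side, and good and very good sides) can be made concrete with a small computational sketch. The Python code below is purely illustrative and not part of the paper; it follows the definitions given above, with points labelled 0, …, n-1 in convex position.

def side_points(a, b, n):
    # The side of chord {a, b}: points a, a+1, ..., b in cyclic order (inclusive).
    pts, x = [], a
    while True:
        pts.append(x)
        if x == b:
            break
        x = (x + 1) % n
    return set(pts)

def is_chord(e, n):
    a, b = e
    return (a - b) % n not in (1, n - 1)   # not a border edge

def holes_in(A, T, n):
    # Holes of T lying in A: consecutive pairs with both endpoints in A
    # that are not edges of T.
    edges = {frozenset(e) for e in T}
    return [(i, (i + 1) % n) for i in range(n)
            if i in A and (i + 1) % n in A
            and frozenset((i, (i + 1) % n)) not in edges]

def degree_of_side(A, e, Tp, n):
    # Endpoints of chords of Tp, counted with multiplicity, that are "inside" A
    # (an endpoint of e only counts when the whole chord of Tp lies in A).
    a, b = e
    deg = 0
    for f in Tp:
        if not is_chord(f, n):
            continue
        v, w = f
        for x, y in ((v, w), (w, v)):
            if x in A and (x not in (a, b) or y in A):
                deg += 1
    return deg

def is_good(A, Tp, n):
    # A contains no chord of Tp.
    return not any(is_chord(f, n) and set(f) <= A for f in Tp)

def is_very_good(A, e, T, Tp, n):
    return is_good(A, Tp, n) and degree_of_side(A, e, Tp, n) <= len(holes_in(A, T, n))

# Tiny example on n = 6 points: T1 has the chord e = (0, 3), T2 is the border path.
n = 6
T1 = [(0, 3), (0, 1), (1, 2), (3, 4), (4, 5)]
T2 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
e = (0, 3)
A = side_points(0, 3, n)                          # {0, 1, 2, 3}
print(is_good(A, T2, n), is_very_good(A, e, T1, T2, n))   # True True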
Before proving this lemma, let us introduce some definitions and investigate the structure of very good sides (see Figure <ref> for an illustration of these definitions). A border path of a tree T is a maximal path (possibly reduced to a single point) of consecutive border edges of T_1.A border path of a tree T and a hole h are incident if they share a common point. Note that, for a tree T that is not reduced to a border path, a border path of T is incident to exactly two distinct holes of T, and a hole of T is incident to exactly two distinct border paths of T. Note that since trees forming a nice pair admit the same border edges, they also have the same border paths.Thus, for a nice pair of trees T_1, T_2, we will refer to a border path of T_1 or T_2 simply as a border path. Let us first remark that the following holds: Let T be a tree and h be a hole of T such that P is a border path of T incident to h. If T has a chord, then the face of T containing h has a chord on its boundary that has an endpoint in P. An internal border path P of a side A of a chord e is a border path such that all the points of P are in A and that does not contain an endpoint of e (see Figure <ref> for an illustration). Let T_1 and T_2 be a nice pair of trees, e be a chord of T_1, and A be a very good side of e w.r.t. T_2. Then:(i) Every internal border path is incident to 1 or 2 chords of T_2,(ii) Every border path containing an endpoint of e is incident to 0 or 1 chord of T_2 in A,(iii) There is at most one border path reaching the maximum value in (i) and (ii).When it exists, we call this path the extra path of A.Let us denote by h_1,…,h_k_A the holes of A and let us denote by P_i the internal path between h_i and h_i+1 for every i < k_A (note that there are k_A-1 such paths).By Observation <ref> applied to h_i,P_i for every i < k_A, there is at least one chord of T_2 incident to P_i. So the degree of A in T_2 is at least k_A - 1.Since the degree of A in T_2 is at most k_A (by definition of very good side), there is at most one extra chord of T_2 with an endpoint in A. If it exists, this extra chord is either incident with a path that contains an endpoint of e, or an internal border path of A. Lemma <ref> gives us a lot of information on how the tree T_2 behaves in a very good side w.r.t T_2. Using this information, we can now prove the main result of this section: Consider a side A' ⊆ A of an inclusion-wise minimal chord e' of T_1 w.r.t. A. Note that we might have A=A'. Let d be the degree of A' in T_2.By Remark <ref>, A' contains exactly one hole h. Since there is no chord of T_1 in A', the two border paths P_1 and P_2 incident to h contain all the points in A'.By Lemma <ref>, these paths contain altogether at most three endpoints of a chord of T_2 in A' since A is very good, hence d ≤ 3.The core of the proof consists in proving the following result: Let d≤ 3. In at most 5/3max (1, d) flips, we can obtain from T_1 and T_2 a very good pair of trees T'_1,T'_2 that agree on A' by filling max(0,d-1) holes of A in both T_1 and T_2 and creating the chord e' in T_2, see Figure <ref>. Moreover, if k_A > d, then we do not flip e and the number of edges in T'_2 crossing e decreased by max(1,d).Before proving Claim <ref>, let us show how it can be used to complete the proof of Lemma <ref>. Recall that A is very good hence d ≤ k_A. If d=k_A, then we can fill every hole of A that is different from h, and then add e' to T_2 using at most 5/3 d = 5/3 k_A flips by Claim <ref>. 
We may thus assume that d < k_A.We proceed by induction on k_A. For the base case k_A=1, we have d=0. In that case, A=A' and e=e', and we can add e to T_2 using at most ⌊5/3⌋ = 1 flip by Claim <ref>.Assume now that k_A>1. Let T_1^* and T_2^* be the two trees obtained from T_1 and T_2 after applying Claim <ref>. Note that even if e was not flipped, we cannot directly apply induction since T_1^* and T_2^* do not form a nice pair (e' is a common chord) and moreover, e is not very good anymore (because of e').However, since T_1 and T_2 form a nice pair, and we only added holes and e' to obtain T_1^*, T_2^*, e' is the only common chord of the two trees T_1^*, T_2^* and they have the same border edges. Let C' be the convex set of points obtained from C by removing the points of A' that are not endpoints of e'.Now observe that the trees T_1^*[C'] and T_2^*[C'] form a nice pair where e' is a common border edge. We claim that e is now very good for T_2^*[C']. Let A^* be the side of e in T_1^*[C'] that is contained in A. Note A^* does not contain any chord of T_2^*[C'], hence A^* is good w.r.t. T_2^*[C']. To prove that it is very good, first observe that A' contained only one hole, hence k_A^* = k_A - max(0,d-1)-1. Moreover, since A^* is good, the degree d_A^* of A^* in T_2^*[C'] is the number of chords in T'_2[C'] crossing e. The number of such chords is precisely the degree d_A of A in T_2 minus the max(1,d) (corresponding to the chords crossing e that were flipped to obtain T_2^*). Since A was very good w.r.t. T_2, we have k_A⩾ d_A, hence k_A^*⩾ d_A^* and A^* is very good w.r.t. T_2^*[C'].By induction, we can match k_A^* chords of the trees using at most 5/3k_A^* flips in total. Adding the flips applied to obtain T_1^* and T_2^* concludes the induction. To complete the proof, it now remains to prove Claim <ref>. Recall that d≤ 3 and that A' contains only one hole h. We consider three cases depending on d and will proceed as illustrated in Figure <ref>.Case d≤ 1: In that case, we claim that we can flip an edge of T_2 crossing e into e'.Since T_1 and T_2 form a nice pair and e' is a chord of T_1, then T_2 has also a chord, so neither T_1 nor T_2 is a border path. Let P be a border path incident to h. If d=1, we additionally require P to contain the endpoint of a chord of T_2 that is in A'. By Observation <ref>, the face f of T_2 containing h has a chord e_1 with one endpoint in P. By Observation <ref>, the flip e_1⇝ h is valid, and then we may perform h⇝ e'. Let T'_2 be the resulting tree. However, T'_2 could be obtained from T_2 using only the flip e_1⇝ e', since the only edge that may cross e' in T_2 is e_1 (when d=1).Finally note that e was not flipped, and we removed one edge of T_2 crossing e, namely e_1.Case d = 2: Let u_1,u_2 be the two endpoints in A' of the chords e_1,e_2 of T_2 with endpoints in A'. Note that we may have u_1=u_2, but not e_1=e_2 since A is good. Let P_1,P_2 be the two border paths incident with h. If both u_1 and u_2 lie on the same border path, say P_2, then Lemma <ref>(ii) ensures that P_2 is internal and is incident with no other chord of T_2. Otherwise, u_1 and u_2 lie on distinct border paths P_1 and P_2. Since A is very good, we have k_A⩾ d⩾ 2, hence P_1 or P_2 is internal. Moreover, by Lemma <ref>, if both paths are internal, at least one of them (say P_2) is not the extra path. And if one of them (say P_1) is not internal, then it is the extra path, and P_2 is an internal path. 
So, up to symmetry, we can assume that P_2 is an internal border path incident with only one chord of T_2, say e_2. Therefore, there exists a hole h'≠ h in A that is incident to P_2. Now by Observation <ref>, the face of T_2 containing h' contains an edge with an endpoint in P_2, hence it is e_2. We then perform e_2⇝ h' in T_2. Using Case 1, we may then perform e_1⇝ e' in the resulting tree. It then remains to flip a chord of T_1 on h'. Let f be the face of T_1 containing h'. If k_A=2, then the only chords of T_1 on the boundary of f are e and e', and we perform e⇝ h'. Otherwise, since e' is inclusion-wise minimal, there exists another chord e'_1∉{e,e'} on f, that we can flip on h'. Note that in both cases, we filled a hole in both trees and created e' in T_2. Moreover e was not flipped unless k_A=d, and we removed two chords of T_2 crossing e (namely e_1 and e_2).
Case d = 3: Let u_1, u_2 and u_3 be the three points in A' of the chords e_1,e_2,e_3 of T_2 with an endpoint in A'. Since A is very good, e_1,e_2 and e_3 are distinct. By Lemma <ref>, we can assume up to symmetry that u_1 and u_2 belong to P_1 and u_3 to P_2. (Note that u_1 and u_2 might be the same.) Since there is at most one extra path, Lemma <ref> ensures that P_2 is internal and e_3 is the only chord of T_2 incident with P_2. In particular, P_2 is incident to a hole h^*≠ h in A. Let us first prove that we can fill h^* in T_2 by flipping a chord that has an endpoint inside A. By Observation <ref>, the face containing h^* in T_2 has a chord on its boundary with an endpoint in P_2, which has to be e_3. By Observation <ref>, we can perform e_3 ⇝ h^* in T_2. Let us now prove that we can fill h^* in T_1 by flipping a chord different from e and e'. Let f be the face of T_1 containing h^*. If the boundary of f has no chord besides e and e', then by minimality of e', A contains only two chords and k_A=2, which is impossible since A is very good. Therefore there is another chord e^*_1 that we may flip on h^* by Observation <ref>. We may now apply Case 2 to the resulting trees T'_1 and T'_2. Observe that in total, we filled two holes in both T_1 and T_2, and created e' in T_2 using 5 flips. Moreover, we removed the chords e_1,e_2,e_3 from T_2 that were crossing e. Finally, we only flip e when k_A=2 in T'_1, that is when k_A=3 in T_1, which completes the proof.
Since there is at most one extra path in a very good side, observe that Case 3 of Claim <ref> only happens once during the whole induction. Hence, with a deeper analysis, the 5/3k_A bound of Lemma <ref> can be improved to ⌈3/2 k_A ⌉. However, there is an example of a small very good side A of T_1 w.r.t. T_2 such that we cannot match the k_A chords of T_1 in A with chords of T_2 in fewer than 5/3k_A = ⌈3/2 k_A ⌉ flips, see Figure <ref>. This example was the starting point on which we built the lower bounds of Section <ref>.
Now that we have shown how we can handle very good sides, we need to find one in a minimal counterexample T_I, T_F. However, we have not proved that a very good side always exists in T_I, T_F. In the next section, we show how to transform a special type of side, which does exist in T_I, T_F, into a very good side.
§.§ Obtaining a very good side
Let T and T' be a nice pair of trees. The goal of this paragraph is to create a very good side for T' in T. Recall that a good side of T w.r.t.
T' is a side of T containing no chord in T'. The first step to obtain a very good side is to create a good side from a special type of side (which always exists in a minimal counterexample) that we describe later on. A bad hole h of a side A of T w.r.t. T' is a hole in A that is also in a side B ⊊ A of T', see Figure <ref>. Note that a side of T is a good side w.r.t. T' if and only if it does not contain bad holes of T w.r.t. T'. So in order to obtain a good side, our goal will consist in filling bad holes. To generate a good side in T, we start from a side A of T that contains at least one hole that is not bad w.r.t. T' (we call such a hole good). Such a side always exists in a nice pair since at least one of the two sides of any chord of T must contain a hole that is not bad w.r.t. T' (otherwise, T' would contain a cycle). Note that we will actually require more conditions on the initial side in order to conclude, namely that it is τ-extremal (which will be defined later). However, this does not impact the transformation into a very good side. Lemma <ref> explains how we fill one bad hole in A, and our goal is to apply it several times to obtain a good side.
Let T_1, T_2 be a nice pair of trees, and A be a side of a chord e of T_1 that contains at least two holes, including at least one bad hole h w.r.t. T_2. Then, we can fill h in T_1 by flipping a chord different from e, and we can fill h in T_2 by flipping a chord with both endpoints in A. Moreover, the resulting pair of trees T'_1 and T'_2 form a nice pair and A has one less bad hole. An illustration of the proof is given in Figure <ref>. Let us first prove that we can fill h in T_1 by flipping a chord different from e. Let f be the face containing h in T_1. Note that the boundary of f is included in A. If e is the only chord on the boundary of f, then f is the only face whose boundary is included in A and h is the only hole of A, a contradiction. Otherwise, let e^* be a chord of T_1 different from e that is on the boundary of f. By Observation <ref>, we can perform e^* ⇝ h in T_1. Let us now prove that we can fill h in T_2 by flipping a chord in A. Let f' be the face containing h in T_2. Since h is a bad hole of A w.r.t. T_2, the face f' is included in A. Let e' be a chord of T_2 on the boundary of f', and note that e' is a chord in A. By Observation <ref>, we can perform e' ⇝ h in T_2. Note that T_1', T_2' is still a nice pair since we created a common border edge and no common chord.
Recall that the side A of T contains a good hole. Therefore, each time we update T, T' by applying Lemma <ref> on A, the good holes in A are not filled, which ensures that we can repeatedly apply the lemma until no bad hole remains. After this process, we have filled m<k_A bad holes w.r.t. T' in 2m flips, and A is now a good side of T w.r.t. T'. Let us now explain how we can transform A into a very good side. Recall that A being not very good simply means that there are too many chords of T' crossing the unique chord e on the boundary of A. The goal of Lemma <ref> is to remove these extra crossings. To obtain a very good side, we will apply it iteratively until the number of crossing chords is small enough.
Let T_1, T_2 be a nice pair of trees, and A be a good side of a chord e of T_1 w.r.t. T_2 which is not very good w.r.t. T_2. Then there exists a hole h not in A such that: (i) we can fill h in T_1 by flipping a chord distinct from e, and (ii) we can fill h in T_2 by flipping a chord crossing e. Moreover, the resulting pair of trees after these two flips is still nice. The proof is illustrated in Figure <ref>.
Let ℓ > k_A be the degree of A in T_2, and let us denote by B the other side of e. Since A is good w.r.t. T_2, A contains no chord of T_2, and hence every chord e_0 of T_2 with an endpoint inside A for e_0 crosses e, and thus has its other endpoint in C ∖ A. Thus, there are ℓ chords e_1,…, e_ℓ of T_2 that cross e. These ℓ chords split the convex hull of C into ℓ+1 parts, hence there are at least ℓ+1 faces of T_2 that contain at least one chord in e_1, …, e_ℓ on their boundary. Now consider the holes contained in these ℓ + 1 faces of T_2. Since ℓ > k_A, there are at least two such holes, say h and h', that are not in A. Let f and f' be the faces of T_1 containing h and h'. Since the boundaries of f and f' are both included in B, e cannot lie in both f and f'. So, up to symmetry, we may assume e is not on the boundary of f. By Observation <ref>, we can perform e^* ⇝ h in T_1, with e^* a chord of T_1 that is on the boundary of f. By definition of h, there is a chord e_i on the face of T_2 containing h, and we can perform e_i ⇝ h in T_2 by Observation <ref>. Since we only flipped chords to border edges, the resulting pair of trees is still nice.
Let d be the degree of A in T'. Applying Lemma <ref> d-k_A times transforms A into a very good side w.r.t. T' using 2(d - k_A) flips. To summarize, using Lemma <ref> and Lemma <ref>, we are able to transform any side A containing a good hole into a very good side. However, at each step, we use 2 flips and fill one hole. Once A becomes a very good side, our goal is to apply Lemma <ref> to match k_A edges of T and T' in 5k_A/3 flips.
§.§ Bounding the number of flips
In order to conclude the proof of Theorem <ref>, we need to make sure that we save enough using Lemma <ref> to compensate for the expensive use of Lemmas <ref> and <ref>. In other words, we want to ensure that we will not use too many flips to obtain a very good side relative to the number of holes in the resulting side. This is why we need to start from a side whose number of bad holes and degree are not too large compared to its number of holes. Let T, T' be a nice pair of trees and τ > 2. We say a side A of a chord e of T is τ-extremal for a tree T' if the degree of A in T' is at most τ·k_A, and, for every side A' ⊊ A of T', the degree of A' in T is more than τ· k_A'. First, we prove that such a side exists in a minimal counterexample.
Let T_1 and T_2 be a nice pair of trees that are not border paths. Then either T_1 or T_2 contains a τ-extremal side. Let e be a chord of T_1, and A and B be the two sides of e. Recall that k_A + k_B is the number of holes in T_1 (thus in T_2). Since the number of chords of T_2 is equal to its number of holes minus 1, T_2 contains k_A + k_B - 1 chords. In particular, there are 2(k_A + k_B - 1) endpoints of chords of T_2, hence, exchanging A and B if necessary, we may assume that A has degree at most 2k_A ≤τ· k_A in T_2. If A is not τ-extremal, there is a side A' ⊊ A of T_2 of degree at most τ· k_A' in T_1. We now replace A by A', swap T_1 and T_2, and iterate this process until we find a τ-extremal side. Note that this terminates since each time A'⊊ A. We now show that a τ-extremal side does not have too many bad holes, which we use later for bounding the number of flips in our process.
Let T_1 and T_2 be a nice pair of trees, and A be a τ-extremal side of a chord e of T_1. Then the side A contains at most 2/τ k_A bad holes w.r.t. T_2. Let m be the number of bad holes in A w.r.t. T_2. For each bad hole h w.r.t.
T_2 in A, let e_h be the chord of T_2 such that the side of e_h included in A is inclusion-wise maximal (with possibly e_h=e_h' for distinct bad holes h,h'). Then, we define 𝒜 as the set formed by the sides A'⊆ A of all the chords e_h. Note that these sides do not overlap, so no hole belongs to two different sides in 𝒜. Thus, each bad hole w.r.t. T_2 is contained in exactly one side A' ∈𝒜. Further, each A' ∈𝒜 contains exactly k_A' holes, that are all bad w.r.t. T_2. Thus, we get that m=∑_A' ∈𝒜 k_A'. Since A is τ-extremal, the degree in T_1 of each A' ∈𝒜 is more than τ k_A'. Since there are k_A chords of T_1 in A, the sum of the degrees in T_1 of the sides A'∈𝒜 is at most 2k_A. But then:
2k_A ≥∑_A' ∈𝒜τ· k_A' = τ m.
Rearranging the equation proves the lemma.
We are now ready to prove Theorem <ref>. Let us first explain the intuition of the proof. By Lemma <ref>, a τ-extremal side exists in a minimal counterexample. Moreover, informally speaking, a τ-extremal side does not have too large a degree (by definition) and does not contain too many bad holes by Lemma <ref>. Thus, we can obtain a very good side from a τ-extremal side using not too many flips. We will combine this idea with everything we have proved about very good sides and minimal counterexamples to establish the upper bound on a minimal flip sequence.
Assume by contradiction that Theorem <ref> does not hold and let us consider a minimal counterexample T_I and T_F. By Corollary <ref>, T_I and T_F form a nice pair of trees. Consider a τ-extremal side A in the counterexample. By symmetry, we can assume that A is a side of some chord e of T_I. By Lemma <ref>, the side A contains m ≤2/τ k_A bad holes w.r.t. T_F. In particular A contains a good hole, and we can apply Lemma <ref> m times to fill every bad hole of A w.r.t. T_F using 2m flips. By Lemma <ref>, the resulting pair of trees T_I' and T_F' form a nice pair of trees, and the side A' of the chord e of T_I' is a good side of T_I' w.r.t. T_F' of size k_A' = k_A - m such that the degree of A' in T_F' is at most τ k_A - 2m. Now, we apply Lemma <ref> (τ - 1)k_A - m times, until the degree of A' in T_F” becomes at most k_A'. Again, note that each time, by Lemma <ref>, we are left with a nice pair of trees where e did not change. After 2((τ - 1)k_A - m) flips, the resulting pair of trees T_I” and T_F” form a nice pair of trees, and the side A” = A of the chord e of T_I” is a very good side of T_I” w.r.t. T_F” of size k_A” = k_A'= k_A - m. By Lemma <ref>, we can match k_A” chords of the trees using at most 5/3 k_A” flips. Let T_I^* and T_F^* be the resulting pair of trees. We have that:
δ(T_I^*, T_F^*) = δ(T_I”, T_F”) - k_A”
= δ(T_I”, T_F”) - (k_A-m)
= δ(T_I', T_F') - ((τ - 1)k_A - m) - (k_A - m)
= δ(T_I, T_F) - m - ((τ - 1)k_A - m) - (k_A - m)
= δ(T_I, T_F) - τ k_A + m.
Since τ k_A - m > 0, we have δ(T_I^*, T_F^*) < δ(T_I, T_F), and by minimality, there exists a flip sequence between T_I^* and T_F^* of length at most cδ(T_I^*, T_F^*). Thus, we have a flip sequence between T_I and T_F of length at most:
cδ(T_I^*, T_F^*) + 2m + 2((τ - 1)k_A - m) + 5/3 (k_A - m)
= cδ(T_I, T_F) + (2 - c)τ k_A - 2k_A + cm + 5/3(k_A - m)
= cδ(T_I, T_F) + (2 - c)τ k_A - 1/3k_A + (c - 5/3)m
≤ cδ(T_I, T_F) + ((2 - c)τ - 1/3 + (c - 5/3) 2/τ) k_A.
For τ = 2 + √(2) and c = 1/12(22 + √(2)), we get a flip sequence between T_I and T_F of length at most cδ(T_I, T_F), a contradiction.
§ LOWER BOUNDS
The goal of this section is to prove the different lower bounds. For each model, we will give a family of pairs of trees that satisfy the corresponding theorem.
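Before turning to these constructions, let us record the computation behind the choice of constants in the proof of Theorem <ref> above. With τ = 2 + √(2) and c = 1/12(22 + √(2)), we have 2 - c = (2-√(2))/12, so (2-c)τ = (2-√(2))(2+√(2))/12 = 2/12 = 1/6; similarly, c - 5/3 = (2+√(2))/12 = τ/12, so (c - 5/3)·2/τ = 2/12 = 1/6. The coefficient of k_A in the final bound of that proof is therefore 1/6 - 1/3 + 1/6 = 0, so the flip sequence obtained indeed has length at most cδ(T_I, T_F).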
The proofs of the three theorems share a similar structure and rely on counting arguments. To give the flavor, we start with the simplest construction, and prove Theorem <ref> in Section <ref>. We then proceed with the more involved case of flips by proving Theorem <ref> in Section <ref>. Finally, we prove Theorem <ref> in Section <ref> using a different counting method.
§.§ Non-crossing flips
As a warm-up before proving the other items, which are harder, let us prove Theorem <ref>. In particular, the proof of Theorem <ref> will follow the same scheme, but the construction and proofs will be more technical. Let us denote by T_1 and T'_1 the pair of non-crossing spanning trees on a convex set C of size 4 represented in Figure <ref>. Note that we have δ(T_1,T_1')=1. For every k, we denote by T_k, T_k' the pair of non-crossing spanning trees obtained by taking k disjoint copies of T_1,T_1' and identifying the points v_3 and v_4 of the i-th copy respectively with the points v_1 and v_2 in the (i+1)-th copy for i<k. We define C_i as the set of points of the i-th copy, and v_j^i the point corresponding to v_j in C_i. In particular, the sets of points C_i are not disjoint, since C_i and C_i+1 intersect on v_3^i=v_1^i+1 and v_4^i=v_2^i+1. Finally, observe that δ(T_k, T_k') = k for all k ≥ 1.
Recall that we can always transform a tree T into another tree T' using at most 2δ(T,T') non-crossing flips by flipping edges of the symmetric difference into border edges (with an iterative application of Lemma <ref>). The rest of the proof of Theorem <ref> consists in proving by induction on k that this strategy yields a minimal non-crossing flip sequence. First, we can easily see that one cannot transform T_1 into T'_1 with one non-crossing flip, which proves the case k=1. For the induction, consider an integer k>1 and assume that for every ℓ < k, a minimal non-crossing flip sequence from T_ℓ to T_ℓ' has length at least 2ℓ. Let us first remark that the following holds.
If a non-crossing flip sequence S from T_k to T_k' does not modify at least one common chord e, then S has length at least 2k. By construction, there exists i < k such that e is a chord with both endpoints in C_i and C_i+1. Let A and B be the two sides of e such that C_i ⊆ A and C_i+1⊆ B. Since e is not modified during S, no step in S removes an edge in A to add an edge in B, or conversely (otherwise one side would not be connected anymore). We can partition the sequence S into two subsequences S_A and S_B, where S_A is restricted to flips between points in A and S_B to flips between points in B. Since e belongs to the tree at any step of S, we can first perform all the non-crossing flips in S_A and then all the non-crossing flips in S_B while keeping connectivity. Note that T_k[A],T_k'[A] and T_k[B],T_k'[B] induce a copy of T_i,T_i' and of T_k-i,T_k-i' respectively. By induction, |S_A| ≥ 2i and |S_B| ≥ 2(k-i). Thus, S has length at least 2k.
If a non-crossing flip sequence S from T_k to T_k' modifies every common chord, then S has length at least 2k. In the non-crossing flip sequence S, we have to remove and add every common chord of T_k and T'_k (there are k-1 of them), to remove the chords of T_k ∖ T_k' and to create the chords of T_k' ∖ T_k.
Thus, S must perform at least 2(k-1) + 2k = 4k - 2 edge additions and removals in total, and hence contains at least 2k - 1 non-crossing flips. Since the first non-crossing flip cannot directly create a chord of T'_k, there is also an edge e' ∉ T_k∪ T'_k, not accounted for above, that has to appear and be removed in S. Therefore S contains at least 2k non-crossing flips.
§.§ Flips
The goal of this part is to prove Theorem <ref>. The proof also proceeds by induction, but (i) the construction has to be different and (ii) in order to prove that the result holds, one has to analyze it with more involved arguments.
Construction of the trees. Let us denote by T_1 and T'_1 the pair of non-crossing spanning trees on a convex set C of size 8 represented in Figure <ref>. Note that we have δ(T_1,T_1')=3. For every k, we denote by T_k, T_k' the pair of non-crossing spanning trees obtained by taking k disjoint copies of T_1,T_1' and identifying the points v_7 and v_8 of the i-th copy respectively with the points v_2 and v_1 in the (i+1)-th copy for i<k. (Note that the identification is performed upside down, which will be of importance in the proof, see Figure <ref> for an illustration with i=2.) We again define C_i as the set of points of the i-th copy, and v_j^i the point corresponding to v_j in C_i. Observe that δ(T_k, T_k') = 3k for all k ≥ 1.
Properties of a minimal flip sequence. We first claim that for every k≥ 1, there is a flip sequence from T_k to T_k' of length 5/3δ(T_k, T_k') = 5k. Indeed, the following flip sequence gives a transformation from T_1 to T_1': we perform in order the flips v_6v_1 ⇝ v_2v_5, v_3v_8 ⇝ v_4v_7, v_1v_8 ⇝ v_4v_5, v_2v_5 ⇝ v_2v_4, and finally v_4v_7 ⇝ v_5v_7. We can adapt this flip sequence for every k>1 between T_k and T'_k into a sequence of length 5k by applying the former in each copy of T_1 and T_1' independently.
The rest of the proof of Theorem <ref> consists in proving by induction on k that the above mentioned sequences are minimal. First, we prove the base case k=1. A minimal flip sequence between T_1 and T'_1 has length at least 5. Since every chord of T_1 crosses all chords of T_1', the first two flips cannot create a chord of T_1'. Thus, after the first two steps, the symmetric difference still contains at least three edges of T_1'. Hence, a flip sequence between T_1 and T'_1 has length at least 5 = 5/3δ(T_1, T_1').
Let k>1 be such that for every ℓ < k, a minimal flip sequence from T_ℓ to T_ℓ' has length at least 5ℓ. Following the exact same arguments as in Lemma <ref>, we can derive the following. If a flip sequence S from T_k to T_k' does not modify at least one common chord e, then S has length at least 5k. Therefore, it only remains to show that flip sequences that modify every common chord of T_k and T'_k have length at least 5k. Let S be such a sequence. We use a more involved version of the counting argument presented in Lemma <ref>. More precisely, we distribute one unit of weight to a subset of C_1, …, C_k for every flip of S. We will essentially[What we will prove is actually slightly weaker, since we will only ensure that the total weight is at least 5k-3/4; but since the weight has to be an integer, it will be enough to conclude.] show that the total weight given by S to every set C_i is at least 5, which ensures that S has length at least 5k. In other words, we will prove that the following holds: Let S be a minimal flip sequence between T_k and T_k' such that all the common chords are modified. Then, S has length at least 5k.
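In outline, the claims below account for this weight as follows: the additions and deletions of the chords of T_k ∖ T_k' and T_k' ∖ T_k contribute at least 3k in total, the common chords contribute at least (k-2)·1 + 2·(1/2) = k-1, the intermediate edges contribute at least k, and one extra phase contributes 1/4. Since every flip distributes exactly one unit of weight (1/2 per phase), the length of S equals the total weight distributed, hence it is at least 3k + (k-1) + k + 1/4 = 5k - 3/4, and being an integer it is at least 5k.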
Assignment of weights. Let S be a minimal flip sequence from T_k to T_k' modifying all common chords. Recall that during a step, one edge is added and one is removed. We call the addition of an edge or the removal of an edge a phase of the flip sequence (each step then consists of two phases). We distribute 1/2 for each phase as follows. Consider a phase of a step of S in which an edge e = uv is added or removed. Let i and j be the minimal indices such that u ∈ C_i and v ∈ C_j, with i ≤ j by symmetry. The sequence S gives a total weight of 1/2 for this phase according to the following rules:
* if u also belongs to C_i+1 and v ∈⋃_ℓ>iC_ℓ, S gives weight 1/4 from u to C_i+1; otherwise, S gives weight 1/4 from u to C_i.
* S gives weight 1/4 from v to C_j.
Note that, if uv is a common chord of T_k and T'_k, then i = j and both C_i and C_i+1 receive weight 1/4. The rest of the proof consists in counting, in different claims, how much weight is given during all the phases of S.
Weight assignment for disjoint and common chords. For every i, S gives weight 3 to C_i because of the addition of the chords of T_k' ∖ T_k and the deletion of the chords of T_k ∖ T_k' with both endpoints in C_i. Each set C_i receives weight 1/2 when a chord of T_k ∖ T_k' with both endpoints in C_i is deleted, and each set C_i receives the same amount when a chord of T'_k ∖ T_k with both endpoints in C_i is added. Since the chords of T_k' ∖ T_k have to be added and the chords of T_k ∖ T_k' have to be deleted during S, and there are six such chords, the conclusion follows.
For every 1<i<k, S gives weight 1 to C_i because of the addition and deletion of the common chords of T_k and T_k'. The sets C_1 and C_k receive weight 1/2. Recall that we suppose that the minimal flip sequence S changes all the common chords. Each set C_i receives weight 1/4 when a chord in T_k ∩ T'_k with both endpoints in C_i is deleted, and C_i receives the same amount when such a chord is added back. Since S modifies all common chords, and there are two such chords for 1 < i <k and one otherwise, the conclusion follows.
Claim <ref> and Claim <ref> ensure that S already gives a total weight of at least 4k - 1. We now aim at finding k+1 additional units of weight. This weight can only come from phases of S involving intermediate edges, that are edges not appearing in T_k nor T'_k.
Weight assignment for intermediate edges. The core of the proof consists in proving the following claim: For each i, there exist two distinct edges e_1, e_2 that are not in T_k ∪ T_k', such that S gives weight 1 to C_i because of the addition and the deletion of e_1 and e_2. Moreover, C_i receives this weight from two endpoints u_1 ∈ e_1 and u_2 ∈ e_2 which are distinct from v_2^j and v_7^j for every j ≤ k.
Recall that, since all the edges of T_k ∖ T_k' with both endpoints in C_i pairwise cross the edges of T_k' ∖ T_k with both endpoints in C_i, we have to modify all the edges of T_k ∖ T_k' with both endpoints in C_i before creating an edge of T_k' ∖ T_k with both endpoints in C_i. Consider the last step where a chord e of T_k ∖ T_k' with both endpoints in C_i is removed during S, and let us denote by T the tree just before removing e. Note that e is the only chord of T ∩Δ(T_k,T'_k). We want to prove that there are two different edges e_1 and e_2 in T ∖ (T_k' ∪ T_k), each having an endpoint, different from v_2^i and v_7^i, that gives weight 1/4 when e_1 and e_2 are added and removed. We distinguish two cases (see Figure <ref> for an illustration).
Case 1: e = v_1^iv_8^i. By connectivity, there exists an edge e_1 (resp.
e_2) in T ∖ (T_k ∪ T_k') with exactly one endpoint in { v_5^i, v_6^i } (resp. { v_3^i, v_4^i }). Note that e_1, e_2 and v_2^iv_7^i are pairwise distinct since e separates {v_3^i,v_4^i,v_7^i} from {v_2^i,v_5^i,v_6^i}.
Case 2: e is either v_1^iv_6^i or v_3^iv_8^i. Up to symmetry, we can assume that e=v_3^iv_8^i. Let A be the side of e=v_3^iv_8^i containing v_1^i. By connectivity, there exist two edges e_1 (resp. e_2) in T∖ (T_k ∪ T_k') with exactly one endpoint in {v_3^i,v_8^i } (resp. { v_5^i, v_6^i }) and the other endpoint in A. If e_1 ≠ e_2, the conclusion follows. Otherwise, there exists another edge e'_2 in T ∖ (T_k ∪ T_k') with exactly one endpoint in { v_3^i,v_5^i,v_6^i,v_8^i } and the other endpoint in A, which completes the proof.
In both cases, we have proved the existence of the two distinct edges e_1 and e_2 in T ∖ (T_k' ∪ T_k). Since e_1 and e_2 have to be both created and removed in S, S gives weight 1 to C_i because of the addition and deletion of e_1 and e_2.
All the claims above put together ensure that the total weight given by S over all phases is at least 5k - 1. Since the total weight is an integer, in order to ensure that the flip sequence has length at least 5k, we only need to find some positive additional weight given by S. The flip sequence S gives an additional weight of 1/4 to some set C_i. Let T be the first tree of S where an edge e_0 of T_k ∩ T_k' has been removed. Let us denote by i the index such that e_0 is in both C_i and C_i+1. Then, there is an edge e^* = u^*w^* ≠ e_0 in T such that u^* = v_7^i = v_2^i+1. Since e^* ≠ e_0 and v_7^i is an endpoint of e^*, e^* is not in T_k. Observe that S gives weight 1/4 to either C_i or C_i+1 from v_7^i when adding e^*. This weight has not been counted in Claim <ref> (by assumption on u_1 and u_2). If e^* is not in T_k', then this weight was not counted by Claim <ref> either and we are done. Otherwise, e^* is either v_2^i+1v_4^i+1 or v_5^iv_7^i, say the latter by symmetry. First note that the chords of T_k ∖ T_k' with both endpoints in C_i are not in T (since e^* crosses all these edges). By connectivity, since no common chord has been removed in S before e_0 and since T does not contain any chord of T_k, at least three edges of T ∖ T_k distinct from e^* have both endpoints in C_i. In particular, at least one of them, denoted by e', is not in T_k'. Since e' is not in T_k ∪ T_k' and has both endpoints in C_i, the weight given by S to C_i when removing e' was not entirely counted by Claim <ref>. Indeed, this claim only considers the contribution of exactly one endpoint of each edge e_1 and e_2, not both. All the previous claims ensure that the weight given by S is at least ⌈ 5k- 3/4⌉ = 5k, which completes the proof of Lemma <ref>.
§.§ Rotations
In this section, we also give a family (T_k, T_k')_k ∈ℕ^* that satisfies the conclusion of Theorem <ref>. We consider the same inductive construction as before, but start with a slightly different pair (T_1, T_1'): the edge v_1v_8 is replaced by v_3v_6 in T_1, as illustrated in Figure <ref> (the graph T_1' remaining the same). For every k ≥ 1, there is a rotation sequence from T_k to T_k' of length 7k. Indeed, the following rotation sequence gives a transformation from T_1 to T_1': v_1v_6 ⇝ v_1v_5, v_3v_6 ⇝ v_3v_5, v_3v_5 ⇝ v_5v_8, v_5v_8 ⇝ v_5v_7, v_3v_8 ⇝ v_4v_5, v_1v_5 ⇝ v_1v_4 and finally v_1v_4 ⇝ v_2v_4. This rotation sequence can indeed be generalized into a rotation sequence between T_k and T'_k of length 7k by rotating in each copy of T_1 and T_1' independently.
The rest of the proof consists in proving by induction on k that such sequences are minimal. For the base case, one can check that seven rotations are needed to transform T_1 into T'_1 similarly to the previous sections, but the case analysis is quite tedious. We rather run an exhaustive computer search[The code can be found at https://github.com/tpierron/reconf-nc-trees] which checks that 7 is indeed the length of a minimal rotation sequence. A minimal rotation sequence between T_1 and T'_1 has length at least 7.
Assume now that k > 1, and that, for every ℓ < k, any rotation sequence between T_ℓ and T_ℓ' has length at least 7ℓ. Let S be a minimal rotation sequence from T_k to T_k'. Following the steps of the previous parts, we may assume that S modifies every common chord of T_k and T_k', for otherwise we can mimic the proof of Lemma <ref> and directly conclude by induction. The rest of the proof is different from the proofs of the previous sections. While in the previous sections we only counted edges created or removed during the sequence and proved that this number is large, we use a more involved argument here, consisting in proving directly that the number of rotations is large. We start with an easy claim about the number of rotations that involve the edges of Δ(T_k,T'_k).
The following subset S_0 of S gives 6k pairwise distinct rotations:
* for each edge of T_k∖ T'_k, S_0 contains the first rotation that removes it and,
* for each edge of T'_k∖ T_k, S_0 contains the last rotation that creates it.
Note that the chords of T_k ∖ T_k' have no common endpoint with the chords of T_k' ∖ T_k, hence a rotation that removes a chord of T_k∖ T_k' cannot create a chord of T_k' ∖ T_k. Therefore S_0 contains 6k rotations, which concludes.
It remains to prove that there are k rotations that have not yet been counted, i.e. k steps in S ∖ S_0. First, we prove the existence of rotations in S involving C_i and C_i+1 which have special properties. For each i∈[1,k-1], let V_i :=( C_i ∪ C_i+1 ) ∖{ v_1^i, v_2^i, v_7^i+1, v_8^i+1}. The sequence S contains at least one of the following:
(1) a rotation v_7^iv_8^i⇝ v_5^iv_7^i (or v_7^iv_8^i⇝ v_2^i+1v_4^i+1 by symmetry) not in S_0,
(2) a rotation v_3^iv_8^i⇝ v_7^iv_8^i (or v_1^i+1v_6^i+1⇝ v_7^iv_8^i by symmetry) not in S_0,
(3) a rotation that removes v_7^iv_8^i to add an edge that is not in T_k' ∖ T_k,
(4) a rotation that removes an edge that is not in T_k ∖ T_k' to add v_7^iv_8^i,
(5) two rotations, one that removes v_8^iu with u ∈{v_5^i, v_6^i} (or v_1^i+1u with u ∈{v_3^i+1, v_4^i+1} by symmetry) to add an edge that is not in T_k' ∖ T_k with both endpoints in V_i, and the other that removes an edge that is not in T_k ∖ T_k' with both endpoints in V_i to add v_2^i+1w with w ∈{v_5^i+1,v_6^i+1} (or v_7^iw with w ∈{v_3^i, v_4^i} by symmetry).
Assume by contradiction that S does not contain such a rotation for some integer i, and let us denote by e the edge v_7^iv_8^i=v_1^i+1v_2^i+1 (which is in C_i∩ C_i+1). Recall that all the common chords of T_k and T_k' are rotated during the sequence. Let T be the first tree in S that does not contain e, and let e ⇝ e' be the rotation applied to obtain T. Since (3) does not hold, e' must be an edge of T_k'. Thus, e' is either v_2^i+1v_4^i+1 or v_5^iv_7^i, say v_5^iv_7^i by symmetry. Since (1) does not hold, the flip e⇝ e' is in S_0, hence v_5^iv_7^i is in every tree obtained after T during S. Let us prove that e is rotated exactly once.
Assume by contradiction that it is rotated a second time (after being added back) into an edge f, which is in T_k' ∖ T_k since (3) does not hold. Since (1) does not hold and because of the existence of e', f=v_2^i+1v_4^i+1, and every tree obtained afterwards contains v_5^iv_7^i and v_2^i+1v_4^i+1. Since e∈ T'_k, there is an edge e” rotated into e after creating v_2^i+1v_4^i+1; let T” be the tree obtained from T_k just before this rotation takes place. Since v_5^iv_7^i and v_2^i+1v_4^i+1 are in T”, e” is neither v_3^iv_8^i nor v_1^i+1v_6^i+1. So e”∉ T_k ∖ T_k', which contradicts (4). So, from now on, we will assume that the edge e is removed and added exactly once.
Let T' be the tree in S before adding back e, and let e^* ⇝ e be the rotation applied at that step. Since (4) does not hold, e^* ∈ T_k∖ T'_k, hence e^* is either v_3^iv_8^i or v_1^i+1v_6^i+1. Since T' appears after T in S, v_5^iv_7^i is in T', so T' cannot contain v_3^iv_8^i, and then e^* = v_1^i+1v_6^i+1. Since (2) does not hold, e^*⇝ e is in S_0 and e^* = v_1^i+1v_6^i+1 belongs to all the trees before T' in S.
Since v_1^i+1v_6^i+1 and e are in every tree obtained before T during S, and e ⇝ v_5^iv_7^i is performed to obtain T, there is a path in T connecting {v_1^i+1, v_6^i+1} and {v_5^i, v_7^i }. Since this path is in the tree obtained before T, this path does not cross v_7^iv_8^i, and is not included in C_i+1. So the path is included in ⋃_j ≤ iC_j and T contains an edge v_8^iu such that u ∈{v_5^i, v_6^i}. This edge is not in T_k nor T_k', thus it must be rotated during S after obtaining T. Since (4) does not hold, it is not rotated into e. And, since v_5^iv_7^i is in all the trees after T during S and is the only edge of T_k' ∖ T_k that can share an endpoint with v_8^iu, the edge v_8^iu cannot be rotated into an edge of T_k' ∖ T_k. Since e is removed exactly once and (4) does not hold, either v_6^i+1v_1^i+1 or e belongs to all the trees obtained after T. So v_8^iu is rotated into an edge b that is not in T_k' ∖ T_k, with both endpoints in V_i.
Likewise, v_5^iv_7^i and e are in every tree obtained after T' during S, and v_5^iv_7^i and v_6^i+1v_1^i+1 are in T'. So T' contains an edge v_2^i+1w such that w ∈{v_5^i+1, v_6^i+1}. This edge is not in T_k nor in T'_k, thus it must have been added during S. Since (3) does not hold, it has not been added by rotating e. And, since v_6^i+1v_1^i+1 is in each tree obtained before T' during S and is the only edge of T_k ∖ T_k' that can share an endpoint with v_2^i+1w, v_2^i+1w cannot have been added by rotating an edge of T_k ∖ T_k'. Thus, v_2^i+1w has been added by rotating an edge that is not in T_k ∖ T_k', with both endpoints in V_i, which contradicts (5).
Note that for each i∈[1,k-1], several of the previous cases may arise. For p∈[1,5], denote by n_p the number of times case (p) arises. The following lemma shows that all the rotations given by Lemma <ref> are pairwise distinct. This already gives k-1 rotations in S ∖ S_0, and even k under the right conditions. There are at least n_1+n_2+n_3+n_4+2n_5 rotations in S ∖ S_0. We first observe that Lemma <ref> provides rotations not in S_0. This is clear for items (1) and (2): they provide a rotation that does not lie in S_0. Items (3) and (4) provide a rotation that does not involve edges in Δ(T_k,T'_k), thus not in S_0. Finally, item (5) provides two rotations that also do not involve edges in Δ(T_k,T'_k), hence again not in S_0. Moreover, one can easily check that the rotations obtained applying Lemma <ref> to every i∈[1,k-1] are pairwise distinct.
This concludes, since items (1) to (4) each provide one rotation and item (5) provides two of them. The previous lemmas ensure that a minimal rotation sequence S modifying all common chords contains at least 7k - 1 rotations, and that if (5) happens at least once, then S has length at least 7k. So from now on, we can assume that (5) never happens, and moreover that for every i∈[1,k-1], only one item among (1)-(4) happens. Denote by r_i the corresponding rotation. Note that r_i necessarily impacts the common edge v_7^iv_8^i, hence we say that r_i (and by extension v_7^iv_8^i) has type (p) when r_i was provided by item (p). It now remains to find a single additional rotation to conclude.
For every i∈[1,k-1], each tree obtained during S contains at least one edge among {v_7^iv_8^i, v_3^iv_8^i, v_1^i+1v_6^i+1, v_5^iv_7^i, v_2^i+1v_4^i+1}. Observe that the chord v_7^iv_8^i can only be deleted by a rotation in S_0, or by r_i (if the chord has type (1) or (3)). In the first case, the rotation creates e'∈{v_5^iv_7^i,v_2^i+1v_4^i+1}, and by definition of S_0, e' is not deleted anymore afterwards. Assume that at some point, the edge v_7^iv_8^i is deleted because of r_i. Then, v_7^iv_8^i has to be created later, and it must be using a rotation from S_0 (because v_7^iv_8^i does not have type (2) nor (4)), therefore flipping e”∈{v_3^iv_8^i,v_1^i+1v_6^i+1}. By definition of S_0, all the trees obtained before recreating v_7^iv_8^i contain e”. Afterwards, all the rotations involving v_7^iv_8^i are in S_0, hence we can conclude using the previous case.
S ∖ S_0 contains a rotation that we have not already counted in Lemma <ref>. Assume by contradiction that every rotation from S ∖ S_0 has already been counted in Lemma <ref>. For i∈[1,k-1], let e be the edge affected by r_i different from the common chord v_7^iv_8^i. We say that e interferes with C_i when the endpoint of e not in {v_7^i,v_8^i} is on the left of v_7^iv_8^i, and that it interferes with C_i+1 otherwise. Since there are k-1 common chords, there is an index p such that no r_i interferes with C_p.
There is no common chord that is rotated into v_5^pv_7^p or into v_2^pv_4^p, and there is no common chord that has been added by rotating v_3^pv_8^p or v_1^pv_6^p. Suppose that the common chord e=v_7^pv_8^p is rotated into v_5^pv_7^p. Let T be the tree obtained just before rotating e into v_5^pv_7^p. Since no rotation of type (1) interferes with C_p, the rotation v_7^pv_8^p ⇝ v_5^pv_7^p is in S_0 and v_5^pv_7^p is in every tree obtained after T during S. By connectivity, there is an edge xy ≠ v_5^pv_7^p in T such that x ∈{ v_7^p, v_8^p} and y ∈⋃_j ≤ pC_j ∖{ v_7^p, v_8^p }. Since v_5^pv_7^p can be added to T ∖{ e}, xy is not in T_k ∪ T_k'. We distinguish two cases depending on whether x is v_8^p or v_7^p.
Case 1: x = v_8^p. Then, y ∈{ v_5^p, v_6^p }. So xy is removed during S after obtaining T. Since v_5^pv_7^p is not removed after obtaining T, there is a rotation that removes xy. This rotation is not in S_0 since it cannot create an edge of T_k' ∖ T_k, and it was not counted by Lemma <ref> since otherwise it would interfere with C_p.
Case 2: x = v_7^p. By definition of p, xy has not been added by rotating a common chord with both endpoints in C_p. Moreover, by Lemma <ref>, xy has not been added by rotating a common chord with an endpoint in C_j with j≤ p-2. Therefore, we may assume that xy has been added by a rotation of S_0 (otherwise, we found an additional rotation). Since xy∉ T'_k, this rotation deleted a chord of T_k ∖ T_k'.
In particular, since x = v_7^p is not an endpoint of a chord of T_k ∖ T_k', y is an endpoint of a chord of T_k ∖ T_k'. Similarly, since xy∉ T'_k, the edge xy is removed during S after obtaining T. Applying the same argument, we get that xy must be removed by a rotation from S_0 that creates an edge e' of T'_k∖ T_k. Moreover, e' is not v_5^pv_7^p (since v_5^pv_7^p is not removed after obtaining T) nor v_1^p+1v_6^p+1 (by connectivity). Thus, this rotation rotates around y, hence we also get that y is an endpoint of a chord of T_k' ∖ T_k. This is a contradiction since chords of T_k∖ T'_k and of T'_k∖ T_k do not share any endpoint. Hence, there is no common chord that is rotated into v_5^pv_7^p, nor into v_2^pv_4^p by symmetry. Moreover, up to exchanging T_k and T_k', this also proves that there is no common chord that has been added by rotating v_3^pv_8^p or v_1^pv_6^p.
The rotations of S ∖ S_0 involving the common chords in C_p have type (3) or (4). By symmetry, we consider the case p<k and the common chord e=v_7^pv_8^p. Assume by contradiction that r_p has type (1) or (2). Then, since the only rotations involving e are either r_p or in S_0, e is rotated into a chord of T_k' ∖ T_k and added by rotating a chord of T_k ∖ T_k'. By Claim <ref>, e is rotated into v_2^p+1v_4^p+1 and added by rotating v_1^p+1v_6^p+1. Moreover, at least one of these two rotations must lie in S_0, say e⇝ v_2^p+1v_4^p+1 (the other case being similar). In particular, all the trees obtained during S contain v_2^p+1v_4^p+1 afterwards. However, this prevents creating the chord v_1^p+1v_6^p+1 again, and thus recreating e, a contradiction.
The chords of T_k ∖ T_k' with both endpoints in C_p are rotated into edges with both endpoints in C_p. Assume by contradiction that S rotates a chord uv of T_k ∖ T_k' into a chord vw, where u,v ∈ C_p and w ∉ C_p. Let T be the tree obtained after performing uv ⇝ vw during S. By symmetry, say w is in ⋃_j>pC_j. We now distinguish two cases depending on the type of r_p ((3) or (4) by Claim <ref>).
Case 1: r_p has type (3). By Claim <ref>, v_7^pv_8^p is added back by a rotation r in S_0 which removes v_1^p+1v_6^p+1 for the first time in S. In particular, vw cannot be v_1^p+1v_6^p+1, nor cross v_1^p+1v_6^p+1, hence w ∈{ v_5^p+1, v_6^p+1}. Since v is in C_p and is an endpoint of a chord in T_k ∖ T_k', vw either is v_8^pv_5^p+1 or crosses v_7^pv_8^p. Thus, vw is not a chord of T_k' and must be removed before performing r during S. However, vw cannot be rotated into a chord of T_k' before performing r during S (since v is not an endpoint of a chord in T'_k∖ T_k, and the only chords of T'_k∖ T_k containing w cross v_1^p+1v_6^p+1). Thus, vw is rotated into a chord not in T_k', and we found an additional rotation.
Case 2: r_p has type (4). By Claim <ref>, v_7^pv_8^p is removed by a rotation r in S_0 which adds v_2^p+1v_4^p+1 for the last time in S. So vw cannot be rotated into v_2^p+1v_4^p+1 and does not cross v_2^p+1v_4^p+1, hence w ∈{ v_3^p+1, v_4^p+1}. In particular, neither v nor w is an endpoint of a chord of T_k'∖ T_k, except possibly of v_2^p+1v_4^p+1. Therefore, vw cannot be rotated into a chord of T_k' ∖ T_k nor into v_1^pv_2^p by choice of p. This shows that vw must be rotated into v_7^pv_8^p, hence v = v_8^p and u = v_3^p since w ∉ C_p and uv ∈ T_k ∖ T_k'. Since vw∉ T_k∖ T'_k, the rotation vw⇝ v_7^pv_8^p is precisely r_p. Observe that the rotations r, uv⇝ vw and r_p must then occur in that order in S. Let T be the tree obtained after performing r.
By construction, uv⇝ vw is a rotation from S_0, hence all the trees obtained until T contain uv. In particular, T contains a path connecting {u,v} and {v_2^p+1,v_4^p+1}. Since T is obtained using r, this path must be included in C_p+1 by connectivity, and cannot cross nor contain v_7^pv_8^p. The first edge of this path must thus be v_1^p+1z for some z∈{v_3^p+1,v_4^p+1}. This edge is not in T'_k, hence it has to be removed by a rotation r'∈ S. Since every tree obtained after T contains v_2^p+1v_4^p+1, the edge v_1^p+1z cannot be rotated into a chord of T'_k∖ T_k, and r'∉ S_0. Moreover, we have r'≠ r_p, hence r' is a rotation from S ∖ S_0 that was not counted by Lemma <ref>, which concludes.
Consider the three edges e_1, e_2, e_3 obtained by performing the rotations of S_0 which remove the chords of T_k ∖ T_k' with both endpoints in C_p for the first time. Since these edges have been obtained by rotating chords of T_k ∖ T_k', they are not chords of T_k' ∖ T_k. By Claim <ref>, e_1,e_2,e_3 are not common chords either. And finally, these edges are not common border edges (otherwise, we applied before a rotation that rotated the common border edge into a chord that is not in T'_k ∖ T_k). Thus, these edges have to be removed during S. By construction of C_p, e_1, e_2, e_3 are not rotated into common chords. Hence, the rotations that remove these edges are in S_0, and thus each adds a chord of T_k' ∖ T_k for the last time. Since the common chords with endpoints in C_p are either rotated into v_2^p+1v_4^p+1 or v_5^p-1v_7^p-1, or added by rotating v_1^p+1v_6^p+1 or v_8^p-1v_3^p-1, the edges e_1,e_2,e_3 cannot be rotated into chords of T_k' ∖ T_k with both endpoints in C_p-1 or with both endpoints in C_p+1. So the edges e_1, e_2, e_3 are rotated into distinct chords of T_k' ∖ T_k with both endpoints in C_p. This gives a rotation sequence from T_k[C_p] to T_k'[C_p] using 6 rotations, which contradicts Lemma <ref>.
The previous lemmas ensure that a minimal rotation sequence S contains at least 7k rotations, which completes the proof of Theorem <ref>.
Acknowledgments. The first and third authors would like to thank Valentin Gledel for interesting discussions on the problems at an earlier stage of this project.
 | http://arxiv.org/abs/2310.18518v2 | {
"authors": [
"Nicolas Bousquet",
"Lucas De Meyer",
"Théo Pierron",
"Alexandra Wesolek"
],
"categories": [
"cs.CG",
"cs.DM",
"F.2.2"
],
"primary_category": "cs.CG",
"published": "20231027223044",
"title": "Reconfiguration of plane trees in convex geometric graphs"
} |
Tuning vortex critical velocity in Mo_2N thin films via striped magnetic domain configuration

Formerly: Katharina Kann

With recent advances in large language models (LLMs), the concept of automatically generating children's educational materials has become increasingly realistic. Working toward the goal of age-appropriate simplicity in generated educational texts, we first examine the ability of several popular LLMs to generate stories with properly adjusted lexical and readability levels. We find that, in spite of the growing capabilities of LLMs, they do not yet possess the ability to limit their vocabulary to levels appropriate for younger age groups. As a second experiment, we explore the ability of state-of-the-art lexical simplification models to generalize to the domain of children's stories and, thus, create an efficient pipeline for their automatic generation. In order to test these models, we develop a dataset of child-directed lexical simplification instances, with examples taken from the LLM-generated stories in our first experiment. We find that, while the strongest-performing lexical simplification models do not perform as well on material designed for children due to their reliance on LLMs, a model that performs well on general data strongly improves its performance on child-directed data with proper finetuning, which we conduct using our newly created child-directed simplification dataset.
§ INTRODUCTION
Large language models (LLMs), such as GPT-3 or ChatGPT, are able to produce stories that are far more coherent and fluent than stories generated by state-of-the-art models from even a couple of years ago, such as GraphPlan <cit.> and Plan-and-Write <cit.>. However, most of the already limited work on automatic story generation focuses on stories for an adult audience. Children's stories are not frequently a topic of interest, despite how crucial early literacy is to future success <cit.>. As we will describe in more detail in Section <ref>, children's stories are important for both entertainment and education. Automatic story generation for children increases their potential for broader impact by making it possible to personalize stories, making them increasingly relevant for each individual child. As one example, tailoring stories to a specific child's interests could allow them to become more easily interested in reading, which could improve their literacy skills. Another possible application could be the teaching of specific target words to preschoolers via stories. In this paper, we will keep the latter use case in mind.
While, at first look, stories that have been generated by LLMs seem (and generally are) better than previous attempts, it has not been systematically evaluated whether their children's stories adhere to what one would expect from the genre. In this paper, we focus on the simplicity of generated stories as measured by the age of acquisition of their individual words. Specifically, we assume that we are interested in generating stories to teach words with an age of acquisition (AoA) from 6 to 9 to children who are preschool-aged (around 2-5). Including other complex words or concepts, however, can make story-based vocabulary learning more difficult, as they draw attention away from target words and make context clues harder to use. Thus, we want to include the target words in the stories, but we do not want any other words to have an AoA ≥ 6.
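As an illustration of how this constraint can be checked in practice (this is only a sketch, not the exact procedure used in this paper; the file name and column names are placeholders for whichever AoA norms are used, e.g., the Kuperman et al. ratings), a generated story can be screened for overly difficult non-target words as follows:

import csv
import re

def load_aoa_norms(path):
    # Load word -> age-of-acquisition ratings from a CSV file.
    # Assumes columns named "Word" and "AoA" (placeholder names).
    norms = {}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            try:
                norms[row["Word"].lower()] = float(row["AoA"])
            except (KeyError, ValueError):
                continue  # skip rows without a usable numeric rating
    return norms

def flag_hard_words(story, norms, target_words, max_aoa=6.0):
    # Return the non-target words in the story whose estimated AoA is >= max_aoa.
    targets = {w.lower() for w in target_words}
    tokens = re.findall(r"[a-zA-Z']+", story.lower())
    return sorted({t for t in tokens
                   if t not in targets and norms.get(t, 0.0) >= max_aoa})

Words missing from the norms are simply not flagged here; any real screening pipeline would also need a policy for such out-of-vocabulary words.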
For the first experiment conducted in this paper, found in Section <ref>, we use 3 different LLMs for generation and ask the following research questions: (RQ1) What is the simplicity of generated stories for different LLMs? (RQ2) How do different prompts, with different descriptions of the target age group, influence story simplicity for different models? We find that, in spite of LLMs' growing abilities for story generation, they struggle to control for age-appropriate simplicity; in comparison to our dataset of human-generated stories for the same demographic, the models' stories exhibit scores that are over 17% worse for all readability metrics tested (Flesch Reading Ease, Flesch-Kincaid Level, Gunning-Fog Index, and Automated Readability Index).
Motivated by those findings, in Section <ref>, we then turn towards simplification models. Simplification models have mostly[Exceptions exist but are relatively old and use since-deprecated technology, such as <cit.> and <cit.>.] been developed for text directed at adults. Here, we investigate two leaders of the TSAR-2022 Shared Task on Multilingual Lexical Simplification <cit.> in English, UniHD <cit.> and UofM&MMU <cit.>, with regard to their ability to generalize to the challenging out-of-domain setting of children's stories. For this, we generate a new dataset of annotated instances for lexical simplification, using age of acquisition as a metric for identifying complex words and pulling examples from our newly created corpus of LLM-generated stories for human annotators to simplify. We then use this dataset to evaluate the models' performance on automatically generated children's stories. Our experiments show that, while UniHD performs considerably worse, achieving an accuracy of only 30.52% in comparison to the 42.89% accuracy it achieves on the TSAR-EN dataset for general lexical simplification, the ordinarily weaker UofM&MMU model is actually able to perform better on the child-directed dataset when finetuned, achieving an accuracy of 47.37% compared to its 28.95% on TSAR. We conclude that simplification models trained on adult data are suitable to simplify automatically generated children's stories only when properly finetuned, i.e., the best-performing models for adult text do not generalize well without additional training, but finetuning is effective for our domain, even with limited data.
To summarize, our contributions are as follows: (1) We examine and compare the ability of several LLMs to adjust for age-appropriate levels of readability and lexical simplicity in automatically generated and individualized educational stories. (2) We explore how effectively state-of-the-art lexical simplification models perform in the domain of children's stories in order to supplement the inability of LLMs to sufficiently adapt their vocabularies. (3) We provide a public dataset for the simplification of child-directed text in order to promote the advancement of models for this purpose.
§ BACKGROUND: WHY SHOULD LLMS GENERATE CHILDREN'S STORIES?
In child development, small early differences can compound into big long-term effects. One example of this is the relationship between early vocabulary size, literacy, and later academic achievement. Early vocabulary size is strongly related to reading ability in 2nd and 3rd grade <cit.>, and even when controlling for vocabulary size in Kindergarten, reading ability in 4th grade is associated with vocabulary growth through 10th grade <cit.>.
Although there is a recognition of this relationship, large gaps in vocabulary size persist into elementary school and beyond: e.g., <cit.> estimate the vocabulary of 4th graders in the lowest quartile to be less than half the size that of 4th graders in the highest quartile, and a similarly sized gap is found in empirically-based estimates throughout adulthood <cit.>.Many vocabulary enrichment programs based on shared reading with a caregiver have been developed to try to address this gap, with mixed success. Early vocabulary interventions are generally based on storybook reading with a parent or teacher. A meta-analysis focusing on vocabulary intervention studies on children in pre-K and K concluded that although such interventions may increase oral language skills, they are not powerful enough to close the gap, even when implemented at this early age <cit.>. Experts agree that intensive, individual-level interventions would be necessary to make a difference but acknowledge that the infrastructure investment required for something on that scale would be substantial <cit.>.With recent improvements in LLMs, the concept of using natural language processing techniques to automate the child-by-child customization of educational materials has become increasingly realistic. In particular, state-of-the-art models have proven to be effective for generating individualized reading materials without the labor cost of having a human modify them by hand, making this process more practical in lower-resource settings facing educational discrepancies. Developing efficient pipelines for personalized vocabulary-enriching story generation is one way ito address this so called "achievement gap" in early childhood education, providing opportunities for individualized education where they otherwise may not be available.There are several challenges that should be addressed in order to make the automatic generation of individualized educational children's stories a more feasible idea. One primary concern is ensuring that the stories are simple enough to be understood by the target demographic. Stories which are too complex or which contain too many unknown words could make understanding the meanings of their target words based on the context provided more difficult. Additionally, studies show that having fewer unfamiliar detractors allows children to focus on and retain target words more effectively <cit.>. As such, it is important that automatic story generation models be able to control for the simplicity and readability of the stories outside of the words they intend to teach. Because of the potential personalized stories have for increasing both specific educational potential and overall engagement in reading, this paper focuses on how LLMs and lexical simplification models can be used for this purpose.§ RELATED WORK§.§ Automatic Story Generation The automatic generation of stories is a task which has seen vast changes in its methodology with the recent advent of LLMs, requiring increasingly less human intervention to mimic human-written stories. Even fairly recent automatic story generation models such as that by <cit.>, GraphPlan <cit.>, and the Plan-and-Write system <cit.> focused much of their effort on setting up scaffolding for the story ahead of time in order to ensure the coherence of the eventual generation model. 
With LLMs such as ChatGPT <cit.> and Bard <cit.>, however, systems for automatic story generation can begin to rely less on this strategy of planning story structure ahead of generation.More modern approaches that implement LLMs such as Future Sight <cit.> and MEGATRON-CNTRL <cit.> allow for more creative modification during and after generation. Wordcraft <cit.>, for instance, allows end users to collaborate with OpenAI's GPT-3 in order to continually modify stories throughout the process of their generation. One facet of story generation that has shown especially prevalent applications of late has been the generation of children's stories. Early childhood literacy has proven to be a significant indicator of a child's future academic success <cit.>, so emphasizing generation for childhood literary advancement is one way in which modern NLP techniques can be especially impactful. Though many story generation systems have aspects that can likely be transferred to children's stories, few studies intentionally focus their generation systems directly at children.§.§ Prompt-tuningA topic that has seen increasing amounts of attention as LLMs have become more and more popular is prompt-tuning, or the study of how the modification of prompts to LLMs affects their output. The concept of prompt-tuning in the sense in which it is used in this paper was originally proposed in <cit.>, who propose the idea of modifying prompts for GPT-3 in a way that affects the model's results quantifiably, similar to how parameter tuning affects ordinary machine learning models. Other papers such as <cit.> and <cit.> reaffirm the effectiveness of prompt-tuning for LLMs, demonstrating that it can have an even more significant effect than traditional parameter finetuning. This paper does not place a large emphasis on prompt-tuning, but we do perform prompt-tuning on a smaller scale to investigate how specifying different target groups affects the simplicity of generated stories. §.§ Lexical SimplificationLexical simplification describes the process of identifying words which are too complex for some target demographic and replacing them with synonyms which are easier to understand. Lexical simplification (LS) has been a commonly studied NLP tasks for several years, and early models such as <cit.> and <cit.> highlighted in particular the applications of lexical simplification for children or for non-native speakers <cit.>. More recent models such as LSBert <cit.>, which was created by finetuning the BERT masked language model on the task of lexical simplification, focus instead on the task of general LS, bringing up the question of whether these more modern, higher-performing models can be generalized to work as effectively on children-directed text.Even more recently, the TSAR-2022 Shared Task on Multilingual Lexical Simplification <cit.> has drawn more attention to the improvement of LS models such as the winning UniHD model <cit.>. This shared task also led to the development of the ConLS system <cit.>, which is a modified version of LSBert created after the TSAR shared task and obtains state-of-the-art results on the TSAR-2022 dataset.§ EXPERIMENT 1: ASSESSING READABILITYFirst, we investigate the readability of automatically generated stories for 1) different models and 2) a variety of prompts. We generate a total of 250 stories for each model: 50 per model–prompt combination.§.§ ModelsInstructGPT InstructGPT is a group of GPT-3 models finetuned via reinforcement learning from human feedback. 
Trained on 1.3 billion parameters, it was released by OpenAI in January 2022 as part of its series of generative pre-trained transformer (GPT) models. These models use data gathered by crawling the internet to predict how a series of text tokens should be completed <cit.>. The InstructGPT series of models are unique from other GPT-3 models in their intentional alignment with their purpose, which is completing text given a natural language instruction, as opposed to the inherent misalignment faced by models that just aim to predict statistically what word(s) should come next <cit.>. Specifically, this experiment implements OpenAI's Text-DaVinci-003 model, which can be used at the cost of $0.0200 per 1000 tokens. ChatGPT ChatGPT, or GPT-3.5-Turbo, is another model created by OpenAI using reinforcement learning from human feedback, released in November 2022 <cit.>. Unlike InstructGPT, ChatGPT is finetuned in a supervised setting by using human-created AI assistant dialogue samples, making it more equipped for dialogue. It is currently free to use as a part of its research preview, but its code is not yet publicly available. Vicuna Vicuna is a LLM created by finetuning the open-source LLaMa <cit.> on the ShareGPT dataset of user conversations. According to preliminary evaluations done using OpenAI's GPT-4, Vicuna is able to achieve 92% of the performance of ChatGPT <cit.>. Vicuna has the advantage over both GPT-3 and ChatGPT, however, that it can be used at no cost and has publicly available code. It is included in this study to test the capabilities of openly available LLMs to generate age-appropriate educational stories for preschoolers. This experiment uses the version of Vicuna with 7 billion parameters, which is built off of LLaMa's 7 billion parameter model.§.§ DatasetsAge of Acquisition Data The dataset used to identify words in the model-generated stories that should be simplified in order to reduce their complexity and increase their educational potential is the English Lexicon Project's Age of Acquisition dataset <cit.>. This dataset consists of over 31,000 words along with the estimated average age at which they are learned based on crowd-sourced data collected by researchers at six universities. Books for Preschoolers In order to have a set of stories with which to compare the ones generated by the above-described LLMs, we use the Books for Preschoolers dataset (BfP) from <cit.>. It consists of 1026 human-written stories intended for children ages 2-5. Of all the words in this corpus, 88.61% can be found in the Age of Acquisition dataset <cit.> which is used to test the simplicity of the stories generated by the LLMs. Among the words that could be found in the AoA data, 83.08% were below the age threshold we compare to in computer-generated stories, which is 6. The stories included in BfP are commercially-available, professionally-written picture books (i.e., books that have illustrations in every single page) intended to be read to preschoolers or by early readers, such as Good Night, Moon and The Very Hungry Caterpillar. Transcribed stories in the corpus contain an average of 52 sentences and 9.4 words per sentence. The authors themselves or the publishers designated the stories’ age range that qualifies them for inclusion in this dataset. §.§ Prompts With our prompts, we aim to provide the model with the necessary information for generation of stories around the target words. 
In addition, we want to encourage simplicity, i.e., stories that are easily understandable by young children. We assume our target group consists of children aged 6 or younger.We experiment with the following prompts: * Write a story for a preschooler containing the following words: w1, w2, w3, w4, w5* Write a story for a 3-year-old containing the following words: w1, w2, w3, w4, w5* Write a story for a 4-year-old containing the following words: w1, w2, w3, w4, w5* Write a story for a 5-year-old containing the following words: w1, w2, w3, w4, w5* Write a children's story containing the following words: w1, w2, w3, w4, w5 Target WordsWith the target demographic of preschool-aged children in mind, we select the target words for our LLM-generated stories from words in the AoA dataset with ages of acquisition between 6 and 9. Starting with all the words in this age range, we go through a specific filtering process that includes steps such as removing adverbs and words tagged with more than one part of speech, avoiding multiple words derived from the same lemma, and removing words with missing or low concreteness scores. The target words are then reviewed by multiple annotators and scored in three categories: learnability, imageability, and appropriateness. The guidelines relating to these categories (as they were presented to the annotators) are included in Appendix <ref>. At this point, only the highest scoring words in each category are kept, resulting in a list of 150 nouns, 50 verbs, and 50 adjectives. The complete list of target words can also be found in Appendix <ref>.§.§ MetricsCurrently, automatic readability metrics are limited and largely consist of ones that are significantly outdated <cit.>. These measures, although well-established and widely used, are coarse oversimplifications of language use. Since we are using several different measures consistently between the human- and computer-written stories, however, we believe these imperfect measures serve as a good starting point for comparison against the BfP corpus. As such, to judge the simplicity of the stories generated in Experiment 1, we use the following metrics. Average Age of Acquisition We go through the stories generated by each model and find each word's age of acquisition. We then take the average from all of these words so we can judge the ability of each model to simplify their lexicon to reflect that of their target demographic.Average Highest Age of Acquisition We check each story generated by a model and find the word in it that has the highest age of acquisition. We then take the average of these scores for each model to judge each model's ability to avoid using words in stories which are too complex for their target demographic.Readability Scores: Flesch Reading Ease We use readability scores to judge the relative ease of reading each of the models' stories. 
The Flesch Reading Ease score is calculated using the following formula: Reading Ease = 206.835 – (1.015 x Average Sentence Length) – (84.6 x Average Syllables per Word).Readability Scores: Flesch-Kincaid Grade Level Another readability metric we use to test the difficulty of reading each story is the Flesch-Kincaid Grade Level, which is computed via the following formula: Flesch-Kincaid Grade Level = (0.39 x Average Sentence Length) + (11.8 x Average Syllables per Word) - 15.59.Readability Scores: Gunning-Fog Index Another readability metric we use is the Gunning-Fog Index, calculated as follows: Gunning-Fog Grade Level = 0.4 x (Average Sentence Length + Percentage of Hard Words), where "hard words" are defined as words with three or more syllables that are not (i) proper nouns, (ii) combinations of easy words or hyphenated words, or (iii) two-syllable verbs made into three with -es and -ed endings.Readability Scores: Automated Readability Index The final readability metric we use is the Automated Readability Index, whose formula is: 4.71 × (characters/words) + 0.5 × (words/sentences) - 21.43 % of Valid Stories We further look at the % of valid stories to judge the models' ability to adhere to the prompts assigned. Stories are considered invalid if they are missing one or more of the assigned target words.% of Age-Appropriate Stories Last, we compute the % of age-appropriate stories to judge the models' ability to adhere to the specified age group. Stories are considered invalid if they contain words with an age of acquisition higher than 6. §.§ ResultsSome key results for our first experiment are shown in Table <ref>. After running the above-described experiment, we find that although LLMs are generally able to simplify the average word difficulty in their stories to age-appropriate levels, they are unable to avoid including some words with age of acquisition levels significantly higher than their target demographic. Further, we find that none of the 750 total generated stories stayed within the age range of 6 or younger. Though it is common for some children's stories to contain words that are more complex, others refrain from using such words in order to cater to their target demographic (e.g., 49 stories in the Books for Preschoolers dataset). Having this ability is a key difference between computer- and human-generated stories, highlighting an area in which our automatically generated stories could use improvement. In terms of readability, the stories generated by our models score significantly worse than those in the BfP dataset. While the average Flesch Reading Ease (FRE) in the BfP dataset is approximately 89.37 (out of 100), the average FRE among the stories generated by the models is only 74.22. Contrarily, for the Flesch-Kincaid Level (FKL) and Gunning-Fog Index (GFI), a lower score indicates a body of text being easier to read. In the FKL and GFI metrics, the BfP stories score on average 2.9 and 5.23, respectively. Among the stories generated by the LLMs, meanwhile, these scores average out to 6.44 and 8.87. Full results of this readability analysis can be seen in Figure <ref>. With regard to our second research question, we find that age-specific prompt-tuning has little effect on the simplicity of children's stories generated by LLMs.§ EXPERIMENT 2: STORY SIMPLIFICATION The results shown in Section <ref> demonstrate a serious need for improvement concerning age-appropriate simplicity in automatically generated children's stories. 
Thus, in our second experiment, we examine to what extent current state-of-the-art lexical simplification models generalize to the domain of children's stories, as well as how this could be beneficial in the automatic generation of educational children's stories by LLMs. §.§ Dataset Creation To the best of our knowledge, there are currently no datasets for lexical simplification focused on children. As such, the first step in our second experiment is to create such a dataset. We are able to do this by using the corpus of children's stories created in Section <ref>, identifying complex words using the Age of Acquisition dataset, and having human annotators identify simpler synonyms for these complex words (in cases where such synonyms exist). In the annotation process, each instance consists of the sentence from which the complex words was taken, as well as the complex word itself. Annotators are tasked with finding a simpler synonym for each instance that could replace the complex word in the sentence without changing the sentence's meaning. Each instance is then reviewed by two additional annotators, a professor and PhD student in NLP, to ensure that the meaning of the sentence is retained. Only instances deemed to be valid are kept, meaning all annotators agree the sentence's meaning was unchanged and the newly suggested synonym has a lower age of acquisition than the original word. In total, we annotate 750 instances randomly selected from our corpus of LLM-generated stories. After filtering out instances deemed to be invalid, our final dataset consists of 315 simplification examples. We refer to this dataset as our Child-Directed Simplification dataset, or CDS for short.[The complete dataset can be found at https://github.com/mariavale/CDShttps://github.com/mariavale/CDS.]§.§ Models We use two of the three best systems of the TSAR-2022 Shared Task on Multilingual Lexical Simplification to test the ability of lexical simplification models to generalize to the domain of children's stories: UniHD <cit.> and UofM&MMU <cit.>. In addition to examining the ability of state-of-the-art lexical simplification models to simplify our computer-generated children's stories, we also experiment with LLMs for the task to see if they outperform any models designed specifically for simplification. UniHD UniHD's model is created using an ensemble of six different configurations/prompt combinations from GPT-3. Its results are generated by calculating an aggregate ranking of the outputs of its different GPT-3 configurations and prompts. It demonstrates state-of-the-art performance in the area of lexical simplification, with an accuracy score over 25% higher than LSBert <cit.>, which is regarded as one of the most popular and effective LS models to date. UofM&MMU While UofM&MMU's performs considerably lower on accuracy on the TSAR dataset than UniHD does, it can be finetuned with additional data. Based on the BERT masked language model, the UofM&MMU model goes through three distinct steps in its simplification process. The first involves candidate generation based on different prompt templates to be provided to BERT. The second finetunes BERT and subsequently ranks and selects candidates. Finally, the candidates are post-processed in order to filter out noise and remove any antonyms that may appear. For this model, we are able to finetune using a training set consisting of 70% of the CDS dataset. The remaining 30% is used as a test set. 
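For reference, preparing this finetuning data amounts to a simple split of the CDS instances. A minimal Python sketch; the file name and record fields are assumptions about the released format, not its exact layout:

import json
import random

# Assumed record layout: one object per instance with the source sentence,
# the complex word, and the annotators' agreed simpler synonym.
with open("cds.json") as f:
    instances = json.load(f)

random.seed(0)
random.shuffle(instances)
cut = int(0.7 * len(instances))          # 70% for finetuning, 30% held out for testing
train_set, test_set = instances[:cut], instances[cut:]
print(len(train_set), "finetuning /", len(test_set), "test instances")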
Vicuna, ChatGPT, InstructGPT We further experiment with the three LLMs used in our Experiment 1. We use the following set of prompts to get data for our full range of metrics: * Name a simpler synonym that could replace the word [word] in the following sentence: [sentence]* Name two simpler synonyms that could replace the word [word] in the following sentence: [sentence]* Name three simpler synonyms that could replace the word [word] in the following sentence: [sentence]§.§ Metrics For Experiment 2, we use three metrics to measure the performance of the simplification models.Accuracy We define accuracy as the ratio of instances in which the top-ranked candidate generated by the model is equal to the top synonym chosen by human annotators.Simplification Validity This metric represents the ratio of instances where the model chooses a top candidate with a lower age of acquisition than the original word.Accuracy@k This is the ratio of instances in which at least one of the top-k candidates generated by the model is equal to the top synonym chosen by human annotators. We calculate this with k=2 and k=3.§.§ ResultsFull results of our second experiment with regard to the tested simplification models can be found in Table <ref>. Upon running the UniHD model on our child-directed simplification dataset, we find that the performance of the model is significantly worse than it is on the TSAR shared task English dataset. In terms of accuracy, the model is able to generate a top candidate equal to the one selected by human annotators 30.52% in comparison to 42.89% in the adult-directed dataset.Regarding a pure LM employed with prompting, we find that, with accuracy scores lower than all but one of the other models and validity scores significantly lower than any of them, LLMs are not effective tools for this task. However, we find different results for the UofM&MMU model in combination with finetuning. On the original TSAR-EN dataset, UniHD outperforms UofM&MMU by over 15% accuracy. Without finetuning, the UofM&MMU model also performs significantly worse on the child-directed dataset, with an accuracy score of just 8.42%. After being finetuned with a portion of the CDS dataset, however, the model is able to score even better than the best-performing model does on the TSAR dataset, predicting the top human-selected substitution with 47.37% accuracy. This demonstrates that finetuning can result in ordinary lexical simplification models being able to generalize to simplify child-directed text and that even better results could be achieved if better-performing models allow for the same level of finetuning.We conclude that, while LLMs and LLM-based models struggle to simplify children's stories, models which allow for finetuning on domain-specific data can perform as well as or even better on children's stories than they do on adult-directed corpora. In terms of our overall pipeline, we conclude that it is in fact plausible to generate and simplify children's stories using LLMs for generation and finetuned lexical simplification models to simplify overly complex words. § CONCLUSION In this paper, we investigate the ability of several current LLMs to generate age-appropriately simplified stories for children, as well as an examination of how modern lexical simplification models generalize to the domain of children's stories to enhance their educational potential. 
We demonstrate that, in spite of their growing capabilities, modern LLMs are unable to generate children's stories with age-appropriate simplicity, particularly in comparison to their human-written counterparts. Because of these shortcomings found in the automatically generated stories, our second experiment (Section <ref>) focuses on whether or not ordinary lexical simplification models generalize to the domain of children's stories, due to the lack of current LS models that focus on children-directed corpora. We find that some models which are ordinarily lower-performing than their LLM-powered counterparts have the potential to perform well in the domain of simplifying child-directed text, when properly finetuned.Over the course of our experiments, we further create a corpus of vocabulary-driven LLM-generated children's stories as well as an annotated lexical simplification dataset, CDS, intended specifically for the domain of children's text and using examples taken from this above-mentioned automatically generated stories. We provide these datasets publicly in order to promote further research in this area. In future work, we hope to further improve the automatic generation of customized children's stories by adding models for other tasks to our generation pipeline, such as one that can detect coherence errors or one that can improve readability.§ LIMITATIONSWe use a limited number of LMs and simplification models in our experiments, and the number of prompts we explore is also rather small. Thus, while our experiments feature state-of-the-art models, we cannot exclude with absolute certainty that other models or prompts might lead to different results. Our dataset is small as well, and it could be improved with the help of more annotators. Future research could include the use of more workers to create a significantly larger dataset, potentially through the use of gamification for data collection. § ETHICS STATEMENTRegarding the ethical considerations for this study, we find that the harms are minimal due to the relatively confined nature of the experiments; all annotations were performed voluntarily and with consent. Potential benefits of this study include the advancement of research in the area of child-directed lexical simplification and improvements in efficiency for the creation of personalized educational material for young children. Though the results of this research are eventually intended for the demographic of children, no vulnerable populations were involved in this study up to this point. If automatically generated stories are given or read to children, it is important to verify in advance that they are safe for the target population, as current models cannot guarantee this. § ACKNOWLEDGMENTSWe thank the anonymous reviewers for their helpful comments.This research was supported by the NSF under grant IIS 2223917. The opinions expressed are those of the authors and do not represent views of the NSF. acl_natbib§ APPENDIX §.§ Annotator Guidelines APPROPRIATENESS Rate each word with respect to how appropriate they are for children. A HIGH appropriateness (5) word will be totally fine for a preschooler; a LOW appropriateness (1) word should NOT appear in a story for preschoolers.LEARNABILITY Rate each word based on how likely you think a preschooler is to be able to learn it from a story. 
HIGH learnability (5) words should be easy to learn from a story; LOW learnability(1) words would be nearly impossible to learn from a story.IMAGEABILITY Rate each word with respect to their imageability. HIGH imageability (5) words will easily evoke a mental image in your mind;LOW imageability (1) words will evoke a mental image with difficulty or not at all. §.§ Full List of Target Wordsaccordion, acrobat, almond, anteater, antelope, anthill, bakery, bandanna, banjo, beagle, billboard, blizzard, bobcat, bookmark, bookshelf, bouquet, bugle, campground, cashew, catcher, cavern, chandelier, cheetah, chef, chimpanzee, clipboard, cobweb, collie, comet, confetti, cookbook, cottage, cowbell, crater, crescent, cricket, cyclist, denim, desert, dome, doorknocker, dumbbell, earmuff, earplug, earring, easel, eggplant, elk, ferryboat, fireplace, flute, forearm, fountain, gator, glacier, gnome, golf, grove, gutter, hairnet, hammock, headstand, hedgehog, hexagon, hiker, hourglass, iceberg, iguana, island, jaguar, jersey, kayak, kiwi, lantern, lifeboat, limousine, llama, lobster, locker, macaw, mansion, maze, microwave, mole, moss, mountaintop, museum, musician, newt, nostril, orchard, pelican, petunia, pinwheel, piranha, platypus, pompom, poncho, propeller, receipt, rink, rocker, sardine, sax, sequin, shrimp, shrub, sibling, skillet, skylight, skyscraper, sloth, snowshoe, songbird, sparrow, spatula, speck, spotlight, squid, stadium, stairwell, statue, stethoscope, stopwatch, suitcase, tangerine, taxicab, tennis, thimble, thunderstorm, tightrope, tongs, tortilla, toucan, trolley, trombone, trouser, tuba, tulip, tumbleweed, tusk, tutu, vase, violet, violin, visor, volleyball, warthog, wreath, xylophone, bald, bearded, beige, blond, blurry, breakable, bubbly, bushy, chalky, chilly, cloudless, crumbly, electric, feathery, floral, foamy, foggy, frosty, gooey, grassy, greenish, hatless, hilly, lilac, longhaired, lumpy, magenta, moonless, moonlit, mossy, plaid, prickly, puffy, reddish, seaside, sleeveless, slimy, smoky, starry, stormy, stretchy, sunless, sunlit, swampy, thorny, turquoise, undersea, wintry, wooded, wrinkly, applaud, awaken, bulldoze, curl, dangle, darken, decorate, deflate, deliver, dine, dotted, drove, enlarge, erupt, exhale, expand, fetch, flatten, halve, hover, illustrate, inflate, invite, jog, juggle, knit, magnify, masked, mimic, mow, munch, perform, recline, repaint, rotate, serve, sew, skydive, sniff, soak, squint, stumble, sunbathe, topple, unfold, unhook, unlock, unpack, unroll, unzip | http://arxiv.org/abs/2310.18502v1 | {
"authors": [
"Maria Valentini",
"Jennifer Weber",
"Jesus Salcido",
"Téa Wright",
"Eliana Colunga",
"Katharina Kann"
],
"categories": [
"cs.CL"
],
"primary_category": "cs.CL",
"published": "20231027213134",
"title": "On the Automatic Generation and Simplification of Children's Stories"
} |
PlantPlotGAN: A Physics-Informed Generative Adversarial Network for Plant Disease Prediction Felipe A. Lopes1,3, Vasit Sagan1,*, Flavio Esposito21Taylor Geospatial Institute, St. Louis, MO 63108, USA2Department of Computer Science, Saint Louis University, St. Louis, MO 63108, USA3Federal Institute of Alagoas, Arapiraca, Brazil{felipe.lopes, vasit.sagan*, flavio.esposito}@slu.eduJanuary 14, 2024 ===================================================================================================================================================================================================================================================================================================================Monitoring plantations is crucial for crop management and producing healthy harvests. Unmanned Aerial Vehicles (UAVs) have been used to collect multispectral images that aid in this monitoring. However, given the number of hectares to be monitored and the limitations of flight, plant disease signals become visually clear only in the later stages of plant growth and only if the disease has spread throughout a significant portion of the plantation. This limited amount of relevant data hampers the prediction models, as the algorithms struggle to generalize patterns with unbalanced or unrealistic augmented datasets effectively. To address this issue, we propose PlantPlotGAN, a physics-informed generative model capable of creating synthetic multispectral plot images with realistic vegetation indices. These indices served as a proxy for disease detection and were used to evaluate if our model could help increase the accuracy of prediction models. The results demonstrate that the synthetic imagery generated from PlantPlotGAN outperforms state-of-the-art methods regarding the Fréchet inception distance. Moreover, prediction models achieve higher accuracy metrics when trained with synthetic and original imagery for earlier plant disease detection compared to the training processes based solely on real imagery.§ INTRODUCTION The early detection of plant diseases is crucial for effective monitoring and management during the critical crop growth period. Fortunately, unmanned aerial vehicles (UAVs) have emerged as a valuable tool for efficiently collecting multispectral data and mapping entire fields. Leveraging various spectral bands, these aerial platforms equipped with multispectral cameras provide an opportunity to capture detailed information about plant health. However, training Machine Learning (ML) algorithms for plant health prediction poses a significant challenge due to the limited availability of samples from unhealthy plants. Such data scarcity hampers the development of accurate predictive models as algorithms struggle to generalize patterns effectively with unbalanced datasets. Besides, given the number of hectares to be monitored and flight limitations, plant disease signals become visually clear only at the last stages of plant growth and if the disease has been spread throughout a significant portion of the plantation – an undesirable end for farmers.Researchers have attempted to address this challenge, for instance, by detecting the Puccinia striiformis f.sp. tritici, commonly known as Wheat Yellow Rust – a devastating fungal disease that affects wheat crops worldwide, leading to significant yield losses if left undetected and untreated. 
Previous research efforts used traditional data augmentation, such as flipping, cropping, and color jitter, as the most common choices to balance datasets and achieve higher accuracy <cit.>. However, these traditional techniques can introduce potential issues, such as a lack of semantic understanding, limited variability, and inadequate preservation of spatial relationships. Recently, new approaches have applied Generative Adversarial Network (GAN) models to solve similar data augmentation issues. For instance, a previous work <cit.> analyzed object identification with and without different augmentation methods (e.g., flipping, rotation, and translation). The authors concluded that data augmentation and fine-tuned models could improve the identification accuracy and avoid overfitting deep learning networks for tea leaf disease identification with insufficient training set size, but traditional augmentation methods have been unsuccessful in generalization. Our perspective, aligned with the literature <cit.>, is that general GAN models suffer from the lack of semantic understanding and need fine-tuning to generate complex and structured data, such as multispectral imagery from plant plots. This paper proposes a physics-informed generative adversarial network named PlantPlotGAN to generate more realistic synthetic multispectral imagery focused on plant health analysis from UAV imagery. Its architecture extends DCGAN <cit.>, incorporating physics constraints in the loss function (e.g., reflectance and wavelengths) and balancing the weights according to the most important multispectral bands related to plant disease detection. This provides plant scientists with additional synthetic phenotyping information (including high spatial and temporal resolution). The main contribution of this work is an innovative physical constraint-based GAN, which contains one generator network and two discriminator networks with different weights that output realistic imagery of plant plots (cf. Figure <ref>). Such an architecture shows that it is possible to extend a GAN to generate higher-fidelity imagery for crop analysis, considering the underlying characteristics of reflectance and spectral band wavelengths. The proposed trained model can provide additional samples to balance a dataset, considering each phase of plant growth. This approach enhances the prediction accuracy for plant disease detection during the early stages of crop development. Other key contributions are:
* A latent space manipulation technique to enable the generator to create more realistic multispectral images.
* A new layered approach consisting of two discriminators with different responsibilities.
* The development of a domain-specific loss function that considers the physics correlation between multispectral bands and the spectral distance between real and fake images.
* A robust evaluation model encompassing different perspectives to rank the quality of GAN-based synthetic images.
After implementing PlantPlotGAN, we validated our approach by generating synthetic imagery of healthy plots mixed with wheat yellow rust plots and evaluated these generated samples using several metrics, including the Fréchet inception distance (FID) and spectral information divergence (SID).
Then, we compared the results with flipped real samples and samples generated from other state-of-the-art GAN architectures.§ RELATED WORKOur review of the related work mainly focuses on data augmentation for unbalanced datasets and the generation of synthetic multispectral imagery, the two critical techniques to build and validate the proposed architecture. §.§ Traditional Data Augmentation for Limited Imagery DatasetsData augmentation is the process of producing more samples from existing data by introducing manipulation techniques, and several research efforts have explored ways to augment data for several distinct areas. For a comprehensive overview of data augmentation techniques, including various GAN-based approaches, interested readers can refer to Khalifa <cit.>.For instance, data augmentation is extensively employed in skin lesion classification, a task that lacks available training data <cit.>, while the famous work of Krizhevsky <cit.> applied traditional flipping, rotation, and cropping techniques to augment a dataset for general image classification. In <cit.>, Siddique devised a self-supervised method to segment multiple fruit flower species, utilizing data augmentation to improve the segmentation accuracy. Their method uses a sliding window technique followed by random rotations to increase the variability of samples. Another approach investigated the combination of distinct methods <cit.> (i.e., crop, rotation, and resize) to augment their data, achieving an accuracy of 93% in detecting tomato leaf disease.In line with previous research, we used data augmentation. But because of the unique nature of reflectance values used for vegetation indices, we adopted a method that accurately captured sunlight reflection in plants – to preserve a consistent distribution of multispectral pixel values, different from techniques like flipping or sliding windows that would distort this distribution. §.§ GANs and Synthetic Multispectral ImageryAlthough extensive research has explored the use of GAN-based methods to augment data in several domains, e.g., image super-resolution <cit.>, facial attribute editing <cit.>, and medical images <cit.>, the development of GANs for generating synthetic multispectral imagery is still in its early stages. Specifically within agriculture and remote sensing areas, using multispectral imagery to predict plant disease is paramount due to the valuable information that can be extracted from its bands, such as the vegetation indices <cit.>. Despite the decreasing cost of multispectral sensors, some scenarios (e.g., diseased crops) are rare, limiting the amount of available data.Fortunately, one of the first papers in generating multispectral satellite images using a generic GAN<cit.> demonstrated success. However, researchers started to give more attention to the lack of a pre-trained model for generating multispectral imagery <cit.> and the singular covariance between the multispectral bands that hinders the convergence of the GAN training yielding to suboptimal generation <cit.>.To the best of our knowledge, <cit.> are the closest approaches to the present work in terms of offering GAN-based alternatives to generate synthetic multispectral imagery for predicting plant disease. Martinez <cit.> employed a pre-trained encoded network and a regularization technique to solve the convergence problem of GANs when trained with multispectral images due to their high dimensionality. 
Fawakherji <cit.> used a conditional GAN (cGAN) to generate RGB data with near-infrared (NIR) information, generating four-channel multispectral synthetic images for weed recognition tasks. In <cit.>, the authors incorporated the spectral behavior of land-use and land-cover classes into a GAN to better model the properties of classes by using spectral indices. Although the similar fact of using GAN to generate multispectral imagery, this proposal distinguishes itself from previous endeavors by adding a new discriminator to verify physics constraints into such generation, and assessing the spectral similarity between real and synthetic imagery. This paper also analyzes the advantages of employing synthetic data to improve machine learning algorithms' accuracy, but focusing on the early detection of plant diseases.§ METHODOLOGY§.§ Problem FormulationIn this section, we formalize the problem of generating multispectral synthetic imagery. This encompasses the definition of the optimizer used to acquire physically constrained spectral profiles.As depicted in Figure <ref>, X and X' respectively denote the set of real input multispectral images and the set of possible synthetic multispectral images. In multispectral images, especially in the context of remote sensing and vegetation analysis, there is a covariance between the last two spectral bands, i.e., Near Infrared (NIR) and Red-Edge (RE). Then, if there is a covariance between the last two bands of X, X' should reproduce the same covariate behavior in its NIR and RE pixel values (which are the inputs of our covariance calculation). This technique forces the randomness of latent space to follow the existing physical constraint in the relation between NIR and RE <cit.>. For each input/source image x belonging to X, its covariance coefficients c should also be in any x' element belonging to X'. Given a set of coefficients C and an input X, the goal of a generative multispectral model is to train a generator denoted as G(z; θ_h), where z is the input noise vector and θ_h denotes the generator's parameters. The goal of G is to generate the elements of X', using the coefficients of C and observing the evaluation of discriminators D. For each generated X', two discriminators denoted by D_1(X; X'; θ_1) and D_2(X_spectral; X'_spectral; θ_2) are responsible for evaluating the distribution and covariance of X' in relation to X, respectively, where X_spectral and X'_spectral refer to the spectral profile of X and X', and θ_1 and θ_2 denote their parameters. §.§ PlantPlotGAN Model The model proposed in this paper consists of five main modules: optimizer, spectral regularization, generator, and two discriminators, as depicted in Figure <ref>. To facilitate the comprehension of our PlantPlotGAN model as well for notation simplicity, consider X and X' vectors (N, N, 5), in the sense that 5-band multispectral imagery will be considered, and N is the imagery size – note that although the plant plots are rectangular, we assume square imagery for the sake of simplicity in the convolution layers of each GAN model and consider only the inner rectangle in each analysis (e.g., spectral profile, FID). The architecture of PlantPlotGAN is an extension of DCGAN <cit.>, adding a new discriminator to validate the spectral profiles of synthetic imagery and can be incremented to handle other physical constraints. Optimizer. Using the elements of X, the optimizer denoted by O is responsible for obtaining spectral coefficients, which manipulate the latent space. 
To achieve that, the optimizer selects the NIR and RE bands of X, obtains the covariance Cov, and minimizes a function to return three coefficients that approximate Equation <ref> to Cov. The selection of NIR and RE bands is based on the scientific observation that vegetation exhibits distinctive spectral properties, particularly in the RE and NIR regions of the electromagnetic spectrum <cit.>. In Equation <ref>, RE(λ) represents the Red Edge channel reflectance at a given wavelength λ, and NIR(λ) represents the Near Infrared channel reflectance at the same wavelength. The parameters G, H, and K are the coefficients that control the shape, characteristics, and covariance between the Red Edge and Near Infrared channels, respectively. The latent space, then, is weighted according to the coefficients adjusted during the convergence process.
RE(λ) = G · e^{-H·λ} + K · NIR(λ)
Spectral Regularization. Formally, given an input X, the generator G should generate a set X' with a similar spectral profile. However, depending on the number of elements x ∈ X, G can generate elements x' with an abnormal spectral profile. It is thus necessary to conduct factorization on X'. The Spectral Regularization module, denoted as SR(S_x, S_x'), calculates the spectrum of x and x', using an underlying 2D Fast Fourier Transform (FFT) to compute and compare the shift-invariance and noise of each set. The result of SR is fed to one of the discriminators as a spectral loss.
Generator. Our generator G receives as input a random noise vector n, with the latent space values already adjusted by the optimizer O, and generates the output synthetic imagery x'. The objective of G is to generate samples that fool the discriminators into classifying x' as real. As in DCGAN <cit.>, G utilizes convolutional layers to transform n into a synthetic x'. After receiving the random noise from the optimized latent space, G upsamples the vector n into a higher-dimensional representation using deconvolutional layers. PlantPlotGAN has three deconvolutional layers and LeakyReLU activation functions in sequence to introduce non-linearity in data generation.
Discriminators. For each set of synthetic samples X', two discriminators are responsible for i) evaluating how close to x each element x' ∈ X' is; and ii) calculating the spectral divergence between each x and x'. For the first objective, our PlantPlotGAN utilizes the discriminator D_1(x; x'; θ_1) to compute the probability that x' comes from the real data distribution rather than G. The second discriminator, D_2(x; x'; θ_2), receives the spectrum of x and x' and calculates a dynamic distance to identify whether the spectrum of x' belongs to the spectral profile of x. Thus, the objective of PlantPlotGAN with one generator and two discriminators is to find equilibrium by solving the following minimax problem:
min_θ_h max_θ_1, θ_2 V(D_1, D_2, G) = E_x ∼ p_data(x) [log D_1(x) + log D_2(x)] + E_z ∼ p_z(z) [log (1 - D_1(G(z))) + log (1 - D_2(G(z)))],
where E_x ∼ p_data(x) denotes the expectation over real data samples, and E_z ∼ p_z(z) refers to the expectation over the latent noise vector z sampled from the prior distribution p_z(z). D_1(x) represents the output probability of the first discriminator for real data x. D_2(x) represents the output probability of the second discriminator for real data x. D_1(G(z)) is the output probability of the first discriminator for synthetic data generated by the generator, while D_2(G(z)) represents the output probability of the second discriminator for synthetic data generated by the generator. Finally, V(D_1, D_2, G) is the value function of the minimax game. It is maximized with respect to the discriminator parameters (θ_1 and θ_2) and minimized with respect to the generator parameters θ_h.
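A minimal TensorFlow sketch of how this two-discriminator objective can be written as binary cross-entropy losses; the spectral penalty term and its weight lam are illustrative placeholders for the SR-based spectral loss described above, not the exact weighting used by PlantPlotGAN:

import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy()

def discriminator_losses(d1_real, d1_fake, d2_real, d2_fake):
    # Each discriminator is pushed toward 1 on real plots and 0 on generated
    # ones, mirroring the two log-terms of V(D_1, D_2, G) above.
    d1_loss = bce(tf.ones_like(d1_real), d1_real) + bce(tf.zeros_like(d1_fake), d1_fake)
    d2_loss = bce(tf.ones_like(d2_real), d2_real) + bce(tf.zeros_like(d2_fake), d2_fake)
    return d1_loss, d2_loss

def generator_loss(d1_fake, d2_fake, spectral_penalty=0.0, lam=1.0):
    # The generator tries to make both discriminators output 1 on synthetic
    # plots; the spectral divergence enters as a weighted penalty.
    adv = bce(tf.ones_like(d1_fake), d1_fake) + bce(tf.ones_like(d2_fake), d2_fake)
    return adv + lam * spectral_penalty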
Similar to other state-of-the-art GANs <cit.>, PlantPlotGAN updates the parameters of G and D iteratively to improve their performance.
§.§ Dataset Preparation
To train the convolution networks G and D, and validate the PlantPlotGAN architecture for its aim, a new multispectral imagery dataset of spring wheat was collected at Chacabuco (-34.64; -60.46), Argentina. The data collection experiment consisted of growing three varieties of spring wheat, categorized as intermediate-long cycle, intermediate cycle, and short cycle, in a total of 700 field plots. The plots were arranged in a grid pattern with 35 rows and 20 columns within the field. Each plot had dimensions of 1.2 x 5 meters and consisted of 7 rows of wheat plants. Spring wheat was planted on June 19th, 2021, and harvested on December 16th, 2021. The soil type at the site was classified as fine-silty, mixed, thermic Typic Argiudoll according to the USDA Soil Taxonomy v. 2006. Throughout the growing season, uniform field management practices were implemented for all field plots. Insecticide, herbicide, and fertilizer were applied as needed to ensure consistent plant health and growth. However, no fungicide was used in the experiment, as the focus was on studying the impact of the biotic stress caused by the occurrence of yellow rust disease.
UAV flights. The primary aerial platform for collecting remotely sensed data from the standing crop was the DJI Phantom 4 Pro. This UAV was equipped with a multispectral sensor capable of capturing the five discrete spectral bands used with the PlantPlotGAN modules (cf. Section <ref>): blue, red, green, red edge, and near-infrared. A sunlight irradiance sensor was connected to the camera to ensure accurate measurements. To achieve a high spatial resolution of the imagery data, the UAV was programmed to fly at a low altitude of 20 meters from the surface, resulting in a pixel resolution of 1.04 centimeters. To monitor the crop at different stages of growth, the aerial data collection was designed to have a temporal dimension. The UAV was flown five times throughout the growing stages, starting from the first flight on August 30th, 2021, during the tillering stage, and concluding with the last flight on November 17th, 2021, during the flowering stage.
Ground truth data. On November 15th, 2021, a team of experts, including pathological and phenological specialists, visited the experiment site to assess the wheat plants' health status visually. Their primary task was to provide annotations and divide the plants into different categories based on their health condition: healthy, average, and severe disease. The specialists determined the level of disease infection by carefully examining the leaves of the plants in each plot. The entire plot was labeled as healthy if the leaves showed no signs of green loss. The plot was labeled as having a mild infection if some yellow spots were present.
Conversely, if most plants in the plot exhibited a yellowish color, it was labeled as having a severe infection.It is important to note that due to labor constraints, the crop health status was assessed and annotated for 592 out of the 700 plots. Of these 592 samples, 430 (72.6%) were classified as mild, 106 (17.9%) as unhealthy, and 56 (9.45%) as healthy. These quantities of samples for each class demonstrate the common imbalance scenario of real-world data and its related challenges for an accurate prediction <cit.>.§ EXPERIMENTSThe validation of the proposed method used two sets of experiments and two related datasets. The first group evaluates image quality metrics regarding the synthetic images generated by PlantPlotGAN and compares them to the output of other state-of-the-art models, i.e., DCGAN <cit.>, WGAN <cit.>, and VQGAN <cit.>. Besides, a quantitative performance evaluation is part of this first group. The second group of experiments utilizes the synthetic images generated from PlantPlogGAN to extract vegetation indices and assess whether it could improve the accuracy metrics of machine-learning models in detecting wheat yellow rust. §.§ Dataset and Data Extraction The dataset used in both sets of experiments considers the data presented in Sub-section <ref>. For the training of PlantPlotGAN, its modules (i.e., discriminators, optimizer, and spectral regularizer) used only the labeled samples as healthy or unhealthy while ignoring the mild samples, as their mixed signals would hinder convergence. For fairness in the evaluation and to reduce training costs, the remaining 162 images were resized to 128x128x5. The quality assessment and comparison of PlantPlotGAN with other models used the same dataset in all scenarios and metrics.In the set of experiments for yellow rust prediction, a vegetation index extractor using the Rasterio library derived 47 indices for each sample, such as the Normalized Difference Vegetation Index (NDVI), Improved Modified Chlorophyll Absorption Ratio Index (MCARI), and Green Chlorophyll Index (GCI). The same process was used to extract these vegetation indices from the synthetically generated samples. The choice for these 47 vegetation indices aligns with the literature <cit.>. §.§ ImplementationDuring training, all models use the Adam <cit.> optimizer with the learning rate set to 2e-4, β1 = 0.5, and β2 = 0.99. The observations are collected every 5 steps in 50 epochs, finalizing with the trained generators. Figure <ref> depicts some of the synthetic samples created by the generators. Notably, even the VQGAN model, equipped with multiple underlying convolutional layers (pre-trained using VGG16) and encoders, fell short of generating satisfactory results that visually resembled real images within the defined number of epochs and training images.The generator and discriminator networks utilize 2D transpose and 2D convolution layers for each GAN model. Each subnetwork incorporates batch normalization and LeakyReLU activations following the convolutional layer, as described in <cit.>. For all GAN experiments, a random input z ∈ℝ^100∼𝒩(0, 1) was used as initial noise. Every implemented GAN model uses the TensorFlow platform and runs in a single Intel Core i9-10900X CPU @ 3.70Ghz and 128 GB. Training the PlantPlotGAN and the other three GAN models on all scenarios takes about 14 hours. 
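A minimal TensorFlow sketch of a generator consistent with the description above (z ∈ ℝ^100, three deconvolutional blocks with batch normalization and LeakyReLU, a 128x128x5 output, and the Adam settings reported here); the layer widths and the tanh output scaling are illustrative assumptions, since they are not specified in the text:

import tensorflow as tf
from tensorflow.keras import layers

def build_generator(latent_dim=100, channels=5):
    # Upsamples the optimized latent vector to a 128x128 five-band plot image.
    return tf.keras.Sequential([
        layers.Dense(16 * 16 * 256, use_bias=False, input_shape=(latent_dim,)),
        layers.Reshape((16, 16, 256)),
        layers.Conv2DTranspose(128, 5, strides=2, padding="same", use_bias=False),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Conv2DTranspose(64, 5, strides=2, padding="same", use_bias=False),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Conv2DTranspose(channels, 5, strides=2, padding="same", activation="tanh"),
    ])

generator = build_generator()
optimizer = tf.keras.optimizers.Adam(learning_rate=2e-4, beta_1=0.5, beta_2=0.99)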
More details are given in the supplementary material.
§.§ Performance Comparison
Using the same number of epochs (50) and training steps (5) for all the evaluated GANs, the convergence process of each model was assessed by continuously observing the adversarial loss of the discriminators and generators. Subsequently, the metrics described in the following sub-sections were collected and compared after the training phase.
§.§.§ Similarity, Fidelity, and Spectral Analysis
An in-depth analysis compared the real samples with the synthetic datasets generated by DCGAN, WGAN, VQGAN, and PlantPlotGAN to assess the quality of synthetic samples. The objective was to assess the similarity and fidelity of the synthetic imagery. The following metrics were employed in this analysis:
Fréchet Inception Distance (FID). The FID is a widely used metric for evaluating the quality of imagery generated from GAN models. It measures the similarity between the distributions of real and generated images by utilizing features extracted from a pre-trained Inception-v3 neural network. The FID quantifies the dissimilarity between the two distributions in a feature space. A lower FID score indicates a higher similarity between the real and generated images, implying better quality and fidelity in the generated imagery. By considering both the distributional and perceptual aspects of the images, the FID offers a comprehensive evaluation of the GAN model's image generation capabilities. FID is denoted by:
FID = ||μ_real - μ_generated||^2 + Tr(Σ_real + Σ_generated - 2(Σ_real Σ_generated)^1/2),
where μ_real and μ_generated are the mean vectors of features extracted from the real and generated images, respectively. Σ_real and Σ_generated represent the covariance matrices of the feature distributions. The FID captures both the difference in means and the difference in covariances between the real and generated image feature distributions, providing a quantitative measure of their dissimilarity. For our scenario, as Inception-v3 cannot handle multispectral images, a mean of two FID calculations is considered. The first calculation selected bands 1:3, while the second computation selected bands 3:5 from both datasets (i.e., real and synthetic).
Chi-square. It was used to compare the observed and expected frequencies within each spectral band, enabling the detection of significant deviations. The equation to obtain this metric is denoted by:
χ^2 = ∑(O_i - E_i)^2/E_i,
where χ^2 represents the Chi-square test statistic, O_i denotes the observed frequency of a particular category or bin in the dataset, and E_i represents the expected frequency based on a specified hypothesis or model. We can determine whether the observed frequencies significantly differ from the expected frequencies by comparing the computed Chi-square value to the critical Chi-square value corresponding to a chosen significance level and degrees of freedom. A higher Chi-square value indicates a larger deviation between the real images and the synthetic data, suggesting a higher dissimilarity or lack of agreement.
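Because Inception-v3 expects three-channel inputs, the reported FID is the mean of the two band-group computations described above. A minimal sketch; extract_features is assumed to wrap a pre-trained Inception-v3 and return one feature vector per image:

import numpy as np
from scipy.linalg import sqrtm

def fid(feat_real, feat_fake):
    # Frechet distance between Gaussian fits of the two feature sets.
    mu_r, mu_f = feat_real.mean(axis=0), feat_fake.mean(axis=0)
    cov_r = np.cov(feat_real, rowvar=False)
    cov_f = np.cov(feat_fake, rowvar=False)
    covmean = sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return np.sum((mu_r - mu_f) ** 2) + np.trace(cov_r + cov_f - 2.0 * covmean)

def multispectral_fid(real, fake, extract_features):
    # Average FID over bands 1:3 and 3:5 of the (N, H, W, 5) image arrays.
    scores = [fid(extract_features(real[..., b]), extract_features(fake[..., b]))
              for b in (slice(0, 3), slice(2, 5))]
    return float(np.mean(scores))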
It calculates the ratio of the intersection of pixel values to the union of pixel values in the two datasets. By examining the shared elements between the sets, the IC reveals the extent of similarity or overlap between the real and synthetic images. The Bhattacharyya coefficient (BC) quantifies the similarity between the probability distributions of the real and synthetic datasets. It can be computed as the Bhattacharyya distance, which measures the divergence between the two distributions. IC and BC are denoted respectively by the following equations: IC = ∑_i=1^n min(P_real_i, P_synthetic_i), and BC = ∑_i=1^n √(P_real_i· P_synthetic_i), where P_real_i and P_synthetic_i represent the probability densities of the real and synthetic datasets at the i-th spectral band, respectively. The sums are taken over all spectral bands (from i = 1 to n = 5) for the experiments. A higher Intersection coefficient indicates a high match or complete overlap, implying a higher similarity between the real and synthetic images. Conversely, a lower value suggests a lower similarity between the sets. On the other hand, generating imagery with a lower BC is desirable for every GAN model.§.§.§ Evaluation of Early Plant Disease DetectionConsidering that our collected data has a temporal dimension, we also investigated whether using the synthetic data generated from PlantPlotGAN could improve early detection of Puccinia striiformis f. sp. tritici, i.e., wheat yellow rust. A feature-based prediction was employed using classical machine learning algorithms. The algorithms were trained with two datasets, i.e., real images and real images mixed with synthetic samples. The test dataset consisted of real images only. The metrics used to evaluate the early prediction are accuracy, recall, and F1 Score. We selected these metrics due to the cost of false positives (healthy samples incorrectly labeled as diseased) for farmers, i.e., a crop classified by mistake as diseased would receive toxic treatment unnecessarily. The other side is equally costly; a false negative classification could lead to truly diseased crops being classified as healthy and left without treatment. § RESULTS[ADDITIONAL EVALUATIONS, RESULTS, AND THE TRAINED MODELS CAN BE FOUND IN THE SUPPLEMENTARY MATERIAL.] Similarity, Fidelity, and Spectral Analysis. As shown in Section <ref>, the first set of experiments evaluated the quality of the generated images. To this aim, the evaluation considered four metrics (i.e., FID, Chi-square, IC, and BC). Figure <ref> depicts the result of this evaluation. The baseline refers to the flipped real images. Particularly, our PlantPlotGAN leads to significant improvements compared to the other GAN architectures in three evaluation metrics. PlantPlotGAN achieved the best average FID score, indicating that it can generate images with the highest quality. It also achieved a higher IC score, meaning a more precise distribution of pixels along the generated imagery. These metrics help clarify the result in Figure <ref>, demonstrating that an analysis solely based on FID may not represent the true quality of synthetic data generated from GAN models. For instance, DCGAN generated synthetic images visually closer to the real vegetation plots for the same number of epochs. However, WGAN obtained a lower FID than DCGAN for this scenario. The quality of synthetic images was also measured by the spectral profile shape – as it is the source of several vegetation indices used to assess plant health (cf. Section <ref>).
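For completeness, the following sketch shows one way to compute the four similarity measures used in this evaluation: FID from pre-extracted Inception-v3 features (averaged over the two 3-band views, bands 1:3 and 3:5), and the Chi-square, Intersection, and Bhattacharyya coefficients from per-band pixel histograms. The array shapes, histogram normalization, and helper names are our illustrative choices, not the exact evaluation code.

import numpy as np
from scipy import linalg

def fid(feat_real, feat_fake):
    # feat_*: (n_samples, n_features) Inception-v3 activations for one 3-band view
    mu_r, mu_f = feat_real.mean(axis=0), feat_fake.mean(axis=0)
    cov_r = np.cov(feat_real, rowvar=False)
    cov_f = np.cov(feat_fake, rowvar=False)
    covmean = linalg.sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return np.sum((mu_r - mu_f) ** 2) + np.trace(cov_r + cov_f - 2.0 * covmean)

def multispectral_fid(features_bands_123, features_bands_345):
    # Inception-v3 only accepts 3-band input, so the multispectral score is the mean
    # of two computations, one on bands 1:3 and one on bands 3:5.
    real_a, fake_a = features_bands_123
    real_b, fake_b = features_bands_345
    return 0.5 * (fid(real_a, fake_a) + fid(real_b, fake_b))

def histogram_metrics(hist_real, hist_fake, eps=1e-12):
    # hist_*: per-band histograms of pixel values, shape (n_bands, n_bins)
    p = hist_real / (hist_real.sum(axis=1, keepdims=True) + eps)   # observed frequencies
    q = hist_fake / (hist_fake.sum(axis=1, keepdims=True) + eps)   # synthetic frequencies
    chi2 = np.sum((p - q) ** 2 / (q + eps))        # Chi-square statistic
    ic = np.sum(np.minimum(p, q))                  # Intersection coefficient
    bc = np.sum(np.sqrt(p * q))                    # Bhattacharyya coefficient
    return chi2, ic, bc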
VQGAN was excluded from this analysis due to the current limitation on generating only RGB images. In this case, as depicted in Figure <ref>, our method achieved a satisfying R^2 in all scenarios compared to the spectral profile of real images. It also demonstrates the spectrum regularizer SR in action, weighting the loss of D_2 to approximate the model function. Future work can assess whether a simple increase in the number of parameters of traditional GAN models could achieve the same result obtained by PlantPlotGAN. Besides, the implementation evaluated in this experiment only considered the covariance between sequential channels (e.g., Red edge and NIR). Extending the SR to perform a combinatorial covariance analysis of multispectral bands is an open issue.Early plant disease detection The second set of experiments evaluated the potential of PlantPlotGAN to improve the early detection of yellow rust. After excluding the mild samples (as discussed in Section <ref>), the dataset with real imagery has 106 healthy samples (65.43%) and 56 unhealthy samples (34.56%). To assess if PlantPlotGAN could overcome the imbalance problem, the first experiment compared a prediction model trained in two scenarios for the extracted vegetation indices i) extraction from the real imagery data; ii) extraction from the mixed dataset consisting of real and synthetic imagery. The validation dataset consists of a stratified selection of the real imagery at the last observed date (cf. Section <ref>). PlantPlotGAN generated 50 healthy and 10 unhealthy samples for the mixed dataset, resulting in a distribution of 116 healthy samples (52.2%) and 106 unhealthy samples (47.74%). For obtaining the accuracy indices, three models were utilized, i.e., XGBoost, Random Forest, and a Convolutional Neural Network (CNN). The description of the parameters of each model is present in the supplementary material.This first evaluation measured if the accuracy scores would increase when utilizing the combined dataset for training. Table <ref> demonstrates the observed results, with the combined PlantPlotGAN-based dataset achieving enhanced values in almost all scenarios. Furthermore, a per-date analysis verified if the F1 Score would increase by detecting the unhealthy class at early dates for the evaluated scenario. Figure <ref> depicts the results of this experiment, demonstrating an overall detection improvement of employing the synthetic data generated from PlantPlotGAN in detecting yellow rust disease at early stages. § CONCLUSIONWe have proposed a novel generative adversarial network model based on latent space manipulation and an architecture of two discriminators. The proposed PlantPlotGAN is the first end-to-end GAN model for generating multispectral plant plots and is effective for augmenting UAV-based plant imagery datasets. This is due to two key components: a novel spectrum regularizer that factories the latent space with optimized spectral coefficients; and a two-layered discriminator architecture that utilizes physics constraints to decide if a spectrum is true or fake. Extensive experiments show that our PlantPlotGAN achieves significant improvements over state-of-the-art GAN models and highlights the importance of incorporating physical constraints for synthetic data generation.In future work, a per-channel discriminator might improve the synthetic imagery quality. 
Besides, incorporating a style transfer based on StyleGAN <cit.> can be a fruitful approach to take advantage of a full categorical multispectral dataset (e.g., diseased plants). | http://arxiv.org/abs/2310.18268v1 | {
"authors": [
"Felipe A. Lopes",
"Vasit Sagan",
"Flavio Esposito"
],
"categories": [
"cs.CV",
"cs.LG",
"eess.IV"
],
"primary_category": "cs.CV",
"published": "20231027165628",
"title": "PlantPlotGAN: A Physics-Informed Generative Adversarial Network for Plant Disease Prediction"
} |
Gate-tunable topological superconductivity in a supramolecular electron spin latticeRémy Pawlak,^1∗† Jung-Ching Liu,^1† Chao Li,^1† Richard Hess,^1† Hongyan Chen,^2 Carl Drechsel,^1, Ping Zhou,^3 Robert Häner,^3 Ulrich Aschauer,^3,4Thilo Glatzel,^1 Silvio Decurtins,^3Daniel Loss,^1 Jelena Klinovaja,^1 Shi-Xia Liu,^3∗ Wulf Wulfhekel,^2 & Ernst Meyer^1^1Department of Physics, University of Basel, Klingelbergstrasse 82, 4056 Basel, Switzerland^2Physikalisches Institut, Karlsruhe Institute of Technology,Wolfgang-Gaede-Str. 1, 76131 Karlsruhe, Germany^3Department of Chemistry, Biochemistry and Pharmaceutical Sciences,University of Bern, Freiestrasse 3, 3012 Bern, Switzerland^4Department of Chemistry and Physics of Materials, University of Salzburg,Jakob-Haringer-Strasse 2A, 5020 Salzburg, Austria^†These authors equally contributed;^∗To whom correspondence should be addressed;E-mails:[email protected], [email protected] ============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================ In Conversational Recommendation System (CRS), an agent is asked to recommend a set of items to users within natural language conversations.To address the need for both conversational capability and personalized recommendations, prior works have utilized separate recommendation and dialogue modules.However, such approach inevitably results in a discrepancy between recommendation results and generated responses.To bridge the gap, we propose a multi-task learning for a unified CRS, where a single model jointly learns both tasks via Contextualized Knowledge Distillation (ConKD).We introduce two versions of ConKD: hard gate and soft gate. The former selectively gates between two task-specific teachers, while the latter integrates knowledge from both teachers.Our gates are computed on-the-fly in a context-specific manner, facilitating flexible integration of relevant knowledge. Extensive experiments demonstrate that our single model significantly improves recommendation performance while enhancing fluency, and achieves comparable results in terms of diversity.[The code is available at <https://github.com/yeongseoj/ConKD>]§ INTRODUCTIONNatural language dialogue systems generally fall into either task-oriented system <cit.> or open-domain dialogue system <cit.>.Despite the same modality (conversation), the tasks differ in their objectives; the former aims to achieve certain tasks (e.g. 
booking hotels), while the latter engage in an open-ended dialogue.Conversational Recommendation (CR) is an emerging task in natural language dialogue, which combines the task-oriented and open-domain <cit.>.The task aims to recommend proper items to users through natural language conversations, and the model-generated conversation is expected to be fluent and suit the context.Unlike traditional recommendation systems, the interactive nature of multi-turn dialogue allows the agent to explore explicit user interests that may not be present in the user's history.This advantage particularly stands out compared to the traditional models that have access to only few purchase histories or implicit feedback (e.g. click) from users.To generate appropriate responses containing recommended items aligned with a user's taste, an agent needs to possess both recommendation and dialogue capabilities.Previous works addressed the issue with separate modules <cit.>.A recommendation module learns user preferences based on conversation history and retrieve relevant items,while a dialogue module generates the final response sentences. In Conversational Recommendation System (CRS), a major problem lies in incorporating the two separate modules.Common strategies for injecting recommendation ability into the responses include copy mechanism <cit.> and pointer network <cit.>. Despite these efforts, the prior approaches <cit.> have demonstrated a discrepancy between the separate modules: the results of a recommendation model are not integrated in the generated response as depicted in Figure <ref>.The dialogue module suggests “The Avengers”, while the recommendation module recommends “Titanic”, revealing a clear disagreement between the two modules.Accordingly, the probability distribution of the recommendation model is not directly reflected in the output of the dialogue model. Such mismatch is inevitable when CR is formulated as two separate tasks, failing to serve the original purpose of the task. To address this challenge, we propose a multi-task learning approach for a unified CRS using Contextualized Knowledge Distillation (ConKD).We build a CRS with a single model via knowledge transfer by two teacher models, a dialogue teacher and a recommendation teacher.However, combining the knowledge is not straightforward due to the nature of CRS; the task differs in each time step depending on the context.In this light, we introduce two gating mechanisms, hard gate and soft gate, to effectively fuse teachers' knowledge. With hard gate, knowledge transfer comes solely from either of the teachers, while soft gate integrates both sources of knowledge.We introduce an adaptive nature in the gates which is context-specific and computed on-the-fly during forward propagation. To our knowledge, this is the first work to explicitly demonstrate the existence of the discrepancy, and provide a dedicated training mechanism to address it. 
Moreover, our approach offers the flexibility in selecting model architectures, enabling integration of diverse language and recommendation models.Extensive experiments conducted on a widely used benchmark dataset <cit.> demonstrate that our single model significantly outperforms baselines in terms of recommendation performance and response fluency, while achieving comparable results in response diversity.The contributions of our work can be summarized as follows: * We propose a multi-task learning approach for a unified CRS using Contextualized Knowledge Distillation.* We introduce two versions of ConKD, employing different gate mechanisms: hard gate and soft gate.* Our approach surpasses strong baseline models in making coherent recommendations and fluent responses, while competitive results are observed in response diversity.§ PRELIMINARIES & RELATED WORK§.§ Open-Ended Conversational Recommendation SystemConventional recommendation systems mainly focus on building a static user preference based on previous histories, such as clicks, purchases, and ratings <cit.>. In such environment, where feedback from users is static, implicit, and sparse, recommendation systems have difficulty reflecting dynamic changes in users' preferences as well as suffer the cold-start problem <cit.>. ReDial <cit.> is one of the first attempts at handling such issue; the work combines open-ended chit-chat with recommendation task, called Conversational Recommendation System (CRS). Specifically, let (𝐱, 𝐲) be a dialogue sample, where 𝐱={x^1, x^2,...,x^m} is a set of previous dialogue turns. m is the lengths of turns in the dialogue history and 𝐲 is the corresponding response (ground truth).In each turn, a recommendation module is expected to provide an item set I_u for a user u ∈ U, while a dialogue module produces a response 𝐲 based on a dialogue history 𝐱.To incorporate recommended items into a response, a copy mechanism <cit.> or pointer network <cit.> is generally adopted in prior works <cit.>. In such methods, additional networks are trained to predict whether the next token is a word or an item by aggregating representations from recommendation and dialogue modules.In the aim of improving a CRS, previous studies leverage external knowledge in training.In KBRD <cit.>, an item-oriented knowledge graph is introduced as an auxiliary input to a recommendation module. KGSF <cit.> utilizes both word-level and item-level knowledge graphs in training a CRS.In addition to the knowledge graphs, movie review data is utilized in RevCore <cit.>, and the meta information of items is encoded in <cit.> to enrich item representation. However, these works employ separate modules to manage CRS,which inevitably introduces discrepancy issues.To address this problem, prompt-based learning strategies are introduced in <cit.>. Despite the unified architecture, these approaches fail to dynamically incorporate multiple recommendations into a response. RecInDial <cit.> introduces a vocabulary pointer and knowledge bias to produce a unified output by combining two modules. §.§ Knowledge DistillationThe core idea behind Knowledge Distillation (KD) <cit.> is transferring knowledge of a high-capacity teacher network to a relatively smaller student model. In knowledge distillation, a student network is guided by not only a one-hot encoded ground-truth but also a soft target mapped by the teacher network (probability distribution). This is known to transfer a class-wise relation mapped by a teacher is commonly termed the dark knowledge. 
Given a data sample from a joint distribution (x,y) ∈𝒳×𝒴, a student model is optimized by combining two cross-entropy terms.ℒ_KD(θ)= -∑_k=1^|Y|γŷ_k log P_θ(y_k|x)+(1-γ)P̃_ϕ(y_k|x) logP̃_θ(y_k|x)where |Y| and ŷ_k denote the number of classes and a k-th target label (one-hot encoded) respectively. γ, and P̃ denote a balancing parameter, and a probability distribution scaled with a temperature. θ and ϕ are parameters of a student and teacher network respectively.§ UNIFIED CRS VIA CONKDIn this section, we first demonstrate the mismatch issue with preliminary experiments and introduce our approach that mitigates such problem. §.§ Preliminary Experiments In our preliminary experiments on<cit.> dataset[<cit.> proposed a dataset and a model, which we term the dataset asand the model as ReDial hereinafter.], we aim to identify the mismatch problem in evaluation. In Table <ref>, we compare the recommendation results from two separate modules using recall metrics: R@k (Recall) and ReR@k (Recall in Response) which evaluates the top-k items predicted by a recommendation module and a dialogue module respectively.In all metrics, a significant degradation is observed when recall is computed on the dialogue response.The relative decreases in the performance are ranged from 73.33% to 88.46%, implying that a large discrepancy exists between the outputs of recommendation modules and generated responses.However, incorporating the recommendation module's outputs during inference does not provide the fundamental solution to the problem. Further discussion on this issue is provided in Appendix <ref>. To address the discrepancy, we propose a multi-task learning approach for a unified conversational recommendation system via Contextualized Knowledge Distillation (ConKD).ConKD consists of three key components: a dialogue teacher and a recommendation teacher as experts on each task, and a student model - a multi-task learner, as described in Figure <ref>. §.§ Recommendation TeacherA recommendation teacher models the item-user joint distribution and provides a set of items that suit a user's preference. We adopt the model structure of <cit.>, where an item-oriented Knowledge Graph (KG) <cit.> and word-oriented KG <cit.> are encoded to build a user preference. To learn item representations with structural and relational information, R-GCN <cit.> is adopted for the item-oriented KG as follows.𝐡_e^(l+1) = σ(∑_r∈ℛ∑_e'∈ℰ_e^r1/Z_e,r𝐖_r^(l)𝐡_e'^(l)+𝐖^(l)𝐡_e^(l))where 𝐡_e^(l) denotes the representation of node e at (l)-th layer and 𝐡^(0) is the initial node embedding. ℰ_e^r is the set of neighbor nodes for node e under the relation r. 𝐖_r^(l) and 𝐖^(l) are learnable matrix for handling various edge types and self-loop respectively. Z_e,r is a normalization constant. Similarly, word-oriented KG is encoded with the GCN <cit.> and the description in detail is illustrated in Appendix <ref>.Given the learned node embeddings, a user representation 𝐩_𝐮 is acquired by aggregating words 𝐯^(𝐱) and items 𝐧^(𝐱) that appear in previous dialogue turns 𝐱 as follows[For the detailed aggregation process, please refer to Section 4.3 in <cit.>]. 𝐩_u = β·𝐯^(𝐱) + (1-β)·𝐧^(𝐱) β = σ(𝐖_𝐠[𝐯^(𝐱);𝐧^(𝐱)])where 𝐖_𝐠 are learnable parameters for computing the balancing parameter β. Finally, a matching score between a user u and an item i is calculated as follows.P_ψ(i) = (𝐩_u^T𝐧_i)where ψ is model parameters optimized to maximize the likelihood of predicting ground-truth items. 
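As an illustration of the user-representation gate β and the matching score above, a minimal PyTorch-style sketch is given below (the framework is our choice for illustration; the paper does not prescribe one). It assumes the word-level and item-level context vectors and the table of item node embeddings have already been produced by the two graph encoders; layer names and shapes are illustrative rather than the actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class UserItemScorer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.w_gate = nn.Linear(2 * dim, 1)   # plays the role of W_g in the gate

    def forward(self, word_repr, item_repr, item_table):
        # word_repr, item_repr: (batch, dim) context vectors v^(x) and n^(x)
        # item_table: (num_items, dim) learned item node embeddings n_i
        beta = torch.sigmoid(self.w_gate(torch.cat([word_repr, item_repr], dim=-1)))
        p_u = beta * word_repr + (1.0 - beta) * item_repr      # user representation p_u
        scores = p_u @ item_table.t()                          # matching scores p_u^T n_i
        return F.softmax(scores, dim=-1)                       # P_psi(i) over all items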
§.§ Dialogue Teacher To handle the chit-chat task, we train a conditional language model that intakes dialogue history and generates a context-aware response.We explore two primary structural variations for our language model:* KGSF <cit.>: A standard transformer <cit.> and a knowledge-enhanced transformer <cit.> are utilized for each as an encoder and a decoder.* DialoGPT <cit.>: A transformer-based pre-trained language model (PLM) trained on a large-scale dialogue dataset.Both dialogue models are trained to maximize the likelihood of predicting the ground truth response as follows:ℒ(ϕ) = -∑_j=1^|Y|∑_t=1^T1/Tŷ_t,jlog P_ϕ(y_t,j|𝐲_1:t-1,𝐱)where T and j are the length of a response 𝐲 and a token index respectively. Y is a union of the vocabulary set and item set (Y = V∪ I), and ϕ denotes the parameters of the dialogue model.§.§ Contextualized Knowledge Distillation We elaborate on how a student model learns both the recommendation and dialogue tasks. Specifically, two gating mechanisms are introduced: discrete and continuous gate, which integrate knowledge between the two teachers in an adaptive manner.§.§.§ Hard GateA student model is trained to minimize the gap between its conditional probability distribution and the conditional probabilities mapped by the teacher networks.However, it is not ideal to equally weight the knowledge from both teacher models at every time step as seen in the following response.y1: If you like romance movies, I would recommend Titanic.At the last time step, where the item Titanic appears, knowledge of a recommendation teacher can help a student learn the recommendation ability. On the contrary, a student can learn to generate a coherent and suitable utterance by accessing knowledge from the dialogue model except the time of recommendation.Taking all these factors into account, we introduce a token-level hard gate between teacher networks, where supervision solely comes from either of the teachers at each time step. To distinguish which knowledge of the two teachers to be transferred, we aggregate the probability mass of item indexes mapped by the dialogue teacher in each time step. This is built on an assumption that the dialogue model assigns a relatively large probability mass on item indexes at the time of recommendation. Given the distribution P_ϕ(y_t|𝐲_1:t-1, 𝐱) mapped by the dialogue teacher, we calculate a sum of item probabilities which answers the question of “is this time to recommend an item?". Therefore, a time step-specific hard gate is computed on-the-fly during forward propagation as follows.λ_t =0,if ∑_y'∈ℐP_ϕ(y'|𝐲_1:t-1, 𝐱) < η1,otherwisewhere ℐ denotes the set of items in the vocabulary, and η is a predefined threshold. When λ_t is computed to be 1, it is a clear indication that a CRS is expected to output a recommendation result.On the contrary, when the hard gate is 0, a dialogue teacher defines the current time step as a dialogue time step; hence, the CRS is expected to make a coherent chit-chat.§.§.§ Soft GateThe hard gate inevitably introduces a hyper-parameter η due to the thresholding approach.To remove the hyper-parameter search on η, we introduce a continuous gating mechanism.This can be applied under the assumption that a sum of item probabilities mapped by the dialogue teacher reflects the extent to which recommendation is expected.Therefore, the aggregated mass answers the question of `how much to learn the recommendation ability at the time step”. 
Based on the intuition, we introduce a soft gate as follows.λ_t = ∑_y'∈ℐP_ϕ(y'|𝐲_1:t-1, 𝐱)where the gate λ_t takes continuous values within the range of [0,1].A gate close to 1 indicates that the system should focus more on recommendation, while a gate close to 0 suggests that the agent is expected to prioritize the conversation task.To validate our assumption regarding the behavior of the dialogue teacher, we conducted a preliminary experiment using a smaller model, KGSF.We computed the average sum of item probabilities in a dialogue time step λ_v and in a recommendation time step λ_r.The computed value of λ_r was found to be 0.653, while λ_v was measured to be 0.023. These results support our assumption that the dialogue teacher assigns relatively large probability mass to item indexes in recommendation time. We provide further discussion on the validity of λ in Appendix <ref>.The gating computation differs from the previous gating approaches <cit.> in two ways : 1) we leverage a teacher distribution as a signal of gating, where the gates can be discrete λ_t∈{0,1} or continuous λ_t∈[0,1].2) our gates are not learned but a simple sum of probabilities.§.§.§ Contextualized Knowledge Distillation LossWith the two pre-trained teachers and the gating mechanisms, we now introduce loss for contextualized knowledge distillation.KD losses for each task are formulated as follows.ℒ_DIAL-KD(θ)= -∑_k=1^|Y|∑_t=1^T1/TP_ϕ(y_t,k|𝐲_1:t-1,𝐱)×log P_θ(y_t,k|𝐲_1:t-1,𝐱) ℒ_REC-KD(θ)= -∑_k=1^|Y|∑_t=1^T1/TP_ψ(y_t,k|𝐱)×log P_θ(y_t,k|𝐲_1:t-1,𝐱)where θ is the parameter of the student. Then, the final KD losses for each task are derived as follows.ℒ_DIAL(θ) = (1-γ) ℒ_NLL+γℒ_DIAL-KD ℒ_REC(θ) = (1-γ) ℒ_NLL + γℒ_REC-KDwhere γ is the balancing parameter between ground truth and teacher distribution, and ℒ_NLL is the cross entropy with ground truth. Finally, the losses are aggregated with our gate λ_t per time step.ℒ(θ) = (1-λ_t)ℒ_DIAL + λ_t ℒ_RECWhen the hard gate is applied, a supervision is made by either of the teachers with the discrete λ_t. On the other hand, the soft gate fuses knowledge from the two teachers with the λ_t being the weight. By optimizing the combined objective, a single model is capable of learning the dialogue and recommendation tasks simultaneously, alleviating the mismatch that comes from two separate modules in previous methods. An evident advantage of employing contextualized knowledge distillation lies in taking the class-wise relation into consideration beyond the observed data <cit.>.With a one-hot encoded label, a neural network is trained to maximize the difference between the ground-truth and remaining classes; the dark knowledge is overlooked with one-hot supervision where a single index is set to 1 and the remaining indexes to 0s.In our work, the dark knowledge from both teachers is engaged in an adaptive manner to generate a fluent and user-specific response. §.§.§ Special TokensUnder CR, a dialogue turn falls into either a turn with recommendation result or a turn for querying a user preference. In this light, to inject an extra signal to the student model, we add two special tokens,and , at the beginning of each turn.During training, the ground truth prefix starts with eitherif the response includes an item , orif it is chit-chat. 
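As a compact illustration of the gating and distillation objective introduced above, the following sketch combines the ground-truth, dialogue-teacher, and recommendation-teacher terms with either the hard or the soft gate. It assumes per-step teacher and student distributions over the joint word-item vocabulary are available; tensor shapes, reductions, and the default γ (0.6 is the value reported for the KGSF backbone) are illustrative choices rather than the exact training code.

import torch
import torch.nn.functional as F

def conkd_loss(student_logits, dial_teacher_logits, rec_teacher_logits,
               targets, item_mask, gamma=0.6, eta=None):
    # student_logits, dial_teacher_logits: (T, |Y|) per-step scores over words + items
    # rec_teacher_logits: (|Y|,) turn-level scores from the recommendation teacher
    # item_mask: (|Y|,) boolean mask that is True on item indices
    # eta: threshold for the hard gate; if None, the soft gate is used
    log_p_student = F.log_softmax(student_logits, dim=-1)
    p_dial = F.softmax(dial_teacher_logits, dim=-1)
    p_rec = F.softmax(rec_teacher_logits, dim=-1).expand_as(p_dial)

    # Gate: probability mass the dialogue teacher puts on item indices at each step
    lam = p_dial[:, item_mask].sum(dim=-1)                 # (T,)
    if eta is not None:                                    # hard gate: 0 or 1 per step
        lam = (lam >= eta).float()

    nll = F.nll_loss(log_p_student, targets, reduction='none')   # ground-truth term
    kd_dial = -(p_dial * log_p_student).sum(dim=-1)               # dialogue teacher term
    kd_rec = -(p_rec * log_p_student).sum(dim=-1)                 # recommendation teacher term

    loss_dial = (1 - gamma) * nll + gamma * kd_dial
    loss_rec = (1 - gamma) * nll + gamma * kd_rec
    return ((1 - lam) * loss_dial + lam * loss_rec).mean()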
This explicit scheme enables the model to learn turn-specific actions based on the preceding context and generate appropriate sequences.During inference, we employ a pre-trained classifier to predict one of the two special tokens at each dialogue turn.The classifier is built with standard transformer layers and optimized as follows.ℒ(θ_cls) = E_(k, x) ∼ D[-∑_j=1^|K|log P(k_j|𝐱;θ_cls)]where K = {0,1} is the label set, and θ_cls denotes the classifier's parameters. The model learns to classify whether the current turn is intended for chit-chat or recommending an item,based on the dialogue history 𝐱.§ EXPERIMENTS §.§ DatasetThe proposed approach is tested on the recently introduced(Recommendation Dialogues) dataset <cit.>. is a conversation dataset which the dialogues are centered around recommendation,and the subject of the recommendation is movie.contains 10,006 multi-turn dialogues, which amount to 182,150 utterances.The total number of unique movies in the dataset is 6,924, and the size of vocabulary is 23,928. §.§ BaselinesReDial <cit.> consists of dialogue, recommendation, and sentiment analysis modules. Pointer network <cit.> is introduced to bridge the modules. KBRD <cit.> introduces item-oriented KG <cit.> and the KG representation is added when building a logit for the dialogue module <cit.>. KGSF <cit.> integrates word-oriented KG <cit.> and item-oriented KG <cit.> for semantic alignment. KG-enhanced transformer and copy network <cit.> are employed. RevCore <cit.> incorporates movie-review data for review-enriched item representations and utilizes a copy network <cit.>. DialoGPT <cit.> is fine-tuned on thedataset. RecInDial <cit.> finetunes DialoGPT-small with R-GCN<cit.> and introduces a vocabulary pointer and knowledge-aware bias to generate unified outputs. §.§ Evaluation MetricsTo validate the recommendation performance, we employ a top-k evaluation approach with k values of 1, 10, and 50. Consistent with prior research <cit.>, we report Recall@k (R@k) in Section <ref>.However, given the conversational nature of the task, it is crucial to evaluate recommendation performance within generated responses. We introduce Recall in Response (ReR@k) following <cit.> and <cit.>, with a refined calculation approach to ensure scores range within [0, 1]. Specifically, we take the average of correct item predictions over the total number of item predictions instead of responses containing items. Additionally, we introduce Precision in Response (PrR@k) and compute the harmonic mean of ReR@k and PrR@k, denoted as F1@k. Furthermore, we assess the system's ability to make active recommendations by introducing the recommendation turn ratio, calculated as the number of dialogue turns with recommended items over the total dialogue turns.To evaluate dialogue performance, we report perplexity (PPL) and distinct n-grams <cit.> (DIST), assessing the fluency and the diversity of generated responses, respectively.In prior studies, DIST was computed by counting distinct n-grams at the corpus-leveland averaging them over sentences, which can lead to scores greater than 1. To address this, we have updated the metric calculation to count distinct n-gramsand calculate rates at the corpus-level,ensuring the scores fall within the range of 0 to 1. 
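As a reference for the corpus-level computation described above, a minimal sketch of DIST-n is given below; responses are assumed to be pre-tokenized lists of tokens.

def distinct_n(responses, n):
    # responses: list of token lists; DIST-n = unique n-grams / total n-grams, corpus-level
    ngrams = [tuple(tokens[i:i + n]) for tokens in responses
              for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)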
The results evaluated on original metrics are illustrated in Appendix <ref>.Recent studies find that the n-gram based evaluation methods may not be sufficient to assess the performance of a language model <cit.>.Therefore, we conduct human evaluation to comprehensively assess model performance as done in previous works <cit.>. Detailed human evaluation setup is described in Appendix <ref>. §.§ Results§.§.§ Evaluation of RecommendationIn Table <ref>, we present the recommendation performance of the models. The results clearly demonstrate that models with ConKD (hard) consistently achieve the highest scores in F1@k,indicating superior performance in the recommendation task. Notably, KGSF integrated with ConKD doubles the scores compared to KGSF, while DialoGPT with ConKD achieves the scores 1.5 times as high as DialoGPT. These improvements are observed not only in single predictions, but also in top-10 and top-50 predictions,indicating superior user-item mapping.We hypothesize that such gain stems from the “dark knowledge” distilled from the recommendation teacher within our framework. This knowledge encompasses inter-class relations that are absent in the one-hot encoded hard targets but are expressed through the probability distribution provided by the recommendation teacher.Furthermore, the models with ConKD make active recommendations, as depicted by the high recommendation ratio, reflecting their focus on task-oriented conversation.Among our gating mechanisms, the hard gate outperforms the soft gate,which can be attributed to the stronger supervision made by the hard gate;a student is guided solely by the recommendation teacher in recommendation time. §.§.§ Evaluation of Dialogue GenerationIn addition to the recommendation performances, ConKD exhibitscomparable results in conversation metrics, as shown in Table <ref>. Under quantitative evaluation, the proposed models outperform the backbone models in PPL,indicating enhanced fluency of responses.We observed that RecInDial tends to generate relatively simple responses without active engagement, resulting in lower PPL scores. Regarding the slight decrease in the DIST metric compared to the backbone models in our results,two important observations should be highlighted:1) The base models fail to effectively address the recommendation task in their responses,and 2) DIST scores alone are insufficient for evaluating the quality of model-generated responses.These findings are supported by the results of qualitative evaluation, where the single models with ConKD outperform all baselines in average scores.Specifically, when applied to KGSF, the proposed methods significantly enhance informativeness and coherence. This implies our training mechanism performs consistently regardless of the model size, aligning with the automatic evaluation results. We also observe that ConKD-soft outperforms ConKD-hard with KGSF as the backbone, and the opposite holds true for DialoGPT.This discrepancy is attributed to the model capacity of the dialogue teacher,which influences the gate value.It suggests that the choice between hard and soft gatesdepends on the capacity of the dialogue teacher. §.§ Quality of Recommendation In CRS, evaluating recommendations based solely on a single ground-truth item may not fully capturea model's potential, as user preferences can span a wide range of items. To address this, we expand the evaluation by considering contextually relevant items. 
These relevant items are those located within a 2-hop distance from previously mentioneditems in the item-oriented KG <cit.>, sharing attributes such as genre and actor. We compare two models, RecInDial and DialoGPT + ConKD (hard), both of which use the same backbone model to produce unified outputs.Table <ref> shows that ConKD consistently outperforms RecInDial across all metrics, with a significant improvement over the single-item evaluation results in Table <ref>. This highlights ConKD's capability not only to recommend a gold item but also to comprehend the underlying knowledge structures, even in the absence of a knowledge graph. §.§ Efficiency ComparisonEnsuring real-time efficiency is of importance in enhancing the user experience in CRS. This section compares the inference speeds of three models: DialoGPT, DialoGPT + ConKD (hard), and RecInDial. DialoGPT and DialoGPT+ConKD (hard) achieve inference latencies of 5.464ms and 5.306ms per token, respectively.In contrast, RecInDial incurs a slightly higher latency of 6.100ms per token. This additional latency in RecInDial can be attributed to the computation of the knowledge-aware bias and vocabulary pointer. The components involve making decisions between generating items or words in every time step. In ConKD, a language model handles the knowledge-aware recommendations without sacrificing efficiency. §.§ AblationsTo verify the efficacy of each component introduced in this work,we conduct ablation studies using several variants of our model.The results are depicted in Table <ref> and <ref>.We observed that the role of teachers significantly affects the performance of a student.Learning solely from the recommendation teacher enhances recommendation performance but comes at the cost of PPL and DIST scores. In contrast, learning from the dialogue teacher improves fluency. When both teachers are present within the training framework, the student's performance improves in both tasks, highlighting the effectiveness of knowledge transfer through our gating mechanism. Scores decline in all metrics when we replace our gate with a static value of 0.5.Lastly, the special token also brings meaningful gains, depicted with the increased F1@k and DIST scores. §.§ Case StudyThe responses generated by the baselines and ConKD are shown in Table <ref>.In the conversation, the user has expressed a clear preference for action movies andhas previously mentioned actors “bruce willis” and “Tom Cruise”. However, the three baselines generate chit-chat responses without recommendations. Although KGSF suggests a movie, the response lacks coherence and fluency. DialoGPT generates a coherent response but recommends “Mr.& Mrs.Smith”, which does not align with the user's preference; neither Bruce Willis nor Tom Cruise are associated with it. On the other hand, our models produce apt responses that are both fluent and informative;it makes user-specific recommendations, suggesting Edge of Tomorrow and Mission: Impossible, both of which are action movies featuring Tom Cruise. Notably, these recommendations are more aligned with the user's expressed preferences compared to the ground-truth movie “Gladiator”, which is an action movie without the mentioned actors. Additional examples are provided in the Appendix <ref>.§ CONCLUSIONIn this study, we introduce contextualized knowledge distillation with hard gate and soft gate. 
In hard gate, a student is either learned from a recommendation teacher or from a dialogue teacher with a discrete value, while the soft gate fuses knowledge from both teachers, removing a hyper-parameter from the hard gate. The gates are computed in a context-specific manner by aggregating the probability mass on the interest set (a movie item set in our experiment). Our work verifies the idea in the popular benchmark dataset and the result illustrates the superior performance of our approach compared to strong baselines. In addition, human evaluation mainly conforms with the automatic evaluation, demonstrating that the proposed approach is a well-balanced model with both recommendation and chit-chat ability. § LIMITATIONSThis work is grounded on the student-teacher framework, hence requiring additional computation when obtaining knowledge of a teacher; our approach requires two teachers, one for dialogue and one for recommendation. This extra computation can be a burden in an environment lack of resource. Nonetheless, the proposed approach utilizes a single model for inference. Furthermore, our approach requires the teachers to be well-trained. This, however, is also a shared problem within KD training.§ ETHICAL CONSIDERATIONSincedataset <cit.> contains multi-turn dialogue histories, the dataset, by nature, may pose a privacy issue. If a dialogue teacher in our framework learns such information, the student in our framework can also learn to output private information while in conversation. Such issue may possibly be handled by employing a privacy classifier model, where a model is trained to identify certain outputs containing private information of a user.§ ACKNOWLEDGEMENTSLei Chen's work is partially supported by National Science Foundation of China (NSFC) underGrant No. U22B2060, the Hong Kong RGC GRF Project 16213620, CRF Project C2004-21GF, RIF Project R6020-19, AOE Project AoE/E-603/18, Theme-based project TRS T41-603/20R, China NSFC No. 61729201, Guangdong Basic and Applied Basic Research Foundation 2019B151530001, Hong Kong ITC ITF grants MHX/078/21 and PRP/004/22FX, Microsoft Research Asia Collaborative Research Grant and HKUST-Webank joint research lab grants. acl_natbib§ IMPLEMENTATION DETAILSCombinations of KGSF and ConKD.We mainly follow KGSF <cit.> for the basic model structure of the two teacher and student models.For the dialogue teacher and student, we employ an encoder-decoder structure, with each module consisting of 2 transformer layers. The hidden dimension size of both models is 300. Our work excludes the copy mechanism used in KGSF. In the recommendation model, we utilize 1-layer GNN networks trained with word-oriented and item-oriented KGs as inputs. The normalization constant Z_e,r is set to 1.The token classifier is a transformer-based encoder with a classification head.We employ the Adam optimizer <cit.> and apply gradient clipping for stable training.Our chosen hyper-parameters include a batch size of 32, a learning rate of 1e-3, and training for 200 epochs. Furthermore, for ConKD, we set the η and γ to 0.3 and 0.6, respectively.Combinations of DialoGPT and ConKD.We utilize DialoGPT-small <cit.> as the backbone for the dialogue teacher and student model. DialoGPT consists of 12 transformer layers and the hidden dimension size of the model is 768.The recommendation teacher and token classifier in the DialoGPT+ConKD models follow the same settings as those in the KGSF+ConKD models. 
For hyper-parameters, we use a batch size of 32, a learning rate of 1e-3, and train for 20 epochs.For ConKD, we set the η and γ to 0.6 and 0.4, respectively.During inference, we only use the student model for the response generation in both settings. § GRAPH CONVOLUTIONAL NETWORK (GCN) GCN is adopted to encode the word-oriented KG ConceptNet <cit.>. A triple in the KG is denoted by (w_1, r, w_2), where w_1 and w_2 are word nodes, and r is a relationship between the nodes. The node features are updated with an aggregation function as follows:𝐇^(l) = (𝐃̃^-1/2𝐀̃𝐃̃^-1/2𝐇^(l-1)𝐖^(l))𝐇^(l)∈R^n× d and 𝐖^(l) denote the node representations and a learnable matrix at the l-th layer respectively. n is the number of nodes and d denotes the dimension size of node features. 𝐀̃=𝐀+𝐈 is the adjacency matrix of the graph with self-loop, where 𝐈 is the identity matrix. 𝐃̃=∑_j𝐀̃_ij refers to a degree matrix. § VALIDITY OF Λ To explore the validity of the λ, we illustrate variations of the response y1 introduced in <ref>.y2: If you like romance movies, I would recommend you Titanic.y3: If you like romance movies, I would recommend some romantic comedies.After the word recommend, the vocab (you and some) can be replaced with the item Titanic in the y1. Therefore, the average value λ_r of 0.653 is acceptable in soft gate, indicating the mass reflects the level to which recommendation is expected.§ ITEM REFILLINGWe discuss a two-step inference, in which a response is first generated, and items in the response are refilled with the output of a recommendation module. The recommendation module predicts a probability distribution in each dialogue turn, which remains fixed at each time step during the response generation. Hence, the module cannot directly handle the variable number of items in a response. Additionally, the output does not reflect the dynamic changes of the context, which fails to provide a context-aware recommendation as seen in the following responses.y_4: Blended with Adam Sandler is a fav of mine. y_5: Love Actually with Adam Sandler is a fav of mine.y_4 and y_5 are generated under the original setting and the two-step inference setting respectively[The samples are generated by KGSF. We leverage top k sampling to handle the variable number of items in responses, where k is set to 1.].The dialogue module generates the coherent and informative response y_4, describing the actor Adam Sandler who stars in the movie Blended. On the other hand, the item refilling fails to incorporate a suitable item into the response, which causes a factual error; there is no relationship between Adam Sandler and the movie Love Actually. This indicates that the simple integration cannot be the fundamental solution for the mismatch issues; it leads to semantic discrepancy between the recommendations and generated responses. In our unified system, the output distribution over items dynamically changes depending on the context, generating coherent and user-specific responses. This is done by a single model in a single step, thereby reducing the model size and inference time. § HUMAN EVALUATION SETUPWe engaged three annotators who were tasked with assessing model outputs given a dialogue history.100 model outputs from each model are randomly sampled and collected, the total being 1000 dialogue turns.The annotators score each model output from the range of 0 to 2 on the level of informativeness, coherence, and fluency of model output. 
The following instructions were provided to guide annotators in their assessments:Fluency: Fluency encapsulates the naturalness of the generated text. It involves an assessment of how the output adheres to linguistic standards, avoiding grammatical flaws. Annotators should evaluate the syntactic flow, word choice, and overall readability. The scores should be shown as 0, 1, and 2, where each indicates “not fluent”, “readable but with some flaws”, and “fluent”, respectively.Informativeness: The informativeness encompasses the model's ability to convey relevant and accurate information. Annotators should assess the depth and accuracy of the conveyed information. The scores need to be displayed 0, 1, and 2 where each corresponds to “information is missing or incorrect”, “information is included but insufficient or partially inaccurate”, and “comprehensive and accurate information”, respectively.Coherence: Coherence entails the harmonious integration of the model's output within the evolving conversation. Annotators should assess how well the model comprehends and adheres to the conversation's theme, avoiding abrupt shifts and ensuring a natural conversational flow. The scores should be valued using 0, 1, and 2. Each rating represents “awkward conversation flow”, “make sense but somewhat disconnected”, and “coherent”, respectively.§ RESULTS EVALUATED USING ORIGINAL METRICS § ADDITIONAL CASES Our models generate diverse movies that differ from the ground truth but align with the user’s preferences.For example, in Case 2, when the user requests old classics, our models suggest Gone with the Wind (1939), It’s a Wonderful Life (1946),The Big Lebowski (1998), The Outsiders (1967) and Driving Miss Daisy (1989),all of which are considered old classics.In contrast, other baselines fail to provide recommendation, except for KBRD. In the Case 3, when the user expresses a preference for family friendly movies likePeter Rabbit, and Finding Dory,our models recommend Beauty and the Beast, Jumanji, Coco, and Troll all of which are family-friendly,with three of them being animations.This contrasts with other baselines that produce contextually incorrect responses without recommendationsor mention Peter Rabbit again, which the user had previously mentioned in the dialogue context. | http://arxiv.org/abs/2310.18119v1 | {
"authors": [
"Yeongseo Jung",
"Eunseo Jung",
"Lei Chen"
],
"categories": [
"cs.CL",
"cs.AI"
],
"primary_category": "cs.CL",
"published": "20231027130624",
"title": "Towards a Unified Conversational Recommendation System: Multi-task Learning via Contextualized Knowledge Distillation"
} |
firstpage–lastpage Diagrammatic approach to excitonic effects on nonlinear optical response Yang-Hao Chan^1,2 January 14, 2024 ========================================================================We present a novel numerical approach aiming at computing equilibria and dynamics structuresof magnetized plasmas in coronal environments. A technique based on the use of neural networks that integrates the partial differential equations of the model, and called Physics-InformedNeural Networks (PINNs), is introduced. The functionality of PINNs is explored via calculation of different magnetohydrodynamic (MHD) equilibrium configurations,and also obtention of exact two-dimensional steady-state magnetic reconnection solutions <cit.>.Advantages and drawbacks of PINNs compared to traditional numerical codes are discussed in order to propose future improvements. Interestingly,PINNs is a meshfree method in which the obtained solution and associated different order derivatives are quasi-instantaneously generated at any point of the spatial domain. We believe that our results can help to pave the way for future developments of time dependent MHD codes based on PINNs. magnetic fields - magnetic reconnection - MHD - sun: solar corona - neural networks - physics-informed neural networks. § INTRODUCTION Deep learning techniques based on multilayered neural networks (NNs) are actually increasingly used to solve problems in a variety of domains including computer vision, language processing, game theory, etc. <cit.>. The idea to use NNs to solve non-linear differential equations is not new, since it was initially introduced more than 25 years ago <cit.>. This was made popular only recently, following the work of Raissi et al. (2019). where the class of Physics-Informed Neural Networks (PINNs) application was introduced. Indeed, PINNsbenefited from technical progress on automatic differentiation and the facilitated use of Python open source software libraries like Tensorflow or Pytorch.To date, PINNs are already used for many applications like, fluid dynamics <cit.>, radiative transfer <cit.>, astrophysics <cit.>, and many other ones. The specificity of the PINNs technique is to minimize the equation's residual at some predefined set of data called collocation points, where the predicted solution must thus ensure the differential equation. To this purpose, a physics-based loss function associated to the residual is defined and then used. In the original method proposed by Raissi et al. (2019), that is sometimes called vanilla-PINNs in the literature, the initial/boundary conditions required to solve the equations are imposed via a second set of data called training points where the solution is known or assumed. The latter constraints are applied by minimizing a second loss function that is a measure of the error (e.g the mean squared error), i.e. the difference between the predicted solution and the values imposed by the initial/boundary conditions. The combination of the two loss functions allows to form a total loss function that is finally used in a gradient descent algorithm. PINNs does not require a large amount of training data as the sole knowledge of solution at boundary is required for vanilla-PINNs. Note that, as initially proposed by Lagaris (1998), it is also possible to exactly enforce the boundary conditions in order to avoid the use of training data set <cit.>. 
This consists in forcing the neural networks to always assign the prescribed value at the boundary by employing a well behaved trial function. For example, when this value is zero (homogeneous Dirichlet condition), the initial output of the neural network is multiplied by a function that cancels out on the boundary. However, when the boundary conditions are not homogeneous or the geometry is complex, this technique becomes complicated to implement. For simplicity, we make the choice to apply the vanilla-PINNs variant in this work. The aim of this work consists in assessing the advantages and drawbacks of PINNsto solve the dynamics of plasmas immersed in the magnetic field of the solar corona. To the best of our knowledge, PINNs technique has never been appliedto such context in astrophysics, at the exceptions of structure of force-free neutron star magnetospheres <cit.> and forprobing the solar coronal magnetic field from observations data <cit.>. However, similar PINNs techniques have been recently developed for applications to laboratory plasmas. In particular,there is a surge of interest for computing MHD equilibria relevant to toroidal magnetic confinement configurations (e.g. tokamaks)for which Grad-Shafranov like equations need to be solved <cit.>. In this work, the functionality of PINNs is explored through application to two particular solar problems. First, we consider the computation of two-dimensional (2D) force-free magnetic equilibria representative of arcades and loop like structures in the solar corona by solving an associated Grad-Shafranov like equation. Second, our method is extended to a more complex system of differential equations that is an incompressible resistive MHD set, with the aim to compute 2D magnetic reconnection solutions. More precisely, in this work we focus on the reconnective annihilation solutions that are particular exact steady-state solutions obtained in 2D cartesian geometry <cit.>. The paper is organized as follows. In Section 2, we first introduce the basics of PINNs approach for solving partial differential equations (PDEs).Section 3 presents the application to the computation of two different examples of 2D MHD equilibria relevant for solar corona.In Section 4, a PINNs code with the aim to solve the set of 2D steady-state resistive equations in the framework of incompressible MHDis presented. In particular, we assess the applicability of our PINNs solver in retrieving exact analytical solutions <cit.>.Finally, conclusions are drawn in Section 5.§ THE BASICS OF PINNS§.§ The basics of NNs for non linear approximationIn this subsection,we briefly review how NNs are employed as universal approximators. Let us consider an unknown function u( x) that could be the solution of a differential equation, u_θ (x) being its approximated value at given x value (representing two spatial coordinates) and θ being a set of model parameters. Using a classical feed forward neural network, we can writeu_θ ( x) =( 𝒩^(L)∘𝒩^(L-1) ... 𝒩^(0)) ( x) ,making appear u_θ ( x) as the result of compositions (operator ∘ above) of non-linear transformations 𝒩^(l) at different layers (l = 0, 1, ..., L). An example of a given feed-forward NN architecture is schematized in Fig. 1, showing how the neurons for each layer are interconnected. The network is composed of L+1 layers including L-1 hidden layers of neurons (e.g. L = 4 for Fig. 1). 
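A feed-forward network of the form of equations 1 and 2 can be assembled in a few lines; the following Keras sketch is purely illustrative, with the number of hidden layers and neurons given as placeholders (the values used later in Section 3 are 7 hidden layers of 20 neurons).

import tensorflow as tf

def build_network(n_hidden_layers=7, n_neurons=20):
    # Input: two space coordinates (x, z); output: the scalar approximation u_theta(x)
    inputs = tf.keras.Input(shape=(2,))
    h = inputs
    for _ in range(n_hidden_layers):
        h = tf.keras.layers.Dense(n_neurons, activation='tanh')(h)
    outputs = tf.keras.layers.Dense(1)(h)     # linear output layer, single neuron
    return tf.keras.Model(inputs, outputs)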
Two neurons are employed for the input layer to represent the two required space coordinates (see below in this paper), and a single neuron is sufficient to predict the scalar solution u_θ ( x) in cases involving a single differential equation. Each transformation can be expressed as 𝒩^(l)( x) = σ ( W^(l)𝒩^(l-1) ( x) + b^(l) ), where we denote the weight matrix and bias vector in the l-th layer by W^(l)∈ℝ^d_l-1× d_l and b^(l)∈ℝ^d_l (d_l being the dimension of the input vector for the l-th layer). σ(.) is a non linear activation function, which is applied element-wise. Such an activation function allows the network to map nonlinear relationships, which is fundamental for automatic differentiation and therefore the calculation of the derivatives (see below). In this work, the most commonly used hyperbolic tangent tanh function is chosen. Other smooth functions would have led to the same results. However, note that piecewise linear functions such as ReLU (or Leaky ReLU) would have been a very bad choice, leading to piecewise constant second derivatives and making it impossible to minimize the loss function. The model is consequently defined by θ = {W^(l), b^(l)}_l =1,L representing the trainable parameters of the network. The optimization problem aiming to find a non linear approximation u_θ ( x) ≃ u ( x) is based on the minimization of a function ℒ_data, called loss function, that is a measure of the difference between u_θ (x) and u ( x). In practice, a mean squared error formulation is chosen as ℒ_data (θ) = 1/N_data∑_i=1^N_data| u_θ ( x_i ) - u_i^data|^2, where a set of N_data data called training data is assumed to be available for u( x) taken at different x_i values. Finally, a gradient descent algorithm is used until convergence towards the minimum is obtained for a predefined accuracy (or a given maximum iteration number) as θ_k+1 = θ_k - l_r∇_θ ℒ_data(θ_k), for the k-th iteration also called epoch in the literature, leading to θ^* = argmin_θ ℒ_data (θ), where l_r is known as the learning rate parameter. This is the so-called training procedure. In this work, we choose the well-known Adam optimizer. The standard automatic differentiation technique is necessary to compute derivatives (i.e. ∇_θ) with respect to the NN parameters, i.e. weights and biases <cit.>. This technique consists in storing the various steps in the calculation of a compound function, then calculating its gradient using the chain rule. The final goal is to calibrate the trainable parameters θ (weight matrices and bias vectors) of the network such that u_θ ( x) approximates the target solution u(x). The initialization of the network parameters is done randomly. The implementation of the algorithm is done using the TensorFlow library, a classical Python software library for machine learning[https://www.tensorflow.org/]. The gradient descent algorithm is implemented with Keras using the application programming interface (API) GradientTape.[https://keras.io/api/ ]§.§ The basics of PINNs for solving a single PDE Let us consider a function u( x) satisfying some boundary conditions u_b( x) at the boundary ∂ 𝒟 of some 2D domain 𝒟. The previous non linear approximation procedure can be applied once a set of training data is defined at x_i (i = 1,..., N_data) where u_θ ( x_i ) ≃ u_b( x_i), and using the minimization of ℒ_data (θ) = 1/N_data∑_i=1^N_data| u_θ ( x_i ) - u_b( x_i) |^2. In PINNs, the complete minimization is obtained by considering a second loss function that takes into account the equation, the so-called physics-based loss function, i.e. ℒ_ℱ hereafter.
This physics-based loss is defined using the equation residual, which can be written in the simple following form ℱ [ u( x),x ]= 0 , where the symbol ℱ stands for a nonlinear differential operator. Indeed, using a second set of data, that is N_c data points located at x_j (j ∈ [1, N_c]) and generally called collocation points, we can define the following associated loss function ℒ_ℱ (θ) = 1/N_c∑_j = 1^N_c| ℱ [u_θ( x_j),x_j ] | ^2, which must be minimized in addition to the training data loss. As an important property characterizing PINNs, the derivatives of the expected solution with respect to the variable x (i.e. the network input) needed in the previous loss function are obtained via automatic differentiation (also used in the gradient descent algorithm described in Section 2.1), avoiding the truncation/discretization errors inevitable in traditional numerical methods. In the vanilla-PINN framework, a total loss function ℒ is thus formed as ℒ(θ)=ω_data ℒ_data (θ)+ω_ℱ ℒ_ℱ (θ), where weights (ω_data, ω_ℱ) can be introduced in order to mitigate any imbalance between the two partial losses during the training process. These weights and the learning rate can be user-specified or automatically tuned. In the present work, for simplicity we fix the ω_data and ω_ℱ values to be constant and equal to unity, and the gradient descent algorithm described in Section 2.1 is thus applied to the total loss defined in equation 9. A schematic representation summarizing the procedure can be found in Fig. 2. §.§ The basics of PINNs for solving PDEs The PINNs solver for a single PDE can be easily extended to a set of n PDEs with m desired scalar functions (n being greater than or equal to m). Consequently, the output layer must have m neurons instead of one. The training and collocation data sets must be defined for each function. A physics-based loss function can be defined as a weighted sum of n physics-based loss functions (one per equation). As a single neural network is used, one must increase the complexity of the network by increasing the number of neurons and/or the number of hidden layers (see applications in Section 3 and Section 4). § SOLVING EQUILIBRIUM EQUATIONS USING PINNS Optimization algorithms have been developed for computing MHD equilibria in the solar corona using, however, classical methods where a complex functional is minimized (i.e. without neural networks) <cit.>. Two examples of magnetic solar configurations are considered below: an arcade structure, and a curved loop-like structure obeying a Soloviev Grad-Shafranov equation. Note that, as the exact analytical solutions are known, they are useful in order to evaluate the accuracy of the method and also to impose the boundary conditions. §.§ Triple arcade structure Magnetic arcades are important observed structures in the solar corona <cit.>. Indeed, they are at the heart of solar flares, coronal mass ejections (CME), and other physical processes <cit.>. More precisely, triple arcades are of particular importance to explain the initiation of solar flares associated with the CME scenario (like the breakout model) in the solar wind <cit.>. Simple force-free models in the framework of two-dimensional magnetohydrostatics can be deduced from the following equilibrium equation for the scalar field ψ (x, z), representing the y component of the vector potential of the magnetic field in cartesian coordinates <cit.>, Δψ + c^2ψ = 0, where c is a constant and Δ =∂^2/∂ x^2 +∂^2/∂ z^2 is the cartesian Laplacian operator.
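To make the physics-based loss of Section 2.2 concrete for this arcade equation, the residual ∂^2ψ/∂ x^2 + ∂^2ψ/∂ z^2 + c^2ψ evaluated at the collocation points can be assembled with nested automatic differentiation. The following sketch is illustrative only; it assumes a TensorFlow model of the kind sketched earlier, taking (x, z) pairs as input, and the function name is ours.

import tensorflow as tf

def arcade_residual_loss(model, x, z, c=0.8):
    # Physics-based loss for psi_xx + psi_zz + c^2 psi = 0 at collocation points (x, z).
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    z = tf.convert_to_tensor(z, dtype=tf.float32)
    with tf.GradientTape(persistent=True) as outer:
        outer.watch([x, z])
        with tf.GradientTape(persistent=True) as inner:
            inner.watch([x, z])
            psi = model(tf.stack([x, z], axis=1))   # psi_theta at the collocation points
        psi_x = inner.gradient(psi, x)               # first derivatives via automatic differentiation
        psi_z = inner.gradient(psi, z)
    psi_xx = outer.gradient(psi_x, x)                # second derivatives
    psi_zz = outer.gradient(psi_z, z)
    del inner, outer
    residual = psi_xx + psi_zz + c**2 * tf.squeeze(psi, axis=1)
    return tf.reduce_mean(tf.square(residual))       # mean squared residual, i.e. L_F(theta)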
This equation is solved in a spatial domain (x, z) ∈ [-L/2 : L/2]× [0 : L], where L is a given reference spatial scale. This is a linear force-free equilibrium for which the current density and thermal pressure gradient give the linear form c^2ψ <cit.>. Exact solutions for triple arcade structures can be obtained using Fourier series as ψ (x, z) = ∑_k=1^3exp ( -ν_k z)[ a_k cos ( k π/L x ) ] . The latter solution is periodic in x, and the relationship ν_k^2 =k^2 π^2/L^2 - c^2 applies as a consequence of equation 10. We present the PINNs solutions obtained with L = 3, c = 0.8, and a_2 = 0. Three particular cases are considered below: (a) a dipole-like field with a_1 =1 and a_3 = 0, (b) a quadrupole-like field with a_1 = 0 and a_3 = 1, and (c) a combination of both with a_1 = 1 and a_3 = -0.5. The obtained solutions are plotted in Fig. 3, and can be compared to results previously shown for a similar set of parameters <cit.>. Moreover, we detail below the training procedure only for the third case (c), the procedure being similar for the two other cases (a and b). We have chosen 20 training data points per boundary side (i.e. N_data = 80) with a random distribution, as one can see in Fig. 4 (red dots). The exact solution is used to prescribe these training data values. For the collocation data set, N_c = 700 points are generated inside the integration domain using a pseudo-random distribution (latin-hypercube strategy), as one can see with the blue dots. The evolution of the loss function with the training epochs, which is reported in Fig. 5, shows the convergence toward the predicted solution. Note that the training is stopped after 50000 epochs, corresponding to a final loss value of order 2 × 10^-6. We have chosen a network architecture having 7 hidden layers with 20 neurons per layer, and a fixed learning rate of l_r = 2 × 10^-4. This choice of parameters slightly influences the results but is not critical as long as the number of layers/neurons is not too small <cit.>. Faster convergence can also be obtained by using a learning rate that decreases as the training process advances. The error distribution at the end of the training is plotted in Fig. 5, exhibiting a maximum absolute error of order 0.003, which also corresponds to a similar maximum relative error of order 0.003 (the maximum magnitude of the solution being of order unity). Note that the predicted PINNs solution and the associated error distribution are obtained using a third set of points (different from the collocation points), taken here to be a uniform grid of 100 × 100 points, otherwise the error could be artificially small (overfitting effect). One must also note that the error is higher near the boundary, due to the higher gradient of the solution and to the coexistence of data/collocation points in these regions. In this way, once trained, the network allows one to predict the solution quasi-instantaneously at any point inside the integration domain, without the need for interpolation (as done for example with finite-difference methods when the point is situated between two grid points). The precision of PINNs is known to be very good, but lower than that of more traditional methods (like finite-element codes for example). This is a general property of minimization techniques based on gradient descent algorithms <cit.>.
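For reference, the boundary training points and latin-hypercube collocation points described above could be generated along the following lines. This is a sketch assuming SciPy's quasi-Monte Carlo module; the uniform random sampling of each boundary side is an illustrative choice, not a description of the actual code used here.

import numpy as np
from scipy.stats import qmc

L = 3.0
# N_c = 700 collocation points inside (x, z) in [-L/2, L/2] x [0, L] (latin-hypercube strategy).
sampler = qmc.LatinHypercube(d=2, seed=0)
xz_col = qmc.scale(sampler.random(n=700), [-L / 2, 0.0], [L / 2, L])

def boundary_training_points(n_per_side=20, rng=np.random.default_rng(0)):
    # 20 random points on each of the four sides of the rectangle, i.e. N_data = 80 in total;
    # the exact Fourier-series solution above prescribes the training values at these points.
    t = rng.random((4, n_per_side))
    bottom = np.stack([-L / 2 + L * t[0], np.zeros(n_per_side)], axis=1)
    top    = np.stack([-L / 2 + L * t[1], L * np.ones(n_per_side)], axis=1)
    left   = np.stack([-L / 2 * np.ones(n_per_side), L * t[2]], axis=1)
    right  = np.stack([ L / 2 * np.ones(n_per_side), L * t[3]], axis=1)
    return np.concatenate([bottom, top, left, right], axis=0)

xz_data = boundary_training_points()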
However, a finer tuning of the network parameters, together with the introduction of optimal combinations for the weights of the partial losses, can generally improve the results; this is beyond the scope of the present work. §.§ Grad-Shafranov equilibrium structure: Soloviev solution Equilibrium curved magnetic structures represent another important issue in solar physics. Indeed, such structures obey the solutions of the Grad-Shafranov (GS) equation, which is obtained in the axisymmetric approximation. For example, the GS equation and its solution are often used for magnetic cloud reconstruction (e.g. in order to determine their geometries from observations) <cit.>. GS-like solutions are also important to model the CME phenomenon, for which a simple force-free spheromak solution is used <cit.>. In the latter context, particular solutions of the GS equation called Soloviev solutions can also be implemented as time-dependent boundary conditions, leading to a more realistic and self-consistent CME evolution model and better predictions <cit.>. Following the formulation deduced using (R, z) cylindrical-like variables in the plane perpendicular to the toroidal angle, the GS equation can be written as -[∂^2 ψ/∂ R^2+∂^2 ψ/∂ z^2 - 1/ R∂ψ/∂ R] = F (R , z,ψ), where F is a term containing the current density flowing in the toroidal direction <cit.>. Assuming the particular form for F, F = α R^2 + β (where α and β are constant), allows one to obtain Soloviev solutions <cit.>. More precisely, taking F = f_0 (R^2 + R_0^2) leads to the exact solution ψ = (f_0 R_0^2/ 2)[ a^2 - z^2- (R^2 - R_0^2)^2/ (4 R_0^2)] in a spatial domain 𝒟 bounded by its frontier ∂ 𝒟 defined as follows, ∂ 𝒟 = [ R = R_0 √(1 +2 a cosα/ R_0 ) , z = a R_0 sinα, α= [0: 2 π] ] , and having a Dirichlet-type boundary condition ψ = 0 <cit.>. The solution has a drop-shaped structure, which has an X-point at (z = 0, R = 0) since ∂ψ/∂z = ∂ψ/∂R = 0 at this point. Note that similar Soloviev solutions can also be obtained using a different parametrization in order to approximate axisymmetric solutions of tokamak configurations having a D-shaped geometry, which is beyond the scope of the present work. We present the results obtained with our PINNs solver in Figs 6-7 for the solution of equations 12 and 14. We have used the following solution parameter values: f_0 = 1, a=0.5, and R_0 = 1. The network architecture is similar to the arcade case, where 7 hidden layers with 20 neurons per layer were chosen, which corresponds to 2601 trainable parameters for θ. We have used 80 training data points (i.e. N_data = 80) with a distribution based on a uniform α angle generator, and randomly distributed N_c = 870 collocation points inside the integration domain. The results are obtained after a training process with a maximum of 50000 epochs. The convergence of the loss function is initially very fast (typically during the first 10000 epochs) and is much slower afterwards, as already observed for the arcade problem. When comparing to the exact solution, the relative error of the PINNs solver is similar to (slightly higher than) that obtained for the arcade problem. However, a smaller error is expected with a finer tuning of the different parameters and/or with a longer training procedure.
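Since the exact Soloviev solution and the boundary curve quoted above are used both to prescribe the Dirichlet training data and to measure errors, a short sketch of how they might be evaluated is given below (illustrative NumPy code with the parameter values of this subsection; the function names are ours, not those of the original implementation).

import numpy as np

f0, a, R0 = 1.0, 0.5, 1.0

def psi_soloviev(R, z):
    # Exact Soloviev solution quoted above.
    return 0.5 * f0 * R0**2 * (a**2 - z**2 - (R**2 - R0**2) ** 2 / (4.0 * R0**2))

def boundary_points(n=80):
    # Boundary curve of the domain D, sampled with a uniform alpha angle generator;
    # the Dirichlet value psi = 0 is the training datum prescribed on this curve.
    alpha = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    R = R0 * np.sqrt(1.0 + 2.0 * a * np.cos(alpha) / R0)
    z = a * R0 * np.sin(alpha)
    return np.stack([R, z], axis=1), np.zeros(n)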
§ STEADY-STATE MAGNETIC RECONNECTION Magnetic reconnection plays a fundamental role in the release of magnetic energy in solar flares and coronal mass ejections. The mechanism has been extensively investigated over the last 50 years <cit.>, including exact analytical solutions for steady-state reconnection <cit.> and numerical time-dependent reconnection <cit.> in the MHD framework approximation. In incompressible inviscid plasmas, the particular 2D exact solution obtained by Craig & Henton (1995), which is the generalization of the one previously introduced by Sonnerup & Priest (1975), is of particular interest in order to test our PINNs solver. §.§ Incompressible MHD equations We consider the following set of steady-state incompressible resistive MHD equations written in the usual dimensionless units (i.e. the magnetic permeability and plasma density are taken to be unity). The flow velocity obeys the inviscid equation V·∇V - (∇×B)×B + ∇P = 0 , which is written in a residual form ready to be solved by our PINNs algorithm. The thermal pressure P (via its gradient) is necessary to ensure the equilibrium when using the velocity equation. The flow velocity vector is also constrained by the incompressibility assumption ∇·V = 0. On the other hand, using the Maxwell-Faraday law and Ohm's law, the magnetic field vector is known to follow the equation ∇× ( V ×B )+η∇^2B = 0 , accompanied by the solenoidal condition ∇·B = 0 . Finally, note that the resistivity η is assumed to be uniform in this work. §.§ Magnetic annihilation and reconnective diffusion solutions The magnetic annihilation solution is a particular 2D magnetic reconnection process in which two anti-parallel regions of magnetic field (directed along the y direction) are swept together by the incompressible plasma flow and destroy one another <cit.>. The solution is based on a stagnation-point flow, V= ( - α x, α y ) , where α is a given positive real constant. In the limit of vanishing viscosity, the exact steady-state solution for the magnetic field vector is B=( 0, B_y (x) ) , with B_y (x) = E_d/(ημ) Daw(μ x) , where E_d is the magnitude of a uniform electric field perpendicular to the (x, y) plane, μ^2 = α / (2 η) with η being the electrical resistivity of the plasma, and Daw (x) is the Dawson function given by Daw (x) = ∫_0^xexp (t^2 - x^2) dt. The role of E_d is to control the rate of energy conversion. In the limit of small resistivity η, this solution exhibits a strong current sheet centered over the stagnation-point flow, with a thickness in the x-direction proportional to η^1/2. As a natural extension of the previous reconnection model, the so-called reconnective diffusion solution has been obtained by Craig & Henton (1995). It corresponds to velocity and magnetic field profiles of the form V= ( - α x, α y - (β/α) E_d /(ημ) Daw(μ x) ) , and B=( βx , - βy+ E_d/(ημ) Daw(μ x) ) , respectively. The μ parameter is now defined by μ^2 = (α^2 - β^2)/(2 ηα), where an additional real parameter β <α is introduced. Note that the annihilation solution is naturally recovered as a particular case when β = 0. The reconnective diffusion solution exhibits diffusion across one separatrix, like the annihilation solution, but the dominant process across the other separatrix is advection, as in a classical reconnection picture. As a shear flow exists across a global current layer, there is a symmetry breaking compared to the annihilation process <cit.>.
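These exact fields are used below both to prescribe the boundary training data and to assess the accuracy of the PINNs solver; evaluating them only requires the Dawson function, available for instance in SciPy. A short illustrative sketch follows (the parameter values match those used in the next subsection, and the function name is ours).

import numpy as np
from scipy.special import dawsn   # Dawson function Daw(x) defined above

E_d, alpha, beta, eta = 0.1, 1.0, 0.5, 0.01
mu = np.sqrt((alpha**2 - beta**2) / (2.0 * eta * alpha))

def craig_henton_fields(x, y):
    # Exact reconnective diffusion profiles; beta = 0 recovers the pure annihilation solution.
    daw = dawsn(mu * x)
    Vx = -alpha * x
    Vy = alpha * y - (beta / alpha) * (E_d / (eta * mu)) * daw
    Bx = beta * x
    By = -beta * y + (E_d / (eta * mu)) * daw
    return Vx, Vy, Bx, By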
§.§ Solving steady state magnetic reconnection using PINNs Our PINNs solver must therefore treat 6 scalar equations, namely the two divergence-free conditions, two scalar equations for the velocity components, and two scalar equations for the magnetic field components, together with the use of 6 corresponding partial physics-based loss functions. As 5 unknown variables (i.e. V_x, V_y, B_x, B_y and P) now represent the problem solution, the output layer must include at least 5 corresponding neurons. In practice, we have used these 5 neurons plus a sixth neuron for a magnetic flux function ψ (used for plotting magnetic field lines), with B_x = ∂ψ/∂y and B_y = - ∂ψ/∂x. Following the same procedure previously used for solving equilibria, the magnetic annihilation and reconnective solutions have been accurately recovered. Indeed, the results are plotted in Fig. 8 for two values of the β parameter, i.e. β = 0, selecting the pure annihilation solution, and β = 0.5, selecting a reconnective diffusion one. The other chosen physical parameters are E_d = 0.1, α = 1, and η = 0.01. The integration is done on a [-1:1]^2 square spatial domain. Concerning the architecture of the network, 9 hidden layers with 30 neurons per layer are chosen, which corresponds to 7716 trainable parameters for θ. We have used N_data = 120 training data points (i.e. 30 for each boundary side) with a random distribution, and randomly distributed N_c = 700 points inside the integration domain. The exact solutions for the magnetic field and flow velocity are used to prescribe these training data values. The results are obtained after a training process with 25000 epochs employing a learning rate l_r = 2 × 10^-4. The solutions obtained with the PINNs solver are compared to the exact analytical ones. The results for β=0.5 are plotted in Figs 9-11. A maximum absolute error of order 3 × 10^-3 is visible on the maps showing the spatial error distribution of the magnetic field and velocity flow components, as one can see in Fig. 9 and Fig. 10 respectively. Contrary to the previous results obtained for the equilibrium solvers, the error is higher in the central region, due to the higher gradient there. One-dimensional cuts for different given x and y values, plotted in Fig. 11, also confirm the very good precision properties of the solver. Similar results with similar performances can also be obtained for other β values. However, for cases using smaller resistivity values, the training requires a significantly higher number of collocation points in order to resolve the central layer, which has a thickness in the x-direction proportional to η^1/2. In practice, it is also possible to use a particular spatial distribution of collocation points with an accumulation in the central region, which is beyond the scope of this study. § CONCLUSIONS In this work, we show that PINNs are interesting tools for solving PDEs. In particular, they represent alternatives to traditional/classical numerical methods for modelling the magnetic field dynamics of the solar corona. As a first example of application, PINNs-based solvers can easily handle finding equilibrium configurations by solving Grad-Shafranov-like equations, without the need to involve a spatial mesh over which differential operators are discretized in order to solve a large linear system. Second, it is shown that PINNs solvers can also offer an alternative to classical MHD codes for modelling the dynamics of the solar corona.
Indeed, an exact particular steady-state magnetic reconnection solution of the 2D incompressible resistive MHD equations is easily recovered in this work <cit.>. Compared to traditional numerical methods, PINNs present some advantages, listed below. * The technique does not require meshing the domain. Indeed, the implementation simply requires a dataset of collocation points arbitrarily chosen inside the domain. It can therefore easily be applied to curvilinear geometries or complex domains (for example with holes). The only constraint is to define points in the domain, which is simpler than building meshes. * Once trained, the technique makes it possible to calculate the solution at any point of the domain. This allows one, for example, to zoom in on part of the domain without the need for interpolation. Moreover, this predicted solution is generated quasi-instantaneously (in a fraction of a second), as the latter is a function fully determined by the set of parameters θ. * The formulation based on the equation residuals (e.g. in second-order derivative form) does not require the use of an equivalent system of first-order differential equations. The solution derivative with respect to the spatial variable is also obtained quasi-instantaneously, with an accuracy similar to that of the solution. However, our results also highlight some drawbacks, listed below. * Even if the accuracy obtained in this work is excellent, PINNs seem to be potentially less accurate than classical methods where, for example, refining a grid allows a precision close to machine precision. This limitation is partly inherent to minimization techniques. Nevertheless, our results could be slightly improved (see the second point below). * The training process depends on a combination of many parameters, such as the learning rate, the weights in the loss function (not considered in this study), and the architecture of the network, which determines the efficiency (speed and accuracy) of the minimization <cit.>. Consequently, a finer tuning using adaptive techniques is possible in order to improve the results. However, this is not a simple task and is beyond the scope of the present work. Despite these limitations, PINNs are promising tools that are expected to develop in the coming years, for the following reasons. Improvements using self-adaptive techniques are expected to address the previously cited drawbacks <cit.>. As shown in this work, they also offer a different and complementary approach to traditional methods. Once trained, the network output being an analytic-like expression (see equations 1-2), the solution and derivatives can be generated quasi-instantaneously in the trained spatial domain. Consequently, the solution obtained with our PINNs method is valid over the entire domain without the need for spatial interpolation, as would be required in classical numerical schemes. Another promising capability offered by the PINNs approach is the possibility of learning a family of different solutions with the same neural network <cit.>. Indeed, the use of an input layer considering additional variable parameters (e.g. the resistivity and/or the β parameter in the case of the magnetic reconnection problem) would allow one to learn multiple solutions over ranges of variation of these parameters. We are currently developing such applications, as this is clearly a particular capability of the PINNs technique that is not possible when using traditional numerical schemes.
Finally, another way of using PINNs is to combine a PINNs solver with classical MHD simulations, as this is already under exploitation for hydrodynamics. Indeed, data obtained from classical simulations in a first step (e.g. magnetic reconnection ones for different resistivity values) can be used as extra training data in the neural network training process in order to learn the different associated solutions. Thus, in the second step, PINNs solver can be used to generate a new solution corresponding to another parameter value (e.g. resistivity). In other words, PINNs method can serve as a reduced model of a given problem, avoiding numerous long and costly calculations.The computation time needed to obtain the results presented in this work (for a standard single CPU computer) is of order a few minutes in case of the arcade/equilibrium equations and a few tens of minutes for the reconnection problem. This is probably faster than obtained with traditional methods on a similar computer. A even faster computation is of course possible when using GPU and multi-GPU.Beyond the above potentialities, more studies are obviously needed to extend the examples of application presented in this work. First, the reconstruction of the solar coronal magnetic field in a more realistic three dimensional (3D) geometry could be a challenging project. The transition to 3D version doesn't necessitate special adaptation (only additional input/output neurons), but the computation time would be higher as a higher number of points and possibly a larger/deeper neural network are required.Second, using a PINNs solver for a time dependent MHD dynamics is also actually under development either for exploitation in combination with a classical MHD code or not.§ ACKNOWLEDGEMENTSThe authors thank Emmanuel Franck and Victor Michel-Dansac (IRMA, Strasbourg) for fruitful discussions on PINNs technique. We also sincerely thank the anonymous referee for useful suggestions that helped improve the content of the paper.§ DATA AVAILABILITYData will be made available on reasonable request to the corresponding author.99 [Baty et al. 2014]bat14 Baty H., Forbes T.G., Priest E.R., 2014, Phys. Plasmas, 21, 112111[Baty & Nishikawa 2016]bat16 Baty H., Nishikawa H., 2016, MNRAS, 459, 624[Baty 2019]bat19 Baty H., 2019, ApJS, 243, 23[Baty 2023]bat23 Baty H., 2023, Astronomy and Computing, 44, 100734[Baydin et al. 2018]bay18 Baydin A.G., Pearlmutter B.A., Radul A.A., Siskind J.M., 2018, Journal of Machine Learning Research, 18, 1 [Cai et al. 2021]cai21 Cai S., MaoZ., Wang Z., Yin M., Karniadakis G.E., 2021, Acta Mechanica Sinica, 37, 1727[LeCun et al. 2015]lec15 LeCun Y.,Bengio Y., Hinton G., 2015, Deep Learning. Nature, 521, 436 [Craig & Henton 1995]cra95 Craig I.J.D., Henton S.M., 1995, ApJ, 450, 280 [Cuomo et al. 2022]cuo22 Cuomo S., Di Cola V.S., Giampaolo F., Rozza G., Raissi M., Piccialli F., 2022, Journal of Scientific Computing, 92, 88[Deriaz et al. 2011]der11 Deriaz E., Despres B., Faccanoni G., Gostaf K.P., Imbert-Gérard L.M., Sadaka G.,Sart R.,2011, ESAIM Proc., 32, 76 [Imada et al. 2013]ima13 Imada S., Aoki K., Hara H., Watanabe T., Harra L.K., Shimizu T.,2013, ApJL, 776, L11 [Isavnin et al. 2011]isa11 Isavnin A., Kilpua E.K.J., Koskinen H.E.J, 2011, Solar Physics, 273, 205[Janvier et al. 2015]jan15 Janvier M., Aulanier G., Démoulin P., 2015, Solar Physics, 290, 3425[Jarolim et al. 
2023]jar23 Jarolim R, Thalmann J.K., VeronigA.M., Podladchikova T., 2023, to appear in Nature Astronomy, https://doi.org/10.1038/s41550-023-02030-9 [Kaltsas & Throumoulopoulos 2022]kal22 Kaltsas D.A., Throumoulopoulos G.N., 2022, Phys. Plasmas, 29, 022506 [Karniadakis et al. 2021]kar21 Karniadakis G.E., Kevrekidis I.G., Lu L, Perdikaris P., Wang S.,Yang L. 2021, Nature Reviews Physics, 3, 422[Kusano et al. 2004]kus04 Kusano K., Maeshiro T., Yokoyama T., Sakurai T., 2004, ApJ, 610, 537[Kuzma et al. 2021]kuz21 Kuzma B., Murawski K., Musielak Z.E., Poedts S., Wojcik D., 2021, A&A, 652, A88 [Lagaris 1998]lag98 Lagaris E., Likas A., Fotiadis D.A., 1998, IEEE Transactions on Neural Networks, 9 (5), 987[Linan et al. 2023]lin23 Linan L., Maharana A., Poedts S., Schmieder B., Keppens R., 2023, submitted to A&A[Mc Kenzie & Hudson 1999]mac99 Mc Kenzie D.E., Hudson H.S., 1999, ApJ, 519, L93 [Mishra & Molinaro 2023]mis23 Mishra S., Molinaro R., 2023, IMAJ. Num. Anal., 43, 1 [Press et al. 2007]pre07 Press W.H., Teukolsky S.A., Vetterling W.T., Flannery B.P., 2007,Numerical Recipes 3rd Edition[Priest & Forbes 2000]pri00Priest E.R., Forbes T.G., 2000, Magnetic Reconnection. Cambridge Univ. Press, Cambridge [Raissi et al. 2019]rai19 Raissi M., Perdikaris P., KarniadakisG.E., 2019, J. Comput. Phys., 378, 686[Shiota & Kataoka 2016]shi16 Shiota D., Kataoka R.,2016, Space Weather, 14, 56 [Soloviev 1975]sol75 Soloviev L.S., 1975, Reviews of Plasma Physics, ed. M. Leontovich, Vol. 6 (New York: Consultants Bureau), 257[Sonnerup & Priest 1975]son75 SonnerupB.U.O., Priest E.R., 1975, J. Plasma Physics, 14, 283[Urbán et al. 2023]urb23 Urbán J.F., Stefanou P., Dehman C., PonsJ.A., 2023, MNRAS, 524, 32[Van Der Holst et al. 2007]van07 Van Der Holst B., Jacobs C., Poedts S., 2007, ApJ, 671, L77 [Verbeke et al. 2019]ver19 Verbeke C.,Pomoell J., Poedts S., 2019, A&A, 627, A111 [Watson and Craig 1998]wat98 Watson P.G., Craig I.J.D., 1998, ApJ, 505, 363 [Watson et al. 1998]wat98b Watson P.G., Priest E.R., Craig I.J.D., 1998, Geophys. Astrophys. Fluid Dyn.. 88, 165 [Wiegelmann 1998]wie98 Wiegelmann T., 1998, Physica Scripta, T74, 77[Wiegelmann& Neukirch 2006]wie06 Wiegelmann T., Neukirch T.,2006, A&A, 457, 1053 | http://arxiv.org/abs/2310.17919v1 | {
"authors": [
"Hubert Baty",
"Vincent Vigon"
],
"categories": [
"astro-ph.SR",
"physics.plasm-ph",
"physics.space-ph"
],
"primary_category": "astro-ph.SR",
"published": "20231027062810",
"title": "Modelling solar coronal magnetic fields with physics-informed neural networks"
} |
Causal disentanglement of multimodal data Elise Walker (Center for Computing Research, Sandia National Laboratories, Albuquerque, NM, USA), Jonas A. Actor (Center for Computing Research, Sandia National Laboratories, Albuquerque, NM, USA), Carianne Martinez (Applied Information Sciences Center, Sandia National Laboratories, Albuquerque, NM, USA; School of Computing and Augmented Intelligence, Arizona State University, USA), Nathaniel Trask (School of Engineering and Applied Science, University of Pennsylvania, USA). Correspondence: Nathaniel Trask, [email protected]. Keywords: generative models, multimodal machine learning, physics-informed machine learning, variational inference, variational autoencoders, fingerprinting, mixture of experts. Causal representation learning algorithms discover lower-dimensional representations of data that admit a decipherable interpretation of cause and effect; as achieving such interpretable representations is challenging, many causal learning algorithms utilize elements indicating prior information, such as (linear) structural causal models, interventional data, or weak supervision. Unfortunately, in exploratory causal representation learning, such elements and prior information may not be available or warranted. Alternatively, scientific datasets often have multiple modalities or physics-based constraints, and the use of such scientific, multimodal data has been shown to improve disentanglement in fully unsupervised settings. Consequently, we introduce a causal representation learning algorithm (causalPIMA) that can use multimodal data and known physics to discover important features with causal relationships. Our innovative algorithm utilizes a new differentiable parametrization to learn a directed acyclic graph (DAG) together with a latent space of a variational autoencoder in an end-to-end differentiable framework via a single, tractable evidence lower bound loss function. We place a Gaussian mixture prior on the latent space and identify each of the mixtures with an outcome of the DAG nodes; this novel identification enables feature discovery with causal relationships. Tested against synthetic and scientific datasets, our results demonstrate the capability of learning an interpretable causal structure while simultaneously discovering key features in a fully unsupervised setting. § INTRODUCTION To achieve autonomous scientific discovery, scientists are rapidly collecting large scientific datasets with a growing number of complex modalities. Such large, multimodal scientific datasets extend beyond the limits of human cognition and thereby necessitate ML-driven methods to identify hidden, underlying factors in the data <cit.>. The field of disentangled representation learning seeks to identify hidden features of data through an interpretable latent representation <cit.>. Variational autoencoder (VAE) frameworks are often used in representation learning to provide a meaningful, disentangled representation of data in a latent space <cit.>. Physics-informed multimodal autoencoders (PIMA) have demonstrated the ability to detect features in multimodal datasets while incorporating known physics to aid in disentanglement <cit.>. One shortcoming of representation learning methods is that they typically do not consider any causal relationships. Representation learning has long been used to describe how random variables relate to each other based on observable data, but does not address the why behind random variable correlations. For example, many VAE frameworks assume that features are independent.
Real-world data have natural correlative and causal relationships, however. To capture causal dependencies between random variables, recent works introduce causal inference into representation learning <cit.>.Causal inference commonly identifies a directed acyclic graph (DAG) on a set of random variables representing features, where each node of the DAG is a random variable and directed edges between the nodes represent a causal relationship <cit.>. Traditionally, learning a DAG reduced to expensive, combinatorial searches, e.g. <cit.>. Recent methods of learning DAGs, however, utilize a continuous, differentiable optimization scheme, which bypasses the otherwise laborious combinatorial search in the space of all DAGs <cit.>.Many works are interested in learning a causal DAG on human-specified features from data, or, alternatively, learning a data distribution given a known DAG. We, however, are interested causal representation learning, which means learning a causal DAG in concert with learning a lower-dimensional representation of data. Current frameworks for causal representation learning often rely on causal structural models, interventional data, or labels on data features. Such assumptions and interventions are deemed necessary for finding unique, or indentifiable, DAGs. We, however, are considering the exploratory setting where we are not concerned as much with unique, identifiable models so much as identifying plausible causal patterns within datasets where no prior additional information on the causal characteristics is available. In lieu of additional causal information or assumptions, we follow <cit.> and instead rely on multiple modalities or physics-based constraints of the data. In this paper we present causalPIMA: a fully unsupervised causal representation learning framework capable of handling multiple modalities and physics-based constraints. In particular, we adapt a new DAG-learning structure to the latent space of the PIMA framework. The result is a multimodal variational autoencoder with physics-based decoder capabilities such that clustering in the latent space follows a DAG structure that is learned simultaneously with the variational autoencoder embedding. The ability to handle multimodal data with physical constraints makes our algorithm unique from other causal representation learning algorithms.§.§ Related WorksThe references detailed below give a non-comprehensive overview of the current work in representation causal inference, as well as references that informed our algorithmic development. Continuously learning DAGs. A continuous optimization strategy for learning DAGs is first introduced in <cit.>, where the key component was developing new conditions for enforcing acyclic DAGs. Works such as <cit.> further build off of this idea and introduce new continuous constraints for learning DAGs. Applications of continuous optimization of DAGs include <cit.>. In contrast, our DAG parametrization is inspired by Hodge theory <cit.> where we view edges as the flows between nodes. Furthermore, our novel parametrization includes a temperature parameter that regularizes the edge indicator function in order to avoid local minimum while training. Causal representation learning. Much of causal inference seeks to fit a DAG to data, or otherwise already assume a DAG and looks to fit the data to the DAG. Causal representation learning aims to learn a lower-dimensional representation of data with a causal interpretation. 
In <cit.>, the authors introduce a linear structural causal model into a VAE framework. Their framework enables counterfactual data generation and has some identifiability guarantees under set assumptions. Their approach requires weak supervision, however, in order to achieve disentanglement and identifiability. Our method differs from that of <cit.> in that our algorithm can handle multiple modalities, known physics, and, most significantly, is completely unsupervised. A fully unsupervised framework is necessary for truly exploratory settings, such as autonomous scientific discovery. A fully linear causal representation learning approach is introduced in <cit.>, where unimodal data is factored into a linear causal model and a lower-dimensional representation. The primary objective of this work is to provide identifiability analysis in causal disentanglement. Indeed, this unimodal method, given interventional data, is guaranteed to be identifiable given a pure intervention on each random variable of the DAG. Our work differs from that of <cit.> in that we do not assume linearity or interventional data. Indeed our work finds causal relationships and multimodal, nonlinear representations in settings where interventional data is not available.Latent representations of scientific datasets. Scientific datasets often are artisan, consist of various modalities, and obey physics constraints.Physics-informed multimodal autoencoders (PIMA), introduced in <cit.>, use a VAE framework to learn a joint representation of multimodal data with optional physical constraints on the decoders. In particular, they show that additional modalities can improve classification and disentanglement. Consequently, we chose to base our causal algorithm on the PIMA framework. The result is that our algorithm has a tractable, closed-form evidence lower bound loss function and can also handle incomplete multimodal data and incorporate simulators, reduced-order models, or other physics-based predictions.§ ALGORITHM FRAMEWORKGiven data 𝐗 = { X_1, …, X_M } from M distinct modalities, we seek a common embedding into a latent space Z ∈^J, where the latent space representation admits distinct clusters based on encoded features of the data.We assume that our embedded representations aredescribed by L categorical features 𝐍_1,… ,𝐍_L, where each feature 𝐍_ℓ is a categorical random variable with C_ℓ outcomes. Moreover, we assume that there is a causal relationship between features, where the relationship is representable by a directed acyclic graph (DAG). That is, we assume that each feature 𝐍_ℓ is a node in a DAG G. Letting Pa(𝐍_ℓ) denote the immediate parents of feature node N_ℓ, we assume the joint distribution of the feature nodes follows the Markov factorization property: p(𝐍) := p(𝐍_1,…,𝐍_L) = ∏_ℓ=1^L p(𝐍_ℓ|Pa(𝐍_ℓ)).We relate the features nodes of the DAG to the latent space Z through identification. That is, we enforce a Gaussian mixture model (GMM) prior on the latent space Z. Each Gaussian in the mixture corresponds to a unique outcome of the joint distribution 𝐍. In particular, the number of clusters in our GMM is C = C_1 ⋯ C_L and the categorical probability of each cluster is given by 𝐍. Appendix <ref> contains a summary of notation (Table <ref>)as well as a sketch depicting how the causal graph of features is related to the latent space embedding (Figure <ref>).We construct our embedding and latent space representation via a multimodal variational autoencoder, with distributions for the prior p and posterior q. 
Following work such as <cit.>, we train our variational autoencoder through finding distributions p, q, and DAG G which maximize the evidence lower bound (ELBO) loss: ℒ = 𝔼_q(Z,N|𝐗)[ logp(𝐗,Z,𝐍)/q(Z,𝐍|𝐗)].We assume independence of decoding mechanisms for each modality for our prior, and assume mean-field separability for the posterior. These assumptions respectively give: p(𝐗| Z,𝐍) = ∏_m=1^M p(X_m | Z,𝐍) andq(Z,𝐍|𝐗) = q(Z|𝐗) q(𝐍|𝐗).The ELBO above is computationally tractable through strategic framework decisions. In particular,our framework (1) utilizes unimodal deep encodings with Gaussian outputs, (2) fuses the unimodal deep encodings via a product of experts (PoE), (3) models clusters in the latent space as a mixture of Gaussians, (4) computes the probability of each cluster as the joint probability of the nodes a trainable DAG, and (5) utilizes a mixture of deep decoders with the optional capability of physics-informed decoders for modalities suitable to expert modeling. By assuming Equation <ref> and extensively using Gaussians, the ELBO separates as sums of expectations of Gaussian distributions. In the case of Gaussians with diagonal covariance, <cit.> gives a closed-form solution to compute such an expectation (see Corollary <ref> in Appendix <ref>). For general Gaussian distributions, we give the closed-form solution in Lemma <ref> of Appendix <ref>. For simplicity, we assume Gaussian distributions with diagonal covariance throughout this work.Our algorithmic framework thus consists of (1) a multimodal variational autoencoder with a Gaussian mixture prior and (2) a parameterization of our DAG and the causal structure it induces. We describe each of these components in the subsections below.§.§ Multimodal variational autoencoder with Gaussian mixture priorOur mulitmodal representation learning framework amounts to a variational autoencoder (VAE) with a Gaussian mixture model (GMM) prior on the latent space Z. Our GMM is informed by a DAG G where all nodes of G are categorical random variables. In particular, we identify each Gaussian in the latent space with an outcome on the nodes of G. Thus the total number of Gaussians in the latent space is C_1 ⋯ C_L, where L is the number of nodes and C_ℓ is the number of outcomes of the ℓ^th node 𝐍_ℓ. The joint random variable of all nodes is 𝐍, which indexes the clusters in the GMM. In essence, we are putting a causal prior on the distribution of the clusters so that the probability of belonging to cluster (c_1, …, c_L) is given by 𝐀_c_1, …, c_L=p(𝐍_c_1, …, c_L). We assume each cluster in the mixture is a Gaussian of the form p(Z |𝐍_c_1,…,c_L) ∼(μ_c_1,…,c_L, σ_c_1,…,c_L^2 𝐈), where the parameters μ_c_1,…,c_L and σ_c_1,…,c_L^2 are either trainable variables, or computed using block-coordinate maximization strategy outlined in Section <ref>. We handle the multimodal embedding and decoding of the VAE in the same manner as <cit.>. In particular, we use neural network encoders to embed each modality as a Gaussian and then combine these embeddings using a PoE. That is, for each modality m we assume q(Z | X_m) ∼(μ_m, σ_m^2 𝐈), where [μ_m, σ_m^2] = F_m(X_m; θ_m) for a neural network F_m with trainable parameters θ_m. We deterministically compute the multimodal embedding from the unimodal ones via the identity q(Z |𝐗) ∼(μ, σ^2 𝐈) = α∏_m=1^M (μ_m, σ_m^2𝐈),where α is a normalization constant and σ^-2 = ∑_m=1^Mσ_m^-2and μ/σ^2 = ∑_m=1^M μ_m/σ_m^2.During training, the multimodal distribution is sampled using the reparametrization trick. 
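As an illustration, the product-of-experts fusion above and the reparametrized draw spelled out in the next sentence can be sketched in a few lines. This is an illustrative NumPy sketch only, under the assumption (ours, not stated in the text) that each unimodal encoder returns a mean and a log-variance.

import numpy as np

def product_of_experts(mus, logvars):
    # Fuse M unimodal Gaussians N(mu_m, sigma_m^2 I): the fused precision is the sum of the
    # unimodal precisions, and the fused mean is the precision-weighted average of the means.
    precisions = [np.exp(-lv) for lv in logvars]
    total_precision = np.sum(precisions, axis=0)
    mu = np.sum([p * m for p, m in zip(precisions, mus)], axis=0) / total_precision
    return mu, 1.0 / total_precision          # fused mean and (diagonal) variance

def reparameterized_sample(mu, var, rng=None):
    # z = mu + eps * sigma with eps ~ N(0, I) (the reparametrization trick described in the text).
    rng = np.random.default_rng() if rng is None else rng
    return mu + rng.standard_normal(mu.shape) * np.sqrt(var)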
That is, we sample ϵ∼(0, 𝐈) and compute z = μ + ϵ⊙σ, where ⊙ is the Hadamard product. Our decoders output a Gaussian for each modality p(X_m | Z, 𝐍_c_1, …, c_L) ∼(μ_m;c_1, …, c_L, σ_m; c_1, …, c_L^2 𝐈). The Gaussians' parameters are determined by neural networks D_m; c_1, …, c_L, i.e. [μ_m;c_1, …, c_L, σ_m; c_1, …, c_L^2] = D_m;c_1, …, c_L(Z;θ_m;c_1, …, c_L). Alternatively our decoders D_m;c_1, …, c_L(Z;θ_m;c_1, …, c_L) can be expert models, or they can depend upon only Z, i.e. p(X_m | Z, 𝐍_c_1, …, c_L) = p(X_m | Z). §.§ Directed acyclic graph and joint distribution of nodes Given our data, we assume that hidden features - which are discovered by the encoder and decoder of our architecture - admit a causal structure. In particular, we assume a directed acyclic graph G with L nodes, where each feature node 𝐍_ℓ is a categorical random variable representing a hidden feature with C_ℓ outcomes. Furthermore, the joint distribution of 𝐍 factors according to G: p(𝐍) = p(𝐍_1,…,𝐍_L) = ∏_ℓ=1^L p(𝐍_ℓ|Pa(𝐍_ℓ)).One challenge of causal representation learning is determining how to efficiently learn the DAG edges, which are described by Pa(𝐍_ℓ). Building off of concepts from Hodge theory <cit.>, we pose our regularized edge indicator function as the graph gradient (𝒢) on a set of nodes. By using the graph gradient, we are guaranteeing that our edge indicator function is curl-free, and consequently defines a complete DAG; we introduce sparsity in the complete DAG through nonnegative weightings (B) of edges. Explicitly, given a set of nodes, each node 𝐍_ℓ is assigned a trainable score ξ_ℓ. We denote the vector of trainable node scores as ξ⃗. Each potential edge e_ij between nodes is assigned a value F_ij given byF_ij = (B ·𝒢ξ)_ij = B_ij(ξ_j - ξ_i),where 𝒢 is the graph gradient operator, and B is a trainable nonnegative metric diagonal tensor inducing sparsity in G. We use these edge values to give a regularized edge indicator functionE_ij = ReLU( tanh( 1/βF_ij) ),where the scalar β > 0 is a temperature parameter that controls the sharpness of the regularization of the indicator function. We use this temperature to control how easily the DAG can update during training, as described in Section <ref>. With this formulation, we assign edges in our DAG via the rule 𝐍_i ⊆Pa(𝐍_j) ⟺lim_β→ 0 E_ij = 1 ⟺ B_ij 0and ξ_i < ξ_j.Our DAG parametrization does indeed guarantee a DAG and is flexible enough to learn any possible DAG. Formal proofs are given in Appendix <ref>. Let A be the adjacency matrix of a directed graph G. Then G is a DAG if and only if A=lim_β→ 0 E for some matrix E with entries given by Equation <ref>.See Lemma <ref> and Proposition <ref> in Appendix <ref>. For a DAG G parameterized with the edge scores in Equation <ref>, we need to compute the joint probability distribution on the nodes 𝐍, as given by the Markov factorization property (Equation <ref>). We now proceed to describe our representation of each of the terms in this factorization, i.e. for the probability distribution at each node π⃗_ℓ = p(𝐍_ℓ|Pa(𝐍_ℓ)). However, the direction of the edge dependencies in the DAG G may change during training, and as a result the number of causal factors that are parents of a given node (i.e. Pa(𝐍_ℓ)) may change as well. 
We therefore build, for each node ℓ, a parameterization of these probabilities that allows for any subset of nodes to be parents via a trainable tensor 𝐖^ℓ; we downselect which nodes are parents using the DAG edge indicator scores from Equation <ref> and averaging out those modes that are not parents. For ease of notation, we let 𝐀 = p(𝐍). For each ℓ=1,…,L, we define 𝐖^ℓ to be a nonnegative rank-L tensor of size C_1 ×…× C_L, constrained so that for any c_1,…,c_ℓ-1,c_ℓ+1,…,c_L,∑_c_ℓ = 1^C_ℓ W^ℓ_c_1,…,c_L = 1.The entries of 𝐖^ℓ represent the probabilities 𝐖^ℓ : = p( 𝐍_ℓ|𝐍_kfork ℓ) W^ℓ_c_1,…,c_L = p(𝐍_ℓ = 1_c_ℓ|𝐍_k = 1_c_k fork ℓ),where 1_c is a one-hot encoding with the c^th entry set to 1. With this definition, we can now proceed to describe our downselection algorithm and parameterize the probabilities over the structure of a given DAG.If a node 𝐍_k is not a parent node of 𝐍_ℓ, then we remove mode k from 𝐖^ℓ by contracting 𝐖^ℓ against 1/C_k1 (where 1 is the vector of all ones) along mode k. This contraction makes 𝐍_ℓ independent of 𝐍_k when 𝐍_k is not a parent of 𝐍_ℓ. If the node 𝐍_k is a parent of 𝐍_ℓ, then we can contract against the realizations of the parent one-hot encodings, since they are already known. We therefore end up with an expression for the categorical distribution on 𝐍_ℓ|Pa(𝐍_ℓ) viaπ⃗_ℓ = p(𝐍_ℓ|Pa(𝐍_ℓ))= 𝐖^ℓ×̅_1 v_1 ×̅_2 v_2 …×̅_ℓ-1 v_ℓ-1×̅_ℓ+1 v_ℓ+1…×̅_L v_L = ∑_k=1kℓ^L ∑_c_k =1^C_k W^ℓ_c_1,…,c_L v_k,c_k,where ×̅_k denotes the contractive n-mode tensor product against mode k (see <cit.> for more details) andv_k = 𝐍_k if 𝐍_k ⊆Pa(𝐍_ℓ)1/C_k1if 𝐍_k ⊆Pa(𝐍_ℓ)^𝐜 andk ℓUsing the DAG representation from the previous section, these cases can be written asv_k = 𝐍_k ifE_k,ℓ = 11/C_k1ifE_k,ℓ 1andk ℓor more concisely asv_k = 1/C_k1 - E_k,ℓ(1/C_k1-𝐍_k)for kℓ,which has the benefit of allowing us to handle relaxations of E where E is not necessarily a binary matrix (such as when the temperature β is small but not yet sufficiently close to 0). With v_k defined as such, we can write π⃗_ℓ viaπ⃗_ℓ = ∑_k = 1 k ℓ^L ∑_c_k = 1^C_k W^ℓ_c_1,…,c_L( 1/C_k1 - E_k,ℓ(1/C_k1-𝐍_k) )_c_k. From the tensors 𝐖^ℓ and the vectors π⃗_ℓ, we can now compute 𝐀.By Corollary <ref>, assume that the categorical variables 𝐍_1,…,𝐍_L are ordered such that ∀ k < ℓ, 𝐍_k ⊂Anc(𝐍_ℓ); if not, we reassign the indices via the permutation σ. Observe that𝐀^ℓ := p(𝐍_1,…,𝐍_ℓ) = p(𝐍_ℓ|𝐍_1,…,𝐍_ℓ-1) p( 𝐍_1,…,𝐍_ℓ-1) = p(𝐍_ℓ|Pa(𝐍_ℓ))𝐀^ℓ-1where 𝐀^0 = 1, and where the expression for p(𝐍_ℓ|Pa(𝐍_ℓ)) is given by π⃗_ℓ in Equation (<ref>). This inductive process of computing 𝐀 = 𝐀^L is given in Algorithm <ref>.§ ELBO LOSS AND TRAINING §.§ Single sample ELBOThe ELBO loss for training is identical to that of <cit.>, albeit with different notation and computation of cluster assignment. The full ELBO derivation is in Appendix <ref>. After dropping constant terms, the single-sample ELBO is:ℒ = - ∑_m=1^M log (σ_m^2) + ‖X_m - μ_m/σ_m‖^2+ ∑_j=1^Jlog(σ_j^2)+ ∑_c_1=1^C_1⋯∑_c_L=1^C_Lγ_c_1,…,c_L·[ 2 log(𝐀_c_1,…,c_L/γ_c_1,…,c_L) -∑_j=1^Jlog (σ_c_1,…,c_L;j^2) + σ_j^2/σ_c_1,…,c_L;j^2 + (μ_j - μ_c_1,…,c_L;j)^2/σ_c_1,…,c_L;j^2],where γ_c_1,…,c_L is an estimate for the posterior distribution and, following <cit.>, is computed by:γ : = q(𝐍|𝐗) = p(𝐍| Z) =p(𝐍) p(Z |𝐍) /p(Z) γ_c_1,…,c_L = p(𝐍_c_1,…,c_L) p(Z |𝐍_c_1,…,c_L)/∑_c'_1=1^C_1⋯∑_c'_L=1^C_L p(𝐍_c'_1,…,c'_L)p(Z |𝐍_c'_1,…,c'_L) = 𝐀_c_1,…,c_L p(Z |𝐍_c_1,…,c_L) /∑_c'_1=1^C_1⋯∑_c'_L=1^C_L𝐀_c'_1,…,c'_L p(Z |𝐍_c'_1… c'_L)where we recall 𝐀 := p(𝐍) for convenience. 
Note that 𝐀 and γ are both tensors with L modes, of size C_1 ×…× C_L. The tensor 𝐀 can be calculated via Algorithm <ref> in Section <ref>. The values of p(Z |𝐍) can be computed by sampling from each Gaussian in the Gaussian mixture model. All other values are parameters in our model; Table <ref> in Appendix <ref> summarizes the assumed distributions on each term in the architecture, and lists how the variables are computed and updated during training. §.§ Training To train our causal model, we seek to maximize the ELBO ℒ over the entire dataset. That is, if we use ℒ_d to denote Equation <ref> for the d^th datapoint, then we want to minimize -∑_d ℒ_d.Throughout training we alternate between (1) updating the neural network, expert model, and DAG parameters via gradient descent and (2) updating the Gaussian mixture centers and variances using block-coordinate maximization, similar to <cit.>. In particular we compute the optimal Gaussian mixture centers and variances by taking the derivative of -∑_dℒ_d with respect to the cluster centers and variances and solving for the global minimizers: μ_c_1,…,c_L = ∑_dμ^(d)γ_c_1, …, c_L^(d)/∑_dγ_c_1, …, c_L^(d),σ_c_1,…,c_L^2=∑_d(( μ^(d) - μ_c_1,…,c_L )^2 + σ^2(d))γ_c_1, …, c_L^(d)/∑_dγ_c_1, …, c_L^(d),where d indexes the d^th data point, and, in particular μ^(d) and σ^2(d) are respectively the encoded mean and variance of the d^th data point. Our training procedure follows Algorithm <ref>.§.§ Practical considerationsWe implement several tools for algorithm adaptation and to aid in training. These tools are described here, and the use of these tools in each experiment is detailed in Appendix <ref>. Pre-training.Before fitting a DAG, we need a reasonable latent embedding. Thus we implemented a pre-training regimen following <cit.>.As our first step in pre-training, we fix the pre-initialized encoders and initialize the cluster means and variances via Equations <ref> and <ref>. This initialization has the benefits of providing a good GMM fit for the initial latent embedding, but if the initial latent embedding is poor or undiscriminating, then the initial GMM fitting by these step might not focus on any informative features. Our next step in pre-training is to train the weights and biases of the encoders and decoders via the reconstruction term or by fitting a unit-normal Gaussian variational autoencoder <cit.>. Following this training we find a good initial GMM fit through iterations of Equations <ref> and <ref>. This has the benefit of finding a good initial embedding from which the block coordinate maximization can recover meaningful features. Edge indicator function adaptations.We have two optional adaptations to the edge indicator function. The first is to add random noise to the node scores ξ. This noise is included to break free of local minima, and may additionally test edge orientation. Our second adaptation is to anneal β during training. In Equation <ref>, β serves as a temperature parameter and, as β→ 0, E approaches a true indicator function. Our annealing implementation is simple, where we specify the initial β, the final β, and the update frequency of β. Updates on GMM parameters. The cluster center and variance updates in Equation <ref>, paired with the gamma calculation in Equation <ref>, are reminiscent of expectation-maximization. The traditional maximization step would, however, also update the probability of belonging to each cluster (𝐀). 
While there is not a closed-form expression for an update on 𝐀 from our ELBO, since 𝐀 depends on the underlying causal factorization, we alternatively perform extra gradient-descent steps to update 𝐀 after each update of the cluster means and variances. Furthermore, we also implemented the option to perform multiple iterations of GMM variable updates per epoch. § EXPERIMENTS We tested causalPIMA on a synthetic dataset consisting of circle images and a materials dataset consisting of 3D printed lattices. All architectures and hyperparameters for the experiments can be found in Appendix <ref>. §.§ Circles For our first experiment, we generated a synthetic dataset consisting of images of circles with three different features: hue h (red, blue), radius r (Gaussian mixture of big, small), and shift s (Gaussian mixture of left, right). We generated 4096 circles using the decision tree in Figure <ref>, where we purposefully overlapped the distributions of h, r, and s to necessitate the discovery of a DAG describing the generative process. We ran this experiment with three nodes in the DAG, where each node was a binary categorical random variable. The latent space showed disentanglement in the three different features. The learned DAG is in subpanel (e) of Figure <ref>. By comparing cluster labels to features characteristic of each cluster, we see that node 𝐍_1 in the DAG corresponds to radius, node 𝐍_2 corresponds to hue, and node 𝐍_3 corresponds to shift. For example, all clusters with red circles have a 0 in the second entry of their label. Under this identification, the resulting directed acyclic graph demonstrates that radius and hue play a key role in the outcome of shift. §.§ Lattices Our next experiment uses a dataset of 3D printed lattices <cit.>. Two different lattice geometries were printed (octet and gyroid), with a total of 91 samples. An image (X_1) and a stress-strain curve (X_2) produced by a high-throughput uniaxial compression machine were collected for each printed lattice. The stress-strain curves represent a physics-imbued modality, where curves can be modeled via a continuous piecewise linear function. Consequently, for the stress-strain modality, we used an expert model decoder composed of two piecewise linear segments. Specifying two binary feature nodes resulted in a latent space organized by lattice type and stress-strain curves. The two clusters consisting of the octet geometry merged, and the corresponding expert models are nearly identical. This result is consistent with the distribution of octet stress-strain curves, which has a lower variance than the gyroid stress-strain curves. By comparing cluster labels to features characteristic of each cluster, we see that node 𝐍_2 corresponds to lattice type while 𝐍_1 corresponds to the stress-strain curve profile. The learned DAG suggests that the lattice type influences the stress-strain curve. § CONCLUSION Causal disentanglement often relies upon interventional data and underlying model assumptions. For exploratory cases where such information is not available, we introduce a causal disentanglement algorithm that does not make any structural assumptions and does not rely on interventional data. Furthermore, this algorithm is capable of handling multiple modalities and underlying physics to encourage data-driven disentanglement of data with a causal interpretation. We demonstrate the efficacy of our algorithm on synthetic and real data and were able to achieve interpretable causal relationships.
These results show that meaningful causal disentanglement is possible, even in purely exploratory settings. Future work will include methods for optionally introducing interventions and structural causal models.§ ACKNOWLEDGEMENTSThis article has been co-authored by employees of National Technology & Engineering Solutions of Sandia, LLC under Contract No. DE-NA0003525 with the U.S. Department of Energy (DOE). The employees are solely responsible for its contents. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government. SAND number: SAND2023-11515O icml2022§ NOMENCLATURE AND REPRESENTATIONS We include tables outlining our notation choices (Table <ref>) as well as the various distributions appearing in our algorithm (Table <ref>). Furthermore we include Figure <ref> to illustrate the connection between the trained DAG and the clusters in the latent space. § ARCHITECTURES, HYPERPARAMETERS, AND IMPLEMENTATIONWe include details on architectures and implementation for each experiment. Hyperparameters for each experiment are in Table <ref>.§.§ Experiment <ref> (Circles)Our circles experiment consists of 4096 images of circles of size 28×28×3. Our neural network architectures for this experiment were simple multilayer perceptrons (MLPs). The encoder for circles first flattens the image and then is an MLP consisting of five linear layers (with respective sizes 128, 64, 32, 16, and 2× encoding_dim) withReLU activations between each layer. The final output has size 2× encoding_dim as it represents the the mean and standard deviation of the input in the latent space. The decoder for circles is also an MLP with five linear layers with ReLU activation between the layers. The respective linear layer sizes are 16, 32, 64, 128, and 2,352. The final layer is followed by a reshape into size 28× 28× 3. §.§ Experiment <ref> (Lattices)The lattice dataset consists of 91 lattice samples where each sample contains an image and a stress-strain curve. Our data preparation follows the steps in <cit.>, which we include here for completeness. In particular, the stress-strain curves were downsampled to an array of length 100 and normalized to have values in [0,1]. The lattice images were cropped and subsampled into images of size 32× 32 and standardized so each image had zero mean and unit variance over pixel intensity values. The dataset was further augmented by flipping images along each axis. We use an 81%/9%/10% train/val/test split of the data.Our neural network architectures for this experiment also follow those in <cit.>, but we include the details here for completeness. We use relatively small convolutional encoders and decoders for the image modality. The image modality encoder consists of two 2D convolutional layers with 32 and 64 channels respectively, each with 3×3 kernels. We use the exponential linear unit (ELU) activation function as well as batch normalization after each convolutional layer, then pass the output to a fully connected layer of size encoding_dim × 2 to enable the representation of the mean and variances of each embedded point. The image decoder begins with a fully connected layer of appropriate size to be reshaped into 32 channels of 2D arrays, with each dimension having a length 1/4 of the length of the number of pixels per side of the original image. 
We pass the reshaped output of the initial dense layer through a series of three deconvolution layers with 64, 32, and 1 channel, respectively, each with a kernel of size 3. The first two deconvolution layers use a stride of 3 and a ReLU activation function. The final deconvolution layer uses a stride of 1. No padding is used to retain the input shape while traversing these layers. The stress-strain curve modality is treated as an expert modality, as piecewise linear functions can capture the important aspects of a stress-strain curve. The encoder for the stress-strain curves is identical to the image encoder architecture, except we use 1D convolutions with 8 and 16 channels respectively in place of the 2D convolutional layers. The decoder is modeled as a continuous piecewise linear function consisting of two pieces. The trainable parameters for this decoder are the inflection point and the slope of each linear piece. § DAG PARAMETERIZATIONWe provide proofs showing that our construction guarantees a DAG, and that our parametrization can recover any DAG. Some of the proofs contain elements that are similar to those found in Appendix A of <cit.>. Let G = (𝒱,ℰ) be a graph with adjacency matrix A. Then, for any positive integer k ≥ 1, A^k_ij is the number of walks of length k from v_i to v_j. We proceed by induction. The base case of k=1 is immediate, since A is the adjacency matrix of G and G has no self-loops. Suppose the statement holds true for all walks of lengths up to and including length k-1. Then, the number of walks from v_i to v_j of length k can be found by taking the number of walks of length k-1 from v_i to an intermediate node v_ℓ, and then completing one more step from v_ℓ to v_j, i.e.# of walks = ∑_ℓ = 1^|𝒱| A^k-1_iℓ A_ℓ j = A^k_ij. A graph G = (𝒱, ℰ) with adjacency matrix A has no cycles if and only if ∑_k=1^∞trace(A^k) = 0. Let σ = argsort(ξ) be any permutation that sorts ξ in ascending order, and let Q be the corresponding permutation matrix. Then, Q E Q^T is strictly upper triangular. Let Anc(v) denote the ancestors of a node v ∈𝒱, defined as Anc(v) = { w ∈𝒱\{v}: there exists a path from w to v in G}. We introduce the matrix E^*, given by E^*_ij = ReLU( tanh( 1/β (𝒢ξ)_ij) ). By our rules for DAG assignment in Equation (<ref>), 𝐍_i ⊆ Anc(𝐍_j) ⟺ lim_β→ 0 E^*_ij = 1 ⟺ ξ_i < ξ_j. Let σ = argsort(ξ). In the case σ is not unique, i.e. ξ has repeated values, we break ties arbitrarily but consistently; therefore, without loss of generality, assume ξ has no repeated values. Define Q as the permutation matrix corresponding to σ. By definition of Q, (Qξ)_i < (Qξ)_j for any i < j. We will show the following:* Fix i,j; for any a < i, we have (Q E^* Q^T)_ij≤ (Q E^* Q^T)_aj, and if also a < j, the inequality is strict.* Fix i,j; for any b > j, we have (Q E^* Q^T)_ij≤ (Q E^* Q^T)_ib, and if also b > i, the inequality is strict.* If both (1) and (2) are true, then both Q E^* Q^T and Q E Q^T are strictly upper triangular. For (1): Fix j. Define ξ̃ = ξ_σ(j)1 - ξ. Then, for any a < i, (Q ξ̃)_i < (Q ξ̃)_a. Since tanh and ReLU are monotonic nondecreasing, (Q E^* Q^T)_ij = ReLU( tanh( 1/β( (Qξ)_j - (Qξ)_i ) ) ) = ReLU( tanh( 1/β (Qξ̃)_i ) ) ≤ ReLU( tanh( 1/β (Qξ̃)_a ) ) = ReLU( tanh( 1/β( (Qξ)_j - (Qξ)_a ) ) ) = (Q E^* Q^T)_aj. If additionally a < j, then (Qξ̃)_a > 0, and since tanh and ReLU are strictly monotonic increasing on (0,∞), the inequality becomes strict. For (2), repeat the proof of (1), with ξ̃ = ξ - ξ_σ(i)1. For (3), since the range of ReLU is nonnegative, for all i,j, (QE^*Q^T)_ij≥ 0.
However, the diagonal entries E^*_ii = 0 for all i, so (Q E^* Q^T)_ii = 0 for all i as well. By (1) and (2), for any i,j below the diagonal, E^*_ij≤ 0. Therefore, QE^*Q^T is zero on or below the diagonal i.e. strictly upper triangular. Since the nonzero entries of E are a subset of the nonzero entries of E^*, Q E Q^T must be strictly upper triangular as well.Let A = lim_β→ 0 E be the adjacency matrix of a directed graph G = (𝒱, ℰ). Then, G is a DAG. Let σ be the permutation that sorts ξ in ascending order and Q be the corresponding permutation matrix. Then, for any β > 0, by Lemma <ref>, the matrix Q E Q^T is strictly upper triangular, and therefore QAQ^T = lim_β→ 0 Q E Q^T is strictly upper triangular. Additionally, since Q is a permutation matrix, for any integer k > 0,trace((QAQ^T)^k) = trace(Q A^k Q^T) = trace(A^k),and since QAQ^T is strictly upper triangular, the matrix (QAQ^T)^k is strictly upper triangular as well. Since the trace of any strictly upper triangular matrix is 0, we conclude that∑_k=1^∞trace(A^k) = 0,and therefore by Corollary <ref>,G is a DAG. The permutation σ provides the order to traverse the DAG in order.The edge parametrization in Equation <ref> is sufficiently expressive to represent all possible DAGs. The graph gradient is sufficient to recover any complete DAG. We recover any sub-DAG of any complete DAG by eliminating edges through the multiplication by the metric B.§ ELBO DERIVATION We consider the ELBO lossℒ = _q(Z,𝐍|𝐗)[logp(𝐗,Z,𝐍)/q(Z,𝐍|𝐗)]. For convenience, we denote _q(Z,𝐍|𝐗) as _q. With our assumptions (Equation <ref>), this ELBO expression becomesℒ = _q[logp(𝐗,Z,𝐍)/q(Z,𝐍|𝐗)] = _q log p(𝐗,Z,𝐍) - _q log q(Z,𝐍|𝐗) = _q log( ( ∏_m=1^M p(X_m | Z,𝐍) ) p(Z|𝐍) p(𝐍) ) - _q log( q(Z |𝐗) q(𝐍|𝐗) ) = ∑_m=1^M _q log p(X_m | Z,𝐍) + _q log p(Z |𝐍) + _q log p(𝐍) - _q log q(Z|𝐗) - _q log q(𝐍|𝐗).We estimate the distribution q(𝐍|𝐗) following <cit.> by γ : = q(𝐍|𝐗) = p(𝐍| Z) =p(𝐍) p(Z |𝐍) /p(Z) γ_c_1,…,c_L = p(𝐍_c_1,…,c_L) p(Z |𝐍_c_1,…,c_L)/∑_c'_1=1^C_1⋯∑_c'_L=1^C_L p(𝐍_c'_1,…,c'_L)p(Z |𝐍_c'_1,…,c'_L) = 𝐀_c_1,…,c_L p(Z |𝐍_c_1,…,c_L) /∑_c'_1=1^C_1⋯∑_c'_L=1^C_L𝐀_c'_1,…,c'_L p(Z |𝐍_c'_1… c'_L)where we denote 𝐀 := p(𝐍) for convenience. Note that 𝐀 and γ are both tensors with L modes, of size C_1 ×…× C_L. The tensor 𝐀 can be calculated via Algorithm <ref> in Section <ref>. 
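For readers who want the responsibility update spelled out, a minimal numpy sketch is given below. It is illustrative only, not taken from the released implementation, and it assumes that log 𝐀 and the per-cluster log-likelihoods log p(Z | 𝐍_c_1,…,c_L) have already been evaluated and stored as arrays of shape C_1 × … × C_L.

import numpy as np

def responsibilities(log_A, log_pZ_given_N):
    # gamma_{c_1,...,c_L} is proportional to A_{c_1,...,c_L} * p(Z | N_{c_1,...,c_L});
    # both inputs have shape (C_1, ..., C_L) and the computation is done in log space.
    log_joint = log_A + log_pZ_given_N
    log_norm = np.logaddexp.reduce(log_joint.ravel())   # log of the sum over all cluster labels
    return np.exp(log_joint - log_norm)                 # entries of gamma sum to one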
The values of p(Z |𝐍) can be computed by sampling from each Gaussian in the Gaussian mixture model.We now compute each expectation in Equation (<ref>) using Corollary <ref> to evaluate integrals.* We compute _q(Z,𝐍|𝐗)log p(X_m | Z, 𝐍) via the following:_q(Z,𝐍|𝐗)log p(X_m | Z,𝐍)= log p(X_m | Z,𝐍) = log(1/√(2π)σ_m) - 1/2‖X_m - μ_m /σ_m‖^2= -1/2log (2 πσ_m^2) -1/2‖X_m - μ_m /σ_m‖^2.* We compute _q(Z, 𝐍|𝐗)log p(Z |𝐍) via the following:_q(Z, 𝐍|𝐗)log p(Z |𝐍) =∑_𝐍 q(𝐍|𝐗) ∫_Z q(Z |𝐗) log p(Z |𝐍) dZ = ∑_c_1=1^C_1⋯∑_c_L=1^C_Lq(𝐍_c_1,…,c_L|𝐗) ·∫_Z q(Z |𝐗) log p(Z |𝐍_c_1,…,c_L) d Z = ∑_c_1=1^C_1⋯∑_c_L=1^C_Lγ_c_1,…,c_L·[-1/2∑_j=1^Jlog 2πσ_c_1,…,c_L;j^2 + σ_j^2/σ_c_1,…,c_L;j^2 + (μ_j - μ_c_1,…,c_L;j)^2/σ_c_1,…,c_L;j^2],where J =Z.* We compute _q(Z, 𝐍|𝐗)log p(𝐍) via the following:_q(Z, 𝐍|𝐗)log p(𝐍) = ∑_𝐍 q(𝐍|𝐗) ∫_Z q (Z |𝐗) log p(𝐍) dZ = ∑_𝐍 q (𝐍|𝐗) log p(𝐍)= ∑_c_1=1^C_1⋯∑_c_L=1^C_Lγ_c_1,…,c_Llog𝐀_c_1,…,c_L.* We compute _q(Z, 𝐍|𝐗)log q(Z |𝐗) via the following:_q(Z, 𝐍|𝐗)log q(Z |𝐗) =∫_Z q(Z |𝐗) log q(Z |𝐗) dZ = -1/2∑_j=1^J( log (2πσ_j^2) + 1 ).* We compute _q(Z, 𝐍|𝐗)log q( 𝐍|𝐗) via the following:_q(Z, 𝐍|𝐗)log q( 𝐍|𝐗) = ∑_𝐍 q(𝐍|𝐗) ∫_Z q(Z |𝐗) log q(𝐍|𝐗) dZ = ∑_𝐍 q(𝐍|𝐗) log q(𝐍|𝐗) = ∑_c_1=1^C_1⋯∑_c_L=1^C_Lγ_c_1,…,c_Llogγ_c_1,…,c_L.Returning to the ELBO expression and combining all terms together, we haveℒ = ∑_m=1^M _q log p(X_m | Z,𝐍) +_q log p(Z |𝐍) +_q log p(𝐍) - _q log q(Z |𝐗) -_q log q(𝐍|𝐗)= -1/2∑_m=1^M log (2 πσ_m^2) + ‖X_m - μ_m /σ_m‖^2 - 1/2∑_c_1=1^C_1⋯∑_c_L=1^C_Lγ_c_1,…,c_L·[ ∑_j=1^Jlog 2πσ_c_1,…,c_L;j^2 + σ_j^2/σ_c_1,…,c_L;j^2 + (μ_j - μ_c_1,…,c_L;j)^2/σ_c_1,…,c_L;j^2]+ ∑_c_1=1^C_1⋯∑_c_L=1^C_Lγ_c_1,…,c_Llog𝐀_c_1,…,c_L + 1/2∑_j=1^J( log (2πσ_j^2) + 1 )- ∑_c_1=1^C_1⋯∑_c_L=1^C_Lγ_c_1,…,c_Llogγ_c_1,…,c_LSince any constant terms in ℒ do not have bearing on the solution to the maximization problem, we can remove them; after rescaling, we haveℒ = - ∑_m=1^M log (σ_m^2) + ‖X_m - μ_m/σ_m‖^2+ ∑_j=1^Jlog(σ_j^2)+ ∑_c_1=1^C_1⋯∑_c_L=1^C_Lγ_c_1,…,c_L·[ 2 log(𝐀_c_1,…,c_L/γ_c_1,…,c_L) -∑_j=1^Jlog (σ_c_1,…,c_L;j^2) + σ_j^2/σ_c_1,…,c_L;j^2 + (μ_j - μ_c_1,…,c_L;j)^2/σ_c_1,…,c_L;j^2]We describe how to compute 𝐀 and γ in Section <ref>.In terms of architecture, the distributions are learned or computed in the following manner, where the d subscripts index batching over several data points: [μ_m, σ_m]= D_m(Z;θ̂_m),whereD_mis a neural network or expert model[μ_m, σ_m]= F_m(X_m; θ_m),whereF_m is a neural networkμ_c_1,…,c_L = ∑_dμ^(d)γ_c_1, …, c_L^(d)/∑_dγ_c_1, …, c_L^(d),wheredindexes thed^th data point and μ^(d) is the encoded mean of thed^th data point σ_c_1,…,c_L^2=∑_d(( μ^(d) - μ_c_1,…,c_L )^2 + σ^2(d))γ_c_1, …, c_L^(d)/∑_dγ_c_1, …, c_L^(d),wheredindexes thed^thdata point§ EXTENSION FOR GENERAL MULTIVARIATE GAUSSIANSOur ELBO is computationally tractable because our model uses Gaussians extensively. While we do restrict to Gaussians with diagonal covariances matrices, we show that computational tractability remains when using a generalized covariance matrix. We provide the lemma of <cit.>, and then state and prove the generalized version.<cit.> Given Gaussian distributions 𝐘_1 ∼𝒩(μ_1, σ_1^2 𝐈) and 𝐘_2 ∼𝒩(μ_2, σ_2^2 𝐈) defined over the same probability space, where μ_1,μ_2,σ_1^2,σ_2^2 ∈^J, we have∫_Ω𝐘_1log𝐘_2 dμ = - 1/2( ∑_jlog(2πσ_2,j^2) + σ_1,j^2/σ_2,j^2 + ( μ_1,j - μ_2,j)^2/σ_2,j^2). 
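The diagonal-covariance formula quoted above is easy to sanity-check numerically. The short, self-contained sketch below (not part of the original appendix) compares the closed form against a Monte Carlo estimate of the same integral.

import numpy as np

rng = np.random.default_rng(0)
J = 4
mu1, s1 = rng.normal(size=J), rng.uniform(0.5, 2.0, size=J)   # Y_1 ~ N(mu1, diag(s1^2))
mu2, s2 = rng.normal(size=J), rng.uniform(0.5, 2.0, size=J)   # Y_2 ~ N(mu2, diag(s2^2))

closed_form = -0.5 * np.sum(np.log(2 * np.pi * s2**2) + s1**2 / s2**2 + (mu1 - mu2)**2 / s2**2)

y = mu1 + s1 * rng.standard_normal((200_000, J))               # samples of Y_1
log_pdf_2 = -0.5 * np.sum(np.log(2 * np.pi * s2**2) + (y - mu2)**2 / s2**2, axis=1)
print(closed_form, log_pdf_2.mean())                           # the two numbers should agree to ~1e-2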
Given Gaussian distributions 𝐘_1 ∼𝒩(μ⃗_1, Σ_1 ) and 𝐘_2 ∼𝒩(μ⃗_2, Σ_2 ) defined over the same probability space, where Σ_1 and Σ_2 are symmetric positive definite covariance matrices, and μ⃗_1,μ⃗_2 ∈^J and Σ_1, Σ_2∈^J× J, we have∫_^J𝐘_1log𝐘_2dy⃗ =-J/2log(2π)- 1/2log((Σ_2))- 1/2 (μ⃗_1 - μ⃗_⃗2⃗)^T Σ_2^-1 (μ⃗_1 - μ⃗_⃗2⃗)- 1/2(Σ_1Σ_2^-1).Recall the density function of a multivariate Gaussian random variable𝐘 = y⃗ = (2 π)^-J/2(Σ)^-1/2exp(-1/2(y⃗-μ⃗)^TΣ^-1(y⃗-μ⃗)).By definition,∫_^J𝐘_1log𝐘_2dy⃗ = ∫_^J (2 π)^-J/2(Σ_1)^-1/2exp(-1/2(y⃗-μ⃗_1)^TΣ_1^-1(y⃗-μ⃗_1)) ·log[ (2 π)^-J/2(Σ_2)^-1/2exp(-1/2(y⃗-μ⃗_2)^TΣ_2^-1(y⃗-μ⃗_2)) ] dy⃗= ∫_^J (2 π)^-J/2(Σ_1)^-1/2exp(-1/2(y⃗-μ⃗_⃗1⃗)^TΣ_1^-1(y⃗-μ⃗_⃗1⃗)) ·( -J/2log(2π) - 1/2log((Σ_2)) -1/2(y⃗-μ⃗_⃗2⃗)^TΣ_2^-1(y⃗-μ⃗_⃗2⃗))dy⃗.Since Σ_1 and Σ_2 are symmetric positive definite, their inverses are also symmetric positive definite and therefore have well-defined Cholesky factors, denoted byΣ_1^-1 = L_1^T L_1Σ_2^-1 = L_2^T L_2.Therefore, Equation (<ref>) becomes∫_^J𝐘_1log𝐘_2dy⃗ =-J/2log(2π) - 1/2log((Σ_2))- 1/2∫_^J (2 π)^-J/2(Σ_1)^-1/2exp(-1/2‖ L_1(y⃗-μ⃗_1)‖_2^2 ) ‖ L_2(y⃗-μ⃗_2) ‖_2^2 dy⃗.We splitL_2(y⃗-μ⃗_2)= L_2 L_1^-1 L_1 (y⃗-μ⃗_1) + L_2(μ⃗_1 - μ⃗_2) := x⃗_1 + x⃗_2so that the remaining integral on the right-hand side of Equation (<ref>) becomes∫_^J (2 π)^-J/2 (Σ_1)^-1/2exp(-1/2‖ L_1(y⃗-μ⃗_1)‖_2^2 ) ‖ L_2(y⃗-μ⃗_2) ‖_2^2 dy⃗=∫_^J (2 π)^-J/2(Σ_1)^-1/2exp(-1/2‖ L_1(y⃗-μ⃗_1)‖_2^2 ) ‖x⃗_1 + x⃗_2 ‖_2^2 dy⃗= ∫_^J (2 π)^-J/2(Σ_1)^-1/2exp(-1/2‖ L_1(y⃗-μ⃗_1)‖_2^2 ) ( ‖x⃗_1 ‖_2^2 + ‖x⃗_2 ‖_2^2 + 2 ⟨x⃗_1, x⃗_2 ⟩)dy⃗Since x⃗_2 is independent of y⃗, we can pull this term out of the integral. As the term involving ⟨x⃗_1, x⃗_2 ⟩ is an odd function around μ⃗_⃗1⃗, its integral equals 0. Therefore,∫_^J (2 π)^-J/2 (Σ_1)^-1/2exp(-1/2‖ L_1(y⃗-μ⃗_1)‖_2^2 ) ‖ L_2(y⃗-μ⃗_2) ‖_2^2 dy⃗= ‖ L_2(μ⃗_1 - μ⃗_) ‖_2^2 + ∫_^J (2 π)^-J/2(Σ_1)^-1/2exp(-1/2‖ L_1(y⃗-μ⃗_1)‖_2^2 ) ‖ L_2 L_1^-1 L_1 (y⃗ - μ⃗_1) ‖_2^2dy⃗.With the change of variables x⃗ = L_1(y⃗-μ⃗_1),∫_^J (2 π)^-J/2 (Σ_1)^-1/2exp(-1/2‖ L_1(y⃗-μ⃗_1)‖_2^2 ) ‖ L_2(y⃗-μ⃗_2) ‖_2^2 dy⃗= ‖ L_2(μ⃗_1 - μ⃗_2) ‖_2^2 + ∫_^J (2 π)^-J/2exp(-1/2‖x⃗‖_2^2 ) ‖ L_2 L_1^-1x⃗‖_2^2dx⃗= ‖ L_2(μ⃗_1-μ⃗_2)‖_2^2 + (L_1^-T L_2^T L_2 L_1^-1) = ‖ L_2(μ⃗_1-μ⃗_2)‖_2^2 + ( Σ_1 Σ_2^-1 ).Returning to Equation (<ref>), we finally have∫_^J𝐘_1log𝐘_2dy⃗ =-J/2log(2π) - 1/2log((Σ_2))- 1/2∫_^J (2 π)^-J/2(Σ_1)^-1/2exp(-1/2‖ L_1(y⃗-μ⃗_1)‖_2^2 ) ‖ L_2(y⃗-μ⃗_2) ‖_2^2 dy⃗= -J/2log(2π) - 1/2log((Σ_2))- 1/2( ‖ L_2 (μ⃗_1 - μ⃗_2) ‖_2^2 + (Σ_1 Σ_2^-1)) = -J/2log(2π) - 1/2log((Σ_2)) - 1/2 (μ⃗_⃗1⃗-μ⃗_2)^T Σ_2^-1 (μ⃗_1 - μ⃗_2) - 1/2(Σ_1 Σ_2^-1),which concludes our proof. When Σ_1 = σ⃗_1 𝐈 and Σ_2 = σ⃗_2 𝐈, Equation (<ref>) simplifies to Equation (<ref>) in Lemma <ref>, i.e. the result in <cit.>. | http://arxiv.org/abs/2310.18471v2 | {
"authors": [
"Elise Walker",
"Jonas A. Actor",
"Carianne Martinez",
"Nathaniel Trask"
],
"categories": [
"cs.LG",
"cs.AI",
"stat.ML",
"68T07"
],
"primary_category": "cs.LG",
"published": "20231027203011",
"title": "Causal disentanglement of multimodal data"
} |
| http://arxiv.org/abs/2310.18307v2 | {
"authors": [
"Jo Nelson",
"Morgan Weiler"
],
"categories": [
"math.GT",
"math.SG"
],
"primary_category": "math.GT",
"published": "20231027175529",
"title": "Torus knotted Reeb dynamics and the Calabi invariant"
} |
[thanks]Submitted to the editors January 14, 2024. [funding]Supported in part by US National Science Foundation grants DMS #1745654 and DMS #1953271.label1]John [email protected]]Alen [email protected],label2]Pierre [email protected][label1]organization=Department of Mathematics, North Carolina State University,city=Raleigh,state=NC,country=USA [label2]organization=The Graduate School, North Carolina State University,city=Raleigh,state=NC, country=USA[cor1]Corresponding authorThe formulation of Bayesian inverse problems involves choosing prior distributions; choices that seem equally reasonable may lead to significantly different conclusions. We develop a computational approach to better understand the impact of the hyperparameters defining the prior on the posterior statistics of the quantities of interest. Our approach relies on global sensitivity analysis (GSA) of Bayesian inverse problems with respect to the hyperparameters defining the prior. This, however, is a challenging problem—a naive double loop sampling approach would require running a prohibitive number of Markov chain Monte Carlo (MCMC) sampling procedures. The present work takes a foundational step in making such a sensitivity analysis practical through (i) a judicious combination of efficient surrogate models and (ii) a tailored importance sampling method. In particular, we can perform accurate GSA of posterior prediction statistics with respect to prior hyperparameters without having to repeat MCMC runs. We demonstrate the effectiveness of the approach on a simple Bayesian linear inverse problem and a nonlinear inverse problem governed by an epidemiological model.Prior selection global sensitivity analysiss Bayesian inverse problems importance sampling surrogate modeling § INTRODUCTIONConsider a Bayesian inverse problem governed by a system of differential equations. The inverse problem uses a vector d⃗ of measurement data to estimate the uncertain model parameters, θ⃗.The solution of the Bayesian inverse problem is a posterior distribution π_post(θ⃗| d⃗).After solving the inverse problem, typically we seek to make some predictions based on the posterior.For example, for a prediction quantity q(θ⃗) we may consider 𝔼_post(q) := ∫ q(θ⃗)π_post(θ⃗| d⃗) dθ⃗.A crucial component of this analysis is to know how the choice of prior hyperparameters affects such predictions. We present a practical variance-based global sensitivity analysis (GSA) approach to study how statistics (e.g. mean or variance) of q vary with respect to prior hyperparameters. This enables us to identify which prior hyperparameterscarry the most influence over the prediction. Bayesian inference is pervasive; this perspective makes inferences not just from data, but also by incorporating prior beliefs and assumptions. In practice, these prior assumptions are often subjective choices made by the researcher.However, these prior beliefs can have a huge impact on the results, including those of Bayesian inverse problems <cit.>.This well-known issue motivated statisticians in the 1980s and 1990s to develop a methodology, known as robust Bayesian analysis <cit.>, for ensuring the robustness of Bayesian inference to different choices by the researcher. These ideas have continued to receive attention over the past two decades <cit.>. Related work. Sensitivity analysis of Bayesian inverse problems has been subject to several recent research efforts. 
The articles <cit.> consider hyper-differentialsensitivity analysis (HDSA) of Bayesian inverse problems.HDSA is a technique used originally for (deterministic) PDE-constrained optimization problems.HDSA, as a practical framework for sensitivity analysis of optimal control problems governed by PDEs, was considered in <cit.>. In <cit.>, HDSA was used for sensitivity analysis of deterministic inverse problems to auxiliary model parameters and parameters specifying the experimental setup (experimental parameters).In <cit.>, use of HDSA is extended to nonlinear Bayesian inverse problems. Specifically, the authorsconsider the Bayes risk and the maximum a posterior probability (MAP) point as quantities of interest for sensitivity analysis. In <cit.>, the HDSA framework is used to study Bayesian inverse problems governed by ice sheet models. The sensitivity of information gain, measured by the Kullback–Leibler (KL) divergence between the prior and posterior, to uncertain model parameters in linear Bayesian inverse problems is studied in <cit.>.HDSA provides valuable insight for experimenters on where to focus resources during experimental design and when measuring auxiliary parameters.The previous works on HDSA of Bayesian inverse problems, have focused primarily on sensitivity analysis with respect to auxiliary or experimental rather than prior hyperparameters. More importantly, HDSA is local, relying on derivative informationevaluated at a set of nominal parameters.Variance-based GSA, see <ref>, accounts for the uncertainty in thehyperparameters globally.The work <cit.>, which is closely related to our work,examines single-parameter statistical models using Bayesian inference.In that paper, the authors perform variance-based GSA on posterior statistics with respect to prior and auxiliary hyperparameters.Their method uses Gaussian process (GP) surrogates to emulate the mapping from the hyperparameters to the posterior distribution. This method requires many Markov chain Monte Carlo (MCMC) runs to build the GP surrogate. For the Bayesian inverse problems we target, the high cost of evaluating the forward model makes repeated MCMC runs impossible.We tackle this difficulty by using an importance sampling approach that allows integrating the QoIs under study with respect to multiple posterior distributions. Strategies for importance sampling on multiple distributions have been subject to several previous works; see e.g., <cit.>. We use the structure of the Bayesian inverse problem to derive a tailored importance sampling approach.Another related work that has partly inspired theapproach in the present work is <cit.>. That article, outlines a method for GSA of rare event probabilities that combines surrogate-assisted GSA with subset simulation.Our approach and contributions. We show that GSA is a viable computational approach to analyze the sensitivity ofBayesian inverse problems to prior hyperparameters. The proposed approach is goal oriented—the focus is on the posteriorstatistics of prediction/goal QoIs that are functions of the inversion parameters. We first frame the problem in a manner conducive to variance-based GSA in <ref>. We detail the computational strategy for sensitivity analysis in <ref>. Our method combines two key techniques. Importance sampling eliminates the need for repeated MCMC runs for different choices of the prior. 
Then, sparse polynomial chaos expansion (PCE) and extreme learning machine (ELM) surrogate models emulate the mapping from prior hyperparameters to statistics of q. Use of surrogate models not only eases the computational burden, but also improves the accuracy of the sensitivity analysis. The combined approach enables prior hyperparameter sensitivity analysis for many Bayesian inverse problems. If one has access to a single MCMC run, then one can ascertain prior hyperparameter importance. To demonstrate the effectiveness of the proposed approach, we present extensive computational experiments in the context of two examples: a simple linear inverse problem in Section <ref> and a nonlinear inverse problem governed by an epidemiological model in Section <ref>. § HYPERPARAMETER-TO-STATISTIC MAPPING OF BAYESIAN INVERSE PROBLEMSIn an inverse problem <cit.>, we use a model and observed data to estimate unknown model parameters of interest. We consider the inverse problem of estimating a parameter vector θ⃗ in models of the form y⃗' = f(y⃗;θ⃗), y⃗(t_0)=y⃗_0. Here, y⃗∈ℝ^d is the state vector. In a deterministic formulation of the inverse problem, we typically seek a θ⃗ that minimizes the cost functional J(θ⃗) := ‖By⃗(θ⃗)-d⃗‖^2. Here, d⃗ is a vector of data measurements, B is a linear operator that selects the corresponding model responses, and y⃗ is obtained by solving (<ref>). We focus on Bayesian inverse problems <cit.> and seek a statistical distribution for θ⃗, known as the posterior distribution, that is conditioned on the observed data and is consistent with the prior distribution. In this context, the prior distribution encodes our prior knowledge regarding the parameters. The Bayes formula shows how the model, data, and the prior are combined to obtain the posterior distribution: π_post(θ⃗|d⃗)∝π_like(d⃗|θ⃗)×π_pr(θ⃗), where π_like is the data likelihood and π_pr is the prior probability density function (PDF). Throughout this paper, we assume a Gaussian noise model for the observation error. In this case, the Bayes formula reads π_post(θ⃗|d⃗)∝exp(-1/2(By⃗(θ⃗)-d⃗)^⊤Γ_noise^-1(By⃗(θ⃗)-d⃗))×π_pr(θ⃗), where Γ_noise is the noise covariance. In practice, we are often interested in scalar prediction quantities of interest (QoIs) that depend on θ⃗. Let q(θ⃗) be such a QoI. Solving the Bayesian inverse problem enables reducing the uncertainty in θ⃗ and consequently in q(θ⃗). In this case, the statistical properties of q depend on π_post. Let Ψ(q) denote a generic statistic of q. Examples include Ψ(q)=var(q) or Ψ(q)=𝔼(q), where the expectation and variance are with respect to the posterior distribution. Another example is Ψ(q)=q(θ⃗_MAP); i.e., the QoI evaluated at the maximum a posteriori (MAP) point estimate of θ⃗. Recall that the MAP point, θ⃗_MAP, is a point where the posterior PDF attains its maximum value. Using the Bayes formula (<ref>), we note that the MAP point is the solution to the nonlinear least squares problem θ⃗_MAP = argmin_θ⃗ [ (By⃗(θ⃗)-d⃗)^⊤Γ_noise^-1(By⃗(θ⃗)-d⃗) - 2 log(π_pr(θ⃗)) ]. We consider how the choice of prior affects Ψ(q). Narrowing this question, we take a parameterized family of prior distributions π_pr^ξ⃗(θ⃗) determined by a vector ξ⃗ of scalar hyperparameters. For a Gaussian prior, the hyperparameters can be taken as the prior means and variances.
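Before turning to the dependence on ξ⃗, we note that the MAP computation above reduces to a standard unconstrained optimization. The sketch below is a generic illustration rather than the implementation used for the experiments; the callables `forward` and `log_prior`, and the starting guess, are user-supplied placeholders.

import numpy as np
from scipy.optimize import minimize

def map_point(forward, d, Gamma_noise_inv, log_prior, theta0):
    # Minimize (B y(theta) - d)^T Gamma_noise^{-1} (B y(theta) - d) - 2 log pi_pr(theta).
    def objective(theta):
        r = forward(theta) - d
        return r @ Gamma_noise_inv @ r - 2.0 * log_prior(theta)
    return minimize(objective, theta0, method="Nelder-Mead").x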
With this setup, the choice of ξ⃗ will determine our statistic of interest so that Ψ(q) = Ψ^ξ⃗(q). In what follows, the hyperparameter-to-statistic (HS) mapping F:ℝ^n→ℝ is given by F(ξ⃗) := Ψ^ξ⃗(q). To model the uncertainty in the hyperparameters, we consider them as random variables and then analyze how the uncertainty in the entries of ξ⃗ contributes to the uncertainty in F(ξ⃗). To this end, we follow a variance-based sensitivity analysis framework, and compute the Sobol' indices <cit.> of the HS mapping F with respect to ξ⃗. For the purposes of this study, we let the prior hyperparameters ξ⃗ follow uniform distributions, ξ_j∼𝒰[a_j,b_j], for j=1,…,n. We focus on three choices for the statistic of interest Ψ in (<ref>): * the mean: F_mean(ξ⃗) = 𝔼^ξ⃗_post(q); * the variance: F_var(ξ⃗) = 𝔼^ξ⃗_post(q^2) - (𝔼^ξ⃗_post(q))^2; and * the QoI evaluated at the MAP point: F_MAP(ξ⃗) := q(θ⃗_MAP(ξ⃗)), with θ⃗_MAP(ξ⃗) from (<ref>). The mean and variance are computed from moments of the posterior PDF. These two quantities can be estimated at each ξ⃗ by Monte Carlo integration. Estimating F_MAP instead requires solving the nonlinear least squares problem (<ref>) for each ξ⃗. § GLOBAL SENSITIVITY ANALYSIS AND SURROGATE-ASSISTED APPROACHESWe focus on variance-based GSA using Sobol' indices <cit.>. Consider a (scalar-valued) model y = F(x⃗), x⃗∈ℝ^d. We assume that the components of x⃗ are independent random variables. In variance-based GSA, the most important inputs are those that contribute the most to the output variance var(F(x⃗)). Sobol' indices are quantitative measures of this contribution. Specifically, the first-order Sobol' indices, S_k, and the total Sobol' indices S^tot_k, are defined by S_k = var(F_k)/var(F), S_k^tot = 1 - var(𝔼(F|x_l, l≠ k))/var(F), where F_k(x_k) := 𝔼(F|x_k) - 𝔼(F). In practice, the Sobol' indices are approximated by Monte Carlo sampling, requiring many evaluations of the model <cit.>. This can be too costly, especially when the model F is expensive to evaluate. In such cases, it is common practice to construct a surrogate model F̃≈ F whose Sobol' indices can be efficiently computed <cit.>. In the best case scenario, the Sobol' indices of the surrogate model can be computed analytically. We detail two such surrogate models below. Polynomial chaos surrogates. Polynomial chaos expansions (PCEs) take advantage of orthogonal polynomials to approximate expensive-to-evaluate models; see <cit.>. The standard approach is to truncate the PCE based on the total polynomial degree. PCE surrogates are advantageous because they admit analytic formulas for Sobol' indices that depend only on the PC coefficients <cit.>. In practice, the PC coefficients are typically computed using non-intrusive approaches that involve sampling the model F. These include non-intrusive spectral projection or regression based methods <cit.>. In the present work, we build PCE surrogates using sparse regression <cit.>. As noted in <cit.>, this approach is particularly useful in the case where function evaluations are noisy. Solving the sparse regression problem can be formulated as a linear least squares problem regularized by an ℓ^1-penalty <cit.>. In our numerical computations, we use the SPGL1 solver <cit.> to solve such problems. Note that an ℓ^1-penalty approach also involves choosing a penalty parameter. In our experiments, we use PCE surrogates with the basis truncated at total degree 5, and we perform a tenfold cross validation over training sets to choose the ℓ^1-penalty parameters.
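To make the PCE-based computation of Sobol' indices concrete, the sketch below builds a total-degree Legendre PCE by ℓ^1-regularized regression and reads the indices off the coefficients. It is only an illustration of the idea: the paper uses SPGL1, whereas the sketch substitutes scikit-learn's Lasso, and the penalty value, degree, and variable names are placeholders.

import numpy as np
from itertools import product
from numpy.polynomial.legendre import legval
from sklearn.linear_model import Lasso

def legendre_psi(k, x):
    # Legendre polynomial of degree k, scaled so that E[psi_k(X)^2] = 1 for X ~ U(-1, 1)
    c = np.zeros(k + 1); c[k] = 1.0
    return np.sqrt(2 * k + 1) * legval(x, c)

def fit_pce_and_sobol(X, y, degree=5, penalty=1e-3):
    # X : (N, n) inputs rescaled to [-1, 1]^n;  y : (N,) noisy evaluations of F.
    N, n = X.shape
    midx = [a for a in product(range(degree + 1), repeat=n) if sum(a) <= degree]
    Phi = np.column_stack([np.prod([legendre_psi(k, X[:, j]) for j, k in enumerate(a)], axis=0)
                           for a in midx])
    coef = Lasso(alpha=penalty, fit_intercept=False, max_iter=50_000).fit(Phi, y).coef_
    var_terms = coef**2                                   # orthonormal basis: per-term variance contribution
    total_var = sum(v for a, v in zip(midx, var_terms) if any(a))
    S, S_tot = np.zeros(n), np.zeros(n)
    for a, v in zip(midx, var_terms):
        if not any(a):
            continue                                      # skip the constant term
        active = [j for j, k in enumerate(a) if k > 0]
        for j in active:
            S_tot[j] += v                                 # total index: all terms involving x_j
        if len(active) == 1:
            S[active[0]] += v                             # first-order index: terms in x_j alone
    return S / total_var, S_tot / total_var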
Sparse weight-ELM surrogates. Sparse weight extreme learning machines (SW-ELMs) are a class of neural network surrogates that build on the standard extreme learning machines (ELMs). They are single-layer neural networks of the form F̃(x⃗) = β⃗^⊤ϕ( Wx⃗ + b⃗), x⃗∈ℝ^d. Here, β⃗ denotes the output weight vector, W the hidden layer weight matrix, b⃗ the hidden layer bias vector, and ϕ the activation function. The weights and biases are usually trained all at once by solving a nonlinear least squares problem. ELMs instead use randomly chosen hidden layer weights and biases. Training an ELM then only involves determining the output layer weights by solving a linear least squares problem; see <cit.> for details. SW-ELM <cit.> modifies the weight sampling step of standard ELM to improve performance as a surrogate model for GSA. The method introduces a validation step to choose a sparsification parameter p. Similar to PCE, the Sobol' indices of SW-ELM, as defined in <cit.>, can be computed analytically. For the SW-ELM surrogates used in our experiments, the number of neurons used is half the number of training points. A fraction of the training points are used as a validation set to choose the sparsification parameter. See <cit.> for further details. § METHODIn this section, we outline our proposed approach for GSA of hyperparameter-to-statistic (HS) mappings of the form (<ref>). Our focus will be mainly on HS mappings that involve integrating over the posterior. Examples are the posterior mean or variance. For simplicity, we focus on F(ξ⃗) = 𝔼^ξ⃗_post(q(θ⃗)) = ∫_ℝ^d q(θ⃗) π^ξ⃗_post(θ⃗) dθ⃗. It is straightforward to generalize the strategies described below to the cases of variance and higher order moments. For brevity, we have suppressed the dependence of the posterior density on data d⃗ in <ref>. Computing the Sobol' indices of (<ref>) is often challenging. Computing F(ξ⃗) via direct sampling requires generating samples from the posterior law of θ⃗ using a Markov Chain Monte Carlo (MCMC) method. A naive approach for computing the Sobol' indices of F(ξ⃗) would be to follow a sampling procedure where an MCMC simulation is carried out for each realization of ξ⃗. This is typically infeasible. For one thing, the computational cost of this naive approach will be prohibitive for most practical problems. In addition, performing multiple runs of an MCMC algorithm can be problematic, because such methods typically have algorithm-specific parameters that might need tuning for different realizations of ξ⃗. In <ref>, we outline an approach that combines MCMC and importance sampling for fast computation of moment-based HS mappings under study. Then, in <ref>, we present an algorithm that combines the approach in <ref> and surrogate models to facilitate GSA of moment-based HS maps. In that section, we also discuss the computational cost of the proposed approach, in terms of the number of required forward model evaluations. We also briefly discuss GSA of F_MAP in <ref>.
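For completeness, a minimal sketch of the plain ELM fit underlying the SW-ELM surrogate described above is given below. It is illustrative only; the sparse-weight variant used in the paper additionally zeroes out a fraction of the random hidden weights, with the fraction chosen on a validation set.

import numpy as np

def train_elm(X, y, n_hidden, rng=np.random.default_rng(0)):
    W = rng.standard_normal((n_hidden, X.shape[1]))      # fixed random hidden weights
    b = rng.standard_normal(n_hidden)                    # fixed random biases
    H = np.tanh(X @ W.T + b)                             # hidden layer features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)         # linear least squares for output weights
    predict = lambda Xnew: np.tanh(Xnew @ W.T + b) @ beta
    return predict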
§.§ Importance sampling for fast evaluation of moment-based HS mapsImportance sampling <cit.> aims at accelerating the computation of integrals such as <ref>, where the target distribution is difficult to sample from. This is done by introducing an importance sampling distribution π_IS(θ⃗), which is tractable to work with, and from which we are likely to sample points where the target posterior distribution takes high density. Let π_IS be an importance sampling distribution. The integral <ref> can be written as ∫_ℝ^d q(θ⃗) π^ξ⃗_post(θ⃗) dθ⃗ = ∫_ℝ^d w^ξ⃗(θ⃗) q(θ⃗) π_IS(θ⃗) dθ⃗, with w^ξ⃗(θ⃗) = π^ξ⃗_post(θ⃗)/π_IS(θ⃗), provided that π_IS(θ⃗)>0 whenever q(θ⃗)π^ξ⃗_post(θ⃗)≠ 0 <cit.>. When this holds, we can create a Monte Carlo estimate of (<ref>): ∫_ℝ^d q(θ⃗)π^ξ⃗_post(θ⃗) dθ⃗ ≈ (1/M)∑_i=1^M w_i q(θ⃗_i), θ⃗_i∼π_IS, where w_i = w^ξ⃗(θ⃗_i), i = 1, …, M, define the importance sampling weights. For our purposes, we desire weights that are much greater than zero and have little variation over different samples. Our motivation for using importance sampling is to compute <ref> for different realizations of ξ⃗ without the need for multiple MCMC runs. Specifically, we propose an importance sampling approach tailored to the Bayesian inverse problem of interest that enables computing (<ref>) for different choices of ξ⃗ using the same importance sampling distribution. Since we consider choosing the prior distribution from a parameterized family, the target posterior distributions belong to a parameterized family (parameterized by the same prior hyperparameters) as well. We let the importance sampling distribution be the posterior π_IS = π^IS_post constructed using a specific choice of prior, π^IS_pr. This prior is chosen from the same family as the priors π^ξ⃗_pr in such a way that its high probability region covers that of the family of target priors. See <ref> for an illustration, for the case of Gaussian priors. We then consider π^IS_post(θ⃗| d⃗) ∝π_like(d⃗ | θ⃗)×π^IS_pr(θ⃗). Importance sampling often breaks down if the importance sampling distribution fails to cover the density of the target, especially when the target distribution has a heavy tail. As noted in our computational results, choosing a prior π^IS_pr that “covers" all the target priors typically results in a suitable importance sampling posterior π^IS_post. With the present strategy, it is possible to sample from π^IS_post with one run of MCMC and gather information for all the target posteriors. Next, we derive an expression for the estimator (<ref>) when π_IS = π^IS_post. We let θ⃗_IS and Γ_IS denote the mean and covariance of π^IS_pr, while θ⃗_ξ⃗ and Γ_ξ⃗ will denote the mean and covariance of π^ξ⃗_pr. Let P^IS and P^ξ⃗ be the normalization constants that correspond to π^IS_post and π^ξ⃗_post, respectively: P^IS := ∫_ℝ^dπ_like(d⃗|θ⃗)π^IS_pr(θ⃗) dθ⃗, P^ξ⃗ := ∫_ℝ^dπ_like(d⃗|θ⃗)π^ξ⃗_pr(θ⃗) dθ⃗. We can write the importance sampling weights in (<ref>) as w^ξ⃗(θ⃗) = π^ξ⃗_post(θ⃗)/π^IS_post(θ⃗) = [π^ξ⃗_pr(θ⃗)π_like(d⃗|θ⃗)/P^ξ⃗] / [π^IS_pr(θ⃗)π_like(d⃗|θ⃗)/P^IS] = 1/(P^ξ⃗/P^IS) · π^ξ⃗_pr(θ⃗)/π^IS_pr(θ⃗). Letting the importance sampling weight in (<ref>) be given by <ref>, we obtain ∫_ℝ^d q(θ⃗) π^ξ⃗_post(θ⃗) dθ⃗ = 1/(P^ξ⃗/P^IS) ∫_ℝ^d q(θ⃗) [π^ξ⃗_pr(θ⃗)/π^IS_pr(θ⃗)] π^IS_post(θ⃗) dθ⃗. Furthermore, we can use the importance sampling distribution to rewrite the ratio of normalization constants P^ξ⃗/P^IS as P^ξ⃗/P^IS = (1/P^IS)∫_ℝ^dπ_like(d⃗|θ⃗)π^ξ⃗_pr(θ⃗) dθ⃗ = (1/P^IS)∫_ℝ^d [π_like(d⃗|θ⃗)π^ξ⃗_pr(θ⃗)/(π_like(d⃗|θ⃗)π^IS_pr(θ⃗))] π_like(d⃗|θ⃗)π^IS_pr(θ⃗) dθ⃗ = (1/P^IS)∫_ℝ^d [π^ξ⃗_pr(θ⃗)/π^IS_pr(θ⃗)] π_like(d⃗|θ⃗)π^IS_pr(θ⃗) dθ⃗ = ∫_ℝ^d [π^ξ⃗_pr(θ⃗)/π^IS_pr(θ⃗)] π^IS_post(θ⃗) dθ⃗. Combining the expressions (<ref>) and <ref> yields the estimator F(ξ⃗) = ∫_ℝ^d q(θ⃗) π^ξ⃗_post(θ⃗) dθ⃗ ≈ 1/C(θ⃗_1,…,θ⃗_M) ∑_i=1^M q(θ⃗_i) π^ξ⃗_pr(θ⃗_i)/π^IS_pr(θ⃗_i), θ⃗_i∼π^IS_post, where C(θ⃗_1,…,θ⃗_M) = ∑_i=1^M π^ξ⃗_pr(θ⃗_i)/π^IS_pr(θ⃗_i) follows from the Monte Carlo estimator of <ref>. Note that in the case of Gaussian priors, π^ξ⃗_pr(θ⃗)/π^IS_pr(θ⃗) = √(det(Γ_IS)/det(Γ_ξ⃗)) exp[1/2((θ⃗_IS-θ⃗)^⊤Γ_IS^-1(θ⃗_IS-θ⃗)-(θ⃗_ξ⃗-θ⃗)^⊤Γ_ξ⃗^-1(θ⃗_ξ⃗-θ⃗))].
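In practice, the estimator above amounts to reweighting a single set of MCMC samples by the ratio of the two Gaussian priors. The following sketch is illustrative, with hypothetical argument names; the weights are formed in log space for numerical stability, and all constants that cancel in the self-normalized ratio are dropped.

import numpy as np
from scipy.stats import multivariate_normal as mvn

def posterior_mean_of_q(q_vals, samples, mean_IS, cov_IS, mean_xi, cov_xi):
    # samples : (M, d) draws from pi_post^IS (one MCMC run); q_vals : q evaluated at those draws.
    log_w = (mvn.logpdf(samples, mean_xi, cov_xi)
             - mvn.logpdf(samples, mean_IS, cov_IS))      # log of the prior ratio
    w = np.exp(log_w - log_w.max())                       # rescale; constant factors cancel below
    return np.sum(w * q_vals) / np.sum(w)                 # self-normalized estimate of E^xi_post[q]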
There are some diagnostics for evaluating the effectiveness of a sample set from the importance sampling distribution <cit.>. We use effective sample size in our experiments. A large effective sample size is desirable as it indicates small variation in the estimator (<ref>). For a given ξ⃗, the effective sample size is n_E^ξ⃗ := (∑_i=1^M w^ξ⃗(θ⃗_i))^2/∑_i=1^M w^ξ⃗(θ⃗_i)^2, θ⃗_i∼π^IS_post. Recall from (<ref>) that we can rewrite w^ξ⃗ = π^ξ⃗_post/π^IS_post = 1/(P^ξ⃗/P^IS) · π^ξ⃗_pr/π^IS_pr. Then, we can write (<ref>) as n_E^ξ⃗ = (∑_i=1^M π^ξ⃗_post(θ⃗_i)/π^IS_post(θ⃗_i))^2/∑_i=1^M (π^ξ⃗_post(θ⃗_i)/π^IS_post(θ⃗_i))^2 = (∑_i=1^M 1/(P^ξ⃗/P^IS) · π^ξ⃗_pr(θ⃗_i)/π^IS_pr(θ⃗_i))^2/∑_i=1^M (1/(P^ξ⃗/P^IS) · π^ξ⃗_pr(θ⃗_i)/π^IS_pr(θ⃗_i))^2 = (∑_i=1^M π^ξ⃗_pr(θ⃗_i)/π^IS_pr(θ⃗_i))^2/∑_i=1^M (π^ξ⃗_pr(θ⃗_i)/π^IS_pr(θ⃗_i))^2. Hence, the constant P^ξ⃗/P^IS cancels and the effective sample size can be computed directly from the prior ratios. In practice, we assess the suitability of π^IS_post as an importance sampling distribution by examining the distribution of n_E^ξ⃗ for an ensemble of realizations of ξ⃗. This is illustrated in our computational results in Section <ref>.
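The effective sample size is equally simple to evaluate from unnormalized log-weights; a small illustrative helper is given below. It is scale invariant, so the same prior-ratio weights used in the sketch above can be passed in directly.

import numpy as np

def effective_sample_size(log_w):
    # n_E = (sum w)^2 / sum w^2 for weights given on the log scale
    w = np.exp(log_w - np.max(log_w))
    return np.sum(w) ** 2 / np.sum(w ** 2)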
§ COMPUTATIONAL RESULTSIn this section, we consider two model inverse problems as testbeds for our proposed approach.Specifically, we use <ref> for global sensitivity analysis (GSA) of hyperparameter-to-statistic (HS) mappings from the inverse problems under study. These examples are used to examine various aspects of the proposed method.In <ref>, we consider a simple linear inverse problem.Specifically, we formulate fitting a line to noisy data as a linear Bayesian inverse problem.In this case, the posterior distribution is known analytically.This means that the HS mappings admit analytical forms, and we can perform GSA without <ref>.This problem serves as a benchmark where we gauge the accuracy of GSA with <ref> against reference values.The QoI in this example is a quadratic function. For this QoIs, we study the HS mappings for the mean and variance.The Sobol' indices, approximated using <ref>, of these HS mappings are compared to the true Sobol' indices.Overall,we note close agreement between the results produced by our method and the analytic results.Next, we apply our method to a nonlinear Bayesian inverse problem in <ref>.The inverse problem is governed by an SEIR model from epidemiology <cit.>.It exemplifies the type of problem that <ref> is designed and intended for.Our numerical results provide a unique perspective on the impact of uncertainty in prior hyperparameters.The QoI is the basic reproductive number.We quantify the uncertainty in the mean, variance, and MAP point that is caused by uncertainty in the prior hyperparameters.The Sobol' indices of the mean, variance, and MAP point HS mappings are computed using <ref> and highlight the most influential hyperparameters in each case.We use two different surrogate modeling approaches in these computations: one based on sparse polynomial chaos expansions (PCEs) and the other based on sparse weight extreme learning machines (SW-ELMs).The two approaches provide results that match closely.§.§ Linear Bayesian inverse problemWe consider the problemof fitting a line y=mt+b to noisy measurements {(t_i,y_i)}_i=1^4 at times t = 0,0.5,1.5,2.5.The slope m and intercept b are treated as unknown parameters, which we seek to estimate.We cast this problem in a Bayesian framework.This serve to illustrate various properties of our proposedframework.§.§.§ Bayesian inverse problem setupLet the inversion parameter vector be denoted byθ= [[ b m ]]^⊤. We consider estimation of θ from A θ + η = y,where Here A = [[ 1 1 1 1; 0 0.5 1.5 2.5 ]]^⊤ is the forward operator, η⃗ models measurement noise, and y⃗ is the data. We assume noise at each measurement independently follows the standard normal distribution, i.e.,η_i∼𝒩(0,1). The noise covariance is Γ_noise= I_4× 4. We assume a “ground-truth” parametervector θ_true =[1 -2]^⊤ andgenerate measurements by adding sampled noise η_i to y_i = -2t_i + 1 for i=1, …, 4; see <ref>.We assume a Gaussian prior distribution 𝒩(θ⃗_pr,Γ⃗_pr)for the inversion parameters θ⃗ withθ_pr=[[ μ_b; μ_m ]],Γ_pr=[[ σ_b^2 0; 0 σ_m^2 ]].Due to linearity of the parameter-to-observable map and Gaussianprior and noise models,the posterior distribution for θ⃗ is also Gaussian andexplicitly known. It is the Gaussian distribution 𝒩(θ⃗_post,Γ⃗_post), whereΓ_post=( A^⊤Γ_noise^-1 A + Γ_pr^-1)^-1,θ_post=Γ_post( A^⊤Γ_noise^-1y+Γ_pr^-1θ_pr).Since the posterior distribution is Gaussian, the posterior mean and MAP point are the same.Quantity of interest. We introduce the QoI which depends on the inversion parameters θ⃗. 
The QoI is the quadratic formq(θ⃗) = θ⃗^⊤θ⃗= m^2 + b^2,θ⃗∼𝒩(θ⃗_post,Γ⃗_post). As θ⃗ is a Gaussian random variable,we have access to expressions for the first and second moments <cit.>.of the QoI.We can therefore express the mean and variance of the QoI analytically.𝔼_post(q) = θ⃗_post^⊤θ⃗_post, var(q) = 2 tr(Γ⃗_post^2) +4θ⃗_post^⊤Γ_postθ⃗_post. Uncertainty in prior hyperparameters. Before building the posterior distribution, we must choose values for the prior hyperparameters are ξ⃗=[[ μ_b μ_m σ_b^2 σ_m^2 ]]^⊤ that appear in (<ref>).We assume these parameters are specified within some interval around their nominal values and are modeled as independent uniformly distributed random variables.We use a nominal value of 1 for each of the parameters and let the upper and lower bounds of the distributions be ± 50% perturbations of the nominal value.§.§.§ Parameter estimation and importance sampling To understand how the uncertainty in the prior hyperparameters affects the QoI, we employ <ref> in <ref>. The first step is to choose a priorto build the importance sampling distribution . We take 𝒩(θ⃗_pr^IS,Γ⃗_pr^IS) with θ_pr=[[ 1; 1 ]],Γ_pr=[[ 1.5^2 0; 0 1.5^2 ]].We use the DRAM algorithm, discussedin <ref>, to draw 10^5 samples from . In <ref>, we compare the prior, analytic posterior, and MCMC-constructed posterior marginal distributions of b and m.Before we implement <ref>, we evaluate whether is an acceptable importance sampling distribution.As discussed in <ref>, we use (<ref>) to computethe effective sample size over the distribution of prior hyperparameters ξ⃗.The distribution of effective sample sizes, given in <ref> (left), showsthatis an effective importance sampling distribution over many realizations of ξ⃗. In <ref>, we give a further visual of how serves as an effective importance sampling distribution.In the right panel, the distribution of q,when θ⃗∼, is compared to the distributions ofq_lin(θ⃗) when θ⃗∼, for three realizationsof ξ⃗.§.§.§ Sensitivity analysisWe now study q given in (<ref>).We are interested in the variance and mean HS mappings (<ref>) F_mean(ξ⃗) =𝔼_post^ξ⃗(q) and F_var(ξ⃗) = var^ξ⃗(q).As shown in <ref>, these HS mappings take analytically known forms.We use <ref> to compute the Sobol' indices of the HS mappings under study. The importance sampling distribution is given by , as described in <ref>, and withas specified in <ref>. We study how the Sobol' indices, computed via <ref> converge as we increase MCMC sample size M. In our computations, we build sparse PCE and sparse-weight ELM <cit.> surrogate models, discussed in <ref> using 10^3 realizations of ξ⃗, drawn using Latin hypercube sampling (LHS).SW-ELM surrogates use 800 realizations for training and 200 for validation during the weight sparsification step.The Sobol' indices estimated by <ref> are compared against benchmark indices.We compute the benchmark indices by applying the standard sampling approach from <cit.> to the HS mappings.This yields accurate indices because we have access to the analytic expressions of F_var and F_mean. <ref> illustrates the Sobol' indices of F_mean.The computed indices are compared against benchmark values which are computed by sampling the analytic form of the QoI.Note that With only a modest number of about 1000 MCMC samples, we can ascertain the correct importance ranking of the total Sobol' indices of F_mean. By 5000 samples, the total Sobol' indices have converged. Also,as before, the two surrogate modelingapproaches provide similar results. In <ref>, we consider F_var. 
We note that the Sobol' indices for the F_var take longer to converge than for F_mean, for the present QoI.However, even with a modest number of MCMC samples (about 2500), the Sobol' indices provide the correct ranking of importance. The total indices converge with around 10^4 MCMC samples.The numerical studies for the present model linear inverse problem provide a proof-of-concept study of <ref>.In particular,availability of analytic expressions for the HS mappings enables testing the accuracy of the computed results. We note that a modest MCMC sample size is sufficient toobtain the correct parameter rankings. We also observe that fewer MCMC samples are required to estimate the indices F_mean compared to F_var.This is not surprising, because computing second order moments typically require more effort than that required for computing the mean.§.§ Nonlinear Bayesian inverse problem based on SEIR modelIn this section, we consider a Bayesian inverse problem governed by the susceptible-exposed-infected-recovered (SEIR) model <cit.> epidemic model.In <ref> we discuss the governing SEIR model and the Bayesian inverse problem under study.In <ref>, we study the proposed importance sampling procedure for computing the HS mappings under study.Finally, in <ref>, we present our computational results for GSA of the present Bayesian inverse problem with respect to prior hyperparameters. §.§.§ The inverse problemThe SEIR model simulates the time dynamics of an epidemic outbreak in a population.The model has four compartments, S, E, I, and R, corresponding to the susceptible, exposed, infected, and recovered populations.The individuals in the exposed compartment are those who have been exposed to the disease but are not yet displaying signs of infection. The individuals in the I compartment are infected and infectious.We consider a standard SEIR model where we assume recovered individuals cannot be reinfected.Additionally, we assume that the natural birth and death rates are equal and neglect disease related mortality. This ensures that the total population N=S+E+I+R remains constant over time. The present model is described by the following system of nonlinear ordinary differential equations (ODEs):Ṡ =μ N - β SI/N - μ S,Ė =β SI/N - (σ + μ)E,İ = σ E - (γ + μ) I,Ṙ =γ I - μ R.There are four model parameters in the above system which we seek to estimate. The infection rate β, in units days^-1, represents how quickly an infected individual infects a susceptible individual. The recovery rate γ, in units days^-1, represents how fast an infected individual recovers from infection. The latency rate σ, in units days^-1, represents how long it takes for an exposed individual to display symptoms. Lastly, there is also a parameter μ, with units individuals per day, which represents both the natural birth rate and the natural death rate. In the model, individuals are only born susceptible while individuals in any compartment can die a natural death. As noted before, since the birth and death rates are the same, the total population size remains constant. Setup. For the purposes of this example, we simulate an epidemic governed by the SEIR model for a population of N=1000 individuals. The nominal parameters and initial conditions are detailed in <ref>.The nominal parameter values will be used as “ground-truth” in the computational studies that follow. The dynamics of the epidemic under these conditions are shown in <ref> (left). Next, we formulate a Bayesian inverse problem. 
In what follows, we formulate the inverse problem as thatof estimating the log of the uncertain model parameters.Hence, we consider the inversion parameter vector, θ⃗= [[ logμ logβ logσ logγ ]]^⊤. The data measurements, used to solve the inverse problem,consist of simulated data {(t_k,I_k)} at times t_k=3k+30, where k=1,…,15.These simulated data measurements are obtained bysolving the SEIR model with ground-truth parameter values and adding random noise. The noise at each measurement is identically independently distributed from a normal distribution 𝒩(0,30^2). The simulated data compared to the true model are shown in <ref> (right). We use a Gaussian prior𝒩(m⃗_pr, Σ⃗_pr) on the inversion parameter vectorθ withm⃗_pr = [[ m_logμ; m_logβ; m_logσ; m_logγ ]], Σ⃗_pr = [[ s_logμ^2000;0 s_logβ^200;00 s_logσ^20;000 s_logγ^2 ]].Note that, unlike the inverse problem in <ref>, this Bayesian inverse problem is nonlinear. In this case, we do not have access to an analytically known posterior distribution.This means Markov Chain Monte Carlo (MCMC) is needed to sample from the posterior distribution. Uncertainty in prior hyperparameters. We assume there is uncertainty in the hyperparameters that appear in (<ref>).Specifically, we consider the vector ξ⃗= [[ m_logμ m_logβ m_logσ m_logγ s_logμ^2 s_logβ^2 s_logσ^2 s_logγ^2 ]]^⊤of parameters that define the prior as uncertain.In the present study, we assume that the entries of ξ⃗ are independent uniformly distributed random variables, as specified in <ref>.Quantity of interest. An important quantity of interest in epidemiology is the basic reproduction number, denoted R_0.It can be interpreted as the number of secondary infections caused, on average, by a singleindividual <cit.>. Determining R_0 of an epidemic is key to understandinghow severe the outbreak could be. For the SEIR model <ref>,R_0 takes the formR_0=β/γ + μσ/σ+μ. For the epidemic in <ref>, R_0=2.7985.The importance of R_0 makes it a prime area to apply uncertainty quantification and robustness analysis. In <cit.>, the robustness of R_0 estimates to model parameters is considered through local derivative-based methods. Hence, we focus on R_0 as the QoI,q(θ⃗) =e^θ_2/e^θ_4 + e^θ_1e^θ_3/e^θ_3+e^θ_1. §.§.§ Parameter estimation and importance sampling Before we can implement <ref>, we have to choose the importance sampling distribution. In accordance with the discussion in <ref>, we choose the importance sampling distribution π_pr^IS as 𝒩(m⃗_pr^IS,Σ⃗_pr^IS) with m⃗_pr^IS= [[-10; -1.5; -1.5; -1.5 ]], Σ_pr^IS = [[ 3^2 0 0 0; 0 2^2 0 0; 0 0 2^2 0; 0 0 0 2^2 ]]. Because m_logμ takes a wider range of values compared to the other means,we impose a large variance on logμ in . We construct the correspondingposteriorusing the DRAM algorithm.The first 10^3 samples are removed for burn-in. After sufficient burn-in, we generate 1.5× 10^5 from the posterior. We present the MCMC chains of log parameters and their respective marginal posterior distributions in <ref>.In <ref> (left), we evaluate the effectiveness of our importance sampling distribution byexamining the distribution of effective sample sizes. 
We also compare the distribution of R_0 values, with respect to π_post^IS, compared to the posterior distributions for three realizations of the prior hyperparameters in <ref> (right).These results indicate that we can use π_post^IS as an importance sampling distribution for the target posteriors.§.§.§ Sensitivity analysisHere, we study the sensitivity of the HS mappings F_mean, F_var, and F_MAP to prior hyperparameters, relativeto the QoI q(θ⃗) = R_0. As discussed in <ref>, F_MAP is not evaluated the same way as the other two HS mappings—it is evaluated by solving an optimization problem. Therefore, we only include convergence studies for F_mean and F_var.For each HS mapping, surrogate models are constructed using 10^3 realizations of ξ⃗, drawn using Latin hypercube sampling (LHS).For polynomial chaosexpansion surrogates, we use expansions of total degree 6.SW-ELM surrogates use 800 realizations for training and 200 for validation during the weight sparsification step. We start by studying the total Sobol' indices forF_var and F_mean. We track the convergence of these indices aswe increase the number of MCMC samples taken from π_post^IS to up to 1.5× 10^5 samples.The results are reported in <ref>.We observe that the estimators for the larger Sobol' indices converge faster. However, our importance ranking remains constant after 4× 10^4 MCMC samples.The convergence of the total indices of F_var are studiedin <ref>.As was observed when studying the linear Bayesian inverse problem, evaluating the variance accurately requires more MCMC samples compared to evaluating the mean.Finally, we compare the converged total Sobol' indices of F_mean(ξ),F_var(ξ) with those of F_MAP(ξ) in <ref>. Overall, we note that the results from the SW-ELM and sparse regression PCE results agree.The indicates that the present computations are stable with respect to the choice of the surrogate model.The global sensitivity analysis of the posterior mean, variance, and MAP point in <ref> allow us to infer much information about which hyperparameters in the prior matter and which do not.The Sobol' indices suggest that the uncertainty inthe prior mean of logγ and prior variances of logμ,logγ can be ignored. To illustrate this,we compare the distributions of F_mean, F_var, and F_MAP before and after these prior hyperparameters are fixed at their nominal values in <ref>. The density estimates in <ref> confirm that those three prior hyperparameters have little influence over the posterior mean, variance, and MAP point. Thus, the experimental resources should be put towards finding more knowledge about the other hyperparameters.§ CONCLUSIONWe have developed a computational approach for global sensitivity analysis of Bayesian inverse problems with respect to hyperparameters defining the prior. Our results indicate that the posterior distribution can exhibit complex dependence on such hyperparameters. 
Consequently, the uncertainty in the prior hyperparameters lead touncertainty in posterior statistics of the prediction/goal quantities of interest which needs to be accounted for.The results of GSA provide valuable insight this context.Such an analysis reveals the prior hyperparameters that are most influential to the posterior statistics of prediction quantities of interest and whose specification requires care.Our computational studies provide a proof-of-concept of the proposed approach and indicate its viability.In particular, at the cost of one MCMC run, we can obtain reliable estimates of the sensitivity of moment-based hyperparameter-to-statistic mappings with respect to prior hyperparameters.An important aspect of our approach is the proposed importance sampling procedure.A limitation of the present study is that the importance sampling prior in (<ref>) was chosen in an empirical manner. While this can be practical in many cases, developing a systematic approach for picking this distribution is an interesting and important avenue of future investigations. This can be facilitated, e.g., by considering an appropriate optimization problem for finding .This requires definition of suitable performance objectives forthat are tractable to optimize. A related line of inquiryis exploration of techniques such as variational inference <cit.> or the Laplace approximation <cit.> to the posterior for obtaining an importance sampling posterior .This is necessary for computationally intensive inverse problems where even one MCMC run might be prohibitive.Yet another direction for future work is the development of hyperparameter screening steps. A tried-and-true approach is to screen via derivative-based global sensitivity measures <cit.>, after which a variance-based analysis may be conducted.This would be important for inverse problems with a large number of prior hyperparameters. elsarticle-num | http://arxiv.org/abs/2310.18488v2 | {
"authors": [
"John E. Darges",
"Alen Alexanderian",
"Pierre A. Gremaud"
],
"categories": [
"stat.CO",
"62F15, 62F07, 62D05"
],
"primary_category": "stat.CO",
"published": "20231027210516",
"title": "Variance-based sensitivity of Bayesian inverse problems to the prior distribution"
} |
Ionospheric response during Tropical Cyclones-a brief review on Amphan and Nisarga [ January 14, 2024 ==================================================================================The luminous narrow line Seyfert galaxywas the first non-BAL AGN to reveal a powerful ionized wind, based on early observations with ESA's X-ray Observatory. Subsequent observations, mainly withand the JapaneseObservatory, found such winds to be a common feature of luminous AGN. Typical outflow velocities of v ∼ 0.1c and flow momenta mv ∼ L_Edd /c are consistent with winds being launched by continuum driving from a disc when the local mass accretion rate is super-Eddington. Here we report the launch of a new, ultra-fast outflow component in near the end of a 5-weekobserving campaign, and discuss its origin in an ultra-fast inflow of similar velocity detected some 3 weeks earlier.We find that the inflow lasted for at least 3 days and delivered some 10 Earth mass of fresh material into the innermost region of the source. While this mass by itself is insufficient to cause a complete inner disc restructuring, we show that it is sufficient to disrupt the X-ray emitting corona of the disc. We conclude that it is this coronal re-arrangement of the inner tens gravitational radii inthat subsequently caused the launch of a new wind. galaxies: active – galaxies: Seyfert: quasars: general – galaxies: individual: PG1211+143 – X-ray: galaxies § INTRODUCTION X-ray spectra from anobservation of the narrow-line Seyfert galaxyin 2001 provided the first detection in a non-BAL AGN of strongly blue-shifted absorption lines of highly ionized gas, corresponding to an outflow velocity of 0.15±0.01c (Pounds2003, Pounds and Page 2006). Further observations over several years with ,andshowed the high velocity outflow to be persistent but of variable opacity (eg Reeves2008). Evidence that the extended outflow inwas both massive and energetic - with potential importance for galaxy feedback - came from the detection of PCygni and other broad emission features obtained by combining the 2001, 2004 and 2007EPIC spectra (Poundsand Reeves 2007, 2009). Examination of archival data fromandhas since shown similar ultra-fast, highly-ionized outflows (UFOs) to be relatively common in nearby, luminous AGN (Tombesi 2010, 2011; Gofford 2013). The frequency of these detections confirms a substantial covering factor and hence significant mass and kinetic energy in such winds. Indeed, their integrated mechanical energy may be substantially greater than required to disrupt the bulge gas in the host galaxy, suggesting some winds are intermittent, or that much of the energy in a persistent wind must be lost before reaching the star forming region, perhaps by colliding with pre-ejecta, as seen in anobservation of the low mass Seyfert(Pounds and Vaughan 2011, Pounds and King 2013).In order to further explore the velocity structure and evolution of the fast wind inan extendedobservation was carried out during 7 spacecraft orbitsover the period 2014 June 2 to 2014 July 9. Effective on-target exposures for individual orbits ranged from ∼50 to ∼100 ks, with a total duration of ∼650 ks. Full details of theobserving log are given in Lobban (2016), reporting the results of a detailed timing analysis.(Gehrels et al. 2004) also observed PG 1211+143 with 43 snapshots, as part of a Target of Opportunity (ToO) programme linked to thecampaign. Theobservations cover the period 2014 June 4 to 2014 August 4 and have typical durations of ∼ 1.5 ks. 
We have extracted x-ray light curves from the X-ray Telescope (XRT; Burrows et al. 2005) and UV light curves from the Ultra-Violet/Optical Telescope (UVOT; Roming et al. 2005),the latter using the U, UVW1 and UVW2 filters. Full details of theobservations are also presented in Lobban et al. (2016).Figure 1 reproduces the orbital-mean x-ray fluxes from thepn camera (Strueder 2001), together with the first 17 snapshots from the Swift XRT. Both data sets show a deep minimum flux nearorbit 2659 (day 16), when the ultra fast inflow was detected. Thelight curve is particularly interesting, with its softer x-ray bandwidth responding to both column density and ionization changes in the line-of-sight flow, suggesting the inflow seen in day 16 actually began some 3 days earlier followed by a strong increase in x-ray emission, to a peak in orbit 2664 some 8 days later. The high x-ray flux is then maintained for at least 4 further days,to orbit 2666, but has fallen substantially by the final observation (orbit 2670).Figure 2 shows x-ray spectra from thepn camera over the same interval, with orbits 2659(black), 2661(red), 2663(green) and 2664(blue), plotted as a ratio to that of orbit 2652. The broad spectral band highlights the strong soft x-ray absorption associated with the transient accretion event in orbit 2659, which then falls over several days, with the continuum flux continuing to increase to a new peak by orbit 2664.Published analysis of the 2014observation ofhas focussed on stacked X-ray spectra, where the high quality data have revealed a complex velocity structure, with primary (high column density) outflow components at v∼0.067c, v∼0.129c andv∼0.187c (Pounds 2016a, 2016b; hereafter P16a and P16B)). Given the limited spectral resolution of the pn camera, detection of all 3 outflow velocities in the co-aligned Reflection Grating Spectrometer (RGS: den Herder2001) was important, while indicating the presence of co-moving higher density matter in each flow component.Notably, none of the outflow velocities in 2014 were consistent with the powerful outflow of v∼0.15c in the 2001 initialobservation, while repeated observations of several AGN reported in the afore-mentioned archival searches showed differing velocities weeks apart, implying that some high velocity AGN winds may be relatively short-lived. An initial examination of individual orbits during the 5-weeksobservation in 2014 found the clearest spectral variability in the soft x-ray band, sensitive to both column density and ionization state changes. A detailed inter-orbit analysis of the RGS soft x-ray spectra has subsequently confirmed variability on timescales of days, with the strongest outflow at v∼0.06c clearly resolved into distinct ionization (density) components (Reeves2018).The present paper derives from an on-going orbit-by-orbit study of the harder X-ray spectra from the EPIC pn (Strueder2001) and mos (Turner 2001) cameras. One remarkable outcome already reported (Pounds2018) was the detection of a transient ultra-fast inflow, with v∼0.3c, during the secondorbit in 2014. We now report the launch of a new high velocity outflow component, with v∼0.27c, a few weeks later, and note that the similar velocities suggest the two events are physically linked.We assume a redshift of z=0.0809 (Marziani1996), witha black hole mass of 4× 10^7 (Kaspi 2000) indicating the historical mean luminosity of is close to Eddington. 
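The outflow velocities quoted here and in the following section follow from the observed absorption-line blueshifts after removing the cosmological redshift with the special-relativistic Doppler formula for line-of-sight motion. The exact convention used for the quoted blueshifts is not spelled out in the text, so the short Python sketch below is only an illustrative cross-check; with the usual convention it reproduces the reported v ~ 0.27c from a blueshift of ~0.185 at z = 0.0809, and the corresponding apparent Fe line energies.

def outflow_beta(z_blue, z_cosm=0.0809):
    # Lines observed at lambda_rest * (1 - z_blue); removing the cosmological redshift,
    # (1 - z_blue) = (1 + z_cosm) * sqrt((1 - beta) / (1 + beta)) for a radial outflow.
    a = (1.0 - z_blue) / (1.0 + z_cosm)
    return (1.0 - a**2) / (1.0 + a**2)

print(outflow_beta(0.185))          # ~0.275, i.e. the v ~ 0.27c outflow quoted above
print(6.97 / (1.0 - 0.185))         # Fe XXVI Ly-alpha would then appear near 8.55 keV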
Spectral modelling is based on the XSPEC package (Arnaud 1996) and includes absorption due to the line-of-sight Galactic column N_H ∼ 3×10^20 (Murphy1996). To allow comparison with previous analyses of the 2014 spectra we again use photoionized absorption and emission grids of pre-computed spectra based on the XSTAR code of Kallman(1996).§ LAUNCH OF A FOURTH HIGH VELOCITY OUTFLOW LATE IN THE 2014 CAMPAIGNAs noted above, the unusually high statistical quality of the extended 2014 observation ofwas important in resolving complex absorption structure in the Fe K spectrum, identified in P16a with absorption line series in FeXXV and XXVI corresponding to line-of-sight outflow velocities of v∼0.067c and v∼0.129c, and evidence for a third velocity v∼0.187c subsequently confirmed in the soft X-ray spectrum reported in (P16b. Table 1 lists the parameters of the 3 primary (high column) outflows from the mean 2014 pn spectral fit.Examination of pn data for individual orbits show the 3 outflow velocities to remain as in Table 1, indicating all were in 'coasting' phase.However, in the final observation (orbit 2670) several additional absorption lines appeared, notably at ∼8.1 kev and ∼8.6 keV and at 9-10 keV, indicating absorption specific to orbit 2670. To quantify the new absorption component, the spectral model from the stacked 2014 data (P16a) was applied to the pn data from orbit 2670, with only the normalisation (amplitude) of X-ray continuum and emission components free to change. Figure 3 (upper panel) shows the additional absorption in the ratio of orbit 2670 data to the mean 2014 spectrum.Adding a 4th photoionized absorber to the spectral model confirms a new high column outflow at an observed blueshift of 0.185±0.005, with column density N_H 9.7±2.8×10^22cm^-2 and ionisation parameter logξ ∼2.9.The corresponding outflow velocity in the AGN rest frame is v∼0.27±0.01c.Inclusion of the additionalabsorber (Figure 3, lower panel) recovers an excellent spectral fit to the orbit 2670 data, with Δχ^2 of 17/3 and null probability of 4×10^-3.Comparison of the upper and lower panels of Figure 3 identifies the resonance lines of He-like FeXXV and H-like FeXXVI blue-shifted to 8.1 and 8.6 keV,respectively, and the corresponding β transitions both visible at higher energies. The apparent broadening of the ∼8.1 keV line can be explained by significant inner-shell absorption components in the low energy wing of the Fe XXV resonance line, a feature often seen in similar AGN spectra. Independent support for the new high velocity outflow is provided by a similar examination of the RGS data from orbit 2670, where a comparison with the mean 2014 spectrum (P16b) again reveals additional soft x-ray absorption specific to orbit 2670. Modelling with a photionized absorber, as in P16b, finds a soft x-ray absorber with column density N_H ∼ 10^22 and ionization parameter logξ∼ 0.8, at anobserved blueshift ∼ 0.183±0.003. Figure 4 shows an array of soft x-ray absorption lines specific to orbit 2670, identified with resonance transitions in OVII, OVIII, NVII and CVI, with a common blueshift ∼ 0.185±0.006, corresponding to an outflow velocity v∼0.265±0.005c. The lower ionization parameter and column density (compared with the pn data) again indicate higher density matter embedded in the primary (high column) highly-ionized flow.§ DISCUSSION Repeatedobservations have demonstrated the highly variable nature of the wind opacity in . 
The initial discoveryof a strong outflow at 0.15c in a 2001observation was followed byweaker detections in 2004 and 2007, and then the long 2014 campaign reporting a more complex velocity structure.Assuming the highly ionized winds are driven by momentum exchange with a super-Eddington radiation flux (King and Pounds 2003), an explanation now supported in the quantitative agreement of data and theory (King and Pounds 2015), the pattern of wind variability will also have direct bearing on the nature of variableaccretion in AGN. Nixon(2012) have shown how an accreting 'cloud' approaching the disc at an oblique angle to the black hole spin plane could cause the inner disk to warp and break off, with subsequent collisions between rings of matter precessing at different rates, leading to loss of angular momentum and direct infall, potentially creating local disc regions of super-critical mass accretion. The ultrafast infall detected during the 2014observations ofwas discussed in that context in Pounds (2018).Observing the launch of a new primary (high column) ultrafast outflow component, some 20 days later, is a striking illustration of continuum-drivingfor a high velocity wind, as first described in the classic accretion disc paper of Shakura and Sunyaev (1973), but - interestingly - not mentionedin the original scientific case for(Bleeker1982). In that context, the near coincidence of a 0.3c infall velocity (with free-fall location at ∼20 Rg) detected in rev 2659, and the wind launch seen 2-3 weeks later(with a similar escape velocity, and likely radial location) strongly suggests a direct physical link. The increase in x-ray emission between those two events (Fig.2) might then reflect the release of energy as the added matter spreads out on the local viscous timescale, with an inward accreting flow eventually reaching a radius where the mass rate is super-Eddington, with excess matter being ejected as anew wind component. To examine that idea quantitatively, we recall the transient inflow mass (in line-of-sight) reported in P18 was ∼3.3×10^26g, accumulated over 3000s, for a mean rate of 10^23g s^-1.However, that measure is only for matter in line of sight and sufficiently cool to be detected, and is likely to be a significant underestimate.Furthermore, Fig. 1 suggests the infall may have begun some 3 days earlier, increasing the observed mass dump by a substantial factor. In principle the integrated mass in the new UFO could have provided an alternative measure of the total mass dump, but unfortunately no observations ofhave been made since the 2014campaign.As noted above, Pounds (2018) discussed mass deposition in the context of the specific Nixon(2012) scenario for mass deposition due to colliding rings of a warped disc. An alternative scenario for gas deposition in AGN, supported by simulations of clumpy in-homogeneous inflows is chaotic or ballistic accretion, e.g., Hobbs2011, Faber & Dehnen (2018). In this picture gas inflowing into the central parsec of AGN has a wide distribution of angular momentum, in both orientation and magnitude. A small fraction of the gas will have angular momentum small enough to fall into the central region of AGN discs. Independently of how the gas is deposited on the disc, the mass deposition rate due to a radial inflow isṀ = ΔΩ R^2 m_p n_p v_r ,where ΔΩ is the solid angle of the flow (4π would correspond to the full sky flow), m_p and n_p are the proton mass and particle density in the flow, and v_r is the radial velocity. 
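A back-of-envelope evaluation of this rate helps fix the orders of magnitude before R is eliminated in the next step. The sketch below assumes free fall at the observed v_r ~ 0.3c, a fiducial inflow column N_H = n_p ΔR ~ 10^23 cm^-2 with ΔR ~ R (the lower limit on the rate), and standard cgs constants; small differences from the coefficients quoted in the equations that follow reflect only the choice of constants.

import math
G, c, m_p, sigma_T = 6.674e-8, 2.998e10, 1.673e-24, 6.652e-25     # cgs constants
M_sun, yr = 1.989e33, 3.156e7
M_BH = 4e7 * M_sun                  # black-hole mass quoted above (Kaspi et al. 2000)
v_r  = 0.3 * c                      # observed infall speed
N_H  = 1e23                         # fiducial inflow column, cm^-2 (i.e. N_23 = 1)

R = 2 * G * M_BH / v_r**2           # free-fall radius where (2GM/R)^(1/2) = v_r
print("R ~ %.0f R_g" % (R / (G * M_BH / c**2)))                    # ~22 R_g, cf. '~20 Rg' above

n_p = N_H / R                       # take Delta_R ~ R, the lower limit on the rate
Mdot = R**2 * m_p * n_p * v_r       # per steradian of flow solid angle
print("Mdot/dOmega ~ %.1e g/s = %.4f Msun/yr" % (Mdot, Mdot * yr / M_sun))

L_Edd = 4 * math.pi * G * M_BH * m_p * c / sigma_T
print("Mdot_Edd = L_Edd/(0.1 c^2) = %.2f Msun/yr" % (L_Edd / (0.1 * c**2) * yr / M_sun))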
Now N_ H = n_p Δ R, where Δ R < R is the radial thickness of the flow, and we can eliminate R through v_r = v_ ff = (2GM/R)^1/2, which yields R = R_ S (c/v_r)^2. We obtainṀ = ΔΩ R_ S m_p c N_H R/Δ Rc/v_r = 0.003 yr^-1 ΔΩ N_23 ,where N_23 = N_H/(10^23). For comparison, the Eddington accretion rate for the source is= ≤/0.1 c^2 = 0.8 yr^-1We see immediately that it is unlikely that the ultra-fast inflow could have significantly affected the bulk of the accretion flow, that is, its optically thick UV-emitting part.That prediction is supported by the UV fluxes fromover the same period (Fig.5), showing no substantial flux variability for several days before and after the ultra fast inflow on day 16. In contrast, the x-ray data fromandover the same period clearly show (Fig.1) enhanced soft x-ray absorption between days 12 and 15, followed (Fig.2) by an increase to the broad band x-ray emission from day 17.Furthermore, making plausible assumptions about disc parameters in the viscous timescale (on which mass accretion rate through the disc may vary) is likely to be of order a few years at R∼ 20 R_ g, too long for a causal disc-mediated connection between the infall event and the launch of the new wind reported here. However, the inflow is certainly massive and powerful enough to have affected the corona of the accretion disc in . Disc coronae must be Compton optically thin to explain typical AGN spectra (e.g., Poutanen & Svensson 1996). The effect of absorption in a line-of-sight inflow on theXRT data (Fig. 2) suggests a substantial flow lasted for at least Δ t =3days, adding ∼ṀΔ t/(π R^2)/2 ∼ a few g/cm^2, per side of the disc, exceeding the expected amount of material in the corona.Furthermore, the specific energy injected by the inflow into the coronal region of the disc, v_r^2/2 = 0.05 c^2 is one-two orders of magnitude higher than that of the X-ray emitting coronal gas. We suggest the additional material and extra energy introduced by the infalling gas is likely to have caused a significant rearrangement in the corona, and the top layers of the disc where most of the disc emission forms, and therefore propose the launch of a new wind late in the 2014 campaign was a direct consequence of the earlier fast infall of new matter.§ CONCLUSIONS Our analysis indicates that the amount of material delivered into the innermost tens of R_ g by the inflow is likely to besmall compared to that in the optically thick disc in that region, a conclusion supported by the lack of significant variability in the UV data from Swift (Fig.5), contrasting with the increasing coronal x-ray emission fromorbits 2659 to 2664 (Fig.2). However, the additional mass and energy injected into the accretion disc by the transient inflow are significant in comparison with the mass and energy budgets of the coronal region of the inner disc. We therefore conclude that the observed mass and energy injection early in the 2014campaign most likely is the direct cause of the new wind launched some three weeks later. §ACKNOWLEDGEMENTSis a space science mission developed and operated by the European Space Agency. We acknowledge in particular the excellent work of ESA staff in Madrid successfully planning and conducting theobservations. 
We also thank thePI for approving, and NASA mission planners for scheduling, the additional observations used in this paper.data were reduced by our former colleague Andrew Lobban and data were provided by the UK Science Data Centre at the University of Leicester, supported by the UK Science and Technology Facilities Council, with our special thanks locally to Dr Kim Page.Arnaud K.A.1996, ASP Conf. Series, 101, 17Bleeker J.A.,Brinkman A.C.,Culhane J.L.,Koch L.,Pounds K.A.,Schnopper H.W.,Spada G.,Taylor B.G.,Trumper J.1982Burrows D.N.2005, SSRv, 120, 165den Herder J.W.2001,A&A, 365, L7Faber C., Dehnen W., 2018, MNRAS, 478, 852Gehrels N.2004, ApJ, 611, 1005Gofford J., Reeves J.N., Tombesi T., Braito V., Turner T.J., Miller L., Cappi M.2013, MNRAS, 430, 60Hobbs A., Nayakshin S., Power C., King A.R.2011, MNRAS, 413, 263Kallman T., Liedahl D., Osterheld A., Goldstein W., Kahn S.1996, ApJ, 465, 994Kaspi S. Smith P.S., Netzer H., Maoz D., Jannuzi B.T., Giveon U.2000, ApJ, 533, 631King A.R., Pounds K.A.2003, MNRAS, 345, 657King A.R., Pounds K.A.2015, ARA&A, 53, 115Lobban A.P., Vaughan S.A., Pounds K.A., Reeves J.N.2016, MNRAS,457, 38L Marziani1996, ApJS, 104, 37Murphy E.M., Lockman F.J., Laor A., Elvis M. 1996, ApJS, 105, 369Nixon C.J., King A.R., Price D, Frank J.2012, Ap J, 757, 24Pounds K.A., Reeves J.N., King A.R., Page K.L., O'Brien P.T., Turner M.J.L.2003, MNRAS, 345, 705 Pounds K.A., Page K.L.2006, MNRAS, 360, 1123Pounds K.A., Reeves J.N.2007, MNRAS, 374, 823 Pounds K.A., Reeves J.N.2009, MNRAS, 397, 249 Pounds K.A. and Vaughan S.A2011, MNRAS, 415, 2379 Pounds K.A. and King A.R.2013, MNRAS, 433, 1369Pounds K.A.Pounds K.A, Lobban A.P, Reeves J.N., Vaughan S.A.2016a, MNRAS, 457, 2951 (P16a)Pounds K.A.Pounds K.A, Lobban A.P, Reeves J.N.,Costa M., Vaughan S.A.2016b, MNRAS, 459, 4389 (P16b)Pounds K.A., Nixon C.J., Lobban A., King A.R., 2018, MNRAS, 53, 115Poutanen J., Svensson R.,1996, ApJ, 470, 249Reeves J.N., Done C., Pounds K.A., Tereshima Y., Hayashida K., Anabuki N., Uchino M., Turner M.J.L. 2008, MNRAS, 385, L108Reeves J.N., Lobban A., Pounds K.A.2018, ApJ, 854, 28Roming P.W.A.2005, SSRv, 120, 95Shakura N.I, Sunyaev R.A1973,A&A, 24, 337Strueder L.2001,A&A, 365, L18Tombesi F., Cappi M., Reeves J.N., Palumbo G.C., Yaqoob T., Braito V., Dadina M.2010, ApJ, 742, 44Tombesi F., Cappi M., Reeves J.N., Palumbo G.C., Braito V., Dadina M.2011, A&A, 521, A57Turner M.J.2001, A&A, 365, L27 | http://arxiv.org/abs/2310.18105v1 | {
"authors": [
"Ken Pounds",
"Sergei Nayakshin"
],
"categories": [
"astro-ph.HE",
"astro-ph.GA"
],
"primary_category": "astro-ph.HE",
"published": "20231027124301",
"title": "Observing the launch of an Eddington wind in the luminous Seyfert galaxy PG1211+143"
} |
* Denis S. Grebenkov ======================Code-mixing is a well-studied linguistic phenomenon when two or more languages are mixed in text or speech. Several datasets have been build with the goal of training computational models for code-mixing. Although it is very common to observe code-mixing with multiple languages, most datasets available contain code-mixed between only two languages. In this paper, we introduce SentMix-3L, a novel dataset for sentiment analysis containing code-mixed data between three languages Bangla, English, and Hindi. We carry out a comprehensive evaluation using SentMix-3L. We show that zero-shot prompting with GPT-3.5 outperforms all transformer-based models on SentMix-3L. *These two authors contributed equally to this work. § INTRODUCTION Code-mixing and code-switching are very commonly observed in both text and speech. Code-mixing means the practice of using words from multiple languages within a single utterance, sentence, or discourse, and code-switching refers to the deliberate alteration between multiple languages within the same context <cit.>. The first case is spontaneous and the second case is purposeful. However, both are widely observed in bilingual and multilingual communities. According to <cit.>, several factors are behind these two phenomena, which include social, convenience, linguistic, and cognitive reasons. Socially, this often serves as a sign of group identity which allows individuals to navigate multiple social and cultural affiliations. In terms of linguistics, it is a very common scenario to not be able to find any word for a specific term in one language, whereas another word from another language can help to communicate better. Additionally, there are several cases even in a monolingual community, when Code-mixing might be the convenient way to express something. In most occurrences, code-mixing is bilingual. In an early research, <cit.> states that, it is very likely that by the year 2035, over half of the children enrolled in kindergarten will have grown up speaking a language other than English. Another study conducted by <cit.> shows that it is very common in European countries like Germany, Spain, and Italy to use bilingualism in practice. However, in cosmopolitan cities and areas like New York, London, Singapore, and others, code-mixing with three or even more languages is fairly common. This is also observed in countries like Luxembourg, and regions such as West Bengal, and South-East India where more than two languages are commonly used on a daily basis.Several research works have been conducted on building code-mixed datasets and performing several downstream tasks on such datasets. These datasets include both synthetic and natural ones. However, most of them are bilingual in nature. In this paper, we present SentMix-3L, a Bangla-English-Hindi dataset annotated for sentiment analysis. The main contributions of our work are as follows:* We introduce SentMix-3L, a novel three-language code-mixed test dataset with gold standard labels in Bangla-Hindi-English for the task of Sentiment Analysis, containing 1,007 instances.[https://github.com/LanguageTechnologyLab/SentMix-3L]* We provide a comprehensive experimental analysis with several monolingual, bilingual, and multilingual models on SentMix-3L. We are presenting this dataset exclusively as a test set due to the unique and specialized nature of the task. Such data is very difficult to gather and requires significant expertise to access. 
The size of the dataset, while limiting for training purposes, offers a high-quality testing environment with gold-standard labels that can serve as a benchmark in this domain. Given the scarcity of similar datasets and the challenges associated with data collection, SentMix-3L provides an important resource for the rigorous evaluation of text-based models, filling a critical gap in multi-level Code-mixing research. In our experiments, we also prepare a synthetic train and a development dataset to evaluate several models. § RELATED WORK There have been some works conducted on Bangla-English, Hindi-English, and Bangla-Hindi Code-mixing and Code-switching separately. Most of them are case studies and surveys that show the common occurrences of Bangla-English <cit.>, Hindi-English <cit.> and Bangla-Hindi <cit.> Code-mixing in a wide variety of areas and situations.Few works are done on sentiment analysis tasks for these types of cases. The work of <cit.> presents a Bangla-English Sentiment Analysis dataset primarily related to COVID-19. Their dataset is called CoVaxBD and it contains 1113 samples. Their experiments show that the best result of the dataset is generated by BERT with a development accuracy of 97.3%. However, a lot of the data are purely in Bangla and they only experiment using the BERT and multilingual BERT model <cit.> while only providing development accuracy as the performance metric. Another recent work by <cit.>, consists of 18,074 Bangla-English Code-mixed data from online for the purpose of sentiment analysis. They augment the dataset using their own approach and achieve their best result of an 87% weighted F1 score by implementing XGBoost with Fasttext embedding. Their experiments lack the evaluation of transformer-based models.An early work by <cit.> focuses on Hindi-English Code-mixing dataset for Sentiment Analysis. However, it contains 345 data samples in total and only 180 of them are code-mixed. They get their best results of 91.01% accuracy using Recursive Neural Tensor Network (RNTN). <cit.> compile a dataset of 3879 texts and get their best results using a subword-LSTM approach. Also, the work of <cit.> includes a dataset of 6357 texts and Bi-LSTM helps them to get their optimal result. However, none of these works present how the transformer-based models perform on their datasets.In summary, there are no works or datasets on sentiment analysis for code-mixed Bangla-English-Hindi altogether.SentMix-3L is a novel addition in this particular domain of research.§ THE SENTMIX-3L DATASET In generating the dataset, we choose a controlled data collection method, asking the volunteers to freely contribute data in Bangla, English, and Hindi. This decision stems from several challenges of extracting such specific code-mixed data from the vast corpus available on social media or other online platforms. While the data are not rare, identifying and isolating them from large, unstructured corpora is a very labor-intensive and error-prone process. Our approach ensures data quality and sidesteps the ethical concerns associated with using publicly available online data. Such types of datasets are often used when it is very difficult to mine them from existing corpora. As examples, for fine-tuning LLMs on instructions and conversations, semi-natural datasets like <cit.> and <cit.> have become popular. Data CollectionA group of 10 undergraduate students who are fluent in all 3 languages in all four language skills - listening, reading, writing, and speaking. 
We ask each of them to prepare 250 to 300 social media posts or tweets. They are allowed to use any language including Bangla, English, and Hindi to prepare posts on several daily topics like politics, sports, education, social media rumors, etc. We also ask them to switch languages if and wherever they feel comfortable doing it. The inclusion of emojis, hashtags, and transliteration are also encouraged. The students had the flexibility to prepare the data as naturally as possible. Upon completion of this stage, we filter 1863 samples that contain at least one word or subword from each of the three languages using langdetect <cit.> an open-sourced Python tool for language detection.Data Annotation We annotate the dataset in two steps to prepare high-quality labels for the dataset. First, we recruit three students from social science, computer science, and linguistics as annotators who are also fluent in all 3 languages in all four language skills. They annotate all the 1863 samples with one of the three labels (Positive, Neutral, and Negative) with a raw agreement of 65.3%. We only take these 1182 data, where all three annotators agree on the labels. Second, we gather a second group of annotators consisting of two NLP researchers with the same level of fluency and skills. After their annotation, we calculate a raw agreement of 0.85, a Cohen Kappa score of 0.78 and only keep the data where both annotators agree. After the two stages, we end up with a total of 1007 data.Dataset Statistics A detailed description of the dataset statistics is provided in Table <ref>. Since the dataset was generated by people whose first language is Bangla, we observe that the majority portion of the tokens in the dataset is in Bangla. It is also observed that all the data instances contain tokens from all three languages. There are several Other tokens in the dataset that are not from Bangla, English, or Hindi language. These tokens contain transliterated words and misspelled words as well as emojis and hashtags. We have Positive, Neutral, and Negative labels for the 1007 data. The label distribution is shown in table <ref>.Synthetic Train and Development Set We present SentMix-3L as a test dataset, hence for experimental purposes, we build a synthetic train and development set that contains Code-mixing for Bangla, English, and Hindi. We originally take the Amazon Review Dataset <cit.> as seed data and pick 100K data instances randomly. The dataset labels are ratings on a 1 to 5 scale. We convert them into Positive (rating > 3), Neutral (rating = 3), and Negative (rating < 3) for our task. We carefully choose an equal number of instances for Positive, Neutral, and Negative labels. We then use two separate methodologies called Random Code-mixing Algorithm by <cit.> and r-CM by <cit.> to generate the synthetic Code-mixed dataset. § EXPERIMENTS Monolingual Models We use 5 monolingual modelsDistilBERT <cit.>, BERT <cit.>, BanglaBERT <cit.>, roBERTa <cit.>, HindiBERT <cit.> for this experiment. Here BanglaBERT is specifically trained on only Bangla and HindiBERT on only Hindi language. The rest of the models are trained in the English language only. Bilingual Models BanglishBERT <cit.> and HingBERT <cit.> is used as bilingual models which are trained on both Bangla-English and Hindi-English respectively thus effective for the purpose of code mixing tasks including where any two of these languages are involved. 
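As a brief aside before the remaining model families are listed: the dataset-construction bookkeeping described above (the three-language filter, the seed-label conversion and the annotator-agreement statistic) is simple to sketch. The snippet below is purely illustrative — token-level language detection with langdetect is noisy, the toy label lists stand in for the real annotations, and none of the variable names come from the authors' code.

from langdetect import detect, DetectorFactory
from sklearn.metrics import cohen_kappa_score
DetectorFactory.seed = 0                      # langdetect is otherwise non-deterministic

def has_all_three_languages(text):
    # Keep a sample only if at least one token is detected as Bangla, English and Hindi.
    found = set()
    for token in text.split():
        try:
            found.add(detect(token))
        except Exception:                     # emojis / punctuation raise LangDetectException
            pass
    return {"bn", "en", "hi"} <= found

def rating_to_label(stars):
    # Amazon-review seed data: 1-5 star ratings mapped onto the three sentiment classes.
    return "Positive" if stars > 3 else ("Negative" if stars < 3 else "Neutral")

# Agreement for the second annotation stage; toy label lists stand in for the 1182 real ones.
ann1 = ["Positive", "Negative", "Neutral", "Positive", "Negative"]
ann2 = ["Positive", "Negative", "Positive", "Positive", "Negative"]
print(cohen_kappa_score(ann1, ann2))          # the paper reports kappa = 0.78 on the real data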
Multilingual Models We use mBERT <cit.> and XLM-R <cit.> as multilingual models which are respectively trained on 104 and 100 languages. These two models are very effective while we are working in a trilingualBangla, Hindi and English domain. Moreover, we use IndicBERT <cit.> and MuRIL <cit.> which covers 12 and 17 Indian languages respectively including Bangla-English-Hindi. Thus these two models and the respective list of languages justify the inclusion of them for our targeted tri-lingual code-mixing task. We also perform hyper-parameter tuning while using all the models to prevent overfitting and ensure optimal F1 score. PromptingWe use prompting with GPT-3.5-turbo model <cit.> from OpenAI for this task. We use the API for zero-shot prompting (see Figure <ref>) and ask the model to label the test set.Additionally, we run the same experiments separately on synthetic and natural datasets splitting both in a 60-20-20 way for training, evaluating, and testing purposes.§ RESULTS In this experiment, synthetic data is used as train set, and natural data is used as test set. The F1 scores of monolingual models range from 0.47 to 0.55 where BERT performs the best. Among the two bilingual models BanglishBERT scores 0.56 which is better than HingBERT. XLM-R is the best multilingual model with an F1 score of 0.59. On the other hand, a zero shot prompting technique on GPT 3.5 turbo performs the best with a 0.62 weighted F1 score. These results are available in Table <ref>. We perform the same procedure by using synthetic data for both the training and testing where MuRIL and XLM-R with 0.77 F1 score are the best-performing models. These results are available in Table <ref>. §.§ Error Analysis We observe Other tokens in almost 40% of the whole dataset, as shown in Table <ref>. These tokens occur due to transliteration which poses a challenge for most of the models since not all of the models are pre-trained on transliterated tokens. BanglishBERT did better than HingBERT since it recognizes both Bangla and English tokens and the total number of tokens for Hindi-English is less than Bangla-English tokens (see Table <ref>). Also, misspelled words and typos are also observed in the datasets, making the task even more difficult. Some examples are available in Appendix <ref> which are classified wrongly by all the models.§ CONCLUSION AND FUTURE WORK In this paper, we presented SentMix-3L, a Bangla-English-Hindi code-mixed offensive language identification dataset containing 1,007 instances. We also created 100,000 synthetic data in the same three languages for training. We evaluated various monolingual models on these two datasets. Our results show that prompting GPT3.5 generates the best result on SentMix-3L. When using synthetic data for both training and testing, multilingual models such as mBERT and XLM-R perform well.In the future, we would like to expand SentMix-3L so that it can serve as both training and testing data. Additionally, we are working on pre-training Bangla-English-Hindi trilingual code-mixing models for offensive language identification. § ACKNOWLEDGEMENTSWe thank the annotators who helped us with the annotation of SentMix-3L. 
We further thank the anonymous workshop reviewers for their insightful feedback.Antonios Anastasopoulos is generously supported by NSF award IIS-2125466.§ LIMITATIONS Although most datasets for the downstream tasks are scraped from social media posts in the real world, in our case these data instances are generated in a semi-natural manner, meaning that they were generated by people but not scraped from social media directly. This was done due to the complexity of extracting contents that contain a specific 3 language code-mixing in them. Also, the dataset is comparatively smaller in size, since it is costly to generate data by a specific set of people who are fluent in all 3 target languages.§ ETHICS STATEMENT The dataset introduced in this paper, which centers on the analysis of offensive language in Bangla-English-Hindi code-mixed text, adheres to thehttps://www.aclweb.org/portal/content/acl-code-ethicsACL Ethics Policy and seeks to make a valuable contribution to the realm of online safety. SentMix-3L serves as an important resource for opinion analysis of online content. Moreover, the contributors and annotators of the dataset are paid respectable remuneration for their efforts towards this dataset.acl_natbib § EXAMPLES OF MISCLASSIFIED INSTANCES< g r a p h i c s > | http://arxiv.org/abs/2310.18023v2 | {
"authors": [
"Md Nishat Raihan",
"Dhiman Goswami",
"Antara Mahmud",
"Antonios Anastasopoulos",
"Marcos Zampieri"
],
"categories": [
"cs.CL"
],
"primary_category": "cs.CL",
"published": "20231027095924",
"title": "SentMix-3L: A Bangla-English-Hindi Code-Mixed Dataset for Sentiment Analysis"
} |
Institut für Physik und Astronomie, Universität Potsdam, 14476 Potsdam, GermanyInstitut für Physik und Astronomie, Universität Potsdam, 14476 Potsdam, GermanyCEITEC BUT, Brno University of Technology, 61200 Brno, Czech RepublicCEITEC BUT, Brno University of Technology, 61200 Brno, Czech Republic Institute of Physical Engineering, Brno University of Technology, 61669 Brno, Czech RepublicInstitut für Physik und Astronomie, Universität Potsdam, 14476 Potsdam, Germany Helmholtz-Zentrum Berlin für Materialien und Energie GmbH, Wilhelm-Conrad-Röntgen Campus, BESSY II, 12489 Berlin, Germany [email protected] ultrafast photoemission experiments showed signatures of an ultrafast modification of the electronic band structure in FeRh indicative of a ferromagnetic (FM) state that is initiated by a non-equilibrium occupation of the electronic states upon femtosecond laser excitation. We use ultrafast x-ray diffraction to examine the impact of hot electrons on the antiferromagnetic (AFM) to FM phase transition. By increasing the pump-pulse duration up to 10.5 ps, we eliminate hot electrons and see that the nucleation of FM domains still proceeds at the intrinsic timescale of 8 ps, which starts when the deposited energy surpasses the threshold energy. For long pulses, the phase transition proceeds considerably faster than predicted by a convolution of the dynamics observed for ultrafast excitation with the long pump pulse duration. We predict that quite generally, slow photoexcitation can result in a fast response, if the non-linear threshold behavior of a first-order phase transition is involved.Picosecond pump pulses probe the relevance of hot electrons for the laser-induced phase transition in FeRh M. Bargheer January 14, 2024 ==========================================================================================================First-order phase transitions are characterized by an abrupt change of structural, electronic or/and magnetic properties and a co-existence of multiple phases when the deposited energy in thermal equilibrium<cit.> or on ultrafast timescales<cit.> exceeds a threshold. The abruptly emerging phase is a consequence of a fine interplay of spin, charge and lattice degrees of freedom<cit.>. Since the optical excitation often affects only the electrons directly, they heat up far beyond the transition temperature, rendering the driving mechanism of the laser-induced phase transition a formidable question.Since the discovery of the first-order magneto-structural antiferromagnetic-to-ferromagnetic (AFM-FM) phase transition of FeRh at 370 K different mechanisms such as expansion-induced sign change of the exchange constant<cit.>, excitation of spin waves<cit.> and dominant FM exchange of the Fe moments mediated by an induced Rh moment<cit.> were proposed.On ultrafast timescales, direct signatures of the ferromagnetic phase transition of FeRh are the rise of the increased FM lattice constant<cit.> and the emergence of a net magnetization<cit.>. 
While the rise of the magnetization is most significant after the coalescence of the nucleated FM domains<cit.>, probing the structural order parameter via ultrafast x-ray diffraction (UXRD) directly yields insights into their nucleation and growth<cit.>, where the intrinsic nucleation time<cit.> of 8 ps applies for a wide range of fluences, external magnetic fields and even for FeRh nanostructures<cit.>.The local configuration of magnetic moments has been described as a competition between bilinear and higher-order four spin exchange terms in atomistic spin dynamics <cit.>, a combination of Heisenberg exchange of Fe and a Stoner model for Rh <cit.> and a modification of the Rh-Fe hybridization<cit.>. Recently, time-resolved photoelectron spectroscopy identified a sub-picosecond formation of an electronicferromagnetic state<cit.>, related to a photo-induced change of the band structure: Charge transfer from Rh to Fe and an intersite spin transfer between Fe sites was found to induce a Rh moment during the relaxation of optically excited non-equilibrium electrons. However, it remains unclear if the non-equilibrium character of the photoexcited electrons is required for the laser-induced phase transition.Here, we investigate the role of hot electrons concomitant with ultrashort laser pulse excitation for this prototypical phase transition. By ultrafast x-ray diffraction (UXRD), we directly measure how the kinetics of domain nucleation and the threshold of the AFM-FM phase transition depend on the duration of optical pump-pulses. For 10.5 ps-long pulses, the pronounced electron-phonon non-equilibrium present upon femtosecond laser excitation is effectively suppressed. Thus, we gradually deposit the energy in all subsystems and recover the energy threshold for the laser-induced phase transition known from equilibrium. The transient FM volume fraction extracted from the laser-induced strain response is found to exclusively depend on the total deposited energy. It reaches the same final value at the intrinsic nucleation timescale irrespective of the pump-pulse duration. As a consequence of the non-linear threshold behavior, the rise of the FM phase is delayed but faster than the convolution of the pump pulse duration with the signal upon femtosecond pulse excitation. In total, a laser-induced hot Fermi-distribution is not necessary to drive the phase transition on the intrinsic nucleation timescale. Figure <ref>(a) sketches the epitaxial 12.6 nm thick FeRh(001) film grown by magnetron sputtering from an equiatomic FeRh target<cit.> on an MgO(001) substrate. We used synchrotron radiation from the KMC-3 XPP endstation at BESSY II<cit.> to determine the film thickness via x-ray reflectivity (XRR) and to characterize the first order AFM-FM phase transition via the concomitant change of the mean out-of-plane lattice constant d (symbols in Fig. <ref>(b)). The hysteresis for this locally probed lattice constant is narrower than the global temperature-dependent magnetization M_FeRh (solid line) determined by Vibrating sample magnetometry (VSM) using a QuantumDesign VersaLab magnetometer. The magnetization data indicates the presence of a residual FM phase of around 20 % originating from interface effects<cit.> consistent with the reduced out-of-plane expansion of η^thin_AFM-FM=0.48 % compared to η^thick_AFM-FM=0.6 % observed in thicker films<cit.>.Figure <ref>(c) illustrates the influence of increasing the pump-pulse duration on the optically induced electron-phonon non-equilibrium. 
We model the transient mean electron T_el and phonon T_ph temperature of the FeRh film as function of the pump-pulse duration in the framework of a diffusive two-temperature model<cit.>. We use the modular Python library udkm1Dsim <cit.> and literature values for the Sommerfeld constant<cit.> γ^S=0.06 Jkg^-1K^-2, the phononic heat capacity<cit.> C_ph=350 Jkg^-1K^-1and the electron-phonon coupling constant <cit.> g^el-ph=9· 10^13 Jkg^-1K^-1. The thermophysical properties of the MgO substrate were used as reported previously<cit.>. The excitation by a 60 fs pump pulse increases the electron temperature that stays significantly higher than the slowly rising phonon temperature within the first 3 ps, with the maximum Δ T^max_el≈ 7 Δ T^max_ph. Increasing the pump pulse duration to 2.9 ps drastically reduces the maximum electron temperature, as a considerable amount of energy already dissipated to the phonons during the optical deposition. For a pump pulse of duration 10.5 ps, the electron temperature barely exceeds the phonon temperature. A strong electron-phonon non-equilibrium with a substantial amount of hot electrons is absent. In the following, we experimentally apply such pump-pulses to investigate the kinetics of the laser-induced phase transition in FeRh by UXRD. The thin film is excited by p-polarized pump pulses with a central wavelength of 800 nm that are incident under 40^∘ with respect to the sample normal. We probe the transient out-of-plane strain response of the FeRh layer via symmetric θ-2θ scans <cit.> around the FeRh(002) Bragg peak at our table-top laser-driven plasma x-ray source <cit.> providing 200 fs hard x-ray pulses with a photon energy of approx. 8 keV. The Bragg peak position along the reciprocal space coordinate q_z encodes the mean out-of-plane lattice constant d of the FeRh films via q_z=4π/d. The lattice strain η_FeRh=Δ d/d_0 is the relative change Δ d of the lattice constant with respect to its value d_0 before excitation. We independently determined the pump-probe overlap and calibrated the duration of the pump-pulse by the strictly linear laser-induced response of a metal-insulator superlattice serving as reference sample<cit.>.Figure <ref>(a) displays the laser-induced strain response of the thin FeRh film for a fluence of 2.0 mJ cm^-2 and different pump pulse durations ranging from 60 fs to 10.5 ps. The excitation fluence exceeds the previously identified critical threshold<cit.> of F_th=0.6 mJ cm^-2 for this sample at room temperature and drives the magnetostructural phase transition that is associated with an out-of-plane expansion of FeRh. In total, the strain response is the superposition of an expansion due to the phase transition, a quasi-static expansion due to heating and a propagating strain pulse reflected at the surface and the FeRh-MgO interface, where it is partially transmitted into the substrate. In case of the excitation by a 60 fs pulse, this results in a a decaying oscillation <cit.> with a period of 2L_FeRh/v_s=5 ps given by the layer thickness L_FeRh and the sound velocity v_s=5 nm ps^-1<cit.> (see Figure <ref>(a)). Increasing the pump-pulse duration successively suppresses this oscillation of the mean out-of-plane strain until 10.5 ps pump-pulses only drive a slow expansion. The strain at 40 ps is identical for all pump-pulse durations. This indicates that all strain contributions including the expansion associated with the AFM-FM phase transition exclusively depend on the deposited energy and not on the details of the optical excitation. 
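Referring back to the two-temperature modelling of Figure 1(c): the qualitative effect of stretching the pump pulse can be reproduced with a zero-dimensional two-temperature model using the per-mass parameters quoted above. The sketch below neglects the depth-resolved diffusion handled by udkm1Dsim, takes the electron-phonon coupling constant as a rate (W kg^-1 K^-1) and uses an assumed absorbed energy density chosen to give a ~100 K equilibrated temperature rise, so it is an illustration rather than a reproduction of the published curves.

import numpy as np

gamma_S = 0.06      # J kg^-1 K^-2 ; electronic heat capacity C_el = gamma_S * T_el
C_ph    = 350.0     # J kg^-1 K^-1
g_ep    = 9e13      # electron-phonon coupling, taken here as W kg^-1 K^-1

def peak_rises(fwhm, Q_abs=3.3e4, T0=300.0, dt=0.5e-15):
    # Explicit-Euler integration of the homogeneous (0D) two-temperature model,
    # driven by a Gaussian pulse of the given FWHM; Q_abs (J/kg) is an assumed
    # absorbed energy density (illustrative, not fitted to the experiment).
    sigma = fwhm / 2.355
    t0 = 4 * sigma + 0.2e-12
    t_end = t0 + 5 * sigma + 6e-12
    t = np.arange(0.0, t_end, dt)
    S = Q_abs / (sigma * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((t - t0) / sigma) ** 2)
    T_el, T_ph = np.full(t.size, T0), np.full(t.size, T0)
    for i in range(t.size - 1):
        flow = g_ep * (T_el[i] - T_ph[i])
        T_el[i + 1] = T_el[i] + dt * (S[i] - flow) / (gamma_S * T_el[i])
        T_ph[i + 1] = T_ph[i] + dt * flow / C_ph
    return T_el.max() - T0, T_ph.max() - T0

for fwhm in (60e-15, 2.9e-12, 10.5e-12):
    dTe, dTp = peak_rises(fwhm)
    print(f"{fwhm*1e15:8.0f} fs pulse: dT_el^max = {dTe:5.0f} K, dT_ph^max = {dTp:4.0f} K")

With these inputs the 60 fs pulse gives an electron peak several times the phonon peak, while for the 10.5 ps pulse the two stay within a few tens of kelvin of each other, in line with the trend described above.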
Fig. <ref>(b) compares the transient strain for pump pulse durations of 5 ps (symbols) and 60 fs (grey solid curves) for various super- and sub-threshold fluences. At first sight, the long pump pulses seem just to smear out the oscillations resulting from coherent longitudinal acoustic phonons. We obtain better insight by disentangling the linear acoustic response of the sample from the expansion driven by the phase transition, which depends non-linearly on the excitation fluence. The sub-threshold fluence of F_bt=0.4 mJ cm^-2 does not induce the AFM-FM phase transition<cit.>. This simplifies the strain response to a superposition of a quasi-static expansion and propagating strain pulses, which both depend linearly on the deposited pulse energy. Therefore, the acoustic strain contribution upon a super-threshold excitation is given by this sub-threshold strain response η^bt_FeRh (grey solid line) scaled by the ratio of the fluences s=F_at/F_bt and convoluted with the respective pump-pulse duration. The dotted line in Fig. <ref>(b) depicts the acoustic strain contribution for F_at=2 mJ cm^-2 given by s ·η^bt_FeRh with s=5. The deviation Δη(t)=η^at_FeRh-s·η^bt_FeRh from the strain response to the super-threshold excitation η^at_FeRh is essentially due to the additional expansion induced by the phase transition, which directly provides insights into the rise of the FM volume fraction V_FM(t). To determine its absolute value, we consider the residual FM phase V^0_FM=0.2 present before the laser excitation due to interface effects and the latent heat C_AFM-FM of the first-order phase transition. The energy Q_AFM-FM=∫ C_AFM-FM dT required for transforming FeRh into the FM phase reduces the local temperature<cit.> by Δ T=Q_AFM-FM/C_ph, which reduces the quasi-static expansion of FeRh by η_AFM=α_AFMΔ T with the expansion coefficient α_AFM in the AFM phase. This relates the deviation in the transient strain Δη(t) to the laser-induced FM volume fraction Δ V_FM via:Δη(t)= (η^thick_AFM-FM-η_AFM)Δ V_FM(t) .Figure <ref> displays the laser-induced volume fraction Δ V_FM derived from the measured transient strain via Eq. (<ref>). The variation of the excitation fluence in Fig. <ref>(b) exemplifies the non-linear response at the phase transition: A tiny increase of the fluence from 1.8 to 2.1 mJ cm^-2 changes Δ V_FM from 60% to a complete phase transition, which corresponds to Δ V_FM= 80%. The data for 60 fs pulse excitation in Fig. <ref>(a) is very well reproduced by the nucleation of domains on a τ=8 ps timescale by <cit.>:Δ V_FM(t)= ℋ(t) V_FM^* ·( 1-e^-t/τ) ,with the Heaviside function ℋ(t) and the final FM volume fraction increase V_FM^* that depends on the fluence with V_FM^*>0 if F>F_th and V_FM^*=0.8 if F=2.1 mJ cm^-2 as reported previously<cit.>. Equation (<ref>) yields excellent agreement with Δ V_FM for 60 fs in Fig. <ref>(a). As a first attempt, the dashed lines for the longer pump-pulses in Fig. <ref> represent Δ V_FM(t) from Eq. (<ref>) convoluted with a Gaussian representing the pump-pulse in the experiment. However, the deviation of the measured transient FM volume fraction from this simple estimation becomes larger with increasing pump-pulse duration (see Fig. <ref>(a)). Interestingly, the data rise faster than the convolution although the rise starts later, which must be a consequence of the non-linearity associated with the threshold for the phase transition. 
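To make this comparison concrete, the "first attempt" dashed curves can be reconstructed from Eq. (2) with the reported τ = 8 ps and a saturation value of 0.8 (the fully driven transition), convolved with a normalized Gaussian of the nominal pulse duration. The following is an illustrative reconstruction, not the analysis code used for the figures.

import numpy as np

tau, V_star = 8e-12, 0.8                      # nucleation time and final laser-induced FM fraction
t = np.linspace(-30e-12, 60e-12, 9001)
dt = t[1] - t[0]
V_step = np.where(t >= 0, V_star * (1.0 - np.exp(-t / tau)), 0.0)   # Eq. (2)

def convolved(fwhm):
    # Eq. (2) convolved with a normalized Gaussian pump pulse (the dashed-line model).
    sigma = fwhm / 2.355
    tk = np.arange(-4 * sigma, 4 * sigma + dt, dt)
    kernel = np.exp(-0.5 * (tk / sigma) ** 2)
    return np.convolve(V_step, kernel / kernel.sum(), mode="same")

for fwhm in (60e-15, 5e-12, 10.5e-12):
    V = convolved(fwhm)
    t_half = t[np.argmax(V >= 0.5 * V_star)]
    print(f"pulse FWHM {fwhm*1e12:5.2f} ps: 50% of the final FM fraction at t = {t_half*1e12:4.1f} ps")

The convolved profiles reach half of their final value progressively later for longer pulses; the measured curves start later but rise more steeply than these profiles, which motivates the threshold-based model introduced next.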
As an improved model of the rising Δ V_FM for long pump-pulses, we explicitly consider the successively deposited energy that leads to the unlocking of the AFM-FM phase transition in an increasing volume fraction of the film during the pump-pulse. This explicit treatment of the threshold character of first-order phase transitions extends Eq. (<ref>) in the case of picosecond pump-pulses to:Δ V_FM(t)= ∫ℋ(t-t^') V_FM^*(t^') ·( 1-e^-(t-t^')/τ) dt^'.Here, the FM volume fraction V_FM^*(t^') is unlocked at delay t^' by the increase of the deposited energy and rises on the 8 ps nucleation timescale. This transiently unlocked phase transition adds to the already present FM phase driven at delays t<t^'. The Heaviside function ℋ(t-t^') ensures a start of the phase transition at t^'. To model the transient FM volume fraction we assume a linear increase of V_FM^* from 0 to 1 between the fluences F_th=0.6 mJ cm^-2 and 2.1 mJ cm^-2, which corresponds to a full phase transition as characterized previously<cit.>.Under this assumption, Eq. (<ref>) yields excellent agreement (see solid lines) with the experimentally determined transients Δ V_FM(t) in Fig. <ref>(a and b) for various pump-pulse durations and super-threshold fluences. Our model including the critical threshold of the first-order phase transition reproduces both the delayed start of the domain nucleation relative to the beginning of the pump pulse for longer pump-pulses and the earlier start of the domain nucleation with increasing fluence, which highlights the central role of the threshold for the laser-induced phase transition.In summary, we studied the laser-induced magnetostructural AFM-FM phase transition in FeRh driven by picosecond pump-pulses via UXRD. The extracted transient FM volume fraction highlights the crucial role of the threshold for the first-order phase transition in FeRh. The insensitivity of the final FM volume fraction on varying the pump-pulse duration from 60 fs up to 10.5 ps reveals that the threshold is exclusively determined by the amount of deposited energy and that the laser-induced AFM-FM phase transition does not need to proceed though the generation of hot non-equilibrium electrons. With increasing pump-pulse duration we observe an increasing deviation of the FM volume fraction from a linear response model. We successfully model the data using the intrinsic nucleation of FM domains on an 8 ps timescale, by simply considering the slow deposition of energy by picosecond pump-pulses, which successively overcomes the critical threshold that makes the phase transition dynamics nonlinear. We acknowledge the DFG for financial support via Project-No. 328545488 – TRR 227, project A10 and the BMBF for funding via 05K22IP1. Access to the CEITEC Nano Research Infrastructure was supported by the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic under the project CzechNanoLab (LM2023051). | http://arxiv.org/abs/2310.18289v2 | {
"authors": [
"Maximilian Mattern",
"Steffen Peer Zeuschner",
"Jon Ander Arregi",
"Vojtěch Uhlíř",
"Matias Bargheer"
],
"categories": [
"cond-mat.mtrl-sci"
],
"primary_category": "cond-mat.mtrl-sci",
"published": "20231027172608",
"title": "Picosecond pump pulses probe the relevance of hot electrons for the laser-induced phase transition in FeRh"
} |
Department of Mathematics, Princeton University, Princeton, [email protected] E Huh Center for Mathematical Challenges of KIAS, Seoul, South [email protected] of Mathematics, University of Toronto, Toronto, ON [email protected] Department of Mathematics, University of Toronto, Toronto, ON [email protected] walks on groups and superlinear-divergent geodesics Kasra Rafi July 2022 ========================================================== In this paper, we study random walks on groups that contain superlinear-divergent geodesics, in the line of thoughts of Goldsborough-Sisto. The existence of a superlinear-divergent geodesic is a quasi-isometry invariant which allows us to execute Gouëzel's pivoting technique. We develop the theory of superlinear divergence and establish a central limit theorem for random walks on these groups. § INTRODUCTIONClassical limit laws in probability theory concern the asymptotic behaviour of the random variable Z_n = ξ_1 + ξ_2 + ⋯+ ξ_n.for i.i.d. random variables ξ_1, ξ_2, … on ℝ. As a non-commuting counterpart, Bellman, Furstenberg and Kesten initated the study of random walks on a matrix group G(<cit.>, <cit.>, <cit.>, <cit.>). Given a probability measure μ on G, the random walk generated by μ is a Markov chain on G with transition probabilities p(x, y) := μ(x^-1y). Our goal is to understand the n-th step distribution Z_n = g_1 ⋯ g_nwhere g_i are independent random variables distributed according to μ.There are several generalizations of Bellman, Furstenberg and Kesten's theory of non-commuting random walks: random walks on Lie groups (cf. <cit.> and the references therein); random conformal dynamics (<cit.>); subadditive and multiplicative ergodic theorems due to Kingman<cit.> and Oseledec <cit.>, respectively (see their generalizations <cit.>, <cit.> that incorporates random processes on isometries and non-expanding maps on a space) to name a few. In geometric group theory, there is a strong analogy between rank-1 Lie groups and groups with a non-elementary action on a Gromov hyperbolic space X (<cit.>). Given a basepoint o ∈ X, the sample path (Z_n o)_n ≥ 0 on X tracks a geodesic and the displacement d(o, Z_n o) at step n grows like a sum of i.i.d. random variables with positive expectation. From this one can derive a number of consequences, such as exponential bounds on the drift (<cit.>), limit laws (<cit.>), and identification of the Poisson boundary (<cit.>). If the G-action on X is compatible with the geometry of G in a suitable sense, one can transfer these results on X to G. One of the most successful results in this direction is due to Mathieu and Sisto <cit.>, who proved a central limit theorem for random walks on acylindrically hyperbolic groups. We refer the readers to <cit.> and <cit.> for examples of acylindrically hyperbolic groups and hierarchically hyperbolic groups.Although the notion of acylindrical hyperbolicity captures a wide range of discrete groups,acylindrical hyperbolicity of a group is not known to be quasi-isometry invariant or even commensurability invariant. This is because there is no known natural way to transfer a group action through a quasi-isometry. To overcome this, the second author proposed a theory for random walks using a group-theoretic property that does not involve hyperbolic actions, namely, possessing a strongly contracting element <cit.>. 
Nevertheless, this theory is still not invariant under quasi-isometry.Meanwhile, certain hyperbolic-like properties are known to be quasi-isometry invariant, such as existence of a Morse quasi-geodesic. Hence one can expect that many consequences of hyperbolicity should hold under quasi-isometry invariant assumptions. To address this, Goldsborough and Sisto <cit.> developed a QI-invariant random walk theory for groups. Given a bijective quasi-isometry f from a group G to a group H, the pushforward of the random walk from G to H is not necessarily a random walk, but only an inhomogeneous Markov chain. Nonetheless, if one—equivalently both—groups are non-amenable, the induced Markov chain satisfies some sort of irreducibility, which the authors call tameness. At this moment, Goldsborough and Sisto require that G acts on a hyperbolic space X and contains what they call a `superlinear-divergent' element g, that is, any path must spend a superlinear amount of time to deviate from the axis of g (see section <ref> for the definition). Goldsborough and Sisto observed that along a random path arising from a tame Markov chain on G, some translates of the superlinear-divergent axis are aligned on X, and such alignment is also realized on the Cayley graph of G. As a consequence, theyestablished a central limit theorem for random walks on H, which is only quasi-isometric to G.In the setting of Goldsborough and Sisto, still, G is required to possess an action on a hyperbolic space. Our purpose is to remove this assumption and establish a central limit theorem for groups satisfying a QI-invariant property, without referring to a hyperbolic space.Let G be a finitely generated group with exponential growth, and suppose that G has a superlinear-divergent quasi-geodesic γ: → G. Let (Z_n)_n≥ 1 be a simple random walk on G. Then there exist constants λ, σ≥ 0 such thatd_X(o, Z_n o)-λ n/σ√(n)→𝒩(0, 1) in distribution. Note that we only assume existence of a superlinear-divergent quasi-geodesic, as opposed to a superlinear-divergent element. This makes our setting invariant under quasi-isometry; see Lemma <ref>. In addition, our proof only uses the classical theory of random walks and does not refer to tame Markov chains. This theorem applies to groups that are not flat but not of rank 1 either. For example, we can construct a superlinear-divergent element in any right-angled Coxeter group (RACG) that contains a periodic geodesic with geodesic divergence at least r^3:Let W_Γ be a Right-angled Coxeter group of thickness k≥ 2. Then any Cayley graph of Γ contains a periodic geodesic σ which is (f, θ)–divergent for some θ>0 and f(r) ≳ r^k. In particular, simple random walks on W_Γ satisfy the central limit theorem. By f ≳ r^k we mean that f≥ cr^k for some sufficiently small c > 0. The proof of this lemma is Appendix <ref>. Such RACGs can be produced following the method in<cit.>, and <cit.> shows that there is an abundance of such groups.Lastly, let us mention the relationship between superlinear-divergence and the strongly contracting property, which is a core ingredient of the second author's previous work <cit.>. In general, a superlinear-divergent axis need not be strongly contracting and vice versa. Hence, the present theory and the theory in <cit.> are logically independent. We elaborate this independence in Appendix <ref>. §.§ Outline of the paper Our main idea is to bring Gouëzel's recent theory of pivotal time construction for random walks <cit.>. 
Here, the key ingredient is a local-to-global principle for alignments between quasigeodesics. Lacking Gromov hyperbolicity of the ambient group, we establish such a principle among sufficiently long superlinear-divergent geodesics (Proposition <ref>). For this purpose, in Section <ref> we continue to develop the theory of superlinear-divergent sets after Goldsborough and Sisto <cit.>. In Section <ref>, we discuss alignment of superlinear-divergent geodesics. In Section <ref>, we estimate the probability for alignment to happen during a random walk. This yields a deviation inequality (Lemma <ref>) that leads to the desired central limit theorem. §.§ AcknowledgementThis project was initiated at the AIM workshop “Random walks beyond hyperbolic groups", after a lecture by Alex Sisto on his work with Antoine Goldsborough. We would like to thank Alex Sisto, Ilya Gekhtman, Sébastien Gouëzel, and Abdul Zalloum for many helpful discussions. We are also grateful to Anders Karlsson for suggesting references and explaining the background.The first author was partially supported by an NSERC CGS-M grant.The second author is supported by Samsung Science & Technology Foundation (SSTF-BA1702-01 and SSTF-BA1301-51) and by a KIAS Individual Grant (SG091901) via the June E Huh Center for Mathematical Challenges at Korea Institute for Advanced Study. The third author was partially supported by an NSERC CGS-M Grant. The fourth author was partially supported by NSERC Discovery grant, RGPIN 06486.§ SUPERLINEAR-DIVERGENCEFor this section, let X be a geodesic metric space. For points x,y ∈ X, we will use the notation [x, y] to mean an arbitrary geodesic between x and y (note: not unique in general). If α is a quasi-geodesic, and x,y ∈α, we use [x, y]|_α to denote the quasi-geodesic segment from x to y along α. Throughout, all paths are continuous maps from an interval into X.We adopt the definition in <cit.>. For a set Z ⊆ X and constants A, B > 0, we say a map π = π_Z : X → Z is an (A, B)–coarsely Lipschitz projection if∀ x, y ∈ X, d(π(x), π(y)) ≤ A d(x, y) + Band∀ z ∈ Z, d(π(z),z) ≤ B.We say that a map π is coarsely Lipschitz if it is (A,B)-coarsely Lipschitz for some A,B>0. Note that a coarsely Lipschitz projection is comparable to the closest point projection: for any x ∈ X we have d(x, π(x)) ≤inf_z ∈ Z( d(x, z) + d(z, π(z)) + d(π(z), π(x))) ≤inf_z ∈ Z( d(x, z) + B + (Ad(x,z)+B) )≤ (A+1) d(x, Z) + 2B. We say a function f : ℝ_+→ℝ_+ is superlinear if it is concave, increasing, andlim_x →∞f(x)/x = ∞.Let Z be a closed subset of a geodesic metric space X, let θ > 0 and let f : ℝ_+→ℝ_+ be superlinear. We say that Z is (f,θ)–divergent if there exists an (A, B)–coarsely Lipschitz projection π_Z : X → Z such that for any R>0 and any path p outside of the R–neighborhood of Z, if the endpoints p_- and p_+ of the path p satisfyd(π_Z(p_-),π_Z(p_+)) > θthen the length of p is at least f(R). We say that Z is superlinear-divergent if it is (f,θ)–divergent for some constant θ > 0 and a superlinear function f : ℝ_+→ℝ_+. The following lemma shows that the existence of a superlinear-divergent quasi-geodesic in a group G is a quasi-isometry invariance. Let X be a geodesic metric space containing a superlinear-divergent subset Z, and let ϕ:X→ Y be a quasi-isometry. Then ϕ(Z) is also superlinear-divergent. Let Z ⊂ X be (f,θ)–divergent with a coarsely Lipschitz projection π_Z.Let ϕ:X→ Y be a (q,Q)–quasi-isometry. 
Then π_Z pushes forward to a coarsely Lipschitz projection π_ϕ(Z) = ϕ∘π_Z ∘ϕ^-1.Note that the pullback under ϕ of a continuous path in Y may not be a continuous path in X. But by the taming quasi-geodesics lemma (Lemma III.H.1.11 of <cit.>), we can find a continuous path within the (q+Q)–neighborhood of ϕ^-1(p) with the same endpoints.Fix R>0. Suppose p is a path in Y outside of a R–neighborhood of ϕ(Z), and suppose the endpoints p_- and p_+ satisfyd(π_ϕ(Z)(p_-),π_ϕ(Z)(p_+))>θ',where θ' = q θ + Q. Then let p' be a continuous path in the (q+Q)–neighborhood of ϕ^-1(p) with endpoints p'_-∈ϕ^-1(p_-)and p'_+∈ϕ^-1(p_+). It follows that p' is outside of the (R/q-q-2Q)–neighborhood of Z. Moreover, the endpoints have projections bounded byd_Z(π_Z(p'_-),π(p'_+))>θ.Superlinear divergence of Z lets us conclude that l_X(p') > f (R/q-q-2Q), so l_Y(p) > g(d) where g(x) = 1/qf (x/q-q-2Q )-Qis a superlinear function. Suppose a finitely generated group G contains a superlinear-divergent bi-infinite quasi-geodesic γ: → G. Let H be a finitely generated group quasi-isometric to G. Then H also contains a superlinear-divergent bi-infinite quasi-geodesic. We now establish basic consequences of superlinear divergence of a geodesic. In part, superlinear-divergent geodesics are “constricting” (in the sense of <cit.> and <cit.>) up to a logarithmic error. This will be formulated more precisely in Lemma <ref>. For each superlinear function f and positive constants A, B, K, θ, q, Q, there exists a constant K_0>1 such that the following holds.Let Z be an (f, θ)–divergent subset of X with respect to an (A, B)–coarsely Lipschitz projection π_Z, and let α : [0, M] → X be a geodesic in X such that d(π_Zα(0), π_Zα(M)) ≥θandd(α(0), Z) > K_0.Then there exists t ∈ [0,M] such that d(π_Zα(0), π_Zα(t)) ≤θ+B,and eitherd(α(t), Z) ≥ K · d(α(0), Z)or d(α(t), Z) ≤1/K· d(α(0), Z). Let A,B be the coarsely Lipschitz constants of π_Z. Choose K' >1 large enough such that for all t > K',f(t)/t≥ K(K+5B+θ+1)(A+1).Let τ := inf{t ∈ [0,M]: d(π_Zα(0), π_Zα(t)) ≥θ}.By the (A, B)-coarse Lipschitzness of π_Z, we haved(π_Zα(0), π_Zα(t)) ≤θ+Bfor all t ∈ [0, τ]. We now take K_0 = K'K. For convenience, let d_t := d(α(t), Z) for each t. The desired conclusion holds if d_t≤ d_0/K = K' for some t ∈ [0, τ]; suppose not. Under this assumption, we show that d_τ > Kd_0. By superlinear-divergence of Z,l(α([0, τ]))≥ f( d_0/K)≥ K(K+5B+θ+1)(A+1) ·(d_0/K) ≥ (K+5B+θ+1)(A+1) d_0.Note also, since α is a geodesic,l(α([0,τ]))≤ d(α(0), π_Z(α(0))) + d(π_Z(α(0)), π_Z(α(τ))) + d(π_Z(α(τ)), α(τ))≤ ((A+1) d_0 + 2B) + (θ+B) + [(A+1) d_τ + 2B].Combining these, we have d_τ ≥1/A+1[(K+5B+θ+1)(A+1) d_0 -5B-(A+1)d_0 - θ]≥ K d_0 + (5B+ θ) (d_0-1/A+1)≥ K d_0,where the final inequality is due to d_0≥ K_0 = KK' > 1 > 1/(A+1). The following lemma is a technical calculation that will be used in the proof of Lemma <ref> to examine the behaviour of a sequence of points along a geodesic whose projections are making steady progress.Let π_Z : X → Z be an (A, B)–coarsely Lipschitz projection onto a subset Z of X and let K>0. Suppose x, z ∈ X, y ∈ [x, z] satistyd(π_Z(x), π_Z(y))<K, d(π_Z(y), π_Z(z))< K, andd(x, Z),d(z, Z)≤1/8(A+1) d(y, Z).Then we have d(y, Z) ≤ 2K + 16B.Suppose to the contrary that d(y, Z) > 2K + 16B. First, the assumption tells us that(A+1) d(x, Z) + 2B ≤1/8 d(y, Z) + 2B ≤1/4 d(y, Z). This forcesd(x, y) ≥ d(y, π_Z(x)) - d(x, π_Z(x)) ≥ d(y, Z) - (A+1)d(x, Z) - 2B ≥3/4 d(y, Z)and similarly d(y, z) ≥3/4 d(y, Z), which leads to d(x, z) = d(x, y) + d(y, z) ≥3/2d(y, Z). 
Meanwhile, note thatd(x, z) ≤ d(x, π_Z(x)) + d(π_Z(x), π_Z(z)) + d(π_Z(z), z) ≤1/2d(y, Z) + 2K.Hence, we have3/2d(y, Z) ≤1/2d(y, Z) + 2K,which contradicts the assumption. The following is the main lemma.Let Z be an (f, θ)–divergent subset of X. Then, for any δ >0, there exists K_1 > 0 such that the following holds. For any x, y ∈ X, if d(π_Z(x), π_Z(y)) > δ (log d(x, Z) + log d(y, Z)) + K_1, then there exist a subsegment [p_x, p_y] of [x, y] and points q_x, q_y∈ Z such that: * d(p_x, q_x), d(p_y, q_y) < K_1* d(q_x, π_Z(x)) ≤δlog d(x, Z) + K_1,* d(q_y, π_Z(y)) ≤δlog d(y, Z) + K_1,* the segment [p_x, p_y] is in the K_1–neighbourhood of Z. Roughly speaking, parts (1), (2) and (3) state that the geodesic [x,y] will enter theK_1–neighbourhood of Z exponentially quickly from both sides and part (4) states that it stays near Z in the middle(See Figure <ref>). Let (A, B) be the coarsely Lipschitz constants for π_Z. Let K' = 8(A+1) + exp(θ + B/δ), let K” = K_0(K') be as in Lemma <ref>, and letK_1 = (2A+3)(K” + 2θ + 4B) + 5B + θ + log K'.If [x, y] entirely lies in the (K” + 2θ + 4B)–neighborhood of Z, we can take p_x = x, p_y = y, q_x = π_Z(x) and q_y = π_Z(y).If not, we analyze the subsegments of [x,y] outside of the (K” + 2θ + 4B)–neighborhood of Z. Let [x', y'] be an arbitrary connected component of [x, y] ∖ N_K” + 2θ + 4B (Z) := { p ∈ [x, y]: d(p, Z) ≥ K” + 2θ + 4B}.We will take a sequence of points {x_i}_i=0,…,M on [x', y'], associated with a sequence of real numbers {r_i := d(x_i,Z)}_i=0,…,M (Figure <ref>). We construct the sequence recursively. Start by choosing x_0 := x', then recursively choose x_i+1∈ [x_i, y'] such that d(π_Z(x_i),π_Z(x_i+1)) ≤θ + B.and eitherr_i+1≥ K'r_ior r_i+1≤ r_i/K'.Such x_i+1 must exist when d(π_Z(x_i), π_Z(y')) ≥θ + B, due to Lemma <ref>. The process terminates at step M when d(π_Z(x_M), π_Z(y')) ≤θ + B or r_M≤ K”+ 2θ + 4B.We first observe that, by Lemma <ref>, for any i, we cannot simultaneously have r_i≥ K'r_i-1and r_i ≥ K'r_i+1.Hence, the only possibilities for the sequence is either: * r_i keeps decreasing, * r_i keeps increasing, or* r_i decreases at first and then keeps increasing.We will apply this observation in two cases depending on the endpoints of [x',y'].Case 1. One (or both) of the endpoints is x or y. WLOG, consider the case x' = x. We will show that these segments enter the K_1–neighbourhood of Z exponentially quickly. Then we will choose p_x to be the entrance point. We first see that the sequence will not persist until y. Choose index j such thatr_j = min_i = 0, …, M r_i.If the minimum satisfiesr_j > K”+2θ + 4B, then the sequence persists until y. In this case, the sequence decreases until the minimum, then keeps increasing until the end. It terminates whend(π_Z(x_M), π_Z(y)) ≤θ +B,and the index satisfiesM ≥1/θ + Bd(π_Z(x), π_Z(y)) - 1.Moreover,r_M/r_j·r_0/r_j≥ K'^M.Combining the three inequalities above we havelog d(y, Z) + log d(x, Z) - 2 log r_j≥ d(π_Z (x), π_Z(y)) ·log K_2/θ+B - log K'.Hence,d(π_Z(x),π_Z(y)) ≤δ(log d(x, Z) + log d(y, Z)) + log K',a contradiction. 
Hence, the sequence {r_i}_i keeps decreasing as in Figure <ref>, and it terminates when r_M≤ K” + 2θ + 4B.Choose p_x = x_M and take q_x∈ Z such that d(p_x, q_x) ≤ K” + 2θ + 4B.This choice of p_x and q_x guarantees that d(π_Z(p_x), q_x)≤ d(π_Z(p_x), π_Z(q_x)) + d(π_Z(q_x), q_x) ≤(A d(p_x, q_x) + B ) + B ≤ K_1.Moreover, we have K'^M≤ r_0/r_M,which impliesM ≤log r_0 - log r_M/log K'.So d(π_Z(p_x), π_Z(x)) ≤ (θ+ B) M ≤δlog d(x, Z), and consequentlyd(q_x, π_Z(x)) ≤δlog d(x, Z) + K_1as desired. We may apply the same argument to choose p_y ∈ [x,y]and q_y∈ Zsuch that d(p_y, q_y) ≤ K_1and d(q_y, π_Z(y)) ≤δlog d(y, Z) + K_1. Case 2. The endpoints x' and y' both belong to the closure of N_K” + 2θ + 4B(Z). These are segments between our choice of p_x and p_y. We show that they are within the K_1–neighbourhood of Z.In this case,d(x', Z) = d(y', Z) = K” + 2θ + 4B ≤ d(p, Z) (∀ p ∈ [x', y']).Observe that r_i cannot decrease as first since [x',y'] lies outside the (K”+2θ+4B)–neighbourhood of Z. But r_i also cannot keep increasing, because d(x', Z) = d(y', Z). So the process must stop at the very beginning, that is,M=0 and d(π_Z(x'), π_Z(y')) ≤θ + B. Then we have d(x', y')≤ d(x', π_Z(x')) + (θ + B) + d(π_Z(y'), y') ≤((A+1)d(x',Z)+2B ) + (θ + B) + ((A+1)d(y',Z)+2B )= 2(A+1)(K” + 2θ + 4B) + 4B + (θ + B).From this, we deduce that [x', y'] lies in the K_1–neighborhood of Z.The next lemma helps us strengthen Lemma <ref> to a statement about Hausdorff distance. Let K, M, M' be positive constants and α: [0, M] → X and β : [0, M'] → X be (q, Q)–quasi-geodesics. Suppose that α is contained in a K–neighborhood of β andd(α(0), β(m))< K,d(α(M), β(n)) < K hold for some 0 ≤ m < n ≤ M'. Then we have d_ Haus(α, β|_[m, n]) ≤ K + Q+6q^6 Q + 2Kq^5. Let us define a map h from [0, M] to [0, M']. For each t ∈ [0, M] let h(t) ∈ [0, M'] be such that d(α(t), β(h(t))) ≤ K. Without loss of generality, set s_0 := m and s_M := n. This map is well-defined, and is a (q^2, K +2qQ)–quasi-isometric embedding of [0, M] into ℝ. Indeed, note that|h(t) - h(t')|≤ q d(β(h(t)), β(h(t'))) + qQ ≤ qd(α(t), α(t')) + 2K + qQ ≤ q^2 |t-t'| +K + 2qQand|t - t'|≤ q d(α(t), α(t')) + qQ ≤ qd(β(h(t)), β(h(t'))) + 2K + qQ ≤ q^2 |h(t)-h(t')| +K + 2qQ. From the very definition, it is clear that α and β(h([0, M])) are within Hausdorff distance K. Next, as h is a QI-embedding of [0, M] into ℝ that sends 0 and M to m and n, its image h([0, M]) is 2qQ-connected and h([0, M]) is contained in[s - 6q^5Q - 2K q^4, t + 6q^5Q + 2K q^4].In particular, h([0, M]) and [m, n] are within Hausdorff distance 6q^5Q + 2K q^4. By applying β, we deduce that β(h([0, M])) and β|_[m, n] are within Hausdorff distance 6q^6 Q + 2Kq^5 + Q. Combining all these, we conclude thatd_ Haus(α, β|_[m, n]) ≤ K + Q+6q^6 Q + 2Kq^5. In the setting of Lemma <ref>, assume that Z is a (q,Q)–quasi-geodesic. Then for some constant K_2 depending on f,θ,q,Q,δ,d_ Haus([p_x,p_y],[q_x,q_y]|_Z) ≤ K_2. As another corollary of Lemma <ref>, we can replace a superlinear-divergent quasigeodesic on X with a superlinear-divergent geodesic. Let γ be a bi-infinite (f, θ)–divergent quasigeodesic on a proper space X. Then there exists a bi-infinite (f',θ')–divergent geodesic γ' such that d_Haus(γ, γ') is finite. Specifically, f'(x) = f(x-C), θ' = θ + 2C where C is the constant Hausdorff distance between γ and γ'.Let γ: → X be an (f, θ)–divergent (q, Q)–quasigeodesic on X. Let K_1 be the constant given by Lemma <ref> for Z = γ and δ = 0. 
For each sufficiently large n, we note thatd(π_γ(γ(n)), π_γ(γ(-n))) ≥ d(γ(n), γ(-n)) - 2B > 2n/q - Q - 2B > K_1.Lemma <ref> tells us that there exists a subsegment [p_-n, p_n] of [γ(-n), γ(n)] and j_-n, j_n∈ such thatd(p_-n, γ(j_-n)) ≤ K_1,d(p_n, γ(j_n)) ≤ K_1, d(γ(j_-n), γ(-n)) ≤ d(γ(j_-n), π_γ(γ(-n))) + d(π_γ(γ(-n)), γ(-n)) ≤ K_1+B,d(γ(j_n), γ(n)) ≤ d(γ(j_n), π_γ(γ(n))) + d(π_γ(γ(n)), γ(n)) ≤ K_1+B,and such that [p_-n, p_n] ⊆ N_K_1 (γ). By Lemma <ref>, [p_-n, p_n] and γ([j_-n, j_n]) are within Hausdorff distance K_1 + Q+6q^6 Q + 2Kq^5. For simplicity, let C = K_1 + Q+6q^6 Q + 2Kq^5. Note also thatj_-n < -n + q(K_1 + B) + Q < 0 < n - q(K_1 + B) - Q < j_nfor large enough n. In conclusion, [p_-n, p_n] contains a point p that is C–close to γ(0). Moreover, the distanced(γ(0), p_n) > d(γ(0), γ(j_n)) - 2K_1 - Bgrows linearly, and likewise so does d(γ(0), p_-n). Using the properness of X and Arzela-Ascoli, we conclude that the sequence {[p_-n, p_n]}_n>1 converges to a bi-infinite geodesic γ', within a K_1–neighborhood of γ. By Lemma <ref> again, we have d_ Haus(γ, γ') ≤ C.It remains to declare a coarsely Lipschitz projection π_γ' onto γ' and show that γ' is (f', θ')–divergent with respect to π_γ'. Since d_ Haus(γ,γ') ≤ C, we can define π_γ'(z) to be a point on γ' such thatd(π_γ'(z), π_γ(z)) < C. Any path p outside of the R–neighborhood of γ' is outside of the (R-C)–neighborhood of γ. Moreover, if the endpoints p_- and p_+ of p satisfy thatd(π_γ'(p_-),π_γ'(p_+)) > θ + 2C ,then by the construction of π_γ',d(π_γ(p_-),π_γ(p_+)) > θ.Superlinear divergence of γ implies that the length of p is at least f(R-2C). This concludes the proof.§.§ ConventionFrom now on, we fix a finitely generated group G with exponential growth which contains a superlinear-divergent bi-infinite geodesic γ : ℝ→ G: this is a QI-invariant property thanks to Corollary <ref> and Corollary <ref>.§ ALIGNMENT In this section, we define the alignment of sequences of (subsegments of) superlinear-divergent geodeiscs. The key lemma is Lemma <ref>, which promotes alignment between consecutive pairs to global alignment of a sequence. Given paths γ_1, …, γ_N : → G, integers m_i≤ n_i and subpaths γ_i' := γ_i([m_i, n_i]), we say that (γ_1', …, γ_N') is K–aligned if: * π_γ_i(γ_i-1') lies in γ_i((-∞, m_i + K]), and* π_γ_i(γ_i+1') lies in γ_i([n_i - K, +∞)). Note that γ_i can be a single point. We will construct linkage words using K–aligned paths, starting with the following lemma. Given a superlinear function f, positive constants θ, A, B and 0<ϵ, η < 0.1, there exists a constant K_3 = K_3(f, θ, A, B, ϵ, η) such that the following holds.For i = 1,2, let γ_i be an (f,θ)–divergent geodesic with respect to a (A,B)–coarsely Lipschitz projection π_γ_i : X →γ_i, and let γ_i' = γ_i([m_i, n_i]) be a subpath of γ_i. Let z ∈ X, and let D > K_3 be a constant such that: * (γ_1' ∪γ_2' ∪ z) ≤ D ;* |n_2 - m_2| ≥ϵlog D;* (γ_1', γ_2') is (ηϵlog D)–aligned and (γ_2', z) is (2ηϵlog D)–aligned.Then (γ_1', z) is (2ηϵlog D)–aligned. We will assume that D is much larger than the constants K_1 and K_2 that appears during the argument. For i=1,2, denote x_i = γ(m_i) and y_i = γ(n_i). Suppose for contradiction that π_γ_1(z) lies in γ_1((-∞, n_1 - 2 ηϵlog D)) as in Figure <ref>. 
This implies thatd(π_γ_1(y_2), π_γ_1(z))≥ηϵlog D > ηϵ/3( log d(y_2, γ_1) + log d(z, γ_1)) + K_1,where K_1 is the constant given in Lemma <ref> taking δ = ηϵ / 3.By Lemma <ref>, there exist a subsegment [p_1, p_2] of [z, y_2] and time parameters s, t of γ_1 such that d(p_1, γ_1(s)) < K_1, d(p_2, γ_1(t)) < K_1 andd(γ_1(s), π_γ_1(z))< ηϵ/3log d(z, γ_1) + K_1 ,d(γ_1(t), π_γ_1(y_2))< ηϵ/3log d(y_2, γ_1) + K_1. In particular, we haves< γ_1^-1 (π_γ_1(z)) +( ηϵ/3log d(y_2, γ_1) + K_1) ≤ n_1 - 5/3ηϵlog D.A similar calculation shows that t > n_1 - 4/3ηϵlog D. Now let K_2 be the constant in Corollary <ref> so that γ_1([s, t]) and [p_1, p_2] are within Hausdorff distance K_2 of each other. In particular, for p' := γ_1(n_1 - 1.5 ηϵlog D) ∈γ_1([s, t]), we have a point p ∈ [p_1, p_2] ⊆ [z, y_2] such that d(p, p') ≤ K_2.Let us now investigate the relationship between [p, y_2] and γ_2'. First, the coarse Lipschitzness of π_γ_2 tells us thatd(π_γ_2(p), π_γ_2(y_2))≥ d(π_γ_2(p'), y_2) - d(π_γ_2(p'), π_γ_2(p)) - d(y_2, π_γ_2(y_2)) ≥ d(π_γ_2(p'), y_2) - AK_2- 2B.Since π_γ_2(p') ∈π_γ_2(γ_1') is contained in γ_2( (-∞, m_2 + ηϵlog D) ), we deduce that d(π_γ_2(p), π_γ_2(y_2)) > (n_2 - m_2) - ηϵlog D - AK_2 - 2B > ηϵ/3(log d(p, γ_2)) + K_1.Again, by Lemma <ref> there exist a subsegment [p_1', p_2'] ⊆ [p,z] and time parameters s', t' of γ_2with d(p_1', γ_2(s')) < K_1, d(p_2', γ_2(t')) < K_1 andd(γ_2(s'), π_γ_2(p))< ηϵ/3qlog d(p, γ_2) + K_1,d(γ_2(t'), π_γ_2(y_2))<K_1.This means thatd(p_1', p_2')≥ d(π_γ_2(p),π_γ_2(y_2))- d(γ_2(t'), π_γ_2(y_2)) -d(γ_2(s'), π_γ_2(p)) -d(p_1', γ_2(s')) -d(p_2', γ_2(t')) ≥(n_2 - m_2) - ηϵlog D/q - Q - 1/3ηϵlog D - 4K_1≥2/3ηϵlog D.Hence, [y_2, p_2'] is longer than 2/3ηϵlog D. But on the other hand,d(p_2', y_2) ≤ d(p_2', γ_2(t')) + d(γ_2(t'), π_γ_2(y_2)) + d(y_2, π_γ_2(y_2)) ≤ 2K_1+ B.This is a contradiction for sufficiently large D.Let f be a superlinear function, θ, A, B > 0, 0< ϵ, η < 0.1 and let K_3 be the constant given in Lemma <ref>. Let x, y ∈ X, and for i=1, …, N, γ_i be an (f, θ)–divergent geodesic with respect to a (A, B)–coarse-Lipschitz projection and let γ_i' = γ_i([m_i, n_i]) be a subpath of γ_i. Let D > K_3 be a constant such that: * (γ_1' ∪…∪γ_N') ≤ D;* |n_i - m_i| ≥ϵlog D for each i, and * (x, γ_1', …, γ_N', y) is (ηϵlog D)–aligned.Then for each i, (x, γ_i', y) is (2ηϵlog D)–aligned.This follows inductively from lemma <ref>. Fixing i<j, we show that π_γ_i(γ_j') ∈γ_i((-∞, m_i+2ηϵlog D]).If i = j-1, immediately by assumption we have π_γ_i(γ_j') ∈γ_i((-∞, m_i+ηϵlog D]) ⊂γ_i((-∞, m_i+2ηϵlog D]).Now assuming π_γ_i+1(γ_j) ∈γ_i+1((-∞, m_i+1+2ηϵlog D]),since (γ _i,γ_i+1) are (ηϵlog D)–aligned, the triple (γ _i,γ_i+1, γ_j) satisfies the assumptions in lemma <ref>. We conclude that π_γ_i(γ_j) ∈γ_i((-∞, m_i+2ηϵlog D]).Applying the same argument to π_γ_j(γ_i), π_γ_j(γ_k), and π_γ_k(γ_j) shows thatπ_γ_j(γ_i)∈γ_j([n_j-2ηϵlog D, +∞)) π_γ_j(γ_k)∈γ_j((-∞, m_j+2ηϵlog D]) , and π_γ_k(γ_j)∈γ_k([n_k-2ηϵlog D,+∞)).Given a superlinear function f, positive constants θ, A, B and 0<ϵ, η < 0.1, there exists constants K_4 = K_4(f, θ, A, B, ϵ, η) and C = C(A) such that the following holds.Let α and β be (f, θ)–divergent geodesics with respect to (A, B)–coarsely Lipschitz projections. Let α' and β' be their subsegments with beginning points x_1 and x_2, respectively, such that: * D := (α'∪β') ≥ K_4;* (α') ≥ϵlog D, and* (α', x_2) and (x_1, β') are ηϵlog D–aligned.Then (α', β') is (C ηϵlog D)–aligned. Let α'= α([m_1, n_1]) and β'= β([m_2, n_2]). 
Denote x_1 = α (m_1), y_1 = α (n_1), x_2 = β (m_2), y_2 = β (n_2). Let C' = 16(A+1)+1 and C = (C')^2 + 2. We first show that π_β(α')⊂β((-∞, m_2 + C'ϵlog D])⊂β((-∞, m_2 + Cϵlog D]).Suppose to the contrary that for some point a ∈α', the projectionπ_β(a) ∈β([m_2 + C'ϵlog D , +∞)).Then we haved(π_β (x_1), π_β (a)) ≥ (C'-1) ηϵlog D≥1/16A(C'-1) ηϵ (log d(x_1, β) + log d(a,β)) + K_1,where K_1 >0 is the constant as in Lemma <ref> taking δ = 1/16A(C'-1)ηϵ. Then there exists a subsegment [p_x_1,p_a]|_α⊂ [x_1,a]|_α⊂ [x_1, y_1]|_α, and points q_x_1, q_a on β such that d(p_x_1, q_x_1), d(p_a, q_a)< K_1d(q_x_1, π_β(x_1))≤(1/16A(C'-1)ηϵ) log d(x_1, β) + K_1d(q_a, π_β(a))≤(1/16A(C'-1)ηϵ) log d(a, β) + K_1.Then by Corollary <ref>, there is a point p'_x_1∈ [p_x_1,p_a]|_α close to x_2. The point p'_x_1 is chosen to be p_x_1 if q_x_1∈β((m_2,∞)), or the point where the Hausdorff distance K_2 is attained if q_x_1∈β((-∞, m_2]). The distance is bounded byd(x_2,p'_x_1) ≤max((1/16A(C'-1)ηϵ) log d(x_1, β) + 2K_1,K_2 )≤(1/16A(C'-1) )ηϵlog D + O(1)= (A+1/A)ηϵlog D + O(1),where K_2 is the constant in Corollary <ref>, and O(1) is the implied constant. Projecting to α gives thatd(π_α(x_2),p'_x_1) ≤ d(π_α(x_2),π_α(p'_x_1)) + B≤ (A+1)qηϵlog D + O(1).On the other hand, since (α', x_2) is (ηϵlog D)–aligned,d(π_α(x_2),p'_x_1)≥ d(y_1,p'_x_1) - ηϵlog D ≥ d(p_a,p'_x_1) - ηϵlog D≥ d(x_2,π_β(a)) - d(x_2,p'_x_1) - d(π_β(a), p_a)- ηϵlog D≥( 1/q C'ηϵlog D ) - 2 ( C'-1/16Aηϵlog D )- ηϵlog D - O(1)≥(14(A+1)-1 ) ηϵlog D - O(1)contradicting the previous inequality when D is sufficiently large.We now show that π_α(β')⊂α((n_1 - Cϵlog D,∞)).Suppose the contrary that for some point b ∈β' the projectionπ_α(b)∈α((-∞, n_1 - Cϵlog D)).We will discuss in two cases. Ifπ_α(b)∈α((m_1, n_1 - Cϵlog D)) ⊂α',then the previous calculation shows that π_β(π_α(b)) ∈β ((-∞, m_2+C'ηϵlog D]).This shows that (π_α(b),[x_2,b]|_β, ) and (b,[π_α(b), y_1]|α) are (C'ηϵlog D)–aligned. Moreover, ([x_2,b]|_β∪ [π_α(b), y_1]|_α) < D. So the exact same calculation as before shows that π_α ([x_2,b]|_β)⊂α((- ∞, π_α(b)+C'^2 ϵlog D )) ⊂α((- ∞, n_1-2 ϵlog D) ).This contradicts that π_α (x_2) ∈α((n_1 - ηϵlog D , ∞)).The remainder case is when π_α(b) ∈α((-∞, m_1)). We will show that this is impossible assuming η < min(1/q+2q^2 , A+2/A+q+2) and α' is long. In this case,d(π_α(b),π_α(x_2))≥1/q(1-η)ϵlog D≥1/(2+A)ηϵlog d(b,α) + log d(x_2,α) - K_1,where K_1 is the constant in Lemma <ref> choosing δ = 1/(2+A)ηϵ. Then by Lemma <ref>, there are points p_x_2,p_b ∈ [x_2,b] such thatd(p_x_2, y_1)≤1/2+Aηϵlog D +K_1, andd(p_b, x_1)≤1/2+Aηϵlog D +K_1.Thend(π_β (x_1), p_b) ≤A/2+Aηϵlog D +O(1).But on the other hand, π_β(x_1) ∈β((-∞,m_2 + ηϵlog D]) implies thatd(π_β (x_1), p_b) ≥ d(x_2,p_b) - ηϵlog D ≥ d(p_x_2,p_b) - ηϵlog D ≥ d(x_1,y_1) - d(x_1,p_b) - d(y_1,p_x_2) ηϵlog D ≥ϵlog D - 2/2+Aηϵlog D - ηϵlog D - O(1)> A/2+Aηϵlog n +O(1).The last step is due to η < 1/3. This is a contradiction. We now construct linkage words. These play the role of Schottky sets in <cit.> We use the notation ℬ(g, R) := {h ∈ G : d(g, h) ≤ R} to mean the ball of radius R around g, and 𝒮(g, R) := {h ∈ G : d(g, h) = R} to mean the sphere of radius R around g. Let γ: → G be a (f, θ)–divergent quasi-geodesic and let ϵ>0. For K sufficiently large, the following holds. 
For each m∈, there exists a subset S ⊆ G with 100 elements such that for each pair of distinct elements a, b ∈ S, we have * |a|, |b| = K and |b a^-1|, |a^-1 b|≥ 0.5K;* π_γ(γ(0) a^-1) and π_γ(γ(0)a^-1 b) ∈ℬ(γ(0), ϵ K), and * π_γ(γ(m) a) and π_γ(γ(m) ab^-1) ∈ℬ(γ(m), ϵ K).Let K_0 = K_0(0.1ϵ, f) be as in Lemma <ref>.Let λ > 1 be the growth rate of G. For n large enough, we haveλ^n≤#𝒮(id, n) ≤λ^(1+0.1ϵ)n.We consider the sets O_1:= {g ∈𝒮(id,K): d(γ(0), π_γ(γ(0)g)) ≥ 0.5 ϵ K}, O_2 := {g ∈𝒮(id,K): d(γ(m), π_γ(γ(m)g)) ≥ 0.5ϵ K }, We will argue that both of these sets are much smaller than 𝒮(id,K), and use a certain subset of 𝒮(id,K) ∖ (O_1∪ O_2) to construct our set S. To show that O_1,O_2 are relatively small, let us now consider a word a with |a| = K and d(π_γ(γ(0)a^-1), γ(0)) ≥ 0.5ϵ K. Then sinced(π_γ(γ(0) a^-1), π_γ(γ(0))) ≥ 0.5 ϵ K - B ≥ K_0 + 0.1ϵlog B + 0.1ϵlog K,Lemma <ref> asserts that there exist p ∈ [γ(0), γ(0)a^-1] and q ∈γ such that d(p, q) ≤ K_2 and d(p, π_γ(γ(0)a^-1)) ≤log |a| + K_1. In this case, we haved(p, γ(0) a^-1)= d(γ(0), γ(0) a^-1) - d(γ(0), p) ≤ |a| -d(γ(0), π_γ(γ(0) a^-1)) + d(p, π_γ(γ(0) a^-1)) ≤ K - 0.5ϵ K +log K + K_0.In summary,a^-1 = (γ(0)^-1q)·(q^-1 p) · (p^-1γ(0)a^-1)where, as in figure <ref>, * γ(0)^-1q = γ(0)^-1γ(k) for some k between -2qK - Q and 2qK+Q;* |q^-1p| ≤ K_0, and* |p^-1γ(0)a^-1| ≤ (1 - 0.5ϵ) K+ log(1.5K) + K_0. For large enough K, the number of such elements is at most5QK ·λ^(1+0.1ϵ) (1-0.4ϵ)K≤ 5QK λ^(1-0.3 ϵ)K .Hence, the cardinality of𝒜 := { (g_1, …, g_100) ∈𝒮(id,K)^100 : g_i ∈ O_1for some i ∈ [1,100]}is at most 100 · (# S_0)^99· 3QK λ^ (1+0.3) ϵ K—we pick some index i which satisfies the given condition and draw the rest of the elements from 𝒮(id,K). This is exponentially small compared to (#𝒮(id,K))^100. By a similar logic, ℬ := { (g_1, …, g_100) ∈𝒮(id,K)^100 : g_i ∈ O_2 for some i ∈ [1,100]}is exponentially small compared to 𝒮(id,K)^100.Finally, we observe that for each h ∈𝒮(id,K), there are at most #ℬ(h,0.5K)≤λ^(1+ϵ)0.5Kelements g such that |g^-1h| ≥ 0.5K. Hence, we deduce that the cardinality of 𝒞 := { (g_1, …, g_100) ∈ S_0^100 : d(g_i,g_j) ≤ 0.5K for somei ≠ j}is at most 100 · 99 · 2 ·λ^0.6K· (#𝒮(id,K)^99, which is exponentially small compared to (#𝒮(id,K))^100. Given these estimates, we conclude that for sufficiently large K,𝒮(id,K)^100∖( 𝒜∪ℬ∪𝒞)is nonempty.Letting (g_1, …, g_100) be one of its elements, we claim that the choice S = {g_i, i = 1,...,100} satisfies the conditions of the lemma.Note in particular that g_i^-1g_j≠ id since its norm is at least 0.5K. We observe that: * g_i's are all distinct;* |g_i| = K for all 1 ≤ i≤ 100;* |g_i g_j^-1|, |g_i^-1 g_j| ≥ 0.5K for each i≠ j;* π_γ(γ(0) g_i^-1) ∈ℬ(γ(0), 0.5ϵ K) and π_γ(γ(m) g_i) ∈ℬ(γ(m), 0.5ϵ K) for each 1 ≤ i ≤ 100.It remains to show that d(γ(0), π_γ(γ(0) g_i^-1 g_j)) < ϵ K for each i ≠ j. Suppose not; then for large enough K we haved(π_γ(γ(0) g_i^-1), π_γ(γ(0) g_i^-1g_j))≥ϵ K - 0.5 ϵ K > 2 ϵlog K> ϵlog |g_i| + ϵlog (|g_i|+|g_j|) + K_0≥ϵlog d( γ, γ(0) g_i^-1) + ϵlog d( γ, γ(0) g_i^-1 g_j) + K_0By Lemma <ref>, there exists p ∈ [γ(0) g_i^-1, γ(0) g_i^-1 g_j] such thatd( p, π_γ(γ(0) g_i^-1)) < ϵlog d(γ, γ(0) g_i^-1) + 2K_2≤ϵlog K + 2K_0,and d(γ(0), p) < 0.6 ϵ K. Here, we haved(p, γ(0) g_i^-1) ≥ d(γ(0), γ(0) g_i^-1) - d(γ(0), p) ≥ K - 0.6 ϵ K,andd(γ(0), γ(0) g_i^-1 g_j)≤ d(γ(0), p) + d(p, γ(0) g_i^-1(g_j) ≤ 0.6 ϵ K + [d(γ(0) g_i^-1, γ(0) g_i^-1 g_j) - d(γ(0) g_i^-1, p)] ≤ 0.6 ϵ K + 0.6 ϵ K.But this contradicts |g_i^-1 g_j| ≥ 0.5 K. 
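To get a sense of the scale of the exceptional set O_1, it may help to look at the tree case, where everything can be sampled directly. The following Python sketch is only an illustration (the choice of the free group F_2 = ⟨ x, y ⟩, of the axis of x as γ, and of the parameters K, ϵ and the number of trials are ours, not part of the construction): it draws uniform elements g of the sphere 𝒮(id, K) and estimates the proportion with d(γ(0), π_γ(g)) ≥ 0.5 ϵ K.

```python
import random

LETTERS = "xXyY"                      # x, y and their inverses X, Y
INV = {"x": "X", "X": "x", "y": "Y", "Y": "y"}

def sample_sphere_word(K):
    """Uniformly sample a freely reduced word of length K in F_2 = <x, y>."""
    word = [random.choice(LETTERS)]
    for _ in range(K - 1):
        word.append(random.choice([c for c in LETTERS if c != INV[word[-1]]]))
    return "".join(word)

def projection_length(word):
    """Distance from id to the closest-point projection of `word` onto the axis {x^n}.
    In the tree this is just the length of the leading run of x's (or of X's)."""
    if not word or word[0] not in "xX":
        return 0
    run = 1
    while run < len(word) and word[run] == word[0]:
        run += 1
    return run

K, eps, trials = 60, 0.2, 100_000
threshold = 0.5 * eps * K
bad = sum(projection_length(sample_sphere_word(K)) >= threshold for _ in range(trials))
print(f"estimated fraction of the sphere lying in O_1: {bad / trials:.2e}")
```

In the tree this proportion can be computed exactly: it equals (1/2)·(1/3)^(m-1) with m = 0.5 ϵ K, which is about 2·10^-3 for K = 60 and ϵ = 0.2. The point, as in the counting above, is that this fraction decays exponentially in K while the sphere itself grows exponentially.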
Given a translate of γ, we can naturally define the projection π_gγ(x) := gπ_γ(g^-1x). Since G acts by isometries, this is an (A,B)–coarse Lipschitz projection so long as π_γ is as well. The following lemma describes projections between translates of superlinear-divergent quasi-geodesics.Let α and β be (f, θ)–divergent quasi-geodesics and let 0< ϵ < 1/10(A+1). Then there exists K_6 > 0 such that the following holds. Suppose a ∈ G and i ∈ satisfy that * |a| > K_6;* π_β(β(0) a^-1) ∈ℬ(β(0), ϵ |a|); and * π_α(α(i) a) ∈ℬ(α(i), ϵ |a|).Then for each j ∈, π_α(α(i)a β(0)^-1β(j)) is within distance ϵlog |j| + 2|a| from α(i). For simplicity, we denote and parameterize the translate of ββ' (j) = α(i) a β(0)^-1β (j).Let γ: [0, M] → G be a geodesic connecting α(i) and β'(j), see Figure <ref>. The projection of α(i) onto β' is near α(i) a:d(α(i) a, π_β' (α(i)) ) = d (β(0), π_β(β(0)a^-1) ) ≤ϵ |a|. Then there exists t ∈ [0,M] such that γ(t) ∈ℬ(α(i) a, 2 ϵ |a|). If d(β'(j),α(i)a) < 2 ϵ |a|, simply take t=M so that γ(t) = β'(j). And ifd(β'(j),α(i)a) ≥ 2 ϵ |a|, we obtain such t by applying Lemma <ref>. Noticed(π_β'(α(i)a),π_β'(β'(j))) ≥d(α(i)a,β'(j)) - d(α(i)a,π_β'(α(i)a)) - d(π_β'(β'(j)),β'(j))) ≥2 ϵ |a| - ϵ |a| - B ≥ ϵ (log d(β'(j),β') + log d(α(i),β')) + K_1,where K_1 is the constant from Lemma <ref> taking δ = ϵ. The last inequality holds when |a| is sufficiently large. Then Lemma <ref> implies that for some t,d(γ(t), α(i)a) ≤ d(γ (t), π_β'(α(i))) + d(π_β'(α(i)),α(i)a) ≤ (ϵlog |a| + K_1) + ϵ |a| ≤ 2 ϵ |a|for sufficiently large |a|.Note thatt = d(γ(0),γ(t)) ≤ d(α(i), α(i) a) + 2ϵ |a| ≤3/2 |a|.Now if d(π_α(γ(0)), π_α(γ(M))) > ϵlog (|j|+|a|)≥ϵ (log d(γ(0),α)+log d(γ(M), α)) - K_1,where K_1 is the constant from <ref> taking δ = ϵ. Then there exists τ∈ [0, M] such that d(γ(τ), π_α(γ(M))) ≤ϵlog (|j| + |a|) + K_1and γ|_[0, τ] is contained in theK_1–neighborhood of α. Notice that τ cannot be larger than t, otherwise γ(t) is K_1–close to α; let p ∈α be the point such that d(γ(t), p) ≤ K_1. Then when |a| is sufficiently large,d(α(i), π_α(α(i) a))≥ d(α(i),p) - d(π_α(α(i) a), p) ≥ d(α(i) a, α(i))) - d(α(i)a,q) - (A d(p, α(i) a) + 2B) ≥ |a| - (A+1) (2 ϵ |a|+ K_1) - 2B > ϵ |a|.This is a contradiction, so we must have τ≤ t. We then haved(π_α (α(i) a β(0)^-1β(j)), α(i)) ≤ d(π_α(γ(M)),γ(τ)) + d(γ(τ), γ(0)) ≤ϵlog (|j| + |a|) + K_1 + 3/2 |a| ≤ϵlog |j| + 2|a|.§ PROBABILISTIC PARTIn this section, fixing a small enough ϵ>0, we study the situation where a random path (id =: Z_0, Z_1, …, Z_n) is seen by a superlinear-divergent direction, or to be precise, where (Z_i, …, Z_i+ϵlog n) is (a part of) an (f, θ)–divergent quasigeodesic and(id, (Z_i, Z_i+1, …, Z_i+ϵlog n), Z_n)is ϵ^2–aligned for some i ≪ n. We will prove in Corollary <ref> and Lemma <ref> that this happens for an overwhelming probability.To make an analogy, consider a random path (id =: Z_0, Z_1, …, Z_n) arising from a simple random walk on the Cayley graph of a free group F_2≃⟨ a, b ⟩. Here, we similarly expect that Z_n = id is not desirable and (id, (Z_i, Z_i+1), Z_n) is aligned for some i ≪ n. In fact, the alignment happens for all but exponentially decaying probability. A classical argument using martingales can be described as follows: * construct a `score' that marks the progress made till step i;* prove that at each step i, it is more probable to earn a score rather than losing one.* sum up the difference at each step and use concentration inequalities to deduce an exponential bound.Here, the score at step i should be determined by information up to time i. 
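In the tree case the score can simply be the word length, and the drift described in the second bullet is easy to observe numerically. The short Python sketch below is only meant to make the heuristic concrete; the choice of F_2, of |Z_i| as the score, and of the parameters n and trials are our own and play no role in the sequel.

```python
import random

LETTERS = "xXyY"
INV = {"x": "X", "X": "x", "y": "Y", "Y": "y"}

def reduced_length_after(n):
    """Run n steps of the simple random walk on F_2 and return |Z_n|,
    maintaining the freely reduced word as a stack."""
    word = []
    for _ in range(n):
        g = random.choice(LETTERS)
        if word and word[-1] == INV[g]:
            word.pop()      # the new letter cancels: the score drops by 1
        else:
            word.append(g)  # the new letter survives: the score grows by 1
    return len(word)

n, trials = 400, 2000
final = [reduced_length_after(n) for _ in range(trials)]
speed = sum(final) / (trials * n)
slow = sum(1 for length in final if length <= n / 4) / trials
print(f"empirical speed |Z_n|/n: {speed:.3f}  (tree value 1/2)")
print(f"fraction of samples with |Z_n| <= n/4: {slow:.4f}")
```

Away from the identity this score increases with probability 3/4 and decreases with probability 1/4, so it drifts at speed 1/2, and a standard concentration bound makes the event {|Z_n| ≤ n/4} exponentially unlikely.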
Moreover, when the score grows, the recorded local progresses should also pile up. To realize these features on a general Cayley graph other than tree-like ones, we employ the concatenation lemma proven in Section <ref>.§.§ Combinatorial modelIn the sequel, let γ be an (f, θ)–divergent geodesic on G with γ(0) = id and ϵ >0 be a small enough constant. Let us fix some constants:* K_3 = K_3(f, θ, q, Q, A, B, ϵ^3, ϵ) be as in Lemma <ref>;* K is larger than K_5 = K_5(1/10qϵ^4) and the twice of K_6 = K_6(0.1ϵ^4) given by Lemma <ref> and <ref>, respectively;* N_0 is a threshold such thatϵ^4 n > 10( K + K_3+ log n)for all n > N_0.After multiple applications of our alignment lemmas, the power on ϵ will weaken, which is why we start with ϵ^4.Throughout this section, we will consider the following combinatorial model. Fix w_0, w_1, ⋯∈ G. Now given a sequence of 3–tuples s_i = (a_i, b_i, c_i) ∈ S^3, we consider a word of the formW_k = w_0· a_1γ(ϵlog n )b_1γ(ϵlog n)c_1·w_1⋯ a_kγ(ϵlog n ) b_kγ(ϵlog n) c_k· w_k.To ease the notation, let us also defineV_k = W_k-1 a_k,U_k = W_k-1 a_kγ(ϵlog n) b_kWe also denote s = (a_1, b_1, c_1, ..., a_k, b_k, c_k). We will argue that for most choices of s ∈ S^3k, a certain subsequence of (id,V_1γ|_[0, ϵlog n], U_1γ|_[0, ϵlog n]…, U_k-1γ|_[0, ϵlog n], W_k)is well-aligned. In section <ref>, we will derive from this a deviation inequality (Lemma <ref>), and deduce a central limit theorem. To show well-alignment, we argue analogously to <cit.>, by keeping track of times in which the random walk may travel along different translates of γ|_[0, ϵlog n], and arguing that at most of these times, most directions of the random walk do not backtrack. To implement we need the following lemma <ref>. We remark that for the rest of the paper, whenever we discuss alignment of a sequence of points and geodesic segments, the only segments used are translates of γ|_[0, ϵlog n].Let g ∈ G and let n be an integer greater than N_0 and |g|. Let S be the subset of 𝒮(id, K) described in Lemma <ref> for m=ϵlog n. Then for any distinct a,b ∈ S, at least one of (γ|_[0, ϵlog n], γ(ϵlog n) ag) and(γ|_[0, ϵlog n], γ(ϵlog n) bg )is ϵ^4log n–aligned. Likewise, at least one of (a^-1g, γ|_[0, ϵlog n]), and(b^-1g, γ|_[0, ϵlog n])is ϵ^4log n–aligned.We prove the first claim only. Let t ∈ be such that γ(t) = π_γ(γ(ϵlog n) a g). If t is greater than ϵ(1- ϵ^3 ) log n, we deduce that (γ|_[0, ϵlog n], γ(ϵlog n) a g) is ϵ^4log n-aligned as desired. Let us deal with the remaining case: we assumet ∈(-∞,ϵlog n - ϵ^4log n ].Consider two translates of γ: γ_1 =a^-1γ(ϵlog n)^-1γandγ_2 = b^-1γ(ϵlog n)^-1γ, and their subpaths γ_1' := γ_1|_[t, ϵlog n]andγ_2' := γ_2|_[0, ϵlog n]. Let γ̅_2' be the reversal of γ_2'.By the definition of t, (γ(ϵlog n) a g, γ|_[t, ϵlog n]) is automatically 0–aligned, or equivalently, (g, γ_1') is 0–aligned. Next, since a and b are chosen from S, the subset of 𝒮(id, K) as described in Lemma <ref>, we have thatπ_γ(γ(ϵlog n) a b^-1) is withinℬ(γ(ϵlog n ), 0.1ϵ^4 |ab^-1|) and π_γ(γ(ϵlog n) b a^-1) is withinℬ(γ(ϵlog n), 0.1ϵ^4 |ab^-1|). Moreover, we have |ba^-1| ≥ 0.5K ≥ K_6.By plugging in α = γ and β = γ̅ (i.e., β(t) = γ(ϵlog n - t) for each t ∈), we can apply Lemma <ref>. The required assumptions are π_β(β(0) (ba^-1)^-1) = π_γ(γ(ϵlog n) a b^-1) ∈ℬ(γ(ϵlog n), 0.1ϵ^4 |ab^-1|) = ℬ(β(0), 0.1ϵ^4|ab^-1|), andπ_α(α(ϵlog n), ba^-1) = π_γ(γ(ϵlog n), ba^-1) ∈ℬ(γ(ϵlog n), 0.1 ϵ^4|ab^-1|).As a result, for each j ∈ we haved (π_γ(γ(ϵlog n) ab^-1γ(ϵlog n)^-1γ(j)),γ(ϵlog n) ) ≤ 0.1ϵ^4log |j-ϵlog n| + 2|ab^-1|. 
In other words, we haveπ_γ_1(γ_2') ∈γ_1( (ϵlog n - 0.1ϵ^4log (ϵlog n) - 2K, +∞)). Similarly we deduce thatπ_γ_2(γ_1')∈γ_2( (ϵlog n -0.1ϵ^4log (ϵlog n) -2K, +∞)). We conclude that (γ_1', γ̅_2') is (0.1ϵ^4log (ϵlog n) + 2K)–aligned.We now let D = |g| + 2ϵlog n + 2K +K_3; note thatD > (g^-1∪γ_1' ∪γ_2').Moreover, the lengths of γ_1' and γ_2' are at least ϵ^4log n and we haveϵ^4log n ≥ϵ^3log D.Finally, (g, γ_1', γ̅_2') is (0.1ϵ^4log m + 2K)–aligned, hence 0.2 ϵ^4log D–aligned. Lemma <ref> now tells us that (g^-1, γ̅_2') is ϵ^4log D–aligned. This implies that (γ|_[0, m], γ(m) b g) is ϵ^4log n–aligned as desired. Following Boulanger-Mathieu-Sert-Sisto <cit.> and Gouëzel <cit.>, we define the set of pivotal times P_k(s) inductively. We will suppress the notation P_k := P_k(s) when it is unambiguous, and the remaining notation follows from the beginning of this section. First set P_0 = ∅ and z_0 = id. Given P_k-1⊆{1, …, k-1} and z_k-1∈ G, P_k and z_k are determined by the following criteria.* When (z_k-1,V_kγ|_[0, ϵlog n], U_kγ|_[0, ϵlog n], W_k) is ϵ^3log n–aligned, we set P_k = P_k-1∪{k} and z_k = U_k.* Otherwise, we find the maximal index m ∈ P_k-1 such that (V_mγ|_[0. ϵlog n], W_k) is ϵ^3log n–aligned and let P_k = P_k-1∩{1, …, m-1} (i.e., we gather all pivotal times in P_k-1 smaller than m) and z_k = V_m. If such an m does not exist, then we set P_k = ∅ and z_k = id. Given input w_0, w_1, …, w_k∈ G and s ∈ S^3k, this algorithm outputs a subset P_k(s) of {1, …, k}. Our first lemma tells us that P_k(s) effectively records the alignment.The following holds for all n > N_0.Let P_k = {i(1) < … < i(M)} and suppose that ϵlog( |w_0| + ⋯ + |w_k| + kϵlog n ) ≤log n. Then there exist g_1, …, g_N= z_k such that (V_i(1), U_i(1), …, V_i(M), U_i(M)) is a subsequence of (g_1, …, g_N) and (id,g_1γ|_[0, ϵlog n], …, g_Nγ|_[0, ϵlog n], W_k)is ϵ^2log n–aligned. We induct on k. If we added a pivot, P_k = P_k-1∪{k}, there are two cases:* P_k-1= ∅. Then (id, V_kγ|_[0, ϵlog n], U_kγ|_[0, ϵlog n], W_k) is (ϵ^3log n)–aligned, with z_k = U_k, as desired.* P_k-1= {i(1) < … < i(M-1)} is nonempty. Then there exist g_1, …, g_N such that (V_i(1), …, V_i(M-1)) is a subsequence of (g_1, …, g_N), g_N = z_k-1 and (id,g_1γ|_[0, ϵlog n], …, g_Nγ|_[0, ϵlog n], W_k-1)is ϵ^2log n–aligned. Moreover,(z_k-1, V_kγ|_[0, ϵlog n], U_kγ|_[0, ϵlog n], W_k)is (ϵ^3log n)–aligned. Here, since (z_k-1γ|_[0, ϵlog n], W_k-1) is ϵ^3log n–aligned, (z_k-1γ|_[0, ϵlog n], W_k-1 a_k) = (z_k-1γ|_[0, ϵlog n], V_k)is also (ϵ^3log n + AK+B)–aligned. Now Lemma <ref> asserts that for large enough n, (z_k-1γ|_[0, ϵlog n], V_kγ|_[0, ϵlog n]) is ϵ^2log n–aligned. As a result,(id,g_1γ|_[0, ϵlog n], …, g_Nγ|_[0, ϵlog n],V_kγ|_[0, ϵlog n],U_kγ|_[0, ϵlog n], W_k)is ϵ^2log n–aligned, with z_k = U_k. Now suppose we backtracked: P_k = P_k-1∩{1, …, m-1} for some m ∈ P_k-1.Letting M = #P_k-1, so that #P_k = M+1, our induction hypothesis tells us that there exist g_1, …, g_N such that (V_i(1),U_i(1), …, V_i(M+1), U_i(M+1)) is a subsequence of (g_1, …, g_N) and (id,g_1γ|_[0, ϵlog n], …, g_Nγ|_[0, ϵlog n], W_k-1)is ϵ^2log n–aligned. Moreover, we have that (V_i(M+1)γ|_[0, ϵlog n], W_k) is ϵ^3log n–aligned by the criterion. It follows that(id, g_1γ|_[0, ϵlog n], …, V_i(M+1)γ|_[0, ϵlog n], W_k)is ϵ^2log n–aligned, with z_k = V_m = V_i(M+1), as desired. Next, we haveLet us fix a_1, b_1, c_1, …, a_k, b_k, c_k and draw a_k+1, b_k+1, c_k+1 in S^3 according to the uniform measure. For n ∈ℕ sufficiently large, the probability that # P_k+1 = # P_k + 1 is at least 9/10. 
We need to choose a_k+1, b_k+1, c_k+1 in S^3 such that (z_k,V_k+1γ|_[0, ϵlog n], U_k+1γ|_[0, ϵlog n], W_k+1) is ϵ^3log n–aligned. By Proposition <ref>, there are at least 99 choices of a_k+1 such that (z_k,V_k+1γ|_[0, ϵlog n])is ϵ^3log n–aligned.Likewise, there are at least 98 choices of b_k+1 such that both (V_k,U_k+1γ|_[0, ϵlog n]) and (V_k+1γ|_[0, ϵlog n], U_k+1)are ϵ^4 log n–aligned. From lemma <ref>, for sufficiently large n, this tells us there are at least 98 choices of b_k+1 such that (V_k+1γ|_[0, ϵlog n], U_k+1γ|_[0, ϵlog n]) is ϵ^4 log n–aligned. Finally, there are at least 99 choices of c_k+1 such that (U_k+1γ|_[0, ϵlog n], W_k+1) is ϵ^3 log n–aligned.We are done as 99/100·98/100·99/100 > 9/10. Given a sequence s =( a_i, b_i, c_i)_i=1^k, we say that another sequence s' = (a_i', b_i', c_i')_i=1^k is pivoted from s if they have the same pivotal times, (a_l, c_l) = (a_l', c_l') for all l=1, …, k, and b_l = b_l' for all l except for l ∈ P_k(s). We observe that being pivoted is an equivalence relation. Given s =( a_i, b_i, c_i)_i=1^k and a pivotal time ℓ∈ P_k(s), construct a new sequence s' by replacing b_ℓ with another b'_ℓ∈ S such that (z_ℓ-1,V_ℓγ|_[0, ϵlog n], U_ℓγ|_[0, ϵlog n], W_ℓ)is ϵ^3log n–aligned. Then s' is pivoted from s. We need to show that both sequences s and s' have the same set of pivotal times. Before time ℓ, the sequences are identical, so that P_j(s) = P_j(s') for j < ℓ. By our choice of b'_ℓ, we know that the time ℓ is added as a pivot, and so z'_ℓ = U'_ℓ. Now we induct on j > ℓ: suppose that all pivotal times in P_j-1(s) are still in P_j-1(s'). To determine P_j(s), either we added a new pivotal time j or we backtracked. In the former case, we have that (z_j-1,V_jγ|_[0, ϵlog n], U_jγ|_[0, ϵlog n], W_j) is ϵ^3 log n–aligned. Since G acts on itself by isometries, this happens if and only if the sequence(z'_ℓ(z_ℓ^-1)z_j-1, z'_ℓ(z_ℓ^-1)V_jγ|_[0, ϵlog n], z'_ℓ(z_ℓ^-1)U_jγ|_[0, ϵlog n], z'_ℓ(z_ℓ^-1)W_j)is ϵ^3 log n–aligned. But this is the same as requiring that (z'_j-1,V'_jγ|_[0, ϵlog n], U'_jγ|_[0, ϵlog n], W'_j)is ϵ^3 log n–aligned, so that j ∈ P_j(s'). In the latter case, we found the maximum M such that (V_Mγ|_[0. ϵlog n], W_k) is ϵ^3log n–aligned. Since ℓ∈ P_k(s), we know that M > ℓ. Hence this is the same as requiring that (z'_ℓ(z_ℓ^-1)V_Mγ|_[0. ϵlog n], z'_ℓ(z_ℓ^-1)W_k) = (V'_Mγ|_[0. ϵlog n], W'_k) is ϵ^3 log n–aligned. Therefore j ∈ P_k(s'). Now fixing w_i's, we regard W_k as a random variable depending on the choice of (a_1, b_1, c_1, …, a_k, b_k, c_k),which are distributed according to the uniform measure on S^3k. Fixing a choice s = (a_1, …, c_k), let ℰ_k(s) be the set of choices s' that are pivoted from s. Since being pivoted is an equivalence relation, the collection of ℰ_k(s)'s partitions the space of sequences S^3k. We claim that most of these equivalence class are large: at pivotal times ℓ∈ P_k, one can replace b_ℓ with one of many other b'_ℓ's while remaining pivoted. Let s = (a_1, b_1, c_1, …, a_k, b_k, c_k). We condition on ℰ_k(s) and we draw (a_k+1, b_k+1, c_k+1) according to the uniform measure on S^3. Then for all j ≥ 0,(#P_k+1(s', s_k+1) < #P_k(s') - j| (s', s_k+1)∈ℰ_k(s) × S^3) ≤(1/10)^j+1.We remark that the conditional measure (·| ℰ_k(s)× S^3) on S^3(k+1) is the same as the uniform measure on ℰ_k(s) × S^3⊂ S^3(k+1), because (·) is the uniform measure on a finite set. We induct on j≥ 0. The j=0 case is lemma <ref>. We prove it for j=1. Suppose that we made some choice of s_k+1 := (a_k+1, b_k+1, c_k+1) that lead to backtracking. 
We must show that for such an s_k+1,
(#P_k+1(s', s_k+1) < #P_k(s') - 1 | s' ∈ℰ_k(s)) ≤ 1/10.
To this end, we examine the final pivot s_ℓ. By Lemma <ref>, we can replace b_ℓ with any distinct b'_ℓ∈ S such that (z_ℓ-1, V_ℓγ|_[0, ϵlog n], U_ℓγ|_[0, ϵlog n], W_ℓ) is ϵ^4log n–aligned. There are at least 98 choices of such a b'_ℓ, by Proposition <ref>. Likewise, there are at least 98 choices of b'_ℓ≠ b_ℓ such that (U_ℓγ|_[0, ϵlog n], W_k) is ϵ^4 log n–aligned. From Lemma <ref>, we know that (V_ℓγ|_[0, ϵlog n], W_k) is ϵ^3 log n–aligned. For this choice of s', we have P_k+1(s') = P_k(s') ∩{0, ..., ℓ - 1}. In particular, #P_k+1 = #P_k - 1. Hence
(#P_k+1 < #P_k - 1 | ℰ_k(s), s_k+1) ≤ (4/100) < (1/10).
To handle the induction step for j ≥ 2, the same argument works, except we condition not only on s_k+1 but also on the final j pivotal increments which resulted in backtracking.

( #P_k≤ k/2 ) < (1/10)^k.

§.§ Random walks

Recall that G contains an (f, θ)–divergent (q,Q)–quasi-geodesic γ : → G with γ(0) = id. Let μ be a probability measure on G whose support generates G as a semigroup. Passing to a convolution power if necessary, assume that μ(a) > 0 for all a in our finite generating set A ⊂ G. Let (Z_n)_n≥1 be the random walk generated by μ, and let α∈ (0,1). We can define
p = min{μ(a) : a ∈ A}, ϵ = α/(100 log(1/p)),
so that p^ϵlog n = n^-α/100. Then for any path η of length 100 ϵlog n and any k ∈, we have
( (g_k+1, …, g_k+100 ϵlog n) = η ) ≥ n^-α.
Also recall that for any three points o, x, y ∈ G we can define the Gromov product, given by
(x, y)_o = 1/2(d(o,x) + d(o,y) - d(x,y) ).
We now have:

For any 0 < α < 1, there exists K > 0 such that for each x ∈ℬ(id, 2n) we have
[(x, Z_n)_id≥ n^3α] ≤ K e^-n^α/K.

First, we would like to find a nice decomposition of our random walk, which will allow us to analyze the sample paths using our combinatorial model in Section <ref>. Let λ_i be i.i.d. random variables distributed according to the uniform measure on the subset S' ⊂ G^5 defined by
S' := {(a, γ', b, γ', c) : a, b, c ∈ S, γ' = γ(ϵlog n)}.
Then the evaluation λ_i = a·γ'· b·γ'· c is distributed according to the measure μ_S * γ' * μ_S * γ' * μ_S, where μ_S is uniform over S. Let N = 3K + 2ϵlog n. By our choice of p, for each a, b, c ∈ S we have μ^*N(aγ' b γ' c) ≥ p^N. Then we can decompose
μ^*N = 10^6 p^N (μ_S * γ' * μ_S * γ' * μ_S) + (1 - 10^6 p^N) ν,
for some probability measure ν. Now we consider the following coin-toss model. Let ρ_i be independent 0-1 valued random variables, each with probability 10^6· p^N of being equal to 1. Also let ξ_i be i.i.d. distributed according to ν. We set g_i = λ_i if ρ_i = 1, and g_i = ξ_i otherwise. Then (g_1 ⋯ g_n)_n has the same distribution as (Z_Nn)_n, because each g_i is distributed according to μ^*N. Hoeffding's inequality tells us that
( ∑_i=1^n^3αρ_i≥ 0.5 n^3α· n^-α) ≥ 1 - 2 exp( - 2 (0.5 n^2α)^2/n^3α) ≥ 1 - 2 exp(-0.5 n^α).
After tossing away an event of probability at most 2exp(-0.5n^α), we assume ∑_i=1^n^3αρ_i≥ 0.5 n^2α. To apply the analysis of our combinatorial model, we condition on the values of ρ_i, ξ_i and only keep the randomness coming from the λ_i's. Let
i(1) < i(2) < … < i(M)
be the indices in [1, n^3α] where ρ_i = 1. Then we can write
x^-1· Z_n = w_0· a_1γ(ϵlog n) b_1γ(ϵlog n) c_1· w_1⋯ a_Mγ(ϵlog n) b_Mγ(ϵlog n) c_M· w_M,
where
w_0 = x^-1 g_1⋯ g_N(i(1)-1) - 1,
w_1 = g_Ni(1) + 1⋯ g_N(i(2)-1) - 1,
⋮
w_M = g_Ni(M) + 1⋯ g_n,
and the a_i, b_i, c_i are i.i.d., distributed according to the uniform measure on S. As in the previous section, we set s = (a_1, b_1, c_1, …, a_M, b_M, c_M).
By Lemma <ref>, the set of pivots P_M(s) is nonempty with probability at least 1-(1/10)^M≥ 1- (1/10)^0.5 n^2α. By Lemma <ref>, for any pivotal time i ∈ P_M(s) we have (id, x^-1 Z_N(i-1)γ|_ϵlog n, x^-1 Z_n) = (id, (x^-1Z_N(i-1), …, x^-1Z_N(i-1)+ϵlog n), x^-1 Z_n) is ϵlog n–aligned. Lemma <ref> implies that [id, x^-1 Z_n] passesthrough the K_1–neighborhood of (x^-1Z_N(i-1), …, x^-1Z_N).In other words, [x, Z_n] passes through the (Ni+ K_0)–neighborhood of id, which is within the n^3α–neighborhood of id when n is large.For any α>0, there exists K' such that for each 0 ≤ m ≤ n we have[(id, Z_n)_Z_m^2] ≤ n^6α + K e^-n^α/K· n ≤ n^6α+K'.The following lemma states that our deviation inequality (Corollary <ref>) implies a rate of convergence in the subadditive ergodic theorem. LetL := lim_n→∞1/n[d(id, Z_n)].ThenL - 1/n[d(id, Z_n)]= o(1/√(n)). Note that by the definition of the Gromov product, we have[d(id, Z_n2^k)]= ∑_i=1^2^k[d(Z_n(i-1), Z_ni)]- 2∑_i=1^k∑_t = 1^2^k-i[(Z_n 2^i(t-1)), Z_n2^it)_Z_n(2^it - 1)].Also by corollary <ref>[(Z_n 2^i(t-1)), Z_n2^it)_Z_n(2^it - 1)] ≤ 2(n2^i-1)^6α + K' and we also know that [d(Z_n(i-1), Z_ni)] = [d(id, Z_n)] for any i ∈ℕ. Hence for any sufficiently small α>0, we have|1/2^k[d(id, Z_n2^k)] - [d(id, Z_n)]|≤2/2^k∑_i=1^k 2^k-i· (2(n2^i-1)^6α + K') ≲ n^6α∑_i=1^k 2^-i/2. As k →∞ the quantity 2^-k[d(id, Z_n2^k)] converges to L. Picking α < 1/12, we can send k →∞ and divide by n to conclude. We now prove the CLT (Theorem <ref>). It is essentially the same argument as <cit.>, but with a different deviation inequality as input.We claim that for any ϵ>0, there exists N sufficiently large, such that the sequence 1/√(Nk)(d(id, Z_Nk) - [d(id, Z_Nk)])converges to a Gaussian distribution up to an error at most ϵ in the Lévy distance.Indeed, the sequence {1/√(k)(d(id, Z_k) - [d(id, Z_k)])}_k > 0is eventually ϵ–close to a distribution X (in the Lévy distance) if and only if its N–jump subsequence {1/√(Nk)(d(id, Z_Nk) - [d(id, Z_Nk)] )}_k > 0 is as well. Moreover, from Lemma <ref>, we know that[d(id, Z_Nk) = L Nk + o(1/√(Nk)).To show the claim, we first take a sequence0 = i(0) < i(1) < … < i(2^⌊log_2 k ⌋) =ksuch that i(t+1) - i(t) = 1 or 2 for each t. The easiest way is to keep halving the numbers, i.e., i(2^tk) := ⌊i(2^t(k-1)) + i(2^t(k+1))/2⌋for each t and odd k. Let T be the collection of i(t)'s such that i(t+1) - i(t) = 2.Then,1/√(Nk)(d(id, Z_Nk) - [d(id, Z_nk)] ) = I_1 - I_2 - I_3 whereI_1 = ∑_i=1^k1/√(k)[ d(Z_N(i-1), Z_Ni) - [d(Z_N(i-1), Z_Ni)]/√(N)]I_2 =2/√(Nk)∑_t ∈ T( (Z_Ni(t), Z_N(i(t)+2)_Z_N(i(t)+1) -[(Z_Ni(t), Z_N(i(t)+2)_Z_N(i(t)+1)] ), andI_3 = 2/√(Nk)∑_t=1^⌊log_2 k - 1 ⌋∑_l = 1^2^⌊log_2 k ⌋ - t - 1((Z_N2^tl, Z_N2^t (l+2))_Z_N2^t(l+1) - [(Z_N2^tl, Z_N2^t (l+2))_Z_N2^t(l+1)] ). We claim that for sufficiently large N ∈ℕ, I_2 and I_3 are small (in terms of the Lévy distance). Then the only non-negligible term I_1 is a sum of i.i.d random variables, normalized to converge to a Gaussian as k →∞. The second summation I_2 is the sum of at most k independent RVs whose variance is bounded by 4/Nk· 3N^6α.Hence, the second summation has variance at most12 N^6α-1 and (|I_2| ≥ N^-α) ≤ 12 N^8α - 1by Chebyshev.Now for each t,I_3;t := 2/√(Nk)∑_l = 1^2^⌊log_2 k ⌋ - t - 1((Z_N2^tl, Z_N2^t (l+2))_Z_N2^t(l+1) - [(Z_N2^tl, Z_N2^t (l+2))_Z_N2^t(l+1)] ) is the sum of at most k/2^t independent RVs whose variance is bounded by 4/Nk· 3 (N2^t)^6α. 
This means that I_3;t has variance at most 12 N^6α - 1· 2^(6α - 1) t, and (|I_3; t| ≥ N^-α 2^-α t) ≤ 12 N^8 α - 1 2^(8α - 1) tby Chebyshev.Summing them up, we have |I_2 + ∑_t I_3;t| ≤ N^-α∑_t 2^-α toutside a set of probability N^8α - 1∑_t 2^(8 α - 1) t. These are small, regardless of the range of t. More precisely, by setting α = 1/10, we deduce that |I_2 + I_3| = O(N^-1/10)outside a set of probability O(N^-1/10), ending the proof. We now prove the CLT for random walks with finite p-th moment for some p>2. It suffices to show that Corollary <ref> holds for such random walks.For some q > 0, let E be the event that ∑_i=1^n |g_i| is at least n^q. We note the following inequality[ n^q (p-2)( ∑_i=1^n |g_i| )^2 1_∑_i=1^n |g_i| ≥ n^q] ≤[ ( ∑_i=1^n |g_i| )^p] ≤[ ( n max_i=1^n |g_i|)^p]≤ n^p[ ∑_i=1^n |g_i|^p]≤ n^p+1 |g|^p.This implies that [ (∑_i=1^n |g_i| )^2 1_E] ≤ Cn^(p+1) - q(p-2).By taking q > p+1/p+2, we can keep this bounded.Now on the event E^c, we argue as in Lemma <ref>. We remark that the only place we used the finite support assumption was to invoke Lemma <ref>. In particular, we needed ϵlog( |w_0| + ⋯ + |w_k| + kϵlog n ) ≤log n,where w_i. However, on the event E^c, this assumption is still met, replacing ϵ with ϵ/q if necessary. Then we may still apply lemma <ref>. Hence, we get[(id, Z_n)_Z_m^2] ≤ n^6α + K e^-nα/K≤ 2n^6α + K'.Given this estimate, we get: Let μ be an admissible measure on G with finite p–moment for some p > 2, and (Z_n)_n be the random walk on G generated by μ. Then there exist constants λ, σ such thatd_X(o, Z_n o)-Ln/σ√(n)→𝒩(0, 1).§ RIGHT-ANGLED COXETER GROUPSLet Γ = (V,E) be a finite simple graph. We can define the Right-angled Coxeter group by the presentationW_Γ = ⟨ v ∈ V|v^2, [v,w], (v,w) ∈ E⟩. In this appendix we show the following Let W_Γ be a Right-angled Coxeter group of thickness k ≥ 2. Then any Cayley graph of Γ contains a periodic geodesic σ which is (f, θ)–divergent for some θ>0 and f(r) ≳ r^k. We only need to slightly modify the proof of Theorem C given in <cit.>. They show that a RACG of thickness at least k has divergence at least polynomial of degree k+1. To do this, they construct a periodic geodesic γ such that for any path κ with endpoints on γ and avoiding an r-neighbourhood of γ's midpoint, any segment of κ with projection at least some constant has to have length at least r^k. By integrating they get r^k+1. For completeness, we include the proof below. Since the claim is quasi-isometry invariant, we work on the Davis complex Σ_Γ. We modify the proof of Theorem C of <cit.>, borrowing their notation and terminology. Take the word w in the proof, so that σ is a bi-infinite geodesic which is the axis of w, and set p_i = w^i. Since the Davis complex is a CAT(0) cube complex, the nearest point projection π: Σ_Γ→σ is well-defined and 1–Lipschitz. Let κ: [0, t] →Σ_Γ be a path whose projection has diameter at least 2|w|, which is disjoint from the |w|r-neighbourhood around some w^i. As the projection of κ has length at least 2|w|, we can find some points p_j, p_k such thatπ(κ(0)) < p_j < p_k < π(κ(t)) in the orientation on σ. Here p_j, p_k = w^j, w^k. For the rest of the proof, we follow <cit.>. For some j ≤ i < k, let H_i (resp. K_i) be the hyperplane dual to the edge of σ which is adjacent to p_i (resp. p_i+1) and is labeled by s_0 (resp. s_n). As hyperplanes separate Σ_Γ and do not intersect geodesics twice, it follows that H_i (resp. K_i) intersects κ. Let e_i (resp. f_i) be the last (resp. first) edge of κ dual to H_i (resp. K_i). Let γ_i (resp. 
η_i) be a minimal length geodesic in the carrier N(H_i) (resp. N(K_i)) with starting point p_i (resp. p_i+1) and endpoint on e_i (resp. f_i). Let α_i be the subpath of κ between γ_i ∩κ and η_i ∩κ. As w is a Γ–complete word, no pair of hyperplanes dual to σ intersect. By our choices, α_i ∩α_j is either empty or a single vertex for all i ≠ j. Let D_i be the disk diagram with boundary path γ_i α_i η_i^-1β_i where β_i has label w^-1. For each 0 ≤ i ≤ r-2, we observe the following: * The path γ_i is reduced. * By Lemma 7.2, no (k-1)–fence connects γ_i to η_i^-1 in any disk diagram with boundary path γ_i α_i η_i^-1β_i. * The path α_i does not intersect the ball B_p_i(|w|(r)). Thus we can apply <cit.> to D_i by setting, in that theorem, γ = γ_i,α = α_i,η = η_i^-1,β = β_i, andL = k - 1 R = |w|(r - i).We conclude that for r large enough|α_i| ≥ C' (|w|(r)^k). As α_i is a subsegment of p, we are done.§ SUPERLINEAR-DIVERGENCE AND STRONGLY CONTRACTING AXISIn this section, we give two constructions that illustrates the logical independence between superlinear divergence and strongly contracting property. We first recall the notion of strongly contracting geodesics. For a subset A ⊆ X of a metric space X and ϵ > 0, we define the closest point projection of x ∈ X to A byπ_A(x) := {a ∈ A : d_X(x, a)= d_X(x, A) }.A is said to be K-strongly contracting if: * π_A(z) ≠∅ for all z∈ X and* for any geodesic η such that η∩ N_K(A) = ∅, we have (π_A(η) ) ≤ K. There exists a finitely generated group G containing an elementwhose axis is strongly contracting but not superlinear-divergent.Let G be the group constructed by Gersten in <cit.>:G = ⟨ x, y, t|txt^-1 = y, xy=yx ⟩.The group G naturally acts on the universal cover of its presentation complex, which is a CAT(0) cube complex. Recall that the presentation complex of G is defined as follows: start with a single 0-cell, attach a 1-cell for each of the three generators x,y,t, and attach a 2–cell for each of the relations [x,y] and txt ^-1y ^-1. Let X be the universal cover of this complex, which Gersten shows is CAT(0) <cit.>. The induced combinatorial metric on X is isometric to the word metric with respect to {x, y, t}.Let g = tx and γ be a path connecting (…, id, t, tx, txt, (tx)^2, …). Then γ is a g–invariant geodesic, and γ does not bound a flat half-plane (the cone angle of γ at its each vertex is 3π/2). Hence, γ is rank-1 and we can conclude that g is strongly contracting.Meanwhile, by <cit.>, G has quadratic divergence. Given an appropriate action of G on a hyperbolic space, we would be able to conclude from <cit.> that γ is not superlinear-divergent. Since we do not assume a hyperbolic action, we instead present a modification of Goldborough-Sisto's argument.Suppose that there exists an (A, B)–coarsely Lipschitz projection π_γ: G →γ, a constant θ > 0 and a superlinear function f such that γ is (f, θ)–divergent with respect to π_γ. Up to a finite additive error, we may assume that π_γ takes the values {(zx)^i : i ∈}.Let ϵ = 1/2(A+3) and let n be a sufficiently large integer. We claim:If a point p ∈ G ∖ B(id, n) satisfies d(p, γ) ≤ϵ n, then π_γ(p) = (tx)^i for some |i| > 0.5n.First, from d(p, γ) ≤ϵ n and the coarse Lipschitzness of π_γ, we deduced(p, π_γ(p)) ≤ (A+1) ϵ n + 2B.Hence, we haved(id, π_γ(p)) ≥ d(id, p) - d(p, π_γ(n)) ≥ n - (A+1) ϵ n - 2B > 0.5nand the claim follows.Next, we let a_n = (tx)^(1-ϵ)n y^-⌊ϵ n ⌋, b_n = (tx)^-(1-ϵ)n y^-⌊ϵ n ⌋and let η be an arbitrary path in G ∖ B(id, n) connecting a_n and b_n. 
Let m, m' ∈ be such that π_γ(a_n) = (tx)^m and π_γ(b_n) = (tx)^m'. We then haved((tx)^n, π_γ(a_n)) ≤ d((tx)^n, a_n) + d(a_n, π_γ(a_n)) ≤(A+2) ϵ n + 2B < 0.5 n.It follows that m > n - 0.5n ≥ 0.5n. Similarly, we deduce m' < -0.5n.We examine the two connected components of η∖ N_ϵ n(γ) as well as η∩ N_ϵ n(γ). Each component of η∩ N_ϵ n(γ) attains values of π_γ(·) in {(tx)^i : i < -0.5 n}or{(tx)^i : i > 0.5n}, by Claim <ref>, but not in both (by the coarse Lipschitzness of π_γ). Meanwhile, the endpoints of η attain values of π_γ(·) in {(tx)^i : i < -0.5 n} and {(tx)^i : i > 0.5n}, respectively. As a result, there exists a subsegment η' of η, as a component of η∖ N_ϵ n(γ), such that π_γ(η'^+) ∈{(tx)^i : i > 0.5n}andπ_γ(η'^-) ∈{(tx)^i : i < -0.5n}.It follows that the length of η' is at least (n/θ) · f(ϵ n). Since η is longer than η', we deduce that an arbitrary path in G ∖ B(id, n) connecting a_n, b_n∈ B(id, n) is longer than (n / θ) · f(ϵ n). When n increases, this contradicts the quadratic divergence of G. Hence, we deduce that γ is not superlinear-divergent. There exists a proper geodesic metric space X that contains a superlinear-divergent geodesic γ that is not strongly contracting.Let X = ℍ^2 and γ be a bi-infinite geodesic γ on X with respect to the standard Poincaré metric ds_0^2. Let o ∈γ be a reference point on γ andlet _γ be the closest point projection onto γ with respect to ds_0^2. For each x ∈ X, let r be the (directed) distance from x to γ and let τ be the (directed) distance from o to _γ(x). Since (r, τ) is an orthogonal parametrization of X, there exists a continuous coefficient F_0 such thatds_0^2 = dr^2 + F_0(x) dτ^2holds at each point x ∈ X. We note that F_0(x) ∼ e^κ r(x) for some κ > 0 (due to the Gromov hyperbolicity of (X, ds_0^2)) and F_0(x) ≥ 1.We will now define a new metric ds^2 as follows. For each i > 0 and j ∈ let I_i, j = {(r, τ) : r = 4^2^i, 2j +i≤τ≤2j + i+1},and let S := ⋃_i > 0,j ∈ I_i, j.Let χ: ℝ^2→ [0, 1] be a smooth function that takes value 0 on S and 1 on ℝ^2∖ N_0.1(S). We finally define F(x) := F_0(x) ·χ(r(x), τ(x)) + (1 - χ(r(x), τ(x)))and ds^2 := dr^2 + F(x) dτ^2.First, _γ is still the closest point projection with respect to ds^2. Indeed, the shortest path from x ∈ X to γ is the one that does not change in the value of τ. As a corollary, the K-neighborhoods of γ with respect to the two metrics coincide.Let i be a positive integer and let x, y ∈ X be such that r(x) = r(x) = 4^2^4i and τ(x) = 0, τ(y) = 2i. We first consider a path η connecting x to y while passing through N_K(γ). Then the total length is at least 2 · (4^2^4i -K). Next, we take a piecewise geodesic path η' that goes like: (r, τ) = (4^2^4i, 0)- (4^2^4i, 1) - (4^2^4i-1, 1) - (4^2^4i-1, 2) - ⋯- (4^2^3i+1, i) - (4^2^3i, i) - (4^2^3i, i+1) - (4^2^3i+1, i+1) - ⋯ - (4^2^i, 2i).Then the total length is 2(4^2^4i - 4^2^3i) + 2i. Note also that η' does not intersect N_K(γ). We conclude that the geodesic connecting x to y does not touch N_K(γ). Note also that the projection is larger than 2i. By increasing i, we conclude that γ is not K–strongly contracting for any K>0.Meanwhile, it is superlinear-divergent. To see this, suppose a path η lies in X ∖ N_R(γ) and satisfies π_γ(η) > 4. Then π_γ(η) contains [2k, 2k+2] for some integer k, and by restricting the path if necessary, we may assume π_γ(η) = γ([2k, 2k+2]). 
If r(η) ever takes two values among {4^2^i : i > 0}∩ [R, +∞), say 4^2^m and 4^2^m' for some m < m', then the total variation of r(η(t)) is at least 4^2^m+1 - 4^2^m = 4^2^m (4^2^m - 1) ≥ R^2/2. Consequently, we have l(η) ≥ 0.5R^2.

If not, r(η) takes at most one value 4^2^i among {4^2^j : j > 0}. If i is even, then F(η(t)) = F_0(η(t)) for t such that τ(η(t)) ∈ [2k+1.1, 2k+1.9]. Since F_0(η(t)) ≥ e^κ r(η(t))≥ e^κ R, we have
l(η) ≥∫ F(η) dτ(η) ≥ e^κ R× 0.8 = 0.8 e^κ R.
Similarly, we have l(η) ≥ 0.8 e^κ R when i is odd. We conclude that γ is superlinear-divergent.

Finally, we remark that superlinear divergence is invariant under quasi-isometry, but the notion of being strongly contracting is not. For example, let X be the Cayley graph of a group G equipped with the word metric associated to some finite generating set 𝒮, and let Z be a superlinear-divergent set in X. Then changing the generating set changes the metric on X by a quasi-isometry, and hence Z is still a superlinear-divergent set. But if γ is a strongly contracting geodesic in X, it may not be strongly contracting with respect to the new metric. As an explicit example, it was shown in <cit.> that each mapping class group admits a proper cobounded action on a metric space X such that all pseudo-Anosov elements have strongly contracting quasi-axes in X. In contrast, it was shown in <cit.> that the mapping class group of the five-times punctured sphere can be equipped with a word metric such that the axis of a certain pseudo-Anosov map in the Cayley graph is not strongly contracting.
"authors": [
"Kunal Chawla",
"Inhyeok Choi",
"Vivian He",
"Kasra Rafi"
],
"categories": [
"math.GT",
"math.GR",
"math.PR",
"60G50, 20F65, 20F69"
],
"primary_category": "math.GT",
"published": "20231027215155",
"title": "Random walks on groups and superlinear divergent geodesics"
} |
Searching for the signature of a pair density wave in YBa_2Cu_3O_6.67 using high energy X-ray diffraction S. M. Hayden January 14, 2024 ========================================================================================================= All-in-one adverse weather removal is an emerging topic on image restoration, which aims to restore multiple weather degradation in an unified model, and the challenging are twofold. First, discovering and handling the property of multi-domain in target distribution formed by multiple weather conditions. Second, design efficient and effective operations for different degradation types. To address this problem, most prior works focus on the multi-domain caused by weather type. Inspired by inter&intra-domain adaptation literature, we observed that not only weather type but also weather severity introduce multi-domain within each weather type domain, which is ignored by previous methods, and further limit their performance. To this end, we proposed a degradation type and severity aware model, called UtilityIR, for blind all-in-one bad weather image restoration. To extract weather information from single image, we proposed a novel Marginal Quality Ranking Loss (MQRL) and utilized Contrastive Loss (CL) to guide weather severity and type extraction, and leverage a bag of novel techniques such as Multi-Head Cross Attention (MHCA) and Local-Global Adaptive Instance Normalization (LG-AdaIN) to efficiently restore spatial varying weather degradation. The proposed method can significantly outperform the SOTA methods subjectively and objectively on different weather restoration tasks with a large margin, and enjoy less model parameters. Proposed method even can restore unseen domain combined multiple degradation images, and modulating restoration level. Implementation code will be available at https://github.com/fordevoted/UtilityIRthis repository§ INTRODUCTIONDespite the fact that learning based method usher in a dramatic growth and success on computer vision in last decade, such as image classification and segmentation, the performance of many high-level vision algorithms usually be degraded while applied in real-world adverse environments, e.g. underwater, adverse illumination or weather conditions. To restore degraded input image, most of previous works design task specific model for each adverse environment such as underwater <cit.>, low-light <cit.>, rainy day <cit.>, haze <cit.>, and snow <cit.>. However, to deploy multiple model for different scenario is resource in-efficiency for practical usage. All-in-one bad weather removal <cit.> is thus emerging topic to resolve this issue, which aims to learn an unified model to restore image degraded by different adverse weather conditions. The challenge of all-in-one adverse weather removal (or broadly, all-in-one image restoration) are twofold: * Effectively and efficiently integrate different operations suitable for different weather degradation type.* The target distribution formed by different weather conditions is with the property of multi-domain, naively learning the mapping from all degraded images to clean images would result in learning from a large variance uni-domain distribution and leading to sub-optimal performance. Accurately and implicitly handle the property of multi-domain at test-time would be challenging. To confront the first challenge, previous works adopted various techniques for effective and efficient architecture, e.g. 
NAS <cit.>, ViT <cit.>, knowledge distillation <cit.>, deformable convolution and feature affine <cit.>, FAIG <cit.>, weather general&specific operation <cit.>, and diffusion model <cit.>, etc.. As the second challenge, some prior works <cit.> utilize contrastive learning <cit.> to separate weather type feature and learning the multi-domain of target distribution, another technique such as classifier <cit.>, multiple weather specific operation/encoder <cit.>, and learnable query <cit.> are also adopted. It is worthy to note that in some works <cit.>, the weather type label is available while testing, namely Non-blind all-in-one image restoration. A more challenging setting would be contrary situation <cit.> that input image is unknown degradation while inferencing, which is the Blind all-in-one image restoration, and we focus on the latter one in this paper. Inspired by inter&intra-domain adaptation literature for image enhancement <cit.>, we view the problem from data domain perspective and observed that multi-domain obstacles are not only existed between different weather types, the diverse weather severity also introduce multi-domain in intra-domain of weather type, which is ignored by most of previous works, and further limit their performance. As shown in Fig. <ref>, not only weather type result different degradation, the diverse weather severity also caused various appearance and lead to intra-domain gap.To this end, we proposed a degradation type and severity aware all-in-one adverse weather removal network based on the image quality ranker with proposed Marginal Quality Ranking Loss (MQRL) and a bag of techniques to effectively and efficiently restore diverse degraded images. Specifically, to extract type and severity information from different weather images, we design a Degradation Information Encoder (DIE) to perform two branches multi-task feature extraction. For the weather type branch, we imposed Contrastive Loss (CL) to diminish the distance of feature extracted by same weather type, and enlarge the distance between different types. As severity, we develop based on the intuitively observation that the more severely degraded image produce worse Image Quality Assessment (IQA) score, and IQA score is positively related to restoration level that require to apply to restore clean image. Motivated by previous works <cit.> that demonstrate the effectiveness of learning a image quality ranker with Marginal Ranking Loss (MRL) to benefit the following image restoration, we train a ranker to predict weather severity information. Nevertheless, the standard MRL only consider the ranking information between input image pairs, that make the ranker prone to predict incorrect IQA score interval, and leading to apply inappropriate restoration level while use the predicted IQA score as restoration level signal. To be the remedy, we further proposed an interval-aware MRL, i.e. MQRL, to better extract the severity information. After obtaining the weather information, they would be injected into model through Degradation Information Local-Global AdaIN (DI-LGAdaIN) and Degradation-guided Cross Attention (DGCA) for type and severity aware global-local degradation removal. 
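To make the pipeline just described more tangible, the following toy PyTorch sketch mimics the two-branch extraction of a spatial weather-type feature and a pooled severity vector, together with a restoration step conditioned on the severity through a channel-wise affine. The module definitions are illustrative stand-ins under our own assumptions, not the actual UtilityIR architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyDIE(nn.Module):
    """Toy stand-in for the two-branch Degradation Information Encoder."""
    def __init__(self, dim=32):
        super().__init__()
        self.backbone = nn.Conv2d(3, dim, 3, stride=4, padding=1)
        self.type_head = nn.Conv2d(dim, dim, 1)   # keeps the spatial layout (weather type)
        self.sev_head = nn.Linear(dim, dim)       # 1-D vector after global pooling (weather severity)

    def forward(self, x):
        feat = F.relu(self.backbone(x))
        f_type = self.type_head(feat)                 # B x D x H/4 x W/4
        f_sev = self.sev_head(feat.mean(dim=(2, 3)))  # B x D
        return f_type, f_sev

class ToyRestoreNet(nn.Module):
    """Toy restoration network; only the global (channel-wise) conditioning is sketched."""
    def __init__(self, dim=32):
        super().__init__()
        self.enc = nn.Conv2d(3, dim, 3, padding=1)
        self.to_affine = nn.Linear(dim, 2 * dim)      # severity -> (alpha_g, beta_g)
        self.dec = nn.Conv2d(dim, 3, 3, padding=1)

    def forward(self, x, f_type, f_sev):
        # f_type would drive the pixel-wise (local) affine and the DGCA query; omitted in this toy sketch.
        h = F.relu(self.enc(x))
        alpha_g, beta_g = self.to_affine(f_sev).chunk(2, dim=1)
        h = alpha_g[..., None, None] * h + beta_g[..., None, None]
        return x + self.dec(h)                        # residual prediction

die, net = ToyDIE(), ToyRestoreNet()
img = torch.rand(2, 3, 64, 64)
restored = net(img, *die(img))
print(restored.shape)  # torch.Size([2, 3, 64, 64])
```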
The proposed method can outperform the state-of-the-art methods on All-Weather dataset <cit.> subjectively and objectively, and can restore the combined multiple degradation weather without training these type data, but enjoy less parameters compared with other blind all-in-one image restoration methods.Our contributions are summarized as follow: * To the best of our knowledge, UtilityIR is the first type and severity aware blind all-in-one image restoration method to handle diverse adverse weather type and severity in an unified model with single image input.* We proposed a marginal quality ranking loss to address the insufficiency of standard MRL in scenario of degradation severity estimation. We also modify and exploit a bag of off-the-shelf techniques to better estimate weather information and restore diverse degradation type images.* Proposed method significantly outperform the SOTA methods on different weather removal tasks subjectively and objectively with less parameters, and even can restored unseen combined multiple weather images and modulating restoration level. § RELATED WORK Single Image Restoration for Adverse Weather To restore image from adverse weathers, many learning based methods are designed by weather specific characteristics or physical model. For image deraining <cit.>, JORDER <cit.> based on additive rain model to jointly learning detect and remove rain streak. Follow the formulation of raindrop image, Qian <cit.> remove raindrop from single image through learning raindrop mask attention. For image dehazing <cit.>, DehazeNet<cit.> learning to estimate transmission map. ZID <cit.> leverage atmosphere scatter model to learn an image dehazing model without ground-truth. To remove snow from single image <cit.>, Chen <cit.> proposed HDCWNet based on dual-tree complex wavelet transform. Another trend of research lie in task-agnostic model for general image restoration <cit.>. SwinIR <cit.> based on Swin Transformer, which combine the advantages of the local processing of convolution and the long-range dependency modeling of Transformer. MAXIM <cit.> using multi-axis gated design based on MLP with local and global branches. Nevertheless, these methods can only restore one type of degradation once for single model and weight, which limit the practical usage in the real-world.All-in-one Image Restoration All-in-one image restoration is a new trend on image restoration in recent years, which aims to remove different degradation type images with a unified model. As mentioned in Section. <ref>, the challenge of this topic lie in effectively and efficiently integrate different operations and handle multi-domain property. For non-blind all-in-one image restoration <cit.>, All-in-one network <cit.> first proposed task-specific encoders and common decoder based on neural architecture search. WGWS-Net <cit.> observed that model trained by different weather exist general and specific filters and further design a weather general and specific network. To achieve blind all-in-one image restoration <cit.>,Transweather <cit.> based on transformer model and learnable weather query. Unified model <cit.> and AirNet <cit.> applied contrastive loss to separate different degradation types feature. Zhang <cit.> proposed an ingredient-oriented degradation reformulation framework that can integrate to any Transformer based model. ADMS <cit.> utilize FAIG to obtain task specific discriminative filter and synergize with a degradation classifier. 
Weatherdiff <cit.> removal multiple weather degradation based on diffusion model. However, the above approaches only consider the multi-domain caused by weather type, but not explicitly handle degradation severity, which also produce different difficulty for image restoration, and limit their performance. Severity Aware Image Restoration Apart from degradation type, degradation severity also bring various appearance and difficulty to restore, many prior works <cit.> have been demonstrated the effectiveness of taking degradation severity into account for image restoration. DID-MDN <cit.> proposed a rain density aware model and divide rain level to light, medium and heavy rain for better restore images, a similar work also done by <cit.>. JSTASR <cit.> estimates snow size and transparency for snow removal. Yi <cit.> utilize a domain adaptation framework to mitigate the Syn2real and intra-domain gap due to haze severity for image dehazing. Li <cit.> proposed a quantization-aware JPEG artifact removal. CBDNet <cit.> estimate noise level and jointly input into model with noisy image to perform image denosing. In this work, we aim to deal with the diverse degradation appearance that caused by different degradation type and severity for adverse weather removal in a more general form.§ PROPOSED METHOD§.§ OverviewThe overview of proposed UtilityIR is depicted in Fig. <ref>. Input arbitrary weather image I ∈ℝ^H× W × C, the Degradation Information Encoder (DIE) would be first applied to extract weather information, include type F_t ∈ℝ^H/S×W/S and severity F_s ∈ℝ^D, where S is the down sampling ratio and D is feature dimension. The process can be presented by:F_t, F_s = DIE(I)Note that the type would keep as 2D matrix and severity would encoded into a 1D vector. The intuition behind our design is that different weather type give rise to spatial varying degradation, e.g. raindrop usually bring on local degradation and fog bring on global degradation, which make keep spatial information being crucial while extract representative weather type feature. On the other hand, using scalar or vector to represent degradation severity is enough, i.e. IQA score or noise level, we adopt the encoded feature vector before IQA score regressor as the severity rather than predicted IQA score for richer information. Based on the intuition of severe weather image result in worse IQA score, we follow prior works <cit.> to learn a image quality ranker to boost learning of image restoration. Different from <cit.> that utilize ranker to provide novel rank-content loss, and <cit.> leverage ranker to initial model parameter, our ranker is adopted to directly extracted severity information for restoration level signal. Note that the ranker is usually guided by MRL <cit.> in <cit.> rather than directly regress GT IQA score since what we care is not exact IQA value but the ranking information<cit.>. However, the ranker guided by standard MRL do not consider the interval of predicting IQA score but only ranking information, which might lead to under-/over-enhanced while regard the severity feature as the restoration level signal to inject into restoration module. As the example illustrates in Fig. <ref>, the severity of moderate rainy image is actually closer to the heavy rain one, but the severity predicted by MRL ranker is closer to the light one, that make model adopt insufficient restoration level while restoration and result in under-restored. 
To tackle this issue, we further proposed Marginal Quality Ranking Loss (MQRL) to guide weather severity extraction. We first introduce diff_in = Φ(I) -Φ(I') is IQA score difference of degraded image pair, diff_gt = Φ(I_gt) - Φ(I'_gt) is IQA score difference of GT image pair, and diff = |diff_gt - diff_in| is the distance of two image pairs' IQA score, where I is input degraded image, I' is random sample image with same weather type to I. I_gt, I'_gt are corresponding GT image for I and I', respectively, and Φ(·) is IQA metric, we simply adopt PSNR for simplicity although some perceptual score can be adopted. Then, the MQRL can be formulated by:ℒ_mqrl(I, I', I_gt, I'_gt) =diff, if sgn(diff_gt) ≠ sgn(diff_en) max(0, diff - ϵ) , elsewhere sgn(·) is sign function, and ϵ is the margin. The aim behind formula is learning to rank weather severity first, then ease the burden while predicted severity is correctly ranked and the difference interval is closed enough to GT. Compared to directly regress IQA score, MQRL ease the burden of regressing the bias of GT distribution, which is trivial for severity feature; Compared to MRL, MQRL learning more about variance of GT distribution, which is more crucial for learning restoration level signal.and we further modified standard contrasitive loss with the Marginal <cit.> idea to ease the learning burden while extracted feature is representative enough, the modified contrastive loss can be formulated as:ℒ_mcl(F, F^+, F^-) =-log[exp(Φ(F, F^+)/τ)/exp(Φ(F, F^+)/τ)+ Σ^N_j=1exp(Φ(F, F^-_j)/τ)]where: Φ(υ, υ̂) =1 , if cos(υ, υ̂) ≥ 1-ϵ-1 , if cos(υ, υ̂) ≤ -1+ϵcos(υ, υ̂) ,else.Note that modified marginal contrastive loss here is different to Marginal Contrasitive Loss proposed by Zhen <cit.>, which focus on enlarge the distance of each feature cluster, in contrast, we aim to ease the learning burden while encoded feature is representative enough.As for meeting the representative weather type feature, we followed to prior works <cit.> to impose contrastive regularization <cit.> to improve discriminative of feature extracted from different weather type. After obtaining representative weather information features, they will jointly input into model with input image I, to guide the restoration process and obtain the high-quality restored result Î:Î = R(I, F_t, F_s)where R is the restoration network. We will detail the architecture and how to inject weather information into restoration network below. §.§ Network Architecture§.§.§ Degradation Information Encoder DIE)As shown in Fig. <ref> (b) , DIE is consisted with stack of convolution layers to extract different level features, then we generate type and severity feature under multitasking manner using two branch architecture and guided by different objective function. For weather type branch, to encode the diverse degradation type, the features extracted from different layers will be concatenated, then digest by another convolution layer to encode from low-level to high-level feature. As for the weather severity branch, we perform global average pooling on input feature to obtain quality feature vector and regress IQA score by a simple two-layers MLP. Similar to mapping network in <cit.> and CBDE in <cit.>, our DIE can be integrated to any task-specific image restoration backbone network, and extend to all-in-one image restoration fashion. §.§.§ RestoreNetThe RestoreNet consist three stages, i.e. feature extraction, restoration, and reconstruction. 
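Before detailing the RestoreNet stages, a concrete reading of the MQRL above may be helpful. The sketch below interprets diff_in/diff_en as the difference of the ranker's predicted IQA scores for the degraded pair and diff_gt as the difference of their PSNR-based ground-truth scores; the scalar interface and the specific numbers are illustrative assumptions, not the authors' implementation.

```python
def mqrl(pred_a, pred_b, gt_a, gt_b, eps=0.1):
    """Marginal Quality Ranking Loss for one image pair (scalar IQA scores).

    If the predicted score difference disagrees in sign with the ground-truth
    difference (wrong ranking), the full gap is penalized; once the ranking is
    correct, the gap is only penalized beyond the margin eps.
    """
    diff_pred = pred_a - pred_b          # predicted IQA-score difference
    diff_gt = gt_a - gt_b                # ground-truth IQA-score difference
    diff = abs(diff_gt - diff_pred)      # distance between the two differences
    if (diff_gt >= 0) != (diff_pred >= 0):
        return diff                      # incorrectly ranked: no margin relief
    return max(0.0, diff - eps)          # correctly ranked: margin-relaxed penalty

# Toy example: both score pairs rank the second image as less degraded,
# but the predicted gap deviates from the true gap by more than eps.
print(mqrl(pred_a=0.40, pred_b=0.70, gt_a=0.30, gt_b=0.75))  # -> about 0.05
```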
For the feature extraction stage, we stack convolution layers with stride 2 to extract the degraded feature F ∈ℝ^H/S×W/S× D, where S is the down-sampling ratio and D is the feature dimension. For the restoration stage, we stack K residual blocks and a DGCA block to improve the ability to model long-range dependencies beyond the convolutional network. As shown in Fig. <ref> (c), each residual block follows the classic conv-norm-activation design. Thanks to the success of AdaIN <cit.> and feature affine transforms in the image restoration literature <cit.>, we inject the weather information into the model through the proposed Degradation Information Local-Global Adaptive Instance Normalization (DI-LGAdaIN), which is inspired by LG-AdaIN <cit.>. The DI-LGAdaIN consists of a local and a global feature affine transform and an instance normalization. Considering its dimensionality and preserved spatial information, we use the weather type feature to generate the local, i.e., pixel-wise, affine transform parameters. Following <cit.>, which shows that different restoration levels can be modulated through channel-wise linear operations on features, the weather severity is used to perform the global, i.e., channel-wise, affine transform, which is equivalent to setting the kernel size to 1×1 in <cit.>. The DI-LGAdaIN can be formulated as: F' = α_g (α_l (F-μ)/σ + β_l) + β_g, where α_l, β_l ∈ℝ^H/S×W/S are local affine parameters, α_g, β_g ∈ℝ^D are global affine parameters, and μ and σ are the mean and standard deviation of the feature. Then we apply multi-head self-attention (MHSA) at the end of the residual blocks to introduce a global receptive field and refine the enhanced feature. However, MHSA tends to attend globally to the content of the input feature <cit.>, which might not be suitable for local degradation. Since the degradation appearance mainly depends on the weather type, we further utilize Multi-Head Cross-Attention (MHCA) <cit.>, which takes the weather type as the query to guide the attention-based feature refinement as in <cit.>; the DGCA is depicted in Fig. <ref> (d). Finally, the restored feature is transformed back to the image domain through the reconstruction module. §.§ Training Loss To extract weather information, we use the proposed MQRL in Eq. <ref> to guide learning of the weather severity and the contrastive loss to guide weather type extraction. The contrastive loss can be formulated as: ℒ_cl(F, F^+, F^-) = -log[exp(Φ(F, F^+)/τ) / (exp(Φ(F, F^+)/τ) + Σ^N_j=1 exp(Φ(F, F^-_j)/τ))], where F is the input feature, F^+ and F^- are positive and negative example features, τ is the temperature, and Φ(·) is the cosine similarity function. To learn the image restoration, we utilize the L1 loss and the SSIM loss as pixel fidelity losses, formulated as ℒ_l1 = ||Î - Y||_1 and ℒ_ssim = 1-SSIM(Î, Y). We also use a perceptual loss to achieve more realistic results: ℒ_per = ∑_j 1/(c_j h_j w_j) ||Ψ_j(Î) - Ψ_j(Y)||^2_2, where c_j h_j w_j is the size of the j^th feature map Ψ_j of the pre-trained VGG-16 network. The total loss can be summarized as ℒ = λ_mqrlℒ_mqrl + λ_clℒ_cl + λ_l1ℒ_l1 + λ_ssimℒ_ssim + λ_perℒ_per, where the λ terms are weightings that control the contribution of each loss. § EXPERIMENTS We align our experiment setting with <cit.> for fair comparison. The model is trained and evaluated on the All-Weather dataset <cit.>, which contains three kinds of weather degradation data, i.e., the Raindrop dataset <cit.> for raindrop, Outdoor-Rain <cit.> for rain+fog, and Snow100K-L <cit.> for snow. We select fourteen state-of-the-art methods for comparison, covering four paradigms, i.e., single-task weather-specific (AttentiveGAN <cit.> and DuRN <cit.> for raindrop, pix2pix <cit.> and HRGAN <cit.> for rain+fog,
JSTASR <cit.> and DDMSNet <cit.> for snow), single task agnostic (MPRNet <cit.>, Restormer <cit.> and SwinIR <cit.>), non-blindall-in-one (All-in-one network <cit.> and WGWS-Net <cit.>), and blind all-in-one (Transweather <cit.>, Unified model <cit.> and Weatherdiff <cit.>). We use pre-trained model if it is available, otherwise, we use author's official code for training the models with default setting or minimal modification to overcome limitation of CUDA memory. §.§ Comparisons with the State-of-the-art MethodsWe first compare the proposed method with the state-of-the-art methods quantitatively on three different weather removal tasks with PSNR and SSIM. As Table <ref> shown, UtilityIR outperform same type methods, i.e. blind all-in-one, with large margins on three tasks for +1.54 db on de-raindrop, +2.65 db on rain+fog removal, and +0.9 db on desnowing. The model also achieve state-of-the-art performance compared with others type method on raindrop and snow removal, and significant better than all comparison models on challenging rain+fog dataset. We suppose the reason is that the restoration level is fixed for whole image in our method, which is sufficient for global degradation that degradation level is consistent for whole image, but raindrop and snow image is more on local degradation, that might require different restoration level for each patch to further boost the performance. For the visual comparisons, as depicted in Fig. <ref>, ours model can remove more snow degradation and generate clearer result compared with other methods. For the raindrop removal, our model can restore better straight line, while others method produce twisted line as shown in top images of Fig. <ref>. As presentation of bottom images of Fig. <ref>, all comparison methods remain a black stain artifact after restoration, while the proposed model can reduce the artifact and obtain cleaner result. For rain+fog removal, all comparison methods result uneven color artifact on the wall of top images in Fig. <ref>, that UtilityIR successfully remove the rain+fog and generate visual pleasing restored result. As the red area of bottom images shown in Fig. <ref>, our model can obtain clear and higher fidelity to GT. MPRNet and Unified model over-smooth, Transweather remain vertical line color artifact, and WGWS-Net produce a stain artifact. §.§ Weather Type and Severity Prediction To learn more about extracted weather type and severity, we first visualize the weather type feature by T-SNE to evaluate representative. As presented in Fig. <ref> (a), the encoded weather type features well gather in corresponding cluster as expectation. As for weather severity evaluation, we first define image pair that make the MQRL equal to 0 is correct, and report the average accuracy of randomize sample 1000 image pairs 10 times. Our model can achieve accuracy of 81.16% ± 1.43, that demonstrates the capability to distinguish weather severity from diverse input image. §.§ Ablation Study Proposed methodWe evaluate the effectiveness of the weather type and severity by removing corresponding loss function, and report result before finetune stage. As shown in Table <ref>, with addressing no matter weather type or severity by imposing CL or MQRL, the performance elevate significantly, especially for weather type. The result illustrate the multi-domain existing in target distribution for both type and severity, and weather type is more important since it introduce more significant different degradation and visual appearance. 
After jointly addressing weather type and severity with CL and MQRL, the performance is further improved. Marginal Quality Ranking LossTo explore more on guidance of severity feature extraction, we validate the proposed MQRL by comparing with MRL and directly regress IQA score. We random sample 3000 image pairs from All-Weather dataset, and Fig. <ref> (b) depicted the GT and predicted IQA score distribution guided by each loss. MQRL and MRL predicted incorrect bias compared with GT. After min-max normalization as shown in Fig <ref> (c), it can be seen that the ranker guided by MQRL can capture the correct variance, which is more related to interval, of ground truth distribution. Ranker guided by MRL fail to predict correct variance, and learning by directly regress IQA score result accurately predict bias and variance. However, note that as result shown in the bottom of Table. <ref>, severity extraction guided by directly regress IQA score actually deteriorate the image restoration performance since it waste too much effort to regress the bias of target distribution. Model guided by MRL perform better than directly regress IQA since provide the ranking information and alleviate the burden to learn exactly value of IQA score, but still slightly degraded the model since lack of learning the information of interval of IQA score. After utilizing MQRL, the model can obtain more informative guidance, and converge to better optima, that demonstrates the key information for learning weather severity is ranking and interval of severity signal, i.e. IQA score. §.§ Iterative RestorationProgressive Restoration Because of the type and severity aware handling, our model are able to dynamically distinguish severity and restoration level, and progressively improve the image quality while iteratively input the restored image into model. Visualization can be found in Supp.. Restoration Level Modulation Thanks to the success of progressive restoration, we further perform restoration level modulation to demonstrate the representative of extracted severity information. Since the severity information is only meaningful in latent space, we utilize latent space manipulation as <cit.> to modulate restoration level. Specifically, we iteratively feed the degraded image I into model to find the meaningful direction in latent space:F_s, F_t = DIE(I), I^' = R(I, F_s, F_t) F_s^' = DIE(I^')where F_s^' is predicted severity for restored image I^' and represent the smaller restoration level compared with F_s, that the vector between F_s and F_s^' could form a meaningful direction of modulating restoration level in latent space. Then restoration level modulation can be performed by: F_s^” = F_s + α× (F_s^'-F_s) I_mod = R(I, F_s^”, F_t)where α is modulating parameter to perform interpolation or extrapolation of two severity F_s and F_s^', and obtain different restoration level result I_mod. As shown in Fig.<ref>, by tuning the α, the model can generate different restoration level result and demonstrate the controllability and representative of extracted severity. Combination of Multiple Weather Removal To be an interesting finding, we observed the proposed method trained on multiple type of single degradation data can restore combined type of multiple degradation, which is unseen type data for the trained model. Concretely, for image degraded by N type adverse weather conditions, we can iteratively enhance the image N times to remove all degradation. 
Note that this approach can only perform by blind all-in-one methods since we observed that the order of input weather label does matter for non-blind all-in-one method, i.e. WGWS-Net <cit.>, which make it vulnerable in real-world application. Fig. <ref> shown the comparison with SOTA models, our method can restore more visual pleasing result. We also observed our proposed is more stable and robust compared with other methods, the reason might be our method can be better separate different type of degradation and explicitly handle different severity, and avoid the misuse of incorrect filter for the input degradation image. More detail can be found in Supp.. § LIMITATION Despite that our proposed method demonstrate superior performance, there are still some limitations, include the proposed MQRL is parameter sensitive, require high computation cost and suffer from blur result for severe occlusion on high frequency area, which also failed for other SOTA methods, the visualization can be found in Supp., and we will explore these limitations in the future toward practical and efficient all-in-one image restoration. § CONCLUSIONIn this paper, we proposed a type and severity aware method for blind all-in-one weather removal, which aims to remove multiple unknown type weather degradation in an unified model. We utilized contrastive loss and proposed novel MQRL to guide the model extract representative weather information, and utilize a bag of technique to inject the information into model. Our proposed method enjoy less parameters and achieve the state-of-the-art performance. Our model even can restored unseen combined multiple weather images, and modulating restoration level. Our future work include solve current limitations and extending to robust unseen multiple degradation restoration toward blind weather removal in the wild. § ACKNOWLEDGEMENTSThis work is supported by NSTC (National Science and Technology Council, Taiwan) 108-2221-E-002-040-MY3. ieeenat_fullname § APPENDIX §.§ Experiment DetailsFor the experiment details, we randomize crop the images with patch size 256×256 and normalized to [0, 1], use AdamW optimizer with betas 0.5 and 0.999, and set the batch size to 2, epoch to 40, and learning rate to 1e^-4 and would gradually decrease after epoch 18. The model would first train by full loss, then finetune by smaller lr and only pixel fidelity loss, which is most crucial for image restoration, for 30 epochs. To provide GT IQA score guidence, we adopt PSNR and clip the value into [0, 50], then inverse and normalize to [0, 1].§.§ Progressive RestorationWe demonstrate the visualization result of progressive restoration in this section. As shown in Fig. <ref>, while iteratively feed the enhanced image into model, the remaining degradation can be progressively removed and obtain more visual pleasing result. §.§ Combination of Multiple Weather RemovalAs mentioned in body text, we observe that our model can perform more stable compared with other state-of-the-art models for visual evaluation. As shown in Fig. <ref>, for the failure cases, our model could preserve input image appearance, in contrast, the other comparison methods produce extremely visual unpleasing result. We also quantitatively validate the observation through report the NIQE <cit.>, which is more close to human perceptual than PSNR and SSIM. As shown in Table <ref>, our method achieve best performance on worse case and standard deviation, which imply the better stability of our model. 
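For completeness, the following sketch spells out the two iterative-inference procedures described in the body text, i.e., latent-space restoration-level modulation and repeated passes for combined degradations. The `die` and `restore` arguments are placeholders for the trained DIE and RestoreNet, and the dummy callables at the end only serve as a smoke test; this is a schematic reading of the equations above rather than released code.

```python
import torch

def modulate_restoration(img, die, restore, alpha=1.0):
    """Restoration-level modulation: F_s'' = F_s + alpha * (F_s' - F_s).

    alpha = 1 corresponds to one ordinary pass; 0 < alpha < 1 interpolates toward
    a weaker restoration, alpha > 1 extrapolates toward a stronger one.
    """
    f_type, f_sev = die(img)              # weather type and severity of the input
    once = restore(img, f_type, f_sev)    # one standard restoration pass
    _, f_sev_next = die(once)             # severity assigned to the model's own output
    f_sev_mod = f_sev + alpha * (f_sev_next - f_sev)
    return restore(img, f_type, f_sev_mod)

def remove_combined_degradations(img, die, restore, num_passes=2):
    """Iteratively re-feed the output to handle images with several weather types."""
    out = img
    for _ in range(num_passes):
        f_type, f_sev = die(out)
        out = restore(out, f_type, f_sev)
    return out

# Smoke test with dummy callables standing in for DIE / RestoreNet.
dummy_die = lambda x: (x.mean(dim=1, keepdim=True), x.mean(dim=(1, 2, 3)))
dummy_restore = lambda x, f_t, f_s: 0.9 * x
img = torch.rand(1, 3, 32, 32)
print(modulate_restoration(img, dummy_die, dummy_restore, alpha=1.5).shape)
```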
§.§ Limitation We observe that the state-of-the-art models, including ours, still suffer from blur artifacts under severe occlusion in high-frequency areas. As shown in Fig. <ref>, the same problem is shared by almost all methods <cit.>. | http://arxiv.org/abs/2310.18293v1 | {
"authors": [
"Yu-Wei Chen",
"Soo-Chang Pei"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20231027172955",
"title": "Always Clear Days: Degradation Type and Severity Aware All-In-One Adverse Weather Removal"
} |
[cor1]Corresponding author.IAM,HS]Balduin Katzer IPD]Daniel Betsche IPD]Klemens Böhm IAM]Daniel Weygand IAM,HS]Katrin Schulzcor1[IAM]Karlsruhe Institute of Technology (KIT), Institute for Applied Materials (IAM),Kaiserstr. 12,76131 Karlsruhe, Germany [HS]Karlsruhe University of Applied Sciences, Moltkestr. 30, 76133, Karlsruhe, Germany [IPD]Karlsruhe Institute of Technology (KIT), Institute for Program Structures and Data Organization (IPD),Am Fasanengarten 5,76131 Karlsruhe, Germany [email protected] Three-dimensional dislocation networks control the mechanical properties such as strain hardening of crystals. Due to the complexity of dislocation networks and their temporal evolution, analysis tools are needed that fully resolve the dynamic processes of the intrinsic dislocation graph structure. We propose the use of a graph database for the analysis of three-dimensional dislocation networks obtained from discrete dislocation dynamics simulations. This makes it possible to extract (sub-)graphs and their features with relative ease. That allows for a more holistic view of the evolution of dislocation networks and for the extraction of homogenized graph features to be incorporated into continuum formulation. As an illustration, we describe the static and dynamic analysis of spatio-temporal dislocation graphs as well as graph feature analysis. dislocation network, dislocation dynamics, graph database, graph theory, dislocation mobilityIn dislocation theory, it is fundamental to understand how dislocations evolve, multiply or stabilize due to mutual interaction during plastic straining. At mesoscopic length scales, dislocations are strongly interconnected with each other.This leads to the formation of complex three-dimensional dislocation networks. The evolution behaviour of dislocation networks is fundamental for the resulting material properties <cit.>. Dislocation networks and networks in general possess a graph structure <cit.>. Thus, one can subject it to many graph algorithms and graph frameworks <cit.>.In this work, we distinguish between the three-dimensional dislocation network (spatial topology) and its transformed graph theory based representation (graph topology). For the transformation, the pristine three-dimensional data is imported into a graph database retaining all information of the original data, as described later. This yields in the lower-level graph representation a so-called ”Property Graph” <cit.>. The graph database allows us to perform graph analysis and to identify graph structures as well as to extract higher-level graphs, so-called hypergraphs <cit.>. The graph topology is not static since the plastic deformation of a dislocation network incorporates the generation and dissolution of dislocations. During this dynamic behavior the graph topology changes continuously.Thus, one can classify it as a spatio-temporal graph <cit.> with dynamic topology.Previous studies on the evolution of three-dimensional dislocation networks within discrete dislocation dynamics (DDD) simulations have shown the richness of characteristics of these networks <cit.>. Madec et al. <cit.> observed specific dislocation reactions and related them to mechanical properties. First approaches to graph analysis for dislocation networks already characterised defects under cyclic loading <cit.>. However, it has been shown that continuum models either have limited agreement with the data or do not include all details of complex network structures. 
For example, extended continuum theories of dislocation reaction kinetics do only partially fit to three-dimensional DDD data <cit.> or the analysis of complex dislocation mechanisms is only partially conducted for strongly interconnected structures <cit.>. The reason why these complex structures are often left unused is the difficulty in including them in the analysis. Starting with the whole DDD simulation, we record it as a sequence of consecutive system states.Each state of this evolving system is stored as a snapshot of a graph at a discrete point in time. A common technique of comparing two consecutive system states at times t_n and t_n+1 is performing the comparison of the three-dimensional representation visually <cit.> or analysing each individual snapshot statistically <cit.>. Using only individual snapshots does not give way to any quantitative tracking of dislocations.By using a representation as spatio-temporal graphs, we aim at traceability of the dislocation network in space and time in quantitative terms. A similar approach of temporal tracking has been introduced by Bertin et al. <cit.>. It converts graph snapshots to a continuum representation by Nye's tensor <cit.> in order to obtain an ”iso-topology”.This approach avoids the challenges of handling dynamic topology. In this work, we present a graph database containing the graph representation of converted three-dimensional dislocation network structures while preserving temporal and topological information. Using a graph query language (GQL), we formulate our structures of interest as graph patterns with desired features, letting our database find the matches within our graph. Additionally, we can employ graph algorithms to detect interesting dislocation constellations, e.g., shortest paths. The result is a set of subgraphs that matches our graph patterns and feature values. Using a graph representation and performing graph analysis is a powerful tool in other disciplines, e.g., social networks <cit.>, traffic forecasting <cit.> or drug discovery <cit.>. The application of graph database technology to dislocation-based plasticity to perform analyses of complex but repeating structures with little effort, is the objective of this paper.Specifically we want to answer two question for metal plasticity in this paper: (i) ”How do dislocation graphs evolve in time and space” and (ii) ”How do graph features influence the dynamic graph topology”? We present our answers by facilitating static and dynamic analysis of dislocation graphs as well as extraction of graph features within our graph database.We propose a graph database for the analysis of three-dimensional DDD simulations. We use DDD according to <cit.> to conduct simulations of 5 × 5 × 5μm^3 tensile-tested face-centered cubic (fcc) single crystals mimicking aluminum. For a detailed description of the simulation parameter and procedure, see <cit.>. Exemplarily for the DDD simulations, we use a high symmetrical ⟨100⟩ crystal orientation with a strain rate ε̇ of 2000 s^-1. The initial relaxed dislocation network has a dislocation density ρ_0 of 1.2 × 10^13m^-2.We use the graph database management system Neo4J, which is well known as a powerful open-source graph database <cit.>. The process of extracting, transforming, and loading (ETL) our DDD simulation data from its source into our database follows a commonly established procedure <cit.>. 
This process is depicted schematically in <ref>, where the spatial topology of the DDD data is transformed into a graph topology in the graph database. The addition of features and labels to every edge and node turns our data structure into a ”Property Graph” <cit.>. Formally, a graph G = (V,E) is a collection of nodes, also called vertices, v∈ V, and edges e∈ E. By adding a time value to turn each node v into a tuple v = (id, t) of its id and time, one obtains temporal graphs. Analogously, we represent each edge e as tuple e = (id, t). A snapshot is the subset of all nodes and edges in G from the same observed state, i.e., the same point in time t.The transformation of the DDD data is conducted as follows: The three-dimensional DDD data is the spatial topology and consists of discretization nodes and their connection by edges, which represent dislocations. The discretized node data is imported into the graph database, where spatial information such as, e.g., slip system and Burgers vector are stored as features in graph nodes resulting in a graph topology. Similarly, the discretized edge data is imported as graph edges in the graph database connecting the graph nodes. This results in a lower-level graph representation. Within the graph database, we implement additional sets of higher-level graph nodes as e.g. ”junctions” and ”dislocation links”, which are aggregates of all consecutively connected discretization nodes and edges of a same property, as introduced in <cit.>. Consequently, we add the label ”end node” to a lower-level graph node if a dislocation link or a junction starts or ends in this node. This label connects lower-level and higher-level graph representations. The condensed information of links and junctions is stored as feature in the dislocation link, e.g., the line length or bow-out. The bow-out is defined as the link length divided by the Euclidean distance of the two end nodes. Ultimately, our graph database consists of a lower-level graph representation consisting of the pristine DDD data and a higher-level graph representation consisting of end nodes, links and junctions. It should be remarked, that this procedure also allows for an easy implementation of even higher levels in the graph database.For the temporal tracking of the dislocation graph, the creation of the ids is described briefly: The link id is generated by its neighboring junction ids, i.e., the link id remains the same as long as the junction neighbor or the link itself does not interact.The junction id is created by its connected dislocation loop ids, which are already used for multiplication cascade tracking <cit.> and are stored as a feature in the lower-level graph nodes.In computer-science terminology, the dislocation network is a spatio-temporal graph. This gives way to two options for analysis: (i) The static analysis extracts information for each snapshot of the graph individually. (ii) The dynamic analysis extracts information over all snapshots of the graph. Going beyond mere statistical analysis, we make use of the GQL to match graph patterns. Our database is an instance of Neo4j.Consequently, we formulate our queries in the GQL Cypher.This allows for efficient retrieval of graph data using a concise syntax. 
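As an illustration of the kind of retrieval this enables, the snippet below issues two Cypher queries through the official Neo4j Python driver: a per-snapshot (static) aggregate over link features and a pattern match around junction nodes. The connection details, node labels and property names (Link, Junction, EndNode, time, length, bow_out, junction_type) are hypothetical placeholders, since the exact database schema is not spelled out here.

```python
from neo4j import GraphDatabase

# Placeholder connection details for a local Neo4j instance.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Static analysis: mean link length and bow-out for one snapshot (hypothetical schema).
STATIC_QUERY = """
MATCH (l:Link {time: $t})
RETURN avg(l.length) AS mean_length, avg(l.bow_out) AS mean_bow_out
"""

# Graph pattern: Lomer junctions together with the links attached to their end nodes.
LOMER_QUERY = """
MATCH (j:Junction {junction_type: 'Lomer', time: $t})-[:HAS_END]->(e:EndNode)<-[:HAS_END]-(l:Link)
RETURN j.id AS junction, collect(l.length) AS arm_lengths
"""

with driver.session() as session:
    stats = session.run(STATIC_QUERY, t=42).single()
    print(stats["mean_length"], stats["mean_bow_out"])
    for record in session.run(LOMER_QUERY, t=42):
        print(record["junction"], record["arm_lengths"])

driver.close()
```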
To demonstrate the prospects of a graph database coupled with a query language for dislocation graphs, we present results of the static and dynamic analysis as well as some graph feature analyses of the dynamic topology.The static analysis of the dislocation graph extracts statistics of each individual simulation state over time. <ref> shows results of the static analysis of two graph features, dislocation link length and bow-out. Considering the individual states, the link evolution shows a decrease in length and an increase in bow-out while straining the single crystal. This observation holds true for almost the entire spectrum of lengths of dislocation links (<ref>). With this static analysis, we can query for information of each dislocation graph snapshot.The dynamic analysis allows for a different view on the evolution of the graph over time compared to the static analysis. It enables the tracing of the evolution of (sub-)graphs. Based on the dynamic process of generation and dissolution of graph (sub-)structures, the dynamic analysis incorporates graph stability or instability analysis. A (sub-)graph is considered stable as long as its graph topology does not change within at least one time step. For example, physically, a stable graph topology may indicate that there is little or no plastic deformation, while an unstable graph may indicate increased plastic deformation. Dislocation links undergo different length and bow-out changes while straining, as shown in <ref>. Besides dislocation generation, dislocation links can dissolve due to dislocation motion. <ref> shows the dislocation lifetime, i.e., the ”survival” strain Δε of dislocation links with respect to the total strain ε_max = 0.5% of the simulation resulting in a survival period.The dislocation links either exist at the beginning of the simulation or are generated while straining. We observe that the survival period of dislocation links tend to be short. The survival period fits a stretched exponential function of the form A · exp(-(τ_d^-1 x)^β). We choose parameters physically as follows: A is the initial total number of links x(0), τ_d^-1 corresponds to a characteristic decay time (ε̇·Δ t)^-1 and β = 3/7 is an exponent derived from considering short and long-range contributions <cit.>. Additionally, we add a best curve fit function. Both functions are evaluated with the graph data by the coefficient of determination R^2, which is greater than 0.95 in both cases.A key concept in graph theory is the analysis of node degree, which refers to the number of edges connected to a node. In the context of dislocation graphs, the degree is used to quantify the connectivity of end nodes to links or junctions to identify important nodes in the graph.End nodes with different degrees can lead to varying graph behavior. An end node has X connected dislocation links and can have Y junctions (abbreviated by LX_JY).For example, a simple Lomer reaction leads to two end nodes of type L2_J1; a glissile reaction would lead to two L3_J0 end nodes. Thus, we define a set of end node features based on the number of connected links and junctions.We analyze the two end node features of each dislocation link, see <ref>(left). We show the probability of link configurations, i.e. the probability that a link starts with one feature and ends with the same or a different feature. The diagonal entries can correspond to configurations between the same junction type, e.g. 
a link from collinear to collinear (L2_J0), from Lomer to Lomer (L2_J1), or from glissile to glissile (L3_J0), but they can be of different junction types as well. Additionally, there are surface-to-surface links (L1_J0) or single-arm spiral sources (L1_J1) between two Lomer junctions as shown in Motz et al. <cit.>. However, we observe many configurations where the two end nodes of a link consist of different features (off-diagonal entries). The matrix is symmetric, since either end node of a link can serve as starting or end point in our analysis. One frequent off-diagonal example is a link starting at an L3_J0 (e.g. glissile) end node and ending in an L2_J0 (e.g. collinear) end node, or vice versa. Thus, the end node feature based analysis reveals that links with different end node features are frequent, which may influence the mobility. Besides characterizing end nodes of link configurations, we use the graph topology to analyse the evolution of larger graph structures. One interesting configuration is a Lomer junction and its k-nearest neighbors, since Lomer junctions are energetically favorable but can be dissolved after a certain survival period. We investigate whether the distribution of the surrounding end node features of a Lomer junction has an influence on its stability. Therefore, we analyse Lomer junction configurations of the k-nearest neighbors at their first (one time step after creation) and their last (one time step before dissolution) appearance during plastic deformation, restricted to junctions with a survival strain of at least 0.2%. <ref>(right) shows the probability of the Lomer junction (layer 0) end node features at its first and its last appearance, as well as the probability of the features of its first (layer 1) and its second (layer 2) neighboring end nodes. Layer 0 reveals a decreased probability of L2_J1 and an increased probability of L1_J1 for the Lomer junction's last compared to its first appearance. Thus, Lomer arms tend to react with ongoing plastic deformation without unzipping the Lomer junction, which is indicated by the end node features changing from L2_J1 to L1_J1. Layer 1 shows that Lomer arms often end in L2_J0 (e.g. collinear junction), in L1_J1 (Lomer/Hirth junction), or in L3_J0 (e.g. glissile junction) end nodes. We observe a slightly increased probability of L1_J1 and a slightly decreased probability of L2_J0 and L3_J0 for the last compared to the first appearance. The probability of end node features in layer 2 does not indicate an influence on the Lomer junction dissolution. However, layer 0 and layer 1 show larger probability differences between the first and last appearance of a Lomer junction. The presented results are examples of query results that are easy to obtain once the dislocation network is modeled in a graph database. The results demonstrate that a graph database is a promising tool for static and especially for dynamic analysis of dislocation graphs. The inherent mapping of the spatio-temporal features of dislocation networks to a graph representation leads to new insights into the evolution of dislocation networks. Compared to the static analysis of Lomer arms <cit.>, the graph database complements the spatial information with the temporal dimension via graph features. The temporal tracking enables the analysis of generation and dissolution processes of mobile dislocation links by the survival strain analysis (<ref>).
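The stretched-exponential lifetime fit discussed above is straightforward to reproduce once the link survival data have been exported from the database; the snippet below only uses synthetic stand-in data, and the fixed exponent β = 3/7 follows the physically motivated parameter choice quoted in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

BETA = 3.0 / 7.0  # exponent from short- and long-range considerations (held fixed)

def stretched_exp(x, a, inv_tau):
    """A * exp(-((x / tau_d) ** beta)) with beta fixed to 3/7 and inv_tau = 1 / tau_d."""
    return a * np.exp(-((inv_tau * x) ** BETA))

# Stand-in for the exported data: number of links whose survival strain exceeds x.
strain = np.linspace(0.0, 0.5, 50)
rng = np.random.default_rng(0)
counts = 4000.0 * np.exp(-((strain / 0.05) ** BETA)) + rng.normal(0.0, 20.0, strain.size)

popt, _ = curve_fit(stretched_exp, strain, counts, p0=(counts[0], 10.0), bounds=(0.0, np.inf))
pred = stretched_exp(strain, *popt)
r2 = 1.0 - np.sum((counts - pred) ** 2) / np.sum((counts - counts.mean()) ** 2)
print(f"A = {popt[0]:.0f}, 1/tau_d = {popt[1]:.1f}, R^2 = {r2:.3f}")
```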
Extending the static analysis, which deals with each snapshot graph individually (<ref>), the full history of the temporal graph evolution is preserved in the dynamic graph analysis. The stretched exponential function fit of the lifetime of dislocation links indicates strong changes of the graph topology over time (<ref>). The analysis of the link end node degrees yields interesting insights into the dislocation graph. In contrast to simple link configuration concepts, the results shown in <ref>(left) reveal a more complex picture including end node degrees larger than three and links starting and ending in different end node degrees. This motivates the deployment of graph databases for even more detailed analyses of the structure of dislocation networks. Specific structures like multi-junctions or second-order junctions have already been reported several times <cit.>. However, a systematic approach describing complex three-dimensional structures has been missing. Our results show that a variety of other link and junction structures based on characteristics of the end node exist.We show that at creation and just before dissolution of a Lomer junction, the distribution of end node features directly at the Lomer junction changes as well as the distribution of end node features of its first and second nearest neighbors (<ref>(right)). The analysis indicates that the neighboring structure seems to converge after two layers for a Lomer junction and could be seen as a characteristic structure. The importance of the analysis of node degrees higher than three has already been discussed but only analysed to a limited extent , as e.g. for the so-called ”assisted glissile mechanism” as one possible mechanism <cit.>. Therefore, we assume that end node degree analysis can reveal various other complex interaction mechanisms incorporating high node degrees. Future research should demonstrate the derivation of flow rules incorporating dislocation network information such as link length (as in <cit.>) or node degree.Ultimately, the deployment of graph database technology should pave the way to study the inherent dynamic processes. Further algorithms for temporal graphs, like minimal contrast subgraph pattern <cit.>, can help us in understanding the dislocation network evolution. For example, comparing the subgraph of the last appearance of a Lomer junction and its k-nearest neighbors (<ref>(right)) with the subgraph after the dissolution of the Lomer junction can yield insights into junction dissolution processes. Graph machine learning can be used to predict whole graph states, ultimately, with the goal to surrogate modelling. Hereby, the graph representation is useful, since we can convolve the graph into more condensed higher-level graphs (hypergraphs) to reduce the size of the graph for a faster prediction. Finally, we demonstrated the applicability of graph databases to analyse the evolution of dislocation networks, but this technology is applicable to any graph representation from materials science and engineering, which could include converted experimental data <cit.>.§ ACKNOWLEDGEMENT Thefinancialsupport for this work in the context of the DFG research projects SCHU 3074/4-1 and BO 2129/16-1 is gratefully acknowledged. This work was performed on the HoreKa supercomputer funded by the Ministry of Science, Research and the Arts Baden-Württemberg and by the Federal Ministry of Education and Research. 
This work was supported by the Ministry of Science, Research and the Arts Baden-Württemberg, project Algorithm Engineering for the Scalability Challenge (AESC). § DECLARATION OF COMPETING INTEREST The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | http://arxiv.org/abs/2310.18097v1 | {
"authors": [
"Balduin Katzer",
"Daniel Betsche",
"Klemens Böhm",
"Daniel Weygand",
"Katrin Schulz"
],
"categories": [
"cond-mat.mtrl-sci"
],
"primary_category": "cond-mat.mtrl-sci",
"published": "20231027123313",
"title": "A graph database for feature characterization of dislocation networks"
} |
2226User Association and Resource Allocation in Large Language Model Based Mobile Edge Computing System over Wireless Communications Liangxin Qian, Jun ZhaoSchool of Computer Science and Engineering Nanyang Technological [email protected], [email protected] 14, 2024 ===================================================================================================================================================================plain plainIn the rapidly evolving landscape of large language models (LLMs) and mobile edge computing, the need for efficient service delivery to mobile users with constrained computational resources has become paramount. Addressing this, our paper delves into a collaborative framework for model training where user data and model adapters are shared with servers to optimize performance. Within this framework, users initially update the first several layers of the adapters while freezing the other layers of them, leveraging their local datasets. Once this step is complete, these partially trained parameters are transmitted to servers. The servers, equipped with more robust computational capabilities, then update the subsequent layers. After this training, they send the enhanced parameters back to the users. This collaborative training approach ensures that mobile users with limited computational capacities can still benefit from advanced LLM services without being burdened by exhaustive computations. Central to our methodology is the DASHF algorithm, which encapsulates the Dinkelbach algorithm, alternating optimization, semidefinite relaxation (SDR), the Hungarian method, and a pioneering fractional programming technique from a recent IEEE JSAC paper <cit.>. The crux of DASHF is its capability to reformulate an optimization problem as Quadratically Constrained Quadratic Programming (QCQP) via meticulously crafted transformations, making it solvable by SDR and the Hungarian algorithm. Through extensive simulations, we demonstrate the effectiveness of the DASHF algorithm, offering significant insights for the advancement of collaborative LLM service deployments. Large language model, mobile edge computing, wireless communications, resource allocation.§ INTRODUCTIONThe proliferation of large language models (LLMs) marks a monumental leap in the realms of artificial intelligence and natural language processing. These models, with their deep structures and vast parameter sizes, offer capabilities that redefine the benchmarks of machine-human interactions <cit.>. However, the very nature of their size and intricacy means they cannot be effortlessly deployed, especially in constrained environments like mobile devices <cit.>.Mobile Edge Computing (MEC) environments, designed to bring computation closer to the data source, seem like a perfect fit for deploying LLMs. Still, they present their own set of challenges. Mobile devices are constrained by computational resources and battery life, making it strenuous to run these heavyweight LLMs efficiently <cit.>. Additionally, the unpredictability of wireless communication, with its fluctuating data rates and potential for high latency, complicates the seamless integration of LLMs <cit.>.To meet the growing demand for on-the-fly LLM services, there's a pressing need to address these issues. This involves optimizing the LLMs for constrained devices and innovating on the wireless communication front. 
A potential solution lies in a collaborative approach: a synergy where local computations on mobile devices are harmoniously complemented by offloading specific, intensive tasks to more capable servers. Such a paradigm can make the promise of LLMs in MEC environments a tangible reality. §.§ Studied problemIn this paper, we explore the LLM-driven MEC system and introduce the novel concept of the user service experience-cost ratio (ECR), represented as experience score/cost. This metric eloquently captures the balance between user service experience scores and the crucial factors of delay and energy consumption within mobile computing environments. The user service experience score amalgamates a user's wireless and computational resources as perceived by the server. Cost consumption embodies the cumulative delay and energy expenditure of both users and servers. Given the computational challenges posed by LLMs, we hypothesize that users begin by training the initial layers of the adapters with their local data. Once this preliminary training is completed, users send these trained parameters to the servers. Servers, equipped with more computational resources, take over from this point, training the subsequent layers of the adapters. Specifically, servers then assign both wireless and computational resources to each user. This includes bandwidth, user and server transmission power, and the GPU computing resources of users and servers. Upon completion, these refined parameters are then relayed back to the users by servers.§.§ Main contributionsTo the best of our knowledge, our paper is the first to explore user association and resource allocation in the LLM wireless communication scenario. Our contributions include a novel joint optimization problem, the introduction of ECR, and a novel alternating optimization algorithm as follows:∙ Joint Optimization of User-Sercer Adapter Parameter Training Ratio and User Association: We propose a joint optimization problem that optimizes user adapter training offloading and user association for tailored LLM service to users.∙ Introduction of the User Service Experience-Cost Ratio: The concept of the user service experience-cost ratio (ECR) is introduced. ECR quantifies the balance between user service experience scores and the overall delay and energy consumption in the entire uplink and downlink communication. It provides a valuable metric for assessing the trade-off between user experience and resource efficiency.∙ Innovative Alternating Optimization Approach: We propose an innovative Alternating Optimization (AO) approach called DASHF, which represents the combination of the Dinkelbach algorithm, alternating optimization, semidefinite relaxation (SDR), the Hungarian algorithm, and a novel fractional programming (FP) technique by <cit.>published in IEEE JSAC recently. The most challenging part of DASHF is to rewrite an optimization problem as Quadratically Constrained Quadratic Programming (QCQP) via carefully constructed transformations, in order to leverage SDR and the Hungarian algorithm to obtain a solution. Initially, it addresses the optimization of user connections and adapter parameter training ratios as a single QCQP problem. 
Subsequently, it delves into the optimization of communication-computation resource allocation (for bandwidth, transmit power of users and servers, and computing frequency of users and servers), providing an effective solution for the non-convex FP problem.∙ The simulation results substantiate the effectiveness of the proposed DASHF algorithm in achieving the joint optimization of user adapter parameter offloading, resource allocation, experience-cost ratio, and communication-computation resource allocation, demonstrating its practical applicability and benefits.The rest of this paper is organized as follows. The system model and optimization problem formulation are presented in Section <ref>. We propose a novel DASHF algorithm to solve the optimization problem in Section <ref>. The numerical results are provided in Section <ref>. We conclude this paper in Section <ref>.§ LLM-EMPOWERED MEC SYSTEM AND OPTIMIZATION PROBLEM FORMULATIONIn this section, we first introduce the system scenario, then analyze the delay and energy consumption in the system model, then introduce the concept of the user service experience-cost ratio (ECR), and formulate the optimization problem. §.§ System scenarioAs presented in Fig. <ref>, the LLM-based mobile edge computing system contains multiple servers that distribute tailored LLM models or emulators to various mobile users. Given the computational constraints of some mobile users, a hybrid approach is adopted: users train the first few layers locally with their datasets while freezing the other layers. After training, users send these layer parameters of their adapters to the server. Once the server receives those layer parameters, it completes the training of the remaining layers while freezing the layers trained by the users. After training, the server sends the refined parameters back to the users. This collaborative mechanism ensures efficient and personalized LLM services, compensating for individual users' computational limitations.§.§ System modelWe consider a system comprising N mobile users and M LLM servers. We use n and m as indices for a mobile user and an LLM server, respectively, where n ∈𝒩 := {1,2,⋯,N} and m ∈ℳ := {1,2,⋯,M}. We introduce indicator variables x_n,m∈{0,1} to characterize the connection between users and servers; specifically, x_n,m=1 (resp., 0) means that the user is connected (resp., not connected) to the server. Each user is connected to one and only one server; i.e., ∑_m∈ℳ x_n,m = 1. For example, if x_n,m=1, the user n connects only to the server m, and x_n,m'=0 for m'∈ℳ∖{m}. §.§.§ Time consumptionWe consider frequency-division multiple access (FDMA) so that communication among users and servers does not interfere. The transmission rate from user n to the chosen edge server m is r_n,m(b_n,m, p_n) = b_n,mlog_2(1+g_n,mp_n/σ^2b_n,m), where σ^2 is the noise power, b_n,m is the allocated bandwidth between user n and server m, p_n is the transmit power of user n, and g_n,m is the channel gain, which can be further expressed as g_n,m = h_n,m l_n,m, where h_n,m is the large-scale slow-fading component capturing the effects of path loss and shadowing and l_n,m is the small-scale Rayleigh fading. In Parameter-Efficient Fine-Tuning (PEFT) strategies for large language models, the concept is to introduce "adapters" - smaller neural network components. These are placed within the model, enabling task-specific customization while largely keeping the pre-trained parameters unchanged.
By doing this, there's no need for the exhaustive retraining of the complete model. If we consider inserting an adapter between two layers with dimensions d_in and d_out, the design usually involves: 1. A down-projection from d_in to a reduced dimension d_adapt. 2. An up-projection from d_adapt back to d_out. Accounting for weights and biases in these transformations, the total size of the adapter's parameters, d, can be captured as: d = d_in× d_adapt + d_adapt× d_out + d_adapt + d_out. For any given user, represented by n, the parameter size of the adapter, d_n, can differ. This could be due to user-specific requirements or constraints. Hence, d_n links the inherent complexity of the LLM, the architecture of the adapter, and the unique data requirements of user n. User training and sending adapter parameter phases. Based on the above discussion, assume the total adapter parameter size at user n is d_n. The adapter parameter size trained by user n is φ_n d_n, φ_n ∈ [0,1]. User n trains the first φ_n d_n layer parameters with the local datasets. The training time consumed is T^(t_1)_n,m = t_n φ_n d_n e_n/g_n F_n. t_n is the FLOPs of all tokens for each adapter parameter of the user n.e_n is the number of local training epochs. g_n is the available GPU number of user n. F_n is the available GPU computation speed of user n, defined as floating point operations (FLOPs). After local training and the user-server connection algorithm (this can be complicated by choosing the nearest neighbor server sets, then choosing the server with the lowest transmission time, and finally finishing all user-server connections), user n transmits φ_n d_n adapter parameters to the server m. The transmission time from the user n to the server m is T^(t_2)_n,m = x_n,mφ_n d_n ω_b/r_n,m, where ω_b is the bits number used to represent each parameter. For example, if we use the “float32” floating-point number, ω_b will be 32.Server training and returning adapter parameter phases. After receiving the partial adapter parameters φ_n d_n from the user n, the server m trains the remaining part of the adapter with the shared user datasets and the training delay is T^(t_3)_n,m = t_n (1-φ_n) x_n,m d_n e_m/g_m F_m. e_m is the number of server training epochs. g_m is the available GPU number of server m. F_m is the available GPU computation speed of server m. Then, Server m transmits the results to the user n, and the delay is T^(t_4)_n,m = x_n,m(1-φ_n) d_n ω_b/r_m,n, where r_m,n=b_n,mlog_2(1+g_n,mp_m/σ^2b_n,m). We assume the path loss and bandwidth between the downlink and uplink are the same. The time consumed on the server side isT_s,n,m = T^(t_2)_n,m + T^(t_3)_n,m.The time consumed on the user side is T_u,n,m =T^(t_1)_n,m + T^(t_4)_n.Therefore, the total delay will beT_total= T_s,n,m + T_u,n,m = t_n φ_n d_n e_n/g_n F_n + x_n,mφ_n d_nω_b/r_n,m + t_n (1-φ_n)x_n,m d_n e_m/g_m F_m + x_n,m(1-φ_n) d_nω_b/r_m,n.§.§.§ Energy consumptionBased on the delay discussion, we then compute the energy consumption in this system.Energy used for user n training the adapter locally can be calculated by E^(t_1)_n,m = e_nκ_n φ_n d_n t_n F^2_n <cit.>. κ_n is the computational efficiency of user n's GPUs, denoting the power growth rate corresponding to rising computing speeds. Energy used for transmitting data from theuser to the server m is given as E^(t_2)_n,m = p_n T^(t_2)_n,m = p_n x_n,mφ_n d_nω_b/r_n,m.Energy for server training φ_n d_n adapter parameters is given as E^(t_3)_n,m = e_mκ_m x_n,m (1-φ_n) d_n t_n F_m^2. 
κ_m is the computational efficiency of server m's GPUs.Energy caused byserver transmitting trained adapter parameters to user n is E^(t_4)_n,m = p_m T^(t_4)_n,m = p_m x_n,m (1-φ_n) d_nω_b/r_m,n. Thus, the total energy consumption can be formulated as follows:E_total = ∑_n ∈𝒩,m∈ℳ (E_u,n,m + E_s,n,m) =∑_n ∈𝒩,m∈ℳ (E^(t_1)_n,m+E^(t_2)_n,m+E^(t_3)_n,m+E^(t_4)_n,m)= ∑_n ∈𝒩,m∈ℳ (p_n x_n,mφ_n d_nω_b/r_n,m + p_m x_n,m (1-φ_n) d_nω_b/r_m,n+ e_nκ_n φ_n d_n t_n F^2_n + e_mκ_m x_n,m (1-φ_n) d_n t_n F_m^2).§.§.§ User service experience scoreWe denote the service experience score of user n that is connected to server m as:v_n,m = ϖ_1 ln[1+ϖ_2 (p_m/p_max^(m) + F_n,m/F_max^(m) + b_n,m/b_max)],where ϖ_1 determines the range of function value, ϖ_2 is used for normalization of (p_m/p_max^(m) + F_n,m/F_max^(m) + b_n,m/b_max). This function is jointly concave of p_m, F_n,m, and b_n,m <cit.>. This user service experience score function is effective and sensitive in all value ranges of (p_m/p_max^(m) + F_n,m/F_max^(m) + b_n,m/b_max), which can describe each user's subjective experience of the communication and computing resources obtained from the server. §.§ Optimization ProblemUser connection x = (x_n,m)|_n ∈𝒩, m ∈ℳ, φ = (φ_n)|_n ∈𝒩, bandwidth b = (b_n,m)|_n ∈𝒩, m ∈ℳ, transmission power p_u = (p_n)|_n ∈𝒩 and p_s = (p_m)|_m ∈ℳ, and GPU computation speedf_u = (F_n)|_n ∈𝒩 and f_s = (F_m)|_m ∈ℳ. Our goal is to maximize the user service experience-cost ratio (ECR): 𝒱/ω_t T_total + ω_e E_total=∑_n ∈𝒩, m ∈ℳ x_n,m v_n,m/ω_t (T_s,n,m + T_u,n,m) + ω_e (∑_n ∈𝒩,m∈ℳ (E^(t_1)_n,m+E^(t_2)_n,m+E^(t_3)_n,m+E^(t_4)_n,m)),where ω_t and ω_e represent the weight values of delay and energy, respectively. In order to linearize the “maximize" term of T_total, we add an auxiliary variable T,which is constrained to be greater than or equal to T_total. Besides, we utilize Dinkelbach's Algorithm <cit.> by adding an additional variable y, which is obtained from the ECR value in the previous iteration. Then, the fractional programming in the trust-cost ratio is transformed into the following problem: max_x,φ,b,p_u,p_s,f_u,f_s,T{∑_n ∈𝒩, m ∈ℳ[x_n,m v_n,m-y ω_e (E_u,n,m+ E_s,n,m)]}- y ω_t T<ref> s.t.x_n,m∈{0,1}, ∀ n,m ∑_m∈ℳ x_n,m = 1, ∀ n φ_n ∈ [0,1], ∀ n ∑_n∈𝒩 x_n,m b_n,m≤ b_max,∀ mp_n ≤ p^(n)_max, ∀ n ∑_n∈𝒩 x_n,m p_n,m≤ p^(m)_max, ∀ mF_n ≤ F^(n)_max, ∀ n ∑_n∈𝒩 x_n,m F_m≤ F^(m)_max, ∀ mb_n,m≥ 0, p_n ≥0, p_n,m≥ 0, F_n ≥ 0, F_m≥0, ∀ n,mT_s,n,m + T_u,n,m≤ T, ∀ n,m Based on Dinkelbach's Algorithm, we iteratively optimize y and problem (<ref>). Specifically, at the i-th iteration, given y^(i-1), we first obtain x^(i),φ^(i),b^(i),p_u^(i),p_s^(i),f_u^(i),f_s^(i), T^(i) by solving the optimization problem <ref>; then we calculate y^(i) with the given x^(i),φ^(i),b^(i),p_u^(i),p_s^(i),f_u^(i),f_s^(i), T^(i). Repeat the above operations until the solutions converge. In the following section, we consider using the alternating optimization method (AO) to tackle the complex problem (<ref>). Roadmap of the whole algorithm. First, we decompose the outer fractional structure of the original ECR problem using Dinkelbach algorithm and sequentially optimize x,φ and b,p_u,p_s,f_u,f_s using the AO method. In the first step of AO, we fix b,p_u,p_s,f_u,f_s and optimize x,φ,T. We transform the optimization problem in the first step of AO into a Quadratically Constrained Quadratic Program (QCQP) and solve it using Semidefinite Relaxation (SDR) and the Hungarian algorithm. In the second step of AO, we fix x,φ and optimize b,p_u,p_s,f_u,f_s, T. 
During the optimization in the second step of AO, we propose a new fractional programming method to transform this non-convex problem into a convex one. Finally, we calculate y based on the obtained solutions and repeat the aforementioned process until y converges. In this algorithm, since we utilize Dinkelbach's algorithm, alternating optimization, semidefinite relaxation, Hungarian algorithm, and fractional programming, we refer to this algorithm as the DASHF Algorithm.§ OUR PROPOSED DASHF ALGORITHM TO SOLVE THE OPTIMIZATION PROBLEMAssuming that y is given, we need to optimize x,φ,b,p_u,p_s,f_u,f_s, T. In the outermost loops, we iteratively optimize y; In the innermost loops, we iteratively optimize x,φ,b,p_u,p_s,f_u,f_s, T. However, it is still difficult to optimize them in parallel. Thus, we consider operating two inner AO steps to solve it. At the i-th iteration,* Optimize x, φ, T, given b, p_u, p_s, f_u, f_s. Assuming that b^(i-1), p_u^(i-1),p_s^(i-1), f_u^(i-1), f_s^(i-1), y^(i-1) are given, we optimize x^(i), φ^(i), T^(i).* Optimize b,p_u,p_s,f_u,f_s,T, given x, φ. Assuming that x^(i-1), φ^(i-1),y^(i-1) are given, we optimize b^(i),p_u^(i),p_s^(i),f_u^(i),f_s^(i),T^(i). §.§ AO Part 1: Optimizing x, φ,T, given b,p_u,p_s,f_u,f_sGiven b,p_u,p_s,f_u,f_s,T, we optimize x,φ,T. The optimization problem will be: max_x,φ,T ∑_n ∈𝒩, m ∈ℳ[x_n,m v_n,m- yω_e (E_u,n,m+ E_s,n,m)]-y ω_t T <ref> s.t. (<ref>), (<ref>), (<ref>),(<ref>),(<ref>),(<ref>),(<ref>). x_n,m are binary variables and this is a mixed-integer nonlinear programming problem. We rewrite x_n,m∈{0,1}, ∀ n,m as x_n,m(x_n,m-1)=0, ∀ n,m. The optimization problem will be rewritten as: max_x,φ,T∑_n ∈𝒩, m ∈ℳ[x_n,m v_n,m- yω_e (E_u,n,m+ E_s,n,m)]-y ω_t T <ref>s.t.x_n,m(x_n,m-1)=0, ∀ n,m (<ref>), (<ref>),(<ref>),(<ref>),(<ref>),(<ref>). We substitute the expression of E_u,n,m+E_s,n,m into problem (<ref>) and convert the max problem in problem (<ref>) to a min problem (<ref>).min_x,φ,Ty ω_t T + ∑_n ∈𝒩, m ∈ℳ{y ω_e (p_n d_nω_b/r_n,m-p_m d_nω_b/r_m,n-e_m κ_m d_n t_n F_m^2)x_n,mφ_n + [y ω_e(p_m d_nω_b/r_m,n + e_m κ_m d_n t_n F_m^2)-v_n,m]x_n,m + y ω_e(e_n κ_n d_n t_n F_n^2)φ_n}<ref>s.t. (<ref>), (<ref>), (<ref>),(<ref>),(<ref>),(<ref>),(<ref>).Let G_n,m= y ω_e (p_n d_nω_b/r_n,m - p_m d_nω_b/r_m,n - e_m κ_m d_n t_n F_m^2), A_n,m = y ω_e(e_n κ_n d_n t_n F_n^2), and B_n,m = y ω_e(p_m d_nω_b/r_m,n + e_m κ_m d_n t_n F_m^2)-v_n,m.Let 𝐀 = [A_n]|_n ∈𝒩, 𝐁 = [B_n,m]|_n ∈𝒩,m∈ℳ, and 𝐆 = [G_n,m]|_n ∈𝒩,m∈ℳ. The optimization problem (<ref>) can be rewritten as 𝒫_1:min_x,φ,Tyω_t T +∑_n ∈𝒩 A_nφ_n +∑_n ∈𝒩, m ∈ℳ (B_n,m x_n,m +G_n,mx_n,mφ_n) <ref>s.t.(<ref>),(<ref>), (<ref>),(<ref>),(<ref>),(<ref>),(<ref>). This is a quadratically constrained quadratic program (QCQP) problem. Then, we need to get the standard form of the QCQP problem. Let Q=(φ^⊺,x_1^⊺,⋯,x_M^⊺)^⊺, where φ=(φ_1,⋯,φ_N)^⊺ and x_m=(x_1,m,⋯,x_n,m)^⊺, ∀ m∈ℳ. Let e_i denotes (0,⋯,i-th↑1,⋯,0)^⊺_NM+N × 1. Let e_i,j denotes (e_i,⋯,e_j)^⊺, i<j. Then, ∑_n∈𝒩,m∈ℳG_n,mx_n,mφ_n can be expressed as Q^⊺diag(G,0_NM × 1)(e_N+1,NM+N,0_NM× NM+N)^⊺Q.Let 𝐏_0=diag(G,0_NM × 1)(e_N+1,NM+N,0_NM× NM+N)^⊺.∑_n ∈𝒩, m ∈ℳ B_n,m x_n,m can be expressed as 𝐖_0^⊺Q, where 𝐖_0^⊺=(B_1^⊺,⋯,B_M^⊺)e_N+1,NM+N and B_i^⊺=(B_1,i,⋯,B_N,i), ∀ i ∈ℳ. ∑_n ∈𝒩 A_nφ_n can be expressed as 𝐖_1^⊺Q, where 𝐖_1^⊺=(A_1,⋯,A_N)e_1,N.Let P^(T1)_n,m = -d_nω_b/r_m,n + d_nω_b/r_n,m - t_n d_n e_m/g_m F_m, P^(T2)_n = t_n d_n e_n/g_n F_n, and P^(T3)_n,m = t_n d_n e_m/g_m F_m + d_n ω_b/r_m,n. 
Let 𝐏^(𝐓1) = [P^(T1)]|_n ∈𝒩, m ∈ℳ, 𝐏^(𝐓2) = [P^(T2)]|_n ∈𝒩, 𝐏^(𝐓3) = [P^(T3)]|_n ∈𝒩, m ∈ℳ. Let 𝐏^(𝐓4) = diag(P^(T1),0_NM × 1)(e_N+1,NM+N,0_NM× NM+N)^⊺, 𝐏^(𝐓5)_1^⊺=(P^(T2)_1,⋯,P^(T2)_N)e_1,N, 𝐏^(𝐓6)^⊺=(P^(T3)_1^⊺,⋯,P^(T3)_M^⊺)e_N+1,NM+N, where P^(T3)_i^⊺=(P^(T3)_1,i,⋯,P^(T3)_N,i), ∀ i ∈ℳ.Therefore, the optimization problem can be expressed as 𝒫_2:min_Q,TQ^⊺𝐏_0 Q+𝐖_0^⊺Q+𝐖_1^⊺Q+yω_t T<ref> s.t. diag(e_N+1,NM+N^⊺Q)(diag(e_N+1,NM+N^⊺Q)-𝐈)=0diag(e_1,M^⊺e_N+1,NM+N^⊺Q)=𝐈diag(e_1,N^⊺Q)≤𝐈diag(e_1,N^⊺Q)≥0B^⊺e_N+1,NM+N^⊺Q-B_max≤ 0 P^⊺e_N+1,NM+N^⊺Q-P_max^(m)≤ 0 F^⊺e_N+1,NM+N^⊺Q-F_max^(m)≤ 0 Q^⊺𝐏^(T4)Q + 𝐏^(T5)^⊺Q + 𝐏^(T6)^⊺Q≤ T, where e_i,j denotes (e_i,⋯,e_j)^⊺, i<j, B=(b_1,1,⋯,b_n,m)^⊺, P=(P_1,1,⋯,p_n,m)^⊺, and F=(F_1,1,⋯,F_n,m)^⊺.The constraints (<ref>), (<ref>),(<ref>), (<ref>), (<ref>), (<ref>), (<ref>) in 𝒫_1 are transformed into the constraints (<ref>), (<ref>), {(<ref>), (<ref>)}, (<ref>), (<ref>), (<ref>), (<ref>) in 𝒫_2, respectively. Problem (<ref>) is the standard QCQP form. However, it is still non-convex. Then, we need to utilize the semidefinite programming (SDP) method to transform this QCQP problem into a semidefinite relaxation (SDR) problem. Let 𝐒=(Q^⊺,1)^⊺(Q^⊺,1). Let e_i → j denotes (0,⋯,i-th↑1,1,⋯,j-th↑1,0,⋯,0)^⊺, i<j. Then we obtain the SDR problem 𝒫_3: min_S,T Tr(𝐏_1 𝐒)<ref> s.t. Tr(𝐏_2 𝐒)=0 Tr(𝐏_3 𝐒)=0 Tr(𝐏_4 𝐒)≤0 Tr(𝐏_5 𝐒)≤0 Tr(𝐏_6 𝐒)≤0 Tr(𝐏_7 𝐒)≤0 Tr(𝐏_8 𝐒)≤ T 𝐒≽0, where 𝐏_1= ( [𝐏_0 1/2(𝐖_0+𝐖_1); 1/2(𝐖_0+𝐖_1)^⊺ yω_t T ]), 𝐏_2= ( [e_i^⊺e_i -1/2e_i; -1/2e_i^⊺ 0 ]), ∀ i ∈{1,⋯, NM} 𝐏_3= ( [0_NM+N × NM+N 1/2(e_ie_N+1,NM+N^⊺); 1/2(e_ie_N+1,NM+N^⊺)^⊺ -1 ]),∀ i ∈{1,⋯, N} 𝐏_4= ( [ 0_NM+N × NM+N1/2e_i;1/2e_i^⊺-1 ]), ∀ i ∈{1,⋯, N} 𝐏_5= ( [0_NM+N × NM+N 1/2Be_N+1,NM+N; 1/2(Be_N+1,NM+N)^⊺ -B_max ]), 𝐏_6= ( [0_NM+N × NM+N 1/2Pe_N+1,NM+N; 1/2(Pe_N+1,NM+N)^⊺ -P_max^(m) ]), 𝐏_7= ( [0_NM+N × NM+N 1/2Fe_N+1,NM+N; 1/2(Fe_N+1,NM+N)^⊺ -F_max^(m) ]), 𝐏_8= ( [ 𝐏^(T4) 1/2(𝐏^(T5) + 𝐏^(T6)); 1/2(𝐏^(T5)+𝐏^(T6))^⊺0 ]).The constraints (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) in 𝒫_2 are transformed into the constraints (<ref>),(<ref>), (<ref>),(<ref>), (<ref>),(<ref>),(<ref>),(<ref>) in 𝒫_3, respectively. Drop the constraint rank(𝐒)=1 and the objective function and the constraints are all convex. Then this SDR problem will be solved in polynomial time by common convex solvers. By solving this SDR problem, we can get a continuous solution of Q. However, this solution is the lower bound of the optimal solution and it may not guarantee the constraint rank(𝐒)=1. Therefore, we need to use rounding techniques to recover the solution. The latter NM elements in Q is x_n,m, for all n ∈𝒩, m ∈ℳ, which means that user n is fractional connected to server m. Then, find all user n that ∑_m ∈ℳ x_n,m > 1. For these users, modify x_n,m as x_n,m/|∑_m ∈ℳ x_n,m|. Use the Hungarian algorithm <cit.> with augmented zero vectors to find the best matching with the maximum weight and denote this matching as a set 𝒳_matching. For nodes n and m in 𝒳_matching, let x_n,m = 1, else x_n,m = 0, and denote this integer association result as x_♯. Then substitute x_♯ into Problem (<ref>) to obtain the optimal φ.§.§ AO Part 2: Optimizing b,p_u,p_s,f_u,f_s,T, given x, φGiven x and φ, the remaining optimization problem is shown as (<ref>).max_b,p_u,p_s,f_u,f_s,T∑_n ∈𝒩, m ∈ℳ x_n,m v_n,m - y {ω_t T + ω_e {∑_n ∈𝒩 e_n κ_n φ_n d_n t_n F^2_n+∑_n∈𝒩, m∈ℳ{p_n x_n,mφ_n d_nω_b/r_n,m+e_m κ_m x_n,m (1-φ_n) d_n t_n F_n,m^2+ p_m x_n,m (1-φ_n) d_nω_b/r_m,n}}}s.t. 
(<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>).We first Let ℱ(b,p_s,f_u,f_s,T)=∑_n ∈𝒩, m ∈ℳx_n,m v_n,m -y ω_t T-yω_e {∑_n ∈𝒩 e_n κ_n φ_n d_n t_n F^2_n+ ∑_n∈𝒩, m∈ℳ e_m κ_m x_n,m (1-φ_n) d_n t_n F_n,m^2}.It's easy to justify that ℱ(b,p_s,f_u,f_s,T) is concave. Then, the optimization objective function isℱ(b,p_s,f_u,f_s,T)-y ω_e ∑_n ∈𝒩, m ∈ℳ(p_n x_n,mφ_n d_nω_b/r_n,m + p_m x_n,m (1-φ_n) d_n ω_b/r_m,n),where p_n x_n,mφ_n d_nω_b/r_n,m + p_m x_n,mφ_n d_n ω_b/r_m,n is non-convex or concave. Then, according to the fractional programming technique introduced in the Section 4 in <cit.>, we let z_1,n,m=1/2 p_n x_n,mφ_n d_nω_b r_n,m and z_2,n,m=1/2 p_m x_n,mφ_n d_n ω_b r_m,n. The optimization objective function can be expressed as ℱ(b,p_s,f_u,f_s,T)-y ω_e ∑_n ∈𝒩, m ∈ℳ[(p_n x_n,mφ_n d_n)^2z_1,n,m+1/4 (b_n,mlog_2(1+g_n,mp_n/σ^2b_n,m))^2 z_1,n,m + (p_m x_n,m φ_n d_n ω_p)^2 z_2,n,m+ 1/4 (b_m,nlog_2(1+g_m,np_m/σ^2b_m,n))^2 z_2,n,m].The complete transformation optimization problem is shown as (<ref>).max_b,p_u,p_s,f_u,f_s,Tℱ(b,p_s,f_u,f_s,T)-y ω_e ∑_n ∈𝒩, m ∈ℳ[(p_n x_n,mφ_n d_n)^2z_1,n,m+1/4 (b_n,mlog_2(1+g_n,mp_n/σ^2b_n,m))^2 z_1,n,m+(p_m x_n,mφ_n d_n ω_p)^2 z_2,n,m+ 1/4 (b_m,nlog_2(1+g_m,np_m/σ^2b_m,n))^2 z_2,n,m]s.t.(<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>).If z_1 = (z_1,1,1,⋯,z_1,n,m)^⊺ and z_2 = (z_2,1,1,⋯,z_2,n,m)^⊺ is given, the objective function (<ref>) is concave. At the i-th iteration, [y^(i), z_1^(i), z_2^(i)] are first calculated with the solution [b^(i-1),p_u^(i-1),p_s^(i-1),f_u^(i-1),f_s^(i-1)]. Then, [b^(i),p_u^(i),p_s^(i),f_u^(i),f_s^(i)] can be obtained by solving the concave problem (<ref>) with [y^(i), z_1^(i), z_2^(i)]. Thus, the optimization problem is concave and can be solved by common convex solvers. More detailed proofs can be seen in <cit.>. § SIMULATION RESULTSIn this section, we first introduce the default settings for the numerical simulations. Subsequently, we verify the convergence of the proposed DASHF algorithm and compare it with other baselines to validate its effectiveness. Then, we adjust the available communication and computational resources as well as the cost weight parameters to analyze their impacts on the ECR.§.§ Default settingsWe consider a network topology of 1000 m × 1000 m with 10 mobile users and 2 servers. The large-scale fading h_n,m between the user n and server m is modeled as 128.1+37.6 log_10d_n,m, where d_n,m denotes theEuclidean distance between the user n and server m. The small-scale fading is the Rayleigh fading. Gaussian noise power σ^2 is -134dBm. The total bandwidth for each server b_max is 10 MHz. The maximum transmit power of mobile users p_max^(n) is 0.2 W. The maximum transmit power of servers p_max^(m) is 10 W. We assume the GPU resource utilization is 0.55 for users and servers. The maximum GPU computation speed of mobile users F_max^(n) is 19.58 TFLOPs with four GTX 1080 GPUs and that of servers F_max^(m) is 1372.8 TFLOPs with eight A100 GPUs. The effective switched capacitance of mobile users and servers (κ_n and κ_m) is 10^-38. We refer to the adapter parameter sizes in <cit.> and <cit.>. The adapter parameter sizes of mobile users are randomly selected from [1.2, 14] M. To achieve this, pseudorandom values are generated, which follow a standard uniform distribution over the open interval (0,1). These pseudorandom values are then scaled to the range of [1.2, 14] M to determine the specific adapter parameter sizes for each mobile user. 
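For illustration, the short sketch below shows how the uplink rate r_n,m = b_n,m log_2(1+g_n,m p_n/σ^2 b_n,m) from the system model could be evaluated under the default settings just listed; the function name, the assumption that the path-loss distance is expressed in kilometers, and the example distance of 300 m are ours rather than the paper's, and a single Rayleigh realization is drawn per call.

import numpy as np

def uplink_rate_bps(bandwidth_hz, tx_power_w, distance_m, rng=None):
    """Shannon-type rate r = b * log2(1 + g * p / (sigma^2 * b)) under the default settings above."""
    rng = np.random.default_rng() if rng is None else rng
    sigma2_w = 10 ** (-134.0 / 10) / 1e3                        # noise power: -134 dBm -> watts
    path_loss_db = 128.1 + 37.6 * np.log10(distance_m / 1e3)    # large-scale fading (distance assumed in km)
    h = 10 ** (-path_loss_db / 10)                              # linear-scale large-scale attenuation h_{n,m}
    l = rng.exponential(1.0)                                    # one realization of Rayleigh fading power l_{n,m}
    g = h * l                                                   # channel gain g_{n,m} = h_{n,m} * l_{n,m}
    snr = g * tx_power_w / (sigma2_w * bandwidth_hz)
    return bandwidth_hz * np.log2(1.0 + snr)

# Example: one user given the full 10 MHz band and 0.2 W transmit power at a hypothetical 300 m.
rate = uplink_rate_bps(10e6, 0.2, 300.0, rng=np.random.default_rng(0))

Repeated calls with different random seeds average over the small-scale fading, which is how such rates would typically enter a simulation.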
The token data sizes of users are randomly selected from [10, 50] M bits. The parameters of delay and energy consumption (ω_t and ω_e) are 0.5 and 0.005 to keep the two terms in the same order of magnitude. We set ϖ_1 and ϖ_2 as 10000/ln2 and 1/3, respectively, to keep the ECR large enough. We consider the "float32" format to represent each floating-point number, so ω_b is 32. We use the Mosek optimization tool in Matlab to conduct the simulations. §.§ Convergence of proposed AlgorithmsIn this section, we evaluate the convergence of the proposed algorithms. We consider two network topologies with (10 users, 2 servers) and (20 users, 3 servers) and keep other settings as default. The primal objective value is Mosek's estimate of the primal objective when solving the optimization problem; when it converges, the algorithm has converged to one stationary point. Fig. <ref> plots the convergence of Algorithm AO-Part 1, which converges within 15 iterations. Fig. <ref> plots the convergence of Algorithm AO-Part 2, which converges within 9 iterations. Fig. <ref> plots the convergence of the DASHF Algorithm, which converges within 9 iterations. Thus, the proposed DASHF Algorithm is effective in finding one stationary point of Problem (<ref>).§.§ Comparison with baselinesIn this section, we consider four baselines to carry out the comparison experiments. * Random user connection with average resource allocation (RUCAA). In this algorithm, one server is randomly selected for each user. The server equally allocates communication and computational resources among the users connected to it.* Greedy user connection with average resource allocation (GUCAA). In this algorithm, each user selects the server currently serving the smallest number of users. The server distributes communication and computational resources evenly to the users connected to it. * Average resource allocation with user connection optimization (AAUCO). In this algorithm, the communication and computation resources of each LLM server are equally allocated to the users connected to it. Besides, Algorithm <ref> is leveraged to perform user connection optimization.* Greedy user connection with resource allocation optimization (GUCRO). In this algorithm, each user selects the LLM server currently serving the smallest number of users. Besides, Algorithm <ref> is leveraged to perform resource allocation optimization.* Proposed DASHF algorithm. Joint optimization of user connection and resource allocation by utilizing the whole proposed DASHF algorithm.In Fig. <ref>, we compare the resource consumption and ECR of the proposed DASHF Algorithm with the other baselines. The performances of RUCAA and GUCAA are worse since no optimization is utilized. GUCRO and AAUCO perform better than GUCAA, which confirms the effectiveness of the proposed Algorithm AO-Part 1 and Part 2. Furthermore, the ECR of AAUCO is higher than that of GUCRO, which shows that user connection optimization is more effective than resource optimization in this case. The time consumption of the proposed DASHF algorithm is the lowest of these five methods, its energy consumption is also low (just higher than AAUCO), and its ECR is the highest. This results from the benefits of the joint optimization of user connection and resource allocation.§.§ ECR versus the total bandwidthWe consider the total bandwidth from 10 MHz to 100 MHz to test the ECR under different total bandwidths. Other parameters are fixed as default settings. Fig.
<ref> reveals distinct algorithmic performance trends, with the proposed DASHF method consistently outperforming GUCRO, AAUCO, RUCAA, and GUCAA in terms of the ECR. Notably, the optimization algorithms (GUCRO and AAUCO) demonstrate superior or close performance compared to the non-optimization algorithms (RUCAA and GUCAA). AAUCO employs user connection optimization strategies and performs better than RUCAA, GUCRO, and GUCAA. §.§ Impact of cost weights on ECRFig. <ref> features various combinations of (ω_t, ω_e), which signify the trade-off between delay-energy optimization and the user service experience score. As the (ω_t, ω_e) values shift, emphasizing either delay or energy, distinct performance outcomes are evident. For instance, when prioritizing energy efficiency (e.g., (ω_t, ω_e) = (0.1, 0.009)), the system achieves lower energy consumption but at the expense of higher delay, resulting in a moderate ECR value. Conversely, balanced settings (e.g., (ω_t, ω_e) = (0.5, 0.005)) lead to lower delay and slightly higher energy consumption, yielding a high ECR value. These findings underscore the sensitivity of the optimization process to parameter choices and emphasize the importance of tailoring the (ω_t, ω_e) values to meet specific application requirements while carefully considering the trade-offs between delay-energy cost and the experience score.§ CONCLUSIONIn this investigation into LLMs and MEC, we have delved into the intricacies of ensuring efficient LLM service delivery amidst the constraints of wireless communication systems. With their vast linguistic and computational capabilities, the promise of LLMs is now being actualized in real-world applications. Our contributions, as presented in this paper, lay the foundation for seamless collaborative training between mobile users and servers, addressing the challenges of limited computational and communication resources. We optimize resource utilization and ensure robust LLM performance by implementing a framework where initial layers are trained by users and subsequent layers by servers. In this context, the ECR measures collaboration efficiency and resource optimization. The DASHF algorithm, central to our methodology, solidifies these efforts. In conclusion, we anticipate a landscape where mobile edge computing enables ubiquitous and efficient access to advanced LLM services, harmonizing computational constraints with the ever-growing demands of modern applications. | http://arxiv.org/abs/2310.17872v1 | {
"authors": [
"Liangxin Qian",
"Jun Zhao"
],
"categories": [
"cs.IT",
"eess.SP",
"math.IT"
],
"primary_category": "cs.IT",
"published": "20231027032049",
"title": "User Association and Resource Allocation in Large Language Model Based Mobile Edge Computing System over Wireless Communications"
} |
[email protected] Instituto de Física Teórica,Universidade Estadual Paulista, Rua Dr. Bento Teobaldo Ferraz, 271, 01140-070, São Paulo, São Paulo, Brazil [email protected] Instituto de Física Teórica,Universidade Estadual Paulista, Rua Dr. Bento Teobaldo Ferraz, 271, 01140-070, São Paulo, São Paulo, Brazil The understanding of the emergence of classicality has challenged the scientific community since the beginning of quantum mechanics. Among the proposals to resolve this issue is the gravitational self-decoherence mechanism. Despite all efforts, this mechanism has been proven extremely difficult to probe. Here, we propose a simple Stern-Gerlach-like experiment to try it out. Probing gravitational self-decoherence in a Stern-Gerlach interferometer George E. A. Matsas January 14, 2024 ========================================================================Introduction: Explaining how the classical world emerges from the quantum paradigm is a fundamental question that remains elusive <cit.>. The gravitational self-decoherence mechanism idealized by Penrose is among the proposals put forward to resolve this issue <cit.>. According to it, a non-relativistic particle would be ruled by the Schrödinger-Newton equation that incorporates a gravitational self-interacting potential to the usual Schrödinger equation. Despite how simple such a mechanism may seem, the fact that the gravitational self-interacting potential is negligibly small compared to usual external electromagnetic ones makes any resulting deviation, e.g., in the energy spectrum, extremely difficult to trial <cit.>. Thus, instead of looking for stationary solutions of the Schrödinger-Newton equation, we propose using the particle spin as a self-decoherence witness in a simple Stern-Gerlach-like experiment, where the only potential is the self-interacting one. Eventually, we argue that this experiment can be realized with state-of-the-art technology.Schrödinger-Newton equation: Let us consider a non-relativistic particle of mass m described by the normalized state |ψ(t)⟩≡∫ d^3r ψ(r, t) |r ⟩ belonging to a Hilbert space ℋ, where ψ(r, t) = ⟨r |ψ(t)⟩ pertains to the space L^2(ℝ^3) of square-integrable complex functions on ℝ^3, |r ⟩ represents (delta-function) orthonormalized position eigenstates,⟨r |r '⟩≡δ^3(r - r '), and the normalization condition, ||ψ(t)||^2 = 1, leads to ∫ d^3r|𝒫(r, t)|^2 = 1 with𝒫(r, t) ≡ |ψ(r, t)|^2.According to Penrose's proposal <cit.>, ψ(r, t) would be evolved by the Schrödinger-Newton equation i ħ∂/∂ tψ(r, t) = (- ħ^2/2 m∇^2 + V(r, t) + U(r, t)) ψ(r, t), where V(r, t) is the usual external potential and U(r, t) ≡ - G m^2 ∫ d^3r ' 𝒫(r ', t)/|r - r '| is the self-interacting potential with G being the gravitational constant. The suitability of the “self-interacting" qualifying for potential (<ref>) comes from the fact that it depends on the particle state ψ(r, t) itself. While standard quantum mechanics asserts that 𝒫(r, t) is the probability density of finding the particle when (and only when) its position is measured, the presence of U(r, t) in Eq. (<ref>) implies that, to what concerns gravity, a quantum particle would be an extended system distributed according to the mass density m 𝒫(r, t). Hence, yet tiny for elementary particles, potential (<ref>) is at odds with standard quantum mechanics. Discrete superpositions: Now, let us assume that the particle is constrained to N discrete sites labeled by r_j, j = 1, …, N. 
In this case, one must replace the normalized state (<ref>) by |ψ_d(t)⟩ = ∑_j = 1^N ψ_j(t) |r_j ⟩ belonging to the Hilbert space ℋ = ℂ^N, where ψ_j(t) = ⟨r_j |ψ_d(t)⟩∈ℂ, |r_j ⟩ are orthonormalized position eigenstates, ⟨r_j |r_j' ⟩≡δ_j j', and the normalization condition, ||ψ_d(t)||^2 = 1, leads to ∑_j = 1^N P_j(t) = 1withP_j(t)≡|ψ_j(t)|^2, similarly to Eq. (<ref>). As a result, the corresponding Schrödinger-Newton equation (<ref>) is recast for the discrete wave-function amplitudes ψ_j(t) as i ħ∂/∂ tψ_j(t) = (- ħ^2/2 m∇^2 + V_j(t) + U_j(t)) ψ_j(t), where the self-interacting potential isU_j(t) ≡ - G m^2 ∑_j' = 1 j' ≠ j^N P_j'(t)/|r_j - r_j'|(j = 1, …, N), in line with Eq. (<ref>). Let us note that one must impose j' ≠ j to avoid divergent contributions, which are artifacts of the discretization not appearing in Eq. (<ref>) (see, e.g., the straightforward calculation carried on in Ref. <cit.>). By imposing j' ≠ j, our computation underestimates the gravitational self-decoherence implied by potential (<ref>). As a result, experiments capable of discarding Eq. (<ref>) will automatically rule out the original proposal (<ref>).Stern-Gerlach interferometry: Let us now consider the particular case we are interested in, namely, an inertial non-relativistic particle with spin s = 1/2 in a Stern-Gerlach interferometer. Let us begin considering a particle in the eigenstate |↑ ⟩ of S_z, whereS_z |↑ ⟩≡+ ħ/2 |↑ ⟩andS_z |↓ ⟩≡- ħ/2 |↓ ⟩, as usual. In sequence, with the help of a logic gate R_y(θ), |↑ ⟩ is driven into |α(0)⟩ =cos(θ/2) |↑ ⟩ + sin(θ/2) |↓ ⟩ for some θ∈ [0, 2 π). Next, |α(0)⟩ is subject to a Stern-Gerlach magnet that splits it into the space superposition state (see Fig. <ref>) |β(0)⟩ = cos(θ/2) |r_1 ↑ ⟩ + sin(θ/2) |r_2 ↓ ⟩. This state will evolve to|β(t)⟩ =β_1(t) |r_1 ↑ ⟩ + β_2(t) |r_2 ↓ ⟩, with the amplitudes β_j(t) (j = 1, 2) obeying Eq. (<ref>) where the external potential is assumed null: V_j(t) = 0. Moreover, in the experimental conditions that we shall consider further, the kinetic term will be much smaller than the self-interacting potential U_j(t), in which case Eq. (<ref>) simplifies to i ħ∂/∂ tβ_j(t) = U_j(t) β_j(t). We must resolve Eq. (<ref>) for the initial conditions β_1(0) = cos (θ/2), β_2(0) = sin (θ/2), and self-interacting potentials (<ref>), i.e., U_1(t) = - G m^2/d |β_2(t)|^2,U_2(t) = - G m^2/d |β_1(t)|^2, where d ≡ |r_1 - r_2| is the distance that the space superposition is set apart. The solution can be cast asβ_1(t) = cos(θ/2) e^i ω_2(θ) t, β_2(t) = sin(θ/2) e^i ω_1(θ) t, with ω_1(θ) ≡G m^2/ħ dcos^2(θ/2), ω_2(θ) ≡G m^2/ħ dsin^2(θ/2). Eventually, the Stern-Gerlach split is reversed, leading Eq. (<ref>) into |α(t)⟩ = β_1(t) |↑ ⟩ + β_2(t) |↓ ⟩. The final state (<ref>) just depends on the particle mass, initial state, Stern-Gerlach apparatus, and experiment duration [see Eqs. (<ref>) and (<ref>)].For the sake of convenience, let us cast |α(t)⟩ in the eigenstate basis {|→ ⟩, |← ⟩} of the x-axis spin operator S_x, S_x |⇄ ⟩ = ±ħ/2 |⇄ ⟩, in which case Eq. (<ref>) can be rewritten as |α(t)⟩ = 1/√(2)( cos(θ/2) e^i ω_2(θ) t + sin(θ/2) e^i ω_1(θ) t)|→ ⟩+ 1/√(2)( cos(θ/2) e^i ω_2(θ) t - sin(θ/2) e^i ω_1(θ) t) |← ⟩. It is important to note that ||α(t)||^2 = 1, as expected. Then, the probability of obtaining as an outcome +ħ/2 in a measurement for the spin projection along the x axis is P_x_+(θ, t)=|⟨ →|α(t)⟩|^2 = 1/2 + 1/2sinθcos[Δω(θ) t], where Δω(θ) ≡ω_1(θ) - ω_2(θ) = G m^2/ħ dcosθ. 
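As a rough numerical cross-check (our own sketch, not part of the original proposal), the measurement probability and the self-decoherence deviation implied by the expressions above can be evaluated directly; the function names are ours, the physical constants are the standard SI values, and the example mass, separation, and duration are illustrative placeholders rather than the experimental parameters quoted later in the text.

import numpy as np

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
HBAR = 1.055e-34   # reduced Planck constant [J s]

def p_x_plus(theta, t, m, d, self_gravity=True):
    """P_{x+}(theta, t) = 1/2 + (1/2) sin(theta) cos[Delta_omega(theta) t],
    with Delta_omega(theta) = G m^2 cos(theta) / (hbar d); setting G -> 0 recovers standard QM."""
    delta_omega = (G if self_gravity else 0.0) * m**2 * np.cos(theta) / (HBAR * d)
    return 0.5 + 0.5 * np.sin(theta) * np.cos(delta_omega * t)

def deviation(theta, t, m, d):
    """D_{x+} = P_{x+} - P_{x+}^{QM}: the gravitational self-decoherence signature."""
    return p_x_plus(theta, t, m, d) - p_x_plus(theta, t, m, d, self_gravity=False)

# Illustrative placeholder inputs in SI units (not the parameters of the proposed setup):
theta = np.linspace(0.0, 2.0 * np.pi, 9)
mass_kg, separation_m, duration_s = 1e-14, 1e-6, 1.0
signal = deviation(theta, duration_s, mass_kg, separation_m)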
In order to compare P_x_+(θ, t) obtained considering the gravitational self-decoherence potential with the usual quantum-mechanical output, P_x_+^QM(θ, t) ≡lim_G → 0 P_x_+(θ, t), let us define D_x_+(θ, t) ≡ P_x_+(θ, t) - P_x_+^QM(θ, t), =- sinθsin^2 (T cosθ/2), where T=3×10^14 (m/m_P)^2(t/1)/(d/1) and m_P≡√(ħ c/G)≈ 2.2×10^-8 is the reduced Planck mass. Equation (<ref>) makes explicit the challenge posed by thePlanck scale, since for T ≪ 1 no self-decoherence effect would be observable according to Eq. (<ref>). Fortunately, the smallness of m/m_P can be compensated by choosing a setup with a small enough d and a sufficiently long experiment duration t, driving T ∼ 1. Experimental setup: Our experimental setup is inspired by Ref. <cit.>, where a Ytterbium (Yb) microcrystal with spin s = 1/2, radius ℓ∼ 1, mass m ∼ 10^-14, and space superposition set apart by a distance d ∼ 250 is suggested. The optimal conditions assume (see Ref. <cit.>) 0.15 for the microcrystal internal temperature, 0.5 for the environmental temperature, and 10^-15 for the external pressure.This configuration vindicates the procedure of disregarding the kinetic term in Eq. (<ref>) for experiments lasting some seconds. In order to see it, let us evaluate the final kinetic K per potential U_g energy ratio driven by the gravitational attraction between the superposed parts: K/U_g∼G m t^2/2 d^3∼ 10^-14 (t/1)^2, which is tiny for t ∼ 1. Figures <ref> and <ref>plot the probability P_x_+(θ, t) and probability difference D_x_+(θ, t), respectively, as functions of θ for experiments lasting t = 5 and t = 50. One can see that D_x_+(0, t) = D_x_+(π, t) = 0, as it should be, since in these cases the initial state is not spatially superposed: |β(0)⟩|_θ→ 0→ |r_1 ↑ ⟩,|β(0)⟩|_θ→π→ |r_2 ↓ ⟩. Also, D_x_+(π/2, t) = 0, because in this case the superposition is symmetric |β(0) ⟩|_θ→π/2→1/√(2) (|r_1 ↑ ⟩ + |r_2 ↓ ⟩), leading eventually to an undetectable global phase. The other local minima and maxima of D_x_+ (θ,t) will depend on T.In summary, gravitational self-decoherence can be probed under the conditions considered here provided environmental decoherence effects such as scattering with air molecules and blackbody photons <cit.> are suppressed for time intervals of the order of seconds. Despite the experimental challenge it may consist of <cit.>, this should be achievable with present state-of-the-art technology <cit.>. Conclusions: The emergence of the classical world from quantum mechanics has challenged the scientific community for a century today. A thought-provoking proposal is that gravity would be responsible for quantum self-decoherence <cit.>. On the one hand, this would explain why quantum effects would be suppressed in everyday phenomena. On the other hand, such a proposal is intrinsically arduous to probe since external potentials overwhelm the gravitational self-interacting one by many orders of magnitude for typical quantum particles. This drives deviations due to the self-interacting potential, e.g., in the energy spectrum, extremely hard to observe <cit.>. In order to circumvent this difficulty, we propose a simple experimental setup where the only potential is theself-interacting one. In this case, the challenge posed by the Planck scale can be compensated by choosing appropriate time intervals t and space superposition distances d [see Eq. (<ref>)]. This should be achievable with state-of-the-art technology.The authors acknowledge Juan Pêgas for the constructive discussions. G. H. S. A. 
was fully supported by the São Paulo Research Foundation (FAPESP) under grant 2022/08424-3. G. E. A. M. was partially supported by the National Council for Scientific and Technological Development and FAPESP under grants 301508/2022-4 and 2022/10561-9, respectively.arndt14 M. Arndt and K. Hornberger, Testing the limits of quantum mechanical superpositions, Nature Phys. 10, 271 (2014).penrose96 R. Penrose, On gravity’s role in quantum state reduction, Gen. Relativ. Gravit. 28, 581 (1996).penrose98 R. Penrose, Quantum computation, entanglement and state reduction, Phil. Trans. R. Soc. A 356, 1927 (1998).grossardt16 A. Großardt, J. Bateman, H. Ulbricht, and A. Bassi, Optomechanical test of the Schrödinger-Newton equation, Phys. Rev. D 93, 096003 (2016).gan16 C. C. Gan, C. M. Savage, and S. Z. Scully, Optomechanical tests of a Schrödinger-Newton equation for gravitational quantum mechanics, Phys. Rev. D 93, 124049 (2016).bassi16 A. Großardt, J. Bateman, H. Ulbricht, and A. Bassi, Effects of Newtonian gravitational self-interaction in harmonically trapped quantum systems, Sci. Rep. 6, 30840 (2016).silva23 J. V. B. da Silva, G. H. S. Aguiar, and G. E. A. Matsas, Disfavoring the Schrödinger-Newton equation in explaining the emergence of classicality, Phys. Rev. A 108, 012214 (2023).bose17 S. Bose, A. Mazumdar, G. W. Morley, H. Ulbricht, M. Toroš, M. Paternostro, A. A. Geraci, P. F. Barker, M. S. Kim, and G. Milburn, Spin Entanglement Witness for Quantum Gravity, Phys. Rev. Lett. 119, 240401 (2017).peres96 A. Peres, Separability Criterion for Density Matrices, Phys. Rev. Lett. 77, 1413 (1996).horodecki01 M. Horodecki, P. Horodecki, and R. Horodecki, Separability of n-particle mixed states: necessary and sufficient conditions in terms of linear maps, Phys. Lett. A 283, 1 (2001).hyllus05 P. Hyllus, O. Gühne, D. Bruß, and M. Lewenstein, Relations between entanglement witnesses and Bell inequalities, Phys. Rev. A 72, 012321 (2005).guhne09 O. Gühne and G. Tóth, Entanglement detection, Phys. Rep. 474, 1 (2009).chevalier20 H. Chevalier, A. J. Paige, and M. S. Kim, Witnessing the nonclassical nature of gravity in the presence of unknown interactions, Phys. Rev. A 102, 022428 (2020).isart11 O. Romero-Isart, Quantum superposition of massive objects and collapse models, Phys. Rev. A 84, 052121 (2011).schlosshauer19 M. Schlosshauer, Quantum decoherence, Phys. Rep. 831, 1 (2019).kamp20 T. W. van de Kamp, R. J. Marshman, S. Bose, and A. Mazumdar, Quantum gravity witness via entanglement of masses: Casimir screening, Phys. Rev. A 102, 062807 (2020).bar-gill13 N. Bar-Gill, L. M. Pham, A. Jarmola, D. Budker, and R. L. Walsworth, Solid-state electronic spin coherence time approaching one second, Nat. Commun. 4, 1743 (2013).abobeih18 M. H. Abobeih, J. Cramer, M. A. Bakker, N. Kalb, M. Markham, D. J. Twitchen, and T. H. Taminiau, One-second coherence for a single electron spin coupled to a multi-qubit nuclear-spin environment, Nat. Commun. 9, 2552 (2018).marshman22 R. J. Marshman, A. Mazumdar, R. Folman, and S. Bose, Constructing nano-object quantum superpositions with a Stern-Gerlach interferometer, Phys. Rev. Research 4, 023087 (2022).schut22 M. Schut, J. Tilly, R. J. Marshman, S. Bose, and A. Mazumdar, Improving resilience of quantum-gravity-induced entanglement of masses to decoherence using three superpositions, Phys. Rev. A 105, 032411 (2022). | http://arxiv.org/abs/2310.18072v1 | {
"authors": [
"Gabriel H. S. Aguiar",
"George E. A. Matsas"
],
"categories": [
"quant-ph",
"gr-qc",
"hep-th"
],
"primary_category": "quant-ph",
"published": "20231027114029",
"title": "Probing gravitational self-decoherence in a Stern-Gerlach interferometer"
} |
Event Generation and Consistence Test for Physics with Sliced Wasserstein Distance
Chu-Cheng Pan, Xiang Dong, Yu-Chang Sun, Ao-Yan Cheng, Ao-Bo Wang, Yu-Xuan Hu, Hao [email protected]
(Chu-Cheng Pan and Xiang Dong contributed equally to this work.)
School of Physics and Technology, Wuhan University, Wuhan, 430072, Hubei, China
In the field of modern high-energy physics research, there is a growing emphasis on utilizing deep learning techniques to optimize event simulation, thereby expanding the statistical sample size for more accurate physical analysis. Traditional simulation methods often encounter challenges when dealing with complex physical processes and high-dimensional data distributions, resulting in slow performance. To overcome these limitations, we propose a solution based on deep learning with the sliced Wasserstein distance as the loss function. Our method demonstrates its capability for high-precision and large-scale simulations, as well as its effectiveness in handling complex physical processes. By employing an advanced transformer learning architecture, we initiate the learning process from a Monte Carlo sample, and generate high-dimensional data while preserving all original distribution features. The generated data samples have passed the consistence test, which is developed to calculate the confidence of the high-dimensional distributions of the generated data samples through permutation tests. This fast simulation strategy, enabled by deep learning, holds significant potential not only for increasing sample sizes and reducing statistical uncertainties but also for applications in numerical integration, which is crucial in partial wave analysis, high-precision sample checks, and other related fields. It opens up new possibilities for improving event simulation in high-energy physics research.
2023-10-25
==============
§ INTRODUCTIONIn modern high-energy physics research, experimental data are increasingly complex and high-dimensional. For instance, a single particle collision event can generate numerous primary particles, each characterized by parameters such as momentum, energy, charge, and mass <cit.>. Monte Carlo simulation is commonly employed to simulate particle collisions <cit.>, but as the complexity of high-energy particle physics detectors grows, so does the intricacy of the detector simulation programs, resulting in higher costs <cit.>. An example is from the LHCb experiment, where the limited statistics of bb̅→ D^*-3π^± X poses the largest systematic error in verifying lepton universality using B^0→ D^*-τ^+ν_τ <cit.>. This issue will be further exacerbated in future high-energy physics experiments like HL-LHC/CEPC/super-tau-charm <cit.>. Consequently, the demand for simulation data will continue to rise as high-energy physics exploration deepens. Additionally, complex data necessitate sophisticated analysis tools, with partial wave analysis (PWA) being a powerful tool for handling angular distributions and extracting valuable information from intermediate resonances. However, PWA requires substantial simulation data, often requiring massive storage to obtain a sufficient number of independent simulation data samples <cit.>. This places significant demands on computing power and storage systems.
Thus, finding methods to rapidly generate high-precision simulation data and thereby reduce computing and storage requirements has become a crucial research objective. Machine learning has demonstrated immense potential in event generation, with event generators frequently being trained as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) <cit.>. The objective is to learn the features of finite statistical data and rapidly generate large amounts of simulated data with consistent features. However, several challenging problems need to be addressed. In GANs, the generator aims to simulate the target data, while the discriminator tries to distinguish between the generated data and the target data <cit.>. Consequently, the training process can be unstable, and issues like mode collapse may arise <cit.>. On the other hand, VAEs map input data to a probability distribution in latent space through an encoder and then reconstruct the original data space using latent variables and a decoder. VAEs may overlook crucial details in the data, necessitating a balance between the reconstruction error and the prior-distribution error of the latent space <cit.>. When generating Monte Carlo distributions, the loss functions of VAEs and GANs are constructed from individual events, which leads to an inherent problem: their primary focus is to optimize physical laws or the latent-space distribution, rather than directly learning the distribution features of the original physical space. As a result, the generated data may not align well with the target data in terms of probability distribution.To resolve this challenge, we propose the utilization of a novel loss function that captures the overall consistency between the target data and the generated data. However, there is no standardized measure, analogous to the one-dimensional KS test, that directly quantifies the distance between high-dimensional samples, so alternative approaches are needed. Researchers often resort to constructing one-dimensional histogram distributions of relevant physical quantities or creating high-dimensional histograms to compare differences in each interval <cit.>. Nevertheless, this approach often leads to significant sparsity in higher dimensions, complicating data interpretation and analysis. Additionally, it is susceptible to outliers, which can distort the data distribution and hinder our understanding and analysis of the data <cit.>. Furthermore, such a distance metric fails to provide an intuitive measure of the level of consistency between two samples' distributions.The Sliced Wasserstein Distance (SWD) presents a potential solution as an effective loss function. SWD quantifies the disparity between two probability distributions by slicing the probability distribution along multiple random directions, calculating the one-dimensional Wasserstein Distance in each direction, and averaging the distances across all directions to obtain the final SWD. In high-dimensional spaces, directly calculating the Wasserstein Distance becomes computationally complex <cit.>. However, SWD simplifies the high-dimensional problem into a computationally manageable one-dimensional problem, significantly reducing computational complexity and making it an ideal choice for handling high-dimensional data <cit.>, compared with other candidate loss functions such as the energy distance <cit.>, the Earth Mover's Distance <cit.>, and the Chamfer distance <cit.>.
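As an illustration of this construction, the following minimal differentiable sketch (our own, written in PyTorch; the function and argument names are not from the paper) projects both event samples onto random unit directions, sorts the projections along each direction, and averages the resulting one-dimensional transport costs.

import torch

def sliced_wasserstein_distance(x, y, n_projections=128, p=2):
    """Approximate SWD between two samples x, y of shape (n_events, n_features).

    Assumes x and y contain the same number of events; each random unit vector
    defines a one-dimensional slice, sorting the projected samples realizes the
    optimal 1-D coupling, and the per-slice distances are averaged.
    """
    theta = torch.randn(n_projections, x.shape[1], device=x.device, dtype=x.dtype)
    theta = theta / theta.norm(dim=1, keepdim=True)   # random directions on the unit sphere
    proj_x = x @ theta.T                              # (n_events, n_projections)
    proj_y = y @ theta.T
    px, _ = torch.sort(proj_x, dim=0)                 # sorted projections give the 1-D optimal transport plan
    py, _ = torch.sort(proj_y, dim=0)
    return (px - py).abs().pow(p).mean().pow(1.0 / p)

Because sorting realizes the one-dimensional optimal coupling, the whole computation remains differentiable almost everywhere and scales linearly with the number of projections.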
With its continuous differentiability and high computational efficiency, SWD offers a promising approach to address the loss function problem and optimize model parameters to ensure that the generated data distribution closely approximates the real data distribution.By combining the Sliced Wasserstein Distance with a permutation test <cit.>, we have successfully obtained a confidence indicator that quantitatively describes and compares distribution differences between high-dimensional data, as well as tests whether two data batches originate from the same distribution <cit.>. This confidence indicator is crucial in validating the accuracy of our generated data. The combined approach of Sliced Wasserstein Distance and permutation test is a powerful and flexible tool for studying high-dimensional data distributions, enabling comparison of data distributions from different models and assessment of disparities between experimental observations and theoretical predictions. It can provide vital support for practical activities like experimental simulation, data analysis, and serve as a reliable statistical testing method for high-dimensional data.In this study, we used the ψ(2S) decay data obtained from the BES III experiment, where the decay process is ψ(2S) decaying into ϕ and π^+, π^-, and then ϕ further decaying into K^+, K^-. We used the Monte Carlo simulated data in the center-of-mass system and explored a novel way of using generative models, i.e., representing each unit of a single distribution as a token, and using the self-attention mechanism of Transformer to learn the relationships between these tokens <cit.>, thereby gaining a deep understanding of the target multivariate distribution. During the model optimization process, we used the Sliced Wasserstein Distance (SWD) and the Wasserstein distance for physical quantity as loss functions, and adjusted their weights with a gamma value. We measure the distribution gap between the model output and the target data directly instead of only optimizing the latent space like SWAE <cit.>, and finally construct a tool to calculate the confidence index to measure the degree of same distribution of high-dimensional data.The structure of our paper will be divided into three main sections: results, discussion, and methods. In the results section, we will first compare our generated data with Monte Carlo simulated data by using some specific important one-dimensional physical quantities, such as the four-momentum, angles of the final state particles. This will be visually compared through histograms. Subsequently, we will use the Sliced Wasserstein Distance (SWD) and permutation test to calculate the confidence level, showcasing the differences in our generated data results under different model parameter settings, and demonstrating what kind of evaluation standards can accurately assess the data. In the discussion section, we will explore how we can make additional corrections based on specific physical analysis and specific detector information during the generation of instances. For instance, we may need to consider the efficiency of the detector and the dynamic factors of the decay process, among others. Finally, we need to verify whether the generated data could have merely replicated the training data or produced a single sample distribution, which will help validate the generative capacity and diversity of our model. 
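This confidence index can be sketched as follows (our own illustration; it reuses the sliced_wasserstein_distance helper from the previous snippet and assumes the two batches contain the same number of events): the observed SWD between the two samples is ranked against SWDs obtained after repeatedly re-partitioning the pooled events at random.

import torch

def swd_permutation_pvalue(x, y, n_permutations=200, n_projections=128):
    """Fraction of random re-partitions of the pooled events whose SWD is at least
    as large as the observed SWD between x and y (equal-size samples assumed)."""
    d_original = sliced_wasserstein_distance(x, y, n_projections)
    pooled = torch.cat([x, y], dim=0)
    n = x.shape[0]
    count = 0
    for _ in range(n_permutations):
        perm = torch.randperm(pooled.shape[0], device=pooled.device)
        d_perm = sliced_wasserstein_distance(pooled[perm[:n]], pooled[perm[n:]], n_projections)
        count += int(d_perm >= d_original)
    # Under the null hypothesis (same distribution) this p-value is roughly uniform in [0, 1];
    # a small p-value signals that x and y are unlikely to share the same distribution.
    return (count + 1) / (n_permutations + 1)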
In the methods section, we will first provide a detailed introduction to the structure of the target Monte Carlo simulated data we used. Then, we will expound on the theoretical principles of the Sliced Wasserstein Distance and the confidence indicator. Lastly, we will provide a detailed description of the deep learning model structure we used.We find that the modified Sliced Wasserstein Distance can serve as an efficient and universal loss function for measuring the distribution consistency in high-dimensional data, and we demonstrate the performance of the Sliced Wasserstein Distance under different parameter settings. This finding has significant theoretical and practical implications for understanding and improving generative models for high-dimensional data. Our generative model makes it difficult to manually distinguish distribution consistency based on histogram differences in specific dimensions or arbitrary random projections. This result indicates that our model has high accuracy and reliability in simulating the distribution characteristics of high-dimensional data. We use the Sliced Wasserstein Distance and permutation test to calculate the confidence level, providing a rigorous and quantitative method for measuring distribution consistency. The working principle and advantages of this method are that it can provide a quantitative assessment of distribution consistency and valuable feedback for understanding and improving our model.§ RESULTSIn this section, we demonstrate the application of Sliced Wasserstein Distance (SWD) in the generation of events. By generating high-energy physics decay events and comparing the histograms with the target distribution, we verify the effectiveness of the SWD loss function in optimizing deep learning models to learn the features of high-dimensional distributions. Furthermore, we showcase its advantages as a tool for evaluating differences in high-dimensional data. The dataset we have chosen describes a specific high-energy physics decay process, namely the decay of ψ(2S) to ϕ, π^+, and π^-, followed by the decay of ϕ to K^+ and K^-. Although the physical processes involved in this are not complex, the features presented by the decay dynamics and detector response in high-dimensional data are quite complex. This complexity provides an ideal scenario for our research.We designed a Transformer-based model to cater to the characteristics of high-dimensional distributions. The model takes a sequence of random numbers uniformly distributed in one dimension from -1 to 1 as input and outputs high-dimensional data. The output high-dimensional data and the target data are then subjected to loss calculation to optimize the model. We used a dataset containing approximately 3.3 million samples, which includes multiple features and corresponding labels. To explore the impact of hyperparameters on model performance, we divided the dataset into training, validation, and test sets, with a ratio of approximately 1.8 million: 0.5 million: 1 million. We used the AdamW optimizer <cit.>, with a learning rate set to 5×10^-5, epsilon set to 10^-8, and weight decay set to 0.1. For each iteration, we randomly extract batch size (default is 256) x 1024 events as targets for matching and loss calculation. At the same time, for events where certain physical quantities exceed the prior range, such as detector angles and event selection, we will directly cut them off during training to exclude events that are in an unreasonable range. 
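Under our reading of the setup just described, one optimization step could look like the sketch below; the generator interface, the latent length, the list of constrained physical dimensions, and the gamma value are placeholders of ours, the optimizer hyperparameters are the ones quoted above, and the generated and target batches are assumed to hold equal numbers of events after the cuts.

import torch

def wasserstein_1d(a, b):
    """One-dimensional Wasserstein distance between equal-size samples via sorting."""
    return (torch.sort(a).values - torch.sort(b).values).abs().mean()

def training_step(generator, optimizer, target_events, physical_dims, gamma, latent_len=16):
    """One update: map uniform noise in [-1, 1] to events and match the target batch."""
    noise = 2.0 * torch.rand(target_events.shape[0], latent_len,
                             device=target_events.device) - 1.0
    generated = generator(noise)                        # (n_events, n_features)
    loss = sliced_wasserstein_distance(generated, target_events)
    for dim in physical_dims:                           # extra 1-D terms on selected physical quantities, weighted by gamma
        loss = loss + gamma * wasserstein_1d(generated[:, dim], target_events[:, dim])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Optimizer settings quoted in the text:
# optimizer = torch.optim.AdamW(generator.parameters(), lr=5e-5, eps=1e-8, weight_decay=0.1)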
After these cuts, the target data is reduced synchronously, generally to slightly fewer than 256 x 1024 events. We selected histograms of important physical-quantity dimensions to verify the degree of distribution conformity:
- p_x, p_y, p_z, E, p, p_T, θ of ϕ, K^+, K^-, π^+, π^-
- M_ϕ, M_π^+π^-
Data comparison was conducted through histograms to intuitively understand the similarity in distribution shape between the data generated with the SWD loss and the Monte Carlo test-set data. In Fig. <ref>, we find that the distribution shapes of the SWD-generated data and the test-set data are essentially consistent, with only extremely small differences. This initial intuitive comparison provides a preliminary understanding of the similarity in the distribution of the SWD-generated data. To further verify the consistency of the distribution between the simulated data and the test-set data, we used the Kolmogorov-Smirnov (KS) test, a conventional test for one-dimensional data, and the Wasserstein distance (Wd), based on one-dimensional optimal transport, to conduct a one-dimensional same-distribution test. On specific physical-quantity dimensions, the confidence results of the KS test and Wd test confirm that our SWD-generated data and the test-set data are highly consistent in distribution. To comprehensively evaluate the performance of our SWD-generated data, we not only conducted tests on specific physical-quantity dimensions but also applied random one-dimensional linear projections to the free quantities, ignoring factors such as dimensions. This dimensionality-reduction technique allows us to simplify high-dimensional data and to observe and test the correlations among the various distributions in a more intuitive way. Fig. <ref> shows that even under these projections, the simulated data we generate still maintain a high consistency with the target data. Even when the training data comprises 0.25 million events and the comparison is made against a test set of the order of 1 million events, these traditional histogram comparisons make it difficult to see any significant differences.
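A sketch of these one-dimensional cross-checks (our own illustration using SciPy; the number of random directions is arbitrary): each random unit direction defines two projected one-dimensional samples that can be compared with the two-sample KS test and the one-dimensional Wasserstein distance.

import numpy as np
from scipy import stats

def random_projection_checks(x, y, n_directions=20, seed=0):
    """Compare two (n_events, n_features) samples along random 1-D linear projections."""
    rng = np.random.default_rng(seed)
    results = []
    for _ in range(n_directions):
        w = rng.normal(size=x.shape[1])
        w /= np.linalg.norm(w)                      # random unit direction
        px, py = x @ w, y @ w                       # 1-D projected samples
        ks = stats.ks_2samp(px, py)                 # two-sample Kolmogorov-Smirnov test
        wd = stats.wasserstein_distance(px, py)     # 1-D Wasserstein distance
        results.append((ks.pvalue, wd))
    return results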
<ref>, this indicator lets us quantify differences in model performance across parameter settings more accurately, and it can judge whether two high-dimensional datasets follow the same distribution in situations where an overall high-dimensional confidence cannot be read off from the earlier histogram comparisons. This allows us to further test how closely our high-dimensional data comply with the target distribution and to demonstrate how different parameters affect final model performance.

Table <ref> shows the impact of the amount of training data and the batch size on model performance. Larger data volumes help the model learn, but beyond a certain amount additional data may not improve learning noticeably, although it may still pay off when the model faces larger test sets. We also explore the effect of batch size. Because our training strategy differs from conventional deep learning training, a larger batch size makes the sorting-based matching in the loss calculation more accurate, while in principle a smaller batch size amplifies the effect of sampling fluctuations. As the table shows, a batch size that is too small degrades performance considerably; once the batch size is increased to a certain level under our available data volume, the sorting match and model training reach a balance, and further increases bring no obvious gains. In Table <ref> we discuss the impact of different gamma values. A reasonable gamma value fine-tunes the constraint on a single dimension, which can be seen as an optimization of the original SWD and benefits the overall model. If gamma is too large, however, the model may over-focus on specific physical-quantity dimensions and neglect the overall characteristics of the high-dimensional distribution. In summary, these results provide guidance on how to adjust model parameters: we need sufficient training data and a large enough batch size to reduce sampling disturbances, so that the generated data match the target distribution, and a reasonable gamma value so that the model balances learning specific physical-quantity dimensions against the overall high-dimensional structure.

§ DISCUSSION

In this section we discuss training methods and model-testing methods that apply generally to machine-learning event generators for future high-energy physics experiments.

Strong Physical Constraints High-dimensional event data contain strong constraints by construction. For example, the mass of each final-state particle is fixed after particle identification, which is a strong constraint <cit.>. As another example, when using the Phikk data we boost to the center-of-mass system, so the three-momenta of the final-state particles must sum to zero; this is also a strong constraint. Moreover, some physical quantities have well-defined allowed ranges, and values outside these ranges violate physical laws. In previous research, these strong constraints have often been treated as something the model should learn autonomously, and learning them has been taken as evidence that deep learning can acquire physical knowledge.
However, although we can obtain an approximate relationship in this way, there are often a certain number of deviating values. This is unreasonable for actual physical analysis, because these strong constraints are the basic characteristics of physical phenomena and should not be ignored or approximated. Therefore, we advocate taking these basic dynamical relationships as our prior knowledge, allowing the model to only need to learn the degrees of freedom in the dynamics. Then, we can use this prior physical knowledge to solve and obtain all the physical quantities we finally need. In this way, we can describe physical phenomena more accurately and avoid the bias caused by ignoring strong constraints. At the same time, these degrees of freedom will be scaled to -1 to 1 according to the prior range, to fit the commonly used output function tanh in deep learning <cit.>. Using our method, we can not only ensure that the model generates data in accordance with basic physical laws, but also make the model more focused on learning the physical distribution. For some sharp-shaped degrees of freedom such as mass spectra, we can use the sinh-arcsinh distribution transformation to transform them to be smoother <cit.>. This method combines the prior knowledge of physics with the learning process of the deep learning model, ensuring the physical rationality of the model and improving the learning efficiency of the model.Detector Effects In particle physics research, the detection and analysis of high-dimensional continuous distributions is a complex task. Due to the complexity of detector effects, high-dimensional distributions may exhibit non-smooth or spatial defect characteristics. These characteristics are caused by factors such as the geometric shape, sensitivity, and calibration errors of the detector, such as the theta angle distribution and transverse momentum (pt) distribution of particles <cit.>. Meanwhile, in high-energy physics experiments, veto operations are used to reduce background signals to enhance the distinction between signals and backgrounds. However, this operation may directly cause a part of the high-dimensional data to be missing, which may introduce bias and efficiency loss. Although SWD can easily handle generated continuous high-dimensional distributions, under the influence of detector efficiency, due to its initial function nature, the learning process may become difficult and inefficient. These issues bring additional complexity to our model training, but also provide us with opportunities to test the performance of the model in handling complex high-dimensional distributions. To fully understand and let the model learn these distributions, it may be necessary to comprehensively consider the complex interactions of experiments, data processing, and statistical methods, and take appropriate correction and calibration measures. We can adopt some strategies, such as additional learning for physical quantities that are highly affected by detector efficiency. Through such corrections, we can adjust the entire high-dimensional distribution to better understand and describe the intrinsic structure of experimental data. And we use a parameter as the proportion of the correction term, we find that taking an appropriate parameter can make a fine-tuning of the original high-dimensional distribution, so that the data can learn the true high-dimensional data distribution. From the dimensions of physical quantities in Fig. 
<ref> hat have not been put into additional constraints, we do not think that it is just an enhancement of some additional physical quantities, but the entire high-dimensional data can benefit.Measuring Overfitting When the model is overly complex, it may lead to the model understanding the characteristics of the training dataset too deeply, so that it merely replicates the training dataset, instead of learning the entire high-dimensional data distribution. If overfitting occurs, the data generated by the model may be a subset of the training set, and the generalization ability of the model will be greatly reduced. Therefore, in our research, we need to ensure that our generated simulated data do not simply replicate the training data, but learn its distribution and generate new data based on this distribution. To test whether our simulated data have overfitted, we adopted a test method based on the minimum Euclidean distance. We calculated the minimum Euclidean distance from each data point in the test set to all data points in the training set, and conducted statistics on these distances. Then, we compared this statistical distribution with the statistical distribution of the minimum Euclidean distance from the simulated data generated by Sliced Wasserstein Distance (SWD) to all data in the training set. If our SWD model is overfitting, then we would expect to see that the minimum distance between the simulated data and the training set is significantly smaller than the minimum distance between each data point in the test set and the training set. However, our results in Fig.<ref> show that the simulated data generated by SWD do not significantly bias towards the training set data, but are very similar to the minimum distance distribution from the test set to the training set data. This indicates that our model did not simply replicate the training data, but learned its distribution and successfully generated new data based on this distribution. Therefore, we conclude that our simulated data generated by SWD successfully avoided the problem of overfitting, and the distribution properties of its generated data are almost the same as those of the new data generated by the same use of Monte Carlo simulation. This result further proves the effectiveness and reliability of our simulation method.§ METHOD Monte Carlo Data We use the BES III simulation framework to simulate phase-space Monte Carlo, where ψ(2S) is produced in electron-positron collisions and subsequently decays into ϕ, π^+, π^-, with ϕ finally decaying into K^+, K^-. Our data is boosted into the center-of-mass system of ψ(2S), hence we have a total of eight degrees of freedom. We selected M_ϕ, M_π^+ π^-, θ_ϕ, ϕ_ϕ, as well as θ and ϕ of the decayed K^+, K^- in the rest frame of ϕ, and the same applies to the rest frame of π^+ π^- <cit.>. Note that this does not mean that ψ(2S) directly decays into a resonance state of ϕ and π^+π^-, but refers to the dynamical variables in the rest frame of π^+ π^-. For training, we need to record the variable range of the Monte Carlo data. For the final-state particles, we determine their θ range affected by the detector acceptance based on the data. For the mass of ϕ, we select 1.006 < M_ϕ < 1.032. For K^+ π^- and K^- π^+, we have M_Kπ < 0.85, M_K π > 0.95 to remove possible K^* decay events.SWD Theory Optimal Transport is a theory in the field of mathematics that studies how to transfer matter from one distribution to another with the minimum cost. 
Optimal Transport theory is now applied in multiple fields, including image processing, machine learning, economics, etc <cit.>. In the Optimal Transport problem, we need to define a cost function c(x,y) that represents the cost of transferring from point x to point y. The Optimal Transport problem is to find a transfer (or coupling) of probability measures π that minimizes the total transfer cost, i.e., minimizes ∫ c(x, y) dπ (x, y). In many cases, the cost function c(x,y) is set to some power of the distance between x and y. The Optimal Transport distance (or Wasserstein distance) is a way to measure the distance between two probability distributions. Given two probability distributions P and Q, their Wasserstein distance is defined as the cost of the probability measure that achieves the minimum total transfer cost <cit.>.Sliced Wasserstein Distance (SWD) is a variant of the Wasserstein distance, which transforms the high-dimensional Optimal Transport problem into a one-dimensional problem for ease of computation. Specifically, for each randomly chosen direction, SWD projects the distribution onto this direction and then calculates the Wasserstein distance of these one-dimensional distributions. This process is repeated multiple times and the results are averaged to obtain the final distance estimate. First, let us define the one-dimensional Wasserstein distance. For the one-dimensional case, the Wasserstein distance between two distributions P and Q can be defined as the L1 distance between their cumulative distribution functions (CDFs) <cit.>, i.e., WD(P,Q) = ∫ |F_P(x) - F_Q(x)|dx where F_P and F_Q are the cumulative distribution functions of distributions P and Q. For the high-dimensional case, the Sliced Wasserstein Distance is defined through the integral of the one-dimensional Wasserstein distance: SWD(P,Q) = ∫ WD(P_θ, Q_θ) d θ where P_θ and Q_θ are the one-dimensional distributions of distributions P and Q projected onto direction θ. θ is a direction randomly drawn from a uniform distribution. W(P_θ, Q_θ) is the Wasserstein distance between these two one-dimensional distributions. Note that the integral here is over all possible directions θ, i.e., in practice, we usually approximate this integral by sampling a large number of random directions and taking the average. It should be noted that the formula here is for the case of continuous probability distributions. For the discrete case, we assume that we have two sets of points X=x_1, ..., x_n and Y=y_1, ..., y_n, each set is in R^d. We can define the one-dimensional discrete Wasserstein distance as: WD(X_θ, Y_θ) = 1/n∑_i=0^n x_iθ - y_π(i) θ where X_θ and Y_θ are the projections of X and Y in direction θ, and π is a permutation that minimizes the above sum. In the one-dimensional case, such π can be found by sorting the points after projection. This is because in one dimension, the optimal transport scheme is to pair each point x_iθ with its immediate neighbor y_π(i)θ. Then, the Sliced Wasserstein Distance for the discrete case is defined as SWD(X, Y) = E[WD(X_θ, Y_θ)] The expectation is calculated by sampling along multiple random directions θ. We choose the projection dimension in the unit circle, which can make the projection more uniform <cit.>, and then calculate and average the one-dimensional Wasserstein distances in these directions <cit.>. Some revised theories regarding the Sliced Wasserstein Distance have been proposed to better accommodate complex high-dimensional data. 
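The discrete construction just described (random unit directions, projection, sorting, averaging) and the permutation-test confidence indicator used in the Results can be sketched in a few lines of NumPy. This is a minimal illustration rather than the code released with the paper: the function names, the number of projections, and the assumption of equal sample sizes are our own choices.

    import numpy as np

    def sliced_wasserstein(x, y, n_proj=128, rng=None):
        # Monte Carlo estimate of the sliced 1-Wasserstein distance between
        # two point clouds x, y of shape (n, d); this sketch assumes equal n.
        rng = np.random.default_rng(rng)
        assert x.shape == y.shape
        n, d = x.shape
        theta = rng.normal(size=(n_proj, d))                    # random directions
        theta /= np.linalg.norm(theta, axis=1, keepdims=True)   # on the unit sphere
        xp, yp = x @ theta.T, y @ theta.T                       # (n, n_proj) projections
        xp.sort(axis=0)                                         # in 1D, optimal transport
        yp.sort(axis=0)                                         # pairs the sorted samples
        return float(np.mean(np.abs(xp - yp)))                  # average over points and directions

    def swd_permutation_pvalue(x, y, n_perm=200, n_proj=128, rng=None):
        # Permutation p-value: fraction of random re-splits of the pooled sample
        # whose SWD is at least as large as the observed SWD(x, y).
        rng = np.random.default_rng(rng)
        d_original = sliced_wasserstein(x, y, n_proj, rng)
        pooled = np.concatenate([x, y], axis=0)
        n = x.shape[0]
        d_perm = np.empty(n_perm)
        for i in range(n_perm):
            idx = rng.permutation(pooled.shape[0])              # ignore original labels
            d_perm[i] = sliced_wasserstein(pooled[idx[:n]], pooled[idx[n:]], n_proj, rng)
        return float(np.mean(d_perm >= d_original))

The same sorted one-dimensional matching underlies the training loss; in the sketch above it is used only as an evaluation statistic.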
These revised SWD variants have potential applications to event generation in future work <cit.> <cit.> <cit.> <cit.>. To correct for detector effects and for the regions of high-dimensional space removed by event selection, we add corrections for specific physical-quantity dimensions, so our total loss is Loss = SWD(X, Y) + γ· E[WD(X_θ_specific, Y_θ_specific)].

Model Training We employ a Transformer model based on the self-attention mechanism <cit.>. This mechanism lets the model compute relationship weights for every token in the input sequence <cit.>, so it can capture the interrelationships within high-dimensional data; the model is designed to learn and generate complex high-dimensional distributions. Initially, after inputting a latent vector uniformly distributed from -1 to 1, we project it to a vector of size (number of degrees of freedom) x 1024 and reshape it into the desired tokens, as shown in Fig. <ref>. Each token represents an independent unit distribution, much as a sentence is broken into words or morphemes in natural language processing. On this basis, a complex multivariate distribution is treated as a sequence of these unit-distribution tokens. Each token contains 1024 elements, meaning a token carries the information of 1024 points in that dimension. These tokens are fed into the Transformer, whose self-attention mechanism computes the interactions and dependencies of each distribution with the others, giving a global and in-depth view of the multivariate distribution. The trained Transformer is then used to predict or generate new multivariate distributions; because it captures the relationships among the dimensions, it can generate multivariate distributions that meet expectations.

§ CODE AVAILABLE
The code used for the analysis and visualizations in this study is openly available to the research community. We believe in open science and have therefore made our code publicly accessible, so that other researchers can reproduce our results, build on our code, and contribute further developments and improvements. The code, data, and results can be found at our GitHub repository: <https://github.com/caihao/SWD-EvtGen>. For any questions or issues related to the code, please raise an issue on the repository or contact us directly; we will do our best to support your research.

§ ACKNOWLEDGEMENTS
The National Science Foundation of China (nos. 11735010, U1932108, U2032102 and 12061131006) provided the funding for this project; their financial support was instrumental in realizing our research goals. The numerical calculations in this paper were performed on the supercomputing system at the Supercomputing Center of Wuhan University.

§ COMPETING INTERESTS
The authors declare that they have no competing interests.
Typically, a randomized experiment is designed to test a hypothesis about the average treatment effect and sometimes hypotheses about treatment effect variation. The results of such a study may then be used to inform policy and practice for units not in the study. In this paper, we argue that given this use, randomized experiments should instead be designed to predict unit-specific treatment effects in a well-defined population. We then consider how different sampling processes and models affect the bias, variance, and mean squared prediction error of these predictions. The results indicate, for example, that problems of generalizability — differences between samples and populations — can greatly affect bias both in predictive models and in measures of error in these models. We also examine when the average treatment effect estimate outperforms unit-specific treatment effect predictive models and implications of this for planning studies.

Correspondence concerning this article should be addressed to Michalis Mamakos, Dept. of Psychology, Northwestern University, 2029 Sheridan Rd., Evanston, IL 60208 USA. Email: [email protected]

§ INTRODUCTION

In the evidence-based practice (EBP) movement, randomized trials are prioritized since, by design, they provide an unbiased estimate of the average treatment effect of an intervention guyatt1993,chassin1998,imbens2010,sanderson2002,davies1999. As the EBP movement has extended — from medicine into policy, education, and social welfare — it has necessitated the development of new methods for improving randomized trials, including new study designs (e.g., <cit.>); methods for adjusting for biases resulting from attrition, noncompliance, and measurement error (e.g., <cit.>); and methods for improving statistical power (e.g., <cit.>). Adequate statistical power is now routinely required in both grant proposals and publications, and a variety of tutorials, workshops, and software have been developed to help support this goal (e.g., <cit.>). These approaches to statistical power typically involve disaggregating the statistical power or the minimum detectable effect size into "design parameters" (e.g., <cit.>). In cluster randomized trials, for example, these parameters include: the expected effect size (δ), the number of clusters (m), the within cluster sample size (n), the intraclass correlation (ρ), the proportion of between-site variation that can be explained by covariates (R^2), and the proportion of clusters in treatment (π) raudenbush_statistical_1997.
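As a rough illustration of how such design parameters combine, the sketch below evaluates a standard two-level cluster-randomized MDES expression in the PowerUp!-style form. The exact degrees-of-freedom convention, the function name, and the default values are our own assumptions and may differ across sources.

    from scipy import stats

    def mdes_cluster_rct(m, n, rho, r2_between=0.0, r2_within=0.0,
                         pi=0.5, alpha=0.05, power=0.80, g=0):
        # MDES for a two-level cluster-randomized trial with m clusters of size n,
        # intraclass correlation rho, treatment fraction pi, and g cluster-level covariates.
        df = m - g - 2
        multiplier = stats.t.ppf(1 - alpha / 2, df) + stats.t.ppf(power, df)
        variance = (rho * (1 - r2_between) / (pi * (1 - pi) * m)
                    + (1 - rho) * (1 - r2_within) / (pi * (1 - pi) * m * n))
        return multiplier * variance ** 0.5

    # Example: 40 clusters of 50 units each, ICC = 0.15, no covariates.
    print(round(mdes_cluster_rct(m=40, n=50, rho=0.15), 3))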
Software allows researchers to examine how different design parameters affect power, thus providing insights into how to better design their studies. For example, these indicate that if sample sizes are fixed, in order to increase statistical power, one might choose a design with equal allocation (π = 1/2), a sample that is fairly homogeneous (ρ < .05), and an outcome measure that is well aligned to the intervention, thus resulting in a large effect size (δ > 0.5). But the questions EBP asks are broader than that of isolating and testing hypotheses about the average treatment effect. Indeed, EBP asks not “what is the average causal impact of this intervention in this study?” but instead, “what will the effect of this intervention be in [insert setting] or for [insert type of person]?” In the medical community, this is often framed as the need for individual treatment effects, as in precision medicine hodson2016. In the social sciences, the heterogeneity revolution has called into question the stability of causal effects outside of the confines of typical research environments bryan2021. And in education research, we see this in questions regarding “what works, for whom, and under what conditions”, with a focus on helping schools decide which interventions might work for them and their students. In each of these cases, the question is certainly one of causality, but the focus is not on the past, but instead on the future. Similarly, this question asks not about a single average effect, but about unit-specific effects (plural). Over the past decade, three streams of methods developments have buttressed this interest in conditional and unit-specific treatment effects. The first stream has focused on methods for causal prediction. This includes parametric approaches (including regression) and non-parametric, machine learning methods, including random forests, Bayesian additive regression trees (BART), and causal forests (e.g., <cit.>). For example, in 2018, the American Causal Inference Conference’s annual data challenge pitted these methods against one another Kaggle-style (e.g., <cit.>). The second stream has focused on testing hypotheses regarding sources of treatment effect heterogeneity. Here there are questions regarding how to identify true moderators (as opposed to spurious associations; see <cit.>), as well as methods for improving the power of these tests (e.g., <cit.>).The third stream aggregates across this heterogeneity, focusing instead on how to generalize or transport average treatment effects from randomized trials to different populations (e.g., <cit.>). Developments here have focused on how eligibility and sample selection bias can affect both average causal effects and subgroup effects, as well as approaches for reducing this bias. But as these methods become increasingly integrated into practice — thus meeting the promise of EBP — there emerges a disconnect between what randomized trials are designed to do and how they are being used. That is, existing requirements and methods ensure that studies are designed to have adequate power for tests of average treatment effects, yet the data from such trials are being used to predict unit-specific treatment effects. This disconnect provides the motivation for this paper. Here we ask: How would we design randomized trials if, from the beginning, our goal was to predict unit-specific causal effects? 
In this framing, the goal of EBP is prediction, not hypothesis testing, and as such our focus is not on maximizing statistical power but on minimizing prediction error. To make progress on this, we focus here on a two-group simple randomized experiment and predictions based upon parametric models estimated using OLS regression. We do so since this affords closed form expressions that parallel those found in the broader design and power analysis literature for randomized trials. The paper proceeds as follows. In Section <ref>, we provide an overview of current methods for the design and analysis of RCTs. In Section <ref>, we introduce the problem of prediction, focused on predicting unit-specific treatment effects using a parametric model, and derive formulas for measuring the accuracy of these predictions. In Section <ref>, we extend this to the (common) situation in which the estimation (i.e., source, training) sample is not drawn from the prediction (i.e., target, test) population for whom predictions are desired. In Section <ref>, we provide an example illustrating our findings, with a focus on small RCTs. We then conclude the paper with a discussion of the implications for planning studies and for predicting causal effects from RCTs.

§ AVERAGE TREATMENT EFFECTS

We begin by reviewing the literature on power analysis and generalizability, both of which will be central to the focus of this paper. To do so, we focus on the simplest study design — the simple randomized controlled trial (RCT) — in which N units are randomized to a control (n_0) or treatment (n_1) condition respectively. We assume that for every unit i = 1,...,N in the study, there are two potential outcomes, Y_i(0) and Y_i(1), and that, as a result of the Fundamental Problem of Causal Inference (Holland, 1986), we only observe one of these for each unit; i.e., we observe Y_i = Y_i(0)(1-T_i)+Y_i(1)T_i where T_i indicates if unit i was randomly assigned to the treatment condition.

§.§ Designing for Sensitivity

Using the observed data, we can estimate the average treatment effect using
Δ̂ = Y̅_1 - Y̅_0
where Y̅_1 and Y̅_0 are the sample means for those assigned to the treatment (T=1) and comparison (T=0) conditions respectively. It is easily shown that Δ̂ is an unbiased estimate of the sample average treatment effect (SATE). The standard error of the SATE can be estimated using gerber_field_2008
ŜÊ^2(Δ̂) = s_1^2/n_1 + s_0^2/n_0
where s_1^2 and s_0^2 are estimates of the residual variances in the treatment and comparison groups respectively. Notice here that we do not require that the true variances are equal (i.e., σ_0^2 = σ_1^2), though often in the literature on power analysis this is assumed. A test of the null hypothesis that Δ = 0 can be conducted based upon the statistic
t = Δ̂/ŜÊ(Δ̂)
where, under the null hypothesis, t follows a t-distribution with degrees of freedom that can be estimated using a Satterthwaite approximation (since the variances differ).

When designing an RCT, an important consideration is whether the study design – sample size, randomization process, etc. – will have enough sensitivity when estimating the average treatment effect. Design sensitivity can be thought of in terms of standard errors, statistical power, or the minimum detectable effect size (MDES). The development of formulas and rules of thumb in all three cases typically involves simplifying assumptions. For example, it is common to assume that the residuals are normally distributed and share a common variance, σ^2 = σ_1^2 = σ_0^2.
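As a concrete illustration, the estimators above can be computed directly from the formulas as written, without the equal-variance simplification; the sketch below returns the difference in means, its standard error, and the t statistic with Satterthwaite degrees of freedom. The function name and toy data are ours; this is equivalent to a Welch-style two-sample t-test.

    import numpy as np
    from scipy import stats

    def sate_and_test(y1, y0):
        # Difference-in-means estimate of the SATE, its standard error allowing
        # unequal group variances, and the corresponding t statistic and p-value.
        y1, y0 = np.asarray(y1, float), np.asarray(y0, float)
        n1, n0 = len(y1), len(y0)
        delta_hat = y1.mean() - y0.mean()
        v1, v0 = y1.var(ddof=1) / n1, y0.var(ddof=1) / n0
        se = np.sqrt(v1 + v0)
        # Satterthwaite approximation to the degrees of freedom.
        df = (v1 + v0) ** 2 / (v1 ** 2 / (n1 - 1) + v0 ** 2 / (n0 - 1))
        t = delta_hat / se
        p = 2 * stats.t.sf(abs(t), df)
        return delta_hat, se, t, df, p

    rng = np.random.default_rng(0)
    y0 = rng.normal(0.0, 1.0, size=150)   # comparison outcomes
    y1 = rng.normal(0.3, 1.2, size=150)   # treatment outcomes
    print(sate_and_test(y1, y0))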
Under these simplifying assumptions of normally distributed residuals and a common variance, closed form expressions convey the relationship between sensitivity and design parameters. For example, in the simple RCT, the MDES — the smallest possible true effect size that could be detected with 1-β power and Type I error of α bloom_minimum_1995 — can be shown to be dong_powerup!:_2013
MDES = M_df√((1-R_p^2)/(Nπ(1-π))), with df = N - p - 2,
where N = n_0 + n_1 is the total sample size, π = n_1/N is the proportion in treatment, R_p^2 is the proportion of the within group variation that is explained by the p covariates included in the pooled model, and M_df = t_α/2(df) + t_1-β(df) is a multiplier. Notice here that the effect size (and thus the MDES) is standardized in relation to the residual variation, Δ_s = (μ_1 - μ_0)/σ. In more complex designs — e.g., cluster randomized, multisite trials — there are additional design parameters included in the MDES, such as the intraclass correlation (ρ), number of sites (m), and so on raudenbush_statistical_1997.

These formulas for sensitivity can be solved for different parameters. For example, one might have a potential sample in mind and thus may want to solve for N to understand how many units need to be recruited. The formulas also provide insight regarding which design considerations are most consequential. In the standard error and MDES formulas above, for example, it is clear that the degree of residual variation (σ) is consequential: a large degree of residual variation increases both the standard errors and the MDES and reduces the statistical power of the associated hypothesis test. In practice this means that researchers often favor more homogeneous samples (small σ^2). When that itself is not possible, these formulas suggest that including covariates can improve sensitivity — though keeping in mind that there is a push and pull here, with more covariates leading to greater R_p^2 (increased sensitivity) while also reducing the degrees of freedom (reduced sensitivity).

§.§ Designing for Generalizability

In standard texts and methods related to the design of RCTs, the focus is nearly always on issues of sensitivity. This is because it is assumed — explicitly or implicitly — that the sample of N units is itself the focus of the study, that the sample can be conceived of as a random sample from some population, or that the treatment effect is fairly constant. More recently, this focus on internal validity to the exclusion of external validity has been called into question. <cit.> showed that if we are interested in the ATE for a population P (i.e., PATE) — not just the ATE in our sample (i.e., SATE) — then
bias(SATE) = E(SATE) - PATE = Δ_sample + Δ_treatment
where Δ_treatment is bias resulting from non-random assignment (or post-assignment attrition) and Δ_sample is bias resulting from non-random selection of the sample. They show that the ideal for causal generalization is a study with both random sampling and random assignment — a design that has been exceedingly rare in practice. Studies in a variety of fields have followed, indicating that the samples involved in RCTs are typically not representative of the target populations that are likely of interest for policy (e.g., <cit.>). Here it is helpful to understand how this bias arises. Let us return to our potential outcomes framework, where now we add in the role of covariates.
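Before doing so, note that the simple-RCT MDES expression above can also be evaluated directly. The small function below reads the formula literally, mirroring the cluster-level sketch in the Introduction; the name and default values are ours.

    from scipy import stats

    def mdes_simple_rct(N, pi=0.5, r2=0.0, p=0, alpha=0.05, power=0.80):
        # MDES for a simple two-group RCT: M_df * sqrt((1 - R_p^2) / (N * pi * (1 - pi))),
        # with df = N - p - 2 and M_df the sum of the two t critical values.
        df = N - p - 2
        multiplier = stats.t.ppf(1 - alpha / 2, df) + stats.t.ppf(power, df)
        return multiplier * ((1 - r2) / (N * pi * (1 - pi))) ** 0.5

    # 100 units, balanced: without covariates, then with 3 covariates explaining half the variance.
    print(mdes_simple_rct(100), mdes_simple_rct(100, r2=0.5, p=3))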
Here we include a set of p covariates that potentially moderate the treatment effect; put another way, the relationship between each of these covariates and the observed outcome differs for those in treatment versus the comparison condition. Let x_i be a vector with elements x_ik, for k = 1,...,p covariates. It is helpful here to standardize these covariates in relation to the sample S so that x_ik|S = x_ik-μ_x_k|S/σ_x_k|S. Thus we have,Y_i(0)= μ_0 + x_i|S'β_0 + ϵ_0iY_i(1)= μ_1 + x_i|S'β_1 + ϵ_1iwhere we assume E(ϵ_0i | x_i|S) = E(ϵ_1i| x_i|S) = 0. Notice here that because the covariates are centered around the sample mean, the SATE can be defined simply as SATE =μ_1 - μ_0. However, now we can define the PATE for P asPATE= SATE + Δ_sample=SATE + (μ_x|S - μ_x|P)'(β_1 - β_0)where μ_x|S = E_S(x_i) and μ_x|P = E_P(x_i) are vectors of average moderator values in the sample and population respectively.Thus the sample selection bias that results is a weighted average of the covariate specific standardized mean differences between the sample and population. Clearly, what we have written here assumes that all of the relevant covariates are included — what is referred to as a “sampling ignorability condition” tipton_improving_2013, hartman_sample_2015, stuart_use_2011. (Importantly, the standard error of the SATE — even when used to estimate the PATE — is not biased, since here the sampling variation is appropriately quantified with respect to the data collection process.)When there is bias, a variety of methods have been developed to reduce this bias, including the use of weights (inverse probability, entropy), stratification, and regression (see <cit.> for an overview). The application of these methods in practice, however, is often hampered by problems of undercoverage tipton_improving_2013 — parts of the population that are not represented at all in the sample; in related literature this is referred to as a violation of the common support or positivity assumption. When there is undercoverage, it is not possible to estimate the PATE without bias, and thus generalization to a smaller subset of the population may be the best that is possible. But even when all parts of the population are represented, if the sample S is very different from population P, these adjustments tend to result in larger standard errors. That is, the standard error of the adjusted (unbiased) estimate of the PATE may be larger — and significantly so — than the standard error of the unadjusted (biased) SATE estimate. In practice, however, little is said of this bias-variance trade-off, since the focus of RCTs — like in most causal studies — is strongly on reducing bias.The results from these adjustment methods suggest that a better approach, when feasible, is to design the study with one (or more) populations in mind and then to sample to represent this population. tipton_stratified_2014 proposed using k-means cluster analysis to stratify on many possible moderators (since, in advance, which actually moderate is unknown). Within these strata, different selection methods are possible, including random and model-based approaches litwok_selecting_2022. When the focus is not only on estimating the average treatment effect but also on testing hypotheses regarding moderators of effects, tipton_beyond_2021 shows that additional considerations for sampling are needed so that the sample has sufficient variation in the moderators to be tested. 
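The selection-bias term defined above, (μ_x|S - μ_x|P)'(β_1 - β_0), is what these sampling and stratification strategies are trying to keep small. A minimal sketch, assuming the moderator means and coefficient vectors are supplied on a common scale; the function name and example values are ours.

    import numpy as np

    def selection_bias_term(mu_x_sample, mu_x_pop, beta1, beta0):
        # (mu_{x|S} - mu_{x|P})' (beta_1 - beta_0): a weighted sum of sample-vs-population
        # moderator mean differences, weighted by how strongly each moderator shifts the effect.
        gap = np.asarray(mu_x_sample, float) - np.asarray(mu_x_pop, float)
        return float(gap @ (np.asarray(beta1, float) - np.asarray(beta0, float)))

    # Sample over-represents high values of one strong moderator; the second covariate does not moderate.
    print(selection_bias_term(mu_x_sample=[0.4, 0.0], mu_x_pop=[0.0, 0.0],
                              beta1=[0.5, 0.1], beta0=[0.2, 0.1]))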
Finally, it is worth noting that this design approach fixes different parameters than the typical power analysis approach. By fixing the target population, it becomes clear that now the residual variation in treatment effects (σ^2) is fixed. As a result, it makes little sense to choose a more homogeneous sample if the goal is to explicitly generalize the ATE to a more heterogeneous population. In more complex designs, this means that related values — like the intraclass correlation — are also fixed. In practice, this means that larger sample sizes may be necessary and that finding and adjusting for covariates is even more important.§ PREDICTION OF UNIT-SPECIFIC TREATMENT EFFECTSUnderlying the generalizability concern — that the sample and population ATEs differ — is the assumption that treatment effects vary across units. But if treatment effects vary, why is the ATE of interest at all? This question is particularly salient for the EBP field, which provides estimates of ATEs in clearinghouses, encouraging decision-makers to transport these effects from the sample they were estimated on to a perhaps entirely different population. If treatment effects vary, our focus shifts from understanding how effective an intervention might be on average to its effectiveness for a particular unit. This unit might be an individual — e.g., a student — or an aggregate — e.g., a school. This means our goal is one of predicting unit-specific impacts. Recently, there has been considerable development in methods for achieving this goal (for an overview, see <cit.>). For example, Bayesian causal forests — a version of Bayesian Additive Regression Trees (BART) — have been shown to perform well (e.g., <cit.>). In this paper, however, we are focused on design. Our question is: Under what conditions is prediction possible? Where are problems likely to arise? And if this is indeed our goal, how should studies be designed for this purpose? To answer these, in Section <ref>, mirroring the literature on design, we narrow our focus to parametric linear models, which offer closed form expressions. Here we focus on a model that includes all moderators. In Section <ref>, we derive measures of error relevant for prediction. Here we focus on the development of predictions for a population based upon an RCT conducted in a random sample from this population. In Section <ref>, we examine models that include only a subset of the moderators, with a focus on comparing the effect of additional moderators on error. Throughout, our focus is on deriving both formulas that can be useful when planning studies, as well as provide general insights regarding the importance of different parameters. §.§ Specification of the modelTo begin, assume we have a sample S of i = 1,...,N units, where S is a random sample of some population P_A. For each unit i recall that we have defined Y_i(0) as the potential outcome of unit i if this unit is assigned to condition T = 0, and Y_i(1) is the potential outcome of unit i if it is assigned to condition T = 1. Now, we also have available k = 1,..,p covariates that moderate the treatment effect. For simplicity, we center the k = 1,...,p covariates x_ik around the mean in population P_A, μ_x_k|A, and standardize them in relation to the population standard deviation, σ_x_k|A. We denote these standardized covariates using x_ik|A. Here β_0 and β_1 are p-dimensional vectors that relate the covariates to potential outcomes. 
The terms ϵ_i0 and ϵ_i1 are residual errors, with E[ϵ_i0 | x_i|A] = E[ϵ_i1 | x_i|A] = 0, V(ϵ_i0|x_i|A) = σ^2_0|x and V(ϵ_i1|x_i|A) = σ^2_1|x. Using this notation, we can define the individual treatment effect δ_i for unit i as
δ_i = Y_i(1) - Y_i(0) = (μ_1|A - μ_0|A) + x_i|A'(β_1 - β_0) + (ϵ_i1 - ϵ_i0) = [ Δ_A + x_i|A'δ ] + η_i
Notice here that because of the standardization of the covariates, E_A(δ_i) = Δ_A is the ATE in the sample and, because S is a random sample of P_A, it is also the ATE for population P_A. The elements of the vector δ correspond to covariates that moderate the treatment effect. Finally, notice that the part of this final equation in [.] corresponds to the part of the unit-specific treatment effect that is systematic and can thus be predicted, whereas η_i is the part that is idiosyncratic and cannot be predicted (<cit.>). Moving forward, we will assume that E_A[η_i | x_i|A] = 0 and that
τ_A|x^2 = V_A(η_i | x_i) = V_A(ϵ_i1 - ϵ_i0) = σ_1|x^2 + σ_0|x^2 - 2ρ_01|xσ_1|xσ_0|x
Here, the correlation ρ_01|x between the residualized potential outcomes is unknowable because of the Fundamental Problem of Causal Inference holland1986. The fact that it is unknowable means that it is impossible to directly identify τ_A|x^2. Instead, various approaches for bounding and sensitivity have been proposed (e.g., <cit.>).

§.§ Prediction and Error

We now assume that the purpose of our RCT is to build a model to predict δ_i for any unit i in population P_A with a vector of p covariates x_i. To do so, we will use OLS regression, which provides closed form solutions that allow insights necessary for designing studies. In these models, we continue to standardize each of the p covariates x_i with respect to the mean and standard deviation of population P_A; thus, we use the standardized vector x_i|A for prediction. In order to predict the treatment effect for unit i we need to predict each of the potential outcomes. For this, we can use the n_0 and n_1 units in the sample to build separate predictive models, resulting in the equations
Ŷ_i(0) = μ̂_0|A + β̂_0' x_i|A
Ŷ_i(1) = μ̂_1|A + β̂_1' x_i|A
From these Ŷ_i(0) and Ŷ_i(1), the predicted treatment effect for unit i is
δ̂_i = Ŷ_i(1) - Ŷ_i(0) = Δ̂_A + ( β̂_1 - β̂_0 )' x_i|A = Δ̂_A + δ̂' x_i|A
A question is thus how close this predicted effect is to the true treatment effect for unit i,
δ̂_i - δ_i = (Δ̂_A - Δ_A) + (δ̂ - δ)' x_i|A + η_i
In general, it is desired to have the difference δ̂_i - δ_i be as close to zero as possible, as this would indicate an accurate prediction of the treatment effect for unit i.

§.§.§ Measure of error for a specific unit

We need a measure of loss that provides a sense of the precision of this prediction for a specific unit. A common loss function is the squared prediction error (SPE), defined as
SPE(δ̂_i) = E(δ̂_i - δ_i | x_i|A)^2 = V(Δ̂_A) + x_i|A'V(δ̂) x_i|A + τ_A|x^2
This SPE is distinct for each unit i since it depends upon the vector of covariates x_i|A. The fact that the covariates are centered renders the ATE estimate independent of the moderator coefficient estimates. Thus, the second equality involves two terms that have to do with how well the average treatment effect (a function of the intercepts) and the differences in slopes are estimated in the sample of N units, while the third term has to do with the additional idiosyncratic treatment effect variation that is unexplained by the model.
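A minimal sketch of this prediction strategy in plain NumPy: fit the two outcome regressions separately by arm and difference the fitted values to obtain δ̂_i. The toy data-generating process and names are ours; in practice one would also track the residual variances needed for the error measures discussed next.

    import numpy as np

    def fit_treatment_effect_model(X, y, t):
        # Fit Y(0) and Y(1) regressions (intercept + p covariates) separately by arm and
        # return a function mapping covariate rows to predicted unit-specific effects.
        X, y, t = np.asarray(X, float), np.asarray(y, float), np.asarray(t).astype(bool)
        def ols(Xa, ya):
            A = np.column_stack([np.ones(len(Xa)), Xa])
            coef, *_ = np.linalg.lstsq(A, ya, rcond=None)
            return coef
        b0, b1 = ols(X[~t], y[~t]), ols(X[t], y[t])
        def predict_effect(X_new):
            Xn = np.atleast_2d(np.asarray(X_new, float))
            A = np.column_stack([np.ones(len(Xn)), Xn])
            return A @ (b1 - b0)        # hat(delta)_i = hat(Y)_i(1) - hat(Y)_i(0)
        return predict_effect

    # Toy example: one moderator whose slope differs across arms.
    rng = np.random.default_rng(1)
    x = rng.normal(size=(200, 1))
    t = rng.integers(0, 2, size=200).astype(bool)
    y = 1.0 + 0.5 * x[:, 0] + t * (0.3 + 0.4 * x[:, 0]) + rng.normal(scale=0.5, size=200)
    predict = fit_treatment_effect_model(x, y, t)
    print(predict([[-1.0], [0.0], [1.0]]))   # predicted effects at three covariate values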
To further simplify the form of the SPE, we continue to assume that the sample of N units is randomly assigned to a treatment and a comparison condition and that this sample is randomly drawn from population P_A. Under this assumption it can be shown that
SPE(δ̂_i) = ( σ_0|x^2/n_0 + σ_1|x^2/n_1 ) (1 + x_i|A'Σ_x|A^-1x_i|A) + τ_A|x^2
where σ_0|x^2 and σ_1|x^2 are the residual variations in the two potential outcome prediction models, Σ_x|A is the variance-covariance matrix of the standardized covariates in P_A, and τ_A|x^2 is the total idiosyncratic variation in treatment impacts in P_A. For an observed unit i in P_A, we can estimate this SPE using
ŜP̂Ê(δ̂_i) = ( s_0|x^2/n_0 + s_1|x^2/n_1 ) (1 + x_i|A'S_x|A^-1x_i|A) + [ (s_0|x - s_1|x)^2 + 2s_0|xs_1|x(1-ρ_01|x) ]
where the values s_k|x^2 are sample variances estimated in each of the two groups k ∈{0,1}. Remember that this remains a function of ρ_01|x, which is unknowable; in practice, this means that a sensitivity approach may be required. Regardless, this SPE can be used to provide prediction intervals that convey the accuracy of the unit-specific treatment effects. If we assume that the residuals are normally distributed, these can be created using critical values from the normal distribution (z_α/2) and
(δ̂_i - z_α/2√(ŜP̂Ê(δ̂_i)), δ̂_i + z_α/2√(ŜP̂Ê(δ̂_i))).
In some cases such prediction intervals might be quite wide, indicating that while prediction is possible, it is not particularly informative. We will return to this topic in later sections.

§.§.§ Combined measure of error for comparing and planning

Clearly, the SPE varies across units in P_A. For model comparison and for planning purposes, it is therefore helpful to have an aggregate measure of the prediction error for the whole population P_A in need of predicted treatment effects. A natural loss function to use here is the mean squared prediction error (MSPE), which averages the SPE across all units i in population P_A that need predictions,
MSPE(δ̂_i) = E_A[ SPE(δ̂_i) ] = E_A[ E(δ̂_i - δ_i | x_i)^2 ]
Here we use the notation E_A to indicate that this average is across all units in population P_A, which is our focus. The MSPE is a commonly used measure for assessing and comparing predictive models. It can be related to other metrics of model fit, such as Mallow's Cp and the Akaike Information Criterion (AIC), when residuals are assumed to be normally distributed neter_applied_1996, hastie_elements_2009. In the parametric case considered in this paper, the MSPE can be shown to be
MSPE(δ̂_i) = ( σ_0|x^2/n_0 + σ_1|x^2/n_1 ) (1 + p) + τ_A|x^2
which is a function of the residual variation in each group, the number of covariates, and the degree of idiosyncratic variation in effects remaining. Notice that this result is a straightforward extension of results found in standard regression texts. For planning purposes, it is helpful to rewrite this MSPE in different terms. To do so, we first define τ_*^2 = τ_A^2 / σ_0^2 as the total treatment effect variation standardized by the variance in Y(0). By writing Y_i(1) = x_i'(β_0 + δ) + ϵ_i + η_i, we can define two R^2 terms. The first — R_0p^2 — is the proportion of the comparison group variation (σ_0^2) explained by the p covariates, while the second — R_τ p^2 — is the proportion of the treatment effect variation (τ^2) explained by the p covariates. Defining R_-a^2 = 1 - R_a^2, we can rewrite the MSPE as
MSPE(δ̂_i) = (2σ_0^2(1+p)/n)[ R_-0p^2 + τ_*ρ_0η|xR_-τ pR_-0p + τ_*^2R_-τ p^2( 1/2 + n/(2(1+p)) ) ]
A proof for this is provided in Appendix A.
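For planning, this last expression can be evaluated directly. The sketch below treats the quantities that cannot be identified from the trial itself (τ_*^2 and ρ_0η|x) as user-supplied sensitivity inputs; the function name and example values are ours.

    def planning_mspe(n, p, sigma0_sq, r2_0p, r2_tau_p, tau_star_sq, rho_0eta=0.0):
        # MSPE of the p-moderator model under a balanced design with n units per arm,
        # in the planning form above; tau_star_sq and rho_0eta are sensitivity inputs.
        rm0 = (1 - r2_0p) ** 0.5        # R_{-0p}
        rmt = (1 - r2_tau_p) ** 0.5     # R_{-tau p}
        tau_star = tau_star_sq ** 0.5
        bracket = (rm0 ** 2
                   + tau_star * rho_0eta * rmt * rm0
                   + tau_star_sq * rmt ** 2 * (0.5 + n / (2 * (1 + p))))
        return 2 * sigma0_sq * (1 + p) / n * bracket

    # Example: n = 50 per arm, 3 moderators explaining 40% of the outcome variation
    # and 30% of the treatment effect variation, with tau_*^2 = 0.2.
    print(round(planning_mspe(n=50, p=3, sigma0_sq=1.0, r2_0p=0.4,
                              r2_tau_p=0.3, tau_star_sq=0.2), 4))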
Notice here that ρ_0η = Corr(ϵ_0i,η_i) is the correlation between unit specific Y_i(0) and unit specific treatment effects δ_i, after conditioning on the covariates. When this correlation is positive, it indicates a treatment that increases disparities (i.e., larger effects for those with larger Y(0) values). Like ρ_01|x, however, ρ_0η cannot be identified. Writing the MSPE this way reveals the trade-offs between the number of covariates p (that are estimated in each of the regressions) and the degree to which these covariates reduce the residual variation, both in terms of outcomes R_0p^2 and in terms of treatment effect moderators R_τ p^2. Clearly, the inclusion of a covariate can increase the MSPE (through the p+1 term) or decrease it (through the R^2 terms). Thus, the inclusion of covariates that explain outcomes but do not moderate the treatment effect reduces R_0p^2 but does not reduce R_τ p^2. The degree to which this matters, however, depends upon how much relative variation in treatment effects τ_*^2 there is overall.§.§ Model selection and predictionUntil now, we have focused on the general form of the MSPE, for the saturated model that includes all p moderators. Other models are possible, however, including: models with r < p moderators; models in which the effects of covariates are assumed to be the same in both groups (ANCOVA); and a model with no covariates or moderators at all. Here the choice of “best” predictive model might compare a variety of these models, searching for that with the lowest relative MSPE. Here we focus on one important subclass of models: those that assume there to be no moderators of the treatment effect. This includes both the simple average treatment effect estimator (e.g., Δ̂ = Y̅_1 - Y̅_0) and those that include adjustments for covariates (i.e., ANCOVA). We do so because in EBP, (covariate adjusted) ATE estimates are often collected and reported in individual papers and in evidence clearinghouses for use in decision-making regarding the adoption of an intervention. In effect, this approach predicts every unit specific treatment effect with the ATE (i.e., δ̂_̂î = Δ̂_A). A question, then, is if there are conditions under which this ATE estimate may outperform one that provides unit-specific predictions (i.e., using moderators). §.§.§ ANCOVA and Raw Means Models In estimation of an ATE, covariates are often included not as a means of predicting treatment effects, but as a way of reducing residual error. This is often referred to as an ANCOVA adjustment, since it assumes that the effects of the covariates are the same in both groups. This is akin to estimating a single model containing both the treatment and comparison units with an additive treatment, Y_j = β_0 + Δ_A T_i + x_i|A' β + ϵ_iA benefit of this model is that it involves estimation of fewer parameters (p+2 versus 2(p+1)), while also reducing variance. However, this model results in a homogeneous treatment effect — every unit i in population P_A is provided the same predicted treatment effect, Δ̂_A. In order to gain insight regarding this model, we focus on the balanced design in which n_0 = n_1 = n. In this case, the residual variation is pooled across both the treatment and comparison groups, resulting in the residual variation σ^2 = (σ_0^2 + σ_1^2)/2. The inclusion of p covariates reduces this variation by a factor of R_-^2 = (1-R^2). 
We can thus write the MSPE of this model using our notation as,MSPE(δ̂_j | ANCOVA)= σ^2(2+p)R_-^2/2n + τ_A^2 = σ_0^2(2+p)/2n[ R_-0^2 + ρ_0η|xτ_*R_-0 + τ_*^2 (1/2 + 2n/2+p) ]A proof for this is provided in Appendix A. The first equality is straightforward, including two components; the first accounts for error that results from estimating 2+p coefficients from 2n observations, while the second accounts for the true variation in treatment effects. Notice that the inclusion of covariates does not affect this latter variation, since the model assumes a constant effect for all units. The second equality factors out of this a common variance; recall τ_*^2 = τ^2/σ_0^2 is a scaled version of the treatment effect variation.A special case is a model in which no covariates are included. This is the unadjusted sample ATE estimator.In this case, p = 0 and R_0^2 = 0. This results in the MSPE,MSPE(δ̂_j | raw)= σ^2/n + τ^2 = σ_0^2/n[ 1 + ρ_0ητ_* + τ_*^2 (2n+1/2) ]Notice here that in the first equality, the first term of the MSPE (which has to do with estimation error) goes to zero as the sample size increases, while the second term (which is the true variation) does not. §.§.§ Comparing these models Given these models, under what conditions might the ATE provide a better prediction of unit-specific treatment effects than a model that includes moderators?To do so, we continue with our simplifying assumptions. First, we assume that n_0 = n_1 = n, as in a balanced design. Second, we assume that we are comparing nested models in which the same set of p covariates are included in both the ANCOVA and the moderator models. In the ANCOVA model, the p covariates are estimated in a single model that includes all 2n observations; in this model, an estimate of the ATE Δ_A is used to predict every unit specific effect (i.e., δ̂_̂î = Δ̂_A). In the moderator model, instead the relationship between the p covariates and the outcome are estimated separately in the treatment and comparison conditions, and then subtracted to develop a unit-specific predictive model. A question then is when the ANOVA model provides a more accurate prediction than one including moderators. To study this, let τ_*^2 = τ^2 / σ_0^2 be the standardized treatment effect variation. The MSPE for a model with p moderators (Equation <ref>) can be shown to be smaller than the MSPE for a constant treatment effect model that adjusts for p covariates (Equation <ref>) whenR_τ p^2 ≥ 1 -(-(1+p)ρ_0ηR_-0p±√((1+p)^2 ρ_0η^2 R_-0p^2 - (1+p+n)[2(1+p)R_-0p^2 - nMSPE_p ])/τ_*(1+p+n))^2 A proof for this is provided in Appendix A. As this equation shows, our preference for a model that includes moderators depends upon five parameters: the number of moderators p, the per group sample size n, the correlation ρ_0η between comparison outcomes and treatment effects, the proportion of the comparison group variance explained by the covariates (R_0^2), and the degree of treatment effect heterogeneity τ_*^2. In the next subsection we will investigate this empirically.An important question is how this approach to model selection differs from one based on hypothesis testing. In both the hypothesis testing and prediction frameworks, nested models are compared (e.g., ANCOVA vs Moderator). However, the nature of the nested models differs. In the hypothesis testing approach, the null model is one in which there is no moderator relationship at all, i.e., the true τ^2 = 0. In the prediction framework, we do not make this assumption. 
Instead, we assume that there may be variation in effects but we incorrectly model this using a simpler model. Thus, in the ANCOVA model, the values of the coefficients in β are not the values under Y(0), but instead the average values of those under Y(0) and Y(1). Similarly, the residual variation in the nested model is not simply a variance that is common to both Y(0) and Y(1) but instead is the average of these two. Put another way, here our question is which model better fits the data (in terms of minimizing mean squared error), instead of if an assumed model is true breiman_statistical_2001, efron_prediction_2020. §.§.§ Simulation studyIn order to understand under what conditions we might prefer the ANCOVA model, we explore the minimum R_τ p^2 required to prefer the alternative model. To do so, we examine the relationship between R_τ p^2 and the parameters n, p, τ_*^2, and ρ_0η. Importantly, while n and p are known to researchers, both τ_*^2 and ρ_0η are not. We focus on a range of treatment group sizes n that are small to large, including values of n between 10 and 1000. The small values are included since the units of interest may be groups (e.g., clusters, sites) and the studies may involve randomizing these groups; for example, we may desire treatment effect predictions for all sites (schools, hospitals) in a population based upon the results of a cluster-randomized trial. Typical cluster randomized trials can include as few as n = 20 sites in each treatment arm (assumed to be equal here). Examining Figure <ref>, we can see that in studies with a small number of units, in order for moderator models to improve predictive accuracy, they need to explain significant portions of the variation in treatment effects, particularly when the overall degree of treatment effect variation is small or moderate. For example, with n = 20 units per arm, if the treatment effect variation is about 40 percent of the outcome variation (τ_*^2 = 0.40), a model with p = 1 moderator would need to explain at least 100R_τ^2 = 34 percent of the variation in order to outperform an ANCOVA model. If one variable alone could not do this, a model with three variables would need to explain 68 percent of the overall treatment effect variation. In comparison, in samples with n = 100 units per arm, these percents drop to 8 percent and 16 percent respectively.Overall, this suggests that for models of unit-specific predictions of treatment effects to perform well, either substantially larger sample sizes are required or single moderators with large explanatory value need to be included. If treatment effects are not functions of only one or two variables, but instead vary in relation to a large variety of moderators (each with small effects), this means that very large samples are necessary to outperform the ANCOVA estimator. Put another way, in the small samples typical in cluster-randomized trials, the covariate-adjusted average treatment effect may offer the most accurate prediction of any unit's treatment effect, even if the true effects vary. § PREDICTION AND GENERALIZABILITY Until now, we have assumed that the sample of N units in the RCT was a random sample from the population P_A for whom treatment effect predictions are required. These are the same assumptions that are typical in the development of estimators of SPE for unit-specific predictions and MSPE for model comparisons. 
However, as the literature on methods for generalizing and transporting causal effects indicates, this assumption is likely not tenable in practice. That is, very often design decisions — including the selection of the sample — are made based upon a desire to improve sensitivity (reduce error) without regard to how the average treatment effect estimated in the sample will relate to ATEs for population of interest for policy and practice. More directly, researchers are often searching for and selecting the population P_A that allows for a precisely estimated average treatment impact. In the case of prediction, this would be akin to minimizing the MSPE by purposely selecting a sample in which the treatment effect did not vary, thus allowing for the ANCOVA or unadjusted estimators to dominate.In this section, we take a different approach. In Section <ref>, we continue to assume that the sample is randomly drawn from a population P_A, but we now assume that unit-specific treatment effect predictions are desired for a different population P_B. We show that this shift in population results in less accurate predictions regarding unit specific treatment effects. In Section <ref>, building on recent developments for predictive models under covariate shift, we show that the use of weighting adjustments can reduce the MSPE, but that these weighting methods come with a variance inflation penalty (relative to predictions in P_A). In Section <ref>, we provide examples of different population shifts to illustrate how large a penalty one might expect in practice, as well as general implications of these for designing studies.§.§ Prediction and ErrorTo begin, we assume that in population P_B, there is a set of r covariates 𝔹 that moderate the treatment effect systematically. Similarly, we assume that in population P_A, there is a set of p covariates 𝔸 that moderate the treatment effect. Let ℚ = 𝔸∪𝔹 be the union of these sets, resulting in q covariates that moderate the treatment effect in one or both of these populations. Now, assume that we standardize each of these q covariates with respect to population P_A; we do so since we will be estimating our model in P_A. For a unit j in P_B then we can define their true treatment effect as,δ_j = Δ_A + δ_B' x_j|A + η_jNote that if there is a covariate here that moderates the effect in P_A but not in P_B, it takes the value of zero in this vector δ_B. Now, we have available to us a treatment effect prediction model estimated on P_A, δ̂_j = Δ̂_A + δ̂_A' x_j|ANotice here that δ̂_A signifies that this estimate of the moderator coefficients is based upon the sample provided from P_A. §.§.§ BiasAgain, we are interested in understanding how close δ̂_j is to δ_j. Here we might begin by examining the bias,bias(δ̂_j) = x_j|A'θ_B|A + η_j|Bwhere θ_B|A = (δ_A - δ_B) is the difference between the relationship between the standardized covariates x_jk and treatment effects δ_j in population P_A versus that in P_B.One way to think of this is as “extrapolation” bias, which results from extrapolating a relationship beyond the support of x_jk. In the ML literature, this is referred to as the problem of “distribution shift”, wherein the covariate distribution used in the training data (here P_A) differs from that in the test data (here P_B). Two examples help illustrate the source of this extrapolation bias. For the first, assume that the covariate results from a categorical variable that indicates if a unit j is located in an urban, suburban, town, or rural area. 
Perhaps this is encoded as three dummy variables, with urban as the reference. Suppose that the true treatment effects in P_B differ across these areas, and that all areas are represented in P_B. However, suppose that P_A by design only included urban areas — then this would mean that it was not possible to estimate differences between treatment impacts across these areas (in the generalization literature, this is referred to as “undercoverage”; <cit.>). In our framework, this would amount to setting the coefficient associated with the k^th covariate to be zero in one population (δ_kA = 0), while it is non-zero in the other (δ_kB≠ 0). For the second example, consider a continuous covariate x_k in which the support of the covariate differs in P_B relative to P_A. For example, the support of x_k in P_B might be larger than in P_A. Suppose that within P_A the relationship between x_jk and δ_j is linear. It is easy to see that such a model could be correct in P_A and yet incorrect in P_B – e.g., if in the larger range of x_k values, the relationship was non-linear.§.§.§ Unit specific errorAgain, we can quantify the error in our predictions using the squared prediction error (SPE), which is a function of both this bias and the sampling variance from the estimation of the model in P_A,E[ (δ̂_j - δ_j )^2| x_j|A]= V ( Δ̂_A + δ̂_A' x_j|A | x_j|A) + θ_B|A' x_j|A'x_j|Aθ_B|A+ τ^2_B|x= ( σ_0|x,A^2/n_0+ σ_1|x,A^2/n_1) ( 1 + x_j|A'Σ_A^-1x_j|A) + θ_B|A' x_j|A'x_j|Aθ_B|A+ τ^2_B|xNotice here that there are now three terms. As before, the first term has only to do with how well the coefficients are estimated in the sample of N units in P_A. The second term is now the squared bias. Finally, the third term refers to the unexplained, idiosyncratic treatment effect variation in P_B. We can further expand this as,τ^2_B|x = σ_0|x,B^2+σ_1|x,B^2 - 2ρ_01|x,Bσ_0|x,Bσ_1|x,BThis is a function of the residual variances σ_0|x,B^2 and σ_1|x,B^2 and the correlation ρ_01|x,B. All three of these parameters have to do with relationships found in P_B not in P_A. If P_B differs markedly from P_A – as occurs when P_A was selected to be homogeneous — it is not hard to imagine that residual variances from P_A might not apply to P_B. Any estimation of this variance now requires further assumptions or data than in the P_A case. In general, this indicates that a problem that results when moving to P_B is how such prediction error can be estimated for a unit j. Parameter values with subscripts A can be directly estimated from the sample in P_A. But parameter values with subscripts involving B are unknowable. For example, we cannot know the degree of bias in the treatment effect moderators θ_B|A, since such relationships can only be estimated in P_A. Similarly, we cannot know if the variation due to idiosyncratic variation in treatment effects τ_B|x^2 is the same as that in A; even decomposing this requires extant information on the degree of residual variation in the outcomes in B. Despite this, one may be tempted to estimate this error using the previously defined SPE formula,ŜP̂Ê(δ̂_j)= ( s_0|x,A^2/n_0+ s_1|x,A^2/n_1) ( 1 + x_j|A'S_A^-1x_j|A) + [ s_0|x,B^2+ s_1|x,B^2 - 2ρ_01|x,Bs_0|x,Bs_1|x,B]= SPE(δ̂_j) - [ x_j|A' Θ_B|Ax_j|A' + (τ_A^2 - τ_B^2) ]However, as the second line indicates, this estimate of the error can be biased when P_A ≠ P_B. In general, this bias could be positive or negative, leading to over or under estimates. We will return to this with special cases later. 
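For concreteness, the plug-in version of this error estimate can be written as a short function. The sketch below is illustrative only (the function and argument names are not from the paper); it uses quantities estimable in P_A, plus an analyst-supplied value for the unidentifiable residual correlation ρ_01, as stand-ins for the B-subscripted terms — which is exactly why it can be biased when P_A ≠ P_B, as noted above.

```python
import numpy as np

def spe_hat(x_j, s0_sq, s1_sq, n0, n1, S_A_inv, rho01_assumed):
    """Plug-in estimate of the squared prediction error for one unit,
    built only from statistics computable in the P_A sample.

    x_j            : (q,) covariate vector for unit j, standardized w.r.t. P_A
    s0_sq, s1_sq   : residual outcome variances (given x) in the two arms, from P_A
    n0, n1         : treatment-arm sample sizes in P_A
    S_A_inv        : inverse sample covariance matrix of the covariates in P_A
    rho01_assumed  : assumed correlation between potential-outcome residuals
                     (not identifiable from data; supplied by the analyst)
    """
    # sampling variance of the fitted prediction at x_j
    sampling = (s0_sq / n0 + s1_sq / n1) * (1.0 + x_j @ S_A_inv @ x_j)
    # idiosyncratic treatment-effect variance: s0^2 + s1^2 - 2*rho*s0*s1
    tau2_hat = s0_sq + s1_sq - 2.0 * rho01_assumed * np.sqrt(s0_sq * s1_sq)
    return sampling + tau2_hat
```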
§.§.§ Average error across units

It is helpful again to summarize this error across the whole of population P_B for whom we seek predictions. In order to develop the MSPE in this case, we need to define a few more parameters. Recall that we standardize our q covariates with respect to P_A, since this is the population in which our model was estimated. Here again keep in mind that we are not making any other assumptions regarding the distribution of covariates.

Now, in P_B it can be shown that the MSPE can be written,

MSPE(δ̂_j | B) = ( σ_0|x,A^2/n_0 + σ_1|x,A^2/n_1) (1 + D_B|A + M_B|A) + tr[Θ_B|A Σ_B|A] + τ_B|x^2

where M_B|A = (μ_B - μ_A)' Σ_A^-1 (μ_B - μ_A) is the squared Mahalanobis distance between populations P_B and P_A, and D_B|A = tr[ Σ_A^-1 Σ_B ] is proportional to the Burg divergence. Equation <ref> follows from properties of quadratic forms: for a general random vector z and matrix A, E(z'Az) = μ_z'Aμ_z + tr(AΣ_z). Readers might note that the quantity D + M ≡ D_B|A + M_B|A is proportional to the Kullback-Leibler divergence between two multivariate normal distributions (though we make no such normality assumptions here). In the special case when p = 1, the formula for D + M simplifies to,

D_B|A + M_B|A = σ_B^2/σ_A^2 + ( (μ_B - μ_A)/σ_A )^2

Notice that this is a function combining differences in the first and second moments of the distributions of the moderators in P_A and P_B. Examining this, we can see that when P_A ≡ P_B, this function equals 1. In general, when the two distributions differ, these values are likely greater than p. However, the value obtained when the two populations coincide is not the smallest possible value: D + M can be further reduced by selecting the estimation population P_A so that it is more heterogeneous than P_B, i.e., so that σ_A^2 >> σ_B^2.

Finally, note here that the first two terms (multiplied) in the MSPE can be estimated directly from data available in P_A and P_B. However, like the SPE, the latter two terms are more complex, as they require information that is directly unknowable — the degree of bias in the moderator coefficient estimates δ̂ and the degree to which the residual variation in P_B is more or less the same as in P_A.

§.§.§ Error when using sample ATE

In the previous section, we showed that when P_A ≡ P_B, in small samples, the ATE has smaller MSPE than unit-specific treatment effect predictions. For completeness, we investigate the MSPE here when using the ATE estimate from P_A as the predicted unit-specific treatment effect for all units in P_B. More formally, let δ̂_j = Δ̂_A for all units j = 1,...,N in population P_B. It is straightforward to show that the MSPE across all units in P_B is,

MSPE(δ̂_j | B) = ( σ_0|x,A^2/n_0 + σ_1|x,A^2/n_1) + (μ_B - μ_A)' Θ_B (μ_B - μ_A) + τ_B^2 ≤ ( σ_0|x,A^2/n_0 + σ_1|x,A^2/n_1) + (δ_B' Σ_A δ_B) M_B|A + τ_B^2

where Θ_B = δ_B δ_B'. Notice that when M_B|A = 0, Equations <ref> and <ref> differ only in the idiosyncratic error term τ_B^2. This residual error is specific to the prediction population. This result aligns with previous work on the generalization of average treatment effects, which has focused on the reduction of bias via efforts to weight sample data (from P_A) to have the same moderator means as those in the target (P_B) population stuart_use_2011, tipton_improving_2013.

§.§ Possible adjustments

As a result of the shifting covariate distributions between P_A and P_B, the accuracy of the model decreases.
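The size of this shift can be quantified directly from covariate data. The sketch below is illustrative only (the function and variable names are not from the paper); it computes the two shift terms that enter the MSPE above — the squared Mahalanobis distance M_B|A and the Burg term D_B|A — from covariate samples of the two populations.

```python
import numpy as np

def covariate_shift_terms(X_A, X_B):
    """Estimate M_{B|A} and D_{B|A} = tr(Sigma_A^{-1} Sigma_B) from covariate
    samples, using the P_A sample moments as the reference (matching the
    standardization convention used for the MSPE above).

    X_A : (N_A, q) covariate matrix for the estimation population P_A
    X_B : (N_B, q) covariate matrix for the target population P_B
    """
    mu_A, mu_B = X_A.mean(axis=0), X_B.mean(axis=0)
    Sigma_A = np.cov(X_A, rowvar=False)
    Sigma_B = np.cov(X_B, rowvar=False)
    Sigma_A_inv = np.linalg.inv(Sigma_A)

    diff = mu_B - mu_A
    M = diff @ Sigma_A_inv @ diff          # squared Mahalanobis distance
    D = np.trace(Sigma_A_inv @ Sigma_B)    # Burg (trace) term
    return M, D

# When P_A and P_B coincide, M + D is approximately p (the number of
# moderators); values much larger than p inflate the MSPE accordingly.
```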
That is, the MSPE is now a function of the degree of shift, which has to do with both M_B|A and D_B|A. This suggests that a better approach would be to take this covariate shift into account in the estimation process. To do so, first stack the data so that P = P_A ∪ P_B and let Z_i indicate whether a unit i is in P_A.

For general prediction problems, shimodaira_improving_2000 proposes the use of weighted regression, with weights

w_i = Pr(x_i | Z_i = 0) / Pr(x_i | Z_i = 1).

Notice that these weights require knowledge of the joint distribution of the covariates; e.g., Shimodaira assumes that the covariates are normally distributed. steingrimsson_transporting_2023 provide an alternative specification of the weights that is simpler to estimate. They show that, by applying Bayes' rule, the shimodaira_improving_2000 weights are proportional to the inverse odds that a unit is in P_A (relative to P_B),

w_i ∝ Pr(Z_i = 0 | x_i) / Pr(Z_i = 1 | x_i) = (1 - Pr(Z_i = 1 | x_i)) / Pr(Z_i = 1 | x_i).

This formulation suggests that the weights can be estimated using logistic regression or a variety of other methods found in the propensity score literature (see <cit.> for an overview). When predicting treatment effects, this means incorporating these weights into the estimation of both β_1 and β_0. To do so, for each i = 1,...,N units in P_A, define the weight w_i as above. For each of k = 0, 1, define a weight matrix W_k as an N_k × N_k diagonal weight matrix. Using these weights, for k = 0, 1 calculate

β̂_k^w = (X_k'W_kX_k)^-1 X_k'W_kY_k.

For a unit j in P_B, a treatment effect can thus be predicted using δ̂_j^w = x_j'(β̂_1^w - β̂_0^w). Notice that this has the same form as before, but weighted regression is now used instead.

steingrimsson_transporting_2023 show that two assumptions are required for this weighting estimator to yield unbiased estimators of both the unit-specific treatment effects and the degree of prediction error (MSPE):

* A1: Conditional independence of the outcome Y and the population. For every x with positive density in P_B (i.e., f(X = x, Z = 0) > 0), f(Y | X = x, Z = 1) = f(Y | X = x, Z = 0).
* A2: Positivity. For every x such that f(X = x, Z = 0) ≠ 0, we have Pr(Z = 1 | X = x) > 0.

Assumption A2 means that every covariate pattern found in the target population P_B also exists in population P_A. This ensures that the predictions of δ_j for units in P_B do not require extrapolation beyond the estimation data. Assumption A1 implies that, for k = 0, 1, the β_k estimated in P_A can be transported to P_B, and thus that the estimate δ̂_j^w of the treatment effect for unit j is unbiased. Furthermore, Assumption A1 also implies that the estimator of the MSPE based upon the sample data from population P_A is unbiased for the MSPE in population P_B. In an appendix, steingrimsson_transporting_2023 provide an example illustrating a violation of this assumption.

The use of weights comes at a cost in terms of sensitivity.
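Before quantifying that cost, the construction just described can be made concrete with a short sketch. This is illustrative only: the function names are not from the paper, scikit-learn is just one possible way to fit the membership model mentioned above, and in practice the common support of the weights should be checked (Assumption A2) before trusting the resulting predictions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def weighted_effect_model(X_A, T_A, Y_A, X_B):
    """Inverse-odds-weighted prediction model.

    X_A, T_A, Y_A : covariates, treatment indicators, and outcomes for the
                    RCT sample from P_A
    X_B           : covariates for the target population P_B
    Returns predicted unit-specific treatment effects for the rows of X_B.
    """
    # Membership model: Z = 1 for P_A rows, Z = 0 for P_B rows.
    X_all = np.vstack([X_A, X_B])
    Z = np.concatenate([np.ones(len(X_A)), np.zeros(len(X_B))])
    member = LogisticRegression(max_iter=1000).fit(X_all, Z)

    # Inverse-odds weights for P_A units: (1 - Pr(Z=1|x)) / Pr(Z=1|x).
    p_A = member.predict_proba(X_A)[:, 1]
    w = (1.0 - p_A) / p_A

    def wls(X, y, weights):
        """Weighted least squares, beta = (X'WX)^{-1} X'Wy, with intercept."""
        Xd = np.column_stack([np.ones(len(X)), X])
        XtW = Xd.T * weights
        return np.linalg.solve(XtW @ Xd, XtW @ y)

    beta1 = wls(X_A[T_A == 1], Y_A[T_A == 1], w[T_A == 1])
    beta0 = wls(X_A[T_A == 0], Y_A[T_A == 0], w[T_A == 0])

    # Predicted effects for P_B: delta_j^w = x_j'(beta_1^w - beta_0^w).
    Xb = np.column_stack([np.ones(len(X_B)), X_B])
    return Xb @ (beta1 - beta0)
```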
Let M_VIF be a multiplier that indicates the degree of variance inflation due to use of weighting, whereM_VIF = Var(weighted)/Var(unweighted).Applying the weights and assumptions of steingrimsson_transporting_2023 to each of the regressions separately and combining them provides, MSPE(δ̂_j^w) = M_VIF( σ_0|x^2/n_0+ σ_1|x^2/n_1) (1 + p )+ τ_A|x^2Notice here that as a result of Assumptions A1 and A2, the term Θ'Σ = 0 is not included; similarly, A1 implies that τ_B^2 = τ_A^2, which enables estimation of the idiosyncratic treatment effect variation in P_B from data in P_A.The resulting multiplier M_VIF can be approximated using Kish's design effect kish_weighting_1992, which is a function of the coefficient of variation of the weights,M_VIF = 1 + V_A(w_i)/E_A(w_i)^2.Alternatively, this variance inflation can be written in terms of an effective sample size N_e. To do so,N_e = N / M_VIF.This effective sample size indicates how much smaller a sample N_e from P_B could be to have the same precision as the sample of size N from P_A. Importantly, since this M_VIF multiplier does not require outcome data, it can also be used for planning purposes. To do so, researchers would begin by specifying a target population P_B and several possible populations for estimation (samples) P_A. For each, they could then calculate the distance M + B, if the positivity assumption is met (by exploring the common support), and the expected variance inflation penalty. By conducting this analysis, a researcher may realize that a sample from some population P_A is not adequate for making predictions for units in P_B.§.§ Case studyIn this section, we explore the size of this distribution shift penalty using a case study in education. The Common Core of Data provides a census of public schools in the United States. States are required to submit demographic data on schools in a common form; however, this data is not always complete (e.g., some states may not report certain variables).For this study, we narrow this population to focus on elementary schools and select five potential moderators of a treatment effect: the number of schools in a school district (District Size); the number of students in a school (School Size); the Student to Teacher ratio (Stu/Tch); the proportion of students receiving free- or reduced-priced lunch (an indicator of poverty; PropFRL); and an indicator of if the school is located in an urban locale (Urban). We restrict ourselves to the population of schools that have no missing data on these five variables; this results in a population of 9,175 schools in P_B.§.§.§ Covariate shift across populationsWe then envision a situation in which a study could take place in a single state (P_A) while the ultimate goal would be to predict treatment effects for all public elementary schools in the U.S. (P_B). Here we only include states with at least 40 schools with complete data; the resulting 35 states include populations of between 46 - 2,403 schools, with a median of 141 schools. In Panels A-E of Figure <ref> we provide 5-number summaries via boxplots for each of these states across the five covariates. Each panel is ordered from smallest to largest with respect to the median values. The solid horizontal line indicates the average value in population P_B. Importantly, notice that the minimum and maximum values vary considerably across these state populations. For example, there are some states with only small school districts, while others include a range of school district sizes. 
These range differences will ultimately affect the positivity assumption (A2) required for making predictions in P_B.We next calculated the Mahalanobis distance M and the Burg distance D, as defined in Equation <ref>. Recall that when the population P_A ≡ P_B we would expect M + D = p = 5, and that values above p = 5 greatly increase the average prediction error (MSPE). Panel F, which is on a log-scale, indicates that these values are very high for most of these state populations, with combined values often above 100 or even 1000. Panel F also illustrates that the differences between P_A and P_B are not limited to mean differences (Mahalanobis distance), which has been the focus of much of the causal generalization literature. The Burg distance indicates that additionally, the variances and covariances in P_A tend to be smaller than in P_B. In general, the Burg distance for these state populations tend to be an order of magnitude larger than the mean differences.§.§.§ Weighting adjustments Given the large covariate distribution shifts, in order to predict treatment effects in P_B, weighting adjustments would be required. For each state population, we calculated the inverse odds weights, as defined in Equation <ref>. In order to meet the positivity assumption, we examined the common support of the distribution of these weights and excluded U.S. schools (P_B) outside the support of P_A. This meant that treatment effect estimates would not be possible for some proportion of the target population, i.e.,'undercoverage'. The x-axis of Panel B of Figure <ref> shows the range of coverage of P_B across these state populations. Notice, for example, that some states are so different from P_B that it would be possible to predict treatment effects for less than 40-percent of the U.S. population of schools.For each state population P_A, we then normed the weights so that they summed to one. The distribution of these weights is then provided in Figure <ref> Panel A. Notice that in fives states, one school carries 25-percent or more of the weight. Large weights of this sort are often trimmed in practice lee_weight_2011, but this trimming introduces bias; for this reason, we do not trim the weights here. The VIF was then calculated based on these weights using Equation <ref>. Panel B of Figure <ref> shows the relationship between the degree of coverage of the target population P_B and the variance inflation that results from estimation in P_A. Here the shade of the data points indicates the relative size of the covariate distance M + D. Examining the y-axis, we see that for roughly half of the states, the VIFs are between 1 and 2. In these cases, the variance inflation is small, though not insubstantial. For example, a VIF of 1.5 indicates an approximate 50-percent increase in the prediction error, relative to estimating treatment effects in P_B directly. In the other half of states, however, these VIFs are considerably larger - with 5 involving VIFs greater than 10. Further examination of the data indicates that these cases correspond to those with large weights (found in Panel A). Overall, this analysis indicates a few important trends. First, similar to when generalizing an ATE estimate from a sample to population, as mean covariate differences increase, so too do the variance penalties due to adjustment tipton_implications_2017. Second, when predicting unit-specific effects, these differences and penalties are also affected by differences in variances and covariances. 
When mean differences are large, these variance and covariance differences also tend to be large. Just like the generalization situation, the result is that there can be a large degree of undercoverage — parts of the population for whom no treatment effect prediction is possible — and/or increased variance. § EXAMPLE: PLANNING A STUDYThe ASSISTments platform is an educational technology used for teaching math in schools. The online platform provides teachers with tools to deliver formative assessments, allowing students to receive immediate feedback as they work through problems. Teachers use the platform to monitor student performance and growth, allowing them to adjust their classroom instruction to match the knowledge base of the class. The platform has been evaluated in two large efficacy studies in schools, one in Maine rochelle_how_2017 and the other in North Carolina feng_implementing_2023. Each of these studies was designed to estimate and test hypotheses regarding the average treatment effect. For this example, we focus on the North Carolina study, using it to illustrate how results from this paper could be used when designing a study.The evaluation took place in North Carolina because it was identified as a state that is "more geographically representative of the U.S." than the previous study in Maine. The paper provides a comparison of the population of schools in North Carolina to those in the U.S. on five covariates: Percent of White students, percent of Hispanic students, percent of Black students, Percent of students receiving free or reduced priced lunch (a proxy for socioeconomic status), and the Percent of schools that were rural. In the pre-registration plan (https://sreereg.icpsr.umich.edu/framework/pdf/index.php?id=2064), the study design focuses on N = 80 schools, with equal proportions randomized to the treatment or business as usual. The pre-registration also assumes a pre-test measure would be used and would explain 80 percent of the variation in outcomes. In addition to identifying a target population (P_B) — schools serving 7th grade students in the U.S. — the evaluation also provides us with some of the design parameters needed to explore predictive models: the total sample size planned (N = 80); thenumber of covariates potentially of interest (p = 5); a school level pretest covariate highly related to the outcome (R_0^2 = .8). This leaves three parameters without values: the standardized treatment effect variation (τ_*^2), the correlation between Y(0) and treatment effects (τ_0η), and the proportion of variation in treatment effects explainable by p covariates (R_τ p^2). The latter two of these parameters are very rarely reported in studies, making it difficult to anticipate their values in practice. For these we will use a sensitivity approach.Fortunately, there is some information regarding prior degrees of treatment effect variation. weiss_how_2017 examine data from 16 multisite randomized trials. This includes five trials in early childhood and elementary schools, seven in middle and high schools, two in post-secondary education, and two in labor or workforce development; each study provided multiple outcomes. Their analysis includes exploring both average treatment effects found in these studies, as well as the degree of variation in site-average treatment effects. These parameter estimates are provided in their Table 4. Overall, the estimates of τ_* range from 0 to 0.35, with 90-percent upper confidence values of between 0 to 0.49. 
This indicates values of the standardized variance between 0 and .25 SDs. For this analysis, we will consider three values of τ_*^2: low (.10^2), medium (.25^2), and large (.5^2).Unfortunately, this range of values is quite large. Further analyses by Weiss et al indicated that treatment effect variation was larger when the intervention had low specificity, low intensity, and when the comparison group was served in a different building or site. Since the ASSISTments program is a supplemental program and since schools are randomized, this suggests there may be a high degree of variation. Nonetheless, we will explore several values. For these explorations, we focus on the randomization of schools, assuming large enough samples of students within each school that we can ignore this nesting. Throughout we also assume equal treatment and comparison group sizes. §.§.§ ANOVA From Equation <ref>, we can see that the MDES with N = 80 schools and R_0^2 = 0.80 is 0.22 SDs. In order to investigate prediction, we begin by assuming a model in which the treatment effect for every school in the target population (P_B) is predicted to be the same (δ̂_̂î = Δ̂_̂Â), yet the true school treatment effects vary. Table <ref> explores the MSPE and 90-percent prediction interval width for three treatment group sizes (n = 40, 100, 500) and the three standardized treatment effect variation values defined above. This table shows that even with a small degree of variation, the width of a 90-percent prediction interval is quite large. For example, with an average effect size of .22, the 90-percent prediction intervals include both negative and positive values for nearly all cases except when the variation in treatment effects is small (τ_*^2 = .1^2). Thus, even though the sample size is adequate for testing if on average the effect of ASSISTments is zero, if treatment effect variation is moderate or large, the estimate of this effect will not be adequately sensitive for predicting if ASSISTments would work in any particular school.§.§.§ Moderator Model Given the width of these prediction intervals, a question a researcher might have would be if such intervals could be made smaller by using a moderator model. Applying Equation <ref>, in Table <ref> we provide the minimum R_τ p^2 value required in order for predictive model using p = 1 or 3 moderators to outperform an ANCOVA model (i.e., lower MSPE). The table indicates that if the degree of variation in the effect of ASSISTments across schools is small, in a sample with 2n = 80 schools, a school-specific treatment effect prediction only outperforms the average treatment effect if the moderator explains all of the variation in treatment effects (R_τ^2 = 1). In comparison, with a sample of size 2n = 200, the moderator model is preferable if the moderator explains only 50 percent of the variation in treatment effects. If the variation in the effects of ASSISTments was expected to be larger — as we have assumed here — then the ANCOVA model is not always best. For example, with a sample of 2n = 80 and moderate treatment effect variation, a single moderator would need to be able to explain at least 22 percent of the variation in treatment effects; if large, then only 8 percent of the variation. Importantly, note that if a very large study were possible (as in trials with individual random assignment), these requirements drop substantially. 
For example, with 2n = 1,000, a moderator would only need to be able to explain <2-percent of the treatment effect variation to improve the MSPE.If such a moderator (or three) were available, how much might their inclusion reduce the predictive error? In Table <ref> we investigate this further. For this table, we include values of R_τ p^2 in increments of 0.20, and include only rows in which the predictive model with a single moderator outperforms the average treatment effect (constant) model (see Appendix B for values with p=3 moderators). For each row, we include the width of a 90-percent prediction interval for each model; the final column indicates how much smaller this interval is for different values of R_τ^2.Reading from this table, for example, we see that if the variation in the effect of ASSISTments is moderate, a covariate explaining 40 percent of the treatment effects would reduce the prediction interval by about 9 percent, while one explaining 80 percent could reduce the interval by about 32 percent. In absolute terms, however, with a sample size of N=80 schools, even with a strong predictor, these prediction intervals are still very wide. Overall, this means that while moderators can improve the accuracy of predictions, to do so they need to be either very strongly predictive of treatment effects or sample sizes need to be substantially larger than typical in cluster randomized studies.§.§.§ Population choicesIn the ASSISTments study, the goal was to predict treatment effects for all public schools serving 7th graders in the U.S. However, the study itself only recruited schools in North Carolina. A question then is what penalty could be exerted by conducting the study in one population (P_A) but predicting treatment effects in another (P_B).To investigate this, we returned to the Common Core of Data and defined the target population as non-charter, non-virtual, public U.S. schools serving at least 30 7th graders. This resulted in a population of 16,775 schools. We then limited the population to the subset of these schools that had covariate data on all five identified variables — percent White; percent Hispanic; percent Black; percent free-or-reduced price lunch (an indicator of low socioeconomic status); and Rural. The percent of students receiving FRL was not reported for 484 schools; this included all 400 schools in Massachusetts. The final target population P_B thus included 16,290 schools in the United States. Of these, 536 schools are in North Carolina, the population where the study took place (P_A).We began by comparing the two populations using metrics common in the generalization literature (where the focus is on estimation of the average treatment effect). One metric is the absolute SMD, while the other is the variance ratio; in both cases, it is typical to standardize with respect to the target population (P_B). Here the absolute SMDs between the two groups range from 0.14SD to 0.42SD, with three of these larger than the commonly used 0.25SD threshold. The variance ratios (P_A to in P_B) range between 0.24 and 1.19, with only one outside the threshold of 0.5 to 2. The generalizability index for these two groups is 0.95, indicating that the population of schools in NC (P_A) is nearly as similar to those in the US (P_B) as a random sample. 
Altogether this suggests that the estimate of the ATE could be generalized to the target population ATE easily.In this paper, we have shown that when considering prediction, the standardization that matters for MSPE is with respect to the sampled population P_A not the target population P_B. On these five covariates, the Mahalanbois distance between the two populations is M = 0.55, while the Burg distance is 9.71. When P_A ≡ P_B, we would expect M+B = p = 5; here instead M+B = 9.71, about twice as large. This suggests that while the two populations are similar on average, there is generally less variation across these covariates in P_A than in P_B. If left unadjusted, this would result in a MSPE that is about twice as large as if P_A ≡ P_B.To adjust for these differences, inverse odds weights could be used. Here we predicted the outcome Z, where Z = 1 if a school was in North Carolina and Z = 0 if it was in the target population. We used a logistic regression model and included the five covariates identified before. Based on this, we calculated inverse odds weights using Equation <ref>. We then compared the distribution of these weights in North Carolina versus in the US, with a focus on identifying schools in the common support of these two distributions. We found that 98.3 percent of schools in the US were within the range of weights identified in North Carolina. Thus, in order to meet the positivity assumption (A2), we restrict the target population to this slightly smaller subset. In practice, this would mean that the resulting model could be used to predict treatment effects for all but 1.7 percent of US public schools serving 7th graders.Finally, we calculated the variance inflation penalty that would be incurred as a result of these weights (Equation <ref>). This penalty was found to be VIF = 1.42, indicating that the actual MSPE would be about 42 percent larger as a result of this reweighting. Importantly, while this is smaller than the doubling expected without adjustment (from the M+B versus p in the analysis above), it still exerts a large penalty. This would mean that by limiting the sample of schools to those in North Carolina (P_A), a sample of N*M_VIF = 113 schools would be required to have the same accuracy as a sample of N = 80 schools in the US (P_B).§ CONCLUSIONIn this paper, we have examined the conditions under which prediction of unit-specific treatment effects is possible on the basis of results from a randomized trial. We focus in particular on the development of intuition and functions that can be useful for planning studies. To develop these intuitions, we have focused on linear parametric models, as they provide closed form results. For those designing and conducting experiments, it is easy to focus on the development and the use of a predictive model without thinking carefully about its performance. Those working with RCTs are often aware of a variety of rules of thumb related to statistical power and sensitivity, all of which have to do with the ATE. We have shown, however, that theserules of thumb do not directly transfer to the predictive case. For example, predictive error involves new parameters, the trickiest of which is the degree of idiosyncratic variation. As we have shown, this is a function of a completely unknowable parameter – the correlation between potential outcomes. The only information truly available in data here is the degree to which the residual variances in the two groups (T = 0, 1) is the same. 
If they differ, then this idiosyncratic variation is clearly non-zero. But if they are the same, this does not prove that there is no idiosyncratic variation. To some extent, our choice for the consideration of the value of this correlation ρ_01|x must depend upon assumptions regarding the very nature of treatment effects: do we think that even under ideal circumstances they would follow a pattern that could be predicted? Or is there some part of them that is truly idiosyncratic — times when treatments happen to work for some for reasons that are truly random?Regardless, we have provided formulas that can be used to determine the types of moderators and sample sizes that are needed to provide accurate predictions of unit-specific treatment effects. We have shown that in the small sample sizes found in cluster-randomized trials — where predictions of site specific treatment effects or other aggregates are desired — the ATE is often the most accurate prediction of unit specific treatment effects. However, when the ATE is small, unless the treatment effect variation is also small, the resulting prediction may not be adequate for distinguishing between units with positive or negative treatment effects (i.e., prediction intervals include zero).We have also shown that in order to outperform the ATE – thus providing different predicted treatment effects for different units – the moderators included need to be highly predictive of treatment effects. If this is not the case, then larger sample sizes are required. Methods for quantifying predictive error have long existed in OLS regression; for example, we know that these predictive errors are larger, since they involve the residual from a new observation. As we have shown here, however, these formulas are too simple once we move out of the P_A case. That is, the formulas and estimators are only valid if our sample is a random sample from the population. When the sample might be highly selected (e.g., a combination of convenience and eligibility criteria) – as is typically the case in RCTs – the errors involve additional components. One of these components involves the introduction of bias that may arise from differences in the support for the covariates in different populations, called “distribution shift” in the language of ML. Other differences have to do with the degree of similarity between the means, variances, and covariances of these covariates in the two populations. Perhaps what is hardest here is that for a given sample, again it may be difficult to accurately quantify these terms. Thus, by all metrics available and calculable with the data, the model may appear to be performing well — and yet not perform well at all in the target population. The findings here are very much related to those in the generalizability literature, though they differ in important ways as well. The literature on generalizability has focused strongly on estimation of the average treatment effect in one or more target populations. These findings suggest, for example, that if one wants to design a study that minimizes this bias, they should match the first moments of the sample to the target population. Here we find that if the goal is prediction, this extends further — matching the means alone is simply not enough. Instead, we need to match the variances of the moderators as well. 
Importantly, however, we also show that an approach that further reduces the error (MSPE) is to purposely select the sample so that it maximizes heterogeneity in the covariates — that doing so can reduce the MSPE even beyond that of a random sample. Importantly, there is another difference. In the literature on generalizability, the focus has been on bias in the estimate of the ATE. While it can be counterintuitive, even when the ATE estimate is biased, there is no bias in its associated standard error. This is because the standard error has to do with the sampling variation in the data generating process, which focuses on the past. But in prediction, the focus is on the future. This means that the standard estimators of the predictive error — e.g., based upon the observed variation in residuals in the sample — can also be biased. Again, this requires adjusting not only the predictions themselves, but also their measures of error.Finally, our analysis suggests that if one is planning a study with prediction in mind, the broadest possible target population should be anticipated. As we have shown, it is simply not possible to build a strong predictive model of treatment effect heterogeneity without heterogeneity in the covariates and outcome. Put another way, we need heterogeneity “in” in order to get heterogeneity “out”. Ultimately, this means that while heterogeneity is often seen as our enemy when estimating the ATE, in prediction heterogeneity is our friend.§ APPENDIX: PROOFS §.§ Proof of Equation <ref>First, writeY_i(0)= μ_0 + x_i'β_0 + ϵ_i0Y_i(1)= μ_1 + x_i'β_1 + ϵ_i1= (μ_0 + Δ_A) + x_i'δ + η_i + ϵ_i0 Then V[Y_i(0)] = σ_0^2 and V[Y_i(1)] = σ_0^2 + τ^2 + 2ρ_0ησ_0τ. Let R_-0^2 = 1 - R_0^2 = σ_0|x^2/σ_0^2 and R_-τ^2 = 1 - R_τ^2 = τ_A|x^2 / τ_A^2. Let τ_*^2 = τ_A^2/ σ_0^2. Now, via substitution we have: MSPE(δ̂_̂î) = (σ_0|x^2/n + σ_1|x^2/n)(1+p) + τ_A|x^2= (1+p/n) ( σ_0|x^2 + (σ_0|x^2 + τ_A|x^2 + 2ρ_0η|xσ_0|xτ_A|x)) + τ_A|x^2= σ_0^2 (1+p/n) ( 2 R_-0^2 + R_-τ^2τ_*^2 + 2ρ_0η|xτ_*|xR_-0R_-τ) + R_-τ^2τ_A^2= 2σ_0^2 (1+p/n) [ R_-0^2 + ρ_0η|xτ_*|xR_-0R_-τ + τ_*^2R_-τ^2 (1/2 + n/2(1+p)) ]§.§ Proof of Equation <ref> First, recall that we will estimate the model,Y_i = μ_0 + Δ T_i + x_i'β + ϵ_iIf we now split the data into the two groups, we have:Y_i(0) = μ_0 + x_i'β + ϵ_i0Y_i(1) = (μ_0 + Δ) + x_i'β + ϵ_i1Now, assume we have two groups, each with sample size n. We estimate V(ϵ_i) = σ^2 using a pooled estimator, with s^2 = (s_0^2 + s_1^2)/2. Thus, we have σ^2 = E(s^2) = (σ_0^2 + σ_1^2)/2.Applying results from the proof of Equation <ref>, we haveσ_|x^2 = (σ_0|x^2 + σ_1|x^2)/2 = ( 2σ_0|x^2 + τ_A|x^2 + 2ρ_0η|xσ_0|xτ_A|x)/2 Now, by substitution and rearrangement we have:MSPE(δ̂_̂î|ANCOVA) = σ^2(2+p)R_-^2/2n + τ_A^2= (2+p/4n) (2σ_0|x^2 + τ_A^2 + 2ρ_0η|xτ_Aσ_0|x ) + τ^2= σ_0^2 [ ( 2R_-0^2 + 2ρ_0η|xτ_*R_-0) (2+p/2n) + τ_*^2 ( 2+p/4n + 1 ) ]= σ_0^2 (2+p/2n) [ R_-0^2 + ρ_0ητ_*R_-0 + τ_*^2 ( 1/2 + 2n/2+p)] Finally, note that the raw means is a special case of this ANCOVA result, substituting R_-0^2 = 1 and p = 0.§.§ Proof of Equation <ref> We wish to find the value of R_τ^2 = r_τ^2 such that when R_τ^2 < r_τ^2 we have MSPE(ANCOVA) < MSPE(Mod) where the ANCOVA estimator includes 2+p parameters and the Moderator estimator includes 2(1+p) parameters. 
To do so, first we write and rearrange:

MSPE[2+p] < MSPE[2(1+p)]
n MSPE[2+p] < 2(1+p)[ R_-0^2 + ρ_0η|x τ_* R_-0 R_-τ + τ_*^2 R_-τ^2 (1/2 + n/2(1+p)) ]
0 < R_-τ^2 A + R_-τ B + C

where now

A = τ_*^2 (1/2 + n/2(1+p))
B = 2(1+p) ρ_0η|x τ_* R_-0
C = 2(1+p) R_-0^2 - n MSPE[2+p]

Thus we need to solve the quadratic equation, with

R_-τ = ( -B ± √(B^2 - 4AC) ) / 2A

The final result follows from substitution and rearrangement, solving for R_τ^2 = 1 - R_-τ^2. Note that when there is no real root (because B^2 - 4AC < 0), the ANCOVA model is always preferred.

§ APPENDIX: P = 3 TABLE

This table provides a supplement to Table <ref>, now focused on p = 3 covariates.
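As a numerical companion to the threshold derivation above, the following sketch evaluates the two MSPE expressions from the appendix and finds the smallest R_τ^2 at which the moderator model beats ANCOVA by grid search rather than via the closed-form root. It is illustrative only; the function names and default settings (e.g., σ_0^2 = 1) are not from the paper. With R_0^2 = 0 and ρ_0η = 0 it approximately reproduces the thresholds quoted in the simulation study (e.g., about 34 percent for n = 20 per arm, p = 1, τ_*^2 = 0.40).

```python
import numpy as np

def mspe_moderator(n, p, R0_sq, Rtau_sq, tau_star_sq, rho, sigma0_sq=1.0):
    """MSPE of the p-moderator prediction model (appendix formula), equal arms of size n."""
    Rm0 = np.sqrt(1.0 - R0_sq)      # R_{-0}
    Rmt = np.sqrt(1.0 - Rtau_sq)    # R_{-tau}
    tau_star = np.sqrt(tau_star_sq)
    bracket = (Rm0**2 + rho * tau_star * Rm0 * Rmt
               + tau_star_sq * Rmt**2 * (0.5 + n / (2.0 * (1 + p))))
    return 2.0 * sigma0_sq * ((1 + p) / n) * bracket

def mspe_ancova(n, p, R0_sq, tau_star_sq, rho, sigma0_sq=1.0):
    """MSPE when every unit is assigned the covariate-adjusted ATE (2+p parameters)."""
    Rm0 = np.sqrt(1.0 - R0_sq)
    tau_star = np.sqrt(tau_star_sq)
    bracket = (Rm0**2 + rho * tau_star * Rm0
               + tau_star_sq * (0.5 + 2.0 * n / (2 + p)))
    return sigma0_sq * ((2 + p) / (2.0 * n)) * bracket

def min_Rtau_sq(n, p, R0_sq, tau_star_sq, rho, grid=2001):
    """Smallest R_tau^2 at which the moderator model has lower MSPE than ANCOVA;
    returns None if ANCOVA wins everywhere on [0, 1]."""
    ancova = mspe_ancova(n, p, R0_sq, tau_star_sq, rho)
    for r in np.linspace(0.0, 1.0, grid):
        if mspe_moderator(n, p, R0_sq, r, tau_star_sq, rho) < ancova:
            return r
    return None

# Example (assumed inputs): n = 20 per arm, p = 1, tau_*^2 = 0.40, R_0^2 = 0, rho = 0
# print(min_Rtau_sq(n=20, p=1, R0_sq=0.0, tau_star_sq=0.40, rho=0.0))  # ~0.34
```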
"authors": [
"Elizabeth Tipton",
"Michalis Mamakos"
],
"categories": [
"stat.ME"
],
"primary_category": "stat.ME",
"published": "20231027212948",
"title": "Designing Randomized Experiments to Predict Unit-Specific Treatment Effects"
} |
The complex dielectric functions of LiNbO2 were determined using optical transmittance and reflectance spectroscopies at room temperature. The measured dielectric function spectra reveal distinct structures at several bandgap energies. The bandgaps (exciton resonances) were observed at ca. 2.3, 3.2, 3.9, and 5.1 eV. These experimental data have been fit using a model dielectric function based on the electronic energy-band structure near critical points plus excitonic effects. The features of the measured dielectric functions are, to some extent, reproduced quantitatively by an ab-initio calculation including the interaction effects between electrons and holes.

§ 1. INTRODUCTION

Lithium niobate is a member of the transition-metal oxide compounds. At normal temperature and pressure, it crystallizes in the form of three-dimensional LiNbO3. Lithium niobate can also crystallize in the layered LiNbO2 (LNO) modification, although it is metastable under normal conditions [1-2]. By using epitaxial growth techniques such as pulsed laser deposition, it is now possible to grow LiNbO2 thin films without the need for rather restricted conditions such as a reducing atmosphere [3-6]. This compound is known to have several intriguing properties, such as superconductivity [7-10] in the delithiated phase at approximately 5 K and a high ionic conductivity suitable for potential applications as a battery material [11-13].

Although stoichiometric LNO is the “mother material” of the delithiated counterpart, which combines transparency and superconductivity, little is known about the optical response of this compound. LNO is a direct-gap semiconductor [14-17]. The bandgap energy was evaluated to be ca. 2.0 eV both experimentally [18] and theoretically [14]. On the other hand, the nature of these optical transitions has not been clarified yet for this material. This is partly because the experimental information is based on a diffuse reflectance spectrum taken on a specimen in powdered form [18]. In this case, the conversion to the dielectric functions is very difficult. Generally, to draw electronic-structure-related knowledge from measured data, analysis based on standard critical point (SCP) modeling [19] has been adopted. According to the SCP theory, the lineshape is considered to reflect that of the spectral distribution in the joint density of states of the valence and conduction bands. The lineshape of the joint density of states strongly depends on the dimensionality and the Lynch index, which corresponds to the number of negative components in the effective-mass vectors. Gaining insight into the nature of the optical transitions thus amounts to determining parameters such as the dimensionality and the Lynch index. Such a parameterization has not, however, been done so far for this material. This knowledge is very important for, e.g., device applications.

In this work, we study the optical properties of LNO thin films. We present dielectric function spectra deduced from transmission and reflectance spectra at room temperature (RT) between 1.5 and 6.5 eV. A method is also described for calculating the spectroscopic distribution of the dielectric function of LNO, where the relevant models are connected with the electronic energy-band structure of the compound.
For comparison, we also show ab-initio calculation results with the GW level under linear-response approximations.§ 2. THEORETICAL MODEL 2.1 Electronic energy-band structure of LiNbO2Several groups have reported the calculated results on the electronic-energy band structures in LNO [14-17]. We also executed local-density approximation (LDA) calculations by ourselves. The reproducible results are obtained as shown in Fig. 1. To ensure computing cost-effectiveness for regression analysis, we attempted to reproduce the measured dielectric functions with critical points (CP's), the number of which is as small as possible. From the model-dielectric-function (MDF)-based regression analysis for the measured data, the energies of the CP's were evaluated to be ca. 2.3, 3.2, 3.9, and 5.1 eV. These transition energies are shown in Fig. 1 with vertical arrows and are labeled as E0, E1, E2, and E3. It should be noted that, for the assignments of the vertical arrows in Fig. 1, we looked for the corresponding direct transitions lower in energy if we could not find a transition having the same energy with the experiment. In the analysis, we neglect the following aspects for simplicity: (1) transitions above the E3 gap, (2) the contributions from indirect transitions due to their extremely weaker nature in their intensities, and (3) the spin-orbit effects and the related splitting of the bands (conventionally denoted as Δ in the literature).2.2 Model dielectric functionsWe performed the line-shape analysis using the MDF approach [20]. Here, we summarize dielectric functions as a form of a complex function. To obtain ϵ1(ω), one can take its real parts, while for ϵ2(ω), imaginary parts should be taken.The E0, E1, E2 CP's may be of the three-dimensional (3D) M1 type. The index i in the Mi notation corresponds to earlier-mentioned Lynch index. As shown in Fig. 2, the overall feature of the ϵ1(ω) spectrum is characterized with monotonically decreasing behavior concerning the photon energy, accompanied with optical anomalies at the CP's. To reproduce this behavior, the assumption of M1 type is more appropriate. Because the M1 CP's longitudinal effective mass is much larger than its transverse counterpart, one can treat these 3D M1 CP's as a two-dimensional (2D) minimum M0. It is known that the equation in the approximation of the 2D M0 CP gives a series of Wannier-type excitons if taking the excitonic effects into account [21]. In other words, one can neglect the one-electron contribution to the dielectric function in the case of the 3D M1 CP. Due to its layered structure, LNO is expected to be a material where the electron-hole correlations are rather strong. Rough calculation on the binding energy of the Wannier-type exciton yielded in ca. 70 meV for LNO by using the reported reduced effective mass (0.57 m0) and background dielectric constant of ca. 10.3 [14]. The notation m0 means electron's mass in rest. Besides, the excitonic effect could not be neglected in the case of 3D M1 (2D M0) CP for many elemental and compound semiconductors [20,22-25]. Thus, we can justifiably take this effect into account for the analysis. For e.g. the E0 feature, the contribution of these excitons to ϵ(ω) can be now written with Lorentzian lineshape as: ϵ(ħω) = ∑_n = 1^∞1/(2n - 1)^3( A_0/- iΓ_1 + E_0 - G_0/(2n - 1)^2 - ħω)where A0 is a constant corresponding to the strength parameter, E0 is the energy of the CP, G0 is the 2D binding energy of exciton. The value of Γ0 is the broadening parameter. 
Because the ground-state exciton term occupies almost 95% of the total oscillator strength, we neglected the excited-state terms (n ≥ 2) in the current analysis.The E3 peak is difficult to analyze as it does not correspond to a single, well defined CP. Thus, the E3 structure has been characterized as a damped harmonic oscillator: ϵ(ħω) = A_3/1 - χ_3^2 - iΓ_3χ_3with χ3 = E/E3, where A3 is the strength parameter and Γ3 is a dimensionless broadening parameter.§ 3. EXPERIMENTAL AND CALCULATION PROCEDURES LNO thin films were grown on MgAl2O4 (111) substrates using pulsed laser deposition (PLD) method with KrF excimer laser. The growth was conducted in vacuum at RT. After deposition, the films were annealed in situ at substrate temperature of 800 oC under a chamber pressure of 0.1 mtorr set by continuous flow of Ar/H2 gas. Films were capped by several-nanometer-thick alumina films using PLD at RT in vacuum for avoiding reactions with air. The description of detailed growth process can be found elsewhere [6]. The film thickness is approximately 150 nm.Optical transmittance and reflectance were measured with an ultraviolet-visible spectrometer at RT. Then, we converted these spectral intensities to those of the complex dielectric functions [26]. Here, the refractive index of the substrate is assumed to be independent of the photon energy because the bandgap of MgAl2O4 is significantly wider than the maximum energy of the measurement range (i.e., 6.5 eV).Structural information of LNO has been reported by several experimental groups, leading to the assignment of space group P63/mmc (No. 194). We performed ab-initio calculation using the plane-wave basis set PWscf package of Quantum ESPRESSO [27-28] to evaluate the electronic-energy-band structures. For the lattice constant of LiNbO2, a=2.938 Åand c=10.596 Åwere used. The 3D Brillouin zone was integrated using a 7 × 7 × 2 k-point grid. We have set the energy cutoff for the plane-wave basis to 30 Ry, and have used the Perdew–Burke–Ernzerhof (PBE) functional, which belongs to the class of generalized gradient approximation (GGA) functionals. In previous works, similar calculations have been made [14-15], and qualitatively coincided results could be obtained by ourselves. For the optical spectra calculations such as dielectric functions, we used Respack ab-initio package [29-30]. This package solves Bethe-Salpeter-like equations numerically under single-excitation configuration-interaction (SECI) treatment. By doing that, the interactions between electrons and holes are taken into account.§ 4. RESULTS AND DISCUSSION In Fig. 2, we plot the real (ϵ1) and imaginary (ϵ2) parts of dielectric function spectra ϵ(ω) of LNO determined at RT. As seen in the figure, the experimental data reveal clear structures at the 2.2 to 2.4-eV region. This structure originates from transition at the E0 edge. The structures appearing at the 3.0 to 3.4, 3.6 to 4.2, and 4.7 to 5.5-eV regions are due to the E1, E2, and E3 transitions, respectively.The model dielectric function (MDF) approach given in Sec. IIB was used to fit the experimental dispersion of ϵ(ω) over the entire range of the measurements (0 to 6.5 eV). The parameters such as A0 and A1 are used as adjustable constants for the calculations of ϵ1(ω) and ϵ2(ω). As has been already mentioned in Sec. IIB, the ϵ1(ω) for LNO is, in overall, a monotonically decreasing function with upward convex in 1.5 to 5-eV range except for several optical anomalies such as E0. 
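For reference, the line shapes entering the fits described next can be evaluated numerically as in the following sketch (illustrative only; the function and parameter names are not part of the analysis). It implements the ground-state 2D-exciton term of Eq. (1) for the E0, E1, and E2 structures and the damped harmonic oscillator of Eq. (2) for E3, with an optional nondispersive constant added to the real part.

```python
import numpy as np

def mdf_2d_exciton(E, A, E0, G, Gamma, n_max=1):
    """2D-exciton (Lorentzian) contribution of Eq. (1); only the ground-state
    term (n = 1) is kept by default, as in the analysis above."""
    eps = np.zeros_like(E, dtype=complex)
    for n in range(1, n_max + 1):
        eps += (1.0 / (2 * n - 1) ** 3) * A / (E0 - G / (2 * n - 1) ** 2 - E - 1j * Gamma)
    return eps

def mdf_dho(E, A3, E3, Gamma3):
    """Damped-harmonic-oscillator contribution of Eq. (2) for the E3 structure."""
    chi = E / E3
    return A3 / (1.0 - chi ** 2 - 1j * Gamma3 * chi)

def eps_model(E, exciton_params, dho_params, eps1_inf=0.0):
    """Total model dielectric function: sum of the excitonic terms for E0, E1, E2,
    the E3 oscillator, and an optional nondispersive constant (real part only).

    E              : photon energies in eV (numpy array)
    exciton_params : list of tuples (A, E_cp, G, Gamma) for the E0, E1, E2 gaps
    dho_params     : tuple (A3, E3, Gamma3)
    """
    eps = np.full_like(E, eps1_inf, dtype=complex)
    for (A, Ecp, G, Gam) in exciton_params:
        eps += mdf_2d_exciton(E, A, Ecp, G, Gam)
    A3, E3, Gamma3 = dho_params
    eps += mdf_dho(E, A3, E3, Gamma3)
    return eps
```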
This is very peculiar compared to those for many other elemental and compound semiconductors [20,22-25]. This is somehow reminiscent of that in α-Sn [24]. In this case, very strong absorption band is present in the ϵ2 spectrum, giving rise to consistent explanation in terms of the 3D M0 CP with very large transition strength. This is not the case for LNO. To reproduce the ϵ1 behavior, there seems no other solution except for positive adoption of the M1-type CP even for the lowest energy gap because the difference between the ϵ1 values at both ends is larger in the case of 3D M1 (2D M0). Comparison of the behavior at both ends revealed us that the lineshape of the 2D M0 MDF is known to drop significantly, while that of the 3D M0 MDF remains relatively flat [20]. Therefore, the 2D M0 MDF is more suitable to reproduce the overall ϵ1 tendency. The experimental data on ϵ1(ω) turned out to be still somehow larger than the model fit. To improve this fit, we then introduced a phenomenological term ϵ_1∞ in addition to ϵ1(ω) to account for the contributions from higher-lying energy gaps. This term ϵ_1∞ is assumed to be nondispersive.The solid and dashed lines in Fig. 2 are obtained from the sum of Eqs. (1) and (2), and ϵ_1∞=2.0. The vertical arrows in the figure indicate the excitonic peak energies (E0-G0, E1-G1, E2-G2) and the positions of the CP (E3). The best-fit parameters are listed in Table I. Due to the earlier-mentioned assumptions, we could not determine the excitonic binding energies such as G0 experimentally. Individual contributions to the dielectric functions of the various energy gaps for LNO are shown in Figs. 3 and 4, respectively. They are obtained from Eq. (1) for the 2D-exciton contribution in the E0, E1, and E2 regions, and from Eq. (2) for the E3 gap contribution.The sums of Eqs. (1) and (2) fit the main features of the ϵ1 and ϵ2 data, but the agreement with the experiment is rather qualitative. The difference probably could come from several sources, including (1) the Lorentz approximation for broadening (which is known to give too much absorption below direct band edges [31]), (2) the neglection for the excited-states excitons [20], or (3) the parabolic assumption of the CP's [19].As understood from the table, some of the best-fit parameters from the ϵ1 analysis are different from the ϵ2 counterparts. Being compliant with SCP theory's policy, we took the average from these values, as shown in this figure. We believe that these averaged values should be material parameters of LNO. An increase in the number of MDF's (CP's) is expected to lead to an improved agreement with the experiment, even with the same (common) values on the parameters. However, it might be an excessive assumption to include an additional transition to the position where no peak or anomaly is observed. Indeed, this might be justified by the fact that calculated SECI spectra include at least five CP's in the energy range (1.5 to 6.5 eV), as can be viewed from Fig. 5.Neglecting contributions from the optical anomalies, overall ϵ1(ω) in the SECI spectrum is a monotonically decreasing function with upward convex in the 1.5 to 4-eV range. This feature is in good agreement with the experiment. Besides, the agreement on the E0 energy is also good. According to the ab-initio calculation under the LDA level, the bandgap energy is evaluated to be approximately 1.5 eV. The calculated energy is significantly lower than that of experimental value (ca. 2.3 eV). 
As is often the case, it is well-known that the LDA level calculation tends to underestimate the bandgap energy. On the other hand, the ab-initio SECI calculation considers the electron-hole interaction (excitonic) effects in this material, leading to the improved agreement with the experiment. The spectral blue-shift from the LDA-level result is probably due to the so-called self-energy effect. Furthermore, the energetic positions of E1 and E3 CP's are in reasonable agreement with the experiment.We notice that the calculated spectra structures are sharper and more distinct than those in the measured one. The calculated ϵ1(ω) amplitude as defined by the difference of the ϵ1 value at 1.5 and 6.5 eV is also larger. These looser agreements are probably related to the same origin. The broadening parameter corresponding to the width of the Green functions used in the calculation is significantly smaller than that obtained in the measurement. The doublet splitting of the E2 is observed in the SECI spectra. Experimentally, the double seems not resolved due to the broadening effect. Thus, our results should stimulate further work for this material both experimentally and theoretically.Finally, we mention refractive-index dispersion in the transparency region. As seen in Fig. 2, the analysis results are in agreement with the experiment (ϵ1) over the entire range of photon energies. On the other hand, the agreement is looser. An increase in the parameter ϵ_1∞ from 2 to 3.5 improves the agreement in the transparency region. However, this does not give a satisfactory fit to the experimental data at higher photon energies ( 3.0 eV). Several research groups have reached a similar conclusion on GaP, AlSb, and ZnSe. The looser agreement is probably related to the earlier-mentioned Lorentz approximation for broadening [20,32-33]. Indeed, Pollak's group has introduced a phenomenological linear cutoff to eliminate this problem giving too much absorption below direct band edges [31]. We did not attempt this here because the cutoff's adoption forced us a numerical Kramers-Kronig conversion to the ϵ1 counterpart, thus making it impossible to analyze the experimental data within the regression analysis framework based on a conventional Levenberg–Marquardt algorithm.§ 5. CONCLUSION In summary, we have reported the dielectric functions of LNO around the fundamental band gap E0 (ca. 2.3 eV) and up to 6.5 eV for RT. The observed spectra reveal distinct structures at energies the E0, E1, E2 and E3 CP's. These data are analyzed based on a simplified model of the interband transitions, including these transitions as the main dispersion mechanisms. The analyzed results are in qualitative agreement with the experiment. Many important material parameters of these transitions were determined. The comparison was made with the results of ab-initio calculations, taking the electron-hole correlation effects into account.§ ACKNOWLEDGMENTS T.M. wishes to thank the financial support of the Ministry of Education, Culture, Sports, Science and Technology Grant No. KAKENHI-19K05303. This work was partially supported from No. KAKENHI-20K15169. All of the ab-initio calculations were performed in the Supercomputer Center at the Institute of Solid-State Physics, the University of Tokyo.§ REFERENCES [1] G. Meyer, R. Hoppe, The first oxoniobate LiNbO2, Angew. Chem. Int. Ed. 13 (1974) 744–745. <https://doi.org/10.1002/anie.197407441>.[2] N. Kumada, S. Muramatu, F. Muto, N. Kinomura, S. Kikkawa, M. 
Koizumi, Topochemical reactions of LiNbO2, J. Solid State Chem. 73 (1988) 33–39. <https://doi.org/10.1016/0022-4596(88)90050-3>.[3] N. Sarmadian, R. Saniz, B. Partoens, D. Lamoen, Easily doped p-type, low hole effective mass, transparent oxides, Sci. Reports. 6 (2016) 20446. <https://doi.org/10.1038/srep20446>.[4] W.E. Henderson, W.L. Calley, A.G. Carver, H. Chen, W.A. Doolittle, A versatile metal-halide vapor chemistry for the epitaxial growth of metallic, insulating and semiconducting films, J. Cryst. Growth. 324 (2011) 134–141. <https://doi.org/10.1016/j.jcrysgro.2011.03.049>.[5] M.B. Tellekamp, J.C. Shank, W.A. Doolittle, Molecular beam epitaxy of lithium niobium oxide multifunctional materials, J. Cryst. Growth. 463 (2017) 156. <https://doi.org/10.1016/j.jcrysgro.2017.02.020>.[6] T. Soma, K. Yoshimatsu, A. Ohtomo, P-type transparent superconductivity in a layered oxide, Science Adv. 6 (2020) eabb8570. <https://doi.org/10.1126/sciadv.abb8570>.[7] M.J. Geselbracht, T.J. Richardson, A.M. Stacy, Superconductivity in the layered compound LiNbO2, Nature (London). 345 (1990) 324. <https://doi.org/10.1038/345324a0>.[8] M.A. Rzeznik, M.J. Geselbracht, M.S. Thompson, A.M. Stacy, Superconductivity and phase separation in NaNbO2, Angew. Chem. Int. Ed. 32 (1993) 254–255. <https://doi.org/10.1002/anie.199302541>.[9] E.G. Moshopoulou, P. Bordet, J.J. Capponi, Superstructure and superconductivity in LiNbO2 single crystals, Phys. Rev. B. 59 (1999) 9590–9599. <https://doi.org/10.1103/physrevb.59.9590>.[10] G.T. Liu, J.L. Luo, Z. Li, Y.Q. Guo, N.L. Wang, D. Jin, T. Xiang, Evidence of s-wave pairing symmetry in the layered superconductor LiNbO2 from specific heat measurements, Phys. Rev. B. 74 (2006). <https://doi.org/10.1103/physrevb.74.012504>.[11] J.D. Greenlee, C.F. Petersburg, W.L. Calley, C. Jaye, D.A. Fischer, F.M. Alamgir, W.A. Doolittle, In-situ oxygen x-ray absorption spectroscopy investigation of the resistance modulation mechanism in LiNbO2 memristors, Appl. Phys. Lett. 100 (2012) 182106. <https://doi.org/10.1063/1.4709422>.[12] S.A. Howard, C.N. Singh, G.J. Paez, M.J. Wahila, L.W. Wangoh, S. Sallis, K. Tirpak, Y. Liang, D. Prendergast, M. Zuba, J. Rana, A. Weidenbach, T.M. McCrone, W. Yang, T.-L. Lee, F. Rodolakis, W. Doolittle, W.-C. Lee, L.F.J. Piper, Direct observation of delithiation as the origin of analog memristance in LiNbO2, APL Materials. 7 (2019) 071103. <https://doi.org/10.1063/1.5108525>.[13] X. Xu, G. Liu, S. Ni, J.T.S. Irvine, Layered lithium niobium oxide LiNbO2 as a visible-light-driven photocatalyst for H2 evolution, J. Phys.: Energy. 1 (2018) 015001. <https://doi.org/10.1088/2515-7655/aad4be>.[14] E.R. Ylvisaker, W.E. Pickett, First-principles study of the electronic and vibrational properties of LiNbO2, Phys. Rev. B. 74 (2006) 075104. <https://doi.org/10.1103/physrevb.74.075104>.[15] D.L. Novikov, V.A. Gubanov, V.G. Zubkov, A.J. Freeman, Electronic structure and electron-phonon interactions in layered LixNbO2 and NaxNbO2, Phys. Rev. B. 49 (1994) 15830. <https://doi.org/10.1103/physrevb.49.15830>.[16] K.-W. Lee, J. Kunes, R.T. Scalettar, W.E. Pickett, Correlation effects in the triangular lattice single-band system LixNbO2, Phys. Rev. B. 76 (2007) 144513. <https://doi.org/10.1103/physrevb.76.144513>.[17] J.U. Rahman, N.V. Du, G. Rahman, V.M. Garcia-Suarez, W.-S. Seo, M.H. Kim, S. Lee, Localized double phonon scattering and DOS induced thermoelectric enhancement of degenerate nonstoichiometric LiNbO2 compounds, RSC Advances. 7 (2017) 53255. 
<https://doi.org/10.1039/c7ra10557f>.[18] M.J. Geselbracht, A.M. Stacy, A.R. Garcia, B.G. Silbernagel, G.H. Kwei, Local environment and lithium ion mobility in lithium niobate LiNbO2: Inferences from structure, physical properties, and NMR, J. Phys. Chem. 97 (1993) 7102. <https://doi.org/10.1021/j100129a030>.[19] S. Loughin, R. French, L. DeNoyer, W.-Y. Ching, Y.-N. Xu, Critical point analysis of the interband transition strength of electrons, J. Phys. D. 29 (1996) 1740. <https://doi.org/10.1088/0022-3727/29/7/009>.[20] S. Adachi, T. Taguchi, Optical properties of ZnSe, Phys. Rev. B. 43 (1991) 9569. <https://doi.org/10.1103/PhysRevB.43.9569>.[21] Y. Petroff, M. Balkanski, Coulomb effects at saddle-type critical points in CdTe, ZnTe, ZnSe, and HgTe, Phys. Rev. B. 3 (1971) 3299. <https://doi.org/10.1103/physrevb.3.3299>.[22] S. Adachi, Model dielectric constants of GaP, GaAs, GaSb, InP, InAs, and InSb, Phys. Rev. B. 35 (1987) 7454. <https://doi.org/10.1103/PhysRevB.35.7454>.[23] S. Adachi, Model dielectric constants of si and ge, Phys. Rev. B. 38 (1988) 12966. <https://doi.org/10.1103/PhysRevB.38.12966>.[24] S. Adachi, Optical properties of alpha-sn, J. Appl. Phys. 66 (1989) 813. <https://doi.org/10.1063/1.343502>.[25] S. Ninomiya, S. Adachi, Optical properties of cubic and hexagonal CdSe, J. Appl. Phys. 78 (1995) 4681. <https://doi.org/10.1063/1.359815>.[26] R.E. Denton, R.D. Campbell, S.G. Tomlin, The determination of the optical constants of thin films from measurements of reflectance and transmittance at normal incidence, J. Phys. D. 5 (1972) 852. <https://doi.org/10.1088/0022-3727/5/4/329>.[27] P. Giannozzi, S. Baroni, N. Bonini, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, G.L. Chiarotti, M. Cococcioni, I. Dabo, A.D. Corso, S. de Gironcoli, S. Fabris, G. Fratesi, R. Gebauer, U. Gerstmann, C. Gougoussis, A. Kokalj, M. Lazzeri, L. Martin-Samos, N. Marzari, F. Mauri, R. Mazzarello, S. Paolini, A. Pasquarello, L. Paulatto, C. Sbraccia, S. Scandolo, G. Sclauzero, A.P. Seitsonen, A. Smogunov, P. Umari, R.M. Wentzcovitch, QUANTUM ESPRESSO: A modular and open-source software project for quantum simulations of materials, J. Phys.: Cond. Mat. 21 (2009) 395502. <https://doi.org/10.1088/0953-8984/21/39/395502>.[28] P. Giannozzi, O. Andreussi, T. Brumme, O. Bunau, M.B. Nardelli, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, M. Cococcioni, N. Colonna, I. Carnimeo, A.D. Corso, S. de Gironcoli, P. Delugas, R.A.D. Jr, A. Ferretti, A. Floris, G. Fratesi, G. Fugallo, R. Gebauer, U. Gerstmann, F. Giustino, T. Gorni, J. Jia, M. Kawamura, H.-Y. Ko, A. Kokalj, E. Kucukbenli, M. Lazzeri, M. Marsili, N. Marzari, F. Mauri, N.L. Nguyen, H.-V. Nguyen, A. Otero-de-la-Roza, L. Paulatto, S. Ponce, D. Rocca, R. Sabatini, B. Santra, M. Schlipf, A.P. Seitsonen, A. Smogunov, I. Timrov, T. Thonhauser, P. Umari, N. Vast, X. Wu, S. Baroni, Advanced capabilities for materials modelling with QUANTUM ESPRESSO, J. Phys.: Cond. Mat. 29 (2017) 465901. <https://doi.org/10.1088/1361-648X/aa8f79>.[29] K. Nakamura, Y. Yoshimoto, R. Arita, S. Tsuneyuki, M. Imada, Optical absorption study by ab initio downfolding approach: Application to GaAs, Phys. Rev. B. 77 (2008) 195126. <https://doi.org/10.1103/physrevb.77.195126>.[30] K. Nakamura, Y. Yoshimoto, Y. Nomura, T. Tadano, M. Kawamura, T. Kosugi, K. Yoshimi, T. Misawa, Y. Motoyama, RESPACK: An ab initio tool for derivation of effective low-energy model of material, arXiv:2001.02351. (2020). <https://arxiv.org/abs/2001.02351>.[31] T. Holden, P. Ram, F.H. Pollak, J.L. Freeouf, B.X. 
Yang, M.C. Tamargo, Spectral ellipsometry investigation of ZnCdSe lattice matched to InP, Phys. Rev. B. 56 (1997) 4037–4046. <https://doi.org/10.1103/PhysRevB.56.4037>.[32] K. Strössner, S. Ves, M. Cardona, Refractive index of GaP and its pressure dependence, Phys. Rev. B. 32 (1985) 6614. <https://doi.org/10.1103/physrevb.32.6614>.[33] S. Zollner, C. Lin, E. Schönherr, A. Böhringer, M. Cardona, The dielectric function of AlSb from 1.4 to 5.8 eV determined by spectroscopic ellipsometry, J. Appl. Phys. 66 (1989) 383. <https://doi.org/10.1063/1.343888>. | http://arxiv.org/abs/2310.17825v1 | {
"authors": [
"T. Kurachi",
"T. Yamaguchi",
"E. Kobayashi",
"T. Soma",
"A. Ohtomo",
"T. Makino"
],
"categories": [
"cond-mat.mtrl-sci",
"physics.app-ph"
],
"primary_category": "cond-mat.mtrl-sci",
"published": "20231027001047",
"title": "Optical Properties of LiNbO$_2$ thin films"
} |
§ INTRODUCTION Timepix detectors are used in a broad range of applications <cit.>, ranging from medical imaging and materials research to space radiation monitoring <cit.>. These environments are characterised by high-intensity radiation fields that degrade and damage detectors exposed for extended periods of time. Currently, commercially available sensors for Timepix radiation detectors are made of Si and CdTe. However, in response to the challenge of radiation-induced damage, ongoing investigations are exploring alternative, radiation-hard materials, including GaAs <cit.>, diamond <cit.>, or SiC <cit.>. The SiC sensor, with a 3.23 eV band gap, also offers a high breakdown field (4×10^6 V m^-1), which allows the application of a large bias voltage that leads to fast charge collection. It has been demonstrated that the energy resolution performance of SiC is not degraded even at elevated temperatures up to hundreds of degrees Celsius <cit.>. Furthermore, the material itself has high electron mobility (around 900 cm^2 V^-1 s^-1) and a high electron saturation drift velocity (2×10^7 cm s^-1). The primary significance of SiC lies in its application within radiation-challenging environments, requiring detectors that can endure prolonged radiation exposure. Potential uses include high-energy physics, outer space and particle radiotherapy. For this purpose, a fully operational miniaturized MiniPIX Timepix3 radiation camera, featuring an 80 μm thick SiC epitaxial layer, has been developed and built, including energy calibration and SW/FW configuration. In this work we characterise this novel SiC Timepix3 detector in terms of its homogeneity, charge collection response, and spectral tracking response with proton beams in the energy range 13 to 226 MeV. This comprehensive analysis aims to facilitate the use of a radiation-hard sensor in various applications such as space radiation monitoring, neutron detection, or conventional and FLASH radiotherapy environments <cit.>. The Timepix detectors provide valuable position-, time- and direction-sensitive capabilities, which make them suitable for selective detection and high-resolution, wide-range spectral tracking of single particles <cit.>. The detector's high granularity and per-pixel spectral response can be used to examine and evaluate the uniformity and homogeneity of the radiation response and charge collection of the semiconductor sensor <cit.>. § INSTRUMENTATION AND EXPERIMENTS As part of our investigation into the behaviour of the SiC material, we performed a comprehensive characterisation of a Schottky 4H-SiC single-pad detector. This step served as a foundation for the development of the pixel structure for the hybrid semiconductor pixel detector Timepix3 with a SiC sensor. The alpha particles generated by an Am-241 radioisotope and detected by the SiC Schottky detector were resolved with a full width at half maximum (FWHM) of 13.8 keV at the 5480 keV peak at room temperature. 
The single-pad Schottky barrier SiC detectors were developed to investigate and optimise their electrical and detection properties. The evaluation of 4H-SiC Schottky barrier detector performance at elevated temperatures is presented in <cit.>, where we demonstrated the detector's operation up to 500^∘C. At this high temperature, the FWHM degraded by ∼20% compared to its performance at room temperature. The 4H-SiC detectors have proven suitable for high-resolution spectrometry of alpha particles over a wide range of elevated temperatures. §.§ Radiation camera MiniPIX Timepix3 with SiC sensor Based on the 4H-SiC epitaxial layer characterisation demonstrated with the single-pad detectors, the pixel structure was prepared for the hybrid pixel detector Timepix3. For the investigation of SiC as a sensor material for the Timepix3 chip, two companies prepared four 4H-SiC sensors. Epitaxial growth of SiC was used to achieve 80 μm and 100 μm thick layers, respectively. After bump-bonding of the sensors to the Timepix3 chip, the detectors were connected to the MiniPIX readout; see Fig. <ref> (right). In this study, the sensors used were limited to a bias of 200 V to prevent electrical shorts. However, due to the limited bias voltage, only 65 μm of the 80 μm sensor thickness could be depleted. On top of the 80 μm active layer (nitrogen-doped, doping concentration 1×10^14 cm^-3) there is a buffer layer (concentration 1×10^18 cm^-3) and a bulk layer (2×10^18 cm^-3), which together with the non-depleted epitaxial region form a 365.5 μm thick dead layer that absorbs radiation; see Fig. <ref> (left). As a result, low-energy X-ray imaging requires extended exposure times in order to acquire a significant number of counts. During standard detector energy calibration using X-ray fluorescence, the exposure time was adjusted to address these limitations. The calibration process for MiniPIX Timepix detectors is described in <cit.>. Alpha particle measurements using laboratory isotopes such as Am-241 (5.5 MeV) were not feasible, given that the particle range is on the order of microns, so the particles stop within the detector material. The sensor is suitable for the detection of accelerated high-energy light and heavy ions that can traverse the active volume. An alternative approach for particle tracking and detection of accelerated particles is to tilt the sensor so that the particle beam impinges on the sensor from the side. §.§ Measurements with 13, 22, 31, 100 and 226 MeV protons In this work, we evaluate the response of the SiC Timepix3 detector to monoenergetic protons of various energies incident from a wide range of directions, from perpendicular (0^∘) to parallel (90^∘) incidence. Low-energy protons (13, 22 and 31 MeV) were measured at the light ion cyclotron U-120M <cit.> of the NPI CAS Rez near Prague. High-energy protons (100 and 226 MeV) were measured at the IBA Proteus-235 cyclotron at the Proton Therapy Center Prague. Measurements were performed in air, at 2 m and 80 cm distance from the accelerator beam nozzle. The pixel detector was rotated with respect to the proton beam direction over the full angular range. This approach tunes the particle track length within the active region, resulting in a variable and well-defined energy deposition by charged particles at each incident angle. 
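To make the geometry of this rotation scan concrete, the short Python sketch below (a simple illustration, not analysis code from the experiment) estimates the straight-line path length through the 65 μm depleted layer as a function of the tilt angle between the beam and the sensor normal; scattering and the finite sensor area are ignored.

```python
import math

ACTIVE_THICKNESS_UM = 65.0  # depleted thickness of the SiC epitaxial layer at 200 V bias

def track_length_um(tilt_deg: float) -> float:
    """Chord length of a straight track crossing the active layer when the
    beam is tilted by tilt_deg away from the sensor normal."""
    return ACTIVE_THICKNESS_UM / math.cos(math.radians(tilt_deg))

for angle in (0, 30, 60, 75, 85):
    print(f"{angle:2d} deg -> {track_length_um(angle):7.1f} um")
```

At 85^∘ the chord is roughly 750 μm, an order of magnitude longer than at normal incidence, which is why tilting the sensor raises the energy deposited per track.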
§ SPECTRAL-SENSITIVE TRACKING OF PROTONS §.§ Data processing The active region of the detector consists of a matrix of 256×256 pixels with a pixel pitch of 55 μm, resulting in a total of over 65 thousand pixels. Each pixel functions as an individual semiconductor detector. When high-energy particles traverse the sensor, they ionise its volume, leading to charge collection and the generation of multi-pixel events. These individual pixels are grouped into clusters based on their spatial and temporal characteristics. High-resolution pattern recognition algorithms <cit.> are then applied to analyse the single-particle cluster tracks and resolve particle-type events <cit.>. The resulting track parameters make it possible to select and examine specific types of particles in different radiation environments. Once we determined parameters that effectively described these distinct groups, we were able to filter and analyse them separately. These and further steps of data processing are performed using the integrated data processing engine (DPE) <cit.>. Developed by ADVACAM, the DPE tool performs data clustering, applies per-pixel energy calibration, and calculates for each particle track morphological and spectral parameters, including cluster size, length, roundness, deposited energy and LET <cit.>. §.§ Particle-type discrimination Based on the parameters assigned to each detected cluster/particle, it is possible to process and filter each particle group in a mixed-field measurement separately <cit.>. This approach effectively separates high-energy particles or other specific events of interest from the unwanted background, yielding clean datasets for specific particle types. The SiC sensor provides a limited active volume, buried beneath the non-sensitive bulk material above it, which acts as a shielding filter for the incident radiation. Therefore, only a small part of the energy deposited by passing high-energy particles is collected in the 65 μm thick active volume of the SiC sensor. This limitation extends to other parameters, such as morphological characteristics, where the boundaries between different particle groups are less distinct than in thick (i.e., ≥ 300 μm thick) fully depleted sensors such as Si, CdTe, or GaAs <cit.>. Only a few parameters are suitable for high-energy particle filtration; for this task, the deposited energy and cluster size have proven to be the most effective whilst maintaining the cleanliness of the filtered dataset. §.§ Quantum-imaging detection The spectral-sensitive detection and track visualisation of 31 MeV and 226 MeV protons by the SiC Timepix3 are shown in Fig. <ref>. In the figures, all registered particles are shown; no particle filter was applied. With increasing incident angle, the path of the proton track in the active sensor volume increases, leading to a higher deposited energy. For low-energy heavy charged particles incident at large angles, e.g., 31 MeV protons at ≥ 75^∘, the particle tracks approach the Bragg peak inside the sensor. For protons at 85^∘ the Bragg peak is within the sensor active volume. For such low-energy protons, the effect of shielding by the casing edge (2 mm Al) of the MiniPIX camera can be observed. In the initial stage of our analysis, we examined the broad spectrum of particle tracks that were detected. This examination aimed to provide information regarding their spectral and morphological characteristics while assessing the overall homogeneity of the detection process. 
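As a simplified illustration of the cluster-level selection and LET evaluation described above (a sketch with placeholder threshold values, not the cuts actually applied in the DPE), proton-like tracks can be selected by cutting on cluster size and deposited energy, and an LET estimate can be formed from the deposited energy and the 3D track length:

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    size_px: int        # number of pixels in the cluster
    energy_kev: float   # calibrated deposited energy (keV)
    length_px: float    # projected track length in the pixel plane (pixels)

def is_proton_like(c: Cluster, min_size_px: int = 3,
                   energy_window_kev: tuple = (100.0, 10_000.0)) -> bool:
    """Toy two-parameter cut on cluster size and deposited energy;
    the thresholds are illustrative placeholders, not the published cuts."""
    lo, hi = energy_window_kev
    return c.size_px >= min_size_px and lo <= c.energy_kev <= hi

def let_kev_per_um(c: Cluster, active_um: float = 65.0, pitch_um: float = 55.0) -> float:
    """LET estimate for a through-going track: deposited energy divided by the
    3D path length (projected length combined with the active thickness)."""
    path_um = ((c.length_px * pitch_um) ** 2 + active_um ** 2) ** 0.5
    return c.energy_kev / path_um

clusters = [Cluster(1, 12.0, 1.0), Cluster(6, 830.0, 4.2), Cluster(4, 510.0, 3.1)]
print([round(let_kev_per_um(c), 2) for c in clusters if is_proton_like(c)])
```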
Each of the SiC detectors used in this investigation generated datasets that contained primary proton data and various secondary events that occurred within the sensor. Prior to data processing, a pre-processing step involved masking a limited number of noisy pixels to ensure data quality. To select only proton tracks for analysis, we conducted a multiparameter assessment for each detector position/angle. This comprehensive analysis provided an understanding of the types of radiation groups detected and their respective parameters. These parameters, especially those associated with protons, can vary with the tilt of the sensor. Recognising this variability, we devised a strategy to isolate each specific group within the dataset. This approach allowed us to effectively separate specific radiation groups from the dataset and ensure primary proton detection across the various detector rotations and proton beam energies. §.§ Deposited energy distributions The spectral response of the Timepix3 detector can be used to measure in detail the deposited energy of single particles. This work also serves to examine and evaluate the spectral sensitivity, range, and homogeneity of the SiC sensor. The results for protons of selected energies (13 and 226 MeV) and perpendicular (0^∘) incidence are given in Fig. <ref>. Background and unwanted events were filtered out (a cut requiring a track cluster area of ≥ 3 pixels was applied). For the data evaluated, the mean values of the deposited energy are around 830 keV and 500 keV for 13 MeV and 226 MeV protons, respectively. The data exhibit the characteristic broad and right-asymmetric distributions of the energy loss of charged particles in semiconductors <cit.>. In addition, an anomalous large-amplitude component above a few MeV is observed. This is produced by events with deposited energies of up to nearly 10 MeV. These high-energy values are partly amplified by distortion and saturation of the per-pixel spectral response <cit.>. Correction for this effect and examination of these large-amplitude events are the subject of future work. Fig. <ref> allows the spatial homogeneity over the sensor pixel matrix to be examined and evaluated. The response map of the regular spectral component is evenly distributed across the matrix. The anomalous large-amplitude component exhibits a partly localised pattern, which will be subject to further examination. §.§ LET spectra The high-resolution spectral-tracking response of Timepix3 enables measurement of both the deposited energy and the track length of single particles across the sensor active volume. This makes it possible to derive the LET of single particles over a wide range of energies and directions. The results for the selected data are given in Fig. <ref> for protons of different energies incident perpendicularly to the sensor plane. For the SiC sensors used (see Fig. <ref>), the incident particles must first cross the non-sensitive volume (the 365.5 μm thick inactive layer) before reaching the active volume, which is 65 μm thick. For protons this results in a partly increased deposited energy and a right-shift of the peak position in the spectra. This effect can become significant for low-energy protons as they approach the Bragg peak. § CONCLUSIONS AND OUTLOOK Newly developed MiniPIX Timepix3 detectors with epitaxially grown 4H-SiC sensors were manufactured, per-pixel energy calibrated and tested with well-defined radiation sources. 
In this work we evaluated the spectral-tracking response of the device to protons with energies ranging from 13 to 226 MeV. At each energy a rotation scan was performed to obtain the detector response at different angles between the sensor normal and the proton beam. Due to the large dead zone above the 65 μm thick active region (at a 200 V bias), tilting the sensor allows better detection of high-energy charged particles. The resolving power for particle-type classes is limited due to the small active region. The response of the sensor is homogeneous. A small yield of large-amplitude, i.e., high-energy, small-pixel tracks requires further analysis and possibly a correction of the high-energy per-pixel energy response. The detected proton tracks are smooth and continuous, which demonstrates proper energy calibration and suitable low-threshold detection. In the future, a new sensor design is being prepared to prevent electrical shorts at high voltages; approximately 300 V is required for full depletion of the 80 μm sensor (determined from the C-V measurement as a function of reverse bias voltage and depletion depth). A major advantage of SiC is also its excellent spectral performance, without a significant decrease at elevated temperatures up to hundreds of degrees Celsius. Tests to verify the performance of MiniPIX Timepix3 at elevated temperatures are being prepared, together with radiation hardness tests; the radiation hardness is expected to be 10^3 times higher than that of commercially available Si. The high radiation hardness also allows this detector to be used for the detection of neutrons, which are highly damaging. In follow-up work, the absolute detection efficiency for fast neutrons will be analysed from the existing calibrated data and by further calibrations in well-defined neutron fields. Work at Advacam was performed in the frame of Contract No. 40001250020/18/NL/GLC/hh from the European Space Agency. Work at STUBA was supported by the Slovak Research and Development Agency grant APVV-18-0273.99HEI13E.H.M. Heijne, et al., Measuring radiation environment in LHC or anywhere else, on your computer screen with Medipix, Nucl. Instrum. Meth. A 699 (2013) 198-204.BAL18R. Ballabriga, et al., ASIC developments for radiation imaging applications: The medipix and timepix family, Nucl. Instrum. Meth. A 878 (2018) 10-23.single_LETP. Stasica, et al., Single proton LET characterization with Timepix and artificial intelligence for proton therapy treatment planning, Physics in Medicine & Biology, 68 (2023) 104001.proton_pencil_beamC. Oancea, C. Granja, et al., Out-of-field measurements and simulations of a proton pencil beam in wide range of dose rates with Timepix3: Dose rate, flux and LET, Physica Medica, 106 (2023) 102529.imaging_1V. Olsansky, C. Granja, et al., Spectral-sensitive proton radiography of thin samples with the pixel detector Timepix3, JINST, 17 (2022) C04016.imaging_2C. Granja, C. Oancea, et al., Energy Sensitive Imaging of Focused and Scanning Ion Microbeams with m Spatial and s Time Resolution, EPJ Web Conf., 261 (2022) 01007.radiation_monitorSt. Gohl, et al., A miniaturized radiation monitor for continuous dosimetry and particle identification in space, IOP Publishing, 7 (2022) C01066.timepix3T. Poikela, et al., Timepix3: A 65K channel hybrid pixel readout chip with simultaneous ToA/ToT and sparse readout, Journal of Instrumentation, 9 (2014) C05013.gaasB. Zatko, D. 
Kubanda, et al., First tests of Timepix detectors based on semi-insulating GaAs matrix of different pixel size, Journal of Instrumentation, 13 (2018) C02013.diamondG. Claps, F. Murtas, et al., Diamondpix: A CVD Diamond Detector With Timepix3 Chip Interface, IEEE Transactions on Nuclear Science, PP (2018) 1-1.bB. Zatko, A. Sagatova, et al., From a single silicon carbide detector to pixelated structure for radiation imaging camera, JINST 17 (2022) C12005.sic_temperatureN. Gal, et al., High-resolution alpha-particle detector based on Schottky barrier 4H-SiC detector operated at elevated temperatures up to 500 °C, Applied Surface Science, 635 (2023) 157708.flash_strayC. Oancea, et al., Stray radiation produced in FLASH electron beams characterized by the MiniPIX Timepix3 Flex detector, JINST 17 (2022) C01003.GRA18C. Granja, et al., Resolving power of pixel detector Timepix for wide-range electron, proton and ion detection, Nuclear Instr. Methods A 908 (2018) 60-71.tpx2_mixed_fieldC. Granja, et al., Spectral and directional sensitive composition characterization of mixed-radiation fields with the miniaturized radiation camera MiniPIX, Timepix2, JINST 17 (2022) C11014.novakA. Novak, C. Granja, et al., Spectral tracking of proton beams by the Timepix3 detector with GaAs, CdTe and Si sensors, JINST 18 (2023) C01022.jakubekJ. Jakubek, Precise energy calibration of pixel detector working in time-over-threshold mode, Nucl. Inst. and Meth. A 633 (2011) S262-S266.KRI18F. Krizek, et al., Irradiation setup at the U-120M cyclotron facility, Nucl. Inst. and Meth. A 894 (2018) 87-95.HOL08T. Holy, et al., Pattern recognition of tracks induced by individual quanta of ionizing radiation in Medipix2 silicon detector, Nucl. Instrum. Meth. A 591 (2008) 287–290.dpeMarek L. et al., Data Processing Engine (DPE): Data Analysis Tool for Particle Tracking and Mixed Radiation Field Characterization with Pixel Detectors Timepix, submitted to JINST (2023). novel_LETR. Nabha, et al., A novel method to assess the incident angle and the LET of protons using a compact single-layer Timepix detector, Radiation Physics and Chemistry, 199 (2022) 110349.CAR21LETC. Granja, C. Oancea, et al., Wide-range tracking and LET-spectra of energetic light and heavy charged particles, Nucl. Instrum. and Meth. A 988 (2021) 164901.SOM22M. Sommer, C. Granja, et al., High-energy per-pixel calibration of Timepix pixel detector with laboratory alpha source, Nucl. Instrum. Meth. A 1022 (2022) 165957. | http://arxiv.org/abs/2310.17747v1 | {
"authors": [
"Andrej Novak",
"Carlos Granja",
"Andrea Sagatova",
"Jan Jakubek",
"Bohumir Zatko",
"Vladimir Vondracek",
"Michal Andrlik",
"Vaclav Zach",
"Stepan Polansky",
"Anuj Rathi",
"Cristina Oancea"
],
"categories": [
"physics.ins-det",
"physics.med-ph"
],
"primary_category": "physics.ins-det",
"published": "20231026194232",
"title": "Silicon Carbide Timepix3 detector for quantum-imaging detection and spectral tracking of charged particles in wide range of energy and field-of-view"
} |
Large language models (LLMs) offer unprecedented text completion capabilities. As general models, they can fulfill a wide range of roles, including those of more specialized models. We assess the performance of GPT-4 and GPT-3.5 in zero-shot, few-shot and fine-tuned settings on the aspect-based sentiment analysis (ABSA) task. Fine-tuned GPT-3.5 achieves a state-of-the-art F1 score of 83.8 on the joint aspect term extraction and polarity classification task of SemEval-2014 Task 4, improving upon InstructABSA (Scaria et al. 2023) by 5.7%. However, this comes at the price of 1000 times more model parameters and thus increased inference cost. We discuss the cost-performance trade-offs of different models, and analyze the typical errors that they make. Our results also indicate that detailed prompts improve performance in zero-shot and few-shot settings but are not necessary for fine-tuned models. This evidence is relevant for practitioners who are faced with the choice of prompt engineering versus fine-tuning when using LLMs for ABSA.

Keywords: aspect-based sentiment analysis (ABSA) • large language models (LLMs) • GPT • few-shot learning • prompt engineering

introduction § INTRODUCTION Aspect-based sentiment analysis (ABSA) is used for providing insights into digitized texts, such as product reviews or forum discussions, and is therefore a key capability for fields such as the digital social sciences, humanities, and market research. It offers a more detailed view of reviews compared to conventional sentiment analysis, which typically categorizes the overall sentiment of a whole text as positive, negative, or neutral (Turney 2002; Pang, Lee, and Vaithyanathan 2002). The SemEval-2014 challenge (Pontiki et al. 2014) proposed aspect term extraction (ATE) and aspect term sentiment classification (ATSC) tasks. These can be merged into a joint task, where the goal is to simultaneously extract aspect terms and classify their polarity. We focus our efforts on the joint task in the present study [Code available at <https://github.com/qagentur/absa_llm>]. A wide array of specialized models has been developed for ABSA. With the emergence of pre-trained language models like BERT (Devlin et al. 2019), the field has witnessed significant advancements in accuracy, particularly in transformer-based models that incorporate task-specific output layers. Zhang et al. (2022) provide a comprehensive survey of such models in the domain. PyABSA (Yang and Li 2023) is notable for its collection of ABSA datasets and models for various ABSA tasks across multiple languages. OpenAI has reshaped the field of Natural Language Processing (NLP) in recent years with their family of Generative Pre-trained Transformer models, GPTs (Brown et al. 2020; Ouyang et al. 2022; OpenAI 2023). GPTs are generalist large language models (LLMs) with a wide range of demonstrated capabilities, ranging from classic NLP tasks to causal reasoning and even functional knowledge in specialized domains like medicine or law (Mao et al. 2023). Zhang et al. 
(2023) investigated the accuracy of LLMs for sentiment analysis and found non-fine-tuned LLMs generally capable, but not on par with specialized models, on complex tasks such as ABSA. They also emphasize that the choice of prompt is critical for performance. InstructABSA is the current state of the art for ABSA. It utilizes a fine-tuned T5 model (Wang et al. 2022) along with prompt instructions for improved performance. To investigate how well the current generation of LLMs is able to perform ABSA, we test the performance of GPT-3.5 and GPT-4 (pinned models gpt-3.5-turbo-0613 and gpt-4-0613, available through the OpenAI API) under a range of conditions, and compare their performance to that of InstructABSA, the state-of-the-art model for the SemEval-2014 Task 4 joint task. Table 1 contains an overview of the tested models. The selection leaves out some notable high-performance LLMs, such as Meta's Llama2 (Touvron et al. 2023a), Anthropic's Claude2, and Google's Bard.

Table 1: Model comparison

|  | GPT-4 | GPT-3.5 | GPT-3.5 fine-tuned | InstructABSA |
|---|---|---|---|---|
| Base model | GPT-4 | GPT-3.5 | GPT-3.5 | T5 |
| Parameters | ~1.7T* | 175B | 175B | 200M |
| Fine-tuning | Not available | No | SemEval-2014 | SemEval-2014 |
| Language | Multilingual | Multilingual | English | English |
| Availability | Commercial | Commercial | Commercial, custom | Open source |

*The number of parameters of GPT-4 is currently not disclosed by OpenAI. Internet rumors put it at 1.7T. We use this number for comparison purposes.

methods-data § METHODS & DATA We evaluate performance on the gold-standard benchmark dataset SemEval-2014 (Pontiki et al. 2014), consisting of human-annotated laptop and restaurant reviews. Model performance is measured on the joint aspect term extraction and polarity classification task, using the F1 score as the main performance metric. We test multiple prompt variations and numbers of provided in-context examples, and fine-tune the models via OpenAI's API. Figure 1 shows an overview of the experimental set-up. We also break down the types of false-positive errors that the models make in order to get a better understanding of their strengths and weaknesses, and compare the costs and cost-efficiency of the different options. data§.§ Data The SemEval-2014 dataset contains human-annotated sentences from laptop and restaurant reviews. The original 2014 dataset has 4 polarity labels: positive, negative, neutral and conflict. To maintain consistency with InstructABSA and previous models, we also exclude examples with the “conflict” label. The examples that include them were removed completely, rather than just removing the “conflict”-labeled aspects, to avoid the case where the text contains aspects that the model is not supposed to label. Altogether, 126 examples were removed from the training set and 28 from the test set, leaving 5759 in training and 1572 in test. We used up to 10 examples from the training set for in-context instructions and the whole training set for fine-tuning. During fine-tuning, we reserved 20% of the training set for validation. 
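The “conflict” filtering described above can be sketched as follows (an illustration; representing each example as a dict with an "aspects" list is an assumption, not necessarily the data structure used in this study):

```python
def drop_conflict_examples(examples):
    """Remove whole sentences containing any aspect labeled 'conflict',
    rather than dropping only the conflicting aspects."""
    return [ex for ex in examples
            if all(a["polarity"] != "conflict" for a in ex["aspects"])]

# Applied to the SemEval-2014 splits, this removes 126 training and 28 test
# sentences, leaving 5759 training and 1572 test examples (figures from the text above).
```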
We used the test set to evaluate the final model performance. prompts§.§ Prompts The prompts for this study are structured in 3 parts: 1) a task description in the system message, 2) an optional set of in-context examples, also included in the system message, and 3) a JSON schema defining the expected output format. task-description §.§.§ Task description The task description is a short text that instructs the model. Altogether, we used 9 different task descriptions (see Table 2; see the Appendix for full prompt texts). They primarily consisted of variations of the original annotation guidelines for the SemEval-2014 task and the prompt used in the state-of-the-art model, InstructABSA. In addition, we also added a task description that attempts to coax the generalist GPT model into behaving as a specialist model (Roleplay), and one that performs the task in a step-wise manner (Separate tasks). Finally, we added two controls: a minimal one-sentence summary of the task (Minimal), and a No prompt control that was only tested on the fine-tuned models. The Reference task description explicitly taps into GPT-4's pre-existing knowledge of SemEval-2014 by referencing the task by name. The Guidelines summary was created by GPT-4 itself: we pasted the original annotation guidelines into the OpenAI API playground and asked the model to summarize them. The resulting summary was then used as the task description. in-context-examples §.§.§ In-context examples The examples provided as part of the completion request enable in-context learning without changing the model's parameters (Liu et al. 2021). Further, they introduce the output format to the model. Providing examples can have a large positive impact on the performance of text-to-text models in classification tasks, and Min et al. (2022) show that the main benefit stems from establishing the answer format and providing information on the distribution of tokens. We tested a range of in-context example conditions. The examples were manually picked from the training set based on the curriculum learning (Bengio et al. 2009) concept, meaning that a series of examples starts with the simplest case and builds up to complex edge cases. We tested the following conditions, each building on the previous:

* 0 shot
* 2 shot: one basic example per review domain (restaurants and laptops) showing a typical case
* 4 shot: two basic examples per domain, showing a variety of outputs
* 6 shot: three examples per domain
* 10 shot: same as above, plus hard examples for each domain to teach edge cases

Full texts are provided in the appendix. chat-interface §.§.§ Chat interface In contrast to prior models, GPT-3.5 and GPT-4 use a chat interface rather than a pure text-to-text interface. The chat starts with a system message, which can be used for providing the task description. The model then predicts an answer, and the resulting format is a dialogue between user messages and assistant responses. This presents two options for including in-context examples for the task: either they are included in the system message, or they are presented as pairs of user and assistant messages already in the dialogue. We tested both options in a preliminary experiment and found that GPT-3.5's performance benefits only from the examples in the system message. 
For example, for the Guidelines summary prompt, increasing the number of in-context examples from 0 to 6 increased the F1 score from 64.3 to 65.7 when the examples were provided within the system message, while the same examples actually decreased the F1 score from 64.3 to 60.0 when they were provided outside of the system message. GPT-4's performance appears to benefit from both options, but we chose to include the examples in the system message for both models. function-schema§.§ Function schema As the GPT models are text-to-text models, having a standardized output format is crucial for their usefulness in a structured task like ABSA. Prior studies by Zhang et al. (2023) and Scaria et al. (2023) devised custom formats and instructed the model to use them in the prompt. We opted for JSON as a standard and use OpenAI's function calling feature to enforce the format: as part of the inference command, the model is instructed to call a function with the generated text as input. This function describes the expected output format using a JSON schema, standardizing the model outputs. The JSON schema used has description fields and informative field names, which act as further in-context instructions. Given the input text “The fajitas are tasty”, the function arguments returned by the model would then take the form {"aspects": [{"term": "fajitas", "polarity": "positive"}]}. fine-tuning-gpt-3.5§.§ Fine tuning GPT-3.5 We fine-tuned GPT-3.5 on the examples from the training set for 3 epochs using an 80/20 train/validation split. A system message prompt can be provided for fine-tuning and, for optimal performance, the same system message should be included at inference time. To test the influence of the system prompt, we fine-tuned the model with three different prompts: Guidelines summary, Minimal system message, and without a system message (No prompt). Training converged quickly in all cases (Figure 2). Further epochs are unlikely to improve performance because a training accuracy of 100% was reached. Validation accuracy fluctuates around 76%. The resulting fine-tuned models do not need a JSON schema to produce structured output, as they have learned the expected JSON format. Parsing the returned strings as JSON did not produce any errors. This reduces the number of input tokens required for inference. parameters§.§ Parameters We iterated on the number of in-context examples and the prompt variations. Temperature was set to 0 to get near-deterministic results. Other parameters were left at their default values. results § RESULTS prompts-1§.§ Prompts As prompt selection and tuning are a key step for the successful use of a task-agnostic text-to-text model like GPT, we first wanted to find out which prompt variants gave the best results on the task. We tested the prompt variations using GPT-3.5. Results are summarised in Table 2. Overall, the prompts gave fairly similar results, and adding in-context examples did not improve performance significantly, with the exceptions of the InstructABSA and Minimal prompts. Using 6 in-context examples was better than 10 for almost all tested prompts. Examples 7-10 were purposefully selected to represent difficult cases and apparently failed to be useful for the task. The Guidelines summary provided the best average performance (F1: 64.99) across the tested in-context example conditions, by a narrow margin. 
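For concreteness, a minimal sketch of the inference setup described in the Methods section (system message plus function schema) could look like the following; this is a reconstruction using the legacy pre-1.0 `openai` Python client, not the exact code used in this study.

```python
import json
import openai  # legacy 0.28-style client interface assumed here

SYSTEM_MESSAGE = ("You extract aspects (noun phrases) and polarities "
                  "(positive, neutral, negative) from reviews")  # the "Minimal" prompt; in-context examples would be appended here

FUNCTION_SCHEMA = {  # abbreviated form of the schema given in the appendix
    "name": "extract_aspects_and_polarities",
    "description": "Extract sentiment aspects and polarities from a text",
    "parameters": {
        "type": "object",
        "properties": {
            "aspects": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "term": {"type": "string"},
                        "polarity": {"type": "string",
                                     "enum": ["positive", "neutral", "negative"]},
                    },
                    "required": ["term", "polarity"],
                },
            }
        },
        "required": ["aspects"],
    },
}

def extract_aspects(review: str) -> list:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        temperature=0,
        messages=[{"role": "system", "content": SYSTEM_MESSAGE},
                  {"role": "user", "content": review}],
        functions=[FUNCTION_SCHEMA],
        function_call={"name": "extract_aspects_and_polarities"},
    )
    args = response["choices"][0]["message"]["function_call"]["arguments"]
    return json.loads(args)["aspects"]

# extract_aspects("The fajitas are tasty")
# -> e.g. [{"term": "fajitas", "polarity": "positive"}]
```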
Overall, GPT-3.5 (without fine-tuning) did not attain the performance levels of modern specialized models for ABSA.

Table 2: Task description variants (GPT-3.5)

| Prompt | Prompt tokens | Description | F1, 0 examples | F1, 6 examples | F1, 10 examples |
|---|---|---|---|---|---|
| Guidelines summary | 178 | Summary of the guidelines created by GPT-4 | 64.39 | 65.65 | 64.92 |
| Annotation guidelines | 2021 | Official guide for SemEval-2014 Task 4 | 63.77 | 66.03 | 64.88 |
| Roleplay | 84 | Pretend to be a specialized machine learning model | 62.09 | 64.29 | 64.64 |
| Reference | 39 | Name-drop SemEval-2014 Task 4 | 62.36 | 64.2 | 63.23 |
| InstructABSA | 18 | InstructABSA prompt | 52.94 | 61.39 | 61.32 |
| Separate tasks | 150 | 2 steps: term extraction, polarity classification | 61.68 | 62.74 | 61.27 |
| InstructABSA with examples | 249 | InstructABSA prompt + 6 examples from paper | 61.54 | 62.49 | 60.18 |
| Minimal | 20 | One-sentence summary of task | 46.62 | 60.29 | 59.71 |

gpt-4-and-fine-tuned-gpt-3.5-model-performance§.§ GPT-4 and fine-tuned GPT-3.5 model performance Next, we conducted experiments using an updated model, GPT-4. We chose the Guidelines summary prompt for this, as it was the best-performing prompt on GPT-3.5. Surprisingly, in the zero-shot condition, GPT-4 exhibited lower performance than GPT-3.5, achieving an F1 score of 57.6. However, its performance notably improved with the inclusion of in-context examples, surpassing GPT-3.5 in the 6- and 10-example conditions and reaching a peak F1 score of 71.3. In contrast, the three fine-tuned GPT-3.5 models consistently outperformed the previous state-of-the-art model, InstructABSA (Scaria et al. 2023), even with no in-context examples provided (Figure 3). All three fine-tuned models demonstrated remarkably similar performance. Notably, the Minimal prompt achieved the highest F1 score at 83.8, and even the No prompt model outperformed the previous state of the art. The addition of extra in-context examples did not yield significant performance gains but rather resulted in a mild performance decrease (tested only on the Minimal prompt). This possibly reflects the mismatch between the system message seen in fine-tuning and the system message with in-context examples at inference time. Collectively, these findings suggest that for a relatively straightforward task like ABSA, optimal performance is achieved by fine-tuning the model and allowing it to learn the task directly from the training data. error-analysis§.§ Error Analysis Overall, the fine-tuned GPT-3.5 models are both more sensitive and more specific than either GPT-3.5 or GPT-4. The fine-tuned GPT-3.5 makes 50% fewer errors than the best GPT-3.5 model and 42% fewer than GPT-4, while correctly detecting and classifying 34% (vs. GPT-3.5) or 21% (vs. GPT-4) more aspect term-polarity pairs than the other tested models. 
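For reference, the true/false positive counts and the F1 score reported here can be computed by exact matching of (aspect term, polarity) pairs between predictions and gold annotations; a sketch of such a micro-averaged evaluation (an illustration, not necessarily the exact scoring script used in this study) is:

```python
def joint_task_f1(predictions, gold):
    """predictions, gold: lists (one entry per sentence) of collections of
    (term, polarity) tuples. A prediction counts as a true positive only if
    both the verbatim term and its polarity match a gold pair."""
    tp = fp = fn = 0
    for pred, ref in zip(predictions, gold):
        pred, ref = set(pred), set(ref)
        tp += len(pred & ref)
        fp += len(pred - ref)
        fn += len(ref - pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```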
Table 3 shows error metrics for each model.

Table 3: Error analysis by model

| Model | TP | FN | FP | FP: Predicted aspect not in gold set | FP: Polarity classification | FP: Aspect boundaries | FP: Made up terms |
|---|---|---|---|---|---|---|---|
| GPT-3.5 | 1426 | 791 | 676 | 346 | 142 | 154 | 34 |
| GPT-3.5 finetuned, guidelines summary prompt | 1907 | 377 | 397 | 81 | 189 | 125 | 2 |
| GPT-3.5 finetuned, minimal prompt | 1909 | 373 | 367 | 68 | 166 | 131 | 2 |
| GPT-3.5 finetuned, no prompt | 1876 | 404 | 387 | 61 | 192 | 132 | 2 |
| GPT-4 | 1576 | 514 | 752 | 415 | 174 | 154 | 9 |

In order to find out what kinds of errors the models make, we also broke down the false positives further into the following error sub-types (a sketch of how these sub-types can be assigned automatically is given below):

* Predicted aspect not in gold set: The model extracts a term from the text that is not found in the gold aspect set.
* Polarity classification: The model correctly extracts an aspect term but misclassifies its polarity.
* Aspect boundaries: The model partially extracts an aspect term that has more or fewer tokens than the gold aspect term.
* Made up terms: The model predicts an aspect term that is not found in the text.

In aggregate, the most common error sub-type is Predicted aspect not in gold set. These were often terms that could be of interest in a real-world use case but broke some annotation rules of the benchmark dataset. For example, labeling non-noun terms in a sentence like: “It's fast [positive], quiet [positive], incredibly small [positive] and affordable [positive].” Fine-tuning the model also had the biggest effect here, leading to up to an ~88% reduction in this type of FP. The second most common error sub-type is the Polarity classification error. For example, in the sentence “The MAC Mini [positive], wireless keyboard / mouse [positive] and a HDMI cable [positive] is all I need to get some real work done.” GPT-3.5 labeled several of the listed features of the computer as positive. However, these were annotated as neutral aspects in the gold set, as no specific positive qualities of the features were explicitly mentioned in the review. Overall, we observed that all of the models over-predicted the number of positive and negative labels and under-predicted the number of neutral labels. It is also worth noting that the polarity classification errors increase with model performance, as a prerequisite for polarity classification is that the aspect term is extracted correctly. The third most common are Aspect boundary issues. For example, in the sentence “The Mini's body hasn't changed since late 2010- and for a good reason.” GPT-4 extracted the term “Mini's body” whereas only the term “body” is labelled in the gold set. Finally, we also occasionally saw the models identifying terms that were not present in the text. These tended to be abstractions of the sentence content, such as labeling “speed” and “performance” in the sentence “It is extremely fast and never lags”. GPT-3.5 has a notably high number of made-up terms, seemingly failing to learn the instruction that the aspect terms must be extracted verbatim. However, altogether these error sub-types were not very common, and fine-tuning almost completely eliminated the issue. On the whole, the errors are typically related to the idiosyncrasies of the benchmark dataset. 
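The following sketch (an illustration, not the analysis code used here) shows one simple way to assign the four false-positive sub-types automatically, given a predicted (term, polarity) pair, the review text, and the gold annotations for that sentence:

```python
def categorize_fp(term, polarity, text, gold):
    """gold: dict mapping each gold aspect term to its polarity.
    Returns one of the four false-positive sub-types described above."""
    if term not in text:
        return "made up term"
    if term in gold:                       # term matches, so the polarity must be wrong
        return "polarity classification"
    if any(term in g or g in term for g in gold):
        return "aspect boundaries"         # partial overlap with a gold term
    return "predicted aspect not in gold set"

# categorize_fp("Mini's body", "neutral",
#               "The Mini's body hasn't changed since late 2010- and for a good reason.",
#               {"body": "neutral"})
# -> "aspect boundaries"
```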
In a few-shot setting, the LLM struggles to pick up on the nuances of the labeling rules, instead delivering more commonsense labels. This ties into remarks by Zhang et al. (2023), who found that LLMs are capable of ABSA, but not with the precise format required by the benchmark dataset. While such errors hamper benchmark performance, they should not necessarily discourage the use of LLMs in real-world applications of ABSA or similar tasks. In domains like market research it may, for example, be preferable to also extract non-noun and abstracted terms. economic-analysis§.§ Economic analysis LLMs are computationally expensive to train and use. With OpenAI, users pay based on the number of both input and output tokens[https://openai.com/pricing]. OpenAI charges eight times more for input and output tokens for fine-tuned GPT-3.5 models than for the default model. However, fine-tuned models do not require the use of a JSON schema, reducing the number of input tokens required. As demonstrated by our results, fine-tuned models are also not reliant on relatively long prompts or the presence of in-context examples. Thus, they can lead to cost savings while providing more accurate results at a lower overall cost. See Table 4 for the overall cost summary of the tested model versions and conditions.

Table 4: Cost comparison

| Model | Prompt | JSON schema | In-context examples | F1 score | Cost (USD) | F1 / USD |
|---|---|---|---|---|---|---|
| GPT-3.5 finetuned | Minimal | False | 0 | 83.76 | 0.59 | 141.97 |
| GPT-3.5 finetuned | Guidelines summary | False | 0 | 83.13 | 2.08 | 39.97 |
| GPT-3.5 finetuned | No prompt | False | 0 | 82.59 | 0.36 | 229.42 |
| GPT-3.5 finetuned | Minimal | False | 6 | 81.14 | 3.17 | 25.6 |
| InstructABSA | InstructABSA prompt | False | 6 | 79.3 | 0.05 | 1586 |
| GPT-4 | Guidelines summary | True | 6 | 71.34 | 15.02 | 4.75 |
| GPT-3.5 | Guidelines summary | True | 6 | 65.65 | 0.71 | 92.46 |
| GPT-3.5 | Guidelines summary | True | 0 | 64.39 | 0.39 | 165.1 |
| GPT-3.5 | Minimal | True | 6 | 60.29 | 0.54 | 111.65 |
| GPT-4 | Guidelines summary | True | 0 | 57.58 | 8.91 | 6.46 |
| GPT-3.5 | Minimal | True | 0 | 46.62 | 0.24 | 194.25 |

In real-world applications with significantly larger datasets than the benchmark set used here, it is worth considering that InstructABSA is still significantly cheaper to operate while providing near state-of-the-art results, with a run amounting to less than $0.05 when executed on a single vCPU on AWS or a similar cloud provider. GPT-4, on the other hand, is the strongest model available in a setting with low computational resources and no access to training data for fine-tuning, but it is also by far the most expensive model to operate, reflecting its high parameter count. Notably, when measuring cost-efficiency as the ratio of the obtained F1 score to the run cost, InstructABSA is more than 300-fold better than the best-performing GPT-4 model, but only ~7-fold better than the most efficient fine-tuned GPT-3.5 model. discussion § DISCUSSION We explore the application of OpenAI's LLMs to the classic NLP task of aspect-based sentiment analysis (ABSA). We focus on the joint aspect term extraction and polarity classification task on the widely used SemEval-2014 benchmark dataset. A fine-tuned GPT-3.5 model achieved state-of-the-art performance (F1 score of 83.8) on the ABSA joint task. Fine-tuning the model also emerged as the most efficient option for the task, offering superior performance without the need for extensive prompt engineering. 
This not only saves time but also reduces token consumption, both valuable aspects in real-world applications. Our analysis revealed that the errors made by the non-fine-tuned models were often related to discrepancies between the model's predictions and the idiosyncrasies of the benchmark dataset's annotation rules. GPT-3.5 and GPT-4 often offered sensible term-polarity predictions that simply failed to take all the intricacies of the annotation rules into account, whereas fine-tuning seemed to align the models better with this specific formulation of the ABSA task. This is supported by the observation that the main performance increase between the basic and fine-tuned models resulted both from a decreased number of false negatives and from a nearly 90% reduction in false positives of the Predicted aspect not in gold set sub-type (e.g. extracting non-noun phrases from review texts). For the present paper, we limited our analysis to a single dataset and to the joint task. While this allowed us to be more focused in our optimization efforts, it also means that the generalizability of the findings to other datasets as well as to real-world use cases remains a topic for further investigation. The annotation rules of SemEval-2014 specify that only noun phrases can be aspects. However, in real-world applications, it may be desirable to extract other kinds of aspects as well. For example, in market research, it may be of interest to extract aspects such as "speed" and "performance" from the sentence "It is extremely fast and never lags". This would require a different annotation scheme, and possibly a different task formulation. Another interesting avenue for follow-up work would be to test the performance of a fine-tuned GPT-4 (unavailable at the time of writing) and to compare the performance of GPT models to that of open-source LLMs such as Llama 2 (Touvron et al. 2023b). Although fine-tuning appeared to significantly decrease the importance of prompt engineering in general, it might still be of interest to test, for example, the effects of chain-of-thought prompting and self-correction on performance. In conclusion, our research demonstrates the great potential of fine-tuned LLMs for ABSA. We found fine-tuning GPT-3.5 to the task particularly effective, offering state-of-the-art performance at a price point between InstructABSA and GPT-4. The performance versus model size trade-off reflects the trend in research on transformer models: increasing model size brings improved performance, but also increased computational and operational costs. While our study focused on a single benchmark dataset, it lays the foundation for broader exploration and implementation of LLMs in ABSA across diverse datasets and use cases. references § REFERENCES preref-bengio_curriculum_2009 Bengio, Yoshua, Jérôme Louradour, Ronan Collobert, and Jason Weston. 2009. “Curriculum Learning.” In Proceedings of the 26th Annual International Conference on Machine Learning, 41–48. Montreal Quebec Canada: ACM. <https://doi.org/10.1145/1553374.1553380>. preref-DBLP:journalsux2fcorrux2fabs-2005-14165 Brown, Tom B., Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, et al. 2020. “Language Models Are Few-Shot Learners.” CoRR abs/2005.14165. <https://arxiv.org/abs/2005.14165>. preref-devlin_bert_2019 Devlin, Jacob, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. “BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding.” arXiv. 
<https://doi.org/10.48550/arXiv.1810.04805>.preref-liu2021makes Liu, Jiachang, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2021. “What Makes Good in-Context Examples for GPT-3?” <https://arxiv.org/abs/2101.06804>.preref-mao2023gpteval Mao, Rui, Guanyi Chen, Xulang Zhang, Frank Guerin, and Erik Cambria. 2023. “GPTEval: A Survey on Assessments of ChatGPT and GPT-4.” <https://arxiv.org/abs/2308.12488>.preref-min_rethinking_2022 Min, Sewon, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. “Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?” arXiv. <http://arxiv.org/abs/2202.12837>.preref-openai2023gpt4 OpenAI. 2023. “GPT-4 Technical Report.” <https://arxiv.org/abs/2303.08774>.preref-ouyang2022training Ouyang, Long, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, et al. 2022. “Training Language Models to Follow Instructions with Human Feedback.” <https://arxiv.org/abs/2203.02155>.preref-pang-etal-2002-thumbs Pang, Bo, Lillian Lee, and Shivakumar Vaithyanathan. 2002. “Thumbs up? Sentiment Classification Using Machine Learning Techniques.” In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP 2002), 79–86. Association for Computational Linguistics. <https://doi.org/10.3115/1118693.1118704>.preref-pontiki_semeval-2014_2014 Pontiki, Maria, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. “SemEval-2014 Task 4: Aspect Based Sentiment Analysis.” In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), 27–35. Dublin, Ireland: Association for Computational Linguistics. <https://doi.org/10.3115/v1/S14-2004>.preref-scaria_instructabsa_2023 Scaria, Kevin, Himanshu Gupta, Siddharth Goyal, Saurabh Arjun Sawant, Swaroop Mishra, and Chitta Baral. 2023. “InstructABSA: Instruction Learning for Aspect Based Sentiment Analysis.” arXiv. <https://doi.org/10.48550/arXiv.2302.08624>.preref-touvron2023llama Touvron, Hugo, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, et al. 2023a. “Llama 2: Open Foundation and Fine-Tuned Chat Models.” <https://arxiv.org/abs/2307.09288>.preref-touvron_llama_2023 ———, et al. 2023b. “Llama 2: Open Foundation and Fine-Tuned Chat Models.” arXiv. <https://doi.org/10.48550/arXiv.2307.09288>.preref-turney2002 Turney, Peter D. 2002. “Thumbs up or Thumbs down? Semantic Orientation Applied to Unsupervised Classification of Reviews.” In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, 417–24. ACL '02. USA: Association for Computational Linguistics. <https://doi.org/10.3115/1073083.1073153>.preref-wang_super-naturalinstructions_2022 Wang, Yizhong, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Atharva Naik, Arjun Ashok, et al. 2022. “Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks.” In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 5085–5109. Abu Dhabi, United Arab Emirates: Association for Computational Linguistics. <https://aclanthology.org/2022.emnlp-main.340>.preref-yang_pyabsa_2023 Yang, Heng, and Ke Li. 2023. “PyABSA: A Modularized Framework for Reproducible Aspect-Based Sentiment Analysis.” arXiv. 
<https://doi.org/10.48550/arXiv.2208.01368>.preref-zhang_sentiment_2023 Zhang, Wenxuan, Yue Deng, Bing Liu, Sinno Jialin Pan, and Lidong Bing. 2023. “Sentiment Analysis in the Era of Large Language Models: A Reality Check.” arXiv. <http://arxiv.org/abs/2305.15005>.preref-zhang_survey_2022 Zhang, Wenxuan, Xin Li, Yang Deng, Lidong Bing, and Wai Lam. 2022. “A Survey on Aspect-Based Sentiment Analysis: Tasks, Methods, and Challenges.” IEEE Transactions on Knowledge and Data Engineering, 1–20. <https://doi.org/10.1109/TKDE.2022.3230975>.appendix § APPENDIX prompts-2§.§ Prompts Prompt: Annotation guidelinesAvailable at: <https://alt.qcri.org/semeval2014/task4/data/uploads/semeval14_absa_annotationguidelines.pdf>Prompt: Guidelines summaryThese guidelines detail Aspect Based Sentiment Analysis Annotation for restaurant and laptop customer reviews. The aim is to determine aspect terms and their sentiment polarities within sentences. Aspect terms are words or phrases describing the specific attributes of the target entity. Sentiment polarity can be positive, negative, or neutral. For aspect terms, annotators should mark nominal phrases explicitly mentioning aspects and verbs or verbal names of aspects. Subjectivity indicators, references to the target entity as a whole, and the name, type, or model of the laptop or restaurant should not be considered as aspect terms. Also, pronouns and implicit aspect terms should not be annotated. For sentiment polarity, an aspect term should be classified as positive or negative if the sentence expresses a positive or negative opinion. Neutral polarity should be assigned when a neutral sentiment or factual information is given about an aspect, or when the sentiment is inferred but not explicit.Prompt: InstructABSAThe output will be the aspects (both implicit and explicit), and the aspects sentiment polarity.Prompt: InstructABSA with examplesThe output will be the aspects (both implicit and explicit), and the aspects sentiment polarity.Positive Example Example Input 1: With the great variety on the menu, I eat here often and never get bored. Example Output 1: menu:positiveExample Input 2: Great food, good size menu, great service, and an unpretentious setting. Example Output 2: food:positiveNegative Example Example Input 1: They did not have mayonnaise, forgot our toast, left out ingredients (i.e., cheese in an omelet), below hot temperatures, and the bacon was so overcooked it crumbled on the plate when you touched it. Example Output 1: toast:negativeExample Input 2: The seats are uncomfortable if you are sitting against the wall on wooden benches. Example Output 2: seats:negativeNeutral Example Example Input 1: I asked for a seltzer with lime, no ice. Example Output 1: seltzer with lime:neutralExample Input 2: They wouldn't even let me finish my glass of wine before offering another. Example Output 2: glass of wine:neutralNow complete the following example-input:Prompt: MinimalYou extract aspects (noun phrases) and polarities (positive, neutral, negative) from reviewsPrompt: ReferenceSemEval 2014 Task 4: Aspect Based Sentiment AnalysisExtract aspects from customer reviews and classify their polarity. Follow the labeling rules of the SemEval 2014 Annotation Guidelines.Prompt: RoleplayYou are a perfectly accurate machine learning model for aspect based sentiment analysis (ABSA) on reviews. Polarity labels: `positive', `negative', `neutral'. Aspects are noun phrases that refer to specific features of the product or service. Don't include implicit aspects. 
Opinions about the product or service as a whole are not aspects. Copy aspect terms verbatim from the reviews, don't generalize them.Prompt: Separate tasksAspect-Based Sentiment Analysis on Customer ReviewsYour task is to perform aspect-based sentiment analysis (ABSA) on customer reviews.Follow the steps below to extract aspects and classify their polarity. 1. Aspect Term Extraction (ATE): - Extract all aspects mentioned in the reviews. - Copy aspect terms verbatim, without generalizing or modifying them. - Aspects are noun phrases that refer to specific features of the product or service. - Don't include implicit aspects. - Opinions about the product or service as a whole are not aspects. 2. Aspect Term Polarity (ATP): - Classify the polarity of each aspect as `positive', `negative', or `neutral'.in-context-examples-1§.§ In-context examples Example 1:User: Drivers updated ok but the BIOS update froze the system up and the computer shut down.Assistant: [{“term”: “Drivers”, “polarity”: “positive”}, {“term”: “BIOS update”, “polarity”: “negative”}, {“term”: “system”, “polarity”: “negative”}]Example 2:User: The food was absolutely amazing!!Assistant: [{“term”: “food”, “polarity”: “positive”}]Example 3:User: I have had to send in my laptop three times to get it fixed.Assistant: []Example 4:User: So, the menu is written in chalk above your head and it all sounds delicious.Assistant: [{“term”: “menu”, “polarity”: “neutral”}]Example 5:User: EITHER THE COMPUTER IS TOO SLOW TO DETECT THE KEYS TYPED (THIS IS UNLIKELY AS I AM A SLOW TYPIST) OR THE KEYBOARD SIMPLY DOES NOT DETECT THE KEYS BEING TYPED.Assistant: [{“term”: “KEYS”, “polarity”: “negative”}, {“term”: “KEYBOARD”, “polarity”: “negative”}, {“term”: “KEYS”, “polarity”: “negative”}]Example 6:User: The service was bad, the food took to forever to come, we sat on the upper level.Assistant: [{“term”: “service”, “polarity”: “negative”}, {“term”: “food”, “polarity”: “negative”}]Example 7:User: -4 RAM slots, 2 HDD Bays*, 16GB RAM support -No Wireless Issues, at least for me.Assistant: [{“term”: “RAM slots”, “polarity”: “neutral”}, {“term”: “HDD Bays”, “polarity”: “neutral”}, {“term”: “16GB RAM support”, “polarity”: “neutral”}, {“term”: “Wireless”, “polarity”: “positive”}]Example 8:User: Definitely worth the trip to Battery Park City!Assistant: []Example 9:User: Mine came at $1,700 w/o a DVD burner (!).Assistant: [{“term”: “DVD burner”, “polarity”: “negative”}]Example 10:User: I liked the food at this quasi-thai restaurant.Assistant: [{“term”: “food”, “polarity”: “positive”}, {“term”: “quasi-thai”, “polarity”: “positive”}]json-schema§.§ JSON schema {`name': `extract_aspects_and_polarities', `description': `Extract sentiment aspects and polarities from a text', `parameters': {`type': `object', `properties': {`aspects': {`type': `array', `description': `An array of aspects and their polarities. If no aspects are mentioned in the text, use an empty array.', `minItems': 0, `items': {`type': `object', `properties': {`term': {`type': `string', `description': `An aspect term, which is a verbatim text snippet. Single or multiword terms naming particular aspects of the reviewed product or service.'}, `polarity': {`type': `string', `enum': [`positive', `neutral', `negative'], `description': “The polarity expressed towards the aspect term. Valid polarities are `positive', `neutral', `negative'.”}}, `required': [`term', `polarity']}}}, `required': [`aspects']}} | http://arxiv.org/abs/2310.18025v1 | {
"authors": [
"Paul F. Simmering",
"Paavo Huoviala"
],
"categories": [
"cs.CL",
"cs.AI"
],
"primary_category": "cs.CL",
"published": "20231027100321",
"title": "Large language models for aspect-based sentiment analysis"
} |
myboxcolback=red!5!white,colframe=red!75!black pullquotecolback=blue!5!white,colframe=blue!75!blackDissipating Stop-and-Go Waves with a Single Automated Vehicle in Dense Traffic: Experimental Evidence *–University of California, Berkeley†–École des Ponts Paristech, Marne la Vallée‡–Vanderbilt University – Peking University – The University of Alabama in Huntsville **–Temple University ‡‡–Rutgers University-Camden1–These authors contributed equallyTraffic smoothing using explicit local controllers AMAURY HAYAT†, ARWA ALANQARY*,1,RAHUL BHADANI, 1, CHRISTOPHER DENARO‡‡,1, RYAN J. WEIGHTMAN‡‡,1, SHENGQUAN XIANG,1,JONATHAN W. LEE*,MATTHEW BUNTING‡, ANISH GOLLAKOTA*, MATTHEW W. NICE‡, DEREK GLOUDEMANS‡, GERGELY ZACHÁR‡, JON F. DAVIS*, MARIA LAURA DELLE MONACHE*, BENJAMIN SEIBOLD**, ALEXANDRE M. BAYEN*, JONATHAN SPRINKLE‡, DANIEL B. WORK‡ AND BENEDETTO PICCOLI‡‡,==================================================================================================================================================================================================================================================================================================================================================================================================================================================The dissipation of stop-and-go waves attracted recent attention as a traffic management problem, which can be efficiently addressed by automated driving. As part of the 100 automated vehicles experiment named MegaVanderTest, feedback controls were used to induce strong dissipation via velocity smoothing. More precisely, a single vehicle driving differently in one of the four lanes of I-24 in the Nashville area was able to regularize the velocity profile by reducing oscillations in time and velocity differences among vehicles. Quantitative measures of this effect were possible due to the innovative I-24 MOTION <cit.> system capable of monitoring the traffic conditions for all vehicles on the roadway. This paper presents the control design, the technological aspects involved in its deployment, and, finally, the results achieved by the experiment. § INTRODUCTIONStop-and-go waves are ubiquitous traffic instabilities observed in almost all parts of the world <cit.>. The drawbacks of such waves include increased fuel consumption and decreased safety <cit.>. Taming or dissipating them is aproblem of traffic management. The latter has witnessed a revolution with the technical capability of replacing fixed control actuators (for example, toll gates, traffic lights, and traffic signals) with mobile actuators such as connected and automated vehicles (AVs).Researchers studied the capability of this control paradigm to regulate traffic, including dissipating waves, even if most results are for the case of high penetration of AVs (high percentage of AVs as part of the bulk traffic) or completely autonomous traffic. Simulation results can be found in<cit.>. Other strategies includedvariable speed limit strategies, <cit.> and jam absorption <cit.>. The modeling of the problem generates some interesting mathematical challenges, such as the need for multiscale models <cit.>.In the last few years, some experimental results have become available <cit.>.In particular, the ring-road experiment described in these papers shows how a single AV can tame waves produced by 21 human-driven vehicles. 
The experiment setup replicated faithfully the seminal one described in <cit.>: 22 vehicles started at equal distances on a circular track of 260 meters and reached the same speed. After a short time, stop-and-go waves naturally appear but the single AV is able to dissipate them. These results were achieved originally by the controller described in <cit.>. Continued research has explored eithermodel-based techniques <cit.> or AI ones <cit.>. The encouraging results included a decrease in speed variance, reduced fuel consumption, and reduced heavy braking. In this experiment, drivers were instructed to close the gap while driving safely but with no knowledge of the experiment's goals. This reduced the potential biases but limitations remained: the confined setting, the single-lane situation, and the artificial environment.In order to move forward to an open highway, a holistic vision was proposed <cit.> based on an innovative monitoring camera system <cit.>, the use of advanced hardware devices <cit.>, energy modeling and control algorithms. The approach consisted of inserting 100 automated vehicles in bulk traffic and using various control algorithms. The controller presented in this paper was used on a single vehicle out of the 100. Since the expected penetration rate was around 1-3%, each AV was surrounded by 30-100 human-driven vehicles. With this in mind, the key problem was that of designing controls for a single AV in a multilane setting (with no lane-changing maneuvers), which would impact traffic while maintaining requirements forsafety. As for the ring-road experiments, the focus was on smoothing traffic via reduced speed variance. Also, in this setting, the smoothed traffic was expected to be more fuel-efficient and safe than oscillating ones and stop-and-go waves. In this paper, we present the experimental evidence that a single automated car equipped with an appropriate model-based controller can efficiently dissipate a stop-and-go wave in open traffic on the highway. This experiment was carried out in November 2022 as part of the MegaVanderTest that took place on I-24 in Nashville, TN, described further in <cit.>. The highway includes freight traffic, exhibits stop-and-go traffic daily during rush hour, and includes more than 150,000 vehicles daily.This is the ideal situation for testing the controls and the general idea of traffic smoothing via a small number of AVs.We first present the controller we used, which was designed from mathematical principles using a microscopic traffic model <cit.>. To smooth traffic while ensuring safety, the controller was designed as a combination of three parts: a safety module, a target speed, and a Model Predictive Control (briefly MPC) component. The safety module computes the maximal speed, which would allow the AV to avoid collision in case of sudden braking of the leading vehicle (directly in front of the AV). The target speed is the expected uniform speed the smoothed traffic will travel at. Such speed is either decided using global information (speed planner) or local traffic velocity. Finally, the Model Predictive Control component is designed to anticipate the speed changes of the leader vehicle while keeping the speed as close as possible to the target ones.This paper goes beyond the theoretical development of the controller. It demonstrates implementation on a full-sized car, deployed in traffic as a moving traffic wave controller. The experiment was conducted as part of the MegaVanderTest. 
The deployment was on the westbound I-24 highway to Nashville, TN, on Wednesday, November 16th, during morning rush hour (8-9 am), on a 9.33-kilometer stretch between exits 66 and 60. We analyze the results using the trajectories reconstructed by the I-24 MOTION system <cit.>. Each car trajectory was obtained through processing and analysis of video data from the myriad cameras of I-24 MOTION (see sidebar:I-24sidebarI-24 MOTION).Since our aim was traffic smoothing via speed oscillation reduction, we focused on computing the speed variance along trajectories. More specifically, the effect of the AV control is measured by comparing the speed variance of the cars running in front of the AV (thus not subject to the control) with those of the cars running behind the AV. A direct effect can be visually noted by comparing the lane where the AV traveled with other lanes (see Figure <ref>). We demonstrate our results by calculating and comparing speed variance. The speed variance over 1.4 km behind the AV was 50% less than the speed variance in front. A more detailed analysis, see Table <ref>, reveals that such impact is stronger in the vicinity of the AV (200 to 400 m). Indeed oscillations are observed to appear again around 600 m behind the AV. This length has to be compared with the ring-road experiment of <cit.>, where oscillations were completely removed on a length of 260 m. In simple words, the AV obtained a similar effect on an open highway with a strong effect for around double the size of the ring road.In the remainder of this paper, we will carefully review the design, implementation, deployment, and analysis of our experiment and its results. The paper will show how a carefully crafted model-based acceleration controller was able to smooth traffic on an open highway with dense traffic and four lanes. The main result is the cut in half the speed oscillations between vehicles in front and behind the AV. Moreover, the controller used standard sensor measurements from stock ACC systems on commercially available Toyota Rav4 models, thus opening the door to large-scale implementation. § CONTROL THEORY, STABILIZATION... AND ROAD TRAFFICby Amaury Hayat and Shengquan Xiang §.§ Control theory in a nutshell... Control theory is about asking: “If I can act on the system, what can I make it do?". From a mathematical point of view, it consists of having a systemẋ(t)= f(x(t),u(t)),where x(t) is the state and u(t) is a function –called control– that can be chosen and represent the way we can act on the system. A typical goal in control theory is to know, given an initial state x_0, what states x_1 can be reached by choosing u(t) properly.Stabilization is a sub-branch of control theory where the goal is to make sure that the system follows a target state x̅ and returns to it when disturbed. A control is designed to stabilize a system that would be unstable in the absence of a controlling force. That is to say{ for any ε>0, there exists η s.t. for all t∈[0,+∞)x(0)-x̅(0)≤ηx(t)-x̅(t)≤ε, .∃δ>0 s.t. x(0)-x̅(0)≤δlim_t→+∞x(t)-x̅(t) = 0. Most of the time, x̅ is chosen as a constant and is an equilibrium of the system, that is f(x̅,0) = 0.The particularity of stabilization is that the control u(t) does not depend on the initial condition x_0 but rather on the current state x(t), that is formallyu(t) = g(x(t)).In more general versions u(t) could also depend on past state (x(τ))_τ≤ t. 
A control in this form is called a feedback law.§.§ ...and in road trafficIn road traffic, using an AV to smooth stop-and-go waves enters the following framework: x(t) represents the state of the cars on the road, for instance, their position h_i and velocity v_i. The control u(t) is the acceleration or the velocity of the AV that can be chosen, up to some safety and hardware constraints. Formally, this can be written asẋ(t)_traffic state = f(x(t), u(t)_AV dynamic)_traffic dynamic.The target state x̅ is the uniform flow equilibrium where every car is going the same constant speed. This target state is unstable in congestion when there is no controlled AV, meaning that x(t) is usually far from its equilibrium value x̅. An example of such a system in traffic is, for instance (see <cit.>){v̇_0= u(t), ẏ_0 = v_0, v̇_i =av_i+1-v_i/(y_i+1-y_i)^2+b[V(y_i+1-y_i)-v_i], ẏ_i =v_i,. Smoothing stop-and-go waves simply amounts to choosing u to reduce ∫_0^T x(t)-x̅dt,where the state is x(t) = (y(t),v(t)) and T is a given time horizon. In practice, there are several difficulties:(i) The target state x̅ (and in particular the target speed) is usually unknown in practical cases such as a highway. Two ways to tackle this difficulty can be thought of: * Infer a good approximation of x̅ from both theory and experimental measurements. This is, in part, the principle of the speed planner presented below (see side:plannerHierarchical Control Framework and Speed Planner); * Aim to reduce the variance of x with respect to time instead of aiming to reduce x(t)-x̅.(ii) The dynamic f is quite complicated because of the lane changes: the system is usually either infinite dimensional or hybrid.(iii) The mathematical models representing road traffic are usually imprecise. Thus the control needs to be robust with respect to errors on f.(iv) The control system must exhibit robustness in handling errors related to measurements, including signal loss, sensor and camera limitations, etc.§ CONTROLLER DESIGNFrom a control theory perspective, the approach consists of considering a connected AV as a means of control on the system, that is, the whole traffic flow. The controller is acceleration-based, meaning that the control variable is the acceleration of the connected AV.In this section, we describe the AV's acceleration-based controller based on the design described in <cit.>.§.§ Principle The controller combines three components:* Safety, a module that ensures that the vehicle never puts itself or others in danger.* Target, a module that calculates the target speed required to achieve the control goal.* Model Predictive Control (briefly MPC), a module that anticipates the leader's behavior to help limit the AV's speed deviations from the target speed. Each of the modules leads to a limit acceleration respectively denoted by a_safe, a_target, a_MPC. The controller combines these three limit accelerations by taking their minimum.The commanded acceleration of the controller can be written at each time as:a_cmd(t) = min(a_safe(t),a_target(t),a_MPC(t)),The mathematical expression of a_safe(t), a_target(t) and a_MPC(t) are detailed in the following paragraphs.§.§ NotationsWe introduce the following notations for these time-varying signals:* v refers to the instantaneous driving speed of the AV (or ego vehicle).* v_lead is the measured speed of the leader vehicle. 
* h is the space gap between the ego vehicle and the leading vehicle.* v_rel is the relative speed of the leader with respect to the ego vehicle.* a is the measured estimate of acceleration of the ego vehicle. * a_lead is the actual acceleration of the leading vehicle.§.§ Safety ModuleWe first define v_safe as the highest velocity below which the AV can remain safe by braking if needed, whatever the behavior of the leading vehicle. Under this velocity, even if the leader brakes extremely strongly until full stop, the AV can avoid collision. This v_safe(t) is a computed value at any time t,depending on the space gap h(t), the maximal braking capacity of the AV (a constant, denoted a_min<0), and the maximal braking capacity of the leading vehicle (a constant denoted a_l,min<0). This velocity can be computed explicitly and is given by (see <cit.>)v_safe(t)= √( 2|a_min|(h(t)-s_0+1/2v_lead^2(t)/|a_l,min|)),where s_0>0 is a given safety distance. The acceleration a_safe(t) is then defined as a_safe(t) = -k(v(t)-v_safe(t))+dv_safe(t)/dt,where k ∈ℝ^+ is a given positive parameter. This acceleration acts as a barrier and guarantees the safety of the AV (provided that it starts in the safe area and no unsafe lane changes happen, see <cit.>).§.§ Target ModuleThe acceleration a_target(t) is defined as a_target(t) = -k(v(t)-v_target(t)),where v_target(t) is a target velocity chosen at time t to reach the control goal. The choice of this target velocity depends on the availability of the downstream information. We consider two modes:Local mode When there is no downstream information available, either because there is no source or because of a defect in the connectivity of the AV, the target speed is chosen from the only information available, that is, the velocity of the leading vehicle, the one of the AV, and the space gap. In this case, the goal is to use this information to reconstruct an approximation of what would be the steady-state speed if there were as many vehicles but no stop-and-go wave. This speed is chosen as follows v_target(t) =v̅_lead(t) + c_1max(0,c_2(h(t)-δ_1v(t)))/max(1,v(t)))^2,where v̅_lead(t)= {1/t∫_0^t v_lead(s) ds,ift≤τ,1/τ∫_t-τ^tv_lead(s) ds,ift> τ,.and c_1, δ_1 and τ are design parameters that can be chosen and have the following interpretation: * c_1 is a catching-up weight,* δ_1 is a target time gap between the ego and the leading car,* τ is an approximation of the period of a stop-and-go wave.Planning modeWhen downstream information is available, we use the speed planner described in side:plannerHierarchical Control Framework and Speed Plannerto get an estimation of the target speed for the current location of the ego vehicle based on the downstream traffic information. We denote this speed by v_down(t), and we use it as our target speed after clipping it to be within some factor of the leader speed and ensure that it does not exceed a reference upper-bound speed v_ref. That isv̅_target(t) = max(max(v_down, α_0v_lead(t)), min(α_1 v_lead(t),v_ref)),whereα_0∈(0,1) and α_1>1 are design parameters and v_ref is a reference velocity which corresponds to a safeguard when we know an upper bound of the traffic speed in congestion. If no such bound is known, v_ref is simply set to the road speed limit. §.§ MPC ControlThe MPC control is the most complex part. It is designed to anticipate the leader's behavior and restrict the AV's deviation from its target speed. The paradigm is: to react quickly to a change in the leader's behavior, but as little and smoothly as possible. 
To do so, when the leading vehicle decelerates, the acceleration a_MPC commanded by the MPC module is set as the smallest possible deceleration such that the AV will remain safe in terms of collision: in other words, it would not reach the safety distance, should the leader keep its constant deceleration until full stop.To compute this value, we define the acceleration a_min brake as:a_min brake(h(t),v(t),v_lead(t),a_lead(t)) =-(h(t)-s_0+1/2v_lead^2(t)/-a_lead(t))^-1(v(t))^2/2,and the quantitiesP_1= a_min brake(h(t),v(t),v_lead(t),a_lead(t))-a_lead(t)v(t)/v_lead(t), P_2= v_lead(t)- v(t).The acceleration a_MPC commanded by the anticipation module isa_MPC= { a_min brake(h(t),v(t),v_lead(t),a_lead(t)),ifP_1>0, a_leadv(t)/v_lead(t),ifP_1≤ 0andP_2≥ 0, a_lead-(v-v_lead)^2/2(h(t)-s_0),ifP_1≤ 0andP_2< 0.. When the leading vehicle speeds up, to avoid any unwanted behavior and jittering we ensure the continuity of the controller by settinga_MPC= { a_lead-(v-v_lead)^2/2(h(t)-s_0),if P_2< 0, min(a_max, a_lead(1+k_2(v_lead-v))),if P_2≥0..More details and a detailed theoretical analysis of this controller can be found in <cit.>.§ HIERARCHICAL CONTROL FRAMEWORK AND SPEED PLANNER SIDE:PLANNERby Han Wang < g r a p h i c s > Centralized Speed Planner: the algorithm deployed on the server side that designs target speed profiles for vehicles in moving traffic, with the goal to smooth traffic waves. The system was validated on a fleet of 100 connected automated vehicles in the MVT The proposed controller served as the vehicle side controller in a novel hierarchical control framework. The framework includes two critical components: 1) the collection of algorithms on the server side operates as the centralized planner agent dealing with the heavy calculation tasks; 2) the algorithms deployed on the vehicle side act as executors following the target assigned by the centralized planner. The principle of the framework design is to solve the computational task allocation problem between the server and the onboard units (OBU) and to efficiently coordinate the control goals between macroscopic traffic flow optimization and microscopic vehicle control. At the beginning of each update, the speed planner extracts a combination of macroscopic traffic state estimation(TSE) and vehicle observations from the database to calculate the target speed profile. According to the demand and condition of the specific implementation, the speed profile could be published in various formats. Each vehicle's OBU will fetch the most recent target speed from the target speed profile and use it as the input, together with the local observation from the onboard detector's perceptions. For the Buffer Design module, we consider the interval ℐ⊂ℝ as the region of interest. Suppose further that ℐ_c ⊂ℐ denotes a congested area. The idea is to determine the moving bottleneck (the controlled vehicle) speed profile denoted by t ↦ U_b(t), such that the density k(t, x) for x ∈ℐ_c is distributed (evenly) throughout the entire region ℐ and consequently by average the density approaches k_c, the critical density associated with maximum flow. 
Determining the moving bottleneck speed profile will be done in the following steps: (i) predicting the density (t, x) ↦ k(t,x), (t, x) ∈ℝ_+ ×ℐ given a speed profile U_b(·) of the controlled vehicle, using a mathematical model of traffic flow (see the next subsection), (ii) assessing the efficiency of the speed profile U_b(·) based on the density k(t,x) employing the reinforcement learning (RL), and (iii) Updating U_b(·) and returning to step (i).The vehicle controllers extract information from the target speed and local observation for the control action selection. The local observation will be submitted to the central database as the future input for the speed planner.§.§ Adaptation to loss of signalBetween the ideal mathematical framework and reality, there are many disturbances and unplanned constraints. To be able to work in real life, the control deployed has to be robust to a number of external perturbations. As underlined in <cit.>, this controller is robust to delay in the measurements or the actuation or to small measurement imprecisions. However, as it stands, this controller is not robust to the loss of signal from the front sensor, which could lead to highly overestimating the space gap and the control to overshoot by far. This loss of signal could be the consequence of a road grade, a curve, or simply a radar malfunction or range limitations.An instance of this was encountered several times during the experiment.Over and above this problem, the question arises as to what control strategy to adopt when the position of the leading car is unknown for a long period. A simple option would consist of assuming that the leading car has a space gap and velocity that are identical to the last space gap and velocity measured until the signal is reached again. That is to sayv_lead(t) = v_lead(t_1),h(t) = h(t_1),for anyt∈[t_1,t_2),where t_1 is the time of the last signal measured and t_2 is the time of the next signal measured. This limits jumps in the behavior of the controlled AV. However, in local mode (see Section Principle), this velocity v_lead(t) is used to estimate the ideal steady-state speed of the system. Assuming a constant velocity equal to the last measured velocity of the leading car could bias this estimation by having a strong weight in (<ref>). This could lead to underestimating the real flow speed of the traffic and entering a self-perpetuating situation where the automated vehicle is slower than the traffic and consequently keeps too large a gap to the car in front for the radar to notice. To tackle this issue, for each time step where the signal is lost, we gradually increase the last seen space gap by setting h(t+Δ t) = h(t) + Δ t(h_correction). This results in increasing the estimated v_safe and v̅_target, which in turn allows the ego vehicle to gradually increase its velocity and close the gap to its leader. In the planning mode, we clip the target velocity to be no more than α_1 v_lead. So if the leader speed is severely underestimated (due to the lost signal), it can lead to an underestimation of the target velocity. To overcome this, we remove this upper bound on the target speed when the signal is lost and allow it to be larger than the last leader's speed observed until the signal is regained. 
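To make the interplay of the three modules concrete, the following Python sketch gives one possible discrete-time implementation of the local-mode controller a_cmd(t) = min(a_safe(t), a_target(t), a_MPC(t)) described above. It is only an illustration and not the deployed Simulink/C++ code: all numerical parameter values are placeholders, the class name and the backward finite-difference approximation of dv_safe/dt are our own choices, the small guards against division by zero and negative square-root arguments are added for robustness, the grouping of the catch-up term in v_target follows one reading of its formula, and the planning mode and signal-loss handling are omitted.

import numpy as np
from collections import deque

class ExplicitLocalController:
    """Illustrative discrete-time sketch of a_cmd = min(a_safe, a_target, a_MPC), local mode.
    All parameter values are placeholders, not the values used in the deployment."""

    def __init__(self, dt=0.1, k=0.5, k2=0.3, s0=4.0, a_min=-6.0, a_l_min=-6.0,
                 a_max=1.5, c1=0.2, c2=0.5, delta1=1.5, tau=60.0):
        self.dt, self.k, self.k2, self.s0 = dt, k, k2, s0
        self.a_min, self.a_l_min, self.a_max = a_min, a_l_min, a_max
        self.c1, self.c2, self.delta1 = c1, c2, delta1
        self.lead_hist = deque(maxlen=int(tau / dt))  # window for the running leader-speed average
        self.prev_v_safe = None

    def a_safe(self, v, v_lead, h):
        # Highest speed from which the AV can still stop behind a hard-braking leader.
        v_safe = np.sqrt(max(0.0, 2.0 * abs(self.a_min)
                             * (h - self.s0 + 0.5 * v_lead ** 2 / abs(self.a_l_min))))
        # dv_safe/dt approximated by a backward finite difference (an implementation choice).
        dv_safe = 0.0 if self.prev_v_safe is None else (v_safe - self.prev_v_safe) / self.dt
        self.prev_v_safe = v_safe
        return -self.k * (v - v_safe) + dv_safe

    def a_target(self, v, v_lead, h):
        # Local-mode target speed: running average of the leader speed plus a catch-up term.
        self.lead_hist.append(v_lead)
        v_lead_bar = float(np.mean(self.lead_hist))
        catch_up = self.c1 * (max(0.0, self.c2 * (h - self.delta1 * v)) / max(1.0, v)) ** 2
        return -self.k * (v - (v_lead_bar + catch_up))

    def a_mpc(self, v, v_lead, h, a_lead):
        gap = max(h - self.s0, 0.1)    # numerical guard on the usable gap
        v_lead = max(v_lead, 1e-3)     # avoid division by zero
        if a_lead < 0.0:               # leader decelerating: brake as little as possible while staying safe
            a_min_brake = -(v ** 2 / 2.0) / (gap + 0.5 * v_lead ** 2 / (-a_lead))
            p1 = a_min_brake - a_lead * v / v_lead
            p2 = v_lead - v
            if p1 > 0.0:
                return a_min_brake
            return a_lead * v / v_lead if p2 >= 0.0 else a_lead - (v - v_lead) ** 2 / (2.0 * gap)
        if v_lead - v < 0.0:           # leader speeding up but ego still faster: keep continuity
            return a_lead - (v - v_lead) ** 2 / (2.0 * gap)
        return min(self.a_max, a_lead * (1.0 + self.k2 * (v_lead - v)))

    def a_cmd(self, v, v_lead, h, a_lead):
        # Commanded acceleration: the minimum of the three module outputs.
        return min(self.a_safe(v, v_lead, h),
                   self.a_target(v, v_lead, h),
                   self.a_mpc(v, v_lead, h, a_lead))

At each control step, a_cmd would be evaluated with the latest measurements of v, v_lead, h and a_lead, and the result passed to the actuation layer, subject to the vehicle's own acceleration limits.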
“The controller that combines safety, objectives, and anticipation is capable of operating both with and without downstream information and remains robust even in the event of a signal loss."§ SOFTWARE AND HARDWARE IMPLEMENTATION We use a model-based design approach to implement the controller discussed earlier. We can abstract the controller as a_cmd(t) = f( v(t), v_lead(t), h(t), v_rel(t), a(t); Θ).Recall that v is the instantaneous driving speed of the vehicle to be controlled (ego vehicle), v_lead is the measured speed of the leader vehicle, h(t) is the space gap between ego vehicle and its leader, v_rel is the relative speed of the leader with respect to the ego vehicle, and a is the acceleration of the ego vehicle. In (<ref>), the controller parameters are represented as Θ. The functional layout of the controller model is given in Figure <ref>.§.§ Simulink with Code Generation to ROSThe abstracted controller (<ref>) is implemented as a Simulink model with data input and output components modeled using ROS (Robot Operating System) <cit.> Toolbox. The Simulink model is used to generate a standalone C++ ROS node that can be executed directly on a physical hardware board without any modification. Moving from the Simulink model to a C++ ROS node without writing any C++ code allows for faster prototyping and validation at an early stage in the simulation through data-driven software-in-the-loop validation of the controller behavior. The ROS node consists of multiple ROS subscribers that consume input data and provide acceleration command output through a ROS publisher <cit.>.ROS nodes are executed in a parameterized manner through , a configuration-based tool for ROS thatallows the execution of multiple ROS with node parameters supplied at runtime. Validation testing used regression data played back through our Gazebo-based simulator <cit.>, where multi-vehicle simulation uses rigid body dynamics. This allowed us to compare the software deployment candidate with desired performance criteria prior to deploying in hardware.§ I-24 MOTION SIDEBAR:I-24SIDEBARby Derek Gloudemans, Gergely Zachár, Yanbing Wang, Junyi Ji, Will Barbour, Dan Work I-24 MOTION <cit.> is a 4.2-mile instrumented section of I-24 in Nashville, TN. It serves as a traffic data collection instrument and a test bed for connected and automated vehicle technologies and traffic control strategies. The instrumentation consists of 276 4K resolution video cameras mounted on 110-135ft roadside poles. The cameras provide a complete view of the roadway and can observe the path of all vehicles with minimal occlusion. A video processing pipeline consisting of computer vision <cit.> and post-processing algorithms identifies and tracks vehicle locations, then stitches <cit.> and reconciles their trajectories to ensure physical feasibility. The trajectory reconciliation consists of optimization-based smoothing that ensures feasible vehicle dynamics in higher orders <cit.>. Trajectories are generated in a roadway-aligned curvilinear coordinate system <cit.> that can be converted to or from GNSS coordinate systems for the purposes of aligning data sources measured from vehicles. From the I-24 MOTION data, multiple traveling traffic waves are regularly observed during morning peak commute times, leading to high-speed variability and consequently increased fuel consumption.< g r a p h i c s > Time-space diagram of traffic trajectories reconstructed from I24-MOTION <cit.> and visualized with <cit.>, with the AV trajectory overlaid in light blue. 
Insets show close-ups of traffic near the AV. §.§ Deployment to Vehicle PlatformThe ego vehicle (the test vehicle on which the controller was deployed) was a Toyota RAV4, capable of acceleration-based control using our customized hardware and software stack. As described further in <cit.> the hardware stack includes Controller Area Network (CAN) transceivers for access to vehicle data (including on-board sensors and actuators) and to inject vehicle commands. The software stack relies on Libpanda <cit.> to read from hardware. The package strym <cit.> was used at runtime to decode the on-board data from the CAN, and the CAN_to_ROS package <cit.> was used to transmit data into ROS for producers/consumers of vehicle data. This included the ability to share the live view of the vehicle's state and to allow automatic upgrades, as described in <cit.>. All of the software and hardware are integrated through a Raspberry Pi, enabling a cost-effective means to retrofit the vehicle for data acquisition and control.The role of Libpanda and CAN_to_ROS is illustrated in Figure <ref>. The vehicle interface node in CAN_to_ROS subscribes to the commanded acceleration topic, and converts the required command to CAN messages that are sent over to the vehicle via CAN peripherals for actuation. A more detailed description of the controller implementation can be found in <cit.>. § FRAMEWORK OF THE EXPERIMENT The controller was deployed on a Toyota RAV4 in dense traffic on a segment of the four-lane westbound I-24 highway to Nashville, TN (see Figure <ref>). The experiment was carried out on Wednesday, November 16th, during the morning rush hour. The results presented in this paper are collected between 08:10 am and 08:50 am. During this time, the AV was traveling westbound between exits 66 and 60 for a total distance of 9.33 km and was located in lane 3, where lane 4 denotes the rightmost lane and lane 1 denotes the leftmost lane. During the experiment runs, the controller sets the desired acceleration of the vehicle, but only when Adaptive Cruise Control (ACC) is enabled and engaged: thus, the driver interface is the same as they would normally expect. Vehicle design features allow the controller to initially engage once the vehicle is driving above 20 mph. The driver engaged the controller as soon as the vehicle is driving above this limit on the highway. Both in stock ACC and with this controller, some events can trigger the controller to disengage without the driver's input. For instance, a close cut-in in front of the ego vehicle or following a very strong deceleration by the lead vehicle.In this case, the driver establishes as soon as possible, while under human control, an appropriate gap that allows for re-engaging the controller. In instances where the average speed of traffic was expected to be less than 20 mph for some time due to congestion and the controller disengaged for one of the reasons above, the driver was asked to manually follow the commanded values asked by the controller with the help of the on-board computer monitored by a researcher on the passenger seat until it could be re-engaged.Data from the experimental runs are collected in two ways. The CAN data from the ego vehicle is logged in a database as in <cit.>. More importantly, the trajectories of the AV and surrounding vehicles are recovered from video footage recorded and processed by I-24 MOTION <cit.> to extract the trajectories. 
A brief description of the I-24 MOTION system and data set can be found in sidebar:I-24sidebarI24-MOTION and more details can be found in <cit.>, in particular concerning the reconstruction procedure of the trajectories. The AV trajectory obtained from the camera is then synchronized with the AV trajectory from the CAN data to avoid any spatial offset due to GPS imprecision. “We present experimental evidence that an AV running a model-based controller can locally dissipate a stop-and-go wave in heavy traffic on the highway." § RESULTSIn this section, we present the experimental results of the controller on the test run described in <ref> section. These results show evidence that the single AV deployed in dense traffic improves the performance of the system locally. We will exhibit its effect qualitatively by looking into the time-space diagrams in the vicinity of the AV. We will also quantify the effect using the speed variance.§.§ Time-space diagrams. In Figure <ref> (a), we show the trajectories of the AV and the surrounding vehicles in the same lane (lane 3) through a single wave. We include trajectories up to 800m upstream of the wave bottleneck. The trajectories in black denote the vehicles behind the AV, while the trajectories in blue are the vehicles in front of the AV. We show trajectories up to 1400 m in front of the AV and 400 m behind it. Because the wave propagates backward in time, we need to identify the moving border of the wave. The procedure to do so is detailed in Appendix <ref>. In Figure <ref> (b), we show the trajectories of the vehicles in the same time-space region of the identified wave in a different lane (lane 1), which does not contain any AVs. We use lane 1 for comparison (instead of the adjacent lane 2) to reduce the possible spillover effect of the AV on adjacent lanes. Similarly, in this figure, the black trajectories denote the vehicles upstream of the x-position of the AV, while blue trajectories are the downstream vehicles.From Figure <ref> (a), we notice that the blue trajectories in front of the AV experience speed oscillations due to the propagation of waves similar to Figure <ref> (b). However, the effect of the AV is apparent on the vehicles behind it (black trajectories). In the regionupstream the bottleneck (the green box of Figure <ref> (a))the oscillations arecompletely absorbed by the AV and do not propagate to the vehicles behind it. In this region, we notice that the AV is driving with a steady speed despite the stop-and-go behavior in front of it. The trailing vehicles' behavior is closely aligned with the AV's as they also exhibitmuchless speed variations. The AV eventually catches up with the bottleneck (the red region of the time-space diagram) and has to slow down, indicating that it is traveling faster than the bottleneck. In contrast, we notice from Figure <ref> (b) that in the absence of the AV, the wave propagates throughout the entireregion. We observe several stop-and-go oscillations that affect all the vehicles in that region.§.§ Speed VarianceTo explore the smoothing effect of the AV, we compare the speed variance of the vehicles behind the AV to those in front of the AV. We use the trajectories shown in Figure <ref> (a) to compute the speed variance where the variance is computed across all cars and all time steps (for the vehicles in front and behind the AV separately). 
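A minimal sketch of this pooled-variance computation, assuming that each trajectory inside the wave region is available as a one-dimensional array of speed samples in m/s and has already been assigned to the group in front of or behind the AV (the variable names in the usage lines are hypothetical):

import numpy as np

def pooled_speed_variance(trajectories):
    # Pool the speed samples of all vehicles and all time steps in one group,
    # then take the variance of the pooled sample (population variance, ddof=0).
    speeds = np.concatenate([np.asarray(t, dtype=float) for t in trajectories])
    return float(np.var(speeds))

# Hypothetical usage with the two groups of trajectories:
# var_front  = pooled_speed_variance(trajectories_in_front_of_av)
# var_behind = pooled_speed_variance(trajectories_behind_av)
# reduction  = 100.0 * (1.0 - var_behind / var_front)  # percent reduction in speed variance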
The speed variance of the trajectories in front of the AV is 19.6 m^2 · s^-2 while the speed variance for the trajectories behind the AV is 9.4 m^2 · s^-2.The introduction of the AV has a smoothing effect that reduces the speed variance by 52%. This is aligned with the observed behavior in the time-space diagram is <ref> (a).§.§ DiscussionThe results presented in the previous section clearly demonstrate that the proposed controller has a substantial impact on dissipating stop-and-go waves and reducing speed variance.Even in the most bottleneck part of the wave (the red region in Figure <ref> (a)), where one might assume that the controller's impact is minimal,the speed variance in front of the AV is 1.73m^2 · s^-2 while it reduces to 1.08m^2 · s^-2 due to the AV's smoothing effect. Our analysis thus far focuses on the vehicles within 400 m behind the AV. It is worth noting that the effect of the AV diminishes with the distance. In Figure <ref>, we extend the 400 m to 1400 m to observe the behavior of the vehicles driving further behind the AV. We notice that speed oscillations start to reappear after around 600 m behind the AV. We made this observation more concrete by reporting the speed variance of the vehicles behind the AV up to different distances ranging from 200 m to 1400 m in Figure <ref>. We notice a sharp increase in the speed variance after 600m. Furthermore, we report the percentage change in the speed variance considering different distances behind and in front of the AV in Table <ref>.Finally, in Figures <ref> and <ref> we illustrate the physical vehicle and the research team that executed the experiment.§ CONCLUSION We presented an acceleration-based controller for a connected AV to smooth stop-and-go waves in highway traffic. The controller was implemented on a commercially available Toyota Rav 4, using measurements typically used by the Automated Cruise Control. The vehicle was deployed on the I-24 in the Nashville area in bulk traffic during rush hours, as part of a large-scale experiment. Trajectories were reconstructed thanks to the I-24 MOTION system using 276 high-fidelity cameras. The experimental results show the control ability to dampen stop-and-go waves in real traffic at peak hours. The controlled AV reduces the overall speed variance of the trafficup to 600mbehind by around 50%. § ACKNOWLEDGEMENT This material is based upon work supported by the National Science Foundation under Grants CNS-1837244 (A. Bayen), CNS-1837652 (D. Work), CNS-1837481 (B. Piccoli), CNS-1837210 (G. Pappas), CNS-1446715 (B. Piccoli), CNS-2135579 (D. Work, A. Bayen, J. Sprinkle, J. Lee) as well as the IEA project SHYSTRA and the PEPS JCJC 2022 of CNRS INSMI (A. Hayat and S. Xiang). This material is based upon work supported by the U.S. Department of Energy's Office of Energy Efficiency and Renewable Energy (EERE) under the Vehicle Technologies Office award number CID DE–EE0008872. The views expressed herein do not necessarily represent the views of the U.S. 
Department of Energy or the United States Government.§ IDENTIFYING MOVING BORDER OF THE WAVE To identify the moving border of the wave for the cars in front and behind the AV we proceed as follows: * Define how far in front of the AV and how far behind the AV to consider (for example, 1500m in front, 700m behind) * Discretize the space in front and behind the AV in boxes of same width following the contour of the AV trajectory and ordered by distance to the AV (for example box 1 contains all trajectories starting between 0m and 200m from the AV, box 2 all trajectories starting between 200m and 400m, etc.).* Define a time-axis length to discretize each box with time (for example 10s) * For each box, create sub-boxes by using the time-axis discretization length. With a 10s time discretization and a 200m space discretization, this results in 10s by 200m sub-boxes for each box following the contour of the AV trajectory)* For a given box, compute the average speed and position of the vehicles in each sub-box. Optionally, use a moving average to smooth these values within the box.* Define a speed threshold which will determine when the congested part of the wave begins or ends (for example 4 meters per second)* For each box, iterate over the sub-boxes, and compare the average speed in the current sub-box to the average speed of the previous sub-box and the speed threshold. If the current average speed is less than the speed threshold and the previous average speed is greater than the speed threshold, denote the current sub-box as the beginning of the wave. If the current average speed is greater than the speed threshold and the previous average speed is less than the speed threshold, denote the current sub-box as the end of the wave. * For each box, define the boundary of the wave using the average time and average position of the sub-boxes denoting the start and end of the wave. Optionally smooth the obtained frontiers using a moving average. With 200m and 10s space and time resolution, this results in the identification of the congested part of the wave represented in Figure <ref> plain | http://arxiv.org/abs/2310.18151v1 | {
"authors": [
"Amaury Hayat",
"Arwa Alanqary",
"Rahul Bhadani",
"Christopher Denaro",
"Ryan J. Weightman",
"Shengquan Xiang",
"Jonathan W. Lee",
"Matthew Bunting",
"Anish Gollakota",
"Matthew W. Nice",
"Derek Gloudemans",
"Gergely Zachar",
"Jon F. Davis",
"Maria Laura Delle Monache",
"Benjamin Seibold",
"Alexandre M. Bayen",
"Jonathan Sprinkle",
"Daniel B. Work",
"Benedetto Piccoli"
],
"categories": [
"eess.SY",
"cs.SY",
"math.OC",
"93D15, 93D21, 93-05, 34H05,",
"H.2.2"
],
"primary_category": "eess.SY",
"published": "20231027135859",
"title": "Traffic smoothing using explicit local controllers"
} |
Effective and rapid decision-making from randomized controlled trials (RCTs) requires unbiased and precise treatment effect inferences. Two strategies to address this requirement are to adjust for covariates that are highly correlated with the outcome, and to leverage historical control information via Bayes' theorem. We propose a new Bayesian prognostic covariate adjustment methodology, referred to as Bayesian PROCOVA, that combines these two strategies. Covariate adjustment in Bayesian PROCOVA is based on generative artificial intelligence (AI) algorithms that construct a digital twin generator (DTG) for RCT participants. The DTG is trained on historical control data and yields a digital twin (DT) probability distribution for each RCT participant's outcome under the control treatment. The expectation of the DT distribution, referred to as the prognostic score, defines the covariate for adjustment. Historical control information is leveraged via an additive mixture prior with two components: an informative prior probability distribution specified based on historical control data, and a weakly informative prior distribution. The mixture weight determines the extent to which posterior inferences are drawn from the informative component, versus the weakly informative component. This weight has a prior distribution as well, and so the entire additive mixture prior is completely pre-specifiable without involving any RCT information. We establish an efficient Gibbs algorithm for sampling from the posterior distribution, and derive closed-form expressions for the posterior mean and variance of the treatment effect parameter conditional on the weight, in Bayesian PROCOVA. We evaluate efficiency gains of Bayesian PROCOVA via its bias control and variance reduction compared to frequentist PROCOVA in simulation studies that encompass different discrepancies between historical control and RCT data. These gains can be translated to smaller RCTs. Ultimately, Bayesian PROCOVA can yield informative treatment effect inferences with fewer control participants, thereby accelerating effective decision-making from RCTs. § IMPROVED DECISION-MAKING WITH RANDOMIZED CONTROLLED TRIALS Randomized controlled trials (RCTs) are increasingly faulted for failing to enable effective and rapid decision-making <cit.>. Key stakeholders in drug development are pushing for innovations in RCTs to address concerns of premature or incorrect decision-making that could lead to the abandonment of truly efficacious medical treatments, in addition to many other concerns and issues described by <cit.>. The COVID-19 pandemic further established the urgency for innovations to improve and accelerate decision-making from RCTs <cit.>. Statistical inferences for treatment effects in RCTs underlie decision-making in drug development, and so an imperative to address the faults of RCTs is to develop innovative statistical methods that yield unbiased treatment effect inferences with reduced uncertainty. Several strategies exist to obtain unbiased and precise treatment effect inferences for improved decision-making from RCTs.
Two effective and general strategies are to adjust for participant covariates (e.g., baseline information collected at the start of an RCT), and to augment the RCT information with historical control information via Bayes' theorem. Regulatory agencies recognize covariate adjustment as a valid statistical method for unbiased and precise treatment effect inference, with the caveat that the adjustment should incorporate an appropriately small number of covariates <cit.>. An innovative approach for covariate adjustment is to use generative artificial intelligence (AI) algorithms, trained on historical control data, to yield a function of covariates that is optimized in terms of its correlation with the control outcomes. <cit.> describe their statistical methodology of prognostic covariate adjustment (PROCOVA™) that implements this approach for RCTs. The generative AI algorithm that they consider yields a digital twin generator (DTG) whose inputs are a participant's (potentially high-dimensional) covariate vector and whose output is a digital twin (DT) probability distribution for the participant's control outcome. Under PROCOVA, the mean of the DT distribution is calculated for each RCT participant and defines the single, optimized covariate that is used for adjustment in the analysis of the RCT. This covariate is referred to as the prognostic score. The European Medicines Agency (EMA) qualified PROCOVA as “an acceptable statistical approach for primary analysis” of Phase 2/3 RCTs with continuous endpoints <cit.>. The second strategy, involving Bayesian inference for the treatment effect, is increasing in consideration for modern RCTs, although fewer practitioners may be as familiar with the new Bayesian methods as with established frequentist methods <cit.>. The Food and Drug Administration (FDA)'s Center for Devices and Radiological Health (CDRH) and Center for Biologics Evaluation and Research (CBER) published guidance, with informative explanatory materials, on Bayesian inference for medical device trials in 2010 <cit.>. The FDA Center for Drug Evaluation and Research (CDER) and CBER have yet to publish guidance documents on Bayesian methods for applications beyond medical device trials (which are considered the domain of CDRH), but <cit.> and <cit.> discuss relevant considerations for the use of Bayesian methods. The Bayesian methods that were recently developed by <cit.>, <cit.>, and <cit.>, all three of which utilize additive mixture priors, reflect these regulatory considerations. Most notably, the approval of Pfizer's COVID-19 vaccine involved Bayesian analyses <cit.>, and highlighted the utility of Bayesian inference over frequentist methods for effective and rapid decision-making.These two strategies have yet to be combined to advance decision-making from RCTs. <cit.> proposed a Bayesian version of PROCOVA, but their prior distribution is not particularly justifiable or interpretable with respect to the regulatory considerations and examples of Bayesian analyses provided in <cit.>. The Bayesian methods of <cit.>, <cit.>, and <cit.> cannot be completely specified prior to the commencement of the RCT because the essential ingredient of the “weight” parameter in their additive mixture prior is specified based on RCT data. In particular, <cit.> and <cit.> define the weight as a p-value based on the RCT data, whereas <cit.> define the weight as a likelihood ratio test statistic involving the RCT data. 
Their specifications in this regard lead to twice the use of the RCT data, which technically constitutes an improper application of Bayes' theorem and complicates the regulatory approval process and interpretations of uncertainty quantifications from the Bayesian analysis. Limitations also exist in their scopes of application. For example, <cit.> and <cit.> consider solely binary endpoints, and do not incorporate covariate adjustment as in a regression model. The suggested approach for covariate adjustment in the method of <cit.> is propensity score matching, which may not be acceptable or desirable in practice. A gap remains in defining a completely pre-specificable, fully Bayesian covariate adjustment methodology that can incorporate predictors from generative AI algorithms for the analysis of continuous outcomes.We propose a new Bayesian methodology to perform covariate adjustment via the prognostic score and to leverage historical control information (consisting both of prognostic scores and control outcomes) in the analysis of an RCT. We refer to this method as Bayesian PROCOVA, as it constitutes a fully Bayesian extension of PROCOVA. Following the work of <cit.>, <cit.>, and <cit.>, we encode historical control information in Bayesian PROCOVA via one component of an additive mixture prior, and set the second component to be weakly informative. This prior specification results in a posterior distribution that dynamically borrows information from historical control data, thereby effectively augmenting the information in the RCT. When historical control and RCT data are consistent, Bayesian PROCOVA puts significant weight on the information from the historical control data and consequently increases the precision of treatment effect inferences. When historical control and RCT data are discrepant, Bayesian PROCOVA discounts the historical control data and yields inferences similar to PROCOVA in terms of controlling bias and precision in treatment effect inferences. Finally, a prior distribution is specified for the mixture weight parameter in Bayesian PROCOVA so as to make the entire prior completely pre-specifiable and interpretable before the commencement of the RCT, and to yield a fully Bayesian analysis. Thus Bayesian PROCOVA effectively addresses the limitations of the Bayesian methods of <cit.>, <cit.>, and <cit.>, as its additive mixture prior distribution is independent of any information from the RCT and still enables interpretable covariate adjustment and dynamic information borrowing for continuous outcomes. In addition, Bayesian PROCOVA goes beyond existing Bayesian methodologies by including the optimized prognostic score in the analysis of the RCT. It is particularly advantageous for improving the quality of treatment effect inferences, and hence decision-making, from small RCTs. We proceed in Section <ref> to summarize notations, assumptions, and background materials for Bayesian PROCOVA. The methodology is described in Section <ref>. We provide closed-form formulae for the posterior mean and variance of the treatment effect parameter conditional on the mixture weight in Section <ref>, and outline an efficient Gibbs algorithm <cit.> for calculating the joint posterior distribution of all the parameters in Section <ref>. We evaluate the bias control and variance reduction of the treatment effect estimator from Bayesian PROCOVA compared to PROCOVA via extensive simulation studies in Section <ref>. 
In these studies, we demonstrate how the properties of Bayesian PROCOVA change in cases of discrepancies between the historical control and RCT data due to domain shifts, and due to changes in correlations between the prognostic scores and the control outcomes. As we conclude in Section <ref>, Bayesian PROCOVA can improve the quality of treatment effect inferences compared to frequentist methods, and thereby can advance decision-making from smaller and faster RCTs. § BACKGROUND §.§ Notations and AssumptionsBayesian PROCOVA is formulated under the Neyman-Rubin Causal Model <cit.>. To describe this methodology, we first define the experimental units, covariates, treatments, potential outcomes, and causal estimands under consideration for the RCT and historical control data. We also provide the assumptions that we invoke on these elements in order to facilitate causal inferences via Bayesian PROCOVA.Experimental units are participants in the RCT at a particular time-point <cit.>. Each RCT participant i = 1, …, N has a vector of covariates x_i ∈ℝ^L that are either measured prior to treatment assignment, or measured afterwards but are known to be unaffected by treatment <cit.>. We consider the case of two treatments in the RCT, and denote them by 0 (control) and 1 (active treatment). The treatment indicator for participant i is w_i ∈{0, 1}. For each pair of participant i and treatment w, we define their potential outcome Y_i(w) as their endpoint value that would be observed at a specified time-point after treatment assignment. We invoke the Stable Unit-Treatment Value Assumption <cit.> in this definition, so that there do not exist hidden varieties of treatment for any participant, and the potential outcome for a participant is not a function of treatments assigned to other participants. Under SUTVA, the pair of potential outcomes Y_i =( Y_i(0), Y_i(1))^𝖳 of each participant is well-defined.Causal estimands are defined as comparisons of potential outcomes for a set of experimental units <cit.>. The experimental units could correspond to the finite-population of participants in the RCT, or to the conceptual “super-population” of all potential RCT participants. An estimand defined on the former set of participants is a finite-population estimand, and an estimand defined on the latter is a super-population estimand. For example, the quantity Y̅(1) - Y̅(0) = N^-1∑_i=1^N { Y_i(1) - Y_i(0) } is the finite-population average treatment effect. To define the super-population average treatment effect, let μ ( w) = ∫_-∞^∞ y dF_w(y) denote the expected value of the potential outcomes under treatment w ∈{0, 1} for the super-population of participants as defined by the cumulative distribution function F_w: ℝ→ (0,1). Then the super-population average treatment effect isμ(1) - μ(0). We focus on inferring μ(1) - μ(0) via Bayesian PROCOVA in Section <ref>, and summarize in Section <ref> how Bayesian inference can be conducted for Y̅(1) - Y̅(0).In addition to RCT data, Bayesian PROCOVA incorporates information from a historical control dataset, i.e., a dataset independent of the RCT in which all participants are given control. These data are used to specify one component of the mixture prior distribution for the model parameters in Bayesian PROCOVA. 
For the historical control data we denote the sample size by N_H, the covariate vector for participant i = 1, …, N_H by x_i,H∈ℝ^L, and their outcome by y_i,H.Causal inference under the Neyman-Rubin Causal Model is a missing data problem, because at most one potential outcome can be observed for any participant <cit.>. The treatment assignment mechanism, i.e., the probability mass function p( w_1, …, w_N | Y_1, …, Y_N, x_1, …, x_N), is critical for obtaining valid causal inferences. Similar to other types of missing data problems, it is essential to consider the treatment assignment mechanism so as to specify the likelihood function for Bayesian PROCOVA <cit.>. We assume that the treatment assignment mechanism is probabilistic, individualistic, and unconfounded <cit.>. These three assumptions are generally valid for traditional RCTs, and correspond to a strongly ignorable missing data mechanism <cit.>. They also enable the “automated” specification of the likelihood function for Bayesian inference on causal estimands, in terms of the observed outcomes y_i = w_iY_i(1) +( 1 - w_i)Y_i(0) in the RCT <cit.>.§.§ Bayesian Inference and Linear RegressionBayesian inference refers to the fitting of a statistical model to data so as to obtain a probability distribution on the unknown model parameters θ <cit.>. Under this paradigm, all uncertainties and information are encoded via probability distributions, and all inferences and conclusions are obtained via the laws of probability theory. The essential characteristic of Bayesian inference is its direct and explicit use of probability, specifically, via the prior and posterior probability distributions for θ, for quantifying uncertainty and information for all unknown parameters. This characteristic of Bayesian inference distinguishes it from frequentist inference, in which probability distributions are generally not specified for θ. The necessary elements for Bayesian inference are the prior distribution p( θ ) for θ and the likelihood function for θ. The prior encodes information about θ that is contained in historical data, and is expected to augment the information from the RCT. We denote the likelihood function by L( θ| y, w, X), where y =( y_1, …, y_N)^𝖳 is the vector of the participants' observed outcomes, w =( w_1, …, w_N)^𝖳 is the vector of their treatment assignments, and X = [ x_1^𝖳; ⋮; x_N^𝖳 ] is the matrix of their covariates. The likelihood function encodes information about θ from the RCT, and is obtained from the sampling distribution of the data as specified by the generative statistical model underlying the analysis <cit.>. The prior and likelihood function are combined via Bayes' theorem to calculate the posterior distribution p( θ| y, w, X) of θ conditional on the data. In practice, the posterior distribution is calculated as a proportional quantity, without a normalization constant, according to p( θ| y, w, X) ∝ p( θ ) L( θ| y, w, X). The posterior distribution p( θ| y, w, X) encodes all information about θ that is contained in the prior and data. All inferences on causal estimands can thus be obtained from this distribution. To illustrate, consider the finite-population average treatment effect Y̅(1) - Y̅(0). Bayesian inference for this estimand requires the calculation of its posterior distribution. 
Following the framework and reasoning for Bayesian causal inference employed by <cit.> and <cit.>, this posterior distribution is calculated by repeatedly drawing from p( θ| y, w, X), using the draws to impute the missing potential outcomes y_i^mis =( 1 - w_i)Y_i(1) + w_i Y_i(0), calculating the estimand for each such imputation according to N^-1∑_i=1^N { ( 2w_i-1)( y_i - y_i^mis ) }, and concatenating all such calculated estimands. More formally, the posterior p( Y̅(1) - Y̅(0) | y, w, X) is calculated according to the integration∫ p( Y̅(1) - Y̅(0) | y^mis, θ, y, w, X) p( y^mis|θ, y, w, X) p( θ| y, w, X) dy^mis dθ,where y^mis =( y_1^mis, …, y_N^mis )^𝖳. Given this posterior distribution, point estimates of the causal estimand can be obtained via its mean, median, mode(s), and other functionals of the posterior distribution. Interval estimates can be obtained by computing quantiles of the posterior distribution. Bayesian inferences for the super-population average treatment effect μ(1) - μ(0) are obtained from the posterior distribution for a specified parameter in the data generating mechanism. The mechanism that we consider is the linear regression modelY_i(w) = v_i^𝖳β + ϵ_i(w),where v_i ∈ℝ^K is the vector of predictors for participant i = 1, …, N that are defined as functions of w_i and x_i, β =( β_0, …, β_K-1 )^𝖳 is the vector of regression coefficients, and the ϵ_i(w) are independent random error terms distributed according to [ ϵ_i(w) | X, β, σ^2]∼N ( 0, σ^2) with variance parameter σ^2 > 0. The first two entries in each v_i are v_i,1 = 1 and v_i,2 = w_i, and the β_1 entry in β corresponds to the super-population average treatment effect when there are no interactions between treatment and covariates in v_i. As the observed outcomes are functions of the potential outcomes and treatment indicators, the mechanism in equation (<ref>) motivates the linear regression model y_i = v_i^𝖳β + ϵ_i,for the observed outcomes, where the [ ϵ_i | X, β, σ^2]∼N ( 0, σ^2) independently as before. Bayesian inferences are performed on β_1 (and other parameters) by extending model (<ref>) with a prior distribution on β, σ^2 and calculating the posterior distribution according to Bayes' theorem. The unconfoundedness assumption automates the derivation of the likelihood function asL( β, σ^2 | y, w, X) =( σ^2)^-N/2exp{ -1/2σ^2 ( y - V β )^𝖳 ( y - V β ) },where V = [ v_1^𝖳; ⋮; v_N^𝖳 ]. The posterior distribution is then calculated according top( β, σ^2 | y, w, X) ∝ p( β, σ^2)( σ^2)^-N/2exp{ -1/2σ^2 ( y - V β )^𝖳 ( y - V β ) }.The standard non-informative (and improper) prior for the model parameters is p( β, σ^2) ∝ ( σ^2 )^-1, and this corresponds to independent, flat priors on β and log ( σ^2). Inferences from PROCOVA (excluding the heteroskedastic-consistent <cit.>, or robust, standard errors <cit.>) are equivalent to posterior inferences from the corresponding Bayesian linear regression with this prior <cit.>. <cit.> provide additional computational techniques and inferential procedures for Bayesian linear regression. §.§ Prognostic Covariate AdjustmentFrom at least the time of <cit.>, statisticians and scientists have recognized the merits of regression models for causal inference <cit.>. Regulatory agencies have recently published guidance documents that indicate the merits of covariate adjustment for causal inference from RCTs. The FDA's guidance on covariate adjustment states that “Covariate adjustment leads to efficiency gains when the covariates are prognostic for the outcome of interest in the trial. 
Therefore, FDA recommends that sponsors adjust for covariates that are anticipated to be most strongly associated with the outcome of interest.” <cit.>. Similarly, the EMA's guideline document states that “Variables known a priori to be strongly, or at least moderately, associated with the primary outcome and/or variables for which there is a strong clinical rationale for such an association should also be considered as covariates in the primary analysis.” <cit.>. Both agencies also issued provisos that the number of covariates for adjustment should be kept at an appropriate minimum. Specifically, the FDA states that “The statistical properties of covariate adjustment are best understood when the number of covariates adjusted for in the study is small relative to the sample size <cit.>. Therefore, sponsors should discuss their proposal with the relevant review division if the number of covariates is large relative to the sample size or if proposing to adjust for a covariate with many levels (e.g., study site in a trial with many sites).” <cit.>. The EMA states more directly that “Only a few covariates should be included in a primary analysis. Although larger data sets may support more covariates than smaller ones, justification for including each of the covariates should be provided.” and “The primary model should not include treatment by covariate interactions. If substantial interactions are expected a priori, the trial should be designed to allow separate estimates of the treatment effects in specific subgroups.” <cit.>. Thus, regulatory opinions and documents indicate the need to adjust for as few covariates as possible such that the selected covariates are as highly correlated with the outcome as possible.The use of generative AI algorithms in PROCOVA directly addresses these fundamental regulatory considerations for covariate adjustment in RCTs. The generative AI algorithm is trained on historical control data, and the sole inputs of the algorithm for constructing the DTG are the participants' baseline covariates. These two aspects ensure that bias cannot result from the use of the DTG outputs, and that their use for covariate adjustment corresponds to regulatory guidance on pre-specifying all aspects of trial design and analysis prior to the commencement of the RCT. The DTG outputs for a RCT participant are forecasts for their control outcomes at future time-points after treatment assignment. The forecasts at one time-point correspond to the participant's DT probability distribution, and summaries of the DT distribution are used for adjustment. These summaries are themselves covariates, as they are transformations of baseline covariates. We denote the DT distribution at a specified time-point for participant i by the cumulative distribution function G_i: ℝ→ (0,1). By virtue of the training process for the DTG the prognostic score m_i = ∫_-∞^∞ ydG_i(y) is an optimized transformation of a participant's covariates in terms of its absolute correlation with the control outcome. This feature is advantageous for, and follows regulatory guidance on, covariate adjustment because it summarizes the information in a high-dimensional covariate vector into a scalar variable for adjustment that is highly correlated with the outcome. 
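To make this construction concrete, the following minimal Python sketch treats simulated draws as stand-ins for each participant's DT distribution, reduces them to the prognostic score m_i (the mean of the DT distribution), and checks how strongly the resulting scores correlate with control outcomes. All quantities and parameter values here are hypothetical illustrations and are not produced by an actual DTG.

```python
import numpy as np

rng = np.random.default_rng(0)
n_participants, n_draws = 200, 500

# Hypothetical control outcomes and DT forecasts (stand-ins for real DTG output).
signal = rng.normal(0.0, 1.0, size=n_participants)
control_outcomes = signal + rng.normal(0.0, 0.5, size=n_participants)
dt_draws = signal[:, None] + rng.normal(0.0, 0.5, size=(n_participants, n_draws))

# Prognostic score m_i: the mean of participant i's DT distribution.
prognostic_scores = dt_draws.mean(axis=1)

# The score is useful for adjustment insofar as it correlates with the control outcome.
corr = np.corrcoef(prognostic_scores, control_outcomes)[0, 1]
print(f"correlation between prognostic scores and control outcomes: {corr:.2f}")
```

In the analyses that follow, these scores enter only as an additional baseline covariate.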
The PROCOVA methodology of <cit.> leverages the prognostic score as the essential predictor in a linear regression analysis of a RCT, i.e., it sets v_i =( 1, w_i, m_i)^𝖳 as iny_i = β_0 + β_1 w_i + β_2 m_i + ϵ_i.Inferences and tests for the treatment effect are performed with respect to β_1 in PROCOVA. Following regulatory guidance on uncertainty quantification for frequentist covariate adjustment <cit.>, PROCOVA utilizes HC standard errors for inferences on β_1 <cit.>.PROCOVA effectively leverages aspects of historical control data via covariate adjustment using AI-generated prognostic scores to improve the precision of unbiased treatment effect inferences. Further gains in precision beyond those from PROCOVA can be realized by combining covariate adjustment using the prognostic score with prior information from historical control data on the β_0, β_2, and σ^2 parameters in the PROCOVA model (<ref>). This combination is the defining feature of Bayesian PROCOVA, which we proceed to describe. § BAYESIAN PROGNOSTIC COVARIATE ADJUSTMENT §.§ OverviewBayesian PROCOVA is a Bayesian extension of the PROCOVA model (<ref>) with an additive mixture prior for the parameters β and σ^2 that is defined as the weighted sum of two probability density functions. One component in the sum is the “informative prior component” p_I( β, σ^2). This distribution is specified based on prognostic score and outcome information from historical control data. The other component is the “flat prior component” p_F( β, σ^2). In contrast to p_I( β, σ^2), the probability density function p_F( β, σ^2) is specified independently of any data and serves as a proper, weakly informative prior. As will be demonstrated in Section <ref>, one of the limiting cases of our specified p_F( β, σ^2) will correspond to the standard non-informative prior for Bayesian linear regression. These components are combined with a weight parameter ω∈ (0, 1) that is given its own prior p( ω ). Thus, the joint prior for all unknown parameters in Bayesian PROCOVA isp( β, σ^2, ω ) = ω p_I( β, σ^2) p( ω ) +( 1 - ω ) p_F( β, σ^2) p( ω ).The additive mixture prior for Bayesian PROCOVA is specified so as to yield dynamic information borrowing <cit.>. More formally, in the calculation of the posterior distribution for the regression parameters under Bayesian PROCOVA, the weight that is placed on the historical control information is effectively a function of the consistency between the historical control and RCT data. If the historical control and RCT data are consistent, then significant weight is placed on the information encoded in p_I( β, σ^2) when calculating the posterior distribution, and the precision for β_1 consequently increases. Alternatively, if the historical control and RCT data are discrepant, then the information from p_I( β, σ^2) is discounted when calculating the posterior distribution, and instead the weak information from p_F( β, σ^2) is more highly weighted. <cit.> note this attractive property of additive mixture priors in moving the posterior distribution towards the most compatible component rather than just towards historical information. 
As we demonstrate via extensive simulation studies in Section <ref>, the combination of dynamic information borrowing with the tuning of the hyperparameters in p_I( β, σ^2) helps to balance the two objectives of controlling the bias and increasing the precision of treatment effect inferences based on the level of consistency between historical control and RCT data in Bayesian PROCOVA. §.§ Likelihood and Prior Specifications The likelihood function for Bayesian PROCOVA is specified by modifying the PROCOVA model (<ref>) according to the reasoning of <cit.> so as to imbue the intercept β_0 with an interpretation involving the prognostic scores. The prognostic score for RCT participant i is m_i ∈ℝ, and we set m̅ = N^-1∑_i=1^N m_i. The essential sampling distribution for Bayesian PROCOVA is y_i = β_0 + β_1 w_i + β_2( m_i - m̅ ) + m̅ + ϵ_i, so that the covariate adjustment for participant i is their centered prognostic score m_i - m̅. To facilitate the description of Bayesian PROCOVA, demonstrate the essential theory, and reflect regulatory guidance on limiting the number of covariate adjustments <cit.>, we consider the case of adjusting solely for the centered prognostic scores, with no interactions. However, Equation (<ref>) and Bayesian PROCOVA can be expanded in a straightforward manner to include additional covariates and interactions. Equation (<ref>) is equivalent to the regression model in which the observed outcomes are transformed according to y_i^(c) = y_i - m̅ and the predictor vector is v_i =( 1, w_i, m_i - m̅ )^𝖳. Parameter β_0 is interpreted as the bias of the average of the prognostic scores in predicting the endpoint of a control participant, i.e., β_0 = 𝔼 ( y_i - m̅| w_i = 0, β, σ^2, ω ). For this interpretation we assume the m_i are independent and identically distributed, with finite mean, and that their probability distribution does not depend on the model parameters. The likelihood function corresponding to the model in equation (<ref>) (excluding proportionality constants) is L( β, σ^2, ω| y, w, X ) =(σ^2)^-N/2exp{ -1/2σ^2 ( y^(c) - Vβ )^𝖳 ( y^(c) - V β ) }, where y^(c) =( y_1^(c), …, y_N^(c) )^𝖳 and V = [ 1 w_1 ( m_1 - m̅ ); ⋮ ⋮ ⋮; 1 w_N ( m_N - m̅ ) ]. Following the general form of p( β, σ^2, ω ) for Bayesian PROCOVA in equation (<ref>), we specify the prior by selecting p_I( β, σ^2) based on historical control information, ensuring that p_F( β, σ^2) is chosen independently of any data so as to be weakly informative, and choosing the functional form and hyperparameters of p( ω ) so that they do not involve any RCT data. We extend notation from the RCT to the historical control data to formally specify the prior. Let m_i,H∈ℝ denote the prognostic score for participant i in the historical control data, m̅_H = N_H^-1∑_i=1^N_H m_i,H be the average of the historical prognostic scores, y_i,H^(c) = y_i,H - m̅_H be the historical control outcomes centered by the average of the historical prognostic scores, y_H^(c) =( y_1,H^(c), …, y_N_H,H^(c) ) be the vector of centered historical control outcomes, and V_H = [ 1 ( m_1,H - m̅_H); ⋮ ⋮; 1 ( m_N_H,H - m̅_H ) ]. First, we specify p_I( β, σ^2) according to [ β|σ^2] ∼N ( [β_0,H;0; β_2, H ], σ^2 K ), where σ^2 ∼ (N_H-2)s_H^2/χ_N_H-2^2 and K = diag(K_0,H, K_1,H, K_2,H) is a 3 × 3 diagonal matrix of positive constants. Specifically, [ β_0,H; β_2,H ] =( V_H^𝖳V_H)^-1 V_H^𝖳y_H^(c) and s_H^2 =( N_H-2)^-1∑_i=1^N_H{ y_i,H^(c) - β_0,H - β_2,H ( m_i,H - m̅_H) }^2 is the point estimate of σ^2 from the historical control data.
Lastly [ K_0,H 0; 0 K_2,H ] =( V_H^𝖳 V_H)^-1 and K_1,H is selected to be a large value with respect to the scale of the endpoint. The hyperparameter values β_0,H, β_2,H, K_0,H, K_2,H, s_H^2, and the shape parameter N_H-2 are directly selected based on the posterior distribution of ( β_0, β_2, σ^2)^𝖳 when the Bayesian linear regression model is fit for the y_i,H^(c) on the m_i,H - m̅_H using the standard non-informative prior. Consequently, the informative prior component is directly interpreted and justified according to the posterior from the historical control data, and the selection of its hyperparameters is conceptually straightforward in this manner. By direct calculation, K_0,H = N_H^-1 and K_2,H = {∑_i=1^N_H ( m_i,H - m̅_H)^2 }^-1. In practice, setting K_0,H = N_H^-1 can lead to a posterior distribution that is excessively confident in the historical control information, and this overconfidence can yield biased inferences in cases of domain shifts between the historical and RCT data. Alternative tunings of K_0,H that would be more appropriate in such situations, and that are expected to yield improved bias control, are obtained by discounting N_H^-1 by another function of N_H so that the resulting K_0,H decays less rapidly towards zero as N_H increases. In particular, we consider K_0,H = N_H^-1/2 in cases of domain shift for improved bias control with precision gain over PROCOVA. In regard to the prior scale parameter for β_1, we could take K_1,H→∞ so as to make the prior for β_1 in the informative component non-informative. In our implementation of Bayesian PROCOVA we keep K_1,H as a finite, large value, so that p_I( β, σ^2) is a generative and proper prior. The probability density function for the informative prior component is thus p_I( β, σ^2)= { ( N_H - 2/2 )^ ( N_H-2)/2 ( s_H^2)^ ( N_H-2)/2 ( σ^2)^-{ (N_H+1)/2 + 1 }/Γ ( N_H-2/2 ) π^3/2 (K_0,H K_1,H K_2,H )^1/2} ×exp [ -(N_H-2)s_H^2/2σ^2 - 1/2σ^2{ ( β_0 - β_0,H )^2/K_0,H + β_1^2/K_1,H +( β_2 - β_2,H )^2/K_2,H} ].Under this prior, the marginal distribution of y is an N-dimensional Multivariate t distribution with N_H-2 degrees of freedom, center equal to V [ β_0,H; 0; β_2,H ] + m̅1 (where 1 is the N × 1 vector whose entries are all 1), and scale matrix s_H^2( I_N × N + V K V^𝖳 ).Next, we specify p_F( β, σ^2) according to [ β|σ^2] ∼N ( 0, σ^2 k I_3 × 3 ) and σ^2 ∼ν_0σ_0^2/χ_ν_0^2. Hyperparameter k governs the prior variances of β_0, β_1, and β_2, and its selection should correspond to a large value with respect to the scale of the endpoint. The value of σ_0^2 is interpreted as a prior point estimate of σ^2 with ν_0 degrees of freedom. The chosen ν_0 should be small, and σ_0^2 could be chosen such that the resulting Inverse Chi-Square probability distribution resembles the shape of the standard non-informative prior p( σ^2) ∝ ( σ^2)^-1. As ν_0 → 0 for a fixed σ_0^2, the prior converges to the standard non-informative prior. We could take this limiting case and set the prior for σ^2 in the flat component as the standard non-informative prior. However, as for the informative component, in Bayesian PROCOVA we specify the flat component so that it is fully generative and a proper probability distribution. 
The probability density function for the flat prior component isp_F( β, σ^2) = { ( ν_0/2 )^ν_0/2 ( σ_0^2)^ν_0/2 ( σ^2)^-{ (ν_0 + 3)/2 +1 }/Γ ( ν_0/2 )( π k )^3/2}exp [ -ν_0σ_0^2/2σ^2 - 1/2kσ^2 ( β_0^2 + β_1^2 + β_2^2 )].In addition, under this prior, the marginal distribution of y is an N-dimensional Multivariate t distribution with ν_0 degrees of freedom, center at m̅1, and scale matrix equal to σ_0^2( I_N × N + kVV^𝖳 ).Finally, we specify the prior on the mixture weight ω such that it does not depend on any information from the RCT. A flexible and established class of prior distributions consists of the Beta ( α_1, α_2) distributions, with probability density function p( ω ) = Γ ( α_1 + α_2) Γ ( α_1)^-1Γ ( α_2)^-1ω^α_1 - 1 ( 1 - ω )^α_2 - 1, where α_1, α_2 > 0. In practice, we take α_1 = α_2 = 1, i.e., the Uniform distribution, as our default prior on ω. Besides this selection, we could also choose other α_1, α_2 values so as to emphasize or discount the historical control information in the prior. For example, if we set α_1 to a large value and make α_2 small, then significant weight is placed on the historical control information a priori. In addition, if we set α_1 to be small and make α_2 large, then the prior will significantly discount the historical control information.The additive mixture prior in Bayesian PROCOVA has several advantages. First, it is easy to explain and justify. The distribution for ( β_0, β_2, σ^2)^𝖳 in the informative prior component is the posterior distribution of the parameters from the historical control data. Hence, Bayesian PROCOVA is, in part, updating the historical control posterior with data from the RCT to calculate a new posterior for all parameters. In addition, as both the informative and flat prior components are proper probability distributions, the additive mixture prior is guaranteed to be a proper probability distribution. Second, the additive mixture prior for Bayesian PROCOVA is structured such that it is straightforward to understand the encoding of historical control information, and how that information can be discounted to make the informative prior component less dominant. Besides the point estimates β_0,H, β_2,H, and s_H^2, the values of K_0,H and the prior shape parameter N_H-2 in the informative component are functions of the historical control sample size, and these two quantities can be decreased to discount the historical control information. The values of α_1 and α_2 can also be set to further discount the historical control information in the prior. This tuning is helpful in cases of domain shift to control bias while maintaining precision gains over PROCOVA. Third, by taking ω→ 0, k →∞, and ν_0 → 0, the additive mixture prior will converge to the standard non-informative prior p( β, σ^2) ∝ ( σ^2)^-1, and hence Bayesian PROCOVA will yield treatment effect inferences similar to those from PROCOVA.The amount of information contained in the prior of β_0 and β_1 conditional on a value of ω, i.e., p( β_0, β_1 |ω ), under Bayesian PROCOVA can be quantified by comparing the prior variances of β_0 and β_1 from Bayesian PROCOVA to the posterior variances of those parameters that would be obtained if the RCT data were analyzed using a Bayesian linear regression model with predictor vector v_i =( 1, w_i)^𝖳 and prior p( β_0, β_1, σ^2) ∝ ( σ^2)^-1. 
By setting the prior variances of β_0 and β_1 conditional on ω under Bayesian PROCOVA equal to their posterior variances in the latter, hypothetical Bayesian analysis, we can leverage closed-form expressions for the posterior variances of the parameters to identify the sample size for the hypothetical RCT such that the amount of information provided by the RCT on the parameters would be equivalent to the amount of information on the parameters encoded in the prior from Bayesian PROCOVA. We condition on ω for interpreting the prior effective sample size in this regard as it can be used for both trial planning purposes and sensitivity analyses.To illustrate this approach, first consider β_1. The calculation of Var ( β_1 |ω ) under Bayesian PROCOVA follows in a straightforward manner due to the additive mixture prior, or alternatively via simulation as the prior is generative. For a hypothetical RCT with N_1 treated participants and N - N_1 = N_0 control participants in which the centered outcomes y_i^(c) are analyzed using Bayesian linear regression with v_i =( 1, w_i)^𝖳 and in which p( β_0, β_1, σ^2) ∝ ( σ^2)^-1, the posterior variance of β_1 is s^2( N_1^-1 + N_0^-1 ) where s^2 is the estimate of σ^2 from the regression model. If we were to consider 1:1 designs, with N_0 = N_1 = N/2, then we set Var ( β_1 |ω ) = 4s^2N^-1 and solve for N to obtain N = 4s^2/Var ( β_1 |ω ). Thus, given the estimate s^2 of σ^2 and a value of ω, the amount of prior information on β_1 conditional on ω from Bayesian PROCOVA is equivalent to the corresponding amount of information in the posterior distribution for β_1 that is obtained from analyzing the hypothetical RCT of size N = 4s^2/Var ( β_1 |ω ) using the simpler Bayesian linear regression analysis. The existence of Var ( β_1 |ω ) for this calculation is ensured because Bayesian PROCOVA uses proper, generative prior components that have finite first and second moments. The same approach can be implemented for quantifying the information on β_0 from Bayesian PROCOVA. §.§ Posterior DistributionsThe calculation of p( β, σ^2, ω| y, w, X) in Bayesian PROCOVA is performed in a straightforward manner. In particular, the combination of the additive mixture prior with the likelihood results in p( β, σ^2 |ω, y, w, X) being a mixture distribution itself with two components, corresponding to the informative and flat components in the prior. For each mixture component, the specification of a Multivariate Normal distribution for the conditional prior [ β|σ^2] and the Inverse Chi-Square distribution for the marginal prior of σ^2 is conjugate to the likelihood function. As such, for both the informative and flat components, the conditional posterior [ β|σ^2, y, w, X] is a Multivariate Normal distribution, and the marginal posterior [ σ^2 | y, w, X] is an Inverse Chi-Square distribution. Both the marginal posterior p( ω| y, w, X) and the conditional posterior p( ω|β, σ^2, y, w, X) for the mixture weight can be derived in closed-form, with the normalizing constant calculated via numerical integration over the support (0,1). The latter conditional posterior is typically more numerically stable than the former marginal posterior. These observations indicate a straightforward Gibbs algorithm for sampling from the joint posterior, with the algorithm alternating between sampling from p( β, σ^2 |ω, y, w, X) and from p( ω|β, σ^2, y, w, X). 
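Before developing the posterior components, the following short, self-contained Python calculation illustrates the prior-information quantification for β_1 described above. All hyperparameter values are hypothetical and chosen only for illustration; ν_0 > 2 is assumed so that the prior mean of σ^2 in the flat component, and hence Var(β_1 | ω), is finite (for smaller ν_0 one could instead estimate the variance by simulating from the generative prior).

```python
# Hypothetical hyperparameters, for illustration only.
N_H, s2_H, K1_H = 500, 1.0, 100.0    # informative component: sigma^2 ~ (N_H - 2) s_H^2 / chi^2_{N_H - 2}
nu0, sigma0_sq, k = 3, 100.0, 100.0  # flat component: sigma^2 ~ nu0 sigma_0^2 / chi^2_{nu0}
omega = 0.8                          # fixed mixture weight
s2_rct = 1.0                         # assumed estimate of sigma^2 at the RCT scale

# Both prior components center beta_1 at 0, so the law of total variance gives
# Var(beta_1 | omega) = omega * K_{1,H} * E_I[sigma^2] + (1 - omega) * k * E_F[sigma^2].
e_sigma2_informative = (N_H - 2) * s2_H / (N_H - 4)
e_sigma2_flat = nu0 * sigma0_sq / (nu0 - 2)
var_beta1 = omega * K1_H * e_sigma2_informative + (1 - omega) * k * e_sigma2_flat

# Information-equivalent sample size of a hypothetical 1:1 RCT: N = 4 s^2 / Var(beta_1 | omega).
equiv_n = 4 * s2_rct / var_beta1
print(f"Var(beta_1 | omega = {omega}) = {var_beta1:.1f}; equivalent RCT size = {equiv_n:.4f}")
```

With a large K_1,H and a diffuse flat component, the prior carries essentially no information about β_1, which is the intended behavior of the specification.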
The fact that p( β, σ^2 |ω, y, w, X) is a mixture distribution is evident from Bayes' theorem, asp( β, σ^2|ω, y, w, X) ∝ω L( β, σ^2, ω| y, w, X) p_I( β, σ^2 ) +( 1 - ω ) L( β, σ^2, ω| y, w, X)p_F( β, σ^2).The normalization constant for this mixture distribution isC = {ω∫ L( β, σ^2, ω| y, w, X) p_I( β, σ^2) dβ dσ^2 +( 1 - ω ) ∫ L( β, σ^2, ω| y, w, X) p_F(β, σ^2) d β d σ^2 }^-1.Given the marginal distributions of y under the informative and flat prior components, respectively, the closed-form expression for C^-1 isC^-1 = {ωΓ ( N+N_H-2/2 )/Γ ( N_H-2/2 )( N_H-2)^N/2π^N/2} ( s_H^2)^-N/2{det ( I_N × N + V KV^𝖳 ) }^-1/2×{ 1 + 1/ ( N_H-2) s_H^2 ( y^(c) - V[ β_0,H; 0; β_2,H ] )^𝖳 ( I_N × N + V K V^𝖳 )^-1 ( y^(c) - V [ β_0,H; 0; β_2,H ] ) }^-{ ( N+N_H-2)/2 } + { ( 1 - ω ) Γ ( N+ν_0/2 )/Γ ( ν_0/2 )( ν_0)^N/2π^N/2} ( σ_0^2)^-N/2{det ( I_N × N + kVV^𝖳 ) }^-1/2×{ 1 +( 1/ν_0 σ_0^2 )( y^(c) )^𝖳 ( I_N × N + kVV^𝖳 )^-1 y^(c)}^-{ ( N+ν_0)/2 }.The weights for the two components of the posterior are ω_* = C ω∫ L( β, σ^2, ω| y, w, X) p_I( β, σ^2) dβ dσ^2 and 1 - ω_* = C( 1 - ω ) ∫ L( β, σ^2, ω| y, w, X) p_F(β, σ^2) d β dσ^2, respectively. The closed-form expression for the weight of the informative component isω_*= {C ωΓ ( N+N_H-2/2 )/Γ ( N_H-2/2 )( N_H-2)^N/2π^N/2} ( s_H^2)^-N/2{det ( I_N × N + V KV^𝖳 ) }^-1/2 ×{ 1 + 1/ ( N_H-2) s_H^2 ( y^(c) - V [ β_0,H; 0; β_2,H ] )^𝖳 ( I_N × N + V K V^𝖳 )^-1( y^(c) - V [ β_0,H; 0; β_2,H ] ) }^-{ ( N+N_H-2)/2 }and the closed-form expression for the weight of the flat component is1-ω_*= {C( 1 - ω ) Γ ( N+ν_0/2 )/Γ ( ν_0/2 )( ν_0)^N/2π^N/2} ( σ_0^2)^-N/2{det ( I_N × N + kVV^𝖳 ) }^-1/2 ×{ 1 +( 1/ν_0 σ_0^2 )( y^(c) )^𝖳 ( I_N × N + kVV^𝖳 )^-1 y^(c)}^-{ ( N+ν_0)/2 }. For the informative component of the mixture conditional posterior, [ β|σ^2, y, w, X] is Multivariate Normal with mean vector β_* =( β_0,*, β_1,*, β_2,* )^𝖳 and covariance matrix Σ_* defined asβ_*= [ β_0,H; 0; β_2,H ] + K V^𝖳 ( I_N × N + V K V^𝖳 )^-1 ( y^(c) - V[ β_0,H; 0; β_2,H ] ),Σ_*= σ^2 K - σ^2 KV^𝖳 ( I_N × N + V K V^𝖳 )^-1 V K.The posterior of σ^2 in the informative component is p_I( σ^2 | y, w, X) = p_I( β, σ^2 | y, w, X)/p_I( β|σ^2, y, w, X), so that [ σ^2 | y, w, X] ∼ (N+N_H-2)s_*^2/χ_N+N_H-2^2 withs_*^2=( N+N_H-2)^-1{ ( y^(c) - Vβ_*)^𝖳 (y^(c) - Vβ_*) + (N_H-2)s_H^2 +( β_0,* - β_0,H )^2/K_0,H + β_1,*^2/K_1,H +( β_2,* - β_2,H )^2/K_2,H}. For the flat component of the mixture conditional posterior, [ β|σ^2, y, w, X] is Multivariate Normal with mean vector b_*=( b_0,*,b_1,*,b_2,* )^𝖳 and covariance matrix 𝐒_* defined asb_*= kV^𝖳 ( I_N × N + kVV^𝖳 )^-1 y^(c),𝐒_*= σ^2 kI_3 × 3 -σ^2 k^2 V^𝖳 ( I_N × N + kVV^𝖳 )^-1 V.Similar to the previous case, the marginal posterior of σ^2 in the flat component is [ σ^2 | y, w, X]∼ (N+ν_0)σ_0,*^2/χ_N+ν_0^2, whereσ_0,*^2 =( N+ν_0)^-1{ ( y^(c) - Vb_*)^𝖳 (y^(c) - Vb_*) + ν_0σ_0^2 + 1/k( b_0,*^2 + b_1,*^2 + b_2,*^2 ) }.Formulae for the posterior mean and variance of β_1 conditional on ω are derived via our previous expressions for the posteriors of the parameters. The posterior mean of β_1 is the second entry in𝔼 ( β|ω, y, w, X)= ω_* β_* + (1 - ω_*) b_*.The posterior variance of β_1 is calculated according to the law of total variance viaCov ( β|ω, y, w, X)= ω_* { ( N + N_H - 2)s_*^2/N+N_H-4}{K - K V^𝖳 ( I_N × N + V K V^𝖳 )^-1 V K} +( 1 - ω_*) { ( N + ν_0 ) σ_0,*^2/N+ν_0-2}{ kI_3 × 3 - k^2 V^𝖳 ( I_N × N + kVV^𝖳 )^-1 V } + ω_*( 1 - ω_*)( β_* - b_* ) ( β_* - b_* )^𝖳.The posterior variance of β_1 is the (2,2) entry of this matrix. 
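The closed-form expressions above can be assembled directly. The following Python sketch computes ω_*, the two component posteriors, and the posterior mean and variance of β_1 conditional on ω for simulated data; the data-generating values and hyperparameters are hypothetical, and the code is an illustration of the formulae rather than the authors' implementation.

```python
import numpy as np
from math import lgamma, log

def log_component_evidence(yc, V, center_beta, K_mat, df, s2):
    # log marginal density of y^(c) under one prior component: a multivariate t with
    # df degrees of freedom, center V @ center_beta, and scale matrix s2 * (I + V K V^T).
    N = len(yc)
    M = np.eye(N) + V @ K_mat @ V.T
    _, logdet = np.linalg.slogdet(M)
    r = yc - V @ center_beta
    quad = r @ np.linalg.solve(M, r)
    return (lgamma((N + df) / 2) - lgamma(df / 2) - (N / 2) * log(df * np.pi)
            - (N / 2) * log(s2) - 0.5 * logdet
            - ((N + df) / 2) * np.log1p(quad / (df * s2)))

def beta1_posterior_given_omega(yc, V, omega, beta_H, K, N_H, s2_H, nu0, sigma0_sq, k):
    N, p = V.shape
    # Mixture weight of the informative component: omega_* = omega*A / (omega*A + (1 - omega)*B).
    logA = log_component_evidence(yc, V, beta_H, K, N_H - 2, s2_H)
    logB = log_component_evidence(yc, V, np.zeros(p), k * np.eye(p), nu0, sigma0_sq)
    omega_star = 1.0 / (1.0 + np.exp(log(1 - omega) + logB - log(omega) - logA))

    # Informative component posterior.
    M_I = np.eye(N) + V @ K @ V.T
    beta_star = beta_H + K @ V.T @ np.linalg.solve(M_I, yc - V @ beta_H)
    resid_I = yc - V @ beta_star
    s2_star = (resid_I @ resid_I + (N_H - 2) * s2_H
               + np.sum((beta_star - beta_H) ** 2 / np.diag(K))) / (N + N_H - 2)
    cov_I = K - K @ V.T @ np.linalg.solve(M_I, V @ K)

    # Flat component posterior.
    M_F = np.eye(N) + k * V @ V.T
    b_star = k * (V.T @ np.linalg.solve(M_F, yc))
    resid_F = yc - V @ b_star
    sigma0_star_sq = (resid_F @ resid_F + nu0 * sigma0_sq + b_star @ b_star / k) / (N + nu0)
    cov_F = k * np.eye(p) - k ** 2 * V.T @ np.linalg.solve(M_F, V)

    # Posterior mean and covariance of beta conditional on omega (law of total variance).
    mean = omega_star * beta_star + (1 - omega_star) * b_star
    cov = (omega_star * (N + N_H - 2) * s2_star / (N + N_H - 4) * cov_I
           + (1 - omega_star) * (N + nu0) * sigma0_star_sq / (N + nu0 - 2) * cov_F
           + omega_star * (1 - omega_star) * np.outer(beta_star - b_star, beta_star - b_star))
    return mean[1], cov[1, 1], omega_star

# Hypothetical RCT data generated from the Bayesian PROCOVA sampling model with a null effect.
rng = np.random.default_rng(1)
N = 100
w = np.repeat([0.0, 1.0], N // 2)
m = rng.normal(size=N)
y = 0.3 + 0.0 * w + 0.9 * (m - m.mean()) + m.mean() + rng.normal(scale=1.0, size=N)
yc = y - m.mean()
V = np.column_stack([np.ones(N), w, m - m.mean()])

beta_H = np.array([0.3, 0.0, 0.9])        # hypothetical historical point estimates
K = np.diag([1 / 500, 100.0, 1 / 450])    # K_{0,H} = 1/N_H, large K_{1,H}, K_{2,H}
post_mean, post_var, omega_star = beta1_posterior_given_omega(
    yc, V, omega=0.5, beta_H=beta_H, K=K, N_H=500, s2_H=1.0, nu0=3, sigma0_sq=100.0, k=100.0)
print(f"omega_* = {omega_star:.3f}, E[beta_1 | omega] = {post_mean:.3f}, Var[beta_1 | omega] = {post_var:.4f}")
```

A full Gibbs implementation would wrap these conditional computations together with draws of ω from its conditional posterior, as outlined in the next subsection.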
By means of additional algebra, this formula indicates that the variance reduction of Bayesian PROCOVA over PROCOVA depends in part on comparisons between the historical control and RCT data in regard to the correlation between the prognostic scores and outcomes and the average bias in the prognostic scores in both datasets. The variance reduction is also a function of the sample sizes of the historical control and RCT data. These insights can help guide sensitivity analyses for Bayesian PROCOVA. <cit.> provided a simple formula for the effective sample size (ESS) of a Bayesian analysis. This formula is ESS = N(𝕍_1/𝕍_2), where 𝕍_1 is the posterior variance of the parameter of interest (e.g., β_1) that is obtained from an analysis without using an informative prior (i.e., PROCOVA under the standard non-informative prior), and 𝕍_2 is the posterior variance of the same parameter under a more informative prior (i.e., Bayesian PROCOVA). We have closed-form expressions for variances conditional on ω, and can calculate the ESS accordingly. Furthermore, we can calculate the sample size reduction via ESS - N = N(𝕍_1/𝕍_2 - 1). This formula is also present in the work of <cit.>, <cit.>, and <cit.>. A more complicated expression for ESS was given by <cit.>, but the essential ingredient in their expression was the formula above. We calculate the marginal posterior p( ω| y, w, X) = p( β, σ^2, ω| y, w, X)/p( β, σ^2 |ω, y, w, X). As this is a function of ω, we can input any set of values for β and σ^2 into the right-hand side of the equation to obtain p( ω| y, w, X). We take care to include all normalizing constants in the numerator and denominator that involve ω in this calculation. This is a one-dimensional distribution whose support is on (0,1), and so we can calculate the normalizing constant of this marginal posterior using numerical integration. In addition, the posterior of ω conditional on β, σ^2 is directly calculated as p( ω|β, σ^2, y, w, X)∝ω p( ω )×{ ( N_H - 2/2 )^ ( N_H-2)/2 ( s_H^2)^ ( N_H-2)/2 ( σ^2)^-{ (N_H+1)/2 + 1 }/Γ ( N_H-2/2 ) π^3/2 (K_0,H K_1,H K_2,H )^1/2} ×exp [ - ( N_H - 2) s_H^2/2σ^2 - 1/2σ^2{ ( β_0 - β_0,H )^2/K_0,H + β_1^2/K_1,H +( β_2 - β_2,H )^2/K_2,H} ]+( 1 - ω ) p( ω )×{ ( ν_0/2 )^ν_0/2 ( σ_0^2)^ν_0/2 ( σ^2)^-{ (ν_0 + 3)/2 +1 }/Γ ( ν_0/2 )( π k )^3/2}exp{ -ν_0 σ_0^2/2σ^2 - 1/2kσ^2 ( β_0^2 + β_1^2 + β_2^2) }. Similar to the marginal posterior, for any β and σ^2 we can calculate the normalizing constant of this conditional distribution using numerical integration, and thereby directly obtain samples from it. §.§ Gibbs Sampler for the Mixture Posterior Distribution We sample from the mixture posterior p( β, σ^2, ω| y, w, X) using a Gibbs algorithm. Specifically, we iterate between drawing the vector of parameters from p( β, σ^2 |ω, y, w, X) conditional on a previous draw of ω, and drawing ω from p( ω|β, σ^2, y, w, X) conditional on the previously drawn β and σ^2. The formal steps for the Gibbs algorithm are outlined below.
0. Initialize ω^(0).
For iteration j = 1, 2, …:
* Calculate ω_* based on ω^(j-1).
* Draw Z^(j) ∼ Bernoulli( ω_* ).
* If Z^(j) = 1:
* Draw ( σ^2)^(j) ∼{ (N+N_H-2)s_*^2 } / χ_N+N_H-2^2.
* Draw β^(j) from the informative posterior component [ β| ( σ^2)^(j), y, w, X].
* If Z^(j) = 0:
* Draw ( σ^2)^(j) ∼{ (N+ν_0)σ_0,*^2 } / χ_N+ν_0^2.
* Draw β^(j) from the flat posterior component [ β| ( σ^2)^(j), y, w, X].
* Draw ω^(j) via the Probability Integral Transform applied to the cumulative distribution function of p( ω|β^(j),( σ^2)^(j), y, w, X) as obtained by numerical integration.
§ SIMULATION STUDIES §.§ Data Generation Mechanisms and Evaluation Metrics We conduct simulation studies under five scenarios to evaluate the frequentist performance, in terms of bias control and variance reduction, of Bayesian PROCOVA compared to PROCOVA. In the first three scenarios, the historical control and RCT data are generated according to the same mechanism. Here we explore the sensitivity of the frequentist properties of Bayesian PROCOVA to the prior on ω (which will either be the Beta(1,1) or the Beta(1/2,1/2) distribution) and the prior on σ^2 in the flat prior component (which will have either ν_0 = 1, σ_0^2 = 1 or ν_0 = 3, σ_0^2 = 100). In the fourth scenario, we introduce shifts in the correlation of the prognostic scores with the control outcomes between the historical control and RCT data. In this case we only consider the Beta(1,1) prior for ω and the Inverse Chi-Square prior with ν_0 = 1, σ_0^2 = 1 for σ^2 in the flat prior component. Lastly, we introduce shifts in the average bias of the prognostic score in the historical control data compared to the RCT. Here we only consider the Beta(1,1) prior for ω, and the Inverse Chi-Square priors for σ^2 with ν_0 = 1, σ_0^2 = 1 and ν_0 = 3, σ_0^2 = 100 in the flat prior component (Table <ref>). Table <ref> gives the parameter settings that remain fixed across simulation scenarios. Observed outcomes in both historical and trial datasets are generated according to equation (<ref>) with prognostic scores generated identically and independently from one another based on standard Normal random variables. In the historical data, prognostic scores are on average unbiased (β_0,H = 0) and the error variance is set as σ^2=1. In the trial data, participants are randomized at a 1:1 ratio and the treatment effect is null (β_1=0). The value of β_0 in the trial data depends on the shift in bias from the historical data, which we vary in one simulation scenario. The value of β_2 in both the historical and trial datasets depends on the correlation between the prognostic scores and control outcomes. Multiple levels of correlation are considered in each scenario, and in one scenario we introduce discrepancies between the values in the historical and trial data. All scenarios are simulated for both K_0,H = N_H^-1 and K_0,H = N_H^-1/2. For each scenario, 1000 datasets are simulated, and the Gibbs algorithm is implemented for 1000 iterations in each simulated dataset to fit the Bayesian PROCOVA model. We confirmed that the Gibbs algorithm converged rapidly, and validated the implementation of the Gibbs algorithm using the diagnostics of <cit.>. The metrics that we calculate across the simulated datasets are bias and variance reduction with respect to PROCOVA. §.§ Bias Control Figures <ref> to <ref> summarize the evaluations of the mean absolute bias of the posterior mean of the super-population treatment effect from Bayesian PROCOVA across different types of priors and simulation scenarios. We observe that bias in the Bayesian PROCOVA treatment effect estimator is effectively controlled in nearly all simulation scenarios (Figures <ref> and <ref>). The exception is the situation in which the RCT has a small sample size and there exists a mild-to-moderate shift in the absolute prognostic score bias between the historical and RCT data (Figures <ref> and <ref>).
As the RCT sample size increases, the bias converges towards zero. By comparing the top-left panels of Figures <ref> and <ref>, we observe that setting K_0,H = N_H^-1/2 can reduce the maximum level of bias compared to setting K_0,H = N_H^-1 for a mild bias shift in the case of a small trial size. These two observations indicate that the bias is a consequence of the Bayesian method placing undue confidence in the historical control data as a result of the large historical control sample size and the relatively smaller RCT sample size. Additionally, in cases of prognostic score bias shift, the bias in the treatment effect estimator is smaller when the prognostic scores are more highly correlated with the control outcomes (Figures <ref> and <ref>). §.§ Variance Reduction Figures <ref> and <ref> illustrate the variance reductions from Bayesian PROCOVA in the first situation for which the historical and RCT data are consistent with one another. We observe that the Inverse Chi-Square prior with ν_0 = 3, σ_0^2 = 100 for σ^2 in the flat prior component and the Beta(1,1) prior on ω yield consistent and stable variance reduction for Bayesian PROCOVA over PROCOVA, even in small RCTs. Variance reduction is more unstable and smaller in expectation when the Inverse Chi-Square prior with ν_0 = 1, σ_0^2 = 1 is utilized in the flat prior component. This result on variance reduction can be explained by considering how the flat prior component for σ^2 affects the posterior distribution of ω. Specifying the Inverse Chi-Square prior with ν_0 = 3, σ_0^2 = 100 results in an average posterior weight of ω≈ 1, so that significant weight is placed on the information from the historical data. This is advantageous when the historical and RCT data are consistent with each other, because Bayesian PROCOVA effectively augments the small RCT sample size with all of the information in the larger historical control dataset. In contrast, the average posterior weight in the other case of ν_0 = 1, σ_0^2 = 1 is ω≈ 0.33. This results in both less information being leveraged from the historical control data, and more of the weak information from the flat prior component being utilized in the posterior inferences. While bias in the treatment effect inferences remains under control in this circumstance, this mixture of information increases the variance and introduces additional instabilities in posterior inferences, especially for small RCT sample sizes. The posterior variance of the treatment effect is also directly related to the ESS. This relationship is displayed in Figures <ref> and <ref>. We see from these figures that, the larger the ratio of the ESS to the actual RCT sample size, the greater the variance reduction of Bayesian PROCOVA compared to PROCOVA. We also observe that this ratio decreases for larger RCTs. Figures <ref> and <ref> illustrate how the variance reduction of Bayesian PROCOVA changes due to discrepancies in the correlations between prognostic scores and control outcomes across the historical and RCT data. In general, combinations of small correlation levels (the bottom left sections of each panel) result in less variance reduction. Combinations of correlation levels that lie above the line, representing cases in which the correlation between prognostic scores and control outcomes in the historical data is larger than that in the RCT data, correspond to greater variance reduction.
By comparing these two figures, we observe that the variance reduction resulting from K_0,H = N_H^-1/2 is less than that resulting from K_0,H = N_H^-1. This is a direct consequence of the fact that the prior distribution under the first setting of K_0,H is less informative than that under the second setting, and hence the posterior for the treatment effect has greater posterior variance. The case of a shift in the bias of the prognostic scores, i.e., a change in the intercept from the historical control to the RCT data, is demonstrated via Figures <ref> and <ref>. We observe from these figures that when there is no shift in the bias of the prognostic scores, Bayesian PROCOVA exhibits variance reduction over PROCOVA. However, in cases of a mild bias shift (e.g., one standardized unit), Bayesian PROCOVA can actually inflate the variance of the treatment effect estimator compared to PROCOVA. As the magnitude of the shift increases beyond one standardized unit, Bayesian PROCOVA effectively recovers the same inferences as PROCOVA, so that there would be zero variance reduction. Furthermore, when K_0,H = N_H^-1/2, the inflation of the variance of the treatment effect estimator is less than when K_0,H = N_H^-1. In addition, the severity and risk of variance inflation decreases as a function of the RCT sample size. The previous set of results indicates how one can mitigate variance inflation in cases of mild-to-moderate conflicts between historical control and RCT data by consideration of various hyperparameters in the prior specification for Bayesian PROCOVA. For example, in the previous simulation scenarios in which the prognostic score bias differed between the historical and RCT data, if we were to specify the Inverse Chi-Square prior with ν_0 = 3, σ_0^2 = 100, then we would observe significant variance inflation of Bayesian PROCOVA over PROCOVA. This is illustrated in Figures <ref> and <ref>. Ultimately, this choice of hyperparameters would lead to more weight being placed on the information from the historical control data, which decreases the quality of inferences due to the discrepancies between the historical control and RCT data. In contrast, by specifying the Inverse Chi-Square prior with ν_0 = 1, σ_0^2 = 1, which more closely resembles the standard non-informative prior p( σ^2) ∝ ( σ^2)^-1, we recover PROCOVA in cases of data conflict, as demonstrated in Figures <ref> and <ref>. This indicates that we can quickly recover PROCOVA, even in cases of mild shift, by placing a tighter Inverse Chi-Square prior for σ^2 in the flat prior component. We can also interpret the tighter prior as a tighter penalty on the variance parameter, which prevents the shift in bias from entering the inferences for the variance term. We also observe from Figures <ref> and <ref> that there exists a potential trade-off with the stability of the variance reduction. Therefore, the Inverse Chi-Square hyperparameter selection should be one focus of prospective sensitivity analyses. Another focus should be the selection of K_0,H, as changes in this hyperparameter can assist in discounting historical control data in case of domain shift. § CONCLUDING REMARKS The capability for effective and rapid decision-making from RCTs can be improved by means of innovative statistical methods that increase the precision of treatment effect inferences while controlling bias. Our Bayesian PROCOVA methodology directly addresses these crucial requirements for decision-making.
It incorporates covariate adjustment based on optimized, generative AI algorithms under the Bayesian paradigm. The combination of these two strategies in Bayesian PROCOVA follows regulatory guidance and best practices on covariate adjustment and Bayesian inference. Key features of Bayesian PROCOVA are its additive mixture prior on the regression parameters, and the prior for the mixture weight. The complete prior specification encodes historical control information while also enabling the consideration of a weakly informative prior component in case discrepancies exist between the historical control and RCT data. The prior for the mixture weight is completely pre-specifiable prior to the commencement of the RCT, so that the RCT data are not used twice in Bayesian PROCOVA as in the methods of <cit.>, <cit.>, and <cit.>. We derived the posterior distributions for the regression coefficients in closed-form conditional on the mixture weight, with treatment inferences formulated in terms of the super-population average treatment effect β_1. Our derivations led to the development of a straightforward and efficient Gibbs algorithm for sampling from the joint posterior distribution of all model parameters, including the mixture weight. Finally, we demonstrated via comprehensive simulation studies that Bayesian PROCOVA can be tuned to both control the bias and reduce the variance of its treatment effect inferences compared to PROCOVA. Ultimately, fewer control participants would need to be recruited into the RCT, the RCT could consequently run much faster, and effective decision-making from RCTs could be accelerated by means of Bayesian PROCOVA. | http://arxiv.org/abs/2310.18027v3 | {
"authors": [
"Alyssa M. Vanderbeek",
"Arman Sabbaghi",
"Jon R. Walsh",
"Charles K. Fisher"
],
"categories": [
"stat.ME",
"stat.AP",
"stat.ML",
"62F15"
],
"primary_category": "stat.ME",
"published": "20231027100506",
"title": "Bayesian Prognostic Covariate Adjustment With Additive Mixture Priors"
} |
Hannah Götsch (corresponding author, [email protected]), Faculty of Mathematics, University of Vienna, 1090 Vienna, Austria, and Vienna Graduate School of Population Genetics, Austria
Reinhard Bürger, Faculty of Mathematics, University of Vienna, 1090 Vienna, Austria
We study the response of a quantitative trait to exponential directional selection in a finite haploid population, both at the genetic and the phenotypic level. We assume an infinite sites model, in which the number of new mutations per generation in the population follows a Poisson distribution (with mean Θ) and each mutation occurs at a new, previously monomorphic site. Mutation effects are beneficial and drawn from a distribution. Sites are unlinked and contribute additively to the trait. Assuming that selection is stronger than random genetic drift, we model the initial phase of the dynamics by a supercritical Galton-Watson process. This enables us to obtain time-dependent results. We show that the copy-number distribution of the mutant in generation n, conditioned on non-extinction until n, is described accurately by the deterministic increase from an initial distribution with mean 1. This distribution is related to the absolutely continuous part W^+ of the random variable, typically denoted W, that characterizes the stochasticity accumulating during the mutant’s sweep. A suitable transformation yields the approximate dynamics of the mutant frequency distribution in a Wright-Fisher population of size N. Our expression provides a very accurate approximation except when mutant frequencies are close to 1. On this basis, we derive explicitly the (approximate) time dependence of the expected mean and variance of the trait and of the expected number of segregating sites. Unexpectedly, we obtain highly accurate approximations for all times, even for the quasi-stationary phase when the expected per-generation response and the trait variance have equilibrated. The latter refine classical results. In addition, we find that Θ is the main determinant of the pattern of adaptation at the genetic level, i.e., whether the initial allele-frequency dynamics are best described by sweep-like patterns at few loci or small allele-frequency shifts at many. The number of segregating sites is an appropriate indicator for these patterns. The selection strength determines primarily the rate of adaptation. The accuracy of our results is tested by comprehensive simulations in a Wright-Fisher framework. We argue that our results apply to more complex forms of directional selection.
Keywords: polygenic adaptation, mutation, infinite-sites model, Galton-Watson branching process, Wright-Fisher model
§ INTRODUCTION Phenotypic adaptation is ubiquitous and can occur rapidly as a response to artificial selection or to a gradual or sudden change in the environment. On the basis of accessible phenotypic measurements, the response of the mean phenotype can often be predicted or described by simple equations, such as the breeder's equation or its evolutionary counterpart <cit.>. During this process genotype frequencies change, and new genotypes and new alleles can be generated by recombination and mutation. Even if the response of the mean phenotype is simple to describe, it is a highly complex, often insurmountable, task to infer the response on the genetic level. There are multiple reasons for this problem.
The response of a trait may be due to small frequency changes at a large number of loci of minor effect <cit.>, as originally proposed by <cit.> and idealized and formalized in the infinitesimal model <cit.>. The selection response may also be due to selective sweeps at a few loci of large effect <cit.>. Finally, a mixture of both is a third realistic scenario <cit.>. In principle, different genetic architectures in combination with different initial conditions can lead to similar phenotypic responses, at least as far as the mean phenotype is concerned. Even if loci contribute additively to the trait and their number and effects are known, it is notoriously difficult to derive the allele-frequency dynamics at individual loci and the resulting dynamics of the trait's mean and variance from the selection pressure acting on the trait. Additional complications arise if linkage disequilibrium, epistasis, or pleiotropy have to be taken into account <cit.>. For quite some time the focus in studying adaptation at the genetic and molecular level has been on selective sweeps, first hard sweeps and more recently soft sweeps, which can occur from standing variation or immigration and are not restricted to mutation-limited evolutionary scenarios <cit.>. Although footprints of hard and soft selective sweeps are detectable in genetic data by various methods <cit.>, they seem to be sparse in the data, especially hard sweeps in humans <cit.>. The detection of polygenic responses by small allele-frequency changes at loci of minor effect is much more demanding. It became feasible at a larger scale only when sequencing became cheaper <cit.>. Genome-wide association studies (GWASs) became an important tool in identifying associations between genetic variants, typically single-nucleotide polymorphisms (SNPs), and phenotypic traits. Although GWASs are highly valuable in elucidating the genetic architecture underlying phenotypic traits, they provide no direct information on the contribution of the causal variants to adaptation of the trait, which can be deleterious, neutral or advantageous. More refined approaches, such as experimental evolution methods that yield time-series data, have been pursued successfully to demonstrate truly polygenic adaptation <cit.>. Theoreticians only recently started to explore when adaptation of polygenic traits is likely to be based on selective sweeps at a few loci vs. on small polygenic allele-frequency changes. Essential for this line of research is a good understanding of which genetic architectures and parameters (number and effects of loci, distribution of mutation effects, linkage, epistasis, pleiotropy) as well as population sizes and forms and strengths of selection can cause which adaptive patterns. By this we mean sweep patterns (hard or soft), polygenic patterns, or a mixture of both. Some studies focused on polygenic traits under stabilizing selection, where the trait mean is close to the optimum or at most a few phenotypic standard deviations away and the selective response is due to standing genetic variation established previously by mutation-selection-(drift) balance. The findings are qualitatively similar, whether the population size is so large that random genetic drift can be ignored <cit.> or moderately large so that it plays a role but is weaker than selection <cit.>. Roughly, if most standing variation is due to many polymorphic loci of small effect, then the response of the mean is mainly caused by small frequency changes.
Only under exceptional circumstances do selective sweeps occur and drive the response. This result is not unexpected because (i) loci of small effect are more likely to carry alleles of intermediate frequency and (ii) close to the phenotypic optimum fine tuning at the contributing loci is important. Other studies focused on polygenic adaptation under directional selection <cit.>, for instance caused by a sudden big environmental shift <cit.>. Although the assumptions on the genetic architecture and on the form of directional selection differ among these studies, they show that selective sweeps at several loci (parallel or successive) occur more often than in the above discussed scenario. The main determinant of the adaptive pattern at the genomic level is the so-called population-wide background mutation rate, not the strength of selection or the population size <cit.>. In this study, we also explore how various evolutionary and genetic factors determine the rate and pattern of polygenic adaptation, both at the phenotypic and the genetic level. However, our assumptions depart in several respects from those imposed in previous studies. First, we consider a trait subject to exponential directional selection, which is nonepistatic, does not induce linkage disequilibrium, and leads to a constant selection strength s. Second, in contrast to the investigations cited above, we focus on the response due to de novo mutations, i.e., we ignore standing variation. Third, but similar to <cit.>, we assume an infinite sites model. The number of mutations per generation follows a Poisson distribution and each mutation occurs on a new, previously monomorphic locus. Loci are unlinked and contribute additively to the trait. Mutation effects are beneficial and follow a general distribution. This is justified by the assumption that in our population of size N selection is stronger than random genetic drift (Ns>1), so that prolonged segregation or fixation of non-advantageous mutations can be ignored. These assumptions enable us to derive analytical expressions for the time dependence of the mutant frequency distribution at each locus, of the expected number of segregating sites, and of the expected mean and variance of the trait. The paper is structured as follows. In Section <ref>, we introduce our basic model and give an overview of the analytical concepts regarding the one-locus model and its extension to infinitely many sites. We also describe our simulation setting. In Section <ref>, we derive an explicit, approximate expression for the mutant frequency distribution in a finite Wright-Fisher population as a function of time. This becomes feasible by using a supercritical Galton-Watson process with a quite general offspring distribution to describe the spreading of a beneficial mutant. Comparison with simulation results of a corresponding Wright-Fisher model demonstrates that our approximation is highly accurate for a wide range of parameters and a large time range. Key results are explicit and computationally efficient time-dependent expressions for the expected mean and the expected variance of an additive quantitative trait under exponential selection. They are presented in Section <ref>, seem to be entirely new, and provide highly accurate approximations to corresponding Wright-Fisher simulations. Interestingly, they even allow the derivation of expressions for the long-term, quasi-stationary response of the trait's mean and variance.
They are not only derived from first principles by assuming the infinite sites model but also recover and refine classical results. Proofs are given in Appendix <ref>. In Section <ref> (and Appendix <ref>), we derive explicit, approximate expressions for the evolution of the expected number of segregating sites. They are based on a combination of branching process methods with (in part new) approximations for the expected time to loss or to fixation of a new beneficial mutant. The latter are deduced and numerically tested in Appendix <ref>. In Section <ref>, we use the approximation for the number of segregating sites to characterize the numerically determined initial response patterns. This allows us to examine the genomic patterns associated with the early phase of phenotypic adaptation. Because this paper is rather technical, we summarize, explain, and discuss our findings comprehensively in Section <ref>. A central quantity in our theory is the expected number Θ of new beneficial mutations occurring per generation in the total population. We provide quantitative support to previous findings in related but different contexts that sweep-like patterns will occur if Θ is sufficiently smaller than 1, whereas polygenic shifts will drive the initial selection response if Θ is much greater than 1. Other model parameters have only a minor influence on the initial adaptive pattern, but may have a major influence on the rate of response of the trait. We propose to use the expected number of segregating sites as an indicator for distinguishing between sweep-like vs. polygenic shift-like patterns in the early phases of adaptation. In Section <ref>, we discuss the applicability of our approach to more general selection scenarios and to more general genetic architectures as well as the limitations inherent in our model assumptions. § THE MODEL We assume a haploid, panmictic population which has discrete generations and a constant population size of N. Our main goal is to explore the dependence of the pattern and rate of allele-frequency change at the loci driving the adaptation of a quantitative trait on the model's parameters. To achieve this aim, we derive the dynamics from first principles. First we set up the basic model for the quantitative trait, then the underlying one-locus model, the multilocus model, which we assume to be an infinite-sites model, and the simulation model. A glossary of symbols is provided in Table <ref>. §.§ The quantitative-genetic model We consider a quantitative trait subject to nonepistatic directional selection and investigate the evolution of the distribution of allele frequencies at the loci contributing to this trait during adaptation. The fitness of an individual with phenotypic value G is given by exp(sG), where the selection coefficient s>0 is assumed to be constant in time. The trait is determined additively by a (potentially infinite) number of diallelic loci at which mutation occurs. We ignore environmental contributions to the phenotype which, in the absence of genotype-environment interaction and of epistasis, would only weaken selection on the underlying loci. Therefore, genotypic and phenotypic values are equal. Each locus i can be in the ancestral state a_i or in the derived state A_i. The effect on the phenotype of a locus in the ancestral state is zero, whereas a mutation at locus i contributes a value α_i>0 to the phenotype G.
Thus, the phenotype of an individual is given by G = ∑_i _i δ_A_i, where δ_A_i=1 if locus i is in the derived state, and δ_A_i=0 otherwise.The fitness of the ancestral allele is normalized to 1. Therefore, the fitness of the ancestral genotype, which has value G=0, is also one. Because of our additivity assumption and the shape of the fitness function, a mutation at locus i has a fitness of σ_i = e^_i>1, where s>0. Hence, all mutations are advantageous. Because we mostly assume Ns>1, deleterious mutations are very unlikely to reach high frequency or become fixed, and thus will not contribute to a permanent selection response of the mean phenotype.Unless stated otherwise, we assume that initially all loci are in the ancestral state and adaptation occurs from de novo mutations. New mutations occur from the ancestral state to the derived state and back-mutations are ignored. We define the population-wide advantageous mutation rate by = N U, where U is the expected number of mutations per individual that increase the trait value. Thus,is the expected number of new mutations per generation that affect the trait. We assume an infinite sites model, i.e., each mutation hits a new locus. This assumption is appropriate if per-locus mutation rates are small enough such that the next mutation occurs only after the previous one becomes lost or fixed.In the infinite sites model we investigate two different scenarios. In the first, all mutation effects are equal. In the second, mutation effects _i at any locus i are drawn from a distribution defined on [0,∞) that has a density f. Its mean is denoted byand all its higher moments are assumed to exist. Thus, the loci are exchangeable and all mutations are beneficial. For specification of analytical results and their comparison with simulation results, we use an exponential distribution <cit.> and a truncated normal distribution (see Remark <ref>). §.§ The one-locus model We consider a single diallelic locus in isolation and assume that allele frequencies evolve according to a haploid Wright-Fisher model with selection. The (relative) fitness of the ancestral allele a and of the derived allele A (a new mutant) are set to 1 and σ= e^s, s>0. Therefore, the selective advantage of the mutant relative to the wildtype is e^s-1≈ s. We assume that in generation n=0 the new beneficial mutant A occurs as a single copy and no other mutation occurs while this allele is segregating. Because it seems unfeasible to derive analytic results for the time dependence of the allele frequency for either the Wright-Fisher model or its diffusion approximation, we approximate the stochastic dynamics in the initial phase, where stochasticity is most important, by a branching process <cit.>. During this initial phase, interactions between mutant lineages can be ignored if N is large enough. The first step is to approximate the evolution of the frequency distribution of the mutant by a Galton-Watson process. In Section <ref>, we will approximate this discrete process by a continuous one and couple it with the deterministic allele-frequency dynamics to obtain an analytically accessible model for the long-term evolution of the mutant in a population of size N.To be specific, we consider a Galton-Watson process {Z_n}, where Z_n=∑_j=1^Z_n-1ξ_j, Z_0=1, and ξ_j denotes the (random) number of offspring of individual j in generation n-1. Thus, Z_n counts the number of descendants of the mutant that emerged in generation 0. 
We assume that the ξ_i are independent, identically distributed random variables having at least three finite moments, where [ξ_j] =>1 and[ξ_j] = v.Therefore, the process {Z_n} is supercritical. The ξ_i are also independent of n.We denote the generating function of ξ_j (hence of Z_1) by φ. Whenever we consider extinction or survival probabilities as functions of =e^s, we assume that v=2/ρholds for the underlying family of offspring distributions, where ρ>0 is a constant (cf. Remark <ref>). In this case, we also assume that the probability of ultimate extinction is continuous in =e^s and decreases with increasing . When we compare our analytical results with simulation results from a Wright-Fisher model, we assume that the offspring distribution is Poisson, because this provides a close approximation to the binomial distribution if the population size N is sufficiently large, especially if Ns≫1 <cit.>.§.§ The infinite-sites model We assume that the trait is determined by a potentially infinite number of loci at which new, beneficial mutants can occur. We assume an infinite-sites model, so that every mutation occurs at a new, previously monomorphic locus. The initial population starts evolving in generation τ=0 in which the first mutations may occur. Prior to generation 0, the ancestral allele is fixed at every site potentially contributing to the trait. We assume that in each generation τ≥0, the total number of new mutants emerging in the population follows a Poisson distribution with mean . Each mutant occurs at a new site (locus). Because selection is multiplicative in our model and random genetic drift is a weak force relative to selection (because N is assumed to be large), we ignore linkage disequilibrium among loci. Hence, we assume that the allele-frequency distributions at the different loci evolve independently.According to the mutation model, the ith mutation will occur at a random time τ_i. The corresponding locus will be labeled locus i, and the mean offspring number of the mutant - when the mutant is still rare in the population - is denoted by _i>1. For mathematical convenience, we approximate the discrete distribution of the waiting time τ_i by the Erlang distribution with density m_i, (t) =^i t ^i-1/(i-1)!exp (-t ) , ift>0(and m_i, (t)=0 if t≤0). Throughout, we use t as an integration variable and emphasize that generations τ are discrete. The mean waiting time until mutation i is i/. We note that if mutations occurred according to a Poisson point process with rate , the Erlang distribution would be the exact waiting-time distribution between i events. Here, we use it as an approximation. The total number of mutation events k until generation τ is Poisson distributed withmean τ_τ (k) = (τ )^k/k!exp (-τ ) . §.§ The simulation model The simulation algorithm written in C++ is based on previous versions by <cit.> and <cit.>.With the above assumptions, we generate Wright-Fisher simulations forward in discrete time to examine the adaptation of a trait under selection, mutation and random genetic drift in the case of an originally monomorphic ancestral population. The first mutation events can occur in generation 0. The algorithm acts on each locus because we assume that loci evolve independently. The basic idea for the algorithm is that each generation is represented by a vector, whose length is determined by the number of mutated loci. A mutation event occurs according to the mutation rate, adds a new locus and thus increases the length of the vector by one. 
The entries of the vector contain the number of mutants at each locus. Random genetic drift is included via binomial sampling with sampling weights due to selection <cit.>. We use the GNU Scientific Library <cit.> and the MT19937 <cit.> random number generator. Unless stated otherwise, a simulation for a set of parameters consists of 20 000 replicate runs. In almost all cases, 20 000 replicates yielded standard error bars smaller than the size of the symbols in the figures; in a few cases, more runs were performed. Therefore, error bars are not shown. Runs are stopped at a given number n (or τ) of generations. From these replicate runs, means (and variances) of the quantities of interest are computed, or histograms if distributions are the objects of interest. § DYNAMICS OF ALLELE-FREQUENCY DISTRIBUTIONS It is well known that, conditioned on fixation, the frequency of a beneficial mutant grows faster than predicted by the corresponding deterministic model <cit.>. For models related to ours, it has been shown that conditioned on fixation and once a certain threshold has been exceeded, the evolution of the frequency of mutants can be approximated by a deterministic increase from a random variable, often labeled W, that describes the stochasticity that accumulates during the spread of the mutant <cit.>. The latter authors called this the effective initial mutant population size. <cit.> employed a variant of this approach and approximated the initial and the final phase of evolution by a Feller process conditioned on fixation. Based on this, they derived a semi-deterministic approximation for the distribution of the (beneficial) allele frequency at any given time. Below, we develop a refined variant that provides a highly accurate, analytically tractable, and computationally efficient approximation for the frequency distribution of a beneficial mutant in any generation n in a diallelic haploid Wright-Fisher population of size N (Result <ref>). Its extension in eqs. (<ref>) and (<ref>) to the ith mutant occurring in the infinite sites model paves the way for an analytical treatment of our applications in Sections <ref> and <ref>.§.§ Approximating the random variable W Our starting point is the supercritical Galton-Watson process {Z_n} defined in Section <ref>. The probability of ultimate extinction, denoted by (), is the unique fixed point of the offspring generating function φ in (0,1), i.e., it satisfies φ()= and 0<<1. When no confusion can occur, we suppress the dependence ofon the mean offspring number >1.We define W_n=Z_n/^n. According to a classical result <cit.> there exists a random variable W such that lim_n→∞ W_n = W with probability 1. The limiting random variable W has expectation [W]=1, satisfies W=0 precisely if the process dies out, and W is positive and absolutely continuous[Absolute continuity of a function f is equivalent to having a derivative f' almost everywhere and f' being Lebesgue integrable.] otherwise. We define the random variable W^+ corresponding to the absolutely continuous part of W by [W≤ x] =+ (1-)[W^+≤ x] , where x≥ 0 , so that W^+ has a proper distribution with density w^+. Because [W]=1, eq. (<ref>) informs us that [W^+]=(1-)^-1.In general, not much is known about the shape of the distribution W^+. However, for a geometric offspring distribution with mean , W^+ is exponential with parameter 1-1/ <cit.>. 
As pointed out by a reviewer, W^+ is exponential if and only if the offspring distribution is a fractional linear, or modified geometric, distribution (which has a fractional linear generating function). We treat the properties of the associated Galton-Watson processes that are relevant in our context in Appendix <ref>. We will use them when extending results in Sections <ref> and <ref> to cases when the effective population size N_e differs from the actual population size N.We denote the Laplace transform of W by Φ(u) =[e^-uW], where u ≥ 0. It is well known <cit.> that Φ is uniquely determined as the solution of Poincaré's (or Abel's) functional equation Φ(u)=φ(Φ(u/)) .In general, this cannot be solved explicitly. However, if the offspring distribution is geometric, then (<ref>) becomes Φ(u) = 1/1+(1-Φ(u/)) . This is (uniquely) solved by the Laplace transform of W with (cumulative) distribution [W ≤ x] =+ (1-)( 1- exp [-(1-)x] ) . Hence, W^+ is exponentially distributed with parameter 1- = 1-^-1 <cit.>. For a Poisson offspring distribution, which has the generating function φ(x) = e^-(1-x), we use the approximation φ(x)≈(1+(1-x))^-1 which has the same error as the first-order Taylor approximation near x=1, but is much more accurate on the whole interval [0,1]. With this approximation, we obtain from (<ref>) that Φ(u) ≈(1+(1-Φ(u/)))^-1. Therefore, for a Poisson offspring distribution, we approximate W^+ by an exponential distribution with parameter 1-, whereis the corresponding extinction probability (see Sect. <ref>).A straightforward calculation shows that[W^+] = 1/1-([W] +1 -1/1-) ,where [W]=v/((-1)) and v is the variance of the offspring distribution <cit.>. If the offspring distribution is geometric with mean >1, then =1/, [W^+]=^2/(-1)^2=[W^+]^2, as is required for an exponential distribution. For a Poisson offspring distribution with mean >1, it follows that the coefficient of variation of W^+ has the series expansion 1-13(-1)+O((-1)^2). It can also be shown that the skew has the expansion 2-(-1) + O((-1)^2). This suggests that asdecreases to 1, the accuracy of the exponential approximation increases. Therefore, our approximations perform best in the slightly supercritical case, i.e., ifis only slightly larger than 1. It is straightforward to infer that for a Poisson offspring distribution, W^+ is not gamma distributed because the skew is too small.As mentioned above and shown in Appendix <ref> for self-containedness, the distribution of W^+ is exponential if and only if the offspring distribution is fractional linear. Our argument above shows that W^+ is approximately exponential with parameter 1- for a Poisson offspring distribution, and second and third moments converge to that of an exponential as →1. We expect that this is the case for a large class of offspring distributions, i.e., when for → 1 the generating function is asymptotically equivalent to that of a fractional linear offspring distribution. Indeed, <cit.> derived an exponential distribution for W^+ for a birth-death branching processes with time-dependent mean offspring number (as they consider a model with temporal variation in its population size and the selection pressure). <cit.> show that W^+ is approximately exponential with parameter 1- for Feller diffusions. They start the deterministic phase in their Wright-Fisher model with selfing from a gamma distribution because they admit one or more initially segregating mutants. 
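The quality of this exponential approximation is easy to probe by direct simulation of the Galton-Watson process. The following sketch is illustrative Python (the simulations reported in this paper are written in C++); it assumes a Poisson offspring distribution with mean e^s and uses arbitrary parameter values:

import numpy as np

def simulate_W(s=0.1, n_gen=25, n_rep=200_000, seed=1):
    """Simulate Z_n with Poisson offspring of mean e^s, starting from a single
    mutant (Z_0 = 1), and return W_n = Z_n / e^(s*n) for each replicate."""
    rng = np.random.default_rng(seed)
    m = np.exp(s)                         # mean offspring number of the mutant
    Z = np.ones(n_rep, dtype=np.int64)
    for _ in range(n_gen):
        Z = rng.poisson(m * Z)            # sum of Z i.i.d. Poisson(m) offspring numbers
    return Z / m**n_gen

W = simulate_W()
alive = W > 0                             # replicates still segregating after n_gen generations
W_plus = W[alive]                         # these approximate the distribution of W^+
print("estimated survival probability:", alive.mean(), "(Haldane-type bound 2s = 0.2)")
print("mean of W^+:", W_plus.mean(), "vs. 1/(survival prob.):", 1.0 / alive.mean())
print("sd/mean of W^+ (equals 1 for an exact exponential):", W_plus.std() / W_plus.mean())

The coefficient of variation printed in the last line should come out close to, and slightly below, one, in line with the expansion quoted in the Remark above.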
§.§ Extinction probability by generation n for a Poisson offspring distribution Because we use a Poisson offspring distribution for comparisons of our analytic results with simulation results from the corresponding Wright-Fisher model, we summarize the properties of the probabilities of extinction of the mutant until generation n. We assume that the mutant starts as a single copy in generation n = 0, and the offspring distribution is Poisson with mean =e^s, s>0. Then the probability () of ultimate loss, or extinction, of the mutant is the unique solution of φ(x) = e^-(1-x) =x in (0,1). It can be expressed in terms of the (principal branch of the) product logarithm, or the Lambert function: () = -[-exp(-)]/ . With =e^s, this has the series expansion () = 1 - 2s + 5s^2/3+ O(s^3) . The extinction probability (e^s) decreases from 1 to 0 as s increases from 0 to ∞. We write = 1- for the (long-term) survival probability. It follows directly from (<ref>) that Haldane's approximation, 2s, serves as an upper bound for the survival probability and approaches it as s→0 (indeed, Haldane derived an equation equivalent to e^-(1-x) =x). We define (n,) as the probability that the mutation gets lost before generation n, i.e., 1-(n,) is the probability that the mutation is still segregating in generation n. If no confusion can occur, we write (n) instead of (n,). Then (n) satisfies the recursion (0) = 0, (n) = exp [ -(1-(n - 1))], because (n)=φ_(n)(0)=φ(φ_(n-1))(0), the nth iteration of the generating function φ of the Poisson distribution. By monotonicity of (n) and continuity of φ it follows that (n,)→() for n →∞ and for every . Indeed, it is well known that (n,)→() holds for very general offspring distributions. An explicit formula for (n) seems to be unavailable. However, the recursion (<ref>) is readily evaluated numerically and convergence of 1-(n,) to ()=1-() is fast (Fig. <ref>). As documented by Figure <ref> and supported by extensive numerical evaluation, the inequality (1-(n,))/(1-()) ≤ 1 + ()/(^n-()) appears to hold always, and the upper bound serves as a highly accurate approximation unless selection is strong and n is very small. For fractional linear offspring distributions, equality holds in (<ref>) (Appendix <ref>). Indeed, convergence of 1-(n,) to 1-() at the geometric rate 1/ follows immediately from Corollary 1 in Sect. 1.11 of <cit.>. The time T_ needed for the probability of non-extinction by generation n to decline to (1+)(1-()) is given explicitly in (<ref>) for the fractional linear case, and it provides an accurate approximation and upper bound for the Poisson case (see Appendix <ref>). For a large class of offspring distributions having bounded third moments, () has an approximation of the form 2(-1)/v+O((-1)/v)^2) as the mean offspring number decreases to 1 <cit.>. If =e^s and s is small, this yields (e^s)≈ρ s, where ρ=1 for a geometric offspring distribution, and ρ=2 for a Poisson distribution or any other distribution with v=; cf. (<ref>) and (<ref>). By our monotonicity and continuity assumption on the extinction probability (Sect. <ref>), the survival probability is monotone increasing in s, (e^s)≤min{ρ s,1} for every s, and (e^s)→ρ s as s→0. §.§ Evolution of the distribution of mutant frequencies at a single locus Now we start the investigation of the evolution of the frequency distribution of a mutation that occurred as a single copy in generation n=0. Our first goal is to approximate the distribution of Z_n conditioned on Z_n>0 by a simple continuous distribution. Therefore, we define the distribution of the (discrete) random variable W_n^+ by [W_n≤ x] = (n) + (1-(n))[W_n^+≤ x] , where x≥ 0 .
Then [Z_n≤ y| Z_n>0] = [^n W_n^+≤ y]. We approximate this distribution by the exponential distributionΨ_n(y) = 1 - e^-_n y ,wherey≥0and _n = 1-(n)/^n ,and denote the corresponding random variable by Y_n and its density by ψ_n. For fractional linear offspring distributions, this approximation is best possible in a rigorous sense (see Appendix <ref>). Because of the convergence of W_n^+ to W^+, the exponential distribution Ψ_n will provide a close approximation to the true distribution of W^+_n (with the possible exception of very small values of n) if W^+ is approximately exponential. Figure <ref> shows that for a Poisson offspring distribution, the exponential density ψ_n provides an excellent approximation for the number of mutants in generation n, conditioned on non-extinction until n, in a Wright-Fisher model. Asdecreases to 1, the accuracy of the approximation will increase. The above approximation, [Z_n≤ y| Z_n>0] ≈Ψ_n(y), informs us that in an infinite population the distribution of the number of mutants after n generations, conditioned on non-extinction until generation n and originating from a single mutation, can be described by a deterministic exponential increase that starts with an exponentially distributed initial mutant population (according to ψ_0) of mean size 1. Then the mean of this population in generation n is ^n/(1-(n)), which approaches ^n/ as n increases, and the distribution remains exponential. Thus, according to this model a beneficial mutant destined for survival does indeed grow faster than expected from the corresponding deterministic model (which has a growth rate of ), in particular in the early generations.As already noted, <cit.>, <cit.>, and <cit.> used similar approaches, but conditioned on fixation, instead of non-extinction until the given generation. Therefore, they took the random variable W^+, or rather its exponentially distributed approximation with the fixation probability as parameter, as their effective initial mutant population size (which on average is larger than one). For large n, the distribution of ^n W^+ has the same asymptotic behavior as Ψ_n in (<ref>), but our distribution Ψ_n provides a more accurate approximation for the early phase of the spread of the mutant (Figure <ref>) because in each generation it has the correct mean. The distributions shown by dashed curves in Figure <ref> are obtained by conditioning on long-term survival in the branching process.Now we apply the above results to derive an approximation for the frequency of a mutant in a finite population of large and constant size N, conditioned on non-extinction until generation n. Our starting point is the approximate exponential growth of the number of mutants, as given by the distributions Ψ_n in (<ref>) of the random variables Y_n. Because in a finite population exponential growth is impossible, we follow the population genetics tradition and use relative frequencies (probabilities). We assume that the number of resident types remains at its initial frequency of N-1≈ N (because N is large and residents produce one offspring).We define the random variables X_n, measuring the relative frequency of the mutants in the total population in generation n, byX_n = Y_n/Y_n+N .We note that in the absence of stochastic variation ([Y_n]=0), we have Y_n=^n Y_0 and therefore p(n)=X_n = Y_n/(Y_n+N-1) solves the corresponding deterministic diallelic selection equation, (<ref>), if p(0)=Y_0/N=1/N. 
Because (<ref>) is equivalent to Y_n=NX_n/(1-X_n), the density of X_n is approximated by g_a(n) (x)= ψ_n(N x/1- x) d/d x(Nx/1-x) ,where ψ_n is the density of Ψ_n in (<ref>). A simple calculation yields (<ref>) in our key result of this section: Conditioned on non-extinction until generation n and for given N,the discrete distribution of the mutant frequency originating from a single mutation in generation 0 with fitnesscan be approximated by the distribution of the positive continuous random variable X_n with density g_a (x) = a/(1 - x)^2·exp[ - a x/1-x],wherea = a(n, , N)= N (1-(n,)) ^-n. This result does not provide a controlled approximation of the Wright-Fisher or the associated diffusion dynamics, simply because the time dependence of allele frequencies is prohibitively difficult to deal with analytically in these stochastic dynamical systems. However, its utility and accuracy will be demonstrated below. Our expression (<ref>) for the density g_a(x) has the same structure as that given by <cit.> in their equation (4). The difference is that their _t decays exponentially with t with a constant parameter, whereas our a=a(n,,N) decays exponentially with a parameter depending on n. This difference originates from the fact that we condition on non-extinction until n, whereas <cit.> condition on fixation. To interpret and understand the analytic results derived below, we first need to study the properties of the density g_a (x) in (<ref>) and of its constituent terms. The only parameter on which the mutant frequency distribution g_a (x) in (<ref>) depends is the compound parameter a=a(n,,N). This dependence is displayed in Fig. <ref>. Obviously, a(n,,N) is proportional to N. As a function of n, a(n,,N) decreases approximately exponentially from its maximum value N, assumed at n=0, to 0 as n→∞. As a function of(or s), a(n,,N) decreases as well (see Fig. <ref>).We observe that g_a(n, , N)(0)=a(n, , N). If a>2, then g_a (x) attains its maximum at x=0 (Fig. <ref>). In this case g_a (x) decays with increasing x (and approximately exponentially for small x). This will always be the case in the initial phase of the mutant's invasion. If a<2, which will occur for sufficiently large n, then g_a (x) has an unique maximum g_a (x̂) = 4/aexp(a-2) at x̂ = 1- a/2∈ (0,1], which shifts from x=0 to x=1 as a decreases (see the red curve in Fig. <ref>). A crucial quantity in a is also the probability of nonextinction by generation n, 1-(n,). Figure <ref> documents the approach of 1-(n,) to 1-() as n increases and compares it with the approximation (<ref>). The convergence is quick if s⪆0.1 (Section <ref>). For small s, however, 1-(n) is much larger than 1- for many more generations. Consequently, simplifying g_a by substitutingfor (n) will considerably decrease the accuracy of the approximation during this initial phase (cf. Fig. <ref>, which shows this effect for ψ_n).Figures <ref> and <ref> compare the analytic approximation (<ref>) for the distribution of mutant frequencies at various generations with the corresponding histograms obtained fromWright-Fisher simulations. They show that the approximation is very accurate in the initial phase whenever Ns≥1. If Ns≥100, it remains accurate almost until fixation of the mutant. If Ns is of order 10, the approximation remains accurate for a surprisingly long period, essentially until the (mean) mutant frequency exceeds 12, i.e., until it has become the dominating variant. 
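These quantities are straightforward to evaluate numerically. The sketch below (illustrative Python, not the authors' C++ code) implements the ultimate extinction probability via the principal branch of the Lambert function, the recursion for the loss probability by generation n, the compound parameter a, and the density g_a; the sampling routine uses the distribution function F(x) = 1 - exp(-ax/(1-x)), which follows from the construction above but is not written out in the text:

import numpy as np
from scipy.special import lambertw

def p_loss_ultimate(s):
    """Ultimate loss probability of a single mutant, Poisson offspring with mean e^s."""
    m = np.exp(s)
    return float(np.real(-lambertw(-m * np.exp(-m)) / m))

def p_loss_by_gen(s, n_max):
    """Loss probability by generation n: p(0)=0, p(n)=exp(-m*(1-p(n-1)))."""
    m = np.exp(s)
    p = np.zeros(n_max + 1)
    for n in range(1, n_max + 1):
        p[n] = np.exp(-m * (1.0 - p[n - 1]))
    return p

def a_param(n, s, N, p_loss):
    """Compound parameter a(n, e^s, N) = N * (1 - p_loss(n)) * e^(-s*n)."""
    return N * (1.0 - p_loss[n]) * np.exp(-s * n)

def g_a(x, a):
    """Density of the mutant frequency, conditioned on non-extinction until generation n."""
    return a / (1.0 - x)**2 * np.exp(-a * x / (1.0 - x))

def sample_g_a(a, size, rng):
    """Draw from g_a by inverting F(x) = 1 - exp(-a*x/(1-x))."""
    y = -np.log(1.0 - rng.random(size)) / a   # y = x/(1-x) is exponential with rate a
    return y / (1.0 + y)

s, N, n = 0.05, 10_000, 60
p = p_loss_by_gen(s, n)
a = a_param(n, s, N, p)
print("p_loss(60):", p[n], " ultimate:", p_loss_ultimate(s), " a:", a)
print("sampled mean mutant frequency:",
      sample_g_a(a, 100_000, np.random.default_rng(0)).mean())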
That the branching-process approximation underestimates the effects of random genetic drift when the mutant reaches high frequency is an obvious consequence of its assumptions. In Section <ref> we will see that the mean of g_a provides a highly accurate approximation to the true mean even if the mutation is close to fixation and Ns is much less than 100.We note that Wright-Fisher simulations show that for s>0.5 (i.e., very strong selection), the distribution of the mutant frequencies cannot be approximated by the density g_a. The reason is that for largeand a Poisson offspring distribution, the density ψ_n of Y_n=N X_n/1-X_n deviates too much from an exponential distribution. That for large , the variance becomes much smaller than the mean, can be inferred directly from the coefficient of variation given in Remark <ref>. In the following we exclude very strong selection and assume s<0.5. In fact, we focus on parameters satisfying s≤ 0.1 and Ns ≥ 10. §.§ Evolution of the distribution of mutant frequencies in the infinite-sites model We use the model outlined in Section <ref>. In particular, τ refers to the generation since the initial population started to evolve. Mutations i=1,2,3,… occur at generations τ_i, as outlined above. We note that τ differs from n, as used in the above subsections on a single locus, because for locus i the generation number in the Galton-Watson process is n=0 at time τ_i.From(<ref>) we recall the Erlang distribution and defineM_i, (τ) = ∫_0^τ m_i, (t) d t = 1-Γ (i, τ)/Γ (i) ,which approximates the probability that i mutations have occurred by generation τ. Here, (n,z)=∫_z^∞ x^n-1e^-x dx denotes the incomplete Gamma function <cit.>. We recallthat lim_z→0(n,z)=(n) =(n-1)! (if n is a positive integer), and Γ(0,z)=E_1(z), where E_1(z)= ∫_z^∞ x^-1 e^-xdx is the exponential integral.In the following, we write [t] for the nearest integer to t, and we definea(t,_i,N) = N (1-([t],_i)) _i^-tfor every (real)t≥0 ,where _i is the fitness of the mutant at locus i. From Result <ref> we infer that, conditioned on non-extinction until generation τ, the discrete distribution of the frequency of mutations at the locus at which the ith mutation event occurred, locus i for short, can be approximated by the distribution of the continuous random variable with densityh^(i) (x) = h^(i)_τ,_i,N,(x) = 1/M_i, (τ)∫_0^τ g_a(τ -t,_i,N) (x) m_i, (t)d t .Here, we used that the ith mutant has occurred approximately τ-[t] generations in the past, where t is Erlang distributed with parameters i and . The probability that the mutation at locus i has been lost by generation τ, i.e., that the ancestral type is again fixed at this locus in generation τ, can be approximatedby^(i)(τ, _i,) = 1/M_i, (τ)∫_0^τ(τ -[t],_i )m_i, (t)d t(Fig. <ref>B). Below, we shall need unconditioned distributions of the mutant frequencies. Therefore, we defineh̃^(i) (x) =h̃^(i)_τ,σ_i,N,(x) = 1/M_i,(τ)∫_0^τ(1-([τ-t],σ_i)) g_a(τ-t, σ_i, N)(x) m_i,(t)dt(see Fig. <ref>A). Then ∫_0^1 h̃^(i)_τ,σ_i,N,(x) dx = 1-^(i)(τ,σ_i,) is the probability of nonextinction of the ith mutant until generation τ. 
Therefore, in the absence of conditioning on non-extinction, we obtain for the probability that in generation τ the frequency X̃^(i)_τ of the i-th mutant is less or equal p∈ [0, 1]: [X̃^(i)_τ≤ p] = ^(i)(τ,σ_i,) + ∫_0^p h̃^(i)_τ(x)dx .§ EVOLUTION OF THE PHENOTYPIC MEAN AND VARIANCE Based on the results obtained above, we are now in the position to derive approximations for the expected response of the phenotype distribution from first principles. As usual in quantitative genetics, we concentrate on the evolution of the phenotypic mean and the phenotypic variance, i.e., the within-population variance of the trait. For this purpose,we will need the first and second moments pertaining to the density g_a(x). They are given by_1(a) = ∫_0^1 x g_a (x)dx = 1 - a e^a E_1(a)and_2(a)= ∫_0^1 x^2 g_a (x)dx = 1 + a - a (2+a) e^a E_1(a) ,respectively, where E_1(a)= ∫_a^∞ x^-1 e^-xdx is the exponential integral. In addition, we will need the within-population variance of the mutant's allele frequency,(a)= ∫_0^1 x(1-x) g_a (x)dx = a(1+a) e^a E_1(a) -a . By the properties of the exponential integral, the mean _1(a) decays from one to zero as a increases from 0 to ∞ <cit.>. Together with the properties of a=a(t,,N), this implies that _1(a(t)) increases from approximately 1/N to 1 as t increases from 0 to ∞ (see Fig. <ref>A). For the variance (a), we find that (a)→0 as either a→0 or a→∞, andassumes a unique maximum at a≈0.719, where (0.719)≈ 0.196. As function of t, we have (a(0))≈ 1/N and (a(t))→0 as t→∞ (see Fig. <ref>B). The value t_M of t at which the variance is maximized can be computed from (<ref>) with a≈0.719. Because ([t],) converges to () quickly (Sects. <ref> and <ref>), we obtain t_M ≈ln(2.8Ns/v)/s, where v is the variance of the offspring distribution. For a Poisson offspring distribution, N=10^4 and s=0.001 or s=0.1, we obtain t_M≈3.33 or 7.84, which is in excellent agreement with Fig. <ref>B. For large values of a (such as a>80), we encountered numerical problems when usingto evaluate _1(a) or (a). The reason is that it multiplies the then huge value e^a with the tiny value E_1(a), which is set to 0 if it falls below a prescribed value. However, the right-hand side ofe^a E_1(a) = ∫_1^∞1/x e^a(1-x) dxcan be integrated numerically without problems. An efficient, alternative possibility is to use the approximation e^a E_1(a) ≈1/a(1-1/a) (e.g., if a>50), which follows from the asymptotic expansion (<ref>). Beginning with Section <ref>, we define the expected phenotypic mean, G̅(τ), and the expected phenotypic variance, V_G(τ), of the trait in generation τ in terms of the distributions of the random variables X̃^(i)_τ defined in (<ref>) and (<ref>). The definitions of G̅(τ) and V_G(τ) are given in eqs. (<ref>) and (<ref>). (In the introductory Section <ref> a simpler random variable can be used.) Therefore, our results below are precise mathematical statements about these quantities, and not statements about the mean and variance of the trait derived from the corresponding Wright-Fisher model. The figures illustrate the accuracy at which Wright-Fisher simulation results can be approximated by our theory, which we base on the assumption that allele-frequency distributions evolve as stated in Result <ref>.A Poisson offspring distribution is used only for comparison with the simulation results or when stated explicitly.§.§ A single locus We start by investigating the evolutionary dynamics of the trait's mean G̅_i and variance V_G_i caused by the contribution of a single locus i. 
Let _i be the phenotypic effect of the mutant and _i=e^s_i its fitness effect. Further, let X^(i)_τ be the random variable describing the mutant's frequency τ generations after its occurrence, conditioned on non-extinction until generation τ. Thus, X^(i)_τ corresponds to X_n defined in (<ref>) with τ=n, _i=, and i indicating the locus. The density of X^(i)_τ is g_a(N,_i,τ), as given in (<ref>). The mutant's expected contribution to the phenotypic mean is _i [X^(i)_τ] and to the phenotypic variance it is _i^2 [X^(i)_τ(1-X^(i)_τ)]. By employing equations (<ref>) and (<ref>), we can define the contributions of locus i to the expected mean and the expected variance of the trait by G̅_i (τ)= _i ∫_0^1 x g_a(τ,_i)(x)dx =_i _1(a(τ,_i))andV_G_i (τ)= _i^2∫_0^1 x(1-x) g_a(τ,_i)(x)dx = _i^2(a(τ,_i)) ,respectively.In Figure <ref> the branching process approximations G̅_i (τ) and V_G_i (τ) (dashed curves) are compared with simulation results from the diallelic Wright-Fisher model (symbols) and with _i p(τ) and _i^2p(τ)(1-p(τ)) (solid curves), where p(τ) = ^τ p(0) /1+(^τ-1)p(0)is the solution of the discrete, deterministic selection equation p(τ+1) =p(τ)/w̅(τ), and w̅(τ) = 1+(-1) p(τ) is the mean fitness <cit.>. Figure <ref> shows excellent agreement between the branching process approximations and the Wright-Fisher simulations. It also demonstrates that in a finite population and conditioned on non-extinction until generation τ, the mutant spreads faster than in a comparable infinite population. The smaller the selective advantage is, the faster the (relative) initial increase of the mutant frequency, and hence of the mean and the variance — on the time scale sτ. This is related to the fact that, to leading order in Ns (for Ns≫1), the scaled fixation time of a mutant of effect s in a Wright-Fisher model is ≈2ln(2Ns)/s ,which is a simplified version of eq. (<ref>). For N=10^4 and s=0.1, s=0.001, we obtain from (<ref>) s≈ 16.4, 7.0, respectively, which provide good estimates for the time to completion of the sweep (Fig. <ref>). Additional numerical support for the expected duration time ofis provided by Fig. <ref>. Indeed, the expected duration time (<ref>) of a selective sweep has been derived by <cit.>.The phenomenon described above is well-known <cit.>. The intuitive explanation is that mutants that initially spread more rapidly than corresponds to their (small) selective advantage have a much higher probability of becoming fixed than mutants that remain at low frequency for an extended time period. The smaller the selection coefficient s, the more important a quick early frequency increase is.§.§ Diffusion approximations and length of the phases of adaptation Although our results and approximations are derived on the basis of a branching process approximation, the beginnings and lengths of the phases when they are most accurate are best described using diffusion approximations for the (expected) fixation times. This is motivated and supported by the results of <cit.> and our simulation results.Because in diffusion theory quantities are typically given as functions of the selection coefficient (and the population size), we denote the selection coefficient of a mutant of effectby= e^s-1 . 
For given N and s, we will need the expected mean fixation time of a single mutant (destined for fixation).The expectation is taken with respect to the distribution f of the mutation effectswhich has mean :(s,f, N) = ∫_0^∞(, N) (, N) f()d/∫_0^∞(, N) f()d .Here, (, N) denotes the mean time to fixation of a single mutant with effectand (, N) its probability of fixation, both in a population of size N.For comparison with simulation results, we use the well-known diffusion approximation (<ref>) for(where (<ref>) yields (<ref>) if N_e=N) and the simplified version (<ref>) of the diffusion approximation for . Details and an efficient numerical procedure to compute (s,, N) are described in Appendix <ref>. If f is exponential with mean 1, an accurate approximation of (,N) is (s,,N) ≈2/sln(2Ns)if 2/N≤ s≤ 0.2 (see Appendix <ref>). For =1, the approximation (<ref>) corresponds to the simplified approximation (<ref>) for equal mutation effects.We note from (<ref>) and (<ref>) that (, N) > 1 - e^2 > (e^s) holds for every N, whereis based on a Poisson offspring distribution. We emphasize that we distinguish between the fixation probabilityin the diffusion approximation of the Wright-Fisher model and the survival probabilityin the Galton-Watson process, although they are quantitatively very similar if Ns is sufficiently large and s small. We use the former only to compute fixation times, whereas the latter is used in our results and their derivation. Our approach can be extended to cases when the effective population size N_e differs from the actual population size N. This occurs if the variance of the offspring distribution differs from its mean. We follow <cit.> and define the variance-effective population size by N_e=N/(v/) (because we assume large N, we simplified his N-12 to N). This yields the diffusion approximation(s,N,N_e) = 1-e^-2sN_e/N/1-e^-2N_esinstead of (<ref>). Numerical comparison of this fixation probability with the survival probabilitycomputed for fractional linear offspring distributions from (<ref>) shows that these values are almost identical if Ns>1 and v≥/2. For instance, if s=0.1 (i.e., =1.1), N=1000, v=/2, , 5, we obtain (s,N,N_e) ≈ 0.3297, 0.1813, 0.0392, respectively, and ^( FL)(1+s)≈ 0.3333, 0.1818, 0.0392. For the corresponding Poisson offspring distribution, ^( Poi)≈0.1761 is obtained. Essentially identical numbers are obtained if N=100. Interestingly, the value 0.1761 coincides almost exactly with the true fixation probability of ≈ 0.1761 in the Wright-Fisher model with N=1000. Indeed, it has been proved that the diffusion approximation always overestimates the true fixation probability in the Wright-Fisher model <cit.>. These considerations are an additional reason why we distinguish betweenand . §.§ Infinitely many sites Now we proceed with our infinite-sites model. Let _i denote the effect of the ith mutation (at locus i) on the trait. Then its fitness effect is _i=e^s_i. The (unconditioned) distribution of the frequency X̃^(i)_τ of the ith mutant in generation τ was defined in (<ref>) and its absolutely continuous part h̃^(i)_τ in (<ref>). Below we shall need the k-th moment of X̃^(i)_τ: [(X̃^(i)_τ)^k ] = ∫_0^1 x^k h̃^(i)_τ(x)dx. Now we are in the position to introduce general expressions for the expectations of the mean phenotype G̅(τ) = G̅(τ,f,s,N,) and of the phenotypic variance V_G(τ) = V_G(τ,f,s,N,) in any generation τ, where we recall that the monomorphic population starts to evolve at τ=0 and mutations occur when τ>0. 
The mutation effects _i are drawn independently from the distribution f. Because we assume linkage equilibrium in every generation τ, the random variables X̃^(i)_τ (i=1,2,3,…) are mutually independent.For a given sequence of mutation events and given mutant frequency x^(i)_τ at locus i in generation τ, the phenotypic mean is ∑_i _i x^(i)_τ and the phenotypic variance is ∑_i _i^2 x^(i)_τ(1-x^(i)_τ). Therefore, by taking expectations with respect to the Poisson distributed mutation events and the evolutionary trajectories of allele frequencies, we define the expected phenotypic mean G̅ and variance V_G as follows:G̅(τ) = ∑_n=1^∞_τ(n) ( ∑_i=1^n∫_0^∞_i [ X̃^(i)_τ,e^_i,] f(_i)d_i ) andV_G(τ)= ∑_n=1^∞_τ(n) ( ∑_i=1^n∫_0^∞_i^2 [ X̃^(i)_τ,e^_i,(1-X̃^(i)_τ,e^_i, ) ] f(_i)d_i ).The reader may note that the assumption of linkage equilibrium is not needed for the mean phenotype, and that assuming vanishing pairwise linkage disequilibria would be sufficient for the variance. The expected mean and variance of the trait, as defined in (<ref>) and (<ref>), respectively, can be expressed as G̅(τ) = ∫_0^∞ f() ∫_0^τ(1-([t],e^) ) _1(a(t, e^))dt dandV_G(τ)= ∫_0^∞^2 f() ∫_0^τ(1-([t],e^) ) (a(t, e^))dt d .In particular, the population-wide mutation rateaffects both quantities only as a multiplicative factor. The proof is relegated to Appendix <ref>.Figure <ref> displays evolutionary trajectories of the expected mean and variance given in Proposition <ref> and compares them with simulation results from corresponding Wright-Fisher models. On this scale of resolution the agreement is excellent, both for equal and exponentially distributed effects. After an initial phase the trajectories reach a quasi-stationary phase, in which the phenotypic variance has stabilized and the phenotypic mean increasesapproximately linearly in time. Similar to the one-locus case treated above, the expected mean fixation times given in the figure caption provide decent approximations for the average time required to enter the quasi-stationary phase. For refined comparisons with more explicit approximations for the quasi-stationary phase and for the initial phase we refer to Fig. <ref> and to Fig. <ref>,respectively. Notable in all panels of Fig. <ref> is the faster increase of the mean and the variance for smaller selection coefficients on the time scale sτ (see the discussion in the connection with Fig. <ref>). In addition, for the same selection coefficient the evolutionary response is, especially initially, much faster for exponentially distributed mutation effects (panels C and D) than for equal mutation effects (panels A and B), as shown by different steepnesses (A vs. C) and shapes (B vs. D) of the trajectories. The reason is that mutations of larger effect tend to fix earlier and more frequently than mutations of smaller effect. Therefore, large mutation effects speed up the response in the initial phase, whereas small mutation effects delay the entry into the quasi-stationary phase, with both types occurring in an exponential distribution. As shown by the numerical values of s and s in the caption of Fig. <ref>, this results in similar mean fixation times for equal and for exponentially distributed mutation effects, and thus in similar average entry times into the quasi-stationary phase. 
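For equal mutation effects, the double integral in the Proposition collapses to a single integral over time, which can be approximated by a sum over integer generations. In the sketch below (illustrative Python, reusing the helper functions from the previous sketches) the mutation-rate and effect prefactors, which are stripped from the extracted formulas above, are reinstated as theta and alpha; that they enter only as the multiplicative factors theta*alpha and theta*alpha^2 is an assumption of this reconstruction, supported by the statement that the population-wide mutation rate acts purely multiplicatively:

import numpy as np

def trait_mean_var(tau, s, alpha, N, theta, p_loss):
    """Equal-effects evaluation of the Proposition:
    mean    ~ theta*alpha   * sum_t (1 - p_loss(t)) * mu1(a(t)),
    variance~ theta*alpha^2 * sum_t (1 - p_loss(t)) * var_within(a(t))."""
    ts = np.arange(tau)
    surv = 1.0 - p_loss[ts]
    a_t = [a_param(t, s * alpha, N, p_loss) for t in ts]
    G  = theta * alpha    * np.sum(surv * np.array([mu1(a) for a in a_t]))
    VG = theta * alpha**2 * np.sum(surv * np.array([var_within(a) for a in a_t]))
    return G, VG

p_loss = p_loss_by_gen(0.05, 2_000)
for tau in (100, 500, 2_000):
    print(tau, trait_mean_var(tau, s=0.05, alpha=1.0, N=10_000, theta=1.0, p_loss=p_loss))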
Finally, with an exponential distribution the equilibrium variance is about twice as large as for equal effects (see also Corollary <ref>).Whereas the simulation results for 2 ≥ 1 are accurately described by the analytic approximations for the phenotypic mean and the phenotypic variance (Figure <ref>), for 2 ≪ 1 much more fluctuation is observed in the simulations (Fig. <ref>). The reason is that the time between successful mutation events is much larger for smaller , and this results in more pronounced stochastic effects. For smallone sweep after another occurs, whereas for largeparallel sweeps are common. This will be quantified in Section <ref> using the number of segregating sites in the population. In general, the integrals in (<ref>) and (<ref>) cannot be simplified. Of course, for equal effects , ∫_0^∞^k f() ∫_0^τ… dt d simplifies to ^k∫_0^τ… dt. An alternative approach is to assume that the ith mutation occurs at its mean waiting time i/ and the number of mutation events until time τ is (approximately) [τ]. For given mutation effects _i, we obtain instead of eqs. (<ref>) and (<ref>) the simple approximationsG̅(τ) ≈∑_i=0^[τ ]-1( 1-([i/],σ_i) ) G̅_i (i/)andV_G(τ)≈∑_i=0^[ τ ]-1( 1-([i/],σ_i) ) V_G_i(i/),respectively, where we have used the one-locus results (<ref>) and (<ref>). The derivation is given in Appendix <ref>. A meaningful application of these formulas requires [τ]>1. For values ofsmaller than 1, we use linear interpolation between =0 and =1 (Appendix <ref>). The approximations (<ref>) and (<ref>) are evaluated much faster than the integrals in Proposition <ref> and are applied in Fig. <ref>A,B and Fig. <ref> to the case of equal mutation effects. These figures demonstrate their high accuracy. In the following we derive simpler and more explicit expressions for the expected phenotypic mean and variance of the trait for different phases of adaptation. First, we characterize their long-term behavior, then we provide simpler approximations of the exact expressions in Proposition <ref>, and finally we derive very simple approximations for their initial increase.§.§ Approximations for the quasi-stationary phase Although our approach was designed to study the early phase of adaptation, we show here that it yields accurate and simple approximations for the quasi-stationary phase, when the equilibrium variance has stabilized and the response of the expected mean is approximately linear in time. This phase is reached when the flux of incoming and fixing mutations balances. This is the case after about τ_c generations (see Fig. <ref>), where τ_c = 1/ +,is the expected time by which the first mutation, which appeared at (about) τ = 1/, becomes fixed. If τ≥τ_c, then from the τ mutations, which are expected to arise until generation τ, (τ-τ_c)+1 =(τ - ) mutations are expected to have had enough time to go to fixation. Fixed mutations contributeto the G̅ and 0 to the V_G. The mutants from the remaining τ_c- 1 = mutation events either segregate in the population or have been lost before generation τ.In the results derived below, we will need the following assumptions.Let N→∞ and assume Ns^K = C^Kwhere C>0 is an arbitrary constant and the constant K satisfies K≥2. Offspring distributions of mutants of effectsatisfy v=2/ρ, where ρ is a positive constant, and have uniformly bounded third moments asdecreases to 1 <cit.>. In addition, the inequality (<ref>) holds. 
We note that the scaling Assumption <ref> for s and N reflects the fact that, compared to diffusion approximations when K=1, in our model selection is stronger than random genetic drift. In <cit.>, Assumption <ref> with K>2 is called moderately strong selection and used to prove that the fixation probability of a single mutant in Cannings models is asymptotically equivalent to 2(-1)/v (in our notation) as N→∞. The first requirement in Assumption <ref> implies that lim_s→∞(e^s)/(ρ s )=1 (see Remark <ref>). It is satisfied by fractional linear and by Poisson offspring distributions. As discussed in Sect. <ref> and shown in Appendix <ref>, equality holds in (<ref>) for fractional linear offspring distributions and (<ref>) is also fulfilled for Poisson offspring distributions (see also Fig. <ref>).We denote the values of G̅(τ) and V_G(τ) in the quasi-stationary phase by G̅^*(τ) and V_G^*, respectively.For notational simplicity, we set= (s,,N) = N(e^) .We recall from Section <ref> that the mutation distribution f has bounded moments. Here is our first main result. (1) At stationarity, the expected per-generation responseof the phenotypic mean, G̅^* = lim_τ→∞(G̅(τ+1)-G̅(τ)), is given byG̅^* = ∫_0^∞(e^s) f()d ,where G̅^* depends on , s, and f.(2) Assuming <ref> and <ref>, the expected phenotypic variance at stationarity, V_G^*=lim_τ→∞V_G(τ), has the approximationV_G^* = ∫_0^∞/s(e^)e^ E_1() f()d + O(N^-K_1/K), for every choice of K_1>1 and K>K_1+1, whence K>2 needs to hold. The proof of this Proposition is relegated to Appendix <ref>. We discuss the choice of K_1 and K in the error term in Remark <ref>. For mutation effects equal to , (<ref>) simplifies toG̅^* = (e^s) ,and (<ref>) simplifies toV_G^*(, s, N, ) ≈/s(e^)e^ E_1() . (1) The expression (<ref>) for the expected per-generation response of the mean phenotype reflects the (permanent) response due to mutations that are going to fixation. Therefore, during the quasi-stationary phase, the phenotypic mean increases linearly with time as mutants reach fixation. This response is independent of the population size (in constrast to the early response; see below). In Appendix <ref> (Remark <ref>), we provide the approximation (<ref>) for G̅(τ, f, s, N, ) when τ is sufficiently large. It is shown in Fig. <ref>.(2) Because only segregating mutations contribute to the phenotypic variance, the number of segregating sites and the phenotypic variance remain constant over time once the quasi-stationary phase has been reached. The term /s e^ E_1() in (<ref>) and (<ref>) is precisely the total variance accumulated by a mutant with effectduring its sweep to fixation. This follows immediately from (<ref>) and (<ref>). The following result provides much simpler approximations and not only recovers but refines well known results from evolutionary quantitative genetics. For n≥2 we write _n(f)=∫_0^∞^n f() d for the nth moment about zero of the mutation distribution f.We assume a Poisson offspring distribution and the assumptions in Proposition <ref>. Then the following simple approximations hold in the quasi-stationary phase.(1) The per-generation change of the mean phenotype at stationarity can be approximated as follows:G̅^*≈ 2 s ( _2(f) - 5s/6_3(f) ). (2) For mutation effects equal to , the change in the mean isG̅^* ≈ 2 s^2 (1-56s) . (3) For an exponential distribution f of mutation effects with expectation , the change in the mean isG̅^* ≈ 4 s^2 (1-52s) . (4) The stationary variance has the approximation V_G^*(f, s, N, ) ≈1/sG̅^* - /Ns . 
(5) If mutation effects are equal to , then (<ref>) and (<ref>) yieldV_G^* ≈ 2^2(1 - 5/6s - 1/2Ns) . (6) If f is exponential (with mean ), then (<ref>) and (<ref>) yieldV_G^* ≈ 4^2(1-5/2s-1/4Ns) .The proof is given in Appendix <ref>. It is based on series expansions of the expressions given in Proposition <ref>.(1) Assumption <ref> yields s=O(N^-1/K) and 1/(Ns) = O(N^-1+1/K). Thus, s is of lower (or equal) order than 1/(Ns) as N→∞. The error in (<ref>) is O(N^-K_1/K), where K_1>1 and K>K_1+1. By the latter assumption, we have N^-K_1/K< N^-1/K, but N^-K_1/K > N^-1+1/K. Thus, the term of order 1/(4Ns) in (<ref>) is not secured because the error term is of larger order. With more accurate estimates in our proof of (<ref>) a smaller error term might be obtainable.For illustration, we choose K_1=3/2 and K=3. Then we obtain s=O(N^-1/3), 1/(Ns)= O(N^-2/3), and the error term is of order O(N^-1/2).(2) Approximations analogous to (<ref>) can be obtained from (<ref>) for other offspring distributions by series expansion of the survival probability (e^s), or simply from (e^s)≈ρ s (cf. Remark <ref>). In Figure <ref> the approximations derived in Proposition <ref> and Corollary <ref> (shown as curves) are compared with results from Wright-Fisher simulations (shown as symbols). The figure shows that the approximation (<ref>) for the stationary variance (with its simplification (<ref>) for equal mutation effects) is highly accurate in a parameter range containing 5 ≤ Ns≤ 0.5N (dotted curves in A and B), although it has been derived under the assumption Ns≫1. The corresponding simplified approximations (<ref>) and (<ref>) (solid curves), derived under the additional assumption s≪1, are accurate in a range containing 5 ≤ Ns≤ 0.1N. Whereas these approximations correctly predict a decrease in V_G^*/ as N decreases, the analytically derived dependence of V_G^*/ on s exaggerates the decrease of variance observedfor small s in the Wright-Fisher simulations. Indeed, in the limit of s→0 (neutrality), the true scaled variance V_G^*/ should converge to ^2 if mutation effects are equal to , and to 2^2 if they are exponentially distributed with mean .This is indicated by the Wright-Fisher simulations and follows from the classical result <cit.> that in the absence of selection, the diploid stationary variance is to a close approximation 2∫_-∞^∞^2 f() d, where f is the density of a mutation distribution with arbitrary mean. The missing factors of 2 in comparison to the classical neutral result are due to the assumption of haploidy. The reason why for small s our approximation (<ref>) (dotted curves) decays below the neutral prediction is that for weak selection (Ns≤1), the allele-frequency dynamics are increasingly driven by random genetic drift which is not appropriately reflected by the branching-process approximation. The approximations (<ref>) and (<ref>) for ΔG̅^* (black dashed curves in panels A and B of Fig. <ref>) accurately match the response calculated from Wright-Fisher simulations (open symbols) in a range containing 2/N ≤ s≤ 0.1. If higher-order terms in s had been included, then these approximations would be accurate for s up to 0.5. (The dotted curves for the variance are accurate up to this value because they are computed from (<ref>) and (<ref>), which do not use series expansion in s.) 
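For reference, the Corollary's closed-form approximations can be evaluated directly. In this sketch the stripped mutation-rate and (mean) effect symbols are reinstated as theta and alpha, and the coefficients are read as 5/6 and 5/2 because the fractions appear garbled in the extracted text, so the formulas below are a hedged reconstruction rather than a verbatim transcription:

def stationary_response(theta, s, alpha, N, effects="equal"):
    """Quasi-stationary per-generation response of the mean and stationary variance
    (haploid model). 'equal': all effects equal alpha; 'exponential': effects
    exponentially distributed with mean alpha."""
    if effects == "equal":
        dG = 2.0 * theta * s * alpha**2 * (1.0 - 5.0 * s * alpha / 6.0)
    else:                                    # exponential distribution of effects
        dG = 4.0 * theta * s * alpha**2 * (1.0 - 5.0 * s * alpha / 2.0)
    VG = dG / s - theta * alpha / (N * s)    # V_G* ~ (Delta G*)/s - theta*alpha/(N*s)
    return dG, VG

for eff in ("equal", "exponential"):
    print(eff, stationary_response(theta=1.0, s=0.05, alpha=1.0, N=10_000, effects=eff))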
Comparison of the terms of order s^2 in the approximations (<ref>) and (<ref>) with the term /Ns in (<ref>) shows that the latter can be neglected in (<ref>) if cs^2^2≫ 1/N, where c=5/3 for equal mutation effects and c=10 for exponentially distributed effects. The other way around, random genetic drift will distort the classical relation ΔG̅^* ≈ s V_G^* between the response of the mean and the genetic variance if (approximately) s<√(10/(cN)). For equal mutation effects (c=5/3), this becomes s<2.4/√(N); for exponentially distributed effects (c=10), it becomes s<1/√(N). Both relations confirm the observations in Fig. <ref>, where =1. In quantitative genetic models, a normal distribution with mean 0 is often assumed for the mutation effects. To apply our analysis, which ignores mutations with a negative effect, to this setting, we have to adapt this distribution and use a truncated normal distribution instead. If the original normal distribution has variance v_N^2, then the truncated normal (restricted to positive effects) is defined by f() = 2/√(2π v_N^2)exp(-^2/(2v_N^2)) if ≥0 (and 0 otherwise). Also, our mutation rate will correspond to /2 in a quantitative genetic model in which mutations have (symmetric) positive and negative effects. This truncated normal distribution has mean =√(2v_N^2/π). Taking this as the parameter, we obtain _2(f) = v_N^2 = ^2π/2 and _3(f) = ^3π. Therefore, (<ref>) yields G̅^* ≈π s ^2(1-5/3 s) , instead of (<ref>), and the stationary variance is approximately V_G^*≈π^2 ( 1 - 5/3 s- 1/π N s ) . After this excursion to the long-term evolution of the trait, we first provide approximations for G̅(τ) and V_G(τ) in Proposition <ref> that can be evaluated expeditiously. Then we turn to the initial phase of adaptation.§.§ Approximations for the dynamics of G̅(τ) and V_G(τ) We posit Assumptions <ref> and <ref>. (1) The phenotypic mean has the approximation G̅(τ, f, s, N, ) = ∫_0^∞1/s(e^s) ( exp( e^-τ) E_1( e^-τ) - e^ E_1() ) f()d + O( N^-1+2/K) as N→∞ , where K>3 is required. (2) The phenotypic variance has the approximation V_G(τ, f, s, N, ) = ∫_0^∞/s(e^s) (e^ E_1() -e^-τexp ( e^-τ) E_1( e^-τ) ) f()d + O( N^-K_1/K) as N→∞ , where =N(e^), K_1>1 and K>K_1+1 (whence K>2 is necessary). The proof is provided in Appendix <ref>. For many offspring distributions, including the Poisson, an insightful simplification is obtained by applying (e^)≈ρ for an appropriate constant ρ (cf. Remark <ref>). We note that the error term is not satisfactory for the very early phase of evolution when τ is still very small. For this very early phase, we derive simple explicit approximations with time-dependent error terms in Proposition <ref>. For large τ, V_G(τ, f, s, N, ) converges to V_G^* because the time-dependent term in (<ref>) vanishes. Therefore, this approximation will be very accurate for large τ. Analogously, G̅(τ, f, s, N, ) provides an accurate approximation for large τ as shown in Remark <ref> and illustrated in Figure <ref>.C. For equal mutation effects , the approximations (<ref>) and (<ref>) simplify to G̅(τ, f, s, N, ) ≈/s(e^s) ( exp( e^-τ) E_1( e^-τ) - e^ E_1() ) and V_G(τ, f, s, N, ) ≈/s(e^s) (e^ E_1() -e^-τexp ( e^-τ) E_1( e^-τ) ), respectively. Again, it is insightful to keep in mind the approximation (e^)≈ρ. §.§ Approximations for the initial phase For the very early phase of adaptation simple explicit approximations can be derived for G̅(τ) and V_G(τ). We require Assumptions <ref> and <ref>.
In addition, we assume 0≤τ≤1/(4 s) and that the mutation distribution f is exponential with mean .(1) The phenotypic mean and the phenotypic variance have the approximationsG̅(τ) = τ/N[/1- s τ + O(N^-1+1/K)] ,andV_G(τ) =2^2τ/N[1-sτ/2/(1-sτ)^2 + O(N^-1+1/K)] ,respectively (and the error terms are independent of τ). (2) For sufficiently small s the leading order terms have the series expansionsG̅(τ) ≈τ/N(1+sτ+(sτ)^2+(sτ)^3 )andV_G(τ) ≈2^2τ/N(1 + 32sτ+ 2(sτ)^2 + 52 (sτ)^3 ) . The proof is given in Appendix <ref>. If the mutation effects are equal to , then the approximationsG̅(τ) ≈1/ V_G(τ) ≈/N s(e^sτ -1 )are obtained. The errors are of order τ O((Ns)^-1)=τ O(N^-1+1/K). The validity of this approximation, in particular of the error term, requires only e^2sτ=O(1), thus effectively sτ=O(1) (see Appendix <ref>). In Figure <ref> the approximations for the expected phenotypic mean and variance obtained in Proposition <ref> and Proposition <ref> are compared with Wright-Fisher simulations for various selection coefficients. For equal mutation effects (panels A and B) the approximations are accurate in a much wider range of values sτ than for exponentially distributed effects (panels C and D). In both cases, the more complicated approximations, (<ref>) and (<ref>) (solid curves in A and B) and (<ref>) and (<ref>) (solid curves in C and D), are accurate for a much longer time span (in fact for τ→∞ as discussed above) than the corresponding simple approximations, (<ref>) (dashed curves in A, B) and (<ref>), (<ref>) (dashed curves in C, D).For an exponential mutation distribution, the simple approximations are accurate if, approximately, τ<1/(2s). For weak selection, this can still be quite a long time span. The approximations (<ref>) and (<ref>) are not shown because on this scale of resolution they are almost indistinguishable from the corresponding simple series expansions if τ⪅ 1/(2s), but then diverge as τ s→1. For large sτ, the approximations in Proposition <ref> and Remark <ref> for G̅(τ) and V_G(τ) are almost identical to the exact expressions in Proposition <ref> (results not shown). Visible differences on the scale of resolution in our figures occur only for small or moderately large sτ if s=0.001 and N=10^4, i.e., Ns=10 (see Figure <ref>). The approximations (<ref>) and (<ref>) inform us that initially, as long as sτ≪1, G̅(τ) and V_G(τ) increase nearly linearly. Comparison with (<ref>) shows that for an exponential mutation distribution with meanthis early increase is considerably larger than for mutation effects equal to . These approximations also show that initially the relation G̅≈ s V_G, typically expected if the trait variance is high, fails; cf. (<ref>). We note that (<ref>) and (<ref>) are always overestimates because their error terms are negative (see Appendix <ref>). § SWEEP-LIKE VS. POLYGENIC SHIFT-LIKE PATTERNS An important role in examining whether sweep-like patterns or polygenic shift-like patterns characterize the early phase of adaptation is played by the number of segregating sites. If sweeps occur successively, at most one segregating site is expected for most of the time. As the number of segregating sites increases, parallel sweeps occur and the sweep pattern will be transformed to a shift-like pattern. If adaptation is dominated by subtle allele frequency shifts, the number of segregating sites is expected to be large. Of course, there will be intermediate patterns, where no clear distinction between a sweep and a shift pattern can be made. 
Roughly, we call a pattern of adaptation `sweep-like' if a mutant has already risen to high frequency, so that its loss is very unlikely, when the next mutant starts its rise to (likely) fixation. We call a pattern shift-like if several or many mutants increase in frequency simultaneously (see Fig. <ref>). We emphasize that we focus on the patterns occurring during the early phases of adaptation because by our assumption that mutations are beneficial and random genetic drift is weak (s>0 and Ns≫1), all mutants that become `established' will sweep to fixation, independently of other parameters such as . In addition, although exponential selection may provide a good approximation for selection occurring in a moving optimum model or by continued truncation selection of constant intensity, it can approximate selection caused by a sudden shift in the phenotypic optimum only during the early phase when the population mean is still sufficiently far away from the new optimum (see Discussion). §.§ Number of segregating sites To characterize the patterns of adaptation, an approximation for the expected number of segregating sites, [S], is needed. This expectation is taken not only with respect to the distribution of the number of segregating sites, but also with respect to the mutation distribution f, and it assumes a Poisson offspring distribution. For a precise definition see eq. (<ref>) in Appendix <ref>. We assume the infinite sites model, as used in Sects. <ref> and onwards. However, we complement it by diffusion approximations for the fixation and the loss probabilities as well as for the corresponding expected times to fixation and loss (see Appendix <ref>). To derive a simple approximation S̅ for [S], we distinguish between mutations that eventually become fixed, which occurs with probability (,N) (see (<ref>) for ), and those that get lost, which occurs with probability 1-(,N). In addition, we assume that mutations that are destined for fixation and occurred t ≥(,N) generations in the past are already fixed. Thus, in a given generation τ, mutations destined for fixation contribute to S̅ if and only if they occurred less than or exactly min{(,N)-1,τ} generations in the past (including 0 generations in the past, i.e., now, in generation τ). Therefore, the number of generations in which mutants occur that contribute to the current variation is min{(,N),τ+1}. This is consistent with the fact that mutations can already occur in generation τ=0. Analogously, mutations destined for loss contribute to S̅ at time τ only if their time t of occurrence (prior to τ) satisfies 0 ≤ t≤min{(,N)-1,τ}, where (s,N) denotes the expected time to loss of a single mutant with selective advantage s in a population of size N; see (<ref>). In summary, we define the approximation S̅ as follows: S̅ (τ,f,s,N,) = ∫_0^∞f() (,N) min{(,N),τ+1 }d + ∫_0^∞f() (1-(,N)) min{(,N),τ+1 }d . For the quasi-stationary as well as for the initial phase, we can specify relatively simple explicit approximations by employing diffusion approximations for , , and .
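To illustrate how this can be evaluated in practice, the following minimal Python sketch (purely illustrative, independent of the Mathematica and C++ code accompanying the paper) computes S̅(τ) for exponentially distributed effects by numerical integration. It assumes that the population-wide mutation rate Θ=NU enters as a multiplicative factor, that a mutant of effect a has selective advantage approximately s·a, and it uses the diffusion approximations for the fixation probability and for the mean fixation and loss times collected in Appendix <ref>; γ denotes the Euler–Mascheroni constant, and all parameter values are illustrative.

import numpy as np
from scipy.integrate import quad

GAMMA = 0.5772156649015329          # Euler-Mascheroni constant

def p_fix(sig, N):
    # diffusion approximation of the fixation probability of a single mutant
    return (1.0 - np.exp(-2.0 * sig)) / (1.0 - np.exp(-2.0 * N * sig))

def t_fix(sig, N):
    # expected time to fixation: the (HP) form for 2*N*sig >= 3, linearized below
    if 2.0 * N * sig >= 3.0:
        return (2.0 / sig) * (np.log(2.0 * N * sig) + GAMMA - 1.0 / (2.0 * N * sig))
    return 2.0 * N * (1.0 - 2.0 * N * sig * (11.0 - 6.0 * GAMMA - 6.0 * np.log(3.0)) / 27.0)

def t_loss(sig, N):
    # expected time to loss: explicit form for N*sig >= 2, linearized below
    if N * sig >= 2.0:
        return (-2.0 * (1.0 + sig) * np.log(2.0 * sig)
                + 2.0 * (1.0 - GAMMA) + sig * (3.0 - 2.0 * GAMMA) + 1.0 / (N * sig))
    return 2.0 * np.log(N) - sig * N * (np.log(4.0) + GAMMA - 1.25)

def S_bar(tau, s, N, theta, a_bar):
    # expected number of segregating sites; f exponential with mean a_bar,
    # an effect-a mutant is assigned selective advantage s*a (assumption),
    # and theta = N*U multiplies the integrals (assumption)
    f = lambda a: np.exp(-a / a_bar) / a_bar
    def integrand(a):
        sig = s * a
        pf = p_fix(sig, N)
        return f(a) * (pf * min(t_fix(sig, N), tau + 1.0)
                       + (1.0 - pf) * min(t_loss(sig, N), tau + 1.0))
    val, _ = quad(integrand, 1e-8, 30.0 * a_bar, limit=200)
    return theta * val

if __name__ == "__main__":
    N, s, theta, a_bar = 10_000, 0.01, 1.0, 1.0     # illustrative values only
    for tau in (10, 100, 1_000, 10_000):
        print(f"tau = {tau:6d}:  S_bar ~ {S_bar(tau, s, N, theta, a_bar):7.2f}")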
We recall from (<ref>) that (s,,N) is the expected time to fixation of a single mutant with effect on the trait drawn from the (exponential) distribution f; see also Appendix <ref>. In the limit of large τ, i.e., τ≫, we obtain for S̅^*(f,s,N,) = lim_τ→∞S̅(τ,f,s,N,) the following simplifications: S̅^* (f,s,N,)≈∫_0^∞f() (,N) (,N) d + ∫_0^∞f() (1-(,N)) (,N) d ≈ 2[(1- s) (ln(N) + ln (2N s)) + 1 + - ( + 1/2) s ] if =, 2[(1- s) (ln(N) + ln (2N s)) + 1 - 3s/2 ] if ∼[1/ ]. To evaluate the integrals, we used ≈ from (<ref>), (s,N)≈ 1-e^-2s from (<ref>), (s,N)≈ (2/s)(ln(2Ns)+) from (<ref>), and (s,N)≈ -2(1+s)ln(2s)+2(1-)+s(3-2) from (<ref>) and (<ref>). Finally, a series expansion for small s was performed. The approximate formulas in (<ref>) are accurate if Ns>2 and s<0.1; see Figs. <ref>B and D. We observe that in both cases of (<ref>) the leading-order term is approximately 2ln(N), which is close to the classical neutral result [S] = ∑_i=1^N-1 1/i ≈(ln(N)+), in which the factor 2 is contained in (and N is the sample size) <cit.>. <cit.> investigated the site frequency spectrum assuming periodic selection coefficients. In the absence of dominance, their approximation (15b) for strong directional selection (i.e., N s̅≫1, s̅ the average selection coefficient) becomes independent of s̅ and yields the neutral result, thus essentially our leading order term. We note that for equal mutation effects the stationary value S̅^* of [S] is slightly higher than that for exponentially distributed effects. In analogy to (s,,N), in Appendix <ref> we introduce the expected time to loss, , of a single mutant with effect on the trait drawn from the (exponential) distribution f. If τ is small, but not tiny, i.e., < τ≪, then S̅ (τ,f,s,N,)≈ (τ+1) ∫_0^∞f() (,N) d + ∫_0^∞f() (1-(,N)) (,N) d ≈ τ (1-e^-2s) + 2[-(1-s)(ln (2s)+) + 1 + s/2] if =, τ 2s/(1+2s) + 2[-(1-s)ln (2s) + 1 + 3s/2] if ∼[1/ ], where the same approximations as for (<ref>) are used. If τ <, then (<ref>) yields S̅ (τ,f,s,N,) ≈ (τ +1), which is an overestimate except when τ=0 (cf. Fig. <ref>). As Figure <ref> demonstrates, (<ref>) yields a quite accurate approximation for the expected number of segregating sites, both as a function of time (panels A and C), and as a function of s at quasi-stationarity (panels B and D). The piecewise linear shape of the curves in A results from the simplifying assumption of taking the minima of τ and or , i.e., the kinks occur at and at . In C, the curves are smooth because effects are drawn from the distribution f. Panels A and C in Figure <ref> and Figure <ref> show that as a function of sτ the rate of increase of the number of segregating sites is nearly independent of s, except for very early generations (e.g. sτ<0.1). This is also consistent with (<ref>). Therefore, if measured in generations, smaller values of s lead to a slower increase of the number of segregating sites, as is supported by the rough approximations in (<ref>). The intuitive reason is that invasion becomes less likely. Panels B and D (but also A and C) show that the stationary value S̅^* of the number of segregating sites is maximized at an intermediate selection strength. For small values of s, S̅^* is increasing in s because larger s increases the probability of a sweep. For sufficiently large values of s, S̅^* drops again because selected mutants spend less time segregating.
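The intermediate maximum of S̅^* is also easy to locate numerically. The short sketch below (same assumptions and diffusion approximations as in the previous sketch; parameter values again purely illustrative) scans the stationary value over a grid of selection coefficients and reports where it peaks, together with the neutral reference value.

import numpy as np
from scipy.integrate import quad

GAMMA = 0.5772156649015329

def p_fix(sig, N):
    return (1.0 - np.exp(-2.0 * sig)) / (1.0 - np.exp(-2.0 * N * sig))

def t_fix(sig, N):
    if 2.0 * N * sig >= 3.0:
        return (2.0 / sig) * (np.log(2.0 * N * sig) + GAMMA - 1.0 / (2.0 * N * sig))
    return 2.0 * N * (1.0 - 2.0 * N * sig * (11.0 - 6.0 * GAMMA - 6.0 * np.log(3.0)) / 27.0)

def t_loss(sig, N):
    if N * sig >= 2.0:
        return (-2.0 * (1.0 + sig) * np.log(2.0 * sig)
                + 2.0 * (1.0 - GAMMA) + sig * (3.0 - 2.0 * GAMMA) + 1.0 / (N * sig))
    return 2.0 * np.log(N) - sig * N * (np.log(4.0) + GAMMA - 1.25)

def S_star(s, N, theta, a_bar):
    # stationary value: every mutant contributes for its full expected segregation time
    f = lambda a: np.exp(-a / a_bar) / a_bar
    g = lambda a: f(a) * (p_fix(s * a, N) * t_fix(s * a, N)
                          + (1.0 - p_fix(s * a, N)) * t_loss(s * a, N))
    return theta * quad(g, 1e-8, 30.0 * a_bar, limit=200)[0]

if __name__ == "__main__":
    N, theta, a_bar = 10_000, 1.0, 1.0                    # illustrative values only
    grid = np.logspace(-4, np.log10(0.5), 200)
    vals = [S_star(s, N, theta, a_bar) for s in grid]
    i = int(np.argmax(vals))
    print(f"maximum S* ~ {vals[i]:.2f} attained near s ~ {grid[i]:.4f}")
    print(f"neutral reference 2*theta*ln(N) = {2.0 * theta * np.log(N):.2f}")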
Based on (<ref>), it is straightforward to show that, as a function of s, the maximum value of S̅^* is achieved close tos_max = 1/[2e^bN^2]≈1/2ln(N) ,where b=32+2≈2.65 for equal effects and b=52 for exponentially distributed effects. In each case, the maximum values of S̅^* are very close to 2(2ln(N) - lnln(N)). Thus, the maximum possible value of S̅^* under directional selection is about twice that under neutrality.This shows that once a quasi-stationary response to directional selection has been reached, the number of segregating sites depends only weakly on N, essentially logarithmically, and exceeds the neutral value by at most a factor of two, whatever the strength of selection. In summary, by far the most important factor determining the number of segregating sites at quasi-stationarity is .Our branching process model leads to an alternative approximation, S̃, for the expected number of segregating sites. If all mutation effects are equal to , the approximationS̃ (τ , ,s ,N,) ≈∑_j=0^[τ̃](1-(j,e^) )is derived in Appendix <ref>. Here, τ̃= min{(,N),τ}.For values sτ<2, this approximation is considerably more accurate than the approximation S̅ given above. This is illustrated by Fig. <ref>.A, which is based on a zoomed-in versions of panel A of Fig. <ref>. A disadvantage of the approximation (<ref>) is that the terms (j,e^) have to be computed recursively, whereas the above approximations S̅ are based on analytically explicit expressions.For mutation effects drawn from a distribution f, the following approximation of [S] is derived in Appendix <ref> under the assumption τ≪(,N):S̃(τ ,f ,s, ) ≈∑_j=0^τ(1-∫_0^∞f()(j,e^ )d).Figure <ref>.B demonstrates that, in contrast to S̅ which is based on diffusion approximations, S̃ is highly accurate if sτ<2. However, it is computationally more expensive than S̅. Equations (<ref>) and (<ref>) show that the expected number of segregating sites is proportional to . We already know that for large τ, i.e., in the quasi-stationary phase, the number of segregating sites depends at most logarithmically on N. The approximation (<ref>) of S̃ is independent of N, and S̃ in (<ref>) depends on N only through τ̃, which can be assumed to equal τ if τ≪. According to the approximation (<ref>) of , we have > 2.77/(s) if Ns≥2, because 2ln 4≈ 2.77. Therefore, we can expect independence of S̃(τ) from N if τ≤2.7/(s). Comparison of the dashed blue curves with the solid blue curves in panels A and C of Fig. <ref> shows that the expected number of segregating sites is essentially independent of N for a much longer time.Also the dependence of S̃(τ) on s is much weaker than that on .§.§ Number of segregating sites as an indicator for sweep-like vs. shift-like patterns Motivated by the work of <cit.>, we explored the polygenic pattern of adaptation at the loci contributing to the trait at the generation T when the mean phenotype G̅ in a population first reaches or exceeds a given value, which for Figure <ref> we chose to be 1 (in units of mutation effects ). For various parameter combinations and for equal and exponentially distributed mutation effects, Figure <ref> displays the frequency distributions of the first four successful mutants at this generation T. By successful mutant we mean that at the stopping time T, we condition on the sites at which mutants are present (segregating or fixed), and we number them from 1 to 4, depending on the order in which these mutations occurred. Thus, the mutation at site 1 is the oldest in the population. 
The sites segregating the first four successful mutations are only a small subset of sites at which mutations have occurred because their majority has been lost. The histograms are obtained from Wright-Fisher simulations. We computed the average stopping time, T̅, the average number of segregating sites at time T, S̅_T, and the average of mean phenotypic values in generation T, G̅_T. These values are given in each panel (with subscripts omitted for visibility reasons). Because time is discrete, G̅_T ≥ 1. For large selection coefficients (and large mutation effects) the per-generation response tends to be large once several mutations are segregating. Therefore, G̅_T can be noticeably larger than 1. Indeed, the distribution of G̅ (under the stopping condition G̅ =1) has a sharp lower bound at 1 and may have substantial skew to the right, especially for exponentially distributed effects (results not shown).Figure <ref> shows clear sweep patterns in the panels with 2=0.1 (where S̅_T <2.5) and distinctive small-shift patterns at many sites if 2=10 (where S̅_T >35). The patterns occurring for 2=1 suggest a series of partially overlapping sweeps. The main effect of varying the selection strength s is on the time T to reach (or exceed) G̅=1. Successive sweeps are indeed expected if 2=0.1 because the average waiting time between successful mutation events is approximately 1/(2s). If s=0.001, this waiting time is 10 000 and exceeds the expected fixation time of roughly 7 000 generations. If s=0.1, the waiting time is 100 generations and falls short of the expected fixation time, which is about 156 generations by (<ref>). In addition, we observe that for smallthe first successful mutant corresponds mostly to the site with the highest mutant frequency at the time of stopping, whereas for largethe distributions of the first four successful mutations are hardly distinguishable.In the panels with 2=0.1, the expected stopping times T̅ exceed their corresponding expected fixation times (given in the caption of Fig. <ref>) considerably. Hence, most of these populations have already reached the quasi-stationary phase. Somewhat unexpectedly, the observed average numbers of segregating sites (from top to bottom: 2.2, 2.1, 1.9) exceed their respective (approximate) stationary values S̅^* (1.37, 1.65, 1.60) considerably. This observation seems to be at variance with the results in panels B and D of Figure <ref>, which show that S̅^* is a very accurate approximation of the numerically determined [S]. The reason for this discrepancy is the different stopping condition, more precisely the fact that the distribution of stopping times T is very wide and strongly skewed to the right ifis small (Fig. <ref>). A slowly evolving population may carry many more mutations once they reach G̅=1 than the average or a fast evolving population because it has to wait long for the first successful mutant(s). As a result, the stopping condition G̅=1 inflates the number of segregating sites compared to a process that is stopped at a given generation. For 2≥1 our branching-process approximations for S̃ in (<ref>) and (<ref>) predict the observed average S̅_T very well if S̃ is evaluated at T̅. Indeed, very accurate approximations of the observed T̅ are obtained for equal effects by solving the equation G̅(t)=1 using (<ref>). The reason is that the distribution of T becomes increasingly symmetric and less variable asincreases. It is nearly perfectly symmetric if 2=10 (Fig. <ref>). 
For exponentially distributed effects, when the stopping time distribution is more skewed, an accurate approximation of T̅ is obtained from (<ref>) by solving G̅(t)=1 only if 2≥10 (results not shown). Accordingly, for such large values of , the Wright-Fisher simulations stopped at the previously determined value T̅ yield distributions very similar to those obtained by stopping when G̅=1 has been reached (results not shown).Figure <ref> confirms the above discussed finding that for an exponential distribution of mutation effects, the response of the mean is faster than for equal effects also under the current stopping condition. As a consequence, the number of segregating sites when G̅=1 is reached is smaller, and considerably so if G̅=1 is reached well before the quasi-stationary phase is entered, as is the case of 2≥1. Unsurprisingly, the distribution of stopping times T is much wider than for equal effects (Fig. <ref>) because mutations of larger (beneficial) effect have a higher fixation probability and lead to faster sweeps, especially for strong selection.Mutation distributions with a heavier tail than the exponential may intensify this effect. § DISCUSSION The goal of our study has been to provide essential theory for answering the following questions: Which are the main evolutionary and genetic factors determining the rate and pattern of polygenic adaptation? How do these factors determine the response to selection, both on the phenotypic and on the genetic level? When is the selective response mainly due to sweeps at few loci and when is it due to small frequency shifts at many loci? For this purpose, we developed and analysed a simple population-genetic model, starting from a branching process approximation to describe the spreading of a beneficial mutant in a Wright-Fisher population of size N. Our basic result is an accurate, analytically tractable, and computationally efficient approximation for the time dependence of the mutant-frequency distribution. Extension to an infinite sites model and the assumption of an additive quantitative trait under exponential directional selection enabled us to derive explicit expressions for the evolution of the expected mean phenotype and the expected phenotypic variance, as well as for the expected number of segregating sites underlying the adaptive response. As demonstrated by comparison with extensive simulations, they provide accurate approximations for the corresponding dynamics resulting from a Wright-Fisher model. Our theory focuses on the response from new mutations and assumes that selection is stronger than random genetic drift (Ns>1). In the sequel, and structured according to the main sections (3, 4, and 5) of the paper, we discuss the most relevant findings and their implications for answering the last question posed above. Finally, we provide brief conclusions and discuss the limitations caused by our model assumptions (Sect. <ref>).§.§ Branching process approximation for the time dependence of the mutant frequency distribution The spread of a new mutation in a population as a function of time can be described by the transition matrix of an appropriate Wright-Fisher model or the transition density of the approximating diffusion. Although diffusion theory leads to quite simple expressions for many important quantities, explicit and analytically tractable time-dependent results, tracing for instance the distribution of allele frequencies, seem to be out of reach <cit.>. 
We combined two methods to derive an explicit and accurate time-dependent approximation for the mutant's density in any generation n: a branching process approach capturing the stochastic effects and the deterministic logistic growth model. We developed this approach quite generally with the help of a slightly supercritical Galton-Watson process that allows for quite general offspring distributions. The approximation g_a(x) in (<ref>) for the mutant frequency density depends only on the compound parameter a, which in turn depends on the generation number n, the mean σ=e^s of the offspring distribution, the extinction probability (n,σ) until generation n in the branching process, and the population size N. Its shape is displayed in Figure <ref> and compared with Wright-Fisher simulations in Figures <ref> and <ref>. It is highly accurate in the initial phase, and requires only Ns≥1. If Ns≥10 then it is accurate until about 1/sln(2Ns), which is approximately one half of the expected fixation time in the Wright-Fisher model. If Ns≥100, then it remains accurate for even longer and even if x is relatively close to 1. Because, in contrast to previous related investigations <cit.>, we condition on non-extinction until generation n, our approximation is more accurate than those obtained there, especially in the early initial phase (cf. Figure <ref>, which shows the density ψ_n in the branching process, from which g_a is obtained by a propertransformation). However, by relying on a branching process to capture the inherent stochasticity, our model underestimates the effect of genetic drift near fixation, in particular if Ns is not much larger than 1. Remarkably, the approximations for the mean and variance of this distribution are accurate in a much wider parameter range. This may be explained by the fact that mutants which are rare or near fixation hardly contribute to the variance and to the response of the mean. These approximations clearly display the well known fact that, conditioned on non-extinction (in our case, until the time of observation), the mutant frequency grows faster than predicted by the corresponding deterministic model, particularly when selection is weak (see Fig. <ref> and the discussion in Sect. <ref>). Finally, these results are extended to the infinite sites model, in which the number of new (beneficial) mutants emerging in the population per generation is Poisson distributed with mean =NU, and every mutation occurs at a new site. Here, U is the expected number of beneficial mutations per individual per generation affecting the trait. Sites are assumed to be unlinked (and in linkage equilibrium). The monomorphic initial population started to evolve at τ=0. Equation (<ref>) provides the probability density h^(i)(x)=h_τ,σ_i,N,^(i)(x) of the ith mutation (which occurred at the random time τ_i at the ith site), conditioned on non-extinction until time τ. Mutations of unequal effects are admitted, as signified by _i. The integration is necessary because the waiting time distribution for the ith mutation event (an Erlang distribution) needs to be taken into account. Its cousin, the unconditioned distribution X̃_τ^(i) with absolutely continuous part h̃_τ^(i)(x) is defined in (<ref>) and (<ref>). 
It forms the basis for the applications to the quantitative genetic model.§.§ Evolution of the mean phenotype and the genetic variance The approximations (<ref>) and (<ref>) for the time dependence of the density g_a(τ) and of the distribution of X̃_τ^(i) allow us to obtain highly accurate approximations for the evolution of the expected mean (G̅) and variance (V_G) of an additive trait subject to exponential directional selection in a finite population of size N. We assume that potentially infinitely many, stochastically independent, loci (sites) can contribute additively to the trait. This assumption of global linkage equilibrium will be a good approximation if sites are physically unlinked because multiplicative fitnesses, as caused by exponential directional selection, do not generate linkage disequilibrium <cit.>. The expressions (<ref>) for the expected mean G̅(τ) and (<ref>) for the expected variance V_G(τ) of the trait are exact within our model based on the quasi-deterministic branching process approach, i.e., assuming that the density g_a in (<ref>) is exact. Comparison with the results from Wright-Fisher simulations shows that they are astonishingly accurate for the whole evolutionary process, i.e., from the initial phase until the quasi-stationary response has been achieved (Figure <ref>). These expressions require integration with respect to two parameters: one over the time span until the time τ of interest, the other with respect to the distribution f() of mutation effects. The former is necessary because the extinction probability is time dependent, at least in the initial phase. The latter integration is unavoidable unless all mutation effects are equal, when accurate approximations involve only the computation of the finite sums given in Remark <ref>. We note that these approximations hold for very general offspring distributions, so that they can cover cases where the effective population size N_e differs from the actual population size N. The offspring distribution enters these equations through the extinction probabilities . The general equations (<ref>) and (<ref>) do not provide much direct insight into the evolution of G̅ and V_G, except that the population-wide mutation rateenters exclusively as a multiplicative factor. Their numerical evaluation is straightforward but not very fast. However, they provide the basis for deriving very simple and highly accurate analytical approximations for the initial phase and for the quasi-stationary phase, when emergence and fixation of new mutations balance and the genetic variance has reached a stationary value. §.§.§ The quasi-stationary phase Figure <ref> documents that the quasi-stationary phase is reached once time exceeds the expected fixation time , which is defined in (<ref>) and has the approximation (<ref>). During the quasi-stationary phase the expected per-generation change in the mean, G̅^*, and the expected variance, V_G^*, are constant. They are given by (<ref>) and (<ref>), respectively, and invoke the generally unavoidable integration over the distribution of mutation effects. For equal effects, the much simpler expressions (<ref>) and (<ref>) are obtained. 
By recalling =NU, equation (<ref>) reveals the following general and fundamental relation between the change of the mean and the variance:G̅^* ≈ s V_G^* + U .If U≈0, this is a special case of , which was derived under the assumption of a normal distribution of breeding values.An approximation of G̅(τ), applicable already during the approach to the quasi-stationary phase, is provided in (<ref>), and shown as dashed curves in Figs. <ref> A,C.For a Poisson offspring distribution, an exponential distribution f of mutation effects, and weak selection, the response of the mean has the simple approximation G̅^* ≈ 4 s^2(1-52s). If the mutation effects are equal, the response is about one half of this value (Corollary <ref>). For large population sizes, N≥10^4, the simple approximations in Corollary <ref>, both for G̅^* and V_G^*, are highly accurate for selection coefficients s ranging over more than three orders of magnitude (Figure <ref>).The relation (<ref>) is violated when the population size is so small that Ns < 2, as can be observed in Fig. <ref> by noting the deviation of the open symbols from the black dashed curves. The reason is that for such small values of Ns random genetic drift becomes an important force and the fixation probability of very slightly beneficial mutations is higher than predicted by the branching process approximation, i.e., in this case the fixation probability under neutrality, 1/N, exceeds Haldane's approximation, 2s, for an advantageous mutation.Formally, the per-generation change of the mean phenotype at stationarity in (<ref>) is equivalent toapproximation for the asymptotic response of a quantitative trait coming from fixation of new mutations. In our terminology and for a haploid population, his general expression [3] becomesG̅ =(N_e/N) ∫_-∞^∞(s) f()d, where his mutation distribution is not necessarily confined to positive values. Here,is the diffusion approximation (<ref>) for the fixation probability in the Wright-Fisher model and thus nearly identical tobecause we assume Ns>1. Thus, we recover classical results, but from a different perspective. It seems somewhat surprising that the branching process approach, which is best suited to study the initial fate of a mutation subject to selection, yields highly accurate approximations for the long-term response of a quantitative trait. For a more general discussion of the long-term response to various forms of directional selection, see <cit.>. §.§.§ The transient and the initial phase In Proposition <ref>, we derived the approximations (<ref>) for G̅(τ) and (<ref>) for V_G(τ). They can be evaluated much more quickly than (<ref>) and (<ref>), and they are highly accurate for all times, except for a very early phase (e.g., Fig. <ref>). They perform particularly well when τ > 1/ + T_, where T_ is the time needed such that the probability of non-extinction by generation T_ has declined to (1+) (eq. (<ref>) and Section <ref>). Above this lower bound the (iterative) computation of the time-dependent extinction probabilities (t) in the general equations (<ref>) and (<ref>), hence the integration over the time, is not needed. Thus, for equal mutation effects, no integration at all is required to evaluate G̅(τ) and V_G(τ), which are then given by (<ref>) and (<ref>), respectively.For the very early phase when τ⪅ 1/(4s), the utterly simple approximations in Proposition <ref> are obtained for exponentially distributed mutation effects. 
The series expansions in (<ref>) and (<ref>) are accurate for a somewhat longer initial phase, even beyond τ≈ 1/(s) provided s is sufficiently small (Figure <ref>). For mutation effects equal to , the mean phenotype grows approximately exponentially, i.e., G̅(τ) ≈/Ns(e^sτ-1) (Remark <ref>). This yields an accurate approximation if τ⪅ 1/(s) for Ns=10 and for a longer time span as Ns increases while s decreases (Figure <ref>). During this phase, variance and mean are related through V_G≈G̅ and the relation G̅≈ s V_G is violated. For an exponential distribution of mutation effects, the initial increase of the mean and the variance is considerably faster than for equal mutation effects (=), indeed we have G̅(τ)≈/Nτ(1 + sτ + (sτ)^2) (Proposition <ref>). This yields an increase of the mean that is slightly faster than that for equal effects with twice the selection intensity, 2s. The reason is that with an exponential distribution of effects, mutations of large effect have a much higher probability of becoming established (and eventually fixed) than mutations of small effect. In fact, the average effect of mutations drawn from an exponential distribution with mean , conditioned on eventual fixation, is approximately 2. §.§.§ Comparison of the response from new mutations with that predicted by the breeder's equation We developed our model for an initially monomorphic population and adaptation from de novo mutations. Here, we compare our results on the response of the mean phenotype with corresponding results derived from classical quantitative genetic models (thus, essentially statistical models) and provide a rough guide for which parameter regions they conform.In the classical additive model of quantitative genetics, environmental effects are not ignored, and it is based on a given phenotypic variance. More precisely, the trait value is P=G+E, where G is the (additive) genetic contribution and E the environmental contribution. The simplest version assumes that environmental effects have a distribution with mean zero and variance _E^2 and are independent of the genetic effects, so that P̅=G̅ and _P^2=_G^2+_E^2. The classical breeder's equation, which reads P̅ = h^2 _s P̅ = i h^2 _P ,where h^2 =_G^2/_P^2 is the (narrow sense) heritability of the trait (and _G^2 the additive genetic variance, which corresponds to our V_G), _s P̅ is the selection differential, i.e., the difference in mean phenotype before and after selection, and i=_s P̅/_P is the so-called selection intensity, or the standardized selection differential <cit.>. Under exponential directional selection one obtains i=s_P <cit.>, so that (<ref>) becomesP̅ = h^2 s _P^2 . Stationary phase Our equation (<ref>) informs us that, once stationarity is achieved, the response from new mutations is approximatelyG̅^* ≈ρ s _2(f) = ρ s ^2(1+ CV_^2) ,where we have used (e^s)≈ρ s (Remark <ref>). If the offspring distribution is Poisson, then ρ=2 and N_e≈ N. Here, CV_^2 is the squared coefficient of variation of the distribution of mutation effects. For equal effects CV_^2=0, for exponentially distributed effects CV_^2=1, for Gamma-distributed effects with density proportional to ^β-1e^-, CV_^2=1/, and for effects drawn from the truncated normal distribution in Remark <ref>, CV_^2=12π-1. As already noted above, an equation equivalent to (<ref>) was derived and discussed by <cit.>. 
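The role of the coefficient of variation can be made explicit with a few lines of code. The sketch below uses illustrative values, with ρ=2 as for a Poisson offspring distribution and with theta and a_bar denoting the population-scaled mutation rate and the mean beneficial effect; the Gamma shape parameter is an arbitrary example.

import math

rho, s, theta, a_bar = 2.0, 0.01, 1.0, 0.05      # illustrative values; rho = 2 for Poisson offspring

cv2 = {
    "equal effects":      0.0,
    "exponential":        1.0,
    "Gamma, beta = 1/4":  4.0,                   # CV^2 = 1/beta
    "truncated normal":   math.pi / 2.0 - 1.0,   # CV^2 = pi/2 - 1
}

for name, c in cv2.items():
    dG = rho * s * theta * a_bar ** 2 * (1.0 + c)    # dG* ~ rho*s*theta*a_bar^2*(1+CV^2)
    print(f"{name:18s}  CV^2 = {c:5.3f}   dG* per generation ~ {dG:.2e}")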
Equation (<ref>) demonstrates that the response of the mean depends not only on the selection intensity, the mutation rate and the average effect of beneficial mutations, but also on the shape of the mutation distribution, in particular its coefficient of variation. Therefore, distributions with a large coefficient of variation of beneficial effects, such as Gamma-distributions with small , lead to a faster response. This effect is already clearly visible by comparing panels A and C in Fig. <ref>, where in A we have CV_^2=0 and in B, CV_^2=1; see also panel A in Fig. <ref>, where CV_^2=12π-1≈0.507.Assuming a Poisson offspring distribution and exponentially distributed mutation effects with mean , the selection response predicted by (<ref>) equals that predicted by (<ref>) if 4^2 = h^2 _P^2 ,where, in the stationary phase and by (<ref>), _P^2 = V_G^*+_E^2 ≈ 4^2 +_E^2. Inferences from empirical estimates suggest thatmay often be not only of the same order of magnitude as _P, but actually quite similar to it <cit.>. In fact, they suggest _^2≈_E^2, where _^2 is the variance of a symmetric distribution of mutation effects with mean zero and _E^2 is the environmental variance. Under our assumption of exponentially distributed (positive) effects, we obtain _^2= 2 ^2, which yields _P^2 = 4^2+2 ^2. Therefore, (<ref>) is satisfied if h^2 = 2/(2+1). Heritabilities are, by definition, between 0 and 1, but are often in the range between 0.1 and 0.7 (e.g., Table 10.1 in ; Chap. 7 in ), for morphological traits more often in the lower half of this range. Thus a value of h^2≤12 is consistent with ≤12, and a value of h^2≤23 with ≤1.Initial phase A comparison of the response of the mean phenotype from standing variation and new mutations during the initial phase of adaptation is delicate. Whereas for the response from standing variation (<ref>) can still be used, the response from new mutations increases nonlinearly (see Fig. <ref>). In particular, equation (<ref>) as well as its simplification (<ref>) for equal effects, are too complicated to derive analytical comparisons between the two scenarios. Although numerical solution is simple and computationally fast, there are many scenarios to be compared with of how standing variation is maintained and how it is structured <cit.>. Thus, a detailed comparison has to be postponed to a future treatment.§.§ Number of segregating sites as an indicator for sweep-like vs. shift-like patterns Taking up the arguments in Sect. <ref>, we propose to use the number of segregating sites as a main indicator for the distinction between the scenarios of adaptation being mainly due to selective sweeps or mainly due to small frequency shifts at many loci.This is supported by our theory in Sect. <ref> showing that once a stationary response to directional selection has been achieved, the expected number of segregating sites, S̅^*, is proportional to , depends logarithmically on N, and exceeds the neutral expectation of 2ln N, but never by more than a factor of two if s is varied between 0 and 0.5 (see eq. (<ref>) and Fig. <ref>). As a function of s, the maximum value of S̅^* is achieved if s≈ 1/(2ln N).For the initial phase of adaptation, the branching process approximations S̃(τ) in (<ref>) and (<ref>) are much more accurate than the simple approximations S̅(τ) in (<ref>) (see Fig. <ref>). 
However, all these approximations show that the expected number of segregating sites is proportional to , depends only very weakly on the product of s and the average mutation effect , and is independent of the population size N. Further support for our proposition derives from Figure <ref>. Motivated by the work of <cit.> on a different model of selection, in this figure we illustrate the polygenic pattern of adaptation at the loci that contribute to the trait at the time T when the mean phenotype G̅ first reaches or exceeds a given value. Although the expected number of segregating sites S̅_T deviates from our approximations in Sect. <ref> for 2=0.1, the main conclusion remains the same. Clear sweep patterns can be observed for 2=0.1 (where S̅_T<2.5) and shift-like patterns for 2=10 (where S̅_T>35). Because we assume exponential directional selection, beneficial mutations, and weak random genetic drift, all mutants that become `established' will sweep to fixation in the long run. Therefore, we focus on the initial response when characterizing patterns of adaptation as sweep-like vs. shift-like. In Section <ref>, we argue that our results are of relevance for other modes of directional selection. A popular model is one in which a population is initially in equilibrium under mutation-stabilizing selection-drift balance and then starts to evolve due to a sudden or a continuous change in the environment, modeled by a sudden shift in the optimum phenotype or by a continuously moving optimum <cit.>. For models like this, our findings may be of relevance either for the initial response (in case of a sudden shift) or even in the long term (in case of a moving optimum). Below, we briefly review recent treatments focusing on the polygenic dynamics underlying trait evolution. If a small sudden shift in the optimum phenotype occurs, mutations of relatively small effect may be most conducive to selective sweeps because then large-effect mutations become deleterious when the population closely approaches the optimum. If a sudden shift in the optimum of appreciable magnitude occurs and the initial population has been in mutation-stabilizing selection balance before, then a kind of threshold behavior has been identified for deterministic models <cit.>. Under mutation-selection balance the initial variance will be high if allelic effects are small because such alleles are maintained at intermediate frequencies. By contrast, the initial variance will be low if allelic effects are very large because such alleles will be nearly fixed or absent. Alleles of very small effect will respond slowly after the onset of selection because they are nearly neutral. If their number is large, as assumed by the infinite sites model, then the total response of the mean phenotype can still be large (proportional to the variance, as in Lande's equation discussed above). Alleles of very large effect will initially also respond very slowly because they are so rare, and this will lead to a slow response of the mean phenotype because there is little variation on which selection can act. As a result, alleles with effects close to an intermediate threshold value will be the first to respond quickly and start sweeping. On a longer time scale and for a large shift in the optimum, the response from a few loci with large effects is comparable with that from many loci with small effects if the sum of effect sizes is the same.
If most loci have large effects, then the response will clearly be much faster than the response from the same number of small-effect loci; the larger the effect the faster the sweeps <cit.>. In finite populations the conclusions are even more subtle and may deviate from those in the deterministic case. Mutations of small effect may contribute much less to the response than in large populations because their evolutionary fate is dominated by genetic drift and their initial distribution under mutation-stabilizing selection-drift balance will be U-shaped. Again, mutations of moderate effects may contribute most to the selective response after a sudden shift in the optimum <cit.>. The latter authors also conclude that mutations of large effect are unlikely to sweep, but this is a consequence of their assumptions that the trait is highly polygenic (i.e., √()≫1), the population is initially in mutation-stabilizing selection-drift balance (whence alleles of large effect are extremely rare), and there is only a small shift in the optimum of a few phenotypic standard deviations. The latter assumption implies that the equilibration phase is reached quickly, and then fine-tuning by alleles of small effect is essential for adaptation. If alleles of large effect contribute substantially to the initial variance and the shift is large, then they will sweep <cit.>. For a gradually moving optimum, after a relatively short initial phase, a quasi-stationary phase is entered during which, except for stochastic fluctuations, a constant average variance is attained and the mean phenotype lags behind the optimum by an, on average, constant amount. Of course, a prolonged quasi-stationary response will depend crucially on a constant input of new mutations. During this phase, the trait experiences a mixture of directional and stabilizing selection, and the variance settles to a value that is lower than the variance under mutation-drift balance (which is nearly the same as that achieved under exponential selection), but higher than that under mutation-stabilizing selection-drift balance <cit.>. <cit.> showed that compared to adaptation from de novo mutations, adaptation from standing variation proceeds by the fixation of more alleles of small effect. If the optimum is stopped after some time or new mutations are ignored, then the phenotypic dynamics becomes more complex and may depend on details of the underlying genetics <cit.>. In our finite-population model, in which initial mutant frequencies are 1/N, all mutants are beneficial, and no epistatic interactions occur, the response of the mean is faster the larger the allelic effects. In addition, mutations that sweep, sweep faster than predicted by deterministic growth. Nevertheless, the response of the mean phenotype depends linearly on N through =NU (Corollary <ref> and Proposition <ref>), except in the very early phase when it depends mainly on U but is nearly independent of N (Proposition <ref>). §.§ Conclusions, limitations, and outlook We conclude from our analyses that , the expected number of beneficial mutations occurring in the population per generation (and contributing to the trait) is the main determinant of the pattern of adaptation at the genetic level. Selective sweeps at a few loci are the dominant pattern if ≤0.1, and small allele frequency shifts at many loci are observed if ≥10. Given the same mean (beneficial) mutation effect, , higher second and third moments of the mutation distribution favour sweeps. 
The expected number of segregating sites is an excellent indicator for the pattern of adaptation. However, it may be challenging to estimate the number of segregating sites contributing to a trait. The selection coefficient s and the average effect primarily determine the rate of adaptation via their product s. Except for , the population size has a weak effect on the rate of adaptation and on the population variance; often it enters expressions approximately logarithmically. We emphasize that our analysis is based on the assumption Ns≫1, i.e., that selection is considerably stronger than random genetic drift. Most of our approximations, however, perform well when Ns>2. Our conclusions concerning the influence of the parameters on the pattern of adaptation are similar to those of <cit.>, although their model and approach deviate from ours in important respects. Most significantly, their genetic architecture differs drastically from ours. They study a polygenic trait that assumes only two or very few values, such as resistance and non-resistance. The contributing loci are thus highly redundant, and a mutation at one or at very few loci is sufficient to complete adaptation. This is a very strong form of epistasis. In addition, mutation effects are the same at all loci. Selection on the discrete trait acts similarly as in our model: they assume a linear fitness function in continuous time, which corresponds to an exponential in discrete time. In contrast to our model, their initial populations are polymorphic and (mostly) assumed to be in mutation-stabilizing selection-drift balance. Nevertheless, they find that their population-scaled background mutation rate _bg, which corresponds closely to our 2, explains the main differences in the patterns of adaptation, i.e., few sweeps versus many slight shifts. This main conclusion is confirmed by <cit.> for an additive trait subject to a sudden shift in the optimum phenotype, again for equal mutation effects. The validity of our conclusions may be limited by a number of important simplifying assumptions, which are unlinked loci and linkage equilibrium, no pleiotropy or epistasis, and exponential directional selection. In addition, we assume that selection is stronger than random drift and we study only the selection response from new mutations, i.e., we ignore the response from standing variation. Our results are derived for haploid populations, but will apply as well to diploids without dominance by substituting 2N for N. Our assumption of unlinked sites will be biologically reasonable unless the number of (simultaneously) segregating sites exceeds the number of chromosomes by far or many sites occur in regions of reduced recombination. Although exponential directional selection by itself does not induce linkage disequilibrium, deviations from this form of selection as well as random genetic drift will do so. Therefore, it would be desirable to extend the above results to loosely linked loci and perform a kind of quasi-linkage equilibrium analysis. Quite extensive simulation results for models of artificial directional selection (such as truncation selection) by <cit.> suggest that linkage reduces the asymptotic variance, hence the response of the mean, compared to unlinked loci. The reduction will be small unless linkage is very tight. A similar observation was reported in <cit.> for exponential directional selection. However, these results included only population sizes of a few hundred individuals.
For a moving optimum model and population sizes up to about 10^4, a qualitatively similar finding was obtained in <cit.>. In all these investigations, mutation distributions with positive and negative effects were used. Then, mutations of both signs appear on the same chromosome, whence low recombination will reduce fixation probabilities, especially of weakly beneficial mutants, and hamper adaptation <cit.>. In concordance with these results, a simulation study by <cit.> found that weak or moderate linkage has almost no effect on the response of the mean and on the variance of the trait, and also little effect on the fixation probabilities of advantageous mutations, but the resulting interference increases fixation times and affects haplotype diversity. Analytical work on the effects of linkage is scarce, based on specific assumptions, and shows that linkage can increase or decrease the variance of a trait under directional selection, depending on the model assumptions and the magnitude of the parameters <cit.>. The most comprehensive analysis may be that by <cit.>. They showed that even under strong epistatic selection, such as truncation selection, the trait distribution is significantly affected by linkage disequilibrium only if loci are tightly linked. This provides additional support for our expectation that our results derived under the assumptions of linkage disequilibrium and absence of epistasis will provide decent approximations if recombination among the contributing loci is moderate or strong. However, the explicit determination of the effects of linkage on the likelihood of complete sweeps will pose a formidable challenge <cit.>. Moderate or strong linkage can lead to selective interference. An extreme scenario is provided by a single locus with recurrent mutation. For this case our theory remains applicable if 2Nμ≤ 1, where μ is the per-locus mutation rate. Then it is sufficient to consider the first successful mutation event. If 2Nμ > 1, competition of mutants from different mutation events can no longer be ignored and complicates the evolutionary dynamics (unpublished results by HG).We expect that our assumption of exponential directional selection provides a good approximation for the early phases of adaptation under other forms of directional selection. A property of exponential selection that is crucial for our analysis is that the strength of selection remains constant over time. It shares this property with truncation selection (if the truncation probability remains constant), and with selection imposed by a steadily moving optimum during the phase when the population mean follows the optimum at the same rate and with a constant average lag. If a sudden shift in the optimum phenotype occurs then selection will remain approximately constant initially, but eventually gets increasingly weaker as the mean proceeds towards the optimum. Some of the above discussed results provide support for the expectation that our results may be applicable to these forms of selection. Exponential directional selection will not be suitable as an approximation if the selection strength is fluctuating in time, if there are pleiotropic fitness effects, or if selection is epistatic, strong, and recombination is weak.Our results concerning the time dependence of the distribution of sweeping de novo mutations will be applicable to very rare allelic variants already present in the population. 
This will be the case if their effect is large and the population has been under stabilizing selection before directional selection started. If more than one copy is initially present,the quasi-deterministic phase could be started from a gamma-distributed effective initial population size <cit.> instead of an exponential. The extent to which our results on the pattern of adaptation from de novo mutations remain applicable in the presence of standing variation would be an interesting and challenging topic for future research. left=3cm,right=3cm [c]l|p0.02p0.02|ll Glossary of symbols. For both the Roman and Greek alphabets, uppercase letters precede lowercase ones. For each uppercase or lowercase letter, listing is in order of appearance of the definition in the text. The second column indicates whether the symbol is used in the one-locus model or in the infinite sites (and/or quantitative-genetic) model. The references are to the equation closest to the definition of each symbol. Thus, (2.1), (2.1)+, (2.1)- refers to (2.1), the text below (2.1), the text above (2.1), respectively. Sometimes the respective subsection is given. Symbols that occur only in the Discussion (mainly Sect. <ref>) or only in the Appendix are not listed. Symbol 2c|1L/ISM Reference Definition (continued)Symbol 2c|1L/ISM Reference Definition a (<ref>), (<ref>) important parameter for g_af x Sect. <ref> density of the distr. of mutation effects _iG x Sect. <ref> genotype (or phenotype) G̅_i x (<ref>) contributions of locus i to G̅ G̅ x (<ref>) expected phenotypic mean G̅^* x (<ref>)- expected phenotypic mean in the quasi-stationary phase ΔG̅^* x (<ref>) expected per-generation response in the quasi-stationary phase G̅_T x Sect. <ref> average of mean phenotypic value in gen. T g_a x (<ref>) density of X_nh^(i) x (<ref>) density of r.v. measuring the freq. of the i-th mut. cond. on non-ext. by gen. τ h̃^(i) x (<ref>) describes the positive part of X̃_τ^(i)k x (<ref>)- total number of mutation events until time τM_i, x (<ref>) normalization constant m_i, x (<ref>) density of Erlang distr. (approx. waiting time τ_i)N Sect. <ref> population size N_e (<ref>)- effective population size n x Sect. <ref> generation_τ x (<ref>) Poisson distr.x (<ref>)-, (<ref>) probability of ultimate extinctionx (<ref>)+ (long-term) survival probability (Galton-Watson process)x (<ref>) probability of ultimate fixation (diffusion theory) (n) x (<ref>) probability of extinction before gen. n ^(i)(τ) x (<ref>) probability that the mutation at locus i has been lost by gen. τ p(·) x (<ref>) solution of the discrete, deterministic selection equationS x (<ref>)- number of segregating sites S̅ x (<ref>) expected number of seg. sitesS̅^* x (<ref>)- expected number of seg. sites in the stat. phaseS̃ x (<ref>), (<ref>), (<ref>) expected number of seg. sites under branching process modelS̅_T x Sect. <ref> average number of seg. sites at time T s Sect. <ref> selection coefficient s̃ x (<ref>) selection coefficient used in relation to diffusion theory s_max x (<ref>) sel. coeff. for which S̅^* is maximizedT_ x (<ref>) time needed such that 1-(T_) = (1+) (1-) for >0T x Sect. <ref> stopping time T̅ x Sect. <ref> average stopping time t_M x (<ref>)+ time at which the varianceis maximizedx (<ref>) expected time to fixationx (<ref>) expected mean time to fixationx (<ref>) expected time to lossx (<ref>) expected mean time to lossU x Sect. 
<ref> expected number of beneficial mutations per individual per generationV_G_i x (<ref>) contributions of locus i to V_G V_G x (<ref>) expected phenotypic variance V_G^* x (<ref>)- expected phenotypic variance in the quasi-stationary phase v x (<ref>) variance of the offspring distribution ξ_jW_n x (<ref>)- rescaled (discrete) random variable obtained from Z_n W x (<ref>) limiting random variable of W_n W^+ x (<ref>) absolutely continuous part of W w^+ x (<ref>)+ density of W^+ W_n^+ x (<ref>) positive part of W_n w̅(·) x (<ref>)+ mean fitness needed for the discrete, det. sel. equationX_n x (<ref>) random variable measuring the (positive) mut.-frequency in gen. n X̃_τ^(i) x (<ref>) (unconditioned) r.v. for the frequency of the i-th mutant X_τ^(i) x (<ref>)- X_n for locus i (with τ=n, _i=) x̂ x (<ref>)+ mut. freq. for which g_a has unique maximum for a<2Y_n x (<ref>)+ random variable with distr. Ψ_n (approx. for positive part of Z_n)Z_n x (<ref>)- Galton-Watson process _i x Sect. <ref> mutation effect on phenotype of locus ix Sect. <ref> mean of the mutation effects _n (f) x (<ref>)- nth moment about zero of the mutation distribution f_1 x (<ref>) first moment of g_a _2 x (<ref>) second moment of g_ax (<ref>) within-population variance of the mutant’s allele frequency x Sect. <ref> expected total number of mutations occurring in the population per generationλ_n x (<ref>) parameter of exponential distribution Ψ_n x (<ref>) notational simplificationξ_j x (<ref>)- (random) number of offspring of individual jρ x Rem. <ref> constant, such that ≈ρ s_i x Sect. <ref> fitness effect of a mutation at locus ix Sect. <ref> fitness of the derived state (mean of ξ_j, mean offspring number)τ x Sect. <ref> (discrete) generation since the initial population started to evolve τ_i x (<ref>)- random time at which ith mutation event occurred τ_c x (<ref>) expected time by which the first mutation becomes fixed τ_1 x (<ref>)- bound for time τ̃ x (<ref>)+ bound for timeΦ x (<ref>)- Laplace transform of W φ x (<ref>)+ offspring generating function of ξ_j (of Z_1)Ψ_n x (<ref>) exponential distribution ψ_n x (<ref>)+ density of distr. Ψ_n3l 3l expectation 3l variance 3l probability 3l (<ref>) product logarithm (Lambert function) 3l (<ref>)+ incomplete Gamma function 3l[·] (<ref>)- nearest integer to · 3lE_1 (<ref>)+ exponential integral 3l exponential distribution 3l Euler–Mascheroni constant § ACKNOWLEDGMENTSWe thank Joachim Hermisson, Emmanuel Schertzer, Ben Wölfl, and Ilse Höllinger for helpful discussions. We gratefully acknowledge extraordinarily detailed and helpful comments by three reviewers. Financial support by the Austrian Science Fund (FWF) through the Vienna Graduate School of Population Genetics (GrantDK W1225-B20)to HG is gratefully acknowledged.§ ADDITIONAL MATERIALSWe provide a comprehensive Wolfram Mathematica notebook, containing additional visualizations of the analytical predictions and efficient numerical procedures. Upon publication, the notebook and the simulation code (C++) will be made available via github: <https://github.com/Ha-nn-ah/EvolutionOfQuantitativeTraits>. § FRACTIONAL LINEAR OFFSPRING DISTRIBUTIONS AND ASSOCIATED GALTON-WATSON PROCESSES §.§ Basics The fractional linear, or modified geometric, distribution is given by_0 = r and_k = (1-r)(1-p)p^k-1if k≥1,where 0<r<1 and 0<p<1; e.g. <cit.> or <cit.>. The name fractional linear derives from the fact that its generating function isφ(x) = r + x(1-p-r)/1 - x p ,hence fractional linear. 
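As a quick numerical consistency check of this definition (a Python sketch with arbitrary illustrative parameter values; the generating function is read as φ(x) = (r + x(1-p-r))/(1-xp)):

import numpy as np

r, p = 0.3, 0.6                                   # arbitrary illustration with 0 < r < p < 1
k = np.arange(1, 2000)
probs = (1.0 - r) * (1.0 - p) * p ** (k - 1)      # p_k for k >= 1; p_0 = r

print("total probability mass:", r + probs.sum())
for x in (0.0, 0.25, 0.5, 0.9):
    series = r + (probs * x ** k).sum()           # direct evaluation of sum_k p_k x^k
    closed = (r + x * (1.0 - p - r)) / (1.0 - x * p)
    print(f"x = {x:4.2f}:  series = {series:.10f}   fractional linear form = {closed:.10f}")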
With r=1-p, the geometric distribution is recovered. It is straightforward to show that every fractional linear generating function generates a modified geometric distribution. Mean and variance of {_k} arem = 1-r/1-p andv = (1-r)(p+r)/(1-p)^2 .In terms of m and v, we obtainp = 1-2m/m+m^2+v andr = 1- 2m^2/m+m^2+v .Therefore m>1 if and only if 0<r<p<1. In this case the associated Galton-Watson process {Z_n} is supercritical. We note that m>v if and only if 2p<1-r, which implies p<12.If m≠1, the n-times iterated generating function φ_(n) = φ_n is (rearranged from the parameterization in <cit.>)φ_n(x) = r(1-x) - m^-n(r-px)/p (1-x)- m^-n(r-px) .It is fractional linear with parametersp_n = p(1-m^-n)/p-rm^-n andr_n = r(1-m^-n)/p-rm^-n . From now on we assume m>1, i.e., r<p. The probability of extinction by generation n, , is given by= φ_n(0) = r(1-m^-n)/p-rm^-n = r_n ,and the (ultimate) extinction probability is = r/p .From (<ref>) and (<ref>) we obtain the relation 1-/1- = 1 - m^-nand its equivalent1-/1- = 1 + /m^n- . This allows to compute the time needed for the probability of non-extinction by generation n, 1-, to decline to (1+)(1-).For >0 (not necessarily small) we define T_ as the (real) solution of 1-(T_) = (1+)(1-) . By (<ref>), this time can be calculated explicitly: T_ = ln((1+1/))/ln m . Therefore, 1ln mln(1+1/) is always an upper bound to T_.§.§ Distributions of W_n and W The cumulative distribution function of W_n=Z_n/m^n can be calculated explicitly (e.g., from φ_n):P(W_n≤ x) = 1-(p-r)p_n^⌊ m^nx ⌋/p-rm^-n ,where p_n is given by (<ref>) and ⌊ z ⌋ denotes the largest integer smaller than z. From the equation[W_n≤ x] =+ (1-)[W_n^+≤ x] ,we infer[W_n^+≤ x] = 1 - p_n^⌊ m^nx ⌋ .This shows that the cumulative distribution of W_n^+ is imbedded into (hence approximated by) the exponential distribution with parameter _n = -m^n ln p_n = -m^n lnp(1-m^-n)/p-rm^-n .We note that lim_n→∞ p_n^m^n x = e^-(1-r/p)x. As a consistency check, it can be confirmed directly from the CDF that [W_n^+]=(1-)^-1. If we take the exponential distribution with parameter _n^*=1-, this provides an approximation to the (discrete) distribution of W_n^+ which has the property that the integral of the difference of the two distribution functions vanishes, i.e., such that ∫_0^∞ (1 - p_n^⌊ m^nx ⌋)- (1 - e^_n^*x) dx = 0.Conditional on this requirement, the exponential with parameter _n^*=1- provides the best possible exponential approximation to W_n^+.We obtain that W^+=lim_n→∞W_n^+, which exists quite generally <cit.>, is exponentially distributed with parameter= lim_n→∞_n^* = 1- r/p = 1 -.This follows as well from Poincaré's functional equation (<ref>), i.e., Φ(u) = φ(Φ(u/m)), which implies that the Laplace transforms of W isΦ(u) = [e^-u W] = p-r + ru/p-r+pu = 1- +1/1+u/ ,withgiven by (<ref>).It is easy to show that if W^+ is exponential, then the generating function of the offspring distribution is fractional linear. This follows because the Laplace transform Φ is strictly monotone decreasing so that φ is uniquely determined by eq. (<ref>). Therefore, as pointed out by a reviewer, W^+ is exponential if and only if the offspring distribution is fractional linear. §.§ Comparison with the Galton-Watson process originating from a Poisson offspring distributionWe assume a fractional linear distribution with v=m, as for a Poisson distribution with mean m. A simple calculation shows that v=m if and only if p=m/2+m and r = 2-m/2+m, where we assume m<2. Then we obtain= 2/m - 1and= (2-m)(1-m^-n)/m - (2-m)m^-n . 
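These expressions are easy to verify by simulation. The sketch below (illustrative parameters; writing q_n for the probability of extinction by generation n, the last formula is read as q_n = (2-m)(1-m^(-n))/(m-(2-m)m^(-n))) simulates the corresponding Galton-Watson process and compares the fraction of lineages lost by generation n with the closed form.

import numpy as np

rng = np.random.default_rng(1)
m = np.exp(0.05)                                  # supercritical mean offspring number, m < 2
p, r = m / (2.0 + m), (2.0 - m) / (2.0 + m)       # fractional linear with v = m

def total_offspring(z):
    # each of z parents has 0 children with probability r, else a Geometric({1,2,...}) number
    if z == 0:
        return 0
    kids = rng.geometric(1.0 - p, size=z)
    kids[rng.random(z) < r] = 0
    return int(kids.sum())

n_gen, n_rep, extinct = 50, 20_000, 0
for _ in range(n_rep):
    z = 1
    for _ in range(n_gen):
        z = total_offspring(z)
        if z == 0:
            extinct += 1
            break

q_n = (2.0 - m) * (1.0 - m ** (-n_gen)) / (m - (2.0 - m) * m ** (-n_gen))
print(f"simulated P(extinct by generation {n_gen}): {extinct / n_rep:.4f}")
print(f"closed form q_n                          : {q_n:.4f}")
print(f"ultimate extinction probability 2/m - 1  : {2.0 / m - 1.0:.4f}")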
If m=e^, as we assume in the main text, then =2e^--1 and = 1- = 2(1-e^-) = 2 - ()^2 + O(()^3) .We note that <1 if and only if <ln 2≈ 0.69 (otherwise, =1). For comparison, the series expansion offor the corresponding Poisson offspring distribution is 2 - 53()^2 + O(()^3), and the survival probability is always <1. In particular, the survival probability for the Poisson distribution is always smaller than that for the corresponding linear fractional. In the limit of →0, they are asymptotically equivalent.If we assume in the fractional linear case that v and m are proportional as m=e^s is varied, i.e., v=2m/ρ (cf. Remark <ref>), equation (<ref>) simplifies to T_()= 1/s ln((1+1/)) = 1/sln(1+1/) - ρ + O(s) , where the asymptotic estimate in the second lines holds if s→0 because ≈ 1-ρ s. Comprehensive numerics show that (<ref>) apparently always (i.e., if (1+1/)≥1) provides an upper bound for any Poisson offspring distribution with mean e^s if the correspondingis used. This bound is a highly accurate approximation with relative errors of order s or smaller. Finally, we note that =0 if m≥ 1+2/ρ. § MEAN TIMES TO FIXATION OR LOSS OF A MUTANT Approximations for the expected times to absorption conditioned on either fixation or loss of a new mutation, for simplicity called mean fixation time or mean loss time, have been an important tool in theoretical population genetics ever since its inception in the 1920s. Even in large populations, the initial fate of a mutant, as well as its fate in the final stages before fixation, is subject to strong stochastic influences, and the derivation of simple and accurate approximations is still a topic of research <cit.>. In this appendix, we add simple estimates to this literature that are particularly useful to compute the mean fixation or loss times of beneficial mutants drawn from a distribution of effects.For the computation of mean times to fixation or loss in a finite population, we use diffusion approximations because the branching process model is not directly applicable. Although the approximation g_a(p) in (<ref>) for the distribution of the number of mutations is quite accurate for late stopping times, the derivation of useful approximations for the mean fixation and loss times requires more delicate transformations than that used for Result <ref> and is work in progress.§.§ Diffusion approximations Here, we collect results on diffusion approximations of the fixation probability and of the mean times to fixation or loss of favorable mutants with a fixed selection coefficient s>0. In a haploid population, the well-known diffusion approximation for the fixation probability of a single beneficial mutant is(s, N) = 1-e^-2s/1-e^-2Ns .In the applications to our model, we replace s bydefined in (<ref>). The diffusion approximation (s, N) is always an upper bound to the true fixation probability in the Wright-Fisher model, and is highly accurate for every N≥8 if 0<s≤0.1 <cit.>. The mean times to fixation or loss can be expressed only in terms of integrals of the corresponding sojourn time densities <cit.>. For the mean fixation time, the simple approximation ^(HP)(s, N) = 2/s(ln (2Ns) + γ -1/2Ns)was provided by <cit.>, where γ is the Euler gamma. It is very accurate if 2Ns≥3. If 2Ns<3, we use ^(small)(s, N) = 2N(1 - 2N s 11 - 6 - 6 ln 3/27) .This is the linear function in s which connects the mean fixation time 2N at s=0 with the value of ^(HP)(s, N) at s = 3/(2N). 
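The fixation-probability and fixation-time approximations just quoted are straightforward to evaluate. The sketch below (illustration only, not the notebook code; N and s are arbitrary) combines the ^(HP) and ^(small) formulas, switching at s = 3/(2N), and checks that the two branches agree there. The symbol lost in the displayed formula for ^(small) is read as the Euler gamma γ, which is precisely the choice for which the branches match:

import numpy as np

GAMMA = 0.5772156649015329             # Euler gamma

def pi_fix(s, N):
    # diffusion approximation for the fixation probability of a single beneficial mutant
    return (1.0 - np.exp(-2.0 * s)) / (1.0 - np.exp(-2.0 * N * s))

def t_fix_HP(s, N):
    return 2.0 / s * (np.log(2.0 * N * s) + GAMMA - 1.0 / (2.0 * N * s))

def t_fix_small(s, N):
    return 2.0 * N * (1.0 - 2.0 * N * s * (11.0 - 6.0 * GAMMA - 6.0 * np.log(3.0)) / 27.0)

def t_fix(s, N):
    # concatenation: HP formula for 2Ns >= 3, linear interpolation below
    return t_fix_HP(s, N) if s >= 3.0 / (2.0 * N) else t_fix_small(s, N)

N = 10_000
s_star = 3.0 / (2.0 * N)
print("branches at s = 3/(2N):", t_fix_HP(s_star, N), t_fix_small(s_star, N))
for s in (1e-5, 1e-3, 1e-2, 1e-1):
    print(f"s={s:g}:  pi={pi_fix(s, N):.4e}  t_fix={t_fix(s, N):.1f} generations")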
As a function of s, ^(HP) assumes its maximum very close to this value and becomes negative for very small s. As illustrated by Fig. <ref>, the function (s,N) ≈^(HP)(s, N)ifs ≥ 3/(2N), ^(small)(s, N) ifs < 3/(2N),obtained by concatenation of ^(HP)and ^(small) yields a highly accurate approximation of the true diffusion approximation.For the mean loss time of a single advantageous mutant, we obtain (see below)^(app)(s,N)≈ -2(1+s)ln(2s) + 2(1 -) + s(3-2) + 1/Ns≈ -2ln(2s) + 2(1 -) + 1/Ns ,which is very accurate if Ns≥2 but breaks down if Ns≤1.If Ns<2, we use^(small)(s, N) = 2ln(N) - sN(ln 4 + γ - 54) . This is the linear function in s which connects the (approximate) mean loss time 2ln(N) at s=0 <cit.> with the value of ^(app)(s, N) at s = 2/N. As illustrated by Fig. <ref>, the function (s,N) ≈^(app)(s, N)ifs ≥ 2/N, ^(small)(s, N) ifs < 2/Nyields a highly accurate approximation of the true diffusion approximation. To derive this approximation, we start with the well-known diffusion approximations for the conditional sojourn time densities<cit.>. To account for haploidy, we set =2Ns and p=1/N in these expressions (p the initial mutant frequency). By integration with respect to x, we obtain with the help ofand after simple algebraic rearrangement^(1)(s, N)= 1/(e^2Ns-1)s[ e^2Ns(-(-2s) + +ln(2Ns)-ln(N-1) ) - (2Ns) + (2(N-1)s) - (2s) ++ln(2Ns)-ln(N-1) + e^2Ns((-2(N-1)s) -(-2Ns)) ]for the expected time the mutant frequency spends in (0,1/N], and^(2)(s, N)= e^2s-1/(e^2Ns-1)(e^2Ns-e^2s)s[ -e^4Ns(-2s) + e^2Ns(2(N-1)s) + e^4Ns(-2Ns)+(2Ns)+2e^2Ns(+ln(2Ns)-ln(N-1))- (2s)+e^2Ns(-2(N-1)s) ]for the time spent in (1/N,1). We note that the times ^(1)(s, N) and ^(2)(s, N) have been rescaled to generations, i.e., the diffusion time has been multiplied by N. Omitting terms of order e^-2Ns or smaller in (<ref>) and using the approximation (x)≈ e^x/(x-1) if |x| > 4, we obtain^(1)(s, N)= 1/(e^2Ns-1)s[ e^2Ns(-(-2s) + +ln(2Ns)-ln(N-1) ) - (2Ns) + (2(N-1)s) ] ≈1/s[-(-2s) + +ln(2s)+ln(N)-ln(N-1) - 1/e^2Ns((2Ns) - (2(N-1)s)) ] ≈ 2 - s + 1/N .The latter approximation is obtained by performing a series expansion up to order s^2 and by keeping Ns constant (as in the diffusion approximation). For ^(2) we obtain in an analogous way, by omitting terms of order e^-2Ns or smaller,^(2)(s, N)≈e^2s-1/(e^2Ns-1)(e^2Ns-e^2s)s[- e^4Ns(-2s) + e^2Ns(2(N-1)s) ] ≈e^2s-1/s[-(-2s) + e^-2Ns(2(N-1)s) ] ≈ -2-2(1+s)ln(2s) + 2(2-)s +1-s/Ns .By summing up, i.e., = ^(1)+^(2), and omitting terms of order 1/N, we arrive at (<ref>). §.§ Mean fixation time averaged over the mutation distribution f Now we derive an approximation for the mean fixation time of a single mutant if its effecton the selected trait is drawn from an exponential distribution f with mean . For given N and s, we defined in (<ref>) the corresponding expected mean fixation time by(s,f, N) = ∫_0^∞(, N) (, N) f()d/∫_0^∞(, N) f()d .We recall from (<ref>) that =e^s-1 is the selection coefficient of a mutant of effect . For (, N) we use the approximation (<ref>), and for (, N) we use (<ref>). With this simplified diffusion approximation for (, N), the integrals in (<ref>) are readily computed numerically. Some values are given in the legend of Fig. <ref>.An analytically explicit, still quite accurate, approximation ofcan be derived as follows.We use the approximations (s,N) ≈ 1-e^-2s and ^(HP)(s, N) ≈2/s(ln(2N s ) + γ), which are valid for sufficiently large Ns. In addition, we assume that the selective advantage of the mutant is s instead of . 
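The loss-time formulas can be handled in the same way. In the following sketch the symbols lost in the display of ^(app) are again read as the Euler gamma, i.e. the simplified form -2 ln(2s) + 2(1-γ) + 1/(Ns) is used; with this reading the linear branch 2 ln(N) - sN(ln 4 + γ - 5/4) meets ^(app) exactly at s = 2/N, as stated above. This reconstruction is for illustration only and is not part of the accompanying notebook:

import numpy as np

GAMMA = 0.5772156649015329

def t_loss_app(s, N):
    # simplified approximation for the mean loss time, accurate if Ns >= 2 (symbols reconstructed)
    return -2.0 * np.log(2.0 * s) + 2.0 * (1.0 - GAMMA) + 1.0 / (N * s)

def t_loss_small(s, N):
    # linear-in-s branch used when Ns < 2
    return 2.0 * np.log(N) - s * N * (np.log(4.0) + GAMMA - 1.25)

def t_loss(s, N):
    return t_loss_app(s, N) if s >= 2.0 / N else t_loss_small(s, N)

N = 10_000
print("branches at s = 2/N:", t_loss_app(2.0 / N, N), t_loss_small(2.0 / N, N))
for s in (1e-5, 1e-3, 1e-2, 1e-1):
    print(f"s={s:g}: mean loss time ~ {t_loss(s, N):.2f} generations")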
If f is exponentially distributed with mean , we obtain∫_0^∞(s, N) f()d≈2 s/1+2sand∫_0^∞^(HP)(s, N) (s, N) f()d≈ln(1+2s)/s[2ln(2N s) - ln(1+2s) ] .Therefore,(s,,N) ≈(1+2s)ln(1+2s)/2(s)^2[2ln(2N s) - ln(1+2s) ] .The high accuracy of this approximation is demonstrated in the supplementarynotebook. Assuming s≪1 and performing a series expansion in s, we arrive at (s,,N) ≈ 2 ln(2Ns) (1/s +1 - 2/3s) - 2. If the selective coefficient of the mutants is , instead of s, then some of the above integrations cannot be performed analytically. However, numerics shows that the simple approximation (<ref>)still works very well, especially if 2/N≤ s≤ 0.2 (seenotebook).§.§ Mean loss time averaged over the mutation distribution f In analogy to (<ref>) and (<ref>), we define the expected mean loss time of a single mutant, with effecton the selected trait drawn from an exponential distribution f with mean , by(s,f, N) = ∫_0^∞(, N) (1-(, N)) f()d/∫_0^∞(1-(, N)) f()d .Here, (, N) is taken from (<ref>). With this simplified diffusion approximation for (, N), the integrals in (<ref>) are readily computed numerically (seenotebook). For graphs ofand , see Fig. <ref>. It shows that, unless s is very small, >. The reasons are that mutations of larger effect are lost with lower probability than those of smaller effect, and small effect mutations stay longer in the population. The following is a simple and accurate approximation if (approximately) Ns>1 and s<0.2 (seenotebook):(s,, N) ≈ 2(-ln (2s)+1+2s)-1.42-ln(Ns)/Ns .Usually, these times to loss are very small. For instance, if the mean of the exponential distribution is =1, N=10^4, we obtain ≈5.5, 8.8, and 14.5 for s=0.1, 0.01, and 0.001, respectively. For larger values of N almost the same numbers are obtained. § MUTATIONS OCCUR AT THEIR EXPECTED TIME - APPROXIMATIONS FOR THE PHENOTYPIC MEAN AND VARIANCE We outline a simpler approach than using the Erlang distribution for the waiting time to new mutations. We assume that the ith mutation occurs at its mean waiting time i/ and the number of mutation events until time τ is (approximately) [τ]. This is most efficient if the mutation effects _i are given, in particular if they are equal across sites. Then we can approximate (<ref>) by [(X̃^(i)_τ)^k ]≈( 1-([ τ - i/]) ) ∫_0^1 x^k g_a(τ - i/)(x)dx.Then, instead of (<ref>), we obtain the approximationG̅(τ)≈∑_j=1^[τ](1- ([τ-i/],e^s_i))_i _1(a(τ-i/,e^s_i))= ∑_j=0^[τ]-1(1- ([j/],e^s_j))_j _1(a(j/,e^s_j)) ,where we have reversed the summation order (i.e., j=[τ]-i) to obtain the second equation.This yields (<ref>) by using (<ref>).Analogously, we obtain instead of (<ref>) the approximation (<ref>).For equal mutation effects, (<ref>) and (<ref>) provide highly accurate approximations of (<ref>) and (<ref>), respectively (results not shown, but expected because the right-hand sides of (<ref>) and (<ref>) can be interpreted as approximating Riemann sums). For values ofmuch smaller than 1, few new mutations are expected until generation τ (unless τ is sufficiently large) and the rounded, expected number of mutation events, [τ], can be zero for many generations. Moreover, the Poisson distribution is highly asymmetric for small τ and gets more symmetric for larger τ.This also implies that the phenotypic mean and variance may be only poorly described by their first moments when one sweep after another occurs (as with very small ).By Proposition <ref>, the mutation rateenters the expected phenotypic mean and variance only as a multiplicative factor (see also Fig. <ref>). 
Hence, we use G̅(τ,) ≈·G̅(τ,1) and V_G(τ,) ≈· V_G(τ,1) to obtain rough approximations for small values offrom equations (<ref>) and (<ref>).§ PROOFS OF THE RESULTS IN SECTION <REF> Here, we give the proofs of the main results of Section <ref>. We start by collecting some formulas needed in the proofs. Integrals and series expansions were mostly calculated with the help of . However, integrals involving E_1 can be obtained as well by partial integration using ∫ E_1(x) dx = -e^-x + x E_1(x).§.§ Some integral formulas We recall the abbreviation = N(e^) from (<ref>). In analogy to (<ref>) we definea^∞(t,e^) =e^- t,where we omit the dependence on N. Furthermore, we introduce the (simplified) notation _1(t,e^) = _1(a(t,e^))and _1^∞(t,e^) = _1(a^∞(t,e^))as well as (t,e^) = (a(t,e^))and ^∞(t,e^) = (a^∞(t,e^)). We note that a(t,e^)>a^∞(t,e^) and, because _1 is monotone decreasing in a, _1^∞(t,e^)>_1(t,e^).We will need the following integrals: ∫_0^τ^∞(t,e^) dt =1/[e^ E_1()- exp( e^-sτ)e^-sτ E_1( e^-sτ)] , lim_τ→∞∫_0^τ^∞(t,e^) dt =/e^ E_1().In addition, we obtain for T_1>0:∫_T_1^τ_1^∞(t,e^) dt =1/[ exp( e^-τ)E_1( e^-τ)- exp( e^- T_1)E_1( e^- T_1)]and∫_0^T_1_1^∞(t,e^) dt =1/[exp( e^- T_1)E_1( e^- T_1) - e^ E_1()] .By setting T_1=τ-1 in (<ref>) and using lim_t→∞1/s texp( e^- t)E_1( e^- t) = 1, we arrive atlim_τ→∞∫_τ-1^τ_1^∞(t,e^) dt = 1 . From the asymptotic properties of the exponential integral <cit.>, in particulare^x E_1(x) = ∑_k=0^∞ (-1)^k k!/x^k+1 = 1/x-1/x^2 + 2/x^3 - 6/x^4 +… (x→∞) ,we derive for arbitrary T_1>0lim_T→∞1/ T[exp( e^- (T_1+T))E_1( e^- (T_1+T)) -exp( e^- T_1)E_1( e^- T_1) ] = 1 .Combining this with (<ref>), we obtainlim_τ→∞1/τ-T_1∫_T_1^τ_1^∞(t,e^) dt = 1. §.§ A relation between the Poisson and Erlang distribution We prove∑_n=1^∞_τ(n) (∑_i=1^n m_i,(t)/M_i,(τ)) = .Indeed, we have∑_n=1^∞_τ(n) (∑_i=1^n m_i,(t)/M_i,(τ)) = ∑_n=1^∞ e^-τ(τ)^n/n!∑_i=1^n e^- t ( t)^i-1/(i)M_i,(τ)= e^- t∑_i=1^∞( t)^i-1/(i)M_i,(τ)∑_n=i^∞ e^-τ(τ)^n/n!=e^- t∑_i=1^∞( t)^i-1/(i)M_i,(τ) M_i,(τ) = .The first equality is simply based on the definitions of _τ(n) in (<ref>) and m_i,(t) in (<ref>). The second is obtained by reordering the sums, the third from a well-known relation between the Poisson distribution and the incomplete Gamma function <cit.> and the definition of M_i,(τ) in (<ref>), and the last from the exponential series.§.§ Proof of Proposition <ref> We will need the following integrals. Let ϕ(x) be a (continuous) function. Then we obtain directly from the definition of h̃^(i) by a change in the order of integration:∫_0^1 ϕ(x) h̃^(i)_τ,_i,(x)dx = ∫_0^τ m_i,(t)/M_i,(τ)(1-([τ-t],_i) )∫_0^1 ϕ(x) g_a(τ-t,_i)(x)dx dt.With ϕ(x)=x, we get from (<ref>) and (<ref>)∫_0^1 x h̃^(i)_τ,_i,(x)dx = ∫_0^τ m_i,(t)/M_i,(τ)(1-([τ-t],_i) ) _1(a(τ-t, _i))dt.With ϕ(x)=x(1-x), we get from (<ref>) and (<ref>)∫_0^1 x(1-x) h̃^(i)_τ,_i,(x)dx = ∫_0^τ m_i,(t)/M_i,(τ)(1-([τ-t],_i) ) (a(τ-t, _i))dt.From the definition (<ref>) of G̅(τ) we deduce: G̅(τ)= ∑_n=1^∞_τ(n) ( ∑_i=1^n∫_0^∞[ X̃^(i)_τ,e^,] f()d) = ∑_n=1^∞_τ(n)∫_0^∞ f() (∑_i=1^n∫_0^1 x h̃^(i)_τ,e^,(x)dx ) d= ∑_n=1^∞_τ(n) ∫_0^∞ f() ×∫_0^τ(∑_i=1^n m_i,(t)/M_i,(τ)) (1-([τ-t],e^) ) _1(a(τ-t, e^))dt d= ∫_0^∞ f() ∫_0^τ(1-([τ-t],e^) ) _1(a(τ-t, e^))dt d . We obtained (<ref>) by using that the mutation effects _i are drawn independently from the same distribution f; (<ref>) by using (<ref>); and (<ref>) by employing (<ref>) and changing the order of integration and summation. 
Then (<ref>) yields (<ref>) by applying (<ref>), and (<ref>) yields (<ref>) by applying the time transformation τ-t→ T, and returning to t (instead of T).Note also that we suppressed the dependence on N in X̃^(i), h̃^(i), and a.Analogously, by using (<ref>) instead of (<ref>), we obtain for the variance from its definition (<ref>), V_G(τ) = ∑_n=1^∞_τ(n) ( ∑_i=1^n∫_0^∞^2 [ X̃^(i)_τ,e^,(1-X̃^(i)_τ,e^, ) ] f()d) =∑_n=1^∞_τ(n) ∫_0^∞^2 f() ( ∑_i=1^n∫_0^1 x(1-x) h̃^(i)_τ,e^,(x)dx ) d=∑_n=1^∞_τ(n) ∫_0^∞^2 f() ×∫_0^τ( ∑_i=1^n m_i,(t)/M_i,(τ)) (1-([τ-t],_i) ) (a(τ-t, _i))dt d=∫_0^∞^2 f() ∫_0^τ(1-([τ-t],e^) ) (a(τ-t, e^))dt d , where (<ref>) yields (<ref>) with the help of (<ref>). Finally, (<ref>) is obtained by the time transformation τ-t→ T (and returning from T to t). Note that for the derivation of the variance, the independence of the random variables X̃^(i) is crucial. §.§ Proof of Proposition <ref>From (<ref>) we obtainG̅(τ+1) -G̅(τ) = ∫_0^∞ f() H(e^,τ)d ,whereH(e^,τ)=∫_0^τ+1(1-([t],e^) ) _1(t, e^)dt- ∫_0^τ(1-([t],e^) ) _1(t, e^)dt =∫_τ^τ+1(1-([t],e^) ) _1(t, e^)dt.Now we use lim_τ→∞(1-([τ],e^)) = (e^), lim_τ→∞_1^∞(τ, e^)_1(τ, e^) =1 (by continuity of _1 and the fact that lim_τ→∞a^∞(τ, e^)a(τ, e^) =1), and lim_τ→∞∫_τ^τ+1_1(t, e^)dt = lim_τ→∞∫_τ^τ+1_1^∞(t, e^)dt = 1 ,where we used (<ref>).Therefore, lim_τ→∞ H(e^,τ) = (e^) and (<ref>) yieldsG̅^* = lim_τ→∞(G̅(τ+1) -G̅(τ)) = ∫_0^∞ f() (e^)d ,which is (<ref>).We start fromV_G(τ)= ∫_0^∞^2 f() ∫_0^τ(1-([t],e^) ) (t, e^)dt d . <ref>In order to derive (<ref>), we need to estimateD_V(τ)=∫_0^∞^2 f() ∫_0^τ(1-([t],e^) ) (t, e^)dt d -∫_0^∞^2 f() ∫_0^τ(1-(e^) ) ^∞(t, e^)dt d= ∫_0^∞^2 f() ∫_0^τ I_V(t,e^)dt das τ→∞, whereI_V(t,e^) = (1-([t],e^) ) (t, e^) - (1-(e^) ) ^∞(t, e^) .With D_V(∞)=lim_τ→∞D_V(τ) we obtainV_G^*/ = lim_τ→∞V_G(τ)/ = ∫_0^∞^2 f() (1-(e^) ) ∫_0^∞^∞(t, e^)dt d + D_V(∞)= ∫_0^∞/s f() (e^)e^ E_1() d+ D_V(∞),where we have used (<ref>) in the last step. Because V_G^*/ is of order 1 (x e^x E_1(x)<1 and ≤ρ), proving that |D_V(∞)| becomes arbitrarily small as N→∞ (in a sense to specified), is sufficient to establish the desired result.In view of the approximations (<ref>) and (<ref>), it is desirable to have an error of order o(s)+o(1/(Ns)) as Ns→∞ and s→0. For this reason, we will need the scaling Assumption <ref>, i.e., we start with Ns^K=C^K as N→∞, where C>0 and K>1 are constants, and K will be determined at the end of the proof.(1) In this first main step we fix the mutational effect . Then we deal with the complications arising from drawingfrom a distribution f with meanin a second main step.Let >0 be sufficiently small, where we will quantify this at the end of step (1). From the definition of T_ in (<ref>) and its explicit expression in (<ref>), we infer1- < 1-([t]) ≤ (1+) (1-) for everyt≥ T_ ,where here and below we suppress the dependence of ([t]), , and T_ on e^s.In the limit τ→∞, we split the integral with respect to time in (<ref>) as follows∫_0^∞ I_V(t,e^)dt= ∫_0^T_ I_V(t,e^)dt +∫_T_^∞ I_V(t,e^) dt . (i) First, we treat ∫_T_^∞ I_V(t,e^) dt by assuming t≥ T_. We recall the notation from (<ref>) and (<ref>). Then (<ref>) implies a^∞(t,e^) < a(t,e^) ≤ (1+)a^∞(t,e^). 
Usinga(t) = (1+u(t))a^∞(t) ,where 0<u(t)≤1 if t≥ T_ (indeed, u(t)→0 as t→∞, and for a fractional linear offspring distribution u(t) can be calculated explicitly from (<ref>)),we obtain(1-([t]) ) (a(t)) = (1+u(t))(1-) ^∞((1+u(t))a^∞(t)) .This yieldsI_V(t)=(1+u(t))(1-) ^∞((1+u(t))a^∞(t)) - (1-) ^∞(a^∞(t)) =u(t) (1-) _D(a^∞(t))+ ^2u(t)^2(1-)R_1(a^∞(t)),where_D(x) = x[(2+4x+x^2) e^x E_1(x) - 3 - x ]and R_1 are obtained by series expansion for →0. The function γ_D(x) is positive and bounded (by ≈0.217) and can be integrated. If we write T_= ln(Z)/(s) (with Z= (1+)/) and recall a^∞(t)=ν e^-s t (with =N(1-)), we obtain∫_T_^∞_D(a^∞(t))dt= -/Z/s(1-e^/Z(2+/Z)E_1(/Z))= 1/(1- 2(/Z)^-2 + O((/Z)^-3))as /Z→∞. In essentially the same way∫_T_^∞ R_1(a^∞(t)) dt ≤4/s(/Z)^-2follows.In order to satisfy /Z→∞, we need Ns→∞ as N→∞. In particular, it follows from the above results that I_V(t)>0 if t≥ T_. Therefore, we obtain∫_T_^τ I_V(t)dt≤∫_T_^∞ I_V(t)dt ≤ (1-) ∫_T_^∞_D(a^∞(t))dt + ^2(1-)∫_T_^∞ R_1(a^∞(t)) dt ≤ρ(1 + O((N)^-2)) + ^2 O((N)^-2)≤ 2 ρ .Here we used (<ref>) together with 0<u(t)≤1 in the second estimate, and (<ref>) and (<ref>) together with 1-≤ρ s (Remark <ref>) in the third. Thus, to achieve our desired result, we will need the conditions =o(s), =o((Ns)^-1), and (N)^-1 = o(1).(ii) Second, we derive an estimate for ∫_0^T_ I_V(t,e^)dt, where we recall I_V(t,e^) from (<ref>). For small t, a(t) is large because it is decreasing from N at t=0 to a(T_)≈ρ N s (if s is small), where we recall a(t)=N(1-([t],e^))e^- t. We will chooseand the scaling factor K such that N s→∞ as N→∞.Then we use the approximation (a(t))=1/a(t)-4/a(t)^2+O(a(t)^-3) which is readily derived from (<ref>). Therefore, we obtain by using the expansion of :I_V(t,e^)= (1-([t]) ) (t) - (1-) ^∞(t)= (e^ t/N - 4e^2 t/N^2(1-([t]))) - (e^ t/N - 4e^2 t/N^2(1-)) + O(a(t)^-3)= 4e^2 t/N^2(1/1- -1/1-([t])) + O(a(t)^-3) ,where again we have suppressed the dependence on e^ in the terms a and . From (<ref>) we obtain by a brief calculation1/1- - 1/1-([t])≤/1-e^- t .Now we integrate I_V(t,e^) with respect to t:∫_0^T_I_V(t,e^) dt≤4/N^2 /1-∫_0^T_ e^ t dt + R = 4/N^2 /1- e^s T_-1/s + R= 4/N^2 /1- (1+)-/s + R ≤4/N^2s(1-) + R ≤8/ρ^21/(Ns)^2 + 1/^2 O(1/(Ns)^3),where R arises from integration of terms of order a(t)^-3 or smaller, hence is of smaller order than the first term, and e^s T_=(1+)/ (see (<ref>)). In the final step we also used (1-)≥12ρ s for small s. To keep the error in (<ref>) sufficiently small, we need that ((Ns)^2)^-1 = o((Ns)^-1) (and ((Ns)^2)^-1 =o(s), which is weaker).In addition to Ns^K=C^K, choose =s^K_1, where K_1>0. The lower bound K>1 arises from the fact that in our model selection is stronger than random genetic drift. With this choice, we obtain s=O(N^-1/K), =O(N^-K_1/K), 1/(Ns)=O(N^-1+1/K), 1/((Ns)^2) = O(N^-2+(2+K_1)/K), and Ns=O(N^1-(1+K_1)/K). To keep all error terms sufficiently small, we need =o(s), 1/((Ns)^2)=o((Ns)^-1), N→∞ and Ns→∞ as N→∞. These inequalities are satisfied if and only if K_1>1 and K>K_1+1.With this choice, we get error terms of smaller order than s and a(T_)=ρ Ns→∞ (as N→∞), which is required in (i) and (ii) above.In summary, we obtain ∫_0^T_I_V(t,e^) dt=O(N^-2+(2+K_1)/K),∫_T_^∞ I_V(t,e^) dt=O(N^-K_1/K) , where K_1/K<2-(2+K_1)/K because K_1>1 and K>K_1+1. A simple choice for K_1 and K is K_1=32 and K=3. Then s=O(N^-1/3), =O(N^-1/2), 1/(Ns)=O(N^-2/3), 1/((Ns)^2) =1/( (Ns^3)(Ns^1/2))= O(N^-5/6), and Ns=O(N^1/6). 
Then the error term in (<ref>) is O(N^-5/6), and that in (<ref>) is O(N^-1/2).(2) In this second main step we assume that exponentially distributed effectswith mean . However, except for some constants occurring during integration with respect to , the same order estimates are obtained for a f having meanand bounded higher moments. We adopt the scaling assumption from step (1) with K_1>1 and K>K_1+1.(i) First we treat the case t≤ T_.We set z= K_2ln(Ns) (≥ 1), where we choose s small enough such that ≥12ρ s holds for every ≤ z. This is possible by Remark <ref> and our scaling assumption. Furthermore, we choose = r(N)/min{,z}, where r(N)=N^-K_1/K. Assuming ≤ z and using (<ref>), we obtaina(T_)= N (1+) e^-s T_ = N (1+) //1+≥12ρ Nsr(N) /min{,z} = 12ρNs r(N),and this inequality holds for every t≤ T_.Therefore, the bound (and approximation) (a(t))≤1/a(t) will be accurate in this parameter region since a(t) →∞ as N →∞.Therefore, we can use (<ref>) and obtain∫_0^z^2 f()∫_0^T_I_V(t,e^) dt d ≤2c̃/(Ns)^2 r(N)∫_0^z^2 f() /^2d ,≤2c̃/(Ns)^2 r(N) = O( N^-2+(2+K_1)/K) ,where we have used T_≤ln(1+min{,z}/r(N)) = ln(1+/r(N)) if ≤ z, and c̃>1 is a constant which also accounts for the error term R. Thus, the order of the error term is the same as in (<ref>).If > z, we use the brutal estimate I_V(t)≤(t)<1 and observe that T_≤1/sln(1+z/r(N)). Straightforward integration shows that ∫_z^∞^2 f() ∫_0^T_ I_V(t)dt d≤∫_z^∞^2 f() ∫_0^T_ 1 dt d = ∫_z^∞^2 f() T_d≤1/sln(1 + z/r(N))∫_z^∞ f() d = 1/sln(1 + z/r(N))e^-z(1+z)= /s(Ns)^K_2ln(1 +K_2ln(Ns)/r(N)) (1+K_2ln(Ns))and this holds for every τ>0.With K_2=K (which is not best possible), we obtain s(Ns)^K_2= O(N^K-1-1/K) which, by our scaling assumption, is is of greater order than the error term O(N^2-(2+K_1)/K) arising in (<ref>). Therefore, the error arising from (<ref>) is of smaller order than that in (<ref>). For instance, with K_2=K=3, we have (s(Ns)^K_2)^-1 = O(N^-5/3) so that, after taking care of the logarithmic terms, the error in (<ref>) is o(N^-4/3), hence smaller than O(N^-5/6) in (<ref>).(ii) Now we assume t>T_. Then we obtain from (<ref>)∫_0^∞^2 f() ∫_T_^τ I_V(t)dtd ≤2 ρ∫_0^∞^2 f() r(N)/min{,z} d =2 ρ r(N) ∫_0^z f()d + 2ρ/z r(N) ∫_z^∞^2 f()d <2 ρ r(N)+ 4ρ r(N) = O(N^-K_1/K)(because z≥ 1). With our standard choice of K_1=32 and K=3, this is O(N^-1/2).In summary, we obtain 0<D_V(∞)≤ O(N^-K_1/K) as N→∞. Because for fractional linear offspring distributions with v=2m/ρ, we have =0 if m≥1+2/ρ (Appendix <ref>), in some of the integrals above the upper bound ∞ should be reduced accordingly because then I_V(t)=0 for large .We provide an approximation for the time dependence of G̅(τ) for sufficiently large τ. We assume τ≫ (in particular, τ>τ_c) and suppress the dependence of a and ν on N. Then we obtain from (<ref>)G̅(τ)= ∫_0^∞ f()∫_0^τ(1-([t],e^) ) _1(a(t, e^))dt d≈∫_0^∞ f() (1-(e^)) ∫_0^τ_1(a(t, e^))dt d=∫_0^∞ f() (e^s) (∫_^τ_1(a(t)) dt + ∫_0^_1(a(t))dt )d≈(τ-) ∫_0^∞ f() (e^s) d + ∫_0^∞ f() 1/s(e^s) [exp( e^-)E_1( e^-) - e^ E_1()] d .Here, we used ([τ-t],e^)≈(e^) for sufficiently large τ to obtain (<ref>), a simple integral splitting (and =1-) for (<ref>), and finally (<ref>) and (<ref>) to arrive at (<ref>).The second term in (<ref>) accounts for the contribution of the segregating mutations to the phenotypic mean. On average, these are the mutations that appeared less thangenerations in the past and have not yet had enough time to become fixed. A permanent response of the mean is achieved by mutations that have gone to fixation. 
These are accounted for by the first term in (<ref>), which is consistent with (<ref>). §.§ Proof of Corollary <ref> (1) The approximation (<ref>) follows directly from (<ref>) by using (<ref>).(2) For equal effects, (<ref>) immediately simplifies to (<ref>) by using (<ref>).(3) follows from (<ref>) by the properties of the exponential distribution.(4) Using e^ E_1()≈ 1-1/ (because = N(e^)≫1) in (<ref>) we obtainV_G^*(f, s, N, ) ≈∫_0^∞/s(e^) (1- 1/) f()d= /s∫_0^∞(e^) f()d - /sN∫_0^∞ f()d , which yields (<ref>). We prove the exponential case (6) before (5).(6) We require Assumption <ref>. In (<ref>), we substitute 2Ns forand 2s-53(s)^2 for (e^) to obtain V_G^*(f, s, N, )≈ 4 Ns ∫_0^∞^3 (1-5/6s) e^2Ns E_1(2Ns) f()d=4 Ns^3/(2Ns-1)^4[(2Ns-1)(8(Ns)^2-14Ns + 11) - 6ln(2Ns)]-20 Ns^2^4/3(2Ns-1)^5[(2Ns-1)(24(Ns)^3 - 52(Ns)^2 + 46Ns -25)+ 12ln(2Ns) ] . By letting N→∞, we obtain to order 1/N:V_G^*(f, s, N, )≈ 4^2(1-5/2s - 1/4Ns +5/12N) ,where Assumption <ref> yields s=O(N^-1/K) and 1/(Ns) = O(N^-1+1/K), where K>2. This yields (<ref>) by omitting the term of order N^-1.(5) For equal effects, (<ref>) simplifies to4 Ns^3 (1-5/6s) e^2Ns E_1(2Ns). Using e^x E_1(x) ≈ x^-1 - x^-2 if x→∞ (with x=2Ns and the asymptotic assumption in (6)), we obtainV_G^*(, s, N, ) ≈ 2^2(1-5/6s - 1/2Ns +5/12N) ,which yields (<ref>), in which the term of smallest order, 5/12N, has been omitted.§.§ Proof of Proposition <ref> The proof of the approximation (<ref>) for V_G(τ) is based on that of the approximation (<ref>) for the stationary variance V_G^*. We recall the definition of D_V(τ) from (<ref>) and that of I_V(t) from (<ref>). Then we obtainV_G(τ)=∫_0^∞^2 f() (1-) ∫_0^τ^∞(t, e^)dt d +D_V(τ) =∫_0^∞/s(e^ E_1() -e^-τexp ( e^-τ) E_1( e^-τ) ) f()d+D_V(τ) ,where the second equality follows from (<ref>).As in step (2) of the proof of (<ref>), we choose T_=1/ln(1+/), with= r(N)/min{,z}, z= K_2ln(Ns) and, r(N)=N^-K_1/K.If τ≤ T_, we obtain from (<ref>) and (<ref>)D_V(τ) ≤ D_V(T_) = O( N^-2+(2+K_1)/K)because the integrands I_V(t) and their bounds are nonnegative. If τ > T_, we obtain from (<ref>)D_V(τ) ≤ D_V(∞) = O(N^-K_1/K). We leave the proof of (<ref>) for G̅(τ) to the interested reader. It is based on analogous estimates. The main difference is that instead of γ_D in (<ref>) the function γ̃_D(x) = 1+x-(2+x)xE_1(x) occurs, which has a diverging integral.However, instead of u(t)≤1 we can use the estimateu(t) ≤e^-s t/ (1- e^-s t)≤e^-s t/(1-) ,which follows from (<ref>). Then∫_T_^∞ e^-s tγ̃(a^∞(t)) dt = 1/s + terms of lower orderreplaces (<ref>). However, the leading-order term is now 1/(s), from which the stronger requirement K>3 is deduced.§.§ Proof of Proposition <ref>We recall Assumption <ref> and let N→∞. As a consequence Ns→∞ and s→0. The mean mutational effectis given and fixed. From the asymptotic expansion (<ref>) and the definition of _1 in (<ref>), we obtain the asymptotic equivalence _1(a) = 1- a e^a E_1(a)∼ 1/a - 2/a^2asa→∞ .In addition, max{0,1/a-2/a^2}<_1(a)<min{1/a,1} holds for every a>0 and _1(a(t)) is monotonically increasing from 0 to 1 as t increases from 0 to ∞; see text below (<ref>). The basic idea of the proof is to use the approximation_1(a(t,e^s,N)) ≈1/a(t,e^,N) = e^ t/N(1-([t],e^))for small t because a(t,e^,N), defined in (<ref>), is monotonically decreasing in t from a(0,e^,N)=N to 0 as t increases to ∞. 
Then one could proceed as follows by first substituting (<ref>) into (<ref>): G̅(τ)= ∫_0^∞ f() ∫_0^τ(1-([t],e^) ) _1(a(t, e^,N))dt d≈/N∫_0^∞ f() ∫_0^τ e^ tdtd= /Ns∫_0^∞ f() (e^τ-1)d .The order of the error of the approximation in (<ref>) can be obtain by integrating 2/a(t,e^,N)^2 accordingly. However, on the way several technical problems occur. In particular, if mutation effects are drawn from a distribution, in our case the exponential distribution with mean , then for given τ, s, and N, the approximation (<ref>) will be accurate ifis sufficiently small, but will fail ifis extremely large because then a(τ,e^,N) can drop from N to close to 0 in the first generation. Therefore, we will proceed as in (<ref>) only if ≤ z, where we choose z=K̃ln(Ns) and determine K̃(≥1) below. Because we assume an exponential mutation distribution f with mean , we have [ > z]= e^-z. If > z, we use 0<_1(a)<1 (see above). Straightforward integration shows that 0 <∫_z^∞ f() ∫_0^τ(1-([t],e^) ) _1(a(t, e^,N))dt d <∫_z^∞ f() ∫_0^τ 1 dt d = τ e^-z(1+z) = τ 1+K̃ln(Ns)/(Ns)^K̃ = τ O(ln(Ns)/(Ns)^K̃) asN→∞ ,and this holds for every τ>0. For ≤ z, we start from (<ref>) and use (<ref>) and (<ref>) to obtain ∫_0^z f() ∫_0^τ(1-([t],e^) ) _1(a(t, e^,N))dt d= ∫_0^z f() ∫_0^τ1/N e^ tdtd+ R(τ)= 1/Ns∫_0^z f() (e^τ-1)d+ R(τ) = 1/Ns( s τ/1- s τ - R_1(τ) ) + R(τ) .Here, R_1(τ) = e^-z( e^z sτ/1- s τ -1 )and R(τ) is, to leading order in 1/a^2, the error term arising from the approximation (<ref>), i.e.,R(τ) = -2 ∫_0^z f() ∫_0^τ1-([t],e^)/a(t,e^,N)^2dtd .To obtain bounds on R_1(τ) and R(τ), we assume sτ≤14 (this is not best possible; any upper bound <12 would be sufficient to arrive at (<ref>)).For R_1(τ) we obtain by observing its convexity and bounding it by the linear function assuming 0 at τ=0 and e^-z(43e^z/4-1) at τ=1/(4 s):R_1(τ)≤ 4 sτ e^-z(4/3 e^z/4 -1 ) ≤16/3 sτ e^-3z/4 = 16/3 sτ (Ns)^-3K̃/4 ,which is of smaller order in magnitude than the leading order term s τ/1- s τ in (<ref>) (this holds for every K̃≥1).Now we derive a bound for R(τ).From the monotonic decay in t of the probabilities of non-extinction (1-([t],e^)) and the approximation 1-(e^)=(e^)≈ρ in Remark <ref>, which requires thatis sufficiently small, we infer1-([t],e^) > 1-(e^) > cif c<ρ. We can choose, for instance, c=ρ/2. Henceforth, we assume that s is small enough such that (<ref>) holds for all ≤ z. (This is possible because with Ns^K=C^K we obtain sz = sK̃ln(Ns) = CN^-1/KK̃ln(N^(K-1)/KC) →0 as N→∞. Hence, czs→0 as N→∞, which guaranties that (<ref>) holds for every ≤ z ifN is large enough). Then, we obtain for every t and ≤ z:2(1-([t],e^))/a(t,e^,N)^2 = 2e^2t/N^2(1-([t],e^) ) < 2e^2 t/N^2(1-(e^) ) < 2e^2 t/N^2 c .Therefore, the integral with respect to t in (<ref>), is bounded by∫_0^τ2e^2t/N^2 c = 1/N^2s^2ce^2τ-1/^2 .Substituting this into (<ref>) and integrating with respect towe obtain|R(τ)| ≤1/N^2s^2c∫_0^z f()e^2τ-1/d=1/N^2s^2c(-ln(1-2sτ) - R_2(τ) ) < 1/N^2s^2c(3sτ - R_2(τ)) ,where R_2(τ) = E_1(z(1-2sτ))-E_1(z) and the linear bound on the logarithmic term applies if sτ<14. Upon two-fold differentiation, we observe that R_2(τ) is convex and increases from 0 at τ=0 to E_1(z/2))-E_1(z) at τ=1/(4s). Therefore, we bound R_2(τ) by the corresponding linear function in τ and obtainR_2(τ) ≤4sτ( E_1(z/2)-E_1(z)) = sτ O( 1/(Ns)^K̃/2ln (Ns))by series expansion for Ns→∞ and using z=K̃ln(Ns). In summary, we obtain from (<ref>) and (<ref>), and then choosing K̃=4:|R(τ)|≤3τ/N^2sc(1 + O( s^2-K̃/2/N^K̃/2ln(Ns)) ) = τ O( 1/N^2s). 
We note that estimates similar to those in (<ref>) show thata(τ,e^sz,N) ≥ const(Ns)^3/4ln(Ns) ,where we have used sτ≤14. This shows that the approximation (<ref>) is indeed justified for every ≤ z.By combining (<ref>) and (<ref>), and substituting the bounds (<ref>) for R_1(τ) and (<ref>) for R(τ), we obtainG̅(τ)= ∫_0^∞ f() ∫_0^τ(1-([t],e^) ) _1(a(t, e^,N))dt d= /Ns( s τ/1- s τ - R_1(τ) ) +R(τ) + τ O(ln(Ns)/(Ns)^K̃) = τ/Ns( s/1- s τ + O(s/(Ns)^3K̃/4)) +τO(1/N^2s) + τO(ln(Ns)/(Ns)^K̃) = τ/N[/1- s τ + O((Ns)^-3K̃/4) + O((Ns)^-1) + O(ln(Ns)/N^K̃-1s^K̃) ] =τ/N[/1- s τ + O(N^-1+1/K)] ,where in the last step we chose K̃=4 so that the first error term is O((Ns)^-3) and the last error term is O(ln(Ns)Ns1N^2s^3) = O(ln(Ns)Ns N^-2+3/K) = o( N^-1+1/K) = o((Ns)^-1)by using once again the scaling assumption <ref>. This finishes the proof of (<ref>).From (<ref>) and (<ref>) we get (a)∼ 1/a-4/a^2 as a→∞. Therefore, the proof is analogous to that of (<ref>), i.e., it is based on the idea of substituting this approximation into (<ref>), which yieldsV_G(τ)=∫_0^∞^2 f() ∫_0^τ(1-([t],e^) ) (a(t, e^))dt d≈/N∫_0^∞^2 f() ∫_0^τ e^s(t)dtd= /Ns∫_0^∞ f() (e^τ-1)d ,which yields (<ref>) if f is exponential with mean .We omit several of the details of the proof and present only the most relevant expression.Instead of (<ref>) we obtain0 <∫_z^∞^2 f() ∫_0^τ(1-([t],e^) ) _1(a(t, e^,N))dt d < τ O(ln(Ns)^2/(Ns)^K̃) . Instead of (<ref>) we obtain∫_0^z^2 f() ∫_0^τ(1-([t],e^) ) _1(a(t, e^,N))dt d= /Ns(1/(1-sτ)^2 - 1 - R_1^V(τ) ) + R^V(τ) ,where R_1^V(τ) and R^V(τ) are defined analogously to R_1(τ) and R(τ) above. For R_1^V(τ) one obtainsR_1^V(τ)=e^-z(e^szτ(1+z(1-sτ)/(1-sτ)^2 -(1+z) ) ≤τO( sln(Ns)/(Ns)^3K̃/4) ,where the expression in z is monotone increasing and convex in τ, and we bound it by the linear function assuming 0 at τ=0 and the corresponding value at τ=1/(4s). Finally, we obtain after similar calculations and series expansions |R^V(τ)| ≤2/N^2s^2c∫_0^z f()(e^2τ-1)d= 2/N^2s^2c(2sτ/1-2sτ + R_2(τ)) = τ O(1/N^2s).Putting all this together, writing the leading order term in (<ref>) as 1/(1-sτ)^2 - 1 = 2sτ 1-sτ/2/(1-sτ)^2and choosing K̃=4, we obtainV_G(τ)= ∫_0^∞^2 f() ∫_0^τ(1-([t],e^) ) _1(a(t, e^,N))dt d=2^2τ/N[1-sτ/2/(1-sτ)^2 + O(N^-1+1/K)] .The approximations (<ref>) and (<ref>) are then obtained by a series expansion of the leading order terms.For equal mutation effects the proof is much simpler because we do not have to integrate with respect to . Indeed, we observe directly from (<ref>) that, to leading order in Ns, G̅(τ)= /Ns (e^τ-1). The estimate of the error term, analogous to R(τ), is also much simpler and straightforwardly yields (cf. (<ref>))|R(τ)| ≤∫_0^τ2e^2t/N^2 c = 1/N^2s^2ce^2τ-1/^2 ,which is of smaller order, by a factor of 1/(Ns), than the leading order term. Of course, this requires e^2τ=O(1), but it does not require the stronger constraint τ<14. The approximation for V_G(τ) is derived analogously. § SEGREGATING SITES: A BRANCHING PROCESS APPROXIMATION Here we derive an alternative approximation for the number of segregating sites by relying on our branching process approach, similar as in Sect. <ref>, and amending it by a diffusion approximation for the mean fixation time.At a given generation τ, τ mutations have occurred on average.A mutant with fitnessis still present in the population with probability 1-(τ-t,), where t is the number of generations since its occurrence and (n,) is the probability of loss until generation n; see (<ref>). On average, t= i/ for the ith mutation event. 
Some of those mutations may have already reached fixation and are no longer segregating in the population. Fixation occurs on average aftergenerations (see eqs. <ref> and <ref>).With this in mind and following the definitions the expectations of the phenotypic mean and variance in (<ref>) and (<ref>), respectively, we define random variables S^(i)_τ,e^,, which are 1 when locus i is segregating in the population and 0 otherwise. Using the expected fixation timein (<ref>) from the diffusion approximation, we setτ̃= min{(,N),τ},which is the expected time a mutant present in generation τ has already been segregating. Then [S^(i)_τ,e^,] ≈∫_0^τ̃m_i,(t)/M_i,(τ) (1- ([τ̃-t],e^)) dt.We define the expected number of segregating sites in our model, where mutation effects are drawn from a distribution f and the offspring distribution is Poisson, by[S](τ ,f ,s ,N,) = ∑_n=1^∞_τ(n) ( ∑_i=1^n∫_0^∞[ S^(i)_τ,e^,] f()d) ,By assuming equality in (<ref>), we obtain the following approximation for [S]:S̃(τ ,f ,s ,N,)= ∑_n=1^∞_τ(n) ( ∑_i=1^n∫_0^∞∫_0^τ̃m_i,(t)/M_i,(τ)(1- ([τ̃-t],e^) )dt f()d)= ∫_0^∞∫_0^τ̃∑_n=1^∞_τ(n) ( ∑_i=1^nm_i,(t)/M_i,(τ)) (1- ([τ̃-t],e^))dtf()d = ∫_0^∞f()∫_0^τ̃(1- ([τ̃ -t ],e^ ) ) dt d≈∫_0^∞f()∑_i=0^[τ̃](1- ([τ̃]-i,e^ )) d =∫_0^∞f()∑_j=0^[τ̃](1- (j,e^ ))d .The above calculations are analogous to those in the derivation of (<ref>) for G̅(τ) in (<ref>) upon substitution of [X̃^(i)_τ,e^,] by [S^(i)_τ,e^,]. If all effects are equal to , then τ̃=τ̃() is constant and the integration in (<ref>) collapses to the computation of the sum, i.e., to (<ref>) in the main text. Unless the distribution f() is a delta function, as for equal mutation effects, no further simplification of (<ref>) seems possible, except when τ≪(,N). Then, for given small τ, we have τ̃=τ except for the (very rare) very large values ofin the tail of f for which (,N)<τ. As a consequence, we obtain from (<ref>) the approximation (<ref>) in the main text. § SUPPLEMENTARY FIGURES elsarticle-harv | http://arxiv.org/abs/2310.18106v1 | {
"authors": [
"Hannah Götsch",
"Reinhard Bürger"
],
"categories": [
"q-bio.PE"
],
"primary_category": "q-bio.PE",
"published": "20231027124453",
"title": "Polygenic dynamics underlying the response of quantitative traits to directional selection"
} |
smalltableaux definition theoremTheorem[section] lemma[theorem]Lemma corollary[theorem]Corollary proposition[theorem]Proposition conjecture[theorem]Conjecture definition[theorem]Definition remark[theorem]Remark example[theorem]Example | http://arxiv.org/abs/2310.17906v1 | {
"authors": [
"Kyu-Hwan Lee"
],
"categories": [
"math.RT",
"math.CO"
],
"primary_category": "math.RT",
"published": "20231027055243",
"title": "Data-scientific study of Kronecker coefficients"
} |
1 Emerging Science and Technology Directorate, Agency for Defense Development, Daejeon 34186, South Korea
*[email protected]

In practical applications of free-space quantum communications, the use of active beam coupling and stabilization techniques offers notable advantages, particularly when dealing with limited detecting areas or when coupling into single-mode fibers (SMFs) to mitigate background noise. In this work, we introduce a highly enhanced active beam-wander-correction technique tailored to efficiently couple and stabilize beams into SMFs, in particular in scenarios where the initial optical alignment with the SMF is misaligned. To achieve this, we implement an SMF auto-coupling algorithm and a decoupled stabilization method that effectively and reliably correct beam wander caused by atmospheric turbulence. The performance of the proposed technique is validated through quantitative measurements of the temporal variation of the coupling efficiency (coincidence counts) of a laser beam (entangled photons). The results show significant improvements in both the mean value and the standard deviation of the coupling efficiency, even in the presence of 2.6 km atmospheric turbulence effects. When a laser source is used, the mean coupling efficiency increases by more than 50 %, accompanied by a 4.4-fold improvement in the standard deviation. For the entangled photon source, a mean value increase of 14 % and an approximately 2-fold improvement in the standard deviation are observed. Furthermore, the proposed technique restores the fidelity of the polarization-entangled state, which is degraded by atmospheric effects in the free-space channel, to a level close to the fidelity measured directly at the source. Our work will be helpful in designing spatial-light-to-fiber coupling systems not only for free-space quantum communications but also for high-speed laser communications.

§ INTRODUCTION
The field of quantum key distribution (QKD) has witnessed remarkable advancements in achieving long-range transmissions since its initial demonstration in 1992 <cit.>. Fiber-based links have demonstrated transmission distances of up to 1000 km in twin-field QKD <cit.> and up to 442 km using measurement-device-independent QKD <cit.>. Terrestrial free-space links have shown impressive capabilities up to 144 km <cit.>. Recent advancements in free-space links have been particularly noteworthy: QKD through satellite-ground free-space links has become a well-established technology <cit.>, QKD over air-to-ground links using unmanned air vehicles (UAVs) and aircraft has achieved notable progress <cit.>, and urban QKD systems designed for operation in daylight conditions have seen substantial development <cit.>. Despite these achievements, the practical implementation of reliable free-space quantum communication still faces substantial technical challenges related to background noise and atmospheric effects. One effective technique to mitigate the impact of background noise in a quantum communication system is to reduce the field of view of the receiver's detectors.
Especially when dealing with small detecting areas or coupling into a single-mode fiber(SMF), efficient and stable light coupling into the optical fiber is crucial to ensure communication efficiency and the success rate of the receiving system. The process of light coupling into an optical fiber is influenced by beam wander, platform vibration, and misalignment between the fiber and focus spot. Here, atmospheric temperature variations, humidity gradients, and fast-varying atmospheric turbulence can cause beam wander, beam flickering, and wavefront distortion, leading to random fluctuations of the received beam and the apparent azimuth of the receiver over time scales ranging from milliseconds to minutes, resulting in increased system losses <cit.>.To address these challenges, active beam-wander-correction techniques plays a crucial role in minimizing losses and achieving efficient coupling and stabilization of beams into an optical fiber coupling system, although the utilization of adaptive optics techniques compensating wavefront error is also needed for further decreasing optical losses. Various excellent methods have been proposed so far for beam stabilization and wavefront correction <cit.>. However, there has been no research demonstrating the performance of beam-wander-correction techniques in a FSO setup based on entangled photon-pairs, where the coupling performance of the photon-pairs can be directly monitored. In this study, we focus on the beam-wander-correction technique of minimizing beam path fluctuation caused by atmospheric turbulence through a 60-m indoor experiment, primarily aiming for stable beam coupling into a SMF. Such investigation is crucial for exploring novel and efficient approaches to tackle the challenges posed by atmospheric effects in free-space quantum communication systems.Active beam-wander-corrections involve real-time adjustments and manipulation of the beam path to compensate for environmental factors or imperfections in the optical setup. By actively monitoring and dynamically adjusting the beam path, it becomes possible to mitigate losses caused by factors such as misalignment, atmospheric turbulence, or thermal effects. In previous free-space quantum communication setups, the predominant approach for mitigating beam wander and platform jitter effects at the receiver has relied on the utilization of a single fast-steering mirror(FSM). However, the limitation of employing a sole FSM lies in its capacity to stabilize the beam path exclusively along a single plane of the optical axis <cit.>. Particularly, when the transmitting or receiving module moves or when the direction of the incoming beam deviates at the receiver, the coupling efficiency may significantly decrease. Therefore, to achieve beam wander and jitter correction for all beam paths corresponding to the entire optical axis, the implementation of multiple FSMs is required, along with the development of algorithms specifically designed for this purpose.In this work, we present highly-enhanced active beam-wander-correction technique using two FSMs and two position-sensitive-detectors(PSDs) configuration, alongside an SMF auto-coupling algorithm and a decoupled stabilization method. This approach is particularly applicable in scenarios where the initial optical alignment deviates from the SMF. 
To effectively simulate atmospheric effects in long-range outdoor environments, we collected beam wander data at a distance of 2.6 km and constructed a vibration simulator by incorporating an additional FSM at the transmitter. Here, due to spatial constraints within our research facility, the simulated distance is limited to 2.6 km.By subjecting the system to these simulated atmospheric effects, we rigorously tested the performance of the proposed method, closely monitoring the SMF-coupling efficiency (coincidence counts) for both a laser source and an entangled photon source. Notably, our experimental results demonstrate very stable coupling efficiency, and we observed a remarkable reduction of its fluctuation by a factor of 4.4 for the laser source and a factor of 2 for the entangled photon source, as compared to the case without correction. The results provide valuable insights into the design of SMF-coupled optical links for both free-space quantum communications and high-speed laser communications applications. The novel combination of double FSM and PSD configuration, SMF auto-coupling algorithm, and decoupled stabilization method has the potential to significantly enhance the reliability and performance of free-space quantum communication systems and advancements in high-speed laser communication technologies. § AUTO-COUPLING ALGORITHM AND DECOUPLED STABILIZATION METHOD Figure <ref> illustrates the schematic of the SMF auto-coupling system employing two FSMs. The system effectively controls the angles of the two FSMs, which are determined by their horizontal components (α_1,α_2) and vertical components (β_1,β_2). To achieve precise beam alignment, a collimated beam is directed towards FSM1 to scan the angle of the optical beam path and subsequently reflected to FSM2 to scan the radial position of the beam spot. The beam is then focused by a collimator with a focal length F and finally coupled into the SMF.The SMF coupling efficiency η can be defined as the ratio of the power P_c coupled into the SMF to the input power P_i <cit.>:η= P_c/P_i=|∫∫ E^*(x,y)· F_G(x,y)dxdy|^2/∫∫| E(x,y)|^2dxdy,where E(x,y) and F_G(x,y) refer to the incident electromagnetic field and the Gaussian field distribution of the SMF, respectively. If an optical beam path has misaligned with the normal axis of the SMF, the beam propagating direction k⃗ undergoes a radial position shift r'=√(x'^2+y'^2) and an angular offset θ, defined as the angle between k⃗ and z-axis, at the collimator lens. In this case, the incident Gaussian field with the waist radius of w_s can be given by E(x,y) = √(2/π w^2_s) exp[-(x-x')^2+(y-y')^2/w^2_s]×exp[ikθ (xcosϕ+ysinϕ)],where ϕ is the azimuthal angle of k⃗_ which is the projection of k⃗ into the xy-plane.In addition to the incident Gaussian field, the fiber's Gaussian distribution with a mode-field radius of w_0 can be expressed inF_G(x,y)= √(2/π w^2_m) exp[-x^2+y^2/w^2_m],where w_m=λ F/π w_0 is the effective mode-field radius, F is the focal length of the collimator, and λ is the wavelength of the incident Gaussian beam. If we assume that the aperture radius of the collimator is much larger than the Gaussian beam size and the angular offset θ is very small, the coupling efficiency η of the SMF can be calculated asη = [2 w_m w_s/w_m^2+w_s^2 exp(-4r'^2+k^2γ^2 w_m^2 w_s^2/4(w_m^2+w_s^2))]^2where γ = arccos(cosθcosϕ) and k=2π/λ is the wave vector. 
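To illustrate the closed-form expression for η, the following sketch (illustration only, not part of the experimental software) evaluates the coupling efficiency as a function of the radial shift r' and a small tilt, taking ϕ = 0 so that γ = θ. The grouping of the exponent as -(4r'^2 + k^2 γ^2 w_m^2 w_s^2)/(4(w_m^2 + w_s^2)) is our reading of the formula above, and the numerical parameters are those quoted in the next paragraph:

import numpy as np

lam = 810e-9                 # wavelength (m)
F   = 8.1e-3                 # collimator focal length (m)
w0  = 2.5e-6                 # fiber mode-field radius (m)
ws  = 0.6e-3                 # incident beam waist radius (m)
wm  = lam * F / (np.pi * w0) # effective mode-field radius
k   = 2.0 * np.pi / lam

def eta(r_shift, gamma):
    pref = 2.0 * wm * ws / (wm**2 + ws**2)
    expo = -(4.0 * r_shift**2 + k**2 * gamma**2 * wm**2 * ws**2) / (4.0 * (wm**2 + ws**2))
    return (pref * np.exp(expo))**2

print("w_m =", wm)                          # about 0.84 mm, comparable to w_s
print("perfect alignment:", eta(0.0, 0.0))
print("r' = 0.2 mm:", eta(0.2e-3, 0.0))
print("tilt 50 urad:", eta(0.0, 50e-6))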
Here, the radial position shift, r'=√(r^2_H+r^2_V), consists of the horizontal component, r_H=2d_1α_1+2d_2(α_1-α_2), and the vertical component, r_V=2d_1β_1+2d_2(β_1-β_2), where d_1 refers to the distances between FSM1 and FSM2, and d_2 denotes the distance between FSM2 and collimator. The SMF auto-coupling system operates on an angle scan, followed by the position scan, employing two FSMs. During the angle scan, FSM1 performs random scans of horizontal and vertical angles, namely α_1 and β_1, while keeping α_2 and β_2 fixed. Subsequently, the simultaneous scanning of α_1 and α_2 (β_1 and β_2) takes place, with the constraint that the difference α= α_1-α_2 (β= β_1-β_2) remains constant, thus facilitating the position search.At the receiver end, the optical setup comprises two FSMs (S-330.4SL, Physik Instrumente), exhibiting an angular resolution of 0.25 μrad. A fiber collimator (60FC-4-M8-10, Schäfter + Kirchhoff) with a focal length F of 8.1 mm and an SMF (780HP, Thorlabs) with a mode-field radius (w_0) of 2.5 μm are also incorporated.Notably, system parameters such as the wavelength λ are set to 810 nm, while the incident Gaussian field possesses a waist radius (w_s) of 0.6 mm. The distances d_1, d_2, and d_3 are specified as 9 cm, 28 cm, and 76 cm, respectively. To achieve precise angle variations, a relatively large path difference between PSD1 and PSD2 (C10443-03, Hamamatsu) has been intentionally established.The auto-coupling algorithm encompasses three primary stages as visually depicted in Fig. <ref>(a) and (b). Initially, the random search phase involves the random scanning of α_1 and β_1 to ascertain the position where the second-order correlation function g^(2)_si(τ) of coupled signal photons in the receiver and directly measured idler photons from the source (see Fig. <ref>), or the laser beam power, is measured from the initial alignment position, which lies outside the confines of the SMF. In our system, the angular range of α_1 and β_1 spans from -5 to 5 mrad. To achieve the randomness, the input voltage for FSM1 is varied within the range of 0 to 10 V, with the randomness being generated through the LabView graphical programming environment.Subsequently, the angle searching process proceeds in sequential manner, wherein α_1 and β_1 are scanned to identify the optimum values within a narrower range compared to the random search phase. During actual measurements, the scan ranges of α_1 and β_1 are reduced from 10 to 3 mrad around the position where g^(2)_si(τ) can be measured through random search process starting from a completely misaligned initial position, and at that time, the step size is set to 100 μrad. At each step of the scanning process, g^(2)_si(τ) and g^(2)_peak, which is the peak value of g^(2)_si(τ), are measured, and the maximum g^(2)_peak value is continually updated based on the current measurements.After the angle scanning process, α_1 and β_1 values are set to the point with the maximum g^(2)_peak, and the angle search process is ended.Figure <ref>(b) shows the flowchart of the position search algorithm. In this process, the optimum positions are determined by the simultaneous scan of α_1 and β_1 in steps of 0.5 mrad across the full range (-5 to 5 mrad), while the incident angles α (=α_1-α_2) and β (=β_1-β_2) have fixed values.Consequently, while the difference in angles between the two FSMs remains constant, the beam's entry angle into the collimator lens remains fixed, while the beam path position at the collimator lens changes correspondingly. 
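A minimal sketch of the random-search and angle-search stages described above is given below. It is not the LabView/FPGA implementation used in the experiment: the measured g^(2)_peak (or laser power) is replaced by a made-up Gaussian objective whose optimum position and width are purely illustrative; only the scan ranges (±5 mrad random search, a 3 mrad angle-scan window with 100 μrad steps) follow the description above. The position-search stage proceeds analogously with a simultaneous scan of both FSMs:

import numpy as np

rng = np.random.default_rng(1)
alpha_opt, beta_opt, width = 1.3, -2.1, 0.4      # hypothetical optimum (mrad) and acceptance width

def readout(alpha, beta):
    # stand-in for the measured g2 peak (or laser power) at FSM1 angles (mrad)
    return np.exp(-((alpha - alpha_opt)**2 + (beta - beta_opt)**2) / (2 * width**2))

# Stage 1: random search over the full +-5 mrad range until a signal is seen.
best = (0.0, 0.0, 0.0)
while best[0] < 0.1:
    a, b = rng.uniform(-5, 5, size=2)
    best = max(best, (readout(a, b), a, b))

# Stage 2: sequential angle scan (3 mrad window, 100 urad steps) around that point.
a0, b0 = best[1], best[2]
grid = np.arange(-1.5, 1.5 + 1e-9, 0.1)
a1 = max(((readout(a0 + d, b0), a0 + d) for d in grid))[1]
b1 = max(((readout(a1, b0 + d), b0 + d) for d in grid))[1]
print(f"random search start: ({a0:+.2f}, {b0:+.2f}) mrad")
print(f"after angle scan:    ({a1:+.2f}, {b1:+.2f}) mrad, readout {readout(a1, b1):.3f}")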
The results of the complete scan are represented in Fig. <ref>(c) as a contour plot of the coupling efficiency η, revealing its dependency on α_1 and β_1. To achieve smoother representation, the measured data underwent smoothing using a Gaussian filter. Subsequently, based on the scan result, the input voltages were applied to the FSMs, corresponding to the point of the maximum coupling efficiency, η_max. The proposed algorithm exhibits applicability in both the classical and quantum source schemes. For laser sources, instead of the second-order correlation function g^(2)_si(τ), the optical power P coupled into a SMF was measured. Following the auto-coupling process, a beam stabilization algorithm based on the proportional-integral-differential (PID) control algorithm is established to ensure the stability of the laser beam(or entangled photons) in the laser(or quantum) communication channel. The stabilization system incorporates two PSDs for accurate beam position measurement. Leveraging the two FSMs originally employed in the auto-coupling process, the system effectively achieves beam wander stabilization. To address the rapid fluctuations of beam wander, a common simplified approach involves using a single set of FSM-PSD, where a PID controller is applied to the FSM to maintain the laser beam consistently at the center of the PSD, thereby achieving stabilization. However, in our study, we adopt a more elaborate strategy employing two sets of FSM-PSD to enable comprehensive beam-wander-correction for all possible beam paths. Although previous research has successfully achieved beam stabilization in FSO communication systems by employing two sets of FSM-PSD <cit.>, the application of the decoupled method proposed in this study, enabling independent control of two FSMs for effective beam stabilization, has not been implemented yet in free-space quantum communication systems.In a typical scenario where the PID control method is employed to concurrently manage both FSMs, it exerts an influence on the laser's motion and consequently impacts the measurements obtained from PSD1 and PSD2. Therefore, this simultaneous control could significantly affect the stability of the feedback process. Hence, to achieve stable and accurate beam positioning, each PID control loop is designed to be decoupled from one another, guaranteeing that their operations remain independent and do not interfere with each other.Let's denote the beam positions detected by PSD1 and PSD2 at the point of maximum coupling efficiency point as (x_1, y_1) and (x_2, y_2), respectively, as depicted in Fig. <ref>(a). For simplicity, we describe only the x-component feedback process, noting that the feedback process for y-component is identical. Figure <ref>(b) shows the PID control algorithm implemented in the decoupled stabilization method.To regulate the incident beam angle on the collimator (Fig. <ref>), we employ the difference between the position values of PSD1 and PSD2, denoted as x_1-x_2. Additionally, to control the beam position incident on the collimator, we utilize the position value x_1 from PSD1, which is situated at the same distance as the collimator. The feedback process yields output values x_1'-x_2' and x_1', which are subtracted from x_1-x_2 and x_1, respectively, generating error values that are fed back into the PID controller. The decoupling process, represented by the matrix D = [ 1 1d_1+d_2/d_21 ], is implemented to independently control the PID control values, c_1 and c_2. 
The resulting output values α_1 and α_2 from the decoupling process serve as inputs to FSM1 and FSM2, respectively, facilitating the adjustment of mirror angles. These adjusted angles influence the beam path passing through the optical system, leading to output values x_1'-x_2' and x_1'. The optical system can be effectively represented as a transfer function P= [ d_2-d_3 -d_2+d_3 d_1+d_2 -d_2 ] illustrating the overall control algorithm process. Upon careful examination of the transfer function, P[α_1α_2 ] = [(-d_2+d_3)d_1/d_20 0 d_1 ][ c_1 c_2 ], it becomes evident that it solely comprises diagonal terms. This noteworthy observation indicates that each PID control is adept at independent control of c_1 and c_2 <cit.>.§ EXPERIMENTAL SETUPFigure <ref> shows the experimental setup for the FSO system incorporating beam-wander-corrections based on a double FSM-PSD configuration. A continuous-wave (CW) laser operating at a wavelength of 810 nm serves as the classical light source. In contrast, for the quantum source, we generate polarization-entangled photons within a Sagnac interferometer. Specifically, a single-frequency laser operating at 405 nm pumps a 30 mm-long periodically poled potassium titanyl phosphate (ppKTP) crystal with a 3.425 μm poling period. Subsequently, non-degenerated photon-pairs comprising signal (∼780 nm) and idler (∼840 nm) are collinearly generated via a type-0 SPDC process. The emitted photon-pairs in both clockwise and counter-clockwise directions within the Sagnac interferometer form a polarization-entangled state, represented as |Φ^+⟩=1/√(2)(| HH ⟩+| VV ⟩). Insets in Fig.<ref> illustrate the density matrix of the two-photon polarization state, exhibiting a fidelity F of 0.97, and the second-order correlation function g^(2)(τ), which determines the rate of coincidence detection between modes s and i at a time delay τ. Here, the quantum source is operated in a relatively lower pump power regime where fidelity is relatively high, taking into account the multiphoton effects. The coincidence counts was approximately 9.0 × 10^4 Hz/mW at the transmitter. A CW laser with a wavelength of 660 nm, referred to as the tracking laser, is employed to monitor and correct beam wander effects in the received beam.The transmitter units, receiver units, and mirror targets are linked by a free-space channel with a round-trip distance of 60 m. The transmitter consists of a fiber collimator and an achromatic Galilean beam expander (GBE10-B, Thorlabs) with a fixed magnification of 10 times. To simulate air turbulence encountered in outdoor environments, two piezo-actuators (P-840, PI) are installed on 2-inch mirror, serving as an atmospheric turbulence simulator.By inducing rapid vibrations along the x- and y- axes of the mirror, these actuators introduce significant perturbations to the transmitted light source, effectively emulating external atmospheric turbulence. The simulated data for the air turbulence is obtained through actual measurements. The experimental measurement setup, as depicted in the inset of Fig.<ref>, utilizes a 20× beam expander at the transmitter and a 28× beam expander at the receiver to measure the beam wander of the transceiving laser beam using a PSD. The positions along the x and y-axes are measured at a rate of 10 kHz. 
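Returning to the decoupled control law described above, the claimed diagonalization is easy to verify numerically. The following sketch (illustration only, not the FPGA code) multiplies the plant matrix P by the decoupling matrix D for the quoted geometry (d_1 = 9 cm, d_2 = 28 cm, d_3 = 76 cm) and confirms that the off-diagonal terms vanish, so that c_1 and c_2 act on independent outputs:

import numpy as np

d1, d2, d3 = 0.09, 0.28, 0.76                      # metres

P = np.array([[d2 - d3, -d2 + d3],
              [d1 + d2, -d2     ]])
D = np.array([[1.0,              1.0],
              [(d1 + d2) / d2,   1.0]])

PD = P @ D
print(PD)                                           # off-diagonal terms vanish
print("expected diagonal:", (-d2 + d3) * d1 / d2, d1)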
From these position records, the power spectral density is calculated by averaging 10 measurements with a 0.05 Hz frequency interval, in order to analyze the frequency characteristics of the measured signal, which represents the beam wander due to atmospheric turbulence.

The measurements are conducted at distances of 60 m (indoors), 250 m (outdoors), and 2.6 km (outdoors). During the outdoor measurements, the weather conditions are recorded: a temperature of 26.5 ^∘C, a wind speed of 2 m/s, a humidity of 55 %, and partially cloudy skies, from 15:00 to 17:00 on 20 June 2023 (Korea Standard Time, KST). The results show that spectral components in the range of 100 Hz to 300 Hz become more pronounced as the distance increases, with a particularly large magnitude at 2.6 km compared to 250 m. For the turbulence simulation, the time-series data of the PSD measurements along the x- and y-axes at 2.6 km are encoded into the simulator to replicate, within the 60-m indoor FSO setup, the atmospheric effects experienced at 2.6 km. The simulated data from the indoor setup and the measured data at 2.6 km agree well, as corroborated by Fig. <ref>. Here, the vibration frequencies generated at the transmitter span 0 to 1000 Hz, and the angular range of the simulated beam wander, from -0.04 ^∘ to 0.04 ^∘, is kept marginally smaller than the acceptance of the primary lens of the receiver telescope. Since the measured beam-wander angle at a distance of 2.6 km is approximately ±0.02 ^∘, the 60-m indoor system adequately reproduces the 2.6 km outdoor conditions.

The receiver comprises a Galilean telescope (BE05-10-B, Thorlabs), offering variable beam expansion with 5× to 10× magnification and featuring an objective lens of 45 mm diameter. A collimated beam, 15 mm in diameter, is propagated towards the receiver via a target mirror placed at a distance of 30 m. To compensate for atmospheric effects, the received beam undergoes a beam stabilization process within the receiver module. Beam stabilization is achieved using two PSDs and two FSMs. The transmitter emits either a tracking laser with a wavelength of 660 nm, following the same path as the quantum light source, or a laser source with a wavelength of 810 nm. Upon reception, the 660 nm tracking beam is directed towards the PSD, while the 810 nm beam is directed towards the fiber coupler through the dichroic mirror. The beam stabilization algorithm described above is implemented on an FPGA (USB-7855, NI), which controls the angles of the FSMs. The 810 nm beam is ultimately coupled into a SMF, and the coupled optical power within the SMF is evaluated using a fiber-type photodiode. When the SPDC source is employed, the idler photons are sent directly to a SPCM and serve as heralding photons for the signal photons. The signal photons propagate through the 60 m free-space channel, where they are detected by another SPCM, and the coincidence counts are measured using a time-correlated single-photon counter (TCSPC). The peak values of the g^(2)_si(τ) functions are used to determine the optimal angle and position of the signal photons, thereby maximizing the coincidence counts.

§ RESULTS

We first conduct experimental investigations to validate the proposed auto-coupling algorithm and stabilization method within the 60 m FSO setup, employing the 810 nm laser source. The free-space channel loss just before the collimator is approximately 16 %.
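The coupling-efficiency statistics reported in this section are simple reductions of the recorded η time series: the quoted enhancement factors are ratios of sample means and of sample standard deviations, with and without stabilization. A minimal sketch is shown below; the synthetic arrays merely stand in for the recorded data and reuse the summary values quoted later in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-ins for the recorded coupling-efficiency time series
# under simulated 2.6 km turbulence (free-running vs. stabilized).
eta_free = rng.normal(0.405, 0.141, 10_000).clip(0.0, 1.0)
eta_stab = rng.normal(0.616, 0.032, 10_000).clip(0.0, 1.0)

mean_gain = eta_stab.mean() / eta_free.mean()   # ~1.5x  ("52 %" enhancement)
std_gain  = eta_free.std() / eta_stab.std()     # ~4.4x  (stability improvement)
print(f"mean improvement: {mean_gain:.2f}x, std improvement: {std_gain:.2f}x")
```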
To measure the coupled optical power, we use a fiber-type photodiode (Fig. <ref>). As mentioned earlier, we run the auto-coupling algorithm to identify the point of maximum coupling power and subsequently proceed with coupling stabilization, as shown in Fig. <ref>. The black data line represents the process of finding the point of maximum coupling efficiency, starting from a completely misaligned initial state. Here, the coupling efficiency denotes the ratio of the laser power coupled into the SMF to the laser power measured in front of the receiver's fiber collimator. The red data line illustrates significant fluctuations in coupling efficiency due to atmospheric disturbances, particularly in the vicinity of the point where the coupling efficiency is maximized. Subsequently, employing the decoupled stabilization method yields stable beam coupling into the SMF, as depicted by the blue line. We apply the decoupled stabilization method both with and without the simulated atmospheric turbulence. Without simulated turbulence and without the stabilization technique, the distribution of coupling efficiency exhibits a mean of 0.471 and a standard deviation of 0.113, as shown in Fig. <ref>(a). In contrast, when turbulence is not simulated but stabilization is applied, the mean coupling efficiency improves significantly to 0.653, an enhancement of approximately 39 %. The corresponding standard deviation is 0.017, an improvement of more than a factor of 6.5 compared to the case without stabilization. In Fig. <ref>(b), which corresponds to the simulation of 2.6 km atmospheric turbulence, an overall decrease in the coupling-efficiency distribution is observed compared to the case without turbulence. This decrease can be attributed to the beam wander caused by atmospheric turbulence as the beam propagates through the atmosphere. When the 2.6 km atmospheric turbulence is simulated without stabilization, the distribution of coupling efficiency exhibits a mean of 0.405 and a standard deviation of 0.141. With the stabilization technique applied, the mean value improves significantly to 0.616, accompanied by a standard deviation of 0.032. This corresponds to an enhancement of the mean value by 52 % and an improvement of the standard deviation by more than a factor of 4.4 compared to the case without the stabilization technique. Hence, the proposed beam-wander-correction technique performs effectively even under fast atmospheric turbulence.

Figure <ref>(a) presents the variation of coincidence counts over time when the auto-coupling algorithm and stabilization technique are employed with the polarization-entangled photon-pair source (see the FSO setup in Fig. <ref>). In the absence of stabilization (red data), the mean value of the coincidence counts is 5135 cps, with a standard deviation of 808 cps. With successful stabilization, the mean value increases to 5834 cps and the standard deviation decreases to 461 cps, corresponding to an improvement of the mean value by approximately 14 % and a reduction of the standard deviation by nearly a factor of two. To obtain accurate information about the beam position, the detection speed should be faster than the dominant frequency components (∼ 300 Hz) caused by atmospheric turbulence.
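For the entangled-photon measurements, the recorded quantity is the coincidence rate, i.e. the peak of the normalized cross-correlation g^(2)_si(τ) between signal and idler detections. A minimal sketch of forming such a histogram from detector time tags is given below; the function, array names, and parameter values are hypothetical and do not describe the TCSPC hardware used in the experiment.

```python
import numpy as np

def g2_si(t_sig, t_idl, bin_width=1e-9, max_tau=50e-9):
    """Normalized signal-idler cross-correlation from detection time tags (s).

    t_sig, t_idl: sorted 1-D numpy arrays of detection times from the two SPCMs.
    Returns the bin centres tau and g2(tau); the peak height above the flat
    accidental background is the figure of merit used for alignment.
    """
    T = max(t_sig[-1], t_idl[-1]) - min(t_sig[0], t_idl[0])  # acquisition time
    delays = []
    for ts in t_sig:                      # collect idler - signal delays
        lo = np.searchsorted(t_idl, ts - max_tau)
        hi = np.searchsorted(t_idl, ts + max_tau)
        delays.append(t_idl[lo:hi] - ts)
    delays = np.concatenate(delays)
    edges = np.arange(-max_tau, max_tau + bin_width, bin_width)
    counts, _ = np.histogram(delays, bins=edges)
    accidentals = len(t_sig) * len(t_idl) * bin_width / T  # uncorrelated level
    tau = 0.5 * (edges[:-1] + edges[1:])
    return tau, counts / accidentals
```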
In the SPCM-based measurements, the photon counts are accumulated at a rate of 1 Hz. This results in somewhat lower position-stabilization performance compared to the classical-source case, which uses a fast photodiode. For the quantum source, since the photon flux in front of the fiber collimator cannot be measured directly, we instead report the coincidence counts between the signal photons coupled at the receiver and the idler photons measured directly at the source.

Finally, to verify the improvement in the quality of the entangled photon-pair source achieved by stabilization, the fidelity of the polarization-entangled state is measured using quantum state tomography (QST) <cit.>, as shown in Fig. <ref>(b). Here, a comparison is made between the fidelity obtained directly from the polarization-entangled source, the fidelity of the transmitted and received polarization-entangled state under the simulated 2.6 km atmospheric turbulence, and the fidelity achieved when the stabilization method is applied. The fidelity of the polarization-entangled source is measured to be 0.97±0.01. After indoor transmission and reception of the entangled states, including the influence of the 2.6 km atmospheric turbulence, the fidelity decreases to 0.83±0.07. Since the fidelity measurements are based on coincidence photon counts, under unstable atmospheric conditions without sufficient beam stabilization the coincidence counts decrease significantly and fluctuate substantially. This can lead to significant deviations in the QST measurements for each basis, resulting in a decrease in fidelity. By applying the proposed stabilization technique, the fidelity is restored to 0.95±0.02. The slight decrease in fidelity relative to the source after stabilization can be attributed to external noise and free-space transmission channel losses.

§ CONCLUSION

In this paper, we propose a highly-enhanced active beam-wander-correction technique, comprising an SMF auto-coupling algorithm and a decoupled stabilization method. The proposed technique effectively searches for an optimal optical beam path, starting from a completely misaligned initial point, and compensates for beam wander caused by atmospheric turbulence by precisely controlling the angle and position of the beam using two sets of FSMs and PSDs. To evaluate its performance, we conduct experiments using a 60-m indoor FSO setup capable of simulating atmospheric turbulence effects. We measure the distribution of coupling efficiency and directly demonstrate the significant enhancement achieved through the proposed technique, even under simulated 2.6 km atmospheric turbulence. For the laser source, the mean of the coupling-efficiency distribution increases by over 50 % and the standard deviation improves by more than a factor of 4.4. Similarly, for the entangled photon pairs, the mean value increases by over 14 %, and the standard deviation improves approximately twofold. Moreover, for the polarization-entangled state, we observe that the fidelity, which had decreased due to atmospheric turbulence effects, is effectively restored to a value close to that directly measured from the source when the stabilization method is applied. The results obtained from our study signify the substantial potential of the proposed technique for designing fiber-coupling optical systems, not only for free-space quantum communications but also for high-speed laser communications.
By mitigating the adverse impact of atmospheric turbulence on beam wander, our approach offers a promising solution for improving the efficiency and reliability of various optical communication systems, contributing to the advancement of both quantum and classical laser communication technologies.

§ ACKNOWLEDGMENTS

This work was supported by the Agency for Defense Development.

§ DISCLOSURES

The authors declare no conflicts of interest.

§ DATA AVAILABILITY

The datasets generated and/or analyzed during this study are not publicly available due to the security policy of the Ministry of National Defense of South Korea but are available from the corresponding author upon reasonable request.

§ REFERENCES

[JCrypt92Bennett] C. H. Bennett, F. Bessette, G. Brassard, L. Salvail, and J. Smolin, “Experimental quantum cryptography”, J. Cryptology 5, 3 (1992).
[PRL23Liu] Y. Liu, W. J. Zhang, C. Jiang, J. P. Chen, C. Zhang, W. X. Pan, D. Ma, H. Dong, J. M. Xiong, C. J. Zhang, H. Li, R. C. Wang, J. Wu, T. Y. Chen, L. You, X. B. Wang, Q. Zhang, and J. W. Pan, “Experimental twin-field quantum key distribution over 1000 km fiber distance”, Phys. Rev. Lett. 130(21), 210801 (2023).
[PRA23Liu] J. Y. Liu, X. Ma, H. J. Ding, C. H. Zhang, X. Y. Zhou, and Q. Wang, “Experimental demonstration of five-intensity measurement-device-independent quantum key distribution over 442 km”, Phys. Rev. A 108(2), 022605 (2023).
[Nature07Ursin] R. Ursin, F. Tiefenbacher, T. Schmitt-Manderbach, H. Weier, T. Scheidl, M. Lindenthal, B. Blauensteiner, T. Jennewein, J. Perdigues, P. Trojek, B. Ömer, M. Fürst, M. Meyenburg, J. Rarity, Z. Sodnik, C. Barbieri, H. Weinfurter, and A. Zeilinger, “Entanglement-based quantum communication over 144 km”, Nat. Phys. 3, 481 (2007).
[Nature12Yin] J. Yin, J.-G. Ren, H. Lu, Y. Cao, H.-L. Yong, Y.-P. Wu, C. Liu, S.-K. Liao, F. Zhou, Y. Jiang, X.-D. Cai, P. Xu, G.-S. Pan, J.-J. Jia, Y.-M. Huang, H. Yin, J.-Y. Wang, Y.-A. Chen, C.-Z. Peng, and J.-W. Pan, “Quantum teleportation and entanglement distribution over 100-kilometre free-space channels”, Nature 488, 185 (2012).
[RMP22Lu] C. Y. Lu, Y. Cao, C. Z. Peng, and J. W. Pan, “Micius quantum experiments in space”, Rev. Mod. Phys. 94(3), 035001 (2022).
[Science17Yin] J. Yin et al., “Satellite-based entanglement distribution over 1200 kilometers”, Science 356, 1140 (2017).
[Nature17Liao] S.-K. Liao et al., “Satellite-to-ground quantum key distribution”, Nature 549, 43 (2017).
[NPhoton13Nauerth] S. Nauerth, F. Moll, M. Rau, C. Fuchs, J. Horwath, S. Frick, and H. Weinfurter, “Air-to-ground quantum communication”, Nat. Photonics 7, 382 (2013).
[NSR20Liu] H.-Y. Liu, X.-H. Tian, C. Gu, P. Fan, X. Ni, R. Yang, J.-N. Zhang, M. Hu, J. Guo, X. Cao, X. Hu, G. Zhao, Y.-Q. Lu, Y.-X. Gong, Z. Xie, and S.-N. Zhu, “Drone-based entanglement distribution towards mobile quantum networks”, Natl. Sci. Rev. 7, 921 (2020).
[IEEE21Alshaer] N. Alshaer, A. Moawad, and T. Ismail, “Reliability and security analysis of an entanglement-based QKD protocol in a dynamic ground-to-UAV FSO communications system”, IEEE Access 9, 168052 (2021).
[AO13GarciaMartinez] M. J. Garcia-Martinez, N. Denisenko, D. Soto, D. Arroyo, A. B. Orue, and V. Fernandez, “High-speed free-space quantum key distribution system for urban daylight applications”, Appl. Opt. 52, 3311 (2013).
[SR18Ko] H. Ko, K.-J. Kim, J.-S. Choe, B.-S. Choi, J.-H. Kim, Y. Baek, and C. J. Youn, “Experimental filtering effect on the daylight operation of a free-space quantum key distribution”, Sci. Rep. 8, 15315 (2018).
[PRL07Schmitt-Manderbach] T. Schmitt-Manderbach, H. Weier, M. Fürst, R.
Ursin, F. Tiefenbacher, T. Scheidl, J. Perdigues, Z. Sodnik, C. Kurtsiefer, J. G. Rarity, A. Zeilinger, and H. Weinfurter, “Experimental demonstration of free-space decoy-state quantum key distribution over 144 km”, Phys. Rev. Lett. 98, 010504 (2007).
[OE14Carrasco-Casado] A. Carrasco-Casado, N. Denisenko, and V. Fernandez, “Correction of beam wander for a free-space quantum key distribution system operating in urban environment”, Opt. Eng. 53, 084112 (2014).
[MOTL16Carrasco-Casado] A. Carrasco-Casado, N. Denisenko, and V. Fernandez, “Chromatic effects in beam wander correction for free-space quantum communications”, Microw. Opt. Technol. Lett. 58, 1362 (2016).
[CAP22Kim] D. Kim, D. Lim, K. Park, and Y. S. Ihn, “Quantum-correlation-based free-space optical link with an active reflector”, Curr. Appl. Phys. 41, 156 (2022).
[SPIE02Weyrauch] T. Weyrauch, M. A. Vorontsov, J. Gowens, and T. G. Bifano, “Fiber coupling with adaptive optics for free-space optical communication”, Proc. SPIE 4489, 177 (2002).
[AO15Chen] M. Chen, C. Liu, and H. Xian, “Experimental demonstration of single-mode fiber coupling over relatively strong turbulence with adaptive optics”, Appl. Opt. 54, 8723 (2015).
[SPIE17Gruneisen] M. T. Gruneisen, M. B. Flanagan, and B. A. Sickmiller, “Modeling satellite-Earth quantum channel downlinks with adaptive-optics coupling to single-mode fibers”, Proc. SPIE 10442, 104420E (2017).
[OE20Yang] K.-X. Yang, M. Abulizi, Y.-H. Li, B.-Y. Zhang, S.-L. Li, W.-Y. Liu, J. Yin, Y. Cao, J.-G. Ren, and C.-Z. Peng, “Single-mode fiber coupling with a M-SPGD algorithm for long-range quantum communications”, Opt. Express 28, 36600 (2020).
[JLT21Mai] V. V. Mai and H. Kim, “Non-mechanical beam steering and adaptive beam control using variable focus lenses for free-space optical communications”, J. Light. Technol. 39, 7600 (2021).
[OC23Mai] V. V. Mai and H. Kim, “Beaconless angle-of-arrival tracking with improved receiver sensitivity and tracking precision for free-space optical communications”, Opt. Commun. 527, 128963 (2023).
[NP13Wang] J.-Y. Wang et al., “Direct and full-scale experimental verifications towards ground-satellite quantum key distribution”, Nat. Photon. 7, 387 (2013).
[NP17Liao] S.-K. Liao et al., “Long-distance free-space quantum key distribution in daylight towards inter-satellite communication”, Nat. Photon. 11, 509 (2017).
[OE18Gong] Y.-H. Gong, K.-X. Yang, H.-L. Yong, J.-Y. Guan, G.-L. Shentu, C. Liu, F.-Z. Li, Y. Cao, J. Yin, S.-K. Liao, J.-G. Ren, Q. Zhang, C.-Z. Peng, and J.-W. Pan, “Free-space quantum key distribution in urban daylight with the SPGD algorithm control of a deformable mirror”, Opt. Express 26, 18897 (2018).
[IEEE18Fernandez] V. Fernandez, J. Gomez-Garcia, A. Ocampos-Guillen, and A. Carrasco-Casado, “Correction of wavefront tilt caused by atmospheric turbulence using quadrant detectors for enabling fast free-space quantum communications in daylight”, IEEE Access 6, 3336 (2018).
[OFT21Cao] B. Cao, Z. Qiu, K. Huang, D. Lü, X. Zhang, and X. Lu, “Single-mode fiber auto-coupling system with wedges”, Opt. Fiber Technol. 61, 102433 (2021).
[SPIE98Ruilier] C. Ruilier, “A study of degraded light coupling into single-mode fibers”, Proc. SPIE 3350, 319 (1998).
[book04Buck] J. A. Buck, Fundamentals of optical fibers (Wiley, New York, 2004).
[OE16Liu] W. Liu, K. Yao, D. Huang, X. Lin, L. Wang, and Y. Lv, “Performance evaluation of coherent free space optical communications with a double-stage fast-steering-mirror adaptive optics system depending on the Greenwood frequency”, Opt. Express 24, 13288-13302 (2016).
[OE21Liang] Y. Liang, X. Su, C. Cai, L. Wang, J. Liu, H. Wang, and J. Wang, “Adaptive turbulence compensation and fast auto-alignment link for free-space optical communications”, Opt. Express 29, 40514-40523 (2021).
[AlChEJ70Luyben] W. L. Luyben, “Distillation decoupling”, AIChE J. 16, 198 (1970).
[JPC06Nordfeldt] P. Nordfeldt and T. Hägglund, “Decoupler and PID controller design of TITO systems”, J. Process Control 16, 923 (2006).
[PRA01James] D. F. James, P. G. Kwiat, W. J. Munro, and A. G. White, “Measurement of qubits”, Phys. Rev. A 64, 052312 (2001).
"authors": [
"Dohoon Lim",
"Dongkyu Kim",
"Kyungdeuk Park",
"Dong-Gil Im",
"Yong Sup Ihn"
],
"categories": [
"quant-ph"
],
"primary_category": "quant-ph",
"published": "20231027051958",
"title": "Highly-enhanced active beam-wander-correction for free-space quantum communications"
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.